
The analogy falls apart with AI, though.

- American AI “safety” practices, to the extent they can be said to exist at all, have little or nothing in common with best practices in other fields of engineering (most AI safety work focuses on making sure the AI doesn’t say anything that might offend someone).

- When a rocket blows up, people die, but we learn from the mistake. When an AI seriously “threatens humanity”, humanity dies and we very possibly don’t get a second chance.




A rocket booster falling on your head is a real threat.

AI taking over the world is, as yet, an imagined threat.

When AI proves it will take over the world, and China builds theirs without the "don't take over the world" code mandated by every other country, let's talk more.


> let's talk more.

Unfortunately if that happens, we won’t be able to!



