There are at least two failure cases here:

- a military AI in the hands of bad actors who intentionally use it to do harm.

- a badly coded runaway AI that destroys Earth.

These two failure modes are not mutually exclusive. When nukes were first developed, some physicists thought there was a small but plausible chance, around 1%, that detonating one would ignite the atmosphere and destroy the whole world.

Let's imagine we live in a world where they're right. Now suppose somebody comes along and says, "let's ignore the smelly, badly dressed, megalomaniacal physicists and their mumbo jumbo; the real problem is a terrorist getting their hands on one of these and blowing up a city."

Well, yes, that would be a problem. But the other thing is also a problem. And it would kill a lot more people.
