
I’ve been working on this issue for a while, and the conclusion I’ve come to is:

We’re not going to see actual movement on managing AI risk until there is the equivalent of a Hiroshima/Three Mile Island/Chernobyl caused by a self-improving system with no human in the loop.

Not enough people actually believe ASI is possible and harmful to build a movement that can stop the people pursuing it, who either don’t care or don’t believe it’s going to be harmful.

It would have been impossible to have a nuclear weapons ban prior to World War II because (1) almost nobody knew about it, and (2) nobody would have actually believed it could be that bad.

The question you should ask is: if someone does build it, on any timeline, is there any possible counter at that point?




It seems like your nuclear analogy might contain an answer to that question. Consider the once-seriously-entertained theory that a nuclear chain reaction could have ignited the atmosphere.

It seems like the worst-case predictions about AI are at that scale. There is also the possibility that AI causes problems on a much smaller scale. A country’s weapons system goes terribly wrong. A single company’s business goes berserk. A stock-trading bot at a large enough bank or fund causes a recession. Stuff like that seems rather likely to happen as systems are adopted before they’re “ready” and before they’re anything close to AGI.


The book “Weapons of Math Destruction” is full of examples of AI fucking up badly and ruining people’s lives today. It’s just that the damage is more diffuse than Hiroshima and nowhere NEAR as gruesome, spatially localized, and sudden.


Unfortunately,

“There’s No Fire Alarm For AGI” https://intelligence.org/2017/10/13/fire-alarm/



