
That's not how risk works.

If we want to grow to adulthood as a species and manage to colonize the cosmos, we need to pass _every_ skill challenge. If an unfriendly superintelligence becomes a risk in 50 years and making it safe needs 40 years of run-up prep work, then it will do us absolutely zero good to say that we concentrated on the risk of military AI instead, and hey, we got this far, right?

Considering the amount of human utility on the line, "one foot short of the goal" is little better than "stumbled at the first hurdle".




I think the article dealt pretty well with risk: you survive by focusing finite resources on the X% chance of stopping a Y% probability of the near-elimination of humanity, not on the A% chance of stopping a B% probability of something even worse than near-elimination, where X is large, Y is a small fraction, and the product of A and B is barely distinguishable from zero, despite that scenario getting more column inches than most of the rest of the near-negligible-probability proposed solutions for exceptionally low probability extinction events put together.
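To make that comparison concrete, here is a minimal sketch of the expected-impact arithmetic, with purely illustrative numbers standing in for X, Y, A, and B (none of them come from the article or the thread):

    # Illustrative expected-impact comparison; all numbers are made up.
    x, y = 0.30, 0.05    # 30% chance of mitigating a 5% near-extinction risk
    a, b = 0.01, 0.001   # 1% chance of mitigating a 0.1% worse-than-extinction risk

    impact_xy = x * y    # 0.015   -> the case worth spending on
    impact_ab = a * b    # 0.00001 -> barely distinguishable from zero

    print(impact_xy, impact_ab)

The point of the sketch is only that the second product stays negligible even if you inflate the badness of the outcome, which is the comparison the comment is making.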

I also tend to agree with Maciej that the argument for focusing on the A% probability of B isn't rescued by making the AI threat seem even worse with appeals to human utility like "but what if, instead of simply killing off humanity, they decided to enslave us or keep us alive forever to administer eternal punishment..." either.


Well yes, we have finite resources to deploy.

But most resources are not spent on risk mitigation at all. The priority given to risk mitigation naturally should go up as more credible existential risks are identified.



