
I'm in favor of public discussions. It's tragic that some of the relevant powers are doing their best to shut down these kinds of discussions. Thankfully, they seem to be failing.

But to answer the question, the danger of AI is uniquely relevant because it is the only danger that may end up being totally out of our control. Nukes, pandemics, climate change, etc. all represent x-risk to various degrees. But we will always be in control of whether they come to fruition. We can almost always short-circuit any of these processes before it leads to our extinction.

A fully formed AGI, i.e. an autonomous superintelligent agent, represents the potential for a total loss of control. With AGI, there is a point past which we cannot go back.




I don't agree with the uniqueness of AI risk. A large asteroid impacting Earth is a non-hypothetical existential risk presently out of our control. We are not currently planning to spend trillions on a comprehensive early-detection system and the equipment needed to control that risk.

The differentiating thing here is that blocking hypothetical AI risk is cheap, while mitigating real risks is expensive.


We have a decent grasp of the odds of an asteroid extinction event in a way that we don't for AI.


And that makes taking that extinction risk every day better?


How do we not take it every day?

We can work on it in the long term, and things like developing safe AI may have more impact on mitigating asteroid risk than scaling up existing nuclear, rocket, and observation tech to tackle it.



