
Apologies, the Russian bot comment was more me venting frustration at the prevalence of low-effort responses like yours (sorry) to those who try to raise concerns about AI safety.

I do agree with you that extinction from AI isn't likely to be an issue this decade. However, I would note that it's difficult to predict what the rate of change is likely to be once you have scalable general intelligence.

I can't speak for people who signed this, but for me the trends and risks of AI are just as clear as those of climate change. I don't worry that climate change is going to be a major issue this decade (and perhaps not even next), but it's obvious where the trend is going when you project out.

Similarly, the "real" risks of AI may not arrive this decade, but they are coming. And again, I'd stress that it's extremely hard to project when, since once you have scalable general intelligence, progress is likely to accelerate exponentially.

So that said, where do we disagree here? Are you saying with a high level of certainty that extinction risks from AI are too far in the future to worry about? If so, when do you think extinction risks from AI are likely to be a concern – a couple of decades, more? Do you hold similar views about the present extinction risk of climate change – and if so, why not?

Could I also ask if you believe any resources in the present should be dedicated to the existential risks future AI capabilities could pose to humanity? And if not, when would you like to see resources put into those risks? Is there some level of capability that you're waiting to see before you begin to be concerned?




> low-effort responses like yours

That wasn't my comment; I agree it was low-effort and I never would have posted it myself. I don't think they're a Russian bot though.

As for the rest: I just don't see any feasible way AI can pose a serious danger unless we start connecting it to things like nuclear weapons, automated tanks, stuff like that. The solution to that is simple and obvious: don't do that. Even if an AI were to start behaving maliciously, the solution would be simple: pull the plug, quite literally (or stop the power plants, cut the power lines, whatever). I feel people have been overthinking all of this far too much.

I also don't think climate change is an extinction-level threat; clearly we will survive as a species. It's just a far more pressing and immediate economic and humanitarian problem.



