
My issue is that, while of course you're right that one can't know for sure, some people conclude that simply not being able to know justifies whipping up a frenzy and (in all likelihood) wasting part of a billion dollars that could go to, say, more productive research even within AI/ML.

Anyone could make a long list of things that we can't know for sure won't destroy humanity. Many such things are vastly more likely to do so than malicious AGI.




Ok, I'll bite. Can you list several things that are vastly more likely to lead to the complete extinction of the human race (I'll assume that's what you meant by "destroy humanity") than malicious AGI?


Climate change, leading to frequent crop failures and, in turn, the collapse of most nations and most industrial capacity.

That's what keeps me up at night these days.


That would be at the top of my list. And then there are nuclear attacks/disasters, an asteroid hitting the earth, superbugs and biological weapons gone awry, a physics experiment gone awry creating a black hole... I put superintelligent AI down near alien invasion on my personal list of humanity-ending risks.


We rank all these scenarios very differently then :)

Climate change and nuclear war are huge issues, and we correctly invest orders of magnitude more money and time (which might still not be enough) in trying to prevent or limit them than in trying to prevent the birth of a malicious AGI, but they are extremely unlikely to lead to the complete extinction of the human race.

Regarding the other scenarios, if you allow me to move the goalposts from pure risk assessment:

- Preparing against an alien invasion seems futile, since given the timescales at play in the universe, the first aliens we meet will likely be millions of years behind or ahead of us.

- One way to survive an asteroid impact is to colonize another planet ahead of time, which Musk is working on.

- Physics experiments requiring specialized hardware like the LHC already have much more oversight than AI research, where a breakthrough could potentially happen in a garage on commodity hardware.

So it makes a lot of sense to me to invest some money in preventing an AI doomsday scenario next.


Musk says AI is the single biggest existential threat (unless I'm mistaken and someone else similarly famous said that, maybe Thiel?). From this it appears that you, along with the rest of society (judging by how much is spent on oversight, as you mentioned), think he is wrong.

Nobody is arguing against funding AI. It's the fear-mongering we disagree with. It harms the field.


The question was about complete extinction. A population reduction of 90% would be a horrible disaster, but it would set us back "just" a couple of centuries; it's hardly something that could eliminate humanity permanently, unlike quite a few other things.


- Asteroid strike

- Superbug pandemic

- Supervolcano eruption



