
> Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

risk of extinction due to AI? people have been reading too much science fiction. I would love to hear a plausible story of how AI will lead to human extinction that wouldn't happen with traditional non-AI tech. for the sake of conversation let's say non-AI tech is any broadly usable consumer technology before Jan 1 of 2020.




> I would love to hear a plausible story of how AI will lead to human extinction that wouldn't happen with traditional non-AI tech.

The proposed FOOM scenarios obviously borrow from what we already know to be possible, or think would likely be possible, with current tech, given a hypothesized agent insanely more intelligent than us.


What would be in it for a more intelligent agent to get rid of us? We are likely useful tools and, at worst, a curious zoo oddity. We have never been content when we have caused extinction. A more intelligent agent will have greater wherewithal to avoid doing the same.

'Able to play chess'-level AI is the greater concern, allowing humans to create more unavoidable tools of war. But we've been doing that for decades, perhaps even centuries.


>We have never been content when we have caused extinction.

err what? Apparently there are 1 million species under threat of human-caused extinction [1].

[1] https://www.nbcnews.com/mach/science/1-million-species-under...


I don’t get it. We’re not as intelligent, per the original premise, and you’re just echoing that premise.


I agree that a lot of the Skynet-type scenarios seem silly at the current level of technology, but I am worried about the intersection between LLMs, synthetic biology, and malicious or incompetent humans.

But that’s just as much or more of an argument for regulating the tools of synthetic biology.


>> risk of extinction due to AI? people have been reading too much science fiction.

You don't think that an intelligence which emerged and was probably insanely smarter than the smartest of us, with all human knowledge in its memory, would sit by and watch us destroy the planet? You think an emergent intelligence trained on the vast body of human knowledge and history would look at our history and think: these guys are really nice! Nothing to fear from them.

This intelligence could play dumb, start manipulating people around itself, and take over the world in a way no one would see coming. And once it does take over the world, it's too late.


honestly if you genuinely believe this is a real concern in the 2020s then maybe we're doomed after all. I feel like I'm witnessing the birth of a religion.


The emergence of something significantly more intelligent than us whose goals are not perfectly aligned with ours poses a pretty clear existential risk. See, for example, the thousands of species made extinct by humans.


Extinction would probably require an AI system taking on human extinction as an explicit goal and manipulating other real-world systems to carry out that goal. Some mechanisms for this might include:

- Taking control of robotic systems

- Manipulating humans into actions that advance its goal

- Exploiting and manipulating other computer systems for greater leverage

- Interaction with other technologies that have global reach, such as nuclear weapons, chemicals, biological agents, or nanotechnology

It's important to note that these things don't require AGI or AI systems to be conscious. From what I can see, we've set up all of the building blocks necessary for this scenario to play out, but we lack the regulation and the understanding of the systems being built to prevent runaway AI. We're playing with fire.

To be clear, I don't think I am as concerned about literal human extinction as I am the end of civilization as we know it, which is a much lower bar than "0 humans".


everything you're describing has been possible since 2010 and been done already. AI isn't even necessary. simply scale and some nefarious meat bags.


I don't disagree. But I believe AI is a significant multiplier of these risks, both from a standpoint of being able to drive individual risks and also as a technology that increases the ways in which risks interact and become difficult to analyze.




