> - Many AI systems we have developed quickly become superhuman (Go, StarCraft, many aspects of ChatGPT and Midjourney).

This is a) cherry-picking (there are plenty of "AI systems" we have developed that have failed to become superhuman), and b) not accounting for the massive differences between the types of "AI systems" being talked about in each case.

> - The AI capabilities progress is way faster than AI alignment progress, we have no idea how to control AI systems or make them want the same things we do.

We know perfectly well how to prevent an AI from doing things that either harm us directly or let it rewrite its own code. Just because we don't know how to make an LLM cite its sources doesn't mean we don't know how to sandbox our code.
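
To be concrete about what I mean by sandboxing, here's a minimal sketch in Python (POSIX-only; run_sandboxed is a made-up name for illustration, not any real library's API): run untrusted or model-generated code in a child process with hard CPU and memory caps. Real sandboxes (seccomp, gVisor, full VMs) go much further; this just shows the idea is ordinary engineering:

    import resource
    import subprocess

    def run_sandboxed(path, cpu_seconds=2, mem_bytes=256 * 1024 * 1024):
        # Run an untrusted (e.g. model-written) script in a child
        # process that cannot exceed its CPU budget or memory cap.
        def limit():
            # Runs in the child after fork(), before exec(): the caps
            # apply only to the sandboxed process, not to us.
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
            resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

        return subprocess.run(
            ["python3", "-I", path],  # -I: isolated mode, ignores env hooks
            preexec_fn=limit,
            capture_output=True,
            text=True,
            timeout=cpu_seconds + 1,  # wall-clock backstop on the CPU cap
        )

That obviously isn't a complete sandbox (no filesystem or network restrictions here), but OS-level isolation for those is just as routine, and none of it waits on anyone solving "alignment."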

"Control over AI systems" and "AI alignment" are not binary things.

Nowhere in your "evidence" have you presented anything that says AGI is even possible.

Nowhere in your "evidence" have you presented anything that says that, if AGI is possible, the very first time an AGI is created, it will seek to make itself superintelligent regardless of its creators' intentions, or that it will see itself as being in opposition to humanity in any way—that it will "want" anything beyond what it was created to want.

Nowhere in your "evidence" have you presented anything that says that, if AGI is possible, and it wants to become superintelligent, and it sees humans as its enemies, it will have the capability to improve itself in those ways faster than we can detect what it is doing.

All of these missing links are science-fiction scenarios. That's at least three (possibly four, depending on how you count them) potentially impossible gaps that have to be crossed to get from where we are today to the nightmare scenario you posit.



