To be fair, ants have not created humanity. I don't think it's inconceivable for a friendly AI to exist that "enjoys" protecting us in the way a friendly god might. And given that we have AI (well, language models...) that can explain jokes before we have AI that can drive cars, AI might be better at understanding our motives than the stereotypical paperclip maximizer.

However, all of this is moot if the team developing the AI does not even try to align it.




Yeah, I'm not arguing that alignment is impossible - just that we don't know how to do it, and it's really important that we figure it out before we figure out AGI (which seems unlikely).

The ant example is just meant to illustrate the spectrum of intelligence in a way more people can grasp (rather than treating "smart person" and "dumb person" as the entirety of the spectrum). In the case of a true self-improving AGI, the delta is probably much larger than that between an ant and a human, but the example at least gestures at the point (that was my goal, anyway).

The other common mistake is assuming that intelligence implies human-like thinking or goals, which is just false. A lot of bad arguments from laypeople stem from this, because they simply haven't read much about the problem.


One hopeful argument for successful AI alignment that I've read somewhere is that we don't need most laypeople to understand the risks of it going wrong, because for once the most powerful people on this planet have incentives that are aligned with ours. (Unlike global warming, where you can buy your way out of the mess.)

I really hope someone with very deep pockets will find a way to steer the ship more towards AI safety. It's frustrating to see someone like Elon Musk, who was publicly worried about this very specific issue a few years ago, waste his time and money on buying Twitter.

Edit: I'm aware that there are funds available for AI alignment research, and I'm seriously thinking of switching into this field, mental health be damned. But it would help a lot more if someone could change Eric Schmidt's mind, for example.


> I really hope someone with very deep pockets will find a way to steer the ship more towards AI safety. It's frustrating to see someone like Elon Musk, who was publicly worried about this very specific issue a few years ago, waste his time and money on buying Twitter.

It has occurred to me that social networks are a vulnerable channel which we've already seen APTs exploit to shift human behavior. It's possible that Musk is motivated to close this backdoor into human society. That would also be consistent with statements he's made about "authenticating all real humans."



