One hopeful argument for AI alignment that I've read somewhere is that we don't need most laypeople to understand the risks of it going wrong, because for once the most powerful people on this planet have incentives aligned with ours. (Unlike global warming, where you can buy your way out of the mess.)
I really hope someone with very deep pockets will find a way to steer the ship more towards AI safety. It's frustrating to see someone like Elon Musk, who was publicly worried about this very specific issue a few years ago, waste his time and money on buying Twitter.
Edit: I'm aware that there are funds available for AI alignment research, and I'm seriously thinking of switching into this field, mental health be damned. But it would help a lot more if someone could change Eric Schmidt's mind, for example.
> I really hope someone with very deep pockets will find a way to steer the ship more towards AI safety. It's frustrating to see someone like Elon Musk, who was publicly worried about this very specific issue a few years ago, waste his time and money on buying Twitter.
It has occurred to me that social networks are a vulnerable channel that we've already seen APTs (advanced persistent threats) exploit to shift human behavior. It's possible that Musk is motivated to close this backdoor into human society. That would also be consistent with statements he's made about "authenticating all real humans."