Blaming AI for the current trajectory of the Internet is like blaming the Internet for the current trajectory of public discourse and trust - both are merely instruments of deeper causes.
I don't buy the "tools are neutral, it's the people who are good or bad" responsibility-dodge. It's true that many tools can be used for good or ill. But the people who create a tool should consider how they expect it will be used, and the ease with which it can be used for harm. This is Engineering Ethics 101.
When you zoom out, though, this engineering problem is a cultural and political one.
If you are designing apps in an environment full of scammers and shameless grifters, the range of societies you can build is reduced.
Focusing on the core issue - how do we stop people from destroying nice things? - is critical for society-scale engineering, because solving it frees ethical engineers to create more magnificent tools.
Ignoring law, politics, and the rest, and focusing solely on "how can I design this given the presence of grifters," is muted engineering.