Yep, it sure seems that billions of human beings have voluntarily contributed personal data to massive datasets, exactly the kind of data an AI would need in order to manipulate (or worse, exploit) each of those human beings.
And unscramble an egg, and reverse the flow of entropy too.
The problem of AI alignment is that the human value system is rather complex (to the point that we haven't formalized any meaningful chunk of it, nor does it seem we'll be able to any time soon), and random deviations from it can easily lead to what we'd consider horrifying tragedies. A random AI mind plucked out of the space of possible minds is highly unlikely to have internalized a good approximation of our value system, for the same reason that putting a scrambled egg in a bag and shaking it vigorously won't get you a fresh egg back.
Manipulation and exploitation are, by default, what happens when an agent with power over you finds you standing between it and the thing it wants. Almost every point in the space of possible minds will feature this behavior. "Love, understand, and help each other", in the sense we understand it, is a very specialized, specific set of behaviors; few points in the space of possible minds will feature it.
Or, in short, there's a good statistical argument (to which I do not do justice here) that if you make a smart enough AI without doing a near-perfect job of alignment, it will kill us all, most likely unintentionally.
> A random AI mind plucked out of the space of possible minds is highly unlikely to have internalized a good approximation of our value system
There is no common value system amongst humans. We have voluntary murderers, cannibals, mad scientists, misanthropes, various religions, wannabe influencers, and lifetime recluses. Not even self-preservation is a certain factor when dealing with humans.