It's bad for there to be anything near us that exceeds our (collective) cognitive capabilities unless the human-capability-exceeding thing cares about us, and no one has a good plan for arranging for an AI to care about us even a tiny bit. There are many plans, but most of them are hare-brained and none of them are good or even acceptable.
Also: no one knows with any reliability how to tell whether the next big training run will produce an AI that exceeds our cognitive capabilities, so the big training runs should stop now.
IMO a much bigger risk is AIs being straight-up handed a lot of power because we think they "want" (or will at least do) what we want, when in fact there's some tiny difference we don't notice until much too late. Even paperclip maximisers are nothing more than that.
You know, like basically every software bug. Except expressed in literally incomprehensible matrix weights whose behaviour we can only determine by running them, rather than source code we can inspect in advance and make predictions about.
Yes. Skynet is very dangerous and not safe. In Terminator, humanity is saved because Skynet is dumb, not because Skynet is not dangerous or because Skynet is safe.
Just because other people with poor judgement are building it does not make it safe.