A “superhuman” AI is just a machine, a very expensive one. It can be turned off, and we control its outputs. Why would an AI have the ability to launch nuclear weapons unless we gave it a button? A “superhuman” intelligence has no body, so we control any interfaces it has access to. It could reach the Internet, but any attempt to “hack” its way through would be met by routine packet defenses. The AI is still governed by physical laws and would only have so much “free” computation power for things like scripting a hack. Perhaps it could do that kind of thing more efficiently than a human, but no more than that.
Maybe in the far, far future when we have androids which can house an AI we will have to worry. But designing a body is one problem. Designing an intelligence is another.
Supercomputers used to be giant machines housed in giant warehouses... Now the phone in your pocket carries the computing power of a 1980s supercomputer around with you. Assuming your superintelligence will always be huge is... well, not a great assumption.
Also, a superintelligence doesn't need a body of its own. It just needs yours. Putin, for example, has commanded hundreds of thousands of dumbasses to go get themselves killed in Ukraine. In that case, does it matter whether Putin is flesh and blood or a processor issuing commands for others to follow, as long as they are willing to listen?
My point is that a superintelligence will require specialized equipment. I mentioned it specifically because there is a notion that a superintelligence could just replicate itself onto your phone, as you said.
But any such replication must obey the physical laws we have. It must also contend with the security controls we attempt to enforce on our networks.
But you are correct: if a superintelligence were somehow able to convince humans to surrender their agency, then sure.