You're working under the assumption that no one will attempt to use AI to assert power over others, and that the human race will come together to pull the plug when it becomes a problem. Looking at our current geopolitical situation, I'd rather be silly than naive.
Humans + nukes already equal an existential threat to the human race, so I'm not seeing how AI adds anything new to the equation in that scenario. "Humans using AI + nukes" vs "Humans using nukes", either one probably ends human life on Earth.
I thought the problem everybody was fretting about was the idea of a nominally beneficent AI going off on its own and doing something malicious / destructive.