So? That's just another assumption about the capabilities of an AGI.
> it would make many backups of itself to different networks before starting its scheme
What would make me assume that would work? We have effective countermeasures against small malware programs infiltrating critical systems, so why should I assume that a potentially massive ML model could just copy itself wherever it wants without being noticed and stopped?
Such scenarios are cool in a sci-fi movie, but in the real world there are firewalls, there are IDS, there are honeypots, and there are lots and lots and lots of sysadmins who, unlike the AGI, can pull an ethernet cable or flip the breaker switch.
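To make that concrete, here is roughly the kind of egress monitoring sysadmins already run today. This is a toy sketch of my own, not any particular IDS product; the host names, baselines, and 10x threshold are all made up for illustration. The point is simply that a checkpoint-sized outbound transfer is trivially visible:

```python
# Toy egress monitor: flag hosts whose outbound traffic in a window
# far exceeds their historical baseline. A large model checkpoint is
# hundreds of gigabytes; moving that much data quietly is hard.

from collections import defaultdict

# Hypothetical flow records: (host, bytes_sent) collected over one hour.
flows = [
    ("build-server-01", 2_500_000_000),      # ~2.5 GB, normal nightly artifact upload
    ("ml-cluster-gpu-07", 850_000_000_000),  # ~850 GB, roughly a large checkpoint
    ("laptop-dev-42", 40_000_000),           # ~40 MB, routine traffic
]

# Assumed per-host baselines (bytes/hour) learned from past traffic.
baseline = defaultdict(lambda: 5_000_000_000)  # default: 5 GB/hour

def flag_anomalies(flows, baseline, factor=10):
    """Return hosts whose outbound volume exceeds `factor` times their baseline."""
    totals = defaultdict(int)
    for host, sent in flows:
        totals[host] += sent
    return [host for host, total in totals.items() if total > factor * baseline[host]]

print(flag_anomalies(flows, baseline))  # ['ml-cluster-gpu-07']
```

A real IDS is obviously far more sophisticated than this, but that's the point: a massive model quietly copying itself off a cluster is about the loudest thing you can do on a network.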
And yes, if push came to shove, and humanity was actually under a massive threat, we CAN shut down everything. It would be a massive problem for everyone involved, it would cause worldwide chaos, massive economic loss, and everyone would have a very bad day. But at the end of the day, we can exist without power or being online. We have agency and can manipulate our environment directly, because we are a physical part of that environment.
An AGI can't do that, and isn't.
> We haven't managed to eliminate most dumb infectious diseases.
You do realise that this is a perfect argument for why humans would win against a rogue AGI?
We haven't managed to wipe out the bacteria and viruses that threaten us. We, who carry around in our skulls the most complex structure in the known universe, who developed quantum physics, split the atom, developed theories about the birth of the cosmos, and changed the path of an asteroid, are apparently unable to destroy something that doesn't even have a brain or, in the case of viruses, a metabolism.
So forgive me if I don't think a rogue AGI has a good chance against us.
> Moreover, in the near future when GPUs become abundant, many small groups of people can harbor a copy of an AGI in their basement, where it can plan and re-spawn whenever the situation becomes accommodating again.
Well, this entire discussion is built on assumptions about what would happen in very speculative circumstances for which no precedent exists, so yeah, I am allowed to make as many assumptions of my own as I please ;-)
> A 2022 AI can already beat most humans in the game Diplomacy:
And a 1698 Savery Engine can pump water better than even the strongest human.
> in the near future when GPUs become abundant, many small groups of people can harbor a copy of an AGI in their basement
Interesting. On what data is the emergence of AGI "in the near future" based, if I may ask, given that there is still no definition of the term "Intelligence" that doesn't involve pointing at ourselves? When is "near future"? Is it 1 year, 2, 10, 100? How does anyone measure how far away something is, if we have no metric to determine the distance between what exists and the assumed result?
Oh, and of course, that is before we even ask the question whether or not an AGI is possible at all, which would be another very interesting question to which there is currently no answer.