Yes, though AI is already in an arms race (mostly the US vs. China and Russia).
Likely: future AI will be decentralized for exactly these reasons. We don't want a single bad actor to control it. Security agencies are already warning that Russia is building a large botnet in case it needs to go to war and wants to disable enemy infrastructure. The US has similar needs.
Well-designed game theory makes it possible for adversaries to cooperate, so there is no guarantee that Alice is always susceptible to Bob's attacks. Cryptography provides methods that are practically unbreakable when properly implemented. Defense and offense can also have very different costs: it can be far cheaper (computationally) for Alice to mount a defense than for Bob to craft an adversarial attack.
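A toy illustration of the game-theoretic point, as a minimal sketch: the payoff matrix and tit-for-tat strategy below are standard textbook choices, not anything from the original text. Two purely self-interested adversaries playing tit-for-tat in an iterated prisoner's dilemma end up cooperating on every round, even though defecting pays more in any single round.

```python
# Minimal sketch: iterated prisoner's dilemma with standard (assumed) payoffs.
# Two tit-for-tat agents (cooperate first, then mirror the opponent's last move)
# sustain mutual cooperation despite being adversaries.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds=100):
    alice_hist, bob_hist = [], []  # each list stores the *opponent's* past moves
    alice_score = bob_score = 0
    for _ in range(rounds):
        a = tit_for_tat(alice_hist)  # Alice reacts to what Bob did last
        b = tit_for_tat(bob_hist)    # Bob reacts to what Alice did last
        alice_score += PAYOFFS[(a, b)]
        bob_score += PAYOFFS[(b, a)]
        alice_hist.append(b)
        bob_hist.append(a)
    return alice_score, bob_score

if __name__ == "__main__":
    print(play())  # (300, 300): sustained mutual cooperation
```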
Still, the risk is real: spam preceded spam filters. There was a short period (in internet years) where spam was more effective than our methods to counter it. So intelligent self-modifying worms/viruses will probably precede intelligent self-learning anti-viruses.
We also see inverse reinforcement learning (learning another agent's policy by observing its behavior), adversarial RL (forcing a competing trading bot into unprofitable decisions), and computational arms races (who has the lowest latency?) between high-frequency trading firms.
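A minimal sketch of the "learn about another agent's policy by watching it" idea: the market states, actions, and observations below are made-up illustrations, and this counting estimator is the simple behaviour-cloning end of the spectrum (full inverse RL would go further and infer the reward function that explains the observed choices).

```python
# Minimal sketch (illustrative data only): estimate a rival agent's policy
# P(action | state) purely from observed (state, action) pairs.

from collections import Counter, defaultdict

# Hypothetical observations of a competing trading bot
observed = [
    ("spread_wide", "quote_tighter"), ("spread_wide", "quote_tighter"),
    ("spread_wide", "hold"),
    ("spread_narrow", "hold"), ("spread_narrow", "hold"),
    ("spread_narrow", "cancel_orders"),
]

def estimate_policy(observations):
    """Maximum-likelihood estimate of P(action | state) from observed behaviour."""
    counts = defaultdict(Counter)
    for state, action in observations:
        counts[state][action] += 1
    return {
        state: {a: n / sum(ctr.values()) for a, n in ctr.items()}
        for state, ctr in counts.items()
    }

print(estimate_policy(observed))
# e.g. {'spread_wide': {'quote_tighter': 0.67, 'hold': 0.33}, 'spread_narrow': {...}}
```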