But what about a bot, particularly one that is powered by the type of AI that is complex enough to make its own inferences and form its own conclusions based on the data presented, rather than being fed a bunch of rules?
That bot then figures out that, in aggregate, we're all a bunch of assholes, then something something Skynet.
Hmm, interesting thought. Where I take this is that the War Games scenario might be more likely. Given that AI will not just wake up one day fully self-aware and aware of its externalities, I'll assume it learns about things as it needs to know them. Needs to know where the opponent's nuclear missiles are stationed. Needs to know the major population centers, which are just ints stored in database rows. Needs to know nothing about "people" because those aren't parameters worth knowing for the task.
So, yeah, I've always been a Terminator/Matrix kind of guy, but your comment moved me to wonder if I've considered all possibilities.
Have you read Rule 34 by Charles Stross? In it, a machine learning spam filter decides that the best way to stop spam is to bump off the spammers themselves. No-one realises it's found such a drastic solution until people start dying in mysterious ways.
I'm not sure the robots will take over Matrix or Terminator style - instead, as algorithms get more complex and are able to take greater leaps of logic, we'll see more and more strange behaviour until suddenly things get really weird.
And then it'll be too late. Too many of our systems will be reliant on them for us to reverse course.
The digital domain will have become quite literally another domain of life, and we'll just have to deal with the messiness and unpredictability that entails.