Even if the Laws were real (they're not), they wouldn't work: all it takes is a bit of adversarial interference with the robot's perception model to make it classify a human as not-human, or, better yet, as another robot about to harm a human. At that point, destroying that "robot" becomes a moral imperative under 3LoR.
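To make the "adversarial interference" concrete, here's a toy sketch of the idea behind fast-gradient-sign attacks: a tiny linear "human vs. not-human" classifier whose verdict flips after a small, targeted nudge to its input. The classifier, its weights, and the feature values are all made up for illustration; real perception models are vastly bigger, but the failure mode is the same.

```python
import math

# Hypothetical linear "human detector": sigmoid(w . x + b).
# Weights and features are invented for this sketch.
w = [2.0, -1.0, 0.5]
b = -0.1

def predict(x):
    """Return the model's P(human) for feature vector x."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-score))

# A benign input the model classifies as "human" (score > 0.5).
x = [1.0, 0.2, 0.3]

# Fast-gradient-sign step: push each feature in the direction that
# lowers the "human" score. For a linear model, the gradient of the
# score with respect to the input is just w, so the perturbation is
# -eps * sign(w) per feature.
eps = 0.9
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]

print(predict(x))      # well above 0.5: "human"
print(predict(x_adv))  # well below 0.5: "not a human"
```

The point of the sketch: the perturbation is small and mechanical, computed directly from the model's own parameters, yet it walks the verdict across the decision boundary. Nothing about the human changed; only the classifier's label did.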
This trick also works on humans: you can often circumvent their "protect humans" programming by simply messing with their classification system to label a human as "terrorist", "infidel", or even "unemployed".