They provided an example, and finding a technical flaw in the example they chose doesn't invalidate the broader concern as applied to other domains.
"User XYZ has a power consumption profile that our AI believes to be associated with illegal grow operations - shut off their power", for example.
It's the shutting off power that's a problem, not the AI.
You'll catch shit for taking direct action yourself. But if your AI does it without you telling it to... "oops!"
Tech is dead. Welcome to the age of social engineering.