You're getting into the weeds nitpicking a detail while ignoring the substantive argument the comment was making.

They provided an example, and finding a technical flaw in the example they chose doesn't invalidate the broader concern as it applies to other domains.




I asked a question for clarification.


There are so many potential abuses in a situation like this that I find it baffling that you can't imagine any of them.

"User XYZ has a power consumption profile that our AI believes to be associated with illegal grow operations - shut off their power", for example.


But why would AI be required for that? I could flag anyone pulling 10x the load of a normal customer at night and shut them off with a plain SQL query.
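
A minimal sketch of the kind of query meant here, assuming a hypothetical meter_readings table with per-customer hourly kwh readings and a read_at timestamp:

    -- Hypothetical schema: hourly smart-meter readings per customer.
    -- Flags anyone whose average overnight draw is 10x the average
    -- overnight draw across all customers -- no AI involved.
    WITH night_usage AS (
      SELECT customer_id, AVG(kwh) AS avg_night_kwh
      FROM meter_readings
      WHERE EXTRACT(HOUR FROM read_at) BETWEEN 0 AND 5
      GROUP BY customer_id
    )
    SELECT customer_id
    FROM night_usage
    WHERE avg_night_kwh > 10 * (SELECT AVG(avg_night_kwh) FROM night_usage);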

It's shutting off the power that's the problem, not the AI.


Externalizing blame.

You'll catch shit for taking direct action yourself. But if your AI does it, without you telling it to..."oops!"

Tech is dead. Welcome to the age of social engineering.


And how would your understanding change based on the answer? The point they're making stands regardless of what's in the model; it requires only that the model be applied in the way they describe.



