
I say it is good if people with bad intentions blow themselves up by following questionable instructions, and also good to keep them paranoid about it.



They might hurt innocent people like neighbors, housemates, family members, or random members of the public with their mistakes.


That's not the AI's fault, though.


I only meant to challenge the suggestion that no meaningful harm comes from this kind of misinformation. A bomb maker hurting themselves could also hurt other people. That probably doesn't mean we should “blame” the AI, but it's still important to understand what harms may come when we think about how to manage new technologies, even if the answer is “do nothing”.


That's a legal question we have yet to resolve.



