
What seems most likely to me, at least before AGI is invented, is that the human will ask the LLM for strategies to make money under pressure, it will suggest something unethical, and the human will commit the insider trading themselves. When questioned, they will blame the AI for misleading them.

If LLMs are eventually regarded by a lot of people as an authoritative source, regardless of whether or not they are, I expect a lot of such cases of "morality laundering" to appear.




The easy solution is to stop letting everyone deflect blame onto black-box systems.

Teslas too-- a self-driving car mows down a crowd and nobody is held responsible?

Taking illegal action based on what GPT told you doesn't excuse you pulling the trigger. You pulled the trigger. You go to jail.

Don't normalize "just following orders" or it's going to end predictably.


Tesla has attempted to insulate itself from blame by requiring drivers to take full responsibility for autonomous acts of the vehicle under their supervision. (Perhaps also by having autonomous modes disengage before impending collisions — presumably it helps with PR/legally to say autonomous systems were not active at the time of collision?)

Most don’t think autonomous systems will become safe, or be accepted as safe, until manufacturers are willing to assume liability and indemnify users, as Mercedes, by contrast, has.

https://insideevs.com/news/575160/mercedes-accepts-legal-res...

[Edited to note the attempt, so as not to assert success — I don’t think that’s settled]


> Tesla has insulated itself from blame by requiring drivers to take full responsibility for autonomous acts of the vehicle under their supervision.

Well, they've made drivers feel more exposed by doing that. I don't think you can actually negate product liability law that way, but if you make people feel like they bear all responsibility, it might help marginally even if it isn't legally effective.


>if you make people feel like they bear all responsibility, it might help marginally even if it isn't legally effective.

Since we are talking about a system that needs a human on alert and ready to take over at any time to function safely, I wholeheartedly agree. Human nature is still human nature: people will zone out and look for diversions regardless, but fear will motivate some to be a bit more vigilant in spite of the boredom of staring intently at a road they aren't personally navigating.


I'm not sure there's going to be a lot. I think people will catch on.

"Federal Judge Kevin Castel is considering punishments for Schwartz and his associates. In an order on Friday, Castel scheduled a June 8 hearing at which Schwartz, fellow attorney Peter LoDuca, and the law firm must show cause for why they should not be sanctioned."

https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-f...



