
"We were only following orders" suddenly feels even more scary when they come from a computer and not a human.



It shouldn't. Blindly following orders from humans produces much the same results.


When the orders come from a human, there is someone who is ultimately responsible, someone to put through the courts, someone to throw in jail.

When the orders come from a computer, will any of that happen? Will there be anyone responsible?


Where the orders come from has nothing to do with who is held responsible. The person who acts is responsible; that they were following orders is not a valid excuse. This is what the Nuremberg Trials established for us. It doesn't matter if someone told you to do something, you alone have the capacity to decide whether you actually do it or not.

Humans are very quick to abdicate their own moral authority, and it is the most repugnant human impulse there is. If someone puts a gun to your head and tells you to kill someone or die, you are still responsible for whoever you kill. Others might consider your actions understandable, even forgivable, but you're still a murderer, and still someone who valued their own life above another's.

If you do something because you were "just doing your job", that's even worse: it's identical to doing something for money, which society usually looks on very poorly.


Yeah, but punitive justice is the least consequential thing in those situations. The main reason you'd punish someone there is to stop them from willfully repeating the scenario, and a path for learning and prevention exists in automated systems too.

The danger I see is the stifling of human subjective judgment: broad decisions being made and carried out without a sanity/humanity check by humans.

I think measures should be put in place to guarantee human intervention in cases of situational anomaly or system error, as well as an emphasis on redress.

Since automation scales and a human workforce doesn't, a guarantee of human intervention might not be practically possible.

Anyone have any ideas on how this kind of thing might be mitigated?
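One possible shape for this (a rough sketch only, and every name in it is made up for illustration): keep routine cases automated, route only anomalous or low-confidence decisions to a bounded human review queue, and have the system hold the action rather than auto-execute when that queue is saturated. That keeps the human workload proportional to the number of anomalies instead of the total volume, which is the part that doesn't scale.

    # Hypothetical sketch of anomaly-gated human review; not taken from any
    # real system. Decision, is_anomalous, and escalate are illustrative names.
    from dataclasses import dataclass
    from queue import Queue, Full

    @dataclass
    class Decision:
        case_id: str
        action: str          # what the automated system wants to do
        confidence: float    # model confidence, 0.0 - 1.0

    review_queue: Queue = Queue(maxsize=100)   # bounded: humans don't scale

    def is_anomalous(d: Decision) -> bool:
        # Placeholder check; a real system might compare against historical
        # distributions or business rules instead of a raw threshold.
        return d.confidence < 0.9

    def execute(d: Decision) -> str:
        return f"executed:{d.case_id}"

    def escalate(d: Decision) -> str:
        # Fail safe: hold the action and page an operator instead of proceeding.
        return f"held_for_escalation:{d.case_id}"

    def handle(d: Decision) -> str:
        if not is_anomalous(d):
            return execute(d)              # routine cases stay automated
        try:
            review_queue.put_nowait(d)     # anomalies wait for a human
            return "pending_human_review"
        except Full:
            return escalate(d)             # never auto-act when review is unavailable

It doesn't solve the incentive problem, but it at least makes "a human will look at it" a property of the system rather than a promise.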


I don't think it will be mitigated, given my experience working as a data scientist. It will be ignored, because it is profitable to ignore it. If we slowed down and figured out how to make machines mimic human empathy before we let them make decisions, it could be mitigated, but we won't do that. Ask your boss about it.


"It looks like the devs didn't do their job properly..."?


But from the other side of the system, it's perfect. Now everybody involved can say they were just following orders!



