Other HN commenters have pointed out abundant methodological errors in these scenarios.

I'll take another tack: I believe it is a category error to ask humans to determine the moral or "most moral" action in these scenarios. Two conditions, each sufficient on its own, make it one:

1. There is no present, efficient moral agent. A self-driving car, no matter how "smart" (i.e., proficient at self-driving) it is, is not a moral agent: it is not capable of obeying a law it gives itself, nor does it have an autonomous morality. Asking the self-driving car to do the right thing is like asking the puppy-kicking machine to do the right thing.

2. The scenario is not one in which an efficient moral action occurs. The efficient moral action is really a complex of actions, tracing from the decision to design a "self-driving" car in the first place to the decision to put it on the street, knowing full well that it isn't an agent in its own right. That complex is the immoral action, and it's the only relevant one.

As such, all we humans can do in these scenarios is grasp toward the decision we think is most "preferable," where "preferable" comes down to a bundle of confabulated sentiments (age, weight, "social value," how much gore we have to witness, etc.). But that's not a moral decision on our part: the moral decision was made long before the scenario was entered.




> the moral decision was made long before the scenario was entered.

The company manufacturing the car needs to make this decision when writing the software. At that time, it's a decision being made by moral agents.


> The company manufacturing the car needs to make this decision when writing the software. At that time, it's a decision being made by moral agents.

I think even that concedes a step too much: acquiescing to the task of writing software that will kill people as part of an agent-less decision is, itself, an immoral act.

The puppy-kicking machine analogy was meant to be a little tongue-in-cheek, but it is apt: if kicking puppies is bad, then we should probably consider not building the machine in the first place rather than trying to teach it morality.



