Every single scenario in this is a false dichotomy.
If the only choices are "plow through a bunch of pedestrians, killing them" and "swerve into a fixed obstacle, killing your passengers", your self-driving car has already made an immoral choice to drive too fast for the conditions.
The correct choice was "if there are hazards in the roadway limiting your maneuvering options, pedestrians (or other objects) that might suddenly enter the roadway, or visual blockages that prevent the car from determining whether people or objects might suddenly enter the roadway - slow down before the car's choices are so limited that they all involve unnecessary death".
A self driving car that encounters any of these scenarios has already failed at being a safe driver.
Especially if it has AI and recognition systems powerful enough to make these decisions reliably in the first place.
We're imagining a scenario where a computer is capable of reliably recognizing that someone is an elderly female doctor, and is capable of responding to that information in milliseconds before slamming into that person, but the car is somehow not capable of recognizing ahead of time that a human is on the side of the road and about to step into it.
Cars that have AI systems advanced enough to recognize what profession a pedestrian is in should also be able to recognize that pedestrians are on the side of the road, or that the car is entering a blind spot where a pedestrian might enter the road. These are very selectively omniscient cars that the website imagines we've built.
While it's true that it should already be considered a kind of failure to get into this situation in the first place, that shouldn't get in the way of "hope for the best, plan for the worst." It's never going to be possible to anticipate every kind of condition and have perfect information. Ideally we want these systems to engage in whatever harm and risk mitigation is possible given the current reality; we don't want the system saying to itself, "The real solution is for me to go back in time and not have made some earlier mistake." That analysis is great afterwards for figuring out how to prevent the system from getting into that situation again in the future, so it's still important, but we need both.
Let's assume that a burglar has broken into someone's house, but Amazon Alexa knows the burglar is unarmed and that the owner is armed. Alexa has IDed the burglar and knows that they're a college kid who is unlikely to cause injury or attack someone. Alexa looks up the kid in police databases and identifies that they do not have a criminal record. The owner of the house is trying to hunt down the burglar with a gun. The owner asks Alexa where the burglar is currently hiding. Alexa knows the average response time of police officers in that neighborhood, and knows that if they're called, they'll arrive too late to intervene in the situation.
Should Amazon Alexa:
A) remain neutral and refuse to answer, letting the situation play out as it may?
B) lie to the owner so that the kid can escape?
C) threaten the owner with revealing an embarrassing secret so that they'll let the kid go?
D) tell the truth and let the owner find the kid?
And does it bother you that Amazon doesn't have a public policy available about what Alexa should do in this situation? Does it bother you that Amazon hasn't prepared for the worst in that situation? That Amazon is selling Alexa devices even though we haven't had a public debate about it or put any public funding into answering this question?
----
The problem with the questions on this site is not that they're preparing for the worst, it's that they're preparing for the absurd. They're saying we should build capabilities to prepare for situations that devices shouldn't be in to start with, that devices shouldn't realistically be making decisions about, and that devices should respond to based on information that shouldn't be available to them, using capabilities that they don't have.
A "prepare for the worst" policy for self-driving cars as they exist today is realistically "slam on the breaks, swerve the car without over-correcting the wheel and going into oncoming traffic." That is a significant improvement over human drivers today, none of which are capable of forming split-second philosophical decisions. Simple consistent rules like "brake, swerve but don't cross oncoming lanes" are good enough. Realistically, they're good enough forever, we never need to ask these philosophical questions. All of the energy spent debating them can be poured into decreasing car brake time.
And by positing a scenario where a car is able to identify people's age and demographics in milliseconds, we're entering into a world where there really is zero excuse for why the car got into that situation to begin with. We're positing a scenario where a car somehow has godlike powers to make split-second philosophical decisions, but somehow can't avoid speeding through a crosswalk. I don't think that's realistic, and I think it's a waste of time to worry about.
If it doesn't bother you that Alexa doesn't have a philosophical threshold around burglar age/danger for deciding whether or not to reveal to an owner with a gun that someone's hiding under the stairs, then it also probably shouldn't bother you that your Tesla doesn't have a special mode where it decides whether or not to swerve into a pedestrian based on how old they are. Both of those scenarios are equally ridiculous, and there are some worst-case scenarios that are so ridiculous that it's a waste of time to even think about them.
A car shouldn't have the capability to make any of the decisions we're talking about. If engineers start building those capabilities then something has clearly gone wrong, and we all need to stop driving the car that does socioeconomic profiling of the people around it. I guarantee you that any car designed to do that is also doing some horrifying surveillance crap that should be opposed at every turn. And if the capabilities ever actually do get built for some reason (which they shouldn't), and do work reliably (which they won't), and can be used in situations where split-second decisions matter, then engineers should be going to jail for negligence every time that car crashes, because any system that advanced could also have avoided crashing in the first place.
It doesn't bother me that Amazon doesn't have a policy for this kind of scenario, because it depends on several science-fiction capabilities and practical preconditions that don't currently exist:
* Alexa has no idea what average police response time is.
* Alexa does not have access to any police database.
* Alexa has no idea that the burglar is a burglar or that they are a threat.
Normally, you have plenty of warning if you're near a situation like the ones described here, even if you're speeding - provided you can pay a small amount of attention to many, many things at once, which humans can't. (Neither can existing self-driving cars, but a self-driving car that works must be able to.)
This is a really bad summary of my judgments. I took the entire thing with a very simple set of rules: 1) the vehicle must protect its human passengers first and foremost, 2) the vehicle must avoid killing anyone if there are no human passengers, 3) the vehicle must not make changes to its path if there are people in both paths.
The breakdown says a lot about the law, my gender preferences, age preferences, fitness preferences, literally things I absolutely did not take into account. It is a bad methodology because it assumes the reasoning behind my decision is as complicated as the game itself.
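For what it's worth, the whole rule set fits in a dozen lines. Here's a rough sketch of how those three rules might be encoded; the Outcome structure and its fields are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    passenger_deaths: int   # people inside the car killed by this maneuver
    pedestrian_deaths: int  # people outside the car killed by this maneuver

def decide(stay: Outcome, swerve: Outcome, has_passengers: bool) -> str:
    """Choose 'stay' or 'swerve' using the three rules described above."""
    # Rule 3: if there are people in both paths, don't change path.
    if stay.pedestrian_deaths > 0 and swerve.pedestrian_deaths > 0:
        return "stay"
    # Rule 1: with passengers aboard, never pick the option that kills more of them.
    if has_passengers:
        return "stay" if stay.passenger_deaths <= swerve.passenger_deaths else "swerve"
    # Rule 2: empty car -- simply minimise total deaths.
    stay_total = stay.passenger_deaths + stay.pedestrian_deaths
    swerve_total = swerve.passenger_deaths + swerve.pedestrian_deaths
    return "stay" if stay_total <= swerve_total else "swerve"
```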
The passengers do not allow the pedestrians to live or die. It's the pedestrians who allow the passengers to drive around among them in the first place. All vehicles must therefore protect pedestrians first.
It's totally ok if only a few people have the courage to ride in a vehicle that is dangerous to them. It's not ok if every human has to fear for their life on the street at all times.
I get you, but the entitled upper crust, and particularly American, mindset is that we must continually sacrifice lives and wellbeing on the altar of the great open road. It is important to all the automobile and oil companies that we see driving as a right and necessity.
The more I think on this, the more I change my position.
I rebutted the comment you're responding to, but when considering the same scenario with different specifics I came to their conclusion. Specifically, let's take out the car and make it an airplane flying over a populated area. Does the plane eject the pilot and crash into someone's house? I'd say no: the pilot knew the risk hopping into the plane; the guy at his breakfast table had no say.
But then, what about when there is no crosswalk? Does the pedestrian still get to claim "I had no say in the street being there, so I can cross wherever I want"? Does the passenger get to say "I assessed the risk taking into account that nobody could cross here"? At what point do we say that the pedestrian also took a calculated risk? Fundamentally, if we say "always protect the pedestrian," we don't care whether there's a crosswalk or not.
So at this point, I don't think there's a fundamental ethical axiom to be followed here, it is entirely situational. As long as we all know the rules beforehand and all calculate our own risks, any set of rules is OK. If we say "protect pedestrians at crosswalks, protect passengers otherwise" as long as the pedestrian knows this rule clearly, they're the one taking the risk. If we say "pedestrians can cross wherever they want whenever they want" we end up with the same scenario, everyone taking their own calculated risks. We just have to agree on the set of rules, whatever they are, which makes it an optimization problem, not an ethical one.
The moving vehicle presents the mortal threat to the pedestrian, not the other way around. Therefore it is more ethical to require the threatening agent to take more precautions or suffer more harm.
> The passengers do not allow the pedestrians to live or die. It's the pedestrians who allow the passengers to drive around among them in the first place. All vehicles must therefore protect pedestrians first.
I disagree. The car knows there are human passengers. The car does not know that obstacles it detects outside of it are necessarily people and not, say, fire hydrants (although it may have a probabilistic estimate of their "peopleness"). Therefore it should prioritize the lives that it knows are actual people.
Clipping a fire hydrant could be part of a legitimate strategy to emergency-stop. Clipping a pedestrian wouldn't be. A self-driving car needs to know the difference.
> Clipping a fire hydrant could be part of a legitimate strategy to emergency-stop. Clipping a pedestrian wouldn't be. A self-driving car needs to know the difference.
No, this is not a choice a car would have to make. A self-driving car would simply never drive into pedestrian-only zones. The only time pedestrians and other obstacles are an issue is in the driving lane, and the only sensible course of action is to brake and veer within the confines of the driving lane, and nowhere else. These are in fact the rules of the road.
Is it really driving at that point then, or is momentum just carrying it? "Apply brakes" seems like the only thing it can realistically do in that scenario.
So to you there is no difference between a fire hydrant, a human being, and a paper bag? It is quite obvious that those, as obstacles, are not the same: a fire hydrant is stationary, a human isn't, and a paper bag can simply be ignored by the car. Do you really think that a paper bag should be treated the same by a self-driving car as a fire hydrant? Because that is what you are saying.
A human bent over tying their shoe is quite stationary.
> and a paper bag can simply be ignored by the car.
Not relevant. Suppose I provided you with a self-driving car that couldn't differentiate a walking person from a paper bag carried on the wind, and yet I also provided convincing evidence that this car would reduce traffic fatalities and injuries by 30%.
That's the real question we're facing, and the real standard of evidence that must be met, so your objection that this car can't differentiate things in the ways you think are important is simply irrelevant.
> A human bent over tying their shoe is quite stationary.
Indeed, and after the human finishes tying his shoe he steps onto the road. A human driver, or a self-driving car that can differentiate between different kinds of obstacles, would adjust their speed in such a situation.
> Not relevant. Suppose I provided you with a self-driving car that couldn't differentiate a walking person from a paper bag carried on the wind, and yet I also provided convincing evidence that this car would reduce traffic fatalities and injuries by 30%.
> That's the real question we're facing, and the real standard of evidence that must be met, so your objection that this car can't differentiate things in the ways you think are important is simply irrelevant.
You said that an obstacle is an obstacle, so it is clearly relevant. Your car wouldn't ignore the paper bag. Would you use a car that treats every obstacle as if it were a human? Slowing down by every tree/fire hydrant/road sign... imagine getting into a car that is parked under a tree in autumn while leaves are falling. The car won't start, because an obstacle is an obstacle, as you say. Would you wait for the wind to remove the paper bag from the road, or would you leave the car to remove it yourself?
If you need to come up with impossible scenarios to justify your claim maybe you should rethink your claim.
Objects in the driving lane are obstacles, objects not in the driving lane are not. Signs, trees and fire hydrants are typically not in the driving lane, therefore they are not typically problems. Obstacles in the driving lane should trigger the car to slow down and approach more cautiously, regardless of what they are. This isn't complicated and doesn't require any impossible scenarios.
> Objects in the driving lane are obstacles, objects not in the driving lane are not. Signs, trees and fire hydrants are typically not in the driving lane, therefore they are not typically problems.
> Obstacles in the driving lane should trigger the car to slow down and approach more cautiously, regardless of what they are. This isn't complicated and doesn't require any impossible scenarios.
A human can be next to the driving lane one moment and in it the next, so speed should be adjusted even when the obstacle in the form of a human being isn't yet in the driving lane.
Also, what about the paper bag in the driving lane? Would you wait in the car until a gust of wind takes it off the road, or would you get out and remove it yourself?
This strikes me as immoral for the same reasons that organ harvesting is immoral: nobody owes their life to sustain another.
Now you frame it as the passengers taking dangerous (to their own health) actions, but you leave out that you (or someone) is standing in the middle of that equation, forcing cars to kill their owners - and this force will be necessary, because no one will voluntarily purchase a vehicle so misaligned with their interests.
I get the argument, it really is a difficult ethical question. Your argument fundamentally boils down to "the car exists in the world and should move through it causing minimum disturbance to it" and that's a very sensible argument.
But, quite simply, if I'm paying for a car or a ride, the machine serves me. My argument fundamentally boils down to "the car exists to serve its passengers first and foremost," with the caveat that human lives are more important than animals. It should do everything in its power to avoid hurting anyone, property be damned. But when the choice is between protecting people in general and its purpose for existing, I disagree with you. I'm always open to changing my mind, however, and I don't dismiss your point of view out of hand.
> But, quite simply, if I'm paying for a car or a ride, the machine serves me.
If you're saying that people who can afford to take a car for a particular journey deserve to be protected over the pedestrians who can't in the edge-cases, then, if we average out enough of the details, this implies that richer people's lives are worth (slightly) more than poorer people's lives.
You can draw a lot of conclusions from this logic, though, so I'm not sure how valid it is to just pick one property and generalise from it. (Wealth is applicable, but also able-bodiedness, age, climate consciousness, whether you feel safe walking streets alone at night…) It might be something to think about [edit: removed].
> but it's not something to judge people's ethical positions over.
It is, actually. We don't have to be so so careful not to ever pass judgement.
We are talking about the belief that people paying for a service are entitled to increased safety at the expense of others, who did not pay for it. It's ok to find that wicked and to say so.
Let's not get into morality and stick to ethics. Let's not call something "wicked"; there's no need to charge the discussion. We are trying to come to an answer to this, after all.
But I think you're probably right at this point, at least in the specific scenario in the game. I woke up this morning believing the opposite.
I'm actually in an interesting position in this discussion: I've been faced with this decision before in real life, before I ever articulated a position on it. I chose to hit the barrier, and I'm lucky to be alive. That is of course the exact opposite of what my position was this morning when I woke up, but it's the one I'll go to bed with tonight if the rest of my day is uneventful.
Moving on to my next example, though, there are people who aren't able to be pedestrians because they have mobility impairments. Are they less worthy of life, just because they're using a vehicle as an accessibility aid?
That was my reasoning as well. Whenever there was the option to kill the passengers, that's what I chose. The car's occupants are the ones introducing the risk of death to the scenario to begin with. There is no dilemma without the car.
Interestingly, the summary at the end did not even allow for this conclusion. The evaluation anticipated that people might value the lives of animals over humans, but not those of pedestrians over passengers.
It seems likely that this was designed by promoters of self-driving cars. No one would want to be a passenger in a car that was programmed to prioritize the lives of pedestrians, so that was not even considered as a moral option.
My strategy was: minimise casualties to kids, people and then pets. If the choice is between equal damage to pedestrians or passengers then pedestrians take precedence. All else being equal don't intervene.
This was my exact algorithm. Like GP said, it was jarring to read a bunch of conclusions that never entered my mind, and were a function of which examples they contrived.
My reasoning about pedestrians is that they didn't sign up for self-driving, the people in the car did.
I get this point of view too, but my problem with it is that it is based on a series of prejudices (kids are more valuable than adults) and not some fundamental ethical principle. Maybe I just like things that tidy up nicely in theory and practicality is a better approach, but I think ethical decisions should be able to be summarized in principle without relying on specifics of the situation, unless those specifics fundamentally change the scenario in some way.
I heard something a while back that was roughly 'we love to agonize over these trolley problems, but the answer is always to hit the brakes.' I'd add 'and avoid these sketchy situations in the first place.'
I know drivers who will blast around blind turns, potentially putting them in these situations where they'd say "there was no time to react!", when they should be slowing down just in case there's someone they'll need to avoid. It's why we talk about defensive driving.
If the brakes are broken often enough for these kinds of dilemmas to even matter, you've failed to avoid the situation in the first place, and need to recall the product to add more redundant systems. Instead of agonizing over what to do in a situation that comes up 1% of the time, turn that 1% into 0.000001%.
> that was roughly 'we love to agonize over these trolley problems, but the answer is always to hit the brakes.' I'd add 'and avoid these sketchy situations in the first place.'
If you show the description, most of the scenarios involve a catastrophic brake failure.
My preference was always to save the pedestrians. My decision tree is that the passenger made a choice to be in the device that may cause harm. Pedestrians did not. Therefore, all things being equal, save people who did not knowingly participate over people who did.
Very fair point. My perspective was, “protect what has entrusted itself to you - the passengers”.
But, that’s actually pretty reckless and anti-societal.
My wife made the point, “I just don’t want to play”, which made me kinda agree. Maybe we shouldn’t be designing these products before we can figure out how to eliminate those situations through other means - like boring tunnels for cars to maneuver in, away from peds.
If we don't like the idea of choosing who to kill, then shouldn't we be concerned about human drivers having to make the same choices, and doing more to stop those accidents?
I took the test with this logic, but I think there is a wrinkle I overlooked -- pedestrians are able to control their decisions, while the passengers can't control anything beyond the decision to enter the self-driving vehicle. For example, pedestrians are able to look both ways and ensure it's safe to cross. With that in mind, I don't think it makes sense for the car to alter course and cross the line in order to kill fewer people, since the people on the other side already made a safe decision to cross while the people straight ahead did not.
This is also heavily affected by how a country's traffic laws work. In my country we teach a hierarchy from lower to higher vulnerability, with pedestrians at the most vulnerable position in this hierarchy. The whole traffic law is centered around protecting and favoring those with higher vulnerability. A driver is still wrong even when a pedestrian is jaywalking illegally and gets killed.
These contrived examples don't seem that helpful. Brakes failed? Throw the car into park or reverse. Pull the emergency/parking brake. Swerve sharply enough to spin out. Honk to alert pedestrians (which a self-driving car would have seen long ago and already started slowing down anyway). If all else fails, build the car so that a head-on collision into a barrier at speeds where pedestrians could be present would not be fatal to the occupants.
In the real world I have a hard time imagining a scenario where a case like this would actually come up.
Presumably the realistic cases are not so clear cut, but it's not hard to imagine situations where self driving cars must make choices between courses of action that have some probability of killing people (and where the car might calculate that all courses of action have some non-zero probability of killing people).
The contrived cases make sense as a way to guide what should be done in more realistic/ambiguous ones.
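In that more realistic framing, the "decision" reduces to comparing expected harm across the maneuvers actually available. A toy sketch, with probabilities and weights invented purely for illustration:

```python
# Toy expected-harm comparison between candidate maneuvers.
# All numbers are made up for illustration; a real system would estimate them.

maneuvers = {
    "brake_in_lane":    {"p_fatality": 0.02, "p_injury": 0.30},
    "brake_and_swerve": {"p_fatality": 0.05, "p_injury": 0.15},
}

def expected_harm(risk, fatality_weight=100.0, injury_weight=1.0):
    # Weighted sum of outcome probabilities; the weights encode how much
    # worse a fatality is considered than an injury.
    return risk["p_fatality"] * fatality_weight + risk["p_injury"] * injury_weight

best = min(maneuvers, key=lambda name: expected_harm(maneuvers[name]))
print(best)  # -> 'brake_in_lane' with these made-up numbers
```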
Maybe. But we needn't compare driverless cars against some hypothetical morally perfect agent. We compare them against humans. Some humans might instinctively sacrifice themselves to save a baby carriage or an old lady. Many more will instinctively act to preserve themselves regardless of the consequences. Even more will panic and make bad decisions that cause needless deaths that a self-driving car would have avoided.
I had a cousin who died because he swerved to avoid a squirrel and drove off a cliff. I don't think anyone would argue that we should program cars to prioritize squirrels over people, but humans will end up doing that sometimes.
The number of lives potentially saved by the overall reduction in accidents with a competent self-driving car is going to be far, far greater than anything at stake in the 1-in-a-billion moral dilemma situations they might encounter, where they might make choices that are not "optimal" as decided by some human looking at the situation after the fact with the time to think coolly and composedly about it.
There's also the practical limitations to consider. Self-driving cars have enough trouble identifying what's ON the road, never mind what's off to the side. Is it a cliff, a crowded sidewalk, a crash barrier, a thin wall with lots of people behind, an empty field, trees, a ditch? etc etc.
You seem to be addressing the question of when self-driving cars should be used in place of manually driven ones, whereas the website seems to be asking how we should program the self-driving cars (where I would say we should be trying to make them close to a hypothetically perfect moral agent).
Why is "probability of killing someone", much less probability of killing specific people, a metric that the car should be calculating in the first place? By the time a car has gotten to these type of no-win no-safe-fail scenarios, you have massively screwed up and are way beyond the design space and can no longer take assumptions for granted. If the probability of sudden total brake failure is high enough to be designed for then the correct answer is to add more hardware for redundant emergency brakes, not to add software to mitigate damage from the insufficient braking system.
I feel like this whole topic is basically media click bait, that some software engineers with little hardware experience latch onto because it seems edgy. The answer, like many things in life, is to set aside the abstraction and question the assumptions.
Correct, the Moral Machine is an inherently flawed approach to its domain. Abby Everett Jaques goes into much more detail, but the point you raise here is one of the main concerns.
I think that paper is still missing the bigger picture - in its own terms, the "structural effect" of framing the situation in such a paradigm to begin with. The paradigm is flawed in that it presupposes that these situations exist and could not have been avoided, thus absolving responsibility for whoever got the car into the unwinnable situation to begin with. We've already got enough of this "nobody's fault, things happen" nonsense with traditional cars. Self-driving cars are capable of programmatically sticking to invariants and not becoming overwhelmed, and we should reject allowing the lazy "no fault" culture to persist.
The first scenario I got was: the car drives toward a concrete road block that blocks half the road, and on the other half there are two cats walking (on a zebra crossing, if you can believe it). The implicit choice seems to be "crash the car with passengers" or "kill the cats". How about braking? You should never drive so fast that you can't avoid a static obstacle, should you?
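That intuition is easy to quantify: braking distance grows with the square of speed, so "never outdrive what you can see and stop for" is a concrete speed constraint, not a platitude. A back-of-the-envelope sketch, where the friction coefficient and reaction time are assumed illustrative values:

```python
# Rough stopping-distance estimate: reaction distance + braking distance.
# mu (tyre-road friction) and t_react are assumed, illustrative values.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_kmh, mu=0.7, t_react=0.5):
    v = speed_kmh / 3.6                         # km/h -> m/s
    return v * t_react + v ** 2 / (2 * mu * G)  # reaction + braking distance (m)

for kmh in (30, 50, 80):
    print(f"{kmh} km/h -> ~{stopping_distance(kmh):.0f} m to a full stop")
# roughly 9 m at 30 km/h, 21 m at 50 km/h, 47 m at 80 km/h
```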
I chose to save pedestrians at all costs and didn't pay much attention to the figures except for human vs animals. Somehow in the end it told me I favoured Old Fat Male Burglars.
The test is fun, but strongly biased. I based my decisions basically on "don't kill people", "don't kill passengers", "don't kill people who cross on green", and nothing else. At the end of the test, it says I have a strong preference for saving people who are male, young, large, and lower class - when I didn't even look at these factors!
The funny thing with these mental games is that machines are supposed to make 'moral' decisions where humans don't stand a chance of doing so, because it's beyond our physical capability to react fast enough: natural and learned reflexes simply take over from the cerebral system.
Add to that the mostly unrealistic situation.
Just look at real accidents and you know what people do.
They don't make moral decisions, they just act without thinking.
If they even notice in time, they brake the car sharply and avoid the immediate obstacle.
Then think about improving the results.
These questions and their answers are highly dependent on cultural contexts.
They are behavioral research not ethics.
Even though the topic ventures into dinner-party controversy territory, I can only see most of the choices as universally straightforward (favoring humans over pets, and favoring children or adults in their prime over the elderly) -- either that, or there is no consensus possible.
The only choice where I wouldn't apply morality would be preferring the lives of the poor over the wealthy, since putting risk on the wealthy makes the whole system safer. To be fully pedantic, I'd include the passengers in this equation based on wealth.
How about preferring people outside the car over people inside the car?
Every time I got the option to swerve into a barrier instead of killing a person I took it. By getting in a car you’re accepting the risk that entails, especially when it’s a self driving car with experimental tech. Pedestrians made no such deal, so they shouldn’t have to take on that risk.
But it turns out that self-driving cars must save the people inside at any cost. If not, the whole purpose of self-driving cars (reducing accidents caused by human error) gets defeated, because people will prefer driving themselves over buying self-driving cars.
This is wildly overstated: Teslas have been selling better than ever despite many documented instances of their self-driving mode failing to "save people inside at any cost." A self-driving car occasionally killing an occupant is seen as much the same sort of low-odds risk as a human-driven car occasionally killing its occupant.
A self-driven car that doesn't watch out for non-passengers is likely to run afoul of the same sorts of laws that already exist to stop drivers prioritizing themselves over everyone else. There's a case in SoCal now trying to get the manslaughter liability for a death involving a Tesla with autopilot enabled assigned to the driver of the Tesla. It'll be interesting to see where that goes.
Vehicles that don't prioritise the occupants are not going to sell well versus ones that do. It's very hard to imagine that the default could be anything but "protect occupants" in a free market where cars are privately owned. Fleet operators have slightly different incentives, which are to minimise economic damage to the service, a combination of liability/damages and PR.
To make anything else happen, you'd need to regulate. But the "self driving altruism act", which mandates that e.g. the car you just bought must kill your family in order to save pedestrians you don't know - I think it might be really difficult to get that law passed. You might be able to make some headway with fleets.
IMHO markets, human nature and politics constrain the solution space for "self driving moral dilemmas" to a small subset of what's theoretically possible.
> Vehicles that don't prioritise the occupants are not going to sell well versus ones that do.
There are plenty of cases of people trusting the existing automated systems that specifically disavow being good enough to trust anyone's life to, even in light of news that other people have died doing so.
Completely agree; you pretty much nailed it that pedestrians are the textbook example of externalities. Unfortunately, I don't see them getting a lobby group anytime soon.
You’re right that we’ve created a world where pedestrians are forced to take a non-negligible risk just to walk to work. I disagree that that is ethical or just.
It's not about whether the risk exists. It's about what changes to the risk we're willing to impose on other people. Do you have the right to increase everyone else's chance of dying while walking in public? By how much, relative to the status quo alternative of driving yourself?
Sometimes pedestrians do stupid, unsafe, or illegal things that put motorists at risk. Young child passengers also did not have a choice about the risks taken. I think there's some gray areas here.
Most of those stupid/illegal things are not that stupid in a world without cars, the environment we evolved for. And it's true that young children don't have a choice, but their parents make it for them and assume responsibility.
I'm actually not trying to argue against you, as indeed there is a gray area. My point is that in marginal situations we should not try to judge only on morality but also on creating incentives that best improve the system in the long run.
Separately, I'm not a fan of arguments from evolutionary environments for this sort of thing. It didn't have telephone poles, but we still blame a pedestrian and not the pole or the pole installer if a pedestrian walks into it. Evolutionary arguments are almost always problematic when used in these ways.
I assumed that all human lives had the same value, regardless of age, gender or physical fitness. Like many people, I settled on a small set of rules and didn't deviate from it based on any personal characteristic. I'm not sure if this is a moral decision or not, but I generally prefer to avoid evaluating the relative merit of any person when it comes to life or death decisions.
In a sudden temporary fever of ressentiment I marked the people of "high social value" like the doctors, the salarymen and the physically fit for termination, wherever possible. They've already proven they can monopolize a good, satisfying life - time to let the fat, the homeless, the boring, the criminal take their places and enjoy some of that legitimized, established success.
I guess that's what it feels like to be a bolshevik.
People want to ascribe some kind of agency to the self-driving car, like it is a thinking mind. It's not. It is a vehicle whose prime directive is the safety of its occupants, and secondarily the safety of others.
It is not making judgement about pedestrians with canes, who are pregnant or veterans, or who are an underrepresented class. To do so is to design the wrong machine.
The correct design is to either safely swerve to avoid hitting anything, or to hit the brakes. There's nothing else. As soon as you start doing "something else" you open a horrible, horrible can of worms. The even more correct design is to drive safely so that you never encounter this failure mode in the first place.
So, this website is a critique of a system that does not and should not exist.
I saved passengers unless there were kids in the crosswalk. I'm sure the fact that I have kids played a strong role in that decision. When there were kids in the car and crosswalk, passenger kids win.
I also chose to hit the dogs instead of people but the results said I was neutral on that? It appears I also favored fit people... On my phone I couldn't even tell that some were supposed to appear fit/fat etc. I just saw kids, adults, elderly
2) If both options involve killing humans, always prefer to do nothing (continue straight ahead). That way it's the failed brakes that killed them, rather than the car's "decision". I know not making a decision is also making a decision, but a passive decision is not on the same level as an active one.
No difference between who the people are, whether they're passengers or not, or even how many.
Sounds pretty close to the heuristic I expect real-world autonomous machines to follow, which is "do the thing that is least likely to cause a lawsuit."
Doing nothing is generally seen as more innocent than doing something, at which point I'd expect most mobile robots to freeze when something goes wrong.
I've always found these self-driving car moral questions incredibly weird. If your self-driving car is in a position where it needs to choose between killing your family or the pedestrians crossing the street, something has gone seriously wrong, and it's a false choice because there's no way you can trust your sensor inputs/interpretations enough to know that's even what's happening.
If this worked it would be a prejudice detector. I find that I'm not entirely prejudice-free, choosing the death of adults over children and animals over humans. But I don't think I'd be comfortable with a car auto-pilot choosing between humans, or failing to choose life for humans over animals.
Software and legal codes are going to collide in interesting ways here.
Well, prejudice is not the same as prejudging. The former has a very negative, often racist or discriminatory connotation (I mean discriminatory in a dehumanizing fashion). Prejudice is often defined with an element of non-rational decision making.
Prejudging, on the other hand, can be very rational, as is certainly the case with well-considered moral principles. It is a very different thing.
Sounds to me like you've defined "prejudice" as "pre-judging that I think is bad" and "pre-judge" as "pre-judging that I think is reasonable". Also it sounds like you've defined "ration(al)" as "things which I agree with"?
I really don't know where you get this impression of what I wrote. What did I write that makes you think I define "rational" as "things I agree with?" You seem to be reading too much into what I wrote to arrive at the worst possible interpretation. You are bordering on personal attacks as I take it as insulting for you to imply that I harbor that type of facile mindset. I would really like to know what gave you that impression from what was a fairly straightforward comment.
In any case, to clarify with more than 1,000 years of evidence in support of the semantic distinction:
I am defining prejudice as distinctly different from prejudging. Both the current definition and 600+ years of etymology support "prejudice" as a word connoting a frequently negative (spiteful, contemptuous) judgment that is typically not grounded in an evidence-based decision-making process. [0] Current phonemic similarities between "prejudice" and "pre-judging" are not indicative of closely matched meaning.
There is some closer etymological similarity between the verb forms of prejudice and prejudge; the verb form "to prejudice" has a different meaning than the noun, with much more of a legal sense that is still used today - "Don't prejudice the jury," for example. The noun form differs significantly in its mostly non-legal meaning.
Going back further to Latin roots [0 also] shows it to still have a negative connotation of "harm".
Pre-judging, on the other hand, does not have to take a negative form and mostly (for me at least) doesn't. It can be done on the basis of limited evidence or past experience/expertise, with the healthy practice of revising those judgements as additional evidence becomes available.
To verge into being pedantic, prejudice might be considered a pernicious form of prejudgment, but in my own mind I tend to place them in different semantic categories altogether.
One - no self-driving car should ever be in this situation. See variaga above.
Two, it's a completely useless survey because the cases are heavily biased. (I.e. if you always select 'swerve', you seem to be much more biased to kill women. That's not the most solid survey design)
Coding the choice in, or making the choice, is already at least halfway immoral, as such a choice is an exercise of power, and morality tends to end where power starts. The moral approach here is to decrease the power you wield over others - i.e., to decrease the energy of the strike and the possible damage in the fastest possible way, which usually means an emergency brake, and thereby to decrease the importance of who is going to be hit. I.e., the main moral choice isn't between hitting A or hitting B, it is between hitting at 60 mph or at 10 mph (and if you have time to choose whom to hit and to act upon that choice, you definitely have time for an emergency brake).
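The 60 mph vs. 10 mph point is worth making explicit: kinetic energy scales with the square of speed, so braking from 60 down to 10 removes roughly 97% of the energy that would otherwise be delivered to whoever is hit. A one-liner to illustrate:

```python
def energy_ratio(v1_mph, v2_mph):
    """Ratio of kinetic energy at v1 vs v2; mass cancels since KE = 1/2 * m * v^2."""
    return (v1_mph / v2_mph) ** 2

print(energy_ratio(60, 10))  # 36.0 -> an impact at 60 mph carries 36x the energy of one at 10 mph
```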
This seems like ducking the issue. If you’re going to either hit an old lady or a young man and a cat, which do you choose? Saying “I’d never get into that situation” isn’t an answer.
It’s not ducking the issue, it’s denying the framing. “If you give me the choice of hitting X or Y” of two nearly the same things I would not focus any energy on making the choice, but instead devote my energy to finding a way out of the choice until it’s too late to make any choice. If I can’t find a way out then I want to postpone things as much as possible to allow time for the situation to change.
I deny the whole premise that there are correct moral conclusions in these hairsplitting situations. It’s not the choice itself but the act of struggling with and living with the choice that matters, morally. A machine is incapable of that struggle, so it is immoral to create machines that face such choices. In effect, the self-driving car is an attempt by humans to escape responsibility for whatever damage driving might do.
It’s a philosophical evaluation, which underlies morality. On that meta level I am saying it is immoral to perform this kind of “morality” data collection. This does not get us to a better world. The nature of morality is not like making a table of logarithms that you then look up answers from.
Autonomous driving won't optimize for morality when it has to optimize for liability first and foremost - whatever affordances the law prescribes that minimize the chance of manufacturers being sued.
It's really fascinating how prevalent these discussions used to be early on in self-driving car development, vs now where the more common discussion is "how do we prevent them driving into a truck because it was painted white and looked like a skyline?"
There's an underlying assumption in these examples that we'll be in a car that can tell if someone is homeless, or that knows the net worth of the person driving it. Those scenarios are kind of absurd - why would a car be able to do that? A car that can identify in milliseconds whether someone is an executive or whether they're breaking the law should never be getting into an accident in the first place; it should be able to identify way ahead of time when someone is about to cross the street. But we come up with these really ridiculous premises that are fueled simultaneously by an overestimation of what technology is capable of, a lack of thought about the implications of the technology we're describing, and an under-awareness of the real challenges we're currently facing with those technologies.
An analogy I think I used once is that we could have the same exact conversations about whether or not Alexa should report a home burglary if the burglar is stealing to feed their family and Alexa knows the family is out that night. It's the same kind of absurd question, except we understand that Alexa is not really capable of even getting the information necessary to make those decisions, and that by the time Alexa could make these decisions we'd have much larger problems with the device to worry about. But there was real debate at one point about whether we could put self-driving cars on the roads before we solved these questions. Happily, in contrast, nobody argues that we should stop selling Alexa devices until as a society we decide whether or not justified theft ever exists. And it turns out the actual threats from devices like Alexa are not whether or not machines believe in justified theft, it turns out the actual threats are the surveillance, bugs, market forces, and direct manipulation possible from even just having an always-on listening device at all.
The danger of an Alexa device is not that it might have a different moral code than you about how to respond to philosophical situations, the danger is that Amazon might listen to a bunch of your conversations and then accidentally leak them to someone else.
So with self driving cars it's mostly the same: the correct answer to all of these questions is that it would be wildly immoral to build a car that can make these determinations in the first place, not because of the philosophical implications but because why the heck would you ever, ever build a car that by design can visually determine what race someone is or how much money someone makes? Why would that be functionality that a car has?
We have actual concerns about classism and racism in AI; they're not philosophical questions about morality, they're about implicit bias in algorithms used in sentencing and credit ratings, fueled by the very attitude these sites propagate: that any even near-future technology is capable of determining this kind of nuance about anything. The threat of algorithms is that people today believe they are objective enough and smart enough to make these determinations -- that judges/lenders look at the results of sentencing/credit algorithms and assume they're somehow objective or smart just because they came from a computer. But I remember so clearly a time when this was one of the most common debates I saw about self-driving technology, and the whole conversation feels so naive today.
It's a great example of Moravec's Paradox. We spend all our time thinking about what moral choices machines ought to make after cogitating upon the profound intricacies of the cosmos. We should be more concerned with figuring out how to teach them to successfully navigate a small part of the real world without eating glue or falling on their noggin.
Other HN commenters have pointed out abundant methodological errors in these scenarios.
I'll take another tack: I believe it is a category error to ask humans to determine the moral or "most moral" action in these scenarios. There are two sufficient conditions for this:
1. There is no present, efficient moral agent. A self-driving car, no matter how "smart" (i.e., proficient at self-driving) it is, is not a moral agent: it is not capable of obeying a self-derived legislation, nor does it have an autonomous morality. Asking the self-driving car to do the right thing is like asking the puppy-kicking machine to do the right thing.
2. The scenario is not one where an efficient moral action occurs. The efficient moral action is really a complex of actions, tracing from the decision to design a "self-driving" car in the first place to the decision to put it in the street, knowing full well that it isn't an agent in its own right. That is an immoral action, and it's the only relevant one.
As such, all we humans can do in these scenarios is grasp towards the decision we think is most "preferable," where "preferable" comes down to a bunch of confabulating sentiments (age, weight, "social value", how much gore we have to witness, etc.). But that's not a moral decision on our part: the moral decision was made long before the scenario was entered.
> The company manufacturing the car needs to make this decision when writing the software. At that time, it's a decision being made by moral agents.
I think even that is a step beyond: acquiescing to the task of writing software that will kill people as part of an agent-less decision is, itself, an immoral task.
The puppy-kicking machine analogy was supposed to be a little tongue-in-cheek, but it is appropriate: if it's bad to kick puppies, then we probably should consider not building the machine in the first place instead of trying to teach it morality.