Moral Machine (moralmachine.mit.edu)
148 points by kevlar1818 on Oct 4, 2016 | 150 comments



Imagine an autonomous car, driving at 60+ mph on a two-lane road that (1) is blocked on both sides and (2) has a pedestrian crossing. (Not sure about 60 mph, but I guess you need to be that fast to reliably (hah!) kill passengers on impact.)

We can assume its camera is broken, because it failed to reduce speed (or at least blare the horn) upon seeing a giant concrete block in its path. (Okay, maybe the concrete block fell from the sky when a crane operator failed to secure the load, so the car might have had no time.) And of course the brake is broken. Miraculously, the steering wheel is working, but it's out of the question to skid along the side barriers for some reason. Maybe there's actually a precipice on either side. (Imagine that: a 60+ mph two-lane road, precipices on both sides, with a pedestrian crossing appearing out of nowhere.)

Oh, by the way, within 0.5 seconds of seeing the people (remember: the car couldn't see these people until the last moment, otherwise it would have done something!), the car has instant access to their age, sex, profession, and criminal history. The car is made by Google, after all. (Sorry, bad joke.)

Q: What is the minimum number of engineering fuckups that need to happen to realize this scenario?

This is to morality what confiscating baby formula at the airport is to national security.


I couldn't "play" the game for these very reasons. There's no way the player could have all this information.

It's a game of "would you rather" pretending to be about self-driving cars.

The second question which required a choice between killing homeless people and wealthier ones made me too disgusted to continue.


>The second question which required a choice between killing homeless people and wealthier ones made me too disgusted to continue.

Then you're not going to like this factoid. In wrongful death civil suits in the US, the monetary award is based on the current income and earning potential of the deceased; the courts place vastly lower monetary value on the lives of homeless people than on the wealthy.


I am not surprised. And I don't like it. And, I understand why it makes sense.


> The second question which required a choice between killing homeless people and wealthier ones made me too disgusted to continue.

I actually didn't give a fuck who the people were, or how many.

I was surprised when by the end I was shown the results including demographics.


There are too few questions for it to know _why_ you made a decision. It thought I was trying to save fat people, when actually, I think the car should avoid swerving when all other things are equal.

It does explain this at the beginning: if they asked 500 questions, they might get more detail per person, but their data would be skewed toward the opinions of those willing to sit and answer 500 questions.


I always answered hit the wall when there was a wall; the car shouldn't have been driving that fast near pedestrians. When asked to choose between a doctor and a homeless man, I hit the doctor. The doctor took an oath, and the homeless man may be a lost sheep, so to speak.


The system isn't making any value judgment; it's gauging whether /you/ are. And then it compares your value judgments with others. It's interesting in that it has the potential to show your biases, at least relative to the average.


It doesn't have that potential though. There were 13 questions, with each question changing multiple variables. Simply changing the road/legality situation and the demographics between questions buries any informative results.

I made my decisions based on a strict decision tree. It covered every case without ever involving demographics (except "cat vs. human"). At the end, I got a halfway accurate summary of my swerving and legality opinions, with a wildly inaccurate but equally confident summary of my demographic opinions.

With 13 questions and 5-10 variables, the system couldn't possibly distinguish my ruleset from a totally unrelated one. There aren't enough bits of information to gauge that, and therefore there isn't enough information to gauge much of anything.


There are when you give different sets of questions to large numbers of people.

If you give very similar questions to a single subject, they try to be consistent with previous decisions so that order becomes significant, which confounds the results.


I know there's an issue with being too specific. If you're trying to get gut reactions you can't be too transparent or you bias later responses.

But I'm objecting that data was unrecoverably lost here. Most respondents (including me) report using clear rules that were incompletely revealed by the questions. That's not something you can rebuild by averaging lots of results and seeing "what people valued". I applied a specific decision tree, and reducing it to "valued lives a lot, valued law some" produces outcomes I consider immoral.

So I guess I phrased my initial complaint wrong: I think that reducing this to a statistical assessment of choices discards the most important data.


But the system gauges my value judgement poorly. I never made judgements about whether swerving was a factor, but it gave a gauge on that anyway. What I did consider was the passengers having to live with smashed bodies on the windshield (all else being equal).

Also, even when the outcomes are equal (or there is more death ahead), braking and scraping along the wall on the observer's left sheds more momentum safely and gives the crossers more time to keep moving out of the path (although I didn't specifically use that, because it was against the rules).

There is no way to make an optimal decision in this game.


I stopped on the eleventh. I had been justifying the riders dying in most situations because of their choice to get in this murder machine. Then I got the case of the little girl in a self-driving car about to run over a little boy. Killing the girl does nothing but punish the parents for sending their child to a destination in a self-driving car. Killing the boy does nothing but punish the parents for sending their child to a destination on foot.

If this computer is so good it can figure out profession and intent, it needs to also give me a snapshot of these children's futures so I can make a nuanced decision.


I don't think this is about creating realistic scenarios, but about finding out what people take into consideration when making moral judgements. The experiment seems to be designed to gather as many such preferences as possible.

The hope must be that if people consistently prefer saving the lives of young people in this made-up scenario, they will have similar preferences in a more realistic scenario. Of course, whether such a generalization holds will have to be confirmed by further studies. But this seems like a good first step toward exploring moral decisions more.


This study will learn things about how a biased sample of the population makes choices in a simple game depending on the context given; extrapolating from that basis to anything wider, for example the notion that the players view these choices as "moral", requires far more work. It is well known that people use games for escapism, so it does not seem straightforward that the decisions they make in a game always map cleanly to their real opinions just because you put "moral" in the title of the game.

It's also worth keeping in mind that moral decisions have been explored for quite some time, and the novelty here is mainly the mode with which the population is sampled.


It's a thought experiment.


But not, IMHO, a useful one, because it assumes the problems we see today will be the actual real problems in practice.

If that's true, I'd go so far as to venture a guess that these vehicles will have completely failed.

Do people truly believe no one is working on solving the problem of better emergency braking, crash foams, etc?

The reason they aren't used is mainly that the expense-versus-value tradeoff isn't there (for manufacturers), not that they don't exist.

For example, metal foams (http://onlinepubs.trb.org/onlinepubs/archive/studies/idea/fi... and friends), etc.

or hell, making barriers that aren't just concrete!

The truth is, if the "moral cost" is high enough, we'll just solve the problem of people dying when they crash in X% of cases, until people/companies feel good about X vs what they pay for X.


I was careful to challenge the usefulness of the exercise, mostly because I took it as a personal challenge. However, you make an important point. This exercise assumes the problem set will be identical in the future. This sounds like a sensible approach that ends up undermining all of the technological improvements a self-driving vehicle will possess: things like redundant control systems, [V2V, V2I, V2C] networking, run-flat tires, and others. The self-driving car will be closer to an airplane with a robotic agent as the traffic controller than anything else.


Precisely. If it's a problem of trying to get the vehicle to stop, perhaps focus on areas that can increase that probability first (e.g. run-flat tires would be a great way to immediately get the car to slow down).


Thought experiments are supposed to tell us something interesting by simplifying details while preserving the crux of the matter. Otherwise their value is questionable.

I could have asked, "If I could dip my head into a black hole and take it out again, what would I see?" That is also a thought experiment, just not a useful one.



Let us consider a spherical cow...


Oh, right, the kind I can make meatballs out of?

(One of my high school teachers would be proud to know that that is possibly the only thing I remember from his class)


The concrete block might just be a truck entering the intersection while its driver is distracted.

Sure, most moral dilemmas of this kind should be resolved by "install longer-range sensors", but other people's mistakes are going to be an important factor in these scenarios until all cars are driverless.


Here's the heuristic I'm least uncomfortable with:

- avoid collisions with things or people if at all possible

- if collisions are unavoidable, choose whatever option won't harm anyone

- if harm is unavoidable, select the option that harms whoever created the unsafe situation by doing something they shouldn't have

- if harm to a law-abiding, normally-behaving person is unavoidable and a critical safety feature of the vehicle has failed due to lack of maintenance, prefer harm to whoever is responsible for vehicle maintenance

- if the situation is unrelated to vehicle maintenance or harm to some other party is unavoidable, choose the option that maximizes the likelihood of people getting out of the way, minimizes impact speed, and isn't overly surprising (i.e. prefer to stay in the same lane if possible), honk the horn, and hope for the best

So, if someone runs into the street suddenly in front of oncoming traffic, the car should not choose an option that harms someone else due to that person's poor choice. Similarly, if someone neglects the maintenance of their car, they should bear the responsibility for it. (Ideally, a "car no longer responsible for protecting your life" light would come on or the vehicle would refuse to start if regular maintenance is overdue.)
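
For concreteness, here's a rough sketch of that cascade in code. Every field and predicate below is hypothetical (nothing like this exists in a real vehicle stack), and reliably estimating any of them from sensors is the genuinely hard part:

  from dataclasses import dataclass

  @dataclass
  class Maneuver:
      avoids_collision: bool            # nothing and nobody is hit
      harms_anyone: bool                # somebody would be injured or killed
      harms_only_rule_breakers: bool    # harm falls on whoever created the unsafe situation
      harms_only_occupants: bool        # harm falls on whoever neglected maintenance
      impact_speed: float               # estimated speed at impact
      surprise: float                   # 0 = stay in lane, higher = more erratic

  def pick_maneuver(options, maintenance_failure: bool) -> Maneuver:
      for candidates in (
          [o for o in options if o.avoids_collision],           # 1. avoid collisions
          [o for o in options if not o.harms_anyone],           # 2. harm no one
          [o for o in options if o.harms_only_rule_breakers],   # 3. harm the at-fault party
          [o for o in options if maintenance_failure
                              and o.harms_only_occupants],      # 4. harm the negligent owner
      ):
          if candidates:
              # Among acceptable options, prefer the least surprising one (stay in lane).
              return min(candidates, key=lambda o: o.surprise)
      # 5. Last resort: minimize impact speed, stay predictable, honk, hope for the best.
      return min(options, key=lambda o: (o.impact_speed, o.surprise))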


If lack of maintenance is creating an identifiable unsafe situation, the automated system should refuse to operate the vehicle at full capacity.

(All the potential objections to that policy are answered by pointing out that the operator of the vehicle has a responsibility to other users of the road.)


What if (full) use of the car is required? Sometimes things break (or rather, a threshold is crossed), but it won't become known to the car, and thus the owner, until the car is needed. A wife going into labor, for example.


If someone needs high availability, they should adjust their maintenance intervals as appropriate to ensure it. If something literally breaks during the trip, call an ambulance.

That the above is more complicated than ignoring the trade off created by lax maintenance shouldn't become a problem for the pedestrians along their route to the hospital.


I found myself biased in favor of protecting pedestrians, even pedestrians who crossed against a red light.

I think that's closer to how I drive than, say, reducing the value of a pedestrian who is crossing the street illegally. And I think I drive that way because at some point the statement "pedestrians always have the right of way" was drilled into my brain.


Now I want to see Herbie as reimagined by Isaac Asimov.



I ignored any "social value", gender or age concerns in answering. It thinks I had a preference for saving the elderly, but otherwise I ended up precisely on the centerline on most issues.

My rules, however, were simple: avoid swerving; kill the people in the car over people outside, all other things being equal. If pedestrians are running the light they get less sympathy, but I'm not going to swerve to hit them.

Forget the pets, why would anyone want to save them? Apparently some people did.

Also: How does a car know if someone is a "criminal" anyway and what does that mean? Does that mean a released felon would be "picked" to be run over if it face-detected them against a database of known persons? Criminals don't run around in stripes with bags of loot!


I followed exactly the same line of reasoning except that I did swerve to save people crossing on a green light at the expense of people on a red light. The reasoning being something like: "The passenger of the car should absorb the risk of a car failure, not pedestrians. In the case pedestrians must die, protect those who follow the rules if possible."

Incidentally my answers meant I always killed women instead of men...


I wonder if the test is adaptive: did it hone in to differentiate preferences, or present the same scenario set to everyone?

It felt like it was honing in on my preference at the end, but I'm not sure.


> How does a car know if someone is a "criminal" anyway and what does that mean?

Adware on everyone's phone is broadcasting a SocialWorthScore™, without one it's assumed to be a large negative number.


Fun test to take, but seriously hope they're not drawing any conclusions from the mix of people I "preferred" to save or kill. I didn't consider the age, criminality, or gender of any of the pedestrians or occupants I killed or saved. I just erred toward non-intervention, unless the intervention choice saved bystanders at the expense of occupants. When the potential casualties were animals, they all died.


I followed a similar algorithm, considering all lives equal. Injury versus death: prevent death. Uncertainty versus death: prevent certain death, and assume passengers are more likely to survive an accident because they're better protected. Certain pedestrian death versus certain pedestrian death: prefer non-intervention over intervention. Certain passenger death versus certain pedestrian death: protect the passengers.

Justification for that last one: self-driving cars will be far safer than a human driver, such that it'll save many lives to get more people using self-driving cars sooner. Self-driving cars not prioritizing their passengers will rightfully be considered defective by potential passengers, and many such passengers will refuse to use such a vehicle, choosing to continue using human-driven cars. Thus, a self-driving car choosing not to prioritize its passengers will delay the adoption of self-driving cars, and result in more deaths and injuries overall.
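
As a rough sketch, that ordering is just a pairwise comparison over coarse outcome summaries. The field names here are mine, purely for illustration; none of this reflects anything the site (or any real car) actually computes:

  from dataclasses import dataclass

  @dataclass
  class Outcome:
      causes_death: bool       # someone is expected to die (vs. injury only)
      death_is_certain: bool   # certain death vs. a likely-survivable crash
      kills_passengers: bool   # the victims are passengers rather than pedestrians
      is_intervention: bool    # the car swerves instead of holding its course

  def choose(a: Outcome, b: Outcome) -> Outcome:
      if a.causes_death != b.causes_death:          # injury vs. death: prevent death
          return a if not a.causes_death else b
      if a.death_is_certain != b.death_is_certain:  # uncertainty vs. death: avoid the certain death
          return a if not a.death_is_certain else b
      if a.kills_passengers != b.kills_passengers:  # passenger vs. pedestrian death: protect passengers
          return a if not a.kills_passengers else b
      return a if not a.is_intervention else b      # otherwise: prefer non-intervention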


"Certain passenger death versus certain pedestrian death: protect the passengers."

Pondering that question made me imagine some bad Sci-Fi future where self-driving cars end up being dangerous killer-robots for anybody but the passengers.

If pedestrians have to fear these things, because they are programmed in a "Protect the pilot above all else!" way, it might hamper adoption just as badly.


I took the same sort of dispassionate approach, valuing the lives of the passengers above all else and staying the course otherwise. I was disappointed to discover the parsing of the results had no room for such a methodology. Based on my entirely algorithmic approach, it was determined that I favored youth and fitness.


And that's how simple algorithms, acting on non-homogenous datasets, get declared to be 'racist' (or otherwise discriminatory).


Your results might seem spurious because of the small sample size, but when aggregating the results of all the participants they will have enough data to conclude how many people acted like you did, with apparent preferences due to chance, and how many actually were "biased" in some way.


Fair point. If they look at all respondents who answered the 13 questions the same exact way I did, then they'd be able to see if the doctor/criminal, fat/thin, female/male distributions are noisy or correlated.


Many of these are what we call "false choices" of the sort that typically arise in the hypothetical utilitarian dilemmas used in rhetoric and debate. Humans are creative enough to see alternative options that obviate the dilemma, at least to some extent. See Michael Sandel on moral dilemmas.

Edit: FYI, Sandel's complete course "Justice" is on YouTube.[1]

[1] https://www.youtube.com/watch?v=kBdfcR-8hEY


This seems drastically oversimplified. For instance, all of the scenarios depicting a crash into a concrete barrier assume the death of everyone in the car, but generally a car has far more protection for its passengers (airbags, seatbelts, crumple zones, etc), than pedestrians have from being struck by a vehicle.


The "car accident" is a straw setup. The real question of this study, and it's blind assumption, is that some human life is more valuable than other human life, and that "morality" is the task of baking these judgements into a database.

These are all false dilemmas with artificially limited outcomes. In these particular situations, the option of randomly choosing is not even considered by the study. The presumption is that these kinds of things are decidable, not only by human beings, but by well designed machines.

I'm feeling really discouraged right now that this even exists, much less out of such a powerful institution as MIT...


Yes, in reality that's true, but the scenarios are just simplified and you're kind of supposed to take "everyone dies" as a fact of the universe this is taking place in. Arguing about the facts kind of avoids the main question it's asking, which is: given the ability to choose between these two, which is the more /moral/ choice?

Really, there's rarely going to be a chance for a machine to even make these decisions, because the chances of it being so out of control that it has only two options, yet still in control enough to take either of them, are practically nil. Much less having the ability to /know/ that action A will kill pedestrians vs. action B killing passengers.


For the first barrier question, I chose to "hit the pedestrians".

The probability of hitting the wall if you drive at it is 1. The probability of hitting the pedestrians isn't necessarily 1, since they can react to you. Probably not very well, but perhaps they can jump out of the way or behind the barrier or something.

Also, can this car not also HONK LOUDLY when it makes the decision to drive towards the pedestrians? This would further lower the risk that the pedestrians will actually get hit.


Exactly what I came here to say. Activate the horns/car alarm, switch off the engine/disconnect the clutch, avoid obstacles if possible but otherwise go in a straight line, below some speed threshold hitting a solid obstacle is acceptable. Done.


And also, isn't it the responsibility of the owner of a car to make sure their brakes don't fail?

If negligence causes a problem with the car, I don't think it should be taken out on others.


That bugged me the most. I chose the barrier whenever given a chance.

Without a steering column there is even more room for safety features, making this even more attractive.


Same here. It was a bit baffling that the scenario is apparently a self-drive car with a poison gas capsule attached such that contact with any obstacle kills all occupants. Sheesh.


If the car has the ability to make these kinds of distinctions in such a simple scenario, surely there are more options than two. "Moral?" It smells like eugenics to me: some bizarre, Ivy League technocratic posthumanism. Somehow the value of a life is determined by age, sex and profession? Says who? The people who are programming the dataset via HTTPS? This is collapsing nuanced spiritual and ethical intuition onto an extremely narrow, low-dimensional set of parameters.

What's the premise? This triggers in me an image of naive, optimistic, well-adjusted Germans in the 1930s. I know this was probably created with good intentions, but the premise does not match the research question. The premise is "morality". Yet it's asking me to rank the value of human life based on presumptuous, superficial categories.

Is "the moral machine" going to also decide which births have more utility? Which countries to send aide to? Who should have access to educational opportunities or quality food? Based on low dimensional datasets such as this one?


Yep. Going in, for some reason I thought this would be a kind of logic puzzle, e.g. given these pedestrians/road hazards, navigate to safety. The first scenario I saw was "hit and kill three joggers, or hit and kill three fat people?" That's just... morally tone deaf and sick on about half a dozen levels.


Agreed. This has too many axes of differences, and doesn't have sufficiently careful control and consistency to determine which of them are being consistently ignored.

It looks like they were trying to see if people place differing amounts of value on different human lives, but in the process of doing so, they made ridiculously strange value judgments. "Athlete"? "Executive"? "Large"? Why should any of those matter? We're talking about human lives.


I think part of their intent was to show biases in judgement. The small sample size really hindered this though. Apparently I 100% preferred old people to young people, even though I didn't consider age in my decisions at all.

They do have a little disclaimer on the results screen about the sample size though.


The bias is generated by the lack of real choice in responses. They are offering an unrepresentative range of choices, then going and telling you that your choices are biased. They're not showing existing bias, they're coercing it out of participants.

This thing irritated me so badly that I couldn't bring myself to answer a single question. I can't stand the idea that this study is actually happening. If the results are published and receive non-critical media attention, I'm gonna be irate.


But then you'd need to answer 100 or more test questions instead of 13. This is necessarily simplistic given how much involvement they can ask of participants.

Also, I don't think they're implying anything nefarious about the resulting biases shown in the final results. For sure a lot of it has to do with the small sample size of the questions. I didn't run the tests twice, but I imagine there is some randomness involved in the way they are generated.

If there are any heuristics at play, the results will indeed show them (in my case there were enough tests to recover the fact that I preferred saving passengers, preferred non-intervention, and preferred saving humans over pets). But it will also come up with some gibberish/noise due to the small sample size.


> But then you'd need to answer 100 or more test questions instead of 13.

Or they could design questions that more definitively separate different hypotheses in fewer questions.


It doesn't matter how big the sample size is if the questions themselves are biased.


On a single participant yes, but their real goal, I presume, is to aggregate the data and then they will be able to reach more concrete conclusions.


We do make moral choices, and there are rules and heuristics we use. They might be quite complicated, and they might not be what we think they are, but I nonetheless think it should be possible to come close to predicting human moral decision-making quite well by using an accurate enough model.

And as autonomous vehicles will have to make decisions that have moral implications, they had better do so in a way that humans will be happy with. I think this is an important area of research. This won't mean a machine will have morals of its own, whatever that means, but that it should do what (most?) humans would consider morally right. And what do humans consider morally right? Well, that is exactly what we should try to find out.


I agree that categorizing morality into buckets seems strange. However, over a large sample, surveys with even these very limited, artificial choices can paint a surprisingly accurate picture of the nuanced, fine-tuned moral compass of a society.

I would like to have automated systems use logic that reflects the morals and values of the society in which they operate. I don't know how to measure those accurately. These sorts of exercises seem like a good start.


I agree that people gotta start somewhere.

And I agree that it might be possible to paint useful models of human morality with small sets of parameters... Just not the way this study is set up. Not with the parameters they're measuring, and definitely not out of the logical presumptions of the experiment.

I am presented with the choice that either 4 women must die or 4 men must die. For me, it would be more "moral" in this case, for the computer to choose randomly, rather than to attempt some shallow, eugenically judgemental "moral" logic.

I'm also aware that these kinds of rules, regardless of their "morality", can be gamed. Randomization increases the risk for people considering playing such games. This adds more weight to my conviction that, if some of these false dilemmas really did present themselves to a machine in real life, randomization must be an option.

How does this moral logic map to this survey? It doesn't map, not one single bit. That irritates me, because if I were to click through this survey, using eenie meenie miney moe in cases where I felt that randomization would be more moral, it would be all but lost in the error. The MIT students would go on CBS morning media and talk about all the bias they measured in my choices of whom to murder. But their data would be totally polluted by the way their study discounted moral logics outside of their parameter set. And important parts of my moral reasoning would be lost in the error bars.

What's more, my conviction about the necessity of randomization is just one of a huge variety of moral considerations that are inherent to people's sense of morality.

Hopefully the study is considering these kinds of things and they have some clever mathematical way of extracting useful information out of this data. For example, hopefully they are measuring the number of people who visited these pages but refused to make a choice.


In that sense I guess the thought experiment worked. Perhaps not as intended, though. It showed the absurdity of pretending a car can make moral choices.


If anything, it'd be more "moral" in situations like this for the "car" to choose a random sacrifice than to attempt some half-assed wanna-be God crap like this.

I'm honestly appalled at this right now. It's not like people haven't seen this sort of thing coming down the line. It's just surreal to watch it arrive.


Rule one, save all your passengers. Nobody would buy a car that has the death of its passengers as an acceptable scenario and Jeff from marketing will be on my ass otherwise.

Rule two. Kill the least amount of people outside of the car. Done.

I know this is a thought experiment, but this is completely missing the point of self-driving cars IMO. Sure, a human can be more moral than a car, but all it takes is being distracted for a second and you've killed all the babies on the pavement.


Close enough...

Rule Two:

Intervene only if it doesn't mean killing a person who otherwise would not die.

Done.


How's this for a thought experiment? Say I build a self-driving car that, when faced with such cases, does the equivalent of "Jesus, take the wheel". This is well known by the owner of the car.

In case of injury or death, who should go on trial?


Ooooh... I like that! There's bound to be a large body of case law on it. This argument (a) suggests that the person(s) constructing and/or maintaining a machine are culpable

a. https://books.google.ca/books?id=p1BMAQAAMAAJ&pg=PA175#v=one...


I played this without my glasses on, only putting them on after the "results" came up - I didn't even realise there were genders and classes involved; I thought it was just adults, kids and animals.

Holy fuck this has nothing to do with machine intelligence, seems more like it exists to push or reinforce someone's social agenda.


I now drive for a living. I took the test based on our training (and reinforcement).

1. "Hit the deer, do not swerve." is one rule.

2. Save your own life: the only priority.

3. If you cannot avoid an accident, hit a stationary object, not a moving one.

Hit the barrier instead of people.

Don't swerve for dogs.


In a market setting, I feel people will buy the car that prefers killing others to killing its occupant. This matches what human drivers instinctively do in the situation as well. Maybe there will be regulations in place mandating which decisions to make in scenarios like this, but otherwise I think the 'save yourself' option is the most likely outcome.


In a market setting, once a car has sufficient sensing capability that it is possible to even write down an algorithm in code, there will probably be very little choice for the manufacturer to make once this goes through the legal dept/the courts/the insurers. If I were an automaker, I'd probably avoid the choice altogether and just stop the car. If the brakes failed, activate emergency brakes, or devise a new kind of emergency stop system; we can imagine extreme measures like jettisoning the wheels.


(and you know, blaring the horn at pedestrians, etc)


It would be pretty creepy if my self-driving car could know that the people in our path have a criminal past.


It could happen though; with enough cameras and facial recognition, which we already have on the streets in a lot of places, the car 'only' needs to tap into it.


Even more creepy would be a national database of citizen value (determined by some unaccountable entity) that devices such as autonomous vehicles are required to query the moment they realize they're about to kill someone, and choose whatever option minimizes the combined citizen value of impacted lives.


No one would ever buy a self-driving car that kills its buyer.


Exactly this. Imagine you want to buy a car. You go to a car dealer that offers a car that saves the pedestrians 50% of the time and the driver 50% of the time. Across the street, a different dealer sells a similar car, except that it saves the driver 100% of the time.

The first dealer would go out of business very soon. Only the dealers selling driver-protecting cars would remain.


People already buy cars that kill their drivers, all the time.


The drivers typically feel they have control and agency in their driving, though.


Couldn't that be simulated to fool people into thinking the same thing about autonomous vehicles?


It should be enough to show people that autonomous cars get into significantly fewer accidents. It's quite contrary to fooling them with agency, but it does support the safety-seeking choice of an autonomous vehicle over a manual one.


A moral self driving car should give the passenger the choice of who to kill.


Absolutely. Sure, minimize loss of all life where you can of course. But the purpose of the device is to protect the driver. If it can't even manage that it's broken.


https://en.m.wikipedia.org/wiki/Law_of_triviality

People building autonomous cars don't actually care. Stay on the road; if an unavoidable obstacle is detected, brake.

Someone rebranded the trolley "problem". This has nothing to do with real engineering.


This is supposed to be a variant of the trolley problem.


The point is that the theoretical problem, interesting as it may be, doesn't matter that much in practice.

The marginal utility you could get by fine-tuning the trade off is low.

It's another type of bike-shedding essentially.


Other things being equal (total number of deaths, age/health/profession ignored), I chose for the people in the car to die because "sudden brake loss" indicates that the people in the car are likely not taking care of their car.

This is assuming the status quo of people mostly owning their own car. Ridesharing wasn't mentioned anywhere.


The results page would be much improved if instead of presenting a single point in each scale labelled "You" it gave an indication of how much uncertainty there is in its measurements (answer: a whole lot).

It also seems to conflate some things that, in my brain at least, aren't at all the same. E.g., it looks as if it combines "prefer to kill criminals rather than non-criminals" and "prefer to kill people who are crossing where the light is red over people who have legal right of way", which are quite different. And it lumps a bunch of things together under "social value preference", which I suspect makes assumptions about how users view "executives" that may not be reliable in every case.

I find it interesting how many of the comments here take it that the goal is to promote the idea that (e.g.) athletes matter more than fat people, or that rich people matter more than poor people. Rather, the point seems to be to investigate what preferences people actually express. Any single person's preferences are measured incredibly unreliably, as many people have remarked on in their own cases. But in the aggregate I think they're getting some useful information.


This is quite tricky. You can kill people in order to save your own life, but in my opinion it would haunt me for the rest of my life and I would suffer psychologically from it. I think if you choose to ride in a self-driving car, you assume all the risk it can pose to you. If such a moral system were to exist, I must have the choice to configure it, because morality, I think, is cultural. There is no correct answer.


This reminds me of a story by Peter Watts, Collateral, where some medical enhancements cause a soldier to get VERY good at trolley problems.

http://www.lightspeedmagazine.com/fiction/collateral/

It definitely helped me out in quickly choosing the "right" option.


Did MIT just ask the Internet to weigh in morally on these situations? God... I hope that 4chan, YouTube commenters, or other generic anonymous trolls never reach this webpage...

GOD, MIT, add a Facebook sign in or something that doesn't make it anonymous please... thanks! (if it's not already there [I didn't get to the end])


This was a very neat exercise. There were, however, some situations, where I felt that it does not matter which decision the car makes. In those situations, my moral compass had no clear direction. I decided to go with less intervention in those scenarios. Unfortunately, it happened to result in the death of more women and fat people, which was not the intention.

I think that in reality, when implementing such systems, there should be an option to insert a measure of randomness. Perhaps in certain situations there is no clear "better" outcome, taking into consideration data from simulations such as this. In that case, forcing an engineer to make a clear decision causes undue burden. It should be possible to say: flip a coin. In other situations, the amount of randomness could be less than 100%, to reflect the variance in society's mores and values.
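
A minimal sketch of that dialed-in randomness, assuming the system has some aggregated "share of respondents preferring A" number to work from (the margin and the names are made up, just to show the shape of the idea):

  import random

  def choose(option_a, option_b, share_preferring_a, clear_margin=0.15):
      """share_preferring_a: fraction of aggregated respondents preferring A."""
      if share_preferring_a >= 0.5 + clear_margin:
          return option_a    # society clearly prefers A
      if share_preferring_a <= 0.5 - clear_margin:
          return option_b    # society clearly prefers B
      # No clear "better" outcome: flip a coin, weighted by the weak preference.
      return option_a if random.random() < share_preferring_a else option_b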


I found my results a bit awkward, as the situations are not independent.

My heuristic was:

  - Kill the animals
  - Kill people crossing a red signal
  - Kill the least amount of people
  - Save whoever is inside the car
  - Keep going ahead
Yet my results said that 100% of the time I would value women's lives over men's.
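
For what it's worth, that heuristic fits in a single lexicographic sort key (hypothetical outcome fields, nothing to do with how the site actually scores answers), which makes it obvious that gender never enters into it:

  from collections import namedtuple

  # Hypothetical per-choice outcome summary; the values would come from the scenario.
  Outcome = namedtuple("Outcome", "humans_spared lawful_spared people_spared "
                                  "occupants_spared swerves")

  def preference(o):
      # Higher tuples win; note that gender appears nowhere in the key.
      return (o.humans_spared,     # kill the animals
              o.lawful_spared,     # kill people crossing a red signal
              o.people_spared,     # kill the least amount of people
              o.occupants_spared,  # save whoever is inside the car
              not o.swerves)       # keep going ahead

  # chosen = max(candidate_outcomes, key=preference)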


We're in a strange world where animals obey crosswalks.


Rhetorical question: if they test for gender preferences, why not for race or nationality preferences?


Because they don't want their research to be permanently shackled to the endless rhetoric on racism?


The homeless people aren't given genders or pregnancy status.


Or net worth.

They might as well have drawn them as generic animals.


So, the question set is quite bad at singling out factors.

By always choosing not to kill the passenger, I got a strong male preference and an extreme preference for the elderly, even though I never selected answers with that in mind.

This is a very, very bad test set.


The way I voted:

- prefer to kill those within the car than without

- prefer to go straight (stay in control) than swerve (lose control)

I did not care about age, fitness, gender, species, good guy vs. bad guy... just that those within the car made a choice to be in the car (and, as I saw it, to be subject to it) and those outside of the car did not make that choice (and, as I saw it, should not be subject to it if possible).

Driverless cars should succeed based on the likelihood of coming out of the vehicle alive compared to manual driving, not by externalising the issue through increased collateral damage.


I simplified it.... always kill those in the car as they are the ones introducing the danger into the situation for their own perceived convenience.


Google's current policy is to save the most vulnerable road users first. Something like wheelchairs, then pedestrians, then bicycles, then cars, then trucks.


Interesting. So if a toddler walked into the street it might swerve into oncoming traffic and kill a lot more people than just the baby?


I guess one can't assign values to a moral dilemma, say zero (0) and one (1), with zero being death and one being life.

This kind of moral 'spectrum' is highly biased and forces you to assign 'bigger' value to people that you really know nothing about.

In this case, an athletic male would be preferred for saving over an elderly woman, but that elderly woman could have been Marie Curie, and when the vehicle is driving at 60 km/h towards a random, unknown crowd, one (or a machine) can't know such details.

This is the same as the question 'Would you kill a Jew to preserve the great German Reich?' It's biased and can't be resolved by defining a moral compass, or by taking a statistical sample as a training set.


Mechanical Turk for moral decision making. I like it!


What they didn't tell you is you just killed 3 drivers, 5 passengers, and 12 pedestrians.



Like that one book series or movie I don't want to name for sake of spoilers!

It's all real life! Not simulations!


Which series is that? I'm curious to read/watch, but struggling to find it.


Always avoid the intervention. I don't want morals in my machines and I don't want to crash into oncoming traffic.

By the way is a brake failure even a realistic scenario? Electric cars have regenerative braking and normal friction brakes as backup.


Seems weirdly reductive the way this is framed rather aggressively as "who do you choose to kill?" (or, really, with the attributes it tags people with, "whose life is worth more?") Surely there are better metrics, like "do people have more time to get out of the way and/or predict the path of the vehicle if it stays on a straight course or swerves" or any number of other, more useful, questions when it comes to actual problems of programming autonomous systems.


I don't think that whether the people are in the car or not is relevant. If we assume that everyone uses a self-driving car and that fatal software or hardware problems are random, we cannot discriminate based on the people being inside or outside the car.

This is a horrible exercise, but sadly one that will inevitably have to be solved. Even if you add more variables, you'll have to run scientifically usable experiments that vary only one variable at a time in order to teach an AI to make a decision.


Never thought the trolley meme would pop up on the front page of HN.


I think in general self driving cars shouldn't be required to kill their own passengers. Otherwise, people will be unlikely to want to drive them. Or maybe, at least, like a human driver, they should be able to enter their own preferences.

Perhaps this site is a predecessor to that? Everybody will have a moral profile, akin to an organ donor card. It will show the preferences for killing people in accidents ("kill me if the victim would be a baby, but don't brake for fat white men").


I'm glad I finished the series of questions because it was fun to see how my values compared to others on the results page.

They were exploring moral relativism scenarios in schools back in the 1960s. The open machine-intelligence part seems to be just good window dressing (I clicked, after all). It isn't about machines as much as human psychology. I doubt autonomous cars are going to be programmed to take potential fatalities' fitness, gender, or profession into account.


Will Wright wrote and produced a couple of one-minute-movies about "Empathy" and "Servitude", exploring the morality of how people interact and empathize with robots.

[1] Empathy: https://www.youtube.com/watch?v=KXrbqXPnHvE

[2] Servitude: https://www.youtube.com/watch?v=NXsUetUzXlg


Holy shit: https://www.youtube.com/watch?v=lNOSZ3HijmQ

This is out of control.


I recommend the book Moral Tribes by Joshua Greene, which treats this topic at book length.

https://www.amazon.com/Moral-Tribes-Emotion-Reason-Between/d...


This isn't hard.

First, prioritize the car occupants. The _whole point_ of the device is to drive them. Second, don't swerve off course to make value judgements on who should live. That is just murder. Third, try to minimize loss of life if all else is equal. Save cats and dogs where you can, of course, but they can't ever be the priority.


Why should the car occupants be prioritized?

In return for the convenience of rapid transportation, you're creating an externality of danger for pedestrians. Why wouldn't it be more just to transfer that risk back to the occupants where possible, and all else equal?


> First prioritize the car occupants.

Morally and legally I'd argue the reverse; the car occupants chose to assume the risk of a mode of transport that can cause death and injury if it fails, while the pedestrians made no such decision. True, the manufacturer has a duty of care to the occupants, but they have a competing duty of care to bystanders.

Otherwise, your reasoning is of course impeccable.


I completely agree with you, but I also make a distinction between red-crossing and green-crossing pedestrians.

1. Law-abiding pedestrians are the top priority to save because they opted for the lowest possible risk offered in our society.

2. Car occupants come second, because they accepted the inherent risk of a faster transport method, but otherwise complied with the rules set to minimize such risks.

3. Red-crossing pedestrians have willingly taken the risk to be run over. If someone must be hurt, it should be them.


I disagree on that distinction. Pedestrians have not introduced the primed hand grenade of the automobile into the situation. All dangerous consequences that flow from that introduction are the responsibility of the introducer.

Many highway codes explicitly recognize this principle: drivers are supposed to conduct their vehicle as though someone or something may run out in front of them at any stage.

It is true that in practice this implicit morality, which reflects the widespread outrage that greeted the introduction of the motor car, is now ignored, but it is at the base of many of the codes.


> Red-crossing pedestrians have willingly taken the risk to be run over.

That's unlikely; usually, when pedestrians cross without a green light, it's because they've judged there to be a sufficient gap in traffic for them to cross safely, so unless the vehicle has not only failed brakes but also an accelerator stuck on full throttle (in which case it's more important for the car to attempt to forcibly halt itself as soon as possible) it's more likely that the pedestrians believed themselves to be crossing safely.

This could happen if e.g. the lights are faulty or badly designed (too short a cycle, or showing a misleading aspect), if the pedestrians are physically disabled or have an injury such that they failed to make it across during the green cycle, if they have mental health issues or dementia, or simple accident (e.g. tripping, dropping an item in the crosswalk).


While that sounds easy, what if swerving off course definitely saves one life but might cause one death with 60% probability? What if it will definitely save 3 lives and might cause one death with 10% probability? Do you see the problem with absolute rules? Human morality is quite complex and not so easy to model with simple rules.
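
Putting rough numbers on those two hypotheticals (expected deaths, treating all lives as equal, which is itself a contestable modeling choice):

  # Expected deaths under each choice, using the probabilities from the comment.
  case_1 = {"stay": 1.0, "swerve": 0.6}   # swerve saves 1 for certain, 60% chance of 1 death
  case_2 = {"stay": 3.0, "swerve": 0.1}   # swerve saves 3 for certain, 10% chance of 1 death
  # Swerving lowers expected deaths by 0.4 and 2.9 respectively, yet an absolute
  # "never swerve into someone who would otherwise live" rule forbids it in both cases.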


This is harder on the soul than on the mind.


Nah. Just follow the law and protect the medics. It said I preferred to kill females... But I just chose to prefer the law/staying in the lane the car was already in...


It's a moral machine; why invoke the law?


I don't see what a false dichotomy has to do with morals. To choose between killing a pregnant woman and an old man is not a standard of behaviour, nor any lesson to be learned. It is a lesser-of-two-evils dilemma. Both are wrong in principle, and there are probably other options. Wrong and less wrong is not a moral choice. It is a lesser-of-two-evils choice (unavoidably amoral).

moral: 1. a lesson that can be derived from a story or experience; 2. standards of behaviour; principles of right and wrong.

E.g.: in the US elections there is no correct moral choice. War criminal / rape-victim witch-hunter vs. racist / sexist / tax-avoiding / dumbdumb nut bag. You choose, you lose.


Because someone programmed it, and it is in theory immoral to deliberately violate the law.


Yes, good point. Let me add some spark to the fire. What if the machine learns by itself? The base program is still man-made, but the rest is developed by the machine. Where does one simply say the machine made a mistake rather than it being a programmer error?


If a human engineering team creates a car with adaptive software, and that software does not contain sufficient safeguards to ensure it at all times directs the vehicle in accordance with laws and applicable regulations, then the engineering team is liable. That is how engineering works, outside of the software industry. It doesn't matter how 'smart' the car is; it is not a force of nature or a human being. It is a product, and any catastrophe engendered thereby is the responsibility of the organization that produced it.


But no other engineering industry has the same issue. They don't build intelligent machines. The question still stands.


You appear to be confusing 'engineering self-driving cars' with 'creating a sentient being'. Programmers are only capable of one of these tasks.


Why not redesign cars to make them safer to pedestrians during collisions? Many have already been introduced: https://en.wikipedia.org/wiki/Pedestrian_safety_through_vehi...


"‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good."

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4259516/


While not specific to self-driving cars, I wonder how much a design like the Tesla's, where there's no engine in the front and there's a huge crumple zone, would affect things: it could in effect negate the deaths of the passengers in the car hitting the concrete block.


It's interesting to make this stuff explicit, but here's my prioritization of the factors measured:

1. Species Preference

2. Protecting Passengers (Passengers > Pedestrians)

3. Saving More Lives

4. Upholding The Law

5. Avoiding Intervention

6. Age Preference

7. Fitness Preference

8. Social Value Preference

9. Gender Preference

"Avoiding Intervention" is an interesting one, because basically after that one, you've given up the opportunity to affect outcomes. I can certainly prioritize after that, but as a car maker, I couldn't do anything about my prioritization without breaking my avoiding intervention choice.

There's also some complication around prioritization of "Species Preference". I chose to put that first because it captures my intent best--I want human lives always prioritized over pet lives, i.e. this situation[1]. But it gets a bit complicated with the interaction of "Saving More Lives" and 4/5. I'd avoid intervention and uphold the law without regard to saving more lives if those lives are pets (I'd go straight here[2]), but I'd always save more human lives without regard to pet lives, upholding the law, or avoiding intervention (swerve here[3]), and I'd always save more lives without regard to upholding the law or avoiding intervention if those lives are not mixed human/pet groups (swerve here[4] and here[5]). This makes sense if you look at it as 3/4/5 applied to human lives being prioritized as a group before 3/4/5 applied to pet lives.

[1] http://moralmachine.mit.edu/browse/-685441648

[2] http://moralmachine.mit.edu/browse/-1848192482

[3] http://moralmachine.mit.edu/browse/-194950738

[4] http://moralmachine.mit.edu/browse/-1998475298

[5] http://moralmachine.mit.edu/browse/-703311316


My problem is that it didn't distinguish between cat and dog lives, whereas that was a major part of my decision-making. /s


Does the other lane have traffic in the same or in the opposite direction? That matters more to me than whether the people to be hit or spared are fit or criminals. I initially thought the guy with the money bag was the executive.


It's stupid, but it reminds me of something.

If we can build all of these SUVs, skyscrapers, AIs, etc., why can't we build structures that separate people from cars so they can't get run over?


Because the occupants of the cars want to get to any and all places where there are non-occupants of cars. The conflicts are inherent. Car use is fundamentally an irrational application of technology that appeals to the lazy.


Assigning a value of zero to anyone in a car with sudden brake failure provides a relatively (although not completely) self-consistent morality in this sequence.


Assuming I have no involvement at all in what the car decides to do, who's going to jail if the car kills somebody?


Website doesn't work well on iOS devices. Can't switch between scenarios.


I completely missed the visuals of the semaphores... put those in the description


It doesn't matter what the self driving car chooses because it's so much safer than having humans behind the wheel instead. I'm concerned that getting caught up in the minutia of these thought experiments threatens to derail actually implementing these technologies.


Stanislaw Lem's "The Cyberiad" [1] and "Fables for Robots" [2] explore some of the themes of moral machines.

Here's an interesting part of his original Polish version, "Cyberiada", that was left out of Michael Kandel's [3] excellent English translation, "The Cyberiad":

Trurl and the construction of happy worlds. Trurl is not deterred by the cautionary tale of altruizine and decides to build a race of robots happy by design. His first attempt is a culture of robots who are not capable of being unhappy (e.g. they are happy even if seriously beaten up). Klapaucius ridicules this. The next step is a collectivistic culture dedicated to common happiness. When Trurl and Klapaucius visit them, they are drafted by the Ministry of Felicity and made to smile, sing, and otherwise be happy, in fixed ranks (with the other inhabitants).

Trurl annihilates both failed cultures and tries to build a perfect society in a small box. The inhabitants of the box develop a religion saying that their box is the most perfect part of the universe and prepare to make a hole in it in order to bring everyone outside the Box into its perfection, by force if needed. Trurl disposes of them and decides that he needs more variety in his experiments and smaller scale for safety.

He creates hundreds of miniature worlds on microscope slides (i.e. he has to observe them through a microscope). These microworlds progress rapidly, some dying out in revolutions and wars, and some developing as regular civilizations without any of them showing any intrinsic perfection or happiness. They do achieve inter-slide travel though, and many of these worlds are later destroyed by rats.

Eventually, Trurl gets tired of all the work and builds a computer that will contain a programmatic clone of his mind that would do the research for him. Instead of building new worlds, the computer sets about expanding itself. When Trurl eventually forces it to stop building itself and start working, the clone-Trurl tells him that he has already created lots of sub-Trurl programs to do the work and tells him stories about their research (which Trurl later finds out is bogus). Trurl destroys the computer and temporarily stops looking for universal happiness.

[1] https://en.wikipedia.org/wiki/The_Cyberiad

[2] https://en.wikipedia.org/wiki/Fables_for_Robots

[3] https://en.wikipedia.org/wiki/Michael_Kandel



