Imagine an autonomous car driving at 60+ mph on a two-lane road that (1) is blocked on both sides and (2) has a pedestrian crossing. (Not sure about 60 mph, but I guess you need to be that fast to reliably (hah!) kill the passengers on impact.)
We can assume its camera is broken, because it failed to reduce speed (or at least blare the horn) upon seeing a giant concrete block in its path. (Okay, maybe the concrete block fell from the sky when a crane operator failed to secure the load, so the car might have had no time.) And of course the brakes are broken. Miraculously, the steering wheel is working, but it's out of the question to skid along the side barriers for some reason. Maybe there's actually a precipice on either side. (Imagine that: a 60+ mph two-lane road, precipices on both sides, with a pedestrian crossing appearing out of nowhere.)
Oh, by the way, within 0.5 seconds of seeing the people (remember: the car couldn't see these people until the last moment, otherwise it would have done something!), the car has instant access to their age, sex, profession, and criminal history. The car is made by Google, after all. (Sorry, bad joke.)
Q: What is the minimum number of engineering fuckups that would have to happen to realize this scenario?
This is to morality what confiscating baby formula at the airport is to national security.
>The second question which required a choice between killing homeless people and wealthier ones made me too disgusted to continue.
Then you're not going to like this factoid. In wrongful death civil suits in the US, the monetary award is based on the current income and earning potential of the deceased; the courts place vastly lower monetary value on the lives of homeless people than on those of the wealthy.
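As a purely illustrative sketch of how that plays out (a toy Python calculation with invented figures, not how any court actually computes damages):

    # Toy lost-earnings estimate of the kind wrongful-death awards are built on.
    # Every number here is invented for illustration.

    def lost_earnings(annual_income, years_remaining, discount_rate=0.03):
        """Present value of future earnings, discounted annually."""
        return sum(annual_income / (1 + discount_rate) ** t
                   for t in range(1, years_remaining + 1))

    print(round(lost_earnings(250_000, 30)))  # well-paid professional: ~4.9 million
    print(round(lost_earnings(5_000, 30)))    # almost no recorded income: ~98 thousand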
There are too few questions for it to know _why_ you made a decision. It thought I was trying to save fat people, when actually I think the car should avoid swerving when all other things are equal.
It does explain this at the beginning - if they asked 500 questions, while they might get more detail per person, their data would be skewed to the opinions of those who would sit and answer 500 questions.
I always answered "hit the wall" when there was a wall; the car shouldn't have been driving that fast near pedestrians.
When asked to choose between a doctor and a homeless man, I hit the doctor. The doctor took an oath, and the homeless man may be a lost sheep, so to speak.
The system isn't making any value judgment; it's gauging whether /you/ are. And then it compares your value judgments with others. It's interesting in that it has the potential to show your biases, at least relative to the average.
It doesn't have that potential though. There were 13 questions, with each question changing multiple variables. Simply changing the road/legality situation and the demographics between questions buries any informative results.
I made my decisions based on a strict decision tree. It covered every case without ever involving demographics (except "cat vs. human"). At the end, I got a halfway accurate summary of my swerving and legality opinions, with a wildly inaccurate but equally confident summary of my demographic opinions.
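Roughly the kind of rule set I mean, as a hypothetical Python sketch (the rule order here is made up for illustration, not my literal tree):

    # Hypothetical demographics-free decision rule for these scenarios.
    # Each option is described only by body counts, never by who the people are.

    def choose(straight, swerve):
        """straight/swerve are dicts like {'humans': 2, 'animals': 0}; returns the action."""
        # Fewer human deaths wins, regardless of age, sex, or profession.
        if straight["humans"] != swerve["humans"]:
            return "straight" if straight["humans"] < swerve["humans"] else "swerve"
        # The one "demographic" rule I allowed: humans outrank cats.
        if straight["animals"] != swerve["animals"]:
            return "straight" if straight["animals"] < swerve["animals"] else "swerve"
        # All other things equal, the car should not swerve.
        return "straight"

    print(choose({"humans": 1, "animals": 0}, {"humans": 1, "animals": 0}))  # straight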
With 13 questions and 5-10 variables, the system couldn't possibly distinguish my ruleset from a totally unrelated one. There aren't enough bits of information to gauge that, and therefore there isn't enough information to gauge much of anything.
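Back-of-the-envelope, assuming for the sake of argument strictly two-way choices and nine binary attributes per scenario:

    # 13 two-way answers carry at most 13 bits of information about a respondent.
    answers_bits = 13

    # A deterministic rule set over 9 binary attributes is a boolean function of
    # 9 inputs; there are 2**(2**9) of them, i.e. 512 bits to pin one down exactly.
    ruleset_bits = 2 ** 9

    print(answers_bits, ruleset_bits)  # 13 vs 512: nowhere near enough evidence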
There are when you give different sets of questions to large numbers of people.
If you give very similar questions to a single subject, they try to be consistent with previous decisions so that order becomes significant, which confounds the results.
I know there's an issue with being too specific. If you're trying to get gut reactions you can't be too transparent or you bias later responses.
But I'm objecting that data was unrecoverably lost here. Most respondents (including me) report using clear rules that were incompletely revealed by the questions. That's not something you can rebuild by averaging lots of results and seeing "what people valued". I applied a specific decision tree, and reducing it to "valued lives a lot, valued law some" produces outcomes I consider immoral.
So I guess I phrased my initial complaint wrong: I think that reducing this to a statistical assessment of choices discards the most important data.
But the system gauges my value judgements poorly. I never made a judgement about whether swerving was a factor, yet it gave a gauge on that. What I actually considered was the passengers having to live with witnessing smashed bodies on the windshield (all else being equal).
Also, even with no difference (or more deaths ahead), braking and scraping along the wall on the observer's left sheds more momentum safely and gives the crossers more time to keep moving forward along their momentum (although I didn't specifically use that because it was against the rules).
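A toy estimate of what scraping buys you (all numbers invented: ~60 mph initial speed, an extra 2 m/s^2 of deceleration from wall friction over the last 40 m):

    # Speed shed by scraping along a barrier before reaching the crossing.
    v0, a_scrape, distance = 27.0, 2.0, 40.0        # m/s, m/s^2, m (made-up values)
    v_impact = max(0.0, v0**2 - 2 * a_scrape * distance) ** 0.5
    print(v_impact)                                  # ~23.9 m/s: still fast, but ~22% less kinetic energy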
There is no way to make an optimal decision in this game.
I stopped on the eleventh. I had been justifying the riders dying in most situations because of their choice to get in this murder machine. Then I got the case of the little girl in a self driving car about to run over a little boy. Killing the girl does nothing but punish the parents for sending their child to a destination in a self driving car. Killing the boy does nothing but punish the parents for sending their child to a destination on foot.
If this computer is so good it can figure out profession and intent, it needs to also give me a snapshot of these children's futures so I can make a nuanced decision.
I don't think this is about creating realistic scenarios, but about finding out what people take into consideration when making moral judgements. The experiment seems to be designed to gather as many such preferences as possible.
The hope must be that if people consistently prefer saving the lives of young people in this made-up scenario, they will have similar preferences in a more realistic scenario. Of course, whether such a generalization holds will have to be confirmed by further studies. But this seems like a good first step toward exploring moral decisions further.
This study will learn things about how a biased sample of the population answers choices in a simple game depending on the context given; extrapolation from that basis to anything wider, for example the notion that the players view these choices as "moral", requires far more work. It is well known that people use games for escapism, so it does not seem straightforward that the decisions they make in a game map cleanly to their real opinions just because you put "moral" in the title of the game.
It's also worth keeping in mind that moral decisions have been explored for quite some time, and the novelty here is mainly the mode with which the population is sampled.
or hell, making barriers that aren't just concrete!
The truth is, if the "moral cost" is high enough, we'll just solve the problem of people dying when they crash in X% of cases, until people/companies feel good about X vs what they pay for X.
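In toy numbers (every figure below is invented), the kind of trade-off companies will actually run looks like this:

    # Invented figures: does halving the fatal-crash rate pay for itself?
    crashes_per_year = 1_000
    fatality_rate    = 0.02           # X: fraction of crashes that kill someone
    cost_per_death   = 10_000_000     # liability + reputation, made up
    mitigation_cost  = 150_000_000    # yearly cost of halving X, made up

    saved = crashes_per_year * (fatality_rate / 2) * cost_per_death
    print(saved > mitigation_cost)    # False: at these numbers, X stays where it is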
I was careful to challenge the usefulness of the exercise, mostly because I took it as a personal challenge. However, you make an important point. This exercise assumes the problem set will be identical in the future. This sounds like a sensible approach, but it ends up undermining all of the technological improvements a self driving vehicle will possess: things like redundant control systems, [V2V, V2I, V2C] networking, run-flat tires, and others. The self driving car will be closer to an airplane with a robotic agent as the traffic controller than anything else.
Precisely. If it's a problem of trying to get the vehicle to stop, perhaps focus first on areas that can increase that probability (e.g., running flat tires would be a great way to immediately get the car to slow down).
Thought experiments are supposed to tell us something interesting by simplifying details while preserving the crux of the matter. Otherwise their value is questionable.
I could have asked, "If I could dip my head into a black hole and take it out again, what would I see?" That is also a thought experiment, just not a useful one.
the concrete block might just be a truck whose distracted driver is entering the intersection.
sure, most moral dilemmas of this kind should be resolved by 'install longer-range sensors', but other people's mistakes are gonna be an important factor in these scenarios until all cars are driverless.