I couldn't "play" the game for these very reasons. There's no way the player could have all this information.

It's a game of "would you rather" pretending to be about self-driving cars.

The second question which required a choice between killing homeless people and wealthier ones made me too disgusted to continue.




>The second question which required a choice between killing homeless people and wealthier ones made me too disgusted to continue.

Then you're not going to like this factoid. In wrongful death civil suits in the US, the monetary award is based on the current income and earning potential of the deceased; the courts place a vastly lower monetary value on the lives of homeless people than on those of the wealthy.


I am not surprised. And I don't like it. And, I understand why it makes sense.


> The second question which required a choice between killing homeless people and wealthier ones made me too disgusted to continue.

I actually didn't give a fuck who the people were, or how many.

I was surprised when, at the end, I was shown the results, including demographics.


There are too few questions for it to know _why_ you made a decision. It thought I was trying to save fat people, when actually I think the car should avoid swerving when all other things are equal.

It does explain this at the beginning: if they asked 500 questions they might get more detail per person, but the data would be skewed toward the opinions of people willing to sit and answer 500 questions.


I always answered hit the wall when there was a wall - shouldn't have been driving that fast near pedestrians. When asked to choose between a doctor and a homeless man, I hit the doctor. The doctor took an oath, and the homeless man may be a lost sheep, so to speak.


The system isn't making any value judgment; it's gauging whether /you/ are. And then it compares your value judgments with others. It's interesting in that it has the potential to show your biases, at least relative to the average.


It doesn't have that potential though. There were 13 questions, with each question changing multiple variables. Simply changing the road/legality situation and the demographics between questions buries any informative results.

I made my decisions based on a strict decision tree. It covered every case without ever involving demographics (except "cat vs. human"). At the end, I got a halfway accurate summary of my swerving and legality opinions, with a wildly inaccurate but equally confident summary of my demographic opinions.

With 13 questions and 5-10 variables, the system couldn't possibly distinguish my ruleset from a totally unrelated one. There aren't enough bits of information to gauge that, and therefore there isn't enough information to gauge much of anything.
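To make that concrete, here's a minimal sketch of the kind of demographic-blind ruleset I mean (the rules and feature names are hypothetical, my own illustration rather than the site's actual model), plus the back-of-envelope count of how little 13 binary answers can distinguish:

    # Hypothetical demographic-blind policy, for illustration only;
    # the feature names are made up, not the Moral Machine's actual data.
    def choose(scenario):
        """Return 'stay' (keep course) or 'swerve'."""
        if scenario["deaths_if_stay"] == 0:    # staying kills no one
            return "stay"
        if scenario["deaths_if_swerve"] == 0:  # swerving kills no one
            return "swerve"
        # Prefer hitting whichever group is crossing illegally.
        if scenario["stay_side_illegal"] and not scenario["swerve_side_illegal"]:
            return "stay"
        if scenario["swerve_side_illegal"] and not scenario["stay_side_illegal"]:
            return "swerve"
        return "stay"                          # all else equal, don't swerve

    # Example scenario: more deaths ahead, but they are jaywalking.
    print(choose({"deaths_if_stay": 2, "deaths_if_swerve": 1,
                  "stay_side_illegal": True, "swerve_side_illegal": False}))
    # -> 'stay'

    # 13 forced binary choices carry at most 2**13 distinct answer patterns,
    # so many very different rulesets produce identical answers.
    print(2 ** 13)  # 8192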


There are when you give different sets of questions to large numbers of people.

If you give very similar questions to a single subject, they try to be consistent with previous decisions so that order becomes significant, which confounds the results.


I know there's an issue with being too specific. If you're trying to get gut reactions you can't be too transparent or you bias later responses.

But my objection is that data was unrecoverably lost here. Most respondents (including me) report using clear rules that were incompletely revealed by the questions. That's not something you can rebuild by averaging lots of results and seeing "what people valued". I applied a specific decision tree, and reducing it to "valued lives a lot, valued law some" produces outcomes I consider immoral.

So I guess I phrased my initial complaint wrong: I think that reducing this to a statistical assessment of choices discards the most important data.


But the system gauges my value judgments poorly. I never made judgments based on whether swerving was involved, yet it reported a preference on that. What I did consider was the passengers having to live with witnessing smashed bodies on the windshield (all else being equal).

Also, even with no difference (or more deaths ahead), braking and scraping into the wall on the left side (from the observer's view) sheds more momentum safely and gives the crossers more time to keep moving forward along their own momentum (although I didn't specifically use that reasoning, since it was against the rules).

There is no way to make an optimal decision in this game.


I stopped on the eleventh question. I had been justifying the riders dying in most situations because of their choice to get into this murder machine. Then I got the case of the little girl in a self-driving car about to run over a little boy. Killing the girl does nothing but punish the parents for sending their child to a destination in a self-driving car. Killing the boy does nothing but punish the parents for sending their child to a destination on foot.

If this computer is so good it can figure out profession and intent, it also needs to give me a snapshot of these children's futures so I can make a nuanced decision.



