
Maybe. But we needn't compare driverless cars against some hypothetical morally perfect agent. We compare them against humans. Some humans might instinctively sacrifice themselves to save a baby carriage or an old lady. Many more will instinctively act to preserve themselves regardless of the consequences. Even more will panic and make bad decisions that cause needless deaths that a self-driving car would have avoided.

I had a cousin who died because he swerved to avoid a squirrel and drove off a cliff. I don't think anyone would argue that we should program cars to prioritize squirrels over people, but humans will end up doing that sometimes.

The number of lives saved by the overall reduction in accidents with competent self-driving cars is going to be far, far greater than any lost in the one-in-a-billion moral-dilemma situations they might encounter, where they might make choices that are not "optimal" as judged by some human looking at the situation after the fact, with the time to think coolly and composedly about it.

There are also practical limitations to consider. Self-driving cars have enough trouble identifying what's ON the road, never mind what's off to the side. Is it a cliff, a crowded sidewalk, a crash barrier, a thin wall with lots of people behind it, an empty field, trees, a ditch?


You seem to be addressing the question of when self-driving cars should be used in place of manually driven ones, whereas the website seems to be asking how we should program the self-driving cars (where I would say we should be trying to make them close to a hypothetically perfect moral agent).
