
>Would you bet a family member?

"think of the children!"

Lives at stake don't change anything here. The question is whether self-driving cars, even with the errors, are safer for people than regular drivers on average. If so, then absolutely yes everyone should bet their lives and their families'.

Thousands of people are dying every day in cars. We don't need to wait for this to be perfect. It only needs to be better.




I think people intuitively ascribe a moral dimension to whether people are involved in accidents, which is why they are more worried about dying on an airplane than in a car, even though the former is much less likely than the latter.

If I die in a plane, I have no control over the matter. If I die in a car, then at least I may have had control, and it could be chalked up to my inattentiveness, bad driving behavior, etc. The latter implies the individual has direct control over whether they live or die, and can thus ensure life by being a responsible driver. I bet the same reasoning will be applied to self-driving cars, even if they are orders of magnitude safer, just as is the case with airplanes.


The great part of self-driving cars is that, just as with major airplane accidents, there is a thorough learning process afterwards, and all flights become safer as a result on a regular basis.

The same reality applies to self-driving cars. Edge cases will happen, but with rigorous improvements to the models and trained behaviour they won't keep happening.

Flight safety has improved dramatically over the past century, and self-driving cars will be able to adapt even faster, as it's mostly just software.

Of course this implies that an accident had to happen for such an improvement to take place. But simulation is improving drastically, which helps alleviate that, and the alternative is the current situation, where the same types of accidents keep happening again and again with only the occasional improvement to car technology and safety features.


> Lives at stake don't change anything here. The question is whether self-driving cars, even with the errors, are safer for people than regular drivers on average. If so, then absolutely yes everyone should bet their lives and their families'.

Maybe logically that makes sense, but from an ethical perspective I'd argue it's much more complicated than that (e.g. the trolley problem).

In the current system, if a human is at fault, they take the blame for the accident. If we decide to move to self-driving cars that we know are far from perfect but statistically better than humans, who do we blame when an accident inevitably happens? Do we blame the manufacturer even though their system is operating within the limits they've advertised?

Or do we just say well, it's better than it used to be and it's no one's fault? When the systems become significantly better than humans, I can see this perhaps being a reasonable argument, but if it's just slightly better, I'm not sure people will be convinced.


I'm voting for the "less dead people" option. Mostly because I'm a selfish person: I've been in automobile accidents caused by lapses in human attention, and I want it to be less likely that I'll die in a car crash.


But it's not just about quantity. It's also _different_ people who will die. That radically alters things from an ethical perspective.


Yep. Medical professionals have been aware of this dilemma for millennia: many people die from an ailment if no treatment is attempted, but bad approaches to treatment can kill people who would have survived otherwise. And setting 'better average accident rates' as the threshold for self-driving vehicle software developers to be immune from the consequences of their errors is like setting 'better than witch doctors' as the threshold for making doctors immune from claims of malpractice.

"Move fast, break different things" is not the answer.


What if the average accident rates are very much better? This isn't black-and-white.


No, it certainly isn't black and white. Indeed, 'much better' is hard to even define: human drivers cover an enormous number of miles per accident, miles driven are heterogeneous in terms of risk, and there isn't necessarily a universally accepted classification of accident severity, or of whether drivers should be excluded from the sample as being 'at fault' to an unacceptable degree. Plus the AV software isn't staying the same forever: every release introduces new potential edge case bugs, and any new edge case bug which produces a fatality every hundred million miles makes that software release more lethal than human drivers, even if it's better at not denting cars whilst parking and always observes speed limits in between. I don't think every new release is getting enough billions of miles of driving with safety drivers to reassure us there's no statistically significant risk of new edge case bugs, though.
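For a rough sense of why the mileage requirement is so brutal, here's a back-of-envelope sketch in Python (the 3-fatalities-per-billion-miles baseline is the figure discussed elsewhere in this thread; the "rule of three" approximation says that if zero events are observed over N trials, the 95% upper bound on the rate is roughly 3/N):

    # Sketch only, not a rigorous analysis: how many fatality-free test miles
    # would a new software release need before we could claim, at ~95%
    # confidence, that it is no worse than an assumed human baseline of
    # ~3 fatal crashes per billion miles?
    HUMAN_FATAL_RATE = 3 / 1_000_000_000   # assumed baseline, fatal crashes per mile

    # Rule of three: with zero fatalities over N miles, the 95% upper bound
    # on the per-mile rate is about 3 / N. Solve 3 / N <= HUMAN_FATAL_RATE.
    miles_needed = 3 / HUMAN_FATAL_RATE
    print(f"Fatality-free test miles needed per release: {miles_needed:,.0f}")  # ~1 billion

And that's the best case, with zero fatalities observed during testing of that release; a single fatality pushes the requirement higher still.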

And in context, we still punish surgeons for causing fatalities through gross negligence even though overall they are many orders of magnitude better at performing surgery than the average human.


Sophistry. 'Much better' can be very clear, in terms of death or injury, or property damage, or insurance claims, or half a dozen reasonable measures.

Sure, it takes miles to determine what's better. Once automated driving is happening in millions (instead of hundreds) of cars on the road, it will take only days to measure.
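For a rough sense of scale, a quick sketch (the fleet size and per-car mileage below are illustrative assumptions, not real figures):

    # Back-of-envelope sketch: how quickly a large automated fleet accumulates miles.
    # Both numbers below are assumptions for illustration only.
    fleet_size = 1_000_000          # assumed number of automated cars on the road
    miles_per_car_per_day = 40      # assumed average daily mileage per car

    fleet_miles_per_day = fleet_size * miles_per_car_per_day   # 40 million miles/day
    days_per_billion_miles = 1_000_000_000 / fleet_miles_per_day
    print(f"Days to accumulate a billion miles: {days_per_billion_miles:.0f}")  # ~25

At that scale, common measures like claims and injuries produce usable numbers within days, and even a billion miles arrives within weeks.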


I mean, the 'half a dozen reasonable measures' are a problem, not a solution, when they're not all saying the same thing. And sure, it only takes days before we know the latest version of the software actually isn't safer than the average human. It also takes a lot of unnecessary deaths, plus the likelihood that the fix will cause other unnecessary deaths instead [maybe more, maybe less]. It's frankly sociopathic to dismiss the possibility that this might be a problem as sophistry.


Straw man? There are many phases to testing a new piece of software, short of deploying everything to the field indiscriminately.

Some of us believe (perhaps wrongly, but there it is) that the human error rate will be trivially easy to improve upon. That's not sociopathic. It would be unhelpful to dismiss this innovation (self-driving cars) because of FUD.


Some of us believe, based on the evidence, that the human fatal error rate is as low as 3 per billion miles driven in many countries, and that some people actually are better than average drivers. It might be trivially easy to improve upon the human ability not to dent cars whilst parking or to observe speed limits, but you're going to struggle to argue that improving on the fatal error rate is trivially easy for AI, or that the insurance cost of the dents matters more than the lives anyway.

People who actually want these initiatives to succeed are going to have to do better than sneering dismissal of anybody pointing out the obvious facts: complex software seldom runs for a billion hours without bugs, and successfully overfitting to simulation data in a testing process doesn't mean a new iteration of software will handle novelty it hasn't been designed to solve less fatally than humans over the billions of real-world miles we need to be sure.


People CAN drive well. But understand: in my rural state, the highway department has signs over the road showing fatalities for the year. It averages one a day. I don't think the cancer patients in the hospital die that frequently.

So you can name-call all you like and disparage dialog because you disagree or whatever. But I don't think a billion miles between accidents is anywhere close to what I see every day.

FUD isn't a position; it's got no place in this public-safety discussion.


I vote for that option as well.

So far, I have been killed exactly zero times in car crashes. All the empirical evidence tells me that there's no need to surrender control to a computer.

If I die in a crash, perhaps I'll change my mind...


Do we gain something from placing blame? Who do we blame for people who die from natural disasters? Freak occurrences?

Are deaths where blame can be placed preferable to deaths where it cannot? By what factor? Should we try to exchange one of the latter for two of the former?


"The question is whether self-driving cars, even with the errors, are safer for people than regular drivers on average. If so..."

You could say "the question is whether aircraft, even with the errors, are safer for people than regular drivers on average. If so..." - then how should we change policy towards Boeing in light of the 737 MAX fiasco? Should we then avoid any adverse action towards them and focus on encouraging more people to fly?

If we declare that something is safer, particularly before it even exists, isn't there a danger of a feedback loop that prevents it from being safer?

Regular drivers kill people, but they are also generally vulnerable to the crashes that they cause. Boeing engineers, or the programmers of self-driving AIs, don't have their interests aligned with you, the occupant of the vehicle, nearly as much.


There's still some nuance that's important. If self-driving cars always sacrifice other road users to protect the driver, they could reduce death/injury overall, but there's a question about whether or not this behavior is ethical. And if the training datasets consistently label cars but not other road users, then this bias could be baked in completely by accident.

> It only needs to be better.

A pedestrian likely has a different definition of "better" than the car driver.


I would never buy a car that's just better than an average driver: many crashes come from driving while drunk, in a fog, on ice, etc.

I would only buy a car that's as good as a fully attentive, sober, skilled and well rested driver.

Besides, there will be unforeseen issues that increase your likelihood of death: hacking, sensor failures, etc.



