Self-driving cars have to be at least as safe as human-driven ones for them to ever become accepted and thus widespread. Realistically, they need to be many times safer than human-driven cars for acceptance. It doesn't matter who's technically at fault; they cannot cause more deaths than human drivers. Everyone makes mistakes and they don't deserve to be killed by an algorithm for it.
I'm going to say something that I suspect many will quietly agree with but may not be willing to say out loud.
I'm confident that self-driving cars will eventually be safer than human-driven ones. I'm not confident that this can realistically be achieved without killing some unfortunate bystanders in the process of getting to that point. That doesn't mean we should accept these cars running amok killing people all over the place. It doesn't mean I think this specific case involving Uber is "reasonable" either. I just think, in a general sense, a realist has to try to find some balance between short-term lives lost and long-term lives saved.
The thing is that any other industry that works like this (we have to accept that some people will be hurt or die to achieve progress) - like the medical testing industry - is extremely heavily regulated and, most crucially, all participants in testing consent to being tested. Letting autonomous cars onto public roads is testing them on people who have not consented to being tested on. If I get hit by an autonomous car I really don't care that, thanks to this, Uber/Google/Waymo/whoever gets to improve their algorithm. I don't care how they want to do it - but I don't want them testing this stuff on public roads.
Over 5k people die in pedestrian/motor vehicle crashes every year. None of them consent to being killed. To the extent that you might consider "walking near paved roads" to be consent, that distinction doesn't change if there are computer-driven cars on the road.
The software and hardware have been extensively tested on test tracks; they built a whole fake road system in Pittsburgh[1].
Even drug companies can't know what they don't know and end up killing people with extensively tested drugs. Airplanes fall out of the sky because of unknown bugs that simply cannot be reasonably tested for.
Random untested people without driver's licenses and people who no longer possess the faculties to drive safely are on the road all the time. You're being hyperbolic.
Can you give an example of an industry that began with regulation?
The medical industry is heavily regulated now, but the regulations appeared after it had grown in size. The same goes for transport, construction, and anything else I can think of.
The reason it happens this way is that it's hard to set up standards and practices for something that is entirely new - when you set them up too soon, they may end up impossible to achieve, and block innovation altogether.
Funnily enough, the automobile industry is one of them. The Locomotive Acts predated the car by decades in the UK, and their regulations were extremely onerous. The Locomotives on Highways Act's relaxation of the speed limit from 4mph to 14mph and removal of the need for a human escort on foot helped, but we had functioning automobile manufacturing in the UK before then. The speed limit on UK roads was 20mph or lower until the 1930s, and Britons built a thriving domestic car manufacturing industry and even broke land speed records several times in that period.
(Similarly, and less comically in retrospect, crash testing regimes resulted in a lot of safety improvements that would have been unlikely as a result of purely commercial considerations)
Many crash testing standards were led by manufacturers. It's funny that consumer ratings groups rate the safety of vehicles, so there's a commercial interest in meeting higher-than-required standards.
> Can you give an example of an industry that began with regulation?
The issue here is that almost every industry began in times when we collectively didn't give much shit until things went wrong. If you wanted to make a watch factory using radium - sure, go ahead. It was only after people started dying that we figured out that maybe this needs regulation. But if you wanted to make a commercial product in 2018 using radium you would have to go through plenty of hoops, and rightfully so.
I don't see why autonomous cars are any different - I don't see why we're starting with the assumption that it's fine to test this without the strict regulation applied to everything else automotive. Like, if you wanted to start a startup making new tyres from a revolutionary compound, you would not be allowed to just go ahead on public roads - you would need to pass the extremely strict testing that we apply to tyres, since they are a life-or-death kind of component. But autonomous cars are allowed as-is right now? With a failure mode where the car literally doesn't react at all to a person crossing the road, despite having 4(!) sets of sensors used to detect this exact kind of obstacle? That is an engineering failure of the worst kind. The law tried to make it better by requiring a driver behind the wheel - but that's still clearly not enough.
Plus you would have a selection bias. Heavy regulation imposes much larger upfront costs, which would kill most of these nascent industries. So industries that would fall under heavy regulation now just never see the light of day.
It was actually Peter Thiel's grievance against over-regulation: it kills creativity in domains that are much more important to mankind than websites.
> Can you give an example of an industry that began with regulation?
Is version 1.0 of a product technically considered patched? I'm assuming, categorically, no..
That said, despite where we're talking about it, Uber is not primarily in the tech industry - they don't sell you software, they sell you rides - so they are a taxi business, foremost. The taxi business (artificially chauffeured or not) is actually a perfect example of an industry where regulation makes it palatable at all, compared to what v1 taxi cabs were like...
There is also the issue of whether or not we even have the means to regulate (or even come up with a formalization for the axiomatic ethical examination of near-black-box, difficult-to-analyze tech).
That's a separate concern from whether or not the consumer will be liable for mistakes made by the provider until regulation is in place.
imo, innovation is pretty unimportant compared to regulation, and far less valuable, too..
There's a lot of evidence that the current regulation of medicine and medical devices is net negative, i.e. it effectively causes more deaths than would occur were regulation lessened.
The problem is that the general public will not accept deaths caused by machines even if they are statistically safer. As an example, the German government gave in to public pressure after Fukushima and decided to shut down all nuclear power plants, even though the total death rate from coal power is A LOT higher and the event didn't even happen on their own continent.
If stuff like this continues to happen, I can see some government banning self-driving tech for the foreseeable future.
It's actually the same with terrorism, school shootings and plane crashes. They are all statistically insignificant compared to other causes of non natural death, starting with common assaults, drownings and road accidents. But they monopolise the headlines and create an irrational fear.
I wonder if there is a Simpson's paradox effect at play here, and that it is rationally sensible to be more afraid of planes than cars.
Even though in absolute terms there are more deaths by automobile, that is also because people are typically around cars more frequently than they are on planes. I.e. any given school shooting or plane ride IS more risky than any given bath or car journey, but the latter occur much more frequently.
Cars & trucks 5.75 deaths per billion miles
Commercial airlines 0.06 deaths per billion miles
So airlines are a couple orders of magnitude safer by miles traveled.
In my own personal case I put about 10,000 miles on my car each year, and I travel about 50,000 - 80,000 miles on an airplane. So airlines are still safer as I don't travel 2 orders of magnitude farther on them.
Deaths per mile is a good metric in that it is the same for everybody, so to speak, but this is also a weakness. Some people fly far further than they drive or _would_ drive, so for them we should look at per-journey metrics. This must be calculated individually because we all have different typical journeys, but it produces a more meaningful number. For example, any given day that I drive, I drive around 10 miles. When I fly, I usually fly 2600 miles. Over a billion days on which I drive, I die 57.5 times, and over a billion flying days I die 156 times - so for me, I am ~3 times as likely to die in transit on a day I fly as on a day I drive. _That's_ what I think about when I file onto the plane. I'm not comparing my odds of death to the hypothetical equivalent drive, in part because I wouldn't drive those 2600 miles, so it's an apples-to-oranges comparison. I'm comparing my odds of death today to my odds of death yesterday and my odds of death tomorrow!
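For anyone who wants to plug in their own numbers, here's a quick back-of-the-envelope version of that calculation (Python; the rates are the per-billion-mile figures quoted above, and the 10 mi / 2600 mi journey lengths are just my own typical days - swap in yours):

    # Back-of-the-envelope check, using the per-mile rates quoted above.
    car_rate = 5.75e-9     # deaths per mile driven (cars & trucks)
    plane_rate = 0.06e-9   # deaths per mile flown (commercial airlines)

    drive_miles_per_day = 10     # my typical driving day
    fly_miles_per_day = 2600     # my typical flying day

    deaths_per_1e9_driving_days = drive_miles_per_day * car_rate * 1e9   # ~57.5
    deaths_per_1e9_flying_days = fly_miles_per_day * plane_rate * 1e9    # ~156

    print(deaths_per_1e9_flying_days / deaths_per_1e9_driving_days)      # ~2.7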
I really doubt, if you dug into the flying statistics, that all those miles would turn out to be created equal. Apparently only 16% of air crashes happen during the cruise portion of a flight[1]. You probably need to consider the chance of crashing on a per-flight rather than per-mile basis (as 84% of accidents happen in the takeoff and landing phases, which are approximately the same length for all flights). A long-haul flight (which 2600 miles would be) would therefore appear to be more dangerous than it actually is.
The same page gives the odds of dying in a single airplane flight as 1 in 29.4 million. The odds of dying per mile driven are approximately 1 in 174 million. On that basis, your 10 mile journey has a risk of death of approximately 1 in 17.4 million - death is 1.7x more likely.
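Spelling that out (Python; just the two figures from that page applied to the 10-mile drive, nothing more precise):

    # Per-trip comparison: per-flight odds vs. per-mile driving odds.
    p_flight = 1 / 29.4e6          # odds of dying on a single flight
    p_per_mile_driven = 1 / 174e6  # odds of dying per mile driven

    p_10_mile_drive = 10 * p_per_mile_driven   # ~1 in 17.4 million
    print(p_10_mile_drive / p_flight)          # ~1.7: the drive is the riskier trip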
There's probably a bunch of other ways to interpret the statistics - car miles aren't created equally either, there's a difference between long haul and short haul flight fatality statistics, there's a big difference based on where the airline operates. It's (like all things in life) not very clear cut.
You bring some good information into this, which also gets more to the point I was trying to make, specific math aside - though I am comforted your math shows my flights are probably safer! Given that neither airplane journeys & miles nor automobile journeys & miles are created equally, per-mile statistics aren't particularly meaningful to an individual traveler.
First, those 2600 cruise miles are not absolutely safe, so a long-haul flight still carries a higher risk than the average flight.
And if you qualify the airline deaths based on journey type, then you also need to qualify the car trip by journey type. As he is presumably sober, not tired, wearing a seat belt, not speeding 30+ MPH over the limit, etc., his risk of dying over those 10 miles is less than half that of the average 10 miles driven.
So, his risk of death per trip is actually higher when flying.
We could attempt to attribute it to a statistical phenomenon from a freshman sociology course that is neckbeard-stroking worthy, or maybe editors just figured out that "man hit by train gone wild" will almost always sell more papers than "man knifes other man in street". Plus whatever political bias and what not is involved. Then there's often a positive feedback loop with what society ends up paying attention to.
I absolutely agree that some lives will be lost in the name of progress. I also hate the NIMBYism showing up in these threads. However, I don't care for Uber or its ambition. I'm more worried that Uber's negligence will set the whole industry back with a public backlash.
I think the most uncomfortable part that people won't bring up is the fact that cars are essentially high speed, high weight metal death machines on wheels.
Just a quick Google search and the first results give me this info:
>Nearly 1.3 million people die in road crashes each year, on average 3,287 deaths a day. An additional 20-50 million are injured or disabled. More than half of all road traffic deaths occur among young adults ages 15-44.
Of course, unmanned driving tests will happen at a very, very small fraction of the volume of the manned driving we have to pull stats from. But if the unmanned tests persist, I don't think it will take long to conjure up the amount of data needed to make a legitimate comparison and prove that unmanned driving will be markedly safer.
To be clear, I personally don't espouse these "NIMBY"ist ideas, I'm just pointing out that they exist and that they will be a real problem for widespread deployment of autonomous vehicles.
Uber, with their recklessness, is inflaming them a lot.
It can be realistically achieved, because it already has been achieved. Self-driving cars done right are already safer than humans - see Waymo's record. The woman didn't die today to somehow save people tomorrow. She died to fuel Uber's business ambitions.
Waymo's record is only "safer than humans" because they haven't had a fatality yet and thus get 0/x, anything can be "safer than humans" by that count.
I looked up what numbers I could find: according to the OECD's Road Safety Annual Report 2016, in 2014 the US averaged about one death per 150 million vehicle-kilometers (vkm), i.e. 6.7 killed per billion vkm.
According to Waymo's own page on the subject, they have driven "more than 5 million miles" (roughly 8 million vkm) since 2009[0], and they seem to be by far the lead vkm producer in the autonomous space[1]. Just to be as safe as 2014 US human driving they need to clock in another ~140 million vkm with no more than one fatality, and even then that could be debated to be in more selected conditions than human driving.
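Quick arithmetic behind that figure (Python; the rate and mileage are just the public numbers cited above):

    # How far Waymo still has to go just to match the 2014 US human rate.
    us_rate = 1 / 150e6         # fatalities per vkm (~6.7 per billion vkm)
    waymo_vkm = 5e6 * 1.609     # "more than 5 million miles" ~= 8 million vkm

    vkm_for_parity = 1 / us_rate          # 150 million vkm per fatality
    print(vkm_for_parity - waymo_vkm)     # ~142 million vkm still to drive,
                                          # with at most one fatality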
Don't get me wrong, I'm a big believer in autonomous driving and I think it would be a great, great thing, but saying it is already safer than human driving is at best misleading.
Yep, it's like the study done by the US government on the chances of accidental detonation of any of their nuclear weapons - basically the study concluded that the chances were zero, because it hadn't happened yet (the study also found that the cold-war era weapons had no locks, no redundant fail-safe mechanisms, and could be detonated by literally anyone who got their hands on one - but hey, none of them ever were, so they must be safe!)
Well, an obvious question to ask is whether you are honestly prepared for it to be your life, or your relatives' lives, that are lost in the name of progress / long-term gain?
Perhaps it is easier to speak of things like "short-term lives lost vs long-term lives saved" as long as it is other people losing their lives.
To rephrase the question - are you prepared to slightly increase the chances of injury to your family for the next 5-10 years, if it leads to drastically increased safety for them after 5-10 years, and forever after?
This is not rephrasing, this is changing the question to something completely different and wrong.
In addition, not providing any data or alternatives there is just misleading.
Ex: I would NOT accept increasing the risk 10x for the next 5 years to reduce it by 30% afterwards. I would NOT accept doubling the risk for 5 years if we could instead just increase it by 5% over 10 years and get the same result. I would NOT accept even keeping the same probability if we could instead spend 2 more years and a few billion learning more, to get a more stable technology before launch.
The point is, the "what if it is your family" appeal goes both ways. The downside of a laissez-faire policy towards self-driving cars could fall on anyone, but so does the upside.
> Realistically, they need to be many times safer than human-driven cars for acceptance.
Yes, many times yes. The promise of self-driving cars is that they are totally safe, that is, nothing will happen except in exceptional, unforeseeable circumstances.
The reason for this is that in a self-driving car, nobody is in charge, and therefore any incident (let alone a fatal accident) is immensely more complicated than the same incident with a human driver.
(It's obvious that a self-driving car with a safety driver at the wheel doesn't provide any benefit whatsoever and isn't, in fact, a self-driving car).
A self-driving car needs to:
- transport people who are unable to drive or are "irresponsible" in the legal sense (kids, seniors, drunk people, injured/sick people going to the hospital, etc.)
- drive itself with absolutely nobody onboard
The analogy with trains is flawed because trains have drivers that deal with problems when there is one. Even "autonomous" metro trains are monitored individually.
In fact, save for special conditions like a closed network on a campus, etc., I'm less and less sure self-driving cars will ever be a thing.
It will be a thing, at least in the US, where in many places public transport is close to non-existent. It could take a long time outside limited "safe" areas.
> Self-driving cars have to be at least as safe as human-driven ones for them to ever become accepted and thus widespread.
Do they? There are other technologies that started out less safe but were accepted and became safer after acceptance, further boosting their popularity. Planes and cars would be examples, as would mills in the industrial revolution, but there are undoubtedly others.
Machines cannot have an operating mode where they can kill someone, even if in the grand scheme of things they are saving lives. Prime example: the Therac radiotherapy machines - without a doubt these machines saved countless lives, but they had a weird error state where instead of treating the patient, they would kill them. At that point we didn't go "oh well, a human-operated radiotherapy machine would have killed more people because humans are inherently worse at this stuff" - we banned all of them until the problem could be found and rectified, and the solution tested and implemented. It also contributed to much stronger industry testing standards for medical devices.
I'd say an autonomous car which can completely fail to react to a person in its path is firmly in the "Therac-level failure" category - even if on average such cars crash much less than humans do. They should all be taken off the road and only allowed back once we determine, through rigorous industrial-level proof testing, that it can't happen again.
1) Yes, absolutely. As a result of Therac machines being put on hold while the flaw was investigated, I am sure some people didn't receive life-saving radiation treatment when they were scheduled to - that's still preferable to operating unsafe machinery. And you word it as if it's 3000 deaths/day or nothing, with nothing in between - you must know very well it's not as simple as that. I suspect emergency braking systems (mandatory soon in the US, already mandatory on new cars in the EU) will reduce that number close to zero long before anything with an "autonomous" badge is allowed anywhere near public roads.
2) You can do real-world testing without testing on public roads and endangering everyone around you - which is what Tesla was/is doing: all their cars were gathering data that wasn't actually used for autonomous driving yet, but Tesla could see what the car would have done had it been running in autonomous mode (rough sketch of the idea after these points).
3) No, I do not believe that. I do believe that a certain category of problems has been completely eliminated by rigorous testing, certification processes and engineering practices that prevent those problems in the first place. That's not the same as saying that there will never ever be an issue with a radiotherapy machine. To bring the topic back to self-driving cars - processes should be developed that ensure the car cannot fail to react (as it did in this case) when there is a person in its path - it should be physically impossible from the software point of view. If the hardware cannot provide clear enough data to guarantee that, then it simply shouldn't be allowed on the road.
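On point 2: if I understand that approach right, it's essentially "shadow mode" - run the autonomy stack on logged or live sensor data, never actuate anything, and just compare what it would have done against what the human actually did. A minimal sketch of the idea (Python; every name here is made up for illustration, not Tesla's actual API):

    from dataclasses import dataclass

    @dataclass
    class Frame:
        timestamp: float
        human_steering: float   # what the human driver actually commanded
        human_braking: float

    def shadow_plan(frame: Frame) -> tuple[float, float]:
        """Hypothetical autonomy stack: the (steering, braking) it *would* command.
        Placeholder body - in reality this is the full perception/planning stack."""
        return 0.0, 0.0

    def shadow_evaluate(log: list[Frame], steer_tol=0.1, brake_tol=0.2) -> list[float]:
        """Return timestamps where the planner diverges sharply from the human,
        i.e. the interesting cases to review - no actuators are ever touched."""
        divergences = []
        for frame in log:
            steer, brake = shadow_plan(frame)
            if (abs(steer - frame.human_steering) > steer_tol
                    or abs(brake - frame.human_braking) > brake_tol):
                divergences.append(frame.timestamp)
        return divergences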
Agree completely. Training self-driving cars in "virtual reality" and collecting data from human-driven cars, then processing that data through the self-driving engine, should be done extensively before they are ever allowed on public roads.
The driver assists you talk about will increase the safety of human-driven cars to the point where the self-driving safety argument becomes less and less relevant.
The idea of a fully autonomous self driving car that can pick you up from your cabin by the lake and drive you into the city is in "flying car" territory for me. The AI required to execute that journey is so far off that the society could have changed enough to make the idea obsolete.
Self-driving-approved routes are the only way I see this working. E.g. goods haulage or reducing congestion in city centers.
> Machines cannot have an operating mode where they can kill someone
Yes, they can, and, in fact, many operational machines do. You probably mean should not, but even that is clearly not the standard our society applies. Except for machines designed to kill people, we generally try to mitigate this to the extent reasonable, but lots of machines that have killed people are retained in operation in the same configuration.
> Machines cannot have an operating mode where they can kill someone.
I agree that they shouldn't, but they certainly do. Strangely, we seem fairly OK with devices that kill us when we use them wrong (e.g. cars, power tools, weapons), but we are outraged when devices that make decisions for themselves kill us. For the victim it's little different, but somehow it is.
I believe the difference is that the decision is taken by someone who has a conscious mental model of how he's going to be punished for killing someone.
I think it's more that if someone is killed by something they control, it is seen as more acceptable than them being killed by something they don't have any control over. For example, someone dying in a car crash on the way to work is not front-page news. However, if someone dies on the way to work when the train crashes, it is.
If that alone explains it, then the situation is even stranger. We are alarmed at an infrequent event but accept as normal that lots of people die because of cars. Not seeing the wood for the trees.
Good point. I'm struggling to think of a recent example, but maybe smartphones? How would you find stats on deaths due to smartphones, though? Obviously lives have been saved with them too, but the number lost due to distraction must be considerable.
Probably the best point I've seen made on this. Humans are not computers and will do illogical things like jaywalk across busy streets. Can't say I know the details of this particular situation, but many times a driver will see the telltale signs that someone is about to run across the street before it happens.
Hazard perception is actually a part of the driving test in the UK. They make candidates sit in front of a computer, watch clips of situations and click the mouse when they think there's a dangerous situation developing.
The governing system in the world at that time was not really universal democracy. It was mostly rule by the rich (who bought those cars) even in places which were ostensibly democratic.
Sure. In most of the Western world, Universal Suffrage came only after World War 1. Till then there were generally barriers of class, race or gender on who could vote.
That's not "rule by the rich". The middle class and even much of the poor still voted. You just had to own land, which isn't exactly a high barrier. The point was voting should be one vote per household, by the head of the family.
The "human safety driver" concept is inherently flawed and is incompatible with the way our minds are wired. You cannot have a person sitting still in bored inaction for hundreds of hours in a row and expect them to remain attentive enough for a split second reaction in an emergency situation.
So yes, I am discounting the safety driver entirely. Autonomous cars need to drive themselves safely. That person was in the driver's seat as a scapegoat in reserve.
1) On average the Uber car required an intervention every 20 km or so. That's every 20-30 min. If it happens that often, the safety driver has to be doing something regularly throughout the day, and not looking at the road that often is dangerous. Uber should have been monitoring for this as well, just like they do with train drivers.
2) Train drivers in Europe are specifically hired to do just that. The ones in the Netherlands are tested yearly. They're put on suspension when they run a red signal. Every so often they have to go through special attention-span testing and training. Uber obviously didn't do any of that.
The human eye's dynamic range in the dark is far superior to that of the poor-quality video camera footage that was released.
The woman had crossed two lanes of the road before coming in front of the car - i.e. a fairly unrestricted range of view. A human driver paying attention would almost certainly have seen her and stopped, or at least slowed down to a level where this would not have been a fatality.
I regularly drive 50mph on unlit roads in a rural setting where a deer could walk out at anytime -- and I hope that I can swerve and brake to miss it -- or at least slow down.
Sure, but aren't they already if you look at the accident per total miles driven ratio?
Of course, there will be accidents and people will die. People die every day in car accidents.
The human driver fatality rate is roughly 1 per 100M miles. There has been 1 fatality with less than 20M self-driving miles across all companies. So, if you group all the companies together, they are still more dangerous than human drivers.
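Roughly (Python; these are just the round numbers above, and the sample is tiny, so the uncertainty is huge):

    # Fatality rate comparison, using the rough figures above.
    human_rate = 1 / 100e6      # ~1 fatality per 100M human-driven miles
    av_miles = 20e6             # < 20M autonomous miles across all companies
    av_fatalities = 1

    print(av_fatalities / av_miles)   # 5e-08 per mile, ~5x the human rate so far
    print(human_rate)                 # 1e-08 per mile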
But frankly, I think grouping Uber together with more responsible companies does the whole industry a disservice.
I don't think "deserves" belongs in the discussion. It's an unpopular opinion, but I believe the pedestrian caused this situation by putting themselves in a dangerous position and not paying attention. That can get you killed in many situations, regardless of whether you deserve it or not.
I agree mostly with what you are saying, just not the undertone that this unfortunate accident is somehow evidence of AVs "causing" more deaths than human drivers.
People put themselves in dangerous situations all the time. I don't believe for one second that you've never made a traffic mistake in your life, either as a pedestrian or as a driver. But you've been fortunate to be surrounded by others who'll pick up the slack for your mistake and will yield even if they don't have to if it will prevent an accident. The traffic rules don't matter nearly as much as the outcome of people not being killed. Traffic rules are just a means to an end.
So yes, it's quite possible for autonomous vehicles to cause vastly more deaths than human drivers even while being slaves to the rules. Hell, you could do it too as a human driver if you removed your conscience. Just don't hit the brakes the next time you have the right of way and a jaywalker steps out in front of you. You'll easily kill someone within a week and have it be their "fault".
You're onto an important theme: so much of driving is really about group communication--everything from signaling to anticipating behavior based on your own past actions, to waving people through intersections or slowing down to let someone merge or speeding up to make your intent more clear, or hanging back when the traffic gets stupid, or avoiding drivers who seem irrational.
That's why I think the real self-driving car problem is a very special case of the Turing Test--one that might be more difficult to win or solve.
Most higher end cars already have assistant systems that prevent you from running over people or into unexpected obstacles. The problem here seems to be that Uber disabled some of those systems that the Volvo base car normally has.
> I believe the pedestrian caused this situation by putting themselves in a dangerous situation and not paying attention
That's the thing - every driver has faced those situations hundreds, thousands of times. Odds are very good that a failure on one of those occasions would have carried a significant penalty for any driver, even if it wasn't their fault.
Others are making good points but also, if the system can't be trusted to detect a pedestrian in these circumstances we can't be sure this bug wouldn't allow it to hit a stalled car or large piece of debris in the road either. Even if the car had just hit the bicycle and missed the woman this would be an utterly unacceptable level of performance for an AV to display after completing closed track testing.
> It's an unpopular opinion, but I believe the pedestrian caused this situation by putting themselves in a dangerous situation and not paying attention.
That doesn't mean that the Uber system wasn't dangerously defective, or that a fit-for-service self-driving system wouldn't have noticed in time to react (which may not have been sufficient to avert a collision, and may not have prevented a fatality, but given the speeds involved would have significantly reduced the risk of fatality).
That the pedestrian should not have entered the roadway where and when they did may relieve Uber of legal liability for this collision, but it doesn't stop the failure to react at all from being evidence of a critical safety flaw.
And, had the same thing happened at an equally poorly lit, not specially marked, intersection in many jurisdictions (including California, not sure about Arizona), Uber would be legally at fault because of pedestrian right of way. If Uber's system can't see and react to people in dark clothes at night, it's not even remotely suitable for use, at least at night.