Uber will not re-apply for self-driving car permit in California (techcrunch.com)
253 points by ExcelSaga on March 28, 2018 | 354 comments



I enjoy hating on Uber as much as the next person, but this honestly just sounds like a shrewd focusing of effort on Uber's part in light of recent events. Their permit was expiring, and if Uber wanted to renew there was going to be a load of paperwork, along with probably lengthy meetings with authorities explaining what went wrong and why it won't happen again.

I see no reason (in this article, at least) to suspect that Uber won't re-apply at a later date when they have a better handle on what went wrong in Tempe.


Well it's not like they made a booboo, and some bugs caused some inconvenience to a few or many people. Someone died.


If Uber is not found legally at fault which seems likely given the circumstances and based on opinions from people with legal expertise in these cases, I highly doubt they will let themselves get derailed completely by this. Too much money is at stake.

Further, we don't stop using trains, or buses when a pedestrian makes an error and gets killed because they "could have not been hit, maybe". For those who have paid attention to those other accidents (even though local transit isn't as sexy to hate on as Uber, and they didn't involve AI), it can actually be quite shocking how little of a blip they cause in the transportation system.


Self-driving cars have to be at least as safe as human-driven ones for them to ever become accepted and thus widespread. Realistically, they need to be many times safer than human-driven cars for acceptance. It doesn't matter who's technically at fault; they cannot cause more deaths than human drivers. Everyone makes mistakes and they don't deserve to be killed by an algorithm for it.


I'm going to say something that I suspect many will quietly agree with but may not be willing to say out loud.

I'm confident that self-driving cars will eventually be safer than human driven ones. I'm not confident that this can be realistically achieved without killing some unfortunate bystanders in the process of getting to that point. That doesn't mean we should accept these cars running amok killing people all over the place. It doesn't mean I think this specific case involving Uber is "reasonable" either. I just think, in a general sense, a realist has to be able to try to find some balance between short term lives lost and long term lives saved.


The thing is that any other industry that works like this (we have to accept some people will be hurt/die to achieve progress) - like the medical testing industry - is extremely heavily regulated and, most crucially, all participants in testing consent to being tested. Letting autonomous cars on public roads is testing them on people who have not consented to being tested on. If I get hit by an autonomous car, I really don't care that thanks to this Uber/Google/Waymo/whoever gets to improve their algorithm. I don't care how they want to do it - but I don't want them testing this stuff on public roads.


Over 5k people die in pedestrian/motor vehicle crashes every year. None of them are consenting to being killed. To the extent that you might consider "walking near paved roads" consenting, that distinction doesn't change if there are computer-driven cars on the road.


It is different, because they are testing untested algorithms that have unknown risks and failure modes.


The software and hardware have been extensively tested on test tracks; they built a whole fake road system in Pittsburgh [1].

Even drug companies can't know what they don't know and end up killing people with extensively tested drugs. Airplanes fall out of the sky because of unknown bugs that simply cannot be reasonably tested for.

Human progress relies on taking mitigated risks.

[1] http://www.businessinsider.com/ubers-fake-city-pittsburgh-se...


Random untested people without driver's licenses and people who no longer possess the faculties to drive safely are on the road all the time. You're being hyperbolic.


The difference is obviously that those people break the law, so it's already regulated against.


Can you give an example of an industry that began with regulation?

The medical industry is heavily regulated now, but the regulations appeared after it had grown in size. The same goes for transport, construction, and anything else I can think of.

The reason it happens this way is that it's hard to set up standards and practices for something that is entirely new - when you set them up too soon, they may end up impossible to achieve and block innovation altogether.


Funnily enough, the automobile industry is one of them. The Locomotives on Highways Act predated the car by decades in the UK, and its regulations were extremely onerous. The relaxation of speed limits from 4mph to 14mph and the removal of the need for a human escort on foot helped, but we had functioning automobile manufacturing in the UK before then. The speed limit on UK roads was 20mph or lower until the 1930s, and Britons built a thriving domestic car manufacturing industry and even broke land speed records several times in that period.

(Similarly, and less comically in retrospect, crash testing regimes resulted in a lot of safety improvements that would have been unlikely as a result of purely commercial considerations)


The horse and railway industries lobbied for that act of course.

The 14mph limit and the removal of the red flag escort only applied to machines under 3 tonnes, in 1896; there were probably only a few hundred machines on the road then anyway.


Many crash testing standards were led by manufacturers. Funnily enough, consumer ratings groups rate the safety of vehicles, so there's a commercial interest in meeting higher-than-required standards.


>>Can you give an example of an industry that began with regulation?

The issue here is that almost every industry began in times when we collectively didn't give much shit until things went wrong. If you wanted to make a watch factory using radium - sure, go ahead. It was only after people started dying that we figured out that maybe this needs regulation. But if you wanted to make a commercial product in 2018 using radium you would have to go through plenty of hoops, and rightfully so.

I don't see why autonomous cars are any different - I don't see why we're starting with the assumption that it's fine to test this without the strict regulation applied to everything else automotive. Like, if a startup wanted to make new tyres using a revolutionary compound, it would not be allowed to just put them on public roads - they would need to pass the extremely strict testing that we apply to tyres, since they are a life-or-death kind of component. But autonomous cars are allowed as-is right now? With a failure mode where the car literally doesn't react at all to a person crossing the road, despite having 4(!) sets of sensors used to detect this exact kind of obstacle? That is an engineering failure of the worst kind. The law tried to make it better by requiring a driver behind the wheel - but that's still clearly not enough.


Plus you would have a selection bias. Heavy regulation induces much larger upfront costs, and that would kill most of these nascent industries. So industries that would fall under heavy regulation now just never see the light of day.

It was actually Peter Thiel's gripe against over-regulation: it kills creativity in domains that are much more important to mankind than websites.


> Can you give an example of an industry that began with regulation?

Is version 1.0 of a product technically considered patched? I'm assuming, categorically, no..

That said, despite where we're talking about it, Uber is not primarily in the tech industry - they don't sell you software, they sell you rides - so they are a taxi business, foremost. The taxi business (artificially chauffeured or not) is actually a perfect example of an industry where regulation makes it palatable at all, compared to what v1 taxi cabs were like...

There is also the issue of whether or not we even have the means to regulate (or even come up with a formalization for the axiomatic ethical examination of near-black-box, hard-to-analyze tech).

That's a separate concern from whether or not the consumer will be liable for mistakes made by the provider until regulation is in place.

imo, innovation is pretty unimportant compared to regulation, and far less valuable, too..


There's a lot of evidence that the current regulation of medicine and medical devices is a net negative, i.e. it effectively causes more deaths than would occur if regulation were loosened.


The problem is that the general public will not accept deaths caused by machines even if they are statistically safer. As an example, the German government gave in to public pressure after Fukushima and decided to shut down all nuclear power plants, even though the total death rate of coal power is A LOT higher and the event did not even happen on their own continent.

If stuff like this continues to happen, I can see some government banning self-driving tech for the foreseeable future.


It's actually the same with terrorism, school shootings and plane crashes. They are all statistically insignificant compared to other causes of non-natural death, starting with common assaults, drownings and road accidents. But they monopolise the headlines and create an irrational fear.


I wonder if there is a Simpson's paradox effect at play here, and that it is rationally sensible to be more afraid of planes than cars.

Even though in absolute terms there are more deaths by automobile, that is also because people are typically around cars more frequently than they are on planes. I.e. any given school shooting or plane ride IS more risky than any given bath or car journey, but the latter occur much more frequently.

Not sure exactly what the numbers are.


Some statistics [1] for travel in the US:

  Cars & trucks 5.75 deaths per billion miles
  Commercial airlines 0.06 deaths per billion miles
So airlines are a couple orders of magnitude safer by miles traveled.

In my own personal case I put about 10,000 miles on my car each year, and I travel about 50,000 - 80,000 miles on an airplane. So airlines are still safer as I don't travel 2 orders of magnitude farther on them.

[1] http://money.cnn.com/2015/05/13/news/economy/train-plane-car...
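
A rough sanity check of those figures (the per-mile rates are from [1] above; my annual mileage numbers are of course approximations):

  car_rate = 5.75e-9   # deaths per mile, cars & trucks
  air_rate = 0.06e-9   # deaths per mile, commercial airlines

  print(10_000 * car_rate)   # ~5.8e-5 expected deaths/year from 10,000 miles driven
  print(80_000 * air_rate)   # ~4.8e-6 expected deaths/year from 80,000 miles flown

So even at the high end of my flying, the expected risk from driving still comes out roughly an order of magnitude larger.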


Deaths per mile is a good metric in that it is the same for everybody, so to speak, but this is also a weakness. Some people fly far further than they drive or _would_ drive, so for them we should look at per-journey metrics. This must be calculated individually because we all have different typical journeys, but it produces a more meaningful number. For example, any given day that I drive, I drive around 10 miles. When I fly, I usually fly 2600 miles. Over a billion days on which I drive, I die 57.5 times, and over a billion flying days I die 156 times - so for me, I am ~3 times as likely to die in transit on a day I fly as on a day I drive. _That's_ what I think about when I file onto the plane. I'm not comparing my odds of death to the hypothetical equivalent drive, in part because I wouldn't drive that 2600 miles so it's an apples to oranges comparison. I'm comparing my odds of death today to my odds of death yesterday and my odds of death tomorrow!
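
For what it's worth, those two numbers are just the per-mile rates above scaled by journey length (a sketch, not new data):

  drive_per_day = 10 * 5.75     # deaths per billion 10-mile driving days: 57.5
  fly_per_day   = 2600 * 0.06   # deaths per billion 2600-mile flying days: 156.0
  print(fly_per_day / drive_per_day)   # ~2.7, i.e. roughly 3x riskier on a day I fly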


I really doubt if you dug into the flying statistics that all those miles are created equal. Apparently only 16% of air crashes happen during the cruise portion of a flight[1]. You probably need to consider the chance of crashing on a per-flight rather than per-mile basis (as 84% of accidents happen in the takeoff and landing phases, which are approximately the same length for all flights). A long haul flight (which 2600 miles would be) would therefore appear to be more dangerous than it actually is.

[1]: https://www.statisticbrain.com/airplane-crash-statistics/

The same page gives the odds of dying in a single airplane flight as 1 in 29.4 million. The odds of dying per mile driven are approximately 1 in 174 million. On that basis, your 10 mile journey has a risk of death of approximately 1 in 17.4 million - so death on the drive is about 1.7x more likely than on the flight.
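
Spelling that comparison out (the per-flight and per-mile odds are the figures cited above, so treat this as a sketch rather than authoritative data):

  p_flight   = 1 / 29.4e6      # odds of dying on a single flight
  p_per_mile = 1 / 174e6       # odds of dying per mile driven
  p_drive_10 = 10 * p_per_mile           # ~1 in 17.4 million for a 10-mile drive
  print(p_drive_10 / p_flight)           # ~1.7: the short drive is ~1.7x riskier than one flight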

There's probably a bunch of other ways to interpret the statistics - car miles aren't created equally either, there's a difference between long haul and short haul flight fatality statistics, there's a big difference based on where the airline operates. It's (like all things in life) not very clear cut.


You bring some good information into this, which also gets more to the point I was trying to make, specific math aside - though I am comforted your math shows my flights are probably safer! Given that neither airplane journeys & miles nor automobile journeys & miles are created equally, per-mile statistics aren't particularly meaningful to an individual traveler.


First, those 2600 miles are not absolutely safe, so a flight that long carries a higher risk than an average trip.

And if you qualify the airline deaths based on journey type then you also need to qualify the car trip by journey type. As he is presumably sober, not tired, wearing a seat belt, not going 30+ MPH over the speed limit, etc., his risk of dying over 10 miles is less than half that of the average 10 miles driven.

So, his risk of death per trip is actually higher when flying.
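
Putting rough numbers on that, using the figures from the comments above and the assumption that a careful driver at least halves the per-mile risk:

  p_flight   = 1 / 29.4e6          # per-flight odds cited above
  p_drive_10 = 10 / 174e6          # average risk of a 10-mile drive
  p_careful  = p_drive_10 / 2      # "less than half" the average, per the parent
  print(p_flight / p_careful)      # ~1.2: on these assumptions the flight is the riskier trip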


deaths per billion miles is the metric we should use?


We could attempt to attribute it to a statistical phenomenon from a freshman sociology course that is neckbeard-stroking worthy, or maybe editors just figured out that "man hit by train gone wild" will almost always sell more papers than "man knifes other man in street". Plus whatever political bias and what not is involved. Then there's often a positive feedback loop with what society ends up paying attention to.


I absolutely agree that some lives will be lost in the name of progress. I also hate the NIMBYism showing up in these threads. However, I don't care for Uber or its ambition. I'm more worried that Uber's negligence will set the whole industry back with a public backlash.


I think the most uncomfortable part that people won't bring up is the fact that cars are essentially high speed, high weight metal death machines on wheels.

Just a quick Google search and the first results give me this info:

>Nearly 1.3 million people die in road crashes each year, on average 3,287 deaths a day. An additional 20-50 million are injured or disabled. More than half of all road traffic deaths occur among young adults ages 15-44.

From this page: http://asirt.org/initiatives/informing-road-users/road-safet...

Of course, unmanned driving tests happen at a very, very small fraction of the volume of manned driving that we pull stats from, but I don't think it will take long, if the unmanned tests persist, to gather the amount of data needed to make a legitimate comparison and prove that unmanned driving is markedly safer.


>Over 90% of all road fatalities occur in low and middle-income countries, which have less than half of the world's vehicles.

The only person who died in a road fatality in Phoenix that day was killed by a self-driving car.


And how about in the rest of Arizona and/or the rest of the week?


To be clear, I personally don't espouse these "NIMBY"ist ideas, I'm just pointing out that they exist and that they will be a real problem for widespread deployment of autonomous vehicles.

Uber, with their recklessness, is inflaming them a lot.


It can be realistically achieved, because it already has been achieved. Self-driving cars done right are already safer than humans - see Waymo's record. The woman didn't die today to somehow save people tomorrow. She died to fuel Uber's business ambitions.


Waymo's record is only "safer than humans" because they haven't had a fatality yet and thus get 0/x; anything can be "safer than humans" by that count.

I looked up what numbers I could find: according to the OECD's Road Safety Annual Report 2016, in 2014 the US averaged about one death per 150 million vehicle-kilometers (vkm) (6.7 killed per billion vkm).

According to Waymo's own page on the subject, they have driven "more than 5 million miles" (~8 million vkm) since 2009 [0], and they seem to be by far the leading vkm producer in the autonomous space [1]. Just to be as safe as 2014 US human driving they need to clock another ~140 million vkm with no more than one fatality, and even then that could be debated to be in more selected conditions than human driving.

Don't get me wrong, I'm a big believer in autonomous driving and I think it would be a great, great thing, but saying it is already safer than human driving is at best misleading.

[0] https://waymo.com/ontheroad/

[1] https://www.theverge.com/2018/1/31/16956902/california-dmv-s...
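
For anyone checking the arithmetic behind that ~140 million vkm figure (the mileage and fatality rate are the ones cited above; the rest is unit conversion):

  us_rate   = 1 / 150e6       # fatalities per vehicle-km, US 2014 (OECD figure)
  waymo_vkm = 5e6 * 1.609     # "more than 5 million miles" is roughly 8 million vkm
  print(waymo_vkm, 1 / us_rate - waymo_vkm)   # ~8.0e6 vkm driven, ~1.4e8 vkm still to go at <=1 fatality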


Yep, it's like the study done by the US government on the chances of accidental detonation of any of their nuclear weapons - basically the study concluded that the chances are zero, because it hasn't happened yet (the study also found that the cold-war era weapons had no locks, no redundant fail-safe mechanisms, and could be detonated by literally anyone who got their hands on one - but hey, none of them were, so they must be safe!)


Well, an obvious question to ask is whether you are honestly prepared for it to be your life, or your relatives' lives, that are lost in the name of progress / long-term gain?

Perhaps it is easier to speak of things like "short term lives lost vs long term lives saved" as long as it is other people losing lives.


To rephrase the question - are you prepared to slightly increase the chances of injury to your family for the next 5-10 years, if it leads to drastically increased safety after those 5-10 years, and forever after?

You have to be quite shortsighted to say "no".


This is not rephrasing, this is changing the question to something completely different and wrong.

In addition, not providing any data or alternatives is just misleading.

Ex: I would NOT accept increasing the risk by 10 in the next 5 years to reduce it by 30% afterwards. I would NOT accept increasing the risk by 2 for 5 years if we could instead just increase it by 5% for 10 years and get the same result. I would NOT accept even keeping the same probability if we could instead take 2 more years and a few billion to learn more and get a more stable technology before launch.


The point is, the "what if it is your family" appeal goes both ways. The downside of a laissez-faire policy towards self-driving cars could fall on anyone, but so does the upside.


That’s basically inviting the reader to discard their rationality and answer emotionally.


Clearly it cannot be achieved without killing someone since it has already happened now :-(


> Realistically, they need to be many times safer than human-driven cars for acceptance.

Yes, many times yes. The promise of self-driving cars is that they are totally safe, that is, nothing will happen except in exceptional, unforeseeable circumstances.

The reason for this is that, in a self-driving car, nobody's in charge, and therefore any incident (let alone a fatal accident) is immensely more complicated than the same incident with a human driver.

(It's obvious that a self-driving car with a safety driver at the wheel doesn't provide any benefit whatsoever and isn't, in fact, a self-driving car).

A self-driving car needs to:

- transport people who are unable to drive or are "irresponsible" in the legal sense (kids, seniors, drunk people, injured/sick people going to the hospital, etc.)

- drive itself with absolutely nobody onboard

The analogy with trains is flawed because trains have drivers that deal with problems when there is one. Even "autonomous" metro trains are monitored individually.

In fact, save for special conditions like a closed network on a campus, etc., I'm less and less sure self-driving cars will ever be a thing.


It will be a thing, at least in the US, where in many places public transport is close to non-existent. It could take a long time outside limited "safe" areas.


> Self-driving cars have to be at least as safe as human-driven ones for them to ever become accepted and thus widespread.

Do they? There are other technologies that started out less safe but were accepted and became safer after acceptance, further boosting their popularity. Planes and cars would be examples, as would mills in the industrial revolution, but there are undoubtedly others.


Machines cannot have an operating mode where they can kill someone, even if in the grand scheme of things they are saving lives. Prime example: the Therac radiotherapy machines - it's without a doubt that these machines saved countless lives, but they had a weird error state where instead of treating the patient, they would kill them. At that point we didn't go "oh well, a human-operated radiotherapy machine would have killed more people because humans are inherently worse at this stuff" - we banned all of them until the problem could be found and rectified and the solution tested and implemented. It also contributed to much stronger industry testing standards for medical devices.

I'd say an autonomous car which can completely fail to react to a person in its path is firmly in the "Therac level of failure" category - even if on average such cars crash much less than humans do. They should all be taken off the road and only allowed back once we determine, through rigorous industrial-level proof testing, that it can't happen again.


So, you're saying that unless we can achieve zero failure rate of automated cars, we should stick with 3000 fatalities caused by humans every day?

You also believe that it's possible to do lab tests so good that real-world testing will not be necessary.

Finally, you believe that the radiotherapy machines are 100% safe right now, and will never have a life-threatening problem again. Right?


1) Yes, absolutely. As a result of Therac machines being put on hold while the flaw was investigated, I am sure some people didn't receive life-saving radiation treatment when they were scheduled to - that's preferable to operating unsafe machinery. And you word it as if it's 3000 deaths/day or nothing and there is nothing in between - you must know very well it's not as simple as this. I suspect emergency braking systems (mandatory soon in the US, already mandatory on new cars in the EU) will reduce that number close to zero long before anything with an "autonomous" badge is allowed anywhere near public roads.

2) You can do real-world testing without testing on public roads and endangering everyone around you - which is what Tesla was/is doing - all their cars were gathering data which wasn't actually used for autonomous driving yet, but they were able to see what the car would have done had it been running in autonomous mode.

3) No, I do not believe that. I do believe that a certain category of problems has been completely eliminated by rigorous testing, certification processes and engineering practices that prevent those problems in the first place. That's not the same as saying that there will never ever be an issue with a radiotherapy machine. To bring the topic back to self-driving cars - processes should be developed to ensure that the car cannot fail to react (as it did in this case) when there is a person in its path; it should be physically impossible from the software point of view. If the hardware cannot provide clear enough data to guarantee that, then it simply shouldn't be allowed on the road.


Agree completely. Training self-driving cars in "virtual reality" and collecting data from human-driven cars, then processing that data through the self-driving engine, should be done extensively before they are ever allowed on public roads.

The driver assists you talk about will increase the safety of human-driven cars to the point where the self-driving safety argument becomes less and less relevant.

The idea of a fully autonomous self-driving car that can pick you up from your cabin by the lake and drive you into the city is in "flying car" territory for me. The AI required to execute that journey is so far off that society could have changed enough by then to make the idea obsolete.

Self-driving-approved routes are the only way I see this working, e.g. goods haulage or reducing congestion in city centers.


> Machines cannot have an operating mode where they can kill someone

Yes, they can, and, in fact, many operational machines do. You probably mean should not, but even that is clearly not the standard our society applies. Except for machines designed to kill people, we generally try to mitigate this to the extent reasonable, but lots of machines that have killed people are retained in operation in the same configuration.


> Machines cannot have an operating mode where they can kill someone.

I agree that they shouldn't, but they certainly do. Strangely, we seem fairly OK with devices that kill us when we use them wrong (e.g. cars, power tools, weapons), but we are outraged when a device that makes decisions for itself kills us. For the victim it's little different, but somehow it is.


I believe the difference is that the decision is taken by someone who has a conscious mental model of how he's going to be punished for killing someone.


I think it's more that if someone is killed by something they control, it is seen as more acceptable than being killed by something they don't have any control over. For example, someone dying in a car crash on the way to work is not front page news. However, if someone dies on the way to work when the train crashes, it is.


Car crashes vs. train crashes is a matter of frequency: trains crash seldom enough for it to be newsworthy.

When more cars are autonomous, we may also see news about fatal incidents fade into the background noise.


If that alone explains it, then the situation is even stranger. We are alarmed at an infrequent event but accept as normal that lots of people die because of cars. Not seeing the wood for the trees.


Both of those examples occurred at a time when human life held less value


Good point. I'm struggling to think of a recent example, but maybe smartphones? How would you find stats on deaths due to smartphones, though? Obviously lives have been saved with them too, but the number lost due to distraction must be considerable.


Probably the best point I've seen made about this. Humans are not computers and will do illogical things like jaywalk across busy streets. Can't say I know the details for this particular situation, but many times a driver will see the telltale signs that someone is about to run across the street before it happens.


Hazard perception is actually a part of the driving test in the UK. They make candidates sit in front of a computer, watch clips of situations and click the mouse when they think there's a dangerous situation developing.


The first cars killed way more people than horses and they didn't get banned.


The first cars had to have a man with a flag walk in front of them when passing through towns.


The governing system in the world at that time was not really universal democracy. It was mostly rule by the rich (who bought those cars) even in places which were ostensibly democratic.


But this is untrue. Many regs were already in play due to wagons, and they applied even to the rich. Do you have a source, or are you just playing identity politics?


Sure. In most of the Western world, Universal Suffrage came only after World War 1. Till then there were generally barriers of class, race or gender on who could vote.

https://en.m.wikipedia.org/wiki/Universal_suffrage


That's not "rule by the rich". The middle class and even much of the poor still voted. You just had to own land, which isn't exactly a high barrier. The point was voting should be one vote per household, by the head of the family.


Soooo like today?


Source? Many people were killed falling from horses at that time; cars were much safer. They were only unpopular due to cost and unreliability, not safety.


I think in this particular incident, though, there was a human behind the wheel who also failed to react to the pedestrian.


The "human safety driver" concept is inherently flawed and is incompatible with the way our minds are wired. You cannot have a person sitting still in bored inaction for hundreds of hours in a row and expect them to remain attentive enough for a split second reaction in an emergency situation.

So yes, I am discounting the safety driver entirely. Autonomous cars need to drive themselves safely. That person was in the driver's seat as a scapegoat in reserve.


1) On average, the Uber car needed an intervention every 20 km or so. That's every 20-30 minutes. If it happens that often, the safety driver would have to be doing something regularly during the day; not looking at the road that often is dangerous. Uber should've been monitoring for this as well, just like they do with train drivers.

2) Train drivers in Europe are specifically hired to do just that. The ones in the Netherlands are tested yearly. They're put on suspension when they run a red signal. Every so often they have to go through a special test and training for their attention span. Uber obviously didn't do any of that.


It's hard to say for sure but the released video[0] makes it seem like most human drivers wouldn't have avoided the pedestrian in time.

[0] https://www.youtube.com/watch?v=pO9iRUx5wmM


The human eye's dynamic range in the dark is far superior to that of the poor-quality video camera whose footage was released.

The woman had crossed two lanes of the road before coming in front of the car - i.e. a fairly unrestricted range of view. A human driver paying attention would almost certainly have seen her and stopped, or at least slowed down to a level where this would not have been a fatality.


Aside from that, the LIDAR did see the person. The system just didn't respond at all.


I regularly drive 50mph on unlit roads in a rural setting where a deer could walk out at anytime -- and I hope that I can swerve and brake to miss it -- or at least slow down.

No excuse for a modern car at only 38mph.


Sure, but aren't they already if you look at the accident per total miles driven ratio? Of course, there will be accidents and people will die. People die every day in car accidents.


No, Uber's self-driving vehicles are significantly more deadly on a per-mile basis. That's the whole problem.


I'm not talking about Uber. I mean in total across companies (incl. Waymo).


The human driver fatality rate is roughly 1 per 100M miles. There has been 1 fatality with less than 20M self-driving miles across all companies. So, if you group all the companies together, they are still more dangerous than human drivers.

But frankly, I think grouping Uber together with more responsible companies does the whole industry a disservice.
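
A back-of-the-envelope version of that comparison (the 100M-mile human rate and the <20M pooled autonomous miles are the figures above, not precise data):

  human_rate = 1 / 100e6    # fatalities per mile, human drivers
  av_rate    = 1 / 20e6     # lower bound implied by 1 fatality in under 20M autonomous miles
  print(av_rate / human_rate)   # ~5: the pooled AV fleet is at least ~5x the human rate so far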


I don't think "deserves" belongs in the discussion. It's an unpopular opinion, but I believe the pedestrian caused this situation by putting themselves in a dangerous situation and not paying attention. That can get you killed in many situations regards less of if you deserve it or not.

I agree mostly with what you are saying, just not the undertone that this unfortunate accident is somehow evidence of AVs "causing" more deaths than human drivers.


People put themselves in dangerous situations all the time. I don't believe for one second that you've never made a traffic mistake in your life, either as a pedestrian or as a driver. But you've been fortunate to be surrounded by others who'll pick up the slack for your mistake and will yield even if they don't have to if it will prevent an accident. The traffic rules don't matter nearly as much as the outcome of people not being killed. Traffic rules are just a means to an end.

So yes, it's quite possible for autonomous vehicles to cause vastly more deaths than human drivers even while being slaves to the rules. Hell, you could do it too as a human driver if you removed your conscience. Just don't hit the brakes the next time you have the right of way and a jaywalker steps out in front of you. You'll easily kill someone within a week and have it be their "fault".


You're onto an important theme: so much of driving is really about group communication--everything from signaling to anticipating behavior based on your own past actions, to waving people through intersections or slowing down to let someone merge or speeding up to make your intent more clear, or hanging back when the traffic gets stupid, or avoiding drivers who seem irrational.

That's why I think the real self-driving car problem is a very special case of the Turing Test--one that might be more difficult to win or solve.


Most higher end cars already have assistant systems that prevent you from running over people or into unexpected obstacles. The problem here seems to be that Uber disabled some of those systems that the Volvo base car normally has.


I’ve read about the incident but haven’t seen mention of why they did this. Was their own system supposed to provide the same safety or something?


Apparently every self driving company does that. So e.g. Waymo also disables such safety systems.


> I believe the pedestrian caused this situation by putting themselves in a dangerous situation and not paying attention

That's the thing - every driver has faced those situations hundreds, thousands of times. Odds are very good that a failure on one of those occasions would carry a significant penalty for any driver, even if it wasn't their fault.


Others are making good points but also, if the system can't be trusted to detect a pedestrian in these circumstances we can't be sure this bug wouldn't allow it to hit a stalled car or large piece of debris in the road either. Even if the car had just hit the bicycle and missed the woman this would be an utterly unacceptable level of performance for an AV to display after completing closed track testing.


> It's an unpopular opinion, but I believe the pedestrian caused this situation by putting themselves in a dangerous situation and not paying attention.

That doesn't mean that the Uber system wasn't dangerously defective; a fit-for-service self-driving system would have noticed in time to react (which may not have been sufficient to avert a collision, and may not have prevented a fatality, but given the speeds involved would have significantly reduced the risk of fatality).

That the pedestrian should not have entered the roadway where and when they did may relieve Uber of legal liability for this collision, but it doesn't stop the complete failure to react from being evidence of a critical safety flaw.

And, had the same thing happened at an equally poorly lit, not specially marked, intersection in many jurisdictions (including California, not sure about Arizona), Uber would be legally at fault because of pedestrian right of way. If Uber's system can't see and react to people in dark clothes at night, it's not even remotely suitable for use, at least at night.


> It's an unpopular opinion, but I believe the pedestrian caused this situation by putting themselves in a dangerous situation

Interesting! What reasons do you have to think this is an unpopular opinion? To me, it sounds like an obvious conclusion.


[flagged]


> Yes, the pedestrian was either intoxicated or suicidally stupid.

Or they made a mistake or were distracted, like millions of people every day.


Watch the video. The victim was not "distracted."


> Further, we don't stop using trains, or buses when a pedestrian makes an error and gets killed because they "could have not been hit, maybe".

That argument might be convincing if the car had detected the pedestrian, slammed on the brakes and tried to evade, but still could not overcome physics in time to prevent fatal injuries. But that's not what happened.

We could have built self-driving cars that mindlessly follow the road no matter what fifty years ago, but we didn't, because that kind of vehicle has no place on a public road. What changed now is not that the same danger is suddenly ok because it's 2018, what changed is that some self driving cars are supposedly much better now. Those that are not need to continue to be confined to the test track.


> we don't stop using trains, or buses when a pedestrian makes an error and gets killed

This isn't about autonomous cars as a general mode of transport, it's about Uber's autonomous cars specifically.

If Uber's cars are more dangerous than human-driven cars, then Uber's cars shouldn't be on public roads.

Waymo's cars can continue to operate since from all reports it appears they've been appropriately conservative when managing risk.

And don't blame the pedestrian. There was plenty of time to stop or swerve to keep this woman alive, but Uber's systems and "safety driver" failed miserably.


> There was plenty of time to stop or swerve to keep this woman alive

You have never been in a serious car wreck. I've been in three in my life and I am telling you, any human being that wasn't a racecar driver in a state of flow would not have been able to stop the car before hitting that pedestrian. It's terrible, yes, and Uber's autonomous vehicle should have stopped, but it simply wouldn't be possible for a typical human being to take appropriate action in that window of time when the pedestrian came out of the shadow. One way or the other, whatever "action" you think someone could take in that particular moment would have resulted in an injured or killed pedestrian and/or an injured or killed driver.

In this specific situation, Uber's autonomous vehicle was no more dangerous than a vehicle driven by a human being.

People do this all the time with things like this. Car wrecks, combat, disasters, fights, emergencies. Everybody has the inside track on what the failures were and why they wouldn't have done it that way and why they wouldn't have failed in that situation.


Have you seen the video from the guy who drove the same route?[1] Streetlights do not produce a string of light dots surrounded by complete darkness. Either the footage has been darkened, or, more likely, the camera just has incredibly shitty dynamic range.

Furthermore, the driver did not have his eyes on the road; he was looking at a display, or probably his phone. I'm not exactly sure, but it doesn't even look like he had his hands on the wheel. Which isn't surprising, like, at all. You cannot sit somebody in a car with nothing to do for hours and expect them to be able to step in fast enough. If you want those people to stay alert, you have to take appropriate steps like driving in pairs and extremely frequent rests.

[1]: https://www.youtube.com/watch?v=1XOVxSCG8u0


You know, actually, I did see this earlier today. EEVblog made a new video where he talks about some of the developments. One of those was all the footage out on the internet from other people with other cameras showing that it wasn't nearly as dark as the Uber video. Which raises some questions, as you mention, especially with Uber's reputation being what it is. Something else interesting he said was that the makers of the vehicle's own collision-avoidance system -- which Uber had turned off -- took the footage released by the police and ran it through their collision-avoidance computer vision algorithms, and it detected the pedestrian even in that dark video. So it's beyond any doubt that a computer both could have and should have stopped the vehicle. I'm eager to see the rest of the data (as I'm sure other autonomous vehicle divisions are, too) and how exactly Uber's automation failed so spectacularly.


I don't believe that we have enough information to conclude that. Yes, she wasn't visible in the released video until 1-2 seconds before impact. But there were no obstructions preventing a driver from seeing her two lanes to the left, about five seconds before impact. Landscaping did shadow her from the nearest street light, but it's unclear whether the driver could have seen her. Perhaps in the headlights.


The pedestrian wasn’t in shadow, the terrible low-light performance dash cam only made it seem like she was.

If the safety driver had been paying attention she would have seen her and been able to stop.

If visibility really was that low then they shouldn’t have been driving that fast - drive to the conditions.


Even if Uber is not legally at fault, they have acted irresponsibly (not limiting the vehicle's speed, disabling the vehicle's built-in safeguards), which resulted in a self-driving car hitting and killing a person. It doesn't really matter who's at fault: a self-driving car killed a person, so the next thing you know there's going to be a public backlash against self-driving vehicles, just as happened with nuclear energy.

Automated transit is protected with platform screen doors, at least in Paris, France. This is a very good example of responsible automation. So are the EZ10 driverless bus, which travels at 20 km/h, and the Waymo driverless cars, which also cruise at low speeds.


Even for humans: in France the Badinter law (1985) says that in case of a collision between a car and a pedestrian, the driver is presumed at fault. Penalties can be nonexistent if the pedestrian threw himself towards the car, but the driver bears the theoretical responsibility. Which is quite good pedagogically: you have something that can kill in your hands, so you get the responsibility of not causing damage.


I can totally see Dara Khosrowshahi shutting down the self-driving program, given the reports that he already considered this possibility right after he was hired.


> If Uber is not found legally at fault which seems likely given the circumstances and based on opinions from people with legal expertise in these cases

It's possible somebody said that they won't be held criminally liable. I would find it quite a stretch for anyone to say they won't be open to a pretty devastating civil action.


If train or bus operating companies were found to be criminally negligent, they would be in trouble as well. Your strawman about 'stop using ...' and your manipulative framing ('hating') are distasteful given the circumstances.


I don't understand why people say Uber is not at fault. 1.) The lidar system should have detected the woman with plenty of time to dodge her. The car could have manoeuvred to the empty lane to avoid the collision.


I think it's too early to demand higher level of proactivity from driverless cars than could be provided by a person behind the wheel.

We know that the level of safety provided by a driver is achievable. But a higher level is for now just speculative, even though additional sensors exist. You'd be punishing a developing technology for not achieving perfection yet.


But I think we should at least expect it to perform as well as a sober driver. I'm sure the woman would be alive today if there were a human driving the car.


I mean, are you really sure, or just saying that? I'd be shocked if you were really, literally sure of that alternate outcome in a system as complex as the universe, simply by replacing the AI with a human.

Humans don't "see" stuff just because the light is entering their eyeballs. A human driver could have been checking their mirrors, the radio, distracted by another pedestrian on the other side of the road, and or just plane not registered the person. The motorcycle safety course teaches you to not use eye contact as a means of determining that a driver "sees" you. That's right, an attentive driver actively looking for on coming traffic during daylight can "see" right through you.

I'm really, literally sure she would not have been hit by that car had she not walked out in front of it.


Perhaps I drank too much of the LIDAR Kool-Aid. The woman was two-thirds of the way across the highway - she didn't just materialize. What's the purpose of this 360-degree 3D map if the AI isn't using it to avoid a collision?


Jaywalking is not the same as walking on train tracks --- and the U.S. (and many other countries) heavily regulate how train tracks are divided and secured from pedestrians to prevent these deaths, and trespassing. There is a reason why yardmen have a reputation of beating homeless.


I agree with you. The driverless car pedestrian strike is getting so much attention because it's new tech. I guess it's also getting attention because some people think self-driving tech is meant to be flawless instead of simply better than a human driver, on average.


It's getting attention because Uber is known for cutting corners, and it's very likely their shitty code killed a person.

You don’t get to hand wave away a death with the justification that the tech is better “on average”. The tech gets a pass when they demonstrate that they took reasonable steps to protect the public.

That means not shipping half baked product with crossed fingers so they can try and increase their stock price.


> Uber is known for cutting corners, and it's very likely their shitty code killed a person

By that logic, each time an Uber driver kills someone you should be angry at Uber for cutting corners by hiring shitty drivers.


What makes you think we aren't? I too have a disdain for bad drivers, and even more for companies who pay bad drivers to be on the road, real or automated.


"Well it's not like they made a booboo, and some bugs caused some inconvenience to a few or many people. Someone died."

Fascinating how narratives and bias work.

The police reported that the pedestrian, wearing black, entered the roadway randomly and unexpectedly, directly in front of a moving vehicle, and concluded that a human driver would not have reacted in time either. The police reported the accident as "unavoidable", and showed that there was a crosswalk very close AND that there were signs right there advising pedestrians not to cross outside of crosswalks. "It's very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway"

If you had the proper context, you would have said "a human killed themselves" I think because no matter who or what is driving, when you jump in front of a moving car, you choose death.

People jump in front of trains all the time, and it sucks but it doesn't mean trains are bad or their conductors are murderers or anything.

Now having said that:

- I think their super-human system should be capable of reacting to what humans cannot, and should have reacted here

- I think Uber is not a good technology company, has zero chance of making a successful self-driving car themselves, and all of this is a waste of money driven by the vanity of their former CEO, who fancied himself a Musk


For context, the accident happened under a brightly lit portion of the street. The safety driver could have easily avoided it had she been paying attention to the road instead of her phone.

Picture of where it happened: https://imgur.com/gallery/XQrAB


You can clearly see the shadows on the left that she walked out of are not lit.

Also, the ISO is BLOWN OUT in your image to create the false perception of a well-lit scene; you can tell by the significant pixel noise that incredible amounts of digital amplification have been applied to this image.

This is a better approximation of human sight: https://i.imgur.com/djOuoZr.jpg

The image you linked is just so digitally distorted that it's very misleading. Heck, your linked image is so distorted that the sky is literally grey!


I am pretty sure that this accident happened because of the way autonomous software makes decisions on its surroundings. I am almost certain the car knew there was an object but evaluated it incorrectly.

There are lots and lots of false positives a self driving car has to disregard just to make its way along any street. Mailboxes on curbs, random parked cars, bridges, signs, and so on. I think the car unfortunately made a bad decision.

To say the LIDAR/RADAR or IR cameras didn't pick the person up is to say that these cars are woefully under-equipped to sense their environment.

I think the Police report was just working off of current operating Police knowledge and expectations of the law.


If there is a bike in the road, I want my car to stop, whether prankster kids push it out at the last second or it's a woman pushing it across slowly.


I don't think I implied anything to say that a car should treat a bike or any other object moving into its path as a false positive. To the contrary, what I was saying is that it did treat it as a false positive but that was obviously not the intention of the engineers.

My overall point is simple, the car almost certainly saw the person - it just had to have, but the way it evaluated what to do about it was wrong because evaluating reality, as it turns out, is incredibly complex.


> The police reported that the pedestrian, wearing black, entered the roadway randomly and unexpectedly, directly in front of a moving vehicle, and concluded that a human driver would not have reacted in time either

She did not enter the roadway directly in front of the Uber. She entered the road from the left and had to cross multiple lanes to get in front of the Uber, which was in the right lane. There were no obstructions between her and the Uber during all of this, and even if the road had not been well lit by streetlights, she would have been in range of the Uber's headlights for over 4 seconds before the collision (the low beams on that model car have a range of 250 ft, and the car was traveling 55 ft/second).
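
The timing claim is easy to check; the headlight range and speed are the figures just mentioned, the rest is arithmetic:

  headlight_range_ft = 250    # claimed low-beam range for that model
  speed_ft_per_s     = 55     # ~38 mph
  print(headlight_range_ft / speed_ft_per_s)   # ~4.5 seconds inside headlight range before impact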

> The police reported the accident as "unavoidable", and showed that there was a crosswalk very close

The nearest crosswalk was 300 ft away, which I would not count as "very close".


> The police reported that the pedestrian, wearing black, entered the roadway randomly and unexpectedly, directly in front of a moving vehicle, and concluded that a human driver would not have reacted in time either.

That police report is at best sloppy and unwilling to do anything other than take limited evidence at face value and at worst an active attempt at spreading misinformation.

The pedestrian was crossing from the median across two lanes of traffic, and was struck on the right side of the right-hand lane of traffic (at least 75% of the way through crossing the road). Even the video in question (which, as many have noted, is far worse quality than anything a driver would be seeing) shows the pedestrian a good second before impact, which is enough time to at least initiate emergency braking even for a human--and the car shows no sign of having done this.


Despite the Uber-released video being misleadingly darkened, you can clearly see the victim was already in the road when she was hit, she did not "leap in front" of the car. The car made no attempt to avoid a collision, she was simply mowed down.


Yes. And as a pedestrian, I would expect cars to slow down somewhat in advance when I would cross the street like that. That didn't even happen. Even if the car would brake in time, it would still be scary as hell compared to the same situation with real human drivers.


Sorry, but I think this not reflective of what actually happened, you are subtly implying this was a suicide when we have zero evidence of it being anything other than a horrific accident.

The police are biased because of the political situation in the state where the governor persistently courted Uber with relaxed rules.


> Fascinating how narratives and bias work.

> the pedestrian, wearing black, entered the roadway randomly and unexpectedly

People do random and unexpected things all the time, often while wearing dark clothes.

Claiming someone has a bias doesn't reflect well on your argument.


Out of curiosity, what's with the emphasis on the pedestrian wearing black? Was it a night accident or something?


Yes. The video from the internal camera shows a pitch black night with almost no light and a black-clad figure appearing out of nowhere. However, this footage is quite disputed, as this level of darkness would suggest that the car had basically no lights to speak of and that the roadside lighting did not work either, none of which was of course true.


They had blue jeans on and were moving a pink bike that had reflective surfaces. The dash cam was very deceptive; a normal driver would have easily seen this from a long way off if they had been paying attention. On top of that, Uber disabled the car's built-in safety system, which likely would have also worked.


> On top of that, Uber disabled the car's built-in safety system, which likely would have also worked.

What safety system? I haven't read anything saying this, so any source would be appreciated!



Re: sibling comment - they moved to a single lidar on the roof with their new vehicles. It has a ~3 meter blind spot around the car.


At 45 MPH you can't stop within 3 meters. The car needed to brake 20+ meters away to avoid hitting them.
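
A rough braking-distance sketch backs that up (assuming about 7 m/s^2 of deceleration on dry asphalt and ignoring reaction time; both are assumptions, not measured values):

  decel = 7.0    # m/s^2, assumed hard braking on dry asphalt
  for mph in (38, 45):
      v = mph * 0.447                          # convert to m/s
      print(mph, round(v * v / (2 * decel)))   # ~21 m at 38 mph, ~29 m at 45 mph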


A 3 meter blind spot around the car doesn't mean it can't see beyond 3 meters. Just as you have blind spots in your vision around your body (you can't see your legs when looking directly forward) but can still see long distances.


The LIDAR is on the roof. I can see the roof blocking some things close to the car depending on how tall they are.

However, my point is that what happens within 3 meters is irrelevant; the car needs to look ahead and brake much sooner than the last 15 feet to avoid hitting things.


I think we both agree then, I just mis-interpreted the emphasis of your point.


The dash cam doesn't show a figure appearing out of nowhere. It shows a person crossing the street in a slow and predictable manner as if she has no concern that the car speeding toward her will fail to see her and slow down. Her white shoes and blue jeans stand out in the video.


All that's negated by the video showing her looking down at her cell phone.


You guys do realize this is how it works right? If you have any alcohol in your bloodstream, any accident you get into is automatically your fault and you were driving under the influence. The circumstances of the accident simply don't matter anymore, you're the one thrown under the bus for the whole thing.


If that was a human driver, it's very likely that the result would have been the same.



Interesting, but I'm pained by drivers filming with their phone on a stretch of road that has already claimed one life.


That was my first thought. "Ok, I hope they're not driving...crap, they are."


Not likely at all, the released video makes it seem darker than it really would have been.


Not slowing down at all, as you hit somebody while using your smartphone?


Prediction - Uber will go back to two people in each car and will be back on the road (eventually).

This is enough of a mitigation to appease the public and the regulators.


I agree with you on all points. Should Uber continue their self-driving efforts, I hope they take both public safety and passenger safety more seriously.


[flagged]


This is exactly what the burgeoning automobile industry did early last century. It's why we have jaywalking laws. Cars were faster than horses, so people were getting hit. The automotive industry responded in the same way you describe, essentially creating jaywalking laws. History repeats itself!


The States is the only place I've ever been that seems to have jaywalking laws; I don't think they are common at all. Even in the States it seems hard to get pinged for it. Most places require you to cross at a crossing if one is nearby, which is fairly common sense.


Nope. Most European states have them - but it's a CYA/gotcha sort of law, not enforced unless the police desperately need something to "ping" you for. I have been ticketed for it exactly once - the cop indicated he had a quota of tickets to fill for the month.


Could you give an example of a European country with them? The ones I’m familiar with require a crossing to be used if nearby, that’s it. Or are you counting that as a jaywalking law?

This Wikipedia entry covers many European countries. https://en.m.wikipedia.org/wiki/Jaywalking


Well, the traffic code is a law, and usually defines various rights and responsibilities for traffic users; since there are jaywalking clauses, I would definitely count that.


Yet you disagreed with my original comment and follow up, which state exactly that.


Most countries don't have interstates cutting through city centers, either.


For some reason, this is the exact response I would expect from Uber.


I think with all the other things we've learned about their self driving program, it's unlikely to return. Months before this, there were other articles posted here that talked about how some safety drivers needed to step in every couple of minutes.

Uber's model is based around selling at a loss to undercut Taxis, Lyft and other ride sharing companies, just enough to keep the investment capital coming in, with the hopes of ultimately cutting costs by removing the driver.

Even with Waymo, GM, Tesla and others, I don't think we're going to realistically see wide scale deployment for autonomous vehicles within the next ten years. They have money to burn. Uber doesn't.

There's a lot of tax money going into that research too and it really needs to be fed back into the public transport infrastructure that's crumbling around the US. I wish research on autonomous road vehicles stayed in the private sector. They're cool tech, but they'll be useless if systems like the NYC Metro and Chicago CTA fall apart.


I don't believe they will ever be widespread with infrastructure how it currently is. I don't believe it's the correct problem to be solving either.

I feel self driving cars are a stop-gap for better public transport inside smarter layouts of cities where cars are not the primary method of getting people and stuff around.

The only good fit for autonomous cars I see is long distance suburban commutes, or highway commutes, which is more a symptom of suburbia than a smart way to layout a city.

Part of my reasoning involves the dispelling the assumption that we have a right to a car and road based travel option, so I get that I am probably in a minority of thought process.

I also love my cars, love travelling on the open road and love combustion engines. But I would give it up for a safer, cleaner and more personable, community focused city.


It's easier to make cars self-driving than to fundamentally re-architect the layout of every single city in the nation, though. We've always had these grand transportation ideas. They don't seem to pan out.


> It's easier to make cars self-driving than to fundamentally re-architect the layout of every single city in the nation, though.

We did that for cars, though, so it's not unprecedented.


I absolutely agree, it's probably too late for established cities and suburban sprawl. I guess I just hope for a better solution than autonomous cars.

I guess this is just a step in the evolution of per-individual transport, and eventually we will get rid of roads and automobiles as we know them for self-locomoting ambiguous transport pods that don't need big asphalt/concrete roads.

We waste so much space to roads and bespoke parking lots.


No, that's totally wrong. The United States used to have more passenger rail than Europe has today. The decline happened over a very short period of time, and it can be reversed in a short period of time. Seattle's ST3 project could prove this if it moves the suburban areas closer together around rail stations.

There's a great Wendover video on this:

https://www.youtube.com/watch?v=-cjfTG8DbwA

Transportation is the single greatest factor in reducing the number of people in poverty in a city. This isn't just opinion; a number of studies suggest it. America could greatly reduce poverty, pollution and gridlock if we got over our apprehension toward rail.


Do you mean it's wrong that it's too late? I hope so, it is just hard to see the incumbent attitude and industries that support it going away quite so easily.

I think we might be in agreement, and I am all for more rail and public transport. My little prediction was just musing about a possible outcome should we follow the autonomous car route, which I think would be the wrong choice.

I am happy to see that there are some projects working on alternatives (or I guess in some ways, bringing back what used to work).

With extensive rail comes easier cycling infrastructure as well, as it just follows the rail and is separated from the car traffic. It's a much more positive outcome for many reasons.


You also only pit yourself against Big Oil instead of also against Big Auto and city planners.


It would take a complete uprooting of transport as we know it, no doubt. I suspect it would take something drastic like a federal ban on automobiles. Not going to happen.


It’s panning out just fine here in NYC.


Uber is not going to give up just because they killed one “loser” (probably how they think of her, given that they let an unsafe product on the road to kill her). The thought of cutting all the “losers” on their payroll and replacing them with enslaved robots is too appealing to them.


for those who haven't seen it, the manufacturer of the lidar system believes their hardware was functioning normally, and the problem is in uber's software: https://www.forbes.com/sites/alanohnsman/2018/03/23/lidar-ma...


[flagged]


An inattentive driver killing a pedestrian isn't typically treated as criminal, as far as I know. As a pedestrian and a driver, I have mixed feelings about this.


Not sure about the US but in other countries it often is.

In Australia for example if you are using your phone and kill someone you absolutely will be seeing jail time.


> An inattentive driver killing a pedestrian isn't typically treated as criminal, as far as I know.

It's usually, though exact standards vary, vehicular homicide if there is criminal negligence, which can include inattention while driving, especially if it results in a violation of a traffic law that would be citable on its own (before considering the fatality) and that causes the fatality.


A vehicle involved death is always treated as a potential criminal case.


Sure, the police always check for intoxication, etc.


What about Toyota's faulty acceleration?

My... My understanding was that you can totally have this be treated as criminal but that most families don't want to press charges? Because, well, all of it sucks.


If you charge the engineers criminally, the result is that they won't cooperate with investigations, and will do their best to cover things up.

With "no fault" investigations, the engineers work with the investigators to identify and fix the problems. The result is a lot better than the adversarial system.


We aren’t talking about a micro service going down, we’re talking about someone dying.

If that driver hadn’t been in a self driving car, they’d clearly be going to jail for looking at their phone while driving.


That's right, we're talking about someone dying. Is it better to have the problem fixed, or better to get revenge? In aviation, for example, some mechanics in Chicago were putting jet engines on using the wrong procedure. This resulted in a crack in the engine mount; the engine fell off during takeoff, resulting in a fiery crash in which everyone died.

Everyone cooperated with the investigation. The mechanics were not criminally charged. The procedures were changed so it wouldn't happen again.

Under your proposal, the shop would have had every incentive to alter records, deny, obfuscate, and in general impede finding the truth as much as possible. Do you think that would have been better?

I know of no cases where mechanics, engineers, air traffic controllers, regulators or pilots were criminally charged in a fatal accident. The result is we have incredibly safe air travel. I don't know about you, but I'm happy about that result.


Has there ever been any back-stabbing after the engineers "cooperate" and volunteer potentially self-incriminating information?


> What about Toyotas faulty acceleration?

This is a great case to read up on. The testimony from experts in this case was top notch.

I believe the criminal charges in the case were related to the degree of deception Toyota engaged in with regards to unintended acceleration in their vehicles.

edit: Someone did a case study: https://users.ece.cmu.edu/~koopman/pubs/koopman14_toyota_ua_...


Toyota paid a huge fine and other penalties, but looks like the criminal charge has been dismissed: https://www.reuters.com/article/us-usa-toyota/u-s-judge-dism...


Families don't press criminal charges, the DA does. But if someone doesn't want to testify, a case can fall apart.


What faulty acceleration?

You mean https://en.wikipedia.org/wiki/2009%E2%80%9311_Toyota_vehicle... ?

> The most common problem was drivers hitting the accelerator pedal when they thought they were hitting the brake, which the NHTSA called "pedal misapplication.” Of the 58 cases reported, 18 were dismissed out of hand. Of the remaining 40, 39 of them were found to have no cause; the remainder being an instance of “pedal entrapment.”



So... the NHTSA started an investigation in 2010, and Toyota 'provided to the American public, NHTSA and the United States Congress an inaccurate timeline of events that made it appear as if TOYOTA had learned of the sticky pedal in the United States in “October 2009,”', when "In fact, TOYOTA had begun its investigation of sticky pedal in the United States no later than August 2009, had already reproduced the problem in a U.S. pedal by no later than September 2009, and had taken active steps in the months following that testing to hide the problem from NHTSA and the public."

But the NHTSA's 10-month investigation beginning in 2010 when Toyota had already informed them of the sticky pedals came to the conclusion that zero of the reported cases of unintended acceleration involved sticky pedals.

That's a case for lying to the government, and we see a judgment against Toyota for lying to the government. How is it a case for unintended acceleration?


Read the second line as well before drawing conclusions.


> An inattentive driver killing a pedestrian isn't typically treated as criminal, as far as I know

They certainly should be treated like a criminal. They used a deadly weapon to kill someone because they didn’t respect their responsibility.


A software developer fucked up. It could have been much worse. It's going to make our industry look worse if we aren't held accountable.


Nah, I'd rather not be held accountable for endless scope creep and 80 hour work weeks delivering a product I repeatedly called a pile of bugs


Engineers who are held responsible for their work and can go to jail when they get it wrong don't accept work conditions like you describe.

Software engineering might be a better profession to work in if it was regulated the same way other engineering fields are.


Makes you wonder if a governing body will ever come about - e.g. medical councils/boards, engineers' associations, etc. Presumably it's these types of problems that they came into existence for. It's a double-edged sword, but it often works in the interest of the profession where I am. The biggest difficulty I'd see is that defining a software developer is not particularly easy.


It's generally considered a form of negligent homicide, which is a crime in the USA.


No, "ordinary" negligence justifies a charge of vehicular manslaughter, which has lesser penalties than plain manslaughter.

Even then, it seems like many states require "gross" negligence, like speeding or intoxication to charge with a crime. Someone who simply "didn't see the pedestrian" but was otherwise following the law will likely not be charged with a crime. They'll get sued, of course.


Criminal negligence statutes (I don't know Arizona law in particular) virtually always have a "gross negligence" requirement. A reasonable person would have had to have known that the action was likely to cause harm. It's not enough just to make a fuckup.

That just doesn't seem to apply here, absent evidence that we don't have (e.g. Uber knew the LIDAR was being ignored, something like that). This was just a software bug. You don't throw people in jail because they wrote buggy software.


The safety driver was texting. Virtually everyone knows if you're a driver you're not supposed to text and drive. Safety drivers are held to an even higher level of road safety knowledge, so they should understand that they're beta testing a literally lethal weapon, and take reasonable precautions, like staying focused on the road.


> You don't throw people in jail because they wrote buggy software.

That depends upon the application.

Too many programmers think "Meh. Bugs are no big deal." because their code doesn't interact with the real world.

The problem is that the useful code that doesn't interact with the world has basically all been written; code is now starting to interact with the world by default.

Hang on, programmers, this ride is about to get bumpy.


Do you have an example from anywhere ever of a programmer being prosecuted for a bug?


Behold, The Great Worm of yore! https://en.wikipedia.org/wiki/Robert_Tappan_Morris#Criminal_...

(I thought of the first computer-aided fatality, but the sources don't seem to mention any prosecution w/r/t Therac-25)


No, the Morris worm was criminal even if it didn't kill systems. The fact that it was accidentally killing systems surely changed the prosecutorial decisionmaking, but it wasn't relevant to the case itself. Morris was prosecuted for penetrating and abusing systems he didn't own, not for a bug in software that would have been non-criminal if it worked properly.


Which is it? You're saying we don't have enough details AND that it's just buggy software.


Upthread it was pointed out that the LIDAR manufacturer believes their system was operating correctly. Assuming that, it becomes a software bug more or less by definition.

But fine: let's say it was the LIDAR system was at fault. Do they go to jail, then? How about the bureaucrats who approved the test?

All this doesn't matter. I'm simply saying that ALL the facts in evidence point to this being a reasonable mistake, and reasonable mistakes don't qualify for negligence prosecution. If you want to argue to the contrary you need evidence, which doesn't seem to exist. Therefore the upthread question "shouldn't someone go to jail?" is answered with a firm "no".


How could you possibly know if the mistake was reasonable?


Because you're demanding jail time, and the burden of proof goes the other way. I don't need to know whether the mistake was reasonable, you need to prove it wasn't if you want to argue someone should go to jail.


To be horribly utilitarian - every week you set back the development of self-driving tech, you're 'killing' future people who are involved in human-at-fault collisions. Counterintuitively, it is the most ethical choice to continue to accelerate self-driving car development even if it is sometimes conducted in an unsafe manner.


Other scenario: Waymo will beat out Uber whether or not it is slowed, so any deaths prevented by stopping Uber are a net gain. In order for yours to be true, we have to believe that Uber is contributing to the future of self-driving tech. I think they will be perpetually playing catch-up to Waymo until their bankruptcy.


It would be utilitarian if we knew that self-driving tech was the best outcome out of all potential alternatives. We don't.


Not disagreeing, just curious: besides the obvious "drive it yourself", what are the in-the-works alternatives to self-driving cars? I'll admit I don't follow the space too closely.


I'm not in this crowd, but mass-transit advocates argue that the resources/attention poured into self-driving could be better allocated toward investment in mass transit infrastructure.


Mass transit infrastructure will not provide VC's with 100x returns.


This also assumes a good market of people who want to use mass transit vs something private.


Ehhh, I think given enough time and money just about any problem can be solved. Given what we know today, it seems pretty reasonable to say that we will achieve a car that can drive itself in any situation and not cause deaths...at some point in the future.


If everyone on earth gives me all their money and property and absolute power to rule, I will do so benevolently and in a way that maximizes the happiness of the world's population.

Therefore, every moment you spend not actively giving me money and property and power, and persuading others to do so, is a moment in which you are actively causing harm and suffering to billions of people who would have had happiness sooner if not for your delay.

You monster.


> Counterintuitively, it is the most ethical choice to continue to accelerate self driving car development even if it is sometimes conducted in an unsafe manner.

However, conducting self-driving car development in an unsafe manner is a good way of making people oppose self-driving cars, which leads to development of self-driving technology taking longer.


Who is setting back self-driving tech? Law enforcement who are trying to keep real people safe? Or Uber by being their cavalier selves about safety standards?


See, the issue is not just in utility. "Overeager operators disable safety checks for an experiment, death ensues, entire field of engineering suffers a major setback" has been seen before: have you heard of Chernobyl? (You have, thousands of times, just like anyone else from 1986 on) Yes, today's reactors are far safer and whatnot - yet the disaster is so ingrained that any discussion of nuclear power is practically destroyed just by bringing that name up.

"People behave rationally" is the greatest error assumed by utilitarianism, do not underestimate the reptile brain and its response to fear.


This assumes that self-driving cars will be safer. I think this is an unrealistic idea.

Look on the road some time. You see a lot of cars with dented bodywork, running one headlight, often poorly maintained, and depending on the make and model very cheaply made. Many of the cars may be ten years old, or older. They are often sold used, with limited warranty or dealer service.

And you really want to add self-driving to this? You think an 8-year-old Kia whose operator hasn't updated it because he can't afford the dealer service fees is going to be safer?


> This assumes that self-driving cars will be safer. I think this is an unrealistic idea.

I've seen this suggested a few times, and it makes me wonder if this is caused by religious beliefs or general pessimism.

The reason is that the only way it's an unrealistic idea is if we assume that human intelligence cannot possibly be matched by a machine, and/or that progress toward general AI will be so slow that, for the purposes of this debate, it will take a very long time to match it.

The only thing that will stop us from eventually matching human intelligence with machines is if there is some super-natural soul necessary to match it. Even then, for that to stop us, said soul would need to be a necessary condition to make self-driving cars safer than human, which sounds even more implausible to me, given the many advantages self-driving cars can obtain:

Additional views of the road. Benefiting from accumulated knowledge from billions of hours of driving. Potentially wireless information exchange with the other automated cars in the vicinity (can't see through the fog very well? well, maybe the 10 cars around you can fill in blanks).

I think it's a totally unrealistic idea that self-driving cars won't get to a safety level where human drivers will be outlawed on public roads.


You completely missed the meat of the comment which was about how cars get old, worn out and damaged over time, and aren't always well maintained, and that this will apply to self-drivers too and affect their safety.


No, I didn't. There's no reason we can't build self-driving systems that are sufficiently hardened in hardware terms to outlast the car, and/or with sufficient self-diagnostic ability to refuse to drive if they are degraded in any way.


Well that's alright then.


If we can make cars that actually drive themselves, I'm sure we can make them refuse to run if there are serious maintenance issues.


Ah, DRM will save us, you say? Today's farmers disagree - "make them refuse to run" will be misused by vendors, as it already is today. https://hn.algolia.com/?q=tractors+repair


That assumes Uber had a reasonable chance at advancing self-driving tech. Everything demonstrated to date makes them seem more like Theranos, and it would be difficult to find someone willing to argue they were advancing medical progress.


That amounts to "the ends justify the means." Some people go for that, some don't. Personally, I usually don't.


> That amounts to "the ends justify the means."

That, uh, is what "utilitarian" means.



That way eugenics lies.


But who would go to jail? The developer responsible for writing the faulty code? Or the QA team that missed it on review? The project lead who signed off and checked the code into the repo? The supervisors of any of these people? I'm really not sure how you apportion blame on something whose development is so widely distributed.


What about the 'safety' driver who paid no attention to the road until after the impact? The dash cam video showed them messing with their phone for the entire duration. They were legally in charge of, and driving, the vehicle. Much as I hate to throw the lowest member of the pecking order under a bus, this was absolutely their responsibility.


I'm confused why more people here aren't calling the driver out on this.

My understanding is that Uber knows these cars aren't ready and that's why they have a human at the controls.

I would have thought the driver would be instructed to maintain awareness of the road as though he were operating the vehicle and that looking away from the road in a manner that suggests he was using his phone would immediately relieve Uber of any responsibility.

Perhaps the collision still would have happened, but at least the driver would have the defence of "I was paying attention and meeting all the job-role requirements as safety-operator of the vehicle."


If the driver was paying attention but was unable to react in time, then I'd expect the same result as any other time that a commercial driver on public roads is involved in a fatal accident: Some kind of investigation, and if no fault is found, everyone shrugs and it's forgotten because shit happens.


The fault is with Uber because their software should have been able to use LIDAR to detect the person walking in the road.


Yep.


Typically the employee would be criminally negligent, and the employer would be civilly liable. Uber's $20B is a legal war chest, so this was fully anticipated.


You all keep calling it a 'safety driver', but for all intents and purposes it has been, since the beginning of this fray, the 'scapegoat driver'.


This is frustrating, and we've just begun seeing these fatal accidents caused by self-driving cars.

These things and liability should have been decided before the cars were allowed on the road. But when Congress deregulated them at the federal level everyone hailed it as a good thing and completely ignored all the potential negatives of that.

Now, even if Uber is found to be at fault, it may get away with it, because there may be no law clearly attributing guilt to a self-driving car maker in accidents like these.


I think that by the letter of the law, liability would be strictly with the "safety driver" (aka scapegoat on duty). They were surely told to keep attentive all the time and - surprise - it did not happen.

The root cause, in my opinion, is allowing cars on the road that inevitably lull their nominal driver into inattentiveness.

There might be ways to keep a safety driver engaged so that the chance of attention failure is significantly lower than inevitable, but it doesn't seem like Uber had been looking for them very hard.


It's not a unique situation. Developers are responsible for software that kills.


This is idealistic, but generally, it should go up to the boss. Either the lead of the self-driving unit or the CEO of Uber. It's their responsibility to make sure the software is ready before testing it on the streets. They get paid a huge salary exactly because of the responsibility this entails.


Seems like there should be some kind of legal process for figuring this out. People have died from faulty software before.

(Someone went to jail for VW and there was no death directly as a result of the emissions cheating.)


There were dozens of statistical deaths caused by all that increased particulate pollution. Air pollution is not harmless and it does reduce people's lifespans.


Statistical deaths from pollution are weird though, in that you are perfectly fine legally as long as you pollute within the accepted threshold. Driving 50 miles in a car that pollutes twice the allowed amount somehow makes you responsible for fractional statistical deaths in a way that driving 100 miles in a car that pollutes the allowed amount does not.
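The arithmetic behind that comparison, with made-up per-mile numbers just to show the equality the comment describes:

    # Made-up per-mile pollution units, purely to illustrate the point above.
    dirty_car = 50 * 2    # 50 miles at twice the allowed per-mile pollution
    clean_car = 100 * 1   # 100 miles at exactly the allowed per-mile pollution
    print(dirty_car == clean_car)  # True: same total pollution, different legal standing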

This observation in no way absolves the polluters, but the big picture would be incomplete without this perspective. We all have some statistical-death blood on our hands, some people more than others, and some of those more than others again because they broke rules we introduced to keep the deaths from running out of bounds.


But VW was not polluting within the accepted threshold. They were very specifically defrauding regulators to try to get away with polluting far beyond the accepted threshold.


(Someone went to jail for VW and there was no death directly as a result of the emissions cheating.)

That was an instance of deliberate fraud.


The developer’s boss(es) seems more useful in this case. Unless a single developer was grossly negligent, then throw them in there too, but even then their supervisors are on the hook for signing off on grossly negligent software.


Yes, the developer.


What if the developer had sent an email to their boss saying "hey I think this is a bad idea" and their boss pushed it anyway? What if there is an open bug report that the developer intends to fix, but management won't give them the free hours to do it?


Then it's up to courts to take that into account. This is not a completely alien situation on Mars; it decomposes into manageable problems, some of which have been solved previously.


I'd argue that prosecuting a developer because his employer used his code to kill someone would be comparable to prosecuting a gun manufacturer because a customer used their weapon to kill someone.


This is comparable to prosecuting the developer who works for a gun manufacturer when a new type of digital "safety" fails.


An equally fair comparison. I'm not a lawyer, this is not legal advice, and I'm really just spit-balling for discussion's sake and because I consider it interesting; please do poke holes in my reasoning.

Presumably the developer in this scenario is not responsible for guaranteeing to and/or misleading a customer that his code adequately renders a deadly weapon safe; more likely that responsibility falls on the managers whose project it was to implement such a digital "safety" and who instructed the developer to write the code in the first place.

Assuming the developer did not write the code with the intent of bringing about a person's death, which might involve fooling his superiors as to the efficacy of his work, I'd wager he can't be guilty of a crime. Outside of that, I believe vicarious liability applies.

Perhaps he was a one-man department of this hypothetical gun manufacturer, implementing, deploying, and marketing his product himself and of his own initiative; I don't think there's any doubt he'd be liable in this case, the extent of which is dependent on his intention.

Perhaps our developer was self-employed as a contractor, and warranted to his client (the hypothetical gun manufacturer) that the code was safe; assuming the manufacturer used it as warranted, and it failed, I'd assume the developer would be mostly liable for the result.


Gun companies are a bad analogy because they are one of the only exceptions to having legal liability when their products kill people. Almost every other industry can be held liable if their product kills someone.


Ahh, I wasn't aware of that.

What about heavy machinery, like construction plant? Or power tools? Or common kitchen utensils? Would they satisfy the analogy?


Related, has anyone gone to jail for faulty aircraft surface control software?


Did you see the video?

Imagine if it was just a person at the wheel. Would a person have gone to jail for hitting that person, in the video?


The low quality dash-cam video that looks nothing like real life? Street lights and headlights cast more light than is evident in that video.

Do you ever drive at night? Then you should intuitively know that if it was really that dark, either the headlights were off, or fog was present, or something else was going on that would have made a speed much lower than the posted limit sensible (vs. this car, which was speeding at the time).


That manipulated video does more to make me distrust Uber than the crash itself. They're showing what their true colors are, same as always. I don't trust them to do self-driving cars safely.


Pedantry: video is probably not manipulated ("has not been postprocessed"), but it is manipulative ("released with intent to manipulate audience"): I've seen similarly crappy, unedited videos from cheap dashcams. Indeed, it would make sense to buy the worst-quality scapegoat camera - you could then claim "see, it's really, really dark out there, nothing we could have done!"


The selective choice of what to release, even though they clearly have much better information, is what's manipulative. No disagreement there.


If the person knew they were there, yes. That's the case with the LIDAR.


It would be involuntary manslaughter at worst. Hard to prove, right?


Why? The car should have all the logs, shouldn't it? It won't be a 100% account of what happened, but it should be pretty close - unless neither Uber's lidar nor radar was functioning, in which case that could ironically save the company.


Would a human driver be prosecuted criminally?


In Germany, almost certainly (involuntary manslaughter).


If they had LIDAR capability, yes.


> Edit: It's disgusting you guys are down voting me over this.

Adding this is not a mature response to downvotes. I was indifferent to your original comment. I downvoted because of this addendum.


No.


Over what? Correct me if I am wrong, but the car had the right of way and the person illegally crossed. Regardless of any improvements in what the self driving car/driver could/should have done, it seems clear cut to me that the root cause lies with the person.


So reading this article [0], it seems as though both parties are evaluated for their actions and blame is assigned proportionally. So while the woman would probably get the bulk of the blame for jaywalking, the driver/Uber might also get some due to software error/driver inattention.

My guess for why the cops are not really going after Uber is that the woman was reportedly homeless, so she does not have any family to fight her case.

[0]: http://www.alllaw.com/articles/nolo/auto-accident/pedestrian...


Just because you have right of way does not allow you to murder someone.


Murder? This was not a premeditated killing. It was an accident that everyone involved certainly wishes did not happen.

Although, you might be able to argue for involuntary manslaughter but I'm no expert.


I’m not sure about Arizona, but murder in common law doesn’t require intent, only that a reasonable person could have predicted the action would lead to someone’s death.


I know you're using hyperbole, but murder does imply a pre-determined intent to kill. At worst this is involuntary manslaughter.


Just because they will not re-apply in California now doesn't mean their efforts are stopping - and it doesn't mean they won't apply for permits in different states or just re-apply in California at a later date.

HN seems to be jumping to radical conclusions on the level of Reddit these days but I get that everyone is a bit emotional around Facebook / Uber during this news cycle...


But doesn't that mean they're scaling back testing?

Considering that Uber's entire valuation justification relies on them getting self-driving cars first....


It just means they won’t test in California in the near future.

It doesn’t automatically mean they will scale back their testing across the board.


My guess is that they don't want to be seen asking for something they'll publicly fail to get. At this point, outside of the world of tech, people who were downright alarmed by SDVs now have a place to hang their hat. Politicians follow the direction of popular sentiment, and they're not going to risk the next Uber fatality having their stamp on it. They'll be thinking the next corpse might be a child's as well, and how that would hurt their prospects for re-election.

On HN there are a lot of Vulcans who have stats ready to go about human driver fatalities, but that’s far out of touch with the average voter. Fear = Dread/Sense of Control, and most people think they have a high degree of control as the driver of a car. I’m not arguing that they’re right, but they’re the majority.


If I wanted to put Uber, Waymo, and other self-driving companies out of business, I'd run on the platform of not allowing Silicon Valley technocrats to decide which kids in the street are going to die the next time their reckless attempt at disruption goes astray.

It doesn’t matter if you’re being intellectually disingenuous if you think you can win on the platform.


But you'd be out-voted by elderly people who are afraid of losing their mobility when they lose the ability to drive a car.


Ever been to FL or AZ? The oldest people you've ever seen are still driving, slowly, dangerously, and happily. No one is taking their licenses with the AARP around. Meanwhile, you may be overestimating the love very old people have for automation, for the prospect of being killed by an SDV, or for being driven around town by a machine.


The realities of how hard it is for an elderly person to get their license renewed aren't directly relevant here. What matters is the perception of the voters. If they perceive this to be a problem, then someone with money can pay for an ad to play upon those fears and use them to persuade someone to vote against a candidate.


I’m from AZ and there are plenty of people there who don’t drive because they’re too old to.


My grandmother is 86, lives in Florida, and the state renews her license annually with no qualms.


I wasn't talking about whether the law forbids it. There are plenty of people who get too old and frail to drive, and stop driving of their own accord, without the government forcing them to.


Sample size = 1.

I wish people would stop posting anecdotal experiences. Not just saying that about your comment, to be clear.


That is a really scary thought, because it has probably already been thought up by someone in politics. That's too easy a win card for it not to get played pretty quickly.


Exactly, and this issue is highly emotive for many. The rational counter argument, that humans are generally terrible drivers, is just guaranteed to piss people off more.


Even the stats are pretty bad for Uber: while the sample size is minimal, an average driver is expected to drive ~33x more miles than Uber did in total before a fatality.
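A rough back-of-the-envelope check of that ~33x figure; both inputs are assumptions of my own (an NHTSA-order rate of roughly one fatality per ~85 million human-driven miles, and Uber's reported autonomous total on the order of 2-3 million miles), not numbers from the article:

    # Assumed figures, not from the article.
    human_miles_per_fatality = 85e6   # roughly one US traffic fatality per ~85M vehicle miles
    uber_autonomous_miles = 2.6e6     # order-of-magnitude estimate of Uber's autonomous total
    print(human_miles_per_fatality / uber_autonomous_miles)  # ~33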


> while the sample size is minimal

This already disqualifies this from being called "stats". With a single data point you might be able to fit a probability distribution, but there's enough wiggle room there to drive a self-driving oil tanker through.


But judging from the low-fps video provided, I would guess most drivers would not have been able to avoid hitting the person in that specific situation.


Something is real fishy with Uber and the Arizona fatality. If you believe Uber's dashcam footage it was dark and hard to see the woman. Several people in the area posted comparison dashcam footage showing it just wasn't that dark [1].

This raises a bunch of questions:

- Was this the unprocessed footage from a dashcam in the Uber car?

- How bright were the headlights?

- What other sensors could've and should've detected an object to be avoided on the road?

I honestly don't know why this view seems to be so unpopular here (probably because of the optimism for self-driving cars, which I share) but why isn't criminal negligence for Uber a possibility here? Apparently they also disabled one of the safety systems in the Volvo XC90 designed to avoid this sort of thing.

If it does come to light that Uber misrepresented the conditions of the crash, tampered with the evidence in any way or even tampered with other safety measures then shouldn't a criminal case be a consideration?

[1]: https://arstechnica.com/cars/2018/03/police-chief-said-uber-...


At this point we really don't have enough information to know what happened other than the car hit and killed the pedestrian. We don't know how or if this dash cam video played into the control systems of the car. The accident is tragic and of course should be looked into. The NTSB is investigating[0] and I assume will have access to a lot better information than we have at this point, including all of that you're interested in. Lots of arm-chair quarterbacking at this point, given the paucity of information available right now.

As for "I honestly don't know why this view seems to be so unpopular here", there are opinions all over the map, including those very similar to yours. This discussion has been going on for days in multiple submissions, and again, based on very little information.

[0]: https://www.ntsb.gov/news/press-releases/Pages/NR20180320.as...


There should be a way of increasing safety driver attentiveness.

E.g. make drivers labelers: drivers have to label any pedestrians they see on a special HUD. Add financial incentives for labeling pedestrians quickly.

Some variation of this idea might reduce data acquisition costs and keep safety drivers more attentive.


Munroe, R., https://xkcd.com/1897/ [XKCD]


I'm thinking about this simple trick to keep the safety driver focused: at random intervals, the software would provide a stimulus (such as a red LED light projected on windshield), and the person would need to react as quickly as possible by pushing a button. Thus the person would know she has to expect the need to react quickly, and the software can measure the level of attentiveness.
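A minimal sketch of that loop, assuming hypothetical show_cue() and wait_for_button() hooks for the windshield cue and the button (nothing here reflects any real vendor's system):

    import random
    import time

    def attentiveness_check(show_cue, wait_for_button,
                            min_gap_s=60, max_gap_s=300, timeout_s=2.0):
        """Run one check; return reaction time in seconds, or None on timeout."""
        time.sleep(random.uniform(min_gap_s, max_gap_s))  # wait a random interval
        show_cue()                            # hypothetical: e.g. red LED projected on the windshield
        start = time.monotonic()
        pressed = wait_for_button(timeout_s)  # hypothetical: blocks until button press or timeout
        return time.monotonic() - start if pressed else None

Logging the returned reaction times (and the None timeouts) would give exactly the attentiveness measure described.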


I think in this case, from the video, it is clear that the safety driver is not looking at the road all the time.

But let's consider the scenario where the human driver was 100% alert and focused and was looking at the road all of the time. How would this have played out?

The car is driving along. The human driver sees the pedestrian 20-30m down the road but notices that the car is not slowing down.

What does the human driver do in this situation?

Slam on the brakes just in case? Do nothing assuming the car has worked out that it is safe? Or wait assuming that the car will make a decision in time, and then slam on the brakes if it doesn't ... but would it be too late by then?

What (if any) decision do you make to override - and when do you make it - when you're using an autonomous system?

I don't think this sort of thing is clear-cut. Obviously in hindsight we'd all pick "slam on the brakes", but in a scenario where it is kinda "now or never" for braking, if you override the system too early you never get to really test/develop the system, and if you leave it too late you don't have enough time to prevent the accident. There must be an incentive/encouragement for these human drivers to "let the system do its job" rather than disengaging all the time, otherwise no one would ever let the systems run in full-auto.

It sounds like a tough job. Presumably there are systems in the car to indicate what it "sees", but cross-referencing that against what your eyes see is probably quite difficult: "Of the 50-60 things on the screen that the car has registered, it has missed this one thing." I'd not want that job.


Every incident is logged, including the SDC's response to the particular situation. Let the humans take over and avoid the accident.

Then you try to recreate it in your test environment and see what would have happened without human intervention. If it would have resulted in an accident, fix the code so it doesn't need human intervention, add it to your test suite, and make sure all of the existing tests pass.
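A sketch of what the "replay the takeover as a regression test" step could look like; load_logged_scenario, replay_without_human, and Planner are hypothetical stand-ins for a simulator/replay harness, not any real vendor API:

    # All names here (load_logged_scenario, replay_without_human, Planner) are
    # hypothetical stand-ins for a simulator/replay harness.
    def test_logged_disengagement_is_now_handled():
        scenario = load_logged_scenario("incident-1234")    # log captured at the human takeover
        result = replay_without_human(scenario, Planner())  # re-run the planner with no human help
        assert not result.collision                         # the fix must avoid the accident...
        assert result.min_gap_to_pedestrians_m > 2.0        # ...with an assumed 2 m safety margin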


Self driving trains have been in existence for decades. In NYC (which I don't think is heavily automated) they require the drivers to point when they pull in to a station to prove they're paying attention. Many freight trains have a throttle that is dialed in to place at a set speed. To ensure engineers are focused there's a dead man's switch that requires activation at set intervals.



There are deep learning solutions that classify pretty well whether the driver is paying attention or not.

You could just play a buzzer when inattention is detected, and tell drivers they will get fired if it happens too often (after human review of the tapes, of course).
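A hedged sketch of that buzz-and-review loop; attention_score() stands in for whatever gaze/head-pose classifier is used, and the other helpers passed in (capture_cabin_frame, sound_buzzer, and so on) are placeholders, not a real API:

    import time

    ATTENTION_THRESHOLD = 0.5    # assumed score below which the driver counts as distracted
    MAX_EVENTS_PER_SHIFT = 5     # assumed limit before the tapes go to human review

    def monitor_driver(shift_in_progress, capture_cabin_frame, attention_score,
                       sound_buzzer, flag_shift_for_review):
        events = 0
        while shift_in_progress():
            frame = capture_cabin_frame()                 # interior camera frame
            if attention_score(frame) < ATTENTION_THRESHOLD:
                sound_buzzer()                            # immediate audible warning
                events += 1
            if events > MAX_EVENTS_PER_SHIFT:
                flag_shift_for_review()                   # "fired if it happens too often"
            time.sleep(1.0)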


There would be an adversarial neural network on the other end trying to fool the former into falsely detecting attention; I'm not sure this would end well. They might end up with a perfectly distracted driver who shows no visible signs of inattentiveness in the interior video.

But I fully agree with the general idea: if Uber had quickly conjured up one of their glossy presentations outlining an impressive arsenal of countermeasures against safety driver inattentiveness, they would be in a much better position now. They didn't though, and everybody (including myself) is assuming that Uber just did not care: if a safety driver fails at their impossible job, it's their responsibility, not Uber's. Turns out it might still be Uber's problem though.


This seems like a misunderstanding of the problem. The problem is that it's very difficult to pay attention to something you have no control over or interaction with. Sticking in a prod to punish people isn't going to make it easier; you'll end up just firing a lot of drivers. What would be better is to give the human something to do. For example, you could put a set of input devices in the car for the human: tell them to record their 'level of confidence' so you can check that against the car's confidence levels, or to input suggested steering adjustments to gauge where the car differs from normal human behaviour.

That way your driver becomes an active (albeit probably useless) participant in the driving.


Train drivers are expected to do just that. Driving a train involves far less interaction than driving a car. They managed to set up a system (at least in the Netherlands) where drivers do pay attention: hiring people based on personality/skill, various things in the train, checks, suspension when you miss a red signal, etc.

It seems Uber did none of that. It's super obvious the safety driver isn't paying attention. Why wasn't this noticed? Why wasn't a system set up to handle this? It feels like they cut corners.


To add to your comment: GM's SuperCruise semi-autonomous mode uses a real-time head and eye-tracking system to ensure the driver is alert. I don't know if it's based on DL, but it's available in a production vehicle.


Dead man's switches tend to do the job - to a point. Railway operators learn to keep this particular annoyance at bay, but in extreme cases (fatigue, etc.) to the exclusion of all other alertness.


That sounds worse than just driving. It also definitely switches who is the machine and who is the master, like one of those breathalyzers attached to the ignition.


That is what Tesla's Autopilot does. It periodically starts flashing a warning that you need to wiggle the steering wheel to show you are there.



Oh my, that is so dangerous since it won’t know you aren’t holding the wheel if it needs to disengage.


What does it do if you don't? Does it just pull over to the side of the road?


Yes, eventually after a few warnings.


The self driving car program seems like reaching for immortality. Just because your company identifies something that will eventually kill it on a long enough timeline doesn’t mean you need to develop that product yourself. Wonder how much better off they could have been if they had focused on tweaking aspects of their current service.


Their business model would be destroyed by the first company to do self driving. The costs of paying drivers is a big part of the fee paid by riders. It was the natural progression for Uber.

It just seems like Waymo is doing a much better job. The internal problems at Uber are going to have lasting effects even if the problems have already ended.


Yes, but there's a big difference between perfecting the tech and implementing it for a use case. Uber would have the logistics and infrastructure to implement it; it's an easy call to license the tech from someone who has it down pat but no interest in being a taxi company. Heck, they could probably have partnered with Google as a testing partner.


Well, but leverage matters. Whoever owns self driving patents will take close to 100% of the profit margins involved in taxis for the life of the patents.

For example, Apple gets most of the margin out of an iPhone. Apple could switch hardware vendors. The hardware vendors can't find anyone else who sells the iPhone OS.

Same thing here. If you have the self driving tech, the logistics and a shiny app and the dumb car hardware are the easy part you can buy for the lowest price available from many competing vendors (or conversely, license your technology exclusively to whoever pays the highest price -- either way, you have total leverage here and they have none, because without self driving tech there is no business)

Anyway, Waymo does not seem at all disinterested in the taxi business, probably because they realize that's the easy part. Alphabet engineers can put out a decent dispatch app in a fiscal quarter easily, and it markets itself. Or they could create a bidding war between Uber, Lyft, Via, etc.... Even with massive investor subsidies, nobody will be able to beat the rock bottom prices of a true driverless taxi service.

Uber investors know this, which is why Uber was forced to pretend to develop driverless tech, because if they don't, then Uber stock should be selling at earnings multiples more appropriate for a steel mill, not a software company.


They chose this path long ago when they took on over a billion in investment plus debt, and it could probably be argued it was locked in even sooner than that. This is a vehicle to get investors to that important rentier position, and they have to make it or implode in debt trying. Their valuation has this fate baked in.


Yes but my point is: they could've just made money for the next fifteen years or so. Then they could've maybe sold the brand, mapping division and data to the company that kills them, or done some sort of merger. Companies don't have to live forever to make sense. Every apartment building, for instance, will eventually need to be replaced.


The problem they see is that whoever gets this working will have a structural advantage over a company that requires drivers and probably launch a competing car on demand service.

This is really about establishing a moat around their existing business.


I think this might be the bursting of Silicon Valley's self-driving car bubble. People are starting to realise the tech is nowhere near ready to make self-driving cars feasible at scale. We still might be a decade or more away from the tech being advanced enough to consider going in this direction.


> "tech is nowhere near to make self driving cars feasible on scale"

To be more specific, Uber's tech is the one that's nowhere near feasible. According to recently leaked data, Uber's self driving system is literally orders of magnitude more failure prone than Waymo's[1].

[1] https://www.nytimes.com/2018/03/23/technology/uber-self-driv...


From what I understood elsewhere, the numbers in that article are about different measures (apples vs oranges comparison).

Article states:

> 5,600 miles before the driver had to take control from the computer to steer out of trouble

Apparently they have a number for a) miles per intervention, and b) miles per intervention needed to prevent an accident. Waymo's number is measure b; Uber's number is measure a.


That is probably correct. But I am skeptical of even Waymo. I don't think even with Waymo's tech it's reasonable to expect any deployment of self-driving cars on a meaningful scale in the next 5-10 years (apart from some very controlled conditions like roads or lanes dedicated to self-driving cars, but that will require infrastructure build-up which will take a decade or two).


>nowhere near ... feasible

On the other hand

>Waymo now testing its self-driving cars on public roads with no one at the wheel https://news.ycombinator.com/item?id=15644680

> Waymo is doing it live. Two months after the Alphabet self-driving car spinoff announced it would start running a truly driver-free service in Phoenix this year (as in, cars romping about with no one at the wheel), the company now unveils how it will do it: with the help of thousands more Chrysler Pacifica hybrids. https://www.wired.com/story/waymo-launches-self-driving-mini...


In related news Nvidia has also suspended its self-driving car tests in the wake of the Uber crash. Uber uses Nvidia’s self-driving technology in their cars.


Uber could actually be a viable business if they gave up on being a unicorn. Right now, you can step out of an airport in a new country, pull out your phone, type in an address, and someone will drive you there. They're a universal ride-hailing service, and they could probably even charge a premium for that versus local taxis. Unfortunately, they're probably in too deep to become a medium-sized business.

Also, building their own fleet of robo-taxis is insane when their current business model relies on (underpaying for using) people's personal cars.


Has anyone asked how many hours or miles that self-driving car drove autonomously before the fatality? It could very well be that the accident proves that, while horrible, self-driving cars are much less susceptible to fatal accidents. To date, I have not seen this data anywhere...


I think those statistics are difficult to get a complete picture on.

In California, at least, the manufacturers have to log miles driven in autonomous mode and the number of disengagement events over those miles.

That can be misleading though, since a car could log thousands of miles on a nice smooth stretch of interstate over and over, then spend significant amount of time disengaged in areas with pedestrians, cross walks, and stop signs.
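A toy illustration of that skew, with numbers invented for the example:

    # Invented numbers: mostly easy highway miles hide poor surface-street performance.
    highway = {"miles": 90_000, "disengagements": 9}    # 10,000 miles per disengagement
    surface = {"miles": 10_000, "disengagements": 50}   #    200 miles per disengagement

    overall = (highway["miles"] + surface["miles"]) / (highway["disengagements"] + surface["disengagements"])
    print(overall)  # ~1,695 miles per disengagement overall -- the 200 figure never shows up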

The same pinch of salt should be applied to stats that someone like Uber themselves claims about how many miles they have driven autonomously with no incidents.

We could probably draw conclusions about safety per mile travelled on highways but probably not in trickier surface street scenarios without a lot more detail from the manufacturers.


> “We proactively suspended our self-driving operations, including in California, immediately following the Tempe incident,”

They mean "reactively", not "proactively". Pathetic.


Uber is a dying company that is bleeding out.

Uber's current business model:[1]

Uber is the middleman in a two-sided marketplace. It connects drivers with passengers, and takes a cut for this service.

Uber's competitive advantage is that it had the capital to attract drivers - with lots of drivers on its network, costs & wait times are lower for passengers, so passengers use Uber. Cue flywheel metaphor.

BUT Uber is still not profitable.[2] Unlike Amazon, there are no economies of scale in this business. Cities are local markets - Uber's dominance in A doesn't help it in B. And, as Uber acquires more customers, its cost of driver acquisition increases. Uber is so big that in many cities, it is churning through drivers and needs to incentivize the current drivers to stay on the platform.

Counterpoint: This is all a short-term concern! Once drivers are eliminated from the equation, Uber will be insanely profitable!

Wrong.

Once self-driving cars go mainstream:

Uber's competitive advantage completely disappears. As a passenger, all I care about is getting from point A to point B safely, quickly, and cheaply.

As a manufacturer of self-driving cars, why would I sell or lease them to a middleman (or if we're getting extreme, even to consumers!) to take the profits in this space? As Tesla, GM, etc. I am going to pump out a fleet of self-driving cars and have consumers pay for a subscription plan (X number of minutes / miles per month on my fleet). Because this is a very capital-intensive business, the government will then grant local monopolies to different providers and regulate them as utilities, much as it does today with telecoms.

Uber provides no value in this scenario. The winners in this space are going to be car manufacturers and the developers (licensers) of the self-driving software that get legal approval (and it seems extremely unlikely that this will be Uber).

[1] Full disclosure, half of this is a repost of what I wrote on another thread earlier today.

[2] http://www.businessinsider.com/ubers-losses-narrowed-in-q4-b...


> no economies of scale

Brand is scalable. Company processes are scalable. There are many reasons to be a large company serving many cities.


Fair points. Perhaps I should have specified that the unit economics don't scale with expansion to new cities.


Uber's gameplan is to own every car once they develop self-driving technology.


By the way, Bezos himself says Amazon was his lottery ticket... and mind you, all lottery tickets have an expiry date :). Maybe this administration is going to tax the heck out of Amazon.


Which state/area is next on the list?


Is this the death of Uber?


I think the better question is "Is this the death of Uber's self-driving car program?".

Uber is likely to survive in some form without one, they have the ridership and can use someone else's self driving cars in the future. They will just get a much smaller piece of the pie.


I don't know - do they have the balls to walk into Waymo and ask to license their tech? The answer could be no - but Alphabet is an investor, so who knows...

Uber's dominance in the taxi app space + Waymo cred?


This was actually Uber's original plan. They only started their own because Google was giving them the cold shoulder and then announced they were making their own ride service using the cars.


I personally don't believe they can survive with human drivers (unit economics) and I personally don't believe they will catch up in engineering and technology of self driving cars, especially after this (but I never thought they would, too far behind from the start). So I would say maybe yes.

But there are probably ways out of this for them. They will probably chug along for at least a decade and in that time anything can happen.


Curious why you don't think they can survive with human drivers? Every financial report I've been able to get hard data out of seems to show them very close to operational profitability. Somewhere within 10-20%.

I personally don't feel there is that much price sensitivity in the market to make a 15% price increase cause volumes to dip substantially. I could of course be wrong, but it seems to me Uber is at a point they could be "instantly" profitable if they simply decided to be and decided to stop expanding.


Uber is already doing pretty well with price segmentation in the US (the Uber Express POOL comes with a forced 2 minute wait); they can certainly:

- raise prices of uberX; price-sensitive riders can still use POOL or Express POOL
- reduce marketing and growth expenses; reframe KPIs and targets in terms of "dollar ROI" instead of new rider/driver targets

Uber absolutely works from a unit economics perspective. How do you think taxis exist?


Taxis are usually a bit pricier and also don't subsidize a large team of software engineers at Silicon Valley salaries.


Here in NYC, the mortgage on a taxi medallion costs a lot more than the total salary of all Uber developers divided by the number of Uber drivers.

There are other costs that go into taxis that Ubers don't have. Taxis have back-office support staff too (the people you call up to order one to you).


Uber and Lyft are basically equivalent services in cities they both serve. A 15% price increase for one is enough to drive lots of people to just default to the other.


If there is a team with real software that Uber can acquire, they can catch up. GM and Waymo seemingly do not need another team. While they can outbid Uber, Uber can always close the deal by promising that the new team's software will be the new face of Uber self-driving and not shelved like it would be at GM or Waymo


Are there any teams out there besides GM and Waymo that were doing much better than Uber at this?


They can always buy their own autonomous vehicles. Or start incentives for drivers to own/lease them and drive with Uber, unless we get some laws that say you can't ride-share with an autonomous vehicle


They can probably survive with human drivers, they just won't survive with $100 bil valuation.


Maybe just the death of the idea that self-driving cars were somehow integral to Uber's business model. To the extent that vision was a major driver of valuations, and given their ongoing need to raise capital, perhaps it could lead to their demise. On the other hand, if this gets them to refocus on their current business, it might perhaps help their longevity.


Self-driving cars never made any sense as part of Uber's business model. What were they going to do, buy a fleet of vehicles, build garages, hire mechanics? They're a software company.


It's not like you drive millions of miles without a small fleet of vehicles. Not only is that what they would do at large scale, it's what they have already started to do at small scale.

Uber is a company whose current main product is software, but that doesn't mean they can't branch out into new things and are forever tied to software. I'd also argue that Uber already has a lot more experience working with a million different people than most other companies (currently drivers, in the future possibly mechanics, garage owners, etc.).


Why not? Google started out as a software company and now has plenty of different hardware lines.


Maybe they thought they'd franchise out the tech with lease and loan terms, the way they do for their cars now? It's a good point; I don't know what they were thinking, if anything aside from "that'll sort itself out when the time comes".


Pretty much, they lose money on every ride. They raised money on the vision that they would be able to cut out the labor cost with self-driving.


Probably just the death of the Uber self-driving program. Although that could perhaps spell death sometime in the future.


Won't they just focus the testing in other states? Nevada? Penn?


Good.



