Uber’s self-driving car could not detect pedestrians outside of a crosswalk (theregister.co.uk)
457 points by notlukesky on Nov 6, 2019 | 563 comments



> 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

System can't decide what's happening.

> It wasn’t until 1.2 seconds before the impact that the system recognized that the SUV was going to hit Herzberg

System is too slow to realize something serious is happening.

> That triggered what Uber called “action suppression,” in which the system held off braking for one second

A hardcoded 1 second delay during a potential emergency situation. Horrifying.

I bet they added it because the system kept randomly thinking something serious was going to happen for a few milliseconds when everything was going fine. If you ever find yourself doing that for a safety critical piece of software, you should stop and reconsider what you are doing. This is a hacky patch over a serious underlying classification issue. You need to fix the underlying problem, not hackily patch over it.
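For concreteness, here is a minimal sketch of what such a gate amounts to (mine, not Uber's actual code; the window and numbers are just for illustration):

    # Hypothetical "action suppression" gate: a hazard is flagged, but
    # braking is withheld for a fixed window to filter out false positives
    # from a flickering classifier.
    SUPPRESSION_WINDOW_S = 1.0   # the hard-coded delay in question

    def brake_command(hazard_detected: bool, seconds_since_hazard: float) -> float:
        if not hazard_detected:
            return 0.0                       # no braking
        if seconds_since_hazard < SUPPRESSION_WINDOW_S:
            return 0.0                       # "wait and see" -- at 44 mph this
                                             # burns roughly 20 m of road
        return 1.0                           # full braking, possibly too late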

How is this not the title of the story? This is so much worse than the "it couldn't see her as a person, only as a bicycle". At least the car would still try to avoid a bicycle, in principle, instead of blindly gliding into it while hoping for the best.

> with 0.2 seconds left before impact, the car sounded an audio alarm, and Vasquez took the steering wheel, disengaging the autonomous system. Nearly a full second after striking Herzberg, Vasquez hit the brakes.

And then top it off with systemic issues around the backup driver not actually being ready to react.


>> 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

The system should have started applying brake at this point. If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).

> That triggered what Uber called “action suppression,” in which the system held off braking for one second

This is borderline criminal negligence.

> with 0.2 seconds left before impact, the car sounded an audio alarm, and Vasquez took the steering wheel, disengaging the autonomous system. Nearly a full second after striking Herzberg, Vasquez hit the brakes.

Why were there no alarms going off at 5.6 seconds when the vehicle was confused!!!??

SMH. This is just ... I'm flabbergasted.


> The system should have started applying brake at this point. If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).

> Why were there no alarms going off at 5.6 seconds when the vehicle was confused!!!??

That sort of depends on the specifics of how their obstacle prediction and avoidance works - having a fuzzy view of what exactly something is at 5.6 seconds out is probably ok. The important bit is that it notices that the obstacle is moving out into the road and stops for it. Classification is not needed to avoid objects. The key words here are actually "at each change, the object's tracking history is unavailable" and "without a tracking history, ADS predicts the bicycle's path as static" which is a horrible oversight.
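Here is a rough reconstruction (mine, based only on the report's wording, not anyone's actual code) of why dropping the tracking history turns a crossing pedestrian into a "static" object:

    # Toy constant-velocity path prediction. 'history' is a list of past
    # (x, y) positions, oldest first, sampled every dt seconds.
    def predict_path(position, history, dt=0.1, horizon=10):
        if len(history) < 2:
            # No usable history -> the object is predicted to stay put,
            # even if it has in fact been walking across the road.
            return [position] * horizon
        vx = (history[-1][0] - history[-2][0]) / dt
        vy = (history[-1][1] - history[-2][1]) / dt
        return [(position[0] + vx * dt * k, position[1] + vy * dt * k)
                for k in range(1, horizon + 1)]

    # The failure described in the report is effectively:
    # on every classification change: history.clear()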

>> That triggered what Uber called “action suppression,” in which the system held off braking for one second

> This is borderline criminal negligence.

Yeah, this is. Even though it was too late to swerve, there was still enough time to slow down and attempt to reduce the speed of impact. _This_ is probably where an alarm should have fired - the car is now in a really unsafe state and the safety driver should absolutely know about it.

Disclaimer - Up until a few months ago I worked at a major (non Uber) self driving car company.


Here's a quick way to get safety in your collision avoidance system up to spec: randomly choose coders of the system to be the objects avoided during tests.


You might be able to literally count on two hands the number of accidents where we have this level of transparency into the thought processes and sensory data of the 'driver'. And control over what happens next time. There is no doubt that the engineering data gathered here and in other accidents is going to contribute to a massive (truly massive, unspeakably massive, enormously massive - all such adjectives are appropriate) reduction in both actual deaths from car accidents and statistical deaths from the huge amount of time people waste driving.

The big picture is so overwhelmingly positive that even if the engineers were purposefully running people over to gain testing data it would still probably be a large net win for greater society. Thankfully there is no call for such reckless data gathering.

If anything, punishments in this case should be much more lenient than normal rather than breaking out the cruel and unusual measures.


> If anything, punishments in this case should be much more lenient than normal rather than breaking out the cruel and unusual measures.

How is it cruel or unusual to subject someone to risks posed by a system they're building? The Wright brothers didn't hit up bars to find guys willing to drunkenly try and soar into the sky on an oversized kite. The vast majority of things that were ever invented involved huge risks and often injuries to the inventor. Why do people writing code get exemptions from that?

I agree with the GP: the hardcoded 1 second wait time sounds exactly like some hack I'd put in a bit of code, the key difference being, my code makes fun little applications on an iPhone work, it does not DRIVE A 3500 POUND VEHICLE IN PUBLIC.

I bet if that engineer thought it would be him in front of that thing's bumper, he would've put in a bit more work to figure out what was going on.


> How is it cruel or unusual to subject someone to risks posed by a system they're building?

More unusual than cruel, I must admit. We just don't do that anywhere.

If I were to guess why - if we did, people would either not work on experimental systems or would gold-plate the safety features way beyond what was reasonable. That sounds nice if you don't think about it too hard but in practice it would shave % off how much we produce in goods and services for no reason.

Aeroplanes are a great example. It sounds nice to say they are a super-safe form of transport, but actually thinking through the risks people face on their commute to work each day the amount of money we spend doesn't make sense. I mean, what if the risk of being in a plane crash was only as rare as being struck by lightning? And the ticket was much cheaper? That wouldn't be so terrible. I don't know a lot of people who were struck by lightning but I know a lot of people who don't have much money but take overseas holidays anyway.

> I bet if that engineer thought it would be him in front of that thing's bumper, he would've put in a bit more work to figure out what was going on.

Probably not in my experience. It is quite hard to get better performance out of a knowledge worker by threatening them. From what I've seen they tend to either resign or just do what they were already doing and hope. It isn't like anyone can stop writing buggy code by trying harder.


> More unusual than cruel, I must admit. We just don't do that anywhere.

Again I disagree, perhaps it would be unusual to do it via some sort of mandate after the fact, but the long history of human innovation involves tons of inventors risking life and/or limb to test safety systems of their own design. I'm reminded of the person who invented that special brake for circular saws: when it detected human flesh in front of the blade, it clamped down so hard it would often destroy the tool. The thing is, it also prevented you from losing a finger.

> if we did, people would either not work on experimental systems or would gold-plate the safety features way beyond what was reasonable.

I mean, maybe? I just think there's something lost there when you have a software engineer working on the code in a large company that's then testing on the unwitting public. I'm not saying it has to be a standing threat of "we're testing this on you" but, I mean, look what happened. An ugly hack that has no business in a codebase like this went in and someone was seriously injured, and I'm not saying the engineer necessarily deserved to be injured in their stead, but the victim surely didn't have any role in this, they were just in the wrong place at the wrong time.

> It isn't like anyone can stop writing buggy code by trying harder.

One person, no, but an organization can. Talking of airplanes, when you look into the testing and QA setups for something like Boeing, where the software is literally keeping people in the air alive, it's layer after layer of review, review, review, testing, analysis, on and on. Something like "if you think there's a pedestrian ahead, wait one second and check again" would NEVER have made it through a system like that.

You know, I'm all for innovation, but Silicon Valley's VC firms have a nasty tendency to treat everything like it's "just another codebase", forgetting that some of this stuff is doing real important shit. Elizabeth Holmes and Theranos come to mind.


> when you look into the testing and QA setups for something like Boeing

Boeing has a corporate lineage that extends back for more than a century and for most of that they did not have the levels of engineering safety excellence they can manage today. The culture that achieves near-perfect performance every flight is a different culture to the one that got planes off the ground back in the day.

And that goes to what I'm trying to communicate in this thread - people are bringing up examples of people deviating from standard practice in mature, well-developed industries where there are highly safe alternatives.

This is a different industry. Today in 2019 mankind knows how to fly a plane safely but does not know how to drive safely - I've worked in a high-safety environment where they were notching speeds down to 30kmph and 40kmph, down from 100kmph, on public roads because the risk of moving any faster than that just isn't acceptable. People were substantially more likely to die on the way to work than at it. They'd probably have brought in 20kmph if the workers would reliably follow it. Driving is the single highest risk activity we participate in. Developing car self-driving technology has an obvious and imminent potential to save a lot of lives. Now we aren't about to suspend the normal legal processes but anyone who is contributing to the effort is probably reducing the overall body count even if the codebase is buggy today and even if there are a few casualties along the way.

What matters is the speed with which we get safe self driving cars. Speed of improvement, and all that jazz. Making mistakes and learning from them quickly is just too effective; that is how all the high safety systems we enjoy today started out.

It is unfortunate if people don't like weighing up pros and cons, but slow-and-steady every step of the way is going to have a higher body count than a few risks and a few deaths learnt from quickly. We should minimise total deaths, not minimise deaths caused by Uber cars. Real-world experience with level 3/4 autonomous technology is probably worth more than a few tens of human lives, because it will very quickly save hundreds if not thousands of people as it is proven and deployed widely.


> The culture that achieves near-perfect performance every flight is a different culture to the one that got planes off the ground back in the day.

But those lessons were learned. We know how to do it now. Just like...

> Today in 2019 mankind ... does not know how to drive safely

Yes we do, and by and large we do it correctly. It's easy to think nobody knows how to drive if you spend 5 minutes on r/idiotsincars, but that's selection bias. For every moron getting into an avoidable accident each day there are millions of drivers who left home and returned completely safely.

You can make the argument that people sometimes engage in too-risky behaviors while driving, that I'd agree with, but people know how. Just like people know how to develop safety systems that don't compromise safety, even when they choose not to, as I believe happened here.

> Making mistakes and learning from them quickly is just too effective; that is how all the high safety systems we enjoy today started out.

But again, we know how to do this already. And again, my issue isn't even that someone got hurt while we perfected the tech, all of our safe transit systems today are built on top of a pile of bodies, because that's how we learned- my issue is the person hurt was not an engineer, was not even a test subject. Uber jumped the gun. They have autonomous vehicles out in the wild, with human moderators who are not paying attention. That is unacceptable.

There was a whole chain of errors here:

* Ugly hacks in safety control software
* Lack of alerts passed to the moderator of the vehicle
* The moderator not paying attention

All of these are varying levels and kinds of negligence. And someone got hurt, not because the technology isn't perfect, but because Uber isn't taking it seriously. The way you hear them talk these are like a month away from being a thing, and have been for years. It's the type of talk you expect from a Move Fast Break Things company, and that kind of attitude has NO BUSINESS in the realm of safety, full stop.


> The culture that achieves near-perfect performance every flight is a different culture to the one that got planes off the ground back in the day.

This is true, but those lessons that got them to that culture are written in blood.

Like the old saying, "Experience is learning from one's own mistakes. Wisdom is learning from others' mistakes." I don't think re-learning the same mistakes (process mistakes, not technological) is something that a mature engineering organization does.

One of my worries is that SV brings a very different "move fast and break things" mindset that doesn't translate well to safety-critical systems.

As for the rest of your post, what you're talking about is assessment of risk. Expecting higher levels of risk of an experimental system is fine, but there's a difference when the person assuming that risk (in this case, the bicyclist) doesn't have a say in whether that level of risk is acceptable.


>>If I were to guess why - if we did, people would either not work on experimental systems or would gold-plate the safety features way beyond what was reasonable. That sounds nice if you don't think about it too hard but in practice it would shave % off how much we produce in goods and services for no reason.

Certified engineers absolutely can and do go to prison for structural mistakes in their projects, especially if those mistakes lead to loss of human life, but sometimes even without it - the chief engineer of what was the tallest human-made structure on Earth (the radio mast near Warsaw) went to jail for 2 years after the mast collapsed due to incorrect design of the support structures.

Does it stop people from becoming engineers or from embarking on difficult projects? Of course not. If anything, it shows that maybe programmers working on projects like the autonomous cars should not only be certified the same way engineers are, but also held liable for what they do.


The problem here is that the engineers are likely working under an industrial exemption so there is no professional license liability assumed.

I personally have not come across a single software engineer or controls engineer who has had to certify a design.


These kinds of metrics should be mandatory for self-driving cars. Just because you can produce a painstakingly detailed timeline, say, doesn't mean you should be free from consequences if that timeline indicates you acted with wanton disregard that any reasonable person could tell would result in loss of life.

If someone is so stupid that they don’t realize that a car needs to be able to avoid people in the road, even outside crosswalks, they have no business doing development of self-driving cars. They simply aren’t qualified. Ditto if that person simply doesn’t care. We’re not learning anything from this that we didn’t already know. Of course if you do nothing to avoid a person or a bicycle at 40 MPH, you’re gonna kill someone eventually.

This kind of thinking reeks of the “it works with expected input, problem solved” that causes so many issues in critical software.

This is like Boeing with the 737 Max. They didn’t want to deal with failure conditions, so they avoided detecting them so they could pretend like they don’t exist. Subsequently, those failure conditions then caused two airplanes to crash and kill the people aboard.


Aeroplanes have 50 years of global experience in how to fly planes safely. Self driving cars aren't even a full product yet.

If (and it is a big if) there is one engineer who was grossly negligent then yes they shouldn't be working on self driving cars.

But it is far more likely that this was totally normal corporate ineptitude and will be fixed in the due course of normal engineering processes.

A safe culture is not built by people making wild guesses about what and why on forums. It is likely that the engineer responsible for this code is also going to be responsible on net for a very large number of lives saved and judgement of who may have been culpable for what should be left to the courts. Morally I'm happy to say that I believe not only is he or she in the clear but probably deserves a pat on the back for helping to push forward the technology most likely to save young lives that I've ever seen. I've had young friends who died from car crashes and ... not much else. Maybe drugs and diseases in a few rare cases. I want technology that gets people taking their hands off the wheel and I want it ASAP. It doesn't need to be perfect; it just needs to be about as good as what we have now and consistently improving. Anyone halfway competent who is working on that as a goal has my near total support.

This is not the time to be discouraging people who work on self driving cars. This is a time to do things properly, non-judgmentally and encouragingly while acknowledging that we can't have cars running people over if we can possibly avoid it and fixing what mistakes get found.


> Aeroplanes have 50 years of global experience in how to fly planes safely. Self driving cars aren't even a full product yet.

We have over a century of accident statistics for normal cars. Anyone pretending that pedestrians will not jaywalk on a well-lit street with low traffic should not develop self-driving cars and most certainly should not have a driver's license.

> it is likely that the engineer responsible for this code is also going to be responsible on net for a very large number of lives saved and judgement of who may have been culpable for what should be left to the courts.

Citation needed. Claims like that can be used to proclaim the most hardened serial killer a saint. I mean if we just let them continue they might start saving people at some point, point me to any "currently applicable" evidence that says otherwise.

> I've had young friends who died from car crashes and ... not much else.

In my area people die from old age and cancer. Maybe you should move to a place with sane traffic laws that doesn't let the mental equivalent of a three year old loose on the local population.

> Anyone halfway competent who is working on that as a goal has my near total support.

As a test obstacle on the road?


"But it is far more likely that this was totally normal corporate ineptitude and will be fixed in the due course of normal engineering processes."

Normal engineering processes were built on bodies. We do not want to minimize outrage here. Public anger and pressure is the only way to keep engineering process going.


Oh wait. So the creators are to be treated with leniency, but LITERALLY running over innocent bystanders is an unfortunate coincidence which should be endured for the brighter future? If you do not see why this is wrong, do I have news for you.


> There is no doubt that the engineering data gathered here and in other accidents is going to contribute to a massive

I'm going to strongly oppose this, and to support my perspective I present to you the Boeing / FAA fiasco.


Air travel is the safest form of transport we have. Perfection is impossible, but air safety is pretty close. They didn't get there by getting outraged every time an aeronautical engineer made a mistake; they got there because every time there was a crash they gathered data, thought carefully and made changes.

There is no call to get riled up because Uber as a corporate entity let a mistake slip through. The justice system will come up with something that is fair enough and we will all be better off for the engineering learnings. This is young technology, which is different from aircraft.


> Perfection is impossible, but air safety is pretty close.

Intentionally programming a system to ignore object history each time reclassification happens seems like a glaring oversight.

Sitting behind the wheel as a test driver using your phone seems like a glaring oversight.

I’ll continue with my rile until change occurs, cheers.


I think you are absolutely correct, and I'm disappointed again at the broader HN downvoters who lack any sense of perspective. It's like the hackers all turned into fuzzy-headed luddites here.


Come on. It's one thing to be run over by a fellow human being, it's another to be run over by a system developed by a corporation in the pursuit of greater profits.

As a species, we've long accepted that living around each other poses some hazards and we've made our peace with it.

But to stretch that agreement to a multi-billion dollar corporation that only wants to make money off it? That's too much to ask for.


I think the "for-profit" by a "multi-billion dollar corporation" thought process is clouding your judgment.

It is such a trite and overused argument.

Why does it feel so different to you if the accident was caused by negligent human texting, versus an engineer making decisions in code?

If anything that engineer was most likely operating in much better faith than the texting, perhaps drunk human driver.


It's a rather complicated question. Intention matters a lot when it comes to dealing with on-road incidents. If there is no intention, we deem it an "accident", else we deem it manslaughter. The punishments - by courts and by society - are much harsher for the latter.

Can code be "accidental"? Surely not. Someone had an intent to write and implement it. If it fails, it's not "accidental"; the system was just programmed that way.

So the question is: are we okay with for-profit companies intentionally writing software that can lead to deaths?


Ah, sure: The New World Order requires sacrifices, as it's axiomatically Good, Correct, and Right. Anyone opposing it is the Enemy. (Now technooptimism-flavored: when Friend Computer runs you over, it's because you were secretly a traitor.)


I’d add the managers and VCs that probably put all the pressures in the world onto the developers, who end up doing hacks just to meet the unreasonable expectations of their employers.


I know you jest, but it's very easy to not reliably kill anybody. Just don't let the car move at all, or stop every time it detects something as large as a mosquito. What's hard is to not kill anybody and actually drive.


Damn straight. These fuckers (and their executives) should be the ones whose lives are on the line, not innocent people on random roads.


>>The important bit is that it notices that the obstacle is moving out into the road and stops for it

The problem here is that if I'm doing 70mph+ on the motorway and I see a plastic bag flying in my path, the correct course of action isn't to brake hard or frantically try to avoid it - but a computer cannot tell the difference between a soft plastic bag and something more rigid. If the only classifier is "there's an object in my way, I need to stop" then that's also super dangerous. In fact, there are cases where unfortunately the correct course of action is to keep going straight and hit the object - you shouldn't try to avoid any kind of small animal, especially at higher speeds - yes they will wreck your car, but a panicked pull to the left or right can actually kill you. As a human you should do this instinctively - another human = avoid at all cost, small animal = brake hard but keep going straight, plastic bags = don't brake, don't move, keep going straight. How do you teach that to a machine if the machine can't reliably tell what it's looking at?
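One hedged way to frame that in code is to brake on expected harm rather than on a single hard label - all the classes and numbers below are made up for illustration, not anything a real stack uses:

    # Toy policy: weight braking by classification uncertainty and by the
    # worst plausible consequence of being wrong.
    SEVERITY = {"pedestrian": 1.0, "animal": 0.4, "rigid_debris": 0.3,
                "plastic_bag": 0.0, "unknown": 0.8}   # unknown -> assume bad

    def brake_level(class_probs):
        """class_probs: dict mapping class name -> probability."""
        expected_harm = sum(p * SEVERITY.get(c, 0.8)
                            for c, p in class_probs.items())
        if expected_harm > 0.5:
            return 1.0    # brake hard, stay in lane
        if expected_harm > 0.1:
            return 0.3    # ease off and look again
        return 0.0        # e.g. confidently a bag: keep going straight

    # A 30% chance that the "bag" is actually a person is already enough
    # to start shedding speed:
    print(brake_level({"plastic_bag": 0.7, "pedestrian": 0.3}))   # -> 0.3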


My conclusion is that until you have hardware and software that can classify objects as well as a human, you shouldn't even consider testing on public roads. I can't understand the optimism that with such bad hardware and software, but with tons of training, it will magically get better than a human. Also, we need some tests before we let anyone put their self-driving car on the streets: should I be able to hack my car, npm install self driving, and then test on your street?


I don't have a great answer to that - I don't know Uber's capabilities and I can't really speak to my previous company's capabilities either. Sorry!

Edit: I will say that "size and relative speed of the object" is an excellent start, though.


My guess from the NTSB description is that their tracker had object category as part of the track information, and that detections of one category weren't allowed to match a track of another category. This is useful if, say, you are tracking a bicycle and a pedestrian next to each other. When you go to reconcile detections with tracks, you know that a bicycle isn't going to suddenly turn into a pedestrian, this rule helps to keep you from switching the tracks of the two objects.

Unfortunately that also means that when you have a person walking a bicycle that might be recognized as either one, this situation happens, and they initialize the velocity of a newly tracked object as zero.

Even this though shouldn't have been enough to sink them, because the track should have had a mean-zero velocity, but with high uncertainty. If the uncertainty were carried forward into the planner, it would have noted a fairly high collision probability no matter what lane the vehicle was in, and the only solution would be to hit the brakes or disconnect.

Furthermore, if you've initiated a track on a bicycle, and there's no nearby occlusion, and suddenly the bicycle disappears, this should be cause for concern in the car and lead to a disconnect, because bicycles don't just suddenly cease to exist. They can go out of frame or go behind something but they don't just disappear.
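A rough sketch of that guess (entirely hypothetical, not Uber's tracker): detections only associate with same-category tracks, and a brand-new track starts at zero velocity - which is exactly why carrying the velocity uncertainty forward into the planner matters so much.

    import math

    class Track:
        def __init__(self, category, position):
            self.category = category
            self.position = position          # (x, y) in metres
            self.velocity = (0.0, 0.0)        # new track: zero velocity...
            self.vel_sigma = 10.0             # ...but large uncertainty (m/s)
                                              # that a planner should respect

    def associate(detection, tracks, gate_m=2.0):
        """Category-gated nearest-neighbour association."""
        candidates = [t for t in tracks
                      if t.category == detection["category"]    # the gate
                      and math.dist(t.position, detection["position"]) < gate_m]
        if candidates:
            return min(candidates,
                       key=lambda t: math.dist(t.position, detection["position"]))
        # Reclassified object -> no same-category track -> start a fresh
        # track whose velocity (and history) is effectively reset.
        new_track = Track(detection["category"], detection["position"])
        tracks.append(new_track)
        return new_track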


> Classification is not needed to avoid objects.

Taking your disclaimer into account, that is a very wrong assumption. If you classify an object wrongly that means your model will not be able to properly estimate what the object's capabilities are, including its potential acceleration speed and most likely trajectory from a stand-still. And that information will come in very handy in 'object avoidance' because a stationary object could start to move any time.


Yes, I suppose I wasn't exact when I stated that. Classification helps very much, but is no excuse to throw away other data you have. It should be an augmentation to the track the object has.


Then we are in agreement.


> Classification is not needed to avoid objects. The key words here are actually "at each change, the object's tracking history is unavailable" and "without a tracking history, ADS predicts the bicycle's path as static" which is a horrible oversight.

Was it an oversight, or were they having trouble with the algorithm(s) when keeping the object history and decided to purge it?


Pure speculation incoming: probably a completely unintended bug. Classify as static, static objects shouldn't have tracks, clear the history. Your object is reclassified as dynamic, whoops, we already cleared the history, oh well.


Static is not a classification; in this case the classification was cycling between "unknown", "vehicle", and "bicycle", and each time the classification changed the history was reset. Then each detected object can be either static or be assigned a predicted path, but unknown objects without a history are always treated as static.

In any case, the history clearing seems to be a bug, they updated the software to keep the position history even when changing the classification.
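A tiny sketch of what that fix presumably looks like (my reading of the report, not the actual patch): key the history to a persistent track ID instead of tying it to the label, so relabelling no longer throws the motion history away.

    # Position history keyed by track ID survives any number of label flips.
    histories = {}   # track_id -> list of (t, x, y)
    labels = {}      # track_id -> latest classification

    def update_track(track_id, label, t, x, y):
        labels[track_id] = label                                # label may flip freely
        histories.setdefault(track_id, []).append((t, x, y))    # history is kept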


In your example "unknown" is implicitly "static". Either way, neither of us have access to Uber's source history to say what exactly went wrong. They do claim to have fixed it though, you're right.


Strange that the history itself is not used for the classification.


Why aren't they driving the cars with the driverless software in learn mode, scaled up to many human drivers that are training the autonomous classifiers' edge case dataset e.g. human driver stops for this particular shaped object (person in darkness entering street) in 99% of cases, so we stop as well. (As opposed to not stopping for a floating grocery bag where it applies the opposite rule to keep driving based off recorded human driving behavior)

Also, we can detect human gait already can't we? There has to be better signifiers for human life that we can override these computer vision systems with to prevent these tragedies.

What about external safety features, such as an external airbag that deploys on impact, cushioning the object being struck if the system senses immediate collision is unavoidable. Some food for thought for anyone working in this space, would love to hear your thoughts and ideas.


> Why aren't they driving the cars with the driverless software in learn mode, scaled up to many human drivers that are training the autonomous classifiers' edge case dataset e.g. human driver stops for this particular shaped object (person in darkness entering street) in 99% of cases, so we stop as well. (As opposed to not stopping for a floating grocery bag where it applies the opposite rule to keep driving based off recorded human driving behavior)

This basically is being done already - recorded data is hand labeled as ground truth for classifiers. "human in darkness entering the street" might be a rare case, though. Humans in the dark on the sidewalk should be pretty common.

> Also, we can detect human gait already can't we? There has to be better signifiers for human life that we can override these computer vision systems with to prevent these tragedies.

I'm not a perception expert - it looks like in this case the victim was mostly classified by LIDAR capture. I don't know if the cameras were not good enough for conditions or if their tech was not up to the task. This is sort of a red herring though - even if the victim was classified as "bicycle" or "car" she still should have been avoided.

> What about external safety features, such as an external airbag that deploys on impact, cushioning the object being struck if the system senses immediate collision is unavoidable. Some food for thought for anyone working in this space, would love to hear your thoughts and ideas.

Why not do this on all cars?


A bit off topic, but what is your overall view of the self driving sector, aside from the more egregiously irresponsible players?


Long term optimistic. Short to medium term is fuzzy for me. :)


> Disclaimer - Up until a few months ago I worked at a major (non Uber) self driving car company.

That'd be a disclosure. (When you disclose something, it's a disclosure.)


> That'd be a disclosure.

It's a disclosure that is (among other things) disclaiming neutrality, so also a disclaimer.


> It's a disclosure that is (among other things) disclaiming neutrality

That being the purpose of a disclosure, my point stands. What was yours?


My point is that your pedantry claiming as wrong a reference to one completely accurate description, because another description, which is not exclusive of the one provided, also applies, is itself wrong, and, further, that the personal insults based on it are, as well as being unnecessary, infantile, off topic, and counter to the HN guidelines, also unjustified.


[flagged]



> If you being shown to be incorrect is a "personal insult",

The whole point is that the person in question was not incorrect at all, they simply used one of two perfectly accurate descriptions when you preferred the other. Your description of them as having been incorrect was incorrect. (I note, though, that you have since edited the personal insults about the fitness of the poster you purported to correct for their prior job from your post, which would be commendable were it not directly linked to your dishonest pretense that that insult did not exist and that a reference to an insult in your comment must be to the mere act of proposing a correction.)

Also, the person in question was not me, I'm just the one who called you out on the error in your “correction”. So for someone so critical (incorrectly) of others’ failure to pay close attention to detail, you are doing a poor job of demonstrating attention to detail.


The insult was in your later, snide reply. “What was yours?”


>This is borderline criminal negligence.

Thank you for flatly and plainly stating this. That is unacceptable and anyone involved with approving that decision should be prosecuted in connection with the death.


I'd also argue that US government should shut down Uber's self-driving research division. If they want self-driving cars they can license from Waymo, or someone else who is actually taking it seriously.

It's pretty clear that Uber's techbros have some serious culture issues. They have shown they are incompetent and should be disqualified by regulators.


There’s a reason Waymo is currently deployed only around Chandler, AZ, and I’m pretty sure it’s consistent building codes as they relate to public highways. They picked a different optimization path to get to MVP than Uber. But you can be assured that too is a shortcut.


I'm a little biased but I don't think Waymo is shortcutting this. If you look at the data they're looking at and how it's being processed, it's several orders of magnitude more reliable than what Uber was using. I have yet to see a sensor data video from Waymo that hasn't targeted all obstacles successfully, even if they're not categorized right. If anything, I think Waymo is, in some cases, being too conservative with its driving data (although, admittedly, I don't think I really mean that since the alternative is the loss of human life). If you've ever driven behind a Waymo vehicle, they're annoyingly strict when it comes to following posted speed limits, stopping for obstacles, and reacting/erring on the side of caution. It's an infuriating exercise in patience but I hope that it will pay off in the long term.


> If you've ever driven behind a Waymo vehicle, they're annoyingly strict when it comes to following posted speed limits, stopping for obstacles, and reacting/erring on the side of caution. It's an infuriating exercise in patience but I hope that it will pay off in the long term.

I've noticed when I am riding with a good (human) driver who obeys pretty much all rules of the road and drives slower than the posted speed limit (at a speed they are comfortable driving) that there are always a few drivers behind us who will honk their horns, flash their lights, or even pass into the next lane just to pass back a little too close for comfort. I don't know if they are always horrible humans but that's beside the point.

How would a self-driving car react in such a situation? Would human drivers be better behaved around a Waymo vehicle because the Waymo car has a lot of cameras and sensors and can pretty much show a clear cut case of road rage?


This is exactly where I think that Waymo wouldn't get into that situation. A Waymo vehicle would let the car in as its cameras will actually react to the turn signal. Technically, a line of cars could stop a Waymo vehicle in its tracks just by getting next to it and turning their blinkers on. The car would try to yield and zipper merge but will almost always defer to letting cars in if they get too close. Again, it's super annoying as a human but I think I prefer how conservative they are when it comes to safety.


Speed limits are generally set too low. Someone driving even slower than the posted limit can often be assumed to be intoxicated or impaired.


Sorry, any speed limit will be "too low"; people will consistently drive at appx the speed limit +15%, whatever the number. Shifting the speed limit to old limit × 1.15 will result in speeds of new limit × 1.15.


That's not how it works, because it turns out that drivers are generally not suicidal.

The studies have been done, they've been replicated, and the statistics are very clear. Google the '85th percentile rule'.


I haven’t had the experience of driving in proximity to a Waymo vehicle like you have. Your statement could be viewed as taking a conservative approach to that automation is in itself a shortcut. That of course is speculation; I feel confident that when it comes to automated driving by virtue of the infancy of the technology, there are things we don’t know that we don’t know yet.


>Your statement could be viewed as taking a conservative approach to that automation is in itself a shortcut.

Can you expand on that? I'm not sure I follow what you're saying here.


I’m thinking the conservative approach in this context is a shortcut to solving all of the problems. For example, hitting the brakes at any unrecognized object keeps the passengers safe, and is a shortcut in that the recognition models for that particular object don’t have to be ironed out. Or, selecting a test city like Chandler, approx 60 years old, having the signal pole and crosswalks positioned at the same offset from every intersection is in itself a shortcut. That tech will never work in The Hollywood Hills, for example.


What happens when Waymo is in an accident and you shutdown their division too?

Human drivers in fatal accidents don't have their licenses permanently revoked and there's no outrage.


"permanently"

Humans have their licenses temporarily revoked because it's assumed that humans can change their ways (maybe or maybe not a reasonable assumption).

With a computer, you can't expect it will get better unless its programming is improved. So isn't that reasonable grounds for "permanent" revocation, at least in the sense of indefinite, where you wouldn't do the same thing to a human?

Criminal negligence resulting in death by motor vehicle is a minimum of six months license revocation in NY.


It isn't (just) that there was an accident and somebody died.

It's that this quality of software was allowed to operate in a capacity where that could happen.


I interviewed an Uber engineer once, from their self-driving division. Incredible arrogance. Unforgettable. He clearly believed he was the pinnacle of creation, the best software engineer around.

I asked him a few questions, gently probing his experience and skills. Aside from his arrogance there was nothing special. No love of technology. No depth. Nothing. We didn't hire him.

Companies have different cultures. Companies hire and promote differently. And it matters.


I think it'll be hard to come up with a legally defensible reason to demand that they shut down all research efforts without looking like it's a vendetta.

Also, Uber's business plan is 100% to maintain share in the rideshare market until their cars drive themselves. That is the end game here. To shut down their self-driving research is to doom the company entirely.

Not that I'd be against that. Would be great to see Uber die. It's already caused enough headache and pain.


> I think it'll be hard to come up with a legally defensible reason to demand that they shut down all research efforts without looking like it's a vendetta.

They don't have to shut down their research. I don't care if they want to write software and play on private tracks.

As I see it, Uber should be explaining exactly why their privilege to endanger the public by operating unsafe robots on public roads should not be revoked for criminal incompetence.


Their business plan is and always was a stupid plan by alarmingly stupid leadership. Local or national leaders have no obligation to literally sacrifice more citizens to enable it.


I always thought the self-driving car was an excuse. Something like "We are losing money now, but if you invest in us we will magically be profitable in the future."


> Also, Uber's business plan is 100% to maintain share in the rideshare market until their cars drive themselves. That is the end game here. To shut down their self-driving research is to doom the company entirely.

Uber's business is clearly more important than lost human lives due to negligence of proper software safety measures. /s


A fatality should do nicely. /s


> This is borderline criminal negligence.

Yes, but on the border between “criminal negligence” and “depraved indifference” [0] not the border between “criminal negligence” and “innocent conduct”.

[0] https://en.m.wikipedia.org/wiki/Depraved-heart_murder


> If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).

Well, also, any of those categories should trigger braking themselves. You can't hit vehicles or bicycles. You can't hit "other" either. If you plan to hit something, it needs to be a piece of paper or some such. As a_t48 points out, classification is not needed to avoid objects, so being confused about what the object is is totally irrelevant.


This. Whether it's a speed bump, trash, tire tread, tow hitch, bicycle, animal, or person in the road, it does significantly more damage at 35mph than it would after a reasonable decrease to 25mph.

Basically, absent vehicles behind you and to each side, a driver should stop until the object can be identified and it can be determined whether it will do damage or can be avoided.
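Back-of-envelope support for that, since crash energy goes with the square of speed (rough arithmetic, nothing vehicle-specific):

    def kinetic_energy_ratio(v1_mph, v2_mph):
        # mass cancels out, so the ratio depends only on the two speeds
        return (v1_mph / v2_mph) ** 2

    print(kinetic_energy_ratio(35, 25))   # ~1.96: nearly twice the crash energy at 35 mph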


> Well, also, any of those categories should trigger braking themselves. You can't hit vehicles or bicycles.

A bicycle on the side of the same lane will typically move in the same direction and stay on the side. Thus the car has to overtake.

That is different from a person crossing the street.

You always have to make assumptions about elements in the path. And the system has to have an option to handle the worst case.


> Why were there no alarms going off at 5.6 seconds when the vehicle was confused!!!??

I wonder if it's because their system is not reliable enough yet; a continuous alarm isn't useful.


Then it can't be deployed on public roadways!


You know, I don't usually like this quote when people deploy it, because it rarely applies. Frankly I find it questionable even in the original movie. But it applies here like gangbusters: "Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."

There are a lot of commenters here who seem to be operating on a subconscious presumption that being "the best we could do" is some sort of defense. It's not. If the best you can do is not good enough, don't deploy it.


Considering the aforementioned one second "got a weird result, wait and see if it goes away" system, this seems highly likely. If everything's an alarm, nothing is.


It all makes sense if you realize that Uber’s tech fundamentally doesn’t work, and if you programmed it to react immediately or set off alarms immediately the false positives would make it unusable.


I think it's a bit vaguely described, but I think the pedestrian wasn't in the path of the vehicle at 5.6 seconds since they were walking perpendicularly. The system couldn't classify the person or tell that they were walking into its path. Or at least that's what I gathered from it. I guess if it classifies something as unknown it basically would have no behavior? I am just speculating.


> If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).

Don't be so sure. What if it's a bird, and you're traveling at 70 mph on a busy freeway? Slowing down or stopping could result in getting rear-ended or worse.

Clearly more testing is needed, and more work needs to be put into identifying objects early.


The first thing a driver needs to learn is how to slow down and stop safely.

If you're traveling at 70 mph (which the vehicle wasn't) and you see something up ahead you're not sure about, you can slow down a bit. If you get rear ended when you take your foot off the accelerator, that's not your fault.

If you're still confused, light braking is appropriate. At some point, you'll know to either fully brake or go on through.

This system doesn't need more testing. It needs to be designed so it can stop safely and so that it will stop safely and alert the operator in case it is unable to determine if it's safe to proceed.


If you can't tell a bird from a car or a bike you have zero business being in the self driving car arena.


It's not that I disagree with you, it's just by your standards, [almost] no one should be in the self-driving car arena.


That's quite possibly a reasonable assessment of the situation.


...yet?

At least this doesn't seem to be that hard of a line to draw in terms of competency before allowing on public roads


Not on public roads.


What do you do when your radar says it could be a person, but your vision sensor tells you it's probably a bird?


Go back to the drawing board.


Stop anyway.

And admit we're going to continue to need human safety drivers...


Uber: Block any remediation actions, wait 1 second and hope for the best?


You slow down, because even if it's a bird, cat, dog, or armadillo it does damage to your car (including getting it bloody) if you hit it at a higher speed.


You quit and then get a job at McDonalds, because you're clearly not qualified for self-driving cars development.


So you need to wait for sensors that have 100% accurate object detection before you can work on a self-driving car?

Meanwhile, you let humans who are far less than 100% accurate (and are often distracted) keep killing people? I went into a panic stop one rainy night when a billowing piece of plastic blew across the road in front of me, it looked exactly like a jogging person to myself and my passenger.

We don't need self-driving cars to be perfect before they are released, just better than people.


If you really want self driving cars to happen then you will need to take one very important factor into account that has absolutely nothing to do with technology: human psychology.

People are not going to be swayed by statistics, they are going to be swayed by their emotions, and if the self-driving car attempts that 'hit the road', so to speak, are sub-par in the public perception, even if they might be a statistical win, then they will get banned. So if you are really gung ho on self-driving happening then this should concern you too, because otherwise you will never reach that goal.

Self-driving cars need to be - right from the get go - objectively, obviously better than human drivers in all everyday situations. Every time you end up in a situation where a self-driving car kills someone, human drivers - and all their relatives and the press - are going to go 'well, that was stupid, that would not have happened to me' and then it will very soon be game over. And you won't get to play again for the next 50 years.

See also: AI winter.


The way it works in safety-critical software safety analysis, in my experience, is that you have a hazard analysis / failure modes and effects analysis that factors in severity x probability (and sometimes a detectability measure).

So if you identify a failure mode that contributes to a catastrophic hazard, for instance, you had better build your system to drive the probability down. The resultant severity x probability score you end up with has to fall within the risk parameters deemed acceptable by management/safety.
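A minimal sketch of that scoring (generic FMEA-style buckets, not any particular standard's tables; the threshold is invented):

    SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
    PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}
    ACCEPTABLE_RISK = 6   # example threshold set by management/safety

    def risk_score(severity, probability):
        return SEVERITY[severity] * PROBABILITY[probability]

    # "Fails to brake for a pedestrian" is catastrophic, so even an
    # "occasional" failure (4 x 3 = 12) sits far outside the acceptable
    # region -- the design has to drive the probability down instead.
    assert risk_score("catastrophic", "occasional") > ACCEPTABLE_RISK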


Self-driving cars are far away from matching human driver skill level right now. We do not need them to be "perfect", we need them to stop making such silly mistakes as the one described in the article - that is a very basic requirement before they can be allowed on public roads.


> Self-driving cars are far away from matching human driver skill level right now.

Well, no. Have any of the systems been tested in heavy snows on icy roads or on a road without maps?

> silly mistakes

A person died.


> A person died.

Lots of people die in car accidents.

Nearly 40,000 people die each year in car accidents. At least 8 of the top 10 causes of accidents would be improved with self-driving cars.

1. Distracted Driving
2. Drunk Driving
3. Speeding
4. Reckless Driving
5. Rain
6. Running Red Lights
7. Night Driving
8. Design Defects
9. Tailgating
10. Wrong-Way Driving / Improper Turns

> Well, no. Have any of the systems been tested in heavy snows on icy roads or on a road without maps?

Aside from figuring out where the edge of the road is, the biggest accident risk that I've seen with driving in heavy snow is speed -- no one wants to drive 15mph for 2 hours through a snowstorm to stay safe, so they drive 30 - 50mph instead.

And I'm not sure how to solve the road visibility issue with self driving cars, but presumably the same heuristics that humans use could be emulated in software (which I suppose is primarily following the car ahead or its tracks or looking for roadside signs that mark the edge of the road).


> Lots of people die in car accidents.

I doubt anyone would refer to those as silly mistakes either.

My point with the second part is that humans have proven driving in snowstorms and places that aren't fully mapped is possible, something that self-driving cars have not.


> We don't need self-driving cars to be perfect before they are released, just better than people.

Who goes to jail when the self driving car kills people?


This is a real dilemma. IIRC, some car companies have already stated they will take responsibility


No one's going to jail for MCAS auto-correcting planes into the ground.


Yet. It may still come to that. If the Germans can go after Martin Winterkorn long after he thought he was in the clear then this may still happen as well.

https://www.theverge.com/2019/9/24/20881534/volkswagen-diese...


Are there lawsuits/investigations still pending, or are they over and no one from Boeing was found to be guilty?


The Boeing story is only getting started for real. It may take a new head of the FAA before the whole thing will be tackled frontally but eventually that will happen. The USA can't afford an FAA that is without respect.


Depends. If it's a hardware failure, then no one should go to jail, just like today, if my wheel falls off (through no fault of my own) and I run into a child, I wouldn't expect to go to jail (heck, drivers already get pretty much free rein to run down pedestrians by saying "I didn't see him"). The car manufacturer may have some financial liability if it was a product defect, but again no jail time.

The interesting moral dilemma is what to do if the car decided it was better to run into a pedestrian and protect the driver than to run into a brick wall and protect the pedestrian.

There's no easy answer to that dilemma.

https://www.nature.com/articles/d41586-018-07135-0


The choice shouldn't be between human drivers and human supervised computer drivers. Computer supervision of human drivers is viable, effective, and allows for evolutionary progress.


But may be less safe than fully computer controlled cars unless that computer supervision is able to take control completely -- humans tend to view safety features as an excuse to push the envelope.

"I can text because my car will warn me if I run off the road or stop me before I hit the car in front of me"

"I don't need to slow down on this snowy road, ABS will help me stop safely"

"Sure, I'm driving too fast for this road, but my airbags and seatbelts will protect me if I crash"

https://www.wired.com/2011/07/active-safety-systems-could-cr...


There were articles in July 2017 about Volvo's self-driving cars not coping with kangaroos (they bounce) which make up the majority of car/animal collisions in Australia. A kangaroo is a lot bigger than a bird but couldn't be identified as being on the road.


Yes, but we can also say what a responsible self-driving car should be detecting/recording/deciding at that point; paraphrasing:

- unknown object detected ahead, collision possible to likely

- dangerous object (car) detected approaching from rear with likely trajectory intercepting this vehicle (people easily forget multitasking/sensing like this is something an autonomous car should be able to do better than a human, who can only do relatively intermittent serial scans of their environment)

- initiate partial slow down and possibly change path: make some decision weighting the two detected likely collision obstacles.

You do not have to slam on the brakes and be rear-ended, but speed is a major factor in fatal crashes, so even if you can drop 30% of your momentum by the time of impact and avoid the rear end, that's still a responsible decision.

And we can accept that sometimes cars are put in potential no-win situations (collision with two incoming objects unavoidable).

What's a negligent/borderline insane decision? Put a one second hard-coded delay in there because otherwise we have to admit we don't have self-driving cars since we can't get the software to move the vehicle reliably if it's trying to avoid its own predicted collisions.

(Another issue is an inability to maintain object identity/history and its interaction with trajectory prediction... personally, IMO, it is negligent to put an autonomous car on a public road that displays that behaviour, but that's just me)
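To make the weighing above concrete, here's a toy version (all thresholds invented, not from any real system) that trades frontal impact speed against the risk of being rear-ended:

    def choose_decel_ms2(gap_behind_m, closing_speed_behind_ms,
                         max_decel=7.0, comfortable_decel=3.0):
        # Rough time the trailing car has before reaching us if we brake.
        margin_s = gap_behind_m / max(closing_speed_behind_ms, 0.1)
        if margin_s > 2.0:
            return max_decel           # plenty of margin: brake hard
        if margin_s > 1.0:
            return comfortable_decel   # shed speed without inviting a rear-end
        return 1.5                     # still slow down some; impact speed is
                                       # the main driver of fatality risk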


> What if it's a bird,

Conversely, there are many things that are bird-sized that can do significant damage to a car and even be fatal for the people in the car behind you. E.g.: a trailer hitch caused one of the first Tesla battery fires; loose pieces of pavement have been known to be kicked up and kill someone in a following car.


No it shouldn't have applied the brakes at that point. There are lots of vehicles around in most cases. If the car braked every time it spotted another vehicle it would almost never move. Only take corrective action if the paths look like they will intersect instead.


At five seconds from impact it absolutely should have. And if it couldn't tell that it was that close to the "vehicle" that's even more reason it shouldn't be anywhere near the road in this state. The system should be able to behave cautiously in situations where input is erratic. Good human drivers don't go full speed ahead when they can't see.


It took over four seconds for the system to decide swerving was not sufficient based on path prediction. The improvements made after this report would have initiated braking seconds earlier.

-5.6s | 44 mph | Classification: Vehicle (by radar) | Path prediction: None; not on the path of the SUV

- Radar makes the first detection of the pedestrian and estimates its speed.

...

-1.5s | 44 mph | Classification: Unknown (by lidar) | Path prediction: Static; partially on the path of the SUV

- Lidar detects an unknown object; since this is a changed classification, and an unknown object, it lacks tracking history and is not assigned a goal. ADS predicts the object’s path as static.

- Although the detected object is partially in the SUV’s lane of travel, the ADS generates a motion plan around the object (maneuver to the right of the object); this motion plan remains valid—avoiding the object—for the next two data points.

-1.2s | 43 mph | Classification: Bicycle (by lidar) | Path prediction: The travel lane of the SUV; fully on the path of the SUV

- Lidar detects a bicycle; although this is a changed classification and without a tracking history, it was assigned a goal. ADS predicts the bicycle to be on the path of the SUV.

- The ADS motion plan—generated 300 msec earlier—for steering around the bicycle was no longer possible; as such, this situation becomes hazardous.

- Action suppression begins.

* The vehicle started decelerating due to the approaching intersection, where the pre-planned route includes a right turn at Curry Road. The deceleration plan was generated 3.6 seconds before impact.

https://dms.ntsb.gov/public/62500-62999/62978/629713.pdf


> If the car braked every time it spotted another vehicle it would almost never move.

In this particular case I would be perfectly ok with that. If you can't operate a vehicle safely then coming to a stop or failing to get moving is fine.


When I drive, there are things I see that I'm not sure what they are, but I don't care, since they're stationary over there and I'm going this way over here instead. I don't stop to take a good look before continuing.


What we're talking about here is when they're in your path and you can't tell what they are. And that difference is very important, because the dynamic characteristics of a pedestrian, bicycle, car or other object should factor into your model of their future trajectory and speed.


"There's something straight ahead of me, unclear what. Let's just ram into it, full throttle!" If that's your thought process, please cease driving immediately and permanently.


If the car can't detect what's in front of it, then it most definitely should brake. This is up to the engineers to solve, not for a vehicle to continue blindly into a dangerous situation.


> There are lots of vehicles around in most cases.

In this case, we know there weren't many vehicles around.

This very scenario is a great example where I'd want a car to stop if it saw a deer or even a dog or armadillo.

> If the car braked every time it spotted another vehicle it would almost never move.

In defensive driving it's often taught that you are supposed to slow whenever you're approaching another vehicle and don't know what it's doing. You're supposed to exercise caution at intersections, and definitely supposed to exercise caution when passing, when being passed, and when there are things in the road or other people or vehicles on the side of the road.


> This is borderline criminal negligence.

Counterargument: the safety driver is supposed to be prepared to take control at any time, should the vehicle do something unsafe.

It seems to me to be clearly criminal to use this code to run a fully automated environment (i.e. no safety driver). It's not clear to me what the expectations should be of the code when there is supposed to be an attentive safety driver in the vehicle.

I think the safety driver is going to jail, because they were watching videos instead of watching the road at the time their vehicle killed a pedestrian.


> Counterargument: the safety driver is supposed to be prepared to take control at any time, should the vehicle do something unsafe.

That's impossible. This has been tested in countless studies with train drivers and pilots, and it is absolutely impossible for a human to stay alert if an 'almost good enough' computer system is in the driver's seat.

By the time you're needed your situation awareness will be nil.


Counter-counterargument: The safety driver should have something to do, and "something is moving closer to the car in a way we're not expecting" would be a great thing to show the safety driver.

Even if this was constantly happening, it would give the safety driver some sense of purpose - their job would be constantly figuring out "is this a real thing or not" - and then they wouldn't be bored out of their mind and be watching videos.


> The safety driver should have something to do

I agree with that, and the NTSB should consider adding this to their requirements when approving test programs of this sort.

But stepping back, I think there's a very significant difference in culpability between "safety driver couldn't react in time because they zoned out" and "safety driver was watching a sitcom". In the first case, the driver was trying to do their job, and the nature of the ask made it difficult/impossible. In the second case, the driver was knowingly not doing their job, and was knowingly engaging in unsafe behaviour. We don't have any examples of a fatal accident involving the first kind of error, and this case is an example of the second kind.

> "something is moving closer to the car in a way we're not expecting" would be a great thing to show the safety driver.

Isn't that what you get by looking through the windshield using your eyeballs?


"In the second case, the driver was knowingly not doing their job, and was knowingly engaging in unsafe behaviour."

Maybe, but the bigger picture is if you hire people for low wages and give them impossible tasks, you're not paying enough for them to make a good attempt or be a scapegoat. The problem is management.


There is a clear subtext in everything put out by Tesla and Uber that their cars are self-driving. That is to say, they drive themselves. That is to say, nobody else needs to drive them in order for them to get from A to B.

If they are truly trying to stick to their guns on "well, the 'driver' should have been prepared to take over at zero notice", then their (potentially criminal) negligence is in how they present and market their product and not in the software. But it's one or the other.


Not to mention, the idea that a human failsafe will be attentive enough to respond instantly is ridiculous to begin with. If the car drives itself 99% of the time, it better drive itself 100% of the time, because the human is checking out.


> because the human is checking out.

In some cases in the most literal sense of that.


Hang on, Uber is not selling their product to anybody (yet). This is completely different from what Tesla is doing.

Yes, if this code was in a Tesla, it would be criminally negligent, because in the driver's seat of a Tesla is a consumer that bought a product called "autopilot".

Uber, and Waymo, and Cruise, are all _testing_ their systems, and that's why they installed safety drivers. There's no marketing here, and there is no customer -- the person in the driver's seat was employed to sit there and pay attention to what's going on, for the explicit reason that the car's computer is NOT yet capable of driving itself.


> There is a clear subtext in everything put out

That's in the marketing materials rather than the legal ones. Which is great for making regular people think "look, it can drive itself, I'll just fiddle with my phone", but doesn't put any of the legal responsibility on the manufacturer's shoulders. I'm guessing regulators will eventually have to weigh in on this.


I don’t believe we should think of this person’s profession as a safety driver. We should think of them as a fall guy.

It’s not reasonable for a human to sit around for hours doing absolutely nothing, then suddenly be thrust into an emergency with seconds to respond without warning. Most humans aren’t capable of that level of vigilance. If they aren’t watching a video, they’ll be daydreaming. It’s very unlikely they will be able to recover from any kind of situation requiring an immediate response. At a system design level, by placing a human in that role you have already failed.

Maybe if you take the most qualified humans with the best reaction times, they would have a chance, but the qualifications for that job would be more like those of a fighter pilot. This job is not recruiting our best and brightest, it’s recruiting people who want to sit around doing nothing.

This person’s role in the system isn’t to provide safety - it’s to absorb liability. Ergo, a fall guy.


  This is borderline criminal negligence.
I don't think there is much border.


> This is borderline criminal negligence.

Uber had told the world who they are: criminals. They have flaunted many laws since inception.

Will they ever stop? Who will stop them?


They have flaunted their criminality. They have flouted the law.


> The system should have started applying brake at this point. If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).

I suspect there's a big problem in the other direction, too. If the system starts to brake every time it thinks it might need to, it will happen all the time. It might see false positives like this all the time and that's why it doesn't act on them right away.

If it's (appropriately) conservative about starting to brake it will be braking/slowing all the time. If it's not conservative, people or cars will occasionally get hit. The former could make people uncomfortable or carsick or create some subtler danger by stopping short needlessly. The latter might mostly work and mostly not kill people, except for when it does.


Then your system is not ready to be on the road. We're talking about metal boxes weighing at least a ton, launched at dozens of miles an hour on roads where they can hit people.

"They had to not react quickly because they keep mis-detecting things" is really not an acceptable justification. If that's why (and it probably is), that car should never have been on the street.


Exactly my point. I'm not apologizing for the software. I have severe doubts that the software is anywhere close to making safe and appropriate decisions in the context of real-world driving, with open-ended conditions: the makeup of the road, the full range of obstacles and edge cases, weather, lighting, etc.

The scariest thing is that it's so fucking familiar, this idea of technologists solving a range of simple cases and assuming the solution extrapolates to success in the real world. We cannot get into these cars if that kind of software development feels anything like everyday software development, where random web site bugs can be tolerated.


No, the scariest thing is that capital may eventually push through a launch of these crappy devices before they're ready. And they'll never be ready.


I imagine most self-driving systems will have a higher success rate than human drivers. Uber's may not have been one of those systems, though.


If self-driving cars were remotely possible, it would be obvious, because Google Maps would be insanely perfect. That's the bottom line as far as I'm concerned. Every time it directs me to do something stupid, I think, what if it were telling my car directly?


I'm sure the situation will improve eventually, but I'm guessing we're still several decades away and that it'll take advances in both sensor technology and software before driverless cars work at the same level as a human driver in all conditions. In perfect weather, on perfect roads, with minimal traffic and zero surprises the tech works mostly OK now, but those are basically lab conditions and not the reality most of us drive in today.


How do you feel about Tesla releasing their fully self driving software soon? They seem to have a very low accident rate so far.


I would want better reliability than "it shouldn't be a problem... until it is." That's pretty much the same reasoning my 82-year-old mom used, and let me be the first to say you didn't want to be driving next to her car (she doesn't drive anymore).

As a preliminary step to any regulatory approval, Tesla should release every byte of data from their tests so we can analyze the scenarios and events the software has dealt with and second-guess them. I seem to remember a common criticism of Tesla is that it's kinda shitty to work there, and I don't think the best work comes out of an environment like that.

We should know for sure what they/you mean by "seem to" and "very low." Trade secret protection is insignificant when public safety is involved.


Are 2 errors better than 35,000 human errors?


Depends on the errors!


Along the same lines, if they can figure out how to drive a car, they can figure out how to give instructions to human drivers that are far, far better than the state of the art. So, when they release their revolutionary navigation system, I'll try it. Not buying a car from them though.


Why do you think they'll never be ready?


Because I don't think computer vision will ever be good enough to be safe for this purpose.


We trust human vision for this purpose; is there anything human vision is doing that would be impossible to emulate with a computer, given enough research and development?


What exactly do you mean by "enough?" "Enough to solve the problem" is a tautology.


You're the one who used the word "enough" in "ever be good enough to solve the problem." What did you mean by it? Are you claiming that human perception cannot ever be synthetically matched?


If we apply this to a human, let's say I'm getting old and my eyes aren't as good any more, I don't just keep driving like nothing changed, I need to find a way to see better, because the risk of getting into an accident is higher. If the car's system cannot tell what an object is, you don't just assume it's going to be ok, you need to either get better sensors or find some other real solution.


>*I suspect there's a big problem in the other direction, too. If the system starts to brake every time it thinks it might need to, it will happen all the time. It might see false positives like this all the time and that's why it doesn't act on them right away.*

Oh no, it doesn't work and Travis & friends have been lying for years. Wake me when there are consequences for the follies of rich white guys.


>If the system starts to brake every time it thinks it might need to, it will happen all the time. It might see false positives like this all the time and that's why it doesn't act on them right away.

If your car always sees empty space ahead as non-empty space, you probably shouldn't let the car on the road until you fix that. Once you've fixed that, if the car sees that the space ahead is non-empty, even if it can't classify it, it should slow down well in advance, warn the driver, and continue braking if the driver is asleep or watching YouTube. It is AZ; there is no rain or snow falling to mislead the lidar. An object ahead: slow down, and stop if you can't navigate around it.

Presenting it as the AI-hard issue of misclassification is just misdirection from, and a whitewashing/laundering of, the foundational issue of knowingly letting a car on the road without the basic safety baseline of "don't hit objects in front of the car". Similar to Boeing blaming the 737 MAX crashes on a failed sensor.
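
To make the "object ahead: slow down" point concrete, a class-agnostic check might look something like this (a toy sketch with made-up names and thresholds, not anyone's actual pipeline): if anything overlaps the planned path within stopping distance, start slowing, classification or not.

  # Sketch only -- hypothetical names/thresholds, not Uber's code.
  # Brake on anything occupying the planned path within stopping distance,
  # regardless of whether the classifier knows what it is.
  def stopping_distance(speed_mps, decel_mps2=7.0, latency_s=0.25):
      # reaction latency at constant speed + braking distance at constant decel
      return speed_mps * latency_s + speed_mps ** 2 / (2 * decel_mps2)

  def should_brake(ego_speed_mps, path_obstacle_distances_m):
      margin = stopping_distance(ego_speed_mps)
      return any(d <= margin for d in path_obstacle_distances_m)

  # 44 mph is ~19.7 m/s; an unclassified blob 25 m ahead already warrants braking.
  print(should_brake(19.7, [25.0]))   # True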


>It is AZ, there is no rain nor snowflakes falling

This is so understated. This car was basically driving in ideal conditions at night for a self-driving car. If it can't avoid a hit under these conditions, it clearly wasn't ready to be on the road.


> If the system starts to brake every time it thinks it might need to, it will happen all the time.

One thing that self driving will "give back" is time to the occupant.

And the thing most correlated with damage in human-driven accidents: speed.


To rant a bit more about this one second delay thing.

This reeks of a type of thinking where you are relying on other parts of the system to compensate. You might expect to hear things like "it's okay, the safety driver will catch it". Speaking for myself personally, this type of thinking comes very naturally. I like to come up with little proofs that a problem is handled by one part of a program, and then forget about that problem. But in my experience (which does not involve writing anything safety critical) this strategy kinda sucks at getting things right. Dependencies change, assumptions become invalid, holes in your intuitive proof become apparent, etc, etc, etc, and the thing falls over.

If you are designing a safety critical system, something you really want to work, I don't think you should be thinking in the mode where each problem is handled by one part of the system. You need to be thinking in terms of defense in depth. When something goes wrong, many parts of the system should all be able to detect and correct the problem. And then when something bad does come up, and 9 out of 10 of those defensive layers each individually were sufficient to save the day so there was no disaster, you should go figure out what the hell went wrong in the tenth.


> Dependencies change, assumptions become invalid, holes in your intuitive proof become apparent, etc, etc, etc, and the thing falls over.

I apply encryption to storage. I can't tell you how often people try to push back on encrypting storage with stories like, "But we have access controls and auditing in place. And we have a deprovisioning process for our drives. Encryption is costly and redundant, so why should we do it?"

Through the years I can recount several after-the-fact incidents where encryption ended up saving their bacon because of weird and entirely unanticipated events. One notable one was where a hypervisor bug caused memory to persist to an unintended location during suspend/resume, and the only reason customer data wasn't exposed in production was because the storage target was encrypted. In another case the "streams were crossed" when assigning virtual disks to virtual machines. The (older) base disk images weren't encrypted in that case, but because the newer machines were applying encryption in the backend before the blocks were exposed to the guest OS, the "unencrypted" disk content came across as garbage (plaintext was "decrypted," which with the algorithms we were using was equivalent to encrypting), again preserving the confidentiality of the original disk images.

The concept of "belt and suspenders" is often lost on people when it comes to safety and security systems.


It's depressing how much trouble some people have in understanding the idea of defense in depth (https://en.wikipedia.org/wiki/Defense_in_depth_(computing)).

Oh, you have access controls in place? Great. What happens if they fail?

Oh, you have a deprovisioning process in place? Great. What happens when someone doesn't follow it?

Systems fail all the time. If your defense only has one layer, when that layer fails (and it will, eventually) you're SOL. Multiple layers of defense give you resiliency.


> This reeks of a type of thinking where you are relying on other parts of the system to compensate.

This is what Boeing did with Max. The airframe wasn’t stable in and of itself, and they relied on software to compensate. Terrible idea.


>And then when something bad does come up, and 9 out of 10 of those defensive layers

And there should be a definitive priority established between those layers so that, if one fails, the other 9 don't attempt to correct in different ways. It should fail from the most conservative to the least so that a false positive results in erring towards stopping the vehicle.
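
Something like taking the most conservative command on every cycle would do it. A toy sketch, assuming each layer expresses its output as a requested deceleration (names and numbers invented):

  # Toy arbitration: when redundant safety layers disagree, the strongest
  # deceleration request wins, so a false positive errs toward stopping.
  def arbitrate(requested_decels_mps2):
      return max(requested_decels_mps2, default=0.0)

  # Planner says coast, emergency layer wants 7 m/s^2, a glitchy layer wants 2 m/s^2:
  print(arbitrate([0.0, 7.0, 2.0]))   # 7.0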


This. The right way to think about it is that each component, in parallel, has a chance of succeeding, so the chance of total system failure is exponentially small in the number of components. Not: "oh, if this layer fails, the next one will catch it"... which makes the chance of failure as high as that of the weakest link.
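
Back-of-the-envelope, assuming the layers really do fail independently (which is itself the hard part to guarantee in practice):

  p_miss = 0.01               # each layer misses 1% of hazardous events (made-up figure)
  layers = 3
  print(p_miss ** layers)     # ~1e-06 if all three must miss at once
  print(p_miss)               # 0.01 if you quietly lean on a single layer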


It seems like Uber put in this "reaction delay" to prevent the cars from driving/maneuvering erratically (think excessive braking and avoidance turning). This, along with allowing the cars to drive on public roads at all before handling obvious concerns like pedestrians outside of crosswalks, is supposed to be balanced out by having a human ready to intervene and handle these situations.

I think one of the biggest lessons here is about the difficulty of relying on humans to maintain attention while covering an autonomous vehicle. Yes, this particular driver was actively negligent by apparently watching videos when they should have been monitoring the road. But even a more earnest driver could easily space out after long hours of uneventful driving with no "events" or inputs. And that could be enough that their delay in taking over could lead to the worst.

Certainly not defending the safety driver here - or Uber. But I think there's a bit of a paradox in that the better an AV system performs, and the more the human driver trusts it, the easier it is for that human to mentally disengage. Even if only subconsciously. This seems like a difficult problem to overcome, especially if AV development is counting on tracking driver interventions to further train the models for new, unexpected, or fringe driving situations.


We will never have any way to know whether an average attentive human would have correctly parsed this situation or would also have hit the unexpected pedestrian in the middle of the street at night, but it's worth remembering that trying to make broad assessments of self-driving technology from this one accident is reasoning from a single data point.

One advantage the self-driving cars have over a human driver is that NTSB and Uber can yank the memory and replay the logs to see what went wrong, correct the problem, and push the correction to the next generation of vehicles. That's not a trick you can pull off with our current fleet of human drivers, unfortunately(1).

(1) This is not a universal problem with human operators, per se... The airline industry has a great culture of observing air accidents and learning from them as a responsibility of individual pilots. We don't have a similar process for individual drivers, and there are far, far more car crashes than air crashes so the time commitment would be impractical at 100% of accidents.


Humans can learn and transmit lessons. There is usually more objective evidence especially nowadays with cameras everywhere.


Oh, they definitely can, but I'm saying there's basically zero culture of that in the common automotive sector.


>> I think one of the biggest lessons here is about the difficulty of relying on humans to maintain attention while covering an autonomous vehicle.

Why not run these systems in shadow mode to collect data, rather than active? Have the human completely in control and compare system's proposed response to human's. At my last job running a new algorithm in shadow mode against the current one was a common way to approach (financially) risky changes.


> But even a more earnest driver could easily space out after long hours of uneventful driving with no "events" or inputs.

As somebody who used to regularly drive the I-5 between SF and LA, I can wholeheartedly vouch for this statement.


> > That triggered what Uber called “action suppression,” in which the system held off braking for one second

> A hardcoded 1 second delay during a potential emergency situation. Horrifying.

Also laughable, if it wasn't so horrifying. The self driving car evangelists always argue how much faster their cars could react than humans. It's basically their main selling point and the reason why these things ought to be so much safer than humans.

Sorry, but I as a human don't have a one second delay built in. That's an absurdly slow reaction time for which I would have to be seriously drunk to not beat it.


There's research on this topic, and you'd be surprised.

The average human apparently has a 2.3 second delay to unexpected, interruptive stimulus while driving (https://copradar.com/redlight/factors/IEA2000_ABS51.pdf). We almost never perceive it as such because we tend to measure our own reaction times from the point we are conscious of stimulus to the point we take willful action to respond to it, but the hard numbers appear to show that critical information can take 1+ seconds to percolate to conscious observation (remember, the brain spends a lot of time figuring out what in the soup of sensory nonsense is worthy of attention).


The critical part is that you need to compare apples to apples - in this case, the one second delay is from the point at which the car had a clear idea of there being an obstacle in its path until it would have started to apply the brakes. If you want to compare this to humans, you also need to remove the sequence of time during which the human identifies the potential obstacle as relevant and subsequently as something he would crash into.

Whether this time is shorter or longer for humans is another question entirely (though human intelligence's ability to deduce intent from behavior and forecast the actions of other humans in traffic should give robocars a good challenge in that department as well). But in terms of raw reaction time after determining "I have to brake NOW", a human is definitely faster than one second.


That study shows a ~1 second delay from the incursion being visible to a human to them releasing the accelerator and a further 2.3 seconds to have the brake fully depressed, by which time they were also steering. The study also implies the average human response time was adequate to avoid the collision...

To put things into perspective, the Uber spent 4 seconds after actually detecting the incursion trying to figure out whether it needed to respond while doing nothing, then a further second of enforced pause after concluding it did need to do something, until finally starting to reduce speed 0.2 seconds before impact.


That study doesn't purport to be an objective measurement of absolute reaction time. It's comparing relative driver behavior between a driving simulator and a test track, and it doesn't seem to have controlled how immediately drivers needed to respond to avoid a collision. It does, however, include one objective measurement of human reaction time, albeit not as a primary result of the study:

> the time from incursion start to throttle release included the reaction time of the tow vehicle driver pulling the foam car (which was consistently less than 200 milliseconds)


Sounds like they fixed that post-crash:

Handling of Emergency Situations. ATG changed the way the ADS manages emergency situations (as described in section 1.6.2) by no longer implementing action suppression. The updated system does not suppress system response after detection of an emergency situation, even when the resolution of such situation—prevention of the crash—exceeds the design specifications. In such situations, the system allows braking even when such action would not prevent a crash; emergency braking is engaged to mitigate the crash. ATG increased the jerk (the rate of deceleration) limit to 20 m/s3

https://dms.ntsb.gov/public/62500-62999/62978/629713.pdf .


They also addressed the earlier flaw that all the path history was lost every time the object classification changed:

>Path Prediction. ATG changed the way the ADS generates possible trajectories—predicts the path—of detected objects (as described in section 1.6.1). Previous locations of a tracked object are incorporated into decision process when generating possible trajectories, even when object’s classification changes. Trajectories are generated based on both (1) the classification of the object–possible goals associated with such object, and (2) the all previous locations.

edit: This was also improved; clearly it was not ready when it was rolled out before these changes:

>Volvo provided ATG with an access to its ADAS to allow seamless automatic deactivation when engaging the ADS. According to ATG and Volvo, simultaneous operation of Volvo ADAS and ATG ADS was viewed as incompatible because (1) of high likelihood of misinterpretation of signals between Volvo and ATG radars due to the use of same frequencies; (2) the vehicle’s brake module had not been designed to assign priority if it were to receive braking commands from both the Volvo AEB and ATG ADS.

... changes

>Volvo ADAS. Several Volvo ADAS remain active during the operation of the ADS; specifically, the FCW and the AEB with pedestrian-detection capabilities are engaged during both manual driving and testing with the UBER ATG autonomous mode. ATG changed the frequency at which ATG-installed radars supporting the ADS operate; at the new frequency, these radars do not interfere with the functioning of Volvo ADAS.

>ATG also worked with Volvo to assign prioritization for the ADS and Volvo AEB in situations when both systems issue command for braking. The decision of which system is assigned priority is dependent on the specific circumstance at that time.


Jerk is the rate of change of deceleration. The original settings were a pause of 1 s, followed by 5 m/s³ of jerk up to 7 m/s². That means it takes 2.4 s from when the issue is detected until "full" braking of 7 m/s² is applied.

Even 20m/s³ doesn't seem all that aggressive to me. A good car can brake with around 9m/s² (depending on the state of the road) which means it's going to take 0.45s to go from 0 to full braking.
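
Putting rough numbers on the original settings (a back-of-the-envelope using the figures above: 1 s suppression, 5 m/s³ jerk, 7 m/s² max braking, 44 mph ≈ 19.7 m/s; the constant-jerk ramp is an idealization):

  v0, jerk, a_max = 19.7, 5.0, 7.0            # m/s, m/s^3, m/s^2

  d_pause = v0 * 1.0                          # ~19.7 m covered during the 1 s suppression
  t_ramp = a_max / jerk                       # 1.4 s for braking to ramp up
  d_ramp = v0 * t_ramp - jerk * t_ramp**3 / 6 # ~25.3 m covered while the ramp builds
  v_end = v0 - 0.5 * jerk * t_ramp**2         # ~14.8 m/s still left at full braking
  d_full = v_end**2 / (2 * a_max)             # ~15.6 m at 7 m/s^2

  print(round(d_pause + d_ramp + d_full, 1))  # ~60.6 m needed to stop

Roughly three quarters of that distance is covered before the system even reaches full braking.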


Too bad they didn't do that after any of the previous 33 times it crashed into a vehicle


There's lots of bad stuff in this story without making up new stuff. Those 33 times were other vehicles striking the Uber vehicle, rather than vice versa. There was one time where the Uber vehicle struck a stationary bicycle stand that was in the roadway.


Is the story wrong? This is what it says:

"In these 37 incidents, all of the robo-vehicles were driving in autonomous mode, and in 33, self-driving cars crashed into other vehicles."

This is saying the self-driving cars crashed into other vehicles.


Story is wrong. The linked report has it as "Most of these crashes involved another vehicle striking the ATG test vehicle—33 such incidents; 25 of them were rear-end crashes and in 8 crashes ATG test vehicle was side swiped by another vehicle."

Edit: I emailed the register and they fixed it immediately. Nice!


That doesn't say who's at fault. Could be that Uber got rear-ended 25 times before they disabled emergency braking, then they hit a pedestrian.


I agree. Just pointed out that it is patched now...or rather, should be.


>> 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

Those still all seem to fall into the category "thing you should avoid hitting", though, right?


The table goes into more detail. Each time the classification changed, the history for that object was essentially deleted; since there's only one data point, the system predicted that it would "continue" to stay stationary, even though the pedestrian was walking at a steady pace.


That part, about deleting the history, confuses me.

Why delete the history on a classification change?

Shouldn't classifications be tiered? In this case, while the system was struggling to PERFECTLY classify the object, it was clearly thinking it was something that should be avoided (oscillating between car, bike, other).

In this case, I would expect the system to keep its motion history. IMO, this could have prevented the accident, because although it didn't determine it was a bicycle/person until "too late"... it had determined with plenty of time that it was maybe a car, maybe a bike.
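
In other words, something shaped like this (a toy sketch, not the real architecture), where the history belongs to the track and the label is just an attribute:

  # Toy sketch: the motion history lives on the track itself, and the
  # classification label is a mutable attribute, so relabeling never
  # discards the history.
  class Track:
      def __init__(self, track_id, position, timestamp, label="unknown"):
          self.track_id = track_id
          self.label = label
          self.history = [(timestamp, position)]      # survives relabeling

      def update(self, position, timestamp, label):
          self.label = label                          # classification may flip...
          self.history.append((timestamp, position))  # ...but history accumulates

      def velocity(self):
          if len(self.history) < 2:
              return 0.0                              # only a genuinely new object is "static"
          (t0, x0), (t1, x1) = self.history[0], self.history[-1]
          return (x1 - x0) / (t1 - t0)

  t = Track(1, position=0.0, timestamp=0.0, label="vehicle")
  t.update(1.4, 1.0, "other")
  t.update(2.8, 2.0, "bicycle")
  print(t.velocity())   # 1.4 units/s, despite two relabelings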


This classification failure reminds me of the concept of "Disguised queries" I saw on LessWrong [1]. That is, there's the question you really want to answer:

A) Should I avoid driving this car into this thing?

and then there are subsidiary questions that help to answer that, but would be fully obviated if you already had a good answer for A):

B1) Is this a human?

B2) Is this a vehicle?

B3) Is this unrecognizable?

The "natural category" here is "drive into" vs "don't drive into", not "human vs vehicle vs fruit stand vs other".

[1] https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-...

In the article, the A)-type question is whether the object has vanadium, and the B)-type questions are whether the object is a blegg (blue round thing) or a rube (red square thing). The distinction becomes stark when you know it has vanadium, but it doesn't neatly fall on the blegg/rube continuum.
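
In code terms, a sketch of answering A) directly might look like this (the class names are made up; the point is that "avoid" is the default and label churn doesn't change the answer):

  # Hypothetical sketch: treat "avoid" as the default decision, and only opt
  # out for detections known to be safely drivable-through.
  HARMLESS = {"exhaust_plume", "rain_spray"}   # made-up examples of ignorable detections

  def should_avoid(label):
      return label not in HARMLESS             # pedestrian, vehicle, "other", unknown: avoid

  # The oscillating labels from the report all land on the same answer:
  print([should_avoid(l) for l in ["vehicle", "other", "bicycle", "unknown"]])
  # [True, True, True, True]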


If it's having a hard time both identifying an object and measuring its movement, there's not really any reason it should understand that all those separate data points are the same object.

That is, it doesn't really matter if the object history is "deleted" or not; if it can't associate a new data point with a previous history (by identification or predicted position), the practical result is the same as if there is no object history.

This could be a result of using velocity based tracking, which I don't know that Uber uses, but is a fairly standard method, as it's what raw GPS measurements are based on.
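
For what it's worth, class-agnostic association by predicted position is a standard idea in tracking; a toy version (made-up numbers, 1-D positions for brevity) looks like:

  # Toy sketch: match a new detection to an existing track by predicted
  # position, not by label, using a simple distance gate.
  def associate(detection_pos, tracks, dt, gate_m=3.0):
      """tracks: list of (track_id, last_pos, velocity). Returns matched id or None."""
      best_id, best_err = None, gate_m
      for track_id, last_pos, velocity in tracks:
          predicted = last_pos + velocity * dt
          err = abs(detection_pos - predicted)
          if err < best_err:
              best_id, best_err = track_id, err
      return best_id

  tracks = [(1, 10.0, 1.5)]                 # one track, moving at +1.5 units/s
  print(associate(11.6, tracks, dt=1.0))    # 1 -- same object, even if relabeled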


> if it can't associate a new data point with a previous history (by identification or predicted position)

This sounds a bit strange, given that the entire technology around which autonomous driving is built is about identifying and recognising patterns in extremely noisy and variable data.


This seems to be the core problem in my opinion: the category swapping inhibited a timely braking decision.

If so, how come these categories were not subsumed into a more abstract "generic object" whose relative position has been getting closer since first detection? That ought to have triggered the same braking scheme as, say, detecting an unmoving car ahead.

I'd go for : utter engineering malpractice.


I suspect overly aggressive association of unknown objects would have its own set of side effects.


Well, in my experience, if you're left balancing side effects like that, it's a sign your underlying design is flawed.

But regardless, taking sensor classification (inherently error-prone) at face value is engineering malpractice.


Right, but as the article mentions, each time it got reclassified, the tracking history gets wiped, so the car doesn't know that the object is about to enter the path of the SUV. It just sees an object that's in the other lane and assumes it's going to stay there.


>the tracking history gets wiped

This might be a little too nitpicky but it doesn't get wiped. It's simply no longer associated with that object because it's considered a different object. It's still a huge, glaring issue, obviously, but all the data is still there.

In this particular case, the "object" was identified as one type of object and so all of the data related to that was classified as "car" info, for example, and then, when it's reclassified, that additional data starts recording to the "bike" info bucket. The software should have been keeping track of certain data regardless of that classification but it's only seeing the latest bucket of data. If the tracking history got wiped, we wouldn't have the data to look back on to see how this was all happening.


Major facepalm right there. It's like all the stupidity of UI choices I see, but with the consequence of a car collision, and (in cases like this) death.


> > 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

That alone should have been grounds for immediate cessation of operation until a driver could take over, and for the system to be declared unworthy of operation on public roads until this problem was fixed. The differences between 'pedestrian with bicycle', 'vehicle' and 'bicycle' are so large that any system that wants to steer a vehicle should be able to tell the three apart at at least 50 yards of distance, or even more.

That is the reason why regular drivers have to pass an eye test before they are allowed behind the wheel. If you can't see (or understand what you are seeing), you should not drive.


> " and Vasquez took the steering wheel, disengaging the autonomous system"

> And then top it off with systemic issues around the backup driver not actually being ready to react.

It's even worse than that! Once the human does take the wheel the computer stops doing anything.

So from when the human is alerted and grabs the wheel, until the human can react, the car isn't even slowing down!

That's like the worst of both worlds.


From a year ago [1], engineers said jaywalker detection wasn't there:

> Employees also said the car was not always able to predict the path of a pedestrian.

The brake inhibition was very intentional and was the result of in-fighting as well as engineers trying to make the Dara demo:

> Two days after the product team distributed the document discussing "rider-experience metrics" and limiting "bad experiences" to one per ride, another email went out. This one was from several ATG engineers. It said they were turning off the car's ability to make emergency decisions on its own like slamming on the brakes or swerving hard. ...

> The subtext was clear: The car's software wasn't good enough to make emergency decisions. And, one employee pointed out to us, by restricting the car to gentler responses, it might also produce a smoother ride. ...

> A few weeks later, they gave the car back more of its ability to swerve but did not return its ability to brake hard.

And then they hit Herzberg.

The UberATG leaders who made it through the Dara / Softbank demo likely vested (or are slated to vest) millions of dollars.

[1] https://www.businessinsider.com/sources-describe-questionabl...


> with 0.2 seconds left before impact, the car sounded an audio alarm

It takes about 300ms for your brain to react to unexpected stimulus, so the alarm is useless in this case. Sad.


The alarm's entire purpose is to shift the blame to the engineer in the driver's seat


It speaks to a general problem of ML systems. Real-world problems are open-ended, and a system that cannot reason about what it sees but merely applies object classification is completely clueless and won't reach the level of fidelity that is needed for safety.

I'm increasingly convinced that virtually every unstructured problem in the physical world is an AI-hard problem and we won't be seeing fully autonomous driving for decades.


We as humans possess some skills that are so profoundly important but also so subtle that we don't even recognize them as skills. And excessive optimism about AI is a lack of recognition of how fundamental those skills are to our navigation of the world (both figuratively and literally.)


I wonder why they picked such a vehicle to test this. Why not something with smaller mass, to have less momentum on impact (e.g. the Google car)?


Actually yeah, can we not rig some kind of bicycle with the same sensors and test self-driving that way?

The steering mechanism would have to be modified obviously, but surely steering is a trivial part of the problem compared to actually figuring out where to steer to?


Part of path planning will involve vehicle dynamics, braking, acceleration and steering response, and the envelope of the vehicle. All of those will heavily impact how a car should drive.


But that's something the AI can learn fairly easily, isn't it? The difficulty in this case wasn't that the AI had issues figuring out the handling of the SUV, it's that it had issues detecting a pedestrian and a dangerous situation. You can still run into these issues on a bicycle, with a much lesser chance of killing people.


This is totally true but the issue was more with the methodology of the detection rather than the detection itself. Regardless of the type of vehicle, the software wasn't good enough for real-world testing.


cycle dynamics is an unsolved control problem


That's not entirely true, I can drive a bicycle just fine.


I can drive a car just fine, too. Are we still talking about autonomous vehicles?


My personal anecdotal belief is that it’s much, much harder for a robot to ride a bike than to drive a car. It requires human-level bodily/physical intuition that AI is nowhere near. We can barely program robots to walk normally, let alone ride a bike.


As I remember it, the driver got a lot of heat for fumbling with her phone (or 2nd computer?) right before the accident. I don't think however that 1.2s is a bad reaction time for a complex situation.

Would it have killed the developers to make the car sound its horn when it gets into this absurd 1s "action suppression" mode?


> the driver got a lot of heat for fumbling with her phone (or 2nd computer?) right before the accident

Based on news stories I found, she was glancing at a television show on her phone [1].

> make the car sound its horn when it gets into this absurd 1s "action suppression" mode?

If they added the suppression because there were too many false positives, that would just have resulted in the car honking at apparently arbitrary times. It's just converting the garbage signal from one form into another. It's still too noisy to be reliable.

1: https://www.azcentral.com/story/news/local/tempe/2019/03/17/... Vasquez looked down 166 times when the vehicle was in motion, not including times she appeared to be checking gauges [...] In all, they concluded, Vasquez traveled about 3.67 miles total while looking away from the road. [...] starting at 9:16, Vasquez played an episode of “The Voice,” The Blind Auditions, Part 5, on her phone.


> If they added the suppression because there were too many false positives, that would just have resulted in the car honking at apparently arbitrary times. It's just converting the garbage signal from one form into another. It's still too noisy to be reliable.

I love how they went from "our vision system is too unreliable to have warning signals every time it doesn't know what's in front of it" to "okay let's do it anyway but just not have warning signals". Like it didn't make them stop and think "well maybe basing a self-driving car off of this isn't a good idea".


Oh, that's not being fair; they checked in a fix after all!

  /* issue 84646282 */
  sleep(60 * 1000)


I don't know why you're downvoted, because I find your comment funny. It reminds me of a similar real "fix" for a race condition I found at one of the companies I worked for before:

    Thread.sleep(Random.nextInt(1000));


Compared to classifying a pedestrian with a bicycle crossing the street in the dark it is easy to track the gaze of the safety driver and stop the vehicle when they are not looking at the road.
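
The escalation logic on top of a gaze tracker really is trivial; a sketch below with invented thresholds (the gaze estimation itself is the part you'd buy or build, and is simply assumed here):

  # Invented thresholds; assumes some eyes-on-road signal already exists.
  def monitor_step(eyes_on_road, off_road_s, dt_s=0.1):
      off_road_s = 0.0 if eyes_on_road else off_road_s + dt_s
      if off_road_s > 6.0:
          return off_road_s, "initiate controlled stop"
      if off_road_s > 2.0:
          return off_road_s, "sound alert"
      return off_road_s, "ok"

  state = 0.0
  for _ in range(30):                      # 3 s of looking away at 10 Hz
      state, action = monitor_step(False, state)
  print(action)                            # "sound alert"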


> I don't think however that 1.2s is a bad reaction time for a complex situation.

1.2 seconds after hitting a predestrian. That's a pretty poor reaction time. Typically you want to apply the breaks before you come into contact with a person.


> you want to apply the breaks before you come into contact with a person

Usually breaks are applied shortly after the moment of contact, but the brakes should certainly be applied earlier.

(I'm genuinely sorry, but I couldn't resist.)


...darn it. I think I have to leave it that way now. Can I blame my phone? Yeah, I'm gonna blame my phone.


That has nothing to do with the reaction time. Yeah, they weren't looking at the road, but that's a separate issue that doesn't involve reaction time. Going from an unexpected buzzer to full control of the vehicle in a single second is a pretty good speed.


No it's not, and hitting the brakes isn't "full control". Honestly, how long do you think it would take you to hit the brakes if something jumped out at you? It would be almost instant.

Edit: ok, I was just making a bit of a joke at first, but I looked it up. Reaction times vary by person, but tend to be between 0.75 and 3 seconds. 2.5s is used as a standard, so I guess I have to concede that 1.25s is pretty good... I guess, for whatever that's worth.


Just a few minutes ago I had a very similar situation, although in daytime. I was going at about 40 km/h, straight ahead in my lane, past a bus standing at a bus stop on the right side of the street. Just as I was passing the bus, a young person practically jumped out in front of me, maybe 15-25 meters away. They were hidden behind the bus, so I had no way of seeing this coming in advance. Fortunately they realized I was coming and backed off, and I also managed to stop completely before reaching them.

So if my reaction had taken 5.7 seconds, I'd definitely have applied the brakes far too late. I conclude the total time from classifying the object moving into my path to applying the brakes was less than a second. (And btw, my car has an emergency braking / pedestrian avoidance system and it didn't trigger, so I was faster.)


You had the prior experience of “hope that nobody jumps out from behind that bus, as people tend to do”, however. Hard to formalize, IMHO.


>Edit: ok, I was just making a bit of a joke at first, but I looked it up. Reaction times vary by person, but tend to be between 0.75 and 3 seconds. 2.5s is used as a standard, so I guess I have to conceded that 1.25s is pretty good... I guess, for whatever that's worth.

We have a rare sight here: someone not being right, learning more about the situation, changing their opinion, and then making an edit about it all.

Kudos, and thanks for making this a better place for discussions :)


It is possible! Happens more on HN than it does on Reddit at least. I'm ok with being wrong.


Reminds me a bit of "Cisco Fixes RV320/RV325 Vulnerability by Banning “curl” in User-Agent".



I would like to see the failure-mode-effects-analysis (FMEA) that identified "action suppression" as a means of mitigating a nuisance fault on a safety critical system.

And understand why the designers felt this was okay...(Assuming of course, this was the actual reason for the delay. They may have a legitimate reason?)

I hope it's not the case that the hazard analysis stated that the human in the loop was adequate no matter what haywire thing the software did.


> A hardcoded 1 second delay during a potential emergency situation. Horrifying.

As a controls engineer in the automotive industry, I can tell you that a 1-second delay for safety-critical systems is not atypical.

The expectation is that the normal software avoids unsafe operation. Bounding "safe operation" is difficult, so if an excursion is detected, there's essentially a debounce period (up to 1 second) to let the normal software correct itself before override measures are taken.

This helps prevent occasional random glitches or temporary edge cases from resulting in a system over-reaction, like applying the brakes or removing torque unnecessarily, which would annoy the driver and could itself cause unsafe operation.

Obviously there are still gaps with that approach. But there is supposed to be a driver in charge; and the intent is to prevent run-away unsafe behavior. It essentially boils down to due-diligence during development.
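
For readers outside the field, the debounce mechanism itself is nothing exotic; a simplified sketch (invented cycle time and window, not any particular supplier's implementation):

  CYCLE_S = 0.01      # 100 Hz fault monitor (invented)
  CONFIRM_S = 1.0     # debounce window before the override fires

  def debounce_step(fault_now, fault_age_s):
      fault_age_s = fault_age_s + CYCLE_S if fault_now else 0.0
      return fault_age_s, fault_age_s >= CONFIRM_S   # (state, trigger override?)

  age, fired = 0.0, False
  for fault in [True] * 3 + [False]:    # a 3-cycle glitch
      age, fired = debounce_step(fault, age)
  print(fired)                          # False -- glitch filtered out

The intent is that a single-cycle glitch never reaches the window, while a persistent excursion still triggers the override within a bounded time.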


To counterbalance your point: the original Volvo emergency braking system in the car saw the crash coming and wanted to brake 1.3 seconds before it happened.

So Volvo's engineers didn't think at all like you do / say. Their system was 0.1 seconds faster than Uber's at detecting it, 1.1 seconds faster if you factor in Uber's action suppression, and it would have braked 2.1 seconds sooner than the Uber did.

Why didn't it? Because Uber deactivated its braking ability.


>>> 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

This is exactly why I keep saying that autonomous vehicles are not 10 or even 20 years away. More like 50-100 years away, if that. Same reason why, famously, a group of researchers in the '60s thought that solving computer recognition of objects would take a few months at most, and yet in 2019 our best algorithms still think that a sofa covered in a zebra print is actually a zebra, with 99% confidence.

Had a human actually been paying attention to the road, I can bet they would have started braking/swerving as soon as they saw something, even if they weren't immediately 100% certain it was a human - a computer won't until it's 99%+ certain, which is too risky an assumption considering the state of visual object recognition.


> System can't decide what's happening.

> At least the car would still try to avoid a bicycle, in principle, instead of blindly gliding into it while hoping for the best.

"Don't know what it is, let's ram it."

Never mind not detecting a pedestrian, that in itself is terrifyingly incompetent and negligent.


This is ridiculous. I already understood that a lot of seemingly critical software is unsafe / insecure, but people actually ran this on a high-speed machine that can, and did, kill people, without multiple layers of safety-net mechanisms? The backup driver is one broken safety net, and there was no other working safety redundancy?


>> try to avoid a bicycle, in principle, instead of blindly gliding into it while hoping for the best.

A better question than why this happened: how often do these cars "see" a bicycle and decide to glide on by? How often do they see things horribly incorrectly, and we are all just lucky nothing happens?


> I bet they added it because the system kept randomly thinking something serious was going to happen for a few milliseconds when everything was going fine.

Smoothing bugs out via temporal integration. The oldest trick in the book.


The car hit Elaine Herzberg.

  Elaine's Obituary

  Elaine Marie Wood-Herzberg, 49, of Tempe, AZ passed away 
  on march 18, 2018.  A graveside service took place on 
  Saturday, April 21, 2018 at 2:00pm at Resthaven/Carr-
  Tenney Memorial Gardens in Phoenix.

  Elaine was born on August 2, 1969 in Phoenix, AZ to Danny 
  Wood and Sharon Daly.

  Elaine grew up in the Mesa/Apache Junction area and 
  attended Apache Junction High School. 

  Elaine was married to Mike Herzberg until he passed away.

  She was very creative and enjoyed drawing, coloring, and 
  writing poems.  She always had a smile on her face for 
  everyone she met and loved to make people laugh.  She 
  would do anything she could to help you, and was there to 
  listen when you needed it.

  Elaine is survived by her mother Sharon, of Apache 
  Junction, AZ; her father Danny, of Mayor, AZ; her son 
  Cody, of Apache Junction, AZ; her daughter Christine, of 
  Tempe, AZ; her grandchildren Charlie, Adrian, and Madelyn; 
  her sister Sheila, of Apache Junction, AZ; and many other 
  relatives.
But a homeless person is only ten points. So let's remember the car.

https://www.sonoranskiesmortuaryaz.com/obituary/6216521

https://afscarizona.org/2018/05/14/the-untold-story-of-elain...


This is why I will never use any service from Uber, ever. I may be insignificant, but at least I know I am doing my little part not to support this disgusting movement of "Move Fast and Break Things" that now includes actual casualties. Everyone involved except the lowest pawns gets away with it, so the next company can do it even worse.


It's a choice you can make, but it's worth noting that the alternative to getting SDCs working isn't "Nobody gets hit;" the alternative is "We keep having humans operate multi-ton vehicles with their known wetware attention and correction flaws and an average of 3,200 people die a day."

That doesn't imply Uber has the right chops to solve the problem, but I hope someone does.


False dichotomy, my friend.

Look at Europe (or heck, NYC) for alternatives to a car-dominated society:

* walkable, mixed-use, dense neighborhoods

* public transportation (rail-based, in particular)

* car-free streets (and cities)

The solution to traffic deaths is not self-driving cars. It's moving away from the Levittown-style suburbs that have proliferated across the US since WWII.


>The solution to traffic deaths is not self-driving cars.

Moving the world to traffic-free societies is not happening in a hurry, if ever. I'm writing this from a pedestrianised area in London, but we still have plenty of cars.

Self driving type tech however has the potential to transform things in the next decade or so. Even if we don't have actual self driving the collision avoidance tech is getting quite good eg https://news.ycombinator.com/item?id=21388790


Cool, so tear down American civic architecture to its bedrock and rebuild it from scratch.

Not impossible, but less than likely in the short horizon. We'll probably get working SDCs sooner.


True that, but not mutually exclusive.

We used to have streetcars. We tore them down. We can put them back.


And a self-driving streetcar is a more constrained problem to solve than a self-driving car.


Yup! Still nontrivial, but actively worked on[1].

[1]https://www.wired.com/story/siemens-potsdam-self-driving-str...


The interesting difference is that the “wetware” knows, without ever being trained, that it's about to hit another person, and stops the vehicle. If you think SDCs will reduce deaths at all, I believe you're ignoring reality. There are plenty of other options, like public transport or remote work, that are infinitely better than some cocky engineers thinking replacing a human brain is easy.


6,000 pedestrian deaths a year suggests the wetware may not know to a level of certainty we should consider exclusively acceptable.

And I haven't met anybody working on the problem who thinks it is easy. But a lot of them do think it's worth the hassle if it will save even a fraction of tens of thousands of lives a year (even if fully autonomous operation is decades out, the semi-autonomous driving enhancements that have come from the research are already saving lives). Adopting and enhancing mass transit is also an excellent idea, but I think it's unrealistic to assume that will work exclusively. America has had quite a few decades to decide mass transit is something that everyone will jump on to, and it seems to not be happening.


And you still think deaths are going to be reduced by this? Read the article again. This will not go away, more pedestrians will be killed by self driving cars. Deaths will never be reduced to zero, even with self driving cars. But you will take jobs away from people driving for a living.

Public transport doesn't work in the US because of how garbage it is. If they spent half an effort building a proper transportation system, adoption rates would be different. I live in CA and we have some of the better transportation systems in the country. It takes me 1 hr to get to work via car, or 2-3 hours via public transport. Solve that problem and you'll see people adopt it real quick.

Further, the real problem is people not obeying traffic laws. Not that they can't; they're choosing not to. An easy solution is to put a device in vehicles that automatically cites a person for things like crossing solid lines, merging with no signal, speeding, etc.


Deaths will never be reduced to zero, no. But if a self-driving car can cover twice as many miles between accidents as a regular car, the number of deaths per year plummets as more SDCs are added, assuming about the same number of total miles driven per year.

... And SDCs can be programmed to obey traffic laws.

As for jobs, we're not going to see jobs taken away by self-driving cars in the near future. More likely, we'll see a fleet of vehicles with driver assist technology derived from attempting to solve the self-driving problem that will make the lives of career drivers, such as those in the trucking industry, that much better and safer.

I agree that improvements to mass transit would also help. But those aren't mutually exclusive problems.

We have to keep in mind that it's not an either-or story, it's a statistics-and-numbers story. On the same day this woman was tragically killed, nine other people were killed in traffic accidents. They just don't make the news, because that tragedy has become so ubiquitous that we are utterly desensitized to it.

We shouldn't be.


There’s a lot of ifs and predictions here.

How many deaths would be prevented by automatically citing a vehicle for breaking traffic laws? This is a much cheaper and simpler option than an SDC.

SDCs can be programmed to obey laws, I never said they couldn’t. However a software system will not reach the same level of reasoning a human brain can in our lifetimes. Compute capacity isn’t even close yet. So again, an SDC / Software System / AI is no replacement for a human brain.

We have driver assist technology now. No company in the world is going to keep a human driver on the books when they can be replaced by a machine. History is there as proof. It’s not a question of if they will be replaced, it’s how long it will take to make the technology robust enough.

They don’t make the news because death is a normal, acceptable thing for us. People die, accidents happen. Covering every person that dies would not at all be time efficient.

There’s also something to be said for darwinism here, but I’m not going to get into that.


> They don’t make the news because death is a normal, acceptable thing for us.

That's how human cognition deals with things we don't think we have control over. There was a time when smallpox deaths were just "part of God's plan."

I disagree the car crash body count is inevitable or needs to stay normal any more than some people's children just naturally succumb to smallpox.

> There’s also something to be said for darwinism here, but I’m not going to get into that.

Yes, I definitely wouldn't. It's an offensive attitude to have around people who have lost friends and loved ones this way to imply they weren't good enough to live.


Umm, have you lived in any non-car dominated city/country? Multimodal public transport is a thing.


I hope you don't ever set foot in any other car either, considering they kill a million people every year. But a million is just a big number, it's not as powerful as 1 obituary.


This accident didn't save any lives though. It's just associated in your mind with saving lives. It's one more death; there's no decrease in traffic accidents to compensate.


So you're implying that our ability to take risks in the pursuit of massive systemic gains is exactly zero?


Trading risks now for future gains is always tricky.

Suppose the CEO of Uber was a cannibal, and you framed letting him eat people as a necessary perk in order to keep him happy and the self-driving program on track. Would it be valid to say the number of people it's permissible for him to eat is exactly zero, even if it slows down the production of a truly self-driving car? I mean, what's one or two lives compared to 40,000 a year or whatever? There's a lot of uncertainty about the costs and benefits though, even if you strictly adhere to a utilitarian viewpoint.


I'm fine with taking risks. I just don't think we should be making a cavalry charge into the machine guns. I know the payoff would be great if we succeeded, but it's still not a good idea.

I'm deeply excited by the possibilities of self-driving cars, and I would agree that it's necessary to take risks to make them a reality. The question is always if we're taking a necessary risk or just being reckless.

Uber has taken unnecessary risks and learned relatively little from them. They didn't need a fleet on public roads to tell them that object detection was terribly broken.


I actually don’t think the payoff will be great. What’s the real benefit here? Some jobs will be lost, but now we have a large hackable software system that humans have little control over. If I have to sit in the front seat and watch the road then it really doesn’t do anything for me. And then there are always going to be instances like the “I didn’t test that scenario” crap above.


As a UK resident, I find the omission of any consideration of a pedestrian in the road quite unexpected and, unfortunately, overly car-centric.

There isn’t really the term “jaywalking” here. It’s just “crossing the road”. I’m not sure on exactly who has what legal responsibility, but it certainly feels like pedestrians should look out for cars when crossing, and drivers should look out for pedestrians.


Legally the pedestrian has no responsibility except that they're prohibited from entering certain areas specifically legislatively set aside for motor vehicles like motorways (approximately "freeways").

Drivers are required to give "due care and attention" to driving, which can be demonstrated by following the "highway code". That code tells them pedestrians might do things they don't expect, and to assume that if it's not clear what's going to happen, then yes, there is a pedestrian behind that obstruction and they are going to run into the road in front of you, whereupon hitting them would be your fault.

For example when I was a child I got off my first bus home from secondary school, and ran straight into the road in front of a car I couldn't see because the bus was in the way. The horrified driver was legally responsible for that, even though she hadn't intended to hit me. I believe she would have been automatically billed by the authorities for the cost of shipping me to a hospital to have my broken leg set and so on.

Clearly it is in some sense my fault that happened, but on the other hand it's not me choosing to drive a huge steel box at 30mph past a bunch of idiot children...


Driving without due care and attention is also an offense in the United States and Canada, and should apply in almost all cases where a pedestrian is hit, but usually isn't.


Why didn't your school bus have a stop sign deployed? Those are for exactly this risk. Running a school bus stop sign is, I believe, a more significant violation than running a regular posted stop sign.


What school bus? Did I say school bus?

It was simply a bus. Specifically it was the hourly bus from the nearest large town (where my new school was) to the village I grew up in.


In London, children take the same double decker buses everyone else takes. There’s no “school bus” like they have in the US. This is also true for Germany and the Netherlands, where I’ve lived as well.


Unfortunately they don’t exist in a lot of countries.


Do they exist in any country other than the USA?



In Asia none of those are public school buses, especially in China where they are all private buses painted yellow and get no special treatment (no stop sign popping out of the side). Similar differences exist in Europe. I guess my point is, none of these systems come even close to resembling what we have in the USA and Canada.


12mph is the speed limit around a stopped school bus here in New Zealand.


In Texas, you cannot even go around a stopped school bus. Very expensive ticket ($400!)


I presume that's the case in every U.S. state. Part of the bus stop protocol I got taught is that kids cross the street in front of the bus after they get out, and that we needed to walk forward far enough that the driver could see us.


My school bus had this pole attached to the front bumper area of the bus. It swung out when the bus stopped. This forces kids to walk at least 8 feet (or however wide a bus is) in front of it before crossing.


Yep, they appeared in my school district in the mid-90's. And with it came stop signs that swung out from the side of the bus, overhead escape hatches that could be opened for ventilation, and ejection seats.


Ejection seats?!!


Yep, same in WA. In fact, every school bus deploys bright red flashing lights and displays giant stop signals unfolding from the back, with signs that say in no uncertain terms that it is illegal to pass a school bus when it is stopped and the lights are deployed.


In Minnesota, the school buses have stop signs that swing out with lights on them when the bus is boarding or letting off children. It is illegal to go around the bus when the sign is displayed. I hope I never find out how much the ticket is.


Yup, we have those in Texas too. I assumed they were standard; I guess TIL they're not. They very much should be. Children, especially very young children, are quite unpredictable even when they should know better, and the signs are a signal from the bus driver to surrounding cars that the safety of those children (which the driver should be monitoring) is much, much more important than the impatience of the surrounding motorists. You always stop. Always.


Ridiculous those big road hogs get to sit there impeding traffic for literally no reason. Just adjust the route so the kid only gets out on the side facing the house so the rest of us don't have to suffer. Pay taxes for the road then have to sit there not even able to properly use it, ludicrous.


I'm glad you weren't hurt worse; an unfortunate college student at my university tried the same thing with a public transit bus stopped to let her off at a green light, and she did not survive.

The car passing the bus had no chance of seeing her, and there was no reason to believe anyone would be crossing against the green light. I think she may have had a pattern-match malfunction and acted as if she'd just disembarked a school bus. It was a tragedy all around.


Just to be clear, the laws on crossing the street as a pedestrian differ by district.


You didn't get taught by parents/teachers that you should be crossing behind the bus, not in front of it?

I'm not saying it was your fault, it just never ceases to amaze me how car-centric the US is (that's after over a decade of living here)


Back by the massive diesel exhaust pipe? No thanks, you’re really setting kids up to be healthy! In the front, the driver can see if the child is far enough away (hence the pole), and it keeps the kid in sight for any other stopped cars next to the bus. Also, I hope you understand that running around saying things like “it just never ceases to amaze me how car-centric the US is” is offensive and microaggressive. We have what we have, and that’s a lot of room.


I was crossing behind the bus. That's why she couldn't see me. Also, this sub-thread is about the UK, and so unsurprisingly my anecdote is also about the UK.


That only works on a one-way street, not a small two-lane street like where I grew up. It's not as general advice as you think.


IMHO this has little to do with jaywalking.

All sorts of hazards can come up in the road, such as children running into the street, deer or other animals.

This scenario was bound to happen given that auto-braking was disabled and the human driver was playing with their phone instead of paying attention to the road, which was the whole reason they were in the car.


> given that auto-braking was disabled

Only the Volvo auto-braking was disabled; the Uber software/hardware itself should have handled the situation fine.


Agreed; my thanks to whoever removed the politicizing term "jaywalker" from the submission title, but more importantly shame on the NTSB for using it in this report. Whether or not she was breaking a street-crossing law is entirely irrelevant to the driver's obligation of due diligence.


When I trained for my driver's license in Germany, the scenario "people may run onto the street" was one of the most frequently recurring scenarios in the theory part.

Anything from "children run onto the street to chase after a ball they were playing with" to "pedestrian jumping out between cars because they didn't see you" or "tram stops in the middle of the road and people exit onto the street".

Legally pedestrians must use marked crossings when available and are told to check both ways, but if a driver hits a pedestrian, it's always the driver's fault (though the pedestrian might share some blame in some circumstances but never the full liability).


I find it extremely hard to believe it wasn't top of mind for the Uber engineers. I think it's more likely they just fucked it up or deliberately ignored it and are letting the NTSB think it was negligence because it's a less harmful outcome for them.


Sure, that's what the driver's ed classes drill in, but it's not true that it's always the driver's fault.

The German law still has some notion of whether a driver could have prevented an accident or not. Not all situations are the fault of the driver.

Extreme example: If somebody runs into your car on the Autobahn that's not your fault.


I've seen that happen (on a 4-lane hwy): a drunk who apparently tried to jump under a bus, but reacted so late he nearly missed it; as it were, he stumbled into the side (and rebounded to the wayside). Quite improbable, yet there we were.


> Legally pedestrians must use marked crossings when available and are told to check both ways, but if a driver hits a pedestrian, it's always the driver's fault (though the pedestrian might share some blame in some circumstances but never the full liability).

It's theoretically the same in most US jurisdictions. However, it's not what the law says that matters; it's what the police will actually do, and in practice, there is a very strong tendency to blame the pedestrian for any accident.


> Anything from "children run onto the street to chase after a ball they were playing with" to "pedestrian jumping out between cars because they didn't see you" or "tram stops in the middle of the road and people exit onto the street".

What about "drug impaired people going onto the autobahn at night wearing dark clothes in a section unilluminated by street lights" which is what I believe happened here?


It's near impossible to enter the autobahn without a car or other motorized vehicle. I am not sure what kind of road the victim was following, but it surely wouldn't compare to the autobahn. Was it a highway with free entrance for bicycles and scooters?


It's quite easy to enter it on foot - unless you mean "unintentionally", in which case I agree with you.


From the video, the road has sidewalks, so it's reasonable to expect pedestrians.


> at night wearing dark clothes

This implies the woman was not easily spotted. She was, at > 5 seconds to impact - plenty of time for a braking or avoidance maneuver.


You believe incorrectly. Nearly every word of what you typed was wrong.


Rhetoric spewing forthcoming... I talk about this stuff with my peers, and am constantly dismissed by the techno-optimists that believe in the promises of big tech, but I think it is important not to jump the gun and put these self-driving cars into production (i.e. the real world) without a significant paper trail and analysis of real-time incidents being made available to the public. Full stop, all incidents and outcomes, not just the good.

Going further, I wish for more initiative to re-design infrastructure putting the safety of pedestrians, cyclists etc at the forefront. It is simply not safe for alternative transportation to share the road with vehicles, autonomous or otherwise. Coincidentally, if infrastructure were designed to limit or prevent the interaction between cars and non-cars, the problem of detection for autonomous systems becomes much easier. Unfortunately, society would rather prop up the free market with capital than invest in public works which benefit more than corporations and shareholders.


Good points, but we also need to quantify the risks posed by human drivers in similar situations. Getting "similar situations" is hard, but not impossible: the aircraft industry builds faithful sims where people can respond to various stimuli (or to the lack thereof for hours, after which trouble comes).

As it is, we bash every crash and failure of such cars and may prohibit those from public roads while statistically they might be making our lives safer, not more dangerous. My 2c.


I see what you're getting at, but there is a certain amount of cognitive dissonance at play here. If a person driving a car strikes a pedestrian, there may be a grey area as to who was at fault. What was the speed of the driver? Was the driver impaired? Was the pedestrian impaired? Was the driver's view obstructed, was the pedestrians view obstructed? Etc, etc.

But when a system that is designed to autonomously detect infinite, non-discrete, random variables fails to prevent a collision/incident, you have a failure. There is no grey area, Y or N. The hope is, the system is smart enough to determine exactly 'why' there was a failure (i.e. the system recognizes the input, a pedestrian in this case, but was not able to prevent the collision due to the constraints of physics, or without a nominal increase in the risk of concurrent accidents). Still a failure, albeit a failure with a greater snapshot of heuristics and causality.


You're getting caught up in pedantry when only one thing matters: Do AI drivers kill fewer pedestrians than human drivers?


I've never understood techno-optimists on this point.

If Uber has such amazing technology that avoids accidents, they could deploy them as accident-avoidance for regular drivers. They could give normal driving control to the humans, and take over only if there is a potential of accident.

If you argue that these driver assistance technologies already exist, then you better not be comparing your magical AI on supersafe cars with sunny climate on well marked highways to the average driver on the average vehicle under average climate on average roads.


That is a good point, especially when considering that modern cars have semi-autonomous controls which are likely to actually be effective in preventing ‘human’ failures, while still maintaining the UX of a non-autonomous vehicle.

Another point to consider, is that the actual experience of autonomous driving may increase the likelihood for ‘manual intervention’ failures due to the system being able to react correctly most of the time!


Sure, but do they? Can we really know with such different sample sizes?


Regarding sample sizes.

Say, you claim your product works, on the average, for 80 years before breaking.

I believe you, buy it, and it promptly breaks within 2 years of operation.

Maybe I was unlucky. But far more likely, you were bullshitting me.

That's what we have here.

Human death rate: 1 death / 80 million miles

Uber's death rate: 1 death after 2 million miles

So yes, we can make judgement calls.


All evidence points to nope.


Citation needed. This is my argument. Unless you are working for Uber/Tesla/Waymo where is the data? I'd even be satisfied with seeing incidents where the system successfully avoided collisions, at least as a baseline. How do regression lines trend in comparison to non-autonomous cars considering scale?


>Citation needed.

Sure!

Traffic fatalities in the US, human drivers: 1.25 deaths per 100 million vehicle miles [1]https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...

Uber's self-driving car fatality rate: 1 death per 2 million miles[2]

That is, Uber's state-of-the-art is an order of magnitude worse than human drivers.

I know, you'll complain about small sample size. The thing is, assuming Uber's cars match humans, the chances of a fatal collision happening after only 2 million miles are very small, like pulling the ace of spades out of a shuffled deck of cards.
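
For what it's worth, here's the arithmetic behind that card analogy as a quick sketch (my own back-of-the-envelope using the numbers cited above, modelling fatalities as a Poisson process; nothing here is from the report):

    # If Uber's cars were exactly as safe as human drivers, how likely is it
    # that the first fatality shows up within the first 2 million miles?
    import math

    human_rate = 1.25 / 100e6   # fatalities per mile (NHTSA figure cited above)
    uber_miles = 2e6            # miles driven before the first fatality

    expected = human_rate * uber_miles        # ~0.025 expected fatalities
    p_at_least_one = 1 - math.exp(-expected)  # Poisson: P(N >= 1)

    print(f"expected fatalities in {uber_miles:.0e} miles: {expected:.3f}")
    print(f"P(at least one fatality): {p_at_least_one:.1%}")
    # ~2.5%, versus 1/52 ~ 1.9% for drawing the ace of spades.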

The point is, we do have the data, and the data does not support optimistic beliefs.

[1]https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...

[2]https://arstechnica.com/cars/2018/10/waymo-has-driven-10-mil...


I'm pretty sure if they were better than humans by any real statistical margin they would be trumpeting that in press releases. The fact that they work so hard to maintain such secrecy - only revealing data that they are legally mandated to - suggests that they, the people most familiar with the system's performance, think it is inadequate.

In essence, because the people who control the data have an interest in revealing data that is superior to human performance, the lack of conclusive data on this leads to an inference that it isn't.


> without a significant paper trail and analysis of real-time incidents being made available to the public. Full stop, all incidents and outcomes, not just the good

Without simultaneously reporting the identical data for human driven vehicles in a complete, accurate, and globally comprehensive way there is little that can be done with such data except use it to make political attacks on the technology.

As comparison: every single battery electric vehicle that has a fire is widely reported and disclaimed, impugning the technology as unsafe. The 10s of thousands of flammable fueled vehicle fires are completely unreported in the media.


A brief look at https://www.nhtsa.gov/ reveals that more data is available than I thought, along with a roadmap to full automation (https://www.nhtsa.gov/technology-innovation/automated-vehicl...). I'm glad this is being looked at, and it seems that they are in favor of automation. That being said, the data is there for comparison/analysis. One major benefit of autonomous vehicles is that they are certainly better reporters than humans. They need not rely on the fallible memories of human drivers when data from sensors, cameras, internal system monitoring etc is out there living in the data-center. A complete snapshot of an event for replay removes any uncertainty from the equation. Simply put, these systems are designed to provide optimal data that should be captured for analysis. I am not a Luddite here, I think that it is possible for machines to be more effective drivers than humans. As a consumer/pedestrian however, I am skeptical of any claims without actionable (open) data being provided.

*edit whoops don't know HN link formatting :)


Yes, exactly! I'm Australian, it's perfectly normal here to walk on the road when there aren't footpaths, ideally walking against the direction of traffic flow so that you can see oncoming cars.

Pedestrian crossings aren't provided on most roads outside of shopping and business areas - you simply have to cross at a safe point and try to be visible.

I really don't know about the US but I can't imagine it's that different: surely people need to be able to cross the road without being run down as an unclassified object.


The US has a very hostile view towards pedestrians, in large part due to PR campaigns by car companies. The term jaywalking was invented to slander pedestrians not crossing “correctly” (jay is a synonym for rube).


So your view is that Uber specifically chose to not classify pedestrians correctly because of car maker propaganda? You actually believe that?


No, the view is that Uber (and city officials, and some of the press) are spinning hard to shift the blame to the victim of this manslaughter, by using words such as jaywalker.

FYI, this is an American thing[1]. There's no word for jaywalking in Russian, for instance. Or French. If you know any other language that has a word for that, let me know.

It didn't even spread to the UK:

"In other countries such as the United Kingdom, the word is not generally used and there are no laws limiting how pedestrians can use public highways."

[1] https://en.wikipedia.org/wiki/Jaywalking#Origin_of_the_term


FYI, your own link has a list of jurisdictions that have jaywalking or jaywalking-like laws. Did you even read it?


I feel like if someone's out walking around they're probably just doing it for fun or exercise and don't have anywhere important to be. People in cars are more likely to be on a schedule. One of the reasons I think it makes more sense to favor cars in most situations.


I have walked around outside in order to get places almost every day since I moved out of Arizona years ago. Even in the car-centric US, huge numbers of people live in places that aren’t quite so car-centric as Phoenix.


In reasonably dense U.S. cities, people walk into the road outside of crosswalks all the time. Even in suburban areas, it's normal to cross like that in residential areas. But suburban areas are often crisscrossed by 30-50 mph streets where crossing the street outside of crosswalks (or sadly even in crosswalks, really) is dangerous. This is partly because of the speed limit. But also partly because there are few pedestrians (points of interest are far apart so people need to drive everywhere), so drivers are relatively reckless.

I'm not convinced that the pedestrian outside crosswalk omission arose because of U.S. car-centric thinking. I think if any software engineer working on the project was told that their autonomous driving system wasn't watching out for these pedestrians, they'd be horrified. In fact, probably many of them were already horrified at how poorly their system was performing in general, and yet management still deployed the software in actual physical cars.


Nobody is blaming the engineers here, they are not the ones who get to decide when to launch.


Perhaps they should be the ones who get to decide. In many safety-critical fields they do. That's the point of requiring an engineer's stamp of approval on a design.


In the UK pedestrians have the "right of way" if they are crossing the road and a car comes up on them, on all roads except motorways and slip roads, where pedestrians are banned. I put "right of way" in quotes because there's no set law - but almost all the time, because a car will hurt a pedestrian, the law will side with the pedestrian. In terms of written law, it says that if a walker is crossing a side road that a driver wants to turn into, then the driver must give way.

One example would be if you were leading a walking group of 20 people and wanted to cross the road and it was safe to do so at the beginning but because the group is large it takes a long time. Any approaching cars would not have any right of way until all the group has passed.


A right of way means that they have priority. In the UK pedestrians only have a right of way in specific cases.

But drivers have a duty of care, and if a pedestrian is crossing a road then obviously they must do what is safe, i.e. slow down and let the pedestrian cross if (s)he is in the way.


At least some US states don't have a legal concept of "right of way", specifically to avoid the tacit implication that the person who has right of way is under no responsibility to avoid a collision if someone in their path doesn't.

e.g., where I live, the traffic laws are defined in terms of who has the responsibility to yield, and undergirded by a catch-all requirement that everybody who is able to act to avoid a collision must do so.


It's perfectly fine to give pedestrians the right of way, because assuming they also have some sense of self preservation isn't really so far fetched.

For instance in Germany they drill into you in driver's ed that when you hit someone with your car, it's pretty much always your fault. I'd say we don't have a jaywalking epidemic.


It is. Legally, the two are entirely equivalent.

The reason some US states choose to specify who should yield instead of who gets to go first is because framing things in those terms hopefully alters the ways that drivers think about things in a way that will improve safety overall.

TBH, I suspect that it's an empty gesture that is being used as an alternative to implementing more rigorous driver education and licensure requirements; the US tends to view driving as more of a right than a privilege.


It's reasonable to give priority to pedestrians once they are on the road because it makes sense to allow them to leave the road as quickly as possible instead of being stuck there because cars don't let them, which is both very dangerous and bad for traffic.

It's equally reasonable for pedestrians NOT to have priority to start crossing the road unless there is a clear, marked pedestrian crossing.


Sure, you may get a ticket for not using a pedestrian crossing if there is one nearby, but if you get run over it's still probably the driver's fault. You can have both.


There is a difference between having priority and having the right to be safe.

Priority means that cars must give way, which is the case at pedestrian crossings: Cars must give way BEFORE a pedestrian has started to cross in order to let him cross.

But in any case once a pedestrian is on the road cars have a duty of care not to hit him.


I have a saying: don't let your gravestone say "But I had right of way!"


French road law specifies that a pedestrian keeps absolute priority even if they are infringing the law.


It's not car-centric, it's just extremely bad (though perhaps just incomplete...) design for a car that is tested on public roads.

Anything can get in the way of a moving car. Pedestrians are unpredictable and should be kept an eye on at all times.

In the UK it is legal for pedestrians to cross roads anywhere unless specifically forbidden (that's basically motorways). In general accidents it will be deemed the driver's fault (if you see a pedestrian you're supposed to slow down and avoid, not honk and continue because you have a duty to drive with due care and consideration) unless the pedestrian acted in such a way that the car could not avoid the accident.


The emergency braking system was likely turned off because it had triggered too many false positives. (Abruptly braking too often also makes you a dangerous driver.)

So out of complete necessity, these cars need to be programmed to actually ignore pedestrians unlikely to cross into the path of the vehicle.

This is a tough problem because the AI has to identify the human (it seems as if it did in this case) AND the intent. I've seen two recent videos of Teslas abruptly stopping for humans on the side of the road. One was actually a model on a bus stop poster, and the other clearly had no intent of crossing the street and angrily motioned the Tesla on.

Imagine driving by a group of joggers on the side of the road. Who can you pass safely? Who may stumble a bit into your path? Who will prepare to cross but first wait for you to pass by? Who will try to cross in front of your path? The micro-decisions and predictions made are very challenging to get right.


> The micro-decisions and predictions made are very challenging to get right.

Yeah, they really are. So... maybe they just shouldn't have self-driving cars until they can figure that stuff out?



This is where Waymo uses deep learning models that can predict the future behavior of other road users in real-time. They are hiring: https://waymo.com/joinus/1235933/


This is par for the course in AVs. It was probably the case for Uber too: hardcoding an "action suppression" heuristic on top of it is horrifically negligent IMO (I work on ML systems for an AV company)


[nevermind, the case I linked wasn't a pedestrian]


It was a pedestrian, _walking_ a bicycle.


Interesting segue: in The Netherlands, that would not legally be a pedestrian, although you would be classified as one for most purposes.

A pedestrian ("voetganger") is someone who travels by foot only. If you guide another vehicle by hand, or even a horse or cattle, you are legally considered a driver. Therefore, most laws concerning pedestrians have to make explicit allowances for walking a personal vehicle. For example, the primary Dutch Traffic Regulations [1]:

The rules in this resolution concerning pedestrians apply equally to persons guiding by hand a motorbike, moped or bicycle

[1] https://wetten.overheid.nl/jci1.3:c:BWBR0004825&hoofdstuk=I&...


I posted about Harry Dunn who died while "riding his motorcycle when a woman emerged from the airbase on the wrong side of the road and there was a head-on collision." I mistakenly recalled that case as involving a pedestrian.


That seems thoroughly off topic.


That case involves neither autonomous cars nor pedestrians.


My bad, people commented too quickly for me to delete it when I realized it.


> but it certainly feels like pedestrians should look out for cars when crossing, and drivers should look out for pedestrians.

The problem in this case was that the pedestrian was impaired by drugs, if I recall correctly.

In my opinion, this happened because of 3 simultaneous failures: failure of the pedestrian to responsibly cross street, failure of the computer system to recognize the obstacle, failure of the computer babysitter to be on the lookout for obstacles and override the computer.

Hopefully we learn something from this triple failure and take measures to reduce their likelihood in the future.


> The problem in this case was that the pedestrian was impaired by drugs, if I recall correctly.

That's overstating it. I think it would be decent under the circumstances to spend 30 seconds Googling it before slandering the victim:

Wikipedia: https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

According to the preliminary report of the collision released by the NTSB, Herzberg had tested positive for methamphetamine and marijuana in a toxicology test carried out after the collision. Residual toxicology itself does not establish if or when she was under their influence, and hence whether it was an actual factor.


Soooo...it's now okay to run over people (5 seconds is a LOOOOOOOOOOOOOOON-crash! time), as long as you have an excuse? Go watch the scapecam footage: even with the horrible picture quality, you can see the victim walking (in a straight line across the road, no less!) a very long time before impact.

Perhaps you could also argue "if she were wearing a helmet, it would have conferred +1 to CON and perhaps saved her", for what it's worth - makes sense in the crudest of simulations, but not in RL.


Of course not! It's never okay to kill people. But she is at least partially to blame for her own death. If I take a nap on some train tracks and get run over, is my death 100% on the train engineer for not stopping even though I was in plain sight for 30+ seconds?


That’s not even close to jaywalking. Bad example


You are taking this into ridiculous extremes. She wasn't taking a nap in the road, and a car is stoppable at a far smaller distance than a train. Your argument boils down to "it's her own fault for existing outside a car, shit happens."


When I jaywalk (or cross anywhere) I consider it my responsibility to cross safely. If I got hit by a car, it is 100% my fault for not looking. Maybe it is also 100% the driver's fault. Responsibility and blame is not something that has to get divided up such that it adds to 100%.

In this case the pedestrian and driver are both fully responsible. The pedestrian was lollygagging across the road like a complete idiot. The driver wasn't paying attention.


> Responsibility and blame is not something that has to get divided up such that it adds to 100%.

I think I might agree with something you're trying to say, but this point is completely incoherent.


I'm not saying responsibility + blame > 100%, if that's what you think I said. Read it as two statements: "Responsibility is not something...," and "Blame is not something..."

If doesn't clear things up, I don't know how it could be incoherent.


"The most glaring mistakes were software-related. Uber’s system was not equipped to identify or deal with pedestrians walking outside of a crosswalk. Uber engineers also appear to have been so worried about false alarms that they built in an automated one-second delay between a crash detection and action. And Uber chose to turn off a built-in Volvo braking system that the automaker later concluded might have dramatically reduced the speed at which the car hit Herzberg, or perhaps avoided the collision altogether."

Wow, dangerously and maliciously negligent engineering. Another reason Uber is a company to be wary and critical of. It's hard to say whether their rushed and corner-cutting approach has set back SDCs, but those engineers should change their ways now that a death is on their conscience.


> a death is on their conscience

How much negligence is required before a death is on their criminal record?


Ask Boeing about that.


In this case, isn't the blame on Vasquez, the safety driver? If a plane crashes while on autopilot, ultimately we should blame the human pilot since they retain master control.


Given the nature of the system, when would she have been notified that she needed to take control? Additionally, what did Uber tell her about how the system operated? Ultimately it is on Uber for the lack of serious consideration. Rolling out level 2 & 3 autonomous driving systems where driver attention (especially non-experts in an open world) is still required on a wide scale is absolutely dangerous given human nature. Most people believe that you have to completely jump to level 4 & 5 autonomy which represents fully autonomous systems.

Think of it like this: "Hey, let's get this person who isn't an employee of the company, gets paid a pittance in a "gig" economy where extreme "hustle" is needed, probably isn't an attentive driver anyways and put them as a critical backstop in a system that is still a prototype and that they most likely don't understand."

The Uber engineering team lacked or ignored human factors experts.


I think it depends who was legally in control of the vehicle. I assume (and I realise it could be different, but this is just creating a model) that the safety driver would legally be considered to be in control of the vehicle, and as such responsible for the crash. She did after all have the ability to prevent the crash had she not been negligent.

I assume for instance that if I were to use Tesla AutoPilot on the road in the UK (I don't have a Tesla so I haven't looked into this) and my car crashes into someone while a) it's enabled and b) I'm not paying attention that I am still 100% at fault.

Until self-driving cars can legally be in control of themselves, absolving occupants of any responsibility, I'd assume that this is, or at least should be, the case.

I don't think Uber is clean in this, to be clear. I suspect they were cutting corners to stay competitive, and I just don't trust them at all to make decisions that are in the interest of the general public, but the direct criminal responsibility seems to lie with the safety driver, even though it seems that Uber should be suable for something.


"legally be in control of themselves"

Never. They have no skin in the game. If they did, locking them inside a car for their life would be illegal.


At some point I suspect laws will change if self driving cars become good enough. I don't know where the liability will lie, perhaps the car manufacturers, or the insurance companies.


Indeed. The "safety drivers" are designed to protect Uber's liability, not pedestrians.

see: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236 ("Moral Crumple Zones")


Much of the blame in the 737 MAX crashes is aimed at failed automation.

https://www.theverge.com/2019/3/15/18267365/boeing-737-max-8...


In her defense, it's much harder to react to emergency situations if you are not already fully engaged. It's similar to the difference between dropping a breakable item versus watching someone else drop it. You will start reacting to your own folly much faster than passively watching it unfold.


Uber should have added a camera which watches the driver and checks if they're paying attention.

If not it alerts, turns the warning lights on and slowly stops the car.
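
Something like the escalation logic below is what I have in mind. It's only a toy sketch: attention_detected() is a stand-in for whatever gaze/face tracking you'd actually use, and the thresholds are made up.

    # Toy escalation loop for a driver-monitoring system. attention_detected(),
    # alert(), enable_hazards() and begin_controlled_stop() are hypothetical
    # hooks; the thresholds are invented for illustration.
    import time

    ALERT_AFTER = 2.0   # seconds of inattention before an audible alert
    STOP_AFTER = 5.0    # seconds of inattention before pulling over

    def monitor_driver(attention_detected, alert, enable_hazards, begin_controlled_stop):
        last_attentive = time.monotonic()
        while True:
            now = time.monotonic()
            if attention_detected():
                last_attentive = now
            inattentive_for = now - last_attentive
            if inattentive_for > STOP_AFTER:
                enable_hazards()
                begin_controlled_stop()   # slow down and pull over safely
                return
            if inattentive_for > ALERT_AFTER:
                alert()                   # chime / seat vibration
            time.sleep(0.1)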


I suspect the mistake of having a human operator watching Netflix instead of monitoring the road was probably the "most glaring". Software shitting the bed is a daily occurrence after all, but I can't remember the last time I watched TV while I was supposed to be driving my car.


Imagine having to sit at a desk watching a Twitch stream of someone driving a car, and if you see that they're about to crash you have to hit a button. I think the average person would do well at that task for a few hours... can you imagine doing it for 2000 hours per year if that's your full time job?


At least they reached their Sprint-goals, Business was happy and Retrospective out-scoped the issue.


I'm stunned that no one is facing jail for this.


Buried deep in one article it seems that despite there being some evidence the safety driver was looking at their phone, they decided that the driver wouldn't have had time to stop anyway.


Why are the cars even on the roads when it was fairly clear this could happen?


Because it's no more risk than just driving a regular car on the road? Uber employs/contracts thousands of regular people to drive customers around and I'm sure some of them have had accidents as well - they just don't make the news as much.

There is this secondary issue that partially autonomous cars give the driver a false sense of safety but that's not unique to this case either.


~6 seconds is quite a lot of time to stop when decelerating from 40 mph. Wonder what the visibility was. Not sure how valid "ya they were looking away but they wouldn't have seen them anyway" is.
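
Rough numbers, where the deceleration and reaction time are my own assumptions rather than anything from the report: 40 mph is about 18 m/s, so ~6 seconds corresponds to roughly 100 m of sight line, while a hard stop needs well under half of that.

    # Ballpark stopping figures from 40 mph. Deceleration and reaction time
    # are assumptions, not values from the NTSB report.
    v = 40 * 0.44704      # 40 mph in m/s (~17.9 m/s)
    decel = 7.0           # m/s^2, hard braking on dry asphalt (assumed)
    reaction = 1.5        # s, typical perceive-and-react time (assumed)

    braking_time = v / decel                   # ~2.6 s
    braking_dist = v ** 2 / (2 * decel)        # ~23 m
    total_time = reaction + braking_time       # ~4.1 s
    total_dist = v * reaction + braking_dist   # ~50 m

    print(f"braking alone: {braking_time:.1f} s over {braking_dist:.0f} m")
    print(f"with reaction: {total_time:.1f} s over {total_dist:.0f} m")
    # Both fit comfortably inside a ~6 s / ~100 m window for an attentive driver.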


Kind of depressing to think about: They hired a safety driver, then engineered the system so the safety driver was not capable of preventing the accident. Just paying a human to sit in the seat and watch the car kill a pedestrian. Even if it's not the driver's fault, they're probably going to feel guilt about their involvement forever. I sure would. All because some engineers cut corners!


> engineered the system so the safety driver was not capable of preventing the accident.

I don't believe this is true, unless I misread the article. The safety driver can override the vehicle controls at any point, and had she been paying attention to the road, would have done so five seconds earlier.


With a disabled emergency braking system and an inability to handle rather common situations like people jaywalking, these cars really shouldn't have been driving in public. That is the kind of stuff that I'd expect to be tested and implemented on private grounds.

Additionally, reducing the number of people in the car to one when the car is pretty much by design not capable of handling emergency situations by itself is quite reckless.


Additionally, permitting a test plan that prominently involved requiring all operators to break the law by operating a computer when they were supposed to be keeping their eyes on the road is quite reckless on the part of the State of Arizona.

I really don't think we should be so eager to damn Uber that we forget that there were more than just Uber employees asleep at the wheel here: Arizona has a duty to protect public safety. By permitting a self-driving car test program on the public roadways without doing even basic due diligence in vetting the program first, the State was grossly negligent in that duty.


> there were more than just Uber employees asleep at the wheel here

You couldn't be more right. The zeitgeist in 2015 was full-steam ahead on the autonomous future and woe to anything that stood in its way.

The framing of Arizona's embrace of autonomous testing four years ago [0,1] contrasted with California's caution [2,3,4] couldn't have been starker. One was branded enabling innovation, the other was seen as bureaucratic red tape holding up progress.

It's too easy to get wrapped up in these narratives.

[0] https://azgovernor.gov/governor/news/2015/08/governor-doug-d... [1] https://www.uber.com/blog/tucson/driving-innovation-in-arizo...

[2] https://fortune.com/2015/12/16/google-california-rules-self-... [3] https://www.theverge.com/2015/12/16/10325672/california-dmv-... [4] https://www.nytimes.com/2015/12/17/technology/california-dmv...


They shouldn't have been driving in public with someone who wasn't paying attention 100% of the time as any regular driver would. The longer quote:

> Also, don't forget: the SUV's emergency braking system was deliberately disabled because when it was switched on, the vehicle would act erratically, according to Uber. The software biz previously said “the vehicle operator is relied on to intervene and take action," in an emergency.

The question then is whether there was proper training and communication to the test drivers that it's never okay to look down at your phone. Or whether that was simply an unrealistic expectation. Or if the hours were too long, or testing at night, etc.

It said there was 5 seconds which should have been more than enough for a human test driver to hit the brakes, which was their stated job.


In this case the safety driver was watching their phone, which was playing a movie, so a bit beyond the normal level of attention wandering you'd expect. But even for people trying their best, having to pay attention to a road for hours without having to provide any input isn't something you can reasonably expect them to do. NHTSA level 3 autonomy is just a bad idea; we need to go straight from 2 to 4.


I think so, too. It's just asking something that humans weren't designed for.


>The question then was there proper training and communication to the test drivers that it's never okay to look down at your phone.

If I remember right, Uber did cost-savings by removing the safety engineer and expected the driver to fill both roles simultaneously.


> They shouldn't have been driving in public with someone who wasn't paying attention 100% of the time as any regular driver would.

It's pretty well known that humans' minds wander, that it happens more when monitoring reliable systems for rare problems, and that it makes operators less responsive and lowers their error detection rate [1] - as anyone who's attended a boring meeting or lecture can attest!

I'm not sure that anyone informed would imagine a worker spending 40 hours a week monitoring a self-driving car would be able to watch it with 100% attention.

The truth is nobody realistic expects the safety driver to respond to reliably prevent an accident like this - they're there for slower-developing problems, resetting false alarm stops, and taking the blame.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5633607/


That's why two tests drivers makes sense in the early days and not keeping extended hours.

Companies like https://comma.ai/ are taking a much better approach IMO by keeping it simple by first perfecting lane assist/highway driving + building a driver watching device which alerts them when they stop paying attention for x amount of time. Which Uber should be investing in for their test drivers.

Another important thing is being realistic about expectations: of course 100% attention is unrealistic even for normal drivers, and accidents will happen regardless. Hitting jaywalkers on a dark multi-lane high-speed road is a lot less bad than other possible scenarios, and there really haven't been that many accidents yet.


>The question then was there proper training and communication to the test drivers that it's never okay to look down at your phone.

I would be surprised if it involved anything more than an easily-slept-through safety video. Should have come with periodic checks for distraction.


This makes me wonder how Uber got clearance in the first place. Did a regulatory body, like the NTSB, need to sign off on these self-driving cars before they could be driven on public roadways? If so, how did they miss this during that certification process? Surely emergency situations are at least discussed, tested, and reviewed?


So uber fails to predict even completely linear paths traveled by objects if the system isn't trained on or expecting that specific type of object? That sounds like an even bigger issue than "we didn't think pedestrians could exist outside designated crossings"


What's crazy to me is that an "Other" object isn't flagged for a complete stop or a major alert to the "safety" driver. This seems like a major case for when the software should stop being autonomous, when it isn't sure.

Likely the story is that their software flags a lot of things "Other" without figuring it out upon closer inspection.


For those who think "well, pedestrians don't belong on the road so this wasn't much of a bug" (because I'm sure there are some of those): just imagine the "other" was a large chunk of debris, or cargo fallen off a truck. The car would have crashed right into it.

There can be many obstacles on the road, stationary or non-stationary.

AI can get away with requiring humans to take control in emergency/unusual situations, but not when it is supposed to be "self-driving". The software at least needs to recognise problematic situations and request a safety override or come to a safe stop, it can't just shrug it off and move on.


Debris tends to be stationary most of the time. Sounds like the car could have handled that.

I agree with that ADS being shit. Objects, once identified, should be accounted for in later updates in some form. E.g. in the way of "pedestrian now behind truck next to me". How the hell can an object switch status back and forth, losing all its history? This should make the system extra cautious at the very least. But yeah, safety doesn't seem to have been that big of a priority.


Most of the time, yes. But if you've ever been in a mountainous region, you'll know that it won't be stationary all the time and that it's most dangerous when it's not.


There was an emergency braking feature that would engage when a collision was imminent. It fired 1.3 seconds before hitting Herzberg. But Uber had disabled that function because there were too many false positives, so the car just alerted the driver (who was not paying attention) instead. https://www.washingtonpost.com/news/dr-gridlock/wp/2018/05/2...


Alerting a human (who wasn't even in the driver's feedback loop at the time) 1.3 seconds before the collision? That is equal to "hey, look at this unavoidable crash that we're driving into!"


1.3 seconds is definitely not enough for a human to engage brain. I agree the human should have been paying attention and might have spotted the problem sooner, but that alert is completely useless.
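
To put a number on it (speed taken as roughly the ~40 mph discussed elsewhere in the thread; the reaction time and braking figures are assumptions): in 1.3 seconds the car covers around 23 m, and a fairly quick perceive-and-react time eats almost all of it before any braking starts.

    # How much braking actually fits inside a 1.3 s warning window at ~40 mph.
    # Speed is approximate; reaction time and deceleration are assumptions.
    v = 40 * 0.44704        # ~17.9 m/s
    window = 1.3            # s between the alert and impact
    reaction = 1.2          # s, a quick perceive-and-react time (assumed)
    decel = 7.0             # m/s^2, hard braking (assumed)

    dist_in_window = v * window                 # ~23 m travelled either way
    braking_time = max(0.0, window - reaction)  # ~0.1 s of actual braking
    speed_shed = decel * braking_time           # ~0.7 m/s (~1.6 mph) shed

    print(f"{dist_in_window:.0f} m covered during the warning window")
    print(f"only {braking_time:.1f} s of braking, shedding {speed_shed:.1f} m/s before impact")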


It is ridiculous to expect that the driver can be alerted and react in any kind of reasonable time frame unless they were already paying attention, in which case what use is a self driving vehicle?


They are supposed to be paying attention in these self-driving car tests. If the driver was negligent then they should be held at least partially accountable.


FWIW, we now know Uber's technology didn't do this, but we don't necessarily know the nitty-gritty details of how other self-driving systems work.

For me, the most important take-away isn't something about Uber being sloppy. It's that the distinction between "machine learning" and "artificial intelligence" really matters, and the efforts of marketing teams to over-hype ML by selling it as AI may be hindering our society's ability to make cogent public policy decisions around these technologies.

(No, I'm not really interested in haggling over ideas like "hard AI" and "soft AI" here, either. Normal people don't split those hairs. Nerds generally don't, either, except when they're trying to score easy points in a debate.)


The machine learning part could have gone better: it didn't expect a pedestrian and jumped between vehicle, other, bicycle and unknown. But even that would have been good enough; the failure is in the code around it. When your classification is unstable or low confidence you can't trust it, which in a safety-critical system would mean falling back to something else. That could be using a different classifier (e.g. linear extrapolation of the object's position to find out if we come anywhere close to it) or just giving up and stopping the vehicle. Just continuing as if the fluctuating results were reliable wouldn't be acceptable in any industry.
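
As a concrete illustration of what I mean by "falling back to something else", here's a minimal sketch (invented names and thresholds, ego-frame coordinates, ego motion ignored; this is not Uber's pipeline): if the label keeps flipping or confidence is low, ignore the classifier, extrapolate the raw track linearly, and brake if it crosses our path.

    # Minimal sketch of a class-agnostic fallback; not any real AV stack.
    # `track` is a list of (t, x, y) observations of one physical object in the
    # ego frame (x forward, y lateral); ego motion is ignored for brevity.
    def classification_unstable(labels, confidences, min_conf=0.8):
        # Flip-flopping labels or low confidence over the last few frames.
        return len(set(labels[-5:])) > 1 or min(confidences[-5:]) < min_conf

    def crosses_our_path(track, lane_halfwidth=2.0, horizon=6.0, dt=0.1):
        # Linear extrapolation from the last two observations.
        (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
        vx, vy = (x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0)
        t = 0.0
        while t <= horizon:
            x, y = x1 + vx * t, y1 + vy * t
            if x > 0 and abs(y) < lane_halfwidth:   # ahead of us, inside our lane
                return True
            t += dt
        return False

    def should_brake(labels, confidences, track):
        # Don't trust an unstable classifier; fall back to plain geometry.
        if classification_unstable(labels, confidences):
            return crosses_our_path(track)
        return False  # otherwise defer to the normal planner

Even a crude check like this treats a flip-flopping detection as one moving obstacle rather than a series of unrelated static objects.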


What kind of tech do you expect them to use? While I hope it wasn't, I'd not be surprised if they were just running a classifier over the current scan, having it spit out "vehicle" at coordinate A, and searching the previous scan's analysis for a matching vehicle close by. How do you handle uncertain/flipping classifications? Or objects appearing to suddenly "spawn"/"despawn"? A proper "what object was that in the past" would likely require its own AI. Reliable linear extrapolation doesn't seem that simple to me.
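
That said, even a dumb version would at least keep the motion history across label flips: associate detections to existing tracks purely by position and store the class as a separate, mutable attribute. A toy sketch (invented data structures, nothing real):

    # Toy class-agnostic tracker: detections are matched to tracks by distance
    # only, so a label flip never resets the position history.
    import math

    class Track:
        def __init__(self, x, y, label):
            self.history = [(x, y)]
            self.label = label              # latest label, kept beside the track

        def update(self, x, y, label):
            self.history.append((x, y))     # history keeps growing...
            self.label = label              # ...even when the label changes

    def associate(tracks, detections, max_dist=2.0):
        """detections: list of (x, y, label); greedy nearest-neighbour matching."""
        unmatched = list(detections)
        for trk in tracks:
            tx, ty = trk.history[-1]
            best, best_d = None, max_dist
            for det in unmatched:
                d = math.hypot(det[0] - tx, det[1] - ty)
                if d < best_d:
                    best, best_d = det, d
            if best is not None:
                trk.update(*best)
                unmatched.remove(best)
        for x, y, label in unmatched:       # leftovers start new tracks
            tracks.append(Track(x, y, label))
        return tracks

The hard part you're pointing at (occlusion, crowded scenes, objects that genuinely appear and disappear) is real, but "the same blob half a metre from where it was last frame" covers exactly the failure described above, where the history was discarded every time the classification changed.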


From what I remember, the pedestrian was detected early enough but classification jumped between two or three classes and it never got a prediction vector, basically being a standstill object (at different locations) that was never predicted to cross the trajectory of the car.


Jaywalking was invented by the car industry, I wonder if they'll double down on it again: https://www.vox.com/2015/1/15/7551873/jaywalking-history


Maybe works for Uber and GM, but in Europe jaywalking doesn't exist. If they want to sell cars here they will have to deal with being responsible for almost every pedestrian they hit, because that's the standard we already set for human drivers.


> but in Europe jaywalking doesn't exist

Europe is not a country and each European country has different laws: https://en.wikipedia.org/wiki/Jaywalking#Europe


But it is a somewhat homogeneous region with similar laws, as supported by that Wikipedia article: in general you can cross roads anywhere unless it's a motorway or you are within 100 meters of a designated crossing (any closer and the laws diverge on what's acceptable). That's just how we talk about the US treating things a certain way despite the different states having varying laws with countless exceptions and special cases.


In which case, this would have been a jaywalking infraction in many European countries, as a designated crossing is nearby.

However, it is not the role of a driver (or their car) to serve death penalties for that.


According to Wikipedia, that "nearby" crossing is 110 meters away, so slightly outside the range for most European countries. She most likely used a set of crossing paths on the area between both halves of Mill Ave, so she started even further south of it.


> your nearby is 110 meters, so slightly out of the range for most European countries.

« Every » rather than « most », 100m seems to be the Eastern European limit and going down the list in western or northern Europe the limits seem far lower, usually 20 to 50. Furthermore pedestrians not following this often does not absolve drivers from any (let alone all) responsibility.


Along with attempting to shift blame to the cloud, there are always suggestions which try to mitigate their inability to deal with sentient things with more rules.

https://news.ycombinator.com/item?id=20882763 https://news.ycombinator.com/item?id=20520124


In NYC and LA, pedestrians have the right of way, no matter where they are in the road. The pedestrian might be cited for jaywalking, but the motorist is liable for any injury to a pedestrian.


When the software I write fails, and it does, someone doesn't get to read a web page. Nobody dies or even gets very inconvenienced. This story reminds me of how good a thing that is. After 40 years of practice I'm still not competent to write such high stakes code. I hope the people who do write it are far more competent than me, but still at least as skeptical about their own capability.


(I don't know how competent you are, but it's likely that) nobody is more competent than you are. We solve high-stakes situations not by installing different people, but by installing better systems and by double-checking things.

In my job we ensure the quality of high-risk things by having multiple layers of review and quality control. It doesn't matter how good the people are who are assigned to the task. We need double-checks. Lower-risk things are not subject to this level of quality control.

Most computer software is low-risk and so it gets very little review. Unfortunately some computer software is much higher risk but does not seem to get review commensurate with its risk.


Yeah, this is so important to understand. The fundamentals of creating safe systems revolve around putting in place good procedures and systems, not just 'being better'. A great example is how surgery was made safer simply by introducing checklists to ensure all pre-surgery checks were done. Surgeons didn't get smarter or more diligent, they simply had to check off all the basic things before a surgery, and that forced them into a situation where they couldn't miss obvious things. According to studies, just introducing that basic checklist reduced in-hospital mortality for surgeries by close to 50%.[1]

[1]:https://www.who.int/patientsafety/safesurgery/faq_introducti...


An interesting trap is people in the field of social media (FB/ Twitter) not realizing that they are in that high-stakes environment where bad privacy settings and leaks can cause death.


Some of us do, as we come from (or work with people who come from) states that will perpetrate violence against dissidents.

Unfortunately, there still aren't enough of us. We're trying to change that.


I mean with regards to Facebook in particular, bad settings can cause genocide.

https://www.bbc.co.uk/news/blogs-trending-45449938


You are right, if this was a code issue. But it's not a code issue. It's a fundamental misunderstanding of the situation. It's not a problem that can be solved with code. People vastly underestimate the power of brains due to lifetimes of science fiction anthropomorphizing machines.

https://news.ycombinator.com/item?id=21250424


The NTSB just released their report: https://dms.ntsb.gov/public/62500-62999/62978/629713.pdf

"If the collision cannot be avoided with the application of the maximum allowed braking, the system is designed to provide an auditory warning to the vehicle operator while simultaneously initiating gradual vehicle slowdown. In such circumstance, ADS would not apply the maximum braking to only mitigate the collision."

"Certain object classifications— other—are not assigned goals. For such objects, their currently detected location is viewed as a static location; unless that location is directly on the path of the automated vehicle, that object is not considered as a possible obstacle. Additionally, pedestrians outside a vicinity of a crosswalk are also not assigned an explicit goal. However, they may be predicted a trajectory based on the observed velocities, when continually detected as a pedestrian."


> continually detected as a pedestrian

Interesting, from the article and this, it sounds like the system can't maintain position tracking of an object if its classification changes. So even if it could detect a pedestrian, something ambiguous like a pedestrian pushing a bike might have no motion tracking data from one moment to the next, so the car would have no ability to predict its trajectory.
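
A minimal sketch of that failure mode (hypothetical structure for illustration, nothing to do with Uber's actual code): if tracks are thrown away whenever the label changes, the "new" object has no velocity history and so defaults to static.

    # Hypothetical sketch of the reported failure mode: tracking state is
    # discarded whenever the classifier changes its label, so the "new"
    # object has no velocity history and gets predicted as static.

    class Track:
        def __init__(self, label, position):
            self.label = label
            self.history = [position]          # positions over time

        def predicted_velocity(self):
            if len(self.history) < 2:
                return (0.0, 0.0)              # no history -> assumed static
            (x0, y0), (x1, y1) = self.history[-2], self.history[-1]
            return (x1 - x0, y1 - y0)

    def update(track, label, position):
        if label != track.label:
            return Track(label, position)      # history lost on reclassification
        track.history.append(position)
        return track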


I had the same impression from reading the table. Looking at the map and trajectories though, how could the human not see the car coming? Or was she thinking that the car would stop/slow down? Same question holds for the vehicle operator.

edit after reading the other reports: the victim apparently was under the influence of methamphetamine and the vehicle operator was busy watching Hulu.


The pedestrian was about 3/4 of the way across the road when she was struck, and was walking a bicycle that was partially laden with goods. That suggests that quick evasion on the pedestrian's part would have been somewhat difficult, but given that the road was empty of other vehicles, there was a long clear sight distance to the pedestrian, and there was ample space to maneuver, any reasonable driver would have been able to stop or switch lanes to evade the pedestrian.

The driver was not paying attention to the road and was incapable of performing a timely emergency maneuver (be it a stop or lane change).


Yeah but when you cross the street you generally keep an eye out for traffic, and the sight-line is such that if the car had headlights on it should have been visible for more than 6 seconds before the collision. The victim might well have expected the car to yield somewhat.

However, I don't know if the car did have any external lights on, so it might have been hard to see until it was closer than 6 seconds away.


As far as I recall, the driver was looking at the car's status screen, which Uber had instructed them to monitor, rather than at the road at the time of the crash, so they didn't notice the pedestrian until too late. I can't say much about what the pedestrian saw; the Uber car killed them, so we can't ask.


The driver was the Uber engineering department, the human supervisor in the driver seat was distracted by their phone.

Why the victim didn't pay attention isn't known as far as I can tell, but from the released video they do appear inattentive.


I think this is key. Had the system been able to retrieve tracking history as the classification was changing, it would have realized that she was moving perpendicular to the vehicle's path (not, for example, an object traveling alongside the vehicle).

Page 10-11 of the NTSB report make this very clear: https://dms.ntsb.gov/public/62500-62999/62978/629713.pdf


I found the "action suppression" system very interesting, and I think that is new information. Essentially, for 1 whole second after an emergency situation is detected, the system does nothing at all by design. It even suppresses planned breaking maneuvers. If the operator doesn't take over control after one second, the car will slow down slowly, not even using the full allowed braking power for regular driving (not to speak of actual emergency braking, which the system wasn't allowed to do at all).


To me it indicates that at the time of the crash the software was still in a state of development where false positives massively outnumbered situations where action was warranted.


The insane and arguably negligent part of this is that, in a situation where false positives result in braking and false negatives result in death, you as an engineer don't go with braking.


It wasn't that the sensors were confused or something - the AI had no idea how to cope with pedestrians in the road for whatever reason, so threw them in the "Other" category.

At least 98% of objects are categorised, the rest don't matter. /s


Consistently hitting the "other" category would be a massive improvement over what happened. According to the timeline, the pedestrian was initially classified as a vehicle and later as a bicycle, with short periods in between where she's classified as "other" or "unknown". And of course, when an object changes classification it loses all history and is initially predicted to be static. Despite detecting her for 5.6 seconds, the system never had more than 1.5 seconds of tracking history because it couldn't make up its mind and kept throwing everything away.


TBH this does sound exactly like something you'd expect to happen if you took some Javascript ninja used to moving fast and breaking things and asked them to build a car. But surely Uber has more sense than that...


Coming from the defense (go ahead, clutch your pearls, I won't be offended) industry, I just can't understand why people ship things that fail into potentially dangerous states on some known non-trivial fraction of inputs. If the purpose of the software were entertainment or some other low-stakes thing I'd understand, but software controlling things that can kill people or lose lots of money needs to be able to handle unexpected inputs gracefully.


> I just can't understand why people ship things that fail into potentially dangerous states on some known non-trivial fraction of inputs

Because that kind of safety-conscious culture needs to be deliberately enforced, from the top all the way down, by a system that takes a long view.

If we're making an analogy to the defense industry, then building self-driving cars as a tech startup is closer to the Anarchist's Cookbook than Northrop Grumman.


Because the penalties and rewards don’t align.


Because you don't want to be the one whose team is lacking in agility on the next company all-hands meeting ;)


Or managers barking at sheepish recent grads/h1bs who don't know how to or can't push back. Does not seem like a very strong engineering culture.


What are you saying, that a Haskell or Rust elite programmer wouldn't have made this mistake!?


The strong types in those languages would have prevented a reference to a "Person" object to be treated like a reference to an "Other" object... /s


Yes... surely.


Isn't the bigger problem that it did not classify the object as being on a collision path? It doesn't matter what the object is: brake if you are going to hit it.


It's astonishing to me that emergency brakes were off. The investigation report clearly shows that the pedestrian was classified incorrectly, alternating multiple times between wrong classifications, and the path prediction was off. Despite all of those failures, it was still correctly determined that a crash was imminent 1.2 seconds before impact, and a second later realized that avoidance had failed - at some point around 1.2 seconds before the crash, AEB should have engaged.

Instead, as the report says:

> The vehicle was factory-equipped by Volvo with several advanced driver assistance systems (ADAS), including forward collision warning (FCW) system and automatic emergency braking (AEB) system. However, Volvo collision avoidance ADAS were not active at the time of the crash; the interaction of the Volvo ADAS and ATG ADS is further explored in section 1.9.

It was apparently disabled by Uber because it was difficult to run the Volvo systems alongside their own, which strikes me as highly irresponsible. Autonomous driving software relies on multiple redundancy, and AEB is the last-resort system for when everything else, from automatic maneuvers to alerting the driver, has already failed.
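
For scale, a rough back-of-the-envelope (illustrative figures, not from the report): at roughly 39 mph (about 17.5 m/s), a hard brake applied at the 1.2-second mark would still take a large chunk of speed out of the impact, assuming something like 7 m/s^2 of deceleration on dry pavement.

    # Rough back-of-the-envelope, not from the report: how much speed a hard
    # brake applied 1.2 s before impact could shed at ~39 mph (~17.5 m/s),
    # assuming ~7 m/s^2 deceleration on dry pavement.

    v0 = 17.5    # m/s, roughly 39 mph
    a  = 7.0     # m/s^2, assumed hard-braking deceleration
    t  = 1.2     # s, from "crash imminent" to impact

    v_impact = max(0.0, v0 - a * t)   # ~9.1 m/s, i.e. ~20 mph instead of ~39 mph
    print(round(v_impact, 1))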


> It doesn't matter what the object is, brake if you are going to hit it.

The built-in emergency braking system from Volvo does this [0] but Uber deliberately disabled it (presumably because it conflicted with their self-driving rig).

[0] https://www.media.volvocars.com/global/en-gb/media/pressrele...


At least the Volvo system output could be used as a sanity check or something.

If you hit the accelerator harder when the Volvo system brakes, you override it. It should be fairly easy to integrate as a backup.

However, I guess the Volvo system might not look far enough ahead for those speeds?


From what I read in some of the earlier reports on this, the car didn't have the ability to emergency brake in autonomous mode. It was disabled at that time, so it could only brake in regular traffic, not for obstacles that appear suddenly.


That is right - so now we have two major errors.

The justification made for disabling the emergency braking - that it would interfere with data gathering - might appear reasonable at first sight, but it does not stand up to scrutiny, for if the emergency braking is triggered, the driving system has already made a mistake, and you already have the data on that malfunction.


Yes, but accidental emergency braking can cause just as bad an accident as this; it all depends on the scenario.

They had hired a driver to sit behind the wheel to monitor the road and the car for exactly this reason.

The driver they hired decided to watch a movie on their phone instead of paying attention to the road.


> Yes, but accidental emergency braking can cause just as bad an accident as this; it all depends on the scenario.

If Volvo's system is that dangerous, then it should not be on the road at all - but there is no evidence that it is, you are just making a speculative argument.

> They had hired a driver to sit behind the wheel to monitor the road and the car for exactly this reason.

That is no reason to disable a safety feature that would add safety in depth.

> The driver they hired decided to watch a movie on their phone instead of paying attention to the road.

That was a major error - a crime, in fact - but, unfortunately, also an entirely predictable scenario that cannot be dismissed on the grounds that dealing with it would make testing more difficult or expensive. So now we have three errors.


Does not compute. I regularly hit objects while driving my car. Such as: potholes, twigs, paper bags, plastic bottles.

The deadly assumption here is "doesn't clearly match a category" => "safe to ignore"


Emergency brakes were disabled because they behaved erratically. So I'm not sure what it would have done if it had ID'd her correctly; it was still relying on the witless human behind the wheel. What a shit show.


I doubt that the emergency braking system was behaving erratically in itself, to any significant extent - if that were the case, it would do so for human drivers, and so should not be on the road at all. What I suspect is more likely is that Uber's system was behaving erratically, triggering the emergency response with some frequency.


Yea, should have put "behaving erratically" in quotes, I think you were probably on the money that it was in response to the AI or AD in this case. Probably was behaving like antilock brakes as the classification flipped every few milliseconds.


I can personally attest that they couldn't see you IN the crosswalk either. One nearly ran me down because it failed to yield to pedestrians in the crosswalk when turning right.

While I was standing there talking about it, the driver looped the car around the block and tried again. It did the exact same thing the second time. One of the people it happened to walked into the office to report the bug. The Uber engineers there pooh-poohed the concern and never did anything to fix it. This was before they killed the pedestrian. The fact that they utterly failed to create adequate safety systems when repeatedly warned shows they are absolutely not capable of doing this safely.


Oh wow. That's really bad, one of the ground rules of traffic participation is that traffic that goes straight on the same road as you when you turn always has priority.

> The Uber engineers there pooh-poohed the concern and never did anything to fix it.

This is yet another nail in the coffin for Uber and self driving. I think as a company Uber is categorically and institutionally unable to participate in this space in a responsible manner. Their whole corporate culture is totally opposed to what is required to make this a reality.


Here's a simple rule that could be enacted for companies seeking to do work in the autonomous driving space: if your car kills a person, your company loses its license to work on autonomous vehicles. Forever. That will dictate the adequate pace to achieve these goals safely. Then we'll see what's really possible with this technology.

Having cars that can drive themselves just doesn't seem like a particularly high priority for society at large in the face of other looming issues. Why allow it to proceed in such a dangerous fashion at all?


But human-driven cars kill tens of thousands of people a year in the US. I think improving that is really important for our society.

It is very sad that this pedestrian died and companies that kill people through avoidable accidents like this should be punished. Uber specifically seems like a trash tier self-driving car research program and I wouldn’t mind if they just stopped.

But one day, even a system that’s far safer than human drivers is going to kill someone accidentally. It is going too far to say, a method of driving cars that ever kills somebody should be abandoned.


What if the rate of deaths for a given company is not zero, but below the rate for human drivers? Is it ok to have additional, unnecessary pedestrian deaths by NOT allowing that company to deploy their technology?

(That said, the negligence in the Uber case makes it pretty clear they are likely far from reaching that level of competency)


So you mean being okay with a multibillion dollar corporation harming people so it can make even more profits, all the while telling you that it's "good for society"

Gee, where have I heard that script before?


So is reducing the accident rate on roads "harming people"?


Good for the people once the system is perfected. Not so much for the ones who get run over while the system is being perfected.

The "greater good" utilitarian argument has been the basis of some of the worst policies and politics in the world.

I'm not saying that self driving cars fall into the same category, but how many deaths are you okay with until Uber/Waymo perfect their algorithms (and later, charge you for it)? 1? 10? 100?


Given how many people die per year in traffic accidents with non-autonomous vehicles, shouldn't this rule apply too? Humans have shown that they can't drive cars safely and should no longer be trusted to drive.

More seriously though, it would be more interesting to compare the number of kilometers driven without accidents to national statistics, and to have something like the NTSB do thorough investigations when accidents do happen. Ultimately there is a need for autonomous driving because it should eventually cause fewer deaths.


If a fault with the vehicle parts itself causes an accident in a human driven car then the manufacturer absolutely should be held accountable.

Also, if you believe Uber or any company is pursuing autonomous driving with the goal of decreasing the overall number of automobile deaths, I daresay you're extremely naive. It's probably not possible to achieve anyway on a road shared with human drivers.


I don't believe that Uber's goal (or any other company's) is to decrease the overall number of automobile deaths. However, I believe that the widespread adoption of automated driving will eventually result in a decrease in automobile deaths.


Should companies making human-driven cars also stop making cars if one of them kills a person?


I'm pretty skeptical we'll see autonomous vehicles--at least outside of limited access highways or other relatively less difficult scenarios--sooner than decades from now. But your suggestion is an impossibly high bar. There will always be failures because of debris on roads, unpredictable actions by human drivers/cyclists/pedestrians, weather problems (e.g. black ice), mechanical failure, etc. that will result in some level of fatalities.


Personally, I think using automobiles as a blueprint for autonomous transportation is horrifying and preposterous. The first auto I know of went 10mph in 1886. Engineers have had over 100 years to iterate on the concept yet people are still dying. Somehow, Ubers and Googles think they can distill this kind of refinement into a decade or two of software "engineering". Insanity.

We should focus on transporting goods and services autonomously at a fraction of the speeds. It should be about efficiency and impact, not getting one human from A to B.


Animals crossing the street is rare, but a situation I'd expect self-driving cars to be trained on. I doubt they classified all known large animals, so shouldn't there have been a classification to anticipate animals it doesn't know crossing the street?

As someone who spent a number of years living in the countryside, this seems obvious. However, for those growing up in cities, this might never cross your mind. It makes me wonder if there might be a lack of diversity on the teams building and testing these systems.


What’s that quote about the last 20% of the work taking 80% of the effort?

This crash seems to be the result of several cases not being handled correctly, or at all.

1. Pedestrian crossing -not- at a crosswalk, aka jaywalking

2. Objects tagged as “Unknown” don’t have their path tracked.

By the time the car saw an “unknown object” (the woman) directly in its trajectory it was too late (1.2s before impact).

Why was there no system for a person jaywalking? That’s extremely common.

Why don’t “Unknown” objects get stored and individually have their trajectories tracked? Hello?

Programmers need to start going to jail.


Not necessarily the programmers. There'll always be some people who are easy enough to manipulate into writing this kind of code. But those making the decision to ship this and sign off on it definitely should face criminal negligence charges.


One can only claim ignorance for so long. They know what they’re programming.

But yes, managers and the like involved should also be going to jail. This cavalier, negligent attitude needs to be dealt with yesterday.


> Why don’t “Unknown” objects get stored and individually have their trajectories tracked? Hello?

Because what if that "unknown" object is just a plastic bag blowing in the wind?

People would get mighty annoyed very quickly if emergency braking kicked in for every piece of garbage blowing onto the road.


>> if emergency braking kicked in for every piece of garbage blowing onto the road

Yeah, cases like that might make people wonder about the correctness of the self-driving computer.



And now today (Nov 6) there are news articles about the Tesla "Smart Summon" which was quietly activated. There are videos of dangerous near collisions of "summoned" driver-less Tesla vehicles very slowly driving among other vehicles.

https://www.ctvnews.ca/video?playlistId=1.4658613


That’s very blatant sensationalist journalism that conveniently does not mention that there is a person holding down a button the entire time that the car is being summoned.

If that’s “dangerous”, well, so is piloting it manually.


A person who can't always see what is in front of the car and what is coming from behind or from the sides, and who will probably be looking at the button rather than the car while it moves.

That is just as dangerous as using your smartphone while driving in a parking lot, which is forbidden for good reasons.


Do you stare at a button when you hold it? That seems very odd. If that were true, video games wouldn't exist.


The video shows the app with a radar-ish view of what the car is seeing of its surroundings and its planned route. It's more than just a button, and I could see why someone would be focused on the screen display instead of the car.


That’s a red herring though. It shouldn’t matter if the person is paying attention if the self driving tech is working. (As Musk likes to say, the person is just there to tick off a legal checkbox.)


At what point do Musk’s statements represent a moderate legal risk?

For the sake of argument, could Musk’s statement, along with his very public insistence of being actively engaged in the design process, demonstrate a cavalier disregard for proper safety engineering at the management level (outside acceptable industry practice) and so a defective process?


Well, that's a big if. Moreover, an if that has "nope nope nope" written over it with literal human blood. RIP EH.


The linked video talks about updating laws regarding the legality of operating the smart summon feature on public roads. I don't see how that can be called "blatant sensationalism".


Someone inside the car has a better sense of the surroundings than someone operating it from a distance. The remote operator also doesn't have skin in the game, which can impact their judgement.


It is the same 'skin in the game' as being in the car. They are not concerned about their own safety if driving the car in a parking lot, they are concerned about their car getting into a collision. No different whether in the car or not.


Agree, a licensed human driver is in charge of a "summoned" vehicle. Any operation you might expect from such a vehicle is on the person holding the button.


Human drivers aren't usually licensed for, and license practical tests do not cover, remote operations of the type being discussed.


So long as everything works perfectly, sure. But there's nobody in place to hit the brakes in an emergency scenario. Releasing a button as an emergency measure seems a bit unreliable, since humans tend to clench their muscles when there's an emergency.


Worse:

AT+RELEA

NO CARRIER

Unless the car brakes the very instant the connection drops (and not when the timeout propagates through the networking stack), now you have two problems.
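
The usual fix is a dead-man's switch on the vehicle side: the car keeps moving only while heartbeats keep arriving, so a dropped connection fails safe by default rather than waiting for a "release" message. A minimal sketch (hypothetical pattern, not Tesla's actual protocol):

    # Hypothetical dead-man's-switch sketch: the car stops unless it keeps
    # receiving heartbeats, so a dropped connection fails safe by default.

    import time

    HEARTBEAT_TIMEOUT_S = 0.3    # assumed budget, would need tuning in practice

    class SummonWatchdog:
        def __init__(self):
            self.last_heartbeat = time.monotonic()

        def on_heartbeat(self):              # called each time the app pings
            self.last_heartbeat = time.monotonic()

        def tick(self):                      # called from the car's control loop
            if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
                return "stop"                # silence is treated as "button released"
            return "continue"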


I have a similar sentiment about nuclear weapons. What's the problem if someone needs to be pushing a button?


It's mind-boggling that tracking history wasn't maintained between object classifications. Surely the best thing to do would be to detect an "object" -> track it -> classify -> predict path -> back to classify.
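
Something like this simplified sketch (assumed structure, not any particular vendor's stack), where detections are associated to persistent tracks by position first, so a label flip doesn't destroy the motion history:

    # Simplified sketch of a tracking-first pipeline: detections are matched
    # to persistent tracks by position, so the classifier's label can flip
    # without destroying the motion history used for path prediction.

    import math

    class Track:
        def __init__(self, position, label):
            self.history = [position]
            self.label = label

        def last(self):
            return self.history[-1]

    def process_frame(tracks, detections, max_dist=2.0):
        # detections: list of ((x, y), label) tuples from the perception stage
        for pos, label in detections:
            near = [t for t in tracks if math.dist(t.last(), pos) < max_dist]
            track = min(near, key=lambda t: math.dist(t.last(), pos), default=None)
            if track is None:
                tracks.append(Track(pos, label))
            else:
                track.history.append(pos)    # motion history survives...
                track.label = label          # ...even when the label flips
        return tracks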


Between this and deliberately disabling Volvo's own emergency braking features, I think some decision-makers at Uber should be charged with negligent homicide.

Maybe a few executives going to jail will put an end to this cowboy attitude where lives are on the line.


IANAL, but given that the disabling of the safety system was an act of commission, not omission, it could be argued that this was more than just "negligent". On the other hand, car manufacturers have no legal duty to include collision avoidance systems, so disabling one is probably not illegal in itself.


Jail in corporate America? Sacrilege!


So: Uber put a car on the road that was not designed to detect pedestrians except in a few limited conditions. Someone died because of their gung-ho approach to safety. How is that even remotely acceptable? Really hope we see people being held responsible for this...


When I learned to drive they stressed unanticipated objects coming into the street - especially balls: don't look at the trajectory of the ball and judge whether it will be in your way, instead look for the kid who will be running after the ball while not paying attention to traffic. Also in my state if you hit a kid (a minor actually) on a bicycle you are guilty until proven innocent. That's the law. You have to prove there was no reasonable way for you to avoid the accident.


If the safety driver had been paying attention to the road instead of their phone, they might have been able to brake in time so that this woman did not get hit.


The interview with the driver by the NTSB is interesting if you want more details from the driver's side of the story:

https://dms.ntsb.gov/pubdms/search/document.cfm?docID=477745...

You can also see the detailed report from the NTSB which will go through about everything you would want to know about the actual driver and her reaction:

https://dms.ntsb.gov/pubdms/search/document.cfm?docID=477743...

Particularly of note is that Hulu was streaming video on her phone until about 1 minute after the crash (page 11 of the report).


One of the most terrifying lines in those documents to me is this:

> According to Uber ATG, the SDS did not have the capability to classify an object as a pedestrian unless that object was near a crosswalk.

Jaywalking is extremely common. I've seen pedestrians jaywalk across a 45 mph six-lane road with no median to speak of. Anyone with driving experience should know that pedestrians can appear even where there is no crosswalk, so it boggles my mind that any engineer would sign off on such a decision.


Also of note from those documents:

* Uber had previously staffed each car with two operators, allowing one to keep their eyes on the road while the other took notes.

* Forthcoming performance metrics for operators would include "the VOs ability to keep the SDV in autonomous mode as long as possible unless there is a Fleet Desk support issue (software issue)."

Obviously not as relevant to this particular case as the Hulu streaming, but to me these details suggest safety isn't their highest priority.


Human minds don't work that way. Paying continuous attention to a vehicle you're not controlling is quite simply impossible, and if safety relies on it then your safety system is broken.


Human minds don't work that way. Paying continuous attention to an airplane you're not controlling is quite simply impossible, and if the function of a copilot relies on it then airlines are broken.

People are tasked with monitoring things all the time. It's not some brand new impossible task.


There's a critical difference between airplanes and cars. When flying an airplane, very few emergencies ever require action faster than "Couple of seconds" -- and any procedure which does require such will be thrown out at first opportunity.


Possible, but it should not be relied upon. Even in cases where the driver knows that a handoff to manual control will occur, it takes a significant amount of time to mentally switch from being a passenger to being a driver. For cases where the AI knows it isn't performing well, such as heavy rain or snow, it makes sense to hand off. For anything that happens on a timeframe shorter than about 10 seconds, the AI must be able to handle it, because the human cannot switch modes that quickly.


We had a thread a few days back with optimistic people predicting self driving cars in 2020.

I will urge those people to read this and consider the numerous other cases, edge or not, that these systems, which seem to work great on the surface, may not be handling.



What does it matter what the "object" was? If anything is moving to intercept, the car should stop.


Well, surely not: if it's a bird and you'd cause an accident with an emergency stop, then you should continue. Dogmatic responses are exactly the problem with self-driving vehicles.

Sometimes it's preferable to hit something with your own vehicle. Cue the classic moral question of which person you would kill, if you can only avoid one with your vehicle by hitting another.


Well, if a large bird - say, an emu or ostrich (but probably also a turkey) - suddenly crosses your car's path, I believe you would brake.

At least here (in the country) it is not uncommon for wild boars or deer to cross the road suddenly at night, and I have seen cars literally destroyed by the collision, in some cases with the driver seriously injured.


Generally, we should probably have vehicles follow the prescriptions of the law when they can't avoid hitting something, and change the law if we want them to act differently. I've never seen a hypothetical self-driving-car trolley problem where there wasn't a single option that was clearly what the law required.


By and large, the trolley problem concept is overblown. Most of the time the right/best answer is going to be to stand on the brakes and hope for the best.

But I'm honestly not sure what the law "requires" if you've got a scenario where there are going to be bad outcomes no matter what you do.


For instance if a car has the choice of hitting someone in the road or swerving onto the sidewalk and hitting someone there then clearly the legal thing to do is for the car to stay in its right of way and hit the person in the road.


That's a pretty clear case of taking a deliberate action to leave the road surface. But you can at least imagine scenarios where everyone is within the bounds of the road--say 5 people directly ahead and 1 off to the side.

As I say though, if you can't swerve to avoid people, the most reasonable action that most people would take--to the degree they had time to make a conscious decision at all--would be to brake as hard as they could and let things play out as they will.


Well yeah sure, but a bird is quite different to a person, it's small and usually not on the floor. I just meant it seemed strange to detect an object but not do anything about it because it's not labelled.


I think part of it comes down to the reliability of the sensors - if you detect an object 200m down the road and 5 metres to the right, and you have only a small error in the reading, you could easily get the impression that it's moving into the path of the car. However, if you can identify what it is, then you can judge whether it's likely that it really is going to move into your path. These sensors really aren't as perfect as you would hope, which is why so much effort is being put into machine intelligence.
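
To put rough numbers on that (all values made up for illustration): a half-degree bearing error at 200 m is already about 1.75 m of lateral uncertainty, and naive frame-to-frame differencing turns that into tens of m/s of phantom sideways velocity.

    # Illustrative numbers (made up, not a real sensor spec): a small bearing
    # error at long range produces a large apparent lateral jump, which naive
    # frame-to-frame differencing reads as sideways velocity.

    import math

    range_m     = 200.0
    bearing_err = math.radians(0.5)   # +/-0.5 degree per reading, assumed
    frame_dt_s  = 0.1                 # 10 Hz updates, also assumed

    lateral_err_m      = range_m * math.tan(bearing_err)    # ~1.75 m
    worst_apparent_vel = 2 * lateral_err_m / frame_dt_s     # ~35 m/s of pure noise
    print(round(lateral_err_m, 2), round(worst_apparent_vel, 1))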


Uber's researchers make very confident presentations about the very advanced ideas that are ostensibly implemented in their cars. How does this square with the apparently very primitive system that's described in the NTSB report?

Example:

Jeff Schneider: Self Driving Cars and AI https://youtu.be/jTio_MPQRYc

Or:

https://eng.uber.com/research/predicting-motion-of-vulnerabl...

Predicting Motion of Vulnerable Road Users using High-Definition Maps and Efficient ConvNets

Following detection and tracking of traffic actors, prediction of their future motion is the next critical component of a self-driving vehicle (SDV), allowing the SDV to move safely and efficiently in its environment. This is particularly important when it comes to vulnerable road users (VRUs), such as pedestrians and bicyclists. We present a deep learning method for predicting VRU movement where we rasterize high-definition maps and actor’s surroundings into bird’s-eye view image used as input to convolutional networks. In addition, we propose a fast architecture suitable for real-time inference, and present an ablation study of rasterization choices.
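
For a sense of what that pipeline involves, here is a toy sketch of the rasterization step the abstract describes: drawing the map and the actor's recent track into a multi-channel bird's-eye-view image for a CNN. Grid size, resolution, and channel layout are assumptions, not the paper's actual configuration.

    # Toy sketch of BEV rasterization for a motion-prediction CNN. Grid size,
    # resolution and channel layout are illustrative assumptions only.

    import numpy as np

    GRID, RES = 128, 0.5        # 128x128 cells at 0.5 m/cell, actor-centred (assumed)

    def to_cell(x, y):
        return int(GRID / 2 + x / RES), int(GRID / 2 + y / RES)

    def rasterize(lane_points, actor_history):
        img = np.zeros((GRID, GRID, 2), dtype=np.float32)
        for x, y in lane_points:                       # channel 0: map geometry
            cx, cy = to_cell(x, y)
            if 0 <= cx < GRID and 0 <= cy < GRID:
                img[cy, cx, 0] = 1.0
        for i, (x, y) in enumerate(actor_history):     # channel 1: past positions,
            cx, cy = to_cell(x, y)                     # faded so recency is encoded
            if 0 <= cx < GRID and 0 <= cy < GRID:
                img[cy, cx, 1] = (i + 1) / len(actor_history)
        return img                                     # fed to a ConvNet downstream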


This article is just terrible: it goes into the code/technical problems with the car, but they killed a woman and no one went to jail for manslaughter.


People jaywalk often, I feel like this issue should have appeared before. Would it be possible Uber was using some of Otto's technology (self-driving trucks) then decided to replace it abruptly (because of the lawsuit with Google) and it caused this seemingly avoidable crash?

Context: Uber acquired Otto, a company founded by Anthony Levandowski, formerly a lead engineer on Google's self-driving car project (now Waymo). It quickly got involved in a lawsuit in which Google alleged that Levandowski stole Waymo's self-driving intellectual property. Uber later agreed not to use any Waymo IP and to give 0.34% of its equity to Google.

https://www.buzzfeednews.com/article/priya/waymo-asks-judge-...

https://jalopnik.com/googles-waymo-and-uber-reach-settlement...


My biggest fear is that accidents of this type will result in the engineers running back to their ML to add another case to the training data. We'll end up playing a game of whack-a-mole (literally?) as new special cases come up. Will training the system to recognize a pedestrian pushing a bicycle enable it to recognize someone riding a mountain bike?

Consider why the auto-braking system wasn't enabled. It's because the system can't identify which elements in the environment are harmless, causing too many false positive braking events. The opposite problem is no easier to solve.


The PMs, engineers, and person sitting at the wheel should all be charged with manslaughter. Maybe then we'd get actual improvements.

But who am I kidding? We can't even lock up bankers for simple shit.


I've long wondered how autonomous vehicles could ever work in the UK, mostly because there are lots of driving situations I encounter where you have to force your way out or twist the rules to make any sort of progress.

Now we can add pedestrians to that which in the UK is a tricky topic since pedestrians have right of way if they're already crossing a side street you're turning into. Even an astute driver can run into trouble here and has to be extremely aware of pedestrian movement.


Even in the US, most driving is more difficult than it is in Arizona suburbs. The fact that self-driving cars can’t even handle super-easy mode makes me very pessimistic that they will ever be used in more than a tiny minority of places.


Brad Templeton has also written his analysis of what went wrong in the crash.

https://www.forbes.com/sites/bradtempleton/2019/11/06/new-nt...


Another thing that seems wrong is that the backup drivers are on their own. Anyone who's done night-time guard duty will know how hard it can be to stay awake, let alone maintain attention.

It would make so much sense to have a backup driver and a co-backup driver (and maybe allow neither a smartphone).


I still fail to see what benefit self-driving cars give us. There are always going to be bugs, and now we're strapping them to multi-ton vehicles. This reeks of a bad idea, yet we continue on as if it is the cure for some huge problem.


Holy shit this is scary. I must have been living under a rock but I didn't realize that FULLY-AUTONOMOUS self-driving cars were even LEGAL yet.

> The self-driving car was fully autonomous at the time of the accident, though it had a human driver at the wheel. An internal camera caught the Uber worker looking down and away from the road moments before the accident, unaware of Herzberg’s presence before it was too late.

Uber worker ... employee? Were they doing a live test?

> ... the team at Uber Advanced Technologies Group has adopted critical program improvements to further prioritize safety ...

I certainly hope this means having their terminator vehicles on a closed track instead of out and about. What the hell does "prioritize" have to do with "common sense"? If you throw an unhandled exception ... STOP THE CAR


Who knows, maybe the next set of victims will be people jaywalking to get to an Uber, that then get mowed down by another Uber driving at 39mph which classified them as bicycle / other.


Wow. Somebody really dropped the ball on that one.

It must be stuff so complicated that it's difficult to see the overview. How else could someone not figure on people outside of crosswalks?


Relevant: https://youtu.be/dnioHfg1xbQ

Tesla cars can't even figure out where they are half of the time.


This is murder.

Uber wants to increase ridership any way they can, and if you're walking then you're not a customer.

Since they got away with it in beta, the release version will continue to murder people.


It can only be as good as whoever programmed it.

Consumers and even regulators seem to think software somehow magically adapts to anything.

These sorts of software environments need standards that define the situations they must handle, and that they must satisfy in standardised testing, as a starting point.


Self driving cars probably (and impossibly) need their own roadways like cable cars or trains.


So, self-driving BRT? Probably will only happen when self-driving cars do in general.


So... trains?


Pedestrian detection outside of a crosswalk is an edge case; it's a nice-to-have!


Why does Uber use an SUV?

These vehicles are widely known to be much worse for pedestrian injuries and fatalities. They should use an ordinary car.

(Frankly, I'd like to see all SUV drivers considered legally negligent in any pedestrian accident due to their choice of vehicle.)


If you think Uber’s car is bad can you imagine how bad Lyft’s is or most any company that started years behind?

I worked in the SDC industry. It’s mostly a science project.

Between its economics and its car I don’t understand why Lyft isn’t heavily shorted.


> If you think Uber’s car is bad can you imagine how bad Lyft’s is or most any company that started years behind?

The amount of time someone spent on a software project isn't the best indicator of its quality... especially for science work where there's a large amount of knowledge transfer within the field.

Lyft hired a top Google engineer to run the project, and plenty of other experienced people. They weren't starting from scratch the way Waymo did, and neither was Uber.

The car simply shouldn't have been tested without a driver (or two) constantly paying attention. Clearly Uber didn't trust it to be running by itself yet (especially with the emergency brakes being disabled) but it basically was if the test driver wasn't paying attention.


I can see jaywalking becoming a thing of the past!


Did they tell the safety drivers this fact?


I almost jaywalked while reading this. Thankfully the vehicles are all manually driven.


tl;dr: Uber have not the slightest idea what they're doing, and should not be permitted to develop anything safety-critical until they can properly explain 'object permanence'.

Also, would probably be a good idea to hold the creators, trainers and marketers of anything called 'AI' legally responsible (jointly and severally) for all of such a system's decisions for the next few decades. Might encourage some much-needed caution.


These things seem almost certain to me:

* Self-driving cars are not going to be here in less than 5 years

* The tech is improving fast enough that it will end up working, and not being vaporware. Within 20 years, self-driving cars will be more than 50% of the new car sales, and eventually it will become the norm.


Some blame, too, belongs to the human driver. Her job was to look for exceptional conditions that the car might not pick up. And she failed.

It's quite possible there wasn't time for a good human driver to stop completely or avoid a pedestrian where one shouldn't be, and this accident was all but unavoidable, but we'll never know because no attempt was made. Given the speed of the car, the woman crossing the street either miscalculated, was betting that traffic would see her and slow down, or didn't see the oncoming car.

It was also odd that they disabled the car's stock emergency collision avoidance system.


this definitely counts as a human error, so stupid.


For reference, the video recorded by the car: https://twitter.com/TempePolice/status/976585098542833664/vi...


No, this is a video from a shitty dash cam released by Uber (and then happily compressed to shit and forwarded by police, who don't miss a chance to blame whoever died in a crash) to deliberately misrepresent the situation. This garbage pinhole sensor has not one hundredth of the dynamic range of the actual cameras they use for the autonomous driving and is not at all representative of how human eyes see, which would have had no problem spotting her from many many seconds away.

No, this is part of the crime here.


To make an omelette, you have to break a few eggs.


Well, I hope you are going to be that egg, not me. Cuz the multi-billion-dollar corp is too cheap to properly test their product; they just deploy into production, see what happens, and patch it later.


We can either have fast technological progress, or we can be completely tied up in bureaucratic red tape, with every potential innovation having to go through endless "oversight" committees staffed by non-technical airheads.

I know which I prefer.


> We can either have fast technological progress, or we can be completely tied up in bureaucratic red tape, with every potential innovation having to go through endless "oversight" committees staffed by non-technical airheads.

https://en.wikipedia.org/wiki/False_dilemma


That is a false dichotomy. We can have something more in the middle, or at least something less extreme than "let cars just drive wherever without any tests, and when one kills someone just say oh well, that's progress".

In this particular case, Uber got a deal with the mayor (Governor? I can't remember) of the city to do testing, and it was clear the government bureaucrat allowed them to do whatever, and then coordinated with the police after Uber killed a pedestrian to fabricate evidence that looked like the woman came out of nowhere.

I would say that is really far from "endless oversight" and much more into the "just mow them down one at a time until you get it right" territory.


That's a false dilemma.


All progress involves a certain amount of irreducible risk, and I'm sure that any self-driving car program, even if well run, will at some point end up killing someone by mistake - though hopefully only after far more miles than the average number of miles driven per fatality.

But in this case it looks like Uber was really cutting corners that they shouldn't have cut and I think that Uber does deserve sanction for that.


Do you want to be the egg? Or your husband/wife? Maybe your children?


Would you say that if that were your family in front of that car? Very ignorant comment.


Yeah, I agree with both of your comments, but is this the way to justify the error that cost someone their life? It almost makes it sound like "oh well, it happens". Comments like these show why software like that was rushed into testing against live humans. What about showing some empathy?


A classic "tragedy of the commons" problem. Inarguably having only self-driving shared vehicles on the street will be massively positive to society (public space now reserved for parking will be public space, accidents due to drivers being tired/under the influence will not be a thing any more, resource waste for cars standing around 99.99% of their lifetime will not be a thing any more), but there will be a not-so-small amount of sacrifices on the way (be it people dying or being injured by prototypes, or massive job losses because the demand for cars will drop).

Air travel was horribly accident prone too, for decades, and now it is one of the safest forms of travel in developed countries - and for electric/autonomous travel, it will be the same.


It's how we've arrived at where we are now, and why you have all the things you take for granted. Buildings, planes, borders, they all had a human expense.


If you set out to make an omelette and a person ends up dying because of what you've done, you have exceeded your license to break a few eggs.


yeah, as long as _you_ are not the one in the pan



