How did LIDAR and IR not catch that? That seems like a pretty serious problem.
It's clear from the video that a human driver would actually have had more trouble, since the pedestrian showed up in the field of view right before the collision, and that's in the visible spectrum.
When I argue for automated driving (as a casual observer), I tell people about exactly this sort of stuff (a computer can look in 20 places at the same time, a human can't; a computer can see in the dark, a human can't).
Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.
> It's clear from the video that a human driver would actually have had more trouble, since the pedestrian showed up in the field of view right before the collision, and that's in the visible spectrum.
That's not at all clear to me. I don't know too much about cameras, but it looks to me like the camera is making the scene appear much darker than it actually is.
In the video, you can see many street lights projecting down onto the ground, and the person was walking in the gap between two streetlights. The gap between street lights (and hence the person) was in the field of view of the camera the entire time; they just weren't "visible" in the camera because of the low lighting. I'm confident my eyes are good enough that I would have been able to see this person at night in these lighting conditions. (Whether I could have reacted in time is another question.) It seems to me like the camera just doesn't have the dynamic range needed for driving in these low-light conditions, which is a major problem.
I have to agree. Just as a normal camera has issues in low light, it is clear that this camera is understating how well lit the road ahead was. While I can't say confidently that I would have been able to stop in time to avoid hitting her, watching the video in full screen does lead me to believe that I would have seen her and been able to apply the brakes at least enough to reduce the impact. Also, watching the video of the interior, it is clear the driver was looking at his phone or doing something else just prior to the impact. This alone leaves me skeptical about just how much could have been done to prevent this accident.
This is pretty much the experience I have with my dash cam, a Yi. In its recorded video, the automatic exposure control makes it look like everything outside the headlight cone is pitch black, but it actually is not. I have seen deer and possums by the side of the road, and debris etc., that did not show up when I later checked the video for the same period. There is enough spillover light from modern headlights that a human whose eyes are dilated and adjusted to dark conditions will see a pedestrian standing on the median, stepping off it, and crossing the inner lane towards the car's current lane. More than enough time to begin to brake and possibly swerve. I have dodged animals in situations similar to this.
Yep, the exposure control / sensor quality of the dash cam in the video was rubbish. My own Blackvues produce far, far better results than that. Just look at how nothing is illuminated by the street lights; this clearly has the effect of making the poor rider appear "out of nowhere". I also agree the driver appeared to be on a smartphone most of the time, thus not in control of the vehicle, and thus had no business being on the road, as these are systems UNDER TEST.
If that's the best Uber can produce then they ought to hang their heads in shame. Unless it was doctored... as I find it hard to believe they'd put such rubbish-quality cameras in their trials.
Do you trust Uber to provide all the data, or would they selectively produce data favorable to them?
Do you trust Uber to provide unedited raw video, or would they process it to increase contrast, make it appear that nothing was visible in the dark areas of the frame, reduce the resolution, drop frames, etc.?
The internal camera (let's be honest and call it the scapegoat camera, because that's the only practical use for human "safety drivers" when they are not permanently engaged) must take almost all its light from IR, because we don't see anything of the smartphone screen glare that the eye movement so clearly hints at.
I don't think the driver is looking at her smartphone. I think she's checking the car's monitor (as in a computer screen). Although to be fair, that should be showing the car's view of its surroundings so I don't know what's going on there.
Edit: Never mind. Someone posted a picture of the car's interior below, and there's no computer screen.
OK, so this is getting old now, but I just came across the following, which shows what I'd expect the roads to look like, and geesh was Uber ever full of crap to release their video, which pretty much had the effect of exonerating them.
> I have seen deer and possums by the side of the road
Both of those have eyes that act as reflectors and you can see their eyes well before you can actually see the whole animal.
This[0] suggests that the total time required for a human to avoid an incident like this is 3.6s (at 35 mph, casual googling suggests the car was doing 40). Even if we add 1 second of extra time to deal with it I'm not sure that makes the cut.
Other people in the thread have pointed out that the woman stepped out in a darker area between where the street lights are placed. Reflecting eyes are not the only way to detect an object. A person watching the road would have seen her dark silhouette contrasting against the next patch of light.
Also remember she was not a stationary object. She was in the act of crossing the road. Human eyes/brains are good at detecting motion in low light even if we can't 100% make out what the object is.
I have lived in Tempe and know that part of town well. There are apartments, gas stations, hotels, strip malls, fast food restaurants and a strip club. It's not a pitch black country road.
I know what you're talking about with the eyes, I spend a lot of time driving rural WA highways at night, but no. I have seen deer that had their heads facing the other way and were standing in the shoulder/ditch area, in conditions where I can definitely make out the shape of the deer and its location but the dash cam sensor misses it entirely.
Your last paragraph is a valid calculation if this were a case of a person stepping directly off a curb into the lane of traffic. However, it appears that they were probably standing on the median looking to cross, then stepped off into the left-most lane of traffic, an empty lane, and proceeded across that lane towards the lane in which the car was traveling. In this sort of situation human intuition will recognize that a person standing on the median of a high-speed highway is likely to do something unusual. Particularly when you observe the visual profile of, as media has reported, a homeless person using a bicycle with numerous plastic bags hanging off it to collect recycling.
The driver didn't see this person because the driver was occupied with a smartphone, only occasionally glancing up.
Also, has anyone here talked about the effect on the eyes of watching a (typically) bright white screen vs letting them adjust to the light of the night yet? This point deserves to be brought up.
Perhaps the video was intentionally darkened to simulate this effect. :P
>Also, has anyone here talked about the effect on the eyes of watching a (typically) bright white screen vs letting them adjust to the light of the night yet? This point deserves to be brought up.
Using bright interior lighting at night is something that we've known not to do for more than a century. If the driver couldn't be expected to see the pedestrian because the interior lighting or UX was too bright, that does not reflect favorably upon Uber.
That's their only purpose. Nobody in their right mind could expect human observers to stay as alert as an actual driver when cruising for days with an AI that is good enough to not require interventions all the time. Passengers add nothing to safety, and an almost reliable AI will make anyone a passenger after a short while.
I'd like to have an interior view of what the driver was actually looking at. It couldn't have been a FLIR monitor, for sure... It seems more likely to be a phone held in the right hand? A bit hard to tell with the quality of the footage, but the driver looked rather tired to boot.
If so (a hand-held phone), in Australia that driver would be going to jail for culpable driving causing loss of life.
It could have been anything readable. I got the feeling it was either a Kindle or something like that, or maybe even a hardcopy of something printed or written on paper. This was just a hunch, but I think it's being validated in my mind by the fact that no light seemed to shine on the driver's face; then again, that's probably due to the night-vision camera not picking up that type of light? I don't really know. My mind is filling in a lot of gaps here, I realize.
EDIT: Upon re-watching the video a third time and really paying attention to this, I don't think there is any real way for us to know without confirmation from the driver themself or an official report on the incident. My mind was definitely deciding things that just aren't discoverable from the video itself.
"Uber also developed an app, mounted on an iPad in the car’s middle console, for drivers to alert engineers to problems. Drivers could use the app anytime without shifting the car out of autonomous mode. Often, drivers would annotate data at a traffic light or a stop, but many did so while the car was moving"
The whole project seemed designed for an outcome like this, e.g. allowing the app to be used whilst on the move, after reducing from 2 operators to 1. Culpability ought to lie with Uber.
I think he is; at least, I've never heard of any law that removes responsibility from the driver of a self-driving car. I think this will also apply to empty cars: if they get into an accident, the owner is liable.
I compare it to the backup camera in my car. While close up at night it is good, if something or someone is a short distance away I can barely make them out. However, looking in my mirrors I can see them or at least make out that someone or something is there.
A camera can have pretty good dynamic range at night, but it needs a big sensor and a huge lens to operate with a fast shutter speed. In the video, you can already see motion blur, indicating the shutter speed is slower than what it needs to be to identify nearby objects in low light.
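Some rough numbers to back that up (a sketch; the 40 mph closing speed and the shutter times are assumptions, not measurements from this video):

  # Back-of-envelope motion blur at night, assuming a ~40 mph (17.9 m/s)
  # closing speed and typical low-light dashcam shutter times.
  def blur_meters(speed_mps, shutter_s):
      return speed_mps * shutter_s  # distance the scene moves during one exposure

  speed = 17.9  # ~40 mph in m/s
  for shutter in (1/30, 1/60, 1/250):
      print(f"1/{round(1/shutter)} s shutter -> {blur_meters(speed, shutter):.2f} m of blur")

A 1/30 s exposure smears nearby objects across ~0.6 m of travel per frame, which is why slow night-time shutters wash out exactly the detail you need.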
Autonomous cars are never going to be viable. Just looking at the cost of the high-end SLR sensors and lenses that you'd need to match human-eye dynamic range, you're already looking at an expensive setup, before we even get to things like 360-degree vision and IR/LIDAR/hyperspectral imaging. And that's in addition to all the compute problems.
Sorry Silicon Valley tech-bros, but it's a fantasy you're chasing that's never going to happen. The quicker we can end this scam industry, the better.
I think you’re comparing object detection to high quality photography though. There are plenty of options that can detect objects at night. Even cheap infrared technology, I would think, would be sufficient for picking up moving objects at night.
Wetware is astonishing stuff. All the propaganda to anthropomorphize machines is showing here... cheap IR sensors are not the issue. AI is not intelligent and inanimate objects have no self.
They should pivot to augmenting drivers, not attempting to drive for them. I would happily utilize a properly designed HUD (meaning I have source access) connected to a fast MerCad or bolometer array.
Sorry for the lack of input or varied discussion, but I just had to stop and say how goddamn friggin cool it would be to have bolometers hooked up to a smart HUD that didn't interfere with your vision of the road. Something really translucent that smartly blended its color scheme so as not to interfere with the coloration of signs and details beyond your view on the road / around the road.
But you are right, though. I think augmenting drivers sounds like a great idea in the sense you talk about. The kind of augmenting I don't want is those stupid headbands you'd wear that beep like crazy if your head starts tilting in a way that resembles falling asleep. If you are in danger of falling asleep at the wheel and need a device like that, I think it's pretty obvious one should take a nap on the side of the road or in a free parking lot, haha. Hopefully, if we do wind up headed in that direction, the people inventing these things will have a similar way of thinking.
High-quality photography exists because the human eye is that sensitive and discerning. And there aren't plenty of options that can detect objects at night. IR isn't any cheaper, and then you have to figure out what IR bands you want to detect.
I've read (see [1]) that humans have a low-light ability that approximates ISO 60,000, a pretty large value and larger than simple video cameras provide. However, very high end pro/enthusiast SLR's go considerably higher, see this real-time astrophotography with the Sony a7s at ISO 409,600 (youtube video [2]). The same Sony will work great in full sunlight too.
The Canon ME20F-SH is a video camera that reaches ISO 4,000,000. This camera has a dynamic range of 12 stops and is available at B&H for $20,000. [4]
Of course, this isn't exactly the challenge that cameras face when assessing a scene. The dynamic range happens within a single scene all at the same time. Wide dynamic range (WDR) is the term I've seen used in describing video cameras that can handle both bright and dim areas within the same scene.
No, that's not how ISO works. The Canon ME20F-SH shoots high-definition video at professional video shutter speeds and has an available ISO range of 800 to 4,560,000. At $20,000 I'm not suggesting that this exact camera would be appropriate for use in autonomous vehicles, but I am pointing out that video systems can now exceed the capabilities of human eyes.
There are a number of video samples shot on the Canon ME20F-SH on YouTube. In these one can see that under low-light conditions the camera is shooting at ordinary video speed (it supports frame rates from 24 to 60 fps). I'm not trying to push the Canon ME20F-SH; I don't have any association with Canon. The manual for this camera is available online if you'd like to read up on it: [1].
The actual exposure of a video frame or image depends upon the f-stop of the camera's lens (aperture), the shutter speed, and the ISO of the image sensor. See [2].
Basically, each doubling or halving of the shutter speed corresponds to one "full stop" in photography. Each full stop of exposure doubles or halves the amount of light reaching the sensor. Changing the aperture of the camera's lens by full stops also doubles or halves the amount of light reaching the sensor. Full stops for camera lenses are designated as f1, f1.4, f2, f2.8, f4, f5.6, etc.
The light sensitivity of the film or sensor is also customarily measured in full stops. Very slow, fine-grained color film is ISO 50 and is usually used in full sunlight. ISO 100 is a bit more flexible, and ISO 400 used to be considered a "fast" film for situations where more graininess would be acceptable in exchange for low-light capability. Each doubling of the ISO number corresponds to a full stop. So a photo taken at ISO 400, f2, with a 1/1000 second shutter would have the same "brightness" as a picture taken at ISO 100, f2.8, with a 1/125 second shutter (less 2 stops ISO, less 1 stop aperture, and plus 3 stops shutter speed). Naturally, other factors come into play: the behavior of film or digital sensors at extremely slow or extremely fast shutter speeds isn't linear, and there are color differences and noise issues too. See [3] if you are interested in more about how photography works.
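You can sanity-check that stop arithmetic in a few lines of Python (a sketch using the standard exposure-value formula; equal outputs mean equal image brightness; I use the exact f/2.8 value rather than the rounded nominal one):

  import math

  # Exposure value referenced to the ISO in use; two settings with equal
  # values produce the same image brightness.
  def ev(aperture, shutter_s, iso):
      return math.log2(aperture**2 / shutter_s) - math.log2(iso / 100)

  print(ev(2.0, 1/1000, 400))              # ISO 400, f/2, 1/1000 s
  print(ev(2 * math.sqrt(2), 1/125, 100))  # ISO 100, f/2.8 (exact), 1/125 s
  # Both print ~9.97: -2 stops ISO, -1 stop aperture, +3 stops shutter net out.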
The footage from a normal camera should not matter; a self-driving car is equipped with stuff that works regardless of light conditions, like LIDAR or IR cameras. This looks to me like a software failure.
The footage from the normal camera does matter in that it's the main way that we (humans) can process the scene. The parent comments are just pointing out that the camera footage is likely darker than the actual scene in person.
Waymo cars are capable of sensing vehicles and pedestrians at least half a block away in every direction. I was reserving any judgement on whether this collision could have been prevented, but seeing the video tells me that 1) a human driver might have hit the victim regardless, and 2) I'm very surprised that the LIDAR sensor didn't cause the car to brake to a halt much, much earlier. This is exactly the kind of situation in which I would expect self-driving cars to be better than human drivers.
I agree that dashcam/external cam footage is going to be limited and possibly misleading, and I would think/hope such footage isn't the primary factor in evaluating accident cases. But I do think there's value to it. I shouldn't have said that it is the "main" way for us to process a scene, but the most accessible/relatable way.
What you posted looks pretty cool; I don't know enough about it to understand what I should be focusing on, but we can chalk that up to ignorance. The benefit that driver-view footage has is that it is a viewpoint all of us are familiar with. If you ask me to watch dashcam footage to assess some kind of traffic thing, there's a general expectation of where I keep my eyes and what I notice.
This normal-human-view mode is probably going to be necessary in AV cases in which we determine whether the car's AI did the right thing. Presumably, as AV becomes mainstream and extremely safe, these accidents will involve edge cases and outliers which are poorly interpreted by sensors/non-human-vision. Seeing the scene as a human driver does might be a necessary starting place?
But the Uber case in AZ, IMO, proves your point. The Tempe police quickly made a judgement call based on what seems to be inadequate video. Everyone who can now view the video will also be inclined to think it would have been impossible to avoid hitting the victim, even if the actual scene in person has much more light. And of course, we don't want to judge AVs solely on whether they perform as well as normal humans.
Uber may not be at fault, legally speaking. That's up to the legal authorities to decide.
However, as a society and civilization, and even more so, as engineers and scientists, we are going to expect that the autonomous car matches or exceeds human-level performance in critical situations like this.
Therefore the time spent investigating, understanding, and discussing the root causes of the accident is worthwhile. Accidents like these generally do not happen due to a single factor. It is necessary to understand all the contributing factors if we want to make autonomous driving systems more reliable.
At the very least we need to understand whether the pedestrian appeared in the other sensors such that a human looking at the sensor data could have identified her; if yes, whether the autonomous system matched or exceeded human-level performance by detecting the pedestrian; and if the pedestrian was indeed detected, why the autonomous driving system failed to respond to the situation.
Surely not? Cars are routinely driven by people who are not owners, and liability for traffic offences (including that the vehicle must be insured) is with the driver.
In my experience, typically only minor infractions like parking violations are assigned to the registered owner of the vehicle, but in other cases – accidents, running red lights, etc. – the driver is liable regardless of who owns the car.
No. The general rule is that negligence is required to be held responsible. If I let my next door neighbor borrow my car to go to the grocery store, and he hits someone, I'm not responsible. Unless, the person can prove "negligent entrustment", i.e. it was irresponsible just to let this person borrow my car, e.g. they're a habitual drunk, or blind, or 11.
However, most auto liability insurance covers whoever you permit to drive the vehicle, so the owners policy does typically cover the fender bender on the way to the grocery store.
Correct, the owner's insurance policy is the primary coverage when the owner lends their car to a 3rd party. Obviously in the case of a moving violation the driver is at fault and receives the penalty, but damage is still covered by the owner's policy. In the case where the other driver is at fault, that car's owner's insurance is liable.
Exactly this. What's the response time of software? It ought to be close to zero and significantly faster than a human's. Let's say it's a generous 0.5s. No brakes were applied at all, and even with the crappy darkened video we got (the place isn't that dark: https://www.youtube.com/watch?v=1XOVxSCG8u0), the pedestrian was in view for 2 to 3 seconds.
The car didn't see her at all, even in those last moments.
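A rough check that braking alone could have worked, with all numbers assumed (40 mph, ~2.5 s of visibility, 0.5 s software latency):

  # Required constant deceleration to stop before impact.
  v = 17.9                       # ~40 mph in m/s
  t_visible = 2.5                # s before impact the pedestrian was in view
  latency = 0.5                  # generous software reaction time
  d = v * (t_visible - latency)  # ~35.8 m left when braking starts
  a_needed = v**2 / (2 * d)      # ~4.5 m/s^2
  print(f"{a_needed:.1f} m/s^2 needed to stop")

Dry-asphalt emergency braking manages roughly 8-10 m/s^2, so a system that reacted at all should have been able to stop, or at least slow dramatically.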
Well it was a pedestrian but they were walking their bike across the road. It's not like the software should make a distinction between a cyclist in the way and a bicycle with no rider in the way.
Indeed, it's hard to find pedals without them. Even ones that cost $10 a pair have reflectors. Unfortunately, pedal reflectors are ineffective when the bicycle's path of travel is perpendicular to the light source. The video doesn't reveal evidence of other reflectors, such as the common spoke-mounted ones whose purpose it is to highlight a bicycle traveling crosswise. For a moment, the bicycle is clearly illuminated by the headlights; I don't see any spots of light on the wheels or elsewhere.
For a side view, the reflectors on the tires (visible at the end of the video) are way better indicators of “watch out! Bicycle” than those reflectors.
See this video for a comparison of visibility (not in English, but that's immaterial - set speed to 2x ;)): starting with a "bike ninja" and going all the way to "Christmas tree" https://youtu.be/oAFQ2pAnMFA?t=1m0s
It's from 2011, there's been a lot of improvement in consumer-grade cameras since. Even so, it fits my perception IRL: even a small reflector is orders of magnitude better than no reflector, and adding multiple (esp. covering 360 viewing angles) makes you stand out at night; same goes for pedestrians.
This 100%. When I drive, I watch the road. I don't watch my mobile phone, I don't watch the kids behind, I don't watch my wife. I don't watch the sky. I don't watch the GPS.
I just watch the road in front of me.
My idea is that the car had been behaving well for a long time and consequently the driver lowered their vigilance. Big mistake.
A fully attentive human driver might have hit this person regardless. Would they have hit them while taking no evasive action whatsoever? No swerving, no brakes?
I don't think so: the dash cam video is misleading. I've had multiple ninjas jump out at me before, and although I did notice and avoid them, they were not visible on the dashcam until the very last moment. Surely Uber would not release data to intentionally mislead the public?
Even so, I count a full second from when a human paying attention would have seen something, just using this video as eyes, until impact. The stopping distance at 35 mph is 136 ft, which is 2.65 seconds at 35 mph, so the accident would still have happened, but the impact speed could have been lower.
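Using that 136 ft figure, which implies a fairly gentle ~3 m/s^2 average deceleration, here's a sketch (not an accident reconstruction) of what braking one second out would do:

  import math

  mph = 0.44704
  v0 = 35 * mph                  # 15.6 m/s
  d_stop = 136 * 0.3048          # 41.5 m
  a = v0**2 / (2 * d_stop)       # ~2.95 m/s^2 implied deceleration
  d_avail = v0 * 1.0             # braking starts 1 s before would-be impact
  v_impact = math.sqrt(v0**2 - 2 * a * d_avail)
  print(f"impact at {v_impact / mph:.0f} mph instead of 35")  # ~28 mph

With harder emergency braking (~8 m/s^2) that same second is worth far more.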
Yeah, but at that speed it's more than possible to swerve around an obstacle rather than screeching to a halt before touching it. Even turning slightly to the left or right would have made a dramatic difference in the outcome for this person. Not to mention for the person in the car, who might also have been severely injured had this been a heavier obstacle.
This was purely bad software, with no failure scenario programmed in. I really don't think it's that difficult to program split-second reaction to obstacles that appear in the driving path. We need to get to a point where these vehicles can do stuff like this, even in a 2-dimensional way:
They seem to use 2.5 seconds as the standard for drivers to perceive and react to an obstacle, which based upon studies covers 90% of all drivers. 1.5 seconds to perceive, 1 second to react. Then you have maneuver time on top of that 2.5 seconds.
Given this, 1 second seems very low. A large percentage of drivers would probably plow into them at full speed.
This dashcam footage was released by the local police. It's likely they don't have the ability to access the autonomous car's working telemetry. Given Uber's legal history I doubt they'll release anything until they're compelled to by law. Personally I find it borderline irresponsible of Tempe PD to release this video and statements based on this video so early in the investigation.
Low beams at high speed do not give enough advance warning to reliably prevent a collision; as your lights are turned downward, you see a pedestrian only when they're quite close.
In general, traffic safety requires that road planners ensure that one of three conditions always applies:
a) the roads are lighted from above;
b) cars are able to use high beams;
c) there are no pedestrians crossing the highway.
This can be done in general, mostly by investments in infrastructure to ensure lighting, or isolated highways wherever the density doesn't allow driving with high beams.
> clear the driver was looking at his phone or doing something ...
Seriously, what else can you expect? These companies that put these things on the road with the justification that "there is a human behind the wheel" should be taken out back and shot in the head... Just pull the plug. No more self-driving cars for them. Those are just the kind of tech companies we don't want around.
See, it is not a mistake that they are making. They know well enough that the human behind the wheel is as useless as a dummy. But they do it anyway. What does that say about them?
I feel sorry for the 'safety driver' here, as it seems likely much of the liability will fall on her. As a transgender ex-felon she can't have had a lot of fantastic job opportunities. I wonder how much Uber was paying her to sit in the hot seat.
The difference between Waymo and Uber here should be the difference between being allowed to continue, or getting barred from further self-driving research.
1. A driver who is not looking at the road cannot "potentially intervene", and is as good as no driver at all.
2. These companies seem to be doing nothing to make sure that the drivers are always paying attention and always in a position to intervene. They even seemed to allow smartphone usage while in the car.
So, according to them, the human behind the wheel is just a decoy to prevent backlash from officials and the public, so that they can always say, "look, there is a human behind the wheel if something goes wrong"...
Also, even if they implement some measures, they can only make sure that the driver has eyes on the road, not that they are actually paying attention. A driver who is actively driving the car will notice a lot more than a passenger who is just looking at the road. There is no way to make a human pay that kind of attention without actually driving the car. So at best, your "driver behind the wheel" is as good as a passive passenger.
And as noted before, the companies are not even trying to make sure of that.
I could be wrong, but I believe part of the reason for having a human behind the wheel is that it allows the testing to take place under existing driving laws. At some point prior to an unmanned vehicle being allowed on the road, lawmakers need to have some kind of framework in place to deal with any incidents that arise. With a human behind the wheel, a fully autonomous car is legally no different to cruise control - it's just a driver assist, and the human behind the wheel is still ultimately responsible for whatever the vehicle does.
In that context, the landscape changes significantly - instead of a self driving car that mowed down a pedestrian, we have a driver who was too busy looking at her phone to pay attention to what her vehicle was doing. From the various articles, it seems that she's not an engineer, and is there in effectively the same capacity as any other Uber driver. If that's the case, she's putting far too much trust into an experimental system. I agree that Uber could do more in the way of technological means to ensure the driver is paying attention, but at some point, an adult with a job needs to be responsible for doing that job.
>lawmakers need to have some kind of framework in place to deal with any incidents that arise. With a human behind the wheel...
The framework should have been in place before these vehicles were ever put on the roads. For example, there should have been formally specified tests a self-driving vehicle must pass before it can be put on the road, even with a backup driver.
> a fully autonomous car is legally no different to cruise control - it's just a driver assist, and the human behind the wheel is still ultimately responsible for whatever the vehicle does.
Anything that does not require drivers to keep their hands on the wheel is not a driver assist. It IS the driver. So there should be tests that make sure of the competence of the tech that is in the driver's seat.
>they can only make sure that the driver has eyes on the road. Not that they are actually paying attention. //
I'm certain that if you can design and build a self-driving car, you can design a simple human-attention monitoring system that will cause the car to pull over if the attention level is too low.
Gaze monitoring that checks for looking downwards or away from the carriageway for extended or too frequently repeated periods would probably be enough.
I imagine the attention of the "vehicle operator" is vital to the proper training of the vehicles -- if they don't see near misses, or failures to slow for potential hazards, or failures to react to other road users, then how can the software's faults be corrected? Do they get a human to review all footage after the drive?
I agree completely. As far as I can tell, the driver did not even have hands on the steering wheel. How hard would it have been to put sensors on the steering wheel to require both hands? They didn't even do that. Although even if they did, I agree with your statement that "[t]here is no way to make a human pay that kind of attention with out actually driving the car."
Not difficult at all, and you can make them keep reasonable attention. Look at the new Cadillac driver assist: sensors in the wheel for hand placement -and- eye tracking. If the driver isn’t watching the road/holding the wheel, they get escalating alarms until the autopilot disengages.
And that’s consumer driver-assist tech, not “we are experimenting with full autopilot” tech, where I’d think such safety measures would be even more appropriate.
This is a solvable and solved technical challenge. Uber just didn’t devote any resources to it because they don’t appear to give a shit beyond acquiring a legal fig leaf to shift liability from themselves to an individual.
Frequent, randomly scheduled disengagements should keep the driver quite on edge, preventing them from becoming a passenger. But each and every one of them would create additional risk, so the net improvement might be negative. There is just no way to get this right, except by being reluctant to push to scale. With all the hype, wishful thinking and investor pressure, that clearly isn't happening.
I've been thinking about this for the last couple of days, and it's definitely a hard problem -- even with steering wheel sensors and eye tracking, it doesn't stop people zoning out and not being ready to react.
I did wonder if you could require the driver to make control inputs that aren't actually used to control the car but are monitored for being reasonably close to how the computer is controlling it, with the automation disengaging (with a warning) if the driver is not paying sufficient attention. I then realised that may be _worse_ - in the event of a problem, the driver would have to switch to real inputs that override, which may delay action and not be something they do automatically. It would mean they are paying more attention to whether the automation is making errors in cases where they have more time to react, though (e.g. a sensor failure that is causing erratic behaviour but hasn't led to an emergency situation).
I wonder if a hybrid approach might be viable -- fake steering is used to ensure that the driver is alert and an active participant, but the driver hitting the brakes immediately takes effect and disengages the automation.
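Here's a minimal sketch of that hybrid; every name and threshold is made up for illustration:

  # Driver "shadow steers" on inputs that are monitored but never applied;
  # the brake pedal is always live and disengages automation immediately.
  class HybridAttentionMonitor:
      MAX_DIVERGENCE = 0.3  # radians of steering disagreement tolerated
      MAX_IDLE_S = 5.0      # seconds without any driver input tolerated

      def __init__(self):
          self.idle_time = 0.0

      def step(self, dt, driver_steer, computer_steer, driver_brake, driver_active):
          if driver_brake > 0:
              return "DISENGAGE_BRAKES_LIVE"  # real input, takes effect at once
          self.idle_time = 0.0 if driver_active else self.idle_time + dt
          if self.idle_time > self.MAX_IDLE_S:
              return "WARN_THEN_DISENGAGE"    # driver has zoned out
          if abs(driver_steer - computer_steer) > self.MAX_DIVERGENCE:
              return "WARN"                   # shadow steering way off the mark
          return "AUTONOMOUS"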
> Also, watching the video of the interior, it is clear the driver was looking at his phone or doing something else just prior to the impact. This alone leaves me skeptical about just how much could have been done to prevent this accident.
Wait, aren't you meant to have your hands on the wheel at all times? I don't see what there is to be skeptical about when, if he had just followed the law, this could have been avoided.
It seems to me the driver might be in for some legal trouble.
But this has got to be just the black-box camera, right? Surely the actual camera they use as a driving sensor is much better than this? Not to mention the LIDAR and all the other sensors that should have caught this.
> Also, watching the video of the interior, it is clear the driver was looking at his phone or doing something else.
Probably checking the computer installed for diagnostics of the autopilot system. If it's in self driving mode and you are the engineer in charge, you'd want to constantly check what the system is seeing vs the actual conditions on the road.
If you're the driver of a car you're supposed to ensure safety by looking out, not verifying sensory information. If Uber designed their cars to show a rendering of the computer's perception to the driver, or other sensory output, they would violate that principle.
To me it looks like the guy is just falling asleep at a boring job. In all likelihood that was not an engineer, any more than any other taxi driver is an engineer.
The software is the "driver" of this car, not the human behind the wheel. Take a look at job descriptions [0] for this. They always include a bit about "operating in-vehicle computers". The fact is, we don't know what the person was doing.
I am pretty sure Uber uses an iPad app for its autonomous vehicles. The driver is looking at that iPad application periodically along with the physical windshield view.
If you search "Uber autonomous vehicle" you can see some videos of the display. From what I gather, it basically gathers the signals into a human-readable model. In general I wouldn't have recommended this driving style, but it might have been too dark to see much anyway.
I don’t understand this; I’ve seen a few people comment in the same vein.
People can safely drive in total darkness with the aid of their amazing human eyes and high-beams.
If for some other reason visibility is low, you slow down - you don't rely on glancing at a backlit display, ruining your own night vision and taking your eyes off the road for seconds at a time.
Or flick your high beams, give quick beeps, adjust speed... I do all these things if I see anything on a collision course with my vehicle.
It is surprising to learn that these vehicles are operating at night. Since nighttime driving is inevitable for collecting training data, perhaps there are ways to simulate night for the computer vision systems during the daytime, so the human supervisor can still see clearly.
Lol, the "human supervisor", looking at his knees, probably on reddit or tweeting.
Would you trust this system (which didn't even manage to slow down at all for a pedestrian slowly pushing a bike directly in front of it), artificially adjusted to be even worse, driving during the day?
Human eyes have the same issue: if you are next to a bright light source, areas with less or no light will look much darker. I assume cameras work the same way?
Cameras work the same way, but much, much more poorly. A human eye can see a range of light-to-dark areas multiple orders of magnitude wider at the same time. The accepted estimate is that the human eye can detect a 1,000,000:1 range from light to dark in terms of photon intensity.
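For comparison with the camera specs cited elsewhere in the thread, that ratio converts to photographic stops like so:

  import math
  print(math.log2(1_000_000))  # ~19.9 stops for the eye, with adaptation
  print(math.log2(4096))       # 12 stops (the Canon ME20F-SH's spec) is only 4096:1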
The driver has been described as male in news reports:
> "The driver said it was like a flash, the person walked out in front of them," Moir said, referring to the backup driver who was behind the wheel but not operating the vehicle. "His first alert to the collision was the sound of the collision."
Also note that hitting the brakes was not the only option: steering to avoid the collision was another, maybe more effective one. Still, I feel the same as you do: I cannot guarantee I would have avoided this.
I can confidently assert that Asian, or at least Indian, drivers would almost assuredly not hit the pedestrian in this scenario; we have trained our eyes and senses to watch out for this, as it happens all the time.
EDIT: What I meant, in light of the downvotes, is that humans can train themselves to see, and that folks driving in Asia have a heightened sense of alertness due to their environment. Hope it came out alright.
The whole reason many people on here have been advocating for self driving cars is that they can see obstructions more or less perfectly in the dark with LIDAR. I am much more interested in what that sensor said.
I'm reluctant to infer exactly what a human eye would have seen in that situation. I have absolutely driven down streets in suburbia where the gap between street lights was large enough to make them quite dark, and that video was an example of exactly what I was afraid of happening whenever I drove down those streets (though admittedly my fear was hitting a white tailed deer).
I think it might also be fair to argue that the car's high beams were not on (but again, that shouldn't matter because of LIDAR, right?).
I'm not confident even an above-average human driver would have been able to avoid that accident, even if good eyesight gave them an extra half second to respond. Dark clothing and no reflectors mean that person was invisible to both the camera and the driver well past the point at which they would have been visible in daylight.
I've had a couple of situations where someone appeared close to my line of travel with low visibility clothing (at night) that scared the living shit out of me, and they weren't trying to cross the street.
To be clear, I am not blaming the victim here, but do wear high visibility clothing when you're a pedestrian near high speed roads at night.
A person with common sense and a developed understanding of the situation would drive more slowly in situations like this. The law says that you don't drive faster than you can see.
A similar thing (no fatalities, just a shopping cart pushed by homeless people) happened to me. Ever since then, I have learned to be much more aware of situations like this (tunnel of light surrounded by darkness).
This just shows that Uber's tech is bad, and the fact that they let it on the road shows that their culture is still at least partly rotten.
I don't think Dara is the do gooder that some people are making him out to be. His primary motivation seems to be to usher Uber to an IPO. IMHO, if he actually had ethics, he would be front and center on this. Your company just killed someone. Where are you?
> The law says that you don't drive faster than you can see.
Amusingly, the law also says that manufacturers have to produce headlights that cast light out far enough to leave you adequate stopping distance at 60mph. Almost no headlights on the market currently do that.
Not a counterpoint, just a tangent that I find sadly amusing.
The ninja is the reference standard for real-world pedestrians. It's up there with the surprise moose. Systems that can only detect bright peds are going to be horrific meat grinders and lead to autocar hell instead of autocar heaven.
Sad side note that most people appear unaware of the benefits even the simplest and cheapest of reflectors do provide.
The seemingly random design decision of many running-shoe manufacturers to embed tiny reflector strips in their shoes has no doubt saved countless lives. And their owners would probably be none the wiser.
And those who are aware often lack the understanding that with glare-minimizing headlights, reflective surfaces at or below knee-level are many times more useful than anything higher. A reflective hat would be pointless.
Yup. There's a place for education here. I know I wasn't really aware of the benefits until adult age, when I started to find myself more often in a car, at night, in rural areas. I still remember the first experience, in which I've noticed a cyclist on another lane ~0.5 seconds before we passed him. Dark clothes, dark bike, zero reflective elements.
In Norway, when growing up, I was frequently exposed to campaigns saying "Bruk refleks!" (Use reflector(s)!), and given free ones at every opportunity.
Of course it makes sense there where daylight may be hard to find half the year, however even in Australia, once it is dark the darkness is the same.
And I haven't seen a single government initiative to increase visibility awareness - most people are completely in the dark. (Sorry)
Riding shared bike trails in Melbourne at night on the commute home, this is something I think about often in the "winter" months. Peds may hate the strong glare from my LEDs, but it is the only thing that has half a chance of letting me make out ninjas against the frequent sports-ground stadium floodlights the path goes by.
> The whole reason many people on here have been advocating for self driving cars is that they can see obstructions more or less perfectly in the dark with LIDAR. I am much more interested in what that sensor said.
That is not the whole reason, it is one of many reasons.
> To be clear, I am not blaming the victim here, but do wear high visibility clothing when you're a pedestrian near high speed roads at night.
Preventing the accident might not have been possible, but even being able to decrease speed by a tiny amount would have greatly improved the pedestrian's chance of survival. Slowing from the 38mph that the car was traveling down to 30mph would decrease the chance of fatality from about 45% to below 10%.
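To put numbers on how little braking that takes, assuming ~8 m/s^2 hard braking (a sketch, not a reconstruction of this crash):

  mph = 0.44704
  v0, v1 = 38 * mph, 30 * mph    # 17.0 and 13.4 m/s
  a = 8.0                        # m/s^2, roughly dry-asphalt emergency braking
  t = (v0 - v1) / a              # ~0.45 s of braking
  d = (v0**2 - v1**2) / (2 * a)  # ~6.8 m of roadway
  print(f"{t:.2f} s and {d:.1f} m of hard braking to go from 38 to 30 mph")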
Yep. I live half a mile away and just drove the same path tonight around 10 pm; it's nothing like the video. There are spots that are darker than others, but they don't look nearly as dark. Nowhere on the street looks pitch black; there's ambient light everywhere.
For anyone who's interested, try taking your phone with the camera app open into a dark room and comparing what you see to what's on the screen. Which shows more detail?
Out of interest-- can you take a pic while the lighting outside is similar (assuming weather hasn't changed dramatically?) and maybe adjust exposure to what your eyes see? Or take a camera phone pic for comparison?
> The gap between street lights (and hence the person) was in the field of view of the camera the entire time
But the gap between street lights is going to be very hard to see into.
> I'm confident my eyes are good enough that I would have been able to see this person at night in these lighting conditions.
I think you're overconfident. Human low light vision is very good if there is low light everywhere. But it is not good at seeing into low light regions when brightly lit regions are nearby.
That said, I agree that a visible-light video camera is likely to be even worse than human vision under the given circumstances. But as others have commented elsewhere in this thread, the car is not just supposed to be using a visible-light video camera. It has LIDAR and IR sensors, which should have clearly shown the pedestrian well before visible light did.
FWIW the spot where the crash happened is in fact badly lit. I know this anecdotally from having been at the location for events -- it's right next to a concert venue -- but it can also be seen on other dashcam videos.
In this video [1] driving northbound, same as the vehicle in the crash, the car first goes under AZ-202, emerges under a streetlight, goes through a darker spot, then another streetlight (as you see the rocky outcrop), and then a very dark spot: and suddenly, you see a right-turn lane that wasn't there before. The latter dark spot is where the crash happened.
Another video by the same author, driving southbound [2], provides another useful reference. And these videos are three years old, yet the illumination of the roadway has not improved. Cameras exaggerate the contrast a bit, but not unreasonably so. The streetlights in question essentially aim directly downwards, illuminating the roadway immediately underneath but much less of the surrounding air than other designs. This is responsible for the dark gaps, although it does significantly reduce light pollution.
Found more. The car in this video is going southbound, camera facing backwards [3]. This view faces the same way as the Uber did, but of course this video is moving away from the scene, and offset by a few dozen meters to the west. The drastic change in roadway illumination can still be seen.
In a fourth video [4], the car is going northbound, like the Uber, in the proper lanes, but the camera is pointing obliquely front-right. The illumination seems better, but you can still see the intensity of the shadows, including environmental shadows and the car's own shadow, as it moves between the lights.
Everyone is moaning and slicing and dicing what the self-driving vehicle did wrong but, since you're familiar with the area: are pedestrians typically expected to be crossing this road?
Seems like the accident has a lot of factors that might not be solely the self-driving car's fault, nor even that of a human driver fully in control. Regardless of how well people may want self-driving cars to do, one thing that can actually exist in the present is making sure that we create safe ways for pedestrians to cross a road.
I've also driven around here a lot. No, pedestrians are not common. Maybe once a week in my experience? They do love to cross outside of crosswalks at night, though, and I've found that I have to adjust my own eyes' object recognition to look for moving shadows and not just moving lights, because they're very hard to see even in well-lit areas.
I've driven many thousands of hours at night and have dealt with a fair number of crazy pedestrians including a rather ... uncoordinated ... guy in Casa Grande who decided to go in circles on his bike in the middle of the road at around 3 AM for no discernible reason. Fortunately that place was much better lit and I was able to see him and stop until he got out of my side of the road.
So it's not that common, but yes, every so often you will see some person in black jaywalking across a wide road at night and they're quite hard to see. I don't think a lot of people appreciate that the streets here are wide & fast and that there just isn't that much pedestrian traffic even in daytime.
That was my suspicion. I've lived in very suburban areas before as well as rural ones where you might even be going 55 on a two-lane road with no street lighting whatsoever.
Here in LA, it's dense and traffic can't get up to very high speeds and we have relatively frequent places to cross safely if people choose to do so. I've definitely seen those who choose not to walk an extra 100 feet to wait at a crosswalk nearly hit in dusk or night traffic.
No amount of automation is going to bring the accident rate down to 0 so through a combination of factors, such as traffic and community design, we can work in tandem with automated driving to get closer. There's still the X factor of our human ability to do really dumb stuff.
Tucson does it due to the nearby observatory. The greater Phoenix area has a huge glow that washes out all the stars. You can see the glow as far away as Casa Grande when you come out of the little rocky pass on I-10 north of there.
> The greater Phoenix area has a huge glow that washes out all the stars.
I live in the Phoenix area, on the west side closer to Glendale (specifically, the border between Phoenix and Glendale is literally in my back yard).
There are times in the summer when the glow from the city is so bright that, rather than a dark sky (never black), you have a grey, dimly lit sky instead.
Literally, "the sky was the color of television tuned to a dead channel" - maybe not as bright as the static Gibson was referring to, but still bright enough to see by - even without a full moon.
This site lies on the approach route for Sky Harbor airport. I'd imagine the street lights are intentionally designed to reduce light pollution at the expense of "on the ground" effects.
> But the gap between street lights is going to be very hard to see into.
This wholly contradicts my experience driving at night on a street with street lights. I can't recall a time in my entire life I have had significant difficulty seeing into the gap between street lights. Keep in mind that the gap is not arbitrarily chosen.
Edited to note that I have experienced difficulties in low-vis conditions such as snow storms, sand storms, VERY strong rain storms, etc., none of which apply to this situation.
Keep in mind a lot of folks in this thread might be suddenly realizing they have reduced ocular ability at night, a likely common condition that pretty much nobody is aware of when it’s minor (because it’s not obvious something is amiss; maybe it’s just that dark). I agree with you that streetlights and headlights are almost universally sufficient in my experience. If they’re not, it’s worth getting your eyes checked out for light sensitivity at night. You never know.
I’m not sure a typical eye exam checks for it, either, because none of the tests I can think of seem like they’d be useful.
(As usual, an even keeled comment based on family experience is -2 and rapidly being silenced with zero feedback inside 5 minutes, which makes me wonder why I contribute to this community at all, probably time to stop)
I did one at my last eye exam and it was pressing a button when you see dim flashes in all different locations. If you had low sensitivity, you wouldn't see those flashes and presumably you'd get a low score.
That test mostly isn't testing sensitivity, it's testing field of view which is an indicator of some potential eye health issues. It might end up testing sensitivity incidentally but that's not the purpose.
I suspect it's too late to change now, but having a "throwaway" account is an indicator you might not be committed to the community. One has to dig a little deeper to find a multi-year history with 4000 karma, so first impressions of your comments might be getting biased. (It might just be the "red car effect", but I am seeing a lot more throwaway accounts these days.)
I would also not judge the community based on reactions to this very contentious thread - I am wary of jumping in on this one, but thought it worth noting your comment was not wildly out of place.
Judging a comment, or commenter, based on karma is asinine. Respond to the comment, not the commenter; ideas, not people. You are not representing HN well, and this is my 'unpopular opinion' account talking. There are plenty of better ways to engage, and I do appreciate your enthusiasm for HN. Perhaps this is an apt introduction to the heated discussion that is HN.
Well, in my view I was responding both to the comment ("I am leaving") and the commenter (making the years of participation and 4000 points relevant). If someone has been a contributor for many years, then we should consider why they chose to leave. It might be them; it might be us.
I actually believe I am representing HN as a place where different opinions can be voiced, hopefully in a manner that generates light, not heat. Heated discussions are rarely the useful or interesting ones to read.
Thank you for appreciating my enthusiasm.
PS
Are you using two accounts - one (your "unpopular opinion" account) for saying things you fear people might not like? That seems odd. May I ask why?
This may be how you believe you see the world but most people take reputation into consideration and on sites which expose that information account age and karma are very popular cues for that.
Karma does not mean shit. It just means you are complaisant. I think the proper way to use things like HN/Reddit is to always use a throwaway account and always speak your mind without the fear of negative karma. So I also agree 100% with the parent: reply to the comment, not the commenter, their karma, or their entire history.
> Keep in mind that the gap is not arbitrarily chosen.
It's not supposed to be, no. But the gaps are not always optimal. The spacing of the street lights in the video (to the extent I can tell) seems to be quite wide, wider than I would think is optimal.
The edges of the "light pool" that the lamps normally cast are probably being clipped by the camera's crappy dynamic range; it is almost certainly a much larger light pool in real life.
If Tempe is like Tucson, they are using different kinds of street lighting from the rest of the country to minimize light pollution for stargazing reasons.
> But the gap between street lights is going to be very hard to see into.
Looking, right now, at a parking lot between two lights from a well-lit room. I can make out most of the outline of the black car in the middle of the "darkness" without any trouble. This isn't even the low light vision kicking in (which I agree isn't going to kick in if you're driving). Human vision should be able to make out the pedestrian earlier than the video footage.
How long does it take you to make out the black car and determine that it's a car? What if the car were coming straight at you out of darkness and you were standing in the light of a street lamp?
Also, are you looking straight at the car? Or are you looking elsewhere so that the car is in your peripheral vision, the way it would be if it were on the side of a road you were driving on?
Not when you transition from high to low light conditions. The problem is that night vision has more noise, which makes movement detection far more difficult. This is made worse because the pupil can't fully dilate, making the gaps seem much darker.
This street in particular is weird at night because the street downstream rises up, and the light from those lamps is cast at a higher point. The place she was hit is extremely dangerous because there are no lights on her, and no lights behind her.
I believe that a human would be able to see in those conditions. It's a lit street with a car with functioning headlamps. It wasn't foggy or rainy.
I've personally driven down country roads without any lighting except my headlights and seen deer poking their heads out of the woods a ways away, for which I slowed down in case they darted across the street. Someone slowly walking their bike would be trivial.
The video makes it seem impossible, but afterwards, in the interviews, the driver said it wasn't too bad after his eyes adjusted. He did have some issues with his own lamp blinding him, which led to errors. (He actually won this stage.)
As far as I'm concerned, Uber's software/hardware is completely at fault and not ready for public testing. I'm uncertain how much better everyone else's tech is, but Uber's typical carefree approach has ruined it for everyone.
There are consumer-level dashcams that can shift up to ISO 12800, which can create a fairly distinguishable picture with ambient moonlight. [1]
Canon builds sensors with ISOs in the millions, which should be able to make out distinguishable shapes with almost no light. [2]
> It's a lit street with a car with functioning headlamps.
The headlamps may have been functioning, but they appeared to be aimed way too low. You can see that the car traverses the distance lit up by the headlamps in about a second at 38 mph. If the headlamps were aimed properly, they should light up the road about 5 seconds ahead of the car.
> If the headlamps were aimed properly, it should light up the road about 5 seconds ahead of the car.
A system with this rule baked in would be driving slower.
People adjust the way they drive based on what their environment is doing, how well their equipment is working and their own alertness. Except in the extremes we should not accept misconfigured equipment as an excuse. And if a system detects that there is no acceptably safe speed for it to go then it should not move at all.
> A system with this rule baked in would be driving slower.
Arguably, the system should detect a misconfiguration like this when the car is turned on and not allow the car to be driven until the problem is fixed.
I also suspect that human eyeballs would have a different view of the light/dark portions of what's depicted there, and especially eyeballs would have probably had a much higher chance of detecting movement in peripheral vision than that video gives any hint of.
We typically can't see much detail in the scene outside our small region of focus, but you can bet if a tiger appears from behind a tree our visual system will scream to the brain _look over there right now!_
Our eyes and our entire visual processing system is very much not "just like a webcam, but made out of meat".
I’ve driven that location on that road many, many times at night, and no, it is not that dark; it is well lit, like most city streets. The video makes the contrast appear greater.
I have a dashcam, and I've seen night videos from it.
In fact, the picture from my dashcam is much better than this low-quality mess, but its night videos still come out much darker than reality.
I've tried to rewatch some parts of videos later, and I find I was able to see much more detail on the sidewalk and on the periphery than was captured by the dashcam. Everything gets blown out in the night videos by the headlights.
Which makes me wonder if the Uber autocar is just relying on camera vision to drive itself... If it is, and it's been lying to authorities (I don't know what they said to the authorities about their cars' capabilities), that could be big.
The point is not germane. What is germane is that a car that supposedly uses LIDAR and infrared, and presumably was approved by the regulators on the basis of such, should have had no problem seeing the pedestrian, as LIDAR and infrared are unaffected by night, and should at least have shown some indication of braking, but did not. This suggests that the car does not in fact utilize any of those fancy (non-visible) detection methods. Alternatively, those fancy detection methods were fooled by the bicycle and the return was misclassified as an error or something.
My point is that it's likely the camera view we're seeing has nothing to do with the self driving portion of the vehicle (a good hint to this is the interior view--useless to autonomous driving, but a common feature of dash cams).
The car has a LIDAR sensor mounted on the roof. It is supposed to continuously scan 360° of the environment. Since LIDAR is an active sensor (it emits light), the car should have seen the person and bicycle even in the dark. That it did not do so suggests the car does not evaluate LIDAR input, or it dismissed the object as erroneous data.
It has 64 lasers, spread out over about 27 degrees - about 0.4 degrees per laser, from almost horizontal to an angle of 24 degrees or so down. Now take a look at where it is mounted on the car, and envision these laser beams spreading out and being spun in a circular conical area around the car.
Now - if you think about it - as the distance from the sensor increases, the beams are spread further apart. I'd be willing to bet that at about 200 feet or so away from the car, very few of the beams would hit a person and reflect back. Also - take a look at the reflectance data in the spec. Not bad...but imagine you are wearing a fuzzy black jacket on your top half. How much reflectance now?
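A rough sketch of that geometry (assuming an HDL-64E-like unit with 64 beams over ~27° of vertical field of view, per the figures above; not datasheet-exact):

    import math

    NUM_BEAMS = 64
    VERTICAL_FOV_DEG = 27.0
    SPACING_DEG = VERTICAL_FOV_DEG / (NUM_BEAMS - 1)  # ~0.43 deg between beams

    def vertical_gap_ft(range_ft):
        """Approximate vertical gap between adjacent beams at a given range."""
        return range_ft * math.tan(math.radians(SPACING_DEG))

    def beams_on_target(range_ft, height_ft):
        """Roughly how many beams would cross a target of this height."""
        return height_ft / vertical_gap_ft(range_ft)

    # At 200 ft the beams are ~1.5 ft apart vertically, so a 5.5 ft person
    # is crossed by only ~3-4 beams - fewer still if reflectance is poor.
    print(round(vertical_gap_ft(200), 2), round(beams_on_target(200, 5.5), 1))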
What do you think the point cloud returned to the car is going to look like? Will it look like a human? Hard to say - but you feed that into a classifier algorithm, there's a possibility that it's not going to identify the blob as a "human" to slow down. Especially when you add some bags, a strange gait, plus the bicycle behind the person. All of this uncertainty adds up.
I am also willing to bet that only the LIDAR was used for collision detection (beyond the radar on the unit). Any cameras - even IR based - would likely only be used for lane keeping and following purposes, plus traffic sign identification. Maybe even "rear view of vehicle" detection. Ideally they would be used for "person/animal" identification and classification too - but again, given the camera sensor, and who knows what the IR sensor saw or didn't see, along with the weird lighting conditions - well, who knows how it would have classified that mix?
Lots of variables here - lots of "ifs" too. All we can do is speculate, because we don't have the raw data. Uber would do well to release the entire raw dataset from all the sensors to the community and others to look over and learn from.
Finally - I am not an expert on any of this; my only "qualifications" on this subject is having taken and passed a couple of Udacity MOOCs - specifically the "Self-Driving Car Engineer Nanodegree" program (2016-2017), and their "CS373" course (2012). Both courses were very enlightening and educational, but could only really be considered an introduction to this kind of tech.
> The dynamic range of the human eye is vastly better than a visible spectrum camera.
Certainly better than any camera mounted on a dashboard.
It's honestly a bit surreal how the pedestrian appears out of the splotch of pure darkness in the frame. That's low dynamic range and resolution (or high compression) at work, not how light behaves in reality.
I figured that light in front of the car was mostly just messing with the camera but that driver sure didn’t see that pedestrian either. I’m willing to give a human driver the benefit of the doubt here and say that even with eyes on the road and hands on the wheel the outcome would likely have been the same. The pedestrian was not highly visible - no reflectors, dark clothes, it’s really hard to see people like this.
The eye can gain a lot more stops through adaptation (irising, low-light rod-only vision), but those mechanisms don't come into play when viewing a single scene -- and cameras can also make adjustments, e.g. shutter speed and aperture, to gain as much, if not more, range.
A camera captures the entire scene in a frame with a fixed dynamic range. Human vision builds the scene with spatially variant mapping, the scene is made from many frames with different exposures stacked together in real time.
I'm concerned about poor scotopic adaptation due to the rather bright light source inside the car - maybe it's the display he's looking at. I see a prominent amount of light on the ceiling all the way to the back of the car and right on his face. It's really straightforward to collect the actual scene luminances from this particular car interior and exterior in this location, but my estimation is that the interior luminance is a bigger problem for adaptation than the street lights, because the display he's presumably looking at fills a much wider field of view, and he's looking directly at it for a prolonged period of time. It's possible he's not even scotopically adapted because of this.
And also why is he even looking at the screen? He's obviously distracted by something. Is this required for testing? Ostensibly he's supposed to drive the car first. Is this display standard equipment? Or is it unique to it being an Uber? Or is it an entertainment device?
Retest with an OEM lit interior whose driver is paying attention. We already know the autonomous setup failed. But barriers are in place than also increase the potential for the human backup driver to fail.
I agree, but I don’t think the eye can adapt beyond its inherent dynamic range over a matter of milliseconds - the iris is not opening or closing over that timescale, so you’re relying on the inherent dynamic range of the retina (which is pretty good).
What the eye IS doing is some kind of HDR processing, which is much better than the gamma and levels applied to that video. I bet a professional colorist could grade that footage to make it a much better reflection of what the driver could see in the shadows - even with a crappy camera, you can usually pull out quite a bit of shadow detail.
It's not so simple. Technically, you are not wrong but a video feed should have been sufficient here. It should also be considered that digital video has improved drastically over the last decade.
Even LIDAR aside, computer vision and a raw video feed should have been enough to have prevented this collision.
When a digital camera records an image, a gamma curve is applied to it before display, which makes up for our bias against the darker portions which the digital equipment does not have. We are very capable of guessing the results of bright conditions but not dark conditions via compressed video.
More to the point, these cars should not be using consumer CCDs with compression. They should be utilizing the full dynamic range the sensor can deliver.
> When a digital camera records an image, a gamma curve is applied to it before display, which makes up for our bias against the darker portions which the digital equipment does not have.
Gamma correction makes up for a bias against darker portions in the display, not in our eyes. It's a holdover from the CRT days where the change in brightness between pixel values of, say, 10 and 11, was far less than the change between 250 and 251. Human eyes have excellent low-light discernment which is why 'black' doesn't really look black and you can make out blocky shapes during dark scenes on some DVDs.
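For what it's worth, the round trip looks roughly like this (a simplified pure power law; real transfer functions such as sRGB add a linear segment near black):

    GAMMA = 2.2

    def encode(linear):
        """Linear scene light in [0, 1] -> stored pixel value in [0, 1]."""
        return linear ** (1 / GAMMA)

    def decode(stored):
        """Stored pixel value in [0, 1] -> linear light in [0, 1]."""
        return stored ** GAMMA

    # Encoding spends a disproportionate share of code values on shadows:
    # 1% of the scene's light already maps to ~12% of the output range.
    print(encode(0.01))  # ~0.123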
Compressed video lacks information in the blacks, and that is why we see blocks. The blocks are not there before compression, so it’s not simply a matter of detecting them. While we are good at seeing objects in blacks, your explanation alone doesn’t account for why compression algorithms choose to remove so much of that data. Maybe we are saying the same thing. It’s hard to tell.
Your assertion about the origins, however, is at odds with what I have been taught, my understanding, and all the supporting info I am finding in a quick search. My understanding is that luminance values from a sensor have something of an empirical scale, but I’m sure this is no complete explanation. I am speaking from my working knowledge. I can’t find anything supporting that it is simply a fix for discrepancies between display types. Can you link to something or explain what I am missing?
I do know a bit about cameras, and you're spot on.
You will frequently see dash cam footage and night photography blow out the relative highlights and blacken the relative shadows.
This is because (cheap) hardware does not have the same dynamic range as human eyes, especially at night. So "properly exposed", it has to make a call to capture light values in the middle somewhere. Light values too far out the top get interpreted as white, those out the bottom as black, creating an artificially high-contrast version of what a human eye would see.
This is pretty intuitive: generally, when we're driving down the road with our lights on, we aren't literally moving between pools of black. In many urban areas I'll even forget to turn my lights on because I can see well enough.
You MAY be able to get a VERY BAD approximation, in post-processing, of what a human would see by increasing the brightness of those pixels near the black threshold.
If any of you have a dash cam it's very obvious how the light levels of images captured at night look like this video and is VERY different from what you see as a driver - objects are much brighter than this with your own eyes.
Also - the car is driving way too fast.
I did some driving tonight and paid close attention to when I naturally slowed down - and admittedly I'm probably on the better end of the driver curve, in that I don't tailgate, I drive the speed limit, and I slow to well under the limit when conditions are poor (fog/rain/snow, night, slick/wet roads, near curves/hills where I can't see the road). I noticed that many of the times I naturally slowed down, I went 10 to 20 mph under the speed limit in some areas. It seems this Uber SDV generally goes as fast as it is legally allowed to regardless of what it can see.
Also even if the camera was perfectly accurate about human field of view, no human driver in his right mind would drive so fast with such a poor visibility. Any judge would qualify this as reckless driving.
So either way the software failed:
- If the AI misjudged the LIDAR information and didn't register the slow-moving pedestrian, it's a fail.
- If it didn't have enough computer-vision visibility, it should have slowed down.
Possibly, in the second scenario, the human test driver is at fault too, because he should have noticed the bad conditions and hit the autopilot kill switch.
In France it’s 100% (in civil cases), unless the driver can prove it’s a suicide. It just goes by kinetic energy: you store it, you are responsible for it. Other people don’t have to dodge your car. And since the death penalty is not part of the arsenal, killing a pedestrian is not an appropriate sentence for an infraction that’s punished with a 50€ fine.
I don't really know how it works in the US or in this state, but in my country, you simply can't drive when it's as dark as the video appears. Either you're not in a city and you can turn your main-beam headlights on (the blinding ones), or you're in a city and the road is much better lit and the speed limit is 50 kph.
With those, the driver would've seen her from a mile away.
Since in the video, we aren't seeing the original scene, but rather, the camera's interpretation of that scene, I think it would be hard to judge except to base it on what your average streetlight brightness is.
Like a camera, your eye also has only so much dynamic range. So if those street lights are bright enough, or your interior lights are too bright, you might have nearly zero visibility in those shadows.
But it is certain that a self-driving car "should" be able to see. Even two cheap digital cameras one tuned to see the darker range and the other brighter should easily see in these types of situations.
"Even two cheap digital cameras one tuned to see the darker range and the other brighter"
Sounds a lot like rods and cones in our eyes, huh?
There's another difference with eyeballs that would almost certainly have helped here - the low light sensitive peripheral vision that the rods provide is also attuned to movement, we're more sensitive to movement in peripheral vision as well as being better able to see in low light.
You wouldn't need two different cameras, just one camera shooting HDR video (alternating between over/underexposing the frame so that no information is clipped) to get a clear image at all exposure levels.
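As a sketch of how little exotic hardware that needs, OpenCV's Mertens exposure fusion will merge bracketed frames without any camera calibration (the file names here are placeholders):

    import cv2

    # Three frames of the same scene at different shutter speeds
    # (placeholder file names for a bracketed capture).
    frames = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

    # Mertens fusion blends the best-exposed regions of each frame;
    # it needs no exposure metadata and no separate tone-mapping step.
    fused = cv2.createMergeMertens().process(frames)  # float image, ~[0, 1]

    cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))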
Eyeballs are pretty good at night vision once adjusted, but good high sensitivity cameras can be much better. And let's not get started on LIDAR/RADAR... it seems clear to me that this was not a sensory deficiency, it was poorly designed/tested software.
No, much more like the iris in our eye. Turn on a bright light inside a car and see how hard it is to see outside on a dark night. Modern HDR cameras have a much higher dynamic range than the human eye. Hence the surreal HDR photos you see.
The driver's eyes weren't fixed on the road, but he glanced up twice prior to the collision and showed no hesitation. It seems that it was at least dark enough for him not to notice a person + a bike on a large road.
I'm also very (sadly) surprised that she crossed that kind of road at night without hurrying or reacting to the sound of cars approaching.
I too think visibility is better than it appears in the video, but I'm not so sure it's good enough to help all that much. However, even with visibility as bad as in the video, I'm confident in my ability to handle the situation. I would probably not be able to brake in that short amount of time and from that speed, but neither would I drive at that speed. When there are less than ideal conditions (in this case visibility), it is our responsibility as drivers to adapt and lower the speed, possibly dramatically. This goes for autonomous cars too. If the road in front of me and the areas next to it are not clearly visible, I'd drive at such a speed that a collision would in all likelihood only result in scrapes.
True, but there are trails that cross the road; it is an odd area. If you zoom out on Google Maps you will see some of the trails. Note the sidewalk/pathway. It is signed no-pedestrians but has paths for them, so it sends mixed signals.
I saw that. The median is landscaped with a bizarre X-shaped paved area. It can't be intended for recreational use or walking; it's a divider between two fast roads. At all four entrances to the X, there is the no pedestrians sign.
I think you are wrong, since people die in this exact scenario almost every day. The camera might be making it look darker, but that doesn't mean that every driver (everyone's eyes and reaction times are different) would have been able to see her and get out of the way.
Is it possible that driving under intermittent street lights messes with the aperture or image recognition? It would be like flashing a strobe light at the camera.
Right, so the camera's night-vision mode that detects objects in the dark would have been completely blinded by the street lights while passing under the streetlamp. Take night-vision goggles and look at a light. It blinds the whole field of vision.
The only thing that I think was the car's fault was that the car is programmed to keep driving while the safety driver is distracted. There is no point to a human driver sitting behind the wheel of an autonomous vehicle if they aren't paying attention.
People need to understand that self driving mode isn't a freedom from the responsibility of driving safely. Rather its a tool to help ensure that driving statistically becomes safer as more self driving vehicles find their ways onto the road.
Hopefully someday all cars will be self-driving, with dangerous hazards/traffic reduced to the point that they are virtually nonexistent, rather than being towards the top of the lists of "preventable deaths" and "things humans don't want to waste most of their time during the day doing".
> How did LIDAR and IR (?) not catch that? That seems like a pretty serious problem.
Something is badly wrong there. That should have been detected by LIDAR, radar, and vision. Yes, they need a wide dynamic range camera for night driving, but such things exist.[1][2] They're available as low-end dashcams; it's not expensive military night vision technology.
Radar should pick up a bicycle at that range. The old Eaton VORAD from about 2000 couldn't, but there's been progress since then.
LIDAR has its limitations; some materials, including the charcoal black fabric used on some desk chairs, are almost nonreflective to LIDAR. But blue jeans, red bike, bare head? Expect solid returns from all of those.
The video shows no indication of braking in advance of the collision. That's very bad. There simply is no excuse for this situation not being handled. The NTSB is looking into this, and they should. I hope the NTSB is able to pry detailed technical data out of Uber and explain exactly what happened. In the first Tesla fatal crash, they didn't get deeply into the software and hardware, because it was clear that the system was behaving as designed, unable to detect a solid tractor trailer crossing in front of the Tesla. The result of that investigation was that Tesla had to get serious about detecting driver inattention, like all the other carmakers with lane keeping and autobrake do.
This time it's a level 4 vehicle, which is supposed to be able to detect any road hazard.
The NTSB has the job of figuring out what went wrong, in detail, the way they do for air crashes.
LIDAR also has limitations on angular resolution just as a function of how the sensor works. It's entirely possible that the size of the person/bike on LIDAR was just too small until it was too late to stop.
Why didn't it even appear to try to stop? You've got me: refresh rate on the LIDAR? The LIDAR flat out being mounted too high, relying on optical sensors instead for collision avoidance of small targets (like a human head)?
I'm guessing, I'd love to see an NTSB report on this.
Why even bother having a LIDAR system on your self driving car if it doesn't have sufficient resolution to detect a person standing right in front of it?
This doesn't seem like an edge case at all. Pedestrian crossing the road at a normal walking pace, and no obstructions in the way which would block the car's vision. The fact that it's dark out should be irrelevant to every sensor on that car other than the cameras.
Something obviously went terribly wrong here; either with the sensors themselves or the software. Probably both.
For detecting larger obstacles like buildings or other vehicles would be my guess.
Realistically, faster sensors should be used to detect obstacles. LIDARs I could find with some cursory googling can run up to 15 Hz. Computer vision systems can run much faster (I have a little JeVois camera that'll do eyeball tracking at 120 Hz onboard; I assume something that costs more can do better).
But more importantly, you're vastly trivializing the problem - Standing right in front of it, sure the LIDAR will see the person no problem. Standing 110 feet away (which would be min stopping distance at that speed)? Realizing that, for a LIDAR with a 400' range at 15hz moving at 40mph you get ~7 samples of a point before you're at it... For at least the first 3 frames that person is going to look like sensor noise. At 110 feet that person (which I'm calling a 2' wide target) is 1 degree of your sensor measurement.
It's not that it's useless or broken, more that this is a seriously bad case where optical tracking couldn't work and where LIDAR is particularly ineffective at seeing the person because of how it works. More effective might be dedicated time-of-flight sensors in the front bumpers; unsure how long a range those can get, but they are also relatively "slow" sensors.
It’s not mutually exclusive either. You can have lower frequency, lower angular res 360 spinning LIDAR for low granularity general perception, and also have much higher frequency, brighter, and lower FOV (~90-120deg) solid state lidar mounted at the very least on the front corners of the car. We should be absolutely littering these vehicles with sensors, there’s no reason to be conservative at this stage.
> LIDAR also has limitations on angular resolution just as a function of how the sensor works. It's entirely possible that the size of the person/bike on LIDAR was just too small until it was too late to stop.
I highly doubt this is the issue. I am not sure what Uber's setup is, but even a standard Velodyne should have been able to pick that up based on angular resolution.
> Realizing that, for a LIDAR with a 400' range at 15hz moving at 40mph you get ~7 samples of a point before you're at it... For at least the first 3 frames that person is going to look like sensor noise. At 110 feet that person (which I'm calling a 2' wide target) is 1 degree of your sensor measurement.
This is based on the velodyne LIDAR specs I could find last night with some quick googling:
- 400' range
- .04 degree angular resolution
- 15hz max update rate
If you have more accurate real world experience with these sensors and can share more accurate performance characteristics I can update.
These calculations were done assuming a vehicle moving at 40 mph. The stopping distance at that speed is about 110ft. I computed the pixel size by assuming 1 measurement = 1 pixel giving me 9000 pixels per 360 degrees.
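The same arithmetic as a sketch (the inputs are the quoted spec figures above, not measured values):

    import math

    ANGULAR_RES_DEG = 0.04  # quoted figure: 360 / 0.04 = 9000 measurements/rev

    def angular_size_deg(width_ft, range_ft):
        """Angle subtended by a flat target of given width at a given range."""
        return math.degrees(2 * math.atan(width_ft / (2 * range_ft)))

    def measurements_across(width_ft, range_ft):
        return angular_size_deg(width_ft, range_ft) / ANGULAR_RES_DEG

    # A 2 ft target at 110 ft subtends ~1.04 degrees.
    print(angular_size_deg(2, 110), measurements_across(2, 110))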
That's the LIDAR Uber seems to have, judging by matching pictures.
5-20 Hz full-revolution sampling rate; let's assume 15 Hz.
The resolution in the horizontal plane depends on rotational speed, so at 15 Hz it should be 0.26 degrees.
(0.35/20*15 = 0.26)
For the woman's height, the vertical angular resolution is 0.4 degrees no matter the rotation speed.
That is, she would have been at least one pixel wide from 400 feet and about 2 pixels high, growing in size as she got closer, if we assume she was 2' wide.
(Not counting the bike.)
I really see no excuse for Uber messing this up that badly. The LIDAR can't have missed a potential "obstacle" as it got closer, even if the car wouldn't classify it as a human.
I was using Rev E because it's the data sheet I had handy. Mostly I was trying to point out that LIDAR is not some magic thing that always sees everything and there's limitations.
That's with your 0.26 angular resolution @ 15 Hz. (I just have a spreadsheet that spits all these out for me.)
These are NOT big targets, they could easily have been mistaken for noise and filtered out. All of the LIDAR data I've ever seen has been fairly noisy and did require filtering to get usable information from it. And given the number of frames they get maybe their filtering was just too aggressive.
Yes, I agree with you that we can't assume that the car could have noticed the woman from 120 meters from LIDAR data alone. Maybe with some kind of sensor fusion with IR-cameras.
But as it got closer, and what the computer thought was noise stayed in about the same place, a sane obstacle finder should have given a positive match. Maybe at 30-40 m worst case?
At 142 feet the woman probably had (assuming she was 5.5'):
asind(5.5/142) = 2.21° => 2.21/0.4 = 5.5
So between 5 and 6 "scanlines" going from left to right over her.
Assuming she was 2' wide, that's 0.8 degrees, which would be 2 to 3 pixels in breadth according to your spreadsheet.
That's between 10 and 18 pixels (voxels?) that stand out clearly from the flat road around them, excluding the bike.
If you want to get an idea of how LIDAR data looks, Velodyne has free samples and a viewer for lower-resolution models.
It's pretty hard to identify obstacles far off, but you will still see there is something there. It's especially easy to identify obstacles that are vertical.
As she got closer, she would eventually show up clearly in the LIDAR data. But since the car never slowed down or moved left, it didn't notice her at all even at point blank range (or did see her but failed to do anything about it).
A buddy of mine has a lower-end LIDAR on a robot; I'm working with him on SLAM for it, and trying to get a similar hardware setup locally over the summer. (I have weird hobbies.)
Yeah, I'm willing to accept SOMETHING bad happened here, as I said I really just wanna dissuade people from the notion that LIDARs will see all obstacles all of the time. Not going to say the car acted perfectly and it was sensor failure, but definitely willing to say that the LIDAR probably COULD see her but not as well as people would assume.
Really, I think this was a case of the car overdriving its effective sensor range, same as what happens when you're on a dark road and a deer runs into the middle of the road: you simply can't react fast enough by the time you realize the danger is there. Computers are fast but they aren't perfect.
What I'd be particularly interested in was if the computer saw her and if it did the calculation - I can't stop safely in this distance, and decided to just hit the obstacle because it was "safer". At that point we start getting into ethics and this problem gets a lot murkier.
The person in that last picture is something like 5 feet from the car, which is far too close to be useful at 40 mph. At those speeds, what's important is what it sees at 150 and 200 feet and how fast it can refresh.
When your resolution is low enough to not see this, they stop calling it LIDAR and start calling it a rangefinder. If this was actually a fundamental limitation of the sensor, then that's the crappiest LIDAR unit I've ever heard of. When I first heard about the accident my initial reaction was "this is why vision-only systems are inadequate". The fact that it didn't detect the object at all before colliding with it, with LIDAR, radar, and vision on board, is inexcusable. This could set fully autonomous cars back enough that, never mind the cost of one life, the delay could kill tens of thousands, all because of a preventable accident.
I am glad to hear the NTSB is investigating this and not just the local police (who would lack the technical resources to make a useful judgement). Have there been any statements at all from them so far?
The NTSB tends to not make even initial statements until upwards of a several weeks in, and final statements as much as 6 months to a year later. They're nothing if not thorough.
Probably the only explanation was that LIDAR was off, either on purpose (for testing?) or because it broke and there was no safety mechanism to prevent the car from operating if/when LIDAR is off.
I don't understand why everyone in this thread is so focused on the sensors alone. The sensors might detect anything they like, but they're not going to stop the car on their own. The car has logic that tells it how to react to what its sensors perceive- that's the AI part. If the car's AI can't identify a woman pushing a bike as a woman pushing a bike, or it doesn't know that it has to stop before hitting her- well, then it won't.
There's so much confusion here about the capabilities of these systems. People think that a combination of better sensory perception + faster reaction times suffices to drive in the chaos of the real world. That's not so. Sensors and fast thinking won't get you anything if you can't think right. You have to be able to know what the things are that your sensors detect, and how to react to them.
It's perfectly possible that the Uber' car's LIDAR detected the lady crossing the road- but the AI just didn't know what to do about her and simply did nothing.
This is exactly it. I see people mentioning seeing the victim at the last second, but these vehicles are supposed to be better. They scan in non-visible spectrums with LIDAR. Lack of a safety vest, lack of headlights, none of it is supposed to matter... or at least it shouldn't completely compromise the vehicle's systems. Cameras may not work as well, but an obstacle directly in the path should still be detected. Especially an obstacle that would reflect LIDAR and give off a very obvious infrared signature.
This video also shows another point I made recently in a conversation. People need stimulus to keep them alert and focused. I don't think it's at all reasonable to expect someone to sit idly with almost no interaction or responsibility and expect them to stay alert. The human brain doesn't function that way.
I watched the video a bunch of times and I'm not 100% sure how the vehicle could have reacted, in the upper maximum of time where she would have been visible to LIDAR (and maybe radar), to make a significant difference. Braking, it could have slowed maybe 10 km/h at most (from the 40 mph, i.e. ~64 km/h, speed limit), but that's still an if, and I'm not sure it would have ensured the survivability of the jaywalker OR the safety of the people in the vehicle at that speed.
The other option is swerving which might have been a possible solution here as well, but that would also have been highly dangerous for the people in the car as well at those speeds, within that timeframe, possibly causing >1 fatality or serious injury.
Regardless I'm very much speculating here regarding reaction times based on watching a low quality video, I'm really looking forward to expert analysis here rather than speculation on the capabilities of LIDAR/Radar + computation speed at 60km/hr... even considering a human driver would have 100% hit this person.
If the vehicle can't detect objects in the non visible spectrum (even just IR) at least as far away as a human can in the visible spectrum then that is a showstopper right there for the technology. Additionally if it can't then it shouldn't be traveling at a speed where it can't react in time.
My question is: given that it could detect the jaywalking object (regardless of visible light) only within the very, very short timeframe at those speeds, on what looks like a highway, is it rational to expect even future ideal machines (say, 5 years from now) to have been able to react in that situation?
It's not as obvious as people here are pretending it is.
Yet even then we now have a previously unknown model to test our machines on to prevent it from happening again. Given a human would 99%+ of the time not have seen this woman in time, then I believe we'll at a very minimum be better off as a society as a result of this... as wrong as that sounds, because it's now a high-priority dataset, not just a sad story in the local news (if even) we'll forget about tomorrow as it would be with a human driver.
> Given a human would 99%+ of the time not have seen this woman in time
I'm far from convinced that a human would not have seen this woman in time.
See all the comments in this thread about how the dashcam footage is much worse than reality, and even one person who drives that road regularly saying it's not that bad visibility-wise.
I think if I had seen that lady slowly walking her bike onto the road in my adjacent lane, I would have slowed down for sure. And from seeing my own nighttime dashcam videos, I think I would have seen her. She's the only object nearby, on a fairly straight road with no adverse weather conditions. I would have seen someone pushing a bike onto the next lane.
Maybe I would have hit her still, but I would have slowed down for sure.
So the speed limit on Mill Avenue, where the crash took place, is 35 mph. The Uber was traveling at 40 mph. The reason Mill’s speed limit is 35 instead of 45 (like most of Arizona’s major roads) is because it’s got much heavier pedestrian traffic than typical.
If an autonomous vehicle cannot detect pedestrians crossing a slower-than-typical road with enough time to at least not kill them, it shouldn’t be on the road. If that means Uber can’t drive autonomously at night, too bad for them.
To be fair, the law currently is very permissive to drivers, and a human may not have been deemed at fault. Despite going 40 in a 35 zone, when (due to reduced visibility) they actually should have been going 25. You are supposed to go only as fast as you can stop, given current visibility. Regardless of the speed limit.
Usually it is the law that you are only allowed to drive as fast as the conditions allow (e.g. CA's "Basic Speed Law".) That includes being able to see obstacles in your path early enough to be able to do something about them.
If you can't see far enough to be able to avoid something in the road, you're simply going too fast. That should apply to machines, but it already applies to humans.
What are you saying exactly, that a human driver would be at fault here given the video evidence?
I believe it's entirely possible for a robot to solve this problem with proper Radar and maybe LiDAR going forward. But I would be extremely skeptical about anyone claiming a human would have been at fault...
If the statement is that "a human would not have been able to stop in time either", then yes, by the letter of the law, a human would have been at fault.
If you can't see where you're going, you need to slow down. Does that seem so unreasonable?
> So, should an accident occur between a jaywalker and a car---if shown that the driver could have/should have seen the person and could have/should have been able to avoid, then without question the driver can be held responsible.
> To a large degree, it comes down to the driver's ability to avoid the accident. If a jaywalker steps right out into the car's path and is instantly hit, the driver will usually not be held responsible. It will be determined that the pedestrian caused the accident.
> However, if the jaywalker strolls into the street a few hundred yards ahead of the car and the driver does not slow down or swerve, the driver could be held responsible. Even though jaywalking is illegal, drivers are expected to take reasonable action to avoid crashes when they can, even if they feel they have the right of way.
> Negligence also comes into play if the driver should have seen the pedestrian but did not. For instance, a driver who is texting and driving may look away from the road and not see someone step into the street, hitting them with the car. The driver could argue that the road was clear and that the person shouldn't have been there. While that may be true, he or she could still face charges.
I never said "every possible illegal or unexpected obstruction". What I said was "if you drive so fast you can't avoid something blocking your lane by the time you see it, you're going too fast." Your quote, in fact, confirms what I said:
> However, if the jaywalker strolls into the street a few hundred yards ahead of the car and the driver does not slow down or swerve, the driver could be held responsible.
This was clearly not a case of "someone stepping out right in front of the car", since they were more than halfway across the rightmost lane, walking a bicycle.
Edit: This rule is merely a variation on the universally accepted one that says that if you rear-end someone in another vehicle, you're almost always at fault (unless it can be proven that they acted in such a way that the collision was unavoidable). The logic being that if you could not avoid a collision, you were going too fast for the distance you had to the vehicles in front of you.
Are you suggesting that drivers should be less liable for running into stationary objects than they are for running into other vehicles? That seems absurd to me.
I think your response time calculations are wildly wrong.
The LIDAR sensor[1] being used here can pick up targets up to 120m away. I'm not sure about the RADAR or vision systems also in place, but even LIDAR alone should have been able to easily pick out the pedestrian with plenty of time to come to a full stop.
This is clearly poorly designed autonomous driving software, not a sensory deficiency.
I watched it again and you might be right: there was likely time for something to happen, given the available reaction speed going 40 mph on a highway. I'm curious what that something translates to and what effect it could have had on this situation (i.e., swerving, slowing by x mph, etc.).
Because you didn't do the math yourself, 40 miles per hour is just shy of 18 meters per second. So 120 meters is almost 6.7 seconds at that constant speed (more if you're slowing down). Start of video to collision is less than 5 seconds.
That should give quite a bit of time to slow down, then, or at least move slightly out of the way, even granting that the person may not have been detected for the full 5+ s of the video, and factoring in computation speed and mechanical response times. Although again this is speculation, as I'm not intimately familiar with how the object detection works, what a bike with plastic bags may have looked like crossing the road, or what options were available at that speed in that environment. Thanks.
The braking distance from 80 mph for a modern car is 320 feet (just under 100 m), so the car should have been able to come to a complete stop if the software had been using the LIDAR correctly.
This isn’t a highway. Mill Avenue is one of Tempe’s most pedestrian-trafficked major roads. And it has a slow 35mph speed limit, and is well lit (much better lit than the dash cam shows), because so many pedestrians cross it.
40mph is actually right at the inflection point where survivability changes dramatically. At 40mph it's about a 50/50 chance of a car crash with a pedestrian causing a fatality. At 35mph the chances of a fatality go down to 1 in 3, at 30mph it goes down to 1 in 5. At a little under 20mph it's 1 in 10.
Okay, I've heard that 40 mph plenty of times here when AI cars come up, so is it possible the car could have slowed down by 10 mph? I only suggested slowing by 10 km/h (i.e. to roughly 35 mph) as an ideal maximum given better technology, not as a baseline for today...
I've also read that the last speed sign was 45mph before this accident. I used 40mph as it was between the 35mph sign that was coming up before the hit and the 45mph one before it.
Braking time is about 4 seconds from 40 mph (average braking acceleration is a bit under half a gee for ordinary cars), which means every second of braking is a roughly 10 mph reduction in speed.
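The arithmetic behind that, as a sketch (0.45 g is the "a bit under half a gee" figure above; real-world values vary with tires and surface):

    G = 9.81             # m/s^2
    MPH_TO_MS = 0.44704  # 1 mph in m/s

    def braking_time_s(speed_mph, decel_g=0.45):
        return (speed_mph * MPH_TO_MS) / (decel_g * G)

    def braking_distance_m(speed_mph, decel_g=0.45):
        v = speed_mph * MPH_TO_MS
        return v * v / (2 * decel_g * G)

    # From 40 mph: ~4.1 s to stop over ~36 m (~119 ft),
    # shedding roughly 10 mph per second of braking.
    print(braking_time_s(40), braking_distance_m(40))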
> I watched the video a bunch of times and I don't see how the vehicle could have slowed down within the 700ms max where she would have been visible to a LIDAR, to make much of a difference.
There's also a steering wheel. (Why is everybody here missing this?!) I could totally see it having moved out of the way.
In order for steering to be a collision avoidance strategy, the system would have to risk the life of the driver. It may be fairly easy to recognise an obstruction on the road, but determining that it's human is much harder. If a plastic bag blowing across the road could kill the driver, or other drivers nearby, that would be a very unpopular solution.
They hit the person with the right side of their car, so even a small move left could have avoided this accident. Given that it was a divided highway with an empty lane to their left, moving half a lane over would have been a reasonable response.
Of course, the Uber vehicle did not take any action at all. It doesn't seem to have ever realized that there was a solid object in front of it. Without that, collision avoidance is impossible.
Indeed! To LIDAR, she was basically standing in the lane, with a bulky bicycle. To visible light, including the driver, who was apparently half asleep or watching the dashboard, she was in shadow until just before the collision.
So yes, LIDAR should have caught this. Easily. So something was clearly misconfigured. And even if the driver had been carefully watching the road, he probably wouldn't have seen her in time.
But I wonder, is there a LIDAR view on the dashboard?
Right, so assuming LIDAR caught her: I'd imagine the algorithm presumed that she wouldn't cross the center line till she actually did cross it. I don't know what to think the algorithm would do thereafter.
Presumably the algorithm had a pretty good idea of where the lanes were, and if the LIDAR detected a non moving object in an adjacent lane and decided it was fine to ignore it because it presumed it was not going to start moving, that's a pretty broken algorithm.
I don't have the link handy, but I was reading a webpage yesterday (related but not about this crash) which showed Google's self driving car's "view" of a road scene - it's clearly painted different color boxes and identified pedestrians, bicycles, other cars - along with "fences" where it had determined it'd need to slow or stop based on all those objects.
Either Uber's gear is _way_ less sophisticated (to the point of being too dangerous to use in public), it was faulty (but being used anyway, either because its self-test is also faulty, or because the driver/company ignored fault warnings) - or _perhaps_ Google's marketing material is faked and _everybody's_ self-driving tech is inadequate?
> Either Uber's gear is _way_ less sophisticated (to the point of being too dangerous to use in public)
I think this is a very good possibility considering that autonomous vehicles is the goal of the company and they're racing to get to that point before they run out of investment money. They have a lot of incentive to take short cuts or outright lie about their progress.
Looks like a Velodyne 64-based laser. It is virtually impossible for those not to be able to see the bicycle well in advance. Uber had a serious issue here. Something like:
1. System was off
2. Point clouds were not being registered correctly (at all!)
3. It was actually in manual mode -- safety driver didn't realize or didn't react fast enough
4. Planning module failed
5. Worst outcome in my opinion: point cloud registered correctly, obstacle map generated correctly, system was on, planner spit out a path, but the path took them through the bicycle
The LIDAR data looks pretty noisy, especially for distant objects. Couldn't they have filtered out the pedestrian, thinking she was a bush or something like that?
I get your concern, but I would probably reserve the word inadequate. If this is the only situation in which you have to worry about a self-driving car hitting and killing you, and it's the only known data point at this time, some may consider that much more than adequate.
A website that "does something weird" when you use a single quote in your password... That _could_ be "the only situation you have to worry about". It is _way_ more often a sign of at least the whole category of SQLi bugs, and likely indicative that the devs are not aware of _any_ of the other categories of errors from the OWASP top 10 lists, and you should soon expect to find XSS, CSRF, insecure deserialisation, and pretty much every other common web security error.
If you had to bet on it - would you bet this incident is more likely to be indicative of a "person pushing a bicycle in the dark" bug, or that there's a whole category of "person with an object is not reliably recognised as a person" or "two recognised objects (bicycle and person) not in an expected place or moving in an expected fashion for either of them - gets ignored" bug?
And how much do you want to bet it's all being categorized by machine learning, so the people who built it can't even tell which kind of bug it is, or how it got it wrong, so they'll just add a few hundred bits of video of "people pushing bikes" data to the training set and a dozen or so of them to the testing set and say "we've fixed it!"
If this is the only data point then uber self driving cars are about 50 times more dangerous than average human drivers (see numbers quoted repeatedly elsewhere; uber has driven about 2 megamiles; human average is 100 megamiles between fatalities)
If that's your idea of adequate, you'd be safer just vowing to get drunk every time you drive from now on, since a modest BAC increases accident rates, but not by a factor of FIFTY!
I really don't bundle Tesla in with Waymo, Lyft, Toyota, Uber that are trying to build ground-up self driving cars. Are Tesla actively testing self-driving cars on public roads yet? Are their included sensors even up to the task? I didn't think they even have LiDAR?
True, but this seems to be a simple case of reacting to a person who steps in front of the car. Automatic braking technology exists on even cars that aren't considered "self-driving".
It’s that last possibility that’s horrifying above all others. The backlash either way is going to be terrible, but if these cars are just not up to the task at all, and have driven millions of miles on public roads... people will lose their minds. Self-driving tech will be banned for a very long time, public trust will go with it, and I can’t imagine when it would return.
This is going to sound bad, but I hope this is just Uber’s usual criminal incompetence and dishonesty, and not a broader problem with the technology. Of the possible outcomes, that would be the least awful. If it’s just Uber moving fast and killing someone, they’re done (no loss there), but the underlying technology has a future in our lifetimes. If not...
Waymo actively test edge cases like this both in their test environments in the desert and via simulation, they have teams dedicated to coming up with weird edge situations like this (pushed bicycle) where the system does not respond appropriately so that it can be improved. All of these situations are kept and built up into a suite of regression tests. https://www.theatlantic.com/technology/archive/2017/08/insid...
Not "center line", because this is a divided highway. So she had to cross two lanes from the median, in order to step in front of the Uber. "Human-sized object in the roadway" should have been enough to trigger braking, even if the trajectory wasn't clear.
Anything that is tracking an object moving on the road should be looking at the velocity of the scanned object as well as keeping track of some sort of deviation from normal. I would think the car should know it's on a two-lane one-way road, realize an object was moving in one lane with some velocity towards the path of the vehicle, and flag that perhaps something was not normal.
From the reports of cars running red lights and then this I would imagine they have an extremely high level of "risk" (what it takes for the car to take actions in order to avoid something/stop) that is acceptable.
What would be far worse than a hardware or sensor failure would be to learn that Uber is instead teaching its cabs to fly through the streets with abandon. Instead of having cars that drive like a nice, thoughtful citizen, we'll have a bunch of vehicles zooming through the streets like a pissed-off cabby in Russia.
> who was apparently half asleep or watching the dashboard
It is possible that a screen provided a clearer (somehow enhanced) view of the road, so I'm reserving judgment for now.
Of course using that screen could be a grave error if the screen relied on sensors that missed the victim. But if it appeared to be better than looking out of the windshield then that points to a process problem and not necessarily a safety driver inattention one.
He startles just before the collision, so anything he was watching on the dashboard arguably showed no more than the video that was released. But maybe the video camera had poor sensitivity at low light, and the driver could have seen her sooner, looking out of the windshield.
I'm not so sure that's just before the collision. The driver claimed that he didn't notice the pedestrian until he heard/felt the collision and it's not like the car hit a large object. I'm not convinced that he startled before the car hit the pedestrian.
Probably the LIDAR did catch it. Probably the algorithm (neural network) that takes in LIDAR data and outputs whether there is an object in front failed, or gave a really low probability that fell below the specified threshold. This happens all the time with deep neural networks.
In any case technology is to blame and self driving should be banned until these issues are resolved.
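If that guess is right, the failure mode can be as mundane as a fixed confidence gate. A minimal sketch of the pattern (every name and number here is invented for illustration, not Uber's actual pipeline):

    CONFIDENCE_THRESHOLD = 0.8  # hypothetical gate

    def should_brake(detections):
        """detections: (label, confidence) pairs from a perception stack."""
        return any(conf >= CONFIDENCE_THRESHOLD for _, conf in detections)

    # A pedestrian the network is only 60% sure about never reaches
    # the planner, and the car sails on.
    print(should_brake([("pedestrian", 0.6), ("unknown", 0.3)]))  # False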
How would this have helped at 40MPH? The user would have milliseconds to react and hit the brakes. The point is that the car is self-driving. If a user has to watch a video display and intervene for every edge case it's more dangerous than just driving yourself.
From the released video, if LIDAR was including the entire roadway (all three lanes) there would apparently have been at least four seconds warning.
In production, having a LIDAR display would be pointless. But for testing, it might be useful. But maybe better would be to tell drivers to keep their eyes on the road.
> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.
Seems more likely that it's a software problem. Especially given the rest of Uber's behavior, I wouldn't be surprised if they're aggressively shipping incomplete/buggy software in the name of catching up to more careful competitors like Waymo.
The video here is misleading. A human driver has a much wider field of view and better low-light vision than this video suggests. That's not to say that this would have been prevented by an attentive driver. But it's also clear that the safety driver was not paying attention, so it's even harder to know.
Like parent said, it's low-light in the visible spectrum, but I'd totally expect these vehicles to scatter tons of light in non-visible spectrums, making these conditions well-lit in those spectrums. Like a bat using echolocation.
I think the point GP was making is that an attentive human driver may have been able to see the pedestrian much earlier than when she becomes visible on the video.
This. The human visual system adapts to darkness. Consider that the victim, who obviously was crossing the street as part of her lifestyle, had likely done this many times before, and of all the vehicles that could have hit her, the one that did happened to be one of Uber's self-driving vehicles with a clearly inattentive driver behind the wheel. A driver paying attention to the road would have at least hit the brakes well before impact.
Intermittent street lights reduce that adaptation, though, and add glare - the human visual range for scotopic ('dark-adapted') and mesopic (twilight) vision is about 4 orders of magnitude of luminance (cd/m²) that the retina can perceive simultaneously, from 0 to saturation, without adaptation (dilating pupil). Dark-to-light adaptation is very rapid and happens in fractions of a second; light-to-dark adaptation happens over minutes.
The eye will adapt to a mean level of light in the larger FOV (not fovea only) - that is why instrument clusters on cars need to be low-level lit, to not disturb this adaptation. Exterior light sources like headlights and street lights further influence adaptation and veiling glare can lead to light sources overshadowing smaller luminance signals and pushing them out of the range that the eye is adapted for.
Don't forget compression artifacts. A dark object against a dark background in a highly compressed video is going to get compressed into just looking like the background.
I downloaded the video directly and it looks like it's 133 kbps for the video data at about 360p, which is just abysmally low. So it's not surprising that it's difficult to pick out detail in such a contrasty scene given the degree of compression and the resolution.
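For scale, a rough bits-per-pixel figure (the frame size and rate are guesses, since the clip's exact parameters aren't published):

    bitrate_bps = 133_000               # measured from the downloaded clip
    pixels_per_second = 640 * 360 * 30  # assumed 360p at 30 fps

    # ~0.02 bits per pixel: almost no budget for detail in flat dark
    # regions, which the encoder smooths into uniform black.
    print(bitrate_bps / pixels_per_second)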
I don't think modern digital sensors actually underperform human vision; at least, it's not obvious. They have made huge improvements in the last decade.
Also, when a digital camera records an image, a gamma curve is applied to it before display, which makes up for a bias against the darker portions which the digital equipment does not inherently have.
Considering the streetlights, I cannot imagine any excuse. This video will sadly give them the benefit of public doubt but anyone familiar with lighting digital video will be unconvinced that the video feed was the culprit.
I take a photo with my iPhone in night conditions and the image I see on the screen comes out way darker than I saw with my eyes. That has to be corrected for, no?
No. The term we need to introduce here is "dynamic range". It is pointless to say "a Sony A7S can vastly outperform the eye" when it could have been set for a low-light exposure. The human eye has an amazing dynamic range - I don't know the exact number today, but the last time I checked was about 3 years ago, and the cameras at the time (D800) were not even close to human dynamic range.
That's because you're comparing a single exposure from a digital camera. You can have dynamic range far in excess of the human eye with HDR techniques, by combining different exposures and/or by exposing different pixels on the same image differently.
>> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.
A human is not an "obstruction", dammit. I mean, literally - it's not like hitting a wall. The driver's life will never be in danger and the car may not even be significantly damaged. There's a very special reason why we want self-driving cars to avoid humans, and it has nothing to do with the reason we want to avoid obstacles. And because this special reason is very, very special indeed, we need much better guarantees that self-driving vehicle AI is extremely good at avoiding collisions with humans than we do for anything else.
In pitch darkness, IR cameras can only see more than a visual light camera if you somehow had IR headlights that were more powerful than the visual headlights (they probably don't; that would blind other AVs just like high beams blind other drivers). It doesn't grant you the ability to see in the dark with infinite range. Lidar can sense shapes with more range, but at the cost of dramatically worse resolution and latency. It's conceivable the radar/lidar sensor caught the person in the left lane with a bike and decided that was a reasonable place for a person with a bike to be, then lost track of the person while she walked into the right lane (where the visual/IR system could not see her yet).
It's also entirely possible there was an egregious bug. This video doesn't really tell us much.
A garden-variety uncooled LWIR camera from FLIR can resolve a temperature difference of 0.05 degrees. As long as the pedestrian isn’t wearing a thermal blanket from head to toe, he/she can be seen.
We see a nIR video of the driver. Those weird eyes everyone seems to have are the tell-tale sign of nIR as it interacts with the retina. So, we know that nIR sensors are being used in the interior, at least.
That said, Arizona in the summer is going to play havoc with LWIR and thermography in terms of false positives and negatives. The sensor suite should probably use LWIR at night for this reason and switch it off in the day. But given Uber's history, the lack of LWIR reeks of cost-cutting.
I'm not an expert in IR or computer vision by any means, so take this with a grain of salt.
Air has such a low thermal mass that it doesn't measurably affect most IR sensors. Hot pavement could be a potential issue, but that shouldn't have a major effect on forward-facing sensors.
Besides, it's only March. Even Arizona isn't that hot yet.
> In pitch darkness, IR cameras can only see more than a visual light camera if you somehow had IR headlights
Have you ever used a thermal IR camera? What you're saying only applies to cheap consumer-grade "IR" CCDs (the ones you find in home security), not the FLIR/military-grade stuff.
The lady was walking the bike at the edge of a light cone.
The inner cam was actually filming in IR.
Uber cars have IR cameras, LIDAR, and also multiple radar sensors.
My guess is that the algorithms have never met a person crossing the street with a bicycle during night time so they just ignored it or considered it to be a glitch.
You can take two approaches to labeling driving situations: either you give positive tags to the situations where the car needs to react, or you give positive tags to the normal situations in which the car does nothing.
Depending on which of the two approaches you take, you can end up with a car that kills pedestrians who appear in weird circumstances. I'd also bet a pedestrian who ducks down in the middle of a lane would 100% be killed by a car. Or two people having sex while standing in the middle of a lane.
In the other case you get cars braking for invisible obstacles that appear due to aberrations from the sensors (which are far from perfect). A toy illustration of the trade-off follows.
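To make the two polarities concrete, a toy sketch (made-up scores and thresholds, obviously nothing like Uber's actual pipeline):

    # Positive label = "must react": a pedestrian who looks unlike any
    # training example scores low, and the car sails on.
    def react_on_positive(score, threshold=0.9):
        return "BRAKE" if score >= threshold else "CONTINUE"

    # Positive label = "scene is normal": a sensor aberration scores low
    # on "normal", and the car brakes for a ghost.
    def react_unless_normal(normal_score, threshold=0.9):
        return "CONTINUE" if normal_score >= threshold else "BRAKE"

    print(react_on_positive(0.4))    # CONTINUE -> hits the unusual pedestrian
    print(react_unless_normal(0.4))  # BRAKE    -> phantom stop for a glitch

Same detector, same uncertainty; the labeling choice just picks which failure mode you buy.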
Which is exactly the reason this is nowhere near production: "kinda works, sort of, in trivial cases" is not quite the SDV promise I keep hearing. "My car will also go reasonably straight when I let go of the wheel on a wide, straight stretch of road, and won't even run over anyone if there is nobody on the road. Therefore, autonomous driving!"
Arizona, and Tempe especially, has lots of darker roads. The LIDAR/computer-vision tuning at night doesn't seem right. Maybe it was adjusting for the changing street-light brightness, but yes, this is one such situation: the vehicle had just come from the Mill Avenue bridge, which has festive lights strung along it [1].
This is a situation, though, where the LIDAR should clearly have done better than it did. Maybe it was in a strange state after having seen all the lights and then complete darkness. It looks like they were headed north on Mill Ave over the bridge [2], just past the 202, where it is indeed very dark at night - probably the spot right here [3], which matches the building in the background (the other way is south and is busy/urban by ASU). They had just crossed a lit-up bridge, then a dark underpass, then entered this area [3]. The area where it happened [3] does have bike lanes, sidewalks, and a crossing sidewalk close by [4], but it is by a turn-out, so not a legal crossing; there are, however, lots of trails through there.
This video is worse than expected by far and may be forever harmful to the Uber brand in terms of software.
In AZ I usually see the self-driving cars out in the day, maybe there is lots of night tuning/work to do yet.
I've been staring at those myself for a couple of hours. My best guess is the crosswalk was moved, but the paths were left. There's a lake nearby and tons of parking under the overpasses, with trails and picnic areas.
Also, the woman was right under a working street lamp. And as was stated in an earlier article the car continued on at 38 mph after the accident. The bike ended up 50 yards down the street.
EDIT: "That spot is east of the second, western-side Mill Avenue bridge that is restricted to southbound traffic, and east of the Marquee Theatre and a parking lot for the Tempe Town Lake. It can be a popular area for pedestrians, especially concertgoers, joggers, and lake visitors. Mid-street crossing is common there, and a walkway in the median between the two one-way roads across the two bridges probably encourages the practice."
"Pedestrians can cross a street without using a crosswalk in many instances without risking a jaywalking ticket, but Arizona law requires pedestrians not using a crosswalk to yield to traffic in the road."
There's a saying "a sign is not a wall." You can make dangerous things illegal all you want (such as "not overdriving your headlights," hint hint), but it won't stop people doing that. "But that person was not supposed to be there" is a rather weak excuse for manslaughter.
Lots of parks around and trails that do go across the road, it is an odd area.
If you zoom out on Google Maps you will see some of the trails. Note the sidewalk/pathway: the area is marked no-pedestrian but has paths built for them, so it sends mixed signals.
The pedestrian is a lot more obvious to the eye than I suspected, and it's actually quite shocking. They are correct to stop all road tests until they have investigated why they are missing this.
It strikes me as extremely disingenuous if this is all Uber gave to the police. They should be making as much raw data as possible available. At the very least it'd let other companies test their AIs against the scenario and see if they would catch sight of and be able to avoid the pedestrian, if not then this is one more data point to train them on so it doesn't happen again.
I hope so, but I'm not certain what sorts of legal precedents could be leveraged here. Uber, for instance, might try and avoid sharing non-visual recording data on the basis that it's proprietary information. IANAL but I'm very curious if companies can be compelled to share proprietary formats and tools for examining those formats or translating them into non-proprietary formats (which... is that even a thing legally speaking?) in a case like this.
For instance, if law enforcement had testimony and other warrant-backed evidence indicating that a user had stored some vital secret plan in a password field, what could the government compel a company to do? (Assume the disk it relies on is also encrypted, for extra fun.)
1. Hand over the physical disk
2. Hand over the disk image
3. Hand over the decrypted disk image
4. Hand over the unobfuscated (enc or hashed) string of interest from the decrypted disk image
5. Compel the company to decrypt the string if it was encrypted with a common algorithm (e.g. AES)
6. Compel the company to decrypt the string if it was encrypted in a proprietary manner (e.g. in-house custom encryption)
7. Compel the company to devote resources (how much?) to brute-force a one-way common hashed string (e.g. bcrypt)
8. Compel the company to discover a hash salt, assuming the company doesn't store it locally but may be able to procure it from the user to do the above.
9. 7 & 8 if the one-way hashing algorithm is proprietary (and weak) and the company raises objections that the process of breaking this string will reveal key components of how the algorithm works (e.g. the hash is just md5(string) XOR "IMMA SECRET_STRING")
10. 7 & 8 if the proprietary algorithm is not weak but the company raises objections over trade secrets for other reasons.
The legalities are beyond me, but the core principle seems pretty simple: if Uber isn't willing to cooperate fully with the NTSB to make autonomous cars safe drivers, then Uber doesn't get to make autonomous cars. Full stop.
I'm convinced, 99-100%, that the Automatic Emergency Braking (AEB) on my Tesla would have braked for that. The promise of these systems, as you also point out, is that they can see things humans can't. The "real" cameras on this car (not this dashcam footage), and the LIDAR, should be fine with it being near pitch black.
Yeah, releasing the sensor data beyond the human-visible spectrum would be way more informative about whether a better-designed AV would have dealt with this better.
I'm glad it was not me driving down that road that night, I don't think I could have prevented it.
I think you sell your driving skills very short here. Assuming you have normal eyesight, you would likely have a 10-fold higher dynamic range than the camera footage shown. The gap between streetlights would have been easily discernible with your eyes, unlike in the camera footage.
See, I wonder about that. In compromised conditions, (many) drivers will drive slower and be more cautious. Perhaps computers need to be given a sense of fallibility? A computer can sense low-light conditions and drive even more cautiously as a result.
It's pretty clear to me (from the second half of the video) the driver was looking down at her phone and glancing up at the road periodically. IMO if she had been focusing on the road, she would have at least started braking before hitting the pedestrian. Or perhaps actually stopped before that happened.
My more generous interpretation is that they were looking at the computer screen where the car shows its interpretation of the situation, people tend to lift their phone towards their face.
This is only a fig-leaf of a driver (for catching legal flak by sitting in the front left seat), not someone actually operating the vehicle at all. This part was inevitable, given the unbounded techno-optimism.
I agree, that's a textbook case for the non-visual-spectrum sensors. It's possible that the lidar DID catch it, but the avoidance logic decided to continue forward. For example, if it decided a collision was impossible to avoid, swerving might make things worse. It's also possible the logic thought the timing was such that the bike would pass after the car crossed where the bike was going, so slowing down would actually cause a collision.
The lady can be seen fairly clearly (even in this poor-quality video) at 0:03, and impact occurs at 0:04. That's 1 s, which at this speed means a distance of approx 17 m. If the guy was watching (sort of the point of him being there, really), he could have slowed the car significantly and probably even stopped it. These are test vehicles being treated like prod vehicles. They should probably not be on streets with pedestrians quite so soon.
You're massively underestimating human response time to visual cues and also the distance needed to stop a vehicle traveling at 35-40mph. It takes a full quarter of a second to respond to a visual stimulus on average, and more than that to also move your foot and depress a brake pedal. By that time the car was less than 50 feet from the pedestrian. At 40mph braking distance is about 80 feet in good conditions. There is absolutely no way a human driver could have avoided this accident assuming the same visual distance and dynamic range as the camera. Best case, the car may have slowed down a bit before impact.
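A quick sanity check of those numbers with textbook kinematics (generic constants, not any particular vehicle's data):

    MPH_TO_MS = 0.44704
    G, MU = 9.81, 0.7  # gravity (m/s^2); typical dry-asphalt friction

    def stopping_distance_m(speed_mph, reaction_s=1.0):
        # reaction_s bundles ~0.25 s of visual response plus moving the
        # foot and depressing the pedal.
        v = speed_mph * MPH_TO_MS
        return v * reaction_s + v**2 / (2 * MU * G)

    v = 40 * MPH_TO_MS
    print(round(v**2 / (2 * MU * G), 1))      # ~23 m (~76 ft) braking alone
    print(round(stopping_distance_m(40), 1))  # ~41 m (~135 ft) with reaction

So with roughly 17 m of warning, a human gets at best a partial slowdown, not a stop - which matches the parent's point.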
You absolutely don't need to brake in this situation: there are 5 lanes, so you release the gas to give the pedestrian more time to finish crossing, and if it's a bit short you move to the next lane over, aiming behind the pedestrian. That's what everybody was doing in France. Now that I am in Arizona, the drivers are just murderous towards pedestrians: they don't release the gas or move to aim behind them, like it's perfectly OK to kill someone. (Don't get me started on right-on-red and exiting from drive-throughs; I think I will get killed on a sidewalk here.)
Swerving is almost always the wrong thing to do from a safety perspective. This situation is identical to a deer in the road at night, a situation in which most traffic safety experts advise hitting the horn and brakes, but not swerving (note that human drivers hit over a million deer in the US every year, despite supposedly being alert and able to see better than cameras at night). You don't have time for a mirror check to see if there's a car next to you, the shoulder might not be safe, and swerving at speed is an excellent way to lose control of your vehicle entirely. There's also a bike perpendicular to the lane, so you would have to swerve way more than just enough to get around a person.
For a deer, swerving is the wrong thing to do, because a deer's life does not matter, so it's worth it to hit the deer.
With a person, though, you are seeking to protect everyone, so the tradeoff swaps in favor of swerving, because the person in the car next to you is far more likely to survive a collision.
A human would have had significantly longer than this video suggests. It's not that dark there at night (not as dark as the camera would lead you to believe). The swerving distance needed to avoid a death was well within possibility. Aside from that, the risk of death was further exacerbated by the vehicle not reacting to the impact.
Regarding how the LIDAR did not catch that, there are 4 possibilities I can think of:
1. A software bug failed to recognize the obstacles, or misclassified them, or it fell below some probability threshold.
2. The LIDAR didn't work at the time, and the car did not shut down.
3. The victim's clothing absorbs the LIDAR's wavelength almost completely, such that she appeared as a "black hole" and was ignored by the algorithm, since this occurs commonly. Unlikely, though, since the bike itself would surely have registered?
4. It's hard to see in the video, but is the car going up a slope? In that case, if the LIDAR didn't look up far enough, it could have failed to see the victim for optical reasons.
Another option (related to your #1) could be disagreement between the visual-field cameras and the LIDAR, which could result in lower confidence that the object was a pedestrian; see the toy sketch below.
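As a toy illustration of how disagreement washes out confidence - naive independent-evidence fusion, certainly not Uber's actual method:

    def fuse(p_camera, p_lidar, prior=0.5):
        # Naive-Bayes odds product over two "is this a pedestrian?" estimates.
        odds = lambda p: p / (1 - p)
        o = odds(p_camera) * odds(p_lidar) / odds(prior)
        return o / (1 + o)

    print(round(fuse(0.9, 0.9), 3))  # agreement    -> 0.988
    print(round(fuse(0.9, 0.1), 3))  # disagreement -> 0.5, i.e. "no idea"

If the planner only reacts above some fused threshold, a confident lidar return plus a blind camera can land you right back at "unsure, carry on".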
It seems like this perspective may come from the idea that processing camera input is a formality. But the best estimates of the practical computing power of the brain are based on its visual processing capacity, because we know that's a hard problem. CAPTCHAs all depend on humans' ability to process images semantically faster than a computer (spambot). While it probably isn't unsolvable, I don't think it's surprising that this is consistently a challenge.
> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.
It's a bit too early to make that conclusion. For all we know, the equipment was malfunctioning. Which I guess technically leads to your point, but we'll have to wait for the investigation to actually know what failed vs. what met expectations (I worry that expectations and tolerances, as set by the car companies, will be revealed to not be as comfortable as we might assume).
It was also a bit too early for the police to release a statement less than 24 hours after the incident saying that it appears that Uber/the car/observer was not at fault.
Jaywalkers are at fault in Arizona in the case of an accident, so it doesn't seem too early.
On the subject: this lady I used to know hit someone who ran out in front of her and started freaking out (thinking they were in some serious trouble) until the police told her "you're fine, they were jaywalking".
> It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.
This was taken by a video camera, which has a much lower range of detectable brightness than the human eye. The pitch-black spots in the video are almost certainly not pitch black if you were to look at them.
" ... since the pedestrian showed up in the field of view right before the collision"
Either the woman had just said the words "Beam me down, Scotty" and materialised there like the video footage implies - or she'd been in view for quite some time, at least enough time for a person pushing a bicycle to cross an entire lane. If Uber's tech is only capable of detecting her as she "showed up in the field of view right before the collision", their tech is not fit for purpose and they should be held 100% at fault here. (Not that doing so will help her family or friends, but it might help stop Uber and their competitors from doing it again...)
This is drummed into students during the motorcycle training syllabus here (Sydney, Australia): "Do not ride beyond your field of view. If you can't see beyond a curve, crest, fog, rainstorm, queue of traffic or whatever - make sure you're going slow enough that you can stop before you get to the end of where you can see."
I always explain it to friends starting out: "You need to assume that just around every corner there's a stationary shipping container that's fallen off a truck. If you can't stop in time once you see it - it's your fault for going too fast."
Many people don't seem aware that the reduced speeds at curves aren't because your car can't take the curve at that speed (most can) but because you can't tell if there's an obstruction from a sufficient distance.
A car should not ever be driven faster than conditions allow. If the driver cannot see (because of rain, snow, darkness, etc.), then they need to slow down. To do otherwise is to put people on the roads at severe risk of injury or death.
a) there are people on the roads inside those metal boxes on wheels, y'know.
b) "shouldn't be there" is not a bianco cheque for "run them over", at least in the civilized world
> It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.
Actually, a human driver would be expected to have less visual trouble in this case. People's eyes are far more adaptable to low-light conditions than a camera's video. If you've ever tried to take a picture at night using your phone, you've seen this effect.
> When I argue for automated driving (as a casual observer), I tell people about exactly this sort of stuff (a computer can look in 20 places at the same time, a human can't. a computer can see in the dark, a human can't).
Except that the computer did not do that in this case. This car also uses LIDAR and should have noticed the pedestrian long before the accident occurred.
> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.
Either the sensor equipment or the software was defective, otherwise the pedestrian would have been detected.
I'd like to see an Uber SDV drive on Waymo's test tracks (where they have the employees pop all the shit at it). And just see what it does. I'm guessing it will be ridiculous and nightmarish.
In addition to that, even if we were limited to the "last moment", there was about half a second to a second of time to react. Correct me if I'm wrong, but that should be enough for the car to at least try something.
Isn't the car supposed to brake to minimise the collision, if the swerving is too dangerous (and it wasn't in this case, as the road wasn't too busy)?
> It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.
I'm not sure; please look at this pic: https://imgur.com/a/VfBck - you can clearly see there are at least 10-15 meters between them right at the moment she pops up. Now I don't know the speed of the car, but I'd wager a human driver (if s/he was alert) would have attempted braking at that moment.
At 38 MPH, the car would cover that distance in 0.7 seconds. That is on the low end of human reaction times for braking, so an average person might not have time.
I don't work on self-driving cars or even vision, but I have heard that when you are on a highway you start by filtering out small stationary reflections, since they are almost certainly not cars (which is maybe why the Tesla didn't see the big red firetruck: it was stationary, so it was filtered out). It's not a big leap to imagine that Uber's LIDAR ignored the bike because it was not recognized as a person and was moving perpendicularly at a low speed, so it got pre-filtered like a puddle. We can only guess, and Uber will report whatever they want.
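Here's a sketch of the kind of pre-filter I mean - a common trick in highway radar processing, and pure speculation as far as Uber's stack goes. The nasty property is that someone crossing perpendicular to your path has near-zero radial speed, just like a stationary object:

    from dataclasses import dataclass

    EGO_SPEED = 17.0  # m/s, about 38 mph

    @dataclass
    class Return:
        range_m: float
        closing_ms: float  # radial (Doppler) speed toward the sensor

    def ground_speed(ret):
        # A stationary object closes at exactly ego speed.
        return EGO_SPEED - ret.closing_ms

    def keep(ret, min_speed=1.0):
        return abs(ground_speed(ret)) > min_speed

    # A pedestrian walking a bike across the lane closes at ~ego speed,
    # so radially she looks like a sign or a puddle.
    print(keep(Return(range_m=40.0, closing_ms=17.0)))  # False -> dropped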
How is it in America? Over here (North Europe), one may not use high beams when there is street lighting (even though in this case the lighting seemed not very good).
Speculating: IR interference might have jammed this vehicle's LIDAR for an instant as it sampled the vicinity of the hazard. Any power in the passband of the detector might have been sufficient to saturate it and make it less sensitive to the returning LIDAR signal. This could come from another LIDAR unit in the vicinity. Other possible jammers might include IR laser pointers, TV remotes, or IR camera illuminators.
> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.
Because the software is still critically flawed, of course...this only represents a present-day failing, not some sort of permanent obstacle for the future.
No. ITAR controlled, in the context of IR imaging, means that you can't export any thermal imaging systems above 9fps. You can use whatever you want inside the US, and 60fps systems are available for consumers today.
You can get 320x240 60hz for $2000 as a consumer[1]. On the one hand I wonder what kind of resolution the military grade version is, but I also wonder if the bandwidth/dollar is actually better. I imagine Raytheon basically just makes up a number when they're setting the price of a helicopter gun cam.
Maybe we just don’t have the tech yet to make it work. At the very least, Uber surely doesn’t. There’s no way the car didn’t see her, but it didn’t react, which means it failed to recognize what it was seeing.
I currently work full-time in the self-driving vehicle industry. I am part of a team that builds perception algorithms for autonomous navigation. I have been working exclusively with LiDAR systems for over 1.5 years.
Like a lot of folks here, my first question was: "How did the LiDAR not spot this?". I have been extremely interested in this and kept observing images and videos from Uber to understand what could be the issue.
To reliably sense a moving object is a challenging task. To understand/perceive that object (i.e., shape, size, classification, position estimate, etc.) is even more challenging. Take a look at this video (set the playback speed to 0.25): https://youtu.be/WCkkhlxYNwE?t=191
Observe the pedestrian on the sidewalk to the left. And keep a close eye on the laptop screen (held by the passenger on right) at the bottom right. Observe these two locations by moving back and forth +/- 3 seconds. You'll notice that the height of the pedestrian varies quite a bit.
This variation in pedestrian height and bounding box happens at different locations within the same video. For example, at 3:45 mark, the height of human on right wearing brown hoodie, keeps varying. At 2:04 mark, the bounding box estimate for pedestrian on right side appears to be unreliable. At 1:39 mark, the estimate for the blue (Chrysler?) car turning right jumps quite a bit.
This makes me believe that their perception software isn't as robust to handle the exact scenario in which the accident occurred in Tempe, AZ.
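For anyone curious what even crude robustness looks like, here's a minimal smoothing/gating sketch for a jittery track-height estimate like the ones in that video (illustrative only; production stacks track full bounding boxes with Kalman filters):

    def smooth_heights(raw, alpha=0.3, max_jump=0.5):
        est, out = raw[0], []
        for h in raw:
            if abs(h - est) > max_jump:  # implausible jump: distrust it
                h = est
            est = alpha * h + (1 - alpha) * est  # exponential smoothing
            out.append(round(est, 2))
        return out

    # Heights (m) bouncing around the way the video's boxes do:
    print(smooth_heights([1.7, 1.7, 0.9, 1.8, 2.6, 1.7]))
    # -> [1.7, 1.7, 1.7, 1.73, 1.73, 1.72]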
I think we'll know more technical details in the upcoming days/weeks. These are merely my observations.
> If uber's software wasn't robust, why "test in production" when production could kill people?
Because it's cheap. And Arizona lawmakers apparently don't do their job of protecting their citizens against a reckless company doing the classic "privatize profits, socialize losses" move - the "profits" being the improvements to their so-called self-driving car technology, and the "losses" being the random people endangered and killed while they alpha-test and debug that technology in this nice testbed we call a "city", which conveniently comes complete with irrationally acting humans whom you don't even have to pay for serving as actors in your life-threatening test scenarios.
Disclaimer: I am playing Devil's Advocate and I don't necessarily subscribe to the following argument, but:
Surely it's a question of balancing against the long term benefit from widely adopted autonomous driving?
If self-driving cars in their current state are at least close to as safe as human drivers, then you could argue that a small short-term increase in the casualty rate to help the development rate is a reasonable cost. The earlier that proper autonomous driving is widely adopted, the better for overall safety.
More realistically, if we think that current autonomous driving prototypes are approximately as safe as the average human, then it's definitely worthwhile - same casualty rate as current drivers (i.e. no cost), with the promise of a much reduced rate in the future.
Surely "zero accidents" isn't the threshold here (although it should be the goal)? Surely "improvement on current level of safety" is the threshold?
You can make the argument with the long-term benefits. But you cannot make it without proper statistically sound evidence about the CURRENT safety of the system that you intend to test, for the simple reason that the other traffic participants you potentially endanger are not asked if they accept any additional risk that you intend to expose them to. So you really need to be very close to the risk that they're exposed to right now anyway, which is approximately one fatal accident every 80 million miles driven by humans, under ANY AND ALL environmental conditions that people are driving under. That number is statistically sound, and you need to put another number on the other side of the equation that is equally sound and on a similar level. This is currently impossible to do, for the simple fact that no self-driving car manufacturer is even close to having multiple hundreds of millions of miles traveled in self-driving mode in conditions that are close enough to real roads in real cities with real people. Purely digital simulations don't count. What can potentially count in my eyes is real miles with real cars in "stage" environments, such as a copy of a small city, with other traffic participants that deliberately subject the car to difficult situations, erratic actions, et cetera, of which all of them must be okay with their exposure to potentially high-risk situations.
Of course that is absurdly expensive. But it's not impossible, and it's the only acceptable way of developing this high-potential but also highly dangerous technology up to a safety level at which you can legitimately make the argument that you are NOT exposing the public to any kind of unacceptable additional risk when you take the super-convenient and cheap route of using the public infrastructure for your testing. If you can't deal with these costs, just get the fuck out of this market. I'm also incapable of entering the pharmaceuticals development market, because even if I knew how to mix a promising new drug, I would not have the financial resources to pay for the extensive animal and clinical testing procedures necessary to get this drug safe enough for selling it to real humans. Or can I also just make the argument of "hey, it's for the good of humanity, it'll save lives in the long run and I gave it to my guinea pig which didn't die immediately, so statistically it's totally safe!" when I am caught mixing the drug into the dishes of random guests of a restaurant?
It's an n of 1, but we're nowhere close to 'human driver' levels of safe.
Humans get 1 death per 100 million miles.
Waymo/Uber/Cruise have <10 million miles between them. So currently they're 10 times more deadly. While you obviously can't extrapolate like that, it's still damning.
If you consider just Uber, they have somewhere between 2 and 3 million miles, suggesting a 40x more deadly rate. I think it's fair to consider them separately as my intuition is that the other systems are much better, but this may be terribly misguided.
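To put error bars on that n-of-1 (exact Poisson interval; the ~3 million mile figure is the guess above):

    from scipy.stats import chi2

    def poisson_rate_ci(events, exposure, conf=0.95):
        a = 1 - conf
        lo = chi2.ppf(a / 2, 2 * events) / 2 / exposure
        hi = chi2.ppf(1 - a / 2, 2 * events + 2) / 2 / exposure
        return lo, hi

    lo, hi = poisson_rate_ci(1, 3e6)   # one death in ~3M Uber AV miles
    print(lo * 1e8, hi * 1e8)          # ~0.8 to ~190 deaths per 100M miles

The interval is wide enough to still contain the human rate of ~1 per 100M miles, which is exactly why the extrapolation is shaky even when the point estimate looks damning.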
This is a huge deal.
I honestly never thought we'd see such an abject failure of such systems on such an easy task. I knew there would be edge cases and growing pains, but 'pedestrian crossing the empty road ahead' should be the very first thing these systems are capable of identifying. The bare minimum.
This crash is going to result in regulation, and that's going to slow development, but it's still going to be justified.
I have the same questions as well. But my best guess is that they probably have permission to drive at non-highway speeds at late nights/early mornings (which is when this accident occurred, at 10 PM).
>The Volvo was travelling at 38 mph, a speed from which it should have been easily able to stop in no more than 60-70 feet. At least it should have been able to steer around Herzberg to the left without hitting her.
As far as why test this, I'm guessing peer pressure(?). Waymo is way ahead in this race and Uber probably doesn't wanna feel left out, maybe?
Once again, all of these are speculations. Let's see what NTSB says in the near future.
I live here and they drive around at all times of the day and don't seem to have any limitations. They've been extremely prevalent and increasing in frequency over the past year. In fact, it's unusual _not_ to see them on my morning commute.
> At least it should have been able to steer around Herzberg to the left without hitting her.
Does the car have immediate 360-degree perception? A human would have to look in one or two rear-view mirrors before steering around a bike, or possibly put himself and others in an even worse situation.
If you're about to hit a pedestrian and your only option is to swerve, then you swerve. What could you possibly see in the rear view mirror that would change your reaction from "I'm gonna try to swerve around that pedestrian" to "I'm gonna run that pedestrian over"? Another car? Then you're going to take your chance and will turn in front of that car! The chance that people will survive the resulting crash are way higher than the survival rate of a pedestrian being hit at highway speeds.
You should always be aware when driving of where your "exits" are. This is not hard to do. Especially at 38 MPH, you can be extremely confident there are no bikes to your left if you have not passed any in the past couple seconds. And, lanes are generally large enough in the US that you can swerve partway into one even if there are cars there.
If everybody is driving at the same speed in all lanes, which is not unlikely on that kind of road, I am generally not confident that I can swerve into another lane _and slam the brakes_ without being hit. If I am hit, the resulting impact speed with the bike could be even worse than if I had just slammed the brakes, so I don't think it's really a given.
You also cannot decide in 1 second what would happen if the pedestrian were to freeze, and whether you'd end up hitting him/her even worse by swerving left.
Most people in that situation would just brake, I think.
Other self-driving car companies (like Google (or whatever they renamed it)) have put a lot more work into their systems and done a much greater degree of due diligence in proving their systems are safe enough to drive on public roads. Uber has not, which is why they've been kicked out of several cities where they were trying to run tests. But Tempe and Arizona are practically a lawless wasteland in this regard and are willing to let Uber run amok on their roads in the hopes that it'll help out the city financially somehow.
I'm assuming LiDAR is not the only sensor installed in self-driving cars. Isn't that the case? And in this scenario, the software didn't have a lot to process. Road was empty, pedestrian was walking bike in hand perpendicular to road traffic...
Even if the detection box changed in size, it should have detected something. Tall or short, wide or narrow, static or moving... at least it should apply brakes to avoid collision.
I'm really surprised that we're even talking about the pedestrian's clothes or lighting or even the driver. Isn't the entire point of sensors like LiDAR to detect things human beings can't? The engineering is clearly off.
Is it possible for the car to do some calibration of some sort to decide what the current "sensor visibility" is? Like a human would do in fog. Is it common practice to use this information to reduce or alter the speed of the car?
Great question. At least in our algorithms we do this - to adjust the driving speed based on the conditions (e.g., visibility or perception capabilities).
At the end of the day, you can drive only as fast as your perception capabilities. A good example of that is how fast humans can perceive when influenced by drugs/alcohol/medications vs. when uninfluenced.
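The "only as fast as you can perceive" rule falls straight out of the stopping-distance formula. A generic sketch (textbook kinematics with made-up constants, not the parent's actual algorithm): find the largest speed whose reaction-plus-braking distance fits inside the range you currently trust your sensors over.

    import math

    def max_safe_speed_ms(perception_range_m, reaction_s=0.5, decel=6.0):
        # Solve d = v*t + v^2/(2a) for v.
        a, t = decel, reaction_s
        return -a * t + math.sqrt((a * t)**2 + 2 * a * perception_range_m)

    for d in (80, 40, 20):  # meters of trustworthy sensing
        print(d, "m ->", round(max_safe_speed_ms(d) / 0.44704), "mph")
    # 80 m -> 63 mph, 40 m -> 43 mph, 20 m -> 29 mph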
What is baffling is the fact that the car was driving at 38 mph in a 35 mph zone. This should not happen regardless of how well/poor your sensing/perception capabilities are.
The description given before the video was released painted a picture in my mind that the woman was on the median and "suddenly" entered the roadway in front of the vehicle. I pictured someone darting across the road directly in front of the car, with no way to stop in time.
This video shows a completely different scenario. The woman started on the median, but the vehicle was in the #2 lane. She wasn't visible to the naked eye but she also wasn't darting into traffic and had to cross the #1 lane before even being in the path of the vehicle. A human driver certainly would have difficulty stopping in time, but why did the sensor package not pick her up? This doesn't appear to be the close call we were told it was. To me, this seems like exactly the scenario that autonomous driving vehicles are intended to prevent.
I have the very strong suspicion that the driver had turned autonomous drive mode off, perhaps to be able to watch TV or something. She seems extremely distracted by the dashboard, and there is no way the sensors could have failed to alert her to the pedestrian's presence. Also, apparently the car was driving at 38 mph in an area where the speed limit was 35 mph. Would the autonomous mode allow that?
OTOH, if the issue was "human thought computer was driving, computer was actually off", this might be a completely new class of errors in driving (although it does happen in aviation, where both pilots think the other one is PF: https://aviation.stackexchange.com/questions/5091/how-are-co... )
> Also, apparently the car was driving at 38mph in an area where the speed limit was 35mph. Would the autonomous mode allow that?
This seems key. Whether in autonomous mode or not, whether someone got hit or not, this clearly should not have been happening, and as such indicates some malfunction or lapse in protocol.
Have to agree with you. This is the automatic exposure control of the camera. My 1440p dash cam does the same thing: in its video it looks like everything outside the cone of the headlights is pitch dark, but it actually is not. I have seen deer standing by the side of the road and later checked the dash cam, which did not pick them up at all.
Sure, if we were actually sitting in the front seat we would have a different perspective. Want to convince me it was unavoidable? Release the IR footage too.
I disagree, if you look at any of the Waymo videos you can see they have great recognition of every car, pedestrian, and obstacle within 100 meters of the car, and most of that data (point cloud lidar/radar at the very least) would work in pitch black without even headlights. Thus it's extremely reasonable to expect that autonomous vehicles will easily prevent scenarios like this.
Inclement conditions, defensive driving, etc. are much harder to work with but this should have been cake.
If you turn your headlights off on a dark country road, your sight capabilities actually go up, not down. The problem is: most roads are lit, or you at least have to compete with other headlights, and it would make you harder to see.
Go to [0] and [1] and count how many times they use the word "safe." We're not hoping autonomous vehicles are safer, we're being sold on them as the safer alternative. So yes, this type of close call situation is exactly where these vehicles are supposed to be superior.
That's a very misleading comparison, even as an informal, rough comparison for the sake of argument.
Such a comparison would have to take into account the amount of human-driven cars and automated cars - not just taking a picture of a single day, but the variance over time (e.g. if today there are a thousand automated cars in operation, and yesterday there were 50, that can distort the average stats); automated cars aren't driving in certain areas/times/weather conditions whereas human-driven cars are, etc.
Automated cars may be safer, but <snark>I hope it's not calculating its safety stats in this way</snark>.
Maybe a better word than "intended" would be "expected". As in, most normal, rational, reasonable people basically familiar with the technology involved expect this to be preventable.
Being buried in lawsuits isn't a great way to make money, generally speaking. The people hoping to make lots of money at this should have a pretty high level of motivation to make it safe.
I was under exactly the same impression. From the description a few days ago, I had imagined someone on the right hand side of the road stepping out from between two parked cars.
From the video this morning I was wondering how it was possible the vehicle lidar and radar didn't pick this up. This is exactly the sort of thing I would hope these additional sensors could pick up easily.
Also from the released video, it was clear to me the "human driver" of the car was not paying attention. Looks like they are looking at their phone at lap level 90% of the time, unless there was something like a "camera/lidar" view in the dash they were looking at.
Having a human in these self driving cars is useless. There is so little for the human to do that it is hard to keep them from checking out. And once they do, they might as well not be there. Seems like "Safety Theater" to me, make the public and the Uber riders feel like there is still a human in the loop, when there really isn't.
Yes, particularly the 'theatre' part. It's expensive and useless.
More useful would be to have a central location with humans monitoring dozens of cars, like a sort of air-traffic control situation. Better chance of keeping their attention with enough going on. They'd be there in case of difficulties, know when a car 'went off the rails' or notify authorities in case of an accident.
And with Central monitoring you could easily have people switch out frequently, and run drills like they do with lifeguards. The lifeguards at our neighborhood pool change positions every 15 or 30 minutes or go on break, with new ones cycling in. The water park does something where the supervisors will periodically introduce a special ball to the water which the lifeguards have to "save".
I worry that monitoring dozens of cars would be too much information for the "Road Traffic Control" to be able to respond in the 1.5 seconds that were available in this video, especially if they have such "low dynamic range" as what we see in this video. But maybe Lidar data would have showed the RTC operator something the car didn't see?
I'm really looking forward to the findings about what radar, lidar, and sonar sensors were saying during this time.
What is striking to me is I haven't seen anyone talking about evasive maneuvers. The car must have been able to see the person, and the road looks empty, why not swerve?
At the absolute bare minimum, hit the brakes and reduce the impact. An attentive human driver would have at least started to hit the brakes. The software should have had plenty of time (I'm estimating a full second) to do something productive.
Keep in mind the video isn't necessarily representative of what you would see with the naked eye. Most cameras have very poor contrast in low-light scenes interspersed with lights, like a typical street with streetlamps; your eye does significantly better.
I bet the police chief who gave the quotation saw that video once, was startled at how suddenly the person appeared in it, and didn't check things like whether there was another lane, or anything else about the surroundings.
I would expect most autonomous systems to extrapolate the movements of anything in the vicinity and check whether it's going to cross paths with the vehicle. I would speculate that LIDAR (or whatever systems they use for object detection) simply failed to detect her.
Bikes are mainly curved, small pieces of metal, and absorptive pieces of rubber. Their reflectors are designed for viewing them from the front or rear, not the sides. I wonder whether lidar can even differentiate a sideways bike from noise.
Even if those in-wheel reflectors are removed, the wheel rims are reflective and the seat post is reflective (at minimum). And additionally there are several things that are way less reflective than a bicycle that you don't want to hit in the middle of the road. Like a deer, or a pothole, or a pedestrian.
A bicycle shouldn't need to be covered in reflectors to be safe from getting run over by a car. However anybody who cycles at night (as I do frequently) should wear reflective clothing because the road is full of reckless drivers.
In principle, I agree. But, I wonder if holding a bike in front of you is almost like holding a mirror that obscures the lidar ability to detect a solid mass.
edit: but she is in front of the bike, so this shouldn't matter. Does the darkness of her clothing impact any of the sensors?
That paints the whole accident in a very different light from the video from Twitter. Would any human not notice someone crossing in the good camera screenshot?
Really hope the cops didn't just look at the low-dynamic range dashcam video and decide it's case-closed here. I would not want one of these Uber cars on the streets with me.
I think Uber should be forced to share all of the sensor data.
Historically, building and engineering codes advance in response to disasters. This mindset needs to be applied to robots that are going to be operating in public spaces. Incidents like this are an opportunity for all self-driving car developers to get better and there's a public safety interest in that happening.
Wow, this really paints a different picture. If Uber cars have video quality like the one in the Tweet they should not be on public roads at all.
Edit: I'm not asserting that this is the only video that car "sees", but if it's not, it seems a bit disingenuous for Uber to share only this crappy low quality video with the police.
So we have no idea whether it's just a crappy dashcam or it has to do with the compression algo that Twitter applies to videos. Note that the imgur screenshot is taken from a video uploaded to YouTube.
Also, the dashcam probably isn't what the car uses to "see", that would be beyond disqualifying.
Despite the fact that the driver was clearly playing with his mobile (which makes him indefensible), I have some sympathy for the driver not reacting early enough. Even if the driver spotted the obstacle, the car is only going to apply the brakes when it gets closer to the obstacle, not half a kilometer before. So how much time would the driver have had to 1. assess that there is a danger, 2. realise that the self-driving car is not reacting to that danger, and 3. apply the brakes?
Your point 2 is actually very important, and I haven't seen it discussed before. We don't know what instructions the drivers have been given, but there will first be a question in the driver's mind about whether the car will react.
The driver has to consider whether they will be reprimanded by Uber for braking before the car had a chance to.
So this might suppress any kind of reflex action - which I assume is what the standard 0.5 s reaction time is based on.
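Summing even charitable ballpark latencies for that chain shows how tight the budget is (all numbers are rough assumptions, not measurements):

    PERCEIVE_HAZARD   = 0.25  # s, visual response
    CONFIRM_NO_ACTION = 0.75  # s, "is the car about to brake by itself?"
    MOVE_FOOT_BRAKE   = 0.30  # s, foot from floor to pedal

    available = 1.0  # s of visibility in the released video
    needed = PERCEIVE_HAZARD + CONFIRM_NO_ACTION + MOVE_FOOT_BRAKE
    print(f"{needed:.2f} s needed vs {available:.2f} s available")  # 1.30 vs 1.00

Even an attentive safety driver loses the extra confirmation step that an ordinary driver doesn't have.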
The police were on the scene of the accident. They presumably would take photos of the scene. So they had other shareable evidence that would show the conditions more clearly.
Yet they chose to tweet out this very low-quality video that basically absolves Uber. Plus, they have a motive to spin the story after the local government invited Uber there to run these tests. Seems very CYA-ish.
Don't know why this is being downvoted, it's an interesting point.
There's a significant difference in lighting between a night with a clear sky and full moon versus an overcast night and new moon. Not saying that's the case here, but it's not a valueless comment.
That picture is shockingly clear. I have trouble imagining how any human driver would not have seen that pedestrian. The road seems incredibly well lit.
It took me a bit of back and forth to try to exactly match the various videos and pinpoint what actually happened. I've taken a screenshot of Google Maps [0] and added some annotations [1].
It's confusing because in the Twitter video it looks like there are two lanes, but at the actual collision site there are four lanes.
What is stupid to me is that there is a pedestrian path that leads right up to a 4-lane highway without any kind of barrier, and with nowhere close by to cross over.
edit: I added a screenshot of the victim's view of the on-coming car [2]. You can see how clear the road is leading up to the crash site.
Pathetic and sad performance by the vehicle and the "safety" driver. The woman does not "appear out of nowhere"; she was in the roadway for some time. She was not wearing all black, she had red hair, and her shoes were reflective. Even if we are to believe that their camera is this crappy, they still have the lidar, and it appears the brakes were not applied. Even 500 ms of braking at 0.8 g would have shed about 9 mph. That might have saved her life. Ultimately, if Uber's car cannot see a pedestrian crossing the street at walking pace at night without hitting her, it should not be operating at night.
Clear weather. Flat road with no obstacles. What looks like good road conditions. No other cars visible. I am no computer vision expert, but this really looks like a common use case the software should be able to handle.
What I don't understand is why the pedestrian didn't see or hear the car. She seems totally oblivious. Surely she would have seen the headlights. It is possible she had some sensory impairment, either visual or auditory.
My guess is that she felt so obviously visible that she didn't expect to be hit. From where she started crossing she had a view of all the incoming cars for the entire length of the bridge. She was probably expecting the car to just decelerate from afar to let her cross.
No, she didn't expect cars to decelerate from afar. She just didn't care.
She didn't even look at the road at all, not even at the very moment of impact. She was looking the other way 100% of the time, right up until the crash - as if looking at the road was WAY too much work for her.
Pretty sure she didn't bother to do some kind of "physics deceleration analysis" before she crossed the road LOL
She's probably crossed there a zillion times and never had a problem, and so has stopped caring. It's a well lit area with good sight lines, human drivers would have to fuck up to a significant degree to hit someone under similar circumstances.
I love how you contemplate the logic of the pedestrian but impute no responsibility to the potential driver. Why do people want to defend a clearly defunct AV system so much that they question the thinking process of a woman who was killed?
Not OP, but pointing at risky behaviour alone does not say anything about the driver's, the company's, or the pedestrian's responsibility. It's not football, so one does not have to choose sides on such issues.
An accident like this only occurs with multiple failures: a person crossing the road somewhat carelessly; a driver not paying attention to the road; and, in this case, an autopilot system that didn't do what it's supposed to do.
No, I am not defending the AV. If someone was "wrong", it was definitely the AV and not the pedestrian.
The AV should definitely have at least tried to stop, it screwed up. Pedestrian has the right of way.
Having said that, I wouldn't want folks on HN to take this as an example and say "hey, I am entitled to cross the road without looking; cars will decelerate for me and will always stop for me because I am entitled to right-of-way". WRONG.
You cross the road without looking, the car hits you. Sure, AV or human-driver, the car would always be wrong.
You are "right" because you are the pedestrian and have right of way. You win the logical argument, but you are the one spending a year in the hospital.
Do you want to be "right"? Is taking a quick look at the road that much work for you?
This happens all the time at night where I live. People cross the road and you don't see them until you're quite close.
Cars are highly visible to pedestrians (due to headlights). I think pedestrians assume that they themselves are equally visible to the driver: "I can see them, so they must be able to see me" is the subconscious line of thought, I think.
Often though pedestrians get "lost" in the lights of oncoming vehicles or the brake lights of cars ahead, or are just in your blind spots.
The other thing is, I've noticed that people who have never learnt to drive often don't really appreciate how cars behave and are not very good at estimating the distances involved. You've probably noticed this when crossing streets - some people take eyebrow-raising risks when crossing the road while others wait... guess which ones are the drivers who are experienced with how cars/drivers act!
My guess here is the poor woman was stationary in the median thinking "Should I cross?" as the car was approaching. She probably then decided she had time to make it across before the Uber car got there so started to cross the street, perhaps assuming the vehicle was further away than it was, or would slow down since the "driver" would see her. We've all done this I am sure ... sometimes with time to spare, sometimes cutting it a bit close.
Perhaps if this was what happened, the Uber car "saw" her stationary in the median and disregarded her as a stationary road-side object and carried on until it was too late. IIRC these systems need to disregard static objects at the side of the road (e.g. signs, trees etc) otherwise they'd be slamming the brakes on all the time. Perhaps the initial movement she made into the road was disregarded as "sensor noise" (e.g. you can see objects "jumping around" a bit in this video from Uber: https://www.youtube.com/watch?v=WCkkhlxYNwE) from what it thought was a static object, but that would not explain the apparent total non-reaction from the vehicle when she was clearly in the path.
I've driven a XC90 a few times - it has auto-lane keeping when using cruise control on the highway. Perhaps that was what was driving, not the Uber system.
I think your guess is dead on - it's what I had imagined the scenario was before I saw the video. I see this all the time, and the social contract between car and pedestrian is the car slows. Heck even half the time people do this they don't even look at the car - it's quite crazy - I never do that mostly because I make the assumption that I will be the unlucky shit that was hit when someone decided to bite down on a messy Carls Jr burger.
Regardless of how I do this, it's a pretty accepted contract and definitely still piles the fault on the SDV in my opinion.
What if she was stationary in the middle of the road, but standing in the left lane rather than on the median?
I mean, she began to cross from the center (left) to the right, walked a bit, saw the car's lights, stopped, and then decided to cross again? The sensor would have seen her as a stationary object in a lane the car wasn't driving in. And then she begins to walk, and this movement is disregarded as noise at first, and then it's too late because she has kept walking? It just takes a few steps to cross a lane, about 5 or 6...
Wow, that self driving car is total crap if it can't pick up that obstacle. Doesn't get much more obvious than that. Slowly walking from the left lane into your lane on a fairly straight road.
I bet it was more visible to the human eye than in that video as well. You'd have to be blind to miss someone crossing the road like that.
Of course the pedestrian wasn't doing the smartest thing, but I believe a human driver would have at least hit the brakes had they actually been looking forward and paying attention.
I thought the actual advantage of self-driving cars is that they're always meant to be looking forward and paying attention. That doesn't appear to have happened in this case.
> Wow, that self driving car is total crap if it can't pick up that obstacle.
Not to nit-pick too much, but 'that obstacle' is a person - a person who died because of engineering hubris. I think we should be careful with our language in this situation, as it may get away from us. She was not 'an obstacle' or a 'thing to avoid' or a 'hobo'; she was a living person.
I 100% agree, however arguing about semantics is not productive.
I deliberately chose the word "obstacle" to hammer home the point that the car did not even need to identify that the object in front of it was a person.
If that obstacle had happened to be a fallen tree, it would be the car's passengers that would be dead instead.
Literally all it needed to do was stop, or at the very least slow down.
In this case, I believe the car performed much worse than an average human driver would have, and this video shows without any doubt that these cars are not safe to be on the road in their current state.
I'm not sure I follow what you're trying to communicate. What do you mean by "get away from us"? Also, framing things as "obstacle" vs. "living person" seems like a false dichotomy.
I mean that Uber, being a very shady company, has a large incentive to 'sterilize' the language. The person becomes an 'object', the person driving the car becomes a 'rogue employee', the cruddy software and cost-cutting become an 'undiscovered malfunction', and the burden of safety on the AV manufacturer becomes a 'shared breakdown'. Responsibility and humanity start to get stripped away via the word choices used.
While the system should have reacted better, I'd definitely say she died because she crossed the road in a very unsafe manner. I wouldn't attempt to make that crossing without a reflective vest, front and tail light, and even then I'd still be very anxious.
It has to be the driver (or computers') responsibility to spot and react in situations like this as it is the dangerous machinery that can cause injury and death.
All parties have a responsibility when travelling in and around public roads.
Using your logic, if I were to try to cross the railway line and got hit by a train, the conductor has a "responsibility to spot and react in situations like this as it is the dangerous machinery that can cause injury and death." While I agree with the conductor's share of the responsibility, I'd still say that I'm to blame for not crossing the railway lines more carefully.
Railway lines physically bar people from crossing them. Walking on rail tracks is actually forbidden. Where a railway crosses a road, there are physical barriers and audiovisual signals to prevent people from crossing when a train is near.
Roads in cities are shared spaces. Pedestrians have to obey traffic laws but as a car driver you are taught to anticipate incorrect behavior from pedestrians, especially to accommodate children who might not know any better or the elderly who might be absent-minded (e.g. start crossing a street then turn back because they worry they might not make it).
It's driver education 101. If there's a parked line of cars and there are apartment buildings and maybe you even spotted children playing, anticipate that a kid might dart in front of you for some stupid reason because they don't understand the risk. So slow down and anticipate having to come to a full stop.
This was a high visibility situation on an open road with no other traffic. The pedestrian violated traffic laws by crossing the road outside a marked crossing but that happens all the time. A human driver would have anticipated the possibility of this happening when they spotted the person standing beside the road in the distance. They would have decelerated when they saw the person start moving onto the road and they would have made a full brake when it became clear the pedestrian was going to walk straight onto the road with no regard for their own safety.
The automated car showed no signs of having adjusted to the situation whatsoever. Sure, the pedestrian was stupid and got herself killed, but the car was driving recklessly and bears the full responsibility.
EDIT: to be clear, your example is equivalent of a person climbing onto a highway/motorway, not of a pedestrian crossing a street.
While I get your point, I do think there's an important distinction: train lines are rare, usually far away from where people live, walk, bicycle, etc. Roads, on the other hand, occupy a significant portion of public land in our cities. It's not great that it's effectively a death sentence to be less-than-focused while walking home.
You know the old saying "with great power comes great responsibility". The onus for being responsible is not on the pedestrian. It's on the person (or CPU) operating the multi-ton machine at speeds that can kill.
> You'd have to see someone crossing the road like that.
You could say the same thing about a car with headlights on coming towards you... From the little evidence we have it seems like the pedestrian, driver/monitor and the car all should have done better.
That pedestrian obviously didn't have good situational awareness, but are we happy to let self-driving cars mow down any unaware obstacle on the road? Broken-down cars/deer/children/roadworks?
Honestly, that video proves to me that this tech is nowhere near ready. That was the most obvious of obstacles and the car did not even slow down.
I believe a competent human would have at the very least slowed down in that situation, and likely stopped completely.
> that video proves to me that this tech is nowhere near ready.
It proves that Uber's tech is nowhere near ready. Based on all the stupid shit I see Uber engineers saying on the Blind app, I have very little faith in their maturity, professionalism, and culture.
I would bet that Google is doing a better job than Uber is.
That could be true, but the perception of the industry is now "OMG OMG KILLER ROBOT CARS ARE UPON US", never mind that other companies are expected to behave far more responsibly. (To compare: Chernobyl was a case of 1. bad design, 2. disabled safeguards, and 3. reckless operators, yet nobody cares about the distinctions, and it has essentially killed nuclear power.)
It is not uncommon for pedestrians to notice oncoming traffic and still cross the road, expecting that the car will just slow down for them. A human driver, without exception, will slow down in this case. Sadly, the same thing might have happened here. The lady took one look before crossing and expected the car to slow down.
> It is not uncommon for pedestrians to notice oncoming traffic and still cross the road, expecting that the car will just slow down for them.
Where do you live? I never see this. That is just asking to get hit by a car.
> A human driver, without exception, will slow down in this case.
The pedestrian in this case isn't equipped with computer-activated motors and brakes. Yeah, it would've been best had she not crossed there at all. But that issue is entirely independent of how the AV should perform, assuming that Uber AVs are touted as being significantly safer and faster to react (day or night).
Agreed. Many of us are expecting the machines to do better, by design. Humans will make mistakes, and mistakes will be baked into the designs, but each mistake should only have to happen once.
And it is surprising to see how limited this first run of sensors apparently is.
In my mind this accident is on Uber no matter how you interpret the video.
Scenario 1: Let's say the pedestrian was visible to the naked eye and the sensors. The model and safety operator still didn't "see" her and act in time. Who to blame? Uber.
Scenario 2: The lighting and environment was in a condition where neither the model nor the safety operator could see the road more than 10 feet in front of the car, yet neither thought it to be irresponsible to go at full speed. Who to blame? Uber.
I believe that:
1. The sensors should have detected the pedestrian from far away, even if the lighting was bad at the time. I mean, that's sort of the point of autonomous vehicles: they are better at perceiving their surroundings and can make better decisions on how to act, more quickly than any human could.
2. The safety operators are not engaged enough in their tasks to be effective. I think people underestimate how boring it must be to have a job where 99% of the time you should do absolutely nothing other than stare straight ahead and be ready for a situation like this. This problem is hard to solve. Maybe we should be training the model on closed tracks and only release it on real roads when it passes some sort of test where it is put through various scenarios. Like a driving instructor for AIs.
For those of you who think the pedestrian is to blame: I agree that the pedestrian might have made a bad decision by expecting the cars to brake; however, these situations occur all the time. She didn't dart across the road or jump in front of the car suddenly. Yesterday I helped an elderly guy across 4 lanes of traffic, which took about 1 minute. All you can do in that situation is hope you are visible to the drivers and that they will stop before running you over.
Many people comment that the lighting is poor and that a human might understand what's happening too late. This is debatable and misses the point: if visibility is bad, you (and it applies with full force to automated drivers) should reduce your speed accordingly, maybe with the exception of freeways where you are not expected to encounter pedestrians.
> maybe with the exception of freeways where you are not expected to encounter pedestrians.
Freeways require you to reduce speed in those cases as well, possibly even more so. This is pretty well demonstrated by the pileups that occur every winter, and by similar (if rarer) incidents caused by dense fog and the like. Pileups at freeway speeds are much worse than the 2-, maybe 3-car crashes that occur on city streets.
Yes, there are no pedestrians, but one person hitting a pothole or large puddle and spinning out is all it takes to cause a pileup if you're not careful.
I think we should be able to transfer over the guidelines that apply to train drivers. There, they spend hours and hours on end doing practically nothing, and certainly not steering.
However, I'm guessing that the Uber test drivers aren't paid anywhere near as much as a train driver, nor given the same kind of training.
Why on Uber? It should be on the driver. That driver kept looking down almost the whole time. He is not employed to watch movies or do something else. If he needs to watch some data, then maybe two of them should be there.
It is entirely too easy for companies to hire poor people and call them "safeguards" with no regulation. I'll put my money on the "safeguards" being poorly paid, poorly trained, and overworked, and on Uber disregarding the many studies showing how incapable people are at doing a job like this (i.e. never-ending monotonous surveillance interrupted by intense split-second decisions). Come on: Uber implemented this system and put people in harm's way, and their "safeguards" are not working; they should know that and take responsibility for it.
If there is something that isn't going to help the public perception of autonomous cars at all, it's releasing a compressed-to-shit capture of a video showing a single camera angle out of dozens.
I would say it's a deliberate attempt to manipulate if I didn't also strongly believe that ignorance on the part of the police department has led them to believe autonomous cars could even exit a parking lot without data from many more sensors than this one camera, not to mention the vastly more useful LIDAR on top.
(That's before you consider the video angles shown here are just for dashcam purposes. The real cameras for the autonomous driving are in the sensor array on top of the roof)
Yeah. Does the Uber car really capture video at 480p and 15fps? Also only releasing the video conveniently ignores the fact that these cars have IR and LIDAR. The pedestrian is hard to see in this video essentially because it is dark and they are wearing dark clothing. Neither of these are at all obstacles to LIDAR and IR, and the video at least shows us that the road is clear of obstructions.
>Does the Uber car really capture video at 480p and 15fps?
Probably not, but it isn't like the inputs to the self-driving models really need to be better than that. Lower resolution helps your processing time a lot, and there's little point in having an FPS higher than your processing rate.
I'm sure at least the collision avoidance part of the system would need to poll at a much higher rate than 15fps. That's up to 67ms latency you're adding. With enough miles that delay could kill people.
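Rough numbers, just to put that 67ms in context (a sketch; the speeds and frame rates here are illustrative, not Uber's actual figures):

    # Worst-case latency added by a camera's frame interval, and the
    # distance a car covers in that time. Illustrative numbers only.
    def added_latency_s(fps):
        # an event can happen just after a frame is captured, so the
        # worst case is one full frame interval
        return 1.0 / fps

    for fps in (15, 30, 60):
        lat = added_latency_s(fps)
        metres = 40 * 0.44704 * lat  # distance covered at 40 mph (~17.9 m/s)
        print("%2d fps: up to %3.0f ms -> %.2f m at 40 mph" % (fps, lat * 1000, metres))
    # 15 fps: up to 67 ms -> 1.19 m of blind travel per frame interval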
Average human reaction speed is around 215ms. Not an apples-to-apples comparison because humans can react much faster to continuous situations (humans have a timing accuracy of around 9.5ms) while a machine-learning model is limited to only reacting once per frame, but still.
If you want to compare against human "sample rate", it'd be equivalent to at least ~200 FPS (in order to get the same accuracy with a camera). Sure, the signal takes a moment to plumb its way through, but that's irrelevant to spotting objects.
If they're actually feeding data at 15 FPS into their ML model, then what the fuck were they expecting? Correlating movements at those framerates would be nigh-impossible.
Relying on ML for this is already comically irresponsible, but that'd just be ridiculous.
My ass, mostly. I'm extrapolating based on monitor framerates, how accurately we can see the velocity of fast-moving objects, and the fact that I can reliably spot a timing difference of ~5ms.
Human eyes are almost comparable in terms of framerate based on neuron spiking rates, which max out somewhere around 250-500Hz. Obviously that's not directly comparable, but it gives an idea of how well we can deal with moving objects.
I think they're pulling it from noticeable monitor FPS rates. I'm not an expert on machine vision, so I couldn't tell you the FPS needed to correlate movements between frames.
Average human reaction speed when driving, from the time the dangerous situation happens to the time of first reaction, is usually considered to be in the 700-2000 ms range. The lower bound is with optimal lighting and after something has alerted the driver in advance (e.g. a police car dashing past you a moment before), and the first reaction is not "the brakes are slammed" but "the foot is lifted from the accelerator". Hitting the brakes can take another 100-200 ms.
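For anyone who wants the arithmetic, here's a sketch using those reaction-time figures; the 45 mph speed and ~7 m/s^2 deceleration are assumptions (roughly the road's reported speed limit and a typical dry-asphalt stop):

    # total stopping distance = reaction ("thinking") distance + braking distance
    SPEED = 45 * 0.44704   # 45 mph in m/s (~20.1 m/s)
    DECEL = 7.0            # m/s^2, a typical dry-asphalt figure

    for reaction_s in (0.7, 1.5, 2.0):
        thinking = SPEED * reaction_s       # travelled before braking even starts
        braking = SPEED ** 2 / (2 * DECEL)  # v^2 / (2a)
        print("reaction %.1f s: %2.0f m + %2.0f m = %3.0f m total"
              % (reaction_s, thinking, braking, thinking + braking))
    # 0.7 s -> ~43 m, 2.0 s -> ~69 m before the car comes to rest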
I think releasing much higher quality, higher dynamic range, multi-spectrum video is going to be a LOT more damning of driverless car tech. Just look at all the commentators here saying they would not have been able to stop in time based on this terrible camera footage (ignoring the vastly better capabilities of their own human eyes).
I'm holding out hope that a good chunk of the public will understand this footage is not at all indicative of the vision capabilities of even the human eye, let alone the actual sensor tech on that Uber car. Certainly a good number of people on Twitter do.
If Waymo understood how to stick a knife in they would have replicated this scenario yesterday and released a video. Hell, you could take a Mercedes S class with night view assistant and it would spot this, no problem.
Personally, I'm amazed that we get to see any video at all from this accident... Given that we live in an age where video is king, and given Uber's reputation for going to extreme lengths to get their way, I think we should be thankful to the Tempe Police for getting this important document to the public so quickly.
> If there is something that isn't going to help the public perception of autonomous cars
If there is something now that's going to help with the public perception of these cars that people claim to be autonomous, it's people like you quitting trying to defend this and caring about the deceased for now. The software and hardware were sub-par, and the driver of a fucking TEST DRIVE was not looking at the road, let alone having hands ready on the steering wheel. The driver's feet were probably wandering around relaxing beside the pedals. TBH I don't give a flying fuck about the public image of autonomous (!) vehicles ATM.
Given there are no obstructions in the road, LIDAR (or even the decent cameras from the sensor array on top) would obviously show her on a clear collision path many seconds before impact.
So I guess this is the ultimate litmus test of whether the culture at Uber has changed. They let this video stand, and I don't personally think they should be allowed to drive another metre autonomously on public roads.
This is Uber we're talking about. They ran a red light while self-driving, shrugged it off and kept going. They had a team whose job was to avoid regulation by blacklisting the credit cards of police. I would be shocked if Uber does anything other than the bare minimum of apologising.
So now we all know: Elaine Herzberg did not run out in front of the car, as the police said; she was walking at a normal pace. She was not in the shadows; the camera footage is typically darker than human vision. And the reason the first thing the driver knew of the crash was the sound of it is that she didn't have her eyes on the road.
And all the car's sensors, its superior perception of its environment and its superhuman reaction times were no use without a human-level understanding of its environment to go with them. It couldn't tell that there was a person crossing the road in front of it, and even if it did, it didn't have a concept of what a person is and why it should try to avoid them; or in any case, it just didn't know what to do about it.
So can we now please roll back the off-the-charts hype about self-driving cars being safer than human drivers? It's abundantly clear that this is not yet the truth (not yet, anyway). That's just not the state of the art at this point. We're like the people jumping off towers with crazy "flying" apparatus in the 18th century, because they were convinced they could fly that way.
Or maybe we should just stop pretending that what we really care about is safety, and admit we just want to have cool tech toys to play with, no matter the consequences.
You're missing the (admittedly comfy) narrative that the other tech companies are being diligent and careful while Uber are a bunch of cowboys rushing alpha code out into public, who only change stuff when they get caught. The autocar fan's worst nightmare.
I noticed that trend in the comments, yes. Unfortunately, the real issue with what Uber or Waymo (or anyone else) are doing is the limitations inherent in the technology itself: specifically, machine learning for object recognition and identification and for the learning of complex behaviours.
The limitation is a bit technical, but basically: in principle, machine learning is possible under certain assumptions, as laid out by Valiant in his PAC-learning paper ("A Theory of the Learnable"), especially the assumption that a training sample will have the same distribution as unseen data. Under this condition, machine learning can be said to work, and we can look at performance metrics and be happy they look good.
Well, except that the real world has no obligation to operate under our experimental assumptions, so once you deploy machine learning systems in the real world, their performance goes down, because you haven't seen nearly enough of the data you really need to see, in the lab.
And if you attach such assumptions to safety-critical systems, then you're taking an unknown and unquantifiable risk. Or in other words, you're putting people's lives in danger.
And that's everyone who uses machine learning to train cars to drive in real-world conditions. Not just Uber.
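If anyone wants to see the effect for themselves, here's a toy sketch (synthetic data, nothing to do with any vendor's actual stack): a model that looks perfect on in-distribution data collapses once the test data drifts out of the region it was trained on.

    # Train on one region of the input space, test on shifted regions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def sample(n, center):
        X = rng.normal(center, 0.5, size=(n, 1))
        y = (np.sin(X[:, 0]) > 0).astype(int)  # the true rule is nonlinear
        return X, y

    X_tr, y_tr = sample(5000, center=0.0)      # training region: sin(x) looks linear
    clf = LogisticRegression().fit(X_tr, y_tr)

    for center in (0.0, 2.0, 4.0):
        X_te, y_te = sample(5000, center)
        print("center=%.1f accuracy=%.3f" % (center, clf.score(X_te, y_te)))
    # near-perfect in-distribution, then worse than chance once the data
    # leaves the region the model was fit on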
Yes, but that is not specific to machine learning. Humans have also learned to drive under the assumption that future observations will be somewhat like what they have seen in the past. And yes, that is putting people's lives in danger.
But that has nothing to do with machine learning. It has to do with all control systems, human or machine.
The point is that machine learning algorithms' decisions always have some amount of error, and that this error goes way up in the real world.
The auto-car industry's marketing claims that self-driven vehicles are safer than humans just because computers have faster "reaction times" (they probably mean faster retrieval from memory).
But if your reaction is completely wrong it doesn't matter how fast you react. Reacting very fast with very high error will just cause a very fast accident- and make it harder for puny humans' reflexes to avoid it, to boot.
There was almost a second where the woman was clearly visible, and yet the car did not attempt to emergency brake. I would not expect the vehicle to stop safely, but it could definitely have slowed down and reduced the energy of the collision.
This is not criticism of self-driving tech; I would not expect an alert human driver to avoid the collision either, due to limited reaction time. With technology, however, we should be able to do better than humans, particularly when it comes to reaction times. Clearly, there is still some work to do.
Do we know the vehicle didn't brake? There's no audio to tell us anything, and a car takes "almost a second" to rock forward on its suspension when reacting to brake input too. FWIW, I think I might see the nose bobbing a bit just before the collision, but honestly can't tell.
I certainly don't think you can say "did not attempt to brake" based on the evidence at hand, basically.
Because the nose of the car doesn't go down. Especially in an emergency braking situation, the nose of the car dips significantly, and you would see that in the video.
The video is cut when the person is a few feet from being struck by the car. That's not "the last second"; that's basically the point of impact, and already much too late.
Indeed. Uber was clearly able to spin the early reports about this incident; there is little correspondence between those descriptions and this video. Corruption in Tempe.
The victim had absolutely no business crossing that road like that, but the notion that the fatality was somehow unavoidable is pure bunk.
Individual actors all working within the same values system and responding to the same inputs will produce similar actions. There's no conspiracy or "quid pro quo" corruption here, most likely, merely a corrupt values system. In a place like Tempe, pedestrians and bicyclists are treated as second-class citizens, especially in crashes involving automobiles. In this particular case there is a walkway on the median strip which is purely decorative and marked with signs telling you not to use it, which should tell you how pedestrian-hostile the area is.
I think it could have stopped safely. Assuming it has quicker reactions than a human, stopping distance should be around 18 meters. I think it just completely missed her.
The formulas and online tables are not really accurate for real-world modern cars. Example: https://youtu.be/0zbZweqlZPw?t=1m29s finds an empirical stopping distance of 11 meters at 40 mph. Actually closer to 8 m than 18 m!
That is a bold statement to make from the video information we got, at least in my opinion. I would estimate the speed to be at least 50 km/h, which gives 25 m as a very rough estimate for the stopping distance without reaction time. It's probably better than that, but not 8 m.
But I do agree that the car doesn't seem to have stopped at all and missed the victim completely.
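An easy way to sanity-check the competing figures is to invert d = v^2/(2a) and see what deceleration each claim implies (a sketch; the 0.7 g and 0.4 g values below are textbook assumptions, not measurements):

    def braking_distance(speed_kmh, decel):
        v = speed_kmh / 3.6                 # km/h -> m/s
        return v ** 2 / (2 * decel)

    def implied_decel(speed_kmh, distance_m):
        v = speed_kmh / 3.6
        return v ** 2 / (2 * distance_m)

    print(braking_distance(50, 7.0))    # ~13.8 m at a solid ~0.7 g stop
    print(braking_distance(50, 4.0))    # ~24 m with a cautious ~0.4 g
    print(implied_decel(64, 11.0))      # 40 mph ~ 64 km/h: ~14.4 m/s^2

Taken at face value, 11 m from 40 mph implies roughly 1.5 g of deceleration, which is more than ordinary road tyres deliver, so I'd treat that empirical figure with caution.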
And an autonomous car that actually worked in theory would be able to swerve safely, since it would have a full knowledge of surrounding obstacles and be able to measure the physics of the car as it was swerving, allowing it to keep in a safe envelope. In my mind, the complete lack of reaction from the car paints this as a total failure on the part of the car.
In other words, the car _never_ detected the pedestrian and never slowed down on its own. This has nothing to do with _when_ it saw her. It clearly didn't, yet both camera and LIDAR should have been able to.
I feel bad for Waymo and all the guys taking the necessary safety precautions. This is going to set the industry back quite a bit. Thanks Uber, and RIP Elaine.
I don't. Yes, I think Waymo is probably taking very nearly all the necessary safety precautions. But how would we know? It's simply a matter of trust, which is not good enough. And as we see here, the problem isn't Waymo; the problem is every other fly-by-night automated vehicle operator out there. We should all know how the sausage gets made in this industry, and it is absolutely not through a process of meticulous rigor and studious observance of industry best practices. We all know that software systems get made via mountains of shortcuts and compromises amidst an environment of half-assed development practices.
The only way to fix this is to have regulations which actually enforce quality control. That means code audits, it means a lot more process and a lot more oversight, all of which is going to be a drag but which is necessary when developing systems on which so many lives could depend. If you look at the regulatory requirements for aviation software or even video poker machines they will really put into perspective how little is being done now with autonomous vehicles.
At least one study has been done to see how responsive a passive "driver" of an autonomous vehicle can be. It concluded that a person who is required to intervene with an autonomous vehicle is no better than a drunk person.
It makes sense. If the car is able to get you from point A to point B 99.9% of the time without an issue, are you really going to be paying attention that one time when a person walks out in front of you and the car fails to respond appropriately? If the vehicle enables people to not pay attention, they aren't going to be attentive.
I'd be willing to bet almost every driver that gets to ride around in these test vehicles thinks the job is awesome because they get paid without having to really do anything.
Maybe this suggests that an effective use of current (i.e., 99.9%) autonomous driving implementations would be a captain and co-captain type of scenario, where the car only responds to commands that are the same from both drivers, like a logical AND between driver inputs (within some tolerance, and filtered, etc.).
What if the promise of autonomous driving tech isn't to make human drivers obsolete, but to make human drivers nearly perfectly safe. What are the chances that an engaged human driver and an autonomous system would both, simultaneously give the same incorrect or dangerous driver input to a car? It's not as sexy as robot cars, but seems like a significant development over the current situation. Also, it seems like a viable way to test on public roads.
What do you do in the case where the driver and the computer disagree? Say there is an object directly in front of the car - driver turns left to avoid, computer turns right to avoid. You have to take SOME action -- you can't do nothing otherwise you hit the obstacle. If you go with the driver, what is the point of the computer if it can't overrule bad decisions? That's equivalent to just letting the driver drive. If you go with the computer, what is the point of the driver? Only taking actions due to an AND of driver and computer inputs seems to only work if there is one correct course of action, which I doubt is the case in many situations.
Maybe you could have a system where the computer only takes actions to deliberately prevent unsafe situations, and is conservative in doing so (i.e. it doesn't drive, per se, but brings the car to a safe stop or enforces a maximum speed), but that's a huge step down from the current goals.
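A minimal sketch of that conservative-supervisor idea (names, thresholds and logic invented for illustration; not anyone's actual system):

    # The computer never "drives"; it only vetoes clearly unsafe inputs
    # and falls back to shedding speed when it sees a hazard.
    def supervise(driver_throttle, driver_brake, hazard_ahead, over_speed_limit):
        throttle, brake = driver_throttle, driver_brake
        if hazard_ahead:
            throttle, brake = 0.0, 1.0   # conservative override: just stop
        elif over_speed_limit:
            throttle = 0.0               # enforce a maximum speed
        return throttle, brake

    # The driver keeps full steering authority; the supervisor only ever
    # reduces energy, so driver/computer disagreement degrades to
    # "slow down", never to a fight over the wheel.
    print(supervise(0.8, 0.0, hazard_ahead=True, over_speed_limit=False))  # (0.0, 1.0)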
Haha, yeah, good point. In the event of contrary inputs, I guess one does have to supersede, e.g., one is the 'pilot' and one is the 'co-pilot.' Your proposal is much better thought out. But then I think we've described the collision avoidance systems that are in production today. Take it any further and you run into the disengaged driver situation.
A number of comments here have touched on the fact that she was apparently homeless. I spent 5.7 years homeless. Well before that, I had a college class on homelessness and did an internship at a homeless shelter. I am the author of the website the San Diego Homeless Survival Guide.
I've made a few comments in other discussions about some of the ways her status may have contributed to this tragedy. I'm just going to link to them with a short identifying blurb. Hopefully, taking them out of context won't make this go weird places.
I am leaving this here in part because "She was high or crazy" is a common stereotype about homelessness and it tends to not be a compassionate view. There are myriad ways her status could have contributed to the situation without her being either high or crazy. Yes, she could have also been one (or both) of those two things. But plenty of housed people get high or have mental health issues and we don't hand wave off their deaths as "they must have been high or crazy."
So, my hope is to be respectful of people whose feeling is her status could have contributed while putting out hopefully better information. Yes, her status may have contributed. But there is no reason to complete that line of thinking with "because homeless people are usually crazy and/or junkies."
There is a huge shortage of affordable housing in this country. A link backing that up is provided in one of my previous comments. These days, a lot of homeless people are just poor and can't afford rent, even while employed.
Wow. I had thought it impossible, given that the Uber AV was in the right lane and the victim was crossing left-to-right, for the police to claim that the victim couldn't be seen until it was too late. But I didn't expect the road to be so dark, given what we saw in the accident photos (which might have been over-exposed?) and Google Maps, which showed a lot of street/sidewalk lighting.
In the moment that we can fully see her, she does look unambiguously like a person walking a bike across the road (reports say there were plastic bags on the bike, but they weren't obvious/obstructive in the camera view). Is the AV's LIDAR expected to detect this kind of thing, even if it's too dark for human eyes?
The video of the Uber driver doesn't look great for the driver. I mean she doesn't look particularly engaged -- but I suspect that's what most of us would look like at the wheel. But she definitely seems to be looking downwards, right at the moment of impact.
Unless some other incriminating info is discovered, I hope that the driver isn't the sole focus of punishment (it doesn't help that she's a convicted armed robber, albeit from years ago). Being able to brake in time for the victim seems difficult even in the most ideal and alert conditions. And I have to think human operators are going to suffer complacency when 95-99% of the time they never have to actually drive -- making that switch seems to be a situation ripe with problems.
I don't mean that Uber execs/testers/engineers (again, assuming there isn't other incriminating evidence) should be scapegoats. I hope the result involves regulations that add more transparency to reporting (especially in Arizona), and public debate about the expectations of AV and AI.
The dynamic range of small sensor digital cameras is so poor that it is impossible to judge whether or not she would have been visible to the naked eye based on the video.
That's a great point. I did think, looking at the video, that the street was unnaturally dark. The camera seems to be exposed for night lights and the headlights too.
Most cameras have very poor contrast in low-light situations that are interspersed with lights, like a typical street with streetlamps; your eye does significantly better.
Digital image quality is generally far inferior to naked eye visibility, but even in the image, the pedestrian can be seen for at least a full second.
The driver, unfortunately, was clearly not paying attention. She reacts within a few hundred milliseconds of looking up, which is to say an immediate reaction in terms of mental processing. That means we cannot use her reaction time to gauge how early she could have seen the pedestrian, to compensate for the poorer optics of digital cameras.
As I understand LIDAR, it should work even at night (as it generates its own light and measures response time). This is a pedestrian, walking across a road, with nothing possible to occlude her. There is no reason it should not have identified an obstacle in the road (she's been in the road for several seconds, after all, having crossed at least 3 lanes of traffic by that point). Even if the visual camera had problems identifying the object, LIDAR should have flagged it.
LiDAR is not affected by the amount of visible (sun) light. I'm very curious to see the data returned from the LiDAR, because this should've been a very clear human frame, albeit with a bike behind it.
LIDAR still has a signal-to-noise ratio that limits it, though, especially with materials that reflect little at the wavelength used (usually NIR, somewhere around 980 nm; the military uses 1500 nm because it's more eye-safe).
But even then, this only impacts the maximum detection range. Even if the obstacle had suddenly popped up at 20 m, that should have been enough to drastically reduce the collision speed.
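Rough arithmetic on that last point, assuming 40 mph, a 0.2 s system latency and 7 m/s^2 of braking (all assumed figures):

    import math

    v0 = 40 * 0.44704            # 40 mph in m/s (~17.9)
    latency, decel, seen_at = 0.2, 7.0, 20.0

    d_brake = seen_at - v0 * latency                 # distance left once brakes bite
    v_impact = math.sqrt(max(v0 ** 2 - 2 * decel * d_brake, 0.0))
    print("impact at ~%.0f mph instead of 40" % (v_impact / 0.44704))
    # ~21 mph: still a collision, but at a far more survivable speed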
This is a shit show now. A new technology is being tested, and the tester is a convicted armed robber who was not even looking at the road during the test drive. I think every single person at Uber who has anything to do with this testing program is guilty of homicide, including the driver. This could at least have been non-fatal.
> I hope that the driver isn't the sole focus of punishment
As sad as this accident was, I don't see how the driver or Uber is legally at fault. Pedestrians crossing outside of a crosswalk are supposed to yield to traffic. The victim in this video is stunningly oblivious to the fact that she's on a roadway.
Obviously we want self-driving tech to be good enough to avoid such an accident, particularly since a human probably could have; I'm not arguing otherwise. But legally, the victim was clearly at least partially at fault here.
I think one thing this incident is going to do is to point up the limitations of having a human at the wheel, supposedly ready to take over. In a true sudden emergency, after tens of thousands of miles of uneventful riding around, that just isn't going to happen reliably.
> Pedestrians crossing outside of a crosswalk are supposed to yield to traffic.
Pedestrians always have the right-of-way. They might get a ticket for jaywalking in some cities, but it's never permissible to hit someone with your car, regardless of whether they're in a crosswalk. Vehicular manslaughter is no minor infraction, and Uber can expect a civil suit from the victim's family even if there are no criminal charges.
Just FYI: in the US, this is not legally true, and it is definitely not true in a lot of comparative negligence states.
While it's true you can't just hit people with your car, any civil suit that is filed would usually be lost in most of these states (where "lost" == you may not recover money). It does vary; some states (like AZ) allow you to recover 1% even if you are 99% at fault.
In fact, what you say about right of way is not even true in very pedestrian-friendly states like California. In California (and most states), what the person you replied to wrote is correct.
21950.
(a) The driver of a vehicle shall yield the right-of-way to a pedestrian crossing the roadway within any marked crosswalk or within any unmarked crosswalk at an intersection, except as otherwise provided in this chapter.
(b) This section does not relieve a pedestrian from the duty of using due care for his or her safety. No pedestrian may suddenly leave a curb or other place of safety and walk or run into the path of a vehicle that is so close as to constitute an immediate hazard. No pedestrian may unnecessarily stop or delay traffic while in a marked or unmarked crosswalk.
(c) The driver of a vehicle approaching a pedestrian within any marked or unmarked crosswalk shall exercise all due care and shall reduce the speed of the vehicle or take any other action relating to the operation of the vehicle as necessary to safeguard the safety of the pedestrian.
(d) Subdivision (b) does not relieve a driver of a vehicle from the duty of exercising due care for the safety of any pedestrian within any marked crosswalk or within any unmarked crosswalk at an intersection.
Pedestrians do always have the right of way, but they bear a bit of responsibility as well to make sure the driver is aware of them, and not just walk out into the road. I have been on both sides of the wheel here, and it can never be assumed that a car knows a person is there.
They will probably lose in Arizona (In My Lawyerly Opinion), but Uber will settle it out anyway.
It's not only comparative negligence, you don't have to always yield. In fact, Arizona has a weird set of rules that would make this exact set of events a law school exam style question:
Arizona: Vehicles must yield the right-of-way to pedestrians within a crosswalk that are in the same half of the roadway as the vehicle or when a pedestrian is approaching closely enough from the opposite side of the roadway to constitute a danger. Pedestrians may not suddenly leave the curb and enter a crosswalk into the path of a moving vehicle that is so close the vehicle is unable to yield. Pedestrians must yield the right-of-way to vehicles when crossing outside of a marked crosswalk or an unmarked crosswalk at an intersection. Where traffic control devices are in operation, pedestrians may only cross between two adjacent intersections in a marked crosswalk.
One saving grace for any plaintiff is that in AZ you can recover 1% damages even if you are 99% at fault.
The wildcard here is that interior video. The safety driver was clearly not paying attention. Although the exterior video seems to exonerate her, everybody knows that even with no bright clothing and in between streetlights, a pedestrian rolling a bicycle wouldn't be that invisible.
On the other hand, that pedestrian was astonishingly oblivious, crossing a two-lane roadway with a 45mph speed limit and not even looking for oncoming traffic. If she did that every night for a couple of years, I think her odds of having at least a close call would approach 100%.
As presented, this looks like a pretty classic "overdriving your headlights" situation.
Even though they're made of retroreflective material, only two lane divider dashes at a time can be seen in the video, indicating something like 50 feet of visibility. Stopping from 38 mph takes 70+ feet from the time the brakes are applied (and human reaction time adds quite a bit more). Things (people, animals, stopped cars, road hazards) appear in the road fairly routinely. If you can't stop inside the area you can see, you're operating your vehicle recklessly.
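You can make that rule precise by solving v*t + v^2/(2a) = d_visible for v. A sketch, with an assumed 1 s reaction time and 7 m/s^2 deceleration:

    import math

    def max_safe_speed(visible_m, reaction_s=1.0, decel=7.0):
        # solve v*t + v^2/(2a) = d  =>  v = a*(-t + sqrt(t^2 + 2d/a))
        t, a, d = reaction_s, decel, visible_m
        return a * (-t + math.sqrt(t ** 2 + 2 * d / a))

    v = max_safe_speed(50 * 0.3048)          # 50 ft ~ 15.2 m of visibility
    print("%.0f mph" % (v / 0.44704))        # ~21 mph

So if you could really only see 50 feet ahead, anything much above ~20 mph is overdriving your sight line, which is the parent's point.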
I think you are missing a key point here. Have you ever been on a street with streetlights and had difficulty seeing past two lane divider dashes? The point is that the camera footage is vastly under-indicating the amount of distance a human eye would have been able to see.
Furthermore, if there are street lights, and you go through an unlit area, you should be able to detect the presence of an obstacle because it will be obscuring the view of the road ahead.
Uber's technology clearly sucks, to the extent that I doubt they are even using Lidar. However, the woman driving the vehicle should be arrested for not being aware. If she had been looking at the road for more than a small fraction of the time she surely would have seen the person on the road.
Well... after the car hit her, there's no obstacle anymore. If there is no provision to detect an accident and then cancel the route being driven, it's natural that the car doesn't stop, since the obstacle is now behind it. It's a computer with wheels.
(closed captions: that was sarcasm. When colliding with anything, the correct response is not "just keep on driving"; and especially not when you've just hit a person)
If it were a human driver, human eye, etc., then maybe this would be the case. But Lidar isn't limited in this way, and reaction time shouldn't have been an issue.
I agree. I regularly drive in the country (unlit, forested, deer country) and believe that I have more visibility than this driver and car had. And I'm typically going 45-50 mph with Xenon headlights from 2004.
This is not what I expected at all. The original news reports based on the police comments made it seem like she was on the side of the road and then randomly darted out. That's not something I would really expect AV's to be able to cope with yet, if ever.
Instead what we see is a scenario that happens all the time: a pedestrian who is sleepy or perhaps with mental problems or on some substances...in any case not hyper alert...crossing a road without looking. It's not that busy at that time of night and I'm sure I have done the same thing. It's very discouraging and angering that these cars are being driven without the capability. I feel very disappointed. To me it's not super important whether an ordinary driver would've stopped: ordinary people drive drunk, exhausted and/or distracted all the time; but I hoped that AV's were already better than that.
Okay, isn't this the kind of situation where the machine was supposed to excel?
The dynamic range of the video is very low so it looks like the victim comes out of complete darkness but weren't all these sensors supposed to see the obstacle even in complete darkness?
BTW, please consider the low dynamic range of the video when commenting on the human's ability to avoid that accident. After driving in the dark for a while your eyes will adapt and you'll be able to see much more details in the shadows than a regular video camera can record.
This seems like a simple case. I would have expected the driver assist in my 2017 Subaru to have reacted to something in the road. I'm surprised that the much more sophisticated self driving system did not.
I can't count the number of commercials I've seen for Fords and Nissans and Subarus and every kind of modern car that has this exact feature, and this is exactly how the advertising plays out. Someone sprints in front of the car, the car stops automatically, everyone is safe, pedestrian continues with their life. I've never used it in real life, but I assume it works how the ads show it.
If Uber can't match a $25,000 off-the-shelf floor model mass-market midsized sedan for collision avoidance, it's hardly a self-driving car.
The system in my car does this but only under a certain speed, much lower than what the vehicle in this video appears to be doing. It's intended for stop-and-go city driving or to avoid hitting a kid that sprints out of a driveway in a slow residential area, not to slam on the brakes to avoid hitting a deer or person at 40mph.
I wonder if this was affected by the fact that the pedestrian was in another lane up until the very last second. Perhaps the car detected the pedestrian but failed to consider it an obstacle since it wasn't in its direct path. It could be unexpectedly difficult to account for pedestrian crossing speed if it caused automated cars to stop when a car in the next lane happened to "wobble" towards the automated car's lane.
I dunno, if it decided to ignore the pedestrian because she was in the other lane that's extremely troubling. The pedestrian was moving laterally across the road. If the car has detected that, it should infer that she might become an obstacle very shortly.
Driving is all about predicting the future. Think of every time you've been able to tell that someone is going to change lanes even though their blinker is off, or slowed down when a ball bounces into the street because you know there's going to be a kid following it. If the car isn't capable of that, it's not ready for public streets.
What's most troubling is that it doesn't even matter whether the Uber car thought the slow object in the other lane was a pedestrian, a car, a tree, whatever. Even if it thought the object was, say, the most "normal" thing it could be, another car, this would still be a special situation requiring action. Without knowing the object's classification, the estimated speed is enough to decide. Why would a car be stopped in the middle of a lane on a fast road? It should be treated as an obstacle that could grow to the side. After all, it could be a police, tow, construction, or disabled vehicle, and a cop or tow worker might be about to walk to the side.
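Even a crude constant-velocity extrapolation flags this case. A sketch, with invented numbers:

    # An object in the adjacent lane moving laterally toward our lane is a
    # hazard if it would reach our lane edge before we pass it.
    def crossing_hazard(lat_offset_m, lat_speed_ms, long_dist_m, ego_speed_ms):
        if lat_speed_ms <= 0:                    # not moving toward our lane
            return False
        t_cross = lat_offset_m / lat_speed_ms    # time until it enters our lane
        t_pass = long_dist_m / ego_speed_ms      # time until we reach it
        return t_cross < t_pass + 2.0            # 2 s safety margin

    # A pedestrian 3.5 m to the side, walking at 1.4 m/s, 50 m ahead of a
    # car doing 17.9 m/s (40 mph): t_cross = 2.5 s, t_pass = 2.8 s -> hazard.
    print(crossing_hazard(3.5, 1.4, 50.0, 17.9))  # True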
Actually many states now require by law that you get as far as possible from a lane with a disabled vehicle, as many human accidents have happened.
I am convinced Uber has been basically pretending to do the mountains of careful and sophisticated crap Waymo has actually gone to great lengths to do, and is just racing to put anything out so they can keep stringing investors along as far as they can before the jig is up. Well, the jig is up now.
I think target fixation also contributes to accidents involving emergency vehicles on the shoulder.
Back to Uber: the number I hear is that they are striving to do better than 13 miles per intervention needed to prevent an incident. For Waymo this is over 5,000 miles. I'm convinced too.
When combined with her walking a bicycle, I suspect this is an edge case they never tested. It may have tripped up their pedestrian detection or path prediction (or both) if they got inconsistent hits on the bike.
Therein lies what might be the core of the issue, though: "an edge case we hadn't tested." Perhaps how we DO AI right now just isn't quite ready for situations this complex? Maybe AI based on "oh, I've experienced something like this before" is not quite enough? Are we able to do better and get a system that can infer what's about to happen and make decisions without experiencing it directly... or even indirectly? I know some AI systems are able to do this now in limited cases... but it does make me wonder if, 0.05% of the time, the complexity of this task is still just a little outside the capabilities of our learning systems. And maybe we only see the results when something bad happens in that tiny window of time when the AI is unsure.
I am obviously no AI expert, just what I know from loosely following the field. But things like this cross my mind from time to time.
This is the exact thing I've been thinking the whole time: something's not totally right here, so let's at least slow down to better assess the situation. Everything works better in slo-mo. My human brain would then have a much better time evaluating the scene, and this would definitely also hold for the sensors and processing systems of an autonomous vehicle.
And even if it doesn't, and the system still concludes it's noise/an obstacle that isn't going to move etc., a low-speed collision is preferable to a high-speed one. Unless you can be completely confident you aren't going to hit something, slowing down is not a bad default action.
Even if you imagine it thought it was another vehicle stopped in the other lane of traffic, the SDV should have slowed. There's no way you should assume a stopped vehicle isn't hiding another danger and blowing through while speeding is what stupid humans do and what SDVs shouldn't.
If the car cannot predict "there's something in the next lane, could be in mine the next moment, better slow down just a bit", it has absolutely NO BUSINESS AT ALL driving around on public roads. At each and every occasion that I drove in a city, I have needed to take at least one such minor evasive maneuver due to someone suddenly getting into my lane, be it a car, a bike or a pedestrian. This is not even driving 101; if the car is unable to handle that, it is quite literally unfit for the road.
You would think that a reasonable project plan would attack the human safety/emergency situations first, and then move on to everything else. I guess having a car that avoids accidents but doesn't drive across the country on its own does not make for interesting headlines. This is the consequence of headlines-driven AI: "let's solve something hokey that grabs attention so management will be pleased, instead of doing the more meaningful, long-term R&D."
2017 Legacy with "Eye Sight." I have had one or two situations where the car emergency braked on my behalf. It happens when the car in front of me turns off the road and is nearly clear. I don't brake because I know that the car will clear the roadway before I get there but the system doesn't see it that way and brakes. I anticipate that now and avoid anything close enough to cause braking so I don't wind up the front end of a rear end collision.
The other thing this system does is to provide adaptive cruise control. If I'm behind a vehicle that results in slowing down and then switch lanes (to where there is another vehicle in front of me) the car seems to think it can resume speed and slip between the two vehicles. I've come to expect that too and disengage the speed control before switching lanes.
It also provides a warning when approaching the lane markings (unless I have indicated a lane change). Occasionally it triggers on seams in the pavement. It also provides steering input if the lane deviation increases. I've only experienced that when I tested it on purpose; I'm not sure it would reliably keep the vehicle in the lane.
Overall the system seems to be pretty good and though not perfect, is a net asset.
That sounds awful. Driving is hard enough when you have to second-guess what other drivers and pedestrians are going to do. Why add to that by having to second-guess what your own car is going to do?
I've got to side with the people who want no auto-driving until we have always-better-than-human auto-driving. When cars only have back seats, and no driver controls beyond a way to state your destination, auto-driving will be acceptable.
I love our 2017 Outback w/ EyeSight. Luckily, we haven't had to test the auto braking system at high speeds but the lane assist and assisted cruise control are wonderful.
Not a Subaru, but my Passat engaged emergency braking once. It was kinda spooky to go for the brake and realize it's already depressed. Definitely prevented an accident. I was paying attention too, but someone jumped into my lane and slammed their brakes on for who knows why; there wasn't anything in the road...
In some areas that maneuver is a strategy to collect an insurance payout. If the car had many passengers (who would all get soft tissue injuries) this might have been a "swoop and squat."
> someone jumped into my lane and slammed their brakes on for who knows why
Insurance fraud, possibly. Depending on jurisdiction if you rear-end someone you could automatically be 100% at fault (assuming the fraud is not discovered/proven)
Absolutely love my Outback + Eyesight. That said, the lane assist and automatic cruise control are not very sophisticated.
The automatic cruise control is great for freeway and some street driving, but don't expect it to brake very smoothly / like a human. I consider it outsourcing part of my concentration.
Lane assist is nice, but it won't auto-center -- if you were to take your hands off the wheel on a straightaway, it would "ping-pong" back and forth. I mostly like it on long drives; it reduces the amount of effort on bends.
- working in pairs, so there is social pressure, conversation, and two pairs of eyes to increase alertness and safety?
- doing shifts of 30 - 45 minutes at most [1] (although they could potentially swap back and forth with a co-driver)
- issued a dumb-phone for emergencies and searched for entertainment devices (it's good enough for Amazon warehouse staff)
- being monitored by the driver-facing camera, with training and termination for drivers who can't hack it
- monitored automatically for attention using eye tracking or other methods, with the car safely stopping if lack of attention is detected
- required to take over on a random, regular basis for a short period to keep them engaged and attentive (and obviously, the car keeps driving if they don't take over, but they are marked down)
Due to the boredom, it is an extremely demanding job, but the way it is being done is clearly not good enough.
[1] I can't find anything published about how long the shifts are, but I'm guessing they are longer.
I think these are sensible approaches to try and mitigate the problem.
Still, I'm not sure this is a problem that can be effectively mitigated. I've been _extremely_ suspicious of any AV system that requires a human to be attentive, alert, and ready to take over at a moment's notice.
Humans are just _so bad_ at that kind of task. Since we're apparently making people do that job anyway, I'm all for the mitigations you've outlined. But I think it's absolutely crazy that we have humans doing a job that we're just _really really_ bad at, and then making that the critical safety element of the testing.
All of those suggestions will increase cost and decrease profit, and there's no law mandating any of them, so why would they go above and beyond for 'safety' reasons?
I understand it's the ethical decision but at the end of the day for-profit companies are only interested in one thing: maximizing profits.
Robin_Message basically says a lot of what I was thinking, but you also need to consider what maximises profits in the long term (and self driving cars are a long term project) - is it doing the minimum now to reduce costs, or is it taking all reasonable precautions to avoid the public perception building that self-driving cars are unsafe?
Profit maximisation is not the only factor of running a business, even a public company.
But even given that, what are the PR and legal costs of this predictable [1] accident being made worse by a poor safety system? Plus the downtime on doing training/testing? And the probably harsher requirements that might now be enforced?
Maximizing profits doesn't mean doing everything on the cheap.
[1] in the sense there was likely to be an accident at some point.
The video makes it look like the woman popped out of the shadows, and makes it look like this was unavoidable. But that's not what a person would really see. People's eyes have great dynamic range. Take this picture for example (not mine):
What you'd see in real life is much closer to the edited version on the right, while unedited pictures (the version on the left or the uber video released) would make it seem like you can't see shit. A driver paying attention probably would have seen this person from far away. At the very least, the video doesn't convince me this was unavoidable.
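Anyone can test this on the released footage themselves: lifting the frame's gamma reveals whatever shadow detail the sensor actually recorded (and the dark-adapted eye does better still). A sketch, with a placeholder filename:

    import numpy as np
    from PIL import Image

    # "uber_frame.png" is a stand-in for a frame grabbed from the video
    frame = np.asarray(Image.open("uber_frame.png"), dtype=np.float32) / 255.0
    lifted = frame ** (1 / 2.2)   # gamma lift: brightens shadows, keeps highlights
    Image.fromarray((lifted * 255).astype(np.uint8)).save("uber_frame_lifted.png")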
Why is the car driving at full speed if it can't see the road ahead? Does the car know that visibility is low? What other sensors are they using? Did the car notice the pedestrian once they were fully lit up?
This video raises so many questions. I think we're going to be revisiting this incident over and over again in papers, reports and eventually textbooks.
>Why is the car driving at full speed if it can't see the road ahead?
This is the best question to ask.
Why? Because going through all the comments, there seem to be 2 trains of thought:
1) LIDAR should have seen it. (car's fault).
2) It's the pedestrian's fault.
Neither of these questions matters, because the real question is, like you said: if the car couldn't have seen her, then why drive at full (or actually above legal) speed?
Move fast and break things. Fake it till you make it. Drink the koolaid. All hail Silicon Valley. Blah.
The ugly truth is, looking at the high-end camera photos of the accident spot, a human driver had a 50-50 chance of avoiding this accident. This car didn't even react. But that's not the worst thing. The worst thing is that infrared and heat-camera tech is fairly advanced and would have seen this person even in pitch black. But it seems Uber chose to do "testing in production" and released an experiment onto the roads. It's well known that lidar tech has problems in rain, snow and low light. Sure, driving is great for machine learning, but at what cost? Especially when everyone expects so much more from this tech than to merely equal human drivers. The goal should be to remove accidents from the equation. But, especially for Uber, the goal is just to profit.
A straight road, in dry conditions, obstructed by an entire person and a bicycle with REFLECTORS on it, and the car doesn't see her?
I think whoever wrote this code should be in prison, quite honestly, along with their entire management chain. I suspect many people will feel the same way. This is a really really bad look for automated vehicles in general. It also backs up my contention that anything less than full 100% automation is worse than no automation at all, as the driver was complacent and had less than zero chance of taking over.
Not the wheels, I think, but the shoes of the woman (plus something near the handlebars). That bike would not be street-legal here (northern Europe) due to not having enough reflectors, although a self-driving car should still be able to spot it.
It certainly does look bad for self driving vehicles, and I definitely feel terrible that a life was lost from this.
However, how does one develop a self driving vehicle that’s 100% automated without the ability to test in real driving conditions? Despite this accident, self-driving vehicles have a fairly safe driving record for the number of miles and time they have been active.
It's a difficult question, because we cannot just dive into it assuming that the toll (in life/safety) of beta-testing AVs on public roads will result in a net benefit within a reasonable timespan. The human driving fatality rate is ~1 death per 100 million miles. Uber has 2 million miles driven and 1 fatality. It's obviously unfair to extrapolate and say that Uber has 50x the fatality rate of normal driving. But that means we would have to keep testing Uber AVs on public roads.
What if an Uber AV accidentally kills someone at the 2.5M mark? That's still not enough data to statistically compare apples to apples. Maybe the next 100M miles of Uber testing is fatality free...that still wouldn't be completely enough (right? I'm not great at stats but I would think we need at least a billion?). Of course, it could go the other way, with Uber AVs killing someone every 1M miles.
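A crude way to size it, using the statistician's "rule of three" (with zero events in N trials, the 95% upper bound on the rate is about 3/N):

    # With 0 fatalities observed in N miles, the 95% upper bound on the
    # fatality rate is roughly 3/N. To claim mere parity with humans:
    human_rate = 1 / 100e6            # ~1 fatality per 100M miles

    needed = 3 / human_rate           # miles of fatality-free driving
    print("%.0fM fatality-free miles just for parity" % (needed / 1e6))  # 300M
    # and substantially more than that to demonstrate "safer than human"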
As a general tech optimist, I'm inclined to think tech will get better, overall. But let's face it, that's not a given. And in the meantime, it's likely the tech upper-class won't be the ones who suffer the most while tech improves. The case at hand being the prime example: a homeless recently-imprisoned woman was killed.
Earlier today someone submitted an interesting RAND study that argued that the testing time for autonomous vehicles to meet statistical reliability for safety testing would be on the order of decades, or even centuries, and there would still be no guarantee that AVs would be safer. I'm hoping RAND is just being really pessimistic here...
Build quite large test facility with diverse artificial scenes. Pay people to walk/bike/drive around. Use external motion capture, GPS, stationary radars, etc. with additional offboard computation to act as a watchdog and step in when the onboard systems fail. Would be expensive, but the companies pursuing autonomous driving can afford it.
There are several parts here: the hardware a self-driving car uses to "see", the software a self-driving car uses to process sensor input into a representation of the vehicle's surroundings, and then the software that makes decisions and issues commands to the car to actually "drive" it.
You have a car with all of this running but a human driving while you drive around for hundreds of millions of vehicle miles. You then review the data for these trips and use it to assess both the ability of the hardware/software to maintain a meaningful degree of "situational awareness" as well as the reasonableness of the software to control the car if it had actually been doing so. From this you can determine how good of a job the system is doing and build a fairly good idea of how much you can actually trust the car to drive on its own. Then you can let the automation drive the car with a human behind the wheel and actually paying attention. From that you can further improve your assessment of how well the car operates under realistic driving conditions.
However, if the human isn't paying attention then you potentially significantly increase risk. Especially if you short-changed the previous step of monitoring the automation's performance while "side seat driving".
In the case of, say, Waymo, they've done a fairly good job here because they've been careful and thorough in each step. In the case of Uber, as is reflective of their corporate culture, they have rushed ahead and taken on a lot more risk than they should have, in this case putting bystanders in harm's way.
After comparing the released video with a nighttime video [1] of the same stretch of road, I think the released video very likely understates how visible the victim was, due to the camera's limited dynamic range. In [2] the victim's shoes become visible. Note the lamp post with a traffic sign to the right. In [3], captured from [1], note the same lamp post on the right edge and the storm drains on the curb, not far from the lamp post and not that dark. [4] shows the victim was very close to the storm drains. The camera that captured the released video had to adjust to the bright headlights of the vehicle. IMO the victim would have been a lot more visible to a human driver.
Clearly there are still issues with the car. The sensors should have picked up on this. I would be surprised if the automated emergency braking in my Honda wouldn’t have picked this up based on past experience of a close call at night where it saved my ass.
I want to focus a bit though on the human failure here. Uber has been piloting this program with human backup drivers just for situations like this until there is enough confidence in the maturity and safety of these cars.
Whether this driver could have prevented the collision if their eyes were on the road, we will never know. What is clear is the driver was looking down, most likely on their cellphone.
For that, there is negligence on the part of both the driver and Uber. Uber records the interior of these cars, so there should have been some type of review process in place to catch this kind of behavior from the drivers so it could be addressed.
While I do agree there is a level of responsibility on the driver's part here, I will also say that I think it's equally unreasonable to expect someone to stay alert in that situation. Human brains need stimulus and interaction to stay alert and focused. To me this is as much a failure of understanding how the human mind functions... not just on Uber's part but on the part of all the companies that think the human will stay alert while the car does the work. Our brains just don't work that way. Once the brain starts to trust the car, it disengages and wanders off onto other distractions.
I really hope Tempe has the balls to revoke Uber's AI driving license.
This is completely unacceptable. Their approach to AI driving has been terrible from the start. They don't have the data, nor the expertise to do this in a safe way.
This is not the first time they've fucked up royally.
It strikes me that a good way to test self-driving cars would be to simulate people doing random dangerous things, like walking in front of the car, and see how good the car is at avoiding an accident. Then you give the car less and less time, to see how well it can e.g. respond to a giant mannequin being thrown in front of it.
Following that train of thought, an accident like this could have been caused by fear of corporate liability.
Because you would eventually have to set some threshold below which the car would not respond to an errant pedestrian, for the safety of the occupants, especially in wet weather. A good lawyer could argue that's the moment the car's software decided to kill the pedestrian. So Uber's AV team might have just said "she was jaywalking" to avoid writing code that dealt with situations outside the rules of the road, because it might look controversial in the future, even if that code would have the effect of sparing more lives than it takes.
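Going back to the mannequin-sweep idea above, here's a sketch of how that test could look; nothing here is a real simulator API, and the reaction and braking figures are assumptions:

```python
# Toy scenario sweep: how late can an obstacle appear before the car
# can no longer stop? Assumed perception latency and braking figures.
def avoids(time_to_appearance_s, speed_mps,
           reaction_s=0.3, decel_mps2=7.0):
    """True if a car that brakes hard after `reaction_s` stops before
    an obstacle appearing `time_to_appearance_s` of travel ahead."""
    obstacle_dist = speed_mps * time_to_appearance_s
    stop_dist = speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)
    return stop_dist < obstacle_dist

for t in (3.0, 2.0, 1.5, 1.0, 0.5):
    result = "avoided" if avoids(t, speed_mps=17.0) else "HIT"
    print(f"obstacle appears {t:.1f}s out at ~38 mph: {result}")
# Avoided at 2.0s and above; hit at 1.5s and below with these numbers.
```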
1) I'd hope and assume that an autonomous vehicle would have better sensors than a single mounted video camera. It does have other sensors that it uses, right? Were they all operational? What do they show? Combined, do they have dynamic range comparable to the human eye?
2) The road seems really dark. Are the headlights intentionally dimmed here? At night, on an empty road with no opposing traffic, shouldn't the headlights normally be on bright, rather than dimmed? Does the autonomous vehicle control this switch, or the human driver?
3) I feel like a human driver would have been likely to swerve left to avoid the human. They may still have hit the rear of the bicycle, but no one would have died. Does the car even attempt to prioritize collisions with inanimate objects over living ones? Does it at least attempt to slow down if a collision seems inevitable?
4) If you cannot stop in time to avoid an obstacle on the road after it is illuminated by your headlights, it would seem clear that you are travelling "too fast for conditions". Either you need brighter lights, or you need to slow down. Is this in dispute?
Seriously. Go spend some time in /r/roadcam on reddit to get yourself accustomed to what a typical dashboard camera video looks like. People here are drawing all sorts of conclusions based on seeing an unfamiliar type of video, and that's a really bad thing.
The two biggest things to get used to are:
1. Dashcams do a terrible job in dark or low-light scenes. The car's headlights, and the streetlights, were almost certainly functioning just fine; the scene only looks as dark as it does because that's the best the camera can do. A dashcam video of a dark/low-light incident will always look significantly darker than how a human would perceive it when actually present.
2. Dashcams often have a wider field of view than their human operators. Which means you often see obstacles in the video before they'd be in the field of view of a human driver (this causes no end of "well I saw that coming way before the driver did!" snark in /r/roadcam discussions).
And in theory this vehicle has infrared on board, and a LIDAR system scanning the road ahead for trouble. Any such system should have had zero difficulty detecting the pedestrian in time to stop or avoid. The fact that it didn't is a very bad sign for Uber's implementation.
Thanks for the clarifications - dashcams sound exactly like the first few generations of GoPro cameras (haven't tried any of the last few so can't generalize) - almost useless in low light conditions, great in sunlight or in front of a high powered light source.
The first question that comes to my mind watching this is: why didn't the _pedestrian_ see the _car_? It looks as though she didn't even look up until the car was only a few feet away from her. I'm not just trying to "blame the victim". The car "should" have responded sooner. But see-and-avoid works both ways. If you're choosing to cross a road in the dark, you, too, have a responsibility to understand the drivers' limited visibility and act accordingly. My point is that this is a teachable moment for not only SDV developers, but also for pedestrians.
The fact that the pedestrian died is precisely the point. As a frequent pedestrian, and as a person who likes people, and because people are often pedestrians, I would like to prevent pedestrian deaths. That happens by holding SDV developers to rigorous standards, but also by highlighting that crossing the street safely requires paying close attention and being careful, more so than many people do. I've seen several places where pedestrians have the right of way a lot, and many will just step directly into the street, legally mind you, when cars are traveling quickly nearby. No matter how you slice it, that yields an unacceptably high probability of death, and you can blame whoever you want but you will still be dead. There's a difference between victim blaming, and pointing out that you can avoid becoming a victim by taking sensible precautions. This is the latter. It doesn't absolve SDV developers of any carelessness they may have had in their work to point that out too.
If a pedestrian has the right of way, why should the impetus be on them to be mindful? Why shouldn't the impetus be on drivers to slow down? Why should pedestrians live in constant fear of being killed when the law should protect them?
The law provides justice after the fact; it does not protect against negligence in the moment it is happening. In the case of crossing the road, with a many-thousand-pound hunk of metal vs a human being, the human will overwhelmingly bear the brunt of the immediate harm.
It seems pretty obvious to me that being a mindful pedestrian is simply a matter of self-preservation.
Right of way means cars are obligated to stop when you cross the street. It doesn't solve the practical issue of drivers needing at least some time to react, even if they are paying complete attention and driving at a reasonable, safe speed.
Difficulty of looking both ways and pausing for a second before you cross the street: 1 unit. Difficulty of getting every single driver in any area where pedestrians have the right of way to drive slow enough that they can make a safe stop, assuming any pedestrian can legally dart into the street without warning: 1,000,000,000 units.
And yet I've seen plenty of people turn 90 degrees across a street at their usual walking pace with no pause and with their eyes on their phone and headphones in their ears.
If you want to spin that particular chamber of probability because legally you're technically protected, honestly I feel worse for the driver who has to live with that memory now and wonder what they possibly could have done to prevent it short of just never driving a car at all.
If the article were about a cyclist who, while riding without a helmet, was run over and killed by a car whose driver didn’t do a safe lane change, and I commented:
“Not trying to blame the victim... but this is a teachable moment not only for drivers, but for cyclists...”
... it would be a true statement, which is why I didn’t say the GP was making a false one, but a normal native speaker would at least recognize it as a strange thing to say.
Cyclists should wear helmets, and the cyclist certainly is partly to blame. But no one was suggesting that the cyclist is guilt-free, so the actual effect of the statement is to try and draw a false equivalence.
It’s like saying “sure, global warming is real, but let’s not forget the effects of gradual and natural cyclical climate change.” No educated person in the discussion “forgot” the second, but by phrasing it this way you’re implying we should be paying equal attention to a large effect and a much smaller one.
You really think drivers have carte blanche to run down people in the road as long as they have right of way? That drivers have no responsibility to avoid killing people as long as those people aren't on crosswalks? That's what you seem to be saying here.
Legally correct, but technically wrong. When it comes to judging whether self-driving cars are safe enough to be allowed on the roads with the rest of us, incidents like this are the kiss of death. With a human driver at the wheel this woman would be alive, even though she was jaywalking. This is going to set back public acceptance of self-driving technology a ways.
That's a strong assumption to make, that a human could have done better. It was dark, night, nowhere near a normal crosswalk, and not where I as a driver would normally expect a person to be. I think it would have been hard for even the most alert driver to not hit this person in this circumstance.
The point wasn't quite "what would human do?" but "computer just drives straight over people, BAN KILLER ROBOTS NOW!!!" Public perception and its backlash is not always commensurate with the technical side.
No, as others have discussed, driving above the speed limit in "the middle of the night with no lights on" means the driver is at fault as well.
I think the only circumstance where a pedestrian can cross a road without watching for cars is when protected by a traffic light, and even then that's not wise. A pedestrian crossing a road outside of any crosswalk at night without watching for cars is playing Russian roulette. That doesn't mean the driver (or self-driving system) shouldn't have spotted her as well and prevented the accident. But it is clearly a key factor enabling that accident.
> I think the only circumstance where a pedestrian can cross a road without watching for cars is when protected by a traffic light, and even then that's not wise.
You are definitely correct that even then you should check. And once you check and see it is clear... keep checking as you cross.
An acquaintance of mine from college was killed crossing Colorado Blvd in Pasadena, California late on a Friday afternoon, in a crosswalk at a fully controlled intersection, with the cross traffic having a red light.
Colorado Blvd is a major street, and late on a Friday afternoon would have a fairly high density of cars.
The car that hit him was going something like 80 mph. At the time he started crossing, that car would have been 3 blocks away. Even if it had been the only car on the road, at that distance there would be no way to judge the speed, and any car that far away traveling anywhere near legal speeds would be far enough away to not be a danger to any normal pedestrian crossing.
With the other cars that were on the road on a late Friday, it probably would not even be possible to see the car that hit him when he entered the crosswalk. There would have been several cars stopped at the intersection obstructing his view, plus cars at the intersections further up the road, or in transit between the intersections.
His only chance would have been to keep checking oncoming cars as he crossed, even after all the cars actually at or near his intersection had stopped.
Almost no one does that. Mostly once we see everyone nearby stop we just concentrate on cars that are turning and so might enter the crosswalk even though the light is red (assuming we are in a right turn on red jurisdiction).
Lesson #1: Treat each step as you cross the street as if it is your first step into the street. Do your full "is it safe to cross" scan constantly.
Lesson #2: Cars very far away at the time you start to cross can make it to you before you finish, even if traffic seems heavy enough that there is no way they could go fast enough to reach you. Your scan needs to look out farther than you think it needs to.
The girl who ran a red light three feet in front of me while looking at her phone would agree with you -- that is, if she even realized she had done it. (I was in the crosswalk and I stopped when it became apparent that she wouldn't)
I see a lot of people who cross the street without looking. I definitely don’t get it. I constantly check everywhere even when I clearly have the right of way. But that’s how people are.
These people need to watch more videos of people getting run right the hell over before it finally sinks in. I’ve watched enough that I never walk close alongside any road, and keep a good distance from an intersection while waiting for the cars to stop, because sometimes they don’t!
And I wouldn't be surprised if many of these people are really concerned about the threat of terrorism or of flying while ignoring an imminent lethal risk that is many orders of magnitude higher.
I sometimes have trouble figuring out what lane an oncoming car is on, with glasses, in daytime. It wouldn't surprise me if she was absent-mindedly taking a familiar route to her homeless camp and mistakenly thought the car was on the left lane.
> If you're choosing to cross a road in the dark, you, too, have a responsibility to understand the drivers' limited visibility and act accordingly.
You have a self-responsibility, which most people act on (self-preservation). But as a driver you have a responsibility to pedestrians who can be killed by the machine that you are driving. That is the greater responsibility and why active care for pedestrians should be one of the highest regards of a driver (or autonomous driving system).
If you're choosing to cross a road in the dark, you, too, have a responsibility to understand the drivers' limited visibility and act accordingly.
No kidding. This might be a somewhat controversial opinion, but I think the elephant in the room here is this general "pedestrian has right of way" notion. It's much easier for pedestrians to watch out for and evade cars than the other way around, and yet we seem to insist the opposite? That seems rather backwards and contrary to the laws of physics.
There are several lanes free, with plenty of room for the car to pass without issue. I think the pedestrian assumed this was just an inconsiderate driver who wouldn't slow down but would drive around her... as there is no way on earth a human would ever have hit her! The video is totally unrepresentative, and if it were representative, then the car should have been doing half the speed.
On the one hand, the cyclist became visible in the footage at a very late stage. If that is how the driver's eye saw it, they would have had very little time to react; however, they may have been able to do a sharp swerve.
On the other, the operator was clearly distracted, regardless of whether they would have been able to avoid this.
If they didn't want drivers looking at those displays, why would they put them there? Either looking at those displays is part of the job, or the display should automatically turn off. I'm guessing it's the former.
"however they may have been able to do a sharp swerve."
As someone that enjoys racing cars and other types of high performance driving I feel like I can pretty confidently say this isn't even remotely a realistic statement.
You're grossly overestimating human response times. Even if we're ignoring response time and imagining the driver instantly turning the steering wheel, large SUVs like the XC90 simply can't generate the lateral acceleration required to move the vehicle clear of the pedestrian in the time and space shown in that video.
But as many people have pointed out, what the camera captured is not what the human eye would have seen. The eye has much better dynamic range; an alert driver would have had more time than what we can see on the video.
Although it seems from the dashcam footage that the scene is dark and the bike/pedestrian comes out of nowhere, humans are able to handle extreme low-light situations and I don't doubt that this pedestrian would have been visible to an attentive driver.
If autonomous cars cannot handle low-light conditions given the current state of hardware, whether via visible light, IR, or LIDAR, then they need to run in the daytime only.
> humans are able to handle extreme low-light situations
But this wasn't an extreme low light situation. It was a situation with street lights alternating with extreme low light regions. Under those conditions the human eye will not be dark adapted and your low light vision won't be very good.
I've driven at night and the video is not representative of what I see. With your car's headlights and the road lights, there are no blind spots on the road. If there is not enough street illumination and my headlights cannot reach far enough ahead I turn the high beams on and travel cautiously.
> With your car's headlights and the road lights, there are no blind spots on the road.
I agree that is the optimum, but I don't think all roads are optimal in this respect.
> If there is not enough street illumination and my headlights cannot reach far enough ahead I turn the high beams on and travel cautiously.
From what I can see in the video, this is what a reasonably cautious human driver would have been doing on the street shown, since it does not appear to have sufficient light to avoid blind spots. It does not appear that the Uber car had its high beams on. But, as has been noted in other comments in this thread, the car had other means besides visible light (LIDAR and IR) for seeing the pedestrian, so it should have been able to do better than a human driver at avoiding the collision.
Two obvious things from the footage: the pedestrian is invisible until the last moment, and the "driver" is focused on something else, probably a smartphone.
Given how little time there was between the moment the woman became visible and the impact, driver action may not have been able to avoid it, but it will be interesting to see whether and how the driver's responsibility is engaged.
Anyway, one advantage of an autonomous car is the theoretical ability to have faster-than-human reactions and extra-human sensors. It should have been able to catch the problem.
> the pedestrian is invisible until the last moment
In this crappy video, yes. Maybe it was scaled down massively or made darker before being handed to the police; we don't know. She might have been visible to a normal person.
The police should ask for the raw footage, and at any rate the DA will if there's any doubt. They will probably engage a forensic computer tech expert as well, and possibly ASU professors/engineers. But from a black-box perspective it's clear that the system didn't detect anything. Still, IANAL, but I would guess there will be a reckless driving charge for the driver here... mitigated by the pedestrian failing to yield to the car by not crossing at a crosswalk. The human driver was clearly not paying attention, and there is video evidence of that. If the driver had kept their eyes on the road and the accident had still happened, then probably no charges would be filed.
Agree that the driver was clearly not paying attention.
I was surprised that the headlights did not illuminate further down the road. That is, drivers are cautioned not to “out-drive your headlights” — meaning that you shouldn’t drive so fast that by the time you see something, you don’t have time to react. Maybe a human driver couldn’t have avoided the accident given the speed the self-driving car was traveling, but perhaps a human driver would have (or should have) driven slower?
>I was surprised that the headlights did not illuminate further down the road.
You are surprised because your intuition and experience tells you that it is very easy to see much further down the road when traveling on a road with street lights and non-high-beam headlights on. The footage is highly misleading regarding how far a human eye would be able to see.
According to the videos of Waymo’s cars, their cars would’ve seen her. Lidar should be more effective at night.
I guess it’s obvious Uber didn’t steal their tech. I was on their side from the initial reports but there’s no reason that should’ve happened if that car was equipped with lidar.
The onus is on the driver to see that the path ahead is clear. Uber clearly violated the 3-second rule [i.e. reaction time plus braking time, with the braking time dominating the equation]. Uber was driving too fast for the conditions. If you have only 30m of clear view, you can't drive faster than 10m/s. Driving faster, as Uber did here, is basically driving blind: reckless and negligent. The woman was in the headlights for only 1 second, so Uber was driving 3 times faster than the maximum safe speed for those conditions (specifically, for the power and angle of their low-beam headlights).
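A quick sanity check of that arithmetic (standard stopping-distance kinematics; the 1.5 s reaction time and ~0.7 g braking are my assumptions, not figures from the investigation):

```python
# Back-of-the-envelope: maximum speed at which you can stop within a
# given sight distance. Assumed values, not investigation data.
import math

REACTION_S = 1.5      # typical driver perception-reaction time
DECEL_MPS2 = 7.0      # hard braking on dry asphalt (~0.7 g)

def max_safe_speed(sight_m):
    # Solve: v * t_r + v^2 / (2a) = sight  (quadratic in v)
    a, b, c = 1 / (2 * DECEL_MPS2), REACTION_S, -sight_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

v = max_safe_speed(30.0)
print(f"30 m of clear view -> {v:.1f} m/s ({v * 2.237:.0f} mph)")
# ~12.5 m/s (~28 mph): the same ballpark as the parent's 10 m/s, and
# well below the ~17 m/s (38 mph) the vehicle reportedly traveled.
```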
>the pedestrian showed up in the field of view right before the collision
It takes AZ police and a $70B corp together to conclude that the woman appeared too fast. Guess what? If people and objects appear too fast in your field of view, then you're driving too fast.
In Uber’s own tech blog, it describes its use of LIDAR to generate a wire frame of each object and track it. The lighting should be irrelevant for the vehicle, while it is essential for the driver.
From the below article: “Last fall, Uber officials showing off their vehicles on Mill Avenue said their radar and lidar were able to detect objects, including jaywalkers, as far as 100 yards away and avoid collisions.”
Something went really wrong here, and I look forward to the release of the rest of the data. I’m wondering why the woman, who had a large enough silhouette, wasn’t detected by the software.
I suspect she was picked up by the sensors and categorised as an object. These are absolute fundamentals, and I can't fathom the car working at all if it can't do this. If it was being driven with a limited set of sensors or a malfunctioning sensor, hoo boy the NTSB will have a few things to say.
I reckon it's more likely that this was due to a software issue where she was flagged as "not a collision concern" or similar, due to being in the other lane on a 2-lane road, or due to her path not being predicted accurately because of the odd returns a bicycle in profile produces at that range (it may have thrown off their regular pedestrian detection). In any case, a terrible shame.
Okay, so Tempe police aren't going to be able to get & grok the raw data on how the AI was handling things. So how does that shake out in an investigation? It seems like they'll have to rely very heavily on the good faith of Uber (hah!) to honestly interpret & communicate the truth of that side of the story.
Releasing the video so soon without the equally important (probably more so) AI side is almost deliberately misleading, and at best shows Tempe ill equipped to deal with the investigation if they believe that video to answer any more questions than it raises, and I can't believe they don't know better. The video screams "Blame the poor light/lazy driver/idiot pedestrian!" at least for viewers that aren't more savvy on the tech involved.
Would it matter from the legal point of view? The police have evidence of speed unsafe for the conditions and of driver inattention. I guess it depends on what the legislature legislated in AZ for these cars/situations. The AI should get scrutiny, but I am not sure if that is a police matter or a DA matter.
Watch the footage carefully and you'll see that the car actually tracks to the right immediately before impact.
When centered there's about 20" between the sides of the car and the lane markers. (120" lane - 80" car) / 2.
At the moment of impact, the victim appears to be between 3' and 4' from the right lane marker. To avoid the fatality, the car would have only had to track left 2' from center.
So my question is - what was the last possible moment such an avoidance maneuver (veer left) could have been initiated?
I think that's an interesting factor in assessing whether this tragedy was avoidable or not. It certainly seems like a technical failure here, in that the car was easing slightly to the right and not braking or veering left.
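Some rough numbers on that last-possible-moment question (the ~2 ft offset comes from the geometry above; the 0.7 g lateral limit and the reaction times are my assumptions):

```python
# How much time does a 2 ft leftward move actually take?
# Assumption: lateral motion starts from zero and is limited by the
# tires, ~0.7 g of lateral acceleration for a large SUV like an XC90.
import math

A = 0.7 * 9.81            # lateral acceleration limit, m/s^2
OFFSET_M = 2 * 0.3048     # the ~2 ft computed above

t_steer = math.sqrt(2 * OFFSET_M / A)   # from d = a t^2 / 2
print(f"steering alone: {t_steer:.2f} s")        # ~0.42 s

# Add human reaction time and the window gets much tighter:
for reaction in (0.0, 1.0, 1.5):
    print(f"with {reaction:.1f}s reaction: {t_steer + reaction:.2f} s")
# At 17 m/s (~38 mph), 1.9 s is ~32 m of travel, so the maneuver is
# only plausible if the pedestrian is spotted well before the video
# makes her visible.
```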
When I cross-reference the lane lines at 0:03 with google maps[1], I get an estimate of the car being about 60ft-70ft away from the victim when I was first able to spot her shoe in the video. Some googling[2] says human reaction at 40mph would've taken 59ft, with an additional 80ft of actual stopping time.
This seems in line with the police report that "it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven)"
Maybe a SDV with superhuman reaction time should've been able to stop some 10ft after hitting her, and maybe if it had night vision-like sensors, it should've been able to see her from farther away, but my impression is that it would've been impossible for a regular human driver to avoid this accident.
Given that the SDV didn't attempt to reduce speed before she was visible under regular light, I'm forced to assume the sensors do require light in order to detect obstacles and that there wasn't enough light to activate the sensor before the headlights illuminated her.
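Rough arithmetic on those numbers (the ~65 ft sighting distance and 59 ft reaction distance come from the estimates above; the braking figure is my assumption):

```python
# If the driver first sees the pedestrian ~65 ft out and spends ~59 ft
# just reacting, only ~6 ft of braking room remains. Distances from the
# parent comment; deceleration is assumed.
V0 = 38 * 0.447                      # 38 mph in m/s (~17 m/s)
DECEL = 7.0                          # m/s^2, hard braking, dry asphalt
brake_dist_m = (65 - 59) * 0.3048    # ~1.8 m of braking room

v_impact_sq = V0**2 - 2 * DECEL * brake_dist_m   # v^2 = v0^2 - 2ad
v_impact = max(0.0, v_impact_sq) ** 0.5
print(f"impact at ~{v_impact / 0.447:.0f} mph")  # ~36 mph: barely slowed
```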
1. Human eyes have a better dynamic range than cameras do. The camera is thus displaying a worse contrast than a human eye can see.
2. Motion is preattentive in the visual system. (When you think about it, panicking about subtle motion in the corner of your eye tends to be evolutionarily advantageous).
So the time we have to identify the pedestrian using the video is a minimum bound of the time, not necessarily an accurate reflection of the time. It is not clear how much extra time (if any) a reasonable driver would have in this incident.
On a collision course, there's zero apparent motion of the object you're about to collide with; it holds a constant bearing. It's one of the quirks of the human vision system that makes it particularly ill-suited to operating motor vehicles.
The speed limit is only 35, not 40, on that road, unless we're looking at different roads? If the driver is paying attention, they could turn the wheel a bit and split the left lane (there doesn't appear to be anyone else on the road) and avoid the person, who was halfway through the lane at the time of impact. They might not clear the person entirely, but clipping the back of the bike would certainly be better than a direct impact. You wouldn't even have to jerk the wheel and swerve at such a slow speed, and on a dry road with modern TCS even over-correcting is handled to limit what your input does. Of course, all this is just playing armchair quarterback, so who knows if anyone, myself included, would have been able to react in time. It's more of a thought experiment about how an alternative outcome could have occurred.
I'm curious how the visibility differs between what we see on this camera and what a human actually sees in person.
According to Google Maps, the speed limit is 45 northbound (which is the way the car was going) and 35 southbound. The report was that the car was going 38, but I could not find stopping-distance estimates for that exact number, so give or take a little.
Looking at the video, it seemed like the driver reacted (facially) about a second after she shifted her gaze from some device to the road. It took me several runs of replaying the video to narrow down the time between when I first saw her shoe and collision to about 2 seconds. Also, recall this was at 10pm. In my opinion, swerving or braking with a 1-2 second notice is extremely hard, especially if it's late and you're tired. To be perfectly honest, if it was me, I don't think I would've been able to react at all before the collision.
It appears that Uber may have disabled/overridden Volvo's built-in safety platform, "City Safety". City Safety automatically brakes for obstructions at speeds up to 50 MPH.
A lot of people seem to mention how LiDAR should have seen her coming: Generally speaking, that's true. But it's difficult. In a single frame the system would hopefully detect something on the left of the lane, but wouldn't know what it is (if they use AI it's probably not trained for "person shoving bike packed with plastic bags").
If you have the object detection in place, you still need a trajectory for that object; if you don't know the exact angle you're looking at that object it's difficult. And moving at any speed there are plenty of things that could go wrong with the calculation, especially for slow objects.
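To make the trajectory problem concrete, here's a toy sketch of the kind of check involved. It is purely illustrative (real stacks use far more sophisticated trackers), and its constant-velocity assumption is exactly the sort of simplification that fails for a slow, oddly-shaped return like a person walking a loaded bicycle:

```python
# Toy trajectory check, purely illustrative. Frame: x along the lane
# (car at origin moving +x), y lateral.
def crosses_path(obj_x, obj_y, obj_vx, obj_vy,
                 car_speed, lane_half_width=1.8, horizon=5.0, dt=0.1):
    t = 0.0
    while t <= horizon:
        rel_x = (obj_x + obj_vx * t) - car_speed * t   # still ahead?
        obj_lat = obj_y + obj_vy * t                   # in our corridor?
        if abs(obj_lat) < lane_half_width and 0 < rel_x < car_speed * 2:
            return t   # predicted conflict ~t seconds from now
        t += dt
    return None

# Pedestrian 40 m ahead, one lane (3.5 m) to the left, walking right
# at 1.4 m/s; car doing 17 m/s (~38 mph):
print(crosses_path(40.0, 3.5, 0.0, -1.4, 17.0))  # ~1.3 s of warning
```

Get the lateral velocity estimate wrong by even a little, and that warning time evaporates, which is consistent with the "odd returns from a bicycle in profile" theory below.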
Also remember you're seeing a lot of oncoming traffic (for which you don't want to do an emergency brake) and cross traffic moving onto the road, letting you pass before they enter your lane (for which you might want to slow down a bit).
Maybe the bags reflected LiDAR light causing a wrong classification... Maybe their detection noise was too high and the quick fix was to turn down sensitivity.
Whatever happened exactly (and I hope we get more information), this is a terrible accident, because the object in question was not part of some controlled test, but a human being left dead.
Still, the technology is very promising, but companies should be more careful with testing: make sure drivers are actually watching the road (is that IR light on the driver from the camera, or a smartphone?), and maybe do more testing in controlled environments.
As a software engineer, I would only release an autonomous car into the wild after I trust it enough to walk into its path on a dark road (with enough space to brake; suicides cannot be avoided)... I wonder if the Uber engineers would?
I don't know how you can watch this video and then say that they should "maybe do more testing in controlled environments".
This was a complete failure and Uber's cars should be immediately banned from testing on public roads and there should be investigation into their practices to ensure public safety while testing their vehicles. If they rushed it trying to play catch up with Waymo they should be punished.
It's a "maybe" because I don't have any more data points than what's present in the video - only assumptions.
But I strongly assume this could have been avoided by more thorough testing (e.g. use a contraption to shove various inanimate objects in the path of the car at various speeds [both object and car] in various light conditions).
And if a proper investigation finds the people responsible for this accident (engineers doing dirty fixes, managers neglecting safety, the driver just being an unaware dummy; you name it), they should carry the consequences the same as a human driver would.
And yes, their self-driving cars should stay off the road until the fault has been found and fixed. I never claimed the opposite. (Aren't they? IMHO it's the obvious thing to do; but then we're talking about Uber and their poor sense of responsibility...)
Eventually "maybe" has too weak of a connotation in this context (as a non-native speaker I sometimes find it difficult to pick the correct variant for the exact thing I want to implicate) - for which I apologise if I mislead you there.
According to this article, the algos only track objects that have been classified as "moving" and the initial classification is what matters.
"Let’s say that there is a huge red fire truck idling at the side of the road. An autonomous car can’t see the fire truck if the fire truck suddenly pulls out into traffic. Why? Autonomous cars have a stop and start problem. They don’t have unlimited processing power, so they cut down on calculations by only calculating the potential future location of objects that seem to be in motion. Because they are not calculating the trajectory for the stationary fire truck, only for objects in motion (like pedestrians or bicyclists), they can’t react quickly to register a previously stationary object as an object in motion.
If a giant red fire truck is parked, and it suddenly swings into traffic, the autonomous car can’t react in time."
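If that description is accurate (I have no inside knowledge of any vendor's stack), the failure mode is easy to sketch: a tracker that classifies an object as moving or stationary once, at first detection, will never predict motion for a parked truck that pulls out.

```python
# Sketch of the failure mode described above -- NOT anyone's actual
# code, just an illustration of "only predict objects first seen moving".
SPEED_EPS = 0.5  # m/s; below this an object is classified "stationary"

class TrackedObject:
    def __init__(self, pos, vel):
        self.pos, self.vel = pos, vel
        # Classification happens once, at first detection -- the bug:
        self.is_moving = abs(vel) > SPEED_EPS

def predicted_position(o, horizon=3.0):
    if o.is_moving:
        return o.pos + o.vel * horizon
    return o.pos   # "stationary" objects are assumed never to move

truck = TrackedObject(pos=50.0, vel=0.0)   # parked fire truck
truck.vel = 4.0                            # ...pulls out into traffic
print(predicted_position(truck))           # still 50.0: no motion predicted
```

The fix is as easy to state as the bug (re-classify on every frame), which is one reason the "limited processing power" explanation in the quoted article deserves some skepticism.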
I've noticed that pedestrians in the US tend not to look at cars coming their way when they cross intersections. I'm not sure if it's an attitude of "you're supposed to stop for me" and they're willing to bet on that happening every time, or if they're really unaware of the danger of a few tons of steel coming at them.
This always struck me as very strange as someone from Europe - they teach kids in elementary school to look both ways and not to underestimate the speed of a car.
There are about 2500 pedestrian deaths in the US every year [1], it's just tragic and some of these fatalities could certainly be avoided :/
Edit: Since sgc mentioned 6-6.5k pedestrian deaths per year in the EU, which didn't sound any better than the US, I tried to come up with pedestrian fatalities per 100k population (data for 2012 or 2013):
As someone who has spent a fair amount of time in both places, I can say that the US pedestrian feels more entitled due to an almost universal pedestrian legal right of way outside of controlled intersections (stop lights). It's a foolhardy attitude, and completely antithetical to the way people are taught (which is: look right, then left, then right again before stepping off the curb). But Italians often do the same because they wouldn't ever cross the street otherwise, and there will be a small group of risk takers adding to the statistics in every country.
There is also a pedestrian right of way in Europe, at least in the countries I know of.
I think that European drivers are convicted when they hurt a pedestrian. I don't think that Americans get convicted as often or as long. That could explain why they don't drive as carefully.
How could you possibly come to that interpretation of the data? There are 2.5x more pedestrian fatalities in the EU. It would appear that US roads are safer for pedestrians, and of course drivers are convicted if they are at fault.
The first link is very interesting - for example the UK was able to reduce pedestrian fatalities from ~1500 per year (absolute numbers) in 1991 to ~500 in 2014.
I was definitely impressed with the improvements. There is an EU wide plan for fatality reductions for 2020. It doesn't appear they will hit their targets, but much seems to have come of it.
This is definitely not a case of someone jumping out into the road, as was previously reported. Sure, jaywalking at night is a bad idea, but this is exactly the type of thing an automated driving system should catch and avoid.
Right. This really is the worst case scenario for Uber, the worst set of circumstances you could have deduced from the original report; the only excuse they have is that it was dark, and if you can only see about 20 feet in front of you, it's clearly insane to be driving 40mph.
And it doesn't seem like there's any braking at all, even once she's visible. Really really bad.
I thought the whole initial response from the police was weird. They went out of their way to basically say she jumped out in front of the car and that the accident was unavoidable. They even told us that the woman was homeless, which I viewed as a subtle way to hint that maybe she wasn't in her right mind.
It just felt like they were saying everything possible to absolve Uber, in a way that’s very different from how most crashes are treated.
100% agree. The Police Chief even said the video would "potentially shift the blame to the victim herself, rather than the vehicle."
If that's not very strange language to use for a public servant describing a video that shows a fatal vehicle-vs-pedestrian accident, I don't know what is.
I'm not in the US so I don't know, but people in another thread were making comments that that is often how it is handled in the US, the crash victim is blamed somehow. It was implied by one person that police want to avoid paperwork.
It certainly looks like a human driver would have braked enough to make this a minor injury accident. It's amazing that the additional sensors of the Uber car were not enough to avoid the accident entirely.
The Uber car was a Volvo XC90. They are supposed to have some of the best pedestrian avoidance safety features in existence. My understanding is that a stock Volvo would be expected to catch this situation and avoid or stop. Did Uber override these systems? Or does it just not work very well?
I had the same question, especially because several relatives and I own Volvos and pay attention to their safety promises. One would assume that Uber cannot disable Volvo's safety systems...
However, it appears that Volvo's "City Safety" detection of pedestrians or bicyclists requires clear visual identification. So it works in daytime, or at night with strong illumination, but not in relative darkness. [1]
One other thing that seems off in the video is that the headlights don't illuminate much distance ahead.
Since I've seen this be a very controversial statement here in the past, I'd just like to point out to everyone that American drivers on four lane roads are not expecting pedestrians to be in the middle of the road and not at a crosswalk. That is not a normal thing to happen in America, and no one expects it to happen. It is a very rare situation.
--edit: okay guys... nowhere did anyone argue that you shouldn't be watching for pedestrians crossing. Stop yelling "gotcha!" like you caught me in a trap. What I'm arguing against is the common refrain on these articles that there is no such thing as jaywalking and that the pedestrian has the right to the road over the car. Maybe it's true in Europe, judging by comments on previous articles, but it's not the case in the US. It's illegal.
Congratulations on expertly knocking down your own strawman, though.
I have driven in the US for over 20 years and have the opposite experience. I am constantly looking for people, animals (deer are quite dangerous to hit, and they move _way_ faster) and lost count long long ago how many times I have been in very similar or worse situations.
It's not even remotely rare. Drive on campus or downtown in AZ for a more than a few minutes.
Yes, obviously you should watch for objects unexpectedly crossing the road. No one argued you shouldn't. The point is, some people have and will still argue that the pedestrian has more right to the road than the driver and pedestrians should be allowed to cross anywhere they please. But that's not how it works in the US, crossing roads like this is illegal unless it's done at a crosswalk, and normally people are not stupid enough to do it with oncoming traffic.
"obviously you should watch for objects unexpectedly crossing the road. No one argued you shouldn't."
Strawman. I argued the opposite. You made the grossly incorrect assertion "That is not a normal thing to happen in America, and no one expects it to happen. It is a very rare situation."
If preprogrammed cars can't handle random objects in unexpected places, then they are uber DOA.
But the legality is to some extent beside the point. Autonomous vehicles aren't going to be tuned to pedestrian yielding rules state-to-state, simply because hitting a pedestrian is never an acceptable outcome if avoidable no matter who is in the right/wrong.
There is a limerick to this effect with the same message -- you may have (or think you have) the right-of-way, but as a pedestrian you should never insist on it.
I think you can generally say that there are few countries where one would expect people to cross a four lane road without watching out for cars. I'm not sure why you would think that to be a controversial statement, especially considering that the person being hit in the video is also not just walking on the street but crossing it.
Obviously the concrete level of awareness that drivers usually have is probably related to a lot of other circumstances (light conditions/amount of traffic etc.)
I don't think it to be a controversial statement, I know it to be controversial. Just look at the other reply posted at exactly the same time as yours with someone saying they totally expect pedestrians crossing illegally wearing black at night.
The last thread about this contained a very long argument from some Europeans who disagree with the core concept of "jaywalking". Some people really do hold the belief that roads are for pedestrians first and cars last, contrary to US law.
That is strange. Maybe they were not talking about 4-lane roads? Because on smaller streets I would also expect pedestrians, especially if there is no sidewalk. But in my opinion this is irrelevant to the situation at hand anyway, because this video is about somebody crossing the street without respecting the cars' right of way, which can't be deemed common behaviour by anybody.
First of all, I don't think we are. Also, I live in a modestly sized European city of about 650k citizens (excl. metro), and even we do have some sections of road with 4 lanes in each direction, and in intersections, even more. I'm quite sure bigger cities have more of those, even in Europe.
There's no specific 'no pedestrian' signage, and there are no nearby marked crossings. If that road were in [for example] the UK, Australia, NZ, or much of Europe, the car wouldn't have had right of way.
I'm currently in Europe (Germany), and if I cross a 4-lane road thinking I have the right of way, I'm dead. Can you point to any road provision that supports your thesis? My quick research does not confirm your standpoint. It also seems to defy logical sense to have cars on a 4-lane road stop for pedestrians, because there would be an enormous number of accidents when cars overtake each other.
People crossing streets mid-block is so common in Phoenix that last week Arizona announced a plan to spend $250k on educating the community about this very problem. Phoenix is one of the most dangerous cities for pedestrians and bikers. Phoenix is built for cars, not people; a ton of roads are fast 4-lane roads in residential areas.
>the pedestrian has the right to the road over the car. Maybe it's true in Europe judging by comments on previous articles, but it's not common in the US. It's illegal.
You are conflating two different concepts. In every state in the U.S., it is illegal to hit a pedestrian whether or not they are where they are supposed to be. If you see a pedestrian jaywalking then you are required to avoid the pedestrian to the greatest extent possible. Anything less is at least manslaughter.
Wow! You are saying no one in the US expects, or at least considers, that there could be an obstacle on the street in front of them? It doesn't necessarily mean a pedestrian. An animal? Cargo lost by a truck in front of you?
It is the responsibility of the driver to be alert and try to minimize the impact. Just because a cyclist shouldn't cross a four-lane road in the middle of the night, doesn't mean you shouldn't look for obstacles.
As an American, this general kind of situation is a reasonably common situation with people and animals (people more in urban environments, large animals more in rural ones, and both in many suburban ones) and something I was taught to be aware of in public school driver's education classes.
The particular combination of speed and distance may not be common, but the general situation isn't rare.
Perhaps, but the pedestrian also did not react at all to the car. Even though she knew she was crossing a street and should look for cars, see the headlights, hear the engine, etc.
The pedestrian was a homeless person. I don't mean to be insensitive but there's a chance she wasn't in a condition to recognise the danger of what she was doing.
Regardless, it is the driver's responsibility to perceive and avoid hazards like this and I think that would've been pretty easy given it's a big wide empty road with overhead lighting. This looks like the kind of bad low light footage my cheap dashcam produces in similar conditions.
For better or worse, it's not the pedestrian on trial here. It's the self-driving vehicle which completely failed to register a significant obstacle directly in front of it for nearly two seconds. There is no scenario where this is an acceptable response from a control system in charge of a 1500kg projectile in a shared public space.
Even if you're not a cyclist, it's a good idea to wear something with reflective stripes when you're out on dark roads. This is actually the law in several European countries.
Shouldn't the car have been driving with its brights on? I mean... she was already crossing, so she didn't come out from behind a corner or something, and I have a feeling that with brights on she could have been seen. Also, there doesn't seem to be any traffic in the opposite direction or anything else for which the car should have kept its brights off.
Regardless of how suddenly she appears in the camera, it is pretty logical that she was probably visible to the human driver if he had been watching the road. That's not nearly fast enough on a stretch of road like that to be overdriving the visibility provided by the modern headlights of a brand new Volvo. I bet she was visible to the driver in plenty of time to have stopped.
I'm an optimist, but still I think we are much farther away from legitimate self-driving cars than a lot of people think. And every accident like this where the computer vastly underperforms a human driver is a huge PR blow to the technology.
I agree; and I particularly like that you mention this is a brand new Volvo. I regularly drive much faster on roads that are likely to have deer, and this driver is clearly not paying attention. I think the driver should be considered for manslaughter charges (for which, as they were an employee, Uber is civilly liable).
I am not totally on board with the idea of manslaughter charges, but in principle I'm with you. I am nervous about how the liability question gets worked out. When a human behind the wheel is directly liable for their actions, we at least have some incentive for good behavior. When we detach that liability and assign it generically to a company instead, it feels like our options for coercive behavior shaping have been significantly reduced. We can't put the corporation in jail, and history shows that we rarely punish corporations significantly; even large fines are frequently pretty small when you look at the budget of the corporation involved.
An alert human driver would still have had a bad crash, but might not have killed the pedestrian.
In a two-lane road, you cannot simply swerve to the other lane w/out checking. I (and many other?) drivers would have just slammed on the brakes. But at that road's speed limit, I believe the brakes wouldn't have stopped in time, and a collision would still have occurred.
The pedestrian+bicycle was a direct obstacle, not at some slim and oblique angle. So it is odd that the car's collision detection didn't pick up on it.
tldr; a human driver may or may not have done better.
Although a collision may still have occurred if a human slammed on the brakes immediately after seeing the pedestrian, even a 10mph slowdown may have mitigated the fatality into a minor injury. ProPublica has a graph showing the frequency of fatalities in crashes for different impact speeds, showing a more than halving in the occurrence of fatalities in crashes between 38mph and 28mph impacts: https://www.propublica.org/article/unsafe-at-many-speeds
The article says that the vehicle "doesn’t appear to slow down". If the data shows that it actually didn't slow down, it seems like there is a critical bug in Uber's software that has killed a pedestrian.
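To put the 38-to-28 mph halving in perspective, here's how little braking it takes to shed 10 mph (the hard-braking deceleration is assumed, not data from the incident):

```python
# How much braking does it take to shed 10 mph?
DECEL = 7.0              # m/s^2 (~0.7 g), assumed hard braking
v0 = 38 * 0.447          # m/s
v1 = 28 * 0.447

t = (v0 - v1) / DECEL                 # time under braking
d = (v0**2 - v1**2) / (2 * DECEL)     # distance under braking
print(f"{t:.2f} s / {d:.1f} m of braking")   # ~0.64 s, ~9.4 m
# i.e. braking even ~0.6 s before impact could have roughly halved
# the fatality risk, per the ProPublica numbers above.
```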
Checking what? It's dark. You should already know where the cars are around you at the moment. Second, a human eye easily could have seen her, this cam footage is not remotely comparable to the low light sensitivity and motion sensitivity of a human.
When did the "futurology" crowd become so militant? I've seen multiple people on multiple forums blaming the victim here, with some even going as far as to say that they "deserved to get hit". Other rather disgusting comments have been along the lines of "you have to break a few eggs to make an omlette".
Since so many of these people are outwardly saying that they're comfortable with people dying to "train the AI" (their words), I wonder if any of them would volunteer themselves or someone they care about to play the role of "broken egg"?
Others have been blaming low light conditions, which is ridiculous for so many reasons I wouldn't even know where to start.
The general feeling I get, is that they simultaneously acknowledge that this technology is still developing, but refuse to admit that it isn't already perfect. If they had their way, and blame for incidents was always shifted somewhere else, autonomous cars would never improve.
Not a good look, and we're in for a very messy dystopian future if the lowest common denominator is already willing to blindly trust what they believe to be "AI".
The interior view tells you everything you need to know about why anything short of perfection in self-driving cars will lead to more accidents.
The driver, specially selected and trained for exactly this job, does not look at the road! If someone specially trained for that won't do it, the general public certainly will not.
Also, in the video you cannot see the bicycle until it's almost too late. But I have better night vision than this camera, I feel as a human I would have seen the bike. (For example, the sides are pitch black in the video, but in similar lighting I can see stuff to the side.)
But the real issue is that self driving cars should have better vision than a human, not worse!
"The driver specially selected and trained for exactly this job, does not look at the road!"
Humans gonna human. Our brains are trained by millions of years to not pay attention to things not directly important. It's hard enough sometimes to keep our attention on the road when we're doing the driving and the driving is too easy, like being alone on a highway at night. Expecting humans to passively observe driving while making no decisions and taking no actions... well... could you watch eight straight hours of video footage from a car, tomorrow? I know I couldn't. I couldn't even pre-HN/Reddit/instant entertainment constantly in my pocket, let alone now.
The idea that sitting a human in an otherwise-automated car will do anything after the first few hours is absurd. It's a feel-good measure and nothing more. I wouldn't have expected any other result.
Yeah, there's that videogame about driving a bus constantly, on an empty road, for hours and hours. It occasionally veers off, so you have to pay attention, but that's all. That game should be a minimum "driving" test for safety drivers. (Not really, but something like that)
I agree that it's an apt comparison, except the Uber seems a worse experience. In Desert Bus you have to correct the bus... Stupid work, but something necessary (and boring) to do, or you fail.
There seems to be nothing at all to do in that Uber?
I... was aware of that, but you literally pay for Schadenfreude and charity there. I sincerely believe that no streamer can make money or get a following with a schedule of Desert Bus runs.
Nor do I want to believe that there are people out there, bored out of their mind to click on anything, anything(!) on Twitch, would pick Desert Bus over literally any other thing for more than three minutes.
It's painful. People like seeing others suffer, I guess... But these streams do serve a purpose in this thread: the poor players look exhausted and ready to die after a short while already. Which brings me back to my original point: those Desert Bus players have something to do (correct the bus repeatedly) and, in your example, a whole community engaging with them. The Uber driver... has nothing at all?
Although this video will no doubt throw that driver under the, um, bus, it wouldn't matter if this were some rock star Uber engineer. You would see the exact same result.
This is why I've always criticized Tesla over its defense of Autopilot accidents, even when most people here were defending them. Then Waymo and Ford came out and confirmed what I and several others here were saying - that humans can't react to "take over the wheel" if something shows up in front of the car.
With Autopilot now, you never know when it will start flashing at you to wiggle the steering wheel, so it is not a hands-off, get-distracted-with-your-phone type of thing anymore.
I wonder what the "deaths per vehicle miles" stats for autonomous vs human-piloted cars look like right now? Humans kill, what, 30,000 other humans a year or so in The US? But there's a _lot_ of vehicle miles driven. My guess is this one incident has likely pushed the autonomous car death rate per vehicle mile way beyond the current human piloted death rate per vehicle mile...
(Having said that, there's not much doubt in my mind that stuff like antilock brakes and autonomous emergency braking have reduced the road toll, and will reduce it further as these recent-ish technologies become more common in the aging vehicle fleet. I think the difference is that those technologies are not nearly so inclined to make drivers pay less attention to what's going on around them. I _hope_ people don't think "It's OK, I've got emergency braking - I'll just watch this TV show as I drive to work!")
This is the exact reason why we need nationwide disclosure of autonomous miles driven. California requires this stat, so its data is the most telling.
California: 0.92 deaths per 100 million (human) vehicle miles traveled. There were only 500k autonomous miles driven in California last year.
I'd bet your guess about death rate per mile is right considering waymo which has largest and most advanced fleet only drove 2 million miles nationwide last year.
Does Tesla's autopilot count as autonomous? There has been one confirmed death in what is likely over half a billion miles of autopilot usage. That would seem to tip the rate back in favor of the autonomous cars over humans.
Autopilot does not count. I'm assuming it's because it requires people to be actively engaged in the task of driving and it only works in very specific situations. Tesla reported 0 miles of autonomous driving in CA on public roads last year.
If one incident can reverse your conclusion then the sample size isn't large enough to draw that conclusion.
With only one death, the only conclusions you could reasonably reach from statistics are either "we don't know yet" or "we've gone so many miles with only one death that we know it's safe enough." There's no reasonable way to argue from the stats that we know it's not safe.
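To put rough numbers on how little one death tells us (exact Poisson interval; the mileage figures are the approximate ones quoted in this thread):

```python
# What true fatality rates are consistent with 1 observed death in N
# miles? Exact Poisson 95% interval for k = 1 event:
#   lower = chi2(0.025, df=2)/2 ~= 0.0253
#   upper = chi2(0.975, df=4)/2 ~= 5.57
LOW_LAM, HIGH_LAM = 0.0253, 5.57

for label, miles in (("Uber, ~2M mi", 2e6),
                     ("Autopilot, ~500M mi", 500e6)):
    lo = LOW_LAM / miles * 100e6    # per 100M miles
    hi = HIGH_LAM / miles * 100e6
    print(f"{label}: {lo:.2f} to {hi:.0f} deaths per 100M miles")
# Uber's interval (~1.3 to ~280) is far too wide to distinguish
# "as safe as humans (~1)" from "hundreds of times worse".
```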
There have been studies that indicate that these sorts of technology (lane keeping/departure warning, auto-brake, blind spot detection) help reduce collisions.
Personally, I don't think that it's right to dismiss these systems entirely if they provide some level of safety benefit, are cheaper and easier than level 5 autonomous vehicles to develop and / or deploy.
There was an excellent comment by curveship on the thread the other day [0] that really stuck with me so I want to parrot it a bit:
> Our car-based transportation system is far and away the most dangerous thing any of us accept doing on a daily basis ... everyone on the jury has in the back of their mind "that could have been me"
> drivers don't get punished for doing something dangerous, ... They get punished for doing something more dangerous than the norm.
> self-driving cars don't just have to be safer than human drivers to be free of liability, they need to be safe period. In a trial, they don't benefit from the default "could have been me" defense.
You are obviously correct that a "statistically better than humans" vehicle is an improvement over what we currently have. But this is the kind of technically correct answer that doesn't actually resonate with people, especially those outside of tech.
> I have better night vision than this camera, I feel as a human I would have seen the bike.
I'm not so sure. Under conditions where there is low light everywhere, so the eye has time to adapt, human low light vision is very good. But under the conditions shown in the video, where there are street lights present and the pedestrian comes out from a gap between the street lights, your eye won't be dark adapted and you won't have much of a chance of seeing the pedestrian in the shadows.
This is literally the reason for high beams. If the road is not illuminated enough, you turn them on and drive only as fast as lets you come to a full stop if a deer jumps out of left field (or, in this case, a pedestrian walks across the road).
And I have literally never had to use high beams on a road with street lights. In my opinion, the camera footage vastly under-indicates the amount of visibility available to the human eye between the lighted areas.
> I have literally never had to use high beams on a road with street lights.
I have, plenty of times. Not all road streetlights are optimally placed.
> the camera footage vastly under-indicates the amount of visibility available to the human eye
I agree with this; it's pretty much always going to be true for a visible light video camera under nighttime conditions. But I don't think that means a human driver, without high beams, would necessarily have been able to see the pedestrian. (As I've commented elsewhere in this thread, I think I would have been using high beams on this street, streetlights notwithstanding; the gaps between the lights look too long for good visibility.)
A self driving car should be far better than a human at turning them on and off.
In Australia it is relatively common in semi rural areas with similarly sparse street lights that you would turn high beams on and off depending on oncoming traffic.
It's not a function of seeing in the dark, it's that human eyeballs have really good dynamic range, which is exactly why they'd beat cameras in those conditions.
Not if they're not given time to dark adapt. As I've posted elsewhere in this thread, human vision would still be better than that of a video camera using visible light, but that still wouldn't be very good under variable lighting conditions.
There's a range of light to dark you can see at once. Your eyes can adapt and move the whole range up and down, as can cameras. You're right that it takes time. But the width of the range at a given moment is wider on people than many cameras (sans post-processing). That's why this video is deceptive.
Our eyes can see details in the light and dark at the same time better than cameras can. This (https://i.imgur.com/3u1kN7T.png) is an example. The top picture has less detail in both the lightest and darkest portions than the bottom one. The human eye is reeeeally good at that.
> Our eyes can see details in the light and dark at the same time better than cameras can.
Yes, I agree. But that doesn't mean that a human eye would necessarily have seen the pedestrian in this situation in time to react. (Based on other posts elsewhere in this thread, I don't think a human driver would have been able to see the pedestrian in time without high beam headlights on.)
Is this what it looks like when you drive at night? Can you only see about ten feet in front of you, with everything else pitch black? No. Not even close. Our eyesight is much, much better than this.
If our eyes were as bad as this camera, it wouldn’t be legal to drive at night.
What is displayed on the center console? Is it possible that the driver was looking at a live video feed? I believe that some standard cars show their night vision feed there.
I'm skeptical of that possibility, because if that were true, you would see the driver gasp in surprise while looking down (and a lot sooner BTW, given that it's supposedly night vision), instead of roughly one human-reaction-time after looking up at the road.
> Also, in the video you can not see the bicycle till it's almost too late. But I have better night vision than this camera, I feel as a human I would have seen the bike. (For example the sides are pitch black in the video, but in similar lighting I can see stuff to the side.)
While I'm happy for you, there are a lot of human drivers who don't have good night vision, have poor reaction time due to sleep deprivation, etc., etc., and get into accidents like this all the damn time. Is it a bad idea to have self-driving cars that are about as safe as the 10th percentile human driver? If the 10th percentile human driver would have hit this person, too, then we either need to keep these self-driving Ubers on the road, or revoke 10% of drivers licenses.
The article below predicted a scenario much like this one. It made the point that putting a computer in control that works really well on average will make humans' reactions worse in general, and especially in a weird emergency situation happening right as their attention is kicking in.
Apart from all the questions about the camera, resolution and culpability, we need to re-think how we train and evaluate autonomous cars with respect to rare incidents.
Training the cars on data collected from roads is heavily biased towards incident-free conditions. It gives no training signal or feedback for rare occasions such as this one. If a learning algorithm is deciding what to do (assuming hand-coded rules are brittle, so one would want to learn how to handle these scenarios), then it perhaps has no training data at all.
Evaluating the cars on incidents per million miles is fine, but it tells you nothing about how the incidents would have been handled when they did happen. There is no incentive for the learning algorithm to slow the car down to reduce the chance of a fatality if all it cares about is whether an incident happened and not its severity. (See the toy sketch below.)
One possible solution: autonomous cars are trained in real-life simulations (using realistic lighting conditions, dummies and what not) to handle the rare incidents, and they are also required to pass regulatory testing in similarly realistic conditions, before they are allowed to drive on actual roads.
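As a toy illustration of that incentive problem (all numbers here are invented): an objective that only counts whether an incident occurred gives the system no credit for braking, while one that weights incidents by severity does:

    # Toy model: incident-only vs severity-weighted objectives.
    # Severity is assumed proportional to kinetic energy (v^2);
    # the speeds are hypothetical.
    def incident_only_cost(collided):
        return 1.0 if collided else 0.0

    def severity_weighted_cost(collided, impact_speed_mph):
        return impact_speed_mph ** 2 if collided else 0.0

    for label, speed in [("no braking", 38.0), ("hard braking", 24.0)]:
        print(label,
              incident_only_cost(True),
              severity_weighted_cost(True, speed))

Both scenarios score an identical 1.0 under the incident-only objective, but the severity-weighted cost drops by roughly 60% with braking, so only the latter rewards slowing down even when the collision itself can't be avoided.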
As far as driving on public roads, I tend to agree. As do the companies. They all have a safety driver behind the wheel, ready to take over.
There was clearly something that went wrong in this tragedy, but it's not a reckless disregard for safety on anyone's part that led to it.
The worst part of all of this is that the XC90 (the car involved) has emergency pedestrian braking built in (like many new cars today), yet based on the video it appears to have been disabled so as not to interfere with Uber's autonomous driving system.
Lots of mention of LIDAR here, as well as IR sensors, etc.
Unless Uber comes out with the full technical specifications of what is mounted on the car to detect motion and obstructions on the road, all of this is mere speculation.
The vehicle in question would be in the custody of the authorities (impounded), so they should be able to investigate the details on it. Uber should provide as much technical detail on the vehicle as is available to aid the investigation (especially if they are certain that it was more the fault of the pedestrian than of the self-driving vehicle (SDV)).
SDV companies and advocates should probably pool resources to buy a parcel of land, build it out into a city-like setting, and perform edge-case tests: people suddenly crossing, animals, objects, varying lighting conditions. This would allow for better testing as well as advancement of the technology, bringing safe and reliable autonomous cars on public roads closer to reality.
Waymo has 91-acres already[0]. They are selling/donating that land to the "California AutoTech Testing, Development and Production Campus", which will expand to 300 acres and allow others to use it too[1].
It doesn't really matter what the car was actually capable of, so much as what it should have been capable of.
Let's say for argument's sake that the car wasn't equipped with whatever it needed in order to see a person crossing the street in the dark. That doesn't excuse Uber: it just means they shouldn't be testing in the dark on public roads.
It isn't obvious from the video whether the human driver would have had time to react, but they certainly did not help their chances by being unalert. What is the point of having a human driver if they are not going to be watching the road or able to react?
If they aren't a passenger, are they merely there to get around regulation that requires a human in test autonomous driving vehicles?
This exact situation happened to me on an unlit road 3 years ago. I swerved into the opposite lane and avoided the collision, without thinking. I was just lucky that my reflexes kicked in on time and that there was no oncoming traffic.
I had to stop and shake out all the adrenaline and rage after that.
Real question, what would have happened to me, a human driver, if I had hit the person?
Can't tell if the camera has poor contrast resolution or if it was indeed that dark. With the dynamic range available on digital sensors, it seems to me that even without some form of FLIR, it should be more possible for sensors to capture this than for eyes, at any rate. Do self-driving vehicles use any of this, especially at night?
This is pure speculation, but it is my working theory: The car's so called "AI" thought the thing in front of it was a four legged animal (e.g., see <https://goo.gl/images/3qAmiM>).
In most states, braking for animals in a way that might endanger others on the road is illegal. So, it decided to go ahead and hit the thing in front of it on purpose.
I am not sure if the people who program the firmware for these contraptions have ever driven cars in difficult conditions. Between Teslas ramming into trucks and Uber cars taking out poor homeless people for a bonus score, it seems to me they are programming things, but they have no idea how this stuff is supposed to function outside of ideal parameters.
Sure, provide the driver with more information. But, taking the driver out of the responsibility equation is stupid.
> In most states, braking for animals in a way that might endanger others on the road is illegal.
If that is true, then that is a very bad law. If something appears in front of a car I'm driving, the first thing I do, before anything else, is brake. And so should you, and everybody else. First, you save precious reaction time by braking first and thinking second. Second, it might be a human, and thus worth much more than a damn car. Third, if it's not a living thing, it's not uncommon for humans to step onto the street to go after it.
Any dangers from sudden braking are avoided by mandating sufficient space between cars. If a car brakes suddenly and the car behind it crashes into it, the driver of that second car is responsible for the damages, because they evidently didn't keep enough distance to the car in front of them to react in time to dangerous situations.
The intent behind such laws is simple: You must not endanger others to save an animal. These are good laws, even if those without actual driving experience may not understand them.
As the Globe and Mail puts it: "It doesn't matter why the chicken crossed the road — your responsibility is to the people around you."[1]
"If you're forced to choose between getting
rear-ended and hitting a small animal — hit
the animal."
"People end up in serious crashes with other
cars because they let their emotions take over
when they see a bunny, DiCicco says."
"Drivers need to be prepared to make these
tough decisions and not get in a 12 car pileup
because they saw a baby deer and thought of
Bambi," he says.
My unfounded speculation is that the RC car jockeys who programmed how the car should react under various conditions did not fully understand what such laws mean (probably having no real-life driving experience in ambiguous conditions), and that the car's sensors saw the human with the bicycle but thought it was an animal.
This is pure speculation, but it makes sense to me. My gut feelings are somewhat validated by every single comment here that tries to make this the poor woman's fault.
PS: You might correctly deduce that I do not think "self-driving cars" is a good idea. I think everyone needs to be reintroduced to Asimov.
"If you're forced to choose between getting
rear-ended and hitting a small animal — hit
the animal."
If you are forced to make that choice, the car behind you is already driving illegally close to you. Kristine D'Arbelles, spokesperson for the Canadian Automobile Association (CAA), is just making excuses for other people's poor driving.
Stopping distances, people! And the rule of thumb is to double them for rain, and multiply by ten for ice. And if someone is tailgating you, let them pass. They are in the wrong, but there is no point in saving a few seconds by staying in front of them.
> These are good laws, even if those without actual driving experience may not understand them.
No, they are not. You didn't even touch the arguments I gave.
If there is a risk of collision with the car behind you, then not you but the driver of the car behind you is at fault for not keeping a safe distance to account for their reaction time and brake performance.
> but the driver of the car behind you is at fault
That will be small consolation if you cause harm to others' persons or property when trying to avoid an animal such as a deer. That is the practical upshot.
If you hit a deer, insurance will pay you under comprehensive coverage and your rates will not go up. On the other hand, if, because of your attempt to save the deer's life, you hit a guardrail and smash your car, and get hurt in the process, you'll be lucky to collect anything.
The main point I tried to make above is this: I am speculating that the car's "AI" thought the person walking a bicycle fit the profile of an animal, and that this interacted with a ruleset embodying this principle in a simplistic way, because the code for these stupid self-driving cars is being written by people with little to no experience driving in less-than-ideal conditions.
Surely the car's computer had data about all sorts of things around it. The car does not and cannot know anything. It would have been up to some programmers to tell it what to do with the data.
I suppose we'll need to wait for some standardization in the sensor department, but it would be great if autonomous car makers could share the sensor data for collisions so that everyone can learn from the mistakes. Test data is great, but there's nothing like real world problems.
This was not what I was expecting. If this had been an accident with a normal vehicle, I think the driver would have been found to be at least partially at fault, and guilty of involuntary manslaughter. But I'm not a lawyer.
I thought the victim jumped out into the street, but that clearly isn't the case. The pedestrian always has the right of way, a human would have seen her and slowed and swerved, and no one should be driving so fast that they can't stop for an obstacle that is just coming into view. And like many others mentioned, I'm sure she was visible to the human eye where the camera's exposure was set up to not wash out what was in the headlights.
Uber screwed up on this, maybe they were just lucky it didn't happen earlier.
That's a very common misconception, but in fact in the majority of states in the US vehicles have the right of way on roads over pedestrians outside of marked crosswalks and crosswalks at an intersection [1].
I think this misconception arises because many people overestimate what privileges and protections having the right of way confers. Some people seem to think having it means that you can go come hell or high water, no matter what the people who are supposed to yield are doing.
It is not nearly as powerful as that.
All that having the right of way actually means is that others that want to use the same resource as you are supposed to let you go first. If they ignore that and try to go ahead of you, it doesn't mean you can unsafely force your way ahead of them.
For example, if I'm standing beside a highway, nowhere near an intersection or a marked crosswalk, looking toward the other side clearly wanting to cross, the cars have the right of way in most states. I'm supposed to wait until there is a gap in the traffic sufficient to let me cross safely.
If no such gap comes and I start to step out into the street the cars have to stop even though they have the right of way. If police witness this they can ticket me for something like "failure to yield".
Yeah, good points. I didn't mean "right of way" as a legal term, but as a common phrase meaning that, as a vehicle driver, you are responsible for not hitting people, even if they are doing stupid, illegal things. If you've had a bad day at work, you don't get to speed up and swerve into little Johnny as he runs across the street after his ball, then tell his parents "he shouldn't have been in the street" and drive away.
Please mind that the camera does not have the same dynamic range a healthy person has. There are also other subtle signals that humans can use that computers will still have problems using. I have personally avoided a few collisions with pedestrians purely because they occluded some faraway light, or because of a very subtle flicker of some part of their clothing.
I don't see how a car relying solely on vision (it does not seem to react to the pedestrian as if it were "seeing in the dark") can replace a human with the current level of technology.
If this is true and the car was not able to use LIDAR/radar/IR cameras etc. to augment its visible-spectrum sensing, then I don't think this car should have been allowed to drive fully autonomously.
Collisions between pedestrians, bicyclists and cars would be impossible if we restricted the roadway. Collisions of vehicles between lanes, including the opposite and crossing lanes, would also be impossible.
You don't even need self-driving cars to do it, and it would greatly reduce the number of serious accidents and fatalities. You could start with just the roads with the most accidents.
And by the way, nobody seems to have mentioned that assisted-driving technology could be added to cars today. We already have cars that emergency-brake for us. They could also warn us of upcoming red lights, fast-approaching obstructions in the roadway, or unsafe actions on the road, such as drifting between lanes without a turn signal.
I'm surprised the vehicle didn't detect an object moving into its field of view. What are the expectations of the driver under the terms of Arizona's self-driving car laws? Are they expected to remain engaged as a driver, or are they permitted to let the car perform all of the driving? It's impossible to say whether an attentive driver would have changed the outcome of this scenario -- largely because a highly compressed video doesn't really convey what the driver was actually capable of seeing -- but it's very clear from the video that the driver was more focused on a phone (or some other device) and was not an active participant in driving the vehicle.
I'm definitely interested in the legal expectations too. If it turns out LIDAR completely dropped the ball here, it might not be the case (given AZ's lax rules) that Uber would be penalized for LIDAR that was substandard.
Presumably, there'd be regulations for software too, classifiers and AI decision making. Has any jurisdiction set standards in this?
I think this brings up an interesting philosophy for autonomous driving technology - is it feasible if it's "only as good as human driving"?
I mean I've seen comments that a human might not have seen the pedestrian. Is that a "defense" of driving AI? That's it's about as good as human driving?
Seems to me the public won't accept driverless cars unless there is significant evidence that it's much better than human drivers - after all human drivers make a lot of mistakes and cause a lot of fatalities. I don't think any other claims of the advantages of AI driving could offset any negative publicity of injury or death.
I think experience is the best teacher. In a perfect world, Uber will look at the data and determine why this happened. The AI will learn, and it won't happen again. Compare that to the countless new human drivers entering the road every year. That's not a defense or an excuse, but an explanation of reality.
I know we don't live in a perfect world, but regardless, we can't change what happened here, so it's best to learn something from it.
However bad the car was or wasn't at detecting the victim, it's nothing compared to how bad the human victim was at detecting the car (which presumably had its lights on coming down an empty street at night).
What's your point? She wasn't driving a car. Imagine if the Uber hardware were installed in a petrol tanker. It would be just as pointless a discussion.
If self driving cars can't be trusted to not smash into any obstacle that appears such as a deer/roadwork/child/runaway shopping cart, then they shouldn't be permitted on the road.
The point is that humans who have poorer sensors and do even stupider things are allowed to drive. So the bar for self-driving is a lot lower than people around here make it sound.
I actually believe that the kind of situation shown in the video (pedestrian walking across road at night in the path of a car, where car has to brake hard to avoid a collision) happens much more frequently than most people think. And most of the time the pedestrian comes out fine because most drivers are looking ahead most of the time, and someone walking across the road is a big abnormal event that is easy for a human to notice and react to.
I don't have any data to support this, other than a vague memory of doing something similar many years ago. Although if I remember correctly, that was a more major road that was probably better lit. But I still find it hard to believe that human vision would have been as lousy as the camera's.
I agree. I can remember quite a few times I've had to slow down.
One time there was a bar fight that spilled onto the road I was driving along. I slowed down as I could see the commotion on the sidewalk, and lucky I did because they ended up in the path of my car.
This self driving car would have killed quite a few people in that case it seems.
Stupider things - I'll give you. But where is the evidence that humans have poorer sensors?
All I saw is a car plowing into a person without even applying the brakes. That strongly implies to me that the car wasn't even aware the person was there, as reflexes should not be an issue.
Most humans would see that person directly in front of them.
So I contend that the car's sensors are poorer than a humans.
Do you remember the reaction test where a ball is rolling onto the road and you have to hit the brakes?
This situation is more akin to a ball already lying on the road and you drive over it.
This situation is very clear cut. There is an object (a subject, even) on the road directly in front of the vehicle, and the car just drives straight into it. It doesn't even swerve. It doesn't brake. Nothing. This is a total failure of the system.
What if there is an accident and people are standing on the road for some good reason? What if it is a deer in the headlights? What if a kid is running loose? Is the vehicle supposed to just hunt them down?
If the car was able to pick up the person with the sensors, but the system failed to stop the collision, will the programmers who designed the autonomous system be held ethically accountable for the bug that led to the death? O_O
Yes. I am not sure if self driving cars are subjected to regulatory aspects such as the SDLC in FDA regulated industries but they probably should be (as should embedded computers in autos in general).
The other commenters here have done a good job of examining this incident under the lens of computer vision and AI/ML automated driving.
So I would like to take a moment to remind everyone, especially pedestrians but also drivers: to ignore the road is to play Russian roulette. It doesn't matter where you grew up, even if it was a place where drivers always stopped for pedestrians and were oh so polite. Even a car at a stoplight could malfunction, or the person driving it could err... Always keep your distance and track anything heading in your direction. What happened here is a tragedy, but it raises the question: at what point of negligence do we classify behavior as risky? Russian roulette is clearly suicide, but crossing a street at night without paying attention to headlights in the distance isn't? I'd say the relative risks are vastly different but the outcome similar.
What is even worse for life-loving pedestrians is the fact that cars can have their headlights turned off at night (usually newer models can't, but older ones can). Then add in EVs, and the car is invisible and silent... Clearly it starts to look like someone driving under these conditions is just a potential murderer. But similar scenarios are worth thinking about. Really, anyone walking the street at night should be aware of the non-obvious cues of motor vehicles.
Arizona has the highest pedestrian death rate in the United States. Drivers are incredibly reckless here, and for people like myself who do not drive, jaywalking is never an option. I will not excuse Uber's mistake here. There must be repercussions for their lack of oversight.
However, I would like to defend Arizona's loose legislation surrounding self-driving testing on our streets. The honest answer is that the only way forward for self-driving adoption is through real-world data. It's the fastest way to reach global adoption. The faster that companies like Uber and Waymo can improve their technology, the faster we can see a reduction of human drivers on our roads.
While this accident was horrible and must serve as a wake-up call, it must be acknowledged that there were 224 pedestrian deaths across Phoenix in 2017, and in the past week alone there were 12. This figure has actually increased over the years and makes Arizona a unique data point for auto companies. I am in no way suggesting that Uber should face zero repercussions. However, as a local pedestrian, if you gave me the option to magically switch every vehicle here over to Uber's technology with its current specifications, I'd take it in a heartbeat. I interact with these vehicles every day, and I still feel safer with them.
Since I didn't find it in a Ctrl+F, I'll chime in to say that one of the advantages of self-driving cars is the scrutiny that accidents will come under, and how the results of that investigation can improve the driving of all other self-driving cars.
While many fatal car accidents are scrutinized to an appropriate level throughout the various countries of the world, many are not. And when they are examined in detail and the vehicle had a fault, the systems in place to issue recalls and defect notices are quite robust and reasonably effective, although time consuming. When the driver is at fault, the local judicial system (if it exists) can attempt to rectify that behaviour through the usual means - fines, imprisonment, training, what have you.
But with a self-driving car, the autonomous systems can be updated after a collision like this is understood, improved, and distributed to existing vehicles at a scale and rate far surpassing anything possible with humans driving cars individually in every city across the world.
I don't think self-driving cars are flawless in concept or mandatory for all circumstances, but having been to fatal car accidents that are eerily similar to one another on a few too many occasions, I can welcome large scale discussions on what went on in a death such as this one - because the positive outcomes can be replicated elsewhere, hopefully saving someone from the same fate.
Typically models don't work like that... i.e., changing the model or its training alters its behaviour across the board and needs to be tested against overall performance. So while crash data may go into a dataset that is then used to better evaluate or train different aspects of a system, it has none of the luster of "learning from every accident" being maximised.
The human mind is quite unique in its ability to decide where one model stops and another one starts.
The way I see it is that this fatality was a calculated risk for Uber. Uber realizes how much Waymo is ahead of them and decides to drop too many safety restraints in development of the self-driving technology.
Any safety regulations based on this fatality are going to hit Waymo just as much as Uber, but also slow Waymo down and give Uber time to catch up. Any financial fine would be tiny comparing to total money invested in self driving technology.
The issue I am having with this is that you can easily see that something broke the sight line between the car and the lights slightly to the left. Mind that the video is not only of pretty low resolution, it also isn't reproducing the proper "blackness" of black. A human observing the road would be able to catch that and, as a precaution, take their foot off the gas and put it on the brake pedal. Those small changes make the difference.
That's a great point, and something we use in driving even in daytime. Sometimes you are driving on a curvy mountain road, see something move through the leaf cover, and know to take precautionary steps - to slow down and take extra care to stay in the lane. I don't know much about it, but I wildly speculate some of this might fall under general AI, though.
It's time to set the precedent by jailing those who deemed the system was ready for real-world use.
This woman's death will not be in vain. I did not expect such a terrible fuck up so early on, but I'm glad it clearly demonstrates how badly made autonomous vehicles are. They're not going to be ready to hit the streets for decades, unless of course we decide that Uber's profit margins are worth more than human life.
A few people have commented along the lines of "I probably couldn't have stopped in time even if I had been driving." I think it's important as a driver to hit the brakes as hard as you can even if you don't think you'll be able to avoid the collision, because even a modest reduction in speed greatly reduces the impact energy (kinetic energy scales with the square of speed: KE = mv²/2). Probability of death in a car/pedestrian collision is reduced from over 80% to under 10% by slowing from 40 mph to 20 mph. [1]
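To put rough numbers on that (a back-of-the-envelope sketch; only the 40 mph and 20 mph speeds come from the comment above, the rest is just the v² scaling):

    # Even partial braking sharply cuts impact energy, since kinetic
    # energy scales with the square of speed.
    def kinetic_energy_ratio(v_after_mph, v_before_mph):
        return (v_after_mph / v_before_mph) ** 2

    print(kinetic_energy_ratio(20, 40))   # 0.25

Slowing from 40 mph to 20 mph removes 75% of the impact energy, which lines up with the cited drop in fatality probability.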
Also, I think it's likely the human eye would have been able to see the woman before the camera does. The camera needs her to be fully covered by the headlight beam. I know when I drive at night I focus my attention on the area between the road ahead and the fringe of the beam as I scan center, left and right. The area outside the beam coverage never looks fully black to me. I definitely can perceive obstacles outside the main beam coverage.
> I think it's important as a driver to hit the brakes as hard as you can even if you don't think you'll be able to avoid collision
It's good to hit the brakes, yes, but the best way to avoid almost all collisions is to steer, not just to brake. Braking combined with steering sharply to the left (since the pedestrian was moving to the right) would have been the best response to this situation.
(Note that, before anti-lock brakes became common, "hit the brakes as hard as you can" was not good advice, precisely because it would lock the wheels and prevent you from steering. In a car with anti-lock brakes, that's not an issue.)
Depending on your speed, that is very bad advice. It sounds like a recipe for losing traction and fishtailing (potentially into other cars, people, or even stationary objects).
There is only so much traction afforded by the contact patch on your tires. You can brake, you can turn, you can speed up but I would not recommend doing more than one at a time.
> There is only so much traction afforded by the contact patch on your tires
If you have anti-lock brakes, they are designed and calibrated to know how much to brake to be just short of losing traction. That's what they're for--to allow you to brake and steer at the same time without having to have special training to know exactly how to balance the two.
If you don't have anti-lock brakes (but it's very rare now not to have them in a car of recent manufacture), then it's still possible to brake and steer at the same time, but it's not easy or intuitive to balance the two to maintain control. It can be done, but you basically have to be a professional driver.
Sure. Anti-lock brakes work in accordance with the laws of physics.
> It is inadvisable for any non-professional driver to attempt maximum braking and also steer away. It is just not safe.
This is true for a vehicle that does not have anti-lock brakes. It is not true for a vehicle that does have anti-lock brakes. As I said before, that is precisely what anti-lock brakes are for: to allow you to just push the brake pedal as hard as you can while also being able to steer (because the anti-lock brakes translate "push the brake pedal as hard as you can" into "don't apply the brakes hard enough for the wheels to lock"). And the reason why anti-lock brakes were invented and put into cars was precisely that you need to both brake and steer to avoid most collisions.
I looked into this further, since I was trained not to steer and brake at the same time. I understand anti-lock brakes, but they do not answer the question of breaking traction; they just stop your wheels from locking up (which does maximize traction versus locked wheels).
The answer to the question is ESC (Electronic Stability Control), which is mandated by law in all US vehicles as of 2012. ESC controls braking on individual wheels to prevent over/understeering, which was my primary concern about swerving in emergency situations.
This short video is a great illustration; it helped me understand how the system works. Looks like my information about swerving is out of date.
Previously I had only found this study, which claims people were not driving or braking properly with ABS, resulting in more single-car crashes due to oversteering while braking.
It seems like neither man nor machine took any steering or braking action -- advisable or not.
I also wonder what the exact job responsibilities are for the human in the car. (The "test driver?") I would love to see the training materials for this role. Are they there to prevent accidents? Or to take over if the car gets "stuck"? (Or both?)
This is just the raw camera input. The question here is, what did the software see?
What did the sensors register? Is their system using an ultrasonic sensor? Did this detect anything? Commercial ultrasonic sensors can detect something at least 15' away. So the software could have at least slowed down to minimize the collision impact.
What did the LIDAR see? Did their software detect an object?
This accident happened on a clear night. There was no rain. There was no snow. The pedestrian crossed the street at a perfect 90-degree angle to the sidewalk. The pedestrian did not jaywalk by crossing diagonally. The pedestrian even walked the bike across the road. Thus, the pedestrian performed a legal road crossing.
I don't think a human driver would have made this same mistake. When I drive on a dark road, I occasionally flash my high beams to ensure that there is no obstacle in front of me for hundreds of feet. That is how I, as a human, can safely determine that the road is clear for me to proceed.
What's on trial here, is the safety of the software system developed by Uber. And on this instance, their system failed, and a human died.
Someone recorded driving the same section of road at night using a cell phone camera and you can see how much better that camera reproduces the real-world visibility of the situation: https://youtu.be/1XOVxSCG8u0?t=26
That stretch of road was not particularly poorly lit, the pedestrian would have been abundantly visible to anyone paying attention, and it should have been picked up by the Uber vehicle's vision system as well, let alone their sensor suite.
I'm going to go out on a limb and guess that the pedestrian that got hit has crossed at that area many times in the past. There is actually a pedestrian crossing on the median there, but it is purely decorative and posted with signage saying not to use it. However, the area has pretty good sight lines down the road and it does seem to be well lit, so a pedestrian who cavalierly crosses through traffic would likely nevertheless have a very low chance of getting hit.
A perfect example of a corner case. I do NOT believe you can "kind of" be driving, so having a safety driver means little. The software has to be able to handle this type of situation or it should not be on the road.
I get that it was crazy for this lady to be in the middle of the road in complete darkness. But this is the EXACT situation where you would have thought an SDC would perform far better than a human.
What is really needed is some way to replay this data against other companies' algorithms and see what they would do.
I'd love to see what Google would have done. I also worry that in trying to keep up with Google, these companies are doing very unsafe things. Uber would be far better off just using Google's technology.
Oh, come on, I think all these comments about Uber PR are way, way too harsh. Sure, the camera sensor might be making the video appear darker than it actually was, and I'm surprised LIDAR didn't help, and sure, something might and should be improved. But given the number of accidents happening in clear daylight, in pretty damn obvious situations, without any automation whatsoever, purely because of humans — it's unfair to make a case for robot-auto unreliability out of it. I don't know what the chance of a human driver hitting this girl would have been, but I'm guessing high. If anything, the pedestrian sure as hell should've seen the car! Apparently, though, she didn't. Why? Well, because.
The thought of driving this car manually in this situation actually makes me feel scared. I might have been going slower… I might. Fuck, I'm actually panicking trying to imagine it.
As bad as they are, human drivers in the US have 1.25 fatalities every 100 million miles, across all road conditions and pedestrian behavior.
Uber just crossed 2 million miles, and it already has a fatality, where a competent driver could have, at a minimum, slowed down by a few miles per hour to reduce the severity of the crash enough to result in an injury instead of a death.
There's no excuse here. The Uber self-driving car should not be anywhere near a public road, where children, the elderly, or distracted people can be found.
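Making that comparison explicit (the 2 million miles and single fatality are the figures from this comment; treat the result as a single-data-point estimate, not a measured rate):

    # Naive fatality-rate comparison from the figures above.
    human_rate = 1.25 / 1e8        # fatalities per mile, US average
    uber_rate = 1 / 2e6            # fatalities per mile, n = 1 (!)

    print(f"human: {human_rate * 1e8:.2f} per 100M miles")
    print(f"uber:  {uber_rate * 1e8:.0f} per 100M miles, "
          f"~{uber_rate / human_rate:.0f}x the human rate")

That works out to roughly 50 fatalities per 100 million miles, about 40x the human rate, though with a single event the uncertainty is enormous (see the confidence-interval sketch earlier in the thread).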
IIRC, high-resolution thermal infrared cameras are ITAR controlled-- which means they are not available for consumer applications above an abysmal resolution (something like 320x280).
Public safety like this seems like a great impetus to do what was done for GPS, and declassify this crucial (and now really quite mature) technology for the general public.
You can easily buy a VGA FLIR camera in the US. They'll probably run your name through the denied-party list to make sure you don't have a history of diverting products to countries we don't like.
FLIR will sell you cryo-cooled cameras domestically as well.
Based on the video, it took about 1.5 seconds from when the pedestrian came into view till the vehicle collided with them.
At 38 mph, a car is traveling about 56 feet/second. That means that the car traveled about 84 feet in that timeframe. VOL (Visual Aim on Left) headlamps are supposed to be aimed 2.1 inches below headlamp level at a distance of 25 feet [1]. If the headlamps on the vehicle were about 2 feet off the ground, then they should have been able to light up the roadway about 286 feet ahead.
Had the headlamps been aimed properly, then the driver could have seen the pedestrian about 5 seconds in advance. That would have given the driver about 1 second to react and 4 more seconds to slow down or change direction to avoid the collision.
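For what it's worth, the arithmetic above checks out (a quick sketch using the cited figures; the 2-foot lamp height is the commenter's assumption):

    # Headlamp reach from the VOL aim spec: 2.1 inches of drop per
    # 25 feet of distance, lamps assumed 24 inches off the ground.
    lamp_height_in = 24.0
    drop_in_per_ft = 2.1 / 25.0

    reach_ft = lamp_height_in / drop_in_per_ft
    print(f"beam meets the road at ~{reach_ft:.0f} ft")          # ~286 ft

    speed_ftps = 38 * 5280 / 3600                                # 38 mph
    print(f"time to cover that: ~{reach_ft / speed_ftps:.1f} s") # ~5.1 s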
> they should have been able to light up the roadway about 286 feet ahead
But when that person was 286 feet away from the vehicle - she was also way off to its left. Headlights mostly light up what’s ahead of you, not what’s on your side. Unless I am missing something.
> Headlights mostly light up what’s ahead of you, not what’s on your side.
Good headlamps should have a uniform and widely distributed beam pattern which should be able to light up at least the lane of traffic you're in, as well as the shoulder and several lanes to the left.
We don't know when the pedestrian came into view for the human eye. It's possible that the actual scene was much brighter in real life than on camera.
Not to say there isn't a problem here... But walking across the street in dark clothes, without any light or reflectors, at a random spot does not seem like the best idea to me.
If I have a picnic on the train tracks and get hit by a train, maybe you should consider my stupid decision as well when evaluating the safety of trains.
People don't expect trains to avoid them if they step on the rails.
Recently I almost got hit by an electric scooter. Ambient noise was very low, and I relied on sound alone until very late, and only saw it at the very last moment.
I've learned, and I won't rely on sound anymore. If cars occasionally don't react to them, people will learn too, and completely stop jaywalking in unclear situations. It will also cost many lives before everyone has had a chance to adapt.
It's incredible how badly the self-driving system failed in this instance; it's clear that any self-driving system (Uber's or any other), including any update or change to its software, should be audited by the government AND major tech universities (e.g. MIT) before a single one of those cars goes onto public streets.
Wow, the pedestrian just walked onto the road without so much of a glance at the direction of incoming cars.
Sure, it's the car's responsibility to stop. But still--it's almost like the pedestrian didn't care at all. She never saw the car throughout the entire incident. She never saw it coming. She didn't care to.
Anyone here have expertise on the current state of law governing autonomous vehicles? I know AZ is supposed to be lax compared to California, so I wonder if there is any situation in which Uber could be at fault (besides a civil suit from victim's family)?
Let's say that the car's LIDAR was indeed dysfunctional, or obviously inferior compared to the industry standard. It could be a hardware problem, or the software attached -- e.g. the Uber AV classified the victim as a large paper bag. What legal framework is there to assign fault to company/manufacturer in an accident like this, other than design/planning that is obviously malicious or grossly negligent?
In other words, is there any situation where Uber (or Waymo, etc) could be found legally at fault in a state like Arizona, no matter how badly their vehicle performs in an accident?
There’s nothing in the tort or criminal justice system that prevents responsible parties from being held responsible for negligently causing the death of a person in the realm of self-driving cars. If Arizona prosecutors don’t have enough for a case now, they might later after the NTSB investigation. No charges have been brought, so they could be brought up later.
I would suspect that the backup driver would bear the criminal responsibility for any collision the car got into. What the general consensus of the other HN commenters is that the driver could not have reacted better, though the LIDAR should have detected the woman.
In that case, the woman’s family would have a strong wrongful death case against Uber.
What you wrote made me realize that regulations/laws about equipment/performance/computational standards would probably be managed at the federal level anyway (e.g. NHTSA). Though America is not necessarily in the lead in autonomous travel, right? How far along are nations like Singapore and Germany in having legal frameworks for autonomous safety?
Why is the backup human driver staring at the dashboard most of the time? Should he not be closely watching the road? It appears he lost a few valuable seconds right before the collision because his attention was diverted to the interior of the car rather than focused on the street.
The driver was not looking at the road (as will obviously happen in any automated vehicle).
I might be wrong, but it looks like either the headlights are not on their maximum setting, or they are broken, or the camera is underexposing the picture.
And complete software or hardware failure at detecting the obstacle.
Definitely not a combination that should be driving on public roads at night.
That said, the pedestrian is crossing outside of a crosswalk at night without looking at the road, so they are at fault as well.
Note, however, that even a terrible driving system like this one is obviously far more likely to hit an inattentive pedestrian than an attentive one (and is in fact more dangerous than a good driver, since good drivers tend to have accidents under more exceptional circumstances than "neither party was looking at the road"). So that doesn't excuse Uber's fault; it actually makes their position worse.
I would hope self driving cars have better sensors.
There are high dynamic range cameras that simultaneously take multiple exposures and composite them in real time.
The dynamic range in this video is horrible. An HDR camera should have made her visible much sooner. Further, why not use larger CMOS sensors, like those in a DSLR camera? Finally, the headlights appear fairly ineffective too. I would hope these cars are using xenon headlights or something else that does a much better job of illuminating the road.
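For reference, exposure fusion is straightforward these days. Here's a minimal offline sketch using OpenCV's Mertens merge (the image paths are placeholders, not real footage):

    # Fuse several exposures of the same scene into one image that
    # keeps detail in both shadows and highlights (Mertens fusion).
    import cv2
    import numpy as np

    exposures = [cv2.imread(p) for p in
                 ("scene_ev-2.png", "scene_ev0.png", "scene_ev+2.png")]

    fused = cv2.createMergeMertens().process(exposures)  # float, ~[0, 1]
    cv2.imwrite("scene_fused.png",
                np.clip(fused * 255, 0, 255).astype("uint8"))

The dark regions come from the long exposure and the highlights from the short one, so a pedestrian outside the headlight cone can stay visible without blowing out the lit roadway. Automotive HDR sensors do essentially this in hardware, per frame.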
Xenon lights should be required on all new car sales. My new car has them and the difference is incredible. Not only can I see far more, but signage really pops. I've noticed that they are also much easier to look at when approaching. It is an incredible safety feature.
Whatever the outcome of this case, autonomous cars are not going away. That cat is already out of the bag.
As for this accident, I think it will be shown in time that while machines can make mistakes, they make fewer mistakes than humans. Six years ago I heard that CSX (a major rail line in the U.S.) was actively pursuing autonomous trains. I thought the idea was horrible, until a programmer working on the project informed me that machines guiding trains have been proven to make fewer mistakes than humans guiding trains. It keeps the human element of error out of the equation.
There is a certain amount of fear these events generate, but did you notice that the same level of media coverage was not given for the dozens of similar accidents that happened in the past 24 hours with non-autonomous vehicles?
> I think it will be shown in time that while machines can make mistakes, they make fewer mistakes than humans.
This is religious thinking.
>There is a certain amount of fear these events generate, but did you notice that the same level of media coverage was not given for the dozens of similar accidents that happened in the past 24 hours with non-autonomous vehicles?
This is a known unethical company testing dangerous technology on public streets and killing someone. By the numbers Uber's self driving cars are far worse than human drivers when it comes to safety, and the video shows their autonomous system has serious issues. Why defend them?
If you're a fan of self-driving cars you should be slamming Uber, if anything is going to hurt the technology it's shitty companies playing fast and loose with their tech resulting in people's deaths.
I’m actually not. I’m agnostic about the technology, and I don’t care for Uber either for or against, and am not defending them nor am I interested in attacking them.
Just being realistic, looking at the big picture. Realistically speaking, this tech will be improved in time.
I have better things to do with my time. My comment wasn't aimed at any one company. I think you misread my comments as fanboy cheering for autonomy, but the whole point of my comment was that the technology cat is out of the bag. That's just reality.
To the Uber shills and other defenders of straight-up manslaughter by a machine (breaking the first rule of robotics): I suppose it was the woman's fault and tooootaaallly not a failure on Uber's part. Hopefully this puts the final nail in the coffin of a terrible business. Sad someone had to die.
That is one super dark patch, but lack of visible light shouldn't be a problem for IR?
And whether the human driver/observer could have reacted fast enough to prevent the death is a question, but it would have helped if the human in the car had been looking out the window more than down at her(?) phone.
What are the legal implications for the safety driver in this one? We don't have laws covering self-driving accidents, right? So if the authorities determine that the "car" is at fault, would the driver at the wheel be implicated?
One thing that I haven't seen a whole lot of commenting on is how distracted the safety driver appears to be in the interior shots. He looks bored, and his attention seems to be directed everywhere but the roadway.
This raises some serious concerns about the extent to which "safety drivers" actually increase the safety of self-driving car tests. If that's how they normally behave when a car's in self-driving mode, then it seems to me more like they are creating a false sense of security that mostly just serves to make the public feel better about the risk of half-baked self-driving equipment being tested on the public roadways.
Something has tickled my brain since yesterday: are these obstacle detection systems active or passive? In other words, do they emit IR to illuminate nearby obstacles, or do they rely exclusively on visible light from onboard projectors and street lights?
I ask because emitting IR would likely interfere with or even blind speed cameras (which themselves use IR to take night photos), so I expect this to be either forbidden or strictly regulated. Which brings me to the question: if the car uses IR, and therefore detects it, could a speed camera's IR beam, or any other IR device in the vicinity, have interfered with the car's obstacle detection?
That is utterly terrible. The car should have reacted to the pedestrian comfortably. I think something in the change in lighting confused the sensors (the road appears to go from having street lights to not having them at the point of contact).
00:04 - The entire body of the pedestrian is visible
00:05 - Collision occurs
If we assume that the computer only had this video as input, then the computer only had 2 seconds to avoid the collision. That might have made the collision unavoidable even for the computer. But the fact that there was no sign of slowing down or braking in those 2 seconds is pretty alarming.
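A rough check of that 2-second window (assuming full braking at about 7 m/s², typical for a car on dry asphalt, and zero machine reaction time; both numbers are assumptions):

    # How much speed could be shed in the 2-second window?
    v0 = 38 * 0.44704      # 38 mph in m/s (~17 m/s)
    a = 7.0                # assumed braking deceleration, m/s^2
    t = 2.0                # seconds of warning visible in the video

    v_impact = max(0.0, v0 - a * t)
    print(f"impact speed with full braking: {v_impact / 0.44704:.0f} mph")

That comes out to roughly 7 mph instead of 38 mph. So even if contact couldn't have been avoided entirely, braking within those 2 seconds should have drastically reduced the severity, which is why the total absence of braking is the alarming part.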
But it sounds unlikely that this video was the only input to the computer. Did the car have multiple cameras to "see" bright as well as dark objects at night? I would imagine that a self-driving car driving at night would use multiple cameras (tuned to various level of brightness) so that the car can match human level vision?
Is the video slowed down? Also they need to release a better quality.
If I had been behind that wheel, I don't think I would've seen that lady, to be quite honest. The place where she was crossing the road was in the shadow of nearby trees or buildings. If one is going to cross a road like this, they should at least use a well-lit part of it.
Also, why did she have her back turned to the traffic if she was jaywalking on such a big road? Very poor decisions on her part.
I am willing to bet Uber gets out scot-free with this one, and rightly so.
1) I wouldn't be surprised if that is the "best quality" they have.
2) I also suspect that human eyeballs would have a different view of the light/dark portions of what's depicted there, and especially eyeballs would have probably had a much higher chance of detecting movement in peripheral vision than that video gives any hint of.
The video is very damning. Either a problem of camera contrast or a big bug. And having a backup human driver is useless.
There is a terrible ethical dilemma here. Humans are currently at a death rate of about 1 per 10^8 (hundred million) miles travelled. Waymo, which is best in class, is at one human takeover every 5,000 miles as per California data. Of course, most of those were likely minor. But there are more than 4 orders of magnitude between those figures, allowing for plenty of surplus dead people. These things shouldn't be on the streets!
It looks like Uber is hiring irresponsible homeless people to save $$. It is hard to say whether I would have seen her. I have been driving accident-free for 63 years; when I drive, my eyes are on the road, and I do not doze off or look at accidents I pass. So I would have slowed dramatically, and possibly stopped.
Some frame by frame analysis versus time/velocity etc should answer some questions.
LIDAR does not depend on visible-spectrum light, and should have seen the woman, although the frame is mostly open space?
The interior shot seems to support the idea that it is not feasible for semi-autonomous systems to rely on an alert human monitor to handle the difficult bits. Passive alertness is not a human trait.
What's the point of a safety driver whose eyes are not on the road? It would be pretty easy to monitor whether the safety driver is paying attention or looking down every second like this one was.
1. What a clear failure for self-driving/collision avoidance technology, especially in conditions (night time) that we've been told are much better for machines to "see" in.
2. "safety driver" useless if he's distracted. what's the point if he's not actively engaged?
3. Where does the video come from? Is this raw video from the cheapest Chinese dashcam Uber found on the shelf and slapped into the system, so they can get ahead of the story and plausibly deny in certain situations?
Oof. As a motorcycle rider, looking at the video is hard. I'm always vigilant, but sadly, I'd probably have plowed right through the woman. The street lights seem to do absolutely nothing.
People are unpredictable... however, this kind of crossing feels familiar. In Los Angeles we have transients who are known to wander through 5 lanes of empty street at 5 AM with impunity. I've been hard-stopped by wagon trains of shopping carts laden with old rags and garbage, being pulled by an elderly sisyphean figure cloaked in old comforters and tarpaulins. I've also had the unfortunate luck to see a garbage truck plow through a stolen utility cart from Home Depot, conveniently parked by a homeless woman in the center of an off-ramp as she was engaged in a furious battle with unseen forces under an adjacent bridge.
And yet you didn't make that mistake when you were in a very similar situation.
I don't think I would have hit that lady pushing the bike. If I saw her slowly meandering into the left lane like that, I would have slowed down. The car just plowed ahead at full speed. In fact, it plowed ahead, at night, at 3mph over the speed limit.
I've seen my own dashcam videos that look very similar to that, and in reality things are much more visible than what the dashcam shows.
State of the art emergency assist systems in normal driver-operated cars would have seen this woman/bike and started to brake the vehicle. Even without any assist systems, it should be possible for a human driver to see this bike/woman. It was dark, but otherwise conditions were excellent.
I would suppose that the sensory capabilities of driverless vehicles are above those of normal cars. There is clearly something very wrong with Uber's technology.
And since human vision is better than a dashcam, this car's fallback emergency-assist system (the safety driver) could probably have caught this as well, had they been paying attention to the road.
Which is something that production-grade technology can ensure with head and eye-tracking. Some non-autonomous cars have this feature to stop people from dozing off, and good semi-autonomous systems, like GM's SuperCruise, will refuse to go into or stay in self-driving mode unless the driver is paying attention.
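The pattern is simple enough to sketch. A minimal attention watchdog in Python; the interfaces and the 2-second threshold here are hypothetical, not GM's actual implementation:

```python
import time

EYES_OFF_LIMIT_S = 2.0  # hypothetical threshold; production systems tune this

def watch_driver(gaze_on_road, alert, disengage, engaged):
    """Poll a gaze estimate; escalate, then disengage, on sustained inattention.

    All four callables are hypothetical interfaces, not any vendor's API.
    """
    eyes_off_since = None
    while engaged():
        if gaze_on_road():
            eyes_off_since = None            # driver is watching; reset the timer
        elif eyes_off_since is None:
            eyes_off_since = time.monotonic()
        elif time.monotonic() - eyes_off_since > EYES_OFF_LIMIT_S:
            alert()       # chime, flashing cluster, seat vibration...
            disengage()   # hand control back safely (slow down, pull over)
        time.sleep(0.05)  # ~20 Hz polling loop
```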
We're still in the early days on this, but given this video it appears that there was a hazard in the road that should have been detectable by a road safe system. In this case, the system failed to recognize a person, and they're dead.
I'm not a legal expert, but it seems plausible that this isn't going to be just a sad tragedy with no one at fault - in the near future we could see actions such as negligent homicide charges.
You can see the human driver taking her eyes off the phone and immediately recognizing the situation.
Yet I see no hard deceleration. This would have to be immediately visible as it would jerk the driver forward from her seating position, yet you can't see this happening.
I understand that once the pedestrian came into the light cone, the computer should have reacted almost instantly to a very obvious threat situation, yet I see nothing.
2+ seconds from the time you first see the person on this really crappy video is enough time for a human to react and a car to stop from 35mph. I don't blame the driver. Trying to stay alert for hours just for an incident like this is impossible and futile. I do blame Uber's shitty tech. Even this low-res camera sees the person in time and yet it never even applies the brakes from what I can see.
I really hope this will be looked at professionally and without bias. The initial statement of the police sounded anything but. If they looked at the same evidence as we do now, it would indicate a severe lack of understanding of the difference between camera pictures and human vision. Plus, who spread the rumor on the very first day that the victim was a homeless person? What does that have to do with anything?
The automotive cameras that I have worked on seemed to have a better low-light performance than this video suggests.
Some of the more recent OmniVision focal plane arrays (imagers) are really very good in low light conditions (for the price).
Of course, it is difficult to tell without a side-by-side comparison, but I would have thought that the equipment could/should have been better than this.
I didn’t see anything about light adjustment time in that article. Can the human eye adjust between high and low light conditions faster than a camera?
I realize the self-driving/lidar/AI/IR is more interesting to the HN audience but what about the pedestrian? How can the pedestrian not see, anticipate and react to a 4000 pound SUV, on the road (where cars normally are), with headlights (I assume they were on), driving right towards her?
Did the pedestrian not look both ways? The pedestrian could have easily gone to the crosswalk, waited a few seconds for the car to pass and then jay-walked, at least stopped in the left lane when they realized the car wasn't slowing down, hurried to get across, or something.
Based on the video it looks like the pedestrian was just as clueless as the "driver", only looking up in surprise at the last second and making no move to avoid the collision. Maybe it sounds cruel blaming the person that died but they certainly played the largest part in their death.
I am making no excuse for Uber; I know nothing about self driving cars or the tech behind them. But come on: how can a 150-pound person walk into the middle of a road without making any effort to check for their own safety? It seems similar to stories of people wearing headphones and walking down the middle of a train track.
As a 200 pound ball of squishy flesh and my life on the line, I feel it is my job to be very aware, very cautious, very careful when entering the domain of 2 ton speeding metal objects, especially in the dark.
With all the logged sensor data this should be the best documented fatal collision in history and will be able to be replayed again and again with different software to see if it was avoidable with the current hardware configuration.
Obviously a sad situation, but it's reassuring that the knowledge from one crash can lead to all other autonomous vehicles learning to avoid it in the future.
I'd be really interested to know if there have been any 'close calls' before now, where a similar malfunction occurred, but the human driver caught it in time and manually hit the brakes. Not that I blame this driver; I just think that any incident where the human driver had to take control to avoid an accident is really just as bad as this.
I sure couldn't have reacted quickly enough to have prevented this collision. I'd expect the car to have better reflexes than I do, but given the little time from when the pedestrian first showed up on camera and the time when the collision occurred, I doubt that there was anything the car could have done, regardless of the sophistication of its software.
An autonomous vehicle obviously shouldn't exceed a speed where its stopping distance exceeds its vision capabilities, since next time it might as well be a tree trunk. That's sometimes difficult for humans to gauge, but then we want autonomous cars precisely so they can make these calculations (and they are much better equipped to).
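That calculation is straightforward; a sketch assuming simple constant-deceleration physics (the friction coefficient and sense-plan-act latency are illustrative values, not any vendor's spec):

```python
import math

MU = 0.7       # dry-asphalt friction coefficient (illustrative)
G = 9.81       # m/s^2
T_REACT = 0.5  # s of sense-plan-act latency (illustrative)

def max_safe_speed(sight_m):
    """Largest v with v*T_REACT + v^2/(2*MU*G) <= sight distance; returns m/s."""
    a = MU * G
    # Positive root of v^2/(2a) + v*T_REACT - sight_m = 0
    return -T_REACT * a + math.sqrt((T_REACT * a) ** 2 + 2 * a * sight_m)

for d in (10, 20, 30, 50):
    v = max_safe_speed(d)
    print(f"sight {d:>2} m -> max {v * 3.6:5.1f} km/h ({v * 2.237:5.1f} mph)")
```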
Uber clearly has a very terrible problem that their sensors did not pick this up and stop in time, but yeah... if what I saw as a human driver matched what the video camera saw and I hit the brakes the microsecond I saw them, my car still would have hit the person. Brakes aren't magic, and human vision has its limitations.
I'm more than a little disappointed to see the victim walked all the way across the left-side lane, unnoticed, before being struck, while the monitor appeared to be more interested in her phone, or whatever she was continually looking down at, instead of keeping an eye on the road and vehicle performance, which I imagine is what she was being paid to do.
A human driver would have braked and swerved left into the other lane; there was obviously no oncoming traffic.
A human driver would have seen the woman much better than this camera footage makes out.
The Uber attendant was sadly not paying attention, which is going to look very bad for the self-driving car industry, even if it is very hard to ask humans to pay attention and do nothing.
Does anybody really think it's a good idea for Lyft, Uber, Tesla etc. to develop safety systems for the open roads separately, in isolation? Who benefits from that?!
I'll bet a Tesla would not have run this person over. This is where the Government should step in and force all of these AV manufacturers to implement a common safety system / protocol.
Certainly, some team of engineers is tasked with analyzing the cause of this failure and providing potential solutions for future releases. What is the best way to do this without trivializing the fact that a human life was lost? Does anyone have experience with analyzing the aftermath of some event which caused death?
I don't know exactly how autonomous cars work, but is it possible there was a warning (beep) before the system tried to stop the car, she got confused, hit the accelerator, disabled the system, and hit the pedestrian by accident? How do autonomous cars decide what to do when there is an obvious obstacle and the driver hits the accelerator?
This thread is a case study on the Dunning–Kruger effect. I hope a lot of self reflection occurs once the actual studies and, you know, facts get around about human vision, peripheral perception, low light peripheral perception, reaction times, pedestrian fatality statistics, this accident...
I wonder how the Uber engineering team is feeling. I've had bugs in my code, but they've never killed anyone.
I guess the feeling may depend on how rushed vs. well-tested they feel their software is. But no amount of "I told you it wasn't ready" is going to repair the psychological damage.
The real problem here is not that the tech is imperfect, I'd actually expect it to be imperfect given it's not been even a decade for this tech. The real problem is that this imperfect tech is on public roads. How do they even get the permission to test drive on public roads?
I remain curious about the set of detection mechanisms in place - if this video is the only one, I’d say Uber is culpable (of course it’s not the only one), and if it’s not the only detection mechanism, then where’s the data from the others?
Another note: I haven’t heard criticism about the driver’s inattention...
I think the inattention is just expected now. If you have to do absolutely nothing 99.99% of the time, then it just doesn't seem possible to constantly focus on the road. At least if you are driving down an empty highway you still have to hold the wheel and press the accelerator. For this reason, I think level 0 systems ("may momentarily intervene but has no sustained vehicle control") hold much more promise for the near future (next decade or two), and I wish there was more focus on getting these systems widely deployed.
I certainly wouldn't have high beams on here. It's a lit road and it seems to be in the city, so you'd likely be encountering other cars often enough that you'd spend more time turning the high beams on and off than you would controlling the speed and direction of your vehicle.
What happens when the human grabs the wheel? Is it possible the car could/would have stopped, or reduced the danger to the pedestrian, but the human actually overrode any emergency corrective measure the car would have taken? Did the human prevent the car from working as intended?
It's very disingenuous to release this video. This is not the picture the car saw nor the human. That coupled with the fact that no radar or lidar data was released makes me wonder why the police seem to be bending over backwards to absolve Uber.
Baloney. In most USA cities the vast majority of police officers on patrol are in radio motor patrol cars, not footposts. Are you even from the USA? Or is there a city that is an exception?
Or are you repeating the trite idea that all motorists are also pedestrians because they have to walk a few feet in a parking lot or driveway at the start and end of their trip? That is ridiculous if so -- travelling a few feet in a parking lot, a place designed to be safe for pedestrians, is very different from having no choice but to walk miles of dangerous high speed roads like the one in question today because it's the only way to get home
Yea that video pretty much sums it up. The car is definitely the proximate cause. If I were handling the claim I would go 90% liability to the car for driver inattention and careless negligence and 10% to the pedestrian for jaywalking.
I have to wonder if HID headlights contributed to this problem as well. The headlights have such a sharp, defined "edge". The woman was not visible until she was within the bright portion of the lights (which isn't very big).
This may be unfair, but LISTEN UP, REGULATORS: it's time to have standardized tests for SDVs before they are allowed to be considered road-ready. And I hate to say this, but WAYMO is the company to design the test suite.
I don't think Uber with their "Hustlin'" culture should be allowed to run field tests that may endanger human lives. We simply cannot "move fast and break things" when it comes to self-driving vehicles.
What about the logs? Do companies like AppDynamics, Datadog support self-driving cars yet? Any companies out there providing a platform to do this? So important for debugging, auditing, compliance, legal, etc.
I passed there tonight, the place is incredibly well lit and the person was probably visible from before the underpass, and probably visible the entire time she was on the road because of the geometry of the curve.
Elaine Herzberg had probably jaywalked there casually before, but human drivers slowed. This time she was surprised by a robot killer that violated Asimov's first law. Safe AVs are coming, but why so impatient?
Wow, this is inexcusable. I hope Uber is found liable for this death. Someone walking across the road at night? That is literally the first test case I would add if I were building a self-driving car AI.
If anyone is wondering how to avoid accidents like these while YOU are driving - look for shadows. You may not see someone wearing dark clothes, but you can see the shadow they project change as you move.
It's completely predictable that the human driver would stop paying attention, which is what makes these "semi-autonomous" systems worse than systems that aren't at all autonomous.
There are a lot of questions around the vision side of this, but I have a slightly different question, the answer to which probably points to a greater systemic issue in the overall solution.
Why is a car, the speed of which is regulated by the autonomous driving system, breaking the legal speed limit?
Even accounting for errors/accuracy of odometers/speed cameras, this car was traveling at a speed in excess of the legal speed limit for that section of road, increasing both all the underlying risks of operating a 1-2t motor vehicle at 60+km/hr, and the material impacts from those risks.
In a relatively new technology, why is the decision to trade safety for speed being made?
In any technology where safety (and ultimately human life) is concerned, why are we testing in production?
Since this is still in development, Uber should have assumed that the self driving software would fail unexpectedly.
If a human sees something work 1000 times, they will expect it to work every time. Uber should have anticipated overconfidence in the software. They should have coached the driver on this. They should have reviewed footage of safety drivers to ensure that safety drivers are complying with instruction.
This was not an accident. This was carelessness. I blame Uber. Uber's "just get it done" cultural attitude is incompatible with developing safe self-driving software. There's a reason that Google, with 10X the number of miles on the road, have not killed anyone.
Obviously a "hello, world" failure of self-driving.
I wouldn't blame a human for not seeing that person, but better should be expected of the tech.
However:
* there doesn't appear to be a single reflective device on the bike. For instance, the usual spoke-mounted reflectors that are stock equipment on even low-end bikes do not seem to be there.
* the woman seems completely oblivious to the car's approach. She doesn't react at all but keeps casually walking with the bike right until the moment of impact. She can't have been looking in the direction of traffic at all and is mowed down completely by surprise, like someone sucker-punched in a bar. (Was this someone with disabilities? Visual or hearing impairment? Developmental?)
* I think here is the road where this took place:
https://www.google.co.in/maps/@33.4351488,-111.9415554,3a,60...
The scene of the accident is a little bit forward of here.
Utterly not a place to be crossing at night in a way that is completely oblivious to the presence of vehicles and far away from the intersection. Note that this is a one-way double lane; there is only one direction in which to look out for cars.
This kind of badly behaved, suicidal pedestrian is a challenge to drivers even in daylight. However, you would expect precisely this sort of situation to be among the highest priority test cases for self-driving tech.
BTW: here is a shot of a sign forbidding pedestrians from crossing at almost that exact spot: https://www.google.co.in/maps/@33.4362927,-111.9424451,3a,15...
She wouldn't have seen that one because she probably crossed the other lane already; she's coming from the median. How did this person live to 49?
Would love to see the log files of the car to see why and how this occurred. I'm pretty sure it's known internally - but is Uber obligated to release them for the public?
The bicycle is visible in the video at 0:01 (of 0:22). The glint of the reflector appears when there are two full white divider lines between the camera and the victim.
Did anybody else think that the human operator in the driver's seat should always be looking ahead, rather than being distracted by looking down or below?
> That’s only because there’s no light on her until the end.
It seems like the car is outdriving its headlights. Assuming a road line is about 10 feet (as is typical), only about 20~30 feet of distance is illuminated... but at 30mph you need ~45ft for an average car to brake, and at 40mph you need ~80ft. (According to [1] -- ignoring reaction time.) Is there something wrong with the headlights? Is the video just that bad? (And if so, is it even used for decision making by the car?)
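Those two figures are at least self-consistent; a quick check in Python (constant deceleration, rearranging d = v^2/2a):

```python
def implied_decel(mph, brake_ft):
    v = mph * 5280 / 3600          # mph -> ft/s
    return v * v / (2 * brake_ft)  # rearranged from d = v^2 / (2a)

for mph, ft in ((30, 45), (40, 80)):
    a = implied_decel(mph, ft)
    print(f"{mph} mph in {ft} ft -> {a:.1f} ft/s^2 (~{a / 32.2:.2f} g)")
# Both imply ~21.5 ft/s^2 (~0.67 g): hard but plausible braking. Either way,
# 20~30 ft of illumination falls far short of either distance, even before
# adding any reaction time.
```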
> That’s only because there’s no light on her until the end.
Yes, quite true.
> The car hardware should have seen her. If it doesn’t work in the dark it shouldn’t be on the road imho.
I tend to agree. I think unfortunately this video indicates that the car did not perform worse than a human. It's still valuable to have cars that perform better than humans, even if they aren't perfect.
> I tend to agree. I think unfortunately this video indicates that the car did not perform worse than a human. It's still valuable to have cars that perform better than humans, even if they aren't perfect.
I disagree. I think this video shows that the car performs much worse than I would expect a human to.
I have my own dashcam videos at night that look very similar to this, and I definitely think I would not have hit a large object such as a pedestrian walking a bicycle on a straight road such as this in a 35mph zone.
Even in that badly exposed video, at 35mph, I think you would have to be extremely lucky not to hit her. From her coming into view to collision is ~700ms. Subtract average human reaction time (250ms), and that leaves a little less than half a second to swerve.
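Working that budget through (Python; the 700 ms window and 250 ms reaction time are the estimates given above, and 0.7 g is an illustrative hard-braking figure):

```python
V = 35 * 0.44704   # 35 mph in m/s (~15.6)
WINDOW = 0.700     # s from first visibility to impact, as estimated above
REACTION = 0.250   # s average human reaction time, as cited above
A = 0.7 * 9.81     # m/s^2, illustrative hard-braking deceleration

usable = WINDOW - REACTION
print(f"distance at first sight: {V * WINDOW:.1f} m")  # ~11.0 m
print(f"usable time: {usable * 1000:.0f} ms "
      f"= {V * usable:.1f} m of travel")               # ~450 ms, ~7.0 m
print(f"speed shed by braking in that time: "
      f"{A * usable / 0.44704:.1f} mph")               # only ~7 mph
```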
A human would have made this mistake, but I don't know enough about LIDAR to understand if a computer should have made this mistake. Is night vision poor on these machines?
Then slow down. If you can't stop or swerve when something becomes visible to you then you're driving too fast regardless of the posted speed limit. Period.
Isn't that the exact same video in poorer quality?
I think the question now is how well a LIDAR is supposed to work in the dark. I keep hearing people saying that the hardware should've "seen" her, but speaking purely in terms of physics, how exactly is that supposed to work? Doesn't night vision still require a minimum threshold of light in order to work? How much light?
If that's true, this would certainly suggest a big failure from the LIDAR system in this incident. I wonder how that claim is tested. Would city lights interfere with testing (similar to how you can't see stars in the middle of cities)?
Uh, really? You don't think a human driver could have taken any corrective action if they were paying attention to the road? The risk I see with these semi-autonomous systems is precisely that they invite the sort of inattention seen here. The brakes, at least, should have been applied.
There is I think a broader legal/philosophical question here:
Are we to judge the car's ability to avoid the pedestrian on human standards or AI/machine standards?
From the video, it looked like this could be a situation where a human driver cannot avoid the accident (due to poor light, not enough reaction time etc) but a machine should be able to avoid it with it having multiple cameras, IR sight, and much faster processing speed.
My guess would be that this is an off-the-shelf dashcam, completely independent from the self-driving cameras and sensors. Being an off-the-shelf dashcam means extracting and publishing the video from it is easy, which is why it came out first. It also explains why it has video from inside the car; many off-the-shelf dashcam models have both a forward-facing and a backwards-facing camera.
Personally, I'm more annoyed at the blurring of the telemetry data at the bottom of the video. What was blurred probably included the dashcam model, date/time, GPS coordinates, calculated speed, and so on.
The output from the other sensors might take more work to extract and convert into an understandable form, if they are even available after the fact; they might be immediately used in the control loop and then discarded, instead of being stored.
This makes the most sense. It's probably a redundant system: off the shelf dash cam not connected to anything but power. The police could easily extract the sdcard.
The actual car data will need to be collected and released by the NTSB and will probably come in a few weeks.
It essentially is. Even with that crappy autoexposure, she would probably be visible if the resolution and framerate weren't terrible, and the compression hadn't taken out whatever detail remained.
"There is no such thing as an accident. Only negligence."
This was no accident. The jaywalker was negligent for crossing the street as she did.
Uber is negligent for not having autonomous vehicle standards that clearly exceed what humans can do (I don't think a human could have avoided her based on the video).
Not sure about the driver. Depends on his training/job etc...
I do wonder if perhaps Uber made the video darker before releasing. Or used a camera that is not the main sensing camera. Because I would hope the cars are smart enough to increase ISO to get more detail at night time.
2) They should be using a dark-adapted / night-vision camera, and probably are; the video feed presented is just that, the one that best "presents".
3) They should be using thermal infrared to spot living things
4) They were severely overdriving their headlights if what we are seeing is "reality" as seen by the onboard computer.
I want to know how much access to the hardware Uber had after the accident, including physical and remote. I also want to see the streaming logs and full provenance.
(4) is the key point here: if that's the only meaningful telemetry, they're going way too fast. If you can't see, then you can't stop. This is still a computer failure.
There is no way, full stop, that Arizona regulators allowed a driverless car with only a single dashcam quality level camera as telemetry to drive around. Either Uber broke the law and let a car on the road that doesn't have working LIDAR and IR and possibly RADAR or the software just doesn't work.
These would have been among the first battery of tests by the car company, surely? A child chasing a ball is an obviously important scenario. The lidar/software didn't recognize the human/bike combo? But the blob was moving, and the system failed to see or avoid a moving blob. That makes me think there should also be a pre-trip system check, like airlines have.
What controls does Uber have to ensure the "safety driver" is doing their job? When the computer gets confused or its computed probability drops below a threshold, when and how does it alert the driver? How many of these self-driving testbeds have a person in the seat just for CYA and not actual safety?
I'd like to see Uber's logs of all the other pedestrian and vehicle near misses where the computer took corrective action.
What testing did Uber do on a closed course with adversarial conditions? Ball rolls into road with a child following, pedestrians walking under a flickering street lamp, people dressed in costumes, a parade, a protest, service workers, a pickup truck with a lost load.
I don't think that would work. Driving is not a fully conscious, deliberate see-analyze-act loop. A lot of your control movements are subconscious, and you expect instant, correct feedback from the vehicle. To see what I mean - if you've ever played a racing game, try opening a racing replay/Let's play video on YouTube, pretend for a while that you're in control (use your WASD keys), and see how your brain complains about your inputs having no impact on reality.
> 3) They should be using thermal infrared to spot living things
To my knowledge the fast frame rate (>9 Hz) thermal sensors are still ITAR-restricted, seriously limiting their application in autonomous vehicles among the other things.
> They were severely over driving their headlights if what we are seeing is "reality" as seen by the onboard computer.
This is the salient point that I was looking for.
Interestingly, US and European headlights standards are very, very different and this accident is a clear example. US headlights are intended to "illuminate the roadway", whereas European headlights are intended to "illuminate obstacles on the road and in the margins". That's not just a wording difference, rather the part numbers for US-spec and European-spec headlight assemblies are different.
Hopefully the NTSB has direct access to all the other telemetry and will release it as well. It would be greatly beneficial to the public if we could see the LIDAR and thermal logs as well.
It's Arizona. I just checked. The high today was 92F. Pavement would almost certainly be warmer than that. How's a thermal camera supposed to detect 98F on a background that is probably +/- 5 degrees of that?
Thermal cameras care about more than just the absolute temperature of an object. The emissive and reflective properties of the material (in infrared) also matter significantly. A human being would almost certainly be visible in a thermal imaging view of that scene.
When you walk outside in the daytime, everything is bathed in a uniform light from the sun, but you have no trouble distinguishing between objects, as they emit and reflect light differently.
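The emissivity point can be made quantitative with the Stefan-Boltzmann law; a sketch with illustrative (textbook-typical) emissivity and temperature values:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def emitted(emissivity, temp_c):
    """Total thermal power radiated per unit area (grey-body approximation)."""
    return emissivity * SIGMA * (temp_c + 273.15) ** 4

skin    = emitted(0.98, 33)  # skin surface ~33 C; emissivity ~0.98 (typical)
asphalt = emitted(0.93, 35)  # still-warm pavement; asphalt emissivity ~0.93

print(f"skin {skin:.0f} vs asphalt {asphalt:.0f} W/m^2 "
      f"-> {abs(skin - asphalt):.0f} W/m^2 of contrast")
# ~13 W/m^2 apart even at near-equal temperatures: the emissivity gap alone is
# worth roughly 2 K of apparent contrast, versus a ~0.05 K noise floor on a
# decent uncooled microbolometer.
```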
Probably not relevant to conditions at 10pm on Monday, when it was between 70F and 57F (the range for 6pm Monday to 12am Tuesday.)
> Pavement would almost certainly be warmer than that.
Sure, when it was 92F the pavement would probably be 140+F.
> How's a thermal camera supposed to detect 98F on a background that is proably +/- 5 degrees of that?
At night, the pavement would be much cooler than a human; at the high temperature you report, it would probably be much hotter. There's a band in between where the problem you describe would occur, sure, but it's neither at the daily high nor, more to the point, in the conditions when the accident occurred.
See comment above. We routinely walk around (and avoid objects) even though the entire field is uniformly bathed in the light of a single light source.
If you redefine the temperature scale so 90F is 0, then that difference between 98 and 92 becomes the difference between 8 and 2, which suddenly looks a lot more significant. So I think your concern is based more on the arbitrary 0 of the F scale.
If a thermal camera can't detect a person on the road, then a thermal camera is the wrong camera to use. The exact technology isn't important - the important thing is, the vehicle failed to detect a human being crossing in front of it.
Even lows are still in the upper 70s, and again, asphalt will absorb heat during the day and then release it at night. 10pm isn't exactly "middle of the night", either, it's only 3 hours after sunset.
There would still be situations when you'd be attempting to detect a human with a hot asphalt background. Such as a crosswalk at the base of a hill (C is the car and X is the crosswalk):

      C
       \
        \
    _____X_____
I wouldn't be one bit surprised. The antics of this company are well known. This incident has me wondering: of all the companies working on this, why is Uber trusted to be testing this technology?
> Levandowski seemed to struggle in other ways as well. In December, Uber dispatched 16 self-driving cars, with safety drivers, in San Francisco without seeking a permit from the California DMV. The test went poorly—on the first day, a self-driving car ran a red light, and the DMV ordered Uber to halt its program in the state.
> The company suffered further embarrassment when a New York Times article, citing leaked documents, suggested that Uber’s explanation for the traffic violation—that it had been caused by human error—wasn’t complete. The car malfunctioned, and the driver failed to stop it.
> The misdirection came as no surprise to the Uber employees who’d spent time at Otto’s San Francisco headquarters. Someone there had distributed stickers—in OSHA orange—with a tongue-in-cheek slogan: “Safety third.”
In 2016 they tried just turning their autonomous vehicles loose in SF without regulatory approval. They got shut down. Rather than apply for permits, they headed off to Arizona, where the then-new governor welcomed them with open arms, an "anything-goes approach", and a lot of anti-regulation blather: https://www.nytimes.com/2017/11/11/technology/arizona-tech-i...
Culture takes time to change and we don’t know how hard they’re trying. A new CEO matters long-term but lower level management matters more in the short-term.
That's not what I was suggesting either. We had a long discussion in a nearby thread about this; I got tired of it but check it out if you're interested.
Trusted in general. Trusted by people. Trusted by authorities.
I mean, there ought to be the point at which someone says, "this company has a long, consistent, documented history of antisocial behaviour and complete disrespect for law, therefore they shall not be allowed to work on this society-changing and life-critical technology". That this doesn't happen is, I feel, a failure of our society/regulatory apparatus.
I’ve gotten 2 products UL certified, and I don’t think it’d be very useful for self-driving cars.
The process seemed to be mainly reactive: only things that had caused frequent problems in the past were part of the standard.
For instance, plug-in devices must have fuses. The standard gives no guidance about the right size of fuses, but a reasonable engineer would of course pick a good size. The standard’s main effect is to avoid cutting safety features to save cost.
We shipped a mobile office robot. We sweated the details of it not running into people or crashing, but the UL cert only verified that we had fuses and flame-retardant plastics and the like.
> I do wonder if perhaps Uber made the video darker before releasing
This video was released by the police department.
Are you suggesting someone from Uber drove down to the scene at 10pm, somehow managed to grab the SD card from the dashcam with all the police there, ran back to their car, uploaded it to their laptop, fired up Premiere, edited the brightness, downloaded the modified video back into the SD, ran back to the scene with the tampered evidence, and only then the police got to it?
Sorry, but if this isn't a conspiracy theory, I don't know what is.
I'm hoping the police and NTSB have guards on the impounded car and they have a strict chain of custody on anything Uber accesses.
To be fair though, Uber can probably extract, and potentially even change, this data remotely. Given their track record, that's not outside the realm of possibility at all. They'd be stupid to attempt it with the world staring at them, but this is also the company that had Greyball, the hell map, blatantly disregarded laws in many countries, and had to hire Eric Holder's law firm to run damage control.
It's probably easier to just enable ssh on the prototype system to enable arbitrary remote debugging than to specifically design a locked down API for accessing certain data.
I was under the impression this was just footage from an off-the-shelf dashcam. I could figure out how to get video out of mine without an instruction manual. Is that not the case here?
You have no idea where the video came from. Completely conjectural example: Uber contacts the police (who haven't even tried to find any SD card, download the contents, or figure out how to convert it to a format postable on Twitter) and says "hey, I have the video for you."
Did anyone bother to go to that exact spot at night to see what things are like from a human perspective? That strikes me as the first thing to do as a news agency...
The issue is more with the dynamic range of the sensor than the ISO. Increasing the ISO would blow out the highlights long before providing sufficient detail of the shadows (not to mention the increased noise).
Regardless of this camera's performance the LIDAR and other sensors should have picked this up.
Is there any reason it wouldn't be possible to employ multiple cameras, each with varying gain/ISO/aperture/exposure/shutter-speed, to combat the narrow dynamic range of the individual sensors? Basically, create an HDR stream in parallel (one sensor per exposure setting) instead of in series (varying the settings from frame to frame).
Disclaimer: I have a frustratingly poor understanding of this subject, something I desperately need to remedy.
This sort of method is used in the Canon DSLR custom firmware Magic Lantern for HDR video - it brackets the exposure by ISO. However you have some problems with moving subjects: https://youtu.be/5me5jEr4ldQ?t=415 (warning - flashing images)
That's interesting but it still seems to be utilizing a single sensor set to different exposures in series, not multiple synchronized global shutter image sensors each set to different exposures. While such a setup may have little to no use for human perception what I'm talking about is feeding each stream into a vision system, be it a classical vision pipeline or deep net system.
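That parallel-bracketing idea is easy to sketch. A toy fusion of two synchronized exposures in Python/numpy; the scene values and the 16x ratio are made up for illustration:

```python
import numpy as np

def fuse(short, long, gain, sat=0.95):
    """Merge two synchronized frames (values in [0, 1]) into one linear HDR frame.

    gain is the exposure ratio long/short (16x here, i.e. 4 stops apart).
    Trust the long frame wherever it isn't clipped; else rescale the short one.
    """
    return np.where(long < sat, long, short * gain)

# A pedestrian in shadow (scene value 0.20) next to a streetlight (scene 14.4):
# the long frame sees the pedestrian but clips the light; the short frame keeps
# the light. The fused frame keeps both.
long_frame  = np.array([0.20, 1.00])    # streetlight clipped at 1.0
short_frame = np.array([0.0125, 0.90])  # same instant, 16x less exposure
print(fuse(short_frame, long_frame, gain=16))  # -> [ 0.2  14.4]
```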
Those cars have multiple sensor feeds, many of which are far better than human vision. I have experience with off-the-shelf IR illuminators, and even an array of half a dozen cheapo ones improves low-light object detection at distances of up to 40-50m by orders of magnitude.
So I feel Uber would have selected whichever feed best supported the explanation that it was "impossible" to avoid such an accident, and handed it to investigators. Given the number of people agreeing with the "impossible to avoid" explanation even in a tech-savvy crowd like HN, I'd say the PR strategy's worked well and saved Uber some more bad press.
I think it's possible to look at the video file and find markers for modification. Certainly this is the case for JPEG, TIFF, PSD, vs Raw files. It's not impossible but very impractical to render a Raw file, edit it, and then unrender it back into a Raw file. It's also pretty malicious and I think it would show a corrupt intent to modify a Raw image in this manner.
There are Raw equivalents for video, but consumer cameras are all post rendered, and heavily compressed. So modification is going to change all kinds of metadata, but in particular there will be more than the expected quantization from double compression.
Again, if that were demonstrated, I think it would show a corrupt intent. The intent to deceive.
Redcode Raw is an example, I'm sure there are others. This doesn't mean there's zero in-camera processing, it just means it's not rendered, ergo it's scene-referred or camera-referred. There can be lens and sensor artifacts removed, like dead pixels.
OpenEXR is another such format that's camera agnostic, not totally dissimilar to Adobe DNG vs the (proprietary) Raw format used by a particular make/model of camera.
Doubtful. There's no way they are making the CCDs/MOSFETS/etc. themselves, you'd need a heck of a chip fab and for almost no increase in usability. Most chips these days are very good at what they do. The main issue is what kind of sensor are they using. However, based on the compression that twitter is using to send you the video, we really don't have any way of knowing what the sensor is (bit-depth, frame-rate, dwell time, wavelength sensitivity, etc).
I don't mean they made their own CCD chips, (and I guess I didn't mean raw like what the voltages are at the digitizer), but the configuration and pre-processing prior to being fed into their algorithm probably is considered valuable enough that they wouldn't want to publicly display it.
You can't increase ISO without washing out the areas in the street lights. One of the difficulties not being discussed here is that this is a terribly lit road, with large dark voids between the street lights. And the biker was crossing in the void.
It is not a terribly lit road. You cannot increase ISO, but you CAN increase dynamic range. The human eye, for example, or a much better camera, can easily see into those seemingly large dark voids between street lights.
This is a common theme among all the Luddites and Uber haters, and I don't get it.
The video doesn't show the rider until less than a second before the collision. Yet it's an article of faith among you guys that this video which does not show an easy path to avoidance actually does, due to various magic incantations:
+ The road lighting wasn't bad. A real eye would have seen something the camera can't[1], therefore the video proves what it doesn't show.
+ LIDAR and IR should have shown that, therefore Uber is hiding something because the video doesn't show data that must have been present.
I just don't get it. I'm watching this video and seeing what is clearly a huge tragedy and a near-unavoidable collision.
[1] This is not at all true in the real world, by the way. But I see little value in arguing what is clearly a point of faith and not reason.
Not a Luddite here (though definitely an Uber hater). I'm really disappointed by what this accident tells us.
IMO this was still a hard case for a human[0], but it shouldn't be a hard case for LIDAR-equipped self-driving car. They were not doing Tesla-style stunts with "we'll drive only with a visual-spectrum camera". They had a suite of sensors on-board.
I expect it'll turn out that either it was Uber-specific problem (e.g. software bug, or lying about LIDAR capabilities, or both), or a deeper problem in sensor tech for self-driving cars in general. The second case would be really disappointing, as - given all that has been published so far - one would expect self-driving car tech to avoid accidents like this.
--
[0] - I mean, really, a pedestrian going through a road like this with zero reflective lighting? Is it even legal in US? In Poland it isn't.
In the wake of the Tesla accident, there was this discussion about the difficulty of detecting stationary obstacles (too many false positives, so they're readily discarded). As here the movement was perpendicular to the movement of the car, I wonder whether that had any impact.
What do you mean by unavoidable? Do you think a human would have made the same mistake in most cases?
Because then my question is, have you ever used a camera at night? It can't see as much as you (at video rates).
What don't you get about people discussing other sensing technologies? Just because it wasn't clear from this video and this camera, there are other sensors which don't require illumination with visible light. Even a front facing radar might do a good job of slowing down.
I'm saying that the evidence as it stands doesn't remotely support the idea that this accident was avoidable for either the driver or the automation, and that I'm shocked at the irrationality with which people are trying to pretend it does.
If you want to make a point about cameras or eyes, do so with evidence. The evidence in the linked video supports the opposite of what you claim.
Here's what the video helps understand. The pedestrian wasn't hiding behind, say, a bush and leaped out. It seems like the pedestrian was unoccluded for the whole time they were on the street.
"Seems like", that is, because the pedestrian is not actually visible! This is the irrationality part. You are literally saying that she must have been visible because you can't see her on camera.
Gah, but she was in the dark. You literally can't see her. You're asserting stuff about human vision being better without evidence[1], and then using an inference from the video that clearly shows this woman was invisible to "prove" your point. Which is insane.
[1] Again, this is just not true. Camera sensors have HIGHER dynamic range and ISO bandwidth than eyes. What is true is that people have better AI behind the scaling decisions and can search better across wide-dynamic-range environments than typical cameras. But again, that presupposes that the driver would have been looking for the black-clothed biker walking a bike without reflectors in the dark shadows between street lights. Which, wait for it, NEEDS EVIDENCE.
By visible I meant visible to active sensors like radar and lidar. This is the second part of my original response to you, and why I then said the pedestrian was unoccluded. Sorry that was unclear. Also sorry you're so frustrated.
Street lighting is a side issue, and the lack of street lighting in AZ is a feature. It's awesome, I get to keep my night vision, and we have the stars :) Making things worse to accommodate confusion about what computers are capable of is a bad idea, and the correct sensors don't care anyway.
It's great that Uber changed the CEO and he's trying to change the culture. But that doesn't happen overnight. This is a 9-year-old company with 12,000 employees. It has a culture that has been built and reinforced over that time. As CEO, he doesn't make decisions at the front line directly, and it may take him a few years to convince everyone in the company to adopt his rules (and he may even fail, depending on the incentives inside the company). There's no magic wand a CEO can use to change a company overnight.
Are you joking? Uber has always flouted laws. Even in their home town it was illegal to offer a taxi service without a license. They did it anyway. Later, the city decided to differentiate between ride hailing via app and sticking your arm out to hail a cab. That change happened after Uber broke the law.
Third comment I have seen by you mentioning the new CEO, so I think this is worth stating. Does a new CEO automatically change all the thinking from all the employees that were hired under the old CEO? Uber has a compromised integrity because they, as a collection of managers, made immoral/illegal decisions. So yeah, hiding information to save their ass is something I see as possible.
Not just their thinking.. after 9 years, they have policies, practices, rituals, incentives, etc (both written and unwritten) that supported those immoral/illegal decisions.
And the employees aren't helpless either. If they don't like this new CEO, they can choose to withhold information, don't write things down that are no longer approved of, etc while their direct manager looks the other way. They can collectively undermine the CEO until he's completely ineffective and forced to leave.
The culture is one of the hardest things to change inside of a company once it's established. It's self-reinforcing.
So we have to wait a couple of years to see if he's successful. There's no guarantee he'll win just because he's at the top.
I don't know about you, but if I was in his shoes coming in trying to fix a company with a compromised integrity like this, I would be making it very clear to the employees on day 1 that anyone who continued the same old crap or showed similarly questionable judgment (whether to the company's benefit or otherwise) would be fired instantly, end of story. I'd bet that would change things mighty quick. And I don't think this is the only solution either. So yeah, I think it's very doable if the CEO is serious about it, and he very much seems to be. I'm more than open to see evidence to the contrary, but until then, I think it's quite wrong and unfair to assume they would be continuing their previous practices.
Culture is much harder to change than you're making it out to be. The employees are not his subjects that can be bossed around and ordered to behave in a certain manner. That's old school management thinking (from 50+ years ago) that's very ineffective.
The culture is one of the hardest things to change once it has been established. It prevents companies from entering certain market segments; from competing in certain ways; from taking certain actions. In a large company, it's so difficult to change, that it's easier and more effective to establish groups outside of the company to pursue new initiatives.
For example, this is the reason why AWS is separated from the retail segment inside amazon. Why Walmart won't compete with Nordstrom. And why large companies regularly create subsidiaries to innovate on new ideas or pursue new market segments (and they're often established off-site, so the corporations culture doesn't carry over).
GM makes a great example of how difficult that can be. They spent billions on Saturn as an experiment to build a new car company. Later they tried to integrate Saturn's success into GM by adopting their culture and other practices... but it didn't matter what the management wanted. Saturn died and GM didn't change. Saturn's culture didn't infect GM; GM successfully killed a culture that tried to change them, even after seeing the success of Saturn.
> The employees are not his subjects that can be bossed around and ordered to behave in a certain manner. That's old school management thinking (from 50+ years ago) that's very ineffective.
Why the false dichotomy? How is telling your employees "you will be fired for lack of integrity" equal to "bossing them around"? This is quite the straw-man and it undermines the rest of your points for me. It's not like the only 2 options out there are to order people around like your servants or let them do whatever illegal hell they want. He could be strict about the red lines on integrity without ordering them around like his servants with regards to general management. Why do you make it seem like the only possibilities are the extremes?
What is "lack of integrity"? It can't be defined granularly enough by a CEO. He can make an example of some particularly egregious behavior--and that does help--but there are millions of small decisions that happen every day.. and the employees will need to decide if each of those actions represents a "lack of integrity" or not. And the truth is, they already have some practices, that they aren't going to question unless directly prompted to do so. Just as you repeat numerous rituals and habits regularly--you don't reexamine them until something forces you to.
What this means is that the CEO cannot order everyone to behave with integrity. They must be convinced of what integrity is, what type of behavior that represents, etc. So that when they make these small decisions, they are consistent with the culture the CEO is trying to create.
You can't order them to behave with integrity--because that's so vague as to be meaningless. Everyone will make up their own definition of integrity, and it'll always fit so they turn out to be right.
So I don't see the false dichotomy... There's no order a CEO can give that will change the culture in the way you describe (at least not in a large company like Uber).
(I think it's worth pointing out that the people inside of Uber who made the bad decisions likely didn't see them as bad or wrong. Their mission and culture supported those decisions, and they believe them to be right. So when you say 'behave with integrity'... that's meaningless, because they believed they were right when they made the wrong decision initially. They were already acting with what they believed to be integrity.)
Yeah, it's impossible to define it. But thankfully, you don't have to unambiguously define ethics and integrity for all of humanity and posterity to get somewhere useful. We're not writing probably correct algorithms here, we're dealing with humans. Some linear combination of "something illegal", "something you don't want landing on the New York times next to your name", "I know it when I see it", and "if this still isn't clear for you, you test me at your own risk" would be sufficient to take care of it: either they'll figure it out by themselves or they'll find out the hard way.
The clarity of that rule only exists in your head. Each person will draw the line differently, assess the risk differently, etc. Saying "I'll know it when I see it" does not result in a culture change. That just results in the CEO randomly enforcing his rule when he finds out about something he doesn't like.
Again, like I said, it doesn't need more clarity than however clear or vague it is at the moment. Either people will figure out how to stay away from the gray area and whatever they understand your red line to be, or they won't and will instead have to find out the hard way. Your mere assertion that this won't work isn't any more convincing than mine that it will work.
Well, actually, in management it is well known that your method won't work. There's a history of your approach, a lot of research, and things we've learned from corrupt corporations, and there's a reason it's not used anymore. The history, methods, and purpose of corporate culture are quite interesting, and I fully encourage you to read more about it if it interests you. Uber's new CEO has a monumental task ahead of him.
You're arguing they deserve the benefit of the doubt until proven otherwise. Most people seem to feel they've done enough damage that it's on Uber to prove they're worthy of trust.
The cars don't have to increase ISO to get detail, that's for human eyes. Increasing ISO only works because we can't see details in shadows, but computers don't have that limitation. A neural network wouldn't have to add 100 to each pixel to make out detail, like a human would.
> The cars don't have to increase ISO to get detail, that's for human eyes. Increasing ISO only works because we can't see details in shadows, but computers don't have that limitation. A neural network wouldn't have to add 100 to each pixel to make out detail, like a human would.
I'm not an expert on this stuff but I think you're misunderstanding or mixing up the ISO value with exposure compensation. The ISO value corresponds to the sensor's sensitivity, and even RAW photos (which are more or less dumps of the raw sensor data) are captured with specific ISO values, and look different at different ISOs.
i) what a camera needs to do to get details from a dark image (increase ISO and risk washing out bright areas); and
ii) what human eyes do when looking out at the real world (rely on the huge dynamic range that the human visual system is capable of, which is about an order of magnitude larger than any video camera sensor's);
It's worth pointing out however that humans can't make use of that whole order of magnitude range at the same time. The visual system has to gain up and down as the average light in the field of view changes. This takes several minutes or even a half hour to fully dark adapt.
Sorry, in short you're wrong, yes they do. I'm on my phone so I will be brief.
The ISO determines the sensitivity of the sensor in the camera to the photons hitting each bucket in the "sensor array". This sensitivity, combined with the frequency of the photon hits, determines the data that makes up the actual image. If you could theoretically dump the RAW data recorded at different ISOs of the same scene, it would be different for each one.
You can't "boost" an image's ISO post-capture, because the sensor data is already captured. What you can do is increase the values of those pixels that are appreciably close to buy not equal to zero, but for which data exists, giving the impression of pulling out data from the shadows, but the limitations/complexities are too much to go into here. When you do so in practice, you do not have as much relative detail in those extremes.
What we interpret as "noise", typically in high ISO low light photography, is a real thing that the computer cannot perfectly compensate for, because it's a fundamental property of both the hardware, but more importantly, of background light/radiation and the probability/quantum nature of light...
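The shot-noise point is easy to demonstrate: photon arrivals are Poisson, so SNR scales with the square root of the photon count, and digital gain after capture multiplies signal and noise alike. A small numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
snr = lambda x: x.mean() / x.std()

dim = rng.poisson(9, 100_000).astype(float)       # ~9 photons/pixel: deep shadow
bright = rng.poisson(900, 100_000).astype(float)  # 100x more actual light

print(f"dark scene:            SNR ~ {snr(dim):.1f}")        # ~3, i.e. sqrt(9)
print(f"dark scene, 100x gain: SNR ~ {snr(dim * 100):.1f}")  # still ~3
print(f"100x more photons:     SNR ~ {snr(bright):.1f}")     # ~30, sqrt(900)
```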
What no one seems to be talking about is that a person was walking their bike across a divided 4 lane road that in many places would be called a highway. This is not a pedestrian friendly crosswalk, and it’s not high noon.
You can tell by the tail lights of the car in front (first second of video) that a pedestrian would be able to see a car coming.
Which raises the question: why did the human step out in front of the car? Is there culpability there, too? If a person intentionally puts themselves into a deadly situation, how should AI handle this?
We’re all looking at the cars, but let’s keep in mind that crossing a dark divided highway in front of a car you can see coming is a really really bad idea.
You make an interesting point regarding responsibility. Say that person were:
- Mentally deficient.
- A child suddenly running because thats what children do.
- A blind person making a mistake.
There are countries where if a pedestrian is crossing the road anywhere that isn't a designated crossing zone, then they are responsible for their fate.
In Canada and the USA, I believe that pedestrians "always" have right of way and drivers are supposed to be as vigilant as possible.
Of course, some pedestrians take this to heart and suicidally jump onto a road. In the US, this is more prevalent in walkable cities and college towns.
> 28-793. Crossing at other than crosswalk
A. A pedestrian crossing a roadway at any point other than within a marked crosswalk or within an unmarked crosswalk at an intersection shall yield the right-of-way to all vehicles on the roadway.
Is it just me, or is the street lighting beyond mediocre? It is weak and alternates between a lit portion and a literally black portion.
But even then, the driver was busy with his cellphone. The accident might have happened anyway due to the bad lighting, but it could have been avoided had the driver been alert.
Coming from Europe, I've been surprised at how many places in the US don't have streetlights at all. Driving between Maryland and Virginia, there are 4-lane highways that are completely dark for long stretches at a time. I'm assuming this is a money-saving concern, since the US has such a huge amount of area to cover.
Back in my native, smaller country, I've only rarely encountered this kind of darkness, and it's usually on small, isolated country roads. When I got my license, my instructor's company had a special spot, an hour outside the city, where they always went for dark-driving lessons, because almost all of the road network is (exceedingly well) illuminated.
I prefer the situation of no lights other than headlights.
I'm from Maryland, and my neighborhood is off a country road with a single streetlight at the head of the neighborhood. If the weather is doing anything (fog, rain, snow) even a little bit then it's completely awful to drive through that intersection.
If the light didn't exist, I could keep my vision adaptation the same before / during / after, and could compensate for inclement weather the whole time. As-is, I go from very-dark to bright-light and back to very-dark over a hundred feet or so, and it makes it very hard to adapt safely.
I'm used to driving in dark highways. It sounds reasonable as long as these highways are far from populated areas, have the borders well protected from jumping animals and are regularly maintained for holes and possible garbage.
The video makes the lighting look bad, but most street lighting is very partial. When I drive home in the city (admittedly at somewhat lower speeds) the lighting is very patchy, and all sorts of idiots in dark clothing are darting into traffic on foot and on bikes. Lots of students. I pretty much expect people to be in the street, but admittedly it's a congested area compared to this.
Hereby I'd like to name this the "jack-in-the-box" test case for self-driving cars. No self-driving car should be allowed on the road unless it can detect and avoid collision with a person literally popping up out of the ground at 20 ft of distance while driving at 40 mph in total darkness.
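For scale, what that test actually demands (Python; 0.7 g is an illustrative hard-braking figure):

```python
MPH = 0.44704      # m/s per mph
v0 = 40 * MPH      # ~17.9 m/s
gap = 20 * 0.3048  # 20 ft = ~6.1 m
a = 0.7 * 9.81     # illustrative hard-braking deceleration, m/s^2

print(f"time to impact at constant speed: {gap / v0 * 1000:.0f} ms")  # ~341 ms
v_impact = max(v0**2 - 2 * a * gap, 0) ** 0.5
print(f"impact speed with zero-latency full braking: {v_impact / MPH:.1f} mph")
# Even a perfect, instant brake application only gets from 40 down to ~34 mph
# over 20 ft -- passing this test would mean swerving, not stopping.
```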
Wow... This is REALLY bad for self-driving cars. HOW did it not catch this? I'm a huge proponent of self-driving cars, but I'm shocked that the car literally did NOTHING to avoid her. I know for a fact I would have seen her and potentially swerved (or at least slammed on the brakes).
She crossed at a spot that was pitch black. Sneakers come into view maybe 40 feet away. Clearly visible at 20. Is that enough time to stop at that speed? Not sure I would have stopped in time.
You don’t need to stop. Slowing from 40 to 30mph decreases the fatality rate by an order of magnitude. Maybe the car shouldn’t have been going above the speed limit if the conditions were so poor that it couldn’t plausibly see humans walking right in front of it.
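The leverage of even partial braking, under the same constant-deceleration model (the fatality-rate claim tracks the commonly cited pedestrian impact-speed curves, so treat the framing as approximate):

```python
MPH = 0.44704  # m/s per mph

def impact_speed_mph(v0_mph, brake_dist_m, decel_g=0.7):
    """Speed remaining after braking over brake_dist_m (constant deceleration)."""
    v0 = v0_mph * MPH
    return max(v0**2 - 2 * decel_g * 9.81 * brake_dist_m, 0) ** 0.5 / MPH

for d in (0, 5, 10, 15):
    print(f"braking over {d:>2} m before impact -> hit at "
          f"{impact_speed_mph(40, d):.0f} mph")
# 40, 35, 30, 24 mph: every metre of braking sheds real impact energy (E ~ v^2),
# which is why "couldn't have stopped" doesn't mean "couldn't have helped".
```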
That's an argument intended only to shift blame and distract. The car was going 38 in a 35, which is well within the legal margin of error on a speedometer[1]. For all we know, the speedometer read exactly 35.
There's enough blame to go around between the driver who was obviously texting and the woman illegally jaywalking, you don't need to invent a false controversy.
I mean... I even provided a source and everything. And you threw that all away with a glib "nah I read somewhere that you're wrong".
Please read the link. I may still be wrong, but by my math, the car was a Volvo XC90 which means the acceptable margin of error on the speedometer is +/- 3 miles per hour.
Nice try, but these cars have GPS. Trying to evenly pin the blame for the pedestrian’s death on her is insane. Talk about trying to “shift blame and distract.”
Exactly. Humans could have swerved or slowed down at least some in time to lessen the impact.
If you replayed the exact scenario the chance of the same outcome is likely absurdly high with the robocar. If you replayed it with the same human it would likely be much better. If you replayed it but varied the human driver, maybe even better than that.
You're the only person who bothered to point this out. A camera from the car doesn't even show the picture a human would see, or what a million other sensors would see.
The bottom line is that the person hit was crossing in the middle of the street (jaywalking), so they are to blame for the accident, not the car. While the technology _should have_ seen them, I don't blame it. This is essentially the same as someone breaking into your house, getting hurt, and suing you: the person is breaking the law, yet you're to blame. That wouldn't be fair to you, so it isn't fair to blame the car in this incident.
This is such a blatant attempt to manipulate public opinion before the actual data appears. Even if Uber's software is ridiculously using visible light for its decision-making, it should definitely be taking into account what it doesn't know. A system like this should not be guessing. There is a giant swath of black in the video; the camera doesn't know what's there. So, the car should be moving so it has a short enough stopping distance to avoid a collision. (Or at least flash the high beams, but that would be pointless, because obviously it isn't actually using this camera. Uber may be run by reckless psychopaths, but their engineers aren't stupid).
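To spell out what "short enough stopping distance" means as a number, here's a minimal sketch, again with assumed values (0.7g braking, a 1 s reaction) rather than anything Uber has published:

```python
# Fastest speed at which a vehicle can still stop within the distance it
# has actually verified to be clear. Assumed values (0.7g, 1 s reaction).
import math

def max_safe_speed_mph(clear_ft, reaction_s=1.0, decel_g=0.7):
    a = decel_g * 32.174  # deceleration, ft/s^2
    # Stopping within clear_ft requires v*reaction_s + v^2/(2a) <= clear_ft.
    # Take the positive root of v^2 + 2*a*reaction_s*v - 2*a*clear_ft = 0.
    v = -a * reaction_s + math.sqrt((a * reaction_s) ** 2 + 2 * a * clear_ft)
    return v / 1.4667  # ft/s -> mph

print(f"~{max_safe_speed_mph(80):.0f} mph for 80 ft of verified clearance")
```

With those assumptions, 80 ft of verified clearance supports roughly 28 mph, well under the 38 mph reported.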
Some of the comments in this thread are ridiculous, and your comment throws them into sharp relief. Even if the video is a perfectly faithful reproduction of the scene and the dark parts are that dark, why in the world would the software assume a space it has next to no information about is safe to drive into at the current speed?
The car is equipped with more than just an RGB camera, sensor-wise (such as the Velodyne LiDAR), so it does have information about the scene even where it's pitch black to the naked eye.
Yeah, my point is that the RGB video could be a perfectly faithful reproduction of the scene* and the software could still have reached its decision from multi-sensor data that hasn't yet been released (i.e., a dark spot in some video frames from one of the RGB cameras doesn't imply that the autonomous system lacked info).
*Though we can assume that this dashcam footage is lower quality / resolution / less precise than what the sensors had access to.
In vehicle vs. pedestrian or bicyclist scenarios the criminal justice system is almost always tremendously biased toward the vehicle driver, unless the victim happened to have been a child in a residential area.
Mm, fair enough, that's my inner conspiracy theorist happening :b (Every part of the US government is obviously in cahoots with Uber as part of a complex scheme to take over the world, right?). Okay, probably not intentional on their part, but I think it's really disappointing that this is the video they had to share. The video itself is manipulative, and surely they would realize that. It provides some context, but the prevailing focus is "look how much darker those pixels are than those other pixels!", which just isn't useful.
No conspiracy needed. AZ wants to appear friendly to self driving car companies (e.g., they already don't regulate them as hard as CA does). A single undesirable being taken out isn't gonna change that.
Plus, it's not like AZ has a long history of pedestrian friendliness anyway. It's practically engineered to harm them, with 2 mi between crosswalks.
Something that's worth pointing out as well is what happens to low-light contrast sensitivity as we age. I believe there's something like a 50% loss in night vision between the ages of 20 and 50.
Having watched the video, I seriously doubt I'd have seen the pedestrian in time to stop for her. I don't know that I would even feel that sorry for her, frankly -- there is only so much we can (and should) do to protect drunks and idiots from hazards like crossing the street at night without looking.
Finally, had the driver been paying attention and still hit her, at least some blame could probably be assigned to the DOT's stubborn refusal to bring its lighting standards into the 21st century. Cars in other countries are equipped with noticeably better lighting tech than US cars are allowed to use. It looks like you'd be overdriving these headlights at any speed over 15 mph or so.
> I don't know that I would even feel that sorry for her,
>there is only so much we can (and should) do to protect...
This strikes me as a good example of willful blindness and could subject a person to a charge of criminally negligent homicide. And I even see that possibility in this video, that the driver was maladapted for night conditions due to a display in the car that maybe wasn't supposed to be there.
Imagine a condition with this car not recognizing and then hitting an animal (elk, moose, horse, cow) such that the animal goes right through the windshield and kills everyone inside the car. This is a real problem worth solving, regardless of whether you personally care about humans outside the car.
Don’t be too proud of this technological terror you’ve constructed.
This strikes me as a good example of rebutting a point that nobody made. I didn't say I'd hit her on purpose.
But if the conditions depicted in the video are accurate -- meaning, not artificially manipulated to make the lighting appear darker than it was -- then it would be awfully easy to rationalize away any guilt I might feel. If that amounts to victim-blaming, then so be it. When no malice is present, sometimes the victim really is to blame.
Put another way, this is one of those cases where it's better to focus on fixing the problem rather than the blame.
Hitting her on purpose would not be criminally negligent homicide, it would be homicide. What you described is a blasé attitude that suggests it's OK to have no imagination of the risks that can get people killed, because after all the victims are drunk or stupid. Your words.
Malice is not required for criminally negligent homicide. And yes, indeed, much better to fix the problem, that was my point. Not blame drunk or stupid people, which was your point.
> Hitting her on purpose would not be criminally negligent homicide, it would be homicide
Any killing of a human by a human is homicide, even the kind that is not criminal; what you probably mean is that it would have been murder (or maybe voluntary manslaughter.)
Accident aside, if Uber is so "strict" about their background checks for drivers, how did they let a convicted robber be a part of their high-profile pilot?
While I'm surprised the sensors on the car didn't catch the cyclist, I'm pretty sure that if I had been driving and paying full attention, I would have hit the bike too.
Personally, given this video, I'd still be totally comfortable with a self driving car on the road with no safety driver (who clearly didn't make a difference here anyway, not that they could have).
I agree in that I consider the safety driver to be irrelevant in this situation, because of the reaction time needed to take control, never mind hit the brakes, never mind from a complacent state of mind.
But as others have pointed out, it's hard to tell what the driver could have seen just from the dashcam video, which, for starters, has much less dynamic range than human vision. In the accident scene photos/videos, the ambient street light is much more visible [0]. I'm guessing driving down that Arizona road at night can't possibly be as pitch dark as the camera shows, given how that area seems to be used for entertainment venues/festivals.
How is this video "better?" This is clearly somebody holding a camera recording a screen and seems to have much more artifacting and brightness changes.
To all the armchair experts: try recreating this accident scene in the real world.
Sodium Vapour (monochromatic) lamps, then a large darker area with a pedestrian in dark clothing with car’s headlights dipped.
You will be alarmed at how much stuff you can put on the road and not see it because the road has a lighting pattern that makes it less safe than no lights at all.
With no lights at all, and a hedge between carriageways, the car would have had high beams on and the driver may have seen the pedestrian with plenty of time to brake and either avoid collision altogether or reduce the impact to non-fatal energy.
Now go recreate the scene for yourself and see how it works for you.
I dunno, I see a lot better than my dashcam sees stuff in similar conditions.
Whether a human would've seen her or not is beside the point, really - the car is supposed to be fitted with sensors that don't rely on visible light, and this looks like the most basic test case you'd run on an AV - a big empty road with an obstacle moving at constant speed into the path of the vehicle.
The bike in between the pedestrian and vehicle likely confounded the lidar: too complex a 3D surface, little return from radar. Is it a solid object or a swarm of insects?
Also how well does lidar work against surfaces like textured dark clothing?
This will be a learning experience for everyone, and Waymo is really out of place condemning Uber for this.
There are two explanations for the video published:
1. They DO have cameras onboard with the exposure settings necessary to see the woman in the video, and have not released the video because it makes for bad PR.
2. They DO NOT have such cameras, in which case they should have their self-driving permit revoked because this failure mode is completely predictable to anybody working in this space. And anyway their LIDAR should've detected the person.
Looks like somewhere between 1.0 and 1.2 seconds between when I can see the pedestrian in the video and when the collision occurred.
I don't think a normal human, even if they were paying attention to the road, could have braked or swerved in time to avoid hitting them.
Not sure if visibility is better for a real human than in that video, but from what I remember, the eye test at the DMV is extremely lax. They had a vision-test sign with a really big font, and it was really close. I was appalled that they let people drive who can't read the vision-test sign from the opposite side of the room -- even that was ridiculously easy to read.
Also, even if they reacted within 0.1 seconds (unlikely) I doubt the car would have been able to slow down in that timeframe.
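For scale, here's a quick sketch of how much speed comes off inside a 1.2 s window at 0.7g braking, depending on how much of the window is burned before braking starts (all assumed values, not crash data):

```python
# How much speed comes off inside a 1.2 s window at 0.7g braking,
# depending on how long it takes to start braking. Assumed values only.
FT_PER_S_PER_MPH = 1.4667
DECEL = 0.7 * 32.174  # ft/s^2

def speed_shed_mph(window_s, reaction_s):
    braking_time = max(window_s - reaction_s, 0.0)  # time actually spent braking
    return DECEL * braking_time / FT_PER_S_PER_MPH

for reaction in (0.1, 0.5, 1.0):
    print(f"react in {reaction} s -> shed ~{speed_shed_mph(1.2, reaction):.0f} mph")
```

Under those assumptions, full avoidance was likely impossible either way, but a near-instant reaction still sheds meaningful speed, while a late one sheds almost none.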
I'm surprised IR and LIDAR didn't pick it up though. Would be useful to have the other data released. Video is only a small part of what the car sees and excluding this information is a huge disservice.
Not sure if they already do, but it would be nice, no, imperative, for car companies to collect "black box" data from collisions and share it with all other interested parties. This is a dataset that would be extremely beneficial for all self-driving AI engineers.