“Certainly, our Lidar is capable of clearly imaging Elaine and her bicycle in this situation. However, our Lidar doesn’t make the decision to put on the brakes or get out of her way.” - It would be very interesting, in the name of transparency, to see the log; whatever malfunctioned has to be there. Hopefully some mechanism to prevent future crashes arises from this, instead of just a few lawsuits, payoffs, and coverups.
There are two problems with this way of reasoning, and it's the same pattern you see in the aviation industry.
One, you're comparing this incident with an idealized human. Everybody thinks pilots and drivers are magical beings who don't make mistakes. And then 90 people die because one engine catches fire, the pilot turns off the other engine and then banks the aircraft to make a quick emergency landing; obviously the plane loses lift immediately and drills itself into a bridge, taking a taxi with it into the river below. No survivors, almost a hundred dead [1]. And I get it, the pilot needed to rapidly make a series of decisions under ridiculous levels of stress; that's the real cause. But when things go wrong close to the ground at 150 km/h in an object that weighs 50 tons, they go wrong quickly, so you need to respond quickly. Needless to say, nothing prevents recurrence. Quite simply, a plane will crash if you do this; there's nothing that can technically be done to prevent it. As for cars, over 10,000 people die every year because humans can't be bothered to wait until they sober up to drive [2]. Those are the human pilots and drivers we should measure against.
Frankly, I don't understand how humans are allowed to drive cars or fly planes at all. We pass signals in our brain, they can cross from neuron to neuron in about 10ms. That means that in a second, a signal can affect, at most, a ball with a radius of about 2cm in our brain. Spreading out over the whole brain requires 7-8 seconds minimum in theory, and in practice minutes are the more common scenario. That means it's your spinal column that's driving the car or flying the plane, and it gets updates "from upstairs" that are 2-5s old by the time they reach the control loops. Our brain is very good at predicting events, so it doesn't look like that's the case, but it is.
Compare self-driving or other autopilots against realistic humans, who make these sorts of mistakes. An average autopilot needs to do better than a decent human driver. It should not need to outperform a magical how-we-imagine-ourselves perfect human driver.
That said, I do agree that we need some basic rules. Tesla's car did NOT stop after crashing into a truck following a lidar mistake (and a serious mistake by the truck driver that it was attempting to compensate for, I might add), and not stopping is utterly unforgivable. Same here: the car was obviously either driving with the lidar turned off or ignoring its output. That's like a human driving with their eyes closed.
But certification must be functional. It can't be based on the quality of individual components. It can't be based on code review. It must be functional. We should have test tracks where autopilots get confronted with dozens of situations, preferably combining 5-6 individual problems at the same time, and they need to navigate them safely, under constant decision pressure. And it should end with 5 cars being sacrificed to test their reactions when something heavy drops on them, when they drop off a cliff, when they get mechanically blocked, and when catastrophic mechanical failure occurs (so they don't put the pedal to the metal and cause 10x the damage to others when they do have an accident).
But it needs to be a functional, practical test. Not the madness we currently have for aviation.
> We pass signals in our brain, they can cross from neuron to neuron in about 10ms
Could you clarify this point for a non-biologist? I understand the neuron-to-neuron transmission is not going to happen at the full electrical conductivity rate (something like 100 m/s) but this seems so much slower as to be hard to understand as a lay person.
Simple: signals travel through the brain via a potassium-sodium (kalium-natrium) cascade reaction (not even a real chemical reaction, just a gradient change), and every time they hit a synapse it becomes a lot more complex, involving a dozen-plus neurotransmitters. (This is why a potassium injection will kill you: it reverses the gradient for a long, long time, meaning the nerves cannot fire during that time, which means your heart and breathing (and everything else) stop. Incidentally, this also fires every pain nerve in your body, so it should be unbelievably painful, and survivors do report that. Kalium is the Latin word for potassium.)
Electrical conductivity is barely used at all. It is used in the processing of the resulting signal, but not in transmitting it. Even that part is very different from a current on a wire or through a transistor.
I lean towards what Velodyne is saying in this situation. I have been working with LiDAR systems for over 4 years, of which the last 1.5 years have been spent building autonomous driving vehicles. When I saw the videos, I was truly baffled by how a LiDAR could miss that. I have worked with different types of LiDARs (from different manufacturers), and there is a very high chance that the LiDAR point cloud contains all the information corresponding to the person and the bicycle needed to make a decision.
What we need to keep in mind is that sensing an object is different from deciding whether or not to take an action (e.g., hitting brakes, raising alarms, swerving, etc.).
Most LiDAR/RADAR/camera manufacturers only provide input data. It's like saying "hey, I see this". It's up to the perception software to decide whether or not to act on it.
In most cars, relatively simple decisions are made by the car's own perception software (e.g., adaptive cruise control, lane-change warning, automatic braking, etc.).
Self-driving companies override such systems, and rewire the car such that it is their perception software that makes the decision. So the onus is completely on the self-driving company's software. In this case, it is the perception software developed by Uber to be critiqued - not Velodyne, not Volvo, not the camera manufacturer.
It looks like the engineers at Velodyne feel confident that they should (and would have) sensed the person, and hence their statement. I wouldn't doubt them much as they have been in the LiDAR game since DARPA days when self driving was considered experimental.
From a different angle, Velodyne may not have much to lose by throwing Uber under the bus, especially compared to how much their reputation is at stake. This is because Velodyne has several big customers (e.g., Waymo, and almost every other self-driving or mapping company that is serious about getting big).
NTSB should and will get access to the point clouds. Uber has a choice of releasing the point clouds to the public - but I highly doubt they will.
If you've worked with LIDARs, maybe you know how much noise they produce in their output? Couldn't it be that Uber's software filtered the pedestrian out as noise, for example because there was no matching object on the camera, or because reflections from the bike looked like random noise?
Both effects you mention (sensor fusion problem between camera/lidar; spotty lidar reflections from bike) are possible.
These problems probably should not have prevented detecting this obstacle, though. But, a lot depends on factors like the range of the pedestrian/bike, the particular Velodyne unit used, and the mode it was used in.
One key thing is that lidar reflections off the bike would have been spotty, but lidar off the pedestrian's body should have been pretty good. That's a perhaps 50-cm wide solid object, which is pretty large by these standards. But the number of lidar "footprints" on the target depends on range.
You'd have to estimate the range of the target (15m?) and compute the angle subtended by the target (0.5m/15m ~= 0.03 radian ~= 2 degrees), and then compare this to the angular resolution of the Velodyne unit to get a number of footprints-on-target.
Perhaps a half dozen, across a couple of left-to-right scan lines. Again, depending on the scan pattern of the particular Velodyne unit in use. The unit should make more than one pass in the time it took to intersect the pedestrian.
This should be enough to detect something, if the world-modeling and decision-making software was operating correctly, hence the puzzlement.
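For anyone who wants to redo that estimate with their own numbers, here is a minimal back-of-the-envelope sketch. Every value in it (range, target size, angular resolutions) is a placeholder assumption rather than the Uber car's actual configuration, and the answer swings from a handful of returns to well over a hundred depending on which Velodyne unit, mode, and spin rate you plug in.

```python
import math

# Back-of-the-envelope estimate of lidar returns ("footprints") on a
# pedestrian-sized target. All numbers are illustrative assumptions.
range_m = 15.0            # guessed range to the pedestrian
target_width_m = 0.5      # roughly torso width
target_height_m = 1.7     # standing adult

# Placeholder angular resolutions; substitute the spec of the actual unit/mode.
horiz_res_deg = 0.4       # horizontal step between firings at a given spin rate
vert_res_deg = 0.4        # spacing between adjacent scan lines

width_deg = math.degrees(target_width_m / range_m)    # small-angle approximation
height_deg = math.degrees(target_height_m / range_m)

cols = width_deg / horiz_res_deg      # returns per scan line across the target
rows = height_deg / vert_res_deg      # scan lines intersecting the target
print(f"target subtends ~{width_deg:.1f} x {height_deg:.1f} degrees")
print(f"expected footprints: ~{cols:.0f} per line x ~{rows:.0f} lines ~= {cols * rows:.0f}")
```

With a coarse effective resolution you land near the half-dozen figure above; with a dense 64-beam unit at full rate you get far more, which only deepens the puzzlement.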
They do have noise, but we are talking about millimeter to centimeter scale (accuracy is < 2cm). So a grown-up person is roughly two orders of magnitude bigger than the accuracy of the scanner.
To give an example of how big (or small) this noise would have been in this situation, I did a very simple virtual scan of a person with a bicycle at a distance of 15 meters [1].
It was scanned with a virtual scanner inside our sensor simulation software, so this is not the real data and should be taken with a grain of salt.
It is not possible for the algorithm looking at the LIDAR input data to have the same level of discrimination as humans, so this would be a possibility in my opinion.
I don't understand why everyone is focusing so heavily on whether LIDAR was at fault or not. The car has a RADAR as well, which might not help with detecting the pedestrian, but most certainly with the bike she was pushing along. I don't know the field of view of the radar, but that should have caused an emergency brake as well, shouldn't it?
I think half the people commenting on this incident have misspelled "brake". It's odd, because that's not something I've observed as a common error before.
From a recent publication in an IEEE conference related to intelligent vehicles:
"Radar is robust against bad weather, rain and fog; it can measure speed and distance of an object, but it does not provide enough data points to detect obstacle boundaries, and experimental results show that radar is not reliable to detect small obstacles like pedestrians."
This would be because the wavelength of lidar is in the micron range while that of vehicle-detection radar is in the mm-cm range. You won't be able to reliably get radar reflections off of mm/cm-scale objects or object elements, or accurately (<1cm) localize object boundaries. Good navigation would require tighter localization.
Radars are really good, though, for detection of objects, including identifying moving objects, close up -- canonical examples being walls and other vehicles. Radar sensors are rather cheap (having been in mass production for a long time) so it's common to have one on every bumper or corner.
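To put rough numbers on that wavelength gap, here is a quick sketch assuming a common 77 GHz automotive radar and a ~905 nm near-IR lidar; both are typical values, not confirmed specs for this particular vehicle.

```python
C = 299_792_458.0                 # speed of light, m/s

radar_freq_hz = 77e9              # common automotive radar band (assumed)
lidar_wavelength_m = 905e-9       # typical near-IR automotive lidar (assumed)

radar_wavelength_m = C / radar_freq_hz
print(f"77 GHz radar wavelength: {radar_wavelength_m * 1e3:.1f} mm")
print(f"lidar wavelength:        {lidar_wavelength_m * 1e6:.3f} um")
print(f"radar wavelength is ~{radar_wavelength_m / lidar_wavelength_m:,.0f}x longer")
```

Features much smaller than the wavelength scatter poorly, which is why radar struggles to resolve spokes, handlebars, and limbs that lidar picks out easily.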
If it were "only" a human, then the radar would have a hard time seeing the person. But the bicycle is a substantial chunk of metal, which should give a much stronger echo.
And a 10 kg metal object placed in the path of an autonomous car should cause all kinds of emergency measures to engage.
Many high-end cars have auto-braking systems based on radar (and additional sensors). Mercedes even has a similar scenario on their product page [1].
Volvo does so too, as far as I could find.
So not only did they fail to detect the obstacle with two different sensor technologies, they may also have deactivated the car's already existing safety features.
Would a bunch of plastic bags filled with stuff (plastic bottles, clothes, etc..) tied to a bike be something the LiDAR would see as part of the roadway rather than a set of distinct objects?
This seems exceptional for Velodyne to come out with a statement like this directly from a spokesperson. I would expect a supplier to be more restrained before throwing a large (and growing) customer under the bus like this.
Well, the question of "why didn't the lidar see the pedestrian?" was on everyone's mind in the industry, so openly going out and declaring:
"it surely must have seen it, the lidars are fine" is an attempt at reassuring everyone who is now questioning whether lidars are reliable enough.
This is not exactly throwing Uber under the bus, as they themselves have an interest in being able to tell that story later on:
"Our analysis concluded that our algorithms didn't put enough weight on the data coming from the lidar, which worked as intended and should have been weighted higher in these specific circumstances. We will adjust our efforts accordingly and donate $largeSum to CarsAgainstHumanity to bribe everyone into forgetting how badly we fucked up."
It's entirely possible that Velodyne worded it far more softly, so as not to directly attack Uber, which is somewhat how this article makes it come across.
Who knows what other greater context the particular statements were in. After this past election I never trust these types of one-line quotes taken from a larger interview.
The headline could easily have been:
- Uber's Lidar manufacturer just as "baffled" why pedestrian not detected before crash
Uber is the poster child for lying and cheating. With an extraordinary reputation and history of scandals like theirs, extraordinary proof is required for their claims.
But Velodyne has proof -- gobs of it -- that their product is "capable" of seeing the pedestrian in the dark. It's possible that this particular unit malfunctioned, but for this to be Velodyne's fault and not Uber's it would have had to malfunction in such a way that it gave the appearance of operating normally. That is extremely unlikely.
> She said that lidar has no problems seeing in the dark. “However, it is up to the rest of the system to interpret and use the data to make decisions. We do not know how the Uber system of decision-making works,” she added.
> “In addition to Lidar, autonomous systems typically have several sensors, including camera and radar to make decisions,” she wrote. “We don’t know what sensors were on the Uber car that evening, if they were working, or how they were being used.”
There are still a ton of variables here besides whether or not the Lidar detected the pedestrian, including the other sensors, how the software works, etc. All things outside the scope of Velodyne's knowledge.
Velodyne is not saying that they are certain the car should or could have stopped in time or avoided the crash. Their statement is merely about the Lidar's ability to detect the person.
Not to mention we don't even know whether the Lidar malfunctioned yet either...
Go back and re-read the GP, which was trying to draw some sort of moral equivalence between Uber and Velodyne:
"Why is it OK for the LIDAR company to make a blanket statement of innocence without proof, but not OK for Uber to do the same?"
This is a disingenuous question. It assumes facts not in evidence, to use the legal aphorism. Velodyne did not "make a blanket statement of innocence without proof". It made a very narrow and defensible claim, namely, that its product, when working properly under the conditions at the time, should have been able to detect the pedestrian. It is obviously true that there are "a ton of other variables" but that is a red herring with respect to the original question.
> Velodyne did not "make a blanket statement of innocence without proof".
I agree, if anything I supported this statement with my comment.
The difference is that regardless of the narrowness of their claim, it will have a broader impact on how people judge Uber. Nor do we even know whether the Lidar was functioning properly, which is an assumption Velodyne made when making their claim.
We simply need more evidence before we can fully judge Uber. And before we can give Velodyne a complete pass in terms of the functionality of their Lidar.
Uber is poison and everyone knows it. Everyone in the ecosystem is trying to throw them under the bus because they deserve it. The world will be a better place if Uber (not ride-“sharing”) is destroyed.
I've seen less technical friends on social media responding to the dashcam video saying "of course it hit her; it's way too dark to see!" without understanding that LIDAR doesn't use available light. People may not understand exactly what Velodyne does, but so long as the company supplies "the eyes for self-driving cars" they have a clear risk of the public thinking this was a sensor failure.
More and more, I am convinced that the video released comes from scapegoat cameras, as someone named them here on HN: "post crappiest video available, have the public believe there's nothing you could have done." It works (tm)
Uber already tried getting their hands on a better LiDAR unit, we all know how it went. If they could switch to a better in-house sensor, they would have done so already, and Velodyne is the best of what’s commercially available now. What are they going to do, switch to an inferior model at a time their software can’t even avoid a collision using state of the art LiDAR?
It reminds me of when an airliner crashes (a rare thing now).
The airline, the manufacturer, the engine manufacturer, the part(s) suppliers, the pilots... will all point at each other. Usually the pilots lose because they don't have any money whether they were at fault or not.
You have to wonder why everyone isn't pointing at the safety driver in this case.
Here's an example of everyone blaming everyone else, and the manufacturer lost despite it not actually being their fault:
Setting aside that Uber is poison and that it's easy to poop on them:
I think it makes sense for Velodyne to get ahead of the message before any client throws Velodyne under the bus. If Velodyne takes the hit, all their customers have leverage over them. Staying ahead keeps those clients confident in Velodyne products.
Also, scale-wise, the market is tiny at this point. Nobody cares about the sales from quarter to quarter, because whenever self-driving cars hit the mainstream, all previous numbers will be scribbles in the margin.
Velodyne has a finger in practically every major self-driving car pie. It's pretty much "heads you lose, tails I win" for them (assumption: nobody can bring a better laser to market).
I'm not sure Uber is a very large customer of Velodyne's. They're also suppliers for Google, Ford, and Caterpillar, as well as a bunch of others.
If they're sure it's not their fault it's a good idea to get that information out before people start questioning LIDAR technology. If Uber leaves they'll lose sales in the short term. If everybody else gets scared away then they're toast.
I don’t know how hard it is to not run over a human autonomously but I do know that it’s easy to refuse to let your cars on the road until they’re capable of it.
Can I ask a dumb non-engineer question? Backup cameras in areas with basically any road debris/weather get covered with dirt. I'm in the Seattle area now, and I often lick my thumb and wipe off the camera, because it's a vision system I rely on so I don't kill people.
Could lidar/cameras/etc on the vehicle be obscured by road debris or worse, things being bumped/moved, smudged, or even foul play?
Yes, that's possible. But in such a situation the data from the LIDAR looks very different than it does under normal conditions. You can easily tell when a backup camera is covered with dirt by looking at the image. Uber's self-driving software should have been able to detect a LIDAR failure in much the same way.
I think any dirt on a sensor would register as additional objects in the scenery, not missing objects. Humans are great at autocorrecting, but from a straight-up sensor-data point of view, no light on these pixels = something is between the sensor and the light source.
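Either symptom (missing returns, or a pile of returns right at the sensor window) is cheap to check for in software. Below is a minimal sketch of such a plausibility check; the thresholds and the idea of feeding it one sweep's ranges are my own illustrative assumptions, not anyone's actual diagnostic code.

```python
import numpy as np

def lidar_sweep_looks_plausible(ranges_m, expected_returns,
                                min_valid_fraction=0.8,
                                blockage_range_m=0.5,
                                max_blockage_fraction=0.05):
    """Crude health check on one lidar sweep (illustrative thresholds only).

    Two symptoms of a dirty or failing sensor:
      * far fewer valid returns than the unit normally produces, or
      * many returns right at the housing (something stuck on the window).
    """
    ranges_m = np.asarray(ranges_m, dtype=float)
    valid = ranges_m[np.isfinite(ranges_m) & (ranges_m > 0.0)]

    too_few = len(valid) < min_valid_fraction * expected_returns
    blocked = (np.mean(valid < blockage_range_m) > max_blockage_fraction
               if len(valid) else True)

    return not (too_few or blocked)   # False should trigger a handover/alert
```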
The analogy doesn't stand up - there's a non-trivial chance that a memory module or your hard drive in your laptop will fail over the lifetime of the device.
Furthermore, a LIDAR unit is complex and has firmware and lower-level software embedded that may be at fault.
Yes, I've even seen this first-hand, where it was, in fact, the laptop that was the source of the issue. Not that I'm taking Uber's side at all; just saying it is a possibility, and we should wait for official evidence of the fault before jumping to conclusions.
If the software is well written, any of those types of failures should raise an alarm to alert the safety driver to take over, and then shut down the self-driving system.
It isn't anything like blaming your laptop. Lidar is a crucial sensor that allows the car to "see." If the lidar couldn't "see" the person, it doesn't matter how good your software is.
Well, it COULD be at fault, if the sampling rate were too low to see a human walking across the road, or if it couldn't see the materials of the bike or the human, etc. But we all know none of these things are true.
No, that still wouldn't be the Lidar's fault. That would be a fault in setting the specification for the Lidar sample rate safely, and in not catching the design flaw in testing.
I haven't been following this closely, but how was the car able to go over the speed limit? If a speed governor was turned off, or was able to be overridden, isn't it possible that multiple systems in the car were turned off, such as the systems that regulate gas and brakes?
The speed limit on that road, in the area of the crash, is 45 mph in the northbound lanes, and 35 mph in the southbound lanes. The crash occurred in the northbound lanes.
I think I saw a comment here on HN from a Tempe resident that said that it's 45 in the direction the Uber car was going, but 35 the other way, hence the confusion.
I thought the Uber was north of Curry Road going south, but after comparing the video to the map, it looks like the Uber was south, heading north to the intersection. The road opens up and there are two metal signs. https://www.google.com/maps/@33.4362927,-111.9424451,3a,75y,...
So, yes, the speed limit on the road in the direction the Uber was traveling on the stretch of road it was traveling was indeed 45mph.
Also, very sadly, at the point of impact, there was actually a sign that said "Do not cross; Use Crosswalk ->". https://www.google.com/maps/@33.4365489,-111.942659,3a,22.9y... Although not facing the direction the bicyclist was crossing from. I'm willing to bet an accident had happened here before.
> That.... doesn't really make any sense. I've never seen a road where opposite lanes of travel had different speed limits.
Really? This is really common. Think about a road connecting a town and a rural area. In the half-mile adjacent to the town, the into-town direction will be limited lower to get drivers to slow down as they approach town, and the out-of-town direction will be posted at the higher rural limit.
There's a road that's effectively like that near where I live. AFAIK the traffic laws in my home state dictate that a speed limit change on a road takes immediate effect at the location of the sign indicating a new speed. There is a place where the limit goes from 25 to 35 and the signs on opposite sides of the road are at least a half mile apart, meaning that depending on which way you're going on that stretch the speed limit could be legally interpreted as 25 or 35. It seems like this would be dependent on specific interpretations of various state laws.
If you look at how lidars work, this indeed makes no sense. Lidars will see vertical poles that stick up high, like pedestrians. That's exactly what they're good at. For a lidar with even bad resolution to have missed this pedestrian ... it's technically not impossible, but it'd be a one-in-a-trillion streak of extremely bad luck. It seems much, much more likely that the car was somehow ignoring the output of the lidar.
Lidars see everything in a specific plane originating at their sensor. This situation is exactly what lidars are made for ...
Remember the Tesla accident? A lidar was scanning planes in front of the Tesla, and there was a large truck in front of it. A truck hangs low at the front and at the back, and the Tesla autopilot saw both of those. Presumably because it was using lidar, it decided that the front of the truck was one car and the rear of the truck was another car, and when the truck changed direction it compensated with a high-speed maneuver directing the car between the front and the rear wheels of the truck. Needless to say, the results were less than optimal (and then came the unforgivable part: after the crash the autopilot was still in control, but it did NOT stop until it was mechanically blocked from going on). That's the sort of mistake you'd expect a lidar to make: it misses objects that are very close to the ground, or "far" off the ground. It sees things starting at about 50cm high up to 1m30 or so (this also depends on the distance to the sensor; the closer to the sensor, the narrower the range), no more. That's the weakness of lidars.
That unfortunately means that what they do miss is ... well, let's put it this way: you can't mount it low on the car, because at that height it'll think large pebbles are telephone poles (plus mud will splatter up and block the sensor). So you don't do that. You can mount it high, but that means those detection planes don't get very close to the ground. And that means what it'll miss is anything that's close to the ground: dogs, children, parking poles, stairs (or any kind of abyss). That's where you'd expect mistakes.
After the Tesla fatal accident, there was some discussion about the difficulty of stationary obstacles. Apparently, there are so many false positives, that they're quite readily discarded (otherwise the car would stop all the time).
As the victim was traveling perpendicular to the movement of the vehicle, I wonder whether that had anything to do with it. If so, quite a severe limitation.
Someone in a previous thread mentioned that Uber had disabled LiDAR input because they were testing visual-light-only navigation. Was that just speculation/rumor?
> Someone in a previous thread mentioned that Uber had disabled LiDAR input because they were testing visual-light-only navigation.
That smells like BS. I can't imagine any serious player in the self-driving field that can't replay all their sensor inputs into models offline to see how things behave. If they wanted to test without LiDAR, they would run those simulations without LiDAR input. No reason to disable it when there are actual consequences.
That being said, I suppose it's possible that Uber have done what I already said, so much so that they were confident it would work and were willing to deploy it. But it still smells funny to me because I would sincerely hope the LiDAR (and other non-visible light sensor input) would be used as a failsafe.
My very limited understanding of autonomous car implementations is that you mix all your inputs together to determine your surrounding environment (and all your "what do I do" logic deals with information derived from this input aggregation), so in order to safely test a new model with less input in your car, you'd need at least two models (one with all inputs, one with reduced input) running at the same time.
It still seems like it'd be viable to me, and something you'd certainly want to do to avoid situations exactly like this. If you've got 2 models running, it seems like it'd be pretty straightforward to have the "all input" model assume control if it's trying to avoid an immediate collision, basically having it perform a similar role to that of a human behind the wheel during "disengagements".
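Something along those lines could look like the sketch below. Everything in it (the `PerceptionModel` interface, the `Plan` fields, the override rule) is a hypothetical illustration of the two-model idea, not Uber's actual architecture.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    brake: float        # 0.0 .. 1.0
    steer: float        # steering command, radians
    emergency: bool     # does this plan consider an immediate collision likely?

class PerceptionModel:
    """Hypothetical stand-in for a perception + planning pipeline."""
    def __init__(self, use_lidar: bool):
        self.use_lidar = use_lidar

    def plan(self, camera_frame, lidar_cloud, radar_tracks) -> Plan:
        # Real perception/planning logic would go here; this stub only
        # illustrates the interface used by choose_plan() below.
        return Plan(brake=0.0, steer=0.0, emergency=False)

def choose_plan(full_model: PerceptionModel, test_model: PerceptionModel,
                camera_frame, lidar_cloud, radar_tracks) -> Plan:
    """Let the reduced-sensor model drive, but let the full-sensor model veto it,
    much like a safety driver taking over during a disengagement."""
    full_plan = full_model.plan(camera_frame, lidar_cloud, radar_tracks)
    test_plan = test_model.plan(camera_frame, None, radar_tracks)  # camera-only experiment

    if full_plan.emergency and not test_plan.emergency:
        return full_plan    # the experimental model missed something urgent
    return test_plan
```

The design question then becomes how aggressively the full-sensor model is allowed to veto, since an override that fires too often defeats the purpose of the experiment.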
I hate to say it, but never underestimate the human potential for stupidity. To make matters worse, this is amplified in organizations. But in this specific case, this is all speculation until the data gets analyzed.
That rumor originated from Robert Scoble. I've never taken him very seriously, though he is the type of person who probably does have contacts within Uber.
A software bug is still the most plausible explanation, given the evidence we have.
This is quite interesting. Two days ago the mainstream media here in Germany said that the accident would have happened anyway, regardless of the self-driving technology built into the vehicle.
Looking at the video, the quality of the current self-driving technology is really questionable, especially if I also recall the other non-fatal road traffic offences publicized so far.
And alongside being untrue, it ignores the differing severity of collisions. Even if the car had to use its normal video camera instead of Lidar for some reason, the second or two of braking that would provide can easily -- and likely would -- turn a fatal collision into a non-fatal one.
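A rough illustration of that point, with assumed numbers (roughly 40 mph of travel speed and about 7 m/s^2 of hard braking on dry pavement; neither figure is taken from the actual crash data):

```python
# How much even a late brake application changes impact speed.
# All inputs are assumptions for illustration, not crash-report figures.
initial_speed_mph = 40.0      # assumed travel speed
decel_mps2 = 7.0              # typical hard braking on dry asphalt
brake_time_s = 1.5            # "a second or two" of braking before impact

MPH_PER_MPS = 2.23694
initial_mps = initial_speed_mph / MPH_PER_MPS
reduction_mps = min(decel_mps2 * brake_time_s, initial_mps)
impact_mph = (initial_mps - reduction_mps) * MPH_PER_MPS

print(f"no braking:                {initial_speed_mph:.0f} mph at impact")
print(f"after {brake_time_s:.1f}s of braking: {impact_mph:.0f} mph at impact")
```

Pedestrian fatality risk climbs steeply with impact speed, so shaving 20-odd mph off the impact matters enormously.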
So if the LIDAR did not fail, it was probably the neural network that takes in LIDAR data and makes the decision to brake. Would love to see NTSB releasing the data for us to analyze.
"By applying convolutional neural networks (CNNs) and other deep learning techniques, researchers at Uber ATG Toronto are committed to developing technologies that power safer and more reliable transportation solutions."
"CNNs are widely used for analyzing visual imagery and data from LiDAR sensors. In autonomous driving, CNNs allow self-driving vehicles to see other cars and pedestrians"
There are many projects using deep learning with lidar. Google PointNet, PointNet++, also https://www.youtube.com/watch?v=UXHX9kFGXfg . These are all much newer than 2D CNNs and I don't know if it works well enough to actually be used in SDCs. Also, using CNNs on point clouds comes with all sorts of problems.
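As a concrete (and deliberately toy) example of the CNN-on-lidar idea, one common trick is to rasterize the point cloud into a birds-eye-view grid and run an ordinary 2D CNN over it. The sketch below is a minimal PyTorch illustration of that pattern, not Uber's or PointNet's actual architecture:

```python
import torch
import torch.nn as nn

class BEVDetector(nn.Module):
    """Toy CNN over a birds-eye-view lidar grid (illustrative only)."""
    def __init__(self, in_channels: int = 2):   # e.g. max-height + point-count channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),     # per-cell "obstacle here" logit
        )

    def forward(self, bev_grid: torch.Tensor) -> torch.Tensor:
        # bev_grid: (batch, channels, H, W) rasterized point cloud
        return self.net(bev_grid)

# Usage: logits = BEVDetector()(torch.randn(1, 2, 160, 80)); sigmoid + threshold gives a mask.
```

Whether anything like this runs in a production stack is exactly the open question the parent raises.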
My understanding is that you don’t do analysis or training inside the control loop at all. Basically, build a model and then it becomes a go/no-go check when you really want to use it.
I don't know how the system is integrated into the self-driving logic as a whole. But a few years ago, when we were working on a self-driving train, LIDAR was used as a system that reports back a list of obstacles (position, size, maybe a simple shape descriptor), or as an obstacle map that is laid out in a grid and shows which cells are occupied and how high the thing occupying each cell is.
If the system that processes the point cloud and creates this data does not detect faulty (or missing) data from the sensor, the higher-level logic will happily hum along believing that nothing is amiss.
And some LIDARs even have the option to do the processing on the device itself (e.g., IBEO), in which case you can theoretically work with only a list of obstacles reported back by the sensor.
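A minimal sketch of that grid-style representation, assuming the sweep arrives as an (N, 3) array of x/y/z points in the vehicle frame; the cell size, bounds, and height threshold are made-up parameters:

```python
import numpy as np

def occupancy_height_grid(points, cell_m=0.25,
                          x_range=(0.0, 40.0), y_range=(-10.0, 10.0),
                          ground_z=0.0, min_height_m=0.3):
    """Collapse an (N, 3) point cloud into a grid of max heights above ground.

    A cell counts as occupied if something at least `min_height_m` tall sits in it.
    All defaults are illustrative, not values from any real stack.
    """
    nx = int((x_range[1] - x_range[0]) / cell_m)
    ny = int((y_range[1] - y_range[0]) / cell_m)
    heights = np.zeros((nx, ny))

    x, y, z = points[:, 0], points[:, 1], points[:, 2] - ground_z
    inside = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    ix = ((x[inside] - x_range[0]) / cell_m).astype(int)
    iy = ((y[inside] - y_range[0]) / cell_m).astype(int)
    np.maximum.at(heights, (ix, iy), z[inside])   # keep the tallest return per cell

    occupied = heights >= min_height_m
    return heights, occupied
```

The higher-level logic then only ever sees the occupied cells (or a clustered obstacle list), which is exactly why a silent upstream failure is so dangerous: an empty grid looks the same whether the road is clear or the data never arrived.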
People are saying "you couldn't see the woman" because of the visible light levels in the video Uber/the police have released, but technically informed people are pointing out that the car has more than just visible-light sensors, so arguing that the visible-light video exonerates Uber is a defense built on shaky ground.
A lot of the defense of the self-driving car came in the form of "conditions were bad, the car's sensors didn't detect her, even a human probably wouldn't have seen her", i.e., bad input (and not bad processing of said input).
A pretty complex device, too. I'm not sure about this specific one, but LIDARs tend to operate in a scanning fashion with a refresh rate. This means the data can be choppy and noisy in the time dimension, which requires post-processing.
That would not have mattered in this case. The Velodyne LIDAR they seem to be using (from the photo of one of their Volvos) is the HDL-64E (I don't know which revision, but the key specs are the same). This sensor can operate between 5 and 15 Hz (full rotations per second). So at the slowest setting the car (assuming it was going 45 mph) moves 4 meters within a full rotation, and at the fastest setting it moves 1.5 meters.
That might sound like a lot, and in some applications it is, but please keep in mind that after one full rotation the sensor has scanned an area of up to 45,000 square meters (the maximum scanning distance is 120 meters). So each rotation gives you pretty good situational awareness of your surroundings (barring any occlusions that prevent the sensor from seeing certain spots). Unless the person is covered in IR-absorbing material (some black materials do quite a good job at that), it should be impossible for a pedestrian to sneak up on the car without being swept multiple times by the laser beams.
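The arithmetic behind those figures, for anyone who wants to check it (the 45 mph is assumed from the posted limit; the 5-15 Hz spin rates and ~120 m maximum range are the HDL-64E figures quoted above):

```python
import math

speed_mps = 45 * 0.44704            # 45 mph in m/s (~20.1 m/s), assumed travel speed
max_range_m = 120.0                 # quoted maximum scanning distance

for spin_hz in (5, 10, 15):
    metres_per_rotation = speed_mps / spin_hz
    print(f"{spin_hz:>2} Hz: car moves {metres_per_rotation:.1f} m per full rotation")

print(f"area swept per rotation: ~{math.pi * max_range_m ** 2:,.0f} m^2 (ignoring occlusions)")
```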
Velodyne clearly have no clue what they're talking about. They should consult with the "futurologists" on reddit who only needed 5 seconds and a video to conclude that "it was too dark out".
From my perspective, the first question should be, "Given the same circumstances, would a human driver have performed better?".
After viewing the video of the incident, I strongly believe that a human driver would NOT have done better.
Therefore, I believe that any attempt to blame or spread mistrust in these technologies because of this incident is (at best) misguided and (at worst) alarmist.
Nighttime dashcam videos typically do a very poor job of representing what a scene looks like to the human eyes. Here is a phone camera video of the same stretch of road as the accident took place on, at night: https://youtu.be/1XOVxSCG8u0?t=26
From this you can see how good the visibility is. We know that the visibility on this stretch of road was pretty good for nighttime driving. We know that the pedestrian that got hit had crossed an entire (empty) lane of traffic before entering the Uber vehicle's lane. I would say that any competent driver who was paying attention and who was driving a car with working headlights (or perhaps even without) would have spotted the pedestrian well in advance and been able to avoid the collision fairly easily.
The fact that the Uber vehicle did not do so, despite having an abundance of opportunity and despite having not only visible-light data but also lidar and radar, is almost certainly a massive failure on its part. Almost certainly this is due to a failure to integrate sensor data properly or to categorize a pedestrian correctly.
I have no faith that the local PD will do a good job here, but I do have faith that the NTSB will be exquisitely thorough, and I would bet hard-earned dollars that they are going to tear Uber's self-driving technology up one side and down the other.
The entire point of self-driving cars is that they are supposed to be better than humans. So the first question should NOT be whether a human could have done better.
Of course in this situation a human probably could have done better.
The real question here is: why were they allowed to test an unproven car on an unsuspecting population?
Regardless of whether or not a human driver could have done it, the vehicle had a pretty critical sensor onboard that should have been able to see the pedestrian. It’s important to figure out how this collision managed to happen.