Hacker News

If you have worked with LIDARs, you probably know how much noise they produce in the output. Could it not be that the Uber software filtered the pedestrian out as noise, for example because there was no matching object on a camera, or because reflections from the bike looked like random noise?



Both effects you mention (sensor fusion problem between camera/lidar; spotty lidar reflections from bike) are possible.

These problems probably should not have prevented detecting this obstacle, though a lot depends on factors like the range of the pedestrian/bike, the particular Velodyne unit used, and the mode it was used in.

One key thing is that lidar reflections off the bike would have been spotty, but lidar off the pedestrian's body should have been pretty good. That's a perhaps 50-cm wide solid object, which is pretty large by these standards. But the number of lidar "footprints" on the target depends on range.

You'd have to estimate the range of the target (15m?) and compute the angle subtended by the target (0.5m/15m ~= 0.03 radian ~= 2 degrees), and then compare this to the angular resolution of the Velodyne unit to get a number of footprints-on-target.

Perhaps a half dozen, across a couple of left-to-right scan lines. Again, depending on the scan pattern of the particular Velodyne unit in use. The unit should make more than one pass in the time it took to intersect the pedestrian.
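A back-of-envelope version of this estimate can be sketched in a few lines. All the numbers below are placeholder assumptions (target size, range, and angular resolutions roughly in the range of various Velodyne units); the actual footprint count varies widely with the unit, its rotation rate, and the scan pattern:

```python
import math

# Rough estimate of lidar footprints on a pedestrian-sized target.
# All numbers here are illustrative assumptions, not the actual Uber setup.
target_width_m = 0.5      # approx. width of a person seen side-on
target_height_m = 1.5     # approx. height of body visible above the bike
range_m = 15.0            # assumed range to the target

# Assumed angular resolutions (unit- and mode-dependent):
horiz_res_deg = 0.2       # horizontal step between laser firings
vert_res_deg = 2.0        # vertical spacing between scan lines

# Small-angle approximation: angle (rad) ~= size / range
horiz_angle_deg = math.degrees(target_width_m / range_m)
vert_angle_deg = math.degrees(target_height_m / range_m)

horiz_hits = horiz_angle_deg / horiz_res_deg
vert_hits = vert_angle_deg / vert_res_deg

print(f"subtended angle: {horiz_angle_deg:.1f} x {vert_angle_deg:.1f} deg")
print(f"footprints per revolution: ~{horiz_hits:.0f} per line, "
      f"~{vert_hits:.0f} scan lines")
```

With these particular assumed resolutions you get a handful of scan lines with several returns each per revolution; a denser 64-beam unit would return far more, which is part of why the failure to detect is puzzling.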

This should be enough to detect something, if the world-modeling and decision-making software was operating correctly, hence the puzzlement.


They do have noise, but we are talking about millimeter-to-centimeter scale (accuracy is < 2 cm). So a grown adult is roughly two orders of magnitude bigger than the accuracy of the scanner.

To give an example of how big (or small) this noise would have been in this situation, I did a very simple virtual scan of a person with a bicycle at a distance of 15 meters [1].

It was scanned with a virtual scanner inside our sensor simulation software, so this is not real data and should be taken with a grain of salt.

[1] http://www.blensor.org/blog_entry_20180323.html


It is not possible for the algorithm looking at the LIDAR input data to have the same level of discrimination as a human, so this would be a possibility in my opinion.




