
Not exactly. The dynamic range of the eye in a single scene is around 12-14 stops, about the same as a high-end digital sensor, like the one in the Canon EOS 5D or Sony Alpha. https://www.cambridgeincolour.com/tutorials/cameras-vs-human...

The eye can gain a lot more stops through adaptation (irising, low-light rod-only vision), but those mechanisms don't come into play when viewing a single scene -- and cameras can also make adjustments, e.g. shutter speed and aperture, to gain as much range, if not more.
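As a rough sanity check on those numbers (a minimal sketch in Python; the stop counts are just the figures quoted above, and a "stop" is a factor of 2 in luminance):

    import math

    def stops_to_contrast(stops: float) -> float:
        """Each stop is a factor of 2, so N stops = 2**N : 1 contrast ratio."""
        return 2 ** stops

    def contrast_to_stops(contrast: float) -> float:
        """Inverse: log2 of the brightest-to-darkest luminance ratio."""
        return math.log2(contrast)

    # ~12-14 stops of single-scene dynamic range (eye or high-end sensor)
    print(stops_to_contrast(12))    # 4096.0   -> roughly 4,000:1
    print(stops_to_contrast(14))    # 16384.0  -> roughly 16,000:1

    # A typical ~1000:1 SDR display covers only about 10 stops
    print(contrast_to_stops(1000))  # ~9.97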




A camera captures the entire scene in a frame with a fixed dynamic range. Human vision builds the scene with spatially variant mapping: the scene is made from many frames with different exposures, stacked together in real time.
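For intuition, here is a minimal sketch of the camera-side analogue of that stacking (exposure fusion), using OpenCV's Mertens merge. The bracketed filenames are placeholders, and this is only an analogy to what the visual system does, not a model of it:

    import cv2
    import numpy as np

    # Hypothetical bracketed exposures of the same scene (placeholder filenames)
    paths = ["under.jpg", "normal.jpg", "over.jpg"]
    frames = [cv2.imread(p) for p in paths]

    # Mertens exposure fusion: per-pixel weights favor well-exposed, contrasty,
    # saturated regions from each frame, then blend them -- a crude stand-in
    # for spatially variant mapping across multiple "exposures".
    merge = cv2.createMergeMertens()
    fused = merge.process(frames)  # float32 result, roughly in [0, 1]

    cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))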

I'm concerned about poor scotopic adaptation due to the rather bright light source inside the car - maybe it's the display he's looking at. I see a prominent amount of light on the ceiling all the way to the back of the car and right on his face. It would be really straightforward to collect the actual scene luminances from this particular car interior and exterior in this location, but my estimate is that the interior luminance is a bigger problem for adaptation than the street lights: the display he's presumably looking at subtends a much wider field of view, and he's looking directly at it for a prolonged period of time. It's possible he's not even scotopically adapted because of this.

Also, why is he even looking at the screen? He's obviously distracted by something. Is this required for testing? Ostensibly his first job is to drive the car. Is this display standard equipment? Or is it unique to it being an Uber? Or is it an entertainment device?

Retest with an OEM-lit interior and a driver who is paying attention. We already know the autonomous setup failed, but barriers are in place that also increase the potential for the human backup driver to fail.


I agree, but I don’t think the eye can adapt beyond its inherent dynamic range over a matter of milliseconds - the iris is not opening or closing over that timescale, so you’re relying on the inherent dynamic range of the retina (which is pretty good).

What the eye IS doing is some kind of HDR processing, which is much better than the gamma and levels applied to that video. I bet a professional colorist could grade that footage to make it a much better reflection of what the driver could see in the shadows - even with a crappy camera, you can usually pull out quite a bit of shadow detail.
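As a rough illustration of that kind of shadow lift (a minimal sketch, not what a real colorist would do -- the filename and gamma value are placeholders):

    import cv2
    import numpy as np

    frame = cv2.imread("dashcam_frame.jpg")  # placeholder filename

    # Lift shadows with a simple gamma curve: an exponent below 1.0 brightens
    # dark tones far more than highlights, pulling detail out of the shadows.
    gamma = 2.2  # illustrative; real grading would use curves / lift-gamma-gain
    lifted = ((frame / 255.0) ** (1.0 / gamma)) * 255.0
    lifted = np.clip(lifted, 0, 255).astype(np.uint8)

    cv2.imwrite("lifted.jpg", lifted)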



