
> Even if this is the case, why does it matter?

Because people are using deep learning on single-lens cameras to replace depth perception... and then wondering why the cars that do this run into stationary objects with flashing lights. https://static.nhtsa.gov/odi/inv/2021/INOA-PE21020-1893.PDF

No one really cares about the areas where deep learning works. People are complaining about the areas where it fails, with dramatic and deadly results.




The semantic understanding problem, more generally, is under-acknowledged in autonomous driving.

A human can tell the difference between a child standing at the side of a road about to throw a ball into it, and a child standing at the side of a road waiting for a bus. A human will slow down in anticipation of the likely outcome. A robot without state awareness is extremely limited in its available responses.

Without a useful state model of the universe (i.e. concept awareness), you're limited to purely reactive behaviors.


That's still ignoring the problem. "Self-driving" tech is nowhere near that. You gotta set your expectations correctly.

We're at the "firetruck with flashing lights was hit at full speed on FSD mode" stage of the problem. That means the depth-field mapping broke: the car couldn't tell how far away the firetruck was and plowed into it at full speed.

It's very telling that the other self-driving companies are using LIDAR to build the depth map, instead of trying to infer depth maps through deep learning.
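To make that contrast concrete, here's a toy sketch (not any vendor's actual code; all names and numbers are hypothetical) of why LIDAR sidesteps the inference problem: depth comes straight from time-of-flight geometry, so a stationary object is just a close point in the cloud, no learned model required.

```python
import math

# Speed of light in m/s: a LIDAR return is a direct time-of-flight
# measurement, so range needs no learned inference.
C = 299_792_458.0

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to a target from the echo's round-trip time."""
    return C * round_trip_seconds / 2.0

def nearest_obstacle(points, fov_deg=30.0):
    """Closest point in a forward cone of a 2D (x, y) point cloud.

    x is metres ahead of the sensor, y metres to the side.
    Returns math.inf if the cone is empty.
    """
    half = math.radians(fov_deg) / 2.0
    dists = [math.hypot(x, y) for x, y in points
             if x > 0 and abs(math.atan2(y, x)) <= half]
    return min(dists, default=math.inf)

# Hypothetical returns: a far object, a close one dead ahead,
# and one well off to the side (outside the forward cone).
cloud = [(40.0, 1.0), (12.0, 0.5), (8.0, 30.0)]
print(round(nearest_obstacle(cloud), 2))  # closest forward object, metres
```

A monocular deep-learning pipeline has to *estimate* those distances from pixel statistics, which is exactly the step that can fail on an unusual scene like a firetruck with flashing lights.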




