
FYI, worth reiterating: a deep learning vision system will not necessarily recognize a dummy as a human, particularly if it operates in both the visible and IR spectra.
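
To make the point concrete, here's a minimal sketch of why an IR channel defeats a dummy. Every name here (`plausibly_human`, `ir_frame`, the temperature band) is hypothetical, invented for illustration; no real detector API is implied.

    import numpy as np

    # Rough surface-temperature band for a living person, in Celsius.
    # (Assumed value for illustration only.)
    HUMAN_SKIN_TEMP_C = (30.0, 38.0)

    def plausibly_human(ir_frame: np.ndarray,
                        bbox: tuple[int, int, int, int]) -> bool:
        """Thermal sanity check on a visible-spectrum detection.

        A mannequin can look human in RGB, but at ambient temperature
        it has no body-heat signature, so this IR check rejects it.
        """
        x0, y0, x1, y1 = bbox
        patch_temp = ir_frame[y0:y1, x0:x1].mean()  # mean temp inside the box
        return HUMAN_SKIN_TEMP_C[0] <= patch_temp <= HUMAN_SKIN_TEMP_C[1]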



That's a fair point, which also occurred to me while reading the article. It is, I think, indicative of a deeper issue with using ML in these sorts of safety contexts. If the only way to really test your safety system is to actually put people in danger, your whole concept may be problematic.


That's why Tesla's approach is pretty brilliant IMO. It's easy to collect samples where hard braking occurred and a real, actual human was visible in the path of the car while the car was under human control. No dummies are needed, and the AI was not in control of the car, so there's no ethics issue either. Your Tesla will upload such samples automatically if Tesla's deep learning system wants them.
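
Here's a minimal sketch of the trigger logic described above. None of these names or thresholds reflect Tesla's actual software; they're assumptions that just illustrate harvesting labeled positives from human driving.

    from dataclasses import dataclass

    # Assumed deceleration threshold for "hard braking", in g.
    HARD_BRAKE_THRESHOLD_G = 0.4

    @dataclass
    class FrameEvent:
        decel_g: float            # longitudinal deceleration in g
        human_in_path: bool       # onboard detector saw a person ahead
        autopilot_engaged: bool   # was the AI in control?

    def should_upload(event: FrameEvent) -> bool:
        """Flag frames where a human driver braked hard for a real person.

        Requiring human control means the AI never has to be tested on
        live pedestrians to gather these positive samples.
        """
        return (event.decel_g >= HARD_BRAKE_THRESHOLD_G
                and event.human_in_path
                and not event.autopilot_engaged)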



