
There's still mechanical failure to deal with.



I am uniquely equipped to answer that question. I did an undergrad in mech E and worked as an automotive engineer for a while before pivoting to computer vision and ML.

The safety standards that mechanical parts need to abide by are orders of magnitude higher than for any vision/software product. Additionally, mechanical failures are rarely catastrophic: the part will most likely alert you hours or days in advance that it has begun to fail. And when catastrophic failure does happen, it usually leads to the car stopping rather than ramming through a busy intersection.

The problem with ML algorithms is that they fail without warning and without intuition, which makes it incredibly hard to design against catastrophe. ML algos are also great at overfitting, so they can often learn to beat narrow tests while being inept at the exact scenario the test was supposed to evaluate them on. (I know, I know, "Don't tune on the test set!"... but human factors make it near impossible to avoid some level of it.)
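To make the "tuning on the test set" point concrete, here's a toy sketch (my own illustration, not from the parent comment): the labels are pure noise, yet repeatedly picking whichever model scores best on the test set still produces a model that looks well above chance on that test set, while a truly untouched holdout reveals chance-level performance. The data, feature-subset "search", and LogisticRegression choice are all arbitrary stand-ins.

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  n, d = 200, 50
  X = rng.normal(size=(n, d))
  y = rng.integers(0, 2, size=n)              # labels carry no signal at all

  X_train, y_train = X[:100], y[:100]
  X_test,  y_test  = X[100:150], y[100:150]   # the set we (wrongly) tune against
  X_hold,  y_hold  = X[150:],   y[150:]       # a truly untouched holdout

  best_score, best_model, best_feats = -1.0, None, None
  for _ in range(200):
      # "Hyperparameter search": try a random feature subset and keep whatever
      # happens to score best on the test set.
      feats = rng.choice(d, size=10, replace=False)
      m = LogisticRegression().fit(X_train[:, feats], y_train)
      s = m.score(X_test[:, feats], y_test)
      if s > best_score:
          best_score, best_model, best_feats = s, m, feats

  print("accuracy on the test set we tuned against:", best_score)  # well above 0.5
  print("accuracy on the untouched holdout:",
        best_model.score(X_hold[:, best_feats], y_hold))           # ~0.5

The gap between the two printed numbers is exactly the kind of silent, overfit "passing grade" that a narrow evaluation can hand to a safety-critical system.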

We are in dire need of a regulatory/evaluation body for AI that consists of top-tier ML researchers. That could be a govt. body or a 3rd-party contractor that works closely with the govt. But we need to start laying the groundwork and debating it now, so that when we do need it, it is ready to be deployed.


I have a similar background: elec-mech undergrad, master's in robotics doing SLAM, time spent doing electrical design for industrial use, then software doing computer vision, and now working on drone autopilots.

I 100% agree that we need to regulate the design of safety-critical CV (and particularly ML-based CV) algorithms so that, if nothing else, the failure mechanisms are quantifiable and limited in their impact.



