
Yes, it is trivial to calculate how to respond to known hazards. What is hard is working out what the hazards actually are in any given situation. Humans don't merely look at the road and compute how much standing water is there; they draw on prior experience and a sense of what other drivers are doing on the road and why they are doing those things.



Self-driving cars also have prior experiences and a sense of what other drivers are doing and why. That's sort of the point of machine learning.

In fact, they will have a lot more prior experience than any human driver in the world.


Machine learning still mistakes elephants for cats.

While it's quite fun to pin up a picture of a cat dressed in an elephant costume, and it surely amuses a few colleagues, a self-driving car is not something that should mistake an elephant for a cat.

ML is IMO not reliable enough for use in self-driving cars; in complicated situations the driver should take over.


It's not reliable enough now, sure.

Similarly, my 6-year-old is also not reliable enough to pilot a car.

Both will change as they mature.


Sure, multiple decades sounds about right (as opposed to current confusing statements like "has full self driving capabilities").


Unfortunately, in the realm of machine learning, quantity does not equate to quality.


The parent of any teenage driver can also attest to this.


They have a lot more driving experience, but humans don't use only their driving experience as priors.


Humans regularly get killed misjudging how much water is in underpasses. I guess an autonomous car with detailed mapping could (eventually) measure water depth much better by comparing its sensor readings with its map.
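
A minimal sketch of that idea, assuming the car carries an HD map with surveyed road-surface elevations and a downward-looking range sensor; every name and number below is hypothetical, just to show the comparison:

    # Estimate standing-water depth by comparing a sensed surface elevation
    # against the road-surface elevation stored in a detailed map.
    from dataclasses import dataclass

    @dataclass
    class MapTile:
        road_elevation_m: float  # surveyed road-surface elevation at this spot

    def estimate_water_depth(tile: MapTile,
                             sensed_surface_elevation_m: float,
                             sensor_noise_m: float = 0.05) -> float:
        """Standing water raises the reflective surface above the mapped road level.

        Returns an estimated depth in metres (0.0 if the difference is within noise).
        """
        depth = sensed_surface_elevation_m - tile.road_elevation_m
        return depth if depth > sensor_noise_m else 0.0

    # Example: the map says the underpass floor sits at 11.80 m, but the sensor
    # sees a surface at 12.25 m -> roughly 0.45 m of standing water.
    tile = MapTile(road_elevation_m=11.80)
    depth = estimate_water_depth(tile, sensed_surface_elevation_m=12.25)
    if depth > 0.30:  # hypothetical "too deep to drive" threshold
        print(f"Standing water ~{depth:.2f} m deep: reroute")

The hard part, of course, is everything the sketch assumes away: an accurate, current map, a sensor that reads the water surface reliably, and correct localization.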



