
I've been saying this for years, since I took Thrun's class when he was at Stanford. Google's dirty little secret is that self-driving cars are still mostly smoke and mirrors. Given relatively controlled conditions and a trained driver who can play backup when needed, they work. But put them in complicated situations - snow, a busy city environment, abnormal signage - and watch out.

The problem is that the driving model is probabilistic. When you solve a problem probabilistically, getting from 90% coverage to 99% to 99.9% to 99.99% involves exponential leaps in difficulty. So even if the car covers 99.9% of driving conditions (and it currently doesn't), there's still a tremendous amount of work left to reach 99.9999%, or whatever the threshold is for it to be deemed "safe" for fully autonomous use.
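To put rough numbers on that (just back-of-the-envelope arithmetic on the percentages above, not real reliability figures):

    def failure_rate(nines):
        # residual failure rate for a given number of "nines" of coverage:
        # 3 nines = 99.9% covered, 6 nines = 99.9999% covered
        return 10.0 ** -nines

    # 99.9% leaves 1 failure per 1,000 situations; 99.9999% leaves 1 per 1,000,000
    print(failure_rate(3) / failure_rate(6))  # ~1000

Each extra nine is a 10x cut in the residual failure rate, so closing the gap from 99.9% to 99.9999% means eliminating roughly 999 of every 1,000 failures that still remain - and each of those is rarer and weirder than the last.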

I personally am bearish on the technology, as getting the final, inconvenient situational cases right will be extremely challenging. At Stanford I came to the opinion that the probabilistic approach would get us to really cool demos, but never to a fully autonomous vehicle. That said, the people working on this are a whole lot smarter than I am, and I would love to be proven wrong.




One case that Google has not solved, for instance, is navigating a gas station. When the cars fill up at Shoreline & Middlefield in Mountain View, I see humans doing the driving.


Approaching this as a series of "navigate around X" problems is the wrong way to go about it. Once you solve navigating a gas station, the next problem will be driving past a school, or down a lane where kids are playing. The list never ends.

The idea must be to come up with a generic algorithm that solves these problems as a whole, not one specific case at a time.


Well, I mostly agree with this.

The gas station is a special case though, because the objective isn't just traveling from here to there. Finding a parking space is a somewhat similar kind of special case, where there is a specific objective beyond the route itself.


Why would a car of the future be using an old tech like gas? Surely it will be electric.


I decided to test a Google self-driving car as it crossed an intersection by accelerating toward its broadside - no reaction at all. Well, I got a reaction from the humans inside.

I recently narrowly avoided getting killed in a broadside collision by braking just in time. If I had been farther along, I would have sped up out of the way instead. Would a probabilistic approach handle this? Maybe they need to compile a list of special edge cases.
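To make the brake-vs-accelerate intuition concrete, here's a toy kinematics sketch (entirely hypothetical - the function names, the fixed braking/acceleration rates, and the rectangular "conflict zone" are my simplifications, not how any real planner works):

    import math

    def clear_time(dist, v, accel):
        # time to cover `dist` meters from speed `v` (m/s) at constant
        # acceleration `accel` (m/s^2): solves 0.5*a*t^2 + v*t = dist
        if accel == 0:
            return dist / v
        return (-v + math.sqrt(v * v + 2 * accel * dist)) / accel

    def choose_evasion(d_to_zone, zone_len, v, t_other, brake=8.0, accel=3.0):
        # d_to_zone: meters to the conflict zone; zone_len: its length;
        # v: our speed (m/s); t_other: seconds until the other car arrives
        can_brake = v * v / (2 * brake) < d_to_zone  # can we stop short of it?
        can_accel = clear_time(d_to_zone + zone_len, v, accel) < t_other  # clear it first?
        if can_brake:
            return "brake"
        if can_accel:
            return "accelerate"
        return "no kinematic escape - swerve or honk"

    print(choose_evasion(20.0, 4.0, 15.0, 1.2))  # "brake": still room to stop
    print(choose_evasion(5.0, 4.0, 15.0, 1.2))   # "accelerate": too close to stop

The point is just that the right answer flips depending on distance and speed, which is exactly the kind of continuous trade-off you'd hope a planner evaluates rather than a lookup table of special cases.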


You can't compile a list of edge cases for this kind of thing, because it's impossible to enumerate every situation the car won't handle correctly.

In the end, you need a learning technology that can properly adapt to any possible situation and give a decent response. Maybe it can be done, but we certainly aren't there yet, and I'm skeptical as to the tractability of the last bit of the problem.


I think they're currently concerned with making sure the vehicle drives safely. Many humans apply evasive maneuvers, only to end up killing someone else or hurting themselves in other ways.

All this shows is that the Google car was driving well, and you weren't. Though I'm sure as the tech progresses they'll look into this sort of thing, and will implement what makes sense.


Did you hit the car? Would the car have hit someone else if it accelerated or braked? Perhaps the AI made the right decision.



