
I can't believe this person kept using it. If I had noticed a bug in auto-pilot and complained about it, I would be way too scared to ever use auto-pilot again. Personally, I never use auto-pilot because driving is piss easy, as it's designed to be.

Perfect self-driving is a nearly impossible feat to accomplish on an unbounded track. I can only imagine automated driving in a system which has no room for error. Examples include: tunnels under the ground, fixed rails or tracks (as in trolleys, trains, etc.), or anything else that vastly reduces the entropy involved in driving.

With self-driving cars on current roads, it will probably take years to get from a 1% error rate to 0.1%, and decades to get from 0.1% to 0.01%, which still isn't good enough. Perhaps it will take a century or longer to develop the artificial intelligence required to make self-driving cars perfect "enough". There's just too much room for unique problems to spawn. Bounding vehicle freedom seems to be the only way forward.




Your numbers about error percentages don't make sense. Ideally, you'd want an outcome measure like fatalities per million miles or accidents per 100k miles, not a vague "% error".

Furthermore, look at the actual data we have right now. SDC makers actually publish data in California on their "disengagement rate", which is how many times the human drivers took over from the software. Waymo has steadily improved that rate over the past few years; they are now driving for many hours without disengagements. Look at the link below, page 4: you'll see they have 63 disengagements over 350k miles. That's 1 per ~5.5k miles, so these cars are driving for days without a human takeover.
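A rough sketch of that arithmetic in Python (the 350k miles and 63 disengagements come from the linked report; the 30 mph average speed is my own assumption):

    # Back-of-the-envelope check of the report figures (page 4 of the linked PDF)
    miles_driven = 350_000      # approximate autonomous miles from the report
    disengagements = 63         # disengagements reported over that period

    miles_per_disengagement = miles_driven / disengagements
    print(f"{miles_per_disengagement:,.0f} miles per disengagement")        # ~5,556

    avg_speed_mph = 30          # assumed average speed, not from the report
    hours_between = miles_per_disengagement / avg_speed_mph
    print(f"~{hours_between:.0f} hours of driving between disengagements")  # ~185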

They will not need their own infrastructure; that would not be economically viable. They will go on the roads we have or they won't go at all. Tunnels are going to be reserved for high-density point-to-point travel, if the Boring Company or others ever reach scale...


Then let's add some perspective. You must be referring to this[0] report. If the average person puts on 1,000 miles per month[1], that means they'd have to deal with a disengagement (a mishap) at least twice a year, which is not acceptable for fully autonomous driving. I'm going to define a "fully autonomous vehicle" as "a vehicle which should never require me to sit in the front seat and control it under any conceivable circumstance".

Put differently, I should be able to lie down for a nap in the back seat and wake up at my destination without any chance of a disengagement during my entire lifetime. At the current rate of 1 mishap per 5,500 miles, I would be dead after about 6 months.

Assuming a human lives to 75 years (we should really be using 75 years minus 16 years, but it's unimportant), a lifetime of driving is about 1,000 mi/mo x 12 mo/yr x 75 yr = 900,000 miles. I don't even want the probability of encountering a mishap to be once per lifetime, let alone once per 6 months. One mishap per 900,000 miles isn't good enough, because, on average, I'd encounter one disengagement in my lifetime. Assuming we're striving for a world where 7 billion people can drive without a single incident in 75 years (a vast underestimate), we need the probability of a mishap to be less than once per 7,000,000,000 humans x 900,000 mi/human = 6.3 x 10^15 miles.

1/5.5e3 is not even close to 1/6.3e15. We're talking about 12 orders of magnitude in our error rate. I'd say we're laughably far away from our goal. We've got a long way to go.
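For anyone who wants to check the arithmetic, here it is as a short Python sketch (the 1,000 mi/month, 75-year, and 7-billion-person figures are the assumptions stated above):

    import math

    miles_per_month = 1_000            # average mileage, from [1]
    driving_years = 75                 # generous driving lifetime (ignoring the first 16 years)
    lifetime_miles = miles_per_month * 12 * driving_years
    print(f"{lifetime_miles:,} lifetime miles")                           # 900,000

    population = 7_000_000_000
    required_miles_per_mishap = population * lifetime_miles
    print(f"{required_miles_per_mishap:.1e} miles per mishap required")   # 6.3e+15

    current_miles_per_mishap = 5_500   # from the Waymo report figure above
    gap = math.log10(required_miles_per_mishap / current_miles_per_mishap)
    print(f"~{gap:.0f} orders of magnitude short")                        # ~12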

[0] https://www.dmv.ca.gov/portal/wcm/connect/42aff875-7ab1-4115...

[1] https://www.fool.com/investing/general/2015/01/25/the-averag...


> I don't even want the probability of encountering a mishap to be once per lifetime,

This doesn't seem reasonable - Waymo's report doesn't go into enough depth about each disengagement to justify demanding that sort of extreme reliability.

If "2 disengagements" per year, were at most fenderbenders - something I'd wager humans do way more than twice per year - that would be a very different story than if those 2 disengagements were life threatening. Sure you'd wake up from your nap, but you wouldn't be dead, and at most you'd have to exchange insurance information.


> This report covers disengagements following the California DMV definition, which means "a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle."

So, you're right, there's no clear distinction, but I would further argue that it doesn't matter. Even if only 1/1000 disengagements are fatal, my conclusion remains the same. I think we're splitting hairs at this point, though.

Even if not fatal, I highly doubt a significant fraction of such events (as defined above) would allow me to take a nap upon departure and wake up at my destination, so it would still be unacceptable to me. I guess we have to agree on what an acceptable end-game is for fully autonomous vehicles. If you think "waking up on the shoulder exchanging insurance" is acceptable, then that would indeed change the numbers (but by how much? Two, maybe four orders of magnitude? See the rough numbers below).
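To put rough numbers on "by how much": a quick sketch under the purely illustrative assumption (not something the report tells us) that only 1 in N disengagements would actually have been dangerous:

    import math

    gap_orders = 12        # rough gap from the earlier calculation
    # If only 1 in N disengagements would actually have been dangerous,
    # the required improvement shrinks by log10(N) orders of magnitude.
    for n in (100, 1_000, 10_000):
        remaining = gap_orders - math.log10(n)
        print(f"1 in {n:,} dangerous -> still ~{remaining:.0f} orders of magnitude short")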

Humans get into fender-benders all the time, but surely we'd strive to eradicate this inefficiency in the automated driver. I think this is still an active area of debate; some assembly-line work can be made more efficient with machines, but we've seen humans out-perform machines in other types of work. I think driving tends to utilize more reactive, intuitive "System 1" thinking[0], so I imagine that humans will be vastly better than machines at driving for a very long time.

[0] https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow


Are you kidding? A person averages two fender-benders per year?

And no, it's not acceptable by any stretch. A disengagement is extremely dangerous for drivers who are lulled into a false sense of security by the marketing of... self-driving cars.


I feel it's ludicrous to worry about getting to a stage where nobody ever has any type of accident.

I'd like to know what the actual rate would be; the 5k-mile figure reflects conservative thresholds for when control should be handed over, and many problems may be safely solvable (if annoying) by simply coming to a halt or pulling over.

No, the current setup is not OK to release on a large scale, but it's not expected to be, and we're not 12 orders of magnitude from a reasonable point.


That's actually pretty damn terrible. 1 disengagement per 5.5k miles is roughly 1 accident per 5.5k miles if the driver is not paying attention. Look at the Uber accident cam video pointed at the driver. That is the attention level of drivers in so-called "self-driving cars."


These cars have had far more disengagements than accidents. Why are you assuming that one disengagement is equal to one accident?


Because that's exactly what you would expect from an alert safety driver - to take control and prevent an accident when the car makes a mistake.


I’m honestly surprised this is even allowed to operate. I doubt that this is increasing safety in any meaningful way.


And even if they kept using it in general, why use it at the spot where it had acted up several times? Why weren't the driver's hands on the wheel, and why wasn't the driver extra careful there?


Based on Tesla's own explanation, there is zero reason not to use Autopilot at that location.



