Tesla crash victim had complained about auto-pilot in same location (abc7news.com)
330 points by _jgvg on March 30, 2018 | 330 comments



"Our data shows that Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times since Autopilot was first rolled out in 2015. ... There are over 200 successful Autopilot trips per day on this exact stretch of road."

Whoever wrote this statement does not understand software. If there is some kind of Heisenbug, those numbers are too small to prove that the software is OK, and the fact that the guy who died had reported an issue at that exact spot, and later crashed there, looks like a very interesting hint about a potential software bug that should be investigated. The Tesla statement is the equivalent of "works on my laptop" at a bigger scale. Also consider that 85k times since 2015 means that potentially only a fraction of those trips were executed with the latest version of the software. Moreover, the street layout may now be somewhat different, triggering some new condition.


They also provide numbers without any context. Does one crash every 85,000 trips through that intersection mean Autopilot is safe, or is it a death trap?

Over 200,000 cars a day drive that stretch of highway. If a car crashes there once a week, then the Tesla Autopilot is an order of magnitude more dangerous than a human driver.
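
A rough back-of-the-envelope using the figures above (200,000 cars/day and one human-driven crash a week are assumptions, not data) against Tesla's 85,000-trips number:

    # All human-driver numbers here are assumptions from the comment above.
    human_trips_per_day = 200_000
    human_crashes_per_week = 1
    human_trips_per_crash = human_trips_per_day * 7 / human_crashes_per_week  # 1.4 million

    autopilot_trips_per_crash = 85_000  # Tesla's figure, with one fatal crash observed

    print(human_trips_per_crash / autopilot_trips_per_crash)  # ~16, i.e. roughly an order of magnitude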


How is Tesla supposed to know, yet, if Autopilot actually made the maneuver? They're saying what they know.

Edit: in a posting today, Tesla has said more: https://www.tesla.com/blog/update-last-week’s-accident but at the time of the earlier blog posting, Tesla says they did not know yet.


> For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.

That's very curious wording. The important number would seem to be "millions of miles per fatality while Autopilot is enabled", not "millions of miles per fatality in Autopilot-equipped vehicles". The way it's phrased, I wonder what part of that 3.7x multiplier is unrelated to Autopilot and is just due to the car's high safety rating in a collision, recent release (no long tail of older, less safe models contributing to the statistics), and luxury status (more likely to be driven conservatively by old rich people).
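
For what it's worth, here is where the 3.7x comes from, plus a hypothetical illustration of the equipped-vs-engaged gap; the 40% engaged-miles fraction below is made up purely for illustration:

    # Tesla's blog figures.
    us_miles_per_fatality = 86e6        # all vehicles, all manufacturers
    tesla_miles_per_fatality = 320e6    # Autopilot-*equipped* Teslas, engaged or not
    print(tesla_miles_per_fatality / us_miles_per_fatality)    # ~3.7

    # Hypothetical: suppose only 40% of those miles had Autopilot engaged and
    # every fatality occurred while it was engaged (both are assumptions).
    engaged_fraction = 0.4
    engaged_miles_per_fatality = tesla_miles_per_fatality * engaged_fraction
    print(engaged_miles_per_fatality / us_miles_per_fatality)  # multiplier drops to ~1.5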


I think there's automatic collision avoidance always on that uses the autopilot sensors. So even if you're not actively using autopilot, the car can avoid hitting an obstacle it sees on the road.


Yup, and this has saved me from at least 2 collisions! The first time was right after I got the car and I was totally not watching the road but instead playing around with the controls, and suddenly the screen started flashing red. The second time was a car to my right as I tried switching lanes. I rarely let the car drive itself, but when I do, it often alerts that I don't have my hands on the wheel and I'm like, uhh, I'm literally holding the wheel, so I take the data indicating the driver was not holding the wheel with a grain of salt...


True, but most cars in the luxury-sedan segment (and quite a few compact cars) can do that, so this is not autopilot-specific.


Yeah, and automatic emergency braking is a clear-cut enough win for safety that there's been talk of making it mandatory on all new cars in a few years' time. Tesla seem to have adopted a deliberate strategy of conflating the safety benefits of an uncontroversial feature many cars have with the much more questionable safety of their autopilot mode in order to promote the latter. The big example Elon Musk was pushing of Autopilot preventing an accident and likely saving someone's life was the result of automatic emergency braking in conditions very far from those autopilot mode could be used in (heavy rain in an urban area at night, if I remember correctly).


Yes, this conflation has bothered me as well. In the long term it's an absolute slam dunk to transition to autonomous vehicles, but I'm a bit worried that they feel they need to hide the most important numbers in 2018 in an attempt to avoid an irrational / unwise response from the public.


Automatic emergency braking is already mandatory for trucks in the EU.


From their blog: "In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident."

Notice it says one fatality per "320 million miles in vehicles equipped with Autopilot hardware." But, how many of those miles (or what percent of the time) does a Tesla driver use Autopilot? Also, maybe those good stats are due to the structural safety features and not the autopilot. It may be more fair to compare to other cars with excellent crash protection but no autopilot.

I think it would be a great service to the world to improve driving safety, but maybe we need to really start looking at the stats and approach the transition to full autopilot with some more humility. For example, require that drivers keep their hands on the wheel and eyes on the road. When more cars have autopilot, plus mesh behavior between vehicles and e-road features, then maybe we'll be more ready for driverless cars.


> In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers.

Also note that this includes trucks and motorcycles, which have much, much higher fatality rates than passenger cars. Motorcycles have around a 10x-50x higher fatality rate than cars! So already Tesla's blog is doing a misleading comparison to more deadly vehicle classes.

Additionally, there could be all sorts of other variables that make it an uneven comparison. Are Tesla drivers comparable to regular drivers? Are they older? Younger? Both elderly and very young drivers have higher accident rates. Are Teslas driven in rural and urban areas in the same proportions as regular cars? Rural areas have a higher fatality rate.

So we need to make sure that Tesla drivers and driving conditions match those of typical cars before Autopilot comparisons are valid.

Right now, it appears Tesla's autopilot is a death-trap.


And don't forget to only compare against 2012+ luxury sedans. The median age of U.S. passenger cars is nearly 12 years. There are exactly 0 autopilot equipped 12 year old cars. As vehicles have become significantly safer in the past 12 years, I wouldn't be surprised if the fatality rate of 2012 and newer luxury sedans was a third of the U.S. median.


> Right now, it appears Tesla's autopilot is a death-trap.

I agree with all the holes you poked in their stats, but with that last sentence you just went way off the deep end. What does "death trap" mean to you? To me, it seems likely that driving with Autopilot engaged is about as dangerous as with it disengaged.


Do you find yourself veering dangerously close to a traffic barrier on a regular basis (waiting for the day that the crash attenuator becomes defunct so you can slam into it)? I don't and any machine that does that qualifies as a death trap.


They are trying to defend themselves disingenuously: they provide numbers that sound impressive, but those numbers lack any meaning because context is missing.


They definitely know. They collect your driving data even when auto pilot is off, so I'd imagine they collect everything when the autopilot is actually engaged.


Tesla specifically said that the information was in the car and wasn't transmitted to HQ, so they didn't know. If what you say is true, they're lying to the USG.

Let me share an example. I had an accident in a Model S, front-end, which didn't cause a loss of power. Tesla called me a couple of minutes later, saying that they knew I'd had an airbag-deploying accident, and asking if I needed an ambulance. From the description of this accident, power to the Tesla was lost before there was time to phone home.


I believe the accident happened on US 101, which is a multi-lane highway with 5 or 6 lanes at the spot where the accident occurred. How many of the trips Tesla is citing occurred with the vehicle in the same lane as this guy? Comparing apples to apples would probably greatly reduce these 85,000 / 200 per day figures.


Exactly. Same lane, same time of day, same time of year (sunset/sunrise changes in lighting conditions), same weather patterns, same level of traffic. The combinatorics involved expand the possibilities to the point that some scenarios may have had little or no coverage in a dataset with 85k observations.
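
To make that concrete, a toy count with made-up bucket sizes (the categories and counts are illustrative, not real data):

    # Even coarse buckets fragment 85k observations quickly.
    lanes = 5
    times_of_day = 6      # dawn, morning, midday, afternoon, dusk, night
    weather = 4           # clear, rain, fog, low-sun glare
    traffic = 3           # light, moderate, heavy

    cells = lanes * times_of_day * weather * traffic
    print(cells)              # 360 distinct scenario cells
    print(85_000 / cells)     # ~236 trips per cell *if* uniformly spread, which traffic never is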


Yes, Tesla should not be releasing PR targeted at the masses; they should be releasing statements with a scientific level of information. Whether it was disingenuous or just a rushed statement, they need to scrutinize these releases more carefully. It would be fine if they released the holistic information, so long as they go into the fullest depth possible. They need to act as stewards we can trust; if they fail to do so they will lose mindshare and, longer term, market share, as they will simply blend in with everyone else.

The obvious failure was the highway safety mechanism not being replaced immediately after the previous crash - that would have turned the Tesla crash from a death into perhaps only minor injuries.

I would argue too that there is a need for AI image analysis - it could perhaps be crowdsourced - to analyze the structure/state of a highway or any road. It would of course need to be trained, but it could also likely serve as a tool to improve road safety worldwide, bringing everyone in line with known best practices. In this case, it would have caught that the safety mechanism had not been replaced, and that death would have been avoided; cost-benefit analysis says this system would be worth it, as life is invaluable.

If Tesla/Elon leads this effort it would show them being proactive in improving future safety, accountable, and taking ownership. Why has no other auto manufacturer done so, and why has no government implemented such a system? Well, because honestly we are stuck in scarcity, only caring about our own costs, and so if a random person dies here and there, there's not a big enough ripple effect (unfortunately) to cause enough personal worry to force change. Tesla, if it wants to posture as the steward I think many of us hope they are, does however absorb these ripples that get associated with their brand - and therefore the responsibility passes onto them, whether they agree to accept it or not.

All auto manufacturers would likely benefit from this, allowing their software to work better within certain expected constraints. Many more possibilities as I think of it, newly spotted damage or cracks in bridges, detection of debris on highway or excessive dirt on the edge of the road making the conditions more dangerous for emergency stopping/maneuvering, etc.

Edit: I added a few sentences after the fact, so the one upvote prior to editing didn't cover everything I've now said, though it's all in the same vein.


>I would argue too there is a need for AI image analysis - could perhaps be crowdsourced - to analyze the structure/state of a highway or any road.

I fully agree with you here. Urban areas are too dynamic to rely on static maps and local LOS only. This sort of data sharing is going to be a necessity, IMO, to achieve level 4 or level 5 autonomy.


There was a big note in the GTC keynote this year about Nvidia using virtual environments to debug their autopilot algorithms. Think of a "car" being driven in a high-fidelity video game.

Advantage being that if you can provide representative, simulated input then you can increase the training miles by orders of magnitude in the same amount of time, limited by computation rather than physical mileage.


This is true, to the extent that the selection of input parameters to simulate provides coverage over the domain of all possible input parameters. At some point you're back to testing the failure of imagination of the simulation/test creators.


The problem is the simulation by definition can only simulate things that are accounted for. Any number of completely arbitrary out of the blue things can happen in real life.


Simulations can make use of random number generators and could in theory, following the lead of a project like afl, adaptively find algorithmic weak spots.
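
A minimal sketch of that idea in the AFL spirit: mutate scenario parameters and keep the ones that push a simulated vehicle closest to a safety violation. Everything here (the scenario fields, the toy safety_margin stand-in) is hypothetical; real simulators and planners look nothing like this.

    import copy
    import random

    # Hypothetical scenario description; a real simulator's config would differ.
    def random_scenario():
        return {
            "sun_angle_deg": random.uniform(0, 90),
            "lane_marking_wear": random.uniform(0, 1),
            "lead_car_speed_mps": random.uniform(15, 35),
            "barrier_offset_m": random.uniform(-0.5, 0.5),
        }

    def mutate(scenario):
        s = copy.deepcopy(scenario)
        key = random.choice(list(s))
        s[key] *= random.uniform(0.8, 1.2)  # perturb one parameter slightly
        return s

    def safety_margin(scenario):
        # Stand-in for an actual simulation run: a toy score that is lowest when
        # the sun is low on the horizon and the lane markings are badly worn.
        return scenario["sun_angle_deg"] / 90 + (1 - scenario["lane_marking_wear"])

    # AFL-like loop: keep a corpus of "interesting" (low-margin) scenarios and
    # preferentially mutate those instead of sampling blindly.
    def fuzz(iterations=10_000, keep=100):
        corpus = [(safety_margin(s), s) for s in (random_scenario() for _ in range(keep))]
        for _ in range(iterations):
            _, parent = random.choice(corpus)
            child = mutate(parent)
            corpus.append((safety_margin(child), child))
            corpus.sort(key=lambda pair: pair[0])  # lowest margin first
            corpus = corpus[:keep]
        return corpus  # the worst-case scenarios found so far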


Simulations are only useful up to a point.

Trying to use them to model real world scenarios would be useless in practice, due to the Ludic fallacy [0]. Real life is too complex to be modeled in any simulation.

[0] https://en.wikipedia.org/wiki/Ludic_fallacy


I don't think the connection to the referenced fallacy is nearly strong enough to serve as a QED on its own. It's also seemingly promoted only by one person.

As for modeling real life with simulations, the data exists for every type of accident a human has encountered. If you incorporate such into a simulation, plus randomly vary every free parameter, then your simulation will cover more scenarios than any human driver can possibly experience.

Thus, simulations should be able to help an autonomous vehicle outperform humans by a large margin, which is the only goal that matters.


Oooh, adversarial scenario generation! I think there's something to that.


Well, it's a carpool lane which EVs can use with a solo driver, so likely a large % of them, but your point still stands.


I don't see what's wrong with what they wrote. This statement of course doesn't show that the software is safe, but it is evidence that if there is a bug, it probably isn't easily triggered.

(Keep in mind that until they recover the logs, we don't even know if the autopilot was on.)


They have confirmed the autopilot was on:

https://www.tesla.com/blog/update-last-week’s-accident


But does "autopilot on" mean that "autopilot has exclusive control and driver inputs are ignored"?


No, of course driver inputs will not be ignored. They will override the autopilot. But none were given. From the blog:

>In the moments before the collision, which occurred at 9:27 a.m. on Friday, March 23rd, Autopilot was engaged with the adaptive cruise control follow-distance set to minimum. The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.


> The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator

So, clearly, the AP is not capable of avoiding such obstacles yet, even when unobstructed. It also seems like a mistake to not slow down to a survivable speed when hands are off...


From the image of the crash site, it looks like the driver was in a spot trying to merge left. That would require you to match the speed of the cars in the other lane. Also it looks like the software made the critical failure of assuming that it was driving in the left lane instead of the shoulder.

https://imgur.com/a/iMY1x


A similar center divide to the one that killed Princess Diana.



Perhaps I am just more cautious than most but I always have the follow distance set to give me the maximum separation. That seems to work out at about three seconds. The minimum seems dangerously close to me.


Heisenbugs in AI are also interesting. As computers approach sentience, it will become progressively harder to explain their behavior.

Why did you eat a bagel instead of eggs this morning? Usually there isn't a scientific answer.

In the present, AI models are already so complicated that it seems hard to get reproducible diagnostic results, short of just saving every frame of data that the car's sensors pick up. And for video that seems rather prohibitive. Imagine collecting all sensor data for each of those 200 trips per day along the entire stretch of road.


I'd be willing to take a $1k bet that computers won't approach sentience in our lifetime. Don't believe the AI hype train; there hasn't been a real breakthrough, just more accurate versions of the same (dumb) classifiers.


Why do you think that humans are not just even more accurate versions of the same types of classifiers?


Because we integrate, learn, plan, abstract, create, have way more senses, complex memory, societal and cultural embeddings, and a whole host of things that make us human. A text box on hacker news is not the place for such a reply. Go take some cognitive science, neuroscience, anthropology, sociology, psychology, and philosophy classes and you’ll see why. “AI” as we know it is a joke, particularly as media has hyped it.

The original guys (McCarthy, Minsky, etc.) at least understood the gigantic complexity of the undertaking, insofar as we understand how the mind works at all, and they never reduced things purely to classifiers. That's like 0.1% of the job. The real researchers take philosophy of mind seriously; technologists want a quick and cheap solution that cannot exist.


This is way more than what's required for sentience though. Sentience is having a notion of self, many animals are sentient.

Maybe you meant sapience or intelligence, but I wonder if people think computers won't be sentient (as in self aware) in our lifetime, even if they are still not able to match human cognition.


> Because we integrate, learn, plan, abstract, create, have way more senses, complex memory, societal and cultural embeddings, and a whole host of things that make us human.

And yet all this complexity and purpose may potentially be simply the product of putting a large collection of matter in a heat bath for a few billion years.

Who says that this process can't be repeated via brute force simulation? That the emergence of complexity and life under thermal disequilibrium + sufficient number of degrees of freedom is actually a basic fact of the universe?

I don't see any rule saying why, with sufficient brute-force computational power, we won't be able to set up better and better simulations that will allow for the emergence of true artificial intelligences.


I'm sure people smarter than me have taken stabs at estimating what's required to do this, so I can't positively say this will or will not ever happen (especially with quantum computing). But do consider:

We think the brain has 100 billion neurons. Making those work is a very complex network: lots of chemical gradients, bio-electrical systems, cellular-level systems, and probably a host of other things we don't know about. So it wouldn't surprise me if, for a single neuron, you'd need to model another billion things at minimum.

Now factor in all of the other input senses, and those 100B neurons have increased dramatically for I/O. Plus a multiplier to model everything going on inside a neuron. Then there’s also modeling the communication layers between, effects of myelin sheaths and all that.

So we’ve established that just to run a single “step in time” (whatever that means, as that’s a computing concept), we’re into probably billions of trillions of calculations.

Now all of those models have to make sense and do something coherent. Every future step. So now you need some kind of meta-model that is able to push this forward and continue in the right trajectory. Now the complexity is dramatically higher still.

Now that we’ve somehow figured out how to model everything required for intelligence, we need to search that space through brute force, as you put it.

To me it seems like the time required will be astronomical — Sun might have blown up by then.

So why go down this route? Nature solved this in parallel relatively cheaply through billions of iterations and billions of permutations over tens of thousands of years.

Seems easier to simplify the problem down to more basic stuff that works well enough and iterate / combine systems. Either that or find a way to grow a “brain in a test tube” that you can hookup to a computer. There’s also always humans available at mass scale for pretty cheap across the globe — sophisticated intelligence built in.


> And yet all this complexity and purpose may potentially be simply the product of putting a large collection of matter in a heat bath for a few billion years.

It doesn't have to be. There are alternatives to that.


Because humans have agency, a quality we seriously dislike in things we create (we want control). If a computer woke up one morning and said, "I like being plugged in more consistently, I'm moving to your neighbor's house," we would not like that computer much, even though that might be more "intelligent" behavior from its perspective. So we basically don't allow things we engineer to have "real" intelligence, because that would include qualities we don't like in our creations.


because we invented cars. And a few other things.


Among these other things which we invented are algorithms which invented other algorithms to solve real-world problems. There is nothing unique or irreplicable about humans (save for culture, emotions, etc., which nobody wants to replicate).


While it may come to pass that we do those things, so far all we've managed to replicate is the first 6 or so levels of the visual processing field.

Methinks there is far further to go than we realise - especially with the current air of hubris around the entire endeavour.


I’d be willing to make the same bet, the problem would be one of definition.


https://en.wikipedia.org/wiki/Catastrophic_interference

>Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to completely and abruptly forget previously learned information upon learning new information.


These self driving systems are not likely learning in realtime. When the algorithms are updated, best practice is to validate their performance on a large regression set. So something like this should be caught during validation, assuming your test data is good.


No set of test data can exist that covers every real world situation.


We also see this in unexplainable flash crashes from high-speed trading software gone berserk. The speed at which they handle data is so immense that going through the logs of a few seconds around the issue could take several years of human time.


Aren't many AI methods also inherently probabilistic? That would make it by definition impossible to definitively explain any particular behavior.


If you have a probability vector, you would typically take the max, not sample the action from the probability distribution.

e.g., if an image (from the forward camera) returns the probability vector {"stop_sign": 0.9, "green_light": 0.1}, you stop the car 10/10 times.
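
A tiny sketch of that point, using the same example labels:

    # Deterministic action selection: take the argmax of the classifier output
    # rather than sampling from it.
    probs = {"stop_sign": 0.9, "green_light": 0.1}
    label = max(probs, key=probs.get)  # always "stop_sign" for this input
    action = "brake" if label == "stop_sign" else "proceed"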


What do you mean by probabilistic? If you mean that the model takes input and outputs one or more labeled probabilities (e.g. "90% confidence the input photo is a dog, 10% confidence the input photo is a cat"), then yes, I believe that many AI systems work this way. If you mean random, in the sense that the system may return different results given the exact same inputs, then I'm not sure if there are AI systems that work that way.


There are. Monte Carlo methods is the keyword you're looking for. AlphaGo (Monte Carlo Tree Search) is an example of one such AI.

Obviously you can set the RNG seed to be the same every time too, but even that only works if your system is wholly synchronous, which a car probably isn't.

Note that I doubt Monte Carlo methods are common in the autonomous vehicle space.


Well I don't work in autonomous cars but I did work on several autonomous robots, and "Adaptive Monte Carlo Localization" is a common way to keep a ROS + LIDAR based robot localized.

I wouldn’t be surprised if Monte Carlo techniques are useful for all manner of things related to ingesting sensor data on autonomous vehicles.
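
For anyone curious what the Monte Carlo part looks like, here is a minimal 1-D particle filter in the spirit of (but far simpler than) ROS's amcl; the noise values and the single-landmark world are arbitrary assumptions:

    import random
    from math import exp, pi, sqrt

    # Minimal 1-D Monte Carlo localization sketch (not the ROS amcl package):
    # a cloud of particles approximates the belief over the robot's position.

    def gaussian(x, mu, sigma):
        return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

    def step(particles, control, measured_range, landmark=10.0):
        # 1. Motion update: move every particle by the commanded motion, plus noise.
        moved = [p + control + random.gauss(0, 0.1) for p in particles]
        # 2. Measurement update: weight each particle by how well it explains the
        #    measured distance to a known landmark.
        weights = [gaussian(abs(landmark - p), measured_range, 0.5) for p in moved]
        total = sum(weights) or 1e-12
        weights = [w / total for w in weights]
        # 3. Resample: draw a new particle set in proportion to the weights.
        return random.choices(moved, weights=weights, k=len(moved))

    particles = [random.uniform(0, 10) for _ in range(500)]      # uniform initial belief
    particles = step(particles, control=1.0, measured_range=7.0)
    print(sum(particles) / len(particles))                       # estimate: roughly 3.0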


I'd expect Monte Carlo methods to be used in a number of cases that have deterministic time envelopes for evaluation. Randomized selection and evaluation can be incredibly effective. They also resist degenerate structured input vulnerabilities.

I'm not in the automotive space, but I'd be surprised if there were a viable self-driving car team not using Monte Carlo methods somewhere in the vehicle stack.


Yeah, this was unfortunately worded. There are subproblems for which MC/randomized methods fit well, but in general those circumstances are well understood.


I don't think that means it's impossible. You can still log all inputs and sources of entropy.


It's possible that some of those "200 successful Autopilot trips per day on this exact stretch of road" also involve pulling toward that barrier. However, it's not likely a huge percentage, or we'd have seen that posted. So the implication is that his Tesla was relatively unique. That is, broken.


That person is likely Elon Musk who is well-known for his obstinate and even idiotic defense of Tesla against legitimate and thoughtful criticism. He has the defensiveness of Donald Trump, while even Zuckerberg admitted fault.


Didn't he sue Top Gear and lose over a bit Jeremy Clarkson did?

EDIT:

Yes he did and lost[0].

Jeremy Clarkson did another bit in The Grand Tour about the new Tesla and it was quite British[1].

[0]http://www.thedrive.com/sheetmetal/12536/remember-when-top-g...

[1]https://www.youtube.com/watch?v=_DiGQRaaHvg


This is great. Thanks for posting.


The video is funny. Re the first argument: Top Gear wrote a script with the Tesla breaking down and faked the thing, so I can see why Musk was annoyed. They lost the court case because Top Gear argued Tesla couldn't show they'd hurt sales.


It is obvious he didn't watch Top Gear; Jeremy Clarkson shat on many, many cars. Just ask Renault or Peugeot.


Well, I found Tesla's comment odd as well, but it is also odd that the driver still kept driving the same road WITH Autopilot activated even though he KNEW it was broken.

It's a strange accident, especially because of the strangeness of Tesla's response and the whole background story (the broken safety barrier). I'm looking forward to the outcome of the whole story.

In the end, I guess more than one party could've been a bad actor: Tesla, the driver himself, and even the road authorities, since the road had no kind of warning (the safety barrier was not replaced, nor was there any other warning sign that it would be replaced soon-ish).


It also means they don't have significant representation of all driving conditions on that stretch: position of the sun, overcast skies, lighting conditions at night, traffic, weather, etc. I can imagine at least a dozen very distinct conditions, and many more subtle variations. 85k isn't very large to get good coverage of all conditions and generalize from them.


When users discover a bug or issue they tend to work around it. Even as a developer, for example, if you are debugging something and stumble on another unrelated issue, you are likely to ignore it. Users very rarely report issues. Self-driving car testers should be paid per issue found, not per hour.


Misaligned sensor?


In such a case there needs to be a fail safe that can determine that and disable auto pilot until it is fixed.


As someone who writes software (haven't ever worked for Tesla), I think this statement does provide some credibility. It probably means that the visualization system does understand such scenes and is able to navigate it.

Of course, this statement by itself doesn't absolve Tesla's software of any error. It's important to know the actual root cause before saying who/what is wrong or not, something which the HN community (and in general the Internet/people) isn't good at.


> doesn't absolve Tesla's software of any error

As someone who works on software for one of the automakers, I also want to note that there seems to be a lot of focus on the software... but it's also very possible the problem was with the hardware: something in the vision or sensor system was incorrectly installed, defective, was calibrated incorrectly, lost calibration during use, etc.

I'm sure the software can always be improved, but if most cars are driving through this stretch without a problem and his always veers, that sounds like something unique to his vehicle which points to hardware in my opinion. In cars, the software can only ever be as good as the data it receives.


Precisely. Hopefully the NTSB's investigation will be thorough enough to explore the possibility of undetected manufacturing defects in the deceased's Model X when compared to other Model Xs that rolled off the line at the same time.


His car does not always veer.

He owned the car, worked at Apple, and drove the same road to work for about half a year (i.e. >100 times), and according to the article he had only about 7-10 veering incidents there. Like one or two per month.


Fair enough, and if this has only been happening in the last half year and it doesn't always happen... from my experience, I'd be looking at hardware first. It only happened the past 6 months? You mean the coldest 6 months of the year? Hmmm, is one of the sensors or its mounting hardware susceptible to cold weather? Perhaps a sensor was calibrated properly, but it's just barely within the calibrated range and cold weather is enough to push it out of calibration. Perhaps the sensor is defective and the cold is causing it to fail prematurely. I wouldn't rule out software, but this sounds more like a hardware issue to me.


Serious question, but how does working for "FAANG" companies provide any credibility in this specific context?

That must be like half the Bay Area. I get that it's written as a token of pride for you, but maybe it's worth taking a step back?


Fair enough, I added it in the flow of saying I don't work for Tesla. Removed it.


Appeal to authority ignored.

> It probably means that the visualization system does understand such scenes and is able to navigate it.

I think there is a pretty good counter example proving that it doesn't.


software guys should know better: programs will have bugs...


Perhaps whoever wrote this statement does understand software, but they also understand lying with numbers and know that the general public understands neither.


> The Tesla statement is the equivalent of "works in my laptop" at a bigger scale.

Erhem: https://i.imgur.com/sY3Eeln.png


Let's play with statistics:

Scenario 1: 100 human drivers drive some distance. 2 bad drivers caused 150 accidents and both died in their last fatal accidents.

Scenario 2: 100 self-driving/autopilot-enabled cars drive the same road for the same distance. Everyone has one accident and one of them is fatal.

Statistical numbers: Self-driving/autopilot-enabled cars cut the accident rate by a third and the death rate by half. The death rate per accident is reduced from 2/150 to 1/100.

Conclusion: self-driving/autopilot is better.


Human accidents, however, are not news.

"Tesla also posted these photos that raise another important question: they show what's called a "crash attenuator" or safety barrier in the proper condition ... and the way it was the day before Walter Huang's crash ... collapsed after a different accident."

Did you see any news coverage of the previous accident? I know I didn't.


If we're making up statistics out of thin air can we do a couple with people riding ponies too?


I agree. I find it hypocritical of Tesla to talk about numbers in the thousands in terms of Model 3 production, but treat numbers in the low hundreds as a totally sufficient metric for testing Autopilot.


One of the things you learn in flying is not to force it. If conditions are not safe, just forget about it. It almost seems the same type of judgement needs to be made for auto-pilot: if your auto-pilot acts up at all, just turn it off and don't use it until it's resolved. All you need is one incident to be dead, so if you get a chance to observe any abnormalities, consider it a blessing.

Something that will also be great will be a sort of "crash dump/bug report" button for these cars. If at any time your car does something unsafe, you can hit that button. The car will save the last 60 seconds so the manufacturer can analyze it to debug and figure out what went wrong.
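
A sketch of that "last 60 seconds" idea as a ring buffer; the 10 Hz rate and the frame fields are assumptions for illustration:

    import time
    from collections import deque

    # Keep only the most recent 60 seconds of telemetry in memory; hitting the
    # report button snapshots the buffer for upload and analysis.
    SAMPLE_HZ = 10
    BUFFER = deque(maxlen=60 * SAMPLE_HZ)  # oldest frames are dropped automatically

    def record_frame(speed_mps, steering_deg, autopilot_on):
        BUFFER.append({
            "t": time.time(),
            "speed_mps": speed_mps,
            "steering_deg": steering_deg,
            "autopilot_on": autopilot_on,
        })

    def report_bug(description):
        # Freeze a copy of the last 60 seconds at the moment the button is pressed.
        return {"description": description, "frames": list(BUFFER)}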

I was so excited about auto-pilot and dreamed of getting in my car and sleeping while it made cross-country trips. So much for that, that seems way far out.


> Something that will also be great will be a sort of "crash dump/bug report" button for these cars.

Good point, and it isn't just vehicles with auto-pilot; perhaps all vehicles should have it.

The 2017 and 2018 Chrysler Pacifica minivan kept randomly shutting off [0] and losing all electrical power at freeway speeds. You had to put the vehicle back into park, on the freeway, to restart it. There are several stories of near-death experiences trying to get to the shoulder. It was intermittent.

This is relevant because nobody could figure out why, even the manufacturer. They had to install custom self-powered diagnostic equipment in consumer's vehicles to even try and get enough data on it. Turns out that suddenly cutting power to the onboard computers doesn't make them very useful for diagnostic analysis, some dealers even acted like the problem didn't exist because the computer had no record.

They did ultimately get it fixed (replaced an engine management unit or similar) but it does go to show that modern vehicles are getting so complicated that even manufacturers are struggling to stay ahead of it. Tesla's autopilot and similar just takes that up another notch.

[0] https://www.autoblog.com/2017/11/21/chrysler-pacifica-owners...


Various sources put the current SLOC count in modern vehicles in the hundred-million range: https://skeptics.stackexchange.com/questions/39559/does-the-...

A lot of that is for the infotainment systems, but regardless, we're probably going to be seeing automakers struggle more and more with all of the problems that have plagued the software industry for its entire history.

On the one hand, I want to say that situations like this Tesla crash and your minivan example get so much attention because they're rare, so the manufacturers are mostly getting it right and we'll all be fine. On the other, knowing what I know of the software industry plus my past experience as a shade-tree mechanic, the prospect of that much code running in 65-mile-an-hour murder machines gives me the shivers.


I think this is a key point. SLOC is going way up.

It's sort of natural - it's "why not". We can afford to have more SLOC than ever before. Why bother to try to reduce it, that's not a priority, our priorities are features and fixes. It becomes too much to really understand and manage, and then it's all automated checks and automated tests.

It's very hard to get the average/common developer to spend time thinking about how we can remove stuff, clean stuff up. And managers don't want to pay them to do that. In some places it eventually becomes a big problem, and then it's time for a big rewrite. Most software is throw-away. Website front-ends, mobile device drivers, etc ... mostly thrown-away roughly every year. It's the relatively rare and stodgy classic open-source stuff that's re-used (with minor updates) for decades. OK, I'm getting off-topic ...


Exactly my thoughts. You have explained in words what I have been thinking for a very long time.


As many of the comments (but none of the top-level answers) in that Stack Exchange thread note, most of the code in critical control systems like ECUs isn't written, it's generated from verifiable models. While this doesn't lend complete confidence by any means, I think it can be expected that the defects/SLOC will be much lower and of a very different variety in this type of code than code that's written.


>> most of the code in critical control systems like ECUs isn't written, it's generated from verifiable models.

I'm not sure what "verifiable models" means here. There are a bunch of people using Simulink to "auto-generate" code for some ECUs. I've used it with great results, and I've seen others do that as well. And yet some people still create horrible things with it just like regular code. But not once did I see anything that sounded like formal verification. That doesn't mean it never happens but the way The Mathworks markets their tools, you'd think some kind of magic is done automatically.

On a related note, I always laugh when Mathworks promotes the ability to compare "simulation" results to that of the "generated code". They call it "software in the loop" testing. This is actually getting their customers to verify that the tools they provide work as advertised.

For that kind of development I'd actually prefer to just shut the simulator off and always generate code. Simulation should only ever be for plant models, not control systems you want to actually implement in software.


Isn't it possible to formally check Simulink autogenerated code using Polyspace? Or are you thinking of something else?


Oh god. You can check any C-code with Polyspace, I've done that. Most of the warnings Polyspace generates (at the time I did it) are not more significant than style - like checking for MISRA compliance doesn't guarantee a lack of bugs.

To me formal verification has always meant proving correctness in a mathematical sense against some sort of formal specification.

None of that really matters though because AFAIK you can't prove a neural net has been trained "correctly" and doesn't contain errors.


"None of that really matters though because AFAIK you can't prove a neural net has been trained "correctly" and doesn't contain errors."

Progress is being made in this area. Here is an "old" paper: https://arxiv.org/abs/1702.01135


Halting problem is even more general.


> I was so excited about auto-pilot and dreamed of getting in my car and sleeping while it made cross-country trips. So much for that, that seems way far out.

This is more a hijack than a direct critique but I think the general issue is the assumption that we need a private vehicle for that.

Trains are perfectly capable for bringing me from A to B while I am sleeping for years now.

Edit:// with years I mean years. Before that pressure made longer rides usually more uncomfortable than they are now with modern trains. Probably only because my area has plenty of mountain tho.


In the US you will have lots of time to sleep due to waiting for an oncoming train to clear.

In most parts of the country we don't have dedicated passenger tracks. Amtrak (our passenger line) just rents track time on the freight lines. The problem is that freight lines are often single-track lines, so if there is oncoming freight you just wait until the freight clears. Sometimes this is hours.

Until we have dedicated passenger lines, long-distance train travel is a non-starter here. It was too slow for my 75-year-old uncle on a sightseeing tour. His exact words were "fun, but never again".


My last Amtrak ride was my very last for this reason. My train was delayed four and a half hours!


I've ridden a lot of Amtrak and it's never been on a single track line. Are you sure that's an actual problem? Long delays, sure, but is that the reason? Seems to usually be train equipment failure.


Having ridden a lot of Amtrak almost makes it less likely that you'd have experienced the single track problem, since quality rail experiences in the US are pretty concentrated such that someone who has good train experiences probably lives/works in an area where that is the rule. On the East coast, pretty good service arguably extends as far south as Richmond, but past that sitting on the tracks is very common. And with that you get the classic downward spiral of crappy transit. Ride a train in SC or GA, and you'll see that anyone with the money for a plane ticket has abandoned rail there.


I mostly rode the train from SC to Boston and back (Southern Crescent) ... it's dual track all of the way. Also DC to Chicago, southern route, dual track all of the way.


I love Amtrak and am willing to put up with the delays, but DC to Chicago stop all the time, especially in the mountains, to wait for higher priority trains to pass. Sometimes it's only for a few minutes at a time, but sometimes it adds up to hours.


Nah, I love it.


Agreed entirely, in some parts of the world you can even drive your car onto a train, have a nice meal, watch a movie and sleep in a luxurious cabin. Wake up the next morning and pick-up your car.

I think demand for trains has diminished, which is why they're not as nice as they could be; however, I found plenty of nice trains on offer in Europe and Asia.


In Asia and Europe it has only kept developing ever since, from my point of view.


Not in the United States. Rail coverage is terrible. If you can get to your destination, good luck doing that without switching trains.


To be fair, I have used trains on every continent other than America. My comment is a passive critique of anywhere that hasn't been able to build such a transport system yet.


The US was able to build (and had) a passenger train network. It didn't stay updated because cargo is a much more profitable business with professional customers. There is not one aspect of passenger travel that is a bonus over cargo for rail companies. As others have pointed out, you get to optimize the rails for either cargo or passengers.


You're almost certainly from an area with a population density much higher than the US.


We need IoTrains


>Trains are perfectly capable for bringing me from A to B while I am sleeping for years now.

For a very limited subset of points A and B, though...


Depends where you live. I had no issues in Europe or Asia so far.


Even in Europe to get from A to B you need to switch trains super often. Like in France you almost always need to go to Paris first before going somewhere else because the network is so centralized around Paris.


But even then you can get to Paris pretty fast. I have had similar problems where I had to go to Karlsruhe from Flensburg to get a train to Paris, as that was the fastest route.


To be fair that's not really a reasonable train trip for people that want to actually get there as opposed to riding a train. Probably above 9 hours compared to a sub-2h flight. Train routes that are more meaningful tend to be better serviced.


I regularly take a train that takes 6.5 hours. The flight would be 45 minutes.

Getting to, into, and away from the airport easily adds another 2 hours though. Also, this route has 3 flights a week compared to a train every 2 hours.

With the train I just get in and enjoy nice service, free power and WiFi and usually get some work done.

Not to mention the difference in my ecological footprint.

My point is that it's not like you are usually wasting time in an uncomfortable environment. So I don't mind taking the train; actually I usually even prefer it, and so do millions of people all over the world.


I'm a big train fan and do impractical train rides quite often. I don't think millions of people share our appreciation of trains. Most people use whatever is faster, or cheaper. Trains over 3 hours tend to be neither. Paris to Flensburg is both slow and expensive. By most people's standards, the 6h additional duration of the trip offsets the airport annoyance by far.


> Not to mention the difference in my ecological footprint.

It's only as clean as the way the electricity is produced. In many countries electricity production relies on coal, petrol or gas, which are far from clean energy sources.


LA to NYC is the distance of London to Poti, Georgia.


Or Shenyang to Kunming, which takes 18 hours on a train. But what's your point? Trains aren't meant to replace cross-continental flights. It would be better to compare regional routes like LA - SF, which is super inconvenient.


> If at any time your car does something unsafe, you can hit that button.

Tesla actually has that, it's not widely documented but you can do a voice command of "Report bug" followed by a brief audio description and it's uploaded to Tesla.

No idea how it gets triaged but I wish more products had a similar feature.


Really they should have some kind of flag when they get a high number of autopilot disconnects (or steering overrides) happening in the same place. I had to disconnect every day because it ignored temporary lane stripe markings, and it never improves, despite Tesla's claim of fleet learning.

Seems an obvious way to improve: look at where your drivers are overriding your system.
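
A sketch of that kind of aggregation: bucket disengagement/override events onto a coarse location grid and surface recurring hotspots. The event format, grid size, and threshold are assumptions, not anything Tesla actually does:

    from collections import Counter

    # Rounding lat/lon to 3 decimal places gives roughly 100 m grid cells.
    def cell(lat, lon, precision=3):
        return (round(lat, precision), round(lon, precision))

    def hotspots(events, min_count=20):
        counts = Counter(
            cell(e["lat"], e["lon"])
            for e in events
            if e["type"] in ("disengage", "steering_override")
        )
        # Cells where drivers repeatedly take over are worth a human review,
        # e.g. the temporary lane striping mentioned above.
        return [(c, n) for c, n in counts.most_common() if n >= min_count]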


That's disappointing - I'd assumed that every single disengagement would have been logged and uploaded for analysis and use as a regression test. That's what I'd be doing.


I have reported issues using that and also emailed their feedback list, but of course they never follow up to say that anything was updated to fix it. For example, it's still unsafe in carpool lanes with bikers lane-splitting, which is legal in CA.


Whoa whoa whoa, "auto-pilot" is NOT self-driving. Tesla goes to great pains to make auto-pilot sound like self-driving while still stringently claiming that it requires constant supervision, like an airplane auto-pilot.


Yes, but my friends that own Teslas actually believe that self-driving cars are almost here.

When I say that it's at least ten years away, and likely much longer because it will take big breakthroughs (strong AI, lots of knowledge about the world, etc.), they look at me like I'm crazy.

I really hope it happens in my lifetime, but I remember talking about Greenblatt's chess program (see [1]) when I was at MIT in the '60s. The AI folks really thought that a chess program would be world champion within 10 years. They finally did it, but it took more like 30 years, and one of the reasons they were successful was that Moore's law eventually allowed a supercomputer (Deep Blue, at the time the 259th most powerful computer in the world) to evaluate 200 million board positions per second [2]. Deep Blue's opponent, world champion Garry Kasparov, was most definitely not evaluating 200 million board positions per second.

I see driving as a much more complex task than playing chess, so maybe 30 years before we will put our children in cars to be driven across town. I will be happily surprised if it arrives sooner.

[1] https://dl.acm.org/citation.cfm?id=1465715

[2] https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)


Deep Blue was actually a team of chess and CS experts assisted by a supercomputer that beat Kasparov. They were tuning parameters between games.


More like a supercomputer with a coaching team then?


Tesla's current autopilots are pretty far off, but I think Waymo and possibly others are much closer.


We'll see about that when the time comes to test self-driving cars during an actual winter with snow, ice, black ice, slush, cars being parked 6" to a foot further into the street, et cetera. Driving around this winter in the upper midwest made me realize just how idealistic (and silly) people that think self-driving cars are 5 years away are.


Nah, still 30 years away. My guess is aligned with the OP.


Waymo is already running fully autonomous cars in Arizona without backup drivers.

It may take 30 years to handle blizzards and every edge case but you can just park the car to avoid those problems temporarily while still serving 99% of the market so they don't need to be solved now.


They scam people into buying a Tesla thinking their version is close to self-driving, right here: https://www.tesla.com/autopilot . The reality is that since Oct '16 it's nothing more than glorified adaptive cruise control found on many common cars.

Update: Since Oct '16, self-driving software has been an add-on option, and Musk has been tweeting [1] since Jan '17 that it's just months away and that Autopilot is somehow safer than adaptive cruise control. This Oct '16 video even starts with the claim that the driver is only there for legal reasons.

There has been a lawsuit since Apr '17 regarding the above - Google "Dean Sheikh et al v. Tesla, Inc."

[1] https://twitter.com/elonmusk/status/823632597284691969 " Yes, safety should improve significantly due to autonomy features, even if regs disallow no driver present 23 Jan 2017"

3 months maybe, 6 months definitely 23 Jan 2017


Before clicking that link, I assumed they were very careful about drawing the distinction between 'self-driving' and 'auto-pilot'.

They are not careful at all. The first text on that page says "Full Self-Driving Hardware on All Cars"

You have to be moderately savvy to realize that you'd also need this to be coupled with "self-driving software", which the Tesla does not have.


I don't know why you are getting downvoted. All you have said is true. Maybe, people don't like the word 'scam', but essentially they are trying to sell more cars by bamboozling people with technology that is years off.


It is this reckless marketing. 'Bio defense system' - are we meant to believe the Tesla's air conditioning will keep out VX gas?

By doing this they are, in my opinion, sabotaging the innovations they are actually making.


'I was so excited about auto-pilot and dreamed of getting in my car and sleeping while it made cross-country trips. So much for that, that seems way far out.'

My only experience of automated driving is the feature where you can set a maximum speed. This is useful in the UK since average speed cameras are commonplace on the motorway network nowadays and it is easy for your concentration to lapse and drift over the current limit. The cameras are automated and there isn't much leeway.

However, the speed limits often change between 50, 60, and 70 mph depending on local conditions and roadworks, so you can never just 'set and forget'; you need to be alert all of the time since speed limits can change frequently.

It's going to be a long time before I trust a car to do this on its own without my supervision and, as regards going to sleep and letting the car do 'everything' on its own, I don't envisage being able to do this comfortably in my lifetime.


I rented a car that had active cruise control and lane assist. I drove most of the way from Oakland to San Jose without touching any of the controls, except maybe the accelerator once or twice. But? I got to my destination exhausted and in a little pain because I had sat tensed up, ready to take the controls again at a moment's notice. Partly that's because these features are not intended to be a complete self-driving solution. But partly it's exactly what you said - I wouldn't trust it absolutely anyway and would want to stay alert. It's easy for me to stay alert when I'm driving and not just supervising.


If you just need to follow speed limits, Tesla can do it perfectly fine. It knows speed limits for most roads. It would fail if the speed limit is due to construction or speed trap :-)


Great point. What's the over-under on a road-worker death when autopilot plows through a confusingly-marked construction zone at 70MPH?


> I was so excited about auto-pilot and dreamed of getting in my car and sleeping while it made cross-country trips. So much for that, that seems way far out.

If it's any consolation that's not how autopilot on an airplane works. Pilots are expected to remain alert and ready to intervene in case of trouble.


> The car will save the last 60 seconds so the manufacturer can analyze it to debug and figure out what went wrong.

Ideally, this would be sent to the NTSB as well to determine if they need to issue a recall and possibly also shared with the public if the car's owner is okay with that.

Issuing a bug report to the manufacturer is fine if the manufacturer is being responsive, but fixes take time and large companies don't always put a high priority on fixing bugs. Sharing bug information more widely increases their accountability and I think potential buyers have a right to know about safety-critical flaws before they buy a car.


How can one sleep in an autonomous car?

For babies, a car is autonomous already. They have to be strapped in pretty tight in a somewhat reclined position.

Do you mean sleeping sitting up, just like passengers in a car kinda can? I guess people are able to sleep that way. I for one have to be totally exhausted to fall asleep sitting.

One certainly can't hope to sleep lying on their side under a blanket. Self-driving can't ensure that the car will never experience sudden deceleration. For the same reason you will not see seats facing the back of the car: you could be killed by your own or another passenger's laptop, or burned by your hot coffee.


>> One of the things you learn in flying is not to force it. If conditions are not safe. Just forget about it.

I wonder if we would have had safe planes if that was the mentality when they were first invented.


That's existed for well over a century: trains.


I can't believe this person kept using it. If I had noticed a bug in auto-pilot and complained about it, I would be way too scared to ever use auto-pilot again. Personally, I never use auto-pilot because driving is piss easy, as it's designed to be.

Perfect self-driving is a nearly impossible feat to accomplish on an unbounded track. I can only imagine automated driving in a system which has no room for error. Examples include: tunnels under the ground, chain links on the ground (as in trolleys, trains, etc.), or anything else that vastly reduces the entropy involved in driving.

With self-driving cars on current roads, it will probably take years to get from 1% error to .1% error, and decades to get from .1% error to .01% error, which isn't even good enough. Perhaps it will take a century or longer to develop the required artificial intelligence to make self-driving cars perfect "enough". There's just too much room for unique problems to spawn. Bounding vehicle freedom seems to be the only way forward.


Your numbers about error percentiles don't make sense. Ideally, you'd want an outcome measure like fatalities per million miles, accidents per 100k miles, not "% error" which is vague.

Furthermore, look at the actual data we have right now. SDC makers actually put out data in California about their "disengagement rate", which is how many times the human drivers took over from the software. Waymo has steadily improved that rate over the past few years; now they are driving many hours without disengagements. Look at the link below, page 4: you'll see they have 63 disengagements over 350k miles. That's 1 per 5.5k miles, so these cars are driving for days without a human takeover.

They will not need their own infrastructure; that would not be economically viable. They will go on the roads we have or they won't go at all. Tunnels are going to be reserved for high-density point-to-point travel, if the Boring Company or others ever get scale...


Then let's add some perspective. You must be referring to this [0] paper. If the average person puts on 1,000 miles per month [1], then that means they'd have to deal with a disengagement (a mishap) at least twice a year, which is not acceptable for fully autonomous driving. I'm going to define a "fully autonomous vehicle" as "a vehicle which should not ever require me to sit in the front seat and control it under any conceivable circumstance".

Put differently, I should be able to lie down for a nap in the back seat and wake up at my destination without any chance of a disengagement during my entire lifetime. At the current rate of 1 mishap per 5,500 miles, I would be dead after about 6 months.

Assuming a human lives to 75 years (we should really be using 75 years minus 16 years, but it's unimportant), a lifetime of driving is about 1,000 mi/mo x 12 mo/yr x 75 yr = 900,000 miles. I don't even want the probability of encountering a mishap to be once per lifetime, let alone once per 6 months. One mishap per 900,000 miles isn't enough, because, on average, I'd encounter one disengagement in my lifetime. Assuming we're striving for a world where 7 billion people can drive without a single incident in 75 years (a vast underestimate), we need the probability of a mishap to be less than once per 7,000,000,000 humans x 900,000 mi/human = 6.3 x 10^15 miles.

1/5.5e3 is not even close to 1/6.3e15. We're talking about 12 orders of magnitude in our error rate. I'd say we're laughably far away from our goal. We've got a long way to go.

[0] https://www.dmv.ca.gov/portal/wcm/connect/42aff875-7ab1-4115...

[1] https://www.fool.com/investing/general/2015/01/25/the-averag...
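
A rough sketch of the arithmetic above (the 1,000 mi/month, 75-year and 7-billion-driver figures are the assumptions from this comment, and, as above, every disengagement is treated as an incident):

    # Back-of-the-envelope check of the numbers in this comment.
    # Assumptions (from the comment, not measured data): 1,000 mi/month,
    # a 75-year driving lifetime, 7 billion drivers, and the DMV-report
    # rate of 63 disengagements in ~350,000 miles.
    miles_per_disengagement = 350_000 / 63           # ~5,555 miles
    lifetime_miles = 1_000 * 12 * 75                 # 900,000 miles per person
    target_miles = 7_000_000_000 * lifetime_miles    # 6.3e15 miles for zero incidents
    gap = target_miles / miles_per_disengagement
    print(f"{miles_per_disengagement:.0f} {lifetime_miles} {target_miles:.1e} {gap:.0e}")
    # -> 5556 900000 6.3e+15 1e+12  (roughly 12 orders of magnitude)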


> I don't even want the probability of encountering a mishap to be once per lifetime,

This doesn't seem reasonable - Waymo's report doesn't go into enough depth about each disengagement to justify demanding that sort of extreme reliability.

If "2 disengagements" per year, were at most fenderbenders - something I'd wager humans do way more than twice per year - that would be a very different story than if those 2 disengagements were life threatening. Sure you'd wake up from your nap, but you wouldn't be dead, and at most you'd have to exchange insurance information.


> This report covers disengagements following the California DMV definition, which means "a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.”

So, you're right, there's no clear distinction, but I would further argue that it doesn't matter. Even if only 1/1000 disengagements are fatal, my conclusion remains the same. I think we're splitting hairs at this point, though.

Even if not fatal, I highly doubt a significant fraction of such events (as defined above) would allow me to take a nap upon departure and wake up at my destination, so it would still be unacceptable to me. I guess we have to agree on what an acceptable end-game for fully autonomous vehicles is. If you think "waking up on the shoulder exchanging insurance" is acceptable, then that would indeed change the numbers (but by how much? Two, maybe four orders of magnitude?).

Humans get into fender-benders all the time, but surely we'd strive to eradicate this inefficiency in the automated driver. I think this is still an active area of debate; some assembly-line work can be made more efficient with machines, but we've seen humans out-perform machines in other types of work. I think driving tends to utilize more reactive, intuitive "System 1" thinking[0], so I imagine that humans will be vastly better than machines at driving for a very long time.

[0] https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow


Are you kidding? A person averages two fender-benders per year?

And no, it's not acceptable by any stretch. A disengagement is extremely dangerous to drivers who are lulled into a false sense of security by the marketing of... self-driving cars.


I feel it is ludicrous to worry about getting to a stage where nobody ever has any type of accident.

I'd like to know what the actual rate would be; the 5k-mile figure reflects conservative thresholds for when control should be handed over, and many problems may be safely solvable (if annoying) by simply coming to a halt or pulling over.

No, the current setup is not OK to release on a large scale, but it's not expected to be, and we're not 12 orders of magnitude from a reasonable point.


That's actually pretty damn terrible. 1 disengagement per 5.5k miles is roughly 1 accident per 5.5k miles if the driver is not paying attention. Look at the Uber accident cam video pointed at the driver. That is the attention level of drivers in so-called "self-driving cars."


These cars have had far more disengagements than accidents. Why are you assuming that one disengagement is equal to one accident?


Because that's exactly what you would expect from an alert safety driver - to take control and prevent an accident when the car makes a mistake.


I’m honestly surprised this is even allowed to operate. I doubt that this is increasing safety in any meaningful way.


And even if he kept using it in general, why use it where it had acted up several times? Why weren't his hands on the wheel, and why wasn't he extra careful, here?


Based on Tesla's own explanation, there is zero reason not to use the autopilot at that location.


The path into the barrier looks a lot like a lane.

https://imgur.com/a/iMY1x

And the old striping is lightly visible as well.


Relying on paint quality and accuracy is going to hurt autopilot systems. Cities and car companies aren't prepared to work together to solve that, and there probably isn't a fast and easy solution to it.


Looks like that would trick a human driver under certain lighting conditions. Between Seattle and Bellevue a stretch of freeway looks just like that and trips me up every time.


Either the same place or a nearby one (same pair of highways) did trick a human driver two years ago: https://www.ntsb.gov/investigations/AccidentReports/Reports/...

"[...] when it entered and traveled in an unmarked gore area, rather than the intended high-occupancy-vehicle (HOV) lane, and collided with a crash attenuator. The 990-foot-long gore, with an unmarked inside area, separates the left exit HOV lane for State Route 85 from the US-101 HOV lane."


Not the same place nor nearby. From the picture on page 24 it's clear that this is the 85/101 interchange in south San Jose, about 20 miles away from the other 85/101 junction where the Tesla accident happened.

The type of accident does seem comparable though.


From the article, it sounds like people crash pretty regularly in exactly that spot.


I'm amazed that the barrier doesn't have yellow reflective striping or any of those sand barrel thingies.


The barrier does have a crumpling structure but someone else had collided with it shortly before the Tesla crash and it hadn't been replaced yet.


Investigating further, it looks like that barrier had been down for over 7 days (they say due to weather) [1].

Looking at the image and estimating the traffic flow, it's simply amazing that CalTrans is allowed to take a week to fix such things. How long does it take to put up some sand barrels? The situation was made more dangerous because, while many such barriers are used at freeway exits, left-lane speeds can be much higher. 7 days plus weather is a long time to fix something like this (some states require 3 days).

I get the feeling that Tesla/Waymo and other SDC manufacturers will be more proactive in reporting unsafe conditions as a priority to fix. Such a partnership could improve safety for everyone.

[1] http://abc7news.com/automotive/exclusive-i-team-investigates...


Would a LIDAR have correctly identified the barrier? Maybe Elon should rethink LIDARs. https://www.theverge.com/2018/2/7/16988628/elon-musk-lidar-s...


That's crazy bad. This XKCD comes to mind: https://xkcd.com/1958/

It's like they painted traffic lines to guide vehicles into barriers, and then didn't replace the attenuator after a human driver made the same mistake the Tesla autopilot would make a week later.

https://techcrunch.com/wp-content/uploads/2018/03/screen-sho...


So all a prankster would have to do is curve a lane line into a wall and game over?


But it's a solid white line, so you can't cross it anyway. Unless there's a place where the line is dashed earlier on, there's no way you can enter that "lane".


Is it a US thing to not have hatched road markings?


If he had seen this issue multiple times before why would he keep using autopilot in that area? That seems like a very odd decision.


Maybe Tesla told him that the issue was fixed and he could use it again. Why do people trust autopilot at such high speeds? It is still new technology. Let the autopilot park your car - if it makes a mistake, the worst that can happen is a few scratches - then in 5-10 years use autopilot on low-traffic streets, and only then move to the highway. You can't expect new technology to be safe.


> You can't expect new technology to be safe.

Maybe I’m old fashioned, but I expect all of the technology in my mass-produced car to be safe, where safe is defined as “will not steer my vehicle into a barrier at freeway speeds.”


Great, bet your life on it then.

I don't understand most people's perspective on this. It's like walking out in a crosswalk without looking; sure if you get hit the driver may go to jail, but you'll still be dead (or seriously injured).

Use your own brain; you can't rely on others to keep you alive. At the end of the day you're the one who's got the most to lose, always.


He took the car to the shop and Tesla said "Works for me, can't repro".

> Let the autopilot park your car, if it makes a mistake the only bad thing that can happen is just a few scratches

Or it can drive over a toddler or pin someone against a wall until they die.


> Or it can drive over a toddler or pin someone against a wall until they die.

There might be situations where this could happen, but humans have such a bad track record when it comes to driving/parking/paying attention in general that I'd trust a good autopilot over the average person. There will be bugs, but at least we're able to work on them. Improving human driving skills seems unfeasible with any other current technology.


If it’s not safe you shouldn’t be selling it.


Motor vehicles in general are not completely safe. It's a question of where you draw the line.


I imagine Tesla will make this argument in court. California is a comparative negligence state, so they can claim that some percentage of the negligence should be apportioned to him since he knew that the autopilot didn’t work great in this location.

I’m not saying this is a winning argument, especially since he did so much to bring it to their attention, but just that Tesla would be expected to raise it.

Of course, they’ll probably try to settle this quietly and hope that everyone goes back to worrying about the Uber self-driving fatality...


And CalTrans for not fixing the crash attenuator (the man did not die instantly, so if the crash attenuator had been intact, he likely would've survived). And for making a confusing lane-like path right into the crash site. https://imgur.com/a/iMY1x

I mean, the crash attenuator had recently been used up, but the dangerous striping that led to the earlier crash which destroyed the attenuator had not been fixed.


There is no evidence autopilot was engaged in this crash yet.



True, but let's not blame the victim.


The operator of a vehicle is responsible for operating it.

Somewhere between 7-10 iterations of the same dangerous failure mode, it's time to stop betting your life that the failure won't repeat.

> Walter complained "7-10 times the car would swivel toward that same exact barrier during auto-pilot."

This still doesn't diminish Tesla's obligation to make their autopilot system behave properly.


The lesson here is that Tesla does not take complaints seriously and someone died as a result. It sounds just as likely to me that another person using autopilot would have had the same issue at that location.


It's too early to draw that conclusion. Note that the claims in the article are coming from the family and haven't been corroborated by Tesla yet. It's possible they are incorrect about aspects of the story they're telling for various reasons.


>>The lesson here is that Tesla does not take complaints seriously and someone died as a result.

That may be what you extracted from this incident, but it's definitely not "the lesson".


Or they had a team investigating why this combination of road features caused problems.


While that might be the case, I'm not sure we have enough information yet to assume that's true.


I think the point is that the guy passed away, and we gain nothing by asserting (based on our limited knowledge) that he lacked common sense.

Tesla should improve its systems regardless of whether it was the first such error, the seventh, or the tenth.


It’s a mitigating factor in Tesla’s favor.

But mostly... I just don’t understand it. It seems like such an odd decision by the driver I don’t know what to make of it.

It’s very unfortunate that he died, but that doesn’t mean we can’t ask questions/criticize what may have gone on.


Holding the passenger responsible for what the car does sure won't be good for adoption.


This is some intense libertarian leaning. Tesla of course is responsible. Their marketing is full-bore "Welcome to self-driving car country." They're selling the feature, they are responsible.


"The operator of a vehicle is responsible for operating it."

But he wasn't operating it.


But he should have been. Tesla's cars are not self-driving. The automated driving features are supposed to assist the driver, not replace them.


Yeah! Isn't it obvious that a feature named "autopilot" means you have to watch it like a hawk, keep your eyes on the road and your hands on the wheel?


Yes, it is obvious. What do you think a plane pilot does when autopilot is on? He watches to make sure everything is alright, and intervenes when necessary.


Not really. He sure isn't expected to be able to jump in with less than a second's notice to override what autopilot is doing. And he typically isn't there to make sure the autopilot isn't doing something stupid. He's there to watch for conditions that the autopilot can't handle...and those typically have plenty of warning time in a plane. And a pilot is a trained professional.

The expectation that the driver will jump in to prevent a car's autopilot from steering the car off the road into an obstruction is ridiculous. It's one thing to expect the driver to stop for on-the-actual-road dangers. This wasn't.

We don't really know what happened yet, but if it was the autopilot and it did steer into the barrier, this is at least 95% Tesla's fault.


When a plane gets a ground-proximity alert, the pilot is supposed to act immediately to prevent a crash. He cannot be absent from the cockpit and he cannot finish the level of the game on his smartphone first. The same goes for a car that is not sold as fully autonomous: if you are in the driver's seat, you are responsible for making the final decisions on steering and braking, not the technology. The technology may help you and prevent mistakes, but it was never promised or advertised that it would help in 100% of cases.

As we see in the latest Tesla report, there were multiple warnings to which the driver should have responded. This means the problem was detected with enough time to react, and the autopilot was not able to solve it on its own. Thus it does not make sense to blame the autopilot - the technology behaved as designed (when a problem cannot be solved, report it to the owner as soon as possible so that he can intervene).

What Tesla can actually be blamed for is drivers' misunderstanding of the autopilot's capabilities. It's not an algorithmic bug, but rather a usability one. Perhaps the warning should be issued earlier or in a clearer way. Perhaps drivers should receive some training to act as supervisors of the system, not as passengers, or there should even be a special driving license for such types of cars.

Nevertheless, the main fault was neither the driver's nor Tesla's. The safety of this road should be guaranteed by the authorities in charge, and the markings and signage should be made sufficient for drivers; they clearly failed to do so.


Wow you did that investigation quickly. You should let the NTSB know your results.

Seriously though....I doubt it. The report said he was told to put his hands back on the wheel "earlier in the drive". I seriously doubt he would have had time to react in this case. This is not the same as a plane...there was no "ground alert" or anything like it. From what I can tell, it was doing its job then steered right into a wall. That sort of thing simply doesn't happen with a plane....when autopilot is active on a plane, there is never a situation that requires instantly noticing something is wrong, and correcting it, within a few seconds. Never.

And usability bugs, if that is what you want to call this, are indeed Tesla's fault. They, not the driver, have the resources to test and understand how human attention works (i.e. "vigilance deficit" / "handoff problem")


>What do you think a plane pilot does when autopilot is on?

In a commercial jet - Goes to the toilet? Eats lunch? Chats to the co-pilot?

Sure, none of those might be right, but I'd warrant that a lot of people would answer with things that move the pilot's concentration away from piloting, "because the auto-pilot is doing the flying". Honestly, I'd assume pilots do any of these things, with the proviso that the co-pilot stays in the cockpit when the pilot leaves.


Should be. Obviously people get their notion of "autopilot" from science fiction, not from aviation.


Which is why I’ve always maintained it was a very misleading name and would lead to dangerous accidents. Which it seems to have on a number of occasions.

Of course people may have abused the system if it had a more sensible name anyway. But names competitors use like Automatic Cruise Control or SuperCruise seem much more descriptive of the system’s actual abilities.


It’s a car, not a plane. Why is the definition from aviation more relevant than the definition from science fiction (which also happens to be closer to the plain English understanding of the phrase)?


Because the technology is more similar to that of aviation than science fiction.


Because it's science fiction. As in, "this is not real." But well, I guess that if Moon landing was fake and wrestling is real, then science fiction is also real ;) (In other words, that the perception matters more than reality)


Airplane pilots are supposed to do that, no?


I think most of us would be more keen on not dying regardless of who is legally responsible


If a Tesla on autopilot hit your car, would you be happy with an outcome in which the driver did not accept any blame?

Would it make a difference if you learned that the driver knew there were problems with autopilot and was using it anyway?


Well he should have been considering he was the driver...


That's entirely speculation at this point. Tesla are the only ones who could confirm or deny that, and it's possible that they will never be able to determine it, depending on the condition of the computer hardware.


I'd definitely blame an airline pilot who crashed because he relied on autopilot in a situation he knew was not handled well by the autopilot. (I would also blame the autopilot.) I don't see why the driver of a car with any kind of semi-automated system should be held to a lesser standard.

It sucks that the driver died, and it sucks that the Tesla autopilot system had problems handing that kind of situation, but that does not mean the driver is blameless. He put himself and the people in the cars around him at risk by using the autopilot feature on a stretch of road where he knew it did not work well.


> I don't see why the driver of a car with any kind of semi-automated system should be held to a lesser standard.

Because the driver didn't receive any training from the manufacturer. Airplane pilots, in contrast, receive a ton of training, right down to how to fly a specific type of aircraft (single-engine, twin-engine, instrument flying, etc.). Additionally, the manufacturer will provide training to pilots on how to operate any nifty features of the commercial aircraft.

I don't believe Tesla provides any training whatsoever on how to use these features. And I'm not aware of any mechanisms preventing untrained users from activating these features. A tutorial that you can click through does not count because you do not ensure rapport with the trainee like you would in a person-to-person training.

Back when cars were being commoditized the dealer would often provide training to new drivers. And in all states new drivers are required to take a practical test to demonstrate that they are competent to drive. Does Tesla require their users to prove any sort of understanding or competence before they unlock Autopilot?

You might argue that requiring training sets a dangerous precedent, but users need to be made aware that the driver assistance systems are not foolproof, and the only foolproof way to do that is to require them to attend a training.


Exactly.

Tesla is beyond irresponsible with this and IMO they should be sued out of existence.

It isn't a new feature on a cell phone that you just watch a youtube video on and move on with your day.


This actually happened. A glitch with the 737-800's [0] radar altimeter caused the aircraft to go into flare (touchdown) mode at altitude, resulting in the jet basically dropping out of the sky as airspeed rapidly decayed.

They should have been able to fly manually and safely land with a faulty radar altimeter. It is likely the crew didn't understand the significance of the fault, even though Boeing had issued previous warnings.

[0] https://en.wikipedia.org/wiki/Turkish_Airlines_Flight_1951


Pilots have extensive aviation training, the general population thinks of "autopilot" as a thing that will automatically pilot your vehicle. What did Tesla think would happen?


Are drivers trained to use these semi-automated systems?


They typically are blamed.


If a plane crashes, the survival rate is pretty low. Most likely everyone on the plane is dead. People get into car accidents often enough to feel that crashing one's car rarely leads to death. Walter probably felt safe enough to use autopilot knowing that if the car crashed, he could still walk away from it.


Completely untrue. The survival rate for plane crashes is extremely high.[1]

1.https://www.ntsb.gov/news/press-releases/Pages/NTSB_releases...


Note the study refers to “airplane accidents,” not “plane crashes.” I suspect that the vast majority of the airplane accidents included in the denominator wouldn’t be described by the vast majority of people as “plane crashes.”


The NTSB doesn't split aircraft incidents into "accidents" and "crashes." Note that the report specifically mentions TWA Flight 800 as an accident, though the loss of life on that flight was 100%.


Most of those crashes probably on the ground though?


I honestly worried about that when writing the comment. The car should have avoided it (based on Tesla’s usual claims).

But Tesla and everyone else says the owner is still ultimately responsible because AutoPilot isn’t a 100% situation.

If my car did something funky like randomly accelerate or turn hard at a given intersection and the manufacturer refused to fix it I’d stop driving through there.

Why push your luck?

I truly don’t like blaming the victim. I’ve been very critical of Tesla lately for their claims and safety issues.

But I don’t understand this man’s decision at all. It doesn’t seem reasonable.

I wish I knew why he kept using the system in that area.

The ONLY idea I can think of is that when the car got an update he would try again to see if it was fixed, and one of those tries was sadly the last.


If your car randomly accelerated you would "stop driving through there"? What does that mean? You wouldn't file a class-action lawsuit like Toyota was hit with a few years back? That's faulty engineering. That's a manufacturer's defect. That's why we have regulations to protect the "huddled masses" such as yourself.


We adapt to weird behavior all the time. I once found my brake pedal went right to the floor while driving. Then it fixed itself. I drove carefully after that but still kept using it. The other day my engine started revving up uncontrollably until I pulled over and turned it off. It's been fine since. I still drive it. Would you blame someone if their check engine light was on or the steering felt a bit wobbly?


100% serious answer- if what you are saying is true, and you do not immediately stop driving your car on public roads and take it to a mechanic for a thorough inspection, you are toying with both your own and other people's lives. Cars do not fix themselves, and brakes that drop to the floor are seriously compromised. The next time that happens, there may not be enough fluid left in the reservoir to repressurize the system.

As for blame, the check engine light is for emissions gear, and will not affect safe vehicle operation. A wobbly front end, however, can indicate a serious problem, and yes, it is the driver's responsibility to operate a safe vehicle on public roads, and it would in fact be their fault if something happened because they did not perform proper maintenance.


A pedal that went to the floor and then fixed itself probably means you lost a brake pad and now you're using the piston/caliper to stop (it should make a grinding noise, and your rotor/piston/caliper won't last long).

I'd get this fixed, if I'm right you're going to lose half the braking power of your car soon. (Pretty much all cars have a dual braking system, if shit happens you lose one front and one rear brake but not everything)


That would make a hell of a racket. They more likely boiled their brake fluid or something, possibly due to a stuck caliper.


Please get your vehicle looked at!


> True, but let's not blame the victim.

I don't even think that even applies in this case.

If TFA is correct and there is a problem in this particular spot and you still rely on autopilot then who is left to blame except the person who entrusted their life to a system known to be faulty?


He wouldn't be a victim in that case. If he had killed someone else in addition to himself, I'm sure you would be more keen on holding him responsible.


Still a victim even if he made a bad decision. Dude died.


There are definitely times when the victim is to blame - in this case I think it's too early to assign blame to anyone (or anything?) so if your warning was against speculating on the cause of the crash, I'm in agreement.


There's an old rhyme about that:

"He was right, dead right, as he drove along, but he's just as dead as if he were wrong."


Rightness is defined with respect to a system of rules. If being "right" leaves you dead, were you really right? Maybe in a legal sense, but is that the system that matters?


It's the one that matters for the rest of us who aren't dead!


It's precisely because you're not dead yet that you should act rightly according to the correctly chosen system.

I'm a biker. I can drive, but only ever drive rentals; motorcycles are my primary mode.

A sense of righteousness for being in the right on a bike will get you killed. You ride within the margin of not just your own error, but also those of other road users, and your own machine, with your best judgement and risk preference tradeoff.

If your machine has a known tendency to do something bad in some situation, you avoid that situation, even if you have someone else to blame. Blame doesn't keep you alive.


"A sense of righteousness for being in the right on a bike will get you killed." Well put. When I was a boy growing up in Milwaukee in the 50s/60s, there was a public service announcement on TV for the Wisconsin Dept. of Motor Vehicles that ended "Don't be dead right." I liked it then and find its lesson has stayed with me and helped me avoid trouble many times outside the driving space.


It's not blame, he was simply dealing with a bug that was hard to reproduce in a production environment.


That seems odd to me. If it happens so much how hard is it to have techs just drive a car through the area repeatedly on various days to get sensor data and reproduce it?

If they only tried at the dealership and not the actual place... that seems like a big mistake on their part.


https://www.cnbc.com/2018/01/31/apples-steve-wozniak-doesnt-...

"Man you have got to be ready — it makes mistakes, it loses track of the lane lines. You have to be on your toes all the time," says Wozniak. "All Tesla did is say, 'It is beta so we are not responsible. It doesn't necessarily work, so you have to be in control.'

"Well you that is kinda a cheap way out of it."


Unlike people like Elon and Jobs, you can safely calibrate your bullshit meter with Woz. He's not as famous because the press don't generally like that.


I love Tesla to death and in most cases will defend them beyond the point of reason.

But I took a test drive in a Model S for the first time earlier this year and almost immediately noticed autopilot's extremely unreliable behavior - it would swerve out of lanes in ordinary situations that should have been easy to handle. The second I saw that, that was it: I would never use it again before many years of testing and improvement had taken place. No way am I gambling my life on a clearly incomplete feature just because it's cool. Fuck that.

Of course Tesla is fairly safe behind their disclaimers and warnings, and to be honest I think it may be impossible to develop such a system without putting it into the wild before it’s perfect.

But for me, personally...I’ll let other people choose to be the Guinea pigs. The risks are all too obvious. Continuing to use the feature is very dangerous. Do it knowing this may very well happen to you.


> Of course Tesla is fairly safe behind their disclaimers and warnings

I don't think so. All it will take is a Tesla plowing into a sidewalk full of people, and no hand-wavy gesture at a license agreement is going to make the political and legal pressure go away.


After Tesla dumped Mobileye for their own system a year or two ago, autopilot performance was worse. But in the past few weeks they supposedly pushed out an update that dramatically improved it, to the point that it's finally unambiguously better than the old Mobileye system.

I still wouldn't trust it. I don't believe their current hardware has enough processing power to do the job. But it should be performing better now than even a few months ago.


I have a 2015 Tesla Model S and I have never seen the behaviour you describe. I use autostart a lot on the grounds that it's good to have two of us paying attention.


They dropped the system used in their older cars because their supplier for it decided they were too reckless and refused to do business with them anymore. Happened around 2016, I think. Since then they've been using an in-house system that doesn't work so well.


Damn autocorrect. s/autostart/autosteer/


> I love Tesla to death and in most cases will defend them beyond the point of reason.

Wow, that attitude strikes me as a little creepy. If you admire Musk to the point of hero worship, that's perfectly fine. But why worship a corporation?


Tesla is a metonym for Musk. Everything they do is by his direction unless he explicitly says "I was unaware of this and will be putting a stop to it immediately," which he has done multiple times. The corporation is ultimately an extension of his will, as Apple was for Jobs.


It'll be interesting to see if anything comes of the issue with the already-collapsed crash barrier and what CalTrans says about it. That sort of thing is there for a reason, and to be left in a crushed state for any period of time is bad.


In Texas, I've seen crushed barriers remain collapsed for weeks on end. Either that or they are hit again right after being replaced. Which tells me it's a poorly designed road that causes confusion for drivers. Which, in fact, may be what this Tesla crash turns out to be.


Exactly, I drive past this barrier every day... the problem is the left two lanes on 101 are carpool/EV lanes, so Tesla drivers just zoom down them... at this particular exit, the left carpool lane leads to an HOV flyover exit which puts you on 85. If you are not paying attention (i.e. on autopilot, flying past traffic), you will end up on a completely different highway! I very frequently see people swerve out of the flyover lane back onto 101, so my initial thought was that he tried to disengage too late to either get back on 101 or catch the flyover (not clear which).


photo, I think https://imgur.com/a/iMY1x (copied from elsewhere on this thread)


He lived in Foster City and was working at Apple, so it's likely he wanted to take the 85 ramp to Cupertino.

I've seen something that said that the car was warning him to take control, but he hadn't done so. Texting, maybe?


Before this rumour spreads: the latest Tesla statement [1] is worded in a deliberately misleading fashion, and the only thing we can tell from it is that, according to the software, his hands were not on the wheel for the six seconds before the impact.

>The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision

Note the and.

[1] https://www.tesla.com/blog/update-last-week’s-accident


Will you end up on 85 or will autopilot figure out at the last minute that you need to change lanes and do so?


But unfortunately the real world is full of poorly designed, under-maintained roads driven on by drivers who are looking at their phones or have mechanical issues without warning. If the self-driving cars can't handle it, then they shouldn't be on the roads. The roads will never be perfect.


I didn't understand that part of the article. Does the road have a crumple zone in it, which was already consumed in a previous incident, so that in this incident the car collided with the already-crumpled zone?


That is correct.


I wonder how fast they replace crushed barriers.


The crash was at the 101 and 85 flyover for the carpool lane. Next to that lane there is a sign warning of upcoming construction closures... in fall 2017. Does that answer your question?


Yeah, what's up with that? There's a sign you pass getting on 101 North from Oregon Expy that says that the ramp is going to be closed on some particular day in 2015.


It sounds like CalTrans had a lot of tolerance for shoddy work.


I would hope that removing an outdated construction sign is a lower priority than replacing a spent crash barrier.


It’s broken windows. If they can’t take down a simple sign three years later, how long does it take to reset a barrier that takes actual work?


One thing to remember in all the discussions about the accident is that there is no information available about whether the autopilot was active at the time of the accident.


Tesla has just announced that autopilot was on during the crash. https://www.tesla.com/blog/update-last-week’s-accident

It sounds like they are saying the driver had 5 seconds to notice the problem and react. It is scary to think that when you are on autopilot you may be 5 seconds away from death at any time. Better not take those eyes off the road!


If you take the human factor away, a warning 5 seconds ahead of time is quite a lot. The car could have slowed down to a possibly non-fatal speed. By asking the human to act, the autopilot is actually throwing away at least 2 of those precious 5 seconds.
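
A rough sanity check of that claim (the deceleration figure below is an assumption for illustration, not anything from the crash report):

    # How much speed could firm automatic braking shed in ~5 s from 65 mph?
    # 6 m/s^2 is an assumed, fairly conservative deceleration; a full
    # emergency stop on dry pavement is closer to 8-9 m/s^2.
    MPH_TO_MPS = 0.44704
    v0 = 65 * MPH_TO_MPS                     # ~29 m/s
    v_after_5s = max(0.0, v0 - 6.0 * 5.0)    # speed remaining after 5 s of braking
    print(v_after_5s / MPH_TO_MPS)           # 0.0 -> a full stop fits inside 5 s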


Came here to say this. It's unfair to start talking about autopilot mishaps in this case, what the family is saying, etc., when there is NO evidence yet as to whether autopilot was engaged.

Furthermore, if the barrier had already been hit by another car - which is why it was collapsed prior to the Tesla crash - this may simply be an area where the road is not designed very well and causes a lot of human error. This happens all the time. In fact, road design is a significant cause of vehicle crashes.


I will LMAO if it turns out the barrier was collapsed the previous day by another Tesla on auto pilot. We only hear about the fatalities right?


SW/HW bugs happen - fact of life. More concerning is Tesla denying this was ever reported to them...

I had a stuck driver-side mirror fixed, and they put in the service notes that they re-filled all my tires during the repair, allegedly as required by some CA law. Later that week I got an alert that my tire pressure was low and had to get the tires pumped up, so obviously they didn't do that despite having claimed to...


> More concerning is that Tesla denying this was ever reported to them.

This is how Tesla reacts to all bad publicity. Bad review on battery life comes out? Data dump indicating the reviewer might've done something wrong. Accident? Statistical dump showing that most cars don't do this. I like this technique more than generic "we can't comment" responses, but I'm pretty confident they heavily cherry pick, looking for something that people who glom onto data can see and say "oh good, Tesla's right."


Yup Tesla knows their audience - it's their semi-rabid owner and aspiring owner fan base. Throw out some logs or stats without context and let them do the dirty work for you.


Walter took it into dealership addressing the issue, but they couldn't duplicate it there

Why not get a mechanic to ride along with him to that location, perhaps with extra diagnostic equipment connected? I could see how a bug that is dependent upon being at that location would certainly not be reproducible somewhere else.

Here's an automotive service booklet from almost 70 years ago which recommends the same thing for troubleshooting:

http://www.imperialclub.com/Repair/Lit/Master/021/Page14.htm


before the crash, Walter complained "7-10 times the car would swivel toward that same exact barrier during auto-pilot

...and after 7 to 10 times, he still didn't learn his lesson? That's pretty stupid if you ask me. If my car does something weird at a particular stretch of road, especially 7 to 10 times, you can bet your bananas that I'll be paying a lot of attention on that stretch of road. If my "autopilot" (seriously, Tesla should stop using that name) isn't reliable in certain circumstances or places, then - guess what? - I WON'T BE USING IT THERE. Why blame Tesla (I'm no Tesla fan) when the operator of the vehicle refused to operate it properly in the face of prior experience? Poor guy, and I feel for his family, but come on, what a dumbass.


Previous discussion of this crash: https://news.ycombinator.com/item?id=16694365


I really don't like autopilot. It's good enough to make drivers trust it and not pay attention, but it's not good enough not to kill people when they do that. And when there's an accident, Tesla come out and say "the system warned the driver to put their hands on the wheel" or something similar. Unless a car can 100% self-drive, driver aids should require the driver to have hands on the wheel and be paying active attention at all times.


This is part of why I haven't considered a Tesla. If I get into a collision in any other manufacturer's vehicle, the manufacturer's PR team won't impugn my honor in the court of public opinion.

It seems like the Tesla autopilot is very similar in capabilities to other manufacturers' active lane keeping, adaptive cruise control and active collision avoidance braking systems, however the marketing and user behavior is much different.

There's no expectation that a Pacifica with all the bells and whistles is going to do a good job driving for you with no hands, but if somebody stops suddenly, it will stop too.


Googling surprisingly didn't give me a definitive answer on this - how long does autopilot let you go before warning, and then before stopping the car if you don't have your hands on the wheel?


Highway driving is supposed to be the "easiest" problem for self-driving cars to solve, since there are fewer edge cases, less turning, etc., but it's also the most dangerous type of driving. You are much more likely to die going 65 mph than 25 mph (rough energy comparison below).

I think deploying self driving cars at <=25mph speeds at first would be wise. Personally, I wouldn't risk letting a car take over at high speeds until there is a longer track record of safety.
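
As a rough illustration of why the speed matters so much (just the textbook kinetic-energy scaling, no crash data involved):

    # Crash energy scales with the square of speed; the units cancel,
    # so only the ratio of the two speeds matters.
    print(round((65 / 25) ** 2, 1))   # ~6.8x the kinetic energy at 65 mph vs 25 mph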


Many places with <=25mph speeds have pedestrians, and they're a lot more delicate than vehicles are. Drivers aren't the only ones at risk.


The fact that it happened to him "several times" and not to others suggests to me a specific hardware/sensor issue related to his vehicle. A sensor slightly misaligned or not working to the same tolerance? Pure speculation, I know. Also, the 200 trips/day refers to what? All Tesla vehicles? How about for his specific year & model, software version, and configuration (both as equipped and driver-defined)?


He complained 7 - 10 times and then just forgot about how it used to swerve towards a head on collision with the median?


Do you "feel" any responsibility as an autopilot ML engineer? I know a few that joined within the last year.


Is there a specific system for traffic sign detection? I'd have thought you could have a system that is dedicated to spotting traffic signs in the current country with a significantly higher accuracy than cat detection.

Even a small part of a sign should be enough. They're designed to be easy to spot.

It seems like we just set neural networks up to recognise all objects and assume they'll recognise simple objects too. However, humans typically learn through simple cartoons first and then layer detail on top of that, rather than the other way round. (A toy dedicated-classifier sketch follows the links below.)

Edit: should have done some googling before opening my mouth [0] [1]

[0]: https://amundtveit.com/2017/07/13/traffic-sign-detection-wit...

[1]: http://www.bartlab.org/Dr.%20Jackrit's%20Papers/ney/1.TRAFFI...
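
For what it's worth, a dedicated sign classifier is a small, well-studied problem. Here is a minimal sketch, assuming PyTorch and a benchmark like GTSRB's 43 German sign classes (the layer sizes are illustrative assumptions, and this is obviously not what Tesla actually ships):

    import torch
    import torch.nn as nn

    class SignClassifier(nn.Module):
        """Tiny CNN over 32x32 crops of candidate sign regions."""
        def __init__(self, num_classes: int = 43):   # 43 = GTSRB sign classes
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
            )
            self.classifier = nn.Linear(64 * 8 * 8, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(1))

    # Smoke test on a random crop standing in for a detected sign region.
    model = SignClassifier()
    print(model(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 43])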


Certainly CalTrans is responsible for not replacing the barrier. Expect a lawsuit/settlement. Tesla will be found liable too; that'll be a jury-trial civil suit. Why? Even though the driver reported that AP failed at that interchange and kept using it anyway, drivers and juries expect safety features to work as advertised.


The autopilot is not a safety feature.


I am pretty mystified why Tesla is even running autopilot on their cars. They already have more demand than they can handle, and I don't think anyone buys a Tesla because of the autopilot. They are opening themselves up to huge court claims at the same time they are low on money. Just keep running tests and maybe do a pilot for a few years. Isn't Waymo sort of in the lead right now? They aren't running their software on millions of vehicles but seem to be progressing okay.

It seems all these companies are in a big rush and being slapdash. Maybe it's a disconnect between the engineers and the execs/shareholders. It doesn't even seem profit-driven...almost fear driven. ("we don't wanna be left behind.")


It allows them to develop self driving technology without the expense of dedicated cars and paid drivers, and without any liability (because drivers take the risk).


The question is, if the driver was aware the autopilot was unsafe at that area, why did they keep using autopilot there?

You can't just throw your hands in the air and hope for the best. Your car, you are at the wheel, you are responsible.


Am I the only one that thinks that drivers with auto-pilot and back-up drivers for driver-less cars should be ready to drive at any instant? As an engineer, I'm seeing these features as beta at best.


The problem is that that's not how human cognition works. If the auto-pilot is working well, your brain will inevitably become accustomed to the lack of stimulus. Ironically, I think these systems have a kind of uncanny-valley area where they are probably safest when the auto-pilot is poor or great, but not in the middle.


I understand that ... and that's another problem we've yet to solve. What you've described is what led to the crash of the Korean Airliner that undershot the runway at SFO a few years ago. In that case, they would have been better off letting the plane land itself but that's not SOP.


Side story, I have an open with crash warning.

On the highway I think it even detects potential crashes ahead of you (I once had two cars almost crash ahead of me on a three-lane stretch while one tried to switch lanes).

Well, back to the story. There is a street where cars park on the left and right of the street in a zigzag pattern. There is one spot where it always warns me about a crash...


IMHO, they need to get autopilot at least an order of magnitude better than human drivers statistically, before releasing this tech, because this sort of news is extremely bad public-perception-wise. I don’t think it’s reasonable to expect a Tesla driver to always have his hands on the wheel during autopilot


Why call something "auto-pilot" when it is clearly not remotely ready to do what its name implies ?


Unless someone has actually used an autopilot in something like a Cessna, that person would probably have a wildly overoptimistic idea of what an autopilot does. Even on a passenger jet it's really not that smart; there's just lots more volume to explore, so it's hard to kill everyone.

A better analogy would be cruise control. It controls essentially one variable. As does lane keeping. You combine a couple of these things and you think it's smart, and it isn't. We learnt this ~40 years ago in aircraft (edge cases between single-variable trackers): there are places in the flight envelope that combinations of single-variable trackers will still let you go, but that will also kill you. (A toy single-variable controller is sketched below.)
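
To make the "one variable" point concrete, here is a toy sketch (made-up gains and vehicle model, unrelated to any real system) of a cruise controller: it tracks speed and is blind to everything else - lane position, obstacles, the rest of the envelope:

    # A proportional-integral speed tracker: one setpoint, one control output.
    def simulate_cruise(setpoint_mps=29.0, steps=600, dt=0.1):
        speed, integral = 20.0, 0.0       # start at ~45 mph
        kp, ki = 0.8, 0.05                # assumed controller gains
        for _ in range(steps):
            error = setpoint_mps - speed
            integral += error * dt
            throttle = kp * error + ki * integral    # the single control output
            accel = max(-3.0, min(2.0, throttle))    # crude actuator limits, m/s^2
            speed += (accel - 0.02 * speed) * dt     # toy drag term
        return speed

    print(round(simulate_cruise(), 2))    # settles near the 29 m/s (~65 mph) setpoint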


There used to be the same problems with cruise control, though. People thought it would brake automatically and steer around corners, and would get into accidents that proved their assumption wrong.


Yeah, autopilot on a Tesla is similar to autopilot in airplanes. Not sure what else you could call it without it also being confusing. Autopilot is a tool to stay in your lane and handle common scenarios like keeping speed, avoiding obstacles, and keeping distance, though there are still possible edge cases, like this one appears to be, that can be more dangerous than not using it, because the systems are not fully autonomous.

Even with airplanes, if something gets in the way, autopilot won't always save you and can only alert you when the situation gets bad, i.e. an oncoming airplane or obstacle, altitude, speed, etc. We fly in planes with autopilot and we are safer for it, but pilots/drivers still need to be alert and operating the plane. Teslas aren't fully autonomous and probably can't truly be until everything is connected and more cars on the road are autonomous, with predictable behavior. I trust autopilot in planes but still want a pilot. Most likely autopilot will be more useful for large buses, trucks, shipping, boats, and airliners than for individuals, as you will still need a driver most of the time. Even the Uber crash could possibly have been avoided with a more alert driver.

One area that may cause more crashes in the interim is trusting the software too much: an autopilot that works "six sigma" 99.999999% of the time may lead to drivers getting comfortable with technology that still has edge cases that can endanger them. This issue was a factor in both the Tesla crash and the Uber crash.

I think a hugely overlooked part of the failure here was the previous accidents at this part of the road and the lack of repair. We have a serious infrastructure problem with disrepair. Non-automated, untracked human driving probably ends up badly in these areas too, but it goes unnoticed or is downplayed, even though road design can be a big factor. This automated crash highlighted an issue at this offramp/fork that would probably otherwise go unnoticed and cause more trouble. If nothing else, automated driving will have the data to fix these bad areas of our infrastructure and double down on safety and protections.


Because it implies something regardless of the product's ability to actually deliver. The connotations were so strong I found it bizarre they went with it and didn't just hold the name until the product matured. However, it really comes down to the simple fact that it has not cost them enough money in penalties and lawsuits to force a name change. They are in essence bluffing their way through this and have yet to get called on it.


It’s an aviation term, and it does exactly what an aviation autopilot does: assist the pilot.

They didn’t call it “self-driving” or “autonomous” because it isn’t those things.


It’s also a car, and not a plane. Do they also call the wheels propellers? It’s a misleading term for the majority of people who don’t fly their own planes.


Why does no one talk about the other victims of car accidents? There are about 6k per month on average in the US alone. But Uber and Tesla are in the headlines when someone dies, and there are, what, 3-4 victims in total after all these years? It is not even worth mentioning.


Miles driven, genius. There are millions of other vehicles on the road and not that many Teslas.


Am I the only one that finds it odd that a human also ran into the same barrier within 24 hours of the crash? Perhaps Caltrans is partially to blame.

(Full disclosure: Their incompetence almost killed me a few times back in my I-880 commuting days)


> Am I the only one that finds it odd that a human also ran into the same barrier within 24 hours of the crash?

Source? ABC7 says the previous crash was 11 days ago (DUI...), the barrier just wasn't fixed immediately.

http://abc7news.com/automotive/exclusive-i-team-investigates...


They should rename autopilot to auto-scapegoat.

There is zero information yet, and it's all everyone is jumping on. Last week the same thing happened to a Tesla that wasn't even equipped with autopilot.


"Autopilots performance is unrelated to navigation"

Tesla is mincing words.


My opinion: EVs, self-driving or driver-directed, need an ejection mechanism for their batteries. Petrol has the advantage that it only may ignite, whereas batteries are almost guaranteed to when damaged. Ideally, it would be an active system (launching the batteries no more than a meter away) - but that could fail under the conditions which caused the damage to the batteries. An alternative would be a passive system made of materials known to melt when exposed to a lithium fire, providing a few centimeters of separation from the cabin. Either way, the current situation is not ideal.


I'm not going to dismiss your opinion out of hand, though I certainly don't agree with it.

If I may: do you realize that, in this catastrophic wreck, the fire only started, slowly, after the driver had been taken away by EMS?

One big difference between a fire with batteries and gas is this: if gas starts to burn, most likely all of it is going to burn, quickly and often explosively.

None of these are the case with the batteries in Teslas. Only the damaged ones are likely to burn, slowly, and not explosively.


So your system would throw a burning pile of lithium into the woods to start a forest fire, into oncoming traffic to cause another crash, into a pedestrian on the sidewalk, etc?


You're right. It's a terrible idea. Pity it's too late to edit or delete that dumb comment.


Tesla is very very very bad with PR when something goes wrong. Their response to this is awful.


It looks like Status: WORKSFORME won't cut it in this brave new world.


Crash victim? More like crash dummy, auto-pilot is raw.


Seems pretty inexcusable for a self-driving system to ever hit a static obstacle (while not trying to avoid another collision).

There should be manslaughter charges for this kind of thing.


It's interesting nobody is asking why the crash attenuator was not replaced faster.


The fire seems pretty severe too. I wonder what caused the battery containment to fail?


blame the customer, blame the govt, but your software is perfect Tesla?!


Super fun to compare the comments on this to the comments on the Uber accident.



