"Our data shows that Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times since Autopilot was first rolled out in 2015. ... There are over 200 successful Autopilot trips per day on this exact stretch of road."
Whoever wrote this statement does not understand software. If there is some kind of Heisenbug, those numbers are far too small to prove that the software is okay, and the fact that the man who died had reported an issue at that exact spot, and later crashed there, looks like a very interesting hint about a potential software bug that should be investigated. The Tesla statement is the equivalent of "works on my laptop" at a bigger scale. Also consider that 85k trips since 2015 means that potentially only a fraction of them were executed with the latest version of the software. Moreover, the street layout may have changed since then, triggering some new condition.
They also provide numbers without any context. Does one crash every 85,000 trips through that interchange mean the Autopilot is safe, or is it a death trap?
There are over 200,000 cars a day driving that stretch of highway. If a car crashes there once a week, then the Tesla Autopilot is an order of magnitude more dangerous than a human driver.
> For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.
That's very curious wording. The important number would seem to be "millions of miles per fatality while Autopilot is enabled", not "millions of miles per fatality in Autopilot-equipped vehicles". The way it's phrased, I wonder what part of that 3.7x multiplier is unrelated to Autopilot and is just due to the car's high safety rating in a collision, recent release (no long tail of older, less safe models contributing to the statistics), and luxury status (more likely to be driven conservatively by old rich people.)
I think there's automatic collision avoidance always on that uses the autopilot sensors. So even if you're not actively using autopilot, the car can avoid hitting an obstacle it sees on the road.
Yup and this has saved me from at least 2 collisions! The first time was right after I got the car and I was totally not watching the road but instead playing around with the controls and suddenly the screen started flashing red. The second time was a car to my right as I tried switching lanes. I rarely let the car drive itself, but when I have it often alerts that I don’t have my hands on the wheel and I’m like uhh literally holding the wheel so I take the data that indicates the driver not holding the wheel with a grain of salt...
Yeah, and automatic emergency braking is a clear-cut enough win for safety that there's been talk of making it mandatory on all new cars in a few years' time. Tesla seem to have adopted a deliberate strategy of conflating the safety benefits of an uncontroversial feature many cars have with the much more questionable safety of their autopilot mode in order to promote the latter. The big example Elon Musk was pushing of Autopilot preventing an accident and likely saving someone's life was the result of automatic emergency braking in conditions very far from those autopilot mode could be used in (heavy rain in an urban area at night, if I remember correctly).
Yes, this conflation has bothered me as well. In the long term it's an absolute slam dunk to transition to autonomous vehicles, but I'm a bit worried that they feel they need to hide the most important numbers in 2018 in an attempt to avoid an irrational / unwise response from the public.
From their blog: "In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident."
Notice it says one fatality per "320 million miles in vehicles equipped with Autopilot hardware." But, how many of those miles (or what percent of the time) does a Tesla driver use Autopilot? Also, maybe those good stats are due to the structural safety features and not the autopilot. It may be more fair to compare to other cars with excellent crash protection but no autopilot.
I think it would be a great service to the world to improve driving safety, but maybe we need to really start looking at the stats and show some more humility as we transition to full autopilot. For example, require that drivers keep their hands on the wheel and eyes on the road. When more cars have autopilot, plus mesh behavior between vehicles and e-road features, then maybe we'll be more ready for driverless cars.
> In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers.
Also note that this includes trucks and motorcycles, which have much, much higher fatality rates than passenger cars. Motorcycles have a fatality rate around 10x-50x higher than cars! So already Tesla's blog is doing a misleading comparison to more deadly vehicle classes.
Additionally, there could be all sorts of other variables that make even this comparison misleading. Are Tesla drivers demographically comparable to regular drivers? Are they older? Younger? Both elderly and very young drivers have higher accident rates. Are Teslas driven in rural and urban areas in the same proportions as regular cars? Rural areas have a higher fatality rate.
So we need to make sure that Tesla drivers match the driving conditions of typical cars for Autopilot comparisons to be valid.
Right now, it appears Tesla's autopilot is a death-trap.
And don't forget to only compare against 2012+ luxury sedans. The median age of U.S. passenger cars is nearly 12 years. There are exactly 0 autopilot equipped 12 year old cars. As vehicles have become significantly safer in the past 12 years, I wouldn't be surprised if the fatality rate of 2012 and newer luxury sedans was a third of the U.S. median.
> Right now, it appears Tesla's autopilot is a death-trap.
I agree with all the holes you poked in their stats, but then with the last sentence you just went way off the deep end. What does "death trap" mean to you? To me, it seems likely the Autopilot engaged is about as dangerous as disengaged.
Do you find yourself veering dangerously close to a traffic barrier on a regular basis (waiting for the day that the crash attenuator becomes defunct so you can slam into it)? I don't and any machine that does that qualifies as a death trap.
They definitely know. They collect your driving data even when auto pilot is off, so I'd imagine they collect everything when the autopilot is actually engaged.
Tesla specifically said that the information was in the car and wasn't transmitted to HQ, so they didn't know. If what you say is true, they're lying to the USG.
Let me share an example. I had an accident in a Model S, front-end, which didn't cause a loss of power. Tesla called me a couple of minutes later, saying that they knew I'd had an airbag-deploying accident, and asking if I needed an ambulance. From the description of this accident, power to the Tesla was lost before there was time to phone home.
I believe the accident happened on US 101, which is a multi-lane highway with 5 or 6 lanes at the spot where the accident occurred. How many of the trips Tesla is citing occurred with the vehicle in the same lane as this guy? Comparing apples to apples would probably greatly reduce these 85,000 / 200 per day figures.
Exactly. Same lane, same time of day, same time of year (sunrise/sunset changes in lighting conditions), same weather patterns, same level of traffic. The combinatorics involved expand the possibilities to the point that some scenarios may have had little or no coverage in a dataset of 85k observations.
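A rough back-of-envelope sketch of that combinatorial explosion (every bucket count below is an invented guess purely to illustrate how thin the coverage gets, not real data):

```python
# How thinly 85,000 observed trips spread across distinct condition "buckets".
lanes = 5           # lanes on this stretch (assumption)
times_of_day = 8    # 3-hour buckets
seasons = 4
weather = 5         # clear, rain, fog, low-sun glare, overcast (assumption)
traffic = 3         # light, moderate, heavy

buckets = lanes * times_of_day * seasons * weather * traffic
trips = 85_000
print(f"{buckets} condition buckets, ~{trips / buckets:.0f} trips per bucket on average")
# -> 2400 condition buckets, ~35 trips per bucket on average
# Real trips cluster heavily (commute hours, clear weather), so many buckets
# end up with only a handful of observations, or none at all.
```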
Yes. Tesla should not be releasing PR targeted at the masses; they should be releasing statements with a scientific level of information. Whether the statement was disingenuous or just rushed, they need to scrutinize it more carefully. It would be fine to release the holistic information, so long as they go into the fullest depth possible. They need to act as stewards we can trust; if they fail to do so they will lose mindshare, and longer-term market share, as they will simply blend in with everyone else.
The obvious failure was the highway safety mechanism not being replaced immediately after the previous crash - that would likely have turned the Tesla crash from a fatality into perhaps only minor injuries.
I would argue too there is a need for AI image analysis - could perhaps be crowdsourced - to analyze the structure/state of a highway or any road. It would of course need to be trained, however it could also likely serve as a tool to improve road safety worldwide - bringing everyone in line with known best practices. In this case, it would have caught that the damaged crash attenuator had not been replaced, and that death might have been avoided; cost-benefit analysis says this system would be worth it, as life is invaluable.
If Tesla/Elon leads this effort it would show them being proactive in improving future safety, accountable, and taking ownership. Why has no other auto manufacturer done so, and why has no government implemented such a system? Honestly, because we are stuck in scarcity, caring only about our own costs, and so if a random person dies here and there, there's not a big enough ripple effect (unfortunately) to cause enough personal worry to cause change. Tesla, if it wants to position itself as the steward I think many of us hope it is, does however absorb these ripples that get associated with its brand - and therefore the responsibility passes onto them, whether they agree to accept it or not.
All auto manufacturers would likely benefit from this, allowing their software to work better within certain expected constraints. Many more possibilities as I think of it, newly spotted damage or cracks in bridges, detection of debris on highway or excessive dirt on the edge of the road making the conditions more dangerous for emergency stopping/maneuvering, etc.
Edit: I added a few sentences after the first upvote, so that upvote doesn't necessarily cover everything above, though it's all in the same vein.
>I would argue too there is a need for AI image analysis - could perhaps be crowdsourced - to analyze the structure/state of a highway or any road.
I fully agree with you here. Urban areas are too dynamic to rely on static maps and local LOS only. This sort of data sharing is going to be a necessity, IMO, to achieve level 4 or level 5 autonomy.
There was a big note in the GTC keynote this year about Nvidia using virtual environments to debug their autopilot algorithms. Think "car" being driven in a high fidelity video game.
Advantage being that if you can provide representative, simulated input then you can increase the training miles by orders of magnitude in the same amount of time, limited by computation rather than physical mileage.
This is true, to the extent that the selection of input parameters to simulate provides coverage over the domain of all possible input parameters. At some point you're back to testing the failure of imagination of the simulation/test creators.
The problem is the simulation by definition can only simulate things that are accounted for. Any number of completely arbitrary out of the blue things can happen in real life.
Simulations can make use of random number generators and could in theory, following the lead of a project like afl, adaptively find algorithmic weak spots.
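A toy sketch of what that AFL-inspired idea could look like; the "simulator", its dynamics, and the coverage signal are all invented for illustration, not anyone's actual test harness:

```python
import random

# Mutate scenario parameters, keep mutants that reach previously unseen
# behaviour, and flag outright failures.

def simulate(scenario):
    """Stand-in for a real driving simulator: returns minimum clearance (m)
    to an obstacle for a (speed_mps, obstacle_offset_m) scenario."""
    speed, offset = scenario
    return max(0.0, 5.0 - 0.03 * speed - abs(offset))  # invented toy dynamics

def mutate(scenario):
    speed, offset = scenario
    return (max(0.0, speed + random.gauss(0, 5)), offset + random.gauss(0, 0.5))

corpus = [(30.0, 0.0)]   # seed scenario: 30 m/s, obstacle centred on the lane
seen_bins = set()
failures = []

for _ in range(10_000):
    child = mutate(random.choice(corpus))
    clearance = simulate(child)
    bin_id = round(clearance, 1)     # coarse behavioural "coverage" signal
    if bin_id not in seen_bins:      # new behaviour -> keep it, explore from it
        seen_bins.add(bin_id)
        corpus.append(child)
    if clearance == 0.0:
        failures.append(child)

print(f"explored {len(seen_bins)} behaviour bins, found {len(failures)} failing scenarios")
```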
Trying to use them to model real world scenarios would be useless in practice, due to the Ludic fallacy [0]. Real life is too complex to be modeled in any simulation.
I don't think the connection to the referenced fallacy is nearly strong enough to serve as a QED on its own. It's also seemingly promoted only by one person.
As for modeling real life with simulations, the data exists for every type of accident a human has encountered. If you incorporate such into a simulation, plus randomly vary every free parameter, then your simulation will cover more scenarios than any human driver can possibly experience.
Thus, simulations should be able to help an autonomous vehicle outperform humans by a large margin, which is the only goal that matters.
I don't see what's wrong with what they wrote. This statement of course doesn't show that the software is safe, but it is evidence that if there is a bug, it probably isn't easily triggered.
(Keep in mind that until they recover the logs, we don't even know if the autopilot was on.)
No, of course driver inputs will not be ignored. They will override the autopilot. But none were given. From the blog:
>In the moments before the collision, which occurred at 9:27 a.m. on Friday, March 23rd, Autopilot was engaged with the adaptive cruise control follow-distance set to minimum. The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.
> The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator
So, clearly, the AP is not capable of avoiding such obstacles yet, even when unobstructed. It also seems like a mistake to not slow down to a survivable speed when hands are off...
From the image of the crash site, it looks like the driver was in a spot trying to merge left. That would require you to match the speed of the cars in the other lane. Also it looks like the software made the critical failure of assuming that it was driving in the left lane instead of the shoulder.
Perhaps I am just more cautious than most but I always have the follow distance set to give me the maximum separation. That seems to work out at about three seconds. The minimum seems dangerously close to me.
Heisenbugs in AI are also interesting. As computers approach sentience, it will become progressively harder to explain their behavior.
Why did you eat a bagel instead of eggs this morning? Usually there isn't a scientific answer.
In the present, AI models are already so complicated that it seems hard to get reproducible diagnostic results, short of just saving every frame of data that the car's sensors pick up. And for video that seems rather prohibitive. Imagine collecting all sensor data for each of those 200 trips per day along the entire stretch of road.
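For a sense of scale, a back-of-envelope with assumed figures (camera count, bitrate, and time on the stretch are all guesses, not Tesla's actual sensor specs):

```python
cameras = 8
mbit_per_s_per_camera = 5          # compressed 1080p video (assumption)
stretch_seconds = 120              # time spent on this stretch (assumption)
trips_per_day = 200

bytes_per_trip = cameras * mbit_per_s_per_camera * stretch_seconds * 1e6 / 8
gb_per_day = bytes_per_trip * trips_per_day / 1e9
print(f"~{bytes_per_trip / 1e6:.0f} MB per trip, ~{gb_per_day:.0f} GB per day for this stretch alone")
# -> ~600 MB per trip, ~120 GB per day for this stretch alone
```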
I'd be willing to take a 1k bet that computers won't approach sentience in our lifetime. Don't believe the AI hype train; there hasn't been a real breakthrough. Just more accurate versions of the same (dumb) classifiers.
Because we integrate, learn, plan, abstract, create, have way more senses, complex memory, societal and cultural embeddings, and a whole host of things that make us human. A text box on hacker news is not the place for such a reply. Go take some cognitive science, neuroscience, anthropology, sociology, psychology, and philosophy classes and you’ll see why. “AI” as we know it is a joke, particularly as media has hyped it.
The original guys (McCarthy, Minsky, etc.) at least understood the gigantic complexity of the undertaking, insofar as we understand how the mind works at all, and they never reduced things purely to classifiers. That's like 0.1% of the job. The real researchers take philosophy of mind seriously — technologists want a quick and cheap solution that cannot exist.
This is way more than what's required for sentience though. Sentience is having a notion of self, many animals are sentient.
Maybe you meant sapience or intelligence, but I wonder if people think computers won't be sentient (as in self aware) in our lifetime, even if they are still not able to match human cognition.
> Because we integrate, learn, plan, abstract, create, have way more senses, complex memory, societal and cultural embeddings, and a whole host of things that make us human.
And yet all this complexity and purpose may potentially be simply the product of putting a large collection of matter in a heat bath for a few billion years.
Who says that this process can't be repeated via brute force simulation? That the emergence of complexity and life under thermal disequilibrium + sufficient number of degrees of freedom is actually a basic fact of the universe?
I don't see any rule saying why, with sufficient brute force computational power, we won't be able to set up better and better simulations that will allow for the arise of true artificial intelligences.
I'm sure people smarter than me have tried to take stabs at estimating what's required to do this, so I can't positively say this will or will not ever happen (especially with quantum computing). But do consider:
We think the brain has 100 billion neurons. Making those work involves a very complex network: lots of chemical gradients, bio-electrical systems, cellular-level systems, and probably a host of other things we don't know about. So it wouldn't surprise me if, for a single neuron, you'd need to model another billion things at minimum.
Now factor in all of the other input senses, and those 100B neurons have increased dramatically for I/O. Plus a multiplier to model everything going on inside a neuron. Then there’s also modeling the communication layers between, effects of myelin sheaths and all that.
So we’ve established that just to run a single “step in time” (whatever that means, as that’s a computing concept), we’re into probably billions of trillions of calculations.
Now all of those models have to make sense and do something coherent, at every future step. So you need some kind of meta-model that is able to push this forward and keep it on the right trajectory. Now the complexity is dramatically higher still.
Now that we’ve somehow figured out how to model everything required for intelligence, we need to search that space through brute force, as you put it.
To me it seems like the time required will be astronomical — Sun might have blown up by then.
So why go down this route? Nature solved this in parallel relatively cheaply through billions of iterations and billions of permutations over tens of thousands of years.
Seems easier to simplify the problem down to more basic stuff that works well enough and iterate / combine systems. Either that or find a way to grow a “brain in a test tube” that you can hookup to a computer. There’s also always humans available at mass scale for pretty cheap across the globe — sophisticated intelligence built in.
> And yet all this complexity and purpose may potentially be simply the product of putting a large collection of matter in a heat bath for a few billion years.
It doesn't have to be. There are alternatives to that.
Because humans have agency, a quality we seriously dislike in things we create (control). If a computer woke up one morning and said, "I like being plugged in more consistently, so I'm moving to your neighbor's house," we would not like that computer much, even though that might be more "intelligent" behavior from its perspective. So we basically don't allow things we engineer to have "real" intelligence, because that would include qualities we don't like in our creations.
Among the other things we invented are algorithms which invented other algorithms to solve real-world problems. There is nothing unique or irreplicable about humans (save for culture, emotions, etc., which nobody wants to replicate).
>Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to completely and abruptly forget previously learned information upon learning new information.
These self-driving systems are likely not learning in real time. When the algorithms are updated, best practice is to validate their performance on a large regression set. So something like this should be caught during validation, assuming your test data is good.
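A minimal sketch of the kind of regression gate described above; the Scenario fields and scoring interface are hypothetical, not any real vendor's pipeline:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Scenario:
    name: str
    # ... recorded sensor frames, expected behaviour, etc.

def validate_release(new_model: Callable[[Scenario], float],
                     baseline_model: Callable[[Scenario], float],
                     regression_set: List[Scenario],
                     tolerance: float = 0.0) -> List[Tuple[str, float, float]]:
    """Return scenarios where the updated model scores worse than the baseline.
    An empty list means this particular gate passes."""
    regressions = []
    for scenario in regression_set:
        old, new = baseline_model(scenario), new_model(scenario)
        if new < old - tolerance:
            regressions.append((scenario.name, old, new))
    return regressions
```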
We also see this in unexplainable flash crashes from high-speed trading software gone berserk. The speed at which they handle data is so immense that going through the logs of a few seconds around the issue could take several years of human time.
What do you mean by probabilistic? If you mean that the model takes input and outputs one or more labeled probabilities (e.g. "90% confidence the input photo is a dog, 10% confidence the input photo is a cat"), then yes, I believe that many AI systems work this way. If you mean random, in the sense that the system may return different results given the exact same inputs, then I'm not sure whether there are AI systems that work that way.
There are. Monte Carlo methods is the keyword you're looking for. AlphaGo (Monte Carlo Tree Search) is an example of one such AI.
Obviously you can set the RNG seed to be the same every time too, but even that only works if your system is wholly synchronous, which a car probably isn't.
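A tiny illustration of that caveat (all names invented): fixing the seed only buys reproducibility if every random call happens in the same order on every run, which asynchronous sensor timing tends to break.

```python
import random

def stochastic_policy(observation, rng):
    """Toy stand-in for a sampling-based planner: picks among candidate
    manoeuvres with weighted randomness."""
    return rng.choices(["keep_lane", "nudge_left", "nudge_right"],
                       weights=[0.8, 0.1, 0.1])[0]

# Same seed -> same "random" decision sequence, but only if the calls happen
# in the same order each run.
rng = random.Random(1234)
print([stochastic_policy(None, rng) for _ in range(5)])
```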
Note that I doubt Monte Carlo methods are common in the autonomous vehicle space.
Well I don’t work in autonomous cars but I did work on several autonomous robots, and “Adaptive Monte Carlo Localization” is a common way to keep a ROS + LIDAR based robot localized.
I wouldn’t be surprised if Monte Carlo techniques are useful for all manner of things related to ingesting sensor data on autonomous vehicles.
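For anyone curious, here is a minimal 1-D particle-filter sketch in the spirit of AMCL. Real AMCL tracks a 2-D pose with an adaptive particle count; this only shows the core predict/weight/resample loop, with made-up noise values:

```python
import math
import random

def particle_filter(particles, control, measurement, wall,
                    motion_noise=0.1, sense_noise=0.5):
    # 1. Motion update: shift each particle by the control input plus noise.
    moved = [p + control + random.gauss(0, motion_noise) for p in particles]
    # 2. Weight particles by how well their predicted range to a known wall
    #    matches the actual (LIDAR-style) range measurement.
    weights = [math.exp(-((wall - p) - measurement) ** 2 / (2 * sense_noise ** 2))
               for p in moved]
    # 3. Resample in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

wall = 10.0
particles = [random.uniform(0, 10) for _ in range(500)]
true_pos = 2.0
for _ in range(10):
    true_pos += 0.5                                   # robot moves 0.5 m per step
    measurement = wall - true_pos + random.gauss(0, 0.5)
    particles = particle_filter(particles, 0.5, measurement, wall)

print(f"estimated position ~ {sum(particles) / len(particles):.2f}, true = {true_pos:.2f}")
```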
I'd expect Monte Carlo methods to be used in a number of cases that have deterministic time envelopes for evaluation. Randomized selection and evaluation can be incredibly effective. They also resist degenerate structured input vulnerabilities.
I'm not in the automotive space, but I'd be surprised if there were a viable self-driving car team not using Monte Carlo methods somewhere in the vehicle stack.
Yeah, this was unfortunately worded. There are subproblems for which MC/randomized methods fit well, but in general those circumstances are well understood.
It's possible that some of those "200 successful Autopilot trips per day on this exact stretch of road" also involve pulling toward that barrier. However, it's not likely a huge percentage, or we'd have seen that posted. So the implication is that his Tesla was relatively unique. That is, broken.
That person is likely Elon Musk who is well-known for his obstinate and even idiotic defense of Tesla against legitimate and thoughtful criticism. He has the defensiveness of Donald Trump, while even Zuckerberg admitted fault.
The video is funny. Re the first argument: Top Gear wrote a script with the Tesla breaking down and faked the thing, so I can see why Musk was annoyed. They lost the court case because Top Gear argued Tesla couldn't show they'd hurt sales.
Well, I found Tesla's statement odd as well.
But it is also odd that the driver kept driving the same road WITH Autopilot activated, even though he KNEW it was broken.
It's a strange accident, especially because of the strangeness of Tesla and the whole background story (the broken safety barrier).
I'm looking forward to the outcome of the whole story.
In the end I guess more than one party could have been a bad actor: Tesla, the driver himself, and even the road authority, since there was no warning of any kind (the safety barrier not replaced, and no signs that it would be replaced soon).
It also means they don't have significant representation of all driving conditions on the stretch, placement of sun, overcast, lighting conditions at night, traffic, weather, etc. I can imagine at least a dozen very distinct conditions, and many more subtle variations. 85k isn't very large to get good coverage of all conditions and generalize from them.
When users discover a bug or issue they tend to work around it. Even as a developer, if you are debugging something and stumble on another, unrelated issue, you are likely to ignore it. Users very rarely report issues. Self-driving car testers should be paid per issue found, not per hour.
As someone who writes software (I've never worked for Tesla), I think this statement does provide some credibility. It probably means that the vision system does understand such scenes and is able to navigate them.
Of course, this statement by itself doesn't absolve Tesla's software of any error. It's important to know the actual root cause before saying who or what is at fault, something the HN community (and the Internet/people in general) isn't good at.
As someone who works on software for one of the automakers, I also want to note that there seems to be a lot of focus on the software... but it's also very possible the problem was with the hardware: something in the vision or sensor system was incorrectly installed, defective, was calibrated incorrectly, lost calibration during use, etc.
I'm sure the software can always be improved, but if most cars are driving through this stretch without a problem and his always veers, that sounds like something unique to his vehicle which points to hardware in my opinion. In cars, the software can only ever be as good as the data it receives.
Precisely. Hopefully the NTSB's investigation will be thorough enough to explore the possibility of undetected manufacturing defects in the deceased's Model X when compared to other Model Xs that rolled off the line at the same time.
He owned the car, worked at Apple, and drove the same road to work for about half a year (i.e. >100 times), and according to the article he had only about 7-10 veering incidents there. That's like one or two per month.
Fair enough, and if this has only been happening in the last half year and it doesn't always happen... from my experience, I'd be looking at hardware first. It only happened the past 6 months? You mean the coldest 6 months of the year? Hmmm, is one of the sensors or its mounting hardware susceptible to cold weather? Perhaps a sensor was calibrated properly, but it's just barely within the calibrated range and cold weather is enough to push it out of calibration. Perhaps the sensor is defective and the cold is causing it fail prematurely. I wouldn't rule out software, but this sounds more like a hardware issue to me.
Perhaps whoever wrote this statement does understand software, but they also understand lying with numbers, and know that the general public understands neither.
Scenario 1: 100 human drivers drive some distance. 2 bad drivers caused 150 accidents and both died in their last fatal accidents.
Scenario 2: 100 self-driving/autopilot-enabled cars drive the same road for the same distance. Everyone has one accident and one of them is fatal.
Statistical numbers:
Self-driving/autopilot-enabled cars cut the accident count by a third (150 to 100) and the death count by half (2 to 1), and the death rate per accident drops from 2/150 to 1/100 - even though 98 drivers who previously had no accidents at all now each have one.
"Tesla also posted these photos that raise another important question: they show what's called a "crash attenuator" or safety barrier in the proper condition ... and the way it was the day before Walter Huang's crash ... collapsed after a different accident."
Did you see any news coverage of the previous accident? I know I didn't.
I agree. I find it hypocritical of Tesla to talk about numbers in thousands in terms of Model 3 production, but low 100's is a totally sufficient metric for testing autopilot.
One of the things you learn in flying is not to force it. If conditions are not safe. Just forget about it. It almost seems the same type of judgement needs to be made for auto-pilot. If your auto-pilot acts up at all. Just turn it off and don't use it till it's resolved. All you need is one incident to be dead, so if you get a chance to observe any abnormalities, consider it a blessing.
Something that will also be great will be a sort of "crash dump/bug report" button for these cars. If at any time your car does something unsafe, you can hit that button. The car will save the last 60 seconds so the manufacturer can analyze it to debug and figure out what went wrong.
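Sketching that "bug report button" idea: keep a rolling window of recent telemetry and snapshot it on demand. The frame contents, rate, and the 60-second window are assumptions taken from the comment, not any real car's API.

```python
import collections
import time

class IncidentRecorder:
    def __init__(self, window_seconds=60, hz=10):
        self.frames = collections.deque(maxlen=window_seconds * hz)

    def record(self, frame):
        self.frames.append((time.time(), frame))

    def report_bug(self, description):
        # A real system would persist/upload this snapshot for triage.
        return {"description": description, "frames": list(self.frames)}

recorder = IncidentRecorder()
for i in range(1000):
    recorder.record({"speed_mps": 30.0, "steering_deg": 0.1 * (i % 5)})
report = recorder.report_bug("veered toward the gore point at the 85/101 split")
print(len(report["frames"]), "frames captured")   # -> 600 (the last 60 s at 10 Hz)
```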
I was so excited about auto-pilot and dreamed of getting in my car and sleeping while it made cross-country trips. So much for that, that seems way far out.
> Something that will also be great will be a sort of "crash dump/bug report" button for these cars.
Good point, and it isn't just vehicles with auto-pilot; perhaps all vehicles.
The 2017 and 2018 Chrysler Pacifica minivans kept randomly shutting off[0] and losing all electrical power at freeway speeds. You had to put the vehicle back into park, on the freeway, to restart it. There are several stories of near-death experiences trying to get to the shoulder. It was intermittent.
This is relevant because nobody could figure out why, even the manufacturer. They had to install custom self-powered diagnostic equipment in consumer's vehicles to even try and get enough data on it. Turns out that suddenly cutting power to the onboard computers doesn't make them very useful for diagnostic analysis, some dealers even acted like the problem didn't exist because the computer had no record.
They did ultimately get it fixed (replaced an engine management unit or similar) but it does go to show that modern vehicles are getting so complicated that even manufacturers are struggling to stay ahead of it. Tesla's autopilot and similar just takes that up another notch.
A lot of that is for the infotainment systems, but regardless, we're probably going to be seeing automakers struggle more and more with all of the problems that have plagued the software industry for its entire history.
On the one hand, I want to say that situations like this Tesla crash and your minivan example get so much attention because that's rare, so the manufacturers are mostly getting it right and we'll all be fine. On the other, knowing what I know of the software industry plus my past experience as a shadetree, the prospect of that much code running in 65-mile-an-hour murder machines gives me the shivers.
I think this is a key point. SLOC is going way up.
It's sort of natural - it's "why not". We can afford to have more SLOC than ever before. Why bother to try to reduce it, that's not a priority, our priorities are features and fixes. It becomes too much to really understand and manage, and then it's all automated checks and automated tests.
It's very hard to get the average/common developer to spend time thinking about how we can remove stuff, clean stuff up. And managers don't want to pay them to do that. In some places it eventually becomes a big problem, and then it's time for a big rewrite. Most software is throw-away. Website front-ends, mobile device drivers, etc ... mostly thrown-away roughly every year. It's the relatively rare and stodgy classic open-source stuff that's re-used (with minor updates) for decades. OK, I'm getting off-topic ...
As many of the comments (but none of the top-level answers) in that Stack Exchange thread note, most of the code in critical control systems like ECUs isn't written, it's generated from verifiable models. While this doesn't lend complete confidence by any means, I think it can be expected that the defects/SLOC will be much lower and of a very different variety in this type of code than code that's written.
>> most of the code in critical control systems like ECUs isn't written, it's generated from verifiable models.
I'm not sure what "verifiable models" means here. There are a bunch of people using Simulink to "auto-generate" code for some ECUs. I've used it with great results, and I've seen others do that as well. And yet some people still create horrible things with it just like regular code. But not once did I see anything that sounded like formal verification. That doesn't mean it never happens but the way The Mathworks markets their tools, you'd think some kind of magic is done automatically.
On a related note, I always laugh when Mathworks promotes the ability to compare "simulation" results to that of the "generated code". They call it "software in the loop" testing. This is actually getting their customers to verify that the tools they provide work as advertised.
For that kind of development I'd actually prefer to just shut the simulator off always generate code. Simulation should only ever be for plant models, not control systems you want to actually implement in software.
Oh god. You can check any C-code with Polyspace, I've done that. Most of the warnings Polyspace generates (at the time I did it) are not more significant than style - like checking for MISRA compliance doesn't guarantee a lack of bugs.
To me formal verification has always meant proving correctness in a mathematical sense against some sort of formal specification.
None of that really matters though because AFAIK you can't prove a neural net has been trained "correctly" and doesn't contain errors.
> I was so excited about auto-pilot and dreamed of getting in my car and sleeping while it made cross-country trips. So much for that, that seems way far out.
This is more a hijack than a direct critique but I think the general issue is the assumption that we need a private vehicle for that.
Trains are perfectly capable for bringing me from A to B while I am sleeping for years now.
Edit: by "for years" I do mean years. Before that, pressure made longer rides usually more uncomfortable than they are now with modern trains. Probably only because my area has plenty of mountains, though.
In the US you will have lots of time to sleep due to waiting for an oncoming train to clear.
In most parts of the country we don't have dedicated passenger tracks. Amtrak (our passenger line) just rents track time on the freight lines. The problem is that freight lines are often single-track lines, so if there is oncoming freight you just wait until the freight clears. Sometimes this is hours.
Until we have dedicated passenger lines, long-distance train travel is a non-starter here. It was too slow for my 75-year-old uncle on a sightseeing tour. His exact words were "fun, but never again".
I've ridden a lot of Amtrak and it's never been on a single track line. Are you sure that's an actual problem? Long delays, sure, but is that the reason? Seems to usually be train equipment failure.
Having ridden a lot of Amtrak almost makes it less likely that you'd have experienced the single track problem, since quality rail experiences in the US are pretty concentrated such that someone who has good train experiences probably lives/works in an area where that is the rule. On the East coast, pretty good service arguably extends as far south as Richmond, but past that sitting on the tracks is very common. And with that you get the classic downward spiral of crappy transit. Ride a train in SC or GA, and you'll see that anyone with the money for a plane ticket has abandoned rail there.
I mostly rode the train from SC to Boston and back (Southern Crescent) ... it's dual track all of the way. Also DC to Chicago, southern route, dual track all of the way.
I love Amtrak and am willing to put up with the delays, but DC to Chicago stop all the time, especially in the mountains, to wait for higher priority trains to pass. Sometimes it's only for a few minutes at a time, but sometimes it adds up to hours.
Agreed entirely, in some parts of the world you can even drive your car onto a train, have a nice meal, watch a movie and sleep in a luxurious cabin. Wake up the next morning and pick-up your car.
I think demand for trains has diminished which is why they're not as nice as they could be; However I found plenty of nice trains on offer in Europe and Asia.
To be fair, I have used trains on continents other than America. My comment is a passive critique of any system that hasn't been able to build such a transport network yet.
The US was able (and had) a passenger train network. It didn’t stay updated because cargo is a much more profitable business with professional customers. There is not one aspect of passenger travel that is a bonus over cargo for rail companies. As others have pointed out, you get to optimize the rails for either cargo or passengers.
Even in Europe to get from A to B you need to switch trains super often. Like in France you almost always need to go to Paris first before going somewhere else because the network is so centralized around Paris.
But even then you can get to Paris pretty fast. I have had similar problems where I had to go from Flensburg to Karlsruhe to get a train to Paris, as that was the fastest route.
To be fair that's not really a reasonable train trip for people that want to actually get there as opposed to riding a train. Probably above 9 hours compared to a sub-2h flight. Train routes that are more meaningful tend to be better serviced.
I regularly take a train that takes 6.5 hours. The flight would be 45 minutes.
Getting to, into, and away from the airport easily adds another 2 hours though. Also, this route has 3 flights a week compared to a train every 2 hours.
With the train I just get in and enjoy nice service, free power and WiFi and usually get some work done.
Not to mention the difference in my ecological footprint.
My point is that you usually aren't wasting time in an uncomfortable environment. So I don't mind taking the train; actually I usually even prefer it, as do millions of people all over the world.
I'm a big train fan and do impractical train rides quite often. I don't think millions of people share our appreciation of trains. Most people use whatever is faster or cheaper; trains over 3 hours tend to be neither. Paris to Flensburg is both slow and expensive. By most people's standards, the 6h additional duration of the trip offsets the airport annoyance by far.
> Not to mention the difference in my ecological footprint.
Only as clean as the electricity is produced. In many countries electricity production relies on coal, petrol or gas, which are far from being clean energies.
Or Shenyang to Kunming which takes 18 hours on a train.
But what's your point? Trains aren't meant to replace cross-continental flights. It would be better to compare regional routes like LA - SF, which is super inconvenient.
> If at any time your car does something unsafe, you can hit that button.
Tesla actually has that, it's not widely documented but you can do a voice command of "Report bug" followed by a brief audio description and it's uploaded to Tesla.
No idea how it gets triaged but I wish more products had a similar feature.
Really they should have some kind of flag when they get a high number of autopilot disconnects ( or steering overrides) happening in the same place. I had to disconnect every day because it ignored temporary lane stripe markings, and it never improves, despite Tesla’s claim of fleet learning.
Seems an obvious way to improve: look at where your drivers are overriding your system.
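A toy sketch of that "flag the places where drivers override the system" idea: bucket disengagement locations into coarse grid cells and surface the hotspots. The cell size, threshold, and coordinates are arbitrary illustrative choices.

```python
from collections import Counter

def cell(lat, lon, size=0.001):               # roughly 100 m cells
    return (round(lat / size), round(lon / size))

def hotspots(events, min_count=25):
    counts = Counter(cell(lat, lon) for lat, lon in events)
    return [(c, n) for c, n in counts.most_common() if n >= min_count]

# events: (lat, lon) logged at every disengagement or steering override
demo = [(37.4110, -122.0574)] * 30 + [(37.3000, -122.1000)] * 3
print(hotspots(demo))   # -> [((37411, -122057), 30)] -- one cell worth a look
```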
That's disappointing - I'd assumed that every single disengagement would have been logged and uploaded for analysis and use as a regression test. That's what I'd be doing.
I have reported issues using that and also emailed their feedback list, but of course they never follow up to say anything was updated to fix it. For example, it's still unsafe in carpool lanes with motorcyclists lane-splitting, which is legal in CA.
Whoa whoa whoa, "auto-pilot" is NOT self-driving. Tesla goes to great pains to make auto-pilot sound like self-driving while still stringently claiming that it requires constant supervision, like an airplane auto-pilot.
Yes, but my friends that own Teslas actually believe that self-driving cars are almost here.
When I say that it's at least ten years away, and likely much longer because it will take big breakthroughs (strong AI, lots of knowledge about the world, etc.), they look at me like I'm crazy.
I really hope it happens in my lifetime, but I remember talking about Greenblatt's chess program (see [1]) when I was at MIT in the 60's. The AI folks really thought that a chess program would be world champion within 10 years. They finally did it, but it took more like 30 years, and one of the reasons they were successful was that Moore's law eventually allowed a supercomputer (Deep Blue, at the time the 259th most powerful computer in the world) to evaluate 200 million board positions per second[2]. Deep Blue's opponent, world champion Garry Kasparov, was most definitely not evaluating 200 million board positions per second.
I see driving as a much more complex task than playing chess, so maybe 30 years before we will put our children in cars to be driven across town. I will be happily surprised if it arrives sooner.
We'll see about that when the time comes to test self-driving cars during an actual winter with snow, ice, black ice, slush, cars being parked 6" to a foot further into the street, et cetera. Driving around this winter in the upper midwest made me realize just how idealistic (and silly) people that think self-driving cars are 5 years away are.
Waymo is already running fully autonomous cars in Arizona without backup drivers.
It may take 30 years to handle blizzards and every edge case but you can just park the car to avoid those problems temporarily while still serving 99% of the market so they don't need to be solved now.
They scam people into buying a Tesla thinking it's close to self-driving, right here: https://www.tesla.com/autopilot . The reality is that since Oct '16 it's nothing more than glorified adaptive cruise control, found on many common cars.
Update:
Since Oct '16, self-driving software has been an add-on option, and Musk has been tweeting[1] since Jan '17 that it's just months away and that Autopilot is somehow safer than adaptive cruise control. The Oct '16 video even starts with the claim that the driver is only there for legal reasons.
There has been a lawsuit about the above since Apr '17 - Google "Dean Sheikh et al v. Tesla, Inc".
I don't know why you are getting downvoted. All you have said is true. Maybe, people don't like the word 'scam', but essentially they are trying to sell more cars by bamboozling people with technology that is years off.
'I was so excited about auto-pilot and dreamed of getting in my car and sleeping while it made cross-country trips. So much for that, that seems way far out.'
My only experience of automated driving is the feature where you can set a maximum speed. This is useful in the UK since average speed cameras are commonplace on the motorway network nowadays and it is easy for your concentration to lapse and drift over the current limit. The cameras are automated and there isn't much leeway.
However, the speed limits often change within the range of 50, 60, 70 mph depending on local conditions and roadworks, so you can never just 'set and forget'; you need to be alert all of the time since speed limits can change frequently.
It's going to be a long time before I trust a car to do this on its own without my supervision and, as regards going to sleep and letting the car do 'everything' on its own, I don't envisage being able to do this comfortably in my lifetime.
I rented a car that had active cruise control and lane assist. I drove most of the way from Oakland to San Jose without touching any of the controls, except maybe the accelerator once or twice. But? I got to my destination exhausted and in a little pain because I had sat tensed up, ready to take the controls again at a moment's notice. Partly that's because these features are not intended to be a complete self-driving solution. But partly it's exactly what you said - I wouldn't trust it absolutely anyway and would want to stay alert. It's easy for me to stay alert when I'm driving and not just supervising.
If you just need to follow speed limits, Tesla can do it perfectly fine. It knows speed limits for most roads. It would fail if the speed limit is due to construction or speed trap :-)
> I was so excited about auto-pilot and dreamed of getting in my car and sleeping while it made cross-country trips. So much for that, that seems way far out.
If it's any consolation that's not how autopilot on an airplane works. Pilots are expected to remain alert and ready to intervene in case of trouble.
> The car will save the last 60 seconds so the manufacturer can analyze it to debug and figure out what went wrong.
Ideally, this would be sent to the NTSB as well to determine if they need to issue a recall and possibly also shared with the public if the car's owner is okay with that.
Issuing a bug report to the manufacturer is fine if the manufacturer is being responsive, but fixes take time and large companies don't always put a high priority on fixing bugs. Sharing bug information more widely increases their accountability and I think potential buyers have a right to know about safety-critical flaws before they buy a car.
For babies, the car is autonomous already. They have to be strapped in pretty tight in a somewhat reclined position.
Do you mean sleeping sitting up, just like passengers in a car kind of can? I guess people are able to sleep that way. I for one have to be totally exhausted to fall asleep sitting.
One certainly can't hope to sleep lying on one's side under a blanket. Self-driving can't ensure that the car will never experience sudden deceleration. For the same reason you will not see seats facing the back of the car: you could be killed by your own or another passenger's laptop, or burned by your hot coffee.
I can't believe this person kept using it. If I had noticed a bug in auto-pilot and complained about it, I would be way too scared to ever use auto-pilot again. Personally, I never use auto-pilot because driving is piss easy, as it's designed to be.
Perfect self-driving is a nearly impossible feat to accomplish on an unbounded track. I can only imagine automated driving in a system which has no room for error. Examples include: tunnels under the ground, chain links on the ground (as in trolleys, trains, etc.), or anything else that vastly reduces the entropy involved in driving.
With self-driving cars on current roads, it will probably take years to get from 1% error to .1% error, and decades to get from .1% error to .01% error, which isn't even good enough. Perhaps it will take a century or longer to develop the required artificial intelligence to make self-driving cars perfect "enough". There's just too much room for unique problems to spawn. Bounding vehicle freedom seems to be the only way forward.
Your numbers about error percentiles don't make sense. Ideally, you'd want an outcome measure like fatalities per million miles, accidents per 100k miles, not "% error" which is vague.
Furthermore, look at the actual data we have right now. SDC makers actually put out data in California about their "disengagement rate," which is how many times the human drivers took over from the software. Waymo has steadily improved on that front over the past few years. Look at the link below, page 4: they had 63 disengagements over 350k miles. That's 1 per 5.5k miles, so these cars are driving for days without a human takeover.
They will not need their own infrastructure; that would not be economically viable. They will go on the roads we have or they won't go at all. Tunnels are going to be reserved for high-density point-to-point travel, if the Boring Company or others ever get scale...
Then let's add some perspective. You must be referring to this[0] paper. If the average person puts on 1,000 miles per month[1], that means they'd have to deal with a disengagement (a mishap) at least twice a year, which is not acceptable for fully autonomous driving. I'm going to define a "fully autonomous vehicle" as "a vehicle which should not ever require me to sit in the front seat and control it under any conceivable circumstance".
Put differently, I should be able to lay down for a nap in the back seat and wake up at my destination without any chance for disengagement during my entire lifetime. At the current rate of 1 mishap per 5,500miles, I would be dead after about 6 months.
Assuming a human lives to 75 years (we should really be using 75 years minus 16 years, but it's unimportant), a lifetime of driving is about 1,000 mi/mo x 12 mo/yr x 75 yr = 900,000 miles. I don't even want the probability of encountering a mishap to be once per lifetime, let alone once per 6 months. One mishap per 900,000 miles isn't enough, because, on average, I'd encounter one disengagement in my lifetime. Assuming we're striving for a world where 7 billion people can drive without a single incident in 75 years (a vast underestimate), we need the probability of a mishap to be less than once per 7,000,000,000 humans x 900,000 mi/human = 6.3 x 10^15 miles.
1/5e3 is not even close to 1/6e15. We're talking about 12 orders of magnitude in our error rate. I'd say we're laughably far away from our goal. We've got a long way to go.
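Reproducing that back-of-envelope calculation (the "once per lifetime for everyone on Earth" target is the bar set above, not an industry standard):

```python
import math

miles_per_lifetime = 1_000 * 12 * 75                           # 900,000 miles
target_miles_per_mishap = 7_000_000_000 * miles_per_lifetime   # 6.3e15
current_miles_per_mishap = 350_000 / 63                        # ~5,555 (Waymo report figure)

gap = target_miles_per_mishap / current_miles_per_mishap
print(f"gap ~ {gap:.1e} ({math.log10(gap):.0f} orders of magnitude)")
# -> gap ~ 1.1e+12 (12 orders of magnitude)
```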
> I don't even want the probability of encountering a mishap to be once per lifetime,
This doesn't seem reasonable - Waymo's report doesn't dive in depth enough about each disengagement to warrant this sort of extreme reliability.
If "2 disengagements" per year, were at most fenderbenders - something I'd wager humans do way more than twice per year - that would be a very different story than if those 2 disengagements were life threatening. Sure you'd wake up from your nap, but you wouldn't be dead, and at most you'd have to exchange insurance information.
> This report covers disengagements following the California DMV definition, which means "a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.”
So, you're right, there's no clear distinction, but I would further argue that it doesn't matter. Even if only 1/1000 disengagements are fatal, my conclusion remains the same. I think we're splitting hairs at this point, though.
Even if not fatal, I highly doubt a significant fraction of such events (as defined above) would allow me take a nap upon departure and wake up at my destination, so it would still be unacceptable to me. I guess we have to agree on what an acceptable end-game is for fully autonomous vehicles. If you think "waking up on the shoulder exchanging insurance" is acceptable, then that would indeed change the numbers (but by how much? Two, maybe four orders of magnitude?).
Humans get into fender-benders all the time, but surely we'd strive to eradicate this inefficiency in the automated driver. I think this is still an active area of debate; some assembly-line work can be made more efficient with machines, but we've seen humans out-perform machines in other types of work. I think driving tends to utilize more reactive, intuitive "System 1" thinking[0], so I imagine that humans will be vastly better than machines at driving for a very long time.
Are you kidding? A person averages two fender-benders per year?
And no, it's not acceptable by any stretch. A disengage is extremely dangerous to drivers who are lulled into false sense of security through the marketing of.. self-driving cars.
I feel it is ludicrous to worry about getting to a stage where nobody ever has any type of accident.
I'd like to know what the actual rate would be, the 5k miles figure is for conservative thresholds of when control should be handed over, and many problems may be safely solvable (if annoying) by simply coming to a halt or pulling over.
No, the current setup is not OK to release on a large scale, but it's not expected to be, and we're not 12 orders of magnitude from a reasonable point.
That's actually pretty damn terrible. 1 disengagement per 5.5k miles is roughly 1 accident per 5.5k miles if the driver is not paying attention. Look at the Uber accident cam video pointed at the driver. This is the attention level of drivers in so-called "self-driving cars."
And even if he continued to use it in general, why use it where it had acted up several times? Why weren't his hands on the wheel, and why wasn't he extra careful here?
Relying on paint quality and accuracy is going to hurt autopilot systems. Cities and car companies aren't prepared to work together to solve that, and there probably isn't a fast and easy solution to it.
Looks like that would trick a human driver under certain lighting conditions. Between Seattle and Bellevue a stretch of freeway looks just like that and trips me up every time.
"[...] when it entered and traveled in an unmarked gore area, rather than the intended high-occupancy-vehicle (HOV) lane, and collided with a crash attenuator. The 990-foot-long gore, with an unmarked inside area, separates the left exit HOV lane for State Route 85 from the US-101 HOV lane."
Not the same place nor nearby. From the picture on page 24 it's clear that this is the 85/101 interchange in south San Jose, about 20 miles away from the other 85/101 junction where the Tesla accident happened.
The type of accident does seem comparable though.
Investigating further, it looks like that barrier had been down for 7+ days (they say due to weather) [1].
Looking at the image and estimating the traffic flowing through, it's simply amazing that CalTrans is allowed to take a week to fix such things. How long does it take to put up some sand barrels? This situation was made more dangerous because, while many such barriers are used at freeway exits, speeds in the left lane can be much higher. Seven days plus weather delays is a long time for a fix (some states require 3 days).
I get the feeling that Tesla/Waymo and other SDC manufacturers will be more proactive in reporting unsafe conditions as a priority to fix. Such a partnership could improve safety for everyone.
It's like they painted traffic lines to guide vehicles into barriers, and then didn't replace the attenuator after a human driver made the same mistake there that the Tesla Autopilot made a week later.
But it's a solid white line, so you can't cross it anyway. Unless there's a place where the line is dashed earlier on, there's no way you can enter that "lane".
Maybe Tesla told him that the issue was fixed and he could use it again. Why do people trust autopilot at such high speeds? It is still new technology. Let the autopilot park your car; if it makes a mistake, the worst that can happen is a few scratches. In 5-10 years use autopilot on low-traffic streets, and then go to the highway. You can't expect new technology to be safe.
Maybe I’m old fashioned, but I expect all of the technology in my mass-produced car to be safe, where safe is defined as “will not steer my vehicle into a barrier at freeway speeds.”
I don't understand most people's perspective on this. It's like walking out in a crosswalk without looking; sure if you get hit the driver may go to jail, but you'll still be dead (or seriously injured).
Use your own brain; you can't rely on others to keep you alive. At the end of the day, you're the one who's got the most to lose, always.
> Or it can drive over a toddler or pin someone against a wall until they die.
There might be situations where this could happen, but humans have such a bad track record when it comes to driving/parking/paying attention in general that I'd trust a good autopilot over the average person. There will be bugs, but at least we're able to work on them. Improving human driving skills seems unfeasible with any current technology.
I imagine Tesla will make this argument in court. California is a comparative negligence state, so they can claim that some percentage of the negligence should be apportioned to him since he knew that the autopilot didn’t work great in this location.
I’m not saying this is a winning argument, especially since he did so much to bring it to their attention, but just that Tesla would be expected to raise it.
Of course, they’ll probably try to settle this quietly and hope that everyone goes back to worrying about the Uber self-driving fatality...
And CalTrans, for not fixing the crash attenuator (the man did not die instantly, so if the crash attenuator had been intact, he likely would've survived), and for making a confusing lane-like path right into the crash site. https://imgur.com/a/iMY1x
I mean, the crash attenuator had recently been used up, but the dangerous line situation that led to the crash which destroyed the attenuator had not been fixed.
The lesson here is that Tesla does not take complaints seriously and someone died as a result. It sounds just as likely to me that another person using autopilot would have had the same issue at that location.
It's too early to draw that conclusion. Note that the claims in the article are coming from the family and haven't been corroborated by Tesla yet. It's possible they are incorrect about aspects of the story they're telling for various reasons.
This is some intense libertarian leaning. Tesla of course is responsible. Their marketing is full-bore "Welcome to self-driving car country." They're selling the feature, they are responsible.
Yeah! Isn't it obvious that a feature named "autopilot" means you have to watch it like a hawk, keep your eyes on the road and your hands on the wheel?
Yes, it is obvious. What do you think a plane pilot does when autopilot is on? He watches to make sure everything is alright, and intervenes when necessary.
Not really. He sure isn't expected to be able to jump in with less than a second's notice to override what autopilot is doing. And he typically isn't there to make sure the autopilot isn't doing something stupid. He's there to watch for conditions that the autopilot can't handle...and those typically have plenty of warning time in a plane. And a pilot is a trained professional.
The expectation that the driver will jump in to prevent a car's autopilot from steering the car off the road into an obstruction is ridiculous. It's one thing to expect the driver to stop for on-the-actual-road dangers. This wasn't.
We don't really know what happened yet, but if it was the autopilot and it did steer into the barrier, this is at least 95% Tesla's fault.
When a plane's ground proximity alert sounds, the pilot is supposed to act immediately to prevent a crash. He cannot be away from the cockpit, and he cannot finish the level on his smartphone first. The same goes for a car that is not sold as fully autonomous: if you are in the driver's seat, you are responsible for the final decisions on steering and braking, not the technology. The technology may help you and prevent some mistakes, but it was never promised or advertised to help in 100% of cases.
As we see in the latest Tesla report, there were multiple warnings to which the driver should have responded. This means the problem was detected with enough time to react, and the autopilot was not able to solve it on its own. So it does not make sense to blame the autopilot; the technology behaved as expected (when a problem cannot be solved, report it to the driver as soon as possible so they can intervene).
What Tesla can actually be blamed for is drivers' misunderstanding of the autopilot's capabilities. It's not an algorithmic bug but a usability one. Perhaps the warning should be issued earlier or more clearly. Perhaps drivers should receive some training to act as supervisors of the system rather than passengers, or there should even be a special driving license for this type of car.
Nevertheless, the main fault was neither the driver's nor Tesla's. The safety of this road should be guaranteed by the authorities in charge, and the signage and lane markings should be made adequate for drivers; they clearly failed to do so.
Wow you did that investigation quickly. You should let the NTSB know your results.
Seriously though....I doubt it. The report said he was told to put his hands back on the wheel "earlier in the drive". I seriously doubt he would have had time to react in this case. This is not the same as a plane...there was no "ground alert" or anything like it. From what I can tell, it was doing its job then steered right into a wall. That sort of thing simply doesn't happen with a plane....when autopilot is active on a plane, there is never a situation that requires instantly noticing something is wrong, and correcting it, within a few seconds. Never.
And usability bugs, if that is what you want to call this, are indeed Tesla's fault. They, not the driver, have the resources to test and understand how human attention works (i.e. "vigilance deficit" / the "handoff problem").
>What do you think a plane pilot does when autopilot is on?
In a commercial jet - Goes to the toilet? Eats lunch? Chats to the co-pilot?
Sure, none of those might be right, but I'd warrant that a lot of people would answer with things that move the pilot's concentration away from piloting, "because the auto-pilot is doing the flying". Honestly, I'd assume they do any of those things, with the proviso that the co-pilot stays in the cockpit when the pilot leaves.
Which is why I’ve always maintained it was a very misleading name and would lead to dangerous accidents, which it seems to have done on a number of occasions.
Of course people may have abused the system if it had a more sensible name anyway. But names competitors use like Automatic Cruise Control or SuperCruise seem much more descriptive of the system’s actual abilities.
It’s a car, not a plane. Why is the definition from aviation more relevant than the definition from science fiction (which also happens to be closer to the plain English understanding of the phrase)?
Because it's science fiction. As in, "this is not real." But well, I guess that if the Moon landing was fake and wrestling is real, then science fiction is also real ;) (In other words, perception matters more than reality.)
That's entirely speculation at this point. Tesla are the only ones who could confirm or deny that, and it's possible they will never be able to determine it, depending on the condition of the computer hardware.
I'd definitely blame an airline pilot who crashed because he relied on autopilot in a situation he knew was not handled well by the autopilot. (I would also blame the autopilot.) I don't see why the driver of a car with any kind of semi-automated system should be held to a lesser standard.
It sucks that the driver died, and it sucks that the Tesla autopilot system had problems handing that kind of situation, but that does not mean the driver is blameless. He put himself and the people in the cars around him at risk by using the autopilot feature on a stretch of road where he knew it did not work well.
> I don't see why the driver of a car with any kind of semi-automated system should be held to a lesser standard.
Because the driver didn't receive any training from the manufacturer. Airplane pilots, in contrast, receive a ton of training, right down to how to fly a specific type of aircraft (single-engine, twin-engine, instrument flying, etc.). Additionally, the manufacturer will provide training to pilots on how to operate any nifty features of the commercial aircraft.
I don't believe Tesla provides any training whatsoever on how to use these features. And I'm not aware of any mechanisms preventing untrained users from activating them. A tutorial that you can click through does not count, because you don't establish rapport with the trainee like you would in person-to-person training.
Back when cars were being commoditized the dealer would often provide training to new drivers. And in all states new drivers are required to take a practical test to demonstrate that they are competent to drive. Does Tesla require their users to prove any sort of understanding or competence before they unlock Autopilot?
You might argue that requiring training sets a dangerous precedent, but users need to be made aware that the driver assistance systems are not foolproof, and the only foolproof way to do that is to require them to attend a training.
This actually happened. A glitch with the 737-800 [0] radar altimeter caused the aircraft to go into flare (touchdown) mode at altitude, resulting in the jet basically dropping out of the sky with rapidly decreasing airspeed.
They should have been able to fly manually and safely land with a faulty radar altimeter. It is likely the crew didn't understand the significance of the fault, even though Boeing had issued previous warnings.
Pilots have extensive aviation training, the general population thinks of "autopilot" as a thing that will automatically pilot your vehicle. What did Tesla think would happen?
If a plane crashes, the survival rate is pretty low. Most likely everyone on the plane is dead. People get into car accidents often enough to feel that crashing one's car rarely leads to death. Walter probably felt safe enough to use autopilot knowing that if the car crashed, he could still walk away from it.
Note the study refers to “airplane accidents,” not “plane crashes.” I suspect that the vast majority of the airplane accidents included in the denominator wouldn’t be described by the vast majority of people as “plane crashes.”
The NTSB doesn't split aircraft incidents into "accidents" and "crashes." Note that the report specifically mentions TWA Flight 800 as an accident, though the loss of life on that flight was 100%.
I honestly worried about that when writing the comment. The car should have avoided it (based on Tesla’s usual claims).
But Tesla and everyone else says the owner is still ultimately responsible because AutoPilot isn’t a 100% situation.
If my car did something funky like randomly accelerate or turn hard at a given intersection and the manufacturer refused to fix it I’d stop driving through there.
Why push your luck?
I truly don’t like blaming the victim. I’ve been very critical of Tesla lately for their claims and safety issues.
But I don’t understand this man’s decision at all. It doesn’t seem reasonable.
I wish I knew why he kept using the system in that area.
The ONLY idea I can think of is that when the car got an update he would try again to see if it was fixed, and one of those tries was sadly the last.
If your car randomly accelerated you would "stop driving through there"? What does that mean? You wouldn't file a class-action lawsuit like Toyota was hit with a few years back? That's faulty engineering. That's a manufacturer's defect. That's why we have regulations to protect the "huddled masses" such as yourself.
We adapt to weird behavior all the time. I once found my brake pedal went right to the floor while driving. Then it fixed itself. I drove carefully after that but still kept using it. The other day my engine started revving up uncontrollably until I pulled over and turned it off. It's been fine since. I still drive it. Would you blame someone if their check engine light was on or the steering felt a bit wobbly?
100% serious answer- if what you are saying is true, and you do not immediately stop driving your car on public roads and take it to a mechanic for a thorough inspection, you are toying with both your own and other people's lives. Cars do not fix themselves, and brakes that drop to the floor are seriously compromised. The next time that happens, there may not be enough fluid left in the reservoir to repressurize the system.
As for blame, the check engine light is for emissions gear and will not affect safe vehicle operation. A wobbly front end, however, can indicate a serious problem, and yes, it is the driver's responsibility to operate a safe vehicle on public roads, and it would in fact be their fault if something happened because they did not perform proper maintenance.
A pedal that went to the floor and then "fixed itself" probably means you lost a brake pad and are now using the piston/caliper to stop (it should make a grinding noise, and your rotor/piston/caliper won't last long).
I'd get this fixed; if I'm right, you're going to lose half of your car's braking power soon. (Pretty much all cars have a dual-circuit braking system: if something fails you lose one front and one rear brake, but not everything.)
I don't even think that even applies in this case.
If TFA is correct and there is a problem in this particular spot and you still rely on autopilot then who is left to blame except the person who entrusted their life to a system known to be faulty?
He wouldn't be a victim in that case. If he had killed someone else in addition to himself, I'm sure you would be more keen on holding him responsible.
There are definitely times when the victim is to blame - in this case I think it's too early to assign blame to anyone (or anything?) so if your warning was against speculating on the cause of the crash, I'm in agreement.
Rightness is defined with respect to a system of rules. If being "right" leaves you dead, were you really right? Maybe in a legal sense, but is that the system that matters?
It's precisely because you're not dead yet that you should act rightly according to the correctly chosen system.
I'm a biker. I can drive, but only ever drive rentals; motorcycles are my primary mode.
A sense of righteousness for being in the right on a bike will get you killed. You ride within the margin of not just your own error, but also those of other road users, and your own machine, with your best judgement and risk preference tradeoff.
If your machine has a known tendency to do something bad in some situation, you avoid that situation, even if you have someone else to blame. Blame doesn't keep you alive.
"A sense of righteousness for being in the right on a bike will get you killed." Well put. When I was a boy growing up in Milwaukee in the 50s/60s, there was a public service announcement on TV for the Wisconsin Dept. of Motor Vehicles that ended "Don't be dead right." I liked it then and find its lesson has stayed with me and helped me avoid trouble many times outside the driving space.
That seems odd to me. If it happens so much how hard is it to have techs just drive a car through the area repeatedly on various days to get sensor data and reproduce it?
If they only tried at the dealership and not the actual place... that seems like a big mistake on their part.
"Man you have got to be ready — it makes mistakes, it loses track of the lane lines. You have to be on your toes all the time," says Wozniak. "All Tesla did is say, 'It is beta so we are not responsible. It doesn't necessarily work, so you have to be in control.'
Unlike people like Elon and Jobs, you can safely calibrate your bullshit meter with Woz. He's not as famous because the press don't generally like that.
I love Tesla to death and in most cases will defend them beyond the point of reason.
But I took a test drive in a Model S for the first time earlier this year and almost immediately noticed Autopilot's extremely unreliable behavior: it would swerve out of lanes in ordinary situations that should have been easy to handle. The second I saw that, that was it: I would never use it again before many years of testing and improvement had taken place. No way am I gambling my life on a clearly incomplete feature just because it’s cool. Fuck that.
Of course Tesla is fairly safe behind their disclaimers and warnings, and to be honest I think it may be impossible to develop such a system without putting it into the wild before it’s perfect.
But for me, personally... I’ll let other people choose to be the guinea pigs. The risks are all too obvious. Continuing to use the feature is very dangerous. Do it knowing this may very well happen to you.
> Of course Tesla is fairly safe behind their disclaimers and warnings
I don't think so. All it will take is a Tesla plowing into a sidewalk full of people, and no hand-wavy gesture at a license agreement is going to make the political and legal pressure go away.
After Tesla dumped Mobileye for their own system a year or two ago, autopilot performance was worse. But in the past few weeks they supposedly pushed out an update that dramatically improved it, to the point that it's finally unambiguously better than the old Mobileye system.
I still wouldn't trust it. I don't believe their current hardware has enough processing power to do the job. But it should be performing better now than even a few months ago.
I have a 2015 Tesla Model S and I have never seen the behaviour you describe. I use Autopilot a lot, on the grounds that it's good to have two of us paying attention.
They dropped the system used in their older cars because their supplier for it decided they were too reckless and refused to do business with them anymore. Happened around 2016, I think. Since then they've been using an in-house system that doesn't work so well.
> I love Tesla to death and in most cases will defend them beyond the point of reason.
Wow, that attitude strikes me as a little creepy. If you admire Musk to the point of hero worship, that's perfectly fine. But why worship a corporation?
Tesla is a metonym for Musk. Everything they do is by his direction unless he explicitly says "I was unaware of this and will be putting a stop to it immediately," which he has done multiple times. The corporation is ultimately an extension of his will, as Apple was for Jobs.
It'll be interesting to see if anything comes of the issue with the already-collapsed crash barrier and what CalTrans says about it. That sort of thing is there for a reason, and to be left in a crushed state for any period of time is bad.
In Texas, I've seen crushed barriers remain collapsed for weeks on end. Either that or they are just hit again right after being replaced. Which tells me, it's a poorly designed road and causes confusion for drivers. Which in fact may be what this Tesla crash turns out to be.
Exactly. I drive past this barrier every day. The problem is that the left two lanes on 101 are carpool/EV lanes, so Tesla drivers just zoom down them. At this particular exit, the left carpool lane leads to an HOV flyover that puts you on 85. If you are not paying attention (i.e. on Autopilot, flying past traffic), you will end up on a completely different highway! I very often see people swerve out of the flyover lane back onto 101, so my initial thought was that he tried to disengage too late, either to get back on 101 or to catch the flyover (not clear which).
Before this rumour spreads: the latest Tesla statement [1] is worded in a deliberately misleading fashion, and the only thing we can tell from it is that, according to the software, his hands were not on the wheel for the six seconds before the impact.
>The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision
But unfortunately the real world is full of poorly designed, under-maintained roads driven on by drivers who are looking at their phones or have mechanical issues without warning. If the self-driving cars can't handle it, then they shouldn't be on the roads. The roads will never be perfect.
I didn't understand that part of the article.
Does the road have a crumple zone in it, and it was already consumed in a previous incident, and then in this incident the car collided with the already-crumpled zone?
The crash was at the 101 and 85 flyover for the carpool lane. Next to that lane there is a sign warning of upcoming construction closures... in fall 2017. Does that answer your question?
Yeah, what's up with that? There's a sign you pass getting on 101 North from Oregon Expy that says that the ramp is going to be closed on some particular day in 2015.
One thing to remember in all the discussion about this accident: there is no information available on whether the autopilot was active at the time of the crash.
It sounds like they are saying the driver had 5 seconds to notice the problem and react. It is scary to think that when you are on autopilot you may be 5 seconds away from death at any time. Better not take those eyes off the road!
If you take the human factor away, a warning five seconds ahead of time is quite a lot. The car could have slowed down to a possibly non-fatal speed. By asking the human to act, the autopilot is actually throwing away at least two of those precious five seconds.
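A rough back-of-the-envelope sketch of that point (the 65 mph starting speed and the ~8 m/s^2 braking figure are my own assumptions, not anything from the investigation):

    # Sketch: how much speed could braking alone scrub off in the warning window?
    # Assumed numbers (not from any report): initial speed ~65 mph,
    # sustained hard braking of ~8 m/s^2 on dry pavement.
    MPH_TO_MS = 0.44704

    v0 = 65 * MPH_TO_MS        # ~29 m/s
    decel = 8.0                # m/s^2

    for t in (3.0, 5.0):       # seconds of braking actually available
        v = max(0.0, v0 - decel * t)
        print(f"after {t:.0f}s of braking: ~{v / MPH_TO_MS:.0f} mph")
    # -> after 3s: ~11 mph; after 5s: 0 mph (the car stops in ~3.6s)

Even using only the three seconds left after a two-second handoff, the car could in principle have shed most of its speed.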
Came here to say this. It's unfair to start talking about autopilot mishaps in this case, what the family is saying, etc., when there is NO evidence yet as to whether Autopilot was engaged.
Furthermore, if the barrier had already been hit by another car (which is why it was collapsed prior to the Tesla crash), this may simply be an area where the road is poorly designed and causes a lot of human error. This happens all the time; in fact, road design is a significant cause of vehicle crashes.
SW/HW bugs happen; fact of life. More concerning is Tesla denying this was ever reported to them...
I had a stuck driver-side mirror fixed, and they put in the service notes that they re-filled all my tires, allegedly as required by some CA law. Later that week I got an alert that my tire pressure was low and had to get it pumped up, so obviously they didn't do that despite claiming they had...
> More concerning is that Tesla denying this was ever reported to them.
This is how Tesla reacts to all bad publicity. Bad review on battery life comes out? Data dump indicating the reviewer might've done something wrong. Accident? Statistical dump showing that most cars don't do this. I like this technique more than generic "we can't comment" responses, but I'm pretty confident they heavily cherry pick, looking for something that people who glom onto data can see and say "oh good, Tesla's right."
Yup Tesla knows their audience - it's their semi-rabid owner and aspiring owner fan base. Throw out some logs or stats without context and let them do the dirty work for you.
Walter took it into the dealership to address the issue, but they couldn't duplicate it there.
Why not get a mechanic to ride along with him to that location, perhaps with extra diagnostic equipment connected? I could see how a bug that is dependent upon being at that location would certainly not be reproducible somewhere else.
Here's an automotive service booklet from almost 70 years ago which recommends the same thing for troubleshooting:
Before the crash, Walter complained that "7-10 times the car would swivel toward that same exact barrier during auto-pilot".
...and after 7 to 10 times, he still didn't learn his lesson? That's pretty stupid if you ask me. If my car does something weird at a particular stretch of road, especially 7 to 10 times, you can bet your bananas that I'll be paying a lot of attention on that stretch of road. If my "autopilot" (seriously, Tesla should stop using that name) isn't reliable in certain circumstances or places, then - guess what? - I WON'T BE USING IT THERE. Why blame Tesla (I'm no Tesla fan) when the operator of the vehicle refused to operate it properly in the face of prior experience? Poor guy, and I feel for his family, but come on, what a dumbass.
I really don't like Autopilot. It's good enough to make drivers trust it and not pay attention, but not good enough to avoid killing people when they do that. And when there's an accident, Tesla comes out and says "the system warned the driver to put their hands on the wheel" or something similar. Unless a car can 100% self-drive, driver aids should require the driver to have hands on the wheel and be paying active attention at all times.
This is part of why I haven't considered a Tesla. If I get into a collision in any other manufacturer's vehicle, the manufacturer's PR team won't impugn my honor in the court of public opinion.
It seems like the Tesla autopilot is very similar in capabilities to other manufacturers' active lane keeping, adaptive cruise control and active collision avoidance braking systems, however the marketing and user behavior is much different.
There's no expectation that a Pacifica with all the bells and whistles is going to do a good job driving for you with no hands, but if somebody stops suddenly, it will too.
Googling surprisingly didn't give me a definitive answer on this - how long does autopilot let you go before warning, and then before stopping the car if you don't have your hands on the wheel?
Highway driving is supposed to be the "easiest" problem for self-driving cars to solve, since there are fewer edge cases, less turning, etc., but it's also the most dangerous type of driving. You are much more likely to die going 65 mph than 25 mph.
I think deploying self driving cars at <=25mph speeds at first would be wise. Personally, I wouldn't risk letting a car take over at high speeds until there is a longer track record of safety.
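For a sense of why the speed matters so much, here's a minimal sketch; kinetic energy is only a rough proxy for injury risk, and the 65/25 figures are just the numbers from the comment above:

    # Kinetic energy scales with the square of speed, which is one reason
    # high-speed crashes are so much deadlier. The ratio is independent of mass.
    v_highway = 65.0   # mph
    v_city = 25.0      # mph

    ratio = (v_highway / v_city) ** 2
    print(f"~{ratio:.1f}x the kinetic energy at {v_highway:.0f} mph vs {v_city:.0f} mph")
    # -> ~6.8x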
The fact that it happened to him "several times" and not to others might indicate a hardware/sensor issue specific to his vehicle: a sensor slightly misaligned, or not working to the same tolerance? Pure speculation, I know. Also, what does the 200 trips/day refer to? All Tesla vehicles? How about his specific year and model, software version, and configuration (both as equipped and as set by the driver)?
Is there a specific system for traffic sign detection? I'd have thought you could have a system that is dedicated to spotting traffic signs in the current country with a significantly higher accuracy than cat detection.
Even a small part of a sign should be enough. They're designed to be easy to spot.
It seems like we just set neural networks up to recognise all objects and assume they'll recognise simple objects too. However, humans typically learn from simple cartoons first and then layer complexity on top, rather than the other way round.
Edit: should have done some googling before opening my mouth [0] [1]
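For what it's worth, a dedicated sign classifier really is a small, constrained problem compared with general object recognition. Here is a minimal sketch, assuming pre-cropped 32x32 candidates and the 43 classes of the German GTSRB benchmark; the layer sizes are arbitrary, and this is obviously not what any carmaker actually ships:

    # Minimal sketch of a dedicated traffic-sign classifier (not a detector):
    # it assumes sign candidates have already been cropped to 32x32 patches.
    # 43 classes matches the GTSRB benchmark; all layer sizes are arbitrary.
    import torch
    import torch.nn as nn

    class SignClassifier(nn.Module):
        def __init__(self, num_classes: int = 43):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, x):
            return self.head(self.features(x))

    # Smoke test on a random batch of "crops".
    logits = SignClassifier()(torch.randn(4, 3, 32, 32))
    print(logits.shape)  # torch.Size([4, 43])

The hard part is finding the sign in a full frame at highway speed and in bad light; the classifier itself is almost the easy bit.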
Certainly CalTrans is responsible for not replacing the barrier; expect a lawsuit/settlement. Tesla will be found liable too, and that'll be a jury-trial civil suit. Why? Even though he reported that AP failed at that interchange and kept using it anyway, drivers and juries expect safety features to work as advertised.
I am pretty mystified why Tesla is even running Autopilot on their cars. They already have more demand than they can handle, and I don't think anyone buys a Tesla because of the Autopilot. They are opening themselves up to huge court claims at the same time they are low on money. Just keep running tests, and maybe do a pilot for a few years. Isn't Waymo sort of in the lead right now? They aren't running their software on millions of vehicles but seem to be progressing okay.
It seems all these companies are in a big rush and being slapdash. Maybe it's a disconnect between the engineers and the execs/shareholders. It doesn't even seem profit-driven...almost fear driven. ("we don't wanna be left behind.")
It allows them to develop self driving technology without the expense of dedicated cars and paid drivers, and without any liability (because drivers take the risk).
Am I the only one that thinks that drivers with auto-pilot and back-up drivers for driver-less cars should be ready to drive at any instant? As an engineer, I'm seeing these features as beta at best.
The problem is that that's not how human cognition works. If the autopilot is working well, your brain will inevitably become accustomed to the lack of stimulus. Ironically, I think these systems have a kind of uncanny-valley area: they are probably safest when the autopilot is poor or great, but not in the middle.
I understand that... and that's another problem we've yet to solve. What you've described is what led to the crash of the Korean airliner that undershot the runway at SFO a few years ago. In that case, they would have been better off letting the plane land itself, but that's not SOP.
On the highway I think it even detects potential crashes ahead of you (I once had two cars in a three-lane stretch nearly collide when one tried to switch lanes).
Well, back to the story. There's a street near me where cars park on the left and right in a zig-zag pattern. There is one spot where it always warns me about a crash...
IMHO, they need to get autopilot at least an order of magnitude better than human drivers statistically, before releasing this tech, because this sort of news is extremely bad public-perception-wise.
I don’t think it’s reasonable to expect a Tesla driver to always have his hands on the wheel during autopilot.
Unless someone has ever used an auto pilot in something like a Cessna, that person would probably have a wildly overoptimistic idea of what an autopilot does. Even on a passenger jet it's really not that smart, there's just lots more volume to explore so it's hard to kill everyone.
A better analogy would be cruise control. It controls essentially one variable, as does lane keeping. You combine a couple of these things and you think it's smart, and it isn't. We learned this (the edge cases between single-variable trackers) about 40 years ago in aircraft: there are places in the flight envelope that combinations of single-variable trackers will still let you fly into, but that will also kill you.
There used to be the same problems with cruise control, though. People thought it would brake automatically and steer around corners, and would get into accidents that proved their assumption wrong.
Yeah, autopilot in airplanes is similar to what Autopilot is on Teslas. Not sure what else you could call it without that also being confusing. Autopilot is a tool to stay in your lane and handle common tasks like keeping speed, avoiding obstacles, and keeping distance, though there are still edge cases, like this one appears to be, that can be more dangerous than not using it, since the systems are not fully autonomous.
Even with airplanes, if something gets in the way, autopilot won't always save you and can only alert you when the situation gets bad i.e. oncoming airplane or obstacle, altitude, speed etc. We fly in planes with autopilot and we are safer for it, but pilots/drivers still need to be alert and operating the plane. Teslas aren't fully autonomous and probably can't truly be until everything is connected and more cars on the road are autonomous for expected behavior. I trust autopilot in planes but still want a pilot. Most likely autopilot will be more useful in large buses, trucks, shipping, boats, airliners than individuals as you will still need a driver most of the time. Even the Uber crash could have possibly been avoided with a more alert driver.
One area that may cause more crashes in the interim is trusting the software too much: an autopilot that works 99.999999% of the time can lead drivers to get comfortable with technology that still has edge cases that can endanger them. That complacency was a factor in both the Tesla crash and the Uber crash.
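To put that kind of reliability number in perspective, a back-of-the-envelope sketch: I'm reinterpreting the quoted percentage as a per-mile failure rate, and the fleet-miles figure is just a rough stand-in for total US annual vehicle miles, so both numbers are assumptions:

    # Even very high per-mile reliability still yields regular failures at fleet
    # scale; that is exactly when driver complacency is most costly.
    # Both numbers are illustrative assumptions, not Tesla figures.
    failure_rate_per_mile = 1e-8      # i.e. "works 99.999999% of the time"
    fleet_miles_per_year = 3.2e12     # roughly all US vehicle miles in a year

    expected_failures = failure_rate_per_mile * fleet_miles_per_year
    print(f"~{expected_failures:,.0f} failures per year")   # ~32,000

None of that excuses complacency, but it shows why even very rare failure modes surface constantly once the fleet is large.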
I think a huge overlooked part of the failure here was the previous accidents at this part of the road and the lack of repair. We have a serious infrastructure issue with disrepair, non automated/untracked driving of humans probably ends up badly in these areas but goes unnoticed or is suppressed as road design can be a big factor, this automated crash highlighted an issue here at this offramp/fork that probably would go unnoticed and cause more issues. If nothing else, automated driving will have the data to fix these bad areas of our infrastructure and double down on safety and protections.
Because it implies something regardless of the product's ability to actually deliver. The connotations were so strong that I found it bizarre they went with it rather than holding it back until the product matured. However, it really comes down to the simple fact that it hasn't cost them enough money in penalties and lawsuits to force a name change. They are in essence bluffing their way through this and have yet to get called on it.
It’s also a car, and not a plane. Do they also call the wheels propellers? It’s a misleading term for the majority of people who don’t fly their own planes.
Why does no one talk about the other victims of car accidents? There are roughly 3,000 per month on average in the US alone. But Uber and Tesla make headlines when someone dies, and there are what, 3-4 victims in total after all these years? It's hardly even worth mentioning.
There is zero information yet, and it's all everyone is jumping onto. Last week the same thing happened to a Tesla that wasn't even equipped with Autopilot.
My opinion: EVs, self-driving or human-driven, need an ejection mechanism for their batteries. Petrol has the advantage that it only may ignite, whereas damaged batteries are almost guaranteed to. Ideally it would be an active system (launching the batteries no more than a meter away), but that could fail under the same conditions that damaged the batteries. An alternative would be a passive system made of materials known to melt when exposed to a lithium fire, providing a few centimeters of separation from the cabin. Either way, the current situation is not ideal.
I'm not going to dismiss your opinion out of hand, though I certainly don't agree with it.
If I may: do you realize that, in this catastrophic wreck, the fire only started, slowly, after the driver had been taken away by EMS?
One big difference between a fire with batteries and gas is this: if gas starts to burn, most likely all of it is going to burn, quickly and often explosively.
None of these are the case with the batteries in Teslas. Only the damaged ones are likely to burn, slowly, and not explosively.
So your system would throw a burning pile of lithium into the woods to start a forest fire, into oncoming traffic to cause another crash, into a pedestrian on the sidewalk, etc?