NTSB: Autopilot steered Tesla car toward traffic barrier before deadly crash (arstechnica.com)
509 points by nwrk on June 7, 2018 | 521 comments



NTSB:

• At 8 seconds prior to the crash, the Tesla was following a lead vehicle and was traveling about 65 mph.

• At 7 seconds prior to the crash, the Tesla began a left steering movement while following a lead vehicle.

• At 4 seconds prior to the crash, the Tesla was no longer following a lead vehicle.

• At 3 seconds prior to the crash and up to the time of impact with the crash attenuator, the Tesla’s speed increased from 62 to 70.8 mph, with no precrash braking or evasive steering movement detected.

This is the Tesla self-crashing car in action. Remember how it works. It visually recognizes rear ends of cars using a BW camera and Mobileye (at least in early models) vision software. It also recognizes lane lines and tries to center between them. It has a low resolution radar system which ranges moving metallic objects like cars but ignores stationary obstacles. And there are some side-mounted sonars for detecting vehicles a few meters away on the side, which are not relevant here.

The system performed as designed. The white lines of the gore (the painted wedge) leading to this very shallow off ramp become far enough apart that they look like a lane.[1] If the vehicle ever got into the gore area, it would track as if in a lane, right into the crash barrier. It won't stop for the crash barrier, because it doesn't detect stationary obstacles. Here, it sped up, because there was no longer a car ahead. Then it lane-followed right into the crash barrier.
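
A minimal sketch of that failure mode, in Python (this is not Tesla's code, just the two behaviors described above made concrete: stationary radar returns get discarded, and steering centers between whatever line pair the camera currently treats as the lane; all names and numbers are made up):

    def moving_targets(radar_targets, ego_speed_mps):
        """Drop radar returns whose ground speed is ~0 (barriers, parked cars)."""
        return [t for t in radar_targets
                if abs(ego_speed_mps + t["relative_speed_mps"]) > 1.0]

    def plan(left_line_m, right_line_m, targets, ego_speed_mps, set_speed_mps):
        """Center between the detected lines; resume set speed if no lead car."""
        lateral_error = (left_line_m + right_line_m) / 2.0  # midpoint of the "lane"
        steer = -0.1 * lateral_error                        # proportional steering toward center
        lead = min(targets, key=lambda t: t["range_m"], default=None)
        if lead is None:                                    # nothing left to follow
            return steer, set_speed_mps                     # accelerate back to set speed
        return steer, min(set_speed_mps, ego_speed_mps + lead["relative_speed_mps"])

    # At the gore point: the diverging gore lines get reported as lane edges, and
    # the crash attenuator shows up only as a stationary radar return.
    ego = 29.0                                              # ~65 mph, in m/s
    barrier = {"relative_speed_mps": -29.0, "range_m": 120.0}   # ground speed ~0
    print(plan(-2.5, 2.5, moving_targets([barrier], ego), ego, 31.6))
    # -> the barrier is discarded, the car steers to the center of the gore and
    #    speeds up toward ~70.8 mph, exactly the sequence in the NTSB bullets.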

That's the fundamental problem here. These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design. This is not an implementation bug or sensor failure. It follows directly from the decision to ship "Autopilot" with that sensor suite and set of capabilities.

This behavior is alien to human expectations. Humans intuitively expect an anti-collision system to avoid collisions with obstacles. This system does not do that. It only avoids rear-end collisions with other cars. The normal vehicle behavior of slowing down when it approaches the rear of another car trains users to expect that it will do that consistently. But it doesn't really work that way. Cars are special to the vision system.

How did the vehicle get into the gore area? We can only speculate at this point. The paint on the right edge of the gore marking, as seen in Google Maps, is worn near the point of the gore. That may have led the vehicle to track on the left edge of the gore marking, instead of the right. Then it would start centering normally on the wide gore area as if a lane. I expect that the NTSB will have more to say about that later. They may re-drive that area in another similarly equipped Tesla, or run tests on a track.

[1] https://goo.gl/maps/bWs6DGsoFmD2


One more thing to note - anecdotal evidence indicates that Tesla cars did not attempt to center within a lane prior to an OTA update, after which multiple cars exhibited this "centering" action into gore sections (and thus required manual input to avoid an incident) on video.

To me, that this behavior was added via an update makes it even harder to predict - your car can pass a particular section of road without incident one thousand times, but an OTA update makes that one thousand and first time deadly.

Humans are generally quite poor at responding to unexpected behavior changes such as this.


And this is exactly why all of these recent articles about how "great" it is that Tesla sends out frequent OTA updates are ridiculous. Frequent, unpredictable updates with changelogs that just read "Improvements and bug fixes" are fine when we're talking about a social media app, but entirely unacceptable when we're talking about the software that controls a 2-ton hunk of metal flying at 70 mph with humans inside of it.

The saying has been beat to death, but it bears repeating: Tesla is a prime case where the SV mindset of "move fast and break things" has resulted in "move fast and kill people". There's a reason that other vehicle manufacturers don't send out vehicle software updates willy-nilly, and it's not because they're technologically inferior.


This isn't an issue specific to Tesla as all automakers are now making cars that are more and more dependent on software. So what is the right way to handle these updates? You mentioned a clear flaw with OTA updates, but there are also numerous advantages. For example, the recent Tesla brake software issue was fixed with an OTA update. That immediately made cars safer. Toyota had a similar problem a few years ago and did a voluntary recall. That means many of those cars with buggy brake systems were on the road for years after a potential fix was available and were driven for billions of potentially unsafe miles.


>This isn't an issue specific to Tesla as all automakers are now making cars that are more and more dependent on software.

Cars have been dependent on software for a long time (literally decades). This isn't something new. Even combustion engine cars have had software inside of them that controls the operation of the engine, and this software is rigorously tested for safety issues (because most car manufacturers understand that a fault in such software could result in someone's death). Tesla seems to be the only major car manufacturer that has a problem with this.

>So what is the right way to handle these updates?

The way that other vehicle manufacturers (car, airplane, etc) have been doing it for decades is a pretty good way.

>You mentioned a clear flaw with OTA updates, but there are also numerous advantages. For example, the recent Tesla brake software issue was fixed with an OTA update. That immediately made cars safer.

There is no evidence that said OTA update made Tesla cars any safer. There is evidence that similar OTA updates have made Tesla cars more unsafe.

The brake OTA that you mentioned has actually potentially done more harm than good. Tesla owners have been reporting that the same update made unexpected changes to the way their cars handle/accelerate in addition to the change in braking distance. These were forced, unpredictable changes that were introduced without warning. When you're driving a 2 ton vehicle at 70mph, being able to know exactly how your car will react in all situations, including how fast it accelerates, how well it handles, how fast it brakes, and how the autopilot will act is crucial to maintaining safety. Tesla messing with those parameters without warning is a detriment to safety, not an advantage.


>Cars have been dependent on software for a long time (literally decades). This isn't something new. Even combustion engine cars have had software inside of them that controls the operation of the engine, and this software is vigorously tested for safety issues (because most car manufacturers understand a fault with such software could result in someone's death). Tesla seems to be the only major car manufacturer that has a problem with this.

The TACC offered by most (if not all) manufacturers can't differentiate between the surroundings and stopped vehicles. I wouldn't be surprised if their Lane Keeping Assist (LKA) systems have similar problems.

https://support.volvocars.com/en-CA/cars/Pages/owners-manual...

>WARNING When Pilot Assist follows another vehicle at speeds over approx. 30 km/h (20 mph) and changes target vehicle – from a moving vehicle to a stationary one – Pilot Assist will ignore the stationary vehicle and instead accelerate to the stored speed.

>The driver must then intervene and apply the brakes.


This comparison just sold me on how morally wrong what Tesla is doing is. Intentionally misleading customers by marketing a feature called Autopilot that is only a marginal improvement on what other cars already offer. What if Volvo started calling their (clearly not independent) feature Autopilot and saying it was the future of hands-free driving? Seems inexcusable.


Which is also exactly what GM is doing with Super Cruise. Here is just one of their commercials.

https://www.youtube.com/watch?v=u__51kTl4j8

Despite this warning in the manual[1]:

>Super Cruise is not a crash avoidance system and will not steer or brake to avoid a crash. Super Cruise does not steer to prevent a crash with stopped or slow-moving vehicles. You must supervise the driving task and may need to steer and brake to prevent a crash, especially in stop-and-go traffic or when a vehicle suddenly enters your lane. Always pay attention when using Super Cruise. Failure to do so could result in a crash involving serious injury or death.

[1] - https://www.cadillac.com/content/dam/cadillac/na/us/english/...


Riffing off the parallel thread about Google AI and how "corporations are controlled by humans" and can have moral values - no, corporations are controlled primarily by market forces. When Tesla started branding lane assist as Autopilot, it put market pressure on others to follow suit. Hence, I'm absolutely not surprised about this ad and the associated warning in the manual.


TBF, corporations are controlled by humans and overwhelmingly influenced by market forces, which are also controlled by (other) humans.

That's a nitpick. Your broader point about Tesla pressuring the market down an unfortunate path is spot on.


That branding started a long time before Tesla was ever around.

http://www.oldcarbrochures.com/static/NA/Chrysler_and_Imperi...

Ideally, yeah, every manufacturer would have to take all the puffery out of their marketing, or better yet, talk about all the negatives of their product/service first, but I doubt I'll ever see that.


https://arstechnica.com/cars/2018/06/why-emergency-braking-s...

This article portrayed Super Cruise as something qualitatively different, based on the maps of existing roadways. I'm not sure if they've also considered integrating the multiple systems involved in driver assistance. I'm curious if Tesla has either for that matter.


I’m opposed to over-regulation of any sort, however it seems obvious that vehicle manufacturers need to do a better job informing consumers of the driver assistance capabilities of modern vehicles. Something similar to the health warnings on cigarette packs.


> The TACC offered by most (if not all) manufacturers can't differentiate between the surroundings and stopped vehicles.

Software should not be driving a car into any of them. I think that LIDAR would see the obstacle, but as I understand, the crashed Tesla car didn't have it.


LIDAR probably would have seen the obstacle and avoided it, but so would a human driver who was operating the vehicle responsibly and correctly. It sucks that people treat level 2 systems as level 3 or 4, but the same thing applies to many convenience features in a car (cruise control, power brakes, etc...). There's always going to be some bozo doing what they shouldn't be doing with something.

I'd love to see LIDAR on consumer vehicles, but AFAIK it's prohibitively expensive. And to be fair, even Level 4 autonomous vehicles still crash into things and kill people.

https://www.bloomberg.com/news/articles/2018-05-24/uber-self...

Last but not least, every semi-autonomous system all the way back to Chrysler's "AUTO-PILOT" has had similar criticisms. People in the past even said similar things about high speed highways compared to other roads WRT attention.

http://www.curbsideclassic.com/blog/history/automotive-histo...

https://books.google.com/books?id=YpGkDAAAQBAJ&pg=PA100&lpg=...


> The TACC offered by most (if not all) manufacturers can't differentiate between the surroundings and stopped vehicles.

Literally every car I have driven equipped with Cruise Control and Collision Avoidance (TACC) hits the brakes and slows down to 20-ish km/h if it senses ANYTHING moving slower (including stationary objects) in front of the car on a possible collision path.


All cars from all manufacturers are running 10s or 100s of times more code than they did 10 or 20 years ago.

https://skeptics.stackexchange.com/questions/39559/does-the-...

This really affects the nature of the situation. 20 years ago, cars contained microcontrollers with a tiny bit of code which was thoroughly reviewed and tested by skilled professionals. Today, all cars run so much code, even outside of the entertainment system, that the review and testing just can't be the same. (And there's way more programmers, so the range of skill and care is also much wider.)

We're in a new mountain-of-flaky-software world.


When the Toyota electronic throttle "unintended acceleration" accidents were in the news, the software was described as a "big bowl of spaghetti", but NHTSA ultimately determined that it was not the cause of the problems. It was drivers using the wrong pedal.



There are news stories like this all over because it was an ongoing saga. But the DOT's investigation showed that folks were hitting the wrong pedal:

https://www.caranddriver.com/features/its-all-your-fault-the...


Malcolm Gladwell's podcast had a good summary of the issue, including the pedal confusion.

http://revisionisthistory.com/episodes/08-blame-game


I've long been curious about the "big bowl of spaghetti" comment (and all the other criticisms made by the experts who inspected Toyota's code). There were some extremely serious accusations which don't seem consistent with the fact that the vast majority of Toyotas on the road aren't showing problems caused by their MCU's spaghetti code.


AI takes it to a whole new level. Neural networks are black boxes; they can't be reviewed. You feed in your training data, you test it against your test data, and just have faith that it will respond appropriately to a pattern that's slightly different from anything it's seen before. Sometimes the results are surprising.


That's my biggest problem with AI and neural networks. You can't really measure progress here. If you wanted the same safety standards as for every other automotive software you'd have to test drive for hundreds of thousands of kilometres after every change of parameters, because there's no way to know what has changed about the AI's behavior except for testing it thoroughly.

Compare this to classic engineering where you know the changes you've made, so you can rerun your unit tests, rerun your integration tests, check your change in the vehicle and be reasonably sure that what you changed is actually what you wanted.

The other approach to autonomous driving is to slowly and progressively engineer more and more autonomous systems where you can be reasonably sure to not have regressions. Or at least to contain your neural networks to very very specific tasks (object recognition, which they're good at), where you can always add more to your test data to be reasonably sure you don't have a regression.
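
A sketch of what that narrow-task regression gate could look like (hypothetical old_model, new_model, and a frozen labeled test_set; not any real tooling):

    def accuracy(model, test_set):
        """Fraction of labeled examples the model classifies correctly."""
        return sum(1 for x, label in test_set if model(x) == label) / len(test_set)

    def accept_update(old_model, new_model, test_set, margin=0.0):
        """Accept the candidate only if it does not regress on the frozen test set."""
        old_acc, new_acc = accuracy(old_model, test_set), accuracy(new_model, test_set)
        return new_acc >= old_acc - margin, old_acc, new_acc

    # Toy usage with stand-in "models" mapping an input to a label.
    test_set = [(1, "car"), (2, "sign"), (3, "car")]
    old_model = lambda x: "car"
    new_model = lambda x: {1: "car", 2: "sign", 3: "car"}[x]
    print(accept_update(old_model, new_model, test_set))    # -> (True, 0.66..., 1.0)

Growing the test set over time is the only real lever here; it still tells you nothing about inputs outside it.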

I don't think we'll see too many cars being controlled by neural networks entirely, unless there's some huge advancement here. Most of the reason we see more neural networks now is that our computing power has reached the point where we can train sufficiently complex NNs for useful tasks, not because the math behind them has advanced that much since the '60s.


> There is no evidence that said OTA update made Tesla cars any safer.

That particular OTA update significantly shortened braking distances. [The update] cut the vehicle's 60 mph stopping distance a whole 19 feet, to 133, about average for a luxury compact sedan.[0] That's a safer condition, IMO, and I'm not sure how one could argue that it doesn't make the car safer.

[0] - https://www.wired.com/story/tesla-model3-braking-software-up...
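
For a rough sense of what those 19 feet buy at 60 mph, a back-of-the-envelope check (my own arithmetic, assuming constant deceleration and the ~152 ft pre-update distance implied by the quote):

    from math import sqrt

    v0_mph = 60.0
    d_old_ft, d_new_ft = 152.0, 133.0      # reported stopping distances, before/after

    # With constant deceleration, v^2 falls linearly with distance, so at the spot
    # where the updated car has stopped, the pre-update car would still be doing:
    v_left = v0_mph * sqrt((d_old_ft - d_new_ft) / d_old_ft)
    print(round(v_left, 1))                # ~21.2 mph of residual speed at 133 ft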


I agree with everything you said except this:

> being able to know exactly how your car will react in all situations

If one depends on intimate knowledge of his own car for safety then he’s likely already driving outside the safety envelope of the code, which was written to provide enough safety margin for people driving bad cars from 40yr ago.


I didn't say no car has ever relied on software. I said cars are becoming more reliant on software. I don't think that is a controversial statement. I also don't think it is controversial to say that other automakers also occasionally ship buggy code. The Toyota brake issue I mentioned in the previous post is one example.

Additionally, the argument that we should continue to handle updates this way simply because we have done it this way for decades is the laziest possible reasoning. It is frankly surprising to see that argument on HN of all places.

As for the evidence that OTA updates can make things safer, this is from Consumer Reports:

>Consumer Reports now recommends the Tesla Model 3, after our testers found that a recent over-the-air (OTA) update improved the car’s braking distance by almost 20 feet. [1]

That update going out immediately OTA is going to save lives compared to if Tesla waited for the cars to be serviced like other manufacturers. I don't think you can legitimately argue against that fact.

[1] - https://www.consumerreports.org/car-safety/tesla-model-3-get...


> That update going out immediately OTA is going to save lives compared to if Tesla waited for the cars to be serviced like other manufacturers. I don't think you can legitimately argue against that fact.

There is again no evidence to support this fact. There is evidence that Tesla's OTA software updates have introduced safety issues with Tesla cars. That's a fact.

Better braking distance is of course a good thing but if anything, the fact that Teslas were on the road for so long with a sub-par braking distance is more evidence of a problem with Tesla than it is evidence of a benefit of OTA updates.

The other factor in that brake story is that it took mere days for Tesla to release an update to "fix" the brakes. This isn't a good thing. The fact that it was accomplished so quickly means that the OTA update was likely not tested very well. It also means that the issue was easy to fix, which calls into question why it wasn't fixed before. It also highlights the fact that Tesla, for some reason, failed to do the most basic testing on their own cars for braking distance. Comparing the braking distance of their cars should have been one of the very first things they did before even selling the cars, but apparently it took a third party to do that before Tesla was even aware of the issue. This doesn't inspire confidence in Tesla cars at all.


I simply don't know what to say to you if you are going to legitimately argue that shaving 20 feet off of braking distance will not make a car any safer.

EDIT: The comment I was replying to was heavily edited after I responded. It originally said something along the lines of improving braking distance is good but there is no evidence that it would improve safety.


I think you're misunderstanding the question.

> if you are going to legitimately argue that shaving 20 feet off of braking distance will not make a car any safer.

Nobody is arguing that. We're arguing that there is no evidence the Tesla OTA update made the cars safer on net.

You're trying to set up some sort of "OTA updates are dangerous in general, but this one is clearly good, how do we balance it" conversation, but the problem is, this OTA update is not clearly good. OTA updates are dangerous in general, and also in this case in specific. You need to find a better example where there's actual difficult tradeoffs being made, and not just a manufacturer mishandling things.


> I simply don't know what to say to you if you are going to legitimately argue that shaving 20 feet off of braking distance will not make a car any safer.

If the car can’t see the obstacle, the braking distance simply does not matter.


And yet again, the same OTA update changed other parameters about the way the car drives that do make it less safe. I don't know why you're trying to ignore that fact. If I drastically improve the braking distance of a car, but in the same update I also make it so that the car crashes itself into a wall and kills you, is the car safer? Hint: no

As for your edit, you clearly misread the original comment, which is why I edited it for you. I said that there was no evidence that the OTA made the car safer. Please try to read with better comprehension instead of trying to misrepresent my comments.


If I drastically improve the braking distance of a car, but in the same update I also make it so that the car crashes itself into a wall and kills you, is the car safer? Hint: no

You don't have enough information to come to that conclusion.

It's quite common to have to brake hard to avoid a collision. It's pretty uncommon to see the specific scenario triggering this crash behavior.


I never denied that. Your comment pointed out a problem with OTA updates and I agreed, calling it "a clear flaw". I pointed out a benefit of OTA updates, then asked an open-ended question about how they should be handled. You responded by attacking the example I provided. I was looking to debate this serious issue, not get into a pissing match about it.


I never said you denied it, I said you ignored it. If you wanted to debate this serious issue, then maybe you shouldn't keep ignoring one of the crucial cornerstones of the discussion. If you're unwilling to discuss points that challenge your own opinion, then it's clear that you're just trying to push an agenda rather than have an actual discussion.


> So what is the right way to handle these updates?

Avoid doing them in the first place? It's not like bit rot is - or should be - a problem for cars. It's a problem specific to the Internet-connected software ecosystem, which a car shouldn't be a part of.

So basically: develop software, test the shit out of it, then release. If you happen to find some critical problem later on that is fixable with software, by all means fix it, again test the shit out of it, and only then update.

If OTA updates on cars are frequent, it means someone preferred to get to market quickly instead of building the product right. Which, again, is fine for bullshit social apps, but not fine for life-critical systems.


Tesla does test the shit out of it before they release a patch. The problem is that users' expectations of the system's performance suddenly get out of sync with what the car is going to do.

Part of me wonders if there should be a very quick, unskippable, animated, easy-to-understand explanation of the patch notes before you can drive when they make material changes to core driving functionality.


While using Autopilot (Big A), there should be a loud klaxon every 30 seconds followed by a notification "CHECK ROAD CONDITIONS" and "REMAIN ENGAGED WITH DRIVING" in the same urgent tone of an aircraft autopilot (small a) warning system.

Tesla did make a mistake calling it Autopilot, but only because regular folk don't understand that aircraft autopilot is literally a heading, altitude, and speed, and will not make any correction for fault. Aircraft autopilot will fly you straight into a mountain if one happens to be in the way.


I don't know why Tesla defenders keep repeating this FUD:

> Tesla did make a mistake calling it Autopilot, but only because regular folk don't understand that aircraft autopilot is literally a heading, altitude, and speed, and will not make any correction for fault. Aircraft autopilot will fly you straight into a mountain if one happens to be in the way.

Auto-TCAS and Auto-GCAS exist, and the public is aware of them: E.g. http://www.airbus.com/newsroom/press-releases/en/2009/08/eas.... http://aviationweek.com/air-combat-safety/auto-gcas-saves-un....


Or make certain critical updates only part of a physical recall, which provides notice to users that behavior will change.


Well, then what are they testing exactly?

This is beyond broken; it's a fundamental misunderstanding of how physical products are supposed to work. Software people have gotten used to dismissing the principle of least astonishment because they know better (and no user got killed because of a Gmail redesign), but this is a car: it's hardware with its user on board, a lot of kinetic energy, and all of it relies on muscle memory.


Obviously they don't, or the braking update would never have needed to happen in the first place.

Highways are not, nor should they ever be if at all possible, proving grounds.


I'd vote in favor of such explanation, though this alone may not be enough to cancel out possibly thousands of hours of experience with the previous system behavior.


The first thing about doing it right is to make sure it has been developed in an appropriate manner for safety-critical systems, which includes, but is by no means limited to, adequate testing.

The second thing is to require the owners to take some action as part of the installation procedure, so that it is hard for them to overlook the fact that it has happened.

The third thing is that changes with safety implications should not be bundled with 'convenience/usability' upgrades (including those that are more of a convenience for the manufacturer than for the user.) To be fair, I am not aware of Tesla doing that, but it is a common enough practice in the software business to justify being mentioned.

And it has to be done securely. Again, I am not aware of Tesla getting this wrong.


Great that they fixed the brakes OTA. But how exactly did the inferior braking algorithm get on the Model 3 in the first place? And what are the chances of a regression?


While I like Tesla, I find the praise for Tesla's fast OTA update for its braking problem to be freaking terrifying.

A problem with variable stopping distances is the sort of thing that should be blindingly obvious in the telemetry data from your testing procedures. Brake systems, and ABS controls in particular, are normally rigorously tested over the course of 12-18 months in different environments and conditions.[0] That Tesla completely missed something like that suggests that either their testing procedures are drastically flawed (missing something that CR was able to easily and quickly verify in different cars), their software development process isn't meshed with their hardware testing and validation, or a combination of the two. None of those options is a good one.

The fact that Tesla was able to shave 19 feet off their braking distances is horrifying. After months of testing different variations and changes to refine your braking systems, shaving off an extra 19 feet should be impossible. There shouldn't be any room to gain extra inches without making tradeoffs in performance in other conditions that you've already ruled out making. If there's an extra 19 feet to be found for free after a few days of dev time, you did something drastically wrong. And that's completely ignoring physical testing before pushing your new update. Code tests aren't sufficient; you're changing physical real-world behavior, and there's always a tradeoff when you're dealing with braking and traction.

Tesla is being praised by consumers and the media because, hey, who doesn't like the idea that problems can be fixed a couple days after being identified? That's great. In this case, Tesla literally made people's cars better than they were just a few days before. But it trivializes a problem with very real consequences, and I hope that trivialization doesn't extend to Tesla's engineers. Instead of talking about a brake problem, people are talking about how great the fast OTA update for the problem is. Consumers find that comforting, as OTA updates can make what's otherwise a pain in the ass (recalls and dealer visits for software updates) effortless.

Hell, I'm a believer in release early, release often for software. Users benefit, as do developers. At the same time, the knowledge that you can quickly fix a bug and push out an update can be a bit insidious. It's a bit of a double-edged sword in that it gives you a sense of comfort that can bite you in the ass as it trivializes the consequences of a bug. And when bug reports for your product can literally come in the form of coroner's reports, that comfort isn't a good thing for developers.

0. https://www.forbes.com/sites/samabuelsamid/2018/05/30/rapid-...


Give me a dumb car. The software is not ready. Get them off the road.


Keep them on the road. They're still safer than a huge percentage of the awful drivers that I see every day.


They really aren't. At least you can rely on 99% of humans to try to act according to self-preservation instinct MOST of the time.

I see a Tesla, and I try to get away from them as soon as possible.


At least you can rely on 99% of humans to try to act according to self-preservation instinct MOST of the time.

Nope. I see tremendous numbers of distracted drivers who don't even realize there's a threat. I also see many utterly incompetent drivers who will not take any evasive action, including braking, because they simply don't understand basic vehicle dynamics or that one needs to react to unexpected circumstances.


Updates should fix problems, not create new ones. The tried and true method for Silicon Valley bug fixing is to ship it to the users and let them report any issues. This is wholly insufficient for car software. Car software should seldom have bugs in the first place, but OTA updates should never, bar none, introduce new bugs to replace the old.


> So what is the right way to handle these updates?

Require updates to be sent to a government entity, which will test the code for X miles of real traffic, and then releases the updates to the cars. Of course, costs of this are to be paid by the company.


Current development of cars is done with safety as a paramount concern. There is no need to filter everything through a government entity. However the automobile companies are responsible for their design decisions. This should absolutely apply to software updates. That does mean complete transparency during investigations, a complete audit trail of every software function invoked prior to a crash.

So, no filter, but government penalties and legal remedies should be available.


"Current development of cars is done with safety as a paramount concern."

That's exactly the impression I don't get from Tesla. Instead, I see the following:

Get that thing to market as quickly as possible. If the software for safety-critical systems is sub-par, well, it can be fixed with OTA updates. That's fine for your dry cleaning app; for safety-critical software, it's borderline criminal.

Hype features far beyond their ability (Autopilot). Combine this with OTAs, which potentially change the handling of something that is not at all an autopilot, but actually some glorified adaptive cruise control. For good measure: throw your customers under the bus when inevitable and potentially deadly problems do pop up.

Treating safety issues merely as a PR problem and acting accordingly. Getting all huffy and insulted and accusing the press of fake news when such shit is pointed out.

I could go on. But such behavior to me is not a company signaling that safety is of paramount concern.

"That does mean complete transparency during investigations, a complete audit trail of every software function invoked prior to a crash."

Let's just say that Tesla's very selective handling and publication of crash data does not signal any inclination for transparency.


I agree. I think companies should be losing serious money and individuals should be losing jobs over crashes like these, much like in the aircraft sector.


Seriously? So instead of taking x amount of time, it will take x amount of time plus a few years.


Testing is absolutely necessary. We're talking about millions of cars here, which are potentially millions of deadly weapons. You don't want companies pushing quick fixes, which turn out to contain fatal bugs.


That sounds like a great way to stall all further progress, which has a horrific human cost of its own.

Government has a valid role to play, though, by requiring full disclosure of the contents of updates and "improvements," by setting and enforcing minimum requirements for various levels of vehicle autonomy, and by mandating and enforcing uniform highway marking standards. Local DOTs are a big part of the problem.


Yeah, because we know governments are really good at giving certifications and doing tests that mean something. Let's put every design decision in the hands of governments then! Or better, nationalize car companies! Problem solved?


Flying in an airplane is safe because of direct intervention by the government.

Cars have been made safe for us also by direct intervention by the government. From important things like mandating seat belts and crash safety to smaller things like forcing the recall of tens of millions of faulty air bag inflators.

These are just a few of the many things Uncle Sam has done to make things safer for us.


Isn’t flying mostly safe because of post hoc safety analysis followed by operating requirements? I don’t think the FAA tests every change made to aircraft before they can fly?


De facto the FAA indeed does that very thing.

First, any change in design (or in configuration, in the case of repairs) is backed by PEs or A&P mechanics who sign off on the changes. Their career rides on the validity of their analysis so that's a better guarantee than some commit message by a systems programmer.

Second, the FAA basically says "show us what you are changing" after which they will absolutely require physical tests (static or dynamic tests, test flights, etc., as appropriate to the scope of change).

And I'd say flying is so safe mainly from the blameless post-mortem policy that the American industry instantiated decades ago and which is constantly reinforced by the pros in the NTSB. It's a wonderful model for improvement.


I think that the FAA's role is theoretically as you express, but in practice, there is significantly less oversight (especially direct oversight) than implied.

As an example, the crash of N121JM on a rejected takeoff was due (only in part) to a defective throttle quadrant/gust lock design that went undetected during design and certification, in part because it was argued to be a continuation of a conformant and previously certificated design. (Which is relevant to the current discussion in that if you decide to make certification costly and time-consuming, there will be business and engineering pressure to continue using previously certificated parts, with only "insignificant changes".)

https://aviation-safety.net/database/record.php?id=20140531-...

PS: I 100% agree on the NTSB process' contribution to safety.


That's fascinating, it's kind of a collectivizing of the responsibility for safety.


If I, as an engineer, sign off on changing the screws on the flaps for cheaper ones and the plane crashes because the flaps come loose due to the screws being unable to handle the stress, my career can be assumed over if I have no good explanation.

If an engineer signs off a change they sign that they have validated all the constraints and that for all they know the machine will work within the specs with no faults.

If a software engineer commits code, we may run some tests over it and look it over a bit. That's fine. But if the software ends up killing anyone, the software engineer is not responsible.

And yes, to my knowledge, every change to an aircraft is tested before flight, or at least validated by an engineer who understands what was just changed.


In any case, let a third party control the actual updating, so that we know when and how often cars are updated. Require at least X months of time between code submission and deployment to cars. We don't want a culture of "quick fixes".


This is a popular idea: Just put someone in charge! It ignores the incentives for those gatekeepers, who are now part of the system. In practice I don't think you're going to get better updates, you're going to get "almost no updates".

See also: The FDA and drug approvals.


So you want to do away with the FDA? Put BigPharma in charge and let them regulate themselves? That sounds a lot less appealing to me.


It took years for the FDA to investigate Theranos, in case you are not aware. And they only did so when the press started digging. Poor, poor track record.


fdareview.org - the FDA kills more people through late approvals and non-approvals than it saves through denials. This is why compassionate use has been on the rise.


It does not, in any way, imply that pharmaceutical companies would do a better job if left to their own devices and market pressure.


There's a lot of daylight between letting pharma companies run rampant and having the FDA. One could imagine private non-profit testing and qualification standards organizations along the lines of Underwriters Laboratories.


And who will manage all that testing? A government entity? Because that would bring us back to the original point.


UL is not "managed" by a government.


It is not completely out of this world to imagine multiple private entities involved in pharma dossier reviews instead of having the FDA. The FDA employs tons of private consultants anyway so they bring virtually no value.


Certainly communication of any changes to all drivers inexperienced with the latest version; ideally user interaction required for the update to be applied, and potentially even the ability to reverse them if they are unhappy with the changes.


At the very _least_, when you introduce a change in behavior, have it be enabled by the user through the dashboard. This creates at least one touch point for user education.


This seems testable. IANAAE (I Am Not An Automotive Engineer), but why can't you run both the new and old code side by side and, if the actions they take are materially different, investigate further? Like, if in one case the new code would want you to move left, and the old code goes straight, one of those behaviors is probably wrong. If the driver corrects, then the new code is probably correct, but if the driver does not, then the new code is doing something probably incorrect.

At the very least, you should be able to get some sort of magnitude/fuzzy understanding of how frequently the new code is disagreeing, and you can figure out where and go check out those conditions.
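
Something like this sketch, maybe (old_policy, new_policy, and the logged frames with the recorded human steering are all hypothetical):

    DISAGREEMENT_DEG = 5.0   # what counts as "materially different" steering (arbitrary)

    def compare_policies(frames, old_policy, new_policy):
        """Flag frames where the two versions disagree, noting which one the
        human driver's recorded steering was closer to."""
        flagged = []
        for f in frames:
            old_cmd = old_policy(f["sensors"])
            new_cmd = new_policy(f["sensors"])
            if abs(old_cmd - new_cmd) > DISAGREEMENT_DEG:
                closer = "new" if abs(new_cmd - f["human_steer"]) < abs(old_cmd - f["human_steer"]) else "old"
                flagged.append({"frame": f["id"], "old": old_cmd, "new": new_cmd,
                                "human_agrees_with": closer})
        return flagged

You'd still have to go look at the flagged locations by hand, but it would give you the disagreement-frequency picture described above.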


It has already been touched on by another commenter, but testability of Machine Learned systems outside of a training dataset is pretty much a crapshoot.

An ML solution stops "learning" after training and only reacts.

To illustrate the difference, have you driven on roads under construction lately? As humans, when you've driven the same road hundreds of times, you start to do the same thing as a machine learned implementation. You drive by rote.

When you get to that construction zone though, or the lines get messed up, your brain will generally realize something has changed, and you'll end up doing something "unpredictable", i.e. learning a new behavior. The machine learning algorithm's output (a neural net) can't do that. It can generalize a pattern to a point, but its behavior in truly novel circumstances cannot be assured.

Besides which, the problem still stands that the system is coded to ignore straight-ahead stationary objects. Terrible implementation. It should look for an overly fast and uniform increase in angular coverage, combined with the object being near-stationary in terms of relative motion, as a trigger to brake: i.e., if a recognized shape gets bigger at the same rate on all "sides" while maintaining a weighted center at the same coordinate. It's one of the visual tricks pilots are taught to avoid mid-air collisions.
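
Roughly, in code (my own formulation of that cue, not anything from a production system; the thresholds are arbitrary):

    def looming_collision(prev_box, curr_box, dt_s,
                          growth_per_s=0.5, max_center_drift_px=10.0):
        """Boxes are (cx, cy, w, h) in pixels for the same tracked object. Fires
        when the object expands quickly on all sides while its center stays nearly
        fixed in the image -- the constant-bearing cue pilots are taught."""
        growth_rate = (curr_box[2] - prev_box[2]) / max(prev_box[2], 1e-6) / dt_s
        drift = ((curr_box[0] - prev_box[0]) ** 2 + (curr_box[1] - prev_box[1]) ** 2) ** 0.5
        return growth_rate > growth_per_s and drift < max_center_drift_px

    # A barrier-sized box growing ~60% in one second with an almost fixed center:
    print(looming_collision((640, 360, 80, 60), (642, 361, 128, 96), dt_s=1.0))   # True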

Admittedly though, the human brain will likely remain WAY better at those types of tricks than a computer will be for a good long time.


I think the point is to run the two models on the same data, either simultaneously in everyone's cars, or using recorded sensor data from the past (maybe in the car while parked after a drive for privacy reasons). Initially, only the old version gets to take any action in the real world. Any difference in lane choice would then have to be justified before making the new version "live".

You can do this sort of stuff when replacing a web service too, by the way. For example, running two versions of a Django service and checking whether the new version produces any differences for a week before making it the version the client actually sees.


The problem, however, is that your test can't be assumed to generalize the way an explicitly coded web service would.

You can look at your code and say "for all invalid XML, this; for all input spaces, that." In other words, you can formally prove your code.

You CANNOT do that with Neural Nets. Any formal proof would simply prove that your neural network simulation is still running okay. Not that it is generating the correct results.

You can supervise the learning process, and you can practically guarantee all the cases within your training data set, and everyone in the research space is comfy enough to say "yeah, for the most part this will probably generalize", but the spectre of overfitting never goes away.

With machine learning, I developed a rule of thumb for applicability: "Can a human being who devotes their life to the task learn to do it perfectly?"

If the answer is yes, it MAY be possible to create an expert system capable of performing the task reliably.

So lets apply the rule of thumb:

"Can a human being, devoting their life to the task of driving in arbitrary environmental conditions, perfectly safely drive? Can he safely coexist with other non-dedicated motorists?"

The answer to the first I think we could MAYBE pull off by constraining the scope of arbitrary conditions (I.e. specifically build dedicated self-driving only infrastructure).

The second is a big fat NOPE. In fact, studies have found that too many perfectly obedient drivers typically WORSEN traffic in terms of the probability of creating traffic jams. Start thinking about how people drive outside the United States, and the first world in general, and the task becomes exponentially more difficult.

The only things smarter than the engineers trying to get your car to drive itself are all the idiots who will invent hazard conditions that your car isn't trained to handle. Your brain is your number one safety device. Technology won't change that. You cannot, and should not outsource your own safety.


Edge cases in this scare the hell out of me. I'm envisioning watching CCTV of every Tesla that follows a specific route on a specific day merrily driving off the same cliff until it's noticed.

I mean what would have happened here if another Tesla or two were directly behind Huang, following his car's lead?!

Possibly nothing, I'd assume the stopping distance would be observed and the following cars would be able to stop/avoid, but I wouldn't like to bet either way. Perhaps, in some conditions, the sudden impact on the lead car would cause the second car to lose track of the rear end of the first? Would it then accelerate into it?


The report indicates the system ignores stationary objects. I would not be surprised if a suddenly decelerated car in front of the system effectively vanished from the car's situational awareness. Your scenario does not seem that far-fetched.

An attentive human would realize something horrible had happened and perhaps react accordingly. A disengaged or otherwise distracted one may not have the reaction time necessary to stop the system from plowing right into the situation and making it worse.


Basically, AI is a sophisticated "hotdog / not hotdog".


> but why can't you run both the new and old code side by side and if the actions they take are materially different investigate further?

Because there are no closed facilities that you can use to actually perform any meaningful test. You could test "in-situ", but you would need an absolutely _huge_ testing area in order to accurately test and check all the different roadway configurations the vehicle is likely to encounter. You'll probably want more than one pass, some with pedestrians, some without, some in high light and some in low, etc..

It's worth noting that Americans drive more than 260 billion miles each _month_. It's just an enormous problem.


This particular case might have been testable "in-situ":

[ABC News reporter] Dan Noyes also spoke and texted with Walter Huang's brother, Will, today. He confirmed Walter was on the way to work at Apple when he died. He also makes a startling claim — that before the crash, Walter complained "seven-to-10 times the car would swivel toward that same exact barrier during auto-pilot. Walter took it into dealership addressing the issue, but they couldn't duplicate it there."

It is very believable that the car would swivel toward the same exact barrier on auto-pilot.

BTW - I'm running a nonprofit/public dataset project aimed at increasing the safety of autonomous vehicles. If anyone here wants to contribute (with suggestions / pull requests / following it on Twitter / etc.) you'd be most welcome. It's: https://www.safe-av.org/


This is where simulators play an important role. Many AD (automated driving) solution suppliers are investing in simulators to create different scenarios and test the performance of their sensors and software. Otherwise, as you said, it's impossible to drive billions of miles to cover all use cases.


A/B testing self-driving car software to find bugs? Would probably work great, but that is also terrifying!

But you're right, if you're really going to do a full roll out, may as well test it on a subsegment first - I'd hate for it to be used as a debugging tool though.


> why can't you run both the new and old code side by side and if the actions they take are materially different investigate further?

As I understand it, this is essentially what they were doing with the autopilot 'shadow mode' stuff. Running the system in the background and comparing its outputs with the human driver's responses, and (presumably) logging an incident when the two diverged by any significant margin?


> And this is exactly why all of these recent articles about how "great" it is that Tesla sends out frequent OTA updates are ridiculous. Frequent, unpredictable updates with changelogs that just read "Improvements and bug fixes" are fine when we're talking about a social media app, but entirely unacceptable when we're talking about the software that controls a 2-ton hunk of metal

I recently got downvoted for that exact line of reasoning.

Looks like some people don't like to hear that :-)


I think you got down-voted because you misrepresented what Tesla is actually doing, which is a difficult trade-off between:

- known preventable deaths from, say, not staying in the lane aggressively enough;

- possible surprises and subsequent deaths.

There is an ethical conundrum (quite different from the trolley problem) between a known fix and an unknown possibility. If both are clearly established and you dismiss the former to have a simplistic take, then yes, you will be down-voted, because you are making the debate less informed.

Without falling into solutionism: in this case, the remaining issue seems rather isolated to a handful of areas that look like lanes, which could either be painted properly or which Tesla's cars could be trained to avoid. The latter fix would have to be sent out rapidly and could have surprising consequences -- although that seems decreasingly likely.

That learning pattern (resolving unintended surprises as they happen, decreasingly often) is common in software. This explains why this community prefers (to an extent) Tesla to other manufacturers. Others have preferred the surprise-free and PR-friendly option of not saving the tens of thousands of lives being lost on the road at the moment. There are ethical backgrounds to non-interventionism.

As the victim of car violence, I happen to think that their position is unethical. I'm actually at what is considered an extreme position: being in favour of Tesla (and Waymo) taking more risk than necessary and temporarily increasing the number of accidents on the road, because they have a far better track record of learning from those accidents (and the subsequent press coverage), and that would lower the overall number of deaths faster.

As it happens, they don't need to: even with less than half the accidents of their counterparts, they still get a spectacular learning rate.


Thanks for your comment. I think you may be right.

> I think you got down-voted because you misrepresented what Tesla is actually doing, which is a difficult trade-off between:

> - known preventable deaths from, say, not staying in the lane aggressively enough;

> - possible surprises and subsequent deaths.

I don't think I was misrepresenting anything (at least, I was trying not to). I just pointed out that behaviour-changing updates that may be harmless in, say, smartphone apps, are much more problematic in environments such as driving-assisted cars. I think this is objectively true. And I think we need to come up with mechanisms to solve these problems.

> That learning pattern (resolving unintended surprises as they happen decreasingly often) is common in software.

My argument is that changes in behaviour are (almost automatically) surprising, and thus inherently dangerous. Unless my car is truly autonomous and doesn't need my intervention, it must be predictable. Updates run the risk of breaking that predictability.

> Others have preferred the surprise-free and PR-friendly option of not saving the dozens of thousand of lives dying on the road at the moment.

My worry is that (potentially) people will still die, just different ones.

> being in favour of Telsa (and Waymo) taking more risk than necessary

If I'm taking you literally, that's an obviously unwise position to take ("more than necessary"). But I think I know what you meant to say: err on the side of faster learning, accepting the potential consequences. Perhaps like NASA in the 1960s.

But my argument was simply that there is a problem with frequent, gradual updates. Not that we shouldn't update (even though that's actually one option). We ought to search for solutions to this problem. I can think of several that aren't "don't update".

But claiming that the problem doesn't exist, or that those that worry about it are unreasonable, is unhelpful.


More accurately: move fast and save some lives and kill people.


I tried to convince management that the “reveries” were leading to unpredictable behavior.


Two years ago there was an Autopark accident that made the news. Tesla blamed the driver -- very effectively.[1] But if you look closely, it likely was due to an unexpected OTA change in behavior combined with poor UI design.

In the accident, the driver double-tapped the Park button which activated the Autopark feature, exited the car and walked away. The car proceeded to move forward into an object. The driver claimed he merely put it in park and never activated the auto-park feature. Tesla responded with logs proving he double-tapped and there were audible and visual warnings.

Well, I looked more closely. Turns out Tesla pushed an OTA update that added this "double-tap Park to Autopark" shortcut. And it's one bad design decision after another.

First, let's note the most obvious design flaw here: The difference between instructing your car to immobilize itself (park) and instructing it to move forward even if the driver was absent (Autopark) was the difference between a single and double tap on the same button. A button that you might be in the habit of tapping a couple times to make sure you've parked. So it's terrible accident-prone design from the start.

Second issue is user awareness. Normally Tesla makes you confirm terms before activating new behavior, and technically they did here, but they did it in a horribly confusing way. They buried the mere mention of the Autopark shortcut under the dialog for "Require Continuous Press".[2] So if you went to turn that off -- that's the setting for requiring you to hold down a FOB button during Summon -- and you didn't read that dialog closely, you would not know that you'd also get this handy-dandy double-tap Autopark shortcut.

Third is those warnings. Useless. They did not require any kind of confirmation. So if you "hit park" and then quickly exited your vehicle, you might never hear the warning or see it on the screen that, by the way, that shortcut that you didn't know existed just got triggered and is about to make the car start moving forward while you walk away.

So I think it's quite plausible that the driver was not at fault here -- or at least that it was a reasonable mistake facilitated by poor design. It's unfortunate that Tesla was able to convince so many with a "data dump" that the driver was clearly at fault.

I still recall that poor NYT journalist that Tesla "caught driving in circles"[3] -- while looking for a hard-to-find charger at night. Now I hope we are developing a healthier skepticism to this (now standard) response from Tesla and look more deeply at potential root causes.

[1] https://www.theverge.com/2016/5/11/11658226/tesla-model-s-su...

[2] https://youtu.be/Cg7V0gnW1Us?t=2m10s

[3] https://www.forbes.com/sites/joannmuller/2013/02/14/teslas-g...


Even if there are patch notes and a clickthrough waiver, there is no possible way that they could express in words what they've changed when pushing an update to a neural net based system. Saying "you accepted the update" is ridiculous when it's not even possible to obtain informed consent.


You know, I must say that as a European driving in America, I've done this on occasion. Combined with drivers not letting you in, it's tempting to follow the gore sections into the (barely visible) barrier. I mean, I slow down when that happens, but otherwise I seem to react pretty much the same way as this Tesla. Why can't the "gore" sections be marked like this?

1) https://goo.gl/maps/iWEcY7hU4DJ2

2) https://www.google.com/maps/@52.3314532,4.799684,3a,60y,221....

Differences:

a) Clearly different road markings between the gore section and normal lanes

b) BIG brightly colored and lit sign above the barrier

c) the barrier itself has a shock absorber/deflection thing like 10 meters in front of the concrete


> Humans are generally quite poor at responding to unexpected behavior changes such as this.

I know what you mean, once you truly start to trust someone else to do a job you simply stop giving it any attention. It's being handled. So you maybe hover a bit at first, keep an eye, generally interfere. Once that stage is over, you just do other things safe in the knowledge that someone else is competently handling the task. Until it blows up in your face.

This level of autopilot legitimately terrifies me. Not because it's bad, but because of the way it will make the humans who are supposed to still be responsible stop paying attention


The behavior should change every day. Never changing is impossible, especially with a new product like this.


«Lethal Regression»


Meanwhile, Tesla is busy putting out press releases saying "We believe the driver was inattentive, and the car was warning him to put his hands on the wheel."

I am utterly, completely lacking in surprise that they didn't provide the relevant context, "... fifteen minutes prior."

This just looks... really bad for Tesla. It's more important to them to protect their poor software than drivers.


If a company is this blatantly evil while still in its growth phase, I can only imagine what they would do once they are an established major player...


It is a car company. Their products directly kill over a million people every year. They injure 10x as many. Their pollution causes a similar number of deaths. They are responsible for so many horrible side effects, from dangerous cities and suburbs to people taking up more and more space.

You think this one crash makes them evil?


Yes, cars kill, injure, and create disease, but they also connect people across the world, to relatives, to work, to buy and deliver goods and services.

Without cars, our level of productivity would be a fraction of what it is today as employment is confined to a tiny geography. Many more people would die from fires, disease, and crime as emergency services arrive on horse drawn carriage. Most people would never venture out of their hometowns.

Car companies, for all their faults, for all their fraud and corruption, create products that immeasurably benefit us every day. Before we call them evil, we must look at the impact they have on each of us. That impact is decidedly positive, as evidenced by the widespread ownership of cars.


Tesla kills a million people a year?


No, it is not the crash that makes them evil. It is how they handle it. And how they put blatantly misleading marketing out there, along with poorly researched features that may or may not have led to overconfidence from the drivers, along with half-baked tech, that resulted in crashes such as these.


There haven't even been five deaths yet in the entire history of Tesla autopilot car accidents, but there are more than three thousand deaths a day due to car accidents. Tesla's safety record actually is such that if scaled up, there would be a massive drop in the number of deaths per day. Their marketing reflects this. So does their communication on the topic. I know it can sound offensive when they say that a driver died without his hands on the wheel in the six seconds leading up to the crash, but their communication is reflective of a desire to preserve life. So I wouldn't call it evil, let alone call it blatantly evil.

Stepping back further, away from this accident, Tesla is also a leading player in moving away from destroying our planet. By that I mean they are pushing renewable energy. Again, not something I would call evil, let alone blatantly evil.


"Tesla's safety record" is misleading:

* every update to the software makes the record irrelevant, because one is no longer driving the same car which set the previous record. Lane centering for instance was introduced in an OTA update and it likely contributed to this accident.

* most of the safe miles driven by Teslas are not with autopilot on. The NTSB explicitly said they did not test autopilot.

* finally, some HN users did the math and it turns out that humans have overall a better safety record than autopilot Teslas.

To me it looks like Tesla's communication is only reflective of covering their asses. From blaming that journalist to the latest accident.

Note: I work for a Tesla competitor.


I agree that there are aspects of the Tesla safety record which have caveats, but even when I take those caveats into account, I still come out thinking of the Tesla as being relatively safe. The same feature which adds danger at every update is one that also has the potential to add safety at every update. Eventually for example, that same update feature is expected to bring the car to the point of being superhuman in its ability to drive safely. The inability to rely on the autopilot means that I'm still responsible for my own safety, so the autopilot safety isn't as important as the car's safety in general.

I disagree that their communication is only reflective of them covering their ass. I feel there is the expectation that self-driving is going to prevent more deaths than it causes. Especially as the technology improves, but even to an extent now, if only through the virtue of it not being a system people should be using without being ready to intervene.

Don't get me wrong here, you raise good points. I just don't think it's a case of blatant evil.

Also, if you have the link to the math, I'd love to read it.


>The same feature which adds danger at every update is one that also has the potential to add safety at every update...

You don't work in software, right?


https://www.greencarreports.com/news/1116969_tesla-model-3-e...

An ad hominem attack does not trump evidence. My claim is true. The above article talks about an example of an update which improved the safety of the vehicle.


Dude, that is not an attack. Only if you have written software will you be aware of how seemingly innocent changes can break it in unexpected ways... And your "proof" article does not change anything...


You question my credentials, while quoting one of my premises. This is, implicitly, an ad hominem argument against the premise you quoted.

In addition to that, you're arguing against a strawman. I never disagreed that there was potential for the safety to be negatively impacted with every update. In fact, I explicitly agreed with this claim.


>I never disagreed that there was potential for the safety to be negatively impacted with every update...

The point you are missing is that while an update might slightly enhance safety, the negative, unpredictable impact might be catastrophic, because one is planned and the other one is not.


No, the point you're pretending I'm missing is that there is this potential. You quoted text in which I explicitly acknowledge the danger of updates. Read it. Also, read further down where I explicitly ask that I not be taken the wrong way, because good points were made.

My acknowledgement only makes sense if I agree that there is some level of danger in each update. That is why you're addressing a straw man.

I feel like there is a language barrier:

- https://en.wikipedia.org/wiki/Straw_man

Or maybe you're being uncharitable with me, because as you put it in our other thread you find the things I've said "stupid". So you are just guessing that I hold the stupidest possible belief you can ascribe me, even when I tell you otherwise.


>No, the point you're pretending I'm missing is that there is this potential

No. You are not missing the potential. But you do not seem to get the difference in magnitude. One is an incremental, reviewed safety enhancement; the other is unpredictably catastrophic.

You only seem to grasp very superficial aspects of my comments, which is why I requested that you give some thought before responding. So I think there is some kind of barrier. But it is not one of language; for lack of a better word, I think it is a lack of enough shared sensibilities.


You're projecting that the projected future upside isn't superhuman. My comment projects that it is. We disagree on projected benefits.

Right now human driving is one of the leading causes of death. I believe that technology can eventually eliminate this as a leading cause of death. So I project a much greater potential upside. I also figure this is a matter of time and effort applied to the problem. Or in other words, there is a finite amount of time before an update brings the car to this point. This puts a ceiling on my mental tabulation of the amount of risk endured prior to achieving an extremely good end. So despite the severe risks, the limited nature of that risk allows me to rule in favor of taking the risk despite its presence.

You're assuming that I haven't pictured a sweeping update which results in the car murdering anyone who was unaware. I have! Your assumption is incorrect.

And if I was being superficial, I would have answered that yes, I'm a software developer. But it's a fallacious appeal to authority.


Your first point is absolutely misleading. You can apply the same logic to the other cars saying that they are all different because of the different tyres, the different conditions of the tyres, of the brakes, the different degree of care and so on. You can’t just say that they are not the same cars because of the different variables.


I think OTA driving software updates are a bigger variable than tires. I think you would too if you think about it more.


>Tesla's safety record actually is such that if scaled up, their would be a massive drop in the number of deaths per day...

There is not enough data to do this "scaling up". So doing so would be incredibly misleading (but that doesn't stop Tesla's PR from doing exactly that).

>Tesla is also a leading player in moving away from destroying our planet.

The actions of this company and the persons behind it somehow do not feel compatible with such a goal. I am sorry. I am just not buying it. It is more probable that this "saving the planet" narrative is something that is meant to differentiate them from the competition and to attract investors. Do you think Elon Musk could have created a company that builds ICE cars and emerged as a major player? It is "save the planet" for Tesla and "save humanity by going to Mars" for SpaceX.

I mean, is this so hard to see?


> There is not enough data to do this "scaling up". So doing so would be incredibly misleading (But doesn't stop tesla's PR from doing the same).

There are tens of thousands of Tesla vehicles on the road, many of which have been driven for years. However, the strength of Tesla vehicles' safety doesn't rest on Tesla vehicles alone. Tesla vehicles are a class of vehicle which implement driver assistance technologies. There are many other cars that do this. Independent analyses of these cars in aggregate have shown them to reduce car accident frequency and severity.

https://www.nhtsa.gov/equipment/driver-assistance-technologi...

> The actions of this company and the persons behind this somehow does not feel compatible with such a goal. I am sorry. I am just not buying it. It is more probable that this "saving the planet" narrative is something that is meant to differentiate from the competition and to attract investors.

Tesla is a leader in the renewable energy sector. There is a need for renewable energy as a consequence of climate change. Being a leading player in renewable energy means being a leading player in combating climate change. So Tesla is a leader in combating climate change. Combating climate change is an effort to save the planet. So Tesla is a leading player in the effort toward saving the planet.

At no point in the chain of logic is it necessary to call upon the motivations of Elon Musk. If someone were to kill another person, the motivation for doing that deed would not change whether or not they did in fact kill someone. In the same manner, the fact that Tesla is helping to solve the problem of climate change is a fact regardless of the motivation of its founder.


To be clear, by misleading marketing, I meant things like the "autopilot" feature. And claiming that they have "full self driving hardware". I am not sure how the safety of vehicles with assistive tech is relevant. I am not at all disagreeing on that aspect. You were saying that the fact that vehicles with assistive tech are safer is reflected in Tesla's marketing and PR. I am still not sure how that could be the case. How does it justify calling half-baked self-driving tech "autopilot" and selling it to unsuspecting people?

>At no point in the chain of logic is it necessary to call upon the motivations of Elon Musk.

We are interested in their motivation because we are thinking long term. When you are in need of a million bucks, and a person shows up with a million bucks that they are willing to give you, without asking for payback, will you accept it right away? Or will you try to infer the true motivation behind the act, which may turn out to be sinister? This is irrespective of the fact that the other person is giving you real money, which can help you right now. Will you think, "we don't need to worry about their motivations as long as we are getting real money"? Will you?

Hope I am clear.


> I am not sure how the safety of vehicles with assistive tech is relevant. I am not at all disagreeing on that aspect. You were saying that fact that vehicles with assistive tech are safer is reflected in tesla's marketing and PR. I am still not sure how that could be the case. How does it justify calling a half baked self driving tech as autopilot and selling them to unsuspecting people?

I brought up driver assistance technology as a way to continue discussing safety statistics. If you recall, I claimed autopilot was safer and you ruled this out on the basis of not enough information. Now you are saying that you don't feel the broader class is relevant to the discussion. So we return to the point where there is not enough information to make a statistical claim about safety. As a consequence of returning to this point, your own claim about the system being half baked is without merit. It's a claim about the performance of the system which you have claimed we cannot characterize with the currently available statistics.

> We are interested in their motivation because we are thinking long term.

The thing I'm ultimately arguing against is the idea that Tesla is as you put it blatantly evil. Blatant means to be open and unashamed, completely lacking in subtlety and very obvious. The things Tesla is doing with regard to the environment are blatantly good. They say they are doing it because of care for the environment and their actions reflect that. If we think long-term, their actions are part of what allows the long term to exist in the first place. They are not just lacking in shame for that, they are proud of it. Brag about it. Exult in it. It is blatant that they care about the environment.

In your post you're saying that you speculate that their motivations might not be what they have claimed. This contradicts the idea of blatant evil. Blatant evil is obvious, lacking in shame, lacking in subtlety. The hiding of something is the definition of subtlety. The need to hide is reflective of a shame.


> Its a claim about the performance of the system which you have claimed we can not characterize with the currently available statistics.

I claimed the feature they call "Autopilot" is unsafe because it has only limited capability (as per Tesla's documentation). But the naming of the feature and its marketing inspires false confidence in drivers, leading to accidents. This is a very simple fact, and it should have been apparent to people at Tesla, and the fact that they went ahead and did this kind of marketing makes them "blatantly evil" in my books. Because, as you said, it is open and they are unashamed about it. Other safety features that are widely available in similar cars from other companies are irrelevant here. I am not even sure why you dragged them into this.

>If we think long-term, their actions are part of what allows the long term to exist in the first place.

What kind of circular logic is that? If they are not really interested (their real motivation) in the "long term", then their actions cease to be part of "what allows long term to exist".


> I claimed the feature they call "Autopilot" is unsafe because it has only limited capability (as per Tesla's documentation).

In citing their documentation, you acknowledge that their communication is enough to deduce the limits of their technology. In claiming that there is not enough data to make declarations about safety, you disavow the validity of your own proclamation of (a lack of) safety. In doing so, you've refuted many of the premises of your own argument.

> But the naming of the feature and its marketing inspires false confidence in the drivers, leading to accidents.

How is this different from any other name? Every word-concept pair starts out without the word and the concept linked together. For example, the name given to our species is 'Homo sapiens', which means roughly 'wise human being'. But humans aren't always wise. So why isn't the person who coined the term 'Homo sapiens' blatantly evil for coining the term?

> If they are not really interested in the "long term", then their actions cease to be part of "what allows [the] long term to exist".

Maybe we're talking past each other but this is... an absurd idea. And wrong. So very wrong.

If someone wakes up in the morning and they say they got up because they wanted to see the face of their loved one, but really they got up because they wanted to pee, they still got up out of bed. The existence of imperfectly stated motivations doesn't cause a cessation of causal history.


> their communication is enough to deduce the limits of their technology...

Not deducing. By what they explicitly state in the manual. About the "need to keep hands on the wheel always". So again. I am not "deducing" it.

>So why isn't the person who coined the term 'homo sapien' blatantly evil for coining the term?

I don't know. Was the person who coined the term trying to sell human beings as being wise? Are people suffering because of this word? What is your goddamn point?

Tesla is evil because they use lies to SELL. use lies and project a false image to get INVESTMENT. Please keep this in mind when coming up with further examples.

>The existence of imperfectly stated motivations doesn't cause a cessation of causal history.

Ha. Now you are talking about "history" that does not exist yet. Are you really this misguided or just faking it?


You clearly don't know what deduce means. You also clearly haven't understood anything I've said during this entire conversation. Or even much of what you've said, since you don't seem to realize you've refuted your own points.

> Tesla is evil because they use lies to SELL. use lies and project a false image to get INVESTMENT. Please keep this in mind when coming up with further examples.

You've utterly failed to establish that they are lying.

> Are you really this misguided or just faking it?

Tesla already has an established history. Therefore, it is not necessary to speculate about future history.


>You clearly don't know what deduce means.

Please explain.

>You've also clearly haven't understood anything I've said during this entire conversation.

Oh I understood you just fine. I just find it stupid.

>You've utterly failed to establish that they are lying.

That is because you are overly generous with assumptions to justify their claims, which is typical of people who are apologetic of fraudulent entities such as Musk

>Tesla already has an established history...

But they haven't saved the planet yet. Please give some thought to what you are writing before responding.


> That is because you are overly generous with assumptions to justify their claims, which is typical of people who are apologetic of fraudulent entities such as Musk.

No, I actually conceded. I gave up the generous assumptions on safety, backed by data, because you claimed we couldn't generalize from that data and I agree that doing such a generalization would be in some ways misleading.

This is what I mean by a lack of understanding on your part. Even in the post where you are telling me that you understood me just fine, but find my ideas to be stupid, you don't actually address what I'm saying.

As a consequence, I'm not going to continue this conversation. Have a nice day.


You just ignored every one of my points.


> The paint on the right edge of the gore marking, as seen in Google Maps, is worn

Indeed; here's a close-up of the critical area. Note that the darker stripe on the bottom of the photo is NOT the crucial one; the one that the car was supposed to follow is the much more faded one above it, which you can barely see:

https://www.google.com/maps/@37.410912,-122.0757037,3a,75y,2...

(Note that I'm not blaming the faded paint; it's a totally normal situation on freeways that it's entirely the job of the self-driving car to handle correctly. But I think it was what triggered this fatal flaw.)


Wow. The start of that concrete lane-divider looks incredibly dangerous for a highway. It pretty much just "starts", with only some sort of object that they call a "crash attenuator" in front of it. I would imagine its purpose is to dampen/dissipate kinetic energy and maybe deflect oncoming objects from hitting the concrete slab directly.

I don't even see any sort of road-bumps to warn drivers of this dangerous obstacle approaching.


Out of curiosity, are you not in the U.S.? These types of lane dividers are incredibly common throughout the country.

Note that I'm not defending the safety of them; I'm just surprised to see someone call them out as a hazard as they're such a common sight.


Dutch citizen here, I have never seen something like this.

To compare, here's a similar situation in The Netherlands:

https://www.google.nl/maps/@52.0928662,5.1734685,3a,75y,281....

Points of interest:

- big solid white arrow, line-fade is nearly impossible

- white-green sign indicating that the road is splitting

- big shoulder in the continuation of the division

- loads of grass instead of immediately using a metal barrier

- gently rising metal barrier, so driving straight into it will result in something like this: http://cdn.brandweersassenheim.nl/large/269dezilkvangrail104...

Same goal, but a completely different approach.


This is pretty apples-to-oranges. You see things like this in a lot of less dense U.S. areas as well. But in the crash example, it's right in the middle of an interchange, with the lane needing to form and rise quickly. Additionally, the gently rising metal barriers aren't safe in many situations because of their ability to launch cars. That's why they've been replaced by crash attenuators and sand/water barrels. Unfortunately the crash attenuator wasn't replaced after a recent (<1 week ago) accident, ensuring the next accident was fatal.

Here's an accurate U.S. analogue to your example: https://www.google.com/maps/@41.0286482,-102.1517159,3a,75y,...

Meanwhile, here's a Dutch example: https://www.google.com/maps/@51.9305956,4.4371182,3a,75y,66....

Turns out sometimes you need crash attenuators because space isn't infinite. Also, there's no grass in sight here.

Here's another example: https://www.google.com/maps/@51.9239112,4.4203482,3a,67.8y,2...

No sloping barriers, because they're not always safer.


Well, yes and no.

First, your counter-example is, ironically, pretty apples-to-oranges, as it is literally in the middle of nowhere. Meanwhile, the municipality where my interchange is located has a population density 1.5x that of Mountain View.

About the A20: it was built around 1970, inspired by American designs. Something like this would probably not be built today. Meanwhile, the specific ramp where the accident occurred was constructed around 2006.

I do agree that safety measures should be adjusted according to their location, there is indeed no one-size-fits-all solution here.


You're right that my first example is in the middle of nowhere, but there's a good reason for that. American and Dutch city designs differ so much as to make them incomparable. Even NYC has 0.63 vehicles per household (http://www.governing.com/gov-data/car-ownership-numbers-of-v...), and San Jose (the closest to Mountain View I could find) has 2.12. There's a lot more traffic to deal with, and a lot more sprawl, meaning less space for interchanges and long ramps.


Why are roads in the US so post-apocalyptic? Even your good example here looks horrible.


Hmm, perhaps something to do with there being more road miles in the US than in all of Europe combined?


The European road network is actually slightly larger than the US one (~6.5 million km vs ~7 million km).


Which makes the point just fine, when you compare the U.S's 326 million to the EU's 508 million.


Well, 741 million, I don't believe the road network figures were restricted to EU members only.


Here's an example of a crash attenuator right near your example: https://www.google.nl/maps/@52.0928978,5.1609792,3a,60y,305....

Edit: Also, that gore is freshly painted. Here's the same gore with faded paint. Mind you, it wasn't as bad, but even there you get faded painting: https://www.google.nl/maps/@52.0928706,5.1608083,3a,75y,109....


I've tried to find something similar in an urban area in Britain, here's one in Birmingham: https://www.google.nl/maps/@52.4901761,-1.889549,3a,69.6y,5....

That fence wouldn't be fun to drive into at full speed, and the lane markings are worn, but it's still far less awful than a concrete wall.


At highway speeds, if that "fence" is strong enough to prevent vehicles from falling from the raised left lane into the lower right lanes, then it would seem likely to cut a car in two. I think I'd prefer the crash attenuator.


That segment of the A38 has a 30 MPH limit, with the right fork increasing to 50 MPH. The dynamics of any crash would be very different.


The intersection in the Tesla crash is pretty sub par by US standards. Most highway off ramps are as you described. In space restricted areas we tend to use tons of reflectors then water/sand barrels or metal crumple barriers instead of gently rising barriers and a grass infield.

In basically all cases where there's a lot of pavement that isn't a lane there's diagonal lines of some sort that make it very clear that there isn't a lane there. A good chunk of the time there's a rumble strip of some sort.

In rural areas there's less signage, reflectors and barriers but the infield is usually grass, dirt or swampy depending on local climate.

Edit: and I'm wrong because why?


I wouldn't say wrong, but it seems to imply "AVs need great roads or bad roads; on mediocre roads they might kill you for no obvious reason, gee, too bad."

Such "pretends to work but actually doesn't", IMNSHO, would be far worse than "doesn't work there at all"


Granted my experience is my own and shaped by the areas I've lived in, but I'd say the crash barrier is pretty standard by US standards.

A few reflectors, a crumple barrier or some barrels and you've got a highway divider start! Certainly not as lengthy or as well marked as the Dutch example. This one I used to drive by in KC almost daily looks similar to the Tesla accident one (granted, this example does have some friendly arrows in the gore): https://www.google.com/maps/@39.0387093,-94.6774548,3a,75y,1...

I have also seen so many people crash into this one that they put up yellow hazard lights: https://www.google.com/maps/@39.0790636,-94.594365,3a,75y,17...


That's an interesting (and amusing) picture, but I'd expect many more incidents of a "gently rising metal barrier" causing a vehicle to roll over.


The metal barriers are usually quite safe and designed to absorb a lot of energy. (Plus rolling over is safer than crashing into concrete)


I'm in the EU. This road does look a little bit hazardous to me; like it was designed 60 years ago and never updated.

- The area one is not supposed to drive in doesn't appear to be marked. Where I live, it would be painted with yellow diagonal stripes.

- On a high-speed road, there would be grooves on the road to generate noise if you are driving too close to the edge of the lane.

- Paint on the road is rarely the only signal to the driver (because of snow or other conditions that may obscure road markings). There would be ample overhead signs.

- Unusual obstacles would always be clearly visible: painted with reflective paint or using actual warning lights.

- We rarely use concrete lane dividers here. Usually these areas consist of open space and a shallow ditch, so you don't necessarily crash hard if you end up driving in there. There's usually grass, bushes, etc. There are occasional lane dividers, of course, when there's no space to put in an open area. However, the dividers are made of metal and they are not hard obstacles and fold or turn your vehicle away if you hit them (and people rarely do because of the above).

I'm sure there are some dangerous roads here, too, but a fatal concrete obstacle like this, with highway speeds, with almost no warning signs whatsoever, is almost unheard of.


Excessively long, unstriped gores are not at all common in the Eastern US. This is bad engineering and poor maintenance.


If anything, the US tends toward far too little merge space rather than too much. Almost every city has a blind yield entering the freeway somewhere.


This is a problem with old highways that were upgraded to substandard interstates. That isn't the case here; there was plenty of room to build a safer design.


I live in South Africa. We see items such as this more commonly:

https://www.google.com/maps/@-25.833477,28.2391574,3a,75y,75...

https://www.google.com/maps/@-25.8317095,28.2407035,3a,75y,7...

However, it's not consistent. Some end quite abruptly as well, and have a plain sign in the middle without any slowly-rising divider.


I had never heard of crash attenuators before, but they seem really impressive! This one was the "SCI smart cushion" model. According to the manual[0], it can reduce the acceleration from a collision at 100 km/h (62 mph) to less than 10 G, so even a direct frontal collision at highway speed would be survivable[1]. That's pretty amazing.

[0] http://4fhmzg4ct6wezwll3wmp9kl9.wpengine.netdna-cdn.com/wp-c... [1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.212...
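As a rough sanity check on that number (my own back-of-the-envelope arithmetic, not from either document): stopping from 100 km/h at a constant 10 G takes only about 4 meters, which is roughly the length of these cushions.

    # Back-of-envelope stopping distance at a constant 10 G from 100 km/h.
    # Illustration only; a real attenuator won't decelerate the car uniformly.
    g = 9.81              # m/s^2
    v = 100 / 3.6         # 100 km/h in m/s, ~27.8
    a = 10 * g            # 10 G deceleration
    d = v ** 2 / (2 * a)  # from v^2 = 2*a*d
    t = v / a             # time to stop
    print(f"~{d:.1f} m over ~{t:.2f} s")  # ~3.9 m over ~0.28 s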


The crash attenuators are usually good enough to save people's lives. There are small grooves in the sides on some of the freeways in CA but they aren’t everywhere.

The problem with the attenuators is that they don't get replaced fast enough, which makes these accidents a lot deadlier.

You can see the difference between a used and unused attenuator in this article: http://abc7news.com/automotive/tesla-claims-missing-safety-b...


Agree. The area leading up to the barrier could at least be striped off.


Yes, I'd expect to see something like https://goo.gl/maps/dbp2fx79ihz

I could easily imagine a stressed human driver unfamiliar with the area following that unstriped gore area as a 'lane', too.


And you'd think that basic paint striping would be comparatively cheap. It's incredible that such a deadly point wouldn't have it.


Are these kinds of lane-dividers not painted with heavy yellow diagonal stripes in the US? Is the divider equipped with reflective material and/or lights to make sure drivers notice it? Is the divider actually necessary, or could it be replaced with gravel so people don't need to hit a concrete wall when they make a mistake?

I don't think a road like this would be possible in most of the EU. Autopilot needs to be fixed, but this road is also super dangerous and probably would not be allowed in the EU.

How often do non-autopilot cars fatally crash here? This does look like a bit of a death trap to me!


> I don't think a road like this would be possible in most of the EU.

I've driven in almost all countries in the union and I'm sure that and worse is readily available in multiple EU countries. While it's true that the EU subsidizes a lot of road construction, local conditions (materials quality, theft, sloppy contractors) have a huge impact on road and markings quality.


Hmm, "possible" was probably too strong of a word. Of course it's possible.

However, I've driven a significant number of hours in Spain, Switzerland, Germany, the UK, Ireland, Sweden, Denmark, Norway, Finland, and Estonia, and I wouldn't say that this kind of concrete divider is "readily available" as a normal part of a high-speed road, lacking the high-visibility markings I outlined in the post above, year after year as a normal fixture. In fact, I don't remember ever once seeing a concrete divider like this in the EU, even temporarily, but please prove me wrong (and maybe we can tell them to fix it!).

At highway speeds, lane dividers are only used when there is a lane traveling in the opposing direction right next to you. There is no point in concrete dividers if all the traffic is traveling in the same direction. At highway speeds, opposing lanes should be divided by a lot of open space and metal fences that don't kill you when you hit them.


Try: Poland, Romania, Hungary, Bulgaria, Greece, Slovenia, Slovakia, the Czech Republic. Those are also part of the EU and road quality there varies wildly from the countries you use as your examples.


> How often do non-autopilot cars fatally crash here? This does look like a bit of a death trap to me!

A non-autopilot car crashed at this exact spot a week earlier. This crumpled a barrier that is intended to cushion cars going off the road here, and contributed to the death of the Tesla driver.

The fact that a human crashed at this exact spot confirms that it really is an unsafe death trap.


I have noticed that, on two-lane non-divided roads, the dividing line (separating you from oncoming traffic) is yellow, while the right-side solid line is white. White means it's safe to cross the line, but yellow means it's dangerous to cross it.

You can see that yellow line at the accident site.

Now, why didn't they start a new yellow line where the lane split? That would give drivers (and software) an important cue: if you are driving down a "lane" with a yellow line on the right, something is seriously wrong!


In the United States, "A yellow line (solid or dashed) indicates that crossing the line will place a driver in a lane where opposing traffic is coming at the driver."

https://en.wikipedia.org/wiki/Yellow_line_(road_marking)#Uni...


>> These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design. This is not an implementation bug or sensor failure.

A little off topic, but I'm curious: I usually use "by design" to mean "an intentional result." How do other people use the term? In this case, the behavior is a result of the design (as opposed to the implementation), but is surely not intentional; I would call it a design flaw.


> but is surely not intentional; I would call it a design flaw.

In fact, it is intentional! Meaning that the system has a performance specification that permits failure-to-recognize-lanes-correctly in some cases. This element of the design relies on the human operator to resolve. Once the human recognizes the problem, either they disengage the autopilot or engage the brakes/overpower the steering.

Now, you could argue that the design should be improved and I would agree. But we should perhaps step back and consider some meta-problems here. As others have stated, the functionality cannot deviate significantly from previous expectations without at a bare minimum an operator alert, training pamphlet or disclaimer form. Tesla's design verification likely should be augmented to more comprehensively test real-world scenarios like this.

But the real core issue is that the design approaches this uncanny valley of performing so terribly close to parity with human drivers that human drivers let their guard down. IMO it's the same problem as the fatality in Tempe (near Phoenix) w/an Uber safety driver (human driver). When GOOG's self-driving program first monitored their safety drivers they found that they didn't pay attention, or slept in the car. IIRC they added a second safety driver to try to mitigate that problem.


I tend to distinguish "Working as intended" and "Working as implemented."

Working as intended: the system works in a basic average-observer human sense

Working as implemented: there were no errors in implementation, and the system is performing within the tolerances of that implementation (but the implementation itself may be flawed, or may be to a design that violates average human expectations).


I think this is proper usage of "by design". The design part is that it uses a camera and low-resolution radar and can see lanes and other cars but not obstacles. So this is an unintended edge case, but the result is per the design.


I find it hard to think that a group of engineers didn't consider stationary objects when _designing_ an autopilot system. I would no longer consider it a flaw if it was considered.


It's a hardware limitation. If you have a radar, the only way to get a signal out of it is to ignore all stationary stuff.


Here is a more technical explanation of the limit.

You send a radar signal out, then it bounces off of stuff and comes back at a frequency that depends on your relative motion to the thing it is bouncing back from. Given all of the stationary stuff around, there is a tremendous amount of signal coming back from "stationary stuff all around us", so the very first processing step is to put a filter that causes any signal corresponding to "stationary" to cancel itself out.

This lets you focus on things that are moving relative to the world around them. But makes seeing things that are standing still very hard.

Many animal brains play a similar trick. With the result that a dog can see you wave your hand a half-mile off. But if the treat is lying on the ground 5 feet away, it might as well be invisible.
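To make it concrete, here's a toy sketch in Python of what that "cancel out the stationary stuff" step amounts to once the radar has produced (range, relative velocity) detections. All names and numbers are invented for illustration; this is nothing like an actual signal chain.

    EGO_SPEED = 29.0   # our own speed in m/s (~65 mph)
    TOLERANCE = 1.5    # m/s window around "perfectly stationary"

    detections = [
        (120.0, -29.2),  # crash barrier: closing at ~our own speed -> stationary
        (60.0, -4.0),    # lead car pulling away slowly -> moving
        (35.0, -28.5),   # overhead sign / debris -> also reads as stationary
    ]

    def is_stationary(rel_vel, ego_speed, tol=TOLERANCE):
        # Something not moving over the ground closes on us at exactly our
        # own speed, i.e. rel_vel ~= -ego_speed.
        return abs(rel_vel + ego_speed) < tol

    tracked = [d for d in detections if not is_stationary(d[1], EGO_SPEED)]
    print(tracked)  # only the lead car survives; the barrier is filtered out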


That is not accurate. In fact one of the key features of radar processing is to find objects in the clutter, usually by filtering on Doppler shifts.

As far back as the 1970s helicopter-mounted radars like Saiga were able to detect power lines and vertical masts. That one could do so when moving up to 270 knots and weighed 53kg.


For radar detecting a power line against the sky is easy. Detecting a power line against a wall is not.

The same radar set that worked these wonders while flying would have been absolutely useless for a ground based vehicle.


But a car's radar doesn't need to detect a power line against a wall; it only needs to reliably detect the wall...


Actually it needed to detect a crash barrier against a background of lots of other stuff that was also stationary.

The more easily you can distinguish the signal you want from the signal you don't, the easier it is to make decisions. For radar, that is far, far easier with moving objects than stationary ones.


That would make sense if the radar is stationary.

In this case, the radar system itself is moving... including moving relative to stationary objects. So I don't see how what you say makes sense.

Not saying you are wrong. Just saying I don't follow your explanation.


Right. Besides, pulse doppler can detect stuff that is stationary relative to the radar anyway.

The real issue is false positives from things like soda cans on the road, signs next to the road, etc. Can't have the car randomly braking all the time for such false positives. As a result, they just filter out stationary (relative to the ground) objects, and focus on moving objects (which are assumed to be vehicles) together with lane markings. This is why that one Tesla ran right into a parked fire truck.

Interestingly, I've discovered one useful trick with my purely camera-based car (a Subaru equipped with EyeSight): if there is a stationary or almost stationary vehicle up ahead that it wasn't previously following, it won't detect it and will consider it a false positive (as it should, so it doesn't brake for things like adjacent parked cars), but if I tap the brake to disengage adaptive cruise control and then turn the adaptive cruise control back on, it will lock on to the stopped car up ahead.


Same thing happens in my BMW wrt toggling the adaptive cruise to recognize a stopped car.


The problem is not whether the object is moving relative to the radar. It is whether the object is moving relative to all of the stationary objects behind it that might confuse the radar.


It is a ROC curve/precision-recall issue, basically. Radar has terrible lateral resolution and even worse, often non-existent, elevation measurement. Potholes, manholes, Coke cans, and ground clutter all look alike and can in fact be detected as "having a relative velocity equal and opposite to my car's own velocity". You want to stop for only very few of all those stationary objects, otherwise you won't drive at all. The problem is you can't classify the relevant ones with radar. Which is why the camera helps, but obviously (for false-positive suppression and a high availability of the autopilot) only if it positively classifies a car's rear.
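In rough pseudo-Python, the trade-off ends up looking something like this. It's a sketch of the general fusion idea described above, with made-up thresholds and names, not anyone's actual code.

    from collections import namedtuple

    RadarTarget = namedtuple("RadarTarget", "range_m rel_vel")  # rel_vel in m/s

    def should_brake(target, camera_class, ego_speed, ttc_threshold=2.0):
        closing_speed = -target.rel_vel                     # >0 means we're closing in
        stationary = abs(target.rel_vel + ego_speed) < 1.5  # moving with the ground

        if not stationary:
            # Moving targets are almost certainly vehicles: act on radar alone
            # if time-to-collision gets short.
            return closing_speed > 0 and target.range_m / closing_speed < ttc_threshold

        # Stationary target: radar can't localize/classify it well enough, so only
        # brake if the camera positively classifies a vehicle rear. This suppresses
        # false positives -- and also ignores barriers, fire trucks, etc.
        return camera_class == "vehicle_rear"

    # The barrier is stationary and the camera sees lane lines, not a car rear:
    print(should_brake(RadarTarget(100.0, -29.0), "unknown", 29.0))  # False -> no braking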


The military does it with SAR (synthetic aperture radar). Is something like that feasible for self-driving cars?


Consider price point implications.


Well, if SDVs take off as advocates espouse, the volume (plus Moore's Law) will make costs drop.


Volume maybe, but Moore's Law definitely not.


No, since you need movement to create the along-track aperture, you would have already moved through the aperture, running over what you are trying to detect.


This rather sounds like radar is the wrong choice, does it not?


To follow on, there have already been reports of Autopilot drifting and mismanaging that spot on the freeway (although without this catastrophic result). That fact adds to this description/explanation.

Aside: the follow-another-car heuristic is dumb. It's ultimately offloading the AI/decision-making work onto another agent, an unknown agent. You could probably have a train of Teslas following each other following a dummy car that crashes into a wall, and they'd all do it. A car 'leading' in front that drifts into a metal pole will become a stationary object and so undetectable.


> the following another car heuristic is dumb.

It's the Machine Learning equivalent of Zen navigation: http://dirkgently.wikia.com/wiki/Zen_navigation


Does it actually use the position of the car in front to determine where to steer? My Honda will sense cars in front of me but will only use that info to adjust speed when using cruise control.


It will if it can't get a lock on the lane markers. For example, mine does this when navigating large unmarked intersections, of which my home town has many, where the far-side lanes don't align with the lanes entering the intersection (due to added left and right turn lanes). My Tesla will follow a car in front as it slightly adjusts position to the offset lanes on the far side (which is very cool), but if there is no car to follow I just take over, knowing it'll not handle it well.


Although I don't work in self-driving cars, I do know a fair amount of ML and AI, and I have to be honest: if my bosses had asked me to build this system, I would have immediately pointed out several problems and said that this is not a shippable product.

I expect any system that lets me drive with my hands off the wheel for periods of time to deal with stationary obstacles.

What is being described here, if it's correct, is a literal "WTF" compared to how Autopilot was pitched.

I wouldn't be surprised if the US ultimately files charges against Tesla for wrongful death by faulty design and advertising.


Tesla's autopilot team is on (if I recall) its 5th director since the unveiling of HW2 in late 2016. The first director of Autopilot, Sterling Anderson, was dumbstruck when Musk went public with delusional claims about what autopilot would be capable of, and quit. A slew of top engineers left with him, or not long afterwards. Tesla, showing their true colours, followed up with a meritless lawsuit accusing Sterling of stealing trade secrets.


This comment about rumors surrounding Anderson's departure intrigued me enough for me to Google a source. https://www.technologyreview.com/the-download/608739/some-te... is one of the top results.


Makes me wonder about the future of self-driving cars. I would think that the first thing that should be programmed in is to not run into objects that are not moving.

IMO, maybe the roads need to be certified for self-driving. Dangerous areas would be properly painted to prevent recognition errors. Every self-driving system would need to query some database to see if a road is certified. If not, the self-driving system safely disengages.


Tesla claims their cars have all the hardware required for full self-driving, but their cars do not have LIDAR. Every serious self-driving outfit understands LIDAR to be necessary. Tesla's marketing department would have people believe otherwise of course.


Exactly. I think a LIDAR could have seen the obstacle in this case.


You are correct. Built an autonomous car in 2007 that wouldn't hit that wall.


Were you using the Velodyne 64s?

Cruise, who uses the 32-laser Velodynes, avoids highway/freeway speeds because their lidar doesn't have the range to reliably detect obstacles at that distance.


SICK LMS 2xx. Weakest link was the java microcontrollers we had to use because we were majority funded by SUN. My favorite part was the old elevator relays we found in a junkyard and used for a bunch of the control system. You could hear the car 'thinking' and know what it was about to do.


I'm involved in a project using the LMS5xx... but on a PLC, so I envy your Java microcontroller! Have you seen SICK's new MRS 6000? Range of 200 m, and 15 degrees vertical! Pricing is similar to the LMS5xx.


I've been out of the space for nearly 10 years. Just checked out that MRS 6000; that's amazing.


>Makes me wonder about the future of self-driving cars. I would think that the first thing that should be programmed in is to not run into objects that are not moving.

My understanding is that Tesla's autopilot is pretty different from other, more mature self-driving car projects. I wouldn't read too much into it.


This is a Tesla issue. Waymo uses Lidar and most likely won't run into walls.


I don't know that the issue is LIDAR, but rather distinguishing a stationary object in front from stationary objects to the side.

Does Uber use LIDAR?


Well, "uses": it seems that in the case of E.H., the lidar was there but its output was discarded in the end.


I wonder if we'll see it speed-limited. For commutes, if my time is freed up to read/work, then I don't need to be going 70 mph. If I'm travelling across a continent overnight, the gain I'd get from 10+ autonomous hours overnight would beat having to rush when driving personally.

And I agree with certified roads. But even then, if a truck drops cargo in the middle of the lane, you'd want to be in a car that can detect something sitting still in front of you.


Kudos to the NTSB. Events like this and companies like Tesla are exactly why we need regulation and government oversight. Tesla's statement that the driver was given "several warnings" is just a flat out lie.


The First Law of Autopilot should be: Don't run into shit. If this fails then it's not autopilot, it's "Autocrash".


> This is the Tesla self-crashing car in action. Remember how it works. It visually recognizes rear ends of cars using a BW camera

I'm sure Tesla's engineers are qualified and it is certainly easy to second-guess them, but it is beyond me why they would even consider a BW camera in a context where warning colors (red signs, red cones, black and yellow stripes, etc.) are an essential element of the domain.


The cameras in Tesla's non-Mobileye cars are grey grey red. Supposedly this gives you higher detail resolution while still being able to differentiate important sign colors.

https://twitter.com/elonmusk/status/798886670468644865


You can have different cameras for sign recognition if you can't deal without color. But BW cameras have huge benefits from sensitivity in the optics all the way to processing power in the backend without much downside.


Oh, you can deal without colors, but you're intentionally depriving yourself of data, as you now have Safety Gray, Danger Gray, Warning Gray and Traffic Gray. Not to worry, those colors probably didn't mean anything important anyway.


Deep learning downsamples heavily for reasonable performance.


Before it even entered the gore area, it likely centered between the actual lane of the highway and the exit lane. Bear in mind, when a lane forks, there is a time when the lane is wider than average. And with the gore area lines worn, the car may have missed them entirely. Once it was centered in the gore area, presumably it didn't consider the lines under or directly in front of the car to be lane lines.
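A crude way to picture that centering (made-up lateral offsets in meters, just the geometry of "steer toward the midpoint of whatever looks like the lane"):

    def lane_center(left_line, right_line):
        return (left_line + right_line) / 2.0

    print(lane_center(0.0, 3.7))  # normal lane: aim ~1.85 m from the left line
    # Near the fork the "lane" appears to widen (the gore's faded right edge is
    # missed, so a line further to the right gets picked up instead):
    print(lane_center(0.0, 7.4))  # aim point drifts to 3.7 m -- into the gore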


Tesla cut ties with Mobileye in 2016, and this was a 2017 Model X, so it probably wasn't running Mobileye's software.


Pedantic: Mobileye cut ties with Tesla, not the other way around.


More relevant in this situation: Mobileye's stated reason for cutting those ties.

>On Wednesday, Mobileye revealed that it ended its relationship with Tesla because "it was pushing the envelope in terms of safety." Mobileye's CTO and co-founder Amnon Shashua told Reuters that the electric vehicle maker was using his company's machine vision sensor system in applications for which it had not been designed.

"No matter how you spin it, (Autopilot) is not designed for that. It is a driver assistance system and not a driverless system," Shashua said.

https://arstechnica.com/cars/2016/09/tesla-dropped-by-mobile...


> Mobileye's CTO and co-founder Amnon Shashua told Reuters that the electric vehicle maker was using his company's machine vision sensor system in applications for which it had not been designed.

Props to Shashua for deciding to put safety above profit. We need people in leadership like this.


Yes, after Mobileye (now Intel) learned there was an internal Tesla effort to replace them.


Isn't that claim also from Tesla?


Neither company commented on the situation with any detail. However, that account was not disputed by either party.


> These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design

Really? Really?? I mean, if I was designing a self-driving system, pretty much the first capability I would build is detection of stuff in the way of the vehicle. How are you to avoid things like fallen trees, or stopped vehicles, or a person, or a closed gate, or any number of other possible obstacles? And how on earth would a deficiency like that get past the regulators?


Shortly after the accident, a few other Tesla drivers reported similar behavior approaching that exit.

But on the other hand, gores like that can also trick human drivers. Especially if tired, with poor visibility. In heavy rain or whiteout, I typically end up following taillights. But slowly.


Except that software doesn't get tired, and conditions were good. So even if that could trick human drivers, those conditions weren't present and do not apply to the case at hand.


What's your source for the claim that Tesla's system "doesn't detect stationary objects"? From the reference frame of a moving Tesla, both globally stationary and globally moving objects will appear to be in motion.


Teslas on autopilot have collided with many stationary objects that were partially blocking a lane. Known incidents include a street sweeper at the left edge of a highway (China, fatal, video available), a construction barrier in the US (video), a fire truck in the SF bay area (press coverage), a stalled car in Germany (video), a crossing semitrailer (NTSB investigation), and last month, a fire company truck in Utah.


Yes, but the radar has poor angular resolution (but a good idea of relative velocity), so it cannot tell the difference between a stationary object at the side of the road and one in the middle of it, so it must ignore all stationary objects (by ignoring all objects with an apparent velocity approximately equal to the speed of the vehicle) in order to avoid constant false positives.


Can you explain further what "poor angular resolution" means?


It's good at determining whether something is moving towards it or away from it, but bad at determining where that object is; whether it's directly in front or slightly to the left or far to the left. Its "vision" in that sense is blurry.
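To give a sense of scale (assumed numbers, automotive radars vary): a few degrees of azimuth uncertainty is already several meters of lateral uncertainty at the ranges that matter for highway braking.

    import math

    az_uncertainty_deg = 4.0  # assumed ballpark for a cheap automotive radar
    for range_m in (50, 100, 150):
        lateral_m = range_m * math.tan(math.radians(az_uncertainty_deg))
        print(f"at {range_m} m: ~{lateral_m:.1f} m of lateral blur")
    # ~3.5 m at 50 m, ~7 m at 100 m -- more than a lane width, so "in my lane"
    # vs. "sign beside the road" is genuinely ambiguous.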


It means that if Tesla were to release a side-by-side comparison of a regular photo versus what the radar sees, people would be screaming bloody murder.

Imagine an image smeared out like on a 15-year-old Nokia picture phone, but with very high colour (velocity) precision. That is what a radar sees.


Poor angular resolution means you can't tell if an object is at 12 o'clock vs 1 o'clock. It means you can tell there are things, but you don't accurately know what angle they're at.


It cannot see small details. So it probably "sees" another car as a blurred spot and only when it is close.


The NTSB report said that the system didn't apply automatic emergency braking. That either means it didn't detect the stationary object or ignored it on purpose.

You can argue something between those two options, but ultimately it is just a semantic argument (e.g. "it just chose to ignore it" which is effectively the same as a non-detection, since the response is identical).


Aside from resolution, another reason suggested is to avoid slamming on the brakes in the middle of a highway for false positives.

https://www.wired.com/story/tesla-autopilot-why-crash-radar/


> The white lines of the gore (the painted wedge) leading to this very shallow off ramp become far enough apart that they look like a lane.

True, but not one into which the car should have merged. Although crossing a solid white line isn't illegal in California, it is discouraged for one reason or another.

I love seeing the advances in tech, but it’s disheartening to see issues that could have been avoided by an intro driver’s ed course.


Reservation cancelled.

While the anti-Tesla news is bad, Tesla needs to be more clear that this really is a failure of autopilot and their model - they can't expect a human to get used to something working perfectly. Clearly his hands were close to the wheel in the very recent past (Tesla is famous for not detecting hands on the wheel).

I'm hoping google can deliver something a bit safer.


I don't get it: the system relies on two kinds of sensors, radar for rear-ends and optical cameras for broader decision making, so that location confused the cameras and that was it? The car has no ability to understand its surroundings outside crude parsing of the visual field (trained ML, I suppose)...

that's quite fucked up.


Mobileye (the EyeQ3, which is in AP1) does detection on-chip, including things like road sign recognition. The "software" part is near non-existent for them; it's more of a configuration than some sort of software ANN model like what Tesla is using with AP2 and the NVIDIA Drive PX.


Wow, Tesla doesn't have auto emergency braking? Even my Kia has that - adaptive cruise for following cars, auto emergency braking for stationary objects (and when the car in front of you suddenly slows down). Yeah, I'll keep my Kia...


My 2015 S70D (AP1 presumably) does start complaining if it thinks I'm about to collide with a stationary vehicle. Whether it would actually stop the car is not something I have had the nerve to test. Perhaps I should find a deserted car park and put out some cardboard boxes and see what happens. The problem is that cardboard doesn't reflect radar well.


JFYI, a (five year old) video from "Fifth Gear":

https://www.youtube.com/watch?v=PzHM6PVTjXo

I simply love the inflated "dummy car".


Could you wrap the cardboard in aluminum foil?


Worth a try.


It does, but it cannot detect stopped objects, which apparently is a standard limitation. Besides, AEB is made to reduce crash speed; the car will still crash.


As a side note, I find the lane markings in the US confusing. The "gore sections" in Europe are filled with stripes, so there is really no way to mistake them for a lane.


As a driver on both continents, I would respectfully correct you to "are supposed to be filled with stripes". Alas, even as seen here, markings deteriorate - on both sides of the pond.

"Unless the road marking is 105% perfect, it's never the fault of the autopilot, but look, autonomous driving!" is just pure marketing, without any substance to back it.


I think stripes are "typical" in the US too. I can't speak to percentages, but they certainly have them around where I live.

Example: https://www.google.com/maps/@42.6871358,-73.8004004,90m/data...


>it doesn't detect stationary obstacles

Wait, how can that be? I mean clearly it didn't detect the barrier in this case, but that wasn't by design was it?


> Cars are special to the vision system.

Wonder if the barriers can be modified to look like another car to these systems, but still remain highly visible and unambiguous to human drivers?

Part of the overall problem is that these roads were not designed for autonomous driving. This is much like how the old paver roads were fine for horses, but really bumpy/destructive to car wheels and suspensions.

Over time, we adapted roads to new tech. This needs to start happening now too.


Machine vision is perfectly capable of detecting these barriers, as is LIDAR.

The fault here isn’t with road design. The fault here is with Tesla shipping Autopilot without any support for stationary objects, AND their delusional and (should be criminally reckless) decision to not use LIDARs.

A car without ABS or power braking is not legal to be sold today. We need to apply those standards here: anything more than cruise control (where the driver understands they need to pay attention) needs to have certain safety requirements.


I think they rely (more) on radar now since the "I can't see a white truck" incident. The camera is useless if the sun is shining against it, the radar will still work but be useless for stationary objects. I have no idea how Tesla intends to solve this "small" issue.


From what we've seen so far, I'm guessing "marketing", "passing the buck" and "whataboutism".


Agreed it's gross negligence resulting from pure arrogance. Makes the cult of Musk so much less savory and one has to wonder how much crap there is beneath the shiny exterior in all of his projects.


I wonder how soon radar reflecting paints will start being used on roadways and barriers, etc. That seems like it would be a general benefit to all brands of cars with auto drive.

At the end of the day we want to know if the autonomous systems are safe. Policy decisions will end up depending on that determination. That requires clear definitions for what constitutes failures and accurate gathering of data.


These cars aren't crashing into stationary objects that don't have a radar return. The software in the cars is deliberately filtering out the radar returns of stationary objects. It wouldn't matter how many radar reflectors you slapped on the barrier, the Tesla would ignore the radar return of the stationary barrier.
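
For what it's worth, here is a toy sketch of why that filtering happens (my own simplification of generic ACC radar logic, not anyone's actual code): a target whose closing speed matches your own speed is stationary relative to the ground, and legacy radar ACC logic throws those returns away to avoid phantom braking for overhead signs, manhole covers and parked cars.

    # Toy illustration only -- generic ACC radar logic, not Tesla's code.
    def keep_radar_target(ego_speed_mps, closing_speed_mps, tolerance_mps=1.0):
        # Ground speed of the target as seen from a forward-facing radar.
        target_ground_speed = ego_speed_mps - closing_speed_mps
        # Stationary targets are discarded; only movers are tracked.
        return abs(target_ground_speed) > tolerance_mps

    print(keep_radar_target(29.0, 29.0))  # False: a stationary barrier is ignored
    print(keep_radar_target(29.0, 5.0))   # True: a slower car ahead is tracked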


Maybe the barrier needs a big waving maneki-neko cat in front of it, if the waving paw would be enough movement to be noticed by the radar.


So it should be possible to murder Tesla drivers by painting adversarial strips on the road?


Perhaps. But it's also possible to murder (or endanger) non Tesla drivers by messing with road signs as well.


>That's the fundamental problem here. These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design

I wouldn't say it's by design or expected behaviour because if a Tesla approached a stopped vehicle, the expectation would be the car would stop.


It's not expected behavior, but it seems to be what happens if you don't prevent it.

https://insideevs.com/tesla-model-s-rear-ends-another-parked...


But why would it speed up beyond the speed limit?


The cruise control was set to 75. 8 seconds before the crash, it was following a vehicle at 65 mph. It steered left, then was not following the vehicle anymore, and accelerated to try and get back to the set speed.


In CA traffic (and many other states) driving at the speed limit is likely more dangerous than driving with the flow of traffic (roughly 10 MPH over the limit - sometimes more than that).

One of the options for Autopilot is that you can tell it "Never go more than X MPH over the speed limit" with a common setting being a few miles an hour over.


This statement is not quite correct. The setting indicates the speed, as an absolute offset of the currently detected speed limit, at which the car will emit audible or visual warnings that the driver has set the cruise-control speed too fast. That setting is also the speed for a specific gesture that sets the cruise control to the maximum speed for the circumstances. E.g., if the posted speed limit is 45 and the setting is 5, then the gesture sets cruise control to 50 mph.

The car never adjusts the cruise-control set speed by itself, with one exception: if the current road has no center divider, then it clamps the current speed to no more than the posted speed limit + 5 mph. The term "clamps" is in the programming sense: if cruise-control is already set below that speed, nothing changes.

The car never increases the cruise-control set speed. Only the driver can do that.

In other words, the driver had already set the cruise control to 75 mph and likely had the setting at speed limit + 10, which is aggressive. The reason the car accelerated is because it determined there were no cars ahead of it traveling at a speed lower than the set speed. Unfortunately, that conclusion was absolutely correct.
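
A minimal sketch of the clamping behavior described above (purely illustrative; the function name and structure are my assumptions, not Tesla's code):

    # Clamp in the programming sense: the set speed is only ever lowered,
    # never raised, and only on roads without a center divider.
    def effective_set_speed(set_mph, posted_limit_mph, has_center_divider):
        if not has_center_divider:
            return min(set_mph, posted_limit_mph + 5)
        return set_mph

    print(effective_set_speed(75, 65, has_center_divider=True))   # 75: divided freeway, untouched
    print(effective_set_speed(75, 45, has_center_divider=False))  # 50: clamped to limit + 5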


If it's like other smart cruise controls, it sped up to the set speed after following a slower car.


Because the driver told it to.


"This is the Tesla self-crashing car" That's a harsh and inaccurate statement. While I agree that Tesla didn't handle the PR well we should acknowledge that every system is prone to fail at some point. And the fact that we do not hear about non-autopilot car crashes due to system failure as much as we hear about Tesla's is not an indicator that it doesn't happen.


“Car firmware shouldn’t be open source, that’s dangerous.”

There’s no way in hell Tesla would have gotten away with selling this for so long if their users were allowed to read the unobfuscated source.

The fact that people are given no control over these things that can kill them while the manufacturers can just mess around with it without any real oversight is absolutely insane. I really don’t think the average IRC lurker who could figure out how to compile the firmware could be any more dangerous than the “engineers” who wrote it in the first place.


I think we need some more details in order to conclusively blame the Autopilot for this death. I would like to see the full report and really do hope that Tesla can see their way to coming back into the fold and cooperating with the NTSB on a deeply technical fact based analysis on what happened here. For one thing, we do need to know more about the broken crash attenuator and the impact that it had on a severity of the incident.

All of that being said, I still think Tesla has mostly the right approach to their Autopilot system. There is an unacceptably high number of crashes caused by human error, and getting to autonomous driving as fast as possible will save lives. It is virtually impossible to build a self-driving system in a lab; with all currently known methods you must have a large population of vehicles training the system. The basic calculus of their approach is that the safety risk of not getting to autonomous driving sooner outweighs the risk of failures in the system along the way. It is admittedly a very fine line to walk, but I do see the logic in it.

I do think that Tesla could do more to educate the users who are using early versions of the software.


“[Driver’s] hands were not detected on the steering wheel for the final six seconds prior to the crash. Tesla has said that Huang received warnings to put his hands on the wheel, but according to the NTSB, these warnings came more than 15 minutes before the crash.”

This kind of stuff is why I’ve lost all faith in Tesla’s public statements. What they said here was, for all intents and purposes, a flat out lie.

Clearly something went wrong here, but they leapt to blaming everyone else instead of working to find the flaw.


Add to that, the last bullet point from page 2 of the official preliminary report[1]:

   At 3 seconds prior to the crash and up to the time of impact with the crash attenuator,
   the Tesla’s speed increased from 62 to 70.8 mph, with no precrash braking or evasive
   steering movement detected.
That, to me, is a strong indicator that Tesla's AP2 cannot recognize a crash attenuator, and probably one of the strongest arguments that AP2 without LiDAR is unsafe and fatal (let alone the missing redundancy in sensors and brakes[2]).

[1]: https://www.ntsb.gov/investigations/AccidentReports/Reports/...

[2]: https://arstechnica.com/cars/2018/04/why-selling-full-self-d...


It reads like he was following another car that was driving at a slower speed but once that car left his path, his car began to accelerate to the pre-set speed of 75mph.


It doesn't matter if he was following another car or not. If there is an obstacle, AP2 should brake and in that case it should brake hard or use evasive turning maneuvers to avoid head-on collision.


That’s the thing, it clearly didn’t detect the obstacle.

Given that there were no (known) obstacles, it behaved correctly.

Fix the obstacle detection.


The problem is it's entirely possible the obstacle detection is un-fixable without swapping out the sensor suite the Tesla ships with. Tesla is quite clearly expecting the problem to be solvable with that sensor suite alone (this is more or less the line I was given by a Tesla salesperson when I was shopping for a car), so it's not going to be in the cards for them to recall every Tesla on the road to fit LIDAR, assuming that's at all feasible.


This happens with regular adaptive cruise control on my Ford: when the cruise control set point is a lot higher than the current speed, the car accelerates after another car clears the lane and will ram into things.


Ford didn't call their adaptive cruise control "autopilot"; they call it what it really is: "Adaptive Cruise Control with Stop-and-Go".[1] By contrast, Tesla touted their ACC "Autopilot" with auto steering, and clearly in this case it didn't steer correctly and proved fatal (at least three times that I can recall).

[1]: https://www.youtube.com/watch?v=LnUtjs-jeJA


To contrast that anecdote, this doesn't happen with my Subaru with EyeSight. It uses parallax for 3D distance, if it detects a solid object ahead (be it vehicle, wall, barrier, or anything else) it will apply the emergency braking system (which may still result in a collision in the given circumstances but at much less fatal speeds).
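
For context, the parallax-to-distance math is just standard stereo vision (generic formula, not Subaru's implementation; the numbers are made up):

    # depth = focal_length * baseline / disparity -- the pinhole stereo relation.
    def stereo_depth_m(focal_px, baseline_m, disparity_px):
        return focal_px * baseline_m / disparity_px

    # A solid object dead ahead produces disparity whether or not it is moving,
    # which is why a stereo camera can range a stationary barrier.
    print(stereo_depth_m(focal_px=1400, baseline_m=0.35, disparity_px=10.0))  # ~49 m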

The industry has many highly effective AEB systems, and has for years now. I cannot comment on Ford's because I don't know specifically about its problems.


My Subaru with EyeSight accelerates heavily when it no longer is tracking a car in front. This has happened once or twice in odd circumstances, but since I was paying attention to both the speed increase and road in front I tapped the brakes to disengage.


I think the parent was talking about in the general case, when there are no obstacles.

I imagine the Ford would brake too if it detected an obstacle (and has an AEB system of some sort).


My Camry Hybrid sometimes picks up cars in the lane next to me and a car length ahead, to match speed with them. And sometimes it loses the car ahead of it on turns. I try to aim the top of the steering wheel at the car ahead of me, and this seems to help the sensor a great deal.

On turns, it doesn't accelerate, only maintains speed; can be frustrating if you want to act like a racecar driver. For example, following someone around a freeway curve at 45mph, and the other driver changes out of your lane. Despite having adaptive cruise set to 60mph (or higher), it will stay at 45mph until after the curve, and only accelerate after ~5 seconds of straight roadway.

It does sound the collision alert (sadly no autonomous braking here) very consistently when I'm accelerating or adaptive cruise is on, and there's a stationary object ahead.


> Despite having adaptive cruise set to 60mph (or higher), it will stay at 45mph until after the curve, and only accelerate after ~5 seconds of straight roadway.

My 2015 S70D does something like this too.


Like all adaptive cruise controls, it will accelerate if you have the max speed set higher, but it will not ram into things since it's on you to steer.


A counterargument is that HUMAN DRIVERS are unsafe and fatal (let alone missing common sense and alertness). I 100% get your point, but my counterpoint is that this is what we are going to see in the future: autonomous systems that by and large are way safer than human drivers, but when they fail, they might do so in circumstances where a human would not. Humans are good at some things, computers/AI/sensor suites are good at others, and that is going to mean different kinds of failures, and we have to learn from this. The gigantic upside is countless situations where the AI detects things the human would not, and overall better outcomes. (It's also worth noting that this incident was way worse because of the barrier. If the divider had been in normal condition, the driver would have lived, in my opinion.)


I get that in 5-10 years we MAY have SDVs that are superior to human drivers. But I think a lot of people have a mental model of SDVs _right now_ being superior to human drivers. I can't remember the crash stats, but I think the only way to know for sure is to calculate the number of miles a single human generally drives on average before they are involved in a collision or crash, etc...

Now I don't have the numbers memorized, but in previous Tesla, Uber, and Google threads, only Google had reached records similar to human-level crash safety. Keep in mind, however, that they are still working on driving through snow, construction sites, and non-existent (paint gone) lane lines... so the test areas are quite specific still.

I realize Tesla isn't advertising level 5 autonomy - I'm just responding to the above poster.


We don't have exact numbers for Tesla, but there are 2 confirmed casualties in what is conservatively over a billion Autopilot miles. The average casualty rate for human drivers is something around 1.25 deaths per 100 million miles. Those numbers are not directly comparable, given that Autopilot is not something that is used during 100% of all driving miles and Tesla drivers are not average car drivers. However, there is nothing that screams "Teslas with Autopilot are a death trap" like some people in this thread are implying.
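
Taking those figures at face value (and ignoring the confounders just mentioned), the naive comparison looks like this:

    # Back-of-envelope only; ignores the selection effects described above.
    autopilot_deaths = 2
    autopilot_miles = 1.0e9              # conservative estimate from above
    human_rate = 1.25                    # deaths per 100 million miles

    ap_rate = autopilot_deaths / (autopilot_miles / 1e8)
    print(ap_rate, human_rate)           # 0.2 vs 1.25 deaths per 100 million miles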


Autopilot will only be engaged on roads and under conditions that are favorable to autonomous driving, but fatal accidents tend to happen more under conditions that are unfavorable for human drivers. The non-Tesla numbers also include cars that are smaller, cheaper and less sophisticated, so much more likely to lead to fatal outcomes in crashes. So these numbers are heavily skewed in favor of Tesla. The proper comparison would be Waymo's performance, and they've been without a fatal crash afaik. As an alternative you could pull the numbers for cars of similar build on similar roads with human drivers.


I don't think it's fair to compare an L2 system to an L5 system, though. My sense is that a very good ADAS (L2) system will reduce overall fatalities, while causing a small number of additional ones through human misuse of the system. This accident seems like the latter case.


Can you point me to a source for the billion Autopilot miles? This is something I've been trying to track down without any luck so far.

The best source I've been able to find so far is a tweet by Elon Musk in 2016 saying "Cumulative Tesla Autopilot miles now at 222 million".

[1] https://twitter.com/elonmusk/status/784487348562198529?ref_s...


I wish Tesla would update those numbers more frequently, but it looks like they stopped doing that in 2016. However, they did enough updates in 2016 to allow us to estimate a trend on the order of 1 million miles per day (figures below are cumulative Autopilot miles, in millions):

4/9/2016 - 47 [1]

5/24/2016 - 100 [2]

6/30/2016 - 130 [3]

10/7/2016 - 222 [4]

11/13/2016 - 300 [5]

This rate, continuing linearly, would put us at roughly 900 million today. A linear accumulation of miles ignores that Tesla has roughly twice the cars on the road that they had in 2016 when those numbers were published. I have no idea what their current mileage total has reached, but I feel safe in saying it is over a billion miles.
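
For what it's worth, a quick sketch of that linear extrapolation from the first and last data points above (my own back-of-envelope code; it assumes the 2016 trend simply continued, which understates growth as noted):

    from datetime import date

    # (date, cumulative Autopilot miles in millions) -- first and last points above
    (d0, m0), (d1, m1) = (date(2016, 4, 9), 47), (date(2016, 11, 13), 300)
    rate = (m1 - m0) / (d1 - d0).days            # ~1.16 million miles per day

    today = date(2018, 6, 7)                     # date of this thread
    print(round(m1 + rate * (today - d1).days))  # ~963 million miles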

[1] - https://twitter.com/Tesla/status/718845153318834176

[2] - https://www.theverge.com/2016/5/24/11761098/tesla-autopilot-...

[3] - https://www.tesla.com/blog/tragic-loss

[4] - https://twitter.com/elonmusk/status/784487348562198529

[5] - https://electrek.co/2016/11/13/tesla-autopilot-billion-miles...


Fantastic, thank you!


Keep in mind that a billion miles is a very small number in this context. People in the U.S. alone drive well over 3 trillion miles every year [1]. We simply don't have enough data yet to say that autonomous vehicles are safer.

[1] https://www.npr.org/sections/thetwo-way/2017/02/21/516512439...


I completely agree and should have put something about sample size in that comment. However there are no red flags yet in the statistics and I think that is important.

Think of it like flipping a coin. We decide to flip a coin 4 times and get 2 heads and 2 tails. That wouldn't be enough to draw any meaningful conclusions about whether the coin is fair or if it is weighted to one side, but I would feel a lot better about that result than if the coin ended up on the same side 4 times.
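
To put rough numbers on the coin analogy (my own illustration of the sample-size point, nothing more):

    from math import comb

    p_two_and_two = comb(4, 2) * 0.5**4   # 0.375: 2 heads / 2 tails from a fair coin
    p_all_same    = 2 * 0.5**4            # 0.125: all four flips land the same way
    print(p_two_and_two, p_all_same)
    # Even the "worrying" outcome happens 1 time in 8 with a perfectly fair coin,
    # which is why 4 flips tell you almost nothing.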


> I realize Tesla isn't advertising level 5 autonomy

To my reading they are, and then disclaiming it in the small print. They clearly want to leave the impression it's close to Level 5.


Tesla might not be advertising L5, but they're certainly claiming that Autopilot is safer than human drivers right now.


They're claiming that human+Autosteer+TACC is safer than a human. They're not claiming that Autopilot while you use your phone or watch a movie is safer than human driving. And they remind you of this every time you engage the feature.

Consider AEB. AEB is a safety improvement over pure human driving. But it'd be hugely foolish to rely 100% on AEB to do your braking for you. It's not designed for that -- it can help you in a lot of situations, but not all of them. The same is true for Autopilot.


If they were better than humans now (fully SDV), then everybody would be selling them, using them, etc. That wasn't my point. My point is, even when they become safer, maybe even many many times safer, we are going to see some accidents that you could argue a human would have avoided. And it will be terrible for the people involved, there will be outcries, etc. But the sum will still be that many many lives are saved.


I shouldn't have to keep saying this in every thread about Tesla autopilot, but I guess I do: the fact that there will probably one day be autonomous driving systems that are safer than human drivers does not imply that Tesla's current system is safer than human drivers.


I think it is safer. It's of course dependent on the situation, i.e. highway cruising. But I think it is. (Human drivers are not that high of a bar; I see crashes at least weekly on my stretch of highway.)


Why are we comparing to humans? Shouldn't we compare autosteer to a human assisted by lane keeping assist?


It is not clear that this particular autonomous driving system is better than a human driver.


And they play contradictory arguments using statistics "You don't have enough data to say it is dangerous!" while at the same time using the exact same statistics to argue that it is safer.

Either the statistics are half-baked or they're ready to draw conclusions with, you cannot have it both ways.


Not to mention repeatedly claiming that "it saves lives", when they're probably either talking about a completely separate collision avoidance system (which has probably stopped several accidents, by definition, but it isn't special and it doesn't move the car on its own); or they're talking out of their asses because the number of independent variables is the kind of thing you need a PhD statistician for and they'll probably just tell you to take a hike.


This is not even an autonomous driving system. Let loose to drive from city A to city B without human supervision, it will crash 9/10 trips.


Musk prefers to lie to the public and benefits from the disintermediation of the media that various Internet distribution channels have allowed. Tesla often makes clearly false statements. SpaceX, when part of their heavy lift rocket crashed into the ocean, cut away from that scene to a shot of two happy smiling spokesmodels pretending to not know what had happened. Today what we need more than we ever needed it is skepticism, journalism, and vigorous investigation. You can’t take anything these companies are saying at face value.


"Spokesmodels"? Those were SpaceX employees. SpaceX engineers. And at least initially, I don't think they actually knew what happened. Maybe they did later, but not initially.


The SpaceX comparison is a bad example. Sure they could have said "this and that happened", but there's always the point on how much you can trust the data you get at that point. I fully understand them not talking about failures when cause and consequences aren't even remotely understood.

But I understand where you're coming from, and I agree: Tesla quickly released statements claiming the safety benefits of Autopilot, putting all the blame on the driver, and even providing misleading statements ('the driver was warned multiple times' but leaving out that that happened 15+ minutes prior). That was the exact opposite, when they didn't know the cause of the accident, but were quick to just deny any wrongdoing.


>This kind of stuff is why I’ve lost all faith in Tesla’s public statements.

I don't understand why they would issue any statement other than, 'condolences, we're committed to safety, we're working with the NTSB to understand what happened'


I think Musk strongly believes that flipping the switch to sacrifice 1 and save 5 in the trolley problem is clearly the correct choice. If the numbers show that fewer people die with self-driving cars than without them, then that is the right choice. I suspect he even believes that the people that die along the way in the transitional periods are worth it if that is what it takes to save hundreds of thousands of lives in the long run. Most people aren't OK with this kind of utilitarianism, but I suspect he is. The PR is just another tool for keeping people from banning self-driving cars before they get to the place where they are really ready.


Who is going to report these statistics? Would you trust such a statement coming out of Tesla right now?


Unfortunately the "balance" in the news means they will keep reporting Tesla's statements and will rarely point out how dishonest they have been in the past.


Except it's something more petty, like his ego is too invested in not using LIDAR.


It's hubris.


Money.


I'm not sure it's in Tesla's financial interest. Misleading statements give the plaintiffs more room to argue for punitive damages, and California juries have awarded tremendous punitive damages against auto companies.


Tesla is a symbol, Musk is a strong personality. To me it's the same as when Jobs said during the antenna fiasco that "you're holding it wrong".

Being a symbol, being an entity that creates objects of desire, is financially beneficial to the company. The question is, whose message will the potential customers hear: the NTSB's or Musk's?

Of course I'm not saying that this is a good strategy - Tesla may well fall from grace eventually - but the bully is ultimately powered by greed and hunger: for adoration, for money, for financial gains.


Because Musk views people as crustaceans rather than human beings.


> Clearly something went wrong here, but they lept to blaming everyone else instead of working to find the flaw.

Precisely. As I mentioned in another comment: Autopilot indisputably failed.

They are (understandably) shifting the discussion away from the technical issue (Autopilot failed) to the legal issue (who is to blame), because the latter is something that they can dispute.


Truly fixing the flaw would be to skip to Level 4 as Google decided to do, instead of shipping a system that many drivers get too comfortable with leading to accidents such as this one.

But skipping to Level 4 fairly clearly requires LIDAR at this point in history and Musk is on record saying he thinks it can be done with cameras and radar alone. So pride will prevent that.


It doesn't even have to be pride. If he now says they're switching to LIDAR, all the cars previously sold with "self-driving hardware" and "full SDC is only a software and computer upgrade away" promises are screwed, which may create a huge liability for Tesla and impact share prices significantly. As CEO, he still has to answer to shareholders and act with them in mind.


Also, "hands were not detected" does not mean that they really weren't on the wheel. Maybe someone who drives a Tesla can comment on how reliable the hand detection is.


This came up last month and a Tesla driver had this to say.

https://news.ycombinator.com/item?id=16822623


The driver clearly didn't take any action to intervene, so his hands probably genuinely weren't on the wheel.


Hand detection is pretty good. You only need to rest your hands on the wheel. Have a model s as my daily driver.


Lots of drivers have problems where they have to "shake" the wheel to make the car understand that they do indeed have their hands on the wheel. The system is pretty rubbish.


It seems to me that my 2015 S70D expects me to be holding the wheel firmly enough to actually take action.

Mostly I just keep one hand lightly on the wheel when driving down a motorway with autosteer on and just give it a gentle squeeze or wiggle once in a while. If I use it on less good roads I keep both hands on the wheel in the proper location for taking control.

It seems pretty good to me.


Then it must vary significantly per car. I have to squeeze the wheel pretty hard to dismiss the message on mine.


Or, perhaps more probably, by driver. I am far more inclined to think that a large variation in hands exists, rather than a large variation in mass-produced steering wheels.


No idea about Tesla, but how do you screw up hand detection?


It's based on measuring torque on the steering column, not some sort of (say) heat or pressure sensors on the wheel itself.

So, having your hands on the wheel and not moving them can be sensed by the Tesla as "not having your hands on the wheel".
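
A toy illustration of why a torque-based check misses resting hands (entirely my own sketch; the threshold and names are made up, not Tesla's):

    # Detection keys off steering-column torque, not touch or pressure.
    TORQUE_THRESHOLD_NM = 0.3  # invented value for illustration

    def hands_detected(measured_torque_nm):
        return abs(measured_torque_nm) >= TORQUE_THRESHOLD_NM

    print(hands_detected(0.0))  # False: hands resting lightly register as "hands off"
    print(hands_detected(0.5))  # True: a small wiggle or squeeze is enough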


In my car (a Honda) I can have my hands on the wheel, and if I'm not doing enough steering it will yell at me.

There are one or two spots around town that are straight enough that I don’t need to give any steering input and it will sometimes flag me there even though my hands were on the wheel the whole time.

On a freeway? I don't think I've ever had that happen, though.


Learned something new, thanks a lot.


The hands not applying any pressure and barely touching the wheel. It happens consistently on my Ford Fusion when I am driving in a straight line. The car says my hands are not on the wheel when they are touching it.


It's a feature, maybe? After all, after Insane and Ludicrous modes, Kamikaze kinda fits.

I don't understand why people had faith in AP 2.0 in the first place, especially since Tesla boasted that it took them only 6 months to develop it. And unlike AP 1.0, the regulated ADAS features were not enabled on it (officially, as in outside of AP mode) for the longest time; some still aren't.


Suicidal mode - we'll make it look like an accident.


https://www.tesla.com/autopilot

and best of all, guess what Tesla shows on that landing page: a driver with their hands in their lap.


They do expect you to change that posture if Autopilot does something you think is dangerous, though. Wandering out of lane uninitiated (lane change indication, for example) and accelerating towards a barrier would likely count as one of those situations.

I've yet to see an incident so far where a human driver, properly engaged with the driving experience and road conditions as they should be when in charge of a vehicle, couldn't have taken action and prevented the accident. There were 7 seconds between the beginning of the maneuver and the impact; human reaction time (including taking action) is typically 0.5 seconds. That leaves 6.5 seconds to correct steering or apply the brakes, bearing in mind that Autopilot will always relinquish control in either situation.

Not apologising for Tesla, they need to sort out this edge case, but that's exactly what it was and exactly why the driver is supposed to remain engaged.


> They do expect you to change that posture if Autopilot does something you think is dangerous, though.

Their lawyers and their marketing departments are contradicting each other.

Their legal department will always insist that your hands must remain on the steering wheel. If your hands are not on the wheel, and the car crashes, they will mention this clause and implicitly blame the driver.

Their marketing department (like this webpage) say "This car can drive itself!".

And the car itself? Well it has a sensor that detect hands off wheel situations (plus point from the lawyers' "cover our asses" side), but it allows a few (/lot of) seconds of that situation before it warns you (their lawyer must've once screamed "Why the fuck does the sensor tolerate this situation!").

For a stupid analogy, it's almost like a bar owner advertising his bar as a smoke joint, but saying "you're not allowed to smoke in here, and I can spot anyone smoking" but still tolerating a few minutes of lit cigarettes.


If I were rapidly approaching a barrier at full speed on a highway, I might put my hands up and guard my face before the crash.


But then you would probably also be applying the brakes...


Example: https://www.youtube.com/watch?v=nvckBJP8QPU

I mean, auto-pilot, as promised, would be a god-send for such a poor driver (or anyone with even mild motor impairments). That it performed soooooo poorly, and that Tesla has responded soooo shadily, should put anyone seeking auto-pilot (as a real need, not a gadget) on notice.


If you were aware of what was happening. Smartphones are distracting.


Is there any proof that the guy who died was using a smartphone at the time of the crash?


Oops, I didn't mean to imply that for the driver. I meant it as a meta-comment.

My point was that the driver probably didn't know the crash was about to happen. A broader investigation into "why the crash happened" is more fruitful than "what the driver could have done differently once the crash was imminent".


Musk has lost touch with reality if that's true..


> During the 18-minute 55-second segment, the vehicle provided two visual alerts and one auditory alert for the driver to place his hands on the steering wheel. These alerts were made more than 15 minutes prior to the crash.

Whoah. So there were NO alerts for 15 minutes prior to the crash. Compare this with Tesla's earlier statement:

> The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision.[1]

This gives a very different impression. They omitted the fact that there were no warnings for 15 minutes. Frankly that appears to be an intentionally misleading omission.

So basically the driver was distracted for 6 seconds while believing that the car was auto-following the car in front of it.

[1] https://www.tesla.com/blog/update-last-week’s-accident


It's a blatant lie because they knew exactly what they were implying and it wasn't true; i.e., it was a lie.


Even the 'the driver’s hands were not detected on the wheel for six seconds prior to the collision' is suspect because it is known to be a pretty unreliable detection mechanism.

Besides, it doesn't matter: if the driver's hands were on the steering wheel, they clearly weren't alert enough to hit the brakes or steer away from the impact, so the point is entirely moot: Autopilot was in full control of the car, and whether the driver had their hands on the wheel or not to satisfy an alarm doesn't matter.


It sounds like exactly the same thing to me.


About "hands not detected on the wheel part", it seems that the car only detects steering torsion, and it is possible that the driver actually had hands on the wheel, but just didn't provide any steering input.


Reading that initial report is terrifying. I am so glad the NTSB set the record straight that the driver had his hands on the wheel for the majority of the final minute of travel. It really makes me feel like Tesla was out to blame the driver from the get-go. To be clear, the driver is absolutely partially at fault, but my goodness, Autopilot sped up into the barrier in the final seconds — totally unexpected when the car has automatic emergency braking.

Emergency braking feels not ready for prime time. I hope there are improvements there. I don't want to see Autopilot disabled as a result of this; I would rather Tesla use this to double down and apply new learnings.

Just so sad to hear about this guys death on his way to work - not the way I want to go. :(


> Don’t want to see autopilot disabled as a result of this

I would. I think any self-driving system that expects the "driver" (passenger I guess) to take over in tough situations is bound to fail again and again. If people feel like they don't need to pay attention they won't no matter how much they know that they should.

I think that partial autonomy is a sort of uncanny valley, either side is much safer.


I'd prevent them from using the AutoPilot term, penalise them for marketing imagery that includes people driving with their hands off the wheel, and require hands to be on the wheel far more often.

Until it's significantly improved, it should be a backup safety system rather than a driving and safety system.


>I'd prevent them from using the AutoPilot term

I disagree. It's the same term as in aviation and does the same thing.

> penalise them for marketing imagery that includes people driving with their hands off the wheel

That's more reasonable.


Can this "Autopilot" control altitude of the vehicle? No. Therefor it's not the same thing, it is just something similar.

What should we call car autopilot? There is more than one opinion. Some people believe in importance of some weird technical characteristics. For most people defining property of autopilot is its ability to control vehicle while pilot left the cockpit for a few minutes. So to them car autopilot must safely drive some roads and require human to take full control on the others. Taking over in case of emergency doesn't exactly fit this idea.

What looks really weird in all this discussions is their fanatical uselessness. Who would be hurt if new law would prohibit to call autopilot anything less than L5? Well probably no one. Who would benefit? Most likely a lot of people. Why would anyone oppose regulation like this? Huge question.


> It's the same term as in aviation and does the same thing.

Except in aviation, trained pilots with at least a decade of experience flying planes get to use it, understanding well the limitations and purpose of the technology. A teenager can use Tesla's "Autopilot".


Incorrect. Autopilots are commonly used even in general aviation, and you can get your pilot’s license in just a month or two if you had a cooperative flight school. Heck, there are accelerated immersive programs that train you in just 14 days.

You only need ~40 hours (minimum) of flight time to get your private pilot’s license. This isn’t even much different than driving license requirements, except that planes are a lot more expensive to rent and operate.

(And for the record, teens can also use autopilots on airplanes since you can get a student pilot’s license allowing you to operate a plane by yourself at age 16. Roughly the same number of hours of supervised instructional operation as getting a driver’s license.)


I think you are nitpicking definitions here when it is not the issue.

The issue is that the general public think of "Autopilot" as the magical piece of technology that automatically pilots thing while you sip a pina colada and otherwise switch off mentally.

What the 0.5% of people who actually know what is required of the flesh-and-blood pilot when using an aviation autopilot think is not the issue. It is what the majority understand.

Changing what Tesla is allowed to call it is the path of least resistance, rather than trying to educate the general public on "what the term actually means in the industry that made it famous and therefore should be universally applied elsewhere".


> It is what the majority understand.

Yeah, you're going to have to provide some evidence of that, not just assertion (as common as that assertion may be on HN).

The word "autopilot" has a well-established definition. You might think "automobile" was a fully automatic vehicle.


As I understand it, aviation autopilot is generally used in open air, where stationary objects that the software is not designed to detect are rare. It is also usually sold in a different way to a different market.


> Don’t want to see autopilot disabled as a result of this, would rather Tesla use this to double down and apply new learnings.

I agree that Tesla obviously needs to be learning from this. But at this point in time, Autopilot needs to be turned off until Tesla can fix the fundamental flaws in the set of sensors on their cars. There is now strong evidence that Autopilot is dangerous, and there is still weak evidence that it is less dangerous than the alternative (i.e., a human driving).

I expect that the NTSB will eventually issue a safety recommendation that Autopilot not be used in its current form. I'm a bit surprised they didn't issue an urgent recommendation in the preliminary report, or even before.


I have a hard time seeing it this way. Turned off until they can fix the fundamental flaws? Which flaws exactly? It’s a prototype technology that no one knows how to do perfectly. Should Waymo also be taken off the road? None of these technologies are provably safe, so we always have to accept some risk. How much risk is appropriate is hard to determine. Perhaps LIDAR would reduce the risk, but Uber showed you can kill people even on a LIDAR based vehicle if your software fails to classify the risk properly. And humans drive with just two eyeballs so it’s not unreasonable to think camera based systems should be in principle possible to make safe.

So we could pull all the self driving cars off the road, but then we inhibit their ability to collect the wide range of real world data necessary to build the technology. Or we could pull the less successful tech off the road sooner, but that would cause those programs to wither and hand the market to one or two well funded players.

I just don’t see how pulling the tech off the road because of a flaw that Tesla has likely already fixed would accomplish much.

I do struggle to understand what we should do, but I’m not so quick to think the tech should be pulled.


> To be clear the driver is absolutely partially at fault

I'm... not so certain. Why? The autopilot had likely exhibited proper behavior every time that the vehicle had passed that particular section of road prior, and if the driver was paying full attention to the behavior of the vehicle, he would only notice the problem around the 5 second mark.

Five seconds, if you have no reason to be concerned about the vehicle's behavior, is not much time - especially if you consider that alert drivers are recommended to give themselves a minimum of 4 seconds of reaction time (i.e. follow a vehicle by at least 4 seconds).


My vehicle (a Honda Civic) exhibits this exact same functionality and behavior (lane keeping, ACC, emergency braking for cars). They make the limitations of these systems very clear. I'd say that in the 5 months I've owned it, it's had this exact behavior (veering off exit ramps) 10 times. A simple jerk of the wheel puts it back on track; it's such a natural motion if you're paying even the slightest bit of attention. That being said, Tesla fails to make their drivers aware of the limitations of Autopilot, so I agree that this may not be on the driver.


Tesla reminds you to pay attention and keep your hands on the wheel every time you engage Autopilot. It's one of very few legal disclaimers they show you all the time on the screen, and don't give you any way of turning it off.


And we all know how assiduously people pay attention to messages that flash up on screen.


How about audible nags? How about flashing white lights on the display? How about gradually slowing the car down until you give tactile feedback proving you are in control? Tesla does a lot of things to coerce drivers to pay attention. If you check out TMC, you'll see lots of people complaining about how paternalistic and "naggy" the system is, even for those who use it properly.

I am continually surprised by how little emphasis there is on personal responsibility when this community discusses an L2 system such as Autopilot. According to both the law and the operating manual, the driver is in control at all times. Tesla warns you of this every time you turn it on. Yes, there are enough bad drivers out there that Tesla is wise to implement habit-forming nags; but drivers also need to take responsibility for how they use (and abuse) these systems. Nobody would pass the buck to cruise control for a driver who set it to 65 and then plowed into something in a moment of distraction. All due respect to the victim here -- and I feel absolutely terrible for him and his family -- but if you are paying attention and looking at the road ahead of you, there is no situation where you accelerate for three full seconds into a concrete barrier at 70MPH -- not with Autopilot, and not without it.


If I recall correctly (and I might be wrong), didn't the driver in question actually report prior to the accident that autopilot had anomalous behavior on this section of road?


> I'm... not so certain. Why?

In this specific case, because if a driver is watching the road in front of them, four seconds is an awful lot of time to watch yourself head toward and then accelerate into a cement barrier, all without touching the brakes. It's fine to want Autopilot to be better, but as a matter of law, it is the driver's responsibility to slam on the brakes in that situation, and move into a legal lane.

More broadly, it's because the contract of an L2 system is that a human is in control at all times. L2 systems are assistive and not autonomous. They will never disobey a driver's presets, nor override a driver's real-time inputs, even if the car thinks it is safe(r) to do so. This is a major design principle behind every L2 system, and the reason why there are no scenarios where an L2 system is considered at fault by law.

Now obviously, if there were a bug that caused an L2 system to override a user's input -- say disregarding someone hitting the brakes, or overpowering the steering wheel, that'd be a systemic failure and grounds for a recall. But we haven't seen any cases of that, and there are hardware precautions (e.g. limited torque in the Autosteer servo) to minimize that possibility.


"The autopilot had likely exhibited proper behavior every time that the vehicle had passed that particular section of road prior"

The driver had actually reported to Tesla problems about that specific section of road.


> His hands were not detected on the steering wheel for the final six seconds prior to the crash.

> Tesla has said that Huang received warnings to put his hands on the wheel, but according to the NTSB, these warnings came more than 15 minutes before the crash.

> Tesla has emphasized that a damaged crash attenuator had contributed to the severity of the crash.

These may or may not have been factors contributing to the death of the driver, and ultimately may or may not absolve Tesla from a legal liability.

However, the key point here is that without question, the autopilot failed.

It is understandable why Tesla is focusing on the liability issue. This is something that they can dispute. The fact that the Autopilot failed is indisputable, and it is unsurprising that Tesla is trying to steer the conversation away from that.

The discussion shouldn't be either the driver is at fault or Tesla screwed up, but two separate discussions: whether the driver is at fault, and how Tesla screwed up.


The deficiency of Tesla AP has been abundantly clear to anyone with eyes, for a long time. Only fingers-in-ears Musk fans cannot see that.


Or perhaps us fingers-in-ears Musk fans don't expect the AP to be perfect, especially considering how new the field is.

Autopilot doesn't need to be perfect, just better than humans.


So because the field is new, it's okay that a human being is dead?


It's not just that the field is new, it's that 1) SO MANY people die constantly in cars, and this is the start of trying to change that permanently but it cannot be perfect out of the gate, 2) nor should we wait until it's perfect if waiting that long means more people die.

Ultimately, we cannot rely on mere anecdotes. We need statistics that show it is worse. If it's as bad as the media coverage claims it is, it should be easy to demonstrate such statistics.


If human beings dying weren't at least somewhat ok, we wouldn't drive cars.


Thomas Edison would agree with your stance. Alternating Current is not safe and therefore should never have been implemented. https://en.wikipedia.org/wiki/War_of_the_currents#Edison's_a...


The thing is that the problem isn't just Autopilot driving over the white lines. Even cheap VW Ups and other micro cars would have applied the brakes automatically when an object was in the way. It is a huge failure that a Tesla got both wrong. Two huge fails.


You're right, it was a failure, a terrible failure, but it will be improved.


I think the issue is, if you read https://news.ycombinator.com/item?id=17256900 is that Autopilot is not better than humans.


I did, unimpressive, he doesn't have any relevant data.

My reply to the post though:

Obviously someone shaping stats to their own biases. They want to compare Tesla and Mercedes Benz crash rates because they are in a similar price range?

Mind you he has no data on automated driving crashes.

Ridiculous and short-sighted; the goal (IMO) is for all cars to be automated. And he assumes that just because lidar is expensive now, it will be in the future, which is just naive.


Really, no comments from the down voters?


Only fingers-in-ears Musk fans cannot see that.

Speak of the devil...

It's amazing that, given the slander Tesla has been passing off as PR in this case, anyone could defend them in this, yet here we go.


It must also never be worse than humans.

In this case, it very clearly was.


So human drivers wouldn't crash here? Or a good, attentive, best-case human driver wouldn't crash here? Because those are not the same set, and it seems the former set has crashed into this particular barrier multiple times.

I don't think it's acceptable that a vehicle with adaptive cruise, lane keeping, and emergency braking would hit this barrier without any mitigating braking or action, but the actual state of drivers on the roads and human traffic fatalities is pretty grim as-is.


I think it's "rather unlikely" to the point of implausibility, that even the human drivers who have crashed into that barrier:

1) _aimed directly at it_ rather than glancing blows/swerving at the last moment, etc., and

2) _accelerated beyond the speed limit as they did_, rather than applying any form of braking, let alone AEB.


> I think it's "rather unlikely" to the point of implausibility, that even the human drivers who have crashed into that barrier.

So implausible that the barrier was destroyed just weeks earlier by a human driver?


I find it equally implausible that you willfully ignored the context of my quotes, to the extent that you literally changed the colon that led to the context of those quotes into a period, to make it sound like that was the extent of my statement.

Yes, human drivers crash into crash barriers, as noted.

Now, how many of those human drivers "actively aimed at the barrier", as the NTSB stated? And how many of those human drivers, having aimed at the barrier, "then accelerated in excess of the speed limit" as they did so?

And if you do reply, try not to take sentence fragments as an excuse to change quotes to misrepresent someone's statements.


I ignored it intentionally because that is a mischaracterization of what the NTSB report said. That is what clickbait headlines said (misleadingly). The autopilot was following the lines, which looked a lot like a lane. Human drivers make similar mistakes, and it is quite likely that’s what happened earlier and why the barrier was destroyed beforehand. The lines are about the same size as a lane and lead you straight into the barrier.


> So human drivers wouldn't crash here? Or a good, attentive, best-case human driver wouldn't crash here?

I'd argue that a Reasonable Person [1] wouldn't crash here.

[1] https://en.wikipedia.org/wiki/Reasonable_person


I think that's actually a really good method to apply in these cases, and I agree.

However I also think that the roads are currently occupied by many people who fall below that standard.

Should a car with assistive technologies only be a success if it is as good as a reasonable person, or if it is better than a typical person?

Obviously the former is preferable to the latter, but people seem much more content with the idea of driving themselves into walls (or another typical person on the road crashing into them and causing that outcome) than with the idea that a computer might do the same.

See also fear of flying vs driving even though flying is demonstrably safer, but feels much more out of the hands of the flyer.


Except a human driver had just recently crashed in the exact same spot - that's why the divider's crash bumper was damaged at the time of the Tesla accident.

I think there's no excuse for shipping an "autopilot" system that can't detect massive, stationary objects in broad daylight, but humans also do crash in those exact circumstances all the time.


It needs to be better than a sober, awake, human of the type who typically buys a Tesla. That would be the fair comparison, not just a random member of the public with a 20 year old car. Better than a drunk human driver, or a 90 year old is not a standard reasonable people care about when they say “better than a human.”


A human driver crashed in that same exact spot one week earlier, so you can't really say AP is necessarily worse because of this accident.


Teslas are a tiny minority of the cars driving on the road.

Even if a car crashed into that barrier once a week (it's way less than that), a single Tesla crash in all of Tesla's history so far is an order of magnitude worse.

https://news.ycombinator.com/item?id=16720848


They were also, as they have been on multiple occasions, more than happy to make sure the public narrative was "our system was working, warning him, not our fault!" when they absolutely, utterly, knew that those warnings were utterly irrelevant to the situation at hand.

Just as you or I as Tesla drivers need a subpoena to get access to the black box data for our own insurance or criminal claims against other drivers, but Tesla is more than happy to put out press releases on your driving telemetry if they feel it suits their narrative.


Tesla acknowledges that autopilot can fail.



No surprise there about it steering into the barrier or Tesla not-quite-lying about him getting warnings, but I'm surprised that he apparently wasn't dead on impact but survived all the way to the hospital?

So even a three-car pileup with the Tesla steering straight into a barrier at 71MPH & accelerating, with the car catching on fire & being reduced to a husk, still isn't enough to kill a driver immediately. In a way, you could interpret that as demonstrating how safe Teslas are (at least, without Autopilot).


A doctor has to declare a person dead, paramedics can't do it. In all cases it makes sense to transport the body to the hospital where the person is pronounced dead by the doctor. Otherwise the family may argue in court the paramedics did not try enough to save the person.


Usually in reports, if someone dies at the scene but is only officially declared dead at the hospital, they'll write a phrase like 'was declared dead at the hospital'. This report says instead that he was transported to the hospital 'where he died from his injuries'. Double-checking, media articles at the time uniformly describe him as dying at the hospital, and at least one says that he died in the afternoon ( http://sanfrancisco.cbslocal.com/2018/03/27/tesla-crash-inve... ) while the accident was at 9:27AM, so either traffic in the Bay Area is even worse than I remember, they like to spend hours doing CPR on a corpse, or he did in fact survive the crash and died at the hospital in a literal and not legalistic sense.


Not legally true, as a paramedic. EMTs and paramedics are able to declare people dead in certain, albeit limited, circumstances, sometimes involving telephone / radio consultation, sometimes not.


Modern cars are remarkably safe. I've been in a very high speed head-on collision in a Honda and not only survived, but was completely uninjured other than some whiplash.


Except in obvious cases like decapitation, paramedics cannot legally declare someone dead; it has to be done by an MD, and the paramedics will keep trying to revive you until you get to the hospital. So it's not necessarily true that he didn't die in the crash.


Not true at all. Paramedics can declare death (at least in most states, I cannot speak to all), and we _absolutely_ can stop resuscitation efforts using our clinical judgment, at times in concert with a physician.

I am not sure of the specifics of this case, but traumatic cardiac arrest is, as a generalization, "largely unsurvivable". An asystole EKG, in concert with the mechanism of injury, may be enough for us to do so, and we do not need to transport what is at that point a dead person (even if there is resuscitation being attempted, for a multitude of reasons - sometimes even "for the family's benefit"[1]) just to get them to a physician to declare death.

In many states, EMTs (with ~200hrs of medical training) are able to declare death legally, in the setting of evisceration of heart/brain, decapitation, "body position incompatible with life".

This is the situation to the best of my knowledge as a paramedic of nine years and a state certified EMS Instructor and Evaluator.

[1] it should be noted that this is done in a setting where its judged the "best" course of action for grief, establishing end of life directives, and the like, and _not_ as something that is intended to falsely convey hope.


Despite the autopilot failure, I find the battery failure quite remarkable too:

> The car was towed to an impound lot, but the vehicle's batteries weren't finished burning. A few hours after the crash, "the Tesla battery emanated smoke and audible venting." Five days later, the smoldering battery reignited, requiring another visit from the fire department.

Where is your LiPo god now? Batteries have more energy density than 20 years ago, ok. But they are also much more dangerous. Now imagine the same situation with Tesla's huge semi batteries. They'll have to bury them 6ft under, like Chernobyl's smoldering fuel rods. Minus the radiation.


So, basically they need to improve handling of damaged batteries. There are procedures in place for ICE cars too. No one is going to be storing a car with fuel and a damaged fuel tank anywhere. The main advantage is that it is usually obvious when a fuel tank is damaged and is leaking.

Some Tesla batteries have caught on fire after collisions. None caused injury to car occupants. In one of the first publicized cases, the car even told the driver to pull over safely, and the fire only started afterwards. There are vanes to direct flames.

I've seen my fair share of ICE fires. They are not pretty either. We have grown accustomed to them, firefighters know how to handle them, and the car industry has fixed most of the early issues that caused fires. It can still happen.

The same will be done for car batteries.

I agree with the energy density argument. My Leaf stores about 1 liter worth of gasoline as energy. When we reach energy densities comparable to current fuel tanks, we'd better be much more advanced in this aspect.


> Now imagine the same situation with Tesla's huge semi batteries.

I'm bearish on Tesla (not financially, but I'm mostly a pessimist in regards to their news).

But to be perfectly fair: I'm not sure if 400 gallons of diesel fuel (typical in a 18-wheeler) could be put out by a typical fire-department.

There was a tractor-truck spill due to an accident in my area once. The fire department closed down BOTH sides of a 2x2 lane highway (55 MPH zone, very few traffic lights, a grassy median). You don't mess with 400 gallons of flammable diesel.

The good thing about diesel is that you can just burn it out. Burn all of it. Close down the highway, of course, but once it stops burning, it's safe to cross. It may take 30 minutes to an hour (all the people stuck in traffic can get out of their cars, talk with each other, play on cell phones or something...), but a controlled burn is better than an explosion.

I'm not sure if Lithium Ion has a good procedure yet.


A decade ago a fuel-tanker truck crashed on a freeway overpass in Oakland, and the ensuing fire softened enough steel for the overpass to collapse:

https://www.nytimes.com/2007/04/30/us/30collapse.html

The overpass was reconstructed and I-580 highway reopened after 26 days:

https://www.sfgate.com/bayarea/article/A-MAZE-ING-His-reputa...


Battery failure?

You're expecting their engineers to design a battery that remains safe after a 70mph crash into a barrier?


ICE makers have to design gas tanks that are safe after a 70mph collision, so yeah, it's kind of expected that the battery is safe too.


Yes? Welcome to the automotive industry. Lives are at stake.


Is there an equivalent safety standard for ICE vehicles? I don't think a gas tank would do particularly well in the exact same circumstance either.


Yes, of course there is. That's why the gas tanks are in the back, and they most certainly do survive such crashes.


ICE vehicles don't typically spontaneously combust after a crash...


If the combustion occurs as the result of a crash, can that really be said to be “spontaneous?”


No, just during the crash, when you're unconscious, not five days later when you're safely miles away in your new car.


Right. They sometimes spontaneously combust even without crashes.


Five days after the crash doesn’t seem unreasonable. I wonder if some kind of inerting compound can be developed for lithium batteries for use by firefighters and the like.


To 'inert' it properly, you'd have to extract all the energy, which is going to look pretty much the same as burning it.
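
For a sense of how much energy that is, a rough back-of-the-envelope sketch (the 75 kWh pack size is my assumption for illustration, not a figure from the report; gasoline taken at ~34 MJ per liter):

    E = 75\ \text{kWh} \times 3.6\ \tfrac{\text{MJ}}{\text{kWh}} = 270\ \text{MJ}
      \approx \frac{270\ \text{MJ}}{34\ \text{MJ/L}} \approx 8\ \text{L of gasoline}

And that is only the stored electrical energy; a damaged pack can also burn its electrolyte and other materials, which is part of why "extracting the energy" and "burning it" end up looking so similar.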


Dear Elon, want to start a website that rates how fake-newsy government-produced accident reports are? /S

"FDA said my farm is producing salmonella-infected chicken. Downvote their report on this URL!"


Without commenting on the rest of the issues with Musk/Tesla, his tweet about a news verification agency was a joke. Verifiably. It was nicknamed Pravda, and a corporation related to his tweet was registered to Musk on Oct 17 2017, the anniversary of the October Revolution.


A joke? It's verifiable because he said he might name it Pravda, and the fact that he actually registered it is further evidence that it's a joke?

I'm not saying he's really going to do it, but I question your certainty here with a total lack of actual evidence.


The words of leaders should always be taken literally, because those words only become "jokes" when they are otherwise unexplainable in rational discussion.


Verifiably is a strong word on this site. Fair enough. I'm still right though.

Options:

* In 2018, a foreign-born capitalist billionaire with sometimes negative press coverage suggests creating a Ministry of Truth named after a Russian propaganda newspaper to call out journalists who make false claims in their reporting. He registers a corporation for that purpose on the 100th anniversary of a bloody revolution bringing about the Soviet Union.

* A man known for using puns and showmanship in his communications trolled a bunch of people who latch onto his every word, especially when they don't like the man.

And to ComradeTaco, this isn't the presidency, and he didn't spout hate speech. It was a joke.


Never been brigaded on HN before. What a feeling.


> a corporation related to his tweet was registered to Musk on Oct 17 2017, the anniversary of the October Revolution.

FYI, the October Revolution anniversary is Nov 7th (Oct 25th using the old calendar).


> was a joke. Verifiably.

Can you link to what verified that?


I am generally against what is often called "excessive regulation," but the regulator -- perhaps the FTC -- should aggressively prohibit the misleading marketing message here.

The entire problem stems from calling this lane-keeping mechanism "Autopilot." Tesla should be prohibited from using that language until they have achieved provably safer, Level 3+ self-driving.

The problem is exacerbated by Musk's aggressive, marketing-driven language. Saying things like "we're two years out from full self-driving" (first said in 2015) and "the driver was warned to put his hands on the steering wheel" (15 minutes prior to the crash) makes Musk look like he is plainly the bad guy and attempting to mislead.

"Provably safe" probably means some sort of acceptance testing -- a blend of NTSB-operated obstacle course (with regression tests and the like) and real world exposure.


Tesla Autopilot makes it to HN pretty much every week now, almost never in a good way.

Every time, we have a big discussion about autopilot safety, AI ethics, etc.

What about lack of focus?

Tesla has already reinvented the car in a big way--all-electric, long range, fast charge, with a huge network of "superchargers". It's taken EV from a niche environmentalist pursuit to something widely seen as the future of automotive.

Why are they trying to tackle self-driving cars at the same time?

This feels like a classic mistake and case of scope creep.

Becoming the Toyota of electric is a vast engineering challenge. Level 5 autonomous driving is an equally vast engineering challenge. Both represent once-in-a-generation technological leaps. Trying to tackle both at the same time feels like hubris.

If they just made great human-piloted electric cars and focused on cost, production efficiency, volume, and quality, I think they'd be in a better place as a business. Autopilot seems like an expensive distraction.


The interior design of the Model 3 is very simple - there are few physical controls - with the assumption that the car usually does not need to be driven by a human. As Elon presented it: https://youtu.be/GZm8ckvsu9I?t=2m2s. Pure speculation: perhaps the simplified interior is necessary to bring the costs down for mass-production, and here is where the "synergy" with the autopilot comes in.


Tesla has to realize these "shame the dead dude" posts are PR nightmares, right?

They are reason alone for me to never consider one: the thought that a private moment for my family might end up a pawn in some "convince the public we're safe using any weasel stretch of the facts we can" effort.

If this is disruption, I'll wait for the old guard to catch up, lest I be disrupted into a concrete barrier and my grieving widow fed misleading facts about how it happened.


>shame the deadman posts

If that were actually the case, then what are they supposed to say?

>lest I be disrupted into a barrier

This made me audibly chuckle.


After this incident and Tesla's response to it, I hope Tesla is sued and/or fined into bankruptcy. Tesla is normalizing the release of not-fully-tested software to do safety-critical things, and literally killing people as a result. A message needs to be sent that this is unacceptable. In addition, their first response was a PR-driven one that sought to blame the driver, and it violated NTSB procedures. Safety is probably the most important thing to get right with this type of software, and Tesla is nonchalantly sacrificing safety for marketing.


Yeah, I can accept that some startups live by "move fast and break things"; after all, if I think their approach is dangerous for me, I can choose not to be involved. I can't say the same about being run over by a car that has shitty hardware/software.

Autopilot software that is not safer than human drivers shouldn't be allowed to be sold to the general public until they fix their stuff. And even if it is safer most of the time, we are at least used to how human drivers react; most of them don't accelerate just before a crash.


And a helpful lesson for all the "excessive government regulation" people.


Whether regulation is insufficient or excessive should be determined on a case-by-case basis. Anybody making blanket statements about all regulation as a whole is an ideologue who probably should not be taken seriously.


Tesla Autopilot should be recalled via the next OTA update.

The “Autopilot” branding implies that users need not pay attention, when in reality, the system needs interventions at infrequent but hard-to-predict times. If an engineer at Apple can’t figure it out, then the average person has no chance. Their software sets users up to fail. (Where failure means permanent disability or death.)

Inevitably, Musk fans will claim that recalling Autopilot actually makes Tesla drivers less safe. But here's the problem with Musk’s framing of Autopilot.

Sure, maybe it fails less often than humans. (We don't know whether we can trust his numbers.) But we do know that when it fails, it fails in different ways — Autopilot crashes are noteworthy because they happen in situations where human drivers would have no problem. That’s what people can’t get over. And it is why Autopilot is such a dangerous feature.

An automaker with more humility would’ve disabled this feature years ago. (Even Uber suspended testing after the Arizona crash!) With Musk, my fear is that more people will have to die before there is enough pressure from regulators / the public to pull the plug.


> But we do know that when it fails, it fails in different

Oh, the hubris of man. I am not even a Tesla fan, but still. "I'd rather have 2 out of 100 people drive into a wall than 1 out of 100 automatic cars drive into a wall" -- that's what you are essentially saying, no?


So people are asking why the barrier wasn’t detected, and that’s fair.

Here’s another question: why wasn’t the ‘gore’ zone detected?

Why did the car think it was safe to drive over an area with striped white lines covering the pavement?

It saw the white line on the side of that area and decided it was a lane marker, but ignored the striped area you’re not supposed to drive on?

If you’re reading the lines on the pavement you have to try to look at all of them.

I don’t know if other cars, like those with Mobileye systems, do that, but given Tesla’s safety claims they’d better be trying.


This gore zone did not have a striped area, just solid lines on each side.

http://www.dailymail.co.uk/sciencetech/article-5582461/Tesla...

Edit: Google street view of the location: https://www.google.com/maps/@37.410912,-122.0757037,3a,75y,2...


Ah. I wonder if they do recognize the stripes then.

BTW does anyone know why it’s called a ‘gore’ zone? I can see it being a (brutal) nickname but I’m hoping there is some better reason.


"A gore (British English: nose),[1] refers to a triangular piece of land. Etymologically it is derived from gār, meaning spear."

https://en.wikipedia.org/wiki/Gore_(road)


Thanks.


It seems like it's a brutal nickname. The "gore" zone seems to refer to the area after the emergency cushion is damaged.

Normally, that area has a barrier designed to save lives, by slowing down cars and providing cushioning. However, the emergency-cushion was already damaged from a prior crash.

Without an emergency-cushion, running into a straight concrete barrier has known, lethal consequences.

EDIT: Just to be clear, I'm just guessing.


Here's the most interesting quote to me:

"The crash created a big battery fire that destroyed the front of Huang's vehicle. "The Mountain View Fire Department applied approximately 200 gallons of water and foam" over a 10-minute period to put out the fire, the NTSB reported.

"The car was towed to an impound lot, but the vehicle's batteries weren't finished burning. A few hours after the crash, "the Tesla battery emanated smoke and audible venting." Five days later, the smoldering battery reignited, requiring another visit from the fire department."

Shouldn't it be possible to make the battery safe?


It's a lithium-ion battery, so it needs to be fully discharged.


This just reconfirms my belief about Tesla's "autopilot" --- most of the time it behaves like an OK driver, but occasionally makes a fatal mistake if you don't pay attention and correct it. In other words, you have to be more attentive to drive safely with it than without, since a normal car (with suspension and tires in good condition, on a flat road surface) will not decide to change direction unless explicitly directed to --- it will continue in a straight line even if you take your hands off the wheel.

Given that, the value of autopilot seems dubious...


You can't make that kind of conclusion from a single, extremely highly publicized occurrence, even a fatal one. Given how often human drivers kill people, you need statistics to show that autopilot is worse.


I'm going to need statistics from an independent source, not from Tesla, that autopilot is safer.

I'm not inclined to trust Tesla's statistics given how cavalier they have been about putting their shoddy autopilot product on the road and marketing it as being better than it actually is.


Of course. How about the National Highway Traffic Safety Administration?

https://techcrunch.com/2017/01/19/nhtsas-full-final-investig...

A 40% reduction still means plenty of anecdotes for media fodder.

There may come a time when autonomous vehicles make driving so safe that the only data we can and should rely on is the rare anecdote like this and the subsequent NTSB report, like airlines. But while it’s always good to take seriously the lessons that NTSB’s targeted investigations provide, a lot more people will die tragic but less-publicized deaths if we stop any company from deploying an imperfect but still overall beneficial technology.


This guy tested it at the EXACT same location with Tesla Autopilot. The Tesla starts steering directly into the barrier before he corrects it.

https://www.youtube.com/watch?v=VVJSjeHDvfY


Hmm, totally off-topic but that line needs to be re-painted. The actual lane marker is almost entirely faded away, while the line that follows the exit lane is bright white. If the glare is bad enough it's possible for a human to end up thinking the more solid line is the lane marker and follow it left.

(Not to excuse tesla here though, if that wasn't clear.)


Disclaimer: Taboo comment ahead.

Subtle bugs in self-driving cars would be a simple way to assassinate people with low overhead. One OTA update to a target and you could probably even get video footage of the job being completed, sent to the client all in one API call.

Surely by now someone must have completed a cost analysis of traditional contractors vs. having a plant at a car manufacturer.

Am I the only one thinking about this?


Probably doesn't need self-driving cars: many higher-end cars nowadays have some kind of wireless interface and software control over speed and steering.


Good point; I should clarify and include those vehicles too, such as the infamous Jeep Grand Cherokee that was hacked on a live highway.


Nope (you're not the only one) - there's a pretty popular conspiracy theory that Michael Hastings (a journalist) was killed by a car hack:

https://en.wikipedia.org/wiki/Michael_Hastings_(journalist)#...


Popular for good reason. Definitely read about it outside of Wikipedia.


Self-driving systems can't reason well about untrained scenarios or the intent of other humans on the road. I think people have grossly underestimated how driving in an uncontrolled environment is really a general AI problem, which we're not even close to solving.


Disagree. I think that the likes of Waymo and Cruise understand exactly how hard of a problem this is. It's only companies like Uber and Tesla that are underestimating it, and doing stupid things like disabling automatic braking (Uber) or not using LIDAR (Tesla).


The best software engineering company in the world has spent over a decade and hundreds of millions of dollars on this problem, and they are still stuck in a private beta in the easiest road environment they could find. What does that tell you about the feasibility of the solution?

There's no evidence that Cruise is in the same class as Waymo.


Waymo is about to start ferrying around paying customers later this year. Nobody said the problem isn't hard, but it definitely does not seem infeasible.


This wasn't a hard case that needed advanced AI. This was a clearly visible stationary obstacle on a good freeway on a clear California day.


Deciding how/when to make panic stops or evasive maneuvers without routinely becoming a hazard requires advanced AI.

Determining the path of travel through lane markings that suddenly fade out while moving at 70 mph on a busy freeway requires advanced AI.

It's pretty clear the self-driving gods are not giving these cars a ground-up understanding of physics and self-preservation. The best they've come up with is a car programmed to drive X mph and to respond probabilistically to a large but finite set of training input scenarios.


Or they require Lidar. You know, a sensor that actually detects stationary objects at a reliable rate.

Tesla is relying on advanced AI to do things that its sensors can't do reliably. It's only Tesla that is trying to build self-driving cars off cameras and inaccurate radar alone.

Maybe Lidar really is too expensive for mass-market adoption. But it's the most obvious solution to the problem right now, instead of trying to solve harder problems (i.e., camera-based recognition of 3D objects via trained neural nets).


> driving in an uncontrolled environment is really a general AI problem

Is it? Unless you are talking about NASCAR, an adversarial game where rational agents compete for road space, driving is essentially a physics problem: maintain a speed and direction that do not exceed the vehicle's ability to correct them in time to avoid any stationary or moving obstacle.

In this case, the vehicle accelerated to a speed (72 mph) that far exceeded the safe speed for approaching a stationary barrier it could not detect (effectively 0 mph).

I concede that discerning what is an obstacle and what is drivable road surface is a hard computer vision problem, hence the tendency to simplify it using 3D data, radar/lidar etc. But still, not quite general AI.
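
To put rough numbers on that framing, a back-of-the-envelope sketch (assuming dry pavement with a friction coefficient of about 0.7 and ignoring reaction time; illustrative values, not figures from the NTSB report):

    d_{\text{stop}} = \frac{v^2}{2\mu g}
                   \approx \frac{(32\ \text{m/s})^2}{2 \times 0.7 \times 9.8\ \text{m/s}^2}
                   \approx 75\ \text{m}

At 72 mph (about 32 m/s), any undetected stationary obstacle closer than roughly 75 m is already unavoidable by braking alone, which is exactly why "never exceed your ability to correct" is the binding constraint.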


You can break it down into various expert-system subproblems:

- perfect CV image classification (+ low visibility)

- anticipating the behavior of bikers, children, the elderly, and animals near roads

- responding to human non-verbal communication

- responding appropriately to never before seen obstacles/scenarios without being a nuisance on the road or endangering other drivers

- vehicle dynamics (+ in rain, ice, snow, emergency etc)

...

I would guess that doing these tasks with accuracy similar to humans approaches a problem space that's as difficult as general AI.

Some of these seemingly require the machine to have theory of mind, which is general AI.


Yep. Broad world model, physics, common sense, human communication, prediction, reasoning. Waymo’s low-level stack on hi-res map rails can’t replace human drivers.


You don't need reasoning, you need fast reaction times to slow the car and avoid obstacles.

The same happens with humans. Reasoning is extremely slow. Pilots are trained to act fast by training the subconscious, not the logical mind.


Normal driving requires anticipating the behavior of other humans who may or may not enter your path of travel, and anticipating unseen obstacles. It has less to do with "reaction times" than one would think. Humans routinely over-drive their vehicle's capability to safely stop for an obstacle in front of or adjacent to their path of travel.


> Humans routinely over-drive their vehicle's capability to safely stop for an obstacle in front of or adjacent to their path of travel.

Hence, the opportunity for an automated system that does not do that to be much safer, by relying on reaction times rather than strong AI.

Where a human driver would use subtle cues to anticipate a slowdown of the preceding vehicle before a turn, an automated system can get by simply by slamming the brakes less than a millisecond after it detects braking from the other vehicle.
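
For a rough sense of scale, a sketch assuming a ~1.5 s human reaction time versus ~50 ms for an automated system, both at 65 mph (about 29 m/s); these are illustrative numbers, not measurements:

    d_{\text{human}}   \approx 29\ \text{m/s} \times 1.5\ \text{s}  \approx 44\ \text{m}

    d_{\text{machine}} \approx 29\ \text{m/s} \times 0.05\ \text{s} \approx 1.5\ \text{m}

That ~40 m covered before the brakes even begin to bite is where a reaction-time-based system could, in principle, earn its safety margin.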


... and that self-driving car will make its passengers carsick and get rear-ended by the human driver behind it.

Those types of twitchy driving mechanics aren't normal and don't share the road well with normal humans. The physics of cars and reaction times would dictate that we program a self-driving car to drive like Grandma: always maintain safe low speeds and very long following distances, and brake hard for sketchy actions by other cars or random things near the road. But we know that actually makes the road more dangerous, as human drivers will aggressively cut off and rear-end this self-driving grandma. Secondly, that type of driving creates a bad public impression of self-driving cars, hurting their chances of adoption. If you read some of the earlier impressions of Google/Waymo cars, it's clear they went down that path initially and had to change their approach.


Involuntary manslaughter usually refers to an unintentional killing that results from recklessness or criminal negligence, or from an unlawful act that is a misdemeanor or low-level felony (such as a DUI). (Wikipedia)

It's rather uncontroversial that this kind of accident falls under civil law, because there is some degree of liability in marketing a product as safer than a human driver when it then fails in an instance where a human driver flat out would not fail, apples to apples. A human driver who is paying attention (which the autonomous system, by definition, always is) would never make this mistake. It could only be intentional.

But more controversial, and therefore more interesting to me, is to what degree the system is acting criminally, even if the killing is unintended, let alone if it is intended. Now imagine the insurance implications of such a finding of unintended killing. And even worse, imagine the total absence of anyone even trying to make this argument.

I think a prosecutor must criminally prosecute Tesla, if not over this incident then in the near future. It's an area of law that needs to be aggressively pursued, and voters need to be extremely wary of treating AI of any kind with kid gloves compared to how we've treated humans in the same circumstances.


Wow. I will say that, when you look straight-on in Street View, it does look disturbingly like a valid lane to drive in -- same width, same markings at one point [1]:

https://www.google.com/maps/@37.4106804,-122.075111,3a,75y,1...

If it were night, with a car in front blocking the view of the concrete lane divider, it doesn't seem too difficult for a human to change lanes at the last second and collide as well. (And indeed, there was a collision the previous week.)

There's no excuse for not having an emergency collision detection system... but it also reminds me how dangerous driving can be period, and how we need to hold autonomous cars to a higher standard.

[1] Thanks to comments by Animats and raldi for the location from other angles


Anyone here actually think Elon uses autopilot?


Not sure about Elon, but I suspect (hope?) most of us Tesla drivers have familiarized ourselves with the limitations of the system.

It's got a couple of scenarios in my experience where it shines - chiefly long drives on clean, clear interstates and stop-and-go traffic - but it's got suicidal tendencies in others.

It's not unlike being driven by a tipsy friend or a teenager - there are enough indications that it can't be trusted.

Unlike those situations, it's easy enough to take control back once you realize that.

Marketing it above its capabilities surely isn't helping, though.


To be blunt, I'm kind of stunned that you are defending a system that you yourself describe as "like being driven by a tipsy friend". What?! Sure, I could sit in the front passenger seat while being driven by my tipsy friend and grab the wheel from him if he was about to steer into a wall, but that is clearly much less safe than just driving myself. And I definitely wouldn't pay for the privilege nor endorse it as a good way to travel!

This is what Elon's personality cult, and the psychological need to defend a large purchase, does to people. You are making excuses for a clearly defective product that wouldn't be acceptable if this was anyone other than Tesla.


It's curious to see you appeal to ad hominems, outrage, and assumptions of bad intent, but I'll give it a go anyhow.

Perhaps I wasn't clear enough about the two use cases I cited, but the car performs excellently (and entirely soberly) in those situations.

It's a tool that has significant utility and significant footguns. I'm not advocating whether you should use it or not - I'm sharing my experience with where the system delivers and hoping users are looking past the hyperbolic marketing.

It's abundantly clear (even to us in the Elon personality cult?) that Tesla and Elon have done themselves a disservice with how they handled this tragedy.


Fellow Tesla owner here. Agree with everything you wrote.

Autopilot is a wonderful system when used properly. It's lousy when it's abused. Applied responsibly, it is very obviously a safety improvement to anybody who uses it. Used as a substitute for human attention, it's very easy to see how it can turn tragic (as we are discussing/witnessing).

The way Elon talks about these situations frustrates me to no end. The data are on his side, yet he continually resorts to half-truths because they are simple to present. Pretty obnoxious for a dude who is on a tear about media dishonesty. But: that is a knock against Elon, not against Autopilot. At the end of the day, I believe Autopilot does much more good than bad, and results in more people staying alive than would without it. I'm glad we have NTSB and NHTSA to keep Tesla honest, and I'm also glad that they are more thorough and tempered than a lot of folks here on HN would like them to be.


> This is what Elon's personality cult, and the psychological need to defend a large purchase, does to people.

Parent clearly outlines the limitations, as well as where AP works well, and you tear into them with "fanboi" trolling? Given that you know nothing more of the parent's psychological state than a name on a screen, you might try dialing it back a bit.


>most of us Tesla drivers have familiarized themselves with the limitations of the system

You're giving too much credit to the average driver. Just because they're richer than the average driver doesn't mean they're going to be more discerning about the car's advanced features. Additionally, Tesla still advertising Autopilot as "Full Self-Driving Hardware" is borderline negligence.


> Tesla drivers have familiarized themselves with the limitations of the system

and then one day you get an update that breaks the mental model you have familiarized yourself with, your Tesla crashes, and you die.


Don't forget rule number 1: never get high on your own supply


I bet he does, but then again a lot of cigarette company executives smoke so it doesn't say much one way or another.


I wonder if they'll release the logs from his crash.


I think he's a psycho (could be a good thing, wait) who cares about two things:

a) Tesla making enough money to make SpaceX work

b) Being able to establish a colony on Mars, using SpaceX

I think he's super honest about the Mars stuff. I also think he's being super deceptive about the Tesla stuff. Is that ethical or not? I don't know. From his larger point of view (risk of Earth annihilation vs. a bunch of rich Americans dying in Tesla failures)... who knows.


>I think he's super honest about the Mars stuff. I also think he's being super deceptive about the Tesla stuff.

If he is super deceptive about X, then there is really no reason to believe -- and it is incredibly naive to believe -- that he is super honest about Y, for all Y.


    The NTSB report confirms that. The crash attenuator—an accordion-like barrier that's supposed to cushion a vehicle when it crashes 
    into the lane separator—had been damaged the previous week when a Toyota Prius crashed at the same location. 
    The resulting damage made the attenuator ineffective and likely contributed to Huang's death.

Kinda sounds like maybe that part of the road isn't well designed or marked either.


Any kind of self-driving system that is going to be usable and safe in the real world needs to be able to deal with the kind of roads we have in the real world.

Maybe one day, when the majority of cars are self-driving, road design will change. To get there, self-driving cars will need to prove themselves on today's roads so people will buy them.


Can all driving situations on all roadways today be computed in polynomial time? Or are some NP-hard? Some intersections are known to be complex for humans to navigate.


It isn't:

http://www.dailymail.co.uk/sciencetech/article-5582461/Tesla...

I do think that an "Autopilot" / "ProPilot" / "SuperCruise" type system that does adaptive cruise + lane following + emergency braking should do a better job at detecting and reacting to that kind of barrier, but there is also evidence that this is a problematic location that has seen crashes by plenty of non-computer-assisted vehicles.

Edit: Google street view of the location: https://www.google.com/maps/@37.410912,-122.0757037,3a,75y,2...


A crash attenuator is a “nice to have” kind of feature. The minimum drivable roadway conditions can be, and often are, worse. We have higher standards for human drivers and it wouldn’t be news except for the autopilot.

Unless you want to ramp up Caltrans to the point where it can fix non-essential parts of any and every roadway within hours of degradation, it's entirely impractical to expect them to be in perfect shape.


The simplest form of attenuator is just a group of plastic barrels filled with sand or water, which ought to be pretty quick and easy to replace. If California uses something fancier, then sand or water barrels would be an easy stopgap measure.


It should be fairly easy to detect a chevron sign with yellow and black stripes [0] and take evasive measures: slow down, brake heavily, or change lanes. From my understanding of CNNs, it's a fairly simple classification problem (one that A. Karpathy researched and taught at Stanford; he's now leading the AI/vision team at Tesla).

Looking at the video of the guy who tried that same route again, it looks like the sun was straight ahead (see starting at the 37th second of the video [1]), and I am wondering if the camera had an issue identifying the sign due to glare from the sun.

Do we know if this problem has been fixed now? Can the chevron be identified at the same location by other Tesla drivers today, months after the guy that tested it and posted the video of his car swerving at the same place [1]?

Following lane markings is never going to be a 100% reliable method for navigating roads. Even humans miss merging lanes, or have issues at complex surface-street intersections where there are multiple overlapping lanes. But humans take evasive action: they slow down, reassess, look at signs, and probably get honked at.

Following other cars is never going to be a reliable method for navigating roads. I don't know what the guy in front of me is going to do! I keep a safe distance and take evasive action if the guy in front of me does something stupid.

I feel signs are the most reliable in that sense. If a sign is missing, a human is more likely to make a mistake as well. Maybe I am biased because, as someone who sails, signs (buoys, lights, colors, etc.) are what guide my navigation, since there are no lane markings on the water (not a perfect analogy, because the speeds/traffic patterns are very different, but the point I am trying to make is about humans identifying signs almost by muscle memory). A vision-based neural network should be able to do the same.

[0] https://english.stackexchange.com/questions/249780/what-do-y... [1] https://youtu.be/VVJSjeHDvfY?t=37
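
For what it's worth, the classification step really is simple to sketch. Here is a minimal, illustrative CNN classifier in PyTorch; the 64x64 crop size, the two-class setup ("chevron barrier sign" vs. "other"), and all names are assumptions for illustration, not anything Tesla or Mobileye actually runs:

    # Minimal sketch of a chevron-sign classifier (PyTorch). Illustrative only:
    # crop size, class count, and hyperparameters are assumptions.
    import torch
    import torch.nn as nn

    class SignClassifier(nn.Module):
        def __init__(self, num_classes=2):  # e.g. "chevron barrier sign" vs "other"
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(64 * 8 * 8, num_classes)  # assumes 64x64 inputs

        def forward(self, x):
            x = self.features(x)              # (N, 64, 8, 8) for 64x64 RGB crops
            return self.classifier(x.flatten(1))

    model = SignClassifier()
    crops = torch.randn(8, 3, 64, 64)         # stand-in for camera crops
    logits = model(crops)                      # (8, 2) class scores

Of course, classifying a cropped sign is the easy part; finding it in a full frame against glare, and deciding what to do about it at 70 mph, is where the hard problems live.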


The previous crash there involved a drunk driver, at night. Tesla has hit a milestone: they are almost as good as impaired drivers!


> During the 18-minute 55-second segment, the vehicle provided two visual alerts and one auditory alert for the driver to place his hands on the steering wheel. These alerts were made more than 15 minutes prior to the crash.

If your hands are always supposed to be on the wheel, why does the car not constantly alert you when it detects that your hands are off (similar to how cars beep at you if your seatbelt is unbuckled while driving)?
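
A seatbelt-style escalation isn't hard to sketch. Something like the following (purely hypothetical thresholds and function names, not Tesla's actual logic):

    # Hypothetical hands-off alert escalation, in the spirit of a seatbelt chime.
    # Thresholds, sensor hooks, and actions are made-up placeholders.
    import time

    def monitor_hands(hands_on_wheel, warn_visual, warn_audible, safe_disengage):
        hands_off_since = None
        while True:
            if hands_on_wheel():
                hands_off_since = None           # reset as soon as hands return
            else:
                now = time.monotonic()
                if hands_off_since is None:
                    hands_off_since = now
                elapsed = now - hands_off_since
                if elapsed > 15:
                    safe_disengage()             # escalate: slow down / hand back control
                elif elapsed > 5:
                    warn_audible()               # persistent chime, like a seatbelt alarm
                elif elapsed > 2:
                    warn_visual()                # dashboard nag
            time.sleep(0.1)

Instead, the current behavior apparently lets a quarter of an hour go by in silence.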


I think one of my main concerns with "autopilot" is that, for a lot of drivers, it will absolutely make the roads safer for them and those who use the roads around them. Conversely, for some safer and more alert drivers, it has the potential to make driving less safe.


Here's a relevant video that shows Autopilot directing a Tesla into a lane split.

https://www.youtube.com/watch?v=6QCF8tVqM3I


I wonder what percentage of owners actually have the courage to turn on autopilot? How many people here would/do?


If I were building this, I would upload millions of hours of data from actual Tesla drivers, and I would have autopilot releases step through that data and flag variances from the behavior of the actual drivers. I'd run this in a massively parallel fashion.

For every release, I'd expect the score to improve. With a system like this, I would think you'd detect the "drive towards traffic barrier" behavior.
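
A minimal sketch of that replay-and-diff idea (the log schema, the model interface, and the tolerance are all made up for illustration; nothing here reflects Tesla's actual tooling):

    # Hypothetical regression harness: replay logged human driving and flag frames
    # where a candidate autopilot build diverges sharply from what the human did.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        camera: object          # sensor snapshot at this timestep
        human_steering: float   # steering angle the human actually used (degrees)
        human_speed: float      # m/s

    def flag_divergences(frames, candidate_model, steer_tolerance_deg=5.0):
        """Indices of frames where the candidate's steering disagrees with the human."""
        flagged = []
        for i, frame in enumerate(frames):
            predicted = candidate_model(frame.camera, frame.human_speed)
            if abs(predicted - frame.human_steering) > steer_tolerance_deg:
                flagged.append(i)
        return flagged

    def agreement_score(frames, candidate_model):
        """Fraction of frames where the candidate agrees with real drivers."""
        disagreements = len(flag_divergences(frames, candidate_model))
        return 1.0 - disagreements / max(len(frames), 1)

Each shard of logs can be scored independently, which is what makes the massively parallel part easy; a release that suddenly "disagrees" with thousands of humans at the same gore point is exactly the kind of regression you'd want surfaced.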


Job 1: Don't run into things.


Tesla Autocrash, how much does this option cost again?


[flagged]


Why would the government ever do that? /s

Did you hear what happened to the whistleblower who leaked Vault 7? Apparently he had a bunch of child porn on his computer [0]. Crazy, huh?

[0]:https://arstechnica.com/tech-policy/2018/05/ex-cia-employee-...


There was probably an extra thumb drive of child porn for Snowden but he escaped capture.


....what thumb drive?


The other poster was speaking of entrapment.


Yeah but where did a thumbdrive come into play?


That would be a fictitious example of entrapment, to go along with the poster's idea that the person who released the NSA tools archive might have been entrapped.


T-H-U-M-B-D-R-I-V-E.... What are you talking about.



What does a thumb drive have to do with what I posted? It's not mentioned in the story or in what I said.


Supposedly gscott suggested incriminating evidence - a USB drive of child porn in this instance - would be regularly planted on political targets such as Snowden.


Yeah but who mentioned the usb drive?

illuminati in play


I was listening to a Software Engineering Daily podcast with Lex Fridman about deep learning for self-driving. Very interesting discussion of the ethics of self-driving cars. What he was saying is that we need to accept the fact that people are going to die in incidents involving autonomous vehicles. In order for these systems to learn how to drive, people will have to die. It's more of a societal change that is needed. 30,000 people die on the roads in the US every year; in order to decrease that number, we need self-driving cars, even at a price that society, as of now, can't accept.


> we need to accept the fact that people are going to die following incidents with autonomous vehicles involved.

Car accidents will kill people at least as long as there are any human-piloted vehicles on the road and probably long after (pedestrians, etc.)

What doesn't need to happen is companies in the self-driving space recklessly exaggerating their cars' ability to self-drive.


It's very possible to get self-driving systems to learn to drive without killing people.

Waymo and Cruise have done a good job at that so far, by taking the conservative route of testing their systems in controlled environments before unleashing them on an involuntary public.


Short version: due to poor lane markings, Autopilot made the same mistake as many humans in the same situation and collided with the divider. Due to the frequency of this kind of accident, the crash attenuator had been collapsed and not reset, meaning the Tesla hit the concrete divider at full speed, as has happened in the past with humans in control.

But please continue to blame Autopilot for not being smarter than the human operating the vehicle.



