Tesla worker killed in fiery crash may be first 'Full Self-Driving' fatality (washingtonpost.com)
138 points by mikequinlan 8 months ago | 306 comments



As I said in another comment, these two were idiots driving drunk. Let's get that out of the way.

The bigger issue here is how this deniability is all too convenient for Tesla. The process goes like this:

1. Ship a clearly half baked system called "Full Self Driving".

2. Require driver's hands on the wheel at all times.

3. Be extremely non-transparent about your system's safety. Tesla's crash reports in the public NHTSA database contain absolutely no details. Everything is redacted; we can't even tell whether a crash happened with FSD or with plain old Autopilot [1]. This is in stark contrast to the reports filed by driverless companies like Waymo and Cruise with the CA DMV [2], which Tesla refuses to do.

Also publish a "safety report" that's entirely marketing BS, which doesn't control for many factors (highway vs city streets, geography, time of day, age of cars, safety features, demographics) and hence is not apples-to-apples. Claim it's "safer than humans".

[1] https://www.nhtsa.gov/laws-regulations/standing-general-orde...

[2] https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...

4. When an accident happens, just say it's the driver's fault and that they should've known it's just an L2 system.

5. Tweet about FSD vN+1 that's going to totally bring Full Self Driving by end of the year.


It should be illegal for them to call it full self-driving. It's 100% false advertising, not to mention the danger that people will assume it is what it says it is.


It should be illegal for them to call it even 1% self driving. Because it is no more than 0% self driving. There is no mode of operation in any Tesla where the human is ever, at any moment, not responsible for safely navigating the vehicle.


Like "full self-driving" but where you're fully driving yourself? I had a '53 Studebaker with that feature.

(this is a lie; never share car ownership details online, banks still don't know that we drive them in public)


It is in California, but it hasn't been enforced yet: https://www.autobodynews.com/news/tesla-not-allowed-to-call-...


It probably is illegal...


It might be legal, but in court what you advertise the car can do generally carries more weight than what you put in the warning documentation. Thus Tesla is foolish for making any such claims: they have effectively admitted that they think their car is fully self-driving, which means Tesla should be liable. Even if Tesla can prove the car was under manual control, courts would still be justified in finding Tesla liable for anything that happens, since they have clearly claimed the car is self-driving.


What absurd logical leaps. If this is so clearly the case, then I don't know why every single law firm in the country isn't scrambling to sue Tesla via a class-action right this minute.


Can you bring this up as a class action? I'd expect this to be more relevant when the family of someone killed wants to sue. Or more likely the insurance company that just had to pay out some $$$ they want to recover.


Aren't the arbitration agreements signed by most buyers the reason we'll never see any class actions here?


Most people have not bought a Tesla, so arbitration will not apply. Any bystander in such a situation has standing to sue Tesla.

Though that doesn't mean they will. Just because you can sue doesn't mean it is worth it.


I agree. I just can't understand why the FTC hasn't done anything.


Because it's not profitable, and the FTC actually doesn't care when mega-billionaires break the law for profit.


... and the fines have been factored into the bottom line.


Full disclosure: participated in Tesla's IPO, own a Tesla.

Tesla should be doing two things that would greatly reduce their liability while also keeping customers happy. Overall I think it wouldn't change the number of people buying into FSD anyway so there wouldn't be much downside for them.

1. Offer refunds to those who paid for FSD. Full refund until FSD has been enabled for a year and used more than N times, prorated after that based on usage. That gives existing customers a window to try it out and get their money back if they don't like it, with a full refund if they only tried it out or a partial refund even if they used the feature a lot.

2. Be clearer in their marketing and on the purchase page about the feature. Sell your vision of it but also be honest about the current state of it. IMHO it isn't that hard of a sell to most customers but you'd be setting accurate expectations. Tech-aware people or those of us into EV tech are aware of FSD's limits but you are only going to disappoint people if they think FSD really is capable of full unattended self driving right now. That only hurts your brand and erodes trust. You can't hide the details in 6-point lawyerese and expect customers to be happy with you.

I was aware of the limits of FSD when we bought the car and considered it an investment in the future... but I don't use it often and told my spouse not to either. It is a beta feature and must be used with care.


> Be clearer in their marketing and on the purchase page about the feature.

Do you have any examples in mind of where this feature was billed confusingly?

The product page and Model S purchase page both include clear disclaimers, either in the first sentence or directly below the bulleted feature list.

> Full Self Driving

> Your vehicle will be able to drive itself almost anywhere with minimal driver intervention and will continuously improve.

https://www.tesla.com/support/autopilot

https://www.tesla.com/models/design#overview


The headline in large type says "Full Self-Driving Capability"

The normal-sized text says "Your car will be able to drive itself almost anywhere with minimal driver intervention and will continuously improve"

Only once you get to the small font do you get "The currently enabled features require active driver supervision and do not make the vehicle autonomous. The activation and use of these features are dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving features evolve, your car will be continuously upgraded through over-the-air software updates."

This is not a truly honest presentation of the feature. It starts by selling the end goal that may or may not be achieved at some undefined point in the future. Then in small print it says the current features don't actually make the car autonomous.

Honest presentation might be something with a headline like "Self-Driving Beta" and in normal text say "Access to current and all future updates as we work toward full self driving. Currently self-driving can navigate most conditions under human supervision. By purchasing self-driving now you automatically get all future improvements as we continuously work toward fully autonomous driving".

If it were up to me I'd also sell this as a perk for "investing" in the company: if you buy self-driving now, you can transfer it to your next Tesla vehicle for free. That makes it a reward for paying $12k for a beta feature and, more importantly, eliminates any worries that actual full self-driving won't arrive before you upgrade vehicles. It also encourages people to be repeat customers.

edit: It appears I was out of date. Tesla now allows you to transfer lifetime supercharging and/or FSD to a new Tesla for free so that's a nice perk and one they should lead with and commit to long-term.


> Do you have any examples in mind of where this feature was billed confusingly?

Yes, when they started calling it full self driving, which by any reasonable person’s definition it isn’t. That deception is not entirely ameliorated by using more correct language in other places.


Just the 140 characters you quoted contains a confusing billing, IMO.

Is it "Full Self Driving" xor is "Minimal driver intervention" required?


Maybe 2 lines down will clear things up?

> The currently enabled Autopilot, Enhanced Autopilot and Full Self-Driving features require active driver supervision and do not make the vehicle autonomous.


If they also changed "Full" to "Limited" (or "Not Full"), it would be cleared up.

"Full Self Driving" and "vehicle is not autonomous" is still confusing to English speakers, IMO.


Sure, which is why they have dedicated entire sections to outlining its exact capabilities, as well as software alerts and warnings you must accept before enabling the feature.

> Before enabling Autopilot, the driver first needs to agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your vehicle.” Subsequently, every time the driver engages Autopilot, they are shown a visual reminder to “keep your hands on the wheel."

If you bought this car without realizing how it worked... I sympathize, but the info was provided.

If you enabled this feature without understanding how it worked... that's now on you.


I'd say "only in this industry," but it's not true; it just means technical people are human.

How does that saying go? You'll never convince someone of a fact that their income depends on not knowing?

When someone writes code that is very difficult for anyone else to grok, it's the code writer's fault; but when someone has a misapprehension about a feature of a car, because it's on the tin SOMEWHERE, it's the fault of the customer, who oftentimes is unsophisticated about vehicles and the tech within them.


“No”, surely? If it requires constant supervision, it isn’t any sort of self-anything.


Using the correct words would not require another 2 lines to clear things up.

The only reason to label it incorrectly is to falsely advertise.


The Model S Plaid is also not plaid.

I believe this is invented outrage; their actual capabilities are made very clear.


It's mostly not outrage, it's irritation. They don't need to be clowns about it, the name doesn't need to contradict the lengthy explanation of what it actually does.

Nobody would be griping about the name being misleading if it was marketed as Tesla Tartan or something.


Model S Plaid is not describing the paint scheme of the vehicle. Full Self Driving is the name of the feature specifically chosen to mislead about its capabilities.


> Full Self Driving is the name of the feature specifically chosen to mislead about its capabilities.

Full Self Driving was chosen to describe its intended capabilities exactly.

Was the SLS misleading before its first flight in 2022 because it had yet to launch to space? Were the James Webb or Hubble falsely advertised during production? "They want us to call that thing a 'Space Telescope' when it's obviously a pile of mirrors on the ground..."


If a space telescope is delivered to astrophysicists for use without being able to do what a telescope should do, hell yeah that's misleading.

There's a massive difference between "I want to build thing x -> I'm building x -> I have built x -> Pay me for x -> Ok, here's your x, go ahead and use x" and "I want to build x in the future -> pay me for x -> here's x, go ahead and use it -> I am building x -> I have built x (I may not be able to build it at all, btw)".


It would be wrong to have called Mercury "lunar orbiters" though.


Minimal driver intervention isn't keeping your hands on the wheel at all times and paying attention.


It is if it requires minimal intervention.

Intervention and attention are not the same.


But surely "require active driver supervision" is the same as attention, right?

https://news.ycombinator.com/item?id=39360101


Yes, they claim to require full attention and minimal intervention.


I think the quote from the article really sums it up in plain language, and the argument Musk presents should be familiar to anyone here, who's heard the "Robots > Humans in a car" argument.

> “Regardless of how drunk Hans was, Musk has claimed that this car can drive itself and is essentially better than a human,” Bass said. “We were sold a false sense of security.”


I'm no Tesla fan boy, but it seems entirely possible that even the limited capability of FSD was better/safer than a driver whose BAC was 0.26%, even if this one ended up in a fatal crash, as that happens to drunk human drivers at the rate of more than 10K deaths per year in the US.

If this is the first FSD death in the world, Tesla is probably thankful that it was someone who was hammered.


The problem is that someone who was hammered might have taken a Taxi home if they didn't have a car that claimed it could drive them home autonomously. In which case it doesn't matter how much better FSD is than a drunk driver. A colleague of mine actually bought a Tesla for this reason - to get them home safely from the bar. Tesla's marketing is way out of step from their actual capabilities and it gives people a false sense of security.


I believe this is the first fatality where Tesla can't weasel their way out of the cause. There have been multiple crashes, some fatal, where FSD was engaged but disengaged a split second before the crash. So in the mere seconds between the autopilot disengaging and the impact, the driver is supposed to be fully up to speed on what's about to happen and avoid it. Also, a lot of these drivers were behaving in an unsympathetic manner (i.e. sleeping while FSD was engaged), and the jury determined they were being a bit more reckless than the media would have you believe.


> 1. Ship a clearly half baked system called "Full Self Driving".

In my opinion unless the manufacturer accepts legal liability[1], it's not "self-driving" despite any marketing to the contrary. "Put your money where your mouth is."

Tesla's marketing has been extremely misleading in this regard.

[1] https://insideevs.com/news/575160/mercedes-accepts-legal-res...


6. Refuse to turn over crash data you have to investigators

7. Somehow manage to set conditions that prohibit the government from disclosing the data you do eventually provide

Edit: forgot one!

8. Decide that RADAR sensors are too expensive, and stop using them. But because you don't want owners, the public, press, or regulators to see how much worse the stereo camera systems perform compared to combined radar and camera, disable the RADAR in all the older cars.


Anecdotal, but I have a car with radar installed and didn't notice that it was disabled, fwiw. It's better now than when the car was new.


Well, I mean I think it’s expensive to try to maintain two systems - one with LIDAR and the other without.


One would expect that to be part of the calculus for no longer offering them, but apparently it wasn't.


9. And never include LIDAR in the first place...


He was not on FSD. The software had unfortunately never been downloaded. I say “unfortunately”, because the accident probably would not have happened if FSD had been engaged - Elon Musk

https://x.com/elonmusk/status/1757652781010497798?s=20


Oh. Well he always truthfully and accurately represents the situation so I guess that’s the end of that.


Hopefully Tesla gets sued for false claims...


While I don't know how good Tesla's self-driving is, I do have experience with a limited keep-your-lane system in a BMW. The one that actually steers.

It requires a lot of force to override it. And remember, it requires you to keep your hands on the wheel. But you are not expecting to have to fight it, so there are some delays there. Then you're fighting a system without knowing how much force it takes to override it. You don't want to apply too much force because you might be barreling down the freeway at 80mph. At those speeds you just nudge the steering wheel. But that's not the case when you need to override the car.

And it happens more than you expect. Every time the car miscalculates the actual lines on the road it'll do it. Changes in paint where the old one is still there, for example. Or just faded paint, and all of a sudden it detects a diagonal split between two slabs of concrete as the lane separator.

Or you're doing 50mph, and you're passing a couple of cyclists. If you don't turn on your turn signal and you weave to the left to pass them the car will literally steer back into the direction of the cyclists. Scary.


I have actually been in a car crash that was caused by the lane-assist feature in the Rivian R1S. It misread the proximity of the concrete K-rails on each side and jerked the steering wheel into the wrong lane. The driver's correction to avoid (dangerously) switching lanes kicked the back of the car out at 75mph and we drifted, hitting one rail, bouncing off and hitting the other.

It was a violent enough collision to write off the car, but fortunately the construction of the vehicle saved us and nobody had any particularly severe injuries. I now am adamant about drivers not experimenting with those features while I am in the car because I don't trust them.


> If you don't turn on your turn signal ... the car will literally steer back

This kills me about BMW's lane-keep system. It doesn't just lane-keep in the absence of input, it actively fights driver input that would take it over the lane lines.


To be fair, that is the point of lane-keep assist: to keep an inattentive driver from drifting out of their lane. Without any input to the contrary, it should assume a car leaving its lane is unintentional and try to correct it. That's what it is designed to do.

My Volvo has an early version of lane-keep assist (2017 model first introduced in 2015). It will fight me when it thinks I'm out of a lane. Fortunately, it doesn't take much input force to override it. It is still strong enough initially that if I were driving without holding the steering wheel appropriately it would cause a problem. Also, I must remember to turn it off if there is any snow or ice on the road surface as that tends to put the car in "kill me mode".


> To be fair, that is the point of lane-keep assist: to keep an inattentive driver from drifting out of their lane. Without any input to the contrary, it should assume a car leaving its lane is unintentional and try to correct it. That's what it is designed to do.

But the whole point was that the driver explicitly turned the wheel in order to partially leave their lane, in order to give bicyclists space. They weren't drifting - they were actively steering.


Yeah, no, Tesla disengages if you tap the brake at all, or even twitch the steering wheel.

If this was in fact an FSD crash, the driver didn't try to stop it in any way.


This all feels very MCAS to me.


No? MCAS was necessary because Boeing made the plane aerodynamically unstable.

They did so because of pressure to compete against superior Airbus offerings, and because a new airframe (the proper solution) would have been a huge undertaking and airlines wouldn't like it because they'd have to get pilots certified on the new airframe.

Boeing then did not implement sufficient redundancy in the system, proper warning annunciators, training, documentation, etc. Many pilots had no idea the system existed.

There are similarities, however. A large part of the reason Tesla has so many problems is because Musk decreed that radar was too expensive and discontinued installing them, forcing the system to rely on stereo camera vision.

Even worse: Tesla disabled the radar systems in older cars because they didn't want the older cars to be more capable - it would be a very inconvenient way for the public, press, and regulators to see how deficient the camera-only approach was.


Not sure why you’re talking about the deficiencies of the model sans radar throughout the thread, I haven’t noticed any degradation - on the contrary, the performance is better now than when the car was new a few years back. Do you have a statistically backed source for this, or is this based on media/hearsay?


> MCAS was necessary because Boeing made the plane aerodynamically unstable

It's not aerodynamically unstable. MCAS was required because in certain parts of the flight envelope outside of normal operation the pull-force feel of the control column was lighter than during normal operation. The FAA regulations require it to be heavier.


Yes this is it! I was trying to describe my misgivings with this functionality and this is exactly it. You have a powerful system with a ton of edge cases where the penalty for getting a minor edge case wrong is that someone(s) dies.


I drove one of these BMWs too and had the same thought. It seems like too far in the other direction.


If you are passing cyclists at 50 MPH you should be an entire lane to their left.


Teslas don’t require much force at all to disengage.


The Kia lane following, if anything, is a bit too weak (it follows more lazily than I would), but it certainly doesn't require any major force to overcome it and, most importantly, it does NOT disengage if you nudge it back to the right place; it just accepts the new normal and keeps going.

Similarly it's smart enough to NOT disengage cruise control if you hit the gas, but will if you tap the brakes (which you don't often need to do because the space management works decently well).

Ten years ago I would feel pretty comfortable driving any vehicle (up to and including double-shifted manual transmission military trucks).

Now I feel each new car needs a separate training regimen because all the automated systems operate so differently from each other.


Tesla doesn’t let you alter the car’s course without enough force to disengage.


I have a 2020 Subaru that requires very little force to move the steering wheel when lane centering is engaged. My Tesla Model 3 requires a bit more force, but it's still quite light.


I mean barely any, but you can also tap the brake or hit the stalk; there are a billion ways to disable it. So if this crash actually happened with FSD enabled, it was because the driver didn't even try to stop it.


I don’t know what BMW is like but it requires more force than I think is safe.


My 10yo VW only nudges you gently, no force required to steer it your way. Even that nudge is annoying to some, but it's definitely not dangerous.


> And it happens more than you expect. Every time the car miscalculates the actual lines on the road it'll do it. Changes in paint where the old one is still there, for example. Or just faded paint, and all of a sudden it detects a diagonal split between two slabs of concrete as the lane separator.

Our Subaru came with "lane assist" enabled by default, and I turned it off in under a week after it kept trying to "assist" me on New Jersey's highways right across a lane of traffic in construction zones.


> you're passing a couple of cyclists. If you don't turn on your turn signal and you weave to the left to pass them

This is why everyone gives me evil looks when I drive my own BMW :)


I think that's fairly common amongst lane centering implementations. One thing I think is unique about Tesla's autopilot is that if you grab the wheel and override the steering, it disengages AP altogether. That is, including the traffic-aware cruise control. I believe most/all other manufacturers maintain speed unless you also hit the brakes.


It does not - my M3 continues to keep the speed if I override autopilot with the steering wheel. If I press the brakes, then both systems are shut down.


It takes very little force to disengage in the Tesla, at least when it works properly (I haven’t had it work improperly, but I could imagine a bug that prevented it from detecting that causing issues).


> If you don't turn on your turn signal and you weave to the left to pass them

...oh nice, the car scares you into remembering that you should always signal a passing maneuver with your blinker!

Another case where this system goes awry: in light drizzle, cars leave wet trails behind that shine, and the system will mistake them for lane markings. I have to turn LTA off in such conditions or it would steer me right off the road!


There are a lot of cases where you pass a line without indicating though. There are also a lot of cases where a line eases off to the side at a turn-off and would guide the lane-assisted vehicle off the road. Just do a Google search for these kinds of cases. This technology is immature and dangerously fails in a lot of cases already.


> There are a lot of cases where you pass a line without indicating though.

I'm curious to know what those circumstances are. I'm just a dumb American driver but even I was taught you always use your indicator when changing lanes: turning, passing, merging, parallel parking. I don't know where at any point you are not required to use the indicator.


Mainly during roadworks where temporary signage and barriers are put in place that override road markings. Maybe it's different in the UK (where I'm from) but it's pretty commonplace here, and would be really weird to indicate according to road markings when temporary cones are up.


> There are a lot of cases where you pass a line without indicating though.

You are legally required to in the US.

Edit: I'm not entirely confident in this. You are legally required to signal a turn. Lane changes may not meet the definition of a turn. In NYS where I live lane changes are indeed included.


I'm in the UK, to be fair, but it is common here when doing roadworks on a motorway to close one side of the motorway and set up cones that guide traffic onto the other side, so both directions share the "wrong" side of the motorway (divided by cones). Entering the roadworks area and switching to the other side necessarily means crossing lane boundaries, but indicating at this stage would be weird and nobody does it.

A relation of mine was renting a car with lane assist and it nearly drove him into the cones by way of refusal to cross the boundary (until it conceded control to the driver).


Roadworks sort of temporarily change the layout of your lane. So as long as you keep inside whatever goes for the current lane, you don't need to indicate. But otherwise you do, since you temporarily go out of your lane. It's not about the line itself, it's signalling the maneuver and crossing the lane's boundary that does it.

Sure, the lane centering systems don't know that cones change how the lane goes. They also don't know much about temporary lane markings painted in yellow (in my country roadworks do this) or that they override the usual white ones. Needless to say, one needs to pay extra attention when driving through such things anyway, so I turn LTA and adaptive cruise control off and slow down. A wild road worker may suddenly appear in my way at any moment.


Wait wait.

You are passing cyclists. Blinking serves two purposes: you are letting them know that you have noticed them (so they know you're not just going to plow right through them) and you are letting them know you are going to pass them (so they don't do stupid shit like decide to turn left at exactly this moment).

If a car behind me, in the same lane, was passing me without blinking, I would highly suspect a drunk or overtired driver.


I'm not sure how a cyclist can see your blinker - you are behind them when starting to overtake, and looking back is not something cyclists do much (or more than once)...


I have no problems noticing lights and blinkers in my peripheral vision, or when they cast light onto objects ahead of me.


Helmet-mounted rearview mirror. Looks like the little mirror they use at the dentist's office.


My comment was a bit misleading, sorry. I do indicate when passing cyclists. I read your comment as suggesting that the assumption lane assist makes is OK because one should always indicate when crossing a lane marking (which is not always the case).


> Von Ohain and Rossiter had been drinking, and an autopsy found that von Ohain died with a blood alcohol level of 0.26 — more than three times the legal limit — a level of intoxication that would have hampered his ability to maintain control of the car, experts said.

The details make this seem less of an autonomous driving issue and more of an incredibly irresponsible operation issue.


These two were idiots, but this highlights the dangers of L2 driver assistance systems (misleadingly marketed as "Full Self Driving") that require driver attention at all times to prevent accidents. They give you a false sense of security, and there's no guarantee you'll take over in time to prevent an accident.

If you give dumb toys to people, they will use them in dumb ways. This is why Waymo/Google abandoned their driver assistance efforts a decade ago and jumped straight to driverless. That turned out to be a masterstroke in terms of safety.


I totally agree with what you say. Certainly “Full Self Driving” should be illegal marketing for level 2 autonomy.

That said, cars are dumb toys to give people. Certainly access to cars is dangerous for people that are going to use them while drunk. “Sense of security” or not, why were these guys in control of a vehicle?

While autonomous vehicles are not ready, this event just cements the need for them in my mind. I expect that they have already saved more lives than they have cost. Some continue to insist that replacing human drivers is a high bar. In edge cases and for the best drivers, it is. Collectively though, evidence suggests that the bar is very low and that we have already surpassed it.

At this point, it is more of a “fairness” or “control” issue who gets hurt by autonomy than it is aggregate safety by the numbers. In this case, thankfully, it sounds like it was fair.


>evidence suggests that the bar is very low

There is no real evidence (not biased or cherry-picked) that replacing all drivers with Teslas would actually be better. I am aware of Tesla's numbers, but those are invalid statistics, not comparing apples to apples. We would need something like how many miles a car drives before the driver has to take over, not a comparison of accident counts, since those compare human vs. computer+human.


Why is it invalid to compare which of human vs. computer+human will save more lives?


It is inherently cherry picked data, because the computer can disengage at any moment, but the human cannot.

For example: all of the worst driving conditions are inherently in the 'human drivers' bucket because the computer won't engage in those situations.


I don't drink alcohol at all. It is possible that because of that I'm better than computer+human while the humans overall are worse.

I sometimes must drive on icy roads where the computer won't engage at all - these are more dangerous conditions, and by refusing to do anything there the computer keeps them off its record. It is possible for humans to be better than computer+human in every situation where the computers work, yet end up looking worse overall because of the situations where the computers won't work.

The above are just two issues that I can think of, and I'm not even a researcher who would know all of those special cases. Which is why everyone wants transparent data: someone independent needs enough information to account for factors like the above and allow us to have a real debate.


>Why is it invalid to compare which of human vs. computer+human will save more lives?

I think it is not relevant for real self-driving. I could also "invent" a computer that will not let drunk people start the car; it would have good statistics. The issue is that I am not a billionaire who can get people to install my invention in their cars.

If we as a society wanted to reduce deaths, we could already have done a lot more to prevent drunk driving, bad driving, speeding, etc.


> I totally agree with what you say. Certainly “Full Self Driving” should be illegal marketing for level 2 autonomy.

The driver was literally a Tesla Inc. employee. Do you really think that they were fooled by marketing into believing the system was more capable than it was? No, they were just drunk and made a terrible decision.

I mean, I'm tempted to just agree. Let's call it something else. Will that stop these ridiculous arguments? I really doubt it.


I'm confused by the question. If anything I think it's more likely that a Tesla employee could be tricked. Other companies like Waymo have cars that literally drive themselves, and Tesla routinely has demos which purport to show the same functionality. It doesn't sound ridiculous at all that an employee might see lots of videos like [https://www.youtube.com/watch?v=Ez0A9t9BSVg], where a Tesla car drives itself, and conclude that their Tesla car with "Self Driving" can drive itself.


Did you read the article?

> Von Ohain used Full Self-Driving nearly every time he got behind the wheel, Bass said, placing him among legions of Tesla boosters heeding Musk’s call to generate data and build the technology’s mastery. While Bass refused to use the feature herself — she said its unpredictability stressed her out — her husband was so confident in all it promised that he even used it with their baby in the car.

> “It was jerky, but we were like, that comes with the territory of” new technology, Bass said. “We knew the technology had to learn, and we were willing to be part of that.”

Seems like he was indeed fooled by marketing.


They were idiots.

> Before enabling Autopilot, the driver first needs to agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your vehicle.” Subsequently, every time the driver engages Autopilot, they are shown a visual reminder to “keep your hands on the wheel."

They read and accepted the warnings informing them how to use the system, and then did the opposite. They willfully and knowingly used a feature in an unsupported manner.

Sorry, this is not being misled by marketing.


Ugh, the point is this "feature" shouldn't be usable in an unsupported manner. It's not a button on a website, it's a safety-critical system. There's a reason why real self driving companies are super conservative and use a geofence.


> There's a reason why real self driving companies are super conservative and use a geofence.

What reason is that? If they're safety critical systems, why are they putting them on the road at all?


Because they are validated extensively inside that geofence, and any operations inside that area are "supported". They don't let the vehicles go anywhere they want, and they don't ever allow you to use them in an unsupported manner. You want a Waymo to stop right in the middle of a busy intersection? The vehicle will refuse and keep going until it finds a safe place to stop.


I thought we were talking about FSD, not autopilot, how is autopilot relevant in this conversation?


> > “It was jerky, but we were like, that comes with the territory of” new technology, Bass said. “We knew the technology had to learn, and we were willing to be part of that.”

That's a frightening attitude. Something I keep in mind every time I see another Tesla on the road. And I'm driving one myself.


Wouldn't "bought into the marketing" only apply to something like "I got in this Turo'd Tesla, turned on Autopilot, then 10 seconds later it crashed into another car on an unprotected left-hand turn"? You literally can't believe in the marketing after having used it for as long as it sounds like he has; the car does too much driver monitoring to leave you with the impression it works flawlessly and without human intervention. The entire Twitter/X community that talks about FSD reiterates that the current iterations feel like a 15-year-old driving.


Elon Musk: "FSD is already safer than humans, see our (misleading) safety report. If you use it and give us data, it will be a robotaxi by end of the year."

This dude: "Sounds good, let me put my baby in it. It's jerky, but I'm contributing for a bigger cause."

This counts as being misled by marketing to me.


Elon: "FSD is safer than humans."

This dude: "Neat, I'm going to use it!"

FSD, upon activation: "Full Self-Driving (Beta). Full Self-Driving is in early limited access Beta and must be used with additional caution. It may do the wrong thing at the worst time. Do not become complacent. ... Use FSD in limited Beta only if you will pay constant attention to the road, and be prepared to act immediately, especially around blind corners, crossing intersections, and in narrow driving situations. Do you want to enable FSD while it is in limited Beta?"

This dude: "Elon said it's safe, I'm going to drink and drive!"


> Elon: "FSD is safer than humans."

Well, this is false. So it's misleading right off the bat.


Is it? I'd heard Tesla was being too tight lipped for any conclusions to be made at this point, but if you've got a study or source I'd love to see it!


Their safety report is a joke and their methodology is full of holes. There are a bunch of replies to this comment that explain why: https://news.ycombinator.com/item?id=39359437

As far as proving how unsafe it is, no one can do it unless Tesla is transparent with their data. Deliberately hiding that data doesn't automatically mean it's safer than humans, so yes, their claims are indeed highly misleading.


I see a lot of folks casting doubts and asking questions like you in the linked thread, but nobody seems willing to dispute the signal itself, just its relevance. From the top comment:

> You're essentially telling us that drivers driving Teslas with active autopilot (i.e. limited to great weather conditions on high quality roads) have fewer accidents than those without active autopilot (e.g. driving in poor weather conditions, or on bad roads). That's not much of an insight.

FSD refusing to engage in unsafe driving conditions does not compromise the system's safety. In fact, it's the safest possible option, and something I wish more humans (myself included) would do.

> As far as proving how unsafe it is, no one can do it unless Tesla is transparent with their data.

Right, so how are you confidently discounting Elon's claim without data?


> FSD refusing to engage in unsafe driving conditions does not compromise the system's safety.

That’s not the relevant part here. It’s that other humans do drive in unsafe conditions that contribute to overall human crash rates and those variables are not controlled for in the comparison.

You can’t just drive in areas your system deems safe, but then compare to humans who drive everywhere. You understand how that skews the comparison, right? It’s pretty basic statistics.

> Right, so how are you confidently discounting Elon's claim without data?

Elon’s claim isn’t supported by data either. That’s how I can discount it.

When you say “safer than human”, you better support it with data that holds up to scrutiny. The burden of proof is on you. Otherwise, you’re asking us to prove a negative.


You were asked for any evidence that it's unsafe, and you don't have it. The linked article is the only fatal accident you can even point to, and even that one is (1) only a suspected FSD accident based on previous driver statements and (2) really obviously a DUI anyway.

Look, these are the most popular vehicles on the road. Where are the wrecks? Where are the bodies? Surely if these were significant there would at least be a bunch of anecdata showing suspicious behavior. And there isn't. And you know that, which is why you're arguing so strongly here.

Just chill out. The system is safe. It is not perfect, but it is safe. And no amount of whatifery will make it otherwise.


No one can tell with a wreck if FSD was active or not. No shit, no one can point you to actual evidence because Tesla doesn’t release any data. You realize what a ridiculous, circular argument you’re making, right?

It’s telling that you want to rely on anecdata and news reports as proof rather than asking Tesla to be forthcoming.

To call a system safe and more specifically safer than human, you need data and you don’t have it. Plain and simple. The burden of proof is on you.


Is the burden of proof really on me? I mean, I look around and don't see any accidents. Seems safe. That's good enough. And it's good enough for most people. You're the one shouting like crazy on the internet about how everyone is wrong and making explicit general statements that the system is clearly unsafe. And that doesn't match my experience. And when asked for evidence of your own you admit it doesn't exist.

So... it's like that Money Printer Go Brrr... meme. Shouting loudly about something you believe deeply doesn't make it true. Cars aren't crashing. QED.


Precisely what I expected from you. You’re exactly Tesla’s target audience. The irony and projection in your comment is comical.

“It’s safer than humans.”

“Proof?”

“Just look around, bro. Trust me, it’s good enough. No data needed.”

Thankfully, some of us don’t lack critical thinking.


This seems out of hand. My perception is exactly the opposite: you're all over these threads claiming in decidedly certain terms that this system is unsafe. And all I'm saying is that it's clearly not, since at this scale we'd have if nothing else extensive anecdata showing AP/FSD accidents that simply don't seem to exist in the quantity needed to explain your certainty.

So, yeah. Occam tells me that these cars are safe. Which matches my 2.7 years of experience watching one of these things drive me around. So I'm comfortable with that, no matter how angry that makes you. I'm absolutely willing to change my mind with evidence, but not because you yelled at me on the internet.


I said they are not safer than humans like Tesla likes to claim. The methodology is extremely dubious and it’s very easy to debunk it for anyone with a basic understanding of statistics.

You claim it's true because you have anecdotes and personal experiences. You also say there are no crashes. But there are crashes. We just don't know if FSD was engaged at the time because, again, Tesla doesn't reveal it. Go through the NHTSA public database; there are dozens and dozens of Tesla ADAS crashes. And those are just the reported ones.

You are also conspicuously silent in this whole thread when data transparency comes up. Not once have you admitted Tesla should be more forthcoming about their crashes, and that in itself is very revealing.

You want to argue data or methodology? I’m here. But you’re being intellectually dishonest and resorting to repeating the same things over and over again. That ain’t gonna convince me.

And cut the shit about being angry or yelling at you. No one’s doing that. You’re clearly projecting.


If you don't like Tesla's methodology, what data do you need to determine how much more unsafe Teslas are compared to humans? You sound like you have the data:

> Go through the NHTSA public database, there are dozens and dozens of Tesla ADAS crashes. Those are just the reported ones.

Why hasn't anyone else, or you, directly published conflicting data to show that it's less safe? "debunking" some statistics might be correct but it's not convincing, and definitely won't result in any action you might want to see from regulatory bodies.


> what data do you need to determine how much more unsafe Teslas are compared to humans?

Tesla should start by first reporting disengagement data to CA DMV like every other self driving company. It shows FSD's rate of progress.

Then they should take all their crashes, normalize for different factors I mentioned earlier and then make a comparison. See how Waymo does it: https://waymo.com/blog/2023/12/waymo-significantly-outperfor.... They have multiple white papers on their methodology. Go through it, if you're interested to see what apples-to-apples comparison looks like.

> Why hasn't anyone else, or you, directly published conflicting data to show that it's less safe?

Because Tesla redacts every single reported crash to point that it's useless. Is FSD enabled or Autopilot? Not reported. FSD version? Redacted. Injury? Unknown. Description? Redacted. Good luck trying to glean any information from it. This is by design to prevent independent analysis of their performance.

Be transparent like everyone else. You know it's fishy when they're actively trying to hide things.


By the way, if you want to see missing data in action, I did some legwork: https://news.ycombinator.com/item?id=39375581


Totally agree. As a driver with a car with advanced "driver assistance features", I honestly don't understand how people get value out of most of these. That is, I'd rather just use a system where I know I have to be in control at all times, vs one where "You only have to be in control 1% of the time, oh and if you miss that 1%, you're dead."

For example, I recently tried using the adaptive cruise control/lane centering features on a long road trip, and it was maddening, not to mention pretty terrifying. The exits from this highway weren't well marked, so at most of the exits the car tried to stay in the middle of the rightmost driving lane and the exit lane (i.e. aimed directly at the oncoming divider). I get that other systems may be more advanced, but I don't see the benefit of an automation level that every now and then takes on the driving characteristics of a drunk toddler.


> Totally agree. As a driver with a car with advanced "driver assistance features", I honestly don't understand how people get value out of most of these.

I've got radar cruise control, lane keeping, and auto braking on one of my cars.

I just drive it like it's a normal car, and the computer helps me out from time to time. That's where the value is for me. I don't do a lot of driving with cruise control, so radar-assisted cruise isn't very helpful, but lane keeping nudges me when I drive on the line, and auto brake helps out in some situations (and prevents me from backing over flowers, and some other overreactions).

For my car, lane keeping doesn't nudge too hard, so it's not hard to push through, but it helps a bit if I lose attention.

I'm considering replacing this vehicle, and I'd get these systems again. If I replace my other vehicles, which don't get to go on long drives, it wouldn't be as big of a priority.


I found it invaluable one time driving at night through Northern Ontario (Canada) (which is actually southwestern Ontario, but shrug). This is a really bad idea because of Moose. If you hit one they are tall enough that their body tends to go through your windshield and kill you.

Self driving watched the road, I watched the ditches. I didn't need to avoid a moose that time, but it may have saved a coyote's life.


I would settle for being able to see out my back window. It's one of the reasons I still have my crappy little truck from 2006. The windows are all low enough that I don't need a backup camera to back up. I can just turn my head around and see everything through the rear windshield.


>I honestly don't understand how people get value out of most of these. That is, I'd rather just use a system where I know I have to be in control at all times, vs one where "You only have to be in control 1% of the time, oh and if you miss that 1%, you're dead."

A lot of accidents happen from a split second of lack of attention. Lane keep prevents you from inadvertently drifting over the centre line into traffic. Radar cruise control prevents you from rear ending the vehicle in front of you. Both of these simple features are awesome.


I prefer my Tacoma's approach, which is just audible and visual alerts if these start to happen.


This is about more than Tesla and its claims of "full self driving", though. People publish videos of themselves doing reckless stuff with lane centering and adaptive cruise control tech in other vehicles as well.

This is a large issue that will take more than action against any individual manufacturer to solve.


Your contention is that having FSD in the car makes accidents more likely because people will rely on it when they shouldn't be driving at all. So... does it? The statistics don't seem to bear that out. This is the first significant accident of that type we've seen, whereas there have been multiple "driver passed out and the car stopped on the road" incidents. Seems like the truth is the opposite, no? Tesla didn't invent DUIs, but it seems like autopilot is a net win anyway.


There are no third party statistics since Tesla lawyers actively force NHTSA to redact information from any reports they do make.

Even ignoring the fact that Tesla habitually lies and acts in bad faith to consumers and investors alike, the “statistics” their marketing department publishes are worthless. They present no analysis, methodology, or even data to support it. It is literally just a conclusion. The level of scientific rigor they demonstrate is unfit to even grace the hallowed halls of a grade school science fair.

Even ignoring that, this is not the first significant incident. By Tesla’s own admission there were already ~30 known incidents a year ago at the beginning of 2023. Unfortunately, I can not tell you which ones specifically because they, you guessed it, demanded NHTSA redact which system was active from their public disclosures.

Even ignoring that, their reports are suspect in the first place. Of the over 1,000 crashes they admit to being involved in, they did not report the injury severity in ~95% of cases. In the ~5% of cases where they do report the injury severity, it is because a third party (news, official complaint, etc.) published that information, which compels Tesla to report the same information under penalty of law.

Of the tens of confirmed fatalities, Tesla only discovered a single one on their own (as of 2023-09). Their telemetry systems, which are responsible for detecting 90+% of their reported incidents, detected fewer than 40% of their reported fatal crashes. A full 30% of known fatalities were undetected by both telemetry and media and are only recorded due to customer complaints by surviving parties who knew the system was engaged. The amount of missing data is almost certainly staggering.

So no, the data does not bear it out since there is no credible positive safety data. And, as we all know, in a safety critical system we must assume the worst or people die. No data, no go.


There's more statistics in your comment than in the Tesla safety report :)

I'm curious where the data is about their telemetry systems' failure to detect incidents. It seems very fishy.


I'm genuinely curious what you're citing here? You're being really specific about this stuff but not actually linking to (or even mentioning) any sources. And as a Tesla owner who's been steeped in this complete shitfest of a debate here on HN for three years, I'd expect to have been exposed to some of that. Yet this is all new.

Come on, link your stuff. I mean, what's the source for telemetry detecting different fractions of fatal vs. non-fatal crashes? Are you sure that's not just confounded data (fatal crashes are more likely to involve damage to the computers and radio!) or outliers (there are very few fatal crashes known!)?

Basically, the red yarn on your bulletin board looks crazy thick to me. But I'd genuinely love a shot at debunking it.


Given that I said reports to NHTSA it should be obvious that I am talking about the NHTSA SGO database [1].

But thanks for boldly calling me a conspiracy theorist when I quote data from official Tesla reports to the government.

As to your other questions, go ask Tesla. It is not my job to speculate for Tesla’s benefit when Tesla has all the data and chooses to act in bad faith by not only refusing to disclose it, but even forcing NHTSA to suppress unfavorable information.

As to “debunking” anything I am saying, whatever. It is not my job to thoroughly analyze Tesla systems from the outside to prove they are unnecessarily dangerous. It is Tesla’s burden to present robust data and analysis to third party auditors to demonstrate they are safe.

Debunking what I am saying does not somehow magically prove Tesla’s systems safe. It just means I, a random person on the internet, could not prove they were definitely unsafe. I probably also can not find the flaw in a random perpetual motion machine someone presents on the internet, but that does not make it work. Though if you are convinced by Tesla’s brand of bad faith safety analysis, then I have a perpetual motion machine powered on dreams to sell you since you can not prove it does not work.

[1] https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_In...


That's just a spreadsheet. Can you link me to the analysis that pulls out the specifics you're claiming? I mean, yes, I can do it myself. But my experience is that when people point to raw data and not analysis when challenged, it's because they're simply wrong and trying to hide confusion and obfuscation.

> Debunking what I am saying does not somehow magically prove Tesla’s systems safe.

No, but it's still a blow for good faith argument and worth pursuing.


You claimed Tesla systems are safe, stating: “Your contention is that having FSD in the car makes accidents more likely because people will rely on it when they shouldn't be driving at all. The statistics don't seem to bear that out. This is the first significant accident of that type we've seen…”

You have presented exactly zero analysis or data supporting your claim that machines that have demonstrably killed people are in actuality safe. The burden of proof is on you to present evidence, not me.

In fact, I have even presented you a new data source filled with official data that you apparently have never seen before that can bolster your point. So how about you engage in good faith argument and support your positive claims of Tesla safety instead of demanding I prove the negative?

Note that quoting unaudited statements by the Tesla marketing department is not support, by the same token that official statements by VW about their emissions or by Philip Morris about the safety of cigarettes are invalid. You should also not point to haphazard "analysis" derived from those statements.

Also try not to argue there is an absence of evidence that the systems are unsafe. That is only applicable before somebody dies. A death is sufficient evidence to meet the burden of proof that a system is unsafe. The burden of proof then shifts to demonstrating that the rate of death is acceptable. If there is an absence of evidence to demonstrate the rate of death is acceptable, then we must conclude, based on the burden of proof already established, that the system is unsafe.

That is your real burden here. Demonstrating the available data is sufficiently unbiased, robust, and comprehensive to support your claim. Good luck, you’ll need it.


It's raw data, just put the spreadsheet into Google Sheets or Excel and have at it.

Here's what I found in my quick analysis after filtering all Tesla crashes:

  Total Tesla crashes: 1048

  Crashes with 'unknown' injury severity: 997
  Percentage of crashes with 'unknown' injury severity: 95.13%

  Total number of reported 'fatal' crashes: 27

  Number of fatal crashes detected by telemetry: 11
  Percentage of fatal crashes detected by telemetry: 40.74%

  Number of fatal crashes reported only by 'complaint/claim' source: 7
  Percentage of fatal crashes with only 'complaint/claim' as source: 25.92%
Matches up to parent comment's numbers. Incredible amount of missing data!

My experience is that when people repeatedly ask for analysis in the face of raw data being presented, it's because they're afraid to find out what's in it and hope to sweep it under the rug.


Good grief:

   Number of fatal crashes detected by telemetry: 11
   Number of fatal crashes reported only by 'complaint/claim' source: 7
Yeah, that's what I thought. Reasoning from outliers. Now for extra credit, compute a confidence interval from these 18 lines you cherry picked from a 1000-entry data set. I mean, really?

Yeah, I declare this debunked. This nonsense is only a tiny bit better than trying to declare a product dangerous based on one DUI accident.

(Also, I'm pretty sure you're making the argument in the wrong direction. Wasn't the contention upthread that there were too *FEW* telemetry-reported accidents, as if to claim that Tesla was suppressing them? This seems to say that Telemetry is a more reliable reporter, no? Meh. Not even interested in the specifics anymore, there's literally nothing a data set this small is going to tell us.)


Your argument is:

“Ha, Tesla actively suppressed and concealed 95% of the evidence so you do not have enough evidence to prove them wrong. Checkmate.

Tesla just hides it because it is too vindicating. So you have no choice but to believe their unsupported and unaudited claims.”

Again, you have not presented a single claim supported by any auditable data or analysis. You demand others present conclusions with a confidence interval when even the Tesla safety team is unable to do so, even though Tesla is the one pushing a system conclusively known to kill people. It is their duty to collect sufficient, comprehensive, and incontrovertible evidence that their systems do not incur excess risk and subject it to unbiased audits.

So, present your comprehensive, incontrovertible claim with a confidence interval based on audited data. That is the burden of proof to support killing more people.


Cherry picking from raw data of 1000+ crashes? Yeah, you're not here for a good faith discussion after vehemently asking for sources and assuming "confusion and obfuscation". You just want to shout down "I declare this debunked" with absolutely nothing to support it. This is gaslighting at its finest.

The original claim was this:

> Their telemetry systems, which are responsible for detecting 90+% of their reported incidents detected fewer than 40% of their reported fatal crashes. A full 30% of known fatalities were undetected by both telemetry and media and are only recorded due to customer complaints by surviving parties who knew the system was engaged.

So no, I'm not making the argument in the wrong direction. Perhaps try re-reading it? The numbers match it almost exactly.

I'm done with your nonsense.


We don't have apples-to-apples data in the public.


That's not a license to believe whatever anecdotes you want, though. I'm just saying that if anecdotes are sufficient, autopilot wins the argument. If they're not, we need data and shouldn't be freaking out over the one drunk driving incident in the linked article.


We're not responsible for that data, Tesla is. You should be calling for them to be transparent and not just settle for misleading "safety reports" because it has a narrative you want to believe.


Well, there are many, many more cars without "FSD" than with it (especially if you include historical data), so the fold change in rate would have to be astronomical for FSD cases to outnumber old-fashioned cases.


This is a question with intersecting margins, and those are fiendishly hard to answer.

Here's what I mean. There are people who will drive drunk. At the margin, there are people who will not drive drunk, but who will drive drunk using FSD. But how many? How much more drunk are they willing to be?

On the other side, driving drunk with FSD is safer than doing it without. Criminally irresponsible, yes, but I think reasonable people will agree that it's somewhat safer; FSD has a better track record than drunk drivers do. But how much safer?

Depending on how the margins intersect, FSD is either more dangerous or less so. I suspect the answer is less so, that is, there aren't that many people who would only drive drunk with FSD, and FSD is good enough at what it does to lead to fewer accidents. But I can't prove it, and reasonable people might come to a different conclusion. Practically speaking, it's impossible to get data which would settle this question.


> FSD has a better track record than drunk drivers.

We don't know that.


We do, in fact, know that.


You only get "recorded" as a drunk driver if you crash or cause a situation, drive erratically, cops are called, breathalyzers, etc. Obviously I'm not condoning drunk driving, but you are looking at an extremely skewed set of circumstances, much like driving during daylight on California streets in clear weather is for FSD.


How do we know that?


It's not just Self Driving but Full Self Driving! Because today, self driving means not actually self driving at all.


I self drive every time I get behind the wheel. Who else is going to do it? Maybe they should call it "Someone else Driving" but that would just be Uber/Lyft.


Sorry but everything after "the driver who's responsible for their decisions was drunk" is moot.

But otherwise, yes, your statement about giving "toys" to "dumb" people stands no matter the technology.

And knives can be used to cut steak or stab people.


You could be perfectly sober and still not be able to intervene in time to prevent a crash. Systems like this encourage inattention, but still expect drivers to snap back to full attention in a fraction of a second. That's the entire point. So no, it's not moot.


How about alcohol odour detection required in all vehicles?

How about attention-distraction monitoring, where perhaps the first penalty or safeguard is a forced speed reduction: the measured attention-distraction level determines the maximum speed the vehicle can go, thereby reducing the need for as fast a reaction time? (A toy sketch of this is below.)

Any other possible solutions?

I think in general coddling people does more harm than good, and it is lazy not to look for nuanced solutions just because blanket rules are "easier."
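To make the attention-gated speed idea concrete, here is a toy sketch. The attention score is a hypothetical 0.0-1.0 value from a driver-monitoring system, and the thresholds and speed caps are invented purely for illustration.

  # Toy sketch only: attention_score is a hypothetical 0.0-1.0 value from a
  # driver-monitoring camera; thresholds and caps are made-up numbers.
  def max_allowed_speed_mph(attention_score, road_limit_mph):
      if attention_score >= 0.8:
          return road_limit_mph              # attentive: normal limit
      if attention_score >= 0.5:
          return min(road_limit_mph, 45)     # somewhat distracted: cap speed
      return min(road_limit_mph, 25)         # badly distracted: crawl

  print(max_allowed_speed_mph(0.6, 70))      # -> 45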


It depends on the discussion you are having. I agree that it is not “moot” in either case.

Your point is that these systems need to be safe even in the face of incapable drivers and that, despite the misleading marketing, they are not that (yet).

The other point though is that people already have access to cars and, according to the evidence, cars with these features are LESS dangerous.

Encouraging over-reliance is a problem but not as big a problem as having to rely on the human 100% of the time. This statement is just a fact given the statistics.

Given the above, while it is too extreme to say that operation by an impaired driver is “moot”, it is fair to suggest that the most significant source of risk is the impaired driver themselves and their decision to operate a vehicle of any kind. The biggest offset to this risk is the additional safety features of the vehicle. The degree of over-confidence caused by the vehicle type is a distraction.


There are no statistics that categorically prove cars with these features are less dangerous. Tesla's own "safety report" is extremely misleading and has no controls for geography, weather, time of day, average age of cars, demographics, etc.

If you're developing an autonomous system, you have a moral and ethical obligation not to roll it out when it's half baked and unsafe. You can't give an unfinished safety-critical system to the general public and then blame them for misusing it.


Don't all Tesla vehicles have the highest safety ratings ever?

I guess maybe though something being less dangerous isn't the same as something being relatively more safe?


Those safety ratings don't assess FSD performance.


You're missing or avoiding my point?

Comparing similar accidents regardless of FSD or not, a Tesla's occupants are arguably kept safer and would fare better than being in any other vehicle, right?


You’re making an entirely irrelevant point. They’d fare better if they were in a bus too.

We’re talking about cause of the accident here, not what happens after one.


Naw, you're just dismissing my valid point because it has weight to it - you want it to be irrelevant, it's not.


Then please tell us how the crash ratings of a vehicle are helpful in assessing FSD performance. The entire discussion is about who is responsible for causing the crash. Automated-system collisions are counted regardless of severity.


> The other point though is that people already have access to cars and, according to the evidence, cars with these features are LESS dangerous.

I don’t think we can say anything of the sort for Tesla FSD (beta).


> Sorry but everything after "the driver who's responsible for their decisions was drunk" is moot.

It's not for the larger discussion about safety of these sorts of systems. Any such system _has_ to consider the dumb human that's in the loop, because to not do so is almost inevitable to lead to problems. If your driving assist system makes people more likely, and more able, to drive drunk, then it's not just a problem with those people. Sure, they shouldn't be doing that, but ignoring that some people will is irresponsible.

Any system that involves humans that doesn't consider the human element is doomed to fail eventually. We humans are far too good at making that a certainty.


So the solution is dumbing down the curriculum to cater to the dumbest - which has externalized costs, of course.

But all this thread is great evidence toward requiring people of a certain ineptitude to only use self-driving vehicles in the future, once they're proven ~100% safe, right?

But really what you're arguing is people needing to be raised to be responsible, as the root cause-problem.


Knives don't come with self-guided-propulsion-and-steering while requiring you to be ready to stop them within seconds.


Fair point. So would you argue for attention/reaction-time testing to make sure a person is fast enough? We'd certainly be able to gather data for the situations that end badly or horrifically - with harm or injury - and determine at least whether there is a threshold everyone should need to meet. Are there currently any thresholds? For example, handicapped drivers with only one hand/arm - who, at first thought, and maybe I'm wrong, won't be able to react as quickly or strongly in every situation - though they may in general be far more cautious because of their state.


I don't think we specifically need a reaction-speed test for Level 2 driving systems, but maybe we could for driving in general.

The pernicious combination that makes L2 unsafe is that it encourages the driver to let their focus wander, and then unexpectedly demands that you have full attention. When 95% of your highway time requires almost no focus, your brain will naturally wander off and pay much less attention to the situation on the road. So when the 1-in-1000 dangerous situation occurs, you're less prepared to react to it than an unassisted driver would be.

Edit to be more clear: A reaction-speed test measures your best-case reaction time, because you know it's coming. Normal driving situations test your "normal" reaction time. Situations under L2 test your worse-than-normal reaction time.


That's not how proximate cause works in a negligence case.


Can you explain?


Sure. If it's a fact that being sober wouldn't have prevented the accident from happening, then being drunk could not have been a proximate cause.


>seems less of an autonomous driving issue and more of an incredibly irresponsible operation issue.

if the car is truly autonomous, then no, it's a failure of a life-critical system

if the car is not autonomous, then calling it "Full Self Driving" is a wilful misrepresentation of its capability. To a normal person (no, ignore the driver's employment history), full self drive means exactly that: it drives itself, safely. You and I know that's not the case. However the vast majority of people out there don't know it.

Everyone is rightly chewing Boeing out for lax safety standards, but that door didn't kill anyone. If you are going to be annoyed at Boeing, then you need to be absolutely fucked off at Tesla.


> Everyone is rightly chewing Boeing out for lax safety standards, but that door didn't kill anyone.

The door was only part of the issue. 346 people died in a pair of crashes:

https://en.wikipedia.org/wiki/Lion_Air_Flight_610

https://en.wikipedia.org/wiki/Ethiopian_Airlines_Flight_302


You are indeed correct, I should have been more specific to exclude those disasters and scoped my comment to be about the recent near miss.


> To a normal person (no ignore the driver's employment history)

But in this case, which is what the previous comment is talking about, this doesn't apply.


No, it really does.

For example kinder eggs are illegal in the USA, because even though the small(ish) plastic parts are inside an inedible plastic case, too many kids ate the small plastic parts.

Now, the vast majority of people eating kinder eggs don't eat the toy. However, because it's reasonable that an unsupervised child would eat the plastic, they were banned.

But.

The point of FSD is that it is safe. If it's safe, a drunk person who is behind the wheel shouldn't be able to cause it to crash if the FSD is engaged. FSD should fail safe.

but it can't fail safe, it is not "full self drive" and should never have been marketed as such.

However, people's safety comes second to large corporations' feelings/profits/liability.


The driver in question worked for Tesla and would be aware to not trust the marketing name of the feature. That's why it doesn't apply.


He should have known better than to trust his life to Tesla? You betcha.

Doesn't make Tesla any less culpable.


What? Why? The driver is also a target of marketing, as evidenced by the fact that they owned a Tesla. They aren't personally responsible for, or privy to, the lies of marketing. Employment at a corporation also doesn't grant you intimate knowledge of all parts of the corporate machine, in fact it might indicate that you are more susceptible to the corporate messaging.


This quote definitely indicates to me he was overly susceptible to the corporate messaging: “It was jerky, but we were like, that comes with the territory of” new technology, Bass said. “We knew the technology had to learn, and we were willing to be part of that.”


>The details make this seem less of an autonomous driving issue and more of an incredibly irresponsible operation issue.

In every single discussion with Tesla fans re: FSD, we hear that "FSD is safer NOW than human drivers", but every time there's an accident it's the driver's fault.

At what point are Tesla accountable for the mixed messaging around this product?


That's not really relevant here though as there is no claim of 100% perfection with AP/FSD, nor did I advocate for support of Tesla here.

This feature requires acknowledging multiple documents stating that it may make errors and is not to be classified as an autonomous driving system. This is present when you purchase the feature, in the tutorial videos about what the car is capable of, in a screen you must read before enabling this feature, and repeated in every set of release notes that appear on each update.

I fail to see why all blame for impaired driving evaporates because the manufacturer of the car is Tesla.

Let's put on the Product Manager hat. How would you convey to a user that they should use a feature in a responsible manner?


Those two facts are not necessarily contradictory.


I’m not saying this is where we are now,

but eventually when self driving cars are sufficiently advanced, they should be able to drive drunk people home without needing them to provide input (and probably best to forbid them from doing so all together as they’re impaired)


Drunk people are told not to drive themselves. Letting something labeled "Full Self-Driving" drive them would sound logical to a drunk brain.


I use a self-driving technology every time I drink. It's called Taxi or Uber.


[flagged]


Yeah. But also maybe don't call it "Full Self Driving"


Crazy that FSD is legal.


Why? You're supposed to be ready to take over. Marketing hype and a foolish name does not make the system inherently dangerous.


What does FSD stand for?


Someone should never drink and drive. Never!

Yet, I believe that even at 0.26 blood alcohol content he had a better chance at living if driving himself.


I’m not sure what “0.26” exactly means, but if it’s what I interpret it to be, that’s about one beer for a typical male adult.

It’s not nothing but it’s not a lot either.


I have no idea what some of the other commenters are talking about, but

  - 0.26 is around "blackout drunk" (I am not sure I have ever drank this much)
  - 0.20 is when I know that I have made a terrible mistake, definitely "wasted" as we say (though I haven't drank like this for many *many* years).
  - I begin to feel dizzy and ill at around 0.15ish... 
  - 0.1 is a drunk feeling.
  - 0.8 is the legal driving limit in most US states.
Source: I carry various breathalyzers with me whenever I drink. I probably have a higher tolerance than most though.

And here are some other sources for you:

https://www.utoledo.edu/studentaffairs/counseling/selfhelp/s...

https://en.wikipedia.org/wiki/Blood_alcohol_content

Also, one thing to note about these charts, they are pretty conservative. One drink is rarely 0.02 unless you rarely drink, and had this one drink on an empty stomach, or are a very small person. Or maybe if you take a shot and measure it 10 minutes later.


Correction: - 0.08 is the legal driving limit in most US states.


You may be thinking of 0.026. 0.08 is the legal limit in most states. 0.26 is drunk enough that you need to worry about them choking on their own vomit.


BAC is given in percent (parts per 100, %), whereas a lot of countries use permille (parts per 1000, ‰). This would equate to 2.6 permille, which is quite a lot, especially for non-drinkers.


The US typically uses "percent" when referring to blood alcohol levels while many other countries use "per mille", sometimes leading to confusion around the rules. When somebody mentions Sweden's "zero point two" limit, it's actually incredibly strict, not incredibly lenient!


Of course they do... but this is also the first time I've heard that; I didn't know.


Ah yes thanks. That’s it! So in Europe it’s 2.6, which is A LOT!!


0.26, if a BAC, would be more like 5-10 beers; one drink typically is good for 0.02-0.04


It means blood-alcohol percentage and the limit is usually 0.08%. The article correctly states that it's over three times the limit. I believe your math would mean four beers equals 1% of alcohol per 100ml of blood.


It is Blood Alcohol Content (BAC) and 0.26% is about 8-12 drinks depending on the person's tolerance and weight. Legally drunk. The legal limit to drive in most states is .08% which is typically 2-3 drinks.
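For rough intuition, a Widmark-style back-of-the-envelope estimate lines up with those ranges. The constants below are standard approximations (not medical advice), and real BAC varies with food, timing, and metabolism.

  # Rough Widmark-style BAC estimate; constants are approximations for illustration.
  def estimate_bac(standard_drinks, weight_lb, hours_since_first_drink, male=True):
      alcohol_grams = standard_drinks * 14            # ~14 g ethanol per US standard drink
      body_grams = weight_lb * 453.6
      r = 0.68 if male else 0.55                      # Widmark body-water distribution ratio
      peak = alcohol_grams / (body_grams * r) * 100   # percent BAC before elimination
      return max(peak - 0.015 * hours_since_first_drink, 0)  # ~0.015%/hour eliminated

  print(round(estimate_bac(12, 180, 3), 2))   # ~0.26 for a 180 lb male, 12 drinks over 3 hours
  print(round(estimate_bac(3, 140, 1), 2))    # ~0.08 for a 140 lb person, 3 drinks in an hour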


0.26 is pretty drunk, in my opinion. Eight shots would leave a 140 lb person close to blacking out unless they drink a lot, and very often.

https://www.healthline.com/health/alcohol/blood-alcohol-leve...


It's over 3 times the legal limit for the most forgiving definition of drunk driving in Colorado.


As someone who works in the autonomous driving space, it is absolutely mind boggling to me how much blind faith Tesla fans put into FSD. When you compare it to any true L4 system it’s not even close and very unlikely to get close any time soon.


When I was in a Tesla Uber, I found watching the front screen that shows where cars are around you frightening. Cars would frequently move around and change position unexpectedly and in a way that had nothing to do with actual car movements, lane markings would be totally off, etc. If that screen is at all representative of what FSD "thinks" is going on, I'm not at all surprised that there are lots of crashes when relying on the system.


YES. Thank you.

I rode in a Model Y last year and just could not believe the mistakes it was making. Disappearing and reappearing cars; the semi in front of us was apparently straddling the lane line for a few miles; somehow an early-2000s Dodge Ram was classified as a small sedan – the list goes on, and this was only a ~10-minute ride. I would be absolutely mortified if a product of mine ended up in front of a customer in that state.


I spent a decade working on commercial computer vision applications that, among other things, had to recognize and track cars. Those are exactly the sort of transient errors you'd expect to see in shipping products and they usually have heuristics to "smooth" over those sorts of problems.

That said, would I ever trust my life to a system like that? No.


I'm actually surprised they show it as raw as it looks. Doesn't inspire too much confidence, even though I bet the system must be reclassifying and changing way faster than it renders things on screen.


Don’t get me wrong, my background is also pretty CV-heavy, and I don’t expect perfection by any means.

But the display itself serves basically no purpose besides looking cool, and it just fails pretty badly at that. Also yeah, it made me maybe a little more nervous about being on the road with a Tesla than it should’ve…


We'd usually have something like that in our products as a developer/debug mode, not generally visible to customers.

If anything, if you've got self-driving on in a Tesla, you're not being nervous enough. :)


That’s not the perception for FSD, btw, that’s the output of a much older generation model that you’re seeing. But yeah, it’s pretty bad.


> If that screen is at all representative of what FSD "thinks" is going on, I'm not at all surprised that there are lots of crashes when relying on the system.

It has limited accuracy due to the poor processing power of the Intel Atom chips they used to put behind the infotainment screen. They didn't even have anti-aliasing on the car models because of that; they needed to make sure the rest of the infotainment experience remained snappy. (Note this is not the same chip the FSD computer uses.)

The "park assist" visuals[0] are closer to what the system actually knows about its surroundings from the cameras, but they're only enabled at low speeds for now (likely still due to processing power of the infotainment screen).

0: https://youtu.be/J5Tx3uEzz-g?si=AQRDcaukKQWrQ7-x


How on earth is it legal to sell a car with this sort of bug-riddled system attached?

That a car can disappear and reappear in the autopilot's knowledge of the world is really weird to me. I would have thought that each car would be tagged as an object that has physical attributes such as "can't teleport".


It isn't what FSD sees. FSD works with voxels, but they aren't rendering that out for the user; they're basically presenting the bare essentials to entertain people and give them a little info.

To see what FSD is actually seeing, look at the new parking system that shows the voxels. The backend is like that, but they can't afford the CPU budget to render it at high speed for visualization.


FSD is a Level 2 system and not autonomous as stated by Tesla in legal filings. So yes there is a huge difference between a real self-driving system and whatever the hell Tesla is selling.


As a robotics person outside this space, it's not amazing that the shift to L4 as a goal has not been clearly grok'd by the public.

After years of rhetoric, most people still believe that cars will be fully autonomous because they just don't understand the limitations.

It's also quite amazing to me we, the so called smart ones, ever believed level 5 was happening but those were the good old days, huh.


It's hard to impossible for me to know how difficult level 5 driving even is since we don't have any machine that can do it. Trying to achieve it has been a good example of turtles all the way down.

At the same time I feel like people are ignoring how well self-driving works RIGHT NOW, the rate of accidents is still less than with human drivers. So I'm still pretty optimistic.


> It's hard to impossible for me to know how difficult level 5 driving even is

I have a slightly different opinion. I think you can easily guesstimate how difficult something will be by listing the "new" stuff. But I think you can't easily estimate how easy something will be, because you can't list everything, so anything you miss could easily cause delays.

Said another way: We should have known it would take at least a decade, but should never have said it will take at most two decades. (as technologists).

And yeah - agree - lvl 3 is pretty good, feature-wise.


How can you know if Tesla won't publicly release stats of interventions or accident per 100k miles?


There is Cruise and Waymo data as well. Though I have not seen anyone independent who has access to that data making any claims about it being safer.


L4 is all that's needed for a taxi service, and that's where all the money is. Tesla sold cars with "FSD" to customers living anywhere, so they have to solve L5, and they have to do it with fewer sensors and less compute. Of course it's not going to work. We're all just waiting for the other shoe to drop for the inevitable lawsuits.


As an engineer, it boggles my mind how much people trust software in general.

Toyota was put under a microscope after the Prius crashes....do you think anything has changed since? Do you think the software running the cars has gotten any better?

VW cheated on emissions; they paid civil and criminal fines and a couple of engineers got charged. No one from management... Since then, it has come out that almost everyone was cheating. Cummins was most recently caught and it is now a civil fine (also being called record-breaking in the news, at 1.6B where VW was 1.5B civil but 4.3B total...)

We normalize the behavior and decrease the penalties.


I trust humans even less than software. At least software can be debugged over time and is consistent so if we get it working...


I would imagine debugging something like FSD might be difficult. Not sure what kind of data they store, but no situation might be the same.


Ever try to debug a human? Worse, humans are not copies, so making one better doesn't make it easy to upgrade the rest of us.


If the software was broken up in a sane manner and followed the "do one thing and do it well" philosophy, it would be easier to debug.

You don't debug humans, you train them.

Yes, the training is imperfect and doesn't always stick. Which is why we have yet another imperfect system of checking and certifying that the training has been done before release. And then we have yet another system of bug catchers that patrol around making sure the training is being adhered to. And then a system to revoke certification if we detect too many issues with the driver.

I would like to trust a self-driving car, however it has also been programmed by the very same humans that we don't want to trust. And I doubt that the review is nearly as robust as it should be.

Also the code is not open, so I can't review it even if I wanted to because "trade secrets", which might just translate to "we are way too embarrassed to let others see this code because of how shitty it is".

Not being easy to upgrade might be a feature in this case; right now I don't have to deal with a pushed update that makes every 2024-model Chevy SUV encounter a race condition that adversely affects lane keeping after being passed on the left side while in a turn on my morning commute.


Just sold my Model S and bought a Lyriq and am incredibly surprised at how much better Super Cruise is than FSD. It drives just like a human, incredibly smooth.

Of course, it won't work on regular streets - just freeways, but maybe that's for the best, considering how inconsistent FSD is.


It's a supervised system as deployed. How much "blind faith" are you really seeing vs. what you think you're interpreting? None of us are taking naps in the cars. The system does continue to make mistakes, though virtually always of the "too conservative" variety. But it's also the most fun I've had in a vehicle in my whole life and an absolute sale-maker for me personally. I simply won't buy a vehicle from another vendor until it can do this.

So... I guess I repeat the question: where are you seeing "Tesla fans" using FSD unsupervised?

I mean the linked article is almost unique in that it seems to have happened. And why? Well, it's buried but it turns out the driver was on autopilot because he was absolutely hammered:

> Von Ohain and Rossiter had been drinking, and an autopsy found that von Ohain died with a blood alcohol level of 0.26 — more than three times the legal limit


> So... I guess I repeat the question: where are you seeing "Tesla fans" using FSD unsupervised?

One of the biggest Tesla influencers using FSD without his hands on the wheel: https://www.youtube.com/watch?v=fFW4e0Pkz_Y

This is explicitly disallowed by FSD terms and conditions. He does this all the time.


That seems like a bad faith argument. Do you genuinely feel that driving with your hands in your lap constitutes dangerous operation? The car asks for your hands on the wheel because that's the primary sensor it uses to monitor attention.

In particular, do you genuinely feel that driving an FSD car with your hands in your lap constitutes the kind of inattention we were actually talking about (driving with a .26 BAC!).

I mean, you don't freak out if you see someone using cruise control without their foot hovering over the brake, do you? Objectively and largely inarguably, braking control latency is more important to safety than steering latency.


Yes, I genuinely feel driving with my hands in my lap is dangerous. I own a Tesla, use Autopilot (not FSD, but this applies to that as well) and always have my hands on the wheel because I know, at highway speeds, I may not be able to react quickly that one time. This is the entire reason Tesla has the hands-on-the-wheel requirement.

And the reason for having that requirement, which is crux of the problem, is that the system is incomplete and incapable of safe operation.


> So... I guess I repeat the question: where are you seeing "Tesla fans" using FSD unsupervised?

Couple of days ago I passed a Tesla going 75mph down the highway and the driver was rummaging around in the back seat. And I don't mean just the lean over and reach for something on the seat, no he was unbuckled and halfway in the back row of the car looking for something.

It was only a few moments, and I am sure that some system or other in the Tesla eventually noticed and warned the driver to take over, but at those speeds, it only takes a few seconds of inattention to end up in disaster. And personally, I've never, ever seen drivers of cars without some sort of self-driving capability do that.


https://abc7chicago.com/tesla-asleep-at-wheel-sleeping-while...

Some of "you" actually are literally napping in the car.


So there's an interesting legal question here.

If someone's drunk (i.e diminished reasoning) and steps into the car expecting that a feature named "full self driving" will safely drive them home, how is responsibility divided between the person who got drunk and the entity who created and named the "full self driving" feature?

I don't think even a reasonable person could consistently conclude that "full self driving" isn't actually full self driving.


It depends on what the "full self driving" feature is legally. How it is described, what the warning messages on the screen tell the driver, what the manual says.

If the person was a Tesla employee, I think it's safe to assume that they knew it is only an assistance system and that the driver still needs to monitor it at all times and be able to take over at any time.

Hints in this direction pop up on the screen all the time. This could become murkier if a driver who is illiterate, or doesn't speak the language the car is set up in, gets into this situation and genuinely misses all of that.


Anchoring and peer pressure are strong psychological biases. Public opinion and the opinions of a not-so-smart driver are very relevant when it comes to design for safety.


If it's truly self driving then the passenger shouldn't be responsible at all. Would riders be responsible for taxi drivers?


Interesting question.

If you get into a cab, and the driver is clearly drunk or exhibits violent behavior on the road, surely you have some responsibility to terminate your ride.

So if you get into a Tesla with its wink-wink-nudge-nudge-named FSD, which warns you to be ready to take over, and you then note that the person being asked to take over is intoxicated... it seems like the responsibility is clearly on the operator.

But your standard - "it's truly self driving" - is a higher one. And I really don't know how to think about responsibility in that case, and it feels like a big problem. On one hand, it seems that, in the coming decades, we'll likely have actual full self-driving that operates in some situations in manners that are more safe than human drivers (especially people who are intoxicated, who also deserve a safe way home even in car-first hellscapes of urban planning).

On the other hand, we've seen how most of the USA has been terraformed to accommodate the product being hawked by the auto industry, and that this product has long had an undesirable role in population-level mortality rates (even without FSD!).

So yeah, I don't know.

It seems like micro-mobility solutions, and getting highway vehicles out of cities, give us a better outlook altogether anyway. Particularly if there is robust medium- and long-haul public transit which can convey micro-mobility solutions to faraway lands.


A reasonable person shouldn't be operating a 2-ton death device without an understanding of its capabilities.

Responsibility would fall to the operator unless the self driving assistance performed out of spec.


If they're drunk, can they reasonably assess the quality of the code? Can they do a legit code audit while drunk? Seriously, can they even decide anything?


Well, keep in mind you can also override speed with AP enabled; for all we know the drunk guy was in AP with the gas floored to override it.


One thing I hate about Musk saying things like "They have fewer accidents than people" is that it can only be engaged in the easiest of scenarios. So per mile, self driving on the easiest roads barely has fewer accidents than a human in adverse conditions?

That doesn't seem to actually be better than people yet.


https://digitalassets.tesla.com/tesla-contents/image/upload/... (from [0]).

This is all miles traveled, but clever data analysis uses aggregate data as they've done here. It's not "lying" in that the vast majority of miles are traveled on the highway instead of on city streets or even suburban access roads.

Also note:

> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed. (Our crash statistics are not based on sample data sets or estimates.)

Of course, Tesla's official guidance in-car via warning messages is that Autopilot is not safe for city streets, only highways, so technically it shouldn't be used on pedestrian-adorned streets anyways.

This also doesn't take into account FSD Beta.

0: https://www.tesla.com/VehicleSafetyReport


It's important for metrics to be measurable. Throwing in a bunch of subjective criteria like how tricky a road is will only make the analysis less meaningful.

Yes, it's important to understand the limitations of the metrics, but the existence of limitations doesn't mean that the metrics should be thrown out entirely.


Measuring metrics that are easy to measure doesn't help if the metric doesn't mean anything.

Comparing collision rates between human drivers in all conditions and Tesla automation in conditions where it allows activation and is activated is simply not a usable comparison.

Certainly, there's not an easy way to measure the collision rate for all humans all cars in only the conditions where Tesla automation can be activated, and that makes it hard to find a metric we can use, but that doesn't mean we have to accept an unusable metric that's easy to measure.


No, that is what any competent researcher would do. Here is an example from Waymo:

https://storage.googleapis.com/waymo-uploads/files/documents...

But if you are like Tesla and can not even be bothered to put in a methodology section... or any analysis... or any data... really anything other than a soundbite conclusion then maybe you should not get a free pass on driving vehicles when you only demonstrate a level of safety analysis unfit for even a grade school science fair.


Not defending Tesla or anything, but don’t we want to take the W where we can?

If a self-driving feature is safer than humans in nice conditions, don’t we want it enabled under those conditions? What’s the issue?

Of course, the risk is that it’s misapplied in worse conditions where it fares worse.


The issue is that it isn't safer, seemingly under any conditions. Even if it had a lower accident rate per mile, the stat for humans includes adverse conditions while the stat for Tesla only includes ideal conditions. Presumably humans are safer than their average on highways...

Also the original quote is flat out wrong as they are deemed 10x more likely to crash according to this: https://prospect.org/infrastructure/transportation/2023-06-1...


It might be, or not, but no one knows, because the data is secret.


Only engaged on the easiest of roads? I haven't been on a road my Tesla wouldn't enable on, lol. WTF are you talking about? Shit, mine works in pouring rain; it complains but drives fine.


> is that it is only able to be engaged in the easiest of scenarios.

what? it works in basically all conditions and situations though not flawlessly of course


The article totally ignores the fact that there have been 900 crashes reported since 2021, which is a ridiculously low number of crashes.

This is basically a Tesla hit piece. Teslas are safer than human drivers by far, even when the systems are abused by their drivers.

Using 2020 stats (Forbes) there were an estimated 5,250,837 car crashes in 2020. Of those, < 1% have fatalities (35,766 accidents with fatalities).

https://www.forbes.com/advisor/legal/car-accident-statistics...


I do not understand what you are even trying to imply. Why even bring up that <1% of crashes have fatalities?

Are you trying to claim that these Tesla systems were only responsible for ~9 fatalities? You are aware that the very same dataset you are referencing has at least 23 confirmed fatalities. Are you trying to claim that there were actually 2300 crashes and Tesla only reported 900 of them? Or are you trying to say that Tesla crashes are 2.5x more deadly on average?

Are you trying to highlight how Tesla chooses not to investigate ~95% of crashes to avoid confirming if a fatality or serious injury occurred so the report numbers have insufficient data to draw any positive conclusions? Or are you trying to highlight how Tesla telemetry has missed ~60% of confirmed fatalities so is completely untrustworthy to estimate ground truth crash or fatality rates?

Or are you trying to highlight how this is not the first FSD incident, since Tesla confirms there were already ~30 as of a year ago at the beginning of 2023? They just force NHTSA to redact which ones, claiming it is confidential information that might hurt their business.


There's ridiculous bias in the writing and presentation along with rage-bait, of course it's a hit piece. Corporate journalism is a joke.


Correct take. Go and look up their fire statistics, and you'll get a similar conclusion. It's getting much better YoY.

https://insideevs.com/news/501729/number-tesla-vehicle-fires...

https://www.businessinsider.com/tesla-deaths?op=1


Okay, now do the statistics for the number of people who burned to death. Actually, no need, I have them right here.

The NFPA (the same organization Tesla is quoting for vehicle fire counts) estimates 560 fire deaths per year in the US over 290 million registered vehicle years [2] or ~1.93 deaths per million vehicle-years.

Third-party reports show at least 20 people burned to death in their Teslas in the US in 2022 [1]. There were ~3.65 million Teslas worldwide in 2022. If we divide US deaths by worldwide Teslas you get ~5.48 people burning to death per million vehicle-years, or ~2.8x deadlier even if every worldwide Tesla were in the US. By 2022 the US probably accounted for ~50% of vehicles, so you are probably ~5x more likely to burn to death in a Tesla than in the average car (which, by the way, is ~10 years old, half the price, with fewer safety features).

That is not 500% more likely to burn to death given that you get into a fire, that is 500% more likely to burn to death total. If you get into a fire in a Tesla then you are ~5,000% more likely to die in it.
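Spelling out the arithmetic (the inputs are just the figures cited above, not new data):

  # Sanity check of the rates quoted above; inputs are the cited figures.
  nfpa_fire_deaths_per_year = 560        # NFPA estimate, US, all vehicles
  registered_vehicles_millions = 290     # US registered vehicles (millions)
  baseline = nfpa_fire_deaths_per_year / registered_vehicles_millions
  # ~1.93 fire deaths per million vehicle-years

  tesla_us_fire_deaths_2022 = 20         # third-party tally cited above
  teslas_worldwide_millions = 3.65
  tesla_rate = tesla_us_fire_deaths_2022 / teslas_worldwide_millions
  # ~5.48 per million vehicle-years, even with every worldwide Tesla in the denominator

  print(round(baseline, 2), round(tesla_rate, 2), round(tesla_rate / baseline, 1))
  # -> 1.93 5.48 2.8; halving the denominator for a US-only fleet roughly doubles the multiple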

Quite odd how Tesla read the same reports yet only published analysis on the non-safety critical cases that make themselves look good instead of the safety critical cases that make them look dramatically more dangerous than the Pinto. Must be because their data analysis team consists of people with the analytical capability of grade schoolers since any competent individual could easily do this analysis. I mean, the only other possibility is that the analysts and their management are deliberately perpetrating criminal fraud by sacrificing human lives to line their own pockets. One or the other, pick one.

[1] https://www.tesla-fire.com/

[2] https://www.nfpa.org/education-and-research/research/nfpa-re...


>Third party reports show at least 20 people burned to death in their Tesla's in the US in 2022 [1]

I started fact-checking [1], and stopped when there was only 1 death in 2022: a cat. Many of "your" fires are indirectly related to Tesla (e.g. a bad EVSE wiring job), so please link me to anything I may have missed and I'll continue.


I have no idea what you are even talking about. The page has clear links to news stories corroborating the deaths. Here are a few in 2022 with minimal paywalls:

[1] https://web.archive.org/web/20221111234459/https://www.cbsne...

“A Tesla is believed to have burst into flames upon impact.

One person, a passenger inside the Tesla, was pronounced dead at the scene by first responders.”

[2] https://www.11alive.com/article/news/local/atlanta-crash-ral...

“ One person is dead after a fiery crash on Tuesday morning in southwest Atlanta where Atlanta Police officers were unable to rescue the driver from a burning Tesla.”

[3] https://www.denver7.com/news/investigations/fatal-tesla-cras...

Look familiar? “The crash happened in the late evening of May 16 on Upper Bear Creek Road when the car went off the road and slammed into a tree. The car then caught fire.

A passenger made it out of the vehicle, but the driver, 33-year-old Hans von Ohain, died at the scene.”

Yep, literally the very crash this thread is about was a fire death.

You clearly either did not even look at the link provided or failed to understand how the data was presented.

And just in case you did not read the NFPA report I linked and are about to say something like, “But, but, those were crashes, they don’t count”: the NFPA report counts all fires where a death occurred directly or indirectly, especially collisions, which were responsible for almost 2/3 of vehicle fire deaths.

What the data actually shows is that non-collision fires, which are almost never deadly, are uncommon on Teslas. However, collision fires, which are very deadly, are disproportionately dangerous and common on Teslas to the extent that their total fire death rate is multiple times higher than regular cars.

As it turns out, reducing your near miss rate in return for increasing catastrophic failure rate is not acceptable.


1 does NOT state the person "burned to death"

2 does NOT state the person "burned to death"

3 does NOT state the person "burned to death"


So your complaint is what, terminology?

Tesla deliberately deceives the public by publishing bogus, misleading analysis about their fire rates that implies Teslas are reduced fire hazards, when Teslas actually result in 3-5x as many fatalities in fires per capita, and you complain about my terminology?

Fine, fire deaths. In 2022, 3-5x as many people died in Tesla fires as would be expected amongst average cars, amounting to ~3.5-8 extra deaths per million vehicles and ~14-16 additional deaths in the US above expectation. In contrast, the Ford Pinto had 27 additional deaths over 7 years and ~1.5 million vehicles, resulting in ~4 excess deaths per year and ~2.5 extra deaths per million vehicle-years.

So now it is your turn to explain why it is okay for Tesla to result in 16 extra people dying and imply that a vehicle 1.5-3x more prone to deadly fires than a Pinto is not a fire risk.

Note that quoting the culprit, Tesla, or any of the Tesla hustlers is not a very good argument since they have a direct financial incentive to lie, so please do try to refrain from regurgitating the Tesla marketing and present actual analysis from competent sources.


> Teslas are safer than human drivers by far

Can you back up that claim? What numbers are you using to assert that Teslas are safer, and why do you believe the comparison valid?


> It was jerky, but we were like, that comes with the territory of” new technology, Bass said. “We knew the technology had to learn, and we were willing to be part of that.”

-> we were willing to make anyone on the road part of that without asking them


How can this be the first death?

Tesla is involved in 700+ court cases covering multiple deaths [1]. Tesla drivers are involved in more accidents for some reason [2]. I suspect the first death happened far before this.

[1] https://www.washingtonpost.com/technology/2023/06/10/tesla-a...

[2] https://www.lendingtree.com/insurance/brand-incidents-study/


It's pretty shitty journalism in general. The claim that it was on FSD mode was made by a drunken, shell-shocked passenger. And the zero-injuries-and-deaths claim was based on:

> Two years ago, a Tesla shareholder tweeted that there “has not been one accident or injury” involving Full Self-Driving, to which Musk responded: “Correct.”

And wapo clearly did zero research or effort beyond this statement.

The key here is likely "FSD" versus "autopilot" and playing loose with the definition of when it's engaged. Does FSD disengage and tell the driver "good luck" immediately before 99% of accidents? If so, it's not technically an FSD accident/injury/death, maybe?

Elon is being misleading, WaPo is pushing shitty bias, and the journalist is being deliberately lazy for a better clickbait headline


> a Tesla shareholder tweeted that

I think they simply wanted the most clickbaity title. They are contradicting themselves in the other article.



There is no difference between advertising "Full Self Driving" and advertising "Guaranteed Investment Returns", but only the latter is illegal. Your move, Lina.


All my investments have guaranteed returns between -100% and +100000000000%. (I don't sell puts)


My robotaxi is basically a money printing machine. https://www.youtube.com/watch?v=QoFxTTC-tm0&t=546s


I have FSD in my Teslas (an "S" from 2018 and a 2023 "Plaid"). I don't have any confidence in it. After the initial fun of playing with it wore off, I've never used it.

I do like the smart cruise control where it will maintain speed based on the speed of the car in front of me, and the features to nudge me back in lane if I drift, etc, but not when the car drives for me.

I'm wondering if there are cut corners in the budget "3" that make it even worse...

That being said, the driver was drunk, and I have little sympathy for drunk drivers. I'm grateful he didn't kill anyone else on the road.

From the article:

> Von Ohain and Rossiter had been drinking, and an autopsy found that von Ohain died with a blood alcohol level of 0.26 — more than three times the legal limit — a level of intoxication that would have hampered his ability to maintain control of the car, experts said. Still, an investigation by the Colorado State Patrol went beyond drunken driving, seeking to understand what role the Tesla software may have played in the crash.

Obviously, any legal action the family wants to take against Tesla will be an uphill battle because juries aren't sympathetic to drunk drivers, either.


The issue is that if you have FSD (beta) enabled and you attempt to yank the wheel, TACC will still be engaged, which makes you drive straight. If they had lane assist on as well, it would essentially swing you back and forth. It's dangerous if you aren't aware of it, since it suddenly goes from FSD to "why does my car have a mind of its own". If you ever see a Tesla on the highway frantically swinging back and forth, the driver is probably trying to disable the car and fighting with the other safety features. Hitting the brakes will ensure FSD/TACC gets disabled properly.

Note: Most of the time people are referring to FSD as FSD beta. Most videos you see online are FSD beta. FSD is still very dumb, but includes stop light awareness. FSD beta is what allows you to "self drive" in the city.


I have the opinion that if a self driving car is not able to prevent driving into a tree or any other static object, it should not be on the road. It doesn't matter to me if the driver was drunk. Maybe he was drunk because he depended on autopilot to get home. Maybe he would have not had an accident if he drove home himself. We don't know. All I know is drunk driving is dumb and autopilot sucks ass.


Human drivers drive into trees and static objects, and I object to the idea that we need self-driving cars to have superhuman driving abilities for them to be worth using.


There are a lot of people who shouldn’t be allowed to drive cars, too.


Not driving into trees is a superhuman driving ability? That means I have superhuman driving ability!


The allegation that it was on FSD mode was made by a drunk and shell-shocked passenger, and his statement didn't clearly establish that it was on at the time of the accident. FSD has problems for sure, but don't draw conclusions from this shitty journalism.


Tesla's legal and PR teams are, and will be, the real innovators long before 'full self driving' arrives.


I just hope future cars don't take control out of my hands. I don't particularly enjoy driving, but it's one thing I want to keep some control over. The current division between automation and manual is perfect for me.

Neither do I want it to record too many details. The state might use that to sue you and the insurance company might use that to "improve" their profit margin.


This is a sensationalized hit piece against Tesla Autopilot. It is definitely unfortunate that the drunk employee operating the car was involved in a crash that, at least from the WaPo reporting, sounds avoidable—Autopilot active or not. Any death is unfortunate, and an opportunity to improve vehicle safety.

Tesla's Q4 2023 stats should speak for themselves:

> In the 4th quarter, we recorded one crash for every 5.39 million miles driven in which drivers were using Autopilot technology. For drivers who were not using Autopilot technology, we recorded one crash for every 1.00 million miles driven. By comparison, the most recent data available from NHTSA and FHWA (from 2022) shows that in the United States there was an automobile crash approximately every 670,000 miles.

https://www.tesla.com/VehicleSafetyReport

The stats are incredible, and—based on the stats—Autopilot is safer than human-only driving. Don't be taken in by sensationalism.

I personally feel safer driving while monitoring FSD, especially in unfamiliar driving situations, like new cities where I'm not 100% certain about the streets I'm navigating. If you speak to reasonable Tesla drivers who have experience using FSD, they'll undoubtedly acknowledge the hiccups with Autopilot, but the data shows it's improving over time.


And this is a misinformed comment from someone who blindly parrots Tesla's numbers even though we know, for a fact, that Tesla doesn't follow industry reporting practices by redacting, obscuring, hiding information.

Their numbers don't even normalize or otherwise account for different roads, road conditions, environmental conditions, cars involved and their safety systems, or drivers behind the wheel.

If they were interested in making good faith comparisons, the "US average" would read "US average of comparable vehicles under equivalent weather & road conditions" and "Tesla vehicles not using autopilot" would read "Tesla vehicles not using autopilot under equivalent weather & road conditions".

You're essentially telling us that drivers driving Teslas with active autopilot (i.e. limited to great weather conditions on high quality roads) have fewer accidents than those without active autopilot (e.g. driving in poor weather conditions, or on bad roads). That's not much of an insight.


> Tesla's Q4 2023 stats should speak for themselves:

When stats speak for themselves, they’re quite likely misleading.

> > In the 4th quarter, we recorded one crash for every 5.39 million miles driven in which drivers were using Autopilot technology. For drivers who were not using Autopilot technology, we recorded one crash for every 1.00 million miles driven. By comparison, the most recent data available from NHTSA and FHWA (from 2022) shows that in the United States there was an automobile crash approximately every 670,000 miles.

Let’s start with the obvious: what about crashes that occurred within, say, 10 seconds of autopilot turning off?

And the slightly more subtle: the distribution of roads and conditions driven on autopilot is surely very different from the distribution driven manually. One could try to control for this by identifying segments of road driven on autopilot in, say, 10-minute intervals, then normalizing the analysis so that the autopilot and manual groups weight each (road segment, time segment) cell equally in each group (sketched below).

As a concrete example of the latter issue, consider I-80 through Donner Pass. In good conditions, it’s a very nice freeway. In snow, it’s not so nice. I would expect the general accident rate to by far more than 5.39x higher in snow. But, in snow, autopilot will get rather less use. (And I bet autopilot is very likely to disengage, by itself or by driver override, if the car loses traction.)
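A minimal sketch of that normalization, assuming one had per-trip records with road segment, time bucket, mode, miles, and crash counts (none of which are publicly available; every field name here is hypothetical):

  # Sketch of a stratified comparison; field names are hypothetical, since the
  # required per-segment data is not public.
  from collections import defaultdict

  def stratified_crash_rates(records):
      # Aggregate miles and crashes per (segment, time bucket, mode) cell.
      cells = defaultdict(lambda: {"miles": 0.0, "crashes": 0})
      for r in records:
          cell = cells[(r["segment"], r["time_bucket"], r["mode"])]
          cell["miles"] += r["miles"]
          cell["crashes"] += r["crashes"]

      # Only compare strata where both modes logged miles, and weight each
      # stratum equally instead of letting easy highway miles dominate.
      strata = {(seg, t) for seg, t, _ in cells}
      rates = {"autopilot": [], "manual": []}
      for seg, t in strata:
          ap = cells.get((seg, t, "autopilot"))
          man = cells.get((seg, t, "manual"))
          if not ap or not man or ap["miles"] == 0 or man["miles"] == 0:
              continue
          rates["autopilot"].append(ap["crashes"] / ap["miles"])
          rates["manual"].append(man["crashes"] / man["miles"])

      return {mode: sum(v) / len(v) for mode, v in rates.items() if v}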


Tesla continues to misleadingly use this comparison, and somehow people continue to fall for it.

Autopilot will refuse to engage in a wide variety of situations. It will only engage in the easiest, safest scenarios. You cannot compare those miles against all miles driven by human drivers, which include narrow winding mountain passes, two inches of snowy slush, etc.

Even for the 1M non-autopilot comparison: you're comparing against a car population that includes 20-year-old beaters with far fewer safety features.


> Autopilot will refuse to engage in a wide variety of situations. It will only engage in the easiest, safest scenarios.

Do you have any proof of this? I can enable it on a dirt road if there are visible enough edges of the road.

But yes, obviously, they're being a bit misleading by aggregating all miles together. In aggregate, it is overall safer on the highway, and I don't think you can argue in good faith that the safety report is wrong about that.


Will you accept Tesla's word for it?

https://www.tesla.com/ownersmanual/model3/en_us/GUID-101D1BF...

> In addition, these features may not work as intended when: The road has sharp curves or significant changes in elevation.; Road signs and signals are unclear, ambiguous, or poorly maintained.; Visibility is poor (due to heavy rain, snow, hail, etc. or poorly lit roadways at night); You are driving in a tunnel or next to a highway divider that interferes with the view of the camera(s); Bright light (such as from oncoming headlights or direct sunlight) interferes with the view of the camera(s).

https://www.tesla.com/ownersmanual/model3/en_us/GUID-E5FF5E8...

> Full Self-Driving (Beta) and its associated functions may not operate as intended and there are numerous situations in which driver intervention may be needed. Examples include (but are not limited to): Interactions with pedestrians, bicyclists, and other road users.; Unprotected turns with high-speed cross traffic.; Multi-lane turns.; Simultaneous lane changes.; Narrow roads with oncoming cars or double-parked vehicles.; Rare objects such as trailers, ramps, cargo, open doors, etc. protruding from vehicles.; Merges onto high-traffic, high-speed roads.; Debris in the road.; Construction zones.; High curvature roads, particularly at fast driving speeds.


This is basically my point at the bottom of my original post. It isn't supposed to be used in those scenarios. That doesn't change the reality that my car's Autopilot can and does work on dirt roads, when the sun is directly in the middle of its FOV, on the curvy suburban roads of Georgia, in heavy rain, etc.


Sure, but that's not the argument being made here. The data that Tesla uses is obviously massaged to make their product have the illusion of superior safety. To what extent is unclear, but it is clear that this data has an agenda. It doesn't compare like with like and can't be piecemeal interpreted to make an unbiased observation. To bring it up at all is just marketing.


Normalizing the data to make a direct comparison is a hard problem. Do you know of any research done to try to do that?


It's a lot easier to tease out the impact of individual safety features.

For example, there's quite a bit of evidence that lane departure warnings/assistance helps: https://www.consumerreports.org/car-safety/lane-departure-wa...

Less indication backup cameras do anything: https://www.latimes.com/business/la-fi-hy-back-up-cameras-20...

Tesla's non-autopilot numbers likely reflect the fact that they're exclusively newer vehicles with safety features that aren't yet widely standard. You'd probably see similarly nice numbers for other $50k+ luxury vehicles with similar bells-and-whistles.


> Tesla's non-autopilot numbers likely reflect the fact that they're exclusively newer vehicles with safety features that aren't yet widely standard. You'd probably see similarly nice numbers for other $50k+ luxury vehicles with similar bells-and-whistles.

This is probably not possible since those automakers don't do constant data collection in the same way Tesla does. Tesla's telemetry logs actual data for the distance their cars are driving and whether AP is enabled. All we have for other brands is NHTSA's approximations.


There's a causation fallacy at work here. To infer that autopilot is safer from these stats you would have to control for far too many confounding factors. For example, the majority of crashes occur in rural and urban settings. It is likely that autopilot is in use more often in situations in which crashes are less likely (such as simple, long-distance highway driving).


Crashes generally have multiple factors. For the moment, let's set aside the multiple driver errors (not just drunkenness) and focus on issues Tesla has control of which contributed to the death:

* FSD was wildly insufficient for this road and required constant driver input

* Despite being insufficient for the road, FSD could be enabled on the road

* Despite needing constant correction, FSD stayed enabled

* FSD can be enabled at speeds and conditions where the driver would need to correct with very little notice

All of these are completely self-inflicted issues caused by Tesla management arrogantly insisting that FSD attempt far more than it is capable of. Compare this to Ford and GM, which carefully geofence their systems to highways where they are absolutely certain their driving aids will work.


Drivers of late model, $50-100k vehicles slightly safer than fleet averages. More misleading news after this word from our sponsors...


Elon can also speak for himself: https://www.youtube.com/watch?v=5cFTlx68itk&t=93s


Stop falling for Tesla's cherrypicked statistics.


It's almost like the term Full Self Driving is a complete lie.


FSD is such a strange hill for Tesla to die on, reputation-wise.

They already revolutionised the world with the electric car, killing a hundred-year reliance on gasoline.

So why make such bold (and outright wrong) claims about a feature that is at best secondary?

Makes no sense to me.


I won't use self driving until legal liability is on the manufacturer. And self driving to me means I can tell the car to come pick me up at the airport. I'm not optimistic that will happen in my lifetime.


I don't think it will ever happen, and I don't even think it should. The first step in making it possible would probably be standardizing all road infrastructure everywhere. That will never happen, so instead I think we should abandon our car-centric society and start building trains.

Something on a track is way easier to automate.


I'm curious whether, in the US, Tesla can be sued if their cars hit drivers who aren't using a Tesla. Legally speaking, it seems the hit driver has grounds to sue either Tesla or the driver.


I read a lot of these Tesla comments, and I don’t know if it’s just people hating Musk, but I scratch my head remembering a Darwin Award (maybe honorary) from the 2000s: a caravan driver crashed because they thought cruise control meant the vehicle would steer itself (FSD in current terminology), and so they were having a shower in the back or something.

Everywhere I go there’s some claim I’m not meant to take literally, yet only with Tesla do I hear talk of ‘well, what if someone believed that at face value’. Nowhere, as far as I can tell, can I take marketing at face value and have things work out fine. Nowhere, at no time, do I get to say ‘I took a personal risk based on a title in your marketing material and ignored all the pop-ups saying otherwise’ and have that excuse hold up.

For example, a real estate agent can sell you a house with the property boundaries incorrectly marked, pulled from official records, and, well, sucks to be you. What do you mean you didn’t look over every bit of small print and made even the slightest assumption of fairness?

I guess it just comes down to whether or not you’re in with lawyers or have an axe to grind.

At this point I feel as though Musk and Trump should be appointed CEO of everything, and then everything will be audited with a fine-tooth comb. Hold everything to the standard we hold people we don’t like to.


Oh God … So it's fully engulfed and there's one inside?

The fire will not be out for several hours. You know how Teslas are.


Whether it is "half-baked" or not: how many driving hours passed before the first fatality?

How does that compare to human driving hours per fatality?


>Von Ohain’s widow, Nora Bass, said she has been unable to find a lawyer willing to take his case to court because he was legally intoxicated. Nonetheless, she said, Tesla should take at least some responsibility for her husband’s death.

Good on Tesla for ignoring the situation. This is how FUD sausage is made.


> In 2022, Tesla recalled more than 50,000 vehicles amid concerns that Full Self-Driving caused the car to roll through stop signs without coming to a full halt.

Not exactly relevant, but I remember this particular instance: rolling slowly through stop signs was an intentional feature meant to mimic human behaviour (and avoid surprising other drivers), but it was deemed illegal because it goes against a strict interpretation of the law. Another tiny example of why self-driving is so hard.


>was an intentional feature to mimic human behaviour

Which is nonsense. Most of the world is not shitty California, with drivers who think they’re the most important people on the planet and can’t even stop for a second at a literal stop sign. Rolling through stop signs is against the law, is dangerous, and should never have been on any feature roadmap anywhere.


Haha says someone who has never driven in San Francisco's Sunset district. Stop signs at every intersection... for all 40+ avenues. 90% of the time there are no other cars even remotely close to the intersection. Not even SF traffic cops come to a full stop at each one.

The situation is not ideal. I wish we had a sign that meant "full stop if another vehicle is approaching the intersection, otherwise rolling stop is fine". Yield doesn't quite cover it.


Not nonsense at all in my experience. I come from Brazil, and outside city centers sometimes you absolutely risk being rear-ended if you decide to come to a full stop at a stop sign.

You’re expected to slowly roll past it, and only stop if there is oncoming traffic.


Decades ago, I failed my initial driving test for doing a California roll through a stop sign.

It's done so commonly that I just tested like I'd practiced. (It's entirely correct for me to have failed, of course.)



