The counterpoint goes something like this (not that I necessarily buy it, but this is what I infer to be Tesla's reasoning):
1) We're only going to fully "solve" self-driving with ML techniques to train deployed NNs; it can't be done purely in human-written deterministic code because the task is too complex.
2) Those NNs are only going to come up to the necessary levels of quality with a ton of very-real-world test miles. Various forms of "artificial" in-house testing and simulation can help in some ways, but without the real-world data you won't get anywhere.
3) Deploying cars into the real world (to gather the above) without some kind of safety driver doesn't seem like a great path either. There's no backup driver to take over and intervene / unstick the car, and so far driverless taxi fleet efforts have been fairly narrowly geofenced for safety, which decreases the scope of scenarios they even get data on vs the whole real-world driving experience.
4) Therefore, the best plan to acquire the data to train the networks is to use a ton of customers as safety drivers and let them test it widely on the real routes they drive. This is tricky and dangerous, but if it's not too dangerous and the outcome saves many lives over the coming years, it was worth it.
I feel like you could enable FSD for every Tesla in a "backseat driver" mode and have it mirror what the driver does (so it doesn't have control, but you're running it to see what it would do without acting on it), and you watch for any significant divergences. Any time FSD wanted to do one thing but the driver did another could be treated as a real disengagement.
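Something like this toy sketch is what I'm picturing (the names and thresholds below are mine, invented purely for illustration - not anything from Tesla's actual stack):

    from dataclasses import dataclass

    @dataclass
    class Action:
        steering_deg: float  # commanded steering angle
        accel: float         # commanded acceleration in m/s^2 (negative = braking)

    def divergence(fsd: Action, driver: Action) -> float:
        # Crude scalar measure of how differently FSD and the driver acted.
        return (abs(fsd.steering_deg - driver.steering_deg) / 45.0
                + abs(fsd.accel - driver.accel) / 3.0)

    def shadow_step(fsd_proposed: Action, driver_actual: Action, log, threshold=0.5):
        # FSD never has control here; we only record what it *would* have done.
        score = divergence(fsd_proposed, driver_actual)
        if score > threshold:
            # Treat this as a "virtual disengagement" worth uploading for review.
            log.append({"fsd": fsd_proposed, "driver": driver_actual, "score": score})

    # Example: FSD wanted to brake hard while the driver kept coasting -> logged.
    log = []
    shadow_step(Action(0.0, -3.0), Action(0.5, 0.0), log)
    print(len(log))  # 1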
They had been doing that, and called it "shadow mode" [1]. I suspect it's no longer being done; perhaps they reached the limit of what they could learn from that sort of training.
When it's in 'real mode', any disengagement or intervention (i.e. using the accelerator pedal without disengaging) is logged by the car and sent to Tesla for some data analysis, and this has been a thing for a while. Of course, we don't know how thoroughly that data analysis actually feeds into FSD decision making, or which interventions they actually investigate.
I don't think that would work, due to "bad" drivers. We all drive differently than we know we should in certain circumstances (e.g. the road is completely empty in the middle of rural New Mexico).
For example, you can imagine FSD would decide to go straight down a straight lane with no obstacles - that would be the correct behavior. Now imagine in real life the driver takes their hand off the wheel to adjust the radio or AC, and as a result the car drifts over and lightly crosses the lane marker - this doesn't really matter, because it's broad daylight and the driver can see there's nothing but sand and rocks for 2 miles all around them. What does the machine conclude?
I forget who it was (maybe George Hotz) that said something to the effect of "All bad drivers are bad in different ways, but all good drivers are good in the same way".
The point being made was basically that in the aggregate you can more or less generalize to something like "the tall part of the bell curve is good driving and everything on the tails should be ignored".
Since learning happens in aggregate (individual cars don't learn – they simply feed data back to the mothership), your example of a single car drifting across the lane marker while the driver adjusts the radio would fall into the "bad in different ways" bucket and be ignored.
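In crude terms, something like this made-up filtering sketch (purely illustrative - not how Tesla's pipeline actually works):

    import statistics

    def consensus_samples(lane_offsets, k=2.0):
        # Keep only samples near the fleet-wide median behavior;
        # rare outliers ("bad in different ways") get dropped.
        med = statistics.median(lane_offsets)
        mad = statistics.median(abs(x - med) for x in lane_offsets) or 1e-6
        return [x for x in lane_offsets if abs(x - med) <= k * mad]

    # 999 drivers holding the lane center, one drifting over the marker.
    fleet = [0.0] * 999 + [1.8]  # lane offset in meters
    print(len(consensus_samples(fleet)))  # 999 -- the single 1.8 m drift is ignored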
"All bad drivers are bad in different ways, but all good drivers are good in the same way".
I accept that as a plausible hypothesis to work off of and see how far it goes, but I would not bank on it as truth.
I'll give another example: I think a significant portion of the time, people roll through stop signs (we'll say 25% of the time? intuitive guess). I do it myself quite often. This is because not all intersections are built the same - some intersections have no obstacles anywhere near them and you can tell that, duh, there are no cars coming up at the same time as you. Other intersections are quite occluded by trees and whatnot.
I'm fine with humans using judgement on these, but I would not trust whatever the statistical machine ends up generalizing to. I do not think rolling through a stop sign makes you a 'bad' driver (depending on the intersection). Still, if I knew I was teaching a machine how to drive, I would not want it rolling through stop signs.
That sounds like a complicated way to say there are more ways to screw up than do it perfectly, which, duh.
Not to discount this at all, but... yea
Even if the brains of it become perfect, I doubt the vision-only approach is enough (or has that changed?).
The system needs at least a decently consistent 'signal' from the cameras to act appropriately... and there are some mornings I just don't drive because visibility is so poor.
The theory would be that this washes out in the noise. It's a simplification, but on average, most of the people most of the time are not doing that - why would it zone in on the rare case and apply that behavior?
Well, zoning in on the rare cases is the difference between what we (as in society's collective technology, not Tesla) have today and full, reliable self-driving.
Even in the anecdotes throughout the rest of the comment section, there are a lot of people who said "yeah, I tried FSD for a limited period of time and it worked for me". We're not saying that taking FSD outside right now will kill you within 5 minutes. We're saying that even if it kills you 0.01% of the time - that's pretty sketchy.
The general principle is that the drivers who have been recruited to be 'teachers' to the machine are not aware that they are training a machine. As a result, they are probably doing things that are not advisable to train on. This doesn't even just apply to machines - how you drive when you are teaching your teenage child is probably different from how you drive on a regular basis as an experienced driver. If you are not aware that you are actually teaching someone something, that's a dangerous set of circumstances.
It seems they're doing quite well on their financials by offering access to FSD as a subscription. The misconception here is that FSD is needed for them to collect data - they collect Autopilot sensor data on all cars, whether they have FSD or not.
A tangent to that thought... "do you want people to be financially incentivized to seek out novel situations where FSD was lacking data?"
I recall Waze had some point system to help gather positioning / speed data for side roads, marking them with icons it would try to get you to go collect... and those were just fake internet points.
I could see something like this being their logic - maybe not with neural networks/machine learning specifically, but certainly "the only way to get to where we want to go is to do this".
My counter-counter-point would be that there's plenty of other companies that are doing this more safely, and also that ends don't justify the means when those means involve killing pedestrians.
Those other companies are rapidly going bankrupt because the economics of doing it the non-Tesla way seem impossible.
Zoox was bought by Amazon for $1 billion, which seems like a lot, but that was roughly the amount of money invested into the company, so it was sold to Amazon at cost.
argo.ai just shut down. VW and Ford spent several billion dollars on it.
drive.ai shut down and was acqui-hired by Apple for the car project that was reportedly just pushed to 2026.
Aurora is publicly traded and on the ropes, reportedly trying to find a buyer before they run out of cash.
We'll see how long GM and Google will be willing to put ~$2 billion a year into Cruise / Waymo. I don't see them generating significant revenue any time soon.
Tesla and comma.ai have a model where they make money while making progress. Everyone else just burns unholy amounts of capital and that can last only so long.
No, I'm arguing that Waymo, Cruise, and others following a similar strategy will go bankrupt before delivering a working product, and Tesla / Comma.ai won't.
As to crashes: the disengagements part of your rebuttal is an implied claim that Waymo / Cruise are perfectly safe.
Which they are not.
FSD has been deployed on 160 thousand cars. No fatalities so far. No major crashes.
Cruise has 30 cars in San Francisco and you get this:
> Driverless Cruise robotaxis stop working simultaneously, blocking San Francisco street
> Cruise robotaxis blocked traffic for hours on this San Francisco street
Another Cruise robotaxi stopped in the Muni lane.
Waymo car also stopped in the middle of the road.
Neither FSD nor Cruise nor Waymo has had fatalities.
They all had cases of bad driving.
This is not safe-but-will-go-bankrupt vs. not-safe-but-won't-go-bankrupt.
It's: both approaches are unsafe today, but one has a path to becoming safe eventually and the other doesn't, if only because of the economic realities of spending $2 billion a year with no line of sight to breaking even.
If someone told you that they were going to revolutionize bridge building, but that it was going to take a bunch of catastrophes to get there, how would you feel about it?
The fact is they did not tell you, but it happened and still happens. Bridge design and construction use safety factors, yet bridges have still fallen down, with Italy and Mexico as recent examples: https://m.youtube.com/watch?v=hNLP5shZciU https://m.youtube.com/watch?v=YXmbkbr0L18
A few years ago I built realtime seismic impact monitoring and analysis technology, and the standard answer was along the lines of “we’ve got insurance if people die, so why bother”.
The fact that the other companies, which went with a more thoughtful roll-out and delayed their time to market, have a much better track record is a strong counter-counter-point IMO.
If you don't count disengagements, then what metric do you use? Because I'd guess a statistically significant portion of disengagements are accidents that would have happened if not for human intervention. Which, if we're calling it 'full' self driving, suggests you shouldn't need to intervene at all.
>4) Therefore, the best plan to acquire the data to train the networks is to use a ton of customers as safety drivers and let them test it widely on the real routes they drive. This is tricky and dangerous, but if it's not too dangerous and the outcome saves many lives over the coming years, it was worth it.
Maybe you should use specifically trained test drivers, who are acutely aware of the limitations and know how to deal with them - not random people who have been told, through intentional snake-oil marketing by a billionaire with a god complex who needs to feed his ego, that the car can drive itself.
It's insane that governments allow these vehicles on the road.
Also, that kind of the-end-justifies-the-means reasoning has led to a lot of catastrophic developments in history. Let's not go there again.
I appreciate being principled about ends not justifying the means. But in my experience this principle is not applied universally by people. It's cherry-picked as what amounts to a non-sequitur when deployed in a discussion. Don't get me wrong, I wish it were a universally held and enforced moral principle, but it's not.
Anyway, the reality is that Teslas are safer than any other car on the market right now, despite the scary voodoo tech. So it seems in this case the means are also justified. If auto-pilot and FSD were causing more accidents than humans, we'd be having a different conversation about ends justifying means, I surely agree.
Ends-justifies-the-means reasoning has also led to many of the innovative wonders we're all now relying on every day. While the customer test drivers aren't "trained", there was some caution in the approach.
Customers had to opt in to request beta-testing access, then pass a Safety Score system that grades their driving safety (roughly the same as car insurance apps) over a long period (in some cases many months!), etc. After going through those hoops, when they finally get the software, they're required to consent again. IIRC the text there includes things like: you are in control at all times, you must keep your hands on the wheel and eyes on the road and intervene for safety, you are liable for anything that happens, this software can do the worst possible thing at the worst possible time, etc. They also monitor for your hands on the wheel (via torque sensing) and use an in-cabin camera to check whether you're watching the road or looking at a cellphone, etc. These measures can be defeated with effort by unsafe idiots, but that's no different than the risks such unsafe idiots present when driving any car.
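To make those hoops concrete, the gate amounts to roughly this (a toy sketch; the thresholds and field names are invented by me, not Tesla's actual criteria):

    from dataclasses import dataclass

    @dataclass
    class Driver:
        requested_beta: bool       # opted in to request access
        safety_score: float        # 0-100, graded over an evaluation period
        days_evaluated: int
        accepted_beta_terms: bool  # the second consent when the software arrives

    def eligible_for_fsd_beta(d: Driver, min_score=95.0, min_days=30) -> bool:
        return (d.requested_beta
                and d.days_evaluated >= min_days
                and d.safety_score >= min_score
                and d.accepted_beta_terms)

    print(eligible_for_fsd_beta(Driver(True, 97.2, 45, True)))  # True
    print(eligible_for_fsd_beta(Driver(True, 88.0, 45, True)))  # False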
With all of that in place, they've scaled up over a couple of years to 160K customer test pilots. Accidents happen, but there's no evidence the rate of them is anything to worry about. If anything, what evidence there is seems to point in the direction of FSDb test drivers being safer than average. However, they're supposedly removing the Safety Score part of the process Very Soon (likely in the next few weeks), but the rest of the warnings and mitigations should remain.
--- meta stuff:
There's a ton of money and a ton of ego pushing huge agendas in every direction when it comes to anything Elon-related, Tesla included, especially since the Twitter saga began and he started really going off the rails more. For almost anything you read on related topics, regardless of which "side" it's on, you have to question the motive to even begin to understand the underlying objective truth. I follow Tesla news a lot, and I'd say ~90% of all random internet news articles on these subjects (positive and negative) are mostly utter bullshit clickbait when they're not outright fraud, designed to influence stock prices and/or buyer behavior more than to provide useful information. When big money and big egos are in a war over something, objective truth on the Internet is a casualty.
If you ignore all that line noise and look at the objective reality of the engineering parts, though: it's pretty amazing beta software with a lot of future potential, and the testing has gone pretty smoothly in terms of safety. It could be many years before you'd let it chauffeur some elderly person on a pharmacy run as a robotaxi, but IMHO it's still a better bet than most of its competitors in the long game of fully-generalized self-driving on the human-centric road networks we have today.
As for Elon himself: clearly some of his behavior and viewpoints lately are pretty objectively terrible. At least you can see it. How many executives from companies that built things we all relied on over the past few decades have really been any better? They've mostly just been better at hiding it, while Elon puts it on full display. The world is what it is.
We don't dissect live humans despite the potential for scientific advancement. Would it be so bad if FSD weren't on public roads until its disengagements per 10K miles driven were at least as few as human accidents per 10K miles?
Disengagements in Tesla data are going to commonly be for much less serious things than potential accidents (merely inconveniencing others, or embarrassing the driver in some way, or even a safe move that just didn't /feel/ right to the driver). They've published actual accident statistics for Autopilot, and those show that it has a lower accident rate than manual driving even on the same fleet of Teslas (which in turn have a lower accident rate than the rest of the US even when manually driven).
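To make that distinction concrete, here's a back-of-the-envelope illustration (every number below is invented, not Tesla's published data):

    miles = 1_000_000
    disengagements = 2_000           # hypothetical: one every 500 miles
    safety_critical_fraction = 0.05  # hypothetical: most are comfort/embarrassment

    would_be_accidents = disengagements * safety_critical_fraction
    per_10k = lambda n: n / miles * 10_000

    print(per_10k(disengagements))      # 20.0 disengagements per 10K miles
    print(per_10k(would_be_accidents))  # 1.0 potential accidents per 10K miles

Comparing the raw 20 against a human accident rate would wildly overstate the risk; only the safety-critical subset is the apples-to-apples number.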
Driving is inherently very dangerous. Traffic accidents are a leading cause of death in the US. You're not really chasing perfection to win this game. It's not all that hard to beat humans on average, because the average human is pretty terrible. It's a statistical certainty that some people will die at the hands of Autopilot even when it's in some final non-beta state, but it will probably be fewer people than would otherwise die over the same miles driven manually.
The hard thing for Autopilot-like systems is perceiving the world accurately. "The world" includes both the physical reality of roads+cars it senses, as well as things like traffic rules, corner cases (construction, debris, manual police control of an intersection with a dead traffic light), and gauging the intent of other/human drivers. Humans are inherently better at that stuff. The software has to get better than it is today, but it will probably never fully match the best humans at this part.
However, there are two key ways the software inherently outdoes humans:
(1) It can have more sensory data than humans. Even Tesla (currently sans Radar + Lidar, just cameras) can see in every direction at the same time, all the time, with an overlapping array of cameras. No blindspots, no temporary blindness in one direction while you crane your neck to check in another, etc.
(2) It never gets tired, distracted, or drunk. It's never checking a facebook feed, or nodding off after a sleepless night, or too engaged in a conversation with a passenger to remember to look for that cross-traffic, etc. This is a huge advantage when it comes to accident rates.
> Autopilot even when it's in some final non-beta state, but it will probably be fewer people than would otherwise die over the same miles driven manually.
Bold claim. We would need the data to be sure. Judging by the reports from Tesla owners in this thread, I'd guess FSD and Autopilot are causing more harm than they're preventing.
> It never gets tired, distracted, or drunk.
Which would be a great benefit if FSD didn't drive like a drunk teenager.
Look, for any tool humans rely on, there must be predictability. And until we have enough public data, no conclusions can be drawn. That's why it's in Tesla's interest to keep releasing less and less data, except for the data that makes them look good.