Tesla seems stuck-ish to me. They do make some incremental improvements each year, but even after several years of development, their cars still frequently try to run into parked cars and other stationary obstacles. We're not talking about edge cases; your cars shouldn't be regularly trying to hit a concrete wall after this much engineering effort.
Waymos do occasionally screw up, but if they did it as much as Tesla's FSD, it'd be chaos in the streets in SF, so it seems like it must be fairly infrequent.
I'm not sure how true this is anymore. FSD has improved significantly this year now that they're on their new NN architecture.
It's worth remembering that Waymo required their users to sign NDAs during the beta, while the Tesla FSD beta was open to everyone with no NDA. So there was a lot more Tesla content being posted and going viral.
I've heard "Autopilot/FSD has improved a lot recently" or "the next release is going to be a huge improvement" many times from Tesla superfans. And certainly there has been improvement, but it's still at a stage where it's making very basic mistakes in operating the car.
It's not even at the point where the challenge is handling weird edge cases with construction or strange intersections, it's still struggling with not running into parked cars and walls. How many years do you think it should take a self-driving system to be able to handle those basic tasks in the general case? Because Tesla has been working on self-driving for almost a decade at this point, and they still seem to be barely past the starting line.
> it's still at a stage where it's making very basic mistakes in operating the car.
> it's still struggling with not running into parked cars and walls
Based on the phrasing of your sentences, I must ask which FSD version you are running, and if you can share footage.
> Because Tesla has been working on self-driving for almost a decade at this point, and they still seem to be barely past the starting line.
Waymo/Google has been working on this since 2009, and that work was itself based on a Stanford project (whose team Google hired away from Stanford) that started in 2004. So that's either 15 or 20 years depending on how you count, and it doesn't include the much harder tasks of mass-producing vehicles and making electric vehicles commercially viable.
The NDAs ended when they were confident enough in their system that the good press would outweigh the bad.
So essentially we're comparing footage from Waymo when it was at the end stages of its development to footage from Tesla at the early stages of its development.
I don't know if they're wildly different at this point. Sitting in the shotgun seat, comparing the latest FSD vs the latest Waymo on the same pickup and dropoff in San Francisco, I couldn't tell much difference. On the one hand, Waymo definitely chooses slower, quieter roads and weird pickup/dropoff points, which means it's a slower ride. On the other hand, most people don't have access to actually try FSD, so they rely on videos, which are typically older FSD versions and spliced to show only "highlights" instead of raw 20-minute ride footage.
I don't think we'll actually know until Tesla has an actual robotaxi product. When Cruise had one, most people who had tried both Cruise and Waymo said Waymo was better. That was my opinion as well.
> Sitting in the shotgun seat, comparing the latest FSD vs the latest Waymo, on the same pickup and dropoff in San Francisco, I couldn't tell much difference.
Well, except for the fact that one is doing it completely driverless. And it has to do that every single time without having the luxury of a driver to prevent accidents.
Big difference in reliability, which makes them wildly different.
There were no interventions, so both of them were doing it completely driverless.
We can't make an apples to apples comparison until Tesla also has a robotaxi product, but even then there will be questions around the role of remote operators.
> There were no interventions, so both of them were doing it completely driverless.
Well, no. A Tesla doesn't operate without a driver's supervision, so it can't be driverless. It did that particular drive without intervention; that's it. The stats [1] clearly show it's nowhere near capable of doing it without a driver in the seat: the community tracker puts them at 30 miles per disengagement.
FWIW, a quick Google search turns up Waymo reporting 0.41 incidents with injuries per million miles driven [0], whereas Tesla vehicles using Autopilot had 0.152 incidents, with or without injuries, per million miles driven [1].
So Waymo has 2.7 times more incidents with injuries than Teslas using Autopilot have incidents, with or without injuries.
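If you want to sanity-check that ratio yourself (these are just the figures quoted above; whether the comparison is fair is a separate question):

```python
# Quick check of the 2.7x ratio using the per-million-mile figures quoted above.
waymo_injury_incidents_per_million_mi = 0.41     # incidents with injuries
tesla_autopilot_incidents_per_million_mi = 0.152  # incidents with or without injuries

ratio = waymo_injury_incidents_per_million_mi / tesla_autopilot_incidents_per_million_mi
print(round(ratio, 1))  # -> 2.7
```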
Maybe if I checked more sites they'd give different numbers, but from those initial numbers it seems your perception of reality of Waymo "screwing up" less is not accurate.
This is a ridiculous, apples-to-oranges comparison. You're comparing fully driverless miles to driver-assist miles where humans are actively preventing accidents, without controlling for any variables.
This is an extraordinarily disingenuous comparison. A big reason why Tesla superfans have such a poor reputation is because of bad faith arguments like this that frequently pop up in these discussions.
Tesla cars with FSD have a driver behind the wheel who can instantly take over if the car is about to crash into a stationary object. Any time a Tesla would've crashed into an object but its human driver saved it, that doesn't count in stats like these. Many Tesla owners have reported that they regularly have to disengage FSD because it's trying to do something dangerous or looks like it's headed for a crash.
In contrast, Waymo cars do not have a human who can take the wheel if they try to run into a wall. The closest equivalent is that if a Waymo car gets confused and doesn't know how to proceed, it can stop, phone home, and ask a human to give it 'advice' or a general path; these people don't directly control the car and are more comparable to a human navigator in the front passenger seat. It's still human assistance, obviously, but it's not going to save the car from running into an object that it didn't think was there.
> Many Tesla owners have reported that they have to regularly disengage FSD because it's trying to do something dangerous or looks like it's headed for a crash.
With Tesla the responsibility is on the person in the driver's seat, so there is (rightfully!) a bias toward overreaction on the part of the driver. We will never know how many of these disengagements were necessary.
The only way to get a true comparison of data is to compare robotaxis with robotaxis.
It's true that not all of them would be crashes, but many would be, because, well, the car was about to crash. The car isn't just joking around when it swerves towards some parked cars.
> The only way to get a true comparison of data is to compare robotaxis with robotaxis.
100% agreed. And so far, Tesla hasn't taken the step of actually letting the cars be driverless.
Older versions of Tesla FSD tended to make steering adjustments that were short in duration, but at a higher turn angle. Human drivers in a similar situation would turn the wheel slightly but keep it turned for longer before returning to centerline.
People saw the steering wheel turn and perceived it to be the system going haywire, or thought that "the car was about to crash" as you put it, and intervened.
The newer NN based FSD acts more like what a human would do.
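To illustrate why those older inputs felt alarming even when the resulting path was fine, here's a rough back-of-envelope sketch (not Tesla data; a simple kinematic bicycle model with made-up speed, wheelbase, and steering numbers): a brief, sharp steering input and a longer, gentle one can produce roughly the same net heading change, but the sharp one looks like the wheel is jerking.

```python
# Illustration only: two steering profiles with similar net heading change,
# using a kinematic bicycle model. All numbers are assumed for the example.
import math

V = 15.0   # speed, m/s
L = 2.9    # wheelbase, m
DT = 0.1   # timestep, s

def net_heading_change_deg(steer_profile_deg):
    """Integrate yaw rate over a sequence of steering angles (one per timestep)."""
    heading = 0.0
    for delta_deg in steer_profile_deg:
        yaw_rate = V * math.tan(math.radians(delta_deg)) / L  # rad/s
        heading += yaw_rate * DT
    return math.degrees(heading)

sharp_brief = [8.0] * 5    # "old FSD" style: 0.5 s at 8 degrees
gentle_long = [2.0] * 20   # "human" style: 2.0 s at 2 degrees

print(net_heading_change_deg(sharp_brief))  # ~20.8 degrees
print(net_heading_change_deg(gentle_long))  # ~20.7 degrees
```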
Yes, every year Tesla fans talk about how much it's improved, and every year it's still failing on basic driving tasks.
And there are definitely cases where the Tesla in question just tries to run into parked cars or similar for no apparent reason, but Tesla fans always have some excuse for why that's irrelevant, especially if it's not on whatever the absolute latest version is.
Then they accuse the people horrified at Teslas making basic errors and trying to crash of being "anti-Musk".
I actually watched the entire video in the article.
There were some private-driveway situations where the uploader intervened to back out and head to a new destination (though Waymo drops you off half a block away and makes you walk instead of entering your driveway, so it's not really comparable). There were also some situations where a human driver honked; this has happened to me in Waymo as well. There was one situation where the Tesla didn't seem sure whether it could proceed, but Waymo in that scenario would ask a remote operator (this has happened to me in both Waymo and Cruise), and presumably Tesla's robotaxi can also have remote operators.
The only case where he actually disengaged was at a stop sign with a slip lane, where the car turned right at the stop sign instead of using the slip lane. He went there again at the end of the video and the car used the slip lane. I don't see this as an unfixable problem; clearly the car can use slip lanes to turn, it just needs to be taught to always prefer them when turning.
So, your own video disproves what you're saying. It isn't failing at basic driving tasks.
I think the mistake you're making is assuming that they will never be good enough. A lot of people said the same thing about Google/Waymo until they actually rode in one.