Tesla FSD data is getting worse, according to beta tester self-reports (electrek.co)
546 points by mfiguiere on Dec 14, 2022 | 623 comments



Earlier this year, when Mercedes announced its Level 3 "Drive Pilot" system [0], a lot of Tesla stans mocked its limitations, which, to be honest, are quite numerous on the face of it:

- Only allowed on limited-access divided highways with no stoplights, roundabouts, or other traffic control systems

- Limited to a top speed of less than 40 mph

- Operates only during the daytime and in clear weather

But the big promise from Mercedes is that it would take legal liability for any accident that occurs during Drive Pilot's operation, something that Tesla doesn't appear to be even thinking about wrt Autopilot and FSD.

I would love someone to goad/challenge Tesla to step up to Mercedes. If FSD is so much better than Drive Pilot, then why doesn't Tesla agree to provide a "safe mode" for FSD, that operates with the exact same restrictions as Mercedes' D-P, and offers the same legal protections to any users who happen to get into accidents during "safe mode" FSD operation?

[0] https://www.roadandtrack.com/news/a39481699/what-happens-if-...


Yep.

Mercedes has these limitations not because their tech is less capable than Tesla's but because Mercedes is a real car company with real engineers and a gold-standard reputation to maintain.

Tesla, in contrast, is a software company that is trying to take "move fast and break things" into the two-ton 75mph vehicle space, with predictable results.


> Mercedes is a real car company with real engineers and a gold-standard reputation to maintain.

Not only that, but they have a quality reputation to uphold in their domestic (German) and regional (European) markets.

Your average discerning German car buyer (i.e. the sort who has a Porsche 911 or a higher-end Audi/BMW/Merc in their garage) will swiftly tell you about numerous problems with the Tesla before they've even sat in it.

Panel gaps, for example. They mean a lot to your average discerning German, and your average Tesla has them by the bucket load.

In fact, the German in-joke is that the reason Tesla built a factory in Germany is so that the Germans could (try to) teach them how to fix the panel gaps. :-)


> In fact, the German in-joke is that the reason Tesla built a factory in Germany is so that the Germans could (try to) teach them how to fix the panel gaps. :-)

As a German I can certainly say that folks here really pay attention to where cars are manufactured. Tesla Model Ys available on various platforms are boldly advertised as "Manufactured in Grünheide". At the same time, people are aware that some domestic models (such as the Mercedes EQS SUV) are manufactured solely in the US and then shipped over, and those are sometimes perceived as lower quality as a result.


I currently have a Tesla Model Y and a Porsche Macan. The Macan feels more luxurious, but the Model Y is easier to drive because of its unearthly acceleration.

The biggest things that Tesla got right besides the acceleration are the value-add features: Sentry Mode, the tight integration with the phone, and things like walk-away locking (although I would very much prefer it lock closer to the car; it's about 40-60 ft away when it locks, which makes me nervous).

The build quality is cheaper, the sound system sucks, and I generally despise how many things are tied to the screen. I want buttons and knobs for the climate controls; having to hunt for that on the screen is very dangerous. I hate it.

The Porsche feels more luxurious, but it's mind-blowing how many things they get wrong. The dashboard is much too complicated, with a lot of redundant buttons. Something as simple as phone storage: there's no place to put my phone, only extremely awkward spots that let it fly around the car on any turn, which is so dumb in 2022. The backup camera has way too narrow a field of view; I've almost backed into two cars in parking lots, something the Tesla got right. At least they have Apple CarPlay, but activating it is extremely annoying.

I've also had BMWs before (X3 and X6), and overall my favorite car of all time is the X6.


The Macan is a sports car first, luxury car second. A better comparison would be against a Mercedes GLC, and that absolutely stomps a Tesla Y. Anyone driving with passengers in the car probably doesn't care about "unearthly acceleration" as much as a quiet, smooth ride and abundant cameras and safety features. The Tesla Y is on par with a Honda Civic from a decade ago in ride quality.


Honestly, I do not get how they got away with so much touchscreen; that should be illegal. You very quickly learn the buttons and knobs in your car for basic activities and don't even need to look. Not to mention that the touchscreen is a single point of failure!


This reminds me of Blackberry saying “enterprise users need a real keyboard” when the iPhone was launched.

Which isn't to say you're wrong (a driving experience isn't the same), but I'm sure that experience was the guiding principle behind Tesla going full touchscreen.

That being said, Apple put a lot of work into the keyboard experience, with haptics and fuzzy hit-target design, to make it as functional as it is. I'm not sure Tesla has done the same level of usability work.


The difference is you are walking / sitting when using your iPhone with the touchscreen.

Trying to find out what mode to be in to turn up the heat when going 70mph is a recipe for disaster no matter how much money they pour into it.


When I look at other road users, I find quite a lot of them are driving when using their iPhones :).


I personally have a theory that there'd be fewer texting and driving accidents if we had BlackBerrys with physical keyboards instead of touch screens.


They get away with it by cutting corners. The Tesla screen is not built to common automotive screen standards, and as a result it often fails in inclement conditions or prematurely.


I have no idea why car manufacturers don't have phone holders built in. We're forced to buy cheap crap off the jungle website to suction-cup, clamp, or screw onto the dash. IMHO it should be a standard fitting, like the cigarette-lighter outlet.


> the things like Walk-away locking

I think it's actually worse in terms of security, as you can use some kind of range extender to pretend that the key is close and unlock (or even start?) the car.


Fuck panel gaps. Just look at the quality of the interior. You're paying 100k for a car that has worse quality than a Ford. Everything is plastic, there are no buttons or knobs anywhere, no panel in front of the driver, the leather on the seats doesn't feel like leather, etc. I mean, I get that half the price of the car is in batteries and R&D, but you still can't even compare it to a 50k Volvo. It's just crap. And now that the big manufacturers are moving into electric cars, Tesla's got a lot of serious competition to face from companies who know how to treat a customer who's paying big bucks.


Yeah... it's funny, but I wanted a Tesla real bad before I actually sat in a friend's. The UX sucks. Everything from opening the door to trying to access the AC or glove compartment was a pain in the ass.

Nevermind FSD accidents, I think I'd get myself killed just trying to turn down the heat on that giant touchscreen.


Serious competition is good!

I just replaced my Audi Q5 with a Tesla Model Y (which... wasn't $100k). No panel gaps, no other problems, and the overall quality feels nicer than the Audi. Shrug!

Anyway, yeah, the next few years look really exciting for consumer EVs. So many announcements in 2022.


The Audi Q5 had issues with the infotainment, IMO. It wasn't stock Android Auto like in my Hyundai. The voice assistant button always routed to the Audi voice thing (completely worthless; I know what I want Google to do, just connect me to Google). The buttons felt overcomplicated.

Although one feature I really liked was that it would tell me what the biggest consumers of power were. I would definitely like to have a car with a heated steering wheel in the winter here, but that isn't in the cards.


Mine was a 2018 Q5. It had a central knob, which I thought was nice for navigating around the screen. But yeah, overall the software and system were pretty lousy. In fact, I was _never_ able to get it to connect to CarPlay (or whatever I was supposed to do) to show a map. My wife had that working with her phone, but I couldn't do it! I also couldn't charge my phone and stream music at the same time. And obviously the software, which already seemed behind the times in 2018, was never updated in four years. I got in the habit of memorizing entire routes from my phone before I drove. It's interesting now to have a car where the map is front and center.

First thing I did with this Tesla is turn off the heated steering wheel for my driver profile. My hands sweat easily, so it was undesirable.


I was seriously considering a Model 3 for my new car, but I'm so glad I passed.

On the Model 3 subreddit there are endless posts about reliability, including issues I never thought would come up.

Some recent ones include the driver-side mirror housing falling off and snow entering the trunk compartment.


Have you looked at forums for other cars? There’s tons of complaints for every model. I’ve had my model 3 for 5 years with zero issues.


I can't recall mirrors falling off or trunks so poorly sealed as to allow snow inside from other carmakers.

Maybe on a Yugo, but nothing that purports to be anything better than a cheap-as-possible econobox.


A quick google reveals similar wing mirror issues with VW [0] and BMW [1].

[0] https://www.speakev.com/threads/wing-mirror-glass-fell-off.1...

[1] https://z4-forum.com/forum/viewtopic.php?t=79640


You can find one-off or rare problems on EVERY single model. For the things that actually matter, Teslas are very reliable on the whole.


> I think Mercedes and other automakers have a good chance to bypass Tesla now since Elno is captivated by his Twitter acquisition.

Do you have any evidence to back up these claims? What trends are you basing that on?

Do you have evidence that Tesla performs worse now because of Twitter?

Do you literally have any real-world basis for this, or is that just what you hope will happen?


If you're paying 100k for a model 3, you're getting screwed.


I believe the “leather” is “vegan leather” which is really plastic. I own a MY — the one plus side to it is it isn’t as cold as leather in the winter.


> In fact, the German in-joke is that the reason Tesla built a factory in Germany is so that the Germans could (try to) teach them how to fix the panel gaps. :-)

Speaking more seriously on that front, there's been the acquihire of Grohmann Engineering.


> In fact, the German in-joke is that the reason Tesla built a factory in Germany is so that the Germans could (try to) teach them how to fix the panel gaps. :-)

Having owned Model Ys made in both China and Germany, I can tell you that the Germans have not succeeded. The made-in-China Y was 100% no issues, while the German one looks like it has already been in a collision. Anecdotal, but this seems to be the pattern with cars delivered to Europe.


> Your average discerning German car buyer

I've got plenty of descriptive words for the type of person you described, "discerning" isn't one of them.


>so that the Germans could (try to) teach them how to fix the panel gaps

Not a Tesla owner or fan but I have a question that's really bugging me now: Which customers really care about panel gaps?

Do average joes, SUV-driving soccer moms, or suburban white-collar workers go around with a ruler measuring their panel gaps against others', like "yeah, your Tesla is cool and all, but sorry, my Audi A5 has much tighter panel gaps, which is what matters most, you know"?

When did the panel gap become "the benchmark" indicative of car quality beyond the body shell?

Like, if the panel gaps are the only thing you can find wrong in a car, then it must be a really really good car, right?

Is there any proven evidence that panel gaps correlate with the quality and reliability of the rest of the car, or is it just a myth of the car enthusiast community that got spread around and went too far? I get it, some Teslas are unreliable and built poorly, but it's not because they have big panel gaps. The reverse is also true for many cars, so this isn't a rule.

Sure, if you want to measure and compare panel gaps, then by all means go ahead and measure panel gaps, but please don't pretend they mean anything more than that, and that it's somehow an indicative for the car's overall quality and reliability, because so far there hasn't been any proof of this correlation.


Aren't those cars (McLaren/Ferrari) also horribly uncomfortable and lacking in a lot of other amenities (like sound systems or tech)? It feels like those cars are a completely different category of good, and trying to measure them on the same scale is misguided.

To me, panel gaps are a proxy for how much faith you have in your consistency and quality control.


>To me, panel gaps are a proxy for how much faith you have in your consistency and quality control.

I beg to differ. Modern German cars might have panel gaps tighter than a nun's fanny, but their reliability, especially after the warranty is over, is so awful that in no way can I say they represent quality. Those quality cars went away in the late '80s and early '90s, when the engineers got replaced by the bean counters and cars became white goods, with many critical parts outsourced to the lowest-bidding contractor: they must look 'cool' in the showroom, but fail the second the warranty runs out, or many times even before that.

To me, the panel gaps are a superficial metric of quality and prove nothing of substance that goes beyond body shell.

Why don't we measure quality by how reliable a car is over time and how long it lasts? Surely that would prove good consistency and quality control on the manufacturer's side, no?

Tight panel gaps only show how much effort the manufacturer has put into the body, but say nothing about the quality and reliability of the electronics and mechanics, which is what really matters in a car for most people.


In a recent survey, Tesla was ranked 19th out of 24 brands for reliability; BMW was 10th.

https://www.google.com/amp/s/www.newsweek.com/tesla-ranked-n...


Does that prove a direct correlation between reliability and panel gaps, or could it be merely a coincidence?

According to the article:

> "the Audi E-Tron and Volkswagen ID.4 were singled out as being unreliable"

So if Audi and VW are also unreliable then the panel gaps prove as a poor signal for reliability, which was my original point.

Edit after your reply below: Sure, Tesla has poor reliability, but not because it has poor panel gaps. Those two can be completely disconnected. Just because they coincide sometimes, doesn't make this a rule of thumb like some car snobs try to convince you of.

You can easily have cars with great panel gaps that are incredibly unreliable, and vice versa. Panel gaps mean nothing more than panel gaps.


I'm just pointing out that we can have both reliability and tight panel gaps. We have the technology. They are separate things.

As are different models within a single manufacturer. I love my Ford, would highly recommend it, but would never buy or recommend a Pinto or a Bronco II.

The point people typically make is that Tesla has uncommonly poor panel gaps, which point to poor quality and tolerance control in their manufacturing. This is a complex skill that automakers have been refining for 100 years. It is indicative of something, just as the quality of paint job indicates the care and quality with which a hot rod was built.


If you look at reliability among EVs only, Tesla is in the top half.


Sure, I definitely agree reliability is important, and that just because something can be built exactly to spec doesn't mean that spec is going to be reliable.

For me at least, I don't actually think of something like panel gaps as a reliability indicator, but rather as one of many indicators of whether more quality-of-life stuff is going to go wrong (i.e. doors rattling, buttons becoming loose or fiddly, etc.). These things don't stop me from getting from A to B, but at that price point they're still important considerations.


Aren't those sports cars, basically? Luxury sports cars that don't sell comfort at all - they sell power, speed, and an "I am cool because I am powerful and fast" look.


I wonder how on earth those things are allowed on public roads


I wonder the same thing about monstrous pickup trucks in North America. They pose far more of a threat to public safety than any McLaren ever has.


Just the sheer height of modern pickups and SUVs is a safety nightmare (for others). I recently walked past a Ferrari and was shocked by how low it is to the ground and how inoffensive it looks nowadays compared to the tanks everyone has.

If a lambo was barreling towards me I could probably just hop over. Not that they would want to go particularly fast anyway in the city, if they care about the underside of the car.


> Which customers really care about panel gaps?

Thing is, no one else has them. It's a real achievement for Tesla that the doors don't align with the body when every other car, no matter how cheap, manages to not do that.

Among what's sold in Europe, at least.

So you'll forgive me that I don't trust the rest of the car either.

Source: I've driven a friend's brand new Tesla. The rear right door didn't align with the body.

Also, I couldn't figure out how to operate the A/C, the turn signal stalk was too smart for me and kept turning the signals off at the wrong time*, etc. Too bad about that engine.

* I have a feeling their designers only ever drove on wide American streets that have only 90-degree intersections. For the life of me I couldn't get the turn signal to stay on when the main road curved right and I was trying to get onto a secondary road that was at maybe 30 degrees to the left. Probably because, from the car's point of view, I was basically going straight ahead at the start.


> Do average joes, SUV driving soccer moms or suburban white collar workers go around with a ruler measuring their panel gaps with others like "yeah your Tesla is cool and all, but sorry, my Audi A5 has much tighter panel gaps which is what matters most you know"?

Average joes and soccer moms in a typical suburban area cannot even afford Teslas and are extremely happy with their Odysseys, Siennas, and Pacificas. Even an Audi A5 is cheaper than a Model 3 in most cases, and Audi's ride quality and cabin noise are far better than any Tesla I have ridden in. Audi interiors (though not as good as its other German competitors') beat Tesla by a mile.


As per this thread, the "average discerning German car buyer". I don't know why the fuck everyone else should give a shit about the opinions of the "average discerning German car buyer". Is he some kind of Nordic ideal car buyer everyone else should aspire towards?!

Not once in my life has anyone talked about car panel gaps except on web forums, when demonstrating that Teslas are rubbish. I had one poster explain to me what is and what is not a luxury car. Alright man, good for you!


People may not use the term "panel gap", but they'll say "hey, that Tesla looks like it has big gaps between the fender and the door". Who the fuck cares what forums think when normal everyday people notice it too, even if they don't know the word?


I hate Elon Musk as much as anyone else, check my comment history. But literally no one has said that to me in the 4 years that I have owned a Tesla.

Honestly, who really cares? Perhaps I haven't met the discerning average German car buyer.

You know what bugs me about my car? The fact that the range has dropped by 20% and Tesla support says there is no problem.


Re: panel gaps. People do tend to notice when their cars whistle and sport leaves, hair from the car washer's brush, and other debris on their bodies.


I've definitely noticed when Teslas (yes, plural) drive down the highway with their bumper cover hanging off, flapping in the wind.


No one cares about panel gaps. A lot of people on HN despise Elon and become relentless pedants when discussing any of his products/initiatives.


If the panel gap is not "neat", which is not the same as "small", then it looks bad.

I don't know if I could find a source, but it would probably be something like Forbes in the '80s: Lee Iacocca (of Chrysler) said they learned from their Japanese collaboration that if they made the gaps wider, they appeared neater because they appeared parallel. And it was cheaper to make a wider gap, and it was easier for QA to pass the cars. Otherwise the cars would have to be whacked with mallets until the narrow gaps looked right, and that slows the line, which indirectly adds cost.

[This doesn't mean that the Japanese collaborators were making huge panel gaps on their own cars, it simply indicates that Iacocca('s people) got a certain idea from them]


You'll care about panel gaps when there's snow in your trunk or blowing around in your cabin. You also might care that your car looks like it was in an accident and repaired by an amateur.

In a way though you're right - no one cared about panel gaps until Tesla came along because until then even the cheapest of the cheap manufacturers were able to get that right.


I care very much about panel gaps. I can afford a Tesla but won't risk buying one, because those gaps point to shitty QA.


I used to read about the panel gaps on Teslas and thought, so what? Then as more Teslas started to show up on the road and I saw what they were talking about, I finally got it.

The cars look like they were in an accident and had their body panels poorly repaired/reassembled.


I care about panel gaps, and one of the reasons I didn't consider Teslas when buying a new car is the build quality.

Not sure how much this matters, as Tesla's sales are really high, but I think this basic stuff is important.


The super rich buy supercars for the increase in their own perceived value (i.e. "wow, look how rich this dude is").

The average joe buys a car because of the value it gets them (because every dollar matters)


I think most car buying is motivated by far, far more than delivered value. There's so much status and image wrapped up in cars that, though there are some who care little about the car, nearly everyone chooses something that fits their perception of themselves.

The reason there are so many super-expensive pickup trucks on the road is not that people are hauling around things that require a pickup, for example. And when combined with the fact that pickup beds are becoming increasingly useless...


Yes, that's why the car market isn't very practical.


What do you mean the beds are becoming useless?


The height of the base of the bed, the height of the sides of the bed, and the length of the bed are becoming more about show than practical usage.


Umm, except the cars the average joe buys depreciate in value, while the supercars the rich buy usually appreciate in value, kind of like art, so wouldn't it make more sense that panel gaps are more important for that market?

Does having tighter panel gaps help with resale value for the average joe?


Supercars absolutely depreciate in value minus select limited releases (which holds true for non-supercars as well). Look at standard Lamborghini Gallardos, Ferrari F430s/458s, and Aston Martins of any model and you will see that some of these cars are worth less than half what their original buyers paid for them.


Fair point, that was a bad example on my end. Edited.

But my point stands: panel gaps as a benchmark alone are no measure of quality or any other metric.


This isn’t true. Your average super car does not appreciate in value when you consider factors like maintenance and the fact that you have to buy a bunch of other garbage to even be put on the delivery list for a desirable car’s production. For example, actually buying a top spec 911 isn’t feasible if you don’t have a good relationship with your dealer.

Notwithstanding the fact that the market for super cars is nothing like the market for Teslas or 5ers.


Tesla's mockery during the bull market is finally coming to an end.

Being cocky and funny without delivering great results is simply embarrassing.

I think Mercedes and other automakers have a good chance to bypass Tesla now since Elno is captivated by his Twitter acquisition.


> I think Mercedes and other automakers have a good chance to bypass Tesla now since Elno is captivated by his Twitter acquisition.

I’ve been disappointed in Elon for some time and feel society may have already taken him for all of his good ideas.

So I’d bet the opposite. This is a chance for Tesla to recover and match the established auto makers. And the only chance they’ll get.


So far, Tesla has been incapable of stepping up to be a major car manufacturer. There are many small car manufacturers, some of which build exquisite cars that are technological and design marvels, but what Ford realized early, and Nagoya perfected (just consider how many modern operations practices originated there!), is that it's not about the machine, but about the infrastructure.

This is both pre-sale, where you have to build a lean, mean, fast pipeline from vendors to assembly, and post-sales, where you have to have service infrastructure that spans continents, if not the world.

Tesla so far shows very little signs of being able to do either. And just like Whitley failed and ended up being bought by Mumbai, my personal bet (caveat lector!) is that Tesla-the-brand might survive, but Tesla-the-car-manufacturer would end up a subsidiary of a Chinese car manufacturer, who has the car manufacturing chops, but can't build a brand.


VW made about 4.5 billion in profit in Q3. Tesla made about 3.3 billion.

Seems pretty major to me.


Yeah, and highest margin by far in the auto industry. While growing 50% per year.


Now that Elon has ruined his carefully crafted PR image of a real life Tony Stark we'll see how long that growth continues.


Only the terminally online think people buy Teslas because of Elon


I will never buy a Tesla because of Elon.


I have a Tesla, and love it (at least as much as I could ever love a car; I'd prefer a car-free life, honestly).

The worst thing about it isn't the panel gaps or reliability (I haven't had any problems). The worst part is Elon Musk and his fans. Shortly after getting the car three and a half years ago, I was leaving an outdoor party, and a man who was a Musk superfan was doing that arm-waving worship you sometimes see fans do at metal concerts, and it was just embarrassing. Previously the same man had been gushing about full self driving, and I said there was no chance it would be delivered on time, if ever, and he professed his undying trust in Musk.

Combined with Musk's recent anti-Ukraine efforts, the hyper-partisan paranoia that he's trying to push on Twitter, his hate for trans people, his hate for biological science exhibited throughout the pandemic and even today in his "jokes" about prosecuting Fauci, Musk is waging cultural war against every single aspect of my identity.

I hate Musk so so so much, and I know he had almost nothing to do with the creation of the car I like, but it still pains me every time I get in it to know that I helped such a despicable person make a ton of money. Never again will I buy a Tesla, especially since there are now competitors. I'm sure I would hate all the rest of the auto execs almost as much if I knew as much about them as I know about Musk, but the nice thing is that I don't know a damn thing about Stellantis's CEO, from their name to their former partners. And there's a lot of value in that, as a customer.

Maybe Musk is just trying to win over conservatives and jackasses to buy Teslas, but I doubt it. I think he's just a dangerous fool.


I hear that and struggle with it too.

OTOH, people (at least online) like to talk out of both sides of their mouth re: Tesla, saying on one hand "Musk isn't an engineer, didn't found the company, doesn't add anything to Tesla, etc." and then also "I'll never buy a Tesla because of Musk"... like, if both of those are the case, then what about every other company and CEO? eBay's CEO collaborated to harass a random couple who posted critical articles; do they not use eBay? Adidas made untold billions from collabs with Kanye; do they boycott Adidas? CVS, Walgreens, and Walmart agreed to pay $13bn in a settlement over the opioid epidemic (literally killing people!); do they shop at CVS, Walgreens, or Walmart, or buy prescriptions from the companies that supply the opioids? Unethical Apple manufacturing practices... etc. etc. etc.

The point here, which is not 'whataboutism', is that there is seemingly no generalizable principle at play: for some reason Musk is a bridge too far, in comparison with other companies that arguably do much, much more harm.


As somebody who doesn't use eBay or Adidas, sure, those are easy enough for me to avoid, but even there I'm not sure I care as much, as those companies aren't as much about their CEOs. The opioid thing is a bit more tangential to the actual companies.

Whereas Musk is actually serving as a front man, greedily taking credit for the work of others in a way I never even saw from Steve Jobs.

Musk's actions are also a very personal attack against my loved ones, in a way far more personal than drug stores selling opioids. It's personal insults, and actions supporting the displacement of millions of my family members' countrymen, the deaths of more than a hundred thousand, and the scarring of so many children's lives. It's pretty hard to compare Musk's actions to what Walgreens management did during the opioid crisis. Personal insults also go a really long way towards negative polarization, whereas merely distributing prescribed drugs makes it a lot harder to fault drug stores.


Bill Burr has a funny bit about Steve being a marketer, not an inventor or engineer (https://www.youtube.com/watch?v=ew6fv9UUlQ8). I can't say I've seen Musk take credit for Tesla work any more than Steve did for Apple's, let alone greedily.

Don't discount the impact of the opioid crisis, with more than a million dead (https://www.npr.org/2021/12/30/1069062738/more-than-a-millio...) -- those pharmacies did the math and decided double-digit billions was cheaper than a trial -- but I do see what you mean about the personal connection. What I'm wrapping my head around is that I don't see anyone (not that that means they don't exist) saying their cousin died from an opioid overdose so they're never shopping at CVS again, you know? The Venn diagram of people who own an iPhone and people who think sweatshops are evil is basically a circle, but Musk tweets something moronic about Fauci and everyone loses their shit.


Basically. I think Elon is a blowhard idiot who succeeds despite himself and have thought so since 2017.

But I purchased my Model 3 in 2021 because, after cross-shopping many car models and trying to optimize for the characteristics and features I wanted, it was the best thing at its price-point. And even now, even after the Twitter debacle, I still enjoy driving it every day. I couldn’t care less about the man who made the car; the product won on its merits.


No one thinks people buy Teslas because of Elon.

But people absolutely do refuse to buy Teslas because of Elon.

Brands work hard to avoid negative sentiment for a reason.


The person I was replying to does, which is why I made the comment. I agree with your other two sentences.


They have the highest margin because EVs are still supply-limited. It amazes me that smart people think that margin will survive once there is competition.


Nobody thinks it will.

However, Tesla makes about $9000 in profit per car. If that is cut in half, it still beats Toyota's $1200 by a wide margin.


I don't see why it would only cut in half, as opposed to settling on a profit margin similar to Toyota's (although it will be higher in dollars because it will be a percentage of the retail price).


> So far, Tesla has been incapable of stepping up to be a major car manufacturer.

When people make claims like this it's just really fucking baffling. Like, are you so blinded by hate that you are unable to look up basic statistics?

Tesla is growing like gangbusters. In 2023 they will likely sell pretty close to the same number of cars as BMW, a company that has existed for 100 years.

They are overtaking more and more car companies all the time. They are in range of companies like Geely for example.

In fact, partly because of Tesla, car companies have been merging. And these large mega-companies are losing sales volume.

Tesla is already making more profit than those companies, while also growing faster.

Seriously, what about making record profits, growing fast, and overtaking more and more large car companies in volume says 'incapable of stepping up to be a major car manufacturer'?

> is that it's not about the machine, but about the infrastructure.

> This is both pre-sale, where you have to build a lean, mean, fast pipeline from vendors to assembly, and post-sales, where you have to have service infrastructure that spans continents, if not the world.

You are joking, right? Tesla has invested more in infrastructure than the other carmakers. The Supercharger network. They have their own huge direct-to-consumer sales team. They have distribution centers all over the world that they own. They have vertically integrated their service, and while this lost money for a long time, it is increasingly going to make money as they grow.

In addition to that, they have built huge, vertically integrated factory parks that are bigger than almost anything anybody else is building. Go to Austin and see for yourself.

Tesla is investing in its own battery materials production and making its own batteries in the same factory where they make the cars. Literally nobody else has anything like that.

Your statement is literally the opposite of correct. Tesla has done exactly what you suggest: a huge global investment in infrastructure.

> Tesla so far shows very little signs of being able to do either.

I'm sorry, but are you blind? Have you not looked at Tesla's numbers and statistics since 2014?

Because what you are saying is straight up delusional. It's not even open to interpretation.

Tesla made a profit of $3 billion in Q3. That is while growing the number of Superchargers, service centers, and distribution centers, scaling two gigantic factory projects, investing in its own battery production, and investing in self-driving (including its own data centers).

> subsidiary of a Chinese car manufacturer

The largest manufacturers in China are Geely and SAIC.

Tesla will produce more vehicles than Geely by next year, and Tesla is already making far more money than Geely. SAIC is still a little bigger than that, but Tesla is already beating them in terms of profit.

Are you simply not looking at these numbers?


"You die a hero or live long enough to become a villain."


> Being cocky and funny without delivering great results is simply embarrassing.

In the EV market, they're #1 and #3 for most cars sold per model. What better results would you expect?


IDK, it seems like the people doing the work might have a better grasp on delivering for Tesla than Musk does. I don't imagine tweeting promises that the engineers know they can't keep is that useful.


I think you mean to say Elmo


Come now, Elmo and Elon may both be childish Muppets, but Elmo at least has the excuse of actually being a child in an educational kids show.


I feel that casting TSLA as a company without “real engineers” isn’t helpful nor is it truthful.


For me, the measure of whether an engineer counts as "real" is if they're empowered to say no.

You can have hundreds of qualified people who are called "engineers" and would be excellent in another environment, but if the culture is "If I say jump, you ask how high" you don't have an engineering culture, you have an autocracy.


Yeah, it's not like Mercedes cheated the world on emissions and thereby killed many more people than Tesla Autopilot ever will.

And Mercedes simply does not have the ability to produce something like FSD Beta from Tesla no matter how 'real' their engineers are.


Mercedes already produced something better than the FSD beta: Actual L3.


> Mercedes is a real car company with real engineers and a gold-standard reputation to maintain

Totally agree with all your points against Tesla, but "gold-standard reputation" for Mercedes? Based on what? They are consistently rated as one of the worst brands reliability-wise (Tesla usually being worse, but still).


"Nothing's more expensive than a cheap Mercedes."

Because they have a reputation of breaking down a lot, and a "cheap" Mercedes is still a Mercedes which is fixed using Mercedes-priced parts.


It's interesting. I only buy older used cars. Mercedes is a brand I have owned a couple of times. Outside of a few specific engines and models, they are mechanically very reliable and very solidly built. Moreover, especially the older cars, are quite easy to work on for a home mechanic. Parts are readily available and are not really more expensive than for any other car I've owned, which includes several other German as well as Japanese and American brands.

If you are in that market, and stick with the older models that are proven to be reliable, they are pretty safe buys. Like any used car, a lot depends on the care given by the prior owner, but people who buy Mercedes cars new tend to have at least above-average income, and can afford to maintain them properly.


I've put well over a million km on Mercedes vehicles and not a single breakdown. That's anecdata, but compared with the other vehicles I've driven whose brands I will not mention their record for me at least has been outstanding.

That said: the newer generation is not for me. I had a C-class with automatic emergency braking try to murder me twice, so I sold it and got something much older without that kind of accessory.


I've heard both sides, especially that the older models have been reliable. That phrase just seems to be particularly sticky (and that's what I understand it to mean). It makes me wonder on the point about maintenance.


Any car that you intend to drive for more than a few thousand miles will need maintenance. These are no different. But the engine design (a chain rather than a belt for the timing) and the oil pump, taken together, are massive factors in ensuring that engines last. I have a 25-year-old SLK that I've passed on to other people several times now, on the condition that when they stop using it they give it back, and even though there are all kinds of smaller issues, the engine runs like new.


Agree. Which makes it all the more frustrating that popular press focuses on irrelevant distractions, like the fact that if you try really hard, you can defeat the protections designed to ensure that the driver is ready to take over: https://www.consumerreports.org/autonomous-driving/cr-engine...

A car is never going to prevent a determined individual from doing stupid things. But it is a big problem that people who are trying to be responsible are misled about what Tesla's "Full Self Driving" can actually deliver.


The difference between breaking Firefox nightly and writing software for a nuclear power plant.


Tesla isnt a real car company? Please climb back under your rock.


So Mercedes' solution is to offer a product that isn't usable? Why bother releasing it?


Tesla's solution offers a product that occasionally tries to kill you and people around you. The only reason it doesn't is because drivers are forced to pay attention and take over at a moment's notice at all times.

Mercedes' solution is a car company taking actual responsibility for their software. If they feel the lawsuits/insurance claims/legal snafus are worth the risk, that means their software is probably pretty damn good in that limited scope. Otherwise they could literally bankrupt the company with lawsuits! That's a lot more confidence inspiring to me than Elon's repeated pie-in-the-sky claims.


It's perfectly usable in its intended scope: it allows you to focus on other things while driving in heavy traffic. When they're confident that they can do so safely, they'll extend it to other situations.

Mercedes explains the purpose in their press release:

> Conditionally automated driving on suitable motorway sections where traffic density is high

> During the conditionally automated journey, DRIVE PILOT allows the driver to take their mind off the traffic and focus on certain secondary activities, be it communicating with colleagues via In-Car Office, surfing the internet or relaxing while watching a film. In DRIVE PILOT mode, applications can be enabled on the vehicle's integrated central display that are otherwise blocked while driving.

https://group-media.mercedes-benz.com/marsMediaSite/en/insta...


How heavy does the traffic need to be, I wonder?

Based on that I'm assuming it's following other vehicles rather than following the road, which does make me wonder what happens when you get a clump of DRIVE PILOT vehicles which are all trying to follow each other.

Edit: It seems like it needs a vehicle ahead and under 60kph and uses a mix of other vehicles and road markings. It seems quite usable (but I still have to wonder how many vehicles you can get to follow each other in a chain): https://www.youtube.com/watch?v=0yiPaKfKLZs


It's the other way around. Tesla is selling an extremely expensive beta test that you can't use at all anywhere in any conditions with any kind of safety expectations.

Mercedes is selling a product that has a small set of well-defined cases where it can actually be used.


When I commuted in the city, there was a traffic jam almost every day, and I'd be stuck 15-20 minutes driving at walking speed. On especially bad days it could be up to 45min.

If I could have read my emails in that time it would have been really nice.


The proper solution to this is trains


Everyone knows that public transport is the solution, but I can't buy a railroad track and a train to take to work, so people are going to do what they can.


You can't personally buy a railroad but you can vote for people who will apply your tax dollars towards it instead of oil subsidies and auto bailouts :)


Let's not turn this into a chain of saying obvious things that everyone involved already knows; it's not going to add to the conversation, and (hopefully) it won't earn you internet points on this website either.


With its current limitations, the only application for Mercedes' solution I can think of is heavy traffic on highways. But calling it "not usable" does seem a bit harsh.

Of course if you prefer, move fast and brake... maybe


Traffic on highways is also by far the most frustrating part of driving basically for the same reasons it’s an easy-ish target for automation, so seems like a pretty good place to start.

IMO that’s just good product strategy.


It’s useable in certain situations that have a high probability of safety, and allows them to capture data and grow the program safely over time.


Here is the full list of restrictions for the Drive Pilot legal liabilities to take effect:

  Roads need to be premapped ahead of time with LiDAR
  Roads need to be pre-approved
  Car cannot go above 37 MPH
  limited-access divided highways with no stoplights
  no roundabouts
  no traffic control systems whatsoever
  no construction zones
  only operate during daytime
  Reasonably clear weather
  Without overhead obstructions
It is actually illegal to be going that slow on a highway, in Texas at least. This would simply be too dangerous to even allow.

Let me know of any other system that is even remotely close to being able to do the following:

https://www.youtube.com/watch?v=qFAlwAawSvU


People have mentioned this in other sub-threads, but it's explicitly intended for stop-and-go traffic jams:

> Mercedez-Benz has announced approval of their “Drive Pilot” system, in Germany, which does fully autonomous operation in highway traffic jam situations.

> ...

> The Mercedes car provides the traffic jam assist function — only on German motorways to start — below 60 km/h. While people debate whether they want to drive their car or not, nobody likes driving in a traffic jam, and everybody hates the time wasted in them. As a luxury feature, this will let drivers make more productive use of that time.

https://www.forbes.com/sites/bradtempleton/2021/12/13/merced...


But stop-and-go traffic jams in perfect conditions can already be handled properly by numerous companies' adaptive cruise control and lane keeping systems. I'm not sure why I should be impressed with Mercedes' tech here. The impressive aspect is that they are standing behind the tech by taking on liability, but that could easily be considered a marketing expense rather than actual confidence in the technology. We have all heard the auto manufacturer anecdote from Fight Club. The math these companies do is based on money, not lives saved.


The difference is that they're explicitly allowing the driver to stop paying attention to driving, which reduces fatigue, wasted time etc. It's actual level 3 self-driving tech rather than mere driver assistance tech.

Of course, other driver assistance systems might be close to on par with it, but a system that successfully navigates stop and go traffic 99% of the time is very different from a system that successfully navigates stop and go traffic 100% of the time, in terms of driver attention required.


I'm not sure level 3 is any safer than level 2. Level 3 still requires a driver to intervene if the car requests it. But going from not paying attention to driving isn't something that can happen instantly. Imagine you are playing some game on your phone and alarms start going off in the car. You need to be able to process what those alarms are saying, assess the situation, and take control of the car. How quickly can people do that? Likely not fast enough to avoid any urgent issues. A driver in a level 2 system should already be paying attention so they should be able to respond quicker.

And yes, I understand that drivers can get lazy with a level 2 system. But if the selling point of Mercedes is taking over liability from the driver, I am mostly concerned how the system would benefit me as a driver and I regularly use my car's level 2 features while paying attention.


The difference is that level 2 requires you to be able to take over at any time, immediately, while level 3 allows you to do something else and gives you some time (for Drive Pilot: 10 seconds) to take over. 10 seconds is quite some time in contrast to the immediate takeover level 2 demands.


I think for the prescribed use case, the situation where you'd require human intervention is when the traffic jam clears up and it's time to drive at highway speed again. Not an emergency. I'm having a hard time imagining a situation where you would need to speedily regain complete control to avert a crisis that a human driver wouldn't already have failed to handle.


I find the distinction between level 2 and level 3 to be unhelpful. How long do humans have to take over? Anything less than 20 seconds is not very feasible IMO.

Taking liability is an interesting PR move, but I don't think it matters in stop and go traffic where speeds are relatively low and damage is typically minimal if any.


If 10 seconds isn’t enough to orient yourself and take over after the car alerts you to do so, you shouldn’t be driving a car.


With level 2 you must be CONSTANTLY paying attention. With level 3 you can NOT pay attention, and the car will bing/bong you when it needs you to. If you can't get your hands on the wheel within 20 seconds, that's pathetic. I'd be able to get to the wheel in 5 seconds or so, but I'd like to be just chilling with my podcast until then (I'll still be looking at the road, just not worrying about watching for stuff).


It's not about getting your hands on the wheel. It's about knowing whether to brake or speed up, and which direction to swerve. If the system is confused by the scenario and wants you to take over, it's presumably not as simple as maneuvering around a parked van on the shoulder - it's going to be in a complex scenario.

If you've been playing an immersive game on your Steam Deck (or a VR headset), yeah, good luck knowing what to do when the car bing/bongs you.

If you think people shouldn't play games, what exactly does "you can NOT pay attention" mean? If the difference is solely liability - that's great but it doesn't make a difference to people in other vehicles (or to pedestrians).


Collisions at 40mph / 60km/h are no laughing matter. And as far as I understand, the Mercedes system will let you know well ahead of time if you have to take over, as that would only be required as you leave the designated area. Taking over to drive faster than the limit for drive pilot would never be a requirement.


Stop-and-go traffic jams aren't completely automated by most ADAS systems. For one, you're still completely liable when they fail. Secondly, most of those lane-keep assists will still let your car wander out of the lane if you really don't pay attention; they mostly just tug at the wheel to help you notice drift, or beep at you. Finally, a lot of them require manual intervention to start moving again after a full stop.

Mercedes' implementation takes on the legal liability. It will definitely stay in its own lane without any driver input. It will get going again after a full stop all on its own.


Based on all the information I've seen, adaptive cruise control with lane keeping is all that Tesla is reliable at as well. The main difference between them and Mercedes is that Tesla is willing to put out tech that is known to be unreliable and let their customers take the fall for it.


>But stop and go traffic jams in perfect conditions can already be handled properly by numerous companies' adaptive cruise control and lane keeping systems.

And do those adaptive cruise control/lane keeping systems allow the driver to take their hands off the wheel and stop paying attention to the road?


Perhaps this is just a rebrand of that already common tech? Kind of how some manufacturers claim "we have AI!" just based on something simple like adapting to a moving average.


I haven't driven every car but none of the cars I've driven could actually handle stop and go traffic. They will certainly stop but leave it to you to press the accelerator to go again. Now on a highway with medium to light traffic they are plenty capable of managing it.


It's not a jump in technology. It's the result of a slow growth of that technology to "mature enough to take liability for". Which is a better way to move tons of machine around under computer control.


> It is actually illegal to be going that slow on a highway, in Texas at least. This would simply be too dangerous to even allow.

So does the Texas Highway Patrol ticket everyone in a traffic jam? They almost certainly don't, which should be a big clue that it's not so simple.

This appears to be the actual law in Texas:

https://texas.public.law/statutes/tex._transp._code_section_...

> (a) An operator may not drive so slowly as to impede the normal and reasonable movement of traffic, except when reduced speed is necessary for safe operation or in compliance with law.

> ...

> (c) If appropriate signs are erected giving notice of a minimum speed limit adopted under this section, an operator may not drive a vehicle more slowly than that limit except as necessary for safe operation or in compliance with law.

It's not safely operating a vehicle to go 40mph in 10mph traffic.


> It is actually illegal to be going that slow on a highway, in Texas at least.

I've been at a dead stop on many highways in Texas, along with hundreds of other cars around me. I see such things happening pretty often outside my office window.

Honestly, the times when I'm going <37 MPH on a controlled-access highway are some of the most annoying driving that I'd like to have completely automated. That usually means I'm in stop-and-go traffic, some of the most grating time to drive. Both of my cars are mostly there, keeping safe distances and coming to a stop with cruise control, but definitely not completely automated yet.


So it works on freeways when there's congestion, and the speed of traffic is < 37 MPH? Sounds like adaptive cruise control with lane keep (and insurance coverage, which isn't nothing).


>Sounds like adaptive cruise control with lane keep (and insurance coverage, which isn't nothing).

Without the restriction that hands are on-wheel and driver is paying attention to the road. That's a BIG difference.


"It is actually illegal to be going that slow on a highway, in Texas at least. This would simply be too dangerous to even allow."

You don't live in Austin or Houston, do you? :-)


Texas has done a lot to increase the dangers caused by their roads, and don't seem like they plan to reroute. Just look at the current plans for the I-35, ffs.


Tesla was found to be deactivating Autopilot in the second before a crash [0]. I think it's for the dubious reason that Tesla could then declare that none of its cars were in Autopilot/FSD mode when involved in a crash.

[0] PDF https://static.nhtsa.gov/odi/inv/2022/INOA-EA22002-3184.PDF

"The agency’s analysis of these sixteen subject first responder and road maintenance vehicle crashes indicated that Forward Collision Warnings (FCW) activated in the majority of incidents immediately prior to impact and that subsequent Automatic Emergency Braking (AEB) intervened in approximately half of the collisions. On average in these crashes, Autopilot aborted vehicle control less than one second prior to the first impact."


I fail to understand how anyone at the top of any serious company could think bailing out at the last second would absolve them of anything.


The CEO of Tesla seems to think it does. He tried to bail on the Twitter acquisition at the last second. Didn't work out well obviously.

Pretty sure he has the exact same attitude to the liabilities involving people getting killed by using his "Full Self Driving" beta test.


It's for PR, so that they can publicly release statistics showing good results on paper. An average consumer will see the good-looking statistics and spread the word about Tesla safety, as already happens, without taking the above report into account, since it's buried underneath tens of comments, and I imagine Tesla will also fight it in some ways. The average consumer will then believe that FSD is indeed safer than a human driver, and buy the system. If a crash happens, Tesla doesn't provide legal liability. I imagine most cases will be closed and, overall, Tesla will be at a profit, so why not do it?
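To make the mechanism concrete, here's a toy sketch of how the attribution window alone can swing the headline number. The crash timings and the window values are made-up illustrations, not Tesla's published methodology:

  # Hypothetical gaps (seconds) between Autopilot disengaging and impact;
  # None means Autopilot was never engaged. All numbers are invented.
  crashes = [0.8, 0.5, None]

  def on_autopilot(gap_s, window_s):
      # Count a crash as "on Autopilot" if the system was still engaged
      # within window_s seconds of impact.
      return gap_s is not None and gap_s <= window_s

  # "Engaged at the instant of impact?" -> 0 of 3 crashes counted.
  print(sum(on_autopilot(g, 0.0) for g in crashes))
  # "Engaged within 5 seconds of impact?" -> 2 of 3 crashes counted.
  print(sum(on_autopilot(g, 5.0) for g in crashes))

Same data, opposite headline, depending entirely on the counting rule.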


Not intended to absolve them legally, but rather to create a grey area for public announcements.


At best, the only value I'd see is the ability to have newspapers write "Tesla SDV wasn't on when the crash happened"... actually a good amount of PR leverage, I admit.


It's the old capitalist adage, "the pilot never goes down with the ship"


It's actually 80 mph in Germany now [0], which makes this such a great and useful feature. It really feels like the future.

[0] https://www.therobotreport.com/un-allows-autonomous-vehicles...


I've been testing FSD, and I WISH their system did more to limit its use in bad conditions. The perception (based on the dashboard visualization) is much worse during rain, yet Tesla lets you keep using FSD even in heavy rain.


I'd been saying for years before anyone had L3 out that the working definition of L3 is simply that the manufacturer will assume liability while the vehicle is in charge.


I really wish they would cut this "Level" nonsense; that system was invented by business people, not engineers.

Interventions are more nuanced than just "did the driver intervene". Many times I intervene and take control while using Tesla FSD not for safety reasons, but to be nice to other drivers or for a smoother ride. It tends to love passing other cars on the right and not letting cars in at merges, for example. It also brakes a little hard at traffic jams; not nearly to the point where it's a safety issue, but when I see a traffic jam far ahead I begin decelerating much, much earlier just for my own comfort and my passengers'.

That said, FSD is nowhere near ready, and I do have a huge number of safety-related interventions as well, but reducing this to a label like L3 or L4 is trying to oversimplify a problem that isn't simple.


ADAS Levels were standardized by the Society of Automotive Engineers. Are you saying the Society of Automotive Engineers are all just business people and not engineers?


> It tends to love passing other cars on the right

Is it supposed to do illegal things? It shouldn't be so hard to ensure it obeys laws.


I'd have imagined it shouldn't, but it absolutely LOVES passing on the right. Both AP and FSD.


> Only allowed on limited-access divided highways with no stoplights, roundabouts, or other traffic control systems

> Limited to a top speed of less than 40 mph

> Operates only during the daytime and in clear weather

Wait, don’t these things together mean it isn’t useful anywhere? Limited-access roadways tend to have 50+ mph limits; going 40 or less on them, other than in the adverse conditions ruled out by the third criterion, would probably get you pulled over for obstructing traffic, unless you had Slow Moving Vehicle placards.


> But the big promise from Mercedes is that it would take legal liability for any accidents that occurs during Drive Pilot's operation

I am a FSD skeptic but I might be sold on this.


This 'level 3' is just a very cheap marketing trick. The system is a very simple highway traffic jam assistant. This trick plays with the misconception that ADAS levels actually determine how advanced the system is. They get to claim 'level 3' with a very simple system by assuming liability in those conditions. It's just marketing, and has nothing to do with actual capabilities of the system.


I mean, personally, the company assuming liability means A LOT more to me than how much the system can do. It's one thing to say your system can drive down a slick and curvy mountain road, and another thing to say you'll cover all liability if the car drives itself off the mountain. It's easy to write software that runs most of the time. This is our lives that we're talking about.


Liability is meaningless if it's limited to extremely low risk situations. Assuming liability in high risk conditions would be a big deal. Otherwise, its purpose is just to get mindshare for their 'level 3' tech, which in fact is just a self-driving starter project downloaded from Github.


Being liable still just means that it works most of the time and that they computed the cost of when it doesn't, just like any insurance company. That's all there is to it.


Feels like the people here saying that Mercedes assuming liability doesn't matter are the same people who say it's your own fault if you lose your job and your healthcare and become poor.


> But there's one key difference: Once you engage Drive Pilot, you are no longer legally liable for the car's operation until it disengages.

What if they disengage right before an accident in order to transfer the liability to you?


If the car knows that it's about to be in an unavoidable accident and it is at fault, it has acknowledged that it has fucked up. To think that Mercedes wouldn't find itself in an expensive legal battle the first time that this happens would be ridiculous.

But I would expect that the disengagement is much less abrupt than what Autopilot/FSD do. From [0]:

> After consulting with the engineer in the passenger seat, I closed my eyes completely, and just eight or nine seconds later a prompt popped up asking me to confirm I was still alert. I ignored it, which soon started the 10-second countdown toward disengagement.

Which makes sense: if the point of the system is for you to be able to turn around and help your kids for a few seconds or watch a TV show on the center console, they simply can't expect that they can ding and have you regain control instantly.
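
Based purely on that quoted account, the escalation reads like a small state machine. A minimal sketch in Python; the state names and timing constants here are my assumptions from the article, not anything published by Mercedes:

    import enum

    class State(enum.Enum):
        ENGAGED = 1    # Drive Pilot in control; driver may look away
        PROMPTED = 2   # "are you still alert?" prompt is showing
        COUNTDOWN = 3  # 10-second takeover countdown is running
        SAFE_STOP = 4  # driver never responded; bring the car to a stop

    PROMPT_AFTER_S = 8   # assumed: inattentive time before the prompt
    PROMPT_GRACE_S = 3   # assumed: how long the ignored prompt waits
    COUNTDOWN_S = 10     # per the quoted account

    def next_state(state, inattentive_s, in_state_s, confirmed):
        """One tick of the escalation; all times in seconds."""
        if confirmed:                 # any confirmation resets everything
            return State.ENGAGED
        if state is State.ENGAGED and inattentive_s >= PROMPT_AFTER_S:
            return State.PROMPTED
        if state is State.PROMPTED and in_state_s >= PROMPT_GRACE_S:
            return State.COUNTDOWN    # prompt ignored: start the countdown
        if state is State.COUNTDOWN and in_state_s >= COUNTDOWN_S:
            return State.SAFE_STOP    # still no driver: slow down safely
        return state

The point being that every transition buys the driver several more seconds, instead of an instant ding-and-handover.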

[0] https://www.motorauthority.com/news/1136914_mercedes-drive-p...


They don't. The Mercedes drives on for at least ten seconds until you take over, in that traffic-jam-on-a-highway scenario, under all circumstances.


40 MPH is absolutely too slow for a highway, dangerous even. Is this for surface roads?


From my understanding it's initially meant for use e.g. in slow-moving traffic jams on highways. They're working towards getting it approved for up to 130 km/h.

There's some very general info here: https://www.roadandtrack.com/news/a39481699/what-happens-if-...


> From my understanding it's initially meant for use e.g. in slow moving traffic jams on highways.

I'm guessing the feasibility of this is very city-dependent. In LA, the usual traffic pattern is "drive 60-70 mph for 20 seconds, slow down to 5 mph for 60 seconds, repeat". (Granted, that's because of poor drivers; it would be better to have a consistent speed of 30 mph, but there's nothing Mercedes can do about that either way.)


I don't see how 'approval' is required for _anything_. Not for lane-keeping at high speeds, and not for Mercedes to take liability for accidents while their technology is active at high speeds.


Luckily, the law disagrees in most countries and you need to get a type approval before you can sell a new car type or new assistive technologies. I would not want to live in world where this is not the case.


> limited-access divided highways with no stoplights, roundabouts, or other traffic control systems

Yeah I don't understand where exactly this would be usable, at least around where I live. If it's a divided highway, it would have to have stoplights. Are there places where divided highways have stop signs?


My guess is that it's intended to be used in the entire State of Connecticut, between the hours of 8AM and 10AM and 4PM and 6PM. i.e. situations where the highway is doing its best impression of a Dunkin' Drive-Thru.

Signed, slightly-jaded person who drives the Boston<>NYC track enough to be slightly-jaded.

---

Or, Wareham -> Barnstable on Cape Cod, on any weekend morning for 6 months out of the year. Or 101 in the CA Bay Area during rush hour.

Basically, any time+place where the thought of driving elicits an audible moan from the people then and there.


It's intended for use on interstates and highways during stop and go traffic jams.


So it's basically adaptive cruise control with lane keeping? I guess they don't have to worry about turns that are too sharp (which can be troubling for lane-keep systems) because they're limiting it to freeways that are meant to be driven at 70 MPH, but only when the speed of traffic is half that.


They are also trying to convince regulators that they, not you, are legally responsible for any incidents. As long as you are ready to take over with a ten second warning.


Limited-access divided highway in this context means freeways and toll roads with walls along the roadsides. It is generally considered acceptable to operate dangerous robot machines in fenced-off areas with enough precautions, and a self-driving car on such a highway isn't much different in philosophy.


> Yeah I don't understand where exactly this would be usable, at least around where I live.

German autobahn.


< 37 MPH?


Stop and go is an all too common thing on the Autobahn, often during rush hour in areas near large cities.


If you are thinking of places like Hamburg then the terms and conditions forbid it because the motorways in the Hamburg area are all construction zones and have been for at least the last five years that I have driven through them.


A limited-access divided highway does not have stoplights or stop signs, or any intersections at all. Cars enter and exit the roadway exclusively via on- or off-ramps.


A stop sign is a traffic control system, FWIW. They're saying a freeway, more or less, although the low top speed means really a freeway during congestion.


I've been on many rural divided county highways with stop signs.

And as others have mentioned it's still a traffic control device.


Just to piggyback on the 40 mph callout: I would like to see self-driving cars never drive faster than 5 mph under the speed limit. It would have a great calming effect on traffic. If they combined that with very conservative acceleration, it would be even better: much less of the rushing and accordion effect that's causing so many crashes.

Instead, Tesla FSD, at least from the YouTube videos, looks like it's driving like a BMW driver. Way way way too aggressive.

The biggest contributors to car crashes are speed and not enough distance from the car in front. If self-driving cars exaggerated the basic premises of safe driving (low speed, low acceleration, long distance, ...), it would be really good for traffic overall imho.


Driving significantly slower than the pace of traffic is dangerous. If the average pace of traffic is 5mph over but your car won't go faster than 5 under, you're now going 10mph less than everyone around you.

Speed differentials kill.


> Speed differentials kill.

That's the cope people use for habitual speeding. 10 mph is not a significant differential. 30 vs 60 on a highway, sure; 55 vs 65, not at all.

You should actually try it once. Go five under the speed limit and keep a generous distance from the car in front of you. You'll barely notice it. Traffic will be ahead of you, you won't pass anybody. The biggest thing to get over is the ego thing.

As you are doing this, also pay attention to your capacity to act on any emergency stop you may have to make (a dog sprinting across, a car slamming its brakes, ...) and how much more time and capacity you will have to respond.

The other thing is that people's minds immediately go to multi-lane highways. Never the other 70-80% of driving: in town, on single lanes, where going slower is always manifestly better. To put rough numbers on that, see the sketch below.
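
Here's a back-of-the-envelope stopping-distance calculation in Python; the friction and reaction-time figures are generic textbook-style assumptions, not measurements:

    G = 9.81          # gravity, m/s^2
    MU = 0.7          # assumed tire-road friction (dry pavement)
    REACTION_S = 1.5  # assumed perception-reaction time

    def stopping_distance_m(mph):
        v = mph * 0.44704                # mph -> m/s
        reaction = v * REACTION_S        # distance covered before braking starts
        braking = v ** 2 / (2 * MU * G)  # v^2 / (2a), basic kinematics
        return reaction + braking

    for mph in (55, 60, 65):
        print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m to stop")
    # 55 mph: ~81 m, 60 mph: ~93 m, 65 mph: ~105 m

Because the braking term grows with the square of speed, even 5 mph either way moves the total stopping distance by more than ten meters.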


I've seen it many times before. I got people bunching up behind me, riding my bumper, cutting me off, swerving around me, causing near misses in other lanes as they cut other people off trying to pass. It causes backups near ramps to get on and off highways, backups which often result in rear-end collisions, partially because...ding ding ding, speed differentials.

Also, the capacity to react in an emergency is more about following distance than speed. And yeah, as speed increases a driver needs to increase their following distance. Something that I agree loads of people fail at doing, and then they complain about their ADAS systems always slamming on the brakes suddenly.

> where going slower is always manifestly better.

Just tell that to all the cyclists going < 20 mph on 40-50 mph roads. They're way safer going that speed than those fools driving their cars near the speed limit. It's often not safe for them, partially because...speed differentials. To solve this, we shouldn't just restrict cars to cycling speeds; we should build infrastructure so similar-speed traffic is grouped together and separated, reducing...speed differentials.

If I started driving my car at 5 mph on a 40 mph road, I'd probably cause more accidents than if I just went along with traffic at 43 mph.

Speed differentials kill.


The overwhelming majority of the time I was driving on the highway, the right lane went below the speed limit. That makes for quite a lot of cars going below it.

And I used to drive at exactly the speed limit (as measured by GPS), and that made me among the faster cars on the highway. Only a few cars went faster than me.

I made an effort to slow down lately and can confirm that the biggest and only issue to overcome is the ego and the knee-jerk "being there faster makes you a better driver" kind of thinking.


My typical speed on the highway is speed limit plus 10% (so 60 in a 55 for example). That puts me right at the sweet spot for the middle of three lanes where I live. My exit is the last one before the highway drops to 2 lanes and gets a lot of use as it quickly becomes rural after that.

I will regularly get passed by someone going 75mph or more (in a 55) in the left lane when traffic allows those speeds. It's about 50-50 whether I pass them back when we get to the stop light at the end of my exit ramp a few miles later.

On a typical commute or trip around town, driving way faster than everyone frequently doesn't get you to your destination any sooner. It does make things more dangerous and unpredictable though.


> Just tell that to all the cyclists going < 20mph in 40-50MPH roads

That's again a 20-30 mph speed differential, not to mention a huge difference in weight. We're talking about a 5-10 mph one between cars.

Also, if it's a heavy freight truck going 20 on a 40 mph single lane: yeah, no issue at all with that speed differential, is there? Maybe the problem here is the inattentive, impatient drivers plowing through the cyclist?

> acting in capacity to react in an emergency is more about following distance than speed

The cars in front are not the only hazards.

Overall, I think you're making it too extreme. I'm not saying you should be going 20 on a highway. I'm saying going 5 under a posted speed limit is actually very reasonable, and it's what self driving cars (and human drivers) should do. It will reduce crashes. I think we disagree there.


> Maybe the problem is here the inattentive, impatient drivers plowing through the cyclist?

It's not just cars that become inattentive, and I agree the ultimate fault in those accidents is with the operator not paying attention. I've had cyclists swerve in front of me pretty close, seemingly unaware I was there, as they cut over for a left turn at the last second without signaling. Or cyclists blast through an intersection without stopping despite me already properly starting through the intersection. It's not like only people in cars make mistakes. However, speed differentials still increase risks. Reducing speed differentials and encouraging everyone to go about the same speed is better than having a mixture of speeds in the same traffic flow. Mixed speeds cause friction; friction increases the likelihood of accidents.

> Also, if it's a heavy freight truck going 20 in a 40mph single lane. yeah, no issue at all with that speed deferential isn't there?

No, there's still the exact same issues with that speed differential. People bunching up, lots of changing lanes, differences in speeds, etc. Sure, sometimes equipment needs to take roads and just can't or can't safely operate near the posted speed limit. I'd still say that equipment is causing more traffic flow issues, and thus more chances for accidents, being a slow member in traffic compared to all the other cars going about the same speed as each other. But they do have a right to use the roads, and I do agree people just need to deal with the disruption and be better operators around those obstructions. In the end there's still little excuse for cutting people off and changing lanes without watching, but if they never had to change lanes...

> I'm saying going 5 under a posted speed limit is actually very reasonable,

Going 5 under can be reasonable, I agree. There are lots of instances where one can do it safely and increase the safety of those around them. If I'm driving down a residential street posted at 30 mph but there's a high likelihood of pedestrians popping out from around parked cars, I'll drive under the speed limit, often by 5 mph or more.

I definitely disagree that individuals should subject themselves to a hard rule of always driving 5mph under the posted speed limit though. It all depends on what's the reality of the situation at the moment. If everyone is already going well over the posted speed limit, going well under it isn't going to increase overall safety. If the roads would benefit from traffic going slower, the speed limit should be changed and the road should be modified to encourage lower speeds.

If the roads should be 5mph slower, we should redesign it so everyone goes slower, not just some small percentage of cars while everyone else blasts past without immediate consequence.

Could a lot of our roads be safer if we redesigned them to make people drive slower? Sure. If I cause traffic congestion and drive in a way that's outside other driver's expectations I'm not making things safer though, I'm causing problems.


So it’s adaptive cruise control? That’s an absurdly low bar.


If they're claiming legal responsibility for it causing any crashes I'd say they're setting a pretty high bar as far as confidence goes.


But can Mercedes definitively prove their technology will never result in an accident, like people in this thread are demanding of Tesla?


They're willing to take liability for it, so they're confident enough that their legal team and accountants are satisfied. If Tesla were at that point I think most people here would be content, no need to definitively prove anything.


Ah yes, all I have to do to get justice if I get killed by a Mercedes run amok is take on a multi-billion dollar legal team. That makes me confident.


The fact that they accept liability is precisely to avoid [your next of kin] needing to "take on a multi-billion dollar legal team" if you get killed by a Mercedes run amok. Now then, if on the other hand you were to get killed by a Tesla run amok...


Well, if you have life insurance, your insurance company will be the ones suing Mercedes, and likely have an even scarier legal team.


There’s no subrogation for life insurance.


A big company being responsible for a crash is best case scenario. You would much rather sue Mercedes than Joe Shmoe for an accident, no doubt. They've got deep pockets, and your local courts are not particularly friendly to them


Any person who knows what they're talking about is asking Tesla to have an appropriate development process for safety critical systems.

Tesla's systems are unsafe, by default, if they don't follow a safety life cycle. And they don't - I've seen dick-sharing apps that had better life cycle processes.


Nobody expects that any self-driving car technology will never result in an accident. That's impossible and not a reasonable goal.


A lot of people seem to think that even a single accident is unacceptable. Quite a few of the comments on this site and others about self-driving cannot be explained without understanding that the poster has that belief, at least implicitly.

We are lucky that our ancestors were not so risk averse, because if they were we would not have cars at all, or airplanes.


> A lot of people seem to think that even a single accident is unacceptable. Quite a few of the comments on this site and others about self-driving cannot be explained without understanding that the poster has that belief, at least implicitly.

That makes sense to me. I'm happy to accept the presence of full self-driving technology on the roads, for other people, once it has an accident rate comparable to or slightly better than humans.

I personally won't use one until it is so much safer than a human that the level of safety is the #1 feature though. Until then, what's the upside? I can screw around on my phone more often? I do that too much already, and it's not the kind of benefit that cancels out the potential downside of "...but you died because of an unhandled edge case in version 27.1.828 of our software that you as an attentive human would have easily handled, which was fixed in the next release" which just seems like such a banal way to go for more screen time.

I don't think my take is drastically out of the mainstream. It also seems to me the main thing separating my point of view from "ban all FSD until perfect" is a willingness to let other people make choices I don't think are good.


Yes, I see some people promote that idea but that was never the expectation on the part of the self-driving car creators, or the regulators. They also don't expect cars to be able to solve complex philosophical questions regarding trolleys. Nor does the general public have that expectation.


I don't think it's reasonable to expect an FSD vehicle to never be involved in an accident. I do expect it to never be the cause of an accident. I really don't feel that is unreasonable.


Even if self-driving cars ever get that good, which is probably impossible, it would only happen after a long period of testing on public roads.

We can’t expect this technology to work well in the real world unless it is tested in the real world before it works well.


You can't prove a negative. Nobody sensible has ever demanded this of Tesla.


When our Model 3 got access to FSD, my 6 year-old desperately wanted to try it out. I figured a low-traffic Sunday morning was the perfect time to test it out, so we headed out to grab Mom a coffee downtown.

The car almost caused an accident 3 separate times.

The first, was when it almost drove into oncoming traffic at an intersection where the road curves north on both sides.

The second, was when it failed to understand a fork on the right side of the road, swerved back and forth twice, and then almost drove straight into a road sign.

The third, in downtown, was when a brick crosswalk confused it on a left turn, causing it to literally drive up onto the corner of the intersection. Thank God there weren't pedestrians there.

When we upgraded to a Y, I turned down FSD. I don't imagine I'll be paying for it ever again.


Honestly, I feel this way about pretty much all "driver assist" systems in the wild today.

That is, I fully understand that to get to Levels 3, 4, and 5 in autonomous driving, you need to pass through Levels 1 and 2. But the issue is that I feel like these systems are basically at the "slightly drunk teenager" stage, with you as the driver having to ensure they don't mess up too badly. Honestly, unless I can, say, read a book or do something else, these systems (I'm specifically referring to an advanced cruise control/lane keeping system) right now just require me to pay MORE attention, and they stress me out more.

Fully understand we need folks to help train these systems, but, at least for me, they currently make the driving experience worse.


> Honestly, I feel this way about pretty much all "driver assist" systems in the wild today.

My five year old Honda has a very limited driver assist system (radar cruise control + lane centering), which (in my opinion) is very good at what it's trying to do. It has no pretensions of being a "self-driving" system, but it very successfully manages to reduce some of the stress of driving and make my driving better. I think the key point is it only automates the fine adjustments and provides alerts, but is very careful to never allow the driver to rely on it too much.


I feel the same way about my Subaru's EyeSight system. It helps me stay in the lane, and annoys me if I get distracted and cross a line. It slows down the car automatically when it detects an obstacle ahead. It speeds up and slows down automatically to maintain a mostly-steady speed when I set cruise control.

Until autonomous vehicles reach "read a book or fall asleep" levels, this is all I'm interested in. No thank you to any dumb "autopilot" system that I can't actually trust, but tries to control my wheel.


I’ve had the same experience with eyesight. I would also add that it brakes very naturally. Much better than other similar systems I have tried.


I've also driven a Subaru with EyeSight. I think it's pretty good too, and kinda follows the same philosophy as my Honda, but with different tradeoffs. The Subaru doesn't lane center, so it's less relaxing to drive on the highway because you have to pay more attention to fine tuning your lane position. On the other hand, my Honda deliberately won't automatically come to a stop to avoid a collision (it will only slow down in the last few seconds), so it's more annoying in stop-and-go traffic.


Exactly! I would also add emergency braking at low speeds, so that a pedestrian stepping in front of the car out of nowhere can be spared. There is no need for real self-driving; it wouldn't really change anything, and we are not even close to that.


For the Tesla driver assistance specifically (non-FSD), it's more advanced and reasonably reliable. I find it helps a great deal to reduce fatigue on long drives. It is nearly flawless on highways, and watching to see that the car is safe is much less fatiguing than constant centering and monitoring of the accelerator. Checking that the car is at the right speed takes less mental energy than constantly controlling power to hold that speed.


Given the potential consequences of a mistake, it feels like there's still a pretty big difference between "nearly flawless" and flawless.

Speed control I'm fine with and is obvs. a mature tech that has been around for decades. Maybe it's the way I drive, but I find lane assist a liability -- especially on curves. More than once the car swerved unexpectedly one way or the other going around a bend. After the 2nd time that happened, I shut it off.


I suspect the difference in experience might be attributable to differences in the environment. I went cross country in a model Y and noticed that it did not handle one lane turning into two lanes with any grace - but I also drove across entire states where that didn’t come up. It wouldn’t surprise me if some experiences were regional to an extent.


Lane assist isn't supposed to entirely keep you in the lane on its own; it's supposed to just tug you in the right direction as a hint in case you weren't paying perfect attention. It's usually not supposed to steer the car entirely on its own.


You can’t really concentrate for long periods, and as has been shown many times, people are bad at expecting rare events. Reasonably reliable is not enough.


FSD marketing has always seemed sketchy to me. Though I like Open Pilot. It's a much smaller scope for an L2 system to keep you in your lane and keep you a safe distance from the next car. It works well for highway driving.


I personally really enjoy my ADAS systems on my cars even though it's not to the read a book or take a nap level of automation. It's really just cruise control on steroids. Do you see value in regular cruise control systems, even though it's not 100% automated?

When I'm in stop-and-go traffic, I really like not having to constantly go back and forth between the brake and gas pedal; I can just pay attention to what's happening in and around the lane and let the car keep a good following distance.

I've gone over 100 miles at a time without having to touch the gas or brake pedal, even through stop and go traffic.


I could see the value in that. I am a different type of driver however. I have never used cruise control and don't even know how to engage it on my car. It is true I don't drive much but when I do drive I like to be much more involved in it. I love the sound of my engine at 7000+ rpm shifts and the feel of my hydraulic steering rack.

My only disdain for driving stems from sharing the road with other drivers who are completely stupid and seem to lack any critical thinking skill (this shows up especially in traffic jams, and in navigating them on local roads with obstacles present, like a parked truck loading).


Don't get me wrong, I enjoy the thrill of driving and racing. Some spirited driving on open highways is way different from stop and go traffic and long family road trips. In the end I still have a throttle lock on my 1050cc motorcycle though, it comes in handy when going on a long road trip.


> Honestly, I feel this way about pretty much all "driver assist" systems in the wild today.

I've found adaptive cruise control to be a simple, noticeable improvement.


Pointing out the obvious, it's extremely negligent for Tesla to have this feature available. It's not even close to ready for general use.


And who’s going to stop them? Certainly not the US government.

Enjoy beta testing FSD as an unwilling pedestrian.


If I see a Tesla in the wild, I shoot its tires out before it has a chance to strike. That's the American way.

(I assume this is why the Cybertruck features bulletproof glass, in case my self-firing gun (now in beta) misidentifies the tires.)


>Cybertruck

I still read this as Cyberduck every. damn. time.

>in case my self-firing gun (now in beta) misidentifies the tires.)

are you mounting that self-firing gun to a car with FSD? that would make for a great hands free experience.


The profit motive is so clear in this case, and criminal. Collecting telemetry for free so you get to improve your ever-elusive model, literally paid for by your “customers”, potentially with their lives (and “collateral” ones as well). It’s horrendous.


It's extremely negligent for the driver to let the car drive onto a sidewalk. As the driver you are solely responsible for everything the car does.


How long ago was that?

A friend of mine bought a Model Y and got access to FSD about six months ago. I've spent a fair bit of time in this car in the bay area and... I'm impressed? It doesn't drive like a professional but it feels safe.

My friend says it's been improving even in just the time he's had it. So maybe it used to be a lot worse?

I'm not in the market for a new car but the next time I am, FSD is going to be a big factor. Even if it's just as good as it is right now.


If it's anything like the original Autopilot was: yes.

I had one of the first Model S vehicles that was autopilot-capable. Early enough that autopilot itself was added later. The early versions were... intense. Not just in the "if you take your hands off the wheel it might try to kill you" sense, but also in the "even using this as adaptive cruise with lane-keeping, sometimes it will suddenly try to veer off the road and murder you" sense. Even when working "as intended" it would routinely dip into exit ramps if you were in the right lane. As a result, I didn't use it all that often, but over not a lot of time it improved pretty dramatically.

At this point my wife and I are on newer Model 3s, and our experience with autopilot (not FSD) as a frequently-used driver assist has been... uneventful? Given the _original_ autopilot experience, though, neither of us is particularly eager to try out FSD yet. Autopilot strikes a good balance for us in terms of being a useful second pair of eyes and hands while unambiguously requiring us to drive the damn car.


Personally I wouldn't trust FSD with my life until it has been battle-tested for at least 10 years.


I don't have a Tesla but I do follow FSD development and a few people who test updates on Youtube. It really seems like your experience with FSD will vary depending on where you live. I see a guy testing FSD updates in downtown Seattle and neighboring residential streets where it seems very impressive driving in traffic, one way and narrow streets with cars parked on both sides. But then I also see it do some very bizarre moves in other cities. I don't know how Tesla collects and uses self-driving data but it seems like it's different from location to location.


> I see a guy testing FSD updates in downtown Seattle

In downtown Seattle doesn't it drive into the monorail columns?


Beauty of frequent updates is no two trips have to be the same. Bonus points for AI models no single human can understand.


I had two Teslas (3, Y) and used FSD in Seattle.

It did not work and based on my experience, I am extremely skeptical it will ever work in places like Seattle. The car could not even navigate circling Greenlake without a disconnect.

Sold them at the high of the used car market because, in part, I estimate FSD is a liability for the brand and will, eventually, hurt their resale value. That and Elon Musk. Don't need to support that and won't in the future.


I had a similar experience when I first got FSD.

What I realized was that I was just being scared of the computer. It wasn't about to drive into traffic, and it wasn't about to crash into anything, or anything like that.

What was happening was that I was rightly being overly cautious of the beta program, and taking control as soon as there was really anything other than driving in a straight line on an empty road.

Over time, it became a dependable, usable copilot for me. I would massively miss FSD if it was gone.


Why give such a company even more of your money though?


Because—other than FSD—the car is fantastic. I can't imagine driving anything else.


Serious question for you: the FSD near-catastrophically failed twice before you got to town. Why did you continue to use FSD as you entered a more populated area?


Because it was Sunday morning in Old Town Scottsdale and there weren't any pedestrians around.


Maybe I'm old fashioned but I've got no intention of buying / enabling FSD on my 2020 Model X. I just want a car that I can drive from point A to point B. I'm not even that risk averse, but enabling a beta feature on the road with a bunch of other drivers who barely know what they're doing is a stretch.


This was my experience with the beta up until about 3 months ago. Since then it’s remarkably improved.


I’m glad that you are having a good experience, but FSD reliably and repeatedly tries to stop at green lights on a 50 mph road near me. I’m just happy - sort of - that I didn’t pay for it, and that I’m only able to try it because my new Model S Plaid has been in the shop for service four times in the last three months…

(The loaners that I have had three of those four times have all had FSD.)

I am reasonably satisfied with Enhanced Autopilot on highways, though it’s unclear to me what, exactly, is ‘enhanced’ about it. And Navigate on Autopilot seems to add nothing of value.


> FSD reliably and repeatedly tries to stop at green lights on a 50 mph road near me

I believe that's a "feature". I had it on a loaner too and it wants you to give it permission to cross the intersection by pressing down on the drive stalk or briefly press the accelerator pedal.

https://www.tesla.com/ownersmanual/modely/en_eu/GUID-A701F7D...


Weird. I’m able to navigate door to door from north east seattle to south west seattle, which if you’ve ever been here is one of the most difficult and screwed up bits of infrastructure on earth. FSD used to stop at green lights very early on in the beta and you had to tap the accelerator to make it not stop, but at some point they dropped that. My FSD doesn’t just do stoplights right but does yields and merging stop signs and traffic circles and other complex decision making road signs.

It’s not flawless, especially on the Seattle roads where knowing where the road actually is requires human-level reasoning most of the time, so it drifts into parts of the road that I know from experience are for parking but the car assumes are just a widened road. Or there are a lot of roads with gravel margins that are so subtly different from the damaged road surface that it thinks they're part of the road and tries to drift over before it realizes they're not. These issues are nothing like the unmitigated disaster it used to be, though, and if they can keep up the pace they've had over the last 6 months for another 12 months, it'll be remarkably useful.


That sounds really terrible. I honestly thought Tesla FSD was better than that. It reminds me of opinions I had (and still have) from 15+ years ago, when I'd get into discussions on random forums about self-driving cars. I mean sure, perhaps 100 years down the road, when everything driving is required to be fully autonomous with interlinked wireless communication, maybe that would work.

But that is not what we have right now. Right now every driver on the road is exposed to a potential instant tragedy that is unavoidable. I mean, what is a self driving car going to do if a large semi drops a huge log or freight right in front of you? You have one second to impact. You can either attempt to hold your line and drive over the object, potentially killing/injuring the passengers. Or you can swerve left into oncoming traffic. Or you can swerve right into a busy two way frontage road.

No matter which choice is taken, I guarantee there will be a lawsuit. Perhaps one way forward would be something similar to what medical companies did in the 1980s with the creation of the "vaccine courts". Maybe we need some kind of new "autonomous driving" court which would be staffed with experts who would then decide what kind of cases have merit. That would at least better shield the parent companies and allow them to continue innovating instead of worrying about potential litigation.


It’s fine in 90% of scenarios. The other 10% are scary. Last week mine tried to do a right turn at a red light but didn’t properly detect the cross traffic. That type of scary. It would be nice if cars talked to each other, so they didn’t just have to rely on vision.


Tesla Autopilot significantly decreases fatigue on long trips. I have on numerous occasions (I think 10+) driven 2,400 mile trips. You absolutely have to stay aware during the use of Autopilot, but it really helps decrease cognitive load in many instances during a cross country trip.


So to be clear, your car almost drove itself into oncoming traffic with your child in it and your first instinct was "Hey, maybe give it two more tries"?


Comments like these disincentivize people from sharing honestly. I have full confidence that OP was telling the truth when saying it was a relatively safe / low-traffic environment, and I fully imagine they were paying attention and ready to intervene when FSD made mistakes.


> Comments like these disincentivize people from sharing honestly.

As an automotive engineer: Agreed. Realistic experience reports are useful, and that includes also e.g. what drivers are willing to attempt and how they risk-rate.


This is true, but they also disincentivize random amateurs from conducting uncontrolled safety experiments with children in the car. I think blame-free retrospectives and lack of judgement are important tools, but I also think they are best used in a context of careful systemic improvement.


Do you think they really do disincentivize that behavior (serious question, not flippant)? If my very close friend questioned my decision, certainly; but if an internet stranger with an intentionally snarky tone did it, I'm not sure it would.


There are two groups of interest to me here. Secondarily, the original poster. Primarily, the hundreds or thousands of people who see the interaction.

I don't know what the original poster would do, but hopefully they will be more inclined to think twice next time they consider performing the behavior in question. If they do think twice and the situation is unsafe, I certainly hope they won't put their kid at more risk just to spite an internet stranger.

But my primary interest is in the many more people who might be inclined to imitate the poster's behavior when they get the chance. Having the behavior contextualized like this can only help encourage them to think about the risks.


> I fully imagine they were paying attention and ready to intervene when FSD made mistakes

Is that enough? The software could decide to accelerate and switch lanes at such a fast rate that the driver wouldn't have time to intervene. It hasn't happened yet to my knowledge. But it may happen.


People sharing anecdotes isn't productive either. Someone talking about "almost crashes" is a terribly subjective thing. We have thousands of hours of youtube video of FSD. We have some data. And the value add of one commenter's experience is virtually zero.


Teslas have been driven for millions of hours at least, if not billions; thousands of hours of YouTube videos are anecdotes as well, proportionally speaking. What about Tesla releasing complete real data? What are they scared of? Until then, Tesla's claims can't be taken seriously.


Videos on YT suffer from selection bias. Folks having scares are less likely to make the time to publish them, especially if they're fan boys -- the one cohort most likely to publish.

Agree raw data, or even just per 10K mile stats, from Tesla should be table stakes. Why aren't they required to report such things by law?


I strongly disagree. It's interesting to hear a thoughtful recounting of a HNers experience.

Tesla releasing the actual raw data would be much more helpful, but of course they are refusing to do that, most likely because it would betray how overhyped, unreliable and dangerous the software is.


What do you want them to release? What does "raw data" mean to you? Does Waymo release this raw data?


Even just disengagements per 10K miles would be a reasonable start. Anonymized dumps of all automated driving would be ideal.


At the very least, anecdotes are a place to start thinking about what data to collect. And whatever you think of that, it's established in modern debates that people bring anecdotes as a way to motivate discussion. Maybe it's wrong without a proper statistical study, but it's what people do and have done since forever.


Yes, because it's not something you just try out on a whim. I personally paid $10,000 for the option, and it's nonrefundable. You also have a human desire to help out, be part of something bigger, do your part in the advancement of science & engineering. So yes, you overcome adversity, you keep on trying, and you teach those values to your kids.

Unfortunately, it increasingly looks like the experiment has failed. But not because of us. We're pissed because Musk isn't doing his part in the deal. He's not pulling his weight. At this point, he's becoming more and more an anchor around the neck of technological progress. That didn't need to happen. It didn't need to be this way. So yeah, we're pissed off, not just because of the money we paid, but also because we feel like we were defrauded by his failure to pull his own weight in the deal.

I wouldn't be surprised to see him make his escape to the Bahamas before this decade is up.


> You also have a human desire to help out, be part of something bigger, do your part in the advancement of science & engineering. So yes, you overcome adversity, you keep on trying, and you teach those values to your kids.

Why not restrict your beta testing to a closed course? A race track? Some land you bought out in the middle of the desert? Have the kids make some stop signs, plow your new roads, etc.

No one else on the road is consenting to a technology that could rapidly accelerate and course correct into their vehicle at some undetermined time.


Sacrificing your kid to make sure Musk’s project gets off the ground sure is devotion to science and engineering, I'll give you that.


Unfortunately, paying that money also provides an incentive to lie about how advanced the functionality is and to hide unflattering data.

Funding responsible self-driving research seems like a great use of money to me, but testing a flawed system in the wild does not.


I'm just confused that she'd buy another Tesla after that experience.


Not sure where you got "she" from, but regardless, I bought another Tesla because the car is fantastic. The FSD is not.


Well, she already had paid to install the at home charger for Teslas.


> I wouldn't be surprised to see him make his escape to the Bahamas before this decade is up.

I hate to say it, but I'm starting to get this vibe too, particularly when I watch interviews of him from 1-3 years ago. They don't age well.

They're full of promises like "only do good things for your fellow man, be useful, etc.", and that ethos seems to be lost now.


>Yes, because it's not something you just try out on a whim. I personally paid $10,000 for the option, and it's nonrefundable.

For those that didn't buy it, you can 'rent' it for a monthly subscription fee.


> You also have a human desire to help out, be part of something bigger, do your part in the advancement of science & engineering.

The fact that you associate those ideals with purchasing a consumer product made by a public company is intentional.


Well, you don’t need to go far to find others defending the risk to others as well: “it wouldn’t have gone up the curb if there was a person there”. It's interesting to see how cavalier people normally are about making that judgement for others, especially for their offspring.


Also consumer bias, or "post-purchase rationalization" - i.e. humans overly attribute positivity to goods/services from brands they buy from.

Even when it's as bad as throwing you in the wrong lane of traffic.


Anyone can make more kids. Not everyone can do more science!


We've experiments to run / there is research to be done / on the people who are still ali~ive!


-- Cave Johnson


New life motto, thank you


I know this is a joke, but not everyone can make more kids.


Every car "nearly drove itself into oncoming traffic" if the driver doesn't takeover. Its not like he climbed into the backseat and said, "Tesla, takeover". No, he let the car help with the driving, but maintained control of the vehicle to ensure the safety of the child.


> Every car "nearly drove itself into oncoming traffic" if the driver doesn't takeover.

Those other cars don't claim to drive themselves.


I empathize that people are frustrated with the marketing claims of this particular feature, which are clearly bunk, but the point of the post you're replying to is not to defend it, it's to defend that the other commenter is not being negligent and putting their child in danger...


Maybe not his kid, assuming he has more faith in Tesla's crash-worthiness than in its FSD.

But he'd definitely be risking other road users and pedestrians if that car keeps trying to run up sidewalks and cause other havoc on the roadway.


If you are fully attentive, you can correct course once you realize a mistake is being made. My Honda Civic has adaptive cruise control and lane keep, and I run into issues with it reasonably often. I'm not complaining: after all, it's not marketed as much more than glorified cruise control. And either way, turning it on is not a risk to me. With any of these features, in my opinion, the main risk is complacency. If they work well enough most of the time, you can definitely get a false sense of security. Of course, based on some of the experiences people have had with FSD, I'm surprised anyone is able to get a false sense of security at all with it, but I assume mileages vary.

Now if the car failed in such a way that you couldn't disengage FSD, THAT would be a serious, catastrophic problem... but as far as I know, that's not really an issue here. (If that were an issue, obviously, it would be cause to have the whole damn car recalled.)

All of this to say, I think we can leave the guy alone for sharing his anecdote. When properly attentive, it shouldn't be particularly dangerous.


FSD can (sometimes very wrongly) act in milliseconds. Even attentive humans have to move their arms and the wheel, needing hundreds of milliseconds. The same humans who may have become numb to always paying attention, especially if it works well enough most of the time.
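
For scale, a quick sketch of what those hundreds of milliseconds mean in distance; the takeover latencies here are illustrative assumptions, not measured figures:

    # Distance the car covers while a human completes a takeover.
    for mph in (30, 70):
        v_ms = mph * 0.44704              # mph -> m/s
        for latency_s in (0.3, 1.0):      # assumed takeover latencies
            print(f"{mph} mph, {latency_s:.1f} s takeover: "
                  f"{v_ms * latency_s:.1f} m traveled")
    # 30 mph: ~4 m at 0.3 s, ~13 m at 1.0 s
    # 70 mph: ~9 m at 0.3 s, ~31 m at 1.0 s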


that makes absolutely no difference in the context of the comments above.

If a product being overhyped prevents you from using it after you paid for it, you're gonna have to live with no computer, no phone, no internet, no electricity, no cars, no bikes.


If FSD decides to do max acceleration and turn the wheels, can you stop it in time? Zero to 60 is under 3 seconds, right?


If your brake lines burst, will you be able to coast safely to a stop?

Every piece of technology has a variety of failure modes, some more likely than others. FSD is not likely to take aim at a pedestrian and floor it, just like your brakes aren't likely to explode, and neither of you are irresponsible for making those assumptions


The difference is brake lines are inspected and understood by humans. Failure to reasonably maintain them is illegal.

No single human fully understands these AI models, and they can change daily. Yet Tesla is putting them in control of multi-ton vehicles and leaving all liability on humans with worse reaction time and little to no understanding of how it works or tends to go wrong.


What if it decides to short the batteries and set the car on fire? Can you stop it from doing that?

I think you are making scenarios that no reasonable person would assume. There is a difference between 'getting confused at an intersection and turning wrong' and 'actively trying to kill the occupant by accelerating at max speed while turning the steering'.


Battery and battery management is more straightforward. BYD has mastered it.

FSD is a blackbox. Even Tesla appears to be unable to prevent regressions. One of the most sophisticated companies in the world can't prevent regression in a safety critical software they frequently update. Let that sink in.


So, third-hand story, about 20 years ago, from an acquaintance who heard it from a Police officer who dealt with the following situation:

This police officer was responding to an RV which had run off the road, and was speaking with the driver. The driver, a bit shook up from the experience explained that he was driving down the highway, turned on the cruise control, and got up to make himself a sandwich...


“3rd-hand story”, from a friend of a friend…uh, huh. Unless I’m missing a heaping bucket of “ironically”, you could have just said “there’s an old urban legend…” instead of passing off an old joke as something true.


Well, up until a moment ago, I legitimately believed it to be true - even though I was a few steps removed from it. Live and learn I guess.


I have been duped like this before too! Believing a story that just doesn't ring right when you tell it to somebody else years later. Teaching / communicating corrects a lot of errors.


I remember my grandma telling that story maybe 30-40 years ago. Gotta be an urban legend. Yup: https://www.snopes.com/fact-check/cruise-uncontrol/


I've no doubt this has probably happened in real life at some point but it's practically a fable by now.

I think the Simpsons done it at least 30 years ago.


I saw a "trying out my parking assist" (I think it was with a Hyundai) video the other day where the guy didn't realize that the function only assists with steering and not the pedals. So he backed right into a car.


This is literally the story of a Berke Breathed Bloom County (or whatever the follow-on was) comic strip.


Even bought another car from the company.


The car is great. FSD is not.


The Full Self Driving package acts like the grammar errors in spam emails: it self-selects for people with no understanding of technology. I fully expect the wilder shenanigans in Tesla's future to be targeted directly at them.


Come on, this feels overly aggressive. Circumstances are nuanced, we don't know to what degree of danger any of these situations posed to the child, only the parent does. Judge not lest ye, and such.


Ah yes, surely the meaning of that bible verse is, "Don't ask mildly difficult questions based in any sort of moral stance." Because we all know that book is famously opposed to performing any sort of moral analysis.


There's asking questions about the circumstances to better understand before casting judgement, and then there's sarcastically implying that OP is a bad parent for endangering their child without asking any actual questions about what happened.


That was not sarcasm, which generally requires words used in contradiction to their normal meaning. E.g., if somebody makes a dumb mistake, the response "nice work, Einstein" would be sarcastic. This was at worst mocking, but it wasn't even hyperbolic, given that what was written was a literal description of what the guy did.

Regardless, you haven't answered the point about the quote. "Judge not lest ye be judged" does not mean we have to empty-headedly refrain from any sort of moral criticism. In context, it's about hypocrisy, reminding us to apply our standards to ourselves as stringently as we do others. I think it's only appropriate here if tsigo somehow indicated he would happily endanger his own children, which I don't see any sign of.


Semantics that ultimately don't change the crux of my point, even if I disagree with some of them, but thank you for clarifying.


Beta testing the guidance system for a 4500 lb steel slug in a pedestrian environment is one thing.

Deciding that you want to put your family into that steel slug for the very first test seems to me to be an entirely different level of poor decision making.


We'd used Autopilot (non-FSD) for a year at that point. I was used to mistakes and knew how to disengage quickly before they could become problems.

I was expecting FSD to be bad, but I sure wasn't expecting it to be that bad.

Maybe without my disengaging, any of the three incidents could have become problems, but for someone who knows how Autopilot works it was more comical than dangerous.


No harm, no foul. GP's child learned a valuable lesson which may serve the child well in the decades to come: don't trust self-driving.


I guess the lesson is more for the parent, unless the child will be getting $10k less worth of ice cream.


Pressing a button and then paying careful attention to watch the system perform is VERY different from just engaging the system and trusting it to do its thing.

I think the software is a half-baked gimmick, but come on: the "look guys, I care about children too" variety of in-group signaling, with a side of back-seat parenting, adds less than nothing to the discussion of the subject at hand.

And INB4 someone intentionally misinterprets me as defending the quality of Tesla's product, I'm not.


I assume that you have snarky comments to spare for the company who legally put this option in his hands as well, is that right?


[flagged]


To be fair, you did say that it literally drove up onto the corner of the intersection and "thank god there were no pedestrians there", which does not make it sound like you were in full control at all times, but rather that it was lucky no one was there or they would have been hit.


Presumably, if they had seen pedestrians standing on the corner, they would have intervened long before Tesla's AI could mount the curb. I've never driven a self driving car but I imagine it's a bit like being a driving instructor whose feet hover over the passenger brake. You'll be a lot more vigilant when the danger or consequence is higher.


> nobody was ever actually in danger

not sure you're qualified to make that assertion, simply based on the series of choices you've described yourself making here.


Only on HN could someone be arrogant enough to think their non-experience is more accurate than another's actual experience.

At no point was anyone—myself, my child, or another party—in any danger. Perhaps that would be different for someone who had never used Autopilot and didn't know how to disengage. But after a year driving that car, I'm comfortable in my assertion that the situation was safe.

But, by all means, please continue with the HN pedantry. We wouldn't want this place to change.


Why even buy from that manufacturer then?


[flagged]


~1500 animals died at Neuralink, vast majority of them being rodents. These are not large numbers.

https://www.reuters.com/article/factcheck-neuralink-fabricat...


> These are not large numbers.

Only when compared to completely different industries, the most commonly cited being education or food.

When compared to its own industry, the numbers are still large.


Its own industry being medical research? Perhaps somewhat more important than eating meat or beauty products?


That article also mentions an ongoing federal investigation into the poor treatment of animals there.


This factcheck looks legitimate. “Over 280 sheep, pigs and monkeys” is unfortunately not very specific though.


It does show that the "3000 monkeys" claim is misinformation since the real number is over an order of magnitude less.


It's a huge number for this sort of research. Huge.


A single cancer drug from idea to FDA approval kills untold numbers of rodents and monkeys. Price we are willing to pay as humans.


Could you please not propagate fake news? There is high enough signal-to-noise ratio already on the internet.


One problem with this experiment is that, I'm guessing, you monitored the car very closely, ready to take over at any moment, and, in fact, did so in a few cases.

But this raises the question, what would have actually happened if you just let the car be? Would it have gotten into accidents? Or maybe it just drives in this alien unfamiliar way but is actually safer?


> Or maybe it just drives in this alien unfamiliar way but is actually safer?

As long as the roads are a mixed environment of autonomous and non-autonomous vehicles, driving in an unfamiliar way is by definition unsafe because other road users can't anticipate what your car is going to do. That's not even mentioning pedestrians or cyclists.


Sounds like great questions to answer in a safe controlled test environment before letting it loose on public roadways.


> Thank God there weren't pedestrians there.

It would've still stopped - the FSD system doesn't override the background tasks that power the AEB systems that stop the car from hitting humans or cars.


> It would've still stopped

We don’t know that. And Tesla cannot claim it.


We also don't know the opposite but that sure as heck won't stop people claiming they know it as a fact and cause the publication of dozens of news articles over nothing.


It is the precautionary principle.


Except that time FSD drove right into a semi truck and killed the guy sleeping in his car...


You mean it would disable FSD when it would see that impact was inevitable so Tesla could claim that FSD had nothing to do with the accident.


> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact,

https://www.tesla.com/VehicleSafetyReport


It would make sense with 30+ seconds, depending on the level of the warning.


30 seconds is a _very_ long time when driving.


Imagine the AI puts you into a separated oncoming-traffic lane when visibility is extremely low (a few meters) and then disengages. It might take you quite a while to get out of such a conundrum.


Standard autopilot doesn't do lane changes. Enhanced autopilot does, but requires the driver to acknowledge with the indicator stalk.

I know this was just an example scenario and your point is broader, but I'm struggling to think of another circumstance where autopilot is at fault 30 seconds post disengage, as it's effectively just a fancy adaptive cruise control.


How does this work at intersections? E.g. if you have to turn because straight ahead there's a wrong-way sign for a one-way road.


Autopilot steers within the lane you're in. If it's unsure at the intersection, it will force the driver to retake control.

I've never been at an intersection where straight ahead is a wrong way (wouldn't the traffic then be facing you?), but like any cruise control system, it would likely require driver attention.

Autopilot is not intended to let you enable it and go to sleep. It's simply fancy cruise control.


Citation needed.

Unless you are referring to these [0] incidents from 2016, before FSD was released. That was Tesla Autopilot (which comes standard on all Tesla vehicles now).

Also, FSD uses active camera monitoring to make sure the driver is paying attention, so no, you can't sleep in your car while FSD is activated.

[0] https://www.wired.com/story/teslas-latest-autopilot-death-lo...


What is the difference between Autopilot and Enhanced Autopilot?


AEB cannot magically instantaneously stop a car that FSD has put into the wrong lane. If you believe otherwise, please do not drive. Cars are dangerous in the hands of those who do not take risks seriously.


> It would've still stopped

Would it have though?

https://www.youtube.com/watch?v=3mnG_Gbxf_w

At this point I can't imagine buying anything Tesla related. Or anything related to Elon Musk for that matter. He's a grifter who has managed to stand on the shoulders of giants and call himself tall.


I want to understand the minds of engineers at Tesla - or anywhere - who willingly put this kind of stuff out into the world. How do you convince yourself Tesla FSD is safe enough to ship? Are you just so worried about your boss being mad at you that you don't care? Are you so bought-in that you ignore obvious signs what you're doing is dangerous?

It's engineering malpractice to test this product on public roads, in my view. It's beyond malpractice that the government - at city, state, and federal levels - is allowing it.

(To be clear, I am not against self-driving cars in all instances. I am talking specifically about the Tesla FSD, which has been dangerous since launch and isn't getting better.)


The counterpoint goes something like this (not that I necessarily buy it, but this is what I infer to be Tesla's reasoning):

1) We're only going to fully "solve" self-driving with ML techniques to train deployed NNs; it can't be done purely in human-written deterministic code because the task is too complex.

2) Those NNs are only going to come up to the necessary levels of quality with a ton of very-real-world test miles. Various forms of "artificial" in-house testing and simulation can help in some ways, but without the real-world data you won't get anywhere.

3) Deploying cars into the real world (to gather the above) without some kind of safety driver doesn't seem like a great path either. There's no backup driver to take over and intervene / unstick the car, and so far driverless taxi fleet efforts have been fairly narrowly geofenced for safety, which decreases the scope of scenarios they even get data on vs the whole real-world driving experience.

4) Therefore, the best plan to acquire the data to train the networks is to use a ton of customers as safety drivers and let them test it widely on the real routes they drive. This is tricky and dangerous, but if it's not too dangerous and the outcome saves many lives over the coming years, it was worth it.


I feel like you could enable FSD for every Tesla car in a "backseat driver" mode and have it mirror the actions the driver takes (so it doesn't have control, but you're running it to see what it would do, without acting on it), and you watch for any significant divergences. Any time FSD wanted to do something but the driver did something else could have been a real disengagement.
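
A minimal sketch of what that "backseat driver" comparator could look like (Python; all names and thresholds here are hypothetical, not Tesla's actual telemetry):

  # Hypothetical shadow-mode check: the planner runs but never actuates;
  # we log only the moments where its plan diverges from the human's inputs.
  from dataclasses import dataclass

  @dataclass
  class Controls:
      steering_deg: float  # steering wheel angle
      accel_mps2: float    # requested acceleration (negative = braking)

  STEER_TOL_DEG = 5.0      # assumed tolerances; real ones would need tuning
  ACCEL_TOL_MPS2 = 1.0

  def log_divergence(t: float, planned: Controls, actual: Controls) -> None:
      # stand-in for uploading a sensor snapshot for offline labeling
      print(f"t={t:.1f}s planned={planned} actual={actual}")

  def check(t: float, planned: Controls, actual: Controls) -> bool:
      diverged = (abs(planned.steering_deg - actual.steering_deg) > STEER_TOL_DEG
                  or abs(planned.accel_mps2 - actual.accel_mps2) > ACCEL_TOL_MPS2)
      if diverged:
          log_divergence(t, planned, actual)
      return diverged

Each such divergence event would be a candidate "virtual disengagement" for offline review.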


They had been doing that, and called it "shadow mode" [1]. I suspect it's no longer being done, perhaps they reached the limit of what they can learn from that sort of training.

[1] https://www.theverge.com/2016/10/19/13341194/tesla-autopilot...


When it's in 'real mode', any disengagement or intervention (i.e. using the accelerator pedal without disengaging) is logged by the car and sent to Tesla for data analysis, and this has been a thing for a while. Of course, we don't know just how thoroughly their data science plays into FSD decision-making, or which interventions they actually investigate.


I believe this is exactly how Comma trains OpenPilot.


I don't think that would work, due to "bad" drivers. We all drive differently than we know we should in certain circumstances (e.g. when the road is completely empty in the middle of rural New Mexico).

For example, you can imagine FSD would decide to go straight down a straight lane with no obstacles - that would be the correct behavior. Now imagine in real life the driver takes their hand off the wheel to adjust the radio or AC, and as a result the car drifts over and lightly crosses the lane marker - this doesn't really matter, because it's broad daylight and the driver can see there's nothing but sand and rocks for 2 miles all around them. What does the machine conclude?


I forget who it was (maybe George Hotz) that said something to the effect of "All bad drivers are bad in different ways, but all good drivers are good in the same way".

The point being made was basically that in the aggregate you can more or less generalize to something like "the tall part of the bell curve is good driving and everything on the tails should be ignored".

Since learning happens in aggregate (individual cars don't learn – they simply feed data back to the mothership), your example of a single car errantly turning the wheel to adjust the radio would fall into the "bad in different ways" bucket and it would be ignored.
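
As a rough sketch of that filtering idea (Python, with an invented MAD-based outlier rule standing in for whatever Tesla actually does):

  # Fleet-side aggregation for one road segment: keep only samples near the
  # consensus (the tall part of the bell curve), drop the radio-adjusting drift.
  import statistics

  def consensus(samples: list[float], k: float = 3.0) -> list[float]:
      med = statistics.median(samples)
      mad = statistics.median(abs(s - med) for s in samples) or 1e-9
      return [s for s in samples if abs(s - med) <= k * mad]

  # lane offsets in meters for the same stretch of road; 0.90 is the drifter
  offsets = [0.05, -0.02, 0.00, 0.04, -0.03, 0.90]
  print(consensus(offsets))  # the 0.90 sample is discarded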


"All bad drivers are bad in different ways, but all good drivers are good in the same way".

I accept that as a plausible hypothesis to work off of and see how far it goes, but I would not bank on it as truth.

I'll give another example: I think a significant portion of the time, people roll through stop signs (we'll say 25% of the time? intuitive guess). I do it myself quite often. This is because not all intersections are built the same - some intersections have no obstacles anywhere near them and you can tell that, duh, there are no cars coming up at the same time as me. Other intersections are quite occluded by trees and whatnot.

I'm fine with humans using judgement on these, but I would not trust whatever the statistical machine ends up generalizing to. I do not think rolling a stop sign makes you a 'bad' driver (depending on the intersection). Still, if I knew I was teaching a machine how to drive, I would not want it to be rolling stop signs.


That sounds like a complicated way to say there are more ways to screw up than to do it perfectly, which, duh.

Not to discount this at all, but... yeah.

Even if the brains of it become perfect, I doubt the vision-only approach (or has that changed?).

They need at least a somewhat decently consistent 'signal' to act appropriately... and there are some mornings I just don't drive because visibility is so poor.


The theory would be that this washes out in the noise. It's a simplification, but on average, most of the people most of the time are not doing that - why would it zone in on the rare case and apply that behavior?


Well, zoning in on the rare cases is the difference between what we (as in society's collective technology, not Tesla) have today and full, reliable self-driving.

Even in the anecdotes throughout the rest of the comment section, there are a lot of people who said "yeah, I tried FSD for a limited period of time and it worked for me". We're not saying that taking FSD outside right now will kill you within 5 minutes. We're saying that even if it kills you 0.01% of the time, that's pretty sketchy.

The general principle is that all of the drivers who have been recruited to be 'teachers' to the machine are not aware that they are training a machine. As a result, they are probably doing things that are not advisable to train on. This doesn't even just apply to machines - how you drive when you are teaching your teenage child is probably different from what you do on a regular basis as an experienced driver. If you are not aware that you are actually teaching someone something, that's a dangerous set of circumstances.


By this reasoning, shouldn't Tesla pay users instead to enable FSD and collect data for them?


It seems they're doing quite well on their financials by offering access to FSD as a subscription. The misconception here is that FSD is needed for them to collect data - they collect Autopilot sensor data on all cars, regardless of whether FSD is enabled.


A tangent to that thought... "do you want people to be financially incentivized to get into novel situations, just to test the cases where FSD was lacking data?"

I recall Waze had a point system to help gather positioning / speed data for side roads - it would place icons it wanted you to go collect... and those were just fake internet points.


I could see something like this being their logic - maybe not with neural networks/machine learning specifically, but certainly "the only way to get to where we want to go is to do this".

My counter-counter-point would be that there's plenty of other companies that are doing this more safely, and also that ends don't justify the means when those means involve killing pedestrians.


Those other companies are rapidly going bankrupt because the economics of doing it the non-Tesla way seem impossible.

Zoox was bought by Amazon for $1 billion, which seems like a lot, but that was roughly the amount of money invested into the company, so it was effectively sold at cost to Amazon.

Argo.ai just shut down. VW and Ford spent several billion dollars on that.

Drive.ai shut down and was acqui-hired by Apple for the car project that was reportedly just pushed to 2026.

Aurora is publicly traded and on the ropes, reportedly trying to find a buyer before they run out of cash.

We'll see how long GM and Google will be willing to put ~$2 billion a year into Cruise / Waymo. I don't see them generating significant revenue any time soon.

Tesla and comma.ai have a model where they make money while making progress. Everyone else just burns unholy amounts of capital and that can last only so long.


So we're arguing it's better to offer an FSD that crashes than to go bankrupt, because maybe one day it won't?


No, I'm arguing that Waymo, Cruise and others following similar strategy will go bankrupt before delivering a working product and Tesla / Comma.ai won't.

As to crashes: the disengagements part of your rebuttal is an implied claim that Waymo / Cruise are perfectly safe.

Which they are not.

FSD has been deployed on 160 thousand cars. No fatalities so far. No major crashes.

Cruise has 30 cars in San Francisco and you get this:

> Driverless Cruise robotaxis stop working simultaneously, blocking San Francisco street

> Cruise robotaxis blocked traffic for hours on this San Francisco street

Another Cruise robotaxi stopped in the muni lane.

Waymo car also stopped in the middle of the road.

Neither FSD or Cruise or Waymo had fatalities.

They all had cases of bad driving.

This is not Safe-but-will-go-bankrupt vs. not-safe-but-won't-go-bankrupt.

It's: both approaches are unsafe today, but one has a path to becoming safe eventually and the other doesn't, if only because of the economic realities of spending $2 billion a year with no line of sight to breaking even.

https://www.theverge.com/2022/7/1/23191045/cruise-robotaxis-...

https://techcrunch.com/2022/06/30/cruise-robotaxis-blocked-t...


> As to crashes: the disengages part of your rebuttal is implied claim that Waymo / Cruise are perfectly safe.

I didn't mean to suggest that. I was responding to your words here:

> Tesla and comma.ai have a model where they make money while making progress.

I'm saying that it's not OK for a car company to keep going with dangerous self driving just because it can afford to.

> FSD has been deployed on 160 thousand cars. No fatalities so far. No major crashes.

That doesn't seem to be the case[1]. Though now we're going to squabble about definitions of "major", and also about how this reporting is happening.

[1] https://www.latimes.com/business/story/2022-07-14/elon-musk-...


It's worse than that in my reading; the argument is entirely neutral on crashes, the only metric of success presented is not going out of business!


That's how we got cars, planes, medicine, bridges, and ... almost everything.

We can't wait for perfection. The question is how much risk are we willing to absorb.


If someone told you that they were going to revolutionize bridge building, but that it was going to take a bunch of catastrophes to get there, how would you feel about it?


The fact is they did not tell you, but it happened and still happens. Bridge design and construction use safety factors, yet there have been bridges falling down, with Italy and Mexico as recent examples: https://m.youtube.com/watch?v=hNLP5shZciU https://m.youtube.com/watch?v=YXmbkbr0L18 A few years ago I built realtime seismic impact monitoring and analysis technology, and the standard answer was along the lines of “we’ve got insurance if people die so why bother”.


In terms of the grandparent's question:

If the FSD crashes, but much less than human drivers, I'm all for it.

But how much less is enough? Very interesting question!

1000:1 sure!

10:1 strong maybe!

11:10 probably not, given the moral and legal can of worms, though it'd be nice to save ~10% of injuries...
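
For concreteness, here's the arithmetic behind those thresholds (just restating the ratios above):

  # If humans have R crashes for every FSD crash, switching to FSD
  # avoids a fraction 1 - 1/R of them.
  for r in (1000, 10, 1.1):
      print(f"{r}:1 -> {1 - 1 / r:.1%} of crashes avoided")
  # 1000:1 -> 99.9%, 10:1 -> 90.0%, 1.1:1 -> 9.1% (the "~10%" case)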


IMHO, if they are allowed to use the public as a guinea pig, the data they collect should be available for everyone.


The fact that the other companies, which went with a more thoughtful roll-out and delayed their time to market, have a much better track record is a strong counter-counter-point, IMO.


The thing is that Tesla FSD is trying to solve a different problem than cars driving in geographically limited areas.

Thus comparing the disengagement rates does not make sense.


If you don't count disengagements then what metric do you use? Because I'd guess a statistically significant portion of disengagements are likely accidents that would've happened--if not for human intervention. Which if we're calling it 'full' self driving suggests you shouldn't need to intervene at all.


>4) Therefore, the best plan to acquire the data to train the networks is to use a ton of customers as safety drivers and let them test it widely on the real routes they drive. This is tricky and dangerous, but if it's not too dangerous and the outcome saves many lives over the coming years, it was worth it.

Maybe you should use specifically trained test drivers, who are acutely aware of the limitations and know how to deal with them, not random people who have been told through intentional snake oil marketing by a billionaire with a god complex who needs to feed his ego that the car can drive itself.

It's insane that governments allow these vehicles on the road.

Also, that kind of the-ends-justify-the-means reasoning has led to a lot of catastrophic developments in history. Let's not go there again.


I appreciate being principled about ends not justifying the means. But in my experience this principle is not applied universally by people. It's cherry-picked as what amounts to a non-sequitur when deployed in a discussion. Don't get me wrong, I wish it were a universally held and enforced moral principle, but it's not.

Anyway, the reality is that Teslas are safer than any other car on the market right now, despite the scary voodoo tech. So it seems in this case the means are also justified. If auto-pilot and FSD were causing more accidents than humans, we'd be having a different conversation about ends justifying means, I surely agree.


> Anyway, the reality is that Teslas are safer than any other car on the market right now, despite the scary voodoo tech.

What data are you using to conclude that?

Safer for Tesla drivers or others near and on roads?


Ends-justify-the-means reasoning has also led to many of the innovative wonders we're all now relying on every day. While the customer test drivers aren't "trained", there was some caution in the approach.

Customers had to opt-in to request beta-testing access, they had to pass a Safety Score system that grades their driving safety (same as car insurance apps, roughly) for a long period (in some cases many months!), etc. After going through those hoops, when they finally get the software for it, they're required to consent again. IIRC the text there includes things like: You are in control at all times, must keep hands on the wheel and eyes on the road and intervene for safety, you are liable for anything that happens, this software can do the worst possible thing at the worst possible time, etc. They also monitor for your hands on the wheel (via torque sensing) and use an in-cabin camera to monitor whether you're watching the road or looking at a cellphone, etc. These measures can be defeated with effort by unsafe idiots, but that's no different than the risks such unsafe idiots present when driving any car.
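
Compressed into code, that gating flow looks something like this (a sketch with invented thresholds; the actual Safety Score criteria aren't spelled out here):

  # Hypothetical eligibility check mirroring the opt-in process above.
  def eligible_for_fsd_beta(opted_in: bool, safety_score: float,
                            days_tracked: int, reconsented: bool) -> bool:
      # invented thresholds: a high score sustained over a long tracking window
      return opted_in and safety_score >= 95 and days_tracked >= 60 and reconsented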

With all of that in place, they've scaled up over a couple of years to 160K customer test pilots. Accidents happen, but there's no evidence the rate of them is anything to worry about. If anything, what evidence there is seems to point in the direction of FSDb test drivers being safer than average. However, they're supposedly removing the Safety Score part of the process Very Soon (likely in the next few weeks), but the rest of the warnings and mitigations should remain.

--- meta stuff:

There's a ton of money and a ton of ego pushing huge agendas in every direction when it comes to anything Elon-related, Tesla included, especially since the Twitter saga began and he started really going off the rails more. Almost anything you read on related topics, regardless of which "side" it's on, you have to question the motive to even begin to understand the underlying objective truth. I follow Tesla news a lot, and I'd say ~90% of all random internet news articles on these subjects (positive and negative) are mostly utter bullshit clickbait when they're not outright fraud, and they're designed to influence stock prices and/or buyer behaviors more than they provide useful information. When big money and big egos are in a war over something, objective truth on the Internet is a casualty.

If you ignore all that line noise and look at the objective reality of the engineering parts though: it's pretty amazing beta software with a lot of future potential, and the testing has gone pretty smoothly in terms of safety. It could be many years before you'd let it chauffeur some elderly person on a pharmacy run as a robotaxi, but IMHO it's still a better bet than most of its competitors in the long game of fully-generalized self driving on the human-centric road networks we have today.

As for Elon himself: clearly, some of his behavior and viewpoints lately are pretty objectively terrible. At least you can see it? How many executives from companies that built things we all relied on over the past few decades have really been any better? They've mostly been better at hiding it, while Elon puts it on full display. The world is what it is.


We don't dissect live humans despite the potential for scientific advancement. Would it be so bad if FSD weren't on public roads until its disengagements per 10K miles driven were at least as few as human accidents per 10K miles?


Disengagements in Tesla data are going to commonly be for much less serious things than potential accidents (merely inconveniencing others, or embarrassing the driver in some way, or even a safe move that just didn't /feel/ right to the driver). They've published actual accident statistics for Autopilot, and those show that it has a lower accident rate than manual driving even on the same fleet of Teslas (which in turn have a lower accident rate than the rest of the US even when manually driven).

Driving is inherently very dangerous. Traffic accidents are a leading cause of death in the US. You're not really chasing perfection to win this game. It's not all that hard to beat humans on average, because the average human is pretty terrible. It's a statistical certainty that some people will die at the hands of Autopilot even when it's in some final non-beta state, but it will probably be fewer people than would otherwise die over the same miles driven manually.

The hard thing for Autopilot-like systems is perceiving the world accurately. "The world" includes both the physical reality of roads+cars it senses, as well as things like traffic rules, corner cases (construction, debris, manual police control of an intersection with a dead traffic light), and gauging the intent of other/human drivers. Humans are inherently better at that stuff. The software has to get better than it is today, but it will probably never fully match the best humans at this part.

However, there are two key ways the software inherently outdoes humans:

(1) It can have more sensory data than humans. Even Tesla (currently sans Radar + Lidar, just cameras) can see in every direction at the same time, all the time, with an overlapping array of cameras. No blindspots, no temporary blindness in one direction while you crane your neck to check in another, etc.

(2) It never gets tired, distracted, or drunk. It's never checking a facebook feed, or nodding off after a sleepless night, or too engaged in a conversation with a passenger to remember to look for that cross-traffic, etc. This is a huge advantage when it comes to accident rates.


> Autopilot even when it's in some final non-beta state, but it will probably be fewer people than would otherwise die over the same miles driven manually.

Bold claim. We would need the data to be sure. Judging by the reports of Tesla owners in this thread, I'd guess FSD and Autopilot are probably causing more harm than they're preventing.

> It never gets tired, distracted, or drunk.

Which would be a great benefit if FSD didn't drive like a drunk teenager.

Look, for any tool humans rely on, there must be predictability. And until we have enough public data, no conclusions can be drawn. That's why it's in Tesla's interest to keep releasing less and less data, except for the data that makes them look good.


Yeah, move fast and break things: you can just do beta testing with live, real human beings.

That Tesla is even allowed to do it speaks volumes about the unchecked power Elon's influence wields.


Maybe abusing workers into 80-hour-a-week death marches really doesn't produce good results?


The even more insane part is that the newest FSD turns off the distance sensors and tries to rely entirely on computer vision, even though it has blind spots and computer vision is one of those fields where success is still measured statistically. The newest Teslas don't even ship with the sensors anymore. It's like Elon is setting them up to fail.


You misunderstand what it does today. It doesn't make the car autonomous. It is a driver assistance system that must be constantly monitored by the driver, it is perfectly functional and helpful if you understand what it does.


You're right.

However, this is copy-pasted from Tesla's website:

"Tesla cars come standard with advanced hardware capable of providing Autopilot features, and full self-driving capabilities—through software updates designed to improve functionality over time."

Who's to blame for this misunderstanding?


My take - Tesla doesn't really employ people who understand safety (and I don't mean that as a ding on the people working there - most software engineers just have no idea what is required to build a safety-critical system). As a result, people likely think they're doing things correctly.


I would guess the actual engineers know it's not nearly ready, and it's product/marketing people (or senior leadership) who are forcing it to be released too early


If the engineers know it is not ready and they still deploy it despite knowing it is reasonably foreseeable that this system will cause injury or death, that is still on them. Unless the product/marketing people are holding the engineers at gun point, the engineers are free to refuse and quit if need be. I have quit jobs (with no prospects lined up) and turned down offers in the past due to ethical objections, so I am not particularly sympathetic to the "just doing my job" defense.


It's the same way that the news can get biased without corruption. If you disagree with the boss, they can find someone who agrees.


Also, H-1Bs are kind of trapped.


This is what happens when no one takes their ethics classes seriously because of naive "innovation good" thinking.


There have been a lot of cars on the road with FSD for quite a while now. You say it's dangerous, but do you have any data to substantiate that? Even a single accident?


I wonder if it's the case of, "someone will ship this even if I quit and lose my income, so I may as well do it."


The individual engineers working there probably make whatever part of the code that they're working on much better--before they get burned out and quit after a year or three and then that part of the codebase rots.


I've commuted a bit on the German highways. I only run Autosteer, which is cruise-control + lane-control, and it disengages almost every time there's roadwork.

It is very bad at handling ambiguity when the road has both white and yellow navigation lines. There are a number of scenarios, some of which involve criss-crossing lines, some where they run mostly in parallel.

The car sometimes wants to follow the white lines when the yellow ones are actually the overrides. When the yellow lines are wrong (and should rightfully have been scraped off), this always leads to disengagement. For stretches where the yellow lines are correct, and the older white lines are off by half a meter so that there's effectively not enough space between the white lines and the concrete side, autosteer will sometimes try to switch to the white lines, which would lead to the car smashing into the concrete side.

And sure, autosteer != FSD. But it's a much simpler problem that the Tesla still basically fails at.

Sometimes, in fog, the cruise-control will abruptly brake. Very nerve-wracking.

Relying fully on vision has its downsides. Listening to Karpathy on Lex Fridman's podcast, it sounds like this was cost-cutting. I can't believe that being able to confirm "nope, that big white thing isn't a wall" wouldn't make for objectively better navigation.
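
An invented fusion rule makes the point about why a second modality helps: vision alone has to classify the "big white thing", while radar can simply measure range, independent of lighting:

  # Illustrative only - not any vendor's actual logic.
  def obstacle_ahead(vision_says_wall: bool, radar_range_m: float | None) -> bool:
      if radar_range_m is not None:
          return radar_range_m < 50.0  # physics-based range measurement
      return vision_says_wall          # vision-only: classification can fail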


Bit by bit, it gets harder to figure out how buying a Tesla is ever the right answer. Autosteer (beta) is worse than the competition, traffic aware cruise control is timid to the point of braking for things which don't even exist, you don't get basic things like ultrasonic sensors, radar, rain-sensing wipers, or even carplay. And even when they do include something that sounds halfway cool, it turns out to be underwhelming -- the matrix headlights are all of 100 pixels, which would have been cool about 10 years ago. And they aren't even enabled. Given how poor automatic high beams are, do we really want to see how Tesla matrix headlights would work in real life?


> Autosteer (beta) is worse than the competition

Autosteer on our Tesla Model 3 is actually pretty great. I use it all the time because it makes driving a lot less tiring, especially over long distances. I don't know where you get your information.

As far as the competition goes, our other car, a 2022 Kia Forte, now comes with an Autosteer-like feature as well. With that, unless the lines are pristinely painted, you are literally taking your life into your own hands.


Same here! I love the autosteer in my Sept 2019 Model 3.


> Bit by bit, it gets harder to figure out how buying a Tesla is ever the right answer

Great car or great charging network. Pick one.

If you pick charging network get a Tesla. Otherwise get something else.


This.

I really want to sell my Model S (for, say, an Audi), but no one can compete with its battery:

https://docs.google.com/spreadsheets/d/1k1DOw-NwvW8E8tQeXlac...


Or rather, the Hummer EV has the battery of two Model S cars! It's just twice as inefficient.


Their charging network is open to all cars, not just Tesla.


...in some countries in Europe [1]. It's not quite there yet in the US [2].

[1] https://www.tesla.com/support/non-tesla-supercharging

[2] https://cars.usnews.com/cars-trucks/features/superchargers-o...


"All cars" is absolutely not true. If I drive up to a Tesla Supercharger with my Mach E in the US, I'd have no way to charge my car. The Tesla charger has a proprietary plug that needs a proprietary app with a proprietary payment network to start the charge.

Tesla chargers are not open, in the slightest. Them publishing specs online on the physical plug design doesn't make it open to everyone. There's still lots of Tesla IP that covers those specs and designs. Tesla won't license it to other automakers unless other automakers essentially never enforce any of their IP.


Thanks for pointing that out. I genuinely was under the impression that it was open to all cars, so I learned something new today. It made me realize I fell prey to Tesla's PR.


Apparently not in the United States, at all.

https://www.tesla.com/support/non-tesla-supercharging


Not in the US.


People will pay more for cars that project images they want to project. This explains pretty much all the car preference patterns for people who have enough money sloshing around that they don't need to purely focus on maximizing utility for cost but who don't have "I don't care what you think" money.


Autosteer: I haven't tried the competition; Tesla's is pretty great on the highway, but not when there is roadwork or in cities where the roads will have unexpected swerves (e.g. with cars parked in the road, or blockades).

Cruise-control: In heavy traffic, I don't use it. I like to contribute towards reducing congestion, and the cruise-control accelerates too much at once, causing lumps. But for non-heavy traffic it's pretty great. The "braking for things that don't even exist" is a separate concern, IMO.

Ultrasonic, radar: Yeah, I'm disappointed here.

Rain-sensing wipers: It actually has those, and they do sort of work. What's annoying is that, apparently in the Model Y, you can't disable automatic wipers if you're on autosteer. So when you're on a very regular stretch of road and the rain-sensing wipers think it's raining when it's not, you have to choose between autosteer + wiping a dry windshield at the lowest frequency, or driving manually.

Automatic high beams: They work pretty well in my experience.

Carplay: I get that Apple wants to be my car's operating system. It may seem great, but I'd prefer if my devices got along without total platform dominance from one party.


My experience is that Autosteer is significantly better than the competition. And the Tesla absolutely has rain-sensing wipers.

I find it to be a fun, well performing and comfortable car. I am shopping for a second car currently and have yet to see anything close. There are no other electric cars on the market that tick the price, performance and range boxes, before I even start looking for the creature comforts I'm used to.


Talking about "lane assist" on a German Autobahn - I had a bad, if not worse anecdote with a French (the model) rental car one or two years ago. I did not know that assistant was on, but had I known I would have thought nothing of it.

Because what happened really surprised me: Entering a construction zone the assistant took the wheel away from me(!!!). I was steering manually, normally, but the assistant insisted I follow the previous line in the road, that lead right into the construction zone. Which was behind a concrete divider.

I actually had to fight the "assistant", with not insignificant use of force on the wheel, to not crash sideways into that divider wall.


I swear my car was (gently) fighting my steering here due to the lane keep feature before I turned it off. I frequently drive around potholes and dead animals and don't need the car trying to fix that for me.


I tried VW (+Skoda), BMW, and Stellantis (Citroen/DS/Peugeot models) "Autosteer" in the past year (changing companies, waiting for the new company car to be built while having Hertz rentals), plus I had autosteer with adaptive cruise control in a Skoda from 2018.

It has gotten a lot better since the 2018 system, which tried to crash itself at several roadworks - I actually almost scratched the car in the first 100 km I had it because of this.

My latest BMW works perfectly fine in German roadworks with bright yellow lines. Also, all the Stellantis cars I've driven (all 2020+ models) worked fine in those conditions.

The camera system has problems when there's fog, or direct sun at certain angles. Radar has problems when there's a lot of rain and rain splash from trucks, or in snowy conditions when the front radar gets frozen.

But it's awesome as an assistive system.


Skoda/VW lane assist feels like it would bounce you back and forth between the lane markings if you just let it do its thing; it's more useful as a backup system to stop you crossing the line when you have a lapse in concentration (which is how they describe it, I believe).


Yep, its function is basically: if your child distracts you and you don't pay attention for a few seconds then it'll try preventing you sliding into the other lane. Which is fine.


They have several tiers of it, I believe; the best one also offers automatic lane changes, like BMW. It was actually working perfectly fine for me, with no back-and-forth bouncing. It had the annoying <steer a bit or I don't think you are holding the steering wheel> feature (Karoq 2018) - but newer models, I think, also have a capacitive steering wheel, so you only need to touch it.


Hah, ironically my experience of it is based on a 2018 Karoq (perhaps with the most basic system, as you said).


This is an example of the risks of vertical integration. Some of what Tesla needed/chose to do itself is paying off. More integration into a single electronics system puts Tesla ahead of an industry trend with many unique features.

But rolling your own ADAS and FSD might mean having spent a lot for a result that is inferior to what ADAS suppliers can sell to other car OEMs, with no substantial benefit to the FSD effort - which is much riskier, with the added, and perhaps gratuitous, risk of trying to do FSD with only camera sensors. The risks there are not just that it is a risky approach to real-time sensing, but that the data all your cars are collecting is worth a lot less for not having a LIDAR point cloud providing 3D data.


I’m most curious how Tesla's autosteer compares to CommaAI's (or other offerings). I've been seriously considering getting one of CommaAI's supported cars to try it out.

It’d be cool if there was a yearly competition to rank the basic lane keeping/speed matching/turning. Eventually when it’s actually full-self-driving they can rank FSD.


This[1] is one of the videos on comma.ai’s website.[2] I count at least 10 disengagements. It doesn’t even follow the road in some cases.

1. https://youtube.com/watch?v=NmBfgOanCyk

2. https://comma.ai/openpilot


My brother uses CommaAI's openpilot on his Honda HR-V, and it's really quite interesting. It does pretty well where we've tested in Virginia and Maryland, but we haven't compared it head-to-head against Tesla's offering.


This comment made me wonder how they train the car for different sets of traffic rules. Some of this goes beyond formal rules. On most US highways you are technically supposed to overtake on the right, but it's totally common. In contrast in much of Europe this is actually enforced and the convention is that you only leave the rightmost lane while overtaking.

Do you do 100% of training on data from the same region? Are there low-level skills that all translate and don't bring the risk of training contradictory expectations? Can you just annotate training data with the rule set/location that applies?


> On most US highways you are technically supposed to overtake on the right

You mean the left?


Yeah, sorry, I omitted the "not". In practice both seem believable looking at traffic here.


I think they meant „aren’t technically supposed to“, that makes more sense


When it comes to decision-making, there are a lot of hardcoded rules in these systems. They don't learn the speed limits or to stop at red lights. The learning is all in the perception and object recognition, but a lot of the behavior is just explicit rules.
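
Something like this hybrid split, as an illustrative sketch (not any vendor's actual code): the NNs handle perception, while the behavior stays in explicit, auditable rules:

  # `perception` would come from learned models upstream.
  def decide(perception: dict) -> str:
      if perception.get("light") == "red":
          return "stop"
      if perception["speed_kph"] > perception["speed_limit_kph"]:
          return "slow_down"
      return "follow_lane"

  print(decide({"light": "red", "speed_limit_kph": 50, "speed_kph": 54}))  # stop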


> And sure, autosteer != FSD. But it's a much simpler problem that the Tesla still basically fails at.

They've publicly admitted that the entire autosteer codebase hasn't been touched in over 4 years, since they fully pivoted to getting FSD in a usable level 2 state.


The California DMV has been licensing autonomous vehicles since 2014. There are three categories of license. Testing with a safety driver is the learners permit: must have licensed driver ready to take over, no driving for hire, no large vehicles. About 45 companies have a learner's permit. Driverless testing is the next step up: no driver in the vehicle, but a remote link and observer. 7 companies have that permit.

Finally comes deployment: no driver, paying passengers, remote monitoring.[1] Three companies are far enough along for driverless car deployment in California: Waymo, Cruise, and Nuro. Waymo is going about 13,000 miles between disconnects now, and the remote monitoring center can reset the vehicle.

Waymo is still being very cautious. Only in Phoenix, AZ is the service really deployed. There's a service in San Francisco, but you have to sign up as a "trusted tester" and it's mostly Google employees. When it goes live in SF, then this is real. Waymo has a new vehicle they're showing: an electric van with no steering wheel.[2] Not clear how far away deployment is.

Tesla isn't even trying to qualify for autonomy in California any more. They've given up. They used to whine about being "over-regulated". What they hated was having to report all disconnects and accidents to DMV, which publishes them.

[1] https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...

[2] https://www.theverge.com/2022/11/21/23471183/waymo-zeekr-gee...


> Tesla isn't even trying to qualify for autonomy in California any more. They've given up. They used to whine about being "over-regulated". What they hated was having to report all disconnects and accidents to DMV, which publishes them.

Sunlight as a disinfectant indeed.


> partnered with Chinese automaker Geely

(╯°□°)╯︵ ┻━┻

Are you telling me there are no American companies they could have partnered with? Why would you willingly give US dollars to the CCP for a free market R&D effort?


Waymo has previously partnered with Chrysler/FCA/Stellantis, and Jaguar. They don't seem to be strongly committed to a vehicle maker.


I just tried FSD last night in suburban Dallas for the first time with light traffic and it was harrowing. Drove in the wrong lanes, almost hit a trash can, accelerated way too fast on side streets, and it made a right turn at a red light without stopping or even coming close to slowing down. This was a 3 mile drive.

I've been using autopilot on the freeway for years and that's been mostly fine—but I'm never going to use FSD on streets again.


I’ve been using FSD for a month and my experience with it has been mostly great. I was skeptical initially because of the prevalence of this sort of comment online but it didn’t match my experience. There are predictable situations where it can’t navigate but you get used to them and anticipate them. Otherwise, it is a nice upgrade in functionality over EAP that generally makes the car nicer to operate.


One thing I maybe should've mentioned is that it was dark outside (not raining or anything though). Maybe I'll try it again in the daylight one day.


Did you actually keep it on those whole 3 miles? I would have stopped after the first near miss.


I disengaged twice because I was terrified of what it might do but turned it back on. I'm generally a very early adopter with these kinds of things (I'm the type of person to always turn on "Beta" mode regardless of what I'm working with). So I have a high tolerance for things not working the way they should.

This is unbelievably terrible though. I really regret purchasing it.


> I'm generally a very early adopter with these kinds of things

Which in turn makes everyone else around your car an early adopter?


By that logic, all drivers are early adopters every time a new driver gets their license and enters the "driving pool".


Isn't that why lots of of jurisdictions have constraints on the learning driver (e.g. graduated licensing of some sort) and/or visibility requirements (e.g. car has to have a "learner" sticker of some sort) so that other drivers know?


I've literally never heard of such stickers.


We had them in the Midwest US. And the learner vehicles have an extra brake pedal for the instructor.

Which is good because a student in my group nearly killed us trying to merge into a semi.


We should require a test of one's ability to drive based on some basic standards before issuing a license. We should come up with a series of rules for what happens if someone does not adhere to these standards, as well as a mechanism of enforcement if they violate those rules.


They probably are to some extent, and you know where the liability would lie if they are responsible should something bad happen. What about this case? There is no sense of responsibility, or realization of the danger they are introducing at scale.

Even when they actually admit that they have failed at it [0]. I am not sure if they are aware of the doublespeak in this admission.

[0] Failure to realize a long-term aspirational goal is not fraud.


Hence why in the UK they're strongly advised to have a P plate so other drivers are aware they are a new driver.

I suppose everyone should just assume a Tesla is about to do something silly and drive defensively ...


I think this logic still holds up -

Normally, each person puts one new driver on the road per-lifetime.

When you beta test a baby driving robot, you’re now at two new drivers per life! And the Tesla doesn’t seem to be learning faster than a human!


It would seem like a return and a refund for selling a defective product is in order.


I expect that's exactly why you're paying to be a beta tester: they can keep your money and not deliver anything.

The one time I really beta tested a for-profit product, not only did I get it for free, I actually got a rebate on the final product (it was PyCharm, and JetBrains gave me a free license for a year, which they got back many times over as I've renewed yearly ever since).

Though I guess the early accesses I got for kickstarters were kinda like paying for a beta in a way.


I don't think 'beta tester' is a recognized class in terms of consumer law. You're a customer, a merchant, a manufacturer, or a bystander. Besides that, for something that costs this kind of money, you can simply expect it to work.


This makes me feel like running an Android beta on my daily use phone just isn't that daring.


This thread and the other comments on this post are amazing. One cannot sell a car without a seatbelt, but Tesla can use their customers for beta testing a dangerous system that drives a whole car around.


Is your experience the norm, though? Uncut YouTube videos show a reliable FSD in many conditions.


I have FSD Beta. These uncut youtube videos were a large reason of why I bought into FSD Beta, and I will say they show it in a much better light than my on-the-ground experience. I'm not even accusing the posters of selecting only good videos to begin with (although I'm sure that's also happening) - but the videos really don't do justice to the "in the car" feeling.

As a specific example, when the car suddenly slams on the brakes, especially at slow speeds, it doesn't look like a big deal on these GoPro cameras - but it feels like a much bigger deal when you're in the car and you feel the g-force of your body reacting to the sudden unexpected deceleration. Some of the more transparent and honest reviewers like "Chuck Cook" even add an overlay showing these forces in real time on their videos - but seeing a number briefly spike is a very different experience than feeling it live.

I would say FSD is getting closer to "safe" (not there yet though imho) but it's still very far away from "comfortable".


If a cop had been behind me, I would not have been surprised if they had pulled me over thinking I was drunk. The car kept driving erratically, switching lanes and turning the turn signal on and off. If you had been next to me, you'd have been fearing for your life.

If we assume I just got repeatedly unlucky and this was a 1:1000 situation, would that be acceptable to you?


Selection bias, perhaps? Fanboys are the most likely to upload, and they won't talk up the disengagements--even if caught on camera.


I've had FSD for about 3-4 months and it can do the vast majority of my drives without disengagement.

There are some very obvious things it still struggles with like construction and roads without clearly defined lines. But I just don't use it on those routes.

I think Tesla made a mistake trying to take a "boil the ocean" approach that will work literally anywhere instead of focusing on key areas first. It ensures you need to 100% nail it before it can be considered "released" as opposed to releasing with a specific set of criteria to meet.


I agree, I have been testing it for a year or so.

1. I have many, many multi-mile zero-intervention drives

2. It is getting noticeably better every release

Their video model will be very good in the future if the current release pattern keeps up. Not sure what it will take to be certified by the USG as level 5, but it seems like it will be at the top of this chart in the future.


>Their video model will be very good in the future if the current release pattern keeps up

It appears they’re abandoning the video-only approach:

https://techcrunch.com/2022/12/07/tesla-appears-to-be-turnin...


Sure, but it's still going to use a video model, with additional sensors.


Or they will hit a wall with no further progress.


not likely for a long time, given they are getting more data every day.


Getting more data is useless if it's not the right data.


They are getting the right data because their users are reporting exactly the instances where it fails.


Knowing where it fails isn't the same as knowing what is right.


What if it's not a data problem?

That's the same reasoning as thinking a faster CPU or more RAM would solve the problem.

More data could even lead to worse models.


Worse, Tesla is no longer releasing the safety data quarterly as promised. It's not exactly hard to report this, as they had already been doing it. It's suspicious that they stopped reporting after 2021.

We want this data!

https://www.tesla.com/VehicleSafetyReport


Am I misreading that? They're comparing accidents of Tesla owners against people belonging to different demographics and circumstances!?

An equivalent improvement in driving safety might also be had by eliminating drivers under 30, and since those aren't Tesla drivers (the average Tesla owner is a 54-year-old white man making over $140,000, with no children), this self-driving malarkey would have zero safety impact.


Their growing number of more affordable cars is probably why their safety data is getting worse, and why they're not releasing it anymore (I support Tesla, by the way; I just wish it were transparent about safety).


This is relatively consistent with my experience using the FSD Beta in and around Miami with one major caveat.

With two exceptions, all of my disengagements have been "quality of life" disengagements where I disengaged for reasons other than safety:

* to take a different route or get in the correct lane

* to be polite / courteous to a pedestrian who wasn't yet in the crosswalk

* to move more quickly through a construction zone or other traffic irregularity.

In none of those cases did I believe that the car was going to cause any risk of an accident; at worst there was a risk of pissing someone off (which, to be fair, in a city like Miami with far more firearms than judgement, could be fatal, but not due to the car at all). So yes, these are disengagements, but they weren't dangerous.

There is one case where I do get dangerous disengagements. At the intersection linked below, when I'm in the position of the white van, sometimes the Tesla will mistake a green on the far traffic light (which is for the other road, intersecting at a sharp angle) for my road's light, and proceed even though mine is red. This has happened twice; I haven't noticed it on the latest version yet. Intersection here: https://www.google.com/maps/@25.750348,-80.2061284,3a,75y,26...


The original data source of all this breaks this out into "Critical Disengagements" (CDE) and Non-Critical.

  * Critical: Safety Issue (Avoid accident, taking red light, unsafe action)
  * Non-Critical: Non-Safety Issue (Wrong lane, driver courtesy, merge issue)
That both Fred from Electrek and Taylor from Snow Bull ignore this distinction shows me their intent is less than neutral.

Looking at the original data sources [1], FSD seems to be improving over time at the metric that matters most:

  * City miles per CDE have gone from ~50 to ~120. 
  * % of drives over 5 miles with no CDE have gone from 72% to 93%.
I've been pleasantly surprised that human oversight while the software improves seems to be a viable approach. FSD doesn't appear particularly close to Waymo/Cruise at the moment, but it's not as if they are crashing left and right (you'd certainly hear about it if they were).

Personally, I have it; I don't find it enjoyable to use -- but I also don't feel unsafe when using it. Highway autopilot, on the other hand, I find immensely reliable and valuable.

[1] https://www.teslafsdtracker.com/ https://twitter.com/eliasmrtnz1
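
For what it's worth, metrics like those are easy to derive from self-reported trips; a minimal sketch (field names are hypothetical, not the tracker's actual schema):

  trips = [
      {"miles": 12.0, "cde": 0},  # cde = critical disengagements
      {"miles": 8.5,  "cde": 1},
      {"miles": 30.0, "cde": 0},
  ]
  total_miles = sum(t["miles"] for t in trips)
  total_cde = sum(t["cde"] for t in trips)
  print(total_miles / max(total_cde, 1))  # miles per critical disengagement
  long_trips = [t for t in trips if t["miles"] > 5]
  clean = sum(t["cde"] == 0 for t in long_trips)
  print(f"{clean / len(long_trips):.0%} of 5+ mile drives had no CDE")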


It's sort of meta to this whole discussion, but it has been interesting to see "journalists" like Fred Lambert and the evolution of their coverage.

Fred was an early Tesla fanboy and investor; he used to gush over Tesla and Elon Musk in his writing at Electrek. It seemed like most of the time when Tesla was mentioned in an article, he also mentioned TSLA, the stock ticker...but he didn't seem to do that as regularly for say, Ford or GM.

It was rumored that Fred had a direct line to Elon, and there were also occasional public Twitter interactions between the two, but Fred gradually became more critical of Elon, especially over FSD stuff. After one critical article, Elon blocked Fred on Twitter: https://twitter.com/fredericlambert/status/14176561698297487...

Fred certainly hasn't forgotten about it: https://twitter.com/FredericLambert/status/15186308301381795...

So, Tesla fanboy/blogger comes up with some valid criticism, Elon cuts him off, which in turn probably makes him more critical of Tesla/Elon.


Can you speak a little more about your experiences in Miami? I'm reading this thread and the comments are abstract; I wonder if it does better in specific situations.

I tend to drive on 95 north, in Brickell, Downtown, Little Havana, on the 395 and 112, Miami Beach, north Miami (up to 100th), and to the airport. If it can handle these places it seems like it's something to consider.


Highways are a dream, works fine.

For the most part it works fine in Brickell and downtown. I put a link to a street view of one tricky intersection above. It also struggles when you're trying to go south on Miami Ave: https://www.google.com/maps/@25.7626232,-80.1929899,3a,75y,2... Basically, where the black car is going south, it needs to follow the street in a 90-degree curve to the right, then make an immediate left at the stop sign. Sometimes it tries to just go straight there, i.e. to where the truck is.

Also, I often need to "intervene" by pressing the accelerator, since the car is a bit too passive for the typical lunatic Miami drivers.


I have been driving Tesla FSD 10.69 for the last few weeks and it is far from perfect. It's definitely beta-level software, and unfortunately, there's no real distinction between a beta web frontend and a beta life-critical ADAS.

I routinely have to take control. It definitely accelerates too hard on some sleepy side streets. It's extremely conservative about speed and lane changes. It's terrible at anticipating slowing where humans obviously would, like brake lights stacking back toward you on a curving highway: the driver can see the braking problem coming from a thousand yards ahead, maybe miles. Summon is slightly better than a parlor trick. FSD is approximately useless in parking lots.

But....

It's not completely unusable. There are definitely segments of a drive where I can mentally relax (back to my normal driving level of attentiveness).

Having gotten two kids through their driving tests recently, I'm struck by how much FSD 10.69 resembles a 16-year-old with their early driver's permit. They're terrible. But they are learning really fast.

Similarly, Tesla is now getting millions, if not billions, of disengagements coming back for retraining. The next release is going to be better.


> can mentally relax (back to my normal driving level of attentiveness).

I'm not sure if this was intentionally funny but it made me laugh.


It sounds accurate to me. If you are a defensive driver, you will most likely find that even basic autopilot increases your stress substantially. It happily drives you full speed into situations your brain easily recognizes as risky. Personally I'd probably have palpitations if I tried FSD.


Damn, I can't wait to try Neuralink.


The main interesting charts they're showing are about disengagement rates, but this is a pretty sketchy comparison, both between manufacturers and over time with Tesla's FSD as well.

"Disengagement" is going to happen for different reasons for all of these different cases. There's the axis that runs from merely inconveniencing others (e.g. "I disengaged because I was holding up traffic by being embarrassingly slow through a turn at an intersection") to the ones where a serious accident was averted (e.g. "I hit the brakes moments before the car caused a head-on collision). There's the scenario differences that feed into thresholds for deciding to disengage: professional safety drivers on planned tasks that have been given a sort of "rules of disengagement", vs everyday normal humans getting groceries and intervening whenever they feel like it for whatever reason. There's the fact that many competitors are operating in tight geofenced, HD-mapped areas, while FSD is operating real-world all over the country.

I agree that Tesla's lack of transparency is troubling here, but this article seems to be trying to pressure them to increase it by taking a dishonest tack and stacking all the unknowns against them in the worst possible way.

They have published basic accident statistics (in the past), and those have generally shown their automation to be a net win. In their own published stats from Q4 21 (latest available on https://www.tesla.com/VehicleSafetyReport ), the comparison they make is this:

* NHTSA data says the whole US averaged one accident per 484K miles.

* All Tesla cars on the road averaged one accident per 1.59M miles (so, ~3.3x better? This could also be caused by the profile of Tesla drivers rather than any real car safety difference - income, where they live, etc.)

* Tesla cars with some form of Autopilot engaged averaged one accident per 4.31M miles driven (~2.7x better than the non-Autopilot Tesla figure, and ~9x better than the national average).
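For anyone checking the arithmetic, here's a quick sanity check of those ratios (figures as published; the demographic caveat above still applies):

    # Back-of-envelope check of Tesla's Q4 '21 miles-per-accident figures
    us_avg    = 484_000      # NHTSA, all US vehicles
    tesla     = 1_590_000    # all Teslas on the road
    autopilot = 4_310_000    # Teslas with some form of Autopilot engaged

    print(round(tesla / us_avg, 1))      # 3.3 -- vs the national average
    print(round(autopilot / tesla, 1))   # 2.7 -- vs non-Autopilot Teslas
    print(round(autopilot / us_avg, 1))  # 8.9 -- vs the national average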


Is there a reason there aren't stats for 2022 on the VehicleSafetyReport link?

Is that a typical delay (e.g. were Q1, Q2, Q3 '21 reports not available in December of 2021), or have they stopped or suspended publishing the reports?

I'm just not familiar enough with the site to know what is normal.

> Tesla cars with some form of Autopilot engaged averaged one accident per 4.31M miles driven (~2.7x better than the non-Autopilot Tesla figure, and ~9x better than the national average).

Do you know if "accident" encompasses all accidents, or is there a threshold level for severity? I'd be curious if the same patterns hold for "serious accidents" defined as someone (in any vehicle in the accident) being seriously injured or killed.


One of the comments points out:

> Comparing Telsa's disengagements for its FSD beta, which can drive anywhere, against autopilots that work only on (certain) highways is not very comparable. At least compare highway to highway disengagements.

I own a Tesla, but do not have FSD (Not interested in it at all beyond being a party trick). After 3 years and 20,000 miles, I can say that I've never had to disengage AP because of imminent danger, though I HAVE had many times where the lane is splitting and AP can't decide which way to go, so I had to take control. Likewise, when a lane is merging, it would sometimes jerk left and right a bit and I'd take over. Both cases were more for comfort than safety.

Total, I'd probably average 5 miles per disengagement, but that's just so I can change lanes since the base AP does not include lane changing.


I have FSD and have owned my M3 for the same amount of time. With FSD you get Navigate on Autopilot, which handles lane changes on the highway. It has the same issues you see. The lane changing isn't great: it wants to change lanes more frequently than I do, and will often try moving into the fast lane even though significantly faster traffic is coming up behind that I'd end up slowing down. I often have to disable Navigate on Autopilot to keep it in lane, because it will suggest changing lanes, I'll cancel, then a minute later it will suggest it again.

I'm hoping once the two stacks are merged, that will improve.

The other problem I have on highways is I can't really use it in stop-and-go traffic. If I do, it will frequently accelerate too fast from being stopped, then have to brake hard when traffic reaches a stop again. Too jerky for my comfort.

Overall, when I can use it, it makes driving less stressful. I drive 600-mile round trips a few times per year and am able to have it engaged the vast majority of the time on highways.

FSD Beta on city streets requires too frequent overrides for it to be anything more than a party trick at the moment. Even in very rural areas it will have problems of either not changing lanes when it should or changing lanes when it shouldn't.

Auto parking worked great for me initially but something changed and it is never able to detect parking spots for me. The few times it does it just flickers on the screen then disappears, resulting in me not being able to use the feature. I'm not sure if this is because of a problem with sensors or if it's related to me being in the FSD Beta. I've been in the FSD Beta for over a year now.


> The other problem I have on highways is I can't really use it in stop-and-go traffic. If I do, it will frequently accelerate too fast from being stopped, then have to brake hard when traffic reaches a stop again. Too jerky for my comfort.

110% this.

In addition to this, I often find that it is a bit timid on the acceleration once traffic starts really moving. Like, it'll jerk a bit forward from a stop, but then veeerrryyy slllooowwwlllyy speed up, even once traffic ahead is over 100 feet away.


> I can say that I've never had to disengage AP because of imminent danger, though I HAVE had many times where the lane is splitting and AP can't decide which way to go, so I had to take control.

This suggests a trivia question, which I do not know the answer to: what lane in the US goes the farthest without a split or dead-end? Similar question for other countries.

I'd count most freeway exit ramps as splits from the adjacent lane, but would not count intersections with cross streets as splits unless the two roads intersect at a small angle. It is the actual lane that matters, not its name, so if a lane has a name change when it crosses some political boundary it still counts as the same lane.

My first guess would be it is on some long freeway like I-10. My second guess would be it is on some rural road through the middle of nowhere.


The US is surprisingly bad at this, though: in Germany, you get on the Autobahn and you know the lane you are in is going to go on forever. In the US you get on a highway, get into the leftmost lane, it gets to the city, you drive straight, and suddenly find yourself in the exit lane on the right? Who designs highways this way?!?!


> This suggests a trivia question, which I do not know the answer to: what lane in the US goes the farthest without a split or dead-end? Similar question for other countries.

I-80 starts in San Francisco and ends in New York City, but I'm certain there are splits that would make you change to some other highway if you stayed in the left lane.

Likewise, I-90 starts in Seattle and ends in Boston and travels through Montana.

My guess would be I-90, just because of Montana.


I have the exact same experience with AP. I trust it on highways until a merge or lane split is ahead. I think it's so silly that the decision making isn't more robust.

If I can't trust AP with simple road rules, then why bother upgrading to FSD?


FSD handles road rules significantly better: the road up to my house has a complicated five-way intersection with a railroad crossing that EAP could never handle. FSD navigates it perfectly. For my 40-minute commute, there are about three spots with predictable road issues that force me to disengage (no safety issue, just the car gets really hesitant), but otherwise it's obviously an upgrade from non-FSD.


Something interesting to me, as someone with a 2020 base-level Forester, is that you describe my experience completely. It works great on most highways, but needs me when lanes split or merge.

Of course it's not 1:1. AP has other features, such as a much nicer UI that better reports what it thinks it sees, and I think has sexy lane changing features and such.

But it makes me feel that FSD was more of a business necessity to keep a competitive advantage against cheap cars. Soon many cheap cars will have AP-like features at comparable fidelity.


I know that a lot of Tesla fans view FSD as a sort of tip jar to throw extra money towards the cause but I have a feeling regulators and gov attorneys will come to see it quite differently.


I'm kind of amazed there hasn't been more government action on this.


Yeah I would've been one of these fans until I tried FSD yesterday (see my other comment). I suspect now that it's been GA for a week or so a lot of people are going to change their mind like I did.


How has your mind changed? Was it based on aggregate stats or anecdotes?


I actually used it myself. It's worse than I ever imagined it would be.


I was one of the first people to get FSD Beta access and have given it a try every single time they pushed an update, and honestly, it's unusable and dangerous.

The car just does not behave like a regular driver in any capacity. It's a neat trick to show when there is nobody on the road, but besides that, I have lost all faith in FSD ever coming out in any meaningful way. I only paid $2k for FSD ($7k total w/ Enhanced Autopilot), but even that is too much for what FSD actually is.


I will say, FSD has genuinely gotten better over the last year. I would not trust it to drive without a driver monitoring it. But I've had ~20mi trips end-to-end (surface streets and freeways) without requiring disengagement. It's been a great driver assist - but it's not true FSD.

That said, it still does dumb things like:

* Getting in the left-most turn lane when I will make a right turn immediately after a left. This usually results in a disengagement or me having to change lanes immediately after turning.

* Changing lanes, but the process sometimes makes it so that I change lanes in an intersection. This is illegal.

* Not stopping at the crosswalk on the right-most lane on a red. I expect the car to stop firmly at the intersection and then slowly creep out.


Does anyone else remember being told Tesla owners would be able to rent their cars out automatically to Uber/Lyft when they weren't using them because of the FSD?


Tesla Autonomy Day 2019-04-22:

"By the middle of next year [2020] we'll have over a million Tesla cars on the road with FSD hardware feature complete at a reliably level that we would consider that no one needs to pay attention. Meaning you could go to sleep. From our standpoint, if you fast-forward a year, maybe a year and 3 months, but next year for sure we will have over a million Robotaxis on the road. The fleet wakes up with an over-the-air update. That's all it takes."

Source: https://www.youtube.com/live/Ucp0TTmvqOE?t=11616


I can’t believe anyone could take this guy seriously. It’s so comical at this point.


I just, you know, steer my Volt. With my hands. And eyeballs.

If I need to do something on a drive I get my wife to drive.

I honestly do not understand the obsession with self driving or driver assist.

I drove a Ford with active lane assist and having the car physically move the steering wheel is the worst.

But apparently I'm one of those weirdos who actually enjoys driving.


People are super dangerous in cars, so it's nice to have technologies to make driving safer.

I also enjoy driving, but I love the adaptive cruise control in my Toyota, because it makes the most frustrating kind of driving (lots of traffic on the highway) super easy to deal with. It makes me much less hesitant to make the trip from Austin to San Antonio, because rather than having to be hyper-vigilant for two hours due to constantly shifting stop-and-go traffic, I can let the car control the speed and instead focus more on watching out for crazy drivers.

I am not sure whether I feel the same way about full self-driving. At that point, in my mind, you'd be better off taking a train, since you're very likely to become super distracted/tired with nothing to focus on. That said, if it makes the roads safer in aggregate, I'm not against it.

Given the kind of information we see in this article and other factors, though, I'll never buy a Tesla. I'll wait until it hits the major car manufacturers.


It is nice to have technology to make driving safer. Maybe someday Tesla can get some.



Btw, radar powered adaptive cruise control on my 2017 Lexus is excellent. Isn't jerky, can start from a complete stop again, and is reliable.

I feel bad for Tesla owners who are stuck with shitty cameras. Lex Fridman should have given a way bigger tongue-lashing to Karpathy for being so stupid.


> People are super dangerous in cars

That is a very subjective description. One fatality per 100,000,000 miles actually seems pretty good, given how complex driving can be. And even then, that number includes the shitty drivers that account for most wrecks. The median driver is quite good, objectively.
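To put that number in per-driver terms, here's a rough conversion, assuming the commonly cited figure of roughly 13,500 miles per US driver per year (an assumption added here, not something from the stat above):

    # Converting "1 fatality per 100,000,000 miles" into driver-years,
    # assuming ~13,500 miles driven per driver per year
    miles_per_fatality = 100_000_000
    miles_per_year = 13_500
    print(round(miles_per_fatality / miles_per_year))  # ~7407 driver-years per fatality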


I think it's also important to note that fatalities are not the only way to define dangerous. Plenty of accidents leave people disfigured, badly injured, permanently disabled, with lifelong chronic pain, and so on. While I'd certainly prefer any of that to dying, I'd rather have neither if I could choose.


It is subjective, but almost everything you can compare cars to is less dangerous.

Relative to trains, buses, and airplanes, cars are super dangerous.

Relative to motorcycles, they're safe.


Relative to trains & buses (airplanes is a separate niche entirely), cars are a lot more useful too. That is inextricably tied to why they are more dangerous.


Does that also include pedestrians and cyclists killed by drivers?


Yes, it does.


I'm one of those weirdos who enjoys cycling, and the biggest threat to my life, by far, is humans driving cars. Drunk humans, tired humans, humans eating food, humans otherwise distracted, or humans just being human.

When I think about the people I know who died under age forty, I think it's majority car accidents. At least a plurality.

This holds roughly true in the statistics: car accidents have been at least in the running for leading cause of death for Americans under 30 for years. We need to address this issue, and self-driving cars are rapidly approaching viability as a solution.

Even if we don't eliminate human drivers, having viable alternatives means we can be strict about who drives. Right now, for so, so many people, driving is a necessity, and thus we're unwilling to limit access to the people who are actually good and safe at driving. If you get in an accident, your insurance company will raise your rates, but you keep your license. If you get in another, well... they raise them some more. It's really quite difficult to actually lose your license in the US. My friend had to get a breathalyzer after multiple DUIs, but he got to keep his license through all of it.

If driving were seen as an optional privilege, we could limit it to people who do it safely, and perhaps those people would only do it while sober and alert given the option to use self driving when they aren't.


Stopping humans from driving cars could save many lives. A million people worldwide die in road accidents every year. Half are cyclists or pedestrians.

But I tend to believe Gill Pratt, at Toyota, who says he doesn't know how to make level 5 AVs, more than I believe Elon.


A third of all fatalities are caused by drunk drivers. That's easier to solve than self-driving cars. Another big chunk are weather related, driving at night, driving while sleepy, etc. The median driver is pretty safe, there are better ways to spend money than building automated car toys for them to play with.


That may be a US number. Other places that have stricter drunk driving laws, or that culturally have less drinking, can also have much worse road safety.

AVs can also solve other problems like efficiency and surge-ability in public transportation. Not just a toy for the rich.


While nobody knows how to make a level 5 AV, we do know how to eliminate road deaths:

https://www.theguardian.com/world/2020/mar/16/how-helsinki-a...


> I honestly do not understand the obsession with self driving or driver assist.

Self driving would be a complete game changer for the disabled.


What benefit does this bring over public transport or something like taxis/ride sharing services?

If I can be allowed to grossly generalize here: would the elderly and the disabled be in a position to afford a vehicle as expensive as this? Would they /need/ a permanent vehicle to take them to wherever they need to be, if they already have everything they need right there (hospice, retirement complex, etc.)? Remember: I'm grossly generalizing here, but I think my point stands.

The way I see it: I've been making use of FSD vehicles to take me to work and back for almost a decade now. Sometimes even the vehicle arrives right to my front door and takes me directly to my destination. The fact that it's wetware instead of software has no material impact on my experience or expectations. Thus I'll always go for the cheapest option.

In my view FSD is trying to solve a problem that only exists in the mind of the isolated techie who forgets about the real world sometimes.


> What benefit does this bring over public transport or something like taxis/ride sharing services?

No offense but do you own a car? Or do you only ever take public transport and taxis / ride-shares all the time?

Being able to do things on your own, when you want to, on your own schedule, especially in places that are not cities and have much worse infrastructure on both public transit and ride sharing, is immensely freeing.


Or, the elderly. How many times have we seen accidents due to a confused individual driving down the wrong side of the street or mistaking the accelerator for the brake?

Tesla may not ever get FSD working with its current approach, but if a competitor can do it (even geofenced like how Waymo and Cruise operate today!) and the economies of scale could kick in to make it more affordable than Ubering everywhere, then it would be a huge benefit to those that are even unable to use public transportation.


For the disabled people with money anyway. How are disabled people who live entirely from disability subsidies supposed to buy a Tesla?


I'm not talking about Tesla, just self driving in general, which is what the comment was asking about. I don't trust Tesla to ever deliver an actual self driving product.


You can't use your smart phone while driving. People tend to use their smart phones anyway which has caused a big increase in distracted driving and the resultant carnage.

What's the world coming to when some schmuck on public transit can do things like play games or even get work done where a driver can not do that? What's worse is that smart phones make public transit a lot more usable and attractive in general. Self driving cars keep the automotive transportation mode more competitive.


> I honestly do not understand the obsession with self driving or driver assist.

What a lack of imagination.


I drive 150+ mile trips all the time. Autopilot is a game changer.

FSD is far less impactful on city streets as I enjoy city driving, but it gets mind numbing on long highway drives.


> I drive 150+ mile trips all the time. Autopilot is a game changer.

Ha, AP puts me in situations (especially at merge points) I'd never put myself in. It certainly doesn't make the drive less stressful. Quite the opposite. And my wife won't even let me turn on AP now when she and the kids are in the car (this would be most road trips) because it's startling when the car brakes suddenly for an overpass.


I was brake checked by a Tesla while driving home on the 10W the other day. My first thought was, "what the hell man, you're in the #1 lane without a car in sight in front of you?!". Then I realized that the car had probably disengaged AP and this involved the cruise control disengaging and slowing (i.e. braking) the car.

I had already known that Tesla drivers using AP was a major cause of slow, too-far-following-distance driving (which is why I take every opportunity to cross in front of a Tesla doing 60ish), but this was a whole 'nother level of literally "fuck you, pay attention until you get home, asshole".

/Rant.

P.S. Tesla drivers are absolutely the new Prius drivers.

Edit: and the fact that the behavior of "if there's a car following, don't slow abruptly" isn't a zeroth-level if-check in the FSD algorithm is completely inexcusable. Shit needs to be regulated now.
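For illustration, the kind of zeroth-level guard being asked for might look like the sketch below. Every name and threshold here is invented; this is emphatically not Tesla's actual control logic:

    # Hypothetical sketch of a "don't brake hard with a car close behind" guard.
    # All names and thresholds are made up for illustration.
    def max_allowed_decel(follower_distance_m: float) -> float:
        TAILGATE_THRESHOLD_M = 20.0  # assumed "car following" cutoff
        NORMAL_DECEL = 3.0           # m/s^2, comfortable braking
        GENTLE_DECEL = 1.5           # m/s^2, when being followed closely
        if follower_distance_m < TAILGATE_THRESHOLD_M:
            return GENTLE_DECEL      # ease off rather than slow abruptly
        return NORMAL_DECEL

    print(max_allowed_decel(12.0))   # 1.5 -- tailgater detected, brake gently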


My anecdote: I’m a FSD beta user. It seems like it has become much better lately in city driving, but somewhat worse in highway driving. I bet that both have to do with changes to “lane topology” detection, which appear in the release notes. On the highway, the car sometimes perceives a change in the lanes of the highway, for example an exit lane splitting off. This would be fine and correct, except that it can now spoil an in-progress maneuver. Previously, maneuvers around exiting the highway seemed more stable. On the other hand, the latest versions seem to have an impressively fluid understanding of the lanes around intersections, where markings aren’t necessarily visible until you are up close.

In my experience, it looks like Tesla’s recent concentration on surface roads has come with some neglect of highway driving.


I don't understand how regulators and insurance companies are so asleep at the wheel (hehe) around this. It seems insane to me that you can just start selling a car that you claim can drive itself without any kind of certification or safety data being provided.


Knowing little about how the FSD chip works, is the fact that it runs locally and not on some super cluster a bottleneck in performance right now?

The fact that it's only in the last year or so that Stability/OpenAI and efficient hardware like the aforementioned processors can produce relatively fast and accurate results in the NN sphere makes me wonder if it's just "too early" to have self-driving cars?

ChatGPT still requires a supercluster to run and as a layman one would think video analysis with near zero latency would be even more demanding.

Also, how does it compare to Apple's newer ARM-based architecture and Neural Engine, which as far as I know is a good example of in-house design (not production, of course)?


I don't think it would be feasible to do cloud analysis on a live video stream in a self-driving context. The latency would simply be too high, and a single internet hiccup would result in a dangerous situation. So I think all the models run locally.
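Some rough numbers on the latency point (the speed and round-trip times below are illustrative assumptions, not measurements):

    # Distance traveled "blind" during a cloud round trip at highway speed
    speed_mps = 70 * 0.44704              # 70 mph ~= 31.3 m/s
    for rtt_ms in (50, 100, 500):         # decent LTE up to a brief hiccup
        blind_m = speed_mps * rtt_ms / 1000
        print(f"{rtt_ms} ms round trip -> {blind_m:.1f} m traveled")

    # 50 ms  -> ~1.6 m
    # 100 ms -> ~3.1 m
    # 500 ms -> ~15.6 m, roughly three car lengths with no fresh perception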


Imho, if Tesla could train a huge neural network to demonstrate that self-driving is possible with their SW stack, they would have done that already to let Elon Musk have his road show with its capabilities.

(Running ChatGPT does not require a supercluster, training the model does)


People (and technical people) like to believe that one data point isn't important, that it's an anecdote and doesn't matter.

But as a lawyer once said, "I convict with anecdotes."

In any case the reports are interesting because with them you can start to understand the shape of the autopilot's capabilities and limitations. IMO the individual reports (like the ones here) are nice and detailed; the kind of summary data the article wants is pointless, because it doesn't include any real detail in it. Your autopilot worked on a 400 mile stretch of empty Kansas highway, big deal.


Ranking by miles per disengagement without context as to which roads the vehicles drove on does not feel like the most useful metric. (On a track? In a well-mapped city? Cross-country on highways?)


Limiting to a well mapped city should be a point in favor of Cruise and Waymo, they are operating it with more restraint. Normalize for the raw difficulty of the area (city vs highway, etc.), but don't penalize for taking steps to make their operation area safer (hd mapping it).


You are comparing apples to burgers. Mercedes' system attacks a narrow scope (gaps in traffic, approved roads, certain speed conditions, etc.), and they bring lots of expensive hardware for that. Tesla is trying to build general AI. Whether they will succeed or not is a debate. But basically, Mercedes' strategy is to do something NOW with expensive hardware, where Tesla's strategy is to do way more in the future, with cheap, 5-year-old hardware.


Musk has tweeted about the importance of transparency/candor https://twitter.com/elonmusk/status/1598858533608431617?s=46... and yet does not release recent FSD safety data. This is simply hypocritical.


As a stock market investor that is purely self-interested in predicting the direction of stocks and who could not give a crap about your opinion on Elon's latest tweet, it's EXTREMELY interesting to see TSLA flip from getting a pass on everything to getting bad PR on everything. It's like a case study in a fall from grace of a company because of political factors.

I feel like a good way to play the market with all this is to look at mainstream media news service articles about TSLA before the Twitter purchase and after the Twitter purchase and train a model to spot the sentiment difference between the before and after articles. It would serve as an early warning sign that a company has been switched from being boosted to being trashed.

One funny thing is that before the Twitter purchase, alt-right outlets like ZeroHedge were constantly mocking Tesla and predicting its imminent demise during its gigantic run-up while mainstream outlets were thoroughly praising it. Now, it's the other way around.


It truly is fascinating to what degree news coverage is politically biased. Partisanship rules everything.

As you point out, the stark flip-flop of roles between left and right-leaning coverage of Tesla is almost hilarious when observed from any "neutral" point of view. Even in this HN comment thread, the emotional injection of political points throughout is both funny and disheartening.


The reason TSLA was ever worth anything was because everything the golden boy touched turned to gold.

The Twitter acquisition (starting from his initial offer) brought expectations down to reality for everyone.

Please note that his behaviour as a multiplier (and the changing nature of that multiplier) of investor value has zero bearing on whether or not the cars he sells are any good. Just like the quality and quantity of those cars had zero bearing on TSLA's stock price.


I'm the first to complain about Tesla overselling their self-driving abilities, but I think the numbers used in this article are misleading. Miles-per-disengagement comparisons between Waymo (to pick one) and Tesla are apples to oranges. Waymo operates in a closed fashion, in a laboratory-like setting (1), and I don't believe for one second that their miles per disengagement would be significantly better than Tesla's in the real-world open settings that Tesla's FSD is being tested in. These numbers mean nothing when the environments used to compute them are so vastly different.

(1) Here's what I think is the case for Waymo's operations:

- Phoenix, AZ only (maybe one more city?)

-- Amazing sunny weather

-- Clear lane markings

-- Can overfit to said city with HD maps, hand-coding road quirks, etc.

- Waymo employees only

-- Not to sound too tinfoil-hat, but can we really trust this data?

- Even within Phoenix, some filtering will happen as to which routes are possible


First-time Tesla owner here, and I recently had the opportunity to experience their Full Self-Driving feature. My findings:

1. When it works, it's just great. Maintaining lanes, speed limits, and distance to other cars, plus auto park and Summon, are delightful.

2. Auto park in home garage is a bit too slow to use. But parallel parking is cool.

3. Even without a radar it worked on most roads and different lighting conditions.

4. Random speeding/slowing is definitely present. It's not like it jams into the vehicle in front of you, but the sudden acceleration can surprise you. More dangerous was random slowing at traffic signals, even when green, and at traffic circles.

5. Overall it’s definitely good enough to show promise of self driving. They are not there yet but I would place them as one of the most advanced considering the range of functionality.

6. Still, the $15k is simply way too much. You are better off with the occasional $200-per-month subscription, like when going on a long trip.


> Still, the $15k is simply way too much. You are better off with the occasional $200-per-month subscription, like when going on a long trip.

I still haven’t been able to digest how Tesla can have beta testers pay them vast amounts of dollars for improving their product, without the ability to even opt out of omnipresent and opaque data collection… For a safety critical product that the company rejects liability for.

The old trope “you’re the product” is not enough to do justice to this bizarre situation. It’s like the peak Apple fanboyism a la 2010, but on meth.


I think 'awful' is underselling it a bit here. The fact this is allowed to be in widespread use in the hands of amateur 'beta testers' is horrifying. I'm surprised USDOT hasn't put the kibosh on it yet.


Anecdotally this matches my experience too. I'm not even in the FSD beta, and with every software update plain old Autopilot gets less intelligent. I don't know if regulators are restricting what can be done, if Tesla is self-censoring to avoid PR issues, or what, but instead of being helpful like it used to be, it's mostly just frustrating. I find myself using it less and less.

All that said, this is from someone who otherwise loves their Tesla. It has saved me many thousands on gas, it hasn't had any reliability or "panel gap" issues, and I would likely buy a Tesla again.


It is not hard to imagine that Tesla is going to die at this rate.

Where's the moat? Traditional carmakers are catching up on electric and already have superior build quality. And if anything, Tesla is behind on FSD.


Supercharger network is the only moat. In all other respects other EVs beat Tesla feature for feature.

Maybe the gov't will squash FSD and Tesla will have an epiphany and decide to refocus their efforts on the basics where they're quickly falling behind. Hell, maybe they'll start including the better technology the competition relies on, and try to stick to just the things they're good at. Which is ... I don't know. Image seekers?


Tesla is certainly behind on FSD. What that team has produced so far is nothing short of amazing, but gosh, their boss sure hasn't helped their credibility.

I can say that after experiencing the software of the BMW i4 and the Audi e-tron — holy cow, Supercharger is certainly not the only advantage of a Tesla. Audi's is laughably bad. BMW's is better than Audi's. Both are much worse than Tesla.

The obvious retort there is: but Tesla's competitors will get better. And sure, they will. You're right. But Tesla likewise has their own R&D investments and unreleased products.

I think sometimes when we talk about the competitive landscape, we tend to say "Company B will catch up to Company A" while in our minds falsely imagining Company A as being static. Both Company A and B are moving, but in Tesla's case, they had a ten-year jump on tightly integrated hardware + software. I think it's likely that continues to yield dividends.


Tesla's software is also shit compared to most of the American brands and Porsche, and with 2022+ model-year vehicles, even Toyota/Lexus have better infotainment/software/sensors.

It's literally just a good battery and good charging. That's all Elon has, and that advantage will die in 5 years.


What? I just looked at Porsche and they use the standard VW infotainment which is atrocious. In the meantime, you can play Steam games on a Tesla: https://www.youtube.com/watch?v=CEhBzX-6FjA


My other car with TACC and autosteer claims to be a pro pilot, but can't handle a shallow curve and also accelerates towards stopped traffic fast enough to trigger its own collision warning.

Tesla will be fine.


Still superchargers for now


In Europe there is regulatory pressure for all publicly accessible charging stations to be open to all vehicles.

If superchargers are Tesla's main advantage, they are not only in competition with auto makers but with all electricity providers in an increasingly competitive market.


A 100-mile mean distance to critical failure is an interesting number. It's long enough to create multiple fantastic-looking demos of FSD. Impressive enough to convince folks that we are almost there. But it's at least 4 orders of magnitude shy of the performance you'd need to take the steering wheel out of the vehicle.
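Spelling out that arithmetic (the ~1M-mile bar is this comment's implied target, not an official standard):

    # The "4 orders of magnitude" gap, spelled out
    import math
    current_mdbf = 100         # miles between critical failures today (per the comment)
    target_mdbf = 1_000_000    # plausible bar for removing the driver (assumption)
    print(math.log10(target_mdbf / current_mdbf))  # 4.0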

Given the current rate of improvement it's hard to see how the remaining 4 orders of magnitude will be solved in a reasonable time frame, without a radical reimagining of their entire tech stack.


This seems like a pretty typical video in an area that FSD doesn't handle well: https://www.youtube.com/watch?v=xIvfNHwZsCc

I bet this is extremely common.

To be fair, we've never seen a test of Waymo or Cruise on the same road, as they are only available on certain specific roads that have been well tested. I'm not making a judgment here as to whether that is good or bad.


There are so many decent-quality FSD videos on YouTube. Why watch the one where you can hardly see what's in front of the car?


Credibility. That's Jason Hughes, who owns a third-party shop that repairs Teslas. He has a long history of reporting fairly, both positive and negative. He also lives in a fairly rural area that hasn't gotten a lot of attention from FSD YTers.

The fact that some of his first FSD trips showed major issues is very telling.


I find a video that gives a better view more credible. Chuck Cook's videos are amazing for going deep on FSD's handling of some very specific driving scenarios. He even usually has a drone overhead to give a better view of the surrounds.


His videos are great, but they also cover a lot of ground that has been tested heavily by Tesla employees. They don't always translate well to how things might work in Hickory, NC.

And, tbh, those Chuck Cook videos show FSD doing some really dumb things fairly often.


I'm commenting on the quality of the video rather than the quality of FSD. More visibility equals more insight into the actual performance (or nonperformance) of FSD. A video where I can't see what's in front of the car is worthless.


It's not worthless when one of the significant events was it cutting off a car that was traveling at high speed. That should literally never happen.


The price for this software is $15,000 USD at the present time.


If it disengages rather than eagerly plowing into something, that's almost certainly for the better. This could easily be explained by a renewed focus on safety after massive amounts of negative press, and that would be a good thing.

Is that the reason? Not a clue! Am I optimistic about Tesla's FSD? Not even slightly! But the "data" here is extremely shaky and there are many possible explanations beyond "getting worse".


"Don't look at the data, give me lots of money to try it yourself" is something you expect to hear from a cult leader, not an engineer.


It has been obvious to me ever since, a few years ago, Musk promised that fully self-driving robotaxis were going to be on the roads a year in the future that he is a con artist and/or ridiculously over-optimistic about self-driving technology.

But is this why I am seeing this article on Hacker News now? Or am I seeing it now because Musk pissed off a bunch of people who disagree with his political stances?


Good example of the 2nd-order effects of how being "aggressive" on putting self-driving into the hands of drivers can cost lives. Not only can Tesla "F"SD cost lives directly through accidents, it could sour peoples' opinions of more mature driving systems like Waymo's and delay our shift to safer, automated, drivers.


I briefly tested FSD out on a Model 3, for about a mile and a half on what was mostly a clean, recently painted 2-lane road. It seemed to be okay until I got to the point the lines broke for an exit and then it tried to drive me into a concrete median.

I trust the lane-keep feature on the old 2017 Corolla I rented more than FSD.


After seeing what Musk did to Twitter, it's pretty clear the emperor has no clothes, and his main asset is being a tyrant and taking the surplus value of the 80-100 hour weeks his peons do for their lord.

Musk is no Steve Jobs, despite clearly seeing himself that way. Full Self Driving will get nowhere under Musk.


Comparing Musk to Jobs is ridiculous. The scope of what they've done and what they're aiming for is vastly different. One person made cool phones, the other one is putting humans on mars. Why would you think Elon wants to be a computer seller?

> his main asset is being a tyrant and taking the surplus value of the 80-100 hour weeks his peons do for their lord.

Cool, I agree, can we start holding everyone accountable that does this? Can we maybe start with the ones that use actual slave labor? Where are the dozen articles a day about those people?

> Full Self Driving will get nowhere under Musk.

It's gotten farther than literally any other organization on the planet.


> the other one is putting humans on mars

Hasn't yet. And IF that happens, and if it turns out to be SpaceX that enables it, thank Gwynne. Musk is many things, but this idea that he's some kind of genius engineer is extremely insulting to all the actual rocket engineers.


Rushing half finished cars out the door, and over promising on capabilities/updates is such a flawed model. I am amazed by how many people bought in, and will be really interested to see how many people replace with another Tesla as legacy brands bring EVs to market.


My partner and I just drove from Denver to NYC in a Model 3. We stopped using any sort of "autopilot" in the middle of Kansas when it kept randomly slamming on the brakes on an empty I-70 with clear skies in broad daylight.


I have 3 Teslas but never got FSD on any of them.

It's mostly for gullible people who want to pay for fairy dust. "There's a sucker born every minute."

Let it be ready, then I will see the reviews, and only then will I pay a bit to get it.


Let me know of any other system that is even remotely close to being able to do the following:

https://www.youtube.com/watch?v=qFAlwAawSvU


An average human brain with 1000 hours of training.


Can some manufacturer please make an EV without it being a connected vehicle? I want to replace my combustion car with an electric one, not a "trying to be smart and connected" vehicle.


Seems like they are in an irreversible development hell of bugs. Probably the whole software and hardware infrastructure was badly planned and/or implemented.

If they are developing it and it is getting worse, this simply means there is a lot of technical debt and more to surface. The guy leading it left as soon as his stock vested; why not stick around and write history?

It probably all went wrong when Waymo departed from them and they thought they would fork the code and then improve it.

This also means that new devs and engineers can't make sense of the code and product; we have all inherited some of these code bases, haven't we?

I've seen many such people who deploy much less than an MVP and then face the consequences eventually.

All the MVPs out there that survive the beginning at least have a solid software architecture.


Honestly surprised; some YouTube fans did test recent updates (recent as in 6 months ago) and the self-driving software was capable of dealing with complex high-speed, irregular-traffic crossroads/ramps.


I have a Model 3 with enhanced autopilot. My wife wanted a Model Y. I find the enhanced autopilot so glitchy that I decided to just stick with regular autopilot.

Guess what: I like regular autopilot better!


Obviously not all miles are equal, and the others in the chart (Waymo, etc) only run their vehicles in specific areas that they have previously mapped onto their own system.


I know, right? This is worse than the FAA! Imagine requiring that airline pilots be certified on a specific jet plane? Surely we should just let any Cessna pilot hop into the pilot seat of a 747 or (haha) 737 Max.

We should let just anyone fly planes. And when all those planes crash, we should point out that it's not fair to show that planes with properly trained pilots don't crash. Not all airmiles are equal! Those non-crashy airlines only use pilots with very specific training. Not the same thing at all!


To be clear - disengagements are not crashes. People often disengage in a Tesla because the car is driving too slowly/cautiously.


It's interesting how quickly the narrative has turned against Tesla in recent months, not just on Musk but in respect to the performance of their cars.


It was a bad sign when Andrej Karpathy peaced out. A director doesn't walk away from a project when it's on the verge of success.


Has anyone posted on "free speech absolutism" Twitter 2.0 yet? Wonder what happens to that account?


I'm no Tesla fanboy, but it should be said that publicly reported miles-per-disengagement metrics are complete bullshit. Different companies have wildly different criteria for "disengagement". Those reporting 1000mi+ between disengagements (at least in urban settings) are only doing so because they aren't counting the vast majority of events.


> Those reporting 1000mi+ between disengagements (at least in urban settings) are only doing so because they aren’t counting the vast majority of events.

Source? Do you work at one of those companies?


Calling it FSD ever should be considered straight up fraud.


https://www.thedrive.com/news/38129/elon-musk-promised-1-mil...

"Next year for sure, we will have over a million robotaxis on the road," said Musk on October 21, 2019. "The fleet wakes up with an over-the-air update. That's all it takes."

https://twitter.com/PPathole/status/1249209968877862919

Elon Musk @elonmusk · Apr 12, 2020
Replying to @PPathole and @Tesla
Functionality still looking good for this year. Regulatory approval is the big unknown.


Tesla FSD may be bad, but no way it is that much worse than Cruise. Looks like they are inflating their data.


Cruise has cars driving out without a driver in San Francisco right now. Tesla is a world away from being able to do anything close to that.


I would not be surprised at all.

The biggest problem with FSD is bounding-box detection; previous videos show it routinely failing to identify objects in the road correctly.

A problem that LiDAR (which Cruise uses) is especially good at.


That magnitude of a difference would be pretty noticeable, and I haven't seen Cruise users posting amazing reviews on YouTube in comparison to Tesla's. Maybe I am using an incorrect proxy, but it's hard to believe that Tesla would be so far off.


The difference is noticeable.

You only have to look at Cruise versus FSD videos to see how the latter struggles with reliable object detection.

Also this is about Cruise the robo-taxi company and not the GM SuperCruise technology which isn't what I would consider a peer to FSD.


Why do you say this?


Are you kidding? 40k km with no disengagement? This is nuts. I don't think we have reached that level yet. And if we had, I can't believe Tesla engineers would be so oblivious to it that they wouldn't update their technology. The likeliest explanation is that the article's chart is using some weird metric that is not comparable between automakers.


You've given no actual reasons you find this hard to believe, just running on 100% vibes. The data reported to regulators doesn't lie. Tesla really is that far behind the competition.


[flagged]


What an incredibly insipid comment. I'm sure if you extrapolate anything to the logical extreme it sounds silly; it doesn't make a point beyond displaying that you think so highly of this car company that you want to disregard all criticism of it.


[flagged]


Do you disagree with the article’s call for Tesla to publicly release the same kind of disengagement data that others release publicly, so that we don’t have to just compare anecdotes as if they were data?

This thread is full of people who are describing their own personal experiences, which mostly tilt negative, with some positive notes like yours. But the article makes the point that a bunch of anecdotes is *not a replacement for data*, and the data we have (which is seriously flawed), shows that the average experience is not as good as yours.

Maybe your experience is an outlier compared to the data that’s been collected? Maybe your experience is the norm and the Tesla data would corroborate your experience. We can’t really know unless Tesla releases their data.

But they refuse to.


>But this software is working folks. It's going to happen. As I see it our choice is either to celebrate the advent of new technology or hide behind comment vote buttons trying to pretend that it isn't.

I don't believe people don't want this technology to happen, or don't think it's going to happen; I think the pressure and the criticism is there because people want it to be safe. When it comes to safety, I'd rather have FSD be subject to more criticism than it needs to be, not less.


The linked article isn't about safety data, FWIW. FSD beta has been deployed on hundreds of thousands of cars for over a year now. If there was any safety data to measure, it would be shining like a beacon at this point. Which is why the people (Taylor Ogan and Fred Lambert absolutely qualify, FWIW) with an anti-Tesla schtick need to scrap for editorialized content like this[1].

The truth is that this is deployed as a supervised system, and as such it's at least as safe as the driver operating it. I can absolutely attest to situations where the car noticed and reacted to a safety situation before I did. I think on the whole I'm a safer driver monitoring my car vs. operating it.

[1] In this case, measuring self-reported "disengagement" data of amateur drivers in general traffic vs. the controlled conditions reported by professionals operating vehicles in limited environments and with limited functionality. You can't take a Waymo or Cruise on the freeway to try it out, they won't operate in parking lots or unpainted roads or construction areas, hell my understanding is that they fence off and disallow most unprotected left turns! You can get a Tesla to try any and all of that (and I do! regularly! and sometimes it fails!). I mean, come on, this isn't a fair comparison and you know it.


>The linked article isn't about safety data, FWIW.

Que? Disengagement can occur for a variety of reasons, up to and including preventing the car from performing a hazardous/deadly move. This is absolutely about safety.


Out of curiosity, are you perhaps driving on streets that are likely to be heavily represented in the training data?

I would be interested in seeing how performance differs between cities and countries.


Not sure why this was downvoted, lol. If everyone can happily share their stories of Autopilot failing without being downvoted, why not a positive case?


It's pretty damn obvious why it's being downvoted, or did you not make it through the whole comment?

> But this software is working folks. It's going to happen. As I see it our choice is either to celebrate the advent of new technology or hide behind comment vote buttons trying to pretend that it isn't.

> For me, I'm happier watching it than I would be being a luddite. I'll take my downvotes now, thanks.

Snarkily dismissing everyone else's experiences as invalid isn't a great way to endear yourself.


It’s one thing to report your positive experience. It’s another to presume that everyone else who is reporting a negative one is just a complainer, because you’re assuming that you and them have equivalent experiences. “This software is working, folks!” is an example of that; so is the presumption that the complainers are simply caught up in “Tesla/Musk hate”


Because AJ explicitly asked to be downvoted.


The downvoting is for this: You presume that your own experience is the same as everyone else’s (regardless of differing driving conditions, etc.), and thus, that everyone else is being overly critical of their (assumed to be similar) experiences.

There’s a term for this: Gaslighting.


Please don't. I don't think anyone in the linked article or in this thread is lying. I think they all have real experiences to relate and we should listen to them. Given that, can you please retract your needlessly inflammatory accusation of gaslighting? Thanks.

All I'm saying is that my experience makes it very clear that what problems remain with this technology are resolvable ones. Consumer cars are going to drive themselves. Period. And given existing evidence, it seems reasonably clear to me that Tesla is going to get there first (though I wouldn't be shocked if it's Waymo, Cruise seems less like a good bet).

And given that interpretation, I find the kind of grousing going on in this thread unhelpful. If I show you a bug in your iPhone, would you immediately throw it out and declare that no one could possibly market a smartphone? That's how this thread sounds to me.


> All I'm saying is that my experience makes it very clear that what problems remain with this technology are resolvable ones. Consumer cars are going to drive themselves. Period. And given existing evidence, it seems reasonably clear to me that Tesla is going to get there first (though I wouldn't be shocked if it's Waymo, Cruise seems less like a good bet).

Perhaps I'm unreasonably dense, but I do not see the link (direct or otherwise) between 'it works well for me under the following circumstances' and 'it will work well for (almost) everyone, under all (reasonable) circumstances'. The former can be true, without any guarantee of the latter. Given all the hyperbole and promises that have been made thus far, it's hardly surprising that folks are inclined to be skeptical, particularly if their own experiences are as poor as many in this thread indicate.

What I do agree with is that cars are going to be allowed to drive themselves, irrespective of how well they can do it, because the politicians and the bureaucrats apparently are unable to resist the siren song of 'progress'. That much we've seen in SF with Cruise, among many other places. Whether this is a good thing is a different question entirely...


Jules Verne predicted space flight decades before technology made it possible.

In 1903, New York Times predicted that airplanes would take 10 million years to develop (https://bigthink.com/pessimists-archive/air-space-flight-imp...).

Many people who saw the first actual flights from Wright brothers dismissed them as a parlor trick that will not amount to anything practical.

Technology has a long history of exponential improvements. At first it improves slowly and then suddenly.

Sure, everyone (not just Tesla) was expecting self-driving to happen by now. It didn't happen yet.

I share the OP's perspective that the maturity of self-driving is already so high that it's a matter of time for it to become viable to deploy at scale.

The initial DARPA challenge was the Wright brothers' first flight. We're closer to a plane that can travel halfway over the Atlantic. Not quite there yet, but it's just a matter of time until it can cross it.


> needlessly inflammatory

> As I see it our choice is either to celebrate the advent of new technology or hide behind comment vote buttons trying to pretend that it isn't.

> For me, I'm happier watching it than I would be being a luddite. I'll take my downvotes now, thanks.

Pot, meet kettle.


I think the fact that my post now sits flagged and invisible, yet you're still commenting on it, more or less bears out the point I was making. I'm genuinely sorry you took offense, really I am. I thought that was fairly gentle, honestly. It was intended to be a fun way to point out the tribalism at work here. But... some things are maybe just too fun to hate?

Let me repeat: I love my car. I'm not here to hate. I'm here to try to explain how great this thing is. Really, you have no idea how fun it is to have a robot drive you around town. And, comments like yours and the rest here make it clear that you're really missing out on that kind of joy in your rush to hate.

Call a friend and get a ride in an FSD beta car. You really might change your mind.


> You presume that your own experience is the same as everyone else’s

Aren't the negative experience comments doing the EXACT same thing?


No? Surely you've had a negative experience with a product or business and recognized that not every other person out there has had the same experience.

The people here sharing negative experiences aren't suggesting that the positive experiences others report are just the result of being a Musk fan.


The person who was downvoted didn't say that the negative experiences come from Musk haters.

He correctly predicted that his positive experience would be downvoted by Musk haters, even though it's as valid as the negative experiences shared by other people that were not downvoted.

That's clear bias and so obvious that it can be called ahead of time.

He didn't write anything less valid than the upvoted complaints and shouldn't be downvoted by anyone.


> The person who was downvoted didn't say that negative experiences are from Musk haters.

I got the... pretty strong implication from it. I didn't downvote them, but it did rub me the wrong way, for exactly the implication that the people with negative experiences are just Musk haters.

There are two sentences that, taken together, seem very dismissive of people who have negative experiences:

> Everyone hates it (and given the dumpster fire at Twitter right now, everyone really loves to hate Tesla).

> But this software is working folks.

"Everyone hates it" and the parenthetical reference to Twitter really leans into the idea that people who dislike it feel that way for ideological reasons and not their own personal experience with the vehicle.

That's strongly reinforced by the next sentence that states "This software is working folks", which is a flat categorical statement that contradicts the experiences of the people who dislike it.

So, I didn't downvote, but if I did it would be exactly for the implication that the people who dislike it didn't have valid reasons for disliking it.


I downvoted because I downvote anyone who preemptively complains / invites downvotes and assumes that they’ll be doled out in bad faith. It pollutes the conversation and is the cheapest form of sophistry. It’s no different than setting up an argument that ends with “and if you disagree, that just shows how right I am!”


Under no circumstance will I ever let a vehicle I am in drive itself. If you can't nearly mathematically prove the software will function as expected, I am not putting my life in its hands. From the sounds of it, a lot of it is controlled by what is effectively a machine-learning black box.

No thanks.


Feels great seeing all these reputational attacks toward the one guy making the biggest impact on the incoming climate disaster.


Same guy fueling the proof-of-work cryptocurrency mania? It is not entirely clear to me which direction his contribution to carbon levels is going to end up.


Tesla isn't even the largest manufacturer of electric vehicles and their impact on climate change is not as great as it seems like it should be. Not to mention that launching rockets into space is like driving a million gasoline vehicles.



