US moves closer to recalling Tesla’s self-driving software (fortune.com)
522 points by heavyset_go on June 11, 2022 | 605 comments



I think one of Tesla's biggest problems with any of their autonomous systems is the names they give those systems. They confuse regulators, reporters, and consumers.

Most reporters seem to confuse "autopilot" with "full self driving" or vice versa. It's not uncommon to see sentences like "Autopilot, a feature that costs $10k, is still in beta."

Autopilot is cruise control that keeps you within a lane when it can. That's it. Other cars have it, and they're equally effective or ineffective.

Full self driving, aside from being terrifying, does it all, and I suspect it's usually what regulators and reporters mean when they say "Autopilot."

I think if Tesla had used "lane aware cruise control" as the name for "Autopilot", it'd have helped reduce the number of times people confuse the two.

Frankly, as an owner of a Tesla that has both features, I wouldn't mind at all if both were pulled from the software. FSD is absolutely terrifying. I'd rather let my teenage son drive me than have FSD do it. Autopilot feels like a novelty. Given how frequently the car yells at you when you're not holding the wheel, it's hardly better than simple cruise control.


It doesn't help that Tesla's own marketing confuses the two features. For example, their marketing material for Autopilot sure sounds like it's describing a feature called "Full Self-Driving"[1]:

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

That very same embedded video from Tesla has the title "Autopilot Full Self-Driving Hardware"[2], as well, further confusing the two.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...

[2] https://vimeo.com/192179727?embedded=true&source=vimeo_logo&...


To me it seems that Tesla's goal was to get to "full self driving" by now.

If Musk's moonshot had been successful, we would not be having this discussion now.

But it was not, and now Tesla is in trouble because they can’t deliver what people think they were promised.


> people think they were promised

When so many people "think" they were promised something then the more likely explanation is that the promises were misleading.

I think this was Tesla's strategy, and it was a winning one too. Tesla got a lot of cash from that promise, helping it become the company it is now. But the strategy is also a victim of its own success, because at some point you have to deliver on the promise, and you can only delay that by doubling down on the promise for so long. Then so many people start asking questions at the same time that it's like a bank run. The strategy goes bankrupt unless it can actually deliver.


> they can’t deliver what people think they were promised. [emphasis mine]

Tesla isn't delivering what people were not only promised, but actually sold for real money.

Musk very explicitly claimed in consumer shows that FSD would allow your Tesla not just to safely drive you around, but allow you to use it as a robotaxi, making money for you while you're at your own job.


What a great idea. When I buy a $70,000 car, I would like to use it as a robotaxi so that random people can use it as a bathroom or a diaper changing station, and test the sharpness of their EDC knives on the seats. Perhaps Elon can add some AI feature that detects vandalism and responds with a bleating noise.

I would find this very rewarding and lucrative.


Well, according to his pitch [0], in 3 years it will be financially irresponsible to buy something that doesn't make its value back like this.

[0] https://www.youtube.com/watch?v=F8TLsdpYsow


He funded another team to work on the same problem many teams had been working on for a long time, and he wasn't bringing anything significantly new to the mix. Maybe that qualifies as a moonshot but it's not what I normally think of for the positive connotations of the word. Tesla should have had a better handle on timescale.


> Given how frequently the car yells at you when you're not holding the wheel, it's hardly better than simple cruise control.

I tried that in a Hyundai Sonata traveling at highway speeds. The car stayed in the lane, turning the wheel on its own as it was supposed to. My hands were off the wheel, but at no point did I feel comfortable enough to look elsewhere, or move my hands further than a couple of inches away from the wheel. It's completely pointless to me, and actually worse than that, it gives me one more thing to worry about. I suppose if all of the vehicles were 100% autonomous, and the entire highway infrastructure was set up with that expectation, I could go ahead and fully relax while the car drives itself completely. But this 10% automation is like an uncanny valley. Well, valley of terror is more like it.

Cruise control on the other hand seems fine, because my foot is basically in the same position with or without it. The automation just saves me from having to modulate the thing with my foot, and it's not making changes that have very fast, significant consequences.


I have a car with lanekeeping and distance-keeping (Mercedes) and it is extremely nice for long distances on U.S. interstates. After many hours of driving, I feel much less fatigued, and more able to keep up "command" tasks like looking ahead and scanning mirrors, compared to no lane assist. I can't recall a single time when it made a dangerous move, either; it's been extremely good at following lane markings even in difficult conditions like driving into the sun, with patchy concrete. I've been really impressed.


I have a Volvo and I can confirm this exact experience. After an 8 hour drive I feel totally fine.

I'll add that it makes the drive much safer; on occasion it's slammed the brakes before I could react. Both of us together are much better than just one of us.


Same. Did a 12 hour drive across Europe in a Volvo XC60 with the PilotAssist system and I felt fine. In my previous cars I know I'd be absolutely shattered - in the Volvo with this system the mental load was massively reduced.


I have generally the same feeling in the Tesla. It's not full self driving by any means but it's nice. The only thing is a lot of other drivers on the road are not so good at staying in their lane and the Tesla will generally not react at all unless the violation is egregious, instead plowing forward at full speed down dead center of the lane. Often the result is just fine, it asserts your dominance and snaps the other driver back into their lane. But it's not how I drive and it's nerve rattling at times.


I had a fully kitted rental with level 2 systems and it was the same for me. After a long day at a client I drove three hours to get home, and I got there almost more rested than when I started.


Also rented a petrol Ioniq for 2 weeks. Just felt less tired when the car kept the distance and lane. A bit nervous when the car turned for you. But by and large OK.


> because my foot is basically in the same position with or without it.

Do you think that's true in general? I've never left my foot hovering over the pedals, back to when cruise control first became popular. I rest my foot fully on the floor, which is much more comfortable, but requires that I move my foot to apply either the brake or the accelerator.


I didn't mean to imply it's hovering. I'll usually have it close by on the floor. But it's where it would be even if the car were not running.


That's confusing.


It'd be nice if there was a thing to lean your foot on near the brake pedal.



The muscle memory for braking is all that matters. If you're used to keeping your foot on the floor it works well enough, but for me, if I don't have my foot on the gas pedal I take longer to find the brake pedal.


> Cruise control on the other hand seems fine, because my foot is basically in the same position with or without it. The automation just saves me from having to modulate the thing with my foot, and it's not making changes that have very fast, significant consequences.

Maybe I am just living in the past with my 2007 Toyota Camry but cars have come a long way in just fifteen years. Just look at adaptive cruise control or lane assist.

https://www.youtube.com/watch?v=smf1uop7HoY

Maybe I am just in awe because I don't own a new car and don't feel the pain of things like replacing a broken windshield (holy smokes these new things are expensive to calibrate) but it is nice when these things work. However, when I drive someone else's new car I can totally see myself going for a new car if I could somehow justify it financially.


>because I don't own a new car and don't feel the pain of things like replacing a broken windshield

I'm having my Land Rover's windshield (with heated filaments and collision avoidance) replaced in a week. It's a $50 deductible and they said a $50 calibration charge, but that wasn't on the final bill. Given that these windshields are $500-$1500, I'm pretty happy with the cost.


My insurance company replaced a broken windshield on my Volvo, and per policy the glass company used some cheap third-party garbage. The adaptive cruise control and lane assist kept erroring out. After three separate attempts to calibrate it, eventually the dealership said that the third party glass was distorting the image that the cameras were capturing so much that it was impossible to calibrate correctly. We had to get the windshield replaced again using the OEM part, and the glass company is trying to force us to pay the deductible again.

Just a word of warning about how this can go.


I've read a bit about that. I almost spent the time to get my insurance company to prefer OEM glass, but then I noticed that the 3rd party provider is actually the same manufacturer as my current OEM glass (Fuyao, if anyone is curious). Not sure how it'll all go.


That's weird. If strong distortion was the problem it should be very obvious and it should cause problems well beyond the camera.


Both of my Range Rovers need the windscreens replaced because the heating elements have failed. It's about a £75 excess, but it's kind of questionable whether it's worth it for a 25-year-old vehicle. You still need to scrape the ice off the side windows ;-)


I agree with what you say, but part of the reason Tesla is even around to have these problems is Musk's dubious talents as a relentless hype man. Musk himself says they were at the brink of bankruptcy at least twice: https://www.businessinsider.com/elon-musk-tesla-bankruptcy-m...

Personally, I think if we had better laws Musk would be looking at charges of criminal negligence for encouraging people to think they have things like actual "autopilot" and "full self driving". But as it is, he lied his way to the top of the world's richest list. So Tesla's problems here are just the consequences of his actions.


He strikes me more as a physicist arguing from first principles. Once you have drive-by-wire and a sensor array, self-driving is a pure software problem. You should be able to get 99% of the way there in simulation with a team of 4 really smart coders and an endless supply of Redbull and a year, tops. Also, the world's energy problems are trivial when you consider how cheap and efficient solar power is, and how little surface area is required. Clean water is just a matter of filtration at best or desalination at worst, which just devolves into an energy problem. And so on.

I do it myself sometimes, but at least I recognize how annoying and inaccurate it is. It is kind of nice to have someone arguing these positions and even better to act on them, and I for one wish that these analyses held (the world would be a better place if they did). But the real world just doesn't care about your first principles, and even simple ideas present obstacles you never imagined. Musk knows this, but willfully leaves it out during his pep talks, which is dishonest. But sometimes I get the uncomfortable feeling he's deceiving himself, which is quite chilling considering the amount of real power he's amassed (being able to launch large things into space is about as real as real power gets.)


> Once you have drive-by-wire and a sensor array, self-driving is a pure software problem.

This argument proves a little too much. Even if we assume that the sensors are equivalent to whatever input stream humans are working with, reduction of the problem to ”software eng” doesn’t mean it is tractable in our lifetime. Compare: ”Once you remove the requirement that employees work from the office, building a software engineer is a pure software problem.”


>Once you remove the requirement that employees work from the office, building a software engineer is a pure software problem

Hey, that's a good point!


Once you have drive-by-wire and a sensor array, self-driving is a pure software problem.

This is correct, but only if you have the right drive-by-wire and sensor array tech. Essentially what you're saying is "Once you solve the hardware problem the rest is just software" which isn't particularly insightful.

What happens if Tesla can't solve the software problem because the sensors aren't providing the necessary data?


>what you're saying .. which isn't particularly insightful.

Which was my point. I'm intentionally making silly arguments that are valid from first principles, but clearly wrong in the real world. I'm sorry that wasn't obvious because I endeavored to make it so.


add more inaccurate sensors and do corrections in software? We have photos of black holes made out of almost pure noise, so surely it is possible to build a self-driving car with the current generation of hardware.
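
(For what it's worth, here's a toy sketch of the averaging idea, assuming independent sensor errors; the distance and noise figures are made up, and it ignores the real-time and correlated-error problems raised in the replies below.)

    import numpy as np

    rng = np.random.default_rng(0)
    true_distance = 25.0   # metres to an obstacle (made-up value)
    noise_sigma = 2.0      # each cheap sensor is individually quite noisy

    for n in (1, 4, 16):
        readings = true_distance + rng.normal(0.0, noise_sigma, size=n)
        estimate = readings.mean()                 # naive fusion: just average the readings
        expected_error = noise_sigma / np.sqrt(n)  # shrinks like 1/sqrt(n), but only for independent errors
        print(n, round(estimate, 2), round(expected_error, 2))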


add more inaccurate sensors

The point here is that Tesla have already sold the $10k FSD sensor pack to people and told them that one day in the future a software update will provide FSD, with no additional sensors necessary. Adding more sensors is not an option (well, maybe Tesla will do that for people who bought the upgrade, but it seems unlikely they'd do that for free.)


One part of the argument you are missing is that FSD requires real-time decision making as well, so when complexity goes up with a larger number of inaccurate sensors that can become an issue. No one cares if getting a picture of a black hole from noisy data takes one year to process, the same is not true for self driving.


To others commenting on the comment above: read the entire comment, the first paragraph is a satire on the trivialization Musk and others often do of complex subjects.


Thanks for saying that. But the lesson for me, or anyone making points like this, is that you can never be too explicit. Intricate constructions are dangerous on the internet. I like them but I have to admit they function something like a trap that readers can fall into. It's tempting to blame them for not understanding, but it's really on me. At the very least I should have put a warning label at the bottom.


> self-driving is a pure software problem. You should be able to get 99% of the way there in simulation with a team of 4 really smart coders and an endless supply of Redbull and a year, tops.

Hyperloop was a great example of a project that made no sense from first principles, yet he pushed it anyway. It makes no sense because its capacity, the passengers it can move per hour, is like 2% of a high-speed train line. Additionally, he came up with some pure fantasy cost estimates where the whole thing would cost less to build than a cycle path would, budgeting $0 for land acquisition, compacting or earthworks.

Re software problem:

The human genome is digital now too, so figuring out immortality is a pure software problem.

Making money on the stock market is a pure software problem.

Simulating a conscious mind is a software problem.

> I get the uncomfortable feeling he's deceiving himself

Maybe that's what happens when you aggressively fire anyone who disagrees and end up surrounded by yes-men.


> You should be able to get 99% of the way there in simulation with a team of 4 really smart coders and an endless supply of Redbull and a year, tops.

I can say this about a lot of things, but man, it scares me that there’s people in the world who are crazy enough to think that’s true. Even Elon Musk isn’t that delusional.


> But sometimes I get the uncomfortable feeling he's deceiving himself

That is one of the best ways to lie. If you stop caring whether things are true, it can be much easier to sound like you believe whatever thing is most useful in the moment. And think of all the brain cycles you can save by not checking in with things like external reality or consistency with one's past statements.


> pure software problem

Rather a missing algorithm.

Software could no doubt be written to implement the algorithm... but no one has a good enough algorithm, or even knows whether one can exist at all, or can exist with Tesla's limited sensors.


Yeah, because entire armies of top-class engineers and researchers are clearly less competent than “4 really smart coders with Redbull”. Jfc.


In case anyone else was wondering: a) jfc => jesus fucking christ, and b) you missed the point.


ah sorry, missed the sarcasm in your original post. I should have read more closely.


"he lied his way to the top of the world's richest list" ... why do midwits always chose AP thread to gather .... i guess you dont need to start a rocket company an electric car company or try to create a fintech company in 1999 ... why work 100 hr weeks when you can just "lie" .... if biden goes ahead with this he will only shoot the foot of his supporter's like GM ( half of its current value is tied to cruise ) and blow up VC mna pipline for all the lidar based startup's


I am having trouble extracting much sense from this.

I'm not saying lying is all he did. But his major skill is hype. E.g., you cite his fintech history as something apparently good. But he got rich on that because after a merger he ended up as PayPal's CEO for like 6 months, after which he was fired. His major focus, apparently, was shifting away from Unix to Windows. Even if that were a good idea at the time (it wasn't), it was a terrible priority for a fast-growing company.

People treating that as some sort of sign of business acumen, as opposed to a golden parachute for an arrogant chump correctly getting fired, is a great example of how good a hype man Musk is.


> why work 100 hr weeks when you can just "lie"

Because lying is hard work! Ask any politician, clergyman, lawyer, or your nearest big corp C-suite leader. No one said it was easy.

> tell a man the paint is wet, and he has to touch it to be sure. But tell him there's an invisible man in the sky that created the universe, and he will believe you


> i guess you dont need to start a rocket company an electric car company or try to create a fintech company in 1999

Did you mean "start" as in founding a company or "start" as in purchase established existing companies?

> why work 100 hr weeks when you can just "lie"

Well, 60 of those hours are playing Wario on SNL, then playing Wario on Twitter, then strategizing with your PR firm on how to convince people that buying Twitter means you invented social media.

>blow up VC M&A pipeline for all the lidar based startups

Better to blow up lidar pipeline VCs than autopiloting battery fires straight into pedestrians

> why do midwits always

Peak Dunning–Kruger effect I suppose.


I can't counter plain crazy. You win, I guess. I am not claiming the dude can do no wrong. I am stating the fact that claiming he didn't do anything new or special is just plain crazy. Just try doing any one of the things he has already done.


> just try doing any one of the things he has already done

Alright I'll go find some wealthy South African emerald mine owners and ask them to give birth to me.


Agree. Regarding so-called auto-pilot/full-self-driving, Tesla was at best misleading, if not deceiving. So far it seems there's not much we can do about it.


It's pretty bad that Musk was asking people to pre-pay for the FSD feature years before it's even close to finished - with promises like "it's $5000 if you buy it now, or $10000 if you buy it when it's done."

Even worse, some people are falling for this scam.


>>Personally, I think if we had better laws Musk would be looking at charges of criminal negligence for encouraging people to think they have things like actual "autopilot" and "full self driving". But as it is, he lied his way to the top of the world's richest list. So Tesla's problems here are just the consequences of his actions.

Your idea of better laws would mean no Tesla and no SpaceX.

Tesla has replaced two million gasoline cars with electric cars, and given its current growth rate, and Musk's long standing plan to release progressively more affordable cars, this number will likely be massively larger in a few years.

Beyond Tesla's own sales, its success has sparked massive investment by other carmakers to push their electric vehicle manufacturing timetables forward. All told, Tesla has had a massive impact in pushing the world to replace gasoline vehicles with electric ones.

SpaceX, for its part, is responsible for reducing the cost of launching material to orbit tenfold, with another 100-fold reduction possible with Starship. The spike at the end of this graph is almost solely due to SpaceX:

https://ourworldindata.org/grapher/yearly-number-of-objects-...

I see laws that prevent the emergence and flourishing of Tesla and SpaceX as far worse than current laws.


Your line of reasoning depends on a strawman: that Tesla could not have achieved this much if Musk had called it Cruise control and lane assist.


Not quite - GP cites the spectre of Tesla nearly going bust to imply that Musk helped achieve the opposite (extremely healthy company) via dubious, ideally criminal means. It's not entirely based on that, but the implication is there.

Put simply, if Musk only made one controversial call, this comment thread wouldn't exist. The many controversial calls cannot be easily disentangled.


Do really healthy companies often have their stock price fall by half?

Personally, I think that Tesla is not particularly healthy, and that its future is grim as mainstream car manufacturers get in on the EV action. Tesla is not particularly well regarded by Consumer Reports. Of the 16 EV cars with current rankings, Tesla's models are at spots 4, 10, 11, and 16, with low reliability scores. [1]

Tesla does have a big slice of the US EV market, but that's only 3% of the total market. It's perfectly plausible that Tesla's lead among tech enthusiasts, always a small fraction of a market [2], won't translate into mainstream acceptance, and that Tesla will enter a death spiral where their relatively low volumes mean they won't be able to keep up with the major car manufacturers. Their eventual fate could be what happened to so many promising early manufacturers of internal combustion cars: they become brands owned by bigger car companies. [3]

So personally, I think Musk's lies did create a window of opportunity for him, but that, as with so many liars, he sowed the seeds of Tesla's destruction with the same lies that enabled his initial success.

[1] https://www.consumerreports.org/cars/types/new/hybrids-evs/r...

[2] see the first graph here: https://thinkinsights.net/strategy/crossing-the-chasm/

[3] https://www.titlemax.com/discovery-center/planes-trains-and-...


Apple are down 26% ytd. It wouldn’t be unheard of in this market. What should hopefully be clear is that stocks such as Tesla are not particularly correlated to the fundamentals of the company, there are whole-economy effects driving price rises and drops.

What I see in Tesla:

- Extremely profitable car manufacturing: industry-beating profits per car, driven by a cheaper BOM than legacy manufacturers, in large part due to innovation. Margin of 30.5%.

- Manufacturing limited: huge wait list despite accelerating production (Q1 2022 best quarter ever, 68% increase yoy).

- Huge investments in manufacturing across the supply chain starting to pay off.

- For the first time, manufacturing investments that can rival premium legacies. With the factories in Berlin and Texas coming online, it's believable that Tesla has capacity to produce in excess of a manufacturer like BMW.

- Very high purchaser satisfaction (the product is good).

- Large overall profit, already beating most in the industry.


I'm no finance expert, but Tesla being down twice as much as Apple is not what I'd call a positive sign.

We'll see how Tesla's finances go once competition heats up. A major source of profit for them is selling emissions credits to other companies. Which a) undercuts Tesla's claims to eco-goodness, and b) will surely decline as others' EV sales pick up. We'll also see how much that profit is affected by recalls and lawsuits.

In many cases, high customer satisfaction is indicative of a good future, but I'm not sure that's the case here. One, their satisfaction is in the same range as a lot of car companies, including BMW and Honda [1], so it's not a competitive advantage. And two, their current user base is a technophile, early-adopter niche. It's not clear that Tesla can cross Moore's Chasm and serve a mass audience that doesn't care who Musk is.

I look forward to seeing how it turns out. But given the way Musk is flaming out in his attempts to buy Twitter, his success is clearly not guaranteed. And that's before we account for him being distracted by trying to run 3 big companies at once.

[1] https://www.theacsi.org/news-and-resources/press-releases/20...


> Your idea of better laws would mean no Tesla and no SpaceX.

Elon is not the only one that needs to be held accountable -> big oil has literally prosecuted and killed people, just look at what they've done to Steven Donziger.

If we actually enforced these laws, maybe we would have electric cars even earlier, and indeed, there maybe wouldn't be Tesla.

> Tesla has replaced two million gasoline cars with electric cars

Laws don't work this way - If I save someone's life today that does not give me a voucher to murder someone tomorrow.


>>If we actually enforced these laws, maybe we would have electric cars even earlier, and indeed, there maybe wouldn't be Tesla.

This is just utopianism.

>>Laws don't work this way

I wasn't saying they do. I was explaining the consequences of those "better laws" existing. In truth, the laws being sought by the OP would further undermine the very foundations of a liberal society, built on contract liberty, and replace it with social control by an officialdom made up of unionized government bureaucrats with next to zero accountability, micromanaging the actions of others based on an elitist "government knows best" philosophy.


> replaced two million gasoline cars with electric cars

What does the method of powering the wheels have to do with self-driving? Tesla fan bois.


Are you saying that bold lying and criminal negligence were also necessary for SpaceX to succeed? I wasn't aware of that, but I'm happy to take your word for it.


An economy less based on lies and more on truth would be a win for everyone. The competitive market would be much better.


> Your idea of better laws would mean no Tesla and no SpaceX.

Tesla, SpaceX, and Elon Musk are not cool/impressive any more.

And with every tweet, Elon becomes less respected by the public.

It’s telling when you see this sentiment on Tesla Motors Club, r/SpaceX and r/TeslaMotors.

Elon was on a roll. But he royally fucked up the past 2 years.


It’s fascinating how quickly people will suddenly decide apolitical things you are associated with are “not cool/impressive” when you start expressing political opinions they disagree with.


You're talking about Musk becoming open about his backing for an authoritarian, cult-of-personality party? People have been plenty critical about Musk well before that, and about most of the same points.

Honestly, I think the causal arrow goes the other way. Musk has beclowned himself with the way he handled the Twitter deal, and Tesla's stock price has dropped accordingly. So a lot of the noise he has made since then can be seen as attempts to distract people with politics so they don't notice how his impressiveness is declining. This article makes a good case for that: https://twitter.com/BITech/status/1534939630809800706


>>You're talking about Musk becoming open about his backing for an authoritarian, cult-of-personality party?

Musk doesn't have any good alternatives, unfortunately. He's supposed to back the Democratic Party, which relies on a long tradition of union-backed left-wing violence to intimidate the opposition? [1] The party that fanned the flames of 500+ riots in the summer of 2020, leading to dozens being killed and billions of dollars worth of people's livelihoods going up in flames?

The party that is aggressively moving toward authoritarianism, and trying to silence/cancel any one who speaks out about it, like Glenn Greenwald? [2]

The party of lawyers [3], who early on pushed aggressively for CCP-style lockdowns and vaccine mandates [4]?

The party fully backed by elite anti-Free-Speech movements? [5]

Even when the GOP were under Trump, the Democrats weren't clearly the more moral choice.

[1] https://www.independent.co.uk/news/world/americas/us-politic...

[2] https://twitter.com/ggreenwald/status/1450487818766143492

[3] https://www.opensecrets.org/elections-overview/industries

[4] https://twitter.com/NYSBA/status/1326149426063224832

[5] https://youtu.be/54zIUalrCyA


Tesla use Autopilot to refer to FSD too. For instance, take the statement below from the Tesla website:

> Many of our Autopilot features, like Autosteer, Navigate on Autopilot and Summon, are disabled by default.

So I don’t think it’s fair to say that Autopilot just refers to cruise control - that’s certainly not how Tesla use the word.


Tesla “Autopilot” is otherwise known as the combination of “lane keeping assistance” and “adaptive cruise control” for virtually every other manufacturer. Sure, Tesla’s version is better than many of their competitors, but it still doesn’t make the marketing Tesla insists on doing any less misleading.


real autopilot on an airplane is just keeping it in a straight line and constant speed - much less advanced than tesla's autopilot features. autopilot is notoriously stupid, that's why "operating on autopilot" is a metaphor for the mindset that causes careless mistakes.

as a name, it's a pretty good descriptor for what the system actually does. the misleading part is all the other marketing telsa does around it, and the muddying of the line between autopilot and full self-driving.


> real autopilot on an airplane is just keeping it in a straight line and constant speed - much less advanced than tesla's autopilot features

The fact you think this is exactly why Tesla’s naming choice is so dangerous.

Even antiquated general aviation autopilots can intercept headings and change elevation. This is capability that has existed for decades. GA planes are getting the capability to land now. Airliners have been able to land themselves for decades.


Whether "[intercepting] headings and [changing] elevation" is a meaningful improvement when the real issue is situational awareness of the automated system (which is obviously the problem with Tesla's system, or any currently existing vehicle automation, really) seems dubious to me. The ability to change altitude instead of just keeping the flight level will not help the automated system avoid an airplane that comes out of nowhere, because it's not an environment-sensing feature -- it's just a feedback control feature. I strongly suspect that what is controversial about automated car driving systems is not their inability to follow a fixed pre-determined trajectory.


Well flying and driving don’t have much in common so the comparisons break down fast.

Airplanes operate in a completely different environment than cars. Airplanes have radar and radios onboard and rely on ground based information and pre-planning for traffic avoidance. It’s a completely different system.

GM’s vision of self driving cars looked a lot more like the air traffic control system than Tesla’s autopilot.

https://youtu.be/cPOmuvFostY


> real autopilot on an airplane is just keeping it in a straight line and constant speed

Not even close. Even light aircraft autopilots can fly a full route from just after takeoff all the way to 200' AGL on final approach. Autopilots in transport category aircraft have full autoland including rollout.


Modern autopilots can autoland a plane. The systems have enough limitations that they are usually not used except in extreme low visibility situations.


Absolutely untrue. They can follow a preprogrammed path and can land themselves if the airport has the correct equipment, which most do.


> Tesla “Autopilot” is otherwise known as the combination of “lane keeping assistance” and “adaptive cruise control” for virtually every other manufacturer. Sure, Tesla’s version is better than many of their competitors, but it still doesn’t make the marketing Tesla insists on doing any less misleading.

The Autopilot moniker is also used by Tesla to describe automatic lane changing, automatic start/stop at stop lights and 'summon' - so it's not quite just lane keeping assistance and adaptive cruise control.


> automatic lane changing

Multiple manufacturers have that feature too as part of their “lane keeping assistance” offering


“Autopilot feels like a novelty”

No way, are you for real? Autopilot is incredible. For me it substantially reduces fatigue on long highway drives and decreases my stress when commuting in stop-and-go traffic. The difference between autopilot and simple cruise control is substantial. I wouldn't buy a car that didn't have Autopilot at this point.


You couldn't pay me to use it in stop and go traffic. My stress would be through the roof. I was using Autopilot once on one of the straightest stretches of highway in America (I-80 as it passes through Bonneville Salt Flats in Utah). With no cars on the road, and perfectly clear conditions, Autopilot decided that the lane I was in was ending (it wasn't) and began merging on to the shoulder.

I've never used Autopilot without it doing something clearly stupid. That is not a recipe for "stress free."

Are you sure we're talking about the same thing here?


When was the last time you used it? A couple years ago, it wouldn't go 10 minutes before I had to intervene because of a phantom brake or some other stupid/weird thing it was doing. But just last week, I engaged FSD and Autopilot when I left an Airbnb in New Jersey and it drove me from the house to the highway and all the way through dense NYC traffic into Long Island, an 85 mile (!) drive with only two interventions. The first was when a car to my right darted into my lane (I instinctively slammed on the brakes immediately, though the car also beeped loudly so I suspect it also would have hit the brakes anyway), and the second time I shut it off because it was being too cautious getting into an exit lane and other cars were never going to let me in (if you drive through NYC you know what I'm talking about).


> (I instinctively slammed on the brakes immediately, though the car also beeped loudly so I suspect it also would have hit the brakes anyway)

Interesting, I would have thought that when (if) the autopilot intervenes it would be way faster than human reaction.

I have no car with similar features, but I remember renting a VW Golf that had some adaptive cruise control. The simple "keep the speed constant" part worked just fine (like in older, simpler cruise controls), but the "adaptive" part was (IMHO, and it may well be a specific issue with that car or some settings I didn't know how to adjust) very, very cautious.

On the highway, driving in the left lane at a speed of 120 km/h, every time a car a long distance ahead (100 metres or more) merged from the right lane to the left lane, the car would suddenly slow down/brake long before I would normally have done so manually (and often without need, as the other car was travelling at my speed or nearly so). It was, if not uncomfortable, "strange".


It's possible I hit the brake at the same time as the car did and just didn't notice; we're talking fractions of a second here. Like I said in a previous comment, since I had the autopilot on I was able to look at more of my surroundings rather than just focusing on the car in front of me, so I actually saw him driving erratically a few minutes before that. When he was next to me I was being extra defensive, and I reacted really quickly.


I see, you were already somehow alerted.


Thanks for providing a positive perspective. It feels like I only hear negative things of autopilot and Tesla on HN and Reddit, so it's nice to hear a different perspective.


It is pretty unfortunate. I don't know how else people think we're going to solve driverless cars and eventually save tens of thousands of people from dying in car crashes every year in the US. At some point these systems need to be tested on real roads.


This does not reflect my experience with autopilot. I've found it to be incredibly reliable and convenient.


Autopilot doesn't merge or change lanes on its own. That only happens with "Navigate on Autopilot", which is an FSD feature, not part of basic Autopilot.


Jfc people make a ton of excuses for Tesla. This doesn’t invalidate what they were saying in any way. If lane changes are this buggy, why should they trust the auto braking?


Because one is fully baked; the other is explicitly billed as under-development, not-yet-complete beta software. Also, there are a lot of people that just make up lies about Tesla; people deserve to hear the truth.


A company that allows people to use "beta" software on public roads should not be trusted to manufacture vehicles. That is grossly irresponsible.


I love the FSD beta, and the car has already prevented my wife from getting hit head on by a car running a red light at 80-90MPH. Your comment is exactly how I feel about Microsoft, it's grossly irresponsible to run that crap, and it's not even labeled as beta.


Lane keeping assistance and adaptive cruise control make long distance driving significantly less physically stressful. It's really impressive.


I agree 100%. It definitely makes me a safer driver because I can pay attention to other things that may be going on 7 cars away when I don’t have to micromanage the job of staying in the lane and making sure I’m at the correct distance between the car ahead of me.


I too use adaptive cruise control (this is the real name for it, not that Autopilot PR of yours). This is not something Teslas are unique in having.


I recently drove a BMW 750i with lane-following on a long trip and the lane-following feature was terrifying. It seemed like it was doing the most brain-dead line-following algorithm, which made it "bounce" back and forth between the left and right sides of the lane. I couldn't stop worrying that it was freaking out the other cars around me by careening towards them and yanking back in the other direction at the last moment. I sure hope Tesla Autopilot is better than that.


My Mercedes E-class has a setting where you can change between strict and relaxed (whatever the names were).

In strict, it very closely keeps to the middle of the lane. That's not how I normally drive, so it feels scary. I usually bias away from oncoming traffic. It's probably fine on a wide highway, so this is more about me not feeling in control.

However, in relaxed mode, it seems to have the issue you're describing where there's too much slack, and it looks like it's playing pong between the lane dividers. Have the exact same worry about what this looks like to others...


It feels like different systems are better and worse at that. On my 2019 Honda Insight, it's actually pretty smooth and does a good job of staying in the center of the lane, but it won't turn the wheel too much -- if you're taking all but the gentlest curves, you're going to be turning the wheel. On my mom's 2018 Subaru Outback, it's exactly what you're describing -- it's fine in a straight line, but it doesn't really seem to judge the curve of the lanes as much as simulate a pinball. (I don't think I ever tried to let it take a turn on its own!)

I've only ridden in Teslas, not driven one, but it seems like it's probably got the best following/lane-keep system on the market, although the last time I rode in one -- admittedly about two years ago -- it was markedly jankier when the driver engaged more "self-driving" features, e.g., have it change lanes, take exits based on its own GPS guidance, etc. It was impressive that it could do it, to be sure, but it was the self-driving version of your reckless friend in high school you do your best never to ride with.


It's usually very good at staying perfectly centered between painted lane lines, even when following curves. This is fine usually, except when a lane on the right merges into your lane. In the United States, interstate highways don't always have any sort of markers establishing the non-merging lane. Because of that, about halfway into where the lanes merge, Autopilot considers that portion just one very wide lane and aggressively tries to center itself, rather than just keeping a relative distance from the lane markings on the left.


VW/Audi lane assist also "bounces" you off the side of the lane, but it doesn't send you so far to the other side that you bounce back. Overall the combination of lane assist + adaptive cruise does an okay approximation of FSD on highways.


Tesla fans will say "Autopilot in planes isn't much more than glorified cruise control," but they miss the point. What matters is what people think when they hear "autopilot," and Tesla AP or even NoA is nowhere close to that perception.


Autopilot can land and take off, so it _is_ more than glorified cruise control.


sigh

No. It can't.

There are no aircraft autopilots which are permitted to be used for takeoff. None. Zero. Nada. Full stop.

There are no aircraft autopilots which can be used for a fully hands-off landing. Doing that would require a "Category IIIc" certified autopilot and IIIc precision approach. The FAA recently (a few months ago) removed the definition of IIIc. Why? Because, 30 years after being defined, there are ZERO autopilots certified to do that and ZERO IIIc approaches for them to use.

Among those autopilots with a low decision height (when the pilot takes over), all require a so-called "precision approach". That, in turn, requires a specialized radio beacon at the end of the runway (GPS is not sufficient).

Additionally, there is exactly one type of airplane in the world, no longer in production (the Airbus A380), that can automatically take evasive action if it detects another airplane on a collision course.

It's well beyond time to admit that, whatever you feel about Tesla, their "autopilot" is no less sophisticated than an aircraft autopilot.


Autoland is a common capability on many aircraft, and it is commonly used in inclement weather.

I don't know what you are talking about with the A380. I have never heard about it being able to fly a TCAS RA maneuver (collision avoidance) on autopilot.

The A350 can handle TCAS RA maneuvers on autopilot. It can also fly windshear escape maneuvers on autopilot.

Bill Palmer (A350 check captain) talks about it: http://www.airplanegeeks.com/2018/02/14/490-airbus-a350/


> There are no aircraft autopilots which can be used for a fully hands-off landing

There are no certificated IIIc approaches or equipment, but Category IIIb autopilots will arm at DH and then continue all the way down, flare, and land.

For that matter, there's all the emergency landing autopilots showing up in turboprops now. Push a button and wait to be on the ground. https://www.youtube.com/watch?v=d-ruFmgTpqA&t=219s


_sigh_

Yes. It can.

You’re intentionally misrepresenting reality for what I suspect is some ideological fondness of Tesla.

Sure, no _commercial_ plane can take off with autopilot, but such planes exist[1].

But, saying Cat IIIc is required for hands off landing is silly. It’s required for hands off landing in the absolute worst conditions. It’s needed only when conditions are so bad that you can’t safely taxi and you’ll be stuck on the runway. That’s why it’s not used.

[1] https://youtu.be/9TIBeso4abU


I don't think that really matters when you're marketing to the public. Using language that people perceive in a way that benefits you, and then claiming it was the customers fault for not understanding the technical meaning and reality of the words you used is at least unethical and possibly illegal.

If I make a chocolate bar and tell everyone it cures cancer, I shouldn't be able to wriggle out of lawsuits by saying "cures" doesn't actually mean "stops a disease" with some complex medical jargon.


Auto land exists https://www.planeandpilotmag.com/article/garmin-autoland-thi...

I would think takeoff would be even easier to automate.


In 1947 the Air Force supposedly flew a Douglas C-54 across the Atlantic Ocean fully under autopilot, including landing and takeoff.

https://www.nytimes.com/1947/09/23/archives/robotpiloted-pla...

Partial article text: https://www.rarenewspapers.com/view/623884


May want to look into the VTOL space.

Many are looking at an autopilot capability to do the takeoff and landing.


No one is confusing anything. It's obvious that they intend these things to do much more than they are currently capable of. Why are you making excuses for a company that some 8 years in has only ever doubled down on the naming and the connotations?

If anyone is confused, it's intentional.


> No one is confusing anything.

> If anyone is confused, it's intentional.

I guess you're right. I've never read a more confusing comment on HN.


Software in general has a naming problem - they all tend to be aspirational metaphors. From 'AI', 'ML', 'Cryptocurrency', 'Agile process' to 'self-driving', the confusion they create is really frustrating and harmful at times.


A Big Mac isn’t a big burger. I’m sure many shoe names don’t reflect well on the increased abilities they sound like. Naming is a big part of marketing hype. It always has been, it’s not just software.


Indeed. However, Big Mac does not cost you $10,000.

In contrast, something like Comma.ai is open source, the complete kit that's compatible with almost any car is like $2k, and they don't claim to be "Full Self Driving" by any means. Funny enough, with just a kit that goes where your rear-view mirror is, they perform almost as well as the FSD beta that Tesla has.


I won't dispute that the branding is hyperbolic (isn't all/most branding?).

Do you think Tesla's support page describing Autopilot and Autopilot with Full Self Driving is clear enough? https://www.tesla.com/support/autopilot

For instance, there's this paragraph: "Autopilot and Full Self-Driving Capability are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment. While these features are designed to become more capable over time, the currently enabled features do not make the vehicle autonomous."


The problem is that this technical writing runs counter to all the flashy marketing, and the flashy marketing sticks in people's minds more thoroughly.


> Autopilot is cruise control that keeps you within a lane when it can. That's it. Other cars have it, and they're equally effective or ineffective.

But "Autopilot" obviously sounds cooler. So you can pay 10k and feel good, while that feature costs 1.300 Euro in my Hyundai (where it also comes with heated seats and steering wheel in the package).

The more I read and see from Tesla, the more it looks like some kind of A/B test on how much money people are willing to spend on a beta software/car-test.


Tesla's Autopilot feature is standard. The thing that costs $10k is Full Self Driving.

Which is an example of what I mean when I say "Tesla's biggest problems with any of their autonomous systems is the names they give those systems. They confuse regulators, reporters, and consumers."


But isn't FSD at this moment just that lane holding assistant?

Anyway... Confusion at its best. :-D


Nope. FSD is the nightmare inducing beta software that does a pretty good job of actually driving for you. This includes stopping at stop signs, making turns, changing lanes. It also sometimes does very dangerous and stupid things, like stopping as it makes a left hand turn, then deciding to start driving against the flow of traffic as a group of cars heads your way. AKA, nightmare inducing.


I have a Nissan Leaf with lane following and adaptive cruise. It’s nice on road trips but it’s not something you can set and ignore. It’s just better cruise control that lets you relax a bit more.

If FSD is anything like the many videos I’ve seen I don’t want it and don’t think it’s ready for general use.

FSD seems like one of those problems where getting 80% of the way there takes 20% of the effort and getting the remaining 20% takes 80% of the effort. There is a truly massive chasm between adaptive cruise with lane following and FSD.


I drove a rented Nissan Murano and I found the lane keeping and adaptive cruise control to be excellent. If the auto supply was in better shape, I’d have seriously considered buying one.


It's such an irony that Nissan's features were almost entirely developed in spite of Tesla, but they're far better at actually sciencing it and shipping it to be delivered today.


Sciencing? Is that like engineering or designing?

Just curious as to word choice here, does engineering not include the design of technology?


It's of course not a real word, but the mental definition I had in mind was: problem discovery/definition is science, and clarifying/solving is engineering. Nissan is shipping NoA and demoing dynamic obstacle avoidance, the latter of which I thought leans towards discovery.


> Given how frequently the car yells at you when you're not holding the wheel

What? You claim to own a Tesla but don't know that you must keep your hand on the wheel at all times?


The car will yell at you even when your hand is on the wheel. If it doesn't feel resistance, it assumes you're not paying attention.


It's confusing until you use it; then its capabilities and limitations become clear. So I don't see this as a reason to pull it.


I think the number of comments in this thread where people confuse Autopilot with FSD is reason enough to at least rebrand the names of the two distinctly different features.


They need to rename Autopilot to Deadpilot now. More accurate.


Or Livepilot as my Model Y saved my wife's life.


If your Tesla is "yelling at you when you're not holding the wheel" "frequently" you are admittedly misusing the level 2 assist.

Contrarily, I use autopilot responsibly and appreciate that statistically it is reducing my odds of death. As such, I WOULD mind that a political decision, like Buttigieg's appointment, is being made to endanger me.


> I use autopilot responsibly and appreciate that statistically it is reducing my odds of death

I don't think it really does, does it? I thought that "Autopilot" basically only worked on highways in fairly unsurprising conditions?


Autopilot crash statistics for last quarter were one crash per 4.97M miles driven.

National average crash statistics for that quarter were one crash per 484K miles driven.

Even if you consider that accidents are 3x less likely to occur on highways and boost that to 1.5M, that is still a multi-times safety factor.

A lot of accidents on highways occur because of aggressive or negligent driving. It's really not hard to beat humans so this shouldn't be a shocking find.

I'm not sure what you mean by "in unsurprising conditions". I turn on autopilot on the entry ramp, and disable it on the exit ramp. I would instead characterize that as "all highway conditions".
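
(Rough arithmetic behind those figures, using only the numbers quoted above; this is a back-of-the-envelope sketch, not Tesla's or NHTSA's methodology.)

    # Miles per crash, as quoted in the comment above
    autopilot = 4.97e6   # Tesla-reported Autopilot miles per crash
    national  = 4.84e5   # national average miles per crash

    raw_factor = autopilot / national                # ~10.3x
    # Assume highway driving is ~3x safer per mile, as conceded above
    highway_adjusted = autopilot / (national * 3)    # ~3.4x

    print(round(raw_factor, 1), round(highway_adjusted, 1))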


Okay, but that 3x should be 3000x, because people don't get in crashes in the conditions in which "Autopilot" actually works.

It's easy to avoid crashes if you're travelling on a perfectly straight road at the same speed as everyone around you with no changes in speed or lane, which is all that Autopilot does.


????

i've already explained that autopilot is capable of all highway conditions

not straight lines... curved roads that split with multiple lanes, getting around slow traffic and avoiding fast traffic... i'm really not sure what your argument really is. What you are really explaining here is that you don't have experience with autopilot, or you would know these things.. maybe you've used driver assists from other companies which are as limited as in your imagination? you believe that autopilot doesn't change lanes? lol that's the main benefit. I hate merging into traffic, finding gaps, etc. that's why i have autopilot engaged on the entrance ramp... to do it for me.

Average highway users aren't average highway users because "it's easy"?

so let's just add 3 orders of magnitude??? what kind of logic even gets you to that point...

i've provided statistics to show my original argument, which was statistical safety... you are just using evidently ill-formed opinions and invented numbers


> Average highway users aren't average highway users because "its easy"?

Maybe the standard of driving in the UK is several orders of magnitude better than it is in the US.

I've driven a Tesla with Autopilot, and it simply didn't feel safe to let it do its thing, and it gave up as soon as it saw anything it didn't like, like maybe a junction or a vehicle changing lanes.


I don't know why you insist on inventing orders of magnitude for each opinion ... Is this hyperbole I'm not picking up on?

Maybe the vintage accounts for our disparity of experience? I took a 2000km road trip to the Austin GP last year. Texas is known for aggressive drivers. It never disengaged because of a vehicle changing lanes, or because of a road junction.


I'm downvoted for using level 2 with my hands? Or for recognizing its safety value? Or for agreeing with the author's assessment of Buttigieg's appointment?


My use case was autopilot steering me off the left-hand side of the road at 80 MPH during a sunset [1]. My (WAG) guess is that the ML corpus was based on a Model 3, while the Model Y I was driving had the sensors placed slightly higher? I dunno; again, 'tis a WAG.

I am a huge fan of Tesla's premise. My Dad sold Vanguard Citicars [2], and I worked on GM's interactive ad for the EV-1. I put a down payment on Aptera's original EV and again on the resurrected version [3].

But, I backed out of buying a Tesla. Even though I disabled Autopilot and ran purely cruise control, the Model Y would brake for phantom obstacles. Such as: low rises in the road. Or: passing an 18-wheeler on the left. Driving from AZ to CA, the phantom braking was occurring every 5 miles or so. So, I had to drive 800 miles with a manual accelerator. Bummer!

[1] https://news.ycombinator.com/item?id=31504583

[2] https://en.wikipedia.org/wiki/Citicar

[3] https://aptera.us [edit]


When my dad was teaching me how to drive, he told me that cruise control is only for when you have at least a hundred feet between you and the next car, and that you fully turn it off to pass other cars. I treat Autopilot the same way when I use it. I treat it as cruise control, if I use it at all.


Problem is that, with no cars around for hundreds of feet, with dozens of seconds of reaction time, when you hit a low rise on the AZ Interstate, the Tesla Y will abruptly brake. This is with Autopilot OR with simple cruise control. Pretty stressful. Bonus impact on range prediction. So, I couldn't use either.


I know this is a controversial take, but I'll say it anyway: anyone who doesn't feel safe driving with cruise control is a dangerous driver. Cruise control is a tool. So long as you understand how it works, and that it doesn't paralyse your right leg, you really can use it in any driving context. It is an alternative input mechanism to the accelerator pedal, nothing more.


Agreed. Modern cars with adaptive cruise control and lane keep assist are a breeze to drive if you maintain awareness of your surroundings and practice confident defensive driving.


No, it is useless on VAG cars in Norway. You can't drive at the speed limit of 110 and suddenly have the car slow down to 80 because of "reasons". That increases the risk of being rear-ended for no reason, even though the car behind you is supposed to be attentive.

Granted, you seem to be able to disable it in the menus, but if you rent/use car sharing services, this may not be obvious to you


Human drivers sometimes perform phantom braking, but we don't realise it because we are able to justify it to ourselves.


> My (WAG) guess

Wives-and-girlfriends?


Wild-ass guess


Wild-ass guess guess?


Yes, like when people say PIN number or ATM machine. Or HIV virus or LCD display.

https://en.wikipedia.org/wiki/RAS_syndrome


Honestly I don't think these are as often wrong as some people think. While PIN and ATM work well even when articulated (please input the PIN, the ATM is broken), others don't ("the HIV has killed millions"). LCD is more on the fence for me.

Other common examples that don't sound that wrong to me are "the IP/TCP/... protocol".

For a different reason, I also don't think "this API expects a JWT token" is redundant: JWT is the name of a standard, so the phrase should be parsed as "the API expects a token in JSON Web Token format".
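
To make that concrete, here is a minimal sketch (standard library only; the example token and its claims are made up) showing that a JWT is just three base64url-encoded parts, header.payload.signature:

    import base64
    import json

    def decode_jwt_unverified(token: str) -> dict:
        # Split the three base64url parts; the signature is ignored here.
        # Illustration only -- real code must verify the signature with a JWT library.
        header_b64, payload_b64, _signature = token.split(".")

        def b64url(part: str) -> bytes:
            # base64url may omit padding; restore it before decoding.
            return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

        return {"header": json.loads(b64url(header_b64)),
                "payload": json.loads(b64url(payload_b64))}

    # Hypothetical token: header {"alg":"HS256"}, payload {"sub":"123"}.
    print(decode_jwt_unverified("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxMjMifQ.sig"))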


Or, more recently, PUBG Battlegrounds, which reuses two letters from the initialism.


Are there iterated versions of that?


SSD drive


JWT token


This is not a good example - JWT is the name of a format, not of the token stored in that format. JWT token just means "token in JWT format".

Saying "a JWT" is in fact more dubious, like saying "an SQL" instead of "an SQL string".


https://datatracker.ietf.org/doc/html/rfc7519 disagrees:

Abstract

JSON Web Token (JWT) is a compact, URL-safe means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JSON object that is used as the payload of a JSON Web Signature (JWS) structure or as the plaintext of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed or integrity protected with a Message Authentication Code (MAC) and/or encrypted.


Even the quoted fragment is a bit debatable - while the standard does commonly use "JWT" to mean a particular token, the very first sentence is only grammatical if we take JWT to be the name of the standard/format. Otherwise, it's missing an article - it should have been "*A* JSON Web Token (JWT) is [...]" if it were consistently used to refer to individual tokens, not the collective format.


> like saying "an SQL"

An SQL is an implementation of the Structured Query Language, isn't it? Like MySQL or MSSQL.


> the Model Y would brake for phantom obstacles. Such as: low rises in the road. Or: passing an 18-wheeler on the left.

FWIW, I had similar phantom braking experiences with my Model Y.

One of the recent updates fixed it such that phantom braking rarely happens (maybe once per 1000 miles driven or less now versus almost every time I passed an 18-wheeler on a two-lane highway before).


Yay; progress! My experience was from last year. Will stop whining, now.


Autopilot cannot be enabled until the position of the cameras is calibrated. That's why it requires you drive on well-marked roads for several miles when you first receive your car, before it works.


I have zero knowledge of Tesla cars, but do wonder why this can't be pre-calibrated at the factory?


I don't know, but the calibration also has to be re-done if you have a camera replaced.


Though that is not how neural nets work.

It's anything but trivial to make a neural net properly abstract over some "camera position" parameter.

Moreover, it's nearly impossible to be sure it abstracted properly in all cases. I.e. it might abstract correctly in every case except some arbitrary edge case which looks like nothing special to a human, but for some arbitrary reason is special to the NN.

Anyway, this is highly speculative and might very well be unrelated to the Tesla behaviour in question.


I’m pretty sure some things use normal algorithms; not everything is a giant neural net.


Hard to know without being on the team, hence my WAG modifier to the guess. My reasoning is that the Model Y is built on the same chassis as the Model 3. So, a quick and dirty solution is to use the same training set results. But, it most likely is something else.

This is a roundabout way of saying that maybe - just maybe - the problem is only for the model Y.

Am rooting for Tesla to fix it. Maybe a Tesla engineer looks at Hacker News? Already filed a complaint with the Tesla Sales Manager. Or perhaps the log of my screaming in terror did the trick. Is prosody for QA sentiment a thing?


I don’t think they segregate their training data based on car models. I mentioned in an above comment, but Comma.ai doesn’t do this and their devices support tons of cars. It would be very odd if a company far smaller than Tesla was able to figure out how to account for different camera positions and Tesla wasn’t. I bought a 2022 model year car and plugged the Comma device in and it just worked, and they would have had pretty much no training data from my car at that point. Just my speculation though.


Are there more recent papers? I see one from 2016 [1]. But then again, my assumptions are a bit dated as well. Was thinking: could a different horizon on a CNN mid-layer trigger a false positive? Perhaps classify a slight rise as a bumper or some other obstacle?

Maybe a simpler system, like Comma.ai's cameras, has looser tolerances. Somewhat akin to the one or two eyes of a human driver.

Maybe it is policy. I could imagine the brand hit to Tesla for every crash - even one due to driver error. Maybe phantom braking is an artifact of erring on the side of caution. Maybe lawyers got involved. (The horror!)

Anyway, idle speculation, this.

[1] https://arxiv.org/pdf/1608.01230.pdf


I have a Comma 3 in my car, and it has a calibration phase that takes like 3 minutes of driving. The device works for many different cars of different sizes, and their driving algorithm uses neural networks. As far as I know you still have to get the device fairly close to centered on the windshield for it to work, but clearly you can still do some sort of calibration based on driving data alone. Maybe Tesla can account for even more deviation in camera position because they know ahead of time where the cameras are mounted?


I'm not saying it's impossible or that it will take long; I'm saying it's not easy to implement, and depending on your pipeline, subtle bugs can sneak in that you might not be able to find with any testing.


How long ago was that? Autopilot has been improving rapidly, and since they stopped using radar I haven’t experienced any phantom braking.


One thing I don't quite understand is that many cars have autopilot-like software. Unlike Tesla's, which constantly requires putting pressure on the wheel to show you're there and paying attention, Ford's lets you drive indefinitely without any hands on the wheel. Wouldn't this same investigation be put onto other manufacturers as a giant audit? Hitting emergency vehicles is obviously bad, but 1) it happens in cars without Autopilot and 2) if you're hitting one you're clearly not paying attention. It's not like they just appear out of nowhere.


What it comes down to for me is the marketing. The other manufacturers are very careful in how they market the software, Tesla is not.

If you look at Mercedes, for example, their marketing page describing their driver assistance technology[0] (with a very similar feature set to Tesla's) uses the word "assist" more than 30 times and in practically every header. Few people would come away from that marketing thinking that their car is going to drive itself without them paying attention.

Tesla, in contrast, advertises their "autopilot" and "full self driving" capabilities. The word "assist" is used exactly once on the Autopilot landing page[1]. The rest of the words and names are carefully chosen to convey a sense of total autonomy.

[0] https://www.mercedes-benz.com/en/innovation/autonomous/the-n...

[1] https://www.tesla.com/autopilot


Tesla has also used this wording to advertise their purported self-driving features since 2016[1]:

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...


I find it really interesting that for Mercedes L3 self driving they are even willing to put their money where their mouth is and take liability for the car in self driving mode.


Read the fine print though.

They do disable the system in complex situations.

And they give ten seconds warning in advance of this happening.

Which sounds great until you think it through…

It means they schedule the disabling of the system way earlier in the progression of any potential traffic complication. Ten full seconds is long before incidents even start unfolding. The car would have to be psychic, and it's not. So in order to deliver the ten-second warning, those warnings need to be on a hair trigger with many false positives.

The driver, if they have it enabled, will be constantly getting warnings that the system may disable itself in ten seconds when the car sees even the very slightest possibility that things may potentially get complicated way down the road. I don't see this being endurable for most people.

So my conclusion is this is just marketing hype, until they can get rid of the auto disabling thing.


I haven’t read the fine print but you can have ten seconds of warning without predicting complications ten seconds in advance.

That 10 seconds simply means that the car must handle any situation safely for at least that amount of time. In practice, this means decisions such as emergency braking and steering must be autonomous, but possibly more complex scenarios (e.g. moving aside for an emergency vehicle on a tight street) can be delegated to a human (or the system can simply pull over and stop).

They disable the system in cases of rain but at least from the marketing and videos demonstrating it, there’s no prediction of complex scenarios like you described.
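
A toy sketch of that reading of the guarantee (hypothetical names and decisions, not Mercedes' actual logic): the system never predicts ten seconds ahead; it only asks whether it can stay safe for ten seconds from now.

    HANDOVER_WINDOW_S = 10.0

    def next_action(can_keep_driving: bool, can_hold_safe_for_window: bool) -> str:
        # Hypothetical decision logic, for illustration only.
        if can_keep_driving:
            return "continue autonomous driving"
        if can_hold_safe_for_window:
            return f"alert driver; keep control for up to {HANDOVER_WINDOW_S:.0f} s"
        return "minimal-risk maneuver: slow down and pull over"

    print(next_action(can_keep_driving=False, can_hold_safe_for_window=True))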


No, you are not understanding what I explained. There is a moving horizon of time during which the ten seconds is continually extended, and the system needs to draw the line somewhere. When does the final ten seconds before cutoff begin? Your examples don't explain this. This is not easy stuff to solve without some compromises.


If it’s limited in which roads it can run on, the 10-second window makes perfect sense. Just trigger the warning when you’re approaching an unsupported roadway.


> And they give ten seconds warning in advance of this happening.

which is infinitely better than the <1 second that Tesla gives you.

> those warnings need to be on a hair trigger with many false positives.

unless it is actually reasonably good at tracking threats and traffic. Look, if you have multiple sensors of different types, then you are able to take firm action quicker than a human when things turn south.

A lot of Tesla's problems are because they have shit sensors and a stupid approach to designing the system (no lidar, no radar, no multiview cameras, no high-res GPS or maps). They also have an overwhelming pressure to just YOLO shit, rather than test and redesign.


>which is infinitely better than the <1 second that Tesla gives you.

Not at all. With Tesla you are constantly paying attention, or should be to the same level that you are with any other car.

Meantime it is actively intervening to keep things safer with better follow distance, collision avoidance, lane departure detection, and automatic braking. Most scenarios leading toward accidents are entirely avoided due to all this.

So the <1 second scenarios (which are not talking about warnings, but about when the system was in a complete outlier situation it does not know how to deal with and disabled itself) are very unusual things.

Like "a Prius driver drove off a bridge and is now landing on our hood." Of course the system will disable itself in that situation; what else would you expect? What sensors do you suggest for that?


> With Tesla you are constantly paying attention, or should be to the same level that you are with any other car.

You should be paying attention, but you are not. Human attention is a difficult thing. From what I recall, while driving it takes about 7-14 seconds to regain situational awareness. This means that 1 second isn't enough.

> Like "a Prius driver drove off a bridge and is now landing on our hood." Of course the system will disable itself in that situation;

I'd expect that the system would slam on the brakes, not disengage to avoid liability. That's the point here: it's not about tech, it's about legality. That's the worst part; the entire system appears to be designed to stop Tesla being taken to court.


That’s a paranoid and particularly uncharitable view which ignores the perfectly valid reasons that it’s best the system behaves as it does.

The human has responsibility for the safe operation of the vehicle, whether they step up and fulfill it or not. If they don’t fulfill their responsibility, all bets are off.

There is no other (equally good or better) way this could work in practice. You can imagine other ways, and I’m guessing you will, but they are imaginary, not practical.

> I'd expect that the system would slam on the brakes

It can and it does, even while the Autopilot system is disabled. Who said it wouldn’t?

You should learn more about the cars before hardening your opinions so much.


> paranoid

I doubt it's paranoid; I am not worried about it, just rather annoyed that driver aids are being marketed as something they are not, cheapening an entire industry.

> There is no other (equally good or better) way this could work in practice.

We are literally discussing a company that has another way.

> It can and it does,

https://www.youtube.com/watch?v=45XMhMzMDZY

suggests otherwise.

>You should learn more about the cars before hardening your opinions so much.

I work in machine perception; this is my bread and butter. Moreover, I have worked with life-critical infra, and I know corner-cutting when I see it.


I think the clever thing about this is that it breeds a culture of responsibility at Mercedes - if you are programming the self driving, you will be more considerate of how it works because your company (and perhaps ultimately you) bear that responsibility. The consumer gets confidence in the product, and Mercedes holds itself accountable because it doesn’t want to ship a product that will endanger people.


Smart. This will be the move that triggers widespread adoption. Cheaper insurance.


It makes sense as long as it is failproof. But is their tech better than Tesla's? I assume it is not and they will have to severely limit the capabilities of the system. Maybe it will work in traffic jams, simple routes.


Why are you assuming it isn't better? Just because of Musk's relentless promotion and exaggerated claims?


The page literally talks about "full self-driving capabilities [...] through software updates designed to improve functionality over time". Which to me sounds like FSD is not actually being advertised as currently existing, as opposed to something you'll eventually get.


Yes the actual finished FSD software release does not exist and Tesla does not claim it exists. There is a SKU you can pay up front for (in other words, pre-pay for) called FSD, so the SKU exists, but it does not yet exist as a released, finished software release.

A lot of people can't get their head around this level of subtlety. Those people probably should not have too much confidence in their take on this, but they do. Dunning-Kruger effect. It seems like you do get it.

Just for completeness to soothe anyone triggered by details missing here, certain features of the current beta version are enabled for drivers who have purchased the SKU. And there is also a full beta release that is available to some drivers who opt in. All of this does not mean FSD exists yet as a public non-beta software release.


> A lot of people can't get their head around this level of subtlety.

Have you considered that this might be intentional on Tesla's part?

In my original post I never said anything about full-self-driving being a real thing, I simply said that their marketing uses that term a lot and leans heavily on those capabilities. I'm well aware of the distinction, but I'm also aware that Tesla seems to intentionally cultivate the ambiguity as to what exactly FSD means. It's not fair of you to cast blame on those who get confused when the company makes the line very, very blurry in their marketing.

EDIT: I'm also not accusing Tesla of outright lying. They can "not claim it exists" while still making sure plenty of people miss that "subtlety".


Yes, I've considered the possibility of it being intentional on Tesla's part. It's impossible not to consider it with all the conspiracy theorists and short sellers repeatedly bringing it up on Hacker News and elsewhere.

However, I don't buy into the theory, for the reason that Tesla's proactive, frequent, and aggressive informational notices clarifying the point are impossible to miss when you are in the car, and that goes completely counter to the theory.

And their informational messages are also there prior to that during the buying experience.

And prior to that in the marketing.

Why would Tesla do all these reminders about the human needing to maintain oversight and control, if they are trying to trick you into thinking the opposite?

Haters latch on to the marketing as if it's the only thing, and ignore the caveats in the marketing, and ignore the informational messages during the buying process, and ignore the in-car information and the in-car active measures the car takes to make sure you are paying attention.

Haters dismiss all that and pretend it does not exist.

To what end, I'm not sure, I think it is to stay in a comfort zone regarding a delusion they have about their opinion being the right one. Confirmation bias doing its job.

I do appreciate you bringing up the point. It's a fascinating phenomenon.


You're assuming that FSD is a software problem, and that there is any realistic chance that the feature will be available in the lifetime of any of the cars they sold including it.

Both of these beliefs seem unwarranted: FSD as described in the advertising is most likely much more than 5 years away, and will almost certainly require LIDAR to achieve any kind of safety.

People pay for a pre-order based on the promise that the item will be delivered. If I pre-order a game and it is later cancelled, or even delayed for many years, I will get my money back, I won't just be told "well, you knew it wasn't ready at the time".


I don't think I said anything about hardware or whether FSD will ever be delivered in the lifetime of the current fleet.

Myself, I am skeptical of Elon's timelines. But I also understand they are not his promises, they are just his (foolish, imho) expectations.

He admitted he vastly underestimated the problem… but what he might not admit is continuing to do so. I think he continues to underestimate it.

On the other hand, the power of compounded returns of improvements over time is counterintuitive, and he probably understands that better than most of us. Maybe he used that understanding to get overconfident, or maybe it's still beyond reach. We really don't know. It's still possible that at some point, his team might just crack it. Not just with vision, though. They will need a world model for things like predicting the behavior of a group of children occluded by a bus near a crosswalk.

The money back thing is another question. I hope Tesla offers money back to ease the experience of those who are bitter, but I probably won't take it back myself, because I don't mind supporting the effort even though it seems like the results are far away. I don't think money back was an option previously. As a matter of company survival (which in Elon's mind equates to humanity's survival, take it or leave it, but suffice it to say he doesn't treat it as a normal throwaway company) Tesla just didn't have the money. Now, they probably do.


> I don't think I said anything about hardware or whether FSD will ever be delivered in the lifetime of the current fleet.

You said "Yes the actual finished FSD software release does not exist and Tesla does not claim it exists." (emphasis mine).

Even if you didn't say it, the whole false advertising investigation revolves around the difference between FSD being a software or a hardware problem. Tesla marketing and Musk personally have stated clearly (at least in the past) that all cars sold with the FSD option are FSD ready on the hardware side, and that FSD will be delivered as an over-the-air software upgrade to all of them once it's ready.

If they can indeed enable (working) FSD without a hardware upgrade, then they have not lied (even if the timelines they suggested were wildly optimistic). If they in fact need hardware upgrades to support FSD on the cars sold with this option, then they have lied in their advertising, and people who bought this are entitled either to a refund or to a free upgrade when the feature is available.


> and will almost certainly require LIDAR to achieve any kind of safety.

Is there a physical reason for that? We know that humans do just fine with just ~8cm of stereoscopic separation, and for example cars have the potential for a significantly higher amounts of stereoscopic separation.


Not a physical reason, no, but an AI one.

Humans and most other animals don't rely solely on stereoscopic vision to navigate the world, we rely on a model of the world where we recognize objects in the image we perceive, know their real size from experience, and use that as well as stereoscopic hints to approximate distances and speeds. We additionally use our understanding of basic physics to assist - we distinguish between an object and its shadow, we can tell the approximate weight of something by the way it moves in the wind (to know if we need to avoid an obstacle on the road), and there are other hints we take into account.

We also take into account our knowledge of the likely behavior of these objects to judge relative speeds (e.g. the car is moving away; it's not the tree coming closer).

Without this crucial aspect of object recognition and experience about the world, our vision is actually very bad at navigation. If you put us in an artificial environment with, say, pure geometric shapes at various distances, no/fake shadows, objects with non-realistic proportions and so on, we will have much more trouble navigating and not bumping into things, even at walking speeds. And this is the level the AI is currently operating at, more or less.

And if you don't believe me, note that humans with one eye, while having impaired depth perception, are still perfectly able to drive safely, with ~0 physical mechanisms for measuring distance (I believe the spherical shape of the iris may still give some very subtle hints about distance as you move your eye around, but that is minimal compared to stereoscopic vision). A LOT of our depth perception is just 2D image + object recognition + knowledge about those objects.


While all of this may be true, this doesn't explain why stereoscopic vision wouldn't work where a LIDAR would. Both provide identical geometrical information and neither has anything to do with AI. Neither tells you approximate weights of things, or judge based on human experience how things might move in the future depending on their type (tree vs car), or anything like that. And if you swap one system providing geometric information for another one that provides identical information, I don't see how this makes the cognition of any AI later in the pipeline magically any better, no matter how good or bad that AI was previously.

However, one benefit that long baseline stereoscopic vision (for example with cameras in corners of the front windscreen) would have compared to a short baseline stereoscopic vision (a human) or a point measurement (LIDAR) that could be relevant for safety would be the ability to somewhat peek around the vehicle in front of you from either side. Admittedly, this may overall be a small-ish benefit relative to a LIDAR but it does provide strictly more information (slightly) than a LIDAR would.
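
As a back-of-the-envelope illustration of the baseline point (assumed focal length and matching error, not any production system's numbers): for a rectified stereo pair, depth Z relates to disparity d by Z = f*B/d, so the depth error at a given range shrinks as the baseline B grows.

    # Rough sketch: depth uncertainty of rectified stereo.
    # Focal length (in pixels) and the 0.5 px matching error are illustrative assumptions.
    def depth_error_m(range_m: float, baseline_m: float,
                      focal_px: float = 1000.0, disparity_err_px: float = 0.5) -> float:
        # From Z = f*B/d, a disparity error dd maps to a depth error of ~ Z^2 * dd / (f*B).
        return range_m ** 2 * disparity_err_px / (focal_px * baseline_m)

    for baseline_m in (0.08, 1.2):  # ~human eye spacing vs. cameras at the windscreen corners
        print(f"B = {baseline_m:.2f} m -> depth error at 50 m ~ {depth_error_m(50, baseline_m):.1f} m")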


Well, LIDAR uses very well understood physics to give you precise measurements of distance from the world around you, without any need for object recognition. It is not enough on its own, but it is an excellent safety technology. It's basically impossible to run into an object that's moving slow enough to avoid based on LIDAR input.

Stereoscopic vision first relies on object recognition of the elements of the pictures taken by each camera, then identifying the objects that are the same between the pictures, and only THEN do you get to do the simple physical calculation to compute distance. If your object recognition algorithm fails to recognize an object in one of the images; or if the higher-level AI fails to recognize that something is the same object in the two pictures, then the stereoscopy buys you nothing and you end up running into a bicycle rider crossing the street unsafely.

LIDAR does have limitations of its own (for example, it can't work in snowy conditions, since it will detect the snow flakes; not sure if the same applies to rain), but the regimes under which it is guaranteed to work are well understood, and the safety promises it can make in those regimes don't rely on ML methods.


> Well, LIDAR uses very well understood physics to give you precise measurements of distance from the world around you, without any need for object recognition. It is not enough on its own, but it is an excellent safety technology. It's basically impossible to run into an object that's moving slow enough to avoid based on LIDAR input.

Again, claiming that LIDARs make things magically safer sounds like a lot of snake oil to me. Both LIDARs and stereoscopic systems use well-understood physics. Stereoscopic rangefinders were being used in both World Wars for gun-laying and you wouldn't say that you don't need precise measurements for gun-laying.

> Stereoscopic vision first relies on object recognition of the elements of the pictures taken by each camera, then identifying the objects that are the same between the pictures, and only THEN do you get to do the simple physical calculation to compute distance. If your object recognition algorithm fails to recognize an object in one of the images; or if the higher-level AI fails to recognize that something is the same object in the two pictures, then the stereoscopy buys you nothing

As for whether stereoscopic vision relies on object recognition, that seems like a mild stretch to me. Generally it, like for example SfM (of which it is a special case), seems to rely on local textures and features for individual data points -- and in a simple single-dimensional stereoscopic vision case, your set of possible solutions is extremely limited, so matching features from SIFT or SURF in stereoscopic vision is way simpler than even the general SfM case. Those individual data points do not require in any way for individual objects to be recognized and separated. I have NOT seen in my life an SfM solution that would not give you a point cloud if it failed to separate objects -- in fact, SfM software doesn't even try to identify objects when generating a point cloud because it doesn't even operate at such a high level. Note that this actually provides the exact same information as a LIDAR would, namely a point cloud with no insight how the points are related to each other.

Pretty much the only situation where stereoscopic vision or SfM fails to provide depth information is with a surface of highly uniform color completely devoid of textures. Whether this could or couldn't be solved with structured light is an interesting problem.
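
To make the point-cloud claim concrete, a minimal sketch (assuming a pre-rectified left/right pair and a known reprojection matrix Q; the file names and the identity Q here are placeholders) using OpenCV's semi-global block matcher, which matches local texture with no object-recognition step:

    import cv2
    import numpy as np

    # Assumed inputs: a rectified stereo pair (placeholder file names).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global matching over local textures/features; no object labels involved.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # Q is the 4x4 reprojection matrix from stereo rectification (identity as a placeholder).
    Q = np.eye(4, dtype=np.float32)
    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 point cloud, much like a LIDAR sweep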


Human stereoscopic vision could also be fooled by specifically designed optical illusions in science museums. We just avoid them when designing roads.


> Unlike Tesla's, which constantly requires putting pressure on the wheel to show you're there and paying attention, Ford's lets you drive indefinitely without any hands on the wheel

That’s not Ford’s version of Autopilot, it’s one step further. It’s actually hands off (despite how many treat autopilot). Named BlueCruise.

It’s comparable to GM SuperCruise. It ONLY works on specially mapped divided highways that Ford has approved. It will disengage for strong turns and anything it’s not ready for. You MUST watch the road, it keeps track with a camera on the wheel.

Basically the way it treats the driver is far more conservative. Instead of telling the driver they need to pay attention, it actively monitors them. Instead of saying “you should only use it on these kinds of roads” it actively prevents you.

It’s a fundamentally different approach. Ford’s ACC (not hands-free, Co-Pilot 360) constantly monitors for steering wheel torque to ensure your hands are on the wheel and disengages pretty quickly if they’re not and you ignore the warning.

That said, I have it on my car. It’s freaky as hell to use, kind of scary. Maybe I would use it on long drives in the country, but I just don’t want something else that's in charge in even medium traffic.


That is kind of similar to Tesla still though. At least with FSD Beta there is a camera actively monitoring you and making sure your eyes are on the road. I've also been on roads where autopilot either won't activate at all or will activate but at a reduced speed. I was hoping that once Tesla enabled that internal camera they would stop relying on weight on the steering wheel and just use eye tracking.


> Named BlueCruise

Ah, thanks! I thought the dealer was saying "Blue's Clues"


As far as I know there has been an unusually high number of "unusual" accidents associated with Tesla's Autopilot, and there is no such observation for other car manufacturers.

This doesn't mean their systems are more advanced; it could actually mean their systems bail out earlier due to being less advanced, and in turn luckily avoid these problems.

Or that they are much, much less used.

But then Tesla is not really known for good QA.

And in the past there have been multiple unrelated tests of emergency braking systems in which Tesla cars failed really hard, behaving worse than many much "simpler", less advanced systems. Sometimes to the point of only braking after/when hitting the pedestrian... (a mechanized test dummy that the Tesla system, by its own feedback, recognized as human).

If your most advanced self-driving system can't even compete with emergency braking systems, by such a large margin, I would not be surprised if Tesla's system has major faults, tbh.


Autopilot has prevented far more accidents than it has caused.


Any citations for this claim?


Yeah I've always hated that statement. How do you even measure that? I've had my Tesla sound the alarms when I'm 4 car lengths away from the car in front of me and nothing bad is happening at all. Do they count that as having prevented an accident? Do they count all those phantom brakes as preventing a crash ;^∀;^)


Ford, like GM, has a literal camera monitoring the driver's face so they can see if they are paying attention. Seeing what your face is doing is a much more reliable system of measuring attention than whether or not the driver is touching the steering wheel while reading their book.


Doesn't Tesla have that too? Not that I trust Elonbois with a camera looking into my car, but hey, maybe someone will offer to buy me a horse...


No. See https://electrek.co/2021/01/20/tiktok-star-criminally-tesla-...

Edit: Sorry, I should have said _mostly no_.

Last year they pushed an update to use the built-in cabin camera installed in 2021+ cars for "driver attentiveness."

Compared to Ford's and Cadillac's IR-illuminated driver-attentiveness cameras, which point straight at your eyeballs, can see through sunglasses, and work in the dark, the wide-angle cabin camera may not even see your face if you wear a hat, certainly _does not_ have a clear view of your pupils, and doesn't work in the dark or if you have dark glasses on.

https://electrek.co/2021/05/27/tesla-releases-driver-monitor...

https://electrek.co/2021/04/08/tesla-driver-monitoring-syste...



Tesla's isn't good enough to replace wheel torque as an attention monitor, and they don't even pretend it is (it still requires wheel torque to keep FSD/Autopilot engaged).

In particular, it has terrible night vision and no IR illumination so it can't see you well enough at night to work.


Some older Model S/X vehicles with Autopilot 2.0 hardware lacked driver-facing cameras, but modern ones, along with all Model 3/Y vehicles, have them.


I don’t think anyone reads Ford’s (for example) manual and thinks “oh wow, I don’t even have to pay attention and have my hands on the wheel!”.

The fact that it’s a big enough problem for Tesla that they have to monitor it, and still have issues, points to the main difference being user expectations and marketing around these features.


It's almost as if calling it "autopilot" was wildly irresponsible.


It's a very appropriate name for something that automates the long boring things while not eliminating a human operator for exciting transient things, just like in an airplane or on a ship. What about it was irresponsible? Sounds like a completely typical autopilot to me.


What's irresponsible to me is how Tesla has advertised their Autopilot feature for the last six years[1]:

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...


Possibly, but notice that this is not what I was responding to, which was purely the naming of the feature. So I'm not sure how this segue is relevant? Do you have anything to comment w/r to the name?


This is very clearly labeled as an ad for future software capabilities.


There is no such "clear" label on the video or website. Tesla titled the page "Tesla Self-Driving Demonstration". Tesla's embedded video is titled "Autopilot Full Self-Driving Hardware (Neighborhood Long)" with a video description that says[1]:

> Take a ride in a Tesla with Full Self-Driving Hardware.

That's it. Nowhere is it "clearly labeled as an ad for future software capabilities".

[1] https://vimeo.com/192179727?embedded=true&source=vimeo_logo&...


> …how Tesla has advertised their Autopilot feature…

I don’t see the word autopilot on that page or that video.


Look at the URL[1] and then look at the title of the video, which is "Autopilot Full Self-Driving Hardware (Neighborhood Long)"[2].

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...

[2] https://vimeo.com/192179727?embedded=true&source=vimeo_logo&...


Fair points, but it doesn't invalidate what I said: I don’t see the word autopilot on that page or that video. And it's not on that page, and it wasn't in the video. The VIMEO video title doesn't appear on embeds.


It's a very appropriate name for people who understand aviation, but I've heard A LOT of people over the years say stuff to the effect of "the planes these days can fly themselves; they don't actually need pilots, they're just in there due to the unions".


If the argument was "Tesla shouldn't use the Autopilot name because some people are grossly misinformed about aspects of the aviation industry" then I'd have some sympathy for it. I'd still consider it a poor argument, but at least it's one I could comprehend and respect.


Not just some people but many people, and not just some aspects of the aviation industry but precisely those which are being referenced.


No. Tesla should not use that term because of what that term means everywhere outside of aviation industry. Aviation industry is one small niche.

The term autopilot implies autonomous driving everywhere outside of that one small subgroup of people.


"Everywhere outside of aviation industry"? I'm sorry if this sounds harsh, but I can't avoid a strong feeling that you've just made that up. Where are all the different non-aviation autopilots in the real world we live in that "imply autonomous driving" when that term gets applied to cars? Are there bicycle autopilots? Crane autopilots? Bulldozer autopilots? All of them operating devices fully autonomously with no operator oversight? Because I can't for the life of me recall a real-world usage of the word "autopilot" that could give people the misconception you're alluding to. If there's a wide range of different real-world types of autopilots that almost all require no human attention except for airplane autopilots which do require it, all of them in such widespread use that they justify some people's notion that systems called "autopilots" are operator-free, then I must have completely missed that somehow. Please give me some examples of those different systems if they do exist.


Yes, the word autopilot is used and understood by people outside of aviation to mean full autonomous driving.

Just like spaceship is used for a vessel able to travel through space. And teleport is used for moving from one place to another in the blink of an eye. They don't need to exist.


> the word autopilot is used and understood by people outside of aviation to mean full autonomous driving.

...is a completely different statement from...

> because of what that term means everywhere outside of aviation industry. Aviation industry is one small niche.

The former implies that the common "understanding" by people who know little of consequence about the field is actually a misconception, whereas the latter insinuates that there do exist separate, valid, non-aviation usages of that word (of which you haven't provided any) which would justify such expectations about Tesla's product.

> They don't need to exist.

Except autopilots actually exist and they partially automate the boring parts of operation of means of transportation such as airplanes and ships. To argue with a fictional notion of a fully autonomous system against an actually existing device is like saying that you don't care what words mean and that you can reuse established terms to describe whatever you want, even if they already mean something very different from what you want them to mean. This is just like for example the East-German reinvention of the word "Aktivist". You're basically peddling Newspeak by saying that autopilot doesn't mean what it actually means.


> The term autopilot implies autonomous driving everywhere outside of that one small subgroup of people.

And with that you've disproven your own argument.

To the extent that "Autopilot" is used to refer to driving technology, it is as the brand name for Tesla's version of such. The name literally cannot be the causal factor as to why Tesla was supposedly wrong to pick that name, if it wasn't associated with autonomous driving prior to Tesla's use of it. Your argument is circular and thus invalid.


When drivers receive as much training as pilots, then it will be reasonable to draw comparisons between autopilot for planes, and autopilot for cars.


How is this relevant? Autopilots for airplanes don't eliminate human operators. Autopilots for ships don't eliminate human operators. Tesla's autopilot doesn't eliminate human operators either. On basis of that, I would deem the naming as very appropriate, the level of training of the respective required operators notwithstanding.


Interesting idea. Pilots require 40 hours minimum to take the certification test, but most need maybe 60 or so. Pretty sure I had a lot more than that for driving. In addition to the state-mandated driving instruction which was probably around 8 hours total (some of it "watching" other student drivers) I'm sure I had a few hundred hours of supervised driving under my belt before taking the test. Outlier? Maybe, but I can't imagine less than the 40-60 hours required for prospective pilots.

Also, I don't think student pilots typically train on autopilots. Those kinds of technology components are usually learned either on one's own or with an instructor/co-pilot after completing basic training. The basic pilot training focuses on safe pilotage from takeoff to landing, including land nav and radio communications, and ignores pretty much all modern technology.


That argument makes no sense. Generally speaking, tools are named based on the capability of the tool. We don't make a rule of changing the names of tools based on the predicted skill of the operator.


Ford's system has a lot more limitations. AFAIK, Tesla is the only company that has non-geofenced FSD.


You say “limitation” but I hear “safety feature”. Best to put limits on dynamic systems that aren’t well understood.


Maybe their systems are safe and Tesla's isn't?

I don't know anything about other manufacturer's systems but I've seen video of Tesla's on autopilot doing unsafe things and read a lot of anecdotes of this as well. Elon Musk has made ridiculously optimistic statements about when Teslas will be self-driving and that by itself can influence people's behavior - perhaps fatally.

Edit: also, Tesla removed Lidar from their system. And there have been well-publicized deaths of people using autopilot.


Tesla has not removed Lidar from their system. You are mixing things up; they removed rader. And most of the cases in the report are about the system with radar.


Tesla never had lidar lol


Panic didn’t say they did.


Right

> rader

Radar is actually short for "RAdio Detection And Ranging"

And Lidar is short for "LIght Detection And Ranging"


I'm similarly confused. And with the shade Biden has been throwing at Tesla I'm concerned it may be politically motivated.


[flagged]


Seriously? If money is power, Musk _is_ the ruling class.


Your premise is not necessarily always true though. There are other kinds of power, such as political power, that can only be partially interchanged for economic power. It's pretty clear the mainline political parties dislike Musk as he’s a wild card.


"The billionaire is being oppressed!"


If we are in a capitalist economy, Musk, as a member of the haute bourgeoisie, is the ruling class.


> If we are in a capitalist economy

We aren't, and haven't been for some time.


Nope, look at the constant strife between him, the California government, and the federal government. The Biden administration doesn’t even acknowledge that Tesla has any significance in the electric car industry.

Money is certainly helpful to get power, but it is definitely not sufficient nor completely necessary.


Who's the ruling class?


I bought the FSD version of the Model 3 in the spring of 2020. I'm still waiting for what I purchased to be turned on. Frankly, I'd be psyched if they would expand on the little things instead of the big things. "Park the car in that spot". "Exit the parking spot". These would be worth the price I paid.


I bought the FSD version of the Model 3 in the spring of 2020

I don't understand how people keep falling for this. Sure, it seemed realistic enough at first but how many cracks in the facade are too many to ignore? In 2015 Musk said two years. In 2016 he said two years. In 2017 he said two years. Tesla did a 180 on the fundamental requirements of FSD and decided it doesn't need lidar just because they had a falling-out with their lidar supplier. That level of ego-driven horseshit is dangerous.


> In 2015 Musk said two years. In 2016 he said two years. In 2017 he said two years.

Look, the message is clear enough already: FSD is two years away, and always will be! You've got to admire that kind of consistency.


Unfortunately, it is not even consistent:

• In January 2021 he said end of 2021 [2].
• In May 2022 he said May 2023 [1].

So he moved from around 2 years to 1 year.

Looking at the rate of progress from Tesla FSD videos on YouTube, I wouldn't bet on 2023.

Whenever people talk about Tesla FSD, I always like to point to Cruise who just got a permit to carry paying riders in California [3]. Honestly, their software looks significantly smarter than Tesla's -- Highly recommend their 'Cruise Under the Hood 2021' video for anyone interested in self-driving![4]

[1] https://electrek.co/2022/05/22/elon-musk-tesla-self-driving-...

[2] https://www.cnet.com/roadshow/news/elon-musk-full-self-drivi...

[3] https://www.reuters.com/business/autos-transportation/self-d...

[4] https://www.youtube.com/watch?v=uJWN0K26NxQ


It's still not actually available to people who paid for it, and it doesn't actually work (at least not to a degree where it can be trusted not to crash into street signs or parked cars). I have no idea why anyone pays a $10k premium for vaporware.


$10k is insane. That’s 1/4 the price of my nicely loaded Honda Ridgeline (or pretty much any well-loaded mainstream sedan or crossover). Yah, I don’t have an AI trying to drive (and crash into fire engines) for me, but I have basic lane-keeping, auto braking, and assisted cruise. I still have to drive the car. The horror.



A non-transferrable $10K premium, at that. Unless something has changed. I've always wondered what is going to happen when the early adopters start to move on to their next car without ever having received any value for the FSD license.


I suspect the venn diagram of people paying $10k for Tesla vaporware and people who've realized >$10k in gains trading TSLA stock is pretty close to a circle.


I think that is actually a decent point. Definitely know people that bought Teslas with their Tesla stock gains.


I've made enough off TSLA to afford a Tesla, but I'm actually now considering the Ioniq 5 over the Model Y. Specs are very similar and it's like 2/3 the price.

I've been a believer in Tesla for a decade now but it is starting to seem like the competition is catching up.


Gotta love the apologists as well, because hey, if I admit I was conned out of money for some buggy-ass deadly software, then who's the idiot, and I know I'm not an idiot, so this software works, yeah it has a few kinks, but it's great, and soon it'll be chauffeuring me all over the place! Soon! He tweeted!


FSD was $7k in spring 2020, but your point remains.


I paid for it and have been using the FSD beta for over 6 months; the most recent 10.12 is a big improvement in smoothness and further improves my confidence in the system.


> I don't understand how people keep falling for this.

Every successful fraud has people it's tuned for. For example, consider how terribly written most spam is. That selects for people who are not fussy about writing. Conversely, a lot of high-end financial fraud is done by people who are very polished, very good at presenting the impression of success. Or some years back I knew of a US gang running the Pigeon Drop [1] on young East Asian women in a way that was tuned to take advantage of how they are often raised.

Tesla only has ~3% of the US car market, so they're definitely in the "fool some of the people all of the time" bucket. Musk's fan base seems to be early adopters and starry-eyed techno-utopians [2]. He's not selling transportation. He's selling a dream. They don't care that experts can spot him as a liar [3] because listening to experts would, like, totally harsh their mellow.

Although it's much closer to legal fraud, I don't think that's otherwise hugely different than how many cars are marketed. E.g., all of the people who are buying associations of wealth when they sign up for a BMW they can't afford. Or the ocean of people buying rugged, cowboy-associated vehicles that never use them for anything more challenging than suburban cul de sacs.

[1] https://en.wikipedia.org/wiki/Pigeon_drop

[2] https://www.theverge.com/2018/6/26/17505744/elon-musk-fans-t...

[3] e.g.: https://twitter.com/kaifulee/status/1126238951960993792


I don't know. The latest version is insanely impressive. https://www.youtube.com/watch?v=fwduh2kRj3M

I am cautiously optimistic about future of FSD in general now.


Interesting combo of very impressive, and also clearly not ready for general availability. It’s doing something noticeably wrong every few minutes. It’s so hard to guess how close this really is, but I’d guess … a few years or so? I’d imagine all those edge cases where it’s stopping too early, stopping too late, getting stuck making certain decisions, etc. will take quite awhile to iron out.


Yes, you are right. Not quite ready for GA just yet, at least not for the insanity of San Francisco driving. I would settle for having to intervene every now and then though. I have a base Tesla Model 3 and just the free "self-steer" mode is very helpful. I definitely miss it when driving one of my kids' cars.


For me, for a “fully self driving” feature to be ready for wide market sales, it needs to be as good or better than a human. A more “drive assist” feature, like current Tesla autopilot, where there has to be a human driver ready to take over at any moment, that’s different, but full self driving, where there doesn’t even need to be a human driver at all, the standards are a lot higher.

Also, by “as good or better than a human”, for me the biggest things are:

- Involved in the same number, or fewer, accidents

- Does not piss off other drivers any more than a human (like stopping way back from stop signs, getting “stuck” on a decision and not making progress, etc.)

In this vid the Tesla was nowhere close to the above standard. Still really impressive, but lots of work to do. Hard to say how close it is - maybe a few quarters away, but could also be a decade or more.

Google/Waymo is closer in terms of safety and pissing off other drivers, but it’s also much more conservative from the rider’s point of view. Like it will do 3 right turns to avoid a tricky left, will take side streets over the highway, etc. Waymo vids seem much safer and more predictable, fewer clear bugs (e.g. all the times the Tesla FSD makes the wrong decision on where to stop at an intersection are bad, “hard dealbreaker” bugs, that Google/Waymo don’t have), but I think it would just get you to your destination too much slower than a human driver, so buyers wouldn’t like it.


Insanity? Driving anywhere in the US is an order of magnitude easier than in Europe, so if it struggles there, I would like to see how it drives in Paris or Rome.


As someone who has driven over lots of Europe and the US, this is absolutely not true in my experience.


While Rome and Istanbul are really bad, most of Europe is good. Consistent use of traffic circles is a dream (instead of America's infatuation with signalled t-bone collision intersections).


And here is the equivalent from Cruise: https://www.youtube.com/watch?v=HiG__iqgYHM

No basic mistakes. No shut downs in the middle of the road. And is approved for use as a taxi service.

And you clearly see the benefits of LiDAR by the stability and consistency with which it identifies other objects e.g. cars, pedestrians on the road. The inability of Tesla FSD to accurately identify the bounding box of the truck at 6:06 is extremely concerning for example.


No basic mistakes. No shut downs in the middle of the road.

Yeah, phantom braking is an absolute plague when it comes to systems that depend on cameras or radar.


This statement is a bit vague. I know Tesla’s system had this issue with a monocular camera & radar.

I doubt the same issue would appear with stereo depth.


Interesting, but you can see how AI makes mistakes at [1]. It perceives a dog as a small car for a moment.

[1] https://youtu.be/fwduh2kRj3M?t=792


Damn, that's wild and really bad. If I had a forum for every time people post on these forums: "it only needs to be better than a human".

But that misses that the kinds of errors AI can make are so wild that other drivers and pedestrians can't even imagine or anticipate them. Imagine all the different responses the AI might trigger on the sudden appearance of a car where a dog should be.

Human drivers could be drunk, but they don't go from perfect driving to batshit crazy in a split second.


Wow, I haven't kept up with FSD but I am impressed.


Apparently Tesla are probably going back to adding a radar: https://www.notateslaapp.com/news/787/tesla-registers-new-hi...


There's got to be a sizeable number of people who have paid for this feature and never received it before selling the car. And of course Tesla is willing to re-sell the feature to the next driver. Are they willing to pay back customers for a function they haven't shipped?


They had a falling out with MobileEye, who provided the AP 1.0 hardware. It never used LIDAR. Tesla doesn’t use LIDAR because you need to solve vision anyway. And once you solve that, LIDAR makes no more sense.

(You need to solve vision anyway, because for that object, which LIDAR tells you is exactly 12.327 ft away, you still need to figure out whether it is a trashcan or a child. And if it is a child, whether it is about to jump on the road or walking away from the road. LIDAR does not tell you these things. It can only tell you how far away they are. It is not some magical sensor which Tesla is just too cheap to employ.)


“Solve vision” is a bit ill-posed.

If you can accurately determine the 3D geometry of the scene (e.g. with LiDAR), the 3D object detection task becomes much easier.

That being said, most tasks for self-driving such as object detection can be robustly “solved” by LiDAR-only (to the extent that the important actors will be recognized) but adding in cameras obviously helps to distinguish between some classes.

Trying to do camera only (specifically monocular) runs into the issue of no 3D ground truth meaning it’s a lot more likely to accidentally not detect something in frame (say a white truck).

That’s why you can have LiDAR and partially-“solved” vision but need fully solved vision if it’s the only input.


IDK. Avoiding crashing into anything 12.327 ft away might be considered a feature.


> you still need to figure out whether it is a trashcan or a child

Why? First of all don't drive into anything that moves, period. Secondly, if you can avoid bumping into anything at all, do that.
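
As a toy illustration of that argument (made-up numbers and a made-up point cloud; not any vendor's stack), ranged points let you gate braking on geometry alone, before anything classifies the object:

    import numpy as np

    def obstacle_in_corridor(points_xyz: np.ndarray,
                             half_width_m: float = 1.2,
                             max_range_m: float = 30.0,
                             min_height_m: float = 0.3) -> bool:
        # points_xyz: Nx3 array in the vehicle frame (x forward, y left, z up).
        x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
        hits = (x > 0) & (x < max_range_m) & (np.abs(y) < half_width_m) & (z > min_height_m)
        return bool(hits.any())

    cloud = np.array([[3.76, 0.2, 0.8]])  # one return roughly 12.3 ft ahead, in the lane
    if obstacle_in_corridor(cloud):
        print("brake: something is in the corridor, whatever it is")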


No, Tesla doesn't use lidar because Elon thinks it's too expensive and potentially also not pleasing esthetically.


> not pleasing esthetically

Which Luminar has solved for Volvo:

https://www.forbes.com/sites/samabuelsamid/2021/06/24/next-g...


But only initially in California on selected highways at “lower speeds” and they haven’t tested it yet in the US.

https://techcrunch.com/2022/01/05/volvo-partners-with-lumina...


My point was that LiDAR is at least capable of being aesthetically pleasing.


This claim is just as false as the one about Tesla having had a falling out with their “LIDAR supplier”. Elon has explained many times why he considers LIDAR, while useful for things like docking to the Space Station, to be a fool’s errand for self driving cars.


Sure, and it doesn't make sense and he's lying.


> you still need to figure out whether it is a trashcan or a child

No you don't. You just need to avoid hitting it.

The problem with a vision-only system is that you need to know what an object is to determine its bounding box and thus how to avoid hitting it. Which is the problem we've seen with FSD: if you combine two objects, e.g. a boat on a truck, it gets confused because it hasn't been trained on that yet.


You think a vision system cannot detect that something is there at all unless it can also correctly categorize the object? In other words, you think if the object just happens to be one which the system wasn’t trained on, a vision based system will necessarily report “nothing there”?


The big problem for vision systems (in particular if they don't use a lot of cameras) is that it's very difficult for them to determine movement and distance. This becomes exaggerated when objects move perpendicular to the camera; one of the reasons is that the distinctive features of cars, trucks, and buses are not so clear anymore. There are quite a few examples of hilarious mischaracterisations in these sorts of cases.


> whether it is about to jump on the road or walking away from the road

Their AI confuses a dog and a car. It confuses a parked semi and an underpass. It drove into a solid concrete wall on a few occasions.

Even when they solve vision, that does not tell you what a pedestrian intends to do - even another human can't always be sure.


I'm not sure you know what lidar is. It gives you 3D images while telling you exactly how far away things are. In fact you can much more easily determine what an object is from Lidar data.

That is not to say Lidar doesn't have its issues (and there are quite a few); we will likely need a combination of sensors, including cameras, lidar and radar.


Some better examples of his point: how do you determine the color of lights (stoplights, blinkers, brake lights, cop car lights, and so on) with LIDAR, so as to make legally correct driving decisions? Some other things LIDAR won't give you: the ability to read signs (parking legality, stop signs, speed limits, construction crews with flashing detour directions, painted information on the road like lane markers and speed limits, wrong way signs, and so on). In general, you can't, because the sensor doesn't give you enough information - like color - to let you solve the problem. If you have LIDAR but not vision, you literally can't make a legally correct driving decision in the general case, because you lack the relevant data to make that decision.


LIDAR certainly can see painted information on the road (possibly better than cameras in some situations), see e.g. this [1]. It can also read some road signs, and there are proposals for how to make signs more readable for LIDAR. That said, I'm not saying LIDAR is sufficient for autonomous driving; we will need a suite of sensors.

[1] https://www.researchgate.net/figure/LiDAR-point-cloud-a-high...


Nobody is saying it will be LiDAR only.

Every self driving car company (except for Tesla) uses both LiDAR and vision.

But they all rely on LiDAR for bounding box detection which is exactly the main problem Tesla FSD has.


Lidar also has the ability to catch things both cameras and human eyes miss, like a black truck tire lying in the middle of the interstate at night.


cycomanic is claiming that leobg is ignorant. They do this despite leobg displaying accurate historical knowledge and describing how the sensor works in a way which does not fundamentally disagree with the correction that cycomanic implied leobg needed. As a reader of the comment chain, I have to ask why cycomanic thinks leobg is ignorant - they failed to articulate why. It seems to me that the most contentious and debatable claim that leobg made was the claim that full self driving requires solving vision regardless of whether or not you have LiDAR. If this was the reason - maybe it isn't, but if it was - the fact that everyone uses vision isn't evidence for cycomanic's position; it is evidence for leobg's position.

You've retreated from this as the reason that leobg is frightfully ignorant, on cycomanic's behalf. That means the next most contentious claim is the one that builds on the foundation of the first - that vision being required makes LiDAR irrelevant. The problem for you, though, is that once you concede that vision is necessary, you run into trouble. The sensor situations under which LiDAR is much better than vision tend to involve a vision failure through a lack of light or due to heavy occlusion. There is definitely and necessarily a cut-off point at which leobg's claim becomes somewhat true. This denies the right to call him ignorant, because the principle of charity demands that his point be read in the way that maximizes the truth of his comments. So the claim of ignorance - which amounts to a character attack - becomes unjustified.


the OP claimed:

>of which LIDAR tells you is exactly 12.327 ft away, you still need to figure out whether it is a trashcan or a child. And if it is a child, whether it is about to jump on the road or walking away from the road. LIDAR does not tell you these things.

That is ignorant, because LIDAR together with processing obviously can tell you whether the thing is a trashcan or a child. The post is ignorant because, to my understanding, it implies that LIDAR does not provide enough information to make that determination, which is untrue and not how LIDAR works.

Now, if they mean we still need some way to process this information and decide what the different things are, that's a bit disingenuous: that is completely orthogonal to LIDAR vs cameras vs RADAR, and by that argument we could dismiss any of the other technologies, ignoring the fact that more (and different) data typically allows you to make better decisions.


Thanks for the response. I agree that LiDAR can make that determination. I think he was confused about what it was possible to learn from the LiDAR sensors rather than about what LiDAR provides. His ability to distinguish between the radar in former Tesla vehicles and LiDAR wouldn't be present if he thought they were the same sensor. I figured you would be responding to his argument, which was outside the parentheses, rather than to his fallacious support for a true premise, which was inside them.


> using that argument we could dismiss any of the other technologies ignoring the fact that more (and different) data typically allows you to make better decisions.

Bellman gave us the Bellman equations, but also gave us the term "curse of dimensionality". The equations he shared and the modeling problems he encountered are fundamentally related to modern reinforcement learning. More data doesn't come without cost. So often I hear people speak of the introduction of lower resolution as equivalent to the introduction of error, but this is a false equivalence. Latency in decision making means the introduction of a lower resolution can increase the resolution error, but still decrease the solution error. This is so fundamental a result that it applies even to games which don't have latency in their definition. Consider poker. The game tree is too big. Operating with respect to it as an agent is a mistake. You need to create a blueprint abstraction. That abstraction applied to the game introduces an error. It is a lower resolution view of the game and in some ways it is wrong. Yet if two programs compete with each other, the one that calculated with respect to the lower resolution version of the game will do better than the one that did its calculations with respect to the higher resolution view of the game. High resolution was worse. The resolution without error was worse. Yet the game under consideration was orders of magnitude simpler than the game of a self driving car.

I've been paying some attention to this debate and I'm not convinced yet that the situations under which LiDAR is superior are sufficient. I think we agree on that already. For me, this reduces the set of situations under which LiDAR can be considered superior - if vision is bad, but you need vision, then it is better to avoid the situation than to use the wrong thing [1]. So the situations under which LiDAR becomes superior become a subset of the situations in which it is actually superior. That subset doesn't seem very large to me, because both LiDAR + vision and vision alone are necessarily going to be reducing the dimensionality of the data so that the computation becomes more tractable.

[1]: This isn't exactly uncommon as an optimization choice. It'll get dark later and you'll stop operating for a time. Then light will come. You'll resume operation. This is true across most of the species on this planet. If you are trying to avoid death by car accident you could do worse than striving to operate in situations where your sensors will serve you well.


Just a note: LiDAR can read traffic signs. There are plenty of examples of this; it's based on the different reflectivity of the different colors on the sign.
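Roughly, the trick is reflectivity/intensity rather than colour per se; a toy sketch of the idea (thresholds made up, numpy only, not any production pipeline):

    import numpy as np

    def sign_like_returns(points, intensity, min_intensity=0.8, min_height=0.5):
        # points: (N, 3) xyz in the vehicle frame (z up); intensity: (N,) in [0, 1].
        # Retroreflective sign sheeting (and fresh lane paint) returns far more
        # energy than asphalt or foliage, so a simple threshold already pulls it
        # out; a real pipeline would then cluster these points and fit planes.
        mask = (intensity > min_intensity) & (points[:, 2] > min_height)
        return points[mask]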


IDK what Tesla uses, but what difference is there between LIDAR and a SAR? A few zeroes on the wavelength?


The wavelength difference has big implications in terms of the detail that can be resolved due to the surface interactions.

Tesla doesn’t use their radar anymore in any case. Only monocular cameras.


The same change, in the other direction, turns LIDAR into a source of lethal ionising radiation that goes straight through a dozen cars without attenuation.


No kidding!

The ability to use the phone or remote to move the car forward or back in a straight line is super useful and a cool, novel feature by itself. It’s also a buggy piece of shit that a few engineers could probably greatly improve in a month. Doesn’t seem like Tesla cares, it’s been stagnant for years.

Meanwhile Tesla is still charging people $10,000 for an FSD function that doesn’t exist.


> The ability to use the phone or remote to move the car forward or back in a straight line is super useful and a cool, novel feature by itself.

Is it? It’s hard to think of a situation where moving a car I’m at most a couple of hundred feet from backward or forward in a straight line by fiddling with my phone is superior to just getting into the car and moving it myself. Maybe I’m not finding myself and my car on opposite sides of a gorge often enough?


I found it useful in a variety of circumstances.

Someone parked too close to your driver's door? Just back the car out remotely and get in.

Parallel parked, then walked away from your car and noticed you didn't leave the car behind enough space to exit without scraping your bumper? Pull it forward a little without getting back in.

Parked in your driveway and need to move it three feet so you have space to get the lawnmower out of the garage? Use the remote.

All of these use cases depend on the functionality being fast and hassle-free to use. It works that well about 60% of the time - the other 40% the phone fails to connect to the car, or seems to connect but the car inexplicably doesn't move, or gives a useless "Something went wrong" error message, etc.


Of all the scenarios you describe, maybe one has happened to me in 5 years. Even then, I can’t help but feel it’s just being lazy to use a remote instead of just moving the car.


It's not even lazy, it's just plain gimmicky. Aside from the blocked-door scenario, it would take longer to get your phone out, open the app and tap through the interface than it would to just get in the car and move it. It's just a toy feature with very few genuinely valuable uses.


Not disputing the overall point here, but also not passing up an opportunity to bring up https://www.youtube.com/watch?v=MvGKxDlXgvQ


8 months out of the year I either have no doors or a soft door I can remove without opening very far. Sometimes analog is the best technology.


Yeah, it’s being lazy - just like using a dishwasher, a vacuum cleaner, a microwave, a blender, a calculator. What’s your point?


Being physically lazy is the point? Your examples all describe things that help with daily routines by doing tasks faster, more efficiently (from the user's perspective), and often better.


Some people (me) have tandem parking spaces, where you have to move one car to get the other out. Super useful feature for me on a daily basis. But as stated earlier, it's buggy and frustrating to use and could be a whole lot better without much work. Tesla doesn't care about it, they are focused solely on FSD.


The charge isn't really for the FSD functionality. It's a charge to cut the line. There is a long waitlist if you don't pay for FSD.


That doesn’t appear to be true; on Tesla’s site, the delivery estimate doesn’t change if you add the FSD option. Unless you’re saying it’s an undocumented policy?


A waitlist...for a firmware update?


They're claiming that the FSD-equipped Tesla waitlist is resolved before the non-FSD waitlist. It seems trivial to verify, but it is possible that FSD cars are shipped faster without it being noticed. Maybe by shipping FSD cars at something like twice the rate they are ordered.



What happens if the phone UI thread / touch screen hangs?


By default you have to continuously press a button to move the car. Also the car has cameras and ultrasonic sensors. You can set thresholds for how close you want the car to get to obstacles. I think the default is 2 feet. The car will refuse to move if it thinks it's too close to an obstacle. I think the top speed is 1 or 2 mph, and the car emits a warning sound as it moves.

I mostly use the summon feature for annoyances such as someone parking too close to me or rain causing a puddle to form around the car.
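For what it's worth, the behaviour above amounts to a very small control loop; a sketch of how I understand it (the numbers and names are my guesses, not Tesla's code):

    def summon_tick(button_held, nearest_obstacle_m, min_clearance_m=0.6, creep_speed_mph=1.0):
        # Dead-man behaviour: the car only moves while the phone button is held.
        if not button_held:
            return 0.0, False                 # stop, no warning chime
        # Refuse to move if anything is inside the clearance threshold (~2 ft).
        if nearest_obstacle_m is not None and nearest_obstacle_m < min_clearance_m:
            return 0.0, True                  # hold position, keep chiming
        return creep_speed_mph, True          # creep at walking pace, chime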


I made the same mistake in 2019. $6000 for FSD. The car is an enigma. It is simultaneously wonderful and a total scam. My next one will be some other EV from another manufacturer. I would never buy another Tesla or anything else from Elon. Like the car, he is an enigma. I oscillate between believing he is brilliant and seeing him as a lying snake oil salesman. I can’t exactly say I’m disappointed with my purchase overall because it is a damn good EV, but I can’t help but feel like I was conned.


I got it this week. It's not really worth using in its current state. It makes minor mistakes every minute or two, major ones every five minutes or so.

I bought it years ago mostly for the guaranteed computer upgrade and the novelty of testing it as it develops. It's great as a novelty, really cool! But it is dangerous to use right now, and I don't care what anyone says, it's not going to be truly ready for years. In fact I think it needs another computer upgrade and probably a camera upgrade too.

I agree that they should have nailed autopark and summon before moving on to FSD. As it stands they are both useless. But if I could record and play back summon paths in known locations, that would be actually useful.


> I bought it years ago mostly for the guaranteed computer upgrade and the novelty of testing it as it develops.

I did the same, and it was significantly less expensive. I wouldn't buy it for a $1 today.

Also, your idea of novelty is my idea of a nightmare. There has never been a time when I used it where it didn't do something completely insane. The last time I used it (which will truly be the last time), it waited patiently to make a left hand turn. It waited far longer than I would have, and it was clear of oncoming cars for ages. When it did decide to turn, it did it when there were several cars coming, though it was still safe. Except then half way through its turn, it literally stopped, then turned a bit to the right and centered itself in a lane going the opposite direction of traffic flow, right into oncoming cars. Thankfully I was able to take over and two of the three oncoming cars stopped. Had I done nothing, we would have all been in a head on collision.

Never again.


> it waited patiently to make a left hand turn.

Having been in a t-bone collision (the other driver's fault, and he got his parole revoked for it), I find the US government traffic engineers' love affair with unsafe intersection design horrific. Traffic circles fix most turning issues, and do it without a big control box and expensive poles for mounting signal lights.

In other words, many collision chances could be removed with safer intersection design, with benefits to both computer control and human control.


doesn’t the car have multiple video cams built in? Take the footage and upload it online, if you have it.

I see Elon constantly interacting with fanboys hyping up autopilot features. One would get the impression that all features were perfect with the hype videos online.


Not really. There is no shortage of YouTube videos of it doing stupid and dangerous things, mixed with boring regular driving. If you watch the popular YouTube channels it will give you a very accurate picture of how it works and fails. Based on that I had a very good idea of what to expect when I tried it out for the first time and my expectations turned out to be entirely accurate.

It's really quite surprising how open Tesla is about it. They could have added all sorts of rules to the beta legal terms about publicity and posting videos. But they did nothing and have not tried to take down any of the many unflattering videos, or kick people out of the beta for posting them, AFAIK.


>or kick people out of the beta for posting them,

didn't they fire an employee because he posted a video that included some software flaws of Autopilot?


Customer and employee are different situations. He was a data labeler and test operator working on Autopilot, and he had a YouTube channel where he criticized Autopilot using the free FSD beta access he was granted as a Tesla employee, and sold "Full self-driving beta" branded merchandise. Tesla fired him for the conflict of interest and for dangerous use of Autopilot, and he lost his free FSD beta access. I think it is fair for a company to not want to further employ someone with a conflict of interest like that. But you'll note that they didn't make him take down the videos. And he's still making new ones.


The fact that we don't have reliable, fully automated parking yet is bizarre. I'd love a solution that automatically parallel parks for me, and a computer with sensors around the vehicle should be able to do a better job. Plus, the problem is of limited scope, and low speed, so you don't have to deal with most of the potential issues and dangers with full self driving on a road.


BMWs have had that for quite a while. My 2016 5 series does it perfectly 100% of the time. It even gets into some spots that I consider way too short, but it surprises me.

I use it all the time, it’s quick and easy.


Does it work if the spot is full of snow and ice?


If the snow isn’t high enough to trigger the parking sensors - yeah.


My 2015 Model S does parallel parking between two other cars quite well. It will also reverse into the space between two cars parked perpendicular to the road.


Ford does that with Active Park Assist.


Mercedes too, here's a demo: https://www.youtube.com/watch?v=hz4Bm2jui-c


Also Peugeot/Citroen and VW/Audi. Probably all major car manufacturers have it by now?


Now that I see your list of manufacturers, I can only think: “of course, manufacturers don’t do squat in a car; Bosch or some other equipment company did it and sold it to all the manufacturers”


That's pretty typical. And sometimes when the car manufacturer gets particularly good at parts subsystems, they spin that off into a separate company. E.g. Delphi. It's nice, because then we see awesome things like magnetic ride show up on more than just GM performance cars.


Tesla doesn't have it. Many other automakers do offer that functionality on their luxury lines.


This is weird to me because there's another comment reply to me saying that their Model S has this functionality.

So it looks like it's more common than I thought, but also of mixed availability/awareness?


Tesla has autopark on all their models. It is terrible. I can barely ever activate it because it doesn't recognize parking spaces. It will curb your wheels. It's very slow. I'm certain that if I used it on a daily basis it would have collided with something by now.


Tesla does have self-park but it's only enabled if you have paid for the FSD upgrade. I have it. It works. Slowly, but it works.


Been optional on Toyota Prius for a few years now as "Intelligent Park Assist", I use it all the time and it works great.


Tesla does have it. My Model Y will quickly and reliably park itself in almost any type of space I pull up to.


My 2018 Chevy Volt has automated park assist for parallel parking.

Then again I haven't used it again since it backed into the side of a parking garage and scraped a body panel :-)


I don't see why you don't return the car? It doesn't have what you paid for.


If you have been following their updates, it looks like that they have finally found the correct approach for FSD and it is improving very quickly. At this rate, it seems that it'll be close to level 5 this year or next year.

Video of the latest version: https://www.youtube.com/watch?v=fwduh2kRj3M


> If you have been following their updates, it looks like that they have finally found the correct approach for FSD and it is improving very quickly. At this rate, it seems that it’ll be close to level 5 this year or next year.

Yeah, their perpetual 2-year estimate dropped to 1 year around a year and a half ago, after being at 2 years for at least 6 years. So we’re probably four and a half years from it either being ready…or dropping to 6 months off for the next several years.


I have the beta (finally). It’s very very rough.


So would you say your experience is very different from the one shown in this video[1], or is it that you'd describe the video as very very rough?

[1] https://www.youtube.com/watch?v=fwduh2kRj3M


I have the beta as well, it's very very rough.

Like, trying to change lanes into a "flush median" (aka yellow stripes across) to turn onto a one way in the wrong direction. Or randomly swerving back and forth (hard) when a lane splits into 2. Or trying to take 90deg turns at 45mph.

There are times it works amazingly, I've had it slow down and swerve out of the way of someone barreling out of a parking lot without stopping. It's also great at finding a gap to fit into between cars to turn, but it also got itself into that situation by not getting over sooner and instead trying to merge ~500ft before the turn.

Also for some reason on a 2 lane road (in each direction) it would constantly try to be in the "passing" lane.


> Also for some reason on a 2 lane road (in each direction) it would constantly try to be in the "passing" lane.

The problem with collecting real-world training data is that there are going to be a lot of assholes in your data.


It has algorithms and heuristics that dictate how it drives, so I think this is more that the programmers haven’t gotten to it yet.


It’s a shame. I really wanted Tesla to succeed.

But as time goes on, their products become less impressive and their CEO is not helping the brand.


They could turn this around if they gave up on the futurologist software bullshit and strived to simply sell battery-powered cars. Stop selling dreams of robotaxis, just sell cars.


This is actually what excited me about Tesla, I thought I’d just get a great car that was electric.

I didn’t really ask for a smart car.

I also drove my friend's Volkswagen the other day, and all the lane assist was just ridiculous. I was driving on a remote road pretty fast and the lane markings went missing; it was a dark night. It immediately disengaged, and had I not been paying attention we would have gone straight into a barrier or the other lane. It doesn’t claim to be autopilot, but either I’m driving the car or I’m not, imo.


That doesn’t sound like it would lead back to obscene valuations decoupled from today’s reality


Some form of Tesla would exist under this future. TSLA the stock might not.


That sounds better for basically everybody except Elon.


Or maybe use radar or real sensors, again.


Yes, become a boring company nobody cares about, like the rest of the car industry. Nobody should try to innovate with futuristic products; everyone should become bureaucratic behemoths like the government, which has an excellent track record of success.


Having your paying customers act as alpha testers, for software that can potentially injure and kill other people, is imho not exactly the kind of "innovation" we should strive for.

Btw: weird dig at "the government's success", considering a whole lot of Musk's ventures wouldn't be around without all kinds of government subsidies.


For all of its flaws, the US govt does have a remarkable track record. It has presided over one of the most stable and productive societies in history. Sure, it hasn't always been great, and the future outlook isn't looking so strong at the moment, but it has been wildly successful for the past few centuries.


Tesla succeeded in showing the world that an electric car can be something more than a glorified golf cart. That is frankly a huge achievement, compared to what came before. EVs are now the centerpiece of every manufacturer's plans for the coming decades, and that is because of Tesla. Whether they succeed as a brand is not really important anymore.


"EVs are now the centerpiece of every manufacturer's plans for the coming decades, and that is because of Tesla."

100% because of Tesla? I remember in college, before any EV, being asked to sign a petition to car manufacturers: "please build EVs so I can buy one"

I agree Tesla has shifted the perspective on EVs. But the absolutism - that there was nothing else going on before Tesla and nothing since - seems perhaps myopic.


I think it is. GM had their EV1 experiment, but they weren't serious about it and destroyed all the cars after the experiment was over. Tesla was the first serious effort at an EV that wasn't a shed-built project car, that had something approaching usable range, and looked like a normal car not a cartoon.

I don't think any other major auto manufacturer had EVs on the radar in a serious way before Tesla.

I'll add that I think the self-driving, glass panel display, and all of that was unnecessary. They would have had as much success with normal manual controls and simple conventional cruise control with maybe lane assist and auto-braking like other luxury cars had. But that's mostly based on what I like in a car, I could be wrong from the perspective of the market.


Would the Chevy Volt be a counterexample, or the wildly popular Prius?

Arguably range is still a problem for any EV on a long distance road trip - lack of charging stations (even for a Tesla, some routes are OK, but nothing is guaranteed about any route). So if long distance trips are generally out, the previously mentioned cars did have enough range for around town and then some - similar to a Tesla.

My opinion: I feel like Tesla ownership is overrepresented in specific metro areas, creating a bit of a bubble perception effect (if a person looks at SF or Seattle alone, one would think Tesla is crushing everything else, yet overall they still sell a fraction of what the other automakers are selling).


I think you’re right, we would have ended up at EVs no matter what. There are just too many advantages.

That said, I think tesla sped up that timeline quite a bit. Maybe a decade, maybe two. I don’t know how I would ever guess the correct number. but I think there’s no question EVs would not be what/where they are without Tesla.


TSLA will succeed, if you believe the hypothesis that oil prices will continue to climb to the point that many buyers are priced out of that market and switch to electric vehicles. It’s screwed if the alternate hypothesis is true (that is we continue to use oil forever).


Thank God. I for one did not opt into this beta test every time I get on the road.


This, more than anything, is what worries me. I do not own a Tesla. I didn’t agree to anything. I’m not choosing to enable auto-pilot (or FSD).

And yet my life is/may be in danger because of it.

No other car manufacturer has ever done anything like this as far as I know.


Oh man you’re going to be really upset when you hear that autopilot is just another name for advanced cruise control. A huge percentage of the cars on the road have it.

Even “normal” cars like the Civic or CRV.


I had an Accord. I never got a single false stop in 5 years. My mom owns a CRV. Same, no false stops.

Lots of manufacturers have ACC and collision mitigation. You’re right, Tesla didn’t invent it.

But I don’t hear constant complaints about other cars. I don’t hear people saying you “just get used to it” trying to stop on an empty highway. I don’t see other NHTSA investigations.


The only reason you hear about Tesla is because they have a loudmouth CEO who rubs journalists the wrong way. Practically every manufacturer deals with this “problem.”

https://www.consumeraffairs.com/news/nhtsa-to-investigate-17...


This is fear mongering disguised as concern. Tesla publishes data about incident rates for drivers both on and off autopilot, and both rates are markedly lower than for other drivers.

https://www.tesla.com/VehicleSafetyReport


What if you control for demographic factors? I'd expect that new drivers and the elderly have accidents at a much higher rate than the average driver and also have low Tesla ownership rates.

Edit: I see they do have base rate w/o autopilot listed and it does significantly differ from national average.


The families I know with very expensive sedans often drive their other vehicle in snowy weather.


So because I don’t believe numbers published by the company trying to protect its own image I’m fear mongering?

No, I genuinely don’t like this situation. I’m not trying to troll. I’m just saying my opinion.


Does it distinguish between the types of road driven on? It's a very important distinction, because where sensible people choose to activate the system is probably on long journeys where crashes are already rare.


>And yet my life is/may be in danger because of it.

??? While neither is perfect, your life is more in danger from a regular driver, who statistically can be drunk, distracted, going too fast for their skill level, etc., versus that exact person in a Tesla using FSD.

It's actually amazing.


> your life is more in danger from…

I don’t agree. People keep SAYING it. Tesla CLAIMS it. The only real evidence I’ve ever heard came from some insurance company that said that Teslas have fewer accidents. But I don’t know if that was controlled for the demographics of the drivers.

I also don’t know if that was compared to other cars with active safety systems.

What if Teslas have 15% fewer accidents than the average car but Hondas have 16% fewer. I don’t know.

Basically I’m highly skeptical of the claim. I don’t think it’s truly been rigorously studied. Without that I’m not willing to go along with the “you shouldn’t mind that someone is beta testing software on a 4000 pound car that may be driving next to you“ line of thinking.


The advantages of autonomous systems in parts of their operation have already been studied for quite some time, since they have been features on non-Teslas. Automatic braking systems help reduce collisions, lane keeping assist helps prevent accidents, and neither is perfect.

There is no question that public roads would be way safer if everyone drove a Tesla with the systems in place, especially given the fact that most people are going to be responsible with the system.


Yes, we know automatic emergency braking is great. A mandate for it was in the infrastructure bill that failed I think. Lane keeping is probably quite good too (though I’ve never seen data on it).

However you take that info and make a leap of logic that doesn’t work for me.

> There is no question that public roads would be way safer if everyone drove a Tesla with the systems in place…

You can’t support that. In fact it directly contradicts the fact NHTSA is investigating. Because the Tesla system may not work correctly. A car that brakes randomly with fancy safety features is worse than a car without those features that never slams on its brakes falsely (IMO).

> …especially given the fact that most people are going to be responsible with the system.

You can’t know that.

We know people are irresponsible with eating in cars. And doing makeup. And shaving. And drinking. And cellphones are a disaster. And people playing with the radio or maps.

Even if the Tesla system was nothing but an improvement when used properly (we’ll see what NHTSA says), you don't know the average person will use it safely. I would argue there is a lot of anecdotal evidence that’s not true.

This is not an Android vs iOS or Vi vs EMACS discussion. There are literally lives at stake. I think it’s very fair we move conservatively based on documented evidence.


When you drive on the road with other Teslas, you're unwillingly participating, though?


Which is why they express relief at the prospect of the recall.


I've got news for you. Whether you have a Tesla with FSD or not, you're part of the beta the moment you're near a Tesla with FSD engaged.


This is not about FSD.


Hyperbovine didn't mention FSD. Let me spell it out for you: Autopilot is beta grade software at best. It drives straight into parked trucks.


What about other driving assistance system that claim to do the same thing as Autopilot? My 2022 Hyundai Elantra has a system that drives itself on highways. GM Super/UltraCruise is available on highways. Is NHTSA gonna investigate them too? Presumably they’re no farther along in self driving than Tesla is.


Presumably they will if they're involved in a statistically significant number of incidents.


> What about other driving assistance system that claim to do the same thing as Autopilot?

Do other driving assistance systems claim to drive into parked emergency vehicles?


Yes, every adaptive cruise control system explicitly ignores stationary objects when travelling at highway speeds (the radar filters out stationary things, as it can't tell the difference between a road sign and a stopped car).
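The filter works roughly like this (an illustrative sketch, not any particular vendor's tracker): a return whose closing rate matches your own speed is stationary in the world frame, and classic radar-only ACC drops such targets above a certain ego speed because most of them are overpasses, signs and guardrails.

    def acc_tracks_target(ego_speed_mps, range_rate_mps,
                          stationary_tol_mps=1.0, min_filter_speed_mps=20.0):
        # range_rate_mps: rate of change of distance to the target (negative = closing).
        # A stationary object closes at exactly the ego speed, so its estimated
        # ground speed is ~0; at highway speed the classic filter ignores it,
        # which is also why such systems can plow into a stopped vehicle.
        target_ground_speed = ego_speed_mps + range_rate_mps
        is_stationary = abs(target_ground_speed) < stationary_tol_mps
        return not (is_stationary and ego_speed_mps > min_filter_speed_mps)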


The company doesn’t consider Autopilot to be in beta, but it does describe FSD that way.


> The company doesn’t consider autopilot to be in beta

That only makes it even worse.


I think the question of whether something is a beta is different than whether anyone thinks it ought to be. There’s a lot of “non-beta”, but bad, software.


Tesla thinking this software is production ready shows a complete disconnect from either reality or common morality.


The difference being that most of that software doesn't move two tons of steel through traffic at lethal velocities, so maybe accurate terminology is appropriate here.


It routinely drives straight into parked emergency vehicles displaying a blaze of flashing lights.


“Routinely” isn’t even close to the truth.


It doesn't help that Musk tries to create confusion by naming things something they're not. Next thing he'll implement might be the Flying Car Update... which will turn lights green as you drive by using the 14hz infrared signal.


With the way Tesla markets autopilot [0] it's really no surprise people are using these as interchangeably as Tesla itself tends to do.

[0] https://www.bbc.com/news/technology-53418069


It used to say "full self driving hardware" on the Autopilot page. They left off the distinction that the "full self driving software" is not done yet.


To other commenters: this is a technical audit of autopilot software, not Tesla FSD.

And the actual context is much less of a big deal than it seems: the biggest plausible consequence would be forcing Tesla to push an over-the-air update with better driver attention monitoring or alerting.

I encourage reading the actual report.


The worst case outcome would require Tesla to disable the autopilot suite entirely, for an indeterminate amount of time, perhaps permanently on the existing fleet of vehicles.

The NHTSA is tired of Tesla's hand-waving away their safety investigations into Autopilot by pushing stealth updates that fix specific scenarios in specific places being investigated. NHTSA wised up to that and independently purchased their own Tesla vehicles, and disabled software updates, so that they can reproduce those scenarios themselves.

If NHTSA asks Tesla to provide system validation tests showing that an updated version of their software meets the design intent of the system, Tesla would not be able to do so. If they can't prove the new Autopilot software corrects the safety-related defects identified in the current version, then it's not a valid recall remedy.

All evidence from their own AI/AP team and presentations is that there is no real design and system validation going on over there. They're flying by the seat of their pants, introducing potentially lethal regressions in every update.


> All evidence from their own AI/AP team and presentations is that there is no real design and system validation going on over there. They're flying by the seat of their pants, introducing potentially lethal regressions in every update.

What is this evidence?

I've seen a few talks from Andrej Karpathy that indicate to me a more deliberate approach.[0] "Software 2.0" itself seems like an approach meant to systematize the development, validation & testing of AI systems, hardly a seat-of-your-pants approach to releases. I have my own criticisms of their approach, but it seems there is pretty deliberate care taken when developing models.

[0] https://youtu.be/hx7BXih7zx8


I worked at a very big tech company a few years ago, focusing on validation of AI systems.

It’s all smoke and mirrors. You cannot perform proper validation of AI systems. Rollbacks of new versions of ML models are very common in production, and even after very extensive validation you can see that real life results are nothing like what tests have shown.


Can't you do outlier detection, and disable the AI if the input wasn't in the training set?


How do you identify the outlier? You need to write some rules that could look at it. But that’s a lot of rules. What if you could use computers to do that?

You basically put another ML on top of ML, to correct it. I’ve seen that in use in production systems, and it helps with some problems and generates new ones. And if you thought that reasoning about correctness was hard before…

And what do you mean by disabling AI, if input wasn’t in the training set? That’s the whole point of ML, to reason about new data based on data seen in past.


> That’s the whole point of ML, to reason about new data based on data seen in past.

I think we like to think this is true.

In reality, I have seen a lot of real world ML models. I wouldn't trust ANY of them to do extrapolation. There are just tons of real world problems, and extrapolation is HARD.

I have to put extremely tight boundaries on ML models for deployment scenarios, and ALWAYS have a backup rule engine in case the ML model comes up with an answer that has a low confidence score.

> How do you identify the outlier? You need to write some rules that could look at it. But that’s a lot of rules. What if you could use computers to do that?

> You need to write some rules that could look at it.

Pretty much. Any time ML is involved, you will need TONS of lines of code.

In short, tightly define the target classes your ML model deals with.

Any variable that falls outside your tightly bound list of target classes, you have to deal with using a rules engine. THEN you need to spend a lot of time doing work to minimize false positive classification in your target classes.

And make sure that "false positive, high confidence" classifications don't do racist things or lose the business a lot of money.

ML projects are just a ton of work. You essentially make the entire ML workflow, and you NEED a backup "not-ML" workflow.

In my experience, 50-80% of normal software engineering projects fail.

90% of ML projects fail. Square the fraction of normal software projects.

ML is complex AND it's a ton of work. Really, really hard.
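Reduced to its skeleton, the backup-rules pattern I'm describing looks something like this (the interface is hypothetical - predict_proba, known_classes and rules_engine are stand-ins here, not a real library):

    def classify_with_fallback(model, rules_engine, features,
                               known_classes, min_confidence=0.9):
        # Ask the model, but only trust it when the top class is one we have
        # tightly defined AND the confidence clears the bar; everything else
        # falls through to conservative hand-written rules.
        probs = model.predict_proba(features)      # assumed: {class_name: probability}
        best = max(probs, key=probs.get)
        if best in known_classes and probs[best] >= min_confidence:
            return best, "ml"
        return rules_engine(features), "rules"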


> What is this evidence?

I think the onus should be on Tesla to prove that their testing and validation methodology is sufficient. Until and unless they have done so, Autopilot should be completely disabled.

I really don't get why the regulatory environment is so behind here. None of these driver assistance technologies (from any manufacturer, not just Tesla) should be by default legal to put in a car.


>> They're flying by the seat of their pants, introducing potentially lethal regressions in every update.

>> What is this evidence?

Without a documented development and testing program, every development is essentially this.


I see your point. To OP's point, I know a couple of people who were horrified at what they saw, and it did not match this public talk. Both started at least 6 months after this video, and both left Tesla within 8 months, of their own volition. Unfortunately, off the record.

Not to disparage Andrej; sometimes (frequently, even) what executive leadership thinks is going on is not the day-to-day reality of the team.


can confirm, a former coworker had just come from Tesla 5 years ago and he had serious ethical problems with his work over there. Tesla is killing people through negligence and greed, it's pretty disgusting, but par for the course


This is the Karpathy that gave a big talk about how vision was superior to radar when Tesla dropped all radar units at the height of the chip crisis. Now they are bringing radar back.

Give it a few years and they will probably introduce LIDAR.


Tesla is bringing Radar back? First I've heard about it, and good news if true.



Wasn’t this approval based on an application from 2018?


> The NHTSA is tired of Tesla's hand-waving away their safety investigations into Autopilot by pushing stealth updates that fix specific scenarios in specific places being investigated.

Why isn't Tesla prosecuted for that? It's lawless!


No, that’s typical software development. Find a bug in circulation, fix and deploy a fix. There are probably hundreds of internal issues that get fixed per normal protocol, as with any piece of software. Putting out a “recall” or alert for every modification to the code is pointless. What regulators need to do is keep up with the times. They need to have their own safety test suite which manufacturers can test against, and be independently audited


> No, that’s typical software development.

Software that controls multi-thousand pound machines at 70+mph isn't typical, and typical practices don't necessarily apply.


Yes, those practices absolutely shouldn't apply for self driving cars. Good luck regression testing how a system change impacts the AI handling of every edge case of traffic.


Waymo does this. Their infrastructure costs would be astronomical though if not attached to a company with its own cloud


It seems like the test suites for these deep learning based models are themselves almost comprehensive knowledge bases from which you could build a more traditional control software.


> They need to have their own safety test suite which manufacturers can test against

Coaxing regulators into producing a test that can be optimized for is exactly how we got the VW scandal.

> What regulators need to do is keep up with the times.

Keeping up with the times sounds awfully like allowing insane things because some whiz kid believes there's no difference between a car and a website.


It's typical software development when there's nothing at stake (such as human life). When human life is at stake, the controls on software reviews/changes/updates SHOULD be much tighter, but as there's no governing body to require this, it's on the developers themselves to do it right. Tesla is an example of a company that does not employ those controls in mission/life critical software.


Sorry, typical software development is for a national regulator to find some illegal feature in your website, and then you disable the feature for specific IP ranges or geofenced areas where the regulator's office is? No, I don't think it is.


Typical software development as practiced by Uber, perhaps


Yeah, well, just like half of US sites just flatly block Hetzner IPs (where I happen to have a VPN) because GDPR.


They've been accused of silently pushing updates to fix specific scenarios in order to gaslight the regulators.

Imagine an airbag that incorrectly deployed sometimes, and the fix was to use a GPS geofence to disable the airbag entirely on the test track, but only on days when the regulator was trying to reproduce spurious airbag deployments, not on crash test days.


That sounds like a cartoon villain. Here's an actual example:

Regulators were concerned after a car on non-FSD Autopilot (AKA Auto-Steer + Traffic Aware Cruise Control) hit an emergency vehicle parked half way in the right lane of a highway due to driver inattention. Tesla quickly pushed an update that uses ML to detect emergency lights and slow to a stop until the driver pushes on the accelerator to indicate it is clear to go.

That's not cheating, that's life-saving technology. No other steer assist technology gets (or sometimes is even capable of getting) updates that fast.


> That sounds like a cartoon villain.

Contempt is such an overused tactic, and never meant anything anyway. Plus, it doesn't sound unrealistic to me.


What's to stop them to push an update that turns the workaround off because it leads to unexpected deceleration in odd lighting, or when there is an emergency vehicle on the other side of a divided freeway?

What process is used to make such decisions?


Evidence they are doing anything like this? Or are they fixing the "actual" issue.


Per the article, the regulators are sufficiently concerned that they're blocking all updates to their test vehicles.


[flagged]


Imagine a person that is totally ignorant of the fact that major corporations regularly engage in fraud and is willing to give them all a pass


Imagine a scenario that is almost a simple mixture of ones that were actually proven to have happened and confessed to by the CEOs of two corporations (VW and Uber).


No, it is not. You sure don't do it in aerospace. Each change is verified, the entire system is validated prior to a release.


> No, that’s typical software development.

It's not typical software development in life-critical systems. If you think it is, you should not be working on life-critical systems.


Releasing software updates is normal in life-critical systems. Can't believe you are arguing differently.


Narrator: (s)he doesn’t


> typical software development.

So if a pharmacy swindles you out of your money or gives you fake drugs, I should reply 'that's just a typical drug dealer'?


It’s not typical for safety critical systems. A car isn’t a Web 3.0 app and shouldn’t be updated in the same way.


> No, that’s typical software development.

Cars will never be software, much like pacemakers and ICDs won't ever be software


It really is insane. It’s one thing to have flaws, it’s quite another to stealth cover-them-up like it’s a game of hide and seek.


Wut? You want Tesla prosecuted because they are fixing issues over the air?

If the NHTSA think there is a safety issue with Tesla Autopilot they will require Tesla to… fix it. Perhaps remotely.


If you read the report, you will realize that NHTSA is considering requiring Tesla to do better driver attention monitoring, or to improve alerting. They are not considering banning autopilot.


I assure you, I'd already read the report before it was shared here. I also assure you, there's more to the investigation than that.


Perhaps, but that’s speculation at best.


But you’ve been assured with absolutely no evidence /s.


99% of everything is speculation at best, even this site is pretty much (high value) speculation as a service.

The NHTSA has a reputation of not f*king around so I would definitely side with @dangrossman on this thing.

As of today, Autopilot IS dangerous software and it is not something that should be tested live on the streets.


> The NHTSA has a reputation of not f*king around so I would definitely side with @dangrossman on this thing.

So did the FAA and then they let Boeing self-validate the 737 MAX. Just saying..


Yup, that's true. Let's see.


> If you read the report, you will realize that NHTSA is considering requiring Tesla to do better driver attention monitoring, or to improve alerting

If you read the report, you will realize that it says nothing about what NHTSA might do if the kind of defect they are focusing on is confirmed.

It is certainly the kind of defect where it is plausible that better attention monitoring and alerting might be at least a partial mitigation, but that's about all you can reasonably conclude on that from the report.


> introducing potentially lethal regressions in every update.

Meh. I mean, I understand the emotional content of that argument, and the propriety angle is real enough. I really do get what you're saying. And if your prior is "Tesla is bad", that's going to be really convincing. But if it's not...

The bottom line is that they're getting close to 3M of these vehicles on the roads now. You can't spit without hitting one in the south bay. And the accident statistics just aren't there. They're not. There's a small handful of verifiable accidents, all on significantly older versions of the software. Bugs are real. They've happened before and they'll no doubt happen again, just like they do with every other product.

But the Simple Truth remains that these are very safe cars. They are. So... what exactly are people getting upset about? Because it doesn't seem to be what people claim they're getting upset about.


Total straw man. The question isn't whether Teslas are safe to drive. The question is whether "autopilot" is safe to auto pilot.


I think that's largely a correct way to reason about it. And what data we have says that a Tesla on autopilot is safer than a Tesla driven by a human, which is safer than an average vehicle. Both by quite some margin.[1]

Tesla publishes this data quarterly. And it's been largely unchanged for years. And yet we still keep having these discussions as if this system "can't be proven safe" or "is presumptively unsafe" despite years of data showing the opposite.

It's just getting so tiresome. Where are the accidents if it's unsafe? How are they managing to hide all the bodies?

[1] Now, could there be a more rigorous study? Undeniably! But no one has managed to do one, despite this kind of data being readily available (especially to bodies like the NHTSA).


Actual real world data says autopilot is on average at least as safe as the average driver in the average car. Of course that’s on average, in many specific situations it’s much worse but conversely it means it’s better in other situations.

How regulators deal with this is frankly tricky, as the same will likely apply to all self-driving systems.


Those real world autopilot averages happen exclusively in the most trivial driving situations. Fair weather and almost exclusively on limited access roads. No real world average driver dataset exists that is similarly restricted to the subset of least error-prone driving situations.


But it is also not the case that people are crashing and dying left and right from the FSD beta, or as a result of using Autopilot in less than ideal conditions. This despite OTA updates increasing functionality and millions more cars being sold. Even if what you're saying is true: it is empirically true that in practice, humans take over when necessary.

The statistics aren't there. The risks people have been shouting about for years just haven't materialized. If regulatory agencies are looking to reduce crashes, injuries, or deaths, there must be dozens of more effective places to focus attention on than Autopilot. But it's 2022, and yet again, it's on the front page of Hacker News, and the top comment is (you guessed it): the naming of the features is the problem.

It's Groundhog Day all over again. Geesh.


Fair-weather limited-access highways is one of many datasets available. However, Autopilot operates in wet and rainy conditions, so that's hardly an accurate assessment. Weather bad enough to prevent Autopilot is a contributing factor in ~5% of accidents.

“On average, there are over 5,891,000 vehicle crashes each year. Approximately 21% of these crashes - nearly 1,235,000 - are weather-related” “ 70% on wet pavement and 46% during rainfall. A much smaller percentage of weather-related crashes occur during winter conditions: 18% during snow or sleet, 13% occur on icy pavement and 16% of weather-related crashes take place on snowy or slushy pavement. Only 3% happen in the presence of fog.”

https://ops.fhwa.dot.gov/weather/q1_roadimpact.htm


> Actual real world data says autopilot is on average at least as safe as the average driver in the average car.

Unless this is on the same road and conditions, instead of “autopilot where and when used vs. real drivers everywhere and everywhen” it is meaningless, even moreso if it doesn't also account for “autopilot disengages immediately before anticipated collision so it doesn't count as driving when it occurred.”


The point is people have risk tolerances, if the average driver is taking an acceptable risk then suggesting a lower risk than that is unacceptable is hardly reasonable. If that level of risk is actually unacceptable then you should be suspending peoples licenses for doing 5 MPH over the speed limit etc. Instead driving laws are based on a wider risk tolerance.

People count disengagements directly before collisions, as the NTSB is doing in the article; people disagree on how wide that window should be. Disengaging 15 seconds before a collision is hardly Autopilot's fault, but even picking such a wide threshold doesn't somehow push Autopilot to less safe than the average driver.


> Unless [...] it is meaningless

This is just so frustrating. It's not meaningless, it's measured data. Could it be better corrected? Could there be other analysis done? Sure. But data is data, and the effect is extremely large. Cars with AP enabled aren't just safer, they're like 5x safer!

You can't wave that away with an innumerate statement about confounding factors. You need to counter data with data, and (despite millions of Teslas on the road now!) no one has it.

Is it really so hard to just accept that... the system is safe?


> This is just so frustrating. It's not meaningless, it's measured data.

Measured data that is used for a comparison against data that was not gathered under similar conditions (aside from the difference being assessed), and that is not structured so as to support controlling for the irrelevant differences, is, in fact, meaningless for that purpose.

It may have meaning in other contexts, but when it's offered to justify the comparison it cannot support, it is, in that context, meaningless.


Only if the differences between conditions are enough to matter.


If autopilot disengages 1 second before crashing into a stationary object, does this count as autopilot crashing or the human driver?

Is autopilot engaged in the places where crashes are frequent, eg. during left turns?

What are the “scenario-equalized” safety stats for autopilot vs human drivers?


Tesla's statistics count it as autopilot if it was engaged within 5 seconds of the collision.

It seems reasonable for a regulator to decide what that time span is and require all automated driving assist systems to report in a consistent way. I'm curious what % of the time a crash in a typical car occurs within N seconds of cruise control, lane assist, or traffic-aware cruise control being engaged.
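Concretely, the attribution rule everyone is arguing over is just a window check; a toy sketch (field names made up):

    def counted_against_autopilot(crash_t_s, last_disengage_t_s, window_s=5.0):
        # Engaged at impact, or disengaged within the window beforehand,
        # counts against the driver-assist system. Tesla reportedly uses ~5 s;
        # a regulator could standardize a different N across manufacturers.
        if last_disengage_t_s is None:        # still engaged when the crash happened
            return True
        return (crash_t_s - last_disengage_t_s) <= window_s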


The article says they're using a 1 second threshold, not 5, and that a substantial number of accidents fall between the two numbers.


No, those 16 accidents are counted as a failure by autopilot. Hell the NTSB is explicitly doing so in the article.

Further, rather than what the article is insinuating, Autopilot disengages when users apply the brake, such as when they are trying to avoid an accident. What's concerning is cases when Autopilot decides to give up control and the driver isn't ready to take over.


By actual real world data, you mean cherry picked average data published by Tesla, that doesn’t account for any bias, and wasn’t audited by independent third parties?


Sources for claims would be appreciated.


It’s also worth noting Tesla is nearly infamous at this point for making owners sign NDAs in exchange for repairs when their autopilot is likely at fault.


This is a meme. I've never seen any significant corroboration on this. I mean, how would they even know without violating their own published privacy policy? I think you got hoodwinked. This is what's so frustrating about this argument. The clear truth is that the system is operating very safely on millions of vehicles, because a safety defect of that magnitude would clearly be visible in the data, and it's not.

So people invested in the argument now have to resort, like you just did, to theorizing about a literal conspiracy to hide the data that you know must be there even though it can't be measured.

It's just exhausting. They're fantastic cars. Get a friend to give you a ride.


Really? Wow. That's a lawsuit waiting to happen.

No way would I sign, and they'd fix it, or see me in court.

And not rich guy wins US court, but Canadian court. And yeah, it's different.


Don't you have to waive your right to sue in the US to purchase or boot a Tesla?


No, they have an arbitration clause, but you have the right to opt out.


Tesla leads the world in driving fatalities related to AP and FSD-type systems.

The entire rest of the industry has 1 fatality. Tesla has dozens, and 14 of those are old enough (and located in the right country) to be part of this investigation. (The multiple Tesla autopilot/FSD fatalities from 2022, including the 3 from last month, are not part of this investigation.)


The proper comparison is AP vs. other SAE Level 2 systems throughout the industry.


So, the rest of the industry has at most one fatality? How does that change the conclusion?


If disabling FSD makes teslas less safe then what is the point? Are they saying fsd can potentially go berserk? Are we into future crime prevention?


Yes, in the same way that taking down an active gunman with a loaded weapon is future crime prevention.


What about possible future software update to any hospital system? Should we preemptively stop all of those?


If the manufacturer isn't adequately testing for regressions that kill people, then yes, we should block those updates, and use the software the device was certified with.


Good luck getting anything done with a government agency defining the adequacy of your software testing. Here in Canada, government employees can't even get the software that pays their salaries to work: https://en.wikipedia.org/wiki/Phoenix_pay_system


"I encourage reading the actual report."

Wow. It seems like on HN this sort of turn of phrase has sadly become a boilerplate way to dismiss a host of comments. I.e., what "actual report" are you referring to? The headline article is about an investigation that has been opened but not yet closed, as the linked text shows (it's a PDF, but essentially a press release laying out what's been found so far and why they've escalated; it's not closed and not friendly to Tesla).

"Accordingly, PE21-020 is upgraded to an Engineering Analysis to extend the existing crash analysis, evaluate additional data sets, perform vehicle evaluations, and to explore the degree to which Autopilot and associated Tesla systems may exacerbate human factors or behavioral safety risks by undermining the effectiveness of the driver’s supervision"

https://static.nhtsa.gov/odi/inv/2022/INOA-EA22002-3184.PDF


> the biggest plausible consequence would be forcing Tesla to push an over-the-air update with better driver attention monitoring or alerting.

I see no basis for this conclusion, which appears to be pure speculation about what NHTSA might decide is necessary and sufficient to address the potential problems if confirmed by the deeper analysis. I encourage reading the actual report.

> I encourage reading the actual report.

I did, and the conclusion to which you appeal does not appear to be well supported by it.


Tesla has no hardware for proper driver monitoring. Most Model S cars have no internal camera, and the Model 3's internal camera wasn't designed for it (it doesn't work in the dark, can't see through sunglasses, can't see where your eyes are actually pointing, etc.).

You cannot OTA your way out of hardware deficiencies.

Now, will they be forced to have monitoring like that, to be on par with their competitors? That's a different story, and given how weak US regulatory agencies are and how recklessly Tesla disregards them, I'm pretty sure Tesla won't be hurt by it.


> Tesla has no hardware for proper driver monitoring. Most of model S have no internal camera. And model 3 internal camera wasn’t designed for it

See, that has been my whole point for months: both Autopilot and FSD are still unproven safety-critical software, and it goes to show that when used in certain circumstances, say at night, they become far more dangerous at the worst possible time to drive, as I have said before [0][1][2][3]. Even worse, there is no proper driver monitoring.

I guess they should be required to have this driver-monitoring hardware installed on their cars, as well as night-vision cameras, to avoid crashes like the ones I have listed below. If regulators enforced this, perhaps it would save another Tesla driver from losing control, or prevent another crash.

Or, if Tesla still finds it difficult to prevent further crashes even with night-vision cameras, perhaps they will have to admit that they should have used LIDAR instead.

[0] https://news.ycombinator.com/item?id=29639080

[1] https://news.ycombinator.com/item?id=30267710

[2] https://news.ycombinator.com/item?id=28732866

[3] https://news.ycombinator.com/item?id=29516199


Most driver assistance systems do not do any driver monitoring. I don't think it could be retrospectively required.


Systems with basic ACC and lane centering, sure.

But for systems closer to AP (that claim to stop at lights, take highway exits, etc.), a dedicated driver-monitoring camera is the de facto industry standard.


Because that's a stupid system. I will never accept being under constant camera surveillance by the car, no matter what stupid name people come up with for it ("attention monitoring").


Playing the devil's advocate if I may - why would you oppose e.g. an internal, offline system that monitors your eyes for being closed and can save your spouse and kids from dying in a car crash if you doze off during a long night drive, by alerting you / safely stopping the vehicle?

I saw a similar reaction to Volvo's announced plans for such tech (it was a YouTube video) - it seemed to be a completely offline system, but people still read "camera" and reacted that they don't want "someone watching them" in the car.


I have no real issue with a system that has no memory or logging of events and only exists to help me, not to cause liability to fall on me because the system, possibly erroneously, logged I took my eyes off the road for a split second 30 seconds before an incident. Or insurers using false positives as a reason to raise rates. And saving the video is right out.

But there's no such system, nor is there any trivial way of demonstrating that such a system works that way. Because all car companies and insurance companies want data.


Because you can never be sure it's 'completely offline', or that it actually helps. If you don't want to kill a couple dozen people by driving into a fuel truck, just don't drive while about to pass out. It's that simple, but no electronics can ever replace responsibility.


Also if you don’t want to kill people don’t have a seizure or a brain infarct or a stroke or such while driving.


Do you have the link to the report?



It really should be clear to anyone that any of this political attention on Tesla is just theater to win elections/reelections.

It's pretty much not worth even paying attention to.


NHTSA formally investigated Tesla Autopilot after incidents in 2016, 2017 and 2018. There was also a formal NHTSA investigation into Tesla battery fires in 2013. These investigations span 3 presidencies under both parties.


NHTSA is also the type of agency that got lobbied by automakers into instituting the minimum 25-year import rule under the pretense of safety, because such imports ate into the profits of foreign manufacturers selling cars in the US.


So is this clearly political theater for an election, or is it clearly lobbying by foreign auto manufacturers?


Would this mean that Tesla would have to provide refunds to customers who purchased FSD? If so, would they pro-rate the refund, as is customary under lemon law? On the flip side, could they be required to inflation-adjust the refunds, so that customers get back the same purchasing power they spent?

Separately, would this prevent Tesla from having a beta tester group that tries out FSD features at no cost?


No, this is about autopilot, not FSD. And the actions being considered would involve only an over the air update of the software.


They can still do the shadow driver thing for testing.


> Since Tesla vehicles can have their software overwritten via a wireless connection to the cloud [...]

Tangential, and we all knew this happens already, but wow, it sounds crazy that we live in a world where a car can completely change its operation based on someone somewhere pressing a button on their computer.


Oh shocking. The administration which has been openly hostile towards tesla is now going to start actively kneecapping their vehicles.

What the hell is wrong with us? Why do we do this to ourselves? Tesla is an insanely innovative company, and we have a government who now wants to fight them? WHY?

If you turn on FSD, and then just turn your brain off: YOU are the problem. Inattentive drivers are not a new problem created by tesla.


> Inattentive drivers are not a new problem created by tesla.

Obviously not a new category of problem, but what sort of ballpark figure could we put on the number of inattentive drivers created by tesla? As a percentage of sales?


Based on your comment, I’m assuming you only read the title of this post and not this article.

I’m all for technological progression but from what I’ve been reading, it appears that Elon and Tesla have had pretty shady safety practices. I’d change your questions to, “Why does Tesla do this to itself?” I’d hope our government would step up in times like this regardless of who is on the other side.

I’d also recommend you watch “Elon Musk’s Crash Course” on Hulu. https://www.hulu.com/series/f22278d1-ef56-40e8-9227-af3a029c...


"Elon Musk's Crash Course" oh gee I wonder what non biased reporting this documentary is going to be providing!?

I did read the article. The Biden administration is particularly hostile to Tesla.


> The administration which has been openly hostile towards tesla

No. They just haven't gone out of their way to publicly promote the company.

Which is understandable given that Biden is pro-worker/pro-union and Musk, from events to date, isn't.


Unions


Considering the personalities involved, I wouldn’t be surprised if a recall is issued, and then the recall is canceled when a new administration gets voted in.


> On Thursday, NHTSA said it had discovered in 16 separate instances when this occurred that Autopilot “aborted vehicle control less than one second prior to the first impact,” suggesting the driver was not prepared to assume full control over the vehicle.

> CEO Elon Musk has often claimed that accidents cannot be the fault of the company, as data it extracted invariably showed Autopilot was not active in the moment of the collision.

> While anything that might indicate the system was designed to shut off when it sensed an imminent accident might damage Tesla’s image, legally the company would be a difficult target.


So what happens is

1. Speed straight towards your doom.

2. Give back control a moment before collision, when it realizes it fucked up.

3. "It was the driver's fault"


Plot twist: it's always the driver's fault. Autopilot is glorified cruise control (the technical term is 'Level 2' on the SAE scale). It literally doesn't matter when Autopilot bails out altogether (even if it manages to automatically avert a crash on other occasions), the driver must be fully in control at all times. Would you trust your cruise control to prevent a collision? Of course not, that would be crazy. It's not what it's built for. What a worthless discussion.


The whole concept is flawed.

We cannot expect humans to be able to pay attention and be able to take over at a moment's notice at all times, simply because that's not how our brains work.

Autopilot is in the danger zone: it's good enough to let your brain relax, but bad enough that your brain can't afford to. So it's fundamentally unsafe.

Cruise control in contrast isn't good enough so your brain will have to pay attention, otherwise you'll crash very quickly.

And this is all made much, much worse by Elon's and Tesla's irresponsible marketing, and the believers who dismiss this as a "worthless discussion".


> We cannot expect humans to be able to pay attention and be able to take over at a moment's notice at all times

And yet, this is what ordinary driving requires already. It's not a new requirement. The whole point of this investigation is to figure out the matter of whether a SAE Level 2 ADAS really makes distracted driving more likely. The jury is still out on that, and even if it does it would be reasonably easy to cope with the problem by tuning the system to alert the driver more frequently, especially ahead of impending danger (such as in low visibility conditions, which are expressly mentioned as a key factor in these crashes).


And yet, this is what ordinary driving requires already

No, it's not. Not at all.


> And yet, this is what ordinary driving requires already.

What? No it isn't. Ordinary driving has the driver controlling the vehicle at all times.


> We cannot expect humans to be able to pay attention and be able to take over at a moment's notice at all times, simply because that's not how our brains work.

Plus, it will always give you the control back in the most difficult situation you can ever encounter as a driver. It's already an emergency, the computer threw a fit and doesn't want to deal with it.

You're on your own, you have 10 seconds, all the best! Better get it right the first time.


It seems so weird to me how people so vehemently defend Tesla instead of holding them to higher standards. What do you owe them and why?


Well when you wrap your identity around someone else's idea, you get weird outputs at the edge cases


Some might also be defending their stock portfolios, 401ks, and future tesla purchases built on tsla price appreciations.


> the driver must be fully in control at all times

Yes, that's precisely what the manual said last time I checked. But it's not how Elon Musk has presented it, nor how innumerable Tesla drivers treat it.


Tesla says in the video here that the driver is only there because of regulators, not for ethical reasons:

https://www.tesla.com/autopilot

This video was from 2018 or so I think and they pretended the driver wasn't needed.


The problem is that people can’t actually take over fast enough necessarily. If you’re not actively driving, then you can’t just immediately react.


Sure, but what does that have to do with Autopilot? "Driver gets into a crash because they were spaced out and not actually paying attention to the road" is something that happens all the time, no need for modern ADAS.


Because “drivers space out when using a system that removes the stimulus of driving that keeps them alert” is not solely the fault of the individual when it’s not possible in real world scenarios to use it safely.


Exactly, the most important metric/question is whether it improves overall safety compared to the baseline.


That's the metric Tesla and the self-driving car industry would like to use, but I don't see why it's the most important. Also, it misconstrues 'most important' to mean 'all that's important'.

We could just ban personal vehicles to improve overall safety.


But you're supposed to be actively driving with autopilot.

I use autopilot every now and then, and when I do, I literally keep my hands on the wheel in a driving position, and my foot resting lightly on the pedal.


Then what are you gaining from Autopilot? How is hovering your foot any more comfortable than lightly putting it down? How are hands on the wheel in a driving position more comfortable than occasionally turning the wheel?

Seems a tiny gain compared to the dangers of the system.


Very late reply, but... what I gain is basically being relieved of the nitpicky adjustments to lane and speed control. I'm extremely active in the sense that I am constantly observing near, medium, and far distances while driving; I'm just not providing the minor steering inputs directly.

I think that I can better observe the conditions if I'm not providing those controls, even though I need to keep a model of autopilot and its abilities running in my head. Having said that, that model is simple - dump out of autopilot as soon as there is any kind of complexity added to the environment.


You by definition can’t be actively driving. You’re just watching. It’s active driving that keeps you alert in the first place.


The NHTSA report says that on average, an attentive driver would have been able to react 8 seconds before the crash. 8 seconds is a lot of time on the highway (where autopilot can be used).
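
For a sense of scale, a minimal sketch in Python (only the 8-second figure comes from the report as described above; the speeds are illustrative assumptions):

    # distance covered during the ~8 s lead time NHTSA cites
    LEAD_TIME_S = 8
    for mph in (55, 65, 75):
        metres_per_second = mph * 0.44704   # mph -> m/s
        print(f"{mph} mph -> ~{metres_per_second * LEAD_TIME_S:.0f} m of warning distance")

That works out to roughly 200-270 m of roadway at typical highway speeds.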


Yet the autopilot system chose to only warn the drivers 1 second before impact, which is basically no time at all to react to anything.

So if those 8 seconds of "autopilot ain't too sure" actually exist, then the car needs to signal that to the driver so the driver can be ready to take over.

The fact that it still doesn't do that speaks volumes about the kind of "progress" Tesla seems to be making, none of it good.


The crashes were not of a kind that Autopilot could have detected earlier. Several involved stationary objects, which Autopilot seems to intentionally ignore because it can't tell what's actually in the way of the car apart from harmless features of the surroundings that will never cause a collision. Which just goes to prove: Autopilot = glorified cruise control. You must never expect it to keep you safe.


> Autopilot = glorified cruise control

Isn't that what the name implies? An autopilot keeps heading and speed.


If you read the actual report, you will see that NHTSA actually acknowledges that autopilot makes things safer overall, and they agree that crashes are due to misuse of the software. They highlight that on average, when there was a crash, the driver should have been able to recognize the hazard ~8 seconds in advance.


The challenge of course is what level of misuse is to be expected (especially once it becomes more widely available), and if using the software with normal levels of misuse results in an overall safer level of driving than without it.


It appears to be so: misuse is included in the overall safety assessment.

But this is exactly the position of NHTSA in the report. They say that if Tesla can reasonably do something to mitigate misuse, then they should. That’s what this is about.


My point is:

1) This is currently limited to a pretty restricted market (people who drive Teslas, which is a small subset of car drivers). When used more widely by a wider set of people, it may have a worse track record.

2) It isn't clear that such misuse has reasonable mitigations that will continue to make it attractive to use.

But it's all speculation at this point.


> If you read the actual report, you will see that NHTSA actually acknowledges that autopilot makes things safer overall

Where does it say that in the report?

> and they agree that crashes are due to misuse of the software

I also do not see this in the report other than acknowledging that is Tesla's stance. In fact, they explicitly reject that as a defense, to quote:

"A driver’s use or misuse of vehicle components, or operation of a vehicle in an unintended manner does not necessarily preclude a system defect. This is particularly the case if the driver behavior in question is foreseeable in light of the system’s design or operation."

> They highlight that on average, when there was a crash, the driver should have been able to recognize the hazard ~8 seconds in advance

That's simply not what the report says. Here's the passage you're referencing:

"Where incident video was available, the approach to the first responder scene would have been visible to the driver an average of 8 seconds leading up to impact."

_Except_ you're leaving out some extremely important context.

1. This sentence is part of the expository section of the report, it's in no way "highlighted".

2. This is referring only to 11 specific collisions in which Teslas struck first responders, not the other 191 collisions they examined.

I've seen some other things you've said in this thread so I feel like I should address those as well.

> this is only autopilot not FSD.

To quote the report:

"Each of these crashes involved a report of a Tesla vehicle operating one of its Autopilot versions (Autopilot or Full-Self Driving, or associated Tesla features such as Traffic-Aware Cruise Control, Autosteer, Navigate on Autopilot, and Auto Lane Change)."

> this will be fixed by an over the airwaves update.

I see nothing in the report that says this. In fact, as far as I can tell from reading it, the report says they're upgrading and escalating their investigation because of the seriousness of the concerns.

Here's the report's conclusion in its entirety:

"Accordingly, PE21-020 is upgraded to an Engineering Analysis to extend the existing crash analysis, evaluate additional data sets, perform vehicle evaluations, and to explore the degree to which Autopilot and associated Tesla systems may exacerbate human factors or behavioral safety risks by undermining the effectiveness of the driver’s supervision. In doing so, NHTSA plans to continue its assessment of vehicle control authority, driver engagement technologies, and related human factors considerations."

Actual report here for those curious: https://static.nhtsa.gov/odi/inv/2022/INOA-EA22002-3184.PDF

And for those who don't, here's the best summary I can provide.

* The government received 11 reports of Teslas striking first responders with autopilot engaged.

* After an initial investigation of these reports they deemed them serious enough to open a broader investigation into the technology.

* They found 191 cases where vehicles using the technology crashed unexpectedly.

* Of those 85 were dismissed because of external factors or lack of data (leaving 106).

* Of those "approximately half" could be attributed to drivers not paying attention, not having their hands on the wheel, or not responding to vehicle attention prompts.

* Of the 106 "approximately a quarter" could be attributed to operating the system in environments where it's not fully supported, such as rain, snow, or ice.

* That leaves 37 cases where it appears the cars malfunctioned, both not prompting the driver and not taking preventative action like braking until it was too late to avoid the crash or not at all.

* As a result, the NHTSA is escalating their investigation to do an engineering analysis of these systems looking to better understand the defects.


Exactly as predicted by many, many skeptics. Of course that's what his statement meant.


> legally the company would be a difficult target

Is this the case?

Look at the Toyota "Acceleration Gate" fiasco. Toyota paid billions (the full amount isn't disclosed due to NDAs in hundreds of cases) because of poor software. If Tesla engineers failed to adhere to industry best practices (hard to do when you're using deep learning in your control loop), then they'll likely be liable.

An overview of the Toyota scandal: https://users.ece.cmu.edu/~koopman/pubs/koopman14_toyota_ua_...


I remember reading that at the time, and just looked at the slides again.

A bug that commands wide-open throttle therefore also depletes engine vacuum, leading to a lack of power assist if the brakes are pressed, released, and pressed again.

Drivers who have been around the block a bit and have some mechanical sympathy would take a moment to realize neutral will resolve the situation. But many other drivers would not realize this.

Although it would not absolve Toyota from responsibility in such a case, I wish driver training required and tested for dealing with a long list of adverse situations within N seconds each.


I don't want to be that guy, but with all the things Elon is saying and doing, why does it feel like this is a coordinated attack against Elon?


Probably, because it kinda is. What is more upsetting is that this treatment of cars as software was OK until now. It's only now that he is on the 'wrong side' that there is a threat of him getting slapped. It is annoying, because it (the misrepresentation of what Autopilot is to customers) has been going on for years now.


I don't think it's anything new; it's just that his negligence is finally catching up with him. There have been serious issues that folks inside and outside of the government have been concerned about for a while.

I think the bigger issue is people only being able to see things in black and white. For instance, either Tesla is awesome because of what they have done to push forward electric vehicles, or they have problems they need to be held accountable for. Why not both?


The biggest problem is the way Tesla, Musk, and Tesla fans have marketed the AI capabilities. It gives people a false sense of security. Another thing is that I feel Elon is being idiotic by trying to mimic human driving, i.e. not using lidar and wanting to rely on cameras alone. Like, wtf, you should want the AI cars to be better than humans, and having more information than a human is a good way to do that.


The lidar approach just has to be the winner in my opinion; simpler is better.


A future with high powered infrared lasers on top of every car will probably lead to a mysterious rise in people slowly losing their vision. Someone should do a study of the eyes and vision of people who work with LIDAR.

Why not instead use multiple radars (we already put 1-2 radars in cars with adaptive cruise control) to augment Tesla's vision based approach?


According to this article, lidar using eye-safe lasers is standard: https://photonicsreport.com/blog/is-lidar-dangerous-for-our-...

"If a lidar system ever uses a laser with a higher safety class, then it could pose a serious hazard for your eyes. However, as it stands, according to current information and practices, lidar manufacturers use class 1 eye-safe (near) infrared laser diodes."


And what if you are hit in the eyes with the output of 20 of them at once?

Like on a busy highway.

What if you are hit all day long, for hours, every single day?

History is replete with studies which were used to presume different exposures would be fine.


You're right that it needs further study, so it doesn't end up like asbestos.

On a side note, your eyes are hit with more powerful, full-spectrum radiation (from IR to UV) from a burning star every day for hours (or at least they are designed to withstand that).


Agreed, during the day.

Eyes fully dilated during night driving don't constrict from flashes of coherent IR radiation, though.

BTW: this is neat: https://en.wikipedia.org/wiki/Melanopsin


Phototoxic maculopathy induced by quartz infrared heat lamp

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5279082/


The math makes this quite implausible for a random scenario. Perhaps a toll collector could introduce the right systematics to make the scenario not ridiculous on its face.

Imagine pointing a laser at something more than five feet distant the size of an eyeball and then getting another laser to point at the same spot at the same time. And then another five. And those lasers are all attached to cars moving and the eyeball is moving too...
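
A very rough sketch of that coincidence argument in Python, with made-up but plausible numbers (pupil size, distance, and scan field are all assumptions; it ignores beam divergence and says nothing about cumulative exposure, only the "20 at once" scenario):

    import math

    # chance a uniformly scanning beam is on a given pupil at a given instant,
    # then the chance that 20 independent beams all are at once
    PUPIL_D = 0.007   # m, dark-adapted pupil (assumed)
    DIST = 10.0       # m from lidar to eye (assumed)
    FOV_H, FOV_V = math.radians(120), math.radians(25)   # assumed scan field

    pupil_solid_angle = math.pi * (PUPIL_D / 2) ** 2 / DIST ** 2
    field_solid_angle = FOV_H * FOV_V      # small-angle approximation
    p_one = pupil_solid_angle / field_solid_angle

    print(p_one)        # ~4e-7 with these numbers
    print(p_one ** 20)  # 20 beams on the same pupil at the same instant: vanishingly small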


Put an IR filter in the car window glass?


You sound like the kind of internet commenter who whinges about how dangerous bicycles are on the road because someone in a car might swerve to avoid them and head-on collide with a tractor trailer.


Lidar is lo-res, though it has other advantages. Sensor fusion is also not without complexity and problems. But camera-only systems seem like one of those bets that hangs out there for too long before it will have to be walked back, leaving a lot of customers with stranded technology and unmet expectations.


FMCW radar is more tenable for automotive applications, in my opinion. No lasers necessary.


Way too basic. They don't even produce an image.


Look at a recent video of FSD beta on YouTube and focus your attention on the visualisation. I can’t see how anyone can look at that and conclude that LiDAR is still necessary or even incrementally beneficial.


I watched it, you might have rose-tinted glasses on, it's not ready yet. Compare it to AutoX videos. And you can't expect others to follow this path, it's not like Tesla is going to make it available to others.


I didn't say FSD beta was "ready". I merely assert that it is demonstrating remarkable competence in spatial sensing. If spatial sensing isn't a problem—and clearly it is not—it makes no sense to add LiDAR, which only does point cloud spatial sensing. The real "hard problem" of self-driving is in the path planner, not spatial sensing.

I hadn't heard of AutoX. I watched a couple of their videos. Despite being videos cherry-picked by the developer they show a car being driven in cities at slow speed and on faster arterials seemingly unable to drive without ping-ponging. No evidence that these cars could ever drive on roads they haven't been directly trained on.

By comparison, thousands of videos of FSD beta are being posted by hundreds of real world vehicle owners driving intentionally challenging routes in dozens of distinct geographic jurisdictions. It is showing remarkable progress, but of course there's still limitations; understandable when exposed to such a massive long tail of edge cases.

E.g. https://www.youtube.com/watch?v=0g4TnvvuGqM


The current SOTA in monocular “spatial sensing” (3D object detection) is just much less reliable compared to LiDAR-only systems.

You can’t conclude that spatial sensing isn’t a problem because it didn’t crash with a monocular camera. The test is whether it accurately detects all relevant actors/objects in the scene and does so reliably.

From a couple FSD videos I’ve seen, the issues center around perception (failing to get the right lane lines, certain detections disappearing briefly, etc.)


> You can’t conclude that spatial sensing isn’t a problem because it didn’t crash with a monocular camera. The test is whether it accurately detects all relevant actors/objects in the scene and does so reliably.

Indeed, which is why Tesla have put a lot of effort into showing a visual representation of its object detection. Unlike many (most) other players in this space, Tesla isn't afraid to let people see what their system is capable of in literally any situation their customers wish to drive to.

Most of their competitors are hiding behind L4 which, because of how machine learning works, makes systems appear better than they are. Overfitting is a great way to make rapid gains, but much of it is illusory.

> From a couple FSD videos I’ve seen, the issues center around perception (failing to get the right lane lines, certain detections disappearing briefly, etc.)

Even if true, LiDAR wouldn't help as it cannot see painted lane lines and is extremely weak at object identification.

Looking at the most recent FSD beta (10.12+) as shown on YouTube by real customers, it's quite clear that detection is already damn close to exceptional. When FSD does fail, it's usually either out of an abundance of caution (it is extremely cautious around pedestrians) or because the path planner wasn't making a good decision. It's almost never because the car has failed to see something that LiDAR might have.


> Overfitting is a great way to make rapid gains, but much of it is illusory.

I'm not sure why you think this is the case for L4. Do you mean since it can be geo/weather restricted the learning task is easier? Surely weather plays a large role but I don't see why other L4 companies necessarily overfit more (also how does overfitting provide rapid gains?). If you overfit when training for your L4 system...it's not an L4 system.

> Even if true, LiDAR wouldn't help as it cannot see painted lane lines and is extremely weak at object identification.

True, but:

A) Again, the test is reliable detection which is simply out of reach atm from single-sensor input

B) HD maps help a lot with this issue since the lane lines become less important


When you limit your service area, it becomes feasible to train your models on pretty much (or literally) every road in the service area. Every single permutation of weird intersection it will ever come across. If your L4 car only ever has to drive in San Francisco, it doesn't matter that this ML model would have no hope when introduced to Salt Lake City. The model never has to spend a moment excluding any of the millions of potential things it has never been trained on and will never see.

Both the ease of data collection and the absence of contextual noise makes L4 machine learning an order of magnitude easier.

Can you point me towards any recent FSD Beta video where there was an important failure of object detection?


Here's a list of robotaxi companies https://en.wikipedia.org/wiki/Robotaxi#Testing_and_revenue_s..., some of them operational, Level 4.

I would be very surprised if any of them use AI vision without lidar. That's what my point is about: Tesla is lagging behind. Wake me up when it gets Level 4 certification.


The SAE levels don't measure technical sophistication. In fact when it comes to machine learning, an L4 system is substantially easier to implement than L3 or L2 because the task is monumentally less difficult and you can overfit your way to perceived success.

Whether you (or I, or anyone else) thinks Tesla is lagging behind or not has no bearing on the actual reality, and it won't change whether or not Tesla's approach is ultimately successful. Hopefully we're both fortunate to live a few more years and see what happens first hand.


LIDAR isn’t simpler.


Why not? Sure it's all complicated enough, but conceptually measuring distance with a laser is simple.

Do that many, many times and you have a good enough picture of the surrounding area in a machine-readable 3d format, so you have a good starting point.

How is that not simpler than what Tesla is doing?


Trains and public transport were the winners 100 years ago.


Simpler is better, so you want to add another sensor that humans don’t have, to do a task that only humans can currently do safely? This makes no sense to me.


The problem with Tesla's "self-driving" is people assume they have to provide no oversight or inputs beyond engaging the system. It's a recipe for disaster. Couple it with the rapid acceleration and speeds the Tesla is capable of and you exponentially increase that risk. I have seen people shave / apply makeup, be on laptops, and even sleep while these cars were "driving." Conversely, I have seen people accelerate rapidly and unsafe, taking their previously gas-driven aggressive behaviors to a whole new level.

The cars need governors to keep other motorists safe, and I hesitate to say "other motorists," since I think most got their driver's license as the toy in a Happy Meal.


One thing that still confuses me is the idea of pedestrian avoidance. Sure, a sensor can identify a person in the road quickly and brake. I just don't see how cars are ever going to be good at judging whether a pedestrian will dart into the road or not. The car could detect that there's limited visibility on the side of the road, but it's very situational to realize you're in a city or a neighborhood with kids and so need to drive especially slowly.


Tesla moved too fast, imo, basically exploiting loopholes to get self-driving cars on the road when there was no approval from the government. I like Waymo's approach a lot better.


> CEO Elon Musk has often claimed that accidents cannot be the fault of the company, as data it extracted invariably showed Autopilot was not active in the moment of the collision.

> NHTSA said it had discovered in 16 separate instances when this occurred that Autopilot “aborted vehicle control less than one second prior to the first impact,” suggesting the driver was not prepared to assume full control over the vehicle.

Ouch.


"On Thursday, NHTSA said it had discovered in 16 separate instances when this occurred that Autopilot “aborted vehicle control less than one second prior to the first impact,” suggesting the driver was not prepared to assume full control over the vehicle."


I don't know about you guys, but I'd personally feel less anxiety around Teslas if all this Autopilot and FSD bullshit were recalled and I could be relatively confident those vehicles are simply dumb EVs with conventional distracted primates controlling them.


I wish the sector weren't all about money and instead tried to solve real problems. Making a car for disabled people that is not autonomous but requires minimal intervention would make a lot of people happy. Instead we make fail videos for YouTube.


Has there been any studies done on the number of accidents prevented by Autopilot?


Well, you can do some rough math by using Tesla's own numbers.

With autopilot: 1 crash in 4.31 million miles driven

Without autopilot: 1 crash in 1.59 million miles driven
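
A naive sketch of that comparison in Python, using only the two figures above (as the reply below points out, it ignores that Autopilot miles skew heavily toward highway driving):

    MILES_PER_CRASH_AP = 4.31e6      # with Autopilot engaged
    MILES_PER_CRASH_NO_AP = 1.59e6   # without Autopilot

    print(1e6 / MILES_PER_CRASH_AP)     # ~0.23 crashes per million miles
    print(1e6 / MILES_PER_CRASH_NO_AP)  # ~0.63 crashes per million miles
    print(MILES_PER_CRASH_AP / MILES_PER_CRASH_NO_AP)   # ~2.7x, before controlling for road type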


Can those numbers be directly compared? For instance, I’d assume the proportion of highway miles driven is much higher for autopilot miles. And highway driving is much safer than city driving.


I have always been skeptical of on-road self-driving, and I doubt it can ever get to a ready-to-deploy state.

The logic is simple: our road environment is so volatile that even the best-trained drivers make deadly mistakes. Such an environment obviously cannot be handled by a bunch of if-else statements. For these technologies to mature, we'll need AI that can UNDERSTAND the situation and make optimal or near-optimal decisions accordingly. Unfortunately, we are nowhere close to that.


This is a political move. Just like the “electric car summit” in the White House where Tesla wasn’t invited.


I think too much focus at Tesla is on the FSD beta while barely anything trickles down to standard cars. They need more developers who bring new features down and make them ready for a worldwide rollout. The best symptom of this is the horrible state of Autopilot in Europe. Or how their auto blinding lights are just outright bad even though they have all the tools to fix them with a nice update.


Autopilot is terrible in Europe because of EU laws. Driver-assist systems in the EU cannot impart more than 3 m/sec^2 of lateral acceleration [1], which means they're useless for many turns. Another restriction is that lane changes must be initiated by the driver, and they must abort if the lane change has not succeeded after 5 seconds. [2]

1. See page 15 of https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELE...

2. See page 18 of the same document.
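
To get a feel for what that 3 m/sec^2 cap means in practice, a quick back-of-the-envelope sketch in Python (only the 3 m/sec^2 figure comes from the regulation cited above; the speeds are illustrative):

    # lateral acceleration on a curve: a_lat = v^2 / r  =>  r_min = v^2 / a_lat
    A_LAT_MAX = 3.0  # m/s^2, the EU cap

    for kph in (50, 80, 100, 130):
        v = kph / 3.6                  # m/s
        r_min = v ** 2 / A_LAT_MAX     # tightest curve the system may follow at that speed
        print(f"{kph} km/h -> minimum curve radius ~{r_min:.0f} m")

At 130 km/h that's a minimum radius of roughly 435 m; anything tighter and the system has to slow down or hand back control.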


What a shitty website, hacking the back button to confirm if I want to leave.


> On Thursday, the National Highway Traffic Safety Administration, an agency under the guidance of Transportation Secretary Pete Buttigieg, said it would be expanding a probe and look into 830,000 Tesla cars across all four current model lines, 11% more vehicles than they were previously examining.

It's pretty obvious what's happening. Musk becomes a Republican, says he'll buy Twitter, so now the establishment liberals will punish him. Politics as a weapon, as usual.


How many government agencies can Elon Musk piss off at once?


Crap, this is going to be turned into a political issue now because of Musk. Irrespective of the legitimacy of the investigation, it is now going to be branded as a witch hunt. Musk is shrewd: he explicitly went over to the party that fully condones sexual misconduct by its leaders just before his own issue came out, and the whole party is fully behind him, no questions asked. A lot of my friends blindly support him, even those that never paid attention to him before. That party affiliation has served him well with Twitter too, where the Texas AG did him a solid by launching an investigation into bots. This is a mess all the way down. Musk has become a huge distraction at this point for his companies, at the very least for Tesla. He is too powerful for the board to replace him, though.


[flagged]


Your second try is even less funny.


[flagged]


What I'm afraid of is that the people in my society with large amounts of money, power, weapons, and influence are acting like a bunch of sixth-graders.

I'm not asking for miracles. I just want the crazy to stop.


This stuff is scary and certainly more important than twitter spam bots.



