
They should teach Tesla’s “autopilot” (and its FSD upgrade) in business schools. Turns out you can sustainably push up company valuation on vapourware. You have to wonder if Tesla’s autonomous driving technology was actually ever meant to turn into a product. Or whether it is mostly a tool to justify the lofty Tesla stock price. I very much doubt that it is technologically ahead of its competitors.



> I very much doubt that it is technologically ahead of its competitors.

This is where they are as of April 2024: https://static.nhtsa.gov/odi/inv/2022/INCLA-EA22002-14498.pd...

"ODI completed an extensive body of work via PE21020 and EA22002, which showed evidence that Tesla’s weak driver engagement system was not appropriate for Autopilot’s permissive operating capabilities. This mismatch resulted in a critical safety gap between drivers’ expectations of the L2 system’s operating capabilities and the system’s true capabilities. This gap led to foreseeable misuse and avoidable crashes. During EA220002, ODI identified at least 13 crashes involving one or more fatalities and many more involving serious injuries, in which foreseeable driver misuse of the system played an apparent role. ODI’s analysis conducted during this investigation, which aligns with Tesla’s conclusion in its Defect Information Report, indicated that in certain circumstances, Autopilot’s system controls and warnings were insufficient for a driver assistance system that requires constant supervision by a human driver."


And...one more yesterday

"Tesla driver using self-driving mode slammed into police cruiser in Orange County" - https://www.latimes.com/california/story/2024-06-13/self-dri...

"Tesla in self-drive mode slams into police car responding to fatal crash": https://youtu.be/ukq6h55GnvE


> "...a critical safety gap between drivers’ expectations of the [...] system’s operating capabilities and the system’s true capabilities"

Just s/driver/user/g and it sounds like a lot of contemporary LLM hype.

IMO, Tesla's not an outlier -- in today's stock-price-is-king world, it's common to see such overselling in various domains.


Ford’s Blue Cruise is hands-off on mapped highways. Waymo is driverless. Musk is really getting away with the hype for what is essentially Tesla cruise control.


Ford's Blue Cruise is not hands-off on all mapped highways, only on portions of some highways where curves are shallow. And it can suddenly hit a failure mode that completely disables itself, forcing you to take over in a split second.

Also it still kills people: https://www.youtube.com/watch?v=YgFPW5esM04


> And it can suddenly demand you take over in a failure mode that completely disables itself forcing you to instantly take over in a split second.

Like FSD, too, you mean.


Tesla has been hands-off since a week ago (provided you are sunglass-free and watching the road).


It is really difficult to tell whether this is sarcasm or not, and I am not being sarcastic.


Tesla's most recent version of FSD (which is released to a limited number of non-employee testers so far) uses only eye tracking for driver monitoring and does not require the user to touch the steering wheel as long as they are looking forward.


Right, I was confused. "Hands-off" initially sounded like it meant the car was trustworthy enough to drive itself.

But in this case it means the car now trusts you to trust it by not putting your hands on the wheel?


The whole point is for two systems to monitor the road.

If you think Waymo doesn’t have thousands of people doing the same, just remotely, I have a bridge to sell you.


For Tesla yes, because it can't be trusted. Allowing you to not touch the wheel while still expecting you to jump in at any time isn't an improvement by any measure.

No remote watcher can be expected to avoid a crash in realtime.

Waymo is trusted to behave safely without supervision, but of course they monitor everything to validate and improve.


Ah, so it is just another word for negligence? Is that a feature people are championing?


One interesting bit is that recently Nvidia's CEO said Tesla is ahead of the other companies in the space. My opinion is also that Tesla isn't, but then we have a conundrum. Is Nvidia's CEO just saying this because Waymo doesn't buy GPUs from them?

> “Tesla is far ahead in self-driving cars,” Huang said in an exclusive interview with Yahoo Finance.

https://finance.yahoo.com/news/nvidia-ceo-says-tesla-far-ahe...


Nvidia's CEO said that a day after Elon raised a multi-billion-dollar capital round, all spent on a 100,000-GPU Nvidia cluster for another of his companies, so it could easily just be flattery.


> Is Nvidia's CEO just saying this because Waymo doesn't buy GPUs from them?

Yes.


It's amazing how little you can trust these people even though you would expect that the CEO of NVIDIA has a reputation to maintain.


Did you forget the crypto and nft hype train presentations that Jensen did ~2 years ago?


Bear in mind he was selling shovels to the miners, not shilling coins, I think?


> Is Nvidia's CEO just saying this because Waymo doesn't buy GPUs from them?

Waymo works in limited places and relies on a ridiculous number of sensors. Have you seen one in real life with all its equipment? Surely that makes it less advanced, or at least less ambitious. Assuming something malicious about Nvidia’s CEO seems like a big leap.


I've always wondered why adding more sensors to a self driving car was such a big deal.

Yes, it makes the cars more expensive but at some point, these parts will get mass produced and they will be cheap as hell.

The more data a car has, the better it should drive.


Once you add sensors, you can't promise buyers that their car with fewer sensors will be just as self-driving as the car with more sensors. So adding sensors is a big deal for Tesla, because they are in the market of selling a promise of self-driving.


Maybe that's a harsh lesson in making promises that can't be delivered?

Or, more likely, no lessons will be learned and people will still trust what the company says in the future for arbitrary reasons.


Ambition isn't the goal, self driving is. Who cares how many sensors it has as long as it works?


> Who cares how many sensors it has as long as it works?

Funnily enough, the original topic was that looking sexier is more profitable than working.


Well, if something is prohibitively priced, it might as well not exist.


TIL that Porsche cars don't exist.


You know what I mean, don't pick at my words.


One is literally able to drive itself with no human. The other is advanced cruise control. This is like saying a regular plane is better than a fighter jet because it has fewer sensors and both can fly.


Give it some time!

When the dust settles, it will certainly be taught in business schools. And Musk will be in prison (not for FSD specifically).

I watched the shareholders meeting yesterday - it was amazing. Elon repeated all the same things he has been telling us for the past 5 years at least, none of which is close to becoming a reality. And none was described in any tangible detail - all very vague promises.

As for FSD, autonomy and Robotaxis, one has to remember when it was announced and promoted - when Tesla was close to bankruptcy (per Elon himself).


As GME, that Trump social thing, et al. have already shown a couple of times, fundamentals don't matter. Tesla is held by folks who either don't care (ETFs, institutions, hedge funds, blablabla) or by Elon lovers.


I'm not necessarily disagreeing, but, given enough capital, a lot of wild sounding things can become real. Hype is a great tool to attract capital.

Clearly Musk understands this very well and plays that game expertly.

It's really not necessary for all his promises to come true, as long as he can point to a track record of having made some of those wild things come true. So far, that's working.


Sure, I mean Amazon hasn't paid dividends either, yet it's a good investment. So there are many ways to value a stock. And as Jim Simons showed, the usual traders miss quite a lot.


You can’t have a half-trillion market cap with a fanboy stock.


That’s why the litany of apathetic institutional investors is listed first.


Bitcoin begs to differ


Somebody should have pulled a prank and arrived at the meeting in a Mercedes EQS or S-Class sedan with Level 3 autonomy...


I would be very impressed if they managed to do it. This so-called Level 3 "autopilot" doesn't change lanes.


"And Musk will be in prison"

Or closer to formal political power.


Yep, agreed! Sad as it is.


I sleep easy at night knowing that Elon Musk was born in South Africa and is therefore ineligible to be US president.


That is a law. Laws can be changed.

(It is very unlikely, but give it some years and there might be the next iteration of the someone-leading-a-state-like-a-company trope.)


While I agree that Tesla is nowhere close to having an actually autonomous driving system, I think that Tesla did invest more into research and probably collected more data than anyone else on the market. This amount of research has to have some results, even if they don't have a product yet.


Yep, because if you want something bad enough, and if it’s clearly possible, enough research will get us there! Except: commercially viable fusion, quantum computers, hyper loops, AGI, interstellar space travel. Hmmm.

That’s the problem with research; much of it turns out to be a dead-end, or exponentially more difficult as you approach the goal. FSD looked extremely likely there for a time, but I think the problem was actually AGI in disguise.


Machine-learning of any kind has this uncanny ability to get you really far with very little work, which gives this illusion of rapid progress. I remember watching George Hotz' first demo of his self-driving thing, it's absolutely nuts how much he was able to do himself with so little. Sure, it drove like a drunk toddler, but it drove!

And that tricks you into thinking that the hard parts are done, and you just need to polish the thing, fill in the last few cases, and you're done!

Except, the work needed to go from 90% there to 91% there is astronomically higher than the work needed to go from 0% to 90%. And the work needed from 91% to 92% is even higher. Partly because the complexity of the corner cases increase exponentially, and partly because everyone involved doesn't actually know how the model works. It's been hilarious watching Tesla flail at this, because every new release that promises the moon always has these weird regressions in unrelated areas.

My favourite example of complexity is that drivers need to follow not only road signs and traffic lights, they also need to follow hand signals from certain people. Police officers, for example, can use hand signals to direct traffic, and it's illegal not to follow those. I can see a self-driving system recognizing hand signals and steering the car accordingly, but suddenly you get a much harder problem: How can the car know the difference between lawful hand signals, and some dude in a Halloween police uniform waving his hands?

You want to drive autonomously coast to coast? Cool, now the car needs to know how to correctly identify local police officers, highway patrol officers, state police officers, and county sheriffs, depending on the car's location.

Good luck little toaster!


Park rangers, all the fire departments, normal people who try temporarily route traffic around something unusual like a crash, animals, hazardous conditions.

And to detect when someone is doing a prank or just a homeless guy yelling and waving their fist at cars etc


One of the original overpromises from Musk was that you could definitely totally summon your car from NY to LA and it would magically drive all the way, next year, for sure.

Yeah, because if it understands hand gestures, it totally won't be used by criminals, directing it to a chop shop where they can disable it and cut it to pieces. What are you gonna do as the owner?


safest bet you can make - no one old enough to have HN account today will live to see anything even closely resembling FSD


It already exists in Waymo. It obviously has a limited ODD but it absolutely works and easily passes “closely resembling FSD” for most real use cases (I.e. getting to work, school, and the store and back)


Cathie Wood of ARK Invest just gave Tesla a $2,000 price target on the back of an FSD taxi service... That's around a $5T valuation.

You have to wonder if she is dumb, or just knows Tesla investors are totally delusional.

Or perhaps, when you come upon an OG delusional musk worshipper, and call them out, they can point at their money pile and call you the idiot...


>... the 65-year-old divorced mother of three is a devout Christian who starts every day by reading the Bible while her coffee brews, and who relies on her faith during testing moments, such as the many market upheavals...

God will make it go to $2000.


I really think she is a hack


> Turns out you can sustainably push up company valuation on vapourware.

It is very nearly standard practice in startupland to describe a yet-to-be-developed product in the present tense prior to/during development.

Strictly speaking, it’s lying/fraud, but it is so pervasive and widespread as to be expected and could rightly be called standard industry practice.

This is in no way a Tesla-specific thing.


It's weird how people keep calling this vaporware when it actually works, is active on roads, and is used by tens of thousands of people. That's the strangest usage of the term I've ever seen. Vaporware is the term used to describe products announced that never make it to market in any form.


> when it actually works*

*: The definition of "work" includes veering into, incl. but not limited to, other vehicles, road shoulders or road dividers, sometimes impaling the car (incl., but not limited to, its driver) on road railings or other roadside objects. The car might catch fire as a result of the event (or independently of it, if its feelings are hurt or it just feels like it) and burn for days, releasing its densely packed magic smoke, sweat, blood vapor and the condensed tears of its designers and builders. The fumes might be toxic. Please don't inhale them.


Did someone give you the impression that cars without FSD have ever been safe?

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Most dangerous way to travel, full stop. FSD or not. I don't think a perfect safety record is possible. Only better than what people currently accomplish given the inherent unsafety of the whole system. If safety were a top priority, the cars would be on rails.


> Did someone give you the impression that cars without FSD have ever been safe?

Did I say anything resembling or implying that? I don't think so.

> Most dangerous way to travel, full stop.

I love a quote from a famous driver, paraphrasing: "Racing is some people knowing what they're doing driving in a closed circuit. Traffic is the same, but with people who don't know what they're doing".

On top of that, I've had enough incidents to know what humans can do in traffic. They make good stories though.

> I don't think a perfect safety record is possible.

Me, too.

> Only better than what people currently accomplish given the inherent unsafety of the whole system.

I think cars with driver monitoring are safer than cars with FSD or hands-free driving. I love to drive cars with lane hold, adaptive cruise and driver monitoring, because these systems improve safety and augment humans at the same time.

I don't believe that AI and/or computer vision is close to matching human perception and reasoning well enough to handle a 2-ton steel box the way humans do. Augmenting humans' capabilities is a far safer and more reliable (if unsexier) way.

> the cars would be on rails.

I love trains to death, but they're not perfect either.


Fake it til you make it is a fundamental principle of startups. We just don’t usually see it at such a vast scale.

There’s a timeline where Theranos was acquired for 9b by UnitedHealth if they could keep the grift alive juuust a bit longer and Elizabeth Holmes ascends to the tech firmament permanently while her enablers congratulate each other.

Tesla has even more and deeper financial and branding defense mechanisms. That said, the clock is ticking, now, I think


> Elizabeth Holmes ascends to the tech firmament permanently while her enablers congratulate each other.

Holmes and at least some of her supporters still ardently insist, to this day, now that everything is out of the bag (the "pulling filing cabinets in front of doors to specific labs on FDA inspection days so they only see the labs we want them to" crap, all of it), that she, and humanity, have been robbed of the truly magnificent biomedical advances that Theranos was just about to deliver.


Wouldn't United Health have been in the same position as HP when they acquired Autonomy for $10b in 2011 on the basis of their cooked books?


But it's... out right now. You can literally use it today.


And it sort of works in limited cases.

FSD is like ChatGPT: it works in many cases and it makes some mistakes, but it is certainly not “useless”. It won’t replace full-time humans yet (the same way that ChatGPT does not replace a developer) but can still work in some scenarios.

To the investor, ChatGPT is sold as “AGI is just round the corner”.


But "works in limited cases" is absolutely not enough, given what it promises. It drove into static objects a couple of times, killing people. Recent videos still show behavior like speeding through stop signs: https://www.youtube.com/watch?v=MGOo06xzCeU&t=990s

Meaning that it's really not reliable enough to take your hands off the wheel.

Waymo shows that it is possible, with today's technology, to do much much better.


It's not enough for robotaxis yet, and Tesla doesn't claim that it is. They just think they'll get there.

What they do claim is that with human supervision, it lowers the accident rate to one per 5.5 million miles, which is a lot better than the overall accident rate for all cars on the road. And unlike Waymo, it works everywhere. That's worthwhile even if it never improves from here.

Fwiw you can take your hands off the wheel now, you just have to watch the road. They got rid of the "steering wheel nag" with the latest version.
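For scale, the comparison being made can be sketched as a simple ratio. Note the national-average figure below is a placeholder assumption for illustration, not a number from this thread:

```python
# Ratio of Tesla's claimed supervised-FSD accident rate to an assumed
# fleet-wide average. Only the 5.5M figure comes from the claim above;
# the 650k average is a hypothetical placeholder.
claimed_fsd_miles_per_accident = 5_500_000   # per the claim above
assumed_avg_miles_per_accident = 650_000     # placeholder assumption
ratio = claimed_fsd_miles_per_accident / assumed_avg_miles_per_accident
print(f"claimed improvement: {ratio:.1f}x")
```

Of course the whole comparison hinges on both rates being measured the same way, which is exactly what's disputed below.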


Well the recent NHTSA report [1] shows Tesla intentionally falsified those statistics, so we can assume Tesla-derived statements are intentionally deceptive until proven otherwise.

Tesla only counts crashes with pyrotechnic (airbag) deployments in its own numbers, which NHTSA states covers only ~18% of all crashes, a figure derived from publicly available datasets. So Tesla chooses not to account for a literal 5x discrepancy derivable from publicly available data. They make no attempt to account for anything more complex or subtle. No competent member of the field would make errors that basic except to distort the conclusions.
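The ~5x figure falls straight out of the 18% number, back-of-envelope:

```python
# If only pyrotechnic (airbag) deployments are counted, and those are
# ~18% of all crashes, the true crash count is roughly reported / 0.18.
reported_fraction = 0.18
undercount_factor = 1 / reported_fraction
print(f"undercount factor: ~{undercount_factor:.1f}x")  # ~5.6x
```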

The usage of falsified statistics to aggressively push product to the risk of their customers makes it clear that their numbers should not only be ignored, but assumed to be malicious.

[1] https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf


> It's not enough for robotaxis yet, and Tesla doesn't claim that it is. They just think they'll get there.

"By 2019 it will be financially irresponsible not to own a Tesla, as you will be able to earn $30K a year by utilizing it as a robotaxi as you sleep."

This was always horseshit, and still is:

If each Tesla could earn $30K profit a year just ferrying people around (and we'd assume more, in this scenario, because it could be 24/7), why the hell is Tesla selling them to us versus printing money for themselves?


They do plan to run their own robotaxis. But there are several million Teslas on the road already. They're just leaving money on the table if they don't make them part of the network, and doing so means they have a chance to hit critical mass without a huge upfront capital expenditure.


The product doesn't work until the human can be human instead of telling followers only subhumans complain.

Being more specific: the product either requires certification, like a driving license, or must be foolproof.


> it works everywhere.

where there's enough bandwidth

> you just have to watch the road

... and then react in a split second, or what? it's simpler to say goodbyes before the trip.

> They just think they'll get there.

of course. I think too. eventually they'll hire the receptionist from Waymo and he/she will tell them to build a fucking world model that has some object permanence.


There's no bandwidth requirement, it runs locally on the car.


The driving-into-static-objects thing is horrible and unacceptable, I agree. As I understand it, this occurred because Autopilot works by recognizing specific objects - vehicles, pedestrians, traffic cones - and avoiding those. So if an object isn't one of those things, or isn't recognized as one of those things, and the car thinks it's in a lane, it keeps going.

Yes, it was a stupid system and you are right to criticize it. And as a Tesla driver in a country that still only has that same Autopilot system and not FSD, I'm very aware of it.

But the current FSD is rebuilt from the ground up to be end-to-end neural, and they have the occupancy network now (which is damn impressive) giving a 3d map of occupied space, which should stop that problem occurring.


> Meaning that it's really not reliable enough to take your hands off the wheel.

Soooo just like ChatGPT then? As the parent comment said.


There are no FSD deaths. Only old Autopilot ones.



Has Tesla actually stated that in a clear manner? They seem a bit cagey about such data.


I think this is one of my fave FSD predictions:

https://www.huffingtonpost.co.uk/entry/tesla-driverless-cars...

Oct 2014: "Five or six years from now we will be able to achieve true autonomous driving where you could literally get in the car, go to sleep and wake up at your destination."


And waymo got there, in limited conditions


Indeed. "Limited conditions" being the issue here (and as per the article).


That’s metaphysics.


I guess the difference is ChatGPT is less likely to cause death if it makes a mistake.

Users generally have time to decide if the output ChatGPT provides is accurate and worth actioning.


At this point, I'd be surprised if ChatGPT has not yet given someone a response which caused them to make a mistake that resulted in a death.

We found out about the lawyers citing ChatGPT because they were called out by a judge. We find out about Google Maps errors when someone drives off a broken bridge.

https://edition.cnn.com/2023/09/21/us/father-death-google-gp...

For other LLMs we see mistakes bold enough that everyone can recognise them — the headlines about Google's LLM suggesting eating rocks and putting glue on your pizza (at least it said "non-toxic glue").

All it takes is some subtle mistake. The strength and the weakness of the best LLMs is that their domain knowledge is partway between a normal person's and a domain expert's - good enough to receive trust, not enough to deserve it.


If ChatGPT emits a fragment of code that doesn’t compile, a developer can simply undo and try again.

No such luxury is granted to the driver using FSD who has just collided with another vehicle.


Or it produces code that compiles but is subtly wrong. That probably won't kill someone, well, until we start developing safety-critical systems with it. One day we might have only developers who can't actually write code fluently, and we'll expect them to massage whatever LLMs produce into something workable. Oh well.


Very low bar to have code that just compiles.


Not much better than assisted driving, which actually says what it does.



