Like a dummy I paid something around 7,500 euros for the so-called FSD package when the Model 3 arrived in Europe in early 2019.
Five years later it still does practically nothing. The car can’t even do the bare minimum: it doesn’t recognize speed limits on most highways in Finland because it gets confused by round LED signs showing the limit (these are used because the limit differs between winter and summer). It’s just embarrassing. A self-driving car should understand road signs that a five-year-old has no difficulty reading.
I’m guessing it works better on some Californian roads that have been carefully hand-tuned by Tesla so they can show the Boss an illusion of progress.
Personally I’m not going to buy another Tesla after this episode. Optimism is one thing, but this level of hype is actively dangerous because it gives people the wrong idea about the capabilities of something they rely on in traffic.
I bought "just" the EAP since by 2022 the FSD was clearly a scam, but I feel like an idiot for having purchased even that. 0 of its 5 advertised features are usable and 3 of them don't exist at all:
Navigate on Autopilot - Useless. Only handles highways, deactivates itself on normal roads. It can theoretically handle switching from one highway to another - in practice it attempts suicide every single time. If you don't intervene it will either crash, or cause a crash behind you.
Auto Lane Change - Nope, not "Auto" at all in Europe. You've got to "consent" with your indicator lights and a gentle nudge on the steering wheel. Said nudge must apply a torque between 0.44 and 0.46 newton-metres - anything less won't register, anything more will override the Autopilot and leave you in manual control. Also it won't work if there's a car less than 200 m behind you in your target lane - it's ridiculously cautious. Finally, 30% of the time, despite the above conditions being met, it hesitates for 30 seconds and then opts to stay in its lane with a timeout error.
Autopark - nope, not since they removed the 3 EUR worth of ultrasonic sensors.
Summon - nope, see above.
Smart Summon - nope, again, see above.
If there's anything like a class action in Germany I'll gladly join.
This is really a pity because I like the car apart from this scammy aspect.
There was an article here recently about someone who settled with Tesla for a refund with interest. I don't know Finnish consumer protection legislation, but if they didn't deliver and aren't going to, that might be something to look into.
I'm thinking it would be fair to get a refund based on what the cost of the FSD would be if invested in TSLA stock at the time of purchase. For me, that'd be a 17x. They used that money to build a company, after all.
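For what it's worth, the "refund as if invested in stock" figure is just a price ratio. A rough sketch (the share prices here are made-up placeholders chosen so the ratio comes out to the 17x mentioned, not real TSLA quotes):

```python
def stock_adjusted_refund(purchase_eur: float,
                          price_then: float,
                          price_now: float) -> float:
    """What the purchase money would be worth had it bought stock instead."""
    return purchase_eur * (price_now / price_then)

# Placeholder prices giving a 17x ratio (340 / 20 == 17).
print(stock_adjusted_refund(7500, 20.0, 340.0))
```

Whether any court would ever entertain opportunity-cost damages like this is another question entirely.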
The whole FSD thing is so misleading that it should make buyers eligible for a full refund. While the Tesla website only promises to deliver the feature at some point in the future, public comments from Musk and others made it look like that point would be the end of the year.
Especially when Tesla has stated in filings that @elonmusk, or whatever his Twitter handle is, "is an official communication channel of Tesla" (which was part of the SEC's issue), though I realize that many of those statements were made elsewhere too.
Musk, in 2015, viewed FSD as a solved problem just needing implementation: "I view it as a solved problem. We know exactly what we need to do and we will be there in a few years."
Somewhere along the line, the problem got unsolved. In 2022, "Our focus now is just on solving this problem".
Not that it changes much, but your cheap car probably uses the same tech developed across all of VAG group, so the price of your car is less of a factor. People buying VWs, Audis and Porsches probably fund it!
Mine never knows either, and if I dare use the cruise control or Autopilot on the highway near Oslo, it phantom-brakes at bridges and shadows randomly. The same bridge it was phantom-braking at when a friend had a Model X like 8 years ago - still not fixed, even though probably thousands of Teslas drive there every day.
Mine has no idea about the speed limit in the school zone next to my house. Why? Because the speed limit changes based on whether school is in session. When is school in session??? You might be able to tell by looking at the school parking lot. You could open a web browser, navigate to the correct school district's web page, download the school year calendar in PDF form, and determine if today is a school day. Or you could do what my Tesla does and let me drive 10 mph over the regular speed limit (as of about a year ago - maybe they've fixed it).
Huh, the schools around my place have signs with times indicating session, so like 8-16 weekdays is 30 km/h otherwise 50. Are you expected to know that yourself?
New one to add: the front license plate falls off constantly. They don't require those in California, so when you buy one in states that do require it, they send you a janky plastic holder with just a bit of double-sided tape that never seems to hold.
Funniest one: When you are using cruise control you cannot disable the wipers.
Kind of… there is an unwritten rule that if you have an expensive looking car without a front plate, it won’t be enforced. In wealthy neighborhoods a substantial fraction of cars won’t have front plates. You will see a lot of Porsches, Teslas, etc. that were sold new without holes for a front plate mount and never had one.
If you have an old beater, or the wrong skin color, you can get pulled over for not having a front plate.
You can also try a color printer and adhesive laser-friendly sheets that claim to be weatherproof. Ours are looking a bit haggard five years after printing them, but we haven't been issued any tickets. I'm sure they're not legal, but nobody seems to mind.
> Funniest one: When you are using cruise control you cannot disable the wipers.
You cannot disable the automatic wipers when using cruise control. This makes sense to me: Because they use the cameras for cruise control, they need to be able to clean the cameras whenever necessary. If the weather is clear, the automatic wipers never come on unless you hit a puddle or something.
It makes sense once you make the somewhat arbitrary and questionable design decision to do everything with cameras. A $17,000 brand-new minivan has a better driving experience re: console, space, and adaptive cruise, because it doesn't use cameras.
It is difficult to understand how this is not outright fraud.
If I bought a Honda without ABS (let's pretend it was in the 90s) but paid a lot extra for a promise that they'll retrofit ABS later but never do, clearly that would be fraud.
I am wondering if your local laws can help in this case, or whether the contract you signed contains some tricky text that allows them to sell you a broken FSD.
> I paid something around 7,500 euros for the so-called FSD package when the Model 3 arrived in Europe in early 2019.
So, this is something I never got. Like, even if you were convinced it was something which was definitely going to be delivered, on time, as promised, in a form that wouldn't cause the regulators to immediately ban it (and frankly this would be an unreasonable set of things to believe; even where the initial set of promises is reasonable, and they were not in this case, shit happens and things get delayed for reasons beyond anyone's control), why not just buy it when it became available? I don't get the impulse to pre-buy this sort of thing.
They've been misleading to the point of lying about FSD, but I feel like anyone who reads this site should have known enough not to pay that! Hope you get some of it back eventually anyway.
Why there hasn't been a class action about it yet is a real mystery.
> I’m guessing it works better on some Californian roads that have been carefully hand-tuned by Tesla so they can show the Boss an illusion of progress.
It has nothing to do with “the boss”. It’s just basic bias of where the engineers are and where testing happens.
All the stories about Twitter/X suggest that absolutely everything revolves around what the boss wants on a given day.
That’s autocratic micromanagement, and its downside is that the boss ultimately has a limited attention span and employees learn how to distract him with shiny demos that show a glimpse of the impossible thing he demanded they ship.
Nope, cruise and Waymo suffer the exact same biases. They are just savvy enough to not pretend they can operate outside of their focus/training markets.
It reflects Musk's own bias. If he actually gave a shit about competence and quality, Teslas would be more functional outside California. But he doesn't and won't, so until he crawls into a hole somewhere and disappears, Teslas will continue to be over hyped garbage.
Cruise and Waymo suffer the same issues and don’t have Musk at the top. The only difference is they don’t try to operate there.
A valid criticism would be that they shouldn’t operate out of the envs they are optimized for, but that has nothing to do with engineers pleasing the boss.
>Teslas will continue to be over hyped garbage.
Why do you suppose people keep buying them so much and seem to like them? Is it possible you just don’t like Elon Musk and are projecting that onto the car?
Do you work at Tesla? Or any big company? I definitely make sure that my work demos well for my higher ups, especially those that are known to be unpredictable and moody :)
It isn't about hand-tuning. I'm still confused by your description of speed limit signs in Finland.
Is the speed limit emitted by LEDs? Why is it ever different between winter and summer? Etc.
If the sign is not a white background with no LEDs and black or red lettering, I as a human would ignore the signage, just as the car is probably trained to, since that isn't reflective of North American standards.
First of all, the standard for speed signs in most of the world is a round sign with a red border and a black number inside indicating the actual speed. US speed signs are extremely non-standard. Having the numbers be made out of LEDs in a country where winter means guaranteed ice and snow (so a much lower limit to be able to safely negotiate the same curves) would not be hard to recognize, as long as the standard is otherwise followed.
Secondly, road regulations are not dependent on what you personally know or are used to. A driver in Finland, even if they are coming from America, has the obligation of following all Finnish road markings. And if Tesla is selling FSD in Finland, it is obvious that they have the same obligation as any other road participant.
I'd like to see some dynamic "no passing" zones on multi-lane highways, to tell people to stop trying to change lanes every fifty feet when neither lane is moving.
I remember when the CHP used to do rolling traffic breaks in Los Angeles. The officer would flip on the lights and use the PA to tell everyone to stay in their lane and not try to pass. They’d slowly accelerate to 20mph, and it worked beautifully for turning stop and go traffic into consistent flow. Unfortunately, few drivers think about what they’re doing enough to learn that lesson and it’s very labor intensive (and probably not what the officers want to do).
>Why is it ever different between winter and summer?
Because doing 120kph on an icy road or in heavy rain is a recipe for disaster. Northern Europe isn't California. The temperature ranges from +30C to -30C during the year.
As the car is sold outside of North America (with a promise of FSD), don't you think it's fair to expect it to recognise the speed limit signs in the country where the car is sold?
You would ignore all the speed limit signs on a highway because they don’t have a white background? Come on. That doesn’t make any sense. Obviously the sign displaying a number enclosed in a circle is the speed limit (which you might also deduce by observing the speed everybody else is going). Next you’ll tell me that Tesla can’t possibly be expected to understand speed limits expressed in the exotic unit of km/h.
It’s fine if the car is trained for North American standards only. Why then is Tesla marketing and selling this feature in Europe at all? What did I get for 7,500 euros?
I’m sure they have fine print in the contract saying “we don’t guarantee this feature will ever do anything.” So I’m taking it as an expensive lesson.
You can likely make a small claim to recoup that cost under EU consumer protection laws. Given the ongoing promises, as per the article, I doubt there would be a time limit on initiating such a case.
EDIT: I think the limit is only 5,000 for a small claim, but you don't need to go the small-claim route; it's just super easy to do so.
Variable speed limit signage is not uncommon in the United States. It exists in areas of frequent and heavy fog, in construction zones, and in areas of regular stopped traffic.
> Why is it ever different between winter and summer?
Here in Oslo it's due to the local air pollution caused by winter tires. Studded tires especially but non-studded as well as they have soft rubber which sheds particulates more easily.
Due to physics, cold air can act as a lid, preventing the ground-level air from getting refreshed, causing a build-up of particulates.
Lower speed means less particulates from the asphalt and rubber.
I hope you ignore it. The fine you receive is based on your income. If you for example earn 5k EUR a month and go 20mph over the limit, you'll be fined 2640 EUR, payable on the spot. That you're not from Finland doesn't matter.
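For context on how a figure like that could arise: Finnish day fines are usually described as a per-day amount derived from net monthly income, multiplied by a number of day fines set by the offence. The formula below (net monthly income minus a fixed deduction, divided by 60), the deduction, the rounding, and the day-fine count are all illustrative assumptions, not legal advice:

```python
def day_fine_eur(net_monthly_income: float, deduction: float = 255.0) -> int:
    """One day fine, using the commonly cited (income - deduction) / 60 rule."""
    return int((net_monthly_income - deduction) / 60)

def total_fine_eur(net_monthly_income: float, day_fines: int) -> int:
    """Total = unit day fine times the number of day fines for the offence."""
    return day_fine_eur(net_monthly_income) * day_fines

# Illustration: a 5,000 EUR/month earner with an assumed 30 day fines.
print(total_fine_eur(5000, 30))
```

The point stands either way: the total scales with income, so well-paid speeders get hit hard.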
> Why is it ever different between winter and summer? Etc.
It doesn't matter to the discussion. For whatever reason, if that is how the local laws work, it is what it is. If a car company wants to sell a car in that market, they need to comply or get out.
I mean, remember that time when the Tesla engineers fixed Elon Musk's commute by literally repainting the lane lines so the vision system would see them [1]. Oh, or how about that time that they hand-tuned the FSD reveal demo, you know, the one that said "The driver is only there for legal reasons." and re-ran it over and over again until it succeeded without crashing (literally, it crashed once while they were making the demo video) as admitted under oath by the current head of Autopilot software who personally faked the demo even though it still, 7 years later, can not do that route consistently without error today [2].
At what point is saying "it's two years away, trust me!" not just naive optimism but false advertising or market manipulation? He publicly said this for like four years, and then he walked it all back.
I just don't get it. Any other little person does this and they would be sued big time by investors and shareholders, and yet people want to trip all over each other talking about how awesome he is.
After those two years have passed, probably. But the claim was actually stronger than that. It was that the car should earn back the money it cost by running an autonomous taxi service during night and office hours. That specific claim was a scam from day one. It was not even at the research stage, and probably never will be.
Also that would require lending your vehicle out to random strangers all the time, so you couldn't keep anything valuable/fragile in the car and you'd need to clean it rather frequently.
Even on the face of it, it's more like owning a taxi than a car that makes you money.
Not to mention the various licenses and insurances required to ferry people around.
Anyone who gave it a second thought probably didn't take it seriously, but that didn't stop a whole bunch of fans and stock market analysts from sketching out some far-flung implications.
Not defending Musk, but investors and shareholders don’t abandon him because he has delivered huge returns to them through Tesla stock. He is always too optimistic (maybe intentionally) about the timelines, but ends up generating money in the end.
Plus, he is super rich and people want to be close to big money - could lead to many benefits through networking and opportunities.
Does he end up generating money in the end though? The only way to say that definitively is if Tesla were at some point in the future to shut up shop and return a balance to the shareholders many multiples of what they bought the stock for.
Until that happens the plates are still spinning and could go either way.
Well we already know alt right weirdos have their heads so far up their own asses they can lick their nostrils clean, so we'll just keep marginalizing them and their insanity.
Which is equally balanced with the anti-fanbois that don't have a Tesla but still flock to any thread about Musk to voice their opinions.
Look, I don't have a Tesla, never owned one. I don't use Twitter. I think Musk has major flaws. There is definitely a growing trend of anti-Musk vs Musk fanboi flame wars. This thread seems to be no better.
It never actually has to be a lie. Making predictions years into the future is impossible if variables are changing. He can make a perfectly truthful estimate of when the product will be ready as of today’s information, and then the information changes.
It’s up to you as a consumer to either discredit the optimism or buy into it.
On the other hand: because he is always wrong with his timelines, it can't be securities fraud, since everyone knows (and ridicules him / memes it) that his timelines are wrong.
It's more about the consumer's satisfaction IMHO. For one reason or another, it appears that the Tesla customers are happy with their purchase.
There is a cult mentality, and the cult members are happy with what they got, even if it's not exactly as advertised. However, it's not just about the cult mentality; it's about the overall experience of the product.
I think if Elizabeth Holmes had delivered slick blood-testing machines that work no differently than those on the market (a large amount of blood instead of a drop), but with better workflow and user experience, Theranos could have been a success like Tesla.
She tried to imitate Steve Jobs; had she imitated Elon Musk she would have been fine. She should have had the Siemens machines modified to work with a modern GUI, developed some automation for drawing large quantities of blood in a pod at the mall, integrated all that with the healthcare system, and kept repeating that in 2 years just a drop of blood would be enough.
The only difference between Musk and Holmes was that Musk actually delivered something that had some redeeming properties.
The cars are nice and fun to drive. The charging network is ubiquitous and works perfectly. The highway-based "autopilot" isn't revolutionary, compared to other similar products, but works competently and makes long drives easier. The FSD Beta is a hot steaming pile of garbage that will kill people given the slightest opportunity.
Exactly. Theranos' R&D was also capable of threatening people's health by giving results that were too inaccurate. Instead, they should have made an app that coordinates healthcare providers with service points where Siemens machines, modified for automation, are used. When asked about the single-drop tests, just say it's coming in 2 years, and if the tech and science develop enough, some time in the future they can do that too.
> I think if Elizabeth Holmes had delivered slick blood-testing machines that work no differently than those on the market (a large amount of blood instead of a drop), but with better workflow and user experience, Theranos could have been a success like Tesla.
I wouldn't have said that Tesla just imitated existing car companies. Can you elaborate?
I don't suggest that at all. What Tesla did was to create an electric car with good user experience without developing any revolutionary tech. That's what Theranos should have done, create a good user experience on the existing tech. They did wrong by betting on inventing the tech.
If Tesla's prime value proposition was to invent some tech that would do something revolutionary, they would have gone bust like Theranos. Instead, they built a charging network and a kick-ass infotainment system that made their electric cars practical.
I very much disagree. They didn't just make cars "that work no different than those on the market". They jump-started mass-market electrification of cars, solving who knows how many problems along the way, in technology, business, regulation, manufacturing, etc etc. That's very different to making another Ford Focus.
Tesla did and is betting on battery costs coming down, due in part to tech under development, and they could have been on the wrong side of that bet. I think that unlike Theranos, battery tech has a semi-predictable cost curve.
Betting on economies of scale and improved performance is quite different than betting on tech that doesn't exist. Theranos bet on doing over 200 blood tests from a single drop of blood, which is science fiction. They may try to solve it, but there's no clear path.
Theranos could have been amazing; the core conceit of "cram a bunch of different experts into a room and let them riff off each other" could have been amazing.
The problem is that it didn't happen, because Elizabeth Holmes was a control-freak fraudster. She specifically siloed them to try and control information, which is the exact opposite of "put 'em in a room together".
> I don't suggest that at all. What Tesla did was to create an electric car with good user experience without developing any revolutionary tech.
Not sure what you mean. They are the ones that made the tech good enough for mainstream adoption. Your post reads like the “iPhone didn’t have any revolutionary tech” types.
> They are the ones that made the tech good enough for mainstream adoption.
So, this is a common idea, but it doesn't _really_ gel with reality. The Tesla S, Nissan Leaf, VW's first electric car platform, and the Renault Z-E platform are contemporaries to within about a year (iirc it was Leaf, then an unsuccessful Z-E-based car, then eGolf in limited quantity, then S, then mass-market eGolf and Renault Zoe). Of these, the Leaf sold far more units than the S over its lifetime; the Zoe was about the same as the S, the eGolf somewhat less.
Tesla didn't have a _true_ mass-market car until the Model 3 some years later.
Your chronology is screwed up, we didn’t hit mass market until the model 3.
The litmus test is what cars you see on the roads of Wyoming. There are plenty of Model 3s, and a Leaf was an extremely rare sight because the car was nearly useless outside of a city commute.
> The litmus test is what cars you see on the roads of Wyoming.
Given that US adoption of electric cars lagged other developed world countries, and thus early electric cars were generally designed for European, Chinese, and to some extent Japanese, market preferences, that seems like an extremely weird litmus test. Tesla _was_ the first company to release a mass-market car aimed at the _US_ market (or at least the first credible attempt; the Leaf was available in the US but was just too far from US market preferences), but the Chinese and European markets already had mass-market electric cars.
The iPhone didn't have any revolutionary tech; in fact, it was quite lacking compared to what was out there, and that's why many people claimed that it would fail.
What the iPhone had was a revolutionary user interface that provided an outstanding user experience despite lacking a lot of features and functions.
Its capacitive touchscreen was revolutionary. That's what enabled the UI to be done in a different way. This isn't some "great design creates great product" thing. It's the underlying tech that did it, and if the UI had been different, it would have been just as revolutionary.
I would argue that the addition of the app store is what made the iPhone really jump phones forward.
Nope, capacitive screens and multitouch were 30-year-old tech. Sure, Apple did the engineering to actually make them into the product they shipped, but that wasn't something Apple invented with the iPhone.
Everything is built from something else. Not doing the fundamental science that leads to a product doesn't mean that the product itself isn't innovative. Apple's interface elements[0] and interactions[1] were no doubt derived from elsewhere too. Getting UI elements on a screen wasn't new. Cramming a capacitive touchscreen into a phone was very new.
Right, people were buying iPhones because they wanted to touch with two or more fingers. Everyone was trying to make screens that could register multiple touches, but Apple was the first to crack it, and this led to the natural success of the market demanding multiple touches and them having it.
Had Musk not purchased Tesla after the Roadster was built, all of his claims would have been utter bull. Now that there was an entry electric car, he could spin any tale or justify any hype by pointing to the one functioning model.
From there it was absolutely the modern car manufacturer's game: promise a concept, deliver some part of it, talk about next quarter.
Frankly, having Level 4 self-driving pulled off by Mercedes in California first should be the ultimate proof. Merc did it in-house, faster, without hyping it. Wild.
Yeah, he over-promised on FSD, but he still delivered on reusable rockets, and more or less nudged all other manufacturers into producing electric cars at minimum a few years earlier than they intended to.
He also exposed twitter's political bias & collusion with US security agencies, and exposed that the company can run with 80% less staff.
This then is a fine example that past performance is absolutely no indication of future performance, despite our emotions screaming at us that it is. We just don't like uncertainty or a low-trust environment by default, so we subconsciously prefer replacing it with a more comforting bad truth/lie rather than accepting the other choice.
The Space Shuttle from the 80s was reusable too. Falcon 9 isn't fully reusable (upper stage) and now Starship drops some new ring into the sea.
> More or less nudged all other manufacturers into producing electric cars at minimum
Didn't he instead let them delay making electric cars by transacting with them for their ZEV credits?
I think the SpaceX stuff is still impressive, but he didn't deliver on his big statements. Dragon was supposed to land on Mars years ago for instance. The Space Shuttle wasn't economical, but Starship went back to lots of elements of its design.
This really captures the essence of his naïveté, to me:
> There are no fundamental challenges remaining. There are many small problems. And then there's the challenge of solving all those small problems and putting the whole system together.
That is the fundamental problem of all AI research. How many edge cases are there, and how many does it have to handle gracefully before it's "good enough"? I think the tail is longer than people expect, and the changes that have to be made to address it are very fundamental.
At the same time, I really want them to succeed, and I think FSD could be one of the most transformative tech advancements of the century, like the original invention of the automobile.
That's not the fundamental problem of all AI research. That's the fundamental problem of all software engineering!
Integration complexity especially is today's biggest source of complexity in any significantly complex product. And you can't cheat your way around it. We managed to cheat our way around growing complexity in subsystems by isolating them more and more, which allowed us to tackle the complexity with more people working in parallel. But that just further increases the integration complexity, where we don't know a way to effectively scale our approaches to deal with it.
The only way we know is "cutting corners": creating a system that works well in 99% of cases and leaving the small problems of the remaining 1% unsolved, so there's nothing to integrate for those. That works well enough for production use in a lot of situations. It doesn't work at all when it comes to Level 5 autonomous driving. And that's what Elon either doesn't understand or actively ignores.
> Integration complexity especially is today's biggest source of complexity in any significantly complex product.
100% This, which should be very obvious to anyone working at a FAANG tier company.
There is no shortage of ideas, technical documents, and prototypes working locally. However, getting a product to production that integrates (even on a paved path) is most often the most complex task.
I'm not sure it is down to edge cases. It's more like they are a fundamental AI breakthrough away from being able to understand, say, that that thing must be a fire truck even though it doesn't look the same as the previous ones seen.
I mean, alternatively you can take the Waymo approach with loads of sensors and pre-mapped routes, but Musk seems to be banking on almost human-level AI, and we aren't there yet.
> To me, it indicates that they haven't solved even the easiest part: there is something in a static setting, don't crash into it, stop.
That turns out to be a harder problem than most people think, and it gets in the way of the 'driving' part. "Are we looking at some big, blank object or are we looking at the sky? It's probably the sky, so we'll just drop it from the model." Which is how Teslas drive into big trucks. :/
Even if the visuals failed, Tesla has sonar and radar, and the Tesla was moving at a snail's pace.
Hell, my vacuum stops moving when it detects it has crashed into something (likely the torque required to move the wheels is larger than expected).
Edit: at the end of the day, it doesn't matter why it happened. The result is a crash in a super simple environment and is (to me) quite indicative to the state of things. They are supposed to come up with solutions, not excuses (also, Elon hates lidar, but that is still on Tesla).
Tesla dropped radar, lidar and ultra-sound from their cars a while ago [0]. This was another childish decision by Musk, who personally overruled the engineers [1].
"Did you agree with your egomaniacal boss who thinks he is a brilliant engineer and programmer when he made one of the biggest blunders, or did you know better?"
I would compare them to other companies: they're not going to lightly outshine their own products, since that both affects sales and increases the odds of a lawsuit on behalf of everyone who bought the older models under false advertising. It's more likely to end badly in court if the lawyer can say everyone else in the industry got this right, but their clients' vehicles will never get the promised features because they lack the necessary hardware.
It's still a hard problem, as radar/lidar have similar object-identification issues as cameras. (I've made production products with both lidar and cameras, and the set of situations where both fail is non-zero.)
The problems are both false positives and false negatives, and no matter what you do, you'll see some of both. (Yes, a problem people have, too.)
Oh, and don't forget cars are operating in an adversarial environment. Obstacles will appear w/out giving you time to stop.
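To illustrate the trade-off the parent describes with a toy calculation: with two sensors treated as independent (the independence and the error rates below are made-up assumptions), requiring either sensor to fire ("OR" fusion) shrinks misses but grows false alarms, while requiring both ("AND" fusion) does the opposite:

```python
def or_fusion(fp1, fn1, fp2, fn2):
    """Detect if EITHER sensor detects: you miss only if both miss."""
    return 1 - (1 - fp1) * (1 - fp2), fn1 * fn2

def and_fusion(fp1, fn1, fp2, fn2):
    """Detect only if BOTH detect: false alarm only if both false-alarm."""
    return fp1 * fp2, 1 - (1 - fn1) * (1 - fn2)

# Made-up per-sensor rates: 1% false positives, 5% false negatives.
print(or_fusion(0.01, 0.05, 0.01, 0.05))   # more phantom braking, fewer misses
print(and_fusion(0.01, 0.05, 0.01, 0.05))  # fewer phantom brakes, more misses
```

Real fusion stacks are far more sophisticated than this, and sensor errors are rarely independent, but the underlying tension between the two failure modes doesn't go away.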
Okay how about we make cars that can fully self-drive, and then remove hardware crutches afterwards. Some blind people can ride a bike safely (with echolocation), but I'd strongly recommend you use eyes too if you have them.
Humans have millions of years of evolution tuning our visual system and we still have tons of crashes due to poor performance during low light situations, high contrast, etc. Twilight is recognized as a time of increased accidents, we’ve had decades of mandating better types and positions of lights, roads are expensively designed with various paints and reflectors, etc.
Not a computer vision engineer here - do we just not know how the human brain recognizes objects or does it do so in a way that's hard to reproduce in silicon?
We don't know how it works, and the parts that we do know are very hard to implement.
In particular, our vision is very very tied to our model of the world, and is clearly an evolutionarily derived trait, not something we learn from scratch as children. We are basically born with a model that can augment the raw sensory input with basic physics (object permanence, but also we have an intuition for things like "heavy = large momentum even at slow speeds", and for "what goes up will come down", some basic optics of how objects cast shadows, and others). In addition, we learn to recognize objects and we know their actual sizes from past experience, so we typically estimate distance this way far better than relying on parallax. "Small image of car" is automatically "car far away", even if seen with one eye, because we know what size a car is.
In fact, much of our brilliant vision starts going awry immediately in an artificial setting with misleadingly sized objects.
We don't have much idea of what "our model of the world" is. Probably quite little of it is genetically encoded. E.g. object permanence has to be learned (takes around 7 months).
What you describe is largely the classical "cognitivist" model, which is more or less dead in contemporary thought. We don't even have a very good account of depth estimation, apart from "a bunch of things seem to affect it".
Object permanence is not "learned"; it develops at around 7 months, the same way you don't "learn" to reach puberty: your body is pre-built to start puberty only at a certain age. Edit: I should note that object permanence in particular is a more contentious topic. Other parts of our understanding of the world are more readily assigned to innate functions, though.
Note that many mammals have object permanence, and for some it develops right after birth. The fact that human development is so slow is the only reason some of these things look like actual learning. Another more clear example is our ability to walk/crawl - in humans it seems to be learned, but most other mammals are able to do it within minutes of being born, making it quite obvious it is in fact a genetically coded trait.
If babies don't see objects, they don't "develop" object permanence. Yeah, here we can summon the "use it or lose it" cop-out, but then the nativist account becomes unfalsifiable.
Being able to crawl within minutes (and even somewhat run within hours, as horses do) is mostly due to anatomy and "reflex pathways". E.g. a cat can walk without a brain [1], as famously can a headless chicken.
Human babies too have these kinds of reflexes but they are "abandoned" well before the baby learns to walk.
While I agree it makes it very hard to test some of these hypotheses, "use it or lose it" is very clearly true of at least some bodily and cognitive capacities. So it's not unfair to posit it could apply to these as well.
If you think quadruped walking is only a bunch of reflexes and human walking isn't, that still doesn't completely negate my point. Primate infants also display significantly more agility than human babies right after birth, and "learn" to walk significantly faster.
Here [0] is a study for example that finds that the age of walking is 94% predictable for any mammalian species that walks on the ground by a particular ratio between the mass of the infant brain and the adult brain (that is, an infant animal essentially learns to walk when its brain reaches a certain percentage of its final brain size).
Besides this type of animal comparison, another signal we can use to distinguish between built-in capacities and things we learn from experience is to see whether a capacity can be developed later in life. For example, if you lack visual stimuli in the early part of brain development, even if you get it later, you will never develop vision (we know this from studies on kittens...).
Cognitive scientist here: we have no idea how the brain works in general. In locomotive tasks like driving the brain probably does not even try to recognize objects most of the time.
The problems involved being hard doesn't make it okay to willfully sell an unfinishable product.
I know for a fact that making a flying carpet is hard. That doesn't mean you should be grateful for life for the chance to pay me $1M for a non-flying flying carpet that sets itself on fire.
It’s actually not hard, like at all. Every other manufacturer solved it. The problem is that Elon dropped all sane forms of “vision” like lidar because he wants to squeeze out profits at the cost of consumer safety (not to mention Tesla kills motorcyclists hand over fist)
He’s hell bent on machine vision which can’t even identify a cyclist for fucks sake! And somehow this is legal to deploy on public roads!
Are they just doing object detection, or are they also doing 3D scene reconstruction (visual SLAM)? My bet is on the latter, and the SLAM reconstruction should show them that there's something dangerously close to the car. Of course, vSLAM is notoriously finicky in corner cases, lighting conditions, etc., so it might just not have noticed the plane at all.
Given Elon's obsession with "it should see like a human does" i.e. only relying on vision and nothing else, I don't think FSD will ever be feasible and safe tbqh. This would be trivial to avoid if the car had an actual sense of distance other than comparing pixels.
Teslas drive under things on autopilot. The first autopilot fatality in 2016 decapitated Joshua Brown, and a couple later ones were due to what appears to be the same failure. They never even tried to fix it.
Driving very slowly into a parked giant blue airplane and not even stopping?
I think we can be pretty sure that almost all human drivers won't make that mistake, and those who do should not be driving on their own, just like Tesla's software.
It's using smart summon, where a person, who can see the car because it's short range, uses their phone to move the car. They can stop it using their phone at any time by removing their finger from the touchscreen, yet weirdly didn't.
It's not autopilot, and it's not FSD, and it maxes out at 6 mph.
Also they've never updated the code for it, so it's not representative of the current tech in FSD
Makes no difference. Not crashing into big stationary things is the absolute bottom floor of what is required for anything 'car moves without driver behind wheel'.
From a technical perspective: either they use the same stack for environment awareness for summon and driving (shared capabilities) and it failed, or they don't, which is even worse. Either way, a fail.
Interestingly, if you read the new Musk biography from Isaacson, Musk’s approach of committing to an overaggressive deadline, rallying all his workers to stay up until 3am on the shop floor and generally yelling at them like a military commander actually worked remarkably well again and again at Tesla and SpaceX. They accomplished business goals that no one thought was possible, continually pulling rabbits out of a hat.
However, the evidence suggests that this approach does not work for a research problem like developing self-driving cars. It seems that you can’t just yell at software engineers and make them stay awake all night until they make an AI research breakthrough. The difference with SpaceX/Tesla was that the hurdles were fundamentally manufacturing and efficiency problems (after all, we made it to the moon in 1969 - and in both cases Musk had a basically working prototype very early on), not problems of science research.
I think a bigger factor is the job market. Where's an aspiring rocket scientist going to go if they don't get a job for SpaceX? Or an electric car designer (back when Tesla started at least)?
There just weren't really many other places for people who wanted to work on those cool problems to go to work. So Musk had the leverage to push them really hard.
But software? Not so much! Programmers can work pretty much anywhere. Even for a cool project like driverless cars there's quite a lot of companies trying to do it. And how many of the driverless car programmers specifically want to work on driverless cars? They'd probably be cool with other AI projects, of which there are a gazillion.
> Where's an aspiring rocket scientist going to go if they don't get a job for SpaceX? Or an electric car designer (back when Tesla started at least)?
NASA, Boeing (or any defense contractor, really), Planet for satellite engineering, Google/Waymo was doing self-driving around that time. I contracted for some random small self-driving startup around 2010.
His method of deletion and challenging requirements works well for manufacturing. It would appear to me that the same doesn’t work for software nearly as well.
This feels like hearing all the talk about AI replacing developers and then actually building stuff with the best models available and seeing them fail to generate JSON function calls according to a simple JSON schema and just make shit up about what the parameters should look like.
I think the hype is driven by a combination of naivety, religious thinking and hope.
Personally my cynical perspective is I work with people that it could viably replace now and that's sad, not because it's good tech but the people are so fucking awful.
Except I’ve never heard anyone talking about AI replacing developers and selling a “full self-development” package for thousands of dollars at the same time.
>Except I’ve never heard anyone talking about AI replacing developers and selling a “full self-development” package for thousands of dollars at the same time.
Some developers are complaining that we will be replaced, that somehow GPT-4 can write good code. Similarly, artists, writers, etc. complain they will be replaced.
I can't see an AI in the next 20 years that you ask it
"Build me GTA6" and it will create it for you, code, art, text, voices.
From my experience, GPT is good for exploring, asking questions, and sometimes getting inspiration from the answers. If the code you asked for was already written by others and it was trained on it, then it will approximate a good solution, but it still makes mistakes.
Current AI is just interpolating a response based on what it saw in training; if the real answer is not close enough to points in the training data, you get garbage. Add bad training data on top and it generates suboptimal code most of the time. For example, it always generates JavaScript code using old syntax and old DOM APIs.
There is no logic to them. I asked an AI "is X legal in JavaScript" and it answered no, then created an answer showing that it was legal and working, and rambled about why it's not a good idea to use it. And I have many examples where the long answers they give contradict themselves.
Sorry for the long comment. I see these devs complaining daily; earlier I read comments where some web devs were sure they will have no job in three years.
See, people think moving AI capabilities from entry-level dev to manager+ is about intelligence, but it's not. It's about many things, but mostly personal responsibility. Outsource that to machines at your own risk.
Yes, I’ve been hearing outlandish opinions daily way before AI was a thing of public interest. They only become very problematic and potentially fraudulent when a company charges money for it.
Btw illustrators are getting replaced. Not all of them of course, but the demand is shrinking. Similarly for certain writing roles.
>Btw illustrators are getting replaced. Not all of them of course, but the demand is shrinking. Similarly for certain writing roles.
Do we have any real data on this? Because it would make more sense that with the new tools you can be more productive as an artist and sell more. From the little I know about AI art is that at this moment it is still in need of a Photoshop user to do final touches, plus a few steps of in-painting and other AI manual steps.
I assume some super low quality output is created but I don't know of anyone who pays directly or via patreon on just a lazy single prompt AI generation(without other more involved work).
You can find some data through a casual search. [1] and similar articles cite a 70% drop in Chinese video game illustrator jobs in just one year, partly due to adoption of AI tools, as confirmed by both studios and illustrators.
Speaking from personal experience, I know a guy whose firm has sharply reduced headcount in art/design and another guy whose firm used to hire illustrators but has since stopped because AI-generated illustrations (not central to their product) are considered good enough.
> with the new tools you can be more productive as an artist and sell more
I don't see a demand explosion outside of enthusiast circles (largely pr0n, to be honest). Sure you can make more, but you have no one to sell to, and unit price won't be the same when there's very cheap competition.
Your link has no real data: it says one company fired 5 artists "probably" because of AI, and gives one example of a freelance artist who has less work. The rest of the article is about artists' fears.
But I agree. In the past, my village had only 2 people with photo cameras; when you needed a photo you would talk to them or go to the city. Now we all have cameras, but the really good photographers still have jobs, and the photo shops still have work printing the photos people take with their cameras, or even taking photos from time to time.
Since GPT appeared, our JIRA tickets have not changed; the AI does not have magic powers to debug code and fix stuff, plan new features, optimize slow code, or do customer support.
I've had a Tesla Model S for quite some time now. It cannot reliably adjust the high beam / low beam nor the windshield wipers. Those problems seem a bit easier than full self driving.
Model Y here. My automatic wipers are mostly okay except sometimes they seem to get obsessed with a tiny speck of bird crap they can't remove. High beam/low beam is a complete cluster f*ck: It gets fooled by highly-reflective road signs. So I adjust headlights manually, just like I have done over my entire driving career.
The phantom braking is the biggest deal: The car slams on the brakes in the middle of major highways in good weather just because it's a hot day and the camera saw a mirage and the car thinks it's about to drive off a cliff. This means the cruise control ("autopilot") on the car isn't even as reliable as "dumb" cruise control on a 20-year-old car, so I rarely use it.
I absolutely love the full self-driving in my Model Y. I use it every day and still chuckle to myself all the time about how crazy it is that I'm in a car that can autonomously drive all over town.
I understand all the complaints in the comments section about Elon over promising and wishing FSD were perfect by now, but I can't believe there aren't more positive comments too.
When I show my friends the FSD and acceleration of my Tesla, Santa mode, "emissions" mode, etc, they are always amazed, as am I. I'm living in the future and loving it.
> When I show my friends the FSD and acceleration of my Tesla, Santa mode, "emissions" mode, etc, they are always amazed, as am I. I'm living in the future and loving it.
Yes, when you show off something you spent a decent amount on and think is amazing and that you love, your friends will also indicate a positive reaction to it. Welcome to social dynamics 101.
People are incapable of nuance when they’ve been given enough propaganda to dislike someone. You see it all the time with political figures and people saying Trump is literally worse than Hitler
Funniest thing is, he said at one point that it was 12 months away, including automatic charging. Does such a thing work anywhere, or has it even been attempted? I haven't seen a car that doesn't need a physical plug and someone getting out to connect it.
They had a marketing demo of a robotic charger a few years ago, but nothing came of it. They also had the demo of full battery swaps before that, but they'd have to change their entire architecture for that to work.
The battery swap demo was essentially a subsidy scam:
> In 2013, California revised its Zero Emissions Vehicle credit system so that long-range ZEVs that were able to charge 80% in under 15 minutes earned almost twice as many credits as those that didn’t. Overnight, Tesla’s 85 kWh Model S went from earning four credits per vehicle to seven. Moreover, to earn this dramatic increase in credits, Tesla needed to prove to CARB that such rapid refueling events were possible. By demonstrating battery swap on just one vehicle, Tesla nearly doubled the ZEV credits earned by its entire fleet even if none of them actually used the swap capability.
IIRC, they did deploy that battery swap thing to some charger locations (hearsay, didn't confirm) and charged as much per swap as gassing up an SUV, plus you had to return later to get your battery back. They gave up because nobody wanted to do that.
A bunch of these claims are understandable given the tech and hype available at the time they were made, but there wasn't enough commitment to follow through. Not that it would have helped much.
I know this is a joke, but I'd not be surprised if they settled for pinging another person at the charging station and asking them to plug/unplug the car. Either because hype makes them want to be nice, or if they get some $FINANCIAL_VALUE for doing it.
* he's lying, i.e. he's knowingly deceiving his audience to create hype and raise the stock price which he financially benefits from
* he's bullshitting, i.e. he's embellishing what he knows is in development with the hopes that engineering will overdeliver and meet his promises or something similarly impressive to distract from the failed expectations
* he's truthful, i.e. he's genuinely and continuously overestimating what engineering can realistically deliver and keeps being wrong over the span of a decade
I'm not sure which scenario is most flattering. Keep in mind that a lot of his claims seem to be unplanned and he doesn't believe in PR departments so most of his statements seem to be his personal judgement based on what has been reported to him. I'm inclined to believe he doesn't care much about the truth value of his claims as long as they feel like the right thing to say to him ("hashtag no filter" as the kids used to say).
The first two are just lying. The third is only not lying if you assume that he's extremely stupid; otherwise he would have learned by now that he's bad at estimating these sorts of things, and adjusted.
I think the "he's truthful" case is the most damning because it calls his expertise in any of these fields into question. Looks an awful lot like Dunning Kruger.
He has demonstrably lied about his education, lies about his family background and keeps getting caught making statements that sound plausible if you know a little about the field but nonsensical if you are an actual domain expert (e.g. nearly everything he said about software development re Twitter).
I think he's a good generalist because he (like many autistics) finds it easy to drill into domains he finds interesting and keep up with discussions at a fairly advanced level. But when you do that, it's important to be aware that having binged expert sources on a subject for a month straight doesn't make you an expert in the field, the same way a conversational crash course over a weekend doesn't make you a fluent speaker of a new language, even if you can order a pizza and get a general idea of what people say to you.
I've actually experienced this misattribution of expertise myself: just because I've read several Wikipedia articles and binged some video essays on a given subject I've had people assume I'm an expert when I tell them about it - except I know that my "expertise" would collapse as soon as I were to talk to an actual expert. I think this is why a lot of people suffer from imposter syndrome: because they're literally faking it by having a good generalist knowledge without realising most people in the field also lack sophisticated underpinnings.
> I haven't seen a car that does not need a physical plug and someone getting out to do it.
AFAIK, the Formula E safety car charges without a physical plug or someone getting out to do it (which is important for safety because it has to be able to leave its charging station quickly, without the extra delay of someone getting out to unplug it); it uses a wireless charging pad on the ground.
Back in 2014, it was “90% of your miles on auto” by the next year. In 2015 it was “a month away” from highways and simple roads, and 3 years away from full autonomy. In 2016 it was fully autonomous end-to-end cross-country (LA to Times Square) trip “by the end of the year”
What I would find really interesting: If someone drove the same route with every version of FSD and counted the interventions.
That would give a nice chart over time from which one could see the progress.
Is there anything online similar to this?
Another way to look at it would be to look at the revenue generated via FSD. Buyers probably do some due diligence on what FSD offers, so if the number of buyers goes up, that could be an indicator that FSD is getting better. But a quick Google search did not bring up much data about this published by Tesla. In their earnings report there is a chart showing cumulative miles driven with FSD; not sure if one can draw conclusions from that. It seems very smoothed out and only goes back to 2021.
A YouTuber named DirtyTesla has done a series of drives over the years throughout his hometown where he's tracked the intervention and disengagement rates per mile. He often shows those numbers at the end of his videos.
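A back-of-the-envelope version of that intervention chart could be scripted from per-drive logs. A minimal sketch (all drive data below is invented for illustration):

```python
from collections import defaultdict

# Hypothetical drive logs: (fsd_version, miles_driven, interventions)
drives = [
    ("v10.2", 12.0, 5),
    ("v10.2", 11.5, 4),
    ("v11.4", 12.0, 2),
    ("v11.4", 11.8, 3),
]

totals = defaultdict(lambda: [0.0, 0])  # version -> [total miles, total interventions]
for version, miles, interventions in drives:
    totals[version][0] += miles
    totals[version][1] += interventions

for version, (miles, count) in sorted(totals.items()):
    print(f"{version}: {count / miles:.2f} interventions per mile")
```

Plotting that rate per version over time would give exactly the progress curve the parent is asking for, assuming the route and conditions are held roughly constant.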
I think that is not true. They can just take data from the route, annotate it and put it into their training set. That is how you would do route tuning.
Sort of off-topic but I believe Openpilot allows this. You can record every route you take, copy it off to your PC and run it in a simulator. You can then tweak the model for this particular issue.
That would depend on traffic: it could work perfectly for you 100 times, and the 101st time, when a truck with some weird painting on it is on the road, it will just hit it, or the shadows are different because of the day of the year.
Tesla keeps the true data secret and cherry pick the results they want.
It would be interesting to data-mine that data, e.g.:
- are there roads where humans always (or most of the time) take over
- how many near accidents are caused by FSD/autopilot
- are there time of day variations
- are there weather variations
- are there issues after software updates
- what are the good roads and bad roads see if we can identify what is special about them.
There's probably more the public would like to know if the data were made public, and IMO it makes no sense to keep this public-safety data secret.
Yeah. I've had my Model 3 for 5 years now (great car) and bought the FSD package. It's nowhere to be found and I want my damn money back for the FSD package. It'll never come to fruition during my ownership of the car, so I just pissed that money away. This wasn't some kickstarter, it's something they actively sold just like other features or packages.
I would like to see Mr Musk removed from decision-making positions at Tesla and SpaceX. He is making both companies, which have much to be proud of, look bad.
If he wants to kill Twitter, that's fine by me. I prefer a more decentralized internet anyway.
I have a Tesla Model 3, but I never believed in FSD vaporware so I didn't buy it, and even if they had delivered, I would not entrust it with my life. That doesn't make Tesla's misrepresentations bordering on fraud (on the wrong side of the border) OK, but some basic critical thinking would not have been amiss here.
Why can't Waymo's autonomous taxi service in San Francisco be considered FSD? There's the geographical limitation but it's operational and autonomous to an advanced degree.
This seems to be an incredibly popular belief despite the fact that there's no evidence for it and it wouldn't make any sense.
Waymo's remote support teams can reach into the driver's model of the world to fix things, for example maybe it mistook this stray traffic cone for actual roadworks - but they can't actually drive the car.
If Waymo needs a human to drive the car, they send a human to physically sit in it and drive the car. They have a whole bunch of people for that role. If you were in a Waymo that drove somewhere and there wasn't anybody in the driving seat, that's not because some 10 year old was using a PS4 controller to steer it from Bangalore, that's the Waymo driver software.
What happens when you lose connection mid-maneuver, or there's congestion on the network causing high latency?
The vehicle would have to be smart enough to execute all the tactical parts of driving anyway. Adding remote piloting just adds more failure cases and complexity.
I'm not talking about remote driving all the time - I'm talking about recovering from weird edge-cases, like the car gets confused by a traffic cone on a flat bed truck and stops and needs to be reversed 3 feet and pointed in a slightly different direction to get it running again.
Tesla's system is generalised and can operate on any roads. It also uses vision, as opposed to a crazy number of sensors including LIDAR. Tesla FSD is a much more promising and human-like AI.
Humans are pretty fallible though. And human eyes are way better in dynamic lighting conditions than cameras. Our dynamic range is huge in comparison.
Elon's objections to lidar are not because it's a bad idea but because of the cost, especially because he's already sold cars with "full self driving" that don't have LiDAR. So he's painted himself into a corner.
Even if it can be done with vision only, lidar will add an extra data point for cases where the object recognition is in doubt. And the price will come down.
Humans have great sensor fusion, too, with stereo microphones in most models, as well as situational awareness.
One should think of the car cameras as a fixed periscope view. Try driving with that, what the computer sees is more like driving a tank with monitors. Not the best.
It’s only reasonable to give the cars some super human senses to compensate, like LIDAR etc.
It's like when Apple funded lots of usability studies on why one mouse button was optimal for most users.
They held out longer than expected, some 20 years or so, but common sense prevailed. Even if they had to make a design where the buttons weren't clearly visible first.
Yeah I think part of that was Steve Jobs' extreme aversion to being wrong. And he of course shared a lot of personality traits with Musk (narcissism in particular) so that makes sense.
Even today Macs don't really have 2 mouse buttons, they just call a double-finger click a "gesture" even though it is simply a right-click.
I'm not sure: "I drove with the radio blaring so I could not hear the siren" is something to brag about. More sensors == better. And the fact that LiDAR costs have come down two orders of magnitude since Tesla made this decision suggests that some of the economic drivers for such a decision have changed.
Part of the reason I've owned FSD since 2017 and yet spent exactly zero minutes in my car in the expressway, in light traffic, in good weather, in the middle of the day reading a book is because of the lack of an entry on the posted timeline saying something like: "Tesla states it will accept liability for any crashes that occur while FSD was engaged."
After all these years, it's fair to say: "In the 7 years that Tesla has accepted money from customers for FSD, Tesla still requires human supervision in all cases."
So far they have proven they are asymptotically approaching something that drives badly and is in danger of hitting things.
True FSD will only come from LIDAR + vision. Relying only on vision feels exceedingly dangerous. I recall a recent (last 2 years?) case where a guy was essentially decapitated by a flat-bed semi because his car couldn’t recognize the semi’s bed due to it being right at camera level.
I think the Pure vision stuff is nifty, but “nifty” is the exact dead-last parameter when trying to transport my family safely from point A to point B.
So how does LIDAR read a sign (not just the shape of the sign, but the content)? How does it read the colour of a traffic light? How does it read brake light status or indicators on the car in front?
Oh, it can't?
So you need cameras too?
So you have to build a 3-dimensional sensor fusion system for LIDAR to work?
Wouldn't that fusion system be more complex, less performant, and more fallible than just choosing a single model (vision/LIDAR) and optimising around that?
10 million years of animal evolution makes a good case for stereoscopic vision as a sensing system for navigating the world.
LIDAR is a crutch for low-maturity software systems.
Well, bad news: Tesla Vision is not stereoscopic, since they use monocular cameras [1] except in the front, where they use three side-by-side cameras with different focal lengths (60m, 150m, 250m) and less separation than human eyes, which cannot be used for stereoscopic depth calculation. So, even if we assumed that for some reason we would want to hamstring ourselves by restricting ourselves to evolution's solutions and the metaphorical equivalent of flapping airplanes, Tesla was apparently still too stupid to realize that animal evolution resulted in two eyes, not one.
It is actually shocking that anybody pushes the "good enough for evolution" narrative when Tesla has completely and utterly failed to do even that right. This is even ignoring all the other purely mechanical inadequacies of cameras relative to eyes, such as resolution, dynamic range, dynamic attention-based focal lengths, and being mounted on a mobile swivel to allow for parallax calculations, let alone the other neurological elements that are not fully understood but are integral to human-level perception. But no, they could not even get the two-eyes-instead-of-one part right.
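The baseline point can be made concrete with the standard stereo depth-error approximation. A sketch (the baseline, focal length, and disparity-noise numbers here are illustrative assumptions, not Tesla's actual camera specs):

```python
def stereo_depth_error(z_m, baseline_m, focal_px, disparity_err_px=0.5):
    # Standard approximation: dz ~ z^2 * dd / (f * B).
    # A smaller baseline B turns the same pixel-level disparity noise dd
    # into a much larger depth error dz at range.
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Human-like ~6.5 cm baseline vs a ~2 cm camera cluster (assumed values),
# with an assumed focal length of 1000 px, evaluated at 50 m range:
for baseline in (0.065, 0.02):
    err = stereo_depth_error(50.0, baseline, 1000.0)
    print(f"baseline {baseline * 100:.1f} cm: depth error ~ +/-{err:.1f} m at 50 m")
```

Even the wider baseline is marginal at highway distances, which is part of why depth from narrowly spaced cameras has to lean on learned cues rather than stereo geometry.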
Not all animals have stereoscopic vision. Many birds and fish see entirely separate images from their two eyes. Animals with injuries to one eye still have functional vision, even if worse. The "brain" part is the one that really helps with animal vision. And it includes evolved generational models, sensor fusion, memories of past experiences and other inputs beyond just two "cameras".
Overall, the complexity of modeling the world the way an animal does seems much much bigger than a few different sensors.
There is no reason to believe that we can achieve brain-like success at 3D vision with any current approach. As such, having multimodal sensors that animals don't have access to seems like a much more promising approach, and far closer to a sure bet. Basically, leverage technology that far surpasses animal senses to make up for the much dumber processing powers.
If Tesla had been limiting its ideas to theoretical research, or even applied research, I'd be all for vision only as a valid research avenue. But they are putting this thing on the streets where I walk, and they are charging people thousands of dollars with claims that it works today, and that it will do wonders tomorrow. That is simply not acceptable for a green field research concept that probably has decades left in front of it.
Some people think that because we can build cameras we can build vision. Most of vision happens in the brain, and we're nowhere close to being able match the human brain's visual processing system (despite what some AI proponents would have you believe). That includes the ability to build 3D models of the environment based on two slightly different images, and then (most important) to infer what those models mean w.r.t. learned experience and common sense, and thus whether it's safe to run over them (e.g. a crumpled newspaper in the road) or not (a soccer ball rolling out into the road, which will often be followed by a child).
Assuming positive intent here, I'm not sure how you reason that a fusion system might be less performant than a single model?
My whole point is that a single system is not sufficiently safe to do what we're trying to do. The point of FSD isn't just to navigate without hitting stuff. The point is to do it with ~99.999% accuracy, ~99.999% of the time while flying down the road at 80 MPH.
Humans are _terrible_ at this, just check the deaths due to automobile accidents each year. We have some pretty amazing stereoscopic vision. But I don't like trusting my safety to another human, and I sure won't trust it to a machine whose vision isn't as good as mine.
For me to trust a machine, it needs to be an order of magnitude better than myself.
>Wouldn't that fusion system be more complex, less performant, and more fallible that just choosing a single model (vision/LIDAR) and optimising around that?
No, the whole point about sensor fusion is that it can be greater than the sum of its parts.
If, and that is a big IF, they manage to achieve it. My best guess is that Tesla won't get anywhere close to L5 before the end of this decade.
Your best chance today is to pack more sensors.
Edit: I still believe that the systems which assist the driver today are useful and can make driving safer and easier. I don't want to downplay what they have achieved but they are a looong way from L5.
I believe the CEO of Waymo, the company that is actually succeeding in self-driving, has said several times now that L5 is not even on the horizon, for anyone.
The problem isn't just training, it's sensors. There are simply no sensors available today that work in all the different weather conditions that L5 requires with the required precision.
And even if they manage to pull it off, they ought to have data showing that they, in particular, can remove the lidar sensor without losing fidelity, simply because we know that lidar is what gives reliable range information.
I don’t know the specifics and have no dog in the fight, but the argument is flawed here. Human eyesight is in some ways better than small-form-factor cameras, e.g. anti-glare, dynamic range, light sensitivity at night. Also, our vision is quite bad at some stuff; for instance, distance estimates have a high error rate. You can do a lot better than that.
Tech is not biology, so in order for a computer to not do something stupid, it generally needs different sensors. Some sensors (depth gauge when scuba diving, altimeters for planes, or even gps) are obvious examples of when tech outperforms humans and is safety critical. Just because humans can drive doesn’t mean computers should replicate our “control loop”.
> Also, our vision is quite bad at some stuff, for instance distance estimates have a high error rate. You can do a lot better than that.
Our distance estimates are quite good at human scales, especially when compared to vision based approaches. We don't have great precision (we will never say something like "it's 19.7m away") but we very rarely grossly misestimate (think something is 10m away when it's actually 200m away or 10cm away).
Of course, compared to a LIDAR or the advanced techniques used in physics, we can't hold the tiniest candle (say, if we were to compare to LIGO's distance measurements).
My system is even better, it requires no additional hardware whatsoever, nor any single line of code. And it's not only for cars but any vehicle in any environment. It doesn't work for now, but that only means it's in the same state as the Tesla system, meaning very promising.
This must be a troll comment. Humans don't rely on vision when driving because it's superior, humans rely on vision because that's all we have. Actually that isn't even true as we also use our ears and with assisted driving also the car's various sensors (including LIDAR). The only justification for sticking to vision alone is nerd cred. We don't want human-like AI, we want safe AI. Right now Tesla's AI is making mistakes no human would ever make so it's not even getting that part right.
Not using a "crazy number of sensors" is not something you can brag about outside a kind of "sell me this pen" sales pitch.
The “Tesla Vision update” FAQ entry was last updated on October 5th, 2023, and as far as the Wayback Machine surfaces, the only changes were removing park assist from the list of disabled features, a few wording changes, and removing some of the more optimistic wording (e.g. “in the near future” around mentions of feature parity).
So it seems like they’re still all in on pure vision. I don’t think the Cybertruck has lidar either; the only hits I get talk about a lidar-equipped prototype in early 2023.
They've never used LIDAR. Tesla has always been all-in on a vision-based system, with Elon stating that LIDAR is a "crutch" and a "fool's errand".
> Andrej Karpathy, Senior Director of AI, took the stage and explained that the world is built for visual recognition. Lidar systems, he said, have a hard time deciphering between a plastic bag and a rubber tire. Large scale neural network training and visual recognition are necessary for Level 4 and Level 5 autonomy, he said.
> “In that sense, lidar is really a shortcut,” Karpathy said. “It sidesteps the fundamental problems, the important problem of visual recognition, that is necessary for autonomy. It gives a false sense of progress, and is ultimately a crutch. It does give, like, really fast demos!”
There was a suspicious news article about a Chinese supplier that specializes in non-rotating LIDAR investing in a Tesla parts factory in Latin America, which was then suspiciously corrected.
Pure vision currently. The last we heard about LiDAR was the issue of combining LiDAR data with vision in the NN. Which technology should take priority in the NN? For example, LiDAR might recognise the shape of a sign, but vision will know whether it's a stop sign or not.
I'd be super interested to see if someone has successfully combined RGB data with LIDAR data.
This is basic university robotics. Sensor fusion ! There are a whole host of techniques for dynamically updating confidence between two or more sensors estimating the same values. Kalman filter being the standard approach (which for reference was used in the apollo missions 50 years ago - developed a lot since then)
And yes, it is commonly applied with vision models. There are a host of combined rgb/lidar, structured light, depth camera and more setups in the labs students are working on at my local university, and have been for at least 6 years
For reference, I, a computer science undergrad, learned this kind of sensor fusion theory in an elective class that was just an excuse for a soon to retire professor to play with lego robots
You can know more about sensor fusion than Elon does by reading a literal blog post.
The NN shouldn't have to "choose" one or the other. It's a classic "Not even wrong" question!
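Right — the textbook answer is that you never "choose": you weight each sensor by its uncertainty. A minimal sketch of the static, one-dimensional core of the Kalman update mentioned above (all numbers invented for illustration; a real tracker would add a motion model and run predict/update per frame):

```python
# Minimum-variance fusion of two independent estimates of the same quantity —
# the static 1-D heart of a Kalman update. All numbers are illustrative.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Combine two noisy measurements; returns (fused estimate, fused variance)."""
    k = var1 / (var1 + var2)        # Kalman gain: how much to trust measurement 2
    estimate = z1 + k * (z2 - z1)   # inverse-variance-weighted average
    variance = (1.0 - k) * var1     # fused uncertainty <= min(var1, var2)
    return estimate, variance

# Camera thinks the car ahead is 52 m away but is noisy (sigma ~ 5 m);
# lidar says 49.5 m with sigma ~ 0.5 m. The fusion leans heavily on lidar:
est, var = fuse(52.0, 5.0**2, 49.5, 0.5**2)
print(round(est, 2), round(var, 3))   # ~49.52, ~0.248
```

Note the fused variance comes out below either sensor's alone, which is the "greater than the sum of its parts" claim made mathematically precise.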
The most interesting part of their pivot to vision is how none of the disaster scenarios people were so sure of seem to have come true. Meanwhile FSD is better than ever and there are millions of miles per month being logged.
> By the end of next year, said Musk, Tesla would demonstrate a fully autonomous drive from, say, a home in L.A., to Times Square ... without the need for a single touch, including the charging.
Ironically, just being able to leave your car in a parking space to charge when ready, then re-park, would be so useful.
Apparently FSD is the nuclear fusion power of the car world. Except with a much more aggressive "timeline" (always only 2 years away instead of 15 years).
Is this meant to be convincing? If you slowly scroll down and read every tweet you will see a massive repetition of simplified statements and over-promising. "X is _only_ 2 years away" is not very interesting when it's mentioned several times a year... for 5 years.
I think that's the point: to show how much, for how long, and how consistently Tesla has been lying about this. These are not offhand comments made once or twice. Every single time FSD is talked about in public, Tesla releases a statement like this through Musk.
Tesla bagholders here on HN immediately flag any negative Tesla story that they see in order to censor it, so if you want your link to stick around it has to at least give the illusion of positivity.
I think Elon inspired a lot of great things. Undeniably he brought back the Space craze, he gave the automotive industry enough of a push to finally invest in something different from ICE tech or their pipe dream of hydrogen cars. But things like FSD really ruin all those achievements for me. I'd still be impressed if he said: "this is hard, we're working on it, we do incremental roll-outs and once we're there, we're there".
It's not updated because it "doesn't work" - no one who "believes in Elon" actually cares that he is bullshitting. Remember when Trump failed to build a wall, and somehow no one noticed? A nation with 11 nuclear powered aircraft carriers failed to build a simple wall, a wall that was the hottest issue in a successful presidential campaign.
According to U.S. Customs and Border Protection data, the Trump administration built 458 miles of border wall system, which consists of steel bollards, sensors, lights, cameras and parallel roads. However, only 52 miles of new primary border barriers were built where there were none before.
The problem with FSD is that it's one of those problems where it takes 5% of the effort to get 95% of the way there and the other 95% of the effort to get the remaining 5%.
Problems like that are exactly the kind that lead to perpetual just around the corner predictions like this. We're almost there! We're almost there!
FSD is almost a solved problem under ordinary predictable driving conditions, but when you're driving there's a long tail of unusual circumstances that you run into that blow the limits of these models. Humans are able to improvise because we have a huge mental model of the world we inhabit and our location within it, which FSD systems do not. They're just reacting to stimuli.
Tesla went and made it even harder on themselves by abandoning LIDAR. One way to compensate for inferior intelligence is with superhuman senses, but they insisted that merely human-like senses like vision were "enough."
Rather than doing what everyone else is doing (carping endlessly about Elon Musk), why not look at the actual progress of FSD. It's extremely impressive at this point:
And also in fairness (though it is funny, and yes more drawn out than the early estimates, but every big project is delayed and over budget, everyone knows that), it does start talking more about regulators, and that Tesla's pretty happy with the tech itself, which strikes me as fair enough? Regulation isn't really in their control, unless there was something specific the regulators objected to or needed to see and Tesla was refusing, or hadn't managed, to provide.
I have heard that the previous driving automation on Teslas, while not as advanced, was generally solid for what it did, and might still be a plus for those vehicles.
I just searched a bit and it seems Waymo is considered level 4. You just can't buy the cars as consumers.
And from a consumer point of view, Tesla's system is more capable than the one from Mercedes. But legally you can take your hands and eyes off the road in the Mercedes.
It’s always been weird to me that it seems like it’s assumed that the world really wants FSD. It clearly has its benefits in freight transport via truck, as the value add there is clear. When it comes to personal transportation, however, where’s the value add? It seems like adoption would be fueled by personal preference, and in my experience people tend to want to retain control where there is an option to do so. It’s possible this is a solution searching for a problem.
Sure some people love driving, but (and even if we say those people always do) some just don't care, the car is a tool, an A->B transportation mechanism, and the time they spend operating the machinery is time and mental taxation they could spend on something else: sleeping, reading, watching, conversing/thinking/anything to a greater extent than they feel they can safely while driving, etc.
And we don't really know how liability will end up working when regulators start approving them (assuming that happens), but I would love it if it meant any potential issue or collision was never my fault or my problem - no concern that even though I didn't believe it was my fault I had to convince someone else, find a witness, deal with all that.
Just like taking a taxi, except it's there waiting for you whenever you're not using it. It's a chauffeured car for the masses, really. And that is evidently something that people do want.
I have to disagree here, as someone with two elderly parents, one of whom has stopped driving due to eye sight issues, having FSD would be hugely impactful.
There are many, many people out there who don’t drive or can’t drive for various reasons and FSD would enable them to be independent, get jobs, see friends and family. Not everyone lives in areas with decent public transport and with my parents, they live in a hilly area where walking to use public transport is at most very difficult and at worst impossible.
Another example, I know a fairly young man who had started having seizures in his 30s and was disqualified from having a driving license, FSD would change his life. What about all the people who didn’t get a driving license for one reason or another?
For me personally, while I can drive fine, I would much prefer to study or read on long car journeys, and would love a car which is more like a moving living room, so I can sit and work at my laptop while the car takes me to my destination.
I assure you there is a massive, massive market for the first company that really cracks this problem.
FSD may not be the earth shattering feature that every consumer is looking for (although it would be a big differentiator). However, for taxis, trucks, any transport where an expensive human can be removed, it would be paradigm shifting. You know, like the EM Drive would have been if it had worked, or the room temperature super conductor, or cold fusion.
It gets airplay because it’s easy to imagine all the cool things that would be possible if it were real. Like FTL travel. Musk said it himself: Tesla is worth practically nothing without FSD.
https://electrek.co/2022/06/15/elon-musk-solving-self-drivin.... Tesla is just another car company if these hopes and dreams aren’t true.
People pay to be driven places all the time, it's a huge global industry! I'm not sure how the value add is eluding you because you must know this. People rich enough almost always pay for a driver so they can conduct business or relax while going from one place to another, it simply buys you time to do more productive things than driving. Giving that option to everyone on the planet who can afford a car, you don't see the value add in that?
Comments about "see the progress!" are insane. FSD was sold as a feature for 7-15k, and in some vehicles it doesn't do anything of value. You can't grift people out of money with false promises and then deliver 15 years late without pissing people off, it's that simple. Speaking only of progress does a disservice to those killed/injured while FSD was engaged.
When you make promises and don't keep them while taking people's money on the back of those promises, that's fraud. When you do it for almost 10 years, that's almost 10 years of fraud.
Elon musk is a rich kid grifter who has never started his own company. Maybe spacex counts but it only survives off government grants. He never founded anything. He never created anything. He is an antisemitic pathological liar who is a net negative on humanity.
So what? It's a complex problem, and they're making progress. Why is it always only Tesla that has to be held accountable for all promises? Other companies often promise great things (with deadlines!), don't keep their promises, but for some strange reason no one cares.
Holy smokes! I wanted to build the exact same website about COVID (the reason I decided not to is the "downvotes" — people don't like seeing something uncomfortable).
I think those paying thousands of dollars many years ago don't view it that way. My guess is they don't care about years of incremental steps, when they purchased something promised much earlier.
Unless of course you are selling cars with a Full Self Driving Package for $12k that promises
"Your car will be able to drive itself almost anywhere with minimal driver intervention and will continuously improve:
- All functionality of Basic Autopilot and Enhanced Autopilot
- Autosteer on city streets
- Traffic Light and Stop Sign Control"
Is it though? I mean they replaced the computers, all the sensors and the entire strategy at least 2 times each and haven't managed to deliver anything viable.
Haven’t managed to deliver anything viable? There are millions of miles of FSD driving being logged every month. I use it all the time and it’s pretty damned good today. Recalls for Tesla just amount to OTA updates that regulators have asked for and they’ve agreed to implement.
Flapping-wing aircraft were being perfected for a long time before being abandoned, so not all progress actually leads to a result. Selling hopes as a "sure thing in 2 years" justifies snarky responses, at the least.
I can see why what's going on here is problematic and, sure, why not poke holes into, well, everything, that seems fun. I do feel however that, as a society, we are unclear about our asks.
Making claims about the future necessitates uncertainty. Would we rather only know about factuals? i.e. do not tell me about GPT-4 until the very fucking moment there is an API endpoint I can hit?
Having FSD 12 months from now is/was probably _THE_ selling point for Teslas. I'd feel somewhat deceived if the CEO kept repeating that for over 9 years without delivering.
> Having FSD 12 months from now is/was probably _THE_ selling point for Teslas.
What are your data points? Is the lesson here, that we are good, as long as we make sure that bold claims are not easily falsifiable?
Again, I am not sure what we are looking for. If we are interested in punishing people for making bold claims about the future, do we understand that this necessarily means we are okay with learning less about the future? It's a have-your-cake-and-eat-it moment. I understand that we can meet somewhere in the middle, but as a rule of thumb, it's obviously a trade-off. What do we want more of?