
When our Model 3 got access to FSD, my 6-year-old desperately wanted to try it out. I figured a low-traffic Sunday morning was the perfect time, so we headed out to grab Mom a coffee downtown.

The car almost caused an accident 3 separate times.

The first was when it almost drove into oncoming traffic at an intersection where the road curves north on both sides.

The second was when it failed to understand a fork on the right side of the road, swerved back and forth twice, and then almost drove straight into a road sign.

The third, downtown, was when a brick crosswalk confused it on a left turn, causing it to literally drive up onto the corner of the intersection. Thank God there weren't pedestrians there.

When we upgraded to a Y, I turned down FSD. I don't imagine I'll be paying for it ever again.




Honestly, I feel this way about pretty much all "driver assist" systems in the wild today.

That is, I fully understand that to get to Levels 3, 4, and 5, you need to pass through Levels 1 and 2 in autonomous driving. But the issue is that these systems feel like they're at the "slightly drunk teenager" stage, with you as the driver having to ensure they don't mess up too badly. Honestly, unless I can, say, read a book or do something else, these systems (I'm specifically referring to advanced cruise control/lane-keeping systems) right now just require me to pay MORE attention, and they stress me out more.

Fully understand we need folks to help train these systems, but, at least for me, they currently make the driving experience worse.


> Honestly, I feel this way about pretty much all "driver assist" systems in the wild today.

My five year old Honda has a very limited driver assist system (radar cruise control + lane centering), which (in my opinion) is very good at what it's trying to do. It has no pretensions of being a "self-driving" system, but it very successfully manages to reduce some of the stress of driving and make my driving better. I think the key point is it only automates the fine adjustments and provides alerts, but is very careful to never allow the driver to rely on it too much.


I feel the same way about my Subaru's EyeSight system. It helps me stay in the lane, and annoys me if I get distracted and cross a line. It slows down the car automatically when it detects an obstacle ahead. It speeds up and slows down automatically to maintain a mostly-steady speed when I set cruise control.

Until autonomous vehicles reach "read a book or fall asleep" levels, this is all I'm interested in. No thank you to any dumb "autopilot" system that I can't actually trust, but tries to control my wheel.


I’ve had the same experience with EyeSight. I would also add that it brakes very naturally, much better than other similar systems I have tried.


I've also driven a Subaru with EyeSight. I think it's pretty good too, and kinda follows the same philosophy as my Honda, but with different tradeoffs. The Subaru doesn't lane center, so it's less relaxing to drive on the highway because you have to pay more attention to fine tuning your lane position. On the other hand, my Honda deliberately won't automatically come to a stop to avoid a collision (it will only slow down in the last few seconds), so it's more annoying in stop-and-go traffic.


Exactly! I would also add emergency braking at low speeds, so that a pedestrian stepping in front of the car out of nowhere can be spared. There is no need for real self-driving; it wouldn’t really change anything, and we are not even close to that.


For the Tesla driver assistance specifically (non-FSD), it's more advanced and reasonably reliable. I find it helps a great deal to reduce fatigue on long drives. It is nearly flawless on highways, and watching to make sure the car is safe is much less fatiguing than constantly centering in the lane and monitoring the accelerator. Checking that the car is at the right speed takes less mental energy than constantly controlling the power to hold that speed.


Given the potential consequences of a mistake, it feels like there's still a pretty big difference between "nearly flawless" and flawless.

Speed control I'm fine with; it's obviously mature tech that has been around for decades. Maybe it's the way I drive, but I find lane assist a liability, especially on curves. More than once the car swerved unexpectedly one way or the other going around a bend. After the 2nd time that happened, I shut it off.


I suspect the difference in experience might be attributable to differences in the environment. I went cross-country in a Model Y and noticed that it did not handle one lane turning into two lanes with any grace, but I also drove across entire states where that didn’t come up. It wouldn’t surprise me if some experiences were regional to an extent.


Lane assist isn't supposed to entirely keep you in the lane on its own; it's just supposed to tug you in the right direction as a hint in case you weren't paying perfect attention. It's usually not supposed to steer the car entirely on its own.


You can’t really concentrate for long periods, and as has been shown many times, people are bad at expecting rare events. Reasonably reliable is not enough.


FSD marketing has always seemed sketchy to me, though I like Open Pilot. Keeping you in your lane and at a safe distance from the next car is a much smaller scope for an L2 system, and it works well for highway driving.


I personally really enjoy the ADAS systems on my cars even though they're not at the read-a-book-or-take-a-nap level of automation. It's really just cruise control on steroids. Do you see value in regular cruise control systems, even though they're not 100% automated?

When I'm in stop-and-go traffic, I really like not having to constantly go back and forth between the brake and gas pedal; I can just pay attention to what's happening in and around the lane and let the car keep a good follow distance.

I've gone over 100 miles at a time without having to touch the gas or brake pedal, even through stop and go traffic.


I could see the value in that. I am a different type of driver however. I have never used cruise control and don't even know how to engage it on my car. It is true I don't drive much but when I do drive I like to be much more involved in it. I love the sound of my engine at 7000+ rpm shifts and the feel of my hydraulic steering rack.

My only disdain for driving stems from sharing the road with other drivers who are completely stupid and seem to lack any critical thinking skills (this shows up especially in traffic jams, and in navigating them on local roads with obstacles also present, like a parked truck loading).


Don't get me wrong, I enjoy the thrill of driving and racing. Some spirited driving on open highways is way different from stop and go traffic and long family road trips. In the end I still have a throttle lock on my 1050cc motorcycle though, it comes in handy when going on a long road trip.


> Honestly, I feel this way about pretty much all "driver assist" systems in the wild today.

I've found adaptive cruise control to be a simple, noticeable improvement.


Pointing out the obvious, it's extremely negligent for Tesla to have this feature available. It's not even close to ready for general use.


And who’s going to stop them? Certainly not the US government.

Enjoy beta testing FSD as an unwilling pedestrian.


If I see a Tesla in the wild, I shoot its tires out before it has a chance to strike. That's the American way.

(I assume this is why the Cybertruck features bulletproof glass, in case my self-firing gun (now in beta) misidentifies the tires.)


>Cybertruck

I still read this as Cyberduck every. damn. time.

>in case my self-firing gun (now in beta) misidentifies the tires.)

are you mounting that self-firing gun to a car with FSD? that would make for a great hands free experience.


The profit motive is so clear in this case, and criminal. Collecting telemetry for free so you get to improve your ever-elusive model, paid for by your “customers”, and potentially with their (and “collateral”) lives. It’s horrendous.


It's extremely negligent for the driver to let the car drive onto a sidewalk. As the driver you are solely responsible for everything the car does.


How long ago was that?

A friend of mine bought a Model Y and got access to FSD about six months ago. I've spent a fair bit of time in this car in the Bay Area and... I'm impressed? It doesn't drive like a professional, but it feels safe.

My friend says it's been improving even in just the time he's had it. So maybe it used to be a lot worse?

I'm not in the market for a new car but the next time I am, FSD is going to be a big factor. Even if it's just as good as it is right now.


If it's anything like the original Autopilot was: yes.

I had one of the first Model S vehicles that was autopilot-capable. Early enough that autopilot itself was added later. The early versions were... intense. Not just intense in the "if you take your hands off the wheel it might try to kill you" sense, but also in the "even using this as adaptive cruise with lane-keeping, sometimes it will suddenly try to veer off the road and murder you" sense. Even when working "as intended" it would routinely dip into exit ramps if you were in the right lane. As a result, I didn't use it all that often, but over not a lot of time it improved pretty dramatically.

At this point my wife and I are on newer Model 3s, and our experience with autopilot (not FSD) as a frequently-used driver assist has been... uneventful? Given the _original_ autopilot experience, though, neither of us is particularly eager to try out FSD yet. Autopilot strikes a good balance for us in terms of being a useful second pair of eyes and hands while unambiguously requiring us to drive the damn car.


Personally, I wouldn't trust FSD with my life until it has been battle-tested for at least 10 years.


I don't have a Tesla but I do follow FSD development and a few people who test updates on YouTube. It really seems like your experience with FSD will vary depending on where you live. I see a guy testing FSD updates in downtown Seattle and neighboring residential streets, where it seems very impressive driving in traffic on one-way and narrow streets with cars parked on both sides. But then I also see it do some very bizarre moves in other cities. I don't know how Tesla collects and uses self-driving data, but it seems like it's different from location to location.


> I see a guy testing FSD updates in downtown Seattle

In downtown Seattle doesn't it drive into the monorail columns?


Beauty of frequent updates is no two trips have to be the same. Bonus points for AI models no single human can understand.


I had two Teslas (3, Y) and used FSD in Seattle.

It did not work and based on my experience, I am extremely skeptical it will ever work in places like Seattle. The car could not even navigate circling Greenlake without a disconnect.

Sold them at the height of the used car market because, in part, I estimate FSD is a liability for the brand and will, eventually, hurt their resale value. That and Elon Musk. Don't need to support that and won't in the future.


I had a similar experience when I first got FSD.

What I realized was just that I was being scared of the computer. It wasn't about to drive into traffic, and it wasn't about to crash into anything.

What was happening was that I was, rightly, being overly cautious with the beta program, and taking control as soon as there was really anything other than driving in a straight line on an empty road.

Over time, it became a dependable, usable copilot for me. I would massively miss FSD if it was gone.


Why give such a company even more of your money though?


Because—other than FSD—the car is fantastic. I can't imagine driving anything else.


Serious question for you: the FSD near-catastrophically failed twice before you got to town. Why did you continue to use FSD as you entered a more populated area?


Because it was Sunday morning in Old Town Scottsdale and there weren't any pedestrians around.


Maybe I'm old fashioned but I've got no intention of buying / enabling FSD on my 2020 Model X. I just want a car that I can drive from point A to point B. I'm not even that risk averse, but enabling a beta feature on the road with a bunch of other drivers who barely know what they're doing is a stretch.


This was my experience with the beta up until about 3 months ago. Since then it’s remarkably improved.


I’m glad that you are having a good experience, but FSD reliably and repeatedly tries to stop at green lights on a 50 mph road near me. I’m just happy - sort of - that I didn’t pay for it, and that I’m only able to try it because my new Model S Plaid has been in the shop for service four times in the last three months…

(The loaners that I have had three of those four times have all had FSD.)

I am reasonably satisfied with Enhanced Autopilot on highways, though it’s unclear to me what, exactly, is ‘enhanced’ about it. And Navigate on Autopilot seems to add nothing of value.


> FSD reliably and repeatedly tries to stop at green lights on a 50 mph road near me

I believe that's a "feature". I had it on a loaner too and it wants you to give it permission to cross the intersection by pressing down on the drive stalk or briefly press the accelerator pedal.

https://www.tesla.com/ownersmanual/modely/en_eu/GUID-A701F7D...


Weird. I’m able to navigate door to door from northeast Seattle to southwest Seattle, which, if you’ve ever been here, is one of the most difficult and screwed-up bits of infrastructure on earth. FSD used to stop at green lights very early on in the beta and you had to tap the accelerator to make it not stop, but at some point they dropped that. My FSD doesn’t just do stoplights right; it handles yields, merging stop signs, traffic circles, and other complex decision-making road signs.

It’s not flawless, especially on Seattle roads where knowing where the road actually is requires human-level reasoning most of the time, so it drifts into parts of the road that I know from experience are for parking but the car assumes are just a widened road. Or there are a lot of roads with gravel margins that are so subtly different from the damaged road surface that it thinks they’re part of the road and tries to drift over before it realizes they’re not. These issues are nothing like the unmitigated disaster it used to be, though, and if they can keep up the pace they’ve had over the last 6 months for another 12 months, it’ll be remarkably useful.


That sounds really terrible. I honestly thought that Tesla FSD was better than that. It just reminds me of the opinions I had (and still have) from 15+ years ago, when I'd get into discussions on random forums about self-driving cars. I mean, sure, perhaps 100 years down the road, when everything driving is required to be fully autonomous with interlinked wireless communication, maybe that would work.

But that is not what we have right now. Right now every driver on the road is exposed to a potential instant tragedy that is unavoidable. I mean, what is a self driving car going to do if a large semi drops a huge log or freight right in front of you? You have one second to impact. You can either attempt to hold your line and drive over the object, potentially killing/injuring the passengers. Or you can swerve left into oncoming traffic. Or you can swerve right into a busy two way frontage road.

No matter which choice is taken, I guarantee there will be a lawsuit. Perhaps one way forward would be something similar to what medical companies did in the 1980s with the creation of the "vaccine courts". Maybe we need some kind of new "autonomous driving" court staffed with experts who would decide which cases have merit. That would at least better shield the parent companies and allow them to continue innovating instead of worrying about potential litigation.


It’s fine in 90% of scenarios. The other 10% are scary. Mine tried to do a right turn at a red light last week but didn’t properly detect the cross traffic. That type of scary. It would be nice if cars talked to each other so it didn’t have to rely on vision alone.


Tesla Autopilot significantly decreases fatigue on long trips. I have on numerous occasions (I think 10+) driven 2,400 mile trips. You absolutely have to stay aware during the use of Autopilot, but it really helps decrease cognitive load in many instances during a cross country trip.


So to be clear, your car almost drove itself into oncoming traffic with your child in it and your first instinct was "Hey, maybe give it two more tries"?


Comments like these disincentivize people from sharing honestly. I have full confidence that OP was telling the truth when saying it was a relatively safe, low-traffic environment, and I fully imagine they were paying attention and ready to intervene when FSD made mistakes.


> Comments like these disincentivize people from sharing honestly.

As an automotive engineer: Agreed. Realistic experience reports are useful, and that includes also e.g. what drivers are willing to attempt and how they risk-rate.


This is true, but they also disincentivize random amateurs from conducting uncontrolled safety experiments with children in the car. I think blame-free retrospectives and lack of judgement are important tools, but I also think they are best used in a context of careful systemic improvement.


Do you think they really do disincentivize that behavior (serious question, not flippant)? If a very close friend questioned my decision, certainly; but if an internet stranger with an intentionally snarky tone did it, I'm not sure it would.


There are two groups of interest to me here. Secondarily, the original poster. Primarily, the hundreds or thousands of people who see the interaction.

I don't know what the original poster would do, but hopefully they will be more inclined to think twice next time they consider performing the behavior in question. If they do think twice and the situation is unsafe, I certainly hope they won't put their kid at more risk just to spite an internet stranger.

But my primary interest is in the many more people who might be inclined to imitate the poster's behavior when they get the chance. Having the behavior contextualized like this can only help encourage them to think about the risks.


> I fully imagine they were paying attention and ready to intervene when FSD made mistakes

Is that enough? The software could decide to accelerate and switch lanes at such a fast rate that the driver wouldn't have time to intervene. It hasn't happened yet to my knowledge. But it may happen.


People sharing anecdotes isn't productive either. Someone talking about "almost crashes" is a terribly subjective thing. We have thousands of hours of youtube video of FSD. We have some data. And the value add of one commenter's experience is virtually zero.


Teslas have been driven for millions of hours at least, if not billions; thousands of hours of YouTube videos are anecdotes as well, proportionally speaking. What about Tesla releasing complete real data? What are they scared of? Until then, Tesla's claims can't be taken seriously.


Videos on YT suffer from selection bias. Folks having scares are less likely to make the time to publish them, especially if they're fan boys -- the one cohort most likely to publish.

Agree raw data, or even just per 10K mile stats, from Tesla should be table stakes. Why aren't they required to report such things by law?


I strongly disagree. It's interesting to hear a thoughtful recounting of a HNers experience.

Tesla releasing the actual raw data would be much more helpful, but of course they are refusing to do that, most likely because it would betray how overhyped, unreliable and dangerous the software is.


What do you want them to release? What does "raw data" mean to you? Does Waymo release this raw data?


Even just disengagements per 10K miles would be a reasonable start. Anonymized dumps of all automated driving would be ideal.


At the very least, anecdotes are a place to start thinking about what data to collect. And whatever you think of it, it's well established that people bring anecdotes as a way to motivate discussion. Maybe it's wrong without a proper statistical study, but it's what people do and have done since forever.


Yes, because it's not something you just try out on a whim. I personally paid $10,000 for the option, and it's nonrefundable. You also have a human desire to help out, be part of something bigger, do your part in the advancement of science & engineering. So yes, you overcome adversity, you keep on trying, and you teach those values to your kids.

Unfortunately, it increasingly looks like the experiment has failed. But not because of us. We're pissed because Musk isn't doing his part of the deal. He's not pulling his weight. At this point, he's becoming more and more of an anchor around the neck of technological progress. That didn't need to happen. It didn't need to be this way. So yeah, we're pissed off, not just because of the money we paid, but also because we feel like we were defrauded by his failure to pull his own weight in the deal.

I wouldn't be surprised to see him make his escape to the Bahamas before this decade is up.


> You also have a human desire to help out, be part of something bigger, do your part in the advancement of science & engineering. So yes, you overcome adversity, you keep on trying, and you teach those values to your kids.

Why not restrict your beta testing to a closed course? A race track? Some land you bought out in the middle of the desert? Have the kids make some stop signs, plow your new roads, etc.

No one else on the road is consenting to a technology that could rapidly accelerate and course correct into their vehicle at some undetermined time.


Sacrificing your kid to make sure Musk’s project gets off the ground sure is devotion to science and engineering, I'll give you that.


Unfortunately, paying that money also provides an incentive to lie about how advanced the functionality is and to hide unflattering data.

Funding responsible self-driving research seems like a great use of money to me, but testing a flawed system in the wild does not.


I'm just confused that she'd buy another Tesla after that experience.


Not sure where you got "she" from, but regardless, I bought another Tesla because the car is fantastic. The FSD is not.


Well, she had already paid to install the at-home charger for Teslas.


> I wouldn't be surprised to see him make his escape to the Bahamas before this decade is up.

I hate to say it but starting to get this vibe too, particularly when I watch interviews from him which are 1-3 years old. They don't age well.

They're full of promises like "only do good things for your fellow man, be useful, etc.", and that ethos seems to be lost now.


>Yes, because it's not something you just try out on a whim. I personally paid $10,000 for the option, and it's nonrefundable.

For those that didn't buy it, you can 'rent' it for a monthly subscription fee.


> You also have a human desire to help out, be part of something bigger, do your part in the advancement of science & engineering.

The fact that you associate those ideals with purchasing a consumer product made by a public company is intentional.


Well, you don’t need to go far to find others defending the risk to others as well: “it wouldn’t have gone up the curb if there was a person there”. It's interesting to see how cavalier people normally are about making that judgement for others, especially for their offspring.


Also consumer bias, or "post-purchase rationalization" - i.e. humans overly attribute positivity to goods/services from brands they buy from.

Even when it's as bad as throwing you in the wrong lane of traffic.


Anyone can make more kids. Not everyone can do more science!


We've experiments to run / there is research to be done / on the people who are still ali~ive!


-- Cave Johnson


New life motto, thank you


I know this is a joke, but not everyone can make more kids.


Every car "nearly drove itself into oncoming traffic" if the driver doesn't takeover. Its not like he climbed into the backseat and said, "Tesla, takeover". No, he let the car help with the driving, but maintained control of the vehicle to ensure the safety of the child.


> Every car "nearly drives itself into oncoming traffic" if the driver doesn't take over.

Those other cars don't claim to drive themselves.


I empathize that people are frustrated with the marketing claims of this particular feature, which are clearly bunk, but the point of the post you're replying to is not to defend it, it's to defend that the other commenter is not being negligent and putting their child in danger...


Maybe not his kid, assuming he has more faith in Tesla's crash-worthiness than its FSD.

But he'd definitely be risking other road users and pedestrians if that car keeps trying to run up sidewalks and cause other havoc on the roadway.


If you are fully attentive, you can correct course once you realize a mistake is being made. My Honda Civic has adaptive cruise control and lane keep, and I run into issues with it reasonably often. I'm not complaining: after all, it's not marketed as much more than glorified cruise control. And either way, turning it on is not a risk to me. With any of these features, in my opinion, the main risk is complacency. If they work well enough most of the time, you can definitely get a false sense of security. Of course, based on some of the experiences people have had with FSD, I'm surprised anyone is able to get a false sense of security at all with it, but I assume mileages vary.

Now if the car failed in such a way that you couldn't disengage FSD, THAT would be a serious, catastrophic problem... but as far as I know, that's not really an issue here. (If that were an issue, obviously, it would be cause to have the whole damn car recalled.)

All of this to say, I think we can leave the guy alone for sharing his anecdote. When properly attentive, it shouldn't be particularly dangerous.


FSD can (sometimes very wrongly) act in milliseconds. Even attentive humans have to move their arms and the wheel, needing hundreds of milliseconds. The same humans who may have become numb to always paying attention, especially if it works well enough most of the time.


That makes absolutely no difference in the context of the comments above.

If a product being overhyped prevents you from using it after you paid for it, you're gonna have to live with no computer, no phone, no internet, no electricity, no cars, no bikes.


If FSD decides to do max acceleration and turn the wheels, can you stop it in time? Zero to 60 is under 3 seconds, right?
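A rough back-of-envelope check (assuming a 0-60 mph time of about 3 seconds, a ~0.75 s human reaction time, and a 50 mph starting speed; all three numbers are assumptions for illustration, not Tesla specs):

  60 mph ≈ 26.8 m/s, so max acceleration ≈ 26.8 / 3 ≈ 8.9 m/s²
  speed gained in 0.75 s ≈ 8.9 × 0.75 ≈ 6.7 m/s (about 15 mph)
  distance covered from 50 mph (22.4 m/s) ≈ 22.4 × 0.75 + 0.5 × 8.9 × 0.75² ≈ 19 m

That's roughly how far the car gets before the driver has even started to counter-steer or brake.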


If your brake lines burst, will you be able to coast safely to a stop?

Every piece of technology has a variety of failure modes, some more likely than others. FSD is not likely to take aim at a pedestrian and floor it, just like your brakes aren't likely to explode, and neither of you are irresponsible for making those assumptions


The difference is brake lines are inspected and understood by humans. Failure to reasonably maintain them is illegal.

No single human fully understands these AI models, and they can change daily. Yet Tesla is putting them in control of multi-ton vehicles and leaving all liability on humans with worse reaction time and little to no understanding of how it works or tends to go wrong.


What if it decides to short the batteries and set the car on fire? Can you stop it from doing that?

I think you are making up scenarios that no reasonable person would assume. There is a difference between 'getting confused at an intersection and turning wrong' and 'actively trying to kill the occupant by accelerating at max speed while turning the wheel'.


Battery and battery management is more straightforward. BYD has mastered it.

FSD is a black box. Even Tesla appears to be unable to prevent regressions. One of the most sophisticated companies in the world can't prevent regressions in safety-critical software it frequently updates. Let that sink in.


So, third-hand story, about 20 years ago, from an acquaintance who heard it from a Police officer who dealt with the following situation:

This police officer was responding to an RV which had run off the road, and was speaking with the driver. The driver, a bit shook up from the experience explained that he was driving down the highway, turned on the cruise control, and got up to make himself a sandwich...


“3rd-hand story”, from a friend of a friend…uh, huh. Unless I’m missing a heaping bucket of “ironically”, you could have just said “there’s an old urban legend…” instead of passing off an old joke as something true.


Well, up until a moment ago, I legitimately believed it to be true - even though I was a few steps removed from it. Live and learn I guess.


I have been duped like this before too! Believing a story that just doesn't ring right when you tell it to somebody else years later. Teaching / communicating corrects a lot of errors.


I remember my grandma telling that story maybe 30-40 years ago. Gotta be an urban legend. Yup: https://www.snopes.com/fact-check/cruise-uncontrol/


I've no doubt this has probably happened in real life at some point but it's practically a fable by now.

I think the Simpsons did it at least 30 years ago.


I saw a "trying out my parking assist" (I think it was with a Hyundai) video the other day where the guy didn't realize that the function only assists with steering and not the pedals. So he backed right into a car.


This is literally the story of a Berke Breathed Bloom County (or whatever the follow-on was) comic strip.


Even bought another car from the company.


The car is great. FSD is not.


The Full Self-Driving package acts like the grammar errors in spam emails: it self-selects for people with no understanding of technology. I fully expect that the wilder shenanigans in Tesla's future will be targeted at them directly.


Come on, this feels overly aggressive. Circumstances are nuanced, we don't know to what degree of danger any of these situations posed to the child, only the parent does. Judge not lest ye, and such.


Ah yes, surely the meaning of that bible verse is, "Don't ask mildly difficult questions based in any sort of moral stance." Because we all know that book is famously opposed to performing any sort of moral analysis.


There's asking questions about the circumstances to better understand before casting judgement, and then there's sarcastically implying that OP is a bad parent for endangering their child without asking any actual questions about what happened.


That was not sarcasm, which generally requires words used in contradiction to their normal meaning. E.g., if somebody makes a dumb mistake, the response "nice work, Einstein" would be sarcastic. This was at worst mocking, but it wasn't ever hyperbolic, given that what was written was a literal description of what the guy did.

Regardless, you haven't answered the point about the quote. "Judge not lest ye be judged" does not mean we have to empty-headedly refrain from any sort of moral criticism. In context, it's about hypocrisy, reminding us to apply our standards to ourselves as stringently as we do others. I think it's only appropriate here if tsigo somehow indicated he would happily endanger his own children, which I don't see any sign of.


Semantics that ultimately don't change the crux of my point, even if I disagree with some of them, but thank you for clarifying.


Beta testing the guidance system for a 4500 lb steel slug in a pedestrian environment is one thing.

Deciding that you want to put your family into that steel slug for the very first test seems to me to be an entirely different level of poor decision making.


We'd used Autopilot (non-FSD) for a year at that point. I was used to mistakes and knew how to disengage quickly, before they could become problems.

I was expecting FSD to be bad, but I sure wasn't expecting it to be that bad.

Maybe without disengaging, any of the three incidents could have become a problem, but for someone who knows how Autopilot works it was more comical than dangerous.


No harm, no foul. GP's child learned a valuable lesson which may serve the child well in the decades to come: don't trust self-driving.


I guess the lesson is more for the parent, unless the child ends up with $10k less worth of ice cream.


Pressing a button and then paying careful attention to watch the system perform is VERY different from just engaging the system and trusting it to do its thing.

I think the software is a half-baked gimmick, but come on, the "look guys, I care about children too" variety of in-group signaling, with a side of back-seat parenting, adds less than nothing to the discussion of the subject at hand.

And INB4 someone intentionally misinterprets me as defending the quality of Tesla's product, I'm not.


I assume that you have snarky comments to spare for the company who legally put this option in his hands as well, is that right?


[flagged]


To be fair, you did say that it literally drove up onto the corner of the intersection and "thank god there were no pedestrians there", which does not make it sound like you were in full control at all times, but rather that it was lucky no one was there or they would have been hit.


Presumably, if they had seen pedestrians standing on the corner, they would have intervened long before Tesla's AI could mount the curb. I've never driven a self driving car but I imagine it's a bit like being a driving instructor whose feet hover over the passenger brake. You'll be a lot more vigilant when the danger or consequence is higher.


> nobody was ever actually in danger

not sure you're qualified to make that assertion, simply based on the series of choices you've described yourself making here.


Only on HN could someone be arrogant enough to think their non-experience is more accurate than another's actual experience.

At no point was anyone—myself, my child, or another party—in any danger. Perhaps that would be different for someone who had never used Autopilot and didn't know how to disengage. But after a year driving that car, I'm comfortable in my assertion that the situation was safe.

But, by all means, please continue with the HN pedantry. We wouldn't want this place to change.


Why even buy from that manufacturer then?


[flagged]


~1,500 animals died at Neuralink, the vast majority of them rodents. These are not large numbers.

https://www.reuters.com/article/factcheck-neuralink-fabricat...


> These are not large numbers.

Only when compared to completely different industries, the most commonly cited being education or food.

When compared to its own industry, the numbers are still large.


Its own industry being medical research? Perhaps somewhat more important than eating meat or beauty products?


That article also mentions an ongoing federal investigation into the poor treatment of animals there.


This factcheck looks legitimate. “Over 280 sheep, pigs and monkeys” is unfortunately not very specific though.


It does show that the "3000 monkeys" claim is misinformation since the real number is over an order of magnitude less.


It's a huge number for this sort of research. Huge.


A single cancer drug from idea to FDA approval kills untold numbers of rodents and monkeys. Price we are willing to pay as humans.


Could you please not propagate fake news? The signal-to-noise ratio on the internet is low enough already.


One problem with this experiment is that, I'm guessing, you monitored the car very closely, ready to take over at any moment, and, in fact, did so in a few cases.

But this raises the question, what would have actually happened if you just let the car be? Would it have gotten into accidents? Or maybe it just drives in this alien unfamiliar way but is actually safer?


> Or maybe it just drives in this alien unfamiliar way but is actually safer?

As long as the roads are a mixed environment of autonomous and non-autonomous vehicles, driving in an unfamiliar way is by definition unsafe because other road users can't anticipate what your car is going to do. That's not even mentioning pedestrians or cyclists.


Sounds like great questions to answer in a safe controlled test environment before letting it loose on public roadways.


> Thank God there weren't pedestrians there.

It would've still stopped - the FSD system doesn't override the background tasks that power the AEB systems that stop the car from hitting humans or cars.


> It would've still stopped

We don’t know that. And Tesla cannot claim it.


We also don't know the opposite but that sure as heck won't stop people claiming they know it as a fact and cause the publication of dozens of news articles over nothing.


It is the precautionary principle.


Except that time FSD drove right into a semi truck and killed the guy sleeping in his car...


You mean it would disable FSD when it would see that impact was inevitable so Tesla could claim that FSD had nothing to do with the accident.


> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact,

https://www.tesla.com/VehicleSafetyReport


Would make sense with 30+ seconds depending on the level of the warning


30 seconds is a _very_ long time driving.


Imagine AI puts you into a separated oncoming-traffic lane when visibility is extremely low (a few meters) and then disengages. It might take you quite a while to get out of such a conundrum.


Standard autopilot doesn't do lane changes. Enhanced autopilot does, but requires the driver to acknowledge with the indicator stalk.

I know this was just an example scenario and your point is broader, but I'm struggling to think of another circumstance where autopilot is at fault 30 seconds post disengage, as it's effectively just a fancy adaptive cruise control.


How does this work at intersections? E.g., if you have to turn because straight ahead there's a wrong-way sign for a one-way road.


Autopilot steers with the lane you're in. If it's unsure at the intersection, it will force the driver to retake control.

I've never been at an intersection where straight ahead is a wrong way (wouldn't the traffic be then facing you?), but like any cruise control system, it would likely require driver attention.

Autopilot is not intended to be "enable and go to sleep." It's simply fancy cruise control.


Citation needed.

Unless you are referring to these [0] incidents from 2016, before FSD was released. That was Tesla Autopilot (which comes standard on all Tesla vehicles now).

Also, FSD uses active camera monitoring to make sure the driver is paying attention, so no, you can't sleep in your car while FSD is activated.

[0] https://www.wired.com/story/teslas-latest-autopilot-death-lo...


What is the difference between Autopilot and Enhanced Autopilot?


AEB cannot magically instantaneously stop a car that FSD has put into the wrong lane. If you believe otherwise, please do not drive. Cars are dangerous in the hands of those who do not take risks seriously.


> It would've still stopped

Would it have though?

https://www.youtube.com/watch?v=3mnG_Gbxf_w

At this point I can't imagine buying anything Tesla related. Or anything related to Elon Musk for that matter. He's a grifter who has managed to stand on the shoulders of giants and call himself tall.



