
So to be clear, your car almost drove itself into oncoming traffic with your child in it and your first instinct was "Hey, maybe give it two more tries"?



Comments like these disincentivize people from sharing honestly. I have full confidence that OP was telling the truth when saying it was a relatively safe, low-traffic environment, and I fully imagine they were paying attention and ready to intervene when FSD made mistakes.


> Comments like these disincentivize people from sharing honestly.

As an automotive engineer: Agreed. Realistic experience reports are useful, and that includes, for example, what drivers are willing to attempt and how they assess risk.


This is true, but they also disincentivize random amateurs from conducting uncontrolled safety experiments with children in the car. I think blame-free retrospectives and lack of judgement are important tools, but I also think they are best used in a context of careful systemic improvement.


Do you think they really do disincentivize that behavior (serious question, not flippant)? If a very close friend questioned my decision it certainly would, but if an internet stranger with an intentionally snarky tone did, I'm not sure it would.


There are two groups of interest to me here. Secondarily, the original poster. Primarily, the hundreds or thousands of people who see the interaction.

I don't know what the original poster would do, but hopefully they will be more inclined to think twice next time they consider performing the behavior in question. If they do think twice and the situation is unsafe, I certainly hope they won't put their kid at more risk just to spite an internet stranger.

But my primary interest is in the many more people who might be inclined to imitate the poster's behavior when they get the chance. Having the behavior contextualized like this can only help encourage them to think about the risks.


> I fully imagine they were paying attention and ready to intervene when FSD made mistakes

Is that enough? The software could decide to accelerate and switch lanes at such a fast rate that the driver wouldn't have time to intervene. It hasn't happened yet to my knowledge. But it may happen.


People sharing anecdotes isn't productive either. Someone talking about "almost crashes" is a terribly subjective thing. We have thousands of hours of YouTube video of FSD. We have some data. And the value-add of one commenter's experience is virtually zero.


Teslas have been driven for millions of hours at least, if not billions; proportionally speaking, thousands of hours of YouTube video are anecdotes as well. What about Tesla releasing complete, real data? What are they scared of? Until then, Tesla's claims can't be taken seriously.


Videos on YT suffer from selection bias. Folks having scares are less likely to make the time to publish them, especially if they're fanboys -- the one cohort most likely to publish.

Agreed that raw data, or even just per-10K-mile stats, from Tesla should be table stakes. Why aren't they required to report such things by law?


I strongly disagree. It's interesting to hear a thoughtful recounting of an HNer's experience.

Tesla releasing the actual raw data would be much more helpful, but of course they are refusing to do that, most likely because it would betray how overhyped, unreliable and dangerous the software is.


What do you want them to release? What does "raw data" mean to you? Does Waymo release this raw data?


Even just disengagements per 10K miles would be a reasonable start. Anonymized dumps of all automated driving would be ideal.
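For illustration, here's a minimal sketch (Python, with a made-up log format; the numbers and structure are purely hypothetical) of the kind of stat I mean:

    # Disengagements per 10K miles from hypothetical drive logs.
    # Each drive is a (miles_on_fsd, disengagement_count) pair.
    def disengagements_per_10k_miles(drives):
        total_miles = sum(miles for miles, _ in drives)
        total_events = sum(count for _, count in drives)
        return 0.0 if total_miles == 0 else total_events / total_miles * 10_000

    # Example: 5 disengagements over 142 miles on FSD.
    drives = [(50.0, 1), (80.0, 3), (12.0, 1)]
    print(f"{disengagements_per_10k_miles(drives):.0f} per 10K miles")  # ~352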


At the very least anecdotes are a place to start thinking about what data to collect. And whatever you think of it, it's established in modern debate that people bring anecdotes as a way to motivate discussion. Maybe it's wrong without a proper statistical study, but it's what people do and have done since forever.


Yes, because it's not something you just try out on a whim. I personally paid $10,000 for the option, and it's nonrefundable. You also have a human desire to help out, be part of something bigger, do your part in the advancement of science & engineering. So yes, you overcome adversity, you keep on trying, and you teach those values to your kids.

Unfortunately, it increasingly looks like the experiment has failed. But not because of us. We're pissed because Musk isn't doing his part in the deal. He's not pulling his weight. At this point, he's becoming more and more an anchor around the neck of technological progress. That didn't need to happen. It didn't need to be this way. So yeah, we're pissed off, not just because of the money we paid, but also because we feel like we were defrauded by his failure to pull his own weight in the deal.

I wouldn't be surprised to see him make his escape to the Bahamas before this decade is up.


> You also have a human desire to help out, be part of something bigger, do your part in the advancement of science & engineering. So yes, you overcome adversity, you keep on trying, and you teach those values to your kids.

Why not restrict your beta testing to a closed course? A race track? Some land you bought out in the middle of the desert? Have the kids make some stop signs, plow your new roads, etc.

No one else on the road is consenting to a technology that could rapidly accelerate and course correct into their vehicle at some undetermined time.


Sacrificing your kid to make sure Musk’s project gets off the ground sure is devotion to science and engineering, I'll give you that.


Unfortunately, paying that money also provides an incentive to lie about how advanced the functionality is and to hide unflattering data.

Funding responsible self-driving research seems like a great use of money to me, but testing a flawed system in the wild does not.


I'm just confused that she'd buy another Tesla after that experience.


Not sure where you got "she" from, but regardless, I bought another Tesla because the car is fantastic. The FSD is not.


Well, she already had paid to install the at home charger for Teslas.


> I wouldn't be surprised to see him make his escape to the Bahamas before this decade is up.

I hate to say it, but I'm starting to get this vibe too, particularly when I watch interviews with him that are 1-3 years old. They don't age well.

They're full of promises like "only do good things for your fellow man, be useful, etc.," and that ethos seems to be lost now.


> Yes, because it's not something you just try out on a whim. I personally paid $10,000 for the option, and it's nonrefundable.

For those that didn't buy it, you can 'rent' it for a monthly subscription fee.


> You also have a human desire to help out, be part of something bigger, do your part in the advancement of science & engineering.

The fact that you associate those ideals with purchasing a consumer product made by a public company is intentional.


Well, you don't need to go far to find others defending the risk to others as well: "it wouldn't have gone up the curb if there was a person there". It's interesting to see how cavalier people normally are about making that judgement for others, especially for their offspring.


Also consumer bias, or "post-purchase rationalization", i.e., humans overly attributing positivity to goods/services from brands they buy from.

Even when it's as bad as throwing you in the wrong lane of traffic.


Anyone can make more kids. Not everyone can do more science!


We've experiments to run / there is research to be done / on the people who are still ali~ive!


-- Cave Johnson


New life motto, thank you


I know this is a joke, but not everyone can make more kids.


Every car "nearly drove itself into oncoming traffic" if the driver doesn't takeover. Its not like he climbed into the backseat and said, "Tesla, takeover". No, he let the car help with the driving, but maintained control of the vehicle to ensure the safety of the child.


> Every car "nearly drove itself into oncoming traffic" if the driver doesn't takeover.

Those other cars don't claim to drive themselves.


I understand that people are frustrated with the marketing claims of this particular feature, which are clearly bunk, but the point of the post you're replying to is not to defend them; it's to argue that the other commenter is not being negligent and putting their child in danger...


Maybe not his kid, assuming he has more faith in Tesla's crash-worthiness than its FSD.

But he's definitely risking other road users and pedestrians if that car keeps trying to run up sidewalks and cause other havoc on the roadway.


If you are fully attentive, you can correct course once you realize a mistake is being made. My Honda Civic has adaptive cruise control and lane keep, and I run into issues with it reasonably often. I'm not complaining: after all, it's not marketed as much more than glorified cruise control. And either way, turning it on is not a risk to me. With any of these features, in my opinion, the main risk is complacency. If they work well enough most of the time, you can definitely get a false sense of security. Of course, based on some of the experiences people have had with FSD, I'm surprised anyone is able to get a false sense of security at all with it, but I assume mileages vary.

Now if the car failed in such a way that you couldn't disengage FSD, THAT would be a serious, catastrophic problem... but as far as I know, that's not really an issue here. (If that were an issue, obviously, it would be cause to have the whole damn car recalled.)

All of this to say, I think we can leave the guy alone for sharing his anecdote. When properly attentive, it shouldn't be particularly dangerous.


FSD can (sometimes very wrongly) act in milliseconds. Even attentive humans have to move their arms and the wheel, needing hundreds of milliseconds. The same humans who may have become numb to always paying attention, especially if it works well enough most of the time.
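Back-of-the-envelope (the speed and reaction time here are my assumptions; the order of magnitude is the point):

    # Distance covered while a human response completes.
    # Speed and reaction time are illustrative assumptions.
    MPH_TO_MPS = 0.44704   # 1 mph in metres per second
    speed_mph = 65
    reaction_s = 1.0       # perceive + decide + move hands and wheel

    distance_m = speed_mph * MPH_TO_MPS * reaction_s
    print(f"At {speed_mph} mph, ~{distance_m:.0f} m pass before you correct it.")
    # ~29 m: several car lengths in which a bad FSD action goes uncorrected.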


That makes absolutely no difference in the context of the comments above.

If a product being overhyped prevents you from using it after you paid for it, you're gonna have to live with no computer, no phone, no internet, no electricity, no cars, no bikes.


If FSD decides to do max acceleration and turn the wheels, can you stop it in time? Zero to 60 is under 3 seconds, right?


If your brake lines burst, will you be able to coast safely to a stop?

Every piece of technology has a variety of failure modes, some more likely than others. FSD is not likely to take aim at a pedestrian and floor it, just like your brakes aren't likely to explode, and neither of you is irresponsible for making those assumptions.


The difference is brake lines are inspected and understood by humans. Failure to reasonably maintain them is illegal.

No single human fully understands these AI models, and they can change daily. Yet Tesla is putting them in control of multi-ton vehicles and leaving all liability on humans with worse reaction time and little to no understanding of how it works or tends to go wrong.


What if it decides to short the batteries and set the car on fire? Can you stop it from doing that?

I think you are inventing scenarios that no reasonable person would assume. There is a difference between 'getting confused at an intersection and turning wrong' and 'actively trying to kill the occupant by accelerating at max speed while turning the steering wheel'.


Batteries and battery management are more straightforward. BYD has mastered them.

FSD is a black box. Even Tesla appears to be unable to prevent regressions. One of the most sophisticated companies in the world can't prevent regressions in safety-critical software it frequently updates. Let that sink in.


So, a third-hand story, from about 20 years ago, via an acquaintance who heard it from a police officer who dealt with the following situation:

This police officer was responding to an RV which had run off the road, and was speaking with the driver. The driver, a bit shook up from the experience, explained that he was driving down the highway, turned on the cruise control, and got up to make himself a sandwich...


“3rd-hand story”, from a friend of a friend…uh, huh. Unless I’m missing a heaping bucket of “ironically”, you could have just said “there’s an old urban legend…” instead of passing off an old joke as something true.


Well, up until a moment ago, I legitimately believed it to be true - even though I was a few steps removed from it. Live and learn I guess.


I have been duped like this before too! You believe a story that just doesn't ring right when you tell it to somebody else years later. Teaching / communicating corrects a lot of errors.


I remember my grandma telling that story maybe 30-40 years ago. Gotta be an urban legend. Yup: https://www.snopes.com/fact-check/cruise-uncontrol/


I've no doubt this has probably happened in real life at some point, but it's practically a fable by now.

I think The Simpsons did it at least 30 years ago.


I saw a "trying out my parking assist" (I think it was with a Hyundai) video the other day where the guy didn't realize that the function only assists with steering and not the pedals. So he backed right into a car.


This is literally the story of a Berke Breathed Bloom County (or whatever the follow-on was) comic strip.


Even bought another car from the company.


The car is great. FSD is not.


The Full Self-Driving package acts like the grammar errors in spam emails: it self-selects for people with no understanding of technology. I fully expect that the wilder shenanigans in Tesla's future will be targeted at them directly.


Come on, this feels overly aggressive. Circumstances are nuanced; we don't know how much danger any of these situations posed to the child, only the parent does. Judge not lest ye, and such.


Ah yes, surely the meaning of that bible verse is, "Don't ask mildly difficult questions based in any sort of moral stance." Because we all know that book is famously opposed to performing any sort of moral analysis.


There's asking questions about the circumstances to better understand before casting judgement, and then there's sarcastically implying that OP is a bad parent for endangering their child without asking any actual questions about what happened.


That was not sarcasm, which generally requires words used in contradiction to their normal meaning. E.g., if somebody makes a dumb mistake, the response "nice work, Einstein" would be sarcastic. This was at worst mocking, but it wasn't ever hyperbolic, given that what was written was a literal description of what the guy did.

Regardless, you haven't answered the point about the quote. "Judge not lest ye be judged" does not mean we have to empty-headedly refrain from any sort of moral criticism. In context, it's about hypocrisy, reminding us to apply our standards to ourselves as stringently as we do others. I think it's only appropriate here if tsigo somehow indicated he would happily endanger his own children, which I don't see any sign of.


Semantics that ultimately don't change the crux of my point, even if I disagree with some of them, but thank you for clarifying.


Beta testing the guidance system for a 4500 lb steel slug in a pedestrian environment is one thing.

Deciding that you want to put your family into that steel slug for the very first test seems to me to be an entirely different level of poor decision making.


We'd used Autopilot (non-FSD) for a year at that point. I was used to mistakes and knew how to disengage quickly before they could become problems.

I was expecting FSD to be bad, but I sure wasn't expecting it to be that bad.

Maybe without disengaging, any of the three incidents could have become a problem, but for someone who knows how Autopilot works it was more comical than dangerous.


No harm, no foul. GP's child learned a valuable lesson which may serve the child well in the decades to come: don't trust self-driving.


I guess the lesson is more for the parent, unless the child gets $10k less worth of ice cream.


Pressing a button and then paying careful attention to watch the system perform is VERY different from just engaging the system and trusting it to do its thing.

I think the software is a half-baked gimmick, but come on: the "look guys, I care about children too" variety of in-group signaling, with a side of back-seat parenting, adds less than nothing to the discussion of the subject at hand.

And INB4 someone intentionally misinterprets me as defending the quality of Tesla's product, I'm not.


I assume that you have snarky comments to spare for the company that legally put this option in his hands as well, is that right?


[flagged]


To be fair, you did say that it literally drove up onto the corner of the intersection and "thank god there were no pedestrians there", which does not make it sound like you were in full control at all times, but rather that it was lucky no one was there or they would have been hit.


Presumably, if they had seen pedestrians standing on the corner, they would have intervened long before Tesla's AI could mount the curb. I've never driven a self-driving car, but I imagine it's a bit like being a driving instructor whose feet hover over the passenger brake. You'll be a lot more vigilant when the danger or consequence is higher.


> nobody was ever actually in danger

Not sure you're qualified to make that assertion, simply based on the series of choices you've described yourself making here.


Only on HN could someone be arrogant enough to think their non-experience is more accurate than another's actual experience.

At no point was anyone—myself, my child, or another party—in any danger. Perhaps that would be different for someone who had never used Autopilot and didn't know how to disengage. But after a year driving that car, I'm comfortable in my assertion that the situation was safe.

But, by all means, please continue with the HN pedantry. We wouldn't want this place to change.



