Tesla FSD Beta Lunges Toward Bicyclist (twitter.com/omedyentral)
226 points by codechicago277 on Feb 8, 2022 | 321 comments



What a load of BS. Regulators need to start stuffing Musk's ideas hard in the paint, and stop trying to goaltend when they realize it was shit.

His FSD and AutoPilot naming schemes, tunnel deathtrap, the FSD and AP actual driving errors, horrible Tesla customer service, anti-union and abusive workplaces, the whole pedo-sub-diver debacle, "Funding Secured" tweet, etc.

I did not agree to be in the beta experiment, and neither did that cyclist. This is playing with people's lives and someone is gonna get killed again, soon.

Will no one rid us of this turbulent priest?!


Speaking as a cyclist of many years, maybe the system is just modelling human drivers a little too accurately.


Ha! Yes. This is definitely terrible, but it's some variation of what I survive through every day anyway.

But still, even before this clip, I make sure to stay far away from Teslas if I see them out; they are just scary to me.


As a driver in the beta program, unless there is a lot of clearance I generally take over when passing a bike. Not so much because I don't trust the software, but because I don't trust the bicyclist! Around here none of them signal. They appear to assume that you recognized the 250ms glance under their armpit they made two seconds ago as a formal and legally binding declaration of intent and just swerve across the non-bike lane to make their left while running the stop sign. (Old suburban smallish grid blocks with >50% stop sign coverage, so I guess it makes them grumpy.)

Also, a plea to other drivers… Please! Stay off the bumper of Teslas. The software is generally cautious and gets spooked. It rapidly decelerates a couple miles per hour in preparation for a full-on evasion, which wouldn't be a problem for a driver with a sane following distance, but if you are close enough to angrily look down at my speed-limit-driving head through my glass roof, things could get a little awkward.


> Around here none of them signal

It is not always safe to remove one hand from the handlebar to signal. In fact, in a lot of urban spaces, it's downright dangerous given the bad quality of the streets. Another reason why cyclists don't signal is that they've noticed a car that's too close or too aggressive. In that situation, it's best to have full control of the bike.

The cyclist is not a threat to your life. At most to the exterior of your car. You are a threat to their life. They don't want you to crash into them or the other way around. Give them space and it should be fine.


Can you imagine telling other people to drive differently so your car works?


Even before self-driving cars, you've always been supposed to leave enough room that you won't rear-end the car in front of you if it suddenly brakes hard.


I do that a lot already, in my dumb car.


Do you go around posting every anecdote you have about bad drivers too?


Pretty poor execution by Tesla though. It didn't even honk and yell "ride on the sidewalk, moron!"


Well that’s what these betas are for, iron out the bugs like this. Rumour has it in the next couple of versions the car will look for cyclists in its rear facing cameras so it can open the door right into them.


Not a far-fetched request given they already have ways to replace the horn via the exterior speaker.


I've had humans literally drive at me with the express intent of seriously injuring or killing me.

At least with the Tesla it's just going to kill you through its own stupidity...


Dead is dead, no? Ride safe and keep it in the big ring, lol.


Kind of like running stop signs, I guess.


The predicted route line was straight for the first part of the clip (but still wiggling around), and then it suddenly veered off / disappeared for no discernible reason. It doesn't seem like the FSD planner has any "memory" of what it was trying to do just moments ago, meaning it can just randomly change its mind like it did in the clip and surprise the driver.

But the only time that should happen is if there's an immediate danger that needs to be avoided. If we're going to have this awkward level 2 style self-driving, it should at least be predictable, and preferably inform the driver what it wants to do well ahead of time (e.g. in this case, maybe it wanted to change lanes for an upcoming right turn?).

I would definitely not risk using this thing in a city yet.


> I would definitely not risk using this thing in a city yet.

... or anywhere. The stakes on the highway are just as deadly, just with fewer pedestrians and bicycles. Also, things can go wrong much faster there.


The one situation where full automation makes sense is traffic jams on highways. Simple control requirements, very hard for the algorithm to catastrophically mess up (or require rapid intervention to prevent life threatening accidents), and it’s extremely boring for humans to drive manually.


> very hard for the algorithm to catastrophically mess up (or require rapid intervention to prevent life threatening accidents)

And yet Teslas plow into emergency vehicles stopped on the side of the road at a rate much higher than other cars.


Is this true? Emergency vehicles unfortunately get struck rather frequently by human drivers.


Yeah, but my 2017 Subaru does all but the steering with adaptive cruise control in this situation. Given that a driver needs to pay attention anyway, it's not that hard to just sit back and move the wheel a little when needed.


I would definitely not risk using this thing ever. The world is going insane.


The planned path on the FSD display always jumps all over the place. That's your first clue that it hasn't been developed by qualified roboticists.


It should by now be pretty clear that Tesla, like Uber before it, is taking the Lord Farquaad approach to self-driving: "Some of you may die, but it's a sacrifice I am willing to make."

It does not have to be this way, and these companies and their leadership deserve every bit of reputational damage they will get out of this.


It's quite literally a public beta, just that the other traffic participants didn't get a say in the opt-in. Given the stakes for them, I agree that it raises serious questions.

In the end it's a regulation problem, not a Tesla problem (even though yes, the specific decision to exercise their options is theirs) - do you allow unlicensed drivers to perform this on-road testing of unfinished vehicles or not?

It's not quite a deliberate information hiding campaign as with MCAS (as far as we know). But it is pre-production testing of experimental engineering on safety-critical equipment. Experiments not guaranteed to pan out, and engineering lacking a mature certification process.


The fact that there is such a thing as a 'public beta' for moving two tons of steel through pedestrian areas at lethal speeds seriously makes me question our collective sanity. Imagine if gun vendors did public betas where random police officers just get a beta testing gun.

"It might fire in random directions at arbitrary intervals 5% of the time, but that's just how innovation goes folks. You don't like that, do you hate progress?"


Someone said in another thread earlier today that there is a worry that we are making the path towards FSD too erratic, and that if we screw it up, the next shot might be in generations.

People are arguing, with a straight face, that we are ruining (aspects of) humanity by not letting Tesla be laissez-faire with such things (and let's be real, it's predominantly Tesla who is just in full Cowboy/YOLO mode... other players in this field certainly aren't perfect, but they are at least trying to be conservative and cautious).


It's still a regulatory failure in the end. If the government does not want to lay down exactly what path should be taken in getting to FSD, then we should not complain when we don't agree with the path that the manufacturers choose.

As long as Tesla can demonstrate a rate of serious harm similar to or lower than that of human drivers, why should we stop them? It's up to the government to do its job and outline exactly what price society should bear in the pursuit of this paradigm shift (because let's not kid ourselves, FSD is a complete shift in paradigm for society).


> As long as Tesla can demonstrate a rate of serious harm similar to or lower than that of human drivers, why should we stop them?

Because it can't, or won't. Instead they release statistics that _imply_ that it is better, statistics they have to know are absolutely misleading, because anyone who took high school statistics would see through them.

Everything else, they refuse to release.


I'm not aware of _anyone_ releasing such data in any meaningful way. Not that I have been looking that closely for it (so I certainly could have missed this!).

This is another major regulatory failure. Government should lay out the minimum data that the companies must publish, so we can get a better idea of their performance. (It's still a chicken and egg scenario though. You can't get that kind of data anywhere but on public roads in uncontrolled environments.)


That reminds me of the ED-209 prototype in Robocop: "You have 20 seconds to comply!"


Here's the "20 seconds to comply" scene from Robocop:

https://www.youtube.com/watch?v=Hzlt7IbTp6M


This is a fair point. It's pretty bad that we have learner drivers on the road. Can't think of anything more public beta-y than that. Maybe trainee doctors?


I've had three "learner drivers" in my family and I spent quite a few hours as the "safety person" during their apprenticeship. During all those hours I never touched the wheel, brake, or gas. I gave advice occasionally, but never felt that the "learner driver" was driving unsafely.

I assure you that 15-year-old young adults drive much better than a Tesla "self driving" computer with millions of miles and millions of hours of training.


Oh I'm sure you were convinced, but you never asked me if I consented to you randomly putting underage kids behind the wheel of a two-ton vehicle on a road which I share with you.

You’re a regular Yaroslav Kudrinsky, just with better luck. But I suppose one can go around swinging swords in a crowd, hit no one, and convince oneself of one’s good judgment.


Category error: people have brains.


That's the important one - a Tesla has no concept of the real world. Remember that clip where a Tesla was slowing down because it mistook the full moon in the sky for a traffic light? Likewise, no one would aim at a cyclist, except when they are a homicidal criminal.


> no one would aim at a cyclist, except when they are a homicidal criminal.

The process of not wanting to impact something and attempting to avoid it can occupy so much attention that hitting that thing becomes inevitable. People tend to go where they are looking.

https://www.google.com/search?q=target+fixation


… or under the influence of some substance, or a health event, or voluntary or involuntary distraction, or …


In other words - not living in shared reality.


A seatbelt also doesn’t have a concept of the real world and doesn’t live in a shared reality either.


A seatbelt isn't given control of 2 tons of steel moving at 70 mph.


>Imagine if gun vendors did public betas where random police officers just get a beta testing gun.

This is a thing. They give marketing samples to organizations they want to sell to on a "try 'em out and let us know if you want to buy more" basis.


Marketing samples are not public betas. They are marketing samples of tested product.


If your complaint is that a very conservative manufacturing industry doesn't have 1:1 analogues with software development then I guess it's valid.

Giving organizations stuff you haven't 100% nailed down yet expecting them to come back with stuff they want changed in order to buy is pretty par for the course in the firearms industry. That's as close to a public beta as you're gonna get.


The big difference is that marketing firearms are still safe, or as safe as firearms tend to get. It's a fully functional firearm; they want feedback on whether the ergonomics are good, whether the weight is good, how it handles recoil, etc.

It would be more akin to handing out prototype weapons to police departments, which doesn't happen specifically because they might fire themselves or blow up or fail in some annoying but not spectacular way like failing to feed new rounds.


The key mechanical features always "work", but little stuff comes up, like "gun fails to cycle 500% more often with the particular spec of ammo a particular force uses". A rifle spec'd out for sale to Cambodia is probably going to have a lot of teething issues if you try to sell it to the Swedes unchanged. Are these things "safe"? IDK. But no force is going to put in an order for tens of thousands of guns unless correctable issues like those are proven fixed. Infantry arms are a battle of margins. Everything "works", but a gun that's marginally more finicky to use means the people using it are marginally less effective, which extrapolates pretty directly to "safety".

At the end of the day the analogy isn't great and we're building a castle in a swamp here.


A public "beta" of a self driving car should lean towards failing safe. I would expect it to have sudden braking issues and be overly conservative. The kind of bugs we're seeing where obstacles are blatantly ignored make it feel more like an "alpha" release. But hey, we're not designing airplanes here, just thousand pound chunks of metal flying around roads capable of killing people.


...I am aware of zero other manufacturers brazen enough in terms of blatant disregard for Engineering Ethics to have embarked on this type of testing before Silicon Valley did.

In fact, there were licensure processes for this sort of thing already in place in California iirc, so this is less gray area, more "let's outsource testing to the end user".

Frankly, it's downright disturbing.


> In the end it's a regulation problem, not a Tesla problem

Are you joking? If you’re the one causing the problem, you’re the problem.

No regulation can prevent me from kicking someone in the shins. This isn’t a regulation problem, it’s a me problem.


The <Insert State Here> Code certainly applies penalties for assault and battery, and you can rest assured knowing shin-kicking would happen regularly without it.

The car problem presented here comes in quite a few more shades of gray, and we absolutely need to set acceptable parameters for the technology.


Yes, but my point is that if you’re being a jerk, then you’re the problem, not the person who is not saying, “please don’t be a jerk.”

You don’t need a regulation telling you not to be a jerk — unless of course you are a jerk. And even then, you’d probably still be a jerk, regulation or not.


Except it is: literally none of this is illegal. And it's not clear how it could be illegal: what's the distinction between cruise control and lane keeping and these systems? Very little - nothing you can codify.

It's also very likely that this system can pass a standard driving test: AIs are really good at handling fixed test cases. So any regulation you do pass has a complicated assurance problem: how does it prove the system is safe?

Pretend Tesla just knocked "beta" off the name and said "this is a limited self driving system, Tesla exclusive" - then you wind up in the same bucket with the same "it's the driver's responsibility" problem.

People see this footage and assume Tesla self drive does this all the time - but it wouldn't. It probably passes this scenario in testing 100% of the time: but the issue of course is the test is not quite whatever this circumstance is, and the system has a dramatic failure.

There is absolutely a regulation problem.


There is an extremely bright line.

Can the software automatically make lane changes? Can it initiate turns? Can it decide to make non-emergency stops at lights and signs? Can it accelerate from a stop? What is the expected level of human intervention?


That determines what the system is.

The question I'm posing is, how do you prove it works to an independent authority?

It would be extremely reasonable for us to regulate along these lines, but I'm also absolutely willing to bet that Tesla and every other system would get through an approval process just fine.

Which sort of loops around to a wider issue: if self-driving is a priority, then it needs to be a government, collective priority for public safety and development.

I.e. independent testing institutes, a suite of diverse randomised tests, and yes: because accidents would no longer be "on the driver", both the government and the company would be taking responsibility for failures.

The issue is that the law of the land is: these systems aren't illegal, and the driver is responsible for how the car handles including any automation they engage.

To make any progress you have to break driver responsibility - or use that bright line and just outlaw the whole lot.


>but I'm also absolutely willing to bet that Tesla and every other system would get through an approval process just fine.

It would be Dieselgate all over again. The testing would have to be documented somewhere and the car could just be trained for the test scenarios. Considering how often Tesla likes to massage its numbers (Nürburgring data, sales figures in basically every country, most recently Australia, the big asterisk next to their 1.9s 0-60 run, etc.), I can't trust they wouldn't train directly for the test and neglect everything else just so they can have the best headlines once again.


I've wondered if it would be a good idea to have some sort of external indicator on cars that signals to others who are driving, cycling, or on foot that the vehicle is in autonomous mode. At least then maybe one can understand the potential risk driving around them. (Noted that an indicator doesn't really help if the vehicle is behind you in a lot of instances!)


One thing I've noticed the past couple of years is that it isn't that hard to determine whether or not a car is autonomously driving. If they're driving like they're drunk and they're in a Tesla, 9 out of 10 times you'll see that the driver isn't paying attention to the road at all because they're using Autopilot or FSD. I just treat the Tesla drivers around me on the road as if they're drunk and keep my distance.


Elsewhere in the thread a FSD Beta participant is imploring other drivers to keep their distance from Teslas because the software "gets easily spooked".

This makes it even harder to assess the safety of the system - other drivers are helping it along as well.


Several manufacturers have experimented with such systems in concept cars, e.g. animated grilles that signal in green when an oncoming car has recognized a pedestrian and is slowing in response.



What does MCAS mean here?


MCAS (Maneuvering Characteristics Augmentation System) is a software system on Boeing 737 MAX airplanes that overrides pilot input, designed to make up for cost-driven compromises in the hardware design. Pilots were deliberately not well-informed, and the system caused two crashes. It was brought up elsewhere in the discussion. In general, cars are headed into an aviation-like situation with automation, so case comparisons are becoming more frequent.

It's very apples to oranges here, but loosely relevant with respect to information/documentation and the cameras-vs-LIDAR/sensor fusion debate.


I'm sure the policy and regulatory systems will be able to fend off the lobbyists of the richest person in the world working to get the word "liability" redefined in the law. /s



lol @ public beta


I was expecting Autopilot to be seriously dangerous, but the data doesn't back that up. Out of 234 deaths from accidents involving Teslas, 12 people died while Tesla Autopilot was known to be in use or had recently been in use. http://www.tesladeaths.com Autopilot might be slightly worse than human drivers depending on how you slice the data, but that's about it.

I don’t think it’s a regulation problem as much as it is an acceptance by regulators that driving is inherently dangerous and autopilot isn’t dramatically worse. It’s not even obvious if on net more people would have died if Tesla had never released autopilot.


Note that FSD beta participants are selected/limited by Tesla based on proprietary "safe driver" scoring, so FSD data doesn't say much about how the gen pop might fare at managing the system. (I know you were writing about Autopilot -- discussion is on FSD Beta though.)

Note also that other drivers are learning behaviors such as keeping their distance from Teslas, etc., making it harder to assess the system functionally.

Edit: Actually, there are docs for the scoring, sans changelog though: https://news.ycombinator.com/item?id=30267858


Or social media clout (maybe not so much any more) where several prominent "influencers" got access regardless of scoring.


Autopilot and FSD are two different things. Elon claimed 0 accidents for FSD late last year, but this year there is video of a Tesla running into a post.


the data is extremely questionable


In what way?


1. Dangerous means causing injury, not just death.

2. How many times has autopilot disengaged and would've caused injury without human intervention?


Sampling bias, confirmation bias.

Did you delete your link to tesladeaths.com because you realized the fine print about sketchy autopilot data contradicted your point? Oops!


No, I was trying to keep the link without the http:// prefix. https://www.tesladeaths.com/ works, but www.tesladeaths.com doesn't, nor does www.tesladeaths.com/

I am sure I have seen comment links without the http://, but I might have just gotten so used to it that I stopped seeing it.


uh huh. why do you think it's reasonable to conclude that this system is safe given that the reported data is based on faulty analysis [0] and inexplicably exempt from reporting requirements for testing self-driving cars?

[0] https://jalopnik.com/feds-tesla-autosteer-safety-investigati...


I don’t think it is safe.

I just don’t have data showing it’s terrible.


ok but you think it's reasonable to let them test their unproven software (which has already killed people) on public roads anyway?


Yes, as there is no other way to prove it's safe. AKA you're suggesting an impossible standard for any company to meet.

Further, the only reasonable standard is roughly average human competency. As soon as a tired driver can turn on self-driving and be safer on the road, banning that technology has a real cost.


of course there is, it's what every other company is doing: safety drivers with reporting requirements


Testing hardware and software via safety drivers on public roads doesn’t actually prove safety in the hands of the general public. Just as FSD being released to the safest drivers first doesn’t prove safety in the hands of the general public.

The general public doesn't use products like you might assume. For example, toilet plungers have killed or seriously injured people, and not because they were used as a weapon.


what??? they're not subject to user error. that's the whole point, and how many times fallible humans had to take over is exactly what the disengagement report tracks


The most platonically ideal self-driving system possible doesn't prevent a tire blowout from poor maintenance, etc. Similarly, a test fleet isn't going to replicate the public's actual origins, destinations, and times of day. Something as simple as people going to bars more because the self-driving system can get them home would mean a different risk profile.


The thing that really grinds my gears about all of this as a roboticist is that, as a community, we had a pretty good culture of taking safety seriously. In 2014, the idea of unleashing admittedly beta-quality autonomous car software onto the public, on a platform that eschews industry best practices, was unfathomable.

Then came Autopilot and later FSD, and Tesla threw the concept of safety out the window. An autonomous Tesla decapitated a man in 2016, and Tesla continued unabated! What actionable steps did Tesla take as a result of that incident to make sure it never happens again? Even today in 2022 you can find videos of Teslas on public roads completely failing to detect and colliding with stationary objects. The culture at Tesla seems not to care that this is happening, as it continues to happen.

It used to be that when your robot did a single thing wrong or caused the most minor of injuries, let alone it killed someone, then you engineered the hell out of it and implemented safety protocols to make sure that failure mode would never happen again. Not at Tesla, apparently.

I get the utilitarian argument that maybe self driving cars could reduce traffic accidents in the future by being better than humans on average. But safety doesn’t happen unless you really work at it, with intention. Using your customers to beta test your 2 ton robots on public roads is not a safety-focused mindset. Tesla is not a company that values safety first when it comes to their autonomous car project.


I haven't worked in robotics, but I've worked in automation for quite some time (digital and real-world). Two examples I can think of:

- SCADA always had hot hardware backups and failure detection. To the extent that I thought this was dogma.

- Physical automation (like car construction equipment and robotics, pick and place machines, etc) have safety zones that humans aren't allowed in or near. Same thing for Amazon warehouse robotics.

I'm not sure how we leapt to, "beta test cars on the street" from there.


I think the issue is partly that people don't see cars as large industrial robots because we're just so used to them.

Everyone who has ever written code for a robot knows you should treat every pinch point as a finger blender, but the cars look nothing like them.

Tesla's branding may make that excusable for the layman, but for the employees at Tesla, that branding is incredibly irresponsible.


It seems to me like the lives of countless people are seen as edge-cases that don't matter to Tesla's big picture of profits and growth.


Why all the euphemisms and diplomatic language? There is one word for it: Criminal.


> I get the utilitarian argument that maybe self driving cars could reduce traffic accidents in the future by being better than humans on average.

This keeps being touted (by Elon and others), but is a long way away: https://twitter.com/Tweetermeyer/status/1488673180403191808

Tesla uses misleading stats (like comparing Autopilot miles, which are mostly highway driving and inherently safer, against all driving) to try to make it seem like Autopilot is any good.


Yeah, and the reason Elon is reaching for aggregate statistics is because he can't argue that his platform is fundamentally sound. I mean, the primary directive of any driverless car is: don't run into anything. That's it, that's all it has to do. Yet Teslas routinely run into stationary objects, and not because those objects are somehow particularly pathological and would generally fool most anyone. No, they are specifically confounding to the way Tesla does things, in a way that practitioners in the field are aware of and know how to solve, but which Elon Musk stubbornly refuses to recognize.

I keep harping on it, but the decapitation incident in 2016 is just so illustrative of what kind of culture Tesla is fostering. Here's a situation where reckless engineering choices by Tesla literally caused a customer to lose his head. The solution which would fix this problem would be to add more orthogonal sensor modalities, so that any blind spot in one sensor can be compensated for by another (e.g. LIDAR will not see a glass wall, but sonar will).

But what did Tesla do? They actually doubled down! They removed existing orthogonal sensors and made everything worse! Now basically the entire thing is driven by vision, and it's still predictably exhibiting the same failure mode of running into broad, stationary objects as it did 5 years ago (which, by the way, was approximately the amount of time Elon Musk predicted it would take them to get to level 5 autonomous driving; instead they've bumbled around for half a decade and failed to make any significant progress).


For Uber at least, self driving was effectively a money-losing pipe dream, so it kinda did itself a favor by divesting via the sale of the division to Aurora.

For Tesla, though, FSD is still such a front and center thing that I'm having trouble seeing how it ever plans on saving face as it gets more and more bad press from incidents like this.


The feature sells cars.

Remember that guy in southern California who got arrested for sleeping in the back of his Tesla while it drove him home, and he said he did it many times and would do it again? The system works well enough that he did it many times before he got caught.

That guy is an idiot, but think about that story from another angle: You can fall asleep while driving any car. The difference is, if that car is a Tesla, you might not die.

As a driver aid for non-crazy people FSD is a good system and Tesla owners like it. It's far from being vaporware.


dang is going to ban me if I share how this take makes me feel. so let me just say that the proliferation of evidence-resistant memes such as this one concerns me greatly, and I feel even more unsafe as a pedestrian and cyclist on the street than I did before knowing that there are people who think that this is reasonable.


> The feature sells cars

Right. Similarly, the fast Prime delivery guarantees from Amazon are backed by warehouse workers working their asses off, and I'm sure we've all heard about the bad rep Amazon gets for worker treatment and the subsequent attrition challenges they've been facing.

It feels like Tesla is setting itself up to be stuck between a rock and a hard place, being reliant on their FSD marketing to make money but simultaneously tainting their brand beyond salvation by this very same reliance.


I think you vastly overestimate the effect this has on the brand. At this point everyone who buys a Tesla has heard stories similar to this, in addition to stories about quality and body panel fitment etc., and they buy them anyway.

Amazon doesn't seem to be hurting either, despite the bad press they get about their warehouses.


> he said he did it many times and would do it again?

Based on the videos I’m seeing of FSD in action, I have my doubts. Wouldn’t be surprised if this was a publicity stunt. Elon Musk has a very loose relationship with ethics and the truth. They’re just obstacles for him.


It's not money-losing if you take the program's effect on stock prices into account, and maybe you'll even accept that it's always been a stock manipulation tool.



Going out of its way to hit the bollard - wow


Did you notice the most likely reason? Slightly to the right of the line of bollards, there is a small patch of road and another strong white line.

It's almost 1 meter (3 feet) wide.

So it thinks the tiny patch between the bollard and the rightmost white line is the correct side of the road to drive on.


Not that it justifies overhyping or undercaution by Tesla, but let's be real:

everyday driving comes down to that same acceptance of risk

Let's not let perfect be the enemy of the good: a self-driving highway system that beats human drivers over long distances can probably be achieved with current technology. I wish that were the priority rather than the robotaxi obsession.

Self-driving highway systems would produce immediate profit and economic savings that could fund the drive-grandma-to-Taco-Bell systems, as well as the convergent infrastructure, common signaling systems, embedded sensors, better signage and standards, improved and predictable construction signals and routing, etc.

Following the usual hype cycle, it will be highway logistics that smooths over the "despair" portion and does the practical incremental improvement.


Self-driving cars will never be a reality if the standard for success is zero deaths during the early stages of development. It just won't happen. The technology won't ever be good enough if it's not tested in the real world.

Furthermore, no self-driving system will be perfect even in a mature stage of development. People will still die, and they may die because of mistakes a human driver would not have made.

If society isn't willing to accept that sacrifice, then let's just ban self-driving cars.

But let's not try to have it both ways -- let's not allow self-driving while also holding the view that risk of death is unacceptable.


The standard for success is not zero deaths, but it should at least be a system no more dangerous than human drivers.

Generally speaking, the average human driver causes a “ding” every 100k miles, an insurance-worthy event every 250k miles, a police-worthy event every 500k miles, and a fatality every 60M miles. Assuming an average driving speed of 60 mph, which is obviously an overestimation, those are 1.6k, 4k, 8k, and 1M hours respectively.

Evaluating a L2 autonomy system is somewhat challenging since in truth you are evaluating a human+autonomous system, so you should expect the overall safety to increase unless the autonomy system is actively making things more dangerous. However, for a L4/L5 autonomy system, which is Tesla’s goal, we can reasonably approximate the quality by miles per intervention as an intervention is a failure of the autonomy.

As a reasonable baseline based on video evidence of FSD, we can assume that each intervention prevents at least a “ding”. Therefore, we can evaluate the quality of FSD relative to a minimal L4 system as the ratio of time per disengagement versus 1.6k hours. Using those same videos as evidence, we see the FSD system requiring intervention every few minutes, i.e. <10 minutes, on average. That is 1/10,000 of the minimal quality expected from a fully functional L4 system.

So, for Tesla to create a minimally acceptable L4 autonomy solution that would actually usher in an age of safe self-driving systems, they need to improve the core functionality of their beta product by 10,000x. It is that gap between what they are delivering and what they still need to achieve that is outrageous.
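To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python (the 60 mph average speed and the ~10 minutes between interventions are the same assumptions as above; the numbers are illustrative, not measurements):

    # Rough sketch of the gap between FSD beta and a minimal L4 system.
    AVG_SPEED_MPH = 60              # assumed average speed (an overestimate)
    MILES_PER_DING = 100_000        # average human driver: one "ding" per 100k miles
    MIN_PER_INTERVENTION = 10       # rough estimate from FSD beta videos

    minutes_per_ding = MILES_PER_DING / AVG_SPEED_MPH * 60   # ~100,000 minutes
    gap = minutes_per_ding / MIN_PER_INTERVENTION            # ~10,000

    # If each intervention prevents at least a "ding", FSD needs roughly
    # a 10,000x improvement to match an average human driver.
    print(f"required improvement: ~{gap:,.0f}x")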


At this stage of product development, the rate of intervention is not, and likely won't be for a long time, a good measure of failure. We likely won't see full L4 systems for a very long time.

In the meantime, we could start to get a better handle on the safety profile of these systems by better classifying the interventions.

Something like these:

* Intervention due to unreadable conditions (e.g. unpainted lines, a new area with no training data, failure of LIDAR / camera, w/e)

* Human intervention due to not feeling safe

* Intervention due to unsafe condition (e.g. imminent collision, or just hit something)

* Human intervention for other reason (e.g. human is bored, or human changing destination, or human wanting to take another route)

---

#1 could certainly lead to #2 and #3, but is more of an indicator that the car is in novel conditions not trained for, or has hardware failure. #1 is the low hanging fruit (relatively speaking), and #2 is where the majority of the work lies.

#3 is where the system has actually failed.

And #4 needs to just be discarded.
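A minimal sketch of what logging those four categories could look like (hypothetical names, just to illustrate the classification, not anyone's actual reporting scheme):

    from enum import Enum, auto

    class Intervention(Enum):
        UNREADABLE_CONDITIONS = auto()  # unpainted lines, untrained area, sensor failure
        DRIVER_FELT_UNSAFE = auto()     # human took over out of caution
        UNSAFE_CONDITION = auto()       # imminent collision or contact: a true failure
        OTHER = auto()                  # boredom, route change, etc.: discard from stats

    # Only UNSAFE_CONDITION counts as a hard failure of the system;
    # DRIVER_FELT_UNSAFE is where most of the remaining work lies.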


I agree with you overall, but would question two parts of your post.

> we can assume that each intervention prevents at least a “ding”.

I don't think we can be sure that the vehicle wouldn't have corrected itself in time, nor how many would have been much worse than a "ding". This is where some closed course testing with simulated pedestrians and bicycles etc may give more evidence. In evaluating self driving software, knowing what the consequences of a failure are is important. As you point out, safety drivers prevent this assessment.

> based on video evidence of FSD

I think that there is selection bias in the readily available footage. Firstly, the short clips are likely to be where there is an incident. The longer ones are usually beta testers deliberately choosing tricky routes to test the FSD. I don't see anyone posting videos of a 3 hour freeway drive where nothing happens.

But my point is just that this is complex, and we should be much more careful than we are being.


Very well said. You’ve put into words what my intuition has been screaming at me but has been unable to properly formulate. Thank you.


You are introducing a ridiculous binary. There are innumerable ways to test this software, even on public roads, without unleashing it on the thousands of Tesla owners.

It's not like Tesla has to release new features every year; they just feel they need to in order to keep pumping up and justifying their stock price.

EDIT:

I misread the parent comment; I'm not opposed to just banning self-driving cars, as self-driving cars likely won't make everyday folks' lives better.


I'm fine with them unleashing it on Tesla owners, they at least have a choice in the matter. I'm not fine with unleashing it on everybody else too.


Driving on public streets is an exercise in choreography, so they're the same thing.


No, they are not. Low-quality software in control of a two-ton chunk of iron moving at 35 mph through traffic, including pedestrians and bicycles, has nothing to do with choreography; that bike was about to be hit from behind because a vehicle steered into it.


Choreography in terms of doing things in concert with other road users without clashing, including vulnerable road users (pedestrians, bicyclists). My point is that unleashing it on Tesla owners is obviously going to involve interacting with other road users.

Just because this instance was regarding a bicyclist doesn't mean it's only a danger to bicyclists, there are still ways for people to die if the car were only interacting with other Teslas.


the other companies seem to have no problem following the rules and not killing people.

I'm more of the mind that cars should become the second-class citizens they ought to be in cities and that I should be able to bike around the city without constant threat of death; that dream remains far off in American cities.

Why is it that fire codes and aviation regulations are written in blood, but traffic violence is treated as a normal, necessary cost of modern life? When a biker is killed by a driver who wasn't signaling and cut them off on a right turn (as happens to me on a daily basis in SF), that should trigger immediate redesigns of intersections and streets across the city, for that was an entirely preventable death.

But nope. Go ahead and test your fundamentally flawed, unproven death machine on public roads.

I gotta say, I am really not that keen on being killed by a white tesla.


> the other companies seem to have no problem following the rules and not killing people.

What rules did Tesla break?

Also, remember that time Waymo killed a pedestrian?


that this is not explicitly illegal is the problem. everyone else has safety drivers and is subject to reporting requirements.

waymo never killed a pedestrian


> waymo never killed a pedestrian

Yeah, it was Uber, my mistake.

But that's an example of another self-driving company that killed people. It could have easily been any of them -- they have all had accidents that could have resulted in death, and just got lucky. Tesla is not special here.


I think it is of questionable rationality to accept the null hypothesis between Uber (0.4 miles/disengagement in 2018) and Waymo/Cruise (~700k miles, ~30k miles/disengagement in 2020; ~11k and ~5k miles/disengagement in 2018, respectively), but reject the null hypothesis for Tesla, which reported... 12 miles of self-driving in 2020.


But there are competent folks in the field... ? Waymo's easily five years ahead of Tesla.

Previously: https://news.ycombinator.com/item?id=30182141


I think a reasonable standard is: if someone dies, is seriously injured, or likely would have died or been seriously injured if the driver hadn't intervened in a situation that a normally competent driver wouldn't have had a problem with, then the self-driving system in question should be disabled globally until the vendor of the system can reasonably convince the NTSB or other relevant government agency that they've taken adequate precautions that it's very unlikely to happen again.

It's to be expected that people will occasionally die in car accidents. Some things are hard to avoid. If someone on a bicycle comes shooting out into a busy street when they don't have the right of way and no one could see them coming, they're probably going to get run over. My general expectation for self-driving vehicles is: don't create unsafe situations that weren't there before, don't behave in unpredictable ways, and if you can't avoid running someone over, at least do it in a way that the consequences fall on the person doing risky things. In the above example, swerving to miss the biker is fine if it can be done safely, but swerving to miss the biker and running over someone on the sidewalk instead is not. (Or, in other words: if you can't avoid an accident, at least be predictable and try to obey the general principles of right-of-way.)

Swerving at a bike for no reason as in the video above should result in suspension of self-driving features (assuming the video is real and wasn't staged in some way).


> or likely would have died or been seriously injured if the driver hadn't intervened

I can see this being a reason to ban a system from operating at SAE level 3 or above, but isn't the entire difference between level 2 and level 3 that things like that are expected to happen at level 2 and below, hence the reason for drivers having to always be ready to immediately intervene at those levels?


I think one could reasonably say that a car shouldn't be expected to save the driver from all situations when the self-driving features are just a backup system and the driver is expected to be in control. However, those features shouldn't cause unsafe situations all by themselves, like the example video of the car trying to swerve into a bicyclist.


Somehow, I don't think the only way to get self-driving cars is if we first crack a few skulls testing them out on the public without their consent.


Your framing assumes as a given that Tesla's self-driving tech will eventually succeed given a suitable buffer for human collateral damage, but there is no guarantee that this is the case and Elon's decade of falsehoods regarding the timeline and capabilities of FSD technology doesn't inspire confidence.


Right, there is no guarantee it will ever work.

However, self driving is definitely guaranteed to fail if it can't be tested on public roads. And that's really the challenge here -- it can only succeed if society is willing to take the risk.


We don't have to ban self driving cars. We just set the success criteria at zero deaths. If companies want to spend money pursuing it believing they can hit that goal that's on them.


We have around 38,000 deaths a year on US roads. We already accept significant risk to maintain the status quo.

Sensible legislation is needed to balance innovation and safety for self driving cars on public roads, but not zero. We don’t even accept zero pedestrian hit and run deaths yet…

California, Texas, and Michigan legislators have already bought in to the benefits that these companies will bring.


Zero deaths is a completely unreasonable standard. I literally can't think of a single thing at all that's never contributed to anyone's death.


> We don't have to ban self driving cars. We just set the success criteria at zero deaths.

That's equivalent to a ban.


There are multiple alternatives to driver-driven cars that result in many fewer excess deaths. They've been trialed and deployed in many countries very successfully. You may know of them by common names such as "buses", "streetcars", "trains", "metros", or even "people movers". There's nothing wrong with innovation, but we do not need to put self-driving technology on a pedestal when there are alternatives that exist today that move people around with none of these externalities. If entities want to innovate, let them innovate within safety parameters.


We could also set the speed limit to 5mph and limit cars to 10hp. That would save a lot of lives. The reason we don't is because some deaths are worth the cost.

>If entities want to innovate, let them innovate within safety parameters.

Sure, but the safety parameters have already been established[0] at 40,000 deaths a year, or fewer, for transportation as convenient and functional as a manually driven car.

[0]By the crowd-sourced wisdom of democracy.


> We could also set the speed limit to 5mph and limit cars to 10hp. That would save a lot of lives. The reason we don't is because some deaths are worth the cost.

Are you saying that, despite transit alternatives that do not lead to excess deaths, deaths due to self-driving cars are worth the cost? We have existing solutions that don't result in these excess deaths, but for some reason we don't want those solutions _and_ we want to kill more people just to birth this technology? _Why_?

> [0]By the crowd-sourced wisdom of democracy.

... Which bill are you talking about? I don't remember my Rep or Senator voting on allowing me to be a casualty of the self-driving industry. The fact that automobiles in the US result in 40k deaths is itself a travesty. Even Canada is doing better than the US here. Pointing to the worst driving safety record in the G8 and one of the worst in the G20 isn't exactly proof positive that the American automobile system is safe, it's just proof that we have grandfathered an extremely unsafe system here.


>We have existing solutions that don't result in these excess deaths

The existing solutions <as chosen by the collective will of the American people> cause 40,000 dead Americans every year. Other 'solutions' are not being chosen. Don't ask me why they are not; ask your neighbor, and your congressman -- but they are not. Also ask yourself how many collective minutes spent waiting for a bus a traffic fatality is worth. This isn't rhetorical; the answer is a number.

>Which bill are you talking about?

Let's start with the United States Department of Transportation, followed shortly thereafter by your state's department of transportation.

>it's just proof that we have grandfathered an extremely unsafe system here.

All the more reason why a potentially new system should have to meet the safety standard of 'less unsafe' and not 'perfect'. Literally every day the adoption of self-driving is delayed kills 109.6 people[0], because that's one more day of the status quo. To save lives we should be damn near throwing people in front of Teslas to help them train instead of wringing our hands over the matter.

[0] At most; scale by the relative eventual safety of fully developed self-driving. And that's just Americans; globally, of course, it's more.


I'm confused. Is the only way forward private automobiles? Did the creation of the Department of Transportation ban the creation of transit?

Americans choose automobiles because of zoning laws mandating massive setbacks and minimum parking requirements. There's nothing in the DoT or my state's DoT that mandates use of a car. It's all due to American city planning, which is driven by auto-friendly policy. So again, why is transit out of the question here?

Self-driving cars aren't an incremental improvement away. We have no idea how many more deaths are needed to mature the technology. So why are we bending over backwards for it when there are proven solutions to move people without casualty? If self driving can innovate without casualty, then be my guest.


>If self driving can innovate without casualty, then be my guest.

You would kill tens of thousands of people a year with that policy decision.


> To save lives we should be damn near throwing people in front of Teslas to help them train instead of wringing our hands over the matter.

The unstated assumption in this is that self-driving is feasible with current technology and without enough training will be better than human drivers.


It seems like what you really want is to ban cars and force everyone to use public transit.

That's just not realistic, not even in Europe.


That's a strawman. You can make private automobile ownership much safer and more convenient by building transit. What you're implying is that "forcing everyone to use public transit" is bad, so we need to sacrifice a couple thousand people for self-driving cars? My argument is that we can put much more stringent safety requirements on automobile users. If self driving cars can meet those requirements, then be my guest. Why are you willing to bend over backwards, to the point of endangering others, just to avoid public transit or other proven safe technologies?


That seems like much too fun a thought experiment to just do away with.

Why wouldn't it be realistic?


Riots in the streets? That happened in France when the government increased the price of diesel. Imagine what would have happened if the government had tried to ban cars entirely!

If a state government or the US Federal government tried to ban cars, they would be out of power very quickly, even in the most liberal parts of the country.


Yes, and I'm pretty sure if you let everyone know they're going to be enrolled in a massive experiment to test beta self-driving software on themselves, with or without their consent, then they're going to be rioting also.


We fix that by doing it really really slowly.


Because the technology is a non-starter and will never work unless accepted terms of public safety are redefined in its favor. "Just draw a bigger bullseye!"


Of course it's not; it just increases expense, but that should be offset by the potential reward of a true breakthrough. It will change the innovation path, yes, but it's not a ban.


Impossible standards are equivalent to bans, and zero deaths is an impossible standard.

All of the companies that are testing self-driving cars have had accidents with other cars and pedestrians. Some of them resulted in death. Others did not, but easily could have, and eventually their luck will run out and they will also cause deaths.

Who would invest millions of dollars into developing a technology when a single mistake could flush all that money down the drain?


This is normally when regulators step in.


I also think there is a certain fear or outrage around this stuff which is fueled by "technology" or the unknown.

People are some of the worst drivers around; they cause all kinds of problems with cluelessness and inattention, and they crash and die and kill and injure in all sorts of stupid ways. They get drunk, high, check their phone, fall asleep, speed, show off.

I think it's great that there are still companies out there throwing resources into moonshot research projects like this. Automobile deaths are perhaps the biggest and most persistent pandemic we've faced; we've been facing it for a hundred years, and it kills and maims young and old alike. Millions of people dead over the years.

Many were willing to relax some of the protocols around vaccine testing to bring the covid vaccines into use earlier, put mandates and coercion around them, and yes, some suffered adverse reactions and even died from them. Still, it was thought to be a reasonable compromise by many.

I hope we don't go the other way on other technology that can advance society and potentially save lives just because not everything goes perfectly from the start, or we get stuck in the pharmaceutical corporations = bad mindset.

Not to say there perhaps shouldn't be more regulation around it, but I think a serious and careful study of risks and benefits is appropriate, rather than handwringing and outrage.


> People are some of the worst drivers around

1 fatality per 100 million miles driven is relatively good, to be honest.

This seems to me like a good argument for the public funding of electric rail lines instead of self-driving cars that perpetuate all of the problems personal vehicles are responsible for.


Sure and covid only kills a miniscule proportion of the people who contract it. Over a million people die on the roads every year around the world. It's been a 100 year covid.

Handwringing about someone carefully testing a car and not hitting a cyclist in the process is really pretty rich. I'll happily take actual data and statistics but for every one of these I can also post a video of a driver doing something stupid that a computer would not.


> Sure and covid only kills a miniscule proportion of the people who contract it.

A 1 out of 100 to 1 out of 1,000 fatality rate for COVID depending on demographics is quite literally several orders of magnitude higher than a 1 out of 100,000,000 fatality rate for driving. You're comparing apples to atoms.


No I'm not, because not everyone goes out and gets covid most days for their whole lives. It's also much, much less than 1 in 1,000 for young people in the 0-30 range, where auto wrecks are the number one killer in a lot of places.

So no, it's more like apples to watermelons. The watermelons being bigger, the ones which have killed far more people and will continue to kill far more people.


Friendly reminder that yesterday, COVID-19 killed over 2000 people in a single day. It’s been keeping that grim pace for weeks. For every person who dies, at least a dozen will suffer from long Covid. Even still, no dangerous shortcuts were taken trying to get the vaccines out (in the West), other than doing steps in parallel rather than in sequence (at great $cost).

So even COVID-19 wasn’t enough to “move fast and break things” (though I remember hearing many such calls to “move fast and break things” from the same contrarian-sphere community that is now Very Concerned about vaccine injuries). If COVID-19 wasn’t bad enough to completely remove our safety guardrails, certainly the never ending promise of FSD (watch any Musk video compilation where he promises FSD is just around the corner, going on five years now) isn’t enough.


Yesterday, auto crashes killed 4000 people and horrifically maimed many more. As has been the case for the past century, and as will continue to happen long after covid is background noise.


4000 in the USA? No it did not. Apples and oranges.

Edit: I just looked up the numbers and it’s 102 deaths/day average in the USA. I think it’s a horrific problem and support many initiatives that will take cars off the road and calm traffic.


4000 people died; what I said is true.

102 deaths every day in the USA alone is staggering, and many of them children and young people! Are you trying to say it's not? How many people have self-driving cars and their research and development killed? Vanishingly few by comparison.

The situation is very analogous to covid and vaccines.


What vaccine guardrails are you under the impression were removed? AFAIK none were removed, it's just steps were done in parallel at great $expense. It appears you are trying to imply that it's fine if Tesla skips safety testing and pops a few skulls like watermelons in pursuit of $FSD. I'm not saying perfection is possible, but certainly Tesla can do better than it's doing. If anything, Tesla threatens to set everything back with its sloppiness. One bad accident could create enough bad PR to set self-driving cars back for years.


> What vaccine guardrails are you under the impression were removed?

The guardrails that the typical trial process has, see e.g.,:

https://coronavirus.jhu.edu/vaccines/timeline

https://www.ncirs.org.au/phases-clinical-trials

> AFAIK none were removed, it's just steps were done in parallel at great $expense.

No, the far greater expense is the typical 5-10 years that clinical trials require.

> It appears you are trying to imply that it's fine if Tesla skips safety testing and pops a few skulls like watermelons in pursuit of $FSD. I'm not saying perfection is possible, but certainly Tesla can do better than it's doing. If anything, Tesla threatens to set everything back with its sloppiness. One bad accident could create enough bad PR to set self-driving cars back for years.

It doesn't appear that I'm trying to imply that at all, it appears you're just making things up.


That timeline you linked matched up with the COVID-19 mRNA vaccine timelines. mRNA pre-clinical data had been being researched for over a decade. I don’t see anything that conflicts with what I said. Are you under the mistaken impression that mRNA vaccine research began in 2019?

Again, you’re claiming vaccine guardrails were removed but posting no evidence. Do they do 5-10 year trials on the annual flu vaccine? Please show me an authoritative medical source concerned about major shortcuts taken bringing the mRNA vaccine to market. Should be a piece of cake.


I don't know what your first paragraph means, what matched up?

I don't know what you mean by guardrail either. Vaccine trials were significantly accelerated from typical vaccines, long term data was foregone and emergency use authorizations were used to deploy them. I don't know what you're trying to argue or whether you're actually trying to deny that reality.


The number of times most people will catch COVID is way, way fewer than the number of miles that they'll drive.


> I also think there is a certain fear or outrage around this stuff which is fueled by "technology" or the unknown.

From other communities I could buy the "fear of the unknown" bit. But here, most of us are involved in the production of software in some form. It's not the unknown that we fear, it's the known. We fully understand what modern software is actually capable of, and we know that it's not ready for prime time in these kinds of environments.

Further, many of us have been in a chaotic public beta before, and seeing those patterns play out during a beta test of lethal machinery is really scary. Rapid chaotic iteration is fine when it's a simple SaaS product. Not so much with Tesla.


Most here have no idea what's involved in automotive control systems or these autonomous driving systems, let's be honest.


> People are some of the worst drivers around

Compared to which other species?


We don't need moonshots to solve the car pandemic. Strong Towns and others have shown that we already have plenty of common-sense, effective tools that we've simply chosen not to roll out on a wider basis.

We don't need new technology, we need new leaders.


Of course we do. If you can get "Strong Towns and others", whatever exactly that is, to end the car pandemic, that would be a moonshot, wouldn't it? Maybe not technologically, but politically and socially.

You're not the first one to have thought it's a good idea or that we should try to do something about it. It's not like it's easy and people have just decided not to.


> "Some of you may die, but it's a sacrifice I am willing to make."

Honest question: if beta testing speeds up the transition to FSD by x months/years and causes y deaths during the intervening period, but FSD ultimately reduces the number of road deaths by z per day, for what values of x, y and z is it an acceptable trade-off?

Zeroism is often overly simplistic. y being non-zero might be hugely net-positive in the long run.
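
Just to make the arithmetic concrete (every number below is an invented placeholder, not an estimate), a minimal sketch of the break-even condition being asked about:

  # Hypothetical illustration only:
  #   x = months by which beta testing brings mature FSD forward
  #   y = deaths caused during the beta period
  #   z = road deaths avoided per day once FSD is mature
  def net_lives_saved(x_months_sooner, y_beta_deaths, z_deaths_avoided_per_day):
      days_gained = x_months_sooner * 30
      return z_deaths_avoided_per_day * days_gained - y_beta_deaths

  # e.g. arriving 12 months sooner, 50 beta deaths, 10 deaths/day avoided:
  print(net_lives_saved(12, 50, 10))  # 3550 -> "net positive" on this crude measure

This is purely the utilitarian arithmetic; it says nothing about consent or about who bears the risk.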


Ethically, it is not an acceptable trade off. Your question is set in the future, after the action has been shown to be a success. A choice made today cannot be made ethical retroactively because it turned out to be the right decision. And it is unlikely we can prove the action caused the success in any case, or that not performing the action might have been better (maybe a success tomorrow sets back a better approach by 50 years, causing many more deaths). The better question to ask is 'if beta testing might speed up the transition to FSD and it might reduce the number of road deaths, is it an acceptable trade off?'. At which point you are doing risk analysis and algebra. But given this is the mathematics of human life, you are still in a very dubious ethical area, and the usual answer is still no.


Of course, I don't know the future, it was just a hypothetical question, not a mathematical definition. My point was just the answer is not zero because the alternative is human driving and human driving is also not zero deaths. So, at some point we will have to make some sort of trade off that will involve some number of deaths. Unless you are suggesting that the likelihood of deaths should literally be zero before we start rolling out in public?


To indemnify manufacturers and drivers from the harm, it is going to require extraordinary proof and regulation. Similar to medicine, medical equipment, aircraft. Until that point, it is individual choice to turn it on, and their responsibility for any harm. Same situation as a driver choosing between wearing their eyeglasses or not. What is unclear is which individuals are taking responsibility with driverless trials, or if any indemnification by the state for driverless trials was done ethically.


No, of course it isn't; think, for instance, of applying that same logic to other circumstances. You can't knowingly or recklessly endanger people simply because your long-term intentions are benign.


There needs to be some testing of FSD solutions in real world scenarios and there will inevitably be deaths. There are thousands of deaths on the road every day right now.

Even if FSD is mature and 1000x safer than human driving, there will still be deaths. Are you suggesting that if FSD is 1000x safer it still shouldn't be allowed on the road because there is the possibility of deaths?

I'm not claiming that FSD should or shouldn't be allowed on the road right now. My point is that we shouldn't wait for the chances of death to be zero.


> There needs to be some testing of FSD solutions in real world scenarios and there will inevitably be deaths. There are thousands of deaths on the road every day right now.

This is a solved problem with experimental design. Experiments are designed and reviewed by ethics boards, and participants are gathered after getting their informed consent and giving them just compensation for the risks involved.

"Testing of FSD solutions in the real world" can follow the same path in their experimental designs, but they don't, because executives and shareholders don't want to pay for expensive ethical experimentation. They'd rather move fast and break bodies.

Most modern life saving devices and treatments have gone through exactly this type of ethical experimentation. There's no reason Tesla's products couldn't, as well.


Reminds me of that moment in Fight Club.

“Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one.“
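
For what it's worth, the quoted formula is easy to make concrete (every figure below is invented):

  # The recall formula from the quote, with made-up numbers.
  vehicles_in_field = 1_000_000      # A
  probable_failure_rate = 0.0002     # B
  avg_settlement = 2_500_000         # C, dollars
  recall_cost = 800_000_000          # dollars

  expected_payout = vehicles_in_field * probable_failure_rate * avg_settlement  # X
  print(expected_payout < recall_cost)  # True -> "we don't do one"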


the other companies have no problems following the rules


What rules did Tesla break here?


40,000 people die in car crashes every year in the US alone. If the Lord Farquaad approach brings widespread autonomous car adoption that decreases that number substantially even a month sooner, it's the right thing to do. Many of us are already dying.


The ethical problem with Lord Farquaad's line isn't that "some of you will die". It's where he says "but that is a sacrifice that I am willing to make".

Tesla's self-driving beta is unethical because most of the participants in this "experiment" did not opt in. Most of the people killed by Tesla won't be their customers. It will be the cyclist or the pedestrian that gets killed because the driver thought "full self driving" meant the car was fully driving itself. They didn't even know they were part of an experiment that day, yet they were the sacrifice that Tesla was willing to make.


I'm sympathetic to that, but in the end don't trust people to make decisions that are in everyone's best interest. Americans especially have proven again and again that they are too blinded by the false ideals of rugged individualism to see what's right. What's unethical is allowing tens of thousands of people to die every year because some people have an (understandably) misguided view of right and wrong.


Can you clarify how those of us calling for the government to prevent Tesla from involuntarily enlisting drivers into their beta test are "blinded by the false ideals of rugged individualism"?

Just so we're talking about the same term:

> The belief that all individuals, or nearly all individuals, can succeed on their own and that government help for people should be minimal.

https://www.dictionary.com/browse/rugged-individualism

Typically, the rugged individualists are the ones who say that the government should stay out of all this and people can take care of themselves. Basically the opposite of what I'm saying.


Sometimes sacrifices need to be made so that we can all prosper. The American mythos is unfortunately allergic to this truism. The idea that society needs permission to better itself is misguided. Same reason America is lagging in vaccination rate even though we are the society with most access. I'd be extremely happy if people could come to the correct conclusion themselves. I don't want to allow their failure to harm the rest of us.


What you're arguing against, then, isn't "rugged individualism", it's "the sanctity of human life". The difference is that the one says "individual human freedom is paramount" while the other says "each individual human life is paramount".

Neither ethic would approve of systematically killing people for the greater good, but they're two very different ethics in most other respects. For example, I could arrive at "vaccination should be mandated" from a sanctity-of-life perspective, but I never could from a rugged individualism perspective.


Not quite. My thesis is "The idea that society needs permission to better itself is misguided." It's not limited to human life at all. It's about rejecting your thesis of "Tesla's self-driving beta is unethical because most of the participants in this "experiment" did not opt in." We don't need permission to save lives.


And your calculus isn't altered at all by the fact that Tesla isn't "society", it's a single for-profit corporation?


Of course, but that doesn't change the fact that the software would help everyone, even if Tesla profited.


Sorry, but I don't see how a company that...

- Beta tests critical software in public, which has led to actual customer deaths;

- Refuses to follow industry best practices when it comes to safety sensors;

- Refuses to seriously investigate safety incidents and implement technological and cultural fixes to prevent them from happening in the future;

...will lead us into a utopia of automotive safety. At best what they're liable to do is create a climate where mega corporations feel free to use public spaces as their own personal laboratories, because the ends justify the means after all (don't pay attention to the decapitations, we're doing it for the safety of us all). We will all pay the price for that.


Don't forget to add

- willfully misrepresents data to imply a better safety record than they actually do have

https://twitter.com/Tweetermeyer/status/1488673180403191808 (I've posted this elsewhere on this thread too)


I agree it would require an incredible amount of evidence to convince me that this actually would speed up the development of self-driving. Just saying that I'm not opposed to the Lord Farquaad method if it helps.


Many people die of HIV, it’s a plague on society. If we just killed everyone who has it, it would be gone.

Many of “us” are already dying, and it saves lives in the future.

/s


The fundamental issue with FSD is that it doesn’t seem to actually make driving any easier. If the driver is doing their job and properly babysitting the car against these sorts of mistakes, then driving in FSD mode will be more exhausting than normal driving.

Of course, many people will interpret FSD as a convenience feature (as they do for “autopilot”) and will neglect to properly monitor it.


If the driver is doing their job and properly babysitting the car against these sorts of mistakes, then driving in FSD mode will be more exhausting than normal driving.

I've heard it described as being like a driving instructor, watching a very inexperienced student who mostly drives fine, but is liable to suddenly make extremely poor decisions.


I think it would be much, much worse than being a driving instructor. When I teach my partner how to drive, they announce their intentions, act cautiously, and make clear what their next moves will be.

What I'm seeing here is more like teaching a chimpanzee how to drive. It's irrational, has no clear intent, and does not ever announce what it's trying to do.

I've never been in a FSD-beta car, but I would take a novice adult driver over that every day of the week.


What baffles me is that the car doesn’t have any trouble plowing right into things which are visible in the GUI. It looks like it sees the cyclist and goes right ahead into them. Anybody have an explanation for this behavior?


The pathfinding algorithm is separate from the object detection neural network. It's possible for it to incorrectly identify paths going through solid objects; however, I would hope they have dumb code to detect that the path is invalid and throw it out when that happens. Maybe they don't?

Part of the difficulty of trying to have reasonable discussions about FSD is that so much of it is black-boxy nobody really knows what's going on. I don't care if Tesla wants to keep its ML models proprietary, but I feel like we deserve to know how it uses those models to make driving decisions on public roads. And not just Tesla either; every self driving company should be required to open source their "dumb" code so that we understand them better. The competitive advantage comes from the ML models anyways, so it wouldn't be anti competitive to regulate that disclosure.
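
The kind of last-ditch sanity check hoped for above can be very simple in principle. A toy sketch (nothing to do with Tesla's actual planner; every data structure here is invented):

  from dataclasses import dataclass

  @dataclass
  class Box2D:
      """A detected object's footprint in the ground plane, in metres."""
      x: float        # lateral offset from the ego vehicle
      y: float        # distance ahead of the ego vehicle
      half_w: float
      half_l: float

      def contains(self, px, py, margin=0.5):
          return (abs(px - self.x) <= self.half_w + margin and
                  abs(py - self.y) <= self.half_l + margin)

  def path_is_clear(waypoints, detections):
      """Reject any planned path whose waypoints pass through a detected object."""
      return not any(box.contains(x, y) for (x, y) in waypoints for box in detections)

  # Toy usage: a cyclist footprint ~8 m ahead and slightly to the right,
  # with a planned path that drifts toward it.
  cyclist = Box2D(x=1.0, y=8.0, half_w=0.4, half_l=0.9)
  planned = [(0.2 * i, 1.0 * i) for i in range(12)]
  print(path_is_clear(planned, [cyclist]))  # False -> throw this path out

Whether anything like this exists in Tesla's stack, and why it didn't prevent the lunge here, is exactly what we can't know from the outside.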


The most recent proposed legal framework in the UK for self driving vehicles (summary at https://www.bbc.com/news/technology-60126014) recommends that legislation be passed so that

"data to understand fault and liability following a collision must be accessible;

[There are] sanctions for carmakers who fail to reveal how their systems work"

The proposals are very detailed but do seem to be addressing a number of areas around how autonomy is marketed, and how to ensure vehicles are safe and that vehicle makers are making available material information on the inner workings of their software.


"Don't drive a path through an object" seems like such a core function that a video like this is proof FSD is intrinsically broken.


That’s the fundamental problem with machine learning. You get no control over what the machine has actually learned.

It’s like GPT-3. In nine cases out of ten, it responds in a way that lets you think the model has “grokked” facts about the world and a human-like sense of logic. In truth, it hasn’t. It’s just pretending. It’s taking the easy path by parroting back what it has seen before.

They say that ML requires a re-thinking on the part of the developer. Whereas with traditional programming it would be his job to “structure the problem and the path to its solution”, in ML, he should hold back his human notions and let the network discover its own structure.

To use an analogy: whereas we teach our children concepts step by step - you and me, identifying objects, grasping objects, referring to objects, etc., all the way up in complexity to writing thoughtful comments on HN - the neural network is to be bombarded with unfiltered real-world data. And the hope is that, if you just do this long enough, the AI will come up with the important underlying concepts by itself. That is to say, for GPT-3, the idea that words are really describing a real world out there, with objects that have certain properties, and that claims may be true or false. Or, for Tesla, that the car is a physical object in space, and that it can collide with other objects.

This is very unintuitive. It might be the right approach. But if I was building a self driving car, I would want to build that system block by block. Learn the basics. And only proceed to the next level of complexity once I have verified that the previous step has been properly learned.

1. Let the car learn to predict its own acceleration, steering and braking.

2. Repeat on different road surfaces.

3. Add other objects to avoid.

4. Train the ability to tell objects from non-obstacles (fog, a raindrop on the camera).

5. Train lane markings and traffic lights on empty roads.

...

X. Throw real-world data at it.

Now you can hope (and validate) that the model will have the foundation to make sense of it. Like a kid after high school, it will still make dumb mistakes. But at least you can talk about them, understand why they happened, and take corrective action. Because, while you do not know the other’s mind, you do speak the same language.

With GPT-3 and with FSD, we just have no idea.
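
In ML terms, the step-by-step approach described above is roughly curriculum learning. A rough sketch of the idea (invented stage names and stand-in functions; this is not how Tesla, or anyone in particular, actually trains):

  import random
  from dataclasses import dataclass
  from typing import Callable

  @dataclass
  class Stage:
      name: str
      train_step: Callable[[], None]    # stand-in for training on this stage's data
      evaluate: Callable[[], float]     # stand-in returning validation accuracy

  def train_with_curriculum(stages, required_accuracy=0.95, max_epochs=50):
      """Only move on to the next, harder stage once the current one is mastered."""
      for stage in stages:
          for _ in range(max_epochs):
              stage.train_step()
              if stage.evaluate() >= required_accuracy:
                  break
          else:
              raise RuntimeError(f"never mastered stage: {stage.name}")

  # Dummy stages just to make the sketch runnable:
  stages = [Stage(name, lambda: None, lambda: random.uniform(0.9, 1.0))
            for name in ("ego dynamics", "static obstacles", "lane markings", "real traffic")]
  train_with_curriculum(stages)

Whether such a curriculum actually beats end-to-end training on raw fleet data is an open empirical question; the sketch only pins down what "block by block" would mean in practice.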


It's telling if Tesla doesn't have an explanation either. Remember when they had early autopilot accidents/fatalities there was an almost instantaneous response from Tesla--even Elon himself tweeting out detailed root cause analysis of accidents that would show the safety driver being impaired, the system working as intended, the pedestrian being impaired, etc.

So if an incident like this happens and we don't hear anything from Tesla it tells me there's a far, far bigger problem here with their failure to collect diagnostics, or failure to analyze these problems. Ideally we should see a hyper detailed root cause analysis even going into the vision model inputs in this exact case and exactly how they failed to classify and detect this cyclist, and most importantly how the beta software is being improved to fix issues like this. Silence is deafening here.


The GUI is the biggest indictment of it all. Have you seen the line from the trajectory planner? It's spazzing out all the time, even if the car ends up going the right way. Their system literally has no concept of planning and is permanently re-evaluating, which is why it goes pedal to the metal, hard-steering into the cyclist at a frame's notice.


Not just the GUI, the collision avoidance assist alarm was going off.


Tesla hired a bunch of vision people, and to their credit, it detects pretty much everything in its FOV all the time. But these results show that vision AI isn't driving AI and the hard part about driving isn't seeing things, it's predicting their future and handling the situation.


> It looks like it sees the cyclist and goes right ahead into them. Anybody have an explanation for this behavior?

Its sensitivity was probably lowered because it would brake "too often" for a comfortable consumer product if it were cautious enough to reliably avoid hitting pedestrians.

This is what happened with the Uber self-driving vehicle that struck and killed a pedestrian[1]. The system detected the pedestrian as an obstruction, but Uber had programmed the system to ignore it because of "phantom braking". Instead of braking, it proceeded to strike and kill the pedestrian.

[1] https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg
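
To be concrete about that failure mode (this is a schematic of the behaviour described in the NTSB report, not Uber's actual code): if you gate braking on a detection being "stable" in order to avoid phantom stops, an object the classifier keeps re-labelling never looks stable, so the brake never fires.

  def should_brake(track_history, min_stable_frames=10):
      """Only brake if the same object class has been tracked for N consecutive frames."""
      if len(track_history) < min_stable_frames:
          return False
      recent = track_history[-min_stable_frames:]
      return all(cls == recent[0] for cls in recent)

  # The Herzberg investigation described the object flickering between classes
  # (vehicle / bicycle / other), resetting the system's idea of what it was tracking:
  flickering_track = ["vehicle", "bicycle", "other", "bicycle", "vehicle",
                      "other", "bicycle", "vehicle", "bicycle", "other"]
  print(should_brake(flickering_track))  # False -> no braking for a real obstacle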


This cyclist went on to murder three people. The Tesla was simply choosing the better of two bad options to minimize fatality.


Actually they went on to sue Tesla, saving many lives


It is funny to me how most of the self-driving industry is hellbent on solving the long-tail problem while Tesla is stuck on…most of it.

It’ll be interesting to see if they ever add LIDAR/re-add radar and admit their mistake.


The cyclist was visible on the screen. This is obviously a case of "you don't get a good driving AI by hiring dozens of vision AI people".


Most of the self driving industry is not operating at scales like Tesla, or in as diverse environments.

I'm extremely skeptical they're going to fare much better on scale-up: it's notable that the Uber crash involved LIDAR and the car still hit a similar object (a metal mesh with lots of gaps, and a person).


It seems that Tesla’s main issue is detection, not decision making (well at least their biggest issue).

Sure, some environments, like extreme weather, are not being entered by most self-driving companies. However, the rest of the industry's detection stacks generally work fine in the vast majority of places; it's their decision making that needs work.

As others pointed out, Uber failed not because of a lack of detection of the person. Moreover, the cases where Tesla publicly fails aren’t particularly challenging for detection (daylight, good weather).


People treat this as though all Teslas everywhere were for some reason never trained on a dataset including cyclists off to the side, and make emotional statements like "obviously this is someone's fault for not dealing with this obvious scenario".

But we all (should) know that's not how machine object detection systems work: they would reliably pass this sort of scenario all the time, and then some sort of adverse (input data wise) input situation from something humans don't even notice would trigger a complete failure.

Which is to say it's not intuitively obvious that other systems won't have these sorts of failure scenarios - it certainly won't be proven until they have sufficiently widespread deployment to eke them out.
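
The "input situation humans don't even notice" point is easy to demonstrate on a toy model (a textbook-style illustration, nothing to do with Tesla's actual network): near a decision boundary, a perturbation far smaller than anything a person would notice can flip the output.

  # Toy linear "detector": a positive score means "cyclist present".
  w = [0.9, -0.4, 0.2]
  b = -0.05
  x = [0.10, 0.08, 0.05]                       # an input the model barely gets right

  score = sum(wi * xi for wi, xi in zip(w, x)) + b
  print(score > 0)                             # True: detected

  # Nudge each feature by 0.02 in the worst-case direction (the sign of w):
  eps = 0.02
  x_adv = [xi - eps if wi > 0 else xi + eps for wi, xi in zip(w, x)]
  adv_score = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
  print(adv_score > 0)                         # False: the detection silently vanishes

A real network is vastly more complex, but this kind of boundary-crossing under imperceptible input changes is essentially what unexpected perception failures look like.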


Totally agree here. I do think redundant detection is important though, and this highlights the issue with the vision-only approach. If your object detection fails unexpectedly, as often happens with our current approaches, it's not catastrophic if you have another sensor modality to at least indicate there's a serious missed detection.


Uber hit the person because they had disabled automatic emergency braking, not because the sensors failed to detect objects.


Their Lidar wasn't plugged into AEB, so it's not like Lidar would have helped AEB either. The car's built in radar would have triggered a stop.


right, going to need more data to avoid running over a cyclist, as someone biking on the street is, as we all know, a black swan event.

after all, who would want to bike somewhere with no protected bike infrastructure that allows shit like this to happen on public streets?


Where was the car even going? Into the bus lane?

It seems like these issues could be easily avoided by putting LIDAR back in. I'm afraid that they're hesitant to do so because they'd have to admit that removing it was a mistake for all of the cars on the road without LIDAR.


Do you mean radar? I thought Teslas never had LIDAR.


At least some had LIDAR for a while, but it's all disabled on recent updates because Musk wanted to save some money and trim LIDAR from production.


Lidar was never in a production Tesla; however, they do run Lidar cars with manufacturer plates through the streets to keep HD maps up to date (I imagine they contract out/buy HD maps for the rest of the US). Radar is also still fully enabled unless you're in a 3/Y with radar and FSD beta, as up-to-date non-FSD radar cars can still TACC up to 90 MPH.

*: HD maps are mainly for complicated interchanges, visualizations, and aid in not curbing the rims of the car; a few early FSD videos show what it does when it really has no map data, and it's less confident overall in turns as it tries to make out the edges of a road.


The user had tried to change lanes to (I assume) get into the right turn only lane past the bus lane.


There was a discussion on HN a few days ago about Star Citizen, and I'm realizing that Tesla FSD and Star Citizen have a lot in common. Both were promised to be delivered years ago, beta testers keep showing each to be quite buggy, there is reason to believe it may not even be technically possible to deliver everything that was promised, and they both have their fanboys who are ready to defend it and accuse skeptics of being haters who just want to see it fail.


As much as I like the Tesla vehicles and Autopilot in general, I'm really baffled as to why the FSD beta is allowed in its current form. The beta testers have done well for the most part, but its just such a strange feature to have in the public.

It has to be watched really closely and each build seems to replace old problems with new problems.


We drive significant distances each year and I need to be comfortable when I drive. If the wheel alignment is out, it's an irritation having to continually push the wheel to centre and I get it fixed. If something in the load is continually rattling or squeaking (I hate polystyrene), I pack it so that it doesn't. Irritations like this are tiring and potentially dangerous distractions.

I love the idea of fully autonomous driving where I can sit back and read a book or watch a movie, but I cannot imagine what it would be like driving a car where you are not actually driving, but have to be constantly on your guard in case it decides to do something unexpected.

Driving with constant attention is fairly easy; I would imagine that not driving, but having to continually monitor the car, ready to make split-second interventions, would be exhausting.


"… and now with a software update, you can actually make /999/ of people safer"


Three nines is good enough for Tesla, I guess.

It is worth noting that humans make stupid mistakes like this too, but I don't think this excuses the automation's mistake. Cars don't abuse substances (okay, most of the time), so we should be throwing out like 75% of all accidents right there. Do better, Tesla. Beta testing like this is unacceptable when it is fucking with cyclists' safety.

On the technical side of things, I can see how a slower-but-still-moving object next to the car might screw with path-making. But this isn't how the bugs should be hammered out. I say we outlaw all OTA updates for vehicles. If it didn't ship with the software it's running, it's not street-legal. (And as for vulnerability fixes, Tesla's smart. Let them figure it out.)


> It is worth noting that humans make stupid mistakes like this too

Humans generally don't. Only humans that are impaired (drunk/lack of sleep) do this more than "autopilot" in its current state: https://twitter.com/Tweetermeyer/status/1488673180403191808


True. What I was trying to say is that since the automation does not have this weakness, we should be improving on mistake-rates, not introducing more failure points.


I mean... both situations could still be true :-P

but seriously, that was a serious WTF video. The comedic timing and the horror of what that car just tried to do.


"It may do the wrong thing at the worst time" -- each FSD Beta announcement email.


As another driver or cyclist on the road, I don't receive those emails nor did I agree to any terms, yet I am still being put at risk in this "beta." Lucky me, I guess? Where are regulators?


The question raised here is not whether FSD works. Everyone can see that it does not. The question is why the two people in the video can simultaneously be rich Americans with a fancy new car and also so naïve and gullible about FSD. Where did we leave our critical thinking skills? How did we manage to structure a society that rewards people who lack them?


In the town where I grew up, the police HQ had a Safety Village for teaching kids the rules of the road and such. It had a little McDonald’s and a little post office and even a little Tim Horton’s. I loved going there as a field trip. We would get to drive Power Wheels jeeps.

How large is the biggest fake city used for soak testing by one of these autonomous car companies?

How much did it cost?


I have a Tesla Model Y and I love it so much. I also paid $10k for “full self driving” and I’m in the beta program and I’d ask for a refund for that if I could. It behaves unpredictably - sometimes it seems smarter than you could imagine and then it does something that seems obviously dumb. That unpredictability makes it stressful to drive because being ready to take over instantly means trying to predict what it will do. I couldn’t live with myself if it did something I wasn’t prepared for and someone got hurt. So I don’t use it. It’s unfortunate that Elon has gone all in on self driving being the future of Tesla because they built a hell of a car on its own merits.

These Tesla threads are always tense, but there are a lot of important discussions to be had about Tesla and FSD wrt technology and society. That said, user omedyentral seems to be farming anti-Tesla rage both here on HN and on their Twitter page. I’m not sure that’s in the spirit of what HN is trying to be.


There are a lot of people out there who think Tesla FSD is close to completion, even on HN. What’s wrong in showing the reality of where Tesla stands in terms of a safety critical technology using real world footage?

There are anti-{FB, Google, Big Tech} posts on HN all the time. I’ve never seen anyone say that’s against the spirit of HN. What makes Tesla special?


> That said user: omedyentral seems to be farming anti tesla rage both here on HN and on their Twitter page. I’m not sure that’s in the spirit of what HN is trying to be.

It's actually your post that is against HN guidelines[1]:

> Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

[1] https://news.ycombinator.com/newsguidelines.html


TIL. Thanks.


"That unpredictability makes it stressful to drive because being ready to take over instantly means trying to predict what it will do."

I feel this. I rarely if ever use autopilot because I find the cognitive load and stress of monitoring it to be higher than just driving the car myself. The exception to this is when the road is absolutely empty.


> I’m in the beta program and I’d ask for a refund for that if I could.

I don't know what the FSD license says, but I'd look into Lemon Laws in your state. If any other part of your car made you have legitimate concerns about your safety, you'd take it back; same thing should apply here.


I think this is undisputed evidence that this FSD, aka ‘Fools Self Driving’, contraption is beyond defending, as it has clearly put the lives of drivers and cyclists at risk, as I have said multiple times.

Tesla had a good run with this contraption, but it is time to scrutinize this fraudulent system that does not work as advertised and malfunctions on public roads, nearly running over or crashing into cars and cyclists.

And this failure wasn’t even at night; there is no night-vision support to speak of. The car doesn’t even monitor the driver’s attention properly (it still relies on the wheel). How can FSD possibly be safe to use when the system fails in daylight and cannot see at night?

Perhaps it is better not to pay to sign up as a crash test dummy for safety-critical software that doesn’t even work.

No thanks and absolutely no deal.


As a Tesla driver in the Northeast, I have always felt Tesla AP/FSD development has a sunny, mild/hot-weather California bias, and fails to account for the vagaries of driving in the rest of the world.

The Northeast has high density and old infrastructure (narrow bridges, tunnels and highways, a lack of medians and breakdown lanes, etc., which are out of spec by modern standards), fewer sunshine hours, and cold weather (and the resulting pothole and lane-line degradation of the streets).

On the other hand, I keep seeing a lot of crazy FSD videos from the Bay Area, so clearly they don't even have their home turf covered.

I still don't see how local-road FSD is going to happen on current hardware: camera resolution, camera placement, and a CPU unable to handle high frame rates. We are perpetually 5 years away.


As a cyclist, I view self driving cars as one of my biggest wins in terms of safety relative to the easily distracted monkeys who drive them now. Sad to see Tesla doing damage to the overall effort with their GSD approach to it. Hope they don't salt the earth behind them.

Seems like Google, Cruise and co are doing well enough here to make it look like a tesla problem as opposed to a self driving car problem.


Does Tesla get sued when people are injured or die due to self-driving failures? This can be people inside or outside the Tesla car. I am surprised that I never saw any press or discussion about settlements with Tesla.


Move fast and break cyclists


From my perspective, everything went right here. The software screwed up, it alerted the passengers that a collision was going to happen, and a human driver quickly took over and avoided the accident. Is this not what we should expect from a beta?

The responsibility is still on the driver even with driver assist / full self driving technology turned on. All this talk of regulating the software when we just saw a worst case scenario resolve with a favorable outcome.

"But what if the driver wasn't paying attention?" Then the driver will face the consequences as if he had been driving the car himself, because it's his car and it's his responsibility.


> From my perspective, everything went right here.

Did you miss the part where the autopilot steered straight toward a clearly visible cyclist?

If the autopilot is so unreliable that it requires lightning-quick reflexes to override, it's going to lead to a lot of accidents from people who don't happen to be paying perfect attention at the time. I know the whole story is that drivers are supposed to be 100% engaged, but realistically that's just not going to happen when you literally market something as "full self-driving"


I would add that it's human nature to fall into a mode of complacency when it works 99% of the time. We've got a story on the front page about "the rock says" heuristics working 99% of the time. The rock says TESLA DRIVES CORRECTLY, and it's mostly right. So we relax, and there's a dead cyclist.


The name is stupid and should be changed, but I also don't want people to be absolved of responsibility because they bought into marketing. "But they said the ship couldn't sink!" Doesn't matter, you're still on the hook for the hundreds of people who drowned. Due diligence is a part of responsible adulthood.

The airlines that got fucked by Boeing's MCAS still had to send payouts to families who lost loved ones, because it was ultimately their decision to buy and operate a 737 MAX. They can (and certainly should) sue Boeing for the losses, but it doesn't change the fact that the crashes are theirs to own. You don't get a get-out-of-jail-free card for being gullible.


This comment is factually wrong. The airlines did not pay compensation to the victims' families, Boeing did. Boeing accepted full responsibility for the crash, for providing an unsafe system. Ethiopian Airlines did their "due diligence" by purchasing an FAA-approved aircraft from Boeing. They weren't to know that a serious case of regulatory capture was happening. https://www.google.com/url?q=https://www.theguardian.com/bus...


Huh, TIL (and relevant username). Thank you for the correction. I noticed the article says they only admitted fault for the second crash, do you know what happened with the first? Did Boeing admit fault for that too?


I'm not sure actually. There was a lot more attention to the second crash after it became apparent that Boeing had sold an unsafe aircraft.


I believe that GP's point is that it demonstrably did not require lightning-quick reflexes to override: a normal human driver with normal human response times prevented the accident. No one was hurt or killed. Presumably this incident is an important bug that will now be fixed.

That's the best possible outcome, no?


> No one was hurt or killed.

This time...

> Presumably this incident is an important bug that will now be fixed.

What gives you this confidence? Tesla started decapitating people in 2016, and 5 years later it's still lunging toward cyclists, running over pedestrians, running into parked cars, running into barriers, running stop signs...

Tesla has a fundamental perception problem. Teslas fundamentally have blind spots due to their perception stack. The way this gets better is that Tesla:

- admits its perception system is a bust, and it goes for a more robust system which takes into account over a decade of industry wisdom as to how to properly engineer a potentially life-threatening system with safety-first engineering practices

- starts emphasizing a safety-focused company culture that doesn't use the general public as a beta testing playground.

I predict neither of those things will happen.


The best possible outcome would be "the cyclist was not at additional or different risks".


The training we give drivers on how to operate cars and the failure modes the training prepares for are designed for cars without FSD, though, and I'm not sure it adequately prepares drivers to compensate for the possible failure modes of FSD. Especially when those can change from release to release, making it hard to get used in any meaningful sense to the system. This is also a lot more stressful than regular driving and may require better driver condition and more frequent rest.

Unfinished vehicles on the road are nothing new; all companies developing cars move pre-prod hardware across public roads. However it usually requires several levels of additional licenses and special safety driving training and instruction before you're allowed to operate a car that may surprise you by behaving very different from a certified production road car. And regular renewal thereafter.

I think what's unprecedented here is the scale of the pre-prod testing, the type of experimentation being done, and how the drivers are selected. And also the degree to which the experimental engineering approach is designed around the system having to fail in order to improve (by widening the training set). There's enough new here that it should be discussed, imho.

Side note: German driver's license training now includes some material on assistance systems, how they may behave unreliably and how to prepare for their failure modes ... but of course this doesn't retroactively reach existing license holders. The rate of change in driver-car interaction being so high is also something new.


> However it usually requires several levels of additional licenses

This should be highlighted. I work at another car company, and we're not even allowed to drive the very lightly modified test cars without a special license, and those are essentially production vehicles just with extra debug logging. That extra license requires being CPR certified as well as two separate driving course certifications. The idea that Tesla can push this out to basically anyone who wants it is absolutely absurd.


It is for sure unprecedented and should be discussed. I just don't buy into the knee-jerk reaction of getting this off the roads ASAP when we know drivers use FSD to drive millions of miles a year and accidents are not common at all. It shows that the drivers seem to be responsible enough to keep the tech in check.


It may show that for the drivers who currently have access, but Tesla selects/limits the beta participants based on proprietary "safe driver" scoring that isn't publically documented. So it's the product manufacturer who can turn the accident probability screw here, or could also get it wrong. And I think that gets to the bottom of what makes many people feel uneasy.


> that isn't publically documented.

https://www.tesla.com/support/safety-score

1. what it uses

2. how it's calculated

So far they've let everyone in with a 98 or above, but they come in waves so getting 98 doesn't necessarily mean you'll soon have access.


Thanks! I didn't know this doc. The only thing I see missing there is a changelog, so you can see when they adjust the formula or its constants, which they say should be expected.


> From my perspective, everything went right here.

Holy smokes. Did we watch the same video?

> Is this not what we should expect from a beta?

Is it OK for a beta software product to nearly kill another person that has no agreement with the company putting the product on the road?

People have the right to do whatever they want with their own stuff, fine. But putting others at risk comes with huge responsibility, and must be weighed against the desire to do what you want. In this case the balance clearly falls IMO to Tesla being selfish and negligent by employing their drivers to be the safety backstop in a clearly buggy system.


Many car systems can fail. Think of power steering going out with the tires rotated 50 degrees to the right and a driver that doesn't know how to properly steer without it.


> Is this not what we should expect from a beta?

No, what we should expect from a beta is for it to not be deployed at scale, to non-professional drivers, in a live environment. It's clearly still not ready to go live, much less under the name "full self driving"


What you're missing is that it isn't reasonable to expect people to be paying enough attention to "quickly take over."


Nobody should be using driver-assist technology if they're not ready to accept the consequences it may land them in. We let teenagers buy sports cars despite their being totally incapable of understanding the risks that 300+ bhp brings. I don't see the difference between that and FSD.


It blows my mind that the US doesn't have any power-to-weight restrictions for new drivers.

In Australia until you were off your probationary license (which was a minimum of three years after your learners permit, which was either 1 or 2 years, it's been a while), there were power-to-weight restrictions on the vehicles you were allowed to drive.

Here? Saw a YouTube video of a seventeen year old doing his driver's license test in a 1500hp Bugatti. That'll end well, no doubt.


> Then the driver will face the consequences as if he had been driving the car himself, because it's his car and it's his responsibility.

You must be unfamiliar with the usual consequences of a driver killing cyclist(s)


Yep. Cyclists are murdered/manslaughtered with regularity by drivers who receive zero punishment, and the cyclists and their families receive zero justice. Not just average schmucks in the US, either. Professional cyclists like Michele Scarponi have been killed in the EU too, in his case by a driver using their phone. I forget the results of that case.

But cyclists rarely receive the protection of laws that supposedly exist for that purpose.


Looking forward to getting killed by a tesla and then having a bunch of tesla stans argue that my preventable death was actually OK and an example of the Great Progress being made towards reducing traffic violence!


So basically the Uber thing from a year or two ago?


> and a human driver quickly took over and avoided the accident

Can we be sure that the average driver who has received no special training in the matter will be able to take over from an autopilot in a split second when driving at high speeds? Heck people can't pay attention and keep their eyes on the road even when they are the ones in control.

> The responsibility is still on the driver even with driver assist / full self driving technology turned on.

If/when the system is rolled out at scale will drivers actually comprehend that?

> Then the driver will face the consequences as if he had been driving the car himself

Assigning blame after the fact is fine, but no one can bring a dead pedestrian back to life.


> Can we be sure that the average driver who has received no special training in the matter will be able to take over from an autopilot in a split second when driving at high speeds?

The beta system controls for this by only allowing in people who score 98 or above in Tesla's safety surveillance program. The restrictions are tight enough that you have to actively try to be a safe driver to get anywhere close to the cutoff (some people even got pissed off because their score went down because someone brake checked them and they had to decelerate too quickly). Not everyone who buys FSD gets into the beta.

> If/when the system is rolled out at scale will drivers actually comprehend that?

The only ethical way to roll this out at a wider scale than the scale allowed by the safety surveillance program is if it's good enough to avoid incidents like the one depicted here. I think we can all conclude that Tesla deserves the fat lawsuits if they were to green light FSD in the state it is in today.

> but no one can bring a dead pedestrian back to life.

An unfortunate fact of life, hopefully made irrelevant by the coming of age of autonomous transportation technology (including but not limited to automobiles) that makes motor vehicle accidents a thing of the past. Until we get there, though, someone has to keep trying.


> An unfortunate fact of life, hopefully made irrelevant by the comeuppance of autonomous transportation technology (including but not limited to automobiles) that makes motor vehicle accidents a thing of the past. Until we get there, though, someone has to keep trying.

Autonomous transportation already exists, it already works, and it's _already safe_! It just isn't deployed for private automobiles, it's deployed on rail-based transit systems. The technology is widely deployed across Europe and Asia. How many more people need to die to let Americans have a more comfortable ride in their private vehicles?


> How many more people need to die to let Americans have a more comfortable ride in their private vehicles?

That's the entire point of America. We have so much space, so we make every effort to use it, regardless of the consequences.


Expecting every buyer to have the ability and discipline to be perpetually watchful for something terrible when almost all of the time there's nothing to worry about is unreasonable. For most humans, standing guard (well) is really hard work, whereas for most drivers, driving is not. This is part of the reason being a cop or a soldier is so stressful.


> Then the driver will face the consequences as if he had been driving the car himself

What consequences? Drivers almost never face any consequences whatsoever for manslaughter or murder with a car.

Even if they did face "consequences", they would not be anywhere near the consequences a completely random, innocent, and highly vulnerable road user will.


> Is this not what we should expect from a beta?

The issue here is that the person on the street does not know they are the one likely to be impacted by a bug/failure of software in beta. Beta products are usually not released to the public, or are opt-in.


Beta self-driving software shouldn't be on the road in the hands of random drivers. It's crazy to me that people think this is ok because "it's just a beta!" The stakes are vastly different between a bug in a photo-sharing app and a 2-ton hunk of metal targeting bicyclists by accident.


I've been driving a rental Model 3 with FSD for the past day - for the first time - and totally agree. And after a day's experience, it's obvious it needs to be watched. It's an amazing capability but there are clearly still corner cases, and it makes you very aware of them.

I liken Autopilot / FSD in its current state more to a horse. You keep a firm but gentle hand on the rein and let it find its way. Sometimes you have to check it.

If you get complacent (which the screen explicitly warns you against), yeah, things could go wrong. That's also 100% true when driving a normal car.

Edit to add: I think a lot of criticism focuses on the term "Full Self-Driving", which implies the car can fully drive itself. That's fair, but I think it misses an important point. ("Autopilot" made a lot more sense to me: real autopilots can't do everything either. They can't even taxi.)

When you're actually IN THE CAR, enabling this functionality for the first time, you have to enable three separate toggle switches. Each one pops up a progressively longer, more dire WARNING dialog, emphasizing the software's primitive state, the importance of paying attention and keeping your hand on the wheel. It also demands your constant attention while engaged.


> It's an amazing capability but there are clearly still corner cases, and it makes you very aware of them.

The thing that most of these companies are failing to grasp is that the whole endeavor is like 99% corner cases. It's corner cases all the way down. The hope was that there was this big chunk of the problem space that could be handled in a tractable way, and that would hopefully account for ~80% of driving conditions. Then corner cases could be handled on an as-needed basis to fill in the gaps, and slowly you could handle more and more scenarios.

The problem is that the reason for so many corner cases in this domain is that driving is a social activity. The rules of the road actually belie how much communication between humans goes on when they are driving. Driving has its own norms, its own language, culture, emergent behaviors, and traffic exists as a crucial force that must be balanced in a city ecosystem. But autonomous cars aim to solve the driving problem by ignoring the social, and reducing it to an optimization problem. Autonomous cars are blind to the larger world around them except to recognize colors and the nearness of physical bodies.

They miss social cues, they lack higher-order reasoning about traffic patterns, they can't negotiate with pedestrians or other drivers, and they are unable to weigh the ethical considerations of when following a law is actually worse than breaking it.

As of now I consider them menaces on the road, and that pains me as someone who got to play with some of the first driverless car technology in the lab. I had such high hopes for it, and what I hoped was not... this.


It's like a teenage driver. You have to keep an eye out for developing situations you know it's too stupid to handle elegantly.


Oops, maybe it'll be ready next year instead


We're now, what, negative three years since the 2019 date Elon said cars would be full self driving?


This year is actually the _ninth_ year in a row Elon has promised that they would be self-driving "this year" or "next".


I believe the way Tesla is approaching autonomous driving is fundamentally flawed by political/ethical reasons.

The general public accepts the fact that humans may make fatal mistakes while driving, because responsibility is expected and ultimately there’s someone to punish. But automation making fatal mistakes, even at a lower rate than humans, isn’t accepted the same way, in part because companies are not individuals – Elon Musk isn’t personally responsible for any accidents – and in part because this space is currently unregulated – unlike in aviation, where some body has to vet a manufacturer’s plane project before it can be mass produced.

So, Tesla wants to iterate a product with significant risks in the wild while unregulated, at the same time enjoying a lower level of accountability than an individual holds while driving – they just put in the contract that the “self-driving” feature has to be monitored at all times and that if it fails the blame is on the driver.

This is not a surprise considering Musk’s apparently extreme libertarian views, but it is obviously BS, as it’s a way to dodge risk while capturing all the upside. It’s a perfectly rational thing to do in his opinion (we’re taking risks to innovate, etc.), but it’s ultimately unethical.

Maybe self-driving would be more reliably achievable with the help of local governments adapting roads so cars don’t have to rely only on their sensors, or with the whole industry agreeing on a standard so cars can coordinate – but that would mean regulation, and Musk doesn’t believe in that. He believes Tesla can hack its way out of the problem with computer vision technology alone. I don’t think it will succeed, because if you look at aviation, another industry that enjoys a high level of automation and safety, nothing would be possible on the initiative of a single plane manufacturer – the automation relies on equipment installed on planes, at airports, etc., all standardized, as well as training and processes followed by the entire industry.

Time will tell who’s right, but my bet is that a reliable self-driving car has a better chance of coming out of Europe, Japan, Korea or China, where manufacturers and governments have a better chance of collaborating on a solution that isn’t stuck in the “computer vision” local maximum.


I wonder how many people who think this is okay actually bike in locales with high populations of white teslas?


[flagged]


Forget important, this isn’t even a problem to highlight and your comment contributes nothing of value.


Looks like a marketing campaign from a competitor to be honest...


I'll give you $20 to be that cyclist


Would be interested to see footage from other companies like Cruise, any open beta testing footage like this?


No.

Because Cruise isn't stupid or reckless enough to (a) have open betas and (b) deliver FSD at all without some form of human backup.


Their taxis they just launched in SF are driverless?

https://mobile.twitter.com/kvogt/status/1488559060785975298


They are monitored by professional humans who take over in the event of any issues.

No offence to Tesla owners but too many of them don't take the limitations of FSD seriously enough.


Where is the professional in the video above (the one in the tweet)?

Just two passengers in the back seat.


Cruise and Waymo have both had fully driverless taxis running for the public for quite some time. If either of them had ever tried to murder a passenger or passerby I'm sure we would have heard about it.


There's a fundamental difference between LIDAR-based self-driving and vision-based. The difference is that LIDAR tells you EXACTLY where EVERY possible obstacle is. It's easy to develop logic to avoid hitting something.

With vision, you're basically having a machine learning model guess where the object is, its size, etc. So there's a chance of failing the basic task of knowing that there's something there to hit at all.
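
A toy illustration of the geometric difference (this resembles no production stack): with a point cloud you can ask "is anything physically inside my planned corridor?" directly, whereas with vision a learned model first has to tell you that an object, and its distance, exist at all.

  def corridor_blocked(points, corridor_half_width=1.2, max_range=30.0):
      """True if any lidar return lies inside the straight-ahead corridor.
      `points` are (x, y) returns in the vehicle frame: x lateral, y forward, in metres."""
      return any(abs(x) <= corridor_half_width and 0.0 < y <= max_range
                 for (x, y) in points)

  # Toy frame: clutter far off to the sides, plus a return from a cyclist ~9 m ahead.
  frame = [(-5.0, 4.0), (6.0, 12.0), (0.7, 9.0), (-4.0, 20.0)]
  print(corridor_blocked(frame))  # True -> something is physically in the way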


Well, you still have to do perception on the lidar data to segment the road surface and classify objects. But it is a much easier problem (takes you from infeasible to extremely hard).


There’s tons of videos of Waymo’s service in Chandler.


Bad news about Tesla is better click bait so you won’t hear about screwups with those.

It’s like how there are thousands of gasoline fires a year but stop the presses if an EV goes up.


> It’s like how there are thousands of gasoline fires a year but stop the presses if an EV goes up.

It's because EVs are new, and people want to know the risks. Granted, it would be nice for reporting to also mention that gasoline fires are as or more common, and/or explain the distinctions between fires in a gas vehicle vs. an EV.


Why do so many people want to ban FSD Beta from public roads every time it almost causes an accident, when we usually don't take away people's driver's licenses even when they do cause accidents?


Because we do take people's licenses away if they repeatedly cause accidents?

Treat the Tesla FSD as a human driver and it would've had its license revoked many years ago. It's only through close supervision by humans that accidents have been as infrequent as they have been.

In the clip in question, if a human hadn't been paying attention, FSD would have knocked a cyclist over - at best. Do that enough times as a human - and it wouldn't take many - and you'll be disqualified from driving pretty promptly in most civilised countries.


Interesting to treat FSD as a single driver, when it has driven further than any human driver has, many times over. Objectively, that is a bit of a double standard.


In my state vehicular manslaughter is still vehicular manslaughter, no matter how many miles you drive.


It’s arguably an inappropriate standard, but not a double standard. If as a human driver you drive a bunch but get into accidents frequently, they won’t make any allowances for the fact that you drive a lot when deciding whether to suspend your license.


but they obviously should.


That’s not true multiplied across the hours and miles it has driven.

No single human driver could do this amount of driving, so comparing all FSD mistakes to a single human is an error.


Do we apply the same logic to the 737MAX?


The 737 MAX actually had a really bad rate of fatal accidents per hour of flight. Like if you flew a certain number of hours per year on the 737 MAX prior to the MCAS fix you’d be more likely to die than if you drove for the same number of hours.


Because a human putting 5 in the field for 1+3 is an isolated mistake, but the machine doing it is the result of a deterministic process that is likely to occur much the same way over and over again.

In other words, no, we don't ignore MCAS causing the plane to go into a stall under some isolated circumstance just because humans also put planes into stall.


> 5 in the field for 1+3

Please translate.


Strawman much?

If you link a twitter video from a driver swerving erratically at cyclists, there will be no shortage of a mob demanding the driver lose his license and be put behind bars.

In this case the driver is an AI, so you respond by banishing all the cars this AI controls: goodbye, Tesla FSD.


Doesn't that kind of stuff get linked all the time in /r/IdiotsInCars? But how many of those people do you ever hear of losing their licenses?


>when we usually don't take away people's driver's licenses even when they do cause accidents?

Huh? Yes, we do?


Not really. We will take away drivers' licenses for driving under the influence or extreme speeding, but a lot of accidents are due to long-tail bad drivers (elderly people who just don't have good reactions, careless people, people who just haven't learnt to drive well, cell phone users, etc.) who rarely have their licenses taken away.

Of course, I think the solution to this is improving public transport and more aggressively taking away people’s driving licenses, rather than giving autopilot a pass.


> who rarely have their licenses taken away.

My understanding is if you accumulate a certain number of points, it gets taken away. The reason DUI etc. receives such harsh penalties is because the number of accidents DUI has caused makes it obvious that drinking is a factor. It should be the same with "autopilot".


I agree with what you say and just want to add that the benchmark for FSD shouldn't be the bad drivers; it should be the best ones.


Because one guy doesn't drive 12,000 cars at the same time, while the Tesla FSD Beta software does.


A single human driver can't kill other people at scale. Human drivers are predictable in aggregate, from decades of observation of driving patterns. Every single software update has the potential to turn the whole fleet into murder machines at once even if last week it was comparable to human performance.
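
A toy way to see the correlation point, in Python (both probabilities below are invented, nothing is measured): independent human errors show up as a steady scatter across the fleet, while a buggy fleet-wide update fails everywhere at once.

    # Toy illustration only; both probabilities are invented.
    import random
    random.seed(0)

    fleet_size = 100_000
    p_human_serious_error_today = 1e-4   # assumed, per driver per day
    p_release_ships_serious_bug = 1e-3   # assumed, per fleet-wide update

    human_incidents = sum(random.random() < p_human_serious_error_today
                          for _ in range(fleet_size))
    fleet_incidents = fleet_size if random.random() < p_release_ships_serious_bug else 0

    print(human_incidents)  # typically around 10: scattered, uncorrelated incidents
    print(fleet_incidents)  # almost always 0, but when it isn't, it's everyone at once

With these made-up numbers the update path is quiet most of the time, but its worst case is the entire fleet simultaneously, which is the concern.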


The standards for automated driving have to be way higher than they are for human drivers. What's the point otherwise? "it's ok if our buggy AI car crashes into things because humans do too" is not a great pitch


Reckless driving, DUI etc convictions often come with license suspensions, and ultimately license revocation if the bad behavior continues.


A lot of people lost a lot of money shorting Tesla, or currently hold short positions, and will grind their ax in any context where it's even vaguely relevant.

A lot of people simply don't realize that human drivers kill each other by tens of thousands per annum in the US alone, so there's a lot of room for self-driving technology to do a lot of good.

A lot of people are unhappy at being used as unwitting beta testers simply by virtue of having to share the road with beta FSD systems. They have a point, but see the previous one.

Finally, a lot of people question some of the engineering practices at Tesla, such as their refusal to incorporate multiple sensor inputs as failsafes. I'd fall into that category, personally.


> so there's a lot of room for self-driving technology to do a lot of good.

There's a lot of room for self-driving technology that actually works to do a lot of good. Tesla's FSD is hot garbage.


Seriously, this argument is tired.

Cue up those who think that it's fundamentally impossible to see a downside to anything Tesla does without having a vested financial stake in their failure.

That's categorically false.

Or are you going to claim that many of the people here in this thread, for but one example, are "Short TSLA"?

I'm not. I hold no financial stake in TSLA.

> A lot of people simply don't realize that human drivers kill each other by tens of thousands per annum in the US alone

According to the EMR system at my job, I've been an EMS responder on 378 fatality MVAs, and several thousand more non-fatal ones. And I still hold my position that Tesla is reckless. Throwing fixes for safety issues over the fence (phantom braking, for one recent example) only 48-72 hours after the last update proves beyond any doubt that there's no ethos of thorough testing.


> Cue up those who think that it's fundamentally impossible to see a downside to anything Tesla does without having a vested financial stake in their failure.

Is that really a good-faith interpretation of my post? Please consider reading it again, the whole thing this time.


When the question being asked is "Why do people want to ban FSD from roads", and on an article about unsafe activities from the vehicle, I'd expect that the discussion centers around the technology and implication.

In your answer, yes, you do address those things, but (with apologies if that's sincerely not your intent) those are lightly brushed off with a "but people kill in cars, too".

What is not brushed off, and is the first response you have, is entirely about people who have a vested financial interest in the failure of FSD, Tesla, Musk, or some combination of the above. And that's the only response you don't brush off with a "well, but". For but one example, in our marketplace, for better or worse, short selling is an entirely valid tool (though I'm not particularly a fan, and don't even get me started on naked short selling).

So given that it's the first response you have, and that it's a common refrain ("oh, people just want to see it fail"), it's not unexpected that I might express frustration at the discussion not paying much mind to the technology itself.

But as said, if that's not really your intent, my sincere apologies.


> Cue up those who think that it's fundamentally impossible to see a downside to anything Tesla does without having a vested financial stake in their failure.

While you're right in the general case, the presence of "$TSLAQ" in the original tweet means that the author of it almost certainly does have a vested financial stake in their failure.



