It's quite literally a public beta, except that the other traffic participants didn't get a say in the opt-in. Given the stakes for them, I agree that it raises the question.
In the end it's a regulation problem, not a Tesla problem (even though, yes, the specific decision to exercise their options is theirs): do you allow unlicensed drivers to perform this kind of on-road testing of unfinished vehicles or not?
It's not quite a deliberate information hiding campaign as with MCAS (as far as we know). But it is pre-production testing of experimental engineering on safety-critical equipment. Experiments not guaranteed to pan out, and engineering lacking a mature certification process.
The fact that there is such a thing as a 'public beta' for moving two tons of steel through pedestrian areas at lethal speeds seriously makes me question our collective sanity. Imagine if gun vendors did public betas where random police officers just got a beta-test gun.
"It might fire in random directions at arbitrary intervals 5% of the time, but that's just how innovation goes folks. You don't like that, do you hate progress?"
Someone said in another thread earlier today that there is a worry we are making the path towards FSD too erratic, and that if we screw it up, the next shot might not come for generations.
People are arguing, with a straight face, that we are ruining (aspects of) humanity by not letting Tesla be laissez-faire with such things (and let's be real, it's predominantly Tesla that is in full cowboy/YOLO mode... other players in this field certainly aren't perfect, but they are at least trying to be conservative and cautious).
It's still a regulatory failure in the end. If the government does not want to lay down exactly what path should be taken in getting to FSD, then we should not complain when we don't agree with the path that the manufacturers choose.
As long as Tesla can demonstrate a rate of serious harm similar to or better than that of human drivers, why should we stop them? It's up to the government to do its job and outline exactly what price society should bear in pursuit of this paradigm shift (because let's not kid ourselves, FSD is a complete paradigm shift for society).
> As long as Tesla can demonstrate a rate of serious harm similar to or better than that of human drivers, why should we stop them?
Because it can't, or won't. Instead they release statistics that _imply_ it is better, statistics they have to know are absolutely misleading, because anyone who took high-school statistics could see why.
I'm not aware of _anyone_ releasing such data in a form that would allow a fair comparison. Not that I have been looking that closely for it (so I certainly could have missed it!).
This is another major regulatory failure. Government should lay out the minimum data that the companies must publish, so we can get a better idea of their performance. (It's still a chicken and egg scenario though. You can't get that kind of data anywhere but on public roads in uncontrolled environments.)
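To make the statistics point concrete: here's a minimal sketch, with entirely made-up numbers, of how a raw "crashes per mile" comparison can flatter a system whose miles are concentrated on easy highway driving. This is exactly the kind of confounding that any mandated disclosure would have to account for.

```python
# Hypothetical numbers, purely for illustration; not real Tesla or NHTSA data.
# Point: a fleet that drives mostly on highways looks "safer" per mile overall
# even if it is no better than humans on any given road type.

human = {"highway": {"miles": 1_000_000, "crashes": 2},
         "city":    {"miles": 1_000_000, "crashes": 8}}

automated = {"highway": {"miles": 1_000_000, "crashes": 2},   # same per-mile rate as humans
             "city":    {"miles":   100_000, "crashes": 1}}   # worse per-mile rate than humans

def overall_rate(fleet):
    """Crashes per million miles, ignoring road type (the headline number)."""
    miles = sum(v["miles"] for v in fleet.values())
    crashes = sum(v["crashes"] for v in fleet.values())
    return crashes / miles * 1_000_000

print(f"human overall:     {overall_rate(human):.1f} crashes per million miles")      # 5.0
print(f"automated overall: {overall_rate(automated):.1f} crashes per million miles")  # ~2.7

# Per road type the automated fleet is equal (highway) or worse (city), yet the
# headline number comes out nearly twice as good, simply because its miles are
# concentrated on the easiest roads. That's the comparison being implied.
```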
This is a fair point. It's pretty bad that we have learner drivers on the road. Can't think of anything more public beta-y than that. Maybe trainee doctors?
I've had three "learner drivers" in my family and I spent quite a few hours as the "safety person" during their apprenticeship. During all those hours I never touched the wheel, brake, or gas. I gave advice occasionally, but never felt that the "learner driver" was driving unsafely.
I assure you that 15-year-old young adults drive much better than a Tesla "self driving" computer with millions of miles and millions of hours of training.
Oh, I'm sure you were convinced, but you never asked me whether I consented to you randomly putting underage kids behind the wheel of a two-ton vehicle on a road which I share with you.
You’re a regular Yaroslav Kudrinsky, just with better luck. But I suppose one can go around swinging swords in a crowd, hit no one, and convince oneself of one’s good judgment.
That's the important one - a Tesla has no concept of the real world. Remember that clip where a Tesla was slowing down because it mistook the full moon in the sky for a traffic light? Likewise, no one would aim at a cyclist, except when they are a homicidal criminal.
> no one would aim at a cyclist, except when they are a homicidal criminal.
The process of not wanting to impact something and attempting to avoid it can occupy so much attention that hitting that thing becomes inevitable. People tend to go where they are looking.
If your complaint is that a very conservative manufacturing industry doesn't have 1:1 analogues with software development then I guess it's valid.
Giving organizations stuff you haven't 100% nailed down yet, and expecting them to come back with things they want changed in order to buy, is pretty par for the course in the firearms industry. That's as close to a public beta as you're gonna get.
The big difference is that the firearms marketed that way are still safe, or as safe as firearms tend to get. It's a fully functional firearm; they want feedback on whether the ergonomics are good, whether the weight is right, how it handles recoil, etc.
It would be more akin to handing out prototype weapons to police departments, which doesn't happen specifically because they might fire themselves or blow up or fail in some annoying but not spectacular way like failing to feed new rounds.
The key mechanical features always "work", but little stuff crops up, like "the gun fails to cycle 500% more often with the particular spec of ammo a particular force uses". A rifle spec'd out for sale to Cambodia is probably going to have a lot of teething issues if you try to sell it to the Swedes unchanged. Are these things "safe"? IDK. But no force is going to put in an order for tens of thousands of guns unless correctable issues like those are proven fixed. Infantry arms are a battle of margins. Everything "works", but a gun that's marginally more finicky to use means the people using it are marginally less effective, which extrapolates pretty directly to "safety".
At the end of the day the analogy isn't great and we're building a castle in a swamp here.
A public "beta" of a self-driving car should lean towards failing safe. I would expect it to have sudden braking issues and be overly conservative. The kinds of bugs we're seeing, where obstacles are blatantly ignored, make it feel more like an "alpha" release. But hey, we're not designing airplanes here, just thousand-pound chunks of metal flying around roads, capable of killing people.
...I am aware of no other manufacturer brazen enough, in terms of blatant disregard for engineering ethics, to have embarked on this type of testing before Silicon Valley did.
In fact, there were licensure processes for this sort of thing already in place in California, iirc, so this is less gray area and more "let's outsource testing to the end user".
The <Insert State Here> Code certainly applies penalties for assault and battery, and you can rest assured knowing shin-kicking would happen regularly without it.
The car problem presented here is quite a bit more shades of gray and we absolutely need to set acceptable parameters for the technology.
Yes, but my point is that if you’re being a jerk, then you’re the problem, not the person who is not saying, “please don’t be a jerk.”
You don’t need a regulation telling you not to be a jerk — unless of course you are a jerk. And even then, you’d probably still be a jerk, regulation or not.
Except it is: literally none of this is illegal. And it's not clear how it could be made illegal: what's the distinction between cruise control with lane keeping and these systems? Very little; nothing you can codify.
It's also very likely that this system can pass a standard driving test: AIs are really good at handling fixed test cases. So any regulation you do pass has a complicated assurance problem: how does it prove the system is safe?
Pretend Tesla just knocked "beta" off the name and said "this is a limited self-driving system, Tesla exclusive" - then you wind up in the same bucket, with the same "it's the driver's responsibility" problem.
People see this footage and assume Tesla self-drive does this all the time - but it doesn't. It probably passes this scenario in testing 100% of the time; the issue, of course, is that the test is not quite whatever this circumstance was, and the system fails dramatically.
Can the software automatically make lane changes? Can it initiate turns? Can it decide to make non-emergency stops at lights and signs? Can it accelerate from a stop? What is the expected level of human intervention?
The question I'm posing is, how do you prove it works to an independent authority?
It would be extremely reasonable for us to regulate along these lines, but I'm also absolutely willing to bet that Tesla and every other system would get through an approval process just fine.
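Just to show that the capability questions above can be codified, here's a rough sketch of what a machine-readable declaration might look like. The schema and the filled-in values are entirely hypothetical, not anything a regulator actually requires today.

```python
# Hypothetical schema for a capability declaration a manufacturer could be
# required to file, and be held to, per system and software version.

from dataclasses import dataclass
from enum import Enum

class Supervision(Enum):
    NONE_EXPECTED = "none_expected"      # no human fallback assumed
    CONTINUOUS = "continuous"            # human must supervise at all times
    ON_REQUEST = "on_request"            # human must respond to takeover requests

@dataclass(frozen=True)
class CapabilityDeclaration:
    automatic_lane_changes: bool
    initiates_turns: bool
    non_emergency_stops_at_signals: bool
    accelerates_from_stop: bool
    expected_supervision: Supervision

# Placeholder values for illustration only; the point is that each of the
# questions above becomes a field an approval process can test against.
example_filing = CapabilityDeclaration(
    automatic_lane_changes=True,
    initiates_turns=True,
    non_emergency_stops_at_signals=True,
    accelerates_from_stop=True,
    expected_supervision=Supervision.CONTINUOUS,
)
```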
Which sort of loops around to a wider issue: if self-driving is a priority, then it needs to be a government, collective priority for public safety and development.
I.e. independent testing institutes, a suite of diverse randomised tests, and yes: because accidents are no longer "on the driver", both the government and the company will have to take responsibility for failures.
The issue is that the law of the land is: these systems aren't illegal, and the driver is responsible for how the car handles including any automation they engage.
To make any progress you have to break driver responsibility - or use that bright line and just outlaw the whole lot.
>but I'm also absolutely willing to bet that Tesla and every other system would get through an approval process just fine.
It would be Dieselgate all over again. The testing would have to be documented somewhere, and the car could just be trained for the test scenarios. Considering how often Tesla likes to massage their numbers (Nürburgring data, sales figures in basically every country, most recently Australia, the big asterisk next to their 1.9s 0-60 run, etc. etc.), I can't trust that they wouldn't train directly for the test and neglect everything else just so they can have the best headlines once again.
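That training-to-the-test risk is exactly what randomised, regulator-held test generation is meant to defeat. A very rough sketch, with all scenario parameters made up, of what "a suite of diverse randomised tests" could look like:

```python
# A minimal sketch of procedurally generated test scenarios: the seed is held
# by the regulator and drawn fresh for each evaluation, so there is no fixed
# test set to train against. Everything here is hypothetical.

import random
from dataclasses import dataclass

@dataclass
class Scenario:
    road_type: str            # e.g. urban intersection, rural two-lane, highway merge
    lighting: str             # day, dusk, night
    weather: str              # clear, rain, fog
    hazard: str               # cyclist crossing, stalled car, debris, jaywalker
    hazard_distance_m: float  # how far ahead the hazard appears

def generate_suite(seed: int, n: int = 1000) -> list[Scenario]:
    rng = random.Random(seed)  # regulator-held seed, never published in advance
    return [
        Scenario(
            road_type=rng.choice(["urban_intersection", "rural_two_lane", "highway_merge"]),
            lighting=rng.choice(["day", "dusk", "night"]),
            weather=rng.choice(["clear", "rain", "fog"]),
            hazard=rng.choice(["cyclist_crossing", "stalled_car", "debris", "jaywalker"]),
            hazard_distance_m=rng.uniform(20.0, 150.0),
        )
        for _ in range(n)
    ]

if __name__ == "__main__":
    suite = generate_suite(seed=20240517)  # each real evaluation would use a fresh seed
    print(suite[0])
```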
I've wondered if it would be a good idea to have some sort of external indicator on cars that signals to others who are driving, cycling, or on foot that the vehicle is in autonomous mode. At least then maybe one could understand the potential risk of driving around them. (Noted that an indicator doesn't really help if the vehicle is behind you in a lot of instances!)
One thing I've noticed over the past couple of years is that it isn't that hard to determine whether or not a car is driving autonomously. If they're driving like they're drunk and they're in a Tesla, 9 times out of 10 you'll see that the driver isn't paying attention to the road at all because they're using Autopilot or FSD. I just treat the Tesla drivers around me on the road as if they're drunk and keep my distance.
Several manufacturers have experimented with such systems in concept cars, e.g. animated grilles that signal in green when an oncoming car has recognized a pedestrian and is slowing in response.
It's a software system on Boeing 737 MAX airplanes that overrides pilot input, designed to make up for cost-driven compromises in the hardware design. Pilots were deliberately not well informed, and the system caused two crashes. It was brought up elsewhere in the discussion. In general, cars are headed into an aviation-like situation with automation, so such comparisons are becoming more frequent.
It's very apples-to-oranges here, but loosely relevant with respect to information/documentation and to the cameras-vs-LIDAR/sensor-fusion debate.
I'm sure the policy and regulatory systems will be able to fend off the lobbyists of the richest person in the world working to get the word "liability" redefined in the law. /s
I was expecting Autopilot to be seriously dangerous, but the data doesn't back that up. Out of 234 deaths from accidents involving Teslas, 12 people have died while Tesla Autopilot was known to be in use or had recently been in use: http://www.tesladeaths.com. Autopilot might be slightly worse than human drivers depending on how you slice the data, but that's about it.
I don't think it's a regulation problem as much as it is an acceptance by regulators that driving is inherently dangerous and Autopilot isn't dramatically worse. It's not even obvious whether, on net, more people would have died if Tesla had never released Autopilot.
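To put a number on "depending on how you slice the data": with only a dozen events, the statistical uncertainty alone is large, before you even get to confounders. A back-of-the-envelope sketch, where the mileage denominator is a placeholder because Tesla doesn't publish a comparable figure:

```python
# Back-of-the-envelope sketch of why 12 deaths is too small a sample to settle
# "better or worse than humans". The mileage figure below is a placeholder.

from scipy.stats import chi2

deaths = 12               # deaths with Autopilot known/recently in use (tesladeaths.com)
autopilot_miles = 3.0e9   # PLACEHOLDER: cumulative Autopilot miles, not a published figure

# Exact (Garwood) 95% confidence interval for a Poisson count.
lo = 0.5 * chi2.ppf(0.025, 2 * deaths)
hi = 0.5 * chi2.ppf(0.975, 2 * deaths + 2)

def per_billion(k):
    return k / autopilot_miles * 1e9

print(f"point estimate: {per_billion(deaths):.1f} deaths per billion miles")
print(f"95% interval:   {per_billion(lo):.1f} to {per_billion(hi):.1f}")

# With only a dozen events the interval spans more than a factor of three,
# before touching confounders like road type, weather, or driver selection.
# "Slightly worse or slightly better depending on how you slice it" is about
# as much as this data can say.
```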
Note that FSD Beta participants are selected/limited by Tesla based on a proprietary "safe driver" score, so FSD data doesn't say much about how the general population might fare at managing the system. (I know you were writing about Autopilot -- the discussion is about the FSD Beta, though.)
Note also that other drivers are learning behaviors such as keeping their distance from Teslas, etc., making it harder to assess how well the system actually functions.
Autopilot and FSD are two different things. Elon claimed 0 accidents for FSD late last year, but this year there is video of a Tesla running into a post.
Uh huh. Why do you think it's reasonable to conclude that this system is safe, given that the reported data is based on faulty analysis [0] and is inexplicably exempt from the reporting requirements for testing self-driving cars?
Yes, as there is no other way to prove it's safe. AKA you're suggesting an impossible standard for any company to meet.
Further, the only reasonable standard is roughly average human competency. As soon as a tired driver can turn on self-driving and be safer on the road, banning that technology has a real cost.
Testing hardware and software via safety drivers on public roads doesn’t actually prove safety in the hands of the general public. Just as FSD being released to the safest drivers first doesn’t prove safety in the hands of the general public.
The general public doesn't use products the way you might assume. For example, toilet plungers have killed or seriously injured people, and not because they were used as a weapon.
What??? They're not subject to user error. That's the whole point, and how many times fallible humans had to take over is exactly what the disengagement report tracks.
Even the platonic ideal of a self-driving system doesn't prevent a tire blowout from poor maintenance, etc. Similarly, a test fleet isn't going to replicate the public's actual origins, destinations, and times of day. Something as simple as people going to bars more often because the self-driving system can get them home would mean a different risk profile.