>4) Therefore, the best plan to acquire the data to train the networks is to use a ton of customers as safety drivers and let them test it widely on the real routes they drive. This is tricky and dangerous, but if it's not too dangerous and the outcome saves many lives over the coming years, it will have been worth it.
Maybe you should use specifically trained test drivers, who are acutely aware of the limitations and know how to deal with them, not random people who have been told, through intentional snake-oil marketing by a billionaire with a god complex who needs to feed his ego, that the car can drive itself.
It's insane that governments allow these vehicles on the road.
Also, that kind of the-end-justifies-the-means reasoning has led to a lot of catastrophic developments in history. Let's not go there again.
I appreciate being principled about ends not justifying the means. But in my experience this principle is not applied universally by people. It's cherry-picked, and when deployed in a discussion it amounts to a non sequitur. Don't get me wrong, I wish it were a universally held and enforced moral principle, but it's not.
Anyway, the reality is that Teslas are safer than any other car on the market right now, despite the scary voodoo tech. So it seems in this case the means are also justified. If Autopilot and FSD were causing more accidents than human drivers, we'd be having a different conversation about ends justifying means, I'll readily agree.
Ends-justifies-the-means reasoning has also led to many of the innovative wonders we're all now relying on every day. While the customer test drivers aren't "trained", there was some caution in the approach.
Customers had to opt in to request beta-testing access, and they had to pass a Safety Score system that grades their driving safety (roughly like the car-insurance apps) over a long period (in some cases many months!). After going through those hoops, when they finally get the software, they're required to consent again. IIRC the text there includes things like: you are in control at all times, you must keep your hands on the wheel and eyes on the road and intervene for safety, you are liable for anything that happens, this software can do the worst possible thing at the worst possible time, etc. They also monitor that your hands are on the wheel (via torque sensing) and use an in-cabin camera to check whether you're watching the road or looking at a cellphone, etc. These measures can be defeated with effort by unsafe idiots, but that's no different than the risk such unsafe idiots present when driving any car.
With all of that in place, they've scaled up over a couple of years to 160K customer test pilots. Accidents happen, but there's no evidence the rate of them is anything to worry about. If anything, what evidence there is seems to point in the direction of FSDb test drivers being safer than average. They're supposedly removing the Safety Score part of the process Very Soon (likely in the next few weeks), but the rest of the warnings and mitigations should remain.
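To make the Safety Score gating concrete, here's a rough sketch of how that kind of grading could work. The factor names, weights, and cutoff below are my own illustrative assumptions, not Tesla's published formula:

```python
# Illustrative sketch of a "Safety Score"-style grading system.
# Factor names, weights, and the access cutoff are hypothetical,
# not Tesla's actual formula.
from dataclasses import dataclass

@dataclass
class DrivingFactors:
    hard_braking_pct: float              # % of braking events over a deceleration threshold
    aggressive_turning_pct: float        # % of turns over a lateral-acceleration threshold
    unsafe_following_pct: float          # % of time spent following too closely at speed
    collision_warnings_per_1k_mi: float  # forward collision warnings per 1,000 miles
    forced_disengagements: int           # times driver monitoring forced Autopilot off

def safety_score(f: DrivingFactors) -> float:
    """Map a period's driving factors to a 0-100 score; higher is safer."""
    penalty = (
        1.5 * f.hard_braking_pct
        + 1.2 * f.aggressive_turning_pct
        + 1.0 * f.unsafe_following_pct
        + 2.0 * f.collision_warnings_per_1k_mi
        + 5.0 * f.forced_disengagements
    )
    return max(0.0, 100.0 - penalty)

week = DrivingFactors(2.0, 1.0, 4.0, 0.5, 0)
score = safety_score(week)
print(f"Safety Score: {score:.1f}")
print("beta access" if score >= 95 else "keep driving carefully")  # hypothetical cutoff
```

The point isn't the exact weights; it's that access is gated on a rolling measure of how the person actually drives rather than handed out to anyone who asks.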
--- meta stuff:
There's a ton of money and a ton of ego pushing huge agendas in every direction when it comes to anything Elon-related, Tesla included, especially since the Twitter saga began and he started really going off the rails. For almost anything you read on related topics, regardless of which "side" it's on, you have to question the motive just to begin to understand the underlying objective truth. I follow Tesla news a lot, and I'd say ~90% of all random internet news articles on these subjects (positive and negative) are mostly utter bullshit clickbait when they're not outright fraud, designed to influence stock prices and/or buyer behavior more than to provide useful information. When big money and big egos are at war over something, objective truth on the Internet is a casualty.
If you ignore all that line noise and look at the objective reality of the engineering parts though: it's pretty amazing beta software with a lot of future potential, and the testing has gone pretty smoothly in terms of safety. It could be many years before you'd let it chauffeur some elderly person on a pharmacy run as a robotaxi, but IMHO it's still a better bet than most of its competitors in the long game of fully generalized self-driving on the human-centric road networks we have today.
As for Elon himself: clearly some of his recent behavior and viewpoints are pretty objectively terrible. At least you can see it? How many executives from the companies that built the things we've all relied on over the past few decades have really been any better? They've mostly just been better at hiding it, while Elon puts it on full display. The world is what it is.
We don't dissect live humans despite the potential for scientific advancement. Would it be so bad if FSD weren't on public roads until its disengagements per 10K miles driven were at least as few as human accidents per 10K miles?
Disengagements in Tesla's data are commonly for much less serious things than potential accidents (merely inconveniencing others, or embarrassing the driver in some way, or even a safe move that just didn't /feel/ right to the driver). They've published actual accident statistics for Autopilot, and those show that it has a lower accident rate than manual driving even within the same fleet of Teslas (which in turn have a lower accident rate than the rest of the US even when manually driven).
Driving is inherently very dangerous. Traffic accidents are a leading cause of death in the US. You're not really chasing perfection to win this game. It's not all that hard to beat humans on average, because the average human is pretty terrible. It's a statistical certainty that some people will die at the hands of Autopilot even when it's in some final non-beta state, but it will probably be fewer people than would otherwise die over the same miles driven manually.
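To make the "same miles driven" comparison concrete, here's a back-of-the-envelope sketch. The counts and mileage are hypothetical placeholders, not real fleet data:

```python
# Back-of-the-envelope accident-rate comparison per million miles.
# All numbers below are hypothetical placeholders, not actual Tesla or NHTSA data.

def rate_per_million_miles(events: int, miles: float) -> float:
    """Accidents normalized per million miles driven."""
    return events / (miles / 1_000_000)

miles = 10_000_000          # hypothetical total mileage for each condition
manual_accidents = 20       # hypothetical count while driven manually
autopilot_accidents = 12    # hypothetical count while on Autopilot

manual_rate = rate_per_million_miles(manual_accidents, miles)
ap_rate = rate_per_million_miles(autopilot_accidents, miles)

print(f"manual:    {manual_rate:.2f} accidents per million miles")
print(f"autopilot: {ap_rate:.2f} accidents per million miles")
# If the Autopilot rate really is lower over comparable roads and conditions,
# the same mileage driven on Autopilot should produce fewer accidents overall.
```

The hard part in practice is making the mileage genuinely comparable (road types, conditions, driver mix), which is why people argue so much about these statistics.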
The hard thing for Autopilot-like systems is perceiving the world accurately. "The world" includes both the physical reality of roads+cars it senses, as well as things like traffic rules, corner cases (construction, debris, manual police control of an intersection with a dead traffic light), and gauging the intent of other/human drivers. Humans are inherently better at that stuff. The software has to get better than it is today, but it will probably never fully match the best humans at this part.
However, there are two key ways the software inherently outdoes humans:
(1) It can have more sensory data than humans. Even Tesla (currently sans Radar + Lidar, just cameras) can see in every direction at the same time, all the time, with an overlapping array of cameras. No blindspots, no temporary blindness in one direction while you crane your neck to check in another, etc.
(2) It never gets tired, distracted, or drunk. It's never checking a facebook feed, or nodding off after a sleepless night, or too engaged in a conversation with a passenger to remember to look for that cross-traffic, etc. This is a huge advantage when it comes to accident rates.
> Autopilot even when it's in some final non-beta state, but it will probably be fewer people than would otherwise die over the same miles driven manually.
Bold claim. We would need the data to be sure. Judging by reports from Tesla owners in this thread, I'd guess FSD and Autopilot are probably causing more harm than they're preventing.
> It never gets tired, distracted, or drunk.
Which would be a great benefit if FSD didn't drive like a drunk teenager.
Look, for any tool humans rely on, there must be predictability. And until we have enough public data, no conclusions can be drawn. That's why it's in Tesla's interest to keep releasing less and less data, except for the data that makes them look good.