Well, yeah. Imagine there's heart surgery which has only 20% success rate. So every time 100 people go in for that surgery, only 20 survive.
Now, someone makes a robot that has a 100% success rate, except that roughly once every 100 operations it hits a glitch in the software and stabs and kills the patient instead. It has been calculated that on average, using the robot kills only 1-2 out of every 100 people who go in for surgery, compared to 80 out of 100 with the old method. The robot is clearly something like 40-80 times better than the manual procedure, so why not keep using it?
The answer seems to be: because the robot killing people is not an unavoidable outcome. It can be fixed. It's a software problem, a mistake that costs someone their life. Similarly, a self-driving Tesla may be killing fewer people on average than human drivers, but that doesn't mean the software problems behind those deaths don't matter.
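For what it's worth, the arithmetic in the hypothetical works out like this (a quick Python sketch using the comment's made-up numbers, nothing more):

    # Back-of-the-envelope check of the hypothetical above.
    # All figures are the made-up ones from the comment, not real surgical data.
    manual_deaths_per_100 = 80   # 20% success rate -> 80 of 100 patients die
    robot_deaths_low = 1         # "kills 1-2 out of 100"
    robot_deaths_high = 2

    print(manual_deaths_per_100 / robot_deaths_high)  # 40.0 -> at worst ~40x fewer deaths
    print(manual_deaths_per_100 / robot_deaths_low)   # 80.0 -> at best ~80x fewer deaths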
Step 1: don't recall the robot surgeon. While it does have a fatal bug, recalling it would kill roughly 79 more patients out of every 100. Oops.
Step 2: Correct the bug, so the robot stops killing patients.
The Tesla autopilot may be in a similar situation. Even if there are a number of fatality-inducing bugs, disabling it would be even worse. That said, a 40% crash reduction is not enough. I want to see 75%, 90%, and more.
That one is easily dealt with: "Our robot surgeon reduces the fatality rate by 98.5%. Our study of the remaining fatalities suggests we can reduce them further. We hope to reach a near-zero fatality rate within the year."
Don't even mention the bug. Just state what matters: this is better than what we had before, and it can (and will) get even better. That may head off talk of killer robots.
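In case anyone wants to check where a figure like 98.5% could come from, here's the same toy arithmetic (assuming ~1.2 robot-caused deaths per 100, which is just a point inside the 1-2 range above):

    # Where a "98.5% reduction" claim could come from, reusing the
    # hypothetical numbers; 1.2 deaths per 100 is an assumed value
    # inside the "1-2 out of 100" range, not a sourced figure.
    manual_fatality_rate = 0.80   # 80 deaths per 100 manual surgeries
    robot_fatality_rate = 0.012   # ~1.2 deaths per 100 robot surgeries

    reduction = 1 - robot_fatality_rate / manual_fatality_rate
    print(f"fatality rate reduced by {reduction:.1%}")  # -> 98.5%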
This analogy doesn't work because you don't do manual heart surgery on yourself. Whether or not a robot does the surgery, the hospital's butt is on the line.
The biggest difference between an autopilot accident and a traditional accident is fault. Most accidents are driver error. Even if autopilot drives overall accident rates down (which it will), shifting responsibility for safe travel from mostly the individual to mostly the manufacturer has huge legal repercussions. Not to mention many people's discomfort with no longer being in control of their own safety, even if study after study shows they shouldn't be.