I think this is related to the "illusion of control". People feel safer when they themselves are driving rather than a machine, even when they are not actually safer. I hope government regulators do not impose 5x safety requirements on self-driving cars.
Assuming the system is properly maintained and used, if anyone's responsible it has to be the manufacturer. The passenger certainly isn't, any more than they are when an Uber gets in an accident today.
And, with the possible exception of drug side effects (and even there, there are lawsuits), we don't really see consumer-facing products that, even if used as directed, kill a fair number of people and we just go oops. Let's say autonomous vehicles kill 3,000 people a year in the US, i.e. about 10% of the current rate. (In reality, human-driven cars will take a long time to be phased out even once self-driving is available, but go with the thought experiment.) Can you imagine any other product that we'd accept killing thousands of people a year?
ADDED: As someone else noted, you could argue that tobacco etc. fall into that category, but we're mostly not OK with that, and it's reasonably thought of as being in another category. (And pretty much no one is smoking because they think it's good for them.)
Just about any food is potentially unhealthy if not consumed in moderation. A bag of potato chips and a Coke now and then isn't going to kill anyone. But a couple bags and half a dozen cans a day sure isn't good for you. And a porterhouse steak every day probably isn't that great for you either.
You asked for accepted products that kill people, not for products that kill unconditionally. Foods are conditionally unsafe (if consumed in excess), just like cars are conditionally unsafe (if not operated carefully). Deaths from cardiovascular disease (partially caused by poor diet) exceed vehicular deaths, and yet they're accepted.
There is no shortage of products that can injure or kill you if you operate them unsafely, including cars. But you won't be "operating" an autonomous vehicle, at least not while it's autonomous. An autonomous vehicle causing an accident due to a software mistake is the equivalent of a regular automobile suddenly losing steering on the highway because of a design defect, and the latter would absolutely be a liability issue for the car maker.
Right, I forgot that this was an argument about responsibility. In the case of food, I guess there's some shared responsibility: the customer of course has a lot of choice, but the manufacturer still optimizes for tastiness (which increases consumption) without necessarily optimizing for healthiness. That could also be considered a design defect.
Perhaps for an owned autonomous vehicle the equivalent shared responsibility would be a user-selectable conservative ("comfort") vs. aggressive ("sporty") driving style. Or the option to drive yourself and only let the software intervene if it thinks what you're doing is unsafe.
So, back to the question:
> We don't really see consumer-facing products that, even if used as directed, kill a fair number of people and we just go oops.
The only other case that comes to mind, and it's a very nebulous one, is insecure computer systems in general. When a hospital or critical infrastructure gets hacked, it is treated almost like an unavoidable natural disaster rather than the responsibility of the operator or the manufacturer.
You may have to sue the manufacturer and prove that their system is at fault, which is pretty much impossible considering the legal resources these big corporations have versus the little guy. This would end up like tobacco or junk food, where companies were (and still are) able to deflect any kind of responsibility.
It may be easier to find fault in an autonomous vehicle. Assuming it has a black box that records sensor data, you can replay the algorithm and see what went wrong.
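To make the replay idea concrete, here is a minimal sketch of what auditing such a log might look like. Everything in it is hypothetical: `SensorFrame`, `plan_action`, and the log format stand in for whatever the vehicle actually records and however its planner actually works; the point is only that a deterministic planner plus recorded inputs lets you reproduce the decision that preceded a crash.

```python
# Hypothetical sketch: deterministically replay logged sensor frames through
# the same planning code to inspect the decisions leading up to an incident.
# SensorFrame, plan_action, and the JSON log format are illustrative only.
import json
from dataclasses import dataclass

@dataclass
class SensorFrame:
    timestamp: float      # seconds since start of the recording
    obstacles: list       # e.g. [{"distance_m": 12.4, "closing_speed_mps": 8.1}, ...]
    ego_speed_mps: float  # vehicle's own speed at this frame

def plan_action(frame: SensorFrame) -> str:
    """Stand-in planner: brake if any obstacle's time-to-collision is too short."""
    for obs in frame.obstacles:
        time_to_collision = obs["distance_m"] / max(obs["closing_speed_mps"], 0.1)
        if time_to_collision < 2.0:
            return "EMERGENCY_BRAKE"
    return "CONTINUE"

def replay(log_path: str) -> None:
    """Re-run the planner over every recorded frame and print each decision."""
    with open(log_path) as f:
        frames = [SensorFrame(**record) for record in json.load(f)]
    for frame in frames:
        print(f"t={frame.timestamp:7.3f}s  decision={plan_action(frame)}")

# replay("blackbox_dump.json")  # point at a recorded log to audit the decisions
```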
If corporations are people, you should be able to bring criminal murder and manslaughter charges against them, with the top-level executives acting as proxies to serve the jail sentence.
The illusion of control is a thing, but actual control is a thing as well. One possible reason to avoid self-driving cars is that there really are safe and unsafe drivers, and fatal accidents in self-driving cars will presumably be distributed much more flatly across those drivers than they are now. That means that even if self-driving cars are safer overall, they could still be less safe for you if you're a good driver.
If they are setting an objective measurement, how is that an illusion of control? In fact, it seems like exactly the opposite: they are putting hard numbers on the level of risk they consider tolerable, and making those numbers available to everyone so they can be debated and disputed.
If anything, this is removing the illusion of control. The illusion of control would be to say you would never trust self-driving cars. Saying you will trust them at a level of 5x measurable safety criteria above human drivers is totally different.
Now we can make actuarial arguments about whether it should be 5x vs 2.6x vs 0.9x and debate how to measure the safety criteria - that’s a completely different world from one where people “feel like” human control of the car is safer.
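To give a feel for what those multipliers mean, here is a back-of-the-envelope comparison. The baseline of roughly 36,000 annual US road deaths and the assumption of a fully self-driving fleet are simplifying assumptions for illustration, not a forecast.

```python
# Rough comparison of the safety thresholds being debated above.
# Assumes a ~36,000/year human-driven baseline and full fleet replacement.
BASELINE_DEATHS_PER_YEAR = 36_000

for multiplier in (0.9, 2.6, 5.0):
    expected = BASELINE_DEATHS_PER_YEAR / multiplier
    print(f"required to be {multiplier}x safer -> ~{expected:,.0f} expected deaths/year")

# required to be 0.9x safer -> ~40,000 expected deaths/year
# required to be 2.6x safer -> ~13,846 expected deaths/year
# required to be 5.0x safer -> ~7,200 expected deaths/year
```

The gap between those rows is the substance of the actuarial argument: a higher required multiplier means fewer expected deaths once deployed, but potentially more deaths in the meantime if it delays deployment.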
For sure it is good to seek a measurable criterion. The question is whether laboratory subjects' views on the right level should have normative force. An alternative take is: these are just not-very-informed people, and unless they can give reasons for their views, we shouldn't take them seriously as inputs into the policy-making process.