This calculation is actually very rational. What you seem to ignore is that, with conventional cars, there are relatively few "known unknown" risks. There are of course significant risks, but almost all of them are known not only in kind but also in quantity: drunk drivers, careless people, broken brakes, whatever. We have several decades of data on these risks. The number of "unknown unknowns" can also be assumed to be relatively low, given that the concept of humans driving cars has quite a history by now and has largely stayed the same for a good number of decades.
With autonomous cars, even once you have a few years of safety data from a large enough fleet to conclude that they are 5x less dangerous than human-driven alternatives in that data, you will still have many more "unknown unknowns" (none of which I can name, because they are by definition unknown), on top of many more "known unknowns", like the possibility of a large-scale software bug causing thousands of casualties at once. These risks only go down slowly with time; there is practically no way to fast-track them. So you have to build a large enough risk buffer into your assumptions before it even becomes rational to adopt the fancy new tech, and the only place that buffer can come from is a much bigger advantage in the "known knowns" department of risks.
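To make the buffer idea concrete, here's a toy comparison; every number in it is invented for illustration, not taken from any real dataset:

    # Toy model of the risk-buffer argument (all numbers made up).
    # The new tech's *measured* risk plus an allowance for unknown
    # unknowns must still beat the incumbent's known risk.
    human_rate = 1.1          # fatalities per 100M miles, rough US figure
    av_measured = human_rate / 5   # the hypothetical "5x safer" measurement
    unknown_buffer = 0.8      # allowance for unmeasured failure modes

    if av_measured + unknown_buffer < human_rate:
        print("measured advantage survives the buffer")
    else:
        print("advantage is eaten up by the unknowns buffer")

Even with a 5x measured advantage, a modest buffer for the unknowns nearly wipes it out, which is exactly why the known-risk gap has to be large.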
Those unknowns are already being elucidated by experimental fleets. Self-driving cars won't be deployed en masse before vendors can demonstrate solid statistics over hundreds of millions of passenger-miles, which will be sufficient to estimate the fatality rate.
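For a sense of scale, here's a minimal sketch of how tightly such data pins down the rate, using an exact Poisson confidence interval. The inputs are assumptions: a hypothetical 300-million-mile log with one fatality, compared against the rough US human baseline of ~1.1 fatalities per 100 million vehicle-miles:

    # Sketch: exact (Garwood) Poisson confidence interval on a fatality
    # rate, given an observed count over a given mileage. The example
    # inputs below are hypothetical.
    from scipy.stats import chi2

    def fatality_rate_ci(events, miles, conf=0.95):
        """(low, high) bounds on fatalities per 100 million miles that
        are consistent with `events` fatalities observed over `miles`."""
        alpha = 1 - conf
        lo = 0.5 * chi2.ppf(alpha / 2, 2 * events) if events > 0 else 0.0
        hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (events + 1))
        scale = 1e8 / miles  # convert event counts to a per-100M-mile rate
        return lo * scale, hi * scale

    # Hypothetical fleet: 300 million miles, 1 fatality.
    print(fatality_rate_ci(1, 300e6))
    # -> roughly (0.008, 1.86) per 100M miles; the human baseline (~1.1)
    #    and a "5x safer" rate (~0.22) both fall inside this interval.

The interval only tightens with more miles and more observed events, so how many passenger-miles count as "sufficient" depends heavily on how rare fatalities actually are.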
How much does that tell me about software failure modes that don't kick in until a significant scale has been reached (think double-digit percentages of all traffic; these test fleets are nowhere near that)? Or about weird but potentially fatal side effects of incorporating regulator-imposed rules into the software, rules that can't be tested with today's alpha fleets because they might not even exist yet? Or about how well AI vehicles from different vendors, in very different software and hardware revision states, interact with each other (think of HFT algorithms running each other into a doomsday spiral, just with vehicles at an intersection twitching around in weird ways while trying to interpret each other's actions)? Or about the hackability of future robotic cars (think, for example, of those slightly modified fake traffic signs)?
Nothing. That's why, regardless of how impressively big these test fleets are, there will be a lot more of these unknowns.
Some of these seem like tail risks to me: unlikely to dominate fatality statistics even if they do occur, and quick to be patched or recalled if needed. Many of these hypothetical concerns could also affect existing driver-assistance systems and aren't unique to autonomous vehicles. Hacking can happen with human-operated vehicles too. And interaction between multiple self-driving cars can be tested with experimental fleets via concentrated local deployments.