If there were clear evidence that these cars had lower fatality rates than human drivers (I don't think such evidence currently exists), then holding off release to continue development would arguably be a moral wrong, for the same reason we end clinical trials early when a treatment demonstrates clear life-saving potential: continuing the trial would cost lives in the control group.
I don't think it is morally reprehensible to ask what other people's intuition around this problem is.
> Is it worth killing n random people for an uncalculated and unknowable chance of saving n+m lives in the future? The pool of people randomly selected to die have not volunteered or consented to be part of this project. There are ways to achieve the same lifesaving end goal without the upfront sacrifice of lives.
The part that really strikes me as morally reprehensible is that the companies are saving money on test drivers and controlled test environments by externalizing those costs onto every other driver who shares the road with their data-collection vehicles.
> Is it worth killing n random people for an uncalculated and unknowable chance of saving n+m lives in the future? The pool of people randomly selected to die have not volunteered or consented to be part of this project.
This reasoning would render any governmental policy change whatsoever impermissible.
It isn't coherent to apply these forms of deontological ethics to state action: a random set of people will die under both action and inaction, so I see no reason not to pick the option with the smaller expected number of deaths.
But this is all beside my original point: this is a legitimate moral debate to have, and the rhetoric used by the above commenter was entirely uncalled for.