The point is that we allow teen drivers on the road with the expectation and understanding that they will improve over several years. Why would we not expect to have to make the same concession for AI?
Because these are companies selling a product to consumers, who expect it to keep them safe on the road. We need to hold them to a much higher standard than a teenager.
I suspect the corporate entities involved have such deep pockets and/or so many lawyers and lobbyists that that won't work.
Instead, how about:
All new autonomous vehicle configurations (let's call a configuration the algos + sensors + vehicle) would have to take some kind of actual driving test, just as we humans do.
Maybe the public could even help design a good test? "Not driving at speed into a stationary fire truck which is parked on the highway right in front of you" would be one element I'd want to see tested.
If an autonomous vehicle is involved in an accident, and the algo/sensors/vehicle are found to be (partially) at fault, the configuration earns penalty points.
If that configuration earns enough penalty points within a set period, the entire configuration loses its certification, plus a fine, plus a mandatory re-test (a rough sketch of the bookkeeping follows below).
This method appears to work reasonably well in dealing with us not-always-perfect human drivers, and ought to concentrate the minds of the designers/developers/managers behind autonomous vehicles.
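To make the penalty-point mechanics concrete, here is a minimal sketch of how the bookkeeping might work. Everything in it is an assumption for illustration: the 12-point limit, the 3-year rolling window, and all the names (AVConfiguration, record_fault, and so on) are hypothetical, not any real regulator's scheme.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative assumptions, not any real regulator's numbers:
POINT_LIMIT = 12                   # points before decertification
WINDOW = timedelta(days=3 * 365)   # rolling window in which points count

@dataclass
class AVConfiguration:
    """One certified combination of algos + sensors + vehicle."""
    name: str
    certified: bool = True
    penalties: list = field(default_factory=list)  # (timestamp, points) pairs

    def points_in_window(self, as_of: datetime) -> int:
        """Sum only the penalty points earned inside the rolling window."""
        cutoff = as_of - WINDOW
        return sum(p for t, p in self.penalties if t >= cutoff)

    def record_fault(self, when: datetime, points: int) -> None:
        """Log points after an at-fault (or partially at-fault) accident."""
        self.penalties.append((when, points))
        if self.points_in_window(as_of=when) >= POINT_LIMIT:
            # Threshold crossed: pull certification for the whole configuration.
            # (A real scheme would also levy the fine and schedule the re-test.)
            self.certified = False
```

The interesting design choice is that the points attach to the configuration, not to an individual car: one bad algo/sensor combination accumulating faults across a fleet would decertify every vehicle running it, which is exactly the incentive pressure the scheme is meant to create.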