> Do you think that increasing the number of autonomous cars will linearly decrease the number of accidents, up to a certain point? Don’t you think that increasing their number will increase the chances of meeting really bad human drivers? We simply don’t have sufficient info on whether those ‘meetings’ are more or less deadly than with a human driver, and those are the cause of a significant chunk of accidents.
There's a difference between the risk changing, versus merely going from insufficient data to sufficient data.
When you have an extremely small data pool, it's also quite possible that one or two meetings with a really bad driver will give you a misleadingly bad impression of autonomous cars.
But I'll put it this way. Once we've seen either 10 billion miles or 100 fatalities from a particular tier of self-driving, we'll have a very solid idea of how dangerous it is. Getting that much data only requires a tenth of a percent of the cars in the US driving for three years. (And if they're particularly dangerous we can easily abort the test early.)
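A quick back-of-envelope check of that figure, using assumed round numbers (roughly 276 million registered US vehicles and about 12,000 miles driven per vehicle per year; both are my estimates, not from the comment above):

```python
# Sanity-check: does 0.1% of US cars over three years reach ~10 billion miles?
# Assumed inputs (approximate, for illustration only):
us_vehicles = 276_000_000   # rough count of registered US vehicles
fraction = 0.001            # a tenth of a percent of the fleet
miles_per_year = 12_000     # rough average miles per vehicle per year
years = 3

total_miles = us_vehicles * fraction * miles_per_year * years
print(f"{total_miles / 1e9:.1f} billion miles")  # prints "9.9 billion miles"
```

With those assumptions the test fleet covers just under 10 billion miles, which matches the claim.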
> And by intuition I doubt that today’s AI could react better than a competent human driver to someone cutting in front of it, and the like. Simply because we are better at reading high-level patterns in others’ driving. Reaction time is not the only metric that matters.
If someone's dangerously cutting people off there probably isn't much to read in their patterns. Being cut off seems to me like one of the situations that is most about reaction time and least about thinking.