If true, then robot cars are scarier than I thought ... is the AI going to be able to process the right scenario to avoid a deadly crash within the first decade or so of market adoption, or are the programmers going to learn and ship bug fixes for each person their prior code killed?
*I'm always downvoted for this thought, but is this not the true way forward with robot cars? Those who downvote me seem to think the first batch of market-adopted robot cars and their AI will be perfect and will never perform the wrong scenario ... that people aren't going to be killed by these things?
It's a legitimate question, but the mechanism described is completely wrong. Many people will be killed by any given mass-market implementation of self-driving cars. Some will be killed by crashes that a human could not have avoided, some will be killed by errors few humans would make (e.g. "plowing into an overturned truck at full speed"), and some people will NOT be killed, thanks to facets of the system that humans cannot replicate (never tired, never drunk, never angry).
Dozens, hundreds, or even thousands of people will die between "bug fixes". The only metric that matters is whether the system as a whole performs better or worse than a fleet of typical people. We know what the accident rate for a fleet of typical people is (for now), and it is pretty bad. People occasionally drive into obvious trucks for no known reason. People are inattentive. People are emotional. People rush. People ignore things they shouldn't, or react too late to things they should. The bar to operate more safely than people is high, but not impossibly high.
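To make that comparison concrete, here is a minimal sketch of the fleet-level rate comparison. The human baseline is a rough public figure (around 1.2 fatalities per 100 million vehicle miles in the US); the AV fleet numbers are entirely hypothetical:

```python
# Minimal sketch of the fleet-level comparison described above.
# The human baseline is a rough public figure (~1.2 fatalities per
# 100 million vehicle miles in the US); the AV fleet numbers below
# are entirely hypothetical.

HUMAN_FATALITIES_PER_100M_MILES = 1.2

def fatality_rate_per_100m_miles(fatalities: int, fleet_miles: float) -> float:
    """Deaths per 100 million miles driven by the fleet."""
    return fatalities / (fleet_miles / 100_000_000)

# Hypothetical AV fleet: 3 deaths over 500 million miles.
av_rate = fatality_rate_per_100m_miles(fatalities=3, fleet_miles=500_000_000)

print(f"AV fleet: {av_rate:.2f} deaths per 100M miles")
print(f"Humans:   {HUMAN_FATALITIES_PER_100M_MILES:.2f} deaths per 100M miles")
print("AV fleet is safer than the human baseline"
      if av_rate < HUMAN_FATALITIES_PER_100M_MILES
      else "AV fleet is not yet safer than the human baseline")
```

Note that at rates this low, a fleet needs on the order of hundreds of millions of miles before an observed difference means much statistically, which is part of why deaths "between bug fixes" are unavoidable.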
In response to market changes, world changes, and continued investment, the code will change, but so will the training samples and the labeling technology. Those changes will show up as a measured change in the fleet's performance.
There will be cases where a single death results in a code change. Early on there will be many such cases. But as time goes on, those cases will become less and less frequent, as hand-coded special cases become not only unnecessary but an impediment to the proper mapping of world features to behavior output.
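A toy sketch of what such a special case looks like (every function and feature name here is hypothetical; no real AV stack is structured this simply):

```python
# Toy illustration only; every function and feature name here is
# hypothetical, and no real AV stack is structured this simply.

def learned_policy(scene_features: dict) -> str:
    """Stand-in for a trained model's general mapping from scene to action."""
    return "emergency_brake" if scene_features.get("obstacle_ahead") else "proceed"

def planned_action(scene_features: dict) -> str:
    # Hot-fix shipped after one specific crash. It covers exactly one
    # scenario; once the learned policy handles it, this branch is dead
    # weight at best, and at worst overrides a now-correct behavior.
    if scene_features.get("overturned_truck_ahead"):
        return "emergency_brake"
    return learned_policy(scene_features)

print(planned_action({"overturned_truck_ahead": True}))  # emergency_brake
```

The point of the sketch: each patch like this is written against one past failure, so as the learned mapping improves, the accumulated patches stop paying rent.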
The important part here is to use the right benchmark for human drivers. Crash statistics are heavily skewed by drivers who are speeding or under the influence. That can be the right benchmark to show how AI driving can help in certain circumstances, but it doesn't show why most human drivers should be replaced.
Taking those out, and considering the assistance systems in new cars (which already work quite well), AI has to perform incredibly well to drive more safely, especially if it can only drive in easy weather conditions.
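As a rough sketch of what "taking those out" means numerically (the totals are order-of-magnitude figures from this thread; the impaired/speeding share is made up purely to show the mechanics of stratifying the baseline):

```python
# Hypothetical illustration of re-baselining human crash statistics.
# The totals are order-of-magnitude figures from the thread; the
# impaired/speeding share is made up to show the mechanics.

total_fatalities = 38_000         # per year, US roads
total_miles = 3.2e12              # rough annual US vehicle miles travelled
impaired_or_speeding_share = 0.5  # hypothetical share of fatalities

all_drivers_rate = total_fatalities / (total_miles / 1e8)
# Simplification: keeps total miles in the denominator, since impaired
# and speeding drivers account for a small share of miles driven.
remaining_rate = (total_fatalities * (1 - impaired_or_speeding_share)) / (total_miles / 1e8)

print(f"All human drivers:           {all_drivers_rate:.2f} deaths per 100M miles")
print(f"Excluding impaired/speeding: {remaining_rate:.2f} deaths per 100M miles")
# Whichever baseline you pick roughly sets the bar an AV fleet must clear.
```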
Far in the future, we will also see the effects of full or near-full adoption of self-driving. If all the cars on the road are driven by the "same", or at least a similar, driver, the edge cases will drop off significantly.
A valid question, and one that deserves an answer.
However, it's also valid to note that 38k+ people die on US roads each year, and as I understand it, most of those are chalked up to being freak, unpreventable "accidents". So maybe there's a step here even ahead of autonomous vehicles, where we commit to abandoning this way of thinking and insist that every road death is fully root-caused. Not just back to human error, but in the FAA sense, back to why equipment, processes, and infrastructure were in place that allowed a single moment of human inattention to be so deadly.
> Not just back to human error, but in the FAA sense, back to why equipment, processes, and infrastructure were in place that allowed a single moment of human inattention to be so deadly.
I'm for it - but I'll bet the average person will be against the weeks of training we would soon require every year before you're allowed to touch a car. It will be even worse once people realize how high the dropout rate is (people who fail and suddenly can't get around).
Yeah. I think most advocates have long ago realized that the only way you could ever hope to have the requirements for vehicle piloting tightened up to where they should be is if you've built a society where 99% of people can drop out of the pool of drivers and survive, and that means some combination of a) robust mass transit, and b) machine pilots.
The current system is obviously the result of a century of symbiosis between car-centric development and driving being seen by most as a requirement to participate in society, and therefore a de facto right.
That's my sentiment on this ... progress will be a killer in this realm of software development, and on a large scale. Won't the deaths their AI causes bother the developers who get into robot cars? Just analyze the data, learn, and ship the fix; who cares about Johnny, Susie, and their kid, killed by the AI's mistake when they weren't even riding in a robot car, just driving alongside one, like Uber's pedestrian-killing robot car.
Right now, the vast majority of the code we developers ship just fixes bugs in business and consumer applications, where the risk of loss of life from what we fix and ship is almost nil.
Paul, some engineers already live in the world where mistakes can cost lives. Including automotive software engineers. In fact, the engineering world is full of people who make compromises knowing that the product could be safer, but they're aiming for safe enough because that's the only real way to move forward.
It depends on the kind of accident. Currently, accidents are also treated very differently depending on intent and the likelihood that the same fault would have been made by other drivers. I assume software will be judged under the same circumstances.
I'm always thinking about the thing we have to keep in the back of our heads -- "cars" as humans have done them are stupid. Very, very stupid. You go to LA and see all the people going places, each in a separate car, when trains and subways were invented a long time ago, and it becomes clear.
So, sure, you have an interesting technical question of "robots being able to drive like people," but let's not take any of this too seriously if we're comparing to a concept of "intelligence." The collective stupid is far too overwhelming.
I wonder how they'd do on the L.A. freeway, with traffic going from average to heavy and back to average, while a police chase is happening in a torrential downpour... I'm sure more scenarios that happen at once in real life can be added on ... yet don't worry, the AI knows all the millions of scenarios stacked on top of each other and will be able to handle them like an attentive human driver would in such situations.