As with most thought experiments, though, it's a little far-fetched. How often, in the run-up to a crash, is it really possible to make oneself safer (or less safe) at the expense (or benefit) of another car?
A more common scenario might be driverless cars that don't communicate with each other and therefore try to "optimize" a crash without knowing how other cars are going to respond – potentially far more dangerous than the magnanimous AI you talk about.