It's a more generalised problem than you probably think, and is a factor in virtually every multi-party accident. Even if the root cause of the accident is someone else's human error, this still leaves surrounding vehicles with a choice about how to respond to it: whether to take damage or inflict damage, and if the latter, then inflict damage upon whom.
For an AV, that's a trolley problem. For a human, it's not. We don't rationalise about such things: it happens in an eyeblink, and gets filed under "shit happens". We convince ourselves that the outcome was inevitable, just a function of physics. But it isn't. Unlike humans, AVs will have both the ability and the obligation to make choices in such situations; that's critical to their harm-reduction capabilities. When all of the available choices are to some degree bad (which is very, very often), it's a trolley problem. Which is why people who work on AVs are legitimately concerned with trolley problems.
Eh. Or you can just let the auto-driving auto-mobile take the same "shit happens" approach that humans do. Whatever the code happens to spit out, that's the direction it goes, ethics ignored.
Yep. All that stands in the way is every personal injury lawyer, class-action lawyer, consumer interest group, public safety regulator, competitor, and short-seller in the world.