It is likely that an AI sophisticated enough to make the consideration will be sophisticated enough to avoid the question.
That is, there isn't all that much reason for the cars to be operating on the precipice of disaster.
(If you want to say "But what about uncontrollable and unforeseen circumstances?", we should hope the standard we test against is 'better than most humans', not 'perfect')
Assuming we change slowly to self-driving cars and they operate for a while in an environment with human-driven vehicles, no disaster scenario should be ruled out. They will have to deal with the situations that kill people right now, and their choices could easily reduce to "kill my passengers or those drunks in that car hurtling toward me", for example.
Better than humans is great, sophisticated AI is great, and we can't expect perfection, but I don't see a situation where these cars enter an environment much less dangerous than today's.
I expect the standard heuristic when the program detects an unavoidable collision will be to minimize impact energy, whatever that means.
(It won't have good information about the mass of oncoming traffic, but there aren't that many situations where choosing to collide with the oncoming vehicle makes any sense at all)
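As a minimal sketch of what such a heuristic might look like, the program could estimate the kinetic energy of each collision option and pick the lowest. The function names, masses, and speeds below are all hypothetical illustrations, not anything a real system uses; an actual implementation would estimate these quantities from noisy sensor data.

```python
def impact_energy(closing_speed_mps, est_mass_kg):
    """Rough kinetic energy of a collision (joules): E = 1/2 * m * v^2."""
    return 0.5 * est_mass_kg * closing_speed_mps ** 2


def least_energetic_option(options):
    """Pick the collision option with the lowest estimated impact energy.

    `options` maps a label to (closing speed in m/s, estimated mass in kg).
    """
    return min(options, key=lambda label: impact_energy(*options[label]))


# Illustrative numbers only: a head-on collision where speeds add,
# versus braking into a fixed roadside object at lower closing speed.
choices = {
    "oncoming car": (40.0, 1500.0),
    "roadside barrier": (15.0, 1200.0),
}
print(least_energetic_option(choices))  # → roadside barrier
```

Note this sketch dodges exactly the problem raised above: the mass estimate for oncoming traffic is guesswork, which is part of why colliding with the oncoming vehicle rarely comes out ahead.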