Generally speaking, lawyers will throw everything they have at the wall and see what sticks. I believe this is basically taught as official strategy.
However, it is pretty obvious that some of what they are throwing has a reasonable chance of sticking, and some of it is being thrown just because, why not? It isn't that hard for the parties to sort their arguments by likelihood of success and present just the best ones. If you think about how to model that, you'll see that for a vanishing fraction of the work you end up with a high probability of reaching the same result the hypothetical full trial would have.
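To make that concrete with a toy sketch (this is not a legal model, and every number below is invented): treat each argument as having some independent chance of succeeding, sort them strongest-first, and look at how quickly the overall chance of prevailing plateaus.

```python
# Toy illustration, not a legal model: if each argument has some (assumed,
# independent) chance of succeeding, the probability that at least one of the
# top-k succeeds plateaus quickly, so presenting only the strongest few
# captures almost all of the value of presenting everything.
probs = sorted([0.40, 0.25, 0.15, 0.08, 0.05, 0.03, 0.02, 0.01, 0.01, 0.005],
               reverse=True)

p_fail_all = 1.0  # running product of failure probabilities
for k, p in enumerate(probs, start=1):
    p_fail_all *= (1.0 - p)
    print(f"top {k:2d} arguments: P(at least one sticks) ~ {1.0 - p_fail_all:.3f}")

# With these made-up numbers, the top 3 arguments already give ~0.62,
# versus ~0.69 from arguing all ten: most of the payoff for a fraction of the work.
```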
Now combine that with the fact that trials aren't free (the taxpayers are effectively a party to the suit, since they pay for the judge, the courtroom, and the rest of the infrastructure), and with the general principle that if Google did commit an injustice, every day Oracle goes without remuneration is itself a further injustice. There is a compelling interest in completing the trial sooner rather than later, and if, on balance, that means jettisoning a series of arguments unlikely to succeed anyhow, that's a net win. "It will take too long to try them all" is a valid concern for everybody: what's the point of a trial if, to take one limit case, the expenses it incurs for Oracle, Google, and the taxpayers exceed the time-value-of-money of any possible judgment?
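A back-of-the-envelope version of that limit case, with entirely invented figures: discount the best possible award back to today and compare it against what a maximal, try-everything trial would cost all parties combined.

```python
# Back-of-the-envelope sketch of the limit case above; all numbers are assumed.
max_judgment = 100e6          # best-case award, in dollars (assumed)
discount_rate = 0.05          # annual time-value-of-money rate (assumed)
years_to_verdict = 6          # extra years a try-everything trial takes (assumed)
combined_trial_costs = 90e6   # Oracle + Google + court costs over those years (assumed)

present_value = max_judgment / (1 + discount_rate) ** years_to_verdict
print(f"present value of best-case judgment: ${present_value / 1e6:.1f}M")
print(f"combined cost of trying everything:  ${combined_trial_costs / 1e6:.1f}M")

# With these made-up figures the award is worth ~$74.6M today, i.e. less than
# what it costs everyone to obtain it, so the "full" trial is a net loss even
# for the winner.
```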
Also, this will apparently go to a jury. You can't throw arbitrary amounts of information at a jury and expect good results. I would argue that the bound on what you can expect a jury to understand is itself a critical (and underappreciated) component of our system. If 12 laymen can't understand it after several months of targeted instruction, the law is too vague to apply in the first place.