If you had to reach into your bag of probability concepts to pick the most closely related one, it would be "Large Deviations", which comes up for engineers in the context of information theory.
Basically, as explained in the article, many kinds of rare events (like an average attaining a value far from the population mean) can be characterized without knowing a lot of problem-specific details.
For instance, if the typical value for an average of 1000 positive numbers is 10, and you want to know the chance of an outcome greater than 100 (a "large deviation"), you can pretty much just calculate the chance of the outcomes very near 100, without caring about 120 or greater. That is, it turns out that the larger outliers carry very little probability, although that's not obvious.
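To make that concrete, here's a little sketch of my own (a toy setup, not from the article), using i.i.d. Exp(1) samples so the sum is Gamma-distributed and the tail can be computed exactly. Even conditioned on the already-rare event "sample mean > 2", outcomes beyond 2.4 account for a tiny sliver of the probability:

```python
import math

def gamma_sf(k, x):
    """Survival function P(Gamma(shape=k, rate=1) > x) for integer k,
    via the closed form e^(-x) * sum_{j < k} x^j / j!."""
    term, total = math.exp(-x), 0.0
    for j in range(k):
        total += term
        term *= x / (j + 1)
    return total

# The sum of n i.i.d. Exp(1) values (mean 1) is Gamma(n, 1), so the
# sample mean exceeds a exactly when the Gamma variable exceeds n*a.
n = 20
p2  = gamma_sf(n, 2.0 * n)   # P(sample mean > 2), already a rare event
p24 = gamma_sf(n, 2.4 * n)   # P(sample mean > 2.4), far rarer still
print(f"P(mean > 2)         = {p2:.2e}")
print(f"P(mean > 2.4 | > 2) = {p24 / p2:.1%}")
```

Run it and you'll see that of all the ways the mean can exceed 2, well under a couple percent of that probability lives at 2.4 or beyond: the rare event almost always happens "just barely".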
This phenomenon comes up a lot and goes by several names. It was known to Laplace, and the Laplace approximation, commonly used in that area, bears his name. It's also related to the notion in information theory of the "typical set".
Similarly, you can, to some extent, characterize the realizations that cause these extreme values (as opposed to just computing their probabilities).
For instance, if I remember right, when the values are heavy-tailed, the typical realization that gives rise to a large average has one (or a few) much-larger-than-expected values and a vast majority of about-average values (the "single big jump" principle), whereas for light-tailed values it's the opposite: every value is slightly too large, as if drawn from a tilted distribution.
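You can see the heavy-tailed case by simulation (again my own sketch, assuming Pareto samples with tail index 1.5): conditioned on a rare, large sample mean, the single largest value typically carries most of the sum.

```python
import random

random.seed(0)
ALPHA, N, TRIALS, THRESH = 1.5, 30, 100_000, 10.0  # Pareto(1.5) has mean 3

def pareto(alpha):
    # Pareto with x_m = 1 via inverse-CDF sampling; 1-U is in (0, 1]
    return (1.0 - random.random()) ** (-1.0 / alpha)

shares = []  # fraction of the sum contributed by the single largest value
for _ in range(TRIALS):
    xs = [pareto(ALPHA) for _ in range(N)]
    s = sum(xs)
    if s / N > THRESH:                 # condition on a rare, large mean
        shares.append(max(xs) / s)

shares.sort()
median_share = shares[len(shares) // 2]
print(f"{len(shares)} rare events; median share of the largest value: {median_share:.2f}")
```

The median share comes out well above one half: in the typical rare realization, one outlier does almost all the work, and the other 29 values look unremarkable.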
The verbiage in the article is a little too "wanky" for me to tell whether it's getting at the same thing.