You are thinking of a different gamma-correction step. The reason you gamma-correct during rendering is that monitors expect data to already be gamma-corrected, and it is this expectation that causes the problem I am talking about. Data in a bitmap is not linear with the light intensity of the colors; it is gamma'd!
The simplest kind of mipmapping is a box filter, where you are just averaging 4 pixel values at once into a new pixel value. Thinking just of grayscale pixels: if you add 4 pixels that are each 1.0 (if you are thinking in 8 bits, 1.0 == 255) and divide by 4, you get 1.0 again. If you add two pixels that are 1.0 and two that are 0, you get a value of 0.5. That would be fine if your bitmap were stored in units that are linear with the actual intensity of light; but it is not, because it is in a weird gamma! When the monitor decodes that 0.5, what you actually get is something like pow(0.5, 2.2) ≈ 0.22, which is way too dark: the true average intensity of two white pixels and two black pixels is 0.5.
Thus when you don't gamma-correct during mipmapping, bright things lose definition and smudge into dim things way more than actually happens in real life when you see objects from far away.
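To make this concrete, here is a tiny sketch of both filters in C, using a plain 2.2 power curve as a stand-in for the real sRGB transfer function (which has a linear toe near black, but 2.2 is close enough to show the effect):

```c
#include <math.h>
#include <stdio.h>

/* Decode a stored (gamma'd) value to linear light, and re-encode.
   Plain 2.2 power curve as an sRGB approximation. */
static float to_linear(float g) { return powf(g, 2.2f); }
static float to_gamma(float l)  { return powf(l, 1.0f / 2.2f); }

int main(void) {
    /* Two white pixels and two black pixels, as stored in the bitmap. */
    float px[4] = { 1.0f, 1.0f, 0.0f, 0.0f };

    /* Naive box filter: average the stored values directly. */
    float naive = (px[0] + px[1] + px[2] + px[3]) / 4.0f;     /* 0.5 */

    /* Gamma-aware box filter: decode, average in linear light, re-encode. */
    float lin = (to_linear(px[0]) + to_linear(px[1]) +
                 to_linear(px[2]) + to_linear(px[3])) / 4.0f; /* 0.5 linear */
    float correct = to_gamma(lin);                            /* ~0.73 stored */

    /* What the monitor actually displays for each stored result: */
    printf("naive:   stored %.3f -> displayed %.3f\n", naive, to_linear(naive));
    printf("correct: stored %.3f -> displayed %.3f\n", correct, to_linear(correct));
    return 0;
}
```

The naive mip level displays at about 22% intensity where it should be 50%; the gamma-aware one stores ~0.73, which displays at the correct 50%.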
> Data in a bitmap is not linear with the light intensity of the colors; it is gamma'd!
That is true for common sRGB images, such as what you would see on the web. But for games, which usually have a tightly controlled art pipeline, it would be feasible to use linear-colorspace bitmaps as textures, and in that case you would not need gamma-aware scaling.
It is somewhat infeasible to use a linear colorspace, because you need a lot more precision in order to do this without banding. You end up with substantially bigger texture maps: twice as big at minimum, but in practice more like 8x or 12x, because the compressed-texture formats that graphics chips are able to decompress onboard do not support this extra precision. So if you were to try using something like S3TC compression to reduce the bloat, the result would be really ugly.
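To put a rough number on the banding: in an 8-bit linear texture, the very first representable step above black already displays at roughly 8% perceived brightness (same 2.2 approximation as before), so dark gradients fall apart:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Smallest nonzero value an 8-bit *linear* texture can store: */
    float step = 1.0f / 255.0f;
    /* Approximate perceived brightness with a 1/2.2 power curve: */
    printf("first step above black: %.4f linear -> ~%.1f%% perceived\n",
           step, 100.0f * powf(step, 1.0f / 2.2f));  /* ~8.1% */
    /* In gamma-encoded 8-bit, code 1 is ~0.4% in perceptual terms,
       so the codes are spent far more evenly where the eye can see. */
    return 0;
}
```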
In general, games only use light-linear texture maps when they also need HDR in the source texture, which is not that often. Ideally it is "purest" to use HDR for all source textures, but nobody does this because of the big impact on data size. (And even for the few textures that are stored as HDR source, the data will often not be plain integers, but some other slightly-more-complex encoding.)
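One classic example of such an encoding is Greg Ward's RGBE (the pixel format of Radiance .hdr files), where three 8-bit mantissas share a single 8-bit exponent. A minimal sketch of the encode side, adapted from the usual formulation; not production code:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Pack a linear HDR color into 32 bits: R, G, B mantissas + shared exponent. */
static void float_to_rgbe(const float rgb[3], uint8_t rgbe[4]) {
    float m = rgb[0];
    if (rgb[1] > m) m = rgb[1];
    if (rgb[2] > m) m = rgb[2];
    if (m < 1e-32f) {
        rgbe[0] = rgbe[1] = rgbe[2] = rgbe[3] = 0;  /* effectively black */
    } else {
        int e;
        /* frexpf gives m = f * 2^e with f in [0.5, 1). */
        float scale = frexpf(m, &e) * 256.0f / m;
        rgbe[0] = (uint8_t)(rgb[0] * scale);
        rgbe[1] = (uint8_t)(rgb[1] * scale);
        rgbe[2] = (uint8_t)(rgb[2] * scale);
        rgbe[3] = (uint8_t)(e + 128);               /* biased exponent */
    }
}

int main(void) {
    float hdr[3] = { 12.5f, 3.0f, 0.25f };  /* intensities well above 1.0 */
    uint8_t out[4];
    float_to_rgbe(hdr, out);
    printf("RGBE: %u %u %u exp %u\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```

(Modern engines more often just use 16-bit half-float texture formats when the hardware supports them, but the point stands: it is not a plain integer encoding.)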
[Claimer: I have been making 3D games professionally for 17 years, so this information is accurate.]