The important thing to know is that gamma is just an optimization step for low-precision color encodings. Ideally, you would just work with linear CIE RGB values and forget all about gamma. Unfortunately you still have relics like sRGB around, so you cannot just ignore it, but rather than reading an article about gamma, you should read a book about colorimetry.
You would never want to ignore it. Gamma is about perceptual intensity, which follows a power law, as opposed to physical intensity (counting photons). That's why it's an "optimization" in the first place. Even in a magical world where all colors were represented with infinite-precision rational numbers, you'd still want to use a power law every time you wanted to display a gradient.
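To make the power law concrete, here is a small sketch (using the common gamma ≈ 2.2 approximation rather than the exact piecewise sRGB curve) of how equal steps in encoded value map to linear light:

    GAMMA = 2.2  # rough approximation of the sRGB curve (assumption; real sRGB is piecewise)

    def encoded_to_linear(v):
        """Encoded value in [0, 1] -> linear light."""
        return v ** GAMMA

    for step in range(9):
        v = step / 8
        print(f"encoded {v:.3f} -> linear {encoded_to_linear(v):.3f}")
    # The encoded midpoint 0.5 maps to only ~0.22 in linear light, which is
    # roughly what we perceive as "half as bright": that is the power law.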
Gamma is more or less a poor man's perceptual color space, but I don't think it's very useful for that nowadays. There are much better options for image processing that requires perceptual uniformity (which is not everything, e.g. blurring is usually better done in linear RGB), and when you don't need that, I'm not aware of much reason to use it other than limited precision and sRGB being a ubiquitous device color space.
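As a concrete example of the blurring point, a minimal sketch (again assuming the gamma-2.2 approximation of sRGB): averaging encoded values darkens edges, while averaging in linear light gives the physically correct result.

    GAMMA = 2.2
    to_linear = lambda v: v ** GAMMA
    to_encoded = lambda v: v ** (1.0 / GAMMA)

    black, white = 0.0, 1.0
    # A 2-pixel "blur": average a black and a white pixel, wrong way and right way.
    averaged_encoded = (black + white) / 2                                   # 0.5, too dark
    averaged_linear = to_encoded((to_linear(black) + to_linear(white)) / 2)  # ~0.73
    print(averaged_encoded, averaged_linear)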
Depends on what you mean by "display". If you are talking about a monitor, sure, do whatever magic works best there. But otherwise I prefer to work in a mathematical vector space and ignore it. Raytracing software, for example, can just ignore it as long as the tools that read its output understand the image format.
Indeed, but reading this I definitely get the sense this would all be easier if that were left to displays at the end of the chain, rather than baked into how the information is encoded on disk.
Right. The second issue is that the example grayscale ramps on the page lack a color profile as well, and depending on how you're looking at them (which monitor and, sadly, which browser/version), you'll get different results.
On my current wide-gamut display the uncorrected images look almost linear, while the gamma-corrected ones are crushed toward black (exactly the opposite of what the page is intended to show).
I don’t understand what you mean here. On the last engine I worked on, PQ encoding was done in a shader in the last rendering pass, which then ended up copied into the back buffer. That’s what I call output.
Did we miss a magical hardware feature somewhere so that we could have kept the output linear and just let the hardware do its thing?
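For reference, the PQ step mentioned above is the SMPTE ST 2084 inverse EOTF; here is a sketch of that curve in Python (the constants are from the standard; this is not the engine's actual shader):

    def pq_encode(luminance_cd_m2):
        """Absolute luminance in cd/m^2 (0..10000) -> PQ code value in [0, 1]."""
        m1 = 2610.0 / 16384.0
        m2 = 2523.0 / 4096.0 * 128.0
        c1 = 3424.0 / 4096.0
        c2 = 2413.0 / 4096.0 * 32.0
        c3 = 2392.0 / 4096.0 * 32.0
        y = max(luminance_cd_m2, 0.0) / 10000.0
        return ((c1 + c2 * y ** m1) / (1.0 + c3 * y ** m1)) ** m2

    print(pq_encode(100.0))  # ~0.51: SDR reference white lands near the middle of the code range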
Is there actually a mapping to downsample colors so that the mapped colors are visible on any screen? (Assuming there is a minimal common intersection of the color spaces.)
In an ideal world, why would you use gamma there? Especially in a rendering pipeline, using vectors (= linear) is better. And floats give you the range and precision you need. If you use CIE XYZ instead of CIE RGB, the Y channel corresponds to luminance, and a float value for luminance should cover any range any hardware can handle.
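(For reference, Y is just a weighted sum of linear RGB; a sketch using the standard weights for sRGB/Rec.709 primaries with a D65 white point:)

    def relative_luminance(r_lin, g_lin, b_lin):
        """Y (luminance) from *linear* RGB, standard sRGB/Rec.709 weights."""
        return 0.2126 * r_lin + 0.7152 * g_lin + 0.0722 * b_lin

    print(relative_luminance(1.0, 1.0, 1.0))  # 1.0 for reference white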
You'd use gamma to convert it to the limited rgba8888 scheme, or maybe, with HDR monitors, some sort of 10- or 12-bit scheme. Either way you'll need to cut out information, because a light with value 1029 and a shade with value 3 in the same picture won't fit well without some gamma correction and cutoff.
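A minimal sketch of that conversion for one channel, using the actual piecewise sRGB transfer function:

    def linear_to_srgb_u8(x):
        """Linear float channel -> 8-bit sRGB-encoded value (one channel of rgba8888)."""
        x = min(max(x, 0.0), 1.0)  # the cutoff: clamp out-of-range light
        if x <= 0.0031308:
            v = 12.92 * x
        else:
            v = 1.055 * x ** (1.0 / 2.4) - 0.055
        return round(v * 255)

    # Without the curve, dark tones would share a handful of byte values;
    # with it they get far more codes.
    print(linear_to_srgb_u8(0.001), linear_to_srgb_u8(0.5), linear_to_srgb_u8(1.0))  # 3 188 255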
We use gamma in order to approximate human sensitivity, which is logarithmic, not linear. Most machine sensors work in linear space; they are either linear or linearized around a specific point (like most cameras).
Human senses adapt continuously to the strength of the signal. Your eardrum has muscles that stiffen with loud sounds and decouple it. Your pupils contract and let much less light pass: half the pupil diameter means a quarter of the light, because the area scales with the square of the diameter, not linearly.
Then the sensors themselves reduce the signal again. If you look at something bright in sunlight for two seconds and then look away, you see a "shadow" of the bright object, because that part of the retina has adapted to the level of sunlight and automatically subtracts the brightness in that region.
Neurons themselves work logarithmically. The hair cells in the ear respond proportionally less as the signal grows. Chemical diffusion into cells is likewise proportional to the concentration itself. That is, they work in the logarithmic domain (or, inverted, the exponential domain, if you want to recover the original signal).
The reason floats work is that float encoding is roughly logarithmic. But there is a problem: 16-bit floats are not standard and are very limited. You need 32-bit floats per image channel, and that is wasteful.
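A quick way to see both points (floats behaving like a log encoding, and 16 bits being coarse); NumPy is used here only for its float16 type:

    import numpy as np

    # Floats keep roughly constant *relative* precision, which is what makes
    # them behave like a log encoding. A 16-bit float has an 11-bit significand,
    # so that relative precision is coarse.
    print(np.finfo(np.float16).eps)  # ~0.000977: steps of ~0.1% near 1.0
    print(np.finfo(np.float32).eps)  # ~1.19e-07
    print(np.float16(2049.0) == np.float16(2048.0))  # True: integers already collapse above 2048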
Adobe's raw format (DNG) introduced a 16-bit float encoding. But if you do something, do it well: use 32 bits, like GIMP does. The great thing about 32 bits is that you can combine multiple exposures in a single image, so you are not wasting as many resources: instead of 3 or 4 pictures (each at a different exposure), you just have one with almost the same information.
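A minimal sketch of that exposure-merging idea (a hypothetical helper; it assumes the frames are already linearized to [0, 1] with known shutter times, and ignores alignment and camera response recovery):

    import numpy as np

    def merge_exposures(images, exposure_times):
        """Merge bracketed exposures into one 32-bit float radiance image."""
        acc = np.zeros(images[0].shape, dtype=np.float32)
        weight = np.zeros(images[0].shape, dtype=np.float32)
        for img, t in zip(images, exposure_times):
            img = img.astype(np.float32)
            w = 1.0 - np.abs(img - 0.5) * 2.0  # trust mid-tones, distrust clipped pixels
            acc += w * (img / t)               # divide by shutter time -> scene radiance
            weight += w
        return acc / np.maximum(weight, 1e-6)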
For example: https://mitpress.mit.edu/books/color-sciences