I think the deep underlying problem is not just the handling of gamma, but that to this day the graphics systems we use require programs to output their graphics in the color space of the connected display device. If graphics system coders in the late 1980s and early 1990s had bothered to just think for a moment and look at the existing research, the APIs we're using today would expect colors in a linear connection color space.
Practically all the problems described in the article (which BTW has a few factual inaccuracies regarding the technical details of the how and why of gamma) vanish if graphics operations are performed in a linear connection color space. The most robust choice would have been CIE1931 (aka XYZ1931).
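To make it concrete, here's a minimal C sketch of what "doing the operation in linear light" means, using the average of two encoded values as the example. I'm using a plain gamma=2.2 power law as a stand-in for the real (piecewise) sRGB curve, and the function names are just made up for the illustration:

    #include <math.h>
    #include <stdio.h>

    /* Approximate decode/encode with a pure gamma=2.2 power law.
     * (The actual sRGB curve is piecewise and slightly different.) */
    static double decode(double v) { return pow(v, 2.2); }
    static double encode(double v) { return pow(v, 1.0 / 2.2); }

    int main(void)
    {
        double a = 0.0, b = 1.0;  /* encoded black and white */

        /* Naive average of the encoded values: displays far too dark. */
        double wrong = (a + b) / 2.0;                           /* 0.50 */

        /* Average in linear light, then re-encode for the display. */
        double right = encode((decode(a) + decode(b)) / 2.0);   /* ~0.73 */

        printf("naive: %.3f  linear: %.3f\n", wrong, right);
        return 0;
    }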
Doing linear operations in CIE Lab also avoids the gamma problems (the L component is linear as well), though the chroma transformation between XYZ and the ab components of Lab is nonlinear. However, from an image processing and manipulation point of view, doing linear operations also on the ab components of Lab will actually yield the "expected" results.
The biggest drawback of connection color spaces is that 8 bits of dynamic range are insufficient for the L channel; 10 bits is sufficient, but in general one wants at least 12 bits. Within 32 bits per pixel a practical distribution is 12 bits L, 10 bits a, 10 bits b. Unfortunately current GPUs take a performance penalty with this kind of alignment, so in practice one is going to use a 16 bits per channel format.
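Just to illustrate what such a layout would look like at the bit level (the field order and offsets here are arbitrary, not any real GPU format):

    #include <stdint.h>

    /* Pack a 12-bit L field and two 10-bit a/b fields into one 32-bit word.
     * Layout chosen arbitrarily: L in bits 20..31, a in 10..19, b in 0..9.
     * The signed a/b values would have to be offset-encoded first. */
    static uint32_t pack_Lab_12_10_10(uint32_t L, uint32_t a, uint32_t b)
    {
        return ((L & 0xFFFu) << 20) | ((a & 0x3FFu) << 10) | (b & 0x3FFu);
    }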
One must be aware that, aside from the linear XYZ and Lab color spaces, images are often stored with a nonlinear mapping applied even when a connection color space is used. For example, DCI compliant digital cinema package video essence is specified to be stored as CIE1931 XYZ with D65 whitepoint and a gamma=2.6 mapping applied, using 12 bits per channel.
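As far as I recall SMPTE 428-1, encoding a component boils down to CV = round(4095 * (C/52.37)^(1/2.6)); a sketch of that (constants from memory, so double-check against the spec):

    #include <math.h>
    #include <stdint.h>

    /* Linear CIE XYZ component (cd/m^2) to a 12-bit DCI code value. */
    static uint16_t dci_encode(double component)
    {
        double v = pow(component / 52.37, 1.0 / 2.6);
        if (v < 0.0) v = 0.0;
        if (v > 1.0) v = 1.0;
        return (uint16_t)lround(v * 4095.0);
    }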
> If graphics system coders in the late 1980s and early 1990s had bothered to just think [...] the APIs we're using today would expect colors in a linear connection color space.
Nope. As you point out, if you use 8-bit integers to represent colors, you absolutely want to use a gamma-encoded color space. Otherwise you’re wasting most of your bits and your images will look like crap. In the 80s/90s, extra bits per pixel were expensive.
Linear encoding only starts to be reasonable with 12+ bit integers or a floating point representation.
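An easy way to see the waste: with an 8-bit linear encoding, the relative luminance step between adjacent codes is 1/code, while the eye can distinguish luminance differences of very roughly 1%. So the dark end bands horribly while the bright end burns codes on invisible differences:

    #include <stdio.h>

    int main(void)
    {
        /* Relative luminance change between adjacent codes of an
         * 8-bit *linear* encoding. */
        int codes[] = { 1, 2, 10, 50, 254 };
        for (int i = 0; i < 5; ++i) {
            int c = codes[i];
            printf("code %3d -> %3d: +%.1f%% luminance\n", c, c + 1, 100.0 / c);
        }
        return 0;
    }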
> The most robust choice would have been CIE1931
RGB or XYZ doesn’t make any difference to “robustness”, if we’re just adding colors together. These are just linear transformations of each other.
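For instance, linear-light sRGB/Rec.709 values and CIE XYZ are related by nothing more than a fixed 3x3 matrix (the usual D65 coefficients):

    /* Linear-light sRGB (Rec.709 primaries, D65 white) to CIE 1931 XYZ.
     * A plain 3x3 matrix multiply -- no gamma involved on either side. */
    static void linear_rgb_to_xyz(const double rgb[3], double xyz[3])
    {
        xyz[0] = 0.4124 * rgb[0] + 0.3576 * rgb[1] + 0.1805 * rgb[2]; /* X */
        xyz[1] = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]; /* Y */
        xyz[2] = 0.0193 * rgb[0] + 0.1192 * rgb[1] + 0.9505 * rgb[2]; /* Z */
    }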
> (the L component is linear as well) [...] from an image processing and manipulation point of view, doing linear operations also on the ab components of Lab will actually yield the "expected" results.
This is not correct.
It is true that the errors you get from taking affine combinations of colors in CIELAB space are not quite as outrageous as the errors you get from doing the same in gamma-encoded RGB space.
> RGB or XYZ doesn’t make any difference to “robustness”, if we’re just adding colors together. These are just linear transformations of each other.
What I meant was that there are so many different RGB color spaces that just converting to "RGB" is not enough. One picture may have been encoded in Adobe RGB, another one in sRGB, and even after linearization they're not exactly the same. Yes, one can certainly bring them into a common RGB, but then you might as well transform into a well-defined connection color space like XYZ.
> It is true that the errors you get from taking affine combinations of colors in CIELAB space are not quite as outrageous as the errors you get from doing the same in gamma-encoded RGB space.
That's what I meant with "expected" results. In general Lab is a very convenient color space to work with. I was wrong though about L being linear. It's nonlinear as well, but not in such a nasty way as sRGB is.
The explanation of why this kind of gamma mapping was introduced in the first place is inaccurate. An often cited explanation is the nonlinear perceptual response of human vision; but if you wanted to accurately model that, you'd need a logarithmic mapping. The true reason for the need for gamma correction is the nonlinear behaviour of certain kinds of image sensors and display devices. CRT displays, by the very physics they are based on, have an inherent nonlinearity that is approximated by gamma=2.2. When LCDs were introduced, they were of course made to approximate the behaviour of CRT displays, so that you could substitute them without further ado.
Another important aspect to consider is that using just gamma is not the most efficient way to distribute the bits. You want a logarithmic mapping for that, which also has the nice side effect that a power-law gamma ends up as a constant scaling factor on the logarithmic values, since log(x^gamma) = gamma * log(x).
Now, it's also important to understand that these days the bread-and-butter colorspace is sRGB, and that complicates things. sRGB has the somewhat inconvenient property that for the lower range of values it's actually _linear_ and only after a certain threshold does it continue (differentiably) with a power-law curve. That's kind of annoying, because with that you can no longer remap logarithmically. And of course converting from and to sRGB can be a bit annoying because of that threshold value; you certainly can no longer write it as a convenient one-liner in a GPU shader, for example. That's why modern OpenGL profiles also have special sRGB framebuffer and image formats, and reading from and writing to them will perform the right linearization mapping.
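Concretely, the sRGB transfer functions with their linear toe look like this; the branch on the threshold is exactly what keeps them from being a one-liner:

    #include <math.h>

    /* sRGB-encoded value (0..1) to linear light: linear segment below the
     * threshold, power-law segment above it. */
    static double srgb_to_linear(double v)
    {
        return (v <= 0.04045) ? v / 12.92
                              : pow((v + 0.055) / 1.055, 2.4);
    }

    /* Linear light (0..1) back to the sRGB encoding. */
    static double linear_to_srgb(double v)
    {
        return (v <= 0.0031308) ? 12.92 * v
                                : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
    }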
However, whichever explanation for gamma one prefers, the important takeaway is that to properly do image processing the values have to be converted into a linear color space for things to work nicely. Ideally a linear connection color space.
The L component of CIE Lab is not linear; it has a similar non-linearity to sRGB, because one of the aims of the design of the CIE Lab space is to be perceptually uniform.
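For reference, L* is a cube root with a small linear segment near black, quite analogous in shape to the sRGB curve:

    #include <math.h>

    /* CIE L* from relative luminance Y (i.e. Y/Yn in 0..1): cube root above
     * the (6/29)^3 threshold, a linear segment below it. */
    static double lab_L_from_Y(double y)
    {
        const double t = 216.0 / 24389.0;   /* (6/29)^3 ~= 0.008856 */
        double f = (y > t) ? cbrt(y)
                           : y * (24389.0 / 27.0) / 116.0 + 4.0 / 29.0;
        return 116.0 * f - 16.0;
    }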