Hacker News
Color Spaces (ciechanow.ski)
236 points by tylerchr on Feb 16, 2019 | 50 comments



Like many resources about color, this one takes an output-medium-centric viewpoint, followed by an early-20th-century-experiment-design-centric one, presumably because that was where the author started and what he read about. But however perfect the exposition might otherwise be, that is the wrong way to start a general discussion of the subject, and the result is inevitably misleading and confusing for non-experts.

> it is a pragmatic approach

Having tried to talk face-to-face to many people from different backgrounds about color, in practice this doesn’t work. People are starting with too many misconceptions and gaps in foundational knowledge.

If you want to understand color you have to start with at least the basics about the physiology of human vision, along with some basic optics. It is all but impossible to properly understand what is going on with RGB displays, understand what “white point” means, etc. without that background.

One of the resources recommended at the end, Bruce MacEvoy’s site http://handprint.com/LS/CVS/color.html, is a much better starting place, albeit pretty long for someone just interested in the basics.

Or there are many books about human color vision and color reproduction, some friendlier for laypeople and others highly detailed and technical. To understand recent color models Mark Fairchild’s book Color Appearance Models is pretty good, and the first half is a pretty accessible general overview. Or for more about color reproduction try Billmeyer and Saltzman’s Principles of Color Technology (recent editions rewritten by Roy Berns). For more about colorimetry try Hunt & Pointer’s book Measuring Color. Or for something historical, try Kuehni’s Color Space and Its Divisions.


The article is very well written, but the order does feel a little backwards to me too. Color spaces are confusing because they are not physical things. They're abstractions created by people to solve problems. You can come to understand a solution without understanding the problem, but I definitely think it's the harder way to approach learning.

There's a 10-minute overview of color theory from "A better default colormap for Matplotlib" that I really like. It's very condensed, but it takes you from light, to perception, to color spaces: https://youtu.be/xAoljeRJ3lU?t=218


There are quite a few legitimate entry points. There's a huge body of artists who had no idea about the physiology of human vision but absolutely understood hue, chroma, brightness.

They clearly understood the real world was HDR even if they didn't use that term. They understood the massive loss of dynamic range in their artwork, and yet they developed the concept of rendering in order to compress shadow and highlight detail, to preserve some sense of the real world when important and to clip it when it wasn't. And you don't even have to go back 600 years for such examples; even today's amateur photographers learn about these problems and workarounds without the benefit of learning any physiology.

We can't all dedicate the time to read Wyszecki and Stiles as the entry point into color and imaging, albeit of course it's quite useful for those who need to understand such origin stories and the math for all of it.


Yes, you can develop a good intuitive understanding if you spend 15 years or whatever painting full time (the first several years mostly bumbling around under various misconceptions making lots of mistakes) as an alternative to reading a few pages about how the eye works, if you prefer.

In painting (or, say, in developing dye transfer prints in a darkroom) having a deep understanding of the output medium is essential to do any kind of passable work, because color mixtures need to be made quite explicitly by the artist. The artist can’t just pluck a 10 YR 8/10 chip out of their Munsell Book of Color and immediately transfer the color to their canvas. Instead they have to squeeze some particular paints out of tubes and smear them together with a brush.

But even so, it is very helpful for painters, graphic designers, photographers, darkroom photo printers, etc. to learn a bit about how color vision and optics work. It will help organize and clarify their intuition, help them have comprehensible conversations with other artists or laypeople, help them transfer skills between media, and so on.

It’s deeply unfortunate that new computer artists are subjected to RGB values just because highly experienced painters can learn to work with a similar system. More perceptually relevant models are much nicer to work with in user interfaces.


> People are starting with too many misconceptions and gaps in foundational knowledge. [...] start with at least the basics about the physiology of human vision

A question I ask first-tier physical-sciences graduate students is "A five-year-old comes up to you and asks, 'I'm told the Sun is a big hot ball. Awesome! What color is the ball? I really care about the color of balls - I'm 5.' What color is the Sun?". Among the pervasively incorrect answers I get is a recurrent "It doesn't have a color; it's lots of different colors; it's rainbow color."

It reminds me of a failure mode in learning high-school stoichiometry: students not thinking of atoms as real physical objects. And why should letters be conserved?


For a 5-year-old I might say something like:

“The sun is so bright when it is high in the sky that it will permanently wreck your vision – please don’t look directly at the sun! – so it’s hard to give it a color label, but let’s say it looks ‘white’. Notice that the sky absorbs some of the light coming from the sun to us, changing the apparent color depending on how much sky we are looking through, so when the sun is very close to the horizon and we are looking through a whole lot of sky to see it, it looks orange. In between it looks like various yellows.”

Coming up with well-defined named color categories that will work for both surface colors and an emissive source as bright as the sun is hard to begin with. Talking about color in a precise way with 5-year-olds is even tougher.


I'd tweak that as...

"The ball is white." So often in education, the most significant bit is neglected.

For example, atoms are tiny little balls, sticky little balls that jiggle (insanely fast)... and yet not thinking of them as real physical objects (something you can teach in pre-K) is a failure mode in teaching high-school stoichiometry, and in some intro college chemistry.

NASA has done entire museum exhibits about the Sun, bereft of a single ball, or a single true-color image. Imagine a museum exhibit about elephants, without a single model, and only false-color images. Images of elephant surface temperature, of skin parasite density, of average mud thickness, etc. But not a single gray elephant in the room. Would you not then ask, "WTF were they thinking?"?

I'd skip the distinction between "really is white" vs "appears white because eye has been saturated by brightness, or is monochrome from dimness". And also the issue of color accommodation. Because the Sun really is white. As can be demoed by sunlight through a gap (window, leaves, pinhole camera) on a white surface, or through a neutral filter.

I like the emphasis on media. "The white ball... (Not) seen from underground, or from a closet, or from deep in the ocean, appears black. :) Seen from underwater, through tens of meters of water, appears blueish. Seen from under the atmosphere, low in the sky through lots of atmosphere, appears reddish. Seen higher in the sky through not much atmosphere, appears white." Years ago I saw a NATO full-sky survey, which had Sun chromaticity as white, even through clouds, until quite low in the sky (in rural Spain I think?, so not much dust or smog). I wish I could be more specific with that "reddish" and "atmosphere", but I've not seen real data. If one wants to clearly and correctly explain something to a 5-year-old... expect to spend a lot of time with scihub.

> Coming up with well-defined named color categories that will work for both surface colors and an emissive source as bright as the sun is hard to begin with. Talking about color in a precise way with 5-year-olds is even tougher.

Might this be a misunderstanding of color? Light has a spectrum, and thus a color, regardless of where the light came from. And can be so bright as to appear to the eye white, or so dim as to appear dark gray (rods vs cones), also regardless.

> Talking about color in a precise way with 5-year-olds is even tougher.

Here's a fun cautionary tale. There's a genre of science education research Ph.D. theses that basically goes "We tried to teach science topic X to grade G. <Implicit: We didn't really understand the subject, and taught it badly as usual, often facepalm-shockingly badly.> We were surprised and disappointed that this didn't work. We draw the obvious conclusion... Students in grade G are developmentally incapable of understanding topic X." :) I've come to think of "you can't explain <subject> to <age>" as analogous to a code smell.

Drifting off topic, but a random thought... if you watch even best-practices classroom video, say around estimation, it often seems like the teacher is coaching students on how to pretend to understand the material. Pretend well enough for the teacher to nod and move on. So for instance, the use of an only-locally-equivalent alternative to actually understanding will be encouraged, not debugged. So perhaps there's a mismatch between a nominal goal of "learn science", and an actual goal of "learn to pretend to have learned science". And one might get better results by abandoning one or the other, in favor of consistency and transparency. Random thought.

Thanks for the comment!


> Might this be a misunderstanding of color? Light has a spectrum, and thus a color, regardless of where the light came from. And can be so bright as to appear to the eye white, or so dim as to appear dark gray (rods vs cones), also regardless.

“Color” is by definition a perceptual response, something that happens inside the brain. It depends on the observer, their eyes’ current level of adaptation, other stimuli which provide context, etc. If you move an object from a light background to a dark background, the color it evokes changes. If you take an object into a dim room and allow the eyes to adapt, the color it evokes changes.

Objects don’t inherently have a color. Light sources don’t inherently have a color. Light rays don’t inherently have a color. The same spectral power distribution can evoke (sometimes quite dramatically) different colors in different circumstances.

Of course, if you talk to philosophers or physicists instead of color scientists, you might get alternative definitions.

> bright as to appear to the eye white, or so dim as to appear dark gray (rods vs cones)

This is a bit confusing/confused. The retina has two types of detectors, rods and cones. In bright conditions, the rods are overstimulated and so provide no useful signal (and are therefore ignored in favor of the signal from the cones). When the eye is adapted to very dim conditions (e.g. starlight) and looking at a dim stimulus, the cones don’t respond vigorously enough to get much signal from them (and are therefore ignored in favor of the signal from the rods). In some dim-but-not-too-dim situations, signals from both rods and cones are integrated.

You can have both light gray and dark gray colors in any kind of context, but which specific stimuli will look light or dark depends on context.

* * *

> I've come to think of "you can't explain <subject> to <age>" as analogous to a code smell.

I have a 2.5 year old, and there are many topics which have too many prerequisites to adequately explain to him, or which are not relevant enough to keep his interest.

But he is learning a lot every day. Ask me again in a few years what he can learn at age 5.

> often seems like the teacher is coaching students on how to pretend to understand the material

Trying to teach a classroom with >10 students is extremely difficult in my opinion. I find that lecturing can’t truly teach most ideas; students have to grapple with them independently while working problems or projects or doing their own introspection. A lot of 1:1 attention from an expert mentor/tutor who can suggest problems or projects, ask a lot of questions, provide patient guidance, gently nudge a student back on track, stop the process from time to time and prompt some introspection, etc. is generally the most efficient way to learn. But this doesn’t easily scale to a classroom setting, and 1:1 tutorial isn’t considered economical for widespread adoption.

> goal of "learn science"

I think to “learn science” students need to be spending at least some time on all stages of the process: coming up with research questions, proposing hypotheses, designing experiments to test them, evaluating the experiments, etc.

(And it helps if they have some context for that scientific investigation, e.g. a bunch of experience observing bugs in the forest or trying to make mechanical devices out of meccano or learning to cook or ....)

In a similar way, to “learn journalism” students would need to go out and do a whole bunch of interviews and then write stories and publish them.

But this kind of learning is hard to achieve in a classroom setting.

Of course, “learn how to solve exercises in a typical introductory physics textbook” (or whatever) also has some value, so long as everyone is clear on how it relates to the world.


A wide-ranging conversation. :)

> “Color” is by definition a perceptual response [...] Objects don’t inherently have a color.

"Dad, what color is my shirt?" "Well son, it’s hard to give it a color label, but let’s say it looks ‘white’. Objects don’t inherently have a color. Under different circumstances, lit differently, or seen through filters, or by people with different eye genetics, they can look quite different. Illuminated by a red flashlight, your shirt would look red!" :) Ok, sure.

But for me, that doesn't quite make "yellow" (or "red" or "it doesn't have a color" or "it's rainbow color") into nicely reasonable answers for a 5-year-old's "What color is my <white> shirt?" Nor for their "What color is my <white> Sun?" Especially not for the Sun, given the local unavailability of similarly-scaled red flashlights.

And once you have a spectrum, you have chromaticity, however crufty the Color Matching Functions are. And thus at least one reasonable definition of color.

> Of course, if you talk to philosophers or physicists instead of color scientists, you might get alternative definitions.

Yeah, that's an issue. So while I start with a strong agreement with "People are starting with too many misconceptions and gaps in foundational knowledge.", getting to actionable "you have to start with at least the basics" implies an ability to successfully execute that approach.

My experience is that it requires vastly greater effort, and domain expertise, than is currently considered non-insane, just for the associated content creation: crafting usefully-correct and accessible stories about the physical world.

So while I think (some value of) "teach [from the] basics" could be a transformative improvement in science education, and consider it a profound failure of current science education research that the opportunity doesn't get more work, I wonder if it's currently a viable approach for teaching?

I've a one-liner that goes "Consider a best case. An MIT professor, with a strong background in education, trying to teach their own kid, 1-on-1, with lots of time available (we're getting very counterfactual), about their own area of expertise, indeed their own research focus... the science education content doesn't exist to allow them to succeed". Which may not be useful without additional context, but basically, any non-trivial topic will extend to require someone _else's_ research-level expertise to craft good content, at which point the hypothetical MIT professor hits the pervasive lack of existing technosocial mechanisms to obtain that, and thus is back to crappy content, and thus fails.

> [sensor clamping] This is a bit confusing/confused.

Sorry I was unclear. I was targeting a misconception that the Sun is itself yellow, and only appears white because its brightness causes cone saturation. And being puzzled by "Coming up with [...] named color [for a bright source] is hard"... but punt.

> A lot of 1:1 attention from an expert mentor/tutor [...] isn’t considered economical for widespread adoption

One nifty possibility is hybrid computer-human systems providing this via AR/VR avatars. Extreme personalized learning. Just having eye tracking data gives a lot of insight into attention and interest.

> “learn how to solve exercises in a typical introductory physics textbook”

There's a story of a long-existing end-of-chapter problem in a popular physics textbook. A problem that survived multiple new-edition reviews, and presumably some use. And one question it raises is: should we consider all the physics professors, graduate students, and students who plug-and-chugged the problem to the in-the-answer-book "correct" result to have successfully understood and applied the ideal gas law? Or should we reserve "success" for the presumably rare recognition that the numbers described solid Argon?

If the former, is our objective then a variant of Indian-style "memorize and regurgitate without understanding"? Just with plug-and-chug added? And might we be better off not pretending otherwise?

If the latter, there would seem a misalignment between that objective and our current effort allocation and testing.

> so long as everyone is clear on how it relates to the world.

And that's the question which prompted my thought. One perspective is "transferable understanding is our educational goal, and we're abjectly failing at it, and worse, we're so dysfunctional, we're not even really trying". An alternative is, "we've an ecology of goals, of which actually understanding the physical world turns out to be a very low priority... we're just often less than transparent about reporting our actual collective goals". And so I wondered... What if everyone was clear on how our science education content doesn't relate to understanding the physical world? Might that clarity yield additional points of leverage for change?


I assumed that ...

> Light has a spectrum, and thus a color, regardless of where the light came from. And can be so bright as to appear to the eye white, or so dim as to appear dark gray (rods vs cones), also regardless.

was no longer the conversation with the 5-year-old. I would call those two sentences ‘confused’ for pretty much any audience, but obviously a conversation with a 5-year-old isn’t going to get too deep into the weeds about definitions.

* * *

> getting to actionable "you have to start with at least the basics" implies an ability to successfully execute that approach.

Fair enough. But someone who doesn’t have the background to more or less properly explain the basics should expect some criticism when they completely ignore them.

> My experience is that it requires vastly greater effort, and domain expertise, than is currently considered non-insane, just for the associated content creation.

There are plenty of people with a solid basic understanding of whatever subject you might point to. The problem is more that folks making materials for beginners are (often) not part of that group, and in many cases should be leaving the job to someone with more expertise.

I don’t think someone has to be a cutting edge researcher (or tenured professor) in order to do a good job explaining the basics about scientific topics. They should at least read the introductory material aimed at scientists though.

> [1:1 instruction] via AR/VR avatars

I am not holding my breath. I expect videos, readings, interactive diagrams, better programming environments, ... will more likely be helpful than future-VR-Clippy.

But it’s hard to substitute for human interaction with someone competent.

> we're just often less than transparent about reporting our actual collective goals

Well the first goal is more or less babysitting so that parents can go to work. But this doesn’t sound too inspiring.


It's white, right?


Yes. I've found it an uncommon answer, even with first-tier physical sciences and astronomy graduate students. US cultural convention is yellow, and apparently red in Japan. Lots of orange answers. The astronomy misconception is yellow (the white point used for naming star colors is blueish), plus assorted "and orange"-like noise. Non-astronomy physical science grads are... creative, with misconceptions of light and color a common mode. A nicely disruptive follow-up question is "And what color is sunlight?", setting up a "white" (the dominant answer) vs whatever-they-just-said-for-Sun conflict - juxtaposing two rote-memorized bits that they've perhaps not rubbed together before.


If you are standing on the Moon, the Sun is white.

If you are standing on planet Earth, the Sun is yellow.

As viewed on Earth, sunlight directly from the Sun is also yellow... but normally we include the blue light from the blue sky, which indirectly comes from the Sun. Scattering of blue light by our atmosphere is what makes the sky blue, and by subtraction makes the Sun yellow.

This is why normal shadows are a bit blue. They are lit only by the blue sky.

Things out in "sunlight" are lit by both the yellow directly from the Sun and by the blue indirect light coming from all directions in the sky. This adds up to being white light.


This is also a somewhat confused take. The human visual system adapts to arbitrary light sources within a pretty wide range. Whatever the brightest light source is will typically look “white”. Which is why either an incandescent lamp with 2400K CCT or a fluorescent lamp with 5500K CCT will appear to be “white” after adaptation.

The midday sun is so much brighter than everything else in the field of view that to the extent it can be given a color label (again, it will superstimulate all of your cones to the point that “color” loses meaning and permanently damage your vision; please don’t look at the sun) that “white” is really the only reasonable choice. If you pointed a long tube at the sun to keep out light from the rest of the sky and used the sunlight to light a surface that diffusely reflected say 30% of incident light irrespective of wavelength, then that surface would appear white.


If you're going to count adaptation, then a car's brake light is white, as is the light (normally considered blue) on a cop car. It's cheating to count adaptation. Assume you just look at it up in the blue sky, using untinted eye protection. The Sun is yellow.


No, a car’s brake lamps and police beacons are not within the range that would appear white, under any type of adaptation. You could stare at a brake lamp for 30 minutes and it would still look red. Even a low-pressure-sodium street lamp is going to keep looking orange after you’ve been standing under one for a long time. These are all more or less monochromatic light sources.

> Assume you just look at [the sun] up in the blue sky, using untinted eye protection.

Under ordinary circumstances if you look at the midday sun through a neutral density filter it will look white, though if your filter blocks out a high enough proportion of the light coming through it that might change. But in general, any light source will change apparent color if you put it through strong neutral density filter. Color doesn’t map 1:1 with normalized spectral power distribution.

Be careful to make sure that your filter also blocks solar radiation outside the visible spectrum. You can seriously damage your eyes looking at the sun through the wrong neutral density filter.


With adaptation, low-pressure-sodium street lamps look white to me. Key to this is having no other source of light as an alternative reference.

So if you are going to count adaptation, then all light sources are white. That makes the whole question of color completely pointless.

The same goes for not reducing the brightness. Burning out your eyes makes the question of color completely pointless. Here is a filter that would work: a small hole in a rapidly-spinning disk. Look right at the sky with the Sun, but with a duty cycle that reduces the light down to levels similar to typical indoor lighting. Another method, not quite as good due to field of view and imperfectly white screens, is the pinhole camera box commonly used to view eclipses.


No amount of adaptation brings low pressure sodium to white. Several hours doesn't achieve it. Half the cars under it will simply appear black, the rest yellow. You're left with little to no space for colour perception.


That is a different problem, called illuminant metameric failure. See here: https://en.wikipedia.org/wiki/Metamerism_(color)

Illuminant metameric failure can happen with relatively normal LED and fluorescent lighting, particularly with a low color rendering index. You can get an orange object to appear black under the "white" lighting of RGB LEDs. Different orange objects would be differently affected depending on exactly what frequencies/wavelengths get absorbed. The objects could appear red, orange (good!), dark yellow, green, or black.
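A toy numerical sketch of that effect (the sensitivity and illuminant numbers below are invented for illustration, not real cone or lamp data): two reflectances can produce identical "cone" responses under one illuminant yet diverge under another.

```python
import numpy as np

# Three hypothetical "cone" sensitivities sampled at five wavelengths.
S = np.array([
    [0.0, 0.1, 0.4, 0.9, 0.6],   # long-wavelength
    [0.1, 0.6, 0.9, 0.4, 0.1],   # medium
    [0.9, 0.5, 0.1, 0.0, 0.0],   # short
])
white  = np.ones(5)                           # broadband illuminant
sodium = np.array([0.0, 0.0, 0.2, 1.0, 0.1])  # narrow-band, sodium-ish

r1 = np.array([0.3, 0.5, 0.6, 0.4, 0.2])      # one surface reflectance
A = S * white                                 # responses under white = A @ r

# Perturb r1 along a null-space direction of A: the change is
# invisible under the white illuminant.
delta = np.linalg.svd(A)[2][-1]
r2 = r1 + 0.2 * delta    # (toy values; real reflectances stay in [0, 1])

print(A @ r1, A @ r2)                         # identical: metamers
print((S * sodium) @ r1, (S * sodium) @ r2)   # different under sodium
```

The two surfaces match under the broadband light but not under the narrow-band one, which is the essence of illuminant metameric failure.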

Under low pressure sodium, the eyes do adapt. Some cars are white, some cars are grey, and some cars are black. There is no yellow.


Then I have a problem with my eyes. ¯\_(ツ)_/¯

Eyes adapt, but mine get nowhere near the point of perceiving white items as looking anything but yellow under sodium. Always yellow near orange. 25 years of driving and dog walking in a very yellow world before LED started arriving. With all yellow snow when we still got snow. There was no white, until headlights caught something or dawn.

FWIW eye tests put my colour vision as normal.


Ha ha, "white" is relative, isn't it. It's why we have white-point controls on our cameras/software.

I suppose the physics answer is, it is the color that a black-body incandesces at for the given temperature of the Sun.

A cop-out answer now that I typed it out and re-read it....

Honestly though, my answer to "what color is the sun" would depend on the audience. I would tell the 5-year-old the sun is yellow, but when you get older you'll find it is more complicated than that.
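For what it's worth, the black-body framing gives a surprising answer: Wien's displacement law puts the spectral peak of a ~5800 K black body in the green, a good reminder that the peak of a spectrum is not the perceived color (the Sun's broad spectrum integrates to something near white).

```python
# Wien's displacement law: lambda_max = b / T
b = 2.898e-3          # Wien's displacement constant, m*K
T = 5778              # effective temperature of the Sun, K
lambda_max = b / T
print(lambda_max * 1e9)   # ~502 nm: green, not yellow or white
```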


> "white" is relative, isn't it. It's why we have white-point controls on our cameras/software.

There is a region of chromaticities that can be accommodated as untinted white (think the center of an RGB triangle). And the Sun is around the center of it. So yes it's relative ("the Sun is a G2-class yellow star"... relative to noticeably-blueish-but-we'll-use-it-as-our-white-point-anyway-because-reasons Vega). But perceptually untinted leaves you with kind of narrow bounds. You notice 4500 K lighting as warm, even if you still see paper under it as white.

> a black-body [...] A cop-out answer

:) That's why the 5-year-old was added to the question long ago. They don't completely stop answers of "5800 K!", but permit easily moving on with a "Um, 5-year-old?"

> I would tell the 5-year old the sun is yellow

But... the Sun as big ball isn't, and the Sun seen in sky generally isn't. So social convention over reality? Unless... the 5-year-old is from Vega?


It's also important to understand that color spaces are used for medium-to-medium conversions.

Human perception is always a part of this but not always the goal.

For example, in print a color space can be used to control how ink flows on paper. A lot of ink for dark colors on thin paper will result in a mess. That's why a color-space-converted image for such a print will look waay too light. But the result will be a good dark print.

And of course this looks much better to the human eye.

Edit: I might be confusing the term color profile...


Hijacking this comment to ask a question... so is this guide saying that regular RGB at 100% intensity (no transformation) cannot represent every single color? This is a bit confusing to me, as I thought it could. I was always under the impression that RGB values could represent all colors that can be represented on a monitor


Yes, often the gamut of a computer display and the gamut of a particular working color space (like sRGB) intersect each other, with some colors in each gamut lying outside the other.

If you use a linear representation of RGB coordinates, and allow values less than 0 or greater than 1, then you can represent any conceivable color. But most image formats do not support values outside the [0, 1] range.
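A concrete sketch of that point (the XYZ-to-linear-sRGB matrix below is the standard D65 one; the chromaticity coordinates for 520 nm light are approximate): a pure spectral green lands outside the sRGB gamut, which shows up as negative linear RGB components.

```python
import numpy as np

# Standard XYZ -> linear sRGB matrix (D65 white point).
M = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# Approximate chromaticity of monochromatic 520 nm light.
x, y = 0.0743, 0.8338
XYZ = np.array([x / y, 1.0, (1 - x - y) / y])   # xyY with Y = 1

rgb = M @ XYZ
print(rgb)   # R and B come out negative: outside the sRGB gamut
```

If you allow those negative (and greater-than-one) values, the coordinates still describe the color; it's only the clamped [0, 1] encoding that can't hold it.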

If you have an image in one color space and you want to convert it to another color space (e.g. for showing on your display or printing), there are many possible gamut mapping algorithms you might use to optimize the results according to various criteria. Here is a 300-page book about some of the possibilities: https://www.wiley.com/en-us/Color+Gamut+Mapping-p-9780470030...


Good lord that is a large technical book about one topic O_O


It's quite a decent article. I prefer the term "color image encoding" for sRGB (IEC 61966-2-1), ITU-R BT.2020, or DCI-P3, because it encapsulates the primaries (red, green, blue in relation to CIE XYZ), the tone response curve (often defined with a gamma function, or the sRGB curve, which approximates a gamma-2.2 curve but, as the article points out, isn't quite the same), precision or bit depth, the reference viewing condition, and the reference medium.
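That sRGB-vs-gamma-2.2 distinction is easy to see numerically; here is a minimal sketch comparing the standard piecewise sRGB decoding against a pure power function:

```python
def srgb_to_linear(v):
    """Piecewise sRGB transfer function (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Close to a pure 2.2 gamma overall, but noticeably different near black,
# where sRGB switches to a linear segment.
for v in (0.02, 0.05, 0.2, 0.5, 0.9):
    print(v, srgb_to_linear(v), v ** 2.2)
```

The two curves nearly coincide through the midtones, but below the 0.04045 breakpoint the linear segment diverges substantially from the power law, which matters for shadow detail.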

Very often, ignoring the differences in reference viewing condition or medium gets people into trouble when they do transforms between these encodings and think the environment or medium doesn't have to change. For example, there are four variants of DCI-P3, with the same primaries, different white points, and different tone response curves, to account for their different reference viewing conditions. Are they all the same color space? Errr, maybe, yes? Are they all the same color image encoding? Definitely not. Same for ITU-R BT.709 vs sRGB; same for Adobe RGB (1998) vs opRGB. As the reference media differ, so will the dynamic range.

CIE XYZ, based on the 1931 standard observer, continues to be put through its paces. There's also the 1964 standard observer, derived from that and supplemental observations and research. Dozens of color appearance models have appeared, and they continue to be an active area of research. Some try to account for various features of human vision, including optical illusions (illusions expose them as if they are a kind of trick or failure, but in fact they're a feature) like simultaneous contrast and the Bezold effect, distinguishing between saturation, chroma, and colorfulness, and many other aspects of appearance. Recently there's some understanding that the idea of a single standard observer is probably wrong, and how to go about categorizing and handling multiple standard observers (i.e. normal color vision) is also an area of active research.

For anyone interested in color science, or having substantial math and/or computing skill and looking for a unique real-world application, I highly recommend IS&T's annual Color Imaging Conference. imaging.org


Out of curiosity, what is stopping screen makers from making the entirety of the colour space as defined by the CIE chromaticity diagram its gamut?

Is there some fundamental limitation of liquid crystals? I understand, for example, that CRTs require a red phosphor made of yttrium and europium, which somewhat limits the emitted frequencies. What about LEDs and LCDs? What are the limitations for making a wider gamut?

Do note my knowledge in this side of things is close to nil (the specific examples of yttrium and europium as red phosphor was a story told to me by a physicist friend, and it stuck)


If you take a look at the xyY diagram:

> https://ciechanow.ski/images/color_gamut_srgb.svg

the gamut is defined by the triangle connecting the RGB primaries. If you could add a primary at a more pure green, you'd get a bigger triangle and a wider gamut, but still not the entirety of the chromaticity space. If you start adding other components that lie outside that triangle, you start expanding the polygon, but there will always be parts of the diagram it doesn't cover.

In order to encompass the entire chromaticity space, you'd have to have a large (technically infinite) number of components. You can never have a component outside the diagram, because the curved hull is the pure chromaticity of light at each wavelength; light outside the curve doesn't exist, and light below the straight "line of purples" is invisible (UV/IR).
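To make the triangle concrete, here's a small sketch (not from the article) that tests whether a given xy chromaticity falls inside the sRGB primary triangle, using the standard sRGB primary chromaticities and a plain point-in-triangle sign test:

```python
# Sketch: test whether an xy chromaticity lies inside the triangle
# spanned by the sRGB primaries (chromaticities per IEC 61966-2-1).
R = (0.64, 0.33)
G = (0.30, 0.60)
B = (0.15, 0.06)

def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_srgb_gamut(xy):
    """True if xy is inside (or on an edge of) the R-G-B triangle."""
    s1 = cross(R, G, xy)
    s2 = cross(G, B, xy)
    s3 = cross(B, R, xy)
    # Inside if the point is on the same side of all three edges.
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or \
           (s1 <= 0 and s2 <= 0 and s3 <= 0)

print(in_srgb_gamut((0.3127, 0.3290)))  # D65 white point: True
print(in_srgb_gamut((0.17, 0.80)))      # spectral green (~520 nm): False
```

The second test point is roughly the pure green the parent comment mentions: it sits on the curved spectral locus, well outside the triangle, which is exactly why no three-primary display can show it.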

Theoretically you could get much more coverage by adding more components as new types of emitters are developed. This is happening in high-end theatrical lighting, where you can cram a ton of different LEDs into a fixture, but it's not (yet) practical to cram them all into an on-screen pixel.

Things get even more mind-bending when you start illuminating objects with light instead of looking at the light directly, because then there's a benefit to adding components even if they don't expand the gamut. Once you have four or more components, you can make many colors an infinite number of ways (metamers), and they might look absolutely indistinguishable when lighting a white wall, but they'll make skin tones look completely different if you made them by combining just RGB versus, say, amber and blue. That's not really related to your question, but I've just started delving into this area in my day job, and it's exciting and trippy. :)
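A toy numerical illustration of the metamer point above: tristimulus values of additive lights simply sum, so with four or more primaries there are multiple drive settings that land on the same XYZ. The "amber" primary here is entirely hypothetical; for simplicity its XYZ is defined as an even red+green mix, so the metamer pair is exact by construction:

```python
# Toy metamerism sketch for a multi-primary fixture. Two different
# drive settings produce identical XYZ (they match on a white wall),
# even though real emitters with these tristimulus values could have
# completely different spectra, and so render skin tones differently.
R = (41.24, 21.26, 1.93)    # sRGB red primary, XYZ at full drive
G = (35.76, 71.52, 11.92)   # sRGB green primary
B = (18.05, 7.22, 95.05)    # sRGB blue primary
A = tuple(0.5 * r + 0.5 * g for r, g in zip(R, G))  # hypothetical amber

def mix(drive, primaries):
    """Sum each primary's XYZ scaled by its drive level."""
    return tuple(sum(d * p[i] for d, p in zip(drive, primaries))
                 for i in range(3))

white_rgb   = mix((1, 1, 1, 0), (R, G, B, A))  # plain RGB white
white_amber = mix((0, 0, 1, 2), (R, G, B, A))  # blue plus amber

print(white_rgb)
print(white_amber)  # same XYZ as white_rgb: a metamer pair
```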


> it's not (yet) practical to cram them all into an on-screen pixel

At the pixel density (>500,000 subpixels per square inch) of current flagship smartphones, I think it would actually work just fine to have 6 or 9 (or whatever) different primaries, make a grid of little pixels in some scattered but regular arrangement, and then use clever digital signal processing to figure out how to convert an RGB image into a RR'GG'BB' (or whatever) image. I would expect it to look visually identical at typical viewing distances for existing images, while potentially allowing someone with low-level hardware access to make a wider gamut / choose their preferred metamer / etc.

I would expect it to be entirely achievable using current technology and not inordinately much more expensive than current displays.

But the engineering effort and additional complexity might not provide enough benefit for display vendors to invest in that, vs. just continuing to optimize their current display concepts. Or they might not even be considering radical changes in strategy.


My understanding is that a lot of the limitations of current LCD color gamuts stem from the quality of the backlight used. That is why, for example, quantum dot technology is used to reproduce the wider Rec. 2020 color space: the spectrum of quantum dots is better suited for RGB filtering.

Although now that Rec. 2020 is nearly an achieved goal, I do suspect that going beyond it requires more primaries, as others have mentioned. Sharp experimented with a yellow primary, but that effort was fairly limited.

For OLED I imagine the situation is much worse, as there are more constraints on the LEDs that can be made there.

Some references

https://www.nature.com/articles/lsa201743

https://pcmonitors.info/articles/the-evolution-of-led-backli...


You can make a much wider gamut display (while still avoiding severe observer metamerism) if you have more primaries. You could even do proper multispectral imaging.

It would be awesome if display manufacturers would try to make a display with e.g. 7 primaries in hexagonal pixels.

Pretty well every image that currently shows up on smartphone screens already has to be resampled, so there shouldn’t be any inherent reason why you couldn’t have more sophisticated displays requiring slightly more signal processing to target. It would take a bunch of engineering effort, but nothing extraordinary or out of reach of current technology.

The big problem is that every other stage of the image processing pipeline from cameras and GPUs to software is built on an existing square-grid-of-RGB-pixels model.


> Out of curiosity, what is stopping screen makers from making the entirety of the colour space as defined by the CIE chromaticity diagram its gamut?

A display can't support the entire space because of the way colours combine. Any light source acts as a point in the colour space, so a 3-colour display can only cover a triangle.
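The "point in the colour space" claim can be checked numerically: XYZ tristimulus values of additive lights sum linearly, so any mix of two lights has a chromaticity on the straight segment between them. A minimal sketch, using the published XYZ values of the sRGB red and green primaries:

```python
# Sketch: additive mixing in CIE XYZ. Because tristimulus values add
# linearly, the chromaticity of any mix of two lights lies on the
# straight line between their chromaticities, so a three-primary
# display covers exactly the triangle spanned by its primaries.
def xy(XYZ):
    """Project XYZ tristimulus values to xy chromaticity."""
    X, Y, Z = XYZ
    s = X + Y + Z
    return (X / s, Y / s)

red   = (41.24, 21.26, 1.93)   # sRGB red primary at full power
green = (35.76, 71.52, 11.92)  # sRGB green primary at full power

mixed = tuple(a + b for a, b in zip(red, green))
print(xy(red))    # ~(0.64, 0.33)
print(xy(green))  # ~(0.30, 0.60)
print(xy(mixed))  # on the segment between the two
```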

That said, wide-gamut monitors do exist. They're of limited utility for simply consuming media: because of the chicken-and-egg effect, everything produced for mass consumption already targets the reduced sRGB space or an equivalent. However, a wide-gamut display is useful for content generation, especially when targeting print processes that can use spot colours.


The full diagram would require that you are able to produce monochromatic colors along the entire visual spectrum. So at a minimum you would need a tunable monochromatic light source.

Several actually to blend to any point on the graph. Luckily, due to limits of the human visual system, you don't require a fully tunable spectrum for each pixel.

I'm not aware of any technology that would let us cram this into pixels.

At best you could build a projector from tunable lasers.


The CIE XYZ color space is fundamental to all modern color space processing. Very cool that it was defined in 1931 and has held up more-or-less unchanged to today. Be sure to click and drag on the 3d cubes!


> Very cool that it was defined in 1931 and has held up more-or-less unchanged to today

I wonder if the color mixing experiments have been recreated since the '30s, and if there have been any observable shifts in the curves


There have been a bunch of additional experiments. But under well-lit conditions with typical light sources, for a typical viewer with normal color vision looking directly at an image / colored object, the 1931 functions are close enough that switching isn’t worth the transition cost.

The best recent data can be found at http://www.cvrl.org/


There are also color appearance models; the latest is CAM16-UCS [1], which is simpler than the older CIECAM02 [2] model, though it's harder to find papers explaining it. The Jab [2] representation is more perceptually uniform than Lab.

[1] https://colour.readthedocs.io/en/develop/_modules/colour/app... [2] https://en.wikipedia.org/wiki/CIECAM02


For web design, I use HSLuv[1] which is very handy.

[1]: http://www.hsluv.org/


Can you explain in laymans terms to another web designer why that is important? Like what problem does it solve?


For example, say you want to dynamically generate a background and text color based on a hue. With HSLuv you can simply do `hsluv(120, 60%, 10%)` for the background color and `hsluv(120, 60%, 90%)` for the foreground text color, and you will have text on a background with an 80% perceived-luminance difference. You can replace the `120` in my example with any hue in the `0-360` range and get consistent results.

The idea is that the color space is perceptually linear in saturation and lightness regardless of hue.

You can see it clearly if you compare something like http://www.hslpicker.com with http://www.hsluv.org/: if you drag the hue slider on the HSL version, the luminosity varies; it does not on the HSLuv version.
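The hue-dependent luminance problem is easy to demonstrate with just the standard library: convert the same HSL lightness/saturation at two different hues to sRGB, then compute WCAG relative luminance (a sketch of the effect, not the HSLuv algorithm itself):

```python
# Sketch illustrating the problem HSLuv solves: in plain HSL, the
# same "lightness" value yields wildly different perceived luminance
# at different hues.
import colorsys

def relative_luminance(r, g, b):
    """WCAG 2.x relative luminance of sRGB components in 0..1."""
    def lin(c):  # undo the sRGB transfer curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

# Same HSL lightness (0.5) and saturation (1.0) at two hues.
# Note colorsys uses HLS argument order: (hue, lightness, saturation).
yellow = colorsys.hls_to_rgb(60 / 360, 0.5, 1.0)   # (1, 1, 0)
blue   = colorsys.hls_to_rgb(240 / 360, 0.5, 1.0)  # (0, 0, 1)

print(relative_luminance(*yellow))  # ~0.93
print(relative_luminance(*blue))    # ~0.07
```

Same HSL "lightness", yet the yellow is roughly thirteen times as luminous as the blue; HSLuv's L channel is designed so that this doesn't happen.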


Ahh, very interesting. That is cool.


That's damn cool.


This is a very well written article, and boils down the complexity of color science into very digestible bites. As someone who has to deal with color spaces daily (VFX/CG), having something like this when I started would have been incredibly helpful.


I've actually used color spaces in a lot of my work, and it can be highly effective to alter the color space to improve performance of computer vision applications (even the ML kind):

https://austingwalters.com/edge-detection-in-computer-vision...

https://austingwalters.com/chromatags/


See also the venerable Color FAQ, a link to which is conspicuously absent:

http://www.poynton.com/ColorFAQ.html


No mention of CIELAB? Unfortunate; it's a really neat way to get perceptual uniformity, and it has intuitive coordinates.


gimp needs a LAB mode


It has one.


i see. it just operates very differently; i was hoping to view the result of applying Lab curves in real time on the original image, instead of a multi-step process of decomposing and re-composing each time to view the results.

I guess I can just use two windows though



