Aurélien Pierre, one of the lead developers of Darktable, has been publishing lots of detail on color theory and how it is applied in Darktable, for things like the new Color Calibration module and how that relates to white balance. His YouTube channel is worth following if you are interested in this: https://www.youtube.com/c/Aur%C3%A9lienPIERREPhoto
Recent versions of Darktable include some amazingly powerful ways to deal with tone mapping, perceptual saturation, etc. that he implemented.
To those who, like me, have the current Darktable crash randomly when working on DNGs, I recommend RawTherapee (but get the latest build for your system; if you search discuss.pixls.us or the GitHub issues you will find a Dropbox link that's been shared a few times).
RT, also being based on dcraw, crashes too, but much more rarely and typically only with certain combinations of settings applied (on my M1 MBP, noise reduction in Lab space seems to be the only remaining trigger with the latest build). RT also gives you more control over certain pre-processing steps, such as turning off debayering or choosing among additional debayering methods, which I couldn't find in DT.
Sadly, neither app approaches LR or C1 in the stability required for most professional use, but both give an enthusiast invaluable control over how scene data is interpreted.
This is, of course, quite tangential to the article, which deals with display-referred pixel data only.
If only! Take an over- or under-exposed photo and try to correct it in Lightroom and in Darktable. Lightroom recovers far more detail from those regions than Darktable does. No idea how, but it does. Everything else is far better in Darktable, but edit quality is all that really matters in the end.
This really contradicts my own experience with both Sony and Nikon RAWs: I see no difference in highlight/shadow recovery with my photos. The thing is, Darktable is highly technical, unlike Lightroom and most commercial software, and hides very little from the user, so it demands a lot of the user's knowledge. It's like processing in Photoshop, which Lightroom doesn't replace in every case because LR is oriented more towards magic.
What I like about Darktable is its vastly superior color uniformity and handling, and its tone equalizer module, which allows visual zone control and replaces most third-party Photoshop plugins. PS feels more and more outdated at this point for the de facto most popular photo-manipulation software, since it lacks modern tools and you have to construct massive workarounds or buy additional plugins.
What I dislike about Darktable is the lack of persistent masking: currently all masks are generated from the output of earlier modules in the pixelpipe, so going back in an edit can alter a mask.
AI magic, I guess. I'm just a hobbyist, but it's given me good results with whatever I've thrown at it. Maybe I don't know what I'm missing out on with Lightroom, but Darktable is more than good enough for my use case!
I feel like this post misses the elephant in the room, which is the ICC, the International Color Consortium. They publish officially recommended conversion formulas, and their specifications and device profiles contain data for different rendering intents, e.g. "colors objectively as accurate as possible" vs. "make it look the same to humans". They also standardized the .icc file format.
I do feel kinda strongly about these things because DisplayCAL, the open-source calibration software, is quite buggy and hence unusable for professional work; nobody there read the specifications before attempting to implement them. That's why accurate color is a solved problem on Windows (e.g. DaVinci Resolve in HDR mode) and on Mac (Apple ships special DCI-P3 displays), but on Linux everyone has tried to reinvent the wheel, so different measurement and display apps are incompatible with each other.
This post is an introduction to color management, not a complete reference. ICC is definitely being taken into account in the course of the broader Linux work, see the rest of the repo and the draft Wayland protocol.
Absolute colorimetric intent often seems to be dismissed as undesirable, yet it's the only sane way to scan and print exact copies of old photos, or to match two different displays exactly. Clipping a few outlier colors is often preferable to every color being different. If your content is authored to avoid the darkest blacks and the whitest whites, then the white point of the display or medium has no business determining your colors when it isn't visible.
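As a toy numerical illustration of that trade-off (not real color-management code, and a deliberate oversimplification of the actual rendering intents), compare clipping only the out-of-gamut outliers against rescaling everything to fit:

```python
def clip_to_gamut(rgb):
    """Colorimetric-style handling: leave in-gamut colors untouched
    and clip only the out-of-range outliers."""
    return [min(1.0, max(0.0, c)) for c in rgb]

def scale_to_gamut(rgb):
    """Perceptual-style toy: rescale so the largest channel fits,
    which shifts *every* color, not just the out-of-gamut ones."""
    peak = max(max(rgb), 1.0)
    return [c / peak for c in rgb]

# An almost-in-gamut color with one overbright channel:
color = [0.50, 1.20, 0.30]
clipped = clip_to_gamut(color)   # only the green outlier changes
scaled = scale_to_gamut(color)   # red and blue shift too
```

If the content never actually hits the overbright outliers, the clipping version leaves every visible color exactly as authored, which is the point being argued above.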
A really nice article by Pekka Paalanen from Collabora. He's been working for a while on bringing color management & HDR support to Wayland [1], producing documentation and discussing with upstream (kernel driver) and downstream (video players, browsers, etc) developers along the way.
Color theory blew my mind. First, the fact that something that feels so natural needs a theory at all; boy, was I ignorant. The different color spaces, the ways of producing color, the perception of color, how different media store and reproduce color: incredible.
Off-topic, but I recently ran into a problem with Freedesktop's Cairo library: it doesn't support the BGR pixel format, which is the format used most often by Linux framebuffers.
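One workaround, assuming a little-endian machine (where Cairo's native-endian 32-bit ARGB32 pixels land in memory as B, G, R, A), is to swizzle the framebuffer bytes by hand before handing them to an image surface. This hypothetical helper is only a sketch of that idea:

```python
def bgr_to_cairo_argb32(bgr: bytes) -> bytes:
    """Repack 24-bit BGR pixels into the byte layout Cairo's ARGB32
    surfaces expect on a little-endian machine.

    Cairo stores each pixel as one native-endian 32-bit word, so on
    little-endian hardware the bytes appear in memory as B, G, R, A.
    A raw BGR framebuffer already supplies B, G, R in that order, so
    each pixel only needs an opaque alpha byte appended.
    """
    out = bytearray()
    for i in range(0, len(bgr), 3):
        out += bgr[i:i + 3]   # B, G, R pass through unchanged
        out.append(0xFF)      # opaque alpha
    return bytes(out)
```

On a big-endian machine the three color bytes would have to be reversed as well, which is exactly the kind of per-platform fiddling a native BGR format in Cairo would avoid.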
I think that Pekka Paalanen's A Pixel's Color is about the best basic introduction I've read on the subject for someone who is of a technical bent but who is not fully au fait with the specifics of color theory.
Image and color reproduction is an almighty, messy subject. There's just so much to cover if one wants a reasonable grasp of it. Leave out a bit you think you can avoid and soon you'll be unexpectedly caught out.
The theory discussed in this article explains almost any system that intervenes between incoming light and the eye: TV, printing and so on.
The eye 'sees' (converts) a light spectrum of approximately 400-700 nm into signals that the brain ultimately perceives as a range of colors. If a light's wavelength were swept across that spectrum, we'd see all the colors of the rainbow.
Unfortunately, we have no system that lets us tune directly across the light spectrum and 'see' each color in turn, the way a superheterodyne radio receiver tunes to a specific frequency; not even the eye works that way. Instead, the eye uses three broadband receivers covering the spectrum, whose peak sensitivities correspond roughly to red, green and blue, and the brain then uses considerable trickery to produce our perception of light and color. Most synthetic [human-made]^ systems that reproduce light use basic tricolor theory, but that's where the similarity with human vision ends.
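The "three broadband receivers" idea can be sketched numerically. The Gaussian curves, peak wavelengths and bandwidth below are made-up stand-ins for the real cone sensitivities, purely for illustration:

```python
import math

# Toy Gaussian sensitivity curves standing in for the eye's three
# broadband cone responses (peaks and bandwidth are rough assumptions,
# not the real physiological data).
CONES = {"L": 565.0, "M": 540.0, "S": 445.0}  # peak wavelength, nm
WIDTH = 45.0                                  # shared bandwidth, nm

def sensitivity(peak: float, wl: float) -> float:
    """Response of one broadband receiver at wavelength wl."""
    return math.exp(-((wl - peak) / WIDTH) ** 2)

def tristimulus(spectrum):
    """Reduce a spectrum {wavelength_nm: power} to just three numbers
    by weighting it with each broadband curve and summing: the whole
    spectral detail collapses into an (L, M, S) triple."""
    return {
        name: sum(p * sensitivity(peak, wl) for wl, p in spectrum.items())
        for name, peak in CONES.items()
    }

# A narrowband reddish light mostly excites the long-wavelength curve:
resp = tristimulus({650.0: 1.0})
```

Whatever the incoming spectrum looks like, only three numbers survive, which is why very different spectra can appear as the same color (metamerism) and why display systems can get away with only three primaries.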
That's pretty much the only bit Paalanen doesn't cover in his article, presumably because everyone already knows it. However, I mention it because he takes the unusual step of describing color theory from deep in the middle of the subject, beginning with the pixel.
As we know, a pixel is a small element, or subunit, of an image. What Paalanen does with remarkable, in fact quite formidable, ease and clarity is to focus first on the pixel's attributes and then expand out to a wonderfully succinct overview of the subject. In fact, his overview covers almost the full gamut (no pun intended) of topics needed to understand it.
There's no point in my going over them, as I'd only be rehashing what he says and doing a much worse job of it. If you know anything about the subject, you'll know how quickly it gets bogged down in detail. To avoid that and keep everything clear and succinct, Paalanen neatly sidesteps certain tricky bits without ignoring them. I thought his handling of 'gamma' a bit of a masterstroke: he tells the reader what gamma is in very simple terms and then neatly explains why it can be omitted for the purposes of his article. In my opinion, this is a much better approach than simply omitting a 'difficult' point or topic altogether, as so many writers do.
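For readers curious about the concrete form of the step he sets aside: the standard sRGB transfer curve, the usual stand-in for "gamma" on desktop displays, is short enough to sketch:

```python
def srgb_encode(linear: float) -> float:
    """Linear light -> sRGB-encoded value (both in 0..1).  This is the
    standard sRGB piecewise curve: a short linear segment near black,
    then a power-law segment with exponent 1/2.4."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded: float) -> float:
    """sRGB-encoded value -> linear light (exact inverse of srgb_encode)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4
```

The key practical consequence, and the reason the article can defer it, is that this is a reversible per-channel reshaping: as long as you decode to linear light before blending or scaling and encode again afterwards, the rest of the pipeline's math is unaffected.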
Here's another little gem that took my fancy:
"... Primaries are the "pure" colors emitted individually by the red, green and blue component light sources in a display. Driving these component light sources with different weights (color channel values) produces all the displayable colors. As negative light does not physically exist, the primaries span and limit the displayable color volume. This color volume with the luminance dimension flattened is the color gamut of the display. ..."
Great stuff. When studying this subject, I'll bet none of you came across the phrase "As negative light does not physically exist..." in your textbooks. There, your textbooks would have launched straight into complex numbers and then matrices to explain 'negative' light, without actually using those words. The descriptive term used here makes the subject so much easier to comprehend, and later, when it comes time to do the math involved, we're already primed for it and it makes sense immediately.
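The "negative light" point can be seen directly in numbers: pushing an XYZ color through the standard (rounded) XYZ-to-linear-sRGB matrix shows that some real colors demand negative channel values, i.e. they lie outside what the display's primaries span. The 520 nm tristimulus values below are approximate tabulated CIE 1931 figures, used purely for illustration:

```python
# Standard (rounded) matrix taking CIE XYZ to *linear* sRGB channel values.
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def xyz_to_linear_srgb(xyz):
    """Multiply an [X, Y, Z] triple by the matrix above."""
    return [sum(m * c for m, c in zip(row, xyz)) for row in XYZ_TO_SRGB]

# Approximate CIE 1931 tristimulus values for monochromatic 520 nm light:
rgb = xyz_to_linear_srgb([0.0633, 0.7100, 0.0782])
# The red and blue channels come out negative: no physical drive levels
# on an sRGB display can reproduce this spectral green, so it lies
# outside the display's gamut, exactly as the quoted passage says.

# Sanity check: the D65 white point maps to roughly (1, 1, 1).
white = xyz_to_linear_srgb([0.9505, 1.0000, 1.0890])
```

Displayable colors are exactly those whose three channel values all land in the physical 0..1 range; everything else would need "negative light" from at least one primary.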
I gather Paalanen is not a native English speaker. When I read an article like this, by someone whose first language isn't English, that is not only logical and well thought out but also written in straightforward, idiomatic English, I feel rather humble, especially when it's significantly better than anything I could write myself.
___
"^ Animals vary from one (monochrome) up to seven (as far as we know)."