The complaints in this article appear to be about OS-controlled LUT adjustments, i.e. via special X11 features. I've used color-calibrated monitors on Linux in color-sensitive industries with hundreds or even thousands of artist seats (including film, animation, VFX, real-time, etc.), and the monitor has always been calibrated via the physical menus/buttons on the bezel, not via a software/OS monitor-specific LUT adjustment.
I'm not judging which is better or worse, as it would indeed be nice if the calibration could be controlled by the OS. I'm just saying that adjusting the monitor hardware directly is what's being done in the pro Linux content-creation world. Also, I imagine that certain brightness and color gamut controls could only happen from the bezel controls. DreamColors can switch between sRGB and P3, for example.
I don't know color management, but it sounds like the situation is analogous to complaining that VLC on Linux doesn't have volume controls (for example; of course it does in reality...) when the solution is to reach over and twist your hardware volume knob.
it's good to know that you've found the controls sufficient on the systems you have adjusted!
The Linux community will attempt to solve the problem by attempting to create an entirely new color management system, which will get at best 80% done with a handful of outstanding Hard bugs, at least one of which will be bad enough to break the entire thing except on one specific distro/gui/hardware combination.
Hackernews will tell us that we should really be using Wayland, because Wayland will solve the color problem once and for all.
When it comes time to implement the color management stuff in Wayland, it's declared out of scope for the core Wayland protocol, but don't worry guys -- someday, somehow, a consortium led by GNOME will implement it on a D-Bus interface that all compositors will understand. At least two mutually incompatible such protocols emerge, neither of which fixes the underlying issues, and both of which are tied to particular compositors.
Meanwhile, we are told that X11 is still horribly, irredeemably broken in this regard and if we haven't yet switched to Wayland, we really should by now.
The professional colorist industry goes on using Xorg on Linux.
...and it'll be difficult to turn off for those who aren't in the professions named in the other comments here that require accurate colour, who use monitors with smaller gamuts, and who would much prefer to have the "raw", "unmanaged" behaviour.
Actually, a lot of us that don't work in the design space would be perfectly happy if our monitors simply had close enough to the same white between our dual screens. I absolutely do not care how color accurate my JavaScript code actually is in my editor, I would love to have the ability to manually tweak the balance so my monitors more or less render the same.
Right now, Linux just can't do that consistently. Most of the existing solutions want me to buy a very expensive color calibration tool that I can't justify or afford.
> Anyone should strive to have as accurate colors as possible
Why is that? I like to have my monitors with a bit of a warmer colour balance, since it is nicer on my eyes, and I have no need for 100% colour reproduction. I think that everyone should strive for the most comfortable colours as long as they aren't doing photo editing or something.
Mapping directly into the full gamut of the monitor, the way that almost every common computer did before the whole "color management" stuff even became a thing.
Anyone should strive to have as accurate colors as possible.
No, people have different needs and set their monitors' brightness and contrast accordingly. It's only the mentioned industries which require that accuracy --- and the associated, often very expensive, monitors and calibration equipment.
>No, people have different needs and set their monitors' brightness and contrast accordingly. It's only the mentioned industries which require that accuracy --- and the associated, often very expensive, monitors and calibration equipment.
Except for accessibility reasons (e.g. high contrast for the visually impaired), there are no "different needs" that dictate that people should see colors rendered falsely compared to their reference if they're not in the creative professions.
Well, that's not true, and any user of f.lux/Redshift/Twilight will probably agree with me on that one.
Or, any user of audio equalizer set to a genre preset.
While I understand what you mean (having the screen properly calibrated out of the box would sure be nice), you might be using too strong words to express it :)
No, parent just asserted that there are no "color space preferences" (outside of accessibility) people should have, period (and if they do have, OS/software makers and monitor companies should be OK to ignore them).
It's unclear to me why there's no color palette/correction table exposed in libdrm/KMS at the kernel. It should just be part of the video mode, maybe with a bit to indicate when the driver/hardware doesn't support changing it. But it should always be readable - fill it with a linear identity table when the support isn't there - that won't be worse than what we have now.
Then the layers up the stack like Wayland/X/GNOME/KDE are just messengers to/from the bottom @ drm.
We also need floating-point framebuffers to be first-class citizens at the KMS level. I don't want to be forced into OpenGL/Vulkan just to have the hardware apply gamma correction to a software-rendered framebuffer, and if I have the hardware do color correction, it kind of needs the dynamic range of floats - not uchars - and I don't think libdrm's dumb buffers support floats today. If not floats, at least more than 8 bits per color component.
Programs like Plymouth or other embedded style applications running directly on libdrm should be able to have color-corrected output without needing bespoke software implementations and their own calibration tables. I should be able to tell the kernel the correction table, maybe compile it in, or another payload on the boot parameters cmdline.
Hell, there are fairly well-known simple algorithms for generating approximate color tables from a single gamma value. If I only want to make things look "better" in my Linux kiosk, and don't care about absolute color correctness, let me stick a drm.gamma=.55 field on the kernel command line to generate the correction table in lieu of a full calibrated table.
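For what it's worth, here's a rough sketch (not tested against real hardware; the device path and CRTC id are placeholders) of what generating a table from a single gamma value and handing it to the hardware can look like from userspace today, via libdrm's legacy per-CRTC gamma call - the drm.gamma= parameter above is still hypothetical:

```c
/* Rough sketch: build a per-channel LUT from a single gamma value and hand it
 * to the CRTC via libdrm's legacy gamma call. Assumes a 256-entry LUT (real
 * code should check the CRTC's gamma_size); the device path and CRTC id are
 * placeholders - enumerate them with drmModeGetResources() in practice. */
#include <fcntl.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    const double gamma = 0.55;    /* e.g. the hypothetical drm.gamma=.55 above */
    const uint32_t crtc_id = 51;  /* placeholder CRTC id */
    uint16_t lut[256];

    /* The ioctl takes 16-bit entries per channel. */
    for (int i = 0; i < 256; i++)
        lut[i] = (uint16_t)(pow(i / 255.0, gamma) * 65535.0 + 0.5);

    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

    /* Same curve on all three channels for this simple single-gamma case. */
    if (drmModeCrtcSetGamma(fd, crtc_id, 256, lut, lut, lut))
        perror("drmModeCrtcSetGamma");
    return 0;
}
```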
Yeah... I think both are applicable, but if you really want something done then you do it yourself or pay someone to help do it, especially for open source. It sounds like you have a decent idea of where to start. I'd go find some kernel developers working on the underlying components, and offer to pay them or see if they have recommendations of people to get the work done.
In doing some searches on the subject this [1] came up. So there seems to already be movement on this front, and it's possible my knowledge of the current situation is stale.
There's definitely some level of gamma/color correction functionality at the DRM level already in the kernel [1]. So my desires may already be largely fulfilled, and maybe userspace just needs to get its act together.
However I think complaining about open source software has its place. Sometimes the developer has never thought of adding the feature that the end user can't get along without.
No one has time to implement all the ideas they throw out in a casual discussion. That doesn't mean those ideas shouldn't be heard. If you have an actual argument, let's hear it, but your comment here is damaging and provides no value.
If you write the calibration info to the video LUT, why does software also need to know about it? Isn't your monitor displaying perfectly calibrated sRGB at that point?
If the content is not in sRGB, don't you need to actually know the content's color profile in order to convert it to sRGB? How does the monitor profile matter at this point?
(not challenging, just asking, I know next to nothing about color correction)
You don't want to convert to sRGB, you want to convert to the display device color space (which in professional displays is often larger than sRGB).
Most desktop environments and applications assume that the source is sRGB, unless specifically tagged (in image metadata).
So you have two points: source (image, video) and reproduction device (display, printer etc.). And you convert colors from the former to the latter. sRGB may not even come into play if the source image is AdobeRGB and the display is a wide-gamut pro LCD!
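To sketch what that source-to-display conversion looks like under the common matrix-plus-curve model (assuming an sRGB source for simplicity, since that's the usual untagged default; the XYZ-to-display matrix below is a made-up placeholder that would really come from the monitor's profile):

```c
/* Sketch of source -> display conversion under the simple matrix+TRC model:
 * decode the source transfer curve, go through XYZ, then into the display's
 * primaries and re-encode. The XYZ->display matrix is a placeholder here;
 * in reality it comes from the monitor's profile (characterization data). */
#include <math.h>
#include <stdio.h>

static double srgb_decode(double v)        /* sRGB EOTF (piecewise) */
{
    return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
}

static void mat3_mul(const double m[3][3], const double in[3], double out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = m[r][0] * in[0] + m[r][1] * in[1] + m[r][2] * in[2];
}

/* Standard sRGB (D65) -> XYZ matrix. */
static const double srgb_to_xyz[3][3] = {
    {0.4124, 0.3576, 0.1805},
    {0.2126, 0.7152, 0.0722},
    {0.0193, 0.1192, 0.9505},
};

/* PLACEHOLDER: would be the inverse of the display's primaries matrix taken
 * from its profile. Identity here just to keep the sketch self-contained. */
static const double xyz_to_display[3][3] = {
    {1, 0, 0}, {0, 1, 0}, {0, 0, 1},
};

static void srgb_pixel_to_display(const double rgb[3], double out[3], double display_gamma)
{
    double lin[3], xyz[3], disp[3];
    for (int i = 0; i < 3; i++)
        lin[i] = srgb_decode(rgb[i]);        /* 1. linearize the source values */
    mat3_mul(srgb_to_xyz, lin, xyz);         /* 2. source primaries -> XYZ */
    mat3_mul(xyz_to_display, xyz, disp);     /* 3. XYZ -> display primaries */
    for (int i = 0; i < 3; i++)              /* 4. re-encode with the display TRC */
        out[i] = pow(disp[i] < 0 ? 0 : disp[i], 1.0 / display_gamma);
}

int main(void)
{
    const double red[3] = {1.0, 0.0, 0.0};
    double out[3];
    srgb_pixel_to_display(red, out, 2.2);    /* assume a plain 2.2 display TRC */
    printf("display RGB: %.4f %.4f %.4f\n", out[0], out[1], out[2]);
    return 0;
}
```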
I know just from writing graphics hacks that you get significantly better results if you do the correction in the HDR color space before you produce the 24-bit, 8-bits-per-component pixels shipped to the GPU.
If by "video LUT" you mean something at the CRTC or even after it on some external device, if the software producing the visuals has already reduced the pixels down to 8-bits-per-component before they hit the LUT, then you've lost accuracy particularly in the small values.
This is why it's desirable to do one of the following:
1. inform the software of the LUT and let it perform the transform before it packs the pixels for display
2. change the entire system to have more bits per color component all the way down to the framebuffer, then the per-component LUTs at the CRTC can profitably contain > 256 entries.
I'm not an expert in this field at all, just play with graphics hacks. But this is what I've come to understand is the nature of the issue.
edit:
To clarify, the reality implied by the need for correction is that some areas of the 0-255 range of values are more significant than others. When you do a naive linear conversion of whatever precision color the application is operating in down to the 24-bit RGB framebuffer, you've lost the increased accuracy in the regions that happen to actually be more significant on a given display. So you'd much rather do the conversion before throwing away the extra precision, assuming the application was working in greater than 24-bit pixels.
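A tiny numeric illustration of the precision argument, with made-up dark values and a plain 1/2.2 correction just to show the effect:

```c
/* Tiny illustration (made-up values): correcting in float and then quantizing
 * keeps dark values distinct that a 256-entry LUT applied after quantization
 * has already collapsed. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t quantize(double v) { return (uint8_t)(v * 255.0 + 0.5); }

int main(void)
{
    const double gamma = 1.0 / 2.2;
    const double dark[3] = {0.0005, 0.0010, 0.0015};  /* nearby dark values in float */

    for (int i = 0; i < 3; i++) {
        /* Path A: apply the correction in float, then quantize to 8 bits. */
        uint8_t a = quantize(pow(dark[i], gamma));

        /* Path B: quantize first, then run the 8-bit value through a LUT. */
        uint8_t q = quantize(dark[i]);                /* all three collapse to 0 */
        uint8_t b = quantize(pow(q / 255.0, gamma));

        printf("in=%.4f  float-first=%3u  lut-after-8bit=%3u\n", dark[i], a, b);
    }
    return 0;
}
```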
Not necessarily. After calibration your monitor displays sRGB (or whatever other profile you calibrated for) as accurately as it can. It may still miss or shift parts of sRGB, though.
Thus your software has to know what your display is capable of. It can use this information to show you which parts of the photograph you edit are not shown accurately.
> Except that they only set it for the primary output. Turns out if you have multiple displays, the profile for the first one is put into the _ICC_PROFILE X11 atom, but the profile for the second one in the _ICC_PROFILE_1 X11 atom, for the third display in the _ICC_PROFILE_2 X11 atom, and so on. It’s just that nobody seems to do this.
Sounds like an easy thing to fix. I'd suggest the author to try and make some patches - don't know about GNOME, but KDE is pretty friendly and easy to contribute to.
Not everyone is a developer, and this type of comment is why everyone who cares about UI/UX design professionally eventually goes back to Windows or macOS.
That type of comment is also the underlying reason why Linux improves/fixes things at such an astronomical rate compared to either of the choices you listed.
It's not just saying "Hey you, go fix that", it's saying "Hey everyone viewing this comment on a public page: this exists and needs to be fixed. Someone grab a wrench."
The Linux desktop ecosystem's development model is highly prone to regression, though. I've been using it since XMMS was the most advanced media player. I've seen lots of things fixed, only for them to break again a year or two later.
So sure, things are fixed at an astronomical rate, but that's because things are also decaying at an astronomical rate, so there's a constant supply of low-hanging fruit.
>That type of comment is also the underlying reason why Linux improves/fixes things at such an astronomical rate compared to either of the choices you listed.
Huh? Linux is behind both Windows and OS X in most non-server related areas. They can't even agree on a good compositor...
Except that there is no "they". Unlike with Windows or macOS there is no Microsoft or Apple to make decisions for the whole "ecosystem". There are distro vendors, but they're more like app stores shipping pre-configured pieces (including pieces that we consider to be core OS functionalities). And there are a lot of them, so lack of consensus is not surprising. And then many vendors offer options to install this-and-that, so it's up to users to decide what exactly they build on their machines...
Given that there is no such thing as just "Linux desktop" (I mean, my setup can be completely different from another person's), it's hard to say whether it's ahead or behind, unless every component option is known to be such.
...and this type of reply is why it stays that way.
Well, at least when ignoring the UI/UX mention, which has absolutely nothing to do with the discussion here (color correction is required in many fields, but UI and UX work isn't really one of them).
The author seems to be technical enough to make a nice analysis of what's done and what needs to be done. I suspect he might be able to provide this particular fix himself - or, if he can't, then he might be able to contribute by providing a well-detailed feature request for others to implement (with that even I might be able to go and fix it, while without it I surely won't, as I lack the knowledge, hardware and in fact even awareness of this problem; only reading this post put some light on it for me).
Some people don't make the mental switch between "I'll wait for a fix" and "I'll fix it" if they're not used to it, even if they are perfectly capable of fixing it and have time for it. I see it in my own case: there were some parts of the stack I never really considered digging into to fix stuff myself, and when I finally tried, it turned out there was no reason to keep myself restrained. It's just a friendly reminder that you can often fix such stuff yourself and it might not be as hard as it seems.
Not an easy thing. The ICC-in-X specification specifies the index as the Xinerama screen number, which has no meaning with XRANDR-on-XOrg, and even less meaning on Wayland. There's nothing in the protocol to tie the ID to a monitor, or even a predictable hotplug order. This is why the device-id in colord exists.
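For the curious, reading those per-output atoms is the easy part; knowing which physical monitor each index refers to is exactly the unreliable bit. A minimal Xlib sketch (atom names from the quote above, everything else illustrative) that just dumps what's set on the root window:

```c
/* Sketch: dump the per-output _ICC_PROFILE, _ICC_PROFILE_1, _ICC_PROFILE_2,...
 * atoms from the X root window. This only shows the naming scheme from the
 * quote above; mapping an index to a physical monitor is exactly the
 * unreliable part described in the parent comment. */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    for (int i = 0; i < 8; i++) {
        char name[32];
        if (i == 0)
            snprintf(name, sizeof name, "_ICC_PROFILE");
        else
            snprintf(name, sizeof name, "_ICC_PROFILE_%d", i);

        Atom atom = XInternAtom(dpy, name, True);   /* True: don't create it */
        if (atom == None)
            continue;

        Atom type;
        int format;
        unsigned long nitems, after;
        unsigned char *data = NULL;
        /* Read up to 4 MB of profile data (length is in 32-bit units). */
        if (XGetWindowProperty(dpy, DefaultRootWindow(dpy), atom, 0, 1024 * 1024,
                               False, AnyPropertyType, &type, &format, &nitems,
                               &after, &data) == Success && data) {
            printf("%s: %lu bytes of ICC data\n", name, nitems);
            XFree(data);
        }
    }
    XCloseDisplay(dpy);
    return 0;
}
```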
I've been using the ColorHug 1, and found it worked quite nicely. The included Fedora CD (yes, this was some time ago) let me calibrate just fine. Once I wanted to do it straight from XFCE, I ran into a few challenges, but it wasn't insurmountable:
https://askubuntu.com/questions/427821/can-i-run-gcm-calibra...
Basically, you have to run xiccd in the background, since XFCE doesn't have built-in colord support to set the X11 atom[1]. I've never used multiple monitors though, so I don't know whether that's possible with xiccd+XFCE instead of GNOME.
I think niche-market software, used by a limited number of highly specialized professionals, is somewhat incompatible with the open source economic model. When a piece of software is used by very many users, and there is a strong overlap with coders or companies capable of coding, say an operating system or a web server, open source shines: there is adequate development investment by the power users, in their regular course of using and adapting the software, that can be redistributed to regular users for free in an open, functional package.
At the other end of the spectrum, when the target audience is comprised of a small number of professionals who don't code, for example advanced graphic or music editors or an engineering toolbox, open source struggles to keep up with proprietary software because the economic model is less adequate: each professional would gladly pay, say, $200 each to cover the development costs for a fantastic product they could use forever, but there is a prisoner's dilemma in that your personal $200 donation does not make others pay and does not directly improve your experience. Because the userbase is small and non-software oriented, occasional contributions from outside are rare, so the project is largely driven by the core authors, who lack the resources to compete with proprietary software that can charge $200 per seat. And once the proprietary software becomes entrenched, there is a strong tendency toward monopolistic behavior (Adobe) because of the large moat and no opportunity to fork, so people will be asked to pay $1000 per seat every year by the market leader simply because it can.
A solution I'm brainstorming could be a hybrid commercial & open source license with a limited, 5 year period where the software, provided with full source, is commercial and not free to copy (for these markets DRM is not necessary, license terms are enough to dissuade most professionals from making or using a rogue compile with license key verification disabled).
After the 5-year period, the software reverts to an open source hybrid, and anyone can fork it as open source, or publish a commercial derivative with the same time-limited protection. The company developing the software gets a chance to cover its initial investment and must continue to invest in it to warrant the price for the latest non-free release, or somebody else might release another free or cheap derivative starting from a 5-year-old release. So the market leader could periodically change and people would only pay to use the most advanced and innovative branch, ensuring that development investment is paid for and then redistributed to everybody else.
Many of my (Hollywood) clients need to do color grading and other accurate color work.
What platforms do they all use? Not Macintosh, despite its reputation for being the platform for "graphics professionals" (it was missing 10 bit/channel color until very recently). And not Linux, despite its use in render farms.
They use Windows 10 and HP DreamColor monitors. That's the only platform that works and works well for people who need to care about color.
I'm a colorist. Many of us do use Windows, many use OSX, many more use Linux. Every major color-critical application supports many types of LUTs and color management.
Further, HP DreamColors have tons of problems and aren't considered solid for color-critical work (but are fine for semi-color-accurate stuff like intermediate comps etc). Color-accurate work is done over SDI with dedicated LUT boxes handling the color transforms, with the cheapest monitors being $7500 Flanders Scientific Inc 25" OLED panels.
Yep. I was a Colourist in my former life (TV ads, promo films, music videos mainly), used Resolve on OSX with a Flanders monitor. Most other people I talked to were either using OSX or Linux. (Granted, this is a few years ago when ProRes was the primary capture and delivery codec).
That's interesting regarding OLED, I thought over time the different colour LEDs decay at different rates?
A while ago I had a look at Eizo's 10 bit / channel TFTs, which looked impressive to me (from a layman's perspective), do you have any opinions of those?
Eizo's high end displays are great for almost all uses up to the highest end color critical installs. I recommend them over the DreamColors all the time. You can't get much better without moving to ultra high end pro solutions.
Flanders Scientific, Sony, Dolby in order from cheapest to most expensive. FSi and Sony use the same panels for their 25" models. Sony x300 is the go to right now for affordable HDR. Dolby is the gold standard for non projector color critical work.
For displays where color-critical accuracy isn't necessary, Eizo is about the best. Lots of "good enough" panels from LG, Acer, and Dell though. I actually have a gaming panel that calibrated surprisingly well and holds those numbers.
The best consumer display by far though are the LG OLED televisions. They're so good that we're installing them in lots of mid level suites as client monitors (aka close enough to our color critical panels).
I don't have any experience with the more expensive panels you listed, but I do have an LG OLED, and I'd be a bit more careful about recommending it for color critical work as a computer monitor.
I've owned it for about a year and the red channel on mine exhibits painfully obvious burn-in patterns.
I don't know why you are being downvoted so hard. Red is a very complicated color on OLED displays and manufacturers admit that frequent calibration is not only necessary but will eventually kill the display after a few years.
Sit in darkened rooms, spin 3 trackballs, and turn a few knobs to make pictures look pretty, mostly. DaVinci Resolve[1] and Baselight[2] are quite popular, take a look at the websites to give yourself an idea of what it looks like.
Hah, thanks! I should have been a little bit clearer - not an ELI5-layperson, a layperson who has some vague handwavey idea of how video/film is made and once read a popular article about orange/teal contrast.
It's more stuff like 'what is it about this process that makes a dedicated colour specialist necessary?', 'what are the things they're supposed to accomplish?', 'what are their technical and creative constraints/inputs/deliverables?', etc.
I don't agree; the entire visual effects industry, including their color departments, runs on Linux. Baselight and Resolve are the two most common color correction programs in the industry, Baselight exclusively runs on Linux, and the big color companies (Company 3, Efilm, Technicolor) all run Resolve on Linux. Coloring is done either on projectors or broadcast monitors (something like a Sony PVM-A250 on the low end at ~$6,000).
Do you by any chance have some links where I can read more about Linux as a front-end system in the film/graphics industry? This is a field of work in which I would never have guessed Linux to be strong.
Pretty much (there's a bit of Windows at the smaller places) all the big VFX studios (ILM, SPI, Weta, Framestore, MPC, DNeg) are running Linux for almost everything involving content creation, using apps like Maya, Nuke (for compositing), Katana, Houdini, etc.
There are exceptions - some apps (ZBrush) don't run on Linux, so there are Windows machines around, but in general >= 95% of machines the artists and developers use are Linux at the big places.
And most of those apps use OpenColorIO as a framework for handling colourspaces.
It’s mainly in large facilities that run huge jobs with massive amounts of data. Maya, Nuke, Houdini, Flame, Baselight, and much in-house VFX software all run perfectly well on Linux. And of course the cornucopia of renderers running on their server farms, as might be more expected.
The lineage is from SGI, where many of these applications were born, but as the company faltered and consumer graphics hardware took off thanks to gaming, Linux became the natural home.
Every large visual effects studio runs on Linux, with hundreds of Linux workstations at each one. Color-sensitive work like lighting and compositing has been done for well over a decade on Linux. Artist workstations are calibrated and every major computer graphics application has support for look-up tables.
>Every large visual effects studio runs on Linux, with hundreds of Linux workstations at each one. Color-sensitive work like lighting and compositing has been done for well over a decade on Linux.
That's for rendering, where the OS and Desktop experience doesn't really matter, and the cheaper it is the better.
Few pros do the actual editing and color work (where the decisions are made, not the rendering part) on Linux.
This is just not true, you're spreading misinformation. I am a colorist, I'm the person making these final decisions. Every single high-end color suite I've ever been in runs Linux. In fact, the full version of Baselight (one of the de facto color correction suites) only runs on Linux. DaVinci Resolve (one of the other major ones) ran only on Linux for the majority of its existence, and the full panel version (the pro choice) only ran on Linux until last year.
Every major color house I've worked in runs Linux exclusively in their suites (CO3, The Mill, Technicolor, etc).
That's not to say Windows and OSX suites don't exist, I use them and my own suite runs Windows, but the highest end of color is basically Linux only.
I am really very interested in reading about a typical hardware and software setup for a Linux colorist workstation with a special focus on which graphics card and which drivers to use! Nvidia?
So this is about DaVinci Resolve since it has the most flexibility for setup, many other systems are borderline turnkey.
The recommended setup is a Supermicro chassis with dual Xeons (12-core CPUs min rec, 20-core preferred), min 32GB RAM (usually at least 64, 128+ common on high-end systems), SSD for OS, Thunderbolt (min)/PCIe/10GbE/fibre (preferred) attached storage, usually 8-bay RAID6 or similar min, and almost always NVIDIA GPUs, with 8x 1080 Tis or the latest Titans being the most common setup I see.
This runs on CentOS or RHEL 6.8 or 7.3.
Video signal is output over SDI from a PCIe card to a LUT box (for color transforms), then to a color-critical display (FSi, Sony, or Dolby typically, with the best suites using cinema projectors). A second SDI runs out to a box showing video scopes. Everything is usually calibrated with Light Illusions software and a Minolta colorimeter probe (typically a 3rd-party service does this every few months).
The GUI monitor(s) are usually just regular consumer whatever.
The software is controlled by a large, $30K control panel that looks similar to an airplane cockpit.
That's most of the important stuff, but I can fill in details where you're curious.
Your work sounds fascinating. I was previously into my photography, specifically film (both slide and col neg) - I'm still a big fan of the medium, except for the expenses and faff of getting a good scan and then getting the colour right.
Can you help me answer two things, as both have bugged me for years..?!
How do they achieve a look of tinted monochrome in films which are still actually in colour? If that doesn't make sense - I'm thinking of films like Heat, where there is often a strong blue tint which gives the feel of monochrome but it is all in colour. I found I was able to replicate it somewhat by combining the image with a quadtoned version, but it was still fairly far off tbh.
The other question is - how does colour gamut relate to the brightness of the display? Is it all to do with the dynamic range of each channel - i.e. the difference between black and, say, red, rather than overall brightness? I was at a photography show recently, and was blown away by some of the prints made by Fuji's printers. Is it ever possible to match the gamut our eyes can see? And what colour space/gamut do you usually work in? Sorry, two extra Qs there...
Thanks, and thanks for the fascinating info already.
I think we might have a different definition of "Few pros do ... color work.. on Linux". Have you worked at Sony, Company 3, Dreamworks, Lucasfilm, Pixar, or Deluxe?
Are the ports on the 1080ti's used for video output at all? Is there one with SDI out? Or are they just used for CUDA?
At the risk of asking a silly question, what does the LUT-box do that couldn't be done in software (or, I guess, why isn't it done in software)?
This stuff is fascinating to me.
Do you know of any good YouTube videos on colorist hardware? I've seen a couple of videos on workflow, but neither went into the guts of the machines and LUT-boxes.
After reading this Linux rant and seeing how Apple is systematically marginalising its Pro customers, I'm actually inclined to believe you. Windows might be a pain for John Doe sometimes, but Microsoft also makes sure an insane amount of obscure professional features (like color management) keep working.
Everyone needs to characterize (not “calibrate”; that term is highly misleading) their display. The question is just whether you keep the characterization provided by the manufacturer, or measure the display yourself using a hardware device. Either way, the result is a display “profile”, which is basically a lookup table used by software to map color coordinates so that they will appear as expected on the display.
People using Macs certainly do care about ColorSync. That’s the name of the software which uses the display characterization to keep colors looking as expected throughout the operating system and most applications.
In many professional environments the preferred route is to use specialist hardware to send an RGB signal at a high bit rate and do any colour transformation in the monitor hardware.
Using LUTs either at the application or OS level to adjust colour information is a big no-no, although that doesn't stop some people from doing it. You simply don't want to change your colour space[1] until you absolutely have to.
The point of calibrating your monitor (which is a hardware + firmware level problem) is to see how your RGB image will look on a colour space restricted piece of hardware (for example in video this is often 12-bit RGB --> Rec709).
If you have some image data stored with reference to one color space, and you want to convert the data to a different color space (e.g. because you are targeting some particular output device), that is a gamut mapping problem. To learn about different trade-offs involved in choice of gamut mapping algorithms, I recommend Ján Morovič’s monograph, https://www.wiley.com/en-us/Color+Gamut+Mapping-p-9780470030...
Same story if you want to show your image on a display with a different gamut.
Most gamut mapping algorithms used in practice (whether on a display or in software) are actually pretty mediocre in my opinion. It would be possible to do substantially better by writing your own code, at the expense of being a bunch of work. Alas.
P.S. The Wikipedia article about color space (and articles about many other color-related topics) is pretty terrible, but I’ve been too lazy to rewrite it.
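To make the "mediocre" point above concrete, here's a minimal sketch (assuming the pixel is already in the destination's linear RGB, and using Rec.709 luma weights) of the naive per-channel clip most pipelines effectively fall back to, next to a slightly less hue-distorting desaturation toward the pixel's own luminance; proper algorithms from the literature work in a perceptual space and compress rather than clip:

```c
/* Minimal sketch of the naive gamut mapping most pipelines fall back to:
 * once in the destination's linear RGB, anything outside [0,1] is clipped
 * per channel - cheap, but it distorts hue and flattens gradients. The
 * second function desaturates toward the pixel's own luminance instead,
 * which preserves hue better (still far from a perceptual-space algorithm). */
#include <math.h>
#include <stdio.h>

typedef struct { double r, g, b; } rgb_t;

static double clamp01(double v) { return v < 0 ? 0 : (v > 1 ? 1 : v); }

/* "in" is assumed to already be in the destination device's linear RGB,
 * i.e. after the matrix conversion from the source space. */
static rgb_t gamut_clip(rgb_t in)
{
    rgb_t out = { clamp01(in.r), clamp01(in.g), clamp01(in.b) };
    return out;
}

static rgb_t gamut_desaturate(rgb_t in)
{
    /* Assumes the luma itself is in range, i.e. the out-of-gamut excursion
     * is chromatic rather than an overall over/under-exposure. */
    double y = 0.2126 * in.r + 0.7152 * in.g + 0.0722 * in.b;  /* Rec.709 weights */
    double t = 1.0;                                            /* 1 = no desaturation */
    const double ch[3] = { in.r, in.g, in.b };
    for (int i = 0; i < 3; i++) {
        if (ch[i] > 1.0) t = fmin(t, (1.0 - y) / (ch[i] - y));
        if (ch[i] < 0.0) t = fmin(t, (0.0 - y) / (ch[i] - y));
    }
    rgb_t out = { y + t * (in.r - y), y + t * (in.g - y), y + t * (in.b - y) };
    return out;
}

int main(void)
{
    rgb_t c = {1.2, 0.4, -0.1};                 /* an out-of-gamut example value */
    rgb_t a = gamut_clip(c), b = gamut_desaturate(c);
    printf("clip:  %.3f %.3f %.3f\n", a.r, a.g, a.b);
    printf("desat: %.3f %.3f %.3f\n", b.r, b.g, b.b);
    return 0;
}
```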
I currently work for a media conglomerate where colour tends to matter a lot to both the print and digital channels. I'm not sure which monitors they use, as I tend to work rather separately from that group, but they all work on Macs - that's been the case in publishing since the late '80s/early '90s, and there seems to be no move to stray.
This all sounds interesting but I have 0 clue about what the author is talking about. Can somebody explain what is being talked about, what color calibration is and all? Not a designer, but a systems programmer. I do understand rgb, hsv colors but that's it.
If you do any professional color work, you want a color calibrated display. This ensures that the colors you see on your screen will be the same as the ones on your designer colleague’s screen, and the same as the ones that come out of the printer’s factory, for instance.
Higher end displays are already pretty decently calibrated out of the factory, but if you want to be exact you will need to buy an external piece of hardware that will measure your display’s colors and tell you how off they might be.
The author bought a piece of color calibrating hardware that was meant to be open in design and work with Linux, as presumably he wants to support these efforts.
But he encountered a bevy of problems, ranging from packages not updated in a while to things that just plain don’t work as documented, and got frustrated.
Understandable, since on macOS or Windows with proprietary hardware, this would have been a 5 minute process. The author is sad and frustrated that the open source alternatives aren’t there.
Displays also drift over time. So even if you have a professional display calibrated at the factory it needs to be recalibrated a minimum of once per year with 6 months being preferred in a professional environment.
This is true with even the highest quality color critical displays like we use in film/tv color correction ($5-$50k+ for 25" panels).
It's not about the signal, it's about the physical display.
Displays aren't calibrated at the factory. To use an LED-backlit display as an example, not every single LED in the world is created equally. Not every LED is going to give off the exact same wavelength for the same current value.
Extrapolate this out to all the other components and this is the reason that your monitor has built-in physical controls for changing RGB/contrast/brightness values to begin with.
Calibration accounts for this.
As for why ICC profiles are used instead of just changing settings on the monitors, the OSD options usually don't offer enough fine-grained control to get things just right. Display makers are typically targeting main-stream consumers so they provide simple adjustment controls.
The display at the end of it is an analog device. The calibration confirms that the actual light coming out of the screen is an accurate representation of what the digital information thinks it should be.
To add to all the comments here: the range of physical colors a monitor can display also intentionally differs (a low gamut, sRGB gamut, or one of several wide-gamut standards). The video pipeline sends just 8- or 10-bit values to the display, but the actual color seen can differ dramatically.
A digital communication channel allows faithfully sending an image to the display. Turning that data into colored light is another thing.
Standard values of the voltages, currents, timings etc. that are applied to LEDs, liquid crystal pieces and other electronic components of the display in order to get the desired colors are only a starting point; a calibration that measures the differences between devices and compensates them is needed because of manufacturing and accidental differences.
I also have this question. I mean, the ones and zeros that get sent over your DisplayPort/whatnot cable for some specific color in your favorite color space should be identical for all computers everywhere. Why don't we then just calibrate the monitors themselves, rather than the whole OS?
We essentially are calibrating for the monitor’s version of the color not being 100% accurate. You’re right, in a perfect world the monitors would just be correct from the factory.
In practice, monitors change color over time (much more common in CCFL-backlit monitors, I think) and even shift with brightness, so we have to do it “at runtime”.
Monitors aren't often calibrated from the factory or they are calibrated to be "subjectively" nicer looking, e.g. high contrast and slightly cool white balance to account for show-room floors.
In addition to that, the built-in options for configuration are often very simplified and coupled to each other. Low-resolution controls plus simplified options mean that you often can't dial in perfect color reproduction. Hence ICC profiles.
Because light is analogue, our eyes are analogue, the light conditions of the living room are analogue, the paper where images are printed is analogue, and so on.
It is physically impossible to get a unique colour space across all surfaces.
If you display the same RGB value on different screens (or other outputs like prints) it'll be a different actual color being displayed. Color calibration measures the actual color created and creates correction information that software can use to make the color conform to a standard, so you have better control over the output.
He wrote a pretty detailed blog article explaining his issues and how he got around them, this is all very useful information to someone who is looking to do something similar.
Spending €100 also isn't an insignificant amount, and for that price I would expect what I'm buying to work properly.
The ColorHug is a colorimeter designed to calibrate screens with an approximately-sRGB gamut. If you're trying to calibrate a wide-gamut screen with a ColorHug it isn't going to work very well. If you need to do anything other than consumer hardware you need to use a spectrophotometer (that can do spectral profiling) rather than a colorimeter that's calibrated to different primaries than what it's trying to measure. A spectro is going to cost you at least $300, and a good one is going to cost you somewhat more.
Source: Person that designed the ColorHug hardware.
It's barely more than the device costs to make. It turns out making batches of 50 at a time is an order of magnitude more expensive than building them 50,000 at a time.
Source: Am the person that sits in a shed and builds each ColorHug.
It's not. An Eizo EX3 sells for 85 bucks here, a Spyder 5 for 95. A ColorHug2 amounts to 115. Since the ColorHug2 doesn't include the actual calibration software, it is equal to the EX3. Paying 35% more just for the "Open Hardware" label and then not being able to reap the expected benefits (better support etc.) doesn't sound like a good deal.
Better support? Open Hardware means Open Hardware, and nothing more - you get the access to the schematics, documentation, sometimes also right to produce similar devices by yourself. You can expect greater hackability, definitely, but "Open Hardware" sticker means nothing in terms of support or reliability. It might be better, it might be worse, you can't tell.
The price in such projects is directly related to the production scale. How many EX3s, Spyders and ColorHugs have been produced? Open Hardware projects (especially the equivalents of already available non-free devices) are often costlier because it initially attracts only the people who really care about its hackability, which makes the yields low, which makes the prices high, which further strengthens that relation, and the circle is closed.
With userbase kept small, most users usually keep the firmware/software support just right enough to scratch their own itches.
Please remember that hardware is not software, and open hardware comes with a completely different set of challenges than free (open) software. When it comes to hardware, you often really need to pay extra for the freedom - not just with your time, like we were used to with early FLOSS, but also with your money. If you choose a project because of its "Open Hardware" sticker, it's really more than likely that it will be costlier and rougher around the edges, because it's usually harder to roll with such projects than with closed competitors and the ROIs are usually way smaller too. That's just how it is and there's nothing surprising about it; if you care about openness, you have to accept it, otherwise it will never get better.
This is one of the more important comments here, imo. Open hardware != free software. As someone who's been in the unfortunate position of having to work with locked down chips on many occasions, just simply having access to proper documentation is awesome. It can take a lot of work and dedication to interface with open hardware, but we all benefit when the fruits of that labor are shared. This is a struggle we should all be willing to undertake.
The EX3 and Spyder are not good probes. The cheapest quality probe I know of is the xrite i1 display pro. Everything below that is basically a toy. I've never used a ColorHug though so I'm not sure how it stacks up.
As for software, DisplayCal is actually very well regarded in pro color and considered one of the only serious three choices, the others being CalMAN and the big dog being Light Illusions.
As with most Open Hardware projects, you pay extra for the mere fact of the device's existence. Without paying extra, it wouldn't exist, because producing stuff in small amounts is way more expensive per device than producing them in high volumes.
Many people would like to see open hardware succeed, so they get annoyed when they watch yet another open hardware company sell a broken product, which inevitably leads to bankruptcy, and then starts the “Linux will never work out of the box” cycle anew.
We’ve seen this play out dozens of times since the ‘90s, and the startups keep making the same mistakes.
They should at least read their predecessors' retrospectives, and strive to make different mistakes.
It appears that KDE could do much better without any actual effort. Just merge the existing code into the base packages like GNOME did, don't make it optional.