Accessible Palette: stop using HSL for color systems (2021) (wildbit.com)
323 points by bpierre on Aug 29, 2023 | 115 comments



Nice!

I'd point out though that with ordinary display and print systems, saturated reds and blues really are darker than greens. The exact formula depends on your color space but

   Grayscale = 0.299R + 0.587G + 0.114B [1]
is commonly quoted (those are actually the Rec. 601 luma coefficients; sRGB/Rec. 709 uses 0.2126, 0.7152, 0.0722) and in that case the brightest pure red is about 30% bright and the brightest pure blue is about 11%, which makes "bright red" an oxymoron in most cases.
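
A quick check of those numbers with the weights above (my arithmetic, as a TypeScript sketch):

    const luma = (r: number, g: number, b: number) =>
      0.299 * r + 0.587 * g + 0.114 * b;
    luma(1, 0, 0); // 0.299 -> the brightest pure red is ~30% bright
    luma(0, 0, 1); // 0.114 -> the brightest pure blue is ~11%
    luma(0, 1, 0); // 0.587 -> green carries most of the luminance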

You can certainly use those colors, but they are always going to be dark. Simply applying contrast rules will make your color choices accessible, but if you want to make something that is accessible and also looks good, the techniques in that article will make you a winner.

For that matter, saturated screen greens are nowhere near as saturated as is possible but are more saturated than most greens you see in real life: I make red-cyan stereograms

https://en.wikipedia.org/wiki/Anaglyph_3D

and one rule of thumb is that trees and grass look great in stereograms: even though they are green, they actually reflect a lot of red, so the balance between the channels is good and you get both good stereo and good color.

[1] https://www.dynamsoft.com/blog/insights/image-processing/ima...


This brings back thoughts of the NTSC days and broadcast safe limits, and the horrible time of clients that loved loved loved red. Explaining to them how the beautiful red artwork will be anything but beautiful on TV was never fun, especially if they wanted it for broadcast. Even when it wasn't for broadcast, an illegal red could still be seen frames later, and would bleed like it just had its throat slit.


As someone who also worked within the arcane limitations of analog video, at both the broadcast and prosumer levels, today's UHD video standards and colorspaces can be incredible when correctly applied in a maximal high-end workflow such as native 4k 10-bit HDR.

Yet when I look at today's typical "top quality" live broadcast content such as the 4K Super Bowl as delivered by mass consumer distribution such as Comcast Xfinity (via their latest high-end decoder box), it's a visual mess compared to what the signal chain should be capable of delivering.

Even though I have top notch viewing gear properly configured and calibrated (with local video processing 'enhancements' disabled), it looks terrible. Unfortunately, due to the layers of compression, conversion and DRM slathered on the signal before I receive it, it's extremely difficult to analyze what's going wrong. All I can determine is that it is a video feed being decompressed into a 4K-resolution, 4:2:2, 60fps frame buffer. However, examining still frame sequences reveals extensive motion, color and resolution artifacts.

The net effect conveys a sense of "sharpness" in the frequency domain at first glance but on critical viewing over time it's a weird kind of digital abomination of macro-blocked chroma splotches, lagged temporal artifacts and bizarre over-saturation of primary hues. While some pre-compressed streamed film content looks quite good when delivered via a streaming service willing to devote sufficient encoding time and delivery bitrate, it's still hit and miss. Live broadcast content, especially high-motion sports, is almost always a mess. We've come so far in standards and specifications yet still have so far to go in the actual delivered result to most households.


Years ago when deciding to cut the cord, I had to convince my roommate that an OTA DTV antenna would provide a better image. We had clear line of sight to the broadcast towers, so I knew it was a no-brainer, but I'm on the video side of things and he's not, which makes him a great analog for the vast majority of viewers. I set the inputs on the TV to the same channel for the Comcast cable box and the OTA antenna, and then A/B tested the inputs for him. Even he could see how bad the image from cable was. Their push-a-button-get-a-prize approach of one set of encoding settings for all content will always mean their low bit rates look bad.

My favorite cable box sports example was a PGA tournament showing a golfer putting on the green. The shot was an extremely tight close-up of the ball sitting there as the golfer addressed it. All of the dimples in the ball were clear, and every blade of grass was visible until the golfer swung and made contact with the ball. As the camera panned to follow, the ball became a white roundish shape with no detail and the grass turned into a blurry green smear, again with no detail. As soon as the ball went into the cup and the camera stopped moving, at least one GOP later, the grass snapped into full detail again.

Their predictive model is tuned for low-motion static content because that's what 90%+ of their content is. Even something like ESPN is now primarily talking heads of people talking about sports rather than actual sports. Any sports show in replay isn't live, so who cares? Looking back at crappy SD tape captures, it's obvious that anything was better than nothing. Much like YouTube. People just want something; it doesn't have to be amazing. If it looks like Picasso instead of Monet, they don't care, as long as their minds don't have to think.


I have absolutely blown people's socks off with the quality delivered OTA via ATSC; it looks so good.


And to think the US government gave anyone that wanted one a coupon for a DTV converter box. By that point, pretty much nobody used a terrestrial antenna any more, so very few people took them up on the offer. I can only imagine cable companies being very pleased with that.

Also, the signal was meant to have even more bandwidth. When the broadcasters decided to bring out the fractional channels, it didn't exactly fit the idea that Congress had when allocating the frequencies. Yet another example of how Congress can be behind the times in pretty much everything.


When we bought our place, we put up a roof-mounted antenna and a distribution amp (essentially a zero-loss splitter), and then pulled RG-6 quad-shield to each room.


Interesting comment, thanks. Two questions out of curiosity:

> Even though I have top notch viewing gear properly configured and calibrated

Any chance you'd be willing to share a few details about this?

> While some pre-compressed streamed film content looks quite good when delivered via a streaming service willing to devote sufficient encoding time and delivery bitrate, it's still hit and miss.

Which streaming services are doing things right in your view?


> Any chance you'd be willing to share a few details about this?

I have several viewing devices in different rooms including an LG C2 OLED, a high-end Samsung QLED and in my dedicated, fully light-controlled home theater room a native 4K 10-bit HDR+ laser projector and 150-inch screen. Each of these displays has been professionally calibrated. To objectively evaluate an input source these days it's important to try multiple different display technologies because flat screens can vary between OLED, QLED, mini-LED, LCD and VA which all have different trade-offs in contrast, peak brightness, viewing angles, color spaces, gamma response curves, etc. And that's before getting into various front projector technologies.

Most consumer TVs these days come with a pile of post-processing algorithms which claim to deliver various "enhancements." In almost all cases you'll want to turn these options off in the setup menus. For critical viewing, objective calibration with a suitable colorimeter is ideal, especially when considering HDR sources which should be normalized to each display's native capabilities in Nits. If you don't want to dive down the rabbit hole of evaluating all this yourself (which can admittedly get complex), I suggest the TV reviews at https://www.rtings.com which are credible, thorough and yet still relatively accessible to non-experts. Unfortunately, RTings doesn't evaluate front projectors. For that the best bet is an expert forum like AV Science (https://www.avsforum.com).

> Which streaming services are doing things right in your view?

Currently, I don't think there's any service I would say is universally "doing it right." It still varies depending on the individual piece of content. Amazon, Netflix, AppleTV and even YouTube each have some extremely well-encoded, high bitrate content. But I've also seen examples on each service that aren't great.

The highest-quality home source will typically be a UHD Blu-Ray disc player. If you have such a player I highly recommend the Spears and Munsil UHD Benchmark Discs (https://spearsandmunsil.com/uhd-hdr-benchmark-2023/). Just because a disc is UHD format doesn't mean the media on it has been encoded correctly, from a high-fidelity source and in appropriate quality. The Spears and Munsil disc features a comprehensive suite of custom-designed test signals and specially sourced demonstration content identically encoded in HD, UHD, HDR, HDR10, HDR10+ and DolbyVision, including moving-window split screens allowing you to compare formats. It's extremely impressive and, as a video engineering geek, I found it fascinating to explore for hours on my various displays - while my wife had zero interest in it :-).


Yes, the visual quality of a sports game can vary a lot and is frequently a disappointment. I can get an ATSC 3.0 multiplex from Syracuse, and it is sad that it is not really better than the ATSC 1.0 signal.


That's disappointing to hear because my current residence came with a large digital-capable antenna installed in the attic which I've never connected. When more local stations in my market start ATSC 3 broadcasts next year I was thinking of hooking up an OTA feed just to see if it's better than the Comcast XFinity cable-delivered mess.

I don't even watch that much TV content but when I do, I want it to not look like crap. It's frustrating because I'm sure the four national broadcast networks and top cable channels (eg ESPN, CNN, etc) are providing pristine source feeds at their head-end distribution points which look amazing. Is there even any meaningfully better-quality alternative these days? Maybe some over-the-top streaming provider of broadcast and cable channels who actually delivers 4k sources with guaranteed high-quality encoding and decent bitrates? If so, I'd cut the Comcast cord even if it costs more. It's not like Comcast is cheap but I also hate the idea of paying top dollar for such a substandard product simply because there are no better alternatives.

BTW: I'd be delighted to learn of any viable US-based content alternatives (e.g. streaming, direct satellite, etc). Back in the analog SDTV days I had a C-band sat dish, and the direct network feeds looked amazing in pure 6 MHz analog component compared to local cable and even local OTA broadcast.


Red is still a particularly hard color to accurately represent, just not as hard as it used to be. The bleeding and chroma crawl that were most visible in NTSC red have been replaced with, at best, half chroma resolution, and depending on how the viewer's decoder works, red edges may be especially harsh-looking.

It's definitely better than it used to be, though.


The better monitors can be reconfigured to use the DCI-P3 primary colors instead of the default Rec. 709 primary colors (a.k.a. sRGB primaries).

(sRGB combines the Rec. 709 primaries with a certain nonlinear transfer function, while Display P3 combines the DCI-P3 primaries with the sRGB nonlinear transfer function and with the PAL/SECAM white point (D65), which is also used by Rec. 709 and sRGB.)

With the DCI-P3 primaries, it is very noticeable that the red is much better, allowing the display of reddish colors that are simultaneously more saturated and brighter than what can be achieved with the Rec. 709 red.

While DCI-P3 also has a better green than Rec. 709, there the improvement is much less obvious than in the red area.


The monitor has a set of primaries that doesn't change but the monitor can treat R, G and B signals as if they are in a particular color space with certain primaries and do the best that it can to simulate the appearance specified in the signal.


For most monitors, as a user you cannot know which are the true colors of the pixels of the screen and this is completely irrelevant.

What matters is which are the colors that will be reproduced on the display when you send the digital codes corresponding to pure red, green and blue, through the DisplayPort or HDMI interfaces of the monitor.

All the good monitors have a menu for the selection of the color space that will be used by DisplayPort and HDMI, and the menu will typically present a choice between sRGB and Display P3 or DCI-P3. Even when in the menu it is written DCI-P3, what is meant is Display P3, i.e. the menu changes only the primaries, without changing the white or the nonlinear transfer function.

All monitors will process the digital codes corresponding to standard color spaces to generate the appropriate values needed to command their specific pixels in order to reproduce a color as close as possible to what is specified by the standard color space.

The cheapest monitors are able to display only a color space close to Rec. 709 a.k.a. sRGB, those of medium price are normally able to display a color space close to DCI-P3 and a few very expensive monitors and many expensive TV-sets, which use either quantum dots or OLED, are able to display a larger fraction of the Rec. 2020 color space (laser projectors can display the complete Rec. 2020 color space).

Even when a monitor can display bright and saturated reds, as long as it remains in the default configuration of using sRGB over DisplayPort and HDMI, you cannot command the monitor to display those colors. For that, you have to switch the color space used by DisplayPort and HDMI to a color space with a wider color gamut.

Some monitors, typically those that are advertised to support HDR, allow the use of the Rec. 2020 color space over DisplayPort and HDMI, but most such monitors cannot display the full Rec. 2020 color space, so the very saturated colors will either be clipped to maximum saturation or mapped to less saturated colors.


> All monitors will process the digital codes corresponding to standard color spaces to generate the appropriate values needed to command their specific pixels in order to reproduce a color as close as possible to what is specified by the standard color space.

This is overly optimistic. It has gotten better lately, but most monitors aren't calibrated as well as they could be. And not that long ago, the RGB signals were directly mapped to the monitor's colors.


> […] and would bleed like it just had its throat slit.

Thank you for your beautiful turn of phrase which has made my evening brighter.


You're wrong about multiple things here. I don't know if you read the replies; if so, write a reply and I'll clarify a few things. Cheers!


> In practice, samples meeting the WCAG 2.1 recommendations are harder to read than those with an “insufficient” contrast ratio.

Happy to see I’m not the only one. This has been bugging me for a long time: how did an “accessibility guidance” formula end up having the opposite effect? Why was that standardized?


It’s a little sad that the most widely used algorithms/guidance for contrast accessibility do not have peer-reviewed evidence behind them. This goes for 2.1 and the upcoming standard. For something so impactful, I wish the industry would help sponsor rigor here.


WCAG 2.x was neither peer reviewed nor empirically tested—true. However this is not the case for the candidate for WCAG 3, which is APCA. APCA has been in public beta testing for two and a half years, is the subject of ongoing empirical studies, and does already have journal published peer review, as well as independent peer reviews by PhDs, vision scientists, data scientists, and other technologists.

See a partial listing of reviews: https://git.apcacontrast.com/documentation/independent-revie...


It’s not true; this is a meme that started around 2020, and color suffers the more for it.

It’s funny, in a sad way, when you dig into it. The ‘scientific’ version is about 5 lightness points different from WCAG, on a scale of 100. That isn’t a coincidence. Put somewhat more directly: it sure is funny that these people supposedly made up contrast ratio out of thin air without evidence, yet the answer that will save us has basically the same numbers.


IIRC there is movement within W3C/WCAG to change the formula for judging contrast. Or maybe it was just being called for and there's been no official pickup yet.


It's mentioned in the article as WCAG 3's APCA algorithm. The most important improvement in APCA is an exponent term.
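
For the curious, a simplified sketch of how that plays out, transcribed from the published APCA 0.0.98G constants (my paraphrase in TypeScript, not the normative implementation):

    // Luminance with a 2.4 power curve and a soft clamp near black.
    function apcaY(hex: string): number {
      const n = parseInt(hex.slice(1), 16);
      const [r, g, b] = [n >> 16 & 255, n >> 8 & 255, n & 255].map(c => (c / 255) ** 2.4);
      let y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b;
      if (y < 0.022) y += (0.022 - y) ** 1.414;
      return y;
    }

    // The 0.56/0.57 and 0.65/0.62 powers are the exponent terms: different
    // curves for dark-on-light vs. light-on-dark text.
    function apcaLc(textHex: string, bgHex: string): number {
      const yTxt = apcaY(textHex), yBg = apcaY(bgHex);
      const c = yBg > yTxt
        ? (yBg ** 0.56 - yTxt ** 0.57) * 1.14
        : (yBg ** 0.65 - yTxt ** 0.62) * 1.14;
      return Math.abs(c) < 0.1 ? 0 : (c - Math.sign(c) * 0.027) * 100;
    }

    apcaLc("#000000", "#ffffff"); // ~106 for black on white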


The best article I've seen on this topic is https://www.handprint.com/HP/WCL/color1.html

Warning: super-long read, you will spend a day.

You'll be surprised, but I use some thoughts from this article in ClickHouse to color logs nicely: https://github.com/ClickHouse/ClickHouse/blob/master/base/ba...


I can't find anything specific on how these ANSI colors could be converted to RGB, other than that they are actually just color requests and are dependent on terminal configuration.

Could you share what RGB values you expect for each message type you've set in your code?


I had the pleasure of meeting Eugene during Figma's Config conference where he gave a ton of similar pro tips to designers. Really a kind dude who truly has a passion for design and accessibility.

One thing I'll give a big shout out to in this article is APCA, which will likely be a successor to WCAG 2's color contrast algorithm. We used it internally here at Figma for our own accessibility revamp for a final result that ended up much better than it would have otherwise. Eugene provides some great examples of when WCAG 2 fails, and we were continually running into those.

That really brings me to my main piece of advice here: color is really hard to get right. Any tool you have can help you along the way, but at some point you also have to trust your eye. At the end of the day, all of these tools are just providing some mathematical approximation for how your eye sees color. Your eye is the final source of truth, not the algorithms. If you're encountering areas where a tool or an algorithm isn't providing you the right result, switch it around or fall back to basics.


> We used it internally here at Figma for our own accessibility revamp for a final result that ended up much better than it would have otherwise.

Can you go into more detail here? Do you mean you use only APCA and ignore what the WCAG 2 color contrast algorithm flags as insufficient contrast?


We use a balance of both. All of our body text is WCAG 2 compliant - we wanted to ensure the primary usability of the app is accessible towards the current standard. Where we diverge is when combining saturated colors with desaturated colors for foregrounds/backgrounds, which WCAG has some trouble with. Some good examples of this are here: https://twitter.com/DanHollick/status/1417895151003865090

This was relevant for Figma's brand color, as prior to our accessibility pass it wasn't compliant when combined with white text for either WCAG 2 or APCA. We modified it slightly to #0D99FF, which was close enough to the original brand color that it didn't feel like a rebrand, while still allowing us a passing APCA color contrast score. Notably it still doesn't pass WCAG 2, but you'll see from this example the contrast is significant: https://image.non.io/15ed7558-5337-40fc-9e5b-2392087cc35b.we...
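
For reference, the WCAG 2.x ratio in question is easy to compute yourself; a sketch of the standard formula (not our internal code):

    // WCAG 2.x relative luminance and contrast ratio.
    function relLum(hex: string): number {
      const n = parseInt(hex.slice(1), 16);
      const [r, g, b] = [n >> 16 & 255, n >> 8 & 255, n & 255]
        .map(c => c / 255)
        .map(c => c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4);
      return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }

    function wcagRatio(a: string, b: string): number {
      const [hi, lo] = [relLum(a), relLum(b)].sort((x, y) => y - x);
      return (hi + 0.05) / (lo + 0.05);
    }

    wcagRatio("#0D99FF", "#FFFFFF"); // ~3.0:1, under the 4.5:1 AA bar for body text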

That's not to say though that APCA is a perfect solution. Interestingly, should APCA be adopted as the WCAG standard, Figma and Hacker News would both fail contrast guidelines. Part of APCA is a guideline for what font size/weight combinations are allowed for any color contrast level. Even with #000 and #fff as your colors, fonts under 11.5px are not allowed (Figma uses 11px for much of the UI). For many expert tools, 11px fonts are fairly standard due to the amount of UI you have to have on screen; other examples where this is common are tools such as Blender or CAD software. Hacker News uses a black 12px font for the main body text against a #f6f6ef background, which according to APCA requires a minimum 15px font at normal font weight.

This is a perfect example of my earlier point: you have to use a collection of tools to get to a correct solution, rather than treating a single tool or algorithm as a dogmatic source of truth. You can create poorly accessible solutions with WCAG and bad user experiences with APCA.


Thanks!

> All of our body text is WCAG 2 compliant - we wanted to ensure the primary usability of the app is accessible towards the current standard. Where we diverge is when combining saturated colors with desaturated colors for foregrounds/backgrounds, which WCAG has some trouble with.

Why not use APCA only though? Isn't it hard to keep track of where you're following WCAG 2 for some colors and APCA for others?

> Even with #000 and #fff as your colors, fonts under 11.5px are not allowed (Figma uses 11px for much of the UI). For many expert tools, 11px fonts are fairly standard due to the amount of UI you have to have on screen - other examples here where this is common are tools such as Blender or CAD software.

I've encountered this pain too. It feels like the only solution is for the user to configure their browser/OS to say what they find acceptable, so the UI can adapt to that, rather than the default UI for everyone having to conform to the most restrictive rules.

> Hackernews uses black 12px font for the main body text against a #f6f6ef background, which according to APCA requires a minimum 15px font at normal font weight.

Do you have any tricks for conforming to APCA when you need to keep track of font weight, font size, and background vs foreground color while designing/developing UIs? There's style guides for reference, auditing tools, and potential for automation (e.g. CSS variables that pick a contrasting color for you), but curious if you had some thoughts given it can be a lot to manage.

> This was relevant for our Figma's brand color, as prior to our accessibility pass it wasn't compliant when combined with white text for neither WCAG 2 nor APCA. We modified it slightly to be #0D99FF, which was close enough to the original brand color it didn't feel like a rebrand, while still allowing us a passing APCA color contrasts score.

I'm curious how WCAG managed to miss this one during standardisation given how common blue is for a brand color. I wonder how many brands this has impacted.


> Do you have any tricks for conforming to APCA when you need to keep track of font weight, font size, and background vs foreground color while designing/developing UIs?

I don't; for us, we set our goal to have a contrast of >60 regardless of font size/weight. Since we already weren't compliant with our 11px font sizes, we chose to use the contrast guidelines without the font size guidelines. Agree it's a lot to manage. We didn't find a great way to solve for this, which is why we mixed and matched WCAG, APCA, and general UX principles rather than relying on a single set of rules.


Hi @jjcm and @seanwilson

First of all thank you for these comments, I do proactively seek out comments like these as there are too few at the official APCA discussion forum https://github.com/Myndex/SAPC-APCA/discussions That said:

A px is not a device pixel; it is referenced to the canvas abstraction layer as the canonical CSS reference px. It represents 1.278 arc minutes of visual angle as subtended onto the retina, or 0.0213° 𝑽𝜽.

This is the case with a 96ppi monitor at 28" away.

Visual angle in arc minutes is what is used in research, and in fact what the Snellen eye chart is based around. 20/20 is based on a capital E that is 5 arc minutes high, where each line is 1 arcmin and each space between the three horizontal lines is 1 arcmin; from that we find a spatial frequency of 30 cycles per degree.

But that is for minimum acuity.

Minimum acuity is NOT best readability, which needs to be ~2.5 times (or more) the acuity size (the critical size). For a standard display at the reference distance, that means an x-height of 9.4px, which is 12 minutes of arc 𝑽𝜽.

Critical size and critical contrast have been cited and empirically tested for decades by eminent readability researchers: Whittaker, Lovie-Kitchin, Bailey, Legge, et alia.

The font sizes for APCA are based on this, and assuming a reference font like Helvetica with a 0.52 x-height ratio.
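
You can check the geometry yourself; a quick sketch (my arithmetic, not APCA code):

    // Visual angle of one CSS reference px: 96 ppi viewed from 28 inches.
    const pxInches = 1 / 96;
    const distanceInches = 28;
    const arcmin = 2 * Math.atan(pxInches / (2 * distanceInches)) * (180 / Math.PI) * 60;
    // arcmin ~ 1.279, i.e. ~0.0213 degrees; and 9.4 px * 1.279 ~ 12 arcmin of x-height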

ALSO, if you want a "catch all" contrast value, Lc 75 is more appropriate IMO.

NOT SET IN STONE YET

This kind of discussion is good at the forum, so that it can be tracked and considered in the conversations about guidelines.

ALSO

If you want an easy way to use APCA and be fully backwards compatible with the old WCAG 2.x, then try BridgePCA at https://bridgepca.com


> I don't, as for us we set our goal to have a contrast of >60 regardless of font size/weight.

That's my general strategy too e.g. I'd rather pick a single blue that always works for large and small text, rather than having to keep track of and workaround which shade to use for different sizes.

I do wonder if they'll make some compromises to APCA in this area to make it easier to apply before it's standardised, as there's much more to track now compared to WCAG.


Hi @seanwilson, standards work is nothing BUT compromises it seems.

The reason for the public beta is to find concerns like these.

One of the unique aspects of APCA is the many use-case levels that improve design flexibility by focusing contrast values where they are actually needed.

The draft use cases for text are: https://readtech.org/ARC/tests/visual-readability-contrast/?...


I'm red-green color blind, and Figma's out of the box accessibility is good enough that I've never had to actually consider accessibility tweaks. Good job!


Friendly plug for HCT. It's a color space I built to enable Material You; it weds the lightness measure mentioned here to the latest and greatest color science space (LAB/LCH is from 1976!).

It makes design more intuitive: designers only need to know that a difference of 40 in 'T' ensures that the WCAG standard for buttons is met, and a difference of 50 meets the standard for text.
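
As a rough illustration, assuming the @material/material-color-utilities TypeScript API (check the repo for the authoritative surface):

    import { Hct, hexFromArgb } from "@material/material-color-utilities";

    // Tones 90 and 40 differ by 50, so per the rule of thumb above this
    // background/foreground pairing should meet the standard for text.
    const bg = Hct.from(265, 16, 90); // hue, chroma, tone
    const fg = Hct.from(265, 48, 40);
    console.log(hexFromArgb(bg.toInt()), hexFromArgb(fg.toInt()));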


Cool! Any thoughts on how HCT compares with https://www.hsluv.org/comparison/? (Similar to HCT, the difference in "L" here makes picking contrasting colors more straightforward.)

On the same topic, any thoughts on https://www.myndex.com/APCA/ and approaches if this becomes a standard? The color contrast value between a pair of colors depends on which color is the foreground and which is the background, so just comparing the difference in "T" won't be enough now?


re: HSLuv, I don't know much about it, but I've heard it mentioned several times, so it must be helpful. This comment[1] goes into more detail, but TL;DR: the core to me seems to be having L* plus whatever H and S measurement works for you, so I'd say HSLuv is spot on.

I love APCA. Myndex (Andrew Somers, APCA author) was an absolute inspiration to me for getting interested in color; I wouldn't have known 1% of what I do, or have done 1% of it, without his deep dives showing how to work in this space.

It has an interesting effect on the community in that the general language around it is read too strongly: it is an important and crucial advance, getting science-backed data on exactly the right contrast in multiple scenarios is _absurdly important_ in a world of Ambient Computing™.

The mistake that's made is reading into this too heavily as _the previous spec or WCAG aren't based on data at all_.

Yes, the calculation is based on absurd stuff -- a 100-nit monitor; you can't track down a source for every bit of it.

But there's a _whole separate field_ that spatial frequency comes from and it is _very_ well known and understood.

Most importantly: the APCA predictions for contrast were only about 5-7 L* different last time I checked. This is important; that's a _lot_, but it also isn't a lot on a scale of 100. What we have today isn't completely, utterly, alienly different and broken.

Comparing the difference in T will be enough as long as the luminance measure APCA relies on is monotonic with respect to it (HSL's lightness is hilarious: it's just the average of the max and min of R, G, B).

[1] https://news.ycombinator.com/item?id=37314700


Are there any good resources to learn how to use it?

Let's say that I knew that I needed 6 colors in an application: a red, a yellow, a blue, an orange, a green, and a purple — that is, the 3 primary colors and the 3 secondary colors. Finally, say I didn't care much about which colors I arrived at, but that I would like to try to equalize for brightness (maybe saturation, too?) while being somewhat identifiable as those 6 colors.

I'm sure there's no exact answer, since yellow is very bright and blue is very dark, so I'd probably have to arrive at something approximate.

How would I do that? Are there tools or tutorials for learning such a thing?


That's the sales pitch for HCT: you get to claim you can normalize along C to get similar colorfulness, or normalize along T to get similar lightness (with the bonus of T getting you WCAG/a11y compatibility). And why is yours right? Because you used the latest and greatest color science for H (hue) and C (chroma/saturation), and great color science for T that matches WCAG.

This isn't quite as helpful as it might sound, i.e. it can't settle all debates; design then needs to become playing within the system. For example, let's say you land on wanting mostly pastels. You could settle on lightness 90 and get nice yellows and teals, but... no red!? It turns out light red is pink. And then on top of it, the yellows' and teals' colorfulness can get crazy high, let's say 80, but red can only get to 16, a breathy pink.

This can be extremely aggravating, e.g. yellow. It just doesn't contrast with white; there's absolutely no way to get any a11y number to tell you that a nice bright yellow can be a button in light mode. But the system is empowering in that you know you can't do any better on the implementation, and you can trust the numbers.
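
To the grandparent's six-color question, the workflow is exactly that: fix T (and ideally C), sweep H, and accept what the gamut gives you. A sketch with the same package as above (the hue angles are eyeballed, not canonical):

    import { Hct, hexFromArgb } from "@material/material-color-utilities";

    // Six hues around the wheel at one shared tone; HCT quietly pulls
    // chroma in wherever a hue can't reach the requested value.
    const hues = [25, 60, 110, 150, 250, 310];
    const palette = hues.map(h => hexFromArgb(Hct.from(h, 60, 70).toInt()));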


That sounds like the answer to my question is: that's what HCT is meant for, but no, there aren't any such resources, because it's supposed to be as simple as tweaking the numbers. Am I reading that right?

Also, I'm getting the impression that I'd have to be working in the Android ecosystem, maybe?


It's available in TypeScript/C++/Java/Dart, and I believe Obj-C either already or soon.


Is there a public specification somewhere? The only thing I could find is the code in the "material-color-utilities" repo on GitHub.

Looking at the code, it seems the computations are much more involved than OkLab, especially in the HCT -> RGB direction?


100%. I used to joke that the reason for this is that it's the "nobody got fired for buying IBM" color system.

#1, it takes the most modern widely accepted space in color science, CAM16, for hue and colorfulness. CAM16 is incredibly complex by comparison, about 500 lines of code. #2, it maps that to L*.

#1 is because nobody gets fired for basing Google's color system on CAM16. #2 is for WCAG.

There were continually interesting conversations about what exactly the name of the space was and whether it represented a new color space, which precluded more formalization of it; the code ended up being the spec, as it were. I did get a blog post out: https://material.io/blog/science-of-color-design


In addition to missing Oklab/OkLCH, the article is also wrong in claiming web/CSS supports only sRGB; the CSS color() function supports many colorspaces.


Author here. The article was published 2 years ago, and when I started working on the tool the spec for OkLCH hadn't even come out yet (it appeared in late Dec 2020). Today I'd choose OkLCH over LCH as it solves a few of LCH's problems.


Note that CSS Color Module Level 4 is still at the Candidate Recommendation Draft stage, and "supports" here is more accurately stated as "supports, except for Microsoft Edge (Chromium) and Pale Moon (Goanna)":

https://test.csswg.org/harness/results/css-color-4_dev/group...


https://caniuse.com/css-lch-lab is probably a better resource for support, and it notes that the main layout engines all supported it earlier this year, and gives an estimated global support rate of ~85%.


sRGB/ProPhoto/DCI-P3 and HSL/HSLuv/LCh/Lab/Oklab are two distinct sets of functionality. The former defines the colorspaces (such as sRGB); the latter defines colors within a given colorspace. To see a visual demo of this, visit https://oklch.com/#70,0.1,165,100 and enable Show P3 and Show Rec2020; two additional thin white lines will be added showing the colorspace truncation points for the OKLCH color (70% 0.1 165).

https://caniuse.com/css-color-function tracks support for the colorspaces, complementing the above link for Lch/Lab, and showing essentially the same current data: All desktop and the two top mobile browsers now support it.

I'm really glad to see this. Thank you for sharing. That makes me much more optimistic about what will come of this.

(Note that all browsers are currently failing LCH and OKLCH tests 9 and 10, and Firefox failing 15% of the CSS Color v4 parsing tests, at https://wpt.fyi/results/css/css-color?label=experimental&lab... — but this is still a page full of great success for CSS Color v4! So happy.)


I've personally worked with Oklab and several other Lab-type spaces. Oklab is fantastic for boiling things down to simple computations!


Here's a color model built on OkLCH that, instead of lightness, uses APCA's (WCAG 3) contrast ratio: https://github.com/antiflasher/apcach


oklab might be an even better alternative to LCh:

https://bottosson.github.io/posts/oklab/
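
The whole transform fits on a screen. Here's the linear-sRGB-to-Oklab direction in TypeScript, transcribed from the matrices in that post (check the post for the authoritative values):

    type Oklab = { L: number; a: number; b: number };

    function linearSrgbToOklab(r: number, g: number, b: number): Oklab {
      // linear sRGB -> approximate cone responses, then a cube root
      const l = Math.cbrt(0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b);
      const m = Math.cbrt(0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b);
      const s = Math.cbrt(0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b);
      return {
        L: 0.2104542553 * l + 0.7936177850 * m - 0.0040720468 * s,
        a: 1.9779984951 * l - 2.4285922050 * m + 0.4505937099 * s,
        b: 0.0259040371 * l + 0.7827717662 * m - 0.8086757660 * s,
      };
    }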


It has sort of outrun its initial premise: its lightness channel is inverted compared to every other color space, and its L has nothing to do with LCH or LAB. Its success brought more intellectual poverty, unfortunately. The initial blog post needs a laundry list of corrections; it goes off the rails when it starts making claims about accuracy based on gradients using CAM16-UCS.


Have the errata been collected anywhere?


No. It's a really interesting situation: it doesn't directly claim to have lightness like Lab/LCH, or that there's something fundamentally good about having a blue-yellow gradient. But the L thing is crucial for WCAG (c.f. the article) and for designers, and the blue/yellow stuff is interesting.

- It's _not bad_: it's better than HSL in that its lightness is correlated to "true lightness", and it's __much__ simpler to implement than a more complex thing like HCT's CAM16 x L*. That's about 500 LOC, versus 15 I'd guess. And substantially faster too, a few matrix multiplies in each direction.

- Oklab L is not L*: the problem for engineers and designers is that it has no relation to Lab L*/XYZ Y, so no correlation to WCAG.

The problem on the designers' end: it's not a visually linear ramp from dark to light. This usually goes unnoticed in a cross-disciplinary scenario unless you're really going deep into color; designers are used to working under engineering constraints.

- Color space accuracy / blue and yellow gradient:

It's a warning sign, not a positive sign, if a color space _isn't_ gray in the middle of a blue-to-yellow gradient. They're opposites on the color wheel and should cancel out to nothing. Better to instead note the requirement that you don't want gray gradients, and to travel "around" the colorspace instead of "through" it, i.e. rotate hue to make the gradient, traveling along the surface of the colorspace rather than going into it and reducing colorfulness.


> Oklab L is not L: The problem for engineers and designers: no relation to LAB L*/XYZ Y

That should not be considered a bug by itself. CAM16-UCS lightness J also does not depend only on XYZ Y.

> no correlation to WCAG

Y does not necessarily have to be the standard used by WCAG (see CAM argument). And a stick that's a little skewed is still very well correlated with an upright stick, statistically speaking.

> The problem on the designers end: it's not a linear ramp in dark to light visually.

Now that is the actual issue which has nothing to do with which variables L depends on. The L_r (lightness with reference) derived from L is designed to deal with that: https://bottosson.github.io/posts/colorpicker/


> That should not be considered a bug by itself. CAM16-UCS lightness J also does not depend only on XYZ Y.

I avoided the word bug, I believe. Let's say someone else claims it is. I would advise it is not a bug, but the absence of a feature. And I'd struggle to come up with a good reason to switch away from HSL without that feature. If you're just picking colors, HSL is fine. The process advantage and magic of a built-in "oh, I know this color passes a11y with that one, without guessing" is ___huge___. At least at a BigCo, you can't skip a11y.

> Y does not necessarily have to be the standard used by WCAG (see CAM argument).

Sure, but it is used by WCAG and that's important: see above. Also, there's a reason Y has survived since 1900, and it's not because it's an arbitrary decision.

> And a stick that's a little skewed is still very well correlated with an upright stick, statistically speaking.

I assume that's a reference to the shape of Oklab's L, but I'm not sure if you meant that, so apologies if this sounds aggressive in case it's not what you meant / I'm repeating another comment: Oklab's L is not Lab's L*, and it's not close, and when you plot Oklab's L against any other color space's, it's very oddly shaped. This isn't a problem for an engineer or color scientist, but it is a problem for a designer.

> Now that is the actual issue which has nothing to do with which variables L depends on.

I agree that if we say "well, WCAG could use any lightness measure" and decide we'll switch away from HSL to an arbitrary color space because the science on spatial frequency might be rewritten, or because WCAG may divorce itself from it completely, then we can completely write off a11y as a non-goal.


> Also, there's a reason Y has survived since 1900, and it's not because it's an arbitrary decision.

1931. And Y is known to be problematic since Judd (1951) and Vos (1978), with the latest approach being the Stockman & Sharpe 2005 "fundamental" redefinition. This error has real bearing on how narrow-spectrum display works.[1]

[1]: https://sensing.konicaminolta.asia/wp-content/uploads/2018/0...

> I assume it's a reference to the shape of Oklab's L... not close

The "a little skewed" refers to the shape of the L (or CAM "J") axis projected into the 3D XYZ space, not how the values are scaled in 1D (which I have addressed separately as L_r). They will not perfectly match the Y axis and appear bent, even skewed, because they don't just mathematically depend on Y. But they will be quite close and score a good (> 0.90, I guess) correlation coefficient over a set of color samples because they are trying to model the same aspect of human vision. That is still good correlation, and given the limited nature of gamuts you'd still be able to derive some sort of "this much difference means at least this much ratio".


This playground provides an incredible visualization of 16 color spaces, including LCh and OKLAB: https://color-playground.ardov.me/spaces-3d


That's amazing and cool and useful. Thank you!


While it's not perfect, I've been using oklab and oklch (depending on the circumstances) for color interpolation in some scenarios and the results are really visually pleasing compared to interpolating in RGB or HSL.


I've been happily using Oklab on my site and it is way easier on my eyes than the previous color palette, which used CIELAB.


If the CIELAB numbers are based on perceptual changes, how do they interact with accessibility? Do we need to worry that these perceptual formulae might be based on standard vision? Could they be different for different types of color blindness, or is that not a thing? (FWIW, I try to make my charts as accessible as possible, but I'm totally uneducated in color stuff, so I'm limited to just following advice and can't derive anything. There's every possibility that this is a dumb question!)


Worked on this. Contrast does change nonlinearly. You can apply simulations quite easily; the Machado et al. simulations are just a matrix transform.
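
A sketch of what that looks like, using the commonly quoted Machado et al. protanopia (severity 1.0) matrix; the transform is applied in linear RGB, and the values here are from the paper's precomputed tables, so verify them before relying on this:

    const PROTANOPIA = [
      [0.152286, 1.052583, -0.204868],
      [0.114503, 0.786281, 0.099216],
      [-0.003882, -0.048116, 1.051998],
    ];

    // Multiply a linear-RGB triple by the 3x3 simulation matrix.
    function simulateProtanopia([r, g, b]: [number, number, number]) {
      return PROTANOPIA.map(row => row[0] * r + row[1] * g + row[2] * b);
    }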


This article puts the description of it at the bottom, but APCA Contrast—which is the readable contrast test in the WCAG 3 Draft Standard—is much more fair to some colors than WCAG 2.1's Contrast Ratio was.

Good reading for the color theory behind perceptual contrast: https://www.smashingmagazine.com/2022/09/realities-myths-con...

And a distilled description of APCA: https://typefully.com/u/DanHollick/t/sle13GMW2Brp


APCA works great, but it has a very weird and restrictive license. Before you consider it for anything you should probably take a detailed look at that and consider if it's really usable for you.


The current license kinda sucks but there is some hope that a more permissive license will be in place by the end of the year.

https://twitter.com/MyndexResearch/status/169550147293769333...


Hey @zauguin and @kreskin,

I am sorry about the licensing issue, it is only temporary during the public beta. And I am providing some exceptions, if you need, send an email to legal@myndex.com

We just can't do an MIT-type license right now, and there's really no generic one that suits what we need, so we're working on such a license: one that is reasonably permissive yet prevents some of the issues we've run into, e.g. bad actors incorrectly modifying it, or reverse engineering it and doing so wrongly, or taking it to the cantina at Mos Eisley...


So, where's the code? I want an algorithm that I can use in any software, not a web form.


This is the library the author references. I've used it myself and it works great:

https://github.com/gka/chroma.js
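
A couple of the documented chroma.js calls that map onto this thread (a sketch, not the article's exact code):

    import chroma from "chroma-js";

    chroma.contrast("#0D99FF", "white"); // WCAG 2.x contrast ratio, ~3.0
    chroma.scale(["#fff5eb", "#7f2704"]) // perceptual ramp for a palette
      .mode("lab")
      .colors(6);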


It would be nice if this were packaged for R... if only I had time. I really dislike a lot of R color palettes.


In this thread, the author suggests that they'd use Oklab[1] if they were to write the article today. Oklab was introduced in [2], and the code in that article is really easy to use.

[1]: https://news.ycombinator.com/item?id=37310534

[2]: https://bottosson.github.io/posts/oklab/


While looking into this issue I came across an article about HSY[1] from a Krita (image editor) dev; it's apparently part of Krita's advanced color selector and addresses the same issue by mapping the perceived luminosity of colors.

[1] https://wolthera.info/2014/07/hsi-and-hsy-for-kritas-advance...


(2021) should be added to the title; many commenters seem oblivious to that fact.


If you're interested in the topic, a nice article about OkLCH was just published in Smashing Magazine:

https://www.smashingmagazine.com/2023/08/oklch-color-spaces-...


So, having just read this about Oklch, it strikes me that I [as a designer] would want to alter the chroma according to the ability of the display -- which they show the possibility of at the end, using @media:

    @media (color-gamut: p3) {
      .some-class {
        color: oklch(70.9% 0.195 47.025);
      }
    }
Surely this is the only real option, otherwise you're risking distributing colours across a gamut that the display simply won't render? I'd assume that's going to result in non-linear fades, poor colour distinction, and such like?


Does anyone know offhand how we test whether tetrachromats are able to truly see more colours than the average person? Not just the X-Rite test or whatever, but maybe something more advanced that goes in the kind of direction these more advanced colour space profiles seem to cover? I pass that test really easily, but I really doubt I have more cones in my eyes or any of that.


My wife is a tetrachromat. She sees way more greens than I do. When we’re on a hike, she always gushes over all the “shimmery” greens and different shades of grasses and flowers. All I see is green.

We were at an art museum once and there was a painting that was just a bunch of rectangles of different colors. She thought it was the most beautiful thing ever. So I got her a small print of it and she was disappointed. I'm sure the print was a CMYK printing of an RGB picture, so it got downsampled back to three channels. It looked the same to me as the original.

God help us when we go pick out paint for a room in the house or try to color match a fabric. She has visceral reactions to colors that just don't make any sense to me.


My guess here, to help it make sense: perhaps think in terms of the luminous excitation of "blacklight paint"... If she's having strong reactions, it's probably to metamers under the right lighting in a real-world environment, not constrained by the gamut of a defined colorspace like some flavor of CMYK or RGB...

Here's a real question for her on the subject: how does she see a blacklight? I expect a true tetrachromat to see it as nearly white. But I also wonder if she's sensing a fluorescing of sufficiently short wavelength that standard vision cannot perceive it.

The implants I have in each eye are different technologies. The right eye has no UV filter, the IOL in the left eye does have a UV filter. When I look at a blacklight with my right eye, it's bright, almost white in appearance.

But looking at the blacklight with my left eye, it looks like what most see, as a very dark purple with almost no visible light.

A friend of mine is an artist who has been experimenting with extreme metamerism, using pigments that radically change perceived color under different light sources, including of course, blacklight.

Since you mention hiking, I wonder if some of the plant life you and your wife encounter has any fluorescent properties, or perhaps interacts with some fungus that does... A portable UV light at night might be interesting...


That's really interesting, thanks for that insight! I looked into it, and it seems some people are saying the only positive way to test whether you're a tetrachromat is DNA testing, while others say there are vision tests that can be done. Which did your wife take, out of curiosity? [1] It kind of sounds like what you describe could be used as a vision test.

[1] https://www.quora.com/How-can-I-really-test-if-I-am-tetrachr...


She was at the eye doctor and just said something passing about colors and they did an ad hoc test there. She never did anything like a DNA test because it really wouldn't change her life any. It's just an interesting fact that she sees more colors.

My theory that I've pieced together: the Y chromosome codes for 1 piece of the information and the X codes for 2. I'm sure it's not linear, but the quantum mechanic in me says that the color receptors are a linear combination of the three basis functions encoded in the DNA. For women the system is overdetermined, and I bet the green is really a degenerate solution that _usually_ overlaps. But in some cases the green is extra wide, or even two distinct peaks. It can also explain why men are more likely to be color blind: if one of them drops out, they only have two channels remaining. Women tend to have a backup.


Oh wow you're actually that famous physicist, what an honour! Been meaning to find a copy of your textbook. Any chance any of this has some influence on the idea of technicolor theories?


I'm so excited to see more work in the perceptually uniform color-choice space. I first saw this discussed by Stripe in 2019, but the work with "accessible palettes" is a revolution!

https://stripe.com/blog/accessible-color-systems


What about HSLuv?

https://www.hsluv.org/


HSLuv starts with a cylindrical representation of CIELUV which also doesn't "fill the cylinder" in the chroma axis (this might be what the article briefly refers to as "HCL or LCh(uv)"), and then stretches its chroma so that you always get a 0-100% chroma range regardless of hue and lightness. This might be convenient because you can't accidentally represent impossible colors, but you obviously lose perceptual chroma uniformity.


I created a game editor some years ago that had several HSL alternatives, including one contrast-corrected bar and two different light/shadow-corrected bars. I used it mostly the other way around though: creating suggestive backgrounds with many colors but low contrast between them.


I'm not a trained designer, and for that reason I would never attempt to design a color system with six different colors, and then shades / tones within each. Is that not overkill and unnecessary?

Mind you, I appreciate the suggestion to avoid Tool X (i.e., in this case HSL), but I can't help but believe they did themselves a disservice by choosing a six-color color system to begin with. In my eyes, mind, and heart, accessibility and simplicity go hand in hand.

Fact: There's not a user in the world who wakes up and thinks "My day will be made if and only if I visit a site with an overly complicated color system."


> I'm not a trained designer, and for that reason I would never attempt to design a color system with six different colors, and then shades / tones within each. Is that not overkill and unnecessary?

At a minimum, most web UIs grow to need:

- Several greys (borders, headings, body text, boxes), a primary/accent color (to show what's clickable), green (to show success), red (for danger) and orange/yellow (for warning).

- If you then want to show success/danger/warning alerts/badges/icons/buttons, you'll usually want extra shades of each for the borders/background/headings/text.

- For buttons, you'll usually want shades of at least your primary color for the background of each normal/hover/clicked/disabled state too.

- At that stage, you may as well have shades/tones ready for the other colors in case you need them in the future so you don't have to keep coming back to it. You could get by for a while by generating new color variants from existing colors with shortcuts like using transparency or contrast filters, but the feel/branding of these probably won't be ideal compared to more designed palettes.

- From a light theme, you'll need to tweak the colors to get a dark theme, usually reducing saturation as the colors become too bright on a dark background.

You can get pretty far with a restricted palette though e.g. https://design-system.service.gov.uk/styles/colour/ and https://getbootstrap.com/docs/4.0/utilities/colors/. Or use an existing large palette like https://www.ibm.com/design/language/color/, https://www.radix-ui.com/colors or https://designsystem.digital.gov/design-tokens/color/system-... (these ones have nice rules about which colors contrast with others).


Yes, but do they grow because they can? Or do they grow because they should?

Put another way, how many users are going to understand ALL those shades and subtle signals in a don't-make-me-think (i.e., Steve Krug) sort of way?

And how accessible are all those (too often) lower contrast combos?

I appreciate the finer points of your answer but it still feels to me like designers are imposing their form-over-function will on the users, and shouldn't we be past that at this point?

That is, ultimately, was HSL The Problem? Or was it more guilt by association: being in the wrong place, used in the wrong way, at the wrong time?


> Yes, but do they grow because they can? Or do they grow because they should?

So see Bootstrap's alerts for example (not known for being overly flashy):

https://getbootstrap.com/docs/5.0/components/alerts/

They've got primary, secondary, warning, danger and info alerts. Each one has a unique background, border and text color, and the text contrast meets accessibility standards. So that's 15 colors just there.

Alerts like this are super common for any UI with forms. You could simplify each alert to use only one or two colors (which limits how you can make them look), but do you think this example is overdesigned as is?

> Put another way, how many users are going to understand ALL those shades and subtle signals in a don't-make-me-think (i.e., Steve Krug) sort of way?

You'd assume the user can tell the danger color from info, but the different shades of the danger color wouldn't have a different meaning except for emphasis and to help with the information hierarchy. Even if your UI was greyscale, you'd want different shades to help here too.

Can you point to a UI you like? How many colors and shades does it have?


My understanding is color alone is not enough as a signal for alerts. There are enough people who have issues with color differentiation that color isn't enough. So if we presume this to be true - and it is :) - well, now we've added all these colors to "the brand" *and* also have too much confidence in what they can accomplish for all users.

Look. I'm not saying there's an absolute right or wrong. But I do believe there are enough myths and false gods kicking around that someone needs to stand up and say, "Wait a minute..." I'm that guy today. I'm that person.


> My understanding is color alone is not enough as a signal for alerts. There are enough people who have issues with color differentiation that color isn't enough.

Yes, it's standard practice that you don't rely only on color for information. But for people that can see those color differences (about 90% of the population don't have color blindness) it's a very worthwhile improvement so isn't it overkill to ignore these people? And what's the problem with tweaking the shades so they match the brand a little better?

What are you advocating as an alternative? How many colors and shades?


If you want a CSS color system that easily creates shades/tints, use HWB (Hue-Whiteness-Blackness). It directly maps to simple artist sensibilities and is far less confusing than RGB/HSL and the new LCH/OKLCH.

https://developer.mozilla.org/en-US/docs/Web/CSS/color_value...
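
For a sense of how simple the model is, a sketch of the hwb()-to-RGB mapping as CSS Color 4 describes it (my transcription, with h in degrees and w/b as fractions):

    function hwbToRgb(h: number, w: number, blk: number): [number, number, number] {
      if (w + blk >= 1) {
        const gray = w / (w + blk); // whiteness and blackness saturate to gray
        return [gray, gray, gray];
      }
      // pure hue (HSL with s=100%, l=50%), then mix toward white and black
      const f = (n: number) => {
        const k = (n + h / 30) % 12;
        const pure = 0.5 - 0.5 * Math.max(-1, Math.min(k - 3, 9 - k, 1));
        return pure * (1 - w - blk) + w;
      };
      return [f(0), f(8), f(4)];
    }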


You should have a look at https://www.hsluv.org/comparison/. It's simple to use like RGB/HSL/HWB, but unlike them makes it easier to find accessible colors as the perceived lightness of a color only changes when you change the L param.


Once you have a color system, however, it can be very easy to work within it and maintain the visual "brand promise".


I understand and appreciate brand, identity, marketing, and so on. And based on that experience, I can say with confidence: if your brand needs six colors and shades/tones within each to make a "promise", you need to do some rethinking and redesigning. No "promise" that complicated is going to stick.


You began by saying you aren't a trained designer, and ended by telling most of the industry it is doing it wrong. You've come a long way in a short time, congratulations!


Yes and no, but mostly no :)

As a user of their "doing it right" - and feeling under-satisfied too often - maybe they should be listening more, and imposing their will on the rest of us less? Or do I roll over and play brain dead because I'm told to do so?

I happen to believe less is more. And I've seen that work well / better often enough. If mo' mo' mo' is so great, why am I not seeing it and feeling it?


The very conventional search form for the site I work on has six obvious colors, including the white background.

I am looking at a picture of an anime character (Kizuna Ai) on my wall and it has two shades of five colors plus one shade of blue used in the background for a total of 11 shades.


It's more like: You're going to need six simple colors eventually, and when you do you want versions that work with your brand colors.


I love websites getting uglier and uglier due to forced "accessibility" colors. How about letting me pick the color palette? I want my color palettes to be like a 90s RPG: fully radioactive colors. I can see better if colors are brighter; I don't want dull/boring colors. I literally turn off Darcula and switch to high-contrast terminal colors with bright green in my IDE for a reason.


> I can see better if […]

This is exactly the point of accessibility colors. Making things easier to see. There's a whole discussion on measurement of it in the article. It's just not a solved problem yet.


Or... people could start giving users the option to pick the colors they want for websites? Why force colors you like onto others?


I use HSL to set the color of my keyboard LEDs during the day. For that purpose it's really great, because it gives nice bright colors, contrary to CIELAB, which gives many dull colors (as visible in the blog post).


This is only applicable against a white background. Do the same rules apply for a black background or other colors?


As this blog and the app demonstrate, we haven't even progressed past inscrutable hex codes.


Oof, what a great article. Fucking saved.


A truly correct color system would precisely describe the spectrum of whatever color you're communicating, interpreting the mangling eyeballs do to it left as an exercise to the reader.


A spectrum describes the physical properties of light, color describes the perception. Colorspaces are about quantifying the perception of light as observed by most human beings.


You run into interesting problems of metamerism

https://en.wikipedia.org/wiki/Metamerism_(color)

particularly metameric failure. For instance, there are those RGB lights like Philips Hue and LIFX, which would render colors in the environment better if their primaries were wideband, but would work better at rendering strongly saturated colors (like a display) if the R, G and B were laser-like monochromats.

There was this system for stereo films

https://en.wikipedia.org/wiki/Dolby_3D

that exploited metamerism by using 6 monochromats and glasses with narrowband filters that route 3 of them to each eye. These were very high performance but were crazy expensive (they needed theft protection) and got trashed in the marketplace by Lipton's RealD system based on circular polarization, even though RealD doesn't perform quite as well.


This band-filtering type of 3D glasses is used today in a lot of theme park rides, like Avatar Flight of Passage for example.

https://variety.com/2012/film/news/transformers-ride-pushes-...


These were withdrawn from the cinema market in 2018, according to Dolby:

https://professional.dolby.com/product/dolby-cinema-imaging-...

but some theme park rides might still be using it.


In my experience, many theme park rides seem to use them.

I have looked at the glasses for rides like Transformers (in my link), Avatar, and Harry Potter Gringotts, and none of the lenses were polarized in my quick tests.


That would be too expensive, because the spectrum would need to be divided into many more channels than the 3 that are enough when they are matched to the average human vision.

High-resolution spectra (like those needed for many scientific applications) can be recorded only for images with few pixels, or for images with many pixels but with long delays between successive images (because they are obtained by mechanical scanning of the pixels), so you must accept a trade-off between spectral resolution and either spatial resolution or temporal resolution.


If you want to do physics, you can do physics.

That's not what color in this context actually is though.



