Web color is still broken (webcolorisstillbroken.com)
643 points by Aissen on April 21, 2022 | hide | past | favorite | 200 comments



Having worked on this in Skia (and so Chrome, Android) for a good while, I can vouch this is an issue browser/phone developers would have liked to fix a long time ago, but have kept by default for compatibility with old expectations. A lot (~all) of web/app design was and still is done with this broken interpolation in mind, which then all breaks when math is done in a way that more accurately models light.

It’s a bit of a chicken and egg problem that we can really only fix by allowing content to explicitly opt-in to linear working color profiles. And sadly that’s not going to solve everything either: while a linear profile is good for color interpolation, sRGB encoding is ideal for brightness interpolation. You’re stuck choosing between ugly red-green interpolation and nice white-black (sRGB) or between nice red-green and ugly white-black (linear). There are some more complex color spaces that do try to be the best of everything here, but I’m not aware of any browser or OS providing support for working with these perceptual color spaces.
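A quick way to see the tradeoff is to compare midpoints numerically (a sketch, not how any browser does it; standard sRGB transfer functions, helper names mine):

```python
# Midpoint of a red-to-green gradient, computed two ways.

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)

# Naive interpolation directly on the encoded sRGB values:
srgb_mid = tuple((a + b) / 2 for a, b in zip(red, green))

# Interpolation on linear light, re-encoded for display:
lin_mid = tuple(linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
                for a, b in zip(red, green))

print(srgb_mid)  # (0.5, 0.5, 0.0) -- the dark, muddy midpoint
print(lin_mid)   # ~(0.735, 0.735, 0.0) -- noticeably brighter
```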

A similar problem comes up when trying to manage color gamut on screens and systems that are intentionally blown out to look vivid, vibrant, shiny… looking at you Samsung phones. Any properly color-managed image there looks incredibly dull compared to the bright defaults you’ve grown accustomed to. If we take 0xff red to mean “full sRGB red”, that’s much less bright these days than if we just assume it means “as red as the screen will go.” It’s much like how TVs are (hopefully were?) set in the store in bright shiny mode to induce you to buy them, not set to reproduce colors accurately.


> There are some more complex color spaces that do try to be the best of everything here, but I’m not aware of any browser or OS providing support for working with these perceptual color spaces.

OKLab and OKLCH coming soon to CSS4 near you:

https://www.w3.org/TR/css-color-4/#resolving-oklab-oklch-val...


It's already shipping in Safari 15.4!

Note this is a part of what the "Color Spaces and Functions" area of Interop 2022 (https://webkit.org/blog/12288/working-together-on-interop-20...) contains. See https://wpt.fyi/interop-2022?feature=interop-2022-color for the current status.


Oh that’s amazing! OKLab looks really good to me!



OKLab is great. AFAIK even Photoshop uses OKLab to interpolate gradients.



well, that was fast :)


> A similar problem comes up when trying to manage color gamut on screens and systems that are intentionally blown out to look vivid, vibrant, shiny… looking at you Samsung phones.

This is a problem on a lot of HDR-capable consumer monitors, even expensive factory-calibrated ones from Dell. They often lack any form of sRGB clamp, resulting in exactly this problem when viewing an sRGB signal. AMD has a setting in their GPU driver control panel that can do this on the GPU without needing the user to provide an ICC profile. The Nvidia driver has a hidden API to do this, requiring third-party software to take advantage of it.[1]

TVs are a little better in this regard. My Samsung TV has an "Auto" option for color gamut that correctly identifies DCI-P3 and sRGB signals and clamps them accordingly. But the default setting was "native" which made sRGB content look really bad.

1: https://github.com/ledoge/novideo_srgb


> They often lack any form of sRGB clamp

And the ones that don't almost universally lock you out of all other color settings when enabling it, often even including brightness. It's ridiculous.

Also, thanks for the shoutout :)


> A similar problem comes up when trying to manage color gamut on screens and systems that are intentionally blown out to look vivid, vibrant, shiny… looking at you Samsung phones. Any properly color-managed image there looks incredibly dull compared to the bright defaults you’ve grown accustomed to. If we take 0xff red to mean “full sRGB red”, that’s much less bright these days than if we just assume it means “as red as the screen will go.” It’s much like how TVs are (hopefully were?) set in the store in bright shiny mode to induce you to buy them, not set to reproduce colors accurately.

I remember when the Pixel 2 came out, there was a minor kerfuffle over it defaulting to sRGB. It got overshadowed a bit by some of the more severe issues people ran into with the early batch, but it apparently still got enough of a backlash that they patched it shortly after to a compromise between what I think it called "natural" and "vivid". At the time, I had never paid much attention to color profiles, so my roommate suggested I enable sRGB (which was in the developer settings on the original Pixel) for a week and see if I preferred it. It turns out I do actually prefer it, so I've used it on whichever phone I've had since then!


Unmanaged color on wide-gamut outputs is simply godawful. I'm not surprised consumers go for it in direct comparisons, but especially with dark color schemes in vogue it seriously hampers usability by crushing darker shades (at least when lerping to DCI-P3).

TVs and monitors are pretty bad as well, but at least more wide-gamut source material is becoming available and calibration to DCI-P3 or Rec.709 is starting to become a selling point.


The article mentions the color-interpolation attribute.

Chicken-and-egg problems have to be resolved one step at a time. Implementing color-interpolation is a possible first step.


Would a new web standard that allows web developers to configure a 'color rendering mode', either on a webpage as a whole or individual parts (e.g. via a css option) solve this issue?


In what way is linear black-white interpolation ugly?


Oh sorry, yeah ugly is a really subjective term, I should have explained.

The 0.5 sRGB grey (aka 50%, 0x7f or 0x80) is right smack at the grey value most people would pick as the halfway point between black and white.

A linear 0.5 grey is way off from that, I think way too bright, but I’m about 10 months off working on this and to be honest this is one of those things like east/west or left/right that I can never remember without working out the math on a piece of paper. Too bright or too dark for sure, not at all a medium grey.

It took my team a long time to not think of linear and sRGB interpolation in terms of good vs. bad, right vs. wrong, modern vs. legacy, etc. There are good uses for interpolating in both color spaces, and in a variety of others like HSL, HSV, CMY, and perceptual spaces. Sorry for continuing the problem. There’s no ugly color space; they all have good uses.


If you want to simulate light at a physical level (even something as simple as two projectors shining light at the same wall) then linear light is correct: it is the right choice any time your mental model includes ‘light’. It just works.

You are right though that schemes like sRGB are a better fit for perceptual brightness. As you say the slider works the way people expect it to and you don’t need so many gray levels. 8 bit linear light shows posterization at some levels and is shaded too finely at others. If you want good results with linear light and common tools you need 16 bits of grey.


What I use is HSL whereby the H is in LRGB and the L is in SRGB (I forget what the S is but it’s one of the two). Very simple way of getting the best of both worlds when doing color math.


https://i.imgur.com/CRq6Kn8.png (left: linear, right: sRGB)

In linear interpolation, there's too much white and not enough black. That's why gamma correction/sRGB exists after all: if you encode the linear value directly, you waste most of the space on white values you can barely tell apart.
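A quick way to quantify that waste (a sketch; standard sRGB transfer function, helper name mine):

```python
# How 8-bit codes get spent if you store linear light directly:
# count the codes that land below perceptual middle grey
# (sRGB 0.5, which is roughly linear 0.214).

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

middle_grey_linear = srgb_to_linear(0.5)  # ~0.214

dark_codes = sum(1 for v in range(256) if v / 255 < middle_grey_linear)
print(dark_codes)        # 55 codes cover everything darker than middle grey
print(256 - dark_codes)  # 201 codes cover the hard-to-distinguish bright half
```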


Sorry but this is just your example being horribly misgenerated/some gamma being applied twice, maybe even thrice. Here is what I generated:

sRGB: https://i.imgur.com/dbn3TM6.png Lab (linear): https://i.imgur.com/dCXzbF3.png

Please download the images and view in a proper image viewer. For some reason at least on my Windows machine Google Chrome adds horrible color banding to both images.

This is my code:

    import imageio
    import numpy as np
    import colour

    # Quantize to 8 bits, adding uniform noise before truncation as dither.
    quantize_with_dither = lambda im: np.clip(im * (2**8-1) + np.random.uniform(size=im.shape), 0, 2**8-1).astype(np.uint8)

    # 500-pixel-wide black-to-white ramp, interpolated directly in sRGB.
    srgb_black = [0, 0, 0]
    srgb_white = [1, 1, 1]
    srgb_grad = np.linspace(srgb_black, srgb_white, 500) * np.ones((200, 1, 3))
    imageio.imwrite("srgb.tif", quantize_with_dither(srgb_grad))

    # The same ramp interpolated in CIE Lab, converted back to sRGB for output.
    lab_black = colour.XYZ_to_Lab(colour.sRGB_to_XYZ(srgb_black))
    lab_white = colour.XYZ_to_Lab(colour.sRGB_to_XYZ(srgb_white))
    lab_grad = np.linspace(lab_black, lab_white, 500) * np.ones((200, 1, 3))
    lab_grad_in_srgb = colour.XYZ_to_sRGB(colour.Lab_to_XYZ(lab_grad))
    imageio.imwrite("lab.tif", quantize_with_dither(lab_grad_in_srgb))


I didn't trust imageio to handle gamma/colorspaces correctly, so I just wrote out the raw sRGB coefficients as 8-bit TIFF and used imagemagick to convert to PNG, telling it that the input is in sRGB:

    magick convert lab.tif -set colorspace sRGB -depth 8 lab.png
    magick convert srgb.tif -set colorspace sRGB -depth 8 srgb.png


Lab is not linear. L predicts perceptual brightness, just like sRGB, which is probably why your two gradients are almost the same.

Gamma encoding is lin_to_srgb(c) = c*12.92 if c<=0.0031308 else 1.055*c**(1/2.4)-0.055. Since lin_to_srgb(0.214)=0.5, in a linear ramp, the middle gray (808080) pixel should occur about 20% of the way through.
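That 20% figure is easy to check numerically (a sketch; same standard sRGB encoding as the formula above):

```python
# In a linear-light black-to-white ramp, where does middle grey
# (sRGB 0x80, i.e. ~0.5) first appear?

def lin_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

ramp = [i / 999 for i in range(1000)]  # linear values 0..1
position = next(i for i, c in enumerate(ramp) if lin_to_srgb(c) >= 0.5)
print(position / 999)  # ~0.214, i.e. about 20% of the way through
```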


That banding is odd. I get the same issue on Windows 11 latest Chrome build. Looks fine in Windows default image viewer. These are PNG images, so there is no decompression to mangle them. The only way the two images could differ is in the way the pixels are massaged into the color space. Weird.


I've generated my own version of this (I just happened to be working on something that needed gamma-correct rendering)

https://i.imgur.com/jeM4RCu.png

       top: linear_to_srgb(x)
    middle: x
    bottom: srgb_to_linear(x)


It’s probably just that a gradient between black and white in linear RGB results in a gradient with a midpoint that doesn’t look like _the middle between black and white_ to the human eye. As far as I understand, that’s what sRGB fixes, perhaps among other things, and the way it fixes that results in muddy colors when you mix different colors.


The gradient mixing example is broken.

"Color gradients are supposed to be equally bright around the midpoint"

This is false. Nobody mandates this.

There is no "correct" way to do it, every one of them is arbitrary (although some do look better than others and the suggested method is better than the default linear interpolation).

It is completely analogous to smoothing curve in animation - it's about what sort of interpolation mechanism you use.

If there were a standard for a specific perceptually pleasing gradient, then one could claim "does not follow standard X".

The main point of the article is absolutely correct, though: "The correct way to process sRGB data is to convert it to linear RGB values first, then process it, then convert it back to sRGB if required."


I've been studying color theory a long time, including giving talks about color mixing and indeed HSL space specifically for a decade...

I'm with you, there are ways that I think color mixing should be done but I would never say that mine is "right". That's silly, it's art.

I think color mixing should be done using the HSI colorspace (luminosity replaced with intensity, defined as the sum of all light power outputs), with H and S defined as per HSL/HSV and interpolation between any two colors done by a line drawn between the coordinates. The reason for this is that in my applications I use spotlights, so obviously "total brightness" is way more important than "display lightness". It's just physics; there's no right or wrong about it, it's a different kind of display.

This is "correct": the colorspace is explicitly constructed to be used in this way, such that each step is perceptually even. But it also isn't necessarily what people want. To me, "fade from red to cyan" automatically goes through white because of course it does... but what people usually mean is that they want the hue to spin around at fairly constant saturation (ignoring that the direction of rotation matters) because it looks good. Red to orange to yellow to green looks vastly nicer in a gradient than going a direct path through white.
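That red-to-cyan intuition is easy to check numerically: mixed in linear light, the midpoint comes out a light neutral grey (a sketch; standard sRGB transfer functions, helper names mine).

```python
# Midpoint of red -> cyan, mixed in linear light.

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

red, cyan = (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)
mid = tuple(linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
            for a, b in zip(red, cyan))
print(mid)  # ~(0.735, 0.735, 0.735): a light neutral grey, not a muddy hue
```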

That all said (mostly because I find it bizarre and fascinating even after a lot of study), the arguments in the OP are not about whether a given gradient is objectively right but rather whether they match the spec. Things should match the spec...


Rainbows, where you sweep around the hue circle, have plenty of room for artistic license. The basic problem is that you can’t make a blue anywhere near as bright as the brightest yellow you can make. If you try to maximize brightness some parts are darker and some brighter in a way that is meaningless, particularly for scientific visualization. If you keep the brightness constant, however, it never gets very bright and is unsatisfying that way.

I like Rainbow Dash’s tail in My little pony because the artist chose to let the brightness vary but did it deliberately so it is meaningful in color and B&W.


I think you're talking about the effect where "secondary colors" made by mixing, say, R and G together have a maximum brightness (or I called it I for intensity) that is twice as high as that of pure R because twice as many LEDs are on.

This is the main reason I like it for my spotlights: it would be way weirder if the apparent brightness of the spot changed as it went through a rainbow at fixed saturation (or even from unsaturated to fully saturated). Instead, I make it so that for secondary colors each individual emitter is half as bright, by keeping sum(R,G,B) constant (if it's RGB) as one of the main parameters. It's a tradeoff: it definitely only gets half as bright for the secondary colors. I think this is for the best in practice; otherwise white would be thrice as bright again, and the software limit makes more sense to me than bouncing spot brightnesses.

For computer display colors, I don't think I am an expert enough to speak to it. I know about spotlight color schemes because that's what I build =)


But it isn't that simple, is it? What about the Helmholtz–Kohlrausch effect, most pronounced in spectral lights, which breaks the color additivity of trichromatic theory?

http://www.handprint.com/HP/WCL/color2.html#combocolor

But he's probably just talking about how "green-yellow" wavelengths contribute much more to lightness perception?

http://www.handprint.com/HP/WCL/color3.html#optimalstimuli

http://www.handprint.com/HP/WCL/color11.html#valchrom


Yes, I agree it is bizarre and fascinating and everything you said about traveling in various colour spaces:)

Also agree "should match spec" is always a valid argument.

To repeat, and be clear, my critique was about what the page says "Physically correct color gradients..". This should have been "according to spec you should compute a gradient this way, and this is what your browser does...".


What's the best way to learn color theory? Like actually, properly learn it? I consider my interest in TV/movies to cross over into the enthusiast category, and I work with medical images and video (compression mostly). I've spent time with the basics, and have recently started reading about BT.709 etc., but am unsure how best to really dive in to color theory headfirst


If you give up your RGB sliders for HSV sliders you’ve taken a big step.

Value (bright vs dark) is more important than anything else. If you were picking foreground and background colors for text whether or not it is readable depends on the difference in values, not the hues.

Any image should be meaningful if seen in black and white, not just for color blind individuals but for normal sighted individuals too. The Ansel Adams zone system is good for thinking about this, even for non-photographic images. He identifies 11 major bands of value which are about as many shades as you are going to get in a bad print or viewing environment, say with a thermal printer or newsprint.

In a top quality print and viewing environment however each of those zones except the most extreme is able to show meaningful detail. Of course an image can choose to not use certain zones but generically I’d say the ideal image looks good under bad circumstances but under good circumstances it has something to delight the eye in all the zones. (Reference art from Pokémon often has well-thought out uses of value because it has to play across various media.)


I think a huge first step for me didn't require any textbooks, but simply manually calculating the CIE coordinates based on the actual light spectrum. There are some really good articles online at this point, but I think everyone should probably just do this once. The wikipedia articles on colorspace are pretty darn good, albeit spread over way too many pages...

I feel like if you can do that math you understand a ton. Including why red and blue mixed is purple =) The integrals you need are at the bottom.

https://en.wikipedia.org/wiki/CIE_1931_color_space


Two other super helpful mental experiments --

- Why is there a "white locus"? How much does my hue shift under different color temperatures (ignoring CRI, which I suggest you do for now)

- With what math can I reproduce a color using three other colors in combination?


The other commenters shared some seriously great stuff, but as a Math major, you have my personal thanks for informing me that there were integrals I could solve.


Fair, coming from a physics/chemistry/biology background seeing the connection between the spectrum of light being seen and the color you convert it to was really enlightening. I knew it had to do with phosphors, but the science is fairly understandable.

The chemicals that pick up light respond to different wavelengths differently. However, if they get excited, they are a [1] (as far as I think we understand). So a single phosphor can't tell anything about the wavelength other than a probability distribution.

Then you take three probability distributions and identify a single unique color in our heads for it. And that's why every color-space ultimately has three parameters, it's all just different projections and mappings but physically there's really just a spatially isolated signal that tells you which of three phosphors were hit and how much, and then a ratio to downsample it to a perceptual color.

And then you look at the overlap and see how for wavelengths you see as blue, the "blue" phosphor is only about as active as the "green" phosphor (which are confusing misnomers anyway) but your brain sees both being active equally as "blue". And then that "red" secondary hump gets hit when the wavelength moves from blue towards violet -- activating the "red" phosphor and making vision into a wheel.

It was always one of those "this is obvious until I think about it" moments when I wondered why 400nm and 700nm aren't the two ends of a line in our vision, but instead join into a wheel. There's no obvious theoretical reason for it, I think; it's possibly a pure coincidence of that "red" phosphor having a second hump.

It's nice to do the conversion for real a few times with example colors or black-body radiation. Seeing how this works a little more closely makes a bunch of stuff, like imaginary colors, make sense biologically in terms of saturating certain phosphors.


This is the best free resource that I'm aware of :

http://www.handprint.com/LS/CVS/color.html


SVG gradients are specified to interpolate in sRGB space by default. SVG 1.1 does officially support interpolation in a linear colorspace, though no browser implements it AFAIK.

https://www.w3.org/TR/SVG11/painting.html#ColorInterpolation...

edit: Another mode of interpolation that is missing even from the spec is using premultiplied alpha. It would be essential to correctly render upscaled transparent raster images, where you don't want the "color" of fully transparent pixels to leak through when they get interpolated with a neighboring opaque pixel.
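A minimal sketch of that leak (hypothetical pixel values; not any particular renderer's code):

```python
# A fully transparent pixel that happens to store green should not
# tint its opaque red neighbour when the two are interpolated.

def lerp(a, b, t):
    return tuple(x * (1 - t) + y * t for x, y in zip(a, b))

opaque_red  = (1.0, 0.0, 0.0, 1.0)   # (R, G, B, A)
clear_green = (0.0, 1.0, 0.0, 0.0)   # invisible, but "green" in storage

# Straight (non-premultiplied) interpolation: the hidden green leaks in.
straight = lerp(opaque_red, clear_green, 0.5)
print(straight)  # (0.5, 0.5, 0.0, 0.5) -- a yellowish fringe

# Premultiplied: scale RGB by A first, interpolate, then unpremultiply.
def premultiply(p):
    r, g, b, a = p
    return (r * a, g * a, b * a, a)

r, g, b, a = lerp(premultiply(opaque_red), premultiply(clear_green), 0.5)
result = (r / a, g / a, b / a, a)
print(result)  # (1.0, 0.0, 0.0, 0.5) -- still pure red, just half covered
```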


> "Color gradients are supposed to be equally bright around the midpoint"

> This is false. Nobody mandates this.

That's just because their wording is imprecise. If you replace color gradient with "perceptually linear hue gradient", which I think is strongly implied, then that would be 100% correct. Hue is a color coordinate completely independent from others like lightness, in a perceptually uniform color space.

But you're on point - there's no right way to do it. Not because it's a matter of taste or something (it's not). The problem is, there are currently no perceptually uniform color models! All models are uniform in some range but fail in various scenarios; human perception is still largely unexplored. As a result, getting it "equally bright" in different viewing conditions with varying intensities is an impossible task, especially when you know nothing about viewing conditions. You have to narrow it down to "good enough in common scenarios". Models like OKLab are trying to do exactly that.


When you're using a ringing filter, sRGB is often a better colorspace for upscaling than linear RGB, because it distorts the ringing artifacts to make them less noticeable by putting them mostly in the highlights, not the shadows. It's not mathematically "correct", but it's better suited to human perception, similar to how minimum phase filters can perceptually outperform linear phase in audio processing, where pre-ringing is more noticeable than post-ringing.

See "Catch #3" here:

https://entropymine.com/imageworsener/gamma/

And you can sometimes get even better results using a sigmoidized colorspace. See:

https://legacy.imagemagick.org/Usage/filter/nicolas/#detaile...


It still feels like sRGB is accidentally better here, and that a different, non-linear colorspace could outperform it for this case.


Well, sRGB is sort of perceptually linear. Its roots trace back to a 50s technology compromise between perceptual linearity, color gamut, & encoding cost for CRTs that held up with only minor tweaks until CRTs stopped being widely used.

Maybe a fancier model like OKLab would outperform it for this specific case, but only in quality, not performance: OKLab is more expensive to convert to/from than sRGB, which is usually free since your backbuffer and textures are usually sRGB.


Well... color is physics though. The three spectra that our eyes are sensitive to span a vector space dual to the space of all light spectra [1]. That space naturally has a linear structure. You can have twice as much light of one color or the other, and you can add the light of different colors.

So there _is_ a natural interpretation of linear interpolation between two colors in physics. Concretely: Imagine two lamps with the two colors and the same brightness. You start with one fully turned up and the other fully turned down. Then you increase the one while you turn down the other keeping the brightness constant.


Color is not physics, confusingly.

The 'eye sensitivity response' model is a color space. But it has problems. Notably, it can represent a lot of impossibly pure colors (because e.g. your red detectors have a lot of overlap with green, so pure red activation is impossible). More relevantly, a linear interpretation of this system does not match what people see. Perceived brightness is close to logarithmic: if a cell has double the activation, that looks like a constant increase in brightness.

Finally, the way we see color depends on the context, mostly the ambient light. This is largely (but not completely) captured by the idea of color-temperature / a white point. But there are also optical illusion effects around brightness and color.


Color vision is much more complicated, and “RGB” doesn’t work the way most people think it does.

The spectral response of the L and M cones mostly overlaps and our perception of color relies heavily on “adversarial” transformations in the brain.

The result is a very different effective spectral response superficially resembling RGB bandpass filters except the red channel takes negative values (!) at some wavelengths:

https://en.m.wikipedia.org/wiki/CIE_1931_color_space

The negative values can be thought of as requiring additional anti-red (green+blue) to render the correct color.

The CIEXYZ color space was invented to address the negative values and also conveniently separate luminance from color information. It is widely used today to do color space transformations since it represents standard human color vision in a reasonably convenient and intuitive way.
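For reference, the transformation CIEXYZ is built on is just a 3x3 matrix applied to linear RGB. A minimal sketch with the standard IEC 61966-2-1 sRGB-to-XYZ matrix (rounded to four digits); a quick sanity check is that sRGB white lands on the D65 white point:

```python
# Linear sRGB -> CIE XYZ via the standard matrix.

M = [
    [0.4124, 0.3576, 0.1805],   # X
    [0.2126, 0.7152, 0.0722],   # Y (luminance)
    [0.0193, 0.1192, 0.9505],   # Z
]

def linear_srgb_to_xyz(rgb):
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in M)

print(linear_srgb_to_xyz((1.0, 1.0, 1.0)))  # ~(0.9505, 1.0000, 1.0890): D65 white
```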

On a side note, wide band RGB or Bayer filters used in digital cameras are a poor match for the CIE 1931 standard observer effective band pass filters (even ignoring the negative red values). They would result in low saturation colors if mapped directly to a full-gamut RGB colorspace. Conveniently or coincidentally they work well when directly mapped to a limited gamut sRGB color space. Narrower bandpass filters approximating the CIE1931 curves look correct when mapped to a wide gamut like rec.2020. The point is you cannot make simple assumptions about RGB colors at any point in the process.

The biggest problem we have today however is many systems are not aware of colorspaces and assume that RGB values are an absolute and invariant encoding of color.


IMO gradients are not based on physics any more than selecting a smoothing curve in animation is. They are design tools - even though physics can be used to analyze them.

I say this as a software engineer with a MSc in physics and a computer graphics/traditional graphics aficionado :) - not to boast with my eminence as that would be silly on this site but just that I've given a long and hard thought on this matter over the years.


Smoothing curves that look the most natural are often modeled after springs or friction. How is that not based on physics?

Also, this argument makes total sense in isolation, but you still have no good reason for the gradient rendering on the web to stay as it is. It doesn’t look nicer or have any benefit other than backwards compatibility. This is the whole point of the article.


"Smoothing curves that look the most natural are often modeled after springs or friction. How is that not based on physics?"

To be clear, when I say "based on physics" I mean "a value that can be computed from first principles, based on a specific model".

For example the spectrum of a black body radiation can be said to be based on physics.

Gradient is a mathematical entity.

A "correct" gradient cannot be computed from first principles, nor does the article specify any specific model to use. There is no "correct" gradient. We can define some spec for a gradient and say "is according to spec" or "is not according to spec".

Gradient is an interpolation between two triplets using some function.

With a gradient, in R3, we have two elements, a and b, and our task is to define a path g(t) from point a to point b so that g(0) = a and g(1) = b.

The trivial way to do this is by way of linear interpolation

g(t) = a * (1-t) + b * t

however this has lots of aesthetic pathologies. The work around gradients is about finding the most pleasing g(t).

If we start from the philosophical point of view that all smooth curves are based on intuition and inspiration from physics you are free to do so but this will summon an angry horde of mathematicians and computational geometry experts.

"This is the whole point of the article."

I think the point of the article was

1. Gradient rendering does not respect the spec

2. Any color computations should be done in linear color spaces

The points are valid, but the motivational paragraph about gradients was more misleading than correct (IMO) hence my critical response above.


I find that it's better to think of colors as imaginary rather than real - helps with some common assumptions, including how colors would be the same for everyone.


Whilst it is true there is no "correct" gradient, the author is making the reasonable assumption that the gradient can be simulated by two linear light sources (A and B) being shone onto a perfectly reflective (white) surface. Each light at full brightness is one end of the gradient. To calculate a colour at x% along the gradient you turn light A on to (100-x)% and B on at x%.

The issue with this is that the human eye perceives brightness slightly differently at different frequencies (and combinations of frequencies), so the perceived brightness of the gradient may appear nonlinear. That isn't to say the browser-generated gradient avoids this; it is generally even more nonlinear in its perceived brightness.
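The two-lamp model above can be sketched directly (standard sRGB transfer functions; function names are mine): at position t, lamp A runs at (1-t) of full power and lamp B at t, their light adds linearly, and the sum is re-encoded for display.

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def two_lamp_gradient(a_srgb, b_srgb, steps):
    a = [srgb_to_linear(c) for c in a_srgb]
    b = [srgb_to_linear(c) for c in b_srgb]
    out = []
    for i in range(steps):
        t = i / (steps - 1)
        mixed = [x * (1 - t) + y * t for x, y in zip(a, b)]  # light adds linearly
        out.append(tuple(linear_to_srgb(c) for c in mixed))
    return out

g = two_lamp_gradient((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 5)
```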


Gradients are design tools. The gradient interpolation function can be designed around some physical mixing hypothesis but there is no first principles rule as such.

Focusing on the luminosity is only half of the battle.

For nice looking gradients that are pleasing to design with, generally for aesthetic reasons you also want the perceived color saturation to be interpolated in a predictable way.

For reference, here is a nice example of an approach to find a nice looking mixing function: https://bottosson.github.io/posts/oklab/


As an addendum a good way to visualize gradients of equal perceived brightness is to use an Hue Chroma Luminance (perceived brightness) picker like http://tristen.ca/hcl-picker/

As you can see, preserving luminance means you get a restricted range of gradients to choose from.


Linear interpolation in the HCL space is not equivalent to linear interpolation in a linear colorspace. For example, in a linear colorspace, if the interpolation goes from saturated blue to saturated yellow, then the interpolation goes through gray, while in the HCL space the interpolation goes through more or less saturated colors.

The possible colors produced by a display remain a cube in a linear colorspace, which is a convex object, so all gradients remain available.


They should have said 'would be actually useful if' rather than 'should be.' From a design perspective, the utility of gradients that come to a muddy gray in the middle is nonexistent. Framing it as a way to make them more useful to people who need them rather than 'correct' would have been more productive, but way less likely to hit the top stories on HN.

I'd say the transparency assertions are more dubious. There's a zillion ways to superimpose a color on an image and the one they suggest isn't even more consistently useful than the others, let alone 'correct.'


From the article:

> Physically correct color gradients (as you would get, for example, along an out of focus edge between colors) are equally bright around the midpoint,

So I think there is an argument that using a linear color space is more correct.


Agree. It is also a good idea to add a third color in the middle for those kinds of gradients.


Web color is somehow still a garbage fire.

About a decade ago I was hearing that web-safe color was effectively obsolete. Today Firefox chokes on P3 color, and Chrome doesn't appear to care at all about display color profiles.

I recently launched a website that had a fire-engine red background and was expected to have a rotating animation above the fold (https://worldparkinsonsday.com). Even once we got the animation synced perfectly between Chrome and Safari, it turned out that iOS Safari was rendering the video differently. Eventually I gave up and replaced a 128kb mp4 video with a 5mb transparent animated GIF. Shameful.

And I had this lil' fellah on my desk smiling at me the whole time https://100soft.shop/collections/dumpster-fire/products/dump...


A transparent webm probably would have worked for your use case. Or if you were willing to rebuild the animation, it looks simple enough for SVG.


Yeah but the fx guys certainly didn't have the js or css chops to make most of that work.

I think the real correct solution would have been a Lottie animation, but there was no time to train the fx guys.


Unfortunately, I don't think iOS safari has webm support yet.


All the modern media formats are unsupported by Apple. We could be using avif for images and webm av1 streaming and video if not for Apple products.

It's intentional sabotage by Apple at this point, seemingly on behalf of MPEG-LA.


So 99.99% of the time on HN people seem to think Apple is an AOM founding member and hence will definitely support AVIF and AV1 (not true).

On another day Apple is working on behalf of MPEG-LA when its VVC patent pool doesn't even include Apple.


HEVC video with transparency is apparently possible[1], though I’m not sure if it’s supported on non-Apple platforms.

[1]: https://developer.apple.com/videos/play/wwdc2019/506/


I really hope Apple doesn't use this as an excuse not to support WebM, which is a web standard


Will it break the page if this support is missing?


setting aside whether a gif is really necessary, it was surprising to me that your gif is so large, considering it's mostly flat colors, albeit at a fairly high frame rate. running it through gifsicle -O3 takes off 1 MB, and -O3 --lossy takes off another MB. -O3 --colors 4 drops it by another 2 MB, for a total of 69% file size reduction, at the cost of some minor fringing only visible when displayed full-screen.


I have a technique I use to get around videos that are supposed to be seamless with their backgrounds.

First I figure out which edges of the video will have website background next to them. Some video edges won't need any if the video itself is touching the side of the webpage.

Inside the video, do either or both of these two things:

a) Make the background of the video the same as the background of the website (impossible for live action, useful for product spin-arounds / pull-apart animations)

b) Place a gradient layer over the video to make the edges of the video that meet the website content blend into the website's background color. The gradient should go from transparent to the website's color.

Render out the video.

Now comes the part where people usually get frustrated. The video either doesn't blend in with the website's background, or only blends in on certain browsers. (This is due to a combination of the video losing color information when it is rendered and the whole web color space issue the Original Post goes into.)

Now to fix it: Place a css/svg gradient over the <video> element itself that also goes from transparent to the website's color.

Why put the original gradient over the video?

Two reasons:

1. Depending on the css/svg gradient shape and stopping points, some browsers will render a hard edge through the gradient, allowing the border of the video to still be seen. Human eyes are great at seeing very minute changes in brightness/color when they're next to each other, especially when they're arranged in a straight line. Baking a gradient into the video itself actually helps the css/svg gradient hide the video's edge.

2. Rendering a colored transparent gradient inside a video actually lowers the number of colors in the final video, leading to a smaller file size and faster loading time. If users aren't going to see the original colors, there's no point rendering out the video with all of its much larger amount of color information.

Example time:

Here's the last website I achieved this effect on: https://myoresearch.com/en-au

(Yes I'm aware there are some other issues with the website, but this is more to demonstrate the video effect itself)


I love loud background colors like that.


I just found out that `lch()`, `lab()`, `oklch()` and `oklab()` are already supported in Safari (stable)[1].

The normal versions use a D50 white point and the ok versions use D65, which is the same as sRGB[2].

[1]: https://wpt.fyi/results/css/css-color?label=stable&label=mas...

[2]: https://bottosson.github.io/posts/oklab/


Web color isn't broken, it's specced weirdly. Any attempt at "fixing" this would miscolor every existing website for the sake of correctness.

The SVG color space definition is the exception here, browsers can do better in that area. Still, it's silly to expect browsers to change the way their color system works and go against the expectations of color blending for web developers just to tick a correctness box.

Make a time machine and take it up with Microsoft and Mozilla in the early 2000s if you want the default color system to work differently. Write a patch for Chrome/Firefox/WebKit to implement the SVG spec if you're so passionate about this. People generally don't care about this stuff, they want the result to look Good Enough™ and that's exactly what the current implementation does. I don't think any browser maker will put in the effort to do much about this "problem".


Man this is another good example of the quote:

"Never underestimate the value of a 'good-enough' solution"

I'm commenting as a multi-decade programmer, but alas I know nothing about color! Even though I've dabbled in graphics here and there in my career. (this is just for background)

My main point is that, as a "technical person" with NO skin in the game, I can honestly say I can't see the difference, or if I can, why one is better or not. Not saying it shouldn't be fixed, but if I am a sample of the "ignorant majority" that doesn't know about colors, should this bother me? Or bother me more?

I hope I'm not offending the "experts in this field" just saying, your clueless audience(me) don't even know what is wrong :)


I'm a casual developer too, not knowing much about colors.

But I can see the difference. In the "my browser" image, I can kind of see "vertical pillars" in the middle, which reminds me of times when we had not many colors in our palette.

Moreover, looking at the red and moving along the horizontal axis, it is the same red and only after some distance does it start to change. The correct color mixing looks better to me: the colors start to change instantly and more smoothly.


The worst part about web colour is that it's not even consistent. If you draw a PNG containing a rectangle with a given colour on to a page with background colour of the exact same RGB values, then it can look different.

Here's an example: https://incoherency.co.uk/interest/colour.html

And here's a screenshot of how it looks on my browser (Firefox 99.0, Ubuntu 20.04 LTS): https://img.incoherency.co.uk/3812

The square has the same RGB value as the page background (#0cf4c7), but they look different. No amount of changing the colour mode selection in GIMP can fix this.

I opened the screenshot in GIMP and used the colour picker tool to see what colour my #0cf4c7 has turned into, and in the screenshot image it is #67f1c7.

Strangely, when I draw the screenshot in the browser, its colours stay the same, so obviously there is something you can do to the PNG that will make it render the colours you chose.


There are several ways colorspace metadata can be included in a PNG.

- No colorspace metadata (software should assume sRGB)

- Built-in sRGB format chunk

- gAMA chunks

- cHRM chunks

- ICC profile chunks

IMO v4 ICC profiles are the modern and best way to encode colorspace info. The problem is that not all software reads any (let alone all) of these.

ffmpeg, for example, decodes all of these chunks but does not actually use any of it. It even assumes by default that PNGs are encoded with rec.709 gamma (common for legacy video). This is why PNG->x264 conversion with ffmpeg has weird colors unless you encode the PNGs with gamma ~2.0 instead of the sRGB standard ~2.2.
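If you want to see which of those chunks a given PNG actually carries, the chunk stream is trivial to walk; a quick sketch (chunk layout per the PNG spec: 4-byte big-endian length, 4-byte type, data, 4-byte CRC, which this skips without verifying):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"
COLOR_CHUNKS = {"sRGB", "gAMA", "cHRM", "iCCP"}  # colorspace-related chunk types

def png_chunk_types(data):
    """Return the chunk type codes of a PNG byte string, in order."""
    assert data[:8] == PNG_SIG, "not a PNG"
    types, pos = [], 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        types.append(ctype.decode("ascii"))
        pos += 8 + length + 4  # length/type header + data + CRC
    return types
```

Something like `COLOR_CHUNKS & set(png_chunk_types(open("img.png", "rb").read()))` then tells you whether the file declares sRGB, a gAMA value, chromaticities, an embedded ICC profile, or nothing at all.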


Maybe it's an installed color profile thing, and colors from different sources get interpreted against different color spaces? PNGs typically have a well defined color space (implicit sRGB, or explicitly specified otherwise), but the background might get interpreted in device colorspace maybe.


No square, Firefox 99 Windows, or in Chrome either. Heck even IE 11 renders it correctly!

I think you need to investigate Firefox and how it is using your GPU. Maybe try disabling WebRender? https://support.mozilla.org/gl/questions/1345186


Firefox 99 on Linux, the square is visible. Chrome 100 on Linux, square is not visible. None of the colors match the RGB value #0cf4c7 when using a picker :/

Gimp shows a color of that hex the same as Firefox's square. The Chrome color is much paler in comparison. There's no color profile set for the monitor.


Something's wrong with your Linux, I don't see a square on FF 99, Fedora 36 GNOME/Wayland.


I don't know what could be wrong. I haven't done anything weird to it. I remember this exact problem from at least 6 years ago, on many different computers. I thought this was just how it was.


I don't see it either, same firefox, Ubuntu Mate 21.10 X11.


For me, setting

     gfx.color_management.force_srgb

to true in about:config makes the rectangle disappear. I don't have the slightest idea what is going on tho.


I'm using Firefox 99 on Windows 10. I don't see any distinguishable difference on the page, however, using the Firefox color picker on the square, I get #13fc35


I can't see the square either. Firefox 99, Windows 10.


I do see a square, also on Firefox 99, Windows 10. I don't see a square in Chrome.


wtf is the color of 0cf4c7.png ? it's different in every browser, and even when you download the file, it's again different in every program that opens it. this is not just a browser issue.

the Photos app on Windows 11 renders it as 01f4c7. almost 0 red. (although come to think of it, maybe the Windows 11 Photos app is an embedded browser, lol)


I don't really care what the colour is, I just assume it should look the same in the page background and in the PNG.


It looks the same on my Macbook with Chrome.

Did you use some browser extension for eye protection or dark mode?


Nope, I'm not doing anything weird as far as I can tell.


I don't see the square, FWIW. The colors are a match in Edge v100.


Interesting. It might just be a Firefox thing.

I remember reading a Firefox bug report about this, and the response was that the person's monitor colour calibration must be wrong, which makes no sense to me since even if the monitor colour calibration was wrong, I'd expect it to apply to both the page and the image equally.

EDIT: This one: https://bugzilla.mozilla.org/show_bug.cgi?id=621474 - from 12 years ago.


Firefox 99.0 (Linux) here, the square is invisible.


> which makes no sense to me since even if the monitor colour calibration was wrong, I'd expect it to apply to both the page and the image equally.

Color mgmt by browsers is, when it is applied, only applied to images (neither videos nor "CSS colors", broadly speaking). I'd bet you have an ICC profile associated with your monitor.


On Linux, I can see the square in Firefox, but not in Chrome. Interestingly enough, the color in Chrome is identical to the color of the square in Firefox, but when I force sRGB in Chrome (chrome://flags/#force-color-profile), it becomes identical to the background color in Firefox. I don't know what to make of that


Which Linux? I can't reproduce on Fedora 36 GNOME/Wayland.


Ubuntu 20 Gnome/X11


It's a bit sad that the author went to the lengths of buying this domain and creating this page, but then didn't provide a library they consider correct, like https://colorjs.io/

There's an article with a much more sympathetic approach:

https://www.joshwcomeau.com/css/make-beautiful-gradients/

Not sure if this is trolling or not, but the closing HTML tag misses a > :)


> It's a bit sad that the author went to the lengths of buying this domain and creating this page, but then didn't provide a library they consider correct, like https://colorjs.io/

Colorjs.io doesn't address most problems the website highlights.


That's why I would have appreciated a suggestion that does.

There are 3 issues highlighted: color mixing, transparency and scaling.

The second is entirely subjective, and I prefer what browsers do. Scaling is irrelevant to color handling (and the issue is not explained).


The author is talking about the way browsers render graphics. Unfortunately, there's no way to correct that without modifying the browser itself.


And when there's no way, you use libraries. Others also mentioned CSS4 solutions and other alternatives. It's really not as bad as the article paints it (pun intended).


A) Someone needing to change image processing behavior in a browser isn't impossible; it's just not possible with a JavaScript library. Lots of different kinds of developers here. B) How workable the current situation is depends on your use case.



I imagine this is exactly the sort of thing the author was hoping would be improved. But the existence of an experimental, largely unsupported API doesn't really solve the problem, does it?

https://caniuse.com/?search=Houdini

Listen — I’m not really sure what your problem is and I don’t really care. I have no interest in “winning” a conversation. If you’re just going to push goal posts in different directions to try and find some way you feel righter than me despite my having done nothing besides making several specific, one-sentence, topical points, then I yield the remainder of my time for you to go ahead and do that by yourself.


It is quite astonishing that color is still this broken even in most applications designed to work with colored images. Are there any good software libraries that can be leveraged to prevent this?

And I agree with the author about his reasoning that sRGB is compressed data. An audio equivalent is the Mel scale https://en.wikipedia.org/wiki/Mel_scale


Or the good ol' pre-emphasis [1].

They were not really common in the West (maybe more so on vinyl, but I don't collect that), but it was such a pain in the ass when I was collecting Japanese CDs from the 80's.

https://wiki.hydrogenaud.io/index.php?title=Pre-emphasis


All HiFi vinyl and cassettes were recorded using pre-emphasis and required a de-emphasis filter to reproduce the sound correctly.


It's not astonishing to me, because I believe the author is not entirely correct about color being broken.

sRGB color is not the perfect color space, but in general it's better for UI than using linear color, which the author seems to be advocating for. The web is primarily UI. Not only that, but sRGB is the color space of probably 95% or more of devices, and of all low end computer monitors. sRGB is designed to balance perceptual linearity with encoding cost. It's very cheap to encode, and perceptual enough that you can store a channel in 8 bits with high quality (a linear value needs 16 bits to be comparable, and it's still worse unless you use half floats instead of unorm; it's common to go as low as 5-bit sRGB channels for albedo in computer graphics.)
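The encoding-cost claim is easy to check: quantize a dark value to 8 bits once stored linearly and once stored sRGB-encoded, and compare the round-trip error. A sketch (standard sRGB transfer function assumed, constants from IEC 61966-2-1):

```python
def encode_srgb(l):
    # Linear [0,1] -> sRGB-encoded [0,1].
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def decode_srgb(c):
    # sRGB-encoded [0,1] -> linear [0,1].
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def quantize(x, bits=8):
    # Round to the nearest representable level of an unsigned normalized integer.
    steps = (1 << bits) - 1
    return round(x * steps) / steps

dark = 0.002  # a dark linear-light value

lin8 = quantize(dark)                             # stored linearly: snaps to 1/255
srgb8 = decode_srgb(quantize(encode_srgb(dark)))  # stored sRGB-encoded

print(abs(lin8 - dark) / dark)   # ~96% relative error
print(abs(srgb8 - dark) / dark)  # ~6% relative error
```

The sRGB encoding spends its codes where the eye needs them, which is why 8-bit linear storage bands visibly in the shadows while 8-bit sRGB mostly doesn't.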

I am a graphics engineer; I've had a lot of colleagues learn the aphorism "don't do math in linear" once they first get burned.

> Unfortunately, by calling it a "color space", we've misled the vast majority of developers into believing that you can do math on sRGB colors

That belief is an over-correction in my opinion. The examples that the author shows are all about achieving physically correct lighting. The Rubik's cube image blended with 25% white doesn't look physically plausible because it's not being done in linear color.

But the goal with UI is visibility, not physical plausibility. If you want to cover something with a UI element at 50% transparency, you want 50% of the UI and 50% of the background, not a physical model of some transparent medium.

sRGB does however suck for defining gradients, but I think that's more to do with it being RGB. For this what I generally advocate is a luma/chroma model. Other posters have mentioned Oklab which is great, though even plain YCbCr will get you 80% of the way there while being cheap to convert to. You still want to do that in gamma/perceptual space though, not linear. The author didn't show gradients between white and black. Gradients between colors of 2 different luminances look awful in linear^.

However I totally agree with the author on zoom. Web browsers should be doing zoom/image resampling in linear space. Images shouldn't qualitatively change based on their zoom level.

^ https://external-preview.redd.it/voeyOYu6Ds-fLLU7nR4kABmFgUE...


> I am a graphics engineer; I've had a lot of colleagues learn the aphorism "don't do math in linear" once they first get burned.

I'm not sure what you mean by this. I work with game engines and everything is calculated in linear color. Not only would anything else be completely physically inaccurate, it would be unfeasible for modern HDR rendering, which supports extreme dynamic ranges with physical units (i.e. a 100,000 lux sun).

Maybe you work with mobile devices, where they still haven't fully moved to HDR?


I do cross platform development, I have to keep mobile/console in mind though most of my work in on PC, other people work on mobile/console and tell me when the shaders I write are too slow or break mobile :).

You work with 100,000 lux sun? Damn! We did physical values for sun/moon lighting when I did flight simulator stuff but that painfully required full float rendertargets, which is fine when you can tell Uncle Sam the minspec gpu is a 1080, but unacceptable for console and mobile.

Anyway we do full linear HDR for the 3D scene of course, as it's simulating physical light transport. Though not exact sun/moon values as we want to do half float rendertargets.

The UI does compositing in SDR sRGB.

Some of our textures are tinted with this rather cursed lab model I cooked up for speed, which is gamma-encoded sRGB linearly transformed so that 'l' = the average of r, g, b and the 'chroma' axes weight r, g, b equally but place them 120 degrees apart on the unit circle. It was designed to make HSV transforms linear operations (i.e. a matrix multiply) and it performs well with acceptable quality and not too many imaginary colors. Nicer color spaces were too expensive, and it works well enough that the artists like it.
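If I'm reading that description right, the transform is something like the following (my guess at the weights, not the parent's actual code; I've ignored the gamma-encoded part and just treat the inputs as given):

```python
import math

def rgb_to_labish(r, g, b):
    # 'l' is the channel average; (a, c) place r, g, b 120 degrees apart
    # on the unit circle, giving a 2D chroma plane.
    l = (r + g + b) / 3
    a = r - g / 2 - b / 2
    c = (math.sqrt(3) / 2) * (g - b)
    return l, a, c

def labish_to_rgb(l, a, c):
    # Invert the linear transform above.
    r = l + (2 / 3) * a
    gb_sum, gb_diff = 3 * l - r, 2 * c / math.sqrt(3)
    return r, (gb_sum + gb_diff) / 2, (gb_sum - gb_diff) / 2

def rotate_hue(l, a, c, degrees):
    # A hue shift is just a 2x2 rotation of the chroma plane, so the
    # whole HSV-style transform stays a linear operation.
    t = math.radians(degrees)
    return l, a * math.cos(t) - c * math.sin(t), a * math.sin(t) + c * math.cos(t)
```

Sanity check: rotating pure red's chroma by 120 degrees lands exactly on pure green, which is the property that makes hue shifts a matrix multiply here.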


This is a bit out of date. Interpolation colorspace can be specified as of CSS Color Module Level 4. See https://drafts.csswg.org/css-color-4/#interpolation-space

I don't believe it is yet implemented in browsers, but I know Chrome at least is working on it.


The best color mixing I've seen in a while is this

https://github.com/scrtwpns/pigment-mixing


> Unfortunately, by calling it a "color space", we've misled the vast majority of developers into believing that you can do math on sRGB colors, and by exposing the raw sRGB numbers to users, we've misled them into thinking those numbers have a reasonable meaning.

But the sRGB spec does specify a color space! And these numbers do have a meaning - the unreasonableness lies with people's assumptions.

The issue here seems rather to be that developers and users, when hearing "space", assume a linear, Euclidean space, except color spaces are usually neither!

But I don't really see what other word could be used here instead of "space" ?

----

A related issue (and potentially a fix) :

In 2015 developers (and users) could (usually) just stay blissfully ignorant of colorimetry and assume sRGB everywhere.

Today, with wider gamut screens becoming common even in non-professional monitors ("HDR"), and non-Apple OSes starting to get proper support for non-sRGB specs, sRGB cannot be assumed everywhere.

So learning some colorimetry might be a pre-requisite for competence now.

(Except for Web text colors; IIRC they're still limited to 8 bits per channel = 256 values, sRGB?)


I don't see a clear "why" for fixing this explained in the post. Yet if we assume this should be fixed, how should the community fix this?

Has this been discussed at the appropriate standards bodies and conferences?


Yep. Because this is a perfect example of something that only matters in ivory towers, but not "on the street". The fact that it HAS been 20 years and browsers "haven't bothered" to "fix" this only goes to show how unimportant this really is. The author is not wrong about any of it, but that only strengthens the argument that there isn't a compelling "why" for "fixing" it (let alone that changing how browsers render outputs really WOULD break every site on the internet that is designed for how sRGB has worked in browsers for 25+ years).


The fact that it HAS been 20 years and browsers "haven't bothered" to "fix" this only goes to show how unimportant this really is.

I couldn't disagree more. Browsers and modern web development style keep kicking out tried and tested technologies in favour of newer alternatives that are objectively inferior. This is not a good thing if you're interested in an attractive, functional WWW.

In this case we stopped using real images prepared by real graphic designers and digital artists using real graphics software and we substituted rounded corners and gradients and web fonts and scalable line art graphics. You can argue about whether this has advantages by reducing data sizes or cutting costs through allowing developers with bland toolkits based on "flat design" to do styling work instead of hiring experts. What you can't seriously dispute is that those new techniques render really badly in some or all of the major browsers and now many sites look bad unnecessarily and some become harder to use as well.


Oh I get the crusade, it's a noble one. I even support it. But you're conflating your "important to me" definition with my "actually important out in the real world". Browser vendors haven't bothered to "fix" this for a fifth of a century. Which makes this the definition of an ivory tower "problem". If it moved the needle in the real world even a little, it would have been changed long ago. And maybe it will one day, we live in hope.

But the airquotes around "fix" are precisely because it's not "broken", it's simply "how it's implemented". Plan accordingly.

> ...we stopped using real images prepared by real graphic designers ...allowing developers with bland toolkits based on "flat design" to do styling work instead of hiring experts.

Yeah that's a separate issue and rings of No True Scotsman-ism.

> What you can't seriously dispute is that those new techniques render really badly in some or all of the major browsers and now many sites look bad unnecessarily and some become harder to use as well.

Pretty sure that is disputable. "Many sites" do look bad, but if you're claiming this is because of browser sRGB colour handling then you're gonna have to cite a source or do more than claim the high ground.


But the airquotes around "fix" are precisely because it's not "broken", it's simply "how it's implemented". Plan accordingly.

I do. When I'm building web stuff professionally, there's a laundry list of CSS features (often quite basic ones) that it would be very convenient to use but I often don't because I know they will look terrible in production for a significant proportion of users. But that's unfortunate.

Yeah that's a separate issue and rings of No True Scotsman-ism.

Separate perhaps, but I don't see how No True Scotsman applies here. There certainly are professional developers who also have significant knowledge of things like colour theory and graphic design. You're talking to one. But those are separate skill sets, not normally required or expected for most development work. I don't think it's plausibly deniable that web development today frequently relies on someone who designed some toolkit, probably using basic CSS effects for almost all the visuals, instead of hiring an in-house designer. And I don't think it's plausibly deniable that today's WWW is much more homogenous and dare I say boring in appearance than the WWW of 10 or 20 years ago. There are usability advantages that come from some types of consistency but does everything really have to be so same-y?

"Many sites" do look bad, but if you're claiming this is because of browser sRGB colour handling then you're gonna have to cite a source or do more than claim the high ground.

It's not just the sRGB handling. I'm talking about a bigger picture. Some popular browsers had antialiasing glitches that made rounded corners done with CSS look like low-res pixellated junk for years when `border-radius` was first a thing. Try applying CSS transforms to anything using fonts or SVGs today and you can still see horrendous rendering artifacts in some browsers, and even worse if you're animating as well. Of course the fonts themselves render completely differently on different platforms even without any transforms applied and sometimes that has a material effect on important aspects like accessibility or even basic legibility. Gradients over large areas have horrible banding in some browsers because they don't use basic dithering techniques that every real graphics program has used since about the 1980s. The list of browser rendering glitches that any halfway decent creative software has avoided for a very long time is long and frustrating to read.


I don't really disagree with you, I think we're just on different sides of what it means for this to be "important".

> but I don't see how Now True Scotsman applies here.

"Real" designers, using "real" software, hiring "experts", etc. Because "no true designer would X...". There is absolutely genuine expertise in this domain but, much like development, it isn't a profession. It's not even a trade. Anyone who says they're a designer (or developer), is. For better and (usually) worse.


I guess color theory hasn't been given much love in web development because it's hard and murky. Once you do a dive into it, you realize there are no correct answers, only tradeoffs. It's a lot harder to be zealous about some color model being the ultimate color model because you'll encounter reality soon enough.

I suppose the original author is a zealot for linear SRGB, but that might change once he encounters a black and white gradient interpolated in linear.


In design work there is rarely a single right answer but there are often a lot of obviously wrong answers. Unfortunately web browsers seem to have latched onto those for many of these features, hence the awful gradients, janky animations, glitchy font rendering, etc.


I would guess it has not been given much thought because, as the initial comment suggests, most developers and users don't care at all. The most-used web sites look terrible, but people are using them for function: searching, shopping, and paying their bills.


The frustrating thing is that users do care, at least up to a point. There have been success stories in the modern era where a well-designed, well-executed product has dethroned a long-standing incumbent (or at least provided credible competition despite being David against Goliath) and the slick presentation seems likely to have been a competitive advantage. There's no shortage of complaints about sites or apps with poor design either.

However we know that most users will prioritise other aspects, such as functionality or security, over aesthetics. If the advantages of a nice design are purely cosmetic and don't also improve factors like the usability of an app or the credibility of a marketing site, then that's probably going to be less important than missing some key feature the user wants, or lacking the network effects that existing products in the market have established, or convincing a potential customer that you're trustworthy before they hand over their card details.


What do you think is an example of a site where a new player has come along with a much better looking site than the incumbent and beaten them?

Not trying to be negative, I just can't think of any, and that would be excellent evidence that it does matter. Just bring up a list of the top 200 web sites and they read as a reference gallery of visually terrible sites.


The first example that comes to mind is banking. In my country (UK) there seems to be a clear gap now between the old school "high street banks" that have been around forever and a new group that have entered the market more recently.

Both types of bank offer similar basic accounts and services. Interest rates and fees also tend to be similar at the moment.

What does separate them is that the established "big names" almost universally have terrible online banking and mobile apps while the newer alternatives tend to focus on these facilities (I think some of them don't even have physical branches) and they have slick, modern UIs aimed squarely at younger generations and the digital native market.

I don't have any hard data to cite but the number of younger people I see using these services suggests they are having some significant success at breaking into even such a heavily regulated and competitive market.


Exactly, this site isn't very good at explaining anything. I've read the text but I still don't really know what I'm looking at


Colour is definitely a complex thing. My friend who is a graphic designer prefers to use Firefox over Chrome on MacOS because it renders “nicer” colours - being somewhat colourblind, I had never noticed, but indeed there is a subtle difference between how the same hex colour code renders in both, according to DigitalColorMeter…


Maybe it's because I'm on Firefox that all these images look identical. I have no idea what the author is talking about.

EDIT: nevermind, I see it now. The square in the last few images is slightly green when CSS is scaling it.


I have close to zero knowledge about this, but I believe it may be because of different default color profiles?

On Linux, when I force Chrome to use sRGB (chrome://flags/#force-color-profile) it makes the colors identical to Firefox's. It doesn't seem to make a difference on MacOS though, the colors are always identical there


... and this is why I prefer desktop apps like Lightroom Classic CC over their web-based equivalents. This and broken shortcut keys, e.g. Ctrl+R shell reverse search in a web IDE.


Well, the gradient is specified as an RGB gradient, yet the post talks about brightness. That's not the right comparison. There are different color models, and a gradient will simply interpolate between the values.

A better example would be to use HSL or LAB for the gradients (Red->Green):

  background: linear-gradient(to right, hsl(0, 100%, 50%), hsl(120, 100%, 50%));

  background: linear-gradient(to right, lab(50% 128 128), lab(50% -128 128));
  background: linear-gradient(to right, lab(50% 128 128), lab(100% -128 128));
The two different LAB values for "green" show that if you're just changing Lightness, you'll get a darker shade of green. The RGB green needs 100% Lightness.

HSL produces the same output as an RGB gradient, which is wrong, as only L should be modified.

LAB works only in Safari, and although the result is slightly different, it's still incorrect, probably due to my conversion.
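To make the HSL point concrete: true hue interpolation from red to green stays fully saturated through yellow, while per-channel sRGB averaging (what browsers do with hsl() values) lands on a dark olive. A small sketch using Python's colorsys (note its HLS argument order):

```python
import colorsys

def hsl_mid(h1, h2, s=1.0, l=0.5):
    # Midpoint of a hue-axis interpolation (hues given in degrees).
    h = ((h1 + h2) / 2) / 360
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(v * 255) for v in (r, g, b))

# Red (hue 0) to green (hue 120): the hue midpoint is full-brightness yellow...
print(hsl_mid(0, 120))  # (255, 255, 0)

# ...while averaging the sRGB channels of red and green gives dark olive:
print(tuple((x + y) // 2 for x, y in zip((255, 0, 0), (0, 255, 0))))  # (127, 127, 0)
```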


I don't think this is the best approach. There are two independent things in a gradient - the colors to interpolate between and the interpolation path - and I would not tie them together. If I specify the same colors just using different color models, then I still want the same gradient unless I explicitly demand different interpolation paths. You would not want that interpolating between 0 and 42 yields a different result than interpolating between 0x00 and 0x2A, would you?

My ad hoc expectation would be that I get a linear interpolation of perceived brightness, saturation, and color independent of how I specified the color to interpolate. Or maybe even better a path of minimal perceived color difference with uniform perceived color difference along the path which is probably at least slightly different from interpolating brightness, saturation, and color independently.


Values in different colorspaces are not fully interchangeable.

Some values don't exist in another space; some might map to multiple values.


That doesn't really matter; you don't need to convert between different color spaces that much. Whatever the user specifies, you convert into one color space, say CIE XYZ or whatever suits the interpolation you want to perform, and then you convert the interpolation result into whatever you need as the output color space. The only real requirement is that the interpolation space covers all input spaces; then it does not matter that not all input spaces cover the same colors or have non-unique representations.


While it's true that a LAB gradient will produce a more perceptually uniform gradient, it doesn't alter the article's point that sRGB isn't a linear colorspace. It has an associated nonlinear gamma function used to help compress values to 8-bit, so interpolating between two sRGB values is not the same as interpolating between them in a linear RGB colorspace, and you get brightness problems as a result.

So yes, LAB will produce more perceptually uniform gradients than RGB, but browsers are exacerbating the problems of RGB gradients by not implementing them properly.


Gradients in a better colorspace indeed make more sense to me (HSL/HSV/Lab are all way more useful for practical applications than what sRGB does), same as HSL palettes in tools like Photoshop. But this transparency example I don't understand. Admittedly, I never gave it much thought, but all these "my browser" examples look MUCH better and more intuitive to me than the "correct" case. Maybe 25% opacity should look a little more potent, but the "my browser" examples are actually usable; they look good, even though I agree they should all look the same (they don't for me). Each of them looks better to me than the "correct" example, which simply cannot be used to shade a picture. It doesn't look like glass. It's ugly.


I wrote a blog post all about this! And how to get better color-mixing in your CSS gradients.

Shameless plug: https://www.joshwcomeau.com/css/make-beautiful-gradients/


This goes back to the original sin of using RGB/sRGB to express natural colors.

Nature doesn't work in terms of how much red, green and blue it needs to mix on the canvas. Sure, it's great and intuitive to represent colors on digital systems, but that's it. Light in nature works in terms of hue/tint along a continuous spectrum, brightness and saturation.

Color spaces such as HSV/HSL model these concepts much better than any RGB algorithm, and doing math with them produces results that actually make mathematical sense. YUV also takes human color perception into account. It's not a coincidence that most smart lights use HSV/HSL color spaces, and most cameras opt for YUV-family color spaces.
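As a small illustration of the perceptual weighting in YUV-family spaces (a sketch; luma_bt601 is my name for it, and strictly speaking Y' is computed from gamma-encoded values):

```python
def luma_bt601(r, g, b):
    # BT.601 luma weights: the eye is most sensitive to green, least to blue.
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma_bt601(0, 255, 0))  # pure green: ~149.7, reads bright
print(luma_bt601(0, 0, 255))  # pure blue:  ~29.1, reads dark
```

A plain RGB average treats all three channels the same, which is part of why it disagrees with perceived brightness.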


It would be nice to have explanations and comparisons of the techniques that are employed for the current behavior and the new behavior. The color gradient criticism feels valid from an aesthetic standpoint but the problem seems to be that a simple linear interpolation doesn't provide the desired color properties. This raises the question as to what exactly are the desired color properties from a technical standpoint and when is each technique applicable. Say we changed all color interpolations from an interpolation in RGB space to HSV space (as I imagine the color gradients suggested are achieved), would this have any unforeseen consequences?


It's not RGB itself that is "broken"; it's that the typical RGB color spaces used in practice, sRGB and similar, are "broken" because they are not linear.

The nominal R, G, B values are obtained by taking the original linear intensity (brightness or similar) and raising it to the power of gamma (typically 1/2.2). The purpose is to give more "bits" to the lower (darker) end, where human eyes are more sensitive [1]. Without gamma, at relatively low bit depth (like your typical 8-bit), it's very easy to produce banding in dark gradient areas.

Up to this point, there is nothing wrong.

However, if you want to average two colors (which underlies almost all operations on bitmap images, including gradients, blending, resizing, blurring, etc.), the correct way, just like real-world physics, is to average them in their original linear form; but most applications/implementations, as shown on this page, just average the nominal gamma-encoded values.

[1] There is a common belief that gamma encoding was introduced historically because of how CRT screens worked (the relationship between electrical signal and brightness, etc.). But most in-depth articles on this topic say that's just a coincidence.
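To make the difference concrete, here's a minimal Python sketch using the piecewise sRGB transfer function from IEC 61966-2-1 (the function names are mine; note this is the exact piecewise curve, not the gamma-2.2 approximation):

```python
def srgb_to_linear(v):
    # Decode an sRGB value in [0, 1] to linear light.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # Encode linear light back to the sRGB curve.
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# Averaging the gamma-encoded values (what most software does):
naive = (0.0 + 1.0) / 2
# Averaging in linear light (what physics does), then re-encoding:
correct = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)

print(round(naive * 255))    # 128: the familiar "middle gray", too dark
print(round(correct * 255))  # 188: a true 50% linear blend of black and white
```

The 128 vs. 188 gap for a 50/50 black/white mix is exactly the brightness error visible in the article's examples.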


Wow, yeah. I have done some hobby work with digital colors and rendering and I had no idea how used to the visual artifacts from math done on sRGB values directly I was that I just assumed a "fancy" transformation. Thanks for the insight.



The “correct” colour mixing example looks a little better, but only a little bit. The others, I don’t see a problem with (I have an M1 MBP, Firefox). I don’t think any of this would really affect my browsing experience one way or the other…


Maybe we could instead say, "web color is still not perfect". Nature is a wild animal and even getting computers to simulate natural color at all is a feat. Want to see real color? Get out of the fucking building.


Probably the best way to test if you are doing it right is to compare to an optical blend.

Say you blend red and green, such as by having a red ( rgb(255,0,0) ) div, and putting a div that is green with 50% transparency over it ( rgba(0,255,0,.5) ).

Now, make a checkerboard pattern of red and green divs. Look at it from far enough away that it appears as a solid color. (This is easier if you make each div occupy one pixel, while avoiding any antialiasing.) You could also use something like frosted glass to blur it.

Do they appear the same color?

I would not expect it to appear as a bright yellow, but rather a darkish yellow (though not so dark that it appears brown).
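Assuming an sRGB-calibrated display, you can predict what the checkerboard should optically blend to by averaging in linear light (a sketch; the helper names are mine):

```python
def srgb_to_linear(v):
    # Piecewise IEC 61966-2-1 decode, sRGB [0, 1] to linear light.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # Piecewise encode back to the sRGB curve.
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def optical_blend(c1, c2):
    # Predicted appearance of a fine 50/50 checkerboard of two 8-bit sRGB colors.
    return tuple(
        round(255 * linear_to_srgb((srgb_to_linear(a / 255) + srgb_to_linear(b / 255)) / 2))
        for a, b in zip(c1, c2)
    )

# rgb(255,0,0) checkered with rgb(0,255,0):
print(optical_blend((255, 0, 0), (0, 255, 0)))  # (188, 188, 0)
```

So the far-away checkerboard should match rgb(188, 188, 0), noticeably brighter than the rgb(128, 128, 0) that naive sRGB averaging produces.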


Keeping aside the browser issue (which is already a quite interesting issue)... The correct scaling double square is already an issue with my eyes:

Mixing black and white dots to produce a grey was a pretty common trick when we did not have enough colors in our computers decades ago. But when I step back to lose the perception of the texture, why do I still see a very different grey than the center-grey with the same overall brightness?

I can't even say which one is brighter/darker (my answer changes when I look with a different mindset).


Many displays are not calibrated well to do that test justice. Also if you have a high DPI display, then the browser could already scale that picture for you, screwing with the test. You should display those images with image pixels matching device pixels.

You can visually test your display calibration with [1], but you should ensure that the image is displayed 1:1, which might not be the case with your browser and hiDPI settings.

[1] http://www.lagom.nl/lcd-test/gamma_calibration.php#gamma-tes...


Chiming in for HiDPI displays, I have a global 125% scale, so I had to zoom out the browser to 80% to get 1:1 on the rendered images.


Very interesting aspect of color management which I never paid attention to.

I wrote a simple primer on color profiles [1] a while back which mentions the state of custom color profiles on the web. I do wonder what result you'd get if you blended images with different color profiles on a web page, as browsers can display non-sRGB image content.

[1]: https://blog.oxplot.com/understanding-color-management/


Designs which depend on a particular colorspace are bad anyway.

I used the proper black background/white text override in my settings for a while, it was a great improvement on the sites where it worked. But it broke too many sites.

Not really sure how we've managed to regress so far on the ideal of having webpages just provide content and letting the browser render it as appropriate. Oh well, readerview it is I guess.


Something I've been curious about for a long time:

I grew up using an IBM 8513 VGA monitor connected to a PS/2. That's the mental model I use when thinking about color; to my brain, the default grey color (value 7) is "right in the middle" between what I think of as black and white.

What modern colorspace most closely maps to the color space I grew up with?


You can hardly say the IBM 8513 VGA has a color space. It just has colors, meaning its color space is kind of arbitrary; two different monitor models could have different primary colors (as in slightly different reds/blues/greens).


TBH, the "correct color mixing" and the "your browser" color gradients look indistinguishable to me.


It would be helpful to have the expected result near the browser output.

Anyway, I think the "correct" scaling is at the very least objectionable. From a numeric point of view, it would mix in gray, but from a visual point of view we do perceive the outer box as darker because of nonlinearity in display and perception.


> Anyway, I think the "correct" scaling is at the very least objectionable. From a numeric point of view, it would mix in gray, but from a visual point of view we do perceive the outer box as darker because of nonlinearity in display and perception.

I can barely perceive a brightness difference between the two areas. Meanwhile, scaling the gamma-encoded image results in a severely wrong result where the outside is far darker than the inside.


I noticed some extreme weirdness with the "correct scaling" example. For me (Firefox on Linux using LCD displays) , it looks different depending on which monitor I'm using - on one, the outer part of the gray square looks like there are yellow dots, while on the other monitor, it flickers, but only while the image is over a particular place on the physical monitor (this happens with other images where adjacent pixels frequently have a large difference in color).

Also, the apparent color changes depending on the zoom level - at anything other than 100% zoom, the effects described above disappear and the outer square appears darker than the inner one (although not uniformly). This may be linked to the bug that causes the browser to scale the image incorrectly.


> on one, the outer part of the gray square looks like there are yellow dots, while on the other monitor, it flickers, but only while the image is over a particular place on the physical monitor (this happens with other images where adjacent pixels frequently have a large difference in color).

Your monitor likely has a 6-bit panel and uses FRC to emulate an 8-bit display. This tends to cause content-dependent flickering and other temporal artifacts.


> The image should retain the same overall brightness as it is scaled down.

Incorrect.

While large regions of homogeneous color should indeed retain their brightness, thin one-dimensional structures are supposed to become darker and disappear gradually as the image is scaled down.


"Supposed to" in what context, by whom?


In any context, by everybody.

This is what happens when you move far away from an object, which is what "zoom-out" or "downscale" is all about. The total amount of light that you receive from an object decreases with the square of the distance.


Yes, but if you scale to half size (in linear dimensions), then the overall luminance goes down by exactly 1/2*1/2=1/4, not more, not less.


According to this logic scaling an image down would make it darker.


It does. Imagine a one-pixel-wide vertical white line on a black background. As you downscale it, by whatever reasonable means, the line becomes a darker and darker grey.

EDIT: This is in the context of the image shown in the article, which is a gradient image consisting of light lines over a black background. Of course, if you have thin black lines on a white background, they will become lighter upon downscaling. In practice, on a real image, some parts will become darker and others lighter, depending on their particular textures.


If you have an image consisting of a thin white line on a black background, then indeed as you scale it down the pixels covering that white line will become darker because each pixel corresponds to a small amount of white in a mostly-black square. (I am making some assumptions about what scaling is meant to mean here, but the conclusion doesn't change much if you think about it differently -- e.g., with pixels as point samples of a bandwidth-limited function.)

But those pixels are also larger and the ratio (amount of light in image) : (number of pixels in image) should not change.

The author of the OP is complaining that when you take an image and rescale it, its overall average brightness changes.
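Both views can be reconciled with a quick sketch: a 2:1 box downscale done in linear light dims the thin line's pixel, yet leaves the image's average (linear) brightness unchanged. Assumptions: a simple box filter and the piecewise sRGB transfer function; the helper names are mine.

```python
def srgb_to_linear(v):
    # Piecewise IEC 61966-2-1 decode, sRGB [0, 1] to linear light.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # Piecewise encode back to the sRGB curve.
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# One row of a 1-pixel white line on black, as 8-bit sRGB values:
row = [0, 0, 0, 255, 0, 0, 0, 0]

# 2:1 box downscale performed in linear light:
lin = [srgb_to_linear(v / 255) for v in row]
small = [(a + b) / 2 for a, b in zip(lin[0::2], lin[1::2])]

# The line dims to 50% linear intensity (sRGB ~188, not 128)...
print([round(255 * linear_to_srgb(v)) for v in small])  # [0, 188, 0, 0]

# ...but the average linear brightness per pixel is preserved:
print(sum(lin) / len(lin) == sum(small) / len(small))  # True
```

Doing the same box filter on the raw 8-bit values instead would give the line pixel a value of 128, visibly darkening the image overall, which is the error the article demonstrates.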


I don’t understand the “correct scaling” part. The outer ring of the square has a mix of gray and darker pixels, it cannot possibly be as bright on average as the center square which is all grey pixels?


It looks like it is supposed to be a centre of 50% grey and a ring of alternating white and black pixels. As the note says: if you're seeing a darker ring (alternating black and grey pixels) your browser is messing up.


Is there a CSS library that does the transformations from an arbitrary color space to sRGB for you? And would it make sense to enhance CSS to the point where this is possible?


While I sympathize with people for whom this matters, I still find it weird that other people care/are concerned about/want to control my experience.

I don’t want you to control my experience: Maybe I like having weird colors; maybe I’m color blind and need alternative colors; maybe I don’t want color at all and want a web experience that is text centric.

The model of the web is backwards, and it makes browsers ridiculously over-complicated because they're software designed to let other people control my experience navigating and consuming information.

Give me the information and keep your colors and layout and scripts and ads and …


Unless you are the one implementing color rendering in web browsers, someone else is controlling your experience of color on the web.


I can select a web browser that displays colors how I’d prefer, just as I select a monitor.

The problem is that the web as currently constituted allows (encourages) the web page author to dictate my experience and my browser will mostly obey.

I’d prefer that there be no such thing as “pages”, just “information”, and that all display decisions would be made by software I completely control.


The change from RGB to HSV is, I have to say, confusing amid all the online chatter; the talk here treats B&W versus RGB as a matter of colour "taste", but individual optical focus matters too. It goes beyond chat and news and should be studied.

Avoid this kind of flaming pit and consult a person you know and trust. Opticians are great for this in my experience. They should actually point out the truth to you.


This is one of those things that seems extremely picky to me and at the same time I'm glad that someone out there cares enough about it to fix it.


The (s)RGB color model is not some legacy choice. It is closely tied to how colors are stored, composited, and communicated to physical displays. And this problem seems a bit artificial - how often do you see a linear gradient between colors like the ones shown in the examples? Finally, it is also trivially easy to get a gradient that looks like the "correct" ones presented in this article, by just adding a few additional color stops along the way that are closer to where you want the intermediate values to be.
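For example, a single extra stop can be precomputed offline in linear light (a sketch; the transfer functions follow IEC 61966-2-1, and the helper names are mine):

```python
def srgb_to_linear(v):
    # Piecewise decode, sRGB [0, 1] to linear light.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # Piecewise encode back to the sRGB curve.
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def linear_midpoint(c1, c2):
    # Color halfway between two 8-bit sRGB colors, mixed in linear light.
    return tuple(
        round(255 * linear_to_srgb((srgb_to_linear(a / 255) + srgb_to_linear(b / 255)) / 2))
        for a, b in zip(c1, c2)
    )

print(linear_midpoint((255, 0, 0), (0, 0, 255)))  # (188, 0, 188)

# Which could then be inserted as a 50% stop:
#   background: linear-gradient(to right, rgb(255,0,0), rgb(188,0,188) 50%, rgb(0,0,255));
```

More stops (say at 25% and 75%) tighten the approximation further; two or three are usually enough that the dark band in the middle disappears.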


As someone who has little to no idea how colors and operations on them work, I must say that was an incredibly interesting read!


I would imagine it actually looks even worse since low-mid tier laptops can only display 60% sRGB.


Even claimed 100% sRGB isn't the whole 100%. I'm aghast at how only recently 100% P3 coverage is finally becoming normal at the high-end (and is still uncommon for gaming-class laptop displays). I can't even consider System76 or Framework or similar for this reason. If you edit any visual content or work with designers, you owe it to yourself to get a P3 display that is calibrated.


I doubt it's ever going to change, for compatibility reasons.


Can someone explain this for a sight-impaired person?


Consider someone playing a trombone, doing a slide between a low note and a high note at the same volume. With web color it gets quieter in the middle.


I would not say it was "broken". Maybe just not implemented correctly.


Here's a real-world image I found on the internet which is broken: both the left and right sides of this image should have the same brightness (50%), but at the default browser scaling they appear to have different brightness, which is wrong and thus broken. https://www.redsharknews.com/hs-fs/hubfs/Imported_Blog_Media...


Well, no browser has correctly implemented tables since ever, why should they bother about colors?


This is the first time I ever hear about it, to be honest. Is there a link to something like OP but for tables?


Not just the web: PC colour is totally and utterly broken. The web is just a small part of this.

There's an entire field of colour management that Microsoft, Linux, and Google are carefully ignoring. They'll occasionally stumble upon ICC colour spaces, HDR, or 10-bit, but they make sure to break everything even worse and leave it like that forever.

Sigh... I have gone on the same rant annually since about 2010, starting on Slashdot. Most recently on YCombinator News in 2021. It's a whole new year, time to repeat my rant and pray to the IT gods that someone at Microsoft or Google stumbles upon this:

The current year is 2022. The future. We have these amazing display technologies such as OLED, HDR, and quantum dots. In this sci-fi fantasy world I cannot do any of the following:

- Send a photo in any format better than an SDR sRGB JPEG in the general case, such as an email attachment, document, or chat message. These are 30 year old standards, by the way.

- Send a photo in any format and expect colour reproduction to be even vaguely correct. Wide-gamut is especially pointless. It is near certain that the colours will be stretched to the monitor gamut in an unmanaged way. People will look either like clowns or zombies depending on the remote display device, operating system, software, and settings.

- Send a photo in 10-bit and expect an improvement in image quality when displayed.

- Expect any industry-wide take-up of any new image encoding format. It is a certainty that each vendor will do their "own thing", refuse to even acknowledge the existence of their competitors, and guarantee that whatever they come up with will be relegated to the dustbin of history. Remind me... can ANY software written by Microsoft or Google save a HEIF/HEIC file? No... because it's an "Apple" format. Even most Microsoft software can't read or write their own JPEG-XR format, let alone Google or Apple. Netflix developed AVIF but I'm yet to see it taken up by any mainstream system. Etc...

New in 2022:

- Display HDR on Linux.

- Display HDR even vaguely correctly on Windows. I mean, good try, but simply clipping highlights without even attempting to implement tone mapping is just plain wrong.

- Use a colour calibrator device on Windows 11, which totally broke this functionality.

- Use a colour calibrator device for calibrating a HDR display. I have a device that can measure up to 2000 nits. Can I use this calibrate my HDR OLED laptop display? No. That's not an option.

- Turn on HDR in Windows 11 and not have it cause a downside such as restricting the display gamut to sRGB.

- Use HDR in any desktop application for GUI controls.

- Use colour management for any desktop application for GUI controls that isn't an Electron application -- the only platform that does this vaguely correctly under Windows by default.

Things have even regressed over the last few years! Windows 10 Photo viewer could do colour management -- badly. It would show an unmanaged picture at first, and then overwrite it with the colour managed picture a bit later, so you get colour flickering as you scroll through your photos. Okay, fine, at least it was trying. Windows 11 does not try. It just assumes 8-bit sRGB for everything.

Similarly, the 14 year old WPF framework supported wide gamut, 16-bit linear compositing, and HDR. The "latest & greatest" Win UI 3 framework... 8-bit SDR sRGB only.


This is because Windows and Linux desktops are fundamentally not color managed. Everyone for themselves - color management needs to be implemented in every single application. Microsoft created scRGB which could have brought system color management to Windows, but didn't go through with it.

> Wide-gamut is especially pointless. It is near certain that the colours will be stretched to the monitor gamut in an unmanaged way.

This is also the manufacturer's fault. Some wide gamut displays don't even have sRGB emulation, and pretty much every wide gamut display defaults to their native gamut even in 8-bit mode, which is virtually never the right thing to do. sRGB emulation naturally reduces contrast, which is generally already very poor in all but the highest end PC monitors. To add insult to injury, the 10-bit / HDR (these are technically independent, but generally coupled in monitors) mode is complete shit in virtually every PC monitor advertised with HDR support. So you spend money on a display device that is designed essentially in exact opposition to its capabilities and your needs.

(Naturally, reviewers tend to ignore all of these problems apart from HDR actually being pointless with the current state of PC monitor tech; many praise the "vibrant colors" this gives you. Of course, everyone looks like they got sunburn, but who cares. Vibrant! Vivid! Saturated! The reddest reds money can buy! The greyest blacks! The most washed out shadows! Amazing! 8/10! Recommend! Buy now through my affiliate link!)


> wide gamut display defaults to their native gamut even in 8-bit mode, which is virtually never the right thing to do.

Then why have a wide gamut display!?!

The whole point is that you have a greater capability. It should be on all the time, not just when doing "professional image editing" or whatever. There SHOULD be no downside!

Similarly with HDR -- it is literally a superset of SDR, so why are there endless support forum complaints about it causing issues when enabled!? It SHOULD just work! Instead, early versions in Windows 10 would shift the desktop by 1/2 a pixel and cause blurring. Or darken the desktop. Or, more recently, force everything to sRGB, including colour-managed applications like Adobe Photoshop or Lightroom.

The correct thing to do is for each display to always be running in native gamut mode. The whole concept of in-display colour space emulation is absurd[1]. Instead, the display should feed back its native gamut to the operating system, which should then take care of tone mapping via either software or the GPU hardware. This almost happens now. Displays have EDID metadata that include the "coordinates" of their colour primaries. Windows even picks this up! Aaaaand then ignores it, and even strips out the information in newer SDKs like Win UI 3, because... I don't even know.

[1] Ideally, GPUs should be doing tonemapping under OS control, but to avoid banding this would need 12-bit or even higher output to the display. This would take too much bandwidth, so instead displays do tone mapping using LUTs with as many as 14-bits. Except that these LUTs are 1D and control over them is totally broken...


Because wide gamut (and better than 100 cd/m²) displays have been around for more than a decade now -

(though IIRC none - aside for black & white medical monitors - had better than 8 bit per color in hardware until "HDR in screens" showed up - IIRC also the PlayStation 3 and some games had a 10 bit per color mode that caused a lot of compatibility problems for hardly any benefit ?)

- but (non-Apple) OS support has been abysmal until recently,

and you probably need to pay a technician to use a probe to calibrate your screen anyway, so only some work environments would bother to set them up correctly ?

----

[1] Dolby PQ only needs 12 bits for up to 10k cd/m² ?

https://www.avsforum.com/threads/smpte-webinar-dolby-vision-...


There's "30 bit" (10 bpc) displays, which have been around for a fairly long time. These are not HDR, but usually native AdobeRGB with high bit depth. The way that works / used to work is that when an application uses a 30 bit surface, it still outputs an 8 bit image which travels through the Windows GUI pipeline and it's the graphics driver which replaces it with the real 10 bit image on scanout.

I don't think any of the current TN/IPS/etc. PC monitors with HDR have 10 bit panels. HDR is achieved generally through sheer imagination (most) and less commonly through more or less (rare) rough local dimming, not by actually having a panel capable of anything close to HDR contrast ratios.


And were those 10 bit per color in hardware ?

Also, all of this is about liquid crystals (and the electronics controlling them), but cathode ray tubes, plasmas, and light emitting diodes have quite different characteristics...

I would be surprised if nobody had made yet (professional ?) non-CRT PC monitors capable of more than 256 values discrimination ?! (Not even Apple ?!? Or at least some super-expensive, but still commercial (= non-experimental) displays ?)

Also, I guess a similar benefit might be achieved by using more than 3 primaries : who was it already that used a 4th "yellow" subpixel in their (IIRC) diode displays ?

(Though it's still not clear to me why more displays aren't using the standard (at least in Charge Coupled Devices) 2x2 Bayer Filter with double green, rather than a 3x1 one ? Too much reliance on Windows' ClearType hack working properly ? But why in TVs too ??)


> And were those 10 bit per color in hardware?

Yes

> I would be surprised if nobody had made yet (professional ?) non-CRT PC monitors capable of more than 256 values discrimination ?! (Not even Apple ?!? Or at least some super-expensive, but still commercial (= non-experimental) displays ?)

They exist, but it's limited to the high-end. E.g. Apple's XDR display has a 10-bit panel and FALD.

Reference-class monitors are generally of the "dual film" type, which essentially means that the panel is two LCDs on top of each other, one being used to control only brightness of a given pixel, and the other for brightness and color.

> (Though it's still not clear to me why more displays aren't using the standard (at least in Charge Coupled Devices) 2x2 Bayer Filter with double green, rather than a 3x1 one ? Too much reliance on Windows' ClearType hack working properly ? But why in TVs too ??)

Non-standard pixel layouts are common in OLEDs, e.g. RGBW, weird pyramids and uneven subpixel sizes (I'm assuming due to differing phosphor efficiencies). These all lead to poor text and UI clarity, as one would expect. (It's worth pointing out that OLEDs, being LEDs at their heart, have inherently lacking linearity, which is why most of their brightness range is covered by digital modulation.)

RGB subpixels require the fewest number of subpixels, which also means reduced brightness loss due to LCD structures. Going to Bayer would mean 33 % more pixels for the same display, except it's dimmer (increasing pixel pitch by 50 % horizontally does not make up for halving it vertically), more expensive to make and also dimmer because now you have two green dots per pixel, so they need to be half as bright, throwing away more of the backlight, and the drivers now need to perform with inhomogeneous pixels -- without an obvious upside.

The reason color camera sensors tend to use Bayer filters is - I think - because green contributes most to perceived brightness, so doubling the sensor area for green means halving green's contribution to luma noise. This problem does not exist in displays.


What backlight is even linear though ?

IIRC cathode rays aren't. Plasma-fluorescent ?

Are liquid crystals even used in big screens with something else than a diode backlight these days ? (Maybe cheap ones still use plasma-fluorescent ?)

----

Ooh, right, silly me, for displays green light being human-efficient is backwards !

So you need less than 1/3 of green subpixels, rather than more ! (Anyone tried that yet ?)


AVIF probably has potential, as it's part of AV1. Once we get AV1 hardware acceleration, hopefully AVIF will become the default picture format too?


The fewer power we give to designers, the better for us.

A website is an interface, not your own personal fiefdom to grow your portfolio.


A website is an interface to display ads while you pad content around them.


How does your comment have anything to do with this post?


less power, fewer powers


Web color is still using linear color space.

There is nothing wrong with this because now the browser can decide how to convert it. So the title should be: browsers still use linear color space.


Browsers use the sRGB color space. If they used a linear color space they wouldn't have this issue.


I assumed the interpolation of gradients isn't a matter for the colorspace?


The problem is the non-linear conversion between sRGB values and actual displayed brightness. Some operations have physical equivalents (e.g. transparent overlays can be defined equivalently to viewing a background through colored glass). If they do and you use textbook physical models to implement them, you need to also use linear intensity values to get it right. A lot of code out there ignores this.
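A sketch of the difference for a single channel ("over" is the standard source-over operator; the helper names are mine):

```python
def srgb_to_linear(v):
    # Piecewise IEC 61966-2-1 decode, sRGB [0, 1] to linear light.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # Piecewise encode back to the sRGB curve.
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def over(src, dst, alpha, linear=True):
    # Composite one 8-bit channel over another at the given opacity.
    if linear:
        s, d = srgb_to_linear(src / 255), srgb_to_linear(dst / 255)
        return round(255 * linear_to_srgb(s * alpha + d * (1 - alpha)))
    return round(src * alpha + dst * (1 - alpha))

# 50%-opaque white over black:
print(over(255, 0, 0.5, linear=False))  # 128: blend of encoded values, too dark
print(over(255, 0, 0.5, linear=True))   # 188: blend done in linear light
```

The physical "colored glass" model corresponds to the linear=True branch; most compositing code out there effectively runs the other one.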


Does it become obvious that it is a problem when I point out that using the sRGB color space means we do linear interpolation of values in a non-linear space?



