Improving Color on the Web (webkit.org)
231 points by rayshan on July 1, 2016 | 74 comments



People who focus on color profile support in a world where all colors are compressed to 8 bits per channel with a nonlinear gamma curve are obsessing about the wrong problems. I notice banding artifacts (caused by quantization to 8 bits) and blending issues (caused by linear blending of nonlinear values) way more often than I notice anything related to color gamuts.

People who care about color should be pushing to change the default color representation to a linear format with 16 bits per channel rather than making marginal changes to the edges of 8 bits per channel gamma compressed representations.

That has to start with the addition of 16-bit floating point format support to more hardware. It's really a crying shame that so little hardware supports 16-bit floating point. In addition to imaging, it would be useful for audio and deep learning too.
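
To illustrate the blending half of that complaint, here is a tiny sketch (Python, approximating the sRGB curve with a plain 2.2 power law, so the exact numbers are illustrative only):

    # Why blending gamma-encoded values goes wrong: a 50/50 mix of
    # black and white, approximating sRGB with a 2.2 power law.
    def to_linear(v):                 # 8-bit encoded -> linear light (0..1)
        return (v / 255.0) ** 2.2

    def to_encoded(v):                # linear light (0..1) -> 8-bit encoded
        return round(255.0 * v ** (1 / 2.2))

    naive = (0 + 255) // 2                                     # -> 127
    correct = to_encoded((to_linear(0) + to_linear(255)) / 2)  # -> 186

    print(naive, correct)  # the naive blend is visibly too dark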


I've got a wide gamut monitor, and I vehemently disagree. Banding issues are real, but in my experience minor inconveniences compared to the huge step forward in realism that wide gamut brings.

Part of it is probably the kind of content you care about: "artificial looking" UI such as typical desktop apps cares least (imho) about gamut. But if it doesn't do subtle gradients, then you won't notice banding either. On the other hand, for photographic content, you really notice those new colors you just can't represent in sRGB. I would really regret having to give that up, and would tolerate even visibly annoying banding every day if it means having pictures that actually look like real life, and not like muddy, lifeless copies.

But to be clear: I very rarely notice any banding whatsoever, and when I do it certainly seems to be due to the source, not the processing. Perhaps it's an OS/driver thing, or perhaps it's because most pictures contain sufficient noise to act as a poor man's dithering, but banding just doesn't seem to be a (meaningful) issue. I mean: it's visible if you have large-area subtle gradients, but not to a huge degree, and well... don't do that?

Most things I see are either entirely flat (no banding) or photographic (noisy enough that you won't easily see banding).

I agree that a deeper color space is about time, although I suspect that going 16-bit floating point is an overreaction for most scenarios. Floating point isn't free, and neither are all those extra bits. With a decently high gamma, you can probably get away with just 10 bits per channel, which would conveniently keep a pixel within 32 bits for efficient packed processing. And in the odd case that you really want to spend more than 10 bits on color detail, then even for HDR you really don't need floating point - due to gamma correction, 2 extra bits means 20-50 times more light - more than enough for any kind of HDR that's likely to be displayable any time soon (and really, even if we could make peaks of 30000 nits - does that sound like something you want to look at?)
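
For what it's worth, the "20-50 times" figure checks out under a pure power-law gamma (a simplification of real transfer curves, so take the numbers as rough):

    # Two extra code bits multiply the code range by 4; a pure
    # power-law gamma turns that into 4**gamma more linear light.
    for gamma in (2.2, 2.6):
        print(gamma, round(4 ** gamma, 1))   # 2.2 -> ~21.1, 2.6 -> ~36.8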


10-bit with a log or gamma encoding is widespread in film and video work, and I've never seen a banding problem, even with purely generated gradients. The Rec. 2020 UHD standard does recommend 12-bit gamma encoding to deal with its ludicrously wide gamut.

For HDR, a PQ (perceptual quantisation) encoding curve is already standardised by SMPTE - page 8 in these slides goes through the process of how it was worked out, right from human visual system basics: https://www.smpte.org/sites/default/files/2014-05-06-EOTF-Mi...
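
The curve itself is compact enough to sketch. Here is the ST 2084 inverse EOTF in Python, with the constants as published in the standard; treat it as illustrative rather than a production implementation:

    # SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits
    # -> 0..1 signal value. Constants per the standard.
    m1 = 2610 / 16384            # ~0.1593
    m2 = 2523 / 4096 * 128       # 78.84375
    c1 = 3424 / 4096             # 0.8359375
    c2 = 2413 / 4096 * 32        # 18.8515625
    c3 = 2392 / 4096 * 32        # 18.6875

    def pq_encode(nits):
        y = min(max(nits / 10000.0, 0.0), 1.0)   # normalize to the 10,000-nit peak
        p = y ** m1
        return ((c1 + c2 * p) / (1 + c3 * p)) ** m2

    print(pq_encode(100))   # ~0.51: SDR white uses about half the code range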

Given the abundance of log or gamma encodings for display imagery you might wonder why true linear 16-bit float is so common in CG production - why not log encode those 16 bits and get loads smoother gradients? Maybe the answer is that during production those linear files often also encode non-image data like vertex positions and normals, and perceptually "good" quantisation of those could lead to unexpected precision problems...


Exactly. Wide gamut is simply a selling point to have "vivid" colors. But most images, including UI, are still sRGB. Showing them on a 24-bit wide-gamut display means every color component is mapped to only a subset of the 0-255 values. That leads to more noticeable banding than viewing the same image on an sRGB display [1].

The aforementioned DCI-P3 even has a higher gamma value of 2.6. Currently, almost all design compositing is done in the gamma-compressed space, and the incorrect AA [2] and blending will be even worse on those devices.

Another thing is that most displays are not even calibrated properly, to say nothing of the technical characteristics of the screens themselves.

[1] https://twitter.com/vmdanilov/status/745321798309412865 [2] https://twitter.com/vmdanilov/status/712327571116056576


The antialiasing linear vs. gamma debate is an interesting one - check out this conversation, wherein nobody could figure out a reasoned method other than "sometimes AA in sRGB looks good"... https://twitter.com/rygorous/status/512371399542202368


My reply on Twitter in full:

Text blending in linear space is perceived as “too thin” and inconsistent because font weights are chosen for sRGB [1].

With light fonts at small sizes, sRGB blending also shows apparent weight changes depending on the background [2].

But with bold fonts, the weight is consistent and the shapes are perfectly smooth with linear blending [3].

And with more colors, the sRGB blending is a failure [4].

[1] http://i.imgur.com/qKDfCnj.png [2] http://i.imgur.com/Z6sOUNI.png [3] http://i.imgur.com/sTosihk.png [4] http://i.imgur.com/fLpe150.png


> Wide gamut is simply a selling point to have "vivid" colors. But most images, including UI, are still sRGB. Showing them on a 24-bit wide-gamut display means every color component is mapped to only a subset of the 0-255 values.

Unless they're just being mapped directly, which would probably make everything automatically look much more vivid... and that would help sell those displays too.


But some of those values have to be reserved for the extra colors in the wide-gamut colorspace, and sRGB occupies just a subset of that range. That's why, to truly take advantage of the wide gamut, the whole rendering pipeline, from software through the GPU to the output device, has to be at least 10 bits per channel (going by what's being widely adopted). Otherwise, customers will be missing out on almost all available content.


Sure but it would also make everything look inaccurate.


And if you don't want to pay the memory/bandwidth costs of doubling your image size, you can keep storing your images in 8 bpc sRGB, but convert from sRGB to linear before blending/interpolation and convert back afterward. Modern GPUs have built-in support for this. There's really no excuse for incorrect blending!
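
What the hardware does amounts to this, sketched on the CPU with the standard sRGB piecewise formulas:

    # Correct blending of 8 bpc sRGB values: decode, mix, re-encode.
    def srgb_to_linear(s):   # s in 0..1
        return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(l):   # l in 0..1
        return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

    def blend(src, dst, alpha):          # src/dst are 8-bit sRGB values
        lin = (alpha * srgb_to_linear(src / 255)
               + (1 - alpha) * srgb_to_linear(dst / 255))
        return round(255 * linear_to_srgb(lin))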


And yet all font rendering on Linux is done with incorrect alpha blending. There are patches for the major toolkits, but as with anything to do with font rendering, people are very resistant to change.


I do not disagree that focus is needed in that area too, but a larger gamut can cut down or eliminate many banding artifacts by default.


In a practical sense, if you are doing lots of complicated blending or gradients in Photoshop, you can set your mode to 16 bits. It will reduce a lot of banding issues, and you can convert back to 8 bits with dithering when saving out as a common format.


I do film color work as part of my job. It's nuts how many hoops one has to jump through to deliver a proper, or almost proper, image experience on a variety of viewing devices. It's akin to sound mastering.

When gradients are visible, more or less the only thing you can do is introduce artificial monochromatic noise into the image to hide the perception of a stepped gradient.
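
The trick is essentially randomized rounding before quantization; a minimal sketch:

    # Dither before quantizing so neighbouring pixels mix codes
    # statistically instead of snapping into visible steps.
    import random

    def quantize_dithered(v):             # v in 0..1, heading for 8 bits
        noise = random.random() - 0.5     # +/- half a quantization step
        return min(max(int(v * 255 + 0.5 + noise), 0), 255)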

I would like to see an industry-wide push for a consumer-grade (at least) 10-bit signal chain from graphics cards to monitors with high dynamic range. That would have more impact on image quality than the crap being pushed now, like 4K and VR.

HDR B4 4K, chaps!


This is kinda happening - the "UHD Alliance Premium Certified" spec for TVs mandates 10-bit from the input to the panel. It's a shame the UHD Blu-ray standard doesn't mandate 10-bit, though hopefully most will use it :) Dolby Vision mandates 12-bit mastering and delivery, though it sounds like 10-bit connections to the panel can be considered acceptable...


Now if we could only convince Nvidia to give us 10-bit output on all cards, not only Quadros. That would be great.


> Here’s another example, this time with a generated image. To users on an sRGB display there is a uniform red square below. However, it’s a bit of a trick. There are actually two different shades of red in that image, one of which is only distinct on wide-gamut displays. On such a display you’ll see a faint WebKit logo inside the red square.

I can clearly see the logo and I'm using a Samsung LCD from 2003. The pixel values are significantly different; the background is 255,0,0 and the logo is 241,0,0. The rest of my hardware is 2009-vintage. Likewise, the two shoe examples look very slightly (not a whole lot, but it's noticeable) different.

> On an sRGB display, you can’t see the logo, because all the red values above 241 in Display P3 are beyond the highest red in sRGB, so the 241 red and the 255 red end up as the same color.

Seriously? 241 and 255 look very different to me, and if I remember correctly, would have always looked different in all the 24-bit colour monitors I've used.

Edit: experimentation with my monitor a little more shows that I can just barely discern 252,0,0 from 255,0,0, but 253 and 254 look identical to 255. This probably depends on my colour perception too. Either way, I still doubt my monitor is "wide-gamut", so what's going on here?


I think your system is ignoring the color profile attached to the image. They wrote:

    Remember the red square with the faint WebKit logo?
    That was generated by creating an image in the Display
    P3 color space, filling it with 100% red, rgb(255, 0, 0),
    and then painting the logo in a slightly different red,
    rgb(241, 0, 0). On an sRGB display, you can’t see the
    logo, because all the red values above 241 in Display P3
    are beyond the highest red in sRGB, so the 241 red and
    the 255 red end up as the same color.
But if your browser ignores color profiles, and many do, then the P3 -> sRGB conversion won't happen, and your sRGB monitor will be told to display some 241 red and some 255 red instead of all 255 red.
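
To make that concrete, here is a rough numerical sketch. The matrix row below is from an approximate linear Display P3 -> sRGB transform (values rounded; both spaces share the sRGB transfer curve and D65 white, so only the matrix differs):

    # Why P3 reds 241 and 255 collapse to the same sRGB color.
    M_ROW_R = (1.2249, -0.2247, 0.0)   # sRGB linear red from P3 linear RGB

    def decode(v):                      # sRGB-style decode, shared by P3
        s = v / 255
        return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

    for red in (241, 255):              # P3 pixels (red, 0, 0)
        print(red, round(M_ROW_R[0] * decode(red), 3))  # ~1.077 and ~1.225

    # Both exceed 1.0, i.e. sit outside sRGB's gamut; clipping maps
    # both to sRGB (255, 0, 0), which is why the logo vanishes.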


I have a Retina display and use Firefox as my default browser. In Firefox I see the WebKit logo, in Safari I don't. So it seems like Firefox has better color support than Safari? I would expect the opposite. How does this work?


Only the latest Retina displays may have a wide enough gamut, so it's still possible that Firefox throws away the color profile (and therefore shows both reds as distinct, yet inaccurate), while Safari handles it (and therefore collapses both reds to 255).

"Debugging" color settings is tricky, because it can be hard to tell what goes wrong along the chain, and what you end up observing may be counterintuitive.

EDIT: I observe the same behaviour as you on my early 2013 13" Retina MBP. Firefox seems to ignore the color profile on the red image while every other app honours it (including Preview, after downloading the image). Other images, notably the Iceland one, show that my screen does have a wider gamut in some color areas, maybe just not in the red.


To the best of my knowledge, the only Apple devices with wide-gamut displays are the 2015 iMac and the 2016 9.7" iPad Pro. So I'd suspect that Firefox is either ignoring the color profiles, or handling them differently than Safari.


> the only Apple devices with wide-gamut displays are the 2015 iMac and the 2016 9.7" iPad Pro

Correct; this was mentioned several times at WWDC.


To be fair, on my 2013 rMBP, the Shoes one does not show a single difference, whereas the Iceland, Italy, Sunset and Flower ones are much more vibrant and indicate that a wider gamut than sRGB may be available, but certainly not as wide as the latest displays.


I have an iPad Pro 9.7, and the color differences are very noticeable, especially in the wider variation of grass colors in the shoe image.


No. Firefox just apparently has broken color management.


I'm not sure about macOS, but on Windows, FF has supported color correction for years - in fact, for a long time it was the only browser on Windows to do color correction.

A quick google finds this: http://cameratico.com/guides/web-browser-color-management-gu... which suggests that even on macOS it used to be at least as good as Safari, although apparently by default it made the (idiotic) assumption that untagged images should be displayed in the native gamut. They should be displayed as sRGB, because untagged almost always means "the author didn't think about it, and had an sRGB display".

Not sure what's going on with that image on the webkit demo, but I'll note that clipping is by no means the obvious solution to out-of-gamut colors. That's what rendering intents are all about; well-known ones include absolute intent, relative intent, and perceptual intent. Perhaps Firefox is assuming a perceptual intent, which here would cause desaturation that retains out-of-gamut detail.

Edit: according to MDN, the default rendering intent firefox uses is perceptual, which explains why out-of-gamut detail is retained: http://kb.mozillazine.org/Gfx.color_management.rendering_int...


Most likely that's because the application you are using (or the graphics stack it's running on) is not respecting the ICC color profile embedded in the image, and is instead interpreting the pixel values as being in the sRGB color space.

What they are trying to demonstrate only works if the app is interpreting the pixel values in P3 and then converting them to sRGB. A graphics stack that includes a CMS (color management system) will always do this in preparation for display on an sRGB device, which is probably very close to your display's gamut.

The macOS graphics stack incorporates a CMS (ColorSync) that is turned on by default. Windows and Linux have great CMS support as well, but depending on the OS it may require installing an extra package, or enabling a CMS in the display properties.


For images that actually contain a color profile, the browser should certainly respect that. But for images that don't contain a color profile, assuming sRGB artificially creates a problem where none existed.

WebKit is intentionally squashing rgb(241,0,0) into rgb(255,0,0), whether the display shows them as distinct colors or not. That might improve consistency across displays, in that it'll make every display lower quality, but it doesn't actually improve quality. It might be nice to know that the display you render on has better color fidelity, but why not show all the colors present in the image rather than squashing them together? Then, for instance, if you have a display that has better color reproduction than sRGB but not as good as P3, you'll get the full color reproduction your display can manage.


But the image was presumably shot and colour corrected in P3, which means squashing it down to sRGB will by necessity change the imperceptible colour differences into something perceptible -- and therefore change the image into a new image, one slightly off -- very slightly in some cases. Consider Retina displays -- do you prefer the browser messing with your image to produce a 1x image from a 2x? Nope. Do you prefer not having 2x? Nope. Yes, a browser can scale it and it will function, but is it as intended? Especially if the browser changes it to be less noticeable or have similar contrast -- that would be like a 1x image getting blurred to 2x, or shrunk with an algorithm from 2x to 1x. Media queries and other syntax exist so we as creators can decide how we want things to look, down to the last pixel. You as the content author can export two versions of the image, one colour corrected on a P3 monitor and one on an sRGB monitor, and stand a chance that while sRGB will remain the default for now, P3 and beyond should gain as much market share as 2x "Retina" or 4K displays, in part thanks to UHD Premium displays and content.


After reading a bit more about this here (thanks for all the informative replies) and on colour management in general, I'm pretty sure I prefer my monitor being able to display all 256 different intensities of one colour, even if they're not exactly the same as some standard colour, rather than a scaled and truncated fraction of those (which seems like it'd just cause more visible banding). The example image shows that clearly. I adjust my monitor's contrast and brightness to my preference, and colour perception is quite subjective anyway (not to mention affected by viewing conditions, hence those hooded monitors and controlled lighting when it is important), so I think colour management is not for me.

In fact, I'd say that if you displayed that example image simultaneously on a non-colour-managed sRGB monitor (logo visible), colour-managed sRGB monitor (logo invisible), and (colour-managed?) P3 monitor (logo visible), no average user is going to want the middle option.


If you ever have a chance to actually calibrate your display with a real hardware calibration tool, I think you'd quickly decide that color management was for you.


And if you ever spend $400 on a wooden volume knob you will quickly decide $1000 power cables are also for you.


> But the image was presumably shot and colour corrected in P3

That's not a reasonable assumption to make about an arbitrary image that doesn't contain a color profile. For instance, consider a pixel-art image drawn entirely on a computer and saved as a PNG with no color profile.

If an image contains a color profile, it's somewhat more reasonable to attempt to interpret that color profile (though it's still odd to not at least attempt to take advantage of higher-gamut-but-not-identified-as-such displays by using the full 8-bit RGB range, on the assumption that the display may or may not render them accurately). And it seems reasonable to support media queries for image color profiles as well. But to quote the article:

> If an image doesn’t have a tagged profile, WebKit assumes that it is sRGB.

That's the behavior I'm arguing is completely wrong: if an image with no color profile contains the colors rgb(241, 0, 0) and rgb(255, 0, 0), the browser should render those as distinct colors, not squash them together.


No offense, but you’re wrong. Full stop.

(It’s okay though. This is something which developers without training in human color perception / color reproduction are commonly wrong about. It comes from lack of training and lack of experience. The proper thing to do is not a priori intuitively obvious to the layman, in the same way optimizing algorithms for CPU cache locality might not be obvious to a color scientist.)

If you take an image which was authored to be sRGB, and show it on a wide-gamut display, stretching all the colors out to fill the gamut, you will supersaturate all the colors and totally distort all of the lightness, hue, and chroma relationships in the image, and it will look terrible.

Likewise, if you take an image which was authored for P3 (or whatever), and squish it down to fit on an sRGB gamut, everything will end up undersaturated, again distorting all the lightness, hue, and chroma relationships. Again, it will look terrible.

There are fancier ways to do gamut mapping than pure clipping, but there is a lot of subtlety involved (Ján Morovič wrote a several-hundred-page monograph about this, if you want more details, https://amzn.com/0470030321), the best one to use varies from image to image, depends a bit on viewing conditions and other context, and the nicer methods are quite computationally expensive.

The HTML/CSS/etc. specs declare that untagged images and other untagged colors are to be treated as sRGB. This is the only reasonable assumption in a world where sRGB has been the primary standard for 20 years.

Any 2016 browser / operating system / image viewer which treats an untagged image as anything other than sRGB is spec-non-compliant and functionally broken. (Sadly, this includes most browsers on most platforms.)


First of all, "terrible" and "amazing" are subjective.

> Any 2016 browser / operating system / image viewer which treats an untagged image as anything other than sRGB is spec-non-compliant and functionally broken. (Sadly, this includes most browsers on most platforms.)

Given the blatant loss of information that occurs if you do that, as evidenced by the WebKit logo example, I'd argue the spec is broken. Pixels of different values should be different when displayed, because that's what matters to the majority of users.


> "terrible" and "amazing" are subjective.

Yes, but «distorts the hue, lightness, and chroma relationships between colors as perceived by a large statistical sample of humans with typical color vision» (or perhaps easier, «... as computed using the CIE standard observer and a well-defined color appearance model») is not subjective.

You shouldn’t be “arguing” with the spec here or presuming to speak for “users” until you have studied up on color science, done concrete work with color reproduction, or at least compared both ways for yourself on a few dozen images.

Again, this is the naïve approach which is too often taken, but, contra your intuition, the results are dramatically worse if you do it that way. In an immediately obvious, not subtle way. What most “matters to the majority of users” is that their images look like what they expect. If you totally distort all the color relationships, the images end up looking completely different than intended, and “most users” will be dissatisfied with the output, blaming either the creator of the image or the software for being incompetent but without knowing quite what the problem is.

There are ways to avoid hard clipping, using a more sophisticated gamut mapping algorithm (I recommend you read Morovič’s book), but you can’t just treat one color space as if it were another.

In practice, hard clipping of out-of-gamut colors (assuming you do it along lines of constant hue/lightness in CIELAB space, or similar) ends up working reasonably well. You get some artifacts, but the vast majority of images stay away from the edges of the gamut.

(Photoshop and other Adobe apps have a pretty bad gamut clipping method, which is to map between color spaces without clipping, and then independently hard clip each color component. This sometimes causes severe hue shifts. Alas, even the industry leading software is often implemented by people who aren’t trained in color science, and don’t consider the edge cases. Or wrote their implementation 20 years ago and haven’t bothered to update it to keep up with computing improvements. It’s still much better than just pretending one color space is another though. I’m not sure precisely what Apple’s color management stack does.)


> Yes, but «distorts the hue, lightness, and chroma relationships between colors as perceived by a large statistical sample of humans with typical color vision» (or perhaps easier, «... as computed using the CIE standard observer and a well-defined color appearance model») is not subjective.

Most people care whether the colours look good to them, not how close they are to some standard they probably don't know about. Almost everyone buying a monitor in a store is going to be strongly basing their decision on the brightness and vividness of the colours.

> What most “matters to the majority of users” is that their images look like what they expect.

...and that is subjective.

Regardless, if 241,0,0 looks identical to 255,0,0 on a monitor with 24-bit colour, that's just not right at all.


Nobody cares about (241, 0, 0) or (255, 0, 0). What they care about is whether the color of the grass, sky, flower, dress, car paint job, sports team jersey, skin tone, ... looks correct.

I feel a bit like a broken record here. There are ways of mapping one color space down to a smaller one without hard clipping to the gamut boundary. But these can be computationally expensive and difficult to design properly.

However, assuming that a better gamut mapping method is unavailable, hard clipping out of gamut colors in practice works much better than just pretending two color spaces are the same. When you do the latter, images end up looking terrible.

The effect is immediately obvious to anyone with standard color vision, and as such is no more “subjective” than anything when we’re dealing with perception. In some sense all perception is subjective if you want to get philosophical. But in a practical sense, not really.

If you have a copy of Photoshop, you can try this for yourself. Collect a number of photographs or other images encoded in a large color space like P3. Then convert these to sRGB in two ways, (a) using the “assign profile” menu option and (b) using the “convert to profile” menu option.

Your proposal is to do the former. In practice, the results are entirely unacceptable. They look bad. That is, if you collect a group of humans with typical color vision and present both options, they will pick option (b) for almost all images, and for most images the right choice will be very obvious to everyone.


The point is the opposite. The logo is not untagged -- it's P3 tagged. On a P3 display, you see the two colours as barely distinct -- as if you overlaid the logo with an opacity of 2%, I'd expect. It's barely there. The colours were picked close enough together that when properly mapped to sRGB you cannot distinguish them without additional transformation. Consider a very high resolution, 3x monitor. Now view a 3x image on a 1x monitor: most browsers transform the image, prioritizing viewing all detail over image size. Safari here, by comparison -- and only if the image is tagged with a colour profile, or in our hypothetical example, 3x -- is trying to preserve the image as it was intended, losing the details just to show it at approximately the size appropriate on a lower quality display. Obviously the best approach is to provide two images, one for high quality displays and one for low quality displays, so you have full control over the transformations applied.


An unintended consequence of that approach is that to get two colours to match visually, they would need to be specified in the same colour profile.

Say there is a background colour in sRGB and an image in P3, and both should display the same colour visually. Changing the visual display of the P3 image just because it could include non-sRGB colours will make them not match.


1. Background colours are generally specified in sRGB today. 2. Media queries can be used to set different background colours on P3 displays.


The image in question:

https://webkit.org/blog-files/color-gamut/Webkit-logo-P3.png

does have a color profile. By the way, it also uses 15-bit color channels.

Here is the same image after pngcrush removed the color profile PNG chunks:

http://dl.dropbox.com/u/1237941/Webkit-logo-profile-removed....

In Safari on my non-high-gamut Mac, the logo is clearly visible. (Edit: To be specific, on the default "Color LCD" profile I get 252,13,27 and 238,12,25 as native equivalents of the sRGB 255,0,0 and 241,0,0. Presumably there is some slight clamping around the edges of the profile, as well as some inherent precision loss if WebKit is performing the conversion on 24-bit colors, but it's not anywhere near as crazy as making 241,0,0 and 255,0,0 look the same.)


> The image in question:

> https://webkit.org/blog-files/color-gamut/Webkit-logo-P3.png

> does have a color profile.

I didn't comment on the handling of that image. My comment related to the bit of the article I quoted, which states that WebKit treats images without a color profile as sRGB.


Then you're conflating two different transformations. Squashing 241,0,0 and 255,0,0 is the transformation from P3, as performed on that image. As I stated in the last bit of my comment, the transformation from sRGB to the display's native color space, as performed on images without a color profile, CSS colors, etc. [edit: and of course images tagged as sRGB!] is far more subtle.


Well I agree that it is wrong simply because it breaks (for example) the PNG standard:

"RGB samples represent calibrated colour information if the colour space is indicated (by gAMA and cHRM, or sRGB, or iCCP) or uncalibrated device-dependent colour if not."

So WebKit shouldn't try to transform the colors to sRGB but simply display them according to whatever display is present.

Having said that, sRGB was designed to be consistent, not high quality. It even has a "black point" defined, but nobody respects that, since it would impose a maximum contrast ratio on all images represented in sRGB.


He changed the original statement.

The reason for this for anyone not familiar with colours and pixels is that the Apple P3 primary colour red is a different red from sRGB's. That means that if you do not provide additional metadata, the value will be dumped directly to whatever type of display you are using, making it display the colours using your display's lights.

As other posters have noted, if you view the colour with no colour management system in place, you are instructing the machine to display 241 intensity as is, which means you are seeing (very likely) the sRGB lights in your display set to 241 and 255 intensity; a very easy difference to spot. This is not the intention of his example.

In a colour managed system, that 241 value is transformed into an absolute colour model, and from there it is mapped to the smaller gamut of sRGB. That is, the Apple P3 RGB lights are converted to meaningful sRGB RGB light representations.

So why does an Apple P3 value end up being the "same" sRGB value, despite being different colours? It amounts to a mapping issue.

The entire range of intensity values at 8 bits per channel for the Apple P3 red channel is an identical colour at all intensity levels. The same goes for sRGB; intensity does not change the colour. When we examine any single P3 intensity, we can map that colour to a "closest possible" sRGB triplet. No matter what we do, the P3 red is different from the sRGB red light, and the sRGB red light can never represent the fully saturated and different red of the Apple P3, no matter what encoded intensity value.

Given that the P3 primary for red is quite different, we end up with a value that, after transformation and clipping to the sRGB gamut, collides with others when mapped to sRGB; several different colours might end up mapped to the same sRGB colour due to quantization. In the case of 241, it happens to map to 255 under the rendering intent he selected because one or both of the complementary channel values end up negative and are clipped in the smaller sRGB gamut. Every other value is also being mapped to sRGB, and there are going to be mapping collisions along the entire intensity range because, again, no matter what we do, the sRGB red light can never represent the red light of Apple's P3.

An excellent reference on ICC colour management is here for anyone interested, including covering rendering intents and other issues: http://www.cambridgeincolour.com/tutorials/color-space-conve...


For a long time now, display technologies have been moving away from sRGB. Not just by design, but the decay rate and vector of the primaries due to the different phosphors and filters being used.

So anything about consistent color on the web ignores the reality that the viewing device is likely not at all sRGB. And then, just as big a problem and sometimes worse, the viewing conditions are highly variable.


Even my monitor (ASUS VE248Q), when set to the sRGB profile, shows the image fine, which seems a little odd to me. Even stranger, in the "Theater" or "Standard" profiles it doesn't appear. I don't think any of these settings actually change the color profile of the display; they probably just tweak brightness/contrast/etc. levels - so sRGB isn't even really sRGB a lot of the time, even if it "says so".


This reminds me of a demo page from the late 90s that made me appreciate graphic formats on the web, highlighting what true alpha transparency could really accomplish and what you miss out on when you don't have it: http://www.libpng.org/pub/png/pngs-img.html

Beyond just the topic at hand, this is a wonderful example of showing people how much more the web can do and why it is important that we keep pushing for progress.


> If the images have obvious rectangular borders (due to the color specified in the bKGD chunk), the browser is broken.

Well, Chrome is still broken for some of these images on Linux. Not surprising though, it also screws up resizing images:

http://www.4p8.com/eric.brasseur/gamma.html

I wish they would fix these things before worrying about wide gamut.


On the matter of improving colors, it would also be nice if browsers played nice with colorspaces (Rec.601/Rec.709) when dealing with YUV to RGB conversion in HTML5 video. Right now all browsers I've tested straight up ignore colorspace tagging in H.264 video and have some other issues too. I have a little thing you can use to see this for yourself: http://daiz.io/yuv-to-rgb-in-html5-video/

Basically, if the Result doesn't match up with Expected, the browser is doing it wrong. Ideally browsers should handle YUV colorspaces like so:

H.264 video - look for colorspace tagging in the video by default and use it if available, otherwise fall back to guessing based on resolution. SD video (up to 1024x576) should be converted with Rec.601, HD video (width >1024 or height >576) with Rec.709.

VP8 video - VP8 is defined as Rec.601 only, so always use it.

Theora - Same as VP8.

How browsers actually fare today (tested on Windows 10):

IE11/Edge - Always assumes Rec.601 for H.264 video. Doesn't support VP8/Theora.

Chrome - No colorspace tagging support for H.264. Converts HD video with Rec.709, SD video with Rec.601. 1024x576 is treated as HD already. VP8 is always converted with Rec.601 as it should. Theora gets Rec.601 in SD but incorrectly uses Rec.709 in HD.

Firefox - No colorspace tagging support for H.264. HD uses Rec.709, SD uses Rec.601, 1024x576 treated as HD like in Chrome. VP8 and Theora both always use Rec.601 as they should.

The unfortunate conclusion from this is that color accuracy is pretty much a crapshoot when dealing with HD video on the web. The only way to guarantee accurate results right now would be to convert your video to Rec.601 (if you're mastering HD video chances are you're using Rec.709 by default), serve VP8 video by default and have a Rec.601 H.264 fallback for IE/Edge (I haven't tested how Flash video playback handles this matter so you might also need a Rec.709 H.264 fallback for that).
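
For anyone wondering why the matrix choice is even visible: the two standards define different luma coefficients, so the YUV -> RGB conversion differs. A sketch of the full-range math (real video additionally needs the limited-range offsets, omitted here):

    # Decoding the same pixel with Rec.601 vs Rec.709 coefficients.
    def yuv_to_rgb(y, cb, cr, kr, kb):    # y in 0..1, cb/cr in -0.5..0.5
        kg = 1 - kr - kb
        r = y + 2 * (1 - kr) * cr
        b = y + 2 * (1 - kb) * cb
        g = (y - kr * r - kb * b) / kg
        return r, g, b

    BT601 = (0.299, 0.114)                # (Kr, Kb)
    BT709 = (0.2126, 0.0722)

    sample = (0.5, 0.1, 0.2)
    print(yuv_to_rgb(*sample, *BT601))    # ~(0.780, 0.323, 0.677)
    print(yuv_to_rgb(*sample, *BT709))    # ~(0.815, 0.388, 0.686) - shifted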


The IE11 result makes me weep here, given the popularity of it - thanks for doing these tests. The amount of work we put into maintaining correct colour through the production chain seems increasingly wasted...

FYI I tested Safari 9.1.1 (11601.6.17) and it failed on a whole bunch of the tests, including, frighteningly, the untagged HD 709 H264 :(


One thing I've noticed is that Chrome and Safari have a visible difference in color management. Try opening [0] (<style> body { background-color: rgb(88,174,235); } </style>) in both browsers side-by-side, for example.

[0]: data:text/html;charset=utf-8;base64,PHN0eWxlPiBib2R5IHsgYmFja2dyb3VuZC1jb2xvcjogcmdiKDg4LDE3NCwyMzUpOyB9IDwvc3R5bGU+


Is this a Mac-specific quirk, and what happens if you take a screenshot and compare the pixel values? On my Windows system I tried Chromium, Firefox, IE, and Opera (no Safari), they looked exactly the same. Taking a screenshot and sampling the pixel values gives 88,174,235 for all the browsers, as expected.


This is what I'm getting. Safari on right, Chrome on left. https://ipfs.pics/ipfs/QmS8hkCPxGr7iLDqotuHG6NXNeSesmQSWJGNK...

When Digital Color Meter (color-sampling tool on Mac) is set to "display native values," Chrome's values are consistent with CSS. When DCM is set to "display in sRGB," Safari's values are consistent with the CSS.

edit: Not shown in screenshot, but Firefox is the same as Chrome.


It looks like in Chrome, the display profile correction is not being applied. If you change it to sRGB both are the same. Safari is doing the right thing here.


Chrome has a bug filed: https://bugs.chromium.org/p/chromium/issues/detail?id=254361...

I can't tell if Firefox has a bug for this exact issue, but they do have some open color-management related issues.


Firefox has the same bug.

It has a non-default hidden option where you could previously set it to work correctly, but that option stopped working some time ago (at least on OS X).

Chrome has had this bug filed for 7 years, and they keep ignoring it (although it's a very popular bug; people complain about it all the time). It drives me insane.

Basically I am forced to run Safari, although I'd very much prefer to use an open source browser.


Just stating the obvious that Safari uses WebKit as the engine and it’s open source: http://webkit.org.



They are the same, but the actual colour in that screenshot is 99,183,238 which (un?)surprisingly is different from the 88,174,235 specified:

http://i.imgur.com/PtkjfVj.png


FF and Chrome look the same to me as well. Safari looks different.


FF and Chrome are both broken w/r/t color management.


http://www.gballard.net/firefox/

Edge & Firefox support color management, though for best results in Firefox you'll want to toggle http://kb.mozillazine.org/Gfx.color_management.mode to 1 - the default is 2, which is Not Good. Edge works out of the box.


It's worth mentioning that Apple only introduced 10-bit per channel support in OS X El Capitan [0] and with the iPad Pro, while wide-gamut monitors, both professional and prosumer, have been around for years now.

[0]: http://www.cultofmac.com/395028/apple-quietly-added-10-bit-c...


Very interesting. Just to clarify, am I right in saying that 10-bit colour just provides more precise colours within the sRGB gamut? 10-bit colour does not extend the colour gamut in any way?


Correct! But for wider gamuts (and higher-contrast displays), the benefit becomes more obvious in the lack of banding.


This whole situation is rather nasty. There was enough consternation going from NTSC ("never-the-same-color") to digital, with the bifurcation/trifurcation of SMPTE 240M, BT.601 and BT.709, plus the bifurcation of "legal" versus "PC" color ranges for all of the foregoing (thanks, Microsoft). I've seen VERY few pieces of software (and even broadcast hardware) that get even just THAT complication all sorted, and even fewer people who really know what they're doing operating the systems and software tools involved. Now, we're adding BT.2020, SMPTE 2084, Dolby Vision, and all sorts of proprietary stuff like Sony S-Log. This is going to be a disaster -- nothing will properly talk to each other, e.g. cameras, encoders, video displays/monitors. Everyone can't even agree whether 10-bit is enough or whether we really need 12-bit for accurate reproduction. SMPTE and ISO just punted the question down the road by saying you can have any flavor of ice cream you want. I predict fun times ahead at next year's NAB...


Wide gamut is not enough; we also need HDR (that is, the Rec. 2020 color space with the Perceptual Quantizer). That would allow showing much brighter (and darker) colors.


I have bugs for this outstanding on WebKit for... 4 years and counting. To the point that I got so frustrated I wrote my own rant on gamuts and color spaces over at https://pomax.github.io/1436836360570/we-are-really-terrible... - seeing someone finally acknowledge this on the actual WebKit blog is heartening. Maybe things _can_ change.


Firefox, Ubuntu 16.04, HP Zbook 15 (G1) with a 1080p display (not the DreamColor one). I do see the differences in all the images with the exception of Shoes, Flowers and Rose. I remember that Ubuntu installed a color profile but I don't know if it's calibrated for my screen. Nice to know that I have an above average display.


I have a Retina screen, but see no difference with the ProPhoto images in the demos. The AdobeRGB images show a difference and an improvement. It's a pity that the demo with the sliders results in flickering, which makes it useless.


Retina only refers to resolution; color representation is a deeper "qualitative" feature of the display. Currently, the only one I'm familiar with is Apple's Retina iMac, which would be able to correctly display their examples.


The 9.7" iPad Pro also has a P3 display if I'm not mistaken.



