Digital colours and digital photography (pomax.github.io)
92 points by tonyedgecombe on July 14, 2015 | 43 comments



As a photographer, I'd say none of this matters, because

1) Pretty much nobody has calibrated monitors (even Apple devices aren't perfect)

2) Pretty much nobody has monitors higher than 8-bit per channel

3) Pretty much nobody cares about having accurate colors, especially photographers and videographers who "grade" everything to whatever looks good.

4) Pretty much nobody can tell the difference.

Technology has been good enough for a few years now to produce stellar high resolution prints, let alone Instagram or Facebook material.


That's funny... I kind of wrote the article exactly because all those points apply to my workflow. You're entirely welcome to your own opinion, but just because it's good enough for you doesn't mean it's good enough for everyone =)


True for pictorial photography, but color accuracy is very important in other fields, such as art reproduction and science.


The reason the model of the eye isn't a perfect triangle (as you would expect for three primaries - the three kinds of cones in your eye) is that the frequency responses of your cones overlap. There are no frequencies of light that trigger only your green/middle-frequency cones. https://i.imgur.com/Te6XBOB.png


I have to wonder what the sensation of seeing truly "pure" green would be like. What if there were some way to selectively stimulate just the M cones, or to prevent the L cones from being activated?


I think looking at bright red light, with a very low frequency, should temporarily render the red cones less sensitive. Then you'd only get green and blue sensations.


Yes. If you stare at a pure red field on your screen for about 30-60 seconds, then look at a pure green field (or vice-versa), you should see a "super-green" (or "super-red").


I imagine the closest you will get to this "in real life" (rather than in a room designed to contain only peak M-cone frequencies) is by visiting a tropical rain forest. A common conclusion among tourists is "everything is so... green", because the over-abundance of rich greens is far more stimulating than anything they get from urban and digital exposure.


If any color enthusiasts here are frustrated by the lack of perceptual uniformity in every colorspace in common use, I wrote some utility code to work with the Munsell color system -- the only color system produced entirely from measurements of human color perception (with all the noisiness/messiness that entails). You still run up against the limits of your color gamut pretty quickly, though.

https://github.com/mrgriscom/munsell

https://en.wikipedia.org/wiki/Munsell_color_system

http://www.brucelindbloom.com/index.html?MunsellCalcHelp.htm...


>the only color system produced entirely from measurements of human color perception

What is CIE LAB [1] then? While Wikipedia states that it was influenced by the Munsell color system, I don't think it was directly derived from it. Also, I would argue that it is in common use.

On the other hand, those measurements captured the local distance between colors (based on distinguishability), but nothing guarantees that the space of perceivable colors is Euclidean in this metric. So you can construct a Euclidean space of colors where the Euclidean distance is close to the perceptual distance, but it may never be perfect.

Side note: there is no perceptual color space to "rule them all", even if you could construct a perfect one. For upscaling images it could make sense to use a perception-based colorspace for interpolating pixel values; for downscaling, however, you should use a linear colorspace (note: sRGB is NOT linear, and almost all desktop software downscales incorrectly [2]). It's a bit of a hassle that distinguishing colors over large areas happens with a strange metric, while for objects with fine structure (like your LCD monitor) the color mixing happens in a linear colorspace.
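
To make that downscaling point concrete, here is a minimal numpy sketch of gamma-correct downscaling (my own illustration, not from the article or the linked pages): decode sRGB to linear light, average, then re-encode. A black/white checkerboard averages to roughly sRGB 188, not 128, which is exactly the error most software makes.

    import numpy as np

    def srgb_to_linear(v):
        # piecewise sRGB decoding (IEC 61966-2-1), input in [0, 1]
        return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(v):
        return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

    def downscale_2x(img_srgb8):
        # average 2x2 blocks in linear light, then re-encode to sRGB
        lin = srgb_to_linear(img_srgb8 / 255.0)
        h, w = lin.shape[0] - lin.shape[0] % 2, lin.shape[1] - lin.shape[1] % 2
        lin = lin[:h, :w]
        avg = (lin[0::2, 0::2] + lin[0::2, 1::2] + lin[1::2, 0::2] + lin[1::2, 1::2]) / 4
        return np.round(255 * linear_to_srgb(avg)).astype(np.uint8)

    checker = np.array([[0, 255], [255, 0]], dtype=np.uint8)
    print(downscale_2x(checker))  # [[188]], not [[128]]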

[1] https://en.wikipedia.org/wiki/Lab_color_space [2] http://www.4p8.com/eric.brasseur/gamma-1.0-or-2.2.png


Funny, I just watched a Scipy conference talk/video about this - it's about finding a new default color map for matplotlib [1]

CIE LAB is basically a ~30-year-old model based on human perception: it takes into account white-point adjustment and the non-linear human brightness response, and that's about it. There are newer (and far more complicated) models like CIECAM02.
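
For what it's worth, the "non-linear brightness response" part is just the CIE 1976 L* curve: cube-root compression of relative luminance with a small linear toe near black. A quick sketch of the standard formula (my own code, not from the talk):

    def cie_L_star(Y, Yn=1.0):
        # CIE 1976 lightness: cube-root compression of relative luminance,
        # with a linear segment near black to avoid an infinite slope at 0
        t = Y / Yn
        f = t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
        return 116 * f - 16

    # photographic 18% "middle grey" lands near the middle of the lightness scale
    print(cie_L_star(0.18))  # ~49.5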

[1] https://www.youtube.com/watch?v=xAoljeRJ3lU


As I see it, CIE LAB is not somehow an "inferior model" compared to these more complicated models. It just has a simpler viewing-conditions model and a more limited regime of validity.

It seems somewhat complicated to properly apply CIECAM02. In the case of matplotlib, one would have to change the line colors depending on the background color being used, not to mention the adapted whitepoint and the surround, where at best you can make an educated guess. They seem to solve the Euclidean distance problem by embedding in a space with more dimensions.

However, I'm really happy about these efforts from matplotlib. It is by far my favorite data visualization library. There could be cases where this kind of accuracy is necessary: doctors often diagnose by visual inspection of different imaging modalities (X-ray, CT, MR). The optimal visualization of these images, using well-crafted colormaps in the right colorspace, could save lives.


My understanding of CIELAB is that it tries to approximate perceptual uniformity while also having a rather simple transformation formula. As such, the perceptual uniformity is compromised (intentionally). It is a good approximation, but the lack of uniformity can be noticeable -- particularly in the hue dimension. See my third link.


Someone on reddit also pointed out https://github.com/CarVac/filmulator-gui which looks like a super cool utility for getting "natural" colour control over your RAW files. I'll be playing with it over the weekend, but the author was pretty positive the colours come out looking much more like what they should.


Part of me cringes at jumping through all these hoops to recreate some sort of anachronistic aesthetic which probably only looks better to you because it's what you're used to. It reminds me of how electric cars program in a 'jolt' when you hit the accelerator so that it feels more like an ICE. On the other hand, I admire the technical effort that goes into reproducing it! Reminds me of the old terminal emulation in xscreensaver: http://www.jwz.org/blog/2003/05/xscreensaver-410-out-now/

Also, I've noticed the photos from my Canon point-and-shoot have quite a film-like quality to them.


Yeah, except in this case I'm hoping it actually ends up matching what I'm holding in my hand: 500+ fountain pen inks. Anything that actually works (because colour calibration between my cameras and monitors sure doesn't), I'll take. That tool was written, as far as I understood from the author, to actually get things to look the way they look, not "the way we were used to". I haven't used film in over a decade. I just want realistic colours.


This is fantastic, especially the part about what sensors could be if reimagined.

"But we're still doing that today, after decades of digital camera technology, and it really makes no sense anymore: why are we still lump-sum recording light instead of using sensors that can tell how much light passes through it per time unit, instead of "filling up"? (...) Instead of starting a sensor, letting light fall on it, and then turning it off and asking it how much light it found, we have the technological capability to make sensors that change electrical resistance based on how strong a light hits them."

That would mean almost unlimited dynamic range.

One avenue for possible disruption that the article does not mention is through digital cinematography first, where a sensor would have to be "only" 8 Mpixels (4K), and the users have bigger budgets and care about dynamic range. This could start with the likes of Red or Blackmagic and eventually trickle down to the photo market.

Is there any prototype of such a sensor anywhere? What would it take for this to be produced using current processes? Is there any foundry today that could miniaturize these photoresistors?

Thanks again for the article, very refreshing!


I'm hoping this is what Blackmagic will achieve, really, with their UHDTV 4K cameras (which use the very wide Rec.2020 colour space). With TVs slowly moving to 4K, and hopefully monitors to follow, the idea of a Rec.2020 colour space across the whole workflow is _immensely_ appealing. It's not quite affordable right now, though ($3500 for a camera that's currently only seriously marketed by a single brand, plus $800 per 4K monitor), so it's not consumer-cheap yet.


Don't normal sensors have a raw gamut that's pretty close to Rec2020 anyway? Is the innovation to do it in video as well?


There is no conspiracy.

A modern CMOS image sensor uses a photodiode that works as a current source and is integrated with a gate to determine light intensity. This is not exactly a photoresistor, but it is close enough for the discussion here. Electrical resistance is not a "digital" quantity as this article says; it is a phenomenon that must be measured by analog means and converted to digital. This necessarily involves an integration step that basically amounts to "filling up". You can make this integration time arbitrarily short (and that is basically what HDR exploits), but then you have to contend with noise.
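
As a toy numerical illustration of that integration/noise trade-off (all constants invented for the example, not real sensor numbers):

    import random

    def read_pixel(photon_rate, exposure_s, full_well=50_000, read_noise=5.0):
        # the pixel integrates photocurrent until it saturates ("fills up"),
        # then the analog value is read out with shot noise plus read noise
        electrons = min(photon_rate * exposure_s, full_well)
        sigma = (electrons + read_noise ** 2) ** 0.5
        return max(random.gauss(electrons, sigma), 0.0)

    # a shorter exposure avoids clipping a bright source, but the signal-to-noise
    # ratio drops -- the trade-off HDR bracketing works around
    print(read_pixel(photon_rate=1e6, exposure_s=0.1))    # saturated near 50,000
    print(read_pixel(photon_rate=1e6, exposure_s=0.001))  # ~1,000 +/- ~32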

I fail to see how the proposal here changes the readout of a photo sensor in the physical world and also addresses the fundamental issue of noise. This is written from the perspective of math and computer science, but that doesn't help us in the physical world where hardware designers have to live.


Sony produced a camera with a four-primary color sensor, the DSC-F828: instead of two green pixels in each 2x2 square, there was one green and one emerald (cyan).

I had the predecessor to this camera, the F717, and it was a wonderful and innovative camera in most respects, but its Achilles heel was the color rendition. My personal belief is that the 4-color sensor was an attempt to improve on this, but the root cause was Sony's inability to apply proper color science in their rendering engine, and the 4-color sensor was just a gimmick. The camera was introduced at an inopportune time, as DSLRs had just hit a similar price point while providing better high-ISO performance and interchangeable lenses, causing it to fail in the marketplace. The experiment was never repeated.

I don't know if the actual frequency response of the RGBE sensor was ever published. It would be interesting to see it overlaid on the CIE diagram.


The specs seem to list a standard RGBG sensor:

http://www.dpreview.com/reviews/sonydscf717/2

It also doesn't produce a raw file, so mapping the gamut of the actual sensor is much harder.


The RGBE camera was the F828; the one before it, which I was contrasting it with, was the F717, and that one was indeed RGBG. Although the F717 didn't have RAW, I believe the F828 did.


dpreview agrees:

http://www.dpreview.com/reviews/sonydscf828/2

I'd be very curious to see a raw file from this camera if anyone has one.


I remember that camera, although I never got to play with it myself. A bit of searching on the web doesn't seem to turn up any useful response graphs or gamut plots, which is a shame =(


If I understand correctly, the CIE graph is just a 2D representation of the human eye's spectral sensitivity: https://en.wikipedia.org/wiki/CIE_1931_color_space Basically: take the tristimulus response X, Y, Z; normalize it; after normalization the third value can be derived from the other two (z = 1 - x - y) and is no longer needed; plot x, y -> the familiar CIE graph.
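
That projection is small enough to write out; a quick sketch of my own, using the D65 white point purely as an example input:

    def xy_chromaticity(X, Y, Z):
        # project 3D tristimulus values onto the 2D chromaticity plane;
        # z = Z / (X + Y + Z) = 1 - x - y carries no extra information
        s = X + Y + Z
        return X / s, Y / s

    print(xy_chromaticity(95.047, 100.0, 108.883))  # D65 white: ~(0.3127, 0.3290)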


I can't argue with any of the technical data in this article, which is clearly well researched. There is, however, some important nuance missing. Color theory is a fractal, and the deeper you go, the more convolutions you find. Motion picture and video artists are typically a lot deeper in the weeds of color than photographers. One phenomenon relates to the psychological perception of color. One reason the picture of the flower is never the same as the flower is that you remember it differently (and yes, I realize you may look at the LCD on your camera right after shooting). People in general carry certain "memory colors" (grass green, brick red) that do not correspond to the metered color of these things. Grass is deeper green and bricks are more saturated in memory.

Alexis Van Hurkman wrote an amazing book called Color Correction, which is about digital color grading for motion pictures and TV. The issues pomax discusses exist at every step of the process: capture, editing, DI, projection. The engineering standards work around the practicalities of the physics. Canon and Nikon won't address this issue; their markets are dying as it is. Red might. Arri might. Panavision might. Blackmagic might.

I suggest that pomax spend more time hanging out with cinematographers and DI artists and less time with photographers and coders to unravel more of this.

Last comment on the flower picture: many people fail to get the shot they want not because of camera inadequacy, but because lighting something for the camera (direction, quality, temperature of light) costs money and takes a lot of skill. Point-and-shoot it isn't.

PS Mapplethorpe managed to get some flowers that look pretty good. And that was in the stone age.


>Of course, 2450 by 1634 pixels is wider than what you're publishing to the internet, and of course that zoom range means nothing in terms of what you're used to from "interchangeable SLR lenses" since the lens has to work for light fields, not planar recording, and you've never worked with a lens that has to do that, so your knowledge of what is a "safe" zoom range for dSLR is absolutely irrelevant for a Lytro camera, but you're going to lie to yourself and pretend you know what you're talking about, and you won't be buying a Lytro camera, and that's the battle we're fighting

I had seen the Lytro camera ages ago and had forgotten it. I later rediscovered it and did some minor research and decided against it.

Now I want to do more research because they're putting up a pretty good argument...

Beyond that - this entire article was very informative and enjoyable to read. I wish I had a bit more to comment on, other than that I like some of the wit they throw in.


I fell into this rabbit hole while tweaking desktop colors and trying to figure out how to programmatically create same-intensity colors. Oops, RGB illusion broken.

This article is a good introduction to color spaces, but it stays too abstract to really examine what's required to significantly enlarge gamuts. Doing so would require entirely new devices and manufacturing processes; it's not simply a matter of throwing a bit more money at the problem, but a lot of money and long-term R&D.

There are "wide gamut" LCDs for people who care about color reproduction, but all they do is spread three primaries out further (and even this is expensive).


UHDTV's Rec.2020 colour space actually changes the primaries, which is fantastic, but not a lot of monitors support it (yet? hopefully) and cameras seem to only support it in video mode, which hopefully is also only temporary. I'd love to shoot my photos in Rec.2020 and then view them on a 4K monitor that has the same super wide gamut!
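
For a sense of how much the primaries move, here are the chromaticities from the two standards (Rec.2020's primaries are monochromatic, at roughly 630, 532 and 467 nm, which pushes the corners of the triangle out toward the spectral locus, especially the green one):

    # CIE xy chromaticities of the R, G, B primaries, per the respective standards
    SRGB_709 = {"R": (0.640, 0.330), "G": (0.300, 0.600), "B": (0.150, 0.060)}
    REC_2020 = {"R": (0.708, 0.292), "G": (0.170, 0.797), "B": (0.131, 0.046)}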


great article. random thoughts

-designing and manufacturing imaging/CMOS chips at scale is very hard and expensive. Also, camera companies, especially large Japanese ones, are not the most agile organisations

-alternatives to Bayer sensors (like Foveon) have not been commercial successes. My guess is that we will get better alternatives to current digital imaging; it will just take time, because in an area with little competition, profits are maximised with incremental, slow updates. Right now Sony sets the pace

-the light field stuff is super interesting, but I have a feeling it will be more successful in displays (especially VR/goggle displays) than in cameras. For cameras, post-shot focus can be done with computational photography (like with the recent Android default camera app) or even faked with 2D post effects like in many Instagram clones - most consumers don't care

-very much looking forward to Rec.2020 and higher-dynamic-range video becoming the norm. There is no reason to be stuck with 8-bit recording formats, even for consumer devices

-I remember reading that dynamic range sensitivity in human vision is related to field of view, so a display that covers only the foveal part of the vision - even if it can pump out huge dynamic range - might not be a good match for the eyes


Sony's doing some fantastic things right now, and I can only hope they gain enough traction to see things through. The RX100-III is bloody impressive, although I'd still like it if I could take stills at 4K/Rec.2020

As for the eye, it's both field of view and ambience, so cameras are hit twice when it comes to overcoming a hurdle: they can only do absolute capture rather than contrast, and their apertures (which control how much light hits the sensor at all) aren't particularly dynamic so they can't microadjust.

Still, there's a lot of promise out there; it'll just... take a while. A Foveon-style light field sensor without the noise of the Foveon X3 (its gamut was HUGE, much bigger than Rec.2020) can have my money as fast as I can throw it at the screen, really.


Sony have a lot of traction; they have the majority of the imaging chip market. The RXIV is going to be my next pocket camera, check it out. And I will probably pick up a Sigma DP3 Merrill on eBay; there is something wonderful about the detail it resolves.

Blackmagic is another cool company; they don't make chips, but their FPGA-based RAW video cameras rock, and they put out more firmware updates for all of their cameras in a year than Sony etc. do in 10 years.


I'm using an RX100m3 as my pocket camera right now (after half a year, I'm still rarely using the EVF though, so I guess I could've got an m2 instead and saved a few hundred) but I've been eyeing Black Magic's hardware for a while. That EF mount 4K Cinema Camera looks delicious, although right now still a little pricey.


The pixel density on cell phones is getting really high. Instead of just adding more RGB in the standard Bayer pattern, would it make sense to mix in other frequencies besides the standard RGB? I wonder what that would be like.


The electronic viewfinders on some Sony Alpha cameras have RGB and white pixels.


Double the refresh rate first; this would help a tiny amount with input lag as well.


Seeing this reminded me of impossible colors:

https://en.wikipedia.org/wiki/Impossible_color


a bit ranty but well worth the read.


It's been a few years (OK, >10) since I spent some time at the Rochester Institute of Technology studying aspects of Color Science and later at UCLA studying image sensor design from the guys who designed and built nearly every image sensor that's gone into space.

The problem with the "we need better sensors" question is that, in reality "they" don't, "we" do.

By this I mean that the vast majority of the people on this planet are well served with a color system, from sensor to display, that provides the images we get today. These images are great for everything from selling you an iPhone to being entertained for a couple of hours by a movie to printing stunning images in a Victoria's Secret catalog and posting about your vacation in Maui with your kids on Facebook. There are well-understood color management approaches for making all of the above work very well.

In other words, from "their" perspective, there are no problems and "we" are all crazy.

Would people be amazed by the images one could produce with better sensors on matching display systems? Absolutely. Just as I was when I saw analog HDTV at least ten years before it got to consumer-land.

However, the issue really becomes one of economics. Consumer electronics isn't about excellence. It's about a simple question: "What's the next piece of shit we can get everyone hooked on?".

Famously: https://www.youtube.com/watch?v=8AyVh1_vWYQ

OK, that's a little harsh, but, yes, consumer electronics companies are always on the hunt for the next mass craze in the segment. Remember how everyone needed a 3D TV (not), or how everyone needed a 240Hz TV (not), and now everyone needs 4K (not)? Consumer electronics companies are constantly throwing stuff up on the wall to see if anything will take off or if they can trigger a new "need" or "must have" through marketing and back-door content creation.

The reality has been that almost everything past the transition to HD and LCD TVs has failed to engage because, well, people don't need it. The transition from CRTs to LCDs, accelerated artificially due to RoHS [1], was a visible and measurable (in layman's terms) step improvement. People could derive satisfaction from spending the money, and they eventually fell in line and behaved like good little consumers. Yet the entire transition had to be engineered at a massive level. I'd recommend that anyone interested in the subject and, in particular, in how it is that we got HDTV, read a nice little book titled "Defining Vision":

http://www.amazon.com/Defining-Vision-Broadcasters-Governmen...

I'll just mention a tidbit that might have a bunch of readers go off and buy it: we have to thank Donald Rumsfeld for it. Yes, that Donald Rumsfeld, former Secretary of Defense, etc.:

https://en.wikipedia.org/wiki/Donald_Rumsfeld

If you think we got HDTV on technical grounds...well, read the book.

That's a long way around to say we don't have better imaging systems because the segment of the population who might legitimately need them is minuscule and has virtually no market power. A better imaging system would be a set of very expensive laboratory instruments used for a range of what I'll term esoteric tasks. In the meantime, what we have today is beyond good enough for anyone watching the World Cup or an episode of Lucy.

[1] https://en.wikipedia.org/wiki/Restriction_of_Hazardous_Subst...


Can't emphasize that point enough, really - it's also why these products have an incredibly hard time conquering the market: no one realises they "should want this" because, as far as they can tell, there's no reason to. I am super happy about the move to 4K, but until 4K displays are ubiquitous (TV + desktop at the very least) I don't expect camera manufacturers to push their chip shops to true Rec.2020 4K sensors (they already cover part of Rec.2020, but they still lack response in the greens).


I love to see good articles on color. Thanks for the submission.


Wow, I did not know that RGB sensors map to the XYZ color coordinates like this, or that we can record light-induced resistance! I learned a lot from this article.



