Colorizing black and white photos with deep learning (floydhub.com)
247 points by saip on Oct 13, 2017 | 51 comments



As a professional photo editor and historian, colorized photos really agitate me. I'm all for the creation of new ways to get people to engage with historical primary documentation, but the nuance that these colorizations are interpretations gets lost immediately.

Do an image search for "D-Day in color" and try to tell me which results are original color negatives and which are colorizations made by teenagers.

I'm also a little confused as to why colorizations always aim to restore color to the equivalent of a faded color negative, with muted tonality and grain. Human logic is funny.


> I'm also a little confused as to why colorizations always aim to restore color to the equivalent of a faded color negative, with muted tonality and grain. Human logic is funny.

Well, as a professional photo editor, you know that if the original b+w photo captures an image with, say, a 50% gray value, you don't know whether the original color was bright red or closer to that 50% gray. Bright red has a much higher chroma value, but chroma isn't recorded in a b+w photo - color saturation is lost. This is easily demonstrated by making an image in your favorite image editing program that's just straight-up red, changing the image mode to grayscale, and then asking someone else entirely to guess the original color.

That, and I think the style is meant to mimic hand-tinted photographs, where you would paint right on a b+w print. The colors would look "faded" because whatever was used to tint the photograph needed a transparent medium so the information of the photograph itself could shine through. That, and there are only so many colors you could use when hand-tinting. Back to our 50% gray: what if that was... bright yellow? You can't tint "bright yellow" onto a 50% gray area of a photograph. Yellow is highly transparent, and the gray would be too powerful to let its chroma value shine through.
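
A quick way to see the chroma loss described above, as a minimal sketch (the luma weights are the standard Rec. 601 coefficients; the example colors are made up for illustration):

    # Two very different colors collapse to (almost) the same gray
    # once only luminance is kept.
    def luma(r, g, b):
        return 0.299 * r + 0.587 * g + 0.114 * b

    print(luma(175, 20, 20))   # a strong red -> ~66.3
    print(luma(66, 66, 66))    # a dull gray  -> 66.0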


Thanks. That's a perspective that didn't really dawn on me until your comment. I guess in some ways, the muted colors could be seen less as the colorist's stylistic choice than as an appeal to a "safe" representation of chroma values, since the photo offers no information about vibrancy.


I don’t think that’s quite it (i.e. that modern amateur photo colorists are intentionally aiming for a 19th century tinted photo style).

The bigger problem is that real images have varying chroma and hue within single shapes, but actually mimicking that when coloring a black and white photograph takes a huge amount of skill, attention to detail, and work. You have to think about what the lighting was like, what material it was striking and at what angle, what lens filters the photographer might have used to capture the image, etc. and then you have to go in and painstakingly paint all of those fine gradations and textures in.

It’s especially difficult to do a convincing job with skin, but most materials are hard to color convincingly.

It’s much easier to apply color to whole shapes as a blob, but this looks terrible (very obviously wrong) when you make the colors very strong.

If you want to try for yourself, get a Photoshop expert friend to find some color photographs (without showing them to you first) and convert them to black and white, applying whatever kind of intermediate processing he/she desires as long as it results in a roughly photorealistic looking black and white image.

Then you try to photorealistically colorize the photo, spending as much time and effort on it as you want. When you have something you are satisfied with, compare to the colored original. It’s very likely that the colorized version will look pretty bad in comparison, even if it looked vaguely okay on its own.


I also appreciate the effort that went into some of those things. Two of my favorite documentary series are the world wars in color (one series for each).

When they were first aired, there was a special episode with the director and artists who did the work. The amount of research they put into it, the amount of effort they put into doing the work, and their attempts to be faithful to the history really, really impressed me.

It can be an art in and of itself, and it can also be undertaken as scholarly work. Additionally, the original uncolorized work is still available: it's an additive process, not a destructive one.

I'm not sure why you'd be agitated, but then I don't have your perspective. I'd understand better, I think, if it meant the destruction of the original work.


I like colorized historical photos. It brings them to life in a remarkable way. I'd like to see the old B+W movies colorized (the Ted Turner ones don't count, as they were done very poorly).

Sure, the colors will never be exact, because we don't know what the original colors were. But that doesn't matter in any material way.


Having worked on movies shot on video, color film, and B+W film, I can say the lighting and design choices are very different. Colorizing B+W movies makes some sense for historical footage where that was the only available option, like newsreels or very early films, but for anything with aesthetic choices involved (German expressionism onwards), it's no more appropriate than trying to figure out what Picasso's models 'really' looked like. Artists are not that concerned with fidelity, but with pushing the salient characteristics of their available or chosen medium in support of a particular artistic vision. We are not merely recording; as soon as there is more than one option available (whether that's paint on canvas or different kinds of film stock), we start playing with the possibilities. When you begin colorizing, you risk compromising that.


The 'exact' concept is wrong. Any picture depends on lighting, camera used, filters, lens, etc.

I don't mind colorized photos. Yet for some that were made with a b/w result in mind (for example, the 'With The Beatles' album cover), colorizing means the loss of those stylistic choices.


Agreed. Interpretation or not, I don't see a problem as long as the colorization is not outright inaccurate. Also consider that for a long time people didn't even have photos and only had illustrations and written descriptions to go on.


> the nuance that these colorizations are interpretations gets lost immediately.

True for all AI: neural networks are doing amazing things, but the output is a synthesis, a complex interpolation of its training inputs. It may seem "good" or reliable, but it is never to be taken as truth or fact, and it can be arbitrarily wrong, with unbounded errors.

> I'm also a little confused as to why colorizations always aim to restore color to the equivalent of a faded color negative, with muted tonality and grain. Human logic is funny.

This isn't a human logic problem. Normally colorizations don't affect tonality and grain much; they put color splashes on top of a B/W image. This is true of hand-painted colorization as well as of the digital colorization here. You can't get rid of grain or adjust tone by adding color.

You can adjust tone and grain, but then you're doing more than colorizing, and going even further down the road of the "interpretation" you're concerned about.

In this particular case, the author did mention "A more diverse dataset makes the pictures brownish". Brown is the average color in natural photos, so minimizing error tends to make things brownish. That is separate from leaving faded tone and grain intact, but it's a second reason why AI-based colorization will tend toward muted color.
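
A toy illustration of that averaging effect (a sketch only; these RGB values are made up, not from the article): the squared-error-minimizing prediction is the mean of the colors the model has seen, and the mean of saturated colors is a muddy brown/olive.

    import numpy as np

    apples = np.array([
        [200,  30,  30],   # red apple
        [ 60, 160,  60],   # green apple
        [220, 200,  40],   # yellow apple
    ])
    # The MSE-optimal single guess is the mean -> a brownish olive.
    print(apples.mean(axis=0))   # approx [160, 130, 43]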


> I'm also a little confused as to why colorizations always aim to restore color to the equivalent of a faded color negative, with muted tonality and grain. Human logic is funny.

I always assumed that if you tried to use "full color" it would look weird, since the photos themselves usually are quite faded and grainy.


More than that, it would look weird because restoring the color to a photo in a way that looks plausibly photorealistic is really hard. If you make something more stylized, the viewer doesn’t have as much reference to compare and realize that there’s something wrong.

To the grandparent: people have been trying to colorize black and white photos since the 1840s; complaining isn't going to stop them now. https://en.wikipedia.org/wiki/Hand-colouring_of_photographs


> To the grandparent: people have been trying to colorize black and white photos since the 1840s

True. But there was also plenty of skepticism about the medium throughout the mid- to late 1800s. Oliver Wendell Holmes' writing on the veracity of photography is a neat reminder that it took the public decades to come to terms with the idea that the photographic process was a somewhat-veritable facsimile of "real" life.


> As a professional photo editor and historian, colorized photos really agitate me.

As a photographer, collector and enthusiast of vintage prints and photos and amateur historian, I deeply appreciate the aesthetic of colorized photos and understand the motivation to reproduce and master it. There is more to art than truth.

The artist is not the transcriber of the world, he is its rival. - L'Intemporel (Third volume of 'The Metamorphosis of the Gods'), André Malraux (1957)


> I'm also a little confused as to why colorizations always aim to restore color to the equivalent of a faded color negative, with muted tonality and grain.

Many movies set in the past, or that have flashbacks to the past, will often mute the colors. Most modern movies muck around with the colors in post production, too. The worst is when they go for the blue/orange palette.


> The worst is when they go for the blue/orange palette.

http://theabyssgazes.blogspot.com/2010/03/teal-and-orange-ho...


Another thing that is rarely mentioned is that it's very common to use color filters in B&W photography to tune the contrast.

That means the luminance values can be completely different from what they actually were, and skew the color choices. On the other hand, this might be something that a good ML algorithm can detect and compensate for.


I think there was a cable TV channel a long time ago, which might have been a Turner property -- they would get the rights to some classic B&W movie and colorize it. I'm sure the tech used back then was somewhat primitive, but the results were horrific -- but of course, "Color!"


I remember watching some of them! Turner Classic Movies was the culprit and they were dreadful.

I suppose the technique has evolved exponentially since then.


Aw, man, yep, that was it. I vaguely recall the awful tinge of their movies. Looks like many actors and directors who were in some of the victimized movies were none too happy about the colorization: https://www.youtube.com/watch?v=ZKtRvBZx28c


This is refreshing. I've been learning machine learning through Kaggle recently, and I'm a bit tired of the "hyperparameter tuning" culture. It rewards people who have the pockets to spend on computing power and the time to try every parameter. I'm starting to find problems that don't have a simple accuracy metric more interesting. They force me to understand the problem and think in new ways, instead of going down a checklist of optimizations.


I'm also starting to follow people and communities that work with deep learning in new ways. Here are some of my favorites:

[1] http://colah.github.io/

[2] https://iamtrask.github.io/

[3] https://distill.pub

[4] https://experiments.withgoogle.com/ai


You can be a little less brute-force if you use something like hyperopt (http://hyperopt.github.io/hyperopt/) or hyperband (https://github.com/zygmuntz/hyperband) for tuning hyperparameters (Bayesian and multi-armed bandit optimization, respectively). If you're more comfortable with R, then caret supports some of these types of techniques as well, and mlr has a model-based optimization package (https://github.com/mlr-org/mlrMBO).

These types of techniques should let you explore the hyperparameter space much more quickly (and cheaply!), but I agree - having money to burn on EC2 (or access to powerful GPUs) will still be a major factor in tuning models.
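
For what it's worth, a minimal hyperopt sketch looks something like this (the search space and the toy objective are placeholders you'd swap for a real training run):

    from hyperopt import fmin, tpe, hp

    space = {
        "lr": hp.loguniform("lr", -10, -1),           # learning rate ~ exp(U(-10, -1))
        "n_layers": hp.choice("n_layers", [2, 3, 4]),
    }

    def objective(params):
        # Toy stand-in for a real validation loss.
        return (params["lr"] - 0.01) ** 2 + 0.1 * params["n_layers"]

    best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
    print(best)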


Ha, it reminds me of what Andrej Karpathy said: "Kaggle competitions need some kind of complexity/compute penalty. I imagine I must be at least the millionth person who has said this." It would be interesting to collaborate/compete on more creative tasks and have different metrics for success.

[1] https://twitter.com/karpathy/status/913619934575390720


So true. Another reason to put constraints on Kaggle competitions is the production environment. How many winning models have actually been used in production? I suspect the number is near zero. High accuracy with high latency makes an ML/DL artefact unusable in production, because from the user's point of view speed is much more valuable than the difference between 97% and 98% accuracy.


The averaging problem in colorization is interesting. If it learns that an apple can be red, green and even yellow - how does it know how to color it?

An HN user in an earlier thread suggested using a fake/real colorization classifier as a loss function. [1] But I still feel that it would not solve the averaging problem. It would hop between different colors and probably converge to brown. I haven't come across a plausible solution so far.

[1] https://news.ycombinator.com/item?id=10864801


>But I still feel that it would not solve the averaging problem. It would hop between different colors and probably converge to brown.

At least to the extent that GANs work, it works. They will alternate between the observed colours based on the noise vector. They do not simply converge to averages, because the discriminator easily recognizes brown apples as fakes.
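
For concreteness, a conditional-GAN colorization loss can be sketched roughly like this (both networks are deliberately tiny stand-ins, not the article's model): a discriminator scores (grayscale, color) pairs, and the colorizer is trained to fool it rather than to minimize a pixel-wise average.

    import torch
    import torch.nn as nn

    D = nn.Sequential(                     # discriminator over (gray + ab) channels
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 3, stride=2, padding=1),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )
    G = nn.Sequential(                     # colorizer: grayscale -> ab channels
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 2, 3, padding=1), nn.Tanh(),
    )
    bce = nn.BCEWithLogitsLoss()

    gray = torch.rand(4, 1, 32, 32)              # stand-in grayscale batch
    real_ab = torch.rand(4, 2, 32, 32) * 2 - 1   # stand-in ground-truth color

    fake_ab = G(gray)
    d_real = D(torch.cat([gray, real_ab], dim=1))
    d_fake = D(torch.cat([gray, fake_ab.detach()], dim=1))
    # Discriminator learns to tell real colorings from generated ones...
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    # ...and the generator is rewarded for fooling it, not for averaging.
    g_loss = bce(D(torch.cat([gray, fake_ab], dim=1)), torch.ones_like(d_real))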


It could try to classify the apple tree or the context, but it would require a lot of training data. If it's out of context, it should select a color based on probability. But it's hard to solve this with just input and output data. The simple solution is to use noncontradictory training data, i.e. only having green apples.

I have an urge to teach it simple logic: instead of making the apple brown, have it select the color with the highest probability from the range of plausible colors. However, I haven't come across a deep learning implementation like this to mimic.


Although it is quite egregious here, this is not a problem inherent to colorization but rather one of generative models in general.

Using something akin to a variational autoencoder would solve this problem, because it learns a distributional approximation rather than a single point estimate of the color, and then the random noise vector input allows one to sample from this output distribution. Similarly, Mixture Density Networks allow you to model a distribution and then sample from it.
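
As a rough sketch of the mixture-density idea (sizes and architecture here are illustrative assumptions, not the article's model): the head predicts K candidate colors plus mixing weights, and sampling picks one mode instead of returning their brown average.

    import torch
    import torch.nn as nn
    import torch.distributions as D

    K, color_dim = 5, 2          # K mixture components over (a, b) channels

    class MDNHead(nn.Module):
        def __init__(self, features):
            super().__init__()
            self.pi = nn.Linear(features, K)              # mixing logits
            self.mu = nn.Linear(features, K * color_dim)  # component means
            self.log_sigma = nn.Linear(features, K * color_dim)

        def forward(self, h):
            pi = torch.softmax(self.pi(h), dim=-1)
            mu = self.mu(h).view(-1, K, color_dim)
            sigma = self.log_sigma(h).view(-1, K, color_dim).exp()
            return pi, mu, sigma

    def sample_color(pi, mu, sigma):
        # Pick a component, then sample from its Gaussian.
        comp = D.Categorical(pi).sample()
        idx = comp.view(-1, 1, 1).expand(-1, 1, color_dim)
        mean = mu.gather(1, idx).squeeze(1)
        std = sigma.gather(1, idx).squeeze(1)
        return torch.normal(mean, std)

    head = MDNHead(features=128)
    h = torch.randn(8, 128)                # stand-in per-pixel features
    print(sample_color(*head(h)).shape)    # torch.Size([8, 2])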


You could adjust the error function. The common Root Mean Square error pushes predictions to the average. If you use absolute errors, or even a logistic function instead, you'll encourage the model to commit to a decision on a multimodal distribution.

Alternatively, use a discrete colour space and consider colours as categorical data not implying any ordinal scale.
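
A minimal PyTorch sketch of the two alternatives, with random tensors standing in for a real network and the bin count chosen arbitrarily:

    import torch
    import torch.nn as nn

    n_bins = 64                  # assumed number of quantized color bins
    batch, h, w = 4, 32, 32

    # Classification view: quantize the color space and use cross-entropy,
    # so the model can commit to one mode per pixel.
    logits = torch.randn(batch, n_bins, h, w, requires_grad=True)   # stand-in network output
    target = torch.randint(0, n_bins, (batch, h, w))                # stand-in quantized labels
    ce_loss = nn.CrossEntropyLoss()(logits, target)

    # Regression view: MSE over continuous color channels, which pulls a
    # multimodal target toward its (brownish) average.
    pred_ab = torch.randn(batch, 2, h, w, requires_grad=True)
    true_ab = torch.randn(batch, 2, h, w)
    mse_loss = nn.MSELoss()(pred_ab, true_ab)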


> If it learns that an apple can be red, green and even yellow - how does it know how to color it?

I dunno, does it look more like a red apple, a green apple or a yellow apple?


Google "learning xor".



Interesting! If pictures are nowadays colorized by hand in Photoshop, it wouldn't be practical to colorize a full black and white movie. I guess this deep learning approach would solve that problem and could colorize old black and white classic movies.


I think colorising b&w movies must already be fairly practical given the size of this list:

https://en.wikipedia.org/wiki/List_of_black-and-white_films_...


Agreed. I imagine this has applications in compression as well. You could stream a movie (or a football game) in black and white and enable each device to color it on the spot. A similar technique could also be done for HD/3D/VR.


Or you can just broadcast the audio from the game and a neural net will synthesize the video on the fly. The possibilities are _endless_!


Coloring football uniforms might be nearly impossible though...


Yes, you provide a handful of full data keyframes and reconstruct the details of the stream from the middle out.


That middle out compression has some fantastic Weissman scores I believe.


that is an amazing idea.


I'd like to see more than colorization. Consider the silent movie "Wings". Very high quality blu-rays are available of it. I would colorize it, remove the dialog cards and dub the dialog, then add foley sound effects and a music soundtrack!


I'd like to train this on color comic strips and then run something traditionally black and white like xkcd through it. Seems like it could make the colorization part of hand drawn animation much easier.


You'll probably want something closer to a GAN like pix2pix - https://phillipi.github.io/pix2pix/

An example implementation would look something like edges2cats https://affinelayer.com/pixsrv/


edges2cats is already way too fun. Thank you for the link!


The suggestion of using a classification network as a loss function is brilliant!

I love how we can, in general, elevate the sophistication of ML models by having different models interact and train each other.


Thank you, really good read and a great product idea!


It would be really interesting if it could return differently colored versions and provide a way to explore these different styles.


What's even cooler is adding support for human annotations so users can selectively give colorization hints for different parts of the image to customize the output.


A GAN-style approach to learning and generating variants could be interesting as well. It could generate a couple of hundred plausible versions. Then you have another network, trained to differentiate between fake and real colored photos, which picks the best version.


colorization applied to video: http://demos.algorithmia.com/video-toolbox/



