Researchers Develop Method for Getting High-Quality Photos from Crappy Lenses (petapixel.com)
148 points by Xcelerate on Sept 30, 2013 | 55 comments



This is going in a direction that manufacturers have so far mostly avoided, probably out of tradition.

These days lens design is already highly automated, with simulation software doing the optimization. High-end lenses are just lenses where you've relaxed some of the constraints to get better quality (bigger, heavier, more expensive types of glass, tighter manufacturing tolerances). And yet some of the parameters lenses are still optimized for (color transmission, field curvature, etc.) are increasingly things you could fix in post-production if you had an accurate model of the lens.

This makes particular sense for non-interchangeable-lens cameras, where the sensor+lens combo is known and manufacturing tolerances are already tight, so your lens modeling can be quite good. It should be particularly useful in smartphones, where all the other constraints are so severe (small size and weight demand it).


This is already in use in Micro Four Thirds cameras, which feature interchangeable lenses but still have tight integration between the lens and the camera. There are even firmware updates for the lenses.

http://m43photo.blogspot.de/2010/09/lumix-20mm-distortion-co...


Neat! I had seen it done on compacts but didn't realize anyone had done it on interchangeable-lens cameras. There are plenty of lens-correction packages for DSLR lenses, but those are "since the lens has compromises, we fix it programmatically", not "the lens was designed with the correction in mind".


Major manufacturers are indeed adopting these techniques for correcting certain flaws. Recent Canon DSLRs will correct for vignetting and chromatic aberration using lens-specific profiles.

http://www.usa.canon.com/cusa/consumer/standard_display/EOS_...
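
A minimal sketch of what a profile-based vignetting correction of that kind might look like; the radial polynomial gain model and the coefficients here are purely illustrative, not Canon's actual profile format:

    import numpy as np

    def correct_vignetting(image, k1=0.3, k2=0.1):
        """Undo radial light falloff using a simple polynomial gain model.

        image: float array of shape (H, W) or (H, W, 3), values in [0, 1].
        k1, k2: hypothetical profile coefficients; a real lens profile would
        store values like these per aperture and focal length.
        """
        h, w = image.shape[:2]
        y, x = np.mgrid[0:h, 0:w]
        # Normalized squared distance from the image center (r2 = 1 at the corners).
        r2 = ((x - w / 2) ** 2 + (y - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
        gain = 1.0 + k1 * r2 + k2 * r2 ** 2  # brighten progressively toward the edges
        if image.ndim == 3:
            gain = gain[..., None]
        return np.clip(image * gain, 0.0, 1.0)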


Those were the cases I knew of where a manufacturer says "here's an electronic recipe to fix some of the issues we couldn't fix in the lens": you're still getting the uncorrected photo from the camera and then choosing to fix something in post, but the lens is supposed to work to a high standard without any fixing.

The approach ErsatzVerkehr mentioned for m43, which is also done in some compacts, is more "we have a lens+sensor+software combo that spits out a correct photo and you never see uncorrected results". The difference is that you can then design lenses that would produce unacceptable results without correction.


Lenses are such an interesting topic, a holdout that resists what has in general been a march of progress in manufacturing ability.

I'm still blown away today by the choices in lithography methods used in patterning silicon. Because large high-quality lenses get so expensive and difficult to manufacture, we have adopted rigs with systems of stepper motors to move the wafer and lens about, so that we can use both a smaller lens and a slit (instead of annular) lens. I always think, "how can that be both easier and cheaper?", but it is...

Visual aid to illustrate a "scanner" method: http://www.lithoguru.com/images/lithobasics_clip_image012.gi...

The "scanner" method is the slit lens, the "stepper" method is the smaller lens, and today we do both.

More info:

http://www.lithoguru.com/scientist/lithobasics.html

http://en.wikipedia.org/wiki/Stepper#Scanners


It's always fun when somebody from another field (computer science) discovers something that is very well known in your field (E&M). One of the parts missing from this paper is that the PSFs of optical systems have a real and an imaginary part, but measuring the imaginary part is quite difficult and is mostly only done for biological samples.
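
A rough numpy sketch of why that is, assuming the textbook Fourier-optics model in which the coherent PSF is the Fourier transform of a complex pupil function (the defocus term below is made up): the sensor only ever records the squared magnitude, so the phase / imaginary part is gone before any software sees it.

    import numpy as np

    # Hypothetical complex pupil function: a circular aperture (real amplitude)
    # with some defocus expressed as a quadratic phase term across the pupil.
    n = 256
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x ** 2 + y ** 2
    aperture = (r2 <= 1.0).astype(float)
    pupil = aperture * np.exp(1j * 4.0 * np.pi * r2)

    # Coherent PSF = Fourier transform of the pupil; it is complex-valued.
    coherent_psf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))

    # What an ordinary sensor measures is intensity only: the phase of the
    # coherent PSF is discarded at this step, which is why recovering it
    # needs special techniques (holography, focal series, etc.).
    intensity_psf = np.abs(coherent_psf) ** 2
    intensity_psf /= intensity_psf.sum()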


Heh! This is especially true for SIGGRAPH! Just take some ten-year-old physics / engineering paper, implement the algorithm on a GPU, done!

To be fair, SIGGRAPH has a strong focus on applied work, and the contributions are often genuinely impressive, despite the fact that the underlying method was developed by someone else and published before.


There are some really clever techniques out there for recovering the imaginary parts actually, at least in a similar domain (transmission electron microscopy), but they're still somewhat new.


Could you please elaborate on these techniques? I'd be most interested in any relevant papers. Thanks!


Can't find a good holistic reference off-hand, but you're looking at focal series reconstruction (or, if you're using specialised equipment, holography) to recover phase information of your wave. That can then be used to reconstruct transfer functions and the like. [This isn't something I do personally so I don't want to go into too much detail in case I explain something incorrectly!]


These guys have a method: rogers.matse.illinois.edu/files/2011/slimoptexpr.pdf


I just realized something.

The complex PSF isn't important for camera applications because the complex field carries the transparency information. I don't want to see transparent things with my camera.


The imaginary part can be grabbed in several ways.

The Lytro camera is the closest I'm aware of to bringing such things to market.


Can you point at some papers in your field on this topic?


This is exciting stuff. I found the discussion on the /r/photography subreddit pretty interesting also:

http://www.reddit.com/r/photography/comments/1n94tp/cool_tec...


From the limitations mentioned here and on the subreddit, it sounds like it will first be successfully applied to security cameras with approximately fixed focal lengths and known light conditions, and/or facial snaps at airports. Just what we need, more surveillance tech :(


> Just what we need, more surveillance tech :(

This is more useful to the homeowner and corner gas station than to government installations which have the budget to install good cameras.


<cynical-hat> What's the bet that someone from a company on a PowerPoint slide in Snowden's cache is preparing to offer "security camera image enhancement as a service", so that all that homeowner and gas station surveillance gets sent straight to some PRISM-like data-gathering program as well as providing enhanced images to the camera operators?


Computational photography is an interesting thing. The Lumia 1020, with its 41-megapixel camera, is probably the best test bed at the moment for capturing a lot of the light field, with Lytro being a more explicit example.

So basically, if you can replace optics with software, you end up with better pictures for less money and more compact imagers. While better camera phones are always cited, I think web cameras, security cameras, and visual recognizers (things that track items on assembly lines, or people in a store, or anything where you can set a visual condition to alert on) will be the big winners here.


I'd like to see examples of how this compares to regular run-of-the-mill post-process sharpening. If the "before" images were straight from the camera, the comparison isn't really fair.


Canon has the "Digital Lens Optimizer", which purports to model the optics of each lens so as to allow better conversion of raw files to JPEGs. Lens distortion can be corrected too. Lamentably, they seem to support only decent lenses, and the file size grows substantially.

Some details :

http://www.bobatkins.com/photography/digital/DPP_v3-11-10.ht...


These results are very impressive, and I think that if applied to a lens that was less aggressively simplified, they would be even better.

What is not clear from the video is how this algorithm performs on elements that are out of focus due to narrow depth of field. It seems likely that near the edge of the focal distance, there will be significant artifacts as the algorithm misinterprets slight depth of field blur as lens aberration.


This is awesome! I was actually just thinking about this earlier today while looking at one of my profile pictures. I liked a particular photo a lot, but it had bad resolution when magnified, and I was wondering if it would be possible to make it high-def through some software that automatically subdivided and colored in the pixels.

Although it's not exactly the same, I'm sure this sort of software could be applied to restoring old photos!


Looks pretty cool and all, but am I missing something, or are the first pictures just slightly out of focus? If the images were actually in focus, would the improvement look as significant?

Considering it's a SIGGRAPH paper, I'm probably just wrong, but from the pictures in the article that's what it looks like to me.


I think they took the samples from the edge of a full-resolution image. One difference between good and cheap lenses is that the center of the image will always be good, while toward the edges more and more distortion can appear with cheap lenses. It's possible these images were perfectly in focus at the focus point.


Those are nearly full-sensor images. If you download the supplemental materials, the large images are all 3492 x 2205. That's 7.7 megapixels -- and the Canon 40D is listed as a 10.1-megapixel camera.

They shot the photos with a hand-made, one-element lens. This is a very bad lens.

The photos may look like they're out-of-focus, but they're actually in-focus. (The width of lines does not change after the processing.) They just have an extreme amount of diffusion, chromatic aberration, and all sorts of other distortions.
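
A toy sketch of that kind of degradation, assuming the usual per-channel convolution model; the Gaussian PSFs of different widths per channel are purely illustrative stand-ins for the real measured PSFs:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simulate_bad_lens(sharp_rgb):
        """Blur each color channel with a different PSF width to mimic the
        diffusion and chromatic aberration of a simple one-element lens.

        sharp_rgb: float array of shape (H, W, 3). The sigmas below are
        made-up numbers, not values from the paper.
        """
        sigmas = [1.5, 2.5, 4.0]  # red, green, blue blurred by different amounts
        return np.stack(
            [gaussian_filter(sharp_rgb[..., c], sigma=sigmas[c]) for c in range(3)],
            axis=-1,
        )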


How can this be applied to, say, recovering highly-compressed images and video?

Can you generate a PSF as part of a compression step that will turn a smudged and compressed image back into a better-than-conventional-compression approximation of the high-quality original?


I would think not. By lossily compressing the image you have already lost information necessary to reconstruct it. The method uses information from all three color channels to reconstruct the image, so all the information is there; the image is just blurry because the color channels are shifted (chromatic aberration).
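
A crude sketch of the "shifted channels" part, assuming the shift can be treated as a pure translation (real lateral chromatic aberration is closer to a radial scaling, so this is only a toy): estimate the offset of one channel against another by cross-correlation, then roll it back into place.

    import numpy as np

    def channel_shift(ref, moving):
        """Estimate an integer (dy, dx) translation between two channels
        via FFT-based cross-correlation."""
        corr = np.real(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = ref.shape
        # Map shifts past the halfway point to negative offsets.
        if dy > h // 2:
            dy -= h
        if dx > w // 2:
            dx -= w
        return dy, dx

    def realign(rgb):
        """Shift red and blue onto green -- a very rough chromatic-aberration fix."""
        out = rgb.copy()
        for c in (0, 2):
            dy, dx = channel_shift(rgb[..., 1], rgb[..., c])
            out[..., c] = np.roll(rgb[..., c], (dy, dx), axis=(0, 1))
        return out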


To extend willvarfar's question: can you shift color channels in a reversible way so as to get compression?

Compare the two processes:

1. raw_image -> image_compress(raw_image)

2. raw_image -> shift_color_channels(raw_image) -> image_compress(shift_color_channels(raw_image))

*Thinking out loud*

Is the 2nd process feasible?

Is it possible that current image compression algorithms already pick up on the aberration patterns, which would invalidate the need for the 2nd process?

In the case of image compression using wavelet transforms (which many methods use), and if wavelets can pick up on the aberration patterns, could the hurdle be finding a finite set of wavelet functions that works for the majority of lenses?


It's not just chromatic aberration. It's also blurry because of distortion from the (very simple) lens. Even high-quality lenses cannot reproduce the image sharply across the entire plane; they're noticeably softer in the corners, for example.

In this case their simple lens (akin to a Lensbaby) results in zoom-like blur in the corners and thus more extreme PSFs than the Canon lens, which has more or less the same PSF shape overall, just wider in the corners.

Also it appears that they used 32 wavelengths for computing the PSFs in the simple lens case vs. just three for the Canon lens.


Nice! This is working better than I imagined. It's also smart that they process each channel separately, since different wavelengths bend differently.

And since this is a post-processing step, it could also be offered as a plugin for Photoshop, GIMP, and others.


I'm not sure about the "plugin for Photoshop, GIMP and others". They seem to require calibration with the actual lens.


Such tools (e.g. Photoshop) already ship with a lens-correction database covering most lenses from the large manufacturers. Things like chromatic aberration, vignetting, etc. can be corrected automatically. For some common lens problems, such as geometric distortion, I'd say the correction is so good that I wouldn't consider it a problem when buying a lens any more, whereas it was one of the larger problems with zoom lenses before.

As far as I understand, this method just requires the PSF data for different settings (apertures/focal lengths) to be shipped as well, and Photoshop could then do the same thing using this method.
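
A minimal sketch of what "ship the PSF data and correct it in Photoshop" could reduce to, using a plain per-channel Wiener filter rather than the paper's cross-channel optimization; the PSFs are assumed to come from such a profile and to be spatially uniform, which a real implementation would not assume:

    import numpy as np

    def wiener_deconvolve(blurred, psf, snr=100.0):
        """Non-blind Wiener deconvolution of one channel with a known PSF.

        blurred: 2-D float array. psf: small 2-D kernel summing to 1.
        snr: assumed signal-to-noise ratio, used to regularize the inverse filter.
        """
        h, w = blurred.shape
        ph, pw = psf.shape
        # Zero-pad the PSF to image size and center it at the origin so the
        # frequency-domain filtering lines up with the image.
        psf_pad = np.zeros((h, w))
        psf_pad[:ph, :pw] = psf
        psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))

        H = np.fft.fft2(psf_pad)
        B = np.fft.fft2(blurred)
        W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
        return np.real(np.fft.ifft2(W * B))

    def correct_image(blurred_rgb, profile_psfs):
        """Deconvolve each channel with its own PSF from the (hypothetical) lens profile."""
        return np.stack(
            [wiener_deconvolve(blurred_rgb[..., c], profile_psfs[c]) for c in range(3)],
            axis=-1,
        )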

Aberrations in a poor lens come from two factors: 1) design flaws/compromises and 2) sample variation. If you have a lens profile in Photoshop with PSF data, it can only correct for the problems inherent in the design of the lens (1), not problems due to manufacturing variation or lens damage.


I think it depends on how different two lenses of the same model/type are: maybe you could have presets for common lenses and/or fine-tune the system for your own copy by taking some calibration shots.


There's a database of lens info that grew out of some open-source panorama-stitching tools; I wonder if something similar would be applicable:

http://lensfun.berlios.de/


I wouldn't be surprised if you could make the convex optimization algorithm independent of the lens. The lens model would just be prior information for the algorithm, which you don't necessarily need.


I hope we see this in smartphones soon.


This is what http://www.dxo.com has been selling for years.


Just curious: what's the difference between this and Photoshop's camera shake reduction filter? http://www.adobe.com/inspire/2013/06/photoshop-camera-shake....


Camera shake affects all color channels uniformly across the image. That makes it unsuitable for removing any kind of lens distortion.


Quite nice, but it doesn't come close to what you get from a decent SLR or DSLR, a quality lens, and a somewhat skilled photographer.


First of all, there are no full-resolution examples, so you can't tell whether this is "SLR quality" or not, whatever that means. The photographer has nothing to do with the quality of the lens: the best photographer won't be able to correct softness at the edge of a cheap superzoom lens. It's just physics. But lastly, the whole point of this method is to improve the quality of bad lenses. Meaning, the whole point of the thing is that the pictures aren't top quality to begin with.


There are plenty of crappy lenses on DSLRs, too, with very few rare gems in the sub-$1000 range. Canon's 85/1.8 and 40/2.8 stand out in my mind as excellent, but the remainder of their mainstream (non-L) range is pretty mediocre.

It's how lens manufacturers keep their 'professional series' lenses lucrative... don't want people being satisfied with what they can afford!


On the other hand, the 'average', not terribly fast kit zoom lenses have gotten way better over the last 10+ years. There are way fewer total dogs. Put one in the hands of a decent photographer and they can get a good image.

(That said, if you want sharp, the 50/1.8 at f/8 will be sharp enough. But sharp isn't everything.)


What does a decent photographer have to do with anything?


Suppose you do an experiment with three groups of people: one that is used to taking pictures with their smartphones, one that has already worked with a DSLR, and lastly a bunch of professional photographers. Now give all of them the same high-end DSLR plus quality lens and send them out to take pictures. Your claim seems to be that all three groups would take equally good pictures. My claim: no way. Experienced photographers, for instance, just know things about lighting etc. that take years to learn and won't be in your textbook.


The professional photographer doesn't change anything about the purely technical aspects of the image, though (which are defined only by the sensor and lens). This isn't about magically improving crappy photos but rather about improving photos taken with a crappy lens (an obvious application would be cameras in phones).

That a good photographer is able to deliver a far better photo despite the constraints of his tools goes without saying, but it's not what this is about.


No. My claim is that this (very interesting) article is about improving quality from low-quality lenses, and the quality of the photographer has nothing to do with this problem. Given a low-quality lens, a professional photographer is still going to come out with a fuzzy image.


My god! Someone finally discovered the "ENHANCE" filter!

Well, aside from the problem that information can't be created from thin air. You can fix certain lens errors, but you cannot extract details that aren't there in the original.

The most obvious consequence is that you can't use this technique to extract more megapixels than the sensor has.

Less obvious limitations might be the sensitivity to noise; all these samples have been taken in bright light, but if your crappy lens produces a very noisy image in low light, this method won't fix it.

Furthermore, the numerical aperture (NA) of the lens defines the highest possible resolution. Even with this method you can't get a higher resolution than wavelength/NA.
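
As a rough back-of-the-envelope (the f-number and wavelength are just example values, and the exact prefactor depends on which resolution criterion you pick):

    wavelength = 550e-9      # green light, in metres
    f_number = 2.8           # a reasonably fast lens
    NA = 1 / (2 * f_number)  # approximate image-space numerical aperture

    abbe = wavelength / (2 * NA)       # ~1.5 micrometres
    rayleigh = 0.61 * wavelength / NA  # ~1.9 micrometres
    print(f"smallest resolvable detail: {abbe * 1e6:.1f}-{rayleigh * 1e6:.1f} micrometres")

Anything finer than that simply isn't in the recorded image, no matter what the software does.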

Unfortunately, there's no way around the principle "Garbage in, garbage out." Lensmakers rejoice, your business wasn't made obsolete after all!

Nevertheless, I can see exciting applications for this method; one that comes to mind is improving the pictures taken by photographic scanners used for digitizing old books.


This is not about reducing noise; it is about fixing aberration and distortion.

So it even works on noisy images: you get a better-corrected version of the (still) noisy image.

They never claim to extract details that aren't there. They just present the existing detail in a way that humans perceive as "sharper images".

Your note about scanners is interesting. There was once a project that converted scanned records into MP3s. You had to scan the record several times because of the scanner's aberration (http://www.cs.huji.ac.il/~springer/DigitalNeedle/index.html).


You are right. I made my statements in reference to claims in the article like "This technique (...) may some day provide a software alternative for those who can’t afford high-end glass". The technique can definitely improve the image quality of a given lens, but it will never allow a "cheap" lens to replace an expensive lens.


Maybe I'm missing something, but most of your comments seem to be focused on limitations of the sensor, not on the lens itself. (It's hard for me to picture how a lens would behave differently at low light intensity than at high intensity: the light rays all bend the same way regardless, right?) Your point about diffraction-limited resolution is well taken, but for two lenses with the same aperture projecting onto identical sensors it sounds like this technique could make low-end products more competitive with high-end ones. (Let me know if I've missed your point.)


This method tries to correct specific lens errors. To do this, you need very good intensity resolution. If you have poor intensity resolution, information is lost and the lens errors cannot be corrected anymore. In low light, the poor signal-to-noise ratio leads to poor intensity resolution, and the lens errors become irreversible.

A better sensor will only get you so far; since light is quantized (a ray consists of individual photons), there are physical limits to the intensity resolution possible at low light.
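
A back-of-the-envelope illustration of that limit, assuming pure photon (shot) noise and made-up per-pixel photon counts:

    import math

    for photons in (10, 100, 10_000):
        # Photon arrivals are Poisson-distributed, so the noise standard
        # deviation is sqrt(N) and the best possible SNR is N / sqrt(N) = sqrt(N).
        snr = math.sqrt(photons)
        bits = math.log2(snr)  # rough number of usable intensity bits
        print(f"{photons:6d} photons -> SNR ~ {snr:5.1f} (~{bits:.1f} bits)")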

Once information is lost, there is no way to recover it. And that's why you just can't make up for a crappy lens with software.


Wouldn't the sensor be a little more important in this case than the lens?



