This tool appears to be an open-source (and faster) replacement for Focus Magic, in that it allows the user to perform parameterized deconvolution. By that I mean: if the blur kernel can be approximated by either a line or a filled circle, and you're willing to tweak the dimensions and angles of the blur kernel, this can do a decent job of recovering your photo.
If the true blur kernel is more complicated -- perhaps a wavy line -- then you probably need a blind deconvolution tool, which this is not (yet?).
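For the curious, here's roughly what those two parameterized kernels look like in code. This is my own minimal numpy sketch, not the tool's actual implementation; the function names and the 21-pixel support size are arbitrary choices.

    import numpy as np

    def line_kernel(length, angle_deg, size=21):
        """Linear motion blur: a 1-pixel-wide line through the center."""
        k = np.zeros((size, size))
        c = size // 2
        a = np.deg2rad(angle_deg)
        for d in np.linspace(-length / 2, length / 2, 4 * size):
            y, x = int(round(c + d * np.sin(a))), int(round(c + d * np.cos(a)))
            if 0 <= y < size and 0 <= x < size:
                k[y, x] = 1.0
        return k / k.sum()  # a blur kernel should sum to 1

    def disk_kernel(radius, size=21):
        """Defocus blur: a filled circle of the given radius."""
        c = size // 2
        yy, xx = np.mgrid[:size, :size]
        k = ((yy - c) ** 2 + (xx - c) ** 2 <= radius ** 2).astype(float)
        return k / k.sum()

Tweaking length, angle_deg, or radius by hand until the deconvolved output looks right is exactly the "parameterized" workflow described above.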
Fascinating. I have tried to find methods to recover text from digitally blurred images, but I ended up doing research and surveying similar topics without finding any available tool for the task.
I still have several questions:
- How well does state-of-the-art deblurring work on digital blur produced by known algorithms, for forensic purposes?
- How does blind deconvolution compare with manual recovery processes done by experts, if such manual methods exist?
- Can blind deconvolution approaches extend to resolution enhancement from a sequence of photos taken with the same parameters?
- Are there researchers trying to integrate visual cognition frameworks to make deconvolution results more visually recognizable? I remember a somewhat related piece of research posted on HN: http://news.ycombinator.com/item?id=4241266 (and it happens to come from the same university as the paper you posted)
- Are you asking about recovery of information that was intentionally blurred? If so, there have been some exciting results in this area in recent years. For example, it turns out that given reasonable priors on the underlying glyphs, it's possible to reconstruct even heavily blurred text. (e.g., http://dheera.net/projects/blur.php ) That doesn't involve deblurring per se, since the goal is simply to find the glyphs (and combinations thereof) with maximum likelihood given the blurred text. Still, it's interesting, and similar techniques have been applied to super-resolution problems lately, where the approach is often called "hallucination."
- Blind deconvolution is often used by experts. However, it's possible to help things along if a glint exhibiting the blur of interest is visible in the photo. That is: a bright point of light in the photo gets blurred too, so in the blurred photo it looks like the point spread function (the blur kernel) itself; that kernel can then be extracted and used with a non-blind deconvolution approach to recover the latent sharp image (see the sketch after this list).
- Blind deconvolution and super-resolution are related problems, so advances in one might certainly help in the other. The research you cited makes use of image hallucination (mentioned above) as a key part of its super-resolution algorithm.
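To make the glint trick concrete, here's a minimal sketch using numpy and scikit-image. The file name and glint coordinates are placeholders, and real use needs a cleaner background subtraction; this just shows the shape of the pipeline: crop the blurred point light, normalize it into a PSF, and hand it to a standard non-blind deconvolver.

    import numpy as np
    from skimage import io, restoration

    blurred = io.imread("photo.png", as_gray=True)  # float image in [0, 1]

    # Crop a window around the blurred glint; a blurred point of light is
    # (approximately) the point spread function itself.
    y, x, r = 120, 340, 15          # placeholder glint location and half-size
    psf = blurred[y - r:y + r + 1, x - r:x + r + 1].copy()
    psf -= psf.min()                # crude background removal
    psf /= psf.sum()                # a PSF should sum to 1

    # Non-blind deconvolution with the extracted kernel.
    sharp = restoration.richardson_lucy(blurred, psf)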
If anyone would like to see another example, here's one I just did. The original photo was taken with a Galaxy Nexus without flash. It's a grainy shot, but I was curious how the tool would do with such a low-quality starting point.
The deconvolved images have some pretty nasty ringing. If you use an algorithm that places a realistic prior on the pixel gradients, you will probably get cleaner results.
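A minimal sketch of that idea, under the simplest (Gaussian) gradient prior, which has a closed form in the frequency domain. The heavy-tailed "realistic" priors from the literature need iterative solvers, but even this quadratic version tames ringing compared with naive inverse filtering; lam is a regularization knob I made up.

    import numpy as np

    def deconv_gradient_prior(blurred, kernel, lam=0.01):
        """Deconvolve with a quadratic penalty on image gradients."""
        shape = blurred.shape
        # Kernel is zero-padded with its origin at (0, 0); np.roll to
        # center it if your kernel is stored center-anchored.
        H = np.fft.fft2(kernel, shape)
        # Transfer functions of horizontal/vertical finite differences:
        dx = np.zeros(shape)
        dx[0, 0], dx[0, -1] = 1.0, -1.0
        dy = np.zeros(shape)
        dy[0, 0], dy[-1, 0] = 1.0, -1.0
        G = np.abs(np.fft.fft2(dx)) ** 2 + np.abs(np.fft.fft2(dy)) ** 2
        Y = np.fft.fft2(blurred)
        # Regularized division: gradients are penalized where |H| is small.
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam * G)
        return np.real(np.fft.ifft2(X))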
>But the Enhance Button simply ignores the fact that the big blocky pixels you get when you zoom in too close on a picture are the only information that the picture actually contains, and attempting to extract more detail than this is fundamentally impossible. No matter what you do or how you do it, you're merely guessing, if not making stuff up outright.
Very impressive. It reminded me of a demo from Adobe at MAX 2011 that showed promising results; it was more focused on eliminating defects from real images to produce a more aesthetic result. http://youtu.be/xxjiQoTp864
It seems that if the blur is caused by camera movement, you can try to find the direction the movement happened and then restore the original photograph quite nicely.
Topaz Labs has had a Photoshop plugin out for some time now (InFocus) that does the same thing. It cuts off at a fairly low blur radius, though, since it's intended for photo correction rather than forensic retrieval, and ringing can get pretty severe even at a small pixel radius.
What sets the Adobe tool apart from the rest (so far) is that you can define a non-linear path for motion blur. From what was said at the demo, there seems to be some hope for automatically determining a non-linear motion path. That would be really cool.
Blurity author here. I'm happy to say that Blurity does exactly what you're requesting in your second paragraph: it can "automatically determine a non-linear motion path" using true blind deconvolution. Although Blurity is a commercial product, the watermarked trial is free. I'd be curious to hear your reactions.
1. The article states "the operation which is opposite to convolution is equivalent to division in the frequency domain", which is not correct.
2. "Deconvolution" has no mathematical definition (as implied by that quote); it is the name for various algorithmic approaches used in signal processing.
3. Finally, the Wiener filter is not deconvolution; it is just a filter.
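For reference (standard textbook form, not from the article), the filter usually called "Wiener deconvolution" is the fixed linear filter

    W(\omega) = \frac{\overline{H}(\omega)}{|H(\omega)|^2 + N(\omega)/S(\omega)}

where H is the blur's transfer function and N/S is the noise-to-signal power ratio - a single filter applied to the blurred image, which is the sense in which it's "just a filter" rather than an exact inverse of convolution.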
Nonetheless, a great article, with great illustrations.
Wow, pretty good. I wonder whether there will be any scandals arising from blur-redacted sensitive documents, if they can be deblurred as well as this demonstrates?
But I don't think it will be possible to deblur sensitive documents this well in the first place.
If you blur a region and then resize, you lose the extra information you'd normally need to do a good deblur afterwards.
But this does make me think: who knows, terrorists could be using regular photos, seemingly unimportant at first sight, to store messages. You'd then have to deblur an out-of-focus zone to reveal the message.
Unintended Consequence the First: amateur porn producers have been "anonymising" their photos for nearly a decade now with heavy blur filters over the subjects' faces.
That's a decade of compromising images about to become significantly less anonymous...
No, there's no real worry of that, barring some serious revolution in deconvolution. Existing techniques can only deal with blurs of certain kinds, and theory suggests that the standard blurs used for faces/etc are in fact not reversible at all.
A short explanation can be given using the Fourier transform. Blurs like Gaussian blur and photographic camera blurs (with some simplifying assumptions) are convolutions: they apply a 'blur kernel' to each pixel of the image, which spreads energy from that pixel to the neighboring pixels based on the shape of the kernel. Visualizing the outcome of a convolution is not straightforward for complex scenes, but here the Fourier transform helps.
When looked at in the frequency domain, the convolution operator turns into a multiplication operator: the spectra of the image and the blur kernel are simply multiplied frequency by frequency. So you can see directly where information is being lost in the final image, by looking at which frequencies of the blur kernel are zero - at those frequencies, the output image has lost all of the original information.
Deconvolution techniques all try to restore the original image; in theory, all you need to do is take the blurred image's spectrum and divide it by the blur kernel's spectrum to obtain the original. Assuming you have no noise, etc., in the process, this works fine, except where the kernel's spectrum is zero - division by zero doesn't get you very far, and the information there is truly lost.
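A toy numpy demonstration of both halves of that argument: blur by multiplying spectra, then undo it by dividing. The kernel size and eps are arbitrary choices of mine, and FFT multiplication is circular convolution, which is close enough for this purpose.

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))

    kernel = np.zeros((64, 64))
    kernel[:5, 0] = 1 / 5            # crude 5-pixel vertical motion blur

    H = np.fft.fft2(kernel)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))  # convolution theorem

    # Naive inverse filtering; eps only guards against division by ~0.
    eps = 1e-8
    recovered = np.real(np.fft.ifft2(
        np.fft.fft2(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps)))

    print(np.max(np.abs(recovered - image)))  # tiny: this H has no exact zeros

Add noise to blurred and the division amplifies it wherever |H| is small, which is why practical methods regularize instead of dividing outright.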
With camera shake, the shape of the blur kernel tends to be a squiggle (the path of the camera motion), and the frequency spectrum is reasonably well behaved - there may be no zeros, or just a few spots. So reversing the blur is possible, maybe with some additional interpolation to cover the nulls. Out-of-focus blur (bokeh) is much harder, since it tends to be much more uniform and smooth, like a Gaussian blur.
A Gaussian blur turns out to have a Gaussian frequency spectrum as well - that means the blur kernel has effectively no frequency content past a cutoff point, and the wider the blur, the lower the frequency cutoff and the more information is lost. So deconvolution can't really work directly; you can make assumptions about what was there before (priors) to guide the reconstruction, but at some point that's about as good as pasting a random face from the internet onto the blurred head. The question is mostly about where that cutoff is - how much can your knowledge of 'this is a face' make up for the zeroed-out information? In practice, you're probably pretty safe if you've blurred the face to the point where no features remain. If you're really worried about it, throwing in some random noise makes the problem even more intractable.
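The "Gaussian stays Gaussian" fact is a standard Fourier-transform pair; in the continuous, noise-free idealization,

    \mathcal{F}\{ e^{-x^2/(2\sigma^2)} \}(\omega) = \sigma \sqrt{2\pi}\, e^{-\sigma^2 \omega^2 / 2}

so a blur of spatial width sigma has spectral width 1/sigma: double the blur radius and you halve the band of frequencies that survive above the noise floor.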
So in short: we can probably do OK on camera shake, and maybe on out-of-focus bokeh. We can't recover from smooth uniform blurs like Gaussian blurs.
Wow, I was a little unimpressed with the results until the text deblurring at the end. It was so crazy, I had to go back and actually read the article instead of skimming it :)
If you're interested in blind deconvolution in general, Dr. Levin of MIT put together a nice overview paper a few years ago: http://www.wisdom.weizmann.ac.il/~levina/papers/deconvLevinE...
(Disclaimer: I'm the developer of Blurity, a blind deconvolution product)