Blurring and then sharpening in the spatial domain is equivalent to applying a band-pass filter (or rather a band-stop filter, in the article's case) in the frequency domain. Blurring requires a lot of parameter tuning, in particular the size of the mask and the weight of each mask element. The choice of parameters maps to a particular filter in the frequency domain, e.g. a Gaussian filter or an ideal filter. The same goes for sharpening. Operating in the frequency domain lets you skip all that trial and error: you eyeball the frequency of interest and suppress it.
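That spatial/frequency equivalence is just the convolution theorem, and it's easy to check numerically. A minimal NumPy sketch, using a made-up 3x3 box-blur kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# a 3x3 box-blur kernel, zero-padded to the image size
kernel = np.zeros((64, 64))
kernel[:3, :3] = 1.0 / 9.0

# blur via the frequency domain: multiply the two spectra
blurred_fft = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

# the same blur done directly in the spatial domain (circular convolution)
blurred_direct = sum(
    np.roll(img, (i, j), axis=(0, 1)) for i in range(3) for j in range(3)
) / 9.0

print(np.allclose(blurred_fft, blurred_direct))  # True
```

Same result either way, up to floating-point noise; the frequency-domain route just makes the filter's shape explicit.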
> what you did is still the best out there
It still is.
I worked with a printing company before. The company imparts an anti-forgery pattern to packaging material by tuning the halftoning angle of the printer. The printing company then offers a seemingly transparent film, with a special pattern printed on it, to companies requiring brand protection. By overlaying the film on top of the packaging material, a specifically designed moiré pattern appears. If you squint hard enough, it is like public-private key encryption in the physical world. Whenever the brand receives a dispute over the authenticity of purchased items, a special team, one holding that transparent film, is summoned to verify the authenticity of the item in question. It is one of the many ways the brand in question protects its goods. The printing company was looking for a mobile solution instead of delivering the transparent film; that's how I got to learn more about the printing industry.
It is a relatively well-defined problem: removing a periodic pattern caused by the physical arrangement of the printing device. This is where an algorithmic approach shines over an ML approach. I think a lot of ML nowadays is an attempt to outsource the problem-understanding part. These are hot dogs, these are not hot dogs. I don't know what defines a hot dog, but is this picture a hot dog?
Hyperbole, of course.
On second thought, I think the author shouldn't remove the periodic noise at all. He was preserving "a copy of history", not the first occurrence of history. It is a property worth preserving, and a thing of beauty in itself, imo.
The halftone pattern itself might look lovely and “historical” on your screen, but the book would not present the image “as-is” (i.e. with all halftone detail from the original photo losslessly captured on the page), but rather through the lens of another halftone printing process, one whose DPI is low enough that it would arbitrarily interfere with the original photo’s to create a garbled mess. The final product wouldn’t be halftone dots encoding a picture, but rather halftone dots encoding moiré stripes from an attempt to replicate other halftone dots from a “sample grid” that doesn’t line up with them.
It’d be a bit like taking one of those game emulators that displays the game with a “scanline”-effect filter… and then hooking the computer running the emulator up to an old TV that has a natural built-in scanline effect. You won’t get the pretty scanlines from the emulator, nor will you get the pretty scanlines from the TV; you’ll just get a mess.
(To put it in technical terms: to accurately show the halftone “information” from the original photo in the book, the print resolution of the book would need to be double the Nyquist frequency of said halftone information. But it very likely won’t be.)
It would be possible to print the already halftoned image as it is without running it through another halftone process. Much like how text is not halftoned, but can include very small accurate shapes. I’m sure this makes the preproduction more complicated, but it’s essentially a software problem, not a hardware problem.
If you can extract the original halftone dots from the image, sure, you can create new mask layers using those.
But it's unlikely that you can do that: unless you're printing in a four-layer pure-CMYK process (and why would you?), even the best photo scanners are unlikely to capture enough information to tease apart the original offset layers in their original colors, where halftone dots have overlapped to produce new mixed colors.
Also, you'll be unlikely to reverse the color-bleed effect of the paper to get out the original halftone dot pitch, in order to print new dots that bleed out to the same size the original's did on new, different paper with new, different inks.
Basically, this is as hard as any art forgery project (i.e. needing to match historical materials and processes and environmental conditions), but before you can even start the forging process, you need to destroy even more information by passing the image through the grid-sampling process of a scanner, and only then try to reconstruct layered vector circles from the resulting sample grid.
The moire is an artifact of the scanning or scaling process, and is by no means historic. Removing it is certainly legitimate. That is particularly true since scaling the image can make it worse.
If you are talking about preserving the halftone in an original scan, then yes I agree that it should be preserved. Yet you must also accept that the periodic pattern will have to be removed prior to publishing the image since it will be scaled at some point.
are you at liberty to share the printing company here? I'm looking for something like this. If not, my email is in my profile and would appreciate info so I can reach out and maybe give them business!
The thing is a bit complicated. I actually worked with a middleman who acted as the "certificate authority": signing up brands and dispatching orders to printing companies. So the printing companies don't know what the matching pattern is. It was five years ago and I have since moved on.
20 years ago you could just type in the rotation angle you wanted for each color separation into your film plotter.
If you ask your printer, they should be able to tell you what lpi/dpi they can resolve on the media you're printing onto (you may be limited by dot gain on uncoated stock).
The article doesn't say, but the method described in the article is called a notch filter. Here are some links on it: [0], [1], [2], [4]. Instead of cutting a sharp edge in the frequency domain, it's better for the filter to blur the edge to avoid introducing too many artifacts; this is called a Gaussian notch filter.
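A Gaussian notch along those lines might be sketched like this in NumPy (the function names, peak positions, and sigma are my own made-up choices, not from the article or the links):

```python
import numpy as np

def gaussian_notch(shape, centers, sigma):
    """Band-stop mask: suppress frequencies near each (row, col) peak
    in the shifted spectrum, with a Gaussian roll-off of width sigma."""
    rows, cols = np.indices(shape)
    mask = np.ones(shape)
    for cr, cc in centers:
        d2 = (rows - cr) ** 2 + (cols - cc) ** 2
        mask *= 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))
    return mask

def remove_periodic(img, centers, sigma=3.0):
    # shift DC to the center so peak coordinates match what you see on screen
    spec = np.fft.fftshift(np.fft.fft2(img))
    spec *= gaussian_notch(img.shape, centers, sigma)
    # imaginary residue after the inverse transform is numerical noise
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
```

On a synthetic image carrying a pure sinusoidal "screen", notching out the two symmetric peaks removes the pattern while leaving the mean brightness almost untouched, since the soft edge avoids the ringing a hard cutoff would cause.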
Anyway here is another method, that also targets peak regions on frequency domain for removal but is based on a median filter instead, [5]
I'm not sure. What I can find is this: https://www.youtube.com/watch?v=c11lSzKdTCk but I don't know if Photoshop implements it with a (linear) blur + sharpen (which would degrade the image quality more) or with a (nonlinear) notch filter.
Nature will quite readily calculate the 2D Fourier transform and its inverse for us without the use of a digital computer.
This is because lenses effectively do a Fourier transform at the focal point. With the right setup, you can apply filters at the focal point and get pretty much exactly what you would expect. An example of such a setup is the 4F Correlator. [0]
Fourier optics is a whole subfield within optics, and it really is rather fascinating.
Exposure to Fourier optics really helped develop my intuition around the Fourier transform.
This is how electron crystallography works. You can choose to get half the Fourier transform (aka the diffraction pattern) with phase information lost, or use a secondary lens to get the full picture back after correction. It's quite magical.
Then you can do FT on that final image on a computer and then modify that pattern in reciprocal space to fix flaws with the image like astigmatism and noise.
I was introduced to this idea in a really cool video about using Fourier Optics for optical pattern recognition.[0] The video happens to have one of the best explanations of Fourier transforms I've yet encountered.
This is precisely the method used to undo screening and half-toning in one of the labs in 6.003 (the intro signal processing course at MIT). The technique works astoundingly well for its straightforwardness, and the exercise can be a lot of fun.
My institution's equivalent was projecting an image of a "criminal" behind bars, doing the masking with a piece of blue-tak in the reciprocal space, and then watching the bars vanish in the image. And then doing the same thing with a cold-war era real life spy-plane photograph of the ocean, followed by masking out the frequency corresponding to the waves, and revealing the underwater submarine. Fun and lifelong memorable.
It is a shame that the article doesn't make a connection between blurring (and sharpening), and operations in the Fourier domain.
The article paints on the Fourier image and sees the effect on the original image. Blurring an image, for instance, is equivalent to painting everything outside a centered disc black (a low-pass filter).
I tried the image[1] on one of the ML super-resolution online image sharpeners[2] and the result was: https://ibb.co/S53Lt0X (click button to look at it in full resolution).
The generated image does not have the same global problem with moiré patterns. The dot patterns remain in the image, randomly dithered or converted into lines. The FFT solution worked better than that particular ML model, although I presume an ML model could be trained specifically to remove printing dots.
I would think the optimal technique lies somewhere between the two: a convolution kernel optimized to conflate halftone dots with each other as to restore exactly the information possible. (Of course, such a kernel would have to be individually tuned for the orientation/spacing of the dots.)
Literally painting over frequency peaks in the FFT with black circles would, I imagine, be pretty lossy, and wouldn't entirely rid you of the pattern (since you're making a new pattern with your circles). Indeed, in the animation, the image does get darker as circles are added, and some of the pattern is still visible.
Perhaps using a blur tool to blur out the peaks in the FFT would serve to maintain original image tone, and further reduce patterning?
GIMP has a wavelet-decompose option/plugin which breaks frequency bands into separate layers (like a stereo's multiband equalizer). You simply delete the layer corresponding to the frequency component and bam, visual features at that scale vanish, preserving structure at all other scales.
There are even more sophisticated wavelet denoisers out there that effectively do the black-circle-over-peaks trick, but automatically and more precisely.
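A crude stand-in for that kind of band decomposition can be built from differences of successive Gaussian blurs (the sigma-doubling schedule and function names here are my own assumptions, not GIMP's exact algorithm):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_bands(img, n_scales=4):
    """Split an image into detail bands as differences of successive
    Gaussian blurs (sigma doubling per scale), plus a low-pass residual."""
    bands, current = [], img.astype(float)
    for i in range(n_scales):
        blurred = gaussian_filter(current, sigma=2.0 ** i)
        bands.append(current - blurred)  # detail at this scale
        current = blurred
    bands.append(current)                # residual low-pass layer
    return bands

def drop_band(img, k, n_scales=4):
    """Reconstruct the image with one band deleted, GIMP-style."""
    return sum(b for i, b in enumerate(wavelet_bands(img, n_scales)) if i != k)
```

Summing all the layers reconstructs the original exactly (the decomposition telescopes), and deleting the finest layer acts like a gentle blur, which matches the "delete the layer and the scale vanishes" behavior described above.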
I did look at this. It seems pretty coarse though -- wavelets generally divide the spectrum into power-of-two frequency bands, and the Gimp is no exception it seems. Also of course you have to remove all higher-frequency bands as well, since the halftone dots have high frequency content at the edges of the dots.
Trying it out now, I'm able to get a good result (to my eye) by deleting the top two bands, but it looks nearly identical to the article's blurring example.
Upscaling the image first by 141% and deleting the same two bands, the dots start peeking through, but the result looks closer to the article's inverse FFT result -- minus the artificial edge contrast enhancement that came from the author's use of black (rather than grey) circles.
G'MIC (which is available as a GIMP plugin) has the exact same reversible Fourier transform as a layer as the Fiji software he referenced. You can do the exact same operations from the article (or any other edits/blurs/selections to the peaks).
It's a bit cumbersome since you can't preview what you're doing, but quite workable. Was trying it out on his reference image this evening and got (IMO nicer) results fiddling with magic wand/fuzzy select/grow/shrink and various fills.
Any convolution kernel is equivalent to a FFT (aside from wrapping effects). The advantage of using a convolution kernel (instead of a hand-painted FFT mask) is that it's purely local and doesn't cause halftone dots in one area of the image to affect the rest, and is faster than a full-image FFT (which is O(N log N) in both the width and height).
A halftone dot whose size/shape changes gradually across the image acts like slow PWM in a pulse wave, changing the relative amplitudes of the harmonics (but not their locations). However, steep discontinuous changes can have nastier effects (which aren't handled well by either a convolution kernel or FFT).
I suspect it's possible to handle edges better than a FFT using a specialized algorithm, but I don't know if it's possible without inordinate runtimes, and if the end result is significantly better than a FFT or not.
I think the FFT is often used to actually speed up convolution; it isn't slower than computing the convolution of your kernel shifted to every pixel position.
> [FFT] isn't slower than computing convolution of your kernel shifted to every pixel position.
For a fixed kernel size and sufficiently large images, full-image FFT is slower than naive convolution, because the FFT scales as O(n log n) with the image size while naive convolution scales as O(n). This can be avoided by using blocked FFT instead of full-image FFT:
> If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available.[13] Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method.[14] A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations.[15]
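Assuming SciPy is available, a quick sanity check that the three strategies (direct, full-image FFT, and overlap-add blocks) all compute the same convolution:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
img = rng.random((256, 256))
kernel = rng.random((15, 15))

direct = signal.convolve2d(img, kernel, mode="same")    # naive spatial convolution
via_fft = signal.fftconvolve(img, kernel, mode="same")  # full-image FFT
blocked = signal.oaconvolve(img, kernel, mode="same")   # overlap-add blocked FFT

print(np.allclose(direct, via_fft) and np.allclose(direct, blocked))  # True
```

In practice `scipy.signal.convolve` picks between the direct and FFT paths automatically based on estimated cost, which reflects the crossover discussed above.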
I see, I wasn't thinking about kernels much smaller than the image itself. So, if you don't use blocked FFT, it only makes sense to use FFT convolution when your kernel size is > log n?
You may be right, but I think in practice, people use direct convolution for small kernels, and FFT convolution for large kernels (blocked FFT if the signal is substantially longer than the kernel). I didn't look into the math behind this intuition though.
The specks on the FFT were hand removed... I thought the following: he had an image obtained after a blur and sharpening filters. By replacing parts of the FFT with black, he was just throwing away information. What would happen if instead of naively replacing the specks on the FFT with black he replaced those parts with the FFT from the blurred and sharpened image?
Indeed, applying the "dark circle" (ideal filter or sinc filter) in the frequency domain would introduce other artifacts. The restored image would contain some "ringing" around sharp edges, which might not be perceivable in the image shown. [1]
As for the darkening of the restored image, it depends on how the software interprets the band-stopping of the dark holes. By dark-circling in the frequency domain, energy is taken out of the image. Re-normalizing the image might be desired, but that would also turn dark areas brighter.
[1] https://en.wikipedia.org/wiki/Ringing_artifacts
I suspect using black instead of grey is the reason for the edge contrast enhancement in the inverse-FFT image. I.e., that's exactly why the tiny areas in between the small letters are darker, and thus the lettering clearer.
Perhaps the optimal technique would be to write a function which halftones a greyscale image, and then use optimisation to find a greyscale image which reproduces the input image when halftoned. This feels like an expectation–maximization kind of problem, but I can't give a rigorous argument for why.
I use Fiji every day, and I love the first impressions from someone who does not use it regularly (all correct too).
FFTs are amazing. In x-ray crystallography, you can use them to recapture the original image of a crystallized protein from the scatter of dots left by x-rays passing through it, essentially playing the role of a lens. They never cease to amaze me with their usefulness!
If you liked this, you might enjoy this little trick you can do with moiré patterns -- near the end of the post, I talk a little bit about the math behind the FFT connection. https://hardmath123.github.io/moire.html
Good writing, yes. But there are ways to properly reheat old pizza so it's almost as good as fresh. One way (the best way) is to put the pizza slice in a skillet with a little bit of water on the other side (not touching!), cover it, and heat over medium for 2-4 minutes. Another is to use an air fryer (if you have one). You may need to experiment with temperature and duration, but once you find the proper settings it's perfect.
The same technique works well for reheating burritos, or upleveling microwave burritos. A nice sear on each side gives a firmer structure and I think it tastes better too :)
Cool! I wrote a similar script to automate removing rasters some ten years ago using G'MIC[1]. It was open sourced[2] and made available as a plugin of some sort. (Maybe through GIMP? Can't really remember.) The use case was enhancing page-size scans of Babar children's books, which it worked great for. YMMV though; it's a bit rough around the edges.
That solution is one of those things that seems obvious as soon as it is mentioned, at least if you are familiar with Fourier transforms. You want to remove a periodic pattern from an image, and of course that will be easier to do in frequency space than in real space.
Interestingly, discarding high-frequency components of a transform is essentially how JPEG lossy compression works (via a blockwise discrete cosine transform), so compressing the image as a JPEG would likely have a similar result.
The divide-and-conquer Cooley–Tukey FFT "algorithm (and the general idea of an FFT) was popularized by a publication of Cooley and Tukey in 1965, but it was later discovered that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms)."
Lovely stuff, but halftone prints are just one type of print. Before digital printing was a thing, photos were created with enlargers throwing light through a negative onto photosensitive paper. Since the negative was also created via photochemical effects, the "frequency" effect was much reduced. This can still be done by ordering prints on "photographic papers" by Fuji or Kodak.
If the image was created using a digital sensor, then it won't work as well, of course because the sensor itself is subject to a grid. However, the kind of Bayer Filter used in the sensor can help tackle the effect at source. This is what Fuji's X-Trans sensor[1] claims to do. (I am a Fuji user but I have no data point to offer in either direction)
Fourier analysis is currently used in "Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, strip artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera."
Also, "JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image."
I don't think what the author did here (descreening) has anything to do with moiré per se. Sure, due to the nature of halftone, it often introduces moiré, as the author said. But if you scan your document/photo at a high enough resolution (sampling theorem, anyone?), it isn't an issue in the descreening process later (which is the main topic of this article).
Also, using an inverse Fourier transform to descreen is already the basis of lots of popular commercial denoise plugins (for Photoshop etc.). Most of them will automatically measure the angle and resolution of the halftone matrix too.
I assumed that the issue was less the scanning (which can be extremely high precision, and, in any case, the author had to do to get the half-tone image he was working with) but the printing. Once the image is resized and printed in the book, the offset between the dpi of the printer and the half-tone dots was going to introduce new artifacts.
The author probably missed that the first F in FFT refers to a very specific (but efficient) algorithm, and that the effects he achieved are due to the properties of the (2D) Fourier transform, which can be computed using other algorithms as well.
Completely correct. And yet, FFT is much less ambiguous than FT ("Foot"? "Financial Times"? "Face time"?), so imo what is lost in preciseness is gained in clarity.
The discrete Fourier transform is probably the most accurate way to describe the mathematical transformation being applied. Although it is very popular to say FFT when what is meant is the Fourier transform or the DFT, it is a bit like talking about sorting but calling it Quicksort despite that being a specific algorithm.
This reminded me of when film photography processors went from printing using a photographic process to digital printing. The digital process introduced a regular grid to the print which reduced the quality of the print. When prints were made using an enlarger onto photographic paper, the grain was random and the resulting images had a much nicer character.
Been using this technique for a long time, great to see an in-depth look at it.
This is great for other types of regular patterns, not just moire. For instance, if you scan old 60s/70s photos on textured/fiber paper, you can use this to smooth out the texture if desired.
It does for certain types that had a regular frequency to it. I think it was a kind of manufactured texture, not the natural fibers from the material. I don't know what it's called, but I think it was popular around the mid 60s
Well written and really interesting. It's really cool to see an FFT usage example; afaik it's also used by Shazam for matching sound samples.
I have fond memories of FFT, from the time I attempted to use it for generating stereo-pairs for obstacle detection using a two camera system. In my case specifically I was using it to find a particular pixel in the second image of a pair and calculate the shift.
In the end, using the FFT for something like that was much slower at the time than using a correlation-window-based algorithm. It was very interesting though, and very different from the latter.
Fun fact: large correlations (hundreds or thousands of samples in 1D, unsure in 2D) can be efficiently computed using FFTs (if the needle is smaller than the haystack, you can use chunked algorithms like overlap-add/save).
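A minimal 1-D sketch of that trick, using made-up signals (`find_shift` is a hypothetical helper, not from any library):

```python
import numpy as np

def find_shift(a, b):
    """Circular cross-correlation via FFT; returns the lag where the
    correlation peaks, i.e. the estimated shift of `a` relative to `b`."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    return int(np.argmax(corr))

rng = np.random.default_rng(1)
sig = rng.random(1024)
shifted = np.roll(sig, 37)       # stand-in for the second camera's view
print(find_shift(shifted, sig))  # recovers the 37-sample shift
```

For the stereo-pair case above, the same idea in 2-D (phase correlation) finds the disparity between the two camera images in O(n log n) instead of sliding a window.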
What I don’t understand is how painting so much of the frequency space image black wouldn’t more adversely affect the original image? That was quite a lot removed…
Fourier transforms are amazing. Once I "got" how they worked it really made a lot of things like this make a lot more sense.
Blotting out "bright points" in a 2D transform is basically creating a band-stop filter to get rid of those frequencies. That pictures have frequencies is cool in itself, looking at each vector through the image as a series of samples.
What a great read; interesting, informative and funny. Great examples of the concepts he's explaining - that make sense even for those of us with very little knowledge of these topics in advance.
> with a user interface only a parent could love
> “You don’t need ML,” Bryan said. “What you need is inverse FFT.”
This article is very well written! I remember having a similar experience creating a bandpass filter by computing FFT, multiplying by a Gaussian, and inverting the result. I swear Gaussian functions are just one of those things that you can insert anywhere and it'll just work out.
Well written, good flow, interesting read and reminded me of the lessons we had at the university some decades ago (should have paid more attention). Why isn’t this offered as an option in photoshop or other commercial image editing packages?
Since the halftone dots are in a matrix that is at an angle, can't you just write a small program that measures the size of each halftone dot and stores that measurement? Then map the measured size to a grey value (supersmall = near white, big = black), and you have recovered all the information in that image.
Then just counter the rotation of the matrix to get the best possible right-side-up scan.
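A toy version of that idea, assuming an axis-aligned grid and skipping the rotation-countering step (`dots_to_grey` and the cell size are my own inventions):

```python
import numpy as np

def dots_to_grey(binary, cell=8):
    """Toy inverse-halftone: measure ink coverage per grid cell and map
    it back to a grey value. Assumes the screen grid is axis-aligned and
    exactly `cell` pixels in pitch; real scans would need the grid angle
    and pitch estimated first."""
    h, w = binary.shape
    grey = np.empty((h // cell, w // cell))
    for i in range(h // cell):
        for j in range(w // cell):
            patch = binary[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            grey[i, j] = 1.0 - patch.mean()   # full ink -> black (0.0)
    return grey
```

Measuring coverage per cell rather than per-dot diameter sidesteps segmenting individual dots, but both amount to the same thing: recovering one grey sample per halftone cell.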
Or better yet: figure out the average dot, replicate it all over the image, and run its Fourier transform. I think that subtracting it from the original image Fourier transform and then inverse transforming the result should also work.
Subtracting the Fourier transform of a dot pattern from the Fourier transform of a photo is equivalent to (but slower than) subtracting a dot pattern from the photo. Subtracting an average-sized dot from a random dot would leave behind a ring, and subtracting a cone from a random dot would instead leave behind odd sloped shapes. It might be a good first step though.
Also, halftone dots aren’t necessarily simple shapes. They are often designed to change shape as they go from 0 to 100% in order to have a shape that holds ink better (for some reason I don’t understand, a circle isn’t necessarily the best shape for holding its edge when made out of wet ink), and also for style (think large halftones where the dots are visibly long streaks all oriented in a particular direction which isn’t necessarily the same angle as the halftone grid). In fact, postscript allows you to prepend code that defines your own custom halftone function f(u,v) -> 0/1 (where u,v is the coordinate within the halftone cell) before printing your job, kinda like a weird little pixel shader from the 90’s.
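For what it's worth, the "subtracting the transforms is equivalent to subtracting the images" point above is just linearity of the Fourier transform, easy to verify numerically (the arrays here are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
photo, dots = rng.random((32, 32)), rng.random((32, 32))

# FT(a - b) == FT(a) - FT(b): subtracting spectra is just subtracting images
lhs = np.fft.fft2(photo - dots)
rhs = np.fft.fft2(photo) - np.fft.fft2(dots)
print(np.allclose(lhs, rhs))  # True
```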
Awesome read! Is this in use by any software already? I can imagine someone at Adobe has already implemented this in Photoshop. Can we apply this quickly enough to let everyone wear whatever stripes they want on television?
This is not a general-moire-removal technique. It's basically a band-stop filter that removes those frequencies from an image that correspond to the size of halftone dots both horizontally and vertically. Nifty use case for FFT though, would probably make a good homework assignment on an undergrad DSP course.
> removes those frequencies from an image that correspond to the size of halftone dots both horizontally and vertically
And all the harmonics that make up the circle shape.
I'm pretty sure that making the circle repeat horizontally and vertically is some kind of convolution product, and that has some nifty properties under the Fourier transform.
I wonder, is it better to black out the relevant parts of the FFT image, or is it better to adjust them to be the same as the "background" shade surrounding them?
The spectral-domain representation of the halftone image is cool. Seeing as it's periodic itself, it would be fun to play with a 2D cepstral representation and then lifter out the low-frequency peaks that seem to be generated by the halftoning.
Jeez, that's a long-ass article to just end up at the tl;dr of: FFT, fast Fourier transform. I've used such tools with mixed results on older scanned photographs. You can pull it off in GIMP with one or two extensions, I believe.
https://www.gimp-forum.net/Thread-Fourier-Transform-tutorial...