Moiré No More (getrevue.co/profile/shift-happens)
1002 points by stock_toaster on Sept 26, 2021 | 112 comments



The blurring and then sharpening in the spatial domain is equivalent to applying a band-pass filter, or rather a band-stop filter in the article's case, in the frequency domain. Blurring requires a lot of parameter tuning, in particular the size of the mask and the weight of each mask element. The choice of parameters can be mapped to a particular filter in the frequency domain, e.g. a Gaussian filter or an ideal filter. Same for sharpening. Operating in the frequency domain lets you skip all the trial and error over parameters: you eyeball the frequency of interest and suppress it.
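
To make the equivalence concrete, here is a minimal numpy/scipy sketch (mine, not from the article or the comment above) showing that a Gaussian blur in the spatial domain and a Gaussian low-pass multiplication in the frequency domain are the same operation; the random image and sigma are arbitrary placeholders:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    sigma = 3

    # spatial domain: convolve with a Gaussian kernel (wrap-around boundary so it
    # matches the DFT's periodic assumption; large truncate so the kernel is ~exact)
    spatial = ndimage.gaussian_filter(img, sigma=sigma, mode="wrap", truncate=8.0)

    # frequency domain: multiply the spectrum by a Gaussian low-pass, then invert
    freq = np.fft.ifft2(ndimage.fourier_gaussian(np.fft.fft2(img), sigma=sigma)).real

    print(np.max(np.abs(spatial - freq)))  # tiny difference: same operation, two domains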

> what you did is still the best out there

It still is.

I worked with a printing company before. The company imparts an anti-forgery pattern to packaging material by tuning the halftoning angle of the printer. The printing company then offers a seemingly transparent film, with a special pattern printed on it, to companies requiring brand protection. By overlaying the film on top of the packaging material, a specifically designed moire pattern appears. If you squint your mind enough, it is like public-private key encryption in the physical world. Whenever the brand receives a dispute over the authenticity of a purchased item, a special team, one having that transparent film, is summoned to verify the authenticity of the item in question. It is one of the many ways the brand in question protects its goods. The printing company was looking for a mobile solution instead of delivering the transparent film; that's how I got to learn more about the printing industry.


> It still is.

It is a relatively well defined problem - removing a periodic pattern caused by the physical arrangement of the printing device. This is where an algorithmic approach shines over an ML approach. I think nowadays a lot of ML is an attempt to outsource the problem-understanding part. These are hot dogs, these are not hot dogs. I don't know what defines a hot dog, but is this picture a hot dog?

Hyperbole, of course.

On second thought, I think the author shouldn't remove the periodic noise at all. He was preserving "a copy of history", not the first occurrence of history. It is a property worth preserving. It is a beauty in itself imo.


The halftone pattern itself might look lovely and “historical” on your screen, but the book would not present the image “as-is” (i.e. with all the halftone detail from the original photo losslessly captured on the page), but rather through the lens of another halftone printing process, one whose DPI is low enough that it would arbitrarily interfere with the original photo’s halftone to create a garbled mess. The final product wouldn’t be halftone dots encoding a picture, but rather halftone dots encoding moiré stripes from an attempt to replicate other halftone dots on a “sample grid” that doesn’t line up with them.

It’d be a bit like taking one of those game emulators that displays the game with a “scanline”-effect filter… and then hooking the computer running the emulator up to an old TV that has a natural built-in scanline effect. You won’t get the pretty scanlines from the emulator, nor will you get the pretty scanlines from the TV; you’ll just get a mess.

(To put it in technical terms: to accurately reproduce the halftone “information” from the original photo in the book, the print resolution of the book would need to be at least double the spatial frequency of said halftone information, i.e. above its Nyquist rate. But it very likely won’t be.)


It would be possible to print the already halftoned image as it is without running it through another halftone process. Much like how text is not halftoned, but can include very small accurate shapes. I’m sure this makes the preproduction more complicated, but it’s essentially a software problem, not a hardware problem.


But text is super sharp. Halftone is used to blend colors/shades.


The image is already halftoned. There are no shades left.


If you can extract the original halftone dots from the image, sure, you can create new mask layers using those.

But it's unlikely that you can do that: unless you're printing in a four-layer pure-CMYK process (and why would you?), even the best photo scanners are unlikely to capture enough information to tease apart the original offset layers in their original colors, where halftone dots have overlapped to produce new mixed colors.

Also, you'll be unlikely to reverse the color-bleed effect of the paper to get back the original halftone dot pitch, in order to print new dots that bleed out to the same size the originals did on new, different paper with new, different inks.

Basically, this is as hard as any art forgery project (i.e. needing to match historical materials and processes and environmental conditions), but before you can even start the forging process, you need to destroy even more information by passing the image through the grid-sampling process of a scanner, and only then try to reconstruct layered vector circles from the resulting sample grid.


The moire is an artifact of the scanning or scaling process, and is by no means historic. Removing it is certainly legitimate. That is particularly true since scaling the image can make it worse.

If you are talking about preserving the halftone in an original scan, then yes I agree that it should be preserved. Yet you must also accept that the periodic pattern will have to be removed prior to publishing the image since it will be scaled at some point.


It would be cool if the photo could be printed in a similar way to re-create the effect from the recovered image.


“I have had my results for a long time: but I do not yet know how I am to arrive at them” - C. F. Gauss. Nice job a_c


are you at liberty to share the printing company here? I'm looking for something like this. If not, my email is in my profile and would appreciate info so I can reach out and maybe give them business!


The thing is a bit complicated. I actually worked with a middleman; this middleman acts as the "certificate authority" - signing up brands and dispatching orders to the printing company(ies). So the printing companies don't know what the matching pattern is. It was five years ago and I have since moved on.


20 years ago you could just type in the rotation angle you wanted for each color separation into your film plotter.

If you ask your printer, they should be able to tell you what lpi/dpi they can resolve on the media you're printing onto (you may be limited due to dot gain on uncoated stock).


The article doesn't say, but the method described in the article is called a notch filter. Here are some links on it: [0], [1], [3], [4]. Instead of cutting a sharp edge in the frequency domain, it's better to blur the filter's edge to avoid introducing too many artifacts; this is called a Gaussian notch filter (a rough numpy sketch of the idea follows the links below).

Anyway, here is another method that also targets peak regions in the frequency domain for removal, but is based on a median filter instead: [5]

[0] Removal of Moiré Patterns in Frequency Domain, https://ijournals.in/wp-content/uploads/2017/07/5.3106-Khanj...

[1] Periodic Noise Removing Filter, https://docs.opencv.org/3.4.15/d2/d0b/tutorial_periodic_nois...

[3] Adaptive Gaussian notch filter for removing periodic noise from digital images, https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-...

[4] Adaptive Optimum Notch Filter for Periodic Noise Reduction in Digital Images, https://eej.aut.ac.ir/article_94_befd8a642325852c3a0d41ece10...

[5] A New Method for Removing the Moire' Pattern from Images, https://arxiv.org/abs/1701.09037
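
For anyone who wants to play along at home, here is a rough numpy sketch of a Gaussian notch-reject filter in the spirit of [3]/[4]; the peak coordinates in the usage comment are made-up placeholders you would read off the log-magnitude display by eye:

    import numpy as np

    def gaussian_notch_reject(image, peaks, sigma=4.0):
        """Multiply the centred spectrum by soft Gaussian 'holes' at the given
        peak coordinates and their point-symmetric mirrors, then invert."""
        F = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        cy, cx = h // 2, w // 2
        yy, xx = np.mgrid[0:h, 0:w]
        mask = np.ones((h, w))
        for py, px in peaks:
            for sy, sx in ((py, px), (2 * cy - py, 2 * cx - px)):  # peak + mirror
                d2 = (yy - sy) ** 2 + (xx - sx) ** 2
                mask *= 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))     # soft notch, no hard edge
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

    # usage sketch: peaks = [(110, 140), (150, 90)]   # hypothetical bright spots
    # clean = gaussian_notch_reject(scan, peaks)

The soft Gaussian edge is exactly what avoids the ringing that a hard black circle introduces.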


Does any photoediting tool implement these filters?


I'm not sure... what I can find is this https://www.youtube.com/watch?v=c11lSzKdTCk but I don't know if Photoshop implements it with a (linear) blur + sharpen (which would degrade the image quality more) or with a (nonlinear) notch filter


Nature will quite readily calculate the 2D Fourier transform and its inverse for us without the use of a digital computer.

This is because lenses effectively do a Fourier transform at the focal point. With the right setup, you can apply filters at the focal point and get pretty much exactly what you would expect. An example of such a setup is the 4F Correlator. [0]

Fourier optics is a whole subfield within optics, and it really is rather fascinating.

Exposure to Fourier optics really helped develop my intuition around the Fourier transform.

[0] https://en.wikipedia.org/wiki/Fourier_optics#4F_Correlator


To add to your comment:

This is how Electron Crystallography works. You can choose to get half the Fourier transform (aka the diffraction pattern) with the phase information lost, or use a secondary lens to get the full picture back after correction. It's quite magical.

Then you can do FT on that final image on a computer and then modify that pattern in reciprocal space to fix flaws with the image like astigmatism and noise.

http://www.calidris-em.com is the software for this.


I was introduced to this idea in a really cool video about using Fourier Optics for optical pattern recognition.[0] The video happens to have one of the best explanations of Fourier transforms I've yet encountered.

[0] https://www.youtube.com/watch?v=Y9FZ4igNxNA


Huygens Optics is an amazing channel!


This is precisely the method used to undo screening and half-toning in one of the labs in 6.003 (the intro signal processing course at MIT). The technique works astoundingly well for its straightforwardness, and the exercise can be a lot of fun.

[1] https://sigproc.mit.edu/spring19/psets/11/cat

[2] https://sigproc.mit.edu/fall19/psets/10/cat


My institution's equivalent was projecting an image of a "criminal" behind bars, doing the masking with a piece of Blu-Tack in reciprocal space, and then watching the bars vanish in the image. And then doing the same thing with a cold-war-era real-life spy-plane photograph of the ocean, followed by masking out the frequency corresponding to the waves, and revealing the underwater submarine. Fun and memorable for life.


I also remember this being a part of the "mathematical imaging" course at UCLA. It really was a lot of fun, though the topics were a bit dated.


It is a shame that the article doesn't make a connection between blurring (and sharpening), and operations in the Fourier domain.

The article paints on the Fourier image and sees the effect on the original image. Well, blurring an image is equivalent to painting black over everything outside a centered disk, i.e. removing the high frequencies.


It’s quite easy to explain too: when talking about frequencies you just have to mention high- and low-pass filters.


I tried the image[1] on one of the ML super-resolution online image sharpeners[2] and the result was: https://ibb.co/S53Lt0X (click button to look at it in full resolution).

The generated image does not have the same global problem with moiré patterns. The dot patterns remain in the image, randomly dithered or converted into lines. The FFT solution worked better than that particular ML model, although I presume an ML model could be trained specifically to remove printing dots.

[1] https://s3.amazonaws.com/revue/items/images/010/787/862/orig...

[2] https://deepai.org/machine-learning-model/torch-srgan

Edited: added link to output image.


I would think the optimal technique lies somewhere between the two: a convolution kernel optimized to conflate halftone dots with each other so as to recover as much of the original information as possible. (Of course, such a kernel would have to be individually tuned for the orientation/spacing of the dots.)

Literally painting over frequency peaks in the FFT with black circles is, I imagine, pretty lossy, and won't entirely rid you of the pattern (since you're making a new pattern with your circles). Indeed, in the animation, the image does get darker as circles are added, and some of the pattern is still visible.

Perhaps using a blur tool to blur out the peaks in the FFT would serve to maintain original image tone, and further reduce patterning?


The ImageMagick documentation for this technique recommends blurring the black circles:

https://legacy.imagemagick.org/Usage/fourier/#noise_removal


GIMP has a wavelet decompose option/plugin which breaks frequency bands into separate layers (like a stereo's multiband equalizer). You simply delete the layer corresponding to the offending frequency component and bam, visual features at that scale vanish, preserving structure at all other scales.

There's even more sophisticated wavelet denoisers out there that effectively do the black circle over peaks trick, but automatically and more precisely.
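
If you'd rather script it than click through GIMP layers, here is a small sketch of the same idea with PyWavelets (assuming it's installed; the wavelet, level count and number of dropped bands are arbitrary choices):

    import numpy as np
    import pywt

    def drop_finest_bands(image, wavelet="db4", levels=4, drop=2):
        """Decompose into frequency bands, zero the `drop` finest detail bands
        (where the halftone dots live), and reconstruct everything else."""
        coeffs = pywt.wavedec2(image, wavelet, level=levels)
        for i in range(1, drop + 1):                 # coeffs[-1] is the finest band
            coeffs[-i] = tuple(np.zeros_like(c) for c in coeffs[-i])
        return pywt.waverec2(coeffs, wavelet)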


I did look at this. It seems pretty coarse though -- wavelets generally divide the spectrum into power-of-two frequency bands, and the Gimp is no exception it seems. Also of course you have to remove all higher-frequency bands as well, since the halftone dots have high frequency content at the edges of the dots.

Trying it out now, I'm able to get a good result (to my eye) by deleting the top two bands, but it looks nearly identical to the article's blurring example.

Upscaling the image first by 141% and deleting the same two bands, the dots start peeking through, but the result looks closer to the article's inverse FFT result -- minus the artificial edge contrast enhancement that came from the author's use of black (rather than grey) circles.


G'MIC (which is available as a GIMP plugin) has the exact same reversible Fourier-transform-as-a-layer feature as the Fiji software he referenced. You can do the exact same operations from the article (or any other edits/blurs/selections on the peaks).

It's a bit cumbersome since you can't preview what you're doing, but quite workable. Was trying it out on his reference image this evening and got (IMO nicer) results fiddling with magic wand/fuzzy select/grow/shrink and various fills.


Huh, interesting, thanks for trying that out! Yeah I forgot about the overtone frequencies of the halftone dots.

Yeah, I suppose you'd need a tool with a continuous wavelet transform, and potentially a 2D one at that.


I used this to remove/reduce scars from some portraits. Many layers, and then edit the desired spot (without removing the layers). Works like a charm.



Any convolution kernel is equivalent to a multiplication in the Fourier domain (aside from wrapping effects). The advantage of using a convolution kernel (instead of a hand-painted FFT mask) is that it's purely local and doesn't cause halftone dots in one area of the image to affect the rest, and it is faster than a full-image FFT (which is O(N log N) in both the width and height).

A halftone dot whose size/shape changes gradually across the image acts like slow PWM in a pulse wave, changing the relative amplitudes of the harmonics (but not their locations). However, steep discontinuous changes can have nastier effects (which aren't handled well by either a convolution kernel or FFT).

I suspect it's possible to handle edges better than a FFT using a specialized algorithm, but I don't know if it's possible without inordinate runtimes, and if the end result is significantly better than a FFT or not.

(Also, FFTs won't work as well for non-uniform hand-drawn halftones, like the charming https://upload.wikimedia.org/wikipedia/commons/a/ac/Julemoti....)


I think the FFT is often used to actually speed up convolution; it isn't slower than computing the convolution of your kernel shifted to every pixel position.

https://en.wikipedia.org/wiki/Convolution#Fast_convolution_a...
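
For a sense of the crossover, SciPy exposes both routes (and can even pick one for you); a toy comparison sketch with arbitrary sizes:

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    image = rng.random((512, 512))
    kernel = np.outer(signal.windows.gaussian(25, 4), signal.windows.gaussian(25, 4))
    kernel /= kernel.sum()

    direct = signal.convolve2d(image, kernel, mode="same")    # shift-and-multiply
    viafft = signal.fftconvolve(image, kernel, mode="same")   # multiply the spectra

    print(np.max(np.abs(direct - viafft)))                    # same result, up to float noise
    print(signal.choose_conv_method(image, kernel, "same"))   # SciPy's own guess at the faster route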


> [FFT] isn't slower than computing convolution of your kernel shifted to every pixel position.

For a fixed kernel size and sufficiently large images, full-image FFT is slower than naive convolution, because FFT scales as O(n log n) in the image size while naive convolution scales as O(n). This can be avoided by using blocked FFT instead of full-image FFT:

> If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available.[13] Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method.[14] A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations.[15]
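
A quick 1-D illustration of the overlap-add point, using SciPy's oaconvolve (assumes SciPy >= 1.4; the signal and kernel sizes are arbitrary):

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(1)
    long_signal = rng.standard_normal(1_000_000)
    kernel = signal.windows.hann(2049)

    blocked = signal.oaconvolve(long_signal, kernel, mode="same")  # kernel-sized FFT blocks
    whole = signal.fftconvolve(long_signal, kernel, mode="same")   # one million-point FFT

    print(np.max(np.abs(blocked - whole)))   # identical up to float precision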


I see, I wasn't thinking about kernels much smaller than the image itself. So, if you don't use blocked FFT, it only makes sense to use FFT convolution when your kernel size is > log n?


You may be right, but I think in practice, people use direct convolution for small kernels, and FFT convolution for large kernels (blocked FFT if the signal is substantially longer than the kernel). I didn't look into the math behind this intuition though.


The specks on the FFT were hand-removed... I thought the following: he had an image obtained after blur and sharpen filters. By replacing parts of the FFT with black, he was just throwing away information. What would happen if, instead of naively replacing the specks on the FFT with black, he replaced those parts with the FFT from the blurred and sharpened image?


Indeed, applying the "dark circle" (an ideal filter, i.e. sinc-like in the spatial domain) in the frequency domain would introduce other artifacts. The restored image would contain some "ringing" around sharp edges, which might not be perceivable in the image shown. [1]

As for the darkening in the restored image, it depends on how the software interprets the band-stopping dark holes. By dark-circling in the frequency domain, energy is taken out of the image. Re-normalizing the image might be desired, but then it would turn dark areas brighter.

[1] https://en.wikipedia.org/wiki/Ringing_artifacts


I suspect using black instead of grey is the reason for the edge contrast enhancement in the inverse-FFT image. I.e., that's exactly why the tiny areas in between the small letters are darker, and thus the lettering clearer.


Perhaps the optimal technique would be to write a function which halftones a greyscale image, and then use optimisation to find a greyscale image which reproduces the input image when halftoned. This feels like an expectation–maximization kind of problem, but I can't give a rigorous argument for why.
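
A hedged sketch of that idea (entirely my own construction, not anything from the article): soften the halftone threshold with a sigmoid so the forward model is differentiable, then gradient-descend on the greyscale image with a small smoothness prior. The screen, sharpness constant, learning rate and prior weight are all illustrative assumptions:

    import numpy as np
    from scipy.ndimage import laplace

    def soft_halftone(gray, screen, k=25.0):
        # smooth stand-in for "ink wherever the gray value exceeds the local screen threshold"
        return 1.0 / (1.0 + np.exp(-k * (gray - screen)))

    def invert_halftone(target, screen, steps=2000, lr=0.05, k=25.0, lam=0.1):
        gray = np.full_like(target, 0.5)                        # flat initial guess
        for _ in range(steps):
            s = soft_halftone(gray, screen, k)
            data_grad = 2.0 * (s - target) * k * s * (1.0 - s)  # d/dg of ||s - target||^2
            prior_grad = -2.0 * lam * laplace(gray)             # d/dg of lam * ||grad g||^2
            gray = np.clip(gray - lr * (data_grad + prior_grad), 0.0, 1.0)
        return gray

    # `target` would be the scanned 0/1 dot image and `screen` an estimate of the
    # original threshold pattern (the angle and pitch of the halftone grid).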


If you just search for "inverse halftoning" on Google Scholar there are a ton of techniques that are clearly better than this, e.g. MAP estimation.


I use Fiji every day, and I love the first impressions from someone who does not use it regularly (all correct too).

FFTs are amazing. In X-ray crystallography, you can use them to recapture the original image of a crystallized protein from the scatter of dots left by the X-rays passing through it, essentially playing the role of the lens. They never cease to amaze me with their usefulness!


>Imagine being able to edit the nutrition label and see the food transform in front of you.

This is a surprisingly good description of an FFT.


The Fast Food Transform?


That tool is broken, it only transforms belly size :(


In a way that works like my time machine, only goes one direction.


If you liked this, you might enjoy this little trick you can do with moiré patterns -- near the end of the post, I talk a little bit about the math behind the FFT connection. https://hardmath123.github.io/moire.html


Or for those who studied electrical engineering (or anyone who took a Signals and Systems course):

https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampli...

https://matthews.sites.wfu.edu/misc/DigPhotog/alias/


> All the demoireing guides tell you to apply a sharpening filter after [blurring] to compensate, but that’s like reheating cold pizza.

What a fantastic analogy. I wanted to stand up and applaud the author.


Good writing, yes. But there are ways to properly re-heat old pizza and it's almost as good as fresh. One way (the best way) is to put it in a skillet with the pizza slice on one side and a little bit of water on the other side (not touching!), cover it, and put it over medium heat for 2-4 minutes. Another is to use an air fryer (if you have one). You may need to test temperatures and durations but once you find the proper settings it's perfect.


The same technique works well for reheating burritos, or upleveling microwave burritos. A nice sear on each side gives a firmer structure and I think it tastes better too :)


Cool! I wrote a similar script to automate removing rasters some ten years ago using G'MIC[1]. It was open sourced[2] and made available as a plugin of some sort. (Maybe through GIMP? Can't really remember.) The use case was enhancing page-size scans of Babar children's books, which it worked great for. YMMV though, it's a bit rough around the edges.

[1] https://github.com/dtschump/gmic [2] https://github.com/dtschump/gmic-community/blob/master/inclu...


That solution is one of those things that seems obvious as soon as it is mentioned. At least if you are familiar with Fourier transforms. You want to remove a periodic pattern from an image, and of course that will be easier to do in frequency space than in spatial space.

Interestingly, discarding weak high-frequency components, in that case from a block-wise DCT (a close relative of the Fourier transform), is essentially how JPEG lossy compression works, so heavily compressing the image as a JPEG would likely have a somewhat similar result.
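
For comparison, a crude sketch of that JPEG-flavoured step on a single 8x8 block using SciPy's DCT; real JPEG quantises against a perceptual table rather than hard-zeroing, so treat this only as an illustration:

    import numpy as np
    from scipy.fft import dctn, idctn

    def jpeg_like_block(block, keep=8):
        """DCT an 8x8 block, keep only the coefficients whose row+column index
        is below `keep` (the lowest frequencies), and inverse-DCT."""
        c = dctn(block, norm="ortho")
        mask = np.add.outer(np.arange(8), np.arange(8)) < keep
        return idctn(c * mask, norm="ortho")

    rng = np.random.default_rng(0)
    block = rng.random((8, 8))
    print(np.max(np.abs(block - jpeg_like_block(block, keep=16))))  # keep everything: lossless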


> a 1960s algorithm

What must Gauss think when he hears this?!

The divide-and-conquer Cooley–Tukey FFT "algorithm (and the general idea of an FFT) was popularized by a publication of Cooley and Tukey in 1965, but it was later discovered that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms)."

https://en.wikipedia.org/wiki/Fast_Fourier_transform#Cooley%...


To be fair, the author does acknowledge this, though he doesn't cite Gauss.

> FFT – Fast Fourier Transform – was invented in the 1805 and then, once again, in 1965.


Lovely stuff, but halftone prints are just one type of print. Before digital printing was a thing, photos were created using enlargers throwing light through a negative onto photosensitive paper. Since the negative was also created using photochemical effects, the "frequency" effect was much reduced. This can still be done by ordering prints on "photographic paper" by Fuji or Kodak.

If the image was created using a digital sensor, then it won't work as well, of course because the sensor itself is subject to a grid. However, the kind of Bayer Filter used in the sensor can help tackle the effect at source. This is what Fuji's X-Trans sensor[1] claims to do. (I am a Fuji user but I have no data point to offer in either direction)

[1] https://en.m.wikipedia.org/wiki/Fujifilm_X-Trans_sensor


This article opened for me a Pandora's box of curiosity.

For anyone interested, this is an excellent introduction to some of the concepts: https://web.archive.org/web/20060210112754/http://cns-alumni...

Fourier analysis is currently used in "Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, strip artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera."

Also, "JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image."

Source for both quotes: https://en.wikipedia.org/wiki/Fourier_analysis


I don't think what the author did here (descreening) has anything to do with moiré per se. Sure, due to the nature of halftone, it often introduces moiré, as the author said. But if you scan your document/photo at a high enough resolution (sampling theorem, anyone?) it isn't an issue in the descreening process later (which is the main topic of this article).

Also, using the inverse Fourier transform to descreen is already the basis of lots of popular commercial denoise plugins (for Photoshop etc.). Most of them will automatically measure the angle and resolution of the halftone matrix too.
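
Measuring the screen automatically mostly comes down to peak-finding on the log spectrum. A rough sketch (my own, with arbitrary thresholds) could look like this; its output could feed a notch filter like the one sketched earlier in the thread, and the screen angle and pitch fall out of the peak coordinates relative to the centre:

    import numpy as np
    from scipy import ndimage

    def find_halftone_peaks(image, exclude_dc=8, keep=8):
        """Return the `keep` brightest off-centre local maxima of the log
        spectrum, i.e. the halftone screen frequency and its harmonics."""
        spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
        cy, cx = np.array(spec.shape) // 2
        spec[cy - exclude_dc:cy + exclude_dc, cx - exclude_dc:cx + exclude_dc] = 0.0  # ignore DC
        local_max = spec == ndimage.maximum_filter(spec, size=2 * exclude_dc + 1)
        ys, xs = np.nonzero(local_max)
        order = np.argsort(spec[ys, xs])[::-1][:keep]
        return list(zip(ys[order].tolist(), xs[order].tolist()))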


I assumed that the issue was less the scanning (which can be extremely high precision and which, in any case, the author had to do to get the halftone image he was working with) than the printing. Once the image is resized and printed in the book, the offset between the dpi of the printer and the halftone dots was going to introduce new artifacts.


Ah, that's a very good point.


The author probably missed that the first F in FFT refers to a very specific (but efficient) algorithm, and that the effects he achieved are due to the properties of the (2D) Fourier transform, which can be computed using other algorithms as well.


Completely correct. And yet, FFT is much less ambiguous than FT ("Foot"? "Financial Times"? "Face time"?), so imo what is lost in preciseness is gained in clarity.


Discrete Fourier Transform is probably the most accurate way to describe the mathematical transformation being applied. It is very popular to say FFT when what is meant is the Fourier transform or the DFT, but it is a bit like talking about sorting and calling it Quicksort, despite that being a specific algorithm.


That's amazing.

Reading this I actually got excited about something I already knew and had used now and then, when I never felt about it that way before.


Same. I think it's something to do with the writing style, the familiarity of the problem, and the sheer flexibility of FFT.


This reminded me of when film photography processors went from printing using a photographic process to digital printing. The digital process introduced a regular grid to the print which reduced the quality of the print. When prints were made using an enlarger onto photographic paper, the grain was random and the resulting images had a much nicer character.


Another great use of an FFT and filtering. https://www.edn.com/buenos-notches-the-filter-wizard-vs-the-... Filtering out vuvuzela noise. Same sort of idea, targeting the frequency peak of an obnoxious interference.


When I saw the link named "explaining [FFT's] ingenuity in more detail" I was nearly certain it would be a 3B1B video, and it was:

https://www.youtube.com/watch?v=spUNpyF58BY


Been using this technique for a long time, great to see an in-depth look at it.

This is great for other types of regular patterns, not just moire. For instance, if you scan old 60s/70s photos on textured/fiber paper, you can use this to smooth out the texture if desired.


Textured paper isn't going to have a regular pattern, so this won't work.


It does for certain types that had a regular frequency to them. I think it was a kind of manufactured texture, not the natural fibers of the material. I don't know what it's called, but I think it was popular around the mid 60s.


Well written and really interesting. It's really cool to see an FFT usage example; afaik it's also used by Shazam for matching sound samples. I have fond memories of FFT, from the time I attempted to use it for generating stereo pairs for obstacle detection using a two-camera system. In my case specifically I was using it to find a particular pixel in the second image of a pair and calculate the shift. In the end, at the time, using FFT for something like that was much slower than using a correlation-window-based algorithm. It was very interesting though, and very different from the latter.


Fun fact: large correlations (hundreds or thousands of samples in 1D, unsure in 2D) can be efficiently computed using FFTs (if the needle is smaller than the haystack, you can use chunked algorithms like overlap-add/save).


FFTs are amazing! Also throwing ML at any problem is the root of all evil.


What I don’t understand is how painting so much of the frequency space image black wouldn’t more adversely affect the original image? That was quite a lot removed…


Not an identical match but perhaps as an intuition pump you can think about fourier transform of audio that is "clipping".


Fourier transforms are amazing. Once I "got" how they worked it really made a lot of things like this make a lot more sense.

Blotting out "bright points" in a 2D transform is basically creating a band-stop filter to get rid of those frequencies. That pictures have frequencies is cool in itself, looking at each vector through the image as a series of samples.


What a great read; interesting, informative and funny. Great examples of the concepts he's explaining - that make sense even for those of us with very little knowledge of these topics in advance.

> with a user interface only a parent could love

> “You don’t need ML,” Bryan said. “What you need is inverse FFT.”


This article is very well written! I remember having a similar experience creating a bandpass filter by computing FFT, multiplying by a Gaussian, and inverting the result. I swear Gaussian functions are just one of those things that you can insert anywhere and it'll just work out.


Hey! Lisp Machine Space Cadet Keyboard (in the sample photo) for the win!

Photo:

https://s3.amazonaws.com/revue/items/images/010/788/006/orig...

Space Cadet Keyboard:

https://en.wikipedia.org/wiki/Space-cadet_keyboard

Oh, he's writing a book about keyboards. That makes total sense, then.


It has “like” and “dislike” keys! Way ahead of its time.


Well written, good flow, interesting read and reminded me of the lessons we had at the university some decades ago (should have paid more attention). Why isn’t this offered as an option in photoshop or other commercial image editing packages?


Since the halftone dots are in a matrix that is at an angle, can't you just write a small program that measures the size of each halftone dot and stores that measurement? Then map the measured size to a grey value (supersmall = near white, big = black), and you have recovered all the information in that image.

Then just counter the rotation of the matrix to get the best possible right-side-up scan.
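
A rough sketch of that dot-measuring idea (my own construction; it assumes the dots are already binarised and never touch, and the flat square it paints per cell is a crude placeholder for proper resampling):

    import numpy as np
    from scipy import ndimage

    def dots_to_gray(binary_dots, cell_area):
        """Label each ink dot, take its area as the ink coverage of one halftone
        cell, and paint that coverage back as a flat grey square at the dot centre."""
        labels, n = ndimage.label(binary_dots)
        index = np.arange(1, n + 1)
        areas = ndimage.sum(binary_dots, labels, index=index)
        centers = ndimage.center_of_mass(binary_dots, labels, index)
        coverage = np.clip(areas / cell_area, 0.0, 1.0)       # 0 = white, 1 = solid ink
        gray = np.ones_like(binary_dots, dtype=float)         # start fully white
        half = max(int(np.sqrt(cell_area) // 2), 1)
        for (cy, cx), c in zip(centers, coverage):
            y, x = int(cy), int(cx)
            gray[max(y - half, 0):y + half, max(x - half, 0):x + half] = 1.0 - c
        return gray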


Or better yet: figure out the average dot, replicate it all over the image, and run its Fourier transform. I think that subtracting it from the original image Fourier transform and then inverse transforming the result should also work.


Subtracting the Fourier transform of a dot pattern from the Fourier transform of a photo is equivalent to (but slower than) subtracting a dot pattern from the photo. Subtracting an average-sized dot from a random dot would leave behind a ring, and subtracting a cone from a random dot would instead leave behind odd sloped shapes. It might be a good first step though.


Also, halftone dots aren’t necessarily simple shapes. They are often designed to change shape as they go from 0 to 100% in order to have a shape that holds ink better (for some reason I don’t understand, a circle isn’t necessarily the best shape for holding its edge when made out of wet ink), and also for style (think large halftones where the dots are visibly long streaks all oriented in a particular direction which isn’t necessarily the same angle as the halftone grid). In fact, postscript allows you to prepend code that defines your own custom halftone function f(u,v) -> 0/1 (where u,v is the coordinate within the halftone cell) before printing your job, kinda like a weird little pixel shader from the 90’s.


Right right... linearity


For those who wish to experiment, this method has existed for Photoshop for a long time as a plugin: http://www.descreen.net/eng/soft/descreen/descreen.htm


reviewed how fft works because of this, and made this demo for image fft (picture cropped from original passage): https://gist.github.com/mikelty/4553938523f0d1b0f4ee575a6241...

turns out skimage can do this, not just the 'special piece of software' mentioned in the article.


If you liked this, you might like this as well: https://blog.mapbox.com/debanding-the-world-94439af16e7f


Honestly, I like anything about Fourier transforms!


Awesome read! Is this in use by any software already? I can imagine someone at Adobe already implementing this in Photoshop. Can we apply this quickly enough to let everyone wear whatever stripes they want on television?


With GIMP:

https://www.youtube.com/watch?v=30XaCfM2QGg

https://www.youtube.com/watch?v=se9I3uGITR0

With Photoshop:

https://www.youtube.com/watch?v=vb4XFNC4MvU

I am sure there are more tools for it. Good search terms seem to be "FFT Moire".

Fourier transforms can be easily computed in realtime at full HD resolution these days since GPUs are very fast at that kind of math.


This is not a general-moire-removal technique. It's basically a band-stop filter that removes those frequencies from an image that correspond to the size of halftone dots both horizontally and vertically. Nifty use case for FFT though, would probably make a good homework assignment on an undergrad DSP course.


> removes those frequencies from an image that correspond to the size of halftone dots both horizontally and vertically

And all the harmonics that make up the circle shape.

I'm pretty sure that making the circle repeat horizontally and vertically is some kind of convolution product (one dot convolved with a grid of impulses), and that has some nifty properties under the Fourier transform.
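
That intuition can be checked numerically: a periodic dot pattern is one dot convolved with an impulse lattice, so by the convolution theorem its spectrum is the single dot's spectrum sampled on the reciprocal lattice. A small numpy sketch with made-up sizes:

    import numpy as np

    N, pitch, radius = 256, 16, 3
    y, x = np.mgrid[0:N, 0:N]

    lattice = np.zeros((N, N))
    lattice[::pitch, ::pitch] = 1.0                       # an impulse every 16 px

    # one small "halftone dot" centred at the origin (wrapped, so the circular
    # convolution below is clean)
    dot = (np.minimum(x, N - x) ** 2 + np.minimum(y, N - y) ** 2 <= radius ** 2).astype(float)

    # periodic dot pattern = circular convolution of dot and lattice, via the FFT
    pattern = np.real(np.fft.ifft2(np.fft.fft2(dot) * np.fft.fft2(lattice)))

    # its spectrum is non-zero only on the reciprocal lattice (every N/pitch-th bin),
    # weighted by the single dot's own spectrum -- those are the harmonics above
    spec = np.abs(np.fft.fft2(pattern))
    ys, xs = np.nonzero(spec > 1e-6)
    print(set((ys % (N // pitch)).tolist()), set((xs % (N // pitch)).tolist()))  # both {0}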


Someone has made this http://turbomaze.github.io/JS-Fourier-Image-Analysis/. I am sure a web version where you can manipulate the FFT and see the image reconstruction live should already exist somewhere.

EDIT: This fork allows drawing over FFT https://geeklogan.github.io/JS-Fourier-Image-Analysis/


If you mean DFT, then yes. It's the basis for image/video compression, for example. ;)


That would be DCT - Discrete Cosine Transform.


Things you can do to a single image that look good do not directly scale to the same results for video, especially something in motion.

Also, this would be a good example of just because you can doesn't mean you should, especially with stripes.


It's implemented in Affinity Photo. You get realtime feedback - it's pretty amazing.


The 2nd method also has a lot of noise in it which helps to "emulate" detail.

I think you could get the blur method quite a bit further for a fair comparison, e.g. by adding noise.


I wonder, is it better to black out the relevant parts of the FFT image, or is it better to adjust them to be the same as the "background" shade surrounding them?


An incredibly well explained piece on the magic that is the frequency domain, visualized as a power spectrum.

HN, please make an "FFT of the Day" tag so we don't miss these based on the title alone.


Fun read. Apparently the author is going to be coming out with a book on the history of keyboards and the people involved in them.

Sounds promising.


The spectral-domain representation of the halftone image is cool. Seeing as it's periodic itself, it would be fun to play with a 2D cepstral representation and then lifter out the low-frequency peaks that seem to be generated by the halftoning.


Does any photo editing tool have a filter like this? It looks like a fixed algorithm.


Best thing i've ever seen on HN


Jeez, that's a long ass article to just end up at the tl;dr of: FFT - fast fourier transform. I've used such tools with mixed results on older scanned photographs. You can pull it off in GIMP and one or two extensions, I believe. https://www.gimp-forum.net/Thread-Fourier-Transform-tutorial...


Fantastic read, thanks!



