Multiplying Integers Using Fourier Transforms [pdf] (rug.nl)
84 points by nceqs3 on May 15, 2021 | 21 comments



Neat. I wrote a tool that shows interactive examples of how you can actually do the calculations for multiplication with Fourier transforms:

https://blog.robertelder.org/fast-multiplication-using-fouri...

The trick is really just realizing that you can rewrite a number like 1234 as a polynomial evaluated at 10: 1 * 10^3 + 2 * 10^2 + 3 * 10^1 + 4 * 10^0. Multiplying two numbers then becomes polynomial multiplication, i.e. a convolution of their digit sequences, which the FFT computes in O(n log n), plus a carry pass at the end.
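A minimal sketch of that pipeline (not the linked tool's code, just an illustration with numpy): treat the digits as polynomial coefficients, convolve via FFT, then propagate carries:

    import numpy as np

    def fft_multiply(a: int, b: int) -> int:
        # Digits of each number, least significant first, as polynomial coefficients.
        da = [int(d) for d in str(a)][::-1]
        db = [int(d) for d in str(b)][::-1]
        n = len(da) + len(db)  # enough room so the convolution doesn't wrap around
        # Pointwise product of the transforms = transform of the convolution.
        coeffs = np.fft.ifft(np.fft.fft(da, n) * np.fft.fft(db, n)).real
        # Round to integers and propagate carries in base 10.
        result, carry = 0, 0
        for i, c in enumerate(coeffs):
            total = int(round(c)) + carry
            result += (total % 10) * 10**i
            carry = total // 10
        return result + carry * 10**n

    assert fft_multiply(1234, 5678) == 1234 * 5678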


That is really cool and useful. Visualizing the FFT can be hard sometimes. Thanks for making that.


GIMPS (the Great Internet Mersenne Prime Search) has been using the trick for decades, and it works wonders for finding the largest known (i.e. Mersenne) primes.

https://www.mersenne.org/various/math.php
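For context, the heavy step GIMPS runs is the Lucas-Lehmer test: each iteration squares a number with roughly p bits, and that squaring is exactly where FFT-based multiplication pays off. A toy version (Python's built-in big ints stand in for GIMPS's heavily optimized FFT squaring):

    def lucas_lehmer(p: int) -> bool:
        """True if 2**p - 1 is prime (for odd prime p)."""
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m  # the big squaring dominates the cost
        return s == 0

    # Mersenne exponents 3, 5, 7, 13 give primes; 11 does not (2047 = 23 * 89).
    assert [p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)] == [3, 5, 7, 13]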


This is great. One step closer to a computation platform where the data just streams continuously through operative "channels" and the execution is mostly encoded in the layout of the data in memory. There is no "program" to load, just a river of data squeezing through cores which endlessly perform the same operations in parallel (until some other driftwood in the data instructs them to change that, in this case for example to an addition operation).

The complexity of such a program would mostly be handled at compile time, laying out the instruction flow in memory. Data would have physics, in the sense that the parallel nature of this "simulation" pumping data through would add something roughly similar to our physics to the behaviour. Interaction between parallel-processed elements has a scope, like a light cone in our physics. (SumOf(NrOfCores * LargestChunkOfData))

If you want to safely remove data from or place new data into the whole affair, one just leaves large enough "gaps" in the datastream that can be assumed to act as firewalls.

In my undergrad days I built a little comp-science experiment. Nothing fancy, just an idealized, parallel "engine" copying data chunks of varying sizes from buffer A to buffer B.

https://github.com/PicassoCT/CompScienceExperiments/tree/mas...

Even just this has interesting interaction behaviour. Like sand at the beach, the data gets sorted by size after some intervals. One can also insert "dark matter", a.k.a. dead chunks, into the buffers, which has the effect of "accelerating" large chunks of data while the small chunks stay put. All of this is from the point of view of the buffer, of course, in which one unit of time is one complete copy from A to B.

Sorry, I found this fascinating, even though it might be of no value beyond distraction.


There's a great video on this topic: https://www.youtube.com/watch?v=h7apO7q16V0


Fabrice Bellard also used the DFT for multiplication as one of the many advanced techniques in his computation of 2.7T digits of Pi: https://bellard.org/pi/pi2700e9/pipcrecord.pdf


An excellent paper in the field is the one introducing the IBDWT, Irrational Base Discrete Weighted Transform [1]

1. http://www.faginfamily.net/barry/Papers/Discrete%20Weighted%...
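Roughly, the IBDWT (Crandall-Fagin) squares a number mod 2^p - 1 with a plain cyclic FFT convolution by using variable digit sizes and fractional-power-of-two weights, so the mod-2^p-1 reduction comes for free. A toy sketch of my understanding of the idea (toy precision only; production code like GIMPS's manages round-off far more carefully):

    import math
    import numpy as np

    def ibdwt_square(x: int, p: int, n: int) -> int:
        """Return x*x mod 2**p - 1 via an irrational-base discrete weighted transform."""
        m = (1 << p) - 1
        # Variable digit sizes: digit j holds bits [ceil(p*j/n), ceil(p*(j+1)/n)) of x.
        bits = [math.ceil(p * (j + 1) / n) - math.ceil(p * j / n) for j in range(n)]
        digits, t = [], x % m
        for b in bits:
            digits.append(t & ((1 << b) - 1))
            t >>= b
        # Weights 2^(ceil(p*j/n) - p*j/n): fractional powers of two, hence "irrational base".
        w = np.array([2.0 ** (math.ceil(p * j / n) - p * j / n) for j in range(n)])
        v = np.array(digits, dtype=float) * w
        # Cyclic convolution in the weighted basis squares x mod 2^p - 1.
        sq = np.fft.ifft(np.fft.fft(v) ** 2).real / w
        coeffs = [int(round(c)) for c in sq]
        # Carry propagation in the variable base; the top carry wraps around mod 2^p - 1.
        result, carry, shift = 0, 0, 0
        for j in range(n):
            total = coeffs[j] + carry
            result += (total & ((1 << bits[j]) - 1)) << shift
            carry = total >> bits[j]
            shift += bits[j]
        return (result + carry) % m  # sum(bits) == p, so the carry wraps with weight 1

    assert ibdwt_square(1000, 13, 4) == (1000 * 1000) % 8191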


I only skimmed the paper but is this essentially the same thing as Karatsuba multiplication (which I believe also has O(n log n) performance)?


No, Karatsuba multiplication is not linearithmic. Karatsuba takes time about n^1.58 (as opposed to n^2 for naive multiplication).
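The exponent comes from replacing four half-size multiplies with three; a minimal sketch:

    def karatsuba(x: int, y: int) -> int:
        # Three recursive half-size multiplies instead of four:
        # T(n) = 3 T(n/2) + O(n), i.e. O(n^log2(3)) ~ O(n^1.58).
        if x < 10 or y < 10:
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2
        xh, xl = x >> m, x & ((1 << m) - 1)
        yh, yl = y >> m, y & ((1 << m) - 1)
        a = karatsuba(xh, yh)                    # high parts
        c = karatsuba(xl, yl)                    # low parts
        b = karatsuba(xh + xl, yh + yl) - a - c  # both cross terms from one multiply
        return (a << (2 * m)) + (b << m) + c

    assert karatsuba(1234, 5678) == 1234 * 5678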


Karatsuba is generally used for numbers that are too big for ordinary (convolutional) multiplication but too small for Fourier transform multiplication to be worthwhile.


Ah, my mistake.


How does this beat Schönhage-Strassen (which is the standard way to do IDFT(DFT(a) * DFT(b))) by a factor of log log(n)? IIRC n log(n) was only achieved in 2019 (Harvey and van der Hoeven).


The analysis assumes constant-time arithmetic for the Fourier transforms. This limits the size of the multiplicands, since the error terms eventually grow large enough to affect the computed product.

From the paper: "Note that this error is due to the limited precision in which we can represent numbers on computers, it does not occur in exact arithmetic."

But of course, exact reals (twiddle factors aren't rational) are rife with undecidability if not uncomputability.
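To make that limit concrete, here's a quick probe (illustrative, not from the paper) of how far the inverse-FFT coefficients drift from the nearest integer; the product is only recoverable while that error stays below 0.5:

    import numpy as np

    def worst_rounding_error(digits: int, base: int = 10, trials: int = 20) -> float:
        # Convolve random digit vectors via FFT and measure how far the
        # (mathematically integer) coefficients drift from integers.
        rng = np.random.default_rng(0)
        worst = 0.0
        for _ in range(trials):
            a = rng.integers(0, base, digits)
            b = rng.integers(0, base, digits)
            n = 2 * digits
            raw = np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real
            worst = max(worst, float(np.abs(raw - np.round(raw)).max()))
        return worst  # must stay below 0.5 for correct rounding

    for d in (10, 1000, 100000):
        print(d, worst_rounding_error(d))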


The paper seems to lack a comparison between regular multiplication of large integers and FFT multiplication in base 2.


FFT multiplication has a well-understood complexity of n log n w.r.t. the number of digits.

"Regular" multiplication could mean n^2 for standard pen-and-paper arithmetic, n^1.6 for Karatsuba, and a few other n^y algorithms with y > 1.

However, as usual, this analysis drops constant factors, and the constant factors for FFT are high, so you have to be multiplying numbers with at least hundreds of digits, if not thousands, before it wins.

Note that the base of the digits isn't relevant for asymptotic performance, as it's only a constant-factor change.
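If you want to see that crossover on your own machine, a rough probe (the exact break-even point depends heavily on implementation) is to time an O(n^2) schoolbook convolution of digit vectors against the FFT convolution as n grows:

    import timeit
    import numpy as np

    def schoolbook_conv(a, b):
        out = [0] * (len(a) + len(b) - 1)
        for i, x in enumerate(a):      # O(n^2) digit-by-digit products
            for j, y in enumerate(b):
                out[i + j] += x * y
        return out

    def fft_conv(a, b):
        n = len(a) + len(b) - 1
        return np.round(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real)

    for n in (64, 256, 1024, 2048):
        a = list(np.random.randint(0, 10, n))
        b = list(np.random.randint(0, 10, n))
        t_school = timeit.timeit(lambda: schoolbook_conv(a, b), number=3)
        t_fft = timeit.timeit(lambda: fft_conv(a, b), number=3)
        print(f"n={n:5d}  schoolbook={t_school:.4f}s  fft={t_fft:.4f}s")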


What's amusing is that multiplying integers with periodic digits is much faster using this method.


Jeesh, that's clever.

I'd probably just do a giant shift and add loop.


Are you talking about base-2 multiplication for fixed word-length integers? The paper seems to be talking about BigInt multiplication.


I was being facetious, as there are usually faster ways, but you can do unlimited-length integer multiplication with an unlimited-length mask/shift/add. Perhaps that is what you mean.
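For illustration, the mask/shift/add loop in question, which works on Python's arbitrary-size ints (O(n^2) bit operations overall, for nonnegative b):

    def shift_add_multiply(a: int, b: int) -> int:
        result = 0
        while b:
            if b & 1:        # low bit of b set: add the current shifted copy of a
                result += a
            a <<= 1          # shift a left (times 2)
            b >>= 1          # consume one bit of b
        return result

    assert shift_add_multiply(1234, 5678) == 1234 * 5678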


Faster than Shakuntala Devi?


Apparently she was an Indian lady famous as a "mental calculator".



