Faster Fourier transform among world’s most important emerging technologies (web.mit.edu)
121 points by __Rahul on May 7, 2012 | 47 comments



Paper at: http://arxiv.org/abs/1201.2501v1

O(k log n log(n/k)) complexity for the general case.


...where k is the number of FFT coefficients computed, and n is the time series length.
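
For a rough sense of scale, here's a back-of-the-envelope comparison of the two operation counts (pure asymptotic bookkeeping; it ignores the constant factors that real FFT libraries have spent decades optimizing, so the numbers are illustrative only):

  import math

  n = 2**20                                          # time series length
  for k in (7, 64, 1024):
      sparse = k * math.log2(n) * math.log2(n / k)   # ~ k log n log(n/k)
      dense = n * math.log2(n)                       # ~ n log n
      print(f"k={k}: sparse ~{sparse:,.0f} ops vs dense FFT ~{dense:,.0f} ops")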

Has anyone else been turned off by the sensationalism of MIT TR articles?


With all due respect, this is anything but sensationalism. It might help to understand that the number of FFT coefficients computed is not necessarily the number of data points processed. Theoretically, if we have a perfect cosine, for example, we'd get TWO non-zero coefficients (k=2) even if n were a huge number. Compare O(k log n log(n/k)) to O(n log n) in those cases!

Granted, this doesn't apply to every single situation. However, as the article notes for video signals, you have on average 7 "non-negligible" coefficients (k≈7). So there are certainly applications that will benefit dramatically from this.
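
A quick numpy sketch of the perfect-cosine case above (illustrative only; this just runs an ordinary FFT to count non-zero bins, it is not the paper's algorithm):

  import numpy as np

  n = 1024
  t = np.arange(n)
  x = np.cos(2 * np.pi * 5 * t / n)    # pure cosine, exactly 5 cycles in the window
  X = np.fft.fft(x)
  print(np.sum(np.abs(X) > 1e-6))      # prints 2: only bins +5 and -5 survive, i.e. k = 2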


We disagree.

The big-O results of the paper are interesting and with more work could prove a starting point for improvements in very long FFTs. Why long? Because, for big-O asymptotics to kick in, you'll need large n.

But, the domain of applicability seems to be a niche. The "7 non-negligible coefficients" result you refer to is for 8x8 DCTs (as in JPEG). That's n = 8. There are optimized implementations for the small-n cases that will utterly dominate a big-O optimal algorithm like this. Compare Strassen's algorithm for matrix multiplication.

Have you looked at the code complexity of the algorithm in the paper? There are several interlocking components (hash functions, rebinning) that have their own big-O optimality claims and, in some cases, randomization. Getting it all to work together efficiently, given cache issues, even for large n (say, 10^6) would be a serious engineering challenge. People have been working on optimizing data flow for FFT for decades now, and the state of the art is very advanced.

I agree that the result is interesting in principle. Runtime is actually sub-linear in n, the sequence length.


Thanks :) I left that important bit out. This is as opposed to O(n log n) for the algorithms that have been around.


But it's MIT so anything and everything they do has to be awesome.


I thought normally k = n, and the ordinary complexity is O(n log n). So this sounds worse for the normal case.

I guess if you use a smaller k it is useful.


Most signals you get by sampling real-world objects will be fairly sparse. You can expect anywhere between 50% and 95% of the coefficients in a wavelet or Fourier domain to be near zero, depending on the class of signal.

However, be wary of algorithms that expect k as input, as they are asking you to classify your signal before analysis.
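
A toy illustration of that kind of sparsity, using a synthetic smooth signal as a stand-in for a sampled real-world one (the 1% threshold is an arbitrary choice):

  import numpy as np

  n = 4096
  t = np.linspace(0, 1, n, endpoint=False)
  # a few tones plus a slow drift, standing in for a "real world" recording
  x = np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*120*t) + 0.2*t**2
  mags = np.abs(np.fft.rfft(x))
  frac = np.mean(mags < 0.01 * mags.max())
  print(f"{100*frac:.1f}% of Fourier coefficients are below 1% of the peak magnitude")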


Discussion from the initial announcement in January:

http://news.ycombinator.com/item?id=3480016



Does this mean real-time Dirac video encoding might be possible?


No, Dirac is based on wavelet transforms, which have different math behind them and don't use Fourier transforms. This Sparse Fourier Transform (which the authors are now calling the sFFT) would be more useful for traditional FFT/DCT-based compression algorithms. Though for the most part, the hardware for even very complicated real-time Fourier-based compression (H.264, HD) already exists.


The sparse Fourier transform is also probably not useful for any of that stuff, because accurate results are needed (not just getting one or two of the main tones). Transforms are not the bottleneck at all in video encoding, and even in audio encoding, split-radix FFTs and the like are very fast.

High-res Dirac video encoding is definitely possible in real-time, you just need a very optimized encoder, which doesn't really exist. Dirac's main speed cost comes from the overlapped-block motion compensation, not the transform.


The other observation I'd make is that, apart from their computational complexity, FFTs work well in hardware because they admit reasonably efficient implementations.

Implementation of an FFT on a chip has two components: the logic/computing elements (governed by O(n log n)) and the routing of signals between those elements. It turns out the size and speed of the FFT is mainly determined by the routing, not by the logic, and there is a tractable routing solution for a reasonable number of points [1]. The computational complexity becomes secondary if the complexity of the implementation is determined by the non-computational aspects.

[1] Based on experience in 1995.


Oh, for some reason I thought Dirac used DCT somehow. Here's a good overview of how it actually works for compression: http://x264dev.multimedia.cx/archives/317


Assuming it is suitable for H.264, developing hardware decoders that use the sFFT could reduce power consumption. Even though the problem of hardware FFT is "solved", there is still room for improvement.


I just had a somewhat closer look at the MIT Technology review list of emerging technologies, and in particular the people behind them. I made a rather sad discovery. Please take a look at the following list:

- Jonathan Tilly (stem cell research)

- John A. Rogers, Ralph Nuzzo, George M. Whitesides, Etienne Menard (Semprius founders)

- Ren Ng (light field photography, Lytro founder)

- Nikhil Jaisinghani, Brian Shaad (solar-powered microgrids, Mera Gao Power founders)

- Mark Bohr (3D transistors, head of Intel's process technology)

- Piotr Indyk, Dina Katabi, Eric Price, Haitham Hassanieh (Sparse Fourier transform)

- Gordon Sanghera, Spike Willcocks, Hagan Bayley (DNA sequencing, Oxford Nanopore founders)

- Perry Chen, Yancey Strickler, Charles Adler (Kickstarter founders)

- Peter Schultz, Robert Downs, Donald Murphy (Wildcat Discovery Technologies founders)

- Mark Zuckerberg (Facebook founder)

This is the list of the people behind all of the ten emerging technologies, as listed on http://www.technologyreview.com/tr10/

  Number of people on the list: 23
  Number of women on the list:  1
At least for Intel's 3D transistor team and Facebook's Timeline, I was not able to dig up the list of people who developed these technologies, so there is still some hope that there are a few more women on those teams at least. The same goes for the egg stem cell research.


Are you disappointed because women are not contributing to "breakthrough technologies" or do you think there were worthwhile women that were overlooked?


Highly relevant question. As someone with a signal processing background, I'd say the sFFT is so significant that it should without doubt be on this list. Knowing MIT's meritocracy, they would have given the award to more women if it were appropriate.


I'm basically just stating the fact, leaving an interpretation open for everyone who's interested.

A while ago, there were a few front page posts about the role of women in IT, and about how they find themselves in their work environments. The MIT list is not restricted to computer science, but perhaps it can be extrapolated to the relevant fields here, too?

I think it stands out that women are heavily under-represented on this list, and although explanations for that fact might be manifold, I personally just find it a bit sad to look at that outcome.


Sorry to be that guy - but why the downvote? I mean, I can understand if people are not interested in this information, and it is certainly only marginally related to the original article - but why would you downvote my comment instead of, say, just ignoring it?


They make this seem like a huge deal; anyone know why? Is this going to, like, put HD content on an iPhone? I don't expect that the signal processing hardware is the bottleneck here.


> I don't expect that the signal processing hardware is the bottleneck here.

Computational resources are always the bottleneck in bioinformatics, quantum chemistry, or any sort of high-data-volume analysis or simulation, and the FFT is a fundamental and commonly used transform in all of those fields.

At least for people who still use computers to, well, compute things, a faster FFT is a huge deal.


How many domains still use FFT specifically though as opposed to some other transform? Most of the signal processing papers of the last decade that I've read or read about have used either wavelets, FFT on a small window such that this isn't really applicable, or some arbitrary non-orthogonal basis.


When I was doing EEG signal analysis in grad school, I used the FFT all the time. While there's some cool stuff involving wavelets and using interesting basis sets (matching pursuit looks cool), if you're primarily looking at power and frequency over time, the FFT is sufficient, and usually faster than the other algorithms. (And if you're looking at power/phase, the common Morlet wavelet choice is mathematically equivalent to an FT with a Gaussian taper.)

I'm not sure what you mean by "on a small window such that this isn't really applicable"; can you give an example? As long as you accept the inherent time-frequency resolution trade-offs, there's no obstacle to using FFT on a small window. It's called the short-time Fourier transform (STFT), and it's used everywhere; it's probably used more than analyzing an entire signal, since we frequently want to know how power and phase change over time in a signal, and a full-signal FT can't tell you that.
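
A minimal numpy sketch of the idea, in case it helps (the function and parameter names here are mine, not from any particular library): the STFT is just an ordinary FFT applied to overlapping, windowed frames.

  import numpy as np

  def stft(x, win_len=256, hop=128):
      # Hann-windowed FFTs over sliding frames of the signal
      win = np.hanning(win_len)
      frames = [x[i:i + win_len] * win
                for i in range(0, len(x) - win_len + 1, hop)]
      return np.array([np.fft.rfft(f) for f in frames])  # shape (n_frames, win_len//2 + 1)

  fs = 1000
  t = np.arange(2 * fs) / fs
  x = np.where(t < 1, np.sin(2*np.pi*10*t), np.sin(2*np.pi*40*t))  # 10 Hz, then 40 Hz
  S = stft(x)
  print(S.shape)   # time-frequency grid; power over time is np.abs(S)**2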


>I'm not sure what you mean by "on a small window such that this isn't really applicable"

FFT is very useful on a small window, but algorithms that improve the asymptotic efficiency are unlikely to be useful at that scale. With n = 100 the asymptote doesn't matter. We'll still be using the FFT forever; I'm just skeptical that the frontiers of technology will be advanced by a faster FFT, since it seems like the coolest stuff is happening elsewhere. It's increasingly becoming the quick-and-dirty counterpart to the sophisticated-but-slow methods.


FFTs are used in lots of ways. The R function convolve(), for example, uses FFTs for efficiency.
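
A sketch of the same trick in numpy: circular convolution via the convolution theorem (transform, multiply pointwise, transform back):

  import numpy as np

  def circular_convolve(a, b):
      # convolution theorem: pointwise product in the frequency domain
      return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

  a = np.array([1.0, 2.0, 3.0, 4.0])
  b = np.array([0.5, 0.5, 0.0, 0.0])
  print(circular_convolve(a, b))   # [2.5 1.5 2.5 3.5]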


>At least for people who still use computers to, well, compute things, a faster FFT is a huge deal.

This is a useless post except that I wanted to highlight that wonderful little bon mot. Well done.


If the hardware isn't the bottleneck, you can simply make your compression/encoding algorithms more complex, because 10% saved traffic is probably much more valuable than 10% saved CPU cycles (remember, on phones, traffic means a lot of energy drain for sending/receiving). Besides that, though, the FFT is old, so whatever codec you use to view HD videos on the iPhone probably already uses the FFT or something comparable like wavelets.


Even if it's not the bottleneck, making it faster will have the effect of making it use less power. We all want longer battery life, right?


Maybe they can use this to reduce the latency for Rocksmith?


Awesome.

It'll sure be nifty when 2029 rolls around and the patent expires, so applications can actually use the algorithm. (No, I don't have any information that an sFFT patent has been filed, but this would be standard practice at MIT).

Tornado/fountain codes are a similar case. Pardon my bitterness, but it's an interesting question whether, by funding researchers to invent algorithms of this type and lock them away behind a patent wall for two decades, the USG is advancing the progress of technology or in fact retarding it.


Algorithms and mathematical methods are not supposed to be patentable in the US.


And yet, we've had plenty of software patents which have held up in court and caused significant damage to companies which were found to be infringing, and many more which have backed off due to fear of the same.

Mathematical facts aren't patentable. That much has been established. Perhaps a "pure" algorithm isn't patentable; but all software patents start out with something like "a general purpose computer which...", tying it to hardware and making it patentable. Good luck implementing that algorithm without a general purpose computer.

For instance, the i4i patent is a patent purely on an algorithm, and Microsoft lost that case and had to remove the feature from its software: http://en.swpat.org/wiki/I4i_v._Microsoft . I'm not sure how you can square that with algorithms not being patentable.


No problem, just find a way to implement the algorithm that doesn't infringe any of the at least 121 "fast Fourier transform" patents and you're all set (http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sec...). Just repeat that process for every algorithm you might want to implement and you'll have no problem writing software.


That's why they patent the idea of having a general purpose computer do the math. And no, they don't care if the novel part isn't patentable subject matter and the patentable subject matter has already been invented. That's a pretty well-established dodge, except insofar as the recent Supreme Court decisions impact it.


The "Marching Cubes"[1] algorithm got patented ages ago and the patent was enforced. Some clever people had to come up with a similar solution called "Marching Tetrahedrons" to work around the patent.

[1] http://en.wikipedia.org/wiki/Marching_cubes


The key phrase is "supposed to be"; "are not supposed to be patentable" does not imply "are not patented" in a system where patent examiners usually have very little idea what they're actually awarding patents for and legal fees for challenging invalid patents are prohibitively high.


Where did ya hear that? My favorite algorithm is patented: http://en.wikipedia.org/wiki/Scale-invariant_feature_transfo...


As a famous American statesman once put it: "Justice Marshall has made his decision - now let him enforce it."

It's commonly supposed that the Supreme Court could void these illegal patents - could dissolve this entire illegal industry - with a swish of their shiny polyester robes. Actually they already tried that, in Flook. Jedi mind tricks, which in the end is all the Court really has, worked about as well on Andrew Jackson.

I think this explains the disappointing result of Bilski. If you are supposed to have a magic power which in reality doesn't exist, make every excuse to avoid using it. You may still be suspected of impotence, but at least the suspicion is not confirmed.

Unfortunately there isn't really a solution to the patent problem that's compatible with the rule of law, mostly because the rule of law was so long ago abandoned for the rule of lawyers. Perhaps Andrew Jackson could round them all up and march them to Oklahoma - or at least, Marshall, Texas.


Please choose less offensive examples.

A large fraction of the native population that was rounded up and marched in that incident died. Accurate records were not kept, but according to estimates about a quarter of those evicted died as a direct result. Including some of my relatives.


Almost all my relatives are dead. Including all who remember the Jackson administration.

Some of them died justly, others not. I'm not an Albanian and have no interest in investing emotional energy in multi-generational blood feuds. Eg, some of my relatives were killed by the Germans, who make a really fine motorcycle.

Also, life was pretty cheap before the 20th century. It's easy to appall kids these days with the death rate among chained slaves crossing the Atlantic. In fact the death rate among Europeans voluntarily crossing the Atlantic was quite similar. It also wasn't easy or safe to walk across half of North America, whether voluntarily or not. Basically, to anyone who lived in this era - regardless of race, color or creed - we're all a bunch of whining pussies.

Really the main difference between the expulsion of the Cherokees and my proposed ethnic cleansing of the Palo Alto patent lawyers is that the Cherokees didn't deserve it, whereas the patent lawyers do. Lawless force can work good as well as evil. And when we've already left law behind, what's the alternative?


There is always my crackpot solution to software patents:

http://www.loper-os.org/?p=309

Transparently offload the "patent sin" to a system's users. (With plausible deniability.)


That's only true for the US. In the majority of other countries algorithms aren't patentable.


This is not a "faster" FFT. It's not an FFT at all.

"Because the SFT algorithm isn't intended to work with all possible streams of data, it can take certain shortcuts not otherwise available. In theory, an algorithm that can handle only sparse signals is much more limited than the FFT."

It may well be a very useful algorithm that does FFT-like operations, but the title is marketing hype and it's impossible to judge the true range of applicability of this algorithm from the article alone.


Disagree. I think that the natural scrolling gestures in Mac OS X Lion are more revolutionary and more intuitive to use than this "faster Fourier transform". That technology sounds like it's only used by nerds, and I'm sure it has a TERRIBLE user experience.


I'd recommend giving the article another read.



