If the algorithm's so short, why not post the pseudocode?
Or did they? Here's my impression based on the article and a few minutes of refreshers:
Problem: Given a (weighted?) graph, output its Laplacian matrix, which represents (in the sense of being polynomial-time reducible to?) how much current would flow between any two nodes if a voltage were applied across them.
New algorithm:
1) Find a (minimal?) spanning tree for the graph.
2) Calculate how much current would flow through that tree if it were so weighted and a unit voltage were applied across it, with the edges being resistors (and leaves presumably connected to ground?).
3) Create a loop restoring one of the edges in the original graph.
4) Update the currents you previously calculated by the amount needed to balance the circuit.
5) Repeat from Step 3) until you have the original graph.
6) The currents are your solution?
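A minimal sketch of step 2, assuming a toy tree with made-up node names and injected currents (everything here is illustrative, not the paper's actual procedure): on a tree there is exactly one path between any two nodes, so the current through each edge is forced by Kirchhoff's current law alone — it equals the net current injected into the subtree hanging below that edge.

```python
# Sketch: on a tree, edge currents are determined by the injections alone.
# Tree given as child -> parent; injections b[v] must sum to zero.
parent = {"b": "a", "c": "a", "d": "b"}          # toy tree edges
b = {"a": -3.0, "b": 1.0, "c": 1.0, "d": 1.0}    # current injected at each node

children = {}
for c, p in parent.items():
    children.setdefault(p, []).append(c)

def subtree_sum(v):
    # Net current injected into the subtree rooted at v.
    return b[v] + sum(subtree_sum(c) for c in children.get(v, []))

# Current flowing from each non-root node toward its parent equals the
# net injection in that node's subtree (Kirchhoff's current law).
edge_current = {v: subtree_sum(v) for v in parent}
print(edge_current)  # {'b': 2.0, 'c': 1.0, 'd': 1.0}
```

Given these edge currents, node potentials would follow by accumulating I·R drops along tree paths, which is the "voltage" picture in step 2.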
Okay, so these papers are phrased a little confusingly, but what they're really doing is solving a specific but very useful type of linear equation Ax = b where A is a symmetric diagonally dominant (SDD) matrix: each diagonal entry is at least as large as the sum of the absolute values of the off-diagonal entries in its row. It happens that graphs correspond naturally to such matrices (a graph's Laplacian is SDD), so it's a neat area that straddles applied math and computer science. For a survey paper on recent developments in the field: http://www.cs.cmu.edu/~jkoutis/papers/CACM-KMP.pdf
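For concreteness, here is what that matrix looks like for a small graph (pure Python, toy edge weights): the Laplacian has weighted degrees on the diagonal and negated edge weights off it, so each diagonal entry exactly equals the sum of the absolute off-diagonal entries in its row.

```python
# Build the Laplacian L = D - W of a small weighted graph and check that
# it is (weakly) diagonally dominant: L[i][i] >= sum |L[i][j]| for j != i.
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0)]  # (u, v, weight), toy graph
n = 3
L = [[0.0] * n for _ in range(n)]
for u, v, w in edges:
    L[u][v] -= w          # off-diagonal: minus the edge weight
    L[v][u] -= w
    L[u][u] += w          # diagonal: weighted degree of the node
    L[v][v] += w

for i in range(n):
    off = sum(abs(L[i][j]) for j in range(n) if j != i)
    assert L[i][i] >= off  # diagonal dominance (equality for a Laplacian)

print(L)  # [[5.0, -2.0, -3.0], [-2.0, 3.0, -1.0], [-3.0, -1.0, 4.0]]
```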
That's pretty much what I gleaned too. The paper specifies how an acceptable ("low-stretch") spanning tree is shaped, and bounds [something, error criterion?] as part of its complexity proof.
So I guess the bulk of the brainwork in producing this result was in a) deriving the low-stretch spanning tree definition and algorithm, and b) proving the performance bound? Because, aside from understanding the low-stretch spanning tree calculation, the algorithm is indeed pretty simple.
Actually, I should add: one other tricky part is coming up with (the probability distribution for) the order in which to restore edges.
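A sketch of what such a distribution might look like (the stretch values here are invented for illustration; the actual scheme samples each off-tree edge with probability proportional to its stretch):

```python
import random

# Hypothetical stretch values for three off-tree edges; sample edges
# with probability proportional to stretch, as the sampling scheme requires.
stretch = {"e1": 4.0, "e2": 1.0, "e3": 5.0}
total = sum(stretch.values())
probs = {e: s / total for e, s in stretch.items()}  # 0.4, 0.1, 0.5

random.seed(0)
picked = random.choices(list(stretch), weights=list(stretch.values()), k=1000)
freq = {e: picked.count(e) / 1000 for e in stretch}
print(probs, freq)  # empirical frequencies roughly track probs
```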
This algorithm is useless without quantifying constants or numerically demonstrating that the constants are practical for some test problems. Although they cite CMG in passing, they do not compare performance. They also (negligently) do not cite LAMG. MG algorithms are also highly parallelizable, which is not clearly the case for this algorithm.
To get an idea of how essential it is to quantify constants, recall that in 1964, Fedorenko rigorously proved that the number of iterations of multigrid required to reduce the residual of Poisson on a rectangular grid with n points by a factor of epsilon is O(n * |log epsilon|). He quantified the associated constant as
W(n, 0.01) <= 210000*n + W(10^6, 0.01)
Groundbreaking optimal asymptotics, but utterly useless (the proposed algorithm had the wrong idea about good coarse spaces), and nobody cared until Achi Brandt came along in 1973 and revolutionized implicit solvers by demonstrating that, if you get the details right, the method is extremely efficient in practice.
> an algorithm that solved graph Laplacians in “nearly linear time,” meaning that the algorithm’s running time didn’t increase exponentially with the size of the problem.
The phrase "nearly linear time" comes from the paper.
However, the paper gives a better definition:
When we refer to a nearly linear time SDD system solver we mean an
algorithm that computes such an x in time O(m log^c(n) log ε⁻¹),
where m is the number of nonzero entries in A and c ≥ 0 ∈ ℝ is a
fixed constant.
The term "nearly linear time" is fine. The next part of that sentence is where the problem arises: it suggests that the opposite of "nearly linear", and the status quo, is that running time "increase[s] exponentially". But the existing standard algorithms already scale polynomially, not exponentially.
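To see why the distinction still matters, compare the growth rates directly (the exponent choices here are illustrative: c = 2 for the nearly-linear bound, m ≈ 4n as for a sparse graph, and n³ as a stand-in for dense Gaussian elimination):

```python
import math

# Illustrative cost comparison: "nearly linear" m * log(n)**c versus
# cubic dense elimination. Constants are made up; only growth matters.
def nearly_linear_cost(n, c=2):
    return 4 * n * math.log(n) ** c   # m ~ 4n nonzeros

def dense_cost(n):
    return float(n) ** 3

for n in (10**3, 10**6, 10**9):
    ratio = dense_cost(n) / nearly_linear_cost(n)
    print(f"n={n:.0e}: dense/nearly-linear ~ {ratio:.1e}")
```

The gap widens polynomially with n, which is the whole point: "nearly linear" beats "polynomial", even though "polynomial" was never "exponential".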
In colloquial terminology, x^2 is an exponential. This isn't actually an important part of the article. I don't feel that a paragraph explaining why "Polynomial time" is bad (but not too bad) would improve the article.
Wasn't this algorithm described in a post to HN a few months ago in the context of fraud detection, but subsequently taken down at the request of the poster's employer?
It would be interesting indeed if private industry had already arrived at this and not realised what they had on their hands. My mind keeps screaming this: http://xkcd.com/664/
Wasn't there an article pretty recently about a well-performing algorithm for the traveling salesman problem that sounded pretty much exactly like this algorithm?