Short algorithm, long-range consequences (web.mit.edu)
117 points by interconnector on March 1, 2013 | 24 comments



If the algorithm's so short, why not post the pseudocode?

Or did they? Here's my impression based on the article and a few minutes of refreshers:

Problem: Given a (weighted?) graph, solve the linear system defined by its Laplacian matrix, which represents (in the sense of being polynomial-time reducible to?) how much current would flow between any two nodes if a voltage were applied across it.

New algorithm:

1) Find a (minimal?) spanning tree for the graph.

2) Calculate how much current would flow through that tree if its edges kept their weights and a unit voltage were applied across it, with the edges being resistors (and leaves presumably connected to ground?).

3) Create a loop by restoring one of the edges of the original graph.

4) Update the currents you previously calculated by the amount needed to balance the circuit.

5) Repeat from Step 3) until you have the original graph.

6) The currents are your solution?

(Sorry, lots of edits to correct and clarify.)
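
Here's a rough, runnable sketch in Python of my reading of steps 1-6. The function name and the uniform cycle sampling are my own simplifications, not the paper's SimpleSolver:

    import random
    from collections import defaultdict

    def solve_by_cycle_updates(n, tree_edges, off_edges, b, iters=2000, seed=0):
        # Toy sketch of the "restore an edge, rebalance the cycle" idea.
        # tree_edges: (u, v, resistance) triples forming a spanning tree on n nodes
        # off_edges:  (u, v, resistance) triples for the remaining edges
        # b: net current injected at each node; sum(b) must be 0
        # Returns approximate potentials x with L x ~= b.
        rng = random.Random(seed)

        # Root the spanning tree at node 0; record each node's parent and the
        # resistance of the edge to its parent.
        adj = defaultdict(list)
        for u, v, r in tree_edges:
            adj[u].append((v, r))
            adj[v].append((u, r))
        parent, res, order = {0: None}, {0: 0.0}, [0]
        for node in order:
            for nbr, r in adj[node]:
                if nbr not in parent:
                    parent[nbr], res[nbr] = node, r
                    order.append(nbr)

        # Step 2: route the demands through the tree. f[v] is the current
        # flowing from v toward parent[v]; by conservation it equals the
        # total demand in v's subtree.
        f = [0.0] * n
        acc = list(b)
        for v in reversed(order[1:]):
            f[v] = acc[v]
            acc[parent[v]] += acc[v]

        def path_to_root(v):          # nodes whose parent-edges we traverse
            path = []
            while parent[v] is not None:
                path.append(v)
                v = parent[v]
            return path

        # Steps 3-5: pick an off-tree edge, look at the cycle it closes with
        # the tree, and add the circulation that cancels the net voltage drop
        # around that cycle. (The paper picks cycles from a cleverly chosen
        # distribution; uniform sampling here is my simplification.)
        g = [0.0] * len(off_edges)    # flow on each off-tree edge, u -> v
        for _ in range(iters):
            i = rng.randrange(len(off_edges))
            u, v, r_uv = off_edges[i]
            pu, pv = path_to_root(u), path_to_root(v)
            while pu and pv and pu[-1] == pv[-1]:  # drop shared edges near root
                pu.pop()
                pv.pop()
            # Ohm's law around the cycle: u -> v (off edge), v -> lca -> u (tree).
            drop = g[i] * r_uv
            drop += sum(f[c] * res[c] for c in pv)
            drop -= sum(f[c] * res[c] for c in pu)
            R = r_uv + sum(res[c] for c in pu) + sum(res[c] for c in pv)
            delta = -drop / R
            g[i] += delta
            for c in pv:
                f[c] += delta
            for c in pu:
                f[c] -= delta

        # Step 6: read potentials off the tree flow, taking x[root] = 0.
        x = [0.0] * n
        for v in order[1:]:
            x[v] = x[parent[v]] + f[v] * res[v]
        return x

On a unit-resistance square (tree = the path 0-1-2-3, off-tree edge 3-0), injecting 1 A at node 0 and extracting it at node 3 gives x[0] - x[3] converging to 0.75, the effective resistance you'd get from pencil-and-paper circuit analysis.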


The first link under "Related" on the right is the paper itself. The "SimpleSolver" is at the top of page 8. http://arxiv.org/pdf/1301.6628.pdf


A note -- when linking to arXiv, please link to the abstract, not directly to the PDF.


Why? The pdf is what I'm interested in. I don't particularly care where it's hosted.


There are more formats available, for one thing.



I agree, but gave up trying to convince people. I now use a userscript to rewrite all arxiv pdf links (outside arxiv itself) to point to the corresponding abstract page: http://homepages.inf.ed.ac.uk/imurray2/code/user_scripts/arx...


Okay, so these papers are phrased a little confusingly, but what they're really doing is solving a specific but very useful type of linear equation Ax = b where A is a diagonally dominant matrix (each diagonal entry is at least as large as the sum of the absolute values of the off-diagonal entries in its row). It happens that graphs are equivalent to matrices of this kind (via the Laplacian), so it's a neat area that straddles applied math and computer science. For a survey paper on recent developments in the field: http://www.cs.cmu.edu/~jkoutis/papers/CACM-KMP.pdf
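
To make the graph/matrix correspondence concrete, here's a tiny illustration (made-up 4-node graph; numpy assumed) of how a weighted graph becomes an SDD matrix via its Laplacian:

    import numpy as np

    # Hypothetical weighted 4-node graph; its Laplacian has the weighted
    # degree on the diagonal and -w(i, j) off the diagonal.
    edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.0)]
    n = 4
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w

    # Diagonal dominance, row by row: |L[i,i]| >= sum of |off-diagonal|.
    for i in range(n):
        assert abs(L[i, i]) >= sum(abs(L[i, j]) for j in range(n) if j != i)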


That's pretty much what I gleaned too. The paper specifies how an acceptable ("low-stretch") spanning tree is shaped, and bounds [something, error criterion?] as part of its complexity proof.


So I guess the bulk of the brainwork in producing this result was in a) deriving the low-stretch spanning tree definition and algorithm, and b) proving the performance bound? Because, aside from understanding the low-stretch spanning tree calculation, the algorithm is indeed pretty simple.

Actually, I should add: one other tricky part is coming up with (the probability distribution for) how you choose the order in which to restore edges.
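
For what it's worth, my understanding is that the sampling weight of an off-tree edge is its "stretch": roughly, the resistance of the tree path between its endpoints, divided by its own resistance. A hedged one-liner, not the paper's exact sampling distribution:

    # Hedged sketch: high-stretch edges close "expensive" cycles, and (as I
    # read it) those cycles get picked for rebalancing more often.
    def stretch(tree_path_resistances, r_uv):
        return sum(tree_path_resistances) / r_uv

    # e.g. an off-tree unit resistor whose endpoints are joined by a
    # three-edge tree path of unit resistors has stretch 3.0:
    print(stretch([1.0, 1.0, 1.0], 1.0))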


AAaa! the ant solution!


This algorithm is useless without quantifying constants or numerically demonstrating that the constants are practical for some test problems. Although they cite CMG in passing, they do not compare performance. They also (negligently) do not cite LAMG. MG algorithms are also highly parallelizable, which is not clearly the case for this algorithm.

To get an idea of how essential it is to quantify constants, recall that in 1964, Fedorenko rigorously proved that the amount of work required for multigrid to reduce the residual of Poisson on a rectangular grid with n points by a factor of epsilon is O(n * |log epsilon|). He quantified the associated constant as

    W(n, 0.01) <= 210000*n + W(10^6, 0.01)
Groundbreaking optimal asymptotics, but utterly useless (the proposed algorithm had the wrong idea about good coarse spaces), and nobody cared until Achi Brandt came along in 1973 and revolutionized implicit solvers by demonstrating that, if you get the details right,

    W(n, discretization error) <= 40 * n
(using only addition and shift operations).


> an algorithm that solved graph Laplacians in “nearly linear time,” meaning that the algorithm’s running time didn’t increase exponentially with the size of the problem.

MIT, I expected more from you.


The phrase "nearly linear time" comes from the paper.

However, the paper gives a better definition:

When we refer to a nearly linear time SDD system solver, we mean an algorithm that computes such an x in time O(m log^c(n) log ε⁻¹), where m is the number of nonzero entries in A and c ≥ 0 is a fixed real constant.


The term "nearly linear time" is fine. The next part of that sentence is where the problem arises: it suggests that the opposite of "nearly linear", and the status quo, is that running time "increase[s] exponentially". But the existing standard algorithms already scale polynomially, not exponentially.


This article was probably not written by the CS department!


Does everything have to turn into a dick-waving contest around here? Just send a correction if you care.


Looks like somebody else doesn't know what near-linear means.


In colloquial terminology, x^2 is an exponential. This isn't actually an important part of the article. I don't feel that a paragraph explaining why "Polynomial time" is bad (but not too bad) would improve the article.


Wasn't this algorithm described in a post to HN a few months ago in the context of fraud detection, but subsequently taken down at the request of the poster's employer?


It would be interesting indeed if private industry had already arrived at this and not realised what they had on their hands. My mind is screaming this again: http://xkcd.com/664/


Wasn't there an article pretty recently about a well-performing algorithm for the traveling salesman problem that sounded pretty much exactly like this algorithm?


I would love to see a visualization of their algorithm iterating its loop rebalancing thing on a simple 2D graph. Maybe a weekend project...

[removed: I was curious about the SDD requirement, but that's always the case for a graph Laplacian]


I wish I could review an actual working reference implementation of this algorithm.



