Your Erdős number is how many co-authorship links you pass through to reach a paper co-authored with Erdős.
This author has (apparently) a nice, tidy, small Erdős number. Publishing without co-authors denies would-be co-authors the chance to get (his Erdős number + 1).
So clearly he/she is selfish and has the wrong priorities :)
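For the concrete-minded: an Erdős number is just shortest-path distance in the co-authorship graph, so a breadth-first search computes it. A toy sketch (the graph below is entirely invented):

```python
# Toy sketch: Erdős number = shortest-path distance to Erdős in the
# co-authorship graph, found by breadth-first search.
from collections import deque

def erdos_number(coauthors, start, target="Erdős"):
    """Return the co-authorship distance from `start` to `target`, or None."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == target:
            return dist
        for other in coauthors.get(person, []):
            if other not in seen:
                seen.add(other)
                queue.append((other, dist + 1))
    return None  # no chain of co-authors reaches Erdős

# Invented collaboration graph: A wrote with B, and B wrote with Erdős.
graph = {"A": ["B"], "B": ["A", "Erdős"], "Erdős": ["B"]}
print(erdos_number(graph, "A"))  # prints 2
```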
> Babai (the author) has a very small Erdos number
Let's quit beating around the bush: he has the smallest possible Erdős number[1]. If you don't presently have an Erdős number, or wish to improve your Erdős number for bragging rights, co-authoring a paper with him (or anyone with an equivalent Erdős number) is the best possible outcome. (Obtaining an Erdős number of one is sadly no longer possible.)
Also, don't miss the Erdős–Bacon number list. It sums one's Erdős number with one's Bacon number (shortest chain of shared film appearances leading to Kevin Bacon). Some examples (Erdős + Bacon):
MIT math professor Daniel Kleitman (1 + 2) = 3
Richard Feynman (3 + 3) = 6
NCAA gymnastics champion Kiralee Hayashi (3 + 2) = 5
Natalie Portman (5 + 2) = 7
There's even an Erdős–Bacon–Sabbath number that also includes proximity to the band Black Sabbath.
This article is worth discussing in itself, as it is very clear, gives a good sense of the situation, is easy to read and understand, and avoids the kinds of inaccuracy and solecism usually seen in articles about math.
> No one has ever found an efficient algorithm for an NP-complete problem, and most computer scientists believe no one ever will.
I find these kinds of assertions somewhat misleading. There _are_ "efficient" algorithms for, one could say, "pragmatic variations" of NP-complete problems. Some only work on a subset of cases (perhaps most of the useful ones), others are non-deterministic (but run them enough times and you get the right answer), etc.
Basically, you can often either settle for slightly suboptimal solutions, or ignore a few pathological worst cases. That said, I suspect that P != NP, and thought this was a good article.
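As an illustration of the "settle for slightly suboptimal solutions" point (not anything from the article), here is a sketch of the classic greedy heuristic for minimum vertex cover, an NP-hard problem: it runs in linear time and is guaranteed to be within a factor of two of optimal.

```python
# Sketch: minimum vertex cover is NP-hard, but this greedy heuristic runs in
# linear time and always returns a cover at most twice the optimal size.
def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # take both endpoints of an uncovered edge
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (4, 5)]
print(approx_vertex_cover(edges))  # a valid cover, possibly not minimal
```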
OK, so the question is: while this is theoretically important work, is it really worth having a practical implementation?
(I remember something similar happening with "primality testing is in P": the existing, non-P algorithms were good enough that people wouldn't bother using the new one except in specific situations.)
No. As the author notes in the preprint, there are already numerous very fast, practical algorithms. This one is only asymptotically faster, and by itself it would probably be slower in real-world applications.
The claims you're making in this subthread are not precise enough to be correct. If you only allow a constant, finite amount of time to solve a given problem, then asymptotics guarantee nothing: an O(1) algorithm could solve fewer instances than an O(2^2^n) algorithm. It is also not true that the asymptotically faster algorithm "can solve more instances as the allocated time increases." These statements are only true if you add the phrase "sufficiently large": asymptotically faster algorithms can solve more instances as the allocated time bound increases, provided that the time bound is sufficiently large.

But how large "sufficiently large" needs to be is arbitrary, so you can't say that Babai's algorithm is practical without either precisely studying the constants involved (which Babai did to some extent, but omitted from the notation for clarity) or doing empirical analysis, which is unlikely to ever happen with Babai's algorithm.
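A toy illustration of that "sufficiently large" caveat, with made-up step counts: an algorithm that always takes a constant 10^9 steps loses to a 2^n-step algorithm until n reaches about 30.

```python
# Made-up step counts: a constant-time algorithm with a huge constant
# loses to an exponential-time algorithm until n is "sufficiently large".
def constant_alg_steps(n):
    return 10**9          # always a billion steps, regardless of n

def exponential_alg_steps(n):
    return 2**n           # doubles with every unit increase in n

for n in (10, 20, 29, 30, 40):
    winner = "O(1)" if constant_alg_steps(n) < exponential_alg_steps(n) else "O(2^n)"
    print(f"n={n}: fewer steps -> {winner}")
# 2^n only passes 10^9 around n = 30, so below that the "slower" algorithm wins.
```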
If it's not required then it's not implied, and unfortunately it is required. If you want to keep having serious discussions about this stuff I suggest you go read some more about algorithms.
Yeah, but the question is whether the new algorithm beats the old ones on problems that are solvable in reasonable time. A constant-time algorithm is not that useful if the constant happens to be 10^1000 years.
If the paper is correct then there ARE problems that this algorithm can solve in less time than others. But it may be that every problem on which it is faster requires more bits of memory than there are atoms in the universe. Or that every problem on which it is faster would have a runtime that is 10^100 times the lifetime of the universe.
> If it is asymptotically faster, then it is implied that there are problems the faster one can solve, that the other cannot, assuming finite time.
More precisely, there are problems that the asymptotically faster one can solve in less time than the asymptotically slower one, or, equivalently, there exists some time period at and beyond which the asymptotically faster algorithm can solve problems that cannot be solved by the asymptotically slower one in the same time.
However, the minimum time period where that becomes true with real implementations running on real, available hardware may be very large, so there may not be any practical advantages to an asymptotically faster algorithm.
It may not be the case in our universe. An O(n^2) algorithm is only faster than an O(n^3) algorithm for n > N (and N may be 0). N may be so large that the problems it solves faster would take more time than the heat death of the universe.
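A small numerical version of that point, with invented constants: an algorithm taking 100·n^2 steps is asymptotically faster than one taking n^3 steps, but it only pulls ahead once n > 100. If that crossover N were instead astronomically large, the asymptotic win would be irrelevant in practice.

```python
# Invented constants: 100*n^2 steps vs. n^3 steps. The asymptotically
# faster algorithm only wins once n passes the crossover point N = 100.
def quadratic_steps(n):
    return 100 * n**2

def cubic_steps(n):
    return n**3

for n in (10, 100, 101, 1000):
    winner = "100*n^2" if quadratic_steps(n) < cubic_steps(n) else "n^3"
    print(f"n={n}: fewer steps -> {winner}")
```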
I'm not sure I understand what you're saying. The class of problems Babai's algorithm can solve is exactly the same as the problems any other of the algorithms can solve. They're all asymptotically bounded in time, as well.
If you have a finite amount of time, a given algorithm will only be able to solve a subset of all problem instances in the allocated time (assuming an infinite set of problem instances). If you have two algorithms and give the same finite amount of time to each, the asymptotically faster one would be expected to solve more and/or larger problem instances than the asymptotically slower one as the allocated time increases.
It is important to note that big-O notation is not a measure of "how fast it will run". It is not a measure of time at all, but of how many discrete steps are required to solve the problem as a function of input size. Each step could take longer, which could effectively give an algorithm a longer running time.

Solvability and big-O bounds assume unlimited time; every instance is "theoretically solvable". The constant factors and the size of N determine which implementation is practically faster in a given situation. Algorithms with lower asymptotic bounds are faster given a sufficiently large (and implementation-specific) N, but that depends on the implementations. It may be that N is larger than most GI instances people actually run, compared with the current programs. The current algorithms run in polynomial time for most graphs (just as many sorting algorithms run closer to n rather than n log n the closer the input gets to pre-sorted).
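On that sorting aside: insertion sort is a standard example of the adaptive behaviour being described. A rough demo (counting comparisons, not wall-clock time):

```python
# Rough demo of the sorting aside: insertion sort does about n comparisons
# on already-sorted input, but about n^2/4 on randomly ordered input.
import random

def insertion_sort_comparisons(values):
    a = list(values)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:
                break                      # element already in place
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

n = 1000
print("already sorted:", insertion_sort_comparisons(range(n)))
print("random order:  ", insertion_sort_comparisons(random.sample(range(n), n)))
```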
If the finite time is smallish, and the constant factor in the asymptotically faster algorithm is sufficiently large, it might be the case that it can't solve any instances of the problem in that time.
He said in the talk, answering this exact question, that basically his result shows that there aren't any really hard cases for the heuristic algorithms to run into. It would of course require some analysis to connect them to his result, but there would be no point in implementing his algorithm on a computer.
Is somebody able to explain in more detail what the "Greater metropolitan area" of P space means? I have a CS undergrad degree so I get what polynomial space is, just not sure about the quasi-p-space part.
Quasi-polynomial time is slower than polynomial time, but (much) faster than exponential time. Basically, if polynomial is O(n^c) and exponential is O(c^n), then quasi-polynomial is O(n^(log(n)^c)); that is, n does show up in the exponent, but only as some power of the log of n.
In theoretical terms, it's much closer to a polynomial problem than an exponential one.
A quasi-polynomial time algorithm is one which has a worst case running time of 2^[O((log n)^c)], for some fixed constant 'c'. Polynomial time (PTIME) algorithms are a special case (where c=1).
"Greater metropolitan area" is a way of saying that Babai's proposed algorithm while not exactly in PTIME, is still very close. Indeed, in a certain sense it is now "closer" to PTIME than EXPTIME.
Just a week or two ago I saw someone get downvoted for suggesting that many CS degrees don't include real analysis. I suspect that by now undergrad CS programs are all over the place, and I wouldn't dare make broad generalizations about the math content of a CS degree anymore.
I am pretty sure that most CS students don't have the algebra background to understand what's going on in this algorithm. I've taken a year's worth of algebra and the algorithm still relies on stuff that I don't get.
Edit: and most CS programs do not emphasize traditional math anyways. No analysis, some linear algebra, very very little abstract algebra, and that's pretty much it.
I would, because I'm on the inside. There are even many CS degrees that don't require a course in algorithms. You have to remember that most CS degrees come from public/state universities, not Stanford and MIT. And CS students at public/state universities are largely mathphobic.
That's a very broad generalization. There are many public/state universities with amazing CS research and programs (e.g. UC Berkeley) that are very math-heavy.
>There are even many CS degrees that don't require a course in algorithms.
What CS degree doesn't require a course in algorithms? I assume there are some Software Engineering degrees that don't, but how can you spend 4 years studying computer science without an algorithms course?
>And CS students at public/state universities are largely mathphobic.
I agree that most CS students try to avoid relatively proof-heavy classes like automata theory. But almost all CS programs require up to Calc 2 and discrete math, so I'm sure the truly "mathphobic" students would have majored in CIS or something else.
I also don't think this has anything to do with Stanford/MIT vs. public universities. Students at those universities avoid elective proof-heavy classes as well. I did undergrad at a relatively unknown state school, and now I'm in grad school in a top-ten program. I know plenty of people from both schools who avoid rigorous proof-heavy classes when they can.
Pretty big generalization there. There are 20+ CSUs and 10 UCs in California. Are you really going to paint 30 schools in one state alone with such broad strokes, plus the rest of the US?
Thanks for posting this. In the future, could you please add the algorithm to the title? This feels a couple steps removed from clickbait. No offense intended; it's just a suggestion.
> Babai has declined to speak to the press, writing in an email: “The integrity of science requires that new results be subjected to thorough review by expert colleagues before the results are publicized in the media.”
I'm okay with Quanta Magazine leaving out the irony; I can imagine the Engadgets and Buzzfeeds of the world would omit his response and draw their own wild conclusions regardless.
For nearly all practical purposes, there already exist efficient algorithms for GI. This work is mostly of theoretical interest because it shows that GI can be solved in quasi-polynomial time for _all_ graphs, whereas previous work still had exponential-time worst cases. That's also why there isn't really a concise algorithm you'd want to implement.
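For anyone curious about the practical side, here is a minimal sketch using networkx, whose isomorphism matcher is VF2-based (one of those fast-in-practice, exponential-worst-case algorithms). This is just an off-the-shelf illustration, not Babai's algorithm:

```python
# Minimal illustration of practical GI testing with an off-the-shelf tool.
# networkx's isomorphism check handles typical graphs quickly, even though
# its worst case is exponential.
import networkx as nx

G1 = nx.petersen_graph()
# Relabel the vertices with a permutation; the result is isomorphic to G1.
G2 = nx.relabel_nodes(G1, {i: (i * 3) % 10 for i in range(10)})

print(nx.is_isomorphic(G1, G2))  # True
```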