
In practice, n^100 is not a thing that happens except in weird or contrived cases. n^2 is common. n^3 turns up here and there. I've heard of n^5 in graph algorithms, I think, but it's pretty exotic.

Let's see if I can provide a very rudimentary intuition about why this might happen. YMMV, and I'm sure this isn't the whole story.

O(n^2) basically implies that we have to look at every pairwise combination of elements. Higher powers might imply needing to look at combinations of multiple elements / paths / whatever. In practice, these algorithms tend to be more or less tractable.
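
To make that concrete, here's a rough Python sketch of the "look at every pairwise combination" pattern. (Closest pair by absolute difference is just my illustrative pick, not anything from the parent comment.)

    from itertools import combinations

    def closest_pair(xs):
        # Examine every pairwise combination: n*(n-1)/2 comparisons, i.e. O(n^2).
        # Tedious, but perfectly tractable for reasonable n.
        best = None
        for a, b in combinations(xs, 2):
            if best is None or abs(a - b) < abs(best[0] - best[1]):
                best = (a, b)
        return best

    print(closest_pair([9, 1, 14, 4, 6]))  # -> (4, 6)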

O(2^n), on the other hand, generally implies a brute force search, where we need to consider every possible solution, or every possible combination of elements. This is essentially the same as saying we can't do much better than guesswork.
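
For contrast, a sketch of the brute-force flavor (subset sum is my own choice of example): the only strategy is to enumerate all 2^n candidate solutions and check each one.

    from itertools import combinations

    def subset_sum(xs, target):
        # Brute force: try every one of the 2^n subsets of xs.
        # No cleverness here, just organized guesswork.
        for r in range(len(xs) + 1):
            for subset in combinations(xs, r):
                if sum(subset) == target:
                    return subset
        return None

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # -> (4, 5)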

So O(n^k) versus O(2^n) is basically "looking at small groups of elements of known size" versus "trying every possible solution."

This isn't the whole story: there are problems in between the two groups, though a couple of big ones (primality testing, for instance) have been reduced to polynomial time in recent years. But to understand the "metaphysical" interpretations that CS people sometimes loosely apply to the two classes, it helps to look at the actual distribution of real-world algorithms. It's pretty clumpy. There's an empirical element here.




High exponents do happen - see http://cstheory.stackexchange.com/questions/6660/polynomial-... for an example of an O(n^79) algorithm (sic!).


Yes, but since such examples are far from common (look at all the TCS people's jaws dropping at that constant and exponent), I fail to see how this implies P vs NP is irrelevant except in platonic math fairytale land. Since you said you have a QI background, I'm guessing you're familiar with Landau's symmetry-breaking framework for identifying phase transitions. Like P, it works perfectly in a lot of cases and substantiates our desired qualitative idea (phase transitions) with a quantitative property. As I'm sure you know, though, we've found plenty of cases in the realm of QI where it fails to identify phase transitions that we would like to categorize as such (e.g. topological order). Yet nobody would think of throwing it out entirely just because of these exceptions, since it is useful in the vast majority of cases.

Such is the case with P and the notion of being "easy" to compute. Yes, some problems in P are actually qualitatively "hard", and some outside it are actually qualitatively "easy". But in the vast majority of cases, finding that a problem is in P means we've drastically reduced its difficulty, and a proof that a problem isn't in P establishes bounds that make it infeasible in the general case.



