The impression I had is that many experts believe that no single algorithm A can achieve O(n^2), but there exists a sequence of algorithms A_i that achieve O(n^{2+d_i}), with d_i>0 getting arbitrarily small, but with the algorithms getting arbitrarily long and the constant overhead factors out front getting arbitrarily large.

Unfortunately this was just word of mouth, and I might be misremembering.
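
If I'm stating it right, the formal version of that guess, phrased with the usual matrix multiplication exponent ω, would be roughly:

    \omega := \inf\{\, c : n\times n \text{ matrix multiplication is } O(n^{c}) \,\} = 2,
    \text{ but the infimum is not attained: } \forall\, \varepsilon>0 \ \exists\, A_\varepsilon \text{ with } T_{A_\varepsilon}(n)=O(n^{2+\varepsilon}),
    \text{ yet no single algorithm } A \text{ satisfies } T_A(n)=O(n^{2}).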




This is what I choose to believe. Even if it is not true I will still believe it because it is so satisfying.


Yea. I've always wondered whether it was motivated purely by intuition about elegance, or whether there is another computational task that has been shown to have this feature. (The finite-length proof that an infinite tower of algorithms exist without them actually being specifiable in finite length would I guess require some Gödelian jiu jitsu?)


Hmm, I guess I don't get it. I feel like if you have such a sequence of algorithms A_i, you can "string them together" to get a single algorithm A which has the property that it has complexity O(n^{2 + epsilon}) for any epsilon > 0. Specifically, suppose we know that A_i requires time at most C_i + D_i n^{2 + 1/i}. Then there exists N_i such that, for any n >= N_i, C_i + D_i n^{2 + 1/i} >= C_{i+1} + D_{i+1} n^{2 + 1/(i+1)}, and so can't we specify A by saying "if the input has size n where N_i <= n < N_{i+1}, perform A_i"?
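
In code the "stringing together" is just dispatch on input size; a minimal sketch, with algorithms[i] and thresholds[i] as hypothetical stand-ins for the A_i and N_i:

    import bisect

    # Sketch only: algorithms[i] is a hypothetical implementation of A_i,
    # and thresholds[i] is the cutoff N_i from which A_i becomes the best
    # choice (sorted, with thresholds[0] = 0 so every n is covered).
    def multiply(A, B, algorithms, thresholds):
        n = len(A)
        i = bisect.bisect_right(thresholds, n) - 1  # largest i with N_i <= n
        return algorithms[i](A, B)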


An algorithm has finite length, but an infinite sequence of algorithms need not. For algorithm A to actually carry out "if the input has size n where N_i <= n < N_{i+1}, perform A_i", it needs to be able to generate the code for each of the infinite number of A_i's.


I see, thanks. Yeah that kind of thing is definitely above my pay grade.


The choice of A_i now depends on n. The complexity of your construction is O(n^{2 + 1/argmax_i {N_i <= n}}), not O(n^2).


Oh, I wasn't claiming it's O(n^2), just O(n^{2+epsilon}) for any epsilon > 0. Indeed, the function inside your O is dominated by n^{2 + epsilon} for any epsilon > 0.


The problem is that each algorithm likely has a constant factor much larger than the previous. This does not matter for the runtime complexity of any individual algorithm but when you string them together the time complexity explodes.


I'm pretty confident the construction above does not depend on any bound on the growth rate of C_i (or D_i).
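
To spell it out (same notation as above, taking the N_i to be increasing):

    \text{Fix } \varepsilon>0 \text{ and pick } j \text{ with } 1/j<\varepsilon.
    \text{ For } n \ge N_j \text{ the dispatch rule runs some } A_i \text{ with } i \ge j,
    \text{ and chaining the defining inequalities of } N_j,\dots,N_{i-1} \text{ gives }
    C_i + D_i\, n^{2+1/i} \;\le\; C_j + D_j\, n^{2+1/j} = O\!\big(n^{2+\varepsilon}\big),
    \text{ with } j \text{ (hence } C_j, D_j \text{) fixed, no matter how fast the later } C_i, D_i \text{ grow.}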


I guess this would be true even if the correct complexity was, say,

O(n^2 * ln(n)).

Waxing theoretical, there is some integer-valued function f such that optimal matrix multiplication is

Ω(n^2 * f(n)).

We can ask whether f is O(1). I think your statement is equivalent to saying that f is o(n^ε) for any ε > 0.
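
Unpacking the asymptotics (nothing beyond what's said above):

    \text{If optimal matrix multiplication takes } \Theta\!\big(n^{2} f(n)\big) \text{ steps, then }
    \big(\forall\, \varepsilon>0:\ O(n^{2+\varepsilon})\big)
    \;\Longleftrightarrow\; \forall\, \varepsilon>0:\ f(n)=O(n^{\varepsilon})
    \;\Longleftrightarrow\; f(n)=n^{o(1)},
    \text{ whereas a true } O(n^{2}) \text{ algorithm would need } f(n)=O(1).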


> I guess this would be true even if the correct complexity was, say, O(n^2 * ln(n))

Yes, you're right, it would be true. We can see this by just constructing a sequence of algorithms A_i that are implementations of the putative optimal O(n^2 * ln(n)) algorithm, except artificially slowed down by ever smaller polynomial overheads. But the hypothesis I'm describing above holds that no such O(n^2 * ln(n)) algorithm exists.

> Waxing theoretically, there is some integer-valued function f such that optimal matrix multiplication is...

The hypothesis I'm describing is that no optimal algorithm exists. Instead, the claim is that there is an infinite sequence of ever better algorithms (specified by ever longer sets of instructions, and with ever larger constant overheads).


This is roughly my guess about P=NP as well, and I suspect the problem will be "solved" when someone figures out how to formalize that progression satisfactorily. Does this sound right, or does anyone know a solid counter-argument?


Knuth holds such a view:

> Don Knuth: As you say, I've come to believe that P = NP, namely that there does exist an integer M and an algorithm that will solve every n-bit problem belonging to the class NP in n^M elementary steps.

> Some of my reasoning is admittedly naïve: It's hard to believe that P ≠ NP and that so many brilliant people have failed to discover why. On the other hand if you imagine a number M that's finite but incredibly large—like say the number 10↑↑↑↑3 discussed in my paper on "coping with finiteness"—then there's a humongous number of possible algorithms that do n^M bitwise or addition or shift operations on n given bits, and it's really hard to believe that all of those algorithms fail.

> My main point, however, is that I don't believe that the equality P = N P will turn out to be helpful even if it is proved, because such a proof will almost surely be nonconstructive. Although I think M probably exists, I also think human beings will never know such a value. I even suspect that nobody will even know an upper bound on M.

https://www.informit.com/articles/article.aspx?p=2213858


That doesn't feel quite the same. My intuition is based on noticing that many instances of NP-complete problems can be solved quickly, and that adding more sophisticated heuristics can put more instances in that category. I'm just guessing that the set of actually-slow instances can be made arbitrarily small if the program is long enough, i.e. has enough special cases for classes of problem that are amenable to faster solutions. It has the same "long enough program" flavor, but I don't think the equality will be quite exact.



