As an ex-physicist (quantum physics, now data science), I am still puzzled why computer scientists consider computational complexity something metaphysical. Or even worse, a rule the universe should care about; look up considerations of non-local boxes (a generalization of quantum correlations; see the Popescu-Rohrlich box) and computational complexity.
First, even the CS's nightmare (where P=NP) may make little practical difference (e.g. if the simplest algorithm for an NP-complete problem takes O(n^1000)).
Second, for many problems even O(n^2) is too much (think of big data problems). (Also, given that computation requires physical resources, you cannot scale it arbitrarily.) So even in P the actual exponent matters a lot.
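A rough back-of-the-envelope in Python, assuming (hypothetically) an optimistic 10^9 operations per second, shows the scale:

    n = 10**9                 # a billion records -- routine "big data"
    ops_per_second = 10**9    # optimistic single-core throughput (an assumption)
    seconds = n**2 / ops_per_second
    print(seconds / (3600 * 24 * 365))  # ~31.7 years for one O(n^2) pass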
One interesting physical case is the spontaneous emission of a photon from an excited atom. It looks like an exponential decay, but the theory shows that the decay has to be polynomial (I don't remember its power, whether it is more like 1/t^3 or 1/t^6, but not too high). To my knowledge, we haven't detected the polynomial tail (because by the time it kicks in, the probabilities are extremely low).
So, is there a physical effect that cares about computational complexity? Or a phenomenon in which it matters that we can simulate it in "only" O(n^100), not O(2^n)?
>So, is there a physical effect that cares about computational complexity?
The No-SuperSearch Postulate (there is no physical means to solve NP-complete problems in polynomial time) is not a purely mathematical conjecture. It is also a claim about the laws of physics. The postulate includes P != NP as a special case, but it is stronger.
If you assume that the postulate is true, you can use it to explain things like why quantum mechanics is linear, why adiabatic systems have small spectral gaps, why closed timelike curves don't exist, the black hole information paradox, the impossibility of relativity computers, and why QM amplitudes are not directly observable. The question of how much physics you could derive from the No-SuperSearch postulate is interesting.
And it is an assumption I find... strange. If it really works, it would be interesting.
QM is linear for many other reasons (the simplest related to not being able to communicate faster than light); the smallness of spectral gaps is AFAIK a conjecture.
And in general, playing with causality will bring many other problems.
The principle is essentially a recognition that, if there were no constraints on energy or other physical limits, then many problems in NP or harder would be tractable. This bridge might be two-way, so we may start from complexity and ask: what bounds might we expect in physics, given that without them this problem would be tractable?
One relevant example is protein folding, which is NP-complete. Why then have we not found arrangements in which protein conformations encode solutions to hard NP-complete instances? It's reasonable to postulate that nature does not actually solve NP-complete problems and that almost all carriers of problematic instances died off (not completely, since prions exist after all).
If this is the case then we can make a prediction that there exists some polynomial time algorithm able to predict conformations of biologically encountered instances in reasonable time to high accuracy and that a program search such as deep learning will one day find this algorithm given sufficient data. That would be a huge deal.
+++
The following links argue, more or less, that from a Bayesian perspective a prior weighted heavily on smaller exponents for the case P=NP is more reasonable, given the examples and the rarity of realistically intractable polynomial-time algorithms (and how quickly they seem to fall to reasonable bounds with focused attention).
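To make the folding combinatorics concrete, here is a toy sketch using the simplified 2D HP lattice model (itself known to be NP-complete); a hypothetical illustration, not a claim about real proteins. Even for short chains the brute-force conformation search blows up exponentially, which is exactly the kind of search nature appears to avoid:

    from itertools import product

    MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

    def energy(coords, seq):
        # HP model energy: -1 per contact between non-adjacent hydrophobic (H) residues
        pos = {c: i for i, c in enumerate(coords)}
        e = 0
        for (x, y), i in pos.items():
            for dx, dy in MOVES.values():
                j = pos.get((x + dx, y + dy))
                if j is not None and j > i + 1 and seq[i] == seq[j] == "H":
                    e -= 1
        return e

    def best_fold(seq):
        # Brute force over all 4^(n-1) walks on the square lattice, keeping self-avoiding ones
        best = (0, None)
        for walk in product(MOVES, repeat=len(seq) - 1):
            coords, (x, y) = [(0, 0)], (0, 0)
            ok = True
            for step in walk:
                dx, dy = MOVES[step]
                x, y = x + dx, y + dy
                if (x, y) in coords:
                    ok = False
                    break
                coords.append((x, y))
            if ok:
                e = energy(coords, seq)
                if e < best[0]:
                    best = (e, coords)
        return best

    print(best_fold("HPHPPHHPH"))  # 9 residues already means 4^8 = 65536 candidate walks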
> If this is the case then we can make a prediction that there exists some polynomial time algorithm able to predict conformations of biologically encountered instances in reasonable time to high accuracy and that a program search such as deep learning will one day find this algorithm given sufficient data. That would be a huge deal.
Hang on - proteins folding "unpredictably", as per a prion, is not evidence of NP-complete problems being solved in that "problem". Also, while prions might do that, it's not clear to me that there is much evidence that their folding is "unpredictable" in any rigorous way. Can you cite a reference?
Yeah, I'm saying prions show that nature is not solving NP-complete problems. What we have instead is that a class of well-behaved proteins was selected for, but there can still be problematic instances.
> First, even the CS's nightmare (where P=NP) may make little practical difference (e.g. if the simplest algorithm for an NP-complete problem takes O(n^1000)).
Sure, it may. But there are precious few examples in the wild where we're dealing with crazy powers like that. I think the bigger (but still quite small) failure in P's ability to measure "ease" of computation is that it only considers the worst case. A good example is graph isomorphism. A quasipolynomial algorithm for graph isomorphism was recently developed, the best bound we've found for the problem yet. However, even with our existing algorithms it was extremely difficult to find examples where they didn't run in polynomial time, even when actively looking for them. But again, this is the exception, not the rule.
> Second, for many problems even O(n^2) is too much (think of big data problems). (Also, given that computation requires physical resources, you cannot scale it arbitrarily.) So even in P the actual exponent matters a lot.
Very true, but TCS spends a lot of time developing new algorithms to shrink exponents as well. A lot of the methods and skills needed for making better algorithms to get problems into P also apply to moving them around within P. And many of the same tactics for trying to figure out exactly how P and NP fit together apply to trying to figure out the same for P and BQP.
Since you have a background in quantum physics and (I assume) care about that field, are you aware that quite a bit of the TCS community works on quantum problems? There have been plenty of results that stand to be very physically relevant. Some good examples are the recent (quite rapid) progress in analyzing symmetric cryptography schemes' vulnerability to attacks by quantum computers [1]. Interestingly, many are starting from the same jumping-off point that asymmetric attacks did; namely Simon's algorithm (an important stepping stone towards Shor's algorithm).
Don't get me wrong - TCS is an important discipline, with both interesting practical and theoretical results.
The only thing I am criticizing here is the _metaphysical_ implications of computational complexity classes (and also jargon which is misleading for non-TCS people - e.g. O(n^100) is not "efficient" or "fast"). Furthermore, as you pointed out, some problems have practical solutions which usually work efficiently, even if not always, or give approximate but still useful solutions.
> There have been plenty of results that stand to be very physically relevant
What I meant was not "can be applied to physical systems" (I agree!), but "is a foundation for physical laws" (I am highly skeptical and I am yet to see an example).
(Though, to be honest, I was more into quantum info than quantum computing; and right now I am more into... quantum JavaScript: http://quantumgame.io/)
In practice, n^100 is not a thing that happens except in weird or contrived cases. n^2 is common. n^3 turns up here and there. I've heard of n^5 in graph algorithms, I think, but it's pretty exotic.
Let's see if I can provide a very rudimentary intuition about why this might happen. YMMV, and I'm sure this isn't the whole story.
O(n^2) basically implies that we have to look at every pairwise combination of elements. Higher powers might imply needing to look at combinations of multiple elements / paths / whatever. In practice, these algorithms tend to be more or less tractable.
O(2^n), on the other hand, generally implies a brute force search, where we need to consider every possible solution, or every possible combination of elements. This is essentially the same as saying we can't do much better than guesswork.
So O(n^k) versus O(2^n) is basically "looking at small groups of elements of known size" versus "trying every possible solution."
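To make that contrast concrete, here's a toy Python sketch (hypothetical examples, not drawn from the article): an O(n^2) pairwise scan next to an O(2^n) subset search.

    from itertools import combinations

    def closest_pair_1d(xs):
        # O(n^2): inspect every pairwise combination of elements
        best = None
        for a, b in combinations(xs, 2):
            if best is None or abs(a - b) < abs(best[0] - best[1]):
                best = (a, b)
        return best

    def subset_sum(xs, target):
        # O(2^n): brute force over every possible subset -- essentially guesswork
        for r in range(len(xs) + 1):
            for subset in combinations(xs, r):
                if sum(subset) == target:
                    return subset
        return None

    print(closest_pair_1d([9, 2, 14, 5]))      # -> (2, 5)
    print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # -> (8, 7); the search space doubles with each element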
This isn't the whole story—there are problems in between the two groups, though a couple of big ones have been reduced to polynomial time in recent years. But to understand the "metaphysical" interpretations that some CS people sometimes loosely apply to the two classes, it helps to look at the actual distribution of real world algorithms. It's pretty clumpy. There's an empirical element here.
Yes, but since such examples are far from common (look at all the TCS people's jaws dropping at that constant and power) I fail to see how this implies P vs NP is irrelevant except in platonic math fairytale land. Since you said you have a QI background, I'm guessing you're familiar with Landau's symmetry-breaking framework for identifying phase transitions. Like P, it works perfectly in a lot of cases and substantiates our desired qualitative idea (phase transitions) with a quantitative property. However, as I'm sure you know, we've found plenty of cases in the realm of QI where it fails to identify phase transitions (e.g. topological order) that we would like to categorize as such. However, nobody would think of throwing it out entirely just because of these exceptions, since it is useful in the vast majority of cases.
Such is the case with P and the notion of being "easy" to compute. Yes, some algorithms in P are actually qualitatively "hard" and some outside it are actually qualitatively "easy". But in the vast majority of cases, finding that a problem is in P means we've drastically reduced its difficulty, and proofs that a problem isn't in P establish bounds that make it infeasible in the general case.
> I am still puzzled why computer scientists consider computational complexity something metaphysical.
In what sense do you mean "metaphysical"?
Personally, I would say computational complexity is "physical", in the sense that it gives fundamental limits to what can/cannot be done in the physical world using some amount of resources (say, anything obeying Blum's axioms). I would say it's also "metaphysical", in the sense that those limits are derived purely mathematically and are "imposed on" the physical world; rather than, for example, obtained by observing the physical world and imposing consistency with those observations on some mathematical model. I think of these computational "laws of nature" in a similar way to the "law of nature" that e^(i*pi) + 1 = 0.
In other words, we can imagine (AKA develop a consistent mathematical model of) a world operating by Newtonian physics; we can do the same for a world operating by special or general relativity; and for the standard model; we can do it for worlds with N dimensions; with Euclidean, hyperbolic or spherical geometry; and so on. All of these "make sense", in a self-consistent way, and it's up to physicists to figure out which of those models best describes our physical world.
On the other hand, there don't seem to be self-consistent models of universal systems which can solve their own halting problem, or models which don't possess the hierarchical complexity classes we've found (although, of course, the machine models may differ, we may introduce hierarchies of halting oracles, etc.). In that sense, these seem to be "laws of nature" which are intrinsic to our models, by virtue of their mathematical definition, rather than being imposed from the outside in order to agree with observations.
The section about closed timelike curves is, I think, a good example of how computational complexity has metaphysical consequences but struggles to bridge physics and metaphysics - see what Aaronson writes about the "grandfather paradox".
> the theory shows that the decay has to be polynomial
I thought this was the other way around: the theory predicts exact exponential decay, but experiments aren't entirely clear that that is always the case. Depending on which textbook or paper you read, some say the deviations from exact exponential decay are just experimental error, while others say they are evidence of additional effects that the exponential decay law does not account for.
From quantum field theory, you get polynomial decay, and it's quite fundamental (related to causality and holomorphic functions (the Kramers–Kronig relations)); if it were not working, a lot of physics would need rewriting.
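A sketch of the standard argument as I remember it (assuming only a Hamiltonian spectrum bounded from below; the threshold exponent alpha below is system-dependent):

    A(t) = <psi| e^{-iHt} |psi> = \int_{E_0}^{\infty} \rho(E)\, e^{-iEt}\, dE

    Since \rho(E) vanishes below the ground-state energy E_0, the Paley-Wiener
    theorem forbids |A(t)| <= C e^{-\gamma t} for all times, so the decay must
    eventually be slower than exponential. For \rho(E) ~ (E - E_0)^{\alpha}
    near threshold, the long-time survival probability is a power law,
    |A(t)|^2 ~ t^{-2(\alpha + 1)}  (e.g. \alpha = 1/2 gives a 1/t^3 tail).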
EDIT:
I remember it from a lecture, many years ago. Some googling brings me to this thread with links to papers:
> From quantum field theory, you get polynomial decay
I think you're oversimplifying, although I agree that I was also oversimplifying when I said that the theory predicts exact exponential decay.
What I get from that stack exchange thread and the linked papers is this: in an idealized (and not physically possible for a real system) model in which every individual decay is statistically independent from every other and in which all of the quantum states that are possible end points of the decay are unoccupied, the theory predicts exact exponential decay.

But in real physical systems, neither of those conditions is exactly true. Individual members of an ensemble of decaying quantum systems (such as radioactive atoms in a sample) interact with each other (either directly or indirectly via states in the environment), so their decays are not completely independent; and any real decaying system has an external environment which includes some occupied "target" quantum states.

When these effects are taken into account, the theory predicts a decay law which is often called "power law" but which I would describe as more of a "truncated exponential": it replaces an infinite series of power law terms (the exact exponential) with a finite number of them. The actual measured decay law in a given experiment will depend on how much the experiment probes the portion of the state space where the above effects are significant (for example, a decay law measured over a very long time can deviate measurably from exponential where a measurement over a shorter time would not).
AFAIR if you have a single excited particle, you still get this polynomial decay. (It was more about the energy distribution and Fourier transform properties - as mentioned in the post.)
Yes, in these subtle effects backaction may be important. (The whole non-exponential decay is because the "emitted" EM field acts back on the particle.)
> AFAIR if you have a single excited particle, you still get this polynomial decay.
I'm not seeing that in the linked papers. I'm seeing descriptions of exponential decay under certain conditions, and deviations from that under other conditions.
> As an ex-physicist (quantum physics, now data science), I am still puzzled why computer scientists consider computational complexity something metaphysical.
Please don't generalize from one or a few computer scientists to the whole group.
By the way, a similar thing can be said for the halting problem. We may not be able to prove mechanically whether any program halts, but if we can do it for any program appearing in practice, that may be good enough. Still the proof is of immense theoretical value.
(You are right that I missed a quantifier; this regex would also fix a typo.)
> Still the proof is of immense theoretical value.
I enjoy mathematics, including in its pure form. Sure, there are many things that are nice, valuable and beautiful. Some are of practical use.
Yet, misusing it is a problem. Some otherwise great minds (like Roger Penrose) make some questionable claims about consciousness based on unjustified implications for the real world.
> We may not be able to prove mechanically whether any program halts, but if we can do it for any program appearing in practice, that may be good enough.
Alas, we can't even do that. E.g. I have a simple program that systematically looks for a counterexample to Fermat's Last Theorem, and halts if it finds one.
We know now that this simple and comparatively natural program will never halt. But the proof is crazy.
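For concreteness, a minimal sketch of such a program (hypothetical, in Python): it halts iff some counterexample a^n + b^n = c^n with n >= 3 exists, which Wiles's proof rules out.

    from itertools import count

    def search_flt_counterexample():
        # Enumerate all (a, b, c, n) with n >= 3 under a growing bound; halt on a hit.
        for bound in count(3):
            for n in range(3, bound + 1):
                for a in range(1, bound + 1):
                    for b in range(1, bound + 1):
                        for c in range(1, bound + 1):
                            if a**n + b**n == c**n:
                                return (a, b, c, n)   # never reached, by Wiles's proof

    search_flt_counterexample()   # runs forever -- but proving that took ~350 years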
> So, is there a physical effect that cares about computational complexity? Or a phenomenon in which it matters that we can simulate it in "only" O(n^100), not O(2^n)?
Personally, I consider it metaphysical because it says something about provability (since, deep down, proofs and programs are the same thing). Truth beyond proof (or at least beyond what's feasible to prove) is bizarre and mysterious.
Metaphysics (and physics) is concerned with uncovering the fundamental truths of the universe. Computational complexity indicates this may be unachievable.
Is there anyone on Hacker News who could explain section 6 of this article to me? Overall the article is very interesting, but I am not sure if the waterfall argument is valid.
The waterfall conjecture was stated as:
'Given any chess-playing algorithm A that accesses a "waterfall oracle" W, there is an equally good chess-playing algorithm A', with similar time and space requirements, that does not access W.'
From this we can take the waterfall to be a metaphor for us, as external observers, giving semantics to the computation. Thus, by providing the reduction, we show that the computation in and of itself does not have semantics. However, why would we assume that A' exists? Even if there is an A', can't we reduce A -> A', and if we can reduce W to A, then by proxy W -> A'?
Well, the author tries to prove it's not. And, btw, an oracle is something more than just an external observer; it can be used, for example, to solve the halting problem for classes for which it would be impossible even with nondeterministic machines.
In my hasty typing I forgot to add 'in' to 'valid'.
Could you elaborate on the oracle approach? Because I interpreted it as being the external observer in this case. From computational complexity I know that an oracle can solve NP problems in polytime on a nondeterministic Turing machine, but I am not sure how this oracle stuff loops back to the philosophical argument.
The proof provided by Aaronson is really opaque as to what it proves and how it shows that the argument is invalid.
The main problem with that section is that the author tries to engage with a particular objection (the waterfall argument) that is itself a bit obscure.
I think philosophers should focus on novel, contemporarily-unfalsifiable ideas, like: are we living in a simulation? If you were a being outside time with instant access to all moments of history (past, present, future), how much power would you have? How would you create time (continuous steps? adaptive resolution depending on what beings inside time decide to observe?)? Etc. Some of these ideas might later become falsifiable and move science forward, even if they sound super sci-fi or even insane right now. Or they would make a nice background for a movie/game with some non-trivial thought already applied.
Falsifiability was a philosophical breakthrough first, and it is now the de facto test to see if an idea has any real-world consequence.
To say philosophers should focus on unfalsifiable ideas is to say philosophers should be antipragmatic and inconsequential. One could easily argue that many are, but to encourage it? That's not necessary.
We need to focus on everything, and think about everything. We shouldn't ask "our philosophers" to "think" about things. We should all learn how to think, become better philosophers ourselves, and find a way to openly contribute to the progress of our thoughts.
> To say philosophers should focus on unfalsifiable ideas is to say philosophers should be antipragmatic and inconsequential.
Philosophy is the origin of all the sciences. Starting with Mathematics, as various topics became tractable they became self-standing disciplines, leaving the non-tractable ones behind. Thus inherently philosophers are the ones who work on the really hard problems.
(True, epistemology remains in philosophy but I would argue that the tractable parts moved into mathematics).
AI suffers from the same problem -- once something becomes well accepted people say, "well THAT isn't AI; AI is <some other intractable domain>".
I think it would be more pragmatic to say philosophers should find ways to make unfalsifiable claims falsifiable, so that scientists could work on them. But 1) this is actually what scientists already do when confronted with new problems, and 2) the statement rests on pragmatism, which is not mandatory unless of course you are already a scientist.
Ultimately any thought can be advanced with further thinking because concepts and ideas are progressive. Our mind is an incredible machine, and for anyone to feel thinking is only for philosophers is truly doing themselves a disservice. We all need to learn to think harder. And our "philosophers" need to find ways to teach how to think better and harder and newer thoughts. What anyone thinks or thought of in the past is more history than philosophy.
Very interesting. I'll be honest, I expected this to be vague and self-important but it was very concrete and made a lot of discussed-to-death topics interesting again. He sort of lost me about halfway through the paper but I suppose that's because he has a PhD and I don't.
Yes, it is. And it wasn't meant as criticism, I think.
The pointers to past discussions are helpful if there was a real discussion then.
I strongly dislike all those "but look, it has been submitted before (but nobody ever commented)", because it adds absolutely nothing, but in this case it's different.
Not related to the content of the PDF, but I seriously dislike the width of those margins. While the LaTeX looks marvelous as always, there's just too much text per page.