If you are interested in the implications of P vs. NP, as opposed to a survey of the situation around the problem, check out this excellent essay [1] by Scott Aaronson. He goes into interesting consequences about the nature of quantum computing, simulation of a human mind, and time travel, among other topics.
There is also another interesting article [2] by Arkady Bolotin at Ben-Gurion University in Israel, who argues that the lack of macroscopic quantum behavior implies P!=NP. Basically, macroscopic quantum behavior would imply that the universe is computing the solution to an NP-hard problem, in violation of currently accepted laws of physics. Also, see this debate between Bolotin and an educated commenter [3].
>The problem is that the equation says nothing about how large an object needs to be before it obeys Newtonian mechanics rather than the quantum variety.
I'm pretty sure this is BS.
The linked medium post does nothing to defend the implicit premise that the universe can be computed in a reasonable amount of time by classical computers.
Indeed it is. Interference phenomena have so far scaled up to fairly large many-atom structures in rather predictable ways. The real problem for quantum mechanics starts when it needs to explain gravity.
I think you're talking past the parent, whose point is that our universe's physics may not be polynomial time computable (or computable at all, for that matter).
I made two different points. The first is that there's nothing that "obeys Newtonian mechanics rather than the quantum variety"; it's quantum all the way up. The second is what you said.
Guess it wasn't clear enough that the two points aren't related.
"...it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time. Example problems may well include all of the CMI prize problems."
The linked survey comments on this; whenever we've been able to put a problem in P, we've almost always been able to bring down the exponent to practical sizes.
The classic low-hanging-fruit problem. Whenever we've pumped an oil well dry, we've almost always found another easy-to-pump well; therefore all oil wells are easy to exploit and we'll never run out of oil. There are other analogies with the ease of catching wild passenger pigeons and bison.
Yes, this always confuses me: why are the constants swept under the carpet for real-world, practical applications? From the same article:
"It is also possible that a proof would not lead directly to efficient methods, perhaps if the proof is non-constructive, or the size of the bounding polynomial is too big to be efficient in practice."
Given the long standing nature of the problem, my guess is that some new techniques or insights will be required to solve it one way or the other. Those may well give clues to solving NP/NP-Complete problems efficiently.
Until someone actually does it though it's still just speculation.
One answer is because in practice, we very rarely see large constants anywhere. While theoretically x^1000 algorithms exist, it's hard to actually find a reasonable example of one. Ditto for exponential algorithms with a base barely above 1, like 1.00000001^n.
Of course, why this might be the case remains to be investigated.
Example: You want to bisect the nodes of an edge-weighted graph into two sets of equal size such that the sum of the edge weights between the two parts is maximized.
The best known approximation algorithm (the problem is NP-hard) runs in time O(n^(10^100)).
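For contrast, here's a throwaway local-search heuristic for the same max-bisection problem. It's purely illustrative (no approximation guarantee at all, unlike the scheme above) and assumes the edge weights come as a dict keyed by vertex pairs:

    # Local search for max bisection: split the vertices in half, then keep
    # swapping one vertex from each side whenever the swap increases the total
    # weight crossing the cut. Terminates because the cut weight strictly grows.
    def cut_weight(weights, side_a):
        return sum(w for (u, v), w in weights.items()
                   if (u in side_a) != (v in side_a))

    def local_search_bisection(vertices, weights):
        half = len(vertices) // 2
        a, b = set(vertices[:half]), set(vertices[half:])
        improved = True
        while improved:
            improved = False
            for u in list(a):
                for v in list(b):
                    new_a, new_b = (a - {u}) | {v}, (b - {v}) | {u}
                    if cut_weight(weights, new_a) > cut_weight(weights, a):
                        a, b, improved = new_a, new_b, True
                        break
                if improved:
                    break
        return a, b

    # e.g. local_search_bisection(['a', 'b', 'c', 'd'],
    #                             {('a', 'c'): 3, ('b', 'd'): 2, ('a', 'b'): 1})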
It's interesting though that the examples you provided are all approximations for known NP-hard problems, and in many cases the free parameter that lets you get arbitrarily large constants depends on how close to optimal you want the approximation to be.
Do you have examples of large-constant algorithms in P that are not approximations of NP-Hard problems?
Not quite the same as the earlier conversation on exponents, but since you mentioned constants, matrix multiplication is a prime example -- apparently Coppersmith-Winograd has enormous constants ("only provides an advantage for matrices so large that they cannot be processed by modern hardware"), though I can't actually find them (the wiki article doesn't really have a proper source, and preliminary searching also yielded no concrete results -- possibly because the proof of existence was non-constructive?).
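To get a feel for what a "galactic" constant does, here's a back-of-the-envelope comparison. The real Coppersmith-Winograd constant isn't published in a usable form, so C below is a made-up placeholder; only the shape of the comparison matters:

    # Naive multiplication is ~2*n^3 flops; Coppersmith-Winograd is ~C*n^2.376
    # for some huge hidden constant C (the value below is purely hypothetical).
    C = 1e8
    omega = 2.376

    def naive_cost(n):
        return 2 * n**3

    def cw_cost(n):
        return C * n**omega

    n = 2
    while cw_cost(n) >= naive_cost(n):
        n *= 2
    print(f"CW only wins past roughly n = {n:,} (with the assumed C = {C:g})")

Even with this modest guess for C, the break-even matrix size lands in the trillions of rows, far beyond anything hardware can hold.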
Complexity theory abounds with such algorithms for concrete problems. In fact it's even worse than that. There is a whole subfield of complexity theory which looks for "fixed-parameter tractability" and typically involves algorithms with reasonable exponents but astronomical constants. Even if you have an O(n) algorithm, if the constant hidden by the notation happens to be on the order of 10^10000, this is not something that you could ever hope to run.
One example is the set of consequences of the Robertson-Seymour theorem, which (non-constructively) asserts that a large number of graph problems are in P. For instance, we know that testing whether a graph can be embedded in the torus is in P, but the algorithm in question is completely infeasible.
I would expect a positive answer to P = NP to be completely useless in practice. A negative answer would similarly be rather unexciting by itself. What would be exciting are the tools for getting this answer...
The "non-constructively" in this case has fascinating consequences.
One of those is that, according to classical mathematics, there are concrete problems with polynomial time algorithms that there is no way of producing, and, if produced, no way of verifying. (Literally no way of verifying: pick an axiom system, and there are problems for which the question of whether there is one more forbidden minor is undecidable in said axiom system.)
If you doubt that such algorithms have any real existence, you might be a closet constructivist...
>I would expect a positive answer to P = NP to be completely useless in practice. A negative answer would similarly be rather unexciting by itself. What would be exciting are the tools for getting this answer...
Fixed parameter tractability is not necessarily about polynomial time algorithms with huge constants; it's about getting efficient algorithms for problems that have different parameters. Optimising over all the parameters might require exponential time, but if you fix one of the parameters, some problems become tractable (with the constant depending on the value of the fixed parameter).
This can lead to large constants, but doesn't have to. It depends on the value of the fixed parameter.
In this case the polynomial time algorithm is to test the graph against a finite list of forbidden minors. This has a reasonable exponent, but a constant dependent upon how long that finite list is.
That finite list can be extremely long indeed. And in some cases, there is no way of finding or verifying the full list.
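In code, the "algorithm" Robertson-Seymour hands you is almost embarrassingly short; all the difficulty hides in the (possibly unknowable) obstruction list and in the minor test. A sketch, where FORBIDDEN_MINORS and has_minor are hypothetical stand-ins, not real library calls:

    # Membership test for a minor-closed property (e.g. embeddability in the
    # torus): the graph has the property iff it contains none of the finitely
    # many forbidden minors guaranteed to exist by Robertson-Seymour.
    FORBIDDEN_MINORS = []  # finite in theory; nobody knows the full list

    def has_minor(graph, h):
        # Placeholder: the real minor test is polynomial (roughly cubic) but
        # carries an astronomical hidden constant; it is far beyond a sketch.
        raise NotImplementedError

    def embeds_in_torus(graph):
        return not any(has_minor(graph, h) for h in FORBIDDEN_MINORS)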
Do we rarely see large constants? Or are they just not on problems where precision and "absolute best answer" matter?
As an example, an algorithm to find the optimal solution to a Rubik's cube would actually be very difficult to write. However, finding some solution in a short enough time frame is quite easy. This is even more true the larger the cube you try to solve.
Contrast this with problems such as encryption, where we have specifically constructed problems for which there is no "best answer"; rather, there is only a single answer that matters.
I think you're proving my point here -- finding the optimal solution for a Rubik's cube is probably at least PSPACE-Hard, which is probably exponential.
So complexity theory is confirming your intuition, which is that 'optimization type problems' are hard.
Even the basic knapsack (rucksack) optimization problem is NP-hard. I'd dare say that a huge class of optimization problems maps very naturally to this particular one; see https://news.ycombinator.com/item?id=13316776 for starters.
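For readers who haven't met it, here is the problem in its most naive form, a brute-force sketch with made-up item data; the 2^n subsets it enumerates are exactly why the straightforward approach blows up:

    from itertools import combinations

    # 0/1 knapsack by exhaustive search: try every subset of items and keep the
    # most valuable one that fits. Item data below is purely illustrative.
    items = [(12, 4), (10, 6), (8, 5), (11, 7), (14, 3)]  # (value, weight)
    capacity = 10

    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for _, w in subset)
            value = sum(v for v, _ in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset

    print(best_value, best_subset)  # -> 26, the (12, 4) and (14, 3) items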
Ok, so optimization problems are potentially bounded with smaller coefficients than we would think.
What about sequencing life? Supposedly we share a lot at the genomic level with animals we are vastly different from. Could a large degree of the polynomial that is life explain that divergence?
Ha! I'm just trying to get a better feel for what is being said. Definitely the dumbest person in this particular conversation.
I am assuming you are pushing the same angle as the other poster. That is, most optimization problems are actually not "large constants" in the exponents. So, since most of what developers think of as "hard" are almost all classical NP problems, maybe it makes sense to look at other things we don't typically think of as computational.
Could just be nonsensical, though. I fully accept that.
I misunderstood your point, then. I thought you were saying that we rarely see giant exponents in things. My question is if we typically do see giant exponents, but make do with much simpler solutions. (That make sense?)
I am saying we rarely see giant exponents (on polynomial-time algorithms) in practice, because most problems that we intuitively think of as 'hard' turn out to be in NP or PSPACE or other complexity classes that we have decent reason to suspect have exponential lower bounds on running time.
P: O(n^k), where large k makes it 'slow' but still polynomial
NP: probably O(k^n), k > 1, where k close to 1 makes it 'fast', but still exponential
The objection is that k can be big for some polynomial algorithms but small (near 1) for other exponential ones, so we can have 'slow' polynomial algorithms and 'fast' exponential ones.
But in practice, k is usually small for polynomial algorithms and not close to 1 for exponential ones so simply knowing whether an algorithm is exponential or polynomial tells you something.
The 'hard' problems you gave me all (probably) fall into the exponential runtime class, whereas the easier ones (like finding any solution to a Rubik's cube) fall into the polynomial runtime class. So your intuition of 'hard' vs. 'easy' matches complexity theory's evaluation, even though theoretically there could be really slow 'easy' problems and really fast 'hard' ones.
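A quick numeric illustration of that split (the exponent and base here are arbitrary picks, not tied to any real algorithm): a 'slow' polynomial n^10 versus a 'fast' exponential 1.1^n. The exponential always wins eventually, but notice how late the crossover comes:

    # Compare a high-degree polynomial against a small-base exponential.
    for n in (10, 100, 500, 1000, 2000):
        poly = n ** 10
        expo = 1.1 ** n
        print(f"n={n:>5}:  n^10 = {poly:.3e}   1.1^n = {expo:.3e}")

Up to several hundred, the 'slow' polynomial is actually the larger of the two costs; only around n ≈ 700 does the exponential take over for good.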
To be clear I'm not talking about how useful asymptotic analysis and big O notation are in general, I'm talking specifically about the case if we eventually do prove P = NP.
Internet lore and popular media then assume that, for example, all encryption would immediately be trivially broken. Mathematically it may be true, because all NP problems would then be known to be polynomial, but there's still the issue of the practical steps involved in factorising a huge number, which may have enormous constants or very large exponents.
Or if it turns out we can reduce it to some other known polynomial problem, there's still the actual transformation, which itself may be polynomial but with large constants or exponents.
That's not quite correct. Factorization is definitely in NP, because verifying whether a number has a particular factor is easy.
Maybe you meant that factorization hasn't been proven to be NP-complete, which is true. In fact it would be very surprising if it was NP-complete, because that would imply other surprising things (e.g. NP = co-NP).
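The asymmetry being described is easy to see in miniature (toy numbers below, chosen just for illustration): checking a claimed factor is a single modulo operation, while finding one by trial division already drags, and gets exponentially worse as the number of digits grows:

    # Verifying a certificate for factoring vs. searching for one.
    def verify_factor(n, p):
        # Polynomial-time check of the certificate "p divides n".
        return 1 < p < n and n % p == 0

    def find_factor(n):
        # Trial division up to sqrt(n): exponential in the bit length of n.
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return None

    n = 1_000_003 * 1_000_033            # a toy semiprime (two primes just above a million)
    print(verify_factor(n, 1_000_003))   # instant
    print(find_factor(n))                # ~a million divisions; scales terribly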
If P=NP is proven, then that makes factorization an easy problem (with the algorithm included in the proof), which would make many encryption algorithms "trivially broken".
Right, but it's worth mentioning that factorization itself is not known to be NP-hard (and it's suspected not to be, by most, I think).
Also, as others have pointed out, the constants involved in any "generic" algorithm for solving any NP problem in polynomial time may be astronomical, so even if it turns out that P=NP, it may not ever be feasible to actually use such an algorithm for anything practical.
>Basically, macroscopic quantum behavior would imply that the universe is computing the solution to an NP-hard problem, in violation of currently accepted laws of physics
Yes. The post you are replying to said, essentially, "lack of macroscopic quantum behavior => P != NP, because macroscopic quantum behavior => P=NP", which is obviously flawed logic.
Do yourself a favor and take the time to read this paper not just to learn about P=?NP but also to see a model of how technical writing ought to be done. Scott Aaronson is truly the Richard Feynman of our day. Not only is he freaking brilliant, but his writing style is accessible, not too mathy, even humorous at times. This made me LOL:
"We see a similar “invisible fence” phenomenon in Leslie Valiant’s program of “accidental algorithms” [246]. The latter are polynomial-time algorithms, often for planar graph problems, that exist for certain parameter values but not for others, for reasons that are utterly opaque if one doesn’t understand the strange cancellations that the algorithms exploit. A prototypical result is the following:
[Abstruse theorem elided, but it boils down to: there's this weird problem with a parameter k that can be proven to have polynomial-time solutions for some values of k and proven to be NP-hard for other values of k.]
Needless to say (because otherwise you would’ve heard!), in not one of these examples have the “P region” and the “NP-complete region” of parameter space been discovered to overlap."
> to see a model of how technical writing ought to be done
Well, this is semi-popular technical writing directed at people who are not theory-of-computation researchers. If all scientific writing would be written like this, then all papers would be at least hundreds of pages long and skip important details.
It's absolutely wonderful -- and I think vital -- that a researcher takes the time to do popular writing, but most scientific writing can't be popular. This kind of writing is usually reserved for surveys, like this one. Popular renderings of groundbreaking research also exist in publications like Scientific American, and blog posts by researchers like Aaronson who like (and are good at) communicating their findings in this way, but the important details are often very much research-level and quite impenetrable to all but very specialized researchers.
>Well, this is semi-popular technical writing directed at people who are not theory-of-computation researchers.
Nothing wrong with directing technical writing at people other than "theory-of-computation" researchers. Most people are not theory-of-computation researchers: not just laymen, but also statistics researchers, topology researchers, geometers, logicians, etc.
>If all scientific writing would be written like this, then all papers would be at least hundreds of pages long and skip important details.
GP obviously means all scientific writing aimed at the same kind of audience.
Theory-of-computation researchers would obviously not need such a volume at all in the first place: they already know this stuff, and they can just directly read pure research on P vs NP, not some summary of what P vs NP means.
That said, all scientific writing (including that aimed at or produced by experts in a field) could learn from the quality of the wording and the attention to clarity here. That doesn't mean it has to also include the lengthy introduction to its subject.
In other words, it seems to me like we're splitting hairs, while the intended meaning was pretty clear from the GP's comment context.
Actually I'm pretty sure many theoretical computer scientists find this useful too; it covers at a high level recent approaches for proving P != NP, such as Ryan Williams' approach, the GCT approach, etc.
Scott himself said that he had to do a lot of legwork to understand GCT. I can bet that this is the case for most theoreticians.
Complexity theory is a vast field; it's difficult to become acquainted with the disparate portions of the field. Surveys such as this are valuable for the field.
It's a survey that's been made more popular than the usual. Not all surveys can be like this or they'd all be at least this long. Even relatively long surveys are usually restricted to about half the length of this one.
Abstract:
In 1950, John Nash sent a remarkable letter to the National Security Agency, in which—seeking to build theoretical foundations for cryptography—he all but formulated what today we call the P=?NP problem, considered one of the great open problems of science. Here I survey the status of this problem in 2016, for a broad audience of mathematicians, scientists, and engineers. I offer a personal perspective on what it’s about, why it’s important, why it’s reasonable to conjecture that P≠NP is both true and provable, why proving it is so hard, the landscape of related problems, and crucially, what progress has been made in the last half-century toward solving those problems. The discussion of progress includes diagonalization and circuit lower bounds; the relativization, algebrization, and natural proofs barriers; and the recent works of Ryan Williams and Ketan Mulmuley, which (in different ways) hint at a duality between impossibility proofs and algorithms.
Any survey that mentions Borges in its introductory section is A-OK in my book.
Edit: This reminds me of an assignment my Discrete Math professor, Dr. Fishel, gave me, which shows how to create (in polynomial time) a Hamiltonian circuit in any undirected graph in which every vertex has degree at least ceiling(n/2), where n is the number of vertices in the graph. A C++ implementation is provided. [2007]
https://alanhogan.com/asu/hamiltonian-circuit/
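For anyone curious about the idea without reading the C++, here is a minimal Python sketch (not the linked implementation) of the standard constructive argument behind Dirac's theorem: start from an arbitrary cyclic ordering and keep applying "rotations", each of which adds at least one real edge to the cycle, so at most n rotations are needed. adj is assumed to be a dict mapping each vertex to the set of its neighbours:

    def hamiltonian_cycle(adj):
        cycle = list(adj)          # arbitrary starting cyclic order
        n = len(cycle)

        def rotate_once():
            for i in range(n):
                u, v = cycle[i], cycle[(i + 1) % n]
                if v in adj[u]:
                    continue       # this consecutive pair is already an edge
                # The degree condition guarantees some j with cycle[j] ~ u and
                # cycle[j+1] ~ v; reversing the segment from v to cycle[j]
                # replaces the non-edge u-v with two real edges.
                for j in range(n):
                    if j == i:
                        continue
                    if cycle[j] in adj[u] and cycle[(j + 1) % n] in adj[v]:
                        seg, k = [], (i + 1) % n
                        while True:
                            seg.append(cycle[k])
                            if k == j:
                                break
                            k = (k + 1) % n
                        for offset, vertex in enumerate(reversed(seg)):
                            cycle[(i + 1 + offset) % n] = vertex
                        return True
            return False           # for valid inputs: every consecutive pair is an edge

        while rotate_once():
            pass
        return cycle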
His remarks on "geometric complexity theory" are very interesting (starting p80):
I like to describe GCT as “the string theory of computer science.” Like string theory, GCT has the aura of an intricate theoretical superstructure from the far future, impatiently being worked on today. Both have attracted interest partly because of “miraculous coincidences” (for string theory, these include anomaly cancellations and the prediction of gravitons; for GCT, exceptional properties of the permanent and determinant, and surprising algorithms to compute the multiplicities of irreps). Both have been described as deep, compelling, and even “the only game in town” (not surprisingly, a claim disputed by the fans of rival ideas!). And like with string theory, there are few parts of modern mathematics not known or believed to be relevant to GCT.
There is a large, emerging body of research in which one proves the impossibility of solving problem A by finding an algorithm for solving a different problem B.
For Mulmuley et al. (it's been a while since I was in the thick of this) it's finding an algorithm for computing a particular decomposition of a representation[1] having to do with determinants and permanents. This essentially shows the matrix permanent formula can't be written as a determinant of a small matrix (of smaller formulas of the variables), which is the "algebraic" analogue of P vs NP, and which Mulmuley et al. believe will pave a road toward P vs NP. Last I heard there was a serious blow to this program. (edit [3])
For Ryan Williams, it's a more direct approach to separating complexity classes such as ACC (a small circuit class) and NEXP (a large Turing machine class), where this[2] is the most amazing explanatory technical document I have seen for almost anything. He also has a program whereby if you can find faster algorithms for Boolean satisfiability (SAT)—that is, faster exponential-time algorithms—you get complexity class separations "for free."
I've read both of his other survey articles linked in the post. Judging by those this will certainly be interesting. Aaronson does a great job tying formal theory to applications and getting across a good mental model for the math he writes about. Looking forward to digging into this one!
I'll read this if I can afford the time! It would be nice if there were a summary more detailed than the abstract but still short relative to the length of the article (say, 10 pages long?).
If I had to summarize it with one sentence from the article, I'd go with:
"The one prediction I feel confident in making is that the idea of “ironic complexity theory”—i.e.,
of a profound duality between upper and lower bounds, where the way to prove that there’s no
fast algorithm for problem A is to discover fast algorithms for problems B, C, and D—is here to
stay."
Yup, or at least when you have access to a "closed timelike curve" which I think is close enough to what you're asking. See, e.g., this -- which is a transcript of a lecture given by the author of the survey article in the OP. http://www.scottaaronson.com/democritus/lec19.html
With the help of a CTC you can solve any problem in class NP in polynomial time; in fact you can do better and solve anything in PSPACE in polynomial time; and the capabilities of quantum and classical computers are the same.
(The last of those is a result due to Scott Aaronson -- author of the survey article in the OP -- and John Watrous.)
[1]: http://www.scottaaronson.com/papers/philos.pdf [pdf]
[2]: https://medium.com/the-physics-arxiv-blog/the-astounding-lin...
[3]: https://www.researchgate.net/post/P_NP_and_quantum_nature_of...