
If you are interested in the implications of P vs. NP, as opposed to a survey of the situation around the problem, check out this excellent essay [1] by Scott Aaronson. He goes into interesting consequences for the nature of quantum computing, the simulation of a human mind, and time travel, among other topics.

There is also another interesting article [2] by Arkady Bolotin at Ben-Gurion University in Israel, who argues that the lack of macroscopic quantum behavior implies P != NP. Basically, macroscopic quantum behavior would imply that the universe is computing the solution to an NP-hard problem, in violation of currently accepted laws of physics. Also, see this debate between Bolotin and an educated commenter [3].

[1]: http://www.scottaaronson.com/papers/philos.pdf [pdf]

[2]: https://medium.com/the-physics-arxiv-blog/the-astounding-lin...

[3]: https://www.researchgate.net/post/P_NP_and_quantum_nature_of...




>The problem is that the equation says nothing about how large an object needs to be before it obeys Newtonian mechanics rather than the quantum variety.

I'm pretty sure this is BS.

The linked medium post does nothing to defend the implicit premise that the universe can be computed in a reasonable amount of time by classical computers.


Indeed it is; interference phenomena have so far scaled up to fairly large many-atom structures in rather predictable ways. The real problem for quantum mechanics starts when it needs to explain gravity.


I think you're talking past the parent, whose point is that our universe's physics may not be polynomial time computable (or computable at all, for that matter).


I made two different points. The first is that there's nothing that "obeys Newtonian mechanics rather than the quantum variety"; it's quantum all the way up. The second is what you said.

Guess it wasn't clear enough that the two points aren't related.


The Wikipedia page has a section that does a pretty good job I think:

https://en.wikipedia.org/wiki/P_versus_NP_problem#Consequenc...

"Similarly, Stephen Cook says:

...it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time. Example problems may well include all of the CMI prize problems."
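
To make the "easily recognized in polynomial time" part concrete, here's a toy sketch of my own (not from Cook's paper, and the encoding and axiom set are just placeholders): a proof checker only has to confirm that each line is an axiom or follows from earlier lines by a rule, so checking is a simple polynomial-time scan even though finding the proof is the hard part.

    # Toy proof checker: a proof is a list of formulas; each step must be an
    # axiom or follow from two earlier steps by modus ponens. Implications
    # are encoded as ("->", A, B) tuples; atomic formulas are strings.
    def is_valid_proof(proof, axioms):
        seen = set()
        for step in proof:
            ok = step in axioms or any(("->", prev, step) in seen for prev in seen)
            if not ok:
                return False
            seen.add(step)
        return True  # two nested scans: time polynomial in the proof length

For example, with axioms = {"p", ("->", "p", "q")}, the proof ["p", ("->", "p", "q"), "q"] checks out, while ["q"] alone does not.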


How does "polynomial" imply "practically tractable"?

If P = NP, the exponent may well be huge, giving running times like x^100 or x^(# of atoms in the Universe)


The linked survey comments on this; whenever we've been able to put a problem in P, we've almost always been able to bring the exponent down to practical sizes.


The classic low-hanging-fruit problem. Whenever we've pumped an oil well dry, we've almost always found another easy-to-pump well; therefore all oil wells are easy to exploit and we'll never run out of oil. There are other analogies with the ease of catching wild passenger pigeons and bison.


In point of fact, the survey gives a bit more detail than what the parent implied.


Yes, this always confuses me: why are the constants swept under the carpet for real-world, practical applications? From the same article:

"It is also possible that a proof would not lead directly to efficient methods, perhaps if the proof is non-constructive, or the size of the bounding polynomial is too big to be efficient in practice."

Given the long-standing nature of the problem, my guess is that some new techniques or insights will be required to solve it one way or the other. Those may well give clues to solving NP/NP-complete problems efficiently.

Until someone actually does it, though, it's still just speculation.


One answer is that in practice, we very rarely see large constants anywhere. While theoretically x^1000 algorithms exist, it's hard to actually find a reasonable example of one. Ditto for something like 1.00000001^n.

Of course, why this might be the case remains to be investigated.


Example: You want to bisect the nodes of an edge-weighted graph into two sets of equal size such that the sum of the edge weights between the two parts is maximized.

The best known approximation algorithm (the problem itself is NP-hard) runs in time O(n^(10^100)):

https://arxiv.org/pdf/1205.0458v2.pdf

For more examples see here http://cstheory.stackexchange.com/questions/6660/polynomial-...


It's interesting though that the examples you provided are all approximations for known NP-hard problems, and in many cases the free parameter that lets you get arbitrarily large constants depends on how close to optimal you want the approximation to be.

Do you have examples of large-constant algorithms in P that are not approximations of NP-Hard problems?


Not quite the same as the earlier conversation on exponents, but since you mentioned constants, matrix multiplication is a prime example -- apparently Coppersmith-Winograd has such enormous constants ("only provides an advantage for matrices so large that they cannot be processed by modern hardware") that I can't actually find them stated anywhere (the wiki article doesn't really have a proper source, and preliminary searching also yielded no concrete results -- possibly because the proof of existence was non-constructive?).


Complexity theory abounds with such algorithms for concrete problems. In fact it's even worse than that. There is a whole subfield of complexity theory that looks for "fixed-parameter tractability" and typically involves algorithms with reasonable exponents but astronomical constants. Even if you have an O(n) algorithm, if the constant hidden by the notation happens to be on the order of 10^10000, this is not something you could ever hope to run.

One example is the set of consequences of the Robertson-Seymour theorem, which (non-constructively) asserts that a large number of graph problems are in P. For instance, we know that testing whether a graph can be embedded in the torus is in P, but the algorithm in question is completely infeasible.

I would expect a positive answer to P = NP to be completely useless in practice. A negative answer would similarly be rather unexciting by itself. What would be exciting are the tools for getting this answer...


The "non-constructively" in this case has fascinating consequences.

One of them is that, according to classical mathematics, there are concrete problems with polynomial-time algorithms that we have no way of producing and, even if produced, no way of verifying. (Literally no way of verifying: pick an axiom system, and there are problems for which the question of whether there is one more forbidden minor is undecidable in that axiom system.)

If you doubt that such algorithms have any real existence, you might be a closet constructivist...


>I would expect a positive answer to P = NP to be completely useless in practice. A negative answer would similarly be rather unexciting by itself. What would be exciting are the tools for getting this answer...

That's basically Donald Knuth's position. http://www.informit.com/articles/article.aspx?p=2213858


Fixed-parameter tractability is not necessarily about polynomial-time algorithms with huge constants; it's about getting efficient algorithms for problems that have different parameters. Optimising over all the parameters might require exponential time, but if you fix one of the parameters, some problems become tractable (with the constant depending on the value of the fixed parameter).

This can lead to large constants, but doesn't have to. It depends on the value of the fixed parameter.
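
A standard concrete example (my own sketch; the problem and names are just for illustration) is vertex cover parameterized by the cover size k: the bounded search tree below runs in roughly O(2^k * m) time, exponential only in the parameter, not in the size of the graph.

    # Bounded-search-tree vertex cover, the textbook fixed-parameter example.
    # Question: is there a set of at most k vertices touching every edge?
    # Each recursion branches on the two endpoints of some uncovered edge,
    # so the search tree has at most 2^k leaves: O(2^k * m) overall.
    def has_vertex_cover(edges, k):
        if not edges:
            return True                  # every edge is covered
        if k == 0:
            return False                 # edges remain but no budget left
        u, v = edges[0]                  # any cover must contain u or v
        without_u = [(a, b) for (a, b) in edges if u not in (a, b)]
        without_v = [(a, b) for (a, b) in edges if v not in (a, b)]
        return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)

E.g. has_vertex_cover([(1, 2), (2, 3), (3, 4)], 2) is True (take vertices 2 and 3), while the same graph with k = 1 gives False.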


In this case the polynomial time algorithm is to test the graph against a finite list of forbidden minors. This has a reasonable exponent, but a constant dependent upon how long that finite list is.

That finite list can be extremely long indeed. And in some cases, there is no way of finding or verifying the full list.


Do we rarely see large constants? Or are they just not on problems where precision and "absolute best answer" matter?

As an example, an algorithm to find the optimal solution to a Rubik's cube would actually be very difficult. However, finding some solution in a short enough time frame is quite easy. This is all the more true the larger the cube you're trying to solve.

Contrast this with problems such as encryption, where we have deliberately constructed problems that don't have a "best answer"; rather, there is only a single answer that matters.


I think you're proving my point here -- finding the optimal solution for a Rubik's cube is probably at least PSPACE-Hard, which is probably exponential.

So complexity theory is confirming your intuition, which is that 'optimization type problems' are hard.


Even the basic knapsack (rucksack) optimization problem is NP-hard. I'd dare say that a huge class of optimization problems maps very naturally to this particular one; see https://news.ycombinator.com/item?id=13316776 for starters.
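
For anyone wondering how that squares with the textbook dynamic program for knapsack: that algorithm (my own summary below) runs in O(n * W), which is polynomial in the numeric capacity W but exponential in the number of bits needed to write W down, so it doesn't contradict NP-hardness.

    # Textbook 0/1 knapsack DP: O(n * W). "Pseudo-polynomial" -- the capacity
    # W can be exponential in the length of its binary encoding, which is why
    # this doesn't conflict with the problem being NP-hard.
    def knapsack(values, weights, W):
        best = [0] * (W + 1)
        for v, w in zip(values, weights):
            for cap in range(W, w - 1, -1):   # go backwards so each item is used once
                best[cap] = max(best[cap], best[cap - w] + v)
        return best[W]
    # knapsack([60, 100, 120], [10, 20, 30], 50) == 220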


Ok, so optimization problems are potentially bounded with smaller coefficients than we would think.

What about sequencing life? Supposedly we share a lot at the genomic level with animals we are vastly different from. Could a large degree of the polynomial that is life explain that divergence?


What?


Ha! I'm just trying to get a better feel for what is being said. Definitely the dumbest person in this particular conversation.

I am assuming you are pushing the same angle as the other poster. That is, most optimization problems don't actually have large constants in their exponents. So, since most of what developers think of as "hard" is almost all classical NP problems, maybe it makes sense to look at other things we don't typically think of as computational.

Could just be nonsensical, though. I fully accept that.


I misunderstood your point, then. I thought you were saying that we rarely see giant exponents in things. My question is whether we typically do see giant exponents, but make do with much simpler solutions. (Does that make sense?)


I am saying we rarely see giant exponents (on polynomial-time algorithms) in practice, because most problems that we intuitively think of as 'hard' turn out to be complete for NP or PSPACE or other complexity classes that we have decent reason to suspect require exponential running time.


Sadly, I'm not sure I understand the point. Aren't we saying these could be exponential, just with large coefficients?

Edit: I meant "small" coefficients above. Apologies.


P: O(n^k), where large k makes it 'slow' but still polynomial

NP: probably O(k^n), k > 1, where k close to 1 makes it 'fast', but still exponential

The objection is that k can be big for some polynomial algorithms but small (near 1) for other exponential ones, so we can have 'slow' polynomial algorithms and 'fast' exponential ones.

But in practice, k is usually small for polynomial algorithms and not close to 1 for exponential ones, so simply knowing whether an algorithm is exponential or polynomial tells you something.

The 'hard' problems you gave me all (probably) fall into the exponential-runtime class, whereas the easier ones (like finding any solution to a Rubik's cube) fall into the polynomial-runtime class. So your intuition of 'hard' vs. 'easy' matches complexity theory's evaluation, even though theoretically there could be really slow 'easy' problems and really fast 'hard' ones.
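
Putting rough numbers on the typical case (my own back-of-the-envelope, with n^3 and 2^n picked as representative):

    # A modest polynomial vs. a plain exponential, at sizes you'd actually see:
    for n in (10, 30, 100, 1000):
        print(n, n**3, 2**n)
    #   n = 30:    27,000        vs ~1.1e9
    #   n = 100:   1,000,000     vs ~1.3e30
    #   n = 1000:  1,000,000,000 vs ~1.1e301

The exponential is already hopeless around n = 100, which is why the polynomial/exponential distinction carries real information unless the exponent or the base is unusually pathological.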


Aaronson specifically addresses this in section 1.2.1, on page 6.


To be clear, I'm not talking about how useful asymptotic analysis and big-O notation are in general; I'm talking specifically about the case where we eventually do prove P = NP.

Internet lore and popular media then assume that, for example, all encryption would immediately be trivially broken. Mathematically that may be true, because all NP problems would then be known to be polynomial, but there's still the issue of the practical steps involved in factorising a huge number, which may have enormous constants or very large exponents. Or, if it turns out we can reduce it to some other known polynomial problem, there's still the actual transformation, which itself may be polynomial with large constants or exponents.


If I'm not mistaken, isn't it also the case that factorization hasn't been proven to be in NP?


That's not quite correct. Factorization is definitely in NP, because verifying whether a number has a particular factor is easy.

Maybe you meant that factorization hasn't been proven to be NP-complete, which is true. In fact it would be very surprising if it was NP-complete, because that would imply other surprising things (e.g. NP = co-NP).
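
To make the "verifying is easy" part concrete, a trivial sketch of my own (the function name is just illustrative):

    # Checking a claimed factorization is one big-integer multiplication and a
    # couple of comparisons: polynomial in the number of digits. Finding the
    # factors in the first place is the hard part.
    def is_nontrivial_factorization(n, p, q):
        return 1 < p < n and 1 < q < n and p * q == n
    # is_nontrivial_factorization(3233, 53, 61) -> True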


If P=NP is proven, then that makes factorization an easy problem (with a method included in the proof), which would make many encryption algorithms "trivially broken".


Right, but it's worth mentioning that factorization itself is not known to be NP-hard (and it's suspected not to be, by most, I think).

Also, as others have pointed out, the constants involved in any "generic" algorithm for solving any NP problem in polynomial time may be astronomical, so even if it turns out that P=NP, it may not ever be feasible to actually use such an algorithm for anything practical.


That's what Knuth suspects: that P=NP, but in a way that's essentially impractical and not tractable.


Because 'polynomial time' is a long-standing term in time complexity theory defined to imply tractability.


There is a fair chance that the exponent will be of tractable magnitude for certain practically interesting classes of problems.


I agree. I've produced x^8 algorithms, which are already completely useless for all practical applications.

If a proof has 10,000 statements, we already couldn't generate it with an x^8 algorithm.
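
Back-of-the-envelope for that (my own numbers, assuming a generous 10^9 elementary steps per second):

    10,000^8 = (10^4)^8 = 10^32 steps; at 10^9 steps/second that's ~10^23 seconds,
    on the order of 10^5 times the age of the universe (~4 * 10^17 seconds).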


I came here to recommend your [1] as well as http://www.cs.ucsd.edu/users/russell/average.ps


>Basically, macroscopic quantum behavior would imply that the universe is computing the solution to an NP-hard problem, in violation of currently accepted laws of physics

This seems to presuppose what it tries to prove.


Wait, doesn't that claim by Arkady Bolotin state the converse: that P != NP implies the lack of macroscopic quantum behavior?


Yes. The post you are replying to said, essentially, "lack of macroscopic quantum behavior => P != NP, because macroscopic quantum behavior => P=NP", which is obviously flawed logic.



