P≠NP proof update (rjlipton.wordpress.com)
35 points by amichail on Aug 15, 2010 | hide | past | favorite | 42 comments



Watching this process unfold has been extremely entertaining - solely from the perspective of watching how highly scientific / mathematical communities resolve these types of occurrences. There is a LOT of tip-toeing going on, and it seems there is a very established, methodical, and specific process that is followed by those in these communities. You have to wonder if some of these mannerisms could (should?) apply to open source development :)

I've attended college, but I'm not a graduate and never really experienced any of the higher-level mathematical courses, so this proof (and the discussions thereof) is completely Greek to me. But I can certainly tell from the discussions and claims that the researcher dropped an A-bomb, and the math and CS communities are abuzz.

For those that are in my situation - is there any layman's summarization that describes what this proof really means to the computer science community? Even the Wikipedia entry has me lost at the second paragraph.


To an extreme approximation, problems in P are easy and problems in NP are hard. This is important for e.g. cryptography: its security relies on hard problems staying hard. If it were found that P = NP, it would mean that we would have an easy solution to hard problems. On the other hand, if P != NP, then we could rest assured that the hard problems will stay that way.


I think that is a bit too extreme of an approximation. What if someone proved that all NP problems could be solved in n * 1000^1000 time? That is in polynomial time (linear, in fact), but completely intractable.

Edit: removed Big O notation, since my point here is actually that the constant might be so large that you'd never get to a point where the limiting case would matter. Sorry to set off a firestorm.
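To put a number on just how large that constant is, here is a quick back-of-the-envelope sketch in Python (the machine speed of 10^12 operations per second is an assumed, generous figure):

```python
# How long would n * 1000**1000 "steps" take, even at a (hypothetical,
# generous) trillion operations per second? Python's arbitrary-precision
# integers make the arithmetic exact.
constant = 1000 ** 1000           # an integer with 3001 digits
ops_per_second = 10 ** 12         # assumed machine speed
age_of_universe_s = 4 * 10 ** 17  # about 13.8 billion years, in seconds

seconds_needed = constant // ops_per_second  # wall-clock time for just n = 1
print(len(str(constant)))                    # 3001
print(seconds_needed > age_of_universe_s)    # True: hopeless even for n = 1
```

So even the n = 1 case would take unimaginably longer than the age of the universe, despite the running time being "linear."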


O(n * c) = O(n), the whole point of the O-notation.


No. It's only asymptotically equivalent. If the constant is enormous relative to some absolute measure (say, the number of transistors on earth) then O-notation doesn't meaningfully help you.


Wrong. O(n * c) = O(n).

The definition of O(n) is: f is in O(n) so long as |f(n)| <= M * |n| for some constant M and all sufficiently large n.

Likewise, the definition of O(n * c) would be: f is in O(n * c) so long as |f(n)| <= M' * |n * c| for some constant M' and all sufficiently large n.

As you can see, any function f which is in O(n * c) is also in O(n), and vice versa.

The entire point of complexity classes is to describe asymptotic behavior. If you want to describe something other than asymptotic behavior, find a different tool.
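For concreteness, the definition can be sanity-checked numerically (a finite check only illustrates the one-line proof; the constant c = 10^6 here is an arbitrary choice):

```python
# Checking the definition: f(n) = c*n is in O(n), with witness constant
# M = c. A finite check is only an illustration of the trivial proof
# that |c*n| <= M*|n| holds for all n when M = c.
c = 10 ** 6          # an arbitrary large constant

def f(n):
    return c * n

M = c                # the witness constant from the definition
assert all(abs(f(n)) <= M * abs(n) for n in range(1, 10_000))
print("f is in O(n) with witness M =", M)
```

The same witness works in the other direction (f(n) = n is in O(c*n) with M' = 1), which is why the two classes coincide.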


Despite being correct in the narrow sense, you nevertheless end up being wrong in a broader, more useful sense.

I do not disagree with the definition. But you shouldn't rigidly focus on the definition without considering the utility of the notation, and the purpose of the discussion.

Big O notation describes the asymptotic behavior of a function when the argument N being passed to it approaches infinity. This is why it is also known as asymptotic notation.

Now, since this thread is about practical consequence, there is a severe problem with glibly saying that n * 999^999^999^999^999 will ever remotely resemble n from a practical standpoint. You'll never be able to compute f(1), much less a value of n large enough to be limiting.

At any rate, it looks like we both agree that O notation doesn't help you here. My fault for bringing it up.


you can't just go and change the meaning of big-O when you want it to mean something else, it has a meaning already


I shouldn't have introduced it to this discussion; it's inappropriate here anyway, since we're dealing with constants that would never be negligible for any value of N that could be tractable before the heat death of the universe. Perhaps that's the response I should have given, and left it at that. But Big O really does look at the limiting case, so I'm not exactly changing its meaning.


Do you dispute the core of my argument, or are you principally interested in this particular semantic diversion? If the latter, do you feel that it actually helps answer the original poster's question? If the former, then what more do you feel that you have to gain by continuing along this line of argumentation, since I've already stated my agreement above?

it sounds like you are irritated that you are being contradicted. i think the solution in this case is that you stop saying things that are flagrantly wrong. i am interested in "this particular semantic diversion". you have been taking a rather confident, definite tone in your various assertions and someone could easily get confused into believing there is legitimate content in your mathematical errors

by the "core of [your] argument" i assume you mean the idea that problems may be solvable in polynomial time but not necessarily quickly solvable in practice, therefore it is a simplification to call the elements of P "easy." that is of course indisputable, but i think the heuristic argument that P problems are easy is more forgivable than the mathematical error of saying O(cn) != O(n) because, like, y'know, asymptotics and stuff


I see nothing but agreement on the only substantive point in this thread. You don't really address my questions, so I think you and I are now talking past each other; it's time for us to discontinue this discussion.


if you claim that O(cn) and O(n) are different classes then changing the meaning is exactly what you are doing. i think what you are missing is that in mathematics, the meaning of a concept is principally derived from its formal mathematical definition (in this case the one jacobolus supplied you with above, from which it is a trivial exercise to prove f(n)=O(n) iff f(n)=O(cn)), not other things


Do you dispute the core of my argument, or are you principally interested in this particular semantic diversion? If the latter, do you feel that it actually helps answer the original poster's question? If the former, then what more do you feel that you have to gain by continuing along this line of argumentation, since I've already stated my agreement above?


Look, O-notation has a precise mathematical definition. So the question whether O(cn) = O(n) has a precise answer, independent of whether the answer happens to be helpful or practical. The answer is "yes". A proof can be found on wikipedia. http://en.wikipedia.org/wiki/Big_O_notation


> What if someone proved that all NP problems could be solved in O(n) time?

You’re quite confused. NP is (at least) a superset of P, and not all problems in P can be solved in linear time (for example, sorting an arbitrarily long list of random numbers). I think you meant something like O(n^(1000^1000))?


Not confused, though you are correct that we already know that at least some NP cannot be solved in linear time given that NP is a superset of P. I should have phrased things more narrowly.

My point is simply to show that even linear-time problems may be computationally intractable.


You could still have used big O notation:

"what if some NP-hard problems could be solved in O(n^(1000^1000))?"

That is polynomial time so those problems would be in P, but in practice they would be intractable.


I actually removed that point from my comment - that problems in P may be intractable, so it may be academic anyhow. But a way of turning NP-complete problems into problems in P (i.e. P = NP) would hopefully mean that intractable NP problems would be tractable P problems, even if only for relatively small values of n.


Hopefully true, I agree.


Is there actually much cryptography based on NP-hard problems? Neither integer factorization nor elliptic curves are known to be NP-hard, for example. The first part of your statement is still true (if P=NP, then RSA would be breakable in polynomial time), but I don't think the second part is (proving P!=NP doesn't prove that integer factorization is outside P).


Cryptography is usually in the intersection between NP and co-NP: NP means a yes-answer is easy to check, co-NP means a no-answer is easy to check. If you are trying to break some encryption you can usually tell quite easily whether you get the right key or not.

It would be a major breakthrough (comparable to P=NP) if anyone could show that the intersection of NP and co-NP is equal to NP.

Some people tried to make an encryption system out of the knapsack problem (http://en.wikipedia.org/wiki/Merkle%E2%80%93Hellman_knapsack...). It makes a good read about how a promising cryptosystem was broken.

An interesting tidbit: even if P != NP, it could still be that most instances of NP-hard problems are easy to solve. (I.e. it's possible that even for NP-hard problems the expected running time is polynomial for all realistic probability distributions of instances. For certain definitions of 'realistic'.)
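As a concrete sketch of that tidbit: subset sum is NP-hard in general, yet the textbook dynamic program runs in O(n * target) time (pseudo-polynomial), so instances whose numbers are small are easy in practice:

```python
# Subset sum is NP-hard in general, but this standard dynamic program
# runs in O(n * target) time and space, so instances with small numbers
# are easy; hardness only bites when the numbers are huge.
def subset_sum(numbers, target):
    reachable = {0}  # sums reachable so far, capped at target
    for x in numbers:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

This is the same flavor of gap the broken knapsack cryptosystem fell into: the worst case is hard, but the instances you actually meet may not be.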


Barrkel: thank you!

Edit: so with this in mind, is it safe to say that the NSA's mathematicians are probably among the few individuals in the world most interested in this recent development? Or are there other types of problems out there that this proof attempts to address? (Commercial etc)


Cryptography is a perverse example where you want to create very hard problems (if not impossible) for others to solve while having some means of keeping the problem easy for yourself and individuals you trust.

If we had some easy way to solve problems in NP though, I bet there would be tons of productive applications. I'll quote a friend of mine:

"That would transform life on earth, and multiply our technological development rate by a factor unimaginable.

There's no way of even estimating what could happen if that problem were solved, unless the polynomial conversion made the problem impossible.

....It would also solve the protein-folding problem and all of the quantum state problems. Money is nifty; immortality and a possible stardrive don't suck. The NP problem is the current barrier to human science. 'There's no way to simulate that' is a constant refrain around here."

An interesting way of looking at it is that a valid proof that P != NP would be a kick in humanity's balls comparable to Gödel's incompleteness theorem.


Except that most people already expect P != NP.

But as you say, P = NP would really make things too easy.


> Commercial etc

Well, if the opposite had been proven, P = NP, there would be titanic commercial opportunities, but it seems to be the opposite (as most people suspected).


no, a proof that P!=NP would most likely have no immediate consequences for engineering ("practical") applications to anyone, the people most interested in resolution of P vs NP are complexity theorists and other theoretical computer scientists


P is the set of all problems that can be solved in time polynomial in the size of the input. NP is the set of all problems that can be solved in polynomial time with the use of a non-deterministic oracle (equivalently, whose solutions can be verified in polynomial time).

Hold that thought.

If you have a list of n elements, you can search through the list and check if an element is in that list in n^1 time. If you have a list of n elements, you can check some property on a triplet of numbers in (at worst) n^3 time, looping once for each member of the triplet.

I can check that there's a triplet that multiplies to 1337 mod 31337, in a set of n numbers, with three nested loops, and doing the modular multiplication in the inner loop, in n^3 time.

Now! What if there were a magic box you could use? Whenever there are three such numbers in the set, it picks them out; whenever there are none, it just finds any three. You can use that magic box, get the three numbers it picks, check it, and voila, you find your answer immediately, in 1 step instead of n^3.

So if a problem is in NP, you need a magic box if the problem is going to be solvable in n^1, n^2, n^3 steps based on n inputs. (They recently showed that Primality-Testing was in P; an n digit number can be factored in n^12 steps. Large, but still polynomial, and better than the exponential-time 2^n.) If a problem is in P, you don't need the magic box to find it in polynomial time.

The magic box is the non-deterministic oracle. The polynomial that expresses how hard the problem is, that's the polynomial of polynomial-time.

You know "https"? The security of the "s" is based on how hard it is to factor the product of two large primes. We know that problem is in NP (just give me the two factors, and I can check), but we don't think P=NP, i.e. we don't think there's an easy way to break all online commerce. (See the RSA algorithm.)
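The magic-box distinction above can be sketched in Python: the brute-force search for a qualifying triple is O(n^3), while checking a proposed triple (the magic box's output) is constant time. The input range 1..199 below is an arbitrary choice:

```python
from itertools import combinations

# Checking a proposed triple (the "magic box" output) is constant time.
def verify(triple):
    a, b, c = triple
    return (a * b * c) % 31337 == 1337

# Brute-force search: O(n^3) candidate triples in the worst case.
def find_triplet(xs):
    for triple in combinations(xs, 3):
        if verify(triple):
            return triple
    return None

t = find_triplet(range(1, 200))   # the range is an arbitrary choice
print(t)                          # (1, 7, 191): 1 * 7 * 191 = 1337
```

Verifying the certificate is one multiplication and one modular reduction, no matter how large the set is; only the search scales cubically.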


By the way, prime factoring is not proven to be NP-complete. So it could turn out that prime factoring is in P, without implying P=NP.

Other cryptography schemes would still remain viable, even if RSA broke down. (As long as nobody proves P=NP.)


By the way, it seems highly unlikely that prime factoring is NP-complete, since it is in NP ∩ co-NP (if it were NP-complete, NP and co-NP would be equal).


Factoring is not known to be in P. You're thinking of primality testing, specifically http://en.wikipedia.org/wiki/AKS_primality_test


Thanks! Fixed.


More fix needed: "an n digit number can be factored in n^12 steps."


> Among other things, we've based electronic commerce on the difficulty of factoring large numbers, which is in NP. If P=NP, then lots of people are going to be very surprised.

First, thank you for the explanation. However, regarding your quote above, how would electronic commerce change if it were proved that P=NP? When you say electronic commerce, I think of the process of buying and selling goods. Are you speaking of the analysis of that process?


If P=NP, then conventional cryptography would be impossible. In particular, you would have no public-key cryptography, so it would not be possible to transmit information securely over an insecure channel without sharing some sort of key beforehand. Think of what that would do to online transactions, which rely on information like credit card numbers being transmitted securely. I'm sure other areas of the economy would be vastly affected too.

The one form of cryptography that would still be possible is quantum cryptography. But if I recall correctly, that requires that two parties already hold some shared secret before communicating.

Edit: In addition, although it would be possible to detect whether someone is eavesdropping on your communication, you still wouldn't be able to communicate securely regardless of whether your message is being recorded. This of course presumes not only that P=NP, but that an efficient polynomial-time algorithm exists for NP problems, i.e. one which is, say, linear or quadratic or even cubic time.


If P = NP, then factoring numbers could be "easy" in a formal sense while still not being easy in practice. N^749 is still polynomial time.


Your second part contradicts your first.

P=NP could hold in a way that is completely intractable in the real world. And if P=NP, one would guess it would be in such an intractable fashion, since otherwise the algorithm would have found its way into nature or something.

Most researchers think P!=NP to begin with, so if that were proven, very little would change. It's likely any proof regarding P and NP will have little direct impact, though it might have a big impact through the mathematical formalism it creates.


I don't agree with this explanation.


Given the ad hoc way in which this peer review is being done on the web, perhaps there's a startup opportunity here...


I'm not sure why this is getting downvoted. There is an opportunity here to build a web application to help the peer review process. Whether the academic community wants such a thing is beyond me, but the observation is perfectly valid.


The whole math-community-on-the-web thing mostly started when WordPress added support for LaTeX (which let them typeset formulas).

There have been some attempts to use Wikis for the same process, but from casual observation they seem to keep returning to using blogs. There is MathOverflow though: http://mathoverflow.net/


I think the current pace of informal peer review and discussion is unusual and unsustainable for most research. For most work it would also be unnecessary.


i think the deluge of comments we're seeing from the math community is because the paper claims to solve one of the toughest problems out there. this wouldn't scale to findings of a lesser importance



