
To an extreme approximation, problems in P are easy and problems in NP are hard. This is important for e.g. cryptography: its security relies on hard problems staying hard. If it were found that P = NP, it would mean that we would have an easy solution to hard problems. On the other hand, if P != NP, then we could rest assured that the hard problems will stay that way.



I think that is a bit too extreme of an approximation. What if someone proved that all NP problems could be solved in n * 1000^1000 time? That is in polynomial time (linear, in fact), but completely intractable.

Edit: removed Big O notation, since my point here is actually that the constant might be so large that you'd never get to a point where the limiting case would matter. Sorry to set off a firestorm.
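To put a rough number on it (my own back-of-the-envelope sketch; the rate of 10^9 operations per second is an assumption, not anything from the thread):

    # n * 1000**1000 steps, even at n = 1, is 10**3000 elementary operations.
    steps = 1 * 1000 ** 1000            # 10**3000 operations for n = 1
    seconds = steps // 10 ** 9          # assuming a billion operations per second
    print(len(str(seconds)) - 1)        # 2991: roughly 10**2991 seconds of runtime
    age_of_universe_s = 4 * 10 ** 17    # age of the universe in seconds, roughly
    print(seconds > age_of_universe_s)  # True, by thousands of orders of magnitude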


O(n * c) = O(n); that's the whole point of O-notation.


No. It's only asymptotically equivalent. If the constant is enormous relative to some absolute measure (say, the number of transistors on earth) then O-notation doesn't meaningfully help you.


Wrong. O(n * c) = O(n).

The definition of O(n) is: f is in O(n) so long as |f(n)| <= M * |n| for some constant M and all sufficiently large n.

Likewise, the definition of O(n * c) would be: f is in O(n * c) so long as |f(n)| <= M' * |n * c| for some constant M' and all sufficiently large n.

As you can see, any function f which is in O(n * c) is also in O(n), and vice versa.
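Spelled out, here is a minimal sketch of that equivalence (assuming c != 0):

    \[
    f \in O(cn)
    \iff \exists M' :\ |f(n)| \le M'\,|cn| = (M'|c|)\,|n| \ \text{for all sufficiently large } n
    \iff f \in O(n),
    \]

taking M = M'|c| in one direction and M' = M/|c| in the other.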

The entire point of complexity classes is to describe asymptotic behavior. If you want to describe something other than asymptotic behavior, find a different tool.


Despite being correct in the narrow sense, you nevertheless end up being wrong in a broader, more useful sense.

I do not disagree with the definition. But you shouldn't rigidly focus on the definition without considering the utility of the notation, and the purpose of the discussion.

Big O notation describes the asymptotic behavior of a function as its argument n approaches infinity. This is why it is also known as asymptotic notation.

Now, since this thread is about practical consequences, there is a severe problem with glibly claiming that n * 999^999^999^999^999 will ever remotely resemble n from a practical standpoint. You'll never be able to compute f(1), much less reach a value of n large enough for the limiting behavior to matter.

At any rate, it looks like we both agree that O notation doesn't help you here. My fault for bringing it up.


you can't just go and change the meaning of big-O when you want it to mean something else; it has a meaning already


I shouldn't have introduced it to this discussion; it's inappropriate here anyway, since we're dealing with constants that will never be negligible at any value of n that would otherwise (were the constant smaller) be tractable before the heat death of the universe. Perhaps that's the response I should have given, and left it at that. But Big O really does look at the limiting case, so I'm not exactly changing its meaning.


Do you dispute the core of my argument, or are you principally interested in this particular semantic diversion? If the latter, do you feel that it actually helps answer the original poster's question? If the former, then what more do you feel that you have to gain by continuing along this line of argumentation, since I've already stated my agreement above?

it sounds like you are irritated that you are being contradicted. i think the solution in this case is that you stop saying things that are flagrantly wrong. i am interested in "this particular semantic diversion". you have been taking a rather confident, definite tone in your various assertions and someone could easily get confused into believing there is legitimate content in your mathematical errors

by the "core of [your] argument" i assume you mean the idea that problems may be solvable in polynomial time but not necessarily quickly solvable in practice, therefore it is a simplification to call the elements of P "easy." that is of course indisputable, but i think the heuristic argument that P problems are easy is more forgivable than the mathematical error of saying O(cn) != O(n) because, like, y'know, asymptotics and stuff


I see nothing but agreement on the only substantive point in this thread. You don't really address my questions, so I think you and I are now talking past each other; it's time for us to discontinue this discussion.


if you claim that O(cn) and O(n) are different classes then changing the meaning is exactly what you are doing. i think what you are missing is that in mathematics, the meaning of a concept is principally derived from its formal mathematical definition (in this case the one jacobolus supplied you with above, from which it is a trivial exercise to prove f(n)=O(n) iff f(n)=O(cn)), not other things


Do you dispute the core of my argument, or are you principally interested in this particular semantic diversion? If the latter, do you feel that it actually helps answer the original poster's question? If the former, then what more do you feel that you have to gain by continuing along this line of argumentation, since I've already stated my agreement above?


Look, O-notation has a precise mathematical definition. So the question whether O(cn) = O(n) has a precise answer, independent of whether the answer happens to be helpful or practical. The answer is "yes". A proof can be found on wikipedia. http://en.wikipedia.org/wiki/Big_O_notation


> What if someone proved that all NP problems could be solved in O(n) time?

You’re quite confused. NP is (at least) a superset of P, and not all problems in P can be solved in linear time (for example, sorting an arbitrarily long list of random numbers). I think you meant something like O(n^(1000^1000))?


Not confused, though you are correct that we already know at least some NP problems cannot be solved in linear time, given that NP is a superset of P. I should have phrased things more narrowly.

My point is simply to show that even linear-time problems may be computationally intractable.


You could still have used big O notation:

"what if some NP-hard problems could be solved in O(n^(1000^1000))?"

That is polynomial time, so those problems would be in P, but in practice they would be intractable.
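Just to make the size concrete (my own illustration, not a claim about any particular problem):

    # How big is the exponent in O(n**(1000**1000))?
    exponent = 1000 ** 1000
    print(len(str(exponent)))  # 3001: the exponent alone has 3001 decimal digits
    # Even for n = 2, the step count 2**exponent has about exponent * log10(2)
    # decimal digits, a digit count that itself runs to ~3000 digits, far beyond
    # the ~10**80 atoms in the observable universe.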


I actually removed that point from my comment (that problems in P may be intractable), so it may be academic anyhow. But a way of turning NP-complete problems into problems in P (i.e. P = NP) would hopefully mean that intractable NP problems become tractable P problems, even if only for relatively small values of n.


Hopefully true, I agree.


Is there actually much cryptography based on NP-hard problems? Neither integer factorization nor elliptic curves are known to be NP-hard, for example. The first part of your statement is still true (if P=NP, then RSA would be breakable in polynomial time), but I don't think the second part is (proving P!=NP doesn't prove that integer factorization is outside P).


Cryptography usually lives in the intersection of NP and co-NP: NP means a yes-answer is easy to check, co-NP means a no-answer is easy to check. If you are trying to break some encryption, you can usually tell quite easily whether you have the right key or not.
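A toy illustration of "easy to check" (my own sketch, using factoring-based systems like RSA as the assumed example; the helper name is made up):

    # Checking a claimed factorisation (the yes-certificate) is trivial;
    # producing the factors from n alone is the part believed to be hard.
    def verify_factors(n, p, q):
        return 1 < p < n and 1 < q < n and p * q == n

    print(verify_factors(3233, 61, 53))  # True, checked in microseconds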

It would be a major breakthrough (comparable to P=NP) if anyone could show that the intersection of NP and co-NP is equal to NP.

Some people tried to make an encryption system out of the knapsack problem (http://en.wikipedia.org/wiki/Merkle%E2%80%93Hellman_knapsack...). It makes a good read about how a promising cryptosystem was broken.

An interesting tidbit: even if P != NP, it could still be that most instances of NP-hard problems are easy to solve. (I.e. it's possible that even for NP-hard problems the expected running time is polynomial for all realistic probability distributions of instances, for certain definitions of "realistic".)


Barrkel: thank you!

Edit: so with this in mind, is it safe to say that the NSA's mathematicians are probably among the few individuals in the world most interested in this recent development? Or are there other types of problems out there that this proof attempts to address? (Commercial, etc.)


Cryptography is a perverse example: you want to create problems that are very hard (if not impossible) for others to solve, while keeping some means of making the problem easy for yourself and the individuals you trust.

If we had some easy way to solve problems in NP though, I bet there would be tons of productive applications. I'll quote a friend of mine:

"That would transform life on earth, and multiply our technological development rate by a factor unimaginable.

There's no way of even estimating what could happen if that problem were solved, unless the polynomial conversion made the problem impossible.

....It would also solve the protein-folding problem and all of the quantum state problems. Money is nifty; immortality and a possible stardrive don't suck. The NP problem is the current barrier to human science. 'There's no way to simulate that' is a constant refrain around here."

An interesting way of looking at it is that a valid proof that P != NP would be a kick in humanity's balls comparable to Gödel's incompleteness theorem.


Except that most people already expect P != NP.

But as you say, P = NP would really make things too easy.


> Commercial etc

Well, if the opposite had been proven, i.e. P = NP, there would be titanic commercial opportunities; but it seems to go the other way (as most people suspected).


no, a proof that P != NP would most likely have no immediate consequences for engineering ("practical") applications; the people most interested in a resolution of P vs NP are complexity theorists and other theoretical computer scientists





