I think that is a bit too extreme an approximation. What if someone proved that all NP problems could be solved in n * 1000^1000 time? That is polynomial time (linear, in fact), yet completely intractable.
Edit: removed Big O notation, since my point here is actually that the constant might be so large that you'd never get to a point where the limiting case would matter. Sorry to set off a firestorm.
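As a rough sanity check on the scale involved, here's a quick sketch in Python (the 10^80 figure is a commonly cited rough estimate of the number of atoms in the observable universe):

```python
# Scale check for a hypothetical running time of n * 1000**1000 steps.
# The 10**80 figure is a commonly cited rough estimate of the number
# of atoms in the observable universe.
C = 1000 ** 1000
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80

# Even at n = 1, the algorithm needs C steps.
print(len(str(C)))                       # 3001 decimal digits
print(C > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True
```

So at n = 1 the step count already dwarfs any physical resource, which is the whole point: "linear" says nothing about the constant.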
No. It's only asymptotically equivalent. If the constant is enormous relative to some absolute measure (say, the number of transistors on earth) then O-notation doesn't meaningfully help you.
The definition of O(n) is: f is in O(n) if there exist constants M and n0 such that |f(n)| <= M * |n| for all n >= n0.
Likewise, the definition of O(n * c) would be: f is in O(n * c) if there exist constants M' and n0' such that |f(n)| <= M' * |n * c| for all n >= n0'.
As you can see, any function f which is in O(n * c) is also in O(n) (take M = M' * c), and vice versa (take M' = M / c).
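Spelled out, the equivalence is just a matter of folding the constant c into M, a standard textbook step:

```latex
|f(n)| \le M' \, |c n| = (M' c) \, |n|
  \;\Longrightarrow\; f \in O(n) \text{ with } M = M' c,
\qquad
|f(n)| \le M \, |n| = \tfrac{M}{c} \, |c n|
  \;\Longrightarrow\; f \in O(c n) \text{ with } M' = \tfrac{M}{c}.
```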
The entire point of complexity classes is to describe asymptotic behavior. If you want to describe something other than asymptotic behavior, find a different tool.
Despite being correct in the narrow sense, you nevertheless end up being wrong in a broader, more useful sense.
I do not disagree with the definition. But you shouldn't rigidly focus on the definition without considering the utility of the notation, and the purpose of the discussion.
Big O notation describes the asymptotic behavior of a function as its argument n approaches infinity. This is why it is also known as asymptotic notation.
Now, since this thread is about practical consequences, there is a severe problem with glibly claiming that n * 999^999^999^999^999 will ever remotely resemble n in practice. You'd never be able to compute f(1), much less a value of n large enough for the limiting behavior to matter.
At any rate, it looks like we both agree that O notation doesn't help you here. My fault for bringing it up.
I shouldn't have introduced it to this discussion, because it's inappropriate here anyway: the constants involved would never be negligible for any value of N that might otherwise be tractable, before the heat death of the universe, if the constant were smaller. Perhaps that's the response I should have given, and left it at that. But Big O really does look at the limiting case, so I'm not exactly changing its meaning.
Do you dispute the core of my argument, or are you principally interested in this particular semantic diversion? If the latter, do you feel that it actually helps answer the original poster's question? If the former, then what more do you feel that you have to gain by continuing along this line of argumentation, since I've already stated my agreement above?
it sounds like you are irritated that you are being contradicted. i think the solution in this case is that you stop saying things that are flagrantly wrong. i am interested in "this particular semantic diversion". you have been taking a rather confident, definite tone in your various assertions and someone could easily get confused into believing there is legitimate content in your mathematical errors
by the "core of [your] argument" i assume you mean the idea that problems may be solvable in polynomial time but not necessarily quickly solvable in practice, therefore it is a simplification to call the elements of P "easy." that is of course indisputable, but i think the heuristic argument that P problems are easy is more forgivable than the mathematical error of saying O(cn) != O(n) because, like, y'know, asymptotics and stuff
I see nothing but agreement on the only substantive point in this thread. You don't really address my questions, so I think you and I are now talking past each other; it's time for us to discontinue this discussion.
if you claim that O(cn) and O(n) are different classes then changing the meaning is exactly what you are doing. i think what you are missing is that in mathematics, the meaning of a concept is principally derived from its formal mathematical definition (in this case the one jacobolus supplied you with above, from which it is a trivial exercise to prove f(n)=O(n) iff f(n)=O(cn)), not other things
Look, O-notation has a precise mathematical definition. So the question whether O(cn) = O(n) has a precise answer, independent of whether the answer happens to be helpful or practical. The answer is "yes". A proof can be found on wikipedia. http://en.wikipedia.org/wiki/Big_O_notation
> What if someone proved that all NP problems could be solved in O(n) time?
You’re quite confused. NP is (at least) a superset of P, and not all problems in P can be solved in linear time (for example, sorting an arbitrarily long list of random numbers). I think you meant something like O(n^(1000^1000))?
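For reference, the sorting example presumably refers to the comparison model, where the classic information-theoretic lower bound is:

```latex
\#\text{comparisons} \;\ge\; \log_2(n!) \;\ge\; \log_2\!\left(\left(\tfrac{n}{2}\right)^{n/2}\right)
  \;=\; \tfrac{n}{2}\,\log_2\tfrac{n}{2} \;=\; \Omega(n \log n).
```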
Not confused, though you are correct: since NP is a superset of P, and P already contains problems that cannot be solved in linear time, we already know that at least some problems in NP cannot be solved in linear time either. I should have phrased things more narrowly.
My point is simply to show that even linear-time problems may be computationally intractable.
I actually removed that point from my comment (that problems in P may be intractable), so it may be academic anyhow. But a way of turning NP-complete problems into problems in P (i.e., P = NP) would hopefully mean that intractable NP problems become tractable P problems, even if only for relatively small values of n.