Complexity theory abounds with such algorithms for concrete problems. In fact, it's even worse than that: there is a whole subfield of complexity theory devoted to "fixed-parameter tractability", and it typically involves algorithms with reasonable exponents but astronomical constants. Even if you have an O(n) algorithm, if the constant hidden by the notation happens to be on the order of 10^10000, it is not something you could ever hope to run.
One example is the collection of consequences of the Robertson-Seymour theorem, which (non-constructively) asserts that a large number of graph problems are in P. For instance, we know that testing whether a graph can be embedded in the torus is in P, but the algorithm in question is completely infeasible.
I would expect a positive answer to P = NP to be completely useless in practice. A negative answer would similarly be rather unexciting by itself. What would be exciting are the tools for getting this answer...
The "non-constructively" in this case has fascinating consequences.
One of them is that, according to classical mathematics, there are concrete problems with polynomial-time algorithms that we have no way of producing and, even if produced, no way of verifying. (Literally no way of verifying: pick an axiom system, and there are problems for which the question of whether there is one more forbidden minor is undecidable in that axiom system.)
If you doubt that such algorithms have any real existence, you might be a closet constructivist...
> I would expect a positive answer to P = NP to be completely useless in practice. A negative answer would similarly be rather unexciting by itself. What would be exciting are the tools for getting this answer...
Fixed-parameter tractability is not necessarily about polynomial-time algorithms with huge constants; it's about getting efficient algorithms for problems by isolating one of their parameters. Optimising over all the parameters might require exponential time, but if you fix one of them, some problems become tractable, with running times of the form f(k)·poly(n), where the factor f(k) depends only on the fixed parameter k.
This can lead to large constants, but doesn't have to. It depends on the value of the fixed parameter.
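To make this concrete, here is a minimal sketch of the textbook FPT example: deciding whether a graph has a vertex cover of size at most k via a bounded search tree. The running time is O(2^k · m), so the exponential blow-up is confined to the parameter k rather than the input size. The edge-set encoding here is just an illustrative choice.

```python
# Bounded-search-tree algorithm for Vertex Cover, the standard example of
# fixed-parameter tractability: exponential in the parameter k, cheap in
# the size of the edge set.

def has_vertex_cover(edges, k):
    """edges: a set of frozenset({u, v}) pairs; k: budget for the cover."""
    if not edges:
        return True               # every edge is covered
    if k == 0:
        return False              # edges remain but the budget is spent
    u, v = next(iter(edges))      # any uncovered edge: a cover must contain u or v
    for endpoint in (u, v):
        rest = {e for e in edges if endpoint not in e}
        if has_vertex_cover(rest, k - 1):
            return True
    return False

# A 4-cycle has a vertex cover of size 2 (two opposite corners) but not of size 1.
square = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
assert has_vertex_cover(square, 2)
assert not has_vertex_cover(square, 1)
```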
In this case the polynomial-time algorithm is to test the graph against a finite list of forbidden minors. Each individual minor test takes polynomial time (Robertson and Seymour's minor test runs in cubic time), so the exponent is reasonable, but the constant depends on how long that finite list is.
That finite list can be extremely long indeed. And in some cases, there is no way of finding or verifying the full list.
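For a sense of the shape such an algorithm takes, here is a hedged sketch. Both names are hypothetical placeholders: FORBIDDEN_MINORS stands for the finite obstruction list (which Robertson-Seymour guarantees exists but does not tell us how to find), and has_minor stands in for their cubic-time minor test, which is far too involved to reproduce here.

```python
# Sketch only: the obstruction list and the minor test are placeholders.

FORBIDDEN_MINORS = []  # hypothetical: finite by Robertson-Seymour, but in some
                       # cases no axiom system lets you verify the full list

def has_minor(G, H):
    """Hypothetical stand-in for Robertson-Seymour's O(n^3) test of
    whether H is a minor of G."""
    raise NotImplementedError

def embeds_in_torus(G):
    # Polynomial time overall: one cubic-time test per forbidden minor.
    # The "constant" is the length of FORBIDDEN_MINORS, which can be enormous.
    return not any(has_minor(G, H) for H in FORBIDDEN_MINORS)
```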