On P, NP, and Computational Complexity (acm.org)
44 points by yarapavan on Oct 29, 2010 | 3 comments



One important implication, of course, is that if P=NP and we can prove it constructively, then public-key encryption as we know it today would no longer be viable. Also, if I recall correctly, the protein problems that Folding@home works on are some variant of NP that could be expedited.


That's almost exactly the opposite of what this article says. To summarize it briefly, Vardi reminds us that the importance of proving P!=NP is to deepen our understanding of the underlying issues. He also devotes a significant fraction of the article to reiterating that worst-case hardness is not the same thing as practical intractability.

I'm not really familiar with protein structure prediction, but I don't think it involves solving NP-complete problems. Scouring Wikipedia articles about protein structure prediction suggests that the techniques used are broad, varied, and distinct from the classical set of NP-complete problems. If someone working in this area is attempting to solve an NP-hard problem, I'd love to hear about it.


Yes, the question of P versus NP is a tough, highly curious one. So far the interest is, much as with the continuum hypothesis and model theory, more philosophical than practical.

I like the question of P versus NP and believe that attacks on it will yield important and useful progress all along the way, long before the question is settled.

But many of the proponents of the importance of the P versus NP question, such as Vardi, are less than accurate or honest. E.g., Vardi wrote:

"Garey and Johnson's classical textbook showed a long sad line of programmers who have failed to solve NP-complete problems."

Nonsense. Totally cooked-up, clearly wrong, outright deceptive nonsense.

Vardi is talking about the cartoon early in the book: a long line of computer scientists, with the one at the front explaining to a business executive behind a desk that he, the computer scientist, could not solve the businessman's problem, but neither could any of the other computer scientists in the long line.

That cartoon was always a big deception: Almost certainly the businessman's problem could be solved, and maybe some of the computer scientists in the line could solve it.

The businessman's problem was to save money in his operations. So maybe some operation was costing him $200 million a year, and an optimal solution to some integer linear programming (ILP) problem, ILP being NP-complete, might save him 15%. Actually the 15% is somewhat realistic: the work for the book was done at Bell Labs; one of the main business problems under consideration was the design of long-distance telephone networks; and there was some empirical evidence that an optimal solution could save roughly 15%. So that would be $30 million. Nice.

Such network design is an old problem: MIT Dean T. Magnanti gave a Goldman lecture on it at Hopkins, etc. Various large Internet and multi-protocol label switching (MPLS) network operators have continued to work on it. Especially given the need for growth and the uncertainty of future products and flows, it's a tough problem. For the deterministic version, at least one company in Texas, with some expertise from SMU, got venture funding to work on it.

They flew me down for an interview. At one point I mentioned that my last effort in ILP was to get a feasible solution within 0.025% of optimality on a 0-1 ILP with 40,013 constraints and 600,000 variables in less than 905 seconds on a 90 MHz computer. Misunderstanding NP-completeness, even several students from SMU concluded that I had to be lying, that no such progress could be possible.

The progress was possible, and easy: the problem had special structure. Except for 13 constraints, the problem was separable and easy to solve. So, use nonlinear duality: put the 13 constraints in the objective function with Lagrange multipliers and do some Lagrangian relaxation. After 500 such iterations, I had my feasible solution and a duality bound that established the 0.025%.

Did I have an algorithm that showed that P = NP? Nope. Did I get what I wanted in the real problem? Yup.

The dual problem is to maximize a concave function whose hypograph is an intersection of finitely many closed half-spaces (in this context that's always true, no matter how nasty the original problem; the proof is easy). So this dual problem is essentially ordinary linear programming, and for it I used IBM's Optimization Subroutine Library (OSL). Given the dual solution, solving the separable problem was fast. In short, I exploited "Solving ILPs for Fun and Profit 101 for Dummies": exploit special structure.
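In case the recipe is hard to picture, here's a minimal sketch in Python of that kind of Lagrangian relaxation: dualize the few coupling constraints, solve the now-separable inner problem in closed form, and run projected subgradient ascent on the multipliers. The toy instance, the greedy repair step, and the 1/k step sizes are all assumptions of mine for illustration, not the actual model from that engagement:

    # Toy problem: minimize c.x subject to A x <= b, x in {0,1}^n,
    # where the m rows of A play the role of the "13 coupling constraints".
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 600, 13                     # variables, coupling constraints
    c = rng.normal(0.0, 1.0, n)        # costs; negative entries make x_j = 1 attractive
    A = rng.uniform(0.0, 1.0, (m, n))  # nonnegative coupling coefficients
    b = 0.25 * A.sum(axis=1)           # right-hand sides that keep the problem tight

    def solve_relaxation(lam):
        """Inner problem: min_x (c + lam@A).x - lam.b, separable in the x_j."""
        reduced = c + lam @ A
        x = (reduced < 0).astype(float)        # take x_j = 1 exactly when it helps
        return x, reduced[reduced < 0].sum() - lam @ b

    def repair(x):
        """Greedy repair to a feasible point: drop chosen items until A x <= b."""
        x = x.copy()
        while np.any(A @ x > b + 1e-9):
            viol = np.maximum(A @ x - b, 0.0)
            scores = (viol @ A) * x            # each chosen item's share of the violation
            x[np.argmax(scores)] = 0.0
        return x

    lam = np.zeros(m)
    best_dual, best_primal = -np.inf, np.inf
    for k in range(1, 501):                    # the "500 iterations" of the story
        x, dual_val = solve_relaxation(lam)
        best_dual = max(best_dual, dual_val)   # any dual value lower-bounds the ILP
        best_primal = min(best_primal, c @ repair(x))
        g = A @ x - b                          # subgradient of the concave dual at lam
        lam = np.maximum(0.0, lam + (1.0 / k) * g)

    gap = (best_primal - best_dual) / max(1e-12, abs(best_dual))
    print(f"feasible {best_primal:.4f}, dual bound {best_dual:.4f}, gap {gap:.4%}")

On a separable instance like this, each iteration costs little more than one pass over the variables, which is why a few hundred iterations on a large 0-1 model can be cheap.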

In a back room they had a nice, high-end Dell computer with a copy of CPLEX and were ignoring them both, trying total enumeration instead. Net, they didn't want to work with me, and I didn't want to work with them. Poor venture firm.

Later a company in France got together with CPLEX founder R. Bixby and got some work done. Cisco bought them to get a system configuration aid for their MPLS customers.

But back to the cartoon: the businessman's problem was to save 15% of the $200 million, that is, $30 million or so.

Here's the big lie: the businessman's problem was really just to save nearly all of that $30 million. Saving $29,999,999.99999 would be just fine. Presto: (1) saving nearly all the money was likely quite doable, and (2) it is not at all clear that computing such approximate solutions is NP-complete.

Indeed, for positive integers m and n, consider the traveling salesman problem in Euclidean n-space on m cities: just find a minimum spanning tree of the m cities (using a polynomial-time algorithm) and construct a traveling salesman tour by slightly modifying a depth-first traversal of the spanning tree. Presto: with a reasonable probabilistic description of the problem, as the number of cities m grows, this tour comes as close as we please to optimality with probability as close to 1 as we please. So the problem has a polynomial-time algorithm that provides approximate solutions. This is an old result, with details in:

Richard M. Karp, "The Probabilistic Analysis of Some Combinatorial Search Algorithms," pages 1-19, 'Algorithms and Complexity: New Directions and Recent Results', edited by J. F. Traub, Academic Press, New York, 1976.
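For the curious, here's a small Python sketch of that construction: a minimum spanning tree via Prim's algorithm (clearly polynomial), then a preorder depth-first walk of the tree as the tour. The random points in the unit square and the comparison against the MST weight are my toy additions; the MST weight is a lower bound on the optimal tour, so the printed ratio bounds how far the tour is from optimal (at most 2 in the worst case):

    # MST-based traveling salesman tour on m random points in the unit square.
    import numpy as np

    rng = np.random.default_rng(1)
    m = 500
    pts = rng.uniform(0.0, 1.0, (m, 2))
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

    # Prim's algorithm, O(m^2): best[j] is the cheapest edge from the tree to j.
    in_tree = np.zeros(m, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()
    parent = np.zeros(m, dtype=int)
    children = [[] for _ in range(m)]
    mst_weight = 0.0
    for _ in range(m - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        in_tree[j] = True
        mst_weight += best[j]
        children[parent[j]].append(j)
        upd = (dist[j] < best) & ~in_tree      # j may now be the closest tree node
        best[upd] = dist[j][upd]
        parent[upd] = j

    # Preorder depth-first traversal of the tree gives the tour.
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))

    tour_len = sum(dist[tour[i], tour[(i + 1) % m]] for i in range(m))
    print(f"tour/MST ratio: {tour_len / mst_weight:.3f}")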

So in the cartoon, the computer scientist was standing there arguing that the difficulty of guaranteeing to save the last tiny fraction of the last penny, exactly, always, on worst-case problems, the worst that could exist even in theory, with a polynomial algorithm, meant that no computer scientist in the long line could solve the businessman's problem. That was, and remains, totally deceptive, incompetent, made-up, academic make-work, junk-think, busy-work, prof-scam nonsense.

Net, the computer scientist was passing off a big lie just to avoid real, useful work and to land a very long-term job in philosophical curiosity and intellectual self-abuse.

And Vardi is still pushing that nonsense.

Vardi is saying that the difficulty of exceeding Mach 50 means we can't take a plane to California, and then he wants to be funded for life to work on exceeding Mach 50, or maybe just the speed of light.



