
I described the reasons. E.g., hype.

Also, the paper was eager to assume that a polynomial algorithm proving P = NP would also be fast in practice and that the world would then get big savings in airlines, manufacturing, etc., and that's nonsense. Generally we can get all but the last pennies of those savings now.

Net, the paper adds to the already high confusion about the problem of P versus NP.




These claims are false. We can get savings on a small number of "low-hanging fruit" problems that are easy to approximate on practical instances: bin packing with many small items, scheduling with many short jobs, 2D Euclidean Hamiltonian cycles. What about, for example, the learning applications that Fortnow mentions? Try your CPLEX on the problem "find the minimal size circuit that explains this big set of observations". You will not get anywhere. There are many other examples.

It makes sense for industry to concentrate on making a buck out of the easy instances and to leave the hard stuff to the academics. However, that is different from claiming that hard instances don't exist.


You wrote"

"However, that is different from claiming that hard instances don't exist."

but I didn't claim that there were no hard instances. Again, even showing P = NP does not guarantee a fast algorithm for "hard instances" in NP. Net, the question of P versus NP is a bit far from practice.

Again, a big example is the simplex algorithm: In practice, the algorithm finds optimal solutions in about 3m iterations, where m is the number of rows, and we get to forget about the number of columns. So in practice simplex is terrific as a 'polynomial' algorithm: It's faster than a degree 1 polynomial since we get to ignore the number of columns. But simplex, worst case, is exponential. And there are polynomial algorithms to replace simplex, but they are all far too slow: If simplex takes too long, then the polynomial algorithms take still longer.

Linear programming is one of the best examples we have for where we had an exponential algorithm and found a polynomial one. The result: The polynomial algorithm sucked.
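
For the curious, a minimal sketch of that comparison in Python, using SciPy's open-source HiGHS solvers rather than a commercial code; the method names "highs-ds" (dual simplex) and "highs-ipm" (interior point) are SciPy's, and the random problem sizes are made up:

    # Compare a simplex method and a polynomial-time interior-point method
    # on the same random LP. Toy instance: few rows, many columns.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, n = 50, 500                       # m rows, n columns
    A = rng.random((m, n))
    b = A @ rng.random(n)                # right-hand side chosen so the LP is feasible
    c = -rng.random(n)                   # linprog minimizes; negate to make it non-trivial

    for method in ("highs-ds", "highs-ipm"):
        res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method=method)
        print(method, "iterations:", res.nit, "objective:", res.fun)

The iteration counts move around a lot with the instance shape; the point is only to put the two method families side by side, not to benchmark them.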

You wrote:

"find the minimal size circuit that explains this big set of observations".

I tried to skip over that part of the paper to keep my response simple.

First, it is not clear what he means by "explains", but I fear that I do know. Basically he wants a 'fit', but this is dangerous. In practice, a good approach to finding such fits is classification and regression trees (CART), complete with a book by L. Breiman and others.
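
If a sketch helps, here is that kind of tree fit with scikit-learn's CART-style learner; the toy data and the depth limit are assumptions, purely for illustration:

    # Fit a small regression tree (CART-style) to made-up data.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))                       # 200 observations, 3 features
    y = (X[:, 0] > 0.5).astype(float) + 0.1 * rng.standard_normal(200)

    tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
    print("in-sample R^2:", tree.score(X, y))      # quality of the 'fit'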

But a 'fit' is commonly easy: For some positive integer n and for i = 1, 2, ..., n, we are given real numbers x(i) and y(i), with the x(i) distinct. Now we want a polynomial p of degree at most n - 1 so that p(x(i)) = y(i) for each i. Okay, how to find p? Easy. I leave it as an exercise. Problem is, p doesn't provide much as an 'explanation'.
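
For completeness, the exercise worked as a sketch: numpy.polyfit at degree n - 1 on n distinct points solves the Vandermonde system and returns the unique interpolating polynomial (the data below is made up):

    # Interpolate n points with a polynomial of degree at most n - 1.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])    # distinct x(i)
    y = np.array([1.0, 2.0, 0.0, 5.0])    # the y(i) to hit exactly
    n = len(x)

    coeffs = np.polyfit(x, y, deg=n - 1)  # exact fit through all n points
    p = np.poly1d(coeffs)
    print(np.allclose(p(x), y))           # True: p(x(i)) = y(i) for every i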

Generally, 'explanation' is tough to pin down, in part because it tries to get at causality, and causality is super tough to infer just from data, as in the paper.

I mentioned CPLEX for optimization problems. Not all problems in NP are optimization problems. But, curiously, integer linear programming bites off a lot of NP, and CPLEX with associated techniques is one of the best approaches to integer linear programming.
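
As a tiny illustration of that kind of model, here is a toy integer linear program; I use SciPy's open-source MILP interface (scipy.optimize.milp, backed by HiGHS) as a stand-in for CPLEX, and the numbers are made up:

    # Toy integer linear program: maximize 3x + 2y
    # subject to x + y <= 4, x + 3y <= 6, with x, y >= 0 and integer.
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    c = np.array([-3.0, -2.0])                     # milp minimizes, so negate
    cons = LinearConstraint(np.array([[1.0, 1.0],
                                      [1.0, 3.0]]), ub=[4.0, 6.0])
    res = milp(c, constraints=cons,
               integrality=np.ones(2),             # both variables integer
               bounds=Bounds(0, np.inf))
    print("x =", res.x, " objective =", -res.fun)  # expect x = [4, 0], objective 12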



