P might be NP: A Polynomial Time Algorithm for the Hamilton Circuit Problem (arxiv.org)
44 points by Filligree on May 29, 2013 | 47 comments



Just looking at the abstract:

    In this paper, we introduce a so-called Multistage graph Simple Path (MSP)
    problem and show that the Hamilton Circuit (HC) problem can be polynomially
    reducible to the MSP problem.
That would imply that the MSP is NP-hard. So far, so good.

    To solve the MSP problem, we propose a polynomial algorithm ...
That would imply that P=NP, and hence this would be a major result, with potentially wide-reaching consequences.

    ... and prove its NP-completeness.
Pause. This doesn't make sense. If you have a polynomial algorithm then it's in P. If you've reduced HC to MSP then you've already shown MSP is NP-Hard.

They use the word "its" - to what does it refer? The algorithm? That doesn't make sense, as an algorithm is not something that can be NP-complete. The MSP problem? The earlier claimed results show it's NP-hard, and now they claim it's in P, so to "prove its NP-completeness" doesn't fit.

However, English is not their first language (I assume) so perhaps I'm over-thinking irrelevant detail.

    Our result implies NP=P.
Yes, yes it would.

Now I'm off to see if I, as a non-specialist, can make any sense of it.


From the Introduction:

  We will introduce a so-called 'Multistage graph Simple 
  Path' (MSP) problem and prove its NP-completeness.
So what they claim to be NP-complete is their new problem, and not the algorithm, as it should be. I still say this will turn out not to be real. I won't even try to follow their paper, but it's not written in LaTeX so it can't be real math... :)


They claim to have a new problem (MSP) which is NP-Hard because HC reduces to it, and they claim to have a polynomial algorithm to solve MSP.

That's all very reasonable. What's less reasonable is that all of the references bar one are to their own papers, and one of their papers from 2010 claims to have been presented at a conference and to prove this very result.

It's not passing the sniff test, but I'll still see if the first few pages make sense.


Further into the article they state "This paper persents [sic] the full version of our idea to solve this famous problem. We will introduce a so-called 'Multistage graph Simple Path' (MSP) problem and prove its NP-completeness. To solve the MSP problem, we will propose a polynomial algorithm and prove its correctness."


> Pause. This doesn't make sense. If you have a polynomial algorithm then it's in P. If you've reduced HC to MSP then you've already shown MSP is NP-Hard.

This actually makes sense, although it wouldn't be necessary to point out.

If the author proves that P=NP, then all problems in P are NP-complete (because every problem in P is reducible to any non-trivial problem in P).
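That step can be made concrete with a tiny Python sketch (all names here are hypothetical, for illustration): given a polynomial-time decider for A and any non-trivial problem B, the reduction just decides A and emits a fixed yes- or no-instance of B.

```python
def reduction_to_b(decide_a, b_yes, b_no):
    """Many-one reduction from A to B, assuming A is decidable in
    polynomial time and B is non-trivial (has at least one yes-
    and one no-instance). The reduction runs in polynomial time
    because decide_a does."""
    def reduce_instance(x):
        return b_yes if decide_a(x) else b_no
    return reduce_instance

# Hypothetical example: A = "is x even?", with "YES"/"NO" standing in
# for a known yes-instance and no-instance of B.
r = reduction_to_b(lambda x: x % 2 == 0, "YES", "NO")
```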

The paper doesn't look very promising to me, but the basic logic of the argument makes sense.


ArXiv has many "proofs" that P=NP and that P!=NP. ArXiv does not do any vetting of correctness (it is beyond their scope). We don't need an HN submission for this, and the title is inaccurate clickbait.


"Please don't submit comments complaining that a submission is inappropriate for the site." http://ycombinator.com/newsguidelines.html

People have upvoted this article, therefore there should have been a submission for it.


Fair enough, but people should know that arXiv does not have a crackpot filter (this is not a bad thing. ArXiv is excellent!) Any random pdf claiming to decide P = NP on the internet is going to be crackpot stuff.

(Off topic) My favourite silly arXiv post is this one, mostly for the dramatic title and abstract: http://arxiv.org/abs/0809.4144


I wonder why authors of P=NP 'proofs' won't attach a program that solves some NP hard problem for a collection of large inputs in reasonable time.

This of course wouldn't prove the work is valid, but should be enough to draw attention and have the paper reviewed by qualified people.
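One reason a demo would draw attention: even without trusting the solver, anyone can check its output certificates quickly. A minimal sketch of such a checker for Hamiltonian cycles, assuming the graph is given as an adjacency dict:

```python
def is_hamiltonian_cycle(adj, cycle):
    """Check whether `cycle` (a list of vertices) is a Hamiltonian
    cycle in the graph `adj` (dict: vertex -> set of neighbours).
    Runs in O(n) time, so certificates are cheap to verify even if
    finding them is hard."""
    n = len(adj)
    if len(cycle) != n or set(cycle) != set(adj):
        return False  # must visit every vertex exactly once
    # every consecutive pair (wrapping around) must be an edge
    return all(cycle[(i + 1) % n] in adj[cycle[i]] for i in range(n))

# Example: the 4-cycle 0-1-2-3-0
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
```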


An O(n^10) algorithm is in P, but it does not solve large inputs in reasonable time.


Mathematically you can prove that an algorithm with certain properties exists, without being able to implement it. Also, as others have pointed out, polynomial does not necessarily mean "fast".


Interesting, are you aware of any such algorithm (proven to exist but not discovered yet)?


There are cases where the proven lower bound for a problem (e.g. sorting) was below the cost of any algorithm known at the time.

If you prove, even nonconstructively, that some NP-complete problem can be solved in O(p) time, where p is a polynomial, then P=NP and you do not necessarily have the algorithm in hand.

http://en.wikipedia.org/wiki/Upper_and_lower_bounds


A bit more weakly, some algorithms are known but do not have any implementations. I believe this is the case for the algorithm from "Triangulating a Simple Polygon in Linear Time".


Exactly.

I believe there's good money to be made if you find out that P=NP.

If you can solve an NP-complete problem in polynomial time you can solve 3-SAT, and if you can solve that you can factor big numbers (even though factorization is believed to be 'easier' than the NP-complete problems).

Maybe you could reverse hash functions as well with that knowledge.
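For a sense of scale: the obvious way to solve 3-SAT tries all 2^n assignments. A brute-force sketch (clauses as tuples of signed integer literals) shows exactly the blow-up a P=NP algorithm would have to avoid:

```python
from itertools import product

def brute_force_sat(n_vars, clauses):
    """Try all 2^n assignments. A literal k means variable |k| must be
    True if k > 0, False if k < 0. Exponential time: this is the
    blow-up a genuinely polynomial algorithm would have to avoid."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # first satisfying assignment found
    return None  # unsatisfiable
```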


P=NP doesn't imply that there is a fast algorithm for every problem in NP. It only means that there is a polynomial-time algorithm for everything in NP (e.g. n^100 is polynomial). For example, there could be an O(n^100) algorithm for solving 3-SAT, which is not a fast algorithm.


Yes, but as usual, the problem with big-O is the growth rate.

In the same way that insertion sort can be faster than quicksort for small vectors, there's an input size beyond which even O(n^100) is quicker than O(n!).

The (practical) problem with NP-complete problems isn't the small instances; you can try every combination for a small TSP instance in reasonable time.

But for big instances, even an n^100 algorithm will most likely be faster than the existing n! algorithms.
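To put a rough number on that crossover (a back-of-the-envelope sketch using exact integer arithmetic):

```python
from math import factorial

def crossover(exponent=100):
    """Smallest n at which n! exceeds n^exponent - i.e. the input size
    past which an n^100 polynomial algorithm beats brute-force n!
    enumeration."""
    n = 2
    while factorial(n) <= n ** exponent:
        n += 1
    return n
```

So past a quite modest input size, the polynomial wins on paper, even with exponent 100.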


The problem is that, even though it's faster, it isn't fast enough. You can't solve any large real-world problems with an n^100 algorithm, even though it's faster in theory.


>>This of course wouldn't prove the work is valid

No, it actually would. Which raises the obvious question: why don't they submit the damn code?

During my college days, a professor would always argue that combustion can only be an exothermic reaction. Another professor would argue it can be endothermic too. The discussions went back and forth until, during one lab session, a student stood up and asked the professor to produce a chemical he could put on his palm and burn, to prove it's endothermic.

Since then he stopped, and I never again heard him talk about combustion being an endothermic reaction.

All it takes is to write a program - if you have the algorithm, is it really that difficult to write it?


That's a good question, but always keep in mind that polynomial time doesn't imply that it'll be faster than the currently used algorithms on any real application.


For a sufficiently large input size N, the polynomial algorithm will be faster, without any doubt.


If you scroll to the last page, you'll find that this "Xinwen Jiang" character cites a textbook on computational complexity from the '70s, plus himself (in the apparently nonexistent Computer Technology and Automation) more than ten times. Furthermore, he's been at this since 1993. A dedicated crank if I've ever seen one.


Spot the error or shut up. If Ramanujan had written a paper himself he could only have cited some rather obscure mathematical encyclopedia, yet he still uncovered wide areas of mathematics unknown to his contemporaries. There might be one Ramanujan for 100,000 cranks, but it still sucks to dismiss someone's work based on the fucking bibliography...


    > Spot the error or shut up.
That's not really how it works. I'm deciding whether it's worth my time trying to understand this. If it's really the breakthrough it purports to be, I can expect there to be some deep ideas and difficult tricks - I expect this to take both time and effort.

To decide whether I'll bother I apply several heuristics, many of which are well-known and informally documented. If the first page or two just seems like obvious stuff, or is subtly nonsensical, then I won't bother.

But I am certainly concerned that he doesn't cite any other significant work at all. More, the claims toward the end are rather, well, indistinct. It's doing very well against the ten heuristics in this blog post:

http://www.scottaaronson.com/blog/?p=304


OK, but I would love for those heuristics to focus on assessing the content of the paper, not the reputation of the author, his peers, quotations, etc. You can get some vague impression of what's going on in this paper in some 30 minutes. There are many parts that seem strange (he says that for 4-stage graphs one can manually verify that his "proving algorithm" is correct, and then uses this as an assumption later in his proof), so I would love to hear something about that.


The heuristics I apply go well beyond the ones given in that post. In this case I have taken on board the fact that so many references are just to his own earlier work, and that work is old enough that it should have made a bigger splash. That's a clear indication that this is not likely to be correct, so I'm looking for mistakes, I'm not looking for deep ideas.

And I'm not finding deep ideas. I spent 10 minutes getting a feel for the approach, and I'll come back when I have another 10 or 15 minutes to spare, but it's really, really not looking good.


Well, the best heuristic is finding a mistake, and if it's a crank there should be several.


It doesn't always work like that. Sometimes a paper is, after enough trying to understand what's going on, simply nonsense. It can be hard work wading through pages of unmotivated and apparently random definitions, trying to find some sense, only to have it completely fall apart.

There aren't always obvious mistakes. Sometimes it makes sense locally, but not globally. Taking an early assessment can help enormously in avoiding time-wasters.


Well, that's not just some textbook on computational complexity, it's the textbook on computational complexity.


The field of P versus NP is seldom touched by researchers, since no one wants to spend their time on a research program with an unknown ending. What I mean is that one can hardly find a published paper on it in recent years. What do you expect the author to cite? If you want to get an overview of the paper, you may watch this: http://www.youtube.com/watch?v=NYWgrjWQx60&feature=youtu...


I'm very skeptical, for the following reasons:

1. The author first released this paper (well, a version of it) in April 2009. It seems unlikely that it would have gone unnoticed for four years if the proof was valid.

2. Aside from a 1979 textbook and a 2010 paper, the author only cites himself (10 times!)

3. The author does not use TeX (see 1. at "Ten Signs A Claimed Mathematical Breakthrough is False" http://www.scottaaronson.com/blog/?p=304)


>At some point, there might be nothing left to do except to roll up your sleeves, brew some coffee, and tell your graduate student to read the paper and report back to you.

Hahaha.


From the author home page:

'It seems our algorithm is a polynomial one. So we would like to discuss with more people.'

It's weird if a proof doesn't even convince its author.

https://sites.google.com/site/xinwenjiang/


In this case, I think it shows uncommon good sense. There have been how many claims that P=NP?

Ironically, the author being this careful makes me think he's more likely to be right.


I'm not a specialist by any means, but he also ends the paper with "All the results show that our polynomial time algorithm can get the same answer as the backtracking algorithm does. No exception.", which doesn't sound like a concrete proof.


"Beware of bugs in the above code; I have only proved it correct, not tried it." - Donald Knuth


This appears to be a corollary: "Beware of bugs in the above code; I have only tried it, not proved it correct."


The arXiv doesn't peer-review papers. It has lots of "proofs" of famous conjectures, et cetera.

This paper, in particular, smells fishy. The author doesn't cite anyone but himself for the significant results (he does cite someone else, but only for reference material that's standard for specialists).

related: http://primes.utm.edu/notes/crackpot.html


Jiang found a polynomial-time algorithm for what he calls "labelled multistage graphs".

Note that these graphs are partially ordered sets (http://en.wikipedia.org/wiki/Partially_ordered_set)

and every partially ordered set has a unique corresponding comparability graph (http://en.wikipedia.org/wiki/Comparability_graph)

and the problem of finding a Hamiltonian cycle in a comparability graph is known not to be NP-complete. (http://link.springer.com/article/10.1007%2FBF00571188)

Thus Jiang may have found a polynomial-time algorithm, but it solves a problem that is not NP-complete.


I spent some forty minutes on this. I think his "multistage graph simple path problem" is a very obfuscated version of a simpler problem where there is no need for the complicated labelling of vertices, with the information instead reflected in the connections between vertices. I think the problem he discusses is simply that of determining whether two vertices in a graph are connected by a path. Did anyone else understand his problem formulation and have similar impressions? I have a microscopic hope of this being a valid P=NP proof; still, it would be nice to know wtf this is about.


His "Multistage Graph" seems poorly defined. I'm slowly coming to grasp what I think it is, but he defines it as

    G = <V,E,S,D,L>
but pulls the E(v) from nowhere.

It's not looking good.


Yes, I think the intention is for E(v) to be one of the inputs of the algorithm (I contemplated E(v) for half an hour to guess that this is what he means). I thought maybe the following is possible: create a new graph with the same vertices as his graph, but instead of having an E(v), connect each vertex to the members of E(v) via an edge; then, if two vertices are not connected by an edge in his original graph, remove that edge from the new graph. I can't really get to the heart of it - I just had a vague impression this might be equivalent. Either way, I think the problem could likely be formalized more effectively.


Nope, there are constraints on what the path may look like. The constraints have the form: if the path enters vertex v, then it may only use edges {...}.


Yes - it's getting harder and harder to pin down exactly what this problem really is, what an example of it is really like, and what's really going on.

I'm sure the author has something in mind. I'm sure it's not well explained, and it might even prove insufficiently explained to be able to assess it properly.

It's looking like it's locally "OK" but globally nonsense.


Well, he does get the proof that his problem is NP-hard right (because that proof is really easy). But his algorithm for solving the problem is really messy and hard to follow.


Yes, not looking good.

It would certainly be amazing if it were true, but it doesn't look right.

The style, the constructs in the article, etc.

But yes, translating one NP-complete problem to another is an allowed approach and has been done between several NP-complete problems (provided you prove your translation is correct and valid in all cases, of course).


This video may help to understand the paper more vividly: http://www.youtube.com/watch?v=NYWgrjWQx60&feature=youtu...



