The polynomial algorithm for 3-SAT problem (or P=NP) (romvf.wordpress.com)
269 points by caustic on Jan 19, 2011 | hide | past | favorite | 142 comments



I've read the paper fairly closely, and it mostly seems like the author is hiding a conflict-driven search in ill-stated data structures, which allow him to perform a faulty analysis of the runtime of his algorithm.

I've implemented a SAT solver and read the literature extensively. This paper is not up to the standards of clarity imposed by that literature; see, e.g., "Efficient Conflict Driven Learning in a Boolean Satisfiability Solver" (Zhang, available for free online). There is a world of difference in the clarity of presentation between these two papers. There may be an English language barrier at work, I don't know.

If the author did some work to polish his presentation and state definitions more clearly, as well as submit his SAT solver to well-known competitions (http://www.satcompetition.org/2011/), I'm sure he could get some attention from the relevant people. Given how unclear it is right now, I'm too busy with my own research to try and pull out the hidden conflict-driven algorithm that I suspect exists in this paper, as it would be very time-consuming for little expected gain on my end.

If his algorithm beats the pants off all the others in SAT 2011, well, then I'd get right down to work.

Homework for someone who has some time: download his code and make it run on SAT 2010. Compare to other algorithms from that competition. Not, of course, a definitive test, but if it performs even in the middle of the pack, then you'll know it is worth a closer look.


I would like to clarify: an algorithm running slower than the ones used by solvers in competitions like SAT doesn't mean much, as we're looking at asymptotic complexity here. To give an example: the "PRIMES is in P" paper gives a poly-time algorithm to determine whether a given number is prime, but in practice it runs slower than probabilistic tests like Miller-Rabin.


Very true, I'm sorry if I implied otherwise above. However, using a well-known SAT benchmark to assess correctness/performance in the absence of a clearer statement of the algorithm would still be a good step to take, if someone was interested.


"To give an example: the "PRIMES is in P" paper gives a poly time algorithm to determine if a given number is prime or not, but in practice, it runs slower than tests like probabilistic algorithms like Miller-Rabin."

I agree in principle, but the example is a bad one. Here, a deterministic polynomial time algorithm is stacked against probabilistic polynomial time algorithms.

In SAT, we'd have the (hypothetically) polynomial algorithm against exponential-time algorithms. Theoretically, the constants in the polynomial might of course be so large as to erase the advantage on non-galactic problem sizes, but this happens rarely.


Another example is Linear Programming - it is provably polynomial using the ellipsoid algorithm but people tend to use algorithms (such as simplex) which are not provably in P but run much faster in practice.


The problem with the ellipsoid method is not speed but rather numerical stability. It would work great if we only had "real reals", i.e. reals with infinite precision.


I'm not at all an expert on this material, but some random points to get people started:

0. This guy looks orders of magnitude less looney than the usual P=NP prover. I hope someone who knows this material well steps in soon.

1. This guy has implemented his algorithm. This is a very good sign -- most garbage "algorithms" are exposed to be broken when people try to implement them.

2. Most 3SAT problems are "easy". Being able to solve a particular problem doesn't mean that the algorithm works in general. He would have done better to demonstrate his algorithm on "known hard" problems.

3. He states a running time of O(n^4 m), but his experimental results don't scale quite like this; perhaps his analysis is wrong, or perhaps there's just a monster hiding behind the big-O.

4. If he is correct, and his algorithm is optimal, we probably don't need to worry very much about cryptography yet: It looks like this algorithm is far too slow to be a practical attack on existing cryptosystems.

(EDIT: Oops, in the time it took me to write that, 18 other people posted comments. Well, so much for getting the discussion started...)


It's actually pretty easy to construct hard problems. A random 3-SAT problem is hard when it has the right density ratio of clauses to Boolean variables. Too few, and it's (relatively) easy to find a satisfying assignment. Too many, and it's easy to find a contradiction. This ratio is around 4.26, and it can actually be viewed as a "thermodynamic" phase transition. In the large-n limit, almost all 3-SAT problems with a lower ratio are satisfiable, and almost all above it are not.
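
For anyone who wants to experiment, here is a minimal generator sketch in Python, assuming the usual uniform model (3 distinct variables per clause, each negated with probability 1/2; the function name is mine):

  import random

  def random_3sat(n, ratio=4.26, seed=None):
      # Sample m ~ ratio*n clauses over variables 1..n, DIMACS-style:
      # a clause is a tuple of signed ints, -3 meaning "not x3".
      rng = random.Random(seed)
      clauses = []
      for _ in range(round(ratio * n)):
          vs = rng.sample(range(1, n + 1), 3)
          clauses.append(tuple(v if rng.random() < 0.5 else -v for v in vs))
      return clauses

Empirically, instances drawn near the critical ratio are the hard ones; well below or above it, even naive solvers tend to finish quickly.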


I'd be interested to see a reference for that result. I've been working for some time on the equivalent problem for graph 3-coloring, and really there are only some vague, heuristic results. I should think it also depends on the randomness model you use. In graph coloring some random processes give easy problems and others give hard problems for the same density.


Which randomness models in particular? The two Erdős–Rényi models G(n,p) vs G(n, M)? I thought for almost all purposes they gave the same answer as long as equivalent M and p were chosen in the large n limit, because the concentration around M will be so tight due to the law of large numbers. Is there some other model you're looking at?

Random without qualification is used to mean "uniformly". For this specific case (3-SAT), there are only four reasonable readings: both (variables within a clause), and (clauses within the set of all possible clauses) can be chosen with or without replacement. Neither makes a difference in the large n limit.

http://books.google.com/books?id=zyktZTqZSzMC&lpg=PA414&...

is from a reasonable book.

It's true that this phenomenon hasn't been completely rigorously proven, but it's been given "physicist proofs". There's enough evidence that it's unreasonable to not consider it true.


My sense is that there are hard 3-coloring problems in graphs that don't look "random". Certainly the ones I generate have a small number of vertices with very high degree, and the others are more reasonable. This is why I say that there are potentially things happening with different random models. But you are certainly right on all counts.

However, where you say this:

    .... it's been given "physicist proofs". There's enough
    evidence that it's unreasonable to not consider it true.
There had been a lot of evidence around for quite some time that primality was harder than P (assuming P!=NP), and yet it turns out to be in P, even without PvsNP being settled. I'm suspicious. I still believe that hard instances can be found, and that "most" of them will look as you describe, random-ish with the right parameters, but I'm wary about claiming that most things that look like that will, in turn, be "hard."

But I suspect we're in agreement, possibly "violent agreement."

ADDED IN EDIT: Forgot to say, thanks for the reference.


Yes, we are in violent agreement then.

That it's easy to generate random hard instances does not at all mean that highly structured "non-random" instances can't be hard. The "phase transition" only means that the fraction of instances that are hard at a given non-critical ratio decreases with n; there could very well be a large absolute number, increasing with n. The space of problems of size n grows exponentially, after all.


My sense is that even before AKS, most researchers believed primality was in P. This was based mainly on the poly-time probabilistic algorithms developed in the 1980s.


Here's a Nature article that's pretty readable

http://www.cs.cornell.edu/selman/papers/pdf/99.nature.phase....


To pick on a small part of your comment: Do any theoretical computer scientists seriously worry that a proof that P=NP would threaten cryptography? It's not like a proof would suddenly make an i7 able to factor huge numbers in polynomial time. Also, while all polynomial-time algorithms are considered "tractable" in theoretical CS, that doesn't mean that they're all practical on current hardware (nor does it claim to mean that).

I suppose a proof that P=NP would still technically threaten cryptography, because if you assume that computer hardware performance increases polynomially with time you must concede that any given crypto technique will become practical to break after some amount of time.


>Do any theoretical computer scientists seriously worry that a proof that P=NP would threaten cryptography?

Yeah, that's how crypto works as a field. As soon as there's a shadow of a doubt that something is secure in even the least likely edge case, it gets tossed out. MD5 was basically considered dead once someone showed how to create a collision with (correct me if I'm wrong) two very long, very unlikely, very similar (?) plaintexts. It didn't threaten anyone's actual use of MD5 in an immediate sense, but MD5 was shown not to be reliable, and in the crypto world an algorithm's reliability is sort of treated as binary.

(I apologize to the true security folk in the crowd if I've oversimplified something here.)


Apparently it wasn't considered dead enough early enough: http://www.win.tue.nl/hashclash/rogue-ca/


Factoring is in NP. Proving that P=NP would, by definition, imply that large numbers can be factored in polynomial time.

The issue of whether it's a feasible amount of work is exactly what I was addressing.


The issue of whether it's a feasible amount of work is exactly what I was addressing.

One of the reasons this would be a huge result is that even a very expensive polynomial-time solution for any NP-complete problem (say, one which is O(n^2000) and would take several billion years to run against a 256-bit key) is only going to get faster over time. Also, the experience of cryptography is that once cracks start appearing in an algorithm the mathematicians start trying to wedge their chisels in and hammer those cracks into fractures, into holes, and eventually into whatever you call something which no longer resembles a crack so much it resembles absence-of-wall.

There are specific mechanisms by which this would happen, too. A near miss on P=NP would mean funding for professors looking into refining it, and professional interest generally, would suddenly explode. Nobody wants to fund or work on duds, but big contributions to this problem would make careers.

Not incidentally, while the government funds quite a bit of academic research, this particular branch of mathematics is of special interest to one type of government customer which has ways of getting funding for projects expedited if they are in the national interest... and it is virtually inconceivable that they would not find very, very strong national interest in the vicinity of this result.


True. However I don't think the article really implies that P = NP (although the title certainly does). From what I can see the article only addresses the class of NP-complete problems. Factoring is not known or believed to be NP-complete so it wouldn't directly prove that factoring was in P. Please correct me if I'm wrong.

Edit : For some reason I can't reply to posts so I'll do it here.

RiderOfGiraffes> It would still leave in question those problems in NP, but not NPC. Currently it is generally believed, but not known, that factoring is such a problem.

This is exactly my point. P = NPC doesn't imply anything about factorisation. Since factorisation is in NP we haven't shown that P = NP.

kd0amg> NP-complete problems ... are, in essence, the hardest problems in class NP.

From what I understand they are 'thought to be' the toughest but this isn't proven. If this article turned out to be true then they are reduced to being as hard as P. So who is still standing as a contender for the hardest problem in NP? - factorization of course as it has never been demonstrated to be polynomially reducible to NPC.


I believe you to be mistaken. Let me explain.

Proving that any single NPC problem is in P will be enough to prove that every NP problem is in P, and not just the NPC ones. Suppose A is in NP, B is in NPC, and further suppose that solving B is polynomial. Reduce A to B (a polynomial operation because B is in NPC), solve B (a polynomial operation by assumption), and convert the solution back to a solution of A (a polynomial operation because B is in NPC), and that gives a polynomial solution of A.

Proving that any single NPC problem is not in P is enough to prove that all NPC problems are not in P. It would still leave in question those problems in NP, but not NPC. Currently it is generally believed, but not known, that factoring is such a problem.

Added in edit, to reply to your edit:

    RoG> It would still leave in question those problems
    RoG> in NP, but not NPC. Currently it is generally 
    RoG> believed, but not known, that factoring is such
    RoG> a problem.

    dejb> This is exactly my point. P = NPC doesn't imply
    dejb> anything about factorisation. Since factorisation
    dejb> is in NP we haven't shown that P = NP
Read my entire comment again. In particular, the second paragraph. In particular:

    Proving that any single NPC problem is in P will be
    enough to prove that every NP problem is in P.
More specifically, proving 3-SAT is in P will prove that every NP problem, not just the NPC problems, but every NP problem, is in P. Including factoring.

Let me add a little more.

   1. Suppose 3-SAT is in P.
   2. Let A be any problem in NP.
   3. 3-SAT is in NPC.
   4. Therefore A can be converted to A', an instance of 3-SAT.
   5. By (1), A', as an instance of 3-SAT, can be solved in polynomial time
   6. Hence A has been solved in polynomial time.
Replace "A" with factoring, and thus we've shown that P=NPC implies that factorisation is in P.


Can you supply a link to the proof that all NPC problems are equivalent? I remember learning it, but I can't remember how it worked.


This is the definition of NPC. What you have to prove is that a language is in NPC.

This was proved for SAT first: http://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem

http://en.wikipedia.org/wiki/NP_complete


By definition, a problem X is NPC if:

a. X is in NP, and

b. Every problem in NP can be converted in polynomial time to an instance of X.

Consider any two problems in NPC, X and Y. Then each is in NP, and each can be converted to the other. They are therefore equivalent.

QED.


The NP-complete problems are a subset of NP ("nondeterministic polynomial") problems, specifically those to which all NP problems are polynomial-time reducible. They are, in essence, the hardest problems in class NP. It's pretty trivial to show that integer factoring is in NP; what's not known is whether it's NP-complete. If any NP-complete problem is in P, then via polynomial-time reduction, all problems in NP are in P.


factorization of course as it has never been demonstrated to be polynomially reducible to NPC.

Every NP problem is polynomial-time reducible to SAT. I do not recall the title of the paper off the top of my head, but it has been proven that any problem fitting the definition of NP (a question that can be answered given some additional piece of information, a "certificate") is polynomial-time reducible to SAT.

The methodology was basically an algorithm taking as input a description of such a verification procedure, and producing a boolean expression over the certificate bits and some other values needed to maintain integrity. The expression is satisfiable if and only if the answer is affirmative.

The reason that factorization is not known to be NP-complete is that the reverse is not known. That is, no NP-complete problem is known to be polynomial-time reducible to factorization.


From what I understand they are 'thought to be' the toughest but this isn't proven.

Yes, it is. Any NP problem can be many-one reduced to SAT, which means that there's a function that transforms an instance of the NP problem to an instance of SAT, and the answer to whether that formula is satisfiable is the same as whether that instance is a member of the NP problem (in its decision version).

So who is still standing as a contender for the hardest problem in NP?

If P = NP, then all problems in NP are as easy as each other, so this question is meaningless.


To be pedantic, not all problems in P are equally hard. Sorting for example is harder than finding the median.

The provable lower bound for the worst-case performance of comparison-based sorting is Ω(n log n). Finding the median takes linear time, i.e. O(n).
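
For concreteness, a minimal quickselect sketch, which gets the median in expected linear time without sorting (worst case is quadratic; a median-of-medians pivot rule makes it deterministically linear, with a bigger constant):

  import random

  def quickselect(xs, k):
      # k-th smallest (0-indexed) in expected O(n): recurse only into
      # the one partition that can contain the answer, unlike sorting.
      xs = list(xs)
      while True:
          pivot = random.choice(xs)
          lo = [x for x in xs if x < pivot]
          eq = [x for x in xs if x == pivot]
          if k < len(lo):
              xs = lo
          elif k < len(lo) + len(eq):
              return pivot
          else:
              k -= len(lo) + len(eq)
              xs = [x for x in xs if x > pivot]

  def median(xs):
      return quickselect(xs, (len(xs) - 1) // 2)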

Sorting and finding a median reduce to each other polynomially, of course.

Interesting subsets of polynomial problems are e.g. linear-time problems, or highly parallelizable problems whose runtime tends to O(log n) as the number of processors tends to infinity. Adding a list of numbers is not only linear but also highly parallelizable. Proving a problem to be not parallelizable like that is also interesting.

There's also the question whether randomness helps. It does in practice, but we don't know whether access to random bits helps in theory.


By "as easy as each other" I meant reducible to each other with a polynomial factor, as is standard in complexity theory.


Yes, that's why I prefaced with "To be pedantic". I just felt like pointing out that there are meaningful differences between polynomial problems.


Speaking pedantically, P is in NP also. I suspect that NP is being used as shorthand for NP-complete, not just NP. As for actual deterministic factorization Wikipedia says that

'It is suspected to be outside of all three of the complexity classes P, NP-complete, and co-NP-complete'

So even if this showed 'P = NP-complete' it may not imply that factorization is in P. I'm at the limits of my rusty complexity knowledge here, so if anyone knows better please correct me.


Actually, cperciva is right here. Factoring is in NP, therefore if P=NP then there is a polynomial-time algorithm for factoring. Many problems harder than NP are NP-complete, so saying factoring is in NP-complete would not imply the conclusion.

Also, there are problems in NP that are neither in P, nor are they NP- or co-NP-complete.


Cperciva is right, but I think you're mistaken when you say "many problems harder than NP are NP complete": NP-complete problems are both in NP and NP-hard. NP-hard is the class of problems such that every problem in NP can be reduced to any problem in NP-hard. Meaning if P=NP then every problem in NP (including the NP-complete problems) can be solved in polynomial time; all it takes is one algorithm that solves an NP-complete problem in polynomial time (note that this does not show that P=NP-hard, a much stronger claim).


You're right. I've got the brain hiccups.


> Also, there are problems in NP that are neither in P, nor are they NP- or co-NP-complete.

Err, it seems to me you've got a very serious statement there. I think you mean “that are not known to be …”.


No, there actually are problems in the "NP-intermediate" class (if P!=NP), although they are artificial. :) And, of course, you are right: for many other problems researchers suspect they might be in NPI. See http://cstheory.stackexchange.com/questions/79/problems-betw...


It's been proven (Ladner's theorem) that if P != NP, then there are problems in NP which are neither in P nor NP-complete. We don't know what those problems are, because identifying such a problem would prove P != NP. There are candidates, such as factoring, for which there are neither polynomial algorithms nor proofs of NP-completeness, but definitively identifying one is at least as hard as proving P != NP, and possibly harder.


Yep. I should just stop posting today. :)


The point I was trying to make (badly) was that the actual claim doesn't imply P = NP, only that P = NP-complete. Given that factoring isn't known to be NP-complete (or co-NP-complete) this paper doesn't imply that factoring is in P.


If a single NP-Complete problem is in P, then P=NP. This is what NP-Complete means. In particular, factoring would be in P. Conversely, factoring is not known to be NP-complete, so if factoring were in P, we could not draw any conclusions about P=?NP.


Actually, yes. Normally, when something is placed firmly in P, improvements are quickly made, and a spurt of research rapidly lowers the algorithm's bounds.

Additionally, P=NP implies that NP is in RP. If an inefficient deterministic algorithm in P is found, I'm sure a much faster randomized algorithm would soon follow.


> Normally, when something is placed firmly in P, improvements are quickly made and a spur of research firmly lowers the bounds of the algorithm.

This hasn't been the case in linear programming, at least: 30 years after it was proven to be in P, the old exponential algorithm (the simplex method) is still competitive with the best known polynomial algorithms (interior point methods). From a practical perspective, no real efficiency breakthrough. The fact that there are now two families of algorithms with good performance for the problem is an improvement, but the fact that one of them is theoretically polynomial doesn't seem to have led to dramatic speed gains.


OTOH, simplex is very efficient already. Average-case complexity for random matrices is polynomial, and there are some interesting results that extend beyond that (see http://arxiv.org/abs/cs/0111050).


Don't forget that changing out algorithms is a time-consuming process. It's not a 10 second fix, it's a technical challenge and a bureaucratic nightmare. I mean, it takes a week's worth of meetings to restock the condiments in the mess hall in some of these places. So you wind up in a situation where it takes a year or more to implement this stuff once you have the ideal solution selected, and selecting the replacement solution takes another year or more. So even if it's not projected to be an issue for 5 years, you have to start the replacement process right away.


Thanks for the summary with your level of competence (already a good one ;)).


Colin, no offense, and you are certainly a good coder & cryptographer, but that doesn't mean you know about everything in mathematics.


Err, what? I won the Putnam, but that's a mathematics competition, not a computational complexity competition.

Edit: If the above comment doesn't make sense, it's because the comment I was replying to was "you won the Neumann" and has since been edited.


OK, I just upvoted purely for the reason of keeping it on the front page a little longer so someone more qualified has a chance to glance at this. Without any evidence and no qualification to judge myself, it seems highly unlikely.


The prior probability of it being correct is so low that we should be flagging it, not upvoting it. There are many, many false "proofs" of P=?NP out there.

If I write a paper formatted in LaTeX saying that I cloned a T-Rex in my backyard, should it be upvoted so that an expert in cloning can take a look at it?


If this were the situation I would agree with you; however, an independent programmer has implemented his algorithm and posted it on github with instructions (https://github.com/anjlab/sat3) on how to run it on k-SAT instances. I believe this increases the claim's credibility: it is easier to confirm or reject his algorithm and claim.


First, it doesn't appear to be independent as the author on the blog says he/we prepared it.

Second, it doesn't mean much. It's almost as likely there's a bug in the code as in the proof. The only advantage is you can run some tests on it, but if it's wrong the tests may actually give you a false sense of security.

Lastly, if you prove P=NP you submit to FOCS or STOC. Let them review it.

Otherwise it's just another Archimedes Plutonium.


Perhaps you're right regarding the first point, I'm just going off what he states in the OP: "Also two independent versions of the algorithm in programming languages have been implemented."

My point is that a constructive proof that P=NP along with working code (again, I'm assuming it's implemented correctly, as it appears vetted by Romanov) is easier to refute: someone who perhaps doesn't have the theoretical background can still find pathological examples where it breaks down (and a much wider audience fits in this category, including most of HN).

I agree with your point about the false sense of security however--an inability to find such pathological cases is not sufficient to prove P=NP. In order to truly verify the claim a rigorous analysis of the proof will be necessary--but in this situation it's much easier to show what this guy is saying is false than in the Deolalikar case.


To be concrete, an average nicely formatted P=NP proof claim is correct with probability less than 10^(-5) (IMO).


Is anyone else struggling to understand theorem 2 on pages 15-16?

It sounds like what they are saying is equivalent to the following: If S1 intersect S2 has a solution, and S1 intersect S3 has a solution, then the system S1 intersect S2 intersect S3 has a solution.

But this is evidently false. Consider the case where the CTS included each of the following rows, and were empty everywhere else (after re-ordering the columns so the same-name columns were in the same final column):

  (1) 000 
  (2)  001
  (3) 0 00
In this case, (1) and (2) are consistent, and (1) and (3) are consistent, but (2) and (3) are inconsistent.
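
A quick mechanical check of that, with each row encoded as a column -> bit dict (my encoding, matching the rows above):

  def consistent(r1, r2):
      # Partial rows conflict iff they disagree on some shared column.
      return all(r1[c] == r2[c] for c in r1.keys() & r2.keys())

  r1 = {1: 0, 2: 0, 3: 0}        # (1) 000
  r2 = {2: 0, 3: 0, 4: 1}        # (2)  001
  r3 = {1: 0, 3: 0, 4: 0}        # (3) 0 00

  assert consistent(r1, r2)      # agree on columns 2, 3
  assert consistent(r1, r3)      # agree on columns 1, 3
  assert not consistent(r2, r3)  # clash on column 4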

I suspect the problem they set up the induction for might not perfectly align with the theorem, but it needs more careful examination.


Having read this paper (and being reasonably knowledgeable on the subject), I'm convinced this paper is wrong, or at least very underexplained.

The algorithm is similar to the well-known '3-consistency' technique, which has been shown to solve quite a lot of classes of problems, in particular some that are solved very poorly by the normal learning-based systems used in most SAT solvers.

The paper aims to strengthen 3-consistency slightly, using permutations. However, while that is a reasonable strategy, it is entirely unclear to me, and unclear in the paper, why that gives polynomial solving time.


Just this month there's another paper on arXiv that uses 3-SAT to prove P!=NP: http://arxiv.org/abs/1101.2018

A list of articles published on the P=NP debate is here http://www.win.tue.nl/~gwoegi/P-versus-NP.htm

Looks like someone thinks they've solved the problem every month or so :)


And as always, we'd better hope that the proof is wrong. P=NP would (almost) imply the impossibility of cryptography, but even more seriously a lot of brainpower from the top theoretical computer scientists (whose proofs often begin with "suppose that P != NP...") would have been wasted :)


P=NP does not imply cryptography is impossible at all.

For example, "quantum encryption" allows you to send data and know whether you're being eavesdropped on. Then, once you manage to send it without being eavesdropped on, you can use what you sent as a one-time pad safely.

You could of course regress to distributing one time pads to everyone :-)

Also, if P = NP, there are still NP hard and Exp problems that remain outside the realm of P.


That's why I said "almost" :) AFAIK, cryptography without P != NP is an open question.

BTW, quantum encryption is not encryption as we know it (a purely functional deterministic bijective transformation of the message), as it requires special hardware, optical fibers, etc...

As for EXP etc., I am not aware of any (theoretical) encryption schemes that use problems harder than NP (actually most use circuit complexity, so NP/poly, but it is related). In designing encryption you often want polynomial verifiability (for decryption), and that implies NP membership.


The negative effects of ruining public-key cryptography completely pale in comparison to the benefits. The traveling salesman problem alone has so many practical and scientific applications that it would be worthwhile, and that's not even counting subgraph isomorphism or knapsack.


How so?

1) One time pads are provably secure albeit impractical

2) Integer factorization is in BQP, but not necessarily NP-complete.


1) One time pads require a secure channel to transmit the key, which must be the same length as the message. It is not a feasible cryptography scheme, more of a theoretical framework.

2) Integer factorization is trivially in NP (the decision problem "n has a factor < x" has a certificate: the factor)


A quantum channel provides protection against eavesdropping for random data, but does not provide security for specific data. A shared quantum channel can be used to easily produce a shared one-time key that only the two parties involved know, which can then be used to encrypt specific data over an unsecured channel.
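
The pad itself, once shared, is a few lines; a minimal sketch (the whole difficulty lives in distributing key securely, which is what the quantum channel buys you):

  import secrets

  def otp(data: bytes, key: bytes) -> bytes:
      # XOR pad: information-theoretically secure iff the key is
      # uniformly random, at least message-length, and never reused.
      assert len(key) >= len(data)
      return bytes(d ^ k for d, k in zip(data, key))

  msg = b"attack at dawn"
  key = secrets.token_bytes(len(msg))   # shared via the secure channel
  ct = otp(msg, key)
  assert otp(ct, key) == msg            # decryption is the same XOR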


One time pads are not useful in many cases, but they have been used:

http://en.wikipedia.org/wiki/One-time_pad#Historical_uses


I completely disagree - think of all the useful NP problems that we'd be able to solve at large scale.


What we should do is generate a class of instances of 3-SAT that are expected to be hard, and then try the solver on them and see what the runtime looks like as a function of the size of the input.

Recently someone claimed a polynomial-time graph coloring algorithm. I generated hard instances, their "solver" blew up. Claim debunked. It should be simple enough to do the same for this (for some definition of simple).

The key lies in generating hard instances. As cperciva has said in http://news.ycombinator.com/item?id=2121837 - most instances of most NPC problems are "easy."
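
A sketch of that experiment (the brute-force solve here is just a placeholder to make it runnable; you'd drop in the claimed algorithm and watch how the curve grows with n):

  import itertools, random, time

  def random_3sat(n, m, rng):
      return [tuple(v if rng.random() < 0.5 else -v
                    for v in rng.sample(range(1, n + 1), 3))
              for _ in range(m)]

  def solve(n, clauses):  # placeholder: exhaustive search, very slow
      for bits in itertools.product([False, True], repeat=n):
          if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
              return bits
      return None

  rng = random.Random(0)
  for n in (10, 15, 20):  # clauses/variables pinned at the hard ratio
      t0 = time.perf_counter()
      solve(n, random_3sat(n, round(4.26 * n), rng))
      print(n, time.perf_counter() - t0)  # poly vs exp shows in the trend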


I tried running the code on Pigeon Hole Problems, but it crashed. :(


With what error? Can you please file an issue here: https://github.com/anjlab/sat3/issues and provide a link to the CNF file?


tl;dr version of his strategy (as far as I understood it):

1. For a fixed permutation, construct a Viterbi-like search on the triplet assignments - if it fails, the formula is not satisfiable. However, if it doesn't fail right away, there is still no guarantee there is an assignment. Call this structure the compact triplets form (CTF), or whatever.

2. Construct a small set of permutations (at most m) such that, if any clause in the original CNF fails to be satisfied, at least one of these permutations' CTFs will fail to satisfy.

3. Efficiently? combine the structures.

I didn't really read it deeply, but from what I understood that seems to be the top-level strategy. I'm not 100% certain about it.


That's more or less what I got out of it as well, though I came at it from the other direction...

If you skip the first part about pre-processing, and instead look at the graph that is generated, you can come up with the following:

1. There exists a graph consisting of potential truth assignments on N unknowns in M clauses consisting of 7M nodes, and <48M2 edges.

2. Label each node uA,uB,uC,vA,vB,vC where u? is the unknown number, and v? is whether that unknown is true or false.

3. Edges exist between node I and node J if either nodes I and J share zero unknowns, OR both I and J share at least one variable AND all shared variables have the same value.

4. For there to be an assignment that causes the full statement to be correct, you must find an M-clique (a complete graph on M nodes within the 7M node graph).

Solving part 4 is NPC.

His claim is, basically, that if you permute/partition the input (to construct the tables), you can construct a disconnected graph with far fewer edges (get rid of the first part of #3 above), then link the graphs in such a way that if you can find a path from a tier-1 node to a tier-2 node, ..., to a tier-M node, examining the labels is sufficient to solve the problem. The two nasty parts are finding an efficient permutation/partition step, then managing to properly link the disconnected graphs. As you say, those are the two steps that the author hand-waves.
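
For anyone who wants to poke at points 1-4: that's the textbook 3-SAT-to-clique construction. A sketch, assuming clauses are tuples of signed ints over three distinct variables:

  from itertools import product, combinations

  def compatibility_graph(clauses):
      # Nodes: (clause index, local assignment of its 3 variables).
      # Each 3-clause keeps 7 of the 8 local assignments, so 7M nodes.
      nodes = []
      for i, clause in enumerate(clauses):
          vs = [abs(l) for l in clause]
          for bits in product([False, True], repeat=3):
              asg = dict(zip(vs, bits))
              if any(asg[abs(l)] == (l > 0) for l in clause):
                  nodes.append((i, asg))
      # Edges: different clauses that agree on every shared variable
      # (vacuously true when they share none). SAT iff an M-clique exists.
      edges = [(u, v) for u, v in combinations(nodes, 2)
               if u[0] != v[0] and
               all(u[1][x] == v[1][x] for x in u[1].keys() & v[1].keys())]
      return nodes, edges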


Where is the market among the HN community on betting this proof is correct (or the market for P=NP in general)?

My bid/ask is 0% / 0.02%

I think I'd wager at most a 1% chance that P = NP, and, I'll be generous and put the odds that this particular person cracked it first at 2% of 1% (I know he has code posted, but think of all the smart people who failed, and within my 1% is the case where P=NP but no human ever proves it). Would anyone offer better odds than 1 in 5,000, or make a bid?

My knowledge is limited to taking Sipser's intro class in school; but as a programmer, I always find subset-sum to be the most tangible and convincing example that NP-hard feels pretty tough.

I love these announcements though; I am always humbled and fascinated by the resulting discussion.


http://www.longbets.org/

is a market for long bets, e.g.

“Over a ten-year period commencing on January 1, 2008, and ending on December 31, 2017, the S & P 500 will outperform a portfolio of funds of hedge funds, when performance is measured on a basis net of fees, costs and expenses.”

PREDICTOR: Warren Buffett

CHALLENGER: Protege Partners, LLC


Costs $50 to publish a bet there, though, so transaction costs would eat up anything but a very, very large bet at 0.2%


I'd put some money on `He hasn't proved P=NP.' as long as the odds are greater than zero.


I believe Intrade are open to suggestions for new markets (they have "Scientific" category where this would fit).


Why do we think this is worthy of voting up? Is there any reason to think it might be correct?


Failure can be interesting. Consider the attempt a few months ago by Deolalikar. The ensuing public discussion, which included many big names in complexity theory and at least one Fields medallist, was very enlightening.

This particular attempt involves someone claiming P=NP, and they have code. Even people who aren't complexity theorists can jump in on this one, by analyzing the code to verify it really is in P, and by trying to find problem instances it fails on.


at least one Fields medallist

I think both Gowers and Tao talked about it. I don’t remember in how much depth.


It looks like the paper is fairly straightforward. It purports to give a polynomial time algorithm for 3-SAT.

Interesting facts:

- It is an O(n^4 m) algorithm, where n is the number of boolean variables and m is the number of disjunctions.

- It not only tells you whether the formula is satisfiable or not, but gives you the satisfying values.

- The author claims to have implemented and tested the algorithm on reasonably sized problems, though I don't see his code posted anywhere (?). Someone else appears to have implemented it here: https://github.com/anjlab/sat3.

- The author references no prior work, just review papers on the problem itself.


> - It not only tells you whether the formula is satisfiable or not, but gives you the satisfying values.

Any yes/no NP solver can be turned into one that does this with polynomial overhead. After finding there is a solution, you ask it the similar problem "does this have solution with the first bit set to true?", then "does this have a solution with the first bit set to <answer to previous question> and the second bit set to true?", and so forth until you've identified all bits.
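
That self-reduction, as a sketch; is_satisfiable here is a brute-force stand-in for the hypothetical polynomial-time decider:

  from itertools import product

  def is_satisfiable(clauses, n):
      # Stand-in yes/no oracle; the argument only needs its answers.
      return any(all(any(b[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
                 for b in product([False, True], repeat=n))

  def find_assignment(clauses, n):
      # n extra oracle calls pin down an actual satisfying assignment.
      if not is_satisfiable(clauses, n):
          return None
      fixed = []
      for v in range(1, n + 1):
          trial = clauses + fixed + [(v,)]   # unit clause: force v = True
          fixed.append((v,) if is_satisfiable(trial, n) else (-v,))
      return [u[0] > 0 for u in fixed]

Each step adds only polynomial overhead, so a poly-time decider yields a poly-time search.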


It is almost certainly not correct, simply because so many have tried, and none succeeded.

However, this guy seems very cool - he is humble about his work, appears very serious, and even puts his code on github. And even if wrong, his approach might be interesting.


"because so many have tried, and none succeeded"

Words worth of a politician, but not a hacker!


No, words worthy of a hacker - we are bound by reality.

Politicians are not.


Our actions are bound by reality. Our choices are bound by what we posit reality to be.


It's on the arXiv, so you know it's legit.


Sarcasm, right? The arXiv has very little quality control (intentionally).


I think the point was that quite a bit of what is on arXiv is from quacks and nutcases. (maybe not the majority, or even a large part - but enough to cast some shadow over arXiv itself, and therefore over other submissions as well).


Just because it's on arXiv doesn't mean it's legitimate. It's more likely to be legitimate if it gets published (with a few exceptions, like the Poincaré conjecture).



Yeah, like that will work.


We just want to know what the flaw in his reasoning is.


Zero other (english) publications by the author in the Cornell archive and only four references within. Not an indicator as to whether the paper is correct (I haven't read it), but that's "smelly".


Not really. There are many great researchers publishing only in their own language. Especially in countries where research is not heavily funded. For some, 800 EUR for a conference is just too much.


Yeah, and, seriously, a Russian?! Please...


Why not a Russian? For example the Russian Mathematics Olympiad is considered harder than the International Math Olympiad by many of the top contestants.

Edit: why downvoting?


I interpreted dauphin's comment as sarcasm.


I know nothing about the P=NP debate, but does the claim P=NP and the existence of a published algorithm (https://github.com/anjlab/sat3) make this claim easier to verify than the claim P!=NP? Isn't the point that P=NP has great practical significance that would be immediately recognized?


You'd be surprised. Sometimes it's much harder to find hidden faulty logic in a proof of an algorithm's correctness or running time.

If I recall correctly, Ramanujan once wrote a proof for 1=2 that baffled mathematicians for quite some time before they figured out what was wrong. I believe that proof was rather short (< 1 page). For a long proof with a tiny error, things could be much worse.


Can you find a reference for Ramanujan's "proof"? I looked briefly and can't find one, but I'd be very interested to see what the proof was.


Here is one proof. I don't know if this can be attributed to Ramanujan.

    Let a, b be equal integers

    a = b
    a^2 = ab    // multiply by a
    a^2 + a^2 - 2ab = ab + a^2 - 2ab    // add a^2 - 2ab to both sides
    2(a^2 - ab) = a^2 - ab
    2 = 1    // divide both sides by a^2 - ab
From p319, Fermat's Last Theorem by Simon Singh


I'm guessing this isn't the proof that jchonphoenix referenced, because it has a fairly obvious error and most mathematicians would see it quite quickly.


Just for those who don't see the error:

  2(a^2 - ab) = a^2 - ab
On this line, a^2 - ab is 0 (look at line 2). You wind up dividing both sides by 0, thus resulting in the false proof.


division by zero ?


A contradiction implies all falsehoods are true.


I was able to find this, though it's just some algebra tricks rather than a professional mathematician's musings:

http://www.innovatia.com/Design_Center/Proof_that_2=1.htm


It certainly would make it easier for a professional programmer to verify, hence it has a certain appeal to this audience. Probably not so much for a mathematician.


If P=NP there might still be no practical implications: if there is a minimum bound on the order of the polynomial, it might remain completely impractical to solve NP-complete problems.



I don't understand something in the blog post. What did they prove in 2002, and what is the new result that they have proved now?


Time to fire up Coq and really prove it.


My great grandchildren will be looking forward to the results :P


There is an Inkling prediction market on how likely it is this will be proved correct by the end of 2011: http://home.inklingmarkets.com/markets/34366


Right, he just didn't want to submit it to a peer-reviewed journal?


I think before I would publish such a thing widely, I would ask a few friends to double check.


lol


sorry guys. this site is not worth taking seriously anymore


Could it be ... P = NP? Most people suspected otherwise.

Someone should really verify his algorithm on various SAT sets. But I have to say that his approach is very good and humble... he suggests he may have solved the problem -- but it is up to us to verify! I'd like to follow this further, so I bookmarked it via an upvote.


This is easy: just put your algorithm behind a web API. When I'm solving my NP-complete problems instantaneously, for free, I'll believe you.


"Instantaneously, for free" is nowhere near what this paper claims (O(n^4m)).


But P is good and NP is bad, and fast is good and slow is bad, so P must be fast!


"Proof by halo bias;" this could be a valuable new technique to math in the social sciences.


Why would a polynomial time algorithm become instantaneous if you stick it behind an API?


The magic of the cloud! Imagine running that on Heroku (for instance): it would be costly (price) but possible.


Just parallelize your algorithm and use n^5 machines ;)


Is any polynomial algorithm parallelizable?


Yeah this is dumb, should have read the OP.


Should have taken time to think. You don't even need to read the OP to know what you said is nonsense.


I don't think one example constitutes a conclusion. While demonstrating the nonexistence of an algorithm for the 3-SAT problem would prove P!=NP, the existence of an algorithm merely means "Move along, let's try a different difficult problem."


That's incorrect, unless I'm misreading you. 3-SAT is NP-complete, which means that if this algorithm's polynomial performance holds up, you can generalize it to solve any problem in NP in poly time.


Just to be nitpicky, 3-SAT is NP-complete, which means it's NP-hard.

The NP-hard property allows you to reduce any problem in NP to it.

As you stated it, generalizing (or reducing) 3-SAT to another problem proves nothing and is a common mistake for undergraduates learning complexity theory (reducing the wrong way).


He said generalize it (the algorithm) to other NP problems, which I took to mean 'turn every NP problem into 3-SAT and solve that'


The word generalize is meaningless and undefined, which is what led to your confusion as well as multiple interpretations (which is one of the things I was picking at).

To say you generalize an NP problem isn't even a statement that typechecks in the language of complexity.

Reduction on the other hand, is a formally defined term and is very specific in what it means exactly.


I agree you can't generalize a problem, but if you make an algorithm work on more than just one specific problem class, aren't you generalizing the algorithm (in this case, by reducing the problem)? The only other word I can think to use there instead of generalize would be 'generify' but that's only used in the context of languages that have generics, IIRC.

If moultano had actually said 'generalize [the problem]' then I wouldn't have even left a comment, because everything you have said is correct, I just don't think his statement needed that clarification. Now, however, I'm interested in whether the word 'generalize' can be used with respect to algorithms. (<--- someone not in complexity theory)


> I don't think one example constitutes a conclusion.

You're wrong. 3-SAT is NP-complete.

Quoting Wikipedia:

"But if any single problem in NP-complete can be solved quickly, then every problem in NP can also be quickly solved, because the definition of an NP-complete problem states that every problem in NP must be quickly reducible to every problem in NP-complete (that is, it can be reduced in polynomial time). Because of this, it is often said that the NP-complete problems are harder or more difficult than NP problems in general."


3-SAT is NP-complete, so if you have an algorithm to solve it in polynomial time, you've solved them all.

http://en.wikipedia.org/wiki/3SAT#3-satisfiability http://en.wikipedia.org/wiki/NP-complete


3-SAT isn't just NP, it's NP-Complete. So a solution for 3-SAT is also a solution for all other NP problems.

http://en.wikipedia.org/wiki/NP-complete


The existence of an algorithm to solve 3SAT in polynomial time is sufficient to prove P = NP, since 3SAT is NP-complete. That is, any NP problem can be reduced to solving 3SAT.


Coq could be used to reduce automatically, and more!


Here's how I intuit this: 3-SAT is like "machine language" for other NP problems. Any other NP problem can be "compiled down" to 3-SAT statements.

Though it's probably inaccurate, that's how I intuit the matter.
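
The "compiler" analogy holds up even in the details: the standard last pass of many reductions splits wide clauses down to width 3 using fresh chaining variables. A sketch (the helper name and variable indexing are mine):

  def widen_to_3sat(clause, fresh):
      # Split (l1 v ... v lk), k > 3, into an equisatisfiable chain:
      # (l1 v l2 v y1) (-y1 v l3 v y2) ... (-y_last v l_{k-1} v l_k).
      # 'fresh' is the first unused variable index; returns (clauses, fresh).
      if len(clause) <= 3:
          return [tuple(clause)], fresh
      out = [(clause[0], clause[1], fresh)]
      for l in clause[2:-2]:
          out.append((-fresh, l, fresh + 1))
          fresh += 1
      out.append((-fresh, clause[-2], clause[-1]))
      return out, fresh + 1

  # (x1 v x2 v x3 v x4 v x5) over variables 1..5, fresh vars from 6:
  print(widen_to_3sat((1, 2, 3, 4, 5), 6))
  # ([(1, 2, 6), (-6, 3, 7), (-7, 4, 5)], 8)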


Though it's probably inaccurate, that's how I intuit the matter.

It's actually quite accurate, but you can do even better: any other NP problem can be "compiled down" to one 3-SAT statement.


You're obviously correct. "3-SAT statements" is pretty meaningless in this context, since you want to satisfy one big consistent statement.


I may be mistaken, and this might require an exhaustive search of the relevant literature, but I believe 3SAT is NP-Complete and therefore this one example does indeed lead to the stated conclusion.



