Hacker News
Issues in the Proof that P ≠ NP (rjlipton.wordpress.com)
108 points by bumbledraven on Aug 10, 2010 | 33 comments



Head spinning. At least I got that bit about LFP... mostly.

The coolest thing about this all is watching the Internet doing what it was built for. Real-time collaboration on a massive scale. The whole of the relevant scientific communities all in one virtual space clamoring at a hundred virtual whiteboards. I love living in the future.


"The coolest thing about this all is watching the Internet doing what it was built for."

And that would be commenting on a paper before anyone has finished reading it?

Just kidding.


Well, that's an interesting point actually. Not just commenting before they have finished reading it: also commenting without enough knowledge to say anything useful about it.

Moderators on meta.mathoverflow.com decided not to allow possibly premature discussions on mathoverflow: http://meta.mathoverflow.net/discussion/590/. The arguments are quite reasonable:

  So answers could linger on MO asserting that some parts 
  of the proof are correct, other sketchy, other flat out
  wrong and precious few people will be able to decide for
  sure if this is true or not.


Consensus is an imperfect metric. Some of those things, even if discarded by consensus in the near-to-long term could prove valid in a couple years' time.

We want as many avenues of exploration as possible, even once the proof is decided right or wrong.


A minor point: that decision on meta.mathoverflow.net barely involved the moderators at all. One said what he was inclined to do, but otherwise it was the MO community.


I am not entirely sure about this myself, but I think I like the way people are handling it. Sure, there will be a lot of questions that are trivial to someone deep in the field, and even more questions that are already answered in the paper, but those should be removed pretty quickly. However, the questions that can't be answered with "See page X" or "Look in that book, page X" will need real explanation, and that will strengthen trust in this proof, should it be correct.


I haven't looked at the proof in any kind of detail.

But comments by James S. Gates etc. raise a question in my mind. Any statistical argument that a given phenomenon is unlikely is based on the space of phenomena behaving very roughly "commutatively": AB and BA equally likely, or at least correlated.

How do you apply that kind of reasoning to a totally arbitrary process of computation, a process which wouldn't necessarily respect any kind of randomness? It seems like any statistical argument for P ≠ NP has to include a detailed and direct discussion of how to escape this problem.

Anyway, that's my crude effort to translate the discussion into my mere math MA understanding...


Probabilistic arguments for deterministic systems are standard practice. Search for Erdős and probabilistic methods to see several examples in combinatorics.

In computation theory many proofs have the tagline "...with probability 1", meaning the author enumerated all desired possibilities and showed their probabilities summed to one. Most people accept this as proof, although it is less preferred, in the way that proof by contradiction is less favored than direct proof.


Note: I have not read the paper in detail yet; I just want to explain certain statistical proof techniques, which can be quite counter-intuitive, and maybe that helps you :)

You can, for example, compute the probabilities of certain properties of an algorithm and use these probabilities to show that something exists, or that it does not. Furthermore, you can use the expected value of a certain process to demonstrate that solutions with a certain value exist.

For example, if you can demonstrate that the expected value of a maximum cut is 5 over a family of graphs, then you can conclude that this family contains maximum cuts of value at least 5. Some will have less and some more, but cuts of weight at least 5 must exist.
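The "some outcome must reach the average" step is the heart of the probabilistic method, and it's easy to check on a toy instance. A minimal sketch (the 4-cycle graph and unit edge weights are my own choices for illustration, not from the thread):

```python
import itertools

# Toy instance (my own choice for illustration): a 4-cycle with unit weights.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

def cut_value(side):
    """Number of edges crossing the cut induced by a 0/1 side assignment."""
    return sum(1 for u, v in edges if side[u] != side[v])

# Average cut value over all 2^n vertex assignments. By linearity of
# expectation each edge is cut with probability 1/2, so the average
# must come out to len(edges) / 2 = 2.
values = [cut_value(side) for side in itertools.product((0, 1), repeat=n)]
average = sum(values) / len(values)

# The probabilistic method: some cut achieves at least the average, so a
# cut of weight >= 2 exists (here the best cut actually has weight 4).
print(average, max(values))  # → 2.0 4
```

The point is that the averaging argument proves existence without exhibiting the cut; the brute-force enumeration here only serves to confirm the expectation.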

Or, for example, if you can demonstrate that the probability of a Turing machine in a certain class accepting or rejecting within a polynomial number of steps is 0, then it is impossible that this class is identical to P. On the other hand, if you can demonstrate that this probability is nonzero, then you have demonstrated that the intersection of this class and P is nonempty.


I also don't have high hopes for the statistical angle, in fact I seem to remember that statistical arguments have been around for a long time. But they are not good enough as a proof.


Precisely! I found the crowdsourcing approach that got started at http://geomblog.blogspot.com/2010/08/on-deolalikar-proof-cro... amazing. Terence Tao had done a similar thing, not for paper review but to prove some theorems.

Compared to this, the rigid approach adopted in Math Overflow to close any discussion on the issue seems so 19th century. See the discussion here: http://meta.mathoverflow.net/discussion/590/


I think the stance taken by MathOverflow is correct. It would be one thing if that were the only possible venue for discussing the proof's validity, but much better discussion venues exist. The linked blog is a case in point. MathOverflow is geared towards being a Q&A site.


Totally agree. MO seems to enforce a rigid, sanitary culture of precise questions and answers. A perfect approach to mathematical discovery and enlightenment.


What makes you think so?

I always found the process of discovery (when you are working stuff out on your own) to be quite messy. You clean up only after you have found your way.

For discovery by reading someone else's work, I agree that cleanliness helps. (That's one of the reasons to clean up your work in the first place.)


The communication part, which you alluded to in terms of reading someone else's work, is what I was referring to.


I wouldn't trust a (relatively new, correct?) Q&A site with commentary on research-level work, at least not above the level of informal chat. While outsiders, i.e. people largely unacquainted with the subject at hand or lacking the expertise, can provide a fresh perspective, I would not expect the benefits of a discussion at such a high level to outweigh the potential distractions and/or misguided approaches. Let alone the other issues that arise when discussing unpublished work.

On the other hand, discussions in the spirit of "what are the concepts behind this proof?", or "what are the implications if this proof is correct/wrong?", or even "what is the approach of this proof? what are the pitfalls associated with such a process?" are excellent ways of disseminating knowledge and educating about the subject, a) without having to resort to the few experts in the field and b) keeping the content in line with the principles of a Q&A site.


You don't sound familiar with MathOverflow: it is for research level questions in mathematics, and for the most part lives up to that purpose.

The reason I wouldn't want to read about this paper there is that the MathOverflow users don't seem to include many computer scientists; they are mostly mathematicians who work in non-computery areas of math.

In other words, I would go to MathOverflow for comments on a breakthrough paper in certain areas, just not this one.


Could it be that the million dollar millennium prizes are actually slowing down the resolution of important problems because they discourage crowd sourcing proofs?


I may be out of tune with many others here, but I think certain constructive processes (as in creating a many-faceted proof, outlining overarching software architecture, making the storyline of a good novel/movie) are most efficient with small groups of people. Otherwise you run into issues of "design by committee" or "too many cooks".

It is easier to crowdsource once the main outline is in place: it structures a vague problem into a set of more specific subtasks that can (more or less) easily be distributed across many people.

Like in this case: Deconstructing an existing proof can work very well with crowdsourcing, since a proof contains a relatively small set of specific claims, and each "crowd" contributor can focus on one particular issue.

But this works because the crowd now has a common focal point and task list, created by the published paper. It's not clear to me how (or whether) the process could be reversed, so that the same crowd and effort could be coordinated to co-write an original paper with an alternative proof.


I'd agree that crowd sourcing might not always be the best way to approach things. But the general situation that the person who gets the final result wins very likely does inhibit effective work on the solution of very difficult problems.

I mean, I think there's a consensus that if Wiles had been publishing his results as he went along, Fermat's Last Theorem would have been solved earlier, but probably not by Wiles himself. Whether the current P vs. NP proof is right or not, it too is clearly the product of long work in a vacuum. That's not the best way to do things, if nothing else as a matter of sanity. If you're publishing as you go along, you've got much more of a sanity check. Further, if research is open, you can do single-person, small-committee, and crowdsourced versions.

Still, I don't think this is primarily a matter of the Millennium Prizes in particular but of the tremendous competitiveness of academia in general - Wiles' effort predated the Millennium Prizes - and he couldn't get the Fields Medal either: too old.

Oddly enough, the Netflix prize actually seems to have produced a fusion of efforts in the end. Perhaps future prize creators could think about that. Both the Millennium Prize and the Fields Medal seem deeply problematic in their effect on mathematics in general.


Good points, and I agree.

The Netflix prize is interesting. According to a summary post on the Netflix forum

    the early results were mainly by individuals... 
    the team members began to coalesce and combine, and 
    in the end, entire teams coalesced and recombined.
http://www.netflixprize.com//community/viewtopic.php?pid=961...

I can only speculate that the combination of a deadline, a problem that was too hard for any individual, and a common forum for exchanging ideas helped encourage team formation. (I.e., when you realize that you can't do this alone, teaming up is a rational thing to do.)

I assume that the problems we (as in society, humanity) want to tackle will grow ever more complex in the future, and may easily outgrow the capacity of individuals.

So yes, perhaps we could/should learn from how the Netflix prize played out...


I think some risk-averseness drove the combination as well: at the very end, two teams with very similar performance were looking at it being more or less random chance who'd finish a hair ahead, with one getting $0 and the other getting $1m. So they merged and split a much safer $500k each.


Would they pursue the proof in the first place, if not for the prizes (recognition, among other things)? I think it's analogous to how businesses work, with competition, secrecy, etc. And the debate over whether this approach to proofs is good overall is similar to the debate of capitalism vs. communism.


It seems possible to me, but doubtful. Even contributing work worth a footnote in the wikipedia article about solving P =? NP would validate a lifetime of research to many people.


I don't think anyone working on these kinds of problems for years is aiming for the million dollars. That's just icing on the cake. The main reason for not publishing partial results (which could be considered a sort of crowdsourcing, as it would lead to dozens of people reading and inspecting them) is that someone could recognize the trail and beat you to its endpoint.


>... because they discourage crowd sourcing proofs?

Do they? It seems it would be valuable in any pursuit, as it costs the reviewers less. It certainly discourages crowdsourcing solutions, as then how do you split the winnings, but proofs? That seems to require an attempted solution first, to be checked, in which case credit is pretty easily assigned.


well, you can't crowdsource everything ... sure, crowdsourcing things that most humans can do (e.g., labeling images) works pretty well, but solving open problems in math or composing masterful works of literature, eh more doubtful. the crowds are only wise in certain respects.


Mathematics has been crowdsourced effectively. See: http://www.thebigquestions.com/2010/04/08/blogging-tic-tac-t...


It seems like a bit much to hope to understand and verify the proof without a huge investment of time and effort. The problem is exponentially compounded if you don't already do research in theoretical computer science and have the appropriate logic background.

But! All of this has put a spotlight on a theorem that I had somehow never appreciated. It is a beautiful theorem that is well within reach of a mathematically literate computer scientist: P = FO(LFP) over finite ordered structures. If you start reading up on this, it's immediately clear that it's interesting, because FO(LFP) says nothing a priori about computational resources in the sense that P does.

Anyway, a sufficiently curious reader can look it up as theorem 4.10 in the following text:

http://www.amazon.com/Descriptive-Complexity-Texts-Computer-...

That book is pretty light on the basics of logic. I used this book:

http://www.amazon.com/Mathematical-Logic-Undergraduate-Texts...


Learn to use the DOM instead of HTML strings, and you'll almost never run into those XSS problems. You can replace all of your HTML sanitization functions with document.createTextNode(text).


I think you're looking for a different thread...


Oops, I meant to reply in the thread about that node.js chat application full of XSS holes.


No, given how cryptic the original post looked to me, the comment looks absolutely spot on!



