P = NP Proofs: Advice to claimers (rjlipton.wordpress.com)
153 points by furcyd on April 22, 2019 | 156 comments



On a related note, Scott Aaronson once wrote up a few heuristics he uses to quickly evaluate how likely a P/NP proof is to be incorrect: https://www.scottaaronson.com/blog/?p=458

Aaronson's rules are more technical than the ones given in this article, the latter of which can generally be applied to most claimed (dis)proofs of famous open problems. In particular, poorly typeset papers from a solo, unknown author are highly unlikely to be correct when they're claiming to resolve well known open problems.

In regard to the P/NP problem specifically, Aaronson's examples of weaker open problems which would likely be resolved first - or at least accounted for in some nontrivial way - are analogous to this article's point about proving weaker theorems first. It would be very suspicious for a proof that P and NP are distinct (or equal) to work at such a high level that it doesn't also prove new results about the lower bounds of a litany of other problems.


Is all the good math institutionalized now (i.e., you can't do good math without the benefit of learning the background and conventions within the institution)?

What's the intuition that an extremely important proof will come from someone who gets all the TeX tweaked to camera-ready perfection, goes around and gives the talks, and all the conventional things... rather than a genius out of nowhere who walks into a stranger's office, wordlessly drops on their desk a stack of handwritten pages wrapped in twine, and walks out, never to be seen again?

I was thinking of this because I once knew an autistic person, from some humble and isolated upbringing, who was getting some informal math instruction, and they told me they thought they knew a proof for P=NP. I considered it extremely unlikely, but not utterly impossible. (They reportedly tested "off the scale" high in some particular cognitive regard, and I'd noticed a couple of minor superpowers.) If that person, to use them as an example, ever actually worked out the proof, and wanted to "turn it in" with their own judgment and access, I wonder where they'd go. And whether it would be somewhere that would take them seriously enough to look at it, and whether they could tell that a proof in unconventional terms had merit. If they got turned away from the first place, would they be discouraged and give up, or would they keep trying, there or elsewhere? If they persisted, would they appear to be a crank because of their persistence and frustration?


> What's the intuition that an extremely important proof will come from someone who gets all the TeX tweaked to camera-ready perfection, goes around and gives the talks, and all the conventional things

I think it's more that vanishingly few people with poor notation are able to get things completely right. Mathematicians acquire this intuition from marking thousands of undergraduate students' work and attempting to understand the research of (at least) hundreds of postgraduate students and professional researchers (i.e. everyone "in the system"). In these people's work the intuition "bad notation => mistakes" is almost universally true, so the odds against an outsider with poor notation accomplishing something mathematicians have been chipping away at, without success, for decades are astronomical.

It's not impossible, of course, but so unlikely that it's almost certainly not worth the time and effort to look into the details. It would be almost inevitably disappointing, and there's not enough time for the more trustworthy interesting stuff as it is.

Edit: a person like the specific one you're describing can earn professionals' time and effort (trust) with smaller results that are insightful, and short and correct enough to be easily checked, and then by showing that they can learn from their mistakes.


> Is all the good math institutionalized now (i.e., you can't do good math without the benefit of learning the background and conventions within the institution)?

Probably not; there are a surprising number of interesting unsolved problems that could be tackled with roughly an undergraduate level of math or thereabouts. In the less-well-traveled portions of math, there's definitely more work than workers to do it. But the problems that attract the cranks (things like P=NP, Riemann, Hailstone) have been open for long enough and studied sufficiently thoroughly that you're likely to fall into an already-known trap if you haven't studied hard enough to know what's been done before.

I'll also point out that it's not so much that you need to study within the rigors of school to get this information as that you need to have studied enough samples to know the particular formatting people are looking for before your work gets accepted. It's not a sign that you have the academic heritage; it's a sign that you've at least done the required reading.


You are focused on one out of the 8 heuristics that he describes in that post. He speaks about other, more important, technical ones before he even mentions that one. He also prefaces everything by saying "the author's reputation or institution must not be a factor, for ethical and practical reasons". And I stress the word: heuristics. It's a rough way of estimating how likely this is to be correct, since it is impractical to carefully examine dozens of these claims per year.


I think the intuition is that the best mathematicians, autistic or not, tend to be in academia already. So your prior for someone from the general population should be that they're more likely to be a crank than a genius, ie non-academia has many more cranks than geniuses.


It's definitely intuitive to me that almost all claims from outside the institution will be mistaken/naive. But, for really big wins that have thus far evaded the institution, what's the likelihood that the solution will come from within the institution rather than from outside? (I'm asking sincerely; I'm not a mathematician, and don't know.)


There's only one example I've heard of (some Russian guy), but generally they tend to involve innovative approaches within the establishment.

(As said Russian guy proves, sometimes this heuristic fails; but your prior should be that it's likely correct.)


If you are thinking of Perelman and the Poincare conjecture then I don't think his career really maps to the kind of "outsider" that is described in this thread: https://en.m.wikipedia.org/wiki/Grigori_Perelman#Early_life_...

He had a PhD in math with enough success to be offered faculty positions at Princeton and Stanford.

The fact that he withdrew from academic life after that doesn't matter.


I was thinking of some Indian guy and (not as nice of an example) some patent officer in the Swiss patent office.

There also is the nice example of https://en.wikipedia.org/wiki/Marjorie_Rice


> I was thinking of some Indian guy

Assuming you mean Ramanujan. Although he was self taught in mathematics and did much of his work outside the university setting, the vast majority of his significant discoveries and publications were made within the academic framework.


> I think the intuition is that the best mathematicians, autistic or not, tend to be in academia already.

Is that a safe assumption, though? Won't it limit the field to people who are happy making a low salary and don't mind teaching?


I'm not talking about a hard-and-fast policy; I'm talking about a Bayesian, "if a random person off the street sends a proof what should my priors be?"


This is an asinine assumption. What of the incredibly "genius" poor folks who do not have the opportunities to join academia?


The point is that raw genius is insufficient for replicating Ramanujan-like accomplishments in the 21st century. It's almost irrelevant how brilliant someone is these days - if they haven't had the opportunity to learn a vast amount of theory, they can't really make significant progress on theoretical problems.

The field simply has far less low hanging fruit than it did a century, or even a half-century ago. It's not reasonable to expect that a lone genius with little to no academic background in mathematics could resolve a famous open problem through sheer creativity. For literally generations the field's most intelligent researchers have spent their careers chipping away at this problem.


Just yeah, I didn't let Ramanujan into my school, much too messy, over-promised, couldn't do proofs right. Not really academic material. He refused to learn LaTeX.


From the formatting point of view, it's also easier to write lots of maths in a consistent way in LaTeX than in Word. Similar to: you can paint in Excel, but it's uncommon enough that you'd become known for painting in Excel (Tatsuo Horiuchi). Or programming using Notepad. Ignoring what most people see as an improvement in tooling can easily be used as one of the indicators.


> Ignoring what most people see as an improvement in tooling can easily be used as one of the indicators.

Well said. I would make the distinction though that intentionally ignoring a well known toolset can be liberating in the exploration of the unknown, whereas being ignorant of a well known toolset by not investing research time is likely to lead one into well known pitfalls.


> Is all the good math institutionalized now (i.e., you can't do good math without the benefit of learning the background and conventions within the institution)?

The short answer is yes. We haven't had a Ramanujan in decades, and they were more or less always extraordinary. Insofar as you can generalize all of its subfields together, it's reasonable to say mathematics is an extremely mature discipline now.

What I mean by that is there isn't much low hanging fruit around. Even when Ramanujan was dazzling Hardy with his results, many of them were already known. That was over a century ago. There just isn't much fertile ground left where a single brilliant mind can make significant headway based on raw talent or intuition, without first working through significant education in prerequisites. It's more and more commonplace for authors to collaborate on their work, because compelling problems at the forefront of mathematics research are increasingly requiring cross-disciplinary knowledge to resolve.

To focus on Ramanujan a bit in particular - one of the reasons Ramanujan was so talented was that he not only came up with novel results in complete isolation; he also came up with original perspectives and theorems. That means he was capable of posing interesting questions, which is usually much more interesting and fruitful for further research. That's very inspiring, but the reason I point it out is that Ramanujan wasn't solving open problems in the West left and right so much as he was exercising ingenuity to pose - and resolve - open problems in fertile ground he could reach. But a century later, the amount of existing mathematics has increased so much in both breadth and depth that a burst of creativity is almost certainly not going to let an undergrad find something that e.g. Rudin hasn't written down somewhere already. But more importantly, it's vanishingly unlikely they will resolve a famous open problem which is the subject of intense research interest. All the stories of untrained amateurs doing that are from the mid 20th century or earlier.

To turn your question on its head a bit: consider how astonishingly brilliant an untrained individual would have to be to find a nontrivial result that the rest of the mathematical community hasn't found. Either they're inventing their own definitions and discovering entirely new mathematics in isolation (wow!), or they've somehow found a way to use the tools accessible to them (university analysis and algebra, at best) in a way every other trained mathematician has not. It's rare even for senior math undergrads to publish nontrivial research on their own. It's significantly rarer for that research to improve progress towards an open problem. The only example I can think of off the top of my head where undergrads actually resolved a well known open problem is AKS, and even then it wasn't a solo author.


But when we are talking about unsolved problems, we are in the land of vanishingly small probabilities in any case. Could it not be that the very fact that someone has been trained in conventional mathematics is a hindrance to their resolving an unsolved question, and that someone thinking in isolation has an advantage?

Here's Grothendieck, for example:

“In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still, from the perspective of thirty or thirty-five years, I can state that their imprint upon the mathematics of our time has not been very profound. They've all done things, often beautiful things, in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they've remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have had to rediscover in themselves that capability which was their birthright, as it was mine: the capacity to be alone.”

I can certainly see this happening: someone away from the pressures of publishing, the drudgery of admin tasks, the frustration of applying for grants or department politics, or the fear of looking "bad" to their peers, etc., devoting their time to mathematics out of pure joy, play and drive, and coming up with novel definitions and assumptions that those in establishment mathematics, for purely sociological and psychological reasons, have not even dared venture into.


I can see a case for a trained mathematician contributing a legitimately new perspective to the community by withdrawing and spending time looking at things in an unorthodox manner. That's closer to what Grothendieck was describing than what you're posing.

In the early to mid 20th century, it was feasible for someone to pick up a book on number theory and apply relative genius to an open problem to quickly solve it. The bottom line is that prerequisites for doing so were often just basic calculus and high school algebra. An understanding of sequences and series went a long way.

This isn't the case in the 21st century. We're firmly out of that territory. You can't offer a profound new perspective on a thing which you can't understand, and the barrier to understanding research mathematics continually rises. The forefront of modern mathematics is so far removed from even graduate level mathematics course material that it's not going to just be intuited through untrained brilliance. You have to actively learn it, which is (unfortunately) vanishingly unlikely outside of academia.

If the tools available to you are basic real analysis and linear algebra, P/NP is beyond your reach - full stop. At this point we actually have proofs that a valid proof of P/NP (and similar problems) cannot be achieved through large swathes of elementary techniques.

We don't live in a world like Good Will Hunting where an amateur can succeed by being a genius. Genius is useful in the long term but not enough on its own.


What prerequisites, according to you, does a graduate from a good CS program need to understand P/NP well enough to begin taking a stab at it?


For starters:

1. All math and CS prerequisites to an intro computational complexity course.

2. An intro computational complexity course.

3. All math and CS prerequisites to an advanced, graduate-level computational complexity course. Let’s say by this point you have worked through two linear algebra courses, three calculus courses, one or two discrete mathematics courses, some graph theory, some combinatorics, some logic, and a couple of algorithms courses.

4. The advanced, graduate-level computational complexity course. Hopefully you work on probabilistic computation and, in particular, quantum computation.

5. Now read and understand the relevant papers advancing the field (not just this one problem) for the last two decades. That one paper from 1990 about reducing the permanent to the determinant? You should know about that.

6. Finally, to reach the temple you must walk the path littered with the bodies of would-be explorers before you. Read the papers of failed proofs, starting with the easily refutable ones. The most sophisticated failed proofs have errors so subtle that they are useful for the research community in their own right as an exercise in peer review.

As a rule, a grad student near their PhD in this area should know of everything Aaronson mentioned here: https://www.scottaaronson.com/talks/pvsnp.ppt. None of that should be unfamiliar or unknown. A postdoc and beyond should actively have ideas percolating on how to chip away at some small aspect of a lesser problem featured therein.

Here’s the reality: at almost every stage of a researcher’s career, “taking a stab at” a famous outstanding problem is the wrong way forward. Usually a problem is still outstanding because it actually needs a new theory - this is the practical utility in solving theoretical problems in the first place. Therefore the best bet for solving this problem is actually to chip away at it over a long period of time.

Think hacking through a rainforest to reach a goldmine, not managing to somehow parachute to the goldmine directly when it’s hidden beneath the trees.


Ok. This looks like sound advice.

I am through 1, 2 and 3. I was thinking of learning/going straight for Ketan Mulmuley's geometric complexity theory. So you'd say that's a somewhat misguided approach, right?


Thank you, that makes sense. Rather than this reality being discouraging to all the aspiring mathematicians who've been learning and building on their own, I think it should be encouragement -- to find ways to get closer to the community, to learn more of what they already know, and perhaps eventually find mentoring or collaboration.


If you're skilled and practiced enough to be a world-class talent in your field, it is highly unlikely that you don't work in that field as a professional.


I thought you meant a simpler heuristic: http://haspvsnpbeensolved.com/ :-) (I had thought Scott Aaronson was somehow connected with it, but probably not, it seems.)


That’s kinda funny, but it should at least throw an error when you click the button without providing a URL. :P


I know the guy who made that website. I think it might have been done at the request of Scott, or Scott advertised it on his blog at some point.


I hope to see the day when this website gives the wrong answer.


Carl Sagan once complained that all the people who contacted him claiming to talk with aliens always received simple, unoriginal messages like "love each other". The aliens never provided anything humans did not already know.

If someone comes up with a proof, they are in a good position to troll.


Can you imagine if it was the other way? "Well I have no idea what it means, but they said these few sheets of math would be interesting to you and I thought you'd like to have them."


> Can you imagine if it was the other way? "Well I have no idea what it means, but they said these few sheets of math would be interesting to you and I thought you'd like to have them."

I can easily imagine that there exist lots of math nerds with a love for alien conspiracies on the internet who are keen on decoding these formulas.


I'd imagine something actually revolutionarily useful would probably have controversial, convention-breaking assumptions that people would be very skeptical of, along with some novel concepts and definitions most wouldn't see any value or sense in.

If this were to actually happen, I think most people would discount it as gibberish, because it would appear to be wrong - the product of inventiveness and misunderstanding. If someone with a Fields Medal published it, you'd take it seriously; but if it's someone who works retail and is talking about aliens, you're probably going to ignore it.

There's a Twilight Zone episode about that, where someone travels back in time to prevent things but isn't taken seriously and can't stop disasters.


You could go around asking successful mathematicians to write out some of their favorite unsubstantiated hunches and post a compilation of them as an alien message.


A simple litmus test that saves you from most clickbait is whether the actual paper title contains, more or less, the equation P=NP or P!=NP. I cite Andrew Wiles's proof of Fermat's Last Theorem, titled "Modular Forms, Elliptic Curves and Galois Representations". The title will be about the how, and not some pop article title.


As a counterexample, the paper showing that primality checking is in P was called "PRIMES is in P".


Another counterexample is "IP = PSPACE" by Shamir.

I think it's nice when titles include the main result plus a few words about methodology, like "IP = PSPACE using Error Correcting Codes", or "The PCP Theorem by Gap Amplification".


The title "Modular Forms, Elliptic Curves and Galois Representations" was the title of the talk where he first presented his proof. He chose that title because he wanted to keep the fact that he'd actually proven FLT as a surprise twist for the end of his talk and didn't want any 'spoilers'.

The "Modular Forms, Elliptic Curves..." part of the title comes from the fact that Wiles' approach wasn't to directly prove FLT. What he did was prove something called the Taniyama–Shimura–Weil conjecture, which is a statement about the connections between modular forms and elliptic curves. It was already known at the time (thanks to Ribet's Theorem) that Taniyama–Shimura–Weil being true would also mean that FLT had to be true as a consequence.


>Yes, this post is about our own “claimers” in complexity theory.

>The TV Claimers meet a grisly end. We will not say any more about it. We want to be nice.

>Please: Do not stop reading. Yes we know that it is likely that no claimer really has such a proof.

I understand the author's frustration on the subject, but I don't think his disdain is warranted. This reminds me of the time Mehdi Sadaghdar, who runs the YouTube channel ElectroBOOM, disagreed with former MIT Professor Dr. Walter Lewin's proposal that Kirchhoff's Law is "for the birds".

Sadaghdar reproduced the results from Dr. Lewin's experiments, but proposed an alternative explanation about what could be happening. He created a video for Dr. Lewin, and stated that he was open to being proven wrong, ending that "either way, the science will win".

Dr. Lewin's response, instead of being educational, was belittling: he lumped Sadaghdar in with the group that holds to Kirchhoff's Law "religiously", and said that he needs better education. The explanation was wrapped in copious amounts of disdain. Sadaghdar eventually realized that they both fundamentally believed the same thing, and that their disagreement was only an error in communication. However, Dr. Lewin's videos are far more disliked than liked, and he didn't come out of the exchange appearing to be the good guy.

My point is: maybe generalizing the people you're criticizing (even though you "think the vast majority of claimers of P=NP or other big results have almost always worked alone"), and comparing them to a group of people who die a "grisly" death in a fictional TV series, before you begin a lecture, isn't educational.

[0]: Sadaghdar's original video: https://www.youtube.com/watch?v=0TTEFF0D8SA

[1]: Dr. Lewin's response video 1: https://youtu.be/AQqYs6O2MPw

[2]: Dr. Lewin's response video 2: https://youtu.be/ototTU5NUNA

[3]: Dr. Lewin's response video 3: https://youtu.be/d_XqrZo5_7Y

[4]: Sadaghdar's last video: https://www.youtube.com/watch?v=Q9LuVBfwvzA


I was bracing for you to reveal that Sadaghdar was right all along and that Dr. Lewin was wrong, but... in the end it's about whose YouTube video like/dislike ratio is better?


Yes. And Sadaghdar was right all along, but it turns out so was Dr. Lewin; it was an error in communication. Dr. Lewin defined Kirchhoff's Law in a particular way that Sadaghdar didn't agree with.

I'm sorry if that wasn't dramatic enough for you, but that's the only objective metric I have to show how either of their responses was received.

PS. Sadaghdar has 2.5 million subscribers, and Dr. Lewin has just under 400k. These are videos with thousands of like/dislike votes each.


I'm a crank. I found this article very valuable.

In 2010 I put online a brief refutation of the EMH, the efficient market hypothesis. Here it is: https://arxiv.org/abs/1011.0423

If I had seen this then, I think I would have followed the directions in the inset box, where the article suggests saying something like: "The reason I succeeded in finding an algorithm for P=NP is that I noticed that... No one else seems to see that this insight is very powerful. It is very useful since it implies ..."

I could have simply written: "Nobody has bothered to test the Efficient Market Hypothesis, but since I have some background in computer science I made an unusual mental connection, and it follows that it doesn't need to be tested. It cannot be true."

(My proof outlined a way to show it has to be false.) I remember my thinking at the time was that I didn't need to explain it at all, just make it correct and brief.

At the time, I didn't want to include anything personal in the paper. Perhaps I should have. It's not a very good paper.


Isn't this equivalent to saying "I have incorporated a company, it owns a lockbox, and only I know what's inside"? Only an insider knows the real value. But even Wikipedia's summation of the strongest version of EMH says that it's impossible if insider information can be hidden: "If there are legal barriers to private information becoming public, as with insider trading laws, strong-form efficiency is impossible"

https://en.wikipedia.org/wiki/Efficient-market_hypothesis#St...

So it seems that your contention - that insider information that hasn't leaked yet obviously won't be reflected in the price - is spelled out already.


Okay, so going with your analogy, it's like publishing a picture of what's inside the vault, but you publish the picture encrypted. Imagine the encryption is a 20-letter random password in winzip (imagine winzip is secure) protecting the picture of what is in the vault. Although only an insider knows what's in the vault, the public gets the encrypted picture. Every day you publish one more letter of the password. So if the password is p?jbAL;nQpg-37(z(feH then on day 1 you publish just "p" (the first letter), so now it's as strong as a 19-letter password since you gave them the first letter.

I don't know exactly what will happen between day 1 and day 20, but on day 19 anyone can just try every 20-letter password starting with the first 19 letters you've published by that date. So if by then you've already published p?jbAL;nQpg-37(z(fe, they'll just try

p?jbAL;nQpg-37(z(feA p?jbAL;nQpg-37(z(feB p?jbAL;nQpg-37(z(feC p?jbAL;nQpg-37(z(feD

in winzip. Anyone can just try all of them until they find the one that unlocks it. So on day 19 if it's worth knowing, for sure someone will take a minute or two to try it. Not that hard if there's just 1 missing letter.

So for sure if you tell people it's a 20-letter password and give them 19 letters they will figure it out before you've given them the last letter.
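
To make the day-19 step concrete, here's a minimal sketch of that brute force. The check_password stub, the secret, and the alphabet are all made up for illustration; a real version would actually try to decrypt the archive:

  import string

  SECRET = "p?jbAL;nQpg-37(z(feH"          # stand-in for the real 20-letter password

  def check_password(candidate):
      # Stand-in for "try to open the encrypted archive with this password".
      return candidate == SECRET

  known_prefix = SECRET[:19]               # the 19 letters published by day 19
  alphabet = string.ascii_letters + string.digits + string.punctuation

  # Only one character is missing, so this tries at most ~94 candidates.
  for last in alphabet:
      if check_password(known_prefix + last):
          print("recovered:", known_prefix + last)
          break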

All right, but on day 1 nobody can guess all 20 characters in the password. That's too hard. It's impossible.

So every day you make it easier, but it'll be "broken" sometime between day 1 and day 20, assuming that the insider information doesn't leak in any other way. That means someone will break it, but not on day 1 - on some day between day 1 and day 20. The person who analyzes it "best" will have the insider information first, but for sure we don't have to wait until day 20, because by then it's ridiculously easy to analyze.

Meanwhile you're making the analysis easier and easier. The person who can do the analysis best will have it first, and this proves that there's no "infinitely great analysis" already being done all the time by public markets. It's a simple demonstration of this fact. I don't think you actually need to do the experiment to know the results. Hope this helps.


Your explanation does not disprove the EMH as far as I can tell.

Sure, there's some people who will have information sooner than others. In fact, the EMH says that "people who outperform the market do so based on chance" and it's expected that some people do win some of the time.

What better test of chance than who guesses the password soonest by luck?

Okay, fine, it's not luck you say, it's whoever has the best supercomputer to crack the password each time that happens. However, in reality, contrived examples like brute-forcing passwords aren't frequent parts of the stock market. Rather, it's events that traders predict the potential impact of, which is much closer to a random affair.

What you've shown in your comment is that it's difficult to clearly define private and public information in some contrived cases.


I think difficult analysis is a very frequent part of the stock market.

My argument was not so much about luck as about speed of analysis. The EMH doesn't allow some people to analyze information faster than others. For example, if Intel always releases a 450-page earnings report at 11 o'clock, and it takes 4 hours for the market to really analyze it because it's written really abstrusely and that's the fastest anyone can read and figure out its meaning, then under the EMH there's no scope for 1 specific trader to read it in 1 hour and perform the analysis faster, before the price has adjusted to the new information. In fact, under the EMH the price has to instantly fully reflect the implication of the information in the report the moment it is released. If two hours in, a famous analyst (Warren Buffett, let's say) tweets about it, and that tweet moves the price, under the EMH it must have moved the price because of new information. It can't just be because the famous analyst is smarter and analyzes better and faster than the rest of the market. Not possible.

That doesn't mean that until the tweet the analyst's better-than-market analysis is somehow "private" (rather than public) for 4 hours. There's no new information; it's just that the public price doesn't fully reflect the completed analysis. The market can be a bit dumb.

But how do you know the facts I just gave are true? How do you know the completed analysis can't be factored into the price the instant the report is released? (The price does jump instantly.) Maybe it jumps to the new fully completed analysis price? Maybe everything I wrote is not true and in fact the market is perfectly efficient, and only new information, or stochastic events, change the price? Maybe it fully reflects all public information all the time?

So I showed how to show that analysts can't all work instantly - yes, under a contrived example. The EMH doesn't require that markets only be efficient under conditions that aren't "contrived", but rather all the time. You can run the experiment and show it if you want; you can literally see the stock market figure it out. They could also figure it out wrong. Maybe early on #117 will spike (so that prices look like this: https://imgur.com/a/C782Fgt ) - it could be for any reason. Maybe a false rumor that it was leaked (a false leak), whatever. But the market gives the rumor just a 10% chance of being true. But regardless of the reason for the spikes, at the end there must be a single spike, in the value of the stock that actually wins. It must be around $1 billion since it'll have that in cash. (It's important that people actually believe this will happen.)


Hi! I am a person who reads crankery.

First, thank you for self-assessment. It's rare.

This paper constructively suggests that P=NP iff EMH: https://arxiv.org/abs/1002.2284

It may be worth reading up on PPAD and Nash's thoughts on P=NP.

Do not despair, but do not expect to be correct, either. Explore and be wrong, but look for proofs, too. Above all, do not forget that crankery is fundamentally about being unwilling to see existing proofs and results. Always try to open your mind.


Thank you for the link. You say, "do not expect to be correct", but the abstract of the paper you linked says "Since P probably does not equal NP, markets are probably not efficient". Do you agree with the paper?


Not GP, but I do not. The paper is built on a pretty basic error:

> For simplicity but without loss of generality, assume there are n past price changes, each of which is either UP (1) or DOWN (0). How many possible trading strategies are there? Suppose we allow a strategy to either be long, short, or neutral to the market. Then the list of all possible strategies includes the strategy which would have been always neutral, the ones that would have been always neutral but for the last day when it would have been either long or short, and so on. In other words, there are three possibilities for each of the n past price changes; there are 3^n possible strategies.

The number of possible n-move histories is 2^n, not n. That means there are 3^(2^n) possible strategies. Nondeterministically searching that space in polynomial time does not work. The claim that a strategy could be verified in O(n) time is dubious as well since the size of a full description of a strategy is (in the worst case) exponential in the length of history it considers. So their claim that finding a technical strategy which performs well in backtesting is in NP is not proven.
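
A quick sanity check of that count, brute-forcing the tiny cases (this is just arithmetic, not anything taken from the paper):

  from itertools import product

  def count_strategies(n):
      # A strategy maps each of the 2**n possible n-day price histories
      # to one of three positions: long, short, or neutral.
      histories = list(product([0, 1], repeat=n))            # 2**n histories
      return len(list(product("LSN", repeat=len(histories))))

  for n in range(1, 4):
      assert count_strategies(n) == 3 ** (2 ** n)
      print(n, count_strategies(n))                          # 9, 81, 6561 -- not 3**n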

They repeat the same error later:

> there are 2^(t + 1) possible strategies (because every possible strategy is a mapping from 2^t different possible sequences of 1’s and 0’s of length t into two possible choices, LONG or OUT)

There are other issues, like assuming a strategy that backtests well will continue performing well in the future.


Thanks, great analysis. (I take your point.) How about my paper I started this thread with:

https://news.ycombinator.com/item?id=19722455

Any thoughts?


There are two very distinct statements which say entirely different things: "markets are probably not efficient" is a fine statement and I agree with it easily.

"Markets are provably not efficient because xyz" is much harder to agree with, especially given that paper, since it's the same as saying you've actually proven that P!=NP and thus requires an extraordinary proof.

You're claiming to have a solid proof. Just because someone agrees that your result is likely correct, that doesn't mean they agree with your proof.


Right, two different things. I meant: 1) those results wouldn't be at odds with my results. 2) but I wonder if myWindoonn judges the paper they linked to be correct? (unflawed, complete etc - does it prove its titular claim?)

If the paper is flawed, nothing follows from it; there's no reason to spend time on it except to understand its flaws. Then it's just another crank paper, separate from and unrelated to mine.


If you have a proof that P=NP, the best way to demonstrate that is to solve a nice, hard 3-SAT problem. Maybe reversing something like siphash, or something similarly difficult.

Set up a web page where someone can put in their 128-bit SIP hash and their plaintext, and you'll tell them the 64-bit secret used to hash it.

Everyone will believe you.

For extra bonus $$$, there are various large prizes (e.g. $1M USD) you can win if you can "break" various things which are trivially breakable if P=NP. Might as well cash in there, too before they pull the plug on the contest…


It is addressed in the article. The idea is that it is not a good proof.

Solving a hard problem with code doesn't mean that you solved the general case. For example your algorithm may work in polynomial time for some specific 3-SAT problems or hashes up to a certain length. It may be huge but it doesn't mean you proved P=NP.

Furthermore, P=NP doesn't mean you can easily solve NP-Complete problems. What if N^100 is the best you can do? It may be polynomial but it is useless. In fact, small steps are expected. And going straight to a workable algorithm is suspicious.


Sure, it doesn't prove that you've solved P=NP, but if you can do that (especially for multiple known hard problems), it seems pretty likely that your suggested solution would be heavily evaluated.

I don't think anyone suggested that breaking a single problem proves P=NP, they were just addressing the article, and suggesting a way to convince someone to evaluate your proof..


> Solving a hard problem with code doesn't mean that you solved the general case.

If we're talking about P=NP, 3-SAT is the "general case". That's why I picked it, since all NP problems are reducible to 3-SAT in polynomial time and space.


Saying that you solve a specific case of satisfiability problems (e.g., siphash) is not proof that you can solve any 3-SAT problem. Graph coloring is a great example of this phenomenon--graph coloring is NP-hard, but most graphs can be colored very efficiently in polynomial time.


That's a good idea, but it won't work for a non-constructive proof.


Nor would it necessarily work (quickly) if the order of magnitude for any constants were sufficiently large (after all, a runtime of 2 * N and a runtime of (2^512) * N are both O(N)).

If I remember correctly, in the semi-regular survey conducted with CS experts about this question, the most likely explanation that accompanied a response saying they believed that it may be the case that P=NP was “...but with very large constants”.


I personally distrust all non-constructive proofs as a matter of principle, and can't think of a single one I would actually miss in practice, let alone one I find convincing. They all smack of begging the question to me.

I especially distrust non-constructive proofs that involve first constructing objects in an impossible manner (like, say, sorting an infinite set of infinitely long bit strings—I'm looking at you, Cantor)—and then (ab-)using the reader's intuition about actual things you can construct to "prove" nonsense about things you clearly can't.

Which is a long way of saying that I'll only accept a constructive proof that P != NP (or vice versa).


An algorithm with complexity O(n^{999}) is polynomial but not more useful than an exponential algorithm on any practical problem.

If an NP solver comes out, it may not be practical (at first, or for any practical purposes).

The "PRIMES is in P" algorithm https://www.cse.iitk.ac.in/users/manindra/algebra/primality_... at first had a bad polynomial complexity, which was later improved in other papers.


Does having a proof of P=NP necessarily mean that you can write the code to solve NP problems though?


Not necessarily, but it's hard for me to imagine what a non-constructive proof of P=NP would look like.


> Not necessarily, but it's hard for me to imagine what a non-constructive proof of P=NP would look like.

Have a look at the proof of the Robertson-Seymour theorem. Here the Wikipedia section about its importance in complexity theory:

> https://en.wikipedia.org/w/index.php?title=Robertson%E2%80%9...


Interesting. Thanks!


What do you imagine a constructive proof looking like?


It would look like an algorithm that solves an NP-complete problem in polynomial time. (Isn't that obvious?)


Sure. And why does that seem any more feasible than a non-constructive proof?


I don't know. Why is the moon plaid?


By the name, I was expecting this article to be much more technical, giving examples of common errors made by attempted proofs or reasons why certain cruxes in possible proofs would be difficult to overcome.


So was I, something more like this:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.32....

Still, I thought it was a nice post.


I recently became interested in P=NP; the reason was learning. And I became crankish, in the sense of trying to understand the problem myself.

My advice is - learn about naturalness. The monotone circuit lower bound for CLIQUE in particular is an interesting result, which knocks out many straightforward approaches to resolving P vs. NP. Wigderson's recent book gives a very good overview.

However, I wish this were in some sense better explained, and that there were more examples of algorithms that cannot work because they can be, in some sense, represented as monotone circuits.


Very smart people have tried for a very long time to show one way or the other. We don't know if P = NP (seems unlikely) or if P != NP (seems likely). But there is no proof either way.

If P = NP, then we can generate solutions to very hard problems as easily as we can verify given solutions to those problems (both in polynomial time). Again, this seems very unlikely.


I suspect most people, even most computer scientists, have a very limited view of complexity theory. When Big-O notation is introduced, and we discuss algorithms, we tend to only ever come across them in contexts where the algorithm is "small" polynomial or "small" hidden constants are present. How many people can even name a polynomial algorithm that's worse than cubic, or a polynomial algorithm with a massive hidden constant? In practice, these algorithms tend to be hidden in nasty combinatorial problems which sometimes are themselves contrived mainly to force a solution that slow.

Yet the evidence suggests that, should P=NP, the polynomial algorithm is going to be of that nasty combinatorial variety. The main reason for this, informally of course, is that NP-complete problems tend to have their NP-completeness arise from specifically hard instances, rather than from being generally hard. SAT is NP-complete, but we can solve many SAT queries we care about (undoing cryptographic algorithms is of course the exception -- by design). Graph coloring is NP-complete, except if your graph falls into one of several dozen special classes of graphs which cover most of those we know about. In other words, there's a kernel of instances whose structure is exponentially complex and for which there's no way to develop heuristics that are meaningfully simpler than "try every possibility until one works." It is possible that there exists some basis that can describe the complex structure in polynomial terms, but this basis would itself have to embed the complexity we see, and as such, it would be one of those combinatorial monstrosities. Think something on the order of 3^(2^16) as your hidden constant (that's the bound for the "simple" algorithm in the L=SL paper).


Something I've always wondered - if only certain instances of 3SAT are difficult, why are those considered the same problem as easy instances of 3SAT? Why do we say 3SAT is hard, rather than 3SAT with specific structure? Or do we lack the ability to determine whether a given instance of 3SAT is hard?


> Or do we lack the ability to determine whether a given instance of 3SAT is hard?

Yes. At least in general. In fact, answering whether any SAT (or 3SAT) instances are hard is equivalent to settling P vs. NP. Assuming easy = polynomial, the "easy subset" of SAT = SAT iff P = NP. Separating hard from easy instances -- in general -- is pretty much the secret to complexity theory.

We do know how to recognize some easy cases. SAT is FPT -- i.e. fixed-parameter tractable [1] -- because its worst-case solving time is exponential in the number of variables. There are more complex classes of easy instances.

A question that is easier than P vs. NP is separating SAT instances that are easy/hard for DPLL (+ improvements), the 1960s algorithm at the core of all SAT solvers. AFAIK, this, too, is an unanswered question in general (i.e., given a SAT instance, we don't know how to tractably determine whether or not DPLL will tractably solve it).

[1]: https://en.wikipedia.org/wiki/Parameterized_complexity#FPT
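
For anyone curious what DPLL actually does, here's a toy sketch of the idea (simplify under the current assignment, propagate unit clauses, otherwise branch); real solvers add clause learning, watched literals, restart heuristics, and much more:

  def dpll(clauses, assignment=None):
      # clauses: list of lists of nonzero ints (positive = var, negative = its negation)
      assignment = dict(assignment or {})

      # Simplify under the current partial assignment.
      simplified = []
      for clause in clauses:
          if any(assignment.get(abs(l)) == (l > 0) for l in clause):
              continue                        # clause already satisfied
          remaining = [l for l in clause if abs(l) not in assignment]
          if not remaining:
              return None                     # clause falsified -> backtrack
          simplified.append(remaining)

      if not simplified:
          return assignment                   # every clause satisfied

      # Unit propagation: a one-literal clause forces its variable's value.
      for clause in simplified:
          if len(clause) == 1:
              lit = clause[0]
              return dpll(simplified, {**assignment, abs(lit): lit > 0})

      # Otherwise branch on some unassigned variable.
      var = abs(simplified[0][0])
      for value in (True, False):
          result = dpll(simplified, {**assignment, var: value})
          if result is not None:
              return result
      return None

  print(dpll([[1, -2], [-1, 2], [2, 3]]))     # {1: True, 2: True}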


We use the worst-case analysis to drive Big-O notation; if the worst case is hard, then we say the problem itself is hard. This is the main limitation of Big-O, as normally taught: it tends to ignore the practicability of the algorithm for typical cases.

Usually, the practice for solving these sorts of problems is to have your algorithm return one of three results: a solution, unsatisfiable, or timed out. There is literature on how to structure such problems to make it more likely you're going to get answers, but I'm not personally familiar with this area.


Yup, I still think that you can likely solve the halting problem for "useful" cases. The cases where you can't you probably should change your code anyways, since odds are people can't reason about it either.


Almost any application with an event loop doesn’t halt.


You're right, but I'm pretty sure that those programs are not what GP was talking about. I mean, I don't think anyone would be applying a termination checker to a daemon-ish program that's meant to run forever... you'd just check the parts that are actually meant to terminate.


And for many of them, you could determine that statically.


We know that 3SAT is NP-complete. That is, you can use a solution of 3SAT to solve any NP problem with only a polynomial slowdown.

There is no such proof for the subset of 3SAT that we can solve quickly. It's for the entire problem only.


> If P = NP, then we can generate solutions to very hard problems as easily as we can verify given solutions to those problems (both in polynomial time).

Knuth himself, for example, has casually expressed the opinion (not a proof) that P=NP in the past. When you say "as easily" it belies the nature of polynomial time questions. If it turns out that we can produce an algorithm for 3SAT that is O(n^1e72), then that will in fact prove that P=NP. But this does not make any practical problems "easy".


No need for n^1e72: a SAT instance with a couple of hundred variables can usually be solved easily, and industrial instances often have hundreds of thousands of variables. Even an O(n^10) algorithm would be of MASSIVE theoretical interest, but almost no practical value.



You assume that the proof for P = NP would be constructive.


It's automatically constructive.

Algorithms can be enumerated. Proofs can be enumerated. For any formalized problem Q in NP, you can always simply enumerate algorithms A and candidate proofs P until you stumble on a pair (A, P) where P is a proof that A answers Q in polynomial time.

If P=NP, this first step takes constant time (long, but independent of the input).

If P!=NP, this first step takes forever.

In other words, if you have a non-constructive proof that P=NP, just make “search for the polynomial algorithm” the first step of the algorithm, and now you have a constructive proof.


> For any formalized problem Q in NP, you can always simply enumerate algorithms A and candidate proofs P until you stumble on a pair (A, P) where P is a proof that A answers Q in polynomial time.

I'm curious: does this run afoul of the halting problem?


No, since a proof is finite. When you enumerate Turing machines to generate a proof you execute the first n machines for n steps. At some point some machine must decide something, so some n is the bound.


> No, since a proof is finite. When you enumerate Turing machines to generate a proof you execute the first n machines for n steps. At some point some machine must decide something, so some n is the bound.

Thanks. It's been a while for me, so if you could bear with the perhaps simple question: do we avoid undecidability here by the finitude of the proof or by only running a finite number of steps? It seems to me like there are three states for any solver: (1) it responds in the negative (this is not a proof), (2) it responds in the positive (this is a proof, you're done), or (3) I'm still trying to figure it out.

Do we fold the 3rd case into the 1st by saying that we'll only iterate n steps before terminating? Or am I missing the point entirely?


How do you ensure that the proof system you are enumerating over is strong enough to prove that the program witnessing P=NP is indeed polynomial? Prima facie, the polynomiality of this program can be independent of the ambient logic.


Sure, it's trivially constructive but if it turns out NP problems can be solved at minimum with like O(n^1000000000) complexity, your search could take unfathomable amounts of time to terminate.


> Algorithms can be enumerated. Proofs can be enumerated.

How exactly do you do this?


Checking whether a string is a valid proof can be done in polynomial time. So, enumerate all possible strings, and check if each string is a valid proof of what you're trying to find.
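
The enumeration half is the easy part. A minimal sketch of "all strings in length order, test each one", with a stand-in predicate where a real proof checker for some fixed formal system would go:

  from itertools import count, product

  ALPHABET = "01"                 # any finite alphabet works

  def all_strings():
      # Yields "", "0", "1", "00", "01", ...; every string shows up after finitely many steps.
      for length in count(0):
          for chars in product(ALPHABET, repeat=length):
              yield "".join(chars)

  def is_valid_proof(s):
      # Stand-in predicate; a real checker would verify s against a fixed proof system.
      return s == "0110"

  for s in all_strings():
      if is_valid_proof(s):
          print("found:", s)
          break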


> Checking whether a string is a valid proof can be done in polynomial time.

That's not true. Curry-Howard says that determining if a proof is correct has the same complexity as programs in general. Even if P=NP, there are still complexity classes strictly larger than P; say EXP, that would result in a proof requiring exponential time to check for validity. That's assuming that "validity" here means true/false-ness, rather than syntactical validity, which is much less useful.


That is misleading. Simplifying a great deal, the Curry-Howard correspondence asserts the following identities between intuitionistic logic and pure functional programming:

* Types = formulae

* Programs = proofs

* Beta-reduction = cut-elimination

Checking whether a string is a proof in a given logic is a simple computation.


Just an addition to this comment:

Point is that, according to Curry-Howard, validating a proof is equivalent to type checking the program, not actually running it to halting. Type checking (<-> validating a proof) is polynomial.
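
As a toy illustration of that correspondence, here's a complete proof term in Lean 4 syntax; accepting it is exactly a type-checking step, with no need to "run" anything:

  -- A proof of p ∧ q → q ∧ p is just a small program; checking it is type checking.
  theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
    fun h => ⟨h.2, h.1⟩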


I think pascal_cuoq's argument still works though. As I understood it, his construction was

  Given an instance x of some NP relation R:

  1. Enumerate all (TM, π) pairs until we encounter a π which proves that TM is a correct polynomial time solution to R.

  2. Simulate TM on the given instance x.

The first step depends only on R, not x, so it takes constant time with respect to the instance size.


OP probably meant within the current context. An NP problem is defined to have a (polynomial sized) proof that can be checked in polynomial time by definition.


But enumerating all the possible strings takes infinite time? Or, wait, is this saying that since there is some algorithm, you can spend some constant amount of time to find it and that doesn't affect the time complexity of your algorithm? I'm not sure I'd call that "constructive".


If you know that such a proof exists, then it's guaranteed to terminate.


> is this saying that since there is some algorithm you can spend some constant amount of time to find it

Yes.

> I'm not sure I'd call that "constructive".

The method for constructing the algorithm is given!


Here are two good practice problems for testing your personal bogometer:

http://sparsey.com

https://www.agilepq.com

It's not P=NP but two similar problems that attract a lot of quackery.


Wow! Lisper, Ron Garret. This is Rod Rinkus, inventor of Sparsey. You and I had what I thought was a congenial email exchange about quantum computing a month ago. And, here, out of the blue, you disparage my work in the public square, and in a cowardly fashion, behind my back. I guess that says a lot about your character. Yet, your comment really doesn't make much sense, does it? You even begin your comment with a disclaimer to the effect of "This is really not relevant to this thread"... but then you go on to disparage my work. Kind of a weird thing to do. Were you drunk?

In any case, I demand a public apology from you, here in the public square, where you committed your cowardly wrongdoing. We'll see what you're made of in how you handle that. If you have enough conscience and consciousness to apologize for your behavior, then we can move to you explaining your specific criticism of my work, which is now incumbent on you, is it not?..since I've called out your unwarranted and thoughtless attack. This will be a lot of mental work for you, since, thus far, you demonstrate no knowledge of Sparsey, or any of the relevant computational principles. But again, that will take you a long time. So, I recommend you apologize first, rather than keep your audience waiting.


> I demand a public apology from you, here in the public square

What exactly is it you want me to apologize for? Have I said anything here that is factually incorrect?

> You even begin your comment with a disclaimer to the effect of "This is really not relevant to this thread"

No, I ended my comment with the following statement:

"It's not P=NP but two similar problems that attract a lot of quackery."

And I stand by that.

> you demonstrate no knowledge of Sparsey

I don't claim to have any. My opinion of you and your work is based entirely on the fact that you claimed to have achieved quantum supremacy on a classical computer, an extraordinary claim which you failed to back up with any evidence. When I suggested how you could provide that evidence (implement Shor's algorithm) you balked. That is the sum total of the evidence on which my opinion of you and your work is based. It's quite possible that Sparsey has merit, I don't know. What I do know is that whatever merit it may have, it isn't what you told me it was.

(The whole situation is very peculiar because in our private correspondence you really emphasized the quantum stuff, but your web site doesn't mention it at all. This leaves me nonplussed.)

> you disparage my work in the public square, and in a cowardly fashion, behind my back

If something is done in public it is, by definition, not behind anyone's back, so I think you should apologize to me for accusing me of behaving in a cowardly fashion. If you can demonstrate that something I said was wrong, I will absolutely apologize for it. But if you want me to change my opinion you will have to do with with evidence and persuasive argument, not insults and demands for apologies.

But before this spins too wildly out of control, let's start by publicly clearing up the factual question at the heart of our disagreement: do you stand by the claim you made to me in private (but which I cannot find any record of you having made in public) of having achieved quantum supremacy on a classical computer?


Again, wow! "behind my back" means you denigrated my work in a public forum without apprising me. For all you knew, or cared, I'd never have seen this and the harm you inflicted would have been done. What if I went around in public forums and said, "Hey there's this crackpot out there, Ron Garret, check him out on your bogometer"? Most of us learn at a young age not to disparage people behind their back. Apparently you didn't learn that.

You introduced the term "quantum supremacy" to our discussion. I don't usually think in such hype-y / marketing terminology, but I assented because I take it to be synonymous with there being a classical realization of quantum superposition. There is. Sparsey is an example. But you haven't taken the time, again by your own admission, to study or understand that.

For the (apparently hundreds, based on hits on my website) of others watching this thread, my short 2012 paper, "Quantum Computation via Sparse Distributed Representation", clearly describes how the probabilities of multiple states can simultaneously, physically be active, in superposition, if those states are represented as sparse distributed representations (SDRs). And, contrary to Ron Garret's claim, my 2017 paper, "A Radically New Theory of how the Brain Represents and Computes with Probabilities", provides empirical evidence that the probabilities of all states (hypotheses) represented (stored) in an SDR coding field are updated (in a way that is semantically consistent with the learned dynamics of the input space) with a number of steps that remains constant as the number of stored states increases. That capability is minimally necessary for any quantum algorithm. I don't know of any other model that has shown this ability.

I think the mental block that has existed thus far in quantum computing (QC), and in quantum theory (QT) more generally, is that theorists have been thinking in terms of the probability amplitudes of states as being represented in localist fashion. A localist representation is one in which any single entity is represented by a single representational unit and that unit represents only that entity. Any localist simulation of QC on a classical machine will require exponential memory. But an SDR coding space has a storage capacity of combinatorial order. This opens the possibility that an exponential number of states can be represented in the polynomial number of units comprising the SDR coding field.


> you denigrated my work in a public forum without apprising me

I'm sorry, I didn't realize I was under any obligation to apprise you. You certainly never asked me to, and if there's some kind of universal social obligation that applies here that's news to me. (BTW, did it even occur to you that the reason I did this is not because I'm a coward or a drunkard, but simply because I might be clueless?)

> the harm you inflicted would have been done

What harm could I possibly inflict? I have no influence in the quantum computing community. I've never published anything in that field, and I've been retired from research in general for nearly 20 years. No one who matters has any reason to pay attention to what I say. The only way my critique could possibly harm you is if it has merit. (Does it?)

> What if I went around in public forums and said, "Hey there's this crackpot out there, Ron Garret, check him out on your bogometer"?

You mean like when Lubos Motl called me a "category 5 loon"?

https://motls.blogspot.com/2012/10/evading-quantum-mechanics...

I was actually flattered that someone of Lubos's stature would take notice of my work and put forth the effort to criticize it. I think it said more about Lubos than it did about me that he chose to lob an ad-hominem at me, but whatever, it's a win-win for me: either Lubos is right, and I learn something, or he's wrong, and I get to be smug about having taught him something. :-) It's certainly not worth getting upset over.

> You introduced the term "quantum supremacy" to our discussion.

That's true, but I introduced it in the context of a question: are you claiming that you have achieved quantum supremacy in a classical computer? To which you answered yes.

> I don't usually think in such hype-y / marketing terminology,

Quantum supremacy is not "hype-y / marketing terminology", it's a technical term, whose definition is common knowledge in the quantum computing community:

https://en.wikipedia.org/wiki/Quantum_supremacy

I assumed you knew that, but apparently I was wrong. (In retrospect, I guess the fact that you had to look Shor's algorithm up on Wikipedia should have been a clue.)

So... now that you do know it, I am going to ask you again: are you claiming to have achieved quantum supremacy on a classical computer?


"I was actually flattered that someone of Lubos's stature would take notice of my work and put forth the effort to criticize it. I think it said more about Lubos than it did about me that he chose to lob an ad-hominem at me, but whatever, it's a win-win for me"

From your comments it's clear you like the attention regardless of the implications. The math has been stated, the resources to understand the algorithm are all there, and it's not particularly hard to understand if you take the time. If you have questions on implementing the algorithm on your own you can ask. Otherwise there will eventually be public code available for people to see for themselves and you can just wait for it. Have a nice night.


I understand the math. What I don't understand (yet) is what the benefit is supposed to be. Why should I care about the math? What can I do with an SDR that I cannot do equally well some other way?

When Gerard originally approached me, he gave me what was essentially an answer to that question, which I interpreted as claiming that he had achieved quantum supremacy on a classical computer. When I asked him to confirm this, he did. But the problem is, it turns out that Gerard didn't know what quantum supremacy was. He thought it was "hype-y / marketing terminology". It isn't. It's a technical term with a precise meaning [2].

So this whole mess is, AFAICT, just a misunderstanding.

[1] https://news.ycombinator.com/item?id=19740538

[2] https://en.wikipedia.org/wiki/Quantum_supremacy


I'd like to know what you don't understand about the Sparsey algorithm that makes you think it is "quackery". I've programmed it from scratch (I've been working with him since October); Rod Rinkus has run several tests that have proven it; Paul Franzon designed GPU & SIMD architectures and an ASIC chip and tested it against Numenta's HTM, where Rod's was faster but consumed more power; and DARPA & ONR have granted him research grants to develop the algorithm.

So I think your claims of quackery are a bit unfounded, but I'd still like to know your thoughts. If there are people out there who think it is impossible or lacks credibility then we need to address that. I hope it's not just the website look xD. He has been saying he needs to change the website because it's old but he hasn't done so yet.


I didn't say it was quackery.

Now, I happen to believe that it is quackery because Gerard in private correspondence claimed that Sparsey achieves quantum supremacy on classical computers. That is an extraordinary claim but he did not provide any evidence to back it up, and when I suggested a simple demonstration he could do to back it up he demurred.

It's possible that Sparsey does something interesting, but it's (almost certainly) not what Gerard told me. That's why I think that evaluating it makes a good exercise.


I'll let him comment on what you guys were talking about, but I know of his theory that you are commenting on. But I'm pretty sure you have misunderstood what he was talking about, in terms of what Sparsey does vs. his ideas on parts of it (SDRs specifically) being framed as an alternative to quantum computing. You should have linked his direct thoughts on the quantum computing parts, instead of throwing all of Sparsey under the bus.

http://sparsey.com/SDR_and_QC.html

You may have not directly called it quackery but you suggested and alluded to it requiring a bogosity meter and that quacks are attracted to it. It's pretty clear what you meant, to be honest.

Sparsey in general though is a separate subject really. What it is doing is capturing all the features it has ever seen, building temporal and spatial relations between those features, and then storing those as a code that represents an input in a particular spatial and temporal context. So if I have the words CAT and BAT, the A in CAT and the A in BAT will have different codes because of the letter that came before them; likewise with T, because A's code differs after C and after B, so T's code differs in the two cases. So I think that is where the whole quantum part comes into his ideas. Every sample in a sequence has many possible "realities" or codes that can represent it, and by representing it in a certain way you are also representing an entire sequence that came before it and a branching point of possible future representations. I have near zero experience with quantum mechanics so I won't comment on how close it is or isn't to that, because I have no clue.
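Here's a toy sketch of that context-dependence (purely illustrative; this is not the actual Sparsey code-selection rule, and unlike Sparsey a hash gives similar contexts no code overlap at all):

    # Toy illustration: a symbol's code depends on the symbol AND the code of
    # the symbol before it, so the 'A' in CAT and the 'A' in BAT get different
    # codes, and so do the two T's that follow them.
    import hashlib

    def code(symbol, prev_code=""):
        # stand-in for whatever code-selection procedure the model actually uses
        return hashlib.sha256((prev_code + symbol).encode()).hexdigest()[:8]

    def encode(word):
        codes, prev = [], ""
        for ch in word:
            prev = code(ch, prev)
            codes.append(prev)
        return codes

    print(encode("CAT"))   # the 'A' code here...
    print(encode("BAT"))   # ...differs from the 'A' code here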

More importantly though, even if it doesn't turn out to be a mathematically exact, tit-for-tat alternative to quantum computing, what it can do is reduce computation times dramatically. The idea is that for an equation you could give a sufficiently sized Sparsey model a number of inputs. Sparsey naturally represents those inputs as some representation and you can assign the answer to that representation, and you do that with all or as many inputs as possible, each getting its own unique representation. So basically what you are doing is making a giant lookup table of answers. So when you see a set of inputs you know the answer via code lookup rather than calculating it. Sparsey is a bit memory heavy so you would only really want to do that with large, hard problems. But that's where the speed comes in: you know instantly the entire state, its past, and its likely conclusion (because it's probabilistic), or in fact all of its possible conclusions.
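A minimal sketch of that lookup idea (an assumed toy interface, not the actual Sparsey API):

    # Once an input sequence maps to a stable code, the answer is recalled by
    # lookup instead of being recomputed.
    def to_code(seq):
        return tuple(seq)          # stand-in for the model's learned spatio-temporal code

    answers = {}
    answers[to_code([3, 1, 4, 1, 5])] = "precomputed answer"
    print(answers.get(to_code([3, 1, 4, 1, 5])))   # constant-time recall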

It's similar to Q-learning in some ways, in the sense that you are storing many, many states you have encountered. Except Sparsey doesn't store the states brute-force like Q-learning. Similar states, spatially, temporally or both, are stored with more code overlap; dissimilar ones have less overlap.

But that's the gist of it. He has some results posted. We've talked about getting some sort of public repository up and running for people to play around with and maybe a community forum or something, but that's one of his concerns lately, letting people see the results for themselves rather than taking his word for it. Which, I have one project I need to finish right now, but I will be working on that full time sometime in May.


Also, briefly: it doesn't necessarily have to be an equation that the system is learning. Any sequence of samples could be assigned an answer/label and can have overlap with samples of other data types. So an image can be associated with a sound or a word etc., and presenting one sample, or part of it, could trigger the recall of any of the samples associated with the given sample. So the word tiger triggers the recall of the image, the sound, any context it is found in, etc.


> I'm pretty sure you have misunderstood what he was talking about in terms of what sparsey does

That is quite possible.

> It's pretty clear what you meant, to be honest.

What I meant was exactly what I said: evaluating Sparsey makes a good exercise. I believe it is bogus, but I could be wrong. I am basing my judgement on very sparse (no pun intended) data.

> I'll let him comment on what you guys were talking about

Here are the relevant quotes from our correspondence:

Gerard: I just watched your 2011 "Quantum Conspiracy.." talk and thought you might be interested in an idea I've had concerning quantum theory and how information is represented. The core idea is briefly described in my 2012 paper, "Quantum Computation via Sparse Distributed Representation". I have two related Medium articles as well, "Quantum Computing in the Offing" and "Not Copenhagen, not Everett, a New Interpretation of Quantum Reality". The essential idea is this. To my knowledge, in all quantum theory (QT) and quantum computing (QC) models published to date, prob. amplitudes (PAs) are represented in localist fashion. That is, each PA is represented in its own physically unique memory location (e.g., 64-bits, 32 for real, 32 for imag.) that is physically disjoint from every other PA. In principle, the number of PAs needed to describe a system is of exponential order, so a localist representation requires exponential memory. BUT, if instead, PAs are represented as sparse distributed representations (SDRs), i.e., small subsets of binary units chosen from a much larger field, then they can all exist (be physically stored) in physical (and classical) superposition: each PA is just a different subset of the units and the subsets can overlap. Thus, the exponential number of PAs can be represented in sub-exponential (polynomial) memory. Basically, we're leveraging a third fundamental resource, "code space", which is formally orthogonal to the other two, time and (physical) space (i.e., amount of memory). That is, the code space of an SDR coding field is of combinatorial (exponential) order.

Me: Thanks for bringing this intriguing work to my attention. You say "the exponential number of PAs can be represented in sub-exponential (polynomial) memory". This makes it sound as if you are claiming that SDRs allow you to achieve quantum supremacy in a classical machine. Am I reading you correctly? Because if that’s what you’re claiming, then you are almost certainly wrong. Since you don’t seem like a quack, it seems more likely that I’m just misinterpreting what you’re saying, so I thought I should get that cleared up before going any further.

Gerard: Actually, you are interpreting correctly and thank you for being open to the possibility that I'm not a quack :) [Lots of details about Sparsey elided.]

Me: OK, but in the interests of full disclosure you should know that my Bayesian prior on this [not being a quack] is still not very high :-) So let me ask you another “top-level filter” type question: can Sparsey run Shor’s algorithm? If so, have you done it?

Gerard: No I've never tried to run Shor's algorithm. [Lots more details elided.] What d'ya think?

Me: I think you should try to run Shor’s algorithm, because if you succeed that will be a slam-dunk proof that your claims are correct.

Gerard: Well, I don't really understand Shor's algorithm (from wikipedia). I'd have to study it. I never really have so far.

I never responded to that because at that point I was pretty convinced he was a quack. It's possible I'm wrong about that. But I'm very confident that he doesn't understand quantum supremacy, and that his claims about it are wrong.


Well, like I said, I have no clue about any of the quantum mechanics side of it; I only know what Sparsey does. But yes, one state is represented by an SDR (not just an SDR, but probabilities that have triggered an SDR), but not just that state: all of the states leading up to the current represented state as well. Changing the representation changes not just what is currently represented but, again, an entire history and past representations.

The code space he is talking about is utilizing time. So say I have 10 bits, and one of them is on, then I can represent 10 things. Now if my system also takes into account the code of the last time step I can represent 100 different things. 10 things in 10 different temporal contexts each, is 100 unique spatio-temporal states, and that goes on for however many layers you make. Hence exponential. Now I don't know what quantum supremacy is, but you brought it up it seems, he didn't. In general I know he thinks specialized hardware is not needed, and he has stated to me he thinks SDR or sparsey based systems tailored to do quantum computing problems could be a reality. So I think he might have thought you meant quantum supremacy in that sense. As he has told me. But I won't comment too much on that because he'll be responding tomorrow most likely. It's past midnight where he is at.
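To make the arithmetic explicit (this is just the counting above, nothing Sparsey-specific):

    # 10 possible codes per step; each step's code is conditioned on the
    # previous step's code, so distinct spatio-temporal states multiply
    # with sequence depth.
    codes_per_step = 10
    for depth in range(1, 5):
        print(depth, codes_per_step ** depth)   # 10, 100, 1000, 10000 ...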

But that's what I know Sparsey does; whether or not it's better than quantum computing, I couldn't say. I've tried to understand what he is getting at but I just don't know enough quantum mechanics to come to any aha moments. What I do know is that the state representation in Sparsey is massive and can represent many, many high-order concepts from low-order concepts/features and find correlations between concepts both spatially and temporally. The math is very simple for that and not really hard to wrap your head around. You should start with his 2010 paper and 2014 paper if you are actually going to do the mental exercise you suggested. I don't know how you can claim someone is a quack if you don't even know the basic principles of the idea.

As for his theory on the quantum part of it, I know he says it's a developing theory of his and he just hasn't had the time to work on the idea like he has wanted, and he has always been 100% with me on that. Which is fair enough, considering he has been working on Artificial General Intelligence, and in that category alone there are so many things we haven't been able to try yet. Which is one of the reasons we are in the process of making this a profitable company rather than a research company: getting more minds on the ideas. But I would wager that if you had done more than 3 responses to him, being a bit clearer on what you thought was wrong with the theory, he probably could have given you his thoughts about those specific concepts/critiques. But you really didn't say anything, at least in the responses I can see. But again, I have practically no insight into your guys' full conversation, so I'll let him clarify tomorrow.

This will be my last response tonight, I have to be in a meeting in an hour with a colleague overseas then I'm crashing but I might respond tomorrow and Rinkus probably will as well.


> I don't know what quantum supremacy is

https://en.wikipedia.org/wiki/Quantum_supremacy

> whether or not its better than quantum computing, I couldn't say

It doesn't have to be better; merely being as good would already be a major revolution in both math and physics.

> I would wager that if you did more than 3 responses to him being a bit more clear on what you thought was wrong with the theory

This was my last message to Rod:

> I think you should try to run Shor’s algorithm, because if you succeed that will be a slam-dunk proof that your claims are correct. You will win a Fields medal, a Nobel prize in Physics, and become fabulously wealthy beyond the wildest dreams of Croesus. I think it’s much more likely that the attempt will reveal some flaw in your scheme, but either way, it seems like a worthwhile endeavor. Worst case, you will learn something.

I really don't know how I could have been any clearer.

This was his response:

> Well, I don't really understand Shor's algorithm (from wikipedia). I'd have to study it. I never really have so far. But, I'm confident that my conjecture that QC reduces to the two properties I mentioned in last email is true.

In other words: he doesn't understand one of the most basic and fundamental results in quantum computing, but he's confident that he has made a discovery that will revolutionize the field. If it quacks like a quack...


"I think you should try to run Shor’s algorithm, because if you succeed that will be a slam-dunk proof that your claims are correct. You will win a Fields medal, a Nobel prize in Physics, and become fabulously wealthy beyond the wildest dreams of Croesus. I think it’s much more likely that the attempt will reveal some flaw in your scheme, but either way, it seems like a worthwhile endeavor. Worst case, you will learn something."

That's a decidedly non-answer from a non-expert, and it provides no insight to the person being critiqued. That would be like if Alcubierre came to you with some ideas for his warp drive and you said, well, if it can travel faster than the speed of light then you've cracked it, but I think it's more likely that you will fail. All of that being said without any knowledge of, or attempt to run, the mental exercise you purport to be fond of. It's still a theory, and the math is what I said it was: it's clear in the papers, it's clear in the classification experiments, it's clear in the raw math of it.

What's also clear from your comments is that you won't and can't actually say anything about the algorithm, because you don't know anything about it. From my experience, people like you who can only see things and explain things from a high level, and never actually critique the problem head on, don't actually know what they are talking about; think Theranos level. If that weren't the case, you'd understand that a quick glance at Wikipedia, which I already took when you mentioned it, isn't going to make me understand anything if I don't have the fundamentals under my belt. Believe me, if that were possible, I'd be glued to this computer on Wikipedia becoming an expert on everything as fast as I could read.

I've tried to be a bit civil with this but it's clear you don't care and you'll probably never care. If it trolls like a troll...


Ron, you're hung up on Shor's algorithm. But you continue to avoid addressing my claim (above). Perhaps that's because you don't understand the fundamental difference between localist and distributed representations. In any case, your continued use of "quack" while you also continue to admit you don't understand the thing you're criticizing... well, it's just pathetic. Anyway, let's stop. I don't really see any value in communicating with you further.


> you don't understand the fundamental difference between localist and distributed representations

That's true, I don't, and I've never said otherwise. I do, however, understand information theory, quantum computation and quantum supremacy. So I don't know if you're a quack or not. What I do know is that your claim of having achieved quantum supremacy on a classical computer is almost certainly false. I don't need to understand the details of your algorithm to know this, I only have to understand what "quantum supremacy" means and why it matters.

I also know that I have looked at your site and read some of your papers and I can't make heads or tails of them. It's possible that this is because I'm an idiot and I'm simply not capable of grasping the brilliant idea embodied in Sparsey, or that I just haven't put enough effort into it. Or it's possible that this is because it's all bullshit. I honestly don't know.

But I do know one more thing, and that is that you're getting awfully defensive about all this. Why would you do that if you were confident in the merits of your position? What threat could a lazy idiot like me possibly pose in that situation?


Are those even serious sites? They look like what happens when a Markov chain program meets a mediocre web designer.


I can assure you that the work of Rod Rinkus, Sparsey's creator, is very serious. He is a computational neuroscientist; I've actually been working with him since October. But he has said the website has needed some updating for a while now. One of my primary jobs will be to get the demos up and running with a more updated feel. I'm not sure about the other site, but I have done the work to recreate Rod's work from scratch and he has done several tests to prove the system works. Most of his work was done through DARPA and ONR research grants. He has only recently decided to try to make a profitable company out of it; before that it was always funded through research grants.

As for the look and feel of the website, which in reality has no bearing on whether the research is legitimate or not, I have joked that he just needs to market himself better and he has laughingly agreed. But you can hit him or me up if the research is too hard to understand from a glance. We are trying to practice how to pitch it to people better so it's easier to understand. But once everything clicks, it's actually a really simple concept; it's just that it's a highly recurrent system. I had to program it out piece by piece before things started making sense.


Sadly, yes. I personally know someone who has invested in one of them :-(


Here is a guy who did not know LaTeX and wrote his paper in MS Word:

https://www.quantamagazine.org/statistician-proves-gaussian-...


Thoroughly boring and conventional advice. How about some advice from the dark side: First, become a distinguished researcher in the field so that there is no other choice but to take you seriously. Then write a giant multi-part book-length paper consisting almost entirely of endless definitions and lemmas referring to each other. To those brave souls who actually try to read and understand the paper and who claim that they have found a problem with it, reply that they have profoundly misunderstood the nature of the work and should read the paper more carefully.

Not actual advice of course. I just find the controversy surrounding Mochizuki's purported proof of the abc conjecture (HN discussion: https://news.ycombinator.com/item?id=18034714) fascinating. Something like this could easily happen to P=NP when one cannot confidently state whether we have a proof or not.


I dislike the early section about arrogance. Who cares? I don’t care if someone is being arrogant, humble, or anything else, when it comes to mathematics proofs. Imagine if Hardy had scolded Ramanujan for being “arrogant” and dismissed his correspondence?

At best, by trying to judge arrogance, you’re trying to use it as some type of correlate with being a crank and more quickly dismiss a claim without spending the effort to read it.

At worst, it’s trying to entrench the superficial credential of mathematics academia and journal publication style of collaboration and peer review, which frankly (and especially in mathematics, where polymath amateurs can easily be responsible for ground-breaking solo work) is overdue for being replaced by a more fully open, unrestricted participation model of discovery and publication.

Some arrogant cranks are going to propose time-waster claims. Just accept it. Don’t care if they are arrogant or not.


Just wondering, why do people not use Coq/Isabelle/Agda/etc in order to check such proofs? Wouldn't that save them from the "shame" of producing yet-another-incorrect P(!)=NP proof?


Because that's an extremely large amount of work to do, even if you're already familiar with the tools.

It wouldn't be quite so bad if there were a huge repository of already proven theorems you could pull from. But in reality you'll probably find yourself defining what a Turing machine is, what asymptotic complexity is, deriving results like the master theorem, and a thousand other trivialities you would just skip over in a paper.

It is especially problematic that there isn't a common repository of reference formalizations of well-known unproved propositions. It would be so easy to make a subtle mistake in your definitions that allowed an existence proof to go through in a way that wouldn't generalize to the actual definition. For example, when you define probabilistic computation, if you declare that a probabilistic gate is a stochastic matrix with real number entries, you will incorrectly find that there exist probabilistic programs that solve the halting problem.


> It wouldn't be quite so bad if there were a huge repository of already proven theorems you could pull from.

What about things like Metamath [1]? (Not sure if there is a more mature similar effort; Metamath is just the one I've heard of.) It seems like it has quite a lot of the "thousand other trivialities" you mentioned, but maybe it's still not enough to make a real dent?

[1] http://us.metamath.org/index.html


There are other efforts, like the UniMath project by the HoTT people.

https://github.com/UniMath/UniMath


The thing is, I'm not sure it's so much a matter that most theorems haven't been formally proven as a matter that using a formally proven theorem inside another theorem you want to formally prove just isn't as easy as using an existing theorem in an ordinary proof.

I.e., one definition of a Turing machine might exist, but it might not be that usable for a given proof's purposes.

I see a bunch of proofs referenced here, and I recall more elsewhere:

https://madiot.fr/coq100/


> For example, when you define probabilistic computation, if you declare that a probabilistic gate is a stochastic matrix with real number entries, you will incorrectly find that there exists probabilistic programs that solve the halting problem.

Great example! Do you have an outline of this proof? I can see the error in defining a unitary matrix over R instead of C, but I'm not immediately seeing how you can exploit that error to bypass the halting problem.

I'm guessing the concrete error would be introduced by overlooking that the real-valued matrix won't preserve the correct probability amplitude? If so, what's the next step to (falsely) deciding that a given algorithm will complete?


> Do you have an outline of this proof?

Let p be the real number where, iff the k'th program halts, the k'th bit of p (after the decimal point, in binary) is 1.

Let M be the operation "toggle a bit with probability p", which has matrix [[1-p, p], [p, 1-p]].

When given a program k to solve, keep applying M while counting up how many toggles you see. Do this until the chance that the k'th bit of the sample mean equals the k'th bit of p is greater than 2/3 or whatever other threshold you want, which will take a finite number of samples. Return the k'th bit of the sample mean. This is a probabilistic algorithm which solves the halting problem with arbitrarily high probability.
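Here is a rough sketch of that construction in Python (illustrative only: the construction only beats the halting problem because the uncomputable p is baked into the gate, so this sketch has to substitute a made-up computable p, and it samples the biased coin directly instead of counting toggles; it just shows the estimate-then-read-a-bit step):

    import random

    # Stand-in p = 0.11011 in binary, i.e. pretending programs 1, 2, 4, 5 halt
    # and program 3 doesn't. The real p would be uncomputable.
    p = 0.5 + 0.25 + 0.0625 + 0.03125   # 0.84375

    def kth_bit_of_p(k, samples=200_000):
        # estimate p by sampling the biased gate, then read off bit k
        mean = sum(random.random() < p for _ in range(samples)) / samples
        return int(mean * 2 ** k) % 2

    print([kth_bit_of_p(k) for k in range(1, 5)])   # [1, 1, 0, 1] with high probability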


Maybe the problem is that you can use uncomputable numbers in the matrix? (e.g. https://en.wikipedia.org/wiki/Chaitin%27s_constant)


Unless I’m misunderstanding the parent commenter’s definition of a probabilistic gate (I’m assuming a unitary matrix), that shouldn’t be an issue. Chaitin’s constant is in both R and C. More generally, all uncomputable real numbers are also in C, because R is contained in C.

In other words that shouldn't be the issue, because the correct "setting" for the stochastic matrix still allows for that possibility. It's not something you introduce by using real entries.


I don't think the issue here is requiring the matrix coefficients to be real (I don't think complex values make sense there at all), but allowing arbitrary real numbers. In that case you can exhibit an algorithm for which there exists a matrix with real coefficients such that it solves the halting problem; the trick is encoding an infinite amount of information in the real constant.


Complex values (a unitary matrix) make sense if the stochastic matrix is representing a probability amplitude.


They don't do it because the technology just isn't there yet. Automated theorem checking is typically very difficult for any nontrivial theorem; automated theorem generation is so extraordinarily difficult, it's generally infeasible.

The steps to construct a proof of a theorem are as follows:

1. Formally define a language specification which will ostensibly contain the theorem statement and a proof.

2. Search the language space (the set of all strings of the language) for a proof of the theorem statement.

3. If step 2 fails, change the language specification to include prerequisite mathematics or an alternative statement of the theorem.

Even when you exploit clever structural properties of the theorem statement and the language definition, it's extremely easy to encounter a combinatorial explosion when you search for a proof.
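As a toy illustration of that explosion (a made-up step vocabulary, not any real prover's tactic language):

    # Naive proof search over all strings of a tiny "proof step" alphabet:
    # the candidate space grows exponentially with proof length.
    alphabet = ["intro", "apply", "exact", "rewrite"]

    def count_candidates(max_len):
        # number of candidate proofs of length <= max_len over this alphabet
        return sum(len(alphabet) ** n for n in range(1, max_len + 1))

    print(count_candidates(10))   # 1,398,100 candidates
    print(count_candidates(20))   # ~1.5 * 10**12 candidates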

On the other hand, suppose you already have a proof and simply want to verify it using a proof assistant. In order to do so, the language you define must be able to automatically verify all prerequisite mathematics which your proof depends on. Encoding that mathematics into a language is difficult and tedious, and dependency checking is itself vulnerable to combinatorial explosion.

It's a very active area of research in its own right, but it can't obviate this problem for the foreseeable future.


> Automated theorem checking is typically very difficult for any nontrivial theorem

I'm not sure this is true -- I thought the whole idea of these theorem systems was to make the kernel of the proof system as simple as possible (to allow a manual proof, or at least high confidence of correctness).

If your program can produce an instance of a type, and it type-checks, then it is true. Type inference can be tricky, and a lot of what makes the systems usable is the fact that you can omit types and allow the compiler to infer them, but a fully-expressed typed proof outline is enough to quickly verify correctness of the proof.
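For example, a minimal Lean 4 sketch of that point (Lean here purely as an illustration, not a claim about any particular prover):

    -- A term whose type is the proposition being proved; if this type-checks,
    -- the kernel has verified the proof.
    theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
      fun h => ⟨h.right, h.left⟩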


> but a fully-expressed typed proof outline is enough to quickly verify correctness of the proof.

Yes, implementing that is the difficult part. I'm not saying it's infeasible - I'm saying it's very difficult. Writing provably correct software is also very difficult. This kind of theorem checking is becoming more common for theorems in e.g. combinatorics, but it's still very uncommon due to the added effort.


Encoding a proof to check it is the hard part.


> 2. Search the language space (the set of all strings of the language) for a proof of the theorem statement.

This is not how proof assistants based on type theory work. That would be true if you used a prolog-like thing in order to search for a proof, but in these systems the human constructs the proof (just like how they do for P=NP papers) and the system checks if it is correct (unlike P=NP papers, where the human checks if the proof is correct and is usually wrong).

> and dependency checking is itself vulnerable to combinatorial explosion

I am curious on what you mean by that.


Note that my comment distinguishes between automated theorem proving and automated proof checking. I made remarks for each; I admit it may have been unclear, but my point about searching a language space refers to automated theorem proving in particular. My second point about the requirement to verify "dependencies" for a theorem is a comment on why automated verification is difficult.

As for what I meant by that last point - there is a huge amount of mathematics to encode, almost all of which has not been formally specified into types, and much of which is structurally redundant. Not only is it difficult to encode necessary prerequisite mathematics for a theorem, but that requires its own effort and care, which multiplies the effort required for a given nontrivial proof to be checked.

Also note that I'm distinguishing between the general infeasibility of automated theorem proving and the otherwise great (but ordinarily feasible) difficulty of automated proof checking.


Most of the ones I've encountered have not put in the time to learn computation theory (not even a careful reading of Sipser), so I doubt they're up for learning to use that sort of tooling. Most also always think they're almost there and just need to refine a couple of minor details, which would make the time spent bringing those guns to bear seem hard to justify.


This article should really be deleted. It's giving crackpots advice for camouflaging their crackpottery, forcing serious mathematicians to waste more of their time determining its legitimacy.


Actually, if a crackpot can faithfully camouflage their output then they are probably not crackpotty enough to deserve deletion without any consideration at all.

Imagine, you have successfully proved P eq/neq NP... but you can't be bothered to learn LaTeX or convince someone who does to typeset your work? Unfreeking likely.

The vast bulk of the junk can be disregarded by simple heuristics because the crackpots are unwilling or unable to meet even a very low bar. Telling them precisely where the bar is will only get a small amount past it.


[flagged]


I suspect you're just trolling, but for anyone else: This is true for any bounded input, yes, but doesn't hold for an infinite domain.


More specifically, algorithms solve arbitrary sets of inputs and outputs. Different algorithms can be aligned with different sets of inputs and outputs that are less natural but within the P=NP criteria.


Why discourage people from trying things with a low probability of success? It's hard to find these guys; let them be happy at least for a while.


I received an email from someone I met recently:

> I inted to start doing a project on what causes Parkinson's Desease using Python , Big Data, , Data analyt and Nvidia Supercomputing. You have a skillset that can hel sovle this problem and I coul use your help. If we are successful then we can do the same for other medical deseases.

He was middle-aged and had learned to code a few months ago. He did not seem interested when I told him that the field was called 'bioinformatics' and that there were many people working on these problems already. This is not a "low probability of success", this is a zero chance of success.

In these cases, I'm not sure what the right course of action is. I don't think discouragement even does anything; for a particular kind of person (who has the arrogance to think they could cure a major disease as though it were a blue ocean problem), active discouragement just solidifies their resolve. I like to think that if they could redirect their ambition to a small and tractable problem, they might actually be able to make a contribution (even Terry Davis of TempleOS fame was able to produce something inspiring, which he wouldn't have done if he had been trying to Solve a Big Problem).

But then I think you're probably right, that there is no real hope, and the best course is to ignore them so I'm not wasting my time, but also to let them have a dream and feel like they're doing something.


Ok, I understand the point here. We are not concerned about the guy who believes he can date a supermodel, but about the guy who becomes a stalker.


"He was middle-aged and had learned to code a few months ago."

I don't like the idea that older beginners are necessarily worthy of contempt.

I do like the idea of suggesting that he take some courses in bioinformatics.

I don't know who Terry Davis is, but does he deserve the "even" qualifier before his name?


Many similar cases approach anyone who is viewed as capable in tech, from app ideas to game ideas to buzzword mashups. They usually come from people who have spent minimal time trying to build their own capability or researching prior work done in the domain. They are only focused on making their own dream a reality and latch on to the one they perceive as able to realize it.

What they need is a mentor. Which is fine, of course. Mentorship is one of the best things to look for when you're starting out. The issue is that mentorship cannot begin without humility from the mentee. Even if the mentee presented a legitimately great idea, an idea is still not an equal contribution, because they will need to be guided through the whole process.

So I think the disdain is warranted. Not because of age or inexperience. But because they're engaging a mentor with an offer of partnership. It's very arrogant.


Those are fair critiques. I brought up his age as evidence that it was not simply the arrogance of youth; if it were, education and exposure to a wider world would be the proper prescription. But his blank stare after I gave him a keyword and gently nudged him towards reality, along with the subsequently quoted followup message, confirmed that this was not youthful inexperience and naivete.

As to Terry Davis, I debated whether to include the 'even' qualifier, and decided to include it. I don't know if he deserves it either. You should look him up and decide for yourself. He was a complicated figure (just passed away recently) and I'm sure people would argue both ways.


You're welcome to try all you want. But after the hundredth crackpot paper someone receives, expect them to be a wee bit less welcoming, open, and excited than they were the first couple of times.

I think a lot of people commenting on this topic to this effect are obviously not receiving a huge deluge of crackpot papers. I've never received them myself, but I've seen a couple of instances of people and companies who do, each of them themselves small instances of the problem, and it would have been plenty to try my patience if I had to spend any time at all on each of them.


They have the freedom to speak, but there's no obligation for anyone to ever listen. The time of the person being asked for is significantly more valuable than the person presenting something. If you want to be heard, you generally need to put in a good amount of time, often by contributing to smaller problems and earning a reputation. This stuff is earned, not given out, that includes even listening to someone's ideas. Time is limited, a random person's ideas are no more valuable than anyone else's, this is why we rely on reputation, intuition, etc.



