Proof Claimed for Deep Connection between Prime Numbers (nature.com)
394 points by shashashasha on Sept 10, 2012 | 89 comments



I personally know Brian Conrad (quoted in the article). He has an encyclopedic knowledge of algebraic number theory and algebraic geometry; I would say he's among the top ten people worldwide who could reasonably assess whether Mochizuki's proof is correct. And he has a great nose for bullshit, and little patience for it.

He is taking this claim seriously. He doesn't necessarily believe it's correct (presumably he's a bit careful in what he says to the press), but he seems to think it has a shot, and is worth paying attention to.


I think it's fascinating there are only ~10 people on this earth 'smart enough' to get the proof. That is seriously some next-level stuff going on...


"Reasonably Assess" != "Smart enough"

Presumably, it is experience in algebra, established reputation in field, and familiarity with communication of math that allow this subset of people to reasonably assess the papers. It is not to say that the theorem is so advanced that only 10 people on earth possess the aptitude to understand it - it is that only 10 people have the foundation on which to understand the theorem due to its specificity.


Right, because "experience in algebra, established reputation in field, and familiarity with communication of math" can be acquired by anyone with only a modicum of smarts.

I think you are underestimating just how intricate and challenging understanding a proof can be. Especially one of this length and by someone of this standing.


Unless I misread, I don't believe I saw anything that implied that the poster was claiming that anyone with a modicum of smarts could contribute significantly to the study of elliptic curves as relates to Diophantine equations. That being said, there are a mind-boggling number of extremely specific sub-specialties in mathematics, each of which would demand a significant investment of time from even a very intelligent person to reach a deep understanding of the state of the art. The fact that only a handful of folks are sufficiently versed in a particular sub-specialty to evaluate a long, involved and presumably brilliant proof attempt that draws significantly upon the state of the art of that sub-specialty does not imply that only a handful of folks are sufficiently intelligent to evaluate the proof. This reminds me of the recent attempt on the P/NP problem by Vinay Deolalikar - plenty of extremely talented mathematicians watched from the sidelines because they did not have sufficient specialized knowledge to properly evaluate the proof; though, that did not imply that the proof was particularly brilliant or even correct (http://michaelnielsen.org/polymath1/index.php?title=Deolalik...).


No- he's saying that some people who have the intellectual horsepower to pursue it choose to pursue other things instead.


Reputation in the field is not a proxy for general intelligence. That there are only 10 people with the reputation to make it worth them assessing the proof does not mean those are the only 10 people with the smarts to understand it.

And yes, a large part of being able to work in a specialized field like this really is about having the specialized knowledge and vocabulary, rather than general intelligence. If this is anything like most proofs in mathematics, most reasonably intelligent people would be able to understand it given ten years of studying the field. It's not about being smart enough, there's just so much existing knowledge that you need a lot of specialization to be able to really contribute to any given area.


I do not assert that reputation is a proxy for general intelligence. However, reputation is a cornerstone of becoming a peer reviewer, and it takes peer review for an assertion to become science. My point, then, is that having reputable academics backing the paper will reduce doubt about its assertions and aid its adoption, not that reputation functions as a proxy for intelligence.


It is not unreasonable to assert that it takes intelligence to understand this area of mathematics. However, it is wholly inane to assert that one is intelligent if and only if one understands the intricacies of this proof.


That wasn't only ten, it was top ten.


Changed to the Nature article that this is a repost of. The SA article had borked formatting, losing superscripts.


Thanks, apologies for the poorly formatted link!


It states that for integers a+b=c, the ratio of sqp(abc)^r/c always has some minimum value greater than zero for any value of r greater than 1. For example, if a=3 and b=125, so that c=128, then sqp(abc)=30 and sqp(abc)^2/c = 900/128. In this case, in which r=2, sqp(abc)^r/c is nearly always greater than 1, and always greater than zero.

Obviously I'm reading this wrong -- because as stated (and assuming that a, b, and c are positive integers) this seems trivially true -- sqp(abc) cannot be zero, r cannot be negative, and c is finite, so therefore sqp(abc)^r/c is greater than zero, QED.

Does Nature mean that the quantity does not approach zero as r tends to infinity (or some such)? Their example sure doesn't seem to indicate such.


Yes, sqp(abc)^r/c is greater than 0. But what we are saying here is that it is _bounded below_ by some positive constant, not merely positive.

The formulation in the article is a little confusing/non-intuitive. A perhaps more 'layman' explanation is that for every r > 1, there are only finitely many coprime tuples (a,b,c) such that a+b = c and c > sqp(abc)^r. In other words, in "most" cases, sqp(abc)^r is greater than c.

This is equivalent to the definition in the article. Why? Well, given a fixed r, there are only finitely many tuples that satisfy c > sqp(abc)^r. Since there are finitely many, we can pick a constant K > 0 such that c < K*sqp(abc)^r for ALL tuples (a,b,c). Thus:

sqp(abc)^r/c > 1/K > 0, as the article mentions.
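To make the numbers concrete, here is a small Python sketch of my own (not from the article) that computes sqp(n) by trial division and reproduces the a=3, b=125, c=128 example:

  def sqp(n):
      """Product of the distinct prime factors of n (the 'radical' of n)."""
      result, p = 1, 2
      while p * p <= n:
          if n % p == 0:
              result *= p
              while n % p == 0:
                  n //= p
          p += 1
      if n > 1:                # whatever remains is a single prime factor
          result *= n
      return result

  # The article's example: a = 3, b = 125, c = a + b = 128, r = 2
  a, b = 3, 125
  c = a + b
  r = 2
  print(sqp(a * b * c))             # 30, since 3 * 125 * 128 = 2^7 * 3 * 5^3
  print(sqp(a * b * c) ** r / c)    # 7.03125, i.e. 900/128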


Thanks, after reading that through a couple of times, I think I understand what it means.

Looking back at the article, this is the critical bit:

> the ratio of sqp(abc)^r/c always has some minimum value greater than zero for any value of r greater than 1

That looks like it's just saying sqp(abc)^r/c > 0. But from what you're saying, the bit about the minimum value (1/K) is important. For each value of r, it should be possible to define a minimum value greater than 0 which sqp(abc)^r/c can never be smaller than. Right?

Are those minima interesting in themselves, or just as part of the whole conjecture? Is there a method for finding the minimum for a given value of r?


Your interpretation is right, and those minima are interesting in themselves. For example, when http://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-a... was posted, the very first comment was about that.

"It almost looks as if the method doesn't (as presently written) produce explicit constants. Hopefully as the ideas become more widely understood, effective constants will be able to be extracted."


Ah thank you, the boundedness requirement makes sense.


this is for the abc conjecture[1] (I was thinking maybe they were being headline-y about the Riemann hypothesis before I clicked through), which, if proved, would be extremely interesting due to the number of conjectures and other theorems that have been shown to be equivalent to it or direct consequences of it.

The proof will be well beyond me, but the conjecture itself is pretty accessible, as are many of its connections.

This line from the article was confusing:

> Fifteen and 17 are square free-numbers, but 16 and 18 — being divisible by 42 and 32, respectively — are not.

but that's supposed to be 4^2 and 3^2, respectively.

[1] http://en.wikipedia.org/wiki/Abc_conjecture


The version from Nature does not suffer from this typographical mess.

http://www.nature.com/news/proof-claimed-for-deep-connection...


Also the statement for Fermat's Last Theorem isn't showing the exponents correctly (should be a^n+b^n=c^n)


It took a moment to puzzle these out (parent and GP's comments). The article would be a great deal clearer if the typography rendered the exponents properly.


Indeed. Since it's simply copy-pasted from Nature News[0] (with attribution), it may be worthwhile to read it there directly.

I didn't understand that the ratio of import was sqp(abc)^r / c until I clicked through to the original story.

[0]: http://www.nature.com/news/proof-claimed-for-deep-connection...


Also:

> It states that for integers a+b=c, the ratio of sqp(abc)r/c always has some minimum value greater than zero for any value of r greater than 1. For example, if a=3 and b=125, so that c=128, then sqp(abc)=30 and sqp(abc)2/c = 900/128. In this case, in which r=2, sqp(abc)r/c is nearly always greater than 1, and always greater than zero

should be:

It states that for integers a+b=c, the ratio of sqp(abc)^r/c always has some minimum value greater than zero for any value of r greater than 1. For example, if a=3 and b=125, so that c=128, then sqp(abc)=30 and sqp(abc)^2/c = 900/128. In this case, in which r=2, sqp(abc)^r/c is nearly always greater than 1, and always greater than zero


Any implications for RSA and other algorithms? [RSA relies on factoring the product of primes being a hard problem]


The claimed proof doesn't directly help us factor products of primes. At least not now. It has more to do with the "structure" of primes.


The way I see it, Mochizuki's work could turn out to be like Maxwell's theory of electromagnetism. It could give us a definitive theory of the nature of prime numbers, but first someone needs to come along, understand it, and 'solve' it. A 'solution' of a system of differential equations would be a formula; a 'solution' of Mochizuki's system would be an algorithm. Many new algorithms could come out of this, and one of them could be an algorithm for prime factorization.

But: I'm not a mathematician, so maybe I'm just fantasizing right now.


Warning: This article is mangled by the flattening of all the superscripts in it. Given that it's about powers, the math is all but unreadable. You have been warned. :-)


Warning: this comment is out of date, the Nature article has correct formatting :)


From Wikipedia:

> Mochizuki entered Princeton University at age 16 and received a Ph.D. under the supervision of Gerd Faltings at age 22. In 2002, he became a professor at the Research Institute for Mathematical Sciences in the Kyoto University.

Very impressive.


I found the formulation of the theorem difficult to digest, but the wikipedia version was much clearer:

For every ε > 0, are there only finitely many triples of coprime positive integers a + b = c such that c > d^(1+ε), where d denotes the product of the distinct prime factors of abc?
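To get a feel for the "only finitely many" claim, here is a rough brute-force sketch of my own (nothing to do with the proof) that hunts for triples with c > d at the ε = 0 borderline, where such triples are known to be infinite in number but sparse:

  from math import gcd

  def rad(n):
      """Product of the distinct prime factors of n (d above, sqp in the article)."""
      r, p = 1, 2
      while p * p <= n:
          if n % p == 0:
              r *= p
              while n % p == 0:
                  n //= p
          p += 1
      return r * n if n > 1 else r

  # Count coprime triples a + b = c with c <= LIMIT and c > rad(abc).
  # gcd(a, b) = 1 makes a, b, c pairwise coprime, so rad(abc) splits as a product.
  LIMIT = 500
  hits = []
  for c in range(3, LIMIT + 1):
      for a in range(1, c // 2 + 1):
          b = c - a
          if gcd(a, b) == 1 and c > rad(a) * rad(b) * rad(c):
              hits.append((a, b, c))

  print(len(hits), "triples with c > rad(abc) up to", LIMIT)
  print(hits[:5])    # starts with (1, 8, 9): rad(1*8*9) = 6 < 9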


Link to a clear statement of the question, expressed as a game: http://news.ycombinator.com/item?id=4505264

This is a technique John Horton Conway uses a lot, expressing conjectures as games, where two (or more) people try to do something in turns to make, or not make, something happen.

Useful idea.


How do you get epsilon to appear in comments? The only docs I could find on formatting are http://news.ycombinator.com/formatdoc and it has no clues.


Those familiar with LaTeX may find the following tool useful: http://www.unicodeit.net/ It takes LaTeX symbols as input and outputs the Unicode for them. There is a Mac app available as well, which will convert it in place for you.


The solutions to

a⋅x² + b⋅x + c = 0

are given by

x = (-b ± √(b² - 4⋅a⋅c)) / (2⋅a)

φ ∩ (ψ ∪ ρ) ↔ (φ ∩ ψ) ∪ (φ ∩ ρ)

Thanks! I can't even begin to describe how useful this will be!

If I could upvote this more than once, I would!


> How do you get epsilon to appear in comments?

Unicode, of course: ε

When I need a Unicode character, I find it with my Unicode search page:

http://arachnoid.com/unicode

Then I copy the character into the document I'm working on. Others have more efficient ways to enter Unicode from their keyboards, like having a list of codes and a keyboard layout that allows the codes to be entered efficiently.


I just Google "unicode [name of character]". Hasn't let me down yet.


also, most platforms I have used have some kind of software to look up characters, e.g. on OSX you have something like "character map" where you can search by name/category


Relevant meta-commentary on the proof by Marty Weissman and Minhyong Kim:

http://mathoverflow.net/questions/106560/philosophy-behind-m...


One way to deal with an overzealous mod:

Such a great answer proves that the question was worth asking. – Olivier 2 days ago

@Olivier: while I agree with your comment regarding the answer, I would like to add that your inference regarding this particular question to me comes close to saying: the great and heroic work of the fire-fighters proves that it was worth setting the house on fire. – quid 2 days ago

On MathOverflow, though, it is possible to edit the original house so that it is no longer inflammatory, while still recording the work of the firefighter. – Terry Tao 2 days ago

[edit: formatting]


The fact that we have a fkg Fields medallist having to deal with overzealous mods makes me sad.


Those answers read like Klingon to me :/


This link was already posted in this thread. And as I mentioned there, it wasn't Gowers but Martin Weissman who wrote the first answer. http://news.ycombinator.com/item?id=4503359


Edited to correct the attribution. Thanks for pointing out!


Ok, I've read [1] that this guy has developed a new set of mathematical objects and techniques which he is (almost) the only one to understand.

That means that if someone wants to check if the proof is right, he'll first have to learn how to use all these objects.

I'd guess we won't have a fully vetted proof for some years.

[1] [SPA] http://gaussianos.com/posible-demostracion-de-la-veracidad-d...


I don't expect the new maths to cause a delay from learning so much as from verifying.

In some sense, proof-checking is a trivial linear-time operation: you check that each step follows from the last, and if they all do, the proof is correct. Proof checking is something that computers do well, except that describing a proof to a computer is usually non-trivial.

Where proof-checkers -- human and otherwise -- will get hung up, especially with the new maths, is that it may not be obvious how each step follows from the last.

What will happen is something a bit interactive: if the proof checker cannot intuitively see the logic in one step of the proof, he may first try to prove the lemma himself. If he can't, he'll go back to the original author of the proof and say, "Hey, I don't see how you go from step 518 to step 519. Can you prove it?" Sometimes the author easily can; he just didn't elaborate on that step in the proof because it was so obvious to him. Sometimes he can't; either in trying to prove that lemma he finds a contradiction, in which case the proof is incorrect, or maybe he just can't prove it, and the proof is ... incomplete.

It's this last situation which causes these grand proofs to drag out for so many years. At some point in the 500 pages, there is a jump which the author didn't notice, and it takes him another few years to prove that jump. The checkers have a much easier job, because they just have to try to follow the logic; if they fail, the burden is on the author to make the point at which they get stuck clearer.
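As a toy illustration of that "check each step follows from the last" view (a sketch of my own; it has nothing to do with the machinery in Mochizuki's papers), here is a linear-time checker for proofs written as lists of statements, each justified either as an axiom or by modus ponens from earlier lines:

  # A proof is a list of (statement, justification) pairs. A justification is
  # either ("axiom",) or ("mp", i, j): modus ponens, where line j must be the
  # implication ("implies", <statement of line i>, <this statement>).

  def check_proof(proof, axioms):
      for n, (stmt, just) in enumerate(proof):
          if just[0] == "axiom":
              if stmt not in axioms:
                  return f"line {n}: not an axiom"
          elif just[0] == "mp":
              i, j = just[1], just[2]
              if not (i < n and j < n):
                  return f"line {n}: cites a later line"
              premise, implication = proof[i][0], proof[j][0]
              if implication != ("implies", premise, stmt):
                  return f"line {n}: modus ponens does not apply"
          else:
              return f"line {n}: unknown justification"
      return "ok"

  axioms = {"P", ("implies", "P", "Q")}
  proof = [
      ("P", ("axiom",)),
      (("implies", "P", "Q"), ("axiom",)),
      ("Q", ("mp", 0, 1)),
  ]
  print(check_proof(proof, axioms))    # ok

The hard part in practice is exactly what the parent describes: getting a 500-page human proof into a form where every step is that explicit.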


Here's a presentation by the mathematician the article is about. It's on the nature of "Inter-universal Teichmuller Theory"[1], which apparently was the work leading up to the proof.

[1] http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal%20...


I recommend reading Martin Weissman's explanation from MathOverflow:

http://mathoverflow.net/questions/106560/what-is-the-underly...

[edit: Oops. Fixed attribution.]


That explanation was not written by Gowers at all (his name appears on it now that the answer has been marked community wiki, because he is the most recent person to edit it, to fix a typo), and indeed, although Gowers is certainly brilliant, this is far from his domain of expertise. That answer was written by Martin Weissman (username: Marty); the author information is at the bottom right corner of any answer, and here it says "Marty 98%".

Also, take a look at Minhyong Kim's answer.


Mochizuki has also been keeping an informational, yet succinct diary of his progress for a few years: http://www.kurims.kyoto-u.ac.jp/~motizuki/thoughts-english.h....


Can anyone explain in lag terms what the implications of this may be.

Just the phrase "deep connection between prime numbers" sounds really interesting, so, if this were true would there be any practical application of this in the next (N) years that would not be possible without this proof?


The abc conjecture is asymptotic. That is, its statement is of the form "for every positive epsilon there exists a constant". A proof can be constructive, that is, yield an algorithm that computes such a constant from a given epsilon, or it can be nonconstructive. For any practical application it would seem a constructive proof is needed.

I am not at all qualified to talk about the claimed proof, but I think it would be nonconstructive.


Thanks, but I said in "lay" ("lag" due to stupid iPhone AC) terms...

I don't really get how your response answers my question.

Can you really dumb it down for me; explain like I am 5 how this will result in practical applications in the next (N) years.

thanks


His answer was that he thinks there will be no practical applications from this proof, since it merely states that something exists without giving us an idea of how to find that thing.


Yeah. For the most part non-constructive proofs are only good for proving things about existing algorithms - their convergence properties, their correctness, their running time etc. Formal proof often isn't necessary in practice, though - "seems to work well enough" is ok in many domains.

Of course, this isn't to say that new mathematics based on the result and on the techniques used to prove it won't lead to new algorithms.


Here is the best layman's explanation I could find: http://i.imgur.com/VrjiG.gif


http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal%20... — oh, I need some time to accept this is not machine generated :)


What are the practical applications of such a proof? I'm genuinely curious.


There are probably implications for cryptography.


I don't know much about cryptography. So, what you're saying is basically that cryptographic algorithms are largely implemented relying on primes?

This is fascinating.


> cryptographic algorithms are largely implemented relying on primes

One of the steps in generating an RSA key is, "Randomly generate two large prime numbers p and q. Multiply p and q, and save the product pq = p*q as part of your public key."

If anyone can figure out p and q from pq, then they can figure out your private key.

A while ago there was an article on HN about using the Euclidean algorithm to break SSL.

If you have two keys pq and rs, and one of the factors is the same (like p = r), then there's a well-known algorithm dating back thousands of years which you can use to quickly figure out the common factor's value.

So what the authors of a paper did was gather a couple million public keys from SSL websites, then run the Euclidean algorithm on each pair of keys, and many of them had common factors. Why? As noted above, the prime numbers are supposed to be picked "randomly," but they weren't picked randomly enough. (Getting a computer to produce random numbers is an interesting problem. The usual approaches are "faking it" -- technically called pseudorandom numbers -- and using input from hardware that interfaces with the outside world, like the system clock, keyboard, mouse, network, etc.)
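Here is a toy version of that attack (my own sketch; the real studies scanned millions of keys and used a much faster batch-GCD method rather than all pairwise GCDs). Any pairwise GCD greater than 1 hands you a shared prime, and dividing it out gives both complete factorizations:

  from math import gcd
  from itertools import combinations

  # Toy moduli: the second and third "keys" share the prime 101, as can happen
  # when two machines generate their primes with too little randomness.
  moduli = [7 * 11, 101 * 13, 101 * 17, 19 * 23]

  for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
      g = gcd(n1, n2)
      if 1 < g < n1:
          # g is the shared prime; the cofactors complete both private keys.
          print(f"keys {i} and {j} share the factor {g}: "
                f"{n1} = {g} * {n1 // g}, {n2} = {g} * {n2 // g}")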


There's even more at work here. In the context of RSA and DHMW there are bad primes and good primes. A prime p is potentially bad if p-1 has no large factors. The algebraic structures associated with pq are then combinations of small algebraic structures, and that can (sometimes) make pq easier to factor.

So not just any old primes will work, and if two unrelated, unassociated people use the same software to generate "good" primes, sometimes they will coincide, leading to the potential for finding their factorizations with GCD on pairs.
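To see why a prime p with smooth p-1 is "bad", here is a bare-bones sketch of Pollard's p-1 method with toy-sized numbers (my own illustration, not something from the parent comment): if every prime power dividing p-1 is at most B, then p-1 divides B!, so a^(B!) ≡ 1 (mod p) and gcd(a^(B!) - 1, n) exposes p.

  from math import gcd

  def pollard_p_minus_1(n, bound=100, a=2):
      """Find a factor of n when some prime p | n has a very smooth p-1."""
      for k in range(2, bound + 1):
          a = pow(a, k, n)              # now a = 2^(k!) mod n
          g = gcd(a - 1, n)
          if 1 < g < n:
              return g
      return None                       # p-1 not smooth enough for this bound

  # p = 101 is "bad": p - 1 = 100 = 2^2 * 5^2 is very smooth.
  # q = 2003 is just another prime, there to make n composite.
  n = 101 * 2003
  print(pollard_p_minus_1(n))           # 101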


There is little point in concerning oneself with smooth p-1, p+1, Φ_i(p) values. Randomly-chosen primes will have a negligible chance of having such properties.

Additionally, the more powerful ECM algorithm can be parametrized to generate about p different elliptic curves E_{a,b}(p), where p is found whenever E_{a,b}(p) has a smooth order. Once again, the chance of finding a pair (a,b) that results in such a smooth order is negligible. This is, in a nutshell, the result of Rivest and Silverman [1] dispelling the need to jump through many hoops to generate strong primes.

[1] http://people.csail.mit.edu/rivest/RivestSilverman-AreStrong...


Right, so what you're saying is that when the primes are smaller, as they used to be able to be, strong (and safe) primes were a good idea because the earlier methods of factoring could exploit their potential weaknesses. However, it's now the case that with ECM and GNFS methods of factoring we've had to move to much, much larger primes, so the weaknesses are no longer relevant.

Thank you.


> two unrelated, unassociated people use the same software to generate "good" primes, sometimes they will coincide

I thought the GCD attack's success was more due to the lack of good entropy (randomness) in the random number generation, than due to a lack of eligible primes of the requested length.

This makes much sense -- some people might run the random-number generation on server systems, which often don't have any sources for human input (mouse movements and keystrokes can be a great source of randomness).

Or maybe it's simply a case of people using /dev/urandom instead of /dev/random because it's faster. (Which it is, but it's also not something you want to use for something as important as generating a key for long-term use.)


Yes, prime factorization is the predominant method for making cryptography "hard" to brute force. Earlier in its history, discrete logarithms were also used to make brute force unpalatable, but factorization-based cryptography rules the roost right now.

http://en.wikipedia.org/wiki/Discrete_logarithm

http://en.wikipedia.org/wiki/Integer_factorization


There is no obvious implication of the abc conjecture to cryptography.


This statement is as interesting as the following one:

There are probably implications for mathematics.


Shinichi Mochizuki's homepage is worth a look :) http://www.kurims.kyoto-u.ac.jp/~motizuki/top-english.html


The general message there is "I'm always up in the skies working on my proofs. No office hours this week"


This guy's thesis has an approachable explanation of the conjecture, as well as some of the interesting theorems that it implies (including an asymptotic version of Fermat's Last Theorem):

http://jeffreypaulwheeler.com/Masters%20Thesis.pdf



404


There's an apostrophe before the "s" in "Masters" that is stripped out by the HN code for some reason. Either add that, or go to http://jeffreypaulwheeler.com/ and select the link "The abc Conjecture" near the start of the Papers section.


Is it naive to say these proofs are overly long - that there's something a little simpler behind them?


At this stage, yes, it's naive. The ideas and techniques here are pretty much completely new, and it will take years, if not decades, for people to understand them deeply enough to find the essential core of what's happening.

No, they are not overly long. It's likely that papers explaining them will effectively be an order of magnitude (base 10) longer.



That link is old and was posted a few years before the final paper. The more recent discussion is here: http://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-a...


That comment, from 2009, refers to a different paper by a different author.


? that link points to a comment about a different claimed proof from 2009


If there is some kind of connection between prime numbers, wouldn't a lot of the current work in encryption be thrown out?


> If there is some kind of connection between prime numbers, wouldn't a lot of the current work in encryption be thrown out?

Unlikely. The recent work doesn't improve our ability to factor large integers, the key to modern cryptography. If someone created a function that instantly produced the prime factors for large integers without the present difficult process, that would change everything, but that's not a likely outcome.

The next real challenge to modern cryptography isn't this work, but quantum computing. If quantum computing came to full flower with no obstacles sometime in the future, that would represent a basic change because it would perform an end run around the present computational difficulties that assure the security and reliability of our present methods.


Ok, off topic, sorry, but... Are you saying that if/when quantum computing comes around, existing encryption systems would be suddenly ineffective? So if the NSA gets their hands on a quantum computer, they'd be able to brute force a 1024bit RSA encrypted file in far quicker time than their other super computers?

I found some info here, but I didn't get enough out of it to answer my question directly. I think the answer is "Well theoretically, but it's complicated":

http://stackoverflow.com/questions/2768807/quantum-computing...

http://en.wikipedia.org/wiki/Quantum_computer#Potential

http://en.wikipedia.org/wiki/Shor%27s_algorithm

http://en.wikipedia.org/wiki/Post-quantum_cryptography

  [...] most currently popular public-key cryptosystems 
  rely on the integer factorization problem or discrete 
  logarithm problem, both of which would be easily solvable 
  on large enough quantum computers using Shor's algorithm. 
  Even though current publicly known experimental quantum 
  computing is nowhere near powerful enough to attack real 
  cryptosystems, many cryptographers are researching new 
  algorithms, in case quantum computing becomes a threat in 
  the future.


Yes. The naive algorithm for factorization runs in O(2^N) (where N is the number of bits in the number you want to factorize), which is impractical for any sufficiently large N (2^64 > 10^19, 2^1024 > 10^308, which is BIG). There are better known algorithms, but none of them are polynomial, so brute force is highly impractical.

Shor's algorithm, on the other hand, runs in time polynomial in N, roughly O(N^3) (equivalently, O((log n)^3) in the number n being factored). For an idea of the difference: 64^3 = 262,144 and 1024^3 ≈ 1.07 * 10^9, versus 2^64 and 2^1024 above.

So Shor's algorithm could be used to very quickly factorize large numbers and would effectively break RSA. We just need a quantum computer, which as far as we know nobody has yet.
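For a rough feel of that exponential-versus-polynomial gap (a back-of-the-envelope snippet of my own, using N^3 as a stand-in for Shor's cost at key size N):

  for bits in (64, 512, 1024, 2048):
      power_of_ten = len(str(2 ** bits)) - 1     # 2^N is roughly 10^this
      print(f"N = {bits:4d}:  2^N ~ 10^{power_of_ten},  N^3 = {bits ** 3:,}")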


Ok, thanks. But when I start hearing conspiracy theories that the NSA (or anyone) has already developed one, I should basically look for an encryption algorithm that isn't based on prime factorization?


Encryption algorithms exist and are being developed which are not based on prime factorization and for which no known quantum algorithm exists which can break them in polynomial time.

There will probably be significant warning before quantum computers exist which can break RSA. In that time it will probably become the norm to no longer use algorithms which are rendered obsolete by quantum computers.


> Are you saying that if/when quantum computing comes around, existing encryption systems would be suddenly ineffective?

Yes, but it's a big "if". There are many obstacles that stand in the way of quantum computing on a large scale, in fact some believe it will never be possible to maintain the delicate conditions required to perform quantum computation on a large scale.

The reason quantum computing could outperform conventional computation is because a quantum computer would, in a manner of speaking, process all possible problem statements at once, in parallel, and arrive at the best solution just as though only one problem statement had been processed. I know that may be difficult to imagine, but so is quantum. About quantum theory, John Wheeler said, "If you are not completely confused by quantum mechanics, you do not understand it."

Suffice it to say that quantum computing on a scale sufficient to pose a threat to public-key cryptosystems is not just around the corner.


I think that's the wrong attitude to take. If you look at the mathematics there is really nothing so weird or magical about quantum mechanics or using it to perform factorization; you just take a set of qubits initially in an equal superposition of all possible factors, manipulate them through quantum operations such that the amplitudes of all the factors that don't divide the number go to zero, and then measure the state to force a collapse/entangle yourself with it, and the number you measure will necessarily be a factor. QM in general is really pretty simple and even obvious if you ignore the silly mythology and just follow the math.

As for the specifics, I believe we have working 4-qubit quantum computers, and Shor's algorithm has been successfully applied to calculate that 15 = 3 × 5. But current approaches do indeed seem unlikely to scale up to 1024 qubits.


> If you look at the mathematics there is really nothing so weird or magical about quantum mechanics or using it to perform factorization ...

Yes, if you examine the mathematics that's true, but if you examine the physics it's not true. The problem doesn't lie with algorithm design, the problem lies with putting them into practice in actual physical computers.

> QM in general is really pretty simple and even obvious if you ignore the silly mythology and just follow the math.

On the contrary, it is not at all simple if one must try to create an actual physical realization.

This is not to suggest that the problems won't be overcome, only that they're more formidable than the in-principle description suggests.


It is funny, but AFAIK there is no established lower bound on the complexity of the factoring problem, even in the realm of classical computation. Anecdotally, Ron Rivest estimated in 1977 that factoring a 129-digit number would require 40 quadrillion years. RSA-129 was factored in 1994, and RSA-768 in 2009.



