Mathematicians Team Up on Twin Primes Conjecture (simonsfoundation.org)
288 points by digital55 on Nov 20, 2013 | 94 comments



To clarify, this result means that there are infinitely many pairs of primes separated by at most 600. This doesn't mean that the gap between subsequent primes must always be less than 600.


Thanks for the clarification. Reading back through the article I see where that's mentioned, but I totally missed it the first time through.


Ah I had missed it too. Is there a bound on the gap between subsequent primes as well?


For any natural n, the n consecutive numbers (n+1)! + 2, (n+1)! + 3, ... (n+1)! + (n+1) are all composite -- the first one is divisible by 2, the second one by three and so on.

Thus there are arbitrarily long sequences of consecutive numbers that have no primes in them.


I seem to remember that there's always a prime between n and 2*n? (Still an arbitrarily large absolute gap, but a fixed relative gap.)

Can anyone deny or confirm?


Yes, that's called Bertrand's postulate, first proven in 1850 by Chebyshev. For any integer n > 3, there's at least one prime strictly between n and 2n-2.

http://en.wikipedia.org/wiki/Bertrand's_postulate
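
For anyone who wants to see it in action, here's a minimal sketch (plain Python, naive trial division, so keep the range small) that spot-checks the statement for small n:

  # Spot-check Bertrand's postulate: for every integer n > 3 there is a prime p
  # with n < p < 2n - 2.
  def is_prime(m):
      if m < 2:
          return False
      d = 2
      while d * d <= m:
          if m % d == 0:
              return False
          d += 1
      return True

  for n in range(4, 1000):
      assert any(is_prime(p) for p in range(n + 1, 2 * n - 2)), n
  print("verified for n = 4..999")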



That's a really damned cool proof. Why is it formulated as n+1 instead of merely n?


Because n! + 1 can be prime, so from n! + 2 to n! + n you only get n - 1 numbers.

That's why you need to start the sequence at (n+1)! + 2.


Concrete examples:

3!+1 = 7 is prime

11!+1 = 39916801 is prime (according to http://www.wolframalpha.com/input/?i=is+11!%2B1+prime%3F)

27!+1 = 10888869450418352160768000001 is prime (according to http://www.wolframalpha.com/input/?i=is+27!%2B1+prime%3F)


You forgot a couple.

1!+1 = 2 is prime.

2!+1 = 3 is prime.

Based on a back of the envelope estimate, the number of values of n up to N for which n!+1 is prime should be O(log(log(N))) so it should happen infinitely often. But proving that is likely to be difficult.
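
If you want to hunt for more of these yourself, here's a minimal sketch; it assumes sympy is installed, since trial division won't reach numbers the size of 27!+1:

  # Find small n for which n! + 1 is prime (1, 2, 3, 11, 27, ... are known hits).
  from math import factorial
  from sympy import isprime  # assumes sympy is available

  for n in range(1, 40):
      if isprime(factorial(n) + 1):
          print(n, "-> n! + 1 is prime")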


Did you get that estimate by some other way than using prime number theorem and Stirling approximation for n! ?


That is exactly how I estimated it.

But when I think about it, it is the wrong way to estimate it. Because you've eliminated all of the most common causes of those numbers not being prime, so the density of primes in that set should be expected to be massively higher than normal.

If I thought about it, I could come up with a much better estimate. But it would take some thought.


Presumably there's no semantic difference, though? Saying that, for any counting number n, there are at least n-1 consecutive composites is semantically no different than saying there are n consecutive composites. It just makes for a cleaner proof, no?


That's a neat little proof.


Just the opposite. The density of primes is known to fall off like 1/log(n), and therefore asymptotically approaches zero. From this, there cannot be such a thing as a largest gap between primes, because that would put a positive lower bound on the density of primes.


Nope :) Consider the number 1000000! + 1 (one million factorial plus 1).

  1000000! + 2 is divisible by 2.  
  1000000! + 3 is divisible by 3.  
  ...  
  1000000! + 1000000 is divisible by 1000000.
Here we have a gap of 999999 numbers, none of which are prime.
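
Same construction, shrunk down so you can actually run it (a minimal sketch with n = 10 rather than one million):

  # (n+1)! + k is divisible by k for k = 2..n+1, so all n of those numbers are composite.
  from math import factorial

  n = 10
  base = factorial(n + 1)
  for k in range(2, n + 2):
      assert (base + k) % k == 0 and base + k > k   # divisible by k, hence composite
  print(n, "consecutive composites starting at", base + 2)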


Weird, I just read about this exact example yesterday in "The Music of the Primes" by Marcus du Sautoy. I'm not sure though if it was the German Siegel or the Norwegian guy (Selberg or something) who also ended up at Princeton to figure it out, or maybe it was known already by Euler's time :-)

Neat proof indeed.


I first read it a few years back in Excursions in Number Theory [0]. Super approachable (and short!), I read it in high school with only a little bit of calculus knowledge.

[0]: http://www.amazon.com/Excursions-Number-Theory-Dover-Mathema...


That is a neat proof :D


Ah, an earth-shaking discovery made by a man in his fifties who was barely within academia [1]. Guess he won't get a Fields Medal for that.

It seems like more and more of the really big discoveries are being made by people who go off and work really hard on their own.

I vaguely recall a claim that the greatest discoveries aren't being made by people in their twenties any more just because math is so huge you need ten years of study to get to the bleeding edge.

Anyway, it seems to say something about academia.

Edit: Just to be clear, I neither deny that academia can achieve great things nor do I claim that young people can't do similarly. Certainly, anyone halfway near academia probably sees many flaws, but that's for another post.

[1] http://en.wikipedia.org/wiki/Yitang_Zhang


I don't get it. The article talks not just about Zhang, but also about James Maynard, who is a postdoc and looks young, and is part of what you might call standard academia. His result is independent of Zhang's and is orders of magnitude stronger and is much easier to understand. Comes off pretty good for academia, really.

Also, remember that most people on Earth aren't in academia, but most people still think. So the null hypothesis is that most innovations won't come from academia. But a huge amount of them do anyway, including even your counterexample, who--despite his economic struggles--was still a lecturer at a university. You could make a similar argument for young people.

Our university system and our cultural obsession with youth deserve plenty of criticism, but the accusation that they're not making important discoveries isn't one of them, and it's totally unwarranted from this article.


> The article talks not just about Zhang, but also about James Maynard, who is a postdoc and looks young, and is part of what you might call standard academia. His result is independent of Zhang's and is orders of magnitude stronger and is much easier to understand.

Maynard has done some stellar work to be sure, but he built off of Zhang's breakthrough. Maynard's result is one of a sequence of very quick findings, all happening in just months after Zhang's work became public, all made possible by that work, and all relatively easy, as can be seen from their number and rapid pace.

I have not read all the papers here, but my guess is that Zhang's result is hugely more impressive.


> Maynard has done some stellar work to be sure, but he built off of Zhang's breakthrough.

That was actually not my understanding from the article (I haven't read the papers either). What I got was that there was an earlier paper ("GPY") that Zhang based his work on. Maynard based his work on a flawed predecessor of GPY. Thus he did not base his work on Zhang's. In fact, the article seems to indicate that further progress could be made by combining their approaches, but it certainly seems from my reading that they're independent. Of course, that could all be wrong, but it is what the article indicates.


Actually at the bottom of page 2 of Maynard's preprint he states: "We emphasize that the above result does not incorporate any of the technology used by Zhang to establish the existence of bounded gaps between primes."

So it seems like it is actually a coincidence, though they were both building off of the same previous work (GPY) so maybe it's not that surprising.


That is only a tactical move. It was Zhang's work that inspired him to look quickly for a better alternative. Zhang showed "there is gold!" and then Maynard ran towards it.


> he built off of Zhang's breakthrough.

The article makes it very clear that his work is completely independent from Zhang's, which is why there is expectation that they can be used together to get bounds into the 10s, and maybe even below that limit.


> ... but most people still think.

I take issue with that.

People think, but they don't think far beyond what is directly accessible to experience, so I wouldn't call it academic thinking.


OK.

How sure are you that academia is different from the general run of mankind in this regard?


The difference between academic teaching and practical teaching is that academic teaching involves "second order knowledge". I.e. knowledge that is not accessible to direct experience. We can only acquire it through study, reflection, and testing and retesting our mental models.

Time and again, phenomenographic research shows that students simply don't do this. This is the bane of any academic teacher's life. Students are demonstrably smart enough to understand the material, but they just don't engage with the material at a deep enough level for it to make any real impression. They don't want to think.

For instance, Physics students become adept at manipulating kinematics equations, but on further questioning it becomes clear that they still think like Aristotle did. Or Computer Science students struggle mightily to memorize all the syntax and semantics of Java, but their learning is so shallow they still can't solve the FizzBuzz problem.

Why don't people think? For the same reason they don't exercise, if they don't have to. It's hard!

So, yes, I claim that the majority of people don't think. Are there academics who don't think much? A few don't. Many don't think much about matters outside their specialism. And academics receive precious little training in overcoming the cognitive biases that hinder clear thinking for most people. But I still think that academics are more likely to be deep thinkers than most, just like sportspeople are more likely to be enthusiastic exercisers than most.


I think it's unfair to place the blame squarely on the student. Even in college, the academic system emphasizes memorization and surface-level application (knowing how to "use a formula") over the kind of "thinking" you're describing.

Also, many professionals are called upon to solve problems that require this same kind of "thinking". Academics are not the only ones wrestling with difficult problems.


> I think it's unfair to place the blame squarely on the student. Even in college, the academic system emphasizes memorization and surface-level application (knowing how to "use a formula") over the kind of "thinking" you're describing.

You're quite right. I can't possibly give a fair, nuanced description of the issue in a couple of paragraphs, and I'm nowhere near qualified to give the definitive account of the problem. Still, you're right that I do place more blame on the learners than most. Maybe people shy away from these ideas because they're dangerously close to some rather sinister Brave New World-style educational theories.

In teaching, I think we have a chicken-and-egg problem. We want students to learn both the underlying abstraction and the surface details. But the weaker students are resistant to learning the underlying abstraction, so we drop that and drill them harder on the surface details instead.

I see no easy solutions to this problem.

> Also, many professionals are called upon to solve problems that require this same kind of "thinking". Academics are not the only ones wrestling with difficult problems.

Sure. Computer programmers are an obvious case. But a perusal of the Daily WTF's archives (http://thedailywtf.com/), or the fact that many professional programmers with long careers still can't pass the FizzBuzz test, shows you that there is still a problem.


Wait, what? Academia is a pretty broad brush stroke, if you know anything about being an academic, and you appear to be making that brush broader still.

Care to elaborate?


I'm not sure what you're asking me; the whole point of my comment is that academia is a pretty broad brush stroke.

My parent comment says that, outside academia, most people's thoughts aren't significant enough to merit the term "thinking". I'm saying that, if you want to consider that that's the case outside academia, you should realize that it's also the case inside academia.

Personally, I would be happy to dignify most people's thoughts with the term "thinking".


Are you saying that Maynard independently tackled P(2) or that he independently came up with a better sieve? My take from reading the article is the latter.


I guess I'm saying the latter too. Can you explain the former?


It is my understanding from reading Maynard's preprint that he claims independence from Terence Tao and the Polymath8 project. He doesn't claim independence from Zhang, and neither should he, even if he uses a different approach. His result may well turn out to be as important or more important, but Zhang's result is clearly earlier and well known.

My differentiation is that number theorists are looking for better sieves all the time. Zhang's work was a breakthrough in bringing the prime gap from infinity to a finite number. It is not surprising that better sieves could bring the gap a lot closer.


Okay, it appears that you and several others have not read the preprint. The article's wording is a bit confusing, but if you think carefully you'll realize that a later work cannot be independent of an earlier, well-known result.

The article does have a link to Maynard's preprint: http://arxiv.org/pdf/1311.4600v1.pdf

Here is an expert overview of the situation: http://www.dms.umontreal.ca/~andrew/CEBBrochureFinal.pdf

"In April 2013, Yitang Zhang proved the existence of a nite bound B such that there are in nitely many pairs of distinct primes which di er by no more than B. This is a massive breakthrough, makes the twin prime conjecture look highly plausible (which can be re-interpreted as the conjecture that one can take B  2) and his work helps us to better understand other delicate questions about prime numbers that had previously seemed intractable. The original purpose of this talk was to discuss Zhang's extraordinary work, putting it in its context in analytic number theory, and to sketch a proof of his theorem. Zhang had even proved the result with B  70 000 000. Moreover, a co-operative team, polymath8, collaborating only on-line, had been able to lower the value of B to 4680. Not only had they been more careful in several dicult arguments in Zhang's original paper, they had also developed Zhang's techniques to be both more powerful and to allow a much simpler proof. Indeed the proof of Zhang's Theorem, that will be given in the write-up of this talk, is based on these developments. In November, inspired by Zhang's extraordinary breakthrough, James Maynard dramatically slashed this bound to 600, by a substantially easier method. Both Maynard, and Terry Tao who had independently developed the same idea, were able to extend their proofs to show that for any given integer m ¥ 1 there exists a bound Bm such that there are in nitely many intervals of length Bm containing at least m distinct primes. We will also prove this much stronger result herein, even showing that one can take Bm  e8m􀀀5. If these techniques could be pushed to their limit then we would obtain B( B2) 12, so new ideas are still needed to have a feasible plan for proving the twin prime conjecture. The article will be split into two parts. The rst half, which appears here, we will introduce the work of Zhang, Polymath8, Maynard and Tao, and explain their arguments that allow them to prove their spectacular results. As we will discuss, Zhang's main novel contribution is an estimate for primes in relatively short arithmetic progressions. The second half of this article sketches a proof of this result; the Bulletin article will contain full details of this extraordinary work."

EDIT: it appears that copying and pasting from the PDF is problematic and please read the linked PDF if interested.


It's easy to believe academia is fading when you categorically ignore all of its achievements.


John Baez has had an interesting series of posts about this on G+, but IIRC the gist of it is that even though both Grigori Perelman and Yitang Zhang were somewhat outside the mainstream of academia, their work was not. Both of them built on well-known theories and methods to prove their groundbreaking theorems. Perelman's primary tool, for example, was Ricci flows, which had been used at least 25 years before his work. (I know even less about Zhang's work)


I think the mathematical edifice is sound, but I can certainly see how being as divorced as possible from typical faculty responsibilities makes it easier to do original / groundbreaking research. Too many professors I know spend the bulk of their time attending to e-mail, grant writing, and traveling to give talks about work others have done. What little remaining time they have is often devoted to teaching. Some of them openly complain about how little time they have to actually think about things, while others seem to roll with it (being famous in your field is not a bad gig). Worse, the more successful you become the more these tertiary responsibilities encroach on your time. It's an open secret that the best-known professors in certain fields basically sign their names on the papers their postdocs and grad students write and that is more or less the extent of their research contribution. Cutting all this out, even if it means moving in with your mom in Moscow, could indeed be a viable option...


The bare truth is that this kind of mathematics (and, say, theoretical physics) is "hard" and requires a lot of thinking, and thus a lot of time. Breakthroughs are named that for a reason -- they involve a distant, non-local solution. If you're doing research on a topic you will most likely focus only on local improvements to the problems, simply because they lead to more steady results -- they are both safer and don't require you to think about the problem at hand pretty much all the time -- and you can't really expect every mathematician to prefer the breakthrough kind of work.


Agreed!

What I'm trying to get at is that there's two ways you can think of academia: 1) roughly the set of people in traditional institutions, and 2) a basically unorganized set of people that happen to come together through working on related problems, in related ways.

2) is arguably the purer view of academia, and in that sense, it's as strong as ever.

I agree with you that there's deep, serious problems with 1), and they have to do with Pournelle's law, (in the US) with Bayh-Dole, and a litany of other hard, nasty problems of social life. But, even in that case: is that any worse than other cultures?


But Zhang was teaching as an adjunct (or at least, a non-tenure-track instructor) before he made this breakthrough and skyrocketed (apparently) to full professor. Adjuncts generally have enormously less time for research than full-time professors do (at least if they want to make a borderline living wage).


Being barely within academia today means having access to all the papers released along with many blog posts by PhD's who analyze the results and repercussions of a paper or a discovery.

The only real example of out-of-academia work, in the true sense, was Ramanujan[1], who was a sort of pure genius out of nowhere and was able to pull out formulas like this [2]. He wasn't constrained by the rigorous need for proof that Europeans required before accepting any discovery as true.

Du Sautoy[3] states that Ramanujan's lack of formal (Western-style) education kept him away from European mathematical achievements, but also away from European fears about the intractability of specific problems (like the Riemann Hypothesis[4]), and that this was a good thing.

[1] http://en.wikipedia.org/wiki/Srinivasa_Ramanujan [2] http://bit.ly/1dh0E5w (Google image) [3] http://plus.maths.org/content/music-primes [4] http://en.wikipedia.org/wiki/Riemann_hypothesis


> I vaguely recall a claim that the greatest discoveries aren't being made by people in their twenties any more just because math is so huge you need ten years of study to get to the bleeding edge.

This depends heavily on what you mean. Of course there are areas of math requiring ten (or more) years of study before you can work in them. However...

- In theory, all high school graduates have studied math for at least twelve years.

- There are many, many problems on "the bleeding edge" of mathematics, in the sense that no one can solve them, but which can be stated in terms an algebra student would have no trouble understanding. Further, there are many mathematical techniques which are "advanced" in the sense that they are never taught in school, but are "simple" in the sense that the algebra student I mentioned earlier would have no trouble following a proof using them. If you're interested, try reading through the archives of Tanya Khovanova's blog; she has plenty of examples of IMO problems and special Soviet math exam problems for Jews.


The difficulty with proving bleeding edge results in mathematics is normally not that the question is particularly advanced or difficult to understand, but rather that if it had an easy solution, then it would likely have been solved by now.

The only times I can think of where this is not the case are after a major, yet easy to understand, discovery makes a lot of formerly hard problems easy, or in a field of mathematics that is either young enough, or uninteresting enough, that it has not yet had significant time devoted to it.


> an earth-shaking discovery made by a man in his fifties who was barely within academia

It's stories like these that make me jump with joy inside every time I think about the wonder that is the internet and the possibilities that creates for us as a human race.

The possibility that exists by having things like free 2,000-book encyclopedias (see http://en.wikipedia.org/wiki/Wikipedia%3ASize_of_Wikipedia#H...) at the fingertips of nearly every human on earth is just mind boggling.

I won't be surprised if Joe Average discovers the cure for Cancer tomorrow. Or AIDS. Or our fossil fuel addiction.

In fact, I expect those things to happen sooner than most of us might think.


Nah, I don't think the average person will discover the cure to cancer.

While access to freely available knowledge is increasing, this knowledge is mostly broad, not deep. So we can have information on a lot of topics, but these topics aren't covered in very much depth.

Also keep in mind that Zhang was very much within the fold of academia when he discovered the 70 million upper bound on the separation of primes.


> While access to freely available knowledge is increasing, this knowledge is mostly broad, not deep. So we can have information on a lot of topics, but these topics aren't covered in very much depth.

This has bothered me quite a bit. I want to go to wikipedia and be able to find, to use Feynman's term, a map of the cat. I don't understand why it's not there.


I expect nothing like this will ever happen. You underestimate the effort and amount of work (and training) needed for such breakthroughs. Also the cost of the research. I am afraid most of the low-hanging fruit is already gone. You don't find a cure for cancer just by reading a scientific journal over your evening tea and then dreaming up the formula.


I don't think that you can really call a professor with a PhD 'Joe Average'.


Precisely the reason to push for free and open academic journals that anybody can read.


Established scientists stop producing groundbreaking results because of tenure. Once you have tenure, the biggest incentive to work hard goes away; it's simple economics. (Hence "mathematics is a young man's game".)


For most people in academia, the incentive to work hard is not the fear of getting fired, but the intrinsic satisfaction of working on a certain class of problems. Tenure does not take that intrinsic value away.


I take it you've never heard the term "6 year job interview"? :)


Pretty sure that applies to the bullshit you have to do that isn't research. I'd be surprised to find a math professor who isn't there because he/she enjoys studying math - there just aren't any other good reasons to do it.


You might want to look into what "groundbreaking" entails because excited news articles lead nowhere all the damn time.

Tenure exists to free people from the "3 papers a year" BS, which otherwise means you get a neverending stream of minor refinements and very little thorough investigation.


Ah yes. Andrew Wiles preferred to work on his own, and he 'disappeared' for years before he came up with his proof.


So if I understand this right...it means that no matter how far along the number line I go (and I can go a long ways out) I will always be able to eventually find a pair of primes that are separated by no more than 600?

The crazy thing to me is that primes get less and less dense as you get further and further out. It would seem to me that once you reach a certain threshold you won't be able to find two primes "close" to each other anymore because primes will be so far apart from each other. But this says I will always be able to find a pair pretty close together? (I may be looking a long time though)


> It would seem to me that once you reach a certain threshold you won't be able to find two primes "close" to each other anymore because primes will be so far apart from each other.

That is not quite correct. The average gap between primes gets larger as you move further along the number line. But you will still get occasional pairs of primes that are close to one another. In fact, there is an old conjecture (the Twin Prime Conjecture[1]) that claims that there are infinitely many pairs of consecutive primes separated by 2.

PS. I'm not a mathematician (engineer with some decent math background), so I might very well be wrong.

[1] http://en.wikipedia.org/wiki/Twin_prime


Yeah, this result is effectively getting a little bit closer to proving the Twin Prime Conjecture.

It would be interesting to see the proof get progressively closer to 2, but more and more difficult each time.
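
For a feel of how stubbornly the gap-2 pairs keep showing up, here's a minimal sketch counting twin primes below 100,000 (naive trial division, so slow but simple):

  # Count pairs of primes separated by exactly 2 below a bound.
  def is_prime(m):
      return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

  twins = [(p, p + 2) for p in range(2, 100000) if is_prime(p) and is_prime(p + 2)]
  print(len(twins), "twin pairs below 100000; last one:", twins[-1])
  # prints 1224 pairs; the last is (99989, 99991)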


You will always be able to find a pair of primes that are "close" to each other. However, the theorem makes no claim about how far such pairs are from each other ;). To your point, primes get progressively less dense, which must also mean that these pairs get sparser still. There are probably quite vast distances between them, but infinity is a long way, so it's not entirely surprising that you find such improbable events with regularity.


> Maynard’s approach applies not just to pairs of primes, but to triples, quadruples and larger collections of primes.

Quadruples implies that it does, in fact, make a claim about how far such pairs are from each other. In fact, it sounds like it could be used to make a claim about any finite grouping of primes. It would be interesting to see what kind of pattern those follow.


There's no such thing as a "quadruple pair of primes". Assuming x is prime (and greater than 3), one of x+2, x+4, and x+6 has to be nonprime. More specifically, one of them has to be divisible by 3... If x mod 3 is 1, then x+2 is divisible by 3; otherwise, if x mod 3 is 2, then x+4 is divisible by 3.


Given that a "pair of primes" simply refers to two primes within a fixed maximum distance of each other, not necessary separated by exactly 2, I assume a "quadruple of primes" would refer to the same thing. Not four numbers each separated by two, but four numbers all within a given distance of one another on the number line. (Whether this distance would be described as the total span encompassing all four numbers, or the maximum distance between any two of the four, I don't know.)

And of course, you could generalize four to k, and quadruple to k-tuple.
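
To make that concrete, here's a minimal sketch that hunts for the densest admissible pattern of four primes, p, p+2, p+6, p+8 (a "prime quadruplet"), which fits in a window of width 8:

  # Four primes can sit within a span of 8, just not in the pattern x, x+2, x+4, x+6.
  def is_prime(m):
      return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

  for p in range(2, 2000):
      if all(is_prime(p + k) for k in (0, 2, 6, 8)):
          print(p, p + 2, p + 6, p + 8)
  # 5 7 11 13, 11 13 17 19, 101 103 107 109, 191 193 197 199, ...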


>Whether this distance would be described as the total span encompassing all four numbers, or the maximum distance between any two of the four, I don't know.

Actually, either one would imply a bound on the other (although a slightly looser one). If you know the total span, the same number is a looser bound on the maximum between any two of the four, and if you know the maximum of any two, 3 times that gives you a loose bound on the total span.


True. What I meant was, I'm just not sure which convention is commonly used when speaking of prime k-tuples.


In understanding how small prime gaps can occur so far out, it helps to think of primes as gaps in the projection of all prior prime numbers (i.e. the shadow of all prior primes).

It is then fairly easy to intuit that every so often there'd be gaps in the shadows right next to each other.


Wow, this is a great story. First of all, props to the writer for weaving a bunch of different threads together in a way that made sense and was understandable to non-mathematicians without dumbing it down too much. And this is why I love math. There are so many great characters and themes in this story.

I love problems that can be stated so simply and yet are so fiendishly difficult to prove.

I love the idea that there is room for everybody in mathematics, from the lone genius outside of academia, to the young post-doc, to the unlikely collaborators.

Most of all, I love the idea that it's okay to fail, to throw out dumb ideas along with brilliant ones and that everyone makes mistakes, even Ph.D.'s in mathematics. We need more of that in education and academia generally.


My understanding of the Riemann Hypothesis is that it can be reformulated as a statement about prime density. If this result is a concrete outer bound on density, does it have implications on RH?


Speaking of reformulations of the Riemann Hypothesis, there is a surprisingly elementary statement that is equivalent to the RH.

Let H(1) = 1.

Let H(2) = 1 + 1/2.

Let H(3) = 1 + 1/2 + 1/3.

...and so on.

These are called the harmonic numbers.

Let S(1) = 1.

Let S(2) = 1 + 2.

Let S(3) = 1 + 3.

Let S(4) = 1 + 2 + 4.

Let S(5) = 1 + 5.

Let S(6) = 1 + 2 + 3 + 6.

The pattern here is harder to spot than the pattern for the harmonic numbers. S(n) for a positive integer n is the sum of the divisors of n. For instance, 6 is divisible by 1, 2, 3, and 6, so S(6) = 1 + 2 + 3 + 6.

Conjecture: for any positive integer n, S(n) <= H(n) + exp(H(n)) log(H(n)), with equality only when n = 1.

Believe it or not, this conjecture is equivalent to the Riemann Hypothesis!

Proof here: http://xxx.lanl.gov/abs/math.NT/0008177/
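
For the curious, here's a minimal sketch (naive divisor sum, so keep n small) that checks the inequality numerically; a single counterexample with n > 1 would disprove RH:

  # Check S(n) <= H(n) + exp(H(n)) * log(H(n)) for n = 1..2000.
  from math import exp, log

  def S(n):                       # sum of the divisors of n
      return sum(d for d in range(1, n + 1) if n % d == 0)

  H = 0.0
  for n in range(1, 2001):
      H += 1.0 / n                # H(n) = 1 + 1/2 + ... + 1/n
      assert S(n) <= H + exp(H) * log(H) + 1e-9, n
  print("inequality holds for n = 1..2000")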


That is surprising. It's pretty incredible to see a formulation that doesn't involve any advanced Number Theory.


Zhang revealed during an interview that he's already made "astonishing" progress on RH.

http://www.changhai.org/community/article_load.php?aid=13833...

> On the Riemann Hypothesis, I also have some things in hand; if I brought them out, they would also cause a big sensation. But this is just my habit: if I don't have a complete result, or if at the very end I feel I can't take it any further, I might bring it out, but right now I don't want to bring it out.

He won't publish it until he has finalized it.


That may be where Zhang is heading.


That is the most beautifully well-written article about mathematics that I have read in my life.

The explanations made it completely accessible as a CS student with nothing more than calculus experience and familiarity with the concept of the search for primes.


I knew about the Wiki http://michaelnielsen.org/polymath1/index.php?title=Bounded_...

However, I did not know about the solo paper http://arxiv.org/abs/1311.4600

  In their celebrated paper [5], Goldston,
  Pintz and Yıldırım introduced a new method for counting   
  tuples of primes, and this allowed
  them to show that [eq 1.1]

  The recent breakthrough of Zhang [9] managed 
  to extend this work to prove [eq 1.2]
"Sieve methods" say integer properties like divisibility are essentially random. This is how we can show two integer randomly chosen are relativley prime with probabilty pi²/6= 0.608...

http://math.stackexchange.com/questions/64498/probability-th...

Here we are trying to count primes separated by certain small gaps.
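
As a quick sanity check on the 6/pi² coprimality figure above, a minimal Monte Carlo sketch (the exact range of the random integers doesn't matter much, as long as it's large):

  # Estimate the probability that two random integers are coprime; the limit is 6/pi^2.
  import random
  from math import gcd, pi

  trials = 200000
  hits = sum(gcd(random.randrange(1, 10**9), random.randrange(1, 10**9)) == 1
             for _ in range(trials))
  print(hits / trials, "vs", 6 / pi**2)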


Montreal..on the map, finally. There's a lot of talent here.


Montreal was already on the map for Bixi. And probably other things.


I'm not a mathematician, but this is an interesting subject to me.

Does this imply a certain density to prime numbers? And given this new information, does it mean that it could be less computationally hard to find or verify primes?


Not directly. This says that at any point on the number line, if you go sufficiently farther down the number line, you can always find 2 primes whose difference is no more than 600. It is possible that the distance between such groups gets arbitrarily large (at an arbitrarily fast rate) as you go down the number line.

Having said that, the primes do have a known density, given by the Prime Number Theorem [1]. This theorem states that the number of primes less than x is asymptotically x/ln(x) as x approaches infinity.

[1]http://en.wikipedia.org/wiki/Prime_number_theorem
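
To see how good that approximation already is at modest sizes, here's a minimal sketch comparing an exact prime count (via a sieve) against x/ln(x):

  # Compare pi(x), the number of primes <= x, with the PNT estimate x/ln(x).
  from math import log

  def prime_count(x):             # sieve of Eratosthenes
      sieve = bytearray([1]) * (x + 1)
      sieve[0] = sieve[1] = 0
      for i in range(2, int(x**0.5) + 1):
          if sieve[i]:
              sieve[i*i::i] = bytearray(len(range(i*i, x + 1, i)))
      return sum(sieve)

  for x in (10**3, 10**4, 10**5, 10**6):
      print(x, prime_count(x), round(x / log(x)))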


The overall density of primes is known, and decreases continuously as the numbers get larger (See http://en.wikipedia.org/wiki/Prime_number_theorem)

What this theorem says is that no matter how large the numbers get, no matter how sparsely the primes are spread out, on average, there will always be pairs of primes that are relatively close (within 600 of each other)


"on average" is the wrong concept here. Just leave it out of your sentence and it's correct.


That's what I thought, but if you apply "on average" to the clause preceding it rather than the one following it, it is correct.


I would be curious to know what impact this would have on primecoin which appears to have gained a lot of traction over the last few months.


The bigger implication is that if you know the frequency of prime numbers and have a way to calculate when to expect them, the difficulty in calculating new large primes is greatly reduced. In effect the strength of all encryption based on primes is greatly reduced.

BitCoin, PGP, HTTPS, Certs, all just got orders of magnitude easier to crack. Well, not just... It will take a little bit of time to use this information to create optimizations for beating each of these. But soon.


What? No. RSA doesn't depend on finding prime numbers, it depends on factoring large numbers, which is a completely separate problem. Elliptic curve cryptography doesn't depend on prime factorization at all.

Even if they did, I don't think this really helps you; knowing that there exist arbitrarily many primes within N doesn't help you find primes any faster.


What do you think Factors are? Do you know what a Prime is? Do you know what a Factor Is?

I am skipping the rest of your arguments, since "What? Yes" is more correct.


A mathematician once told me that he doesn't appreciate research which leads to small improvement of a bound, but only research that leads to a conceptual novelty (like possibility). I guess this story shows that there is room for both.


I might be way off here ... but does this not have implications for RSA encryption? Isn't the whole concept resting on the fact that two prime numbers can't be factored ... (I don't know shit about encryption, obviously)


It has not actually been proven that RSA is backed by the difficulty of factoring. That is to say, it may be possible to break RSA without an efficient way to factor numbers. However, an efficient way to factor numbers does break RSA.

Having said that, there is no obvious way that this result helps with factoring. Indeed, the Twin Prime Conjecture states that the gap is actually 2. Obviously, this has not been proven, but if there was a way to use a proof of this fact to efficiently factor numbers, then we would have used that method without the proof. If it ends up not working, then we would have disproven the Twin Prime Conjecture.


> Isn't the whole concept resting on the fact that two prime numbers can't be factored

No. Prime numbers have no factors.

RSA uses two prime numbers multiplied together to create a number with only two prime factors; factoring that number is the hard part if it's large enough.

This discovery has nothing to do with factoring large numbers.
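
A toy sketch of the point being made (nothing here is real cryptography; the primes are tiny placeholders):

  # An RSA modulus is the product of two primes; the hard problem is recovering
  # them from the product, not finding primes in the first place.
  def is_prime(m):
      return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

  p, q = 10007, 10009             # toy primes; real keys use primes hundreds of digits long
  assert is_prime(p) and is_prime(q)
  n = p * q                       # the public modulus
  # Trial-division factoring is instant at this size and hopeless at real key sizes.
  d = next(d for d in range(2, n) if n % d == 0)
  print(n, "=", d, "*", n // d)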


Yep, way off, sorry. This has basically nothing to do with factoring. This is just proving that there is never a point beyond which no two prime numbers are 600 or less apart.


Maynard is too eager. He talks too much about it being a coincidence. But the fact is that he is a member of the Polymath project, and all of these works are consequences rather than coincidences.



