I like to put things on x/y-charts to get a visual overview. Even things that usually are not displayed in this fashion.
When it comes to prime numbers, it always irked me that one cannot easily look at them in a visual way. I always thought that they would probably look beautiful.
One day, I had the idea to counter this by visualizing prime numbers in the complex plane. I then wrote a little script to colorize each number in the complex plane by how "prime" it is. And indeed, the result looks beautiful:
https://www.gibney.org/does_anybody_know_this_fractal
Then I went back to do more mundane things with x/y charts. The one that became most popular is Product Chart:
https://www.productchart.com
But every once in a while, I still dream about the "complex divisor fractal" and what mysteries might be hidden inside of it.
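For anyone who wants to play along: my actual script scores every point by how "prime" it is, but a much simpler starting point is to just mark the Gaussian primes, using the standard characterization of primes in Z[i]. This sketch is not the real thing, just the cheapest version I could write down (grid size N is arbitrary):

    import matplotlib.pyplot as plt
    from sympy import isprime

    def is_gaussian_prime(a, b):
        # a+bi is prime in Z[i] iff one part is 0 and the other is (up to sign)
        # a prime congruent to 3 mod 4, or a^2 + b^2 is an ordinary prime.
        if b == 0:
            return isprime(abs(a)) and abs(a) % 4 == 3
        if a == 0:
            return isprime(abs(b)) and abs(b) % 4 == 3
        return isprime(a * a + b * b)

    N = 60  # arbitrary half-width of the plotted grid
    img = [[is_gaussian_prime(a, b) for a in range(-N, N + 1)]
           for b in range(-N, N + 1)]
    plt.imshow(img, extent=(-N, N, -N, N), cmap="binary")
    plt.show()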
You have seen the Ulam spiral[1] from 1963, right? I don’t know that it provides any actual insight into this stupendously complicated[2] topic, but if you want something to draw about prime numbers, that’s probably the best-known option. Or see the story about “primons”[3] for a view that’s less visual but no less tangible (given a certain kind of background, at least).
And yes, as you note, your CDF very much looks like a square grid after an inversion. That is to say: draw a square grid on a horizontal plane, lay a ball on that plane, project the image onto the surface of the ball along straight rays through its topmost point (imagine a lightbulb there; see? “projection”), then take a parallel plane touching that same ball and project back onto it from the surface (now casting rays from the bottommost point).
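You can also skip the sphere entirely: the net effect of those two projections is, up to scale, inversion in a circle, w = 1/conj(z). A minimal matplotlib sketch (my own; grid density and window are arbitrary) of what a square grid looks like after that map:

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(-40, 40, 4001)
    for k in range(-10, 11):
        for line in (k + 1j * t, t + 1j * k):  # horizontal and vertical grid lines
            line = line[np.abs(line) > 1e-9]   # avoid dividing by zero at the origin
            w = 1 / np.conj(line)              # inversion in the unit circle
            plt.plot(w.real, w.imag, lw=0.3)
    plt.gca().set_aspect('equal')
    plt.xlim(-1.5, 1.5); plt.ylim(-1.5, 1.5)
    plt.show()

Each grid line comes out as a circle through the origin, which is exactly the look in question.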
But if so, it’s boring, in that it should have little to do with prime numbers. Let me think about this. (The story of complex numbers and projective transformations is not boring, of course; it’s quite pretty, it just doesn’t provide much insight into the primes. And, in turn, its connection to hyperbolic geometry, most widely known through Escher’s drawings, is also quite wonderful, but removed even further from the original picture, so it might not be necessary to understand it.)
The thing I found was that if you start the spiral with a 0 you get a similar pattern. Further, if you layer each of the triangular sections, you get a really cool pattern[0].
>When it comes to prime numbers, it always irked me that one cannot easily look at them in a visual way. I always thought that they would probably look beautiful.
There are several visualizations of prime numbers, and they do look beautiful.
I have seen many “plots” of prime numbers that look really good. In addition to looking good, I sometimes _swear_ I see a pattern. It’s really frustrating to see something on a graph which looks like it follows some pattern or algorithm[1], and to know that finding the mathematical definition of that pattern would be revolutionary, world news. I can see why so many people get sucked into this problem.
The tl;dr on it is that all primes other than 3 are of the form 3n±1, and 2π (in polar coordinates) is 6.283..., i.e. 0.283 mod 3, hence the primary curve. Then he explains the fractions that approximate pi, but I think he's over-explaining. Reducing a fraction means making the denominator and numerator coprime, and in the case of pi approximations, the interesting ones all have prime numbers as the denominator, so the 3n±1 pattern appears again.
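If the plot being described is the usual polar one, each prime p is simply placed at polar coordinates (r, θ) = (p, p). A minimal sketch (the range is arbitrary):

    import numpy as np
    import matplotlib.pyplot as plt
    from sympy import primerange

    p = np.array(list(primerange(2, 20000)), dtype=float)
    # each prime p goes to polar coordinates (r, theta) = (p, p radians)
    plt.scatter(p * np.cos(p), p * np.sin(p), s=0.5)
    plt.gca().set_aspect('equal')
    plt.show()

The near-periodicity of p radians against 2π is what produces the spiral arms.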
There are many visualizations of the primes in 2D, and many are fascinating. I've always wondered what they would look like in 3D or 4D or higher. What if the primes were mapped onto some crazy topological shape? Is there some shape and dimension out there that produces a "perfect" pattern?
> I've always wondered what it would look like [...] if the primes were mapped on to some crazy topological shape?
This is easy: consider the spectrum of the ring Z, Spec Z. (By definition, the spectrum of a commutative ring consists of its prime ideals. All prime ideals of Z, with the exception of the zero ideal, are generated by the prime numbers; in this sense, we can identify the elements of Spec Z, except for the zero ideal, which is "just another point" of Spec Z, with the prime numbers if we desire.)
On Spec Z, we can define a topology, the Zariski topology, by defining
V(I) := {P \in Spec Z | I \subseteq P},
where I is an ideal of Z, as the closed sets of the Zariski topology.
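Concretely, spelling that definition out (a standard fact, not something specific to this thread): for a nonzero ideal (n),

    V((n)) = {(p) \in Spec Z | p divides n},    and    V((0)) = Spec Z,

so the proper closed sets are exactly the finite sets of nonzero prime ideals, and the closure of the zero ideal is all of Spec Z (it is a "generic point").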
I'm not sure that this answers the parent's question. The Zariski topology is unusual, and in "some crazy topological shape" I think the parent is looking for a way to map the primes onto a smooth manifold in some finite dimensional space in a way that elucidates some kind of hidden structure.
Hmm.. I keep looking at your first image ( https://www.gibney.org/images/size02/-1_-1_to_1_1.jpg ), and I swear it looks like a 3-dimensional, positively curved space, with vertices intersecting in depth. Just that since it's a picture, it's being compressed from 3D to 2D.
Even though the lines are curved, I'm parsing them as straight.. somehow. Perhaps this is the wrong projection, and maybe it needs to be mapped onto a sphere using lat/lon/alt (parametric)? That would also explain the infinite border asymptotes; going from a 2D Mercator view to a spherical one would modify those.
That prime fractal seems a good fit for Shadertoy: it would give GPU performance and basic zoom-in/zoom-out for free. The last two images do indeed look suspiciously good.
>> In 2019, he and Carl Pomerance, his adviser at Dartmouth — who, according to Lola Thompson, a mathematician at Utrecht University and a former student of Pomerance, essentially “came out of retirement to work with him”
Somehow that made this whole story a lot more human to me. Pomerance is a big name in modern number theory (to me anyway) but apparently at the end of the day he's just another guy with a particular set of interests who will pop up again when the right thing comes his way.
For those who don't know, Carl Pomerance was the guy who came up with/discovered the quadratic field sieve which for a time was the fastest factoring algorithm for large (100-ish digits) semiprimes. He has a beautiful write-up describing the algorithm and the story of its discovery:
Pomerance, Carl. “A Tale of Two Sieves.” In Biscuits of Number Theory, edited by Arthur Benjamin and Ezra Brown, 85–104. Providence, Rhode Island: American Mathematical Society, 2009. https://doi.org/10.1090/dol/034/15.
To this day, the question I have about the Quadratic Sieve is something like: what is being reduced, in degrees of freedom, by each relation added to the matrix?
It seems like the sequence of primes in the factor base (which are quadratic residues) is very much related to the residues of the factors (they are), and somehow, in the abstract, it's doing something like CRT to construct the factors. But I have never read a deeper explanation beyond the mechanics of implementation.
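To at least make the mechanics concrete, here is a toy version of the relation-collection and linear-algebra steps (my own toy, not from Pomerance's paper; a real sieve restricts the factor base to primes mod which N is a quadratic residue, and uses Gaussian elimination over GF(2) instead of brute force):

    from itertools import combinations
    from math import gcd, isqrt

    N = 8051                       # toy semiprime: 83 * 97
    factor_base = [2, 3, 5, 7, 11, 13]

    def exponents(m):
        """Exponent vector of m over the factor base, or None if m isn't smooth."""
        exps = []
        for p in factor_base:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            exps.append(e)
        return exps if m == 1 else None

    # Each relation x^2 = prod(p^e) (mod N) contributes one exponent vector.
    relations, x = [], isqrt(N) + 1
    while len(relations) < len(factor_base) + 2:
        v = exponents(x * x % N)
        if v is not None:
            relations.append((x, v))
        x += 1

    # A dependency mod 2 among the vectors multiplies out to X^2 = Y^2 (mod N),
    # and gcd(X - Y, N) then has a decent chance of being a proper factor.
    for r in range(1, len(relations) + 1):
        for sub in combinations(relations, r):
            col = [sum(v[i] for _, v in sub) for i in range(len(factor_base))]
            if all(c % 2 == 0 for c in col):
                X = 1
                for xx, _ in sub:
                    X = X * xx % N
                Y = 1
                for p, c in zip(factor_base, col):
                    Y = Y * pow(p, c // 2, N) % N
                f = gcd(X - Y, N)
                if 1 < f < N:
                    print(f, N // f)   # 83 97
                    raise SystemExit

So one partial answer to "what does each relation remove": each smooth relation pins down one more GF(2) dimension, and once there are more relations than primes in the base, a dependency (a congruent pair of squares) is guaranteed.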
Can we also talk about how this guy is in the third year of his PhD and has no less than 18 papers listed under "major/recent publications", all but two of which are published, and one of the latter 'shocked' the math community?
Not to mention, the article says he's currently working with James Maynard, a leader in the field.
One of my favorite stories about Maynard is in another one of Quanta's articles, concerning prime number gaps [1]. Yitang Zhang had just established an upper bound of 70 million for prime number gaps. After that, Terence Tao assembled a crowd of mathematicians to try and refine Zhang's method, and over the summer, they incrementally lowered the bound all the way down to 4680. Then Maynard, who had been working solo on developing an entirely different method, came out of the blue and lowered the bound to 600.
I still find the crowd-sourced "Polymath" approach incredibly cool, and Terence Tao himself discusses the benefits of open collaboration and the dangers of working solo [1]. Though clearly, James Maynard does just fine on his own.
T. Tao's first publication (at least the first one that shows up on MathSciNet) was in 1996, the same year he finished his mathematics PhD. I would say that, for most people working on the pure side of mathematics, having over 5 papers before the end of a PhD is not at all typical.
Terence Tao finished his PhD at age 21. So he didn't publish major work before his PhD, but he did publish major work at a young age.
Most math PhDs including OP topic Lichtman would start after age 20 and finish at age 24-28.
So Lichtman might be on a similar trajectory as Tao, but not accelerated through formal schooling.
And since Lichtman's work is in elementary number theory (similar to combinatorics, one of Tao's main subjects), it's more amenable to early results from a brilliant person with less experience. Other subjects require more background study before reaching new territory.
People on this thread say that in recent years there has been a lot of pressure on math researchers to publish partial / lower-quality work far more often, to match the publication rate of other fields.
I’d argue all research is the same, or at least should be. I left academia after my PhD because I realized it’s meant for prodigies, not regular people like me. I had to leave even though my advisors tried to convince me that I’d have the best shot of all their trainees at being a good professor. I think I held what is needed to be a good professor to an even higher standard.
There's a lot more luck though involved in certain fields. In modern biology it's not really possible to have a "prodigy" career trajectory IMO - studies cost too much time and money for even the best 3rd year PhD student to be shocking the world and churning out dozens of pubs.
And there is a need for many "normal" people to contribute to research in the field too, because a solid part of the equation is sheer man hours. I get being jaded, and I also don't know what field you were in, but regardless I think it's a bit too pessimistic to assume that only prodigies can meaningfully contribute to research.
"Research is for prodigies" is extremely specific to (mostly pure) math and theoretical physics, where the results aren't practical so no one cares much about easy problems. Math students solve professor-made-problems for fun and games during school (IMO, Putnam), and rarely does anyone academically publishes the results. (Maybe one big book of 1+ years of problems).
In other more practical fields, less,-inspired research is useful, and also research is slower due to real world complexity and expense, not inherent intellectual difficulty. Like plowing through all possible chemical rejection looking for one that works, or fine tuning a radio in engineering, or a NN or a big program in CS, or running psych / sociology surveys.
There’s not only one metric for a genius. This dude’s more prolific with his papers; maybe he’s even more of a prodigy than Tao. But what difference does it make? Does it make Tao any less of a big deal?
This brings back memories to me. Almost exactly 2 decades ago a related theorem, prime factorisation being in P, was proved by a couple of undergrads working with a professor in IIT Kanpur, India [1]. The story and the sensation it caused is still fresh in my memory because I had just begun my post-graduation studies in another IIT (IIT Bombay). There was sort of a festival atmosphere in the campus; with all sorts of seminars arranged, discussion groups and what not.
To this day I have not tried to understand the paper; just that it was hailed as one of the shortest and most elegant proofs.
> Almost exactly 2 decades ago a related theorem, prime factorisation being in P, was proved by a couple of undergrads working with a professor in IIT Kanpur, India
It is still an open problem whether prime factorization is in P or not. What Manindra Agrawal, Neeraj Kayal and Nitin Saxena showed is that checking whether a given (binary) number is prime is in P. To this day, no polynomial-time method has been published that obtains a non-trivial factor of the input when the AKS algorithm reports that it is not prime.
> What Manindra Agrawal, Neeraj Kayal and Nitin Saxena showed is that checking whether a given (binary) number is prime is in P.
Wow, this confused the hell out of me, because I thought: isn't the sieve of Eratosthenes already below polynomial time? But from what I read, the sieve is nearly linear relative to the magnitude of the input, while the AKS method is polynomial time relative to the length of the input in binary representation (aka the number of binary digits).
That's funny, because by that measure "most" NP-complete problems look pseudo-polynomial: a simple exponential algorithm runs in O(2^poly(n)) for input length n, and measured against the input's numeric value, which is itself exponential in n, that is polynomial.
But people forget that natural numbers are just data, and that data can be interpreted as a natural number.
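To make the magnitude-vs-length point concrete, here's plain trial division (my own example): the work is polynomial, indeed sublinear, in the value N, but exponential in the bit-length b, since N is about 2^b.

    # O(sqrt(N)) divisions: polynomial in the value N,
    # but O(2^(b/2)) for a b-bit input, i.e. exponential in the input size.
    def is_prime_trial(N: int) -> bool:
        if N < 2:
            return False
        d = 2
        while d * d <= N:
            if N % d == 0:
                return False
            d += 1
        return True

    print(is_prime_trial(1_000_003))   # True; ~1000 iterations for a ~20-bit number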
Oops; you are right. If my memory serves right, showing prime factorisation to be in P would imply P == NP, which would have been sensational news at a different level altogether.
> showing prime factorisation in P will imply P == NP
Whether "Factorization is in P" actually implies "P=NP" is another open problem (most researchers in this area don't believe that this is the case).
What does hold is that if we found a "fast" algorithm for factorization, this would break some cryptosystems. The most well-known example is RSA, but other, more academic cryptosystems would be broken, too.
Well, it is also not proved not to be NP-hard, so to the best of our current knowledge it could still be NP-hard. But you are right that researchers believe it's not.
For anyone interested in a more technical summary of some of the main ideas:
Problem Statement
The Erdős conjecture states that all primitive sets S satisfy f(S) <= f(PRIMES), where f(S) = \sum_{s \in S} 1/(s log s) and PRIMES is the set of all primes.
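Numerically, the right-hand side is the constant the article rounds to 1.64:

    f(PRIMES) = \sum_p 1/(p log p) = 1.6366...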
Definitions
Define A_q to be a primitive set in which every element a is divisible by the prime q, but by no prime less than q.
* Example: A_3 contains multiples of 3, but no even numbers.
Define A*_q to be the set of all integers divisible only by primes >= q.
* Note: A_q is a subset of A*_q.
* Note: A_q is also a subset of q \* A*_q (i.e. each element of A_q is q times an element of A*_q).
Define g(a) = 1/a * Product_{p<LargestPrimeDivisor(a)}[(1 - 1/p)].
* Example: if a=7, then the Product term (excluding the 1/a) is the "density" of A*_7. This is because (1-1/2) of integers aren't divisible by 2, (1-1/3) of those aren't divisible by 3, and (1-1/5) of those aren't divisible by 5.
* Example: if a=7, then g(7) is the density of the multiples of 7 that aren't divisible by 2, 3, or 5.
Results
In [1], the author proves that for all A_q with A_q \neq {q}: f(A_q) < k*\sum_{a\in A_q} g(a) <= k*g(q), for a constant k.
* The first half of the inequality comes from a cited result: 1/(a log(2a)) < k*g(a).
* The second half of the inequality comes from a clever construction of the sets S_a = a \* A*_{LargestPrimeDivisor(a)}, for each a in A_q. These sets are constructed so that g(a) is the natural density of S_a. Then, because (1) S_a and S_b are disjoint for a \neq b, and (2) every S_a is a subset of q \* A*_q, which has density g(q), it follows that \sum g(a) <= g(q).
In [2], the author shows that k\*g(q) <= 1/(q log q). Note that the right-hand side of the inequality is equal to f({q}).
* This uses a pretty clever partitioning of the primitive set A into A_2 U A_3 U A_5 U ... U A_q U .... Combining with part 1, this gives f(A_q) <= f({q}) for every prime q (with equality only for A_q = {q}), so f(A) = \sum_q f(A_q) <= \sum_q f({q}) = f(PRIMES).
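A quick numerical sanity check of the density step, with a toy A_3 I made up (every element divisible by 3 and by no smaller prime, and no element dividing another):

    from sympy import primefactors, primerange

    def g(a):
        """g(a) = (1/a) * prod over primes p < LargestPrimeDivisor(a) of (1 - 1/p)."""
        d = 1.0 / a
        for p in primerange(2, max(primefactors(a))):
            d *= 1 - 1 / p
        return d

    A3 = [9, 15, 21, 33]
    print(sum(g(a) for a in A3), "<=", g(3))   # ~0.0974 <= ~0.1667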
> Mathematicians noted that the work is particularly striking because it relies entirely on elementary arguments. “It wasn’t like he was waiting for all this crazy machinery to develop,” Thompson said. “He just had some really clever ideas.”
I get that it's creative, but comprehensibility eludes me. To some, https://www.cs.stanford.edu/~knuth/papers/huang.pdf may be obvious in its statements and steps, and I guess that, aside from having the necessary background in this area, they fundamentally differ from me in how they 'see' this. They see the underlying meaning; to me it's a jungle.
ok, so you can't C&P from that PDF :) By hand then: "Any set H of 2^(n-1) + 1 vertices of the n-cube contains a vertex with at least sqrt(n) neighbors in H"
Nothing in that makes intuitive sense to me except vertices will have neighbouring vertices. It's not even dimension limited and I can just about manage 4 dimensions at a push.
Next, a bunch of recursively defined 2x2 matrices with no apparent link to anything. I'm lost twice now. Yet to a Knuth or a Von Neumann, it just makes sense. This stuff makes me feel like a different species.
Just a reflection on things. Professional mathematicians in-chimings welcome.
The combinatorial object "the n-cube" is just length-n bitstrings; "neighbor" means one bit flip away. It can be thought of as being embedded in the geometric n-cube.
The recursive matrices, as they are defined, are (if you ignore the signs) the incidence matrices of the n-cubes. This is only mentioned in the final line of the paper (which is not totally unreasonable). The rest of the result is just some pretty basic facts of linear algebra, but you still have to be comfortable with linear algebra in order to see why they might be natural constructions.
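Here is that recursion written out (from memory of the paper, so check the signs against it yourself):

    import numpy as np

    # A_1 = [[0,1],[1,0]];  A_n = [[A_{n-1}, I], [I, -A_{n-1}]].
    # Entrywise, |A_n| is the adjacency matrix of the n-cube, and A_n^2 = n*I,
    # so its eigenvalues are +-sqrt(n): the heart of the spectral argument.
    def huang_matrix(n):
        A = np.array([[0, 1], [1, 0]])
        for _ in range(n - 1):
            I = np.eye(len(A), dtype=int)
            A = np.block([[A, I], [I, -A]])
        return A

    A3 = huang_matrix(3)
    assert (A3 @ A3 == 3 * np.eye(8, dtype=int)).all()
    print(np.linalg.eigvalsh(A3).round(3))   # four times -1.732, four times +1.732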
How is the bitstring 'embedded' in the n-cube? It must be a linearisation from a 1-dimensional string to vertices of the n-dimensional object, but...? What does 'embedded' even mean, mathematically?
Edit: neighbouring vertices can't be collapsed to a 1-dim string such that each bit x in the string abuts bit y iff x is a neighbouring vertex of y (neighbouring = 1 edge away). Works for 1- and 2-dim cubes (line and square), fails for 3. Maybe I am missing something.
'Incidence matrices': something is incident on something else? It occurs to me that in some sense the recursive matrices might represent the n-cube, each sub-matrix being an (n-1)-cube. If this is so, it is starting to make a little sense.
But that's not the point, which is, how are you seeing this so clearly? It's like a pea-souper over at my end.
> Edit: neighbouring vertices can't be collapsed to a 1-dim string such that each bit x in the string abuts bit y iff x is a neighbouring vertex of y (neighbouring = 1 edge away). Works for 1- and 2-dim cubes (line and square), fails for 3. Maybe I am missing something.
It sounds like you're missing something. A 3-dimensional cube has 2^3 = 8 vertices:
x y z
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
Since there are three dimensions, each vertex has three neighbors. If you imagine the vertex at (0, 1, 1), you can travel across the x-dimension to the neighbor at (1, 1, 1), or across the y-dimension to the neighbor at (0, 0, 1), or across the z-dimension to the neighbor at (0, 1, 0). Obviously, you represent this by flipping the x-, y-, or z-bit in the corresponding bitstring representation. The bitstring representation itself is blindingly straightforward: the vertex at (0, 1, 1) has the representation 011, etc.
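The same thing in code, if it helps (bit i is just the i-th coordinate; which letter you call it doesn't matter):

    def neighbors(v, n):
        """The n vertices one bit flip away from v in the n-cube."""
        return [v ^ (1 << i) for i in range(n)]

    v = 0b011   # the vertex (x, y, z) = (0, 1, 1)
    print([format(u, "03b") for u in neighbors(v, 3)])   # ['010', '001', '111']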
It's not really clear to me where you're becoming confused?
> What does 'embedded' even mean, mathematically.
In this case, it means that a complex structure (the cube) includes a simpler structure (the collection of discrete bitstrings). You can analyze the bitstrings by themselves if you want to, or you can imagine them as being the vertices of a cube that may or may not have properties other than its vertices, but however you describe the cube, it will include the bitstrings.
> It's not really clear to me where you're becoming confused?
Got it. I was interpreting the bitstring as 1 bit mapped to each vertex of the cube; I so badly, utterly misinterpreted it (my string would be 2^n long, not n long). A simple example and the intent here is totally clear. I guess the problem is at least partly that I tend to read what I expect, not what's there. Sigh.
> In this case, it means that a complex structure (the cube) includes....
Righto: a model of interest, a *-morphism or bijection between the string-set and the cube (where * = iso/homo/auto/whatever; it's been a long time).
I mean, there isn't anything known as "the prime number conjecture". Goldbach's conjecture is a prominent conjecture about primes, but it's not "the prime number conjecture".
Goldbach's conjecture is possibly the most famous unsolved problem in mathematics. When people refer to "the prime number conjecture", this is the most obvious one everyone's thoughts will go to.
I'd say the Riemann hypothesis is more famous. Goldbach's conjecture didn't make the Millennium Prize Problems list, and Perelman's refusal of the award for his proof of the Poincaré conjecture popularized that list quite widely.
>>Erdős found that for any primitive set, including infinite ones, that sum — the “Erdős sum” — is always finite. No matter what a primitive set might look like, its Erdős sum will always be less than or equal to some number. And so while that sum “looks, at least on the face of it, completely alien and vague,” Lichtman said, it’s in some ways “controlling some of the chaos of primitive sets,” making it the right measuring stick to use.
Finite? Surely there must be an operation done to each element of the set before summing to achieve a finite bound.
> "For example, rather than counting how many numbers are in a set, they might do the following: For every number n in the set, plug it into the expression 1/(n log n), then add up all the results. The size of the set {2, 3, 55}, for instance, becomes 1/(2 log 2) + 1/(3 log 3) + 1/(55 log 55).
Erdős found that for any primitive set, including infinite ones, that sum — the “Erdős sum” — is always finite."
It said that the quantity sum(1/(n log n) for n in A) is always finite when A is a primitive set. The "operation done to each element of the set" is n -> 1 / (n log n).
The paragraph directly before the one you quoted explains the sum:
> For every number n in the set, plug it into the expression 1/(n log n), then add up all the results. The size of the set {2, 3, 55}, for instance, becomes 1/(2 log 2) + 1/(3 log 3) + 1/(55 log 55).
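In code, with log the natural logarithm (which is what makes the primes' sum come out near the 1.64 the article mentions):

    from math import log
    from sympy import primerange

    def erdos_sum(S):
        return sum(1 / (n * log(n)) for n in S)

    print(erdos_sum([2, 3, 55]))             # ~1.029, the article's example
    print(erdos_sum(primerange(2, 10**6)))   # ~1.56 so far; the full series over
                                             # all primes converges to ~1.6366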
Well, he worked on it for 4 years (for fun?), so I'm not sure what you mean by "walk away with a PhD". To me it seems like he has earned it, and even if he doesn't get one, he obviously is capable of earning one.
In this specific case the guy has 16 other published papers already, so if he'd wanted to walk away with a PhD he probably could easily have done so already. That being said, I'd imagine that Oxford would be one of the places that could find a way to award a PhD for genuinely outstanding but unusually short work, if that had been his only achievement and he'd asked for it. It seems like one of the few universities still run chiefly by academics, not administrators.
On the contrary, this is the exception that proves the rule. The process exists because we can't expect once-in-a-lifetime works to appear from every candidate.
I think there's precedent for it too. I believe George Dantzig's doctoral advisor offered to accept the solutions to a homework set as a thesis, since they solved major open statistical problems.
For those who don't know, here's some background on Dantzig:
"During his first year as a doctoral student at the University of California-Berkeley, Dantzig arrived late to the class of Jerzy Neyman, one of the great founders of modern statistics. On the blackboard were two problems that Dantzig assumed to be homework.
"A few days later I apologized to Neyman for taking so long to do the homework—the problems seemed harder to do than usual," Dantzig once recalled. It turned out the conundrums, which Dantzig solved, were two famous unsolved problems in statistics."
How so? The "process" of getting a PhD is writing up his proof and getting it reviewed by the university, and verifying via an oral defense that he knows the proof and didn't just plagiarize it.
> The conjecture deals with primitive sets — sequences in which no number divides any other. Since each prime number can only be divided by 1 and itself, the set of all prime numbers is one example of a primitive set. So is the set of all numbers that have exactly two or three or 100 prime factors.
Minor clarification: exactly k prime factors, counted with multiplicity
Unfortunately, without a maths UG it's a hard slog, I'll agree, but it's not the most impenetrable paper I've ever seen. Not going to pretend I get the subtleties, though.
I wonder how the conjecture originally came up with 1.64 anyway; it seems like a rather arbitrary number on which to base a conjecture that takes decades to prove. If there was some way to actually come up with that number initially, wouldn't that itself have been on the way to a proof?
It shows how there are two types of mathematics: research-level math, in which important stuff is proved, and teaching/pedagogical math, which doesn't get as much attention. I don't think teachers and authors get enough credit. There are also huge differences in individual ability, even between math PhDs and profs. This may seem obvious, but the differences can be huge.
Also, from the article: Mathematicians noted that the work is particularly striking because it relies entirely on elementary arguments. “It wasn’t like he was waiting for all this crazy machinery to develop,” Thompson said. “He just had some really clever ideas.”
I also hate click-bait titles, but I think this one is apt for a change.
The tan box in the article explains this set. The multiples are n×618 where the smallest prime factor of n (not of n×618) is ≥103, if n has a prime factor at all. I.e. 618, 103×618, 107×618, 109×618, etc.
> And associated to the number 55 (5 × 11) would be all multiples of 55 whose smallest prime factor is 11 (therefore excluding all multiples of 2, 3, 5 and 7).
> Of course there is at least one such multiple: 103 * 618.
103 * 618 is divisible by 2, which is a prime strictly smaller than 103. Every multiple of 618 is divisible by 618, and so is also divisible by anything that divides 618.
A cousin comment clarifies that, for x * 618, we're interested in the divisors of x, not of x * 618. The article, however, is at best ambiguous in its language; you'd have to interpret the term "multiple" as meaning the multiplier applied to the base number, not the result of the multiplication. The latter is what is usually understood.
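For concreteness, the set as the tan box (and the cousin comment) describe it; primefactors is sympy's, the rest is just the stated rule:

    from sympy import primefactors

    base = 618   # 618 = 2 * 3 * 103, so its largest prime factor is 103
    members = [base * n for n in range(1, 120)
               if n == 1 or min(primefactors(n)) >= 103]
    print(members)   # [618, 103*618, 107*618, 109*618, 113*618]
                     # = [618, 63654, 66126, 67362, 69834]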
You would generally hope that the popular coverage of the paper would
(1) avoid making statements that are obviously nonsense, such as "And associated to the number 55 (5 × 11) would be all multiples of 55 whose smallest prime factor is 11"; and
(2) not require reading the paper itself in order to interpret the popular coverage.
If you're going to read the paper, why would you read someone else's paper-inspired gibberish? What value does that add?