> The paper soon set off a cascade of what Ellenberg called “math at Internet speed.” Within 10 days, Ellenberg and Dion Gijswijt, a mathematician at Delft University of Technology in the Netherlands, had each independently posted papers showing how to modify the argument to polish off the original cap set problem in just three pages.
This is a very interesting trend I've noticed. I wanted to cite a few papers on arXiv in one of my own research papers recently, but my advisor commented that none of the articles had been peer reviewed (since arXiv is a preprint server). I told him that in the last year alone, six papers on arXiv have formed a "research trail" (i.e. a paper is put on arXiv in May that builds on results from a February paper, which builds on results from December, etc.), and that the most recent peer-reviewed article in a published journal is so far behind the state of the art that completing my paper without any mention of the arXiv works would put me significantly behind the rest of the field.
Of course, these papers all relate to math and computer science — whether a new algorithm or proof works is (usually) immediately evident upon implementation, and the papers on arXiv include the complete algorithm and often link to the author's code. Peer-reviewing their work yourself often takes no longer than a half hour or so (unlike, say, a research article in materials science, where a complete replication study could take over a year).
That has been common in physics for a long time, too. For hot new topics, the conversations are often happening through preprints rather than peer-reviewed articles. (The papers are usually eventually published in peer-reviewed journals, though.)
Building on research seems like it's a form of peer review, especially in math where you'd need to understand the entire proof before making incremental improvements to it.
While this can be true, it's also common to have "assuming X is true" where X is some very complicated hypothesis.
We saw this with Fermat's Last Theorem, and it was with a great sigh of relief that it was finally proven in the 90s. Had it turned out to be false, entire fields of mathematics would have collapsed.
Maybe this is more true for research that assumes the truth of the Riemann Hypothesis than research that assumed the truth of Fermat's Last Theorem?
Maybe there was some significant body of research before 1995 that assumed the Taniyama–Shimura–Weil conjecture, a more powerful statement that implied Fermat's Last Theorem and was ultimately proven as a way of proving it.
The consequence is that a whole body of work winds up coming with an asterisk until people figure out what they can and can't trust. Papers may be looked at for inspiration, but won't be quoted for results. Eventually some of it gets proved properly, and the rest is abandoned. After that the older papers become mere historical curiosities.
A possible place where this could happen is the classification of finite simple groups. It has been "proven", but the proof is long, technical, and never was adequately reviewed. Lots of papers these days start off by using the classification in interesting ways. However, there is an open program to produce an actual reviewed proof. If, in the process of doing that, we found that the original result was wrong, there would be a fairly large project to figure out the consequences.
But when the results are useless anyway, it doesn't really matter if they are right or wrong... they may just be speculation about some alternate universe, or may still contain ideas that are applicable elsewhere.
Prime numbers used to be useless when first researched (edit: during the previous two centuries, when their properties were studied). We don't always know in advance what will turn out to be useful.
You say, "Prime numbers used to be useless when first researched," but when the Middle-Kingdom Egyptians were doing their initial research on prime numbers, they needed them for the algorithms they used to calculate with fractions. These were used in the Rhind Papyrus to calculate things like the volumes of granaries. You could hardly have picked a worse example.
No, he picked a famous and perfect example. He just didn't specify it well enough.
Over the last 2 centuries, number theorists developed the theory of large prime numbers. The numbers that they were dealing with were so large that they had no conceivable use in describing the physical universe.
One prominent number theorist, G. H. Hardy, famously wrote A Mathematician's Apology, a book describing and justifying his life. In it he described his field as utterly useless, with no practical applications.
Then cryptography came along, and the mathematics of finding large prime numbers, and of factoring hard-to-factor large numbers, turned out to have practical applications of great importance!
From my point of view, both of you demonstrated a lop-sided knowledge of math history.
Clearly you know more about the ancient history and origins. I'd be willing to bet that you know that the ancient Greeks knew 2500 years ago what prime numbers were, had proved that there are infinitely many of them, had algorithms like the Euclidean algorithm for finding the greatest common divisor, had proven unique factorization, AND had demonstrated that sqrt(2) is not a fraction. We don't actually know how much farther back a lot of that knowledge goes.
On the other side he had obviously encountered cryptography, and knew that a whole lot of the necessary number theory dates back to Gauss, 200 years ago. https://en.wikipedia.org/wiki/Disquisitiones_Arithmeticae is the origin of concepts like modular arithmetic, quadratic residues, and so on. But he was not familiar with the ancient history predating that, or else he could not have thought that the study of primes only goes back 200 years!
He could have avoided the problem on his side by Googling what he was about to say before posting things with glaring and obvious errors. Very few of us are that careful.
You could have avoided the problem on your side by giving him the benefit of the doubt and assuming that he's probably not a complete idiot, then trying to figure out what he might have meant. You might or might not have figured out "cryptography", but you could have at least made your post in the form of a much more pleasant question. However that is fairly rare to find, and doubly so online.
As for me, I'm just lucky enough to know both halves of the history, so could easily sort it out.
Ben, I'd've thought you'd known me long enough to know that I'm familiar with RSA and the history of number theory. (Maybe I misremember; you seem to have started doing Perl after I left clpm.) I read A Mathematician's Apology last year (most of it, anyway), and my friend Nadia keeps publishing papers that factor large numbers of RSA keys in practice, the latest being CacheBleed. Your understanding of cryptography is surely deeper than mine — the most I've ever done myself is write an implementation of SRP — but it's not as if I haven't heard of the field.
I had thought that it was common knowledge that (small) prime numbers have a lot of practical uses (mental arithmetic in general, arithmetic with fractions, including vulgar fractions, gear train design, that kind of thing), but apparently I was wrong. It turns out that lots of people don't know about this. So my inference (only a complete idiot would not know this; he did not know this; therefore he was a complete idiot) was ill-founded.
And so I came out looking like some kind of ignorant, arrogant know-it-all. I really appreciate the feedback, Ben. Natanael_L, I'm sorry I was such a dick to you.
I am familiar with your name, but hadn't tracked you well enough to remember anything more than, "He knows Perl." I left Usenet about a year before I learned Perl, so I was never in clpm.
So you just failed to register the cryptography reference and then backsolve to what he really meant. If that's the worst thing that you did last month, then you're a better person than I...
from May 23. Less than a week between them! And it's with good reason too, because they want to compare with state of the art on a certain problem/dataset, and that improved five days ago.
Both of these papers are about very basic advances, in a sense: if you have a residual network implementation lying around (and there are plenty of open-source ones), it's trivial to implement the improvements they propose; a sketch of the kind of block involved is below. So it's not as crazy as it sounds to cite a 5-day-old paper.
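To give a sense of how small such changes are in practice, here is a minimal residual block sketched in Python/PyTorch. This is only an illustration of the kind of code people already have lying around, not either paper's actual method, and the layer sizes are made up.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """A plain residual block; module names and layer sizes are illustrative,
        not taken from either paper."""

        def __init__(self, channels: int):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            out = torch.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            # Tweaks of the kind being cited are often local edits right here,
            # e.g. reordering normalization/activation or rescaling the branch.
            return torch.relu(x + out)

    # Usage: a batch of feature maps passes straight through the block.
    block = ResidualBlock(channels=16)
    y = block(torch.randn(2, 16, 8, 8))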
> whether a new algorithm or proof works is (usually) immediately evident upon implementation
I would argue that this is a dangerous claim, because an algorithm appearing to work is different from a proof that it always works, and that gap can lead to serious issues. I would say that this is one of the big reasons peer review exists!
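As a toy illustration of that gap (my own example, not from the thread): a base-2 Fermat primality check in Python looks correct under any casual testing, yet it accepts composite pseudoprimes such as 341.

    def is_probably_prime(n: int) -> bool:
        """Base-2 Fermat test: fast, and it looks right under casual testing,
        but passing it is not a proof of primality."""
        if n < 2:
            return False
        if n in (2, 3):
            return True
        return pow(2, n - 1, n) == 1  # Fermat's little theorem, base 2 only

    # Looks correct on every number below 50...
    print([n for n in range(2, 50) if is_probably_prime(n)])  # exactly the primes
    # ...but 341 = 11 * 31 is composite and is still accepted.
    print(is_probably_prime(341))  # True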
I'm not familiar with academia, but I was surprised that the discovery was made by three mathematicians at three different far-flung universities. Is that type of collaboration also being accelerated by the internet?
Collaborators will often meet at conferences or similar, and then go out of their way to attend the same conferences. There's a big difference between the fast in-person work that lays the groundwork and the refinement that goes into finishing the paper. The internet is way more useful for the latter, imho. It's much easier to share ideas quickly on a blackboard...
The situation could be even better if the proposition and proof could be written in a machine-verifiable form using a proof assistant like Coq.
Potentially you could have a section in a maths journal in which 'papers' (in the form of computer-readable propositions and proofs) are immediately checked for correctness by a computer. Post-publication, humans could then assess the impact/significance by voting (perhaps with proofs of significant conjectures configured to go straight to the top of the significance rankings).
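As a concrete sketch of what such a machine-checked 'paper' could look like: the comment mentions Coq, but here is the same idea in Lean 4 with Mathlib, stating Euclid's theorem on the infinitude of primes and discharging it with an existing library lemma. The theorem name and choice of statement are illustrative only.

    import Mathlib

    -- The "paper" is just a statement plus a proof; the proof assistant,
    -- not a referee, checks that the proof establishes the statement.
    -- Euclid: for every n there is a prime p with n ≤ p.
    theorem infinitude_of_primes (n : ℕ) : ∃ p, n ≤ p ∧ Nat.Prime p :=
      Nat.exists_infinite_primes n

A journal section like the one described would accept anything that compiles; the human voting layer would only rank significance, never correctness.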