Top mathematical achievements of the last 5ish years, maybe (richardelwes.co.uk)
110 points by rndn on June 23, 2015 | 24 comments



The result of Ono and Bruinier is significant, but I don't know if it really belongs on this list.

Euler's pentagonal number theorem already gives a simple algebraic description of the partition numbers. The so-called "finite algebraic formula" of Ono and Bruinier is only more "finite", "algebraic" or "formula" in a very artificial, press-release-exaggerated sense.
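
For anyone who hasn't seen it, that pentagonal-number recurrence really is short -- here's a rough Python sketch of my own (nothing to do with the Ono-Bruinier formula):

    # Partition numbers p(n) via Euler's pentagonal number theorem:
    # p(n) = sum over k >= 1 of (-1)^(k+1) * [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)].
    def partition_numbers(limit):
        p = [1] + [0] * limit              # p(0) = 1
        for n in range(1, limit + 1):
            k, total = 1, 0
            while True:
                for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
                    if g > n:
                        break
                    total += (-1) ** (k + 1) * p[n - g]
                if k * (3 * k - 1) // 2 > n:
                    break
                k += 1
            p[n] = total
        return p

    print(partition_numbers(10))   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]

Each p(n) only needs the earlier values at the generalized pentagonal offsets, which is what I mean by "simple algebraic description".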

Sort of like claiming that any of the contorted "prime formulas" that various people have come up with (http://mathworld.wolfram.com/PrimeFormulas.html) provide a better finite description of the primes than the good old sieve of Eratosthenes.

Now, this comparison is not quite fair. Unlike those prime formulas, the Ono-Bruinier result is actually mathematically significant in that it does give you more information about the partition numbers. It's a nice result, but is it more interesting than hundreds of other interesting results in mathematics in the last 5ish years? Even if we just look at the extremely narrow field of mathematics that is the study of partition numbers, there are other nice recent achievements such as Radu's proof of Subbarao's conjecture.

Ono is certainly an excellent mathematician, but he might be even more excellent at getting extreme amounts of publicity for his results. I don't want to give the impression that I'm sour about this. On the contrary, I think it's awesome that mathematics can get this kind of publicity, and Ono earned it (other mathematicians should learn from his example!), but it's something to keep in mind.

Likewise, about HOTT. It's very promising, but is it really an "achievement" yet? Who knows, the hype could turn out to be justified. I guess if you want to assess scientific progress as recent as the last five years, you have to try to predict the future as well.


Yes: HoTT is an achievement. It differs from the other items on the list, though, as it belongs to the "theory building" branch of mathematics and not to the "problem solving" branch.

In the end, the current implementation of HoTT might not be the one standing the test of time, but the central idea behind the programme is here to stay: a refactoring of mathematics. (Higher) category theory is currently the best tool to uncover deep connections between different areas of mathematics. But doing higher category theory using set theory can be compared to writing a webapp directly in machine code: possible, but cumbersome. A top-level language providing suitable abstractions is needed, and HoTT is one proposal.

It can be compared to the ongoing shift from imperative to functional languages.


I'm somewhat surprised that Gentry's fully homomorphic encryption wasn't mentioned. It's definitely more applied math than anything else, but if you're going to mention RSA and factoring you might as well mention the biggest crypto breakthrough in the last 5-6 years.
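
For anyone unfamiliar with the term: "homomorphic" means you can compute on ciphertexts. Textbook RSA (no padding) already lets you multiply ciphertexts, which is a tiny taste of the idea; Gentry's breakthrough was supporting arbitrary computation. A toy Python sketch of the multiplicative case, with the usual textbook-sized numbers (not remotely secure):

    # Textbook RSA with tiny primes: p = 61, q = 53, so n = 3233, e = 17, d = 2753.
    n, e, d = 3233, 17, 2753

    def enc(m):            # encrypt: m^e mod n
        return pow(m, e, n)

    def dec(c):            # decrypt: c^d mod n
        return pow(c, d, n)

    a, b = 12, 34
    product_of_ciphertexts = (enc(a) * enc(b)) % n
    # Multiplying ciphertexts multiplies the plaintexts, without ever decrypting
    # the inputs. Fully homomorphic encryption extends this to arbitrary circuits.
    assert dec(product_of_ciphertexts) == (a * b) % n
    print(dec(product_of_ciphertexts))   # 408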


The list at the bottom regarding computational achievements is possibly slightly more relatable and also contains this nice gem:

    The most impressive feat of integer-factorisation using a quantum computer is that of 56,153 = 233 × 241. The previous record was 15.


Mochizuki's work is interesting. I wonder how long that will take to verify.


worst website background ever...


Four extra lines in a nice IMRaD format would have been a nice addition to start the discussion. It looks like the reader still has a lot of work to do.


I can't see how four extra lines would have made things much easier for the reader. These results are from a variety of fields of mathematics; understanding the context for any one of them is a substantial piece of work. They aren't part of some single scientific experiment that could be presented "in a nice IMRaD format".

What sort of four extra lines did you have in mind?

(Note that each of Elwes's top 10 comes with a link to further information, generally in reasonably accessible form.)


1. What the question is and why it is important.

2. Broadly, what area of mathematics was used - how did they go about solving it.

3. Do we have a concrete proof? For a general case?

4. Where are we now and what is next?


(See the sibling comment with a "10." in it for context.)

9. The weak Goldbach conjecture. Back in 1742, Christian Goldbach (like Fermat, a mathematician-and-lawyer who made a famous long-unsolved conjecture in number theory) wrote in a letter to Euler that he thought every even number bigger than 2 could be written as a sum of two prime numbers. (E.g., 4=2+2; 6=3+3; 8=3+5; 10=3+7; 12=5+7.) Everyone believes this is true; there are good heuristic arguments why it "should" be true; but proving it is really hard. Like the abc conjecture, this is another one about the interplay between the additive and multiplicative structure of the integers.

One method that's been used to attack the Goldbach conjecture is the "Hardy-Littlewood circle method", which is useful for a whole lot of other kinda-similar problems in number theory; it's kinda-sorta an application of Fourier transforms to number theory. It was used back in the 1930s to prove the following statement, which is weaker in two ways: All large enough odd numbers are the sum of three prime numbers. (Note that if GC is true then so is this; if even=prime+prime then even+3=prime+prime+3.)

What Helfgott has now done is to show that "large enough" means "bigger than 5" (which is the best that could possibly be done, since 5 is too small to be the sum of three primes). He did it by refining the "circle method". Getting from here to the full Goldbach conjecture is still going to require major new ideas; there are somewhat-well-understood obstacles to using this method to prove GC itself.
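
None of that is a proof of anything, but the statement Helfgott proved is easy to spot-check numerically if you're curious -- a quick Python sketch of mine:

    from math import isqrt

    # Sieve of Eratosthenes up to LIMIT.
    LIMIT = 10_000
    sieve = [True] * (LIMIT + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, isqrt(LIMIT) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    primes = [n for n, is_p in enumerate(sieve) if is_p]

    def is_sum_of_three_primes(n):
        return any(sieve[n - p - q]
                   for p in primes if p < n
                   for q in primes if q < n - p)

    # Helfgott: every odd number greater than 5 is a sum of three primes.
    assert all(is_sum_of_three_primes(odd) for odd in range(7, LIMIT + 1, 2))
    print("checked all odd numbers from 7 to", LIMIT)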


(See the sibling comment with a "10." in it for context.)

7. Seventeen Sudoku Clues. OK, this one is easy to describe :-). The question is how few clues a Sudoku puzzle can have while still having a unique solution. The answer is 17. A bunch of 17-clue puzzles had already been found, so the "only" thing left to do was to prove that 16 clues never suffice. The proof was basically a big computer search, but with some clever innovations to make the search manageable. So the first step was to find all possible completed Sudoku grids. Someone else had already done that. It turns out that there are only about 5 billion "genuinely different" fully-solved Sudoku grids. So now all you have to do (ha!) is to check, for each fully-solved grid, every set of 16 clues, and verify that none leads to a unique solution. Of course this is ridiculously too big a search, and that's where the cleverness comes in.

So, given a completed grid, you can identify some "unavoidable sets", sets of cells such that any solvable puzzle must include one clue from each such set. (They looked for unavoidable sets of size <=12.) And then you can find "hitting sets", sets of size 16 that intersect every one of these unavoidable sets that's been found. Any solvable 16-clue puzzle would need to have its clues form a hitting set. But something could be a hitting set and not a solvable puzzle, so once you've found all the hitting sets you have a bit of checking to do.
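
If the hitting-set step sounds mysterious, the check itself is the easy bit; here's a toy Python sketch with made-up names and sets (nothing like the real search code):

    # Cells are numbered 0..80; an unavoidable set is a set of cells such that
    # any uniquely-solvable puzzle for this grid must include a clue in it.
    def is_hitting_set(candidate_clues, unavoidable_sets):
        """Does this candidate clue set meet every known unavoidable set?"""
        return all(candidate_clues & u for u in unavoidable_sets)

    # Toy illustration only; the real search handles billions of grids.
    unavoidable = [{0, 1, 9, 10}, {3, 4, 12, 13}, {30, 31, 39, 40}]
    print(is_hitting_set({0, 4, 40, 77}, unavoidable))   # True: meets all three
    print(is_hitting_set({2, 4, 40, 77}, unavoidable))   # False: misses the first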

And the cleverness lies in finding good ways of finding (enough) unavoidable sets, good heuristic ways of finding all the hitting sets (this problem is NP-complete in general), and efficient ways of testing each hitting set found to see whether it gives a puzzle with a unique solution.

There are other mathematical tasks that can be formulated in terms of hitting sets (the paper by McGuire, Tugemann and Civario claims this is true for various problems in computational biology, software testing, databases, etc.); I don't know to what extent their ideas are actually applicable to any of those domains.


So, four extra lines for each item? That would (much more than) double the length of the piece. Except that for several of them, much of that information is already there.

Still, let's have a go.

10. Mochizuki and the abc conjecture. The abc conjecture says, roughly, that you rarely have a+b=c where a,b,c are all products of large powers of prime numbers. You can think of this as saying that the additive and multiplicative structures of the integers are kinda-independent. The abc conjecture, if true, would imply lots and lots of other conjectures that number theorists have made. (In particular, it "almost implies" Fermat's Last Theorem.)

Mochizuki claims to have proved it using a very complicated thing he's created called "inter-universal Teichmueller theory". Teichmueller theory is all about spaces whose structure is based on the complex numbers, which you can think of as being obtained as follows: start with the integers; allow yourself to form fractions (giving the rational numbers); "fill in the gaps" (forming the real numbers); "add roots of algebraic equations" (forming the complex numbers). But there are some not-so-obvious ways to "fill in the gaps" where instead of the usual limiting process where you do things with smaller and smaller errors (sqrt(41) ~= 6, 6.4, 6.40, 6.403, 6.4031, etc.) you try to do things that are right "modulo large powers of p" for some prime p; e.g., sqrt(41) = 1, 21, 821, 3821, 03821, 203821, etc.; but you should really think of this as exhibiting one thing mod powers of 2 and another mod powers of 5. Doing something comparable to Teichmueller theory using these gives you "p-adic Teichmueller theory", and then "inter-universal Teichmueller theory" is a further generalization that I don't understand.

No one is quite sure whether Mochizuki's proof is correct; it's based on a huge amount of abstruse stuff he's invented that no one was very interested in before he claimed to have used it to prove the abc conjecture, and getting one's head around all that takes time. The next step is for the mathematical community (or at least some small bit of it) to understand IUTT well enough to check Mochizuki's proof. I believe that's happening, but it won't be quick.
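
If the "modulo large powers" version of sqrt(41) looks implausible, it's easy to check that those numbers really do square to something ending in ...41 -- a quick sanity check in Python, nothing more:

    # Each approximation, squared, agrees with 41 modulo a growing power of 10.
    approximations = [1, 21, 821, 3821, 3821, 203821]   # "03821" is just 3821
    for k, x in enumerate(approximations, start=1):
        assert (x * x - 41) % 10 ** k == 0
        print("%6d^2 = %12d  == 41 (mod 10^%d)" % (x, x * x, k))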

Hmm, that's pretty long (despite not managing to do more than gesticulate vaguely in the direction of the relevant mathematics). I'll do the others, but I think it'll be one per comment rather than a single monstrously long comment.


Thank you for doing all of these, that's a terrific contribution.


Isn't it? I haven't enjoyed somebody disliking a comment of mine so much :). I've been reading these all day.


You're welcome. By the way, I like your username.


A fellow reader :)

(I just ego-surfed my hn name. The twitter guy is somebody else.)


(See the sibling comment with a "10." in it for context.)

4. The Socolar-Taylor tile. This is pretty cool. So, suppose you have an infinite supply of tiles of some set of shapes and sizes (trivial simple case: squares of a single size). You might be able to use them to tile the whole plane (e.g., the obvious square tiling, like the pixels of a computer display). The easiest tilings to think about, like this one, are periodic: they repeat themselves. Some others are aperiodic: they don't repeat.

In the 1960s, a guy called Wang proved a nice theorem: if every set of tiles that can tile the plane can do so periodically, then there is a computer program that can answer the question "here are some tiles; can I tile the plane with them?". (This isn't hugely surprising in retrospect; the idea is that you can recognize when there's a tiling if all tilings are effectively finite.) And then Robert Berger found a set of (thousands of) tiles that can tile the plane, but only aperiodically.

(I lied slightly: the tilings in the paragraph above are all of a very special kind: the tiles are squares with carefully designed wiggles on their edges that restrict which ones can go next to which others. Equivalently, they're squares with coloured sides and abutting sides' colours have to match.)

So then the question arises: Berger did this with 20,000 different kinds of tile, but how few do you need? You have probably heard of "Penrose tilings": Roger Penrose found a set of two tiles (these ones aren't square) that will tile the plane but only aperiodically. So ... can you do it with only one?

It turns out you can, and Socolar and Taylor found a tile that does it. You can do this with a hexagon and some restrictions (like the colours on those square tiles earlier) on what can match with what, or you can perturb the hexagon so that the restrictions fall out naturally from the geometry (like putting wiggles in the sides of the squares). Unfortunately, the perturbations required make the Socolar-Taylor tile disconnected -- it's made of multiple bits that move around together!

I am inclined to doubt that anything will come of this that's useful elsewhere in mathematics, but it's incredibly cool.


(See the sibling comment with a "10." in it for context.)

3. Completion of the Flyspeck project. What's the densest way to pack a huge number of oranges into a huge box? Every greengrocer "knows" the answer: hexagonal close packing. (Actually, if you have too many oranges they will get squashed. Better use cannonballs or something.) The first person known to have conjectured explicitly that this is best was Johannes Kepler (better known for his work in astronomy) back in 1611.

In 1990, a guy called Hsiang claimed to have proved this. It was (and is) generally reckoned that his proof had holes in it. One of the leading critics of Hsiang's proof was Thomas Hales, who in 1998 announced his own proof. It was a very computational proof: Hales reduced the Kepler conjecture to a statement about the minimum value of a function of 150 variables (presumably describing how the spheres are arranged; I don't know how this works, but I remark that Gauss proved back in 1831 that the densest arrangement with the spheres on a regular lattice is the usual one, so there's probably something clever going on), and then proved that there are "only" 5000 or so genuinely distinct configurations, each of which gives rise to a linear programming problem (i.e., optimization of a linear function subject to linear constraints), and then solved them all with a computer. This is a bit like the famous Appel-Haken proof of the "four colour theorem", only much worse.
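
In case "linear programming problem" is unfamiliar: it just means optimising a linear function subject to linear inequalities, which off-the-shelf solvers are very good at. A toy example using scipy here (nothing to do with the actual Flyspeck inequalities):

    from scipy.optimize import linprog

    # Minimise  -x - 2y  subject to  x + y <= 4,  x <= 3,  x >= 0,  y >= 0.
    result = linprog(c=[-1, -2],
                     A_ub=[[1, 1], [1, 0]],
                     b_ub=[4, 3],
                     bounds=[(0, None), (0, None)])
    print(result.x, result.fun)   # optimum at x = 0, y = 4, objective -8

Hales's proof needed on the order of 100,000 such problems, each far nastier than this one.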

The mathematical details were extremely intricate. A lot of very smart mathematicians looked at Hales's proof and reported that they were 99% sure it was right but couldn't be more confident than that.

So Hales started a project "FLysPecK" (= FormaL Proof of Kepler) to verify all the details using automated proof-checking software. (So there's a tenuous connection here with the "univalent foundations" work, but I don't think there's any more to it than that both are related to automated proof-checking.) And, after more than a decade of grind, the Flyspeck team has declared success and (assuming they haven't messed up) Kepler's conjecture has definitely been proved.

What next? Well, presumably by applying proof-checking software to a real serious mathematical problem the Flyspeck team has helped to improve that software and make it more useful for other real serious mathematical problems.


(See the sibling comment with a "10." in it for context.)

1. Bounded gaps between primes. This one's pretty easy to describe, though the details of how it works are harder.

There seem to be quite a lot of pairs of prime numbers just 2 apart. 3,5; 5,7; 11,13; 17,19; etc. Are there infinitely many? Everyone expects that there are, and there's even a plausible conjecture for how common they are. But no one has been able to prove it. This is yet another of those "additive versus multiplicative structure of the integers" things, like the Goldbach conjecture. And, like the Goldbach conjecture, what everyone believes is that the two structures are kinda independent, almost as if the prime numbers are chosen at random apart from some simple constraints.

Zhang made a big step in the direction of this "twin primes conjecture", though we're still a long way from proving it. He proved that there are infinitely many pairs of primes no more than 70,000,000 apart. (Other people have subsequently reduced that bound to 246. Still a lot bigger than 2.)

I don't know enough analytic number theory to say much about how Zhang's proof works. The first (and deepest) step is to get some bounds on (roughly) how much the density of primes that are k mod m can vary with k. (So, e.g., any prime number >3 must be either 1 mod 6 or 5 mod 6; if you look at prime numbers between x and 2x, how different can the number of them that are 1 mod 6 and the number that are 5 mod 6 be?)

Having done that, he uses a technique called "sieve theory", which has been around for a while, to prove that for a certain (rather large) k the following holds: If you have an "admissible" set of k numbers -- I'll say what that means in a second -- then there are infinitely many n such that when you add n to each of your k numbers, you can find two primes among the results. And "admissible" means that there's no prime for which your k numbers cover all the possible values mod p. So, e.g., {2,4,6} is not admissible because these numbers include every possible value mod 3. But {2,4,8} is admissible.
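
The "admissible" condition is easy to turn into code, which may make it more concrete -- a little Python sketch of my own:

    def is_admissible(nums):
        """No prime p should see every residue class mod p among the numbers."""
        # Only primes p <= len(nums) can possibly have all residues covered.
        for p in range(2, len(nums) + 1):
            if all(p % d for d in range(2, p)):          # p is prime
                if len({x % p for x in nums}) == p:      # all residues mod p hit
                    return False
        return True

    print(is_admissible([2, 4, 6]))   # False: 0, 1, 2 mod 3 are all covered
    print(is_admissible([2, 4, 8]))   # True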

And then getting from there to the actual theorem turns out to be pretty simple.

What came next after Zhang's work was that, as I mentioned, a bunch of people whittled his bound from 70 million down to 246, and established a connection between this work and something called the Elliott-Halberstam conjecture, and found that if EH is true then the number would go from 246 to 12 -- or even to 6, if a stronger version of EH is true. It doesn't seem possible to go further than that without major new ideas.


(See the sibling comment with a "10." in it for context.)

8. Ngô Bảo Châu’s proof of the Fundamental Lemma. This is a big one and I'm not sure I can say anything useful about it, but let's see if I can get maybe 0.1% of the way towards doing so. So, we are interested in prime numbers, and there's a powerful way of investigating how they're distributed that's based on something called the Riemann zeta function. For some values of its argument, you can compute zeta(s) as the sum of this series: zeta(s) = 1/1^s + 1/2^s + 1/3^s + 1/4^s + etc. Euler noticed that you can factorize this as the product of 1/(1-1/p^s), where p ranges over all prime numbers, and "morally" this is why it's related to the distribution of the primes. You can use this -- with a lot of fascinating but difficult technical stuff along the way -- to prove the "prime number theorem", which says that the number of primes <= x is about x/log(x).

Now, suppose you're wondering about a subset of the primes; e.g., the ones that, when you divide them by 4, leave a remainder of 1. Then you might compute the same product, but only over those primes; or, conversely, only over the primes that leave a remainder of 3 when divided by 4. (There is exactly one prime that falls into neither of these categories. It isn't hard to find.) Thinking along these lines leads you (if you're clever enough) to what are called "Dirichlet L-series": a(1)/1^s + a(2)/2^s + a(3)/3^s + ... where the sequence a(1),a(2),... has a particular form. And you can use these to prove a generalization of the prime number theorem that tells you, e.g., that the number of primes <= x that are 1 mod 4 is about 1/2 x/log(x).
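
That generalisation is easy to eyeball numerically, if you're curious -- a crude Python check (trial division, nothing clever):

    from math import log

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    x = 100_000
    ones   = sum(1 for n in range(2, x + 1) if n % 4 == 1 and is_prime(n))
    threes = sum(1 for n in range(2, x + 1) if n % 4 == 3 and is_prime(n))
    print("primes <= x that are 1 mod 4:", ones)
    print("primes <= x that are 3 mod 4:", threes)
    print("x / (2 log x) prediction:    %.0f" % (x / (2 * log(x))))
    # the two residue classes split the primes almost exactly in half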

These sequences a(k) are periodic modulo some integer (e.g., the one you need to discriminate between primes that are 1 and 3 mod 4 has period 4). You can think of them as being associated with "the integers modulo k", which is an "abelian group" (which very crudely means something in which you can do addition and subtraction and have the results make some kind of sense). Associated with other abelian groups there are other L-series-like things called "automorphic L-functions". And these things are (kinda-sorta) the subject of the so-called "Langlands program", which sketches -- the details even of the conjectures involved are in many cases not clearly understood -- relationships between lots of objects in pure mathematics that I haven't yet said enough even to define: automorphic forms (related to automorphic L-functions), group representations, etc. And all these things have intimate connections with the prime numbers (as I kinda sketched above), and with other number-theoretic things (e.g., the proof of Fermat's last theorem makes use of ideas of this sort).

One of the conjectures in the Langlands program is the "fundamental lemma". I can't even begin to tell you what it is, I'm afraid, but it's important and now it's been proved :-).

What's next: decades more chipping away at the Langlands program, bringing gradually more understanding of number theory, algebraic geometry, and related areas of mathematics.


(See the sibling comment with a "10." in it for context.)

6. The Growth of Univalent Foundations / Homotopy Type Theory. This is the bastard offspring of (semi-)automatic proof checking, the "foundations of mathematics", and, of all things, topology. The connection between these may become less surprising if you are familiar with category theory, which has always been kinda foundational and kinda topological.

I don't really know anything about this stuff, and unlike (e.g.) inter-universal Teichmueller theory or the Langlands program I can't really claim that that's because it's beyond human comprehension :-). I bet there are people here who know a lot more than I do about this, and I hope they'll correct my mistakes and fill in my gaps. Anyway, it goes something like this:

Mathematicians have always liked building complicated things out of simple things, and in particular have always wanted to build the whole of pure mathematics on top of something nice and simple. The traditional nice-and-simple thing to use for this has been set theory; the study of collections of things.

The usual path from raw sets to the mathematical objects we're familiar with -- which is not relevant to "univalent foundations", so skip this if you like -- goes like this. First, build a structure to serve as your non-negative integers. Zero will be the empty set; n+1 will be {0,...,n}. Define addition and multiplication "by induction". Next, ordered pairs: if a,b are sets then {{a},{a,b}} is a set from which you can work out what a,b were. Abbreviate that as (a,b).

Now we can make the integers as follows: consider ordered pairs (a,b) where a,b are nonnegative integers, and interpret them as meaning a-b. We need to treat (a1,b1) and (a2,b2) as the same when a1-b1=a2-b2, which is the same as a1+b2=a2+b1, which is a statement only about nonnegative integers and addition. Now we can make rational numbers: take ordered pairs (a,b) where a,b are integers and b isn't zero, meaning a/b; again (a1,b1) and (a2,b2) are the same when a1/b1=a2/b2, which is the same as a1.b2=a2.b1, which is a statement only about integers and multiplication.

Now make real numbers by "Dedekind cuts"; the real number x "is" the set of all rational numbers < x. Etc., etc., etc.
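
If it helps, here's how mechanical those layers are -- a throwaway Python sketch of the integers-as-pairs step (my own toy code, not any real foundations library):

    # Integers as pairs (a, b) of naturals "meaning a - b"; equality is the
    # promised condition a1 + b2 == a2 + b1, which never mentions subtraction.
    def int_eq(x, y):
        (a1, b1), (a2, b2) = x, y
        return a1 + b2 == a2 + b1

    def int_add(x, y):
        (a1, b1), (a2, b2) = x, y
        return (a1 + a2, b1 + b2)

    minus_two = (1, 3)                                # "1 - 3"
    five = (7, 2)                                     # "7 - 2"
    print(int_eq(int_add(minus_two, five), (3, 0)))   # True: -2 + 5 = 3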

There are some not-so-likeable things about basing mathematics on set theory. One is that the notion of "set", although it seems absolutely straightforward and transparent, turns out to be not so much so when you look more closely. E.g., you might think that for any proposition P with a single "free variable" x in it, there should be a set of "all x such that P". E.g., "all x such that x is a prime number". But that famously can't work in general; take P = "x is a set that doesn't have itself as a member" and you get a paradox. So you have to restrict your ability to form sets in this way, and there are a bunch of different ways of doing it, and none of them is perfectly satisfactory.

(A certain amount of unsatisfactoriness is inevitable thanks to Goedel's incompleteness theorem. However you try to construct mathematics, either your system will just be broken or else there will be questions it can't resolve.)

"Univalent foundations" is an entirely different approach. The idea is to build mathematics on something like the lambda calculus -- that is, roughly, you start with functions rather than objects. You can very very handwavily say that the relationship between u.f. and traditional set-theoretic foundations is a teeny little bit like that between Haskell and, say, C.

One of the cute things about lambda calculus is a theorem that connects types with mathematical theorems, and expressions that compute a value of a given type with proofs of the corresponding theorems. This is at the heart of proof-checking systems like Coq and Agda. And one of the ideas behind univalent type theory is to find a type system rich enough that you can use this for talking about all of mathematics.

I've not really got very far into talking about UTT but I'll stop here because (see above) I don't really know what I'm talking about and I'm sure other people on HN know it much better. The hope is to provide foundations for mathematics that (1) don't have so many layers of arbitrary construction between the foundations and the things mathematicians actually use, and (2) are amenable to automated checking. I think.


A fairly good summary, though more material on the Curry-Howard Isomorphism would have been nice.

Summary for the layperson:

The Curry-Howard Isomorphism is an exact syntactic correspondence between types for lambda calculi, propositions in intuitionistic logics, and objects in cartesian closed categories.

A lambda calculus is a very small programming language built only out of functions. You start assigning "types" so that you can talk about total functions, which will always return.

Intuitionistic logic is just logic without the Law of the Excluded Middle (P \/ ~P), and in particular, it's a logic of constructive evidence in the form of proof. To assert X in intuitionistic logic, you must provide a proof which shows how to construct the object the proposition is about. If this sounds computational, it is: the Excluded Middle isn't provable in intuitionistic logic, essentially because the Halting Problem is undecidable. For all computationally decidable problems, you can in fact prove the Excluded Middle for those problems by exhibiting the relevant decision algorithm.

Cartesian closed categories... well, I don't actually know enough category theory to describe them. Sorry.

But anyway, what the Curry-Howard Isomorphism shows is that given any one of these things, I can turn it into all the others. A proof in intuitionistic logic can be turned into a program in typed lambda calculus, and the type of a lambda-calculus program can be turned into a proposition which that program proves. A program is also an arrow in a cartesian closed category, and a type or proposition is an object.
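
To make "programs are proofs" slightly more concrete: Python is not a proof assistant and can't check totality, so this is only a gesture at the idea, but the shape of the correspondence already shows up in ordinary type signatures:

    from typing import Callable, Tuple, TypeVar

    P = TypeVar("P")
    Q = TypeVar("Q")

    # The type "Tuple[P, Q] -> Tuple[Q, P]" reads as the proposition
    # "P and Q implies Q and P"; a total function of that type is its proof.
    def and_swap(evidence: Tuple[P, Q]) -> Tuple[Q, P]:
        p, q = evidence
        return (q, p)

    # "(P implies Q) and P, therefore Q": modus ponens is just application.
    def modus_ponens(f: Callable[[P], Q], p: P) -> Q:
        return f(p)

    print(and_swap((1, "one")))   # ('one', 1)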

Anyway, what happened is that Martin-Löf's Intuitionistic Type Theory has always been hard to work with, because you have to pull all kinds of funny definitional tricks with equality. In fact, to prove equality in ITT, you have to show that two things must be the same by definition, tautologically, trivially identical. Proving nontrivial equalities gets very difficult, as anyone fed up with trying to do so in Coq can tell you.

What happened with Homotopy Type Theory? Well, they noticed that you could add another branch to the Isomorphism: programs are points, types are topological spaces. This isn't so surprising, because category theory has always been a bit topological. This gave them a notion of a formal system to work with, and they found a model for it (which proves the formal system is sound and consistent). They found that in this model, "equality is equivalent to equivalence" (Voevodsky's Univalence Axiom), or in other words, the kind of trivial equality one deals with in Coq is generalized as just a very low-dimensional instance of topological and categorical equivalence. They also found that equality could be defined as the existence of a path between two points -- two objects/proofs are equal precisely when they belong to the same "lump of rubber" in a topological space. Since paths can be nontrivial, "go around the hole in the donut" rather than just "go towards the other point in a straight line", this is a powerful tool -- homotopy theory becomes a part of the new foundational framework.

Moreover, since topology and homotopy theory deal with continuous objects, and in fact the definition of continuity in general, from the get go, this meant that they would need a lot less nasty "plumbing" to define things like the real numbers, and thus most of common maths, in a "more topological" foundations. The fact that the new system has computational content and submits itself to computer proof-checking via the Curry-Howard Isomorphism is icing on the cake.


(See the sibling comment with a "10." in it for context.)

2. Partition numbers. Suppose I give you a positive integer; say 5. How many ways are there to write it as a sum of positive integers? Seven: 1+1+1+1+1; 1+1+1+2; 1+1+3; 1+2+2; 1+4; 2+3; 5. How do these numbers behave? How fast do they grow? Does anything about them repeat periodically?

Back in 1918, Hardy and Ramanujan found an amazing formula, an asymptotic series that you can use to compute p(n) exactly with a manageable amount of computation. (The series doesn't converge, but if you stop adding at the right point you get the right answer!) Rademacher later tweaked it to make it actually convergent.

Ramanujan also found some very startling results of the following kind: If n leaves remainder 4 when divided by 5, then p(n) is a multiple of 5. If n leaves remainder 5 when divided by 7, then p(n) is a multiple of 7. If n leaves remainder 6 when divided by 11, then p(n) is a multiple of 11. (Why 4, 5, and 6? Because 24x4 is 1 mod 5, and 24x5 is 1 mod 7, and 24x6 is 1 mod 11. No, this is not meant to be obvious.) Unfortunately, no such thing is true for 13, or in fact for any other prime number. (Ramanujan also found related results for powers of 5, 7, and 11.)
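
Those congruences are fun to verify by brute force -- a quick Python check of mine, counting partitions the slow dynamic-programming way:

    # p(n) for n <= N by the coin-change recurrence (parts of size 1..N).
    N = 500
    p = [1] + [0] * N
    for part in range(1, N + 1):
        for n in range(part, N + 1):
            p[n] += p[n - part]

    assert all(p[5 * k + 4] % 5 == 0 for k in range(N // 5))
    assert all(p[7 * k + 5] % 7 == 0 for k in range((N - 5) // 7))
    assert all(p[11 * k + 6] % 11 == 0 for k in range((N - 6) // 11))
    print(p[4], p[9], p[14])   # 5, 30, 135 -- all multiples of 5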

But! There are similar things for other primes, and Ono proved that for any prime other than 2 and 3 there are theorems of the form p(ak+b)=0 mod [your prime]. (These differ from Ramanujan's in that we don't get to take a = the prime in question.)

And then in 2011 -- this is the result Elwes is referring to -- Ono and Bruinier generalized this further by finding a sort of fractal structure in the partition function. The idea is that there are relations modulo q between p(q^m0.a0+b0) and p(q^m1.a1+b1), where m1<m0, so that you can compute p(q^m0.a0+b0) mod q in terms of p(much smaller things). The number of much smaller things you need depends on q, and it turns out that (0) for q<=3 this stuff doesn't work, (1) for 5<=q<=11 you need no much smaller things so that you get p(...) = 0 mod q, (2) for 13<=q<=31 you need one much smaller thing so that you get p(...) = constant times p(smaller) mod q, etc.

This is all sufficiently constructive that Ono can, e.g., exhibit the congruences whose existence he's proved, and use them to calculate partition numbers.

I don't have much understanding of how this all works, but it uses an important bit of machinery called modular forms. (Closely related to the stuff I waved my hands vaguely towards when talking about the Langlands program.) These are mathematical functions with a certain kind of "almost-periodic" behaviour, which turn out to be connected to pretty much everything in number theory. Improvements in understanding these could be very valuable elsewhere.


(See the sibling comment with a "10." in it for context.)

5. Untriangulable spaces. Suppose you're doing topology: you're interested in what's left of geometry after you strip away notions like "distance" and "angle". The central notion here is that of "topological space", which is a set (of points) together with (in one formulation) a notion of "neighbourhood"; intuitively, a neighbourhood of a point x is a set that contains "all points close enough to x". Topological spaces can be pretty weird, so topologists are interested in various more restricted kinds of space that look more like the geometrical objects we're familiar with.

One of the most important is a "manifold". A d-dimensional manifold is a topological space with the property that all small enough bits of it look like ordinary d-dimensional Euclidean space.

Now, you might hope that that's a restrictive enough condition to allow you to put a lot of useful extra structure on any space that's a manifold. For instance, you might hope to be able to put a "differentiable structure" on it, which is equivalent (not trivially equivalent, given the usual definitions, but never mind) to saying that you can instantiate it as a subset of a nice Euclidean space, defined by saying that it's where some nice well-behaved functions on that space are zero. Or, kinda-oppositely, instead of heading in the direction of increasing smoothness you might head in the direction of increasing discreteness: you might want a "triangulation" of the manifold, meaning that you can build it out of points (0-dimensional) and line segments (1-dimensional) and triangles (2-dimensional) and tetrahedra (3-dimensional), etc., etc., glued together in obvious ways along their faces.

(Why would you care? Because if you have that extra structure you can use various bits of mathematical machinery designed to exploit it. For instance, if you have a triangulation then lots of things reduce to nice finite computations that you can do on a computer.)

Alas! It's been known for some time (though it was quite a surprise when first discovered) that some manifolds can't have a differentiable structure put on them (and others have more than one, which was also a surprise).

Well, it turns out that if a manifold has a differentiable structure then it can be triangulated, but perhaps not conversely. So maybe every manifold can be triangulated? Nope. In the 1980s, an example was found of a 4-dimensional manifold with no triangulation.

Still, things might be different in higher dimensions. (E.g., the Poincare conjecture -- resolved several years ago now by Grigori Perelman -- has versions in all dimensions; it's really hard in 3 and 4 dimensions but much easier in 5 or more.) And this is the question that Manolescu has resolved. It turns out that there are manifolds in 5,6,7,... dimensions that can't be triangulated.

He proved this using something called Floer homology. Homology is a technique for deriving algebraic structure from topological structure (or, actually, from other kinds of structure, and there's a bit of a connection here with the "univalent foundations" stuff); there are lots of different kinds of homology, all closely related, and this particular one turns out to have a lot to do with mathematical structures that arise in theoretical physics.

The relevant thing about Floer homology is that it derives algebraic structure from what are called "cobordisms". If you have d-dimensional manifolds A and B, a cobordism between them is a (d+1)-dimensional manifold-with-boundary (what does that mean? well, think of e.g. a disc; it's a 2-dimensional manifold, with 1-dimensional boundary, namely the circle around it) whose boundary consists of a copy of A together with a copy of B. (Example: suppose A and B are both spheres, like the surface of the earth. A cobordism between them is given by a hollow ball; one sphere is its outer surface, one its inner surface.)

I have no knowledge of the details, but it turns out (and was proved in the 1970s) that "all manifolds in dimension 5 and up are triangulable" is equivalent to a rather technical statement about homology and cobordisms (in, curiously, only 3 dimensions). And that's the statement that Manolescu proved was false.

I don't know whether anything interesting follows from the existence of untriangulable 5-manifolds as such, but a deeper understanding of Floer homology means a deeper understanding of cobordisms, which means a deeper understanding of topological quantum field theories, which may be useful in theoretical physics.



