Hacker News

It would be nice if the page included detailed descriptions of the proofs it came up with, more information about the system's capabilities, and insights into the training process...

If the data is synthetic and covers a limited class of problems, I would imagine what it's doing mostly reduces to some basic search heuristics, which would be more valuable to understand than just being told it can solve a few problems in three days.

I found those; I just would have appreciated it if the mathematical content weren't sidelined to a separate download as if it's not important. The explanation on the page felt shallow, as if they just want people to accept it as a black box.

All I've learnt from this is that they used an unstated amount of computational resources to essentially brute-force what a human is already capable of doing in far less time.


Very few humans can, and only after years of training. Please don't trivialize it.


Very few humans go after this type of training. In my "math talent" school (most of the Serbian/Yugoslavian medal winners came from it), at most a dozen students "trained" for this over 4 high school generations (500 students).

The problems are certainly not trivial, but humans are not really putting all their effort into this either, and the few who do train for it medal on average 50% of the time and get a silver or better 25% of the time (by design), with much less time available to do the problems.


This is disingenuous. People who train are already self-selected as talented in math. And of the people who train, not everyone gets to this level. Sadly, I speak from personal experience.


This school is full of people talented at math — you can't get in if you don't pass a special math exam (looking at the list, out of Serbia's 16 gold medals, I can see 14 went to students of this school, and numerous silvers and bronzes too — Serbia has participated as an independent country since 2006 with a population of roughly 7M, if you want to compare it with other countries on the IMO medal table). So in general, out of this small pool (10 talented and motivated people from 4 generations), Serbia could get a gold medal winner on average almost once every year. I am sure there were other equally talented mathematicians among the 490 students who did not train for the competition (and some have achieved more academic success later on).

Most students were simply not interested. And certainly, not everybody is equally talented, but the motivation to achieve competition success is needed too — perhaps you had the latter but not enough of the former. I also believe competitive maths is entirely different from research maths (time pressure, even luck is involved in a good idea coming up quickly, etc). Since you said you were a potential bronze medal winner, it might not even be a talent issue: maybe you just had great competition, and someone had better luck in one or two tests to rank above you (better luck as in the right idea/approach coming to them quicker, or the type of problem on the test suiting them more). And if you are from a large country like the USA, China or Russia (topping the medal table), it's going to be freaking hard to get onto the team, since you'll have so many worthy students (and the fact that they don't always score only golds out of such large pools tells me that performance is not deterministic).

As a mathematician, I am sure you'd agree you'd want to run a lot more than a dozen tests to establish statistical significance for any ranking between two people at IMO-style competitive maths, especially if they are close after the first few. As an anecdote, many at my school participated in national-level maths and informatics competitions (they start at the school level and go through county/city level to the national level) — other than the few "trained" competitors staying at the top, the rest of the group mostly rotated through the other spots below them regardless of the level (school/county/nation). We actually joked amongst ourselves about who had the better intuition "this time around" for a problem or two, while still beating the rest of the country handily (we obviously had a better base level of education + decently high base talent), but not coming close to the "competitors".
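To make the small-sample point concrete, here is a quick simulation (a rough sketch with made-up numbers — the 60% figure is purely illustrative, not data about any real competitors). It estimates how often the genuinely stronger of two close competitors fails to come out ahead after n contests:

```python
import random

random.seed(0)

def rank_flips(p_better_wins, n_contests, trials=10_000):
    """Estimate the fraction of trials in which the genuinely better
    competitor does NOT finish with strictly more head-to-head wins
    after n_contests contests."""
    flips = 0
    for _ in range(trials):
        wins = sum(random.random() < p_better_wins for _ in range(n_contests))
        if wins * 2 <= n_contests:  # tied or behind after n contests
            flips += 1
    return flips / trials

# Suppose the better competitor "has the right idea first" 60% of the time.
# With only a dozen contests, the ranking flips surprisingly often;
# the flip rate shrinks as the number of contests grows.
for n in (12, 50, 200):
    print(n, rank_flips(0.6, n))
```

With numbers like these, roughly a third of dozen-contest series rank the weaker competitor at least as high, which is the sense in which a handful of olympiads can't reliably separate two people of similar strength.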

I, for instance, never enjoyed working through math problems and math competitions (after winning a couple of early-age local ones): I finished the equivalent of a math + CS MSc while skipping classes, by only learning the theory (reading through axioms, theorems and proofs that seemed non-obvious) and using that to solve problems in exams. I mostly enjoyed building things with the acquired knowledge (including my own proofs on the spot, but mostly programming), even though I understood that you build up speed with more practice (I was also lazy :)).

So, let's not trivialize solving IMO-style problems, but let's not put them on a pedestal either. Out of a very small pool of people who train for it, many score higher than the AI did here, and even their scores don't predict future theoretical math performance. Competition performance mostly predicts competition performance, and even that with large error bars.


To mathematicians, the problems are basically easy (at least after a few weeks of extra training), and after all the other recent AI advances I don't think it's surprising that, with huge amounts of computing resources, one can 'search' for a solution.


Sorry, that's wrong. I have a math PhD and I trained for Olympiads in high school. These problems are not easy for me at all. Maybe for top mathematicians who used to compete.



