"Solving IMO problems at Silver-Medal level" is pretty much equivalent to solving something like 4/6 problems. It is only a disingenuous comparison if you insist on reading it as a competition. I mean yeah, many people will, but I don't care about them. People who are technically interested in this know that the point is not to pit AI against humans.
It's great that you feel safe being so aloof, but I believe we have a responsibility in tech to turn down the AI hype valve.
The NYT is currently running a piece with the headline "Move Over, Mathematicians, Here Comes AlphaProof". People see that, and people react, and we in tech are not helping matters by carelessly making false comparisons.
I think search-based AI is on a different level than imitation models like GPT. This is not a hallucinating model, and it could potentially discover things that are not written in any books.
Search is amazing. Protein folding searches for the lowest-energy configuration. Evolution searches for ecological fit. Culture searches for progress, and science for understanding. Even placing a foot on the ground searches for dynamic equilibrium. Training an ML model searches for the best parameters to fit a dataset. Search is universal. Supervised learning is merely imitative; search is extrapolative.
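To make the distinction concrete: a search process can land on solutions that appear in no training data, because it only needs a way to score candidates, not examples to imitate. Here's a toy sketch of the idea (a generic hill climber, not anything resembling AlphaProof's actual search; the objective and neighbor function are made up for illustration):

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def hill_climb(score, start, neighbors, steps=1000):
    """Generic local search: try a random neighbor each step,
    keep it only if it scores strictly better."""
    best = start
    for _ in range(steps):
        candidate = random.choice(neighbors(best))
        if score(candidate) > score(best):
            best = candidate
    return best

# Toy objective: find the integer x maximizing -(x - 7)**2.
# The "dataset" never contains the answer; the search discovers it.
result = hill_climb(
    score=lambda x: -(x - 7) ** 2,
    start=0,
    neighbors=lambda x: [x - 1, x + 1],
)
print(result)  # converges to 7
```

The same skeleton scales from this toy example up to game tree search or proof search: all you swap out is the candidate representation, the neighbor/move generator, and the scoring function.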
Why? Why is hype bad? What actual harm does it cause?
Also, the headline is fair, as I do believe that AlphaProof demonstrates an approach to mathematics that will indeed invade mathematicians' workspaces. And I say that as a mathematician.
For sure. I feel like mathematicians are not paying attention (maybe deliberately). We have a real shot at solving some incredibly hard open problem in the next few years.