>Indeed, and this has been the case for quite a while now. You can always improve on a general algorithm by exploiting knowledge of the data, but that never generalizes: it usually leads to worse performance on other data, or to new pathological cases whose results are unusable.
DeepMind did the exact same thing with AlphaTensor. While they do some genuinely incredible things, there's always a massive caveat that the media ignores. Still, I think it's great that they figured out a way to search a massive space, where most candidate solutions are simply wrong, with only 16 TPUs running for at most two days. Hopefully this can be repurposed into something even more useful, like a program that finds proofs for theorems.
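To make the "most solutions are wrong" point concrete: AlphaTensor searches for low-rank decompositions of the matrix-multiplication tensor, and a candidate only counts if it reproduces that tensor exactly. Below is a minimal sketch in Python/NumPy (my illustration, not from AlphaTensor's code; the variable names T, U, V, W are my own) that verifies Strassen's classic rank-7 decomposition for 2x2 matrices. Perturb almost any entry of U, V, or W and the assertion fails, which is what makes the space so hostile to naive search.

```python
import numpy as np

n = 2  # matrix dimension; the matmul tensor has shape (n^2, n^2, n^2)

# Build the 2x2 matmul tensor T: T[i, j, k] = 1 iff entry i of vec(A)
# times entry j of vec(B) contributes to entry k of vec(C) in C = A @ B.
T = np.zeros((n * n, n * n, n * n), dtype=int)
for r in range(n):
    for c in range(n):
        for m in range(n):
            T[r * n + m, m * n + c, r * n + c] = 1

# Strassen's published factors: row t of U (resp. V) gives the linear
# combination of A (resp. B) entries forming product M_t; row t of W
# scatters M_t back into vec(C).
U = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 0, 1],
              [1, 1, 0, 0], [-1, 0, 1, 0], [0, 1, 0, -1]])
V = np.array([[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, -1], [-1, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
W = np.array([[1, 0, 0, 1], [0, 0, 1, -1], [0, 1, 0, 1], [1, 0, 1, 0],
              [-1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]])

# A candidate is a valid multiplication algorithm iff the sum of its
# rank-1 terms u_t (x) v_t (x) w_t equals T exactly -- no approximation.
reconstruction = np.einsum('ti,tj,tk->ijk', U, V, W)
assert np.array_equal(reconstruction, T), "not a valid matmul algorithm"
print("Strassen's rank-7 decomposition reproduces the 2x2 matmul tensor")
```

The exactness constraint is the whole game: the reward signal is sparse because nearly every point in the discrete space of factor matrices fails the check above, and AlphaTensor's contribution was getting a tree search plus learned policy to find valid (and lower-rank) decompositions anyway.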