Compute is limited during inference, and this naturally limits brute-force program search.
But this doesn't prevent someone from creating a huge ARC-like dataset ahead of time, as BARC did (only bigger), and training a correspondingly huge NN on it.
Placing a limit on submission size could foil this kind of brute-force approach, though. I wonder if you are considering this for 2025?