On the AlphaCode Attention Visualization website [1], the Accepted code shown for 1553D is an O(n^2) Python solution, which should be TLE (exceed the time limit). It correctly implements a two-pointer approach but fails to "realize" that list.pop(0) is O(n) in Python, making the whole loop quadratic. I'm not sure how it passed.
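A quick sketch of why that matters: draining a list with pop(0) is quadratic, while collections.deque does the same job in linear time. The sizes here are illustrative, not taken from the problem.

```python
import timeit
from collections import deque

# Each list.pop(0) shifts every remaining element left (a memmove
# in CPython), so draining an n-element list does O(n^2) total work.
def drain_list(n):
    xs = list(range(n))
    while xs:
        xs.pop(0)

# deque.popleft() is O(1) per call, so draining is O(n) overall.
def drain_deque(n):
    xs = deque(range(n))
    while xs:
        xs.popleft()

n = 50_000
t_list = timeit.timeit(lambda: drain_list(n), number=1)
t_deque = timeit.timeit(lambda: drain_deque(n), number=1)
print(f"list.pop(0): {t_list:.3f}s  deque.popleft(): {t_deque:.3f}s")
```

On my understanding, the gap widens quadratically with n, which is exactly the trap the generated solution fell into.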
> In AlphaCode's solution, the "inner O(n) loop" is actually a memmove(), which is optimized to be insanely fast.
Again, it is not. CPython does not do these things.
The web page says, and this is corroborated in the paper,
> Solutions were selected randomly, keeping at most one correct (passes all test cases in our dataset) and one incorrect sample per problem and language. Note that since our dataset only has a limited number of test cases, passing all tests we have cannot completely rule out false positives (~4%), or solutions that are correct but inefficient (~46%).
The “54th percentile” measure did use estimated time penalties, discussed in Table 4 of the paper, but 1553D was not part of that evaluation.
Apologies, I thought you meant ‘optimized’ in a different sense, not in terms of how list.pop is implemented, as AlphaCode wasn't involved in that. You are entirely correct that list.pop uses memmove.
> Someone submitted this 1553D code to Codeforces and it passed
Ah, well, that shows the problem has a 2-second time limit, which is quite a lot of time! Not quite enough to empty a 200k-element list with list.pop(0), but not far off; a 140k-element list squeaks in under the time limit for me.
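A back-of-the-envelope check of those figures, just counting element moves (the 140k/200k sizes come from the timing above; the constant factor is whatever one memmove'd element costs):

```python
# Each pop(0) from a k-element list memmoves the remaining k-1
# elements, so draining an n-element list moves n*(n-1)//2 in total.
def total_moves(n: int) -> int:
    return n * (n - 1) // 2

# 200k elements is roughly 2x the work of 140k, consistent with one
# size fitting inside a 2-second limit and the other not quite.
print(total_moves(140_000))  # 9_799_930_000
print(total_moves(200_000))  # 19_999_900_000
```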
[1] https://alphacode.deepmind.com/#layer=30,problem=34,heads=11...