I'm spitballing here, so definitely take this with a grain of salt.
If I only ever see legal moves, I may not think outside the box and come up with moves other than what I already saw. Humans run into this all the time: we see things done a certain way, effectively learn that that's just how it's done, and we don't innovate.
Said differently, if the generative AI isn't actually being generative at all, meaning it's just predicting based on the training set, it could be producing only legal moves without ever learning or understanding the rules of the game.
I am not an ML person, and I know there is a mathematical explanation for what I am about to write, but here is my informal reasoning for why I fear this is not the case:
1) Either the LLM (or other deep neural network) can reproduce exactly what it saw, but nothing new; then it would only produce legal moves, provided it was trained on only legal ones.
2) Or the LLM can produce moves it did not see verbatim, by outputting the "most probable"-looking move for a situation it has never encountered before. In effect, this combines different situations and their outputs into a new output. As a result of this "mixing", it might output a move that is illegal in the new situation, despite having been trained only on legal moves. (See the toy sketch below.)
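To make point (2) concrete, here is a deliberately tiny sketch in plain Python. Everything in it (the feature tuples, the similarity function, the positions) is invented for the example and is not how a real LLM represents chess; the point is only that a predictor trained exclusively on legal (position, move) pairs can still emit an illegal move in a position it never saw, because it interpolates between similar training positions instead of applying the rules.

```python
# Toy "predictor" trained only on legal moves that can still output an
# illegal move in an unseen position. All names and numbers are made up.

# Training data: (position features, legal move played in that position).
training_data = [
    ((1, 0, 1), "Re1"),   # rook to e1 was legal in this position
    ((1, 1, 1), "O-O"),   # castling was legal in this position
    ((0, 1, 0), "Nf3"),
]

def similarity(a, b):
    """Count of matching features -- a stand-in for whatever a network learns."""
    return sum(x == y for x, y in zip(a, b))

def predict_move(position):
    """Copy the move from the most similar training position."""
    best = max(training_data, key=lambda example: similarity(example[0], position))
    return best[1]

# A new position the model never saw. Suppose that in *this* position the
# king has already moved, so castling ("O-O") is illegal.
new_position = (1, 1, 0)
illegal_here = {"O-O"}  # ground-truth rules, unknown to the model

move = predict_move(new_position)
print(f"predicted move: {move}, legal in this position: {move not in illegal_here}")
```

Every move in the training set was legal where it was played, yet the prediction for the new position is not, which is exactly the "mixing" failure mode described above.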
In fact, I am not even sure that the deep neural networks we use in practice can replicate their training data exactly; it seems to me that some kind of compression happens when knowledge is embedded into the network, and that compression comes with a loss.
I am deeply convinced that LLMs on their own will never be an exact technology (but LLMs plus other technology, like proof assistants or compilers, might be).
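The "LLM plus exact checker" idea can be sketched as a propose-and-verify loop: the model guesses, an exact rules engine (the analogue of a compiler or proof assistant) accepts or rejects, and rejected guesses are simply resampled. The sketch below is hypothetical; `sample_move` and `legal_moves` are placeholders, not any real model or chess library.

```python
import random

def sample_move(position):
    """Stand-in for an LLM proposing a move (hypothetical, not a real model)."""
    candidates = ["e4", "O-O", "Qxh7", "Ke2"]
    return random.choice(candidates)

def legal_moves(position):
    """Stand-in for an exact rules engine that knows the actual rules."""
    return {"e4", "Ke2"}

def propose_legal_move(position, max_tries=10):
    """Propose-and-verify loop: the model guesses, the exact checker decides."""
    for _ in range(max_tries):
        move = sample_move(position)
        if move in legal_moves(position):
            return move
    raise RuntimeError("no legal move produced within the retry budget")

print(propose_legal_move(position="<some position>"))
```

The guarantee of legality comes entirely from the checker, not from the model, which is the sense in which the combination can be "exact" even if the LLM alone is not.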
Oh, I don't think there is any expectation for LLMs to reproduce their training data exactly. By design an LLM is a lossy compression of its training data, so its output can't be expected to be an exact reproduction.
The question I have is whether the LLM might be producing mostly legal moves only because it was trained on data that itself included only legal moves. The training data would have steered it toward predicting legal moves, and any illegal moves it does predict may very well come from the randomness that is deliberately part of the prediction loop.
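That "randomness in the prediction loop" is usually temperature sampling over the model's scores. Here is a minimal sketch with invented numbers: even when the model assigns an illegal move only a small probability, a sampler will still pick it occasionally.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert scores to probabilities; higher temperature flattens the distribution."""
    exps = [math.exp(score / temperature) for score in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate moves in some position.
moves  = ["Nf3", "e4", "O-O(illegal here)"]
logits = [3.0, 2.5, -1.0]   # the illegal move is unlikely, but not impossible

probs = softmax(logits, temperature=1.0)
samples = random.choices(moves, weights=probs, k=10_000)
rate = samples.count("O-O(illegal here)") / len(samples)
print(f"illegal move sampled in {rate:.1%} of 10,000 draws")
```

With these made-up numbers the illegal move gets roughly 1% of the probability mass, so it shows up in about 1% of sampled games even though it is never the "most likely" prediction.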