But this is because ASOIAF was in the training dataset. ChatGPT wouldn't be able to say anything about this book if it weren't in its dataset, and you wouldn't have enough context tokens to present the whole book to ChatGPT anyway.
Thinking of it as "the training dataset" vs "the context window" is the wrong way of looking at it.
There's a bunch of prior art on adaptation techniques for getting new data into a trained model (fine-tuning, RLHF, etc.). There's no real reason to think there won't be more techniques that turn what we now think of as the context window into something that alters the model's weights and is serialized back to disk.
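To make that concrete, here's a minimal sketch of the most basic of those techniques, plain supervised fine-tuning, using the Hugging Face transformers library. The model name, input file, and hyperparameters are all placeholders, not anything specific to how any vendor actually adapts its models:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # stand-in for any causal LM
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.train()

    # "New data" the base model never saw, e.g. a chunk of a book.
    text = open("book.txt").read()  # hypothetical file
    batch = tok(text, return_tensors="pt", truncation=True, max_length=512)

    opt = torch.optim.AdamW(model.parameters(), lr=5e-5)
    for _ in range(3):  # a few gradient steps on the new text
        out = model(**batch, labels=batch["input_ids"])
        out.loss.backward()
        opt.step()
        opt.zero_grad()

    # The new data now lives in the weights, serialized back to disk,
    # rather than occupying a context window.
    model.save_pretrained("adapted-model")
    tok.save_pretrained("adapted-model")

The last two lines are the whole point: after a few gradient steps the model "knows" the new text without it ever being in the prompt, which is exactly the direction a weight-updating context mechanism would take.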