Everyone calls it "next word prediction," but that's also a simplification.
If you go back to the original Transformer paper, the goal was translating text from one language to another. In today's prompt-driven systems the model only sees "past" tokens as it generates new ones in real time, but in that original architecture the encoder reads the source text with both backward and forward context to inform the translation.
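To make the distinction concrete, here's a minimal sketch (numpy only, purely illustrative) of the two attention patterns involved: the causal mask a decoder uses while generating token by token, versus the full bidirectional mask the original encoder applies over the source sentence.

```python
import numpy as np

seq_len = 5  # hypothetical 5-token sequence

# Decoder-style causal mask: position i may only attend to positions <= i,
# i.e. the model only sees "past" tokens while generating.
causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))

# Encoder-style bidirectional mask: every position attends to every other,
# so the source sentence is read with both backward and forward context.
bidirectional_mask = np.ones((seq_len, seq_len), dtype=bool)

print(causal_mask.astype(int))
print(bidirectional_mask.astype(int))
```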
My point is just that the way a model is trained and the way it emits tokens are less limited than people assume.
You could fine-tune or train your own model on your data and then design the prompt/query interface to get interesting results out of it.
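As a rough sketch of that idea, assuming a causal LM fine-tuned with the Hugging Face transformers library (the checkpoint name and the query wrapper below are placeholders, not a real model or a prescribed interface):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-org/your-finetuned-model"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def query(prompt: str, max_new_tokens: int = 64) -> str:
    """Wrap the fine-tuned model in whatever prompt/query interface suits your data."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(query("Summarize the design notes on topic X:"))
```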