There are plenty of examples of "new" things an LLM can do.
A good example is all those toy examples of "Program a whatever in the style of Shakespeare and David Bowie's love child". This isn't a thing that it has seen in training data.
From my limited understanding of how LLMs work, I believe this behavior is enabled by embeddings. The model maps its ~50,000-token vocabulary into a dense vector space with far fewer dimensions than there are words in the vocabulary. Each dimension of that vector encodes some sort of meaning (or at least an association) for the word.
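Very roughly, the embedding is just a big lookup table: each token id indexes a row of a matrix and comes out as a dense vector. The numbers below (vocabulary size, dimension, token ids) are made up for illustration, and real models learn the table rather than filling it with random values, but the shape of the idea is the same:

```python
import numpy as np

vocab_size = 50_000   # roughly the vocabulary size mentioned above
embed_dim = 768       # each token becomes a 768-dimensional vector (illustrative)

rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(vocab_size, embed_dim))  # learned in a real model

token_ids = [1042, 318, 257]          # hypothetical token ids for a short phrase
vectors = embedding_table[token_ids]  # shape (3, 768): one dense vector per token
print(vectors.shape)
```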
I saw an example in a Numberphile video where they were able to take the vector for the word "prince", subtract the vector for "man", add the vector for "woman", and the resulting vector was closest to the word "princess". So in theory there could be a "gender" dimension and a "position of authority" dimension in that space (or the model might be making other, stranger connections between words that we don't understand).
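Here's a toy sketch of that arithmetic. The 3-dimensional vectors are hand-made for the example (real embeddings have hundreds of learned dimensions, and nothing guarantees a clean "gender" axis); dimension 0 loosely plays the role of "gender" and dimension 1 "royalty":

```python
import numpy as np

# Invented toy vectors, not taken from any real model.
words = {
    "man":      np.array([ 1.0, 0.0, 0.2]),
    "woman":    np.array([-1.0, 0.0, 0.2]),
    "prince":   np.array([ 1.0, 1.0, 0.3]),
    "princess": np.array([-1.0, 1.0, 0.3]),
    "king":     np.array([ 1.0, 1.5, 0.9]),
}

def nearest(query, exclude=()):
    """Return the word whose vector is most similar (by cosine) to the query."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in words if w not in exclude),
               key=lambda w: cos(words[w], query))

result = words["prince"] - words["man"] + words["woman"]
print(nearest(result, exclude={"prince", "man", "woman"}))  # -> "princess"
```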
I think the same thing is happening in your example. The model produces output that stays in the region of the vector space associated with the "Shakespearean" and "Bowiean" dimensions while still satisfying the other requirements of the prompt.
I don't think that's really "new". That's combining two existing styles that the LLM has seen in its training data, and the creative idea to combine those styles has been supplied by the operating human.
It's phenomenally impressive, and may be a stepping stone to models that can come up with new ideas, but I don't think we're there yet.
LLMs seem to be able to capture the idea of Shakespeare's and Bowie's styles and intuit a combination of the two, but when I start asking one questions about what it thinks about the process, I don't get the impression of any understanding. It can magic up text from prompts, but it doesn't understand what it's doing.