Absolutely false; at the core of every LLM is a highly compressed text corpus from an Internet search engine.
(The wonder here isn't that an LLM succeeds at text retrieval tasks; the wonder is how highly compressed the index turns out to be. But maybe we just severely overestimate our own information complexity.)
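For a rough sense of the compression ratio being gestured at here, a back-of-envelope sketch (the parameter count, corpus size, and bytes-per-token figures below are loose illustrative assumptions, not the specs of any particular model):

```python
# Back-of-envelope: how "compressed" is a model relative to its corpus?
# All figures are rough assumptions for illustration only.

params = 70e9          # assumed parameter count (a 70B-scale model)
bytes_per_param = 2    # fp16/bf16 storage
model_bytes = params * bytes_per_param

tokens = 15e12         # assumed training-corpus size, in tokens
bytes_per_token = 4    # rough average for English web text
corpus_bytes = tokens * bytes_per_token

print(f"model:  {model_bytes / 1e12:.2f} TB")          # ~0.14 TB
print(f"corpus: {corpus_bytes / 1e12:.1f} TB")         # ~60 TB
print(f"ratio:  ~{corpus_bytes / model_bytes:.0f}:1")  # ~429:1
```

Under those assumptions, the "index" is two to three orders of magnitude smaller than the text it was trained on, which is the compression being marveled at.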
So, what you're telling me is that everything they say has already been said before, completely verbatim? Like, if I asked it to write a story about a dog named Jebediah surfing to planet Xbajahabvash, it would basically just find a link to someone else's story about the same dog surfing to the same planet? That sounds like an infinite number of combinations. Perhaps the internet is just infinitely large, squared (or even circled).