An LLM is only an Internet search engine with a fancier interface; it doesn't actually reason about anything. There is nothing "semantic" about an LLM's output.
I don't personally detect sentience yet, but about 5% of my inputs result in some sort of interpretation and/or reasoning. Sometimes I have to think for days ("why did LLM.xyz make such a strange connection?") only to realize that the machine's schizotypisms aren't often wrong, just different.
Absolutely false. At the core of every LLM is a highly compressed text corpus from an Internet search engine.
(The wonder here isn't that an LLM succeeds at text retrieval tasks; the wonder is how highly compressed the index turns out to be. But maybe we just severely overestimate our own information complexity.)
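To give a sense of scale, here is a rough back-of-the-envelope with purely illustrative numbers (a ~10-trillion-token training set at ~4 bytes of text per token, a ~70-billion-parameter model stored at 2 bytes per parameter); these are assumptions for the sketch, not measurements of any particular system:

    # Back-of-the-envelope only; the corpus and model sizes below are
    # illustrative assumptions, not measurements of any real system.
    corpus_bytes = 10e12 * 4   # ~10T tokens at ~4 bytes of text each -> ~40 TB
    model_bytes = 70e9 * 2     # ~70B parameters at 2 bytes each -> ~140 GB
    print(f"corpus  ~{corpus_bytes / 1e12:.0f} TB")
    print(f"weights ~{model_bytes / 1e9:.0f} GB")
    print(f"ratio   ~{corpus_bytes / model_bytes:.0f}:1")

Under those assumed numbers the weights are a few hundred times smaller than the text they were fit to, which is the "highly compressed" part.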
So, what you're telling me is that everything they say has already been said before, completely verbatim? Like, if I asked it to write a story about a dog named Jebediah surfing to planet Xbajahabvash, it would basically just find a link to someone else's story about the same dog surfing to the same planet? That sounds like an impossibly large number of combinations to have stored somewhere. Perhaps the internet is just infinitely large, squared (or even circled).
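For scale, a quick count with made-up but plausible numbers (a 50,000-token vocabulary and a 200-token story, both assumptions for illustration):

    # How many distinct 200-token sequences exist over a 50,000-token vocabulary?
    # Both numbers are illustrative assumptions, not properties of any real model.
    vocab_size, story_length = 50_000, 200
    combinations = vocab_size ** story_length
    print(f"possible stories: about 10^{len(str(combinations)) - 1}")

That comes out to roughly 10^939 possible short stories, which is rather more than the internet has pages to link to.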
It's only as semantically coherent as its training database. An LLM is, in effect, just a lossy compression of its training database. The compression is based on statistical maximum likelihood estimation; no mental models (or models of any other kind) are involved in compressing it.
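If it helps, here is what "maximum likelihood" means in miniature, using a toy bigram model instead of a neural net (the corpus below is made up); the training objective is simply to make the observed text as probable as possible:

    import math
    from collections import Counter

    # Toy corpus (made up). A real LLM does the same kind of estimation with a
    # neural network over subword tokens instead of raw bigram counts.
    corpus = "the dog surfed . the dog barked . the cat barked .".split()

    # Count bigrams and the contexts they follow.
    bigrams = Counter(zip(corpus, corpus[1:]))
    contexts = Counter(corpus[:-1])

    # The maximum-likelihood estimate of P(next | prev) is the relative frequency.
    def p(nxt, prev):
        return bigrams[(prev, nxt)] / contexts[prev]

    # Negative log-likelihood of the corpus: the quantity that
    # maximum-likelihood training drives down.
    nll = -sum(math.log(p(nxt, prev)) for prev, nxt in zip(corpus, corpus[1:]))
    print(f"P('dog' | 'the') = {p('dog', 'the'):.2f}")  # 2/3 from the counts
    print(f"corpus NLL = {nll:.3f} nats")

The fitted probabilities are nothing but counts from the data; that is the sense in which the model is a statistical compression of its corpus rather than anything model-of-the-world-like.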
You can claim that mental models don't actually exist and everything in the universe is just maximum likelihood, but that would be a religious/spiritual statement, outside the realm of science.