It's only as semantically coherent as its training database. An LLM is, in effect, just a lossy compression of that database. The compression is based on statistical maximum likelihood estimation; there are no mental (or any other kind of) models involved in compressing it.
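To make "maximum likelihood" concrete: pre-training just minimizes the negative log-likelihood (cross-entropy) of the next token over the corpus. A minimal sketch in PyTorch, with random placeholder tensors standing in for a real model and training set, not any particular LLM's actual code:

    # Illustrative sketch only: next-token training as maximum likelihood estimation.
    # vocab_size, batch, seq_len, and the tensors are placeholders, not real values.
    import torch
    import torch.nn.functional as F

    vocab_size = 50_000
    batch, seq_len = 8, 128

    # logits: the model's predicted distribution over the next token at each position
    logits = torch.randn(batch, seq_len, vocab_size)          # stand-in for model output
    targets = torch.randint(0, vocab_size, (batch, seq_len))  # the actual next tokens

    # Maximizing the likelihood of the training text is the same as minimizing
    # this average negative log-likelihood; gradient descent on it is the
    # "compression" step described above.
    nll = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    print(nll)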
You can claim that mental models don't actually exist and everything in the universe is just maximum likelihood, but that would be a religious/spiritual statement, outside the realm of science.