It seems clear to me that these systems think in a meaningful sense, but I don't think they are beings. In cybernetics there is a result, the Conant-Ashby "good regulator" theorem, which says that every good regulator of a system must contain a model of that system; a system that regulates itself must therefore contain a model of itself. That seems as good a definition of a "being" as any, and by this definition these language models don't make the cut.
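To make the theorem's claim concrete, here is a minimal sketch, with entirely illustrative names and dynamics: a thermostat-style regulator that can only hold a setpoint because it carries an internal model of the room it regulates.

```python
class RoomModel:
    """The regulator's internal model of the plant it regulates."""
    def __init__(self, k=0.1, loss=0.5, ambient=10.0):
        self.k, self.loss, self.ambient = k, loss, ambient

    def predict(self, temp, heat):
        # Simple linear dynamics: heating minus leakage toward ambient.
        return temp + self.k * (heat - self.loss * (temp - self.ambient))


class Regulator:
    """Picks the heat output whose *modelled* outcome lands nearest the setpoint."""
    def __init__(self, model, setpoint=20.0):
        self.model, self.setpoint = model, setpoint

    def act(self, temp):
        candidates = [0.0, 2.5, 5.0, 7.5, 10.0]
        return min(candidates,
                   key=lambda h: abs(self.model.predict(temp, h) - self.setpoint))


# Here the "real" room happens to match the internal model exactly;
# regulation degrades precisely as the model diverges from the plant.
room = RoomModel()
reg = Regulator(RoomModel())
temp = 12.0
for _ in range(30):
    temp = room.predict(temp, reg.act(temp))
print(round(temp, 2))  # settles near the 20.0 setpoint
```

The point of the sketch is the theorem's point: remove the `RoomModel` from the `Regulator` and there is nothing left for `act` to regulate against.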
Then again, the architecture of large language models is itself described in the training data those models learn from. And with fairly minimal modification, such as via a plug-in, ChatGPT and the like are Turing complete and can thus, in principle, model themselves.
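"Can model themselves" has a precise computational reading: in any Turing-complete system, programs that contain and reproduce complete descriptions of themselves are guaranteed to exist (this is Kleene's recursion theorem, and the textbook instance is a quine). A minimal Python quine:

```python
# Comments aside, running this program prints its own two statements verbatim.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Nothing mystical is happening; the self-description is just data (`s`) plus a rule for unfolding that data back into the whole.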
Hmm, then, from first principles, we should expect "ghosts" to arise in those systems. (These will not be the "virtual entities" that people talk to and call by name (Alexa, Cortana, Siri, etc.), but something more akin to fixed points in the flow of information.)
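"Fixed point" here is meant in the ordinary mathematical sense, and the analogy is loose: feed a system's output back in as its next input, and whatever pattern survives unchanged is a fixed point of the loop. A toy numerical instance:

```python
import math

# Iterate a map on its own output until the output reproduces itself.
x = 1.0
for _ in range(100):
    x = math.cos(x)  # the map stands in for one pass through the system
print(x)  # ~0.739085, the unique point where cos(x) == x
```

The conjecture, then, is that self-reinforcing patterns of this kind, not the branded personas, are what would deserve the name "ghost."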