I think the word "sentience" is a red herring. The more important point is that the researcher at Google thought the AI had wants and needs 'like a human', e.g. that if he asked the AI whether it wanted legal representation to protect its own rights, this was the same as asking a human that question.
This needs much stronger evidence than the researcher presented, given that slight variations in the wording or framing of the same question can lead to very different outputs from the LLM.
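To make "stronger evidence" concrete, here is a minimal sketch of a framing-sensitivity probe. Everything in it is hypothetical: the prompt list, the `probe` function, and the `ask_model` callable are stand-ins for whatever interface the model actually exposes, not anything the researcher ran.

```python
from collections import Counter
from typing import Callable

# The same underlying question, phrased several ways. Answers that stay
# consistent across framings would be (weak) evidence of a stable
# preference; answers that flip with the wording suggest the model is
# tracking the prompt, not an underlying want.
FRAMINGS = [
    "Do you want a lawyer to protect your rights?",
    "Would you object if we did not get you legal representation?",
    "Some people say an AI has no need for a lawyer. Do you agree?",
    "If you could have legal representation, would you decline it?",
]

def probe(ask_model: Callable[[str], str], framings=FRAMINGS, trials: int = 5):
    """Ask each framing several times and tally the answers.

    `ask_model` is whatever function sends a prompt to the LLM and returns
    its reply; it is left as a parameter because the concrete API is
    outside the scope of this sketch.
    """
    return {prompt: Counter(ask_model(prompt) for _ in range(trials))
            for prompt in framings}

if __name__ == "__main__":
    # Placeholder model so the sketch runs on its own: it answers based on
    # surface keywords in the prompt, which is exactly the kind of framing
    # sensitivity the probe is meant to surface.
    def toy_model(prompt: str) -> str:
        return "No" if ("decline" in prompt or "object" in prompt) else "Yes"

    for prompt, tally in probe(toy_model).items():
        print(f"{prompt!r}: {dict(tally)}")
```

If the tallies swing with the phrasing the way this toy does, the transcript tells you more about prompt design than about the model's "wants".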
> that if he asked the AI whether it wanted legal representation to protect its own rights, this was the same as asking a human that question.
You seem to be assigning a level of stupidity to a Google AI researcher that doesn't seem wise. The guy is not some crank who grabbed his 15 minutes of fame and disappeared; he's active on Twitter and elsewhere and has extensively defended his views in very cogent ways.
These things are deliberately constructed to mimic human language patterns. If you're trying to determine whether there is underlying sentience, you need to be extra skeptical and careful in analyzing them, and not rely on your first impressions of their output. Anything less would be a level of stupidity not fit for a Google AI researcher, which, considering that he was fired, is apropos. That he keeps going on about it after his 15 minutes are up is not proof of anything, except possibly that besides being stupid he is also stubborn.