Indeed, though I'm not clear on your deiknymi, it's quite hard to determine sentience.
IMHO the Turing test has not aged well and is insufficient to really answer the question; it's now quite possible to make something that quacks like a duck, walks like a duck, etc., but that is clearly, on close inspection, not actually a duck.
And I always thought that the Turing test was a philosophical way to solve the matter - I always interpreted it as meaning "it's pointless to inspect more closely something that quacks and walks like a duck, because walking and quacking are the essence of the duck".
And now I am surprised at how people seem to have understood the test the other way around - as if it were meant to be just some kind of empirical test to be confirmed by some more rigorous inspection.
And yes, of course LLMs are not sentient as humans are: but the limits of their sentience should be clearly visible in the limits of their discourse.
Mereological nihilism like this dissolves a huge swath of meaningless questions (across many fields) about whether something meets some poorly defined categorization.
But the kind of person asking these questions is usually the kind of person to reject such an answer.
> Alan Turing himself didn't propose "the Turing Test" as an actually meaningful test
It was the wording "can machines think" that Turing considered ambiguous, due to varying opinions on how to define "think". He proposed the experiment specifically as a less ambiguous replacement - I think it's entirely wrong to say that he did not intend it to be meaningful.
> He proposed the thought experiment specifically as a less ambiguous replacement
Yes, Turing's issue with the question is really the same as mine: we don't really know what we mean when we say something or someone "can think". That means that the question cannot be answered because we don't know what the question is. "What is the answer to the question of life, the universe, and everything?" ... "42".
What I'm saying is that Turing didn't propose the imitation game as a test for whether a machine can think at all. He proposed it in the hopes of redirecting the question to one that is both answerable and meaningful -- but it's a different question than "can a machine think?".