Note that Turing essentially defines intelligence as the ability to predict what a human would write in a text conversation. So a generative language model that "predicts what might come next," trained on human text, is almost by definition something that passes the Turing Test.
The only remaining question is whether intelligence is equivalent to sentience, consciousness, and those other funny things.