> (though in general I think the favored “alignment” frames of the LessWrong community are not even wrong).
The Turing Test doesn’t test humans, so you cannot use it to establish any properties of humans.
Next!
> The topmost poster is simply assuming that language models can't possibly understand the same way a human does without relying on any kind of "test" at all, which I think is the real scientific dead end here.
If you are actually interested in this problem, why not interpret what I'm saying a bit more charitably instead of wasting your time replying with snark?
> The topmost poster is simply assuming that language models can't possibly understand the same way a human does without relying on any kind of "test" at all, which I think is the real scientific dead end here.
Sounds unfalsifiable. So yes.