You made two claims: (1) current language models are useless and (2) current language models have reached a ceiling. I said:
How many people are using ChatGPT, Stable Diffusion, etc. for economically or personally valuable activities?
If (1) is true, then the answer to that question is "zero" or at least "close to zero". Do you really believe that?
If (2) is true, then transformer models will never significantly exceed today's capabilities at any point in the future. Do you really believe that?
The limitation is inherent in the core design. There is no overcoming it. This is not a hurdle or a wall; it's a design flaw.
Is it totally useless to everyone? No, not completely. It's like a coherent search engine: a way to find data that is close to other data. But "close to" in this case means only "semantically close", never "logically entailed", and that's the whole story.
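To make "semantically close" concrete, here is a minimal sketch of embedding-based similarity, the kind of mechanism behind that style of retrieval. The vectors below are made-up toy values, not output from any real model; the point is only that "closeness" here is an angle between vectors, which tracks topical relatedness rather than logical entailment:

```python
import numpy as np

# Hypothetical 4-dimensional "embeddings" -- toy values for illustration only.
# Real models produce learned vectors with hundreds or thousands of dimensions.
embeddings = {
    "The cat sat on the mat":     np.array([0.9, 0.1, 0.0, 0.2]),
    "A feline rested on the rug": np.array([0.8, 0.2, 0.1, 0.3]),
    "Interest rates rose today":  np.array([0.1, 0.9, 0.7, 0.0]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embeddings["The cat sat on the mat"]
for sentence, vec in embeddings.items():
    print(f"{cosine_similarity(query, vec):.3f}  {sentence}")
```

The paraphrase scores high and the unrelated sentence scores low, but nothing in this computation checks whether one sentence logically follows from another.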
Is it going to get any less useless than it is? Only slightly. "It" will never get meaningfully better. The only better version of "it" would be a complete ground-up redesign that doesn't resemble "it" at all.
Modern neural network architectures are Turing complete [1]. So I don't see any argument for a limit in principle unless you are arguing that a Turing machine can't achieve language understanding. If that's what you're saying, then I wonder who is espousing mysticism here.
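This is not a proof of the Turing-completeness result in [1], but as a toy illustration that a network can compute exact Boolean logic rather than only fuzzy similarity, here is a two-layer network with hand-chosen weights that implements XOR, a function no single linear unit can represent:

```python
import numpy as np

def step(x):
    """Heaviside step activation: outputs 1 where the input exceeds zero."""
    return (x > 0).astype(float)

# Hand-set weights for a two-layer network computing XOR:
#   hidden unit 0 acts as OR(x1, x2), hidden unit 1 as AND(x1, x2),
#   and the output fires when OR is true but AND is not -- i.e. XOR.
W1 = np.array([[1.0, 1.0],    # OR detector
               [1.0, 1.0]])   # AND detector
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -1.0])
b2 = -0.5

def xor_net(x1: float, x2: float) -> int:
    hidden = step(W1 @ np.array([x1, x2]) + b1)
    return int(step(W2 @ hidden + b2))

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor_net(a, b)}")
```

Nothing about the substrate restricts it to "semantic closeness"; with the right weights it computes exact logical functions, and per [1] the general architectures can simulate arbitrary Turing machines.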