This is lowkey cope. AI should be like talking to another human, at least that's the promise. Instead we're getting glorified autocomplete with padded language so it sounds human.
In their current form, LLMs are pretty much at their limit, barring optimization and chaining them together for more throughput once we have better hardware. Even then, they'll mostly be useful for repetitive low-level tasks and mediocre art. We need breakthroughs beyond transformers to get something that creates the way humans do instead of running statistical inference.
I don't know that for sure; I'm mostly speculating based on how mixture-of-experts models are outperforming plain decoder-only architectures. That suggests we're already composing transformers to squeeze the most out of them, and it still seems to fall short. They've already been trained on incredible amounts of data, they need to be composed into multiple instances on even better hardware, and it looks like diminishing returns. The question is whether the little optimization headroom that's left is enough to make them truly agentic, building full apps on their own, or whether they'll still need expert supervision for anything useful.
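For what it's worth, the "composing transformers" idea in mixture-of-experts boils down to a gating step: a router scores a set of expert sub-networks, runs only the top-k, and blends their outputs. Here's a minimal toy sketch of that routing logic in NumPy; the linear "experts", dimensions, and variable names are all made up for illustration and bear no resemblance to a real per-token MoE layer inside a transformer.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Toy mixture-of-experts step: score experts with a gate,
    run only the top-k, and average their outputs with
    softmax-normalized gate weights."""
    logits = x @ gate_w                    # one gate score per expert
    top = np.argsort(logits)[-k:]          # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over just the chosen k
    # only the selected experts do any work -- that's the sparsity win
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 4, 3
gate_w = rng.normal(size=(d, n_experts))
# each "expert" here is just a random linear map, purely for illustration
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, w=w: x @ w for w in expert_ws]

out = moe_forward(rng.normal(size=d), gate_w, experts, k=2)
```

The point of the sketch: capacity grows with the number of experts, but per-input compute only grows with k, which is why this is a way to stretch the architecture rather than replace it.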