LLMs need a lot of GPU power to learn. I'm not sure it's correct to say they don't learn at all; rather, on currently available (or economical) hardware they can't learn in real time beyond a fairly small context window. But if you had GPUs with terabytes of VRAM and fed experience into the model as training data, it would learn. Whether that's enough for true AGI is still an open question, but I think the inability to learn in real time is clearly a hardware limitation.
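To make the "terabytes of VRAM" point concrete, here's a rough back-of-the-envelope sketch (my own illustrative numbers, not anything from the comment above) of why *updating* a model's weights costs so much more memory than merely running it. The common rule of thumb I'm assuming is ~16 bytes per parameter for mixed-precision training with Adam versus ~2 bytes per parameter for fp16 inference; activation memory is ignored, so these are lower bounds.

```python
# Rough lower-bound VRAM estimates for training vs. inference.
# Assumptions (mine, illustrative): mixed-precision Adam training keeps
# fp16 weights + grads plus fp32 master weights and two optimizer moments
# (~16 bytes/param); inference keeps only fp16 weights (~2 bytes/param).

def train_memory_tb(n_params: float, bytes_per_param: int = 16) -> float:
    """Approximate VRAM in terabytes to hold training state."""
    return n_params * bytes_per_param / 1e12

def infer_memory_tb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate VRAM in terabytes to hold inference weights."""
    return n_params * bytes_per_param / 1e12

for n in (7e9, 70e9, 400e9):
    print(f"{n / 1e9:.0f}B params: "
          f"inference ~{infer_memory_tb(n):.2f} TB, "
          f"training ~{train_memory_tb(n):.2f} TB")
```

By this estimate a 70B-parameter model needs on the order of 1 TB of VRAM just to hold its training state, which is why continuous real-time weight updates are out of reach on today's consumer or even single-node hardware.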