> But there is no "reasoning" occurring here - just syntactic templating and statistical auto-complete.
This is the "stochastic parrot" hypothesis, which people feel obligated to bring up every single time there's an LLM paper on HN.
This hypothesis isn't just philosophical; it yields falsifiable predictions, and experiments have thoroughly falsified them: LLMs do build internal world models. See OthelloGPT for the most famous paper on the subject, and "Transformers Represent Belief State Geometry in their Residual Stream" for a more recent one.
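To make the claim concrete: the OthelloGPT line of work checks whether a simple probe trained on a model's hidden activations can decode the game's board state. Below is a minimal, hedged sketch of that probing idea in PyTorch, not the papers' actual code; the activation dimensions, the placeholder tensors `hidden_states` and `board_labels`, and the choice of a purely linear probe are all illustrative assumptions.

```python
# Sketch of a linear-probe experiment in the spirit of OthelloGPT.
# If a probe can read the board state out of the activations, the model
# plausibly encodes a world model rather than only surface statistics.
import torch
import torch.nn as nn

n_samples, d_model = 10_000, 512   # hypothetical number of positions / activation width
n_squares, n_states = 64, 3        # Othello board: 64 squares, each empty/black/white

# Placeholders standing in for activations captured from a real model and
# the ground-truth board states of the corresponding games.
hidden_states = torch.randn(n_samples, d_model)
board_labels = torch.randint(0, n_states, (n_samples, n_squares))

probe = nn.Linear(d_model, n_squares * n_states)  # one linear map, no hidden layers
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    logits = probe(hidden_states).view(n_samples, n_squares, n_states)
    loss = loss_fn(logits.reshape(-1, n_states), board_labels.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# With real activations you would evaluate on held-out games; high probe
# accuracy there is the evidence that board state is readable from the
# residual stream. On the random placeholders above it stays near chance.
with torch.no_grad():
    preds = probe(hidden_states).view(n_samples, n_squares, n_states).argmax(-1)
    print("probe accuracy:", (preds == board_labels).float().mean().item())
```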