
> But there is no "reasoning" occurring here - just syntactic templating and statistical auto-complete.

This is the "stochastic parrot" hypothesis, which people feel obligated to bring up every single time there's an LLM paper on HN.

This hypothesis isn't just philosophical: it makes falsifiable predictions, and experiments have thoroughly falsified them. LLMs do build internal world models. See OthelloGPT for the most famous paper on the subject; see "Transformers Represent Belief State Geometry in their Residual Stream" for a more recent one.
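
The core experiment in both papers is linear probing: train a simple linear classifier to decode latent state (e.g. the Othello board) from the model's residual-stream activations. If a linear map recovers the state, the network is representing it internally rather than just pattern-matching surface tokens. Here's a rough sketch of that methodology in Python. The activations here are synthetic (generated from a known board encoding) purely to show the mechanics; in the actual papers they come from a transformer trained on game move sequences, and all the names and sizes below are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_squares, n_samples = 128, 64, 5000

    # Stand-in for residual-stream activations. In OthelloGPT these are
    # read off a middle layer after feeding the model a game prefix.
    board_encoder = rng.normal(size=(n_squares, d_model))
    boards = rng.integers(0, 2, size=(n_samples, n_squares))  # per-square occupancy
    acts = boards @ board_encoder + 0.1 * rng.normal(size=(n_samples, d_model))

    # Linear probe: least-squares map from activations back to board state.
    W, *_ = np.linalg.lstsq(acts, boards, rcond=None)
    preds = (acts @ W) > 0.5
    print(f"probe accuracy: {(preds == boards).mean():.3f}")

High probe accuracy on held-out games (and, in the original paper, the ability to causally intervene on the decoded state) is what's taken as evidence of a world model.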
