Hacker News

“Doing next-token prediction” isn’t a contradiction of “understanding” any more than it would be for a SO/friend who can complete your sentences.

But it’s useful to remember that autocompletion of sequences is at the bottom of LLMs, plus a hefty dose of RLHF toward whatever the raters thought was good output.
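
As a rough sketch of what "autocompletion of sequences" means mechanically, here is a minimal greedy next-token loop in Python with Hugging Face transformers (the gpt2 checkpoint, the prompt, and the 10-token budget are illustrative choices, not anything from this thread):

    # Each step the model only scores the next token given the tokens so far;
    # the "completion" is just that step repeated.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(10):                    # generate 10 tokens, one at a time
            logits = model(ids).logits         # scores over the whole vocabulary
            next_id = logits[0, -1].argmax()   # greedy: take the most likely token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tok.decode(ids[0]))

An RLHF-tuned chat model runs the same loop at inference time; the fine-tuning only changes which next tokens get the highest scores.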




I agree, it is not a contradiction. But those who bring up next-token prediction usually imply that it is.

Also, there is often some confusion between consciousness and understanding. LLMs clearly understand on some level, within certain intrinsic and extrinsic constraints, but of course there is no consciousness there.



