
Humans also need to break up the problem and think step-by-step to solve problems like 234878 * 452.



The difference is what I attempt to describe at the end there.

Humans apply fixed, strict rules for breaking up problems like multiplication (see the sketch below).

LLMs simply guess. That's a powerful trick that buys some extra capability on simple problems, but it doesn't scale to more complex ones.

(Which in turn is a problem, because most real-world tasks are more complex than they seem, and simple problems are easily automated through conventional means.)
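As a minimal sketch of what those fixed rules look like for the 234878 * 452 example above (not anyone's actual claim about how LLMs work, just the schoolbook procedure): decompose the second factor digit by digit, form single-digit partial products, shift each by its place value, and add. The function name long_multiply is only illustrative.

    def long_multiply(a: int, b: int) -> int:
        """Schoolbook long multiplication for non-negative ints: fixed rules, no guessing."""
        total = 0
        for place, digit_char in enumerate(reversed(str(b))):
            digit = int(digit_char)          # one digit of b, least significant first
            partial = a * digit              # single-digit partial product
            total += partial * 10 ** place   # shift by place value and accumulate
        return total

    # 234878 * 400 + 234878 * 50 + 234878 * 2 = 106,164,856
    assert long_multiply(234878, 452) == 106164856

Every step is mechanical; nothing about the size of the inputs changes the procedure, which is exactly the scaling property guessing lacks.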


We either learn the fixed rules in school, at which point we simply have a very strong prior, or we have to invent them somehow. That usually takes the form of aesthetically/intuitively guided trial-and-error argument generation, which is not unfairly summarized as "guessing".



