
I think you're confusing prediction with ratiocination.

I'm sure you've deduced hypotheses based solely on the assertion that "contradiction and being are incompatible". Note that there was no prediction involved in that process.

I consider prediction a subset of reason, but not the other way around. So I beg to differ with the whole assumption that "intelligence is prediction". Intelligence is more than that; prediction is only a subset of it.

This is perhaps the biggest reason for the high computational cost of LLMs: they aren't taking the shortcuts necessary to achieve true intelligence, whatever that is.




> I think you're confusing prediction with ratiocination.

No, exactly not! Prediction is probabilistic and liable to be wrong, with those probabilities needing updating/refining.

Note that I'm primarily talking about prediction as the brain does it, not about LLMs, although LLMs have proved the power of prediction as a (the?) learning mechanism for language. Note though that the words predicted by LLMs are also just probabilities: the model outputs a probability distribution over possible next words, which is then sampled from (per a selected sampling "temperature", i.e. degree of randomness) to pick which word to actually output.
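
For concreteness, here's a minimal sketch of that sampling step, assuming NumPy. The vocabulary and the logits are made up for illustration; in a real LLM the scores come from the model itself:

    import numpy as np

    def sample_next_token(logits, temperature=0.8):
        # Lower temperature sharpens the distribution (more
        # deterministic); higher temperature flattens it (more random).
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        # Softmax (shifted for numerical stability) converts raw
        # scores into a probability distribution over the vocabulary.
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Pick one token index according to those probabilities.
        return np.random.choice(len(probs), p=probs)

    # Hypothetical vocabulary and next-word scores.
    vocab = ["cat", "dog", "tiger", "rock"]
    logits = [2.0, 1.5, 0.5, -1.0]
    print(vocab[sample_next_token(logits)])

At temperature near 0 this almost always picks "cat"; at high temperature it picks nearly uniformly, which is the randomness knob the parent comment refers to.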

The way the brain learns, from a starting point of knowing nothing, is to observe and predict that the same will happen next time, which it often will, once you've learnt which observations are appropriate to include or exclude from that prediction. This is all highly probabilistic, which is appropriate given that the thing being predicted (what'll happen if I throw a rock at that tiger?) is often semi-random in nature.
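
As a concrete (if simplistic) rendering of that observe-and-refine loop, here's a Beta-Bernoulli update. The hidden "true_rate" and the choice of model are my own illustration, not a claim about what the brain literally computes:

    import random

    # Start from a uniform Beta(1, 1) prior, i.e. knowing nothing.
    successes, failures = 1, 1
    true_rate = 0.7  # the hidden regularity in the world (made up)

    for _ in range(100):
        outcome = random.random() < true_rate  # observe what happens
        if outcome:
            successes += 1
        else:
            failures += 1
        # Refined prediction that the same will happen next time.
        estimate = successes / (successes + failures)

    print(f"learned estimate after 100 observations: {estimate:.2f}")

The prediction starts out maximally uncertain (0.5) and converges toward the world's actual rate purely by observing outcomes and updating.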

We can better rephrase "intelligence is ability to predict well" as "intelligence derives from ability to predict well". It does of course also depend on experience.

One reason why LLMs are so expensive to train is that they learn in an extremely brute-force fashion from the highly redundant and repetitive output of others. Humans don't do that: if we're trying to learn something, or are curious about it, we'll run focused experiments such as "Let's see what happens if I do this, since I don't already know", or "If I'm understanding this right, then if I do X then Y should happen".
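
To make that contrast concrete, here's a toy sketch (entirely my illustration, with a made-up hidden rule): a learner pinning down a threshold from redundant random data versus from focused, maximally informative experiments:

    import random

    def world(x, threshold=0.63):
        # Hidden rule the learner is trying to pin down.
        return x >= threshold

    # Passive regime: 1000 random, redundant observations.
    samples = sorted(random.random() for _ in range(1000))
    passive_estimate = next((x for x in samples if world(x)), 1.0)

    # Focused regime: each query is chosen to split the remaining
    # uncertainty ("let's see what happens if I do this").
    lo, hi = 0.0, 1.0
    focused_queries = 0
    while hi - lo > 0.01:
        mid = (lo + hi) / 2
        if world(mid):
            hi = mid
        else:
            lo = mid
        focused_queries += 1

    print(f"passive: 1000 queries, estimate {passive_estimate:.2f}")
    print(f"focused: {focused_queries} queries, estimate {(lo + hi) / 2:.2f}")

The focused learner reaches the same precision in about 7 queries instead of 1000, which is the kind of shortcut the brute-force training regime forgoes.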



