
I may remember more than 4096 tokens, but I probably only pay attention to 7 of them at any given moment: https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus...



The attention heads of an LLM are more analogous to the sort of processing that goes on in, e.g., Broca's area of your brain than to working memory. You can't have anything analogous to working memory as long as LLMs operate on a strictly feed-forward basis [1]. And the fact that LLMs can talk so fluently without anything like a human working memory (yet) is a bit terrifying.

[1] Technically, LLMs do have a "forget that last token and go back so I can try again" operation, so this is only 99% true.
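
Roughly what I mean, as a minimal sketch. The model object and its next_token_probs/rejects methods are hypothetical stand-ins, not any real library's API:

    import random

    def sample_with_backtrack(model, prompt_tokens, max_new=50):
        # Strictly feed-forward: each step only sees the tokens emitted so far.
        tokens = list(prompt_tokens)
        for _ in range(max_new):
            # Hypothetical API: returns {token: probability} for the next position.
            probs = model.next_token_probs(tokens)
            tok = random.choices(list(probs), weights=list(probs.values()))[0]
            # The "forget that last token and try again" operation: if some
            # heuristic rejects the draft token, drop it and resample once.
            if model.rejects(tokens, tok):  # hypothetical heuristic
                probs.pop(tok, None)
                if probs:
                    tok = random.choices(list(probs), weights=list(probs.values()))[0]
            tokens.append(tok)
        return tokens

There's still no persistent scratchpad here; the only "memory" is the growing token sequence itself, plus that single one-step undo.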



