Hacker News

Good idea, but my question would be whether cognitive context frames or relational frames are more computationally analogous to function calls or counting numbers. We know LLMs are still bad at counting, for example. (Hope the gist of my question makes sense.)
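One commonly cited reason counting trips up LLMs is subword tokenization: the model sees token IDs, not characters, so a question like "how many r's in strawberry?" requires reasoning across token boundaries. Here's a toy sketch of the idea — the greedy tokenizer and its vocabulary below are made up for illustration, not any real model's merges:

```python
def toy_tokenize(word):
    # Hypothetical subword vocabulary (an assumption for illustration,
    # not a real BPE merge table). Greedily match longest-listed pieces.
    vocab = ["straw", "berry", "ber", "ry", "st", "raw"]
    tokens = []
    i = 0
    while i < len(word):
        for piece in vocab:
            if word.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            # Fall back to a single character if no piece matches.
            tokens.append(word[i])
            i += 1
    return tokens

tokens = toy_tokenize("strawberry")
print(tokens)  # the model "sees" ['straw', 'berry'], not letters

# A character-level count has to be reassembled across token boundaries:
print(sum(t.count("r") for t in tokens))  # 3
```

So even in this toy setup, "count the r's" isn't directly readable off the representation the model consumes — which may be part of why counting feels unnatural for these systems, independent of training data.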



I think it's a fascinating case study that LLMs are bad with numbers. It could have something to do with how these models are trained. Humans do a lot of rote memorization and drilling in early life to cement numbers and basic math, but an LLM doesn't have this advantage — it's like a person who has just read a lot. If a human were never taught math but read thousands of books, I wonder if their number skills would be in any way analogous to an LLM's.

Then again, I think there could be an argument that the human would pick up and internalize counting without any instruction, just based on the embodied fact of having (on average) ten fingers. I'm guessing there's a cog-sci or anthropological study that looks into this…



