
Skimming through the paper, it seems they're noting this issue but kinda skipping over it:

In fact, it is clear that approximation capabilities and generalization are not equivalent notions. However, it is not yet determined that the reasoning capabilities of LLMs are tied to their generalization. While these notions are still hard to pinpoint, we will focus in this experimental section on the relationship between intrinsic dimension, thus expressive power, and reasoning capabilities.

Right, they never claimed to have found a roadmap to AGI; they just found a cool geometric tool for describing how LLMs reason through approximation. Sounds like a handy tool if you want to discover things about approximation or generalization.
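
For anyone curious what that kind of measurement looks like in practice, here's a minimal sketch of one standard intrinsic-dimension estimator (TwoNN, Facco et al. 2017) that you could point at a layer's hidden states. To be clear, this is not the paper's exact pipeline, and the function name is just something I made up:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def two_nn_intrinsic_dimension(X):
        # X: (n_samples, n_features), e.g. hidden states from one LLM layer.
        # Distances to the two nearest neighbors (column 0 is the point itself).
        dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
        r1, r2 = dists[:, 1], dists[:, 2]
        # Drop points whose nearest neighbor coincides with them (r1 == 0).
        mu = r2[r1 > 0] / r1[r1 > 0]
        # MLE under the TwoNN model: mu is Pareto-distributed with shape d.
        return len(mu) / np.sum(np.log(mu))

Whether a number like that actually tracks reasoning capability is exactly the relationship the quoted passage says they're probing.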
