
LLMs are trained to predict the next word in a sequence. As a result of this training, they have developed reasoning abilities. Currently these abilities are roughly at human level, but next-gen models (GPT-5) should be superior to humans at any reasoning task.
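(For context, the next-word objective mentioned above amounts to a cross-entropy loss over shifted token positions. A minimal PyTorch sketch follows; the toy model, vocabulary size, and dimensions are illustrative assumptions, not any real LLM's setup.)

    # Minimal sketch of next-token prediction training, assuming a toy
    # PyTorch model; real LLMs use transformers at vastly larger scale.
    import torch
    import torch.nn as nn

    vocab_size, embed_dim = 100, 32

    # Toy "language model": embedding then a linear projection back to
    # vocabulary logits (a transformer would sit in between).
    model = nn.Sequential(
        nn.Embedding(vocab_size, embed_dim),
        nn.Linear(embed_dim, vocab_size),
    )

    tokens = torch.randint(0, vocab_size, (1, 16))  # fake token sequence
    logits = model(tokens)                          # (1, 16, vocab_size)

    # Shift by one position: each position is scored on predicting the
    # *next* token in the sequence.
    loss = nn.functional.cross_entropy(
        logits[:, :-1].reshape(-1, vocab_size),  # predictions at 0..14
        tokens[:, 1:].reshape(-1),               # targets are tokens 1..15
    )
    loss.backward()  # gradients push the model toward better next-token guesses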



How did you reach these conclusions, and have you validated them by asking these superior artificial agents whether you're correct?



