
LLMs can reason to a degree, but they differ in that once an output begins to be generated, it must continue along the same base line of logic. Correct me if I'm wrong, but I don't believe a model can stop mid-generation, decide something is wrong with its output, and start over or back up to a state before it emitted the incorrect part. Once its output starts to hallucinate, it has no choice but to continue down the same path, since each next token is conditioned on the tokens it has just produced.
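To make the point concrete, here is a minimal sketch of a greedy autoregressive decoding loop. The next_token_logits function is hypothetical, standing in for a real model's forward pass; the shape of the loop is what matters: tokens are only ever appended, never revised.

    # Minimal sketch of greedy autoregressive decoding.
    # next_token_logits is a hypothetical stand-in for a model's forward pass:
    # it takes the full token sequence so far and returns a score per vocab token.
    def generate(prompt_tokens, max_new_tokens, next_token_logits):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            logits = next_token_logits(tokens)          # conditioned on everything emitted so far
            next_tok = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
            tokens.append(next_tok)                     # appended; earlier tokens are never removed
        return tokens

There is no step in this loop that deletes or rewrites an earlier token, which is the property the comment above is describing.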


