
> The biggest evidence that LLMs can’t reason is hallucinations.

If I asked you a question and you had to respond with a stream-of-consciousness reply, with no time to reflect on the question or think through your answer, how inaccurate would your response be? The "hallucinations" aren't a problem with LLMs per se, but with how we use them. Papers have shown that feeding the output back into the input, as happens when humans iterate on their own initial thoughts, helps tremendously with accuracy.
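
To make that concrete, here's a minimal sketch of the "feed the output back in" loop. It isn't any particular paper's method, and call_llm is a hypothetical placeholder for whatever model client you'd actually use:

    def call_llm(prompt: str) -> str:
        # Placeholder: swap in a real model/client call here.
        raise NotImplementedError

    def answer_with_reflection(question: str, rounds: int = 2) -> str:
        # First pass: the "stream of consciousness" draft.
        draft = call_llm(f"Answer the question: {question}")
        for _ in range(rounds):
            # Feed the previous output back in and ask the model to
            # check its own draft and revise it.
            draft = call_llm(
                "Question: " + question + "\n"
                "Draft answer: " + draft + "\n"
                "Check the draft for errors and rewrite it to be more accurate."
            )
        return draft

The point of the sketch is just that the accuracy claim is about the loop, not about a single forward pass through the model.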



