
Hallucinating is roughly how they work; we just label it as such when the output is obviously weird



This is something I'm not sure people understand.

LLMs only make a "best guess" for each next token. That's it. When it's wrong we call it a "hallucination", but really the entire output was a "hallucination" to begin with.
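To make that concrete, here is a minimal sketch of greedy next-token decoding. The model, token ids, and EOS_ID are made-up stand-ins, not any real library's API; the point is that the same "pick the most likely next token" loop runs whether the output ends up right or a "hallucination".

    import numpy as np

    EOS_ID = 0  # hypothetical end-of-sequence token id

    def model(tokens):
        # Stand-in for a real LLM forward pass: returns fake next-token
        # probabilities so the loop below is runnable.
        rng = np.random.default_rng(len(tokens))
        logits = rng.normal(size=50_000)
        probs = np.exp(logits - logits.max())
        return probs / probs.sum()

    def generate(prompt_tokens, max_new_tokens=20):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            probs = model(tokens)
            next_token = int(np.argmax(probs))  # the "best guess": highest-probability token
            if next_token == EOS_ID:
                break
            tokens.append(next_token)
        return tokens

    print(generate([101, 2023, 2003]))  # made-up prompt token ids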

This is also analogous to humans, who likewise "hallucinate" incorrect answers, and who tend to "hallucinate" less when asked to "Think through this step by step before giving your answer", and so on.
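As a rough illustration of that prompting trick (the question and exact wording are invented for the example; this is just prompt-string construction, no real API call):

    question = "A train leaves at 3pm doing 60 mph; when has it covered 150 miles?"

    # Plain prompt: the model has to commit to an answer in its very next tokens.
    direct_prompt = question

    # Chain-of-thought prompt: the added instruction makes the model spend tokens
    # on intermediate steps first, which tends to cut down on confidently wrong
    # "best guesses".
    cot_prompt = question + "\nThink through this step by step before giving your answer."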



