
"Hallucination" is fundamentally a confusing and useless term that makes this discussion a mess.

Firstly - humans hallucinate. We can be impaired to the point that we incorrectly perceive base reality.

Secondly - LLMs are always 'hallucinating'. Objective reality for an LLM is the relations between tokens. It gets the syntax absolutely right; our issue is that the semantics can be wrong.

Getting the semantics right is simply not what the model is specced to do. Fortunately, it is trained on many conversations that flow logically into one another.

It is NOT trained to actually apply logic. If you trained an LLM on absolutely illogical text, it too would create illogical tokens - with mathematical precision.
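To make that last point concrete, here is a minimal sketch (plain Python, a toy bigram counter rather than a real LLM; all names and the two tiny corpora are my own illustration). A next-token model just reproduces the co-occurrence statistics of whatever corpus it was trained on - feed it nonsense and it emits statistically faithful nonsense, with the same fluent syntax.

    import random
    from collections import defaultdict

    # Toy "language model": counts which token follows which.
    # It has no notion of truth or logic, only co-occurrence statistics.
    def train(corpus):
        counts = defaultdict(lambda: defaultdict(int))
        for sentence in corpus:
            tokens = sentence.split()
            for prev, nxt in zip(tokens, tokens[1:]):
                counts[prev][nxt] += 1
        return counts

    def generate(counts, start, length=8):
        out = [start]
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break
            tokens, weights = zip(*followers.items())
            out.append(random.choices(tokens, weights=weights)[0])
        return " ".join(out)

    logical = ["water boils at 100 degrees", "ice melts into water"]
    illogical = ["water boils into ice", "ice melts at 100 degrees"]

    print(generate(train(logical), "water"))    # fluent and (mostly) true
    print(generate(train(illogical), "water"))  # just as fluent, untrue

The model trained on the illogical corpus is every bit as confident and grammatical as the one trained on the logical corpus; nothing in the training objective distinguishes them.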
