
Precisely. "Hallucination" was a poor choice of term. A better one is "confabulation," which means telling an untruth without the intent to deceive. Sadly, we can't get an entire industry to rename the LLM behavior we call hallucination, so I think we're stuck with it.



You are implicitly anthropomorphizing LLMs by implying that they (can) have intent in the first place. They have no intent, so they can't lie, make a claim, or confabulate. They are just a very complex black box that takes input and spits out output. Searle's Chinese Room thought experiment applies here.



