> You rhetorically declared hallucinations to be part of the normal functioning (i.e., the word "Normal" is already a value judgement).

No, it isn't: when you flip a coin, it landing heads or tails is "normal". That's not a value judgement; it's just a way to characterize what is common in the mechanics.

If it landed perfectly on its edge or was snatched out of the air by a hawk, that would not be "normal", but--to introduce a value judgement--it'd be pretty dang cool.




You just replaced 'normal' with 'common' to do the heavy lifting; the value judgment remains in the threshold you pick.

Whereas OP said that "hallucinations are part of the normal functioning" of the LLM. I contend that their definition of hallucination is too weak and reductive; that, scientifically, we have not actually settled that hallucinations are a given for LLMs; and that humans are an example showing current LLMs are inferior (or else how would you make sense of Terence Tao's assessment of GPT-o1?). This is not the simplistic argument that LLMs are garbage in, garbage out and therefore will always hallucinate. OP doesn't even show they read or understood the paper, which rests on Turing machine arguments; instead, OP uses simplistic semantic and statistical arguments to support their position.


I didn't say that. I said "hallucination" is a value judgment we assign to a piece of text produced by an LLM, not a type of malfunction in the model.

If we're going to nitpick word choice, let's pick on the words I actually used.



