> You rhetorically declared hallucinations to be part of the normal functioning (i.e., the word "Normal" is already a value judgement).
No they aren't: When you flip a coin, its landing heads or tails is "normal". That's not a value judgement; it's just a way to characterize what is common in the mechanics.
If it landed perfectly on its edge or was snatched out of the air by a hawk, that would not be "normal", but--to introduce a value judgement--it'd be pretty dang cool.
You just replaced 'normal' with 'common' to do the heavy lifting; the value judgment remains in the threshold you pick.
Whereas OP said that "hallucinations are part of the normal functioning" of the LLM, I contend their definition of hallucination is too weak and reductive, that scientifically we have not actually settled that hallucinations are a given for LLMs, and that humans are an example showing LLMs are currently inferior - or else how would you make sense of Terence Tao's assessment of GPT-o1? This is not a simplistic "garbage in, garbage out, therefore they will always hallucinate" argument. OP doesn't even show they read or understood the paper, which is about Turing-machine arguments; rather, OP is using simplistic semantic and statistical arguments to support their position.