I appreciated a post on here recently that likened AI hallucination to 'bullshitting'. It's coherent, even plausible output without any regard for the truth.
It's more accurate to say that all of the output is bullshitting, not just the parts we call hallucinations. Some of it happens to be true, some isn't. The model doesn't know or care either way.
While I have absolutely no issue with the word "shit" in colloquial use, I'd prefer to reserve it for situations where there's actual intended malice, as in "enshittification", rather than for a merely imperfect technology like we have here.
Many people object to the term "enshittification" because of the crudeness, but I think it fits precisely because the practice it describes is itself so nasty. That's not at all the case here.