I wonder if people fall into a "happy valley of ignorance": users who don't actually see how an LLM can be wrong, rarely run into hallucinations themselves, and rarely have their use of LLM output cause a problem big enough to notice. Whereas we technical people, who know it's a bunch of matrix operations, are so skeptical that we barely put this amazing technology to use at all.