I don't think users understand how LLMs are different from search engines, especially for information retrieval. Someone I know well has been using ChatGPT and Bard for months, and was surprised that they don't just use the Bing/Google search engine indexes behind the scenes. The idea that LLMs are a bunch of frozen matrices is not obvious.
It's hard to communicate that it's better to rely on LLMs for certain classes of "reasoning" or language tasks than for simple information retrieval, particularly given that retrieval does work well much of the time.
One thing that has surprised me more than it probably should have is that there appears to be a double-hump distribution between people who 'get' how an LLM works (at least well enough to understand its strengths and weaknesses and get useful work done with one) and those who don't, and the second group is very, very hard to move out of that category.
I have a couple of clients who have wholeheartedly embraced ChatGPT, but are (repeatedly) shocked when it isn't a 100% accurate answer machine. Explanations of why that is a very dangerous way to approach these tools fall on deaf ears.
I wonder if people fall into a "happy valley of ignorance": users don't actually see how an LLM can be wrong, rarely run into obvious hallucinations, and the flaws in the LLM output they do use are rarely a big enough problem to notice. Meanwhile, we technical people who know it's a bunch of matrix operations are so skeptical that we hardly put this amazing technology to use at all.