Maybe they're concerned about the current limits of LLMs, like hallucinations, and don't want to ship a product that returns false information? Also, "AI" is too general a term; we should be talking about specific areas where they could apply specific technologies like NLP, machine learning, computer vision, etc.