This is exactly the thing LLMs seem to be worst at, even as they improve.
LLMs are confidently wrong rather than inquisitive or invested in being right. In theory they have infinite patience; in practice they have none, accepting whatever input they're given at face value.
I don't doubt these things can be improved, but I also don't doubt that some people will be better at interacting with LLMs than others, and that getting good at using an LLM will take onboarding time that organizations would prefer to centralize.