Once you know LLMs make mistakes and know to look for them, half the battle is won. Humans make mistakes too, which is why we make the effort to validate our thinking and actions.
As I use it more, I find the mistakes are mostly born of ambiguity. The more information I supply to the LLM, the better its answers get, and I'm finding more and more ways to supply it with robust, extensive context.
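A minimal sketch of what I mean, assuming the OpenAI Python SDK and a placeholder model name: the same question asked twice, once bare and once with the surrounding context spelled out. The schema details and system prompt are made-up examples, not anything specific.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Why is this query slow?"

# Ambiguous prompt: the model has to guess the database, schema, and workload.
vague = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)

# Same question with robust, extensive context supplied up front (hypothetical details).
context = """
Database: PostgreSQL 16, table `orders` with ~50M rows.
Query: SELECT * FROM orders WHERE customer_id = $1 ORDER BY created_at DESC LIMIT 20;
Existing indexes: btree on (customer_id).
EXPLAIN ANALYZE shows a sort node taking 1.8s.
"""
detailed = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a database performance assistant."},
        {"role": "user", "content": context + "\n" + question},
    ],
)

print(vague.choices[0].message.content)
print(detailed.choices[0].message.content)
```

The second call almost always gets a sharper answer, simply because there is less left for the model to guess at.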