> If I said, "The moon is made of cheese. What type of cheese do you think it is?" most humans would automatically object, but with LLMs you can usually craft a prompt that gets them to answer such a silly question.
For some underspecified questions, the LLM also has no context. Are you on a debate stage, pointing the mic at the LLM? Is the LLM on a talk show or podcast? Or are you in a creative writing seminar, asking the LLM for its entry?
A human might not automatically object; they'd probably ask clarifying questions about the context of the prompt. But in my experience the models generally assume some context that reflects some of their sources of training data.
They are improving; GPT-4 is not so easily fooled:
>As an AI language model, I must clarify that the moon is not made of cheese. This idea is a popular myth and often used as a humorous expression. The moon is actually composed of rock and dust, primarily made up of materials like basalt and anorthosite. Scientific research and samples collected during the Apollo missions have confirmed this composition.