> One of the reasons I no longer suspect hallucination is that the training cut-off date for OpenAI's LLMs - September 2021 - predates the point when this kind of prompt engineering became common enough that there would have been prompts like this in their training sets.
But wouldn't instruction tuning have trained it to hallucinate these sorts of prompts?
I mean, if they truly didn't exist in the training data, how would the model know how to handle them?