I’d love to see a future generation of models that don’t hallucinate on key facts that have been peer- and expert-reviewed.
Like the Wikipedia of LLMs
https://arxiv.org/pdf/2406.17642
That’s a paper we wrote digging into why LLMs hallucinate and how to fix it. It turns out to be a technical problem with how LLMs are trained.