We know that LLMs hallucinate, but we can also remove the hallucinations.

I’d love to see a future generation of models that don’t hallucinate on key facts that have been peer- and expert-reviewed.

Like the Wikipedia of LLMs

https://arxiv.org/pdf/2406.17642

That’s a paper we wrote digging into why LLMs hallucinate and how to fix it. Hallucination turns out to be a technical problem with how the LLM is trained.




Interesting! Is there a way to fine-tune the trained experts, say, by adding new ones? Would be super cool!
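
The thread doesn’t spell out the paper’s architecture, but the mention of “trained experts” above suggests a bank of small expert modules attached to a frozen base model, in which case adding a new one would amount to appending a fresh adapter and fine-tuning only that. Below is a minimal, hypothetical PyTorch sketch of that idea; the class names, the routing, and the add_expert helper are illustrative assumptions, not the paper’s implementation.

    # Hypothetical sketch: a bank of small "memory expert" adapters on top of a
    # frozen base projection, with a router that mixes their outputs per token.
    # Names, shapes, and routing are illustrative assumptions, not the paper's code.
    import torch
    import torch.nn as nn


    class MemoryExpert(nn.Module):
        """A low-rank adapter intended to memorize a narrow slice of facts."""

        def __init__(self, d_model: int, rank: int = 8):
            super().__init__()
            self.down = nn.Linear(d_model, rank, bias=False)
            self.up = nn.Linear(rank, d_model, bias=False)
            nn.init.zeros_(self.up.weight)  # start as a no-op so the base output is unchanged

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.up(self.down(x))


    class MemoryExpertBank(nn.Module):
        """Frozen base layer plus a growable list of experts and a simple router."""

        def __init__(self, d_model: int, n_experts: int = 4):
            super().__init__()
            self.base = nn.Linear(d_model, d_model)  # stand-in for a pretrained layer
            self.base.requires_grad_(False)          # keep the base weights fixed
            self.experts = nn.ModuleList([MemoryExpert(d_model) for _ in range(n_experts)])
            self.router = nn.Linear(d_model, n_experts, bias=False)

        def add_expert(self) -> int:
            """Append a fresh, trainable expert; freeze everything already trained."""
            for p in self.parameters():
                p.requires_grad_(False)
            self.experts.append(MemoryExpert(self.base.in_features))
            # Grow the router by one output row so the new expert can be selected.
            old = self.router
            self.router = nn.Linear(old.in_features, len(self.experts), bias=False)
            with torch.no_grad():
                self.router.weight[: old.out_features].copy_(old.weight)
            return len(self.experts) - 1

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            weights = torch.softmax(self.router(x), dim=-1)                  # (..., n_experts)
            expert_out = torch.stack([e(x) for e in self.experts], dim=-1)   # (..., d_model, n_experts)
            mixed = (expert_out * weights.unsqueeze(-2)).sum(dim=-1)
            return self.base(x) + mixed


    # Usage: add one expert, then fine-tune only the parameters that still require grad.
    bank = MemoryExpertBank(d_model=64)
    new_idx = bank.add_expert()
    optimizer = torch.optim.AdamW(
        [p for p in bank.parameters() if p.requires_grad], lr=1e-3
    )
    out = bank(torch.randn(2, 10, 64))

Under these assumptions, only the appended expert and the re-grown router receive gradients, so whatever the existing experts have already memorized is left untouched.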



