
Because of stuff like this, I don't get how you can trust it not to hallucinate when it summarizes your inbox.



IME LLMs hallucinate much less when given a text they can summarize or answer questions about than when asked to generate something on their own.
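
Roughly speaking, that's the difference between asking the model to answer from its weights and constraining it to the text you hand it. A minimal sketch of the grounded version, assuming the openai Python client (the model name and prompt wording are just placeholders):

    # A minimal sketch of grounded summarization, assuming the openai Python
    # client; the model name and prompt wording are placeholders, not a
    # recommendation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize_email(email_text: str) -> str:
        # The model is only asked about text it was given, rather than asked
        # to produce facts from memory.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            temperature=0,        # low temperature discourages invention
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Summarize the email below in two sentences. "
                        "Use only information that appears in the email; "
                        "if something is missing or unclear, say so instead "
                        "of guessing."
                    ),
                },
                {"role": "user", "content": email_text},
            ],
        )
        return response.choices[0].message.content

This doesn't eliminate hallucination, but in my experience pinning the prompt to the supplied text cuts it down a lot compared to an open-ended prompt.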



