
An underappreciated feature of a classical knowledge base is returning “no results” when appropriate. LLMs so far arguably fall short on that metric, and I’m not sure whether that’s an inherent limitation.

So out of all the potential applications for current-day LLMs, I’m really not sure this is a particularly good one.

Maybe this is fixable if we can train them to cite their sources more consistently, in a way that lets us double-check the output?



