
The only time an LLM can be somewhat confident in its answer is when it is reproducing verbatim text from its training set. In any other circumstance, it has no way of knowing whether the text it produced is true, because fundamentally it only knows whether it is a likely completion of its input.
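
To make the point concrete, here is a minimal sketch (assuming the Hugging Face transformers library and the public gpt2 checkpoint) of what the model actually exposes: a probability distribution over next tokens. It scores how plausible each continuation is, and nothing in that output distinguishes a true statement from a fluent-sounding false one.

    # Minimal sketch: inspect next-token probabilities from a causal LM.
    # Assumes the `transformers` library and the public "gpt2" checkpoint.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of Australia is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # Distribution over the next token: a measure of how likely each
    # continuation is given the prompt, not of whether it is factually correct.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id.item()):>12}  p={prob:.3f}")

High probability here means "typical of the training data in this context"; the model has no separate channel reporting truth.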



Post-training includes mechanisms that teach LLMs to recognize areas where they should exercise caution when answering. It's not as simple as you describe anymore.





