LLMs can provide references. There's no limitation on that. Even GPT-4 sometimes includes references when it deems them beneficial.



An LLM itself cannot provide references with any integrity. They're autoregressive probabilistic models. They'll happily make something up; you can even try to train one to produce references, but as the article states, that's very far from a guarantee. What you can do is a kind of RAG setup, where you pull from some existing database and include the results in the prompt to ground it, but that's not inherent to the model itself.


Sure, but it can semantically query a vector DB on the side and use the results to generate a "grounded" response.
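Roughly what I mean, as a minimal sketch: a toy retriever picks the most relevant passages, and the prompt asks the model to answer only from those passages and cite them. The docs, the bag-of-words "embedding", and call_llm are all placeholders here, not a real vector DB or a real API:

    # Minimal RAG sketch: retrieve relevant passages, then ask the model to
    # answer using only those passages and to cite them. call_llm is a
    # stand-in for whatever chat-completion API you actually use.
    import math
    from collections import Counter

    DOCS = {
        "doc1": "Transformers were introduced in 'Attention Is All You Need' (Vaswani et al., 2017).",
        "doc2": "Retrieval-augmented generation combines a retriever with a generator (Lewis et al., 2020).",
    }

    def embed(text: str) -> Counter:
        # Toy embedding: bag of lowercased words. A real system would use a
        # sentence-embedding model and a vector DB instead.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
        # Rank documents by similarity to the query and keep the top k.
        q = embed(query)
        ranked = sorted(DOCS.items(), key=lambda kv: cosine(q, embed(kv[1])), reverse=True)
        return ranked[:k]

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model/API of choice here")

    def grounded_answer(question: str) -> str:
        hits = retrieve(question)
        context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
        prompt = (
            "Answer the question using ONLY the sources below, and cite the "
            f"source ids you used.\n\nSources:\n{context}\n\nQuestion: {question}"
        )
        return call_llm(prompt)

The point being: the grounding comes from whatever you retrieved and stuffed into the prompt, not from the model's weights.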


“LLMs can provide references. There’s no limitation on that.”

That’s not really the LLM providing references; it’s a separate DB, as an extra step, providing them.


All outputs of an LLM are generated by the LLM. LLMs today can and do use external data sources. Applied to humans, what you're saying is like saying that it's not humans who provide references because they copy the bibtex for them from Arxiv.


But if you’re using an external data source and putting it into the context, then it’s the external data source that’s providing the reference; the LLM is just asked to regurgitate it. The large language model itself, pretrained on trillions of tokens of text, is unable to provide those references.

If I take llama3, for example, and ask it to provide a reference, it will just make something up. Sometimes those references happen to exist; often they don’t. And that’s the fundamental problem: they hallucinate. This is well understood.



