Nope, that’s not how it works. In such systems those references aren’t generated; they’re retrieved. They might not cite all of their sources, of course, same as humans.
Exactly. Right now if I google something (AI Overview aside), I’m linked to a source. That source may or may not cite its own sources, but its provenance tells me a lot. If I’m reading info linked off Mayo Clinic, their reputation lets me treat the information as high quality. If they start publishing a bunch of garbage, their reputation takes a hit and I’ll look elsewhere. With an LLM there is no such signal: it will spew everything from high-quality to low-quality (to dangerously wrong) info.
An LLM itself cannot provide references with any integrity. They’re autoregressive probabilistic models. They’ll happily make something up, and you can even try to train one to emit references, but as the article states this is very, very far from a guarantee. What you can do is a kind of RAG setup where you have some existing database whose contents you pull into the prompt to ground the model, but that’s not inherent to the model itself.
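To make that concrete, here’s a minimal sketch of the RAG pattern I mean. The corpus, the `retrieve`/`build_prompt` helpers, and the `complete` callable are all placeholders for whatever store and model you actually use; the point is that the citation IDs come from the retrieval step, not from the model’s weights.

```python
# Minimal RAG sketch: references are supplied by retrieval, not by the model.
# Everything here (corpus, scoring, prompt format) is a stand-in for a real
# vector store / search index and a real LLM client.

from dataclasses import dataclass


@dataclass
class Doc:
    doc_id: str   # stable identifier, e.g. a DOI or URL
    title: str
    text: str


# Toy "database" standing in for a real index.
CORPUS = [
    Doc("doi:10.0000/example-1", "Example paper on topic A", "Topic A is ..."),
    Doc("doi:10.0000/example-2", "Example paper on topic B", "Topic B is ..."),
]


def retrieve(query: str, k: int = 2) -> list[Doc]:
    """Naive keyword-overlap scoring; a real system would use BM25 or embeddings."""
    terms = set(query.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda d: len(terms & set((d.title + " " + d.text).lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(question: str, docs: list[Doc]) -> str:
    # The only citable IDs are the ones pasted into the prompt here,
    # so any reference the model emits traces back to a retrieved document.
    context = "\n".join(f"[{d.doc_id}] {d.title}: {d.text}" for d in docs)
    return (
        "Answer using ONLY the sources below, citing them by their IDs.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


def answer(question: str, complete) -> str:
    """`complete` is any prompt -> completion callable for the LLM you're using."""
    docs = retrieve(question)
    return complete(build_prompt(question, docs))
```

The model can still garble or mis-attribute what’s in the prompt, but at least every ID it’s allowed to cite corresponds to a real document that was actually retrieved, which is the grounding the bare model can’t give you.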
All outputs of an LLM are generated by the LLM. LLMs today can and do use external data sources. Applied to humans, what you’re saying is like saying that it’s not humans who provide references because they copy the BibTeX for them from arXiv.
But if you’re using an external data source and putting it into the context, then it’s the external data source that’s providing the reference; the LLM is just asked to regurgitate it. The large language model itself, pretrained on trillions of tokens of text, cannot provide those references.
If I take llama3, for example, and ask it to provide a reference, it will just make something up. Sometimes those references happen to exist; often they don’t. And that’s the fundamental problem: they hallucinate. This is well understood.