
Well, if all you want is entertainment, then it doesn't matter. If you want factual information, then getting it without any way to check its veracity creates a huge amount of work if you actually want to use that information for something important. Only after you've verified the output might it be useful, and only if it turns out to be correct.



That doesn't render it useless. It means that citations have additional utility.

Further, LLMs don't just spit out quotes. They engage in analytic and synthetic knowledge.


> They engage in analytic and synthetic knowledge.

they're not hallucinations now, they're "synthetic knowledge"

like microsoft's hilarious remarketing of bullshit as "usefully wrong"

https://www.microsoft.com/en-us/worklab/what-we-mean-when-we...


Synthetic knowledge is not bullshit.

Synthetic knowledge refers to propositions or truths that are not just based on the meanings of the words or concepts involved, but also on the state of the world or some form of experience or observation. This is in contrast to analytic knowledge, which is true solely based on the meanings of the words or concepts involved, regardless of the state of the world.


> Synthetic knowledge refers to propositions or truths

which LLMs have no way of determining

-> bullshit


That is not correct.

You can interrogate an LLM. You can evaluate its responses in the course of interrogation. You can judge whether those responses are internally coherent and congruent with other sources of information. LLMs can also offer sources and citations via retrieval-augmented generation (RAG).
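
As a rough sketch of the RAG point: the generate() call and the toy keyword retrieval below are stand-ins (not any particular library or API), but the shape is retrieve sources, put them in the prompt, ask for an answer that cites them, then check the citations against the sources.

    def retrieve(query, corpus, k=2):
        # Toy keyword-overlap scoring; a real system would use embeddings.
        words = query.lower().split()
        scored = sorted(corpus, key=lambda d: -sum(w in d["text"].lower() for w in words))
        return scored[:k]

    def answer_with_citations(query, corpus, generate):
        docs = retrieve(query, corpus)
        context = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)
        prompt = (
            "Answer using only the sources below and cite them by id.\n"
            f"{context}\n\n"
            f"Question: {query}\nAnswer:"
        )
        reply = generate(prompt)
        # The cited ids can now be checked against `docs`, which is the
        # verification step being argued about in this thread.
        return reply, docs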

There is no self-sufficient source of truth in this world. Every information agent must contend with the inevitability of its own production of error and excursions into fallibility.

This does not mean those agents are engaging in bullshit whenever they leave the domain of explicit self-verification.


> You can interrogate an LLM

No you can't. An LLM doesn't remember what it was thinking when it wrote what it did before; it just looks at the text and tries to come up with a plausible answer. LLMs don't have a persistent mental state, so there is nothing to interrogate.

Interrogating an LLM is like asking a person to explain another person's reasoning or answer. Sure, you will get something plausible-sounding from that, but it probably won't be what the person who first wrote it was thinking.


This is not correct. You can get an LLM to improve its reasoning through iteration and interrogation. By changing the content in its context window, you can evolve a conversation quite nicely and get elaborated explanations, reversals of opinion, etc.
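
To make that concrete, here's a rough sketch, with chat(messages) standing in for whatever model API you use (it takes the full message history and returns the next reply):

    def interrogate(chat, opening_question, follow_ups):
        # The only "memory" is this message list: every call re-sends the
        # whole transcript, and each follow-up changes what the model must
        # stay consistent with on the next turn.
        messages = [{"role": "user", "content": opening_question}]
        reply = chat(messages)
        messages.append({"role": "assistant", "content": reply})
        transcript = [(opening_question, reply)]
        for question in follow_ups:
            messages.append({"role": "user", "content": question})
            reply = chat(messages)
            messages.append({"role": "assistant", "content": reply})
            transcript.append((question, reply))
        return transcript

    # e.g. interrogate(chat, "Why did X happen?",
    #                  ["What evidence supports that?",
    #                   "Doesn't that contradict your first answer?"])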


I feel like I'm chatting with one here


Based on what?



