Well, if all you want is entertainment, then it doesn't matter. But if you want factual information, getting it without any way to check its veracity creates a huge amount of work whenever you actually want to use that information for something important. Only after you've verified the output can you know whether it is correct and therefore useful.
Synthetic knowledge refers to propositions or truths that are not just based on the meanings of the words or concepts involved, but also on the state of the world or some form of experience or observation. This is in contrast to analytic knowledge, which is true solely based on the meanings of the words or concepts involved, regardless of the state of the world.
You can interrogate an LLM. You can evaluate its responses in the course of interrogation. You can judge whether those responses are internally coherent and congruent with other sources of information. LLMs can also offer sources and citations via RAG.
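One concrete veracity check that RAG makes possible is confirming that a passage the model attributes to a source actually appears in that source. A minimal sketch (the `documents` store and the quote-matching rule are illustrative assumptions, not any particular RAG framework's API):

```python
# Illustrative check: does a cited document actually contain the
# quoted text the model attributed to it? The document store and
# matching rule here are hypothetical simplifications.

documents = {
    "doc1": "Water boils at 100 degrees Celsius at sea level.",
    "doc2": "The speed of light is about 299,792 km per second.",
}

def citation_supported(quote: str, doc_id: str, docs: dict) -> bool:
    """Return True if the cited document contains the quoted text."""
    source = docs.get(doc_id, "")
    return quote.lower() in source.lower()

print(citation_supported("boils at 100 degrees", "doc1", documents))  # True
print(citation_supported("boils at 100 degrees", "doc2", documents))  # False
```

Real systems use fuzzier matching (embeddings, n-gram overlap), but even this exact-substring version shows the point: a cited claim is checkable in a way a bare assertion is not.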
There is no self-sufficient source of truth in this world. Every information agent must contend with the inevitability of its own production of error and excursions into fallibility.
This does not mean those agents are engaging in bullshit whenever they leave the domain of explicit self-verification.
No you can't: an LLM doesn't remember what it thought when it wrote what it did before; it just looks at the text and tries to come up with a plausible answer. LLMs don't have a persistent mental state, so there is nothing to interrogate.
Interrogating an LLM is like asking a person to explain another person's reasoning or answer. Sure, you will get something plausible-sounding, but it probably won't be what the person who first wrote it was thinking.
This is not correct. You can get an LLM to improve its reasoning through iteration and interrogation. By changing the content in its context window, you can evolve a conversation quite nicely and get elaborated explanations, reversals of opinion, and so on.