Can it ingest multiple PDFs in the same 'context', or would I have to assemble it all into one (under the 50mb limit)?
What is it using to
Can we (provoke you to) set the model temperature in the conversation to either minimize the hallucinating or increase the 'conjecture/BS/marketing-claims' factor?
Right now it's just one PDF per bot, but yeah, you could hack it by merging the PDFs and then generating a new bot. Interesting suggestion -- did you notice a particular hallucination? What kinds of docs would be high vs. low temperature?
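If anyone wants to try the merge hack, here's a minimal sketch using the pypdf library; the file names are placeholders, and it assumes the combined file still comes in under the 50 MB limit:

```python
from pypdf import PdfWriter

writer = PdfWriter()
for path in ["datasheet.pdf", "app_note.pdf", "errata.pdf"]:
    writer.append(path)  # pull in every page of each source PDF

with open("combined.pdf", "wb") as out:
    writer.write(out)  # upload this single file to generate the new bot
```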
Yes. See this comment [0]. Another HNer with API access tried just ingesting the paper (no extra context), with some instructions and model-temp=0, and got better results.
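Roughly that approach, sketched with the OpenAI Python client -- the model name, prompt wording, and file name here are my assumptions, not what they actually ran:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
paper_text = open("paper.txt").read()  # plain-text dump of the PDF

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0,        # always take the most likely token: less conjecture
    messages=[
        {"role": "system", "content": "Answer only from the document provided."},
        {"role": "user", "content": paper_text + "\n\nQuestion: <your question here>"},
    ],
)
print(response.choices[0].message.content)
```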
I've also found in my area that it'll happily hallucinate stuff -- after all, it has zero actual understanding, it just predicts the most likely filler in the given context.
Tamping that down and just getting it to cut out the BS/overconfidence response patterns and reply with "I know X and I don't know Y" would be incredibly useful.
When we get back an "IDK", we can probe in a different way, but falsely thinking that we know something when we are actually still ignorant is worse than just knowing we've not yet got an answer.
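One way to nudge it there (a sketch under my own assumptions -- the prompt wording and the marker string are just illustrative): tell it to emit a fixed "I don't know" string when the document doesn't contain the answer, and check for that string on the caller side, so an admission of ignorance is handled explicitly instead of being mistaken for an answer.

```python
IDK_MARKER = "I don't know"

SYSTEM_PROMPT = (
    "Answer only from the supplied document. "
    f"If the document does not contain the answer, reply exactly: {IDK_MARKER}."
)

def is_idk(reply: str) -> bool:
    """True if the model admitted ignorance instead of guessing."""
    return reply.strip().lower().startswith(IDK_MARKER.lower())

# When we get an IDK back, probe a different way instead of trusting filler.
for reply in ["I don't know.", "Latency is roughly 5 ms (section 4.2)."]:
    print("rephrase and retry" if is_idk(reply) else f"grounded answer: {reply}")
```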