We are a team of engineers & researchers on AI Alignment & Safety, we're investigating multiple methods, including metamorphic testing, human feedback, benchmarks with external data sources, and LLM explainability methods.
Currently, fact checking works on straight facts. It does a Google Search and uses LLMs to shorten it. Once it has the short version, it will compare the short results with the answer provided by ChatGPT itself. Premium tiers would get better fact checking sources than just google. We're investigating various data sources and comparison methods.
Note that fact checking / hallucinations is just one of the types of satety issues we'd like to tackle. Many of these are still open questions in the research community, so we're looking to build and develop the right methods for the rights problems. We also think it's super important to have independent third-party evaluations to make sure these models are safe.
This is a new tool we're building in the open, and we're interested in your feedback to prioritize!
> Currently, fact checking works on straight facts.
Wow, you guys have a database of all the facts?
> It does a Google Search and uses LLMs to shorten it.
Oh...
...actually, this is an empirical fact checker. I wouldn't call it "fact-based", as it's epistemologically an absurd statement, but "empirical fact checking" sounds good and presents an idea that is very close to how humans verify information in the first place - by checking multiple sources and searching for correlation.
For what it's worth, I think your approach makes sense. Good luck.
> Currently, fact checking works on straight facts. It does a Google Search and uses LLMs to shorten it.
So your fact-checking LLM is also vulnerable to injection and unethical prompting then when it ingests website text. And a Google search is far, far away from fact checking, particularly for the subtle errors that GPT-4 is prone to making.
Currently, fact checking works on straight facts. It does a Google Search and uses LLMs to shorten it. Once it has the short version, it will compare the short results with the answer provided by ChatGPT itself. Premium tiers would get better fact checking sources than just google. We're investigating various data sources and comparison methods.
Note that fact checking / hallucinations is just one of the types of satety issues we'd like to tackle. Many of these are still open questions in the research community, so we're looking to build and develop the right methods for the rights problems. We also think it's super important to have independent third-party evaluations to make sure these models are safe.
This is a new tool we're building in the open, and we're interested in your feedback to prioritize!