Putting aside the difficulty of building that database of true facts, and of parsing unknown facts out of English text, that solution feels like it's inviting an arms race. I can write a bunch of uncontroversial "true" facts into an article containing one or two false facts.
Diluting lies with verifiable but only tangentially related truths is a well-established tactic for fooling humans as well.
Someone with an agenda in the symbolic vs. statistical AI debate could cite this parallel as evidence of how closely ontology-based AI approaches mirror the way humans think. And then someone with the opposite agenda would point out that the example is entirely hypothetical.