This is kind of like the advent of spellcheck, where a whole class of errors started to appear regularly in almost every article because publishers stopped paying for the human labor to review for things like homophone or word-order mistakes. Except much worse, because it could allow spurious or even harmful facts to accumulate and spread, not just grammatical errors.
> Except much worse, because it could allow spurious or even harmful facts to accrue
It already did, even in the "purely human" era. I think LLM text will gradually become more trustworthy than a random website through consistency filtering of the training set.
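A minimal sketch of what that consistency filtering might look like, assuming claims have already been extracted and normalized per source (the function, threshold, and data here are hypothetical illustrations, not any lab's actual pipeline):

```python
from collections import Counter

def consistency_filter(claims_by_source, min_sources=2):
    """Keep only claims asserted by at least `min_sources` independent sources.

    `claims_by_source` maps a source id to the set of normalized claims it
    makes; a claim made by fewer than `min_sources` sources is dropped.
    """
    counts = Counter()
    for claims in claims_by_source.values():
        counts.update(set(claims))  # count each source at most once per claim
    return {claim for claim, n in counts.items() if n >= min_sources}

# Hypothetical toy corpus: three sites asserting short factual claims.
sources = {
    "site_a": {"water boils at 100C", "the moon is cheese"},
    "site_b": {"water boils at 100C"},
    "site_c": {"water boils at 100C", "paris is in france"},
}
print(consistency_filter(sources, min_sources=2))
# → {'water boils at 100C'}
```

The idea is that a spurious fact from one low-quality page gets outvoted by the rest of the corpus, though in practice widely repeated falsehoods would still pass this kind of filter.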