Because everyone has tried to lie at least a few times in their lives - not necessarily for malicious reasons, e.g. white lies - and knows it's not easy coming up with plausible-sounding lies. It takes effort. Sure, it might be effortless for some psychopaths, but there are only a limited number of such people. Overall the internet was surprisingly usable despite all the liars - even now with SEO and content farms; I think even SEOs realize it's better to provide true information than to lie if it costs them the same.
LLMs change all that. They can produce content at a massive scale that can drown out everything else. It doesn't help that less accurate LLMs are cheaper and easier to create and run - think of all the GPT-3-level LLMs vs GPT-4-level LLMs; guess which ones SEOs will gravitate towards.
> Because everyone has tried to lie at least a few times in their lives - not necessarily for malicious reasons, e.g. white lies - and knows it's not easy coming up with plausible-sounding lies.
Regardless, the amount of false information on the internet used to be limited by the number of people producing it. Some of it is deliberately produced misinformation. Some of it is just mistakes - some due to carelessness, others due to willful negligence. An example of the latter would be content farms that make their money by flooding search engine result pages and pushing ads in your face when you visit their websites; they couldn't care less whether you got what you came for.
Despite all that, the internet was still useful. Enough people post accurate information that the signal-to-noise ratio is good enough.
LLMs will change all that. They have the potential to flood the internet with hallucinated falsehoods. And because bad LLMs are cheaper and easier to create and operate, those will be the majority of LLMs used by the aforementioned content farms.
Well, you can be sure the SEOs creating content farms will pick the path of least resistance.
They won't make up a lie when telling the truth is far easier, and when they do have to lie, it's only slightly more effort.
But that's a moot point with LLMs...
If/when they use LLMs, they will pick the cheapest LLM possible, which will produce the most garbage - they might not even bother keeping it up to date; why bother when hallucinated BS sounds just as convincing?
With AI, this limit is all but removed.
All the human-generated bullshit ever created will soon be dwarfed by what AI can vomit out in an hour.