Hacker News

This is kind of like the advent of spellcheck, where a whole class of errors started appearing in almost every article because publishers stopped paying for the human labor to review for things like homophone or word-order errors. Except much worse, because it could allow spurious or even harmful facts to accrue and spread instead of just grammatical mistakes.



> Except much worse, because it could allow spurious or even harmful facts to accrue

It already did, even in the "purely human" era. I think LLM text will gradually become more trustworthy than a random website, through consistency filtering of the training set.
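The comment doesn't spell out what consistency filtering would look like; here's a minimal sketch of one possible interpretation, where a claim survives only if multiple independent documents assert it (the `consistency_filter` function and the toy documents are hypothetical, just to illustrate the idea):

```python
from collections import Counter

def consistency_filter(documents, min_agreement=2):
    """Keep only claims asserted by at least `min_agreement`
    independent documents; one-off claims are dropped."""
    counts = Counter()
    for doc in documents:
        # count each distinct claim at most once per document
        for claim in set(doc):
            counts[claim] += 1
    return {claim for claim, n in counts.items() if n >= min_agreement}

docs = [
    {"water boils at 100C", "the earth is round"},
    {"water boils at 100C", "the moon is cheese"},
    {"the earth is round", "water boils at 100C"},
]
kept = consistency_filter(docs)
# the one-off claim "the moon is cheese" is filtered out
```

Real pipelines would need something far subtler (claim extraction, source independence, weighting by reliability), but the basic shape is majority agreement across sources.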


Unfortunately, it is more than likely that the training inputs to upcoming LLMs will partly consist of older LLMs' outputs.



