
This is just the "guns don't shoot people, people do" argument, except in this case there is quite literally a massive upside incentive to remove people from the process entirely (e.g. websites that automatically generate new content every day), so I don't buy it.

This kind of AI slop is quite literally written by no one (an algorithm pushed it out), and it doesn't communicate anything, since communication first requires some level of understanding of the source material, and LLMs are just predicting the likely next token without that understanding. I would also extend this to AI slop written by someone with limited domain understanding: they themselves have nothing new to offer, and they lack the expertise or experience to ensure the AI is producing valuable content.

I would go even further and say it's "read by no one": people are sick and tired of reading the next AI slop article on Google and append terms like "reddit" to their queries to limit the amount of garbage they get.

Sure, there are people using LLMs to enhance their research, but the vast, vast majority are using them to create slop that hits a word count.
