Hacker News

Why train on scientific texts when half the papers are BS?



It would be good to train LLMs to distinguish BS articles from good ones, but that's a rather unrealistic expectation.


I can only assume someone is working on a BS detector. And holy fuck, I would pay cash money for a way to identify it without having to delve into it myself. Imagine not having to dive into a paper only to find out that the sample size is so small it shouldn't even have been considered for submission, or better yet, having online news qualified as 'propaganda beneficial to side x'.
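A real BS detector is a long way off, but the narrow "sample size too low" check mentioned above can at least be approximated mechanically. A toy Python sketch (the regex pattern and the threshold of 30 are illustrative assumptions, not a real screening criterion):

```python
import re

def flag_small_sample(text, min_n=30):
    """Toy heuristic: flag text that reports a sample size
    (patterns like 'n = 12' or 'N=4500') below a threshold.
    Returns True if the smallest reported n is under min_n,
    False if all reported sizes clear it, None if none found."""
    sizes = [int(m) for m in re.findall(r"\b[nN]\s*=\s*(\d+)", text)]
    if not sizes:
        return None  # no sample size reported; cannot judge
    return min(sizes) < min_n

flag_small_sample("We surveyed participants (n = 12).")   # flagged
flag_small_sample("A cohort of N=4500 adults was followed.")  # not flagged
```

Obviously this catches only one narrow failure mode and only when the number is stated in a standard form; anything resembling the detector people actually want would need to read the methods section, not grep it.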


One of the few interesting ideas in Neal Stephenson's book Fall is that the rich pay people to curate the information they see on the web and in their social feeds, while the poor have to wade through all of the machine-generated content and propaganda and figure out what is true on their own.

An AI that does that detection would be both wonderful and dangerous.

(That book abandoned its only interesting ideas and went totally off the rails a few chapters later IIRC.)


At what point does increasing dependence on outside validation make your own opinion irrelevant?


In a sense, it already is. My voice is drowned in a sea of SEO-optimized gibberish. Even if I had something interesting and novel to say, an average person would be hard-pressed to 1) find me, 2) convince me to exchange ideas, and 3) pay money for it. There is usually a market for experts with niche appeal, but, well, the market only needs so many.

Still, it is a great question, and part of me wonders about the what-ifs of this evolution.



