The pressure to publish pushes down quality (nature.com)
96 points by benbreen on May 21, 2016 | 18 comments



Is it possible to model the rate at which human knowledge is increasing or decreasing? Given that:

A) claims about reality form a lemon market: it's often expensive or impossible to evaluate the truth behind purported facts.

B) false claims undermine the value of true claims by decreasing trust in them, and by causing people to stop believing things that are true so as to realign their worldview to be logically consistent with misinformation.

It stands to reason that if the fraction of true claims made in papers falls below some threshold X, the sum of human knowledge is actually decreasing even if good papers are still being published.
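To make that concrete, here's a toy sketch (the "damage" factor and the numbers are purely my own assumptions): say each true paper adds one unit of knowledge, and each false paper erodes some amount of it by undermining trust in true results. Then the total only grows while the true fraction stays above damage / (1 + damage):

    # toy model; the damage factor is an assumption, not a measured value
    def net_knowledge(n_papers, true_fraction, damage=2.0):
        per_paper = true_fraction * 1.0 - (1 - true_fraction) * damage
        return n_papers * per_paper

    # break-even at true_fraction = damage / (1 + damage); with damage = 2
    # that threshold ("X") is about two thirds
    print(net_knowledge(10_000, 0.60))  # -2000.0: the sum shrinks
    print(net_knowledge(10_000, 0.75))  #  2500.0: the sum grows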


I can't even think of a way to measure human knowledge which isn't problematic in immediately obvious ways.


Yeah, what is a 'true claim'?


I really, really hate to ask the obvious, but doesn't this apply to damned near literally everything?

Did we genuinely need a column to remind us of this? Is this how bad it's gotten? I'm not asking rhetorically; I'm dead serious. Has publishing gone so far off the rails that we have to restate the blindingly obvious just to reboot the conversation? I'm not involved in the process at all and will readily admit that I don't have a single paper to my name, so I'm clueless about the current state of affairs when it comes to published research.

Edit: Wow. Yeah, things actually look pretty bad if we take the opinion piece at its word. Particularly telling are the examples given, such as:

> For example, a breast-cancer cell line used in more than 1,000 published studies actually turned out to have been a melanoma cell line. The average biomedical research paper gets cited between 10 and 20 times in 5 years, and as many as one-third of all cell lines used in research are thought to be contaminated, so the arithmetic is easy enough to do: by one estimate, 10,000 published papers a year cite work based on contaminated cancer cell lines. Metastasis has spread to the cancer literature.
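If I try the arithmetic myself, taking the quoted figures at face value (my own rough reading; the column doesn't show its working), it seems to go something like this:

    # back-of-envelope using only the figures quoted above
    papers_using_misidentified_line = 1000
    citations_per_paper_over_5_years = (10, 20)
    low, high = (papers_using_misidentified_line * c
                 for c in citations_per_paper_over_5_years)
    print(low / 5, high / 5)  # roughly 2,000-4,000 citing papers a year,
                              # from that single cell line alone
    # if about a third of all cell lines are contaminated, 10,000 citing
    # papers a year across the whole literature looks, if anything, conservative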


It does indeed seem to be bad. If you try reading some papers, you can find everything from conclusions based on obvious formal fallacies, statistical significance confused with practical significance, and experimental designs that couldn't provide any new information no matter how they turned out, to outright bizarre lapses of logic, such as dietary intakes calculated so that the body isn't able to excrete more than it absorbs.

Medical science has decayed into a strange cargo cult in which experiments are run and papers are written, but the essence of science, the pursuit of better understanding, is missing. It's no wonder there has been no major progress in the last few decades and we have an epidemic of obesity and other metabolic disorders.


> but the essence of science, the pursuit of better understanding, is lacking

When I read articles and books by scholars from the 19th and early 20th centuries, I'm often surprised by how broad their horizons were and how cautious their thinking was. It's always a pleasure to read.


Where can I read about medical science having been more focused on understanding in the past? I think the focus on small increments in (relatively) unimportant medical knowledge is perhaps the greatest tragedy of our time.

Also, who calculates dietary intake that way?

"Confusing statistical significance with practical significance" -- that's a great phrase!


>"Where can I read about medical science having been more focused on understanding in the past?"

As far as I know, no one has really written this up; it's just something you figure out by reading the literature. You see that, despite the technological limitations of the time, they tried to include tables of the data in as much detail as was practical, to rule out competing explanations, and so on. Over the decades this degenerated into testing null hypotheses and reporting p-values and averages. Now there is something of a recovery, thanks to the sheer ease of transmitting information, but it is still amazing how little this is taken advantage of in general.

I've thought about writing a book, but I'm not sure people would buy it. How much would you pay for something like that?


Certainly it does -- and yes we do. Things are so bad that entire fields of science are best disregarded until a few good meta-analyses come out. Or, better yet, large randomized trials with predetermined criteria for success or failure, and no fishing allowed.


(Example: remember that "10,000 hours to expertise" thing Malcolm Gladwell was pushing a few years ago? Forget about it; the whole thing failed to replicate. And forget about Malcolm Gladwell in general, since he has made a career of this sort of borderline fraud.)


Oh, and even the stuff that was true was completely misinterpreted. For example, he cited a study of students as showing there was no "math talent", only diligence. The setup was actually quite interesting: there was an additional informational questionnaire that wasn't about skills at all, and there was a correlation between conscientiously filling out that questionnaire and scoring high on the actual skills test.

However.

The effect appeared in only two grade levels, and only in some years, so it's extremely weak evidence. But that's only a small detail. The big one is that the effect was only evident at the national level. So yes, no nation is more talented at math than any other; in other news, water is still wet. However, the data, even with the flimsy bits taken at face value, simply do not show that there is no math talent at the level of individuals, and in fact show the exact opposite.

Details.


Your criticism of Gladwell is oddly unspecific given its nature. You never state what exactly you object to in his 10,000-hour claim.

When I read the kind of criticism you seem to be referring to, like this one:

http://www.businessinsider.com/new-study-destroys-malcolm-gl...

I notice that the examples are silly: they equate "success" with "mastery". For example, they write:

> Then there's a band like the Sex Pistols, who took the world by storm even though Sid Vicious could barely play his bass.

So they themselves say he "could barely play his bass"! How does that refute the mastery theory? The Richard Branson example is about "success" rather than "mastery" too - I bet Richard DID have his "10,000 hours" of business practice.

Not that Gladwell made any specific predictions either, AFAIR. I wish the people criticizing his work would set an example of how to do better - but it seems the criticism itself sets new lows in quality.

"What Malcolm Gladwell REALLY Said About The 10,000 Hour Rule":

http://problogservice.com/2012/03/15/what-malcolm-gladwell-r...

> “Malcolm Gladwell said you have to have 10000 hours in a subject to be an expert,” they will often state. The problem is, they’re repeating a misquote from someone else who has never read the book.

Also note that Gladwell quotes scientists - so any criticism should be directed at them:

> “The emerging picture from such studies is that ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert—in anything,” writes the neurologist Daniel Levitin. — p. 40


So, in other words, publishing papers in journals has gone much the same way as the media, where getting something out quickly is rewarded more than doing a good job.


and consumer tech, and manufacturing in general, and politics, and policing, and...


In other news: water is wet.


Pressure is a good thing. Pressure does affect quality. We never have enough time to ship things. There are always improvements that we can make.

Pressure is always there. It's one of those things that expands to take up whatever space it's given. More time (shipping less) doesn't necessarily mean better quality. There are days when I can get twice as much done just because my task list has twice as many things on it. Then there are days when simple things on a short task list seem to take me forever. I feel I have more time, so maybe I'm more susceptible to going down rabbit holes. Or maybe I procrastinate until I'm left with only as much time as I know the task will really take.

I wonder if people would really improve the quality if given the time to do so. Maybe the issue is that they don't have the tools to improve the quality in the first place. At a certain point your brain calls the task done and you aren't going to do anything more to it. Give more time to a developer writing spaghetti code and you'll probably just get more spaghetti code. To really improve, the developer would have to go through some big personal change (a steady diet of code reviews, reading good code and using other resources for improvement).

Maybe it's best to improve the processes. In software we can ship something with a beta label and expect problems. Then we can iterate on the software to improve it.

And is publishing less a solution to drowning in noise? That seems to be a separate problem, to be solved with different tools. There will always be a growing volume, and with volume comes noise.


For the first part of your comment I will just say that research does not work like this.

However, that is interesting:

> Maybe it's best to improve the processes. In software we can ship something with a beta label and expect problems. Then we can iterate on the software to improve it.

That's more or less an analogy for what people advocating open peer review are saying, and I think it's quite right.


I don't live in the world of publishing. But pressure is what pushes me to pay the rent. And that's probably the case for most people.



