Hacker News

The Nobel laureate profiled in this article researches biology, and won the Nobel Prize in Physiology or Medicine. You would hope that research on medicine, of all subjects, would be published in journals that never make mistakes, but Science, arguably the most prestigious journal in the world, published a mistaken article about cell biology quite recently,[1] so we have to wonder how well even the most prestigious journals do peer review of what they publish. A medical doctor who studies the scientific research process in general argues that a great many published research findings are probably false,[2] so scientists need to make greater efforts to get peer review right and to improve scientific publication practices.

From Jelte Wicherts, writing in Frontiers in Computational Neuroscience (an open-access journal), comes a set of general suggestions [3] on how to make the peer-review process in scientific publishing more reliable. Wicherts does a lot of research on this issue, trying to reduce the number of dubious publications in his main discipline, the psychology of human intelligence.

"With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research. We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses."

[1] http://www.slate.com/articles/health_and_science/science/201...

https://www.sciencenews.org/node/5635

http://www.nature.com/news/arsenic-life-bacterium-prefers-ph...

[2] http://www.plosmedicine.org/article/info:doi/10.1371/journal...

http://www.bx.psu.edu/~anton/bioinf1-lectures/mccarthy2008.p...

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjourna...

[3] http://www.frontiersin.org/Computational_Neuroscience/10.338...




>You would hope that research on medicine, of all subjects, would be published in journals that never make mistakes

I wouldn't hope this, and maybe it's pedantic, but there's an important point to make here. Scientific articles are evidence, not truth, and people (and systems) make mistakes. The hallmark of a robust system is not the absence of error: it is arrogance to believe you've ever achieved that, and foolish to have it as your absolute goal. What matters is setting reasonable expectations for accuracy, understanding the types of errors committed and their root causes, and responding responsibly and transparently when errors are made.

A system is much more trustworthy when it has a solid history of errors with reasonable responses than a system which claims or appears to suffer from no error at all.


Exactly. Reviewers, expected to be experts in a field, can only check claims against their own knowledge. Their role should be recognized as limited to that, not treated as a monopoly on the "real truth". Fear of error could make them overly protective, if not paranoid.

In my view, scholarly communication channels should be open, but with mechanisms to weed out obvious crap and to enable efficient self-correction.


The thing is, the top journals are also _extremely_ loath to publish retractions. See http://retractionwatch.com/2013/11/27/at-long-last-disputed-... if you haven't seen it before, and note that this is a case in which one of the paper's coauthors had been pushing for the journal to retract it for years. In cases in which it's other scientists pointing out that the paper is bunk, you get situations more like http://retractionwatch.com/2013/07/04/retraction-of-19-year-... or http://retractionwatch.com/2013/06/19/why-i-retracted-my-nat...

And as for follow-up papers that contradict or correct the previous one, the journals don't make it easy to tell they exist at all, so people keep citing the original erroneous paper. The last link in the previous paragraph puts some numbers on the scope of this problem: 16 citations for the retraction versus 976 citations for the bogus, unreproducible paper, 700 of the latter coming _after_ the retraction was published.



