
The PubMed Commons initiative[1] by the National Institutes of Health, mentioned in the article kindly submitted here, is a start at addressing the important problems the article describes. One critique[2] calls PubMed Commons a step in the right direction but notes that it includes too few researchers so far. A blog post on PubMed Commons[3] explains the rationale for initially limiting the number of scientists who can comment on previous research, until the system matures.

[1] http://www.ncbi.nlm.nih.gov/pubmedcommons/

[2] http://retractionwatch.wordpress.com/2013/10/22/pubmed-now-a...

[3] http://www-stat.stanford.edu/~tibs/PubMedCommons.html

USING MY EDIT WINDOW:

Some of the other comments mention studies with data that are just plain made up. Fortunately, most human beings err systematically when they fabricate data, making it look too good to be true. So an astute statistician who examines a published paper can (as some have done) detect made-up data just by analyzing what the paper reports. Uri Simonsohn does a lot of this detective work in psychology, and he publishes papers explaining his methods so that other scientists can apply the same statistical tests to find fabricated data.

http://opim.wharton.upenn.edu/~uws/
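To make the idea concrete, here is a minimal sketch in Python of one kind of test Simonsohn has described: when a paper reports summary statistics for several experimental conditions, you can simulate many datasets under an honest sampling model and ask how often the per-condition standard deviations come out as uniform as the reported ones. This is not his actual code or data; the reported SDs, sample size, and population SD below are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical example: per-condition standard deviations as a
    # paper might report them (invented numbers, not from any real study).
    reported_sds = np.array([1.01, 0.99, 1.00, 1.02, 0.98])
    n_per_condition = 15   # assumed reported sample size per condition
    population_sd = 1.0    # assumed common population SD under the null
    n_sims = 100_000

    # Test statistic: how much the condition SDs vary among themselves.
    # Honest sampling noise makes sample SDs differ noticeably;
    # fabricated summaries are often suspiciously uniform.
    observed_spread = reported_sds.std(ddof=0)

    # Null model: every condition is an honest normal sample. Simulate
    # all datasets at once and compute each dataset's SD-of-SDs.
    samples = rng.normal(0.0, population_sd,
                         size=(n_sims, reported_sds.size, n_per_condition))
    simulated_spreads = samples.std(axis=2, ddof=1).std(axis=1, ddof=0)

    # One-sided p-value: how often does chance alone make the SDs at
    # least as uniform as the reported ones?
    p_value = (simulated_spreads <= observed_spread).mean()
    print(f"P(SD spread <= {observed_spread:.3f} under the null) = {p_value:.5f}")

A tiny p-value here doesn't prove fraud by itself, of course; it just flags a paper whose reported numbers are far more uniform than honest sampling noise would predict, which is exactly the "too good to be true" pattern mentioned above.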

From Jelte Wicherts, writing in Frontiers in Computational Neuroscience (an open-access journal), comes a set of general suggestions

Jelte M. Wicherts, Rogier A. Kievit, Marjan Bakker and Denny Borsboom. Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front. Comput. Neurosci., 03 April 2012 doi: 10.3389/fncom.2012.00020

http://www.frontiersin.org/Computational_Neuroscience/10.338...

on how to make the peer-review process in scientific publishing more reliable. Wicherts does a lot of research on this issue to try to reduce the number of dubious publications in his main discipline, the psychology of human intelligence.

"With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research. We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses."




Tokenadult provides helpful input.

Like all other sectors, scientific research can get inbred, and peer review can be corrupted as a mechanism. It's similar to a character/job/performance reference: "We want to hear what other people say about you" becomes problematic when the people talking are untrustworthy, yet wield credentials that impart trustworthiness. Peer review only worked when the majority of peers were rock-solid good scientists, back when there were fewer scientists, each with a personal reputation and the discoveries to back it up. Wouldn't it be great if Pierre Curie, Darwin, or Tesla (or men and women of similar caliber) were doing peer reviews?

At leading research institutions (they aren't all universities), falsification of data occurs at both the student and the professorial level.

Funding is being cut at NASA even as funding for "sexy" research increases. We need excellent researchers across all areas of expertise, and we need increased accountability and transparency. And more funding.

I argue that what is most needed is increased scientific literacy among political leaders and the general population, so that findings can be accessed, understood, and evaluated concretely by all.

Coming pathogen shifts associated with climate change, extreme weather events, and the like alarm people, and people want to be able to trust science and to trust science reporting.



