I'm an author on several physics papers published before I switched careers to become a programmer. So I've been published in "big name" (for physics) journals.
Peer review, even without people scamming it, is still broken. The reason is that, presuming you have an ideal peer panel, everyone on it is competent in the subject of the paper. This means they are all working in schools of research relevant to the paper they are reviewing.
Thus every paper either supports or opposes each peer's school. If your paper helps confirm their theory, great. If not, they have a tendency to let politics get involved and reject the paper.
Now, a lot of funding comes from government. Imagine the situation where the government is funding research with a desired outcome, and that desired outcome has policy implications. Result: your peer review system is going to highlight results that confirm the government's policy position, because the peers are getting their paychecks there, and it is going to bury results that disagree with that position.
I think a better system would be to have peers give written reviews that are attached to the paper, along with a rebuttal from the paper's authors. Publish more papers, air differing views, and make more money (since more papers sold means more library fees, etc.).
Then peer review would be less about "censorship of radical ideas" and more about "hypothesis-antithesis-synthesis".
And yes, even in obscure areas of materials physics, there is a lot of politics... because the stakes are so low.
[reposting from a previous submission that didn't get any traction]
If anyone wants a good idea for a startup in the academic journals space, I'd recommend looking into vetting peer reviewers as part of the article submission/review process. There have been a few high-profile cases of people faking their peer reviews recently. At Sage we dealt with a big case of that earlier in the year and retracted 60 papers. A new case this week seems to involve 5 published papers and a bunch that were caught before being published. Note that both those cases (and some more) used ScholarOne as the submission management platform.
In particular, the new hotness in the academic journal world is open access journals, and OA journals need to be particularly careful about bad stuff slipping through peer review. A few OA journals from Hindawi just lost their impact factor (source: http://scholarlyoa.com/2014/10/14/the-scientific-world-journ...), not due to outright peer review fraud, but due to other "abnormal citation patterns". For OA journals getting or losing an impact factor has a HUGE impact on the # of submissions (which are directly tied to revenue), so there's a big need to ensure you don't get caught with your pants down when it comes to the peer review process. So if you had a product that gave a publisher higher confidence in the integrity of the peer review process I imagine you'd be able to get a number of bites from big and small publishers alike.
The OA EGU Journals have been doing a good job of reviewing. The review process is open to the public with anyone able to read and comment on a paper, while they also invite/require several experts to participate. While some papers are only reviewed by the invited reviewers, many are reviewed by 5+ people. Having the entire process open to the public is the way forward.
This is the correct response for open-access journals.
There's an economic imperative in paid-access journals to keep the process secretive rather than improve it: they can claim the vetting process is their 'secret sauce', which is one way many attempt to justify their fees.
As we've learned from open-source, if you can get enough eyes (identity isn't actually that important) then all bugs become shallow.
"As we've learned from open-source, if you can get enough eyes (identity isn't actually that important) then all bugs become shallow."
This is an often repeated claim, but I don't think it holds up.
What is more accurate is that, given enough eyes, most bugs are shallow, but the deep ones are not.
Some things are just subtle, and some things are technically deep. Throwing a thousand naive[1] reviewers at a problem like that won't get you anywhere near the result of just one person who knows the domain.
In the domain of peer review, this means that you can't replace (domain) expert reviewers with a random sample of interested people. You can certainly enhance the expert reviewers with additional people, and this is a good idea.
[1] Naive with respect to that particular problem domain.
Is there anything stopping the publishers from requiring cryptographically signed reviews by peers? E.g. we establish a bijection between public keys and institutional email addresses and then only accept signed reviews.
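Roughly, and only as a sketch (the registry, names, and enrollment flow here are all hypothetical), the mechanics could look like this in Python with the cryptography package's Ed25519 primitives:

    # Hypothetical sketch: accept a review only if its signature matches
    # the key enrolled for that institutional email address.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    registry = {}  # institutional email -> public key (the bijection)

    def enroll(email):
        # Issue a key pair; the journal records only the public half.
        private_key = Ed25519PrivateKey.generate()
        registry[email] = private_key.public_key()
        return private_key  # the reviewer keeps this secret

    def accept_review(email, review_text, signature):
        public_key = registry.get(email)
        if public_key is None:
            return False  # no enrolled key for this reviewer
        try:
            public_key.verify(signature, review_text.encode("utf-8"))
            return True
        except InvalidSignature:
            return False

    key = enroll("reviewer@university.example")
    review = "Sound methods; accept with minor revisions."
    sig = key.sign(review.encode("utf-8"))
    assert accept_review("reviewer@university.example", review, sig)

The hard part, of course, is the enrollment step: someone still has to verify that the email address really belongs to the person it claims to.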
Here is an idea: how about Sage sponsoring a Kaggle challenge along these lines? It could certainly gather some interesting ideas, and maybe you could entice some good performers to take it commercial later.
The fundamental issue with the current peer review system is that most academic research is "valueless" to those involved in the system.
I use the term "valueless" in a very careful and specific sense. It does not mean the work is without merit or lacks impact or benefit to society; rather, it simply means the quality of the work has no value to the peer reviewers or to the journal itself.
* The peer reviewers are donating their time and generally getting nothing in return from reading the work, other than seeing what other developments are going on in the field.
* The journal gets nothing from the work itself. It is solely in the business of collecting author fees and fees from libraries.
In this context, I think it is accurate to describe most work as "valueless" to those involved in the system. This naturally leads to sloppy peer review and publishing practices.
What I would like to see is a system that makes all players have skin in the game for the quality of the work.
How to do this? One way would be a new kind of journal where everything published has to be patentable. The journal would pay the patenting costs and then split any potential licensing fees with the authors/funding institutions in some way. Peer reviewers would be paid and/or get a small cut of the patent.
This would make everyone in the process have "skin in the game" in the value and quality of the work being published. I don't know if the economics of such a model would work, but I think it would be a fascinating experiment.
Um, you're talking about patenting the Higgs boson, the structure of DNA, the speed of light.
There is academic merit in being a reviewer. Being a reviewer for the New England Journal of Medicine would surely count in your favor with any tenure committee or grant application. Perhaps one could argue that it's hard to perceive, but I think it's safe to say any PhD sees the value.
The journals, on the other hand, have long abided by very loose business practices with exorbitant margins, enabled to a great degree by the perception of the academic honor system. Their business models and methods are thankfully coming under much-deserved scrutiny.
Should researchers be honorable people? Yes. Will you always find dishonorable behavior in a set of 100 people, over a lifetime? Yes.
Having been through this, I can say there is no value in being a reviewer when it comes to tenure or grants. There is value in being an editor, but being a reviewer is worth zero to your career.
No. Peer reviewers get: early access to articles of interest, the feeling of contributing to the community they belong to, the ability to demonstrate their knowledge, the ability to reciprocate when others review their own articles, and so on. These are not financial rewards, but they are no less strong for that (arguably even stronger).
Journals are heavily incentivized to maintain standards because otherwise good authors will stop submitting to them and librarians will stop subscribing (of course only the first incentive holds for OA journals).
The fact that the system sometimes fails is not proof that it is a bad system - all human systems have a failure rate.
Of course. The primary goal of academia is the advancement of human knowledge.
Unfortunately, the current ecosystem of journals is to a significant extent failing at that. Possibly introducing a clear [0] profit motive into that system would result in it working more efficiently and better serving academia's primary goal.
[0] I say "clear" because a lot of the current system of journals is primarily driven by profit motives that are only slightly veiled. There are notable exceptions to this, e.g. PLoS, the Journal of Statistical Software, etc.
The researchers need to profit in order for the incentives to truly align. It's like an incredibly poor man's Spotify, and we all know how that works out for musicians.
The standards required for patentability are quite different from scientific novelty or interest.
Paying reviewers is not a bad idea: journals that pay are more likely to attract quality reviewers, and payment creates a contract between the reviewer and the journal. Such a contract could include requirements for the standard of reviewing. If a paper is later found to have been poorly reviewed, the journal can seek redress from the reviewers, further encouraging them to do their job properly.
So if I have an idea that might be patentable, the journal will handle all the costs for me? Including maintenance fees and the cost of appeals if the patent is rejected? Sweet! Someone else takes the brunt of the $10,000+ in patent filing costs, and there's no downside for me!
In other words, your proposed journal just managed to increase the cost of publication by over an order of magnitude.
Onarbor, https://onarbor.com, is doing precisely what you describe. It's publishing and funding. I'm one of its creators so would be grateful if you'd be interested in talking more. tim@onarbor.com.
We need to separate the process of deciding if a paper should be published in a journal from the process of reviewing a paper. Reviewers (I have been one many times) are asked to do two things:
1. Give an opinion as to the value of the paper and whether it should be published in journal x
2. Point out flaws or suggest improvements to the paper.
These two activities are not really connected to each other. The decision to publish or not in a journal is an editorial decision that reviewers should not be making. I am happy to point out flaws or suggest improvements, but who am I to make a judgment on the value of an article to a journal? If the science is sound (but boring), does that make it less worthy of publication?
One advantage of separating these two processes is that it would enable reviewers to communicate directly with the authors to ask questions or get more information. As a reviewer I have many times wanted to fire off a quick email to the authors to resolve something I was unclear on, but the need to keep the process anonymous (so I can pass judgment on the value of the paper) means everything has to go back via the editor. This makes it impossible to open a dialog with the authors and get the best result for science.
This is one of the problems we're trying to solve with Publons.com. We help reviewers to build an officially verified record of past reviews (which is great when it comes to getting tenure). Editors can then use those records to vet candidate reviewers.
The current academic peer review system is an example of a process that worked fine when the number of participants was small but fails at scale. I'd like to see something like Hacker News or Reddit's social approach to vetting papers in addition to any formal review process, perhaps with additional moderation from known experts. There is currently no site for community review of articles or papers, and I think it would be a great resource for scientists of all kinds.
For physics and astronomy, there is the arXiv. Essentially all the papers (at least for astronomy) are posted there. The next step would be to add a community voting/vetting scheme on top of that. Statistics could be displayed alongside the article, and access to more detailed comments could be facilitated. Version tracking would enable people to update their papers to address criticism and suggestions. (arXiv already does allow people to replace their papers with new versions, and they retain the old ones, but there isn't really an easy way to "diff" two versions of the paper).
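For the diff problem specifically, even a plain unified diff over the LaTeX sources of the two versions would go a long way. A minimal sketch using only the standard library (file names are hypothetical; you would grab each version's source from arXiv by hand):

    # Diff two versions of a paper's LaTeX source.
    import difflib

    with open("paper_v1.tex") as f:
        v1 = f.readlines()
    with open("paper_v2.tex") as f:
        v2 = f.readlines()

    print("".join(difflib.unified_diff(v1, v2, fromfile="v1", tofile="v2")))

A proper implementation would need to handle multi-file sources and rendered output, but line-level diffs already catch most textual revisions.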
Epistemio allows rating publications as well as publishing post-publication peer reviews. Even though ratings and reviews may be anonymous, reviewers are authenticated, in order to prevent the kind of issues mentioned in the target article.
Check out pubpeer. It is supposedly post-publication peer review, but in practice is usually limited to people making accusations of data manipulation. The conversations can be a bit more chaotic than is ideal for peer review.
More than just peer review, what is crucial is replication of experiments by many groups. So often, experiments prove too difficult to replicate, rendering the results unusable.
Onarbor, https://onarbor.com, is trying to tackle the scaling issue you mention. Regarding reviews and reputation building, it's very much in the model of HN, Reddit, and Stack Overflow. There is also a Kickstarter-like funding component. Worth a look.
The entire scam seems to be possible because there is no attempt at confirming the credibility of the people presuming to actually do the peer review. This type of decentralized authentication is precisely what PGP is particularly good at.
I have to imagine that it would be trivial to design a system which ensured that only actual scientists could peer review a paper, based on other (recognized) scientists having signed their keys.
Journals of higher credibility (Nature, Cell, etc..) might even require reviewers that had more authority/signatures on their keys.
I don't know if the problem is trivial to solve, but designing a system which engages one element of the problem is trivial. The part that I was focusing on was ensuring that any paper that claims to have been reviewed by a scientist actually was reviewed by that scientist.
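To make the keyring idea concrete, here is a toy illustration (all names and the endorsement graph are invented) of the kind of threshold check a journal could apply:

    # Toy web-of-trust check: require a reviewer's key to carry enough
    # signatures from already-recognized scientists. All data invented.
    endorsements = {
        "alice": {"bob", "carol", "dave"},  # who has signed alice's key
        "bob": {"alice"},
    }
    recognized = {"bob", "carol", "dave"}  # seed set of known scientists

    def may_review(reviewer, threshold):
        signers = endorsements.get(reviewer, set())
        return len(signers & recognized) >= threshold

    print(may_review("alice", 2))  # True: three recognized signers
    print(may_review("bob", 2))    # False: no recognized signers

A higher-credibility journal would simply raise the threshold, per the Nature/Cell point above.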
Great quote. It gets me thinking about Facebook. FB is assuredly a massive technical solution to a social problem. I get Bruce's sentiment, but there are exceptions to this rule.
Yes, the social solution that Facebook presents is ideal for keeping in touch with a loose or geographically diverse set of real-life friends. It is a technical solution to an admittedly simple social problem, but a solution it is. Prior to Facebook I used to send group emails; people would leave jobs, change email addresses, etc., and then came the cc replies, all out of order. A mess, solved well by Facebook, and still really its only useful feature.
There's a registry that's trying to be a centralized identity database for all researchers called ORCID: http://orcid.org/ although I don't think any real-life authentication is really built in, so I think it's trivial to create an ORCID ID pretending to be someone else.
Wow! I have participated in peer review and the system can work--here, it seems like the editors were lazy. I wonder how many papers a typical editor is responsible for per day? I can understand researchers wanting to exclude groups that they are competing with. I always assumed that editors would check suggested referees against obvious conflicts of interest, but perhaps there really are too many papers and too many groups for editors to manually keep track of this these days? Perhaps some checking for recent coauthorship and institution could be performed? Ultimately, the safest way would be to simply not allow people to suggest referees. Editors should be able to find people in the field (based on the literature) who could potentially referee the paper...
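That coauthorship screen would be crude but cheap to automate. A toy sketch (the publication records are invented; a real system would query a citation database):

    # Flag suggested referees who recently coauthored with the submitter.
    # Records invented for illustration.
    recent_papers = [
        {"authors": {"A. Smith", "B. Jones"}, "year": 2013},
        {"authors": {"B. Jones", "C. Wu"}, "year": 2010},
    ]

    def conflict(submitter, referee, since_year):
        return any(
            {submitter, referee} <= p["authors"] and p["year"] >= since_year
            for p in recent_papers
        )

    print(conflict("A. Smith", "B. Jones", 2012))  # True: coauthors in 2013
    print(conflict("A. Smith", "C. Wu", 2012))     # False

Checking shared institutions would work the same way, given affiliation data.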
The basic problem is that I think there is value to having referees who are anonymous to the submitter. That way, there is no problem with someone relatively new offering a strong critique of a relatively established researcher. But someone (the editor) needs to know that the referee is competent. Also, competent refereeing takes work. For example, I refereed a "methods" paper awhile back and since the method would likely be encoded into software (and there are a lot of people that don't repeat derivations), I actually went through their integrals, checked for conceptual errors, (-) sign errors, integration errors, etc. It's community service and necessary--but it does take time.
Yes, authors should never be able to suggest reviewers. I always hated this, as I knew the whole process was so corrupt. Not all journals ask for this, but it seems to have become more common.
> researchers exploited vulnerabilities in the publishers' computerized systems
...
> But even the most secure software could be compromised.
This is nonsense; it has nothing to do with software vulnerabilities. It would be no more or less "secure" if I mailed in a printed paper with a cover letter saying "Thomas Edison would be a suitable reviewer, his phone number is 555 5555 5555".