What's interesting here is that this is not a technology problem. Nor is it a science problem. Nor economics. It is, strictly speaking, a problem of sociology. The things ostensibly being bought, sold, and traded are not the true determinants of value. The value of Science to an investor is a function of its monopoly position in the distribution of its content, and of its derivative value as a "citation" flag to grant-writers and tenure committees. The demand to "feed" this system is really driven on the supply side. The monopoly on the back end is just an artificial scarcity model and barrier to entry that indirectly feeds back into the supply function. That is to say, the "citation" credential needs artificial scarcity to function in its role of extracting derivative rents. It is the grant writers and the tenure committees that ensure they have this subsidy, as a negative cost (free labour) of production. While none of this is new to anyone reading here, the mechanics are worth thinking through, because the system is sponsored by public money (the grants and, indirectly, the academics) on the one hand, and by public policy on the other (monopoly rights to the IP that comes out of publicly funded research are given away for free to monopolist publishers). The sociology of all of this is about avoiding "mistakes" in the allocation of funds and tenure; that is to say, the abdication of the experts... to other "experts". But as the article points out, the grant writers and the tenure committees are abdicating responsibility to... journalists, not scientists or actual experts in their fields. At least, that is the essence of the critique leveled herein.
Interesting how they don't use the word "open" even once in that article, whereas the Nobel prize winning scientist makes that a cornerstone of his proposed solution:
It's probably worth pointing out that open access, while important, isn't really relevant to this discussion as publishers like Nature already embrace open access.
(disclaimer, I work for Digital Science, which is owned by the company that owns Nature)
> It's probably worth pointing out that open access,
> while important, isn't really relevant to this discussion
> as publishers like Nature already embrace open access.
Uh, I just tried accessing a random article in Nature and hit the $32/article paywall. So no, I don't think so.
Just in case anybody wants to read a bit more about this: this is Green Open Access, which usually means the publication is embargoed for a while but doesn't cost the author (or their institution) anything extra. The other option is Gold Open Access, which, depending on the journal, can get rather pricey, but makes the article openly available immediately.
Yes. Science and Nature are different in this regard. Science also has much lower subscription fees, probably because it's run by a nonprofit, whereas Nature is for-profit.
He is promoting his "own" open access journal. I'm not saying that is a bad thing. Just trying to understand the comment above about not mentioning "open"
Sounds good. I have no problem with anyone saying "this is a bad thing, I have a better one, check it out" - so long as they are honest and open about it - which this guy appears to be.
It's sort of like Linus and his "I've written a controller, check it out" email, eh? :->
This is an impossible system to break when grant distributors effectively require that you have recently published in a luxury journal in order to garner funding. The same problem happens with tenure track committees.
Most of the scientists I know are desperate for funding now, and so they have made an even bigger push to publish in a big name journal. It's an insane system, but I've never heard of a viable solution for it.
I just want to put this thought out in the open: what about crowd-funding? Would that ever work for research? Like, you could make all the papers free to read, and at the end provide a link to donate to the research. Or they could showcase: this is what we are working on, this is how far we have gotten, now we want to do this, but for that we need this amount of money. Help us out?
Would something like that work? Or is the primary success of crowdfunding down to people seeing it as a "pre-order market for cool new gadgets" rather than "this thing should exist! I'll help!"?
This is really a problem with our government. The NSF is shrinking and is now too small to support the number of physical scientists in the US. If funding were plentiful and actually scaled with the size of academia, then people wouldn't be so desperate to make a show of their research. It is common practice to claim your research will result in magical leaps in technology, such as curing cancer while making cars more fuel efficient, which is essentially lying to get funding. On a side note, this is one of the reasons why science news articles are often so far-fetched and inaccurate.
Certainly, making journals free and open is a step in the right direction; it reduces the financial strain on universities and researchers. It also opens the door for amateur scientists to do solid independent (predominantly theoretical) work.
I should be clear though, we already have a kind of crowd sourcing system in place: taxes and government funding agencies. The solution isn't to implement crowd sourcing, that would be like putting a band-aid on a leaky submarine. The solution is to correct the system we already have in place.
A central problem with crowd sourcing directly to the public is that the public _cannot_ be swayed to fund science that isn't sexy. A portion of fundamental science is actually pretty dull and difficult to explain, but needs to be done for the sake of progress.
> Certainly, making journals free and open is a step in the
> right direction; it reduces the financial strain on
> universities and researchers. It also opens the door
> for amateur scientists to do solid independent
> (predominantly theoretical) work.
Actually, open access journals are more expensive for universities and scientists. Instead of the university paying a fixed subscription fee out of the "overhead money" it collects, the scientists themselves have to pay a fee per paper published out of their own grants.
Nothing prevents the university from paying publication fees out of overhead. In fact, many universities do so. Additionally, it should also be pointed out that eLife, where Randy is the Editor-in-Chief, is open access and does not charge publication fees.
I can't defend PLOS's publication fees; I don't know anything about them. I can say it costs roughly $5,000 to make a journal article open access in most journals I am familiar with. So $2,500 doesn't seem so bad by comparison, although it is still clearly too high. There is also the question of whether these are optional fees.
I'm not sure that Kickstarter-style crowd-funding would work well for basic research. Two of the keys to its success are rewards, meaning that people who fund the project actually get something for their money, and projects of a reasonable scope that can be completed within a year or so (while there are definitely projects that last longer, those also tend to be the ones that people are least happy about).
Basic science research generally doesn't produce very much that would be attractive as rewards, and lots of good research takes years to complete.
Remember that crowdfunding is not just for new gadgets, but in general it's for new things that people want to consume; gadgets, art, music, games, and the like.
There is some crowd funding for science, but it's pretty much small potatoes - it'll cover a trip, or maybe a single publication, or the budget for some preliminary data.
Even tiny, "Pilot Study" grants are often for an order of magnitude more than usually comes in via crowdfunding. And some of the large studies...just isn't going to happen without major changes in how and what people fund.
What do you call it when tens of thousands of volunteer birdwatchers pay for their own personal efforts (in time and money) to do bird monitoring in the UK? Cumulatively that's a lot of money, not "small potatoes."
The BTO isn't the only example. There are many others (e.g. the BOINC projects, Planet Hunters, EyeWire, The Amateur Sky Survey).
Maybe you'll object, saying that's not "crowdfunding." If it's not, then what is it? There's definitely funding being spent, on scientific research, by "the crowd" (i.e. the general public).
It is, and those are great examples. But it's only accessible to the most sexy, most accessible, most 'fun' disciplines. The harder stuff just cannot be crowd-sourced right now. Biomedical research is expensive, dull, and removed from most peoples' experience, and so it'd be very difficult to get a crowd interested in funding basic biomedical research.
But I think biomedical research can still attract people. Nobody likes diseases; everyone would be happy to contribute to eliminating diseases or coming up with better cures and surgeries.
What's tough is getting funding for weird theoretical quantum mechanics stuff. How do you ask for funding for things people have absolutely no idea about?
Maybe you could crowdfund the first expensive study into CACK-3 protein, which may or may not be related to disease X. But after the 100th randomly named protein investigation, people are going to be bored and sceptical that any progress is being made. These studies just say protein X seems to be related to protein Y. It takes an excruciatingly long time to make progress on the high-level goal of understanding a pathology. I bet half the bio people doing experiments don't even know why they are doing them either.
A classic example highlighting the difference between research/discovery and development.
Kickstarter and their crowd-funding ilk are fine when it comes to Developing a product but good luck getting funding for anything as iffy as R&D, more so in the biomedical realm.
Yup, it's usually a case of trying to interpret what the PI told you to do. They usually don't have time to actually explain the context. Or to make sure that what they're requesting isn't something entirely impossible.
I like the idea of having some kind of "reward" for funders. As others mentioned, I'm not sure yet how to adapt this sort of thing to completely new projects, or to something that isn't flashy like research on AIDS/cancer/MS etc.
Why would you say that? I have recurring billing for several services, like Dropbox, Linode, Digital Ocean, AWS, auto-domain renewals, some subscriptions to digital content, etc.
It's not only illegitimate businesses that use recurring billing!
Honestly - ex-academic here - I've come across a couple of research topics that I'd really like to publish on, and I have zero interest in pissing through a peer review process.
I am thinking of throwing something into the glossy professional journal, and maybe doing a whitepaper or two on the corporate website. That way, people will link and refer to it if it is good, and they won't if it is not - and it will disappear or be referenced accordingly.
I intend to include the data and code, so everything can be verified independently.
That the rest of the publishing world is simultaneously going down that same path seems to me an inevitability.
You might also think about putting it on the arXiv (arxiv.org). I think they take articles that haven't been submitted to journals, and it is a bit of a recognized source of articles (there are citation guidelines for arxiv articles for example).
These top science journals may want to look at Encyclopedia Britannica's recent history, because the article's quote from an executive editor at Science sounds suspiciously like what people at Britannica said as they watched what could have been their opportunity pass them by as if they were standing still.
> Monica Bradford, executive editor at Science, said: "We have a large circulation and printing additional papers has a real economic cost … Our editorial staff is dedicated to ensuring a thorough and professional peer review upon which they determine which papers to select for inclusion in our journal. There is nothing artificial about the acceptance rate. It reflects the scope and mission of our journal."
On another note, if I were an enterprising graduate student looking for a project to work on, I'd see this as a great opportunity to work with a Nobel-prize winner on something that matters enough to him that he took this stand to promote science, taking time away from his other passions.
Britannica could make thousands, even millions, off their old catalog without producing a single new issue! As a former digital librarian, I can tell you it is a serious racket!
> Monica Bradford, executive editor at Science, said: "We have a large circulation and printing additional papers has a real economic cost…
One wonders if Science is one of those journals that invites reviewers to work for free because being a reviewer is so prestigious. Unpaid review constitutes a significant subsidy to the academic publishing industry: http://www.timeshighereducation.co.uk/402189.article It seems to me that the marginal cost of publication is actually fairly low, especially given the rather exorbitant prices the top journals charge just to read a paper ($30 for non-library purchases, which is more than the cost of the print edition).
In our field, it is time to support USENIX over IEEE and ACM, over the Open Access issue, too. I'm boycotting IEEE and ACM until they support Open Access.
Amazingly, they're going after authors making their own papers available!
IEEE is ridiculous. Sometimes I feel ashamed for admitting that I maintain an IEEE membership (which is really just for the journals anyway). And then I get even more annoyed when I look at how I pay IEEE something around $800.00 / year for all my society memberships and journal subs, and still routinely find that I don't have access to something I find while Googling, without paying another $30.00 or more.
This is all really leading me to want to get more involved in the whole IEEE governance process and maybe try to help spark some kind of initiative to cut down on the access fees for IEEE content. Or maybe an initiative like that already lives... I'm not sure. If there is, I feel like I should find it and get involved.
IEEE used to be a quasi-viable way for an individual to get decent insurance.
I love IACR, IFCA, USENIX. I kind of wish there were a "hacker con" organization equivalent to IFCA, covering the regional cons, defcon/blackhat, etc., for things like providing a searchable repository of papers, talks, etc., and maybe a common cfp repository.
Not quite sure this is true about the ACM. From their policy about rights permanently retained by the author [1]:
* Post the Accepted Version of the Work on (1) the Author's home page, (2) the Owner's institutional repository, or (3) any repository legally mandated by an agency funding the research on which the Work is based
* Post an "Author-Izer" link enabling free downloads of the Version of Record in the ACM Digital Library on (1) the Author's home page or (2) the Owner's institutional repository;
This wouldn't allow the author to post it to another repository, if not legally required, distinct from his home page. (It was IEEE which was more aggressive recently, but from reading Matt Blaze, it looks like ACM and IEEE have traded off that role; neither is as bad as Elsevier, but they're allegedly 'on our side' vs. a commercial publisher, so anything short of full OA brings dishonor to them.)
The key is to start developing better ways to measure researcher productivity. That's where all the different forms of altmetrics come in. For us (Publons.com) we're using peer review itself to generate article level metrics (and give reviewers credit for it) that focus on the quality and significance of a paper, not just its popularity.
Are there any funding sources focused entirely on the unsexy stuff? E.g. showing X is reproducible or that targeting gene Y does not do anything interesting. Seems like an area where targeted NSF funding can help.
I'm curious, because I don't really know this system, if other scientists feel this way. I have some inkling that journals can often publish erroneous papers (maybe I got that idea from Nate Silver?), but that would be so very unfortunate for Science's reputation. I hope it's not that bad.
Journals like to publish sexy/novel papers. That can lead to them publishing some things that are "hot", but not necessarily well proven. The arsenic bacteria from a couple of years back is a good example [1].
In their rush to publish, solid work which may not be as sexy can get pushed to lower-tier journals, whereas "novel" things that aren't as well proven can get elevated to top-tier journals. At this level, things aren't necessarily the pure meritocracy that we like to think they are. Then again, these are for-profit publishers. There is nothing that dictates what they have to accept, and they can choose what they want to publish. The only thing that will change the system is large-scale or big-name boycotts like this one.
The Nobel laureate profiled in this article researches biology, and has won the Nobel prize for medicine and physiology. You would hope that research on medicine, of all subjects, would be published in journals that never make mistakes, but Science, arguably the most prestigious journal in the world, definitely published a mistaken article about cell biology quite recently,[1] so we have to wonder how well even the most prestigious journal does peer review of what it publishes. A medical doctor who studies the scientific research process in general thinks that a great many published research findings are probably false,[2] so there have to be greater efforts on the part of scientists to get peer review right, and improve scientific publication practices.
From Jelte Wicherts, writing in Frontiers in Computational Neuroscience (an open-access journal), comes a set of general suggestions [3] on how to make the peer-review process in scientific publishing more reliable. Wicherts does a lot of research on this issue to try to reduce the number of dubious publications in his main discipline, the psychology of human intelligence.
"With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research. We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses."
>You would hope that research on medicine, of all subjects, would be published in journals that never make mistakes
I wouldn't hope this, and maybe it's pedantic, but there's an important point to make here. Scientific articles are evidence, not truth, and people (and systems) make mistakes. The hallmark of a robust system is not the absence of error, for it is arrogance to believe you've ever achieved that and foolish to have it as your absolute goal. What is important is setting a reasonable expectation of accuracy, understanding the types of errors committed and their root causes, and reacting responsibly and transparently when errors are made.
A system is much more trustworthy when it has a solid history of errors with reasonable responses than a system which claims or appears to suffer from no error at all.
Exactly. Reviewers expected to be experts in a field can only check claims against their own knowledge.
Their role should be recognized as limited to that, not as having some monopoly on the "real truth". Fear of error could make them overly protective, if not paranoid.
In my view, scholarly communication should be open, but with mechanisms to weed out obvious crap and mechanisms for efficient self-correction.
And as for followup papers that contradict/correct the previous one, none of the journals make it easy to tell they exist or anything. So people keep citing the original erroneous paper. That last link in the previous paragraph puts some numbers to the scope of this problem if you look at it (16 citations for the retraction, 976 citations for the bogus unreproducible paper, 700 of the latter coming _after_ the retraction was published).
OK, Tenured Professor at UC Berkeley with loads of Nature, Science, and Cell papers who recently won a Nobel Prize ... I'll go against the grain here upon your command and not follow in the footsteps of those who made it to the top of the system on the back of luxury publications.
This almost sounds like they should take away his Nobel prize immediately. Abusing the spotlight to promote his own journal is just ridiculous. (Open access != free)
Bonus points for getting the Nobel prize through publications in those journals, and only denouncing them after he won.
"His" journal was established with an ideology in mind, not to enrich or empower Sheckman himself. He's promoting his journal because he's the only one around 'doing anything about it'. And I accept that kind of promotion. As another poster mentioned, this article doesn't mention 'open access' - but only in the last few years has a reputable 'open access' journal even been an option for biomedical researchers. And for that Sheckman and his journal is owed a fair amount. Science cannot be entirely free (as in beer). But the knowledge gleaned from it can and certainly should be free (as in speech).
Currently, your choices are Cell Reports[1] or eLife[2].
Better late than never, I say. Nor do I think he waited to win his Nobel to promote open access, as his editorship of an OA journal attests.
It's also worth pointing out that eLife is actually free to publish in, at least for the moment. The founding organizations (HHMI, the Wellcome Trust, and the Max Planck Society) are footing the bill for the first few years.
Try waiting until that actually happens before you get outraged over it. Getting yourself worked up over hypothetical situations that play out in your mind is not good for your health.
Apparently publishing in this journal is free (no fee required) and articles are accessible on the web for free. So to me it's free as in free beer. However, there also seems to be a rigorous review process, so eLife is not as open as Reddit or Hacker News. ;)
Here's to a world where a researcher who hasn't won a Nobel prize can do the same to journals that manipulate impact factor without regard for the consequences to science!
Indeed. It's a step forward, sure, but it's easy if you have papers in Cell, Nature and Science (which he does) and a Nobel to say "Yeah, nuts to this".
I'd be much more impressed by a public stance defending young tenure track faculty making a commitment to open access.
Easy or not, this kind of statement from big names is exactly what is needed to solve the problem. If I say this kind of thing (as a young researcher with a modest track record), no one will listen to me. If every Nobel prize winner said it, maybe the politicians who set the evaluation criteria for grants and tenure (yes, they are often scientists, but they are doing politics) would be forced to change things.
It is sad that the grant system is focused on selecting "elite" people, rather than improving the system. Improvements to the "research system" should be encouraged.