Hacker News
Mathematicians aim to take publishers out of publishing (nature.com)
194 points by ananyob on Jan 18, 2013 | 43 comments



Seems rather behind the times. MSP (http://msp.org/about/) and others seem to have already shown that "researchers can organize the peer review and publication of their work at minimal cost, without involving commercial publishers".

I understand that this link is talking about something slightly different (basically collecting links to arXiv instead of "publishing a journal"), but I also don't think this is as groundbreaking as Nature makes it out to be.


I think MSP is a great publisher, but it still functions more or less like a traditional publisher. I believe their journals are reasonably priced, but this initiative seems to go a step further: it is an attempt to do away with publishers entirely.

The argument in favor is that publishers do not add all that much value, and that the community would rather do without the copy-editing the journals provide and instead have completely open access without any charges.

Of course there are already open access journals, but sometimes you have to pay to publish there, and what makes this initiative different from the ones so far may simply be momentum: perhaps this time it will indeed start a permanent transformation of the way mathematics is published.

The current push, starting with the Elsevier boycott and led by Gowers, seems to have support from a substantial part of the community.


Right. I definitely don't mean to imply that there's not room to improve on the "traditional publishing" model that MSP uses. I only mean to point out that the Nature article is rather over the top in terms of the prospective changes that it attributes to the Episciences Project, and that an enormous amount of good work has already been done in the "reforming math/science publication" space in the past 10 years or so.


The difficulty in escaping Springer/Elsevier is not that researchers are too dumb to organize their own journals/peer review. Quite the opposite: we already do that FOR Springer/Elsevier.

The difficulty is that existing publishers own journals which have gathered significant prestige over decades and sometimes centuries. And hiring boards, swamped and unable to take a closer look, often just look at which journals a prospect has published in. That means if you're not submitting to these prestigious journals, you're crippling yourself. And by submitting to them, you're adding to their prestige, making it a vicious self-feedback loop.


But if you get a prestigious mathematician, such as Gowers, to back your new journal, you may get some traction.


Or if you get the entire editorial board to resign, and start a new journal.

This happened with the board of Topology (http://www.ams.org/notices/200705/comm-toped-web.pdf), who then set up the Journal of Topology. The new journal isn't open access, but it is cheaper. Then the editors of K-theory resigned to set up the Journal of K-theory, which was less than half the price of its predecessor.


This remains a problem: researchers in many fields don't appreciate the utility of true open access (free to read and free to reuse).

see: http://www.youtube.com/watch?v=L5rVH1KGBCY

Cheaper is nice, but free (and machine accessible!) lets us do things that have never been done before.


I think this varies heavily by field: some disciplines are more collaborative (e.g., CS) and others more competitive (e.g., biomed).


So can't you submit some papers to the prestigious journals, and others to the not-so-prestigious journals?

Out of interest, does prestige really advance society and/or the sciences in any way? Genuine question - if the answer is yes, then how does it do this?


To give an example of a prestigious, free, open-access journal: the Journal of Machine Learning Research (http://jmlr.csail.mit.edu/)


What's so great about MSP? They're not-for-profit, great, but their journals are still prohibitively expensive (http://msp.org/forms/orders/). Clearly their articles are far from being open access, as all science should be.


You're right, it's old news, but I do think that the more we hear of this, the more likely it is to start a movement rather than remain a novelty.


There have been many open/free publications in the scientific community. The basic idea is that if most of the work (writing, formatting, reviewing, and selecting) is done by researchers for free anyway, why should the same researchers (or their institutions) pay for the privilege of accessing these papers?

Such efforts remain marginal because, in my experience, the tenured professors who decide which work to publish where tend to select well-established commercial outlets, which supposedly have a better name/impact/reputation/fame and all that goes with it.


I found the arXiv server running costs astounding ($800,000/y). Could anyone comment on that? Is that typical?


This is an error. That figure is the total operating cost. Server costs for 2013 are budgeted at $41,700:

https://confluence.cornell.edu/download/attachments/12711648...


I don't think it's an error, and even if it is, they still got the right message across. The text says it costs $826,000 to run arXiv's servers. What they meant was the cost to operate arXiv itself, but the main purpose of arXiv is those servers, so I think it's valid to say "run the servers" as a proxy for running all of arXiv. That the physical hardware costs $41,700 is not the most important thing. The important figure here is the total operating cost, since that's what keeps the whole show going.

This is a good demonstration of the fact that people are generally more expensive than hardware, and that paying people to do something as their full-time job is expensive.


They spend half their money on support and admin people. I honestly can't fathom how a site like arXiv could require three full-time user-support people with part-time students helping...

They have 1.5 FTE listed as devops for arXiv proper; the extra 0.75 FTE in 2013 appears to be dedicated to this: http://invenio-software.org/

The actual cost to 'run the servers' appears to be around $200,000 - $250,000.

The rest is going into support and, I guess, a full-time person to just "manage", whatever that means for a project of this type/scope... fundraise, I guess? I would have thought you'd have volunteer advisers helping with that, given the nature of the project.


You guess? The linked report says 0.5 FTE for "management and administration."

I think it's pretty unrealistic to suppose that managing seven people alone wouldn't require at least a half-time job, not to mention fundraising, interacting with the advisory board and the volunteer moderators, and outreach to users and policymakers.

It's also unrealistic to think that running an article-management system that gets 6,000-8,000 submissions per month from users all over the world, of varying technical ability, couldn't occupy three full-time user-support staff.


You're right. I'm not a native English speaker and like kghose I was a bit surprised by this number (hence my search for the arXiv budget). After reading your comment I see the ambiguity there.


I was surprised, too, which is why I also read through the budget you linked to.


As a rule of thumb, triple the salary to get an idea of the total cost; this is for bog-standard jobs.

Very technical organizations (Bell Labs, JPL, F1 teams) can have overhead rates of 600%+, like the top-ranked R&D organization I worked at; it costs a lot to keep those very expensive shops and master-level craftsmen and engineers on staff.
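
A minimal sketch of that rule of thumb in Python (the salary figures below are illustrative assumptions, not arXiv's actual numbers):

    def loaded_cost(base_salary: float, overhead_rate: float) -> float:
        """Fully loaded cost of an employee: base salary plus overhead
        (benefits, space, equipment, admin) as a multiple of salary."""
        return base_salary * (1 + overhead_rate)

    # "Triple the salary" for bog-standard jobs is roughly a 200% overhead rate:
    print(loaded_cost(80_000, 2.0))   # 240000.0

    # Very technical shops at 600%+ overhead:
    print(loaded_cost(80_000, 6.0))   # 560000.0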


From Jelte Wicherts, writing in Frontiers in Computational Neuroscience (an open-access journal), comes a set of general suggestions

Jelte M. Wicherts, Rogier A. Kievit, Marjan Bakker and Denny Borsboom. Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front. Comput. Neurosci., 03 April 2012 doi: 10.3389/fncom.2012.00020

http://www.frontiersin.org/Computational_Neuroscience/10.338...

on how to make the peer-review process in scientific publishing more reliable. Wicherts does a lot of research on this issue to try to reduce the number of dubious publications in his main discipline, the psychology of human intelligence.

"With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research. We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses."


Scholastica had this functionality 11 months ago: http://blog.scholasticahq.com/post/17592143685/arxiv-integra...

What can a small startup like ours do to bring attention to our solution?


It seems like you guys have had a marketing/awareness issue for a while. In fact, I'd love to talk to you about it off-HN; I've wondered if we (@ scholr.ly) could find a way to collaborate.


Sure, you can email me at rwalsh [at] scholasticahq.com


Could we make peer review even more valuable if at least some parts of the review process were eventually made available to the public? I think of this especially when I hear the occasional story of fake articles making it through the process.

I can imagine the list of "reviewers & comments" some day becoming as important as the list of citations. Would reviewers be better motivated if they knew that their feedback and requested changes were going to be part of the public record? Or would the whole process degenerate into writing meaningless sound bites for the scholarly equivalent of dust-jacket blurbs?


So now you're a young researcher and you receive a manuscript for review. It's no good and you pan it mercilessly in review.

Then your review gets published, and it just so happens that the author of the paper is a senior person at the university where you apply for your next job. Oops.

So more-junior academics might start being reluctant to recommend rejection of any paper. Or, if they are told who the author is (which they generally are), reluctant to recommend rejection of any paper whose author might ever be important to their career.

These seem like really bad incentives.


The idea that appraisal of academic work must be made in secret does not ring true. Perhaps universities have degenerated into political posturing over pursuit of knowledge, but even this seems like an argument for opening the process in order to reform it.

The young researcher should not have to be the one to tell the senior researcher the work is crap. This is why the process is called "peer" review. Of course, the senior researcher should not be submitting crap, and should be told bluntly if she or he does.

But perhaps I'm missing the point and "peer review" is now doublespeak for the process of editing a paper. In this case, the academic publishing racket appears even more absurd.


Academia is rife with politics. I've seen some of this up close through an ex-girlfriend, and I was frankly shocked at the pettiness and vindictiveness of it all. Furthering science didn't seem to be at the forefront of the minds of those participating; it seemed to be mostly about ego. This was in the field of biology; possibly other fields are better or worse.


I am an academic mathematician. Happily, I have observed little of what you described. Ego, sure, but pettiness and vindictiveness, no.


Math may very well be a huge exception to this, because mathematicians can test their proofs objectively rather than offering partial opinions or fragments of understanding of a greater whole. It's a discovery process with "true" or "false" embedded right into the core of the science.


Hence the old saying:

"The politics of the university are so intense because the stakes are so low"

Certainly, in my six years working in academia I was surprised at the long-term vendettas and bitterness that seemed to be common in what was, in reality, a rather pleasant working environment.


I'd like to get a better sense of the kinds of conflicts you observed. Did people get upset because someone panned their article? Or would they be upset if a colleague panned a friend's article? What triggered the pettiness and how was it expressed?


This is a valid point, but in my experience, technical people can't help criticizing things.

The typical computer-science conference review process is single-blind: authors do not know the reviewers, but reviewers know who the authors are. Some CS conferences are now doing double-blind review: reviewers also do not know who the authors are.

Personally, I would prefer reverse single-blind: authors know who the reviewers are, but reviewers do not know who the authors are. Why? I think it's good to try to prevent reviewers from being biased (either way) by names or institutions. But I think that reviewer anonymity has led to some really crummy reviews; people don't read papers carefully (if they read the whole thing at all) and throw out a paragraph or two. I think that if reviewers had to sign their names at the bottom of their reviews, they would do a better job. I try to adhere to that when I review a paper: would I feel embarrassed if my name were attached to this?
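
To make the variants concrete, here's a rough Python sketch (my own summary of the schemes discussed here, not anyone's official policy):

    # Each entry: (authors see reviewer identities, reviewers see author identities)
    BLINDING_SCHEMES = {
        "single-blind":          (False, True),   # the typical CS default
        "double-blind":          (False, False),
        "reverse single-blind":  (True,  False),  # the variant I'd prefer
        "open review":           (True,  True),
    }

    for scheme, (authors_see, reviewers_see) in BLINDING_SCHEMES.items():
        print(f"{scheme:22} authors see reviewers: {authors_see}, "
              f"reviewers see authors: {reviewers_see}")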


It's not that easy. From the papers cited it may be easy to deduce who the author is. Typically, projects result in many papers, not single ones, so it is natural, and desirable, to refer to previous work.

Additionally, knowing the authors may make it easier to check whether an author is committing self-plagiarism (sending almost the same thing to a few different places).

More on that topic: http://academia.stackexchange.com/questions/625/open-versus-...


That is also a problem with double-blind reviews, and I'm quite familiar with the problems (and annoyances) of anonymizing a paper. However, my main point was about making the reviewers sign their name. I am unaware of a CS community that does so.


Double-blind isn't really anonymous. It's pretty easy to tell who the author is from the subject, citations, and writing style.


It leaves some doubt. I've been able to figure out that a paper likely came from a certain group, but you still don't know for sure, nor do you know the order of the authors. It also benefits people from lesser-known institutions.


There's been a recent push for optionally open reviews; I know some of the PLoS properties allow it.


This also happens organically. For instance, the journal with the best impact factor in JCR 2011's statistics & probability category is the Journal of Statistical Software, which is open access, CC-BY, and fully gratis.


Does anyone else find it ironic that this article is from Nature, one of the biggest scientific publishers?


Nature is not a publisher; it is a publication of Macmillan (the Nature Publishing Group is a division of Macmillan).


I would like to see them run their software project as open source. It might save them some money.



