Time to break academic publishing's stranglehold on research (newscientist.com)
697 points by joeyespo on Nov 25, 2018 | 300 comments



Academic publishing is a favorite recurring topic on HN, and it's one I've occasionally dipped into discussing, although these discussions are typically 99% one-sided and devoid of nuance or reasoned arguments. It's like discussing politics online.

I'm a shareholder and board member of a large privately-held, family-owned academic publishing company. If anyone is interested in trying to understand what makes the industry work, why it's so hard to disrupt it, etc. I'd love to engage or put you in touch with people within the industry smarter than me - my email is in my profile.

I know the industry is particularly frustrating to the HN crowd. We want to think it's a technology problem - that distributing PDFs is a solved problem (which it obviously is). But the roots of the problems (of which there are many) are all cultural and much harder to change. If you're going to jump in and try to "fix" the industry or put publishers out of business, I highly encourage you to engage with folks in the industry with an open mind and really try to understand why things work the way they work. You're not going to have any success unless you truly understand the incentive structure of academia and the social and cultural aspects of inertia that are at play. If you go in thinking you can build a better "publishing" mousetrap you will fail. You have to realize publishers are in the reputation business. And when you start peeling back the onion of how academics are assessed, given jobs, given tenure, etc. you start seeing how hard changing behavior can be.


> You have to realize publishers are in the reputation business.

That's right, and that's also why the prestigious researchers and universities, those with an already established reputation, have a responsibility: collectively leave the editorial boards of for-profit publishers, set up alternative venues with the help of university libraries. Share the archival, indexing and discovery effort among universities via peer-to-peer digital library federation.

All the tools are there. The same way that places like Stanford, Berkeley and MIT made MOOCs a thing, they can revolutionize scientific publishing.

There are already venues like JAIR for AI research (and that was set up looong ago, without all the tech we have today), so it's certainly possible. It just needs to become the norm rather than the exception.


The problem is that the vast majority of established researchers do not care that the system is broken. Why work against the system once you're at the top? Most of them haven't ever thought about it, and the ones who know usually don't do much, because, hey, that's the way it was always done, they are used to the current journals and conferences being prestigious, and they don't want to bother with the unglamorous work of setting up HotCRP or coordinating reviews. Plus they usually want to push their career further and they know perfectly well that fighting against established venues isn't the right way to do that.

As a young tenured academic I have been systematically refusing to do reviews for non-open-access journals. (I'm certainly not senior enough to create a new journal and be taken seriously.) I hoped that this might give some ideas to my peers but it's seen at best as a weird quirk, at worst as a selfish move (because "the community" needs my help to review papers). Usually the answer is some variation of "yeah the system is not perfect".

The only argument I have seen which motivates researchers to care about open access is when it's mandated by funding agencies (e.g., in Europe, the ERC). Researchers really want to get these grants, so when these agencies talk about open access, they listen very carefully. It both makes me somewhat optimistic and quite cynical that the only way to move towards open-access seems to be via funding agencies (as opposed to researchers caring about the problem). Plus, for many agencies, "open access" here means "gold open access", i.e., continuing to work with the usual publishers, who make the articles available online, but move the "costs" (and huge margin) to the authors, with "article processing charges" of $1000-$2000.


This. I used to be an academic a few years back and most of the top academics thought everything was peachy. Also, certain subfields are much worse compared to others. I was a biology transplant from physics and I was shocked that preprints were actively looked down upon in biology and generally in biomedical sciences. The publishing fees often ran to several thousand dollars per publication and mostly bothered only the junior academics. Oftentimes, grants would have a separate budget for publishing, or if you were powerful you could get the university to pay your fees. The people it impacted the most were those with the least power, so it didn't matter. I knew postdocs who paid publishing fees from their meagre pockets because the PIs said they wouldn't or put onerous conditions like adding themselves or other members to the author list.


> the vast majority of established researchers do not care that the system is broken.

I've been arguing that it's a coordination problem.

Aspiring scientists don't have the pull to change the system (if they move individually they only bear the individual disadvantages without bringing about the big collective benefit), and established scientists generally don't care or are too busy with other things [1].

That's why a concerted ("political") move like this cOAlition S thing is so important, and I find it very disappointing that academics speak out against it. I will read more to understand why.

[1] With notable exceptions, like Don Knuth getting the editorial board of the (Elsevier) Journal of Algorithms to resign and move to a new journal, taking the reputation with them.


My problem is that I'm being continuously evaluated on the basis of my publications and the future of my position literally depends on this evaluation every year.

I have to get as many publications into as highly reputed international journals as possible if I want to have any chance at getting a permanent position, and the vast majority of all reputable journals in my area are behind private paywalls. There are at most three reputable, fully open journals, and they are still not as prestigious as the established ones.

If I decided to publish only in open access journals from now on, that would be immediate professional suicide. According to our institute's internal regulations, I'd be gone within two years, maybe even one if they interpret their rules more strictly.


I agree with you and particularly with the funding agencies being the most effective catalysts for change. The good news is that it's working. STM publishing is moving toward open access, the momentum continues to build (most recently with Plan S) and we're moving toward a future where government-funded research will be OA. It'll still take time to get there, but I'm pretty confident that's where we'll get. And yes, that future will likely involve those funders paying a few thousand dollars per article in APCs. And that future will continue to be dominated by the large publishers, Elsevier in particular. But access will be better, so I'd call that a huge win.


MOOCs don't compete against traditional university education.

You're talking about eliminating, or at least sidelining traditional academic publishing. It's a completely different proposal.


> You have to realize publishers are in the reputation business. And when you start peeling back the onion of how academics are assessed, given jobs, given tenure, etc you start seeing how hard changing behavior can be.

there is no doubt about that. but maybe it's time to question how science is done in general.

in the end the goal is to advance our knowledge and bring humanity forward.

but instead of everyone cooperating to do just that, they are competing with each other, and trying to outdo each other. a lot of energy is wasted in preventing others from stealing your research ideas and being the first to publish on a particular topic. instead of looking at the benefits of the research published in a paper, and whether the results can be reproduced, what matters more is how many citations the paper can get.

reputation has become more important than producing actual results. academics and academic institutions are measured not by the quality of their research, but by the number of papers and citations they can produce, to the point that researchers who can't dedicate their life to their work, because they have a family, or worse, are a single parent, can't get a job, let alone tenure, because they can't put in the time required, even though they may well put more effort than others into the time they do have.

so yes, i acknowledge that changing this is going to be extremely hard. but it looks to me like changing the way papers are published will be the easiest step, because the components that actually matter are distribution, which is technology, and reviewers, which are academics.

the only thing that i see publishers doing is to edit the journals and decide what to publish. but shouldn't exactly that also be done by academics?

how about a model like stackoverflow? papers are published like questions, and reviews are the answers. readers upvote good papers and good reviews, so that the most upvoted and most reviewed papers float to the top. the citation count can be included in the score too.
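
to make that concrete, here's a minimal sketch of how such a ranking could be computed (the field names and weights below are placeholders i'm making up for illustration, not a real system):

  # hypothetical paper record and scoring: upvotes from readers,
  # with expert reviews and citations weighted more heavily
  from dataclasses import dataclass

  @dataclass
  class Paper:
      title: str
      upvotes: int = 0      # popularity signal from general readers
      reviews: int = 0      # number of written expert reviews
      citations: int = 0    # citation count pulled from an index

  def score(p, w_up=1.0, w_rev=5.0, w_cit=3.0):
      # reviews and citations count for more than raw upvotes
      return w_up * p.upvotes + w_rev * p.reviews + w_cit * p.citations

  def front_page(papers):
      # most upvoted / most reviewed / most cited papers float to the top
      return sorted(papers, key=score, reverse=True)

the weights could of course be tuned so that expert reviews and citations dominate, with upvotes acting only as a tie-breaker.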

greetings, eMBee.


>how about a model like stackoverflow? papers are published like questions, and reviews are the answers. readers upvote good papers and good reviews, so that the most upvoted and most reviewed papers float to the top.

The problem is that long-form works that require substantial analysis by experts (such as scientific papers read by competent scientists) don't work with a voting system like Stack Overflow's. Internet voting works for things like comments on HN, but not for 20,000-word papers.

(I made previous comments about the limitations of voting systems to vet academic papers: https://news.ycombinator.com/item?id=15269673)

Basically, scientists are humans and human nature says they won't log into a system to upvote/downvote papers. Instead, they want a stronger signal than karma points before they spend (or potentially waste) their precious time analyzing the merits of a long paper. That stronger signal is another respected scientist asking them to look at it.


right. votes would just be for the general population, to show which papers are popular. if votes are used at all.

scientists would focus on actual reviews and citations.

the only challenge i see is how we motivate people to write reviews. perhaps as some sort of a trade. for every paper uploaded you are expected to review 3 other papers. so that each paper can get at least 3 reviews.
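
as a rough sketch of that trade, a simple ledger could track how many reviews each author still owes (the 3:1 ratio and the upload policy below are just assumptions for illustration):

  from collections import defaultdict

  REVIEWS_PER_UPLOAD = 3  # assumed ratio: review three papers per paper uploaded

  class ReviewLedger:
      def __init__(self):
          self.owed = defaultdict(int)  # author -> reviews still owed

      def on_upload(self, author):
          # each uploaded paper creates an obligation to review three others
          self.owed[author] += REVIEWS_PER_UPLOAD

      def on_review_done(self, reviewer):
          # each completed review pays down the obligation
          self.owed[reviewer] = max(0, self.owed[reviewer] - 1)

      def may_upload(self, author):
          # one possible policy: no new uploads while reviews are still owed
          return self.owed[author] == 0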

there is also a potential for making a difference between public and private reviews.

private reviews would be like prepublishing reviews now, and public reviews like opinions about a paper after it's published.


> right. votes would just be for the general population, to show which papers are popular. if votes are used at all.

The last thing we want is the general public voting on scientific research. Any results that don't fit their world view will be voted down.


> readers upvote good papers and good reviews, so that the most upvoted and most reviewed papers float to the top

I suspect such a system would devolve into clickbait and politics very quickly.


well, if it's just like stackoverflow, possibly. although one might take votes solely as a popularity count, and not attach any serious academic value to them. for academic value, the number and quality of reviews and citations would be more relevant.


> which is technology, and reviewers, which are academics.

There is another factor to consider - the outsiders. They need to know which research is good and which isn't, and independent journals giving a stamp of approval to create "branded" science helps with that.


Some folks are trying to do this. The Conversation [1] is meant to be a bridge between academic research and layman's news stories about that research. And Kudos[2] is trying to encourage academics to write more layman-focused summaries of papers to explain the research and appeal to the larger world outside of your particular subfield.

[1] http://theconversation.com/us

[2] https://www.growkudos.com/


the same can be done by independent reviewers.

i would not trust a commercial entity whose goal is profit to decide for me what research is good, the same way that i don't trust movie distributors. i'll look for outside reviewers who have earned my trust because they understand the subject matter.


There are two types of outsiders, individuals and institutions. For an individual, choosing a reviewer to trust can potentially work but it does get harder if you have to pick multiple different reviewers to get good coverage.

For institutions, especially those in charge of grants, something more stringent and organized is needed than "I like that science guy". And those organizations will need to be paid somehow, preferably in such a way that it doesn't create overly perverse incentives.


how about the institutions themselves? it's their money after all.

we pay you to do research, you send us your results, and we publish them.

or someone higher up: if the institutions get their money from the government, then that same government could fund a separate institution to review the results (so as to see whether the money is well spent).

there is still room for commercial publishers, but instead of controlling access to the papers they publish, they get to republish what their audience thinks is interesting. so they are no longer gate-keepers, but add value instead.

and whoever thinks that added value is worth it, will pay for it. universities, and especially individual researchers need not pay for it, because they can go directly to the source.

magazines like nature or national geographic for example. but it could also be a company like microsoft that funds reviews of papers that are interesting for their business. or some independent thinktank. or the government of a different country.

everyone gets free access to the original research. that's what open access is all about. and anyone can fund reviews to suit their needs.


> in the end the goal is to advance our knowledge and bring humanity forward.

That's not true. The end goal of scientists is to get grants so they can get paid. As they climb higher, they get more money for less work. Soon they get to attach their name (sometimes in the first position) to academic papers when they did no work on them whatsoever. To do that, they need to generate papers and publish papers.

Publishers are exploiting this.

> the only thing that i see publishers doing is to edit the journals and decide what to publish. but shouldn't exactly that, also be done by academics?

Academics do not have any interest in this. My wife was a PE at a major STEM publisher with a portfolio of dozens of journals and supervisory responsibility for about a hundred. Academia's technological adoption (which is necessary for an efficient editing/production workflow) is terrifying. Corrections are done by hand. Proofs are faxed. Dropbox blows people's minds. Papers that are accepted by EICs are unreadable -- EICs simply kick the paper to the publisher to get it "produced".

If academics are not going to do publishers' job then publishers are going to control the process and decide what to charge.


i wasn't talking about the goals of individual scientists, but the goal or purpose of science as a whole. why do scientific research? if it's for fame, then you have better chances as an actor.

no, there can only be one reason for publicly funded scientific research. to advance our society.

as for academics having no interest in technology. yes, that is a problem. but that is changing. the younger generations are more comfortable with technology, and i am guessing that if they don't use modern tools, it's because their supervisors can't deal with them, and they are not bold enough to change the status quo. but they will once they are in charge.


> as for academics having no interest in technology. yes, that is a problem. but that is changing. the younger generations are more comfortable with technology, and i am guessing that if they don't use modern tools, it's because their supervisors can't deal with them, and they are not bold enough to change the status quo. but they will once they are in charge.

And that is not going to happen for next fifteen to twenty years.


> You have to realize publishers are in the reputation business.

Everyone realizes this. The problem is that you're abusing your position in this reputation system for profit. Do you think $40 is a fair price to read a paper from 1987? https://journals.sagepub.com/doi/pdf/10.1177/109434208700100...


Nope. I'd be in favor of shorter copyright terms in general. The truth about $40 prices is that they aren't really intended to be the fair price of a good. They're there simply to allow the subscription business model to work. So no, I don't like the fact that we charge $40 for a 31-year-old paper. And I'm a huge proponent of shifting to a non-subscription business model. And the publishers are shifting, albeit slowly and only because they're being forced to. But they are shifting, at least in STM fields, and I'd guess that this $40 byproduct of the subscription model will indeed die out when the subscription model shifts to the open access author-pays model (which, of course, we can also debate for a long time).


When I was young, I bought the C++ standard as a PDF because it was less than $10; I think that was the maximum I was willing to pay for it. Sometimes I find references to research articles I would like to read, but I have no university access to get them freely. If articles were accessible at a decent price (less than $2), I think I would buy a lot more of them. IMHO, academic publishing would still be profitable if prices were low.


I assume the price is probably too high, but for $50/month you can essentially get access to a Netflix for articles, called DeepDyve[1]. It's certainly not complete in that they don't have agreements with every publisher on the planet, but there's a lot of content on there.

[1] https://www.deepdyve.com/


The price of $40 has nothing to do with copyright.


Right, sorry for the confusion. I was responding to two separate items in the question. One was the price, the other was the age of the article. I thought the question was clearly asking about both. Question 1: is $40 fair? Question 2: is that price (or potentially any price) fair for an article over 30 years old? My opinion is that $40 is not a fair price for a single article, and yet it's required to make the subscription business model work. And my second opinion is that the duration of copyright should be shorter. Obviously these opinions are in conflict with the business with which I'm involved, and are certainly not shared by most others within the industry.


> My opinion is that $40 is not a fair price for a single article, and yet it's required to make the subscription business model work.

It's not. Unless it's a way to drive people to buy a subscription. That is: "pay $40 for a single article, or pay just $19.99 a month to access all of them". Which is also crappy.

> Obviously these opinions are in conflict with the business with which I'm involved, and are certainly not shared by most others within the industry.

This is true, unfortunately (same goes for media such as audio and video, sadly).


I'd say, instead, that they've taken over the reputation business, and are burning it down, converting legitimacy into money.


> Everyone realizes this.

They absolutely don't. Every time this subject comes up, there are plenty of people wondering how expensive it can be to host a PDF. It's not; the price they're asking is not for the service of hosting a PDF, but for granting you the privilege of using their brand. A publisher has a monopoly on its brands, which means that there's almost no downwards price pressure.


  Every time this subject comes up, there are plenty of
  people wondering how expensive it can be to host a PDF.
I always assumed that was a rhetorical technique, a sort of jargon-free shorthand for "Given that variable costs are so low but the prices so high, surely this company's managers are either sinfully greedy with their profit margin, or sinfully inept in their inability to control back office costs"


Right, but "inept in their ability to control costs" is irrelevant: the price they ask is for the brand, not for the costs of producing anything.


There are two issues

1. why the system is as it is (and why it's hard to change)

2. what work are the publishers doing that is actually contributing value

The hosting cost is a point regarding 2, and making that point is consistent with understanding 1.


I agree, but I will still argue that people making the PDF download point are often conflating points 1 and 2.


Allow me to turn the tables a little:

- why don’t publishers pay reviewers for the extensive time they spend reviewing the paper?

- why don’t publishers pay academics who edit journals, another time consuming task?

- why don’t publishers pay academics or institutions for a product they sell?

I can’t comment on your company, but the above is the status quo for most of academia.

Do you think this business model is ethical?


As the parent didn't reply, I will fill in:

People trying to change or improve the publishing process _do_ understand academia - most of them work or have worked in academia.

The reason they're not engaging with you is likely that your model is so obviously broken, and arguably unethical, that they don't want to help you or work with you - they want to put you out of business. Personally, I think that is an entirely reasonable response to the behaviour of academic publishers. Publishers have had more than enough time to reform.


I didn't mean engage with the publishers to work together with them. I meant engage in the conversation to really understand why changing the behavior of academics is so hard, so that you can be more effective at doing whatever you are trying to do to put publishers out of business. There are a lot of things that publishers don't like about academic publishing (journal impact factor is highly contentious within the industry), and yet changing the behavior of the academic community is incredibly difficult.


Respectfully, it troubles me a bit to see the burden of responsibility being shifted to "the academic community." On my docket for the day is completing a referee report for a 500-page academic book manuscript. It's my second time reading this manuscript and making pages of comments on it, and the publisher is paying me $80 for my time. That works out to about $2/hour.

As I get further into academia I am amazed at the amount of work we do that is uncompensated yet required of us by the antiquated system of prestige that you mentioned in your parent comment. Perhaps it isn't so much that academics are stubbornly committed to an outmoded system, it's that we literally have no time or energy to do anything beyond the bare minimum of meeting that system's constraints.

Given that the public for-profit publishers are reporting profit margins that put tech companies to shame, I don't really buy the argument that it's the academic community who are holding things back. The perverse incentive here is clearly concentrated on the publisher side, not ours.

I really appreciate that you are taking the time to respond to comments in this thread though, thank you for posting.


I think there's enough frustration to go around. I don't doubt your story. The life of an academic (especially a young, non-tenured academic) is brutal.

To provide a frustrating anecdote from the publisher's side: we'd love to heavily invest in launching new open access journals (which we do, but we'd love to do even more). The problem with launching a new journal (either subscription or OA) is that nobody will publish in it if it doesn't have an impact factor. Impact factor is controlled by a private, for-profit company (Clarivate) that's owned by a private equity firm. Getting an impact factor takes 3-5 years and also relies on the total crapshoot of what Clarivate decides to list or not list. So the prospect of launching a new OA journal is one where you are guaranteed to lose money for the first 3-5 years and then you have to put all your eggs in the impact factor basket, hope you get listed and receive an impact factor, and only after all that will academics choose your journal over any established legacy brand. And all this because at some point academia decided that they'd outsource academic career assessment to the magic number that is Impact Factor.

I also want to thank you and the other commenters for some good discourse here. This has been refreshing and I was only called an asshole once the whole time! But jokes aside, a sincere thanks :)


>> Impact factor is controlled by a private, for-profit company (Clarivate) that's owned by a private equity firm

In the first year of my PhD the final-year student on "our" project, who I shared a bench with, told me all about impact factors and which journals he was hoping to get his paper published in.


I think there is plenty of engagement in that conversation, via new alternative publishing models, but putting that aside for a second, I think there are two separate issues here:

1. Academia has a broken incentive model. I agree with you, I think this is a valid point.

2. Academic publishing is an exploitative monopolistic business.

I don't think the big publishers will be able to do much about (1), because they haven't done anything about (2) - which seems to me to be a much easier problem to solve, as the power is in your hands to immediately start paying reviewers for their time, as a simple example.


I hear you and it's a totally valid point. We could start paying reviewers tomorrow. I'm not convinced that adding a monetary incentive to the peer review process doesn't have its own serious negative consequences, but I'm certainly open to the idea that that's a potentially better version of the system than what we have now. It's certainly a difficult business decision to push through, given the universal lack of anyone doing so industry-wide, but that doesn't make it the wrong thing to do, and it certainly could be a differentiating factor if done well (ie by speeding up the review process a publisher might be able to increase author satisfaction and also increase article output, which in an OA world has a direct revenue impact).

I do have an issue calling it a monopoly, however. At best you can call it an oligopoly. The top 5 publishers publish about half the total articles each year [1]. So half the research is published by a combination of hundreds of smaller publishers (both for-profit and not) or independent scholarly societies. And then within the top publishers, they are absolutely in competition with each other, which becomes readily apparent when you dig into the royalty deals that publishers offer scholarly societies for the rights to publish their journals, which continue to get richer for the societies (which poses a whole different interesting problem in terms of the collateral damage to modern-day scholarly societies if or when the business model blows up).

[1] https://journals.plos.org/plosone/article?id=10.1371/journal...


Other disciplines do indeed pay reviewers - a friend's partner told me about being paid in the humanities. So it can work as a model.

Regarding the monopoly comment: firstly, you have (usually) a complete monopoly on the content you provide. The same paper is available from only one publisher.

If we talk in more general terms, then we're in the classic situation where the monopolist (or oligopolist, which is often used synonymously nowadays) pretends they don't have a monopoly, because they don't want to be punished or reformed. But, even if we took your viewpoint, that strength of market power - 5 companies controlling 50% of the market - is overwhelming.

In practice, Elsevier and Springer control nearly all Computer Science publishing (for example), so the situation is extremely bad.


That is a sobering article.

It's clear that Sci-Hub is the only viable strategy.


  why don’t publishers pay
Why buy a cow when you can get milk for free?


Because those people are already being paid anyway - their full time jobs, whether that’s PhD student, lecturer, professor, or industrial researcher, includes the job of reviewing and editing for publications.

It’s a society. You publish papers which get reviews, and you also work to review other people’s papers. Everyone contributes to make it work.


> Because those people are already being paid anyway

What about when taxpayers fund the research and then have to pay to access it? Didn't they "already pay for it"?


I think it would make a lot of sense for countries to have laws stating that the results of publicly funded research must be publicly available. It's bizarre to allow a profit-driven business to monopolise and exploit it without paying for it.


You're asking a totally separate question - you're asking 'since taxpayers funded the research, shouldn't they be able to see the output?' That's a great question.

But the question I was replying to was 'who is paying the reviewers'. The answer is that their employers are.


The question wasn't "who is paying them?", the question was "why aren't the publishers paying them?". And that is a crucial question, and not answered by the observation that they are already paid by others.


The answer to 'why' is that no money changes hands for any review - not to pay to get a review for your paper, and not to review a paper for someone else. It's a social system.

You 'pay' for reviews of your papers, by reviewing other people's papers.

All the publisher does is connect people up and produce the final product. You can argue that the cost to subscribe is therefore too high for that service. Fine, it may be so! But saying 'why don't they pay their reviewers' is to misunderstand what the entire setup is here.


But the publisher is the one making the profit here. From other people's work, that other people pay for. The publisher doesn't add any real value to the process. They charge a high price for people to access other people's work. At no point does the publisher pay anyone for anything, yet they do get all the profit.

It's pretty clear why this is a stupid system, isn't it?

Simply saying "but that's the way it is" is not an answer. It's the problem.


> It's pretty clear why this is a stupid system, isn't it?

Maybe! I'm just explaining why the reviewers aren't doing the work unpaid on their own time, which is a misconception people seem to have. Fewer misconceptions is better for the discussion about the remaining issues like whether the system is stupid or not.


> I'm just explaining why the reviewers aren't doing the work unpaid on their own time, which is a misconception people seem to have.

I don't recall seeing that misconception. I had a quick look in the comments here and I don't see it. Can you see any examples?


> why don’t publishers pay

> No reviewers are ever paid either

> reviewers are never paid

> Nobody in the review or author role gets paid

It's all through this thread, and every thread on this topic.


But that's the point: the reviewers are paid, but by the taxpayer, not by the publisher. The publisher profits, while the taxpayer has to pay three times to finally get what they paid for. The publisher is getting a free ride here.


These are all complaints about the reviewers never being paid by the publisher, which is a completely different and completely valid point.

The complaints are all about the publisher. Have you ever seen a complaint that's simply about having to do peer review itself without being paid?


People are asking things like

> How do peer reviews happen, then, if nobody gets paid to do them?

They're not asking about the publisher paying. They're asking about being paid at all.


Where?

You claimed "It's all through this thread, and every thread on this topic", so it shouldn't be hard to find plenty of examples.

The example you gave is not an example of it. Notice the "if" in it? Go read that comment again, and read the comment it is replying to. The person is not expressing a positive belief. They're not familiar with the system, are trying to make sense of the parent comment's statement, and are asking how it works.


Creating a "market" for reviews would be a terrible idea - I agree that the reviewing part of the process seems to work pretty well (or at least did when I was last involved, which is a while now).

Edit: The bureaucracy of charging for reviews would be bad enough - but I suspect perverse incentives would soon arise, as reviewing would come to be seen as a revenue generator to be maximised and a cost to be minimised.


> Because those people are already being paid anyway - their full time jobs, whether that’s PhD student, lecturer, professor, or industrial researcher, includes the job of reviewing and editing for publications.

This is inaccurate. This is not a part of a postdoc's job description, for example, in the UK. The university does not pay you to do this - you have to review on top of your day job.


If your job is an academic then of course the job implicitly includes reviewing papers. That's what being an academic involves. It's part of your normal paid day-job if you have an academic position, either in industry or in a university.

Do you think academics and industrial researchers take holiday when they have to travel to a program committee meeting? No of course not, they do it during work time and are paid for it and their employer pays for flights etc.

While I was in academia I was specifically tasked with reviewing papers for an external conference, and so were all my colleagues.


[Speaking from my experiences as a PhD student] once an academic is sufficiently senior they can do what they damn well like. My boss came in late, took long lunch breaks, and went home early.

Back to the topic: if he received papers to be reviewed, he gave them to his post-docs.


What do you mean "have to"? What happens if you don't?


Turn of phrase here, fair point. I meant "if you want to review, then it will be additional time outside of doing your paid work."

Second question is an interesting point: what happens if you don't review? I can tell you that it is certainly the case that a lot of academics don't do reviewing. Given the amount of reviewing I and my colleagues do, there must be others who are not doing their fair share (say 3 x your submission rate) of reviews. What are the consequences of not reviewing? Perhaps you can be seen as a freeloader, some social cost, but mostly people just won't know unless you're replying to them directly.


To expand on what the parent said: what makes it weird is that the incentives for people in the community are mixed. There are market incentives (mainly publishers), community incentives (reviews and volunteer expert work), and resume incentives (pubs in top journals). They interact in weird and sometimes toxic ways.

(Disclaimer: comfortably funded post doc here)


If I peel back the onion further, I find the root of the problem is how to measure whether someone is good at science. How should we measure scientific output to determine compensation (not just money or tenure; reputation itself can be considered a form of compensation)? That the way we reward scientists means few are pushing to reform academic publishing is barely a footnote compared to some of the problems produced by our current model. Look at the lack of replication that occurs because replication is far less rewarded than new research, even when the new research fails replication (once someone does eventually get around to it). The way to fix this is to find a method of measuring good science that works better than the current one, but given that us tech people can't even figure out a decent way to measure proficiency in our own field, the outlook isn't hopeful for our ability to disrupt science.


Yes, at the core of the problem is how we measure good science. If you want to try to crack that nut I absolutely salute you. Improvements here could pay massive positive dividends for society as a whole. Meta[1], which was bought by Chan Zuckerberg[2], is sort of trying to do this (in addition to trying to tackle better discovery of the literature) by using an AI model to try to identify important research trends earlier in the process.

[1] https://meta.org/ [2] https://www.chanzuckerberg.com/science


> I know the industry is particularly frustrating to the HN crowd. We want to think it's a technology problem - that distributing PDFs is a solved problem (which it obviously is).

That's not at all my impression of these discussions. If people thought it was just a technical problem, then they'd think it would be easy to change the current setup. If people thought it was just a technical problem they wouldn't have such a strong dislike for the publishers.


I certainly made a broad generalization, but in this HN discussion you can find people asking why we can't just have a simple HN or Stack Overflow-like site for upvotes and downvotes instead of the existing peer review system (implying that it would be trivial to apply the tech used for social discourse online to academic publishing) and another comment about spinning up a peer-to-peer BitTorrent approach to distributing digitally signed data. And in every discussion of the topic on HN that I've read someone always asks the same type of question about why this is still a problem when it should be trivial to apply software tech toward a solution. My intent wasn't to discourage folks brainstorming technical improvements for the system, it was only to point out for anyone who isn't entrenched in the intricacies that there's a lot of reasons for the dysfunctional system to both exist and persist.


I'm not sure people are trying to fix academic publishing so much as burn Elsevier to the ground. Whether or how they are replaced is a separate question.

https://www.talyarkoni.org/blog/2016/12/12/why-i-still-wont-...


Well, Elsevier is maybe for some people the worst of the bunch (cannot speak from experience, though).

Springer, for example, has very relaxed and open copyright transfers. You basically retain the right to have copies on your website, on archives, etc., provided that they contain a link to the 'definitive version' of your paper. Elsevier does that to some extent as well, but Springer _also_ invests money to 'cross-fund' (if that is a word) books in fields that do not attract so many readers and that otherwise could not be published. So in that sense, Springer is giving back to the community and people are appreciating that more.

I agree, however, that there seems to be generic and unspecified hatred against Elsevier; having not reviewed for them or published with them, I do not have a properly-formed opinion here.


Totally fair. You can definitely take the stance that without knowing what a substitute is for publishers it's still worth trying to destroy them, and have faith that something will rise up to fill the void. My argument would be that you need to be careful to not underestimate the publishers, particularly Elsevier. The shift to OA hasn't yet harmed the publishers. In fact, all the major publishers are among the largest publishers of OA articles. Elsevier is full of incredibly smart people who can adapt. So if your goal is to put them out of business, you need more than just a strategy to burn their business model (although if you could figure out how to remove money entirely from the process you'd succeed I suppose).


When I worked for RELX (Reed Elsevier), New Scientist was one of their "vanity" publications.


Sorry if my reply seems trite, but the same sorts of things could be said about the cab, bookstore, and music industries. They were entrenched and difficult to take down, but most are better for it.


Sure, I wasn't saying it's not possible to disrupt publishers, and I'm not discouraging anyone from trying. I'm simply saying that to do so you're absolutely going to have to understand what makes the industry work the way it does. If you don't understand your users (in this case academics, tenure committees, universities, grant funding bodies, etc) then you're not going to make a dent in the status quo. I'm more just trying to counter the argument that there's no reason for publishers to exist, therefore it should be easy to make them obsolete. There are tons of reasons publishers and the entire system as it is exists, and if you want to change it you need to understand it.


I think HN has chewed all of the above to death. If you read previous comment threads it's obvious that the community actually does understand all of these things, and also understands why they're a problem.

Personally I've lost count of the number of threads and posts where at least one person says "Publishers are in the reputation business."

The fact that you seem unaware of this is curious. If you genuinely think that plans to disrupt the industry are going to be based on distributing PDFs, it's possible you may not be as familiar with the discussions here as perhaps you could be.


Fair enough, makes sense.


Just FYI although the music industry has changed (much lower profit margins now), it's still a cartel of companies who actually own the vast majority of the content, and extract a pound of flesh. Perhaps you think that Spotify "disrupted" the industry (and Last.fm / others before it), but everyone still pays the piper - the majors (and given that they own the content, they can choose how much to charge for it whenever their contracts renew, which they make sure is frequently).


“Disruption” in those three industries you listed meant that large global monopolies or duopolies were created which captured the majority of the market.

Do you think it’s easier or harder today vs. 10 years ago for an outsider or upstart to create a cab or ride sharing company? How about starting a book selling company?

I’m not sure how those industries are better off except for the deeply-entrenched players that took over their respective markets.


While agreeing with the general principle, incentive structures are wired quite differently in academia vs. end-consumer-oriented gig/service industries.

Publications (number, timing, venue, citations) are the primary currency by which one is judged among peers in academia, and reputation outside the immediate academic community has a much lower weight. For online marketplaces, solid revenue is the first priority and then comes reputation (which is a means to higher revenue). In academia it is the reverse, with reputation (in a small clique) being the primary motivator, and funding being the means to gather it.


One upvote is not from agreement, but from interest in seeing replies to your comment (which upvoting it makes more likely).

Personally, I do think the matter is complex. However, if you're trying to protect the current model by arguing that building a reputation costs money, you will not convince those who, quite reasonably, believe it should be earned by contributions to knowledge, instead.


Do you think the current legacy model of journals and the peer review of articles isn't contributing to knowledge?


I'm sorry, I don't know what the legacy model is. If you mean the way academic publishing works right now, I don't think anyone disputes the fact that publishing to a journal, free or not, constitutes a contribution to knowledge. I mean, I sure do not.

The way I understand the debate about reforming academic publishing, it's about the morality of profiting from resources set aside for public benefit, rather than private (or corporate) profit.


I did a research masters and got VERY frustrated by how publishing worked. But I came to your conclusion, too. I'd put it like this: an academic's career prospects are controlled entirely by the journals and the journals' impact factors, so breaking the cycle is a chicken-and-egg problem. There are some great strides towards something better though; like PLOS ONE, or university-specific ventures such as https://www.ucl.ac.uk/news/2018/jan/ucl-launches-open-access....


> so breaking the cycle is a chicken-and-egg problem.

Absolutely, huge coordination problem (prisoner's dilemma). Let's see what this cOAlition S can do about it.


> You have to realize publishers are in the reputation business.

Exactly. The name "publishers" makes people think that their primary business is publishing, but the main source of income is seeking rent on the use of their brand names.


This is similar to what I've seen from numerous Non-Govt orgs that try to help in other countries.

A very specific example: in the US, there was a research team building a (post-earthquake) hut that could be built with only plywood and was also designed to be "tightened down" to resist earthquakes and hurricane-force winds. It had anchors and was a really cool technology. I went to Haiti, and the people there would rather live in tents temporarily and move into a concrete house, because that's how you build a house. Compare this to Paul Farmer, who was on the ground and built a network of medical facilities by including the people he wanted to help. He's been (by my standards) wildly successful.

It's easy to see a change that seems obvious from the outside, but if you don't get in and work on the ground floor to disrupt with people and change culture, you're making your job way harder. Get in and talk to people, figure out why, and remember that people are just that: people. Politics and culture are unavoidable. Sometimes the easy part is the technology, and that ain't easy.


I agree, the solution is not a technical one, it's about money. If funding agencies require, as they should, day one free and open access, the problem will solve itself. Not without a good deal of temporary pain, but it will be solved. (I am an early career professor who has to worry about promotion - and am happy to bear the pain of such a transition.)


One thing to think about is what happens to academic fields that are not well funded? Plan S makes sense for funded research in STM disciplines. If we accept that publishing does cost some amount of money then it seems reasonable to have the funding of the research also cover that cost. Obviously we can argue about what a reasonable cost is ($100? $500? $5,000?), but it's certainly non-zero (even though yes, you can find specific examples of zero-cost publishing, but not on a scale that works for the system as a whole).

But what about the humanities and social sciences, which are typically not funded by government or foundation grants? We currently have a system in which the expectation of the academics is that they can publish for free because the universities pay for that cost via subscriptions. Changing to an author-pays model, which Plan S seems to push the industry toward, doesn't work for a lot of academic fields. There are certainly alternatives, like university libraries converting some of the funding they currently use for subscriptions to cover publication costs, or entire governments covering all publication costs for every academic within their borders. But it's not as easy to see exactly how the non-zero cost of publishing is covered outside well-funded disciplines.


> the root of the problems (of which there are many) are all cultural

Yes, thank you for restating the obvious. Research should be open, that is what we want. Off to do useful things now.


It's frustrating because it isn't a technology problem, it's a tragedy of the commons.


Can you write a more full-length post with more details and put it on here? I find your post intriguing but you left most of the problems in the background, vague and undefined. Let's hear what they are, please!


"I'm a shareholder and board member of a large privately-held, family-owned academic publishing company. ... You have to realize publishers are in the reputation business."

Prove it. Get out ahead of these issues. Add value. Progress marches on. Lead the way.

Let's assume (for scope of this comment) the replication crisis is the biggest threat to everyone's reputation.

What would a solution look like?

More access & transparency & accountability.

Imagine a scholarly clearing house that facilitated the existing processes and workflows.

Something like github.com for warehousing all the data. Something like a collaborative editor. Something like scholar.google.com to better find & forage for stuff. Something like linkedin.com for researchers to connect and share their thoughts.

Maybe even a brokerage for grants, connecting funders with applicants, help administrate the administrivia.

Publishers would still publish journals. The cream rises to the top, aka curation. As Scott Galloway likes to say, "Information wants to be expensive."

TL;DR: Innovate to add value in a changing world, instead of using the status quo to continue rent seeking.

--

"It's like discussing politics online."

For policy work (vs electoral politics), I very much agree. Policy work has its own replication crisis.


[flagged]


Please don't haul internet hostility tropes in here. That's just what the site guidelines are asking you to avoid.

https://news.ycombinator.com/newsguidelines.html


Not sure if you're asking about the role that academic journals play in general, or about what specific actions publishers do.

Journals in general serve to curate an ever-increasing body of research and provide a crude method of gauging scientific quality, relevance to your field, and importance. They provide a crude method of judging the impact of scholars without having to read every single paper ever published.

Publishers specifically put in grunt work to make the curation process function. That means hiring people to coordinate the peer review process, identifying new fields that are in need of new journals to help disciplines form and mature, ensuring standards of the scientific method are being followed, and of course there's the commodity service of hosting digital content.

I think a more interesting question is how much the role of a publisher is worth (as opposed to whether it should not exist), and whether there are ways to fill the same role with a cheaper alternative.


What you say about journals is correct.

But what you say about commercial academic publishers is a relatively recent development. Traditionally, universities and professional societies ran journals. As you say, publishers did the grunt work. However, they did it on a work-for-hire basis. It wasn't until the 50s or so that profiteers took over the academic publishing industry.


This is true. Someone else in this thread posted the long-form article about the history of Robert Maxwell, which is absolutely worth the read. I'm definitely not of the opinion that the existing publishing system is the only way (or the best way) for science to be conducted.


Yes, and the article that you cited. Thank you for that :)


out of those, "identifying new fields that are in need of new journals to help disciplines form and mature" and "ensuring standards of the scientific method are being followed" are things that really should be done by academics and not by hired professionals.

if commercial publishers are doing that now, it means that academics have dropped their responsibility on this.


Yes?


The only tangible benefit I can read into that is peer review, which isn't paid for by the journals at all. Open access journals have demonstrated the ability to run effective peer-reviewed journals without the middlemen.

Otherwise you haven’t mentioned anything of actual value.


OA journals are a middleman too. They typically charge, just in a different way. In fact, popular OA journals can have much higher profit margins than your average subscription journal. If you're advocating for free publishing (whether OA or not) I'd challenge the belief that a system can scale to the current needs of the academic community without charging someone (institutions, governments, authors) fees of some kind. You can certainly make the argument that the world would be a better place with all actors in the endeavor being non-profits, but that still doesn't at all mean free publishing.


“Open access journals have demonstrated the ability to run effective peer-reviewed journals without the middlemen.”

Open access journals are basically the same as traditional journals, they just charge authors up front, rather than readers later. I don’t understand what you mean by your sentence above - they haven’t demonstrated anything close to what you claim.


Not OP, but maybe credibility through established published history and network effects? Facebook sucks and yet...


[flagged]


If you attack anyone like this on Hacker News again, we will ban you.

Please review https://news.ycombinator.com/newsguidelines.html and post civilly and substantively, or not at all.


This is an unnecessarily rude response and if you look around, you're pretty much the only person striking such a tone.


For anyone who still doesn't know this: publishers do not pay paper authors, in any field. I am also not aware of a publisher paying peer reviewers.

(Lowly soon-to-be-postdoc here; a professor would be able to say more.)


In academia (at least the hard sciences), no faculty are ever paid for publishing in mainstream, peer-reviewed journals. No reviewers are ever paid either, although recently Elsevier has started to publish the names of its "star reviewers", presumably as a token for the large investment in time and expertise required to evaluate papers for publication and improve the field. Even book deals (or editorships) yield no real income, beyond some nominal amount (between $0 and $50). What's more, the lay public is hilariously misinformed about this, believing that faculty, researchers, etc., are paid royalties for journal papers. Where they got this idea, I don't know ...


> No reviewers are ever paid either

That's almost true. Elsevier once surprised me with a $100 Amazon gift card, which they said was in recognition for a particularly good job of peer review that year.

I'd laugh at the prospect of Elsevier rewarding me with "recognition" -- certainly something I would not put on my CV -- although I did take their money.

Another time, the American Mathematical Society (which is non-profit, and perfectly ethical) asked me to do a book review, and then gave me $150 in bookstore credit afterwards -- again by surprise.

My understanding is that book authors can make a reasonable amount in royalties (a few thousand, anyway). For journal papers, as you say I've never heard of any author getting a royalty.


What's your hourly rate? How many hours did you spend reviewing? Divide that $100 by the hours and see if it was worth it.


> What's your hourly rate?

This is not how I think about my job. Rather, the university pays me a salary, and expects me to engage in "scholarship" and "professional service" -- and I get to decide where I think I can make the biggest contributions.

This luxury is not available to everyone. Universities are hiring more and more non-tenure-track instructors, at lower rates of pay, with less job security, with higher teaching loads, and (usually) without any research expectations. In the future, we might be obliged to think about our jobs in more transactional terms.

But for now, I have the luxury of not having to think very much about questions like yours.


The sibling commenters here have it right -- and I'll add that reviewing and similarly reading grant submissions are tasks that benefit the researcher as well. Sure, it's work, but you stay on top of the new stuff in your field, develop some sense of what's important and what's minutia, are able to identify new people coming into the field and how they or their work might interact with your own. If you don't review at all, you are actually missing a fair bit of professional development and power. It works fine for some not to review but for others it's an important part of the intellectual process.


Reviewing isn't a full-time job (or any sort of job, really). So the hourly rate doesn't make sense. If anything, you'd have to put reviewing in the consulting category, since the hour that you spend reviewing something isn't a literal hour of labor. It's the years of experience that enable you to do a review in an hour.


In some subjects, reviewers are indeed paid.

Probably the parent has experience in CS or similar where reviewers are never paid.


> Where they got this idea, I don't know ...

We get this idea from basic fairness within the rules of the larger system. Clearly, there are private companies that make money from publication of those papers. It follows that in a fair arrangement of some sort, they would have acquired the necessary rights from the original authors for some compensation or profit-sharing agreement. That they don't actually do that is an indication that the arrangement is not fair.


I’ve actually heard of some Chinese universities awarding profs something like $150,000 for getting an article published in Nature [1].

[1] https://www.economist.com/china/2018/11/17/tsinghua-universi...


> the lay public is hilariously misinformed

When I start thinking about what we lay public have been denied, in welfare, progress and life, just because scientists don't share the knowledge the lay public pays them to produce, the word hilarious isn't the one that comes to mind.


I've been paid (in credit for more books) for two reviews of book proposals.


>the lay public is hilariously misinformed about this

Ironic for a scientist to rail anecdotally and off topic about a problem whose fate they play the much larger role in deciding.

Knowledge of science among the general public, let alone of how this sausage factory works, is unfortunately poor, but it is not the key to fixing the problem.

There are a handful of different key roles and interests competing to influence the outcome. Researchers, universities, publishers, for profit businesses, etc.

It’s not that complicated. There’s the logistics, economics, and best guess modeling of any proposed changes.

Then separate from these technicalities people everyday are making value judgements and prioritizing principles against personal interests.

I’m not trying to advocate my opinion here in a comment of what these stakeholders should do. Only that these individual decisions, judgements, and will to act on them play a far greater role than public knowledge of who gets paid to review a paper.


I have half a dozen postdoc friends in science. All of them are honest and exceedingly hard workers, but they became disillusioned after 10 years in academia. Despite the PhDs and advanced degrees that they worked so hard to earn, they now work in completely different fields, most very low paying. I really hope you have a better outcome.


How do peer reviews happen, then, if nobody gets paid to do them? Does research money (that goes to the production of the paper) not also cover peer-reviews?

I wouldn't see any incentive for any work to be peer-reviewed, ever, then.


I'm a math professor, I can answer this.

We tend to think of journals as being run by their editorial boards -- i.e., by scientists in our field. The publishers are in the background, not taking a very active role, simply skimming off truckloads of money in the process. In particular, peer-review requests tend to come from scientists whom we know at least by reputation, and often personally.

When I spend serious time peer-reviewing a paper, it will be on a quality paper on a subject that I enjoy reading about anyway. (If I get asked to peer-review a paper that looks uninteresting to me, or something outside my interests and expertise, I'll either decline the request or write a quick report recommending that the paper not be accepted.)

It's interesting work, it's important for the health of the field, it helps me support other researchers in my field (if I like their paper and give it a positive review), and it helps my own reputation (the journal editors are often bigshots). From this point of view, it's a win-win-win-win.

Finally, paying for peer review doesn't make much sense because the time required is highly variable. I might take between 20 minutes and 30+ hours to review a paper, and I wouldn't necessarily expect the editor to be able to predict how much (even within an order of magnitude) in advance.

However, I see no reason why high-profit publishers should be able to insert themselves as middlemen and engage in rent-seeking behavior. I hope we figure out a way to give them the boot, and then continue approximately as we have been doing.


> If I get asked to peer-review a paper that looks uninteresting to me, or something outside my interests and expertise, ... or write a quick report recommending that the paper not be accepted.

That seems kind of harsh?


Often (but definitely not always), requests to peer-review are accompanied by statements like "We consider our Journal X to be on par with Journals Y and Z, and aim to be extremely selective. Please only recommend publication if blah-blah-blah...."

I'll then read the introduction. If I don't have sufficient expertise to referee the paper, I'll decline the request. If I do, then I'll see what the authors have accomplished. If, in my judgment, this doesn't rise to the level that the editors asked for, then it doesn't take me a long time to decide this and say so.

Conversely, if the introduction does impress me, then I will want to check the proofs in very close detail. In this case I will commit to writing a detailed report in the future.

It's quite common for papers to be declined from individual journals; it's happened to me plenty. There are tons of other journals out there; you can submit somewhere else. And when I decide that a negative report is called for, I write it right away, so as to not keep the authors waiting forever.


I think the "harsh" comment was because you suggested you might recommend the journal reject a paper because it was outside your interests and expertise, not because of any lack of merit. But that implication only arose from some selective editing.


Thanks. With that added context it doesn't sound harsh after all. :)


Not really. If work is flawed, it will need to be fixed before recommending it to readers (which is now the only marginal value provided by editors). If it appears to be unfixable, the correct choice may be to recommend that it not be endorsed by a journal.

You can always put work up on arXiv if the goal is priority or distribution. Peer review is, in principle, designed to ensure that work is new and true (and that what is new is true). It’s not meant to fix flawed work, but to make good work better.


I reacted when I read this as well. I am hoping he meant that he would recommend that paper to be published/not published...

...I fear however that my hopes are in vain


by "not accepted" I mean "not published in the particular journal I was asked to review for".


The problem is the preceding if clause. You say that you do this if you find it uninteresting, or if you have no expertise in the field of the paper. Both of these are qualifiers of you, not of the paper, so indeed it seems harsh to punish the authors for the bad referee selection of the editor.


He'll decline the request or say it should not be accepted. Declining the request is just fine, and a reasonable thing to do. There are many mathematical results that are in fact uninteresting (to just about anyone) and it's fine to suggest those be rejected from a good journal.

I've had many paper rejections, none because of false or uninteresting claims but almost all because "it's not good enough for this prestige journal" or "it's too specialized for this journal". I expect a certain number of those, because I submit to reach journals now and then and hope I get a reviewer and editor that like the work :) If I wanted certain publication immediately I'd submit to lower-tier journals first, which I have also done when I just needed something out.


The editors of a prestigious linguistics journal actually gave Elsevier the middle finger a while back. Here's a good summary: http://languagelog.ldc.upenn.edu/nll/?p=22162.

Economics and linguistics also publish their own journals, which cost some money but significantly less than Elsevier's subscriptions (I believe the linguistics one is 300 USD per year). There's probably enough momentum behind middlemen-run journals to prevent this from happening more en masse.


> paying for peer review doesn't make much sense because the time required is highly variable.

Paying per paper maybe not, so maybe paying per hour makes sense?


I've only worked with professors in economics and political science but I'm under the impression that this is true more generally of academia. Some people do it to help advance the state of the field. Career incentives and standing in the field also come into play. As the poster above mentioned, a professor would be able to say more.

To respond succinctly to your comments: money isn't the only incentive in life, so it seems weird to not see "any incentive" as soon as money is taken out of the picture.


I'm a professor (in mathematics), and there are some reasons why I personally might say "yes" to review a paper. (1) If the paper is so interesting that when I see it I think "I really, really want to read this paper!", then I am likely to peer review it because as a reviewer when I read it, I get to ask any question and the author will take my questions and concerns very seriously. (2) I owe the editor who is asking me to peer review the paper (or maybe even the author of the paper), e.g., if the editor put a lot of work into ensuring a paper I once wrote got properly reviewed. (3) Even if I say "no" to fully reviewing a paper, as an expert in the area I can often provide quick valuable feedback about the relative value of the claimed contribution, and who is most likely to be suited to review it; I'll do a quick informal review like that in some cases.

From the outside, it is easy to forget the extreme extent to which academics are motivated by (or view themselves as motivated by) principles of "good citizenship". This is selection -- the people who aren't this way often do not get hired or promoted. Also, many academics (at least in pure math) view motivation for money as a "lower order term". It's a good thing for them, because for much of your long career in academia there is little you can do to impact your salary, besides applying for a job somewhere else. For me it's always been: each year you get some maximum possible merit raise of between 0% and 4%, depending on external economics that the department has no control over. Academic book royalties might also raise your yearly salary by 2%. Being highly money-motivated in some parts of academia would end up being very frustrating indeed. E.g., even when I've got a big NSF grant, that doesn't change my salary one bit; instead, it changes how many students and/or postdocs I could support.


Well, yeah, but without students & postdocs it is very difficult to make a material impact on your field of endeavor.

There’s also an ulterior motive for some lines of work, where sending a paper to be critically reviewed by rivals is a mechanism to torture test the work & conclusions.

This can be taken too far (especially in biology, the glam fluffing and $million additional irrelevant experiments are legendary), but in principle, if even your most motivated critics can’t find a fatal flaw in your work, it’s a reasonable bet that the work is sound. At least, that’s the principle. Editorial overrides sometimes break this safeguard, though (lord knows I’ve seen a few).


Well, yeah, but if you submit a paper to "torture test" and it gets rejected, that cuts off a journal to be able to submit to.

Well, yeah, but I think what he's saying is that it's probably the case that a large number, if not the majority, of academics make the decision to go through the grueling PhD process and give up years of earnings in their prime for things other than monetary gain. But of course people can become disillusioned later down the road.


I claim that it’s better to request major revisions than recommend rejection, but that’s mostly due to my lack of faith in authors.

Too many examples of “oh well, let’s try the next journal” and not enough of “gee maybe we should fix these glaring flaws”. It’s not because I’m a nice person; it’s because I would prefer the literature not to be a toxic waste dump. Nobody is entitled to be published anywhere. In an ideal world, doubly blinded review would become a part of the published record. At some journals it already does.

The reviews offer valuable context, which is often sorely lacking in high profile venues. (The canonical examples of STAP and arseniclife come to mind, but also much more subtle details where an overall sound paper somehow only gets cited for the one shaky assertion in the results)

JMHO


My primary motivation in trying to do a good job as a referee is that if I write a paper, I also want a referee who tries to do a good job. This can improve the quality of a paper tremendously.


> To respond succinctly to your comments: money isn't the only incentive in life, so it seems weird to not see "any incentive" as soon as money is taken out of the picture.

Especially for a group of people where many of them have taken a pass on more monetarily lucrative careers.


In my experience, academia is largely driven by money. Professors at my university were recognized for the money they brought to the university first and foremost it seemed, rather than for any discoveries/innovations in a particular field.


Ah, I know a former engineering professor who left a large flagship university in the Midwest for exactly that reason. Based on what he told me that's more a phenomenon specific to institutions without a lot of money than a general truth about academia.


This resonates with me, as my alma mater definitely doesn't have much (compared to the top endowments out there). It's not an amount to sneeze at, but still, there are others ahead of us.


They are paid, just not by the journal: they're paid a salary by their university, and one of their responsibilities is peer reviewing papers.


This. "Service to the university and field" is literally part of my contract.


The incentives are 1) professional responsibility; 2) the opportunity to participate in the direction of new research, stay abreast of the field, and maintain a high level of integrity; 3) young researchers joining the field benefit from the process, both as reviewers and authors.


I can't remember the last time I've read a headline about the peer-reviewer that confirmed some groundbreaking finding.

Maybe it gives you the opportunity to network with higher-profile contributors to your field, if you offer to peer review their work (though to be honest, I don't know how the peer review process works, and if you can even "offer" to perform it for a specific paper).


At least in the social sciences, journal editors delegate papers to reviewers. Your first sentence also suggests that you don't know what peer review is.

Peer review is not the same thing as replication, which is replicating the results of a paper after it's been published (e.g., confirming some groundbreaking finding). Peer review happens at the stage before publication of the original paper. Researcher(s) submit the paper to the journal. The journal editor sends the paper out to some reviewers, who review the paper (this is the "peer review" stage). Pending reviewer feedback and editor approval, the paper is published.

Edit: also the "benefit" that rsa4046 refers to probably doesn't mean networking. AFAIK, reviewers are always anonymous to the authors (which can generate its own problems e.g., if the reviewer gets a paper authored by someone he/she doesn't get along with). The benefit being referred to, I believe, is that of learning to write better reviews, and having reviewed other's work, learning how to improve your own.


Well, yeah, but learning to review is less difficult than learning how to communicate clearly, crisply, and forcefully. So in a well administered review, most benefits accrue to the authors even if the reviews are poorly done, so long as they are ethical. (Editors can step in when reviewers are being unreasonable or ignoring explicit instructions; unfortunately professional editors at glam journals can also step in to push exciting or politically expedient work into press before there is time to adequately vet it, which sucks and gives everyone a bad name).


Well, yeah, there are other irrelevant things about reviewing that I also have the ability to bring up but my point is that to say that the "benefit" of reviewing is for the purpose of networking is misinformed.


I sometimes agree to review out of a sense of obligation, where I know damned well that the authors recommended me, but you’re right, that’s not networking per se. There’s no reasonable expectation of benefit and it would be unethical as hell to request any.

The authors most likely won’t (can’t) ever know that I agreed to review their paper; it’s more of a good citizen affair. Sometimes, afterwards, it will become apparent that a particular referee was someone familiar. I would like to think that the original poster was imagining something along those lines, but your take is probably closer to the truth. Oh well.


Nope. Nobody in the review or author role gets paid. And personally, that doesn’t bother me. I put in time to review papers with the understanding that someone else will put in time to review mine. My employer (I’m in industry) understands that this is a reasonable use of my time given that they see value in me being engaged in the academic world. Prior to industry when I was an academic, it was also part of my paid job. It’s usually called service. The trope that it’s all unpaid labor is a bit deceiving and not entirely accurate.


I don't understand why academics seem so complacent in this. Sure, I'm perfectly happy to do "service" for a community of researchers who run an open access journal or are organizing a conference. That's a beautiful system. When it's for a for-profit publisher that will charge those not working at a well-funded academic institution extortionate fees to access my work, or the work I'm reviewing, it's a whole different story.


You've misunderstood the person you are replying to. The insinuation was not that the labor is unpaid, the insinuation is that the labor is paid for by someone other than the publisher.

The ultimate point being that the group that pays for the labor does not get the benefit of the labor: each member of the public must individually pay for access to the article. This situation is absurd: they have already paid for the article's production.

This is why the most powerful open access initiatives are being driven by grant agencies. They are in a position to unequivocally state that the research they are paying for must be open to the public since they grant the money on behalf of that very same public.


I've shelled out a few thousand dollars to publish my last two papers, and my editors / reviewers got none of it.


'I scratch your back, you scratch mine.' Generous reciprocity of free service keeps the system going.


Also this - I review for journals because in turn my submissions to those journals are reviewed.


But for open access journals, authors often do pay publishers.


OT:

> Lowly soon-to-be-postdoc here

Congratulations :)


Yes, authors often even pay submission fees themselves.


Add in that the quality of the system is massively, massively broken... peer review is about as accurate as the flip of a coin. It does not promote groundbreaking or novel research; it barely (arguably doesn't) even contribute to quality research. I had a colleague recently be told by a journal editor 'we don't publish critiques from junior scholars.' So much for the nature of peer review being entirely driven by the quality of the work.

As one of those academics... I keep getting requests to peer review, and I respectfully make clear I don't review for non-open-access journals anymore. Same with publishing. I'm not tenure-track, so I am not primarily evaluated based on output.

Publishing is broken, but it is really just part of the broader and even more broken nature of academic research.


How does the old saw go again? "Academic knowledge moves forward one funeral at a time". People have vested interests and if your research contradicts much of theirs, then they are going to be inclined to reject yours. You'd think that this would be all ivory towers / pursuit of knowledge, but in reality it's dirty and messy and grimy; just like everything else in the world.


What would be a good alternative to peer-review though? Genuinely interested.


It strikes me that the challenge of peer review is that it is non-transparent, and gate-keeping. It is impossible for one to read the peer review feedback related to a published paper, and it's similarly difficult to publish controversial pieces that break established norms.

Contrast this to the peer review culture of popular open source projects - major pull requests have extensive and transparent dialogue, and disagreements known. Meanwhile there is no barrier to releasing anything new.


There's a difference with scientific publishing for most disciplines. I work for a non-profit scientific publisher. Much of the research today that is coming out is extremely specialized. Thus there aren't many people in the world that have the expertise to referee a paper. So the review is anonymous in order to get a more frank and honest review (we hope).

I am also a developer. There are far more people that can give feedback on most code, and also it doesn't take a lot of apparatus or money to get good at coding. Also CS/OSS is relatively young when compared to other disciplines. We've almost always done things with honest and sometimes brutal feedback. Even academic research is usually announced/shared at conferences. Look at what happened when Nature tried to make an AI journal.... [1] I think there is still value in someone getting paid to manage research and referees and ensure a high quality product. Open peer review is just going to be a lot harder for some of these disciplines with limited experts.

[1] https://www.sciencemag.org/news/2018/05/why-are-ai-researche...


Anonymous doesn’t have to mean secret, although it isn’t always trivial to achieve.


Examine the facts:

- Many research fields have anywhere between 3 and 10 groups working in them.

- A review always reveals the background of the reviewer. You just cannot "mask" the shape of your knowledge around a highly-specialized subject. This includes your approach to the problem, the issues you are most interested in (and hence know more about), the references you give, etc.

With a closed system, you only get to see:

- Reviews of your own paper, without knowing who wrote them.

- Reviews of the papers you review (there are usually 3-4 reviewers per paper, and you get to see the other reviewers' reviews and who they are).

With these pieces of information, academia is already full of grudges and strong-arming around. Here's an anecdote:

I was once in a conference, and met a colleague that made the effort to approach me and comment that he was great friends with my advisor (who didn't attend), and just wouldn't stop praising him. After a while, he switched to ranting about the review process, and how a specific reviewer was a moron that wouldn't understand anything and so on. Of course, that reviewer was my advisor and I did know but he obviously didn't.

Given the high egos involved in academia, I am pretty sure that if there was a track record of all reviews, researchers would figure out who reviewed their papers. The backlash would then be ugly, and the entire ecosystem would end up more corrupt than it already is (imho).

I've been on all sides of the fence, and I just don't see a better solution than just having secret reviews. Nice words don't matter (much, to most people) when the e-mail starts reading "We are sorry to inform you...".


In the current system, when you get to see the reviews of your paper, there are still the same challenges for anonymity and ugly consequences that you’re pointing out. Perhaps even to a greater degree, because the information is “leaked” through gossip rather than in a formal process.

If all reviewer comments were collected and then published at the same time to prevent them influencing each other, how could that make the situation with respect to “frank and honest” reviews any worse than it is already?

Are there entirely secret journals where nothing is accessible to non-contributors, for the same reasons? I suspect the answer is no, which is why these feuds often play out in attack-counterattack sequences of published papers.


> So when you see the reviews of your own paper, there are the exact same challenges for anonymity and ugly consequences as you’re pointing out.

Yes, and this already happens (hence the grudges I was commenting about).

> How does maintaining secrecy for the per-paper reviews improve the situation with respect to “frank and honest” reviews?

By increasing the uncertainty. If I am 95% confident it is you who screwed me last year and 50% certain this paper I got now is yours, I will look at it with a harsher attitude than if I'm only 20% and 20% certain.

If you were to publish all reviews then researchers would have much more information to convince themselves that it indeed is "that guy".

It is sad, and I personally despise it, but that's what it is from my personal experience.

PS: I quit academia after getting the PhD, and this was among the reasons for me to quit. The other major reasons were that I don't want to keep relocating somewhere else in the world every 2 years until I'm in my 40s-50s, I don't like the overselling and result dishonesty in general, and I actually enjoy working in industry too.


> By increasing the uncertainty. If I am 95% confident it is you who screwed me last year and 50% certain this paper I got now is yours, I will look at it with a harsher attitude than if I'm only 20% and 20% certain.

Ouch, that’s a good (and depressing) point.


It sounds like machine checked proofs or other mechanical formalisms need to be employed to ensure scientific validity.


Machine checked proofs only work for deductive, non-experimental disciplines. You can't use machine checked proofs to tell if a drug is effective or not. Human referees have to be involved no matter how advanced the AI is.


You can precisely "quarantine" the "human referree part within the framework of a formal proof. That's still a huge step forward from the status quo of reasoning and opinion being smeared together and nothing formal.


> You can precisely "quarantine" the "human referree part within the framework of a formal proof.

I think it is unlikely you can separate the two precisely. If that were so, mathematicians could replace doctors who make diagnoses. I don't think all empirical facts can fit nicely into a formal proof system.

But still, any effort to reduce the burden on the human referees would be welcome. Such a proof system for peer review would be most useful to math and theoretical CS (though not so much to the biomedical sciences).


I take it as trivially true that they can be separated shallowly. Then the question is how big the informal "leaps of faith" are. If they are big, the next thing is to look for "strands" of reasoning (lemmas that may not be connected) inside them, as a sort of formal proof reverse-marginalia.


How does a machine proof check a psychology study? Or a drug trial?


open question! :)

We could at least have the data and the statistical analysis (code) accompany the paper.


Yes a fat table, a big proof, and some marginalia which is wonderful pure humanities.


Sounds like you are trying to solve a social problem with technology.


gatekeeping in science is a theoretical good...the difficulty in practice is highlighted by the general 'three paths' for citations...articles that never get cited, articles that get cited a lot at first and then are forgotten, and articles that aren't cited until much later and then cited en masse. That last category is really the 'good science' because it does what we want foundational research to do - disrupt current and limiting ways of thinking.

Dialogue is an unmitigated good, but dialogue also tends to be dominated by the normative voice. Where science is different from OSPs is that OSPs need to work with what exists now, whereas science needs to engage in tension and informed dialogue in a more foundational way.


I'd favor something like clustering and client-side filtering over gatekeeping. Mainstream academic research can coexist just fine with fringe theories and industry research in the same database as long as one can efficiently distinguish between them.

And keeping everything together in one (de)central database would make it possible for people, so inclined, to annotate other people's work with new references to support or debunk the work long after it was published, or to clarify ambiguous language, etc. Those annotations, too, could be subject to filtering as needed.

People could build reputations and whole careers around tying up loose ends instead of the "publish or perish" grind.


> Mainstream academic research can coexist just fine with fringe theories and industry research in the same database as long as one can efficiently distinguish between them

How do you propose that is done?

For my job I build neural networks for text processing. I spend a lot of time reading papers in the field.

And yet if I look at something in an adjacent field (even something as close as something like open information extraction) I have trouble telling which papers are important.

How on earth am I supposed to tell if something in a further removed field which attracts more crackpots (say probability theory or something) is a fringe theory or a breakthrough from a new author?

I'd note the example of the Gaussian correlation inequality[1] where even people in the field weren't aware it had been proven for 3 years after publication[2].

[1] https://www.quantamagazine.org/statistician-proves-gaussian-...

[2] https://en.wikipedia.org/wiki/Gaussian_correlation_inequalit...


Arxiv accidentally created a crackpot filter from apparently a simple semantic classifier along with a count of stop words used in the article, which acts as a very coarse "style" metric.[1] Reasonable-looking crackpot work is usually classified as "general physics" instead of being rejected entirely. Unfortunately it also lumps some legitimate but unconventional research in with the crackpots and makes Arxiv itself somewhat of a gatekeeper.

What I'm imagining is a sort of layered approach with raw "article" (or some other unit) data at the bottom and indexing, tagging, clustering, filtering, reviewing, commenting, linking, etc. layered on top with possibly many implementations to choose from.

The analysis/filtering would also apply to people augmenting the data, so if some users are really good at tagging certain types of junk as junk, you could easily filter out that junk. If there emerges a cluster of users who keep tagging certain interesting material as "woo" then you could filter them out or even use them to discover interesting material.

I doubt there's a silver bullet (at least today) that could reliably distinguish between unconventional-bad and unconventional-good work, but keeping the baby and the bathwater together opens the door to such an algorithm in the future.

[1] https://arxiv.org/abs/1603.03824
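
For what it's worth, the two ingredients above are easy to make concrete. Here's a toy sketch in Python (not arXiv's actual classifier; the stop-word list, threshold, and data layout are invented for illustration) of a crude stop-word "style metric" plus the trusted-tagger filtering layer:

    # Toy sketch only -- not arXiv's actual filter. Stop-word list and
    # threshold are invented for illustration.
    STOP_WORDS = {"the", "of", "and", "a", "in", "to", "is", "that", "we", "for"}

    def stop_word_fraction(text: str) -> float:
        """Fraction of words that are common function words; a coarse 'style' signal."""
        words = text.lower().split()
        return sum(w in STOP_WORDS for w in words) / len(words) if words else 0.0

    def flag_unusual_style(text: str, threshold: float = 0.15) -> bool:
        # Very low stop-word density often means keyword salad rather than prose.
        return stop_word_fraction(text) < threshold

    def filter_by_trusted_taggers(articles, junk_tags, trusted_users):
        """Layered filtering: drop articles tagged as junk by users you trust.
        Assumes each article is a dict with a 'tags' list of (user, tag) pairs."""
        return [a for a in articles
                if not any(tag in junk_tags and user in trusted_users
                           for (user, tag) in a.get("tags", []))]

Nothing here distinguishes unconventional-good from unconventional-bad, of course; it just shows how filtering can live in a layer on top of the raw articles rather than in a gate in front of them.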


Here is a question: what is the theoretical good of the gatekeeping? What is its functional purpose? I think that answer would prove insightful for determining the future.

I can see two noncynical reasons for it. One is educational purposes. Two would be quality control - if it is already debunkable or widely known, there is no point in publishing "thinking is done in the brain" in Nature.

The cynical abuses and perverse incentives, however, are myriad, and academic politicking is already infamous. I don't know what solution will work, but understanding the system's goals and functions should help decide what to replace and what to keep.


> Here is a question: what is the theoretical good of the gatekeeping? What is its functional purpose? I think that answer would prove insightful for determining the future.

Because I am not at all prepared to assess the quality or accuracy of a quantum physics paper, and I doubt a molecular biologist is prepared to assess the quality or accuracy of a sociology paper. In what I have seen, there are a lot of cargo cultists who think they understand a field, and use big words, and just don't get it. I feel that there should be some filtering function to remove that.

As an example:

This guy: https://www.mountainproject.com/forum/topic/113602967/near-m...

This guy: https://www.reddit.com/r/badphilosophy/comments/7x3t1g/stand...

This guy: http://ecclesiastes911.net/

and

This guy: https://arxiv.org/search/math?searchtype=author&query=Simkin...

are all the same person.


That guy has some pretty interesting articles.


So the short answer is I don't know, and I don't think there is a single answer. Different types of scholarship need different standards and processes in my eyes. It isn't so much 'what's the alternative' to what is really a singular step as much as it is a rethink of how scholarship is performed, evaluated, rewarded, and considered. Otherwise the incentives across the playing field don't change enough to make meaningful change possible.

What would help? My opinions...

-FOSS publishing

-Abandonment of journal metrics; they really serve no purpose besides trophy hunting at this point

-Stronger transparency initiatives

-Public review (c.f. Lim)

-better journal metadata (it affects citations)

-Greater shaming of misbehavior of ALL types

-Elimination of stupid policies like 'issue lengths'. The article either should be or should not be published...saying there isn't space in a given issue is insane given that >>90% of article access is online.

-a hell of a lot of older faculty retiring out of the way of science.


> -Elimination of stupid policies like 'issue lengths'. The article either should be or should not be published...saying there isn't space in a given issue is insane given that >>90% of article access is online.

I'd prefer to keep article lengths in place. It's easy to write a lot of text (as quite a lot of high school students understand), but it's challenging--and very important--to be able to convey information in a very concise format. The process of trimming down the text to squeeze it under the page limit is very useful in getting the ideas contained in it to a more refined and easily-understood format.


I meant n articles per issue more than allowing articles to be length>m. My bad for the lack of clarity :)

I agree, in theory, with your point. I think the unfortunate thing is that what gets lost in most efforts at concision is detail rather than fluff. Fluff is bad, but I think detail is important...especially detail on how things are situated in prior literature. As a reviewer, I have often found there to be a lot of articles using 'strategic concision' to gloss over not doing things properly, or flat out not knowing what you are doing. Saying 'we used method x' means I have to trust you did it properly...or infer from other things how well you did it. Supplementary material could be a potential route here, but I just sent an article back for the second time. The first time they did the 'we did x' and I asked for more detail. The second time they said 'we did x by doing y and z'...but y and z were very very wrong.

There are places where being able to explain concepts concisely is important or a viable tactic. Journal manuscripts are (in theory) a permanent archive of new knowledge for human society and have never struck me as a place where concision is a relevant parameter.


The Sphenodon in the room is that the raison d'etre of the entire Academic structure is unclear. So that should be clarified first. Advancing science is one thing. Publishing as many papers as possible, breaking research into MPUs, is another. Lecturing to youth completely uninterested in being taught is yet another vertical.


I think you may be confusing vague with unclear...

The lack of clarity and the multiple verticals seem more like an example of Goodhart's law than anything else.

Summarized: the goal of academia is, and for most of its history (in whatever form it took) has been, to produce and disseminate new knowledge for human society. Creating new knowledge is what we now call research. Publishing papers isn't a raison d'etre...it's a metric for that.

I'll note that Ernest Boyer is far more articulate on this than I am...


Open publishing and commenting would be a good start---having a dialog, like is done at conferences. Older academic journal articles (pre 1900) read much more like discussions than like the hundred dollar word vomits of modern academic publishing. The broken incentives are at the core of this rotten fruit, though. Just making journals open isn't enough.


We have (almost) open publishing and open commenting. Did that improve anything?


I think so. I've followed the back and forth on a few papers on openreview.net and I found the comments nearly as interesting and informative as the paper they were commenting on.

There are a lot of things that only exist as suspicions and intuitions inside of a researcher's head, and that sort of information is much more likely to come out in a back and forth reviewing a paper than in actual published literature.


I have been distinctly unimpressed with open peer review, and given it's antithetical to something I actually think does concretely have benefit (double-blind peer review) I've gotten rather curmudgeonly about it as a fix.


There's open commenting? I've never seen the back and forth of the review process be published. It should be published.


https://hypothes.is supports threaded comments on anything with a URI; including PDFs and specific sentences or figures thereof. All you have to do is register an account and install the browser extension or include the JS in the HTML.

It's based on open standards and an open platform.

W3C Web Annotations: http://w3.org/annotation

About Hypothesis: https://web.hypothes.is/about/
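
To give a sense of what these annotations look like under the hood, here is a rough sketch of a single W3C Web Annotation written out as a Python dict (the URLs and text values are invented for illustration; see the model linked above for the authoritative shape):

    # Rough sketch of one W3C Web Annotation: a comment body anchored to an
    # exact quote in a target document. All values are invented for illustration.
    annotation = {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "body": {
            "type": "TextualBody",
            "value": "This step assumes normality; compare with the supplementary data.",
            "format": "text/plain",
        },
        "target": {
            "source": "https://example.org/some-paper.pdf",
            "selector": {
                "type": "TextQuoteSelector",
                "exact": "we assume the residuals are normally distributed",
            },
        },
    }

Because the target is just a URI plus a selector, the same annotation model works over published PDFs, preprints, or HTML pages without the publisher's cooperation.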


Some machine learning conferences use openreview.net for this.


Researchers.One (1) was founded to address the issues with peer review and the academic journal system.

“A platform for scholarly publishing and peer review that empowers researchers with the

- Autonomy to pursue their passions,
- Authority to develop and disseminate their work, and
- Access to engage with the international community of scholars.”

Harry Crane, an Associate Professor of Statistics at Rutgers, is one of the founders and a good follow on Twitter.

1 - https://researchers.one/


This answer will not please the 'publish or perish' economy which has built up around this: editorial discretion (which is a sort of peer review) and about 10x fewer papers published per person involved.

It's how science worked for most of its history. Really, Elsevier is responsible for the profusion of people publishing fluff in weak journals, subsidized by the government.


Peer-review is a good tool, but it is not the only tool that should be considered. And the problem is not peer-review specifically, but the implementation of the system and rent-seeking middlemen like Elsevier who do nothing but extract payment from taxpayers and researchers.

PeerJ and PLOS One are good starts, but until academia is no longer a slave to "impact factor," it will have limited effect.


> What would be a good alternative to peer-review though? Genuinely interested.

Replicating research? That's how we prove findings are not wrong or biased, usually.


Yes. The fact that we have so much published research that can't be replicated indicates to me that peer-review is doing less work than we think it is.


Actually I came up with one...

The top 50 research universities could get together and decide to no longer give consideration to pay-walled articles. It wouldn't affect them AT ALL...the research they do is already treated as pretty much canon no matter where it appears. Everyone else would follow, and the journals would quickly respond with change.

Yeah I know it's unlikely but they have enough social capital to actually have an impact.


That's not an alternative to peer review. It's a solution to remove the journals from the research, and it has a lot of ifs.


fair point.


Why not just build a kind of a social network where people would post research papers, "like" interesting and high-quality papers and reference papers relevant to their research, post their comments and for every participant you would be able to see what papers they have published and reviewed? Sounds fairly simple. Why not?


People have tried this, or spins on it, and they don't particularly gain traction. Part of the problem is these have to exist alongside the traditional structure, and "Hey academics, do you want to be busier for intangible rewards?" is a hard sell.


What are, if any, the reasons these have to exist alongside the traditional structure other than the fact only specific reputable traditional journals are recognized as valid publication places by universities boards and other relevant institutions?


So you mean, "other than the fact that reputable journals are the entire basis for the dissemination of new research results, the core of hiring and promotion, and a major aspect of funding"?

The simple fact is that like any replacement system, there has to be a reason for it to exist. And for a social network, there have to be network effects in place. Fragmentation is a problem. "Oops, we ran out of VC money..." is a problem. Hell, "We got bought by Spinger" is a problem.

In the very long term, they don't have to exist along side each other. In the medium term though, they do. Otherwise, it's like going from "I'm going to quit my job tomorrow with no savings!" to "I live sustainably on my off the grid farm" with no plan for how to eat in the middle.


I think the op is not saying that peer review as a concept is broken, more that the current peer review system is broken.

I think we could start fixing things by incentivizing good peer review. There’s a lot of ways to do this and I’d be interested in a discussion of different schemes. It will by no means be easy but I think that’s what needs to happen.


For reference, good peer review is incentivized in some places. A journal I read gives awards every year for the best reviewers, and good reviewers end up as Associate Editors.


Thanks for pointing that out. I have started to see those recently. Not sure if it is a recent phenomenon or if I’ve only noticed it recently.


A good alternative to the current system of pre-publication peer review would be post-publication peer review. The former, or one-pass editorial review, is how scientific publication worked almost universally pre-WW2. There are many scientific disciplines where everything important happens in preprints and publication in a journal is for archival purposes more than communication. Economics, Physics, Mathematics, Political Science, Genetics and Genomics - every discipline with substantial coverage on SSRN or the arXiv works on post-publication peer review.


For many STEM fields, conference talks, presentations and written contributions are notably faster and better suited to non-tenured applied researchers than the journals route.


unless the people who review their tenure cases don't count them because they are woefully out of date and out of touch.

It has been amazing to come to understand just how effective the corporate takeover of universities has been in the thinking of senior faculty. I have had faculty complain about policies that they literally are the only ones empowered to change, because it has never occurred to them that they actually have power.

Conferences, generally, are a better path to science because of dialogue. Having a conference with 2500 presentations because everyone needs another CV line item doesn't accomplish that, because no one is in the room.


It depends on how much waving you actually need to do for your career or personal satisfaction, though. All this openness, transparency, and these calls to dialogue are the exact opposite of what I see in my industrial, applied field of research. Most results here are kept private in order to attempt first exploitation or to be ready in case a suitable market arises. I can understand that pure sciences and medicine, among STEM curricula, work inherently differently, though, and rightly so.


This remarkable article explains how it all came to be this way; https://www.theguardian.com/science/2017/jun/27/profitable-b...

(The good stuff starts where Robert Maxwell appears)


It's worth noting that the computer science community (at least in systems and machine learning) has already made some strides in this direction. Systems conferences like VLDB and CIDR post the proceedings online for free (and most ICDE and SIGMOD articles are available through the authors' websites). In machine learning, almost every conference paper is also available via arXiv.

When a paper is seemingly not available online, I've always gotten a free copy via an email to the author... And then there's sci-hub. It's not the way it should be (i.e. you shouldn't have to hunt around for publicly-funded research), but at least it's something.


> HERE is a trivia question for you: what is the most profitable business in the world? You might think oil, or maybe banking. You would be wrong. The answer is academic publishing. Its profit margins are vast, reportedly in the region of 40 per cent.

40% margins are very normal and considered healthy for many businesses. The business with the largest possible profit margins I know of is political donations. For a mere $100K, one can own vast public land for mining and selling resources worth billions for many generations. That's a 1,000,000% margin for you.


Consistent 40% margins, year over year, for decades, are rarely seen in other industries. For example, Apple's brand is very strong, so it can ask a very steep markup on their products. Nevertheless, its profit margins are lower than those of the big five publishers, usually.


Please give the name of a mining company with more than 40% profit margin.


From the article:

> the business of publishing tax-funded research and then selling it to tax-funded institutions has produced the most profitable industry in the world

How is this not illegal?


I don't see why it would be. You can submit papers without an affiliation, so the idea of publishing only tax-funded research is a generalization. That is, these organizations do perform work by being selective and then selling the collections from that selection process back.


Granted the publishers do work. Even PLoS does work. It's the lack of access that is the problem. $42.50 for an article one of my colleagues at my own US Government institution wrote (random example from this weekend's work, American Journal of Bioethics) is ridiculous.


I thought in most cases the author is free to provide copies of his/her own articles directly? Is it not true that you could just ask your colleague for a copy of what he submitted for publication?


which part should be illegal, the selling of the subscriptions or the buying?

That statement is misleading. Publishers sell participation badges to the article authors, the authors put these badges in their CV, and when the time comes to ask for money from the government, the gal with the most badges on her chest wins it.

In addition, the authors have the right to upload a preprint (i.e. a copy identical in content) to a preprint server - virtually all journals allow it nowadays - although it doesn't happen.

So, there is in theory no reason why anyone should go to these websites to download a copy. In practice, it is convenient for the parties involved (researchers & funding agencies).


Even outside of research, in most economies there are a lot of money flows that look like this. I visited Europe last year; it's crazy how many organizations are both funded by government grants and provide services to the government. Unfortunately, curbing that would involve even more regulation, perhaps making everything worse.


According to the opposition open letter, Plan S includes "A prohibition on publishing in either subscription or “hybrid” (i.e. partially open access) journals,"

I don't see why that should be a requirement. So long as papers are made available for free, why should it matter if they are also available in a paid journal?


Here's a Twitter thread with a director at Elsevier (@mrgunn) making extremely snarky comments about why they think academia shouldn't be free and widely available: https://twitter.com/dgmacarthur/status/1028489457803161600


Absolutely disgusting. What a disgrace to science.

Alexandra Elbakyan has done and will do more for science than this rent-seeking asshole will ever do.


The real problem is that these publishers actually provide real value that gives them leverage over the industry. If they didn't, it'd be pretty easy to destroy their grip over academia. The idea that after 23 years of internet they've maintained their position via smart deals is ludicrous. They've maintained it because of useful curation.

If there was a way to open source curation and not lose quality, I can't imagine anyone would disagree with that - even the people who work at these organisations.

Unfortunately, what you end up with is arXiv, which, while very useful for sure, has no curation.

Everyone just keeps bleating - we want free! But they don't bother to think about how to actually do it. How to ensure quality curation remains, which is absolutely critical to the advancement of science.

I've been fantasizing about free everything since forever. Who hasn't? But at some point we have to stop trying to fantasize our way to results.

This is like underpants gnomes logic - steal underpants .. .. .. .. quality curation!

Also, MrGunn is a moron commenting on that thread. Why not just hand out free subscriptions to rare disease patients? What a trivially costless PR move.


I have to reluctantly agree. If there is one thing I've learned after 20 years on the open commercialized Internet, it's that any uncurated medium falls to spam, trolling, con artists, cranks, propaganda, and hoaxes. The bad stuff is easy to make and generally has better "viral" characteristics, so it drives out the good. The money is also very much on the side of quacks, since absent regulation quackery is more profitable than real medicine. Hoaxes and tabloids are also more profitable than real news.

Science without curation will fall to corporate shills and propagandists with political axes to grind. It's already bad enough but with no gatekeepers at all it would be a total free for all. Tobacco would become good for you again, homeopathy would work, etc.

Curation can be democratic, but if so there must be a well thought out procedure for electing curators and a constitution or set of bylaws that is hard to amend. Science has a lot of power, so any scientific curation system is going to come under constant attack. We are living in the age of information warfare.


> these publishers actually provide real value ... If they didn't, it'd be pretty easy to destroy their grip over academia.

Your argument equates to "they have maintained their grip, therefore they're providing real value".

Their stranglehold seems to come primarily from academic incentives to publish in prestigious journals.

> The idea that after 23 years of internet they've maintained their position via smart deals is ludicrous.

Perhaps someone has claimed that, but I don't recall ever seeing that argument.


We had curation before we had commercial academic publishers. Professional societies handled it. Publishers just did the physical publishing, and didn't get copyright.


Nobody is asking for science to be done for free; most motivations for open access come from people who realize that the biggest stakeholder in all of this is the public.

Grant agencies receive money from taxes and those funds are devoted to science instead of other public projects. That the output of this work is not generally available to the public is the problem.

Most researchers will never feel this problem since their libraries will have access to the journals that they need. Publishers in general will just charge for access and not see the issue. It is only when you go out as a member of the general public and attempt to read some journal article, presumably funded by your taxes, that you will realize that you haven't had a voice in all of this.


Are we still talking about publicly funded research that’s being pay walled?


There are plenty of conferences and journals with curation (peer review) which are free for readers and free or almost free for authors. For computer science, see, e.g., http://pierre.senellart.com/travaux/open-access/


    Shiran Pasternak @gingi0 Aug 13, replying to @dgmacarthur:

    What is a reasonable profit margin for a platform for communicating publicly
    funded research? Non-rhetorical question.

    mrgunn @mrgunn Aug 13:

    For a publicly traded company, the answer is "what the market will bear".

Wow.

Krista Jamieson offers a great perspective.


What did you expect? I'm not saying that the guy is or isn't a horrible monster, but if he had ethical qualms with what Elsevier does or how it operates, he would have either changed it or left.

The same goes for everyone who works there. They have either found a way to justify that what they do is ethical and okay, or would quit. That's just human nature.


Is this telling us anything we didn't already know?

Either way, I don't think he's helping the image issue.


I will never know why CEOs wade into Twitter spats like this. So little to be gained, so much to lose. Maybe lack of control of ego leads to better decisions elsewhere, I don't know...


I suppose you don't have to worry about reputational damage when everybody already thinks you're a parasite.


He claims that digital distribution isn't cheaper than analog. I don't doubt that, given the amounts of money it probably costs to build effective DRM and paywall systems. Were all journals open-access, their costs would be a lot lower too.

Sci-Hub hosts 85% of the articles published in paywalled scholarly journals on a shoe-string.


I think it should be acknowledged that there are likely a number of accessibility and similar requirements that at scale bear non-trivial costs. I.e., building a website with top-of-the-line accessibility, providing resources for the hearing and visually impaired, etc.

I'm not saying current publishers are great at these things, just that they should not be forgotten and should be improved on if a new order was to take shape.


That's a good point, and accessibility of science is a big issue across the academy, as the Berkeley captioning case shows. The DRM/subscription piece of digital publishing is still probably the biggest cost of their model.

I remember but can't find a citation for an apocryphal story that the administrative overhead of metering, tracking and billing phone calls in the Ma Bell days was more than 50% of the cost.


Holy shit! I guess in the age of trumpism nothing is a surprise anymore. Well, we have the brightest here on HN, how about someone make a go at a competitive solution?


The most direct way to do this would be to legalize sites like Sci-Hub. It's hard to argue you're performing a valuable service when you have to use legal power to prevent others from doing that service for free.


Surely the system should be like ... Hacker News.

We publish here, and the stuff that is good rises to the top with upvotes from our peers, and it is all subject to review in the comments where many an interesting discussion is had.

We all gain by sharing ideas as widely as possible. We just need a way for that to happen.
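
For a sense of what that would mean mechanically, here is a minimal sketch of HN-style time-decayed upvote ranking (a commonly cited approximation, not the site's actual algorithm; the gravity value is illustrative):

    # Minimal sketch of time-decayed upvote ranking (an approximation, not HN's
    # real algorithm). The age decay is why submission timing matters so much.
    def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
        return (points - 1) / (age_hours + 2) ** gravity

Whether a decay like that is a sensible way to surface research, rather than news, is exactly the question the replies below take up.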


Or, maybe not.

An academic article is not a blog post. Also, only the most popular posts get discussed here -- tons of stuff is lost if it's not initially popular. Timing your submission allows for gaming the system.

All scholarly articles need to go through peer review and receive honest, constructive feedback -- especially the bad ones, and Hacker News (and Reddit and Slashdot and any system like that which doesn't assign reviewers to articles) isn't very good at that.


> All scholarly articles need to go through peer review and receive honest, constructive feedback

Which is also a system that can be gamed, by the way. For example, by the reviewers:

- "Oh dammit, we got scooped... better use the insights from here to help us past our hurdles, while stating this work is not original enough to be published so we can quickly submit our own and get published earlier after all!"

- "I see that this paper belongs to Camp A with regards to these two competing hypotheses explaining unexplained phenomenon X, I am in Camp B. Therefore, I will consider this paper rubbish by definition."

- "I am an old researcher who does not believe in some of the newer findings that contradict longstanding assumptions that I "grew up with", and therefore will never allow this paper to pass."

- "I am a very insulated researcher who cannot make sense of this multidisciplinary work, therefore reject the parts that lie outside of my field as rubbish."

- "This paper is a by a researcher competing for the same position/grant/etc. as me. Better make sure they don't get too much credit."

- "I'm not sexist/racist/etc but... totally clearly biased in my judgement of this paper because..."

Now, I'm obviously describing worst-case scenarios here. But my point is: it's human politics all the way down.


Of course it's human politics. Science is a human endeavor, and you can't remove human behavior from it.

My argument is that a blog aggregator style system is a bad fit which doesn't meet the requirements we desire from a replacement of the current commercial publishing scheme.


You definitely do not want that. Vetted science should not go through a pipeline that amounts to a lowbrow popularity contest. Hacker News might be somewhat better than Reddit in this respect, but there's no way you want to subject it to that kind of process.


Systems like HN and Reddit work (when they do, and to the extent that they do) mostly because they _don't matter_.

If you tried to use them for things that people care more intensely about, like determining what gets to be part of The Academic Literature, then they would instantly become the subject of concerted efforts at manipulation. Even now, HN has to deal with voting rings and the like; it would be much, much worse if people's careers were riding on it.

It's Goodhart's law: as soon as a measure becomes a _target_ it rapidly stops being a good measure.


Upvote/downvote lists are a horrible way to publish science. Simple silly example: a big group publishes, everyone gets online and upvotes the post in the early stages so it floats to the top.


Considering the popularity of homeopathy and other junk science, that's probably not a good idea.


The trouble here is that the entire academic structure is extremely difficult to question and change. Disrupting academia is kind of like disrupting the Catholic Church. It's so entrenched in its assumptions and rituals that trying to bring revolutionary change to it is pretty hopeless. Professors gained their status in this archaic structure and will resist challenging it. I mean, your PhD advisor will make sure you never challenge the research status quo with your thesis topic, let alone take part in fundamentally restructuring the academic publishing system.


> I mean, your PhD advisor will make sure you never challenge the research status quo with your thesis topic

This will come as a surprise to both my PhD advisor and my current advisee


Same for my PhD advisor and current/past students. Unfortunately, this HN post seems to be full of the usual cohort of passionately opinionated HN readers who don’t have a deep understanding about that which they’re passionately opinionated about.


You should elaborate with some detail instead of ad hominem.


You should elaborate. In my experience academia is largely a status structure where people are groomed to be hirable at schools X, Y, Z and publish regularly in conferences/journals A, B, C. You're saying this is fertile ground for hardcore innovation? No way. The extreme majority of the work is incremental and predictable; small low-impact results. (Disclaimer: My experience is in CS.)


Alright, I'll elaborate.

My dissertation was based on a highly speculative new treatment, in a field where mathematical models are not widely used. One of the potential answers was "Yes, this should entirely change the course of care for a particular disease."

My current advisee's first paper is, essentially, "Has the field been doing this wrong this whole time?"

Note that I publish regularly in the good society level journals in my field, and on occasion in the splashier journals, got a tenure track job at an R1 university, and have enough grant funding to cover my lab.

Also, your assertion that "incremental" research can't change the research status quo is flawed.


That's cool, congrats. In my field, computer science, it's far from your experience. Over and over I've experienced PhD students being pushed away from more radical ideas (opening "risky" but potentially really promising new areas of inquiry) toward incremental work on what's currently "hot" in the field. Yes I concur that I used "incremental" in an unnecessarily disparaging way. Some incremental work can be revolutionary, but in my experience in my field, most of it is humdrum.


I've always wondered why scientists, arguably the smartest people around, would fall for this blatantly obvious racket. And why they keep going back to the racketeer for years on end too, wasting millions in tax money while keeping the valuables behind lock and key for most of us.

It's so obvious you're being ripped off. What the heck scientists?

No wonder people believe in conspiracy theories, with a bunch of scientists denying them the knowledge while a bunch of business suits hoard the cash.


Because of the prestige of being in a journal.


There are many perverse incentives in this system, but one that tends to be overlooked is the incentive for faculty to publish in predatory journals. It's not that most want to publish there, but when even 3rd tier schools require their faculty to publish a certain number of articles a year under pain of not making tenure or an increased teaching load, submitting work to an undiscriminating journal is the easiest way to check that box.


Why don't people just publish through Arxiv? Is there not a discussion forum where people can post, discover, and comment on new papers?

If you know of one for CS, do comment.


Posting on arXiv is useful for making results available early, and for making published work available in some form that isn’t paywalled, but posting on arXiv on its own is not publishing, as it isn’t peer reviewed.


> Why don't people just publish through Arxiv?

ArXiv is not peer reviewed.


Because while that might make sense at the field level, it relies on scholars getting credit for that work, and few do.


Could you elaborate on what you mean by scholars getting credit for their work?


Very few tenure committees, funding agencies, etc. give equal weight to an arXiv pre-print and a published paper.


Note that several major research universities require faculty to place an online copy of their papers on an open university server within a year of publication. What may be missing is the subject-themed curation a journal provides. Now and then an energetic individual might create a mirror TOC page for a journal; for example, someone does this for SIGGRAPH's flagship journal, Computer Graphics.


While we are at it, let's change the format from .pdf to .zip so it can include source, data, and whatever else a hypothesis, its tests, and the resulting analysis encompass.

The first step is to put the PDF into the zip and have existing tools be able to navigate the hierarchy. It could include notebooks, BibTeX, TeX, data, images, etc.

And Semantic Scholar is in exactly the right place to institute this evolution.
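The packaging side at least is trivial with existing tools. A minimal sketch of the idea (the file names and manifest layout here are purely illustrative assumptions, not any existing standard):

    import json
    import zipfile
    from pathlib import Path

    def bundle_paper(out_path, pdf_path, extra_paths):
        # Pack the paper PDF plus its sources/data into one archive,
        # with a manifest so tools can navigate the hierarchy.
        manifest = {
            "pdf": Path(pdf_path).name,
            "files": [Path(p).name for p in extra_paths],
        }
        with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as z:
            z.write(pdf_path, arcname=Path(pdf_path).name)
            for p in extra_paths:  # tex, bibtex, notebooks, data, images...
                z.write(p, arcname=Path(p).name)
            z.writestr("manifest.json", json.dumps(manifest, indent=2))

    # Hypothetical usage:
    # bundle_paper("paper.zip", "paper.pdf",
    #              ["paper.tex", "refs.bib", "analysis.ipynb", "data.csv"])

Existing PDF readers and zip tools would still work; the manifest is only there so that anything smarter can find the pieces.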


The OA community - particularly PeerJ - has solved this problem by forcing you to upload supplementary data either to their website (if it's small enough) or to Figshare, which does not allow you to delete data once it is uploaded and paired to a research study. It works rather well.


This would be great, many people support this idea. You'll need generational change to get there.


"Truth never triumphs—its opponents just die out." - Max Planck


If you think about the insurance business, its only economic value-add is fraud detection and minimization. Someone needs to have an incentive to prevent fraudulent claims. The economic value of publishing is the orderly and dispassionate administration of intellectual property ownership. Successfully publishing a paper entitles the author to permanent ownership of the work. The author name never changes. A paper that leads to tenure at a major research university has a discounted value of more than seven figures (USD or euros). This is why the academic beneficiaries of this system, who in turn control research, are not eager for alternative methods; otherwise they could bring it down rapidly.


Academics, I have a proposal/question and was curious if such a system would work for you. Here it goes:

Take a P2P program similar to Kazaa or BitTorrent to share paper pdfs and datasets.

Next to these files also share indexes, derived data for searching.

Next to that, also share user generated review comments and endorsements to implement peer review and quality filtering.

Finally, digitally sign all these data to guarantee provenance and credibility of the data found on the network.
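To make the signing step concrete, here is a rough sketch of what a signed review record might look like (this uses the ed25519 API from the third-party "cryptography" package; the record fields are assumptions for illustration, not a finished format):

    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign_review(private_key, paper_hash, reviewer_id, verdict, comment):
        # Canonical JSON so every peer signs/verifies the exact same bytes.
        record = {"paper": paper_hash, "reviewer": reviewer_id,
                  "verdict": verdict, "comment": comment}
        payload = json.dumps(record, sort_keys=True).encode()
        return record, private_key.sign(payload)

    def verify_review(public_key, record, signature):
        payload = json.dumps(record, sort_keys=True).encode()
        try:
            public_key.verify(signature, payload)
            return True
        except InvalidSignature:
            return False

    # Hypothetical usage:
    key = Ed25519PrivateKey.generate()
    record, sig = sign_review(key, "sha256:ab12...", "reviewer@uni.example",
                              "accept", "Sound methodology, minor revisions.")
    assert verify_review(key.public_key(), record, sig)

In the real network the public keys themselves would need some web-of-trust or institutional attestation, which is the genuinely hard part.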

Let me know what you think of it. I’ve been loosely working on a prototype application that does all that for a while.


The problem isn’t technical. The current flaws in the publishing system are not due to lack of indexing, data distribution, provenance, or filtering. And please, please, please, don’t turn reviews into some user driven drivel like amazon product reviews or other systems that are easily gamed or prone to the whims of the masses. We already have problems with gaming and personal agendas making reviews less than impartial: don’t make something that makes it 10x worse. (CS academic publishing and refereeing for 20+ years here).


I understand, please see clarifications in https://news.ycombinator.com/item?id=18537122


I'm not sure that user-generated review comments are a good idea: it would be too hard to convince people (me included) that they'd be a good replacement for peer review. You don't know the user's expertise, and you have no guarantee that they have read the paper. Granted, that last one is sometimes not true in real-world publishing either, but at least the social pressure is there. It could also lead to gaming: asking your colleagues to give good reviews, or worse, companies that are paid to write good reviews, same as what has happened with Amazon etc.

Academia is largely a social construct; that's why it's hard to replace with 100% technology.


Well, the UGC I was thinking of is more like http://www.openannotation.org, mainly used to link to other papers and/or comment on draft papers. Also simple “Reviewed” tags and other classification items.

Since any derived data would be signed and linked to the identity of its author, a default filter removes anything that is not transitively included in your academic network.

Reviewed articles would just be “review count > N and reviewers in fieldNet”
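“Transitively included in your academic network” plus the reviewed filter could be as simple as this (a toy sketch; the endorsement-graph shape and the names are my own assumptions):

    from collections import deque

    def build_field_net(seed_ids, endorsements, max_hops=2):
        # Expand the trust network: start from identities you trust directly
        # and follow their signed endorsements up to max_hops away.
        net = set(seed_ids)
        frontier = deque((s, 0) for s in seed_ids)
        while frontier:
            person, depth = frontier.popleft()
            if depth == max_hops:
                continue
            for endorsed in endorsements.get(person, ()):
                if endorsed not in net:
                    net.add(endorsed)
                    frontier.append((endorsed, depth + 1))
        return net

    def is_reviewed(reviews, field_net, n=3):
        # "Reviewed" = at least n review records signed by identities in the net.
        return sum(1 for r in reviews if r["reviewer"] in field_net) >= n

Everything outside the net is filtered out by default, so reviews from unknown identities simply never show up in your view.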


If publishing can't come up with a solution to limit cost and open up access, there are already online models for open publishing.

An online taxpayer-funded archive could be created/designated by statute. It would require that such articles be published there (before and after reviews). Publishers would pay to have access; their prices would reflect that cost and a cost-plus profit. Privately funded orgs could decide which game they prefer.

Make a fine memorial for Aaron Swartz.


How about this: after 1 year peer reviewed publications get released under a BSD license? Because this research needs to be available to all and this would give publishing shops enough time to sell to universities and people who want it now.


BSD is a licence for software. Papers are literature. How’s that going to work?


A community, open, free-to-the-world license, then. The exact license isn’t the point; the point is that the research isn’t locked away behind a paywall but is free to benefit society, rather than letting a few pocket and profit from research often done by public institutions like universities.


> The answer is academic publishing. Its profit margins are vast, reportedly in the region of 40 per cent.

That's an extraordinary claim to make. I seriously doubt that the best profit margin across all industries tops out at 40%.


https://en.wikipedia.org/wiki/Need_to_know though I do not endorse it for academic research


Welp. It has already been broken: Sci-Hub is going strong, and so is the push toward open venues such as arXiv and others.


> HERE is a trivia question for you: what is the most profitable business in the world? You might think oil, or maybe banking. You would be wrong. The answer is academic publishing. Its profit margins are vast, reportedly in the region of 40 per cent.

Interesting. Does anyone know what the volume is like?


People have been trying to set up web-based academic publishing for at least twenty years, and they've failed. The obvious explanation is that it does actually cost money to publish academic research.


This comment is more directed at HN commenters than the article itself. I'm led to believe that free journals aren't successful precisely because they don't have standards. They don't have standards because they can't fund people to do basic vetting (this is not the same as peer review), or are otherwise eager to accept whatever comes their way, which is usually papers that have been rejected by the reputable journals.

Assuming there is no money-driven agenda, even the reputable journals have the issue that much of their content doesn't get peer reviewed. The findings are far from concrete.

But money-driven agendas definitely exist too, and without basic vetting a journal finds itself hosting BS lobbyist material with sloppy "science" finding favorable results for anti-GMO, alternative medicine, organic farming, anti-vaccine causes, etc. The problem also exists that these entities are effectively funding their own journals to buy "legitimacy", and they would certainly pose a threat to any journal that doesn't have high vetting standards.

As of yet, free access journals aren't reputable or actually desirable because they don't have adequate quality control. Free access to information is a worthy and noble goal, but we can't render the information useless in the process.


There are plenty of free, open access journals that are reputable.

JMLR http://www.jmlr.org/ is quite successful. There are some fields, such as machine learning, that are not dominated by for-profit journals. Why is this possible in some fields and not others? My answer would be that it is possible in all fields, but incumbency advantages can be very strong and coordination across academic volunteers can be more difficult when they are individually less secure.


There are some good free/non-profit journals; I just don't think you hear about them so much. There's a really good one in number theory called Algebra and Number Theory that has been around for about 10 years now. It was started by first-rate mathematicians, and its standards and rejection rate have always been really high.


What’s wrong with organic farming? It’s the only kind of produce I want to eat.


Organic farming does not mean farming without weedkillers or pesticides. It means farming with weedkillers and pesticides that are largely unstudied, compared with known quantities like Monsanto's products (whose positive and negative effects are well documented and widely understood).

The 'organic' products may be better on balance, but we don't actually know that, because the body of knowledge isn't nearly as developed yet.

That's my layperson's understanding, anyway. I could be wrong.


It's entirely marketing, there is no actual data supporting the claim that it's healthier for you. It succeeds by the "appeal to nature" fallacy, which causes a bunch of problems and gets decidedly anti-science in a few ways.

Using only "natural" pesticides because they are "healthier". This is unsubstantiated; again, it's just an appeal to nature. It can become problematic because natural pesticides often aren't as effective, so certain farms/crops require using an excessive amount, way more than they would if they used the more effective synthetic pesticides. The synthetic pesticides are of course accused of being unhealthy (because unnatural), though this is generally untrue. Furthermore, even the natural pesticides cause pollution problems, and overusing them isn't good.

Because the organic lobby succeeds by the appeal to nature, they are also anti-gmo. Which is a big problem.

While there are entirely organic farms, a lot of organic products are really not even what they advertise themselves to be, as many organic farmers don't actually run organic farms: they run normal farms with a section that adheres to organic standards. These standards are pretty arbitrary, except for the fact that organic farming is a significantly less efficient use of farmland, which is going to become a problem within the next century. We need to be making efficient use of our arable land, as we don't actually have much left on the planet unless we start turning national parks and protected areas into farmland.

Largely, organic farming is a faux luxury built on anti-scientific ideas that does nothing except waste valuable land resources, sold to relatively privileged people who are willing to pay a premium to feel better about what they eat.



