
As much as I hate the current situation with academic publishers, they have one very valid point - prioritization.

They impose an artificial bottleneck on how many articles can get published, so "published in XXX" means "one of the top X papers in the area according to a reasonable metric".

We have all surely noticed how journalism turned to shit when we moved from printed physical copies (with a bandwidth limit) to limitless clickbait-ridden "portals". Real decent journalism just cannot compete for readers' attention anymore.

There could be a reasonable middle ground - like mandated free access to individuals, or free access after a 3-year period or so, but completely killing the publishers would be throwing the baby out with the bathwater.




A publisher per se may not be needed. Just upload a PDF.

But a curator and a reviewer are pretty important roles. One for selecting and surfacing the important, high-quality work; the other for checking soundness and weeding out poorly done stuff.

Those are the actual paid services of, say, Elsevier. The rest is cruft and rent-seeking.

Now the question: how well are these services rendered by the publishers? Can this be done more efficiently, with higher quality? If so, how? Are there examples?


Those services are provided by editors and reviewers to Elsevier for free. Reviewers and editors are _not_ paid by the publishers.

That is why there was hope for flipping journals.

The experience of Gowers and co with Discrete Analysis shows that high quality selective journals outside of publishers are straightforward, and that we are overpaying publishers by a factor of ~100.

https://en.m.wikipedia.org/wiki/Discrete_Analysis


One problem is poorly done reviews, especially since many reviewers are not paid anything for their time, have an incentive to ward off competing researchers, and are protected by anonymity. It seems hopeless, though: who else could the publisher go to to assess the quality of supposedly novel, highly specialized work? I experienced so many strawman rejections back in the day that I just stopped trusting the publishing system.

I'd rather just upload a PDF, let the readers judge its credibility, and let (positive) citations be the benchmark, instead of trusting a handful of elite critics with unclear incentives.


And, as you say, the expensive publishing houses don't do anything to ensure high quality reviews. In grad school I had a paper rejected by a prestigious outlet because one reviewer thought that the experimental setup I used was too difficult and therefore they didn't trust that I was being honest about my results. They stopped just a step short of calling me a fraud. I was baffled that this was accepted by the publisher.

It was only later when I was tasked with performing reviews that I realized that almost exactly zero work is done by the publishers. It is just "well there's one Strong Reject so see-ya."


The reasonable middle ground is to decouple selection from publication. There is no reason that research should not be published at all while it's still undergoing review, or still getting rejected from the "top" journals. (Disclosure: I volunteer for https://Plaudit.pub, one effort to achieve this.)

I would also argue that "reasonable metric" in practice is more like "usually reasonable judgement by people with usually relevant knowledge".


I agree with you, but with reservations.

The existing model asks volunteer academics to rate papers on three principal axes: novelty, importance, and rigor. The journals don't do much here except find these academics and send them a handful of papers to look at. Further, novelty and importance seem to overshadow everything else in reviews, and both are extremely subjective and can entrench existing biases in a subfield. Whether this metric is "reasonable" is a bit contentious.

I agree with you that a system of open publishing plus some post-publish evaluation would be nice. But you don't really need the journals for this. Or you certainly don't need journals making billions in profit to do this.


Why do you assume the filtering/selection process has to be centralized?

Why not a decentralized rating (and commenting/reviewing) system by scientists for scientists? Similar to how Twitter wants to be a news filter for the masses by the masses.


A major (perhaps even the primary) consumer of the current ratings is the set of government institutions that fund scientists and their institutions. These funders are not really able to assess quality independently (especially since various perverse incentives would lead many of them to intentionally misassess it even if they had the diverse competence required), and they are definitely not willing to trust the beneficiary institutions/scientists to assess the quality themselves.

And apparently they're willing to pay quite some overhead just to have a separate third party gatekeeper perform that function for them.


This is often not true, and if it were true it would be backwards.

Funding bodies also employ academics who are able to judge the quality of proposals or they solicit feedback from experts. You don't just write "I've got a bunch of first author papers in Science so you should fund my grant." This is important because if the only thing that funding agencies use is institutional reputation then we end up with an even larger percentage of funding only going to entrenched academics at a very small set of institutions.


I'm seeing KPIs for grants (and even for whole grant programs) defined as, for example, the number of publications in top-quartile journals by impact factor according to e.g. Scopus and Web of Science.

They do not want to evaluate each publication separately or pay experts for a separate review of each one (if they did, that could be a source of funding for proper peer review instead of it being done by random volunteers); rather, they defer to the existing ratings of the publication venues. They need a simple, objectively measurable quantitative metric, not a complex qualitative one.


Given the financial incentive to publish (publish or perish), I wouldn’t be surprised if a critical number of authors found a way to collectively game the rating system.


The proposal is not to let anyone just upload unlimited manuscripts.

The proposal is to ensure that papers published by publishers are free (as in beer) for the public at large to access. Publishers can and should continue to curate the best papers after appropriate peer review and typographical review.


Strange, I've been in academia for five years and not once have I seen the name of the journal/publisher be indicative of the quality of the article. Rather, it's been the opposite: papers in lesser-known journals from no-name researchers have generally been more to-the-point and engaging reads than most of the HBR bloatfests we were bombarded with in business school.



