It’s time to allow researchers to submit manuscripts to multiple journals (nature.com)
173 points by wjb3 on Oct 11, 2023 | 121 comments



It is time to burn traditional academic publishing to the ground and rebuild it from the ground up, or else let's rebel against the government, give Alexandra Elbakyan citizenship and put her in charge of the Department of Energy to make all computer science research public domain for all the world to see. Charging $48 to read a paper funded by your taxes, before you even know whether it is useful to your work, will result in deportation or a firing squad, whichever a panel of independent reviewers decides.

A less spirited method is to let traditional publishing slowly die by adding review features to highly viable alternatives like arXiv, PubMed and others which will then replace it. I hate academic publishers, they are vultures that serve no purpose. I hope they all go out of business or somehow pay big :)


The way forward is for government research-funding bodies (ideally all of them) to insist that funded works only be published under open access. This appears to already be well underway in the UK. [0][1]

The article isn't about open access though, so this isn't strictly on topic.

[0] https://www.ukri.org/manage-your-award/publishing-your-resea...

[1] https://www.nihr.ac.uk/about-us/who-we-are/our-policies-and-...


Then the publishers get thousands of dollars from taxpayer money for each article published under the open access route, while still billing universities many thousands for access to the non-open-access ones.


Right, and that is why the same government bodies need to cap the amount that can be paid towards open access fees or page charges via a grant. If grants only provided $200 per publication that would end the ridiculous open access fees / APCs that currently get charged. There is no reason it should cost thousands of dollars to publish an article given the limited proofing most journals now do, the ability to easily submit works in a template provided by a journal, and that peer-review is uncompensated.


Just a data point: Google Scholar regularly reminds me of any publications of mine that list US gov funding that do not have a publicly accessible PDF.

You are already allowed (at least in tech) to provide a preprint and public PDF on personal (non-commercial) or educational websites.

And Google Scholar does associate those with the publisher's links, as an alternative source.

So, it's not like we couldn't provide all R&D for free already. At least in tech.


I’m all for government agencies requiring research to be available to everyone (full disclosure: I work for one), but I’m pretty against paying the open access ransom to do it. GP seems to have a beef with the DoE, but policy requires that all published works that have even one DoE employee on them be licensed perpetually royalty-free and transferable to the government, and that works funded by the Office of Science (and many other sources) be available through free open platforms like OSTI.gov. Why should we waste grant money on the thousand-plus-dollar fees to the publishing houses rather than making it free through the already existing, already required, and already working channels? Of course, this is for Computer Science and mathematics. I can’t speak for other fields.


This already happens. You can find all US NIH-funded research for free on PubMed.


Well, that just shows it happens in the US, but it's the same for most tech journals and Google Scholar/arXiv.


That doesn't solve the problem of funneling public money to Elsevier/Springer Nature. The model in which volunteers perform 95% of the editorial work yet private publishing companies charge exorbitant prices for OA is absolutely disgusting.


Let's remember Aaron Swartz died fighting for this:

https://en.wikipedia.org/wiki/United_States_v._Swartz

He helped Assange, who is now facing the same fate. And "the west" loves to point fingers at other countries for their flawed justice systems. Hypocrisy to the max.


You can fight for a good cause and still break the law. A system of law that selectively enforces laws is inherently flawed. The laws need changing. It was lopsided power against him. The law was still broken.

I wish Aaron Swartz had not committed suicide ahead of his many charges. I wish he had had his day in court to argue the illegitimacy of those laws in these cases in the first place. It wouldn't have been the first time that produced a change in the law. We will never know, and must change the laws asap.


> A system of law that selectively enforces laws is inherently flawed.

A system that seeks to inflict all the penalty it can is a system that is doing harm; it has fully lost sight of its original purpose to promote a specific good. Having lost its way, it is fixating on fairness and outputting damage.


I easily assume that many laws were broken in what led to his suicide. And equally for Assange. Law is a stretchable concept nowadays: Hunter's laptop turned out not to be "Russian disinfo", as some three-letter agencies illegally made us believe. And the laptop is full of evidence of breaking the law, but he got a sweetheart deal.

Laws created and pushed by Joe that put many in jail for drugs were violated by Hunter, and he walks.

Law is a fluid concept in this day and age. Same like this "rules based international order".

Please remember that hiding "undesirables" in WW2 Germany was against the law, and freeing slaves was against the law.


As someone who knows Alexandra personally and has followed her work for a very long time, I must warn you that, despite creating an awesome tool, in my opinion (and the opinion of a lot of scientists who were in communication with her) she is not a sensible person at all. She is deeply into pseudoscience (e.g. astrology), holds very radical political views (she's a Stalinist), and has a strange mix of misogyny and radical feminism going on. She is very open about it on public Scihub channels. At least this was the situation a year or so ago.


> academic publishers are vultures that serve no purpose

This is a naively optimistic view of how knowledge production actually operates. Sure, the scientific endeavor is constrained by what is actually the case (i.e. reality), but without some kind of editorial oversight imposed from above nothing coherent nor useful will be produced. A thousand well-trained, well-intentioned researchers toiling away at a problem will not magically self-organize such that their collective efforts make tangible progress on large, intractable problems. This will be true regardless of how many fancy web2.0/3.0 niceties one throws at the problem, since experience has shown that such solutions only make the problems of social dominance worse, not better. In the end, this sentiment is nearly identical to people complaining about "capitalists".

Do capitalists and academic publishers have purposes to fulfill? Yes. Do they fulfill that purpose well these days? Absolutely not. Like many of our social institutions these days, the people who run them seem to fundamentally misunderstand what their roles are, deferring to some vague, financialized liberalism that believes all problems can be addressed by maximizing human freedom, with no regard to bootstrapping problems. Because the institution ceases to perform its role, people begin to believe it has no role. Worse yet, now that people have no idea what the institution's role even is, they have even less of a clue as to how to fix it.


> without some kind of editorial oversight imposed from above nothing coherent nor useful will be produced.

True, but academic publishers charge an absurd amount of money in return for very little value. The publisher provides a platform for "editorial oversight" by peer reviewers, but they do not pay the peer reviewers. I would argue that "editorial oversight" in the form of peer review may be worth thousands of dollars per publication, but simply providing a platform for that review and profiting from volunteer work should not be compensated as highly as it is right now.


I do computer science research and publish regularly (in conferences, not journals, since that's how computer science mostly works -- you write a paper, look for the soonest upcoming relevant conference deadline, submit there, and get a response 2-3 months later). I think discussions about peer review often fail to explain all of the things peer review can accomplish:

1) Verifying that work is correct, assuming that the author is honest (e.g., you take their data at face value)

2) Verifying that work is correct, assuming that the author is malicious (e.g., you scrutinize their data to see if it's fabricated)

3) Certifying that the paper is “interesting” (universities, grant-making bodies, and other bureaucratic entities want some evidence that the researcher they're funding is good, and somebody has to hand out the gold stars)

It takes time for even an expert to do 1), and it takes still more time to do 2). There aren't really good incentives to do it beyond caring about your field, or wanting to build on the thing you're reading. 3) can be done more quickly, but it's subjective; still, a world where things are only assessed for correctness and not interesting-ness is a world where external funding bodies rely on other weird proxies like citation metrics or something to figure out who's good, and it's not clear to me that that's better.

My perception from computer science is that it should be harder to submit papers, because there are too many authors who simply rarely produce good papers and are clogging up the conferences with endless resubmissions until they get reviewers lazy enough to all say "weak accept".


Also, sometimes reviewers point out interesting ideas you didn't think of, because you always have tunnel vision by the time you submit a paper.


> My perception from computer science is that it should be harder to submit papers, because there are too many authors who simply rarely produce good papers and are clogging up the conferences with endless resubmissions until they get reviewers lazy enough to all say "weak accept".

It seems like the root issue here is pathological incentive to publish for career advancement?


That's certainly a driver for much of the pathology; however, I don't really see how that can be changed - I haven't seen any good proposals for what could reasonably replace the current bibliographic metrics for the various funding bodies and institutions. They do need some 'outsourced' objective metric because they aren't capable of, or willing to do, an in-depth review of each individual's work, and they won't trust the self-evaluation of researchers or their home institutions.


The best way to allow multiple submissions while amortizing the review work is to make reviews (and rebuttals) public, and overlay them on submissions made public on some preprint server.

This establishes priority (if credit is a concern), can be made blind / double-blind if so desired, and also makes public the reviews (which are as much a public service as writing research papers). Which editorial boards “accept” the paper for publication is then simply a matter of collecting endorsement tags on the submission.
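
To make that concrete, here is a minimal sketch of what the overlay could look like as a data model (the names are purely illustrative, not any existing preprint-server API):

  # Illustrative sketch only. A submission lives on the preprint server and
  # accumulates public reviews, rebuttals, and journal "endorsement tags".
  from dataclasses import dataclass, field

  @dataclass
  class Review:
      reviewer: str                 # a pseudonym if the process is blinded
      version: int                  # which revision of the preprint was reviewed
      body: str
      rebuttal: str | None = None   # the authors' response, also public

  @dataclass
  class Endorsement:
      journal: str                  # editorial board that "accepts" this version
      version: int

  @dataclass
  class Preprint:
      doi: str
      versions: list[str] = field(default_factory=list)   # timestamped revisions
      reviews: list[Review] = field(default_factory=list)
      endorsements: list[Endorsement] = field(default_factory=list)

      def accepted_by(self) -> set[str]:
          # "publication" reduces to the set of boards that endorsed the preprint
          return {e.journal for e in self.endorsements}

Priority falls out of the first timestamped version, and blinding just means the reviewer field holds a pseudonym until the reviewer chooses to reveal it.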


I recently submitted to PLOS One. They publish the complete review alongside the paper, after acceptance. I feel like this is already a big win for transparency. Immediate review publication would be even better.


Nature also does this; it can be rejected, but that is a suspicious sign.


eLife is a non-profit journal that has that kind of peer review process, with a few minor differences: https://elifesciences.org/about/peer-review

It is quite well regarded now, they publish excellent research and the whole process is crystal clear.


One of my favorite journals. Perhaps of interest to HNers is their fairly active GitHub: https://github.com/elifesciences, and my favorite project of theirs, Lens: https://github.com/elifesciences/lens, a paper viewer that actually makes reading papers easier (there are loads of these out there, but most are crap and inferior to a PDF).


I think it remains to be seen how/if its reputation changes given their new approach. It will take a number of years before its impact can really be assessed.


Speaking as a former scientist, it's worth remembering that peer review is not just about ensuring quality by preventing poor research being published. It also leads to good research being improved, in a very similar way to how code review can improve good code.

So doing away with peer review would have some negative consequences.

It's also worth remembering that journals are really not very different to newspapers, and the top journals are the tabloids. They want to publish things that people will read and cite. They only care about quality in so far as it is required for people to want to read a paper. Ever wondered why so many top journal papers get retracted? It's because the top journals know people will read the click-bait! My favourite example is when the Lancet published the famous MMR-autism paper by Wakefield, which is terribly unscientific, but was bound to be much cited.


> It also leads to good research being improved, in a very similar way to how code review can improve good code.

But the parallel breaks down because, unlike with code reviews, there is no protection against the following (and the people below are real people I have encountered in my 10+ years of research experience):

* A reviewer writing eloquent reviews on every single paper they ever reviewed, explaining why they should be rejected, as a gimmick at this point.

* Reviewers not taking time constraints into consideration. Doing this 2-year experiment would indeed make for a better paper.

* Reviewers not reviewing in a timely manner. Yes, you said you volunteered to review this. It has been 9 months since.

* Reviewers requesting you to cite their papers, however unrelated.

* I quote a full review I once had: "This paper needs more equations."

When code reviewing in a team, these issues get sorted out. In academia, peer reviewing is in a state of complete anarchy.


I agree, those are all existing problems. But I would say they don't detract from the benefits.

Maybe peer review should be time-limited, but have the reviewers' comments be immediately available for the authors to respond to, rather than having to wait for them all to come in.


more-open publishing does not necessarily do away with peer review.

the really interesting part is that sometimes review benefits from anonymity and even blinding. doing that in a more-public publishing process would be interesting. (but not impossible - there are tricks to providing authentic anonymous identities.)


He mentions that there is a conflict of interest in recommending peer reviewers. While I agree this can be abused, I've often run into cargo-cult science in AI: when I submit something that is valid, novel, and in my opinion advances the field to conferences where I cannot recommend more senior scientists as reviewers, it gets rejected because it is not aligned with how past work defined the problem. Recommending people could help a lot to address this.

For example, in continual deep learning people often use time datasets in which they use a small amount of memory and incrementally learn classes, and the algorithms cannot work for other incremental-learning distributions. It's been very hard to publish work that instead works well for arbitrary multiple distributions, eliminates the memory constraint (which mostly doesn't matter in the real world), and shows it scales to real datasets. We have been able to get huge reductions in training time with no loss in predictive ability, but can't seem to get any of these papers published because the community says it is too unorthodox. It is far more efficient than periodically retraining as done in industry, which is what industry folks always tell me is the application they want from continual learning.

The confusing thing is that when I give talks or serve on panels I always have folks thank me and tell me they think this is the right direction and it was inspiring.

In my field the review system is way overtaxed with too many inexperienced people who struggle with having a broad perspective, so I think submitting to more venues would probably make things worse.


I'm not sure that the bias is only due to inexperienced reviewers. For example, even at a specialized venue like CoLLAs (I also work in CL), where you could send more esoteric research, you still see most people doing the usual rehearsal+class-incremental stuff. Most experienced researchers are also quite happy with the current state of the field. They may agree with your view, but their actual research and vision is much more aligned with the "popular" CL research.

In general, the whole deep learning field tends to oversimplify practical problems and experimental design in favor of overcomplicating methods. This happens at all levels.


What's CL?


Continual Learning, an ML training concept.


Computer Science's conventions solve these problems (although CS certainly has other problems):

  * papers are generally posted to arxiv.org immediately on being finished, so everyone can access them
  * conferences have fixed deadlines and relatively short, fixed/enforced review cycles


Let's distinguish between papers and preprints, please. arXiv has contributed to a blurring of the distinction. The arXiv preprints are useful but should always be taken with a grain of salt. There is nearly no filtering done on things uploaded to arXiv.

Everyone accessing someone's uncritically reviewed work is a bittersweet gift.


In mathematics, at least, papers and preprints are indeed widely considered to be the same thing. In practice, for people working in the field, they are.

Math papers tend to be highly technical, read by other specialists in the field. When it comes to correctness -- whether or not I should take a paper with a grain of salt -- the authors' reputation counts for much more than the journal's. And in the case of student authors, who are just beginning to publish, the advisor is implicitly staking their reputation on the work as well.

There are also preprints on the arXiv, written by people unknown in the community, claiming to prove the Riemann Hypothesis or some such. These aren't taken seriously by anyone.

An outsider might not be able to tell which preprints can be considered equivalent to papers, but such people are not likely to be seriously reading math research in the first place.


You can always overlay a reputation system on top of your pre-print server.

The informal one you describe here, or any formal one you can come up with.


Arxiv has been working just fine for a long time, there's no need to change it. Besides I'm not going to voluntarily post my work so I can get publicly rated by a bunch of unknowns lol.


You're thinking of social-media-type "reputation".

Instead, think of the goal being to associate measures of worth with the reviewers. If you're publicly rated by a bunch of worthwhile people, count yourself lucky.


> Arxiv has been working just fine for a long time, there's no need to change it.

Exactly, that's why I am not suggesting any change to Arxiv.

Think more of people eg submitting Arxiv URLs to Hacker News for what I have in mind. Or discussing Arxiv submission on a forum or in a wiki etc. You can imagine some specialised software that has some better support specifically for material from Arxiv.

That's what I mean by 'overlay'.

Or think of Slatestarcodex publishing a blog post with links to his favourite Arxiv papers for that month. That's pretty much equivalent to what a journal produces. And if Slatestarcodex compiles that link list by doing some peer review and chatting with the authors of the papers, that's almost exactly what the journal does.


Yes. For example, here is a paper by some Cornell people where they reinvent machine-learning model evaluation; the only motivation I can discern is hubris and self-service:

https://browse.arxiv.org/pdf/2310.02335.pdf

Do not trust arxiv papers. They have not been vetted.


> Everyone accessing someone's uncritically reviewed work is a bittersweet gift.

Review work is not always done by senior researchers (e.g., professors). Senior researchers often hand this down to PhDs. Having 3 to 4 reviews by nice junior reviewers doesn't sound very critical.


Just to be clear: you'd expect PhD students to be trained in reviewing by their supervisors.

So PhD students writing the initial review is not weird - it is an expected part of their training. As is the supervisor going over the review and providing constructive feedback. As is the review being submitted under the supervisor's responsibility, with credits (mention in proceedings) to the student for acting as a subreviewer.

Yes, there are ways to abuse this system and yes, abuses do occur. Any system for gaining job prestige or workload reduction is a target for gaming. This doesn't mean the system should be trashed, but it does warrant additions to curb excesses.


If a late-stage PhD student in the same narrow technical field can't review the paper, then it's almost certainly a problem with the paper. After all, junior people are the primary audience for any paper. Also, PhD students often have more depth on their research topic than the professors.

The sibling comments about making sure that most reviews are written by senior researchers also make good points. That should be checked by the program committee or editor.


They have to say they did this, and you are forgetting the editor's role in paper evaluation. This criticism can be, and is, taken into account, and you can send papers out for more reviews if you get conflicting ones. In my experience as an editor, junior people typically give better reviews than senior people (unless the latter are emeritus and have unlimited time). I suppose this has to do with confidence: the junior person will question their own review.


arXiv paper quality is better than the average journal paper's quality. Because publishing on arXiv doesn't count as a publication on a résumé in many places, far fewer papers are posted there just for the résumé.


It’s how science worked for 3 centuries before the current review system was instituted just a generation ago.


Let's do a quick analogy: arXiv = GitHub. It's all collaborative writing, right? You publish data, code, and your paper continuously. Then you have releases. Perhaps they get tagged with which publication venues accepted them.


I'm confused. Do you accept published papers as gospel? They should be taken with a grain of salt too.


Depends on the field certainly. A paper in the Annals of Mathematics is definitely a lot more rock solid than whatever goes on the arXiv, or reviewed papers in certain fields that are particular magnets for junk science.


Funny you should mention Annals. A journal famous for publishing two papers in three years by the same author, one proving some theorem, and the other disproving the theorem. Sure, tons of other journals have done so, but Annals is definitely the highest profile one. Maybe take a look at https://mathoverflow.net/questions/282742/endless-controvers... or https://mathoverflow.net/questions/35468/widely-accepted-mat... It's also a nice way to pad your CV if you manage to get the wrong theorem published - you get two Annals papers for the price of one.

It is of course true that published papers have been vetted. But very often, it simply means that 1. an editor glanced at it, 2. (optional) a peer provided a positive quick opinion on the paper, without checking the correctness, 3. one or two independent referees presumably read the paper and produced a report on it. It's not nothing, but it doesn't mean you should accept blindly as truth everything published.

For context, I'm an associate professor of mathematics at a large research university.


The way I look at it, we passed the point where there are so many people that no one can read all the papers in their field any more.

Peer review is the first filter that papers go through. It's not perfect (it makes mistakes in both directions), but the output of the peer review process definitely has a higher signal to noise ratio than the input.


That's a good start. It's certainly a major improvement from when I still published regularly (until about 15 years ago).

One thing that struck me as a researcher back in the day was that it was much easier to get people reading things that I put on my blog than it was to get scientists to read my papers. As a researcher, you are basically engaging in 17th-century SEO: writing papers and getting your peers to pay attention to them. We use email these days, but otherwise the process hasn't changed a lot.

This is weird considering we now have a lot more tools. Imagine if Hacker News worked that way. It wouldn't work. The best kind of endorsement for a paper is not random anonymous peer reviewers giving their thumbs up or down on inclusion in a digital-only journal that nobody ever reads cover to cover. Instead it is other researchers citing your work. This is in fact part of how most academic performance is measured. The goal is to make your peers aware of the existence of your work, get them to spend time reading or learning about it, and then get them to engage with it by citing, criticizing, or adding to it.

The whole business of a journal publication is just SEO. You get some renowned journal to include you, and maybe people will bother reading it and maybe refer to your work if they like it. A citation is just a glorified like. The real goal is to get people to read and "like" your work. It's also self-reinforcing: the more people cite your work, the more people will read it and thus cite it. But it's a super inefficient process. From idea to citations happening can take years.

Conferences and workshops are where scientists meet up and discuss their work. That's where your peers are. I always enjoyed the smaller workshops. Get some smart people in a room and beautiful things happen. The academic world is basically a social network. With likes and everything. But minus all the modern tools that make other social networks work so well. There's some room for improvement.


Also, publicly available reviews and comments at openreview.net.


The author suggests that "The fear that multiple submissions would overwhelm the peer-review system lacks empirical evidence". Maybe it won't "overwhelm" it, but it will certainly add to the reviewing workload. Simply stated, if authors submit to N journals and each asks for 2 reviewers, that's 2N units of work (assuming they can get the reviewers), compared to 2 units of work.

But it may be worse than that, actually. I will not be much inclined to bother reviewing, if I know that the authors might pull their manuscript if another journal gives a green light quicker than the journal for which I have been asked to review.

The solution to a slow reviewing process is not to ask reviewers to do more of this unrewarded work.


Why can't the journals share reviewers? Once the reviews are in, the editors decide if they want the paper in their journal, and if more than one does, the authors get to pick. Obviously it would be a bit more complicated with revisions, etc. but it would be an improvement over the current system.


Who picks the reviewers and nags them to complete their reviews? This is the principal actual useful work journals do these days.


I've often wondered if it might be possible to set up an 'auction' system for papers. Post on arXiv and then have journals 'bid' for publication - saying how much they would charge in publication fees, review turnaround etc. Authors can then choose the journal they prefer. The advantage would be that the initial stage of 'shopping around' would be eliminated (sending a paper to a journal and being rejected without review) and there would be incentive for journals to reduce publication fees. Just a thought...


"The fear that multiple submissions would overwhelm the peer-review system lacks empirical evidence and is outweighed by the burden placed on researchers."

Actually, here's a paper showing that peer review is already overstretched:

https://arxiv.org/abs/2309.15884


This is a terrible idea.

The biggest problem I see with the current system is that a small number of authors venue shop by repeatedly submitting the same paper without changes.

Allowing submissions to happen in parallel will benefit that group of bad actors to the detriment of pretty much everyone else.

Also, all submitting in parallel will do is increase the number of erroneously accepted submissions. For one thing, typical reviewers review for multiple conferences, so you'd just be sending the same manuscript to the same person multiple times. Also, most related conferences have a fairly similar bar.

In fact, most papers have pretty good alignment on reviews. Reviewers typically can't see each other's reviews or discuss the paper before submitting their review, so these are mostly independent samples. There are papers that have bimodal scores, but it's rare, and usually due to low confidence reviews or the very occasional controversial paper.

It is often detected when people just ignore previous reviews, and resubmit without revision. Typically, one reviewer will overlap between the two conferences and notice the resubmission. If the paper hasn't changed, that's an easy reject, even if the person wasn't assigned to review it. Allowing parallel submissions would normalize this class of denial of service attack against reviewers.

Also, far more often, the previous reviewer will take a look, and say that the authors did a good job addressing the previous group's concerns. Of course, that can't happen if the paper is submitted in parallel. Allowing parallel submissions would put people that actually improve their work between submissions at an unfair disadvantage.


While we're talking about needed journal changes, it's worth pointing out that Nature, the journal, now allows articles submitted by authors to be open to anyone, which is great, but only if the authors pay Nature $11,690 per article. Otherwise, only institutions which subscribe to Nature can see the articles.


That's because Nature is a predatory journal. Submit to society journals, please.


As far as I know, an author is allowed to share the preprint of their Nature submission, e.g. by posting it on their website or Arxiv, without paying an open access fee [1]. The difference between the submitted and final version of a paper is I think usually pretty minor, so this seems decent enough.

[1] https://www.nature.com/nature-portfolio/editorial-policies/p...


If peer review is turned into properly paid work through a platform, many researchers from under-developed countries will surely join to become reviewers. This will bridge the demand-supply gap and make publishing faster.

https://www.thelancet.com/journals/lancet/article/PIIS0140-6...


If peer review becomes paid work, most journals will probably outsource reviewing to developing countries. Journals (effectively universities and research funders) just can't afford the consulting fees of first-world academics.


Who wants to pay for something that used to be free?


I'd rather do away with the whole publish or perish thing.

One half-baked idea: for academic hiring, only ever judge the quality of a candidate's research based on their five best papers (as nominated by the candidates themselves); then there is no pressure to publish anything that doesn't break into the top five.


And then someone publishes five good papers and does nothing* for the rest of their life.

I mean, the idea looks nice, but there will always be people trying to game the system.

But don't get me wrong – I still like your idea, I just think it would need some refining (as you yourself admit, ofc).

* As in, no research.


I don’t see a problem with someone publishing 5 great papers early in their career and nothing else. I mean what were the meaningful publications from Oppenheimer in his later years? Or Einstein? Publishing should be done when it matters, not when it’s needed for a salary increase.


"nothing else" really? why would an institution pay them?

in reality, faculty do more than publish, and are also judged by service, supervision, etc.


This would mean that not too exciting results never get published. Just because a topic/finding is not earth-shattering now doesn't mean it isn't useful information for the future.


I think one of the root causes for the problems in publishing is that "original" research has much higher status than the grunt work that contributes to the quality of research.

PhD students should probably spend much of their effort in trying to replicate published research papers, instead of publishing "original" research. This would teach them a lot about the typical quality issues in their research field, allowing them to produce quality "original" papers later.

This may sometimes even allow them to publish highly influential papers that show the limitations of published papers, because quality issues seem so widespread. This would also allow them to really contribute to the field.

I think we would see this if universities and journals took research quality seriously.


Why don’t universities publish their own journals in the sciences like they do for law? Why don’t we have the Stanford AI Review or the University of Michigan Journal of Organic Chemistry? Like law, turn them over to the grad students to publish.


Some actually do, but they only accept submissions from their own students and members.


journal "monogamy" is the problem.

modest proposal: always publish to arxiv, which assigns the permanent DOI. journals can be offered the article - one at a time if the authors desire, or in more of an auction format. when published in a journal, the arxiv entry is simply updated (new revision findable and attached to the original DOI).

this would make research works first-class, rather than journal byproducts.

internet/telecom providers don't want to be dumb-fat-pipes; journals don't want to be editing and review-coordination services. so what?


As a reviewer, it's really painful to get a paper that's completely wrong (especially if it is overclaiming) and that already has a large download count on arXiv.

I'm all for putting camera-ready copies on it, but submitted drafts really shouldn't be there.


Tip: Magazines are not part of the scientific process. They are a parasitic industry riding on the back of science to make money. You can do as much science as you want, and not publish it anywhere. Or you can publish it on your own website if you feel the need. But publishing is NOT part of the scientific process.


I've been working on improving academic publishing and review from a software angle for a year and a half. I think there's a ton of room for improvement here (all the software tools journal teams have available to them leave a lot to be desired). In improving the tooling, I think we can make the work of editors and reviewers a lot easier, and that will improve the turnaround time of reviews. We can also help editorial teams that want to experiment with things like multiple submissions, or open submissions.

I'm currently building a platform that aims to make these improvements and enable these experiments - the working title is "JournalHub". The one liner might be "Github for Journals". It's in alpha and currently supports preprints and preprint review, journal publishing flows, and post-publication review. I've done a bunch of iteration on the review UX, which draws heavily on Google Docs and Github PRs, and that's getting pretty close to beta. I'm still working on the editorial workflows which are in early alpha.

Once we have a platform that a significant number of journals are using, we can then build opt-in experiments with peer review. An example particularly relevant to the OP is potentially allowing multiple submissions where each editorial and review team can see the other's comments and thus multiple journals can collaboratively review a paper and benefit from each other's work. I've talked to editors who actually suggested the idea while doing UX research, so it's not unfathomable that journals might opt-in to something like that.

My impression from UX research is that there are actually a lot of editorial teams aware of the problems with journal publishing and eager to experiment, but feeling restricted by the mandates of the commercial publishers. So my main goal right now is to enable editorial teams to escape the commercial publishers, to free their hands, and enable their experiments.

I'm still doing UX research and looking for editors and editorial teams interested in talking - so if you're a journal editor and you want to help develop a platform that might help you escape the commercial publishers, reach out! (Email: dbingham@theroadgoeson.com)


I want to post my research and have journals compete to publish it.


I would pay money to watch a battle royale with Elsevier managers.


... because nothing is more pleasant than revising multiple manuscripts with different requirements at the same time?


this is such a joke of a system. half of the web is a system for peer review already. you could basically repurpose any old forum software like Discourse or Reddit, twitter, mastodon to work as peer review. It's just people commenting on a post! With a system for verifying identity (use official email) and assigning reviewers maybe.

Then over here we see a journal publishing this trash that treats their audience as toddlers with amazing tips like

"Follow up: after submission, "

"Professional networks:.."

Mostly I'm just surprised that our smartest people put up with this, when any researcher could ask their CS colleagues to create a replacement over a few weeks. It's a pure social coordination problem; it would only take a few groups of researchers switching to light the fire. It's a preference cascade just waiting for a little nudge.


Peer review is a joke. Peer reviewers don’t look at your data and the programs you used to analyze it. They don’t look at your experimental apparatus, they don’t repeat your experiment, they don’t talk to the subjects you interviewed, at best they can spot obvious “red flags”.

(Of course, in 2023 you should be able to publish your data and all your software with the paper.)


Some of the things that peer reviewers do, in my experience, in biology:

- question whether or not the conclusions you are making are supported by the data you are presenting

- ask for additional experiments

- evaluate whether or not your research is sufficiently novel and properly contextualized

- spot obvious red flags - you seem to discount this, but it's quite valuable

In my experience, the process of peer review has been onerous, sometimes taking years of work and many experiments, and has by and large led to a better end-product. There are not so great aspects of peer review, but it's definitely not a joke as you characterize it.

I'll add that in biology and adjacent fields, it makes no sense to discount peer review because the reviewers do not repeat your experiment - doing so is simply not practical, and you don't have to stretch your imagination very far to understand why.


I also work in biological sciences research, but I'm more skeptical of peer review than you appear to be. My main criticism is that peer review is an n=2 process. Why not publish an unreviewed pre-print in bioRxiv and explicitly solicit constructive, public feedback directly on the pre-print on bioRxiv? I envision something similar to GitHub where users can open issues and have nuanced discussions about the work. The authors can address these issues by replying to users and updating the data and/or manuscript while bioRxiv logs the change history. Journals can then select sufficiently mature manuscripts on bioRxiv and invite the authors to publish.

This would massively increase the number of people that review a manuscript while also shortening the feedback cycle. The papers I've published have typically been in the peer review process for months to years with just a handful of feedback cycles of sometimes dubious utility. This can be improved!
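
As a toy sketch of that issue-thread idea (the names are hypothetical; bioRxiv has no such API today), issues would be opened against a specific revision of the manuscript, and the logged change history would record which revision resolved them:

  # Hypothetical sketch of issue-style public review tied to manuscript revisions.
  from dataclasses import dataclass, field

  @dataclass
  class Issue:
      opened_by: str
      text: str
      against_revision: int            # revision the concern was raised about
      resolved_in: int | None = None   # revision that addressed it, if any

  @dataclass
  class Manuscript:
      title: str
      revisions: list[str] = field(default_factory=list)  # logged change history
      issues: list[Issue] = field(default_factory=list)

      def post_revision(self, summary_of_changes: str) -> int:
          self.revisions.append(summary_of_changes)
          return len(self.revisions)                       # new revision number

      def open_issues(self) -> list[Issue]:
          return [i for i in self.issues if i.resolved_in is None]

A journal looking for "sufficiently mature" manuscripts could then simply favor ones with few open issues remaining across several revisions.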

Edit: I forgot to mention the issue of politics in peer review! If you're in a relatively small field, most of the big researchers all know each other, so peer review isn't truly blinded in practice. Junior researchers are also pressured into acquiescing to the peer reviewers rather than having an actual scientific debate (speaking from experience).


As it happens, I'm building "Github for Journals".

I pivoted away from attempting a crowd-sourced review approach with a reputation system to trying to support journals in going Diamond Open Access.

But the platform I've built supports co-author collaboration, preprints and preprint review, journal publishing flows, and post publication review - all in a continuous flow that utilizes an interface drawing from Github PRs and Google Docs.

You can submit a paper, collect feedback from co-authors, then submit it as a preprint and collect preprint feedback, then submit to a journal and run the journal review process, then collect feedback on the final published paper. And you can manage multiple versions of the paper, collecting review rounds on each version, through that whole process.

It's in alpha; I'm pushing really hard with a short runway to get the journal flows to a usable beta while trying to raise seed funding... the catch being I feel very strongly that it needs to be non-profit, so seed funding here means grants and donations.

I'm looking for journal editors who want to participate in UX research. I'm also interested in talking to folks who run preprint servers to see if they'd have any interest in using the platform. If you (being any reader) know any, or have leads for funding, reach out: dbingham@theroadgoeson.com


When you say "submit to a journal" does that mean you are not a journal? Why operate as a preprint server, but not offer to publish with peer-review? (Perhaps I'm misinterpreting your comment).


It doesn't sound like that poster operates as a journal, and that makes sense. Academic researchers need to publish papers in long-standing and highly respected journals in order to be promoted and eventually gain tenure. Journals do not add value by simply providing space for researchers to publish their work—they add value by existing as a reputable brand that can endow select researchers with academic and social credit.


As mentioned in my other comment, crappy peer-review is a big problem for most journals, so a solution to that needs to be found.


Yeah, before I pivoted to trying to flip journals, I spent a year exploring crowdsourcing with an eye on improving peer review. After building a beta and collecting a bunch of user feedback, my conclusion is that academics on the whole aren't ready to crowdsource. Journal editors are still necessary facilitators and community organizers. So that led to exploring flips.

However, I think there's a lot that software can do to nudge towards better peer review. And once we have journals using a platform we can build lots of experimental features and make them easy to use and adopt to work towards improving it.

I've kept crowdsourced preprint review in the platform - though I removed the reputation system, since UX research suggested it was an active deterrent to people using the platform - to enable continued experimentation with it. And the platform makes it easy for preprint review to flow naturally into journal review and for the two to live comfortably alongside each other. The idea being that this should help enable people to experiment with preprint review without having to take a risk by giving up journal publishing.

And the platform has crowdsourced post-publication review as well.

My thought is that if we can get the journals using the platform, that will get authors and reviewers in the platform and since preprint and post-publish review are really easy to do in the platform that will drastically increase the usage of both forms of review. Then folks can do metascience on all of the above and compare the three forms to see which is most effective. Hopefully that can then spur movement to better review.

I also want to do work to ensure all the artifacts (data, supplementary material, etc.) of the paper live alongside it and are easily accessed during review. And work to better encourage, reward, and recognize replications. I think there's a lot we can explore once we have a large portion of the scholarly community using a single platform.

The trick is getting there.


The platform is intended to host many journals in the same way Github hosts many open source projects. And to facilitate interactions, conversation, and collaboration among authors, editors, and reviewers across them.


I think the key is that peer review is a promise of an n=2 process.

There's no promise that an unreviewed pre-print is going to get two constructive readers. It's also wildly subject to bias - being on a pre-print with a junior, female researcher was eye-opening as to the merits of double-blind review.


You could blind the pre-print process, too?


I've not seen a major attempt to blind pre-prints, and given you have to remove some identifying information for blinding, I think that would be a tall order.


Why would that be a tall-order? Seems fairly simple and straight-forward, doesn't it?

You'd set up a server where people have accounts, but publishing pre-prints is anonymous by default, and identities can be revealed later.

In the current peer review system, people already have to produce papers with those identifiers removed. They can do exactly the same in the pre-print world, can't they?


A great many papers in my field contain contextual details about the settings the studies were conducted in that would effectively deblind them.

That sort of betrays the idea of a pre-print, in my opinion, because they should not depend on "Someday we'll come back and fix this".


How does conventional peer review work for those papers?


> Junior researchers are also pressured into acquiescing to the peer reviewers rather than having an actual scientific debate

Yes. When I was teaching at the graduate school level, doctoral students sometimes came to me for advice about how they should respond to peer reviewer comments. Those comments were usually constructive and worthwhile, but sometimes they seemed to indicate either a misunderstanding or an ideological bias on the part of the reviewer. (This was in the social sciences, where ideology comes with the territory.) But even in those latter cases, the junior researchers just wanted to know how they could best placate the reviewer and get their paper published. None had the nerve, time, or desire for an actual scholarly debate.


As both a grad student and a postdoc I wrote appeals to rejections for peer review that succeeded.


Yes, you can certainly do that, but I wonder how long the appeal and approval process took? I'd bet it's measured in months.


It was considerably faster than a wholesale resubmission to a new journal, and landed the paper in a better home than it would otherwise have found.


Exactly. The quality of peer review is generally pretty poor. There are a lot of really terrible studies and reviews being published in high-quality journals by people from places like the Mayo Clinic, such that you have to wonder how they passed peer review.

And then on the other hand, if you ever actually have to submit a paper to peer review, you'll see how clueless a lot of the reviewers actually are. About half do give useful critiques and comments, but the other half seem to have weird beliefs about the subject in question, and they pan your paper due to you not sharing said weird beliefs.


I agree with your suggestion and would 100% welcome that process - though I don't think they're necessarily mutually exclusive. As I see it, the main difference between the status quo and the more open process you suggest is that in theory reviewers that are hand-picked by the editor are more likely to have directly relevant experience, ideally translating to a better, and potentially more efficient review. Of course, that also comes with the drawbacks that you mentioned - that the reviewers are easily de-anonymized, and that they may be biased against your research since they're essentially competitors -- I've had the good fortune of not being negatively affected by this, but I have many colleagues who have not been so lucky.

Edit: Also, to comment more on my own experience, I was lucky to be working in a well-established lab with a PI whose name carried a lot of weight and who had a lot of experience getting papers through the review process. We also had the resources to address requests that might've been too much for a less well-funded lab. I'm aware that this colours my views and didn't mean to suggest that peer review, or the publication process, are perfect. The main reason I wanted to provide my perspective is that I feel that on HN there's often an undercurrent of criticism that is levied against the state of scientific research that isn't entirely fair in ways that may not be obvious to readers that haven't experienced it first-hand.


It is still quite a useful filter, as without it most fields would be even more overwhelmed. As a reviewer, have you seen what garbage gets submitted sometimes? There are incentives to attempt to get garbage published, so throwing out a significant part of submissions does add quite a lot of value to readers, who get a somewhat curated list of papers from that journal or conference.

And while all you say is true, it's probably the most we can get for free in a reasonable amount of time. Requiring an independent lab to repeat an experiment would generally involve far more delay and cost than we'd accept; other researchers generally want to see the outcome as soon as the first experiment is documented; and there are people doing great research who won't bother to submit if they'd have to pay for the replication - it's generally the bad research that has the motivation to spend more money for a publication. The general public might want to wait for extra confirmation, but they're not the target audience of research papers; those are intended as communication by researchers for researchers. And quite a few media outlets would disagree and probably prefer grabbing hot rumors even earlier, even if they turn out to be false afterwards.


All of what you wrote is true too, but it’s also the hollowed-out support beam at the bottom of “evidence-based everything” culture, which has taken over almost everything.

The truth is that good science is slow and that most “evidence-based” practices are referring to a huge, nebulous cloud of bad results and weak suggestions rather than the evidence that supposedly gives them authority over traditional or intuitive practices.

Scientists participate in “Little Science”, and the responsible ones often maintain the perspective that you’re describing here.

But modern society has built itself around the institution of “Big Science” which is structurally forced to assert truths before they can responsibly be defended.

It’s way bigger than the general public being curious or the media wanting to get eyeballs — it’s everything going on in government, economics, medicine, psychology, agriculture, etc etc etc

It’s a house of cards and you’ve just summarized what the core problem is.


> Peer review is a joke. Peer reviewers don’t look at your data and the programs you used to analyze it. They don’t look at your experimental apparatus, they don’t repeat your experiment, they don’t talk to the subjects you interviewed, at best they can spot obvious “red flags”.

if those were the worst problems with peer review, we'd be in a much better place. Your peer reviewers are frequently higher status scientists working (competing) in the same research area you are trying to publish in. Generally, they do not want their own work outshined or overthrown.


Reminds me of code reviews, where sometimes a reviewer will go on a deep dive but usually they just scan it for obvious issues and typos. The thing is, even if my code is only getting a cursory review, I still prefer to have multiple people review it to increase the chances that obvious issues are caught. At least if it's important code.


I partially agree, and I can enumerate other issues with peer review that you have not listed, but it is worthwhile to point out some of the positive features of the peer review concept:

- Peer review in reputable non-profit journals actually provides constructive suggestions that make papers and research itself better. APS's PRX and PRL, as well as Quantum, are journals where I have seen these idealistic positive effects;

- Filtering out the obvious red flags is pretty valuable even if boring;

- Thanks to people who care about the "ideal" of peer review, we now have the infrastructure necessary to make reproducibility much easier: mandatory data and code sharing on archival services, open (even crowdsourced) peer review, immediate feedback, etc.


I wouldn’t say it’s a joke, rather it’s not perfect.

When papers are reviewed, there are going to be a finite number of spots in the journal or conference to be assigned competitively. In good places, the reviewers catch issues in the papers and it won’t be easy to pass them.

Without peer review, a PhD student requesting graduation or a candidate applying for a faculty position would claim they have done major work, and there is no way to filter out the noise.


Fraud is considered rare, and trust is fundamental. In which case, you choose to believe what they said they did and interrogate whether what they said they did is reasonable. Nobody has the budget, time, and sometimes magical fingers required to reproduce every submission.

You can disagree with this approach, but then there needs to be huge budgets set aside for reproduction.


> Fraud is considered rare, and trust is fundamental.

This is a nice sentiment but demonstrably false.

Fraud is common in academia and everyone knows it. A large part of academic science is a grift for funding. It's not "trust" that is fundamental, it's tit-for-tat.


Fraud is considered rare, but maybe not actually that rare; hence the replication crisis.


A lot of the time it is not deliberate fraud, just incompetence. There is the strange fact that the answer to precision QED calculations always seemed to change when experimental results changed. One enduring lesson from a physics PhD is that a 50-page-long calculation without unit tests is… wrong.


Misrepresentation of data and selective reporting to fit particular agendas of the last author are quite common. I have been involved in a couple of projects where I was asked to misrepresent or misreport findings.

Sadly, integrity offices will rarely conduct serious investigations, and won't conclude misconduct happened unless what was done was incredibly harmful. Professors are often too big to fail, they attract tons of grants and are politically entrenched.


Can journals adopt a pull-request-like review process on some central server? I am imagining GitHub-PR-style review capability on arXiv, where anyone (or authorized people) can review the submission and submitters can respond to comments, all publicly.

I don't know if this is how it's done already. I have seen people complaining about peer review here and was wondering why there isn't a solution to that, while software already enjoys a well-established peer review system.


Idk about you guys but the only reason I do peer review is to reject competitors and enemies.

If I really hate them, I give them a "Major Revision" aka a laundry list of expensive follow-up experiments and then reject them on the 2nd or 3rd round after a few months.

There's actually zero benefit to earnestly engaging in peer review.


It sounds like you would do well in many other businesses. Don't let academia hinder your potential. Have you considered selling timeshares to elderly people?


You are an exemplar of all that is wrong in academia, but I upvoted you because there are so many like you.

(I know it from personal experience).

Personally, I decided to leave and make a more honest living. It seems you chose not to.


If you make your code available, I'm going to make sure it runs and does what you say.


As someone currently preparing a manuscript for submission (and choosing which journal to send it to), I definitely agree.



