
Computer Science's conventions solve these problems (although CS certainly has other problems):

  * papers are generally posted to arxiv.org immediately on being finished, so everyone can access them
  * conferences have fixed deadlines and relatively short, fixed/enforced review cycles



Let's distinguish between papers and preprints, please. arXiv has contributed to blurring the distinction. arXiv preprints are useful, but they should always be taken with a grain of salt: there is almost no filtering done on what gets uploaded to arXiv.

Everyone accessing someone's uncritically reviewed work is a bittersweet gift.


In mathematics, at least, papers and preprints are indeed widely considered to be the same thing. In practice, for people working in the field, they are.

Math papers tend to be highly technical, read by other specialists in the field. When it comes to correctness -- whether or not I should take a paper with a grain of salt -- the authors' reputation counts for much more than the journal's. And in the case of student authors, who are just beginning to publish, the advisor is implicitly staking their reputation on the work as well.

There are also preprints on the arXiv, written by people unknown in the community, claiming to prove the Riemann Hypothesis or some such. These aren't taken seriously by anyone.

An outsider might not be able to tell which preprints can be considered equivalent to papers, but such people are not likely to be seriously reading math research in the first place.


You can always overlay a reputation system on top of your pre-print server.

The informal one you describe here, or any formal one you can come up with.


Arxiv has been working just fine for a long time; there's no need to change it. Besides, I'm not going to voluntarily post my work just so I can get publicly rated by a bunch of unknowns lol.


You're thinking of social-media-type "reputation".

Instead, think of the goal being to associate measures of worth with the reviewers. If you're publicly rated by a bunch of worthwhile people, count yourself lucky.
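A minimal sketch of that idea, with every name and number invented purely for illustration: weight each rating by a measure of the rater's standing, so ratings from unknowns barely move a paper's score.

  def weighted_score(ratings, reputation):
      # Average the ratings, weighting each by the reviewer's reputation
      # (reviewers with no known standing get weight 0).
      total_weight = sum(reputation.get(r, 0.0) for r in ratings)
      if total_weight == 0:
          return 0.0
      return sum(score * reputation.get(reviewer, 0.0)
                 for reviewer, score in ratings.items()) / total_weight

  # One well-regarded specialist outweighs several unknowns.
  ratings = {"specialist": 4.5, "anon1": 1.0, "anon2": 1.0}
  reputation = {"specialist": 0.9, "anon1": 0.05}  # anon2: no standing at all
  print(weighted_score(ratings, reputation))  # ~4.3, dominated by the specialist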


> Arxiv has been working just fine for a long time, there's no need to change it.

Exactly, that's why I am not suggesting any change to Arxiv.

Think more of people, e.g., submitting Arxiv URLs to Hacker News for what I have in mind. Or discussing Arxiv submissions on a forum or in a wiki, etc. You can imagine some specialised software with better support specifically for material from Arxiv.

That's what I mean by 'overlay'.

Or think of Slatestarcodex publishing a blog post with links to his favourite Arxiv papers for that month. That's pretty much equivalent to what a journal produces. And if Slatestarcodex compiles that link list by doing some peer review and chatting with the authors of the papers, that's almost exactly what the journal does.
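
As a concrete illustration of such "overlay" software (a sketch only: the arXiv export API at export.arxiv.org is real, but the function and the surrounding tooling are hypothetical), it might pull a submitted paper's metadata like this, without touching arXiv itself:

  import re
  import urllib.request
  import xml.etree.ElementTree as ET

  ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by the arXiv API

  def arxiv_metadata(url):
      # Pull the new-style arXiv ID (e.g. 2310.02335) out of the URL.
      match = re.search(r"\d{4}\.\d{4,5}", url)
      if not match:
          raise ValueError("no arXiv ID found in %r" % url)
      api = "https://export.arxiv.org/api/query?id_list=" + match.group(0)
      # The API returns an Atom feed with one entry per requested ID.
      with urllib.request.urlopen(api) as resp:
          entry = ET.parse(resp).getroot().find(ATOM + "entry")
      return {
          "title": " ".join(entry.find(ATOM + "title").text.split()),
          "authors": [a.find(ATOM + "name").text
                      for a in entry.findall(ATOM + "author")],
      }

  print(arxiv_metadata("https://arxiv.org/abs/2310.02335"))

The overlay's ranking, review, or discussion features would then live entirely in the overlay's own database; arXiv itself stays unchanged.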


Yes. For example, here is a paper by some Cornell people where they reinvent machine learning model evaluation; the only motivation I can discern is hubris and self-service:

https://browse.arxiv.org/pdf/2310.02335.pdf

Do not trust arxiv papers. They have not been vetted.


> Everyone accessing someone's uncritically reviewed work is a bittersweet gift.

Review work is not always done by senior researchers (e.g., professors). Senior researchers often hand it down to PhD students. Having 3 to 4 reviews by nice junior reviewers doesn't sound very critical.


Just to be clear: you'd expect PhD students to be trained in reviewing by their supervisors.

So PhD students writing the initial review is not weird - it is an expected part of their training. As is the supervisor going over the review and providing constructive feedback. As is the review being submitted under the supervisor's responsibility, with credit (a mention in the proceedings) going to the student for acting as a subreviewer.

Yes, there are ways to abuse this system and yes, abuses do occur. Any system for gaining job prestige or reducing workload is a target for gaming. This doesn't mean the system should be trashed, but it does warrant additions to curb excesses.


If a late-stage PhD student in the same narrow technical field can't review the paper, then it's almost certainly a problem with the paper. After all, junior people are the primary audience for any paper. Also, PhD students often have more depth on their research topic than the professors.

The sibling comments about making sure that most reviews are written by senior researchers also make good points. That should be checked by the program committee or editor.


Reviewers have to say when they did this, and you are forgetting the editor's role in paper evaluation. This criticism can be, and is, taken into account, and you can send papers out for more reviews if you get conflicting ones. In my experience as an editor, junior people typically give better reviews than senior ones (unless the senior person is emeritus and has unlimited time). I suppose this has to do with confidence: junior reviewers question their own reviews more.


The average arXiv paper's quality is better than the average journal paper's. Because publishing on arXiv doesn't count as a paper on a resume in many places, there are far fewer people publishing there just for the resume line.


It’s how science worked for 3 centuries before the current review system was instituted just a generation ago.


Let's do a quick analogy: arXiv = GitHub. It's all collaborative writing, right? You publish data, code, and your paper continuously. Then you have releases. Perhaps the releases get tagged with the publication venues that accept them.


I'm confused. Do you accept published papers as gospel? They should be taken with a grain of salt too.


Depends on the field, certainly. A paper in the Annals of Mathematics is definitely a lot more rock solid than whatever goes on the arXiv, or than reviewed papers in certain fields that are particular magnets for junk science.


Funny you should mention Annals. A journal famous for publishing two papers in three years by the same author, one proving some theorem, and the other disproving the theorem. Sure, tons of other journals have done so, but Annals is definitely the highest profile one. Maybe take a look at https://mathoverflow.net/questions/282742/endless-controvers... or https://mathoverflow.net/questions/35468/widely-accepted-mat... It's also a nice way to pad your CV if you manage to get the wrong theorem published - you get two Annals papers for the price of one.

It is of course true that published papers have been vetted. But very often, that simply means that 1. an editor glanced at it, 2. (optionally) a peer provided a quick positive opinion on the paper, without checking its correctness, and 3. one or two independent referees presumably read the paper and produced a report on it. It's not nothing, but it doesn't mean you should blindly accept everything published as truth.

For context, I'm an associate professor of mathematics at a large research university.


The way I look at it, we've passed the point where anyone can read all the papers in their field; there are simply too many people publishing.

Peer review is the first filter that papers go through. It's not perfect (it makes mistakes in both directions), but the output of the peer review process definitely has a higher signal to noise ratio than the input.


That's a good start. It's certainly a major improvement from when I still published regularly (until about 15 years ago).

One thing that struck me as a researcher back in the day was that it was much easier to get people to read things I put on my blog than it was to get scientists to read my papers. As a researcher, you are basically engaging in 17th-century SEO by writing papers and getting your peers to pay attention to them. We use email these days, but otherwise the process hasn't changed a lot.

This is weird considering we now have a lot more tools. Imagine if Hacker News worked that way. It wouldn't work. The best kind of endorsement for a paper is not random anonymous peer reviewers giving a thumbs up or down on inclusion in a digital-only journal that nobody ever reads cover to cover. Instead it is other researchers citing your work; this is in fact part of how most academic performance is measured. The goal is to make your peers aware of your work's existence, get them to spend time reading or learning about it, and then get them to engage with it by citing, criticizing, or adding to it.

The whole business of journal publication is just SEO. You get some renowned journal to include you, and maybe people will bother to read your work and refer to it if they like it. A citation is just a glorified like. The real goal is to get people to read and "like" your work. It's also self-reinforcing: the more people cite your work, the more people will read it and thus cite it. But it's a super inefficient process. Going from idea to citations can take years.

Conferences and workshops are where scientists meet up and discuss their work. That's where your peers are. I always enjoyed the smaller workshops. Get some smart people in a room and beautiful things happen. The academic world is basically a social network. With likes and everything. But minus all the modern tools that make other social networks work so well. There's some room for improvement.


Also, publicly available reviews and comments at openreview.net.



