
Currently, peer review fulfils three distinct functions, to the detriment of all of them:

1. Acting as a filter for what research is worth reading.

2. Providing input to the author on how to improve their work.

3. Influencing the author's professional evaluation.

The first is the bluntest instrument: when something is not considered worth reading, it doesn't get published at all. This is problematic when the peer reviewers misjudge, or when an article is evaluated on more than its academic merit, e.g. on whether it's "groundbreaking" enough for a particular journal.

The second is useful, but made less so by the power imbalance - the author is not free to judge for themselves whether the feedback makes sense in the context of their work. This is especially problematic when someone has to take their work to multiple journals and/or receives conflicting feedback.

And that power imbalance is the result of the third point: getting that stamp of approval from the reviewers can matter more to the author than meaningfully contributing to science, because it's the only way they can stay in academia.

Ideally we'd split them up, but that does require aligning the academic community - which is a hard problem, especially considering there are strong vested interests in the status quo.

(Disclosure: I do volunteer for a project (https://plaudit.pub) aimed at splitting evaluation from publication, because I think it's an important issue.)




Plaudit is nothing more than an upvote, right? I don't think that's really helpful. We already have a very similar metric, citation count, which is definitely useful.

At first, I thought that Plaudit added a post-publication review platform, or discussion forum, for each paper - similar to OpenReview, but for any paper. I think that might actually be a great idea.

I think that https://fermatslibrary.com/librarian actually provides similar functionality, but it has not seen much adoption.


It's not a metric, it's an endorsement. Think of it like being published in a journal: it's not about how many journals an article gets published in, but which ones. Likewise, what matters here is who has endorsed it. Instead of a journal name acting as a proxy for "someone I trust thinks this work is worth reading", you can directly see the name of that person.

(That said, a big challenge with citation counts is that they take a long time to accumulate.)

As I mentioned, I think the three functions should be separated. Although Plaudit by itself does not facilitate giving feedback, it does not prevent it either. Ideally, if someone comes across e.g. an error in an article, they provide that feedback to the author (e.g. through an email, or using Hypothes.is - a project I think you might like). There's no need to contribute to improving the scientific literature only through a formal journal-assisted peer review process.

And who knows: if the author incorporates that feedback, the giver of the feedback might decide to endorse that article using Plaudit :)


As another comment implies, there is a fourth function that has been saddled on the process against its original intent: providing a "stamp of approval" for journalists to write misleading articles about "peer-reviewed research".


I'd include that under 1., but I can certainly see why you'd list it as a separate point too.


Interesting list!



