
Richard Rhodes' monumental work on the A- and H-bombs mentions that from Thomson onward, "being heard" was a problem. Juniors often struggled to be taken seriously, sometimes seniors ostentatiously "spoke for them" to lend new ideas credence, and women were doubly disadvantaged on the "taken seriously" front.

Right up until Murray Gell-Mann and beyond, speaking outside the current limits of knowledge was hard. I don't want this to descend into AGW and antivax denialism; this is inside classic science, but it concerns radical theories and paradigm shifts which were still testable propositions. New models are hard on early-stage careers.

Blinded peer review was partly designed to help with some of that. In a narrow enough field it's impossible for reviewer and submitter not to know each other; there may only be 3-4 people who fully understand your niche. Reuben Hersh discusses this a bit in "What Is Mathematics, Really?" (I think; it could be another book of his, "The Mathematical Experience").

Rhodes discusses Michael Polanyi's theory of science as old-fashioned apprenticeship. Journeyman scientists publish reproducible, testable work. Theoreticians... harder to test, sometimes.




I agree with what you say, but the problem is that journals such as Nature do not have blinded peer review.

Reviewers know who you are. This is quite shocking to discover if you come from Math or CS.


Well, the blinding is supposed to allow the juniors to speak up when the seniors are making mistakes, so it makes sense that it's only one-way.

And while double-blinding sounds nice in theory, I'm not so convinced it is that useful in practice, because it requires the reviewer to play along and pretend they can't work out who the author is from the text alone: if they can do that reliably, they can probably be trusted to keep an open mind anyway.

Reliable blinding of the author would mean having them consciously copy the style of others and avoid citing their own previous work, which would be very hard in a small sub-field, since they are by definition a sizeable fraction of it!


I don't see it this way. Juniors will rarely be appointed as reviewers by Nature.

Double-blinding gives unknown groups and juniors the chance to publish at top venues, which are incredibly biased towards big-name universities and big-name groups.


Yes. The system broke down. I don't know anyone who thinks it works the way it should.

I've had pretty hard bounces which were deserved, and I know how to get work over the threshold, but some review feedback has been petty, passive-aggressive ignorance, and suspiciously similar work pops up from time to time in "three papers and your PhD is done" tracks, which makes me wonder if copycats are getting softballed through for career development.

The rules around length, word count, and mark-up are pretty silly too. I've seen some rather odd LaTeX tricks used to squeeze the typeset length down while staying inside the word count, because the typographical checkers were bouncing submissions.
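
For a flavour of what I mean (these are hypothetical examples of the genre, not the specific tricks I saw), things along these lines:

    % shave a little vertical space back after a float
    \vspace{-0.5\baselineskip}
    % tighten the leading fractionally so a paragraph stops spilling onto a new page
    \renewcommand{\baselinestretch}{0.98}\selectfont
    % a macro that is one token in the .tex source but typesets as three words,
    % which fools a naive word counter run on the source file
    \newcommand{\grb}{gamma-ray burst}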

I don't do this for a living; these days I barely act as a (maybe helpful) co-author. It's hard work.


I think this is partly because a lot of people started to take an interest in the meta-metrics of scientific work, and, as with any metric, once you start tracking it you influence the system you are tracking. Publishers and various scientific actors then made things worse by making those very metrics (a symptom) goals in their own right. That's what broke the system.



