Eh, not really.

There are exceptions [1], but most journals don't expect reviewers to even attempt to reproduce results, which makes sense given how specialized and expensive scientific experiments often are. As a reviewer on open code papers I would usually try to run the provided code; it didn't always work, and that wasn't always addressed before publication. (I was also usually the only one who even tried.)

Usually peer review is more about making sure the work is novel and interesting, fits the journal's audience, and doesn't have any glaring flaws. Not entirely unlike code review: if it builds, merge it, and we can address problems in a future PR. Those are basically the reviewer instructions you get from most journals, IME.

[1] OrgSyn famously requires a reproduction in one of its editors' labs before it accepts any paper:

http://www.orgsyn.org/about.aspx

It has a very high reputation amongst chemists, even if its "impact factor" is low. High-impact journals are not usually considered the most accurate.




I don't think you're really arguing against what the parent poster was saying. That is, I interpreted the parent commenter as saying that journals require submissions to at least be clear, understandable, and "verifiable (or falsifiable)" in form, not that reviewers actually attempt to reproduce the results.


(not OP) Verifiability/falsifiability are big words; in most cases it's not clear what they mean for a specific paper. Crucially, that is not what journals/editors/reviewers do. They check whether they find the contribution convincing, novel, and in line with the discipline's community standards, nothing more.


No, I think a half-decent paper is expected to either explain its methods or reference a paper that does.

In my mind, you don't get to handwave away the instructions for your experiment. Maybe other fields are fine without that, but I would never write a paper that doesn't clearly explain how I made my samples, or reference a paper which does. To do otherwise is bad science.

It's not about a reviewer replicating it; it's about anybody being able to replicate it in a year, or in 30 years.


> As a reviewer on open code papers I would usually try to run the provided code

You're one of only two people I've ever heard make this claim. Which I'm sure you're aware of, but many people probably aren't. FWIW, I'm often called diligent because I read the code (looking at the main method and anything critical or suspicious; I might run it if it looks suspicious). Even reading the supplementary materials will earn you that title (which is inane). According to this informal survey, ~45% of NeurIPS reviewers read the supplementary material <13% of the time, and fewer than a third always read it [0] (I'm in that third, and presumably so is xmcqdpt2).

> Usually peer review is more about making sure the work is novel and interesting

This is why I find p̶e̶e̶r̶ ̶r̶e̶v̶i̶e̶w̶ [1] journal/conference reviewing highly problematic, and why this system is at the root of our current existential crisis: the reproducibility crisis. Reproduction is the cornerstone of science. And many, MANY good works are not novel in the slightest. See the work of Ross Wightman (timm) or Phil Wang (lucidrains). These people are doing critical work in ML, but they aren't really going to get "published" for those efforts. Many others do similar work, just not at the same scale, so you'll likely never hear of them, but they are still critical to the ecosystem.

As for your next point, "if it builds, merge it": I'm all for it. The system should be about checking technical soundness and accuracy, NOT about novelty and how interesting the work is. Of course we shouldn't allow plagiarism (claiming works/ideas that aren't your own), but we should allow replications, revisiting (e.g. old methods in current frameworks; see "ResNet strikes back"), surveys, technical studies, and all that. Novelty is a sham: almost all work is incremental, and so we end up with highly subjective criteria for passing the bar.

Which is probably why high-impact journals are not considered the most accurate: they don't encourage science so much as they encourage paper milling, rushing, and good writing.

[0] https://twitter.com/sarahookr/status/1660250223745314819

[1] We need to stop calling journal/conference reviewing "peer review." Peer review is when your peers review your work. Full stop. That can come in many forms. Similarly, publishing is when you publish a paper; many important works come through open publishing.



