
This is a fundamentally wrong rule because, as we have learned, the basic competence of researchers and reviewers has repeatedly come into question. The further you get from quantitative physics, the worse it gets.

That said, if you are going to criticize something based on the idea "this result doesn't agree with my preconceived notions, so I'm going to say that their sample size was too small, the effect size was too small, or they made some sort of incorrect calculation, and conclude the paper can't be trusted", you owe it to yourself to read the paper carefully enough to determine whether they actually made a major error.

Unlike most people, I read the conclusion, then the methods section. For most papers, I can't get enough from the methods section to trust the authors' claim, because most methods sections are missing major details on how the corrections/controls were done.




Should we also look at final code assuming the programmer and their code reviewers lack basic competence? Or do we assume they made a best effort given their abilities and may have still made mistakes?

I believe the latter, and the rule I mentioned also leads to that assumption. Otherwise, why read the studies at all? But that's not what people here do.

> missing major details on how the corrections/controls were made

I also assume that some corrections/controls are so obvious within the field of study that they aren't worth documenting, and that it's only "average Joe" readers who assume they weren't done.

It is, in a way, akin to asking why programmers didn't document how their JWT is secured.


I don't compare software engineers and scientists by the same metrics. The important thing about a code review request is that it contains everything I need to reproduce the effects of the code change in my own hands. Science does not require that; instead, the methods section is typically the "least descriptive and helpful representation of what was done" (especially in the case of very complex biological experiments) and can't be reproduced without nontrivial effort.

BTW, I've definitely gone into code reviews assuming the programmer lacks basic competence and been right. Other times, I've had to roll back other people's code (or stop a rollout) because an ostensibly genius programmer who made a change (and got a review from a starry-eyed junior) wasn't applying their basic competence or testing their change at all.


Bad example. While the tokens themselves can only be secured in a handful of ways, JWT auth implementation is notoriously tricky. Even the official docs of some well-known packages promote insecure practices, such as storing tokens in browser localStorage.
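
To make that concrete: a token in localStorage is readable by any script that ends up running on the page (XSS), while an httpOnly cookie can't be read from JavaScript at all. Here's a minimal Express-style sketch of the safer pattern, where issueToken is a hypothetical stand-in for whatever signing logic an app actually uses:

    import express from "express";

    const app = express();

    // Hypothetical helper: a real app would sign a JWT for the authenticated user here.
    const issueToken = (_req: express.Request): string => "signed.jwt.placeholder";

    app.post("/login", (req, res) => {
      // Insecure alternative (in browser code): localStorage.setItem("token", jwt)
      // leaves the token readable by any script injected via XSS.
      const jwt = issueToken(req);
      res.cookie("token", jwt, {
        httpOnly: true,     // not readable from document.cookie
        secure: true,       // only sent over HTTPS
        sameSite: "strict", // limits cross-site request exposure
      });
      res.sendStatus(204);
    });

With this approach the client never stores the token itself; the browser attaches the cookie automatically on subsequent requests.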


There is a big difference between "here is what I think the authors did wrong" and "I did not read the paper, but did they even account for <obvious thing>?".

The former is perfectly fine and encouraged. The latter is incredibly rude and derails discussions.

Relatedly, you took the rule incredibly literally just so you could criticize a strawman...

> Unlike most people, I...

Yes, yes, you are very smart and everyone else, in particular me, is an idiot...


No, I'm not smart; I'm wise. And brutally honest. It's amazing how gullible most scientists are, reading the results and taking them at face value.


Yes, it amazes me that you haven't won multiple Nobel Prizes yet.


Considering the average quality of software, I'd say people in the tech industry are quite low in the pecking order of who can criticize whose professional competence...



