
As someone who reads a lot of published research: no, that is a terrible rule, and a lot of garbage gets published. Both historically and today there is no shortage of bad science that gets published, or of good science that gets badly misreported.



If you don't trust studies at such a fundamental level, what is the value in reading and commenting on such threads?


Studies don't have value just because they are a study.

Put another way, just because Cosmo publishes an article doesn't mean it's true - but Cosmo does also publish true articles.

Plenty of studies are great and move our understanding forward. More studies answer a very narrow - but important - question. Some studies are just inconclusive, but have value too.

You should read studies. You should evaluate the quality of studies as part of integrating them and their conclusions. At the same time, just because something was "a study" or got published doesn't make it true or good or useful, any more than being written in blue ink makes something truthful.

The value of science is that it does not require faith.


Not OP but

If someone were advocating making policy decisions based on astrological charts, I would read and comment in such threads despite not trusting astrology on a fundamental level.

I personally wouldn't go quite so far as to say that purely observational studies have as much predictive value as astrology, but "observational studies are as reliable as astrology" is a better first-order approximation of my opinions than "observational studies are as reliable as large-n randomized controlled trials".


I don't know what value vorpalhex gets from doing so, but I got value from the gp comment.


A specific form of bad research which could be at play here: The researchers programmatically searched the space of possible control variables to include until they came up with a model which maximized the apparent effect size for coffee, so they could publish an interesting & widely cited paper that looked good on their resumes.

With N different control variables to either include or ignore, that's 2^N possible sets of control variables. Odds are decent that at least one of those regressions has a large effect size for coffee.

I would trust this sort of research more if instead of publishing a particular set of control variables obtained by an unspecified method, the researchers chose 100 of those 2^N possible sets of control variables at random, then published the average effect size from the 100 resulting regressions. Ideally they would make the code to reproduce this average effect size publicly available, so anyone could easily replicate using another 100 randomly generated regressions.
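The averaging idea above can be sketched in a few lines. Everything here is a hypothetical illustration on synthetic data: the variable names, the true effect size, and the 0.5 inclusion probability per control are all assumptions, not anything from an actual coffee study. The point is just that averaging the exposure coefficient over many random control sets is cheap to implement and reproduce.

```python
import random
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: an outcome, an exposure ("coffee"),
# and N candidate control variables a researcher might include or ignore.
n_obs, n_controls = 500, 10
controls = rng.normal(size=(n_obs, n_controls))
coffee = rng.normal(size=n_obs) + 0.3 * controls[:, 0]   # exposure correlated with one control
outcome = 0.1 * coffee + 0.5 * controls[:, 0] + rng.normal(size=n_obs)

def coffee_effect(control_idx):
    """OLS coefficient on coffee, adjusting for the given subset of controls."""
    X = np.column_stack([np.ones(n_obs), coffee, controls[:, sorted(control_idx)]])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1]

# Instead of reporting one hand-picked specification out of 2^N,
# average the coffee coefficient over 100 randomly drawn control sets.
random.seed(0)
draws = [{i for i in range(n_controls) if random.random() < 0.5}
         for _ in range(100)]
avg_effect = float(np.mean([coffee_effect(s) for s in draws]))
print(f"average coffee effect over 100 random specifications: {avg_effect:.3f}")
```

Note that in this synthetic setup the average still carries omitted-variable bias in the draws that leave out the confounded control, which is exactly the kind of sensitivity this procedure makes visible instead of hiding behind a single published specification.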



