Hacker News

The perversion of science that believes an observation must be accompanied by a theory (preferably one acceptable to the current mainstream) is still alive and well, even in medicine, positivist epistemology or no. Once you start looking for it, you'll see it at least once a month, even in popular-press articles.

But a perversion it still is. It is eminently scientific to simply document and even publish an inexplicable observation, and only later hope that somebody can incorporate it into a testable theory.

Watching a putative scientist discard evidence because it arrives without a theory boggles my mind, yet even here on HN I've seen articles in the past year about papers being rejected for exactly this reason, so it remains a real problem today.




A modern-day analogue to this would be the "back is best" campaign for putting infants to sleep on their backs as a way to reduce SIDS. The evidence is overwhelming that putting infants to sleep on their backs (vs stomachs or sides) reduces the rate of SIDS significantly. We have no idea why, but that doesn't really matter.

It's hard to understand how a scientist could be arrogant enough to dismiss legitimate evidence simply because the underlying mechanism isn't understood.


The key is testing both the claimed behavior and the theory behind it.

There are often lots of confounding factors. In the SIDS case, much of the decrease in the SIDS rate can be attributed to other factors (a decline already underway before the "back to sleep" campaign, generally safer sleep areas, changes in cause-of-death coding, etc.): https://naturaltothecore.wordpress.com/2013/02/14/revisiting...

When you don't have the correct theory or mechanism, it's easy to do the wrong thing, as when the British navy thought acidity prevented scurvy and shifted from lemon juice to more-acidic lime juice, processed through copper tubing that destroyed the vitamin C: http://en.wikipedia.org/wiki/Scurvy

Rather than just following a statistical anomaly, you need to devise and perform tests that could invalidate your theory, as was done with the other theories (e.g., the priest's bell in the article). This is perilous when lives or global economies are at stake, which is why anomaly hunting and cargo-cult science can persist in high-stakes, difficult-to-test environments.


I would observe that (a) nothing I said precludes any of that; certainly science's job is not done with the mere observation of facts; and (b) nothing about any of that is helped by attaching spurious theories to the observation of a brute, unexplainable fact.

Also, I'd happily set the bar higher for correlations reported this way. Then again, I consider p = 0.05 a mistake that should simply be rectified, since it has been concretely demonstrated to be insufficient, IMHO.


I agree with your overall point, but I would walk it back just a notch by pointing out that being forced (or at least encouraged) to have a theory first helps to prevent correlation "fishing". If you accidentally stumble upon some really interesting relationship, then that is great and ideally would be shared with the world. But if that sort of thing is allowed to be published without any scrutiny, the incentive becomes to just throw tests at a dataset until it yields something that appears interesting. If you perform enough random tests on a dataset, you are very, very likely to eventually find something "unusual" or "interesting".
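The "fishing" point above can be sketched in a few lines of Python. This is a toy simulation (the two-group setup, sample sizes, and the |t| > 2 cutoff, roughly p < 0.05, are assumptions for illustration, not from the thread): run many significance tests on pure noise and count how often something "interesting" turns up by chance.

```python
import random
import statistics

random.seed(42)

def fake_experiment(n=30):
    """One null experiment: two groups of pure noise, true effect is zero."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch-style t statistic; |t| > 2 roughly corresponds to p < 0.05 here
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 2.0

hits = sum(fake_experiment() for _ in range(1000))
print(f"'Significant' findings in 1000 null experiments: {hits}")
```

Roughly 5% of the null experiments come back "significant" even though there is nothing to find, which is exactly the incentive problem if unscrutinized dataset-dredging is publishable.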


Sure. Anyone with the most basic stats knowledge knows that if you dig long enough you'll find random correlations. That's solved by demanding that evidence be repeatable, rather than by requiring that it fit into a nice theory.
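The repeatability argument can be put as back-of-envelope arithmetic (assuming independent tests, which is an assumption for this sketch): a fluke that clears p < 0.05 once has only the same 5% chance of clearing an independent replication.

```python
# Probability that pure noise passes the significance bar once vs. twice,
# assuming the two tests are independent (an assumption for this sketch).
alpha = 0.05
false_positive_once = alpha
false_positive_twice = alpha ** 2

print(f"once: {false_positive_once:.2%}, replicated: {false_positive_twice:.2%}")
```

So requiring one replication takes a spurious finding from roughly 1 in 20 to roughly 1 in 400, without any theory being demanded.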


But asking for a theory can help the situation; that was my point. Of course, demanding a theory even in the face of repeated findings isn't helpful, but encouraging one, or giving "bonus points" for having one, is, I think, a perfectly reasonable thing to do.



