Hacker News

How does your criticism apply to this scenario? The conclusion of the study (if correct) would in fact provide some predictive power: one would be able to make some predictions about a random child's generosity given their religious upbringing. It should also be repeatable, assuming the methodology for gathering and analyzing the data was published.



The experiment may be repeatable, but that does not mean it's reproducible or even useful. Many of these "soft science" fields are plagued by

1) replication failures

2) scientific errors both accidental (coding errors) and intentional (p-hacking)
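To make the p-hacking point concrete, here is a minimal simulation (not from the original thread) of the best-known variant: running many tests on null data and reporting only the smallest p-value. It relies on the standard fact that, under the null hypothesis, p-values are uniformly distributed on [0, 1]; the function names and parameters are illustrative.

```python
import random

def min_p_over_tests(n_tests, rng):
    # Under the null hypothesis, each test's p-value is Uniform(0, 1).
    # A p-hacker reports only the best (smallest) of the n_tests p-values.
    return min(rng.random() for _ in range(n_tests))

def false_positive_rate(n_tests, trials=100_000, seed=42):
    # Estimate how often the reported "best" p-value clears p < 0.05
    # even though every null hypothesis is true.
    rng = random.Random(seed)
    hits = sum(min_p_over_tests(n_tests, rng) < 0.05 for _ in range(trials))
    return hits / trials

# Analytically: P(min p < 0.05) = 1 - 0.95 ** n_tests,
# so 1 honest test gives ~5%, but 20 subgroup analyses give ~64%.
print(false_positive_rate(1))
print(false_positive_rate(20))
```

With a single pre-registered test the nominal 5% error rate holds; with twenty subgroup analyses and selective reporting, a "significant" finding appears most of the time despite there being no effect at all.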

I think part of it has to be physics envy: many disciplines are attempting to adopt either the statistical techniques used by physicists, or at least similar language and ideology, in order to conduct "experiments" in fields as varied as sociology and psychology.

This seems like cargo cult science. Who says that, say, measuring the tendency of churchgoers to donate to a cause will reveal a truth like the mass of an electron -- a quantity that can be accurately measured once, after which any good measurement will return the same result? It's not at all clear that any real insight into either philanthropy or church attendance is gained from such an "experiment", nor is it clear to me that these fields contain statements with truth values at all, at least not truth values as mathematicians, physicists, chemists, etc. would expect them.

But by adopting the methodology of science in fields that may not have scientific results to yield, a lot of people are creating a body of fake knowledge, or at least the appearance of knowledge.


They didn't confirm the model worked by testing its predictions. No successful predictions = not worthy of being considered 'fact'.




