
This point is critical. A study can have a good methodology, but if nobody tries to replicate it, you have no chance of finding out that you are missing something critical. And with something as complex as people, "missing something critical" is practically guaranteed.

And this isn't exactly a new problem for psychology. Feynman somewhat famously questioned a lot of classic research in his "Cargo Cult Science" talk: http://neurotheory.columbia.edu/~ken/cargo_cult.html. I say somewhat famously because people in fields like physics have all read that essay, but in my experience psychologists are a little less inclined to read the criticism.

Even at the best of times, replication of results in a complex system is hard. See, for instance, http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fj... for evidence that most published medical research is wrong.
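
The core argument of that paper can be sketched with a back-of-the-envelope calculation: when only a small fraction of the hypotheses being tested are actually true, conventional power and significance thresholds still produce a large share of false "positive" findings. Here is a minimal sketch in Python, with purely illustrative numbers (not taken from the paper):

    # Back-of-the-envelope positive predictive value of a "significant" result.
    # All numbers are illustrative assumptions, not figures from the paper.
    prior = 0.10   # assumed fraction of tested hypotheses that are actually true
    power = 0.80   # chance a true effect is detected
    alpha = 0.05   # chance a null effect is called "significant" anyway

    true_positives = prior * power              # 0.08
    false_positives = (1 - prior) * alpha       # 0.045
    ppv = true_positives / (true_positives + false_positives)
    print(ppv)  # ~0.64: over a third of "significant" findings are false,
                # before any bias, flexible analysis, or multiple testing

With a lower prior or any bias folded in, the fraction of false findings climbs past half, which is roughly the paper's headline claim.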

But with psychology it is even worse. We lump people together by symptom, not by root cause. It is like lumping together people with migraines, head injuries, and caffeine-withdrawal headaches. Then you find out that coffee beats a placebo for treating headaches, and give them all coffee!
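
A toy simulation makes the problem concrete: if caffeine only relieves the caffeine-withdrawal subgroup, a pooled trial can still show "coffee beats placebo for headaches" on average. The subgroups, sizes, and relief rates below are invented purely for illustration:

    import random
    random.seed(0)

    # Hypothetical pooled trial: three root causes behind one symptom ("headache").
    # Assumption: coffee only helps the caffeine-withdrawal subgroup.
    def relieved(cause, treated):
        base_rate = {"migraine": 0.30, "head_injury": 0.20, "caffeine_withdrawal": 0.30}[cause]
        if treated and cause == "caffeine_withdrawal":
            base_rate = 0.90  # coffee works well, but only for this root cause
        return random.random() < base_rate

    patients = ["migraine"] * 100 + ["head_injury"] * 100 + ["caffeine_withdrawal"] * 100
    coffee_rate = sum(relieved(c, True) for c in patients) / len(patients)
    placebo_rate = sum(relieved(c, False) for c in patients) / len(patients)
    print(coffee_rate, placebo_rate)  # pooled "coffee" beats pooled placebo,
                                      # even though it did nothing for two thirds of patients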

You think I'm exaggerating? Read http://www.nimh.nih.gov/about/director/2013/transforming-dia... for verification that the NIMH considers it a high-priority research goal to come up with classifications based on root causes, so that we even have a chance of doing useful research!

Yes, psychology has a lot of well-meaning people, and they stumble across a lot of interesting stuff. But their research should be read with a critical eye, because the field as a whole does not yet produce results reliable enough to earn our trust.




> But with psychology it is even worse. We lump people together by symptom, not by root cause. It is like lumping together people with migraines, head injuries, and caffeine-withdrawal headaches. Then you find out that coffee beats a placebo for treating headaches, and give them all coffee!

This happens (in medicine generally, not just psychology), and there's certainly room for improvement in principle. But in many cases it's hard to blame people, because there's no way to do better without an unknown but gargantuan amount of research. For example, lupus is (or was, when I last discussed it with my mother) defined as exhibiting X of a list of Y symptoms, where Y > X. This leads to situations like the following (sketched in code after the list):

1. You, the patient, present with X-1 symptoms of lupus. By definition, you don't have lupus, and treatment for lupus is not warranted.

2. Time passes.

3. You, the patient, come back to your doctor with one additional symptom of lupus. Medical doctrine now says that you have lupus (not so interesting) and, more interestingly, that those X-1 earlier symptoms were symptoms of lupus all along (a disease you officially did not have when you developed them), rather than of some other, still-undiagnosed problem. You can now be treated for lupus.
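
The flip in step 3 is easy to see in a minimal sketch of a threshold-style ("X of Y") criterion. The symptom names and the threshold below are invented for illustration; they are not the real lupus criteria:

    # Hypothetical "X of Y symptoms" diagnostic rule; names and threshold are made up.
    CRITERIA = {"malar_rash", "arthritis", "photosensitivity",
                "oral_ulcers", "serositis", "renal_disorder"}
    THRESHOLD = 4  # the "X" in "X of Y"

    def has_lupus(symptoms):
        return len(CRITERIA & set(symptoms)) >= THRESHOLD

    visit_1 = ["malar_rash", "arthritis", "photosensitivity"]
    print(has_lupus(visit_1))                   # False: X-1 symptoms, so "not lupus"

    visit_2 = visit_1 + ["oral_ulcers"]         # one more symptom appears later
    print(has_lupus(visit_2))                   # True: now all four count as lupus symptoms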

Obviously, that satisfies no one and is crying out for a more "reality-based" definition. The problem is that no one has a candidate for the actual physical cause of lupus. The best anyone has ever been able to do is recognize that the same set of treatments is broadly effective against a constellation of symptoms that may or may not seem related to one another, but that co-occur to some degree in people who respond to those treatments.

Of course it's a high priority to learn what the root causes of symptoms are, but that doesn't mean it's easy or even, in the general case, possible. You have to start somewhere.




