
> It’s really not uncommon for some trials to fail to differentiate from placebo when it comes to depression studies. That doesn’t make this study “rubbish”, it just shows that you need to examine the body of evidence rather than cherry-picking studies that appear to match the outcome you want while dismissing those that say the opposite.

But this study doesn't say the opposite. It fails to show an effect. That's different from proving the absence of an effect. Every Ph.D. student in an empirical field learns this in their first year. I'm surprised this study gets so much attention.

You can run a study verifying that a pound of gold and a pound of feathers accelerate downwards at the same rate in a vacuum, and perhaps you messed up the vacuum, so in your study they actually fall at different speeds. That doesn't prove gravity is broken; it just means you failed to demonstrate that it isn't. That can have many causes. Same with this study.
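The point about a null result not proving absence of an effect can be made concrete with a quick simulation. This is a hypothetical sketch (the effect size, noise level, and sample size are made up, not taken from the study): even when a real effect exists, a small, noisy trial usually fails to reach significance.

```python
import random
import statistics

random.seed(0)

# Hypothetical numbers: a real drug effect of 3 points exists,
# but the trial is small (n=20 per arm) and noisy (sd=10).
def trial(n=20, effect=3.0, sd=10.0):
    drug = [random.gauss(effect, sd) for _ in range(n)]
    placebo = [random.gauss(0.0, sd) for _ in range(n)]
    diff = statistics.mean(drug) - statistics.mean(placebo)
    se = sd * (2 / n) ** 0.5           # known-variance z-test, for simplicity
    return abs(diff / se) > 1.96       # "significant" at the 5% level?

# Fraction of simulated trials that detect the (real!) effect:
power = sum(trial() for _ in range(2000)) / 2000
print(power)
```

With these assumed numbers, only a small fraction of trials reach significance, so most of them "fail to show an effect" even though the effect genuinely exists.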




> But this study doesn't say the opposite. It fails to show an effect. That's different from proving the absence of an effect.

This was my first thought, jaded as I am from bad scientific reporting (such as the linked article) which doesn’t distinguish between these two cases, so I had a look at the actual study.

In this case, it looks like the 95% confidence interval just barely includes zero (the null value); however, the mean effect favors placebo:

> The mixed-effects model showed no evidence of effect of group assignment on post-infusion MADRS scores at 1 to 3 days post-infusion (-5.82, 95% CI -13.3 to 1.64, p=0.13).

(See also figure 2, which clarifies the direction of effect.)

So, potential methodological issues aside, I’d actually consider this evidence against a strong benefit relative to placebo, and possibly very weak evidence of harm.
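As a sanity check on the quoted numbers, one can back out the standard error from the reported 95% CI and recover the p-value under a normal approximation (a rough sketch; the study used a mixed-effects model, so this is only approximate):

```python
from math import sqrt, erf

# Values as reported in the study:
mean = -5.82
lo, hi = -13.3, 1.64

# Back out the standard error from the 95% CI width (normal approximation)
se = (hi - lo) / (2 * 1.96)

# z statistic and two-sided p-value
z = mean / se
phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))  # standard normal CDF
p = 2 * (1 - phi(abs(z)))

print(round(se, 2), round(z, 2), round(p, 2))
```

The recovered p-value matches the reported p=0.13, consistent with an interval that only just crosses zero.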



