The problem is, reading the methodology can only give you a negative signal or a null signal. If the methodology looks good, that still doesn't speak to the correctness of the study's conclusion, for exactly this reason:

> Frankly, I'd expect industry to simply discard studies that don't support their product (null or negative result) and publish the studies that do promote their products.

There could have been a hundred attempts at this study before, all with a negative result, and none of them got published because they didn't have the result industry wanted. We simply don't know. But in theory, this is something replication should be able to fix: researchers without industry support should get the same results if they run the same methodology. Until that happens, remaining skeptical of the result, even when the methodology is sound, is completely reasonable.
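
A minimal sketch of that file-drawer effect, assuming a true null effect, a two-sample t-test per attempt, and a "publish only if p < 0.05 and favorable" filter (all parameters here are illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_attempts, n_per_arm = 100, 50
    published = []

    for _ in range(n_attempts):
        # True effect is zero: both arms draw from the same distribution.
        treatment = rng.normal(0.0, 1.0, n_per_arm)
        control = rng.normal(0.0, 1.0, n_per_arm)
        t, p = stats.ttest_ind(treatment, control)
        # The file drawer: only "significant and favorable" results get written up.
        if p < 0.05 and t > 0:
            published.append(treatment.mean() - control.mean())

    # A reader sees only the published effects, every one of them spurious.
    print(f"{len(published)} of {n_attempts} attempts published")
    if published:
        print(f"mean published effect: {np.mean(published):.2f} (true effect: 0.0)")

By chance alone, a few percent of null attempts clear the significance bar, so the published record can show a consistent positive effect that does not exist.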




All of the responses to my comment just describe the problem of having only a single study on a topic. But the exact same issues arise with independently funded research: those researchers also want to uncover real results rather than "waste" their time on null results, and they also want continued funding.

You could level this criticism at any study just as easily: "Oh, the researchers just wanted this to be true." So why single out industry interest?

It's why meta-analyses are at the top of the evidence hierarchy: they look at multiple studies from varied sources.
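
A toy sketch of how a meta-analysis combines multiple studies, using inverse-variance (fixed-effect) weighting; the per-study effects and standard errors below are made up:

    import numpy as np

    # Hypothetical effect estimates and standard errors from five studies.
    effects = np.array([0.30, 0.10, -0.05, 0.22, 0.15])
    std_errs = np.array([0.15, 0.08, 0.12, 0.20, 0.10])

    # Fixed-effect meta-analysis: weight each study by its precision (1 / variance).
    weights = 1.0 / std_errs**2
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))

    print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")

No single study dominates unless its precision does, but the pooled estimate is only as good as the pool of studies feeding it, which is the publication-bias point again.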

But I'd still like to know how often industry-funded research converges with non-industry-funded findings. Otherwise you're doing the equivalent of dismissing observational research out of hand, even though it converges with RCTs 60%+ of the time, which takes some wind out of the sails of the "observational research is BS" knee-jerk.



