
This paper is awesome because it transparently folds the analytical approach into the experiment being conducted.

There are two kinds of scientific study: those where you can run another experiment (ideally one that approaches the same question orthogonally) along with rigorous controls, and those where you can't.

The first type is much less likely to have results vary based on analytical technique (effectively, the second experiment is a new analytical technique). Of course it does happen sometimes, and sometimes the studies are wrong, but more controls and more experiments are always better.

However, studies where you're limited by ethical or practical constraints (i.e. most experiments involving humans) don't have that luxury and are therefore far more contingent on decisions made at the analysis stage. What's awesome about this paper is that it gets around this limitation by trying different analytical methods, effectively treating each as a new "experiment" and seeing whether they all reach the same conclusion.

Interestingly, very few features in the analysis were shared among a large fraction of the teams (only two features were used by more than 50% of teams), which suggests the result holds regardless of the method chosen. A similar approach of open data and distributed analysis would be a great way to address some of the recent reproducibility trouble in the broader scientific literature.



