
I'm not a scientist, but in software development some people used to think that planning and designing everything up front was a good way to go about things. Turns out that making all the big decisions before getting your hands dirty and really learning about the problem domain isn't a great idea.



There's a long and unpleasant track record in science of people forming a hypothesis, running trials, then mining the data, changing methods and hypotheses, p-hacking their way to anything significant, and publishing a misleading paper that completely fails to mention that the original hypothesis was abandoned halfway through.

That may sound pretty agile, and it is. The issue is that while agile may work pretty well for software dev, it doesn't work so well for science: it's given rise to a number of abuses.

It might also be noted that researchers doing experimental design are often quite familiar with their field, having gotten their hands dirty in a problem domain repeatedly.


There's the XKCD about significant P-values: https://www.xkcd.com/882/
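The comic's punchline is just the multiple-comparisons problem: run enough independent tests on pure noise and one of them will come up "significant." A quick sketch of the arithmetic (the 20 tests and alpha = 0.05 are the comic's numbers; under a true null hypothesis, p-values are uniformly distributed on [0, 1], which is what the simulation relies on):

```python
import random

random.seed(42)

ALPHA = 0.05
N_TESTS = 20        # one test per jelly bean colour, as in xkcd 882
N_SIMS = 100_000

# Under a true null, each test's p-value is uniform on [0, 1], so a
# "significant" result is just a draw below ALPHA. Count how many
# simulated 20-test studies produce at least one such false positive.
false_positive_runs = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(N_SIMS)
)

simulated = false_positive_runs / N_SIMS
analytic = 1 - (1 - ALPHA) ** N_TESTS   # probability of >= 1 false positive

print(f"simulated: {simulated:.3f}, analytic: {analytic:.3f}")
```

The analytic value is about 0.64: testing 20 colours at the 5% level gives roughly a two-in-three chance of at least one spurious "green jelly beans cause acne" headline.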

To prevent that from happening, all the experiments that failed need to be reported too, and the link between them has to be obvious.
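Once all the trials are visible, a reader (or reviewer) can also adjust for how many were run. One standard adjustment, not discussed above but closely related, is the Bonferroni correction: divide the significance level by the number of tests. A minimal sketch, reusing the 20-test setup:

```python
# Bonferroni correction: with n_tests hypotheses, require p < ALPHA / n_tests
# for each individual test so the family-wise false-positive rate stays
# at or below ALPHA.
ALPHA = 0.05
n_tests = 20

per_test_threshold = ALPHA / n_tests                          # 0.0025
family_wise_error = 1 - (1 - per_test_threshold) ** n_tests   # just under 0.05

print(f"per-test threshold: {per_test_threshold}")
print(f"family-wise false-positive rate: {family_wise_error:.4f}")
```

The catch, and the point of the comment above, is that the correction only works if the true number of tests is known, which is exactly what an unreported pile of failed experiments hides.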



