
I'm hardly "singling out" the 10x claim. If you knew me, you would know that I readily criticize programming studies [1]. At any rate, the prevalence of poor science in software development is no reason to be silent about this particular case of poor science.

I'll admit to using anecdotes to describe my experiences with software development. (That's not quite the same as folklore, but whatever.) You won't find me claiming that my anecdotes prove anything, though, and I try to be clear that my perspective is based on experience, not objective truth.

[1] For example, this comment three years ago on reddit: http://www.reddit.com/r/programming/comments/7dkjx/rescuing_...

Fair enough. Is there any programming study that you think holds up as science?


Microsoft (Research?) published some stuff in the last year or two that looked really interesting. I don't have a link at the moment and I'm on my way out the door, unfortunately. But I remember thinking that it had promise because it drew conclusions from a huge base of data from actual Microsoft developers working on long-term projects.

Another paper I like is Little's report on data from projects at Landmark Graphics [1]. It's not a controlled study, and shouldn't be treated like one, but it also looks at a large base of data from real projects.

My biggest complaint about academic papers is that they're often conducted with students, applying practices out of context, on projects of trivial size and depth. I like both of the above not because they're perfect, but because they get past those problems.

[1] http://www.toddlittleweb.com/Papers/Little%20Cone%20of%20Unc...
