Hacker News

It's misleading to single out the 10x meme for the "that's not science" treatment. What beliefs about software development do hold up to the standard of hard (well-designed, controlled, and replicated) experiment? The published literature that I've read is all embarrassingly weak - understandably, given the complexity of what it's trying to study and the minuscule investment in such research.

I haven't read the ebook you're advertising, but a book with an identical pitch, Robert Glass's Facts and Fallacies of Software Engineering, is an example of the same problem. The difference between his facts and fallacies is that the "facts" tend to have one poorly controlled, never-replicated study (usually from 1978) to their name. Better than nothing (maybe), but close to nothing.

My point is that at this stage in history, it's all still folklore. Even you, immediately after playing the science card, followed with mushy folklore of your own ("I've certainly seen...").

Edit: come to think of it, your quotes make it sound like there is empirical evidence for the 10x claim, just that it's old and wasn't replicated. The "old" criticism is weird; it's not at all obvious that the internet invalidated the data. So the critique seems to boil down to: there was a study, but it wasn't bullet-proof and it wasn't properly replicated. What study of software development isn't that true of?




I'm hardly "singling out" the 10x claim. If you knew me, you would know that I readily criticize programming studies [1]. At any rate, the prevalence of poor science in software development is no reason to be silent about this particular case of poor science.

I'll admit to being guilty of using anecdotes to describe my experiences with software development. (That's not quite the same as folklore, but whatever.) You won't find me claiming that my anecdotes prove anything, though, and I try to be clear that my perspective is based on experiences, not objective truth.

[1] For example, this comment three years ago on reddit: http://www.reddit.com/r/programming/comments/7dkjx/rescuing_...


Fair enough. Is there any programming study that you think holds up as science?


Microsoft (Research?) published some stuff in the last year or two that looked really interesting. I don't have a link at the moment and I'm on my way out the door, unfortunately. But I remember thinking that it had promise because it drew conclusions from a huge base of data from actual Microsoft developers working on long-term projects.

Another paper I like is Little's report on data from projects at Landmark Graphics [1]. It's not a controlled study, and shouldn't be treated like one, but it also looks at a large base of data from real projects.

My biggest complaint about academic papers is that they're often conducted with students, using practices out of context, on projects of trivial size and depth. I like both of the above not because they're perfect, but because they get beyond those problems.

[1] http://www.toddlittleweb.com/Papers/Little%20Cone%20of%20Unc...



