Test Driven Development Actually Works (jphpsf.com)
58 points by jphpsf on Sept 30, 2012 | 32 comments



I am a latecomer to TDD and increasingly try to use it in all of my personal projects, no matter how trivial.

I can't say how objectively beneficial it has been because I haven't taken the time to measure anything, and I willingly admit that I may just be a sloppy programmer overall. But metrics aside, I will say that TDD, when I've done it on a "sure, why not, there's no deadline" basis, has been a great benefit to my morale and productivity because of how it builds the habit of programming into me.

Not the habit of TDD, but the habit of programming itself. Just as the cues, triggers, and rewards of a slot machine hook people into gambling even if those people dislike gambling, TDD helps me break out of the "I really don't feel like programming right now" mentality that I've always had. When all I have to do is solve some small task, with the knowledge that the reward (a passing test) is immediate, it's easy for me to jump into it...and once in a while, I'll even have the "just one more test to pass" attitude.

Now obviously, this (like slot machine addiction) is no good if you haven't built a good project plan and/or lack some sense of ingenuity. But even then, writing a series of menial functions is much preferable to doing nothing...just as grinding out a pathetically slow two-mile run to meet a running goal, even though you feel lazy, is way better than descending into self-pity/shame after you drop another New Year's resolution.

I could also point out that TDD has really helped me practice orthogonality, but I think it's enough to say for now how it has improved my attitude. YMMV.


I couldn't agree more. I work in robotics, and all of the software folk at our lab try to adhere to a very TDD/BDD-based workflow; working here is the first time I've been exposed to it, and it's changed the way that I code not so much by altering the quality (though this has probably improved as a side effect), but by making me more "productive" in the short term.

I'm a serial procrastinator; not in the cutesy demotivational-poster way, but in a way that is legitimately harmful to my productivity and success if I don't find a way to rein it in. It was a big problem years ago: I failed classes in school and missed deadlines at old jobs and internships. I've become a lot better at being responsible, but every now and again I feel the same old apathy set in, and I can go a day or two at around 50% of my normal output.

I find that when I need to start writing a new class, or do anything that is front-loaded in the cognitive-effort department, or I'm just suffering from a block in general, it helps to just start writing code, no matter what it is. The process of hitting keys acts as a great ignition and warm-up for the diesel engine that is my brain. If I'm going to mindlessly hammer out code, it may as well be useful. And a well-formed unit test, by its very nature, should be small, simple, and informative about the behavior of the code it tests, but it shouldn't be all that cognitively taxing to write if you understand the nature of the problem but haven't landed on an implementation. Brain-dumping a behavioral outline that happens to be useful and in code form beats hacking on the problem I'm actually trying to solve and having to go back later for a refactoring marathon.

In this way, practicing TDD has helped me lay out a system to kick-start myself when I start to slump. I realize that this is just one of many possible means to an end, but it's the one that works for me.


The thing about TDD is that it lets you figure out where you were in a big, complicated thing you're implementing. That's sometimes the hardest part of programming: answering the questions "Where was I? What did I have to do next?", especially when the system is non-trivial.

It's like writing very detailed functional notes to yourself about what your program is actually supposed to do so you can implement it. Besides, you write twice as much code, but have half as many bugs. It's a big win all around.

I wonder what it would be like to take TDD to extremes and write all the tests for the minimum viable product before writing a single line of actual implementation code. I wonder if it would be possible. It would certainly help me pick up a project wherever I had left it at any time in the future.


This experiment is not very rigorous. The Q4 2011 change in the proportion of UI bugs could be caused by the addition of back-end code with little UI (such as maintenance tasks), or by an upgrade to the bug tracking software that encouraged grouping multiple UI bugs into one bug report. And the author didn’t calculate whether TDD slowed down their work – it is not clear that TDD is better if it avoids defects but also increases development time.

You can find more rigorous studies on whether TDD actually works on Google Scholar: http://scholar.google.com/scholar?q=test-driven+development+...


Thanks for the feedback. I will check out this link; it looks like there are a lot of interesting resources there.

Regarding the first part of your comment: I don't think that is the case. We don't usually add backend code without UI (I don't have a measurement for this unfortunately). Our bug tracker or tracking method has not changed since we first installed Trac in 2007.

TDD definitely adds development time (especially in the beginning, when you have to learn the practice and put test frameworks in place). However, I see it as a good investment: you spend time upfront writing tests instead of spending more time afterwards troubleshooting broken software. Even better, the test suite can catch regressions long after a feature has been implemented, so you get even more value then. That really helps as a development team grows.


My thoughts too. It seems logical to suggest that TDD would reduce the number of bugs in the product that's being tested, at least somewhat. However, the real question may be whether the time spent writing those tests (longer than the time spent writing the code?) is worth it when you look at the number of problems it prevented.

I believe the practicality of TDD depends on too many factors to make statements about it in general. Is your project well defined (good for TDD), or a cool little idea that changes and morphs every day or two (bad for TDD)? Is it large-scale and complex (good for TDD), or quite simple (TDD may be a waste of time here)? There are more factors too, of course.


Thanks for the feedback. See my comment to roryokane above about the time spent on testing.

I agree with you about the practicality of TDD. For instance, recently, I was prototyping an upcoming feature involving a new library. The feature was quite different in terms of functionality (compared to the existing features). In this case, I did not use TDD as the prototyping was morphing (to use your words) quite a lot and I was trying to learn more about the library.


I also dislike these percentage graphs, where you cannot see how "100%" changes. Maybe the server team fired their best programmer and their bug rate went up without any changes to the UI code.


Personally, I don't find the idea of 'test first' to be very practical for the kinds of tests that I write. I think the Pragmatic Programmers used the expression 'spike and stabilize' to describe a pattern of development whereby devs get something working and then write some tests to support that new feature. They made an analogy to mountain climbers free-climbing for a bit and then securing everything with ropes.

In order to do 'test first' I would have to decide things like what to call an input field on a web page so that I could write my failing test and then code the page. That is too early to make decisions like that. Also, when people say they write a failing test and then the implementation, are they really writing a single failing test? Often I find I need several tests to cover all the cases. I bring this up because I'm a big fan of TDD, but it seems too easy to poke holes in the whole 'write the failing test first' approach.


Perhaps I'm missing something, but it seems like what this post describes isn't so much "test driven development" as "testing". Rigor of the data analysis aside, unless I misread, it seems like a comparison between not testing at all, and testing (I didn't notice any specific mention of TDD methodologies).

Shouldn't the comparison be between full-on TDD (e.g. write tests first, etc etc) and 'old fashioned' testing (for lack of a better term)?


You are absolutely right. On one side, the UI has automated tests (written using TDD), and on the other side there are no tests (the backend server has no tests, and the first 2 years of the UI had no tests).


TDD works for some things, not so much for others. If you know what you want to build, and how it will work, TDD makes it more likely that you will build it correctly, and that the programmatic interfaces will be pleasant to work with. If you don't know how to make the thing you want to build work, TDD is not going to help you. At all.

As an illustration see http://devgrind.com/2007/04/25/how-to-not-solve-a-sudoku/ for an amusing example of TDD failing horribly for a simple algorithm problem.


TDD helps with design problems, not algorithm problems.


Yet the luminaries of TDD actually imagined it might be able to evolve good algorithms...

You might undersell it; they oversold it.


I'm afraid the OP is jumping to conclusions.

You can't arrive at the conclusion that TDD reduced the number of bugs based on a percentage ratio between UI and server bugs. You could simply have a bigger share of backend bugs over time.

You need to see it in absolute numbers, using some meaningful metric (bugs vs. LOC maybe?).


Bugs vs. changed LOC could be a good one. I believe there have been studies showing that bugs are proportional to LOC, but don't quote me on that.

The nice thing about his data is that it's fairly well controlled (for software, anyway): we can separate his contribution from others', and his code pre- and post-TDD.

I agree that his analysis isn't great, though. Also, it's hard to account for the effect of legacy code: the big drop in bug count doesn't happen until 5 quarters after he started TDD.


When you spend half your time writing tests, you change half as much and create half as many bugs. Admittedly that's a bit of an exaggeration, as tests do help.


My team has been working toward this goal for the last 8 months. We've talked a lot about testing and making it a requirement to have passing tests before a feature can be called "dev complete".

In going back and writing tests for even some recently developed features, we have found a few (albeit minor) bugs. In adding tests to legacy code we found mountains of issues, but it's also very time-consuming to do.

We're just starting work on an API and our goal is to have tests written before the code is written, allowing us to essentially write the requirements for each API and write the code to match those requirements. I'm looking forward to the process as I really feel it'll make our code clean, concise, compartmentalized, and much easier to maintain in the future.
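As a rough sketch of what "requirements as tests" can look like, assuming Python's standard unittest module (the create_user() function and its fields are invented for illustration, not taken from the parent's actual API):

    import unittest
    import itertools

    # Requirement, written before any implementation exists: creating a
    # user with an email returns an id and echoes the email back; a
    # request without an email is rejected.
    class TestCreateUserRequirement(unittest.TestCase):
        def test_valid_payload_returns_id_and_email(self):
            user = create_user({"email": "a@example.com"})
            self.assertIn("id", user)
            self.assertEqual(user["email"], "a@example.com")

        def test_missing_email_is_rejected(self):
            with self.assertRaises(ValueError):
                create_user({})

    # Minimal implementation written afterwards to satisfy the requirement.
    _ids = itertools.count(1)

    def create_user(payload):
        if "email" not in payload:
            raise ValueError("email is required")
        return {"id": next(_ids), "email": payload["email"]}

    if __name__ == "__main__":
        unittest.main()

The test ends up reading like the requirement itself, which is the main appeal of writing it first.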


Very interesting. Thanks for your reply. Out of curiosity, I have 2 questions:

1) Do you measure defect counts over time? And if so, can you share them? 2) What is your platform (web?)?

Thanks!


It would be nice if it were true; the graph looks great, and props for supplying the data. Correlation does not equal causation, though. Is there any scientific research on TDD or other programming styles?


Google Scholar finds many research papers on the effectiveness of TDD: http://scholar.google.com/scholar?q=test-driven+development+...


The IBM research is interesting, though it seems as anecdotal as the OP's; it also shows a 50% reduction in faults. Perhaps if there were a few more cases like this, a meta-study could yield some conclusive evidence.



The problem with this study and many others that report on TDD's benefits is that they don't compare the results with a test group that writes unit tests after the fact. So, it's not at all clear what they're validating: the benefits of unit tests (which I think are well accepted) or the benefits of writing tests before code. I suspect it's primarily the former.


The research in this paper does not provide any conclusive evidence that TDD is better; it merely showed that TDD encourages developers to work longer until a higher code quality is attained, if I read it correctly.


It's going to be impossible to quantify "better" anyway. It depends on what you're optimizing for. This research is incredibly valuable in helping us understand the results of TDD, particularly if we're optimizing purely for quality. :)


Is TDD supposed to prevent bugs? I thought the main point of TDD was that it was supposed to encourage better design.

First write the test. This is supposed to make you think more about the API and how you'd like to use it. Then write the implementation, now that you've effectively tried using the API by writing the test. That's why it's called Test DRIVEN Development. If all you're doing is writing unit tests, that's not really TDD, that's just writing tests.
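A minimal sketch of that cycle, assuming Python and the standard unittest module (the slugify() helper is a made-up example, not anything from the article):

    import unittest

    # Step 1: write the test first. It fails until slugify() exists, and
    # writing it forces a decision about the function's name and signature.
    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_joins_words_with_dashes(self):
            self.assertEqual(slugify("Test Driven Development"),
                             "test-driven-development")

        def test_strips_surrounding_whitespace(self):
            self.assertEqual(slugify("  hello world  "), "hello-world")

    # Step 2: write just enough implementation to make the tests pass,
    # then refactor with the tests as a safety net.
    def slugify(text):
        return "-".join(text.strip().lower().split())

    if __name__ == "__main__":
        unittest.main()

The point is that the test is written against the API you wish you had; the implementation only has to catch up with it.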


It is supposed to do both: prevent bugs, because you capture behavior in tests, and improve design, by leveraging humans' natural laziness to take the path of least effort to achieve results. Usually the least-effort approach results in cleaner, better-structured code; at least that is what I have observed.


The author seems to emphasize that practicing TDD will produce FEWER bugs, which is not what TDD is all about. TDD is about designing your architecture by doing your "tests first".

On the other hand, some developers write the code first and the unit tests later, which can also produce fewer bugs, but this is not TDD.


A general note on TDD: I love it.

TDD encourages better APIs and more testable code. I don't use TDD to prevent bugs; I don't even know if it really does.

Every time I think of a new feature, I quickly have an API in mind. Starting with a test almost always shows me a better way to provide the functionality (easier, more failsafe). That's the first of the two key benefits of TDD for me.

The second key benefit is that the tests you write up front are much more expressive than tests you write afterwards. I observe that every day in my team and company. Writing tests after the implementation is done leads to a lot of mocks and just a static verification that the implementation works the way it happens to work. In the end, it is difficult to change the implementation and easy to break the specification. Writing your test ahead of the implementation means you test the specification and not how you implemented it. Tests become much easier to read, as they express the specification, instead of a 1000-line test where every variable assignment of the implementation is checked.
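To make that contrast concrete, here is a small hypothetical example in Python (the total() and _apply_discount() names are invented, not from any real codebase): the first test states the specification, the second pins the implementation.

    import unittest
    from unittest import mock

    # Hypothetical production code: total() sums item prices and applies
    # a percentage discount via an internal helper.
    def _apply_discount(subtotal, pct):
        return round(subtotal * (1 - pct / 100.0), 2)

    def total(prices, pct=0):
        return _apply_discount(sum(prices), pct)

    # Specification-style test, written before the code: it states the
    # observable behaviour and survives a rewrite of the internals.
    class TestTotalSpecification(unittest.TestCase):
        def test_ten_percent_discount(self):
            self.assertEqual(total([10.0, 20.0], pct=10), 27.0)

    # Implementation-coupled test, written after the fact: it mocks an
    # internal helper and merely restates how the code happens to work
    # today, so any refactoring of the helper breaks it.
    class TestTotalImplementation(unittest.TestCase):
        def test_delegates_to_helper(self):
            with mock.patch(__name__ + "._apply_discount") as helper:
                total([10.0, 20.0], pct=10)
                helper.assert_called_once_with(30.0, 10)

    if __name__ == "__main__":
        unittest.main()

Both suites pass today, but only the first one still means anything after a rewrite of the internals.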

As a last hint: I read here that people complain that TDD does not work for "prototyping". I agree with that; the problem with that statement is that a lot of people use their prototype as production code later on instead of rebuilding it after the experimental phase. In my opinion that is a misuse of the word "prototype": a prototype is an experimental work to verify or learn some ideas, after which you design a better system. Other people may state that testing everything is not possible, that there are situations where it is not possible to write a test. I would bet that in 90% of those cases this is just wrong and reflects a lack of testing experience on the writer's part. I can really recommend the following book: http://www.amazon.com/Driven-Development-Embedded-Pragmatic-... I rarely do embedded systems programming; my main focus is on languages like Java and Go, but this book is worth its money regardless of your choice of programming language.


There are 2 major flaws in this experiment, and both stem from this assumption:

that the first couple of years of working for this company are equivalent to the next 3 years.

This is obviously NOT correct.

1. The more you work with a codebase, the more you understand its details and complexities; this alone should help you eliminate more bugs during development.

2. You will have improved as a developer, meaning that in the last 3 years of that span you had more experience and knowledge than in the first 2.

As sad as I am to state this... you cannot conclude that TDD actually works, because there are too many other variables involved.


The article is not conclusive on whether it was TDD or just plain automated testing that provided the benefits.

I've heard good arguments against TDD, but I haven't heard any good arguments against automated testing. (Other than situations where automating a process is cumbersome, like hardware bugs.)



