Hacker News
When TDD Fails (bitroar.posterous.com)
93 points by gambler on Oct 1, 2011 | 101 comments



I got as far as "Oh, and good luck mocking your database and HTTP request objects" and just stopped. If you don't have any experience with TDD, then don't write about it. Mocking HTTP request objects is incredibly trivial, and tools are written in every language to do just that.

I don't know why these articles keep getting written. No one's forcing you to write your tests first, and if you can't wrap your head around the benefits (which the author clearly can't) then don't do it. It's fine. I'll still continue to utilize TDD, and continue to pump out significantly more code (with significantly fewer bugs) than the code I produced without TDD. Yes, there's definitely code I write where I don't write the tests first. It happens; sometimes it's just easier to bang something out because you aren't sure what you're building. That doesn't mean TDD is a failure. It's a tool, like anything else. Use it where it's appropriate, because the benefits are massive.
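For instance, a minimal sketch of the kind of HTTP mocking I mean, in Python with the standard library's unittest.mock (the fetch_title function and the use of the requests library are just illustrative assumptions, not anything from the article):

    import requests
    from unittest import mock

    def fetch_title(url):
        # hypothetical code under test: fetch a JSON document and read one field
        return requests.get(url).json()["title"]

    def test_fetch_title_returns_title_field():
        fake_response = mock.Mock()
        fake_response.json.return_value = {"title": "When TDD Fails"}
        with mock.patch("requests.get", return_value=fake_response) as fake_get:
            assert fetch_title("http://example.com/post") == "When TDD Fails"
            fake_get.assert_called_once_with("http://example.com/post")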


No one's forcing you to write your tests first

That is very likely not true. If TDD is your "process" at work, then you are being forced to write your tests first.

If TDD was chosen to be your process because of erroneous assumptions about the value of TDD, its strengths, and its weaknesses, and those erroneous assumptions are the result of religious zealotry squashing the heretics rather than reasoned debate, then you have a problem.

Religion belongs in the church, not in software. It frustrates me how poor the technical debates in software can be because people get so attached to their technology. It is directly analogous to an artist who demands a specific type of paint for all their work and freaks out when somebody says "you know, a gallon of latex paint might be better than that tube of acrylic for painting the walls in here."


Unless someone is hovering over your shoulder, how is your employer going to tell the difference if you just commit tests and code together?


Pair programming. I'm half-joking, but sometimes it is used like that.


Isn't pair programming where you sit back and think about kittens while someone else monopolizes your keyboard? If they want to write the tests first, let 'em.


> Isn't pair programming where you sit back and think about kittens while someone else monopolizes your keyboard?

No, it's not. If that's how you define it, you've never pair programmed.


I think you're being a bit overly sensitive. I'm describing what you can do other than quit if you don't want to do pair programming. Nobody ever says no to carte blanche to do whatever they want, and if you give them your keyboard, they won't even know you're mentally watching YouTube videos involving rainbows, a repeating soundtrack, and pop-tarts taped to cats.


I forgot, anti-TDD is the trend these days. Should have made a joke about TDD being an STD in the programming world. Much better than actually contributing or correcting those who are wrong.


I have experienced this scenario first hand. It made me quit my otherwise fairly well paid new job.


I know all too well, though I was being a little oblique about it. Also, your religious TDD folks and your religious pair programmers are often the same people (same religion?)


> If TDD is your "process" at work, then you are being forced to write your tests first.

Not necessarily. Usually you're only required to write a test before you start working on a specific feature. If you just want to research something, I've never seen anyone get in the way of you pulling up a REPL.

But even so, saying you're forced to is like saying your company forces you to check your work into a VCS before they'll give you credit for having written it. Or forces you to fix bugs/add features from a list instead of just ad-hoc.


You should have continued. His point is that while TDD is good for some things, it misses huge areas and has very little ROI in most simplistic cases.

Yes, you can mock HTTP requests, but there are numerous bugs you wouldn't catch unless you went and hit your application with an actual browser. Similarly, you can mock the database, but you're not testing all the "magic" the database does and all the myriad ways it could fail.


For those cases, tests have value in telling you where the bug isn't.

For me, one of the biggest benefits of code that has been written to be testable is that it's been designed such that it's possible to reason about fine-grained details. If I find a problem, I can generally start adding tests as part of the debugging procedure to make assertions about the specific part of the code that I think might be failing.

That's much harder to do at a sufficiently high level of detail if the code wasn't structured to make it easy from the start.


> Similarly, you can mock the database, but you're not testing all the "magic" the database does and all the myriad ways it could fail.

You are correct. And this is why you unit test the DB and all the stored procs and queries you perform there.

And, this is also why you can add in tests that do use the database connection, and test the more complete stack. TDD does not limit you to only use the tests at the application level. It does not limit you to building other forms of tests. To imply that is dishonest. It would be akin to me saying "Because you do not use TDD, you do not test."


I have no intention of ever mocking the database, and I mock HTTP requests solely so I can work on my app without an internet connection. Look, a better approach is to use TDD for things it makes sense for. Spending a ton of time to mock out your database does nothing for you. Is mysql going to fail for you? Is the mysql driver that a million people are using going to fail for you? No? So, don't mock it out.


You don't mock the drivers.. you abstract the database layer to test for particular data. For instance, you make getName() return an empty string, a 1000-wide-character string, strings with unicode, etc. It's not about MySQL; it's about what MySQL returns.
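Something like this, say (a Python sketch; FakeUserGateway and format_name are made up purely for illustration): the code under test takes a data-access object, and the test hands it one that returns the awkward values you care about.

    class FakeUserGateway:
        # stands in for the real data layer; returns whatever the test wants
        def __init__(self, name):
            self._name = name

        def get_name(self, user_id):
            return self._name

    def format_name(gateway, user_id):
        # hypothetical code under test: must cope with whatever the DB hands back
        name = gateway.get_name(user_id)
        return name.strip() or "(no name)"

    def test_format_name_copes_with_awkward_values():
        for name in ["", "   ", "x" * 1000, "\u00e9\u0142\u00f8 unicode"]:
            result = format_name(FakeUserGateway(name), user_id=1)
            assert result == name.strip() or result == "(no name)"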


That's done via fixtures, not by mocking your database, and is significantly simpler than HTTP request mocking.


Here's a trivial example:

Let's say you have a class that writes a string to the database. You abstract out the actual writing of this string to the database, thinking it's always going to work; it's just dumping data, after all. Your test passes, everything is good.

Now you optimize your database and add a restriction to make the string at most 50 characters. Your test still passes, but you now have a bug. OK, so you should've had a restriction in your BO. You add that restriction and move on.

Your DBA comes along and adds an integrity check or a trigger that makes the insertion fail if some weird condition is met. Your test passes, but you have a bug.

This can get even more interesting when you hit some basic database rule that you didn't even know existed. You assume the insertion will work, but it won't. You now have a bug.

You've tested that 2 + 2 = 4, and it works, but when that code is linked with the actual database you realize that in some cases 2 + 2 doesn't equal 4.

I'm not saying don't have unit tests, but when it comes to bugs, the vast majority of them, in my experience, come from the glue, from the assumption that the piece you're integrating with should work one way when reality begs to differ.


Well, this could be solved with integration or acceptance tests.
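For instance, a rough sketch of the parent's 50-character scenario using Python and SQLite (the table and check constraint are made-up stand-ins for the real schema): a unit test with the write mocked out stays green, while a test against a real schema with the restriction fails the way it should.

    import sqlite3

    def save_comment(db, text):
        # hypothetical code under test: assumes the insert always succeeds
        db.execute("INSERT INTO comments (body) VALUES (?)", (text,))

    def test_real_schema_catches_the_restriction():
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE comments (body TEXT CHECK (length(body) <= 50))")
        try:
            save_comment(db, "x" * 60)
            assert False, "expected the 50-character restriction to reject this"
        except sqlite3.IntegrityError:
            pass  # the failure the mocked-out unit test never sees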

Anyhow, TDD doesn't promise you bug free code.


We mock the database because it's a damn sight faster than fixture loading.


IMO, TDD is a crutch. It can be useful, but don't assume that just because it makes you faster now it will continue to be necessary. I know plenty of people who found TDD extremely helpful, but most of them have moved on, not because they stopped finding it useful but because they slowly got faster without it. I think it works like training wheels: because you notice problems sooner, you start to recognize the approaches that lead to those problems and avoid making those mistakes in the first place.

PS: Having great tests can really help maintain code, but I would much rather have cleaner code than more tests.


That's not why it's faster. It's faster because I never have to manually run through whatever I'm developing. On a large feature that I can completely write using TDD, I don't ever 'test' the feature until I'm completely finished. There's a MASSIVE time savings in not ever having to walk through anything in a web browser, and never creating any regressions while fixing anything that breaks later in development.


I'm surprised not many people have talked about BDD. BDD lets you focus on the behavior of a method, class, or a set of methods/classes that interact with each other without worrying about the internal implementation details, unlike TDD, where you test every method and may end up with tests for dead code. BDD lets you paraphrase what you're trying to do and tells you when you're done. Having said that, I have to agree that TDD/BDD is just a tool and no one should be forced to use it as the only method.

As far as the problem of mocking http requests is concerned, most frameworks like Rails/Spring come with http mocks. But a much better solution is to make your controller code as trivial as possible and move as much of your business logic to models. RESTful design helps you solve this problem. This is not just useful for web applications. Moving your logic from controllers to models is a good MVC principle too. It gives you the flexibility to use a different GUI toolkit for the same code.

As far as the problem of mocking database calls, it depends on whether you prefer your test suites to be real or fast. For simple database calls, I'd rather hit the database and end up with tests that assert behavior than tests which redundantly repeat what the code is trying to do using mocks. Mocking is still a useful technique for things you can't control like an External API.

TDD/BDD is useful for most of the model-centric work we do in Web based software. I'm not sure how useful it would be in areas like language design, framework design, mathematical modeling, algorithm design etc. QuickCheck or some other form of program verification will be a much better choice if your work involves designing/verifying complex algorithms. Dijkstra's quote, "Testing shows the presence, not the absence of bugs," points out the problems with TDD. TDD ensures your system satisfies the specs and that the addition of new features doesn't break existing behavior. But it does not protect your system from behaviors you did not anticipate when you wrote the code/tests in the first place. TDD/BDD will not make you a smarter programmer. Genius programmers like Rich Hickey don't need TDD. But hey, we are mere mortals who work with other mere mortals in building non-trivial systems. And TDD is just one of the ways to ensure fewer bugs in the system.


More accurately, nobody's forcing you to take a job where you disagree with the methodology.

I've had to deal with agile whining for nearly a decade now. The only thing to do is ask them to find work elsewhere: they won't be happy on an agile project, and the agile project won't be happy with them.


I'm currently building a test framework for my graphics engine, and have run into the exact problems described in this post. A graphics engine, by design, must be able to do millions of different things that are created by the interaction between relatively few methods, none of which can go wrong at any time. One of my new features is probably buggy, but attempting to brute force test border conditions requires tens of thousands of tests because of all the interacting elements, any two of which could interact in just the wrong way to blow something up. It gets even more ridiculous when you have precision bugs, where only certain numbers in certain ranges in certain cases will explode. Testing that using inputs is impossible.

This occurs so often and with such regularity I am now convinced that everything I write is riddled with bugs that I will probably never find without beta-testing in real-world scenarios, plus many more bugs I will simply never find because they never come up.

A much better design would be a test platform that analyzes the assembly structure to pinpoint actual edge cases in the code itself, which could then be used as a guide for finding bugs instead of relying on hundreds of thousands of test cases.


It sounds like you might be better with something like QuickCheck for your testing: http://en.wikipedia.org/wiki/QuickCheck

This was featured recently in Bryan O'Sullivan's "Running a startup on Haskell" presentation: "[QuickCheck] generates random data, then tells me if all properties that I expect to be true of my functions and data hold. QuickCheck is shockingly more effective at finding bugs than unit tests."

There are ports of QuickCheck to tons of other languages...
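To make the idea concrete, here's a hand-rolled Python sketch of the property-based style (a real QuickCheck port also shrinks failing inputs down to a minimal case, which is half the value): generate random inputs, then assert properties that must always hold. Sorting is just a stand-in for whatever invariants your own code has.

    import random

    def test_sort_properties_quickcheck_style():
        for _ in range(1000):
            xs = [random.randint(-10**6, 10**6)
                  for _ in range(random.randint(0, 100))]
            out = sorted(xs)
            assert all(a <= b for a, b in zip(out, out[1:]))  # output is ordered
            assert sorted(out) == out                         # sorting is idempotent
            assert len(out) == len(xs)                        # nothing gets lost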


This gets complicated in a graphics engine where the only place you can tell something has gone wrong is after the image shows up in the wrong place. It would help a lot for the majority of cases, though.


Depending on the language you're using, there might be some tools to help you.

For example, MS recently released a library for Design by Contract (aka Contract Programming) in .NET 4.0. It allows you to specify constraints on your methods, such as things you expect to be true before the method is called, after, and between all method calls in a class. The library is capable of static verification, but it's partly an experimental feature and you need to pay for an expensive version of VS. (Runtime checking is available for free.)

But, here is the cool part: MS also released an automatic test generator called PEX. It can do exactly what you've described - go into your code and automatically find edge cases, and generate tests that cover each of them. And it's free.

So, you can write contracts, run PEX, and if something goes wrong, you will see which inputs generated exceptions.

D also has DbC functionality. I don't think it runs any static verification on it yet, but you can use it to detect abnormalities during functional testing.


That would be fantastic if I didn't write everything in C++


Genuine question: doesn't such a thing as "design by contract" make for the same bloat as checked exceptions in Java?


DbC doesn't need to be dealt with at every level of the call chain if that's what you mean.

It may seem a bit verbose, but it usually expresses logic that would be in your program or in your tests anyway. Difference is, you will be doing it in a pretty terse and declarative manner. IMO, DbC is one of the coolest features of .NET 4.0.
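A rough sketch of the idea in plain Python (just assertions, not the .NET Contracts API): state what must hold on entry and on exit, and fail loudly if it doesn't.

    def clamp(value, lo, hi):
        # precondition: the requested range must make sense
        assert lo <= hi, "requires: lo <= hi"
        result = max(lo, min(hi, value))
        # postcondition: the result lies inside the requested range
        assert lo <= result <= hi, "ensures: lo <= result <= hi"
        return result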



If you have interactions between relatively few methods, then unit-testing those interactions shouldn't be too much of a problem. You wouldn't test all possible inputs, but focus on making sure that you have a test-case for each input that needs to be handled differently. Bugs will still creep through, but they will be substantially fewer.

If you really do need tens of thousands of tests because you have so much interaction with unique states going on in your engine then you will probably never ship a product with any reasonable quality.

And that floating point operations lack precision is just the way things are; you need to understand how to write stable code despite it if you want a stable graphics engine.


While testing code is important (and I have tried to do better at writing proper tests for my code), the reason that I have not adopted TDD is because when I write a test and it fails, about half the time it's an actual bug in the code, and half the time it's a bug in the test. I view testing as more akin to going back and checking your work after solving a math problem - not as the definition of what your code is supposed to do, but as a verification that you did it right (and that it continues to work right in the future).


> when I write a test and it fails, about half the time it's an actual bug in the code, and half the time it's a bug in the test.

Well yeah... You'd expect otherwise?

Do you mean when the test fails that it's some sort of not-bug like you changed your mind about what the method should do since writing the test?


I generally agree with the final sentence: "If you choose a methodology without comparing it against alternatives, if you claim that it works all the time without evaluating the results, than you're not doing engineering - you're practicing a religion."

I've definitely seen some negative effects when a team is forced to create a huge volume of low-level tests because they perceive that to be the only acceptable solution. They get bored with the work, and worse, think less about the larger integration issues.

I'm not arguing that TDD fails, but you'd better monitor the efficacy of whatever testing regimen you employ lest you suffer process and quality rot.

P.S. Dude needs to make friends with a spell checker... wow.


IMHO, good testing is hard. I think that something that's probably as hard as writing programs in the first place shouldn't be commoditized into a methodology. I don't particularly dislike TDD, but I certainly don't like it either.

The best phase to write tests is when you've locked down a part of your program. A part such as one distinct submodule or function or the sort.c or nodegraph.c of your latest project--a part that is relatively orthogonal to the rest of your code. That sort of ensures that the basic blocks, once finished, won't fail surprisingly. However, this can only be applied to basic building blocks.

Testing the bigger parts of your non-trivial medium-size program is likely to be so hard and complex that you have no chance of planning the testing beforehand. I think that a good programmer or tester can come up with a relatively comprehensive test suite that triggers execution paths up to 80-90% code coverage if given a sufficiently finished program, i.e. one whose structure has mostly stabilized. A good programmer can also make changes to the same program without decreasing the quality of the code. Bad programmers are no more able to write good tests than they are to modify the code itself without letting entropy into the driver's seat.


IMHO, good testing is hard. I think that something that's probably as hard as writing programs in the first place shouldn't be commoditized into a methodology. I don't particularly dislike TDD, but I certainly don't like it either.

Completely agree.

The best phase to write tests is when you've locked down a part of your program.

Completely disagree.


To elaborate, writing tests can very quickly make you aware of shortcomings or clumsy aspects of your APIs, since it should be the first time they're actually used.


The author argues that TDD fails in code that's largely wiring. I think the opposite is true. I'm writing an application that's largely wiring, and for the first time in a long time, using TDD. It's refreshing.

Most of my tests are two-liners, like:

    it "should do x when y" do
        obj_under_test.should_receive(:consequence_method).with(some_args)
        obj_under_test.do_something
    end
There are a number of reasons why this has helped:

1. It ensures I'm using dependency injection, etc to write testable, well-factored code. There's huge correlation between testability and maintainability.

2. I don't have to boot the thing all the time to confirm that I didn't mess up in some obvious way. Covering the code paths prevents typos.

3. My test suite runs in under three seconds. I can sanity check what I'm doing, without being tempted to browse HN/reddit/twitter/etc.

I like TDD more in wiring-only code. If I'm writing wiring, I know what my test case will look like ahead of time, and I'll write it first. If I'm writing experimental algorithms, I have no idea what will happen, and I'd rather write code and poke it.


I know I'm pulling out raganwald's argument from one of his posts, but I'm honestly curious: is there anything other than anecdotal evidence that “there's huge correlation between testability and maintainability”? In particular, here we're talking about unit testability. I don't agree or disagree—I've done both TDD and not, but I'm still not sure I've created more maintainable code when testing than when not.


Replying to both of you so replying to myself -- I know all of the arguments for why testing improves maintainability; that has nothing to do with what I'm asking. It was my mistake for muddying my question with my own anecdote.

I'm asking: is there scientific proof (i.e., in a study) that testing and maintainability are at least positively correlated in a reasonably universal fashion?


Maintainability is not just about you changing your code later on; it's about someone else trying to understand what you were thinking when you wrote it.. and then change it. Tests make that process way easier. You can add your new feature without wasting time understanding everything else and still make sure it's working. If it's your own project, meh, you know your stuff. You even know all the hacks you did to save time. I see tests as documentation that shows me what's working. Reading tests (high level ones) is usually the first thing I do when starting on a new project. Comments change over time.. documents aren't updated.. people leave.. but tests remain. If the test passes, it doesn't mean everything is perfect, but at least you know that these things work.


Please note that the grandparent post specifically asks about unit testing and unit testability, while you speak in much broader terms.

Automated regression tests can be created in many ways, including UI-level tests that don't require any changes in coding practices at all.


Testable code is loosely coupled, because it has to interact with mocks. Untestable code tends to have things like hard-wired http clients, because it was "easier" to just inline the constructor. When you go back to upgrade the current version of your http client later, you'll appreciate that you didn't hardwire it.


Frameworks and generic APIs definitely should not have important functionality hardwired. However, changing something from a "hardwired" object property to a dependency-injected one is trivial when you have control over the code.

If you write client code and don't need to mutate an object, having it "hard-coded" significantly improves readability and makes your program much simpler.

The most trivial example I can come up with. Compare:

        public void SomeMethod()
        {
            string config = File.ReadAllText("bal.conf");
            //etc
        }
with this:

        string config; 

        public Something(string newConfig)
        {
            config = newConfig;
        }

        public void SomeMethod()
        {
            //etc
        }
To understand what the first method does, you only need to look at the method. To understand what the second block of code does, you need to know about the method, the property, the constructor, and even with all that knowledge you have no idea where that string really came from. If it always comes from the same file, it's added complexity with no tangible benefit.

It's the side of dependency injection that people don't like to speak about. Client code doesn't always need to be perfectly flexible. You can make it flexible when and where it's needed.


In a more flexible language (or with overloading), you can get the best of both worlds.

    // yay, scala
    def someMethod(config: String = scala.io.Source.fromFile("bal.conf").mkString) = {
        //etc
    }


What happens when you want to rename and/or split your consequence_method? How many tests do you have to rewrite?


What happens when you refactor your code? It's fairly obvious what happens: you change the name of the method being called. However, the benefit here with TDD is you still retain your testing ability. More importantly, anywhere in the code also using the method can be easily discovered. Finally, by doing this, you've automatically updated your documentation as well.

So, let's see, you've updated tests, code, and documentation all in one. Whereas if you didn't have tests, you'd have to update the code through manually searching and testing everything, and still have to remember to update the documentation.


Conversely, what happens if someone else renames/splits consequence_method? Without this unit test, how would they have known they broke something?

Unit tests help refactorability in dynamically typed languages. Whether this is an argument for static typing is left as an exercise for the reader.


Agreed. I'm currently writing a package that contains a few sophisticated algorithmic parts and a ton of wiring. I find it much easier to utilize TDD in the wiring sections than in the algorithmic parts. I still use TDD for everything, it's just that I find myself to be more easily dogmatic about TDD in the wiring code.


I don't like this kind of mocking because usually, I don't care what methods are called internally, I care what the outputs and side effects are. There's almost always a better approach than expectations.
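In other words, given the choice I'd rather write the second test below than the first (a Python sketch; Cart and its pricer collaborator are hypothetical): the first pins down which method gets called internally, the second only pins down the observable result.

    from unittest import mock

    def test_add_item_with_expectations():
        pricer = mock.Mock()
        cart = Cart(pricer=pricer)
        cart.add_item("book", price=10)
        pricer.recalculate.assert_called_once_with()  # brittle: asserts the internals

    def test_add_item_by_outcome():
        cart = Cart(pricer=Pricer())
        cart.add_item("book", price=10)
        assert cart.total == 10                       # asserts the result we care about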


I've actually come to this conclusion on several of my Rails projects. After struggling with mocking up yet another set of I/O objects I realized that it's not really doing anything useful. I agree that there should be testing; it's just that TDD on MVC is very difficult to do properly. :/


The real "problems" with TDD are

(a) in the "driven" part, not the "test" part. Tests are (in general) a good thing. However, using a series of tests to drive your design (aka "TDD is not a testing method, it is a design method" idea) often gives you an illusion of progress as you hill climb using conformance to an increasing number of tests as a progress heuristic and end up on top of a local maximum (as for example in the TDD sudoku episode).

(b) in conflating TDD with one or more of (1) testing, (2) automated testing, (3) automated regression test suites, (4) developers adding more tests to the automated regression test suite as they develop more features, refactor, debug etc.

You can have (1) to (4) without either (5) writing tests first (aka "don't write a line of code without having written a test covering it") or (6) driving your design with tests. The last two ideas are the real distinguishing features of TDD and are of debatable merit. None of (1) through (4) are novel ideas. (5) and (6) are where differences of opinion happen.

Even if you choose to use TDD, it is good to be aware it is just one tool in your toolbox and not necessarily the default tool to reach for.

(c) in the zealotry of some of its evangelists who insist that TDD is some kind of moral imperative and is the only "correct" way of developing software and anyone who doesn't follow that path or make respectful obeisance to it is "unprofessional","dodgy" etc. This is often accompanied by conflating TDD with more generic notions like "automated tests" etc as above.

For example, Rich Hickey, the author of Clojure, said recently at the Strange Loop conference "We say, “I can make a change because I have tests.” Who does that? Who drives their car around banging into the guard rails!?"

(and that is all he said. One sentence in a keynote presentation)

For this Hickey was taken to task by a TDD advocate, Brian Marick, for not being "respectful" enough to TDD and for his "tone" in daring to mildly criticize it as a development practice. After some tweets complaining about Rich Hickey's tone driving people away from Clojure etc., he wrote

http://www.exampler.com/blog/2011/09/29/my-clojure-problem

"The dodgy attitudes come from the Clojure core, especially Rich Hickey himself, I’m sad to say."

This kind of repeated whining and harassment over a few days made the normally unflappable Hickey (who asked for references to his "disrespect" etc, to the sound of crickets) lose his temper and say (on his twitter stream)

"If people get offended when their tools/methods are criticized as being inadequate for certain purposes, they need to relax.",

and "Testing is not a strawman. It's an activity, it has benefits, but is not a panacea. No 'community' should be threatened by that fact"

and later "Accusing people who merely disagree with you of being snarky, intolerant, dismissive etc is both wrong and destructive."

and much later after being subjected to a barrage of tweets criticizing his tone and 'lack of respect' for TDD, "If launching an ad hominem attack is the product of a lot of thought, it is time for you to move on. Good riddance."

postscript: the best criticism of TDD I've seen is at http://www.dalkescientific.com/writings/diary/archive/2009/1... . The responses at http://dalkescientific.blogspot.com/2009/12/problems-with-td... are (mildly) interesting as well.


This comment should not be [dead] (if you don't want to give/dock me karma for irahul's comment, down-/upvote my reply to this comment):

irahul 7 hours ago | link [dead]

> For example, Rich Hickey, the author of Clojure, said recently at the Strange Loop conference "We say, “I can make a change because I have tests.” Who does that? Who drives their car around banging into the guard rails!?"

Rich has spoken about it another time, in an interview with Fogus:

http://www.codequarterly.com/2011/rich-hickey/

Hickey: I never spoke out ‘against’ TDD. What I have said is, life is short and there are only a finite number of hours in a day. So, we have to make choices about how we spend our time. If we spend it writing tests, that is time we are not spending doing something else. Each of us needs to assess how best to spend our time in order to maximize our results, both in quantity and quality. If people think that spending fifty percent of their time writing tests maximizes their results—okay for them. I’m sure that’s not true for me—I’d rather spend that time thinking about my problem. I’m certain that, for me, this produces better solutions, with fewer defects, than any other use of my time. A bad design with a complete test suite is still a bad design.

He said something along similar lines about development on the CLR:

Fogus: Clojure was once in parallel development on both the JVM and the CLR, why did you eventually decide to focus in on the former?

Hickey: I got tired of doing everything twice, and wanted instead to do twice as much.

His explanation on both fronts boils down to this: he doesn't find it (TDD and CLR/JVM parallel development) a worthy investment of time, given there are only so many hours in a day.

I don't understand why TDD advocates get all worked up when someone says TDD doesn't work for them. Well, if TDD is the silver bullet of software development, they should be delighted that the ignorant singletons fail to see it and that they have an edge over the fools.

These reactions remind me of this:

“You are never dedicated to something you have complete confidence in. (No one is fanatically shouting that the sun is going to rise tomorrow. They know it's going to rise tomorrow.) When people are fanatically dedicated to political or religious faiths or any other kinds of dogmas or goals, it's always because these dogmas or goals are in doubt.” ― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values


It looks like irahul was hell banned yesterday for the following comment; that's why his comments show up as dead:

http://news.ycombinator.com/item?id=3059454


(karma sink, see above)


> the zealotry of some of its evangelists who insist that TDD is some kind of moral imperative and is the only "correct" way of developing software and anyone who doesn't follow that path or make respectful obeisance to it is "unprofessional","dodgy" etc.

Well, it is (the only correct way of developing software). If you're writing software without a test, even if only a mental one, you're just goofing off. And if it is mental, write it down and let the computer check it for you.

It's like not writing specs. (Not necessarily the executable type.) If you don't know what you're trying to do how do you know when you're done?

As for what people who confuse specs with acceptance tests think, I wouldn't judge a product/methodology by those who don't use it well.


I don't get why the fact that TDD drives good design is a bad thing. When you're not testing your code, it's easy to start taking shortcuts and coupling modules that should have been kept separated. TDD forces you to think about "modular design", which is just great. Of course, if you're trying lots of solutions to find "the right one", don't write tests. Just write hacky prototypes as fast as you can.

  "(b)in conflating TDD with one or more of (1) testing, (2) automated testing (3) automated regression test suites (4) developers adding more tests to the automated regression test suite as they develop more features, refactor, debug etc.
You can have (1) to (4) without either (5)writing tests first (aka "don't write a line..."

I get here that you're being sarcastic; but there's a difference between using TDD effectively to get the job done and using it "because it's cool" or "because people say it's nice". You don't have to write tests for every trivial thing, or write tests for your tests of your tests. A lot of the time, writing high level tests against the interfaces is a good time/quality trade-off.. there's nothing wrong with that.

About (c), I do agree. However, have you ever jumped into a project without tests? With tests? Which one do you prefer? Tests don't prove that the library is working perfectly; they only prove that what's tested is working. Personally, jumping into a new project that has tests gives me the confidence to change things and know I won't break anything. Of course, that's not always the case but I think you know what I mean.

And about Hickey.. I haven't heard the speech and don't know the context. But changing stuff in your own language is one thing; starting to change stuff in someone else's is different.


Just because you're not doing TDD doesn't mean you're not testing your code. I don't need to be forced to think about modular code, because I (try to) think that way voluntarily.

One of the most frustrating things (to me) about BDD/TDD is that I write a test knowing it will fail (a good thing), then I implement just enough to pass that test. Even though I know that I'll be rewriting that code again very soon to implement just enough more to pass some other test. It's needless context switching when I have a general idea of how I want the code structured before I start writing anyway.

With more experience, you should learn to avoid many of the pitfalls of not writing/designing modularly enough in the first place and at that point writing your tests first becomes a productivity crutch.


You forgot a critical step. Refactoring. Refactoring is a critical part of TDD, one which many people seem to forget.

> It's needless context switching when I have a general idea of how I want the code structured before I start writing anyway.

That's good. You are allowed to have that. In fact, your first test is implementing initial parts of that code. What TDD does is force you to document that in a formal manner. It also highlights where things become cumbersome. If it's hard to write a test, your code is probably far too complex for what it's trying to do.

Finally, knowing what you want is different from having what you need. TDD gives you the opportunity to focus on getting to a finalized code base faster by not having to implement things that are unnecessary.

But really, when you describe TDD as write a failing test, then write code to pass the test, and leave it at that, you've left out an essential piece of the process that is just as important as writing the test and writing the code. It's akin to me describing TDD as writing code and then refactoring, and leaving out testing.


Unit tests are not formal documentation. Nor are they generally adequate documentation. At best they will explain what the code does, but never how or why.


No, it isn't formal documentation. However, unlike other forms of documentation, it can be proven to be correct. If you want to know how the element being tested is to be used, unit tests are a great way of discovering that. It also tells you what it's supposed to do. Finally, it can tell you all this quickly and efficiently.

So, essentially, it gives you what the code does, as well as how to do it using the code. It doesn't tell you how it does this beneath the API, or why it does what it does, but it doesn't have to.


Your unit test beliefs are giving you a heavy bias.

Unit tests cannot be proven to be correct (not without some other actual formal proof). Running a suite of unit tests proves nothing except that the tests pass. The unit tests can be buggy. The code they test can also still be buggy. Indeed the code is buggy if it's nontrivial, unless you are asserting that unit tests end the very existence of bugs. Unit tests add more confidence about the state of the code, especially with respect to regressions, but they do not prove anything.

As for unit tests telling you how to use the code, I suppose they do to some extent. Your code should probably be clear enough without this, though. If I have to read your unit tests to know how to use your class, then you have failed at writing self-documenting code, and you've also failed at writing API documentation.

I would also say that the how and why are often extremely important. Anyone maintaining your code (i.e. Anyone who cares about your unit tests) needs to understand the how. Anyone using your code probably needs to understand the why. If your calendar unit tests indicate that certain days have 25 hours, but fail to explain that these are due to daylight savings time, that's a pretty important missing why.


> Running a suite of unit tests proves nothing except that the tests pass.

Which, when compared to documentation, is light years ahead in terms of proof.

> Your code should probably be clear enough without this, though. If I have to read your unit tests to know how to use your class, then you have failed at writing self-documenting code, and you've also failed at writing API documentation.

sigh

Well, apparently, if my API is clean enough, then you shouldn't need API documentation. Right?

Regardless, a clean API can be self documenting, but having tests demonstrating all the forms and intents of the classes can help with precisely what to do. As for code being clear: What does code have to do with an API? The whole point is to avoid actually having to look at the implementation of the API.

> I would also say that the how and why are often extremely important. Anyone maintaining your code (i.e. Anyone who cares about your unit tests) needs to understand the how. Anyone using your code probably needs to understand the why. If your calendar unit tests indicate that certain days have 25 hours, but fail to explain that these are due to daylight savings time, that's a pretty important missing why.

These are two different issues entirely. One has nothing to do with the other. Regardless, documenting 25 hours doesn't change the fact that changing it requires testing. It's as simple as that.

You seem to be playing straw man with your first argument, you're just confused with your second, and targeting something that has nothing to do with what we are discussing.

It's like me bashing git because it doesn't compile your code.


> Which, when compared to documentation, is light years ahead in terms or proof.

In three replies you've gone from calling unit tests formal documentation to saying that they are not documentation at all. The fact that you don't see the problem with this is evidence of the bias I mentioned.

> Well, apparently, if my API is clean enough, then you shouldn't need API documentation. Right?

Sure. In an ideal world, the API would be so obvious that further documentation is redundant. That's often not realistic, though, so we have additional documentation to shore up the API. (In modern code, the bulk of this is actually in comments inside the API interface.)

> Regardless, a clean API can be self documenting, but having tests demonstrating all the forms and intents of the classes can help with precisely what to do. As for code being clear: What does code have to do with an API? The whole point is to avoid actually having to look at the implementation of the API.

So instead of looking at the API itself, it's more appropriate to look at unit tests? It's better to look at 'testWiggleWidgetThrowsOnNull()' than to just look at the API and see that 'wiggleWidget()' throws on null? Or better yet, look at the documentation that should state that null is not a valid argument?

Unit tests are not good documentation. Good documentation is intended for humans to read. Unit tests are intended to test code, not to convey information. They generally contain a ton of noise in the form of boilerplate and mock objects.

> These are two different issues entirely. One has nothing to do with the other. Regardless, documenting 25 hours doesn't change the fact that changing it requires testing. It's as simple as that.

Of course. I've not argued against testing. I don't believe unit tests are always appropriate, though, and I think the argument that they are documentation is really weak.

> You seem to be playing straw man with your first argument, you're just confused with your second, and targeting something that has nothing to do with what we are discussing.

I'm not sure exactly what you're referring to as my "first" and "second" argument. It's absolutely not a strawman to say that unit tests don't prove code correct. It's a fallacious claim on your part to say that they do. A unit test passing only proves that the code does what the unit test expects. It doesn't prove that the unit test expects the right thing, nor does it prove that the code is being tested where it actually matters.

If the second argument is about documentation, I stand by my assertion that unit tests are not good documentation. In fact, I would say that they are terrible documentation, for the reasons I outlined above.

> It's like me bashing git because it doesn't compile your code.

Which would be a pretty reasonable response to someone claiming that git is a great compiler.


> I don't get why the fact that TDD drives good design is a bad thing.

TDD does not in and of itself lead to good design.


That's also not what he said. It drives good design, but it doesn't do it alone. You are, in fact, saying the same thing in different ways.


> It drives good design

My point is that it does not necessarily drive good design at all. TDD can lead to crappy code just as easily as any other methodology, it is not a silver bullet that will 'in concert with other stuff' always lead to good design.

It may lead to software that performs correctly but that is not the same thing as well-designed software.

For instance, you might get software that performs terribly, but is still correct.


Actually, I'll disagree. TDD revolves around 3 steps. Unfortunately, people forget the 3rd, which is refactoring. By its very nature, the 3-step process leads to good code. The problem is when people start to skip that ever important 3rd step.

Countless times I've seen so called TDD evolve into merely writing a test, and then writing the code to pass the test, and then ignoring the 3rd step of the process.

Regardless, you are misrepresenting what is being said:

> it is not a silver bullet that will 'in concert with other stuff' always lead to good design.

And no one is suggesting that. You are inferring it. You are being religious in your dislike of TDD zealotry to the point that you are assuming that suggesting that TDD drives good design means using TDD "with other stuff" always leads to good design.

Rather, "TDD drives good design" means simply that by properly practicing TDD, you are more likely to end up with a good design, precisely because of how TDD works. That doesn't mean you can't sabotage yourself along the way. However, what I've found is that bad design from TDD is generally more difficult to achieve. If you are finding tests difficult to write, you're generally going to find your design is bad. I see this all the time, from programmers who are really smart. We'd prefer to believe in our own brilliance than admit we were proven wrong by a mere methodology.

Side note: There is a lot of zealotry in this thread. It's mostly from those against TDD (you'll notice they use the word religious a lot). I'll admit that some can make the troubling leap that TDD is a silver bullet. But if your instinct is to argue against TDD by claiming it's not a silver bullet, you're not any better. More importantly, why does TDD need to be a silver bullet for it to be worthwhile? If there is no silver bullet, is no methodology worth practicing?


You use a lot of expensive words, zealotry, religion and so on. I'd suggest a bit more moderation lest you come across as a zealot yourself.


The words were used precisely because of the topic and the word choices being made by others. They might be expensive, but I can afford it.


No one at my workplace, including me, understands unit testing or TDD. I was recently asked to add a test suite to a service I wrote that is basically a simple wrapper that returns the result of a stored procedure call. The only test I could think of was to call it with all null parameters, in which case there should be no output. Other than that, the results depend on the state of the database. I'm familiar with the concept of dependency injection, but I couldn't add it to this very simple service in good conscience, since I knew that adding the necessary complexity would only increase the likelihood of a defect.


As shadowfiend notes, fixtures are a common approach here.

However, you could also use a mock. What you're really testing here isn't the connection to the database or even that the database contains certain data, so you can rule that out of the equation. What you're testing is that some service (let's just say a 'method' for simplicity's sake) turns an input into an output in some particular way.

What you do, then, is mock the database connection for a particular test case so it returns a guaranteed result to whatever's doing the request in your method. You can then test that the method converts that input into the correct result. You've now unit tested the method rather than the entire service (in a nutshell - it can be more complex than that).


I don't get it. Why would one mock the database connection when the thing to be tested here is a stored procedure?


It's hard to tell without asking the OP for specifics, but..

"I was recently asked to add a test suite to a service I wrote that is basically a simple wrapper that returns the result of a stored procedure call."

I interpreted this as meaning that the unit test would be for the 'service' and whatever it does with the stored procedure's result (and the arguments passed in the first place) rather than on the stored procedure itself. The reason for this interpretation was how he considered passing null values in order to exact a null response to be an acceptable test. Such a 'wrapper' might be a presenter system or simply convert data from the result to a different form, these things could be unit tested by mocking what the database returns.

If, however, he meant that the operation of the stored procedure was to be tested, then my previous post was moot.


That's a great question. It wasn't specified to me either, so I don't even know. But even verifying that the stored procedure actually gets called is problematic. In practice, one of the problems that actually occurred with the service is that the service host somehow lost permission to exec the stored procedure. That's a server admin issue.

In my experience it feels like that case is pretty representative of most system defects, in that the majority of them seem to fall outside the space of defects that are feasible to test. I've always assumed I'm doing something wrong, given how many supporters unit testing has.


Just test the stored procedure, not the code connecting to it.


Traditionally, in these cases, you use a test database with sample data (fixtures) to test that the output depends on the db data in the right way.
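A minimal sketch of that approach in Python, with an in-memory SQLite database standing in for the test database (the schema and query are invented for illustration):

    import sqlite3

    def count_active_users(db):
        # hypothetical code under test
        return db.execute("SELECT COUNT(*) FROM users WHERE active = 1").fetchone()[0]

    def test_count_active_users_against_fixture_data():
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE users (name TEXT, active INTEGER)")
        db.executemany("INSERT INTO users VALUES (?, ?)",
                       [("alice", 1), ("bob", 0), ("carol", 1)])  # the fixture
        assert count_active_users(db) == 2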


The author is complaining that when requirements change that his tests need to be updated/rewritten.

Is he joking? I mean if the updated requirement changes the behavior of the code then the test better freaking fail and require the test to be updated, otherwise the test (if it even exists) is terrible.


I very rarely use TDD, but I am a fan of it. First off, absolutes like "always write tests" are for people that are bad at programming but are still employed as programmers and can't be fired. They haven't developed the judgement for when to write a test or when not to, so in the interest of getting some reasonably useful test suite, you say "you must write a test for everything".

Secondly, I don't really agree that these action methods are untestable. Sure, "print hello world" is untestable, because it's so simple that you're not going to fuck it up, and because there's only one execution path that can possibly occur. But most methods are not like this; they need to reject invalid data or state, they need to craft database queries, and so on. In that case, you very well can write good tests for this sort of thing.

Say I have some code that needs to accept an HTTP request that has a "foo" parameter:

    def do_something(self, context, foo=None):
        if foo is None:
            raise context.UserError( 400, 'You must supply foo.' )

        context.get_model('Something').apply_foo_to( user=context.current_user(), foo=foo)
        return context.get_view('Something').render()
This is easy and valuable to test:

    container = DependencyInjectionThingie()
    context = container.get_fake_context( current_user='jrockway' )
    controller = container.get_fake_controller( DoSomethingController )
    something  = container.get_fake_model( Something )

    # ensure that empty foo is rejected
    raises_ok( lambda: controller.do_something( context ), UserError )

    # ensure that something is mutated correctly
    controller.do_something( context, foo='OH HAI' )
    compare( something.applied_foo, '==', ('jrockway', 'OH HAI') )
(Who would have predicted the day where I started writing my HN examples in Python!)

In just a few lines of code, we get a little bit of extra security around our do_something action. We are sure that a UserError is thrown when foo is not provided (which our imaginary framework turns into a friendly error message for the person at the other end of the HTTP connection), and we're sure that the model is mutated correctly when foo is valid. In three lines of code.

I find that people that have the hardest time writing tests have poorly-architected applications that don't lend themselves to easy testing. The key point to remember is: if you don't pass something to an instance's constructor or to a method, don't touch it. Then everything is easy to test, because you can isolate your function (or class) from the rest of the application, and then test only what that function is supposed to do. (In this case, the fact that a UserError exception becomes an error screen is something you test in your framework's tests. Same goes for the fact that view.render() renders a web page; test that in your view tests.)

This style of development is also good for more than just testing. A few months ago, I wrote an application that monitored a network service. Not wanting to rewrite the service or mock it, I pointed my tests against a dev instance of this service. Everything was great until the service blew up on a Friday night and nobody was around to fix it. Faced with not being able to write any more code until Monday morning, I knew I had to fake that service somehow. 20 minutes later, I had a class with the same API as my "connect to that service" class. I changed one line of code in my dependency injection container (to create an instance of my connection-to-fake-in-memory-server instead of connection-to-networked-dev-server), and then I was back in business. That's the beauty of writing code to be flexible: you don't have to get everything right on the first day.

(People will argue that tests should never depend on external services, because they can blow up and then you're fucked. Yes they can, and yes you are! But while I didn't do everything right on the first day, my design allowed me to recover from this mistake without any code changes. And now I just run the test suite twice before a release; once against the fake connection and once against the real server, just to make sure that whatever assumptions I made in the mock server also hold when connected to a real server. I like releasing code that I know works in real life in addition to my fantasy mock world, but that's just me, I guess.)

Edit: and oh yeah, it's easy to mock databases and HTTP requests. We've seen the second one already; you let your framework translate between HTTP and method calls, and you write the tests for that when you write your framework. This frees up your application developers to Not Care about that sort of thing, allowing them to write great tests with minimal effort. The first one is also easy. You write code like:

    class UserModel(Model):
        def __init__(self, database_schema): ...
        def last_login(self, user):
            return self.database_schema.get_resultset('User', user=user).related('LoginTime').order_by('time DESC').get_column('time').first()
Then when you're testing your controller, you pass in a fake UserModel that just defines last_login as something like:

    class FakeUserModel(Model):
        def __init__(self, database_schema): pass # don't care
        @memoize
        def last_login(self, user): return datetime.now()
The code to ensure that last_login generates the right sequence of operations on your ORM is somewhere else. The test that your ORM generates the right SQL AST is somewhere else. And the test that tests that ASTs are converted to correct SQLite SQL is somewhere else. You already wrote and tested that code. Assume it works!!!

Yes, sometimes you will write a few end-to-end tests to ensure that when an HTTP request that looks like foo arrives on the socket, you write a HTTP response that looks like bar to that socket and your database's baz table now contains a record for gorch. But that's not how you test every single tiny thing your application does; it takes too long, it's hard to get right, and it buys you nearly nothing.

So I guess I add: testing is hard if you write your tests wrong.


> I find that people that have the hardest time writing tests have poorly-architected applications that don't lend themselves to easy testing.

It's generally good to start with some framework which provides capabilities to easily mock most of the objects. You sure can architect your application that way, and have your dependency_injection_thingie mock objects, but I don't think it's a worthy investment of time.

> (Who would have predicted the day where I started writing my HN examples in Python!)

So why is that? Working on a Python application? While you are there, check out decorators and co-routines/generators. You already have checked out decorators (which are basically function composition - same in Perl other than the syntactic sugar) - I see your @memoize example.

Decorators along with Python introspection are super cool. I used decorators to implement a small contract system - https://github.com/thoughtnirvana/augment

I recommend this talk on co-routines http://www.dabeaz.com/coroutines/

EDIT: Perl has coro. But the language integration (generator expressions, convenient yield) makes it a bit more natural in Python (YMMV). And Python has gevent if you are looking for a threading equivalent.


> Sure, "print hello world" is untestable

Redirect stdout to a file and diff against a file containing "hello world" :-)
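Which, for what it's worth, is only a few lines in Python (capturing into a buffer instead of a file, but the same idea):

    import io
    from contextlib import redirect_stdout

    def test_hello_world():
        buf = io.StringIO()
        with redirect_stdout(buf):
            print("hello world")
        assert buf.getvalue() == "hello world\n"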


Unit testing is not the only form of testing.


Which, of course, is part of the point of the article (“I'm not saying TDD is a bad thing, but there are more tests than unit tests, and there are more ways to verify software than testing.“)


I think there's some confusion in the definitions which makes things a bit muddier.

TDD is traditionally based around unit tests, but nowadays can pragmatically include integration and functional testing practices. The former definition seems to be the one accepted by the article, but the latter definition somewhat solves most of the problems raised.


That's absolutely true. Are there any good articles/posts out there on doing test-first with something bigger like integration tests?


I don't have any mindblowing articles to hand but this might be a start: http://programmers.stackexchange.com/questions/99735/tdd-is-...

More specifically, the London school of TDD encourages thinking about things from an integration testing level, although you quickly progress to doing unit testing with a ton'o'mocks to flesh out the missing parts.


Check out this thread and the associated article:

http://news.ycombinator.com/item?id=1842582


I don't know, it's not that hard.

Write acceptance tests (with Cucumber or something), use a good unit test helper like Shoulda for your standard unit tests, and write unit tests for any complicated methods.

The acceptance tests will cover pretty much everything, and the unit tests will cover anything that's hard to test from a high-level point of view.


This is a difficult topic. I've found that adhering to TDD is just not realistic in some cases.

For example, I'm following the Steve Blank approach of Customer Discovery, building my MVP. If I take the TDD approach (which takes more time to code a finished piece of functionality), and successfully iterate enough times during the Customer Discovery process, I throw out all of the tests that I built and am now moving on to something different that I've discovered a customer is actually willing to pay for.


IMHO, there are two points here: 1. Unit testing is not TDD. 2. TDD has nothing to do with software testing.

TDD is good for developing rapidly changing codebases with short development cycles, known requirements, and harsh time pressure, but it's also front-loaded.

By which I mean, you have to invest some hours beforehand to make sure you're not shipping crap because you didn't have time to see what would break, or to make sure you didn't skip the controller your GUI developer needed.

TDD tries to ensure one thing only: "Awareness". You accept the initial costs of TDD if awareness is a big issue for you. Otherwise, use something else which serves you to solve it.

If you're aware of what you will be implementing and if there's a mechanism you can check your code against, it'd be efficient, right? TDD uses unit tests for documenting that "Awareness", aiming to utilize the benefits of unit testing as well.

You're right that it is not feasible to try to test every aspect of your solution via unit tests, and that's why there are integration tests, system tests and acceptance tests. So if you're planning to find defects via the unit tests you're writing for your TDD cycle, think again. Unit tests are good for checking "completeness" and a great tool for helping with regressions. Nothing more and nothing less.

As a good engineering practice, we adapt the way we work based on how we planned to work. If we find TDD fruitful to implement, we avoid putting any logic in controllers. We implement our business logic behind the controller, which also cleans it of implementation-specific crap. Besides, using a completely detached GUI layer helps us write controller unit tests that run via HTTP.

We simplified our working environment and made it suitable for TDD, in ways we found gave us more capability. If we're not going to use TDD, we employ other logic and structures that fit well with the method we'd use instead.

Long story short, TDD gives you what TDD is intended for, as long as you cooperate. If you expect more than it can provide, you'll be disappointed. Regardless of your development method, you have to make sure your development/architecture models and your tool set comply with your method of choice. The rest lies in the question "What do you need your development model to solve for you?"

All the best


"TDD is nearly useless when your code is the opposite of what I've described above: specific and mostly trivial, with complexity coming from the sheer amount of methods and their interactions. "

In other words, badly factored code is hard to test and maintain. It's just that with TDD it's more obvious why badly factored code is bad.


More like wiring or integration code. Not everything is a framework. At some point you have to write code that works with and inside of the framework.


Using TDD everywhere is stupid. I think, tests are most useful between layers of abstraction, especially if you are the provider of abstraction. If you are merely a user of abstraction, it is stupid to unit test that code.

If you have an interface to the outside, then that should be unit tested.


To the devs who take the time to write tests: when I inherit your code base, I feel like buying you a beer!

For the devs who don't, no matter how 'clean' the code: FFFFUUUUUU


  > Okay, maybe I'm going abit overboard with fake qotes.
In other words, a straw man?


Or a joke.


Why am I not surprised that there are a bunch of typos in this article?


No tests?


This blog and some of the comments below are just crap. If you haven't TDDed before (on a non-trivial project) you simply don't know what you are talking about, and it's apparent in your ignorant opinions.

Sorry if that's a bit harsh but I'm tired of seeing these posts by people who don't get it.

I don't get quantum physics; that doesn't mean I'm going to write a blog "when quantum physics fails".

If you don't understand something... Learn about it, practice it, then criticize its actual flaws.

Don't just get frustrated at something and claim it's crap, because it's clear to those of us who do understand it that you are talking crap.



