The Failures of “Intro to TDD” (testdouble.com)
100 points by joshtgreenwood on June 18, 2015 | 54 comments



Totally agree about the dissonance with TDD; it always surprised me that it wasn't discussed more when TDD was going through the hard-sell stage.

I guess my core problem is that TDD only drives your design if you write quite granular tests, but those tests then become barriers to further refactoring/redesign. However, my current preferred approach (for anything non-trivial) is different to the author's. Like him, I start out with outer-level acceptance-y tests, but those integration tests are then combined with extensive refactoring to drive the design, so I don't do the dreaming-up of collaborators he does in step 4.

My reason for not thinking of collaborators early is that, although I think that approach has legs, I found it didn't necessarily produce the simplest/most elegant design you could come up with. I ended up encoding my first understanding of how to solve the problem into the interactions, even though I knew that my understanding of the problem at that point was much less than it would be once I dug a bit deeper.

Anyway, with my current approach, if I extract additional classes/whatever I might then test them directly, or use further tests to drive their design further (including test doubles as appropriate). So I'll have one or a few tests of the extracted behavior at the outer level, and maybe more thorough testing at the lower level, because quite often testing all the combinations and edge cases is easier at this level.
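
To make that concrete, the shape I end up with looks roughly like this (Java/JUnit sketch, all names made up for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // One or two tests at the outer level pin the extracted behavior down
    // through the public entry point...
    class CheckoutTest {
        @Test
        void appliesAPercentageDiscountCode() {
            assertEquals(9000, new Checkout().totalInCents(10000, "SAVE10"));
        }
    }

    // ...while the class that fell out of the refactoring gets the thorough
    // edge-case coverage, because the combinations are cheaper to enumerate here.
    class DiscountCalculatorTest {
        @Test
        void zeroPercentLeavesTheTotalUntouched() {
            assertEquals(10000, new DiscountCalculator().apply(10000, 0));
        }

        @Test
        void aFullDiscountBringsTheTotalToZero() {
            assertEquals(0, new DiscountCalculator().apply(10000, 100));
        }
    }

    // Toy production code, just so the sketch hangs together.
    class Checkout {
        int totalInCents(int baseInCents, String code) {
            int percentOff = "SAVE10".equals(code) ? 10 : 0;
            return new DiscountCalculator().apply(baseInCents, percentOff);
        }
    }

    class DiscountCalculator {
        int apply(int totalInCents, int percentOff) {
            return totalInCents - (totalInCents * percentOff / 100);
        }
    }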


I've found the author's proposed approach at the end of the article to be a very useful one, even if you're not using TDD. A very regular piece of advice I give to new developers is "breadth-first, not depth-first" - i.e. write a whole function at a consistent level of abstraction before you dive into writing the other classes etc. you need to support those few lines of high-level code.

I find that most new developers' instinct is to do exactly the opposite - i.e. write the first line of their main function, then realise it needs, say, an argument-parsing class, so start writing that, then realise that needs a logging class, so start writing that, etc. This means you have to keep much more stuff in your head at once, you end up writing much more code in each commit than you should, and the distinction between bits of code at different levels of abstraction often ends up much blurrier, which leads to messier design.
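
To make the distinction concrete, a breadth-first first pass might look something like this (Java, names invented for illustration):

    // Breadth-first: the whole top-level function is written at one consistent level
    // of abstraction first; the supporting pieces start life as stubs and get filled
    // in (and tested) one at a time afterwards. Depth-first would have dived straight
    // into fully implementing argument parsing before this function even existed.
    public class ReportTool {
        public static void main(String[] args) {
            Options options = parseArguments(args);
            Report report = buildReport(options);
            writeReport(report, options.outputPath);
        }

        static Options parseArguments(String[] args) {
            return new Options(args.length > 0 ? args[0] : "report.txt");  // stub
        }

        static Report buildReport(Options options) {
            return new Report("TODO: real content");  // stub
        }

        static void writeReport(Report report, String path) {
            System.out.println(path + ": " + report.body);  // stub
        }
    }

    class Options { final String outputPath; Options(String p) { outputPath = p; } }
    class Report  { final String body;       Report(String b)  { body = b; } }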

This seems to be quite easily countered, though - just pointing out to people the difference between breadth-first and depth-first styles can have quite an immediate effect.


Is "top down" vs. "bottom up" programming/design really not a set of terms and concepts that is still taught in nearly every introductory programming course such that people mentoring "new developers" need to introduce the concepts and invent new terminology for them?


It definitely is. The problem, I think, is that people forget these fundamental concepts when they're shoved into a giant legacy code base that clearly didn't follow that process. They seem to get overwhelmed by the spaghetti and forget their entire undergrad curriculum.

As a side note, I find it helpful to think of what the high level business objects will be at the beginning, but build a bunch of utility functions from the bottom up, working towards a DSL of sorts at the various abstraction levels. As you explore the solution, the end location of those functions usually becomes abundantly clear.
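
Something like this is what I mean, in miniature (Java sketch, names entirely made up):

    // Bottom-up: the small utility functions come into existence first...
    final class Money {
        static long percentOf(long cents, int percent) { return cents * percent / 100; }
        static long clampToZero(long cents)            { return Math.max(0, cents); }
    }

    // ...and the business-level vocabulary emerges on top of them once it's clear
    // where each piece actually belongs.
    final class Invoicing {
        static long lateFeeInCents(long outstandingCents) {
            return Money.clampToZero(Money.percentOf(outstandingCents, 5));
        }
    }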


> As a side note, I find it helpful to think of what the high level business objects will be at the beginning, but build a bunch of utility functions from the bottom up

Yeah, I think this is a reasonable approach, especially as, if you do end up completing this and get to a design you are happy with, it will probably be very different from (and hopefully infinitely superior to) your original high-level design.

You've probably already heard of it, but the Mikado Method can also be useful in some of these situations.


I had not heard of this before. I will have to investigate. I found an InfoQ[0] talk on it that I will watch after work.

[0] http://www.infoq.com/interviews/mikado-method-restrurcture-s...


The "depth-first" approach (I've seen it as well) is a mistake that is largely orthogonal to a top-down/bottom-up style. It looks top-down, but immediately goes off into the weeds and can easily be recognized by statements of the form, "I need an X, so I'm building a Y." As in, "I need a reporting application, so I'm building a logging module", or the '90s Japanese 5th generation computing project, "We need intelligent, communicating systems, so we'll build a Prolog machine."


The problem is students don't listen. The majority of people see course work as a necessary evil, to be gotten through, and forgotten after passing the final exam.


"Students don't listen" is never the problem. "Teachers don't teach" is more like it.


No, students not listening can often be the problem. You can be the greatest teacher in the world, and still there will be a few students who don't want to listen.


But you can't be the greatest teacher in the world.


This article sums up the problems I've seen with TDD first hand. I've never seen a team end up with a better design than they would have using the old maxim: "Think hard. Then code." I'm sure they exist, but adopting TDD does not automatically lead to better design, that much seems certain.

Another issue with TDD is that it is incredibly difficult to reason about. In many groups there is at least one developer sold on the concept, and when I voice my concerns with the methodology they immediately demand an explanation of why I "hate testing", or alternatively preach about all the goodness of unit testing. Which of course isn't the issue at all. Many developers seem to confuse TDD and unit tests, and the supposed benefits of the former are very hard to quantify.


Sadly, usually tests are written after the fact to satisfy a code coverage requirement. Perhaps that's why people hate testing. That's why I did!

At my startup we're indirectly tackling difficulties associated with unit testing. Our tool Alive [0] is an interactive programming extension for Visual Studio that made me really enjoy writing tests first and then implementing the features, since with each keystroke I get to see what the code does.

[0] http://comealive.io/


TDD is hard. You need to work up a skill, and you probably need someone experienced to guide you at first.


Unfortunately, it stays hard no matter how much deliberate practice you sink into it. I didn't understand what DHH was saying about design damage inflicted by testing until I started seriously confronting questions like "which parts of this API should I mock and which parts should I just use integration tests for?"

Eventually you spend more time thinking about testing than you do actually getting shit done. You have to, because otherwise you find yourself rewriting tests every time you refactor. You rationalize this time under the guise of "it's making me think more clearly about my design." Once it starts wearing thin, disillusionment takes root. The first step towards my own enlightenment was when I realized that I needed tests to help me ensure that my test framework was working. What's testing those tests? Tests are code, and code needs to be tested. Where does it end?

I wrote my own toy test framework as an exercise. I was trying to *really* wrap my head around meta-programming, so I meta-programmed my test suite to test the heavily meta-programmed data classes. It solidified into a grotesque mush and I scrapped the whole thing. Now I'm solving the original problem with boring old Rails and sanity has been restored.

Now when I start a project I do it knowing that my code is going to suck and I'm going to refactor it over time. The truth is, when there are no tests I only have to refactor one code base and not two. I don't need to learn two frameworks. I don't need to understand two domains. The amount of time I've spent maintaining code has sharply diminished after I stopped being so religious about testing. If I don't know what it's doing, the REPL is my best friend. Backtraces rule.

If your code is Serious Business, like, say, SQLite which is used everywhere, a robust test suite is a very nice tool to have and maintain. For everyone else, it's another step on the road to mastery.

Also if you're using a dangerously unsafe language like C, tests can alert you to brewing problems. If you're using a safe language solving not-so-hard problems, a test framework is just adding complexity to paper over your lack of experience.


Methodologies aren't a substitute for creativity or intelligence.

At best they can give a person with good judgement a different angle to look at their problems with.

At worst they are thought-stopping slogans that turn a gullible novice into a crippled novice.


If only management understood this. Actually, I think good managers do understand this, and do it anyway, because if you take the general case and look at it from above, methodologies do improve upon chaos.


As a manager, the trick is to find the sweet spot between chaos and blind adherence to rules. The uncomfortable truth is that there isn't one sweet spot for all cases.

I've run teams where a very agile approach made sense (usually where the devs and users were small in number and very close) and others where a more formal, phased approach made sense (usually where we needed co-ordination across companies).

The simple fact is that trying to run these types of programme the same way is an exercise in futility. That's not to say that one can't extract common practices that make projects generally better e.g. it's generally preferable to get code into the wild sooner rather than later if you can do it safely, it's just that the "one true path" idea is a marketing concept not an engineering one.


[deleted]


> Everything I've seen seems so "In my experience..." as opposed to a formal proof.

The scientific method is ill-equipped to formally prove a generalized theory of software development. There are too many significant variables at play; too many equations to solve. We are left with little but "In my experience..." to guide us (as well as local experimentation, where science can actually be of some help).


Yup. TDD, object orientation, agile, functional programming and whatever else are all good ideas. In some cases they work, in others not.

But it seems people have a tendency to make them into ideologies/religions that when applied correctly will solve everything. I guess it's a way to exercise power over people. No individual thought allowed.


Hexagonal architectures combined with a command processing pipeline help in defining test boundaries and entry points. Those entry points are your public interfaces; everything beneath shouldn't need any tests. The entirety of the core application behaviour is testable in isolation by design. You basically end up writing mini acceptance tests for your domain logic.

I think that's a preferable approach for complex systems: the "redundant tests" issue is not a problem, and end-to-end tests are reduced to just a few proving everything is hooked up.
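
A minimal sketch of the shape I'm describing (Java, every name invented; the repository is the port, the handler is the public entry point, and an in-memory fake stands in for infrastructure):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;
    import java.util.HashMap;
    import java.util.Map;

    record Invoice(String id, long balanceCents) {}
    record ApplyPayment(String invoiceId, long amountCents) {}

    // Port: the only infrastructure-facing boundary the core depends on.
    interface InvoiceRepository {
        Invoice byId(String id);
        void save(Invoice invoice);
    }

    // Public entry point of the core: commands come in here, and nothing beneath
    // it needs tests of its own because it's all reachable through this boundary.
    final class ApplyPaymentHandler {
        private final InvoiceRepository invoices;
        ApplyPaymentHandler(InvoiceRepository invoices) { this.invoices = invoices; }

        void handle(ApplyPayment command) {
            Invoice invoice = invoices.byId(command.invoiceId());
            invoices.save(new Invoice(invoice.id(), invoice.balanceCents() - command.amountCents()));
        }
    }

    // The "mini acceptance test" for the domain logic, in isolation.
    class ApplyPaymentHandlerTest {
        @Test
        void reducesTheInvoiceBalance() {
            InMemoryInvoices invoices = new InMemoryInvoices(new Invoice("inv-1", 10_000));
            new ApplyPaymentHandler(invoices).handle(new ApplyPayment("inv-1", 2_500));
            assertEquals(7_500, invoices.byId("inv-1").balanceCents());
        }
    }

    // In-memory adapter used only by the tests.
    class InMemoryInvoices implements InvoiceRepository {
        private final Map<String, Invoice> store = new HashMap<>();
        InMemoryInvoices(Invoice... seed) { for (Invoice i : seed) save(i); }
        public Invoice byId(String id)    { return store.get(id); }
        public void save(Invoice invoice) { store.put(invoice.id(), invoice); }
    }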


His steps 4 through 6 seem to completely violate the principle of YAGNI. It also seems way too much like big-bang releasing. You write all this code for a very long time and don't have any way to objectively say the design is correct (in that it actually performs the business case from start to finish) until the very end.

I've designed systems in this way before (strangely, they always seem to be some sort of document validation), and they always turn into complete hairballs of code that eventually gets thrown away and something much simpler substituted.

Now, I just stick to DRY, YAGNI, and "Compression-Driven Development" http://mollyrocket.com/casey/stream_0019.html


That's why I thought this article was honestly the case against TDD. Because the upfront "this method has major problems" stuff rang very true, but then the "this is the correct way to do it" made me wince hard, with its Java-esque upfront complexity.


I like to always start with YAGNI, and then try some of these exercises after I already have something that kind of works - it's much easier IMHO to refactor (i.e. add to) something that is simple and gets the job done than something that is fraught with way too many abstractions and such.


I wish I had time to write a decent post. Lacking time, I'll try to be concise. If you want to really understand the benefits of TDD, the best book I know on the subject is actually Michael Feathers's "Working Effectively with Legacy Code". It is rather dense and getting a bit dated, but by showing you what you need to do to improve legacy code in a methodical fashion, he provides the basis for learning how to do good TDD.


It's a good book, but from my memory it doesn't really address solutions to what's being discussed in the post, and I'm presuming the author is familiar with the book because he discusses characterization tests.


I can't detect any difference between his proposed approach and Harlan Mills's Top Down System Development. Basically, he's going to take this 'traditional' team and introduce them to the hot new development methodology of 1970, all while showing no awareness that he has done so. What's that saying about people who don't understand history?

At least they won't have to change the acronym.


BDD got it right. Write your tests to prove the features are working, not to prove that the code written to implement the features does what it does.
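
i.e. a feature-level test shaped roughly like this (Java/JUnit sketch, all names hypothetical), asserting on the observable outcome rather than on which internal collaborators got called:

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;
    import java.util.HashMap;
    import java.util.Map;

    class OverdueInvoiceFeatureTest {
        @Test
        void payingAnInvoiceInFullMarksItAsSettled() {
            // Given / when / then against the feature's behaviour, not its internals.
            Billing billing = new Billing();
            billing.raiseInvoice("inv-1", 5000);
            billing.pay("inv-1", 5000);
            assertTrue(billing.isSettled("inv-1"));
        }
    }

    // Toy implementation so the sketch stands on its own.
    class Billing {
        private final Map<String, Long> balances = new HashMap<>();
        void raiseInvoice(String id, long cents) { balances.put(id, cents); }
        void pay(String id, long cents)          { balances.merge(id, -cents, Long::sum); }
        boolean isSettled(String id)             { return balances.getOrDefault(id, 0L) == 0L; }
    }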


I've found the way TDD has been "sold" to me as quite odd, and probably what's turned me off from it the most (cure of all ills). At the same time, I find the religious pushes for and against to be both understandable and bizarre. I've been thinking about this for some time now, but don't think I've ever really written it down.

As a very general statement, I'd say any developer should try their hardest to cover the basic tenets of TDD while working. Let me back that up, since I'm sure that's probably rubbed people up the wrong way.

1. I know what the system is currently doing

2. I know what the system should be doing

3. These things are different, and I know how to check

4. I change the system

5. I check that the system is now doing what it should be doing

That all seems pretty uncontroversial, I hope.

1. If you don't know what the system is doing now, you can't tell if there's anything to do at all.

2. If you don't know what it should be doing, there's no way of implementing it.

3. If you can't check that it's doing the right thing (even manually) then you're fairly screwed (although we've all ended up in a case where the "check" is "ask the customer afterwards if it's fixed").

4. You obviously need to actually do something to change the system.

5. Finally you should check your work actually, well, works.

Having an automated check for 3 seems like a fairly good idea, as long as it's not too onerous. Then suggesting that it's implemented and run before 4 makes sense: it's unlikely to be much harder to do it before rather than afterwards, and it gives me some confidence that the test checks what I think it does.
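
In code terms, the automated version of 3-before-4 is nothing fancier than this (Java/JUnit, hypothetical example): write the check, watch it fail against the current behaviour, then change the system and run the same check again.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class SlugTest {
        // Step 3: "what it should be doing", written down as an executable check.
        // Running it before making the change (and seeing it fail) is what gives
        // confidence that the test checks what I think it does.
        @Test
        void lowercasesAndReplacesSpacesWithDashes() {
            assertEquals("hello-world", Slug.of("Hello World"));
        }
    }

    // Steps 4 and 5: change the system, then re-run the same check.
    final class Slug {
        static String of(String title) {
            return title.trim().toLowerCase().replace(' ', '-');
        }
    }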

Personally, while I quite like the workflow presented, I'd do most of that on paper / whiteboard / in my head. Take the task and break it down into smaller things it needs to do. Don't rush into creating the right structure, just actually understand what it is this thing will need to do. Then I can consider general designs, where the edges will be, types of failures and errors that are likely to pop up and I can write something (as always, discovering more as it's written and having to change things). It's very useful to then start on the bits I'm less sure about, or the ones that will have the most impact if I have to change things. I think that gets easier the more you code though.


I agree this seems reasonable and sane.

However, there are types of programs which aren't well covered by a test (demo) or set thereof.

Sometimes the program is being written to learn a value which is unknown, or a simplified example succeeding insufficiently tests the validity or scalability of the software.

Tests in those cases may work at a unit level, but still provide insufficient insight into the soundness of the overall results.


The tl;dr of this article - some programmers _really_ need to get exposed to a Systems Engineering course.


Here here to this. I never cease to be surprised by the number of people who don't realize that engineering is the practice of breaking big and confusing things into small things you understand.


It's "hear, hear."


The article ends with a diagram showing a bunch of classes whose names end with "er". In my experience, that's usually a strong indicator of confusing design. Objects that have a single method and no real state should just be functions.
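
In Java the nearest thing to "just a function" is a static method on a namespace-ish class; a hypothetical sketch of what I'd rather see:

    // Instead of a one-method, no-state InvoiceFetcher object, the behaviour lives
    // as a plain static function; the enclosing class exists only as a namespace.
    final class Invoices {
        private Invoices() {}

        static Invoice fetch(String invoiceId) {
            return new Invoice(invoiceId, 0);  // hypothetical lookup, stubbed out
        }
    }

    record Invoice(String id, long balanceCents) {}

    // Call sites then read as a verb in a namespace rather than Noun-er.doNoun():
    //     Invoice invoice = Invoices.fetch("inv-42");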


Conversely, objects should have a single responsibility [1] and so functions that are unrelated should go on different objects. An InvoiceFetcher is definitely unrelated to a PaymentApplier, unless one takes the argument that "It's about money", in which case you have to argue for a single giant class.

[1] http://c2.com/cgi/wiki?PrinciplesOfObjectOrientedDesign


I dunno, I always thought objects were about managing state. If an object does nothing except provide a method, why can't the method just be a free function instead?

    InvoiceFetcher.FetchInvoice()
    PaymentApplier.ApplyPayment()
just seems terrible.


Because Java. At least I can use an interface to define the function's specification, and then use that in mocks. On the one hand, it seems very different to what I learned about OOP twenty-odd years ago, but on the other, it makes for extremely well-designed and easy-to-understand code.
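
For example (hypothetical names; a lambda stands in here as the test double, though a mocking library does the same job):

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    // A single-method interface is the function's specification...
    interface InvoiceFetcher {
        long outstandingCents(String invoiceId);
    }

    // ...the class under test depends only on that interface...
    final class ReminderService {
        private final InvoiceFetcher invoices;
        ReminderService(InvoiceFetcher invoices) { this.invoices = invoices; }
        boolean needsReminder(String invoiceId) { return invoices.outstandingCents(invoiceId) > 0; }
    }

    // ...so the test can substitute it with a lambda (or a mock) trivially.
    class ReminderServiceTest {
        @Test
        void remindsWhenSomethingIsOutstanding() {
            ReminderService service = new ReminderService(id -> 2_500L);
            assertTrue(service.needsReminder("inv-42"));
        }
    }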



On Java and Java-like languages, objects are also a good option for providing namespaces.


In several languages coughJavacough you can't really have free functions.


Any discussion about TDD requires a link to Rich Hickey's Hammock-Driven Development talk: https://www.youtube.com/watch?v=f84n5oFoZBct

The important thing that TDD or any other software design methodology is trying to accomplish is getting you to think about the design of your programs. However you want to accomplish that is fine as long as you take some time before you open your editor and think before you start writing code. It doesn't necessarily have to be TDD.


Funny thing.

I think that thinking about the design of my programs is easier once I have some amount of working code.

There's a minimum organization needed for starting to code, but every time I decide to think any further when solving a problem I'm not used to (optimally that would be always), I end up optimizing for the wrong problems and have to restructure it all later anyway.


That's a great point! I've certainly had times where spiking a bit of code and then thinking was valuable. There are no maxims in program design so I'm glad you took the time and thought about that :)


Here's a recent post I wrote about reasons for and approaches towards TDD. Generally, I love it. I find an API evolves as you build it, though, so for me it's a bit more about "tests at the same time" and "tests as the primary and first way of exercising the code". Also, tests do better when they test use cases at a higher level, and unit tests (I do agree with the article in some ways here) are often written at a level which penalizes refactoring, when they could instead be automated tests at a higher level.

http://michaeldehaan.net/post/120522567217/the-case-for-test...


A better description of the audience you were teaching to would be super helpful, because how you teach needs to be tailored to the crowd. Making a generic statement like you will never start with TDD anymore doesn't make sense.

To be frank, when you said the audience was 'typical enterprise java developers', it seems that you are already describing a group of developers who are sort of 'stuck' in their ways, who maybe haven't heard of other things like TDD and WOULD necessarily miss the point of a lot of it.

Just sayin - it could be helpful to avoid broad statements like "never start with TDD" that may be more audience specific.


This sounds a lot like "Think first, then code". I like the approach, and have more or less been applying it unknowingly since I read Martin Fowler's Agile book during Uni. Nice article!


I've been doing TDD for nearly 10 years and this article describes my experience in a way I have not been able to articulate myself. Thank you.


I have just come onto a project that has tests up to the armpits, mocked everything, and the test lead wielding an almost evangelical bent across the developers. And the codebase and system are still shit. Why? Because the team thought loads of TDD and BDD equals a great system. Wrong. They can help, when used sparingly, but to make them your design methodology, praying at the church of testing dogma, is insane. So: no tests, bad. Worse: over-testing. When I see fourteen tests to one code unit, I know now to run away.


The real limitation in any system is this:

Can you make the code say what you mean?

TDD is just really efficient at demonstrating your inadequacy at achieving this goal. It's a really uncomfortable experience, and to get comfortable with that feeling takes a certain acceptance of the human condition that reads like something straight out of eastern philosophy.

Tl;dr to err is human. To really fuck up requires the aid of a machine.


A good point. I think the bigger learning exercise for this team in particular would be: do we need this test and why? To make them examine a bit more about what they are trying to achieve and not just succumb to testing by numbers. Ultimately, the large battery of tests passes, but when the system is still borked, the disconnect is great.


Sadly you can't fix bad developers with process.


Nicely said.

Many businesses want to treat developers as a fungible commodity and believe some magical process will enable that. It makes sense financially but I've never seen it actually work.


At least you can be sure that it does what you expect. You can have bad architecture with and without tests, but testing will help you to avoid surprising behaviour, regressions, that kind of stuff.


I hear this argument a lot for TDD, but most of the bugs I get come from subtle ways in which data enters the system unexpectedly.



