
The question is whether the additional time to write a test before each piece of code and to refactor every piece of code for each new test case (assuming you're doing TDD 'properly' and coding only the minimum to pass the existing tests), plus the time still spent debugging (TDD doesn't completely eliminate debugging), is less than the time you would spend debugging if you didn't use TDD.

I could say "the time I spend debugging has dramatically decreased since I began proving each bit of code to be correct mathematically." But that tells me nothing about whether it is actually a better approach.

I suspect that's why you're getting downvoted: the comparison is naive. (Edit: Also responding to 'how do you debug' with 'I don't' probably doesn't help).

My personal anecdote - I don't spend much time debugging. I spend a lot of time thinking, a smaller amount coding, and a relatively small amount debugging. Spending, say, 20% extra time preventing bugs before they happen would not be cost effective for me.




There are plenty of studies at this point with empirical data showing that TDD is the way to go. The most famous is probably the Nagappan study at Microsoft, now eight years old: http://research.microsoft.com/en-us/groups/ese/nagappan_tdd....

tl;dr If you consider a 90% reduction in bugs (and debugging, and unpredictable amounts of post-production debugging time, etc.) worth a 15-35% extra cost in upfront (but more predictable) development time... then you should be doing TDD, full stop.

If you can't figure out how to apply TDD to the problem, then look at the problem again. I/O is a common tripping point. Watch Gary Bernhardt's "Boundaries" talk for ideas: https://www.destroyallsoftware.com/talks/boundaries
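For what it's worth, the "Boundaries" idea boils down to keeping decisions in pure functions and pushing I/O out to a thin shell, so the testable part never touches the filesystem or network. A minimal sketch (the names and CSV format are mine, not from the talk):

```python
# "Functional core, imperative shell": the decision logic is a pure
# function that is trivial to test; only the thin shell does I/O.

def parse_totals(lines):
    """Pure core: sum the second CSV column across non-empty lines."""
    return sum(int(line.split(",")[1]) for line in lines if line.strip())

def report_total(path):
    """Imperative shell: the only part that touches the filesystem."""
    with open(path) as f:
        total = parse_totals(f.readlines())
    print(f"total: {total}")

# The core is testable without any files at all:
assert parse_totals(["a,1", "b,2", ""]) == 3
```

The shell stays so thin that an integration test (or none at all) suffices for it, which is how I/O stops being a tripping point for TDD.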

I repeat... a 90% reduction in bugs. I think that's pretty damn huge. Read the paper, then google "TDD empirical data." Actually, I'll do it for you: https://www.google.com/search?q=TDD+empirical+data

Trust me, I am all about doing things rationally and staying away from faddy things that have no data to back them up, but in this case, this one seems pretty solid.

Personal experience is that the code I write using TDD is better along almost every dimension that factors into what "good code" even is: more modular, fewer dependencies, easier to maintain, easier to debug (when you must), smaller and more focused methods/functions, better modules/classes... These intangible perks fall out like manna if you code with TDD.

It's admittedly a tough habit to adopt, but now that I've been doing it, I would not go back. At all.


Personal experience is that automated testing improved my productivity, while TDD diminished it. But as in any religious war, the criticism is always "you didn't do it right if it didn't work for you."

Thanks for the patronising 'let me google that for you'.

I'm glad you saw the light, I'm content to remain a pagan.

> If you consider a 90% reduction in bugs worth a 15-35% extra cost.

I don't spend more than 31% of my time debugging, so 35% extra for a 90% reduction is nowhere near useful. I rarely spend even 13% of my time debugging any given bit of code. And when I do, I write unit tests as a debugging strategy, so it wouldn't have saved anything to write them beforehand.

If you find yourself spending 30+% of your time debugging new code you write unless you've written tests beforehand, then I respectfully suggest there are more pressing things to worry about in your practice.


I wasn't trying to patronize, I was making it a few seconds more convenient for you in this case (this was NOT a literal "LMGTFY" back-hand!). And I wasn't trying to proselytize. And I know that "what works for me" might not be "what works for you." I just know that individually, and especially on any programming team, this has been a practice that has paid off in spades AND "feels better" as well. YMMV, I guess.

I mean, there's a rational explanation for it: your code produces states. What is a bug? An unexpected state. As long as you can contain and model ALL of those states in your head while working on the code, you can potentially write 100% bug-free code. But as soon as you cannot, the tests will help contain the many possible states your code produces that your brain no longer can.

And unless you are infinitely genius, your code will eventually reach a point where you simply cannot model all of its potential states in your head (this goes many times over on a programming team, where everyone shares bits of the "mass state" the code can produce).
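A toy illustration of "a bug is an unexpected state": each test pins down one state the code can reach, so the full set of states no longer has to live in anyone's head. This is my own hypothetical example, not from the thread:

```python
def running_balance(transactions):
    """Fold a list of +/- amounts into a balance, refusing overdrafts."""
    balance = 0
    for amount in transactions:
        if balance + amount < 0:
            raise ValueError("overdraft")  # the state we must never reach
        balance += amount
    return balance

# Each assertion pins one reachable state, including the edge cases
# that are easiest to lose track of mentally:
assert running_balance([10, -3, 5]) == 12
assert running_balance([]) == 0
try:
    running_balance([5, -10])
except ValueError:
    pass  # the overdraft state is rejected, as specified
```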

Speaking of controlling unpredictable states and therefore bugs, FP often comes into the conversation as well, here's John Carmack on that topic: http://gamasutra.com/view/news/169296/Indepth_Functional_pro...


The advantage of TDD lies largely in the practice of ensuring you have code coverage: that you are regularly exploring all the code paths you aim to deliver. It's not about modeling the states. A formal specification could do the same thing. In fact, the code is itself a formal specification of exactly that.

If you start thinking of TDD as specifying what the code should do, then you probably should be writing tests for your tests to ensure they test what you mean for them to test. And if your code is compartmentalized into small enough components, it is just as easy to write a correct test as it is to write correct code. If you are doing that, then writing tests is clearly a waste of time. (And you should be doing as much of that as possible.)


> the tests will help

In my experience, proponents of TDD, when pushed, often end up defending testing in general rather than TDD specifically. As if TDD and write-and-forget code were the only alternatives.

> unless you are infinitely genius

We've established, based on your own numbers, that if you spend less than 10-30% of your time debugging your code, it isn't worth it. There's no need to exclude the middle and pretend it requires infinite genius.


I have alternated between TDD and post-testing.

I've noticed that TDD forces me to think about the code in a better way before actually coding it, rather than just going ahead and coding it and then figuring out how to test it afterward.
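The cycle that forces that up-front thinking is: write a failing test, write the minimum to pass, then refactor. A toy illustration (my example, kept as plain asserts so it stays self-contained):

```python
# Step 1 (red): these assertions are written BEFORE slugify exists.
# Writing them first forces the behavior to be decided up front:
# lowercase the title, turn runs of whitespace into single hyphens.

# Step 2 (green): the minimum code that makes them pass.
def slugify(title):
    return "-".join(title.lower().split())

# Step 3 would be refactoring with these still passing:
assert slugify("Hello World") == "hello-world"
assert slugify("a  b") == "a-b"
```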

This is by no means an easy practice to adopt, btw, especially after years of not doing so.

I actually think TDD should be taught in schools and never even considered a separate aspect of coding; TDD should be integral to coding, period. If it hadn't been discovered separately in the first place, it would just be called "coding."

> that you only need to spend less than 10-30% of your time debugging your code before it isn't worth it

That is a good point.

It's worth it down the line. You're not factoring in future costs. You're not just reducing TODAY'S bugs by 90% with that increase in overall coding time; you're also vastly reducing the code's future technical debt.

You're also writing a permanent, provable spec for the code. What happens five years after you write your untested-but-debugged code, when you have to go back to it to add or fix something? How in the hell will you remember the whole mental model and know whether you are in danger of breaking something? The answer is: you will not. And you will tread more useless water (time and effort) debugging the bugfixes, feature-adds, or refactorings.

Speaking of refactorings, they are almost impossible to do without huge risk unless you have well-written unit tests against the interface to the code being refactored.
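Concretely, "against the interface" means the tests call only the public entry point and say nothing about the internals, so the internals can be swapped freely. A sketch with invented names:

```python
# A test written against the public interface survives refactoring.
def word_counts(text):
    """Public interface: map each word to how often it appears."""
    # v1 internals: a plain loop and dict. Could later be replaced by
    # collections.Counter (or anything else) without touching the test.
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# The test pins WHAT the function does, not HOW, so a refactoring
# that changes the internals cannot break it:
assert word_counts("a b a") == {"a": 2, "b": 1}
```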

In short, if you do not write tests, you are literally placing a bet against the future of your code. Do you really want to do that? Do you consider your work THAT "throwaway"?

That said, tests are no panacea... and I am NOT trying to sell them as one (which would be wrong). You might assert the wrong things, or miss testing negative cases (a common one is not testing that the right runtime error is thrown when inputs to the code do not conform). There are cases on record of well-tested code that passed all tests (like satellite code) and still failed in the real world because of human error (both the code AND the test were wrong).


"I've noticed that TDD forces me to think about the code in a better way, before actually coding it, than just going ahead and coding it"

IMO, that is the crux of the matter: Thinking-Driven-Design is the way to go. The idea that you _need tests_ to do the up-front thinking is, again IMO, bogus, and writing tests without thinking doesn't help much, as you seem to agree given your remark on missing test cases.

Some people use paper or a whiteboard as tools that make them think. Others go for a walk. Yet others can sometimes do it sitting behind their monitor while doing slightly mindless things such as deleting no-longer-important mail or setting up a new project.

Also: good tooling makes many kinds of refactorings extremely low-risk. Strongly-typed languages that are designed with refactoring in mind help tons there.


How do you TDD experimental/evolving code? I work on scientific algorithms, where there are typically only a few lines of code that constantly evolve to improve some performance metric.

I have been struggling for a while to integrate testing into my work. TDD is easy and perfectly suited for "normal" software development, where there is some kind of plan, and you are writing a lot of code that changes little after it is written.

I'd be very interested in pointers on how to apply TDD to little code that changes a lot after it is initially written.


Do you mean you work with evolutionary algorithms?

One thing testing does require is determinism. In other words, given all input states X(1) through X(N), output Z should result 100% of the time. If that is not the case, then you haven't accounted for all input states (for example, code that looks at the time and acts based on it; the time is an oft-unaccounted-for input), or you have code that calls something like rand() without a fixed seed.

If you can get your code into a deterministic state, then it is testable.
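In practice that usually means injecting the nondeterministic inputs (clock, RNG) instead of reaching for them inside the code, so a test can fix them. A sketch of the idea, with invented names:

```python
import random

# The RNG is passed in rather than created inside the function, so a
# test can supply a seeded one and get identical output on every run.
def jittered_delays(base, n, rng):
    """Return n retry delays of base seconds plus random jitter in [0, 1)."""
    return [base + rng.random() for _ in range(n)]

# In production: jittered_delays(1.0, 3, random.Random())
# In a test, a fixed seed makes the call fully deterministic:
delays = jittered_delays(1.0, 3, random.Random(42))
assert delays == jittered_delays(1.0, 3, random.Random(42))
assert all(1.0 <= d < 2.0 for d in delays)
```

The same trick works for time: pass a `now` function (or a timestamp) as a parameter, and the test can pin the clock.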


There are multiple studies (ref. Making Software, Oram & Wilson), the results are inconclusive. It's especially not clear whether it's worth it to do TDD in comparison with simple automated testing and other testing strategies.

You are preaching by the way.


> Making Software, Oram & Wilson

Going to quote one of the authors (Hakan Erdogmus) from here: https://computinged.wordpress.com/2010/11/01/making-software...

"I am one of the authors of the TDD chapter. Ours was a review of existing some 30+ studies of TDD. Yes, Nachi’s experiments (the MSR study mentioned above) were included. BTW, I wouldn’t have concluded that “tdd doesn’t work” based on our data. Rather, I would have conservatively concluded: there is moderate evidence supporting TDD’s external quality and test-support benefits, but there is no consistent evidence in one direction or other regarding its productivity effects."

The only thing inconclusive was the increased productivity effect, not the code quality effect.

> It's especially not clear whether it's worth it to do TDD in comparison with simple automated testing and other testing strategies.

https://www.researchgate.net/publication/3188484_On_the_effe...

I've done every testing strategy. Test-none, BDD, TDD, test-after, integration testing, unit testing, GUI testing, you name it.

If your code will last longer than a couple years, it is worth it to do extensive unit testing via TDD, integration testing of major/important workflows, a visual once-over by a QA person to make sure something didn't break the CSS, and that's it. If you are the founder of a startup and expect to sell within 2 years (and are thus incentivized not to go the TDD route), you better be correct in your bet or there will be technical-debt hell to pay.

> preaching

Espousing something that I have found over and over again in my programming career (testing since about 2004, TDD since about 5 years ago) is now "preaching"? Call me a prophet, then. I know exactly what I see. Don't listen to me though, I've only been coding since I was 8 in 1980... I encourage you to find out for yourself.


I think you should go back and read that comment on the blog, because all of its statements are qualified, e.g. "conservatively concluded", "moderate evidence".

You, on the other hand, seem certain that it works. So certain that you're using no qualifiers, writing lengthy replies, and selectively providing links that support your assertions. When pressed for information, you provide a reasonable defense of unit testing, not TDD.

Yes, I know unit-tests are nice when part of a testing strategy together with integration testing, UI testing, etc. I am not at all convinced that TDD is better and you haven't changed that.

I am ignoring personal opinions and blog posts because too many software engineering practices are just popular rituals. TDD proponents need to prove conclusively that TDD is significantly better than selective test-after to offset the productivity loss, and they haven't done that.


"Moderate evidence" is still evidence.

TDD forces your code to be tightly focused. It's very hard to write a test for reams of functionality before you write that functionality, so your code is automatically tight (and as a result, easier to refactor, maintain, understand, etc). I don't see how this is so hard to see or why you need empirical evidence for that part at least. A lot of what "value" is is quite subjective, even in programming. You know "good code" when you see it. Why don't you try TDD and form your own opinion?


Yes, I will have to try it at some point. I've been postponing that due to lack of time and skepticism.



