
TDD works great when the requirements are known and there’s little mystery. Waterfall methods, oddly enough, lend themselves well to TDD, even though TDD is kind of associated with hip styles like agile.

The reality is that most projects start as exploration. What you’re trying to build might not even be possible. Can you even approximate the classes you’ll need yet? For these scenarios, TDD fits in awkwardly.

I prefer to do a little back and forth. A little dev. When things are looking stable, a little testing. Towards the end, a lot of testing.




I've found the opposite. When I start to write a test, I often realize I don't know exactly what I need to do because the requirements are unclear or somehow conflict with some other assumption of the system. That realization prompts me to go talk to my product owner, which forces them to clarify their requirements and target a more coherent idea of what the system should be.


There is some point to your comment, but also something that seems to me to be a misunderstanding. Also, part of it seems self-contradictory to me.

There can indeed be exploration stages where TDD does not help that much. E.g., if you are going to interface with a piece of hardware you have never seen before and you need to get some sort of working communication going, there is much point to what you say. But as soon as you know what sequence of commands makes the piece of hardware work, you have arrived in the territory where TDD is highly beneficial. You can then write a test that requires your software to produce exactly that sequence of commands, and after that you can refactor your exploratory code while having to check it against the real hardware much less often than would otherwise be needed.
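To make that concrete, a minimal sketch of such a test could look like the following (the names FakeDevice and PumpController and the command strings are made up for illustration; the point is only that the sequence discovered during exploration gets pinned down once, so later refactoring rarely needs the real hardware):

    # Test double that records commands instead of talking to real hardware.
    class FakeDevice:
        def __init__(self):
            self.commands = []

        def send(self, command):
            self.commands.append(command)


    # Code under test: drives the device through its start-up sequence.
    class PumpController:
        def __init__(self, device):
            self.device = device

        def start(self):
            # The sequence that was discovered during the exploratory phase.
            self.device.send("RESET")
            self.device.send("SET_MODE 2")
            self.device.send("START")


    def test_start_sends_expected_command_sequence():
        device = FakeDevice()
        PumpController(device).start()
        assert device.commands == ["RESET", "SET_MODE 2", "START"]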

The remark about TDD and waterfall seems much less true to me. In my original post I wrote 'both frequent new features and keep working correctly'. That does not describe waterfall at all. In fact, and this is where your message seems to get a bit self-contradictory, your post actually sounds much more waterfall-like to me than mine. It contains the words 'Towards the end'. It is waterfall projects that have an end that was envisioned right from the start. Whatever 'agile' means, it seems to have the feature that we never stop adding features and that the software is supposed to be in a working state in between adding them. So, if one is actually doing that, there are dragons in the sentence 'Towards the end a lot of testing', because it actually turns into 'always lots of testing', which costs lots of time. In the case of the hardware example it would also require that every developer has a specimen of that hardware on his desk next to his computer. As I happen to work in a field where we interface with hardware that is three stories high, we should realize that this is not always practical.

One misunderstanding that might be going on is that perhaps you are assuming that in TDD one would test just one class at a time. I generally prefer to test a set of classes, i.e., a subsystem of a whole program. In that case the tests are about properties of the program that, in many cases, the user could recognize as beneficial. There should be less churn at that level. I suppose some people would call this behaviour-driven development instead, but I am not so very married to terminology and will just keep calling it TDD. Testing individual classes can also be helpful, but if they contain rather simple logic I am not sure writing tests for that is so very helpful. At some point you would just be testing whether the processor really is capable of copying values, which is not that interesting.
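As an example of what I mean by a subsystem-level test, a sketch could look roughly like this (Order, Inventory, and Checkout are hypothetical names; the point is that the assertion is about behaviour a user could recognize, not about any single class):

    class Inventory:
        def __init__(self, stock):
            self._stock = dict(stock)

        def reserve(self, item, quantity):
            if self._stock.get(item, 0) < quantity:
                raise ValueError("not enough " + item + " in stock")
            self._stock[item] -= quantity


    class Order:
        def __init__(self):
            self.lines = []

        def add(self, item, quantity):
            self.lines.append((item, quantity))


    # Entry point of the subsystem: ties Order and Inventory together.
    class Checkout:
        def __init__(self, inventory):
            self._inventory = inventory

        def place(self, order):
            for item, quantity in order.lines:
                self._inventory.reserve(item, quantity)
            return "confirmed"


    def test_order_is_confirmed_when_everything_is_in_stock():
        inventory = Inventory({"widget": 5})
        order = Order()
        order.add("widget", 3)
        # Assertion about the behaviour of the subsystem, not of any one class.
        assert Checkout(inventory).place(order) == "confirmed"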



