
Part of test-driven design is using the tests to drive out a sensible, easy-to-use interface for the system under test, and to make it testable from the get-go (not too much non-determinism, threading issues, whatever it is). It's well known that you should likely _delete these tests_ once you've written higher-level ones that test behaviour rather than implementation! But the best and quickest way to get to having high-quality _behaviour_ tests is to start by using "implementation tests" to make sure you have an easily testable system, and then go from there.



>It's well known that you should likely _delete these tests_ once you've written higher-level ones that test behaviour rather than implementation!

Building tests only to throw them away is the design equivalent of burning stacks of $10 notes to stay warm.

As a process it works. It's just 2x easier to write behavioral tests first and thrash out a good design later under their harness.

It mystifies me that doubling the SLOC of your code by adding low-level tests, only to trash them later, came to be seen as a best practice. It's so incredibly wasteful.


> As a process it works. It's just 2x easier to write behavioral tests first and thrash out a good design later under their harness.

I think this “2x easier” only applies to developers who deeply understand how to design software. A very poorly designed implementation can still pass the high-level tests while being hard to reason about (typically due to poor data structures) and to debug, requiring excessive test setup and teardown because of lots of assumed state, being hard to change, and possibly having no modularity at all, meaning that the tests cover tens of thousands of lines (but really only the happy path).

Code like this can still be valuable of course, since it satisfies the requirements and produces business value; however, I’d say that it runs a high risk of being marked for a complete rewrite, likely by someone who also doesn’t really know how to design software. (Organizations that don’t know what well-designed software looks like tend not to hire people who are good at it.)


"Test driven design" in the wrong hands will also lead to a poorly designed non modular implementation in less skilled hands.

I've seen plenty of horrible unit-test-driven code with a mess of unnecessary mocks.

So no, this isn't about skill.

"Test driven design" doesnt provide effective safety rails to prevent bad design from happening. It just causes more pain to those who use it as such. Experience is what is supposed to tell you how to react to that pain.

In the hands of junior developers, test-driven design is more like test-driven self-flagellation in that respect: an exercise in unnecessary shame and humiliation.

Moreover, since it prevents those tests, with their clusterfuck of mocks, from operating as a reliable safety harness (they fail when implementation code changes, not in the presence of bugs), it actively inhibits iterative exploration towards good design.

These tests have the effect of locking in bad design, because keeping tightly coupled low-level tests green while refactoring is twice as much work as just refactoring without them.
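
(A minimal sketch of the two styles being contrasted here; all names and numbers are invented, only unittest.mock and pytest are real. The first test pins how the answer is computed, the second pins what the caller observes, so only the first goes red under a behaviour-preserving refactor.)

    from unittest.mock import Mock
    import pytest

    class DiscountPolicy:
        def discount_for(self, qty):
            return 0.1 if qty >= 100 else 0.0

    class PriceCalculator:
        def __init__(self, policy):
            self._policy = policy

        def total(self, unit_price, qty):
            return unit_price * qty * (1 - self._policy.discount_for(qty))

    # Implementation-coupled: inline the policy, rename discount_for(), or
    # change its arguments and this fails even though every total is correct.
    def test_total_asks_policy_for_discount():
        policy = Mock()
        policy.discount_for.return_value = 0.1
        PriceCalculator(policy).total(10.0, 100)
        policy.discount_for.assert_called_once_with(100)

    # Behaviour-coupled: survives any refactor that keeps the prices right.
    def test_bulk_orders_get_ten_percent_off():
        calc = PriceCalculator(DiscountPolicy())
        assert calc.total(10.0, 100) == pytest.approx(900.0)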


> I've seen plenty of horrible unit-test-driven code with a mess of unnecessary mocks.

Mocks are an anti-pattern. They are a tool that, either by design or by unfortunate happenstance, allows and encourages poor separation of concerns, thereby eliminating the single largest benefit of TDD: clean designs.


You asserted:

> … TDD is a "design practice" but I find it to be completely wrongheaded.

> The principle that tests that couple to low level code give you feedback about tightly coupled code is true but it does that because low level/unit tests couple too tightly to your code - I.e. because they too are bad code!

But now you’re asserting:

> "Test driven design" in the wrong hands will also lead to a poorly designed non modular implementation in less skilled hands.

Which feels like it contradicts your earlier assertion that TDD produces low-level unit tests. In other words, for there to be a “unit test” there must be a boundary around the “unit”, and if the code created by following TDD doesn’t even have module-sized units, then is that really TDD anymore?

Edit: Or are you asserting that TDD doesn’t provide any direction at all about what kind of testing to do? If so, then what does it direct us to do?


>"Test driven design" in the wrong hands will also lead to a poorly designed non modular implementation in less skilled hands.

>Which feels like it contradicts your earlier assertion that TDD produces low-level unit tests.

No, it doesn't contradict that at all. Test-driven design, whether done optimally or suboptimally, produces low-level unit tests.

Whether the "feedback" from those tests is taken into account determines whether you get bad design or not.

Either way, I do not consider it a good practice. The person I was replying to was suggesting that it was a practice more suited to people with a lack of experience. I don't think that is true.

>Or are you asserting that TDD doesn’t provide any direction at all about what kind of testing to do?

I'm saying that test-driven design provides weak direction about design, and it is not uncommon for it to still produce bad designs because that weak direction is not followed by people with less experience.

Thus I don't think it's a practice whose effectiveness is moderated by experience level. It's just a bad idea either way.


Thanks for clarifying.

I think this nails it:

> Whether the "feedback" from those tests is taken into account determines whether you get bad design or not.

Which to me was kind of the whole point of TDD in the first place: to let the ease and/or difficulty of testing become feedback that informs the design overall, leading to code that requires less setup to test, fewer dependencies to mock, etc.

I also agree that a lot of devs ignore that feedback, and that just telling someone to “do TDD” is pointless unless you first make sure they know they need to strive for little to no test setup, few or no mocks, and so on.

Overall I get the sense that a sizable number of programmers accept a mentality of “I’m told programming is hard, this feels hard so I must be doing it right”. It’s a mentality of helplessness, of lack of agency, as if there is nothing more they can do to make things easier. Thus they churn out overly complex, difficult code.
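
(As a made-up illustration of that feedback loop: the painful setup in the first test is the signal, and acting on it removes the setup entirely. All names are invented; only datetime and unittest.mock are real.)

    from datetime import date
    from unittest.mock import patch

    # Before: the header reaches out to a global clock and global settings,
    # so the test has to patch both before it can assert anything.
    SETTINGS = {"title": "Sales"}

    class Clock:
        @staticmethod
        def today():
            return date.today()

    def report_header():
        return f"{SETTINGS['title']} {Clock.today().isoformat()}"

    def test_report_header_needs_patching():
        with patch.object(Clock, "today", return_value=date(2024, 1, 1)), \
             patch.dict(SETTINGS, {"title": "Sales"}):
            assert report_header() == "Sales 2024-01-01"

    # After: listening to that feedback, the inputs are explicit and the
    # test needs no setup at all.
    def build_header(title, on):
        return f"{title} {on.isoformat()}"

    def test_build_header_needs_no_setup():
        assert build_header("Sales", date(2024, 1, 1)) == "Sales 2024-01-01"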


>Which to me was kind of the whole point of TDD in the first place: to let the ease and/or difficulty of testing become feedback that informs the design overall

Yes, and that is precisely what I was arguing against throughout this thread.

For me, (integration) test-driven development is about creating:

* A signal to let me know if my feature is working, and easy access to debugging information if it is not.

* A body of high quality tests.

It is 0% about design, except insofar as the tests give me a safety harness for refactoring or experimenting with design changes.


Don't agree, though I think it's more subtle than "throw away the tests" - more "evolve them to a larger scope".

I find this particularly with web services, especially when the services are some form of stateless calculator. I'll usually start with tests that focus on the function at the native programming-language level. Those help me get the function(s) working correctly. The code and tests co-evolve.

Once I get the logic working, I'll add on the HTTP handling. There's no domain logic in there, but there is still logic (e.g. mapping from JSON to native types, authentication, ...). Things can go wrong there too. At this point I'll migrate the original tests to use the web service. Doing so means I get more reassurance from each test run: not only that the domain logic works, but that the translation in and out works correctly too.
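
(A rough sketch of that migration, assuming a FastAPI app; the module names, endpoint and numbers are invented, only fastapi.testclient is real.)

    from fastapi.testclient import TestClient
    from myservice.api import app        # hypothetical FastAPI app
    from myservice.domain import quote   # hypothetical domain function

    # Original, native-level test: exercises the domain logic directly.
    def test_bulk_quote_native():
        assert quote(sku="A1", qty=100) == 900.0

    # The same scenario migrated to the web service: it now also covers
    # routing, validation and JSON mapping.
    def test_bulk_quote_http():
        client = TestClient(app)
        resp = client.post("/price", json={"sku": "A1", "qty": 100})
        assert resp.status_code == 200
        assert resp.json()["total"] == 900.0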

At that point there's no point leaving the original tests in place. They just cover a subset of what the E2E tests cover, so they provide no extra assurance.

I'm therefore with TFA in leaning towards E2E testing, because I get more bang for the buck. There are still places where I'll keep native-language tests, for example if there's particularly gnarly logic that I want extra reassurance on, or if E2E testing is too slow. But they tend to be the exception, not the rule.


> At that point there's no point leaving the original tests in place. They just cover a subset of what the E2E tests cover, so they provide no extra assurance.

They give you feedback when something fails by better localising where it failed. I agree that E2E tests provide better assurance, but tests are not only there to provide assurance; they are also there to assist you in development.


Starting low level and evolving to a larger scope is still unnecessary work.

It's still cheaper to start off building a Playwright/calls-a-REST-API test against your web app than to build a low-level unit test and "evolve" it into a Playwright test.

I agree that low-level unit tests are faster and more appropriate if you are surrounding complex logic with a simple and stable API (e.g. testing a parser), but it's better to work your way down to that level when it makes sense, not start there and work your way up.
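
(For concreteness, a sketch of the kind of top-level test I mean, using pytest-playwright; the URL, selectors and expected text are all invented.)

    from playwright.sync_api import Page, expect

    # A behaviour-level test written before any unit tests exist.
    def test_bulk_order_shows_discounted_price(page: Page):
        page.goto("http://localhost:8000/quote")
        page.fill("#sku", "A1")
        page.fill("#qty", "100")
        page.click("text=Get price")
        expect(page.locator("#total")).to_have_text("900.00")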


That’s not my experience. In the early stages, it’s often not clear what the interface or logic should be - even at the external behaviour level. Hence tests and code evolve together. Doing that at the native-code level means I can focus on one thing: the domain logic. I use FastAPI plus pytest for most of these projects. The net cost of migrating a domain-only test to use the web API is small. Doing that once the underlying API has stabilised is less effort than starting with a web test.


I don't think I've ever worked on any project where they hadn't yet decided whether they wanted a command-line app, a website, or an Android app before I started. That part is usually set in stone.

Sometimes lower-level requirements are decided before higher-level requirements.

I find that this often causes pretty bad requirements churn: when you actually get the customer to think about the UI, or get them to look at one, the domain model inevitably gets adjusted in response. This is the essence of why BDD/example-driven specification works.


What exactly is it wasting? Is your screen going to run out of ink? Even in the physical construction world, people often build as much scaffolding as the thing they're actually building, or more, and that takes time and effort to put up and take down, but it's worthwhile.

Sure, maybe you can do everything you would do via TDD in your head instead. But it's likely to be slower and more error-prone. You've got a computer there, so you might as well use it; "thinking aloud" by writing out your possible API designs and playing with them in code tends to be quicker and more effective.


>What exactly is it wasting?

Time. Writing and maintaining low-level unit tests takes time. That time is an investment. That investment does not pay off.

Doing test-driven development with high-level integration tests also takes time. That investment pays dividends, though. Those tests provide safety.

>Sure, maybe you can do everything you would do via TDD in your head instead. But it's likely to be slower and more error-prone.

It's actually much quicker and safer if you can change designs under the hood without having to change any of the tests, because they validate all the behavior.

Quicker and safer = you can do more iterations on the design in the available time = a better design in the end.

The refactoring step of red, green, refactor is where the design magic happens. If the refactoring turns tests red again, that inhibits refactoring.


> It's well known that you should likely _delete these tests_ once you've written higher-level ones that test behaviour rather than implementation!

Is it? I don't think I've ever seen that mentioned.


Put simply, doing TDD properly leads to sensible separation of concerns.



