
Do you find it hard to change directions with a feature or component, because your tests have sort of "locked you in" to the structure you were working on?

How religiously do you follow TDD? Do you literally not write a single line of code unless it's in furtherance of passing a failing unit test, or do you just worry about the system test you put in place at the start and unit test aspects as you need to?

Also, you mentioned changes to the API of third-party libraries. How do you go about catching how those changes impact your code? (I would assume you are mocking these in your unit tests.)

Wow I just asked you a lot of questions.

Seriously though, thanks for posting this article. It's so rare to be able to read an article about unit testing that isn't just some guy demonstrating how he'd use TDD to determine if a number is in the Fibonacci sequence or something.




> Do you find it hard to change directions with a feature or component, because your tests have sort of "locked you in" to the structure you were working on?

Do you mean changing the direction of the implementation, or the outcome of the feature? If it's just the implementation we usually throw away the unit tests along with the code we don't like and restart. The system test is ultimately responsible for making sure the feature itself is working. The unit tests just allow you to test different parameters without having to write a system test for each of those.
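To illustrate the division of labour described above, here's a minimal sketch (`slugify` and all its parameters are invented for this example): one system test would cover the feature end-to-end, while a table of cheap unit cases covers the parameter space.

```ruby
# Hypothetical helper under test.
def slugify(title, separator: "-", max_length: 50)
  slug = title.downcase.gsub(/[^a-z0-9]+/, separator)
  slug = slug.gsub(/#{Regexp.escape(separator)}+/, separator)
             .sub(/\A#{Regexp.escape(separator)}/, "")
             .sub(/#{Regexp.escape(separator)}\z/, "")
  slug[0, max_length]
end

# One table of unit cases instead of one system test per parameter combination:
CASES = {
  ["Hello World", {}]                 => "hello-world",
  ["Hello World", { separator: "_" }] => "hello_world",
  ["  spaces  ", {}]                  => "spaces",
  ["A" * 100, { max_length: 10 }]     => "aaaaaaaaaa",
}

CASES.each do |(title, opts), expected|
  actual = slugify(title, **opts)
  raise "slugify(#{title.inspect}, #{opts.inspect}) => #{actual.inspect}" unless actual == expected
end
```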

I have never felt "locked in" so far. Generally I feel very good about the stability of the software and my ability to change it as needed.

> How religiously do you follow TDD? Do you literally not write a single line of code unless it's in furtherance of passing a failing unit test, or do you just worry about the system test you put in place at the start and unit test aspects as you need to

Very religiously with almost no exceptions. When I make an exception it is usually a sign of problems in the code, and I'll revisit it later on / improve my tools.

I think it is difficult to do anything but full TDD, it's just too easy to drop the ball once you make compromises.

> Also, you mentioned changes to the API of third-party libraries. How do you go about catching how those changes impact your code? (I would assume you are mocking these in your unit tests.)

System tests are the only line of defense here. That's why we make sure that all mission critical code paths are covered by them. There are no 100% guarantees, but 95% certainty beats 20% certainty after a big library upgrade by a lot : ).

> Seriously though, thanks for posting this article. It's so rare to be able to read an article about unit testing that isn't just some guy demonstrating how he'd use TDD to determine if a number is in the Fibonacci sequence or something.

I never saw the benefits from these articles either. It took the node.js API breaking our app in about 100 different places for me to see the value in TDD. I guess it's just like backups ... ; )


Thanks for your excellent replies. I realize that part of the difficulties I experience with using TDD in real life is that testing feels "expensive" to me, e.g. I frequently spend more time futzing around with test code than I do production code.

Your responses illustrate a different perspective, that testing is cheap (this probably comes with experience, I would imagine). Anyhow, I can see that if I thought of tests as cheap, I wouldn't get so hung up on my tests breaking when I change direction on a feature.

I also write a fair amount of code that consumes web APIs, so I have lost a lot of hair figuring out how to test against those properly without mocking out everything to the point of meaninglessness.


> testing feels "expensive" to me

This is actually one of the bigger 'problems' with {T,B}DD; it's much harder to figure out how much time it takes to complete things.

BDD: Wrote the test, wrote the feature, checked it in, wall clock time, 1 hour. Next week, something unrelated changes, my tests show right where the problem is, 10 minute fix. Total time: 1:10.

No testing: wrote the feature, half an hour. Next week, something unrelated changes, breaks the feature, spend an hour fixing it. Total time: 1:30. And keep adding time when other things break later, too...

In projects with a high test coverage, I almost never spend any time debugging. That doesn't mean it's easy to recognize the saved time, though.


I think the difficulties I've personally had with embracing TDD come down to a slightly different set of experiences:

TDD: Spend an hour writing the test (sometimes more if I have to figure out how to properly work with a third-party library). Spend 10 minutes writing the code. Decide a few days later that I need to rework the API a bit on the feature, so spend another hour rewriting tests and 5 minutes writing new production code.

No testing: Write the feature, 20 minutes. "Test" while writing it (refresh a web page or run something at the CL). Realize a few days later I need to rework things, spend 10 minutes doing so. In the rare instances where something breaks, spend 20 or 30 minutes tracking it down and fixing.

Don't read this as an argument against TDD. It's more that I have a very hard time actually realizing an increase in productivity or code quality when using it. TDD-backed code generally takes me longer to write and pretty much nobody on the business team I report to notices any difference whatsoever, except that feature X took a week longer than it probably would have otherwise. The few bugs that do creep into production code are usually dealt with promptly, and I'm not sure TDD would result in entirely bug-free code either way.

Anyhow, that's why this conversation has been quite valuable to me. I am convinced that if I can clear away real-world obstacles in TDD I can do a better job of embracing it and realizing the productivity benefits that everyone always crows about. For now, my personal experience is that it's a means to marginally improve code quality at a high cost of time.


I think you're probably not taking into account the amortized cost of TDD versus what you're doing now: how much testing is really happening when you "refresh a web page"? How many times do you do that? How confident are you when you make a change to your code base that you don't have to go back and manually perform all those tests you were already doing? All that refreshing takes a lot of time!

I write unit tests so I don't have to refresh web pages all day. If I do find myself going to the command-line to test something, I figure out what I'm trying to test, and write a unit test instead; it's something I have missed when I was writing the upfront tests. Then I can do it again, and again, and again, and use continuous integration so it runs those tests for me over and over.

If you do a lot of web dev, Selenium is a really good way of testing web page features.


>All that refreshing takes a lot of time!

I have the same problem as the parent and this is exactly why. I've always thoroughly tested everything but with ad hoc tests that I had to do over and over.

The main thing I miss with TDD is the interactive nature. If I'm testing in Smalltalk I can write code right inside the debugger and watch the effect it has, but TDD always moves me back to what Smalltalkers call "cult of the dead" programming where I have to stop, run the tests and wait for the output. I wish there was a way to make it more interactive. It would be easier to force myself to do it then.


There's an awesome Ruby library called 'autospec' that watches for files being saved, automatically runs your test suite in the background, and gives you a Growl notification showing whether they've passed or failed...


> nobody on the business team I report to notices any difference whatsoever, except that feature X took a week longer than it probably would have otherwise

This. And also, refactoring is almost a dirty word, so we try to keep it to ourselves and sneak in bits of refactoring when implementing bigger features.


Testing is a skill like any other. When you started programming, I'm sure you were quite slow at it, as well. As you level up your testing skill, the time it takes to write tests drops down. At first, it's really slow going, though, I agree.

I also have the pleasure of writing 90% of my code in Ruby, which has top notch testing support basically everywhere.


I add code that does the "run something at the CL and print the result" type work into a t/ file. Then, when I'm happy, I change it from a print to a test, and move on.

This way manual testing becomes automated tests with very minimal additional effort.
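A sketch of that workflow (`parse_price` is a made-up helper standing in for whatever scratch code lives in the t/ file):

```ruby
# Hypothetical helper that started life as a scratch script.
def parse_price(str)
  (str.delete("^0-9.").to_f * 100).round  # dollars string -> integer cents
end

# Step 1: while exploring, just print and eyeball the result:
#   puts parse_price("$12.99")   # ran by hand, looked right

# Step 2: once happy, freeze the observed output into an assertion:
raise "parse_price broke" unless parse_price("$12.99") == 1299
```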


What about POUT (plain old unit testing)? I.e., writing the test after the feature?


This might be my own personal failing, but those tests will never get written. Once the feature works, the temptation to move onto the next thing is just too great.

Also, by writing the test first, you ensure that the code is actually easy to test. This gets easier as you get more experienced with testing, but still.


Also, writing the test first helps in identifying a sane API, I personally consider this to be the single greatest advantage of TDD.

I don't write as many tests as you do, though, Steve. I generally stop once I have the nominal test cases in place; I don't test edge cases. They tend to get hit by upstream unit tests anyway (I don't mock except where it is absolutely necessary). If I do hit a difficult-to-find bug, I use tests as one of my main debugging tools, writing tests for any code I have doubts about. By the end of a project I generally have fairly high test coverage, but I never feel like I'm writing tests just for the methodology's sake: they are either testing the nominal case, or verifying behaviour while tracking down a bug.


> helps in identifying a sane API

I totally forgot about this, but I agree 100%. I'm a big fan of "Write this as though the underlying code exists, then fill it out" for API design.
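That approach can be sketched in a few lines (`Report` and its methods are invented for illustration): you write the call site you wish existed, then fill out the smallest class that satisfies it.

```ruby
# Written first, as though the underlying code existed:
#
#   report = Report.new(orders)
#   report.total         # => 42.5
#   report.by_customer   # => { "ada" => 30.0, "bob" => 12.5 }
#
# ...then filled out with the smallest implementation that satisfies it:

Report = Struct.new(:orders) do
  def total
    orders.sum { |o| o[:amount] }
  end

  def by_customer
    orders.group_by { |o| o[:customer] }
          .transform_values { |os| os.sum { |o| o[:amount] } }
  end
end

orders = [
  { customer: "ada", amount: 30.0 },
  { customer: "bob", amount: 12.5 },
]
report = Report.new(orders)
raise unless report.total == 42.5
raise unless report.by_customer == { "ada" => 30.0, "bob" => 12.5 }
```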

I don't write as many tests as you'd think I do, actually, because I heavily favor integration tests over unit-style tests, and so we're probably much closer in that regard than you think. ;)


I hope I'm not giving you the wrong picture. We did have to cut features when rewriting our software (after node.js changed all APIs) using TDD. It does take more time to write tested code, and you do not get an advantage until you have the code in production. It is only at that point that you are seeing the returns on that investment.

So generally you need to try to find a small enough product so you can build it using TDD. Also stay as simple as you can with your testing - even if it feels very repetitive. I'm only now thinking of adding more tools to my testing toolchain after I have a better understanding of the actual problems I run into often.

About testing web APIs: you need both unit and system tests. The unit tests make sure your internal processing is solid. The system tests make calls directly to the service. That won't always feel pretty, but it's the best way to approach it, IMO.
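One way to get that split, sketched with invented names (`WeatherClient`, the endpoint, and the JSON shape are all assumptions): inject the HTTP transport, so the unit test runs against a fake while the system test constructs the same client with a real one.

```ruby
require "json"

# Client whose transport is injected, so it can run against a fake or the
# real network without changing the code under test.
class WeatherClient
  def initialize(http)
    @http = http
  end

  def temperature(city)
    body = @http.get("/weather?city=#{city}")
    JSON.parse(body).fetch("temp_c")
  end
end

# Unit test: a fake transport verifies the internal processing is solid,
# with no network involved.
class FakeHttp
  def get(_path)
    '{"temp_c": 21.5}'
  end
end

raise unless WeatherClient.new(FakeHttp.new).temperature("berlin") == 21.5

# A system test (not shown) would build WeatherClient with a real HTTP
# object and hit the live service, catching upstream API changes.
```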


How do you test something which requires browser interaction like oauth authentication?

Edit: I know this may not be applicable in transloadit's case. I'm asking for a general strategy


The core of our service is a REST API, so we don't have to deal with that much. For browser interaction we would probably use a combination of Selenium (for system tests), and maybe qunit for the frontend testing.

But anyway, I am no expert on this so I can't give you solid advice.


I have used Canoo WebTest to test browser interaction that involved a log in to a deployed server.


One method I've used (with Ruby) is use of FakeWeb, and collecting the various responses from oauth providers to test against. It's the closest I've seen to hitting the real services.

If you're testing javascript, you'll need to use Selenium or something similar.


WebMock is an excellent Ruby framework for solving this problem. My current app interacts with both GitHub and Heroku, and it's come in very handy for filling in what would otherwise be gaps. It's newer than FakeWeb and is much better at letting you specify interactions on a per-test basis rather than for the entire suite.


>Very religiously with almost no exceptions. When I make an exception it is usually a sign of problems in the code, and I'll revisit it later on / improve my tools.

That's one of the things that turned me off TDD for good - I find it a crime to write code that I know won't work, just to write a more comprehensive test and then rewrite it, wasting a ton of time.

The other was constantly switching between code/test/code - even if it only takes 5 seconds, that essentially means that each statement now costs 10 seconds (time to confirm the test failed plus time to confirm that it has now passed), plus the time to write the test.

This is also the reason I tend to stick to statically checked languages.


>I find it a crime to write code that I know won't work, just to write a more comprehensive test and then rewriting it, wasting a ton of time.

I don't remember this being in TDD. Could you expand on this? I remember the rule that you write the simplest code that will pass the test and no more, but I don't recall that causing any rewrite per se.


Statically checked languages have certain advantages and disadvantages.

As far as speeding up the process goes, I always have test and code in a vim splitscreen, along with a keybinding that allows me to run a test from within vim. It takes me well under a second to switch between the two and execute a test.


I can give you my experience with TDD:

You generally don't need to change directions very much. TDD will expose bad interfaces or interfaces that aren't abstract enough early on because you can't test them easily. And if you do decide to change directions you sit down and write the tests and then make the changes until your code works and the tests pass.

Generally you want to write tests before code and make your code pass the tests. How many tests and how much coverage is up to you. It's important that the tests are small and brittle so you can pick out as many breakages as early as possible.

You use the API in your tests. When the tests start to fail you know your API has changed. It gets a bit more difficult with web APIs if they don't provide a test environment.
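The write-test-first loop described above can be shown with a tiny invented example (`leap_year?` is made up; the point is the order of operations):

```ruby
# Red: the assertions below are written first, for behaviour that does not
# exist yet -- running them before defining leap_year? fails with NameError.

# Green: the simplest implementation that makes them pass.
def leap_year?(year)
  (year % 4).zero? && (!(year % 100).zero? || (year % 400).zero?)
end

# The same tests then act as the breakage detector: if the behaviour ever
# drifts, they fail immediately and point at the exact spot.
[[2000, true], [1900, false], [2024, true], [2023, false]].each do |year, expected|
  raise "leap_year?(#{year}) wrong" unless leap_year?(year) == expected
end
```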





