Test driven development at Transloadit (debuggable.com)
133 points by felixge on Oct 28, 2010 | 65 comments



Do you find it hard to change directions with a feature or component, because your tests have sort of "locked you in" to the structure you were working on?

How religiously do you follow TDD? Do you literally not write a single line of code unless it's in furtherance of passing a failing unit test, or do you just worry about the system test you put in place at the start and unit test aspects as you need to?

Also, you mentioned changes to the API of third-party libraries. How do you go about catching how those changes impact your code? (I would assume you are mocking these in your unit tests.)

Wow I just asked you a lot of questions.

Seriously though, thanks for posting this article. It's so rare to be able to read an article about unit testing that isn't just some guy demonstrating how he'd use TDD to determine if a number is in the Fibonacci sequence or something.


> Do you find it hard to change directions with a feature or component, because your tests have sort of "locked you in" to the structure you were working on?

Do you mean changing the direction of the implementation, or the outcome of the feature? If it's just the implementation we usually throw away the unit tests along with the code we don't like and restart. The system test is ultimately responsible for making sure the feature itself is working. The unit tests just allow you to test different parameters without having to write a system test for each of those.
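As a sketch of what I mean (hypothetical names, not our actual code): one system test pins down the end-to-end behaviour, and cheap unit tests sweep the parameter space:

    // unit test: cheaply sweep parameters of a hypothetical resize helper
    var assert = require('assert');
    var resize = require('./resize'); // hypothetical module under test

    [[800, 600], [1024, 768], [333, 77]].forEach(function (dims) {
      var out = resize.fit(dims[0], dims[1], 100); // fit into a 100px box
      assert.ok(out.width <= 100 && out.height <= 100, dims.join('x'));
    });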

I have never felt "locked in" so far. Generally I feel very good about the stability of the software and my ability to change it as needed.

> How religiously do you follow TDD? Do you literally not write a single line of code unless it's in furtherance of passing a failing unit test, or do you just worry about the system test you put in place at the start and unit test aspects as you need to

Very religiously with almost no exceptions. When I make an exception it is usually a sign of problems in the code, and I'll revisit it later on / improve my tools.

I think it is difficult to do anything but full TDD; it's just too easy to drop the ball once you make compromises.

> Also, you mentioned changes to the API of third-party libraries. How do go about catching how those changes impact your code? (I would assume you are mocking these in your unit tests.)

System tests are the only line of defense here. That's why we make sure that all mission critical code paths are covered by them. There are no 100% guarantees, but 95% certainty beats 20% certainty after a big library upgrade by a lot : ).

> Seriously though, thanks for posting this article. It's so rare to be able to read an article about unit testing that isn't just some guy demonstrating how he'd use TDD to determine if a number is in the Fibonacci sequence or something.

I never saw the benefits from these articles either. It took the node.js API breaking our app in about 100 different places for me to see the value in TDD. I guess it's just like backups ... ; )


Thanks for your excellent replies. I realize that part of the difficulties I experience with using TDD in real life is that testing feels "expensive" to me, e.g. I frequently spend more time futzing around with test code than I do production code.

Your responses illustrate a different perspective, that testing is cheap (this probably comes with experience, I would imagine). Anyhow, I can see that if I thought of tests as cheap, I wouldn't get so hung up on my tests breaking when I change direction on a feature.

I also write a fair amount of code that consumes web APIs, so I have lost a lot of hair figuring out how to test against those properly without mocking out everything to the point of meaninglessness.


> testing feels "expensive" to me

This is actually one of the bigger 'problems' with {T,B}DD; it's much harder to figure out how much time it takes to complete things.

BDD: Wrote the test, wrote the feature, checked it in, wall clock time, 1 hour. Next week, something unrelated changes, my tests show right where the problem is, 10 minute fix. Total time: 1:10.

No testing: wrote the feature, half an hour. Next week, something unrelated changes, breaks the feature, spend an hour fixing it. Total time: 1:30. And keep adding time when other things break later, too...

In projects with a high test coverage, I almost never spend any time debugging. That doesn't mean it's easy to recognize the saved time, though.


I think the difficulties I've personally had with embracing TDD come down to a slightly different set of experiences:

TDD: Spend an hour writing the test (sometimes more if I have to figure out how to properly work with a third-party library). Spend 10 minutes writing the code. Decide a few days later that I need to rework the API a bit on the feature, so spend another hour rewriting tests and 5 minutes writing new production code.

No testing: Write the feature, 20 minutes. "Test" while writing it (refresh a web page or run something at the CL). Realize a few days later I need to rework things, spend 10 minutes doing so. In the rare instances where something breaks, spend 20 or 30 minutes tracking it down and fixing.

Don't read this as an argument against TDD. It's more that I have a very hard time actually realizing an increase in productivity or code quality when using it. TDD-backed code generally takes me longer to write and pretty much nobody on the business team I report to notices any difference whatsoever, except that feature X took a week longer than it probably would have otherwise. The few bugs that do creep into production code are usually dealt with promptly, and I'm not sure TDD would result in entirely bug-free code either way.

Anyhow, that's why this conversation has been quite valuable to me. I am convinced that if I can clear away real-world obstacles in TDD I can do a better job of embracing it and realizing the productivity benefits that everyone always crows about. For now, my personal experience is that it's a means to marginally improve code quality at a high cost of time.


I think you're probably not taking into account the amortized cost of TDD versus what you're doing now: how much testing is really happening when you "refresh a web page"? How many times do you do that? How confident are you when you make a change to your code base that you don't have to go back and manually perform all those tests you were already doing? All that refreshing takes a lot of time!

I write unit tests so I don't have to refresh web pages all day. If I do find myself going to the command-line to test something, I figure out what I'm trying to test, and write a unit test instead; it's something I have missed when I was writing the upfront tests. Then I can do it again, and again, and again, and use continuous integration so it runs those tests for me over and over.
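A sketch of that habit (hypothetical names): the check you'd otherwise run by hand becomes a repeatable assertion:

    // before: run `node app.js`, eyeball the output, repeat forever
    // after: freeze the check as a unit test and let CI repeat it for you
    var assert = require('assert');
    var slug = require('./slug'); // hypothetical module under test

    assert.equal(slug.make('Hello, World!'), 'hello-world');
    console.log('ok');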

If you do a lot of web dev, Selenium is a really good way of testing web page features.


>All that refreshing takes a lot of time!

I have the same problem as the parent and this is exactly why. I've always thoroughly tested everything but with ad hoc tests that I had to do over and over.

The main thing I miss with TDD is the interactive nature. If I'm testing in Smalltalk I can write code right inside the debugger and watch the effect it has, but TDD always moves me back to what Smalltalkers call "cult of the dead" programming where I have to stop, run the tests and wait for the output. I wish there was a way to make it more interactive. It would be easier to force myself to do it then.


There's an awesome Ruby library called 'autospec' that watches for files being saved, and then automatically runs your test suite in the background, and gives you a growl notification if they've failed or passed...
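autospec itself is Ruby, but the idea ports anywhere. A minimal Node sketch of the same watch-and-rerun loop (hypothetical file paths, console output instead of growl):

    var fs = require('fs');
    var spawn = require('child_process').spawn;

    // rerun the test file whenever the source file changes
    fs.watchFile('lib/app.js', function (curr, prev) {
      if (curr.mtime <= prev.mtime) return; // ignore non-modifying events
      var test = spawn('node', ['test/test-app.js']);
      test.on('exit', function (code) {
        console.log(code === 0 ? 'tests passed' : 'tests FAILED');
      });
    });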


> nobody on the business team I report to notices any difference whatsoever, except that feature X took a week longer than it probably would have otherwise

This. Also, refactoring is almost a dirty word, so we try to keep it to ourselves and sneak bits of refactoring in when implementing bigger features.


Testing is a skill like any other. When you started programming, I'm sure you were quite slow at it, as well. As you level up your testing skill, the time it takes to write tests drops down. At first, it's really slow going, though, I agree.

I also have the pleasure of writing 90% of my code in Ruby, which has top notch testing support basically everywhere.


I add code that does the "run something at the CL and print the result" type work into a t/ file. Then, when I'm happy, I change it from a print to a test, and move on.

This way manual testing becomes automated tests with very minimal additional effort.
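A sketch of that promotion, with made-up names:

    // t/parse.js -- exploratory phase: run it and read the output
    // console.log(require('../lib/parse')('a=1&b=2'));

    // once the output looks right, freeze it as an assertion
    var assert = require('assert');
    var parse = require('../lib/parse'); // hypothetical module under test
    assert.deepEqual(parse('a=1&b=2'), {a: '1', b: '2'});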


What about POUT (plain old unit testing)? I.e., writing the test after the feature?


This might be my own personal failing, but those tests will never get written. Once the feature works, the temptation to move onto the next thing is just too great.

Also, by writing the test first, you ensure that the code is actually easy to test. This gets easier as you get more experienced with testing, but still.


Also, writing the test first helps in identifying a sane API, I personally consider this to be the single greatest advantage of TDD.

I don't write as many tests as you do though, Steve. I generally stop once I have the nominal test cases in place; I don't test edge cases. They tend to get hit by upstream unit tests anyway (I don't mock except where it is absolutely necessary). If I do hit a difficult-to-find bug, I use tests as one of my main debugging tools, writing tests for any code I have doubts about. By the end of a project I generally have fairly high code coverage, but I never feel like I'm writing tests just for the methodology - they are either testing the nominal case, or verifying behaviour when tracking down a bug.


> helps in identifying a sane API

I totally forgot about this, but I agree 100%. I'm a big fan of "Write this as though the underlying code exists, then fill it out" for API design.

I don't write as many tests as you'd think I do, actually, because I heavily favor integration tests over unit-style tests, and so we're probably much closer in that regard than you think. ;)


I hope I'm not giving you the wrong picture. We did have to cut features when rewriting our software (after node.js changed all APIs) using TDD. It does take more time to write tested code, and you do not get an advantage until you have the code in production. It is only at that point that you are seeing the returns on that investment.

So generally you need to try to find a small enough product so you can build it using TDD. Also stay as simple as you can with your testing - even if it feels very repetitive. I'm only now thinking of adding more tools to my testing toolchain after I have a better understanding of the actual problems I run into often.

About testing web APIs. You need both unit and system tests. The unit tests make sure your internal processing is solid. The system tests will directly make calls to the service. That won't always feel pretty, but it's the best way to approach it IMO.


How do you test something which requires browser interaction, like OAuth authentication?

Edit: I know this may not be applicable in Transloadit's case. I'm asking for a general strategy.


The core of our service is a REST API, so we don't have to deal with that much. For browser interaction we would probably use a combination of Selenium (for system tests), and maybe qunit for the frontend testing.
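A frontend test in qunit can stay tiny - a sketch using qunit's test() and ok() globals (the formatBytes helper is made up):

    // runs inside a qunit test page alongside the application's JS
    test('formatBytes', function () {
      ok(formatBytes(1024) === '1 KB', 'formats kilobytes');
      ok(formatBytes(0) === '0 B', 'handles zero');
    });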

But anyway, I am no expert on this so I can't give you solid advice.


I have used Canoo WebTest to test browser interaction that involved logging in to a deployed server.


One method I've used (with Ruby) is FakeWeb, collecting the various responses from OAuth providers to test against. It's the closest I've seen to hitting the real services.

If you're testing javascript, you'll need to use Selenium or something similar.


WebMock is an excellent Ruby framework for solving this problem. My current app interacts with both GitHub and Heroku, and it's come in very handy for filling what would otherwise be gaps. It's newer than FakeWeb and is much better at letting you specify interactions on a per-test basis rather than for the entire suite.
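The same record-and-replay idea works in any stack; a Node sketch under that assumption (made-up token endpoint and port):

    var http = require('http');
    var assert = require('assert');

    // a response body captured once from the real OAuth provider
    var recorded = '{"access_token":"abc123"}';

    var server = http.createServer(function (req, res) {
      res.writeHead(200, {'Content-Type': 'application/json'});
      res.end(recorded);
    });

    server.listen(8765, function () {
      // point the client under test at localhost instead of the provider
      http.get({port: 8765, path: '/token'}, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
          assert.equal(JSON.parse(body).access_token, 'abc123');
          server.close();
        });
      });
    });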


>Very religiously with almost no exceptions. When I make an exception it is usually a sign of problems in the code, and I'll revisit it later on / improve my tools.

That's one of the things that turned me off TDD for good - I find it a crime to write code that I know won't work, just to write a more comprehensive test, and then to rewrite it, wasting a ton of time.

The other was constantly switching between code/test/code - even if it only takes 5 seconds, that essentially means that each statement now costs 10 seconds (time to confirm the test failed plus time to confirm that it has now passed), plus the time to write the test.

This is also the reason I tend to stick to statically checked languages.


>I find it a crime to write code that I know won't work, just to write a more comprehensive test and then rewriting it, wasting a ton of time.

I don't remember this being in TDD. Could you expand on this? I remember the rule that you write the simplest code that will pass the test and no more, but I don't recall that causing any rewrite per se.


Statically checked languages have certain advantages and disadvantages.

As far as speeding up the process goes, I always have test and code in a vim splitscreen, along with a keybinding that allows me to run a test from within vim. It takes me << 1 second to switch between the two and execute a test.


I can give you my experience with TDD:

You generally don't need to change directions very much. TDD will expose bad interfaces or interfaces that aren't abstract enough early on because you can't test them easily. And if you do decide to change directions you sit down and write the tests and then make the changes until your code works and the tests pass.

Generally you want to write tests before code and make your code pass the tests. How many tests and how much coverage is up to you. It's important that the tests are small and brittle so you can pick out as many breakages as early as possible.

You use the API in your tests. When the tests start to fail you know your API has changed. It gets a bit more difficult with web APIs if they don't provide a test environment.


A few facts that I didn't get to fit into the article:

- We have ~1.6x as much test code as we have code being tested.

- Our unit test suite takes < 5 seconds to run. Our system test suite takes < 60 seconds.

- We use Hudson for continuous integration

- The hardest part with TDD for us was reaching 1.0 as you always feel like it would be "faster" to stop testing.

- We usually don't go around refactoring stuff just because we can. In fact we usually feel more confident building upon existing stuff since we have tests that are saying it works.

And it goes without saying, feel free to ask me anything : ).


Thanks for this, very interesting.

You didn't mention mocking in the article - to what degree do you use mocking, and where does it fit in the testing pipeline?


I have written a library called gently. This library lets you define a series of expected function calls spanning multiple objects (since overall order is important to us).

Whenever you define such an expected call, gently returns a closure that you can use to inject this expectation in the right place.

From my understanding this is a hybrid between mocking (where you have an object with an expected call / state sequence) and stubbing (where you have various pre-recorded answers to function calls).

But generally I found the semantics of the various TDD methodologies very difficult to translate into actual code. That's why I'm using as little abstraction as possible.
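Roughly, the closure injection looks like this (a sketch; the uploader module is made up, and gently's exact API may differ):

    var assert = require('assert');
    var gently = new (require('gently'))();

    var uploader = require('./uploader'); // hypothetical code under test

    // gently.expect() returns a closure; injecting it as the callback
    // asserts that it actually gets called before the process exits
    uploader.process('video.mpg', gently.expect(function done(err, result) {
      assert.ok(!err);
    }));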


Is gently open source or could it be?


Yes, http://github.com/felixge/node-gently .

However, it's still rather minimal and I'll add more stuff to it now that I actually know what I need to do with it frequently.


Test driven development, to me, starts as the harness you use to write your code in the first place. I'm not sure how people write solid code with any kind of speed without a harness testing/driving it. Why not make your harness your tests?

Everyone keeps talking about how it is more work, which confuses me. I don't make a religion out of test coverage, so maybe that is the difference? As in, I don't write a test for every basic thing, like an accessor, for example, because I feel like there's a point of diminishing returns.

edit: To those downvoting, I'm interested in a dialogue and hearing your thoughts.


The costs for TDD greatly vary. Trying to do TDD for GUI / frontend programming is much more expensive. I also find TDD very hard when working within a framework that was not written using TDD.

So of course, if your environment makes TDD as cheap as ours does, I'm with you and think you have to be a little crazy to not at least try it. But it's not for everybody / everything by default.

About skipping tests for accessors and stuff: that's a slippery slope. I'm afraid it would lead me to skip more "obvious" tests, which in turn makes it harder to write the right test when this stuff actually is part of a bug in the future.


Agreed, my comment above was not GUI/frontend-centric and that is a different case.

To me there is another slippery slope for TDD in getting bogged down. I'm usually interested in TDD in as much as it can be a competitive advantage. The point where I feel like the returns drop off and I'm burning my man-hours and paying too much for the TDD insurance on very predictable parts is where my tests get sparser.

Keep in mind, the degree of TDD insurance I'd pay for is different for every project--for a Mars Rover you bet I'll test my accessors.


> Keep in mind, the degree of TDD insurance I'd pay for is different for every project--for a Mars Rover you bet I'll test my accessors.

Sure. If you write Mars Rover software you'll also want somebody to review your tests to make sure you didn't get the spec wrong. You can go infinitely deep here, and I'm not telling you which tradeoffs make sense for your app. I just know which ones I can live with for ours : ).


One of the most pleasurable benefits of TDD is that I can refactor code and not worry that I'm going to break something. On larger projects this is really liberating; I can develop with the same freedom that I did on day 1. It means that a code base can evolve smoothly and naturally, assimilating new requirements. Without that safety net you end up with code that gradually becomes a poorer fit for the requirements, changes becoming huge snaps as they are delayed over the fear of regressions.


Yes, plus you'll be doing less rewriting because you spend more time and thought on getting it right in the first place.


Is that specific to TDD or just having a comprehensive suite of tests?


There are two aspects to it. First, the coverage, which gives you the safety net. Second, specifying behaviour before implementation, which allows you to maintain tight alignment between your requirements and code as they evolve. According to the BDD guys it's the second aspect that is more important.


I'm not sure why my comment has been downvoted, it answers the question and to my knowledge is accurate. Perhaps I can expand on something for you?


I'm extremely happy to upvote first-hand TDD articles whenever I see them on here.

There have been quite a few nice stories, both pro and con. I'd like to see more folks talk about competitive advantage. After all, code doesn't just exist by itself -- it's supposed to do something. Does TDD help technology make people's lives better? Or is it just a big PITA that ends up with more maintainable code? There's nothing wrong with that, but it's the kind of perspective question that can be a critical success factor for startups.


My feeling is that Behavior-Driven Development (which, as described below, is just TDD "done right") is what pushes this competitive advantage aspect. If you write down your user stories, and nothing in those stories makes you say "Oh, that's cool!", you need to re-evaluate what you're doing. If the core requirements of your product don't do anything interesting, it's unlikely you have any competitive advantage.

I've been thinking a lot about TDD and BDD and [insert letter here]DD recently as part of ongoing research. One thing that got lost when unit testing was proposed is that it's your requirements that stick with you through levels of abstraction... the specification (i.e. what you implement in code and unit test) is a by-product of having to talk to a computer to express a model that meets those requirements.

Unit testing, by itself, just validates that the model does what it says it does, but does nothing to assert whether the model actually does what it was supposed to do to meet the customer's requirements. That's what BDD ensures.


I think that the nice thing about BDD is that it reduces the gap between requirement gathering tools (use cases/scenarios, to be specific) and user acceptance or integration testing.

But it doesn't banish the problems of inconsistent or incomplete requirements. And it doesn't drive away the solution complexity explosion as you descend into the code. And it certainly isn't as useful for diagnostic debugging on changes to the code.

But it's in the user's language (-ish), so it's more likely to be funded and supported. That can only be a good thing.


Regarding competitive advantages: We are working on a new feature that will allow us to encode video uploads while they are still uploading. This change will affect the most critical aspects of our code base and it's very easy to introduce subtle bugs. TDD is helping us to make fundamental changes like this without being too afraid of breaking things. I don't think we would undertake this feature if we didn't have the tests we do. (And TDD is the only way to get tests you really trust)


That to me sounds like a fairly generic reasoning for having tests. Moreover, your blog post reads more or less as "you must have tests", nothing about being "test-driven". I wonder if there were any TDD-specific advantages, and if they really outweigh boredom/inconvenience of having 1.6:1 test-to-code ratio.


> (And TDD is the only way to get tests you really trust)

Can you expand on this point? This is one aspect of TDD (which seems to focus most intensely on unit testing) that I've never understood. Why wouldn't full-stack acceptance testing be just as effective?


Full-stack tests will have a combinatorial explosion in the number of tests you have to write to get decent coverage, whereas isolated unit tests have a linear (with some constants) increase.

Also, full-stack acceptance tests take ages to run (by definition they will be doing some complex computation). As mentioned above, if their test suite runs in under 5s, that's a huge advantage. Imagine having a light on your computer that goes red any time you make a mistake. They effectively have that.
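To put illustrative numbers on the combinatorial point (mine, not from the article): a pipeline of three components with five behaviours each needs roughly 5 + 5 + 5 = 15 isolated unit tests, but covering the same combinations end to end means up to 5 × 5 × 5 = 125 full-stack tests.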


You can get value out of writing a test after the fact. But the value is higher for a test that you have written before the code because you have seen the test go from failing to passing. This way you know it will also fail if you break the code in the future.

I also think writing the test before the feature will cause you to write better assert statements. If you write the test afterwards you'll tend to either assert everything (which leads to every test breaking on a small change), or to be horribly confused about what to assert and what not.
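A sketch of that difference in assert discipline (made-up helper):

    var assert = require('assert');
    var user = createUser('bob'); // hypothetical helper under test

    // written after the fact: asserts the whole object, so every
    // unrelated change to createUser() breaks this test
    // assert.deepEqual(user, {name: 'bob', id: 1, admin: false});

    // written first: asserts only the behaviour this test is about
    assert.equal(user.name, 'bob');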


I think it would also behoove you to start using BDD in your development process. With BDD, you get a one-to-many correlation between behavior and code. This also greatly improves quality. I say this because quality can only be measured against requirements, and BDD forces the developer to have the behavioral requirements upfront.

BDD allows you to take a given behavioral requirement and see all the code that supports that requirement, and vice versa. If a customer ever asks, you can point to exactly where you are fulfilling that behavioral requirement in the source code.

And BDD languages such as Gherkin are easy to read for most non-technical people.

Of course, this isn't a replacement for TDD.


I take the definition (proposed, usually, by BDD people like David Chelimsky and the like) that BDD is not fundamentally different from TDD. BDD is just TDD done right.

As such, it makes little sense to propose using BDD "instead of" TDD. If you're doing TDD right, there's no benefit to "switching to BDD", other than a monstrous task of rewriting your tests using another set of sub-frameworks...


I think there is quite a big difference, fundamentally, between BDD and TDD (though they are not mutually exclusive and do overlap).

Let's try a quick example of a google maps type app: locating where you are and where you want to be.

From a BDD perspective, you get a mockup of the screen (it could even be some non-visual process) and this represents the behavior you want from the product: the behavioral requirements. From this mockup we can pull out all the behavior on that screen (or within that system).

    Feature: In order to see if I am where I want to be
      As a traveler
      I need to know my location

    Scenario: At My Location
      Given I want to be at Lon XXXX Lat YYYY
      And I am at Lon XXXX Lat YYYY
      Then I should see "You are There"

    Scenario: Not There Yet
      Given I want to be at Lon XXXX Lat YYYY
      And I am at Lon XXXX Lat YYYY2
      Then I should see "You have 4 meters to go"

From this behavior we can then build out the system, which inevitably leads to fine-grained, software-specific behavior that should be tested using TDD, such as:

it should "calculate the distance between two points correctly"

In the case of BDD, we don't worry about how the two points are calculated, nor whether it is even done correctly (we don't need coverage of every fringe case in BDD). We are able to assume that the underlying calculations will be correct. We do need to make sure that the behavior of the system as a whole is working correctly. In this case, we are assuming that if the calculation is incorrect we will not see "You are There" on the screen. Why was it incorrect? Doesn't matter. That specific behavior, the calculation, was driven by tests.
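That fine-grained test might look like this in Node (hypothetical geo module):

    var assert = require('assert');
    var geo = require('./geo'); // hypothetical module under test

    // it should "calculate the distance between two points correctly"
    assert.equal(geo.distance({lon: 0, lat: 0}, {lon: 0, lat: 0}), 0);
    assert.ok(geo.distance({lon: 0, lat: 0}, {lon: 3, lat: 4}) > 0);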

Personally, and for efficiency, it is really important to use both in a project and not look at BDD as "TDD done right."

Just my two cents.


BDD assumes less/non-technical people can help you write the tests or confirm that you have met the requirement. This usually involves a Business Analyst or Product Owner/Manager/Client.

While it's nice, I found that usually these people are either:

1) too lazy to do that (cause they pay you to do the technical stuff) or

2) don't have time cause they have things to do as well.

I'm not suggesting that people should not do BDD or Acceptance Tests. But in situation where you cannot do that, it's not a big loss. TDD on the other hand, is the minimum requirement.

As a side note, Transloadit may not have these people/roles, so it's not that big of a deal for them.


How about from this perspective:

With regard to BDD not being a big loss:

BDD tests the complete behavior of a system. You can refactor the system a hundred different ways and completely change the implementation if you like (or even start using a completely different programming language), and as long as the BDD tests continue to pass, you know you have NOT changed the features and behavior of your system. You know, 100%, that you have not broken the system as a whole.

With regard to 1 and 2:

1) Every product owner is different. Some like to write down what they want; some don't. In no way does this give a developer an excuse not to develop to specifications/features/behavior. Part of that "technical" stuff you are paid for is to assure that you have actually developed the exact behavior that was asked for. Nothing more, nothing less. At the very least, you have something a BA/PO/Manager/Client can look at when they let you know you gave them something they didn't want.

2) If they don't have time to define the behavior of your system (or to delegate that task), then you really don't have time to understand their customer or to find the set of features that gives them the best ROI.


Is BDD = TDD? Or more like Acceptance Test?

To my understanding, we need both. As in, we need TDD for unit-level, and BDD/Acceptance Test (like Fit/FitNesse) for high-level.

Is this correct?

Pardon me if I'm not really up on these various testing techniques other than TDD, because

1) It's damn hard to make people even start writing unit tests, let alone doing TDD

2) It's damn hard to make people do TDD

3) It's even harder to argue that some people are actually writing unit tests as opposed to integration tests (especially those who use a NoSQL database)

4) We're not even at the point where we know how to write good unit tests (yes, we know some rules, for example not touching the database, and so on, but I bet there are other, better ways still unexplored in regard to writing good unit tests)

So with that in mind, I'll focus on one problem at a time. Once I feel that TDD is at the "proven level", then perhaps I'll check BDD.


I think BDD would be a nice addition to our system tests. However, I have not worked much in Ruby, and from the outside it is hard to tell whether those cute DSLs are more poetry or business ; ). (Don't get me wrong, I love cute code - but not at the price of simplicity)


BDD isn't a Ruby library; it's a slight change in focus on what you test. As your sibling says, it's really just "TDD done correctly."

Here's the primary text: http://blog.dannorth.net/introducing-bdd/


I know it's not a Ruby library, but I also know I don't know a lot about it.

Thanks for the link, I'll try to see if I can extract some usable concepts from it : ).


John Hughes (author of "Why Functional Programming Matters") is mitigating some of the problems of TDD with QuickCheck (http://en.wikipedia.org/wiki/QuickCheck).

Video: http://www.infoq.com/presentations/The-Joy-of-Testing

I've been using it for the last couple of days. You'll have to excuse not being able to see the demo properly. InfoQ fail. Stick with it, there's some really cool stuff at the end.

Another video: http://video.google.com/videoplay?docid=4655369445141008672#


Until recently I was looking at TDD as a necessary evil - yes, it saves your ass a lot of the time, but writing tests doesn't feel fun or very productive. I don't know if I can speak for others, but for myself there are two things that I like most in programming: 1st, creating cool new features; 2nd, polishing/refactoring the code until it becomes a work of art.

And what I've found out recently is that TDD helps a lot with the 2nd point, but only (again, at least for me) if you aim for 100% code coverage. Yes, I know it sounds silly and pointless, and code coverage doesn't mean that much, but anything lower than 100% makes you too relaxed about the quality of the code and leads to skipping the parts that are harder to test - and most often they are hard to test because they are badly written (bad architecture, bad API decisions, etc.).

When you go for a 100% CC you have to examine all those parts, and change the code so it actually can become testable, and it's often hard and requires making big changes in many places. It's challenging, but challenging means fun. And you learn a lot, but learning a lot is fun too. So, double win (triple, if you count easier bugspotting)!
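An example of the kind of change this forces (made-up code): a hidden dependency becomes a parameter, so the hard-to-reach branch is suddenly trivial to cover:

    // before: untestable without waiting for real time to pass
    // function isExpired(session) { return session.ends < Date.now(); }

    // after: the clock is injected, so both branches are easy to hit
    function isExpired(session, now) {
      return session.ends < now;
    }

    var assert = require('assert');
    assert.ok(isExpired({ends: 100}, 200));
    assert.ok(!isExpired({ends: 100}, 50));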


> Writing those tests feels like proving mathematical theorems

Uh, not really. Have you ever written tests for functions in math? It's a pain in the ass once you get past 5 variables. This is why we have proofs in the first place, to prove that something is true for each and every case without having to run through every test...


I really like the author's assertion that writing tests feels like proving mathematics. I have always been fascinated by this connection. When you look at the history of mathematics w/r/t the discovery of calculus, there was a 150+ year period in which people could use it to do work but nobody could prove it was sound. That didn't happen until the early 1800s, and the technique involved bounding errors - I'm referring to the delta-epsilon proofs from analysis. To me this is exactly what is happening in computer programming right now. In my opinion automated testing is some of the most cutting-edge stuff going on right now.


BTW Felix was interviewed about Node.js at http://chaosradio.ccc.de/cre167.html (German)


I'm building a startup with Python and Pylons. Pylons comes with paster (a Python web server) and a small unit-testing hook for your app; Pylons makes it so easy to test your application, I love it. You can write two types of tests: functional tests and unit tests.

My functional tests are the ones that actually make a web request (by spawning paster, loading the application, and pretending to be a user) - my lowest "units" there are my controllers. I have to do some special trickery to handle sessions, but it isn't bad; beats having no tests.

My "unit" tests are a bit less fine-grained than yours are. I've had difficulty narrowing tests down below the module level with anything but libraries; primarily because models (the culprits here) have a few interdependencies. I do my best to keep my models orthogonal but not all of them are, so most of my unit tests are basically testing models as they operate on the database tables and some of their assistant methods.

If I write a library, I generally try to determine whether it can be its own "module" or should be a module within the application namespace. For example, I wrote an authorize.net payment library and ultimately decided that it should have its own tests, separate from the application, which led me to split it off as its own project (PayPy now, with adapter support for some other gateways). Writing some of the library modules in their own namespace with their own tests keeps things clean and orthogonal.

Overall, I would say this: I'm not as OCD about writing tests as you are, because I'm the sole developer and there are too many features to build to be writing functional tests every time I build a controller. My philosophy is this, though: the models should all be covered, no matter what, since they are (usually) the messiest bit of logic in a web application - translating relational data into object-oriented data has a lot of side effects. Having all models covered ensures that the majority of my logic that interacts with the database is solid. It also keeps in check any schema/model changes that may affect other models I've forgotten about.

After that, I try to make sure any critical controllers/pages/functional pieces are covered - the really trivial stuff I worry less about and just make sure it "works right" by doing some manual testing myself before pushing it.

I love functional/unit testing - I will never go back. My old way of development feels like I was lost in the dark ages of my career or something...


Is it really a unit test if you make a call to a REST service? Is that not now an integration test?


Yes, that is an integration test. You need both.


nice one!



