Ask HN: Is TDD/BDD hampering your startup's agility?
156 points by bdclimber14 on Feb 20, 2011 | 122 comments
I'm a fairly big fan of agile software development, but at the risk of being deemed an agile heathen, I'm beginning to doubt the benefits of test-driven development (and BDD) for a lean startup. Developer friends drinking the agile kool-aid swear by BDD. In theory, it's great--saves you time by identifying bugs at lower levels and guarantees that everything works.

In practice, though, I find myself spending a disproportionate amount of time writing and getting test cases to pass compared to just writing good code and testing in a browser.

- For functionality that I know works, I sometimes have a difficult time writing and passing tests, especially when the functionality is unique (e.g. Facebook login).

- When we pivot or iterate, it seems we always spend a lot of time finding and deleting test cases related to old functionality (a disincentive to iterating/pivoting; the software becomes less flexible).

- Test cases (namely RSpec) are just plain slow to run (seconds staring at a screen before getting positive or negative feedback).

- There always seems to be 3-5x as much code in a feature's set of tests as in the feature itself (it just takes a lot of damn time to write that much code).

- Most of the code in a lean startup is a set of hypotheses that will be tested by the market, and possibly thrown out (rewriting test cases for slightly different requirements is even harder than writing them from scratch).

- Refactoring breaks a lot of test cases (mostly with renaming variables).

I do think TDD is great for client work, but for lean startups, I'm not so sure.

For a startup that is iterating very frequently and trying to reach product-market fit, I find TDD to be harmful; it actually impedes agility. Speed trumps reliability here.

Like security (budget vs. security), are speed and reliability two ends of a continuum? Where is your slider as a lean startup?




I'm an agile coach and startup junkie.

TDD/BDD doesn't fit the mold of startups. Here's why:

TDD/BDD assumes you know the problem and are coding to create a solution. In startups, however, you do not know the problem. Sure, you can imagine some kind of customer that might want some kind of code to do something or other. But that's all pipe dreams. You have no idea if method X is useful or not. Sure, you might know if method X meets the requirements of your imagination, or of the architecture dreamed up in your imagination to deal with your imaginary customers and their imaginary problems, but you really don't know.

So it's a waste of time. Startups are all about providing value -- not flexibility, not bug-free code, not code you can hang on a wall and take pictures of. Value. If you want to be the guy who can make business solutions happen, the guy that customers can come to with any problem and have you make it all happen, you need to bone up on this stuff. But in the business world, you've already got the money; your job is to make the solution happen. In the startup world, you don't have the money yet. Big difference. Big, big difference.

Look at it this way: 19 out of 20 startups fail. That means that odds are that you will never see this code again. You'd be a fool to spend any more time on it than absolutely necessary. But the math works out completely differently in the commercial world, where most things you write stay around forever.

What I found over and over again with Agile is teams and individuals buying into the marketing/herd mentality of agile and forgetting about the adaptive/iterative nature. Everybody wants to either use a recipe book or just repeat their last project that they thought was really cool. "True" agile means ditching whatever isn't working. Pronto. There are no sacred cows. Everything is on the table.


TDD/BDD assumes you know the problem and are coding to create a solution. In startups, however, you do not know the problem.

This seems like two ideas conflated into one: the business "problems" (1) and the problem you're solving in a particular piece of code (2). These aren't usually directly related. For example, if I wanted to build a YouTube killer, the TDD process isn't relevant to the high-level "problems" I want to solve (which, as you say, one might not know of yet).

TDD comes to play (from a developer POV) when I actually start to write some code that, say, transcodes a video or provides an authentication system. In those areas, the problem is obvious and contained and TDD can work well.

That means that odds are that you will never see this code again. You'd be a fool to spend any more time on it than absolutely necessary.

Which does not necessarily mean TDD is "a waste of time". If practicing TDD in a particular situation results in fewer hours spent developing a feature, the upfront "cost" of those hours is not a waste; the alternative is spending more hours debugging your way through building a non-tested equivalent.

A developer should have a feel for which way works for them. In my case, I know that the time I spend is ultimately lowered through using some TDD principles (though not all) vs tiresome debugging of untested code. The reward cycle of TDD is commonly overlooked. "Code->Yay!->Code->Yay!" beats "Code->Code->Code->Code->2 hours of debugging->FFFFUU!!" any day.

"True" agile means ditching whatever isn't working. Pronto. There are no sacred cows. Everything is on the table.

Agreed, but that runs counter to making absolute statements like "TDD doesn't fit the mold of startups" and that "it's a waste of time." It can fit and it can save time (just not always, sure, or if you're 'doing it wrong' for your situation).


Ditto. Couldn't agree more, in fact, I was going to write something very similar.

At a high level, startups do need a TDD philosophy. For startup business ideas, the market is the test suite (to take the software analogy further); it's already there, and changing the execution plan to make sure you pass those tests is vital.


I'm not sure I agree that the market makes a great test suite. It kinda hurts when you fail.

But yeah, there are other things you can use, like A/B testing (for web apps). You can automate "code smell" detectors and get a continuous graph of how good your code seems to be.

Of course, things like converters and math functions (anything non-GUI) can benefit from unit tests.


Any contained problem will benefit from TDD, given that the code will be revisited by others and by yourself later in time. For a startup testing hypotheses, this is rarely the case, and most code will end up in the junkyard.


Though usually the same thing in practice, a test-first environment is not conceptually the same as test-driven development.

The reward cycle of TDD and the focus that TDD brings to writing code are the reasons I use TDD. The added confidence in correct code is also extremely helpful.

As I see it, if one were "in the zone" from 8-5 for a week, and had a clearly defined and understood spec, most software wouldn't take more than a few weeks to build.

But being "in the zone" and understanding a well-defined spec aren't really that easy. Most time spent (in my experience) goes to understanding the spec and wrapping your head around the problem. This is a reason why pair programming can be so helpful.

Regression testing aside, writing tests is valuable to me mostly because it helps define the problem, enabling me to write code more quickly.


I've worked on an awful lot of teams and projects over the years. This discussion, and the avoidance or embracing of the process flavor of the month, is not new. Agile has been around since people stopped drinking the RUP kool-aid and came up with XP (a process that keeps meetings short by standing on your feet!). RUP lost its own gravity when Rational, then IBM, acquired the IP and turned it into another legendary white whale, far from its origins in humble Objectory.

The one great thing I learned from RUP: a cocktail napkin makes a great realization document, especially when originally served under a cool drink.

Communication and being clear on what the tree-swing looks like and needs to do is all that matters. Your companies will develop culture as they grow based on their pain points and their experiences.

I take the approach of "automate, automate, automate". I also enforce regular activities at set intervals, by day of the week. I insist on profiling. I insist on peer code review, which can be done in the IDE or with a cocktail napkin; I like my developers to be able to communicate what they've just written in case they get hit by a bus.

As a leader, not just the coder, you DO have to be concerned that you're writing something that can hold the weight of a business on top of it. Would I worry about TDD or BDD before my Series A? No. But I would go back and reiterate and reinforce everything written prior, shortly after.


the TDD process isn't relevant to the high level "problems" I want to solve

In most businesses the opposite is true. Test Driven Development isn't really about testing low-level functions, it's about making sure the code and the business logic behave in the same way. TDD really comes into its own when you start using the tests as documentation on how the system is supposed to perform. A company using TDD can very quickly adapt to change because they can be confident in their code.

The reason TDD isn't suited to startups is more that it's a heavy process which doesn't fit well with a small, tech-heavy group of people who aren't selling code.


One of the problems I face as coach is that people ask me simple, yes/no questions and I always answer "it depends". It's not that I'm trying to be evasive, it's just that life is full of little complexities. So I took a bit of artistic license and made a generalization. As you point out, we could speculate all day on various scenarios where TDD/BDD might be useful. That line of reasoning was too much for a short comment on HN, though.

One of the things I really don't think you get -- something I had an extremely difficult time getting -- is just how worthless your code is. Startups are not rewarded for writing good, maintainable code. Hell, they're not even rewarded for writing code at all. There's a lot of folks that say to advertise, create a vapor-ware product, and only once you know for sure people are willing to pay money for it should you start programming. Whether you like that idea or not, it shows just how far down the line programming is in the scheme of things for a startup. "Maintainable" or "Great" code is even further down.

Startups are about making things that people want that can scale out to huge numbers. Coding gives you that ability to scale out -- in certain instances and under certain situations. But you could just as easily write 10 lines of javascript and make a million dollars as you could write 100K lines of C. In fact, there's another huge school of thought that says the more you code the farther you've probably drifted in your mind from where the actual market might be. Each little test you create is further reinforcing in your mind -- that little code->yay cycle -- that what you are doing is cool. And it's probably not. "Yay" is people paying you money. Not having a test run.

And that gets to agile principles, which I think are very applicable. Do a little, test the market. Do a little more, test the market. I have been doing that for a few months, and I've found that most people who want stuff couldn't care less whether it involves programming or not, or how well it's programmed. They just want stuff. If a guy is drowning, he's perfectly happy with you throwing him a lifesaver that falls apart and is crappy -- as long as it saves him from drowning. You might not even need a lifesaver -- you could do something else. In fact, the more you speculate on what the drowning guy might need, the more off-course you are. That means that any structure you put in your code early on is smoke, based on pipe dreams. The iterate-test cycle for me is what has verified beyond a doubt how low on the totem pole programming is. Find something people want. Period. More customers, less anything else.

Coming from many years of loving to learn how to be a better craftsman with my code, kinda sucks, huh? All those guys telling you what a great programmer is and how to be one, and then Average Joe Sixpack doesn't recognize how cool your TDD and programming language are. Dang customers. :)


Can't argue with what you say here - thanks. Just wanted to give a thumbs up.


Agile is meaningless at this point. It started to die the minute the first methodology guy with no actual coding experience thought it would be a good marketable product for his portfolio. It's been diluted and redefined so many times, and so many people are so quick to swear by their one true definition, that it's an impossible term to even discuss rationally anymore.

My feeling on automated testing is that it's far more valuable and saves more time than people who are inexperienced with it can understand, and yet it has shortcomings and limitations that go far beyond what any zealot is willing to admit.

I do encourage every developer to spend 3 months doing pure TDD, just to learn how to test. It's not that testing first inherently makes code better (it doesn't; test code is meta), but it's the fact that to realize the value of testing you have to be good at it, and to be good at it you have to practice. Once you are good at testing you realize that in many circumstances writing an automated test takes exactly the same amount of time as manual testing, plus then you have a regression test.

As far as test coverage is concerned, if you are coding a medium to large application in Rails, you should have base coverage on every controller, period. This is a good basic sanity check that is necessary for a language as dynamic as Ruby (if you're using Java it's an order of magnitude less important, if you're using Haskell, two orders of magnitude). The only excuse is if you're really sure that this code is going to either live on in exactly the same state or be erased. The fact that you're a startup doesn't mean that tests have less value. To the contrary, if you are iterating quickly, you need a safety net when you are refactoring. Once the application reaches a certain size, you will never have time to add test coverage, and there will be certain classes of changes that become impossible because they are just too risky.
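
To be concrete, base coverage can be as little as one example per controller (a sketch in RSpec 2-era syntax; ProjectsController and its action are made-up names):

    describe ProjectsController do
      # Bare-minimum sanity check: the action runs without blowing up.
      describe "GET index" do
        it "responds successfully" do
          get :index
          response.should be_success
        end
      end
    end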


"if you are coding a medium to large application in Rails, you should have base coverage on every controller, period"

I actually find this to be a huge time sink for most projects. Usually in a Rails app for a client there are only a few fancy controller methods that do anything outside of typical REST scaffolding (there's no way I'm testing all of the typical admin CRUD). Add in something like inherited_resources using thin controllers and you're wasting even more time. IMO, only write tests for functionality that's non-standard, which is probably going to be a very small percentage of your controller code.

For client work I find that most of the complicated or non-standard functionality happens on the client side, and rarely on the rails side. Which means a huge portion of my tests would just be fluff (like commenting method names and params manually) if I were to test all of my controllers.


Functional tests in Rails can be cheap integration tests. What happens when one of the models touched by one of these thin REST scaffolding controllers has an incompatible change introduced? The person making the change may not even know about the controller in question. Who's to blame when it blows up in production? Writing tests to cover every conceivable failure condition is indeed a time sink, but the first test is a very valuable sanity check that pays dividends over time.

Also, I have to question how much time you save by not writing tests for boilerplate. How much boilerplate is there, really? Most business code isn't exactly novel or exciting, but neither is it scaffolding. Even if it is, you should just crank it out and be done with it, right? It shouldn't be taking a large percentage of your time, and neither should the tests.


I've always thought it ironic how inflexible agile was, at least based on the perception from agile purists I'm exposed to.


What I find is that I teach agile as being results- and value-driven first; everything else doesn't matter. All the stuff like TDD and BDD are just canned versions of patterns other teams have found useful -- each team can take them or leave them (assuming they understand them first -- many teams are all too willing to ditch things they know nothing about. Helping teams make these decisions is a big part of coaching).

The fascinating thing is that once I teach that it's all flexible, guess what? People come in and make it all rigid and orthodox anyway. It's not Agile that's the problem, it's the people that, over and over again, just beat the living hell out of agile trying to make it into some rigid 1-2-3 formula for success. No matter how I beg and plead with them not to do this, still they persist. Strange.

I'm left with the conclusion that the basic problem in software development is people's fear of having a truly unpredictable, flexible, and open way of working -- even the agile adherents (especially the agile adherents, it seems). That's simply too much for some folks. They'd rather have a fixed list of things to do each day, even if they know the list is full of crap, they are taking ten times longer than they should, and they are doing busywork things that provide no value. As an example, I've had teams be very successful keeping their list of things to do on post-its on a wall. Then I leave for a year and come back and they've got some kind of spreadsheet monstrosity that has hundreds of items nobody can remember. Nobody likes the new system (except for perhaps the person who runs it, who will also admit that they don't like it, but only in private), it doesn't fit their needs, it's a huge time sink, yet they don't know how to make it stop. They are prisoners of their own need for fixed systems and over-engineering everything. This is because somehow they feel by over-engineering things that they make them "safer". Even though at some level they realize that this is not the case at all. Very strange.


[deleted]


Accidentally upvoted, meant to downvote (reading on an iPhone, go figure). Who the hell posts such a retarded comment? Truly evocative of the vicious anti-intellectualism of our age.


It's [deleted] now. What was it that drew such well-worded ire?


@pragdave recently gave a keynote speech at the Magic Ruby conference and made a big point of saying "Agile is not a noun." Agile isn't about subscribing to practices and then writing everything in stone. It's about doing what works for your situation.

So don't blame 'Agile' for being inflexible, blame the people who are using it for not actually being it.


If that's what Agile is, then Agile is just common sense. So, as someone else commented, it apparently has been diluted down to nothing.


I keep hoping somebody recorded a video of that talk!

It was great, and I always want to share it when the concept of 'Agile' comes up, but I can never find a way to sum it up adequately.


If Agile is about doing what works for your situation, how is it different from magic pixie dust solving all your problems? :)


The rub with agile is that people conflate process with communication. The things of agile (hopefully) expedite communication; however, they are only as good as those who guide them. So, perhaps, it's more precarious.

It's not a blue pill.


I've never quite understood the anti-TDD attitude around here, but an experienced developer and coach such as Mr. Markham being so confused about the scope to which TDD/BDD is applied is eye-opening.

The kinder gentlemen above have explained why the parent is fundamentally incorrect.


A big +1 to that. Most of the successful startups I know have literally 0 tests for the first few years while they iterate.


TDD/BDD assumes you know the problem and are coding to create a solution. In startups, however, you do not know the problem.

I think this is where these posts on HN always end up talking at cross-purposes. "Startup" can mean many different stages. I definitely agree with this. It's one of those things where when I think "startup" I think "they've got a problem, they're solving it" whereas some people read it at that very early stage instead. TDD doesn't suit that stage, because, like you said, you don't even have an idea of what success looks like.


I'm not sure what your suggestion here is instead of TDD. Just wing it?

Your whole premise is based on the idea that your startup is going to fail. In that case, why write any code at all?

If your startup doesn't fail then you've really made things extra hard on yourself. One of my favorite things about working on my startup is getting to learn and apply good practices.

It's just as easy to write good code as it is to write bad code. I find that TDD speeds me up not slows me down. It's not as much overhead as people seem to suggest.

You can write new code and then keep running the program to see if it works. Or you could write the code and keep running a test until it works. The test is typically faster for this because you don't have to deal with the rest of the program which is unrelated to what you're working on. Once you have working unit tests the implementation is typically trivial.
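
For example, a focused unit spec exercises one object directly, with no server, browser, or login flow in the way (a sketch; PriceCalculator and its API are invented for illustration):

    describe PriceCalculator do
      it "applies a 10% discount to orders over $100" do
        # Runs in milliseconds, versus clicking through the whole app.
        PriceCalculator.new(200).total.should == 180
      end
    end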

In practice, I don't write tests for everything but I code in a way that means I could test everything if I wanted to.


Note: This response is about software TDD and not some high level TDD

If you don't know the problem, then you are just winging it. Writing software is always done with a goal (can it be otherwise?), and thus you should be able to write a test (first) that will tell you if it does what you expect. If you expect it to record a click stream, then it records a click stream, or whatever.

When I was new to TDD, I too found it to be very cumbersome. It took me some time to realize that I don't have to figure out the entire design before I write any test code. I can start with a test, then write the functionality that it is testing and when I realize that this is not what I want but something else then I change my test to test that new thing. Iterate. That is what agile is about.
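
To make that concrete, here's a sketch of one such iteration (ClickStream and its methods are invented names):

    describe ClickStream do
      # First pass: just prove it records something.
      it "records a click" do
        stream = ClickStream.new
        stream.record("/pricing")
        stream.count.should == 1
      end

      # Later you realize you actually need timestamps, so this test
      # changes first and the implementation follows.
      it "records when the click happened" do
        stream = ClickStream.new
        stream.record("/pricing", :at => Time.utc(2011, 2, 20))
        stream.last.at.should == Time.utc(2011, 2, 20)
      end
    end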

Discovering that you want something else happens whether or not you are doing TDD.


This whole thread should be required reading for startups, especially bootstrapped ones, but you made the key point in a sentence:

"19 out of 20 startups fail. That means that odds are that you will never see this code again. You'd be a fool to spend any more time on it than absolutely necessary."


TDD/BDD assumes you know the problem and are coding to create a solution.

Actually, no.. TDD is almost never used when you fully know the problem!

Look at this way.. would you trust that free source clone of facebook with your data knowing the site security holes i the first release simply because they did not test using something like TDD/BDD? I would not..

That being said, TDD/BDD is not agile! Neither is waterfall!

WHAT?!

Agile is critical analysis of processes and tools used in the dev process to reach a set of dev and management objectives and goals by modifying those processes and tools.

You got some of it right and some of it wrong


Considering how many hours I've saved not having to "debug" after drinking the TDD koolaid, my answer is no.

The problem I notice, though, is many people will either TDD the "right/official" way or not at all and that's a false dichotomy. If a particular type of testing is slowing you down or causing you to be less productive, don't do it! But stick with the tests and processes that do allow you to be quick but without abandoning testing in favor of the old "code and pray" approach.

For example, on a recent project I built it almost entirely in the models (using TDD) before even touching controllers or views. It only took a few days to tack those on at the end, and I didn't bother doing any testing of them beyond some cursory "make sure the site generally works" integration tests. (I see the value in controller and view tests but.. well.. they'd have slowed me down, and the models were far more important.) In contrast to that, I have a 5+ year old project I retroactively added lots of integration tests to. The models are untested, but at least I know if a change screws the app up in a big way, because so many different use cases are tried in the integration tests.

TLDR: With TDD, stick with the stuff that works and tone down the stuff that doesn't. Don't feel you have to do things the official/"cool" way - come up with your own processes.


petercooper has nailed it.

Disclaimer: I'm an agile evangelist, spent years at startups, and I love quality code.

Be pragmatic in your adoption of any software engineering process, agile methods included. Tossing TDD/BDD out the window because you believe it's slowing you down is a mistake. Adapt it to your environment; use the aspects that work.

Remember that the goal of TDD/BDD and testing in general is to deliver useful software that works. Even your startup's prototypes, quickly hacked together, should at some level meet that criteria.

If you're writing code, you have an intention for it, a purpose, and that can be tested on some level or another. It's hard to make an argument that manually clicking around in your browser after writing your implementation is superior to having some form of automated tests in place beforehand.

If you're in the exploratory stage of a startup, you'll pivot often, and you'll be building a lot of throwaway prototypes. You should limit the depth of your test coverage, but you can still do TDD. Just don't be strict about it. Not every method/class/behaviour needs to be tested. You may not know the broad direction your software is going in. When you sit down to code, at least the behaviour of that little piece is known, and it's at that point you can write a test or two to express and validate your intentions.


God damn, dude. You broke it down. Very nicely said. A couple questions. Do you think that you might do “Selenium”-like testing on older, legacy systems? What type of testing would you do for brand-new systems? And what type of testing would you do for the mature, but green and healthy, kind of system?


I think Selenium and other related tools are what he is talking about when he says "integration" testing. I'm a Ruby/Rails guy, so I use Cucumber/Steak+Capybara for my integration testing.


I'm doing BDD as the lead (of 2) Rails developers at our startup, and _it's the reason_ we can go so fast.

Some differences we're doing from your situation:

We use Cucumber to cover the whole web app (but not Flash or video processing), and only have some small RSpec model specs on important methods involving billing.

Cucumber coverage is also very powerful per line of test code. We have 1000 lines of cucumber covering 5000 lines of code.

We aren't covering everything with tests. For example, I would have given up on the Facebook login test coverage and just written some tests that mock a Facebook-logged-in user, without covering the actual login functionality itself.
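
Something along these lines, say (a sketch in RSpec 2-era stub syntax; the controller, the current_user helper, and the User fields are assumptions about the app):

    describe DashboardController do
      before do
        # Skip the real OAuth dance entirely: pretend Facebook has
        # already authenticated this user.
        user = User.new(:name => "Test User", :facebook_uid => "12345")
        controller.stub(:current_user).and_return(user)
      end

      it "shows the dashboard to a signed-in user" do
        get :show
        response.should be_success
      end
    end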

If we were not doing BDD, my time estimates for each ticket would have to double because the hair-pulling debugging time would skyrocket and kill productivity.

I would also hate working on a team that didn't have test coverage, because developer B might build something I don't understand or know about, and I could inadvertently break it and we'd find out 4 days later.

Another benefit: we can ruthlessly refactor and tear out code because the tests immediately identify if something broke.

There's also more payoff for your tests over time. The longer your project lasts, the more those tests pay dividends. Even if they seem painful now, they are an investment towards maintainable code in the future.

My advice: keep going with TDD/BDD and consider Cucumber for everything but your most important business-logic methods.


There's also more payoff for your tests over time. The longer your project lasts, the more those tests pay dividends. Even if they seem painful now, they are an investment towards maintainable code in the future.

I think this is the most important thing. Tests are an investment, but once you've done them, every time you run them from that point on is free. The amortized cost keeps getting better and better.

Once you pull in a Continuous Integration engine, or even turn it up to 11 and implement continuous deployment, those tests really do pay for themselves. It's just intimidating in the short-term.

Whenever I've gone rogue and thought "Sod it, not today", it's invariably bitten me on the ass twice over the pain it would have been to write them.


> Tests are an investment, but once you've done them, every time you run them from that point on is free.

True, but they have their own maintenance cost. When features evolve in the future you have to pay to update the tests as well. Worthwhile, but something to bear in mind.


When features evolve in the future you have to pay to update the tests as well.

Very true, but when features evolve in the future, you may have to pay the debugging costs if they evolve in ways you did not intend with regard to other features.


I agree with your points, and I think my skepticism is because most of my exposure to BDD is from purists.

I worked with a developer who was pulling his hair out because he couldn't figure out how to test the SMTP emailing with gmail. We all knew it worked, but he spent a few days figuring out the test cases.

A couple points though:

- Ruthlessly refactoring is huge with TDD. However, I don't think there is much refactoring with bootstrapped, pre-revenue startups. Changes are generally functionality changes.

- Payoff over time. At a very early stage, I can imagine that shipping a day early is worth a high interest rate of payoff later.

- Solo hackers don't have to worry about other developers, again at the very early stage.

Overall, maybe TDD/BDD doesn't make sense for very early, pre-revenue startups, and should instead start after funding or product-market fit, when you need all these things?


However, I don't think there is much refactoring with bootstrapped, pre-revenue startups.

At our startup, we learned a lot during the pre-revenue stage, and twice realized that our basic architecture was stupid, wrong, and overly complex. (That was my fault, by the way.)

Both times, I took 3 or 4 days and savagely refactored our product, deleting massive amounts of code. When I was done, it was much easier to make changes.

Mind you, in the end, this wasn't enough to save us: We ran up against the reality of a short runway and a long enterprise buying cycle. But our product was very sweet. :-)


I always say, somewhat tongue-in-cheek and somewhat intentionally provocatively, that if you can use stuff like TDD and pair programming, then you're probably working on a boring problem.

And I think there's some truth to that. On a macro-level, how would you even begin to write tests for a search engine or some stock market bot or other notoriously hard problem?

    search_on("avatar").should_return("http://www.imdb.com")

    best_stock_for(:percent_return, 200).should_return("cisco")

?

These problems are inherently non-deterministic. How do you even begin to write a test for that?

On a micro-level, sure, maybe you're working on a single component, and TDD would help you come up with the interface. But if you don't even know if the answer is a genetic algo, or simulated annealing, or using Mechanical Turk, or whatever, there's really no point in even trying to freeze the interface. Which is what TDD really does, as much as or even more than it verifies the resulting code: it defines the interface ahead of time. It's a way to trick developers into writing specifications without using that nasty, imprecise, context-sensitive language known as English.

But then again, right now we're rewriting a pretty critical piece of code. We've thought a lot about how it works. We had a few meetings about the new approach. Wrote up a quick email with a basic API. And doing pairing and TDD from there, well that's actually working out pretty well. And I'm confident we're getting better code quicker because of the approach.

Ultimately it gets back to the statement that real developers ship. In some cases, BDD and pairing will help you ship higher quality software quicker. In other cases it won't, and it'll end up wasting money and time. And real developers will then use their tools accordingly, and not dogmatically.


Google has tests that read almost exactly like your examples.

Of course, the whole search engine isn't specified by tests. However, if IMDB isn't in the top 10 results for [avatar] or barackobama.com isn't in the top 10 for [obama], something is seriously wrong and a human should look into it.

The rest of your post is pretty good. Not sure why you've been downvoted.


But I imagine those are more like regression tests, right? Not tests written first, BDD/TDD-style, with the whole search engine then implemented around them. That was the implied context of that statement.


These tests would send a red flag if they failed, but they aren't good as TDD tests.

First, they are tied to a particular moment in time. Maybe next year IMDB is no longer a good source for movie information, and it's so bad that it's on page 2. It could happen. In theory, your test cases should be consistent over time.

Secondly, the TDD process is always to write code to pass tests. Well, it's pretty easy to write code that returns IMDB, but it's really hard to write a test suite that, when coded against, would produce Google. That test would look like:

    search_on("avatar").should_return("www.imdb.com") if is_really_good_result(imdb)

Once you are Google, then you should have these tests to make sure things keep working. However, I think it is really hard (in a bad way) to do TDD on hard problems.


You can't test for a particular outcome given an unknown set of input data.

If you have a known snapshot of data, you can set expectations for how you want your system to behave under those circumstances.

If you create a world where IMDB would rank highest for your indexing algorithm for the term "avatar", then you can expect that when you run a search it will be returned as the first result.
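
A sketch of that idea (every name here is invented):

    describe SearchIndex do
      it "ranks the strongest page first for a fixed corpus" do
        index = SearchIndex.new
        index.add("http://www.imdb.com", imdb_fixture_html)  # assumed fixture helpers
        index.add("http://example.com/blog", blog_fixture_html)

        # Deterministic: same snapshot in, same ranking out.
        index.search("avatar").first.should == "http://www.imdb.com"
      end
    end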


On a macro-level, how would you even begin to write tests for a search engine or some stock market bot or other notoriously hard problem?

These problems are hard because you evaluate your results with some kind of "quality score" instead of a simple "pass/fail" metric. You need to adapt your testing strategy accordingly.

One useful strategy is to define a decent metric, and try to maximize it. Let's say that you have two sources of data: (1) A list of pages that should rank highly for specific queries, and (2) a list of links that users clicked on and stayed at without coming back to click on other links < 30 seconds later.

Your goal is to write a search engine which ranks these links, on average, as highly as possible for the relevant queries. You do this with the usual techniques of experimental science: Split your data in half, develop against part of it, and hold a part in reserve for your final tests.

If you make a clever change, and your average "good link" position drops from 1.7 to 3.9, you back your change out and try again.
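
In code, treating the metric as a test might look something like this (a sketch; the engine API, the miss penalty, and the threshold are all invented):

    # Average position of known-good links across a held-out query set.
    def average_good_link_position(engine, good_links_by_query)
      positions = good_links_by_query.map do |query, good_link|
        rank = engine.search(query).index(good_link)
        rank ? rank + 1 : 100  # punish a complete miss heavily
      end
      positions.inject(:+) / positions.size.to_f
    end

    it "keeps good links near the top" do
      average_good_link_position(engine, held_out_set).should be < 2.5
    end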

One of my clients actually did something similar with mapping software. Before deploying new code (or a new version of the map data), they ran extensive automated tests, and flagged anything weird for human review. I wrote them several test harnesses, one of which discovered a situation where the driving directions took a right hand turn off an overpass and tried to merge into traffic below... :-)

Granted, these types of "tests" are essentially very high-level integration tests. But they serve the major purpose of tests: Automatically ferreting out disastrous mistakes before you ship to customers.


I think this is another misunderstanding because I worded that question so poorly. Apologies.

Yes splitting your data into a training set and a test set is a really good idea for problems like these.

But I'd still argue it's near impossible to generate decent grading first, TDD-style, with only a superficial knowledge of the problem you're trying to solve in that case.

Say you've got one obscure Star Wars character in your test set. Top results are probably IMDB and Wikipedia, and that looks good. Then you've got a slightly less obscure character in the test set, and you stick with IMDB and Wikipedia as the top results for testing purposes. But he's got a huge page on Wookieepedia. He's got a huge fansite. He's got a huge personal page. A Twitter account more popular than Ashton Kutcher's. All popular enough to bump IMDB and Wikipedia out of the top five. Now the test you wrote is broken, and you're coding to broken results.

In that case I don't think we're capable of generating good tests to define the result set FIRST, TDD-style. I don't think we're capable of specifying the application behavior FIRST, BDD-style. I think it's better to play around with the data and algorithms first, until you start to get a gut feel for the data. And then you can write some decent tests against the test data.

Same with mapping software. It's probably a good idea for the shipping product to have a test suite that does something like generate 100 random or not-so-random trips, then make sure that you can get there from here, that you can do it in +/- 5% of the time/miles we've already established, etc. But I question how much value those tests have on day one, or week one, or even month one, when you're writing the software. There's a lot of legwork before those will even come close to passing. And because you now don't have an unbroken build, people will potentially start ignoring problems with tests that should be passing in week one.

And I know you're not saying this, but every time people argue about TDD, the TDD proponents seem to think that no-one else tests. Which isn't true at all. You do need tests. The question is when and how. And the answer isn't always before anything else.


I once worked with some developers that literally did something close to what you described. In a nutshell, it was for an advanced job-employee recommendation algo. I knew how the algo should work (it was a form of collaborative filtering), but hadn't a clue how to write test cases. It was a really hard problem.


> But if you don't even know if the answer is a genetic algo, or simulated annealing, or using mechanical turk, or whatever, there's really no point in even trying to freeze the interface.

You are completely right. In this case you would start prototyping various solutions to the problem. Once you have explored the domain, you either go for implementing a certain approach using tdd, or you do more exploring.

--fg


I can't believe I came across this post. I have been fascinated by a podcast interview with Kent Beck, the creator of TDD; I've been listening to it for two days. The interviewer asks Kent Beck: Is there a time in a startup's life when TDD is inappropriate? Kent Beck responds: Yes. There is a time when you are trying to generate a lot of ideas. You need to think of a lot of ideas so that you can find a good one. In order to do that, you have to work fast -- many of the things you build just won't work out (or rather, you lose interest in them). During this phase of a project, TDD can slow you down. Those are his words, not mine. Although... keep listening. Kent has quite a bit more to say on the topic.

Find it here (scroll down to the link for "Show #74"). http://startuppodcast.wordpress.com/2010/07/10/show-74-kent-...

Or, subscribe to the show here: http://itunes.apple.com/podcast/the-startup-success-podcast/... Look for episode 74.


I've used BDD in a startup, and it increased my development speed. Here's what I did:

- I wrote the specs before I wrote the code. Essentially, I used Cucumber to define how users interacted with the site, and I used RSpec+Shoulda to define how low-level APIs worked. This rarely took longer than testing by hand: I just wrote something like "When I click on 'Sign in', Then I should see 'You are signed in.'", and that was it.

- I kept a watchful eye on the size of the tests. If the test-to-code ratio ever drifted far from 1:1, I figured out why and fixed it. A 3:1 or 5:1 ratio is a sign that your BDD/TDD process has gone way off the rails, at least in my experience. Common causes are (a) not using Shoulda to test models (see the sketch after this list), and (b) relying on controller specs when you should be using Cucumber (or Steak).

- I used Cucumber for specifying user interactions, and RSpec for testing models. I only wrote controller specs for weird edge-case behavior that was a pain to test with Cucumber. Edit: And I virtually never wrote view specs.

- Refactoring was easy, because I could tear into the code and trust the specs to report any breakage.
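
To illustrate cause (a) from the list above: Shoulda one-liners keep model specs close to 1:1 with the code they cover (a sketch, assuming shoulda-matchers and an ordinary User model):

    describe User do
      it { should validate_presence_of(:email) }
      it { should validate_uniqueness_of(:email) }
      it { should belong_to(:account) }
    end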

I agree, however, about the speed of Ruby test suites. I hate hate hate waiting for specs to run. I get some mileage out of autotest and Spork, but not enough for my tastes.


I always thought cucumber was, well, cumbersome. I stick with Rspec, which you could argue saves time. The 5:1 ratio mainly hits when testing models, especially scopes. Maybe the problem is that I've written tests for all validation, attributes and scopes. For a model, that can be a dozen lines of code, but for the test suite, it could be hundreds.

I do use Shoulda, and I don't do controller/view specs (so no mocking/stubbing). I do integration (i.e. request) specs.

I've always had a problem with testing view output like you mentioned, e.g. I should see "You are signed in." I'm always making last-minute copy changes, and sometimes would make a change like this to "You are logged in". It's a very simple change, but it could potentially break some specs. I'd have to run the suite, see what failed, look at the line numbers, figure out why, then realize it wasn't that my app wasn't functioning properly, but that the assertion was tied very closely to a transient message. Again, maybe this just isn't a good way to test.

I'll agree on refactoring. Nothing is scarier than going in and changing internals, hoping nothing breaks.


    Refactoring breaks a lot of test cases (mostly with renaming variables).
This is a red flag to me. Your tests should not know or care what variable names are used by your code.

It sounds in general like you might not be doing TDD correctly. Your test cases shouldn't be slow to run, either. Are you actually hooking up with the database in your tests, or are you isolating them properly?


The other red flag: This is 2011, there are good automated refactoring tools out there that make renaming variables/methods/classes trivial.


Yeah, there's one built into Vim!

    :bufdo %s/<C-r><C-w>/new_name/gc


This is exactly the kind of action that leads to build breaks.

Don't do it, use a real IDE.


Unless you're using Smalltalk, then no, I'm not willing to use a watered down language for the simple ability to make variable renames bulletproof.


There are other dynamic languages besides Smalltalk that have a good IDE. JetBrains provides IDEs with refactoring support for JavaScript, Python, Ruby, PHP, and others.


Not bulletproof refactoring, though.


Sorry, but you are wrong. Every IDE I've seen is big and bloated, and the useful features it provides are more cumbersome than sed, grep, and find. OTOH, vim is fast, universally available, and because of its nature much faster to code in. I look at it this way: when you first switch to vim, you will be working WAY slower than in your favorite IDE. Take one hour to learn the basics. Then you will be just as fast. Then, anything new you learn is just going to give you that much more speed over your old way.

Also, in general, it doesn't matter what you program in: be it an IDE, vim, cat, or a magnetized needle. A good hacker finds a way to write good code.


If you are writing Java code in vim, I guarantee you that you will be faster and more productive with an IDE.

I can't believe you actually think that typing all these characters (imports, renaming by hand, creating methods by hand, etc..), navigating between classes, finding symbols is making you faster.

vim is great for general text editing, but for Java, nothing can beat an IDE. Try it, you'll be surprised.


Heh. True, I do most of my development in Python, PHP, JavaScript, and C. vim is all about saving you keystrokes, though. NERDTree handles navigating files, and autocomplete plugins abound.


Refactoring support has nothing to do with IDEs. Nothing.

An IDE is, well, an integrated development environment - I find the idea of mashing my editor, VCS, build system, debugger, etc. into one tool to be incredibly distasteful.

What you want is a competent editor. Unfortunately, most editors that provide support for things like automated refactoring are bundled into IDEs, hence your confusion.


> Refactoring support has nothing to do with IDEs. Nothing.

Uh?

The strength of an IDE is that it understands your code. It knows what a method, a class, a variable, a package is. Because of that, it can give you more assistance in writing your code than any editor that bases its highlighting on regular expressions ever will.


I used the term "variables" loosely, to include things like attributes and even some methods. If you are testing a model's attribute, then the test case definitely needs to know about it.

Technically, they are hooking up with the database since the attributes are automagically generated from the DB schema (using ActiveRecord on Rails). Maybe there's a better way? I'd love to know.


I love developing in Rails, but let's be clear: refactoring Ruby is just painful at the moment. Yesterday I decided to rename a model, and it took over an hour to get the tests passing again. OK, there's RubyMine, but it's only partially effective; it misses things like renaming relationships and references in templates.

Given the difficulty of static analysis, I've been wondering if a refactoring tool might look a little different for a dynamic language. Given that refactorings are effectively replacement patterns, perhaps a find/replace workflow tool would be the way to go. The workflow might go something like: specify a broad regex which matches all potential changes. View the matches and then partition them with further rules like file type and tighter regexes. Once you've grouped the matches, provide appropriate replacements. Run the tests. Refine if required.

Obviously you could follow this workflow using command-line tools, but the refinement process is difficult because you don't retain any of the context. Having said that, it wouldn't be too hard to use existing commands to put something like this together. Looks like I've got a project for this afternoon.
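
A first cut at that workflow in plain Ruby might look like this (paths and patterns are illustrative only):

    require 'find'

    # Step 1: broad regex -- collect every potential match with context.
    matches = []
    Find.find("app", "spec") do |path|
      next unless path =~ /\.(rb|erb)$/
      File.readlines(path).each_with_index do |line, i|
        matches << [path, i + 1, line] if line =~ /old_model/i
      end
    end

    # Step 2: partition the matches (here, by file type) so each group
    # can get its own tighter regex and replacement.
    matches.group_by { |path, _, _| File.extname(path) }.each do |ext, group|
      puts "#{ext}: #{group.size} candidate lines"
    end

    # Step 3 (not shown): apply per-group replacements, run the tests,
    # and refine if required.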


That there is the big reason I can never convince myself to stick with a language like Ruby or Python. I do about one project a year in one or the other, and am always re-amazed at the lack of proper tools. It's a shame, because in other ways they're so good.

The refactor you mention is a single right-click in C# (with ReSharper, and without the frankly terrifying idea of using regexes to do it). Compounding matters is the fact that a huge portion of your unit tests simply evaporate once you realize that you simply can't have typos or type mismatches in an old dinosaur language like C#. So it's entirely possible that that big suite of test cases that took so long to refactor would not be necessary at all (replaced instead by the occasional int x;).

Once somebody gets the tools together so I can be as productive in Rails as I am in ASP.NET, I'm jumping ship. Until then, I'm hanging tight.


I'm a Java refugee and used to make heavy use of the refactoring tools in Eclipse. It's definitely something I miss from my old workflow. Given you're using BDD, there isn't really anything to be concerned about when using regexes etc. to manipulate the code. I'd say that learning to use the command line better is essential if you move to a dynamic language. With regard to test suites being larger due to type checking, I think it's been fairly well established now that this isn't the case. Those types of errors always show up when testing other aspects of your code; I can't think of a single case where I've written tests that do any type checking. In the end I find that losing the refactoring assistance is a tradeoff I'm happy to make in exchange for the increased expressivity and productivity I enjoy with Rails.


You've piqued my curiosity. Why would a developer ever need to do anything (that affected his code) from the command line?


Find and replace is probably the only time I use the command line to actually modify the code. I could use a text editor to do this but favour the power I get with the command line (using ack and sed). Using the command line isn't necessary to work with Rails, but it is favoured within the culture. Generally people use it for everything from running the server to checking code in and, as mentioned, refactoring.


I've never bitten the Apple of TDD/BDD but:

* To me, you should balance the amount of testing on several things. Not all code is created equal - some of it is plain old quick-test code meant to be thrown away; other code is something you expect to be running for a long time.

* The most important balance is this: If you leap quickly over testing of a piece of code, it may or may not cost you more time in the longer run. In other words, not testing increases the risk variance of the code having a bug further down the road. You have to evaluate if that is going to be a problem or not. The problem may also occur because your code is too slow. With a good test-harness it will be easier to optimize and sometimes the tests can be used as a start for benchmarking.

* On the contrary, if you feel the grass is greener on the other side, you may test too much and thus never move fast enough to get anything done. The testing pays off further down the road, but that hinges on the premise that you will not discard both the code and the idea and rewrite (in which case the tests need to be rewritten anyway).

* Personally, I rarely use a TDD approach. I rather like property-based testing: I "fuzz" out errors. I've just written a protocol encoder and decoder, and there is an obvious test: (eq orig (decode (encode orig))). So I automatically generate 1000 "origs" and test that the above property holds (see the sketch after this list). To me, this is much more valuable than TDD/BDD - but I've never been a fan, as I said.

* Sometimes the idea of BDD is to shape your process and thinking pattern. In that case, it hardly looks like a waste of time: had you not BDD'ed, you may have been in the unlucky case where you implement a lot of code only to realize that you implemented the wrong idea, because the API has to be different and serve you differently.
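
For what it's worth, the round-trip property above translates almost directly (a sketch; Encoder, Decoder, and random_message are stand-in names):

    describe "protocol round-trip" do
      it "decodes whatever it encodes" do
        1000.times do
          orig = random_message  # assumed generator of arbitrary valid messages
          Decoder.decode(Encoder.encode(orig)).should == orig
        end
      end
    end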


Much the same sentiment as jlouis sums up above, plus I always keep in mind:

Time, Quality, Cost, Scope

If what you're coding will have limited impact on use or functionality, scalability, or performance later down the road... fine. However, it is my experience that standard approaches are standardized for the greater good and health of a "more mature company". As a manager, director, and head honcho, I sure don't want a developer making that kind of evaluation. It will work for you now, but for your own sake and others' later down the road: artfully comment your code!


I do MDD. Market-Driven Development. It's the latest craze! But secretly I think the cool kids have been doing it for hundreds of years, we just forget about it from time to time.


Nice. I'm going to throw this term out next time I'm with my agile junkies.


I rarely use TDD for prototyping or even the first version of a project. I tend to only write tests on my second pass through a chunk of code (generally when I'm refactoring it).

Works for me perfectly well, and I don't give a damn what the TDD True Believers think.


I think even the true TDD "purists" would say you're doing it exactly right...

Robert Martin usually says "You are not allowed to write any production code unless it is to make a failing unit test pass."

When I'm writing code to solve a problem I've never solved before, I don't write tests. But then I scrap it and write tests first, implementing code to pass the tests. It may sound wasteful, but I have enough experience behind me to know that those tests come in mighty handy about six months later when I want to add in some crazy new feature I didn't think of before.

Heck, I even do that with new features - I'll branch, write it, see if it works, then branch again off the master and implement it again with tests. It's not much extra work really, and I often do catch little mistakes I made in the "prototype" version.

But it's taken me a long time to get used to TDD, and I feel like I'm still learning. I occasionally find myself over-testing. Like anything else, it's a discipline, but I find it so worth it.


Actually, I think when Martin says "production code", he means "non-test code", as opposed to "code that will be released to production."


Yes, and I take that to mean anything that's non-trivial or experimental. I've heard him speak on the idea of experimentation before, and that's what led me to the methodology I use today.


A few things I've noticed:

Developers should not be paid to write tests, only code. If the tests are worthwhile, then they'll get written anyway.

I've seen some developers who write lots and lots of pointless tests... hmm does Model.find(:all) return all the items in the test db? Ok one passing test, does :first return one? Ok, another passing test. I'm not exaggerating.

If your test codebase is full of stupid tests that are actually testing your framework, and if your test suite takes 5 minutes to run, maybe that's why your team has so much time to read HN.

Good, useful tests will test the most critical 10% of the codebase at most. The "money" paths that are critical to your core business. Things like credit card processing, account signups, password resets.

Many of the critical 10% of tests may very well be integration tests, not unit tests. There is no reason to write unit tests if the big problems would be caught by an integration test before a deploy.

If your testing ideology makes all this sound like hogwash, then you probably work in a cubicle where it does make sense to test your codebase more broadly.


I know you're not exaggerating, because I know a lot of them. They live hard, die hard by "if you code it, test it" -- which includes these trivially simple statements. Maybe not Model.find(:all), because that isn't new code, still framework, but definitely testing all the attributes of an AR model based on the DB schema.


TDD/BDD isn't hampering my startups' agility because I'm not letting it. We don't do them. I only write real code I actually need to do something real. This is pretty useful when you're pre-revenue and your feature set or implementation choices may need to change drastically and/or be abandoned entirely. The less ballast, the better.


This is not a one-size-fits-all issue. Depending on the team, you might be able to write code that's 90-99% correct with little test coverage. Depending on the problem space, code that's 90-99% correct might be good enough. In others, it might sink your company.

You might lose 1% of "customers" due to bugs, but you could also easily lose 1% of customers due to bad copy or UX. Is that tested as rigorously as the code? Could the time you spent writing tests/specs have been used to implement and analyze A/B tests?

Etc.


One symptom of the BDD kool-aid is Cucumber. Cucumber is very useful if you've got a customer in the loop who doesn't speak Ruby. However, if everybody who is viewing/writing the tests speaks Ruby, then maintaining the Gherkin translations is a waste of time, and a "leaky abstraction". Webrat by itself presents a very clean, concise, readable syntax, so just use it by itself for integration tests, or use one of the other alternatives, like Steak.

http://mrjaba.posterous.com/acceptance-testing-and-cucumber-...
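A rough sketch of the duplication (the signup flow and strings are invented):

  # The Gherkin layer -- readable by non-programmers, but every line
  # below needs a matching step definition somewhere to actually run:
  Scenario: Signing up
    Given I am on the signup page
    When I fill in "Email" with "bob@example.com"
    And I press "Sign up"
    Then I should see "Welcome"

  # The same check written directly against Webrat -- still readable,
  # with no translation layer to maintain:
  it "signs a user up" do
    visit "/signup"
    fill_in "Email", :with => "bob@example.com"
    click_button "Sign up"
    response.body.should =~ /Welcome/
  end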


    I sometimes have a difficult time writing and passing tests...
That sounds like a design issue - if you don't design the code to be testable, you'll probably find it hard to test. Even programming in the small, e.g. at the method level, you should be thinking "how will I be able to test this?"


I find the thread starter and the highest-ranking comments to be seriously deluded, and here's why: TDD speeds me up almost 99% of the time, letting me iterate and test my thesis much faster than I could without it, and for this very reason I almost always write my code with good test coverage.

The times I have omitted tests, I have always come to regret it, having to re-write the code from scratch for it to be up to par.

A few reasons for this:

- Yes, TDD is delayed gratification: the first few cycles of write-deploy-open-browser-and-test are quicker than writing a test. But as your functionality grows, new test code takes linearly incremental effort, while your manual regression testing grows exponentially.

- TDD actually HELPS with change: when you refactor functionality, you get instant feedback on what still works and what doesn't. Though features change, the whole system and code base rarely do. See the previous point.

- TDD helps you write minimal, flexible architectures that are adept at change, because systems get decomposed into, well, testable units!

- The "prototype" code almost always ends up being the production system. Once that's the case, which is easier without tests: writing tests for code that isn't very testable, rewriting the system, or just living with testing costs much higher than your competition's?

I have actually seen startups slowly die due to the first two points that I raise. But if you still think it's a good idea to skimp on testing for the sake of expedience, good luck to you; you're going to need it...


I have realized that all kool-aid is quite useless for most situations and thus have reduced my tests to two simple things:

- write unit tests for units that actually are complex and do need it

- focus on getting as much coverage as possible on functional and system tests


When you're a startup, getting to market should trump everything else, and TDD gets in the way of that.

Don't listen to agilists who tell you that untested code is unprofessional.

First of all, hundreds of thousands of untested lines of code go to production every day and they work fine.

Second, agilists usually present the false dichotomy of "either you're using TDD or you're not testing". Which is obviously false: you could also be writing tests last. Which works just fine.


I've done TDD to a variety of degrees on different code bases, with a variety of success. I think when you achieve the right rhythm and approach for your particular code base and team, TDD can make you go faster. If it's not helping you build quality software quicker than you could without it, don't do it.

A number of these points aren't familiar to me (trouble finding tests for deleted code? Harder to modify tests for changing requirements than to start from scratch? Renaming variables is hard?). These comments make me wonder whether you've been treating your tests with the same care you treat your code.

When TDD has worked best for me, it's because I've spent a lot of time thoughtfully organizing my tests and making sure they're ridiculously fast to write and ridiculously fast to run. Your source code becomes a slave to your tests; that's the whole point. The fact that your tests are in your way suggests that you're doing it wrong. If you were doing it correctly and TDD was still failing you, I think the symptom would be your operational code getting in the way instead.


I am programming mostly for myself only (so far) and thus I haven't done anything big but some of the points you made remind me how I feel about TDD (from a different, but in ways a similar perspective).

I rarely start with a concrete goal: to clarify, I make a general overview of what I want, but not the path I should take to achieve it. When I am coding I am often exploring; I want to try new things, and after a while I settle on code that I am pleased with. But before that happens I can go through several iterations of changes. Writing tests before writing code is one thing, but adjusting the tests afterwards to accommodate the changes (which may be big) is a hurdle and slows down progress. In addition, in a case such as mine, where I am doing all the work alone and thus know every corner of the code written so far, tests give little benefit.

I can imagine of course that TDD is a great tool when there's an assignment for a client with specific tasks to accomplish, but in other cases 'get something working first' is better I guess.


Here's my slider, which is working really well at my startup.

1. Speed trumps reliability. Also, manual testing of stable features is wasteful.

For things at the business logic layer, I have a suite of (many) unit tests that verify that all of the domain objects do the right thing. They are small and fast, don't break when I refactor the code (automated refactoring tools FTW), and are easy to work with.

I have another set of integration tests which talk to some external services (Twitter, FB, etc.); these are slow and aren't as core to the business logic.

I have yet another set of tests that test the real database (I use simple test doubles in the unit test layer).
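(To show what I mean by simple test doubles, here's a minimal sketch; Leaderboard and FakeScoreStore are invented for illustration:)

  # A hand-rolled double standing in for the real persistence layer,
  # so the unit test never touches a database.
  class FakeScoreStore
    def initialize
      @scores = {}
    end

    def save(player, score)
      @scores[player] = score
    end

    def top(n)
      @scores.sort_by { |_, score| -score }.first(n)
    end
  end

  describe Leaderboard do  # Leaderboard is a hypothetical domain object
    it "ranks players by score" do
      board = Leaderboard.new(FakeScoreStore.new)
      board.record("alice", 30)
      board.record("bob", 20)
      board.top(1).should == [["alice", 30]]
    end
  end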

Whenever we update code in our source control system, a TeamCity server builds, runs the unit tests, integration tests, data tests, and then does a zero-downtime deployment to our production server. Immediately after that, we run a handful of tests against the server to make sure that the server isn't totally broken.

This sounds like a lot of work, and it is, but it makes us able to deliver so much more quickly than any alternatives. Continuous deployment means that no time is wasted on manually pushing code to servers. Also, it means that the changes made are the smallest and safest changes possible.

And, most importantly, it means that you don't have to spend a lot of time manually testing existing stable features just to get a baseline of reliability.

Do I strive for 100% code coverage of all classes? No. There's a continuous cost-benefit analysis going on. If something is tricky or would create brittle tests, I don't have automated tests for it. If something is really core (e.g. proper enforcement of game rules) I sure as hell am going to write tests for it. Right now, I'm at 72% test coverage.

Trust your common sense here. It's not all-or-nothing. You can get meaningful speed-and-stability-improving-value out of having some tests without having to test every single line of code.


It just depends on what you want to test for. I find balance by testing the most important security features, such as authentication, and the stuff that'll most likely change very little.

You can speed up RSpec so much by offloading it to Spork, a test server of sorts that loads your environment.


I can't speak to speed vs. reliability, but I can speak to budget vs. security, and in all honesty: unless it's going to kill people otherwise, screw security in your first iteration and get it out.

    If you don't ship you don't have a startup.
If TDD/BDD is getting in the way of shipping, then ditch it. Like security, you can always absorb the debt and introduce it later. To put it another way, if you spend all this time doing it right, ship (eventually) and it never gains traction then what have you gained? On the other hand if you ship a buggy (and presumably fairly insecure) product but it does gain traction then you should pay down the debt because it's working.


You only get to make one first impression. If my first impression is that you got the fundamental requirements down, and now just need to add features and deal with growth, then I'll stick around.

If my first impression is that you ditched fundamental requirements, such as security, then I don't care how many features you have at launch--I can't trust your site. If your idea is good, I'll wait for someone to rip it off and go with them.


You are of course correct. However, there's a difference between enough security to launch (i.e. what comes with your framework, like XSS and SQL injection protection, plus basic common sense) and spending lots of time on extra security work (like HTTPS everywhere, or making sure cookies are properly scoped) that could be spent getting your startup out of the door.

There has to be a balance, which is something quite a few fellow security nerds miss. The value security brings is in protecting data. If you have no data, then there's not much value in security. Likewise, if you have sensitive data then it's worth going the distance to secure it.


Skip unit tests. Only do acceptance/functional tests. Without any tests, your stuff will break constantly. With unit tests, you will waste years of time. Acceptance tests (e.g. Cucumber) fill the gap.


I have a question to the functional gurus out there:

Do you use unit testing in a functional programming context?

The reason I ask is that programming with a REPL fundamentally changes how you typically write programs - the bottom-up mentality. You don't even write a test first, you test first! The tested program is then assembled into a unit. I feel there's much less incentive to write a test if you use a statically typed language and use a REPL as it is meant to be used. Am I wrong in my thinking?


I've been doing some programming in lisp/arc, which isn't exactly functional but I try to stay as side-effect-free as possible. And I've been having a lot of fun doing TDD. See, for example, http://github.com/akkartik/wart which has about as many LoC in tests as in code.


Yes, you should use unit tests in FP.

Traditionally you build larger functions on top of small ones, and you might later change the latter. Unit tests guarantee that the larger functions still work after those changes.

Also, in Haskell, I prefer to do property-based testing instead of unit testing. Google QuickCheck for more details. (There's an Erlang version as well.)
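The idea ports outside Haskell too. A hand-rolled sketch of the same style in Ruby (to stay with this thread's lingua franca; real QuickCheck adds shrinking and smarter generators):

  # Property check: generate random inputs, assert an invariant holds
  # for all of them -- here, that reversing twice is the identity.
  100.times do
    xs = Array.new(rand(20)) { rand(1000) }
    raise "property failed for #{xs.inspect}" unless xs.reverse.reverse == xs
  end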


It sounds like the issue is with how you are writing your tests. For instance, you state "refactoring breaks a lot of test cases". As the normal TDD cycle is test-code-refactor, refactoring shouldn't break your tests. Without seeing your actual code it is hard to give advice, but it sounds like your test cases are too coupled to the internal workings of your code, rather than testing the interface.


While refactoring, it's always the unit tests that break. Maybe instead of having a (Rails) scope on a child class, I move it to a method on the parent class. The integration tests don't know about the models, and the exact same results are returned. But all my unit tests go haywire, because they were testing the models directly.


If your refactoring causes test code to break, then it also causes "real" code to break, yes? All code using that interface needs to be fixed appropriately, which can typically be done with automated tools.

If there's no corresponding production code using the same interface, you have a problem: you're probably testing at the wrong level.
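As a sketch of the difference (Account, due_on, and the overdue scope are all invented here; proxy_options is the Rails 2-era scope-testing idiom):

  # Coupled to internals: asserts *how* the query is built, so rewriting
  # the scope as a plain class method breaks it even though every caller
  # still gets the same rows back.
  it "builds the overdue condition" do
    Account.overdue.proxy_options.should == { :conditions => ["due_on < ?", Date.today] }
  end

  # Coupled to behavior: survives moving or rewriting the query, because
  # it only checks what comes back.
  it "returns only accounts past their due date" do
    late    = Account.create!(:due_on => 1.week.ago)
    on_time = Account.create!(:due_on => 1.week.from_now)
    Account.overdue.should include(late)
    Account.overdue.should_not include(on_time)
  end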


In the presentation shown at http://www.startuplessonslearned.com/2008/09/customer-develo... (specifically the last slide, where he talks about the Five Whys system), it seems to me that the guru of Lean Startups is an advocate of automated test suites, and therefore a believer in TDD.


Automated test suites don't automatically imply TDD. Many people use automated test suites and CI without TDD.


To put it in math terms, TDD/BDD gives you

  output = C1*(e^t-1)
while non-TDD/BDD gives you output

  output = C2*log(t+1)
That is, you get better speed at the beginning without TDD/BDD, at the cost of slower output as the codebase grows. With TDD, you generally start slower but increase output velocity over time. (And let's not quibble over semantics here... the equations hold for a while, then flatten out.)

So, where's the intersection? I claim it's usually at about the minimum viable product or before.
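Plugging in invented constants (C1 = 1, C2 = 5 -- all numbers purely illustrative) makes the claim concrete; a quick bisection sketch in Ruby:

  # Find where C1*(e^t - 1) overtakes C2*log(t + 1), i.e. the root of
  # their difference. Constants are made up for illustration.
  include Math
  f = lambda { |t| 1.0 * (exp(t) - 1) - 5.0 * log(t + 1) }
  lo, hi = 0.1, 10.0
  20.times do
    mid = (lo + hi) / 2
    f.call(mid) > 0 ? hi = mid : lo = mid
  end
  puts lo  # => roughly 1.8 "time units" in -- early in the product's life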

And of course, this all exists on a continuum. So don't TDD things you don't understand. Instead, spike on the new technology outside your app, then bring it in with "gentle" TDD/BDD.

If you're sold on TDD/BDD like I am, the key is to work to increase C1. That is, get better at these disciplines. You should be able to write tests quickly and have them run quickly.

And frankly, during a pivot, I'd rather have obsolete tests (pointing me to obsolete code) than obsolete—and hidden—code. Obsolete tests scream "Fail" when they're obsolete. Code does not.


Refactoring breaks a lot of test cases (mostly with renaming variables).

Please tell me about this. My experience is that refactoring tools can often apply the same refactoring to the tests as to the code. For OO projects, the only variables visible to a test should be temporary variables to hold test objects. Renaming those shouldn't be a hard refactoring.


In a startup, you need to engineer to minimize the latency of validating features. Sometimes tests help with that (like when you're dealing with a complicated algorithm), sometimes they don't. I wrote a poker engine recently and I had many tests for the engine itself, a few tests for the tricky parts of the UI, and no tests at all for the system as a whole.

The challenge is that when you get into scaling, you need to begin engineering for throughput, a completely different engineering style focused on raising throughput and reducing variance. This style is well supported by "test absolutely everything" TDD.

Oh, and then when you want to do a tangential experiment, you want to go back to latency-oriented engineering, but without destabilizing existing code.

In short, I'd say yes, it's plausible that overuse of TDD could be hampering a startup's agility.


So when you say "minimize the latency of validating features" do you mean that you want to reduce the time between building something and getting feedback from your users? Can you elaborate on this point? What is latency-oriented engineering?


I think you have it right. Latency-oriented engineering is a style of development that minimizes the time through the entire loop: from idea, to implementation, to feedback from real users, to learning from that feedback, to the next idea. What you do to achieve this is very different when you have a bare idea and no customers than when you have a million daily users. The goal is the same: minimize the loop.


Hi Kent,

I wanted to reply to your comment...

How do I apply the theory of latency-oriented engineering to the following real-world problem? I have an idea for a book that might be called "Simple Code". It presents a decision-language to help software engineers make good design decisions. My idea is in the very beginning stages (4 days in) so it is vague, but it would incorporate XP theories and practices, especially TDD. The set of rules in the decision-language might be similar to rules in a game. I'm not sure if this "idea" of mine is a book at all. It might be a web-based tool for searching "reliable" sources or it might simply be a new language. The project is going to be my first attempt at merging my art with my software engineering skills. Whatever form it takes, it will be inspired by art, nature, and minimalism. If it's a book, it will be small enough to read in bed. How would I use latency-oriented engineering to minimize the loop from idea to learning to feedback from real users? How much of this project do I have to imagine/articulate/build in order to get feedback from real users? How do I get that feedback? Also, how do I justify the expense, i.e. the time needed to complete one loop, to my investors (my husband)?

Thanks.


If you have slow tests that are hard to write, you're already in trouble.

For example: my recent Node.js projects use Vows. More complicated test details are encouraged to become small functions that are reused over and over. (Vows calls these macros.) For testing HTTP servers, I wrote a set of macros, called Pact, that make my tests very concise.

For other big important pieces, I isolate myself from upstream changes by creating an interface into the dependency, then testing the interface instead.

Instead of changing lots of tests, you change a macro.
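(For the RSpec users in the thread, the closest analogue is a shared example group; shared_examples_for is standard RSpec, the route and assertion here are invented:)

  # Define the repeated assertion once...
  shared_examples_for "a successful response" do
    it "returns 200" do
      response.code.should == "200"
    end
  end

  # ...and reuse it everywhere. When the interface changes, you edit
  # one shared group instead of dozens of tests.
  describe "GET /status" do
    it_should_behave_like "a successful response"
  end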

The result: very fast feedback from tests that are easy to add and change, especially when refactoring or when your plans evolve. (Using a new dependency, going sync to async, etc.)

I'd love to see these benefits in more places.

http://vowsjs.org

https://github.com/reid/pact#readme


I disagree with your point that "most of the code are hypotheses to be tested by the market". Most "pieces of software" might be, but I contend that most of the code in most software is under-the-hood stuff that objectively either works or doesn't. If it works, it has nothing to do with the market's opinion. On the other hand, if it doesn't work, then regardless of what a good fit the idea of your software might be, it will colour the market's opinion against you.

Using the market to test a hypothesis, and using them to test whether your code works are two different things. The former is a great idea, the latter, not so great. Mixing the two is a bad idea.


Like anything else, there's a balance. Don't let yourself go all-out to ensure you have 100% test coverage of every line of code.

There is always a cost-benefit argument that should be replaying in your head over and over. If it's a critical part of your application, then make sure it has test coverage. If it's a simple part that's basically just doing CRUD operations with almost no custom code, it's probably not worth worrying about in the short term. Just make sure you cover the primary flex points and custom algorithms or libraries, and you'll be fine.


"Refactoring breaks a lot of test cases (mostly with renaming variables)."

This seems like the easiest thing a refactoring tool would handle. Why wouldn't you refactor your tests at the same time as your code?


If RSpec is too slow to wait on, you could use autotest, which automatically runs tests when a file is saved. There's also a plugin for it that uses Growl/libnotify to display the results.


A common problem at the moment is RSpec being slow with Rails 3 and Ruby 1.9.2. If you do things the "normal" way, Rails ends up having to load twice, and you're guaranteed to spend at least 10 minutes looking at nothing.

The solution is to use Spork to keep a Rails test environment running permanently (it also does caretaking between sessions), with RSpec accessing it over DRb. Couple this with a good .watchr file to run only the relevant specs when you update them, and you get very quick RSpec tests. It's a bit of a pain to set everything up, but it's worked great for me.
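For reference, the rough shape of a Sporked spec_helper.rb plus a .watchr script (paths and app specifics will vary):

  # spec/spec_helper.rb
  require 'spork'

  Spork.prefork do
    # Runs once when the Spork server boots: put the slow Rails
    # environment load here so specs don't pay for it on every run.
    ENV["RAILS_ENV"] ||= "test"
    require File.expand_path("../../config/environment", __FILE__)
    require 'rspec/rails'
  end

  Spork.each_run do
    # Runs before each spec invocation over DRb; keep this cheap.
  end

  # .watchr -- rerun only the spec file you just saved, via the DRb server
  watch('spec/.*_spec\.rb') { |m| system("rspec --drb #{m[0]}") }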


I was inspired to write up how to do this:

http://www.rubyinside.com/how-to-rails-3-and-rspec-2-4336.ht...


I only start writing tests once my startup is profitable and stable (if that ever happens). TDD is just not as important as getting a product out the door.


TDD is just not as important as getting a product out the door.

There's not always a "do TDD, develop slowly" vs "do no TDD, develop quickly" dichotomy.

If you're doing it right, TDD should speed you up. It adds some guaranteed extra time to your plans up front but significantly reduces all of the hidden costs of wasting hours debugging bugs that unit tests would have picked up.

So developing a particular app might take 10 hours without tests and 15 hours with tests, but if I'd waste more than 5 hours fixing a myriad of bugs, the TDD approach still wins. (The constant reward cycle is an important psychological factor too, even if the times are equivalent.)


People knowledgeable in TDD code faster and better using the technique. Practice takes you there.

In the meantime, test the parts of the application that do not change. How much can the payment flow, sign-in, or sign-up change? Make sure you have tests for those in order to catch regressions.

Know where you are. Are you building to last? Are you building to test an idea? Balance.

Build better abstractions.

Practice. Practice. Practice.


In my experience, yes, TDD/BDD slows me down (partially because I'm not 100% on it yet), BUT it's sooooo much more painful to add it back to legacy code than to start with it fresh in a new project. I have two relatively large existing projects where I would kill to have solid test coverage, but implementing it feels so daunting that I haven't taken it on yet.


Here is Kent Beck's take on the four phases of startups where he talks about how these different phases require different development practices, principles, and technologies: http://www.threeriversinstitute.org/blog/?p=252


TDD can be useful when complemented by an isolation framework, such as Typemock Isolator. It'll save precious time and, most importantly for a lean startup, scarce financial resources.


For more information about Isolator, check out http://www.typemock.com. TDD can save startups resources: unit tests save money and lead to a better product, and thus a better reputation.



