Test-induced design damage (heinemeierhansson.com)
243 points by petercooper on April 29, 2014 | 155 comments



I suspect my disagreements with DHH's last couple of blog posts have more to do with what each of us has seen in the wild than with any actual disagreement in principle.

For instance, in my experience I more frequently encounter shops that have far fewer tests than necessary, with no consideration of how to verify requirements at all.

Further, in my experience, the GUI and database layers are the least interesting parts of the systems I work with. They truly are parts that can and do get swapped out with some regularity.

I suppose if I worked mostly on systems that were in essence GUIs on top of databases with little logic in the middle, I wouldn't want to isolate those concerns either. It would be more trouble than it is worth.

For instance, when I write small unixy command line utilities I very rarely test anything in unit tests. What would be the point? I can easily define the entirety of the specification in example tests that utilize the utility as a black box. I still do it first though...


Is swapping out the database layer so common you really need a complicated abstraction like the Repository model he mentioned? We've been running an app for 3 years and have never dropped ActiveRecord or Postgres.


Spinning this around on you: I worked on a project where swapping data stores around wasn't common because of tight coupling with AR. We considered swapping data stores around for different and evolving use cases and never did largely because we had so much code that we felt was tied up so tightly with AR that it would have been very expensive to swap. I think it definitely decreases flexibility, but that it's pretty hard to know if you want more flexibility ahead of time.

Pretty much all of these pattern discussions seem to be this way to me - "just do it the simple way! YAGNI!" versus "crap this one time I did need it and it was difficult to change by then! Maybe I should design things more flexibly from the start next time!". It's pretty easy to get burned going either direction, and depends a lot on things like what the project is, what organization is building it, and the level of success it ends up having. The closer a project is to a simple-CRUD, small team/unproven-company, prototype with limited success, the more sense YAGNI makes, and the further from each of those criteria a project is, the more it makes sense to design for more flexibility.


> It's pretty easy to get burned going either direction

Quite true, though I'd argue that YAGNI is still true as a probabilistic maxim. You'll make the "will I need it" decision many thousands of times in your career. If you follow YAGNI consistently[1], it will help you more often than it hurts, and you'll come out ahead in the long run.

[1] But nobody is saying you should ignore concrete evidence that you will need something later. That's its own cargo cult. If there's good reason to believe YAGNI doesn't apply in a particular case, don't follow it in that case.


I think this is a dangerous line of thinking, but I suppose I wouldn't modify it very much. What I would say is that YAGNI should perhaps be weighted higher, but that the probability of it being wrong in particular cases should be considered carefully.


With the "complicated" repository model mentioned, you can transparently introduce other behavior: a caching proxy, a retry-on-fail proxy, a migrate-on-write proxy, whatever. It might not be a valid use case for you, but I have seen tangible non-testing benefits from using the repository pattern.
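
To make that concrete, here's a minimal Ruby sketch (every name in it is hypothetical) of a caching proxy wrapped around a repository:

  class CachingUserRepository
    def initialize(inner, cache = {})
      @inner = inner   # the real repository, e.g. one backed by SQL
      @cache = cache   # any Hash-like store; a Redis client could go here
    end

    # Same interface as the wrapped repository, so callers never know
    # whether they hit the cache or the real store.
    def find(id)
      @cache[id] ||= @inner.find(id)
    end
  end

  # repo = CachingUserRepository.new(SqlUserRepository.new)
  # repo.find(42)   # first call hits the store; repeat calls hit the cache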

Disclaimer: not a rails dev, ymmv, etc.


I tend not to work on applications that work with a single data store. Some data will be stored on the file system, some in a traditional RDBMS, and some in a NoSQL implementation. What data goes where frequently changes and isn't really a major concern of the system, at least not of the parts that need specification verification.


So you have data that one day might go into an RDBMS but the next day might go into a file store? Do you constantly migrate old data between the two stores? What is the use case for this? Not being snarky, I'm genuinely curious. We use more than one datastore, but that data usually stays put once it's committed to one format. RDBMS for most of the app, Redis for quick lists and caching, flat files when necessary. But those models don't change their store ever, unless it's a major overhaul.


It's a combination of 3 things:

1) Storing the data is not what is central to my business case. It is an operational requirement, not what I'm selling. My major architecture requirements therefore do not get driven by what data store I'm using.

2) I frequently have data migrate from one format to another.

3) The actual data store formats don't change often, but each one of them has changed at least once in the last 3 years. That means that every six months we are migrating a data store implementation. I don't want this to actually impact my business (see point 1), and therefore data store specifications are highly isolated from the other code.


When I worked at last.fm code that assumed all the data would always be in a single postgres database was a constant source of pain - we spent a lot of time migrating tables out of the big central database, either because the data simply didn't fit any more, or because we wanted it to be available to a Hadoop job. (There were probably other reasons, but those are the ones I remember). Maybe last.fm's an extreme case, but it does happen.


Wouldn't a single abstraction over Postgres, the filesystem, Hadoop, etc. be either really leaky or really inefficient? Different datastores are better suited to certain kinds of queries. It seems like the programmer should be aware of what he/she is querying.


You invert the dependency. The abstraction is over the things that the higher-level code needs. I don't need to know about query types, indexes, etc. I need some business answer (all log records between x and y, a user matching username x), so I program to an interface that provides all the answers necessary for the high-level code.

The implementation of that interface is data store aware and implements the interface in the most effective way possible for the data store holding the things I'm interested in.
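
A rough Ruby sketch of that inversion (all names hypothetical): the high-level code owns the contract, and each store-aware implementation satisfies it however best suits that store:

  # High-level code: knows nothing about indexes or query syntax,
  # only the business questions it needs answered.
  class AuditReport
    def initialize(log_records)   # anything answering #between(from, to)
      @log_records = log_records
    end

    def entries_for(range)
      @log_records.between(range.begin, range.end)
    end
  end

  # One store-aware implementation; a Hadoop- or file-backed version
  # could be swapped in without AuditReport ever noticing.
  class SqlLogRecords
    def between(from, to)
      LogRecord.where(created_at: from..to)   # ActiveRecord-style, assumed
    end
  end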


Yes, if you're building a user-deployable product like TeamCity, Crucible, or another app that might be deployed on any number of database tiers to fit your customers. In that case, a repository abstraction (or ORM) makes life livable.


Making your information store and your GUI separable is the whole point of the C in MVC. That is not what David is complaining about. He's complaining about inserting another layer of indirection between the controller and the view, and between the controller and the model, particularly when the reason for introducing this indirection is purely to allow inserting hooks for tests.


It is often wise to design code to be testable. This usually results in cohesive, decoupled packages consisting of many simple methods. However, it's rarely a good practice to make custom code in your application solely for testing. That's usually a tell-tale sign that you could design better for test, such that the code can be tested without special modifications, i.e. MVP over MVC.


David is explicitly arguing the contrary. I find his argument far more persuasive than your assertion that adding extra layers of abstraction just for testing purposes improves the code.


Hold on, twistedpair said "it's rarely a good practice to make custom code in your application solely for testing". That seems rather the opposite of "assertion that adding extra layers of abstraction just for testing purposes improves the code".


Can all this be summed up with "Avoid all dogma, even this."?

Any time we buy into a dogma at the expense of rationality, we lose. This has been demonstrated throughout history, in human interactions with each other (via religion, politics, legal systems) and in the development of science and technology (see Galileo, Copernicus, or the 19th-century US doctors who ignored germ theory and killed a president).

Sometimes we create dogmas to try to move things away from bad ideas towards better ideas. Dijkstra's "Go To Considered Harmful" was one such effort. Gotos, as used at the time, were fucking terrible. They were used instead of higher-level constructs like if/then/else, for, do/while, and function calls. But by the time I was in college (early 2000s), the refrain was tired and wrong (or misapplied). Sometimes, in some languages, gotos can in fact be very useful, so long as their use is chosen deliberately and with care (see the C idiom of using gotos to jump to error-handling/reporting code in functions).

In the end, nearly every development process runs the risk of becoming a dogma. Avoid that. Study the process, practice the process, and reason about where the process should actually be applied. We already know that the answer isn't "everywhere and every time".


I feel like MVC is being treated as the one true pattern to design your web app with, and that's simply not true. Rails is MVC, and maybe that's all it should be for what it is intended for. Other projects might not be a great fit for such a simplistic view of the world, and maybe that means Rails is not a great fit for projects that don't fit into the MVC abstraction.

In my experience MVP, MVVM, super-thin Sinatra APIs, hexagonal architecture, functional programming, and other sort-of-weird approaches fit certain projects much better than the standard Rails MVC approach.

Also, not every project is a web app and there are plenty of times where various testing approaches make a lot more sense than they do in Rails. It's too bad that a whole line of thinking about software quality is being disparaged because it isn't a good fit for Rails as DHH sees it.

TDD is a useful tool in the right context. Maybe that context isn't Rails.

It seems unwise to be telling a lot of smart people who care about software quality to "get off my lawn" so to speak, but I've never run a successful OSS project as big as Rails, so I probably don't have a clue about how to lead a community as big as Rails is.


This comment feels orthogonal to TFA (but it makes a great strawman). The author was using Rails apps as an example of how to think about testing and design, not laying out the one true path to software development. Whether it is a Rails web app or a functional high-frequency trading app, the author's point stands: don't let dogma dictate a testing strategy that costs you harder-to-read, nastier-to-maintain code.


You have it backwards. TDD advocates are the ones claiming that their way is the one true way, that all software should be done their way.

Edit: I'm not sure why I'm getting downvoted. Here's a quote from Bob's post that was linked yesterday which shows what I'm talking about

> If you aren't doing TDD, or something as effective as TDD, then you should feel bad.


That quote specifically says there isn't "one true way". You are ignoring the "something as effective as TDD" in the middle of it.


I'd love for him or you to give me an example of an alternative that I shouldn't "feel bad" about.


I've used software contracts in conjunction with automated tests written after the fact to good result.

I didn't find it any better than TDD, and in some cases my outcomes were worse (but I expect that was my inexperience with the approach).


If you write them after the fact, you aren't doing TDD or something as good as TDD. At least, that's what the TDD mob says.


For nearly any realistic definition of "TDD mob" you can name, I'm in it. If your process results in well-factored, automatically verified code that lets me refactor mercilessly, then it's as good as TDD. In my experience I'm not disciplined enough to do this, and most other developers aren't either.

I've encountered far more untested code bases in my life than TDD zealots.


I think one important thing to keep in mind is that TDD is not necessarily synonymous with "software quality". In some cases it's a very useful tool to ensure the quality of your code, but it's not even the stated goal of TDD, and a focus on TDD as the "one true path" to software quality ignores that some things are more effective (not necessarily simpler) to test using more of a "test later", integration-focused testing approach.

I agree that there are projects where a simplistic MVC approach doesn't completely fit. That doesn't mean that every software project needs to be built to the standards of the most complex software, or even that aspects of a project that do require this complexity can't be solved with a more straightforward, simple MVC approach.

At the end of the day, I think the main message I get from DHH's recent series of blog posts is that treating anything as a silver bullet, or a universally beneficial pattern is harmful - and this is equally as applicable to MVC for everything itself as it is for a complex, hexagonal architecture.


Bad programmers will write bad code no matter the methodology, pattern, language, tooling, or best practice.


That's not useful though. What makes programmers bad? In some cases, at least, it's the methodology, patterns, or best practices they use.


More often than not it is lack of any methodology, patterns or best practices.


MVC (like all good tools) has its place.

Shoehorning everything into MVC because it's the "one true way" is where the problems arise.

My current system uses Controllers, Views, Services and Repositories with ORM objects as the "entities" (it's based on Laravel/Eloquent), and I've found that to be an acceptable trade-off for the domain I'm modeling; plain MVC would have been painful with this much business logic.
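
A rough Ruby analogue of that layering (the original is Laravel/PHP; every name below is hypothetical):

  class OrdersController
    def create
      order = PlaceOrder.new(OrderRepository.new).call(params[:order])
      render json: order
    end
  end

  # Service object: owns the business logic so the controller stays lean.
  class PlaceOrder
    def initialize(orders)
      @orders = orders
    end

    def call(attrs)
      order = Order.new(attrs)
      raise InvalidOrder unless order.valid?
      @orders.save(order)   # the repository hides persistence details
      order
    end
  end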


The overriding message here is one of pragmatism. TDD, like a lot of methodologies before it, became gospel and people began practicing it in a dogmatic fashion without thinking about the best way to apply the principles to whatever problem is at hand. The spirit of TDD is that you have a safety net of tests to protect you from making changes in class A and breaking something in class B. If those are acceptance, integration or unit tests, great. If the code is cleanly organized and readable, great. Don't let zealots on either side of the aisle convince you to do anything beyond what makes sense to solve the problem that is sitting in front of you.


Well said. Methodologies should be treated as patterns and as such you need to understand what they aim to achieve, the trade-offs in terms of risks and benefits and most importantly how to adapt the pattern to what you are doing. You open yourself up to problems when methodologies and patterns are adopted without thought.


Does anyone know of a public example of a Rails application that does testing in the way that DHH says is good?

I'm tired of the talk talk talk talk talk talk of "proper" testing in Rails, yet the examples always seem to be hidden away behind company firewalls. I've only seen a couple Rails apps with Rails-Way test suites, and they were nightmares that took many minutes to run. But I have seen dozens of Rails apps written by opinionated Rails devs with strong views about what proper testing was... and the apps had no tests at all.


If your tests take 10 seconds, how much did you really test?

The point it sounds like he's trying to make is that if you say things like "they were nightmares that took many minutes to run", you may be approaching testing from the wrong point of view. He sounds like he wants to say "let the tests take 5 minutes", and I agree with him; that's what CI is for. Commit your code, mark the issue you're fixing, let CI tell you if it's done or not, take your pomodoro break, coffee break, etc., then sit back down, pick back up with your test results on the CI server, and repeat the cycle. A 5-minute test suite is NOT A BAD THING...

If you think 5 minutes is terribly long, spare a thought for us deployment engineers... my test suite involves building and tearing down entire VMs or PXE-booted machines, and depending on what software is being built and tested through deployment, it can take an hour or more.


> If your tests take 10 seconds, how much did you really test?

I see your point, but time-to-test is a horrible proxy for quality of tests. Business logic isolated from external systems can run incredibly fast, so ten seconds worth of testing can mean an awful lot in that case. The nature of TDD basically demands that you structure your code that way to remain productive. Otherwise it's like using a text editor that takes five minutes every time you try to save a file.

That's my inherent frustration with this argument. Both sides aren't arguing for their methodologies; they're arguing against the byproducts of each other's methodologies.


Excluding the acceptance tests (written with capybara and spinach) the test suite for my current client takes less than three seconds on my machine, including the run time (excluding the phantomjs boot time) of the suite of ui-exercising JavaScript tests, and they are nearly comprehensive, testing every component contract from the client to the backend. The ATs run in a little under a minute, covering the major integration points. There is very little mocking in any of the suite, and no direct database access.

Testing hurts when you do it poorly or naively. I know because I've done it both ways, and when I find something harder than it ought to be, I invariably find some point of coupling beneath the surface. When my design is good, my tests are fast and easy. If you listen to DHH, you're going to have problems testing: not because problems are inherent to writing software, but because he's already made decisions for you which are bad or highly coupled. Don't fall for the straw man. There are better ways to do it.


Maintaining giant test suites and trying to keep them running fast is why I am so glad to not be using Rails anymore. Dynamic languages don't scale well for me because the testing is difficult to scale. With Yesod (a Haskell web framework I help maintain) I have a fraction of the need for unit tests. The compiler already gives me the equivalent of 100% code coverage for catching basic errors. I can focus efforts on testing application logic and integration testing.


Are you able to find consistent work building web applications with Haskell? It seems that many organizations are reticent to build on those sorts of technologies (not quite sure what else I'm including - maybe OCaml?) for very rational reasons - it is hard and expensive to find employees who are capable of being productive in them.


There seems to be a very common perception that it is hard and expensive to find employees who can be productive in less mainstream languages, but I rarely see evidence to back that up. I can't speak for Greg, but in my experience, hiring is hard in every language, but not materially moreso in Haskell.


It's interesting, I meant my question to be about finding companies using Haskell or willing to hire contractors that build projects for them using it, but of course I did take it in the hiring direction, so it's my own fault. Rephrasing: Is it easy to find work using Haskell, despite the perception (whether deserved or no) that it is difficult to hire for, which may limit the number of companies willing to build in it?


It is very interesting, because the dominant narrative seems to be both that it's hard to hire for and hard to find jobs in, which can't both be true without massive communication inefficiencies, which I'm fairly sure don't exist. My experience has been that it's somewhat harder to find a job writing Haskell than it is to find a job in a mainstream language, but that it is still very doable.


This doesn't seem inconsistent at all. It's a classic feedback loop - it is hard to hire because most people don't want to invest in becoming experienced in a technology that not many companies are using, and it is hard to find a job because companies don't want to invest the resources in using a technology that not very many potential employees are experienced in. Very similar to social network chicken/egg problems. Seems to usually be solved by either investment from one or more large companies with an interest (Sun, Oracle), or a "killer app" (Web browsers, Rails, college students for Facebook). My little theory here is woefully inadequate to explain Python's and Go's success, so maybe another way out of the trap is "lots of people just really like it". Not sure if that will or won't work for Haskell...


My point is that "hard to hire" implies more jobs than candidates, whereas "hard to find jobs" implies more candidates than jobs. They can't both be true. You can say things like "there are few candidates" or "there are few jobs" in an absolute sense, by comparison to other languages, but that actually isn't very relevant.

This is an oversimplified model because it doesn't take into account engineer skill level, which actually does seem to be the primary problem. Companies want skilled engineers, but it's hard to become skilled without having a job in the language first. So we end up with several companies trying to hire seniors, and several juniors looking for jobs.


Both "hard to hire" and "hard to find jobs" are satisfied by a disjoint set of job locations to candidate locations.

That is, there could be plenty of X jobs in City Y, but that does little for the X candidate in Z city. Flip candidate/job as desired.


That's true, and there does seem to be a disproportionate number of Haskell jobs in Singapore, but overall I don't think it has a huge effect.


More likely you are just seeing a fairly common network effect, where once you are in it is easy to see many connections.

So, if there are some fairly good quantitative treatments of this, I'd be interested. I suspect it isn't too shocking. Probably more than the parent poster and friends think. Probably less than you do. :)


I actually really don't have much of a hypothesis at all. I asked my original question out of pure curiosity, and my reply was along the lines of it seeming very plausible for there to be both a shortage of candidates to hire and a shortage of companies hiring. But I also think the opposite is plausible. I think so far the most specific answer to my initial question of "is it easy to find Haskell work?" is "yes, in Singapore it is".


Anecdotally, as somebody who has tried to find a Haskell job, there seems to be a LOT of competition for them.


It's about accountability.

There are two types of it: either a leader recognizes and understands the gain to be had from using something like Haskell and has the resources to hire someone really good at it (that person is then accountable, and the leader knows how to replace them if they leave), or the leader is themselves an implementor in the esoteric technology and is comfortable being the accountable party.

You rarely see the first type, more often it's the second type. Being accountable means that even if all of our programmers left us, I could at least keep the lights on without panicking (funny enough, that's actually easier to do in Haskell than a large Python / Ruby codebase).

BTW, it isn't hard to find them but it is expensive. These are people that have a level of motivation above the average and subsequently have a level of knowledge and skill also above the average. Basic economics can be used to answer why they are more expensive. Particularly when you start looking at specialized fields with a specialized technology: applied mathematicians that are also skilled Haskell programmers, or kernel hackers that also understand Haskell well.


I agree with every word you wrote.

To take it further: assuming both that projects built by experienced Haskell programmers are "better", and that those experienced Haskell programmers are more expensive, are those projects actually "better" enough for the company to break even? For the vast majority of projects, the answer is almost certainly "no", because most projects don't live or die on their technical merits. I think this limits the supply of companies even further than the perception-of-difficulty effects do.


Projects don't live or die on technical merits. But more often than not, they are born or not on technical merits. Remember that lots of IT projects fail, often with no result at all to show.

That fact is easy to ignore when you interact only with competent developers.


Out of curiosity, as someone who uses Yesod, do you have any issues with it taking a really long time to compile?

Or do you not use cabal sandbox for Yesod dev projects?

I like to sandbox all my dependencies per app, but I just couldn't get over the fact that each new sandboxed install of yesod-platform took the better part of an hour to compile on my MacBook, and actually crashed my micro VPS instance.


There is a difference between installation and compilation. I install into a sandbox once, and then I don't need to do it again for a long time, so it isn't a big concern on a commercial project (make sure to use the -j option though!). You can deploy a final build image to a 'micro', but you do need a separate build server with some RAM to perform the build. You could just use a VM on your Mac.

The compilation time during the development cycle is a greater issue for me. I am testing out a way to speed that up now.


OK, thanks for the response. So it's not just me, a new install of yesod-platform is supposed to take a lot of time and RAM. I was just a little shocked by it, since a new rails install is pretty quick, even on a micro VPS.

I wasn't bothered by the time it took to recompile my app while running yesod dev, especially since it recompiles automatically when it detects a file system change. But my yesod app is trivial, maybe this becomes more of an issue with a substantial app. Worth it, though, for the compile-time syntax and type checking. And probably much faster compared to most rails test suites, which you'll have to rerun anytime you make a non-trivial change anyway.


My least favorite example of tests damaging an API design is dependency injection. There is often very little need to resort to dependency injection to create a good API with an easy-to-understand architecture, but it gets abused because it makes testing easier. You can supply your API with mock and stub classes at every turn if you use dependency injection everywhere, but the consequence is a more difficult API that requires the programmer who uses it to understand more arbitrary and unnecessary implementation-specific details.

For example, maybe I just want to open an encrypted TLS TCP socket to a server. From a user perspective this could be really basic: you provide the library's API with a server address, port and handlers. It could be as simple as a few lines of code. But the dependency injection version of this might require creating an SSL factory, which requires an X.509 certificate provider, which requires a certificate storage locator. Then instead of an address you must provide an IP address factory method and a protocol factory, which requires a list of available protocol implementors. Then 200 lines later you want to actually manage your connection, and you must provide a connection manager and a byte buffer which itself involves tons of cruft.
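
To caricature the contrast in Ruby (every class name here is invented for illustration, not taken from any real library):

  # What the caller wants:
  client = TlsClient.connect("example.com", 443)

  # What injection-everywhere hands them instead:
  socket_factory = SslSocketFactory.new(
    CertificateProvider.new(CertificateStoreLocator.new("/etc/certs")),
    ProtocolFactory.new([TlsV12Protocol, TlsV13Protocol])
  )
  client = ConnectionManager.new(socket_factory, AddressResolverFactory.new)
                            .connect("example.com", 443)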

Sometimes dependency injection is like a person walking around with their organs hanging outside of the body. When two people want to make babies they don't have to know low level biological mechanics of how sperm sends signals to a ready to be fertilized egg. They don't have to read and learn pointless documentation. They just insert the thing and everything usually works although under the hood it is maybe one of the most complicated processes in biology. That's how an API should work: making complicated things simple.


Could you not have the best of both worlds by writing a facade over the public-facing API? I have used this approach in the past and it works well. The facade just handles the grunt work of wiring the myriad of types together, as you described. It can be verified by blackbox testing or simply by eye (as it should not contain any logic beyond instantiation).
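
Something like this Ruby sketch (hypothetical names): the facade wires the defaults in one place, while tests can still inject the parts:

  class TlsClient
    # Simple public entry point; all the grunt-work wiring lives here.
    def self.connect(host, port)
      new(SslSocketFactory.default, AddressResolver.default).connect(host, port)
    end

    # Injectable constructor; only tests and unusual callers touch it.
    def initialize(socket_factory, resolver)
      @socket_factory = socket_factory
      @resolver = resolver
    end

    def connect(host, port)
      @socket_factory.open(@resolver.resolve(host), port)
    end
  end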


While there are valid use cases, this sort of thinking is why apps are no more responsive today than they were 20 years ago... everything is running through 20 translation layers.


While that is a risk, it's not a necessity. Many of the classes in the C++ standard library are designed in exactly this way without runtime performance issues. All of the wrapping and indirection get inlined at compile time.


I don't agree. I'm pretty sure we can make apps today that are equivalent to those from 20 years ago (640x480 resolution, 256 colours, no UI animations, etc.) but an order of magnitude more responsive, even with all the indirection layers.


20 years ago (94!) one was always waiting for graphical applications to respond, and lots of people preferred command-line applications instead because of it.

Currently, almost all interactions with nearly all applications (Firefox and OpenOffice stand charged as guilty) are instant.


As long as the user-facing API is simple, and the thing works well enough that the user doesn't have to know how the underlying black box pastes its spaghetti together, it doesn't matter. So I think what you are suggesting is fine.


I've solved this in past Java/Spring projects by having two public constructors, one default and one that takes all the "organs". The default constructor just delegates to the "provide me with organs" constructor but creates all the factories and providers and such itself. This doesn't work if you have long chains of singleton objects that get passed around to a lot of classes, but it works well for simpler cases.


Maybe the real problem is that we have crappy tools for hexagonal-oriented architectures, especially Rails. Classic Rails style dictates that ActiveRecord is Good Enough for your domain logic. This creates a sort of framework lock-in: inheritance is one of the strongest forms of coupling there is, especially when you inherit from classes you do not control. The framework superclass is likely to be a relic of current-gen frameworks that we won't tolerate in the future.

The technological way out is to use a Data Mapper pattern ORM to isolate the domain logic and the persistence. But this approach won't catch on, because Rails devs have tasted the simplicity of ActiveRecord and aren't about to do more work to get the same result.
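
The distinction, sketched in Ruby (two alternatives, not one file; the mapper's query API is Sequel-style and assumed):

  # Active Record: the domain object IS a database row.
  class User < ActiveRecord::Base
  end

  # Data Mapper: the domain object is plain Ruby...
  class User
    attr_accessor :id, :name   # no framework superclass
  end

  # ...and persistence lives in a separate mapper.
  class UserMapper
    def find(id)
      row = DB[:users].where(id: id).first
      User.new.tap { |u| u.id = row[:id]; u.name = row[:name] }
    end
  end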

It is telling that many language communities eventually head towards amalgamating a collection of really good libraries in a low-coupling manner. This is still a fringe movement in Ruby, currently.


If ActiveRecord allows you to do less work for the same result, at least for some less complicated applications, isn't that a good thing? I think that's DHH's whole point - we shouldn't be pursuing some mythical perfectly testable architecture where it doesn't make sense. If you can write clearer, more concise code for less effort that doesn't fit into the purely separated, easily testable TDD approach, is that really such a bad thing?


In Java, the most obvious example of testing affecting the design of a class is the necessity of avoiding private methods in order to facilitate testing. While there are ways around this -- reflection, PowerMock, probably others -- they all tend to be ugly and hackish.

This has an effect upon the design of classes, because the easiest path is simply to make private methods package private. This is frequently not the ideal design, and taken to its logical extreme means that you will have no private methods.

I think unit testing is important, and do use it. The line for me, though, is similar to DHH's here: when the drive for unit testing affects the design of the software, that's when I tend to become less enamored.


> In Java, the most obvious example of testing affecting the design of a class is the necessity of avoiding private methods in order to facilitate testing.

IMO, this is unnecessary and a failure to understand the point of unit testing. Unit testing is testing the public interface of the unit-under-test in isolation from other components, so there is no reason to avoid private methods to facilitate testing: private methods are, ipso facto, not part of the public interface of the unit under test; they are called by methods in the public interface and tested by testing the methods they serve. Making private methods public and directly testable makes unit tests more brittle and refactoring more expensive, which is exactly the opposite of what you should be striving for with unit testing.


I disagree. For example, a method to generate all the permutations of a sequence is easy to get wrong and should be tested whether or not the library using it exposes it.

Testing an internal method by itself, instead of indirectly through the public API, gives you the same scope reduction benefits that testing a unit instead of the entire program gives (but less pronounced).

Personally I think the solution is to scope unit tests into the thing they are testing. So tests of a private method would be scoped to that method. That way your decisions about what to test aren't constrained, though they can be guided, by what is visible.


> For example, a method to generate all the permutations of a sequence is easy to get wrong and should be tested whether or not the library using it exposes it.

There are three possibilities here:

1. If your language or common utility libraries have a permutations() method, you shouldn't be rolling your own permutations() method, because one exists in libraries.

2. If you're in an environment that doesn't have a built-in permutations(), you should group these kinds of very generic, hard-to-get-right functions into some sort of utility module (in which case it would necessarily already be public).

3. If you're in a language that doesn't have a built-in permutations() and permutations() is in the class which uses it, you have a very generic function on a more specific class, where it has no business being, so it should be moved to a utility class.

In all three cases, the solution isn't just "make it public". If you find that you're just making something public to unit test it, this usually points to a much larger problem with your design.


1. Agreed. (Assume it doesn't.)

2. Why am I making my library's utility methods public? It's a frob library, not a generic utility method library. I don't want clients depending on my utility methods. I don't want to support a separate utility library just to avoid testing private methods. I would prefer not to take on an external dependency for a single simple method. Having it private and tested is the best tradeoff here.

3. Agreed.


The point of unit testing, or any testing, is to make sure that the code does what it should. Public vs. private interfaces are philosophical distinctions in determining what a "proper" unit test is, and do not help in validating functionality. In order to validate that functionality, testing frequently requires making changes to the class structure that would otherwise not be needed.

(Aside: I did not say private methods need to be made public to facilitate unit tests. They do, however, need to be made at least package private; this annoys me.)


I don't think you need to avoid private methods in Java. Rather, I think you should ask yourself if a private method is really necessary. Secondly, you should be asking yourself how you can test private code via the class's public methods.


> I don't think you need to avoid private methods in Java.

I think the reverse question is better in Java. Does this method/class need to be public? Only expose the bare minimum in the API, so that you retain free rein within your codebase.


Agree with that. If you want to unit test something, you should be able to test the outputs for a given set of inputs, plus possibly verify some interactions, i.e. if I call chargeCustomersCreditCard(BillingInfo billing, Invoice invoice), then there will be exactly one call to myCreditCardProcessingMock.chargeTheCard(String creditCardNumber, Decimal amount), where creditCardNumber == billing.creditCardNumber and amount == invoice.totalAmount.
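
In RSpec terms that interaction test might look like this sketch (class and method names assumed, with billing and invoice set up elsewhere in the spec):

  it "charges the card exactly once for the invoice total" do
    processor = double("CreditCardProcessor")
    expect(processor).to receive(:charge_the_card)
      .with(billing.credit_card_number, invoice.total_amount)
      .once
    PaymentService.new(processor).charge_customers_credit_card(billing, invoice)
  end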


Tests should be testing a class's public interface, not the private methods that may or may not drive the public interface. What this type of design pushes you toward is the Single Responsibility Principle. If you find yourself wanting to test a lot of individual private methods, you have likely violated SRP.


> the simple controller is forbidden from talking directly to Active Record [..] This is not better.

It is. The controller layer should be as dumb as possible, it shouldn't contain your (entire) application logic. It's a matter of single responsibility if anything.

Also, I find it very sad that we're still discussing the usefulness of the active record pattern. Other than convenience, it has none. It's a pain to maintain an application that uses it once it reaches a certain level of complexity.

And not just because of testability; it's a pain in the ass to replace or fine-tune certain queries if you're calling active record methods in your controller.


Honest question: what pattern for database access is better, and what are the best tools for said pattern? I don't necessarily mean just in Rails, but everywhere. ActiveRecord-the-ruby-library is incredibly mature and convenient to use, and for better or worse, encourages active-record-the-pattern. The repository pattern seems nicer in theory to me, but in practice, the best tool (in ruby) to implement it with still seems to be ActiveRecord, and then I find that I'm mostly delegating to the underlying AR object (because it already does everything!) and wondering what I've really gained. I was hoping DataMapper2/ROM[0] may have been a more straightforward but high-functioning replacement for AR, but it seems there has been no progress on it for quite some time.

tldr; I'm wondering how you actually do this. Firstly, in Rails, but other acceptable answers are "other technologies do it in this other way, which is better than how Rails does it for these reasons".

[0]: http://rom-rb.org/


I'm not using Rails (nor Ruby for that matter) so I can't comment on that part, but I found the repository pattern to be really useful. Using it with an active record is something I've seen other people do, and at least it gets the active record calls out of the controllers. It's not an optimal solution of course, but IMO it still beats plain active record.


Are you willing to say what language you're using the repository pattern in, and what tooling you've found useful in supporting it? I think my use of DAOs in Java Struts projects was somewhat repository-pattern-esque, but regardless of the frustrations I have with ActiveRecord, it is way better than that was, so I'm really curious what, if anything, is better than either of them.


I'm using PHP and Doctrine 2.


Thanks! I'll check it out.


I've been using CQRS as a pattern for a couple of years, and now event-sourced data as well. On Rails. Life is significantly better.
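
For anyone unfamiliar with the event-sourcing half of that, a toy Ruby sketch (hypothetical domain): current state is rebuilt by replaying events instead of being read from mutable rows:

  Deposited = Struct.new(:amount)
  Withdrawn = Struct.new(:amount)

  class Account
    attr_reader :balance

    def initialize(events = [])
      @balance = 0
      events.each { |e| apply(e) }   # replay history into current state
    end

    def apply(event)
      case event
      when Deposited then @balance += event.amount
      when Withdrawn then @balance -= event.amount
      end
    end
  end

  Account.new([Deposited.new(100), Withdrawn.new(30)]).balance   # => 70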


Anything open-source or articles about this approach that I could go take a look at?


sorry for the delayed response—email is in my profile, feel free to reach out with more questions.

I've got a post or two on my blog[0] (haven't updated it in a while, but working on it!) and I just released a gem[1] to facilitate event-sourcing in Ruby. It's young, but I've refined the API a bit from where I started a couple years ago (when it was just an experiment), and it has a much cleaner implementation now.

For more info about CQRS/ES and DDD in general, I recommend starting here[3] and lurking on the DDD mailing list[4]

[0] http://karmajunkie.com

[1] http://github.com/karmajunkie/replay

[3] http://martinfowler.com/bliki/CQRS.html

[4] https://groups.google.com/forum/#!forum/dddcqrs


In addition, I find the controller to be one of the worst parts of Rails. The input comes in the form of some magical methods that are available (params, request), and the output happens partially by setting instance variables and partially by calling render. How could anyone ever think that's a good idea?


I guess it helps if you're developing an MVP and you need to show something in a matter of days/weeks. But beyond that, I never saw the appeal of Rails.


Don't get me wrong, I am pretty happy with Rails in general. I just think that the way controllers are implemented seems totally random. Another, possibly worse, part is lifecycle callbacks on AR models. Those are just a huge invitation for confusion and trouble down the road.


I generally agree, but it's an important distinction to make that controllers should not be "dumb". If that's the case, then we need to stop calling them controllers. Instead we should insist that controllers be "lean".


I specifically use dumb to mean that controllers should know as little as possible about the business logic. They should just mediate between requests and responses.

EDIT: dumb not dump.


I echo his sentiment. Integration testing, especially when you have a JS frontend, makes much more sense. I never saw the point of controller tests and making sure a controller assigns variable @widgets with [widget] and all that nonsense. An integration test will identify all those problems and then some.


For my use case (building a reasonably complicated e-commerce platform) I find that the sweet spot tends to be integration testing to make sure the frontend is spitting out the right thing when going through the whole stack, and unit testing to make sure the correct logic is being applied when performing particular operations.

That combination allows reasonably fast unit testing, because database interaction is stubbed out for that level, which gives a decent level of confidence that nothing major has been broken within a few seconds, and then a longer 10-15 minute integration test suite which checks the stack works as a whole.


I agree. Instance variable assignment is an implementation detail, IMO it's more useful to describe their behaviour with a request spec for example.


Yes, exactly. We use request specs to test regular HTTP requests to regular pages or our API. And then we use feature tests with Capybara and PhantomJS to load the app, click through it, and make sure it behaves as it would in a customer's browser. This covers most of our application. And then we use regular unit tests for anything not customer facing, such as background jobs. But these unit tests make up only a small fraction of our test suite.
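
A minimal request spec of that sort might look like this (RSpec-style sketch; the route and page content are assumed):

  describe "GET /widgets" do
    it "renders the widget list" do
      get "/widgets"
      expect(response.status).to eq(200)
      expect(response.body).to include("Widgets")
    end
  end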


The point is to be able to unit-test business logic in the controller. If your app is just dumb crud end-to-end then sure, there's no point in anything but integration tests (and maybe not even those), but if you have any interesting logic then you can get much better coverage more efficiently in a unit test than having to run through the whole stack every time.


Decoupling is one of the fundamental tenants of development, and provides far far more benefits than merely TDD. If he thinks decoupling is about TDD he's missed out on architectures that can be easily fixed when bugs show up by isolating causes in changed code, being able to extend without modifying core code (the open close rule) and managing regressions in general. How do you scale software to a team of developers without decoupling?

The only argument I've ever seen against decoupling is performance, and it's rare that argument makes sense in all but the most real time of applications.


I don't think that DHH is arguing against decoupling; rather, he is arguing that certain decoupling practices encouraged by TDD interfere with the readability of the application.


I've never seen a Rails application that suffered from too much decoupling. If someone out there wrote one of those, call me because I have a job for you. 99% of all things written in Rails are monolithic messes of business logic encrusted with persistence and response handling. So ridiculously coupled that you don't talk to objects by themselves, but bring in a whole family of resources to models and controllers that violate almost every SOLID principle.

The whole "You're not gonna need it" argument works until you actually do need it. Which, unless you're not doing a good job, is going to happen. Then you have no discriminated interface to pry your application blocks away from each other and can't persist a model without dozens of unintended side effects.

There's no readability penalty to decoupling. The more you decouple the less you need to read to understand an application.


(Tenets, not tenants. Friendly heads up, not a dig)


I don't understand why he has to equate TDD with the mockist approach to TDD without clarifying that he is talking about the mockist approach and not TDD in general. Pivotal Labs, for example, is obviously a huge proponent of TDD, but has historically been hesitant about true, isolated, heavily stubbed-and-mocked unit tests.

That makes me wonder if he just doesn't have a differentiated enough view of TDD, or if he omitted that on purpose to get more attention. I am also not sure which answer would be more disappointing.


" but does so by harming the clarity of the code through — usually through needless indirection and conceptual overhead"

This argument feels a bit thin and unsubstantiated for the general case. I can see his criticism of hexagonal design applied to Rails, but he's using that as a straw man to attack TDD. I think he could better criticise the limitations of TDD by directly examining applications of red-green-refactor and other TDD principles.


It's not really a straw man. Driving your application with tests at the unit level doesn't make a lot of sense, in the case of a web app at least. The BDD approach makes more sense to me. It's how I work and in my experience tends to inform design a lot better.


I think saying hexagonal design is bad therefore TDD is bad qualifies as a straw man.

Person 1 has position X. Person 2 disregards certain key points of X and instead presents the superficially similar position Y. The position Y is a distorted version of X.

Here X = "TDD is good" and Y = "hexagonal design could be good for Rails". Y ≠ X.

I'm no TDD zealot, I just think DHH's argument here is weak.


I was unaware that people were actually trying to unit test controllers. That to me just seems like a recipe for endless frustration. Mock out a web request? Please don't.

Everything I've ever read about Rails refactoring indicates that your controllers should be skinny, implying they don't need to be tested, push all complex logic out into helper functions, lib classes or models and unit test those.


I found myself in that exact situation recently, struggling with finer points of Capybara, trying to nail down critical behavior in my controllers . . . then I realized, why had all of this crap crept into my controller in the first place? I refactored a ton of things into the models where they really belonged and ended up with much tighter code and better reuse of functions. Now testing is way easier - the only thing my controllers are really doing now is routing after certain conditions and serving up error/success messages.

Sometimes I feel like Steve Martin when I'm getting more sophisticated with a framework . . . I've got a googlephonic stereo with a moonrock needle, but maybe the problem is the shocks in my car: https://www.youtube.com/watch?v=Cjjsz14hL48


Yeah it's a tempting thing to do because there's this naive belief that you must have 100% coverage. The fact that you've got coverage on trivial code is not doing anything to increase your reliability, and you're spending so much time mocking the requests which is not code you have any control over.

Your 2nd paragraph is what DHH is fighting against. He's advocating much simpler controllers than many people tend to write, and the inclination to write tests for a controller grows with the complexity you've put in there. More importantly, DHH is advocating an architectural style, and that's getting lost in the "TDD is dead" linkbait.


TDD is supposed to affect design, in good AND bad ways. TDD doesn't claim to produce the best design, only the most testable one. The first time I read about TDD, it basically said testability > clarity.

Succinct code that you don't know is doing the right thing is worse than more verbose code where you can easily verify what it does.

I do think that, specifically with Rails, tests get so plentiful that they take long to run, and that threatens the whole process. And the weighting of model vs. controller vs. integration tests is something that has bitten me before. In particular, I now do less integration and more model testing, because integration tests can be flaky and an order of magnitude slower.

Since my first web programming job, in 100% of my projects the tests grew so big that they took minutes to run, making me nostalgic for the speed of the Java tests I had at my first programming job.


> The first time I read about TDD it basically said testability > clarity.

Now you scared me. I never tried TDD, and if that's a required tenet, I never will. This is completely upside-down.

Tests cannot verify that a program is correct.


Only proof can verify that a program is correct, and that is so cumbersome and expensive that it is done only for rare, critical operations.

Tests give you the ability to know how a certain code behaves in specific circumstances.

Clarity makes it easy to understand the general case.

So if the clarity of the general case gets a little worse in order to be able to test the outlier cases as well, the TDD philosophy would welcome that trade. Or at least, that's how I understood it.

A brute and unpolished example of clarity vs testability:

A)

  def complicated_algorithm(input)
    mod_input = Math.sqrt(input)
    mod_input = input / mod_input
    # ...
  end

  def division_in_complicated_algorithm_test
    input = 1
    Math.should_receive(:sqrt).with(input).and_return(0)
    input.should_receive(:/).and_raise(ZeroDivisionError)
    complicated_algorithm(input)
  end
-----------------

B)

  def complicated_algorithm_testable_version(input)
    mod_input = Math.sqrt(input)
    mod_input = divide(input, mod_input)
    # ...
  end

  def divide(a, b)
    a / b
  end

  def divide_test
    assert_raises(ZeroDivisionError) { divide(1, 0) }
  end
Overall, the point of doing tests first is that you don't hack them together with complex dependency injections; naturally, to save work, you write the easy test... at some expense to the final code.

In the above example, the top version is less verbose, having one fewer method, but its test is more complex and fickle, because it was written to verify the already-written code.

It's not a fantastic example; we could argue that the tests try different things, and that division is too silly to put into a separate method. The point is that the top version happens when you write code first and test later, and the bottom one happens the other way around. TDD advocates testability over clarity.


I'll argue that clarity makes it easier to understand the code. All cases of it. Not just the general case.

Yet, I can see how one'd want to sacrifice a small bit of clarity to gain a big amount of testability. Thanks for the example.


I really identify with some of the points he's making, they're observations I've made myself so it's nice to see someone with his clout bringing them up.

I wonder about the design thing though - our code is in some ways a document of the circumstances surrounding it. Does it make sense to have it conform to some Platonic ideal, which we corrupt when we alter it to make it more testable? I'm really not sure about this, but I doubt it. Code ultimately needs to work in a given set of ways and that's our primary concern with it. Making the code "pure" (or just "easy to read" if you like) is a service to other developers who come along later. So, the tradeoff is testability for intelligibility. I can imagine a lot of scenarios where that tradeoff is a rational one.


We were having a discussion about a collaborating group's architecture. The guy said something couldn't be done, because it would involve coupling two components. Again, that is a nice ideal to aim towards, but surely functionality comes first.



That was a terribly thought-through reply. It sounds like the sort of bullshit you hear on Sunday morning political shows: "Let's pretend I didn't hear my opponent's points, and throw back my own talking points, which fail to address his points."


I don't really see DHH giving any arguments as to why designing for tests leads to poor design decisions. I suppose I can buy the argument that there are cases where this isn't true, but I can't think of any and he's not giving any. I would argue that Angular is a good example of how designing for testability creates good design decisions.

Secondly, I don't buy the idea that you should focus on integration tests over unit tests. Integration tests are important, but they're also the most expensive tests in terms of maintenance. Unit tests you can run with every code submit. You can run them multiple times per code submit. Integration tests take too much time for this to be practical.

In all, I'm tired of people making decisions based on what they're against. DHH is just being negativistic and defining his code design strategy around being against TDD and test-driven design. That's ok. But what design strategies does he support? He starts giving more information about that at the end, but I'm still left scratching my head and wondering what design philosophy he's actually advocating rather than what design philosophy he's bashing.


> I don't really see DHH giving any arguments as to why designing for tests leads to poor design decisions.

It results in pointless levels of abstraction that aren't used to abstract anything in real code, but that destroy readability and screw up static analysis tools. It also results in over-splitting of entities to the point where they don't represent anything remotely similar to the problem domain. Finally, it encourages "old stuff plus this addition" design. (For example, using a switch statement to cover 7 different cases for days of the week, rather than using a math formula.)
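
That last example, sketched in Ruby (days numbered 0 to 6):

  # "Old stuff plus this addition": one branch per case.
  def next_day(day)
    case day
    when 0 then 1
    when 1 then 2
    when 2 then 3
    when 3 then 4
    when 4 then 5
    when 5 then 6
    when 6 then 0
    end
  end

  # The formula it was dodging:
  def next_day(day)
    (day + 1) % 7
  end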


Black box (functional) testing is the way to go. I created a flow style of testing, which allows "Fast & Thorough Testing". This is a JavaScript & Jasmine extension, but the concept can be applied to other languages.

http://briantakita.com/articles/fast-and-thorough-testing-wi...

The nice thing is the testing does not have a large effect on the implementation, so you have the freedom to change the implementation without the tests failing.

The test suite scales, since edge cases can be grouped together into a single flow. This removes the extraneous runtime burden of having to recreate the same context for each individual edge case.

I find that I don't need to be performing TDD as often.


I certainly do agree about integration tests being important. I've also started moving towards using a live database for my tests. I set up a postgres database by copying over a master copy to a temporary directory and running a postgres daemon from there. It takes ~100ms and with fsync turned off it makes for snappy tests. If it starts getting to be slow I can always move it to a ramdisk.
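
In Ruby the same trick looks roughly like this (a sketch; the template path is an assumption, while postgres's -D, -k and -c flags are real):

  require "fileutils"
  require "tmpdir"

  template = File.expand_path("~/.pg_template")   # pre-initialized cluster
  dir = Dir.mktmpdir("pgtest")
  FileUtils.cp_r("#{template}/.", dir)

  # fsync=off trades durability for speed (fine for tests); -k puts the
  # unix socket in the temp dir; empty listen_addresses disables TCP.
  pid = spawn("postgres", "-D", dir, "-k", dir,
              "-c", "fsync=off", "-c", "listen_addresses=")
  at_exit do
    Process.kill("TERM", pid)
    Process.wait(pid)
  end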

Here's a library I wrote for golang which wraps it all up in a convenient package:

https://github.com/surullabs/ghostgres


This may be a naive comment, but is DHH simply being defensive about a perceived movement away from Rails i.e. "decoupling from Rails"?

* edit - of course, he could be defensive and right, they aren't exclusive


I don't see why TDD proponents make a big deal about not touching the database. It's as if they haven't heard of SQLite's in-memory option, in which the database is just another data structure in RAM, which is all that their extra layers of objects are. True, with that setup, you're using SQLite for tests and, say, PostgreSQL in production. But is that any worse than using your own mock objects in tests? What am I missing?


SQLite is not <your actual RDBMS>... it's SQLite: a database that does not enforce column types and lacks most of the advanced features of a real RDBMS. Using SQLite will work great right up until the point where it won't, and you'll get your fingers burnt.

If you separate your concerns properly you won't need to mock the database layer either. Mocking is just one part of the trifecta of good testing, along with Stubbing and Faking.

For most things it would make more sense to fake the database layer or stub the database layer in your "logic" layer.
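
To make the trifecta concrete, a sketch in RSpec terms (the UserRepo interface here is hypothetical):

    # Fake: a real, working in-memory implementation of the same interface.
    class FakeUserRepo
      def initialize;   @rows = {};            end
      def save(user);   @rows[user.id] = user; end
      def find(id);     @rows[id];             end
    end

    # Stub: canned answers, no behavior, no expectations.
    repo = double("UserRepo", find: User.new(id: 1))

    # Mock: an expectation about the interaction itself.
    repo = double("UserRepo")
    expect(repo).to receive(:save).with(kind_of(User)).once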

However, if your application makes heavy use of the RDBMS then you should test that layer too, in your integration tests rather than your unit tests. Most places that interact with an RDBMS treat it like a black box rather than a business layer of its own. You really need integration tests to ensure that constraints and business rules are captured properly... most people never bother.

The real problem with TDD and its methodologies isn't TDD itself; it's people shoehorning about 10% of what proper testing should be into two narrow groups: stuff that you can do with "unit tests" and "things we can mock." There's a lot more to it than just those two things.


Using SQLite's in-memory option is basically a convenient way to get database mock objects. I don't personally think it's any worse (or better) than using mock objects. However, a reasonable argument for why it might be worse is that you've introduced a lot of fake code (as in, code not used in the real application) that can cause false negatives or false positives in your tests. That's true of mocking libraries too, but they introduce far less code.


SQLite probably qualifies as a mock in the sense DHH is talking about - as another reply said, there's no clear sharp line between "unit" and "integration". But to answer your question, what you may be missing is performance. I don't object to tests that use the database on ideological grounds, I object because they're slow. In-memory databases are faster, but still much slower than a true unit test.


I think it's less that running database tests is bad, and more that your business objects (models) should not be so tightly coupled to your ORM. All of the business logic is supposed to go in the models, but the models inherit from ActiveRecord::Base, so you're stuck with ActiveRecord forever and you can't test any business logic until after you've done a schema migration. That places the database at the center of the development process, because the first thing you have to do is design the database schema; nothing else works until then.

Instead, you really should be writing (and testing) business logic first, and figuring out what your objects/models are going to be through a gradual refactoring process. Then you can design your persistence schema after your objects and their relationships are fully fleshed out.
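
A sketch of that decoupled style (the domain is hypothetical; the point is that there's no ActiveRecord and no schema in sight):

    class Invoice                      # plain Ruby, no ActiveRecord::Base
      def initialize(line_items)
        @line_items = line_items
      end

      def total_cents
        @line_items.inject(0) { |sum, li| sum + li.price_cents * li.quantity }
      end
    end

    # Testable the moment it's written; no migration required:
    line = Struct.new(:price_cents, :quantity).new(10_00, 3)
    Invoice.new([line]).total_cents    # => 30_00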

Rails is really a database-driven development tool, but guys like Uncle Bob are arguing that database-driven development is an anti-pattern.


Kind of off-topic, but it is nice to read a thoughtful post that isn't over-the-top flame inducing. Less emotional rhetoric than some recent TDD discussions.


Seriously? It's full of snarky swipes at "true believers". I don't think DHH has made a sincere attempt to engage with the other side; this is knocking down a strawman again.


I would have a repository layer even if I didn't write any tests; I don't want an ORM interfering with my "business". So this could just as well be a post against ORMs. Still, I think the author is right: TDD does enforce an architecture, and that architecture isn't necessarily the best one. TDD zealots often run around preaching their cult, tuning out any drawbacks as if there were none.
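
For what it's worth, the repository boundary doesn't have to be heavyweight; a Sequel-flavored sketch (the interface is mine):

    class UserRepository
      def initialize(db)               # a Sequel::Database, or anything duck-typed
        @db = db
      end

      def find(id)
        row = @db[:users].where(id: id).first
        row && User.new(row)
      end

      def save(user)
        @db[:users].insert(user.to_h)
      end
    end
    # Business code sees only find/save; the ORM stays behind this wall.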


I wonder if this isn't a maintainability issue in disguise.

I have never had a problem with unit tests or integration tests. As a rule I never use mocks, and everything fits into one of those two areas: either you have real data sources available (such as an in-process db), or you make it a module that can easily be unit tested.

It's clear he is against TDD first, and looking for reasons second. I feel other factors are at play.


>> … it's a mistake to try to unit test controllers in Rails (or similar MVC setups). The purpose of the controller is to integrate the requests from the user with the response from the models within the context of session.

Well said … Now, if only somebody from Salesforce.com would understand this and stop forcing their customers to write these useless controller tests.


One thing I like about Java programmers is that they realize everything a class depends on needs to be passed to that class's constructor. There's really no way around it. Change concrete instances to interfaces, and you have a nicely testable class. Write an integration test, write a unit test: they're both easy.
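
The same idea translates directly to Ruby, minus the interfaces (the class names here are hypothetical):

    class ReportMailer
      def initialize(renderer, mail_client)  # every dependency enters via the constructor
        @renderer    = renderer
        @mail_client = mail_client
      end

      def deliver(report)
        @mail_client.send_mail(body: @renderer.render(report))
      end
    end

    # Production wiring:  ReportMailer.new(PdfRenderer.new, SmtpClient.new)
    # Unit-test wiring:   ReportMailer.new(fake_renderer, recording_client)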


s/Java programmers/good Java programmers/.


DHH is arguing from the perspective of a Rails developer working on a Rails application. It's no small kingdom but to discredit TDD as a practice for all software developers is short-sighted. There are enough counter-examples of the benefits of TDD in my own experience to make the claim invalid as a universal truth.


Test... good. TDD ... sometimes (often) good.

But unit tests can lead to an overly abstracted design that harms the quality of the code.

Test what you can with unit tests, but don't compromise your code to do so when there are other ways to achieve a suitable level of testing.


Curious why DHH is in the title. This doesn't seem to happen with posts from others well-known in the tech community.


Because it's in the title on the original site. It should still be removed here, though, and treated like a "- NY Times" suffix.

Just checked, it is now removed here.


Ah, didn't notice this was on the title on the original site. Thanks for pointing that out.


We removed it. In general, HN tries to emphasize content, not personalities.


Thanks Daniel!


Rails, rails. rails. Everything is about Rails. David is a good guy, but I'm starting to wonder if he's ever built a moderately complex system, involving integrations, message queues, several data stores, a handful of third-party libraries and APIs and deployed on more than 3 machines.

TDD is a tool for managing complexity. It's advice, not a recipe. Like any technique, it isn't a substitute for thinking.


Everyone has their own definition of complexity. I make no secret of the fact that a decade's worth of developing Basecamp is where I draw my primary experience from.

That system is small by web scale standards -- only 70 million requests/day, 1.5 terabytes of DB data, half a petabyte of file storage, two data centers, and about 100 physical machines -- but probably still larger than 97% of all Rails apps.

Also, plenty of data stores (memcached, redis, multiple MySQLs, solr), many 3rd party libs, job servers, integrations, and more.

So no, it's no Facebook or Yahoo or Google. But it also isn't a toy system, except in the sense that we're still having so much fun playing with it.


I am not dismissing Basecamp. I am just saying that in a large portion of the software world, applications are waaaay more complex than normal Rails apps, and in that context TDD makes sense if only to manage complexity. Even if you are not Facebook but, say, Airbnb: if their tests weren't fast enough to trust for decision-making, they wouldn't be able to deploy in a reasonable time. And when slow tests lead to infrequent deployments, that's when the real problems begin. (Airbnb is an arbitrary example off the top of my head, not anything specific.)

My gut feeling is that >50% of software development happens in those complex apps, not in Rails apps. So dismissing TDD is just another extreme viewpoint, which many people will unfortunately take for granted.


AFAIK Airbnb uses Ruby and Rails to some extent. An actual job offer lists it as a requisite: https://www.airbnb.com/jobs/departments/position/2192


I think the counter argument may be that TDD actually adds complexity to a system by destroying the architecture. So I am interested in what particular arguments you have in how TDD manages complexity instead of increasing it.


Here's my nagging question, since TDD malaise has definitely crept into the broader dev consciousness, in no small part due to your recent shots across the bow.

Have you distilled out broader guidelines for system development and valuable testing? Your focus seems to be on your own experience and community, which isn't being picked up so well outside of it.

This post was moderately Rails-centric, and the wider conversation is coming from more varied groups. Is the Ruby+Rails ecosystem fundamentally different in ways that mean outside groups should consider your perspective before sharpening their pitchforks?

The TDD drag on development seems different for different folks. The TDD pitch tells devs they're following a best practice that limits human error in their implementations. But humans build the tests, and humans also commit errors of focus.

For the devs that can piece together awesome and fast test suites to run against their awesomely structured and implemented code, will they find lower value in all that test-building time?

For devs that have trouble implementing, but can piece together test suites that help them along, will they find higher value in their tests?

You have some devs who don't need tests wasting time and marring otherwise shippable code. You have others, guarding against egg on their faces, spending that much more time, but time well spent.

Is there a dev efficiency divide opening up? Are there differences in the value and importance of TDD across all the various categories of languages, tools, developers which just can't be summed up in blog posts and retorts? We demand cargo to build a cult around!


> I'm starting to wonder if he's ever built a moderately complex system, involving integrations, message queues, several data stores, a handful of third-party libraries and APIs and deployed on more than 3 machines.

Erm… Have you heard of 37Signals?

> TDD is a tool to manage complexity. It's an advice, not a recipe. Like any technology - it isn't a substitute for thinking.

I don't think you disagree with DHH here. The key point is TDD cargo-culting has encouraged codebases to become deformed beyond recognition in pursuit of unit isolation, which btw provides no guarantee that a system even works end to end.


> which btw provides no guarantee that a system even works end to end.

Nor should it. Some things you test in isolation, others you test together. I am not making fun of DHH or Basecamp (37signals is the company, not a system); I even read most of his stuff and admire him.

All I'm saying is that there are many systems far more complex than Basecamp and when you cannot fit the whole thing into your head, TDD helps to divide and conquer. I am against blindly following TDD, but I am also against dismissing it because it gets in the way when building a Rails app.


That's precisely the problem. Many in the rails community do use it as a substitute for thinking.

In general, I'm not the biggest DHH supporter (although he's made me a ton of money, indirectly, via Rails), but I do like that he's stirring the pot here.

Back in the 90s and early 2000s, I wrote tests, when needed. Sometimes before application code, sometimes after, it was a discretionary tool that I had in my arsenal that helped me both solve problems and feel confident that "my code won't break".

At some point the majority of the Rails community decided that if you don't test, you're a terrible programmer. Full stop.

The problem was that the testing tools were terrible at the time. RSpec, before its API solidified, was breaking every other release; things like Capybara and Selenium and Watir always kinda worked, but not really; and you'd often spend 10 minutes writing the business logic and then 40 minutes writing tests, getting them to pass, wrestling with external dependencies, etc.

Furthermore, because people were practicing test-based application design, you were constantly rewriting your codebase and your test suite. If you're designing for tests, you're not designing for the domain; as domain requirements changed and broke your testing model, you basically had to fix everything, and you could never get a handle on the domain because you were ruled by the tests.

All that said, I do think tests are useful. TDD isn't as useful for me. My thoughts are in line with Rich Hickey who said:

"Life is short and there are only a finite number of hours in a day. So, we have to make choices about how we spend our time. If we spend it writing tests, that is time we are not spending doing something else. Each of us needs to assess how best to spend our time in order to maximize our results, both in quantity and quality. If people think that spending fifty percent of their time writing tests maximizes their results—okay for them. I’m sure that’s not true for me—I’d rather spend that time thinking about my problem. I’m certain that, for me, this produces better solutions, with fewer defects, than any other use of my time. A bad design with a complete test suite is still a bad design."

From http://www.codequarterly.com/2011/rich-hickey/


I am quite sure that the Basecamp cluster contains tens of machines. It also features a rather clever caching design. Please don't try to justify a push towards simplicity as lack of ability to handle complexity.


A) I don't subscribe to the idea that building GUIs on data stores is in any way demeaning or easier than other kinds of development work. B) I do think that if that is the main kind of system you write (and I could be mistaken, but all of 37signals' products look that way to me), it will shape your design philosophies and your thinking about quality assurance differently than other sorts of systems would.

The requirements are completely different for a system that takes info from a user, persists it, and gives it back later than for a system that processes information and acts on it.

Those different requirements will drive different architectures and different QA processes. None of that means you can't document those requirements as automated tests at the beginning of your coding cycle.


In my experience, as a system gets more and more complicated it is typically the interfaces between things (i.e. integration) that become problematic. While the number of logic bugs grows roughly linearly with the size of the system, interface and integration bugs grow combinatorially, since each new component can interact with every existing one. It is the integration tests that really keep madness at bay. TDD and unit tests can be useful, especially if you're working with a dynamic language, but not nearly as useful as integration tests, and they won't really help you with complexity.


It's this assumption that TDD is all about unit tests that I find interesting. I drive my code with integration tests I write ahead of time as well. Maybe it is just the crowds I run in, but I've never encountered this dogmatic assumption that unit tests are the only tests you should have.


The TDD book I'm most familiar with, GOOS [1], advocates starting with integration tests and then using unit tests during the process of making the integration test work. This approach has always helped me stay focused on the real-world requirements of the feature while also having fast tests to rely on during development.

[1]: http://www.growing-object-oriented-software.com/code.html
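
Compressed into a sketch, the book's rhythm looks something like this (a Capybara-style outer test; WelcomeMailer and last_email are assumed helpers, not from the book):

    # Outer loop: one failing end-to-end test states the feature.
    feature "signup" do
      scenario "a new user receives a welcome email" do
        visit "/signup"
        fill_in "Email", with: "a@example.com"
        click_button "Sign up"
        expect(last_email.to).to include("a@example.com")
      end
    end

    # Inner loop: fast unit tests drive out the pieces that make it pass.
    describe WelcomeMailer do
      it "addresses the mail to the new user" do
        mail = WelcomeMailer.build(email: "a@example.com")
        expect(mail.to).to eq(["a@example.com"])
      end
    end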


I consider GOOS the most credible authority on how to build a project with TDD methods. It even uses a statically typed language, which is interesting because the most vocal adherents of TDD tend to use dynamically typed languages.

I've read the book, but I'm not such an avid TDD'er myself, mostly because I'd probably be doing it wrong for quite a while before getting it right.

It'd be great if other people who've read the book had insights on how it relates to DHH's opinions.


> It even uses a staticly-typed language, which is interesting because the most vocal adherants of TDD tend to be using dynamically-typed languages.

I think that's mostly an accident of history: TDD initially became a thing with Java, but much of the community attached to it overlapped with the community that was moving away from Java to dynamic languages just as TDD was taking off. There's nothing inherently tying TDD to dynamic languages.


I have seen that dogmatic assumption, but I feel that it has fallen out of fashion at least a little bit. Nobody is telling me that I am doing it wrong for insisting on tests at multiple layers of the pyramid.


Same impression here, dating back to Jan 2013 and his diss of dependency injection, based on a single-method toy example: http://david.heinemeierhansson.com/2012/dependency-injection...



