Yes, but if you're not wise about how you test, it's easy to put in the effort without getting the bang for your buck. Cargo cult testing is a very real problem.
Example: I was extending some code the other day that had 3kloc of thin wrapper code and 30kloc of mocks and unit tests which effectively did nothing but ensure that the 3kloc passed things through unaltered. Meanwhile, the code is widely acknowledged to be frustrating to interface with because it uses a string-based API that isn't documented (and no, those unit tests won't tell you semantics), versioned, or maintained with an eye towards compatibility. Naturally, there are no integration tests with the layer above or below it, and those interfaces are where all the actual bugs happen. So the tests increased the cost of that code roughly tenfold and returned essentially nothing.
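To give a flavor of what those 30kloc looked like, here's a hypothetical sketch (names invented, not the actual code). The "unit test" just restates the pass-through, so it can never catch the real bugs, which live in the undocumented string protocol itself:

import static org.mockito.Mockito.*;
import org.junit.jupiter.api.Test;

class FooWrapperTest {
    interface Backend { void send(String command); }

    // The 3kloc of wrapper code, in miniature: it just forwards the string.
    static class FooWrapper {
        private final Backend backend;
        FooWrapper(Backend backend) { this.backend = backend; }
        void send(String command) { backend.send(command); }
    }

    @Test
    void sendForwardsUnaltered() {
        Backend backend = mock(Backend.class);
        new FooWrapper(backend).send("cmd=frobnicate;id=42");

        // Proves only that the string passes through unmodified; it says
        // nothing about whether the real backend understands that string.
        verify(backend).send("cmd=frobnicate;id=42");
    }
}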
Also see: cargo cult comments. You know, these guys:
/**
* Gets the foo.
*/
Foo getFoo() { return foo; }
Anyway, my personal rule of thumb is to start with integration tests and only drop to the granularity of unit tests if I've got a "logic hairball" on my hands where the interfaces are much simpler than the internal logic (parsing and algorithm implementations, usually). I'd be happy to hear other opinions though!
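To make that concrete, here's a toy two-layer sketch (hypothetical names, JUnit 5) of what I mean by starting at the integration level: wire up the real objects and assert only on what a caller can observe, so the test exercises the interface between the layers instead of mocking it away.

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.Locale;
import org.junit.jupiter.api.Test;

class ReportIntegrationTest {
    // Lower layer: plain conversion logic.
    static class UnitConverter {
        double celsiusToFahrenheit(double c) { return c * 9.0 / 5.0 + 32.0; }
    }

    // Upper layer: formatting built on top of the converter.
    static class TemperatureReport {
        private final UnitConverter converter;
        TemperatureReport(UnitConverter converter) { this.converter = converter; }
        String describe(double celsius) {
            return String.format(Locale.US, "%.1f F", converter.celsiusToFahrenheit(celsius));
        }
    }

    @Test
    void reportsFahrenheitToOneDecimal() {
        // Real converter, real report; no mocks between the layers.
        TemperatureReport report = new TemperatureReport(new UnitConverter());
        assertEquals("100.4 F", report.describe(38.0));
    }
}

If the conversion logic were a genuine hairball, its edge cases would get their own unit tests underneath this, but the wiring between the layers still gets covered here.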
My experience with integration-tests-first is that a handful of use cases get stuffed into those test cases, and it becomes harder to figure out what you're actually testing. It's good to check that everything comes together and works in that one cherry-picked case, but I'd rather be able to give the more granular guarantee that each of my components holds up its individual contract.
Doesn't it make sense to focus on contracts at a higher level of abstraction though? Wouldn't it be better to put the contracts at the level of the user rather than setting contracts for most/all of the functions in your code? If the mocked user input resulted in the correct output and there were no other side effects, wouldn't that be sufficient in many cases?
There might be confusion about terminology here. I think of unit tests as testing the contracts of individual units, with each level of abstraction getting its own set of unit tests. Integration tests are the ones that exercise the external user interface and represent a use case, BUT a unit test of layer N+1 could effectively be an integration test of layer N (I just wouldn't call it that if it were using a mocked layer N interface).
> ... BUT a unit test of layer N+1 could effectively be an integration test of layer N (I just wouldn't call it that if it were using a mocked layer N interface).
Fixing the problem of unit tests becoming de facto integration tests is exactly what mocks are for. If you don't mock your dependencies, then you are in fact running an integration test. The problem becomes that your mocked dependency and the real dependency can drift, because there's nothing tying them together. So the unit test passes but you still have an integration bug.
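One way to keep the mock/fake and the real dependency from drifting is a shared contract test that every implementation has to pass. A minimal sketch (JUnit 5, hypothetical Storage names, wiring for the real implementation elided):

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.Test;

// Hypothetical dependency interface.
interface Storage {
    void put(String key, String value);
    String get(String key);
}

// The fake that unit tests of higher layers would use instead of an ad-hoc mock.
class FakeStorage implements Storage {
    private final Map<String, String> data = new HashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}

// Shared contract: every Storage implementation must pass these tests.
abstract class StorageContractTest {
    protected abstract Storage createStorage();

    @Test
    void getReturnsWhatWasPut() {
        Storage storage = createStorage();
        storage.put("key", "value");
        assertEquals("value", storage.get("key"));
    }
}

// Run the contract against the fake...
class FakeStorageTest extends StorageContractTest {
    protected Storage createStorage() { return new FakeStorage(); }
}

// ...and run the same contract against the real implementation (a real
// DatabaseStorage, say, with its setup elided here), so the fake and the
// real thing can't silently drift apart.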
Personally, I think both unit and integration tests are really important, but integration tests tend to get overlooked amid all the zeal for unit tests. Testing a component's contract is obviously necessary, but testing that your components expect the correct contracts from each other is also important.
I agree, both are really important. I think integration tests get passed over because they require a higher-level view of your software. Personally, in my day job I'm not extremely familiar with the domain, so I'm pretty ineffective at writing sensible integration tests. I can, however, pass my unit tests/contracts off to the people who are familiar with the domain so they can turn them into an integration test.
My experience with production/enterprise/whatever code has been that bugs at interface boundaries outnumber bugs in the interior of a component, so I like to check that A plays nice with B plays nice with C, rather than ensure that A, B, and C are individually robust players in the game of blame volleyball; individual robustness won't save them from inter-component misunderstandings.
As always, there are exceptions: algorithms, logic hairballs, parsers, anything with a sufficiently low surface-area-to-volume ratio is likely to generate significant bugs in its interior, so it can genuinely benefit from unit tests.
I'm writing a translation wrapper (very high surface-area-to-volume ratio), but I might have the advantage that A and B are in my project, so I can have a good idea of how they behave. C isn't, but I keep an additional test suite for the small bit of it I use, just to make sure it behaves reasonably well, and so that if it doesn't, I recognize something is wrong.
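For C, that extra suite is basically a set of "pin down my assumptions" tests: it exercises only the small slice of the third-party API I actually rely on, so an upgrade that changes that behavior fails loudly in one obvious place. A minimal sketch, using the JDK's URI class as a stand-in for C (my real dependency is different):

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.net.URI;
import org.junit.jupiter.api.Test;

class ThirdPartyAssumptionsTest {
    @Test
    void relativeResolutionKeepsTheBasePath() {
        // The one behavior the wrapper depends on: resolving a relative
        // reference against a base URI that ends in a slash keeps the base path.
        URI base = URI.create("http://example.com/api/");
        assertEquals("http://example.com/api/items", base.resolve("items").toString());
    }
}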