Telling developers that 100% test coverage is an anti-pattern is like telling a room full of Americans that running too much is harmful to your health: true, but not helpful.
But then there's that one guy who tells you that you need to run 12 miles per day, otherwise it's not worth existing. False, and not helpful.
The reply was great; it acknowledged the value of test coverage metrics while also recognizing the danger of pushing them to an extreme as a numeric goal. True, and helpful.
The trick is that one of those unhelpful suggestions is more dangerous than the other. If you consider them opposite sides of the same coin, that coin doesn't have a fair flip.
Being told that too much running is harmful costs you little. But thinking that running 12 miles a day is the only useful way to run will discourage you from attempting to run at all.
To put it another way, saying "50% test coverage is fine, sheesh" should encourage more testing: it lowers the bar you have to clear to consider yourself "doing it right", which makes it easier to talk yourself into trying at all.
Even better, what we're saying is "50% test coverage is a good start; keep writing good tests, and if the number goes up, that's even better." Targeting 100% coverage simply sets up the wrong incentive.
It is a subtle, but extremely important, distinction.
I've seen codebases that suffered from too much testing. It overcomplicated the code and made it brittle, without catching actual bugs. Higher-quality tests that would actually have been useful and allowed refactoring would have produced lower coverage numbers, and that wasn't acceptable.
Granted, too little testing is probably the more common error.
This is an excellent, scientific understanding of process quality control.
This is essentially [Deming's 11th point](http://en.wikipedia.org/wiki/W._Edwards_Deming#Key_principle...) applied to software. Deming warned against numeric goals and quotas, since a simplistic quota or numeric target ends up distorting the very thing you're trying to measure.
This is not to say that numeric measurement is not useful; quite the contrary. Statistics are an extremely useful tool for understanding your process. But using them as a performance target is where you draw the line, since that is when they begin to affect the quality itself. That was exactly the point of twpayne's post, and it's a nuanced and constructive argument.
The title this was submitted under is totally divorced from the content of the post.
The point of the post is: obsessing over line coverage is stupid.
The post says nothing about how much coverage to shoot for, or how to assess whether or not you're testing too much. It does not use the word 'antipattern'.
The title may not be a direct quote, but it's not an unfair heading for this comment. The comment clearly lines up with the sentiment of "anti-pattern", and it does speak to overall code coverage, not just specific line coverage. See:
> people writing completely useless tests [...] just to get 100% coverage
> To write 100% coverage tests, you tie yourself to implementation details that simply do not matter
> Test coverage is a false idol.
Even if you still don't think it lines up precisely with the intent of the comment, I think a much greater disconnect would be necessary before doing something (that I consider) as drastic as changing a submission's title.
Edit: Welp. I guess it's been decided. Personally, I find this new title fairly nonsensical (it's grammatically incorrect and taken from the post above the one that was actually submitted), weak, uninformative, and, well, just plain bad. This wasn't even a blog post; it was an excerpt from a conversation, so the "use the original title" rule doesn't apply. I have to say I strongly disagree with whoever changed it. For reference, the original title was "100% code coverage is an anti-pattern".
Second edit: The rate of upvotes (and presumably views) has noticeably slowed down for this interesting submission since the title change, despite it still being prominent on the front page.
I used to think 100% coverage was a good idea. But every test has a cost to write and maintain. The question is, is that cost worth it?
The answer depends both on how important the code is and how tricky it is. If the code is so simple it obviously can't be wrong, I won't test it -- unless it's also so critical that I need to be sure there are never regressions.
If it's slightly tricky but not very important, I'll probably, but not certainly, test it.
If it's critically important AND tricky, I'll test every edge case I can think of.
Totally agree. And when I'm not sure, I use randomized testing against a trivial but slow implementation, to make sure it also works for all the edge cases that I didn't think of:
https://github.com/openlayers/ol3/pull/418/files#L12R552
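For anyone who hasn't tried this, here's a minimal sketch of the idea in Python (the functions are hypothetical stand-ins, not from the linked PR): generate random inputs, run both the implementation under test and an obviously-correct brute-force oracle, and assert they agree.

```python
import random

def quicksort(xs):
    # the "fast" implementation under test (hypothetical stand-in)
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

def reference_sort(xs):
    # trivial but slow oracle: selection sort, simple enough to trust by inspection
    out = list(xs)
    for i in range(len(out)):
        j = min(range(i, len(out)), key=out.__getitem__)
        out[i], out[j] = out[j], out[i]
    return out

def test_against_reference(trials=1000):
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        assert quicksort(xs) == reference_sort(xs), "mismatch on %r" % xs
```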
Haskell's QuickCheck is two generations ahead here. Not only does QuickCheck check random cases, it's also clever enough to shrink a failing random case down to a concrete, minimal edge case.
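The same idea exists outside Haskell, too; Python's Hypothesis library, for example, also does the shrinking step. A small sketch with a deliberately buggy function (all names hypothetical):

```python
from hypothesis import given, strategies as st

def largest(xs):
    # deliberately buggy: ignores the first element
    return max(xs[1:]) if len(xs) > 1 else xs[0]

@given(st.lists(st.integers(), min_size=1))
def test_largest_matches_max(xs):
    assert largest(xs) == max(xs)

# On failure, Hypothesis doesn't just report the big messy random list
# it first stumbled on; it shrinks it and reports a minimal
# counterexample, typically something as small as xs=[1, 0].
```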
I think the key here is that a lot of folks just write bad tests, and they write code that is hard to test.
Part of the point of TDD, as I understand it (I'm fairly new to this whole world, so I'm happy to be debated on the topic), is that if writing your tests is painful, that pain should cause you to think about how you're writing your code and about the structure of what you've built.
So people who only have bad experiences with tests will often be aggressive about how much they loathe them and how overrated they are, because what they've experienced is brittle, slow, frustrating tests. People who (like it says in TFA) use dependency injection and loose coupling, who write tests that aren't tightly coupled to implementation details (their unit tests test only one unit of code, for example), and who make good use of mocking and stubbing have happier experiences with testing. For those folks, the more tests and the better the actual coverage (not just a number reported by an algorithm), the happier they are.
...and never the twain shall meet.
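To make the dependency-injection point concrete, here's a minimal sketch (class and method names are hypothetical): the collaborator is passed in rather than constructed internally, so the test can substitute a stub without reaching into implementation details.

```python
import unittest
from unittest import mock

class PriceService:
    def __init__(self, http_client):
        # the HTTP client is injected, not constructed in here,
        # so tests don't need to patch module internals
        self._http = http_client

    def price_with_tax(self, sku, rate=0.2):
        base = self._http.get_price(sku)
        return round(base * (1 + rate), 2)

class PriceServiceTest(unittest.TestCase):
    def test_applies_tax_to_fetched_price(self):
        stub = mock.Mock()
        stub.get_price.return_value = 10.00
        service = PriceService(http_client=stub)
        self.assertEqual(service.price_with_tax("sku-1"), 12.00)

if __name__ == "__main__":
    unittest.main()
```

Note that the test asserts the observable result, not how the service computed it; you can rewrite the internals and the test still passes.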
When I started out learning to code again a few years back, after a decade-or-two gap from my BASIC/FORTRAN days during my Physics degree, I hated testing. It was hard. It was a waste of time. It just frustrated me. It took me a long time to learn to write good tests, and to write good code that was, it turns out, much easier to test. This is an under-documented area, with nowhere near as many online resources as there are for just slinging code. But it's worth it. IMHO
Test coverage is near worthless as a metric. I'm trying to write programs with countless possible states spread across thousands of attributes, and to gain high confidence that my program performs correctly in every single one. Coverage says I should focus on just one of those attributes: the program counter. I don't know about you, but there's a lot more to my program than the address of the instruction currently executing. If I keep a list of the streets I've driven on, that might give me a list of new places to visit, but it's not going to tell me how well my car's working.
So, if not coverage, what should we use? I like mutation analysis. How do you know if your image recognition algorithm works? You run it on new images. How do you tell if your tests are catching bugs? You add bugs and see if the tests catch them. It's simpler than coverage in some ways -- you need no instrumentation.
And yet somehow, every test infrastructure can measure coverage, with mutation analysis nowhere to be found. We have a huge literature on testing (mutation analysis is over 40 (!) years old), and yet developers simply choose to ignore it.
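A toy version of the idea, just to make it concrete (real tools such as mutmut for Python or PIT for Java do this systematically): inject a bug by flipping an operator, re-run the tests, and see whether any of them notice.

```python
import ast

SOURCE = """
def total_price(unit, qty, shipping):
    return unit * qty + shipping
"""

class AddToSub(ast.NodeTransformer):
    # one crude mutation operator: turn every + into -
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

tree = AddToSub().visit(ast.parse(SOURCE))
ast.fix_missing_locations(tree)
namespace = {}
exec(compile(tree, "<mutant>", "exec"), namespace)
mutant = namespace["total_price"]

try:
    assert mutant(10, 2, 5) == 25   # this assertion kills the + -> - mutant
    print("mutant survived: the suite missed the injected bug")
except AssertionError:
    print("mutant killed: the suite caught the injected bug")

# Note: a test calling total_price(10, 2, 0) would execute 100% of the
# lines yet let this mutant survive -- coverage can't see the difference.
```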
One of the interesting bits of the discussion was the motivation schemes around code coverage. I agree with having testing and code coverage as an intrinsic motivator (continual improvement) and not an extrinsic motivator (a performance metric). @twpayne was worried about the latter usurping the original intention.
Code coverage through testing is a tool to enable code quality, but it is not THE way to measure code performance.
Code coverage is good at telling you how much of the code is covered by tests. If coverage is low, you have a problem. But if coverage is high, that tells you nothing about the quality of the code or the quality of the tests.
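To make that concrete, here's a hypothetical example of a test that drives line coverage to 100% while verifying nothing at all:

```python
def apply_discount(price, code):
    if code == "SAVE10":
        return price * 0.9
    return price

def test_apply_discount():
    # executes every line, so a coverage tool reports 100% --
    # but with no assertions, any wrong answer still passes
    apply_discount(100, "SAVE10")
    apply_discount(100, "OTHER")
```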
I will take 100% coverage with good tests over anything else any day of the week. That is the ideal. I've never seen this.
I've seen 100% coverage with good tests on specific parts of project. It's not unusual for me to see 100% coverage of an entire class or namespace. But I don't have any problem sleeping at night with less than that in other places.
SQLite is awesome, and a true model of just how good software can be. It's also designed by programmers for programmers and has a well-defined API that can be thoroughly tested. It's the perfect module, with an API (SQL) that has been refined over several decades. Less well defined stuff - like interacting with users - is harder to test.
It's about risk reduction. The risk of bugs and failure can never be zero. That doesn't mean you shouldn't try to improve as much as possible, and try to achieve and measure easily accountable, risk-reducing metrics like test coverage.
Any code that I haven't executed in test will dump core in production. No matter how innocuous it is, it will cause a problem.
I have learned this lesson after many years of having it repeatedly pounded into my head. :) Over, and over and over.
While 100% coverage is not sufficient, 0% is BAD. Yes, people will try to game the metric, but that's what code reviews are for.
I have also found that shooting for 100% has an additional benefit. It can show what code is truly unreachable. If it is unreachable, I can usually safely remove it. If I can't, I add a comment saying _why_ it can't be removed as well as why it can't be tested.
I remember the very first integration of code coverage into IntelliJ, a very long time ago: well before the split into community and paid editions (I remember because I later switched to the free/community edition, and it no longer had code coverage in it)...
Well, even back then it could report partial line coverage, so the 66%/100% example given in TFA is totally bogus.
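For anyone who hasn't seen partial line coverage: a one-line conditional is the classic case. A sketch (hypothetical names) of how plain line coverage and branch-aware coverage disagree:

```python
def parity(n):
    return "even" if n % 2 == 0 else "odd"   # one line, two branches

def test_parity():
    assert parity(2) == "even"
    # plain line coverage: 100% -- the return line executed.
    # branch-aware tools, like the IntelliJ runner described above,
    # mark that line as only partially covered: "odd" never ran.
```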