I used to think 100% coverage was a good idea. But every test has a cost to write and maintain. The question is, is that cost worth it?
The answer depends both on how important the code is and how tricky it is. If the code is so simple it obviously can't be wrong, I won't test it -- unless it's also so critical that I need to be sure there are never regressions.
If it's slightly tricky but not very important, I'll probably, but not certainly, test it.
If it's critically important AND tricky, I'll test every edge case I can think of.
Totally agree. And when I'm not sure, I use randomized testing against a trivial but slow implementation to make sure it also works for all the edge cases I didn't think of:
https://github.com/openlayers/ol3/pull/418/files#L12R552
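The idea generalizes beyond that PR. A minimal sketch in Python (the function names and the bisect example are my own illustration, not from the linked code): feed random inputs to both a fast implementation and an obviously-correct slow one, and assert they agree.

```python
import random
from bisect import bisect_left

def slow_index(xs, x):
    # Trivial but obviously correct: first position where x could be
    # inserted into sorted xs while keeping it sorted.
    for i, v in enumerate(xs):
        if v >= x:
            return i
    return len(xs)

def test_against_reference(trials=1000, seed=0):
    # Compare the clever implementation (bisect_left) against the
    # slow reference on many random inputs, including empty lists
    # and duplicate values.
    rng = random.Random(seed)
    for _ in range(trials):
        xs = sorted(rng.randint(-10, 10) for _ in range(rng.randint(0, 20)))
        x = rng.randint(-10, 10)
        assert bisect_left(xs, x) == slow_index(xs, x), (xs, x)

test_against_reference()
```

The slow version is so simple it obviously can't be wrong, which is exactly what makes it a trustworthy oracle.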
Haskell's QuickCheck is two generations ahead here. Not only does QuickCheck check random cases, it's also clever enough to shrink a failing random case down to a minimal concrete counterexample.
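To make the shrinking idea concrete, here's a toy sketch in Python (not QuickCheck itself, and the buggy `dedupe` is invented for illustration): when a random input violates a property, greedily drop elements while the property still fails, leaving a minimal counterexample.

```python
import random

def dedupe(xs):
    # Deliberately buggy "deduplicate": only drops *adjacent*
    # repeats, so non-adjacent duplicates like [1, 2, 1] survive.
    out = []
    for x in xs:
        if not out or out[-1] != x:
            out.append(x)
    return out

def holds(xs):
    # Property under test: dedupe should leave no duplicates at all.
    r = dedupe(xs)
    return len(r) == len(set(r))

def shrink(xs):
    # Mimic QuickCheck's shrinking step: repeatedly remove one
    # element at a time, keeping the removal only if the property
    # still fails, until no single removal preserves the failure.
    changed = True
    while changed:
        changed = False
        for i in range(len(xs)):
            smaller = xs[:i] + xs[i + 1:]
            if not holds(smaller):
                xs = smaller
                changed = True
                break
    return xs

def find_counterexample(trials=1000, seed=0):
    # Random phase: generate small random lists until one fails,
    # then shrink it to a minimal failing input.
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(0, 3) for _ in range(rng.randint(0, 10))]
        if not holds(xs):
            return shrink(xs)
    return None
```

For this property the shrinker always lands on a three-element list of the form `[a, b, a]` with `a != b`, which is far easier to debug than the messy random list that first exposed the bug.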