> While you may end up testing third-party code incidentally in the process of testing your first-party code, explicit tests directed at third-party code necessarily require testing implementation details, and testing implementation details is how you get constantly breaking tests every time you try to change something and developers giving up on testing in frustration.
Depends. I'm certainly not advocating testing implementation details - 3rd party or 1st party - that your codebase doesn't rely upon. That is brittle, as you say. But I do advocate for being willing to test anything you rely on - be that documented, hinted at, undocumented, or even explicitly warned against assuming, if for some horrible reason you have a terrible need to make such assumptions anyways.
1st party or 3rd party.
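To make that concrete, here's a minimal sketch of what such "pinning" tests might look like, assuming Python and pytest. The stdlib stands in for a third-party dependency purely for illustration; the pattern is the same for any library - each test records an upstream behavior the codebase actually relies on, documented or not.

```python
import json


def test_json_preserves_key_order():
    # Our (hypothetical) config-diffing code relies on json.loads
    # preserving key order - true since Python 3.7, where dicts are
    # insertion-ordered, but worth pinning because we depend on it.
    assert list(json.loads('{"b": 1, "a": 2}')) == ["b", "a"]


def test_split_preserves_empty_fields():
    # Our (hypothetical) parser assumes consecutive delimiters yield
    # empty fields rather than being collapsed.
    assert "a,,b".split(",") == ["a", "", "b"]
```

If an upgrade changes either behavior, these fail in CI instead of in production - which is the whole point.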
When so written, if the tests are brittle, the codebase is brittle, and the tests correctly identify that it needs to be fixed. Deleting or not writing the tests won't fix the problem - it'll merely shove the problem into production.
Amelioration might include carefully controlling and vetting updates, switching libraries, writing your own, rewriting your code to make fewer assumptions... or perhaps just yeeting the entire feature relying upon it out of your codebase, if you're desperate enough.
> The value proposition of testing is that it documents the user interface in a way that is independent of implementation
That is a value proposition, but not the only one. Others include simplifying debugging when you break things, when other people break things, and catching bugs before they go live instead of after (even if it'd be trivial to fix when someone belatedly notices). Fuzzing-generated regression tests are often unreadable garbage for the purposes of "documentation" - a cargo cult of redundancies and red herrings and voodoo that, once, caused a crash.
Another value proposition - and I do find value in this, in the form of bugs caught in my code, their code, and their documentation - is "documenting" my understanding of upstream documentation, for which upstream tests alone are useless. After all, those verified someone else's understanding, not mine. And it turns out this is important, because the documentation is outdated, the documentation lies, the documentation is insufficient, the documentation didn't consider that edge case, and the documentation foolishly presupposes common sense. Anyone telling you otherwise has a bridge to sell.
Even worse: the documentation may be technically correct... but misleading. Can't even blame the author - it made sense to them!
> allowing you to continue to iterate on the implementation while having assurances that, no matter what you do under the hood, the user experience does not deviate from what is documented.
Such iteration might include updating third party dependencies. I do this frequently. Poor test coverage means heisenbugs and fear around updates. Good test coverage might explicitly compare two different backends, or different versions of the same backend - 100% implementation details - and ensure the publicly visible behavior of my own APIs remains identical when switching between them. This means knowing which versions of which third party libraries to forbid or write workarounds for. Such tests should make no stupid assumptions, but should absolutely test for sane assumptions.
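As a sketch of what that can look like - hypothetical inlined backends standing in for real third-party libraries, pytest assumed:

```python
import pytest


# Two interchangeable "backends" for the same public operation. In real
# code these would be thin wrappers over two third-party libraries (or
# two versions of one); they're inlined stubs here to keep this runnable.
def backend_a(values):
    return sorted(values)


def backend_b(values):
    out = list(values)
    out.sort()
    return out


BACKENDS = [backend_a, backend_b]


@pytest.mark.parametrize("backend", BACKENDS)
def test_backend_meets_public_contract(backend):
    # Each backend must satisfy the same publicly documented behavior.
    assert backend([3, 1, 2]) == [1, 2, 3]


@pytest.mark.parametrize("values", [[], [1], [3, 1, 2], [2, 2, 1]])
def test_backends_agree(values):
    # Cross-check: every backend produces identical output for the same
    # input, so swapping them is invisible to callers of our API.
    results = [backend(values) for backend in BACKENDS]
    assert all(r == results[0] for r in results)
```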
Such iteration might include upgrading compilers. I do this frequently. We've had unit tests catch codegen bugs. This is good.
> Depends. I'm certainly not advocating testing implementation details
Then it is impossible to explicitly test a third-party dependency. As soon as you state an explicit relationship, the tests become tightly coupled to that dependency. If down the road you switch out that library for a different one, your tests will break as they explicitly reference that library. This is not where you want to be.
Well written tests will have no concern for what libraries you use. As before, they will only test the outside user interface. How that interface functions under the hood does not matter. As long as the user interface conforms to what you have documented, who cares what libraries you have used? The fact of the matter is that nobody cares how it is implemented; they only care about the result.
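For illustration, a sketch of such a black-box test - slugify() here is a hypothetical public function, and the test deliberately never names whatever library (if any) sits underneath it:

```python
import re


def slugify(title: str) -> str:
    # Stand-in implementation; it could be hand-rolled or delegate to
    # any third-party library without the test below changing.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def test_slugify_documented_behavior():
    # Documented contract: lowercase, runs of non-alphanumerics collapse
    # to single hyphens, no leading or trailing hyphens.
    assert slugify("Hello, World!") == "hello-world"
```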
> be that documented, hinted at, undocumented
Only the documented. Anything undocumented will never be considered. As stated in another comment, this is provable by taking it to the logical extreme and assuming a program is completely undocumented (no tests). When no tests run, nothing will be tested.
Only the interfaces that you have documented will be validated when you run your test suite, and they will only be validated for what you have documented as being true.
Yes, really.