I would appreciate any resources you could point me towards to help make this argument against the Staff+ engineering leaders at my company who are pushing standards that say exactly the opposite.
This sounds a lot more like company politics than a technical issue, but I would probably start with Mitchell Hashimoto's talk "Advanced Testing With Go" - along with just reading the tests and testing tools in the stdlib itself. They didn't include httptest so that you could spend your time mocking away http.Client usage behind an interface!
(I should add that this is explicitly contra to e.g. sethammons's suggestion above, which seems to be relatively common in the part of the Go community that comes from PHP. I inherited a couple of large projects that did this. Today they use sqlite instead, and both the program and the test code are ~50% of the size they used to be.)
For us, stub injection points come naturally out of 12-factor-style application design; the program can already configure the address of the 2-3 other things it needs to talk to or files it needs to output, etc, just out of our need for manual testing or staging vs. production environments. If you have technical leadership encouraging Spring-but-in-Go, you'll probably hit a wall here too though.
It's also possible you're simply writing too many functions that can return errors. Over-complex code makes over-complex tests; always think about whether you're handling an error or a programming mistake - if the latter, panic instead of returning.
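A small sketch of the distinction, with invented functions: `Chunk` treats a bad argument as a programming mistake and panics, so neither its callers nor their tests need an error path; `ReadPort` deals with the outside world, so it returns an ordinary error.

```go
package main

import (
	"errors"
	"fmt"
)

// Chunk splits s into pieces of at most n elements. A caller passing
// n <= 0 is a programming mistake, not a runtime condition, so it
// panics rather than returning an error - callers never handle it.
func Chunk(s []int, n int) [][]int {
	if n <= 0 {
		panic(fmt.Sprintf("Chunk: n must be positive, got %d", n))
	}
	var out [][]int
	for len(s) > n {
		out = append(out, s[:n])
		s = s[n:]
	}
	return append(out, s)
}

// ReadPort, by contrast, depends on the environment: a missing value
// is an ordinary error the caller must handle and test for.
func ReadPort(env map[string]string) (string, error) {
	p, ok := env["PORT"]
	if !ok {
		return "", errors.New("PORT not set")
	}
	return p, nil
}

func main() {
	fmt.Println(Chunk([]int{1, 2, 3, 4, 5}, 2)) // [[1 2] [3 4] [5]]
	fmt.Println(ReadPort(map[string]string{"PORT": "8080"}))
}
```

Every function that returns an error multiplies the test cases of everything above it, so reserving errors for genuinely external failures keeps the test matrix small.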
Thanks for the suggestion. I watched the talk and found some new information, as well as confirmation of some things I had been starting to adopt. I don't find the stdlib very informative about my problem, since most stdlib packages are "leaf nodes" - not layers that call out to lower layers. I'll check out more of HashiCorp's tests, as I suspect their code might be more similar to the kind of code I work on. From a quick glance, in all of Consul I see only a handful of Mockery mocks, suggesting they are doing something very different.
You're giving a baby-level introduction to mocking in a thread about how this approach leads to low-quality, meaningless tests in some cases, and right beneath concrete suggestions about how to make tests better by deviating from this pattern.
I'm sorry if you've had bad experiences with this approach in the past, but it emphatically does not lead to low quality and/or meaningless tests. It's the essential foundation of well-abstracted and maintainable software.
Here's a test I wrote recently, in the style expected at my company. Tell me what exactly you think this is contributing to maintainability, or what you think could be done better. I spent an hour on this and found it pure drudgery. I half suspect I could have written a code generator for it in that hour instead. I had no idea whether the code really worked until I ran it against the real upstream.
The unit under test is a gateway to a configuration service.
It's hard to give solid advice based on a view of this single layer, but at a glance unless this gateway client is itself something to be extended by other projects, this is probably not something I would write test cases for per se. If "apipb" stands for protobuf, I definitely wouldn't inject a mock here but would make a real pb server listening with expectations and canned responses. (Our protobuf services have something like this available in the same package as the client, i.e. anyone using the client also has the tools to write the application-specific tests they need.)
The resulting code probably wouldn't be shorter, but it would exercise a lot more of the real code paths. The availability of a test server with expectation methods could also (IMO) improve readability. Instead of trying to model multiple variants of behavior via a single test case table, a suite setup plus methods (e.g. `s.ExpectStartTransaction(...); s.ExpectUpsert(...)`) would make for clearer test bodies. Check out sqlmock for what I think is a good example of a fluent expectation API in Go.
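To sketch the shape I mean (the `suite` type and its `Expect*` method names are invented, and the real thing would wrap an actual fake server rather than dispatch by string): each test body reads as a script of the upstream's expected behavior.

```go
package main

import "fmt"

// call is one expected upstream call and its canned reply.
type call struct {
	method string
	reply  string
}

// suite is a hypothetical test harness wrapping a fake config-service
// server. The Expect* methods queue canned responses in order.
type suite struct {
	queue []call
}

func (s *suite) ExpectStartTransaction(txID string) *suite {
	s.queue = append(s.queue, call{"StartTransaction", txID})
	return s
}

func (s *suite) ExpectUpsert(reply string) *suite {
	s.queue = append(s.queue, call{"Upsert", reply})
	return s
}

// Handle plays the role of the fake server: it verifies the next
// expected method and returns its canned reply.
func (s *suite) Handle(method string) (string, error) {
	if len(s.queue) == 0 || s.queue[0].method != method {
		return "", fmt.Errorf("unexpected call %q", method)
	}
	c := s.queue[0]
	s.queue = s.queue[1:]
	return c.reply, nil
}

func main() {
	s := &suite{}
	s.ExpectStartTransaction("tx-1").ExpectUpsert("ok")

	r1, _ := s.Handle("StartTransaction")
	r2, _ := s.Handle("Upsert")
	fmt.Println(r1, r2) // tx-1 ok
}
```

The fluent chaining makes each subtest's setup read top-to-bottom as the sequence of upstream interactions it expects, instead of a table row of opaque fields.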
When you want to exercise this code, you want to construct an instance with mock/deterministic dependencies, so that you have predictable results when you apply input and receive output. That's the model: give input, assert output.
But your linked code is kind of different! Each subtest varies not the input but the behavior of the mocked dependencies. I understand the point: you want to run through all the codepaths in the gateway method. But is that worth testing? Do the tests meaningfully reduce risk? I dunno. It's not obvious to me that they do.
The use of gomock is also a big smell. Generating mocks kind of defeats the purpose of using them. I would definitely write a bespoke client: