I'm sorry if you've had bad experiences with this approach in the past, but it emphatically does not lead to low-quality or meaningless tests. It's the essential foundation of well-abstracted and maintainable software.
Here's a test I wrote recently, in the style expected at my company. Tell me exactly what you think it contributes to maintainability, or what you think could be done better. I spent an hour on it and found it pure drudgery. I half suspect I could have written a code generator for it in that hour instead. And I had no idea whether the code really worked until I ran it against the real upstream.
The unit under test is a gateway to a configuration service.
It's hard to give solid advice from a view of this single layer, but at a glance: unless this gateway client is itself something to be extended by other projects, it's probably not something I would write test cases for per se. If "apipb" refers to generated protobuf code, I definitely wouldn't inject a mock here; I'd stand up a real pb server listening with expectations and canned responses. (Our protobuf services have something like this available in the same package as the client, i.e. anyone using the client also has the tools to write the application-specific tests they need.)
The resulting code probably wouldn't be shorter, but it would exercise a lot more of the real code paths. The availability of a test server with expectation methods could also (IMO) improve readability. Instead of trying to model multiple variants of behavior via a single test case table, a suite setup plus expectation methods (e.g. `s.ExpectStartTransaction(...); s.ExpectUpsert(...)`) would make for clearer test bodies. Check out sqlmock for what I think is a good example of a fluent expectation API in Go.
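To make the fluent-expectation idea concrete, here is a minimal hand-rolled sketch of that style. All names (`fakeConfigService`, `StartTransaction`, `Upsert`) are illustrative assumptions, not taken from the code under discussion, and a real version would record arguments and return canned responses, not just call names:

```go
package main

import "fmt"

// fakeConfigService is a hand-rolled test double with a fluent
// expectation API, in the spirit of sqlmock. It records expected
// calls up front, then records actual calls as the code under
// test runs, and can report any mismatch at the end.
type fakeConfigService struct {
	expectations []string // ordered call names we expect
	calls        []string // call names actually received
}

// ExpectStartTransaction and ExpectUpsert queue expectations and
// return the fake so expectations can be chained fluently.
func (f *fakeConfigService) ExpectStartTransaction() *fakeConfigService {
	f.expectations = append(f.expectations, "StartTransaction")
	return f
}

func (f *fakeConfigService) ExpectUpsert() *fakeConfigService {
	f.expectations = append(f.expectations, "Upsert")
	return f
}

// StartTransaction and Upsert are the methods the gateway under
// test would call; here they just record that they were called.
func (f *fakeConfigService) StartTransaction() error {
	f.calls = append(f.calls, "StartTransaction")
	return nil
}

func (f *fakeConfigService) Upsert() error {
	f.calls = append(f.calls, "Upsert")
	return nil
}

// ExpectationsMet reports whether the recorded calls match the
// queued expectations, in order.
func (f *fakeConfigService) ExpectationsMet() error {
	if len(f.calls) != len(f.expectations) {
		return fmt.Errorf("got %d calls, expected %d", len(f.calls), len(f.expectations))
	}
	for i, want := range f.expectations {
		if f.calls[i] != want {
			return fmt.Errorf("call %d: got %s, want %s", i, f.calls[i], want)
		}
	}
	return nil
}

func main() {
	f := &fakeConfigService{}
	f.ExpectStartTransaction().ExpectUpsert()

	// In a real test, the gateway under test would make these calls.
	f.StartTransaction()
	f.Upsert()

	fmt.Println(f.ExpectationsMet() == nil) // true
}
```

The test body stays readable because each expectation is one line of intent, instead of one row in a table trying to encode every variant of dependency behavior.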
When you want to exercise this code, you want to construct an instance with mock/deterministic dependencies, so that you have predictable results when you apply input and receive output. That's the model: give input, assert output.
But your linked code is kind of different! Each subtest varies not the input but the behavior of the mocked dependencies. I understand the point: you want to run through all the codepaths in the gateway method. But is that worth testing? Do the tests meaningfully reduce risk? I dunno. It's not obvious to me that they do.
The use of gomock is also a big smell. Generating mocks kind of defeats the purpose of using them. I would definitely write a bespoke client: