> You can’t just give someone a simple command to call an endpoint—it requires additional tooling that isn’t standardized.
GRPC is a standard in all the ways that matter. It (or Thrift) is a breath of fresh air compared to doing it all by hand - write down your data types and function signatures, and get something that you can actually call like a function (clearly distinguished from an actual plain function - as it should be, since it behaves differently - but usable like one). Get on with your business logic instead of writing serialisation/deserialisation boilerplate. GraphQL is even better.
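A minimal sketch of that workflow in Go, assuming a hypothetical Greeter service and its generated stubs (the pb package, HelloRequest, and SayHello are illustrative, not from any real project). The ctx/err plumbing is exactly the "behaves differently" part: it reads like a function call, but it can time out or fail like the network request it is.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/hello/proto" // hypothetical generated package from your .proto
)

func main() {
	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Usable like a function, but clearly not an ordinary one.
	reply, err := pb.NewGreeterClient(conn).SayHello(ctx, &pb.HelloRequest{Name: "world"})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(reply.GetMessage())
}
```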
Letting clients introduce load into the system without understanding the big O impact of the SOA upstream is a foot gun. This does not scale and results in a massive waste of money on unnecessary CPU cycles on O(log n) FK joins and O(n^2) aggregators.
Precomputed data in the shape of the client's data access pattern is the way to go. Frontload your CPU cycles with CQRS. Running all your compute at runtime is a terrible experience for users (slow, uncacheable, and even slower when served from a distant origin) and creates total chaos for backend service scaling (who's going to use what resource next? Nobody knows!).
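A minimal sketch of the idea, with hypothetical names throughout: fold each write into a read model shaped exactly like the page that consumes it, so the query side is a flat key lookup instead of runtime joins and aggregation.

```go
package main

import "fmt"

// Write-side event (hypothetical).
type OrderPlaced struct {
	OrderID, CustomerID string
	TotalCents          int
}

// Read model: precomputed at write time, shaped for one dashboard page.
type CustomerDashboard struct {
	CustomerID    string
	OrderCount    int
	LifetimeCents int
}

// Stand-in for a real read store (a denormalized table, Redis, etc.).
var dashboards = map[string]*CustomerDashboard{}

// Command side: apply the write, then fold it into the read model.
func applyOrderPlaced(e OrderPlaced) {
	d := dashboards[e.CustomerID]
	if d == nil {
		d = &CustomerDashboard{CustomerID: e.CustomerID}
		dashboards[e.CustomerID] = d
	}
	d.OrderCount++
	d.LifetimeCents += e.TotalCents
}

// Query side: an O(1) lookup, trivially cacheable, no joins at request time.
func getDashboard(customerID string) *CustomerDashboard {
	return dashboards[customerID]
}

func main() {
	applyOrderPlaced(OrderPlaced{"o1", "c1", 1250})
	applyOrderPlaced(OrderPlaced{"o2", "c1", 800})
	fmt.Printf("%+v\n", *getDashboard("c1"))
}
```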
Any non-trivial REST API is also going to have responses which embed lists of related resources.
If your REST API doesn't have a mechanism for each request to specify which related resources get included, you'll also be wasting resources including related resources which some requesters don't even need!
If your REST API does have a mechanism for each request to specify which related resources get included (e.g. JSON:API's 'include' query param [0]), then you have the same problem as GraphQL, where it's not trivial to know the precise performance characteristics of every possible request.
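To make the cost asymmetry concrete, here's a sketch of an include-style endpoint (Go 1.22 net/http routing; the stores and field names are hypothetical): the same URL costs one lookup or an unbounded fan-out depending on the query string.

```go
package main

import (
	"encoding/json"
	"net/http"
	"strings"
)

// Hypothetical flat record standing in for real storage.
type Article struct{ ID, Title, AuthorID string }

func handleArticle(w http.ResponseWriter, r *http.Request) {
	a := Article{ID: r.PathValue("id"), Title: "hello", AuthorID: "u1"}
	resp := map[string]any{"data": a}
	// Each include changes the cost profile of the same endpoint:
	for _, inc := range strings.Split(r.URL.Query().Get("include"), ",") {
		switch inc {
		case "author":
			resp["author"] = lookupAuthor(a.AuthorID) // one extra point lookup
		case "comments":
			resp["comments"] = lookupComments(a.ID) // unbounded fan-out
		}
	}
	json.NewEncoder(w).Encode(resp)
}

// Stubs for illustration only.
func lookupAuthor(id string) any   { return map[string]string{"id": id} }
func lookupComments(id string) any { return []string{} }

func main() {
	http.HandleFunc("GET /articles/{id}", handleArticle)
	http.ListenAndServe(":8080", nil)
}
```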
Premature optimisation is the root of all evil. Yes, for the 20% of cases that are loading a lot of data and/or used a lot, you need to do CQRS and precalculate the thing you need. But for the other 80%, you'll spend more developer time on that than you'll ever make back in compute time savings (and you might not even save compute time if you're precomputing things that are rarely queried).
For a GUI, I've been very happy with grpcui-web [0]. It really highlights the strengths of gRPC: you get a full list of available operations (either from the server directly, if it exposes metadata, or by pointing at the .proto file if not), and since everything is strongly typed you get client-side field validation and custom controls, e.g. a date picker for timestamp types or a drop-down for enums. The experience is a lot better than copying and pasting from docs when trying out JSON-HTTP APIs.
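For the "server exposes metadata" path, enabling gRPC server reflection in Go is a single call, after which grpcui can discover everything without the .proto (a minimal sketch; the commented-out registration stands in for your generated code):

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	// pb.RegisterYourServiceServer(s, &yourServer{}) // hypothetical generated registration
	reflection.Register(s) // expose service descriptors so grpcui/grpcurl can browse them
	log.Fatal(s.Serve(lis))
}
```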
In general though I agree devex for gRPC is poor. I primarily work with the Python and Go APIs and they can be very frustrating. Basic operations like "turn pbtypes.Timestamp into a Python datetime or Go time.Time" are poorly documented and not obvious. proto3 removing `optional` was a flub, and then adding it back was an even bigger flub; I have a bunch of protos which rely on the `google.protobuf.Int64Value` wrapper types and which can never be changed (without a massive migration which I'm not doing). And even figuring out how to build the stuff consistently is a challenge! I had to build out a centralized protobuf build server that could use consistent versions of protoc plus the appropriate protoc-gen plugins. I think buf.build basically does this now, but they didn't exist then.
No need to be snarky; that API did not exist when I started using protobuf. The method was called `TimestampProto` which is not intuitive, especially given the poor documentation available. And it required error handling which is unergonomic. Given that they switched it to timestamppb.New, they must've agreed with me. https://github.com/golang/protobuf/blame/master/ptypes/times... <-- and you can still see the full code from this era on master because of the migration from `github.com/golang/protobuf` to `google.golang.org/protobuf`, which was a whole other exercise in terrible DX.
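For anyone hitting this today: with the current google.golang.org/protobuf API the round trip is a one-liner each way; the old ptypes calls are shown in comments for contrast.

```go
package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	ts := timestamppb.New(time.Now()) // time.Time -> *timestamppb.Timestamp
	t := ts.AsTime()                  // *timestamppb.Timestamp -> time.Time (in UTC)
	fmt.Println(t)

	// The deprecated github.com/golang/protobuf/ptypes equivalents forced
	// error handling on every conversion:
	//   ts, err := ptypes.TimestampProto(time.Now())
	//   t, err := ptypes.Timestamp(ts)
}
```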
> a query-oriented API, useless if you don't need flexible queries
Right, but the typical web service at the typical startup does need flexible queries. I feel people both overestimate its implications and underestimate its value.
- Standard "I need everything in the model" call
- Simplified "I need two properties" call, like id + display name for a dropdown
- I need everything + a few related fields, which may require elevated permissions
GraphQL makes all of that very easy to support, test, and monitor in a standard way (see the sketch below). You can build something similar with REST; it's just far more ergonomic and natural in GraphQL. And it's especially valuable as your startup grows and some of your services become "key" services used by a wider variety of use cases. It's not perfect or something everyone should use, sure, but I believe a _lot_ of startup developers would be more efficient and satisfied using GraphQL.
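Concretely, all three shapes in the list above are the same field with different selection sets - no new endpoints, no new handlers (the schema names here are made up):

```graphql
# Everything, for a detail view
{ user(id: "42") { id displayName email createdAt roles } }

# Two properties, for a dropdown
{ user(id: "42") { id displayName } }

# Everything plus a permission-gated relation
{ user(id: "42") { id displayName email auditLog { action occurredAt } } }
```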
GraphQL is fine until you have enough data to care about performance, at which point you have to go through and figure out where some insane SQL is coming from, which ultimately is some stitched together hodgepodge of various GraphQL query types, which maybe you can build some special indexes to support or maybe you have to adjust what's being queried. Either way, you patch that hole, and then a month later you have a new page that's failing to load because it's generating a query that is causing your DB CPU to jump to 90%.
I'm convinced at this point that GraphQL only works effectively at a small scale, where inefficient queries aren't disastrously slow/heavy, OR at a large enough scale where you can dedicate at least an entire team of engineers to constantly tackle performance issues, caching, etc.
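The failure mode usually looks something like this (hypothetical schema): a query that reads innocently but, depending on the server's resolver strategy, can fan out into 1 + N + N*M SELECTs, or collapse into one enormous generated join that no index quite covers.

```graphql
{
  projects(first: 50) {
    name
    tasks {            # potentially one query per project, unless batched
      title
      assignee {       # and one more lookup per task without a dataloader
        displayName
      }
    }
  }
}
```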
To me it also makes no sense at startups, which don't generally have such a high wall between frontend and backend engineering. I've seen it used at two startups, and both spent way more time on dealing with GraphQL BS than it would have taken to either ask another team to do query updates or just learn to write SQL. Indeed, at $CURRENT_JOB the engineering team for a product using GraphQL actively pushed for moving away from it and to server-side rendering with Svelte and normal knex-based SQL queries, despite the fact that none of them were backend engineers by trade. The GraphQL was just too difficult to reason about from a performance perspective.
> maybe you can build some special indexes to support or maybe you have to adjust what's being queried. Either way, you patch that hole, and then a month later you have a new page that's failing to load because it's generating a query that is causing your DB CPU to jump to 90%.
> To me it also makes no sense at startups, which don't generally have such a high wall between frontend and backend engineering.
Startups are where I've seen it work really well, because it's the same team doing it and you're always solving the same problem either way: this page needs this data, so we need to assemble this data (and/or adjust what we actually show on this page) out of the database we have, and add appropriate indices and/or computed pre-aggregations to make that work. Even if you make a dedicated backend endpoint to provide that data for that page, you've still got to solve that same problem. GraphQL just means less boilerplate and more time to focus on the actual business logic - half the time I forgot we were even using it.
Take the protobuf and generate a client… gRPC makes no assumptions about your topology, only that there's a server and a client, and it's up to you to fill in the logic. Or use grpcurl, or BloomRPC, or Kreya.
The client is the easy part if you just want to test calls.
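For the command-line case specifically, grpcurl against a reflection-enabled server is about as simple as it gets (the hello.Greeter service and payload here are hypothetical):

```sh
# List services, describe one, then invoke a method with a JSON body
grpcurl -plaintext localhost:50051 list
grpcurl -plaintext localhost:50051 describe hello.Greeter
grpcurl -plaintext -d '{"name": "world"}' localhost:50051 hello.Greeter/SayHello
```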
> It's in the name, a query-oriented API, useless if you don't need flexible queries.
It's actually still nice even if you don't use the flexibility. Throw up GraphiQL and you've got the testing tool you were worried about. (Sure, it's not a command line tool, but people don't expect that for e.g. SQL databases).