Understanding gRPC, OpenAPI and REST and when to use them in API design (2020) (cloud.google.com)
323 points by hui-zheng 13 days ago | 276 comments





If I could go back in time I would stop myself from ever learning about gRPC. I was so into the dream, but years later way too many headaches. Don’t do it to yourself.

Saying gRPC hides the internals is a joke. You’ll get internals all right, when you’re blasting debug logging trying to figure out what the f is causing 1 in 10 requests to fail, and fine-tuning 10-20 different poorly named timeout / retry settings.

Hours lost fighting with Maven plugins. Hours lost debugging weird deadline-exceeded errors. Hours lost with LBs that don’t like the esoteric HTTP/2. Firewall pain meaning we had to use the standard API anyway. Crappy docs. Hours lost trying to get error messages that don’t suck into observability.

I wish I’d never heard of it.


IMO the problem with gRPC isn't the protocol or the protobufs, but the terrible tooling - at least on the Java end. It generates shit code with awful developer ergonomics.

When you run the protobuf builder...

* The client stub is a concrete final class. It can't be mocked in tests.

* When implementing a server, you have to extend a concrete class (not an interface).

* The server method has an async method signature. Screws up AOP-oriented behavior like `@Transactional`

* No support for exceptions.

* Immutable value classes yes, but you have to construct them with builders.

The net result is that if you want to use gRPC in your SOA, you have to write a lot of plumbing to hide the gRPC noise and get clean, testable code.

There's no reason it has to be this way, but it is that way, and I don't want to write my own protobuf compiler.

Thrift's rpc compiler has many of the same problems, plus some others. Sigh.


> The client stub is a concrete final class. It can't be mocked in tests.

I believe this is deliberate: you are supposed to substitute a fake server. This is superior in theory, since you have much less scope to get error reporting wrong (errors actually go across a gRPC transport during the test).

Of course... at least with C++, there is no well-lit path for actually _doing_ that, which seems bonkers. In my case I had to write a bunch of undocumented boilerplate to make this happen.
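For what it's worth, in Python the fake-server pattern is just "start a real server on an ephemeral port and point the stub at it". A minimal sketch, assuming a hypothetical Foo service with generated foo_pb2 / foo_pb2_grpc modules (the message fields are made up too):

    from concurrent import futures

    import grpc
    import foo_pb2        # hypothetical generated modules
    import foo_pb2_grpc

    class FakeFoo(foo_pb2_grpc.FooServicer):
        def GetThing(self, request, context):
            return foo_pb2.Thing(id=request.id, name="canned")

    def make_stub():
        server = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
        foo_pb2_grpc.add_FooServicer_to_server(FakeFoo(), server)
        port = server.add_insecure_port("localhost:0")
        server.start()
        channel = grpc.insecure_channel(f"localhost:{port}")
        # Errors raised by FakeFoo travel over a real gRPC transport,
        # so status codes and details behave like production.
        return foo_pb2_grpc.FooStub(channel), server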

IIUC for Stubby (Google's internal precursor to gRPC) those kinda bizarre ergonomic issues are solved.


Stubby calls (at least in Java) just use something called a GenericServiceMocker which is akin to a more specialised mockito.

In my experience, only Swift has a generator that produces good-quality code. Ironically, it’s developed by Apple.

Any alternatives that take a similar philosophy but get the tooling right?

Depends what you mean by "similar philosophy". We (largeish household name though not thought of as a tech company) went through a pretty extensive review of the options late last year and standardized on this for our internal service<->service communication:

https://github.com/stickfigure/trivet

It's the dumbest RPC protocol you can imagine, less than 400 lines of code. You publish a vanilla Java interface in a jar; you annotate the implementation with `@Remote` and make sure it's in the spring context. Other than a tiny bit of setup, that's pretty much it.

The main downside is that it's based on Java serialization. For us this is fine, we already use serialization heavily and it's a known quantity for our team. Performance is "good enough". But you can't use this to expose public services or talk to nonjava services. For that we use plain old REST endpoints.

The main upsides are developer ergonomics, easy testability, spring metrics/spans pass through remote calls transparently, and exceptions (with complete stacktraces) propagate to clients (even through multiple layers of remote calls).

I wrote it some time ago. It's not for everyone. But when our team (well, the team making this decision for the company) looked at the proof-of-concepts, this is what everyone preferred.


Yes, it's good for internal use.

The caveat is when you need to go elsewhere. I still remember the pain of the Hadoop ecosystem having this kind of API.


Protobuf is an atrocious protocol. Whatever other problems gRPC has may be worse, but Protobuf doesn't make anything better that's for sure.

The reason to use it may be that you are required to by the side you cannot control, or it's the only thing you know. Otherwise it's a disaster. It's really upsetting that a lot of things used in this domain are the author's first attempt at making something of the sort. So many easily preventable disasters exist in this protocol for no reason.


Agree. As an example, this proto generates 584 lines of C++, links to 173k lines of dependencies, and produces a 21 KB object file, even before adding gRPC:

syntax = "proto3"; message LonLat { float lon = 1; float lat = 2; }

Looking through the generated headers, they are full of autogenerated slop with loads of dependencies, all to read a struct with 2 primitive fields. For a real monorepo, this adds up quickly.


This is because protobuf supports full run-time reflection and compact serialization (protobuf binary objects are not self-describing), and this requires a bit of infrastructure.

This is a large chunk of code, but it is a one-time tax. The incremental size from this particular message is insignificant.


Can you elaborate?

Some very obvious and easily avoidable problems (of the binary format):

* Messages are designed in such a way that only the size of each constituent is recorded; the size of the containing message isn't known, so a top-level message doesn't record its own size. This forces you to invent an extra bit of binary format when deciding how to delimit top-level messages, and different Protobuf implementations do it differently. So, if you have two clients independently implementing the same spec, it's possible that both will never be able to communicate with the same service. (This doesn't happen a lot in practice, because most developers generate clients with tools developed by the same team, so coincidentally they all get the same solution to the same problem; but alternative tools exist, and they actually differ in this respect. See the framing sketch after this list.)

* Messages were designed in such a way as to implement the "+" operator in C++. A completely worthless property, never used in practice... but this design choice made the authors require that repeated keys in messages be allowed and that the last key wins. This precludes SAX-like parsing of the payload, since no processing can take place before the entire payload is received.

* Protobuf is rife with other useless properties, added exclusively to support Google's use-cases. Various containers for primitive types to make them nullable. JSON conversion support (that doesn't work all the time because it relies on undocumented naming convention).

* Protobuf payload doesn't have a concept of version / identity. It's possible, and, in fact, happens quite a bit, that the wrong schema is applied to a payload and the operation "succeeds", but the resulting interpretation of the message is different from what was intended.

* The concept of default values, which is supposed to allow some values to be omitted from the wire, is another design flaw: it makes it easy to misinterpret the payload. Depending on how the reading language deals with the absence of values, the results of the parse will vary, sometimes leading to unintended consequences.

* It's not possible to write a memory-efficient encoder because it's hard / impractical sometimes to calculate the length of the message constituents, and so, the typical implementation is to encode the constituents in a "scratch" buffer, measure the outcome, and then copy from "scratch" to the "actual" buffer, which, on top of this, might require resizing / wasting memory for "padding". If, on the other hand, the implementation does try to calculate all the lengths necessary to calculate the final length of the top-level message, it will prevent it from encoding the message in a single pass (all components of the message will have to be examined at least twice).
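To make the first point concrete, here's a minimal framing sketch in plain Python, assuming a small LonLat message like the one elsewhere in this thread with a generated lonlat_pb2 module. The 4-byte length prefix is something you have to invent yourself; another implementation may well pick varint framing instead, and the two won't interoperate.

    import struct

    from lonlat_pb2 import LonLat  # hypothetical generated module

    def write_delimited(stream, msg):
        payload = msg.SerializeToString()
        # Protobuf itself doesn't frame top-level messages, so we prepend
        # our own 4-byte big-endian length.
        stream.write(struct.pack(">I", len(payload)) + payload)

    def read_delimited(stream, msg_class):
        (size,) = struct.unpack(">I", stream.read(4))
        msg = msg_class()
        msg.ParseFromString(stream.read(size))
        return msg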

----

Had the author of this creation tried to use it for a while, he'd have known about these problems and would have tried to fix them, I'm sure. What I think happened is that it was the author's first ever attempt at doing this, and he never looked back, switching to other tasks, while whoever picked up the task after him was too scared to fix the problems (I hear the author was a huge deal at Google, and so nobody would tell him how awful his creation was).


> Had the author of this creation tried to use it for a while,...

The problem is that proto v1 has existed for over 20 years internally at Google. And being able to be backwards compatible is extremely important.

Edit. Oh. You're an LLM


Your problems have more to do with some implementations than with the gRPC/protobuf specs themselves.

The modern .NET and C# experience with gRPC is so good that Microsoft has sunset its legacy RPC tech like WCF and gone all in on gRPC.


Agreed. The newest versions of .NET are now chef’s kiss and so damn fast.

I would really like it if the proto-to-C# compiler created nullable members. Has-er methods IMO give poor DX and are error-prone.

The biggest project I’ve used it with was in Java.

Validating the output of the bindings protoc generated was more verbose and error prone than hand serializing data would have been.

The wire protocol is not type safe. It has type tags, but they reuse the same tags for multiple datatypes.

Also, zig-zag integer encoding is slow.
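To illustrate both points in plain Python (hedged sketch, 64-bit values assumed): the wire-format tag is just field_number << 3 | wire_type, and int32, uint32, sint32, bool and enum all share wire type 0, so the bytes alone don't say which the writer meant; zig-zag then maps signed to unsigned so small negatives stay small on the wire.

    def tag(field_number, wire_type):
        return (field_number << 3) | wire_type

    # Field 1 of int32, sint32, uint32, bool or enum all produce tag byte 0x08.
    assert tag(1, 0) == 0x08

    def zigzag64(n):
        # 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ... (n fits in 64 bits)
        return (n << 1) ^ (n >> 63)

    assert [zigzag64(n) for n in (0, -1, 1, -2, 2)] == [0, 1, 2, 3, 4]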

Anyway, it’s a terrible RPC library. Flatbuffer is the only one that I’ve encountered that is worse.


What do you mean by validating the bindings? gRPC is type-safe; you don't have to think about that part anymore.

But as the article mentions, OpenAPI is also an RPC library with stub generation.

Manual parsing of the JSON is IMHO really old-school.

But it depends on your use case. That’s the whole point: it depends.


> The wire protocol is not type safe. It has type tags, but they reuse the same tags for multiple datatypes.

When is this ever an issue in practice? Why would the client read int32 but then all of a sudden decide to read uint32?


I guess backwards incompatible changes to the protocol? But yeah, don't do that if you're using protobuf; it's intentionally not robust to it.

Since you mention Maven I'm going to make the assumption that you are using Java. I haven't used Java in quite a while. The last 8 years or so I've been programming Go.

Your experience of gRPC seems to be very different from mine. How much of the difference in experience do you think might be down to Java and how much is down to gRPC as a technology?


It's not Java itself, it's design decisions on the tooling that Google provides for Java, mostly the protobuf-gen plugin.

At my company we found some workarounds for the issues brought up by the GP, but it's annoying that the tooling is a bit subpar.


Have you tried the buf.build tools? Especially the remote code generation and package generation may make life easier for you.

a couple of links

https://buf.build/protocolbuffers/java?version=v29.3 https://buf.build/docs/bsr/generated-sdks/maven


I've used gRPC with a Go+Dart stack for years and never experienced these issues. Is it something specific to Java+gRPC?

Go and Dart are probably the languages most likely to work well with gRPC, given their provenance.

Google has massive amounts of code written in Java so one would think the Java tooling would be excellent as well.

Doesn't Google mostly use Stubby internally, only bridging it with gRPC for certain public-facing services?

Google also uses a completely different protocol stack to actually send Stubby/Protobuf/gRPC around, including protocols on the wire and bypassing the kernel (according to open access papers about PonyExpress etc)

As someone who used it for years with the same problems he describes... spot-on analysis. The library does too much for you (e.g. reconnection handling), and handling even basic recovery is a bit of a nuisance for newbies. And yes, when you get random failures, good luck figuring out that maybe it's just a router in the middle of the path dropping packets because its HTTP/2 filtering is full of bugs.

I like a lot of things about it and used it extensively instead of the inferior REST alternative, but I recommend being aware of the limitations/nuisances. Not all issues can simply be solved by looking at Stack Overflow.


What would you recommend doing instead?

Web sockets would probably be easy.

Some web socket libraries support automatic fallback to polling if the infrastructure doesn’t support web sockets.


Do you need bidirectional streams? If so, you should write a bespoke protocol, on top of UDP, TCP or websockets.

If you don't, use GraphQL.


"Write a protocol and GraphQL", god damn it escalates quickly.

Fortunately, there are intermediate steps.


Any suggestions for a good RPC library?

I have had a really good experience with https://connectrpc.com/ so far. Buf is doing some interesting things in this space https://buf.build/docs/ecosystem/

I've used twitchtv/twirp with success. I like it because it's simple and doesn't reinvent itself over and over again.

What about single-directional streams? GraphQL streams aren't widely supported yet, are they? GraphQL also strikes me as a weird alternative to protobufs, as the latter works so hard for performance with binary payloads, while GraphQL is typically human-readable, bloaty text. And they aren't really queries; you can just choose to ignore parts of the return value of an RPC.

I've been building API's for a long time, using gRPC, and HTTP/REST (we'll not go into CORBA or DCOM, because I'll cry). To that end, I've open sourced a Go library for generating your clients and servers from OpenAPI specs (https://github.com/oapi-codegen/oapi-codegen).

I disagree with the way this article breaks down the options. There is no difference between OpenAPI and REST, it's a strange distinction. OpenAPI is a way of documenting the behavior of your HTTP API. You can express a RESTful API using OpenAPI, or something completely random, it's up to you. The purpose of OpenAPI is to have a schema language to describe your API for tooling to interpret, so in concept, it's similar to Protocol Buffer files that are used to specify gRPC protocols.

gRPC is an RPC mechanism for sending protos back and forth. When Google open sourced protobufs, they didn't open source the RPC layer, called "Stubby" at Google, which is what made protos really great. gRPC is not Stubby, and it's not as awesome, but it's still very efficient at transport, and fairly easy to extend and hook into. The problem is, it's a self-contained ecosystem that isn't as robust as mainstream HTTP libraries, which give you all kinds of useful middleware like logging or auth. You'll be implementing lots of these yourself with gRPC, particularly if you are making RPC calls across services implemented in different languages.
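For instance, logging middleware that most HTTP frameworks give you for free ends up as a hand-rolled interceptor per language. A minimal Python sketch (the interceptor API is the standard `grpc` package one; the rest of the server wiring is omitted):

    from concurrent import futures

    import grpc

    class LoggingInterceptor(grpc.ServerInterceptor):
        def intercept_service(self, continuation, handler_call_details):
            # handler_call_details.method is the full RPC name, e.g. "/pkg.Svc/Method"
            print("rpc:", handler_call_details.method)
            return continuation(handler_call_details)

    server = grpc.server(
        futures.ThreadPoolExecutor(max_workers=10),
        interceptors=[LoggingInterceptor()],
    )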

To me, the problem with gRPC is proto files. Every client must be built against .proto files compatible with the server; it's not a discoverable protocol. With an HTTP API, you can make calls to it via curl or your own code without having the OpenAPI description, so it's a "softer" binding. This fact alone makes it easier to work with and debug.


There is a distinction between (proper) REST and what this blog calls "OpenAPI". But the thing is, almost no one builds a true, proper REST API. In practice, everyone uses the OpenAPI approach.

The way that REST was defined by Roy Fielding in his 2000 Ph.D. dissertation ("Architectural Styles and the Design of Network-based Software Architectures"), it was supposed to allow a web-like exploration of all available resources. You would GET the root URL, and the 200 OK response would provide a set of links that would allow you to traverse all available resources provided by the API (it was allowed to be hierarchical, but everything had to be accessible somewhere in the link tree). This was supposed to allow discoverability.
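In other words, the server serves a root document of links and the client navigates from there. A minimal Flask-style sketch with hypothetical resources, just to show the shape:

    from flask import Flask

    app = Flask(__name__)

    @app.get("/")
    def root():
        # The client starts here and follows links instead of assembling
        # /resource_name/resource_id URLs from out-of-band knowledge.
        return {"links": {"orders": {"href": "/orders"},
                          "customers": {"href": "/customers"}}}

    @app.get("/orders")
    def orders():
        return {"items": [{"id": "42", "links": {"self": {"href": "/orders/42"}}}],
                "links": {"root": {"href": "/"}}}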

In practice, everywhere I've ever worked over the past two decades has just used POST resource_name/resource_id/sub_resource/sub_resource_id/mutation_type, or PUT resource_name/resource_id/sub_resource/sub_resource_id depending on how that company handled the idempotency issues that PUT creates, with all of those being magic URLs assembled by the client with knowledge of the structure (often defined in something like Swagger/OpenAPI), lacking the link traversal from root that was a hallmark of Fielding's original work.

Pedants (which let's face it, most of us are) will often describe what is done in practice as "RESTful" rather than "REST" just to acknowledge that they are not implementing Fielding's definition of REST.


I tend to prefer RESTish rather than RESTful since RESTful almost suggests attempting to implement Fielding's ideas but not quite getting there. I think the subset of approaches that try and fail to implement Fielding's ideas is an order of magnitude (or two) smaller than those who go for something that is superficially similar, but has nothing to do with HATEOAS :-).

REST is an interesting idea, but I don't think it is a practical one. It is too hard to design tools and libraries that help/encourage/force the user to implement HATEOAS sensibly, easily and consistently.


While it is amazing for initial discovery to have everything presented for the developer's inspection, in production it ends up requiring too many network round-trips to actually traverse from root to /resource_name/resource_id/sub_resource_name/sub_resource_id, or an already verbose transaction (everything is serialized and deserialized into strings!) becomes gigantic if you don't make it hierarchical and just drop every URL into the root response.

This is why everyone just builds magic URL endpoints, and hopefully also includes OpenAPI/Swagger documentation for them so the developer can figure it out. And then keeps the documentation up to date as they add new sub_resource endpoints!


If you are talking about REST here, expect an angry mob outside your door soon. URIs that have inherent structure and meaning? Burn the heretic! :-)

> Pedants (which let's face it, most of us are) will often describe what is done in practice as "RESTful" rather than "REST" just to acknowledge that they are not implementing Fielding's definition of REST.

Yes, exactly. I've never actually worked with any group that had actually implemented full REST. When working with teams on public interface definitions I've personally tended to use the so-called Richardson Maturity Model[0] and advocated for what it calls 'Level 2', which is what I think most of us find rather canonical and principle-of-least-surprise regarding a RESTful interface.

[0] - https://en.wikipedia.org/wiki/Richardson_Maturity_Model


> There is no difference between OpenAPI and REST, it's a strange distinction.

That threw me off too. What the article calls REST, I understand to be closer to HATEOAS.

> I've open sourced a Go library for generating your clients and servers from OpenAPI specs

As a maintainer of a couple pretty substantial APIs with internal and external clients, I'm really struggling to understand the workflow that starts with generating code from OpenAPI specs. Once you've filled in all those generated stubs, how can you then iterate on the API spec? The tooling will just give you more stubs that you have to manually merge in, and it'll get harder and harder to find the relevant updates as the API grows.

This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code. It's not perfect, but it's a 95% solution that works with both Echo and Gin. So when we need to stand up a new endpoint and allow the front end to start coding against it ASAP, the workflow looks like this:

1. In a feature branch, define the request and response structs, and write an empty handler that parses parameters and returns an empty response.

2. Generate the docs and send them to the front end dev.

Now, most devs never have to think about how to express their API in OpenAPI. And the docs will always be perfectly in sync with the code.


HATEOAS is just REST as originally envisioned but accepting that the REST name has come to be attached to something different.

> This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code

OpenAPI is a spec not documentation. Write the spec first then generate the code from the spec.

You are doing it backwards, at least in my opinion.


That's conceptually true, and yet if the hundreds of code generators don't support Your Favorite OAPI Feature™ then you're stuck, whereas the opposite is that unless your framework is braindead it's going to at least support some mapping from your host language down to the OAPI spec. I doubt very seriously that it's pretty, and my life experience is that it will definitely not be bright enough to have #/component reuse, but it's also probably closer to 30 seconds to run $(go generate something) than to launch an OAPI editor, and now you have a 2nd job.

I'd love an OAPI compliance badge (actually what I'm probably complaining about is the tooling's support for JSON Schema) so one could readily know which tools to avoid: the ones conceived in a hackathon that worked for that purpose but should be avoided for real work.


This comes down to your philosophical approach to API development.

If you design the API first, you can take the OpenAPI spec through code review, making the change explicit, forcing others to think about it. Breaking changes can be caught more easily. The presence of this spec allows for a lot of work to be automated, for example, request validation. In unit tests, I have automated response validation, to make sure my implementation conforms to the spec.

Iteration is quite simple, because you update your spec, which regenerates your models, but doesn't affect your implementation. It's then on you to update your implementation, which can't be automated without fancy AI.

When the spec changes follow the code changes, you have some new worries. If someone changes the schema of an API in the code and forgets to update the spec, what then? If you automate spec generation from code, what happens when you express something in code which doesn't map to something expressible in OpenAPI?

I've done both, and I've found that writing code spec-first, you end up constraining what you can do to what the spec can express, which allows you to use all kinds of off-the-shelf tooling to save you time. As a developer, my most precious resource is time, so I am willing to lose generality going with a spec-first approach to leverage the tooling.


In my part of the industry, a rite of passage is coming up with one's own homegrown data pipeline workflow manager/DAG execution engine.

In the OpenAPI world, the equivalent must be writing one's own OpenAPI spec generator that scans an annotated server codebase, probably bundled with a client codegen tool as well. I know I've written one (mine too was a proper abomination) and it sounds like so have a few others in this thread.


> In the OpenAPI world, the equivalent must be writing one's own OpenAPI spec generator

Close, it's writing custom client and server codegen that actually have working support for oneOf polymorphism and whatever other weird home-grown extensions there are.


> Once you've filled in all those generated stubs, how can you then iterate on the API spec? The tooling will just give you more stubs that you have to manually merge in, and it'll get harder and harder to find the relevant updates as the API grows.

This is why I have never used generators to generate the API clients, only the models. Consuming an HTTP-based API is just a single-line function nowadays in the web world, if you use e.g. React / TanStack Query or write some simple utilities. The generated clients are almost never good enough. That said, replacing the generator templates is an option in some of the generators; I've used the official OpenAPI generator for a while, which has many different generators, but I don't know if I'd recommend it because the generation is split between Java code and templates.


I'm scratching my head here. HATEOAS is the core of REST. Without it and the uniform interface principle, you're not doing REST. "REST" without it is charitably described as "RESTish", though I prefer the term "HTTP API". OpenAPI only exists because it turns out that developers have a very weak grasp on hypertext and indirection, but if you reframe things in a more familiar RPC-ish manner, they can understand it better as they can latch onto something they already understand: procedure calls. But it's not REST.

> This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code.

This is against "interface first" principle and couples clients of your API to its implementation.

That might be OK if the only consumer of the API is your own application as in that case API is really just an internal implementation detail. But even then - once you have to support multiple versions of your own client it becomes difficult not to break them.


I don't see why it couples clients to the implementation.

Effectively, there's no difference between writing the code first and updating the OpenAPI spec, and updating the spec first and then doing some sort of code gen to update the implementation. The end state of the world is the same.

In either case, modifications to the spec will be scrutinized to make sure there are no breaking changes.


Yeah this is the way, I mean if the spec already exists it makes sense to go spec-first. I went spec-first last time I built an API because I find most generators to be imperfect or lacking features; going spec-first ensured that the spec was correct at least, and the implementations could do the workarounds (e.g. type conversions in Go) where necessary.

That is, generate spec from code and your spec is limited to what can be expressed by the code, its annotations, and the support that the generator has. Most generators (to or from openapi) are imperfect and have to compromise on some features, which can lead to miscommunication between clients/servers.


OpenAPI spec being authored by a human or a machine, it can still be the same YAML at the end of the day, so why would one approach be more brittle / breaks your clients than the other?

The oapi-codegen tool the OP put out (which I use) solves this by emitting an interface, though. OpenAPI has the concept of operation names (which also have a standard pattern), so your generated code is simply implementing operation names. You can happily rewrite the entire spec and, provided the operation names are the same, everything will still map correctly, which solves the coupling problem.

These days there's gRPC reflection for discovery: https://grpc.io/docs/guides/reflection/

I'm piggybacking on the OpenAPI spec as well to generate a SQL-like query syntax along with generated types which makes working with any 3rd party API feel the same.

What if you could query any ole' API like this?:

  Apipe.new(GitHub) |> from("search/repositories") |> eq(:language, "elixir") |> order_by(:updated) |> limit(1) |> execute()
This way, you don't have to know about all the available gRPC functions or the 3rd party API's RESTful quirks, while retaining built-in documentation and having access to types.

https://github.com/cpursley/apipe

I'm considering building a TS adapter layer so that you can just drop this into your JS/TS project like you would with Supabase:

  const { data, error } = await apipe.from('search/repositories').eq('language', 'elixir').order_by('updated').limit(1)
Where this would run through the Elixir proxy which would do the heavy lifting like async, handle rate limits, etc.

> To me, the problem with gRPC is proto files. Every client must be built against .proto files compatible with the server; it's not a discoverable protocol.

That's not quite true. You can build an OpenAPI description based on the JSON serialization of Protobufs and serve it via Swagger. gRPC itself also offers built-in reflection (and a nice grpcurl utility that uses it!).


> https://github.com/oapi-codegen/oapi-codegen

I'm using it for a small personal project! Works very well. Thank you!


Just chiming in to say we use oapi-codegen everyday and it’s phenomenal.

Migrated away from Swaggo -> oapi during a large migration to be interface first for separating out large vertical slices and it’s been a godsend.


Buggy/incomplete OpenAPI codegen for Rust was a huge disappointment for me. At least with gRPC some languages are first-class citizens. Of course the generated code has some ugliness. Kinda sad that HTTP/2 traffic can be flaky due to bugs in network hardware.

As someone who has worked at a few of the FAANGs, having Thrift/gRPC is a godsend for internal service routing, but a lot of the complexity is managed by teams building the libraries, creating the service discovery layers, doing the routing etc. But using an RPC protocol enables those things to happen at a much greater scale and speed than you could ever achieve with your typical JSON/REST service. I've also never seen a REST API that didn't leak verbs. If I need to build a backend service mesh or wire two local services together via a networked stream, I will always reach for gRPC.

That said, I absolutely would not use grpc for anything customer or web facing. RPC is powerful because it locks you into a lot of decisions and gives you "the one way". REST is far superior when you have many different clients with different technology stacks trying to use your service.


For a public API I wouldn’t do this, but for private APIs we just do POST /api/doThingy with a JSON body, easy peasy RPC anyone can participate in with the most basic HTTP client. Works great on every OS and in every browser, no fucking around with “what goes in the URL path” vs “what goes in query params” vs “what goes in the body”.

You can even do this with gRPC if you’re using Buf or Connect - one of the server thingies that try not to suck; they will accept JSON via HTTP happily.


I'd argue just making everything POST is the correct way to do a public API too. REST tricks you into endpoints no one really wants, or you break it anyway to support the functionality needed. SOAP was heavy with its request/response, but it was absolutely correct that just sending everything as POST across the wire is easier to work with.

Some of the AWS APIs work this way too. See for example the Cloudwatch API: https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIRefer..., which is really JSON-RPC, not REST.

Yeah, I like doing this as well. And all the data goes in the request body. No query parameters.

Especially when the primary intended client is an SPA, where the URL shown is decoupled from the API URL.

Little bit of a memory jolt: I once built a (not for prod) backend in python as follows:

write a list of functions, one for each RPC, in a file `functions.py`

then write this generic function for flask:

  from typing import Any

  from flask import request

  import server.functions as functions

  # `server` is the Flask app object, defined elsewhere in the package.
  @server.post("/<method>")
  def api(method: str):
      data: Any = request.json if request.is_json else {}

      fn = lookup(functions, method)
      if fn is None:
          return {"error": "Method not found."}
      return fn(data)

And `lookup()` looks like:

  import inspect
  from types import ModuleType

  def lookup(module: ModuleType, method: str):
      md = module.__dict__
      mn = module.__name__
      fn = md.get(method)

      # Only return names that exist, are functions, and were defined in this
      # module rather than imported into it; using md.get() avoids a KeyError
      # when the method name isn't present at all.
      if fn is not None and inspect.isfunction(fn) and fn.__module__ == mn:
          return fn
      return None
So writing a new RPC is just writing a new function, and it all gets automatically wired up to `/api/function_name`. Quite nice.

The other nice feature there was automatic "docs" generation, from the python docstring of the function. You see, in python you can dynamically read the docstring of an object. So, I wrote this:

  def get_docs(module: ModuleType):
      md = module.__dict__
      mn = module.__name__
      docs = ""

      for name in md:
          if not inspect.isfunction(md[name]) or md[name].__module__ != mn:
              continue
          # Guard against functions without a docstring (__doc__ is None).
          docs += (md[name].__doc__ or "") + "\n<br>\n"

      return docs[:-6]  # strip the trailing "\n<br>\n"
This gives simple text documentation, which I served at an endpoint. Of course you could also write the docstring in OpenAPI YAML format and serve it that way too.

Quite cursed overall, but hey, its python.

One of the worst footguns here is that you could accidentally expose helper functions, so you have to be sure to not write those in the functions file :P


Use a decorator to expose functions explicitly; otherwise this sounds like a security issue waiting to happen. All your decorator needs to do is add the function to an __exposed__ set, then when you're looping over the dict, only expose keys whose values are in the __exposed__ set.
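Something like this (hedged sketch of the suggestion above; the example RPC is hypothetical):

    import inspect

    EXPOSED = set()

    def expose(fn):
        # Explicit opt-in: only decorated functions are callable remotely.
        EXPOSED.add(fn)
        return fn

    @expose
    def create_order(data):
        """Create an order (hypothetical example RPC)."""
        return {"ok": True}

    def lookup(module, method):
        fn = getattr(module, method, None)
        return fn if fn in EXPOSED and inspect.isfunction(fn) else None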

Good idea

Although I suppose using an approach like mine precludes any notion of a serious application.


This. The amount of time lost debating correct REST semantics for a use case is staggering.

Arguing the Right Way To Do REST was a favorite pastime amongst people at one of my previous jobs. Huge waste of time.

Yeah, and it matters in close to 0% of cases. Everyone reads the docs for everything anyway; any shared knowledge granting implicit meaning to things is very close to useless in practice with REST APIs.

What about non-web client/server applications though? I'm thinking online games / MMOs that require much more realtime communications than REST does. I have no idea what is used now, socket connections with something on the line I suppose.

For a game, I would maybe use protobuf and gRPC. There is serialization and deserialization required. Something like FlatBuffers or Cap'n Proto, where the wire format matches the language's data layout, makes for extremely efficient parsing (though it may not be as network-efficient). It really depends on how you structure your data.

> thrift/grpc is a godsend for internal service routing

Compared to what? What else did you try?


What do you mean by “leak verbs”?

Not OP, but https://cloud.google.com/blog/products/api-management/restfu...

The problem is that clients generally have a bunch of verbs they need to do. You have to design your objects and permissions just right such that clients can do all their verbs without an attacker being able to PATCH "payment_status" from "Requires Payment" to "Payment Confirmed".

RPC uses verbs, so that could just be the SubmitPayment RPC's job. In REST, the correct design would be to give permission to POST a "Payment" object and base "payment_status" on whether that has been done.
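A hedged sketch of that contrast in Flask terms (the domain helpers are hypothetical): the only write the client is allowed is creating a Payment resource, and payment_status is derived server-side rather than being PATCHable.

    from flask import Flask, request

    app = Flask(__name__)

    @app.post("/orders/<order_id>/payments")
    def submit_payment(order_id):
        # record_payment is a hypothetical domain helper; the status is decided
        # by the server, never written directly by the client.
        payment = record_payment(order_id, request.json)
        return {"id": payment.id, "status": payment.status}, 201

    @app.get("/orders/<order_id>")
    def get_order(order_id):
        order = load_order(order_id)  # hypothetical helper
        return {"id": order_id, "payment_status": order.payment_status}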


This is the most painful bit of REST for sure.

Unless you are doing bidirectional streaming (for which it seems pretty well suited, but I haven't used it, so it might be a fucking mess), grpc is usually a waste of time. Runtime transitive dependency hell, toolchain hell, and the teams inside Google that manage various implementations philosophically disagree on how basic features should work. Try exposing a grpc api to a team that doesn't use your language (particularly if they're using a language that isn't go, python or java, or is an old version of those.) Try exposing a grpc api to integrate with a cots product. Try exposing a grpc api to a browser. All will require a middleware layer.

I've used grpc at multiple companies and teams within these companies, all of them 100-500ish engineering team size, and never had these dependency and tool chain issues. It was smooth sailing with grpc.

I have now worked full-time at two companies of that size, making the dependency and toolchain problems not be a problem for all the normies.

In my opinion, you shouldn't expose it to a browser; that's not what it's good at. Build something custom that converts to JSON. Likewise, using REST to talk between backend services makes no sense: there's no point in a human-readable protocol/API, especially if there are performance requirements (not just a call every now and then with a small amount of data returned).

To be fair, it was intended to be for browsers. But it was designed alongside the HTTP/2 spec, before browsers added HTTP/2 support, and they didn't anticipate that browsers wouldn't end up following the spec. So now it only works where you can rely on a spec-compliant HTTP/2 implementation.

The article seems to be an advert for this, with its plug of that hosted gRPC<->JSON service.

> Try exposing a grpc api to a browser

I remember being grilled for not creating "jsony" interfaces:

  message Response {
    string id = 1;
    oneof sub {
      SubTypeOne sub_type_one = 2;
      SubTypeTwo sub_type_two = 3;
    }
  }

  message SubTypeOne { string field = 1; }

  message SubTypeTwo { }

In your current model you just don't have any fields in this subtype, but the response looked like this with our auto-translator: { "id": "id", "sub_type_two": { } }

Functionally, it works, and code written for this will keep working if new fields appear. However, returning empty objects to signify the type of response is strange in the web world. But when you write the protobuf you might not notice.


Bidirectional streaming is generally a bad idea for anything you’re going to want to run “at scale” for what it’s worth.

Why do you say that? I'm involved in the planning for bidi streaming for a product that supports over 200M monthly active users. I am genuinely curious what landmines we're about to step on.

bidi streaming screws with a whole bunch of assumptions you rely on in usual fault-tolerant software:

- there are multiple ways to retry - you can retry establishing the connection (e.g. say DNS resolution fails for a 30s window) _or_ you can retry establishing the stream

- your load-balancer needs to persist the stream to the backend; it can't just re-route per single HTTP request/response

- how long are your timeouts? if you don't receive a message for 1s, OK, the client can probably keep the stream open, but what if you don't receive a message for 30s? this percolates through the entire request path, generally in the form of "how do I detect when a service in the request path has failed"
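One concrete knob for that last point is gRPC keepalive pings, which let both sides detect a dead stream without application traffic. A hedged Python sketch (these channel arguments are standard gRPC C-core options; the target and values are just examples):

    import grpc

    # Client-side keepalive: ping every 30s, declare the connection dead if
    # the ping isn't acknowledged within 10s, even while a long-lived
    # bidirectional stream is otherwise idle.
    channel = grpc.insecure_channel(
        "streams.internal:50051",
        options=[
            ("grpc.keepalive_time_ms", 30_000),
            ("grpc.keepalive_timeout_ms", 10_000),
            ("grpc.keepalive_permit_without_calls", 1),
        ],
    )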


> - there are multiple ways to retry - you can retry establishing the connection (e.g. say DNS resolution fails for a 30s window) _or_ you can retry establishing the stream

This isn't a difficult problem to solve. We apply both of those strategies depending on circumstances. We can even re-connect clients to the same backend after long disconnection periods to support upload resuming etc.

> - your load-balancer needs to persist the stream to the backend; it can't just re-route per single HTTP request/response

This applies whether the stream is uni- or bi-directional. We already have uni-directional streams working well at scale, so this is not a concern.

> - how long are your timeouts? if you don't receive a message for 1s, OK, the client can probably keep the stream open, but what if you don't receive a message for 30s? this percolates through the entire request path, generally in the form of "how do I detect when a service in the request path has failed"

We maintain streams for very long periods. Hours or days. Clients can detect dropped streams (we propagate errors in both directions, although AWS ALBs are causing problems here) and the client knows how to re-establish a connection. And again this applies whether streams are uni- or bi-directional.


> - there are multiple ways to retry - you can retry establishing the connection (e.g. say DNS resolution fails for a 30s window) _or_ you can retry establishing the stream

That's not how protobuf works? If a connection fails, you simply get an IO error instead of the next message. There is no machinery in gRPC that re-establishes connections.

You do need to handle timeouts and blocked connections, but that's a generic issue for any protocol.


Not going to give you any proper advice but rather a question to have an answer for. It's not unsolvable or even difficult but needs an answer at scale.

How do you scale horizontally?

User A connects to server A. User A's connection drops. User A reconnects to your endpoint. Did you have anything stateful you had to remember? Did the load balancer need to remember to reconnect user A to server A? What happens if the server dropped; how do you reconnect the user?

Now if your streaming is server-to-server over gRPC on your own internal backend, then sure, build actors with message passing. You will probably need an orchestration layer (not k8s, that's for infra; you need an orchestrator for your services, probably written by you), for the same reason as above. What happens if Server A goes down, but instead of User A it was Server B connected to it? The orchestrator acts as your load balancer would have, but it just remembers who exists and who they need to speak to.


Nothing in Protobuf is suited for streaming. It's anti-streaming compared to almost any binary protocol you can imagine (unless you want to stream VHD, which would be a sad joke... for another time).

> Nothing in Protobuf is suited for streaming.

Uhh... Why? Protobuf supports streaming replies and requests. Do you mean that you need to know the message size in advance?


No, Protobuf doesn't support streaming.

Streaming means that it's possible to process the payload in small chunks, preferably of fixed size. Here are some examples of formats that can be considered streaming:

* IP protocol. Comes in uniformly sized chunks, payload doesn't have a concept of "headers". Doesn't even have to come in any particular order (which might be both a curse and a blessing for streaming).

* MP4 format. Comes in frames, not necessarily uniformly sized, but more-or-less uniform (the payload size will vary based on compression outcome, but will generally be within certain size). However, it has a concept of "headers", so must be streamed from a certain position onward. There's no way to jump into the middle and start streaming from there. If the "header" was lost, it's not possible to resume.

* Sun RPC, specifically the part that's used in NFS. Payload is wildly variable in size and function, but when it comes to transferring large files, it still can be streamed. Reordering is possible to a degree, but the client / server need to keep count of messages received, also are able to resume with minimal re-negotiation (not all data needs to be re-synced in order to resume).

Protobuf, in principle, cannot be processed unless the entire message has been received (because, by design, the keys in messages don't have to be unique, and the last one wins). Messages are hierarchical, so, there's no way to split them into fixed or near-fixed size chunks. Metadata must be communicated separately, ahead of time, otherwise sides have no idea what's being sent. So, it's not possible to resume reading the message if the preceding data was lost.

It's almost literally the collection of all things you don't want to have in a streaming format. It's like picking a strainer with the largest holes to make soup. Hard to think about a worse tool for the job.


Ah, you're an LLM.

Protobuf supports streaming just fine. Simply create a message type representing a small chunk of data and return a stream of them from a service method.
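For example (hedged sketch, assuming a hypothetical Blob service with `rpc Download(BlobRequest) returns (stream Chunk)` and its generated modules), the Python server side is just a generator:

    import blob_pb2        # hypothetical generated modules
    import blob_pb2_grpc

    class BlobServicer(blob_pb2_grpc.BlobServicer):
        def Download(self, request, context):
            # Each yielded Chunk is a small, self-contained protobuf message;
            # the framing between chunks is gRPC's, not protobuf's.
            with open(request.path, "rb") as f:
                while chunk := f.read(64 * 1024):
                    yield blob_pb2.Chunk(data=chunk)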


> Try exposing a grpc api to a team that doesn't use your language

Because of poor HTTP/2 support in those languages? Otherwise, it's not much more than just a run of the mill "Web API", albeit with some standardization around things like routing and headers instead of the randomly made up ones you will find in a bespoke "Look ma, I can send JSON with a web server" API. That standardization should only make implementing a client easier.

If HTTP/2 support is poor, then yeah, you will be in for a world of hurt. Which is also the browser problem with no major browser (and maybe no browser in existence) ever ending up supporting HTTP/2 in full.


My only work experience with gRPC was on a project where another senior dev pushed for it because we "needed the performance". We ended up creating a JSON API anyways. Mostly because that's what the frontend could consume. No one except for that developer had experience with gRPC. He didn't go any deeper than the gRPC Python Quick start guide and wouldn't help fix bugs.

The project was a mess for a hundred reasons and never got any sort of scale to justify gRPC.

That said, I've used gRPC in bits outside of work and I like it. It requires a lot more work and thought. That's mostly because I've worked on so many more JSON APIs.


That sounds more like a critique of the "senior" developer who didn't know gRPC isn't compatible with browsers before adopting it than of gRPC itself.

Correct, I wasn't critiquing gRPC. I was critiquing a type of person who might push for gRPC. That developer probably thought of it as a novelty and made up reasons to use it. It was a big hassle that added to that team's workload with no upside.

When all you have is a hammer…

gRPC is fantastic for its use case. Contract first services with built in auth. I can make a call to a service using an API that’s statically typed due to code generation and I don’t have to write it. That said, it’s not for browsers so Mr gRPC dev probably had no experience in browser technologies.

A company I worked for about 10 years ago was heavy on gRPC, but only as a service bridge that would call the REST handler (if you came in over REST, it would just invoke this handler anyway). Everything was great and DTOs (messages) were automatically generated! The downside was the serialization hit.


gRPC is indeed for backend service-to-service calls with a strong contract/model-first approach. It's important for a company in the serious API and SDK vending business.

yes who would imagine that the homegrown rpc of the internet and browser company would work on the internet and in a browser

very fair critique


I've been having fun with connectrpc https://connectrpc.com/

It fixes a lot of the problematic stuff with grpc and I'm excited for webtransport to finally be accepted by safari so connectrpc can develop better streaming.

I initially thought https://buf.build was overkill, but the killer feature was being able to import 3rd party proto files without having to download them individually:

    deps:
      - buf.build/landeed/protopatch
      - buf.build/googleapis/googleapis

The automatic SDK creation is also huge. I was going to grab a screenshot praising it auto-generating SDKs for ~9 languages, but it looks like they updated in the past day or two and now I count 16 languages, plus OpenAPI and some other new stuff.

Edit: I too was swayed by false promises of gRPC streaming. This document exactly mirrored my experiences https://connectrpc.com/docs/go/streaming/


> It fixes a lot of the problematic stuff with grpc and I'm excited for webtransport to finally be accepted by safari so connectrpc can develop better streaming.

We developed a small WebSocket-based wrapper for ConnectRPC streaming, just to make it work with ReactNative. But it also allows us to use bidirectional streaming in the browser.


Awesome! Could you share? I also use react native.

https://gist.github.com/Cyberax/3956c935a7971627e2ce8e2df3fa...

I'll do a proper write-up in a couple of days.


It still uses Protocol Buffers though, which is where many of the problems I have with gRPC come from.

The auto-generated SDKs are very useful here. An API customer doesn't have to learn protobuf or install any tooling. Plus they can fall back to JSON without any fuss. Connectrpc is much better at that than my envoy transcoder was.

If you're thinking from the API author's point of view, I might agree with you if there was a ubiquitous JSON annotation standard for marking optional/nullable values, but I am sick of working with APIs that document endpoints with a single JSON example and I don't want to inflict that on anyone else.


It doesn't use protocol buffers any more than gRPC does, which is to say it only uses them if you choose to use them. gRPC is payload agnostic by design. Use CSV if you'd rather. It's up to you.

You can also choose to use JSON instead. Works great with curl and browser dev tools.

Is there recent news on safari supporting webtransport?

Google somehow psyoped the entire industry into using gRPC for internal service communications. The devex of gRPC is considerably worse than REST.

You can’t just give someone a simple command to call an endpoint—it requires additional tooling that isn’t standardized. Plus, the generated client-side code is some of the ugliest gunk you’ll find in any language.


> The devex of gRPC is considerably worse than REST.

Hard disagree from the backend world.

From one protocol change you can statically determine which of your downstream consumers need to be updated and redeployed. That can turn weeks of work into an hour-long change.

You know that the messages you accept and emit are immediately validated. You can also store them cheaply for later rehydration.

You get incredibly readable API documentation with protos that isn't muddled with code and business logic.

You get baked in versioning and deprecation semantics.

You have support for richer data structures (caveat: except for maps).

In comparison, JSON feels bloated and dated. At least on the backend.


I also disagree, at Google everything is RPCs in a similar way to gRPC internally, and I barely need to think about the mechanics of them most of the time, whereas with REST/raw HTTP, you need to think about so much of the process – connection lifecycle, keepalive, error handling at more layers, connection pools, etc.

However, I used to work in a company that used HTTP internally, and moving to gRPC would have sucked. If you're the one adding gRPC to a new service, that's more of a pain than `import requests; requests.get(...)`. There is no quick and hacky solution for gRPC, you need a fully baked, well integrated solution, rolled out across everyone who will need it.


The flexibility of HTTP has advantages, too; it's simple to whip up a `curl` command to try things out. How does Google meet that need for gRPC APIs?

There's a curl for RPCs internally. It helps too that RPC servers are self describing, so you can actually list the services and methods exposed by a server. I'd say it's much simpler than curl, although again that's in large part because there's a lot of shared infra and understanding, and starting from scratch on that would be hard.

Server reflection exists (https://grpc.io/docs/guides/reflection/), but you don't really need to whip out curl when you have the RPC service's definition. It tells you everything you need to know about what to send and what you will receive, so you can just start writing type-safe code.

>you don't really need to whip out curl when you have the RPC service's definition

Following up a "how do I experiment with this in my workflow" with "oh you don't need to" is not the greatest look. There is a vast portion of programming bugs that stem from someone misunderstanding what a given API does, so the ability to quickly self-verify that one is doing things right is essential.


As the linked docs mention, grpcurl is a thing if you want to use it.

My perspective stems from working with it in backend services as well. The type safety and the declarative nature of protobufs are nice, but writing clients and servers isn’t.

The tooling is rough, and the documentation is sparse. Not saying REST doesn’t have its fair share of faults, but gRPC feels like a weird niche thing that’s hard to use for anything public-facing. No wonder none of the LLM vendors offer gRPC as an alternative to REST.


The benefits you mention stem from having a total view of all services and which protos they are using.

The same is achievable with a registry of OpenAPI documents. The only thing you need to ensure is that teams share schema definitions. This holds for gRPC as well: if teams create new types by just copying some of the fields they need, your analysis will be lost.


> You get incredibly readable API documentation with protos that isn't muddled with code and business logic.

I mean, ideally (hopefully) in the JSON case there's some class defined in code that they can document in the comments

If it's a shitty shop that's sometimes less likely. Nice thing about protos is that the schemas are somewhere


> You can’t just give someone a simple command to call an endpoint—it requires additional tooling that isn’t standardized.

gRPC is a standard in all the ways that matter. It (or Thrift) is a breath of fresh air compared to doing it all by hand: write down your data types and function signatures, and get something that you can actually call like a function (clearly separated from an actual function, as it should be, since it behaves differently, but usable like one). Get on with your business logic instead of writing serialisation/deserialisation boilerplate. GraphQL is even better.


> GraphQL is even better.

Letting clients introduce load into the system without understanding the big O impact of the SOA upstream is a foot gun. This does not scale and results in a massive waste of money on unnecessary CPU cycles on O(log n) FK joins and O(n^2) aggregators.

Precomputed data in the shape of the client's data access pattern is the way to go. Frontload your CPU cycles with CQRS. Running all your compute at runtime is a terrible experience for users (slow, uncachable, geo origin slow too) and creates total chaos for backend service scaling (Who's going to use what resource next? Nobody knows!).


Any non-trivial REST API is also going to have responses which embed lists of related resources.

If your REST API doesn't have a mechanism for each request to specify which related resources get included, you'll also be wasting resources including related resources that some requesters don't even need!

If your REST API does have a mechanism for each request to specify which related resources get included (e.g. JSON:API's 'include' query param [0]), then you have the same problem as GraphQL, where it's not trivial to know the precise performance characteristics of every possible request.

[0] https://jsonapi.org/format/#fetching-includes


Premature optimisation is the root of all evil. Yes, for the 20% of cases that are loading a lot of data and/or used a lot, you need to do CQRS and precalculate the thing you need. But for the other 80%, you'll spend more developer time on that than you'll ever make back in compute time savings (and you might not even save compute time if you're precomputing things that are rarely queried).

> GraphQL is even better

just a casual sentence at the end? How about no. It's in the name, a query-oriented API, useless if you don't need flexible queries.

Why don't you address the problem they talked about: what is the CLI tool I can use to test gRPC, and what about a GUI client?


For GUI, I've been very happy with grpcui-web[0]. It really highlights the strengths of GRPC: you get a full list of available operations (either from the server directly if it exposes metadata, or by pointing to the .proto file if not), since everything is strongly typed you get client-side field validation and custom controls e.g. a date picker for timestamp types or drop-down for enums. The experience is a lot better than copy & pasting from docs for trying out JSON-HTTP APIs.

In general though I agree devex for gRPC is poor. I primarily work with the Python and Go APIs and they can be very frustrating. Basic operations like "turn pbtypes.Timestamp into a Python datetime or Go time.Time" are poorly documented and not obvious. proto3 removing `optional` was a flub and then adding it back was an even bigger flub; I have a bunch of protos which rely on the `google.protobuf.Int64Value` wrapper types which can never be changed (without a massive migration which I'm not doing). And even figuring out how to build the stuff consistently is a challenge! I had to build out a centralized protobuf build server that could use consistent versions of protoc plus the appropriate proto-gen plugins. I think buf.build basically does this now but they didn't exist then.

[0] https://github.com/fullstorydev/grpcui
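For reference, the Timestamp conversion the parent mentions does exist on the Python well-known type; it's just easy to miss in the docs:

    from datetime import datetime, timezone

    from google.protobuf.timestamp_pb2 import Timestamp

    ts = Timestamp()
    ts.FromDatetime(datetime.now(timezone.utc))  # datetime -> Timestamp
    dt = ts.ToDatetime()                         # Timestamp -> naive UTC datetime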


timestamppb.New(time) is hard to figure out?

> timestamppb.New(time) is hard to figure out?

No need to be snarky; that API did not exist when I started using protobuf. The method was called `TimestampProto` which is not intuitive, especially given the poor documentation available. And it required error handling which is unergonomic. Given that they switched it to timestamppb.New, they must've agreed with me. https://github.com/golang/protobuf/blame/master/ptypes/times... <-- and you can still see the full code from this era on master because of the migration from `github.com/golang/protobuf` to `google.golang.org/protobuf`, which was a whole other exercise in terrible DX.
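For anyone hitting this today, the before/after looks roughly like this (a sketch from memory; double-check the import paths):

  // old: github.com/golang/protobuf/ptypes
  ts, err := ptypes.TimestampProto(time.Now()) // returned an error you had to handle

  // new: google.golang.org/protobuf/types/known/timestamppb
  ts := timestamppb.New(time.Now()) // no error return
  t := ts.AsTime()                  // and back to a time.Time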


grpcurl is what I use to inspect gRPC apis.

https://github.com/fullstorydev/grpcurl
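A typical session looks something like this (assuming the server has reflection enabled, otherwise point it at the .proto files with `-proto`; the service name below is made up):

  # list services and methods
  grpcurl -plaintext localhost:50051 list
  grpcurl -plaintext localhost:50051 describe my.pkg.UserService

  # invoke a method with a JSON request body
  grpcurl -plaintext -d '{"id": "123"}' localhost:50051 my.pkg.UserService/GetUser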


> a query-oriented API, useless if you don't need flexible queries

Right, but the typical web service at the typical startup does need flexible queries. I feel people both overestimate its implications and underestimate its value.

    - Standard "I need everything" in the model call
    - Simplified "I need two properties call", like id + display name for a dropdown
    - I need everything + a few related fields, which maybe require elevated permissions

GraphQL makes that very easy to support, test, and monitor in a very standard way. You can build something similar with REST, it's just very ergonomic and natural in GraphQL. And it's especially valuable as your startup grows, and some of your services become "Key" services used by a wider variety of use cases. It's not perfect or something everyone should use, sure, but I believe a _lot_ of startup developers would be more efficient and satisfied using GraphQL.
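To make that concrete, against a hypothetical schema those three shapes are just three queries over the same field, with no extra server-side endpoints:

  # everything the detail page needs
  { user(id: "123") { id displayName email avatarUrl } }

  # two properties for a dropdown
  { user(id: "123") { id displayName } }

  # everything plus a related, permission-gated collection
  { user(id: "123") { id displayName email orders { id total } } }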

GraphQL is fine until you have enough data to care about performance, at which point you have to go through and figure out where some insane SQL is coming from, which ultimately is some stitched together hodgepodge of various GraphQL query types, which maybe you can build some special indexes to support or maybe you have to adjust what's being queried. Either way, you patch that hole, and then a month later you have a new page that's failing to load because it's generating a query that is causing your DB CPU to jump to 90%.

I'm convinced at this point that GraphQL only works effectively at a small scale, where inefficient queries aren't disastrously slow/heavy, OR at a large enough scale where you can dedicate at least an entire team of engineers to constantly tackle performance issues, caching, etc.

To me it also makes no sense at startups, which don't generally have such a high wall between frontend and backend engineering. I've seen it used at two startups, and both spent way more time on dealing with GraphQL BS than it would have taken to either ask another team to do query updates or just learn to write SQL. Indeed, at $CURRENT_JOB the engineering team for a product using GraphQL actively pushed for moving away from it and to server-side rendering with Svelte and normal knex-based SQL queries, despite the fact that none of them were backend engineers by trade. The GraphQL was just too difficult to reason about from a performance perspective.


> maybe you can build some special indexes to support or maybe you have to adjust what's being queried. Either way, you patch that hole, and then a month later you have a new page that's failing to load because it's generating a query that is causing your DB CPU to jump to 90%.

> To me it also makes no sense at startups, which don't generally have such a high wall between frontend and backend engineering.

Startups are where I've seen it work really well, because it's the same team doing it and you're always solving the same problem either way: this page needs this data, so we need to assemble this data (and/or adjust what we actually show on this page) out of the database we have, and add appropriate indices and/or computed pre-aggregations to make that work. Even if you make a dedicated backend endpoint to provide that data for that page, you've still got to solve that same problem. GraphQL just means less boilerplate and more time to focus on the actual business logic - half the time I forgot we were even using it.


Take the protobuf and generate a client… gRPC makes no assumptions about your topology, only that there’s a server, there’s a client, and it’s up to you to fill in the logic. Or use grpcurl, or bloomrpc, or kreya.

The client is the easy part if you just want to test calls.


> It's in the name, a query-oriented API, useless if you don't need flexible queries.

It's actually still nice even if you don't use the flexibility. Throw up GraphiQL and you've got the testing tool you were worried about. (Sure, it's not a command line tool, but people don't expect that for e.g. SQL databases).


> what is the cli tool I can use to test grpc

Use https://connectrpc.com/ and then you can use curl, Postman, or any HTTP tool of your choosing that supports sending POST requests.
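e.g. a unary call is just a POST with a JSON body (sketch against a hypothetical service; Connect routes unary RPCs to /<package>.<Service>/<Method>):

  curl -X POST \
    -H "Content-Type: application/json" \
    -d '{"id": "123"}' \
    https://api.example.com/my.pkg.UserService/GetUser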


i agree, was forced to use it at several companies and it was 99% not needed tech debt investment garbage

even in go its a pain in the ass to have to regen and figure out versioning shared protos and it only gets worse w each additional language

but every startup thinks they need 100 microservices and grpc so whatever


> even in go its a pain in the ass to have to regen and figure out versioning shared protos and it only gets worse w each additional language

The secret is: don't worry about it. There is no need to regenerate your proto bindings for every change to the proto defs. Only do it when you need to access something new in your application (which only happens when you will be making changes to the application anyway). Don't try and automate it. That is, assuming you don't make breaking changes to your protos (or if you do, you do so under a differently named proto).
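e.g. adding a field under a fresh tag number is non-breaking; code generated before the field existed just never reads it (hypothetical message):

  message User {
    string id = 1;
    string name = 2;
    string avatar_url = 3; // added later; only regenerate once you actually need it
  }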


> If your API is a REST API, then your clients never have to understand the format of your URLs and those formats are not part of the API specification given to clients.

Roy Fielding, who coined the term REST:

"A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations."

https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...

I know it's a dead horse, but it's so funny: the "API specification" given to clients, in a truly RESTful system, should only be the initial entry point URI/URL.


This idea of self-describing REST is now better known as HATEOAS. Personally I think it’s bloated and doesn’t solve a real problem.

https://en.m.wikipedia.org/wiki/HATEOAS


HATEOAS is one sub-constraint of the uniform interface constraint of REST, see chapter 2 of my book:

https://hypermedia.systems/components-of-a-hypermedia-system...

It's an important aspect of a truly RESTful network architecture.


HATEOAS is fantastic when your clients are humans. Not so much when they're code.


How does one even write an API client against a REST API that only publishes the initial entry point? in particular, how should the client discover the resources that can be manipulated by the API or the request/response models?

The responses from prior requests give you URLs which form subsequent requests.

For example, if I,

  GET <account URL>
that might return the details of my account, which might include a list of links (URLs) to all subscriptions (or perhaps a URL to the entire collection) in the account.

(Obviously you have to get the account URL in this example somewhere too, and usually you just keep tugging on the objects in whatever data model you're working with and there are a few natural, easy top-level URLs that might end up in a directory of sorts, if there's >1.)
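e.g. the account representation might look something like this (made-up media type and fields), and the client only ever follows URLs it was handed, never builds them:

  {
    "name": "Example account",
    "links": {
      "self": "https://api.example.com/accounts/42",
      "subscriptions": "https://api.example.com/accounts/42/subscriptions"
    }
  }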

See ACME for an example; it's one of the few APIs I'd class as actually RESTful. https://datatracker.ietf.org/doc/html/rfc8555#section-7.1.1.

Needing a single URL is beautiful, IMO, both configuration-wise and easily lets one put in alternate implementations, mocks, etc., and you're not guessing at URLs which I've had to do a few times with non-RESTful HTTP APIs. (Most recently being Google Cloud's…)


> How does one even write an API client against a REST API that only publishes the initial entry point? in particular, how should the client discover the resources that can be manipulated by the API or the request/response models?

HAL[0] is very useful for this requirement IMHO. That in conjunction with defining contracts via RAML[1] I have found to be highly effective.

0 - https://datatracker.ietf.org/doc/html/draft-kelly-json-hal

1 - https://github.com/raml-org/raml-spec/blob/master/versions/r...


Look up HATEOAS. The initial endpoint will give you the next set of resources - maybe the user list and then the post list. Then as you navigate to, say, the post list, it will have embedded pagination links. Once you have resource URLs from this list you can POST/PUT/DELETE as usual.

your browser is a client that works against RESTful entry points that only publish an initial entry point, such as https://news.ycombinator.com

from that point forward the client discovers resources (articles, etc) that can be manipulated (e.g. comments posted and updated) via hypermedia responses from the server


The browser is also driven by an advanced wetware AI system that knows which links to click on and how to interpret the results.


Your Web browser is probably the best example. When you visit a Web site, your browser discovers resources and understands how it can interact with them.

It certainly does not. Sure it can crawl links, but the browser doesn't understand the meaning of the pages, nor can it intelligently fill out forms. It is the user that can hopefully divine how to interact with the pages you serve their browser.

Most APIs, however, are intended to be consumed by another service, not by a human manually interpreting the responses and picking the next action from a set of action links. HATEOAS is mostly pointless.


> the "API specification" given to clients, in a truly RESTful system, should only be the initial entry point URI/URL

I don't know that I fully agree? The configuration, perhaps, but I think the API specification will be far more than just a URL. It'll need to detail whatever media types the system the API is for uses. (I.e., you'll need to spend a lot of words on the HTTP request/response bodies, essentially.)

From your link:

> A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state

That. I.e., you're not just returning `application/json` to your application, you're returning `<something specific>+json`. (Unless you truly are working with JSON generically, but I don't think most are; the JSON is holding business specific data that the application needs to understand & work with.)

That is, "and [the] set of standardized media types that are appropriate for the intended audience" is also crucial.

(And I think this point gets lost in the popular discourse: it focuses on that initial entry URL, but the "describe the media types", as Fielding says, should be the bulk of the work — sort of the "rest of the owl" of the spec. There's a lot of work there, and I think sometimes people hearing "all you need is one URL" are right to wonder "but where's the rest of the specification?")


that <something specific> should not be API specific, otherwise you are just smuggling an API specification into a second-order aspect of your system and violating the uniform interface.

You both agree: when he writes ‘format of your URLs,’ he literally means the format of the URLs, not the format of the resources. Like you, I clicked on the article expecting yet another blogger who doesn’t understand REST but it appears this author has at least some basic knowledge of the concepts. Good for him!

I like gRPC too, and honestly for a commercial project it is pretty compelling. But for a personal or idealistic project I think that REST is preferable.


Classic case of a good idea going viral, followed by people misunderstanding the idea but continuing to spread it anyway.

I think the original REST is only suitable for "file" resources, so there's WebDAV and nobody bothers to use it these days.

I dislike the use of gRPC within the data center. People reach for it citing performance, but gRPC is not high performance, and the quality of the available open source clients is very poor, particularly outside of the core C++/Java implementations (the nodejs implementation, for instance). I am not against the use of protobuf as an API spec, but it should be possible to use it with a framing protocol over TCP; there just isn't a clear dominant choice for that way of doing RPC.

When it comes to web-based APIs I am more in favour of readable payloads, but there are issues here: we tend to use JSON, but its type specificity is loose, which leads to interop problems between backend languages, particularly in nodejs where JSON.parse is used to implement a schema mapping. To do this properly, encoders and decoders need to be generated explicitly from schemas, which somewhat diminishes the use of JSON within the context of JS.

In what situation is performance enough of a concern that you would consider gRPC but not enough of a concern that you would let nodeJS anywhere near your stack?

No one is picking Nodejs for high performance, but when it is chosen for other reasons it's still expected to perform well. The Nodejs gRPC library performs poorly relative to the overall performance characteristics of Nodejs, and this is a problem because most of the work performed by typical Nodejs services is API-related work (sending data, encoding and decoding payloads, managing sockets etc). That's not even touching on the bugs in the http2 implementation in node core or the grpc library itself, but much of the selling point of gRPC is supposedly the language interop, and this seems like false advertising to me.

I would imagine the reason is really that Google internally doesn't allow NodeJS in production, so the tooling for gRPC for NodeJS does not benefit from the same level of scrutiny as languages Google uses internally.

I agree, though Zod greatly helps with the JS schema issue. I’m keeping an eye on Microsoft’s TypeSpec project too: typespec.io for interoperable schema generation.

The main benefit of protos is interop between various languages. If you are a single language tech stack, it matters less.

Also, if you use languages outside of Google's primary languages, you're likely not going to get as good of an experience.


There was a talk in 2023 about a non-TCP-based protocol, Homa, for RPC in the data center use case: https://youtu.be/xQQT8YUvWg8?si=g3u5TogBe0_QpPpj.

always felt like grpc was unnecessarily inaccessible to the rest of us outside google land. the grpc js client is unnecessarily heavy and kinda opaque. good idea but poorly executed, at least compared to the "simplicity" of REST that people are familiar with

The frontend / backend split is where you have the REST and JSON camps fighting with the RPC / protobuf / gRPC factions.

RPCs have more maintainable semantics than REST by virtue of not trying to shoehorn your data model (cardinality, relationships, etc.) into a one-size-fits-all prescriptive pattern. Very few entities ever organically evolve to fit cleanly within RESTful semantics unless you design everything upfront with perfect foresight. In a world of rapidly evolving APIs, you're never going to hit upon beautiful RESTful entities. In bigger teams with changing requirements and ownership, it's better to design around services.

The frontend folks don't maintain your backend systems. They want easy to reason about APIs, and so they want entities they can abstract into REST. They're the ultimate beneficiaries of such designs.

The effort required for REST has a place in companies that sell APIs and where third party developers are your primary customers.

Protobufs and binary wire encodings are easier for backend development. You can define your API and share it across services in a statically typed way, and your services spend less time encoding and decoding messages. JSON isn't semantic or typed, and it requires a lot of overhead.

The frontend folks natively deal with text and JSON. They don't want to download protobuf definitions or handle binary data as second class citizens. It doesn't work as cleanly with their tools, and JSON is perfectly elegant for them.

gRPC includes excellent routing, retry, side channel, streaming, and protocol deprecation semantics. None of this is ever apparent to the frontend. It's all for backend consumers.

This is 100% a frontend / backend tooling divide. There's an interface and ergonomic mismatch.


Protobufs vs. JSON are orthogonal to REST vs. RPC: you can have REST where the representations are protobufs or JSON objects; you can have RPC where the requests and responses are protobufs or JSON objects.

yes!

REST is kind of like HTML... source available by default, human-readable, easy to inspect

GRPC is for machines efficiently talking to other machines... slightly inconvenient for any human in the loop (whether that's coding or inspecting requests and responses)

The different affordances make sense given the contexts and goals they were developed in, even if they are functionally very similar.


The official grpc JavaScript implementation is sort of bad. The one by buf.build is good from what I've seen.

https://buf.build/blog/protobuf-es-the-protocol-buffers-type...


GRPC is a nice idea weighed down by the fact that it is full of solutions to Google-type problems I don't have. It seems like a lot of things have chosen it because a "binary"-like RPC protocol with a contract is a nice thing to have, but the further away from GoLang you get the worse it is.

There are uses where gRPC shines. Streaming is one of them - you can transparently send a stream of messages in one "connection". For simple CRUD service, REST is more than enough indeed.

afaik grpc did callbacks before we got sse/ws/webrtc/webtransport. so grpc was needed kind of.

and also canonical content streaming was in grpc; in http there was no commonly accepted solution back then.


Your memory appears to be incorrect.

SSE was first built into a web browser back in 2006. By 2011, it was supported in all major browsers except IE. SSE is really just an enhanced, more efficient version of long polling, which I believe was possible much earlier.

Websocket support was added by all major browsers (including IE) between 2010 and 2012.

gRPC wasn't open source until 2015.


I'm old enough to have worked with ASN.1 and its various proprietary “improvements”, as well as SOAP/WSDL, and compared to that, working with protobuf/stubby (internal Google predecessor to gRPC) was the best thing since sliced bread

Even in 2025 grpc is still awful for streaming to browsers. I was doing Browser streaming via a variety of different methods back in 2006, and it wasn't like we were the only ones doing it back then.

You should check out https://connectrpc.com/ It's based on grpc but works a lot better with web tooling

How could gRPC be simpler without sacrificing performance?

There are two parts to gRPC's performance:

- 1. a multiplexing protocol implemented on top of HTTP/2

- 2. a serialization format via protobuf

For most companies, neither 1 nor 2 is needed, but the side effect of 2 (of having a structured schema) is good enough. This was the idea behind twirp - https://github.com/twitchtv/twirp - not sure whether this is still actively used / maintained, but it's protobuf as JSON over HTTP.
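The nice side effect is that the endpoints are plain POSTs, so you can poke them with curl (hypothetical service; IIRC the default route prefix is /twirp):

  curl -X POST \
    -H "Content-Type: application/json" \
    -d '{"user_id": "123"}' \
    http://localhost:8080/twirp/example.v1.UserService/GetUser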


What kind of performance? Read? Write? Bandwidth?

grpc "urls" and data are binary.

binary with schema separation.

3x smaller payload.


there are well-working (official) generators of OpenAPI/JSON schemas for gRPC.

so once you've written gRPC, you get OpenAPI RPC for free.


I like this article format. Here, let me try. In my opinion, there are three significant and distinct formats for serializing data:

  - JSON
  - .NET Binary Format for XML (NBFX)
  - JSON Schema
JSON: The least-commonly used format is JSON—only a small minority use it, even though the word JSON is used (or abused) more broadly. A signature characteristic of JSON is that the consumer of JSON can never know anything about the data model.

NBFX: A second serialization model is NBFX. The great thing about NBFX is that nobody has to worry about parsing XML text—they just have to learn NBFX.

JSON Schema: Probably the most popular way to serialize data is to use something like JSON Schema. A consumer of JSON Schema just reads the schema, and then uses JSON to read the data. It should be obvious that this is the total opposite of JSON, because again, in JSON it's illegal to know the format ahead of time.


This is great. I feel like this speaks to the strangeness of how this article was written perfectly.

Everyone is hating on gRPC in this thread, but I thought I'd chime in as to where it shines. Because of the generated message definition stubs (which require additional tooling), clients almost never send malformed requests and the servers send a well understood response.

This makes stable APIs so much easier to integrate with.


> Because of the generated message definition stubs (which require additional tooling), clients almost never send malformed requests and the servers send a well understood response.

Sure. Until you need some fields to be optional.

> This makes stable APIs so much easier to integrate with.

Only on your first iteration. After a year or two of iterating you're back to JSON, checking if fields exist, and re-validating your data. Also there's a half dozen bugs that you can't reproduce and you don't know why they happen, so you just work around them with retries.


There’s also a gaping security hole in its design.

They don’t have sane support for protocol versioning or required fields, so every field of every type ends up being optional in practice.

So, if a message has N fields, there are 2^N combinations of fields that the generated stubs will accept and pass to you, and its up to business logic to decide which combinations are valid.

It’s actually worse than that, since the other side of the connection could be too new for you to understand. In that case, the bindings just silently accept messages with unknown fields, and it’s up to you to decide how to handle them.

All of this means that, in practice, the endpoints and clients will accumulate validation bugs over time. At that point maliciously crafted messages can bypass validation checks, and exploit unexpected behavior of code that assumes validated messages are well-formed.

I’ve never met a gRPC proponent that understands these issues, and all the gRPC applications I’ve worked with has had these problems.


I have yet to see a good way to do backward compatibility in anything. The only thing I've found that really works is sometimes you can add an argument with a default value. Removing an argument only works if everyone is using the same value of it anyway - otherwise they are expecting the behavior that other value causes and so you can't remove it.

Thus all arguments should be required in my opinion. If you make a change, add a whole new function with the new arguments. If allowed, the new function can have the same name (whether overloading should be done this way is somewhat controversial - I'm coming out in favor, but the arguments against do make good points which may be compelling to you). That way the complexity is managed, since there is only a limited subset of the combinatorial explosion possible.


> every field of every type ends up being optional in practice.

This also means that you can't write a client without loads of branches, harming performance.

I find it odd that grpc had a reputation for high performance. It's at best good performance given a bunch of assumptions about how schemas will be maintained and evolved.


Hence, the qualification of stable API. You can mark fields as unused and fields as optional (recently):

https://stackoverflow.com/a/62566052

When your API changes that dramatically, you should use a new message definition on the client and server and deprecate the old RPC.
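A sketch of what that looks like in proto3 (field names are made up):

  message Order {
    reserved 4;                // retired tag for the old "coupon_code" field
    reserved "coupon_code";    // and its name, so neither can be reused
    optional string note = 5;  // proto3 `optional`: field presence is tracked again
  }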


> After a year or two of iterating you're back to JSON, checking if fields exist, and re-validating your data.

Every time this has happened to me, it's because of one-sided contract negotiation and dealing with teams where their incentives are not aligned

i.e. they can send whatever shit they want, and we have to interpret it and make it work


unless you want to be locked into a cursed ecosystem where you spend all your time reimplementing libraries that have existed for decades in rest land, fighting code generators that produce hideous classes that will randomly break compatibility, and debugging random edge-casey things in your hosting stack bc nobody truly supports h2, steer clear of grpc

'rest' isn't anything (complementary)


> The least-commonly used API model is REST—only a small minority of APIs are designed this way

brother.


Technically they're right, though; the textbook definition of REST is rare to nonexistent in my experience. What people do instead is create JSON-RPCs-over-HTTP APIs, sometimes following a REST-like URL scheme, and sometimes using different HTTP verbs on the same URL to perform different actions as one would in REST... but the API isn't really REST. The creator of REST has gone on the record multiple times about how you shouldn't call these APIs REST[0].

But in practice when most people say REST they just mean "JSON RPC over HTTP". I avoid calling things REST now and just use "JSON HTTP API" to avoid the "well, actually..." responses. (and yes, these APIs are by far the most common.)

[0] https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...


Real REST is a very, very small minority.

Fake REST (i.e., JSON RPC) is really ridiculously common.


I’ve never liked the no true scotsman nature of REST (which is exacerbated by the fact that its canonical “specification” is a broad PhD dissertation with a lot of other concepts thrown in), so I have adopted a fairly lax definition: if your URLs are subjects and you use HTTP verbs for the verbs, I feel like it qualifies.

Language is a means of communication, and we have to have some sort of agreement on terms. REST had an original meaning; that is a useful thing to be able to discuss. JSON-RPC is also a useful thing to discuss. But the two things are different. It’s confusing to use the one word or phrase to mean two different things (like ‘inflammable’!).

Granted, language is to some extent defined by usage: if enough people use a term incorrectly, and few enough people use it correctly, then the incorrect usage becomes correct and the correct incorrect. Fine, we can use ‘REST’ to mean ‘RPC over HTTP with a JSON encoding.’ But could the advocates of that usage propose a term we can all use for what Fielding described in his thesis? Potrzebie?

The thing that worries me, is if we start using ‘REST’ to mean JSON-RPC, and ‘Potrzebie’ to mean ‘the style formerly known as REST’: will people start using ‘Potrzebie’ to mean JSON-RPC? Perhaps worse, maybe they will start using it to mean ‘gRPC with JSON encoding.’

I propose that it’s best to use words and phrases for what they originally mean, for as long as one can, and to fight strenuously against changing them. Otherways wun May nyet wit was kood hap.


REST’s original meaning is pretty ambiguous and poorly specified. The dissertation was written to describe the design and architecture of the HTTP protocol itself, which was largely designed with and alongside this concept of REST. It predates a lot of modern Internet usage and thus doesn’t map perfectly onto current paradigms. I’d argue that even saying a “REST API” means you’re already in the land of impurity.

“True REST” is expounded upon by Fielding in a variety of places, and is essentially HATEOAS (hypermedia as the engine of application state). But no one, and I mean no one, actually does that. Thus, in order to communicate effectively and “have an agreement on terms,” as you say, we need a less strict definition. I provided my suggestion.

If your suggestion is to go back to the primary sources, I have. Multiple times. It does not provide a particularly concrete or useful definition (because its point was not to define REST). If it had, trying to define it would be much less of a no true scotsman game. Notice that we’re not sitting here debating the meaning of HTTP.


It is. One of the biggest points of tension is that we've more or less settled on JSON as an interchange format, which is not exactly hypermedia out of the box. That contradiction has severe implications for the application of HATEOAS as it exists re JSON APIs.

Maybe I've been educated in a strange part of the internet, but I assume that this ship already sailed ~10 years ago: when most people (90%+) hear REST, they imagine something vaguely like JSON-RPC.

(and this is how ChatGPT, a sort of average of all opinions on the Internet, understands it)

So if you say REST and mean something other than that, then you're committing to being misunderstood by most people.


> So if you say REST and mean something other than that, then you're committing to being misunderstood by most people.

Perhaps, but TFA is clearly written in that it is using the actual, real meaning of REST, not the value-drift corruption the laity have wrought. (The upthread comment snips out the surrounding context that brings that clarity.) Which brings us right back to the problem at hand: Potrzebie.


This is true. And the comments are full of people confused about how the article is using the term REST.

Does Stripe's API qualify? It has URLs which are verbs, e.g.

GET /v1/customers/search

POST /v1/payouts/:id/cancel

POST /v1/disputes/:id/close


IMO, no, those endpoints don't qualify. That's just RPC over HTTP. Nothing wrong with it, but not REST (REpresentational State Transfer), since it's not transferring representations of state.

Search is probably not worth shoehorning into a REST paradigm (don't be a purist!). The others are easy enough though, something like

  PATCH /v1/payouts/:id
  {state: canceled}

  PATCH /v1/disputes/:id
  {state: closed}
Or an equivalent PUT with the whole object.

Or if you want to be json-patch standards-compliant:

  PATCH /v1/payouts/:id
  {op: "replace", path: "/state", value: "closed"}
FWIW this is why I think it's not really productive to be persnickety about a "REST API" being 100% always REST. A CRUD app is still a CRUD app if you do some occasional other operations, and a REST API can still be a REST API with some endpoints that are not REST.

Never really understood the folks pushing for RPC-over-HTTP. RPC is for systems that are close together (i.e. intra-DC). These simple rules work well:

1. JSON-over-HTTP for over the web

2. RPC (pick your flavor) for internal service-to-service

I will say that Amazon's flavor (Coral-RPC) works well and doesn't come with a ton of headache; it's mostly "add ${ServiceName}Client to build" and incorporate it into the code. Never mind its really odd config files

Related note, I've never understood why Avro didn't take off over GRPC, I've used Avro for one project and it seems much easier to use (no weird id/enumerations required for fields) while maintaining all the code-gen/byte-shaving


> 1. JSON-over-HTTP for over the web

So literally gRPC[1]? You make it sound like there is a difference. There isn't, really.

What gRPC tried to bring to the table was establishing conventions around the details neither HTTP or JSON define, where otherwise people just make things up haphazardly with no consistency from service to service.

What gRPC failed on in particular was in trying to establish those conventions on HTTP/2. It was designed beside HTTP/2 with a misguided sense of optimism that browsers would offer support for HTTP/2 once finalized. Of course, that never happened (we only got half-assed support), rendering those conventions effectively unusable there.

[1] I'll grant you that protobufs are more popular in that context, but it is payload agnostic. You can use JSON if you wish. gRPC doesn't care. That is outside of its concern.


According to this, what is GraphQL? This article seems like something written with limited or unusual experience.

> According to this, what is GraphQL?

GraphQL is akin to gRPC: a non-HTTP protocol tunnelled over HTTP. Unlike gRPC, I’m unconvinced that GraphQL is ever really a great answer. I think what the latter does can be done natively in HTTP.


For all the people singing the praises of how efficient gRPC is, I enjoy countering that the most efficient response is one which doesn't include 99% of data that the client doesn't care about in the slightest

GCP (and I believe Azure, too) offer `GET /thing?$fields=alpha,beta.charlie` style field selection but now there's a half-baked DSL in a queryparam and it almost certainly doesn't allow me to actually express what I want so I just give up and ask for the top-level key because the frustration budget is real

I for sure think that GraphQL suffers from the same language binding problem as gRPC mentioned elsewhere: if your stack isn't nodejs, pound sand. And the field-level security problem is horrific to fix for real


Efficient in terms of wire transfer sure, but GraphQL tends to wind up generating queries that are quite difficult to optimize at the DB layer, so you wind up spending way more compute and time than you would otherwise need. If you're in an organization where folks with no database knowledge are writing the GraphQL queries, this winds up being a never-ending game of whack-a-mole. For anything performance sensitive, I'd much rather have a nice, optimized endpoint that returns more data than the client needs rather than have the client be able to issue any query they want.

This article is more of a marketing / paid endorsement for "gRPC" than something that is speaking any truth. The article mentions that "the least used" API method is REST, and I would argue, as would almost any developer (except Google employees), that gRPC is the least used and REST is by far the most widely adopted method.

The article is using REST in the original sense, and defines it as such to dispel any confusion with any other usage.

There is no way it is the most widely adopted method. To ever get to see a REST service in the wild is like winning the lottery.


The problem with gRPC is the "R". It's been the same with RMI, CORBA, ONC RPC and all the others.

Making "procedure calls" remote and hiding them underneath client libraries means that programmers do not consider the inherent problems of a networked environment. Problems like service discovery, authentication, etc are hidden beneath something that "looks like" a local procedure call.

That's one problem, the other is that procedure calls are focusing on the verbs, not the nouns (called "entities" or "resources" in the article).

If you can't express an FSM about a noun and what causes its state to change, then how the hell do you know what it does or how changes to its environment affect it?

If you don't know whether some procedure call is idempotent, how the hell can you write code that handles the various network failure modes that you have to deal with?



That is a problem, certainly, but not the only one.

My point is that "procedure calls" have always covered up failure modes and making them "remote" papers over all of the problems of distributed computing.

The other problem with "procedure calls" is they are imperative and grow without any constraints on their implementation without very careful design and review.

The functionality of the "procedure" is unbound and has unknown dependencies and side effects.


It depends. That’s the whole point.

I see a lot of people here saying one is better than the other. But it depends on your use case and company size.

GRPC is a lot more complex to start using and hides internals. However it has some major advantages too like speed, streaming, type safety, stub generation. Once you have it in place adding a function is super easy.

The same can be said of OpenAPI. It’s easier to understand. Builds upon basic REST tech. However JSON parsing is slow, no streaming and has immature stub generation.

From my experience a lot of users who use OpenAPI only use it to generate a spec from the handwritten endpoints and do manual serialization. This is the worst of the two worlds:
- manual code mapping JSON to your objects
- manual code mapping function parameters to GET params or JSON
- often type mapping errors in clients

Those engineers often don’t understand that OpenAPI is capable of stub generation. Let alone understand GRPC.

GRPC saves a lot of work once in place. And is technically superior. However it comes at a cost.

I’ve seen OpenAPI generated from routes, with generated client libs, work really well. This requires some time to set up because you can hardly use OpenAPIGenerator out of the box. But once set up I think it hits a sweet spot:
- simple: HTTP and JSON
- can be gradually introduced from hardcoded manual JSON serialization endpoints (client and server)
- can be used as an external API
- allows for client lib generation
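For a sense of scale, the per-route fragment you end up generating is small; something like this (hypothetical endpoint):

  paths:
    /users/{id}:
      get:
        operationId: getUser
        parameters:
          - name: id
            in: path
            required: true
            schema: { type: string }
        responses:
          "200":
            description: The requested user
            content:
              application/json:
                schema: { $ref: "#/components/schemas/User" }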

But it really depends on your use case. But to dismiss GRPC so easily mainly shows you have never encountered a use case where you need it. Once you have it in place it is such a time saver. But the same holds for proper OpenAPI RPC use.

However my inner engineer hates how bad the tooling around OpenAPI is, the hardcoded endpoints often done instead of server stubs, and the amount of grunt work you still need to do to have proper client libs.


I think everyone who worked at Google in the past has PTSD from having to migrate gRPC schemas. What a mess! Type safety doesn't have to be this costly

Lot of gRPC hate here.

I like gRPC in terms of an API specification, because one only needs to define the “what”, whereas OpenAPI specs are about the “how”: parameter in path, query, body? I don’t care. Etc.

Plus the tooling: we ran into cases where we could only use the lowest common denominator of OpenAPI constructs to let different tech stacks communicate because of orthogonal limitations across OpenAPI codegenerators.

Plus, Buf’s gRPC linter that guarantees backwards compatibility.

Plus fewer silly discussions with REST-ish purists: “if an HTTP endpoint is idempotent should deleting the same resource twice give a 404 twice?” - dude, how’s that helping the company to make money?

Plus, easier communication of ideas and concepts between human readers of the (proto) spec.


> Plus fewer silly discussions with REST-ish purists: “if an HTTP endpoint is idempotent should deleting the same resource twice give a 404 twice?” - dude, how’s that helping the company to make money?

It helps by trying to map standard metaphors to your company's concepts instead of inventing bespoke return types for your company's concepts. You still need to decide whether or not to indicate that the resource is either not there, or was never there.


REST is just pure bullshit. Avoid it like a plague.

It's a fundamentally flawed model, as it smears the call details across multiple different layers:

1. The URL that contains path and parameters

2. The HTTP headers

3. The request body that can come in multiple shapes and forms (is it a JSON or is it a form?)

As a result, OpenAPI descriptions end up looking horrifying, in the best traditions of the early EJB XML descriptors in Java. And don't get me started on leaky abstractions when you want to use streaming and/or bulk operations.

In comparison, gRPC is _simple_. You declare messages and services, and that's it. There's very little flexibility, the URLs are fixed. A service can receive and return streams of messages.

The major downside of gRPC is its inability to fully run in browsers. But that's fixed by ConnectRPC that adds all the missing infrastructure around the raw gRPC.

Oh, and the protobuf description language is so much more succinct than OpenAPI.


Yeah, I never understood the blind worship of REST. It's just another API style.. and not a good one at that. It is the way it is due to browser limitations.

To avoid the complexity you mentioned, one would have to adopt some other tool like OpenAPI and it's code generators. At that point, you might as well use something simpler and plain better: like gRPC.


"REST is just pure bullshit. Avoid it like a plague."

No it isn't. Evidence: I'm reading this in a web browser.

"...REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them."

Bikeshedding the spelling of resource identifiers? Or what "verb" should be used to express specialized domain semantics? Yeah, _that_ is certainly plague bullshit.


> No it isn't. Evidence: I'm reading this in a web browser.

And you might note that this site is _not_ REST-ful. It's certainly HTTP, but not REST.

> Bikeshedding the spelling of resource identifiers? Or what "verb" should be used to express specialized domain semantics?

Or whether we want to use If-Modified-Since header or explicitly specify the condition in the JSON body. And 6 months later, with some people asking for the latter because their homegrown REST client doesn't support easy header customization on a per-request basis.

Or people trying (and failing) to use multipart uploads because the generated Ruby client is not actually correct.

There is _way_ too much flexibility in REST (and HTTP in general). And REST in particular adds to this nonsense by abusing the verbs and the path.


> It's certainly HTTP, but not REST.

How isn't it RESTful? It's a single entrypoint using content types to tell the client how to interpret it, and with exploratory clues to other content in the website.


The "R" letter means "Representational". It requires a certain style of API. E.g. instead of "/item?id=23984792834" you have "/items/comments/23984792834".

HN doesn't have this.


Representational is to do with being able to deal with different representations of data via a media type[0]. There is stuff about resource identification in ReST, but it's just about being able to address resources directly and permanently rather than the style of the resource identifier:

> Traditional hypertext systems [61], which typically operate in a closed or local environment, use unique node or document identifiers that change every time the information changes, relying on link servers to maintain references separately from the content [135]. Since centralized link servers are an anathema to the immense scale and multi-organizational domain requirements of the Web, REST relies instead on the author choosing a resource identifier that best fits the nature of the concept being identified.

[0] https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...


> "REST is just pure bullshit. Avoid it like a plague."

> No it isn't. Evidence: I'm reading this in a web browser.

REST is not HTTP endpoints and verbs.


I've come to the conclusion that whatever the question is, gRPC isn't the answer unless you are working on Google backend.

The performance benefit they mention comes at the cost of (un)debuggability of the binary protocol, and the fact that the interface definition language requires client code generation just further deepens the existing moats between teams because of diverging tooling and explicit boundaries drawn up by said contract.

IMO gRPC mostly ends up used as a band-aid for poor cross-team collaboration, and long-term worsens the symptoms instead of fixing the core issue. The fact that it's PITA to use is secondary, but significant too.


gRPC is an anti-pattern for 99% of services. HTTP + JSON is the better choice in 99% of decisions. For high scale, HTTP + <Binary Payload> gets you most the way there.

gRPC's design while a great technical achievement, is overly complex.


By far the worst part of OpenAPI is how aspirational the documentation seems to remain. It seems it is always leveraging things that almost worked in the previous version with advice on how it should be done. But if you do the new way, expect that about half of the tooling you find won't work.

It really is WSDL all over again. Where if you buy in to a specific vendor's tooling, and don't leave it, things actually do mostly work as advertised. You want to piecemeal anything, and holy crap at the unexpected pitches.


Oof, I strongly disagree with this article's description of how REST apis are used, and the distinction between openAPI and rest. If I design a REST api in 2023, and in 2024 produce an openapi yaml or json file for that API with no other changes, is it somehow no longer a REST api? of course not. The article seems to be predicated on this distinction.

> The least-commonly used API model is REST

Is that true? I don't think it is frankly, though I suppose if any API that would be a REST api _if it didn't have an openapi spec_ is somehow no longer a REST api, then maybe? But as previously stated, I just don't think that's true.

> A signature characteristic of [REST APIs] is that clients do not construct URLs from other information

I don't think this is true in practice. Let us consider the case of a webapp that uses a REST api to fetch/mutate data. The client is a browser, and is almost certainly using javascript to make requests. Javascript doesn't just magically know how to access resources, your app code is written to construct urls (example: getting an ID from the url, and then constructing a new url using that extracted ID to make an api call to fetch that resource). In fact, the only situation where I think this description of how a REST api is used is _defensibly_ true (and this is hella weak), is where the REST api in question has provided an openapi spec, and from that spec, you've converted that into a client library (example: https://openapi-ts.dev). In such a situation, the client has a nice set of functions to call that abstract away the construction of the URL. But somewhere in the client, _urls are still being constructed_. And going back to my first complaint about this article, this contrived situation combines what the article states are two entirely distinct methods for designing apis (rest vs openapi).

Re: the article's description of rpc, I actually don't have any major complaints.


This stood out to me as well. The author must have a particular understanding of REST that differs from the usual sense in which it’s used. He might be technically correct — I haven’t read the primary sources related to REST — but it distracted from the meat and potatoes of the article, which is really a comparison of gRPC and OpenAPI. It seemed very strange for this reason.

or he works for Google (author of gRPC) and is being paid to extol the virtues, albeit short sighted, of gRPC

You're being way too polite. The article is garbage and completely incorrect about what REST and OpenAPI even are.

You're wrong. The author is using "REST" to mean an API at Level 3 on the Richardson Maturity Model[0] - this was the original conception of what it meant to be a "REST API" before the wider internet decided "REST" meant "nice looking URLs". What he refers to as "OpenAPI APIs" could be called Level 2 Web APIs on the same model.

He uses "REST" correctly. He uses "OpenAPI" as a shorthand for the class of web APIs that are resource-based and use HTTP verbs to interact with these resources.

[0] https://en.wikipedia.org/wiki/Richardson_Maturity_Model


I could concede perhaps he wasn't necessarily wrong on REST, though I personally think it's pedantic and incorrect, regardless of what the creator of the term says. Things evolve, and returning a list of objects instead of a list of links was an obvious progression, since spamming 1000s of GET requests doesn't scale well in the post 90s. If the industry at large generally agrees on what makes an API restful, it feels like we should accept such evolution.

OpenAPI is a description language and has little to do with an API itself. It's documentation. People were using 'unpure' REST long before it or Swagger even existed. And as the parent pointed out, you can add an openapi spec later, and it doesn't magically change the API itself.

Further, it creates a weird circular logic that doesn't work.

From https://swagger.io/docs/specification/v3_0/about/ -

"OpenAPI Specification (formerly Swagger Specification) is an API description format for REST APIs"


> > A signature characteristic of [REST APIs] is that clients do not construct URLs from other information

> I don't think this is true in practice.

'recursivedoubts: https://news.ycombinator.com/item?id=42799917

The blogger is completely correct. In a true REST (i.e., not JSON-RPC) API, the client has a single entry URL, then calls the appropriate HTTP verb on it, then parses the response, and proceeds to follow URLs; it does not produce its own URLs. Hypertext as the engine of application state.

For example, there might be a URL http://foocorp.example/. My OrderMaker client might GET http://foocorp.example/, Accepting type application/offerings. It gets back a 200 response of type application/offerings listing all the widgets FooCorp offers. The offerings document might include a URL with an order-creation relationship. That URL could be http://foocorp.example/orders, or it could be http://foocorp.example/82347327462, or it could be https://barcorp.example/cats/dog-attack/boston-dysentery — it seriously doesn’t matter.

My client could POST to that URL and then get back a 401 Unauthorized response with a WWW-Authenticate header with the value ‘SuperAuthMechanism system="baz"’, and then my client could prompt me for the right credentials and retry the POST with an Authorization header with the value ‘SuperAuthMechanism opensesame’ and receive a 201 response with a Location header containing a URL for the new empty order. That could be http://foocorp.example/orders/1234, or it could be https://grabthar.example/hammer — what matters is that my client knows how to interact with it using HTTP verbs, headers and content types, not what the URL’s characters are.

Then my client might POST a resource with content type application/order-item describing a widget to that order URL, and get back 202 Accepted. Then it might POST another resource describing a gadget, and get back 202 Accepted. Then it might GET the original order URL, and get back a 200 OK of type application/order which shows the order in an unconfirmed state. That resource might include a particular confirm URL to PUT to, or perhaps my client might POST a resource with content type application/order-confirmation — all that would be up to the order protocol definition (along with particulars like 202, or 200, or 201, or whatever).

Eventually my client non-idempotently PUTs or POSTs or whatever, and from then on can poll the order URL and see it change as FooCorp fulfills it.

That’s a RESTful API. The World Wide Web itself is a RESTful API for dealing with documents and also complete multimedia applications lying about being documents, but the RESTful model can be applied to other things. You can even build a RESTful application using the same backend code in the example, but which talks HTML to human beings whose browsers ask for text/html instead of application/whatever. Or you might build a client which asks for ‘text/html; custom=orderML’ and knows how to parse the expected HTML to extract the right information, and everything shares common backend code.

Or you might use htmx and make all this reasonably easy and straightforward.

That’s what REST is. What REST is not, is GETting http://api.foocorp.example/v1/order/$ORDERID and getting back a JSON blob, then parsing out an item ID from the JSON blob, then GETting http://api.foocorp.example/v1/item/$ITEMID and so forth.


I think there's the REST that Fielding intended, and there's the REST that everyone has spent almost 20 years implementing. At some point we should acknowledge that the reality of REST-like API design is a valid thing to point to and say "that's REST!" even if it doesn't implement all of Fielding's intentions.

To me the critical part of REST is the use of http semantics in API design, which makes it very un-RPC like.

The idea of a naive api client crawling through an API to get at the data that it needs seems so disconnected from the reality of how _every api client I've ever implemented_ works in a practical sense that it's unfathomable to me that someone thinks that this is a good idea. I mean, as a client, I _know_ that I want to fetch a specific `order` object, and I read the documentation from the API provider (which may in fact be me as well, at least me as an organization). I know the URL to load an order is GET /orders/:id, and I know the url to logout is DELETE /loginSession. It would never make sense to me to crawl an API that I understand from the docs to figure out if somehow the url for fetching orders has changed.

I do think we need some kind of description of REST 2.0 that makes sense in today's world. It certainly does not involve clients crawling through entry urls and relationships to discover paths that are clearly documented. It probably does involve concepts of resources and collections of resources, it certainly mandates specific uses for each http method. It should be based on the de facto uses of REST in the wild. And this thing would _definitely_ not look like an rpc-oriented api (eg soap, grpc).


Thank you for the summary! :)

Yeah, the author has an extremely idiosyncratic take on the definition of REST which is either based on a misunderstanding, or a fundamentalist view of "pure" REST.

HATEOAS is crucial to what [Roy Fielding](https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...) calls REST APIs.

>A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations.

Most APIs that people call "RESTful" -- regardless of whether they come with an OpenAPI spec -- don't obey HATEOAS. A typical OpenAPI spec describes the possible request paths and verbs. However, you probably wouldn't be able to discover all that information just by starting from the entry point and parsing the `hrefs` in the response bodies.


Roy Fielding would also say that you probably don't need that definition of REST. The main advantages most people get from REST are in the standardised tooling, faster browser/library parsing of JSON, HTTP makes firewalls easy, and it looked so much nicer than the incumbent, SOAP[0].

[0] https://www.w3.org/TR/2000/NOTE-SOAP-20000508


> The least-commonly used API model is REST

IMHO that's not true. We could argue that the REST name is abused, but it's the word commonly used to describe a stateless API that uses HTTP with URIs through verbs (GET, POST, PUT, DELETE, PATCH).

This article seems opinionated towards gRPC


I’ve generally regarded gRPC as a high-performance protocol mainly suited for connecting microservices—something you’d keep internal rather than expose publicly. But it shines in use cases like live captioning, where a transcription service has to stay in sync with a video feed and can’t afford dropped messages. In my experience, using plain WebSockets for high-throughput internal communication was a mistake because while WebSockets use TCP underneath, they don’t inherently handle reconnection or message acknowledgments. With gRPC, those features come built-in, saving you from implementing them yourself.

I generally like the article. I wish the REST concept had been explained with some code / payload examples though. Other than that, it managed to steer me away from gRPC. All the cons he mentioned are huge deal breakers in my opinion. I would only consider it if I can control both server and client and their implementation details (tech stack in this case).

But he addressed some issues with OpenAPI I constantly struggle with. And the fact that seemingly no one is able to say what the standard is for certain patterns. And don't get me started on OData …


One thing that plagues almost all API solutions where you have to generate code is that the vast majority of code generators are bad, and often the code they generate is ugly.

I've never understood why so many code generators are so fiddly. They are supposed to parse text and produce text as output. You would think that it would be possible to do this without involving all manner of junk dependencies.

It reminds me of what I refer to as "the most important slide in software engineering". It was a slide by Sterling Hughes (PHP, MyNewt etc) from a presentation I can no longer remember the details of. But the slide simply said "nobody cares if it works for you". In the sense that if you write code that other people are supposed to use, do make an effort to put yourself in their place. Sterling was probably 16-17 at the time, and worked as a paid intern where I worked. But despite his young age, he managed to express something that most of us will never fully take on board.

Whenever I get an OpenAPI yaml file instead of a client library for some system I know things are going to be frustrating.


Netflix, Coinbase, Spotify and several other big/medium tech companies are pretty much all-in on gRPC. I guess the problem must be with the haters here who couldn't see the value.

I feel like this article just discusses API semantics, which feels like a bunch of pedantic best practices with no actual substance. It doesn't mention any of the things that gRPC offers that the alternatives don't, which you would expect from a Google article of all places.

Would've been nice if they talked about how schema evolution is different in both cases, bidirectional streaming, or performance differences for different workloads
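On the schema evolution point, for example: protobuf's contract is essentially "field numbers are forever", which is what lets old and new clients coexist. A made-up sketch:

  message Order {
    reserved 3;              // field 3 ("coupon_code") was removed; its number must never be reused
    string id = 1;
    int64 total_cents = 2;
    string currency = 4;     // added later; older clients simply ignore fields they don't know
  }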


Keep in mind gRPC is not stable over the internet; middleboxes love to break it (looking at you, Google Cloud, exposing some of your services as gRPC-only APIs)

If you don't have a monorepo in your org, don't use gRPC.

Specifically, if you can't maintain that .proto mess inside one single source of truth, you're probably fucked.

If devs are afraid of updating the .proto files and instead keep stuffing things into `context`, `extra`, or `extension` fields, you are fucked. Get rid of gRPC ASAP!

Look at your .proto definitions: if there are tons of <str,str> mappings or repeated key-value pairs, just forget gRPC and use JSON.
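Something like this, which is really just untyped JSON wearing a .proto costume (made-up example):

  message Event {
    string kind = 1;
    map<string, string> attributes = 2;   // everything stringly typed; the schema buys you nothing
  }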

Need performance? Use msgpack!


This article seems to make the mistake of thinking that things are either full Roy Fielding REST or it's RPC.

OpenAPI is not similar to gRPC because it's noun-oriented, not verb-oriented. gRPC is more like SOAP: ignore HTTP semantics and write method calls and we'll sort it out. OpenAPI is somewhere on the path to full REST: few verbs; lots of nouns.


I'm not sure why the article picks these 3 options as if that's it.

An RPC API can happily exist over plain old HTTP/1 (no protobuf required), and the article also doesn't mention the primary benefit of RPC over REST/RESTish (IMO): the ability to stack multiple RPC calls into a single request.
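JSON-RPC 2.0, for instance, batches several calls into one plain HTTP request (the method names here are invented):

  POST /rpc HTTP/1.1
  Content-Type: application/json

  [
    {"jsonrpc": "2.0", "id": 1, "method": "getUser",   "params": {"id": 42}},
    {"jsonrpc": "2.0", "id": 2, "method": "getOrders", "params": {"userId": 42}}
  ]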


The lack of first class js support just kills it. Having to use middleware that doesn't work too well on AWS is the nail in the coffin.

It's different if you've drunk the microservices koolaid but for normal projects it doesn't help generate front-end client API libs like you'd hope.


I disagree that OpenAPI is just RPC mapped to HTTP. A well-designed OpenAPI spec can be quite RESTful. The problem is many developers don't take the time to design good resource models and just slap RPC-style operations into URL paths.
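For instance, the same operation can be modelled either way (paths invented for illustration):

  POST /api/cancelOrder?id=42          # a verb in the path: RPC in disguise
  POST /api/orders/42/cancellation     # resource-oriented: the cancellation itself is a resource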

Security usually allows simple HTTP requests and REST is the quickest way to get started.

Entity-oriented models are more stable over time than procedure-oriented RPC. In my experience, starting with resources/entities and mapping operations onto them does lead to cleaner APIs that are easier to evolve.

> The problem is that MVPs don’t actually establish whether the team /could/ get to a finished product, and in practice many can’t.

Isn't that WHY you go to investors? To get the funding to hire to get it to market?


Question: I have a really simple game, but I'm seeing latency issues when users aren't near the servers. I'm using WebSockets with JSON to send data. Would moving to protobuf help?

No. The latency is dominated by the network round trip to the server, not by how the payload is encoded; a smaller serialization format won't move your users any closer.

We should use gRPC only after doing proper domain-driven architecture. Properly categorizing classes into domain/services/infra is more important.

My experience with grpc was not good.

I was writing some python code to interface with etcd. At least at the time there wasn't a library compatible with etcd 3 that met my needs, and I only needed to call a couple of methods, so I figured I'd just use grpc directly, no big deal right?

So I copied the proto files from the etcd project to mine, then tried to figure out how to use protoc to generate python client code. The documentation was a little lackluster, especially on how to generate annotations for use with mypy or pyright, but whatever, it wasn't too hard to figure out the right incantation.
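(For anyone else fighting the same thing, the incantation ended up looking roughly like this, using grpcio-tools plus the mypy-protobuf plugin; the directory and file names are placeholders.)

  # Assumes grpcio-tools and mypy-protobuf are installed and protoc-gen-mypy is on PATH.
  from grpc_tools import protoc

  protoc.main([
      "grpc_tools.protoc",      # argv[0]-style program name slot
      "-Iproto",                # where the .proto files live (placeholder path)
      "--python_out=gen",       # message classes
      "--grpc_python_out=gen",  # client/server stubs
      "--mypy_out=gen",         # .pyi type stubs from the protoc-gen-mypy plugin
      "proto/rpc.proto",        # placeholder file name
  ])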

Except it didn't work. The etcd proto files had some annotations or includes or something that worked fine with the golang implementation, but didn't work with the Python implementation. I thought the proto files were supposed to be language agnostic. Well after a couple hours of trying to get the files working as is, I gave up and just modified the proto files. I deleted most of it, except for the types and methods I actually needed, got rid of some annotations, and I think I ended up needing to add some python specific annotations as well.

Then I finally got some python code, and a separate file for type annotations. But I still had issues. Eventually, I figured out that what was happening was that the package hierarchy of the proto files, and the imports in those files, has to match the Python package names, and it uses absolute, rather than relative, imports. Ok, so surely there is an option to pass to protoc to add a prefix package to that, so I can use these files under my own namespace, right? Nope. Alright, I guess I have to update these proto files again. It'll be a pain if I ever need to update them to match changes upstream.

Ok, now the code is finally working, let's make sure the types check. No. MyPy gives me errors. In the generated code. At first I assume I did something wrong, but after much investigation, I determine that protoc just generates type annotations that are not just wrong, but invalid. It annotates global variables as class variables, which MyPy, rightly, complains doesn't make sense.

To fix this I resort to some hackery that I saw another python project use to fix the import issue I mentioned earlier: I use sed to fix the pyi file. Is it hacky? Yes, but at this point, I don't care.

I assume that other people have had a better experience, given its popularity, but I can't say I would be happy to use it again.


This post is exactly how I imagine people who have only ever worked at Google think. This has been my experience both from working at Google and from working with Google.

Bizarre definitions of commonly used words. Huge emphasis on in-house tech, which is mediocre at best. Extraordinary claims supported by fictional numbers.

I think there used to be a culture where employees scored brownie points by publishing blog posts. You'd need those points to climb the ranks or even just to keep your job. This blog reads as one of those: nothing of substance, a bunch of extraordinary claims, and some minutiae about Google's internal stuff that's of little consequence to anyone outside the company.

I mean... choosing gRPC of all things to illustrate RPC, when there's actual Sun RPC on every Linux computer, is just the cherry on top.


Does anyone use gRPC-Web? What do you use it for and how would you rate your experience with it?

And it's only needed once the product is good and the company has scaled.

(2020)

If Google offers the ability to fuck up other people's lives, should they be financially liable for the costs of returning the individual to the state they were in prior to the abuse?

Is this basically gaslighting us on what REST APIs are, or is it just me?

No. Most people, when they use "REST", do so incorrectly. The article is right, for example, that one of the requirements in the definition of REST was the use of URLs to identify resources:

> REST uses a resource identifier to identify the particular resource involved in an interaction between components.

(And it goes on to cite URLs as an example of a resource identifier in REST as applied to the modern web; note that "REST" is an architectural style to describe the design of systems, the web is an application of that style.)

Many allegedly RESTful APIs simply don't do that, and instead you'll see something like,

  {"id": 32, …}
Particularly so when combined with tightly coupled URL construction.
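Compare that with a response where the resource identifies itself by URL rather than by a bare integer (shape invented for illustration):

  {"self": "https://api.example.com/orders/32", "status": "open"}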

There are other facets of REST that you could compare to most JSON/HTTP APIs and find that they don't obey that facet, either.



What's funny is none of these are very good, but they're now the most common standards. They are designs to be sure. But they lack the one thing that makes a standard valuable: not having to do a bunch more work every time you want to work with a single new application.

The idea many of you were literally raised with, that you have to look up an application's specific functions, and write your own code to specifically map to the other application's specific functions? That basically didn't exist before, like, 2000.

Look at any network protocol created before HTTP (that wasn't specific to a single application). A huge number of them (most of them?) are still in wide use today. And basically none of them require application-specific integration. FTP, SSH, Telnet, SMTP, DNS, TFTP, HTTP, POP3, SUNRPC, NNTP, NTP, NetBIOS, IMAP, SNMP, BGP, Portmap, LDAP, SMB, LDP, RIP, etc. All layer-7, all still used today, decades after they were created. And every single application that uses those protocols, is not custom-built to be aware of every other application that uses that protocol. They all just work together implicitly.

There's almost no benefit to even using gRPC, OpenAPI, REST, etc. You could come up with a completely new L7 protocol, and just say "if you want to be compatible with my app, you have to add support for my new protocol. here's my specification, good luck.". Sure there are benefits on the backend for transmogrifying, manipulating, re-routing, authenticating, monitoring, etc the underlying protocols. But as far as the apps themselves are concerned, they still have to do a ton of work before they can actually communicate with another app. One other app.

Now it's a feature. People brag about how many integrations they did to get app A to work with apps B, C, D, E, F, G. Like Oprah for protocols. "You get custom code, and you get custom code, and you get custom code, and you get custom code! You all need custom code to work with my app!"

You could say, oh, this is actually wonderful, because they're using a common way to write their own layer-8 protocols! But they're not even protocols. They're quirky, temporary, business logic, in a rough specification. Which is the way the big boys wanted it.

Corporations didn't want to have to abide by a specification, so they decided, we just won't support any applications at all, except the ones we explicitly add code to support. So application A can talk to apps B and C, but nothing else. It's ridiculous. We regressed in technical capability.

But it has to be this way now, because the OS is no longer the platform, the Web Browser is. No protocol can exist if it's not built into the browser. The line people try to sell you about "middleboxes" is bullshit, because middleboxes only matter when all the apps are in a web browser. Take away the web browser and middleboxes have no power. If the entire internet tomorrow stopped using HTTP, there would literally be no choice but to do away with middleboxes. But we won't go there, because we won't get rid of the web browser, because we like building abstractions on abstractions on abstractions on abstractions on abstractions. People get dumber, choices get smaller, solutions get more convoluted.

C'est la vie. The enshittification of technology marches on.


> One other app.

I don't really understand this criticism. FTP and HTTP are equivalent, and you can serve all the apps on HTTP by implementing HTTP, just as you can send any file over FTP by implementing FTP. The apps that sit on top of HTTP are of course going to have custom integration points. They all do different things.


What a load of nonsense. OpenAPI is a documentation standard for HTTP APIs. So, this is an apples and oranges comparison that starts off on the wrong premise.

Some of those APIs might be REST APIs in the strict hypermedia/HATEOAS sense popularized twenty years ago by some of its proponents. Looking back, however, that mostly did not catch on. I actually met with Jim Webber a couple of times. He co-authored "REST in Practice", which is sort of the HATEOAS bible, together with the original HTTP spec by Mr. REST himself, Roy Fielding. Lovely guy, but I think he has moved on from talking about that topic. He's been at neo4j for more than a decade now. They don't do a lot of HATEOAS over there. I remember having pointless debates with people about the virtues of using the HTTP PATCH method. Thankfully that's not a thing anymore. Even Jim Webber was on the fence about that one.

Most people these days are less strict on this stuff and might create generic HTTP REST APIs that may or may not do silly things such as making every request an HTTP POST, like SOAP, GraphQL, and indeed gRPC tend to do. Which is very un-HATEOAS-like but perfectly reasonable if you are doing some kind of RPC.

Most APIs trying to implement some notion of REST can and probably should be documented, for example using OpenAPI.

Most modern web frameworks support OpenAPI directly or indirectly and are nominally intended to support creating such REST APIs. There's very little reason not to support that if you use those. Things like Spring Boot, FastAPI, etc. all make this pretty easy. Your mileage may vary with other frameworks.

gRPC is a binary RPC protocol that gets used a lot for, IMHO, mostly invalid reasons and assumptions. Some of those assumptions boil down to believing that applications spend a lot of time waiting on network responses and parsing, and that making responses smaller and easier to parse therefore has a significant impact. That's only true for a very narrow set of use cases.

In reality, textual responses compress pretty well and things like JSON parsers are pretty fast. Those two together mean that the number of bytes transferred over the network does not really change significantly when you use gRPC, and the time spent parsing, relative to the time spent waiting on network IO, is typically orders of magnitude less. Which leaves plenty of CPU time for parsing and decompressing stuff. This was a non-issue 20 years ago, and it still is. I routinely added compression headers to web servers twenty years ago because there were no downsides to doing that at the time (minimal CPU overhead, meaningful network bandwidth savings). Parsers were pretty decent 20 years ago. Etc.
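A quick stdlib sketch of that point (exact numbers will vary, but repetitive JSON compresses dramatically):

  import gzip, json

  # 1,000 small records with repetitive keys, the shape most API responses have
  payload = json.dumps([{"id": i, "status": "shipped", "currency": "EUR"}
                        for i in range(1000)]).encode()
  print(len(payload), "->", len(gzip.compress(payload)))  # tens of KB down to a few KB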

Using RPC style APIs (not just grpc) has two big issues:

- RPC protocols tend to be biased to specific implementations and languages and rely on code generation tools. This can make them hard to use and limited at the same time.

- They tend to leak internal implementation details because the APIs they expose are effectively internal APIs.

The two combined make for lousy APIs. If you want an API that is still relevant in a decade or so, you might want to sit down and think a little. A decade is not a lot of time. There are lots of REST APIs that have been around for that long. Most RPC APIs from that long ago are a bit stale at this point. Even some of the RPC frameworks themselves have gone a bit stale. Good luck interfacing with DCOM or CORBA services these days. Or SOAP. I'm sure there's a poor soul out there wasting time on supporting that shit in e.g. Rust or some other newish language. But don't get your hopes up.



