A detailed comparison of REST and gRPC (kreya.app)
94 points by CommonGuy on April 29, 2023 | 69 comments



While this is a nice overview (I wouldn't call it detailed), I feel it misses the points of REST, gRPC, and GraphQL. I think it's a more useful engineering decision to first filter based on how you want an API to be used, and then to filter based on the implementation details such as these.

The whole point of REST is that it's discoverable. Now web standards didn't quite manage to standardise HATEOAS, so sadly full machine discovery is unlikely via an API, but you can build quite adaptable clients that can respond to changing APIs well, going as far as optimising client usage by changing API responses without redeploying clients. That may or may not be something you want in an API, but it's worth considering, because it's not going to happen with gRPC.

GraphQL, like REST, is about the objects and relationships, and lends itself well to building highly capable client-side caching layers when you introduce the Relay patterns to it – particularly globally unique IDs. Given GraphQL's well defined schema it's relatively easy to build generic caching mechanisms. Again, this may or may not be something you want in an API.

gRPC doesn't really allow for any of this, if you want it you've got to invent it all yourself. But that might be ok! Server to server calls rarely need a big cache to work around poor networking. gRPC does however offer considerably more control over streaming behaviour, more standardised error handling than GraphQL, and more.

There are quite a few factual errors in this post, like REST not being schema based (that's up to the implementer), no streaming in REST (not necessarily true).

When deciding on an API technology the first questions must be "who is the consumer", and "what are their constraints". These will often lead to just one or two of the options here, then you can drill down into the details like tooling, API design, and so on.


> There are quite a few factual errors in this post, like [...] no streaming in REST (not necessarily true).

Where did it say that? I got the opposite impression:

> Handling large data sizes with REST APIs, such as file uploads, is rather straight forward. The received file can be treated as a stream, using very little memory.

Re: streams, I've found that an opaque binary stream in REST is okay, but the moment I want to stream messages, my life is hell. I keep running into this, and it's such a pain. If I have to stick with REST, I end up inventing a bespoke RPC protocol to allow for sending errors as part of the stream, either as part of the message or as a trailer header (and trailers have their own issues like no browser support). And then the semantics of HTTP status codes have to change.
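
For the curious, the "bespoke protocol" usually ends up as something like newline-delimited JSON envelopes (field names here are made up), because a 200 that's already streaming has nowhere else to put a mid-stream failure:

  HTTP/1.1 200 OK
  Content-Type: application/x-ndjson

  {"msg": {"id": 1, "data": "..."}}
  {"msg": {"id": 2, "data": "..."}}
  {"error": {"code": "upstream_timeout", "detail": "gave up after item 2"}}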

The reason I often stick with REST is that infrastructure supports HTTP/1 much better than h2/gRPC (such as L7 load balancers, caching, etc.). JSON RPC is not always a good answer either as caching POSTs is typically not well-supported and using GET with JSON RPC is discouraged and fraught with issues.

I would definitely prefer to just use gRPC but the poor middlebox support sometimes makes it cost-prohibitive.

What a mess.


From the article: "This is a very nice advantage of gRPC in comparison to REST, which only supports unary requests."

You're right that files can be streamed and it does say that, but additionally there's no reason why messages can't be streamed down an open connection.

Out of interest, as I hadn't considered middlebox support, what sort of things do you find become problematic? Doesn't TLS mitigate that? Or are you thinking of niche environments like companies that require intercepting all traffic on their networks? I realise those use-cases do exist, but they're pretty uncommon now as more people realise they're terrible security practice.


I'm talking about L7 middleboxes like HTTP load balancers, API gateways, caches/CDNs, etc.


Oh fair enough. I guess that's true, although I'm not sure I'd see too much of a problem there. For load balancers there are options that work fine with gRPC, similarly API gateways. Caches and CDNs no (at least I haven't seen any), but then again, I'm skeptical about putting CDNs in front of business logic, and would only ever use them for actual content (where the serving logic doesn't matter). Even logged out usage of APIs still often has analytics and other stuff that means you want to hit the backend every time.


Type safety is another big part of this imo. GraphQL and gRPC give you this out of the box, but with REST you have to cobble together some sort of solution for providing a schema & client libraries.


True, but again this is a technical detail that should be secondary to who the client is and what they need.

Is the client always going to use a library to interact? Is connecting without any code/schema published by the API provider a core requirement? Are clients going to be in languages with good protobuf/GraphQL support? Some don't have this. Is the code that talks to the API also owned by the API provider? Public vs private. Support lifecycles.

These factors all play into whether the type safety could even be utilised by clients. And it's not like REST doesn't have this – OpenAPI and Swagger can get a lot of the benefit with fairly minimal work. Both are very common.
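
To illustrate the "minimal work" point, even a hand-written fragment like this (hypothetical endpoint and schema) is enough for most generators to produce typed clients:

  openapi: 3.0.3
  info:
    title: Example API
    version: "1.0"
  paths:
    /users/{id}:
      get:
        parameters:
          - name: id
            in: path
            required: true
            schema: {type: string}
        responses:
          "200":
            description: The requested user
            content:
              application/json:
                schema: {$ref: "#/components/schemas/User"}
  components:
    schemas:
      User:
        type: object
        required: [id, name]
        properties:
          id: {type: string}
          name: {type: string}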


I do agree that client requirements come first of course, but after that DX is at the top of my list.

They do differ fundamentally in that type safety is an afterthought with REST but built in with some of the alternatives. That can have its benefits too of course: REST is easier to integrate with and more broadly supported in part because it doesn't concern itself with that.

OpenAPI can help, but keeping your schema in sync with reality can be a pain depending on what libraries you have available. In the best case it really is minimal work, but if you don't have good tooling for whatever web framework you're using it becomes a chore. In my experience it often requires more manual effort to maintain and carries more risk of mistakes making the schema inaccurate.


That is true, but it has been very well solved and standardised by OpenAPI and JSON Schema.


How convenient OpenAPI actually is to implement really depends on the language and framework you're using. In many cases there's a lot of additional manual effort involved if you can't auto-generate a schema, or at least generate one that is good enough to then generate client libs from.


I've been disappointed with almost every OpenAPI spec I've come across for this reason. They're OK for documentation, but you can't auto-generate clients from them due to errors.


There are three standard models of working with API schemas, in relation to the implementation:

  * manual:
    * schema is maintained manually, independent of the implementation
    * usually an afterthought and used mainly for documentation
  * implementation-first:
    * schema is autogenerated from the code
    * used as a reference, possibly to generate clients, and often to run tests
    * probably the most common approach, supported by many frameworks
  * schema-first:
    * schema is maintained manually, with client and server code generated from it
    * very rare, but the most correct approach


This isn't 100% true - unless you're using a json-schema --> types package, there's a risk of your documentation not aligning with your types.


A common mistake I have noticed is that people identify API models with their business logic models. They may look similar, but they are very rarely identical, just as is the case with business logic models vs ORM models.

Unless your models are very simple, the best approach is to use three separate layers of model definitions (a rough sketch in Go follows the list):

  * API models for serde and conversion of external requests
  * business logic models that carry the actual internal functionality
  * (optionally) ORM models to convert the data for persistence to a RDBMS
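
Roughly like this, with all names invented for illustration; the explicit conversion functions are what keep the layers free to evolve independently:

  package models

  import "time"

  type UserID string

  // API layer: shaped for the wire, tagged for serde.
  type UserResponse struct {
      ID        string `json:"id"`
      FullName  string `json:"full_name"`
      CreatedAt string `json:"created_at"` // RFC 3339 for clients
  }

  // Business layer: what the application actually reasons about.
  type User struct {
      ID        UserID
      FullName  string
      CreatedAt time.Time
  }

  // Persistence layer: shaped for the RDBMS.
  type userRow struct {
      ID        string    `db:"id"`
      FullName  string    `db:"full_name"`
      CreatedAt time.Time `db:"created_at"`
  }

  // Explicit conversion between layers, instead of one shared struct.
  func toResponse(u User) UserResponse {
      return UserResponse{
          ID:        string(u.ID),
          FullName:  u.FullName,
          CreatedAt: u.CreatedAt.Format(time.RFC3339),
      }
  }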


An even worse mistake is treating all three of those model layers as one and the same, which tools like Django REST Framework make all too easy. It all seems well and good for a while, as developers build up a big codebase with ease, but they are then confronted with an almost insurmountable amount of work when the need to refactor arises.

The thing I’ve noticed when stepping into a codebase where this problem has been allowed to occur is the lack of layers of abstraction. Having those different models built up from the start allows for an application to shift along with the needs of the product. Having a single layer, with the endpoints talking literally directly to the ORM models, almost inevitably leads to calcification, spaghettification, and disastrous performance.


Is the image/jpeg media type "typesafe"?

How is using, for example, JSON Schema to define your types any better or worse than a "proto" file that requires a compiler, a parser, and a client and server library?


The difference is really that with something like protobuf, type safety is built in, and with JSON it's an afterthought. With JSON Schema or OpenAPI it's on you to keep things in sync with your API, and there's often a significant amount of work that comes along with that too.

There are many tradeoffs of course; it's not like dealing with proto files is painless either.


What you're actually saying is that when you use the standard language bindings for a protobuf parser, the type safety is as built in as your language.

JSON has a limited set of types. There is object, array, string, number, boolean, and null. That's it.

JSON Schema adds to that by providing definitions of types that build on those basic JSON types.
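
For example, a minimal (made-up) schema layering constraints on top of those primitives:

  {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "required": ["id", "amount"],
    "properties": {
      "id": { "type": "string", "format": "uuid" },
      "amount": { "type": "number", "minimum": 0 },
      "note": { "type": ["string", "null"] }
    }
  }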


gRPC does not give you any form of type safety. Even if you assume protobuf entities over gRPC, protobuf also does not give you any form of type safety.


> but you can build quite adaptable clients that can respond to changing APIs well

Any good examples on how to do this for non-trivial API changes?


gRPC is so much cleaner and easier to work with for the APIs.

- Error codes are well defined (vs "should I return 200 OK and error message as a JSON or return 40x HTTP code?")

- no semantic ambiguity ("should I use POST for a query with parameters that don't fit in a URL query param, or is POST only for modifying?")

- API upgrade compatibility out of the box (protobuf fields are identified by numbers, not names)

Not to mention cross-platform support and autogenerating code.

I use it in multiple Flutter+Go apps, with gRPC Web transparently baked in, and it just works.

Once I had to implement chunked file upload in the app and, used to multipart form upload madness, was scared even to start. But without all that legacy HTTP crap, implementing upload took like 10 mins. It was so easy and clean, I almost didn't believe that such a dreadful thing as "file upload" could be so easy. (Years of fighting HTTP legacy will do that.)

Compared to the "traditional" workflow with REST/JSON, the downside for me is, of course, the fact that now you can't help but care about the API. With web frameworks the serialization/deserialization into app objects happens automagically, so you throw JSON objects left and right, which is nice until you realize how much CPU/memory is being wasted for no reason.

Also, check out drop-in replacements for cases where you don't need full functionality of gRPC:

- Twirp (Twitch light version of gRPC, with optional JSON encoding, HTTP1 support and without streaming) - https://github.com/twitchtv/twirp

- Connect - "Better gRPC" https://connect.build/docs/introduction/


> Not to mention cross-platform support

I'm truly curious to know how REST or JSON or HTTP is not cross platform


Good point. I referred to the code, not to the protocols.

With REST/JSON you mostly write code twice – server implementation and client implementation. If you add a new platform (say, iOS), you write new code again. So for each new platform you maintain separate code => not cross platform.


What code are you talking about? Having to write the URL in the HTTP client call?


No, one layer up. You need to handle error codes, request and response parameters, naming for the API, etc.

Imagine having an API with 100+ endpoints. How do you add another one in gRPC? You edit the .proto file and run the code generator, which creates stub code for clients and the server. You just need to connect that code to the rest of your app (say, talk to the database here or display the response on the screen here). You don't write actual code for the network part.
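
For anyone who hasn't seen the workflow, adding an endpoint is roughly this (service and message names are made up): add the rpc and its messages, rerun the generator, then fill in the generated stub.

  syntax = "proto3";

  package example.v1;

  service UserService {
    rpc GetUser(GetUserRequest) returns (GetUserResponse);
    // New endpoint: added here, then the generator produces client and server stubs.
    rpc ListUsers(ListUsersRequest) returns (ListUsersResponse);
  }

  message GetUserRequest  { string id = 1; }
  message GetUserResponse { string id = 1; string name = 2; }
  message ListUsersRequest  { int32 page_size = 1; }
  message ListUsersResponse { repeated GetUserResponse users = 1; }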

With the traditional HTTP/REST/JSON approach you would have to write handlers yourself for each codebase separately. And there is no way to make sure they're in sync – only at the organizational level (by static checks, four-eyes policies and such). I know it doesn't sound like too big of a deal because most people are used to it, but gRPC gives a different experience. Hence my initial comment that it's so much easier to work with.


I don't have much experience. I don't understand the complaints around semantic ambiguity. Can't you just declare that you aren't trying to build a semantically perfect system, pick one of the options, and have things be perfectly all right?


For me it's not about building a perfect system. It's this gut feeling that you're using the wrong tools. Like you're trying to build a house, but all you have is a set of old car tires and a bunch of duct tape. It just feels ill-suited for the task.

That's not just REST/JSON, of course. That's the general feeling I have from the web ecosystem. Recently I had to work with a simple HTTP form with one field and a single checkbox that is shown conditionally. Hours of debugging revealed that you can't just POST an unchecked checkbox [1]. There is no difference between "unchecked checkbox" and "no checkbox". Instead you have to resort to hacks with a hidden input field. It all feels hackish, and you constantly question yourself – am I doing something wrong, or is this whole stack just a set of hacks on top of hacks?
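
(For reference, the hack looks like this, with made-up field names: the hidden input guarantees the field name is always submitted, and when the box is checked a second value is submitted, which most frameworks then prefer.)

  <form method="post" action="/settings">
    <!-- Always submitted, so "unchecked" is distinguishable from "no checkbox at all". -->
    <input type="hidden" name="newsletter" value="off">
    <!-- Only submitted when checked; servers typically take this later value. -->
    <input type="checkbox" name="newsletter" value="on">
    <button type="submit">Save</button>
  </form>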

Same feeling with REST/JSON. Once your API grows past simple CRUD you start caring about optional values and error codes. Ambiguity seems fine until the project grows and more people join and introduce inconsistency: one call returns 200 OK for an error, another returns 5xx/4xx (the cost of choice). Now you have to enforce rules with static checkers and other tools. You bring in more tools just to keep the API sane, and it all, again, feels hackish and ill-suited. I don't have this feeling with gRPC - it feels purpose-built for APIs.

[1] https://stackoverflow.com/questions/1809494/post-unchecked-h...


But you have to carry around the schema on top of the data. And they become out of sync among different groups with different versions.

What is the ubiquitous utility for interacting with gRPC? We have curl for REST. What is openAPI of gRPC?


> And they become out of sync among different groups with different versions.

This is technically true, but part of the "grpc philosophy", if you will, is to not make breaking changes, and many of the design decisions of protobufs nudge you toward this. If you follow this philosophy, change management of your API will be easier.

For example, all scalar values have a zero value which is not serialized over the wire. This means it is not possible to tell if a value was set to the zero value, or omitted entirely. On the surface this might seem weird or annoying, but it helps keep API changes backwards _and forwards_ compatible (sometimes this is called "wire compatible", meaning that the serialized message can work with old and new clients).
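
A made-up example: adding a field under a fresh number is wire-compatible in both directions, because old binaries skip field 3 and new binaries just see the zero value when it's absent.

  syntax = "proto3";

  message CreateOrderRequest {
    string customer_id = 1;
    int32  quantity    = 2;
    // Added later. "" is indistinguishable from "not set" on the wire,
    // which is exactly what keeps old and new binaries interoperable.
    string coupon_code = 3;
  }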

Of course you still can make wire-incompatible changes, or you can make wire compatible changes that still break your application somehow, but getting in the habit of making only wire-compatible changes is the first step toward long term non-breaking APIs.

GraphQL, by contrast, lets you be more expressive in your schema, like declaring (non)nullability, but this quickly leads to some pretty awkward situations... have you ever tried adding a required (non-nullable) field to an input?


>What is openAPI of gRPC?

The proto file. Grab that, use protoc to generate bindings for your language and off you go....
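
For example, assuming the Go toolchain and a made-up file path:

  # Needs protoc plus the protoc-gen-go and protoc-gen-go-grpc plugins on PATH.
  protoc --go_out=. --go-grpc_out=. api/v1/service.proto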


> What is the ubiquitous utility for interacting with gRPC? We have curl for REST. What is openAPI of gRPC?

grpcurl[1] combined with gRPC server reflection[2]. The schema is compiled into the server as an encoded proto which is exposed via server reflection, which grpcurl reads to send correctly encoded requests.
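
Typical usage against a local server with reflection enabled (service and method names invented):

  # List services via reflection, describe one, then call a method.
  grpcurl -plaintext localhost:50051 list
  grpcurl -plaintext localhost:50051 describe example.v1.UserService
  grpcurl -plaintext -d '{"id": "123"}' localhost:50051 example.v1.UserService/GetUser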

[1] https://github.com/fullstorydev/grpcurl [2] https://github.com/grpc/grpc/blob/master/doc/server-reflecti...


> What is the ubiquitous utility for interacting with gRPC? We have curl for REST.

Kreya, for example, haha (check the original link of this post).

There are many, actually, including curl-like tools. But I almost never use them. Perhaps it's because my typical workflow involves working with both server and app in a monorepo, so when I change proto file, I regenerate both client and server.

Just once I had to debug the actual content being sent via gRPC (actually I was interested in the message sizes) and Wireshark did the job perfectly.


> should I return 200 OK and error message as a JSON or return 40x HTTP code?

Anyone who returns a 200 OK on error is, of course, in a state of sin.

You’re welcome to return a 4xx + JSON, if that is congruent with the client’s Accept header.

Yes, I’m aware that GraphQL uses 200 OK for errors. That is only one of the reasons that GraphQL is unfit for purpose. Frankly it is embarrassing that it has become popular. It’s like a clown car at a funeral.


Why does nobody ever mention JSONRPC 2.0 over WebSockets? It's one-page-of-docs simple, debuggable, supports realtime notifications, is bidirectional, works in the browser and for your internal APIs in pretty much any language, and probably fits 99.9% of projects' requirements. JSON encoding/decoding is extremely fast in any language due to its pervasive usage.
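
For anyone unfamiliar, a whole exchange looks like this (method names made up); a request without an id is a notification, i.e. fire-and-forget:

  --> {"jsonrpc": "2.0", "id": 1, "method": "user.get", "params": {"id": "123"}}
  <-- {"jsonrpc": "2.0", "id": 1, "result": {"id": "123", "name": "Ada"}}
  --> {"jsonrpc": "2.0", "method": "presence.update", "params": {"status": "away"}}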


Makes more sense to compare it to gRPC at least: REST is not a standard, and JSON-over-HTTP doesn't offer enough to be worth a discussion.

Does JSONRPC have tooling for generating clients or API documentation?


A client is something like 20 LoC in JS with the `ws` npm package?

You mean typed client with runtime assertions, cross language spec etc, right? That's what I mean, people should be discussing jsonrpc with joi vs zod vs jsonschema, their runtime overhead, cross language support, codegen support etc.

Short answer is whatever currently we have for json will work. There is no point in complicating things.


gRPC is a pain in the ass to set up. I strongly disliked the web client library for it.

I don't like REST because it's basically become a specific URL structure with semantic HTTP methods (e.g. POST for create). Practically nobody bothers with the HATEOAS stuff. Things start breaking down if you have a complex resource hierarchy and custom methods.

I've personally just settled on RPC using JSON. Any consumer of the API, REST or otherwise, is going to check the docs for the correct endpoint, at which point there's little to no difference between RPC and REST.


This depends on your language of choice a lot but even still I don’t know how much I agree with it.

It’s not at all obviously wrong on the surface. There is some setup involved, in the sense that you’re bringing at minimum one new tool into your build process, but you are picking up a LOT in exchange for that trade-off, from prebuilt client libraries down to an extremely efficient wire format.

RPC using JSON sounds like the worst of both worlds to me. You end up trading away a huge amount of the benefits, like code generation and general efficiency, just to avoid a pretty reasonable one-time setup cost.

Maybe that’s different for the language of your choice, I don’t know…


REST is an architectural style, HTTP is a protocol that implements that style. People really should read the Fielding thesis to understand what it is, and what it isn't.

The concept of resources maps well to URLs, which name things, ie a URL is a noun not a verb. The REST style is about two endpoints transferring their current state of a resource, using a chosen representation, whether JSON, XML, JPG, HTML etc.

gRPC is a remote procedure call protocol that uses protobufs as a TLV binary encoding for serializing marshalled arguments and responses. It requires clients and servers to have compiled stubs created to implement the two endpoints. Like most RPC, it is brittle, and its abstraction as a procedure call leaks when networks fail.

GraphQL is a query protocol for retrieving things, often with associated (ie foreign key) relations. The primary use is defined in the "QL" of the name.

The benefit of HTTP and the use of limited verbs and expressive nouns (via URLs) is that HTTP defines the operation, expected idempotency, and expected resource "state" after an HTTP request/response has occurred. It has explicit constructs for caching, allowing middleware to optimize responses.

There's nothing in HTTP that requires JSON, the choice of media type is negotiable between the client and server. The same server URL can serve JSON, XML, protobufs, or any other format.
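
i.e. the client simply asks for what it wants (application/x-protobuf here is the commonly used, though unregistered, media type):

  GET /users/123 HTTP/1.1
  Host: api.example.com
  Accept: application/x-protobuf

  HTTP/1.1 200 OK
  Content-Type: application/x-protobuf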

gRPC is yet another attempt to extend the function call of imperative languages to the network. It is the latest in a long line of attempts, Java had RMI, there was SOAP and XML-RPC. Before that there was CORBA and before that there was ONC-RPC. They all suffer from the lack of discoverability, the tight binding to language implementations, and the limitations of the imperative languages that they are written in.

They all end up failing because of the brittle relationship between client and server, the underlying encoding (XDR, IDL, Java etc etc) of the marshalling of arguments and responses is essentially irrelevant.


I agree with all of that, except the claim that procedure calls over a network have "failed". The funny thing is, the way REST is used at 99% of places is just... RPC. People do everything as a POST request (because they're afraid of something being cached), or they may use GET for some things out of a misguided concession to REST, so the verbs really end up having little to do with the actual API semantics, they're just an implementation detail. HTTP's verbs may define idempotency, but it's up to the programmers to carry that ideal forward into their implementation. No one is doing programmatic (or even development-time) API discovery, so that aspect of REST goes to waste, also.


Dynamic api discovery is useful when you are traversing complex linked data from a service.

For example https://jsonapi.org/format/ focuses on traversal of relations.

If you are doing RPC-on-HTTP then it is a bad idea.


The first thing everybody using "REST" should do is completely ignore Fielding's thesis. So much internet comment time is wasted on interpreting that document. It's overly confusing.

I'm fine with the idea that URIs reference resources and HTTP actions are verbs.

I also take issue with the claim that gRPC and protocol bufs are brittle; the protocol was explicitly designed to allow older servers to process messages from newer clients (and vice versa) to the best of their ability. More importantly: there are enormous production servers that see waves of server updates (while their associated clients are also getting updated), sending petabytes to exabytes to each other every day; in that sense, it's clearly not brittle, or far more users would have a negative experience.

I just spent several weeks onboarding a new system that is based around REST and JSON Schema. JSON Schema... like most things with JSON and Javascript, feels like it was implemented in a hurry by non-experts who wanted to solve a problem and made something simple enough that large numbers of users adopted it. Now we're stuck with core technology that is based on less-than-awesome technology. Most folks don't even use the schema, and the document databases that receive the JSON blobs just sort of treat them as a dynamically created schema defined by the envelope of all extant messages (see, for example, dynamic mapping in Elasticsearch).

(My experience includes XDR and SunRPC, CORBA, protocol bufs, stubby/grpc, XML, WSDL, SOAP, and many more systems. I am not authoritative, and I have my own strong opinions based on experience. But I have to say, I'd rather work in a grpc/protobuf world than a REST/JSON one. It's much more robust.)


In what way is REST more discoverable than RPC alternatives? SOAP had WSDL; you could just hit an endpoint and download the whole schema for a web service. With most RPC protocols there's some sort of formal published IDL. That all sounds a lot more discoverable than anything in the REST world, which is pretty much "go read the docs".


How are you using this web site? The entire thing is "discoverable".

What you're talking about is how can you publish a machine readable discoverable API. Just like RPC, there is no "well known" endpoint for getting the API specification.

The RPC IDL is effectively the same as "go read the docs". How is downloading a "formal published IDL" any different to a "formal published OpenAPI specification"?

gRPC just has an entire infrastructure of compilers, parsers and language libraries that generate stub code that you then have to go and "fill in".

OpenAPI is a pretty good standard for defining an HTTP based REST API.


For gRPC, the protobuf is the API specification: a service definition with endpoints, what requests to those endpoints should look like, and what responses look like. Of course, there are better and worse implementations, e.g. a well-commented proto definition explaining what various args do, etc.

In gRPC the definition is a requirement for using the API, so at the bare minimum you have the typed structure of requests and responses. There is no such requirement for REST.


gRPC has well-known services ServerReflection and ProtoDescriptorDatabase that allow clients to discover all available services, create request bodies, and parse response bodies, without having built-in protocol definitions. It is more discoverable than vanilla REST.


> gRPC is yet another attempt to extend the function call of imperative languages to the network.

Not really. Grpc is just sending/receiving messages to/from an address. Other protocols like COM/CORBA tied the address to an object.

Also there is nothing to prevent you from writing a grpc service in a resource/entity oriented style while REST makes expressing non-resources like actions seem a bit awkward.


I don't know why JSON over HTTP gets so little love on here. It's probably the most popular option. If you don't want a resource-oriented API you don't need to throw the baby out with the bathwater and go all the way to gRPC.


One problem I find is that it is so unstandardized and ad hoc beyond the network level that you basically need a completely bespoke client/parser for every API, even within larger companies if the teams are more independent. Meanwhile if you have one way to do gRPC or GraphQL clients, they fit anything, and you can focus on what you actually want to do with the API.

GraphQL especially is excellent at this, with Apollo federation [1] allowing you to interconnect all the APIs of your entire company into a single endpoint and reference each other's data, which is extremely powerful and allows development to move fast if done right.

[1] https://www.apollographql.com/docs/federation/


OpenAPI?


More closely related to gRPC, Protobufs over HTTP is great too. You can then take advantage of cache-control, last-modified, etc. headers in browsers and on CDNs instead of having to reimplement them. My current favorite way to make APIs is using this, and allowing the user to request JSON as well, using a query parameter or header. This is easy to do with protojson conversion.
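
A rough sketch of that handler in Go, where orderpb.Order stands in for some generated message type (the import path is made up) and the header/query negotiation is just one way to do it:

  package api

  import (
      "net/http"

      "google.golang.org/protobuf/encoding/protojson"
      "google.golang.org/protobuf/proto"

      orderpb "example.com/gen/order/v1" // hypothetical generated package
  )

  func writeOrder(w http.ResponseWriter, r *http.Request, o *orderpb.Order) {
      // Serve JSON when the client asks for it, binary protobuf otherwise.
      if r.Header.Get("Accept") == "application/json" || r.URL.Query().Get("format") == "json" {
          b, err := protojson.Marshal(o)
          if err != nil {
              http.Error(w, err.Error(), http.StatusInternalServerError)
              return
          }
          w.Header().Set("Content-Type", "application/json")
          w.Write(b)
          return
      }
      b, err := proto.Marshal(o)
      if err != nil {
          http.Error(w, err.Error(), http.StatusInternalServerError)
          return
      }
      w.Header().Set("Content-Type", "application/x-protobuf")
      w.Header().Set("Cache-Control", "public, max-age=60")
      w.Write(b)
  }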


Agreed! Had good success doing this with a toy mapping web app. Serializing polylines with Protobufs significantly reduced payload size compared to a JSON `[number,number][]` and as it was HTTP I could offload onto a CDN by setting cache-control headers.

Developer experience is also much nicer. Instead of repeatedly writing JSON serializers/deserializers on the backend + frontend you get them generated for free. I know there are various OpenAPI/JSON Schema code generators, but for me I'd rather use Protobufs at that point and get the benefits of a strongly typed schema, schema evolution, reduced payload size (so a faster experience for the user), cheaper bandwidth, and cheaper storage if you currently store opaque JSON blobs in the db.

JSON APIs are APIs in debug mode.


In other words, by using a RESTful style, with HTTP as the protocol, you get all the benefits of REST. The fact that you're using protobuf as the media type is irrelevant to the protocol.

As for the content negotiation for JSON vs protobuf, that's also defined in the HTTP standard.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ac...


You can get very close with Twirp, without losing endpoint definitions. It’s such a simple protocol you can even manually implement client wrappers or server endpoints without much difficulty.


Thanks for the suggestion, I'm considering forking that and adding support for GET requests since caching POSTs is not ideal


That sounds like REST but more awkward to use.


What I'm referring to is what most REST APIs actually are. You mostly just use GET & POST, and URLs are more arbitrary and often verbs instead of nouns. Pretty much nobody uses "true" rest.


> Pretty much nobody uses "true" rest.

[citation needed]

You're using it via your browser on this very website.


That is because HTML is intrinsically built on REST ideas. But few people develop browsers.


When I converted all the microservices in my big app from HTTP to WebSockets, my test automation performance in the browser instantly became 7x faster. I am still using JSON and not protobuf.

The reason for the performance difference is that a WebSocket is a single stateful TCP socket with a tiny extra binary frame header, sending messages back and forth. HTTP is stateless so it creates a new socket for each and every message. HTTP also has a round trip, request/response. The round trip means twice the message frequency as compared to the fire-and-forget nature of WebSockets.


On the server side what do you use? Is it custom code to essentially reimplement the usual RPC patterns and multiplexing, etc?

Or are you using something like trpc?[0]

Another thing I’ve thought is that Content-Type plus the appropriate headers could easily open up using compressed/efficient binary serialization on the web as well and unlock even further gains.

One thing though is the issue of how WebSockets complicate scaling horizontally for simple request handling.

[0]: https://trpc.io/docs/subscriptions


On the server I use WebSockets between peers. I use my own WebSocket implementation but I try to stay within RFC6455 even when not talking to browser just so that I have a uniform format processing.

I can see how protobuf would be handy if you are just talking only to a database or directly transmitting a binary BLOB, but I rarely do any of that. Most of my messaging is between microservices used by the application, so I just use JSON.stringify for each message. I know there is overhead to that, but it works well for me. I should probably find a way to pull my application data directly into a binary format and back out without string handling, but I have not figured this out yet. Then instead of 7x faster than HTTP my message handling would be 10-11x faster.


> HTTP is stateless so it creates a new socket for each and every message

Well, HTTP 1.1 supports persistent connections and pipelining[1].

Though I suppose if you're a JavaScript in the browser it's easier to just use WebSockets.

[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Connection...


gRPC is kind of plagued by its strong coupling with Protobuf, which is a disappointment.

Imo, what could have been a decent piece of engineering has been killed by that "you should do like Google does" mentality, with some highly debatable design choices leaking into an unrelated standard (hello, defaults).


I actually have the opposite feeling, where I mostly like the Protobuf ecosystem but find the actual gRPC implementations in the languages I’ve tried (Java/Go) extremely frustrating to use in practice, especially for anything performance sensitive.


I don't understand why gRPC would use HTTP/2 as a default instead of WebSockets. HTTP was designed for transferring documents and resources in a way which maintains file types. When you're just transferring raw data, HTTP adds unnecessary overhead and complexity. WebSockets, on the other hand, were designed precisely for raw data transfers.


And the first thing that people do when they start using websockets is to define a message layer.

HTTP is designed for transferring media types. It can transfer the application/octet-stream if you want to transfer "raw" binary.

If what you want to do is stream binary data, there are better protocols than TCP or UDP for the purpose. The use of WebSockets is a way to get binary streams through the HTTP "firewall hole" on port 80/443, not because it is more efficient.


There are bridge/tunnels out there. I use this with Unity client -> C# server (non-Unity): https://github.com/Cysharp/GrpcWebSocketBridge


Fwiw, you can't use gRPC over HTTPS proxies if they don't implement ALPN correctly, and some of the biggest vendors of corporate security HTTPS proxies don't. It turns out a bunch of Google Cloud APIs are only available as gRPC. So you basically can't use Google Cloud APIs from a corporate network.

The internet and common protocols used to be designed with real world use cases in mind, but now all technology is designed with only one generic consumer user in mind, and everybody else gets left behind.



