gRPC-Web: Moving past REST+JSON towards type-safe Web APIs (improbable.io)
329 points by bestan on April 27, 2017 | 132 comments



I first saw this one a few weeks ago, and have been trying to weigh up its pros and cons over GraphQL (my tool of choice).

gRPC-Web:

* Speaks protocol buffers, a fast and compact format compared to JSON

* Allows clients to use the same APIs as backend services.

GraphQL:

* Enables a client-centric view of the system. I have abstractions in my GraphQL server that only make sense to clients. It's a query-centric implementation of the Backend-for-frontend pattern, where the owners of the service are also the consumers.

* Enables an entire UI's requirements to be fetched in one go (or optionally split up, if some content is less important). To achieve the same level of aggregation performance in gRPC would require building something analogous to GraphQL.
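
To make that concrete (a rough sketch; the schema and field names below are invented, not taken from the article), one query can cover a whole screen's data in a single round trip:

    const query = `
      query ProfileScreen($userId: ID!) {
        user(id: $userId) {
          name
          avatarUrl
          recentOrders(first: 5) { id total }
        }
      }`;

    async function loadProfileScreen(userId: string) {
      const resp = await fetch("/graphql", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query, variables: { userId } }),
      });
      return (await resp.json()).data; // exactly the fields the UI asked for, in one round trip
    }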

The other benefits of gRPC-Web outlined in the article (generating TypeScript bindings) are equally possible with GraphQL (Relay Modern generates Flow types, and is probably just one pull request away from supporting TypeScript too).

The status code standardisation only makes sense for single-purpose endpoints/calls; once you're dealing with aggregations, the semantics of a downstream status code will vary depending on the use case the query fulfills.

I think both solutions have use cases, and can even happily co-exist. I don't believe gRPC-Web to be as much of a game-changer as GraphQL, but for certain scenarios (needing to rapidly fetch streams of data that don't have cross-dependencies) I can definitely see the benefits of a solution based on gRPC.


    Allows clients to use the same APIs as backend services.
Whether this counts as a bug or a feature depends on your APIs. I'm currently unfucking a suite of applications which bought into "your SPAs can just call backend services directly!" without getting a better security model - so the SPAs use hard-coded tokens that don't do any authorization, just like the backend services... facepalm


Sounds like those backend services needed better security too. It seems orthogonal to whether you have 3 different API surfaces (one for web, one for mobile app, one for backend servers), or the same for all. At Google, gRPC allowed us to move from the former to the latter.


Personally I consider it a bug, but I was trying to avoid being too dogmatic in my comment, given it's clear I'm already decidedly biased in favour of GraphQL.


> Speaks protocol buffers, a fast and compact format compared to JSON

This is definitely important at Google-scale, but for the rest of us compressed JSON typically isn't that bad.


If your users access your website from their phones, the latency and CPU benefits of proto become important for them.


You should also test the speed. JSON.parse is surprisingly fast in modern browsers and binary access is surprisingly slow.


Unless you're, you know, streaming anything other than UTF8.


Last time I tried protocol buffers it was dog slow, json was pretty fast, and msgpack was insane fast. This was on a heavily nested dict of dicts structure.


Does GraphQL come with its own serialization protocol? I would have thought that the comparison of gRPC to GraphQL is apples/oranges.


GraphQL defaults to JSON over HTTP, but you can funnel it through sockets or protobuf or anything else. The conflict here is that gRPC goes beyond a serialization protocol - it's a full strongly typed RPC layer. You define your objects, methods, fields, types, relationships, data resolvers, execution pathways, etc. Just like GraphQL.

I see no meaningful advantage in binary serialization - JSON is fast enough not to be an issue, and HTTP2/GZIP minimize any bandwidth advantage. I do see a big advantage of GraphQL in tooling and query composition, but gRPC provides the building blocks required to rebuild that. gRPC-Web is the GraphQL of 12 months ago - it's definitely on the right path to help with complex data wrangling. My question is - does it do something important better/differently to GraphQL to warrant new players entering the game and catching up to make it a strong competitor?


It uses JSON? So you compare it to that.


The default is JSON, but the spec doesn't mandate that it be JSON over the wire, or even that it be over HTTP.


Also, in the JSON the "query" is actually a string in the GraphQL language. So GraphQL is an encoded format embedded inside JSON.

{ "query": "mutation Foo() { ... }" }

As such, type safety is dependent on the query language (GraphQL), not JSON. As far as the JSON goes, it might as well be multipart form encoded.


Nothing says you can’t have clients use the same APIs as backend services with GraphQL, and GraphQL proper doesn't require JSON; it just requires a map content type (more or less, per https://facebook.github.io/graphql/#sec-Serialization-Format).


I don't understand this, to be honest. What does type safety have to do with serialization formats or application protocols? I've used Servant (Haskell) to define the REST API for my backend, which gives me type safety, server stubs and generated client code for free. In my view, type safety is about what you internally use to represent your API, and has nothing at all to do with the underlying protocol(s). There's nothing about REST+JSON that prevents type safety, as far as I'm aware.

I plan to switch to protobuf3 for the serialization format, since this offers both a JSON and binary representation. Why would I want to choose gRPC+proto3 over REST+proto3?


I think what they mean is that by defining the contract first as a .proto file, and by having type-safe languages automatically read it and generate code, they are able to have a sort of cross-language type safety.

If you create a method like

    double GetThing();
and then you want to change it to:

    int GetThing();
All you have to do is change it in your proto, then both the TypeScript in the browser and the Go code on the server will adapt, and shout at compile time if the types don't match. This wasn't the case when the server was sending JSON to a web listener. You'd have to hunt down the dependency to that method and change it.


But unless you control all client code, you can't check the client code at server compile time.

You're still breaking and forcing a refactor by all your clients and there's no way to track that with type safety.

That said, this use case seems to be for a single web front end and Go back end, but that part is left out of the title.

Protos are fine, Google likes protos, gRPC works for Google because they have that insane CI system that builds every project at once... but any schema would work, and you can be generating and checking against a schema for a JSON API as well. You don't need to move past REST and JSON to get what you're asking for.


GP was a bad example. You probably wouldn't have an RPC returning a simple type. It would instead return a message (aka a Go struct, or an object in JS) which has various fields (or even just a single field). Protobuf messages have built-in backwards compatibility as long as you don't change field numbers. That way you can incrementally change your messages without breaking older clients.
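
A minimal sketch of what that looks like (the message, fields and generated shapes below are invented for illustration, not from the article):

    // Assumed .proto (sketch):
    //
    //   message Thing {
    //     double value = 1;   // present since v1
    //     string label = 2;   // added later under a NEW field number
    //   }
    //
    // Old clients compiled against v1 simply ignore field 2 on the wire; new clients
    // talking to old servers see the field's default value. The generated TypeScript
    // shapes would look roughly like:
    interface ThingV1 { value: number; }
    interface ThingV2 { value: number; label: string; }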


I am aware this is not a real-life scenario, I'm just saying what he calls type-safe is actually something like cross-language static typing.


To change this without breaking existing users, including internal ones, you shouldn't redefine the method signature, you should add a new one and deprecate the old one.


> There's nothing about REST+JSON that prevents type safety, as far as I'm aware.

Well, except for the fact that JSON is pretty much untyped? I don't understand what there is to not get. If you want type safe RPC you have to use an RPC system that at least has types! What stops me sending `{ name: 12, age: "Hello"}` to your JSON RPC system?


JSON is just a serialization format. Everything you send over the wire is just a bunch of bytes, including JSON and protobuf. JSON has types, and you can use it with a typed RPC system just fine. Whether or not clients can deserialize something you send them can't be known statically, even with protobuf.

> What stops me sending `{ name: 12, age: "Hello"}` to your JSON RPC system?

Nothing. And the same applies to a typed RPC. You don't have to use the typed client. Or you could be using an old version, or it could be misconfigured, etc. You can enforce schema on the server and client side, but you'll never really know if it works statically, since you don't compile and deploy the server and client atomically.


> And the same applies to a typed RPC.

Erm. It clearly doesn't. Because with typed RPC the generated functions will be something like this:

    sendDetails(name: string, age: int);
And you'll get a compile error if you use the wrong type (assuming you are using a language with proper typing - which they are). With JSON you can't do that (without crazy hacks anyway).


Yes, you don't have the guarantees of statically compiled/linked code, but the entire point of using protobufs is that if you use the generated interfaces, you'll end up with fast binary serialization/de-serialization with type safety. That's a lot better than just using JSON, which is very verbose.

You could of course make your own protocol and type system and use JSON for transmission. Why bother, when this is done for you and is likely going to have the best adoption when it comes to interface definition languages like protobuf or Thrift?


I agree, I love protobuf and JSONSchema and Thrift, I think they're great ideas that solve important problems.

But the hype is getting carried away--they aren't a silver bullet for distributed systems, they're just a way to manage schema. Define a schema in an IDL, and Protobuf/Thrift/whatever generate schema validators and serializer/deserializers for clients and servers. But they are not type safe across systems, and that they compile doesn't imply anything about whether they will actually work at runtime.


Sure, and gRPC doesn't guarantee that the server is turned on when the client makes a request. Some expectations are unreasonable.

On the other hand, if you publish one set of proto files and all clients & servers consume these artifacts, the system as a whole is more "typesafe" and reliable than if you just post swagger docs and expect all your developers to check them daily for updates.

gRPC means API changes stand a good chance of getting caught by build systems. That seems about as reasonable a definition of "type safe across systems" as can be expected.


I agree completely which is why I love proto.


Couldn't have put it better myself.


> fast binary serialization/de-serialization with type safety

I'm not sure about Protobuf3, but certainly the older versions of protobuf could never be as fast as some other formats, because the message layouts were dynamic, meaning that you had to have lots of branches in the reading code.


Protocol Buffers can even serialize down to JSON (as does Thrift, for example).

Client code for PB+JSON has been a part of Google’s Closure library in fact (open source), and they used it in production for Gmail’s RPC, but I’m not sure if it’s still supported/recommended for PB 3.

The reason to use JSON instead of the compact binary representation has been that JSON.parse is fast and doesn't require any additional code. But nowadays JS is fast enough in most cases (unless you're doing something like streaming with media exts), and you can also ship a WebAssembly/NaCl library and (soon) delegate parsing to a Web Worker with shared memory, which should altogether give the best possible performance.


nothing stops you sending bad data to a gRPC endpoint either


On the contrary, generating the client code using the same .proto definition as used by the server stops your client from sending bad data to the gRPC endpoint. Your binding code will fail to compile once the data definitions compiled from the .proto have changed. Then the client that sends bad data cannot be built.

They are talking about a work flow that prevents you from compiling bad clients. Of course if you don't use that workflow, you will still be able to make a bad client.
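
For example (invented names, not the actual generated API), the generated request type only exposes typed setters, so the bad payload from upthread can't even be constructed:

    class PersonRequest {
      private name = "";
      private age = 0;
      setName(name: string): this { this.name = name; return this; }
      setAge(age: number): this { this.age = age; return this; }
    }

    const ok = new PersonRequest().setName("Alice").setAge(30);
    // const bad = new PersonRequest().setName(12).setAge("Hello"); // fails to compile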


"Bad clients" in your example don't include malicious ones, who'll see a gRPC endpoint generating JSON to be consumed in a JavaScript app.

With no runtime type checking, JS's casting problems, and potential bugs caused by leaning on "type safe" serialization, there could be lots of black hat opportunities...


Protobuf/gRPC are not specific to any given language.


Well aware. I was answering mainly with regard to GP's quote, agreeing with IshKebab:

> There's nothing about REST+JSON that prevents type safety, as far as I'm aware.


> If you want type safe RPC you have to use an RPC system that at least has types!

If you want a type-safe RPC, you have to use a proper RPC system in the first place. It doesn't matter if it is gRPC, SOAP, XML-RPC, JSON-RPC, or Apache Thrift. REST doesn't even define a serialization format or error reporting.

> you have to use an RPC system that at least has types! What stops me sending `{ name: 12, age: "Hello"}` to your JSON RPC system?

So you claim that JSON doesn't have types, right? And, by extension, Python and Lisp don't have types either, by the very same argument? You know you're being ridiculous now, right?


JSON's types are limited (object, array, string, number, boolean, and null) and it provides no standard way to define new types. It frequently devolves into ad hoc data structures with stringly typed values. I'm pretty sure that's what @IshKebab meant by JSON being "pretty much untyped".


I'm pretty sure that's not what he said. He used an argument that is not about how much one can express directly, but about the dynamically typed nature of JSON.


gRPC has some benefits over REST. Since it's declarative, the entire API can be expressed as an RPC block in the .proto file, so your clients in multiple languages can be generated from this spec. With REST, you'd have to tell clients what protobuf structs to use with what endpoints. But of course the end result is roughly the same, once you've built clients and servers.

gRPC's built-in streaming and support for out-of-band data ("header" and "trailer" metadata) is also nice. For example, a gRPC server can emit trailing metadata containing an error, if something goes wrong halfway through a streaming request. Without this, you'd have to build a custom "protocol" on top of your Protobuf REST stuff; the stream would/could emit oneOf-structs that contain either payloads or metadata.

One downside to gRPC/Protobuf is that it's much less convenient to perform quick and dirty interactions against the servers. There's no standard, I think, for exporting the gRPC API to a client that doesn't already have the .proto file; I could be wrong, but don't think you can build a generic client that can talk to any gRPC server, list its available endpoints and so on.
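
As a rough sketch of the streaming point (the call and handler names are invented, not the real gRPC-Web API): payloads arrive incrementally, and the trailing status can still report an error after some messages have already been delivered.

    interface Book { title: string; }
    interface StreamHandlers<T> {
      onMessage(msg: T): void;
      onEnd(code: number, trailers: Map<string, string>): void;
    }
    declare function listBooks(req: { shelf: string }, handlers: StreamHandlers<Book>): void;

    listBooks({ shelf: "scifi" }, {
      onMessage: (book) => console.log("got", book.title),
      onEnd: (code, trailers) => {
        // code 0 = OK; a mid-stream failure still arrives here as trailing metadata
        if (code !== 0) console.error("stream failed:", trailers.get("grpc-message"));
      },
    });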


The reference implementation of gRPC on github[0] has 998 open issues and 215 open pull requests. Every time I've tried to use this package I have encountered a previously-reported issue which has remained unfixed for months.

If you need to interact with Google platform it's hard to avoid using gRPC, since many "official" libraries seem to be migrating towards this library, while it remains fragile and bug-ridden. My "days since gRPC problem" counter is currently on "2", after hitting an issue which /crashed my python interpreter/ and required altering apache config to workaround[1].

[0] https://github.com/grpc/grpc/issues

[1] https://github.com/grpc/grpc/issues/9223


The gRPC project is surprisingly shoddy, given that it's Google's "next-generation" client protocol that's being slowly implemented across all Google products.

The state of the various Protocol Buffers projects isn't great, either. Google's Go implementation produces some awkward Go code, but efforts to increase user-friendliness are generally being knocked down (by Googlers, I think). Pull requests are lying untouched.

There's a fork called Gogo-Protobuf [1] that tries to evolve the Protobuf project and make it friendlier and faster. One way in which it is nicer is that it makes some attempts to make the generated Go code more idiomatic Go. But it's also hobbled by the above problems. The biggest challenges revolve around two areas: The canonical JSON mapping (which is particularly bad at "oneOf" structs), and custom types (Gogo adds things like better date/time support).

The Gogo team (understandably) doesn't want to diverge too far from the mainline, but that means open issues are stagnating while the PRs in the upstream Google project are stagnating.

These issues of course then also leak into related projects that extend Protobuf, such as the Go gRPC gateway, which implements a REST proxy on top of gRPC. Since the JSON mapping is so lacking, the JSON data you get out isn't quite idiomatic JSON, which makes the gateway pretty much useless for a lot of applications. (Note: it's been about 6 months since we tried to use it and had to give up; it may have improved since.)

We also ended up forking the official JSON marshal code to work around the various issues with it. Other projects such as CockroachDB also do this.

[1] https://github.com/gogo/protobuf


I agree. As much as I really wanted to use gRPC on Go for a current project, this bug ultimately scared me away: https://github.com/grpc/grpc-go/issues/1043

Big kudos to everyone working on it (most of whom are Google engineers, some very senior), but I can't help wondering what other issues might be lurking if that one went unnoticed for so long.


The thing that went unnoticed with that issue is that grpc-go didn't have an option to manually increase the window size. Both the Java and C implementations had the option much earlier; I see a commit adding it in Java in March 2015.

At least by August 2015 I was working on automatic window size tuning, which I've not seen in another HTTP/2 implementation to date. July 2016 it was implemented in Java, but remains disabled due to interoperability concerns that we need to spend time addressing. C now implements something similar and it may be enabled in 1.3.

For flow control to work promptly the window size should be only as large as necessary, so there is cause for keeping it small. Since the lower value is appropriate for many networks, it isn't completely outrageous.

I think the biggest failure on gRPC's part here is not having a notice in a widely-read part of documentation informing users to be aware of the limitation. That's easier said than done, but that doesn't negate its importance.


It is worth pointing out that the Python implementation is particularly bad. Perhaps gRPC is really pleasant to use with Java and Go, but the Python implementation is neither usable nor stable enough for it to be worth considering its use for one's own services.


We haven't tried using gRPC in Python, as we have completely migrated away from Python towards Go.

Our experience of using gRPC in Java, C++ and Golang is pretty good. While it had some initial teething issues (when it was first released), the libraries have generally been a non-issue since the gRPC General Availability (GA-1.0 version).

If you're considering using it in Go, check out the https://github.com/grpc-ecosystem/go-grpc-middleware helper libraries that we've contributed back to gRPC Ecosystem.


Ruby too was pretty bad - more so Protocol Buffers than gRPC itself though. In my previous place, I had to fork the Ruby library and write a non-trivial amount of C code so that instances of Protocol Buffers messages could be treated as regular Ruby objects by their callers. Once I opened a PR against upstream it was quickly accepted but took some 6 months to be published to Ruby Gems. I'm not saying to stay away, but please be prepared for surprises.


My instinct would be that it'll be just inherently nicer to use in a statically-typed language.


Python can be statically typed.


Maybe I'm wrong here, but the number of open pull requests isn't a bad thing. If you start poking at them, you will see that people submit pull requests to have the full gRPC test suite run against their changes (which probably includes cross-platform tests). So maybe putting a pull request up is the only way to get Jenkins kicked off, which then means it is part of the standard flow when doing work on the project to test changes.


I own about 15% of those PR's at any given time... and you're right: our Jenkins instance looks at open GitHub PR's, so to run all the tests somewhere, and to verify performance, we open a PR. Many of mine are lower priority things (or exploratory enhancements) and so they'll stay open for even a quarter at a time before it really makes sense to land them.


The author addresses this in the OP:

"We later learned that Google’s gRPC Team was working on a gRPC-Web spec internally. Their approach was eerily similar to ours, and we decided to contribute our experiences to the upstream gRPC-Web spec (currently in early access mode, still subject to change)."

I suspect anyone attempting to adopt this implementation is going to hit similar issues until a release is finalized.


I've been using C++ for the server and Python for the client, and it's worked well. What issues did you hit?


Number of GitHub issues is not a good metric for quality.


I think I like gRPC, but have several reservations regarding replacing REST with it. REST web services often return multiple MIME formats, not just pure structured data. Some services return images, others return HTML, and then you also have caching... Maybe I just don't know enough about gRPC, but I can already imagine many people passing images around as byte arrays inside protocol buffers, and when we look back, we'll have reinvented SOAP, which reinvented CORBA.


Totally agreed. There are cases where you basically want to deal with well structured web resources: HTML, images etc. For these, HTTP is a perfect fit.

What we're replacing with gRPC is usage of REST (URL-encoded resources) + JSON for application APIs, not really Web-resources.

What we found is that gRPC is really good at capturing both a resource-oriented API (we use similar conventions to Google's excellent API Design handbook https://cloud.google.com/apis/design/resource_names#resource...) and imperative ones. The major difference is that we no longer have a weird POST method with `/books/do_recalculation` that breaks the RESTfulness of the API.


gRPC generates rpc server and client stubs based off a protocol buffer definition. Saying it's a replacement for REST makes little sense since it's possible to define a REST API within it. It's also possible to write a totally not RESTful API in a modern http api framework.

In fact that's what Google's API design guide does. Encourages RESTful API design then describes how to implement them using proto and gRPC.

The more interesting tradeoffs are proto vs json or other and how this restricts message patterns to request/response (rather than pub/sub or push/pull)


Interesting! When I first looked at gRPC I missed the option(google.api.http). Are you aware of the reason why REST-mapped gRPC is not possible in GAE (HTTP/1 only on the server end of our code)?


There's actually a stand-alone proxy that translates the REST mappings of `google.api.http` into gRPC requests. It relies on code-generation: https://github.com/grpc-ecosystem/grpc-gateway

This has been the way we've been shipping our REST services until now, but the need to recompile the proxy was a major hindrance to our development speed. Hence the gRPC-Web implementation.


Probably because it's pretty bleeding edge.

Google Cloud Endpoints, which was released earlier in the year, allows you to write a gRPC server, and offers the HTTP proxy as part of the service.

The ecosystem still isn't quite there though. It could be easier to write a thin webserver that just points at services over TCP (using something like ZeroMQ) rather than writing a service with gRPC from the ground up.


Stub generation? Remote access protocols? CORBA all over again?

So, now I know the next big thing. Portable distributed objects. :-P


It's wrong to assume that just because gRPC shares some design features with CORBA, it shares every CORBA feature or every CORBA shortcoming.

There have been technologies to generate stubs, skeletons, and protocols from specification files for at least 30 years. Some of the older designs sprung out of a client/server world, whereas newer designs deal with today's reality, e.g. interoperability with today's web and the need for horizontal scalability.

What hasn't changed is that it's still useful to describe a communication protocol in a declarative way and then rely on a code generator to provide the code to work with that protocol.

Google protocol buffers offer advantages beyond that, but you may need to look beyond any preconceived CORBA biases to appreciate them.


CORBA tried to make remote calls look local. Making them look remote (and potentially even making some local calls look remote) is a very different design principle that leaks through almost every bit of the standard, and definitely leaks through to every bit of the implementation.


It's funny to complain about CORBA's local/remote muddling in 2017 when we have moved on to microservices that also talk to each other locally with TCP/IP.


CORBA was superior in that it had a distributed two-phase commit protocol, which gRPC doesn't have, and which is very critical in the banking/finance industry.


... on the Blockchain!


Or AMF. Has everybody forgotten about Flash and AMF?


Everything old is new again, except tunnelled over HTTP.


You mean like SOAP?


Like CORBA, EJB, RMI, and a pile of other rpc libs and specs that died over the past 40 years.


Another hammer for a multifaceted problem. Keep in mind that people have a lot of different use cases where JSON REST APIs are the lesser evil.

I only skimmed through the spec for gRPC, but the protocol seems very limited. I don't like the 'gRPC status codes', whereas HTTP status codes can at least be grouped in ranges.

The abstraction from the technology/protocol should not be the issue compared to the abstraction from the core business logic. When handling multiple consumers, customers and technologies I tend to worry more about where logic is handled and where data is stored, compared to how it's transferred.


REST + JSON is simple, easy to debug, and it does the job. Web clients speak JSON, servers speak JSON, humans can read JSON usually. JSON can be gzipped so you get some benefit there.

gRPC is another large pile of foreign C code that's essentially a black box. If there is a buffer overflow in there that your code somehow hits, you'd have to know how to debug it and fix it.

Also, chances are you are not Google, Facebook or Amazon, and you don't really do BigData, just, you know, regular data.


   and it does the job.
It does some jobs admirably. Because of the tooling and familiarity, it's often asked to do other jobs, and here the results are decidedly mixed.


> gRPC is another large pile of foreign C code that's essentially a black box

Not for web, as this is generating Typescript code on top of the fetch API.


Just saw this during the morning reading. Without a deep dive, this looks very promising. We have recently been moving to k8s and grpc for our node work and the last piece was how to get to the browser. If this ties it all as one straight protocol from db to browser it will be very welcome and could not have come at a better time. We were evaluating the alternative (GraphQL etc) but our experiences with node and grpc have been excellent so far.

Immediate evaluation planned.


That sounds very much like what we're doing, except our microservice stack is in Go. For Node.js you probably want to front your pods with grpcwebproxy (https://github.com/improbable-eng/grpc-web/tree/master/go/gr...)

If you hit any snafus, please file bug reports in https://github.com/improbable-eng/grpc-web We're happy to help :)


Much thanks. I'll add it to the immediate list.


The title ("by Google and Improbable") is a bit misleading since it implies a collaboration between the two, when it's actually Improbable's own implementation that they released ahead of Google's pending spec.


One of the authors here.

This is indeed our own implementation of the pending spec. We are in touch with the gRPC team to make sure that their (still unreleased) implementation is cross-tested with ours.

The benefit of our implementation is a relatively lightweight client-side lib for TypeScript and >=ES5, and a "ready-to-go" Go middleware.


Unless I need high throughput, I am sticking with REST+JSON as it's good enough for most things and easier to debug by sticking a proxy in the middle or using ngrep.


This kind of stuff couldn't happen soon enough. REST is so arbitrary and less than useful for building UIs. It's just bad.

I've got Relay talking to a GraphQL service built in graphql-java which then talks to a gRPC service layer. The gRPC service layer is a great fit for GraphQL. Some type safety all around, but there could always be more. And there could always be less JSON. Please, no more JSON. The only thing it's good for is debug logging.


I'm using my own gRPC services and Google's speech recognition gRPC API and I absolutely love it. Protobuf/types, generated code, clear API contract.


I'm not sure why you need to modify your serialization or protocol to get type safety.

I've been using NSwag (https://github.com/NSwag/NSwag) to generate TypeScript clients from ASP.NET controllers and it works great. It can generate TypeScript request/response handlers, and interfaces or classes for any public-facing models.



Personally I'd prefer a promise-based API, e.g. stub.QueryBooks(qbr).then(...)
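
Until then, a thin wrapper gets you most of the way there (a sketch with invented names, not the generated API):

    interface QueryBooksRequest { author: string; }
    interface QueryBooksResponse { titles: string[]; }
    type Callback<T> = (err: Error | null, resp?: T) => void;
    declare const stub: {
      queryBooks(req: QueryBooksRequest, cb: Callback<QueryBooksResponse>): void;
    };

    function queryBooks(req: QueryBooksRequest): Promise<QueryBooksResponse> {
      return new Promise((resolve, reject) =>
        stub.queryBooks(req, (err, resp) => (err ? reject(err) : resolve(resp!)))
      );
    }

    queryBooks({ author: "Le Guin" }).then((r) => console.log(r.titles));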

A benefit of gRPC coming to the web is that someone will inevitably build a tool to parse a .proto file and generate a UI to test your microservices during dev. That will be cool.


Not exactly a promise, but gRPC has an asynchronous API.


I acknowledge it could just be me and the specific projects I worked on, but I've never been encumbered by an API style. RPC, REST, GraphQL - I almost find them all to simply differ in syntax.

I've managed to solve all my use cases using all three with equal effort, time and with comparable outcomes.

There's value in compression and faster serialization/deserialization formats when and only when micro-performance becomes an issue. Other than that, I think programmers spend way too much time debating over these, where I don't see any one of them providing an ROI advantage over the others.


REST (in the pure sense) vs RPC is a valid comparison, but GraphQL solves a different problem. GraphQL is an alternative to other gateway API or backend-for-frontend solutions.

The benefits of a gateway API to both developers AND users are hugely significant. These are decidedly NOT micro-optimisation.


What's a gateway API? I acknowledge, I don't do a lot of frontend work. As I see it, GraphQL is just a query syntax that uses a JSON-like language and where the query engine is built in code on the backend. You could replace all queries by REST requests, or build your own query language on top of JSON or RPC.

In my experience though, anything approaching a query language was too expressive for an API, and it was better to offer one API per query, and encapsulate the queries on the backend themselves.


Gateway APIs: http://microservices.io/patterns/apigateway.html

Assuming you've read that then I'd add the following. If you don't have something approaching a gateway, you're probably doing a big disservice to your users (assuming clients fetch data via API, rather than using a traditional render-on-the-server framework like Django or Rails).

The value proposition of GraphQL is based on the assumption you're already doing things right by users, and it's possible to fulfill the data requirements for a single page or app screen in a single API call. What this probably means is:

* You have a custom endpoint per screen

* That endpoint needs to either a) be versioned or b) support as many historical versions of your app as are in production

* You might have a gateway per client (this is the full backend-for-frontend pattern)

* Every time you add functionality, or change existing functionality, you're adding to what quickly becomes a huge set of endpoints

What GraphQL promises (and delivers on) is the ability to get all the benefits of the backend-for-frontend pattern, without anyone ever having to write an endpoint specifically for a given client use case. The clients convert their data requirements into a query, and the GraphQL server returns the exact data you need, no more, no less. It requires some discipline when it comes to evolving the schema over time, but it works really really well. And when implemented intelligently, it has comparable performance to hand-crafted endpoints (I've written about how to approach this here: https://dev-blog.apollodata.com/optimizing-your-graphql-requ...)

When you experience the front-end workflow of using a library like Relay or Apollo (both are GraphQL clients) and having perfect synchronisation with UI and data, it's a really magical moment. You end up in a world where you can just get on with building UI, it's amazing.


Don't forget to protect against malicious user input! There are both pros/cons for 'non-human-readable' in the security department for sure.

Finding a $5,000 Google Maps XSS by fiddling with Protobuf | https://news.ycombinator.com/item?id=13829925


You can easily write an extension to in-browser Dev Tools to show human-friendly representation of any binary protocol for development/debugging purpose.


Nice. Are you aware of any Chrome extension for decoding Flash's 'application/x-amf' Action Message Format like Burp, Charles, and this Firefox extension do?

https://addons.mozilla.org/en-US/firefox/addon/amf-explorer/


Nope, sorry! But Firefox addons will soon be switching to an extension format compatible with Chrome, so you might get it for "free."


There is a reason why more and more people have left RPC and use REST.

And for me "the next big thing" is something like GraphQL.


Imo it was mainly poor language / framework support for asynchronicity. That's not really an issue anymore.


So what is the modern way to work around issues like network partitions, servers not responding, network connections going on and off, duplicate answers, ...?


Futures-oriented programming - helped by the "async" or "generators" support in quite a few modern languages. Wrap timeouts and retries around everything, and process the timeout errors accordingly.

RPC doesn't mean that a remote function call looks exactly like a local one. That was a mistake. Modern RPC systems return composable futures which make it trivial to do timeouts and retries, send off many requests at once and wait for all/some of them to return, and so on and so forth.

If you're doing something that shouldn't happen more than once, generate a transaction ID to identify it by.
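
A minimal sketch of that wrap-everything-in-timeouts-and-retries idea (an invented helper, not tied to any particular RPC library):

    async function callWithRetry<T>(
      call: () => Promise<T>,
      attempts = 3,
      timeoutMs = 1000,
    ): Promise<T> {
      let lastErr: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await Promise.race([
            call(),
            new Promise<T>((_, reject) =>
              setTimeout(() => reject(new Error("deadline exceeded")), timeoutMs),
            ),
          ]);
        } catch (err) {
          lastErr = err; // back off / log here; a transaction ID keeps retries idempotent
        }
      }
      throw lastErr;
    }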


https://web.stanford.edu/~ouster/cgi-bin/papers/rifl.pdf

This paper says that exactly-once execution of an RPC is possible. I have no idea if gRPC does this or not.


Bring back SunRPC and CORBA!


I strongly advocate GraphQL for client-facing APIs, and gRPC (or Thrift) for internal (or pure) APIs.

REST was a breath of fresh air after SOAP, but unfortunately it is an incomplete solution and leaves too many unsolved problems in user-space. This has led to a proliferation of attempts to build standards on top of REST, such as JSON-API. When we're building systems, RPC is a more natural fit because the semantics are clearly defined.


REST isn't very widely used. Handwritten RPC protocols using HTTP/JSON are, though.


REST has been a complete failure from the very beginning. At best it maybe introduced some people to HTTP caching headers. Sad!


One cautionary tale: avoid generating TypeScript code that exists at run time. We managed to cause a severe page-load regression for a while by generating TypeScript for each DbObject and Projection (similar to a persisted GraphQL query).

My advice is to only generate types and interfaces if possible.


There is also Google's own implementation of this that has been in the works for some time. See https://github.com/grpc/grpc/issues/8682


Yea, we actually work closely with the gRPC-Web team at Google to make sure our implementations are interoperable and we have plans for cross-integration-testing.


Awesome, thanks for sharing this.


It is still a bit shocking to see how many people are not getting the distinction between an API and a RESTful service.

They are very different things and they have very different goals. The hint is in how much you value the client.

If you have full control over both client/server, or if you don't care about any 3rd party developing a client library for your service, then go with something like GraphQL or gRPC or SOAP with a nice, typed spec you can generate your code from, and optimize the heck out of the bytes coming through the tubes.

If OTOH you have an interest in creating a RESTful service that is discoverable, that doesn't require constant client changes, that offers a wider variety of resource representations, and that needs to stand the test of time, then use HATEOAS and a RESTful architectural style.

gRPC is yet another ... RPC. Nothing good or bad about that. Just make sure you understand the consequences.


But I always thought speaking HTTP with a server can be considered RPC. We can pack our message in XML, JSON, or whatever serialization we choose, pass it on to the destination, unmarshal it, do something, and return.


RPC has its own set of limitations, and if your application can live with them, it might be a good fit:

* Coupled server and client. gRPC uses protocol buffers which have zero backwards compatibility.

* Zero discoverability. The client knows in advance what the server can do.

* No standards to follow. You make up your own specs, like Google did.

These constraints are orthogonal to REST, the architectural principles behind the web. What they're doing is tunneling RPC over the web, which is what most HTTP APIs are doing already. There are only superficial differences like the use of protobuf, lack of verbs and URIs, etc.


"protocol buffers which have zero backwards compatibility"

Either I misunderstand you, or this is _remarkably_ wrong: Protocol buffers were designed to make it easy to define protocols which are both backward and forward compatible.


My understanding is the same. Though I had also come to believe that the primary way to achieve this is via loose constraints, i.e. required fields should be used VERY sparingly.

This compatibility pattern also leads me to conclude that protocol buffers aren't a suitable model for generating a client-side type system. You'll just end up with structures where everything is a Maybe type, so you end up needing tons of bespoke client-side code to handle the possible permutations.

You need a layer on top of them to express the true type system suitable for clients, and I believe GraphQL does a great job of this (but I hasten to add that even GraphQL's type system is relatively limited and isn't a magic bullet).


gRPC is based on protobuf3, which doesn't support "required" or "optional" in the first place.


so everything is optional?


Except primitive types. So this is avoided:

> You'll just end up with structures where everything is a Maybe type, so you end up needing tons of bespoke client-side code to handle the possible permutations.

Absent primitive values default to zero, while absent message fields are mapped to what makes most sense in the specific programming language (in most of them, "null" or "nil").


yes


I made a small mistake in the first point, got downvoted to oblivion and people seemed to stop reading there: the binary serialization indeed has backwards and forwards compatibility. However, the textual serialization lacks this compatibility. I can't remember a petty detail of a vendor specification apparently. Pedants with encyclopedic knowledge of Google are out in full force today!

I should have stopped at "coupled server and client", the point is that they both rely on an agreed upon external schema since the messages are not self-descriptive.


Honestly: The other points are not better if your intention was to make gRPC look worse than other HTTP-based APIs:

- For all of them you need to know the remote addresses (IP/hostname, port) upfront or use an external service discovery solution.

- For both you can implement some service introspection, which delivers you a list of available services/methods. AFAIK for gRPC there even exists a standardized introspection mechanism. For other HTTP APIs you might want to download some Swagger description from a well-known address. Or a WSDL scheme. Or a GraphQL schema.

- Standards on which layer? On the transport layer you are following the HTTP standard, independent of whether you are using gRPC, GraphQL, JSON-RPC or some handmade REST API. On the application layer you are mostly on your own anyway; there are not a lot of things one could standardize. There are some exceptions, like the standardization of WebDAV on top of HTTP, but most applications have their own specific set of requirements. If we are talking about standardization without meaning official standardization, then we can argue that gRPC provides a more rigid (standardized) model for an application than the definition of some ad-hoc APIs: it is standardized how APIs and exchanged data types are defined (.proto files), how they can be accessed and how data is transferred over the wire (mapping to HTTP). All of that without the application developers on both sides needing to care for it.


Actually gRPC is about the same as other HTTP-based APIs; it is just a more efficient RPC. All of them are lacking what made the web scalable in the first place.

- HTTP APIs are worse than websites of the 90s. At least a browser could be expected to view a few websites. HTTP APIs require a custom client for each one.

- Document media types, not APIs. This isn't such a novel concept, browsers (fancy HTTP clients) work because HTML is a standard.

- Standards at the application layer, not specifications. You mentioned specs only.

Let me just clarify that RPC is a great fit if you are constrained to a single vendor and don't care about third-party clients. On the web, every browser is a third-party. For HTTP APIs to take off, they need to be built more like websites, or else vendor specs will fill every niche.


I don't understand your point about third-party clients. APIs defined in .proto files can have both clients and servers implemented by anybody.


APIs defined in XMLRPC, CORBA, SOAP, et al, can have clients and servers implemented by anybody.

Programmers don't seem to learn from history and struggle with thinking over time. These formats worked well in a time when a single party (or second party) controls the server and client, when services were very consolidated. Now that the web is becoming more and more centralized and closed, it follows that RPC is making a comeback: widespread interoperability is not much of a concern.


Sorry but you're not addressing your own point:

> RPC is a great fit if you are constrained to a single vendor and don't care about third-party clients

If I'm not constrained to a single vendor, and care about third-party clients, what makes RPC a bad fit? In specifics, not vague historical comparisons.


Practically speaking, most REST-based servers and clients are tightly coupled anyway.

That is why SDKs are so popular with developers. They just want to call a method and not be concerned with how the bits get across the wire.

And please do not cite the browser as a good example of a REST client. The browser is driven by an advanced AI (namely a human). We are not there yet with machine to machine interactions, and it isn't clear that REST is the magic bullet that will enable this kind of system.


A good rule of thumb is that if a REST API has an SDK, then it's not really RESTful in the first place.

SDKs are an enormous effort to create and maintain for every HTTP API; I think it's a malpractice that is all too common.

Browsers (and by extension, websites) are not good examples of REST in practice? I don't know what world you're in.


> SDKs are an enormous effort to create and maintain for every HTTP API,

Hence the attraction of generating client and server stubs with gRPC

> Browsers (and by extension, websites) are not good examples of REST in practice?

The "client" of a website is a human being - and we are very good at interpreting dynamic content.

As an example: There are probably 100+ websites out there where you can book a ticket for a flight. It might be painful, but as a human, I can figure out how to navigate and book a flight on any of those systems.

I challenge you to write a REST client that can do the same.


Every single one of those websites use HTML, a common media type, with semantics defined in that standard. What's missing are the application-level semantics like what's a "flight search" and "flight booking", which can be solved with linked data. In fact, this already exists! [0]

In theory, using a common media type and linked data vocabulary, one can make this hypothetical scenario of an automated machine-to-machine flight booking system happen. In practice, it requires either changing how people think, or changing how they build APIs to begin with. Changing how people think is a steep uphill battle; changing how APIs are built is much easier.

[0]: https://schema.org/FlightReservation


Is this SOAP 2.0 ?


They sure do like spikes... whatever that means.


"A spike is a product-testing method that is used to determine how much work will be required to solve or work around a software issue." [1]

[1] https://en.wikipedia.org/wiki/Spike_(software_development)


Ah, agile mumbo-jumbo.


Scalakka devops represent!


why not emulate an RPC using webpack?



