Twirp: A new RPC framework for Go (twitch.tv)
368 points by spenczar5 on Jan 16, 2018 | 97 comments



Hey everyone! I'm the OP and primary author of Twirp. I'm happy to answer any questions and hear feedback.

You can also reach me directly, if you like, email is in my profile.


I like the simplicity of the library. I haven't tried it yet, but this behavior is a bit of an unfortunate choice:

> If there is an error, it will be JSON encoded, including a message and a standardized error code.

Why not return a standard protobuf error when the request was protobuf? It massively complicates things when you have to expect errors in one format and responses in another.


It’s important for errors to be human-readable off the wire. If they were proto-encoded, you couldn’t get at them easily with tcpdump, and you couldn’t read erroring response bodies from curl, and you couldn’t read errors shipped to stuff like rollbar.

Hopefully, the complexity is encapsulated by the generated client.


What about allowing/respecting an Accept header in the Request? In @doh's case, if the client only specified Accept: application/protobuf that would override the default behavior of returning JSON encoded errors.
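A minimal sketch of the idea on the server side (a hypothetical helper, not Twirp's actual error path):

    import (
        "encoding/json"
        "net/http"
    )

    // writeError honors an explicit Accept header and falls back to
    // Twirp's default JSON error encoding otherwise.
    func writeError(w http.ResponseWriter, r *http.Request, status int, code, msg string) {
        if r.Header.Get("Accept") == "application/protobuf" {
            // Marshal a protobuf error message here instead (omitted).
            return
        }
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(status)
        json.NewEncoder(w).Encode(map[string]string{"code": code, "msg": msg})
    }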


That’s a pretty good idea. It does expand the complexity of the client a bit, but at least it’s opt-in, so it doesn’t strictly need to be done for cross-language clients.

But what would the benefit(s) to users be? If they are deserializing a protobuf error, they are almost certainly using a generated client, so I don’t think they will know or care how the error was encoded.

(This might be better as a github issue to keep a visible record of the design for others.)


I think it depends on who the user is in your case. In mine, it's the developer who has to work with Twirp outside of the standard libraries (maybe a different language, maybe they just want to incorporate it in their own code, ...).

I also like when things are consistent without surprising behavior.


Did you ever consider Cap'n Proto [1] as well? How does it compare? It looks like it could easily be integrated as a third transport encoding besides JSON and Protobuf.

[1] https://capnproto.org/


Yeah, we looked at it a bit. To be honest, the biggest blocker was that we wrote Twirp as a gRPC alternative, so we already had protobuf service definitions in some spots, and it seemed easiest to keep using the same thing.

Adding another transport encoding would be good, but it's important that any Twirp server can support all of the encodings. We would need to be able to map a message defined in capnproto to a protobuf message definition, which didn't seem completely trivial when we looked at it, since capnproto uses its own IDL, I believe.

I don't know a ton about capnproto, though, and I'm open to learning more about it. We would just need to work under the constraints that JSON and Protobuf requests would need to still work.


What do you think about prpc, which is our version of "gRPC for http/1.1"? https://godoc.org/go.chromium.org/luci/grpc/prpc


I hadn't even heard of it until yesterday, to be honest. I know very little about it. It looks like it does a good job addressing the HTTP 1.1 concern, for sure, but I don't know whether it addresses the other issues we've had. I'd have to spend a lot more time reading to understand it.


Thanks for posting! Does Twirp use a single HTTP request per call, or does it use a persistent connection similar to the net/rpc package? If the latter, does it provide options for heartbeat, reconnect and retry?

EDIT: the blog post covers this in the protocol section. Every request is a POST :)


It's one HTTP request per call, but requests usually flow over a persistent connection in HTTP 1.1 - it's certainly not re-opening a connection for every request.

TCP reconnection and stuff like that is at a lower level than Twirp. When you make a client of a Twirp service, the constructor accepts an http.Client which it'll use to send the requests. http.Client has a "Transport" field which is responsible for opening connections and that sort of thing. The Go standard library's defaults are pretty good, but you can tune them as you like.
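For example, a sketch of a tuned client (`pb` stands in for your generated package; Haberdasher is the example service from the announcement):

    import (
        "net/http"
        "time"
    )

    // The generated constructor accepts any http.Client, so pooling and
    // timeouts are tuned on the standard library's Transport.
    func newClient() pb.Haberdasher {
        httpClient := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                MaxIdleConnsPerHost: 100,
                IdleConnTimeout:     90 * time.Second,
            },
        }
        return pb.NewHaberdasherProtobufClient("http://localhost:8080", httpClient)
    }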


The blog post mentions the following:

> The core design of Twirp is language-agnostic and we’re planning to expand into new languages, but our Go implementation is already stable and capable of serving heavy production loads.

I was wondering, is Rust on the roadmap?


It's not on the roadmap because I have zero experience in Rust. I'd be very happy to see the community make a Rust generator and could provide guidance on getting the protocol side right, but I wouldn't be able to tell whether the generated client was idiomatic - which is really important.

It'll take someone who is very fluent in Rust and who is motivated to do it, but I'm all for it.


Thanks for posting! How did you work around issues with GRPC on AWS/ELB?


What works for us (gRPC):

An ELB in layer 4 mode with proxy protocol enabled. Behind the ELB sits nghttpx (not nginx), doing TLS termination and forwarding requests to gRPC.

Proxy protocol is used to keep the source IP.

This setup is all done with Kubernetes, using kops for the cluster setup and nghttpx-ingress-lb as the ingress controller. Also, we have multiple namespaces/environments in Kubernetes (staging/dev/...), so nghttpx does routing based on the hostname.

We tried linkerd before but failed to use it as an ingress controller doing TLS termination and upstream HTTP2. Going the other way, routing everything through a dedicated linkerd port and a dtab, worked, but mixing in TLS termination + upstream HTTP2 in a single dtab stopped us.

So for now we keep this simpler setup, and we are probably going to check out Istio/Heptio/Envoy.


When I implemented gRPC on ELB, we used a multiplexer to re-route the gRPC requests. See: https://github.com/soheilhy/cmux/. The only other issue we had was that ELBs would not let connections live longer than 30 seconds.
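The basic pattern looks roughly like this (a sketch, not our exact code):

    import (
        "net"
        "net/http"

        "github.com/soheilhy/cmux"
        "google.golang.org/grpc"
    )

    func serve() error {
        l, err := net.Listen("tcp", ":8080")
        if err != nil {
            return err
        }
        m := cmux.New(l)

        // gRPC is matched by its content-type header; everything else
        // falls through to the plain HTTP server.
        grpcL := m.Match(cmux.HTTP2HeaderField("content-type", "application/grpc"))
        httpL := m.Match(cmux.HTTP1Fast())

        go grpc.NewServer().Serve(grpcL)
        go (&http.Server{}).Serve(httpL)
        return m.Serve()
    }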


You can increase that connection timeout, although there's an upper limit of an hour or so that long-lived connections will eventually hit.


I completely forgot about this. That makes it a lot more feasible to use.


We tried grpc-gateway a little bit. But mostly, we wrote Twirp as a gRPC alternative because it was hard to work with ELBs.


We had to stick to layer 4/TCP via an ELB (as opposed to an ALB).


gRPC connections are persistent, so a single client will always talk to a single backend in that configuration. It's fine if you have lots of clients but can easily lead to load imbalances.

That's why projects such as Envoy exist. I'd link it, but I'm on mobile.

Keep an eye on it.


You can use round robin load balancing in gRPC without Envoy.


And it's not difficult to swap it out for a consistent hash balancer or other solution.


You are doing L4 load balancing with the ELB, so you can’t do sticky session management? Is that a problem you run into? If so, how did you handle it?


I think the work the gRPC contributors are doing is great, including all the features. But I can't emphasize enough how important projects like this are: ones that take great ideas to a new level by prioritizing simplicity. It's like one project brainstorms great ideas by not being too resistant to new ones (the "yes, and..." rule), while the other refines the ideas with a focus on simplicity to extract the greatest value for the cost.

Great work.


Really excited about this. I didn't like how opaque and heavy gRPC is, and I really wanted support for JSON. Mostly for these reasons, RPC hasn't been implemented in my architecture yet (just using standard REST).

Twirp is everything I wanted in an RPC framework and I'm looking forward to implementing it ASAP. Thanks Twitch team :)


Can I ask why you want JSON with gRPC? The benefits of protobuf are tremendous, with little to no downside.


On the other hand, plain URLs with JSON are much easier to work with without writing any code. You can do everything you want with curl from the shell, and often an API allows doing almost anything from a browser (Elasticsearch comes to mind). The simplicity of it all comes in handy when you want to do something trivial — load a small piece of data into the server, do some diagnostics, run some ad-hoc queries, etc. — without really wanting to write a program.

Debugging with lower-level tools like strace and tcpdump is also something that's trivial with JSON-over-HTTP/1.1, but virtually impossible with gRPC. (I mean, you could grab the payload and run it through gRPC deserialization, but would you?)

I'm a big fan of gRPC, but it is pretty top-heavy — lots of code to accomplish relatively little. If you have a language that can build client bindings dynamically from .proto files without recompilation, that would ease things a lot, but if you're using Go, for example, the bar is pretty high going from zero to productive.



To complement the grpc_cli recommendation, I've been using grpcc for 1-2 years now: https://github.com/njpatel/grpcc


I think the only RPC mechanism I've been happy with, that required little work and didn't constantly get in the way was Stubby - the precursor to gRPC used inside Google.

For a few years inside Google I experienced zero discussions about almost every aspect of RPC. It took a trivial amount of time to implement interfaces, clients, and servers in multiple languages, and it was trivial to understand the interface of, and implement a client for, other people's code.

I didn't necessarily like everything in Stubby, but I absolutely loved not needing to have pointless discussions about RPC mechanisms or protocol design.

Since I left Google, anything even remotely resembling RPC (including REST) has been an utter waste of time mostly spent bickering over this crap solution or that crap solution – mostly with people who don't care about the same things you care about.

REST is a crap solution in my eyes because it invites absolutely endless discussions on an endless list of subtopics, from the dogmatic/fundamentalist HATEOAS end of the spectrum to the RPC-using-HTTP-and-JSON-and-let's-call-it-REST camp. Not to mention that you also need to have an IDL and toolchain discussion. (Of course, none of the toolchains or ways to describe interfaces are very good. In fact, they all suck, in part because the attention is being spread across so many efforts that don't get the job done.)

I have yet to see an IDL that works better than a commented .proto file from a "get stuff done" point of view.

I completely understand where you are coming from when it comes to having human readable wire format. For 20 years I was a strong believer in the same, and for some systems I still believe in human readable formats.

But RPC and RPC-like mechanisms are no longer among them. RPC is for computers to talk to each other, not for humans trying to manually repair stuff.

(I'm a pragmatist, so I'm allowed to both change my mind and have seemingly inconsistent opinions :-))

For RPC you should encourage the creation of tools. If you need to look at the traffic manually: fine, make a Wireshark plugin or a proxy that can view calls in real time. That's annoying, but cheaper than going off and inventing yet another mechanism. And once it is done, it is done and there's one more thing that is sane.

We should really encourage people to build tools so we can automate things and have more precise and predictable control over what we are doing, without having to reimplement parsing (which is what happens when people think they understand your protocol - they often don't).

Also, make sure it works for a large enough set of implementation languages and understand how to work in mechanical sympathy with build systems. I don't care if Cap'n Proto is marginally better than Protobuf if it lacks decent support for languages I have to care about.

I have no idea how much time we wasted on trying to get Thrift to work in a Java project that needed to build on Windows, Linux and OSX back in the day, but I was ready to strangle the makers of Thrift for not paying attention to this.

At this point I'm beyond caring about the design of RPC systems. I just want something that works for software development and doesn't have to be a discussion. Hence, I get annoyed every time I see a new RPC mechanism rather than an attempt to make one aspect of an existing mechanism a bit more sane, with a bit more empathy for programmers rather than for the egos of the protagonists of various libraries, frameworks, and formats.


How does Stubby compare to gRPC?

I imagine part of the lack of friction around Stubby was that Google was the only consumer, and could maintain client and server bindings/tools for the strict subset of the languages that Google standardized on.


It was pretty similar, but gRPC is a bit simpler, since Stubby had a lot of other machinery to deal with authorization and the like.

I wouldn't say the lack of friction was mostly due to Google being the only consumer. It was mostly because there was a clear path from A to B when you wanted to give something an RPC interface, and because this path was made efficient.

Or at least more efficient than trying to use REST-like stuff in a large organization with lots of different teams using different technologies.

It also helped that it wasn't a democracy. You had to use it; if you didn't like that, you were free to leave. As a result, people focused more effort on making the tools better and making friction points go away.

In practical terms: we can spend weeks getting a REST-like interface to work with other projects, because everyone has an opinion on every bit of the design, and everyone uses different, quirky libraries and tools. For Stubby in Google back then, it was mostly about defining the data structures and the RPC calls and discussing semantics; the mechanics were taken care of. This is far, far, far from the actual case for many other technologies.

(And while I appreciate HATEOAS as a design philosophy, and I've tried to make use of it several times, it just is not worth the effort. It takes too much time to do right and to get everyone on the same page. Most proponents are more keen on telling everyone how they are using it wrong than on writing good tools that actually help people use it right. There's very little empathy with the developer.)


We ran into problems where we had embedded Ruby and Python interpreters (Chef/SaltStack) that made it a big pain to ship new libraries. It was much easier to use the grpc-gateway (HTTP/JSON) for those clients and the generated grpc bindings (HTTP2/proto) for services.


One reason is to be able to call a gRPC service from a web browser. Native JSON support makes that much easier.


There has recently been a lot of work toward a native gRPC client in the browser. It’s not fully there yet, but it's looking really promising!

https://github.com/improbable-eng/grpc-web


Also, gRPC-Web is coming along:

https://github.com/grpc/grpc-web


Link is dead


Is Cloud Endpoints an option for you? It supports gRPC with JSON/REST transcoding.

[1] https://cloud.google.com/endpoints/docs/grpc/about-grpc

(Disclaimer: I work on Google Cloud.)


It is amazing to me that almost nobody here actually questioned the wisdom of throwing out the time tested benefits of robustness of REST in exchange for that which REST was created to eliminate; the fragility of RPC. And all because using RPC is easier in the moment (vs. over time.)

This reminds me of the old saw "Those who ignore history are doomed to repeat it."

If you are unaware of the benefits, here are just a few links that can explain them:

- https://www.quora.com/What-are-the-advantages-of-REST-over-a...

- https://www.quora.com/What-is-the-difference-between-REST-an...

- http://duncan-cragg.org/blog/post/getting-data-rest-dialogue...

- https://apihandyman.io/do-you-really-know-why-you-prefer-res...

- https://www.quora.com/What-are-the-pros-and-cons-of-REST-ver...


I did. It's pretty obvious they don't understand REST at all, so they reinvented the wheel.


What could make this really take off is an in-browser JS client. The simplicity it has added seems to really help there. The gRPC team has had one in hiding for a long time, only giving access to people who explicitly ask: https://github.com/grpc/grpc/issues/8682 (good thing GitHub has a feature that snips hundreds of comments, or that link would take a while to load)


Totally agree, and it's something I'd love to see. Consider this a call for contributors - I think a simple generated javascript client would be an excellent way to help with the project.


I am interested in finding out how Twirp helps with versioning. Is it possible to have services evolve independently of each other?


This is a pretty awesome project. The one thing missing is autogenerated JavaScript/TypeScript stubs like grpc-web provides. Will definitely experiment with this when building small Go applications.


Hopefully these will be implemented soon by the community. Open source rules :)


I'm trying to understand the problem this solves. Let's say you have an HTTP API which allows users to update their email address:

    POST /api/user/:username/update_email
But you change the application to require API clients to send the user_id instead of the username.

    POST /api/user/:user_id/update_email
Wouldn't you still need to mandate that all clients are updated regardless of whether you use this tool as an abstraction layer to your API?


Recommended reading on what RPC is: https://www.geeksforgeeks.org/operating-system-remote-proced...

And protobufs: https://developers.google.com/protocol-buffers/docs/proto3

For example, one benefit is that you're defining your API using language-neutral protobufs, which then generate code consistently (including types!) in many languages. Your entire communication procedure can be easily and succinctly described in a single small, human-readable file. See the sketch below.
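As a sketch, using the Haberdasher example from the Twirp announcement, a one-method service definition comes out the other side as a plain Go interface:

    // Generated (roughly) from a .proto definition like:
    //
    //   service Haberdasher {
    //     rpc MakeHat(Size) returns (Hat);
    //   }
    //
    // You implement this interface for the server and get a matching,
    // typed client for free in each supported language.
    type Haberdasher interface {
        MakeHat(ctx context.Context, size *Size) (*Hat, error)
    }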


Lots of comments here about the lack of JSON support in gRPC - while that's true, it's relatively easy to bolt on using grpc-gateway (https://github.com/grpc-ecosystem/grpc-gateway).

Here's how we did it in CockroachDB: https://github.com/cockroachdb/cockroach/blob/24ed8df04719a1...

The supporting code (protoutil) is https://godoc.org/github.com/cockroachdb/cockroach/pkg/util/... and https://godoc.org/github.com/cockroachdb/cockroach/pkg/util/....


I wrote a similar library to this called Hyperbuffs[0] in Elixir. The goal of the project is to document and build your endpoints using protobufs, but to allow the caller to choose either protobuf or JSON encoding for content and accept types.

[0] https://github.com/hayesgm/hyperbuffs


Author of go-micro here. Good to finally start seeing some choices focused on RPC. I started go-micro in 2014, before gRPC came on the scene. Even still, I think the tooling doesn't emphasize ease of development. That was my goal with go-micro.

https://github.com/micro/go-micro


Looks nice. Can anyone comment on how auth works with Twirp? I was trying to get gRPC to authenticate with unsigned SSL certs (much like using SSH) and was rather disappointed by how awkward it was: basically, two completely different methods requiring hiding a session ID in two unrelated places, just to allow an SSL cert to control authentication.

Anyone done similar with Twirp?


Yep, you can do this pretty easily because Twirp's generated objects plug in nicely to the normal `net/http` tools. The server is a `http.Handler`, and the client constructor takes a `http.Client`. So if you're familiar with how to use SSL certs for authentication with a vanilla Go HTTP client and server, Twirp would work in exactly the same way.

When you create a Twirp server, you get a `net/http.Handler`. You can mount it on a `http.Server` with its `TLSConfig` field set to the right policy.

The client constructor similarly takes a `*net/http.Client`. You could provide a Client that uses an `http.Transport` with its `TLSClientConfig` field set to the right values (like in https://gist.github.com/michaljemala/d6f4e01c4834bf47a9c4, say).
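Putting that together, a rough sketch (`twirpHandler`, `caCertPool`, and `clientCert` are placeholders you'd build yourself; Haberdasher is Twirp's example service):

    import (
        "crypto/tls"
        "crypto/x509"
        "net/http"
    )

    // Server: the generated Twirp server is an ordinary http.Handler,
    // so client-cert auth is plain net/http TLS configuration.
    func serveTLS(twirpHandler http.Handler, caCertPool *x509.CertPool) error {
        srv := &http.Server{
            Addr:    ":8443",
            Handler: twirpHandler,
            TLSConfig: &tls.Config{
                ClientAuth: tls.RequireAndVerifyClientCert,
                ClientCAs:  caCertPool,
            },
        }
        return srv.ListenAndServeTLS("server.crt", "server.key")
    }

    // Client: hand the generated constructor an http.Client whose
    // Transport presents the client certificate.
    func newTLSClient(clientCert tls.Certificate, caCertPool *x509.CertPool) pb.Haberdasher {
        httpClient := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{
                    Certificates: []tls.Certificate{clientCert},
                    RootCAs:      caCertPool,
                },
            },
        }
        return pb.NewHaberdasherProtobufClient("https://localhost:8443", httpClient)
    }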


This looks really sweet. I've never understood why gRPC limits itself to protobufs only when the protobufs have a canonical JSON representation. I'm glad that Twirp is fixing that piece.


On GCP, Cloud Endpoints proxies will transparently translate back and forth between protobufs and the canonical JSON, allowing either representation to be used. So if you're on GCP and don't care about vendor lock-in, that's a solution.


There is also the grpc-gateway project for JSON transcoding https://github.com/grpc-ecosystem/grpc-gateway


To be fair, if you chose gRPC for performance, but then end up using JSON for most of your traffic, perhaps you picked the wrong tool.


I don't think people generally want JSON accepted in their production workloads; it's more for development, testing, that kind of thing. Being able to just curl your service makes a huge difference.


It always starts that way, then people demand JSON everywhere ("why not?"), then they complain when things get too slow or when the OOMs begin to appear. :-)


You can use gRPC with FlatBuffers, too.

https://grpc.io/blog/flatbuffers


HTTP 1.1 + JSON support for Twirp opens up a lot of doors too. It's easy for the browser to natively hit a Twirp service without the need for large packages such as https://github.com/improbable-eng/grpc-web.


Yes! In theory, gRPC has a way to pick custom serializers... but in practice, they are pretty clumsy to use and don't seem very well supported. There's a lot more benefit when you can guarantee that all servers will support JSON, too.


This looks promising! We use the go-grpc SDK in conjunction with gogoprotobuf, and it's been a rocky road.

While the article identifies some operational issues (e.g. the reliance on HTTP/2), there are several considerable deficiencies with gRPC today, at least when using it with Go:

1. The JSON mapping (jsonpb.go) is clumsy at best, and by this I mean that it produces JSON that often doesn't look anything like how you'd hand-design your structures. "oneof" structs, for example, generate an additional level of indirection that you typically wouldn't have. Proto3's decision to forego Proto2's optional values (in Proto3 everything is optional) causes Go's zero-value semantics to leak into gRPC [1]. (We had to fork jsonpb.go to fix some of these issues, but as far as I can tell, upstream is still very awkward.)

2. The Go code generator usually produces highly unidiomatic Go. "oneof" is yet again an offender here. The gogoprotobuf [2] project tries to fix some of go-grpc's deficiencies, but it's still not sufficient. Ideally you should be able to use the Proto structs directly, but in our biggest gRPC project we basically gave up here and decided to limit Proto usage to the server layer, with a translation layer in between that translates all the Proto structs to/from native structs. That keeps things clean, but it's pretty exhausting work, with lots of type switches (which are hampered by Go's lack of switch exhaustiveness checking; we use BurntSushi's go-sumtype [3] a lot, but I don't think it can work for Proto structs, as it requires that a struct also implement an interface).

3. Proto3 has very limited support for expressing "free-form" data. By this I mean if you need to express a Protobuf field that contains a structured set of data such as {"foo": {"bar": 42}}. For this, you have the extension google.protobuf.Value [4], which supports some basic primitives, but not all (no timestamps, for example) and cannot be used to serialize actual gRPC messages; you can't serialize {"foo": MyProtoMessage{...}}. Free-form structured data is important for systems that accept foreign data where the schema isn't known; for example, a system that indexes analytics data.

From what I can tell, though, Twirp doesn't "disrupt" gRPC as much as I'd like, since it appears to rely on the existing JSON mapping.

[1] https://github.com/gogo/protobuf/issues/218

[2] https://github.com/gogo/protobuf

[3] https://github.com/BurntSushi/go-sumtype

[4] https://developers.google.com/protocol-buffers/docs/referenc...


Yeah, I agree with pretty much everything you've written here.

> 1. The JSON mapping (jsonpb.go) is clumsy at best

The best thing for optional fields in jsonpb is to use the protobuf wrapper types [1]. They have special support in jsonpb to serialize and deserialize as you would expect, without the indirection. But the Go structs you get on the other end are a little weird, so it's a tradeoff.
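In Go it looks something like this (`Profile` and `Nickname` are hypothetical names):

    import "github.com/golang/protobuf/ptypes/wrappers"

    // A field declared as google.protobuf.StringValue in the .proto comes
    // out as a pointer in Go, so "unset" is representable:
    func example(p *Profile) {
        p.Nickname = &wrappers.StringValue{Value: "gopher"} // set
        p.Nickname = nil                                    // explicitly absent
        // jsonpb renders the wrapper as a bare JSON string (or null),
        // without an extra level of nesting.
    }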

> 2. The Go code generator usually produces highly unidiomatic Go.

Yeah, using the generated structs as the main domain types in your code can be up-and-down. I agree that gogoprotobuf can help, but it's rough. We definitely use Getter methods on generated structs quite a bit for stuff like oneofs.

> 3. Proto3 has very limited support for expressing "free-form" data.

There's always `repeated bytes` :) It sounds like a joke, but we've used it in some spots where the input is totally schema-less.

The Any type is also designed for this sort of thing. Still clumsy, though.
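A sketch of Any in Go, with `MyMessage` standing in for a real generated type:

    import "github.com/golang/protobuf/ptypes"

    // Pack an arbitrary proto message into an Any, then unpack it once
    // the receiver knows the concrete type.
    func roundTrip(payload *MyMessage) (*MyMessage, error) {
        a, err := ptypes.MarshalAny(payload)
        if err != nil {
            return nil, err
        }
        out := &MyMessage{}
        if err := ptypes.UnmarshalAny(a, out); err != nil {
            return nil, err
        }
        return out, nil
    }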

[1] https://github.com/google/protobuf/blob/master/src/google/pr...


> > 2. The Go code generator usually produces highly unidiomatic Go.

> using the generated structs as the main domain types in your code can be up-and-down

At $DAYJOB we solve this by doing code generation outward from our domain types. The RPC layer is idiomatic Go because that's what we began with.

Some go/token and regexes take our structs and produce a server-side router implementation for net/http (endpoints from magic comments), some client-side libraries for Go / C++ (Qt) / PHP / JS, and documentation in markdown.

Our system is in a pretty reusable state, but nobody has the free cycles to open-source it. If Twirp had been available 24mo ago, our project might have been different.


Re: 3. There's also google.protobuf.Struct.


You mentioned problems with gRPC, but I think every one of your problems is with protobuf. Is that correct?

Also, regarding point 3, I'm confused with two things:

- You want "free form" data, but you're talking about protos in the context of Go. How would you define this "free form" data in Go?

- You explain that "free form structured data is important for systems that accept foreign data ... where the schema isn't known". Why are you using protobufs for this use case? Protobufs are specifically meant to make the schema known and have it enforced by serialization.


True, but gRPC inherits these problems as it's based on Protobuf.

As for free-form data, it should be representable as map[string]interface{}. Our specific use case is a document store that stores documents on behalf of clients. The format of documents cannot be known by the store, but the API to the store is gRPC. Also, we want documents to contain higher-level types such as dates, but we're forced to use google.protobuf.Value for this and treat dates as strings, since Value cannot contain Proto types.

(Our next step is probably to model this explicitly, by defining our own Value message that uses "oneof" to represent all the possible types we need to support, and then using a map of these. But it would be nicer if Protobuf had first-class support.)


Any performance numbers?


It's really hard to write benchmarks of an RPC system that mean much, but the overhead is really just in serialization. We have services that handle tens of thousands of requests per second on Twirp in one process.

Serialization/deserialization of a typical protobuf struct takes a microsecond or two, but it generates some garbage, so GC ends up slowing you down if you try to go really crazy and push past 100k req/s in one process with non-trivial message structures.
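If you want numbers for your own messages, a plain Go benchmark over proto.Marshal gets you most of the way (Hat is the example message from the announcement; field names approximate):

    import (
        "testing"

        "github.com/golang/protobuf/proto"
    )

    func BenchmarkMarshalHat(b *testing.B) {
        msg := &Hat{Inches: 10, Color: "red", Name: "fedora"}
        b.ReportAllocs() // shows the garbage generated per op
        for i := 0; i < b.N; i++ {
            if _, err := proto.Marshal(msg); err != nil {
                b.Fatal(err)
            }
        }
    }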


Thanks, that's the exact info I was looking for.

I've unfortunately been bitten before by choosing JSON as a serialization format, specifically in Go, due to JSON serialization dominating processing time.

No criticism though, JSON is the right choice for many types of APIs.


You can and should use Twirp's protobuf serialization instead for almost all applications. The JSON serialization is really intended for developers and low-throughput cross-language clients.

Protobuf serialization isn't free, but it's definitely cheaper than JSON serialization.
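Concretely, the generated package gives you one constructor per encoding, so switching is a one-line change (Haberdasher is the example service from the announcement):

    // Protobuf on the wire: the right default for service-to-service calls.
    client := pb.NewHaberdasherProtobufClient("http://localhost:8080", &http.Client{})

    // JSON on the wire: handy for debugging and low-throughput clients.
    client = pb.NewHaberdasherJSONClient("http://localhost:8080", &http.Client{})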


Awesome! A lot easier to integrate than gRPC. It will be way more useful once other languages are supported.


Is Thrift supported in Go?



Beautiful. I'm sick of juggling GET/POST/PUT/PATCH/DELETE and figuring out which one the API developer chose. Just do X using Y params.


Can anyone explain to me what an RPC is?




Looks like the website got a HN hug of death and isn't really loading for me whatsoever.


Unlikely? It's hosted on Medium - I'm pretty sure they can handle the load. However, here is the cached version: https://webcache.googleusercontent.com/search?q=cache:6vYOM9...


Must just be your network. I would be surprised if HN traffic could bring down Medium


It is nice, but I would say it's of limited value outside of Twitch.


Anyone who is making a move from monolith to microservices on top of AWS is a potential user of Twirp, and it will save tons of time on design and implementation. That's a lot of value, for a lot more people than just Twitch.


I would definitely prefer to use this instead of something bizarre-REST-like (which is what REST usually devolves into) in my next project, if I can't use GraphQL.


I was surprised to find that GraphQL was probably the easiest sell to my team ever.


I've been saying for years and years that JSON RPC is the way to go. Glad to see at least someone agrees. http://www.jsonrpc.org/


There seems to be somewhat of a pattern of Go being linked to outages (CloudFlare and now Twitch). Any regrets investing in Go?


Are you talking about problems with gRPC mentioned in the article? gRPC is not in any way specific to Go or even related to Go, and I can confirm that there have been some problems with the C++ version of gRPC.

The CloudFlare outage was related to leap second handling... while the particulars of the Go library contributed, this is also far, far from the only time that a leap second has caused havoc online. Hell, in 2008, Zunes were crippled by a leap year bug.

RPC and time handling are notoriously tricky problems to get right.


Um... what? Might as well say that operating systems are linked to outages. Pretty sure Twitter wasn’t running on Go all the times it went down. Go is really ridiculously solid and used in all kinds of production systems, outages happen no matter what language.


Why use HTTP for transport instead of Messagepack or ZMQ? Seems a bit overkill if you are whipping binary data back and forth between services. Protobuf + ZMQ seems a lot more efficient to me.


Absolute throughput or efficiency isn't the goal of most RPC mechanisms, Twirp specifically included. However, I don't think it's underperformant either. It's very easy to reason about and debug, plays well with ELBs, and most importantly, gets developers thinking at a service-to-service RPC level instead of about low-level stuff.

Fix and optimize throughput for services which actually have those problems.


Messagepack is a serialization format, not a transport. It is an alternative to protobuf or json.

Mostly people use HTTP because there are so many services and components that already support it (load balancers, proxies, sidecars etc), it has well-supported options for authentication, conveys metadata seamlessly through multiple layers, and its overhead isn't an issue for the kind of payloads they are sending.

Plus, you know, curl.


A lot of this boils down to wanting to be able to use standard load balancers.


ZMQ uses ZMTP, which runs over TCP, and HAProxy, for example, supports plain TCP just as well as HTTP.


Not just standard load balancers, standard http stacks. Polyglotness is a big benefit here.



