Hacker News
Connect: A Better gRPC (buf.build)
308 points by emidoots on June 1, 2022 | 91 comments



I'd appreciate it if they could be more upfront about their language support and what this even does for me. Right now this is: "A better gRPC for Go". And I don't know what's better about it without doing much more research. Apparently proto3 has support for JSON encoding by default. So they somehow give me better schema validation. That's their claim, but I don't see how exactly they do that without going much deeper into it. Meanwhile .proto files are obviously already a schema, so they might be parsing your golang server code to see if it still matches your .proto file. That sounds very brittle to me if true. I'm not convinced at all from this post and I don't see how this works.


Fair criticism - you're all the first people to see a lot of this writing, so we're taking notes and we'll try to make the next iteration clearer.

Right now, we've got a Go RPC library that's ready for early adopters. In a month or two, we'll have a TypeScript library for web developers. A while after that, we'll probably tackle server-side TypeScript. We're not sure what will be highest-priority after that: could be Swift + Kotlin for mobile, could be Python, could be something else.

We think that `connect-go` is a better choice than Google's gRPC implementations because it's (1) a simpler implementation, (2) interoperates better with the Go HTTP ecosystem, and (3) supports a nicer protocol _in addition to_ the gRPC and gRPC-Web protocols. You get everything that's good about gRPC, very little (hopefully none!) of what's bad, and you get some extra goodies on top. The same is true of the upcoming `connect-web` compared to `grpc-web`, and so on.

Re: JSON, you're right - proto3 defines a Protobuf-to-JSON mapping that everyone uses. What's less obvious is that if you use JSON with the gRPC protocol, you don't actually end up with JSON HTTP payloads. Instead, you get `<5 bytes of binary framing>{"some": "json"}` - which isn't all that useful, because it's no longer JSON. Then consider that gRPC doesn't use meaningful HTTP status codes, even for request-response RPCs, and it requires using HTTP trailers, and it requires HTTP/2. All of a sudden, it's not much like the ubiquitous, un-fussy HTTP APIs that make REST so successful. The `connect-go` library offers you a solution for this by supporting the Connect _protocol_, which fixes these warts.

Hopefully that's a little clearer <3


Obviously I’m self-interested here, but it might be good to take a look at .NET 7’s gRPC JSON transcoding, as it seems like they’re trying to solve a similar problem.


Btw, having a POST only protocol for web will completely destroy the performance of any modern app.


so.... what's bad?

You haven't really answered the question except with hand-waving, and most of it is opinions against things that exist for good reason, without justification.


Their "opinions" ring pretty true to me. I'm sure everything in GRPC exists for good reason... if you are google.


I've deployed gRPC and used the Go bindings extensively, and when I read this article I went "yes! finally!" because this directly touches on two of my chronic pain points.

The Go code it generates from .proto files is the first set of generated bindings I've seen that uses Go generics. I always dreaded needing to look in the stock generated code, but I took a peek at the sample Connect-generated code and it's quite readable. Generics do help; a big part of the cognitive overhead for me in reading the existing bindings is how many "synthetic" types it adds that don't correspond to a named type in the .proto IDL. They have a bidirectional RPC in their example, and its generated Go method has a return type of `*BidiStream[ConverseRequest, ConverseResponse]`. In the grpc-go generated code, that would spawn a named interface type like `ElizaService_ConverseClient`. There's less to wade through.
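To illustrate why this helps: one generic type can stand in for every per-method stream interface. This `BidiStream` is a toy sketch, not connect-go's actual type (the real one has more methods and error handling):

```go
package main

import "fmt"

// BidiStream sketches how a single generic type replaces the per-RPC
// named interfaces (like ElizaService_ConverseClient) that grpc-go
// generates. The channels are a stand-in for the real transport.
type BidiStream[Req, Res any] struct {
	requests  chan Req
	responses chan Res
}

func (s *BidiStream[Req, Res]) Send(req Req) { s.requests <- req }
func (s *BidiStream[Req, Res]) Receive() Res { return <-s.responses }

type ConverseRequest struct{ Sentence string }
type ConverseResponse struct{ Sentence string }

func main() {
	// One generic type serves every bidi RPC; no synthetic named
	// interface per method in the generated code.
	s := &BidiStream[ConverseRequest, ConverseResponse]{
		requests:  make(chan ConverseRequest, 1),
		responses: make(chan ConverseResponse, 1),
	}
	s.Send(ConverseRequest{Sentence: "hello"})
	s.responses <- ConverseResponse{Sentence: "hi there"}
	fmt.Println(s.Receive().Sentence)
}
```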

But for me that's a "nice to have", and the big win I see is the unbundling of gRPC. What I've wanted is just a standard way to do Protobuf-defined RPC over HTTP, preferably one that the developers of APIs I use have also adopted. gRPC is that, but when I've deployed it, I've deployed it within a service mesh that provides service discovery, load balancing, circuit breaking, backoff and retry behavior, etc. But gRPC clients include all of that, too, and you can't opt out. The gRPC client will think it has one subchannel open to a single service endpoint, when really it's got a localhost connection to a sidecar proxy providing transparent load balancing. I would just hope things ended up OK.

Distributed systems are complex enough, and the fewer state machines I have to internalize, the better I can understand the rest. Connect removes some of the incidental complexity from something that's meant to be a ubiquitous server-to-server protocol. I'm looking forward to TypeScript and Rust bindings being released!


The OP discusses code complexity, bugs, API instability, and incompatibility with the rest of the Go HTTP library ecosystem as issues that it’s aiming to solve.


> Rather than using the Go standard library's net/http, grpc-go uses its own implementation of HTTP/2. It's incompatible with the rest of Go's HTTP ecosystem, so you can't cleanly serve gRPC requests alongside other HTTP traffic and can't use most third-party packages.

Mostly because grpc-go's implementation is about 5-10x faster than net/http.


I'm curious how the performance of the two (grpc-go and connect) compare in benchmarks in terms of memory usage and so on. I know there is fasthttp in Go which sometimes people use as an alternative to net/http for performance sensitive code. https://www.sobyte.net/post/2022-03/nethttp-vs-fasthttp/

I also presume there is some reduction in bugs simply from the reduced LoC in this project compared to grpc-go.


Good stat, any more detail? I'd love to read more about why.


invoking Cunningham

Maybe it follows the spec completely and defensively, requiring many more checks and supporting typically unused behavior


> We've got big plans for the Connect ecosystem! Along with Connect in Go, we've been working on a TypeScript implementation for browsers. It shares the same design priorities: it's idiomatic TypeScript all the way down, stays close to the browser's fetch API, fits neatly into React hooks, and supports both gRPC-Web and Connect's own protocol. It also produces tiny bundles. We've already replaced grpc-web in the Buf Schema Registry frontend, and we're planning to release connect-web soon — stay tuned.

i believe this will be huge. biggest blocker for grpc adoption is the really poor js and grpc-web implementations.


Buf has done an amazing job simplifying the whole proto generation process, and now they're at it again, making gRPC easier to deal with. I've been using buf for years now and am excited to dive into connect! The team is rock solid and I love their development philosophies. This is a technology to bet on.

No affiliation, just a fan


It always felt like gRPC was solving Google problems, heck that's (probably) what the g stands for.

Connect is for the rest of us. I wonder what kind of useful general-purpose interceptors folks will come up with?


As someone who’s done lots of RESTful (or RESTish) json/HTTP APIs, lots of gRPC, and lots of GraphQL … the simplest, best solution in almost all cases is RESTful json/HTTP APIs. There’s really no need for gRPC for the rest of us, it’s just unnecessary complexity.

If you want code gen, just edit OAS specs with Stoplight’s OpenAPI editor, and generate clients/servers with OpenAPI generator. If you really need event streaming between services, just use something like Google PubSub, webhooks, Kafka, whatever - external systems are better for this than direct server to server comms because they’re more reliable, far less worries about deployment, etc.

RPC seems simple at first but always balloons into excess complexity. Just stick with REST, and sprinkle in a PubSub system if absolutely necessary, it’ll stay simpler that way.


I don't see how Kafka means less worries about deployment


For event streaming, a shout out for redis's streams data type. A lot like Kafka for a lot less administrative overhead. Works in cluster mode, too.


gRPC streaming is too complicated so you should ... deploy Kafka?


I think the principle of using messaging is valid. You could just use a simple reliable MQTT broker.


gRPC is overkill for pretty much everything I do, but I like Protocol Buffers[0] for serialisation/deserialisation of big objects to disk.

[0] FlatBuffers would probably work well too, though I haven't used it


Only when "rest of us" === Go developers, in its current state.


I put out an example of using it to switch the transport so that it's over NATS instead; this works now thanks to the interop with the net/http package: https://github.com/wallyqs/connect-go/commit/2e744ec4bf7ce31... Internally, requests are treated as NATS requests, so you'd get similar performance and latency as when using core NATS request/response.


There's definitely a tipping point where having language independent interfaces/protobufs and definitions of services are basically mandatory for any cross-team collaboration or work to be possible. But yeah for small teams or single products it really might not be necessary or be total overkill early on in a new product.


No, the g in gRPC stands for gRPC.


It’s something new every release. Current is golazo, previous was gravity. https://github.com/grpc/grpc/releases

They use this to pad the release notes so they don’t have to comprehensively document the behavior changes.


> They use this to pad the release notes so they don’t have to comprehensively document the behavior changes.

I want to laugh, but I also want to cry because it's true.



Curious to see their typescript implementation and how it compares with https://github.com/stephenh/ts-proto which works great for grpc-web.


https://github.com/bufbuild/protobuf-es/blob/main/docs/gener... And there's also this which is by the same author but came before it: https://github.com/timostamm/protobuf-ts

The latter has code-generation for services and has various transport packages for twirp, grpc, and grpc-web.


You found us :) That's the very early version of the marshaling support - it's pure ECMAScript, so it'll work in browsers, Node, or Deno.

`connect-web` will add the RPC layer, built on top of the browser's `fetch` API.


How does it compare to Twirp, which is already well established?


Twirp _is_ great. For unary RPC, Connect is similar: using the Connect protocol, it works over HTTP/1.1 or HTTP/2, doesn't require any special networking infra, and has first-class support for JSON payloads. In short, `curl | jq` works again.

To us, the biggest difference is that Connect _also_ supports the gRPC and gRPC-Web protocols. That lets your code interop with a much, much larger ecosystem of tools and systems, many of which are better-supported than their Twirp equivalents.

We designed the Connect protocol and the Go APIs to make multi-protocol support seamless. Server-side code supports the Twirp-like protocol and the gRPC protocols simultaneously by default. Clients can switch protocol with one option, no other code changes required.


twirp is great. You get all the benefits of a schema'd/proto'd wire format (like automatic client library generation), but using standard http infrastructure (web servers, loadbalancers, etc etc) without the custom grpc routing stuff.


I think another differentiator is support for streaming. Tbh I am shocked at the number of folks that use streaming, but alas.

It was the #3 issue opened on twirp repository, but they never settled on a solution.

https://github.com/twitchtv/twirp/issues/3

One super underrated feature Connect offers for Go devs is access to the request/response headers. No more plumbing incoming/outgoing context :phew:


I've been using grpc-js for peer-to-peer work. It's been tough, so much so that I'm considering dropping it in favour of just JSON-RPC.

I'd hope that when the TypeScript version is available, it actually uses promises and async generators, and supports both client side and server side. And most importantly, that we can use any underlying stream socket (like UDP).


Are you familiar with https://trpc.io/?


Buf is cool, but I do wonder about this custom protocol. It sounds good, but I thought gRPC’s bidirectional streaming was neat. It seems that Connect may support it, but it’s not straightforward and it may only work in some conditions:

> Streaming RPCs may be half- or full-duplex. In server streaming RPCs, the client sends a single message and the server responds with a stream of messages. In client streaming RPCs, the client sends a stream of messages and the server responds with a single message. In bidirectional streaming RPCs, both the client and server send a stream of messages. Depending on the Connect implementation, IDL, and HTTP version in use, some or all of these streaming RPC types may be unavailable.

I am also curious to check out whether its own API addresses my concerns about gRPC streams. I recall having a lot of trouble figuring out whether gRPC would handle streaming connections hanging gracefully, for example.


I'm not the greatest wordsmith, and in the rush to write lots of docs for today's launch I let some less-than-clear paragraphs slip through. Sorry! I'll try my best to clarify - appreciate you pointing this paragraph out :)

In general, gRPC documentation is very clear because you're either in or out: if you have end-to-end HTTP/2 and trailer support, you're in and everything works. If you don't have both, you're all the way out and _nothing_ works. Connect aims for something more like progressive enhancement, where implementations support as much of the feature set as they can. Even if that's just unary (request/response) RPCs, it's still useful.

No matter what protocol you're using, bidirectional streaming requires:

1. A schema language with the concept of bidi streaming RPCs. If you're using Thrift, you're out of luck.

2. An HTTP/2 connection. If you're using Python and `requests`, you're out of luck.

If you have both of those, you're capable of bidi streaming. (gRPC's HTTP/2 protocol _also_ requires support for HTTP trailers.)

Protobuf supports bidi streaming RPCs. `connect-go` supports HTTP/2 (and trailers, for the gRPC protocol). So if you're using Protobuf schemas and a `connect-go` client and a `connect-go` server, you have full bidi streaming support and you can choose between any of the three supported protocols. If you're using a `grpc-go` client and a `connect-go` server, or vice versa, you have full bidi streaming support but only using the gRPC HTTP/2 protocol. The wire details differ between the gRPC, gRPC-Web, and Connect protocols, but the semantics are the same and all the Go APIs behave the same.

Hopefully that clarifies things. (If not, let me know and I'll take another run at it!)


I think everything gRPC can do, Connect can do too, either via gRPC or its own protocol. Given browser support is planned, I think the "in some conditions" you refer to is basically the browser using HTTP/1.1. I imagine they'll support a websocket driver for true bidirectional streaming.

> I recall having a lot of trouble with figuring out whether gRPC would handle streaming connections hanging gracefully, for example.

I think I know what you mean, I've always had a hard time differentiating from protocol errors and application errors. Ultimately I gave up trying to be smart and just treated any interruption the same way: resume the stream from where it left off. Transactional behavior is impossible within just gRPC, as far as I can tell.


Basically, yes! Browsers aren't the only clients (or servers) with limited capabilities, but they're probably the most important. Funnily enough, the biggest problem with browsers & gRPC isn't HTTP/2, it's support for HTTP trailers.

I'm bemused by gRPC's status code system. It seems just as difficult to use as HTTP status codes, more or less, while also being less widely understood. If I were designing for myself, there would only be four error codes: maybe retriable and definitely not retriable, with an extra bit for whether or not the application returned the code explicitly. Sadly, we had to adopt gRPC's codes to make multi-protocol support work well. (Plus, I don't think anyone else agrees with my hot take on errors!)


> the biggest problem with browsers & gRPC isn't HTTP/2, it's support for HTTP trailers.

Browser fetch also doesn't support streaming request body (on current day browsers)

https://web.dev/fetch-upload-streaming/ "It doesn't look like your browser supports request streams."

https://bugs.chromium.org/p/chromium/issues/detail?id=688906


Yeah, I suppose if you can just use ordinary gRPC, this is basically not a real problem. I’ll definitely want to take a look at the API to see if it improves on the confusion I ran into with upstream gRPC Go. The inability to do “transactional” stuff with gRPC isn’t really a deal breaker, I just want good error handling and recovery.


I’m tracking a similar issue in https://github.com/bufbuild/connect-go/issues/222 - is this roughly what you’re looking for?


Yeah, I think so. I’m a bit fuzzy on the details as I haven’t been working with gRPC in a little bit now. But that seems like it would solve the problem, to me.


It’s weird seeing “full duplex” get reinvented. It’s not the wheel, but in computer time it practically is.


This looks great. Congrats buf team. How does it compare to https://storj.github.io/drpc/?


DRPC doesn't actually support gRPC generally - you can't connect with a python client to the go DRPC server. (At least, it was that way when I tried to use it.)


The main problem I have with gRPC is how the connection is handled; I don't really like how minimal the API is in regard to handling disconnections or connectivity issues (the famous GOAWAY). Sometimes it's not enough for the library to just autonomously reestablish the session: I want to be notified of any issue and be able to act in a custom way.


At my company we would love to use buf but can't: because of the BSR. We already pay shitloads for our VCS - we don't want to pay for another product that does the same thing. After using go and getting a taste of what life is like without third party package publishers (like npm, pypi, etc) it's hard to want to go back.


another gRPC alternative

https://brpc.apache.org/


Good luck Buf! I spent many years building an RPC framework around gRPC called Go Micro (https://github.com/asim/go-micro). I think one of the biggest issues was just resources to see it through but also my own desire to move beyond it towards a platform and services. I hope you're able to bring some sense to the gRPC world. It's mostly a networking library. The ecosystem around it is too low level. If anything, abstractions and more developer-friendly tooling would be a massive improvement. No one needs to see or touch the guts of gRPC. I wish I didn't have to peek into the internals, but unfortunately that's what it takes to integrate it elsewhere.

I hope you build something awesome for the community!


> If anything abstractions and more developer friendly tooling would be a massive improvement.

This was what I missed when I did JMS for a project in Java. The tooling felt confusing and I never found good tooling to see the messages passed through and poke and prod at them.


At $WORK, we wrote our own RPC framework that uses protobufs over HTTP. gRPC wasn’t an option because our infrastructure couldn’t support HTTP/2 and we needed first-class Ruby support, something Google doesn’t seem to have much interest in. Twirp didn’t exist when we started building our framework, but I’m not sure it would satisfy our needs anyways.

connect-go looks very similar to what we’ve built. If you ever release Ruby bindings, there’s a strong chance we could switch to it. Nice work!


This is really interesting, having built your own rpc layer, which parts of the code did you visit to compare implementations? For example: for logging frameworks, I immediately check if they do buffering and support reentrancy.


Why wouldn’t twirp satisfy your needs? It’s just protos, codegen and http. And supported very well in ruby


Intro definitely hits some pain points. I've used a Go+gRPC+Flutter stack for the last few years, and when Connect has a Dart implementation, I'll give it a try for sure.


… because you really couldn’t make a worse gRPC.

Ok, though, that really is the point of the article. And, it’s more than fair. Really hoping for a Java version of this.


This might be a better gRPC for Go. When I read the title I expected a new protocol based on gRPC, maybe something like Cap'n Proto.


Since this serves on a REST endpoint, does that mean you wouldn't need an envoy proxy if you want to use gRPC from the web?


That's correct!

Edit: That's _mostly_ correct! You don't need Envoy to have JS/TS running in a browser call your handler. If I'm being really picky, though, in that case the browser and your handler would need to use either the gRPC-Web or Connect protocol, not the backend gRPC-over-HTTP/2 protocol. Any `connect-go` handler you write supports all three protocols by default, so the short answer is still "yes!"


> Excluding comments, grpc-go is 130 thousand lines of hand-written code. It has dozens of subpackages, nearly a hundred configuration options, and bespoke name resolution and load balancing mechanisms.

In short: what's the deal with all these fences, and who the hell was "Chesterton"?


I don't think that's fair. It's telling that two companies have ditched the enormous complexity of gRPC for their own solution. There's a great comment from a gRPC engineer back when Twirp launched evaluating why Twirp has gained traction: https://news.ycombinator.com/item?id=18688942

I evaluated gRPC but the web story is a disaster; you need an envoy proxy, and TypeScript support was half-baked. I went with Twirp instead. If Connect had been around, I would have gone with it over Twirp because it's compatible with gRPC and Twirp makes a few weird choices like JSON for error messages.


To abuse Chesterton’s analogy a bit, I suspect that some of the fences only keep in google-shaped sheep. Sometimes the best way to find out which fences are necessary is to tear them all down.


This is such a low effort and information free argument. I suppose nothing is ever unnecessary in your eyes?


Sometimes it turns out down the line that all the things that you thought were unnecessary are in fact necessary.


Yeah and sometimes it doesn't. That's my point.


Pretty cool. Maybe if it gets supported by some big org this can be something. gRPC in general is not a good option for the reasons this project mentions, among other things like not playing nice with common infra. Does this also need the magic of Envoy proxy?


Is there any good guide on when to use gRPC over JSON and is it good to use for small teams?


gRPC and JSON are not really alternatives. gRPC is an RPC framework that typically uses protocol buffers as a serialization format but is perfectly capable of using JSON.

When you would use JSON over HTTP vs gRPC? I can think of a couple cases:

1. Your infrastructure doesn't support HTTP/2 with trailers.

2. You need to send/receive payloads that aren't well-defined. The nice thing about JSON vs protocol buffers is that JSON is self-describing and flexible. In most languages you have libraries which can deal with JSON as essentially a Map<String,Object>, with maybe a nice cursor API that allows you to pick/update specific fields without knowing anything else about the object structure. You can encode the JSON AST in protocol buffers but it will not be very nice I don't think.

3. You want your wire-format to be human readable for some reason. You can technically do this with gRPC as it is not strictly tied to protocol buffers as the wire format for messages, but in practice it is kind of a pain and language support for doing it can be spotty. At the very least you have to figure out how to do it as protocol buffers is very much the default.

4. The code-generated classes/structs from gRPC IDL are often quite bad to work with. If you've ever used the case classes generated by scalapb (Scala) or the structs generated by Prost (Rust) then you know what I mean. You end up having to create a parallel set of data structures for your protobuf message types to keep yourself sane. With JSON on the other hand, you can typically create classes/structs which are nice to work with internally but also directly serializable to JSON.

All that said, gRPC is really nice. If I am bootstrapping a project, gRPC is definitely the default and I need a good reason to use anything else.


Understood. I think I'll have to play around with it just to get a good feel of whether it is worth the switch for my personal projects at least. Thanks.


How does it differ from Twirp?


Very cool, love the direction and this is definitely needed


From scanning the page, it’s a little bit confusing what the scope of this actually is:

A new protocol specification that fulfills the goals of working in browser environments (no trailers) and is more debuggable? Seems like it - but then it wouldn’t really be compatible with regular gRPC and gRPC-Web.

A library which implements the new protocol and the existing ones? Apparently, from how I understand the page - but that probably wouldn’t fulfill the "no-bloat" goal too much.


I read the whole page and the argument felt quite clear.


Any similar frameworks for flatbuffers?


You can use Connect for flatbuffers, too - the gRPC, gRPC-Web, and Connect wire protocols aren't tied to protobuf.

https://connect.build/docs/go/serialization-and-compression#... covers this, and coincidentally uses flatbuffers as the example :)


Very excited by what Buf is doing. Would love to see Bazel rules to completely automate all of this stuff and make a uniform dev environment possible.


Awesome, do you have any example code for flatbuffers?


I don't - I've never actually used flatbuffers. There are two pieces of code you'll want (if possible):

1. A `connect.Codec` implementation. I think it'd look almost the same as our Protobuf codec [0], but you'd use `FlatbuffersCodec.Marshal` and `FlatbuffersCodec.Unmarshal` (from github.com/google/flatbuffers/go). You _must_ have a `Codec`, but it looks like it should be pretty quick.

2. Ideally, you'd have a standalone program to parse your Flatbuffer schema and produce Connect code. The output would probably be very similar to `protoc-gen-connect-go`'s [1]. This isn't required, but it's a nice quality of life improvement (check out [1] to see the kind of conveniences it adds). With Protobuf, you'd do this via a `protoc` plugin and you wouldn't need to parse the schema yourself. From a quick look, I don't think `flatc` supports plugins at all - so maybe you'd either skip this step or put in the effort to parse the schema yourself?

[0]: https://github.com/bufbuild/connect-go/blob/main/codec.go#L5...

[1]: https://github.com/bufbuild/connect-go/blob/main/internal/ge...

Also, thank you for making me stop and think about this in detail. I just pulled Flatbuffers out of a hat when I was writing that part of the docs and assumed that `flatc` supported plugins like `protoc`. Turns out that when you assume... I'm updating that portion of the documentation now :)
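A rough sketch of step 1 above: the interface here mirrors the shape of connect-go's `Codec` (Name/Marshal/Unmarshal) as described, but check the linked codec.go for the real signatures. The implementation uses `encoding/json` purely as a stand-in; a flatbuffers codec would call the flatbuffers marshal/unmarshal functions instead.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// codec approximates connect-go's Codec interface; the real interface
// lives in github.com/bufbuild/connect-go and may differ in detail.
type codec interface {
	Name() string
	Marshal(message any) ([]byte, error)
	Unmarshal(data []byte, message any) error
}

// jsonCodec is a stand-in implementation. A flatbuffers codec would
// delegate to github.com/google/flatbuffers/go here instead.
type jsonCodec struct{}

func (jsonCodec) Name() string { return "json" }

func (jsonCodec) Marshal(message any) ([]byte, error) {
	return json.Marshal(message)
}

func (jsonCodec) Unmarshal(data []byte, message any) error {
	return json.Unmarshal(data, message)
}

func main() {
	var c codec = jsonCodec{}
	data, err := c.Marshal(map[string]string{"sentence": "hi"})
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Name(), string(data))
}
```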


Hmm, I'm really surprised that gRPC is built on HTTP/2. It's a very complex protocol.


Yeah, and I am still sad that they've removed Java RMI.


Can I say _finally_ !!


All these web technologies stacking inefficiencies on top of each other to build massive frameworks that you can do much better with a hundred lines of C.


your site gives 403 forbidden error


I worked with the author of buf at Uber. He's brilliant. You should use his stuff.


Could be more convincing by offering examples and explaining why that means good things for buf.


no


nice


Is there any reason that this browser-compatible protocol is just not part of gRPC-Web? It would be much better to maintain a community project than to build a new branded project.

How strongly does Connect rely on gRPC? Because it sounds like it's easy to get locked in to buf's ecosystem.


In theory, there's no reason that grpc-web couldn't support a new protocol. That would be complicated, though; Envoy would also need to support it, and the gRPC team has already designed a protocol that they presumably like.

Connect supports gRPC fully, so you shouldn't ever be locked in - you can always migrate systems to `grpc-go`, one piece at a time. We intend all the Connect projects to be community projects - we're happy to accept contributions, design proposals, and anything else. At the same time, we're committing our resources to keeping these projects well-maintained.



