> Getting all of these technologies to work together was a real challenge. I had to dig through countless GitHub issues and dozens of example projects to make all these things work together. I'm offering this repo as a starter pack for other people with a Bazel monorepo targeting Kubernetes.
So, if it was such a hassle just to get them all glued together, how would they fare for a project that adopts it and matures?
It was a hassle mostly because a lot of the libraries are new and the interfaces are still changing a lot. Bazel is a fantastic build tool that I'd recommend for large projects. The Bazel + gRPC story, though, has a ways to go.
But I don't see any reason why you shouldn't use Bazel + gRPC for a larger project. You may have to tweak small things in the future but I suspect that the UX will only improve. Plus, you can always write your own Skylark rules if Bazel isn't cutting it for you in some way.
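For a sense of what "write your own Skylark rules" means in practice, here's a rough sketch of a custom Starlark rule that runs protoc to emit a descriptor set. Everything here is illustrative rather than taken from the repo, and it assumes a protoc binary is exposed at @com_google_protobuf//:protoc:

```python
# proto_descriptor.bzl -- illustrative sketch of a hand-rolled rule.
def _proto_descriptor_impl(ctx):
    # Declare the single output file and run protoc to produce it.
    out = ctx.actions.declare_file(ctx.label.name + ".desc")
    ctx.actions.run(
        outputs = [out],
        inputs = ctx.files.srcs,
        executable = ctx.executable._protoc,
        arguments = ["--descriptor_set_out=" + out.path] +
                    [f.path for f in ctx.files.srcs],
        mnemonic = "ProtoDescriptor",
    )
    return [DefaultInfo(files = depset([out]))]

proto_descriptor = rule(
    implementation = _proto_descriptor_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = [".proto"]),
        "_protoc": attr.label(
            default = Label("@com_google_protobuf//:protoc"),
            executable = True,
            cfg = "host",
        ),
    },
)
```

Real gRPC rules are a lot messier than this (codegen plugins, import paths, per-language outputs), which is exactly the part you'd rather not maintain yourself.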
It's been a while since I looked at this, but ~6 months ago I was surprised to find that there are no official Bazel rules you can just import to make gRPC "just work." As a result, various third-party incarnations exist, and they differ slightly in their usage, which gets pretty frustrating when you're just trying to compile a basic service.
I really wish the gRPC team would come out with rules you can import and best-practice examples / tutorials :(
(I work at Google but opinions are my own. I don't work on gRPC. I don't have to write much Skylark normally because Google infra teams write the common rules you need. I found that writing my own Skylark for projects outside of work isn't that fun.)
Things have improved but only marginally. Java and Go are the only languages that have "native" gRPC support in Bazel. For others you need to rely on an external repo whose connection to Google is unclear (https://github.com/pubref/rules_protobuf). I find this disappointing, as a gRPC-connected microservice architecture that Just Works and easily interoperates with Docker/Kubernetes/whatever seems like it could be THE "killer app" for Bazel. I built this project to say "look, it's easy!" But alas, it wasn't, and I've communicated that to some core Bazel folk.
I must say, Java was pretty seamless: I followed the instructions and it came together quickly. Go was harder because there's a vendoring issue (surprise!): the build ends up with two versions of the gRPC for Go library ("google.golang.org/grpc"), one managed by Bazel and the other in Go's vendor directory. Gazelle tries to make you use the vendored version, which leads to a conflict, so I had to figure out how to disable that default behavior. That took a few hours of banging my head against the wall, followed by a post to the Bazel Google Group that quickly yielded a workaround (thanks, Ofer!).
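For anyone hitting the same conflict: I won't swear this is the exact workaround from that thread, but the general fix is to tell Gazelle to resolve the gRPC import to the Bazel-managed external repo instead of the vendored copy, via a resolve directive in the top-level BUILD file. A sketch (the Go import prefix below is made up):

```python
# BUILD.bazel (repository root) -- sketch only.
# The resolve directive forces google.golang.org/grpc to map to the
# external Bazel repo rather than //vendor/google.golang.org/grpc.
# gazelle:prefix github.com/example/colossus
# gazelle:resolve go google.golang.org/grpc @org_golang_google_grpc//:go_default_library

load("@bazel_gazelle//:def.bzl", "gazelle")

gazelle(name = "gazelle")
```

Gazelle's -external flag (external vs. vendored) controls the same choice globally when you regenerate BUILD files.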
Beyond that, the real issue is that no other languages have "official" gRPC support, i.e. support from the core Google team rather than from third parties. I do hope that they expand the number of languages soon.
We are using a similar setup. K8s, Bazel, Go, Java (Android), Swift (iOS), Typescript. Everything with gRPC.
I really like Bazel because it makes working with Protobuf so much cleaner (at least in theory) and allows us to have one build system for all those languages (at least in theory).
The easiest to set up was Go. Go support is already very good, including gRPC and Protobuf.
The only drawback was that it breaks editor integration of some tools / linters because of the different directory layout for generated files.
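For anyone comparing notes, a gRPC-enabled Go package with rules_go comes out roughly like this (target names and import path are made up):

```python
# BUILD.bazel -- rough sketch of a Go + gRPC package under rules_go.
load("@io_bazel_rules_go//go:def.bzl", "go_library")
load("@io_bazel_rules_go//proto:def.bzl", "go_proto_library")

proto_library(
    name = "hello_proto",
    srcs = ["hello.proto"],
)

go_proto_library(
    name = "hello_go_proto",
    compilers = ["@io_bazel_rules_go//proto:go_grpc"],  # messages + gRPC stubs
    importpath = "github.com/example/hello",
    proto = ":hello_proto",
)

go_library(
    name = "hello",
    embed = [":hello_go_proto"],
    importpath = "github.com/example/hello",
)
```

Gazelle can generate most of this, which is also where the editor friction comes from: the generated .pb.go files end up under bazel-bin/ instead of next to the sources the tools expect.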
Vanilla Java also has good support, but when you go to Android land, things change. At first we had to use a custom fork of grpc-java to build for Android, but with this change (https://github.com/grpc/grpc-java/pull/4289) we were able to go back to the main repo.
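The vanilla Java side is similarly small once grpc-java's Bazel rule is loaded; a rough sketch, reusing the hypothetical hello_proto target from the Go example above:

```python
# BUILD.bazel -- rough sketch of the Java side using grpc-java's rule.
load("@io_grpc_grpc_java//:java_grpc_library.bzl", "java_grpc_library")

java_proto_library(
    name = "hello_java_proto",
    deps = [":hello_proto"],
)

java_grpc_library(
    name = "hello_java_grpc",
    srcs = [":hello_proto"],       # proto_library containing the service
    deps = [":hello_java_proto"],  # generated message classes
)
```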
gRPC-web was a prohibitively large dependency for our front end, so in the end we switched to a JSON API (using Envoy). I've read that Google uses gRPC for products like Gmail, and since Google is so obsessed with web performance I wonder how they manage the dependency size (a few hundred KB for us). There is a lot of information scattered around, like a JSON-proto protocol (not publicly available?), which apparently shouldn't be needed with proto3 and newer browsers because performance is supposed to be similar. grpc-web uses the official protobuf JavaScript package; with protobuf.js, both the dependency size and the compiled protobuf output might be smaller.
While building Swift and Obj-C for iOS was super easy with Bazel, we couldn't manage to build our protobuf/gRPC code. There were no working rules at the time, and we had no time to fix that ourselves, which is why we are not building that part with Bazel yet.
https://github.com/pubref/rules_protobuf/issues/188
I'd appreciate any insights you have on Objc support and grpc-web. It'd be great to be able to use grpc across the whole stack.
I'm using gRPC-ish stuff in my stack. I would say that one large hole with gRPC right now is figuring out the local development story. I am all on board with running databases in local Docker / local k8s, but I'm still having trouble wrapping my head around local grpc-web development, given that the grpc-web repo seems completely stalled right now.
I am using Improbable's grpc-web [0] right now and it works very well once some kinks were worked out. The only problem has been the lack of client-side streaming.
Additionally, I use grpcc [1] to test calls locally, and that has worked well. I originally tried some GUIs that looked promising but couldn't get them to work. This is good enough and less work than writing a quick client in Python.
I am definitely using Weave Net for my trivial single-node cluster because it was the only CNI option that included a one-liner to install.
I have never had any reason to try a different one: the pods can talk to each other, and it "does what it says on the tin." On top of that (though unrelated), their webinars tend to be star-studded and very informative.
Very cool. Eventually I'd like to turn Colossus into something more like what you have here, i.e. a "real" backend that does something meaningful. The good news is that the Bazel + gRPC plumbing was the hard part. Adding new, meaningful services will be pretty trivial going forward.
Is it weird that I use most of this stack already but still thought the repo was an attempt at satire? I feel like this experience has left me questioning everything I think I know.