Disclaimer: I work at Google Cloud, helping "developers" understand what Anthos is.
(If you criticize my comment, there's a decent chance marketing is gonna have a discussion with my boss about my employment. I'll try not to use as many marketing-y terms as the article.)
At a high level: if you're a solo developer or a small company, you probably don't need to understand everything going on there; many Anthos features/products are actually available a la carte, possibly under different names.
First, I'll agree that the blog post is written in a way that might be hard to grasp for hands-on practitioners (assuming I know the HN crowd well enough, since I visit it 30 times a day). I'll start with an architecture diagram: https://cloud.google.com/anthos/docs/concepts/overview
You might still find this diagram difficult to understand (unless you live with the "cloud-native" stuff day to day), so I'll break it down for you:
- Anthos GKE: This is basically the GKE (Google Kubernetes Engine) you know, but it can now run clusters outside Google Cloud, too. If you ever thought "ahh, X cloud doesn't have good Kubernetes support", Google can now bring GKE quality to your AWS account (support is now GA), your Azure account (in preview), or your on-prem datacenter. The GKE you know is still available with a pay-as-you-go model.
I must note that these GKE on-prem/hybrid capabilities are probably the most critical part of the Anthos stack.
- Anthos Service Mesh: This is Istio (the open source service mesh), set up and managed for you. It helps you (1) connect your services, even if they're spread across different datacenters or clouds, (2) automatically export telemetry like traces and metrics and set up SLOs/alerts on top of them, (3) set policies for your RPC traffic at a high level (e.g. "retry policies for this service should look like X"), and (4) enable mTLS automatically across your ENTIRE fleet, without changing your code and without having to run a PKI. (If you're still not convinced about service mesh, you probably don't have an RPC-heavy infra like some companies do; maybe you don't need it.) You can technically install/manage Istio yourself across your clusters, but with Anthos, Google does that for you.
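To make the mTLS point concrete, here's roughly what "turn on mTLS without touching your code" boils down to at the Istio layer. This is a minimal sketch (plain Istio, nothing Anthos-specific), assuming you have the kubernetes Python client installed and an Istio-enabled cluster; the "payments" namespace is made up:

    # Enforce mutual TLS for every workload in the "payments" namespace.
    # The Istio sidecars handle cert issuance/rotation; app code is unchanged.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod

    peer_auth = {
        "apiVersion": "security.istio.io/v1beta1",
        "kind": "PeerAuthentication",
        "metadata": {"name": "default", "namespace": "payments"},
        "spec": {"mtls": {"mode": "STRICT"}},
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="security.istio.io",
        version="v1beta1",
        namespace="payments",
        plural="peerauthentications",
        body=peer_auth,
    )

(In practice you'd keep that resource in git and let your tooling apply it; the point is that the apps themselves never change.)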
- Cloud Run for Anthos: This is basically hosted Knative (the open source serverless stack on Kubernetes), managed for you. If you like containers that rapidly autoscale based on requests, you can have that anywhere (GCP, AWS, on-prem). You can install Knative anywhere yourself, but Google does it for you and makes sure it works properly with your Istio and Kubernetes versions. I've written a blog post explaining what Knative does here: https://ahmet.im/blog/knative-better-kubernetes-networking/ Basically, you can use this as a CaaS (containers as a service), or build your own opinionated in-house PaaS on top of it.
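If you're wondering what "request-based autoscaling containers" looks like in practice, here's a minimal Knative Service sketch, again via the kubernetes Python client (the service name, namespace and target value are made up; normally you'd deploy this with gcloud or kubectl instead):

    # A Knative Service: one container image, autoscaled on in-flight requests
    # (including scale-to-zero when idle). Names and values are illustrative.
    from kubernetes import client, config

    config.load_kube_config()

    ksvc = {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": "hello", "namespace": "default"},
        "spec": {
            "template": {
                "metadata": {
                    # aim for ~50 concurrent requests per container before scaling out
                    "annotations": {"autoscaling.knative.dev/target": "50"}
                },
                "spec": {
                    "containers": [{"image": "gcr.io/knative-samples/helloworld-go"}]
                },
            }
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        namespace="default",
        plural="services",
        body=ksvc,
    )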
We also offer Cloud Run as a fully-managed serverless product (not running on GKE, runs on Google’s infra directly) if you're into that: https://cloud.run/
- Anthos Config Management: This is available to GKE users as "GKE Config Sync". It's basically a GitOps tool (though not a complete suite like Weaveworks Flux, etc.). You point it at a git repo/branch, and it will apply the Kubernetes manifests in that repo across your clusters. It has a pretty neat model where leaf directories correspond to Kubernetes namespaces, and there's inheritance to fan out Kubernetes policy objects to multiple namespaces, etc. AFAIK many companies using Kubernetes at scale build similar in-house solutions, so we worked with them to create a solution for everyone. This product also has some policy enforcement features, plus monitoring to check that your Kubernetes objects are propagating properly to your clusters.
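To illustrate the directory model, a made-up repo might look something like this (the team names and file names are hypothetical, but this follows the hierarchical layout the tool expects):

    config-root/
      system/              # Config Sync's own configuration
      cluster/             # cluster-scoped objects (ClusterRoles, CRDs, ...)
      namespaces/
        limit-range.yaml   # inherited by every namespace directory below
        team-a/            # leaf directory == namespace "team-a"
          namespace.yaml
          rolebinding.yaml
        team-b/
          namespace.yaml

Commit a change to limit-range.yaml and it fans out to team-a and team-b on every cluster that syncs from this repo.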
- Other components: You'll find them on the page I linked. I think they are not as interesting to the day to day practitioners here.
Obviously I've oversimplified many of these, but the docs for each specific product do a decent job of explaining them; give them a read if you're interested.
I'd say that if you are a practitioner, you don't need to worry much about the details of Anthos. You can continue to focus on the technologies you need to know (choose how low-level you wanna go: containers, Kubernetes, RPC/networking layer/service mesh, DevOps/GitOps) and continue to be successful.
However, if you work at a company that hasn't gone through "cloud-native application modernization" (I'm hitting my head on the wall for saying this), Anthos can actually help your company use a cloud-native stack WITHOUT having to move to the cloud.
As you might imagine, a ton of companies out there (perhaps happily) run on infrastructure stacks from another era. Those looking to change that and adopt Kubernetes need to build a lot of in-house tooling and hire talent to manage Kubernetes. Google is decent (maybe more than decent) at running Kubernetes clusters at scale for many customers, so Google can bring that service, and everything listed above, to you.
I'm not that familiar with pricing, but as you can see from https://cloud.google.com/anthos/pricing, if you want to use these features just on GCP, there's a $30/vCPU/month charge based on the GKE nodes you're running. For the other stuff, you need to contact support.
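To put a rough number on that (my own back-of-the-envelope math, not an official quote): a 10-node GKE cluster with 4 vCPUs per node would be 40 vCPUs x $30 = $1,200/month for the Anthos entitlement, on top of the usual GKE/Compute Engine charges. Double-check the pricing page though, I'm genuinely not the authority on this part.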
Thank you for the praise, I'll pass it on to the team and I'm sure they'll appreciate it :)
I'm glad to hear you're enjoying the new Cloud Run support we've added to Cloud Code for IntelliJ. We'll get it added to Cloud Code for VS Code soon too for anyone else who is interested.
Also, thank you for submitting feedback via the user surveys. I read every single one, and survey feedback (yours among others) absolutely was used to prioritize this work. If you have other features you'd like to see, please let us know.
Agreed, Cloud Run is awesome, but I'm still waiting on that Load Balancer to Cloud Run route (in private preview, from what the Cloud Run PM said on Twitter).
How are these actually distributed and what do they actually consist of?
Is it binary-only software or is the source code available anywhere, and if so where?
Can you install it yourself, or do you have to give Google systems or employees root access to your servers or your instances on a non-Google cloud?
Does it need to communicate with Google servers to keep working or can it be configured to work without? If you go for a hybrid setup, does it always need your datacenter to be up, or can it be configured so that it doesn't?
In case it needs to always communicate with Google servers, what's the point of using a multi-cloud approach if it will go down whenever Google goes down?
Is there any plan to deliver it in a normal fashion (clear distributables and prices) instead of "contact sales"?