
We're moving away from Kubernetes because managing even a simple implementation is a full-time job. Obviously Kubernetes is meant for enterprises, but I've seen seed-round/Series A startups with small engineering teams using it.

We were at a crossroads: either hire an SRE with extensive Kubernetes experience for $130K/year, or move to a managed platform until we actually need those capabilities.

We're happy with the move and the pressure relief is tremendous. Now we can focus on features instead of worrying whether a deploy is picking up ENV vars from our local machines and crashing the system.




Why not just use managed Kubernetes? Every cloud vendor has it, even Tier 2 operators like DigitalOcean.

Running Kubernetes is vastly different from using it to run your apps. You really shouldn't do the former unless you have to be on-prem or have some other specific need.


There's a ton at play here.

"Managed Kubernetes" really runs the spectrum between "one step above just installing it yourself on a bunch of VMs" and "I spend 1% of my time managing anything below the product." Each cloud provider exists somewhere different on this spectrum, with none of them being in quite the same location, and some of them have multiple different products which exist at different points.

For example: AWS is among the most bare-bones. EKS is just a managed control plane; coming from GKE, you might click "create a cluster" and then be very confused that there are no options for, say, instance size, or how many... because you have to do all of that yourself. There are tools like eksctl or Rancher which can help with this, but ultimately, you're managing those instances. You're doing capacity planning (you think Kube would be a great pick to integrate with spot fleets because of its ability to reschedule workloads onto a new instance when one goes down? Have fun setting it up; hope you like ops work). You're doing auto-scaling (and that ASG? It's not going to know about your pod resource requests, so you either need some very smart manual coordination between the two, or you need to set up cluster-autoscaler). You're setting up cluster metrics (definitely need metrics-server. Not Heapster, that was last year; metrics-server is this year. But how to visualize? Do I host Grafana in the cluster? Then I need to worry about authn. CloudWatch really isn't made for these kinds of things... maybe I'll just give Datadog a few thousand bucks). Crap, 1.16 is out already? They only support 9 months of releases with security updates?! I feel like I just upgraded my nodes! Oh well, time to lose a day replicating this upgrade across all of my environments.

I'd go on, but you get the point. There is nothing "managed" about EKS.
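For concreteness, even with eksctl the node management you end up owning looks roughly like this (a sketch; the cluster name, region, and sizes are purely illustrative):

    # cluster.yaml -- illustrative eksctl config; you still own these nodes
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: my-cluster        # hypothetical name
      region: us-east-1
    nodeGroups:
      - name: workers
        instanceType: m5.large   # you pick and capacity-plan this
        desiredCapacity: 3
        minSize: 1
        maxSize: 5

    # then: eksctl create cluster -f cluster.yaml
    # cluster-autoscaler, metrics-server, dashboards, upgrades: still on you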

DigitalOcean is pretty similar to this (it does provision instances, but the tooling beyond that is barebones). Google Cloud/GKE is "more managed" in a few senses; the cloud dashboard provides some great management capabilities out of the box, such that you may not need to reach for something like Datadog, and the autoscaler works really well without a lot of tinkering. There are still underlying instances, so you're worrying about ingress protection, OS hardening, OS upgrades, etc... but it's not as bad as AWS. Not by a long shot.

The holy grail (for some companies) is really something like Azure AKS + Azure Container Instances. No instances to manage. Click a button for a Kubernetes cluster. Schedule workloads. Get functional metrics, logging, tracing, and dashboards out of the box. Don't worry about OS upgrades, hardening, autoscaling, upgrading the cluster, etc.; we'll do it all for you, or at least make it one click to configure. That's the ideal situation. I haven't used AKS/ACI so I can't comment on whether Azure gets us there, but the idea is sound, even if it's more expensive.

This sounds like an anti-Kube post, right? Wrong (it's a Tide ad). The beautiful thing about Kubernetes is that it can span this spectrum. The same exact API surface can scale from a fully-managed abstract platform where you just say "take this git repo and run it" (see: GitLab Auto DevOps), all the way to powering millions of workloads across dozens of federated clusters at Fortune 500 companies.

But, to the OP's point: we're close to solving the right end of that spectrum, and a lot further away from the left end. We're getting there, but we're not there yet. There isn't enough abstracted management of these compute resources... yet. But there's enough money and desire out there that I know we'll get there.


>For example: AWS is among the most bare-bones. EKS is just a managed control plane; coming from GKE, you might click "create an cluster" then be very confused how there are no options for, say, instance size, or how many... because you have to do that all yourself. There are tools like eksctl or Rancher which can help with this, but ultimately, you're managing those instances. You're doing capacity planning (you think kube would be a great pick to integrate with spot fleets because of its ability to schedule and move workloads to a new instance when one goes down? have fun setting it up, hope you like ops work.). You're doing auto-scaling (and that ASG? its not going to know about your pod resource requests, so you either need some very smart manual coordination between the two, or you need to set up cluster-autoscaler). You're setting up cluster metrics (definitely need metrics-server. not heapster, that was last year, metrics-server is this year. but how to visualize? do i host grafana in the cluster? then i need to worry about authn. cloudwatch really isn't made for these kinds of things... maybe I'll just give datadog a few thousand bucks.) Crap, 1.16 is out already? They only support 9 months of releases with security updates?! I feel like I just upgraded my nodes! Oh well, time to lose a day replicating this update across all of my environments.

Haha - I had to chuckle at this. It really is this bad on EKS. My god, upgrades are a total joke.

As bad as it is, you have to believe that AWS wants to ignore Kubernetes and lock you in with ECS.

They need to get EKS as easy to manage as GKE, and they need to provide a free control plane/masters like all the other managed k8s services. Hopefully we see something at re:Invent this year... at this point there is no reason to use EKS unless you are locked into AWS. Which, unfortunately, is what AWS is counting on :(


GKE and AKS both require setting up instance types for your node pool(s), but you don't need to worry about the OS and related issues on either. Both also support auto-scaling and auto-upgrades.
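On GKE, for instance, a node pool with autoscaling and auto-upgrade is roughly one command (cluster name, pool name, and sizes here are placeholders):

    gcloud container node-pools create workers \
      --cluster=my-cluster \
      --machine-type=n1-standard-4 \
      --enable-autoscaling --min-nodes=1 --max-nodes=5 \
      --enable-autoupgrade --enable-autorepair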

GKE is far more managed though, with advanced features like global networking, aliased IPs, global load balancing, Istio/traffic manager integration, private IPs, metrics-based autoscaling, preemptible nodes, local SSDs, GPUs, TPUs, etc.

Azure Container Instances are nice, with the GCP counterpart being Cloud Run, built on Knative. Both products are designed to quickly run a container image behind a public endpoint, but ACI is not part of AKS. You might be thinking of the Azure Virtual Kubelet, which lets you burst on demand to ACI, but this is a very advanced use case and not the normal cluster setup.


> The holy grail (for some companies) is really something like Azure AKS + Azure Container Instances. No instances to manage...

Google Cloud has had that since before it was called Google Cloud -- I mean Google App Engine. It has all those features, and with its "Standard" environment you don't even need to build a Docker image.


Cloud Run is even closer. App Engine Standard requires you to use the sandbox, App Engine Flex is tied to VMs, but Cloud Run makes containers serverless:

https://cloud.google.com/run/

...and since it's based on Knative, it's portable-ish.
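Deploying there is essentially one command, something like this (the service, project, and image names are illustrative):

    gcloud run deploy my-service \
      --image gcr.io/my-project/my-image \
      --platform managed \
      --region us-central1 \
      --allow-unauthenticated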


I can attest that Cloud Run has been great for me. I literally "ported" over some of my apps that were already running in containers by replacing references to my existing PORT_NUM env var with Cloud Run's PORT, and it just worked. It's been a life changer. Now if it could run non-HTTP workloads, I'd be able to shut down every one of my VMs.


What makes DO tier 2?

I use one Linode for personal stuff, but otherwise I'm all in on dedicated bare metal. From my perspective, DO is just another cloud vendor.


AWS, GCP, and Azure have a huge portfolio of datacenters, infrastructure, compute, networking, storage, security, databases, analytics, AI/ML, and other managed services that are completely missing from DigitalOcean.

DO is tier 2 because it provides basic IaaS components along with Spaces (its S3 equivalent), managed databases, load balancing, and K8s, as compared to providers like Linode, which would be tier 3 and below.


I'd say because they have far fewer products, and less glue between them, than the big ones (GCP, AWS, Azure).

Just compare the list on the product pages:

- https://www.digitalocean.com/products/

- https://aws.amazon.com/products/

Also, DO doesn't have a lot of the more niche services like Redshift, mail, or ML.


I don't understand. You had a problem: you needed to run a distributed system with containers, which Kubernetes is perfect for. Did your problem disappear, did your requirements change, or did you switch to a different container orchestration tool?


I'm saying we didn't have a problem that Kubernetes solved. A lot of seed-round/Series A startups are also quite simple, but hop onto Kubernetes because it's "enterprise".


Adopting any piece of technology just because it's a fad or otherwise trendy is rarely a good justification.

It sounds like the primary reason you're deciding to move away from it is that you don't face any of the problems it's there to solve, rather than it being an operational burden.


To be fair, k8s imposes a large complexity load. So what it gives you has got to be worth the time your developers/devops/sysops will spend learning to work with it.

We're migrating to k8s, and it's fantastic for us, but we have a large, complex system that we're moving into the cloud, which really benefits from k8s features - I'm loving the horizontal pod autoscalers, especially when I can expose Kafka topic lag to them (via Prometheus) as a metric to scale certain apps on.
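For the curious, that ends up looking roughly like this HPA (a sketch; the deployment name and the lag metric name exposed through something like prometheus-adapter are illustrative):

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: kafka-consumer          # illustrative
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: kafka-consumer        # illustrative
      minReplicas: 2
      maxReplicas: 20
      metrics:
        - type: External
          external:
            metric:
              name: kafka_consumergroup_lag   # assumed name, surfaced via prometheus-adapter
            target:
              type: AverageValue
              averageValue: "1000"            # target lag per replica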


It's really not "enterprise". It's because you have a resilient platform that provides automatic deployments, rolling upgrades, failover, auto-repair, logging, monitoring, storage/stateful data management, and more out of the box with a simple declarative YAML configuration.

Kubernetes can run a simple container with one line if that's all you need, or you can scale up to several different services with a few more files, all the way to a massive deployment with thousands of containers. How you use it is up to you, but you do need to read and understand the basics.
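The "one line" case is literally something like this (image and name are illustrative):

    kubectl create deployment hello --image=nginx
    kubectl expose deployment hello --port=80 --type=LoadBalancer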

However there's absolutely no need to run Kubernetes yourself unless you have a serious reason to. If you must, I highly recommend using something like Rancher [1] that can install and manage the cluster for you, even on cloud providers.

1. https://rancher.com/


Really disappointed in the people downvoting you for sharing your personal experiences.


"I had a bad experience with Kubernetes" isn't really relevant discussion on a post about a kubernetes release. It's already well known that kubernetes is used by companies all over that don't need it and don't benefit from it because of its popularity.


> I'm saying we didn't have a problem that Kubernetes solved.

So why did you adopt a tool you had no use for to begin with? Sounds like poor judgment all around, from the adoption to the complaints.


In my experience, Kubernetes is a lot of work to install and maintain. However, I've found managed offerings—especially ones like Azure that actually abstract away the complexity (Amazon exposes all the nitty-gritty, in my experience)—make it easy to use for even smaller shops.

Of course, not every workload benefits from the features k8s provides.


I agree on EKS; IMO it is an unfinished, bare-bones "managed" service. Upgrading the control plane and nodes is a royal PITA - very manual and error-prone. In GKE it is literally two gcloud commands.
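Roughly this (cluster name, pool name, and version are placeholders):

    gcloud container clusters upgrade my-cluster --master --cluster-version 1.16
    gcloud container clusters upgrade my-cluster --node-pool default-pool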

I think AWS still hamstrings Kubernetes because it is holding onto ECS. Hopefully they see the light soon, beef it up, and make it equal to AKS/GKE. At least make the masters/control plane free... ugh.


Did you ever look at k3s? What is more difficult about k3s than managing Ansible? What about Kubespray? It's basically k8s managed via Ansible. What makes it so difficult?


When folks want managed k8s, they effectively want something akin to a "serverless" compute pool that they point the k8s API at and worry about nothing else - not underlying hosts, networking configuration, storage, etc. The ideal is how close it gets to that. k3s still leaves hardware management to you.


Ansible, VMs, etc. as well?!


For work projects, I don't want to care about hardware—or "machines" of any sort, for that matter. Give me a ready-to-go platform with networking, storage, node management, etc. already configured. I don't even want to have to run scripts.

At home, I'm happy to play around with such things—and just might (I just purchased an old Dell server to take care of some home tasks and provide a sandbox environment). But at work I want to be as far away from the bare metal (physical machines or VMs) as possible.


Don't do Kubernetes or any other container orchestration when you don't have CI/CD in place yet. I really think these are meant to work in tandem, and changing Kubernetes YAML on a developer laptop can blow up fast.
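Even a minimal pipeline that builds the image and applies the manifests from version control gets you most of the way. A sketch in GitLab CI terms (registry, paths, and runner/cluster credentials are all illustrative and assumed to be configured):

    # .gitlab-ci.yml -- minimal sketch, not a complete pipeline
    stages: [build, deploy]

    build:
      stage: build
      image: docker:latest
      services: [docker:dind]      # assumes a runner configured for docker-in-docker
      script:
        - docker build -t registry.example.com/app:$CI_COMMIT_SHA .
        - docker push registry.example.com/app:$CI_COMMIT_SHA

    deploy:
      stage: deploy
      image: bitnami/kubectl:latest
      script:
        - kubectl apply -f k8s/    # manifests live in the repo, not on laptops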


> Kubernetes is meant for enterprises

Actually it's meant for virtually no one, because very few people have the problems it solves. Like auto-scaling groups, serverless, and cloud itself, it's a tool that solves a pain point in a specific domain. About 0.001% of the business world has that problem.

> We're happy with the move and the pressure relief is tremendous.

I'll bet it was. I've convinced businesses to go in the completely opposite direction to K8s. I've told them to develop a monolith before they start optimising into microservices. The startups that listened to me are still around and at Series C. The others (minus one) closed up shop ages ago because they never got to market in time.

K8s is a tool. Docker is a tool. Cloud is a tool. Businesses have to utilise tools as efficiently as possible to get their solutions out the door if they're to survive. Using K8s from the ground up is a death sentence.


The "Kubernetes is only for Google-scale companies" meme needs to die. Kubernetes is a useful tool even if you have a single node.

As a case in point: I just set up a small new app on GKE. Because I'm experienced with Kubernetes, within a few minutes I had my app (a React front-end written in TypeScript and a backend written in Go) running and receiving HTTPS traffic. I entered just a handful of shell commands to get from an empty cluster to a functional one.
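Roughly the kind of commands I mean (the project, image, and deployment names are illustrative; the HTTPS piece depends on how you terminate TLS, so it's omitted here):

    gcloud container clusters create demo --num-nodes=1
    docker build -t gcr.io/my-project/web . && docker push gcr.io/my-project/web
    kubectl create deployment web --image=gcr.io/my-project/web
    kubectl expose deployment web --port=80 --target-port=8080 --type=LoadBalancer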

It's a cluster with a single node, no resilience. But this is genuinely useful, and a perfectly appropriate use case for Kubernetes. The alternative is the old way — VMs, perhaps with a sprinkling of Terraform and/or Salt/Ansible/Chef/Puppet, dealing with Linux, installing OS packages, controlling everything over SSH — or some high-level PaaS like AppEngine or Heroku.

While it's an example that shows that Kubernetes "scales down", I'm now also free to scale up. While today it's a small app with essentially zero traffic, when/if the need should arise, I can just expand the nodepool, add a horizontal pod autoscaler, let the cluster span multiple regions/zones, and so on, and I'll have something that can handle whatever is thrown at it.
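Concretely, that scaling-up step is not much more than (names and numbers illustrative):

    gcloud container clusters resize demo --num-nodes=3
    kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70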

My company has a ton of apps on multiple Kubernetes clusters, none of them very big. From my perspective, it's a huge win in operational simplicity over the "old" way. Docker itself is a benefit, but the true benefits come when you can treat your entire cluster as a virtualized resource and just throw containers and load balancers and persistent disks at it.


Agree totally. One win Kubernetes has provided my current team is that it acts as a cloud-agnostic interface over cloud primitives, allowing for multi-cloud scaling. There are a lot of primitives, and a lot of things to monitor to ensure a running cluster, but the complexity of the system is IMO overblown in the popular mind. Isn't a core tenet of SRE to have the stability of the system automated?


Couldn't agree more. That said, Kubernetes still has way too much RAM/CPU overhead in a single-node setup. It scales up, but it doesn't scale down. Hopefully that can be solved. There is k3s, for example, which limits the overhead by replacing etcd with SQLite, removing old stuff, etc.
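For reference, a single-node k3s install is essentially the one-liner from its docs:

    curl -sfL https://get.k3s.io | sh -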


> It's a cluster with a single node, no resilience. But this is genuinely useful Can you elaborate? Sounds like replacing one kind of server management overhead with a new kind otherwise.


It's replacing one thing with something else, but the management overhead on GKE is significantly less.

I've also managed a built-from-ground-up Kubernetes cluster. It's not rocket science. That said, I wouldn't do it in a company without a dedicated ops person.


> My company has a ton of apps on multiple Kubernetes clusters, none of them very big. From my perspective, it's a huge win in operational simplicity over the "old" way. Docker itself is a benefit, but the true benefits come when you can treat your entire cluster as a virtualized resource and just throw containers and load balancers and persistent disks at it.

And I agree. However...

I'm referring to the companies and people who don't think like you do and instead think, "We need to build everything as a microservice and have it orchestrated on K8s". I'm thinking of the people who have a monolith that's scaling fine, solving a problem, and generating income, but an executive has been sold on K8s and now wants to refactor it.

Your use case makes perfect sense, but very few people are thinking like that, sadly.

In my opinion, if you're starting out with Docker and K8s you're going down the right path, but only provided you're not starting with microservices.

Like you said, it's a tool, and as with most tools there's a balance to strike before its use becomes over-engineering.


That sounds more like a criticism of microservices, which can definitely be overdone.


> controlling everything over SSH

It just takes some experience. I mostly disable SSH, for example. You bake your images once rather than installing everything on each boot.


You are being downvoted by the Kubernetes crowd, but you are quite right, it is the new NoSQL.


People realise their time has been wasted and their feelings are hurt as a result... oh well.


I keep using managed platforms, Kubernetes is the new NoSQL.


Moving away to what?


Maybe a serverless stack?



