
Could you explain why helm is garbage? I think it suits its purpose rather well without being too complex. You can essentially "plug in" different types of resources rather easily, especially in v3, now that you don't need to install Tiller and can avoid its cluster-wide permission requirements.

Have you tried some Kubernetes api libraries? You can generate and configure resources with [python kubernetes-client](https://github.com/kubernetes-client/python) without much trouble. Personally I prefer editing them as JSON instead of python objects, but it isn't too bad.
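To make the "generate resources as data" point concrete, here's a sketch without the client library at all: plain dicts serialized to JSON (the name, image, and label scheme are made up for illustration). The kubernetes-client objects are essentially a typed layer over these same shapes.

```python
import json

def make_deployment(name, image, replicas=1):
    """Build a Deployment manifest as plain Python dicts.

    `name`, `image`, and the label scheme are illustrative choices,
    not anything the API or client library mandates.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

manifest = make_deployment("web", "nginx:1.25", replicas=3)
print(json.dumps(manifest, indent=2))
```

Because it's ordinary data, you can loop, share helpers, and diff the serialized output, none of which string templates give you for free.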




> Could you explain why helm is garbage?

Not the OP, but..

1. YAML string templating makes it very easy to get indentation and/or quotation wrong, and the error messages can easily end up pretty far from the actual errors. Structured data should be generated with structured templating.

2. "Values" aren't typechecked or cleaned.

3. Easy to end up in a state where a failed deploy leaves you with a mess to clean up by hand.

4. No good way to preview what a deploy will change.

5. Weird interactions when resources are edited manually (especially back in Helm 2, but still a thing).

6. No good way to migrate objects into a Helm chart without deleting and recreating them.

7. Tons of repetitive boilerplate in each chart to customize basic settings (like replica counts).

It's a typical Go solution, in all the wrong ways.
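Gripe 1 in miniature (the field names and payload here are invented): splicing a value into a YAML string breaks as soon as the value contains YAML-significant characters, while structured generation serializes whatever you hand it.

```python
import json

# Naive string templating: the value is pasted into the document as-is.
yaml_template = "metadata:\n  annotations:\n    note: %s\n"

# Structured templating: build the data, then serialize it.
def render(note):
    doc = {"metadata": {"annotations": {"note": note}}}
    return json.dumps(doc)

tricky = "line one\nline two: #oops"

# The embedded newline escapes the field: the second half of the value
# lands at column 0 and parses as a new top-level key.
broken = yaml_template % tricky

# Serialization quotes and escapes the value; it round-trips intact.
safe = render(tricky)
```

The structured version can't produce indentation or quoting errors, which is the whole argument for generating manifests as data instead of text.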


> "Values" aren't typechecked or cleaned.

Helm 3 does offer a solution: a JSONSchema definition file for the values.

Which works ... in a very Helm-like fashion. Meaning: it's messy and awkward.
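Helm 3's mechanism is a values.schema.json file validated as JSON Schema before templating. The sketch below hand-rolls that idea in plain Python for a couple of fields (the schema and value names are made up) rather than pulling in a JSON Schema library.

```python
# Toy version of values validation: declare expected types per dotted
# path, then check user-supplied values against them before templating.
SCHEMA = {
    "replicaCount": int,
    "image": str,
    "ingress.enabled": bool,
}

def lookup(values, dotted):
    node = values
    for part in dotted.split("."):
        node = node[part]
    return node

def validate(values):
    errors = []
    for path, expected in SCHEMA.items():
        try:
            got = lookup(values, path)
        except (KeyError, TypeError):
            errors.append(f"{path}: missing")
            continue
        # bool is a subclass of int in Python, so reject it explicitly
        if expected is int and isinstance(got, bool):
            errors.append(f"{path}: expected int, got bool")
        elif not isinstance(got, expected):
            errors.append(f"{path}: expected {expected.__name__}, got {type(got).__name__}")
    return errors

good = {"replicaCount": 3, "image": "nginx:1.25", "ingress": {"enabled": True}}
bad = {"replicaCount": "3", "image": "nginx:1.25", "ingress": {"enabled": "yes"}}
```

The point is that the check happens against declared types up front, instead of a typo'd value silently templating into the wrong YAML.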


It's not going to solve all your problems but dhall can fix your first few gripes. I've been using it for several months and it's an excellent way to write configuration imo.


Yeah, I have used Nix to generate them in the past, which worked pretty great too. But Helm does, admittedly, solve a real problem: garbage collecting old resources when they're deleted from the repo. I just wish we could have something much simpler that only did that...


`kubectl apply --prune` should nominally do this. Irritatingly (I acknowledge I'm almost as responsible as anyone else for doing something about this), it's had this disclaimer on it for quite some time now:

> Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current state is. See ⟨https://issues.k8s.io/34274⟩.

I haven't used it in anger, so I can't add any disclaimer or otherwise of my own.

kpt is the recent Google-ordained (AFAICT) solution to this problem, but is not yet at v1.

You could also resolve this yourself by either:

* versioning with labels and deleting all resources with labels indicating older versions

* using helm _only_ in the last mile, for pruning
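The label-versioning idea in the first bullet reduces to a set difference: anything carrying your management label that's no longer declared in the repo gets deleted. A sketch, with object identities and the live/desired inventories invented for illustration:

```python
# Sketch of label-based pruning. "live" is what the cluster currently
# holds under our management label; "desired" is what the repo declares.
# Objects are identified by (kind, name) for simplicity.
def plan_prune(live, desired):
    """Return the objects to delete: managed but no longer declared."""
    desired_ids = {(o["kind"], o["name"]) for o in desired}
    return [o for o in live if (o["kind"], o["name"]) not in desired_ids]

live = [
    {"kind": "Deployment", "name": "web"},
    {"kind": "ConfigMap", "name": "web-config"},
    {"kind": "Service", "name": "old-svc"},   # since removed from the repo
]
desired = [
    {"kind": "Deployment", "name": "web"},
    {"kind": "ConfigMap", "name": "web-config"},
]
```

This is essentially what `--prune` promises; the hard parts it papers over are discovering which kinds to list and not deleting things someone else manages.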


As helm charts get used by more people and grow more complicated, exposing more of the underlying config, they turn into a set of yaml files with as much or more complexity than the thing they are trying to replace. Configuration with intelligence that allows arbitrary overrides of any of the underlying settings is important in order to meet all use cases. Without that, helm will only be useful for a strict subset of use cases, and eventually you will outgrow the chart, or the chart will grow in complexity until it's worthless.


We've found Kustomize, or just straight up writing the deployments ourselves, to be the best approach.

The actual spec for a Deployment/DaemonSet/StatefulSet/CRD is usually super straightforward once you get the Kubernetes terminology, and most of the issues I've had with Helm have boiled down to "oh, they haven't parameterized the one config I need to change".


I always have some configs that helm hasn't parameterized. But that's not a problem because I always fetch my charts from helm hub into my repo. So I just add any parameters I need.


I also think helm is terrible.

Helm's stated goal is to be something of a package manager for apps in k8s, but this is fundamentally unworkable as shown by... Helm. It's hard to describe just how unworkable this idea is.

Let's start with an example, you want to install an app (let's say Jira) and a DB backend of your choice, postgres or mysql.

The first step where this all falls down is, it may or may not support your preferred DB. Sure, Jira does, but does the chart?

Assuming it does support your preferred backend, maybe it depends on the chart for the db you picked. If it does, it's going to install it for you, hopefully with best practices, almost certainly not according to your corporate security policy. This is also a problem if say, you have a db you want to use already, prefer to use an operator for managing your db, or use a db outside k8s.

You got lucky, and it supports your DB just the way you want it. Next question: do you want an HA Jira? Often, this part is done so differently that HA Jira and single-host Jira are straight-up different charts.

Do you want your DB to be HA? Unfortunately, the chart the Jira chart author picked to depend on is the non-HA one. Guess you're out of luck.

Maybe you want to add redis caching? Nginx frontend/ingress? Want to terminate TLS at the app host and not ingress? How do you integrate it with your cert management system?

We haven't even looked at the config, where you have to do everything in a values.yaml file which is never documented as well as the actual thing it's configuring on your behalf, and is not always laid out in a sensible manner.

Hopefully it's clear from this that as a user, helm isn't going to work for you, because just as there's no such thing as the average person, there's no such thing as an average deployment. Even a basic one is filled with one off variations for every user that a public chart needs to support.

As a developer, helm is unworkable because you're templating yaml inside yaml. This isn't too bad if you're just tweaking a few things on an otherwise plain chart, but a public chart, that naively hopes to support all the possible configurations? Your otherwise simple chart is now 5-10x longer from all the templating options. Have fun supporting it and adding the new features you'll inevitably need to add and support.

As a counterpoint to all this, kustomize gets a lot right. I don't mean that kustomize is perfect, or even good, but I've found that, like k8s itself, it understands that the problem space is complex, and that trying to hide that complexity leads to a lesser product that is more complex because of leaky abstractions.

Kustomize acts as a glue layer for your manifests, so instead of some giant morass of charts and dependencies none of which work for you, you're expected to find a suitable chart for each piece yourself and compose them with Kustomize.

Going through the same example again as a user:

Your vendor has provided a couple basic manifests for you to consume, maybe even only one, because they're expecting you to supply your own DB. Since they only need to supply the Jira part, instead of having an HA chart and a single-node manifest, they just give you one manifest with a StatefulSet. Or maybe they give you two: one as a Deployment and one as a StatefulSet. The StatefulSet might also have an upgrade policy configured for you.

Since the vendor punted on the DB, you can do whatever you like here. You'll have to supply a config map with your db config to the Jira deployment, but that's okay, it's easy to override the default one in the manifest with kustomize. You are now free to use a cloud managed DB, an operator managed one, or just pull the stock manifest for your preferred DB.

Want to terminate TLS in your app again? Easy enough, Cert Manager will provide the certs for you, and supply them as secrets, ready to consume in your app, you just need to tell it where to look.

So now you have all the parts of your Jira deployment configured just how you like, but they're all separate. Maybe you just edited the stock manifests to get your changes in the way you like. Dang, you've created a problem when the vendor updates the manifest, as now you need to merge in changes every time you update it. That seems like a huge hassle. What if you could just keep your changes in a patch and apply that to a stock manifest? Then it's easy to diff upgrades of the stock manifest and see what changed, and it's easy to see what you care about in your patch.

All of this seems like it's getting kind of unwieldy, maybe we can make it easier. We'll have a single kustomization.yaml, and it'll be really structured. In it, you can list all the paths to folders of manifests, or individual files, or git repos/branches/subpaths. We'll also specify a set of patches that look like partial manifests to apply to these base manifests to make it clear what is the base and what goes on top. Then finally, for common things like image versions and namespaces, we'll expose them directly so you don't need to patch everything. We can do that because we're using standard manifests that can be parsed and modified.
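The patch step described above is, at its core, a recursive merge: dict fields from the patch override or extend the base, and everything else in the base survives. A stripped-down sketch (real kustomize strategic merge also understands lists keyed by name, which this skips; the manifests are invented):

```python
def merge(base, patch):
    """Recursively overlay `patch` onto `base`. Non-dict values in the
    patch replace the base value outright; lists are replaced wholesale
    here, whereas real strategic merge is smarter about them."""
    out = dict(base)
    for key, val in patch.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], val)
        else:
            out[key] = val
    return out

base = {
    "kind": "Deployment",
    "metadata": {"name": "jira", "labels": {"app": "jira"}},
    "spec": {"replicas": 1},
}
patch = {
    "metadata": {"labels": {"env": "prod"}},
    "spec": {"replicas": 3},
}

merged = merge(base, patch)
```

Because the stock manifest and your patch stay separate files, upgrading the base is a diff against upstream rather than a three-way merge of your edits.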

That is kustomize, and as awkward as it is, it's just trying to make it clear what you're applying, and what your one customization from stock is in a maintainable way. It does a better job of 'package management' by not managing packages. This is pretty similar to how Linux package managers work. If you install wordpress on your server, it's going to install php, but it might not install apache or nginx, as everyone wants something different. It's definitely not going to install a DB for you. You as the admin have to decide what you want and tell each part about the other.


I understand your pain and can see where you are coming from.

But Helm is just a package manager, not the software delivery platform you're asking for.

I mean do you have the same expectations from a Deb or RPM package?

If I give you a deb package that "contains" Jira, won't you have the exact same concerns?


Thanks for the detailed post. Really helps newcomers like myself.

One of the benefits as a new user of k8s is the ability to grab a helm chart to get me most of the way with something like ELK. I want to go the way of Kustomize but can't seem to find an equivalent for it.


If you want an ELK stack, you should look into the operators provided by Elastic [1]. All you need to do is write a very small manifest or two for each thing you want operated. I feel like this is a better solution to 'I want an ELK cluster' than a helm chart because it solves more problems without leaking.

[1] https://www.elastic.co/blog/introducing-elastic-cloud-on-kub...


I love how the kubernetes client is only compatible with python <= 3.6



