Hacker News

Hey, congrats on the launch!

How does it compare to timoni[0]?

[0]: https://github.com/stefanprodan/timoni

Timoni seems to be focused on managing applications by evaluating CUE modules distributed as OCI artifacts, and it produces OCI artifacts as outputs. Holos looks more like a configuration management solution that also uses CUE, but produces rendered Kubernetes manifests instead of OCI artifacts. Holos narrowly focuses on integrating different types of components and producing "hydrated" manifests, while relying on other solutions (Argo/Flux) to apply and enforce the manifests.

This is a great summary, thank you.

Thanks! Holos and Timoni solve similar problems, but at slightly different levels of the stack and with different approaches. Timoni is focused on managing applications by evaluating CUE stored in an OCI artifact. Stefan has expressed his intention to have a controller apply the resulting manifests, similar to Flux. Timoni defers to Flux to manage Helm charts in-cluster; Holos renders Helm charts to manifests much like ArgoCD does, using helm template within the rendering pipeline.

We take a slightly different approach in a few important ways.

1. Holos stops short of applying manifests to a cluster, leaving that responsibility to existing tools like ArgoCD, Flux, plain kubectl apply, or other tools. We're intentional about leaving space for other tools to operate after manifests are rendered before they're applied. For example, we pair nicely with Kargo for progressive rollouts. Kargo sits between Holos and the Kubernetes API and fits well with Holos because both tools are focused on the rendered manifest pattern.

2. Holos focuses on the integration of multiple Components into an overall Platform. I capitalize them because they mean specific things in Holos: a Component is our way of wrapping existing Helm charts, Kustomize bases, and plain raw YAML files in CUE to implement the rendering pipeline.

3. We're explicit about the rendering pipeline. Our intent is that any tool that generates Kubernetes manifests could be wrapped up in a Generator in our pipeline. The output of that tool can then be fed into existing transformers like Kustomize.

I built the rendering pipeline this way because I'd often take an upstream Helm chart, mix in some ExternalSecret resources or the like, then pipe the output through Kustomize to tweak a few things. Holos is a generalization of that pattern. It's been useful because I no longer need to write any YAML to work with Kustomize; it's all pure data in CUE doing the same thing.
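That manual pattern can be sketched with a kustomization like the following (filenames are hypothetical, for illustration only):

```yaml
# kustomization.yaml -- hypothetical sketch of the manual pattern
resources:
  - upstream.yaml          # output of `helm template` for the upstream chart
  - external-secrets.yaml  # hand-written ExternalSecret resources mixed in
patches:
  - path: tweaks.yaml      # small adjustments applied on top
```

Running kustomize build over a directory like this produces the final manifests; Holos expresses the same wiring as CUE data instead of hand-maintained YAML.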

Those are the major differences. I'd summarize them as: Holos focuses on the integration layer, integrating multiple things together, kind of like an umbrella chart, but using well-defined CUE as the method of integration. The output of each tool is the other main difference. Holos outputs fully rendered manifests to the local file system, intended to be committed to a GitOps repo. Timoni acts as a controller, applying manifests to the cluster.


> For example, we pair nicely with Kargo for progressive rollouts

You've greatly piqued my interest (we're currently using Timoni). How does this integration work, exactly? We've been wanting to try Kargo for a while, but it appears there's no support for handing off the YAML rendering to an external tool.


Holos writes fully rendered manifests to the local filesystem in a GitOps repository. Kargo watches a container registry for new images, then uses the rendered manifests Holos produced as the input to the promotion process.

The specific integration point I explored last week was configuring Holos to write a Kustomize kustomization.yaml file alongside each of the components that have promotable artifacts. This is essentially just a stub for Kargo to come along and edit with a promotion step.
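A minimal sketch of such a stub might look like this (paths and resource names are hypothetical, not the actual demo files):

```yaml
# kustomization.yaml written by Holos next to a component -- hypothetical sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - frontend.gen.yaml  # the manifests Holos rendered for this component
images: []             # left empty; a Kargo promotion step edits this stanza
```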

For example, the output of the Holos rendering process for the front end of our "Bank of Holos" demo is this directory:

https://github.com/holos-run/bank-of-holos/tree/v0.6.2/deplo...

Normally, ArgoCD would deploy this straight from Git, but we changed the Application resource we configure to instead point where Kargo suggests, which is a different branch. As an aside, I'm not sure I like the multiple-branches approach, but I'm following Kargo's docs initially; later I'll look at doing it again in the same branch using different folders.

https://github.com/holos-run/bank-of-holos/blob/v0.6.2/deplo...

Note the targetRevision: stage/dev; Kargo is responsible for this branch.
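Abridged, the relevant part of an Application configured this way looks roughly like the following (the path here is a placeholder, not the actual demo path):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
  source:
    repoURL: https://github.com/holos-run/bank-of-holos
    targetRevision: stage/dev  # the branch Kargo writes promoted manifests to
    path: deploy/frontend      # placeholder path for illustration
```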

The Kargo promotion stages are configured by this manifest Holos renders:

https://github.com/holos-run/bank-of-holos/blob/v0.6.2/deplo...

Holos writes this file out from the CUE configuration when it renders the platform. The corresponding CUE configuration responsible for producing that rendered manifest is at:

https://github.com/holos-run/bank-of-holos/blob/v0.6.2/proje...

This is all a very detailed way of saying: Holos renders the manifests, you commit them to Git, Kargo uses kustomize edit to patch in the promoted container image and renders the manifests _again_ somewhere else (a branch or folder), and ArgoCD applies the manifests Kargo writes.
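Concretely, the patch Kargo makes is equivalent to running `kustomize edit set image` in the stub kustomization's directory, which fills in an images stanza along these lines (image name and tag are hypothetical):

```yaml
images:
  - name: example/frontend  # hypothetical image name
    newTag: v0.6.3          # the tag Kargo is promoting
```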

I'll write it up properly soon, but the initial notes from my spike last week are at https://github.com/holos-run/bank-of-holos/blob/main/docs/ka...

I recorded a quick video of it at https://www.youtube.com/watch?v=m0bpQugSbzA which is really more a demo of Kargo than of Holos.

We were concerned we'd need to wait for Kargo to support arbitrary commands as promotion steps, something they're not targeting until v1.3 next year. Luckily, we work very well with their existing promotion steps because we take care to stop once we write the rendered manifests, leaving it up to you to apply them with additional tools like kubectl apply, Kargo, Flux, ArgoCD, etc.



