
k3s includes some extras that make it nice for working in small local clusters, but are not part of the standard k8s codebase.

* Traefik daemonset as a load balancer

* Helm controller that lets you apply Helm charts without the helm command line

* Upgrade controller

* SQLite as the default backing store for the k8s API

* Their own local storage provisioner
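
The Helm controller mentioned above works through a CRD: you apply a `HelmChart` resource and k3s installs the chart for you. A minimal sketch (the chart, repo, and values here are illustrative, not anything k3s ships):

```yaml
# Applied with plain kubectl; no helm CLI needed on the machine.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: grafana
  namespace: kube-system      # k3s watches this namespace for HelmChart resources
spec:
  repo: https://grafana.github.io/helm-charts
  chart: grafana
  targetNamespace: monitoring
  valuesContent: |-
    replicas: 1
```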

k0s has a lot of the same goals: be lightweight and self-contained in a single binary. But k0s tries to be as vanilla as possible.

Choosing between the two comes down to your use case: do you want lightweight and compatible (k0s), or lightweight and convenient (k3s)?





What you've listed for k3s is mostly included in k0s. I wouldn't go so far as to say k0s isn't convenient.

* A helm controller is included in k0s

* Etcd is bundled and bootstrapped automatically, which I prefer because I don't want the overhead of the translation that Kine does. Kine is still available if a non-etcd datastore is preferred.

* Upgrade controller is included (autopilot).

* They have a local storage provider based on OpenEBS.

* Ingress is missing, but thanks to the built-in Helm controller it can be bootstrapped upon cluster initialisation.
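
Bootstrapping an ingress controller at initialisation can be done through k0s's Helm extensions in the cluster config. A rough sketch (chart version and repo URL are illustrative):

```yaml
# k0s.yaml - passed to `k0s install controller --config k0s.yaml`
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  extensions:
    helm:
      repositories:
        - name: traefik
          url: https://traefik.github.io/charts
      charts:
        - name: traefik
          chartname: traefik/traefik
          version: "23.0.0"
          namespace: kube-system
```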

Overall, together with k0sctl and its declarative configuration, it is easier to deploy k0s than it was to deploy k3s.
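
For reference, a k0sctl deployment is driven by a single YAML file describing the hosts; `k0sctl apply` then connects over SSH and installs everything. A minimal sketch (addresses, user, and key path are placeholders):

```yaml
# k0sctl.yaml - applied with `k0sctl apply --config k0sctl.yaml`
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  hosts:
    - role: controller
      ssh:
        address: 10.0.0.1
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: worker
      ssh:
        address: 10.0.0.2
        user: root
        keyPath: ~/.ssh/id_rsa
```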


Can you please elaborate on the "kine" overhead?


Kine (https://github.com/k3s-io/kine) is a shim, run in-process by k3s or as an external process elsewhere, that translates the etcd API to enable compatibility with a database or alternative data store. Kubernetes natively speaks etcd, so this translation is what enables its usage with SQLite or another database, but it incurs an overhead.
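
Run standalone, the shape of it is roughly this (a sketch assuming the kine binary from the repo above is on PATH; check its README for the exact flags):

```shell
# Kine listens on the etcd client port and persists to SQLite behind it.
kine --endpoint "sqlite://kine.db" --listen-address 0.0.0.0:2379 &

# The API server is then pointed at kine as if it were a real etcd.
kube-apiserver --etcd-servers=http://127.0.0.1:2379 ...
```

Every etcd request the API server makes gets translated into SQL on that path, which is where the extra latency and CPU come from.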

I don't have specific numbers, unfortunately, since it was years ago that I benchmarked Kine against etcd, but I had better results with etcd both in-cluster and on a single node.

I happened to stumble upon this paper that echoes my experience: https://programming-group.com/assets/pdf/papers/2023_Lightwe... In particular, the high controller CPU usage (even for an empty cluster) and the higher latencies.


thanks!

My problem with etcd was very high and constant I/O and CPU usage. I don't mind the latency.


Thank you! Great details. Definitely want convenient.



