K3s: Kubernetes without the features I don't care about (github.com/ibuildthecloud)
171 points by ingve on Sept 26, 2018 | 88 comments



I created this (I'm the Chief Architect and Co-founder of Rancher Labs; I experiment with stuff like this all the time). The purpose of this project is really to be embedded in another project (https://github.com/rancher/rio is an example), but as a side effect it's a nice little standalone k8s package. I never had any intention to support it as a standalone project, but hey, if somebody is interested, who knows.

All of the features removed are based on two basic ideas:

1) If there are X ways to do something, I choose the best one (obviously subjective).

2) The features are not commonly used or really shouldn't be in k8s to begin with.

The end goal of what I'm working on is basically to use k8s code as a library to do orchestration. Step one was to reduce the footprint of k8s (I'm 80% there with this project); step two is to completely rework the state management such that you no longer need to persist state in k8s. This is a much larger goal. I've fooled around a lot with the persistence layer in k8s (this project is running on sqlite3), so I'm basically doing a lot of work figuring out what state I can throw away. The theory is that all desired state comes from your yaml files. Actual state is what really exists. Everything else in k8s should be purely derived and not important.


Stripped down and it still looks pretty complex. I really like containers, but I feel the orchestration systems all leave something to be desired. I've heard there are some big changes in Rancher and I've been meaning to look at it again.

I've set up DC/OS, played with Nomad, and wrote this post about my frustration dealing with orchestration systems:

https://penguindreams.org/blog/my-love-hate-relationship-wit...


I completely agree. The current trend of orchestration all sells a story of hyperscale like GIFEE. That is not how most people do or should run their deployments (I'm talking about the vast majority outside SV). The operational complexity today is too high. Kubernetes is a big business system, it caters to enterprise. If you aren't an enterprise I would caution you against k8s.


>> Kubernetes is a big business system, it caters to enterprise. If you aren't an enterprise I would caution you against k8s.

That seems overstated, frankly. What is so complex about, for example, GKE that renders it unfit for smaller organizations? If you mean "caution you against hosting your own clusters" I could probably get on board with that.


For sure running k8s is difficult, I don't think anybody would disagree with me there, but of course you can buy GKE. I have to point out that GKE is the only decent k8s implementation; EKS and AKS are not real contenders. Digital Ocean will launch their service, I've seen it and it's good and I hope they do well. But in general you can't really expect the Kubernetes landscape to change much from what it is today, which is basically GKE good, everything else meh.

Why? Kubernetes is not really a proven business model for cloud providers yet. It costs the cloud provider more to offer Kubernetes and it's not significantly drawing more users in (and consuming more). The obvious example of this is AWS's lackluster support and commitment to EKS. If there was a big business opportunity Amazon would be doing more. I have no concrete proof, but ECS and Fargate stand to make much more money for Amazon (just a guess).

So in general I don't think the argument that running on GKE isn't bad holds up, because GKE is the only decent option. DO has yet to launch, but they can't really stand up to GKE because you have to look at things holistically. Just running the cluster is like 10% of the whole solution. You then need to figure out how to do authentication, authorization, load balancing (the built-in service LB and ingress are not sufficient), monitoring, logging, and disaster recovery. Google has a product in each of these areas, but they have 100x more resources than DO. Basically nobody can (or is willing to) put the money into k8s like Google does, and Google has yet to prove that their investment is worth it.

That's not to say I don't think Kubernetes will survive. I do think it will survive and it will be around for a while, because it's a good fit for enterprise right now. Not a good cloud provider business, not good for small dev shops, but good for enterprise.


You make some interesting points, and I don't disagree with them (nor do I really have the inside view needed to even have an opinion on a good chunk of it). After reading your reply I have two thoughts: the first is that whether or not GKE is profitable for Google Cloud at the moment, it definitely seems to have brought the platform a lot of positive attention. Secondly, I think you're right about the complexity of running Kubernetes correctly. From that perspective maybe it's a useful analogy to think of it as an operating system for the data center, and that what is needed eventually are higher level abstractions that build on it and make it easier to deploy.


It's interesting you didn't mention Docker once in all of that. I agree with most of what you said, but paying for Docker Enterprise gets you integration with k8s that makes the complexity tolerable and manageable, and also includes all that extra shit that needs to be included. Docker Inc seems to be all-in on this as a core part of their business, so for all the half-hearted ennui of AWS, Docker Inc seems to be heavily invested in solving the problem.


You know how, these days, "Java Enterprise Edition on Glassfish" is criticised for being burdened with "cruft"?

And this makes it large, slow to start, difficult to work with, and provides a bunch of obsolete features you might mistakenly start using, that turn out not to be as well designed as everyone thought, but that can't be fixed for reasons of backwards compatibility?

In 10 years, will people look at Kubernetes the same way?


Rancher (2.0) certainly made it a lot easier for me as a sole developer to get Kubernetes running in an actually useful way.

Though I’m still running into a hundred gotchas a month :P


We're still on Rancher 1.6 and Cattle because it's much simpler for us and we don't have the time to invest in a migration to 2.0


I would suggest you wait until 2.0 is a bit more mature. It’s fun, but there’s still a lot missing (that appears to be in 1.6)


What do you think most people should use instead of k8s? Or is there not a good answer for that yet? Do you think containers themselves are overkill, or just k8s?


Containers have a lot of value. My rule of thumb is basically: use containers (docker) and as little as possible of anything else in the container ecosystem. So basically put your app in a container and try not to change too much from however you currently do things. There is no shame in running --net=host or just doing "-v /mnt/someebsvolume:/data". I personally like Hashicorp because their tools are simple and grow with you. Besides Hashicorp, I don't have great recommendations for tools. In my opinion the current container ecosystem is a disaster for users. Big business ideas took over and killed innovation. Simple lightweight modular systems all got annihilated by Kubernetes and now it's pretty much a monoculture.
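
For anyone who hasn't seen those flags in the wild, this is roughly what that looks like (the image name and volume path here are just placeholders):

    # host networking: skip bridge/overlay networking entirely
    docker run -d --net=host myapp:latest

    # bind-mount an existing volume straight into the container
    docker run -d -v /mnt/someebsvolume:/data myapp:latest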


> Big business ideas took over and killed innovation. Simple lightweight modular systems all got annihilated by Kubernetes and now it's pretty much a monoculture.

I don't work with K8S or even Docker on a day-to-day basis, but to me the ecosystem looks rather good. There are several orchestration and containerization solutions competing from different vendors, and things tend to be standardised to make them work together:

- https://github.com/kubernetes/community/blob/master/contribu...

- https://www.opencontainers.org/

Just yesterday there was this post on HN: https://news.ycombinator.com/item?id=18050781

People experiment, dissect, keep some old things, hack new things. K8S is still in active development, and new initiatives like Istio and Knative are built on top of it. It looks like a healthy, innovative ecosystem to me. I guess it depends on your point of view and interests.


Where can I read more about best practices regarding this without getting too distracted or headed off in the wrong direction?


I would also really love to learn the same.


I still don't quite understand why Docker Swarm is so overlooked. It essentially offers easy-to-manage orchestration while providing most of the core features that many people are looking for. It even makes use of compose files, which many developers are already writing. Given that most of the complaints about Kubernetes relate to its complexity and the need for a Kubernetes "expert" to look after said complexity, I'm somewhat baffled by how many people just totally overlook Swarm as if it wasn't an option.


> I still don't quite understand why Docker Swarm is so overlooked.

No IPv6 support in it at all, and the company seems to have moved resources internally away from Swarm development.

Doesn't bode well for its future. Expecting it'll be taken out to the back shed (-bang-) in a while.


Check this GitHub issue on SwarmKit for more info: https://github.com/docker/swarmkit/issues/2665


Docker staff have been saying words to the effect of "we'll make an announcement with info soon!" for at least the last three+ months, across several projects. Swarm, Docker-Machine, & Infrakit come to mind.

With no actual announcement(s) then occurring.

The commit activity graph on InfraKit - which people are supposed to use now that Docker-Machine has been deprecated - seems pretty clear:

https://github.com/docker/infrakit/graphs/contributors

SwarmKit is the same:

https://github.com/docker/swarmkit/graphs/contributors

Clearly these aren't where resources have been placed.

Actions are not lining up with the words given. :(


I guess you are right. It's unfortunate because, judging from the comments here, you would think that there's a market for simpler orchestration software, while it will be hard to compete as just another Kubernetes distributor (even more so now that there are several alternatives for the container runtime). But hey, what do I know about product management?! :)


Nomad is that. We run Nomad quite successfully at SeatGeek, and I know of several Very Large Companies™ doing so at scale as well. One of them even has a fruit logo.


You are using the enterprise version, I assume?


We are running the OSS version of Consul, Nomad, and Vault. While the enterprise versions provide some items that could be interesting, they are not necessary given our workflow, and I'd bet that the majority of development shops out there wouldn't need them either (though I am certainly more than happy to play with them!).


I think you are right but the fact that Docker Inc included Kubernetes surely didn't help Swarm adoption.


I have had a good experience using the Hashicorp stack of Nomad and Consul, integrated with nginx for load balancing.

This stack doesn't offer all of the same functionality as the Kubernetes ecosystem, but it has the benefit of being vastly simpler and composed of relatively small, independent parts.

We did examine Kubernetes, but for various reasons we need to manage our own cluster and the engineering overhead was too high for our team to take on.


Why are Nomad & Consul simpler than Kubernetes?


If you just need to "orchestrate" on a single box, Docker Compose (https://docs.docker.com/compose/) is the bees knees.
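
A minimal sketch of that single-box workflow; the services and images below are made up, just to show the shape of it:

    # docker-compose.yml
    version: "3"
    services:
      web:
        image: nginx:alpine
        ports:
          - "80:80"
      db:
        image: postgres:10
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata:

and then on the one box:

    docker-compose up -d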


What about Rancher 1.6 and Cattle? Is it going to be improved? It had a perfect balance of complexity and features. I prefer it over k8s.


Hey, I've been using Rancher for a year, and I love it.

Just wanted to bring your attention to a YouTube channel from Rancher Labs, where "Learning Rancher" videos are posted regularly. But the content is all the same: a boring 10-minute intro to every video, after which developers go through the same stuff, painfully slowly. Is it meaningful to produce this kind of content over and over again? IMO there are already way better Rancher tutorials by Indian dudes on YouTube.

I've started with k8s thanks to Rancher, and I presume that the vast majority of Rancher users are doing the same. Rancher without k8s does not make sense, and it seems to me that k8s stuff is actively avoided in docs and tutorials. Maybe it would be better to make a series of tutorials on Rancher+k8s, and go through concrete topics like networking and CI?


I really enjoyed your initial Rancher orchestration architecture write-up and diagram. In some ways the centralized state machine and federated transition handling reminds me of the Omega system, now that I think about it. It would be cool to read what your orchestration plans are for k8s.


I'm working on some pretty massive things right now. I should have something to show around December (DockerCon EU). It's all in the area of state management. This is something I've been working on for quite some time. The hardest part of an orchestration system is the persistent store. I'm working on some patterns to effectively remove the need for a database and use only object storage (which is significantly more reliable, cheaper, and easier).


How do you get the watch APIs without etcd? Or does k3s not have watch API support?


It's a combination of pulling and notification. The sqlite DB table is structured like a log, so clients basically just need to keep grabbing the latest id. In practice it has proven to work pretty well. The code I have public right now does not support HA; I have a private branch that I'm still testing.
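
Not the actual k3s schema, just a rough sketch of that log-style idea with the sqlite3 CLI (the table and column names are guesses):

    # append-only table: every change gets a new, monotonically increasing id
    sqlite3 state.db 'CREATE TABLE IF NOT EXISTS kv_log (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        name    TEXT,
        value   BLOB,
        deleted INTEGER DEFAULT 0
    );'

    # a "watch" client just remembers the last id it saw and asks for anything newer
    LAST_ID=42
    sqlite3 state.db "SELECT id, name, deleted FROM kv_log WHERE id > $LAST_ID ORDER BY id;"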


I know pretty much zero about k8s, but was it really important to remove Swagger? Seems like relatively useful functionality and not a huge complexity point? I really like the idea of a simple opinionated K8s BTW good luck!


You can probably argue the same about every part removed, but if you (or in this case, he) aren't going to use it now, remove it.

Just dead weight otherwise. You can always add it later if people really need it.

True, Swagger docs can be helpful, but rubbish, ignored Swagger docs with missing features or undocumented side effects are more confusing than having none at all. Just talk.


Agreed. I don't really get the love for Swagger, everything about it seems so meta. People say you get documentation for free, but auto-generating some fancy webpage out of code (or the Swagger spec) is not really documentation.

You have to write your documentation in a real human language. Maybe I just don't get the point of swagger.


Yeah this seems weird to me. If your API schema is not introspectable, where do your API clients come from? They don't grow on trees. I guess it's easy to cheat and just use an existing client, but that just shows that this is not a sustainable project


In the context of the complexity vs. power tradeoff, where do you see this versus Docker Swarm?


k3s is not intended to be a competitor, but let's say I wanted to build it out to compete with those systems. Swarm excels in UX and cluster management. It is easy to run and use, but it is incomplete in features and functionality. k3s addresses the cluster management issues and has tons of features. It makes k8s easy to run, but it still has the Kubernetes UX, which is quite difficult. So in the end I think it is a wash between k3s and Swarm: you either get hard to use (k3s) or incomplete (Swarm). I have a different project, Rio (https://github.com/rancher/rio), that attempts to address UX (and it's built on k3s). So Rio could possibly be a system that is easy to use, powerful, and simple to run. I think Rio could be something that is better than both raw k8s and Swarm for regular users, but it's very early.


Looks interesting. Why are you (Rancher) building your own distributed block storage (Longhorn)? What about Rook? OpenEBS?


How hard was the etcdectomy surgery? I can't see that situation getting prettier as the world of CRDs explodes.


Easier than you would think. I basically replaced the etcd KV API with a SQL-based one. But that means I kept the majority of the k8s etcd3 driver in place, which has all of the complicated k8s-specific code.


OT, but why do you want to push containers? In my opinion it's a misadventure away from the real issue of a proper package manager (which in turn is a misadventure away from a proper build system). The issue with current containers is that they're the wrong level of abstraction: containment should be per binary and ideally inline with the code (see pledge(2)).


As one of the maintainers of runc, I cannot agree with your general thrust more (I am obviously biased since I work for a distribution company -- SUSE). I don't agree it should be containment-per-binary, but I think that containment should be transparent to the operating system so it can be managed.

Currently the idea with k8s is that you just destroy anything that has broken state and rebuild it somewhere else. While this is the easiest solution to the problem, I think that containers should not always be treated like disposable objects.

I think in many ways, Solaris Zones (and FreeBSD Jails to a lesser extent) did this right. Every administrative command in Solaris is zone-aware and you can manage your zones like you manage your host. On Linux, containers are completely opaque root filesystems which contain who-knows-what and are neither manageable nor introspectable by the host. Everyone is cargo-culting and creating hacks around this fundamentally opaque system to try to make it less opaque (or just giving up and making it easy to purge them).

I'm working on some improvements to the OCI image-spec that _might_ help with introspection of images[1], but runtime introspection is going to be quite hard (needs runtime agreement).

I get that it's no longer sexy to work on container runtimes, now that everyone is losing their mind over orchestrators (and I imagine soon they'll be losing their mind over the next big thing). But I think that improving these fundamental blocks is pretty interesting and might result in better systems.

[1]: https://github.com/openSUSE/umoci/issues/256


It was never sexy to work on container runtimes, and nobody is losing their minds over orchestrators either. There was a real need for a platform to deploy to without being tied in to {aws, gcp, etc}. First it was OpenStack, now it's Kubernetes.


> I think that containment should be transparent to the operating system so it can be managed

you mean, as in "VM"? :)


VMs are entirely opaque to the host operating system. I'm not sure what the parallel is (though I assume you were being tongue-in-cheek).


Well, I always had problems differentiating "opaque" and "transparent" because depending on your point of view often both concepts can be applied interchangeably.

For instance, in your post I didn't really get your point, because at one time you seem to favor containers as a single entity from the perspective of the OS so they can be easily managed. This is what I understand you called "transparent to the OS". This is opposed to the container root fs that you call "opaque", supposedly because it cannot be easily mounted from the host OS etc.

As an implementor of container runtimes you know that there's not a container entity but an agglomeration of isolation techniques such as namespaces, cgroups etc., including some kind of overlay fs, that make up an actual container. For me that's not very transparent. VMs on the other hand are transparent in that respect – in a similar way Solaris zones and Jails are – as they are actual entities that can be seen and managed on the host OS.


I mean "transparent to the OS" in reference to operating system tooling rather than the kernel objects that make up containers (there is an argument that these are related concepts -- but I think we can have cohesive tooling without redesigning how Linux handles containers). The lack of a 'struct container_t' in Linux definitely makes containers quite complicated to set up, and is a constant source of pain. But this also allows for lots of flexibility with different userspace programs to use various subsets of containers as part of their operation (the Chrome sandbox is a good example of this).

> This is opposed to the container root fs that you call "opaque", supposedly because it cannot be easily mounted from the host OS etc.

Well, you can mount it and you can manually funge with it using 'nsenter' but that's not really ideal. Why (from the perspective of a user) can't you run a package upgrade across your containers? Why can't you get the free memory across all containers? Free disk space? And so on. This information is _available_ but it's not cohesive for _users_.
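
Concretely, today you end up doing per-container loops by hand, something like this rough sketch (and it only works if each image even ships the tool you're calling):

    # "free memory across all containers", the clunky way
    for c in $(docker ps -q); do
      echo "== $c =="
      docker exec "$c" free -m 2>/dev/null || echo "(no free(1) in this image)"
    done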

> VMs on the other hand are transparent in that respect – in a similar way Solaris zones and Jails are

I agree that Solaris Zones and FreeBSD Jails are transparent (in fact that level of tooling transparency is what I wish we had on Linux). I don't agree that VMs are transparent in the same way -- they are an opaque DRAM blob from the perspective of the host kernel. You can dtrace zones/jails, you cannot dtrace a VM (and get meaningful results about what the guest kernel is doing) AFAIK.


> I mean "transparent to the OS" in reference to operating system tooling rather than the kernel objects that make up containers (there is an argument that these are related concepts -- but I think we can have cohesive tooling without redesigning how Linux handles containers). The lack of a 'struct container_t' in Linux definitely makes containers quite complicated to set up, and is a constant source of pain

It makes it difficult to reason about containers, as well. For instance, do you see confinement technologies such as AppArmor or SELinux as belonging to the container, or are they slapped on top of containers? Where do you draw the boundary? And does it make sense to talk about "containers" when it's such a vague concept? Also, there are now "VM containers" such as Kata.


> And does it make sense to talk about "containers" when it's such a vague concept?

Well at some point we have to be able to talk about the widely-used and glued-together namespaces+cgroups+... concept. "Containers" is a perfectly fine thing to discuss (especially since the history of namespaces was definitely based in more "fully fledged" container technologies like Xen).

> Also, there are now "VM containers" such as Kata.

... which are just VMs that can interact with container tools. I'm pretty sure that most people would not call these "containers" from an OS point of view because they do not share any of the primary properties that containers (or Jails/Zones) have -- the host kernel actually knows what the container processes are doing.


There is always systemd-nspawn to do it the simple way.


systemd-nspawn doesn't really help do any of what I complained about. Besides, LXC is a much better "simple" container runtime in my opinion.


I think pledge (and unveil) are really good ideas, but that doesn't mean they're a substitute for containers. They're kind of different: pledge mostly just enforces correctness of a program, whereas containers enforce separation. I think containers are more analogous to privsep (used extensively in OpenBSD) than to pledge.

My biggest issue with the current state is that the common tools (e.g. Docker, k8s) are pretty terrible. Docker overcomplicates a great many things, which is one reason for the many bugs. And k8s ... well, let's not even go there.

I don't think much more is needed than just `run-container /usr/containers/cool-container` and presto, a container with the files in that directory. No need for buggy/slow platform-dependent FS abstraction layers that change every year because they never quite work right in a bazillion edge cases. This is how e.g. FreeBSD jails work.

This won't prevent implementing tools like `docker pull`; at its simplest it could be `curl ... | tar xf - -C /usr/containers/cool-container`. Running multiple containers based off a single image? `cp -r`.

Maybe I'm just an old Unix guy, but I think that composition from generic tools is a much better approach for a lot of this stuff. It's easier, less platform-dependent, has less code (meaning less bugs), etc.


Containers are basically a few syscalls wrapped in a ball of mud. pledge is containment, and the somewhat broken and ugly Linux alternative, seccomp bpf, is used in most containers _but_ not properly: most containers are bundles of software (multiple binaries), which makes it hard to contain a container (ha) as you have to allow more syscalls than needed. A better way would be to use .preinit_array and inject a .seccomp section, .unshare, etc. to create per-binary containment, or just chain-load everything in one big line.


> Containers are basically a few syscalls wrapped in a ball of mud

I don't think it has to be; I mean, the current Linux/Docker situation is kind of that, but that's just a single implementation.

> most containers are bundles of software (multiple binaries)

Yeah, exactly. Containers aren't just about security, they're also about convenience. Something like "docker pull someimage" can be a handy cross-platform way to run stuff.

I think "docker pull" is the reason for Docker's popularity. You can build a self-contained container containing Ruby, all your Ruby on Rails stuff and JS stuff, and whatnot, and then send that to any server and just "docker run" it. Getting some of this dynamic stuff to run can be tricky and error-prone: need to get the correct Ruby version, Rails version, muck about with bundle, etc.

Want to add a new app-server or migrate to somewhere else? Need to spend a day setting everything up again and hope you didn't make a mistake that will bite you later :-(

A self-contained container is a lot easier, even with good OS package management or automation tools like Puppet.

Many people also use Docker as "Vagrant" (e.g. to run dev versions or tests and whatnot). Again, "docker pull" is convenient here, especially when there are certain dependencies/requirements, or for running testing against different versions of some dependency, etc.

Again, I don't think Docker is very good as such, but I do think there are some ideas worth keeping, both for security and plain convenience (not often those two go hand-in-hand!)


K8s is about cluster orchestration. It's for building apps that allocate memory and compute from a pool of machines almost like malloc() and free(). It's also an abstraction for allocating compute away from the specifics of a provider, so you theoretically don't care if you deploy to gcloud or Azure or AWS. Containers and images merely define the interface with the agnostically allocated compute. They're not the essential bit though, not the magic.


Right, and there are some scheduling systems like Marathon/Mesos that can be used to control jobs or VMs as well (although they're more often used with containers than anything else these days). Can K8s run VMs, or is its only abstraction containers/pods?


The pod abstraction can be implemented as a VM. Take a look at Clear Containers.


Containers in pods only. In practice we run all of our Mesos tasks and Marathon services in containers anyway, as you mentioned.


I never said K8s wasn't about orchestration, I said why push containers in their current horribly broken form


The biggest issue I have with Kubernetes is it doesn't downscale. I have a lot of projects that don't need a cluster. A single node is enough. But Kubernetes has too much CPU and RAM overhead in such a configuration.

Because of this, I'm forced to either use another orchestrator (Docker Swarm is lightweight enough to work on a single node or a small cluster), or pay for machines I don't really need.

What is the current thinking on this issue at Rancher?


Have you considered stepping back and wondering if you really need orchestration at this scale? Couldn't a simple unit file suffice?

If you have many such projects, perhaps use something like Saltstack or Ansible, both of which have support for Docker container management built in.


Great question!

I already use Ansible to setup machines and automate deployments.

I used systemd unit files until I switched to Docker. I don't use them anymore since dockerd supervises the container itself.

I use Docker Compose to describe the set of services composing my app and it's great.

But in production, Docker Compose is unable to provide zero-downtime deployments, and this is why I'm looking at Docker Swarm or Kubernetes.

systemd unit files are not very useful for Docker containers, and they don't solve the zero-downtime deployment issue.


Have you looked at using a managed Kubernetes service like GKE? You still lose about 10% of your resources, but you're not paying for the master. You could even look at something like Google App Engine.


No, I haven't because I thought the overhead was bigger.

Are you saying that the overhead on the worker nodes is about 10% for the kubelet, containerd, and the logging daemons?

GKE documentation says 25% of the first 4 GB is reserved for GKE:

https://cloud.google.com/kubernetes-engine/docs/concepts/clu...


have you tried dokku? https://github.com/dokku/dokku


Apps are usually composed of several services working together, each service being in its own container. I'd like to be able to use something like docker-compose.yml to describe this. And it looks like Dokku doesn't do this.


This is such a great stepping stone to playing with k8s. Maybe there should be a book with each chapter named k1s, k2s, k3s. Each chapter introduces more and more features with the why of each. Eventually you get to the full set of features and can use k8s in its entirety.


Speaking of "a great stepping stone to playing with k8s", there's a free, scenario-based site called KataCoda.com out there that gets you putting Kubernetes (and Docker and other things like TensorFlow) through its paces -- no need to install anything, it's all browser-based.

The site also has some "playgrounds" for experimentation once you know what you're doing and apparently can be used to create one's own courses, e.g. for training people at your company, etc.

List of all the courses: https://www.katacoda.com/learn


Presumably some of those features will be relevant to you as you scale.

Also, ignoring a feature seems far cheaper than forking a project and giving up updates for desired features?

e.g.: feature A is desired, feature B is not, and a new upstream refactor touches both A and B. How do you stay up to date?


Curious if this would be better than minikube when running in a CI pipeline to run tests on.


For years now my favourite way to deploy my own 'side-projects' has been Dokku. It is like having your own Heroku on a single machine. Very neat, but no clustering.

Lately I have been playing with kubernetes and now docker swarm. I must say that I too find kubernetes to be too heavy (feature- and concept-wise) for the smaller scale things that I am up to: I have to read up too much, learn too many new concepts, write too many config files.

Docker swarm is a lot more appropriate for scaling those side-projects to a slightly bigger step, however, I have concerns about the longevity of the project. Will Docker Swarm still exist in 5 years?


I can't wait for someone to come out with a product called `k8OS`


What do you have against Rnete?


Against Kubernetes? Nothing, it's a beautiful architecture. I just want to use the code and architecture in different ways and specifically with lower resource constraints.


Oh, I was merely making light of the name "kubes" which I figure is what you meant with the numeronym. Apologies that my dog couldn't hunt.


> Apologies that my dog couldn't hunt.

Huh?


It's an American English idiom, in this case referencing his joke not performing the intended job of making you laugh.

https://english.stackexchange.com/questions/52755/meaning-an...


this is getting real meta now :)

Thanks for the explanation.


I salute the enormous amount of work you’ve done. But without a consistent store, it seems to me there are a myriad of race conditions the system can fall into.

Maybe if you accept a bit of oscillation under stress, and damp that oscillation a lot, it would be ok.


Not sure what it is, but the name really tickles me. Bonus points that it's kinda meta.


It's a cut-down version of Kubernetes. The author kept the basic features and removed most things that are oriented toward heavier users or that require more resources than they wanted to spare.

I assume the goal is to make k3s into something light where you can deploy small projects instead of using something like swarm/dokku.

He has also removed etcd and replaced it with sqlite which seems like a nice way to reduce resource requirements.


I more than get the joke, but I'm worried about the ambiguity of "5 less than k8s. Short for kates, pronounced k8s", and worry that it will stop the project from being successful, if that's your intention.

Are you saying your project is named "kates"? Why not name it something like "Khios", a Greek island, which still lets you keep 'k3s' as your abbreviation?


I never intended anyone to directly use it. The "selling point" is what I'm going to put on top of this. This is intended to be an internal library. Hence the stupid puny name.


How about "k2s", short for "K.I.S.S."? So simple it saves you an S. :)


Kubernetes == K8s, not his project.



