Hacker News

Maybe I'm too cheap, but I don't see an option for $5/mo nodes in any DC; they're starting at $10 or $15.



Hmm! I think this has changed since the beta.

Why not try a cluster with a smaller scaling group? You can create a cluster with only one node in it; but what is it that you are trying to run on top of Kubernetes? In my experience with growing clusters, you usually want to scale up the size of each individual node before you scale up the number of nodes in the cluster. (You might even find that you really need only one big node, say for your databases, and want to build a heterogeneous cluster: an autoscaling group of little nodes plus that one big node. That's a possibility with node pools on DO K8s.)
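As a sketch of that heterogeneous-pool idea: you can pin the database to the big pool with a nodeSelector and let everything else land on the autoscaling pool of little nodes. The pool name and workload name here are hypothetical, and the `doks.digitalocean.com/node-pool` label is assumed to be what DOK8s applies to each pool's nodes.

```yaml
# Hypothetical sketch: pin a database to the one big node in its own pool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres                    # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      nodeSelector:
        doks.digitalocean.com/node-pool: big-pool   # assumed pool name
      containers:
      - name: postgres
        image: postgres:11
```

Workloads without a nodeSelector stay schedulable on the little-node pool, so the autoscaler can grow and shrink that pool independently of the database.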

An ideal cluster size for me is probably 5 nodes with ~8-16GB RAM each. You could still make the cluster thing worthwhile with only 2 nodes at ~1-2GB each, but that'd be pushing it.
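Back-of-the-envelope, the gap between those two setups is large. The per-node prices below are illustrative assumptions, not DO's actual price list:

```python
# Rough monthly cost of the "comfortable" cluster vs. the "pushing it" one.
# All prices are assumed (USD/month per droplet size), for illustration only.
PRICES = {"1gb": 5, "2gb": 10, "8gb": 40, "16gb": 80}

comfortable = 5 * PRICES["8gb"]   # five ~8GB nodes
minimal = 2 * PRICES["2gb"]       # two ~2GB nodes

print(comfortable, minimal)  # 200 20
```

So the minimal setup is roughly a tenth of the cost, which is why it's tempting even though it's pushing it.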

I have a lot of practice at making clusters cheap; I was once published on the Deis blog, in an article about how to deploy the Deis v1 PaaS in a highly available fashion as cheaply as possible.

Many of the lessons from nearly a year of research I did on the topic before that publication still apply to modern Kubernetes clusters; but many don't, and still others are out the window completely on these managed environments, where it now seems possible to get much the same idea of "High Availability" that I was aiming for, but much more cheaply and with better guarantees.

For instance, since you are not running etcd yourself (it runs under the hood, on the management plane), there is no longer a rule that says you must have a minimum of 3, or preferably 5, nodes to keep a stable cluster. That was the basics of learning to wield CoreOS and Fleet 101!
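The "3 or preferably 5" rule falls out of quorum arithmetic: an etcd cluster of n members needs a strict majority to commit writes, so it tolerates only the failures left over after that majority. A minimal sketch:

```python
# Quorum math behind the "minimum 3, preferably 5" etcd rule.
def fault_tolerance(n: int) -> int:
    """How many members an n-member consensus cluster can lose."""
    quorum = n // 2 + 1   # strict majority needed to elect a leader / commit
    return n - quorum

# 1 or 2 members tolerate zero failures; 3 tolerate one; 5 tolerate two.
# Even sizes add nothing: 4 members tolerate one failure, same as 3.
for n in range(1, 6):
    print(n, "members ->", fault_tolerance(n), "failures tolerated")
```

That's why a 2-member etcd cluster is actually worse than useless for availability: losing either member loses quorum. On managed offerings, this math is the provider's problem, not yours.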

Consensus is handled on the masters, and that consensus is subject to split-brain problems, so this knowledge is still important; you just don't need to hold it yourself. In many basic clusters on managed systems like GKE and DOK8s, this knowledge is practically reliquary! Two nodes may be enough to ensure that one is there to pick up the slack when the other has a fault, exactly how you'd imagine it should work without a Computer Science degree. And with two nodes, since you'll probably never see a fault like that, and the whole environment is self-healing, even if one happens on your watch you might never have to know about it.


I noticed this as well. I think they are probably still evaluating where to set the starting price. In all honesty, $10 nodes are very fair. I had a semi-poor experience with $5 nodes (for masters, at least) when I used kubeadm on DO. The $15 2-CPU/2GB node is probably the sweet spot for this, although the $5 option would be nice for just messing around with some workers.



