Kubernetes clusters for the hobbyist (github.com/hobby-kube)
321 points by pstadler on May 8, 2017 | 65 comments



This guide makes an interesting choice with regards to etcd security, which I'm not sure I'd go with.

etcd stores a load of sensitive cluster information, so unauthorised access to it is a bad thing.

There's an assumption in the guide that you have a "secure network" and therefore don't have to worry about etcd authentication/encryption. The thing is, if you have (say) a compromised container, and that container, which has an in-cluster IP address, can see your etcd server, then it can easily dump the etcd database and get access to the information held in it...

Personally I'd recommend setting up a small CA for etcd and using its authentication features; there's a good guide to this on the CoreOS site: https://coreos.com/etcd/docs/latest/op-guide/security.html
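To illustrate the idea, here's a minimal sketch of the flags involved, assuming the certificates were generated with a small CA of your own; all paths and addresses are placeholders, not values from the guide:

    # etcd with TLS and client-certificate authentication
    etcd \
      --cert-file=/etc/etcd/server.pem \
      --key-file=/etc/etcd/server-key.pem \
      --trusted-ca-file=/etc/etcd/ca.pem \
      --client-cert-auth \
      --listen-client-urls=https://10.0.0.1:2379 \
      --advertise-client-urls=https://10.0.0.1:2379

    # the API server then needs matching client credentials
    # (all other apiserver flags omitted here)
    kube-apiserver \
      --etcd-servers=https://10.0.0.1:2379 \
      --etcd-cafile=/etc/etcd/ca.pem \
      --etcd-certfile=/etc/etcd/client.pem \
      --etcd-keyfile=/etc/etcd/client-key.pem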


This is great input and definitely something worth considering.

Related issue on GitHub: https://github.com/hobby-kube/guide/issues/6


No worries, I've got some more info on that on my blog https://raesene.github.io/blog/2017/05/01/Kubernetes-Securit... which may be of use.


Is the issue you outline something an out-of-the-box Kops cluster would suffer from?


That's a very interesting question. The answer is that it depends on the K8s distribution in question.

One of the challenges of K8s security is that there are a wide variety of distributions, each with their own defaults and configuration, so it's hard to make blanket statements about their security.

I've definitely seen some with this issue out of the box.


The second question:

> Choosing a cloud provider

This really annoys me about Kubernetes. Essentially all the official documentation is about how to select a cloud and let a cloud-specific tool magically do everything for you. There's no procedure for setting up a single host for development purposes or to have a Dokku-like personal PaaS.

This guide is super useful because it avoids all the magic and lets you set things up properly (despite assuming you're doing it on a cloud) and potentially even do it on a single host.


Thanks for that. This is one of the few comments in this thread that truly captures the idea behind this guide.

I've grown quite a thick skin since I exposed the first project of mine to a wider audience. But still, it's feedback like yours that keeps me going.


Let me also chime in and say I deeply appreciate the tone and level of technical detail in this... repo? report? guide? It's precisely the sort of thing that's been infuriatingly absent from the Kubernetes community for far too long. If this or something like it had been released with 1.0 we'd be a lot further along than we are now, I think.


No worries, thanks for writing it! A lot easier to follow than https://kubernetes.io/docs/getting-started-guides/scratch/#b..., which I was looking at previously.


> There's no procedure for setting up a single host for development purposes or to have a Dokku-like personal PaaS.

Can you clarify what you mean by this? Since the beginning (pre-1.0) it's been possible to stand up a cluster on bare metal/VMs using a single command. This used to be done with the `kube-up.sh` tool, which has been replaced with kubeadm in 1.6 [1].

If anything, I thought the fair complaint would be that there are too many ways to set up a cluster, and it's confusing to figure out which is the right one [2].

[1]: https://kubernetes.io/docs/getting-started-guides/kubeadm/ [2]: https://kubernetes.io/docs/setup/pick-right-solution/#table-...


The problem with these tools for me was that they required either Ubuntu or CentOS/RHEL. I was installing on bare metal running Fedora and I'm unwilling to rebuild my machine for one application.

I think I eventually got something set up using the manual documentation but at the time said documentation resulted in a completely insecure installation, entirely unauthenticated. Since I don't have (or want) a "private" network and the docs didn't tell me how to get security any other way, I just gave up on Kubernetes.

And really, it's incredibly frustrating how inflexible the docs and setup tools are. As detailed in another comment (https://news.ycombinator.com/item?id=14296176), Kubernetes really just consists of Docker and 6 Go binaries. There's no reason setup should be so difficult and certainly no reason why it should be OS dependent.


Got you, that's fair. It's definitely something that's been a pain point in the past, and is what kubeadm was designed to solve.

There are actually a lot of tedious configuration parameters that need to be tuned per-distro, which is why there's only now a candidate for a unified solution; it's harder than you might think to create a truly generic tool for this job. (Take a look at the kube-up saltstack files if you want to be horrified by some surprising complexity).

Nowadays, it looks like there are a couple of options on Fedora, specifically

https://kubernetes.io/docs/getting-started-guides/fedora/fed... https://kubernetes.io/docs/getting-started-guides/fedora/fla...

If you're looking for something that "Just Works (tm)", the ansible config (above) would be my bet for where to start these days.


> There's no procedure for setting up a single host for development purposes

I felt exactly like this, but then found one.

https://kubernetes.io/docs/getting-started-guides/kubeadm/

It requires manual kubelet setup (usually there are OS packages for that, which is the proper way to install software on the host system); it then generates a CA and node certificates, spins up etcd and flanneld containers, and sets up k8s on top of those. It also lets one join that cluster in a semi-automatic manner, making the whole setup no harder than Docker Swarm.

I think it's a reasonably nice middle point between unexplainable cloud magic and do-it-all-yourself-the-hard-way setups.
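For reference, the flow looks roughly like this; the token and address are placeholders:

    # on the master, after installing kubelet and kubeadm from OS packages
    kubeadm init

    # on each worker, using the token printed by `kubeadm init`
    kubeadm join --token <token> <master-ip>:6443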


Red Hat Container Development Kit is a great way to get started locally. If you want to deploy OpenShift on a VM, there's an Ansible-based installer that will interview you and install it on your VMs (since OpenShift is just a layer on top of k8s, you can use all the underlying primitives if you don't want the additional features).


It's for real world distributed systems. The sort of thing that cannot run or be simulated on a single machine.

It's not worth bothering to make a single machine setup.


> It's for real world distributed systems.

The same can be said of Heroku, yet Dokku exists. The same can be said of Docker, yet plenty of hobbyists use it for their projects.

> It's not worth bothering to make a single machine setup.

Sure it is. Say you're a web developer and you use MySQL, PHP and Apache on Kubernetes in prod. Why would you go to the effort of setting those up individually on your machine when you could just put your prod manifest on Kubernetes on your workstation?
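Something like this is all it would take (the context name and manifest path here are made up, not a prescribed layout):

    # exactly the manifests used in prod, pointed at a local cluster
    kubectl --context=local apply -f manifests/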

There's nothing about Kubernetes (aside from poor documentation) that prevents it from being used on a single host. Actually, according to its own documentation (https://kubernetes.io/docs/getting-started-guides/scratch/), it consists of:

- Docker/Rkt

- Kubelet

- Kube-proxy

- Etcd

- Apiserver

- Controller

- Scheduler

The bottom 6 are actually written in Go and can be run from a single binary.

There's no reason Kubernetes should be hard to set up. There's no reason it can't run on a single node. There's no reason you need a cloud. There's no reason you need a "supported" operating system.
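To make that concrete: stripped to the bone, the whole thing is a handful of processes. A rough sketch, with flags heavily abbreviated and insecure local ports used purely for illustration (not a working or safe configuration):

    # control plane
    etcd --data-dir=/var/lib/etcd
    kube-apiserver --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.96.0.0/12
    kube-controller-manager --master=http://127.0.0.1:8080
    kube-scheduler --master=http://127.0.0.1:8080

    # on each node
    kubelet --api-servers=http://127.0.0.1:8080
    kube-proxy --master=http://127.0.0.1:8080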


>>> Say you're a web developer and you use MySQL, PHP and Apache on Kubernetes in prod. Why would you go to the effort of setting those up individually on your machine when you could just put your prod manifest on Kubernetes on your workstation?

A basic LAMP stack doesn't need Kubernetes to operate. It's up to you to justify the effort of setting up 7 super complicated and useless pieces of software.


But a developer doesn't want "a basic LAMP stack," they want something that matches what's in production. "It worked on my machine" is not a great way to do software development.


Except they provide an official single-machine setup called minikube. Sadly minikube suffers from the same issue GP describes - it is just calling one of the existing automagic bootstrapping scripts against a local VM.


I like the juxtaposition of the words "hobbyist" and "kubernetes cluster".


It seems that a proper Kubernetes setup is the modern day equivalent to the proper email server setup of the 90's or the 00's :)


Yeah, the community follows the wrong approach. Two services don't differ that much in the features they need, so there should be an end-to-end solution. But each tool, Kubernetes included, only delivers something that isn't even 100% of one feature and hopes that someone else comes up with a solution for the other features.

In my day job I usually have to work with these tools, and currently have a stack of 5 that mostly have incomplete documentation, zero explanation of how they actually solve the problem, and nearly zero ability to debug (e.g. what value do Kubernetes logs and events have? Usually, when you have a problem is exactly when you have no Kubernetes logs yet/anymore, and the events only tell you what you already know). Now I'll probably need to learn another one, considering these three options for storage.

On weekends, when I'm mostly trying to relax and physically have only 2/7ths of the week to allot, I learn the basics behind containerization, e.g. namespaces, cgroups, virtual network adapters, iptables. And I feel that in this small time slot I make a lot more progress towards the end-to-end solution that people actually need.

The example I'm using is wordpress+mysql. It's a simple thing that covers >70% of what anybody wants to deploy. And on-premises with kubernetes+docker it's still not possible without hacks (e.g. for volume claims and logging), after, I don't know, 4 years of Kubernetes? I bet that, spending 2 years' worth of weekends, any normal person could come up with something better.

---

Re missing features, examples for Kubernetes:

A) Why does Kubernetes not solve the networking part? If I have a cluster and containers that may run in different places, of course the tooling I use to maintain that cluster needs to ensure that the containers can talk to each other. There can be an abstract API and the option for other people to write plugins, but the core needs to come with one solution that mostly works and is debuggable when it doesn't.

B) CrashLoopBackoff. Why did no Kubernetes developer get the idea that this state may require any kind of log/debugging?

C) Why does kubernetes assign random ports to services and not provide a simple way to retrieve them? Of course I can get them after I've learned the JSON API. But usually that is not considered a solution but a hack. I really don't care what port the service is running on, I just want to use it.

D) Why do I need to manually say how many containers should run for each service? There are very distinct options a user may need. E.g., most services should run 1 instance and replace it if it dies. If it needs reliability I want to define how reliable it should be and accordingly the cluster should decide if it needs 2, 3 or 5 instances. And lastly I want to run stuff on all my nodes. Each node with a container. That is not even possible afaik.

E) Most kubernetes tools don't work well with environment proxies. For instance, the kube-proxy shell tool will completely bug out. But surprise, clusters make a lot of sense in enterprise environments, and enterprise environments have proxies. It's also not a hard setup for testing: a Raspberry Pi can be your home network's proxy.

F) On-premises storage solutions, considering that a restarted container may not run on the same host.

G) Since not much is really running right now we haven't run into that problem yet, but it's totally possible that the whole cluster runs out of resources. I haven't seen any piece of info about the general cluster status and when I need to increase or replace infrastructure.

Honestly, if I don't have all these things solved, am I really better off using Kubernetes than writing my own scripts? I think it currently is about equal in effort. And if that's the case, my own solution has the huge advantage of being under my control and allowing me to learn a lot of new things.


It seems you have not discovered the wide possibilities of kubernetes yet :)

How are you setting up your cluster?

A) Kubernetes abstracts the network solution so people can choose between multiple implementations. Some people already use OpenStack and want to use their Open vSwitch network for their containers; others want to use a pure container network like flannel. Kubernetes distributions usually come with an integrated network solution.

B) You can still retrieve the logs from a container in CrashLoopBackOff state. You can even retrieve the logs from the previously failed container by using `kubectl logs <container> --previous`. Applications can write information about their failure to /dev/termination-log, which can be used to debug the failure.
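A quick sketch, with the pod name as a placeholder:

    kubectl describe pod my-pod      # events, exit code, restart count
    kubectl logs my-pod --previous   # logs of the last crashed container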

C) You can define the port yourself; otherwise kubernetes defaults to a random port to avoid port conflicts. The recommended way to expose HTTP services would be by using ingress.
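For example, a Service with a fixed node port (names and port numbers here are made up):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: NodePort
      selector:
        app: my-app
      ports:
        - port: 80          # cluster-internal port
          targetPort: 8080  # container port
          nodePort: 30080   # fixed port on every node (30000-32767 range)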

D) You can run an instance on every node by using a daemonset.
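A minimal DaemonSet sketch (names and image are hypothetical; on clusters of that era the apiVersion was extensions/v1beta1):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-agent
    spec:
      selector:
        matchLabels:
          app: node-agent
      template:
        metadata:
          labels:
            app: node-agent
        spec:
          containers:
            - name: agent
              image: example/node-agent:latest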

E) Are you talking about outbound traffic from your containers to an external system? You would need to configure this in the container engine and the container itself. I had little problem doing this in an enterprise environment that required an http proxy for all external communication.

F) You can attach your existing SAN solution over iSCSI, fibre channel or even NFS. Another solution would be to run distributed storage like ceph or glusterfs in kubernetes, for kubernetes. You then provision persistent volumes that are attached to the node your pod is running on. If your pod is rescheduled, the volume will be moved too.
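Roughly, the objects involved look like this (the NFS server and export path are placeholders); the pod then references the claim by name:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: data-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes: [ReadWriteOnce]
      nfs:
        server: 10.0.0.5
        path: /exports/data
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-claim
    spec:
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 10Gi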

G) If you set resource requests/limits, kubernetes will not schedule your pods when no resources are available.


I'll read all of the details you provided. Thanks a lot. It may be lack of knowledge in our team, not kubernetes. Sorry if I was too frustrated and blaming the tool instead of my lack of knowledge.

Re proxy: Not only that, but also. Let's say you have a single-instance deployment and run kube-dashboard. You want to access the dashboard on localhost:8080, so you start the kube-proxy shell command. What actually happens is that the go code underneath kube-proxy will redirect the request through your $http_proxy, even if localhost, the dashboard's internal address, and your host's external address are all in $no_proxy. And if your network proxy doesn't allow the dashboard's port, you get an HTTP 403 instead of the dashboard.

Re storage: We have a cluster with 10+ nodes, each with 5+ disks of 10 TB each. How would you make sure that your software talks to the correct disk after it gets restarted and may end up on another node?

Re limits: The highest goal is not avoiding resource exhaustion, though; the highest goal is service continuity. It can all run at 100% and die, no problem. I just need to know that I should exchange the first few disks/processors/machines before the customer-facing service slows down.


@Proxy I am not sure what the issue is that you are facing. kubectl proxy should create a tcp-proxy from your loopback device to the pod. So there should be no http_proxy involved. If you think a kubernetes component does not respect the no_proxy setting you should create an issue.
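For what it's worth, the basic flow is just the following (the port is arbitrary; /ui was the dashboard redirect of that era):

    kubectl proxy --port=8001
    # dashboard then reachable at http://localhost:8001/ui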

@storage If you want to move your disks with your containers you need an additional storage system (SAN, cloud provider, or distributed storage in kubernetes). Then you create persistent volumes in kubernetes that reference a disk from the storage provider. This allows you to assign that disk to a pod with a persistent volume claim. The disk that is linked to the container through the persistent volume and persistent volume claim will be moved to the same node the container is scheduled on. If you want to run stateful workloads on kubernetes I would advise you to use a storage system. You can use local disks, but you lose some of the flexibility that you gain from kubernetes by tying your containers to the nodes holding the data. There is also work being done on improving the handling of local storage, to treat it more like a resource and introduce separation by using local persistent volumes per disk.


The local persistent volume claim is a completely sane and valid feature, but it was not implemented, because no one stepped up to do it initially.

But it'll be alpha in 1.7: https://github.com/kubernetes/features/issues/121


Kubernetes is very much like mesos + its frameworks (chronos/marathon/aurora): it's for devops automation, for those who got tired of writing scripts/programs to manage docker (and LXC, and OpenVZ before that), and got tired of thousands upon thousands of small edits to config management repos. Knowledge of kernel stuff is required to administer kubernetes, at least for now.

Deis aimed (aims?) to be the lightweight version, but then got acquired by MS.


It seems more like setting up the HA clusters we had in the 90s. Things like Veritas, LifeKeeper, ServiceGuard, etc. The parts aren't even that much different, really. They managed a network, services exposed on the network, health of said services, etc.

And the installation was just as confusing it seems.


i'll wait for qmail then.


You mean because it is now considered a basic requirement to have failover and horizontal scaling?


I mean because of the high number of working parts for a fully functional system: Docker, Kubernetes, etcd, kubeadm, etc., etc.


Oh, so it's about the high complexity and many interoperating parts?

I'm just asking because I was born in the 90's, I'm not very aware of the state of email server management then.


Well, let's just say that today we have tutorials and bootstrap projects for Javascript frameworks; back then we had tutorials and bootstrap projects for setting up email servers with spam filters :)


Why are you doing all of this stuff manually? There are several providers that will set all of it up automatically for you. I like the Kismatic toolkit (https://github.com/apprenda/kismatic), but there are a bunch of others. Sure, maybe once you go to production you'll want to install manually so that you have everything finely tuned the way you want, but learn it by using it rather than trying to figure everything out up front.

Or even better just use GKE for development / learning purposes. Just stop the cluster when you're not using it, and it'll be a lot cheaper than something you won't want to take down because you spent days installing it.


Because Kubernetes is a complex beast with many moving parts, and learning about all those moving parts becomes more and more important as your usage grows.

Personally I've used Stackpoint.io to provision some small clusters but I was very excited to see this project because deploying my own cluster from scratch is next on my todo list. Kelsey Hightower's "Kubernetes the Hard Way"[1] is the canonical go-to reference here but it's also very daunting so this looks like a great middle ground.

Let's face it, even today the k8s docs can be quite sparse sometimes or gloss over the details, so knowing how all of the pieces work from the ground up can be a big help. Plus, you prevent vendor lock-in when whatever automated tool you're using doesn't solve your use-case or decides to start charging a lot of money.

[1] https://github.com/kelseyhightower/kubernetes-the-hard-way


I agree with this assessment. We decided to do "kubernetes the 'sorta' hard way" by leveraging the Saltbase installer with some level of customization and full control via terraform of how our infrastructure was being allocated. I think it's valuable to learn what the tool is doing if you have to maintain it. When something breaks, an upgrade has issues, or you need to better understand the system to make a decision, I feel that you gain a lot from having set up the system yourself. I think you'll be more likely to know precisely where to look to debug things. You also get closer to the tool, which makes it easier to contribute back to the community.

You also get the benefit of making your own infrastructure decisions. Yes, k8s can provision ELBs and EBS volumes (and their equivalents in Google Cloud, Azure, etc.) as well as autoscale nodes via a cluster addon, but the big moving pieces, such as instances, VPCs, networking, etc., remain well-defined in Terraform or some other infra-as-code. That means you can decide how to deploy that etcd cluster, how it gets backed up, whether or not it's encrypted at rest, etc. Generally speaking, we just value the level of control and insight that we get out of controlling the stack definition ourselves. To some extent that may be antithetical to the purpose of k8s, since the goal of the project overall seems to be simplification and centralization of deployment best practices.

With all that being said, kops is an incredible tool (as are others) and we used it to learn about the system and test some of the functionality for ourselves. Can't recommend it enough.


And if you want to be awed look up Kelsey Hightower's Github profile.


There are indeed many options for automated setups. I did my first steps with Kubernetes on GKE, later followed the CoreOS guide to set it up on bare metal. This guide is for creators, a written form of my lessons learned. It should enable people to run secure clusters wherever they want.

In case you missed it, there's a repository[1] in the same org which offers fully automated provisioning using Terraform.

[1] https://github.com/hobby-kube/provisioning


Yes, it does seem like an awesome resource for intermediate Kubernetes users like myself. But beginners should be steered away from it because it's too much, too soon. And by talking about choosing a cheap cloud provider at the beginning it implied to me that you were targeting beginners.

Please add a disclaimer on the top pointing beginners at some of the better resources for them, and then tell them to come back once they've learned what stuff like "ingress" means.


Used this recently and found it great. Thanks for the hard work and effort that clearly went into this.


Hobbyists like to do things for themselves and better understand how things work. It's the nature of being a hobbyist rather than a consumer.


Stackpoint.io is great for just spinning up k8s to get a feel for how it works. No charge, and all web-based. It supports other providers, but I tried it with DigitalOcean. You pass it a DO API token, like this: https://cdn-images-1.medium.com/max/800/1*tcGINse5on6qnbsRYN...

Then it does the installation and gives you urls for the control panel.

I don't think I'd use it for production, as everything is behind stackpoint.io urls, but for experimenting, it saves a ton of time.


I've been using Stackpoint for a while and they do a great job, but be aware that they'll soon start charging $50/mo, which will remove them as a tool for hobbyists.

https://stackpoint.io/#/pricing

They haven't announced a date for when they'll start charging, and I'm not sure how that will affect existing clusters.


> they'll soon start charging $50/mo, which will remove them as a tool for hobbyists.

What's wrong with paying for something as a hobby?


If every tool/thing I relied on cost $50/mo I wouldn't be able to afford my hobby. I would need to seek a cheaper - possibly free - alternative or give up the hobby.

Many people don't have $50/mo to spend on unprofitable hobby projects. There's nothing wrong with paying for hobbies, but most people's hobbies fall under (a) infrequent large purchases or (b) really cheap occasional costs.

Upgrading my computer falls under (a). It's expensive, but I only do it once every 3-4 years. Buying new strings for my guitar every so often falls under (b). It's cheap, and while I should do it every other month, I do it maybe twice a year.

If the cost of new guitar strings was equivalent to the cost of upgrading my computer - I'd give up playing guitar as the hobby would become too expensive.


Building this stuff by hand is... yeah.

We open-sourced our Kube bootstrap toolkit (which uses kops). No one should build this by hand. It would be like, I dunno, setting up a MariaDB Galera Cluster by hand when you intend to use RDS (not a perfect example, please no pedantry!). It's a fun learning experience, I guess - if you're into that sorta thing - but is not what you want to use in any production context.

https://www.reactiveops.com/blog/kops-102-an-inside-look-at-...

https://www.reactiveops.com/blog/using-k8s-scripts/

Use the automation to bootup clusters, please!


I strongly disagree. Rolling out a K8s cluster to a production environment without knowing how to "do it by hand", relying instead on helper tools that abstract away the complexity, is deeply irresponsible, even professionally negligent.

Like learning shortcuts in math, you only "earn" automation once you've done things the hard way often enough to know how it works at a low level. That knowledge is critical. Otherwise you're just cargo-culting something you don't understand.


Great set of resources -- I just went through the process of defining a terraform cluster in AWS over the past few weeks, though I'm leveraging the k8s Saltbase installer for the master and nodes.

I'm curious, why no mention of AWS as a provider for roll-your-own? Is this a cost thing?

Also, I get the feeling that Ubuntu is _not_ a first-class citizen of the k8s ecosystem, but perhaps my newness to the ecosystem is to blame here. The Saltbase installer, for example, only supports Debian and RHEL distros, `kops` prefers Debian, and the documentation for cluster deployments on kubernetes.io and elsewhere also seems to be somewhat suggestive of Debian and CoreOS. Perhaps that's just a mistaken interpretation on my part. I'm curious what other people's thoughts on this topic are!


Ubuntu is absolutely a 1st class citizen in the K8s Ecosystem!

The front page of https://kubernetes.io/docs/ has a bullet that links to a super simple way to deploy Kubernetes to Ubuntu on any of [localhost, baremetal cluster, public cloud, private cloud]!

See:

* Installing Kubernetes on Ubuntu: Deploy a Kubernetes cluster on-premise, baremetal, cloud providers, or localhost with Charms and conjure-up.
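For anyone curious, at the time that path boiled down to roughly the following (the snap channel and options may well have changed since):

    sudo snap install conjure-up --classic
    conjure-up kubernetes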


kops doesn't necessarily prefer debian - we support Ubuntu, Debian, CentOS/RHEL, CoreOS and Google's Container OS. One of the outputs of the Kubernetes-on-AWS efforts is an AMI that is "Kubernetes Optimized" - a 4.4 kernel, Docker pre-installed, lots of inodes etc. That AMI _is_ based on Debian, hence the suggestion that if you don't otherwise care (and my hope is that eventually you won't), you should probably just use that AMI. But if you do have a preference, by all means use your distro of choice.


We found that setting up Kube on AWS in a prod-ready way was complex enough that we wrote up some docs on it: https://www.datawire.io/guide/infrastructure/setting-kuberne..., hoping it would be helpful.


I appreciate the responses everyone. Glad to be set straight on some of this stuff.


I'm surprised a hobbyist K8s administrator is not choosing to use kubeadm instead.

https://kubernetes.io/docs/getting-started-guides/kubeadm/


https://github.com/hobby-kube/guide#installing-kubernetes

> There are plenty of ways to set up a Kubernetes cluster from scratch. At this point however, we settle on kubeadm. This dramatically simplifies the setup process by automating the creation of certificates, services and configuration files.


¯\_(ツ)_/¯


I just had my first read of Kubernetes. Looks doable. Time to jump on the bandwagon.


Really great resources! I was working on my own version of k8s setup scripts using Ansible, and I will definitely use this guide to improve mine.


Can I ask what you thought of the k8s roles in Ansible Galaxy?


Great timing, I was wondering to myself about the feasibility of a 10€ cluster on scaleway last week.


I found gluster-kubernetes quite simple to install. But the install instructions do assume that you're going to be giving it its own partition, which you would be doing on any sort of real production deployment.


You can spin up a cluster on gce in a couple of minutes.


There are a couple of reasons to do it manually and/or outside GKE, notably:

1) Cost. In VPSs like Digital Ocean/Scaleway, there's usually a large network out transfer quota included in the price, which just isn't there with GCP, where you pay for network usage on a metered basis.

2) Learning. Although you can defer most of the heavy work to GKE, it's still good to understand the moving parts so you can make better choices as you grow.


What do you do if something goes wrong with it?


exactly what i was looking for! Eureka!


Good to see the author using WireGuard as an additional network security layer.



