Hacker News
Kubernetes the Hard Way (github.com/kelseyhightower)
240 points by tantalor on Aug 19, 2016 | 79 comments



Lots of good information here but it is still not enough for a production setup IMHO.

There is a great need for good sources of production setups. In open source software this seems to be the secret that no one is willing to reveal. I tried to set up a Kubernetes cluster from scratch a while ago and soon I was browsing the source code for answers. OpenStack is the same: you need to understand a lot about the inner workings before you can even attempt to set up something for production.

There is always a simple "this shell script starts your own <name your tech here> cluster in Vagrant", but it is still not a production setup.

And even though this article is the "hard way", it says:

> This is being done for ease of use. In production you should strongly consider generating individual TLS certificates for each component.

And it does not mention the crucial part: the Common Name field in the certificate maps to the user name. That is the magic piece of information I once needed.
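For anyone who hits the same wall, here is a rough sketch of what that looks like with openssl (file names and the "alice" user are just illustrative, not from the guide):

    # The CN of a client certificate becomes the Kubernetes user name.
    openssl genrsa -out alice.key 2048
    openssl req -new -key alice.key -subj "/CN=alice" -out alice.csr
    # Sign with the cluster CA; the API server then authenticates requests
    # made with this cert as the user "alice".
    openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -out alice.crt -days 365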

I sincerely appreciate this article but production setup is still a long learning experience away.


I propose a new term: Consultancy Driven Development. It goes like this:

- If it's too easy to set up, nobody will hire us to make it work.

- Implement a kickass setup dirt cheap for some big-name company, so we can claim they use it in production. Yeah we tweaked it so it bears little resemblance to the original product, and only fits an incredibly narrow use case, but nobody stands to benefit from blogging about that.

- Better ship with a configuration file that isn't production ready.

- Did I say one? Better have three configuration files, each duplicated in distribution-dependent directories (in some cases), needing manual sync between servers to prevent catastrophic data loss.

- Remember not to publish the program that checks for errors in configuration; half of our income would disappear.

- Benchmark with a configuration file that nobody would use in production, but looks really impressive when taken out of context.

- People want transactions, remember to claim support (and if you must, explain somewhere in the fine print that a transaction can only span a single operation on exactly one document, and btw. is precisely none of A, C, I or D).

- Somewhere on the front page, it should say how we can support petabytes of data (and it performs very well, as long as you write all your data in one batch, never modify it, keep it all in memory, and turn off persistence).

- Never give away answers online. Answer every question about configuration with "it depends".

- Don't release a new version without renaming a few configuration options. Be "backwards compatible" by ignoring unknown, obsolete and misspelled options.


What you are describing is basically OpenStack. Although Hanlon's razor applies, none of the current actors stand to benefit from improving the situation.

* Extremely difficult to set up.

* Claims that half of Fortune 100 uses it (read: many are required to support it; the rest have one guy with a toy installation in some branch office).

* Consists of dozens of components, each with several-thousand-line config files (actually Python code) that must be kept in sync between all nodes (yet have node-specific data).

* Claims to be "modular", but has complex interdependencies between each of the components.

* Upgrading is not officially supported, but some companies will help you.

* Will break in mysterious ways, and require you to backport bugfixes since you're stuck on an unsupported version after a year.

* Has unhelpful error messages (e.g. throws a Connection Refused exception when you're actually receiving an unexpected HTTP return code).

* Writes documentation in a way that appears OK to new users, but is vague enough to be useless for those who are looking for specific information.


This is the most accurate description of OpenStack I've ever read. Congrats!


Yes, and this is why we had to invent a new job position and hire a huge workforce of "devops". A cost that tends to far outweigh the license costs associated with most commercial software, because those people have to have the skills of a full-blown software engineer to perform what should be a sysadmin job.

I think if, 15 years ago during the open source vs. closed source wars, you had mentioned that open source projects would eventually decide that they wouldn't make any attempt at documenting the product, no one would create actual installers, and continuous development methodologies would leave 99% of open source projects in a state of zero testing before releases, it would have been a much harder sell.

Part of this goes back to the early Linux RPM/DEB decisions to refuse to follow in the footsteps of more traditional software installers and provide an interactive UI for configuration (see the bottom of this comment about HPUX), resulting in chef/puppet/etc. This has removed the onus on projects to dedicate resources to that portion of the project. I've worked at a few companies shipping commercial software and there was always either a team responsible for building an installer, or a part of the development cycle dedicated to it. There were also frequently technical writers (a job position that seems to be mostly gone). Now, with open source/git, it's done when the code gets pushed to GitHub. Forget any documentation more complex than a few docbook/etc. comments scattered around the code base and a wiki so fragmented that going from one page to another switches the version of the code in question.

It's a pretty sad/depressing state, and sometimes I wonder how anything works anymore. Thank god for RH, which actually has testing labs, people to fix bugs, and strong open source ethics about making sure upstream and the other distros benefit from their work. But then they go and behave like poo-flinging CADT https://www.jwz.org/doc/cadt.html in other projects.

* See HPUX for a good example of how to do it on Unix, without some of the problems Windows installers frequently have. The packages have a standardized form description language that is picked up in a preinstall process, so the user can be prompted for install locations/configuration/etc. before any of the packages are installed. The user runs through the entire dependent set of package forms before the install actually begins. Alternatively, an Ignite script (like kickstart/autoyast/etc., only with most of the functionality of chef/puppet/etc.) can be generated for automated deployments.


Clarification:

kasey_junk hints at this but I'll spell it out:

This is a problem with enterprise software as well as some Open Source software. IMO Open Source might sometimes even be simpler than some commercial stuff.

Jokes about SAP consultants moving in aren't funny anymore.

A former boss of mine told me every one of his customers that had ever decided to go with SAP had burned themselves. (IIRC the latest one I heard of started with an estimate of 5 million monies, and when I was there to install our stack they were at 25 and counting.)


I agree there is a perverse incentive for open source companies to do this if their business model depends on support. To paraphrase Upton Sinclair: it is difficult to get a person to document something when his salary depends upon his not documenting it!


Consultant-ware has a storied history in our industry. SAP, Oracle & Peopleware are just some of the names that come to mind. Devops is clearly the next frontier in this movement.


This all makes me laugh and cry at the same time. It makes me laugh because everyone wants to run k8s for no real reason; they haven't got scale, traffic, or many woes. Please just run some VMs, CM, unattended upgrades, Capistrano and Packer. Mostly the loose reasoning is 'simplicity', and it's the new shiny. This comes from people thinking that deployment, service discovery, config, etc. are provided for free in Kubernetes, and that one boot script will solve it all. On top of that, everyone thinks it's trivial to operate and maintain this, and no one understands what 'production ready' is. I almost think people believe it replaces ops, but it does the opposite.

It makes me cry because running k8s is hard, ops is hard, and so is telling people they might be wrong. K8s consists of half a dozen components, they have dozens of config flags, and much functionality is buggy, in beta, or in flux. To top this off, k8s is based on etcd. Etcd is barely production ready by their own admission (remember /production.md on GitHub?), but if you have run it you will understand the bugs and vague docs, coupled with reading the source constantly when problems arise. K8s consists of many components: kubelet, proxy, controller, scheduler and more. These you have to install and configure, and many scripts do this badly in a one-size-fits-all approach, and many CM methods currently do this in a barely OK manner. I cry because of overlay networking too, it's a nightmare, and the alternative cloud permissions are scary.


How do you reconcile "ops is hard" with "just run some VMs, CM, etc."? Is it because ops only becomes hard when you force yourself to use k8s when it's not really needed?

Also, do people really think k8s is a drop-in/trivial solution? I just got done evaluating it and the overriding sentiment seemed to be "it's super flexible but super complex and badly documented, and you'd better hope you're using the happiest of happy paths."


My anecdotal experience: I only started working with Docker and distributed systems in January and I've been through a couple iterations of provisioning, deployment, and orchestration since then - started with docker-machine and docker-compose, added in swarm, private registries, and a set of bash scripts, now moving to kubernetes - and I am finding that kubernetes handles many tricky components out of the box with very little effort. I wouldn't have understood how to use it four months ago but now that I've implemented much of it myself I understand the underlying architecture and design goals and find it to be a better solution than the tools I've put together. It took me a few days to see if it would do everything I needed and understand how to configure it, which I suppose could be seen as complexity, but it took me much longer to understand how to run docker in production in the first place since I had to learn and build it all without prior knowledge. Kubernetes has been a cakewalk in comparison. If I ever need to set up a cluster from scratch again I'll be using k8s.


Disclosure: I work at Google on Kubernetes.

We're always looking to improve on both complexity and documentation - but, as Kelsey pointed out in his tutorial, it's definitely got a few steps. The biggest question is would you really want any fewer? That is to say, you could use a hosted solution (like Google Container Engine) or set up with a one-line command, but most people who want to run at production scale definitely want individual steps so they can customize.


Hi - I work at Google on Kubernetes.

The biggest reason people use Kubernetes is because it is the best solution to solve the need "how do I bring my containers to production." You're absolutely right, you can solve this in many other ways, but, no matter what, you're going to have to address the problem of scaling, updating, monitoring, logging, securing, maintaining persistence, implementing 12-factor applications and many many other things before you do - Kubernetes solves all this for you.

Just a quick correction - Kubernetes is not based on etcd, nor do you have to set up half a dozen components. There are three components to Kubernetes - an API server, a scheduler and a controller manager - there are flags, true, but you can easily use a configuration file (just as you would with any other production ready server).
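For reference, those three look roughly like this as plain processes (flags trimmed and values are placeholders, along the lines of what the tutorial walks through):

    kube-apiserver \
      --etcd-servers=http://127.0.0.1:2379 \
      --service-cluster-ip-range=10.32.0.0/24
    kube-controller-manager \
      --master=http://127.0.0.1:8080 \
      --cluster-cidr=10.200.0.0/16
    kube-scheduler --master=http://127.0.0.1:8080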

If you saw anything that's badly documented or buggy, please let me know!


k8s is still way easier to run than Mesos & Marathon with all the accompanying services that make them usable.


And Nomad is an order of magnitude easier than k8s. It might not be as full-featured, but for basic use cases of "give me X resources to run Y" it's really great. I was shocked at how easy it is to set up and run. The folks at HashiCorp are doing some great things.


Agreed (as someone who has spent the last 1.5 years working on/with Mesos and will soon be looking to migrate to openshift/k8s)


"unattended upgrades".

Why? With VMs there is no reason, ever, to be modifying the OS with the VM still running, rather than cutting over to a new, tested instance.


Sorry, I was referring to security updates through unattended upgrades. It's pretty hard to do if your machine is storing state or is handling a lot of traffic. Whilst churning your entire estate should be possible, it is not always.
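(For what it's worth, on Debian/Ubuntu that mechanism is just the stock package:

    sudo apt-get install unattended-upgrades
    sudo dpkg-reconfigure --priority=low unattended-upgrades
    # which writes /etc/apt/apt.conf.d/20auto-upgrades:
    #   APT::Periodic::Update-Package-Lists "1";
    #   APT::Periodic::Unattended-Upgrade "1";

The hard part is everything around it, as above, not the tooling itself.)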


There is a ton of work going on upstream to make Kubernetes easier to install and manage in production environments.

A big chunk of that work is what is being called "Self-Hosted Kubernetes". The idea is that once you bring up a single machine running a Kubelet you can bootstrap the other services that make up a Kubernetes cluster from there. You can learn more about that here: https://coreos.com/blog/self-hosted-kubernetes.html

As far as TLS goes, there is ongoing work upstream to add a CSR system for the "agents", called Kubelets. This will allow people to automate the TLS setup and simplify the management. Details are tracked here: https://github.com/kubernetes/features/issues/43
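Once that lands, the operator-side flow should look roughly like this (a sketch only; the exact commands are still settling while the feature is in progress):

    # A kubelet submits a CSR when it first joins; an admin (or an
    # approver controller) then signs off on it:
    kubectl get csr
    kubectl certificate approve <csr-name>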

Also, there are more discussions happening to improve the first install experience. https://github.com/kubernetes/kubernetes/pull/30361#issuecom... https://github.com/kubernetes/kubernetes/pull/30360

Kubernetes is really focused not just on making it easy to install, which is the trivial scripting part, as you point out, but on making it easy to manage over the lifecycle of the cluster. That is where work like self-hosting, TLS bootstrapping, etc. starts to come in.


RedHat has a production Ansible script for OpenShift: https://github.com/openshift/openshift-ansible most of which is Kubernetes setup.

It's amazing how much actually goes on here: firewall rules, certificates, Docker storage configuration, etc. It's definitely not something that you can just throw in a VM and assume everything will work.


There is also a chef version a colleague of mine wrote:

https://github.com/IshentRas/cookbook-openshift3


Ansible/Puppet/Chef is the way to go. A single line of DSL is worth 1000 lines of text on a blog, IMHO.
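For example, the sort of one-liner being alluded to (the inventory file and group name here are made up for illustration):

    ansible k8s-nodes -i inventory.ini --become \
      -m apt -a "name=docker.io state=present"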


CFEngine, too! :) I'm still in business consulting and training on CFEngine. Although I'm branching out into Git as that has a wider user base.


Don't get me started on setting up production HDFS and Hadoop; that is a nightmare. It is comparatively well documented, with lots of people running it! But as soon as you get into the weeds with Kerberos and HA mode, forget about the documentation explaining anything properly. Cargo-culting from random blog posts, reading the source, and just playing around with config files is the name of the game. There are some weird interactions between KDC settings and some of the daemons that are not documented at all.


I'm happy to see marcoceppi mentioning juju here - I'm one of the enablers of juju big data.

We've worked really hard to make it simple to stand up Hadoop on clouds, containers, and metal (https://jujucharms.com/hadoop-processing/). Juju brings the modeling, Bigtop brings the core apps. Scaling, observing, and integrating are old news; HA is landing now; your post and others like it have put security on our radar for what's next.

Having read a few KDC/HDFS stories, I think I'm going to miss the days when dfs.permissions.enabled was good enough ;)


Spot on! Once you get everything running you're so exhausted that documenting it by writing a blog post is not on your mind; sleeping for a week is...


Oh God yes, this nonsense is most of my life - Kerberos, AD, and my favourite uncovered area of enterprise integration, storage. NFSv4 plus Kerberos, anyone?


This has been my experience with Spark too! On one hand, it's exciting working on new things that change so fast that the "best way" isn't common knowledge yet. On the other hand, having to grep through source to find out what a config option really does is just painful.


Have you tried juju to set up things like HDFS and Hadoop and the things that plug into it?


I think that not having a good production setup for an open source project is a combination of a couple of things:

1. Documentation is not satisfying work, maybe because it isn't as absolute as code?

2. Contributing documentation doesn't get you as much recognition as code

3. If you set it up differently yourself there is no need to maintain a fork (unlike code changes)

4. For open source projects that are company-backed there is a perverse incentive to keep the documentation vague if they only make money through support


I just used `./cluster/kube-up.sh` to set up my cluster on AWS. I am now wondering what's missing for a production setup. It seems to be working OK so far (though I just have 3 minions and a few pods). One thing I wish I knew how to do is how to safely upgrade the cluster without re-creating it from scratch. Care to elaborate a bit?
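(For reference, by "used `./cluster/kube-up.sh`" I mean roughly the documented provider switch plus the script; the node count is just illustrative:

    export KUBERNETES_PROVIDER=aws
    export NUM_NODES=3
    ./cluster/kube-up.sh
)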


The problem is that it's not declarative. You can't tweak the config and run it again to converge.

Kubeup is designed to run once, unlike systems such as Puppet and Terraform that declaratively set up the world to fit your specification.

Kubeup also does a lot of mysterious stuff. By using it, you don't have a clear idea of which pieces have been set up and how they slot into each other. It is, in short, opaque and magical.
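To make the contrast concrete in day-to-day terms:

    # Declarative tools converge on a desired state, so re-running is safe:
    terraform plan    # show what would change against what exists
    terraform apply   # create or modify only what differs
    # kube-up.sh is a run-once imperative script; re-running it does not
    # reconcile an existing cluster with a tweaked config:
    ./cluster/kube-up.sh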

For comparison, I set up Kubernetes with Salt on AWS. It was, by all means, "the hard way", and took me a few days to get running and a couple of weeks to run completely correctly (a lot of stuff, like kubeconfig and TLS behaviour, is still undocumented), but as a byproduct I now have the entire setup in a reproducible, self-documenting, version-controlled config.


Have you by chance open sourced your setup? I started going down this route with terraform, but ended up stopping and just using the kube-up script due to time constraints.

However, now that I have a cluster up and running, I can take the time to build a parallel cluster with more understanding, and migrate the services to it.

I found a terraform example, but it declares itself out of date, and looked more complicated than I thought it should be... that was just a gut feel though.

I have not used Salt, but I always like to learn new things, especially if they make my life easier.

I'd be very interested in checking out your setup, and any lessons learned you have to share.

Thanks in advance



This project looks very interesting, and once I'm more comfortable at a lower level of abstraction, I may use this.


I haven't, but I would be happy to.

I just need to generalize it a little bit. Email me and I will send you a link once it's done, sometime next week (I'm on vacation).


Hi, I would just like to second the request for an open-source, production-ready implementation. They seem to be rarely shared, which is a big shame.

I've been playing with https://github.com/kz8s/tack lately, but your implementation sounds even more comprehensive.


I used this as reference - https://github.com/samsung-cnct/kraken

Terraform + Ansible for K8S on CoreOS


Thanks for the tip. I will take a closer look at this.


Oh yeah I agree, that's annoying. I considered using one of the few projects that attempt to solve this problem but decided to stick with `kube-up.sh` because, as a beginner, I'd have a hard time telling which pieces belong to Kubernetes vs the third-party tool. I also don't have time to become a Kubernetes expert because I'm crazy busy developing. Hopefully, Kubernetes will eventually replace `kube-up.sh` with something better based on Salt/CloudFormation/Terraform/etc.


Watch this tool called kops - https://github.com/kubernetes/kops

Official upcoming replacement for kube-up


Kube-up actually uses Salt for some of the setup. But it's a big mess, since it needs to support a lot of platforms, Linuxes and cloud providers.

I don't know what the future of Kubernetes setup is, exactly, but right now it's quite safe to settle on Salt, Puppet, Ansible or Terraform. I haven't used Terraform, so I don't know how suitable it is to OS-level setup (things that the aforementioned tools are good at), as opposed to orchestration.


When you click the GKE (hosted Kubernetes) button on Google Cloud Platform, it's those very Salt configs that set up your nodes (and once upon a time they set up your control plane too).

Basically what's arguably the best publicly available Kubernetes setup in the world is hiding in that Salt codebase, and EVERY would-be Kubernetes admin should look at it before venturing on their own.


I used it as inspiration for my setup. But you also need a bunch of other stuff, like the CA setup and Kubelet cert generation, which are buried in the whole kube-up structure.


Yeah, agreed. CoreOS + cloud-init could remove the necessity of doing OS-level setup.


There's always something to do at the OS-level, which is why cloud-init configurations tend to spiral out of control with in-line scripts, configs, and binary downloads.

There's no getting away from configuration management and software installation at SOME level of your stack, and setting up a substrate for Kubernetes is no exception.


Nah, who needs all these version-controlled configs?

Just look how google runs stuff! https://cloud.google.com/compute/docs/tutorials/setup-joomla


If you're looking for something that is a little more flexible for deploying Kubernetes, I recommend either KOPS[1] or kube-aws[2]. kube-aws is tethered to AWS but is much more flexible than the standard kube-up.sh script. KOPS does the heaviest lifting of the tools I've found for deploying Kubernetes. It's short for Kubernetes Ops and (I believe) it can even generate Terraform configs so you can get upgrades without re-creating everything.

[1] https://github.com/kubernetes/kops [2] https://github.com/coreos/coreos-kubernetes/tree/master/mult...
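The Terraform generation in kops looks roughly like this (flags from memory, so check them against the current docs before relying on this):

    kops create cluster \
      --name=k8s.example.com \
      --state=s3://my-kops-state-bucket \
      --zones=us-east-1a \
      --target=terraform --out=./tf-out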


KOPS is pretty awesome in that:

* Actually pretty much works for what's in scope..

* It's got some nice configuration options that are discoverable and not hidden away in envars...

* Some good prelim docs explaining how kubernetes is bootstrapped

* Cluster management seems to function properly

* Updating/upgrading

What's missing IMHO(from an AWS user's standpoint including kops and k8s):

* SUPER unapproachable codebase ATM for KOPS and friends

* More flexible cluster DNS naming so we can leverage real wildcard certs across dev environments

* Running kubernetes in private networks

* Passing in existing networks created through other tools (Terraform, CloudFormation, custom, etc.)

* Responsibility for stuff seems spread out across projects, and it's unclear which responsibility lies where (also making it unapproachable for contributions)

* AWS controllers that don't seem to fully leverage the AWS APIs (traffic balanced to all nodes and then proxied via kube-proxy; no autoscale lifecycle event hooks)

* Unclear situation on the status of ingress controllers; are they even in use now or is it all the old way?!

* No audit trails

* IAM roles for pods

* Stuff I'm probably missing

It's very frustrating TBH. On one hand, AWS ECS now has IAM roles for containers, support for the new Application Load Balancer, and private subnet support. On the other hand, they DON'T have pet sets, automatic EBS volume mounting (WTF), a secrets store, a configuration API, etc. Also frustrating is that I feel the barrier to contribute is too high ATM even though I have the skills necessary.

It's SO close though. If I can get private, existing subnet support I can probably start running auto provisioned clusters that are of use for some of our ancillary services in production. From there I might be able to help contribute to KOPS and AWS controllers. Right now it looks like there is just this one guy doing most of the work on AWS and KOPS; probably quite overloaded.


IAM roles for pods: https://github.com/jtblin/kube2iam

Running kubernetes in private networks: you could probably get private subnet support by

- Deploying manually, or deploying with a script and then changing things in AWS (route tables, public IPs, etc.) to be private manually afterwards (both cumbersome but possible)

- Using NodePort instead of LoadBalancer on any services
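For the NodePort route, the minimal version is just (deployment/service names illustrative):

    kubectl expose deployment my-app --type=NodePort --port=80
    kubectl describe service my-app   # shows the node port allocated from 30000-32767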


(Currently work in progress, but working. Some of these statements are forward-looking.)

Lol


That is perfectly fine if it suits your use case! I have to deal with industry certifications, and unfortunately using an ad-hoc certificate authority is not an option, nor is running on insecure ports.

Also, I was setting it up on CoreOS and bare-metal servers. It should be possible to run pods in Google Container Engine or similar very easily, but would there be any fun in that?


The CoreOS team has worked a ton to improve the baremetal installation experience for Kubernetes. You can read more about it here:

https://coreos.com/kubernetes/docs/latest/kubernetes-on-bare...

And the installation flow that builds on top of that for Tectonic:

https://tectonic.com/blog/tectonic-1-3-release.html


Right, no industry certification to follow here and pretty loose availability requirements. You just had me worried for a minute that everything would suddenly grind to a halt or that there were glaring security holes! But my "production" requirements are definitely not as strong as yours.


If you move etcd to a separate t2.nano then upgrading is easy. The only stateful part is etcd.


Have you tried shooting a node in the head and seeing what happens? Always a good exercise to run. Run a few disaster recovery exercises and see if you can get it back. I recommend doing that on non-production of course!
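A gentler variant of the same exercise, before you actually pull the plug (assuming kubectl access to the cluster):

    kubectl cordon <node>                      # stop scheduling onto it
    kubectl drain <node> --ignore-daemonsets   # evict pods, watch them reschedule
    kubectl uncordon <node>                    # put it back when you're done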


Thanks for the tip. I did yesterday actually by manually shutting down the node from SSH (sudo shutdown). It seemed to "just work" without having to do anything else. There might have been a tiny period of unavailability to one of my services but not enough for me to notice. Luckily, I don't have crazy high availability requirements yet.


I set up a K8S cluster from scratch using the CoreOS tutorials and several other articles (like Kubernetes From the Ground Up series).

What's missing is this:

1. Architecture for your specific needs. This is a design process and not easily condensed into a tutorial. It still requires critical thinking on the part of the designer.

2. How the different components fit together and why they matter.

For (1) to be commoditized, there needs to be a sufficient number of installs where people try different things and come up with best practices that the community discovers. There are not enough installs yet for that to take place.

For example, I brought up a production cluster on AWS. I also had to decide how this was all going to interact with AWS VPCs and availability zones. How do I get AWS ELB to talk to the cluster? The automated scripts are only the starting point because they assume a certain setup, and I wanted to know what the consequences of those assumptions are. This is where the consultants and systems design come in.

On the other hand, Kelsey Hightower probably has a lot of that knowledge in his head. By getting it out there, I think more people will try this, understand the principles, and collectively we'll start seeing these best practices emerge. Maybe Hightower will eventually write a book.

In the meantime, if you want to know how to design and deploy a custom setup, you do need to know the building blocks, how they are put together, and how you can compose them in a way for specific use-cases. It's no different than choosing a framework, like Rails or Phoenix, and then learning how to compose things that the framework offers you in order to do what you need to do. You get that knowledge from playing with it and experimenting.

Having said all of that, while I'm glad I do have a good foundation for Kubernetes, if I want to use it in production, I'm probably just going to use GKE.



OpenStack has an attempt at Ansible playbooks for production deploys: https://github.com/openstack/openstack-ansible

I have no idea how effective they are, but it's something to consider when setting up a deployment from scratch.


Just saw Kelsey give a talk at Abstractions about more advanced patterns in Kubernetes and he mentioned this repo. Looks like a fantastic tutorial and his talk was very informative.

Highly recommend watching the video when it's released if you weren't able to attend.


He gave a great live demo at CodeConf about using Kubernetes for 12-factor apps[1] that I highly recommend as well. The video for the talk isn't up yet, but the code he used is.[2]

[1] https://12factor.net/

[2] https://github.com/kelseyhightower/app


We were in the same room. Yes, good talk.


Same! Actually I asked the question that led him to point me to this repo :)

We had a conversation later in the hotel lobby where he made a great analogy: running this stuff yourself is going to be like running your own mail server. It's really nice to know how to do it, but at the end of the day, unless you are a very large organization, you're most likely going to use a hosted service.

Personally, I'm going to go through with setting up a test kubernetes cluster just so I know what it's made of. Then if I think it's great, maybe .. just maybe I'll give Google's hosted solution a try with a small project to start.


I feel similarly to Kelsey. I also plan on setting up a test cluster to learn the ins and outs and seeing if it's something that might fit in at work for our needs.

For personal stuff GKE looks really nice.


This is a great starting point. We've been running Kubernetes in production alongside OpenStack for a while and have charmed up the deployment: https://jujucharms.com/kubernetes-core. The majority of the information here (and more) seems to already be encapsulated: `juju deploy kubernetes-core`. Since we need things like logging and monitoring, we bolted the Elastic stack on the side and called it observable kubernetes: `juju deploy observable-kubernetes`.
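Scaling out afterwards is the same kind of one-liner (assuming the bundle's default application names, e.g. kubernetes-worker):

    juju deploy kubernetes-core
    juju add-unit kubernetes-worker -n 2   # grow the worker pool
    juju status                            # watch the units settle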

While one-liners are typically pretty limited, the charms come with quite a few knobs to help tweak for deployments.

http://www.jorgecastro.org/2016/07/29/ubuntu-kubernetes-v1-d...

There's still room to improve, but we've been happy with the cluster so far, especially considering Juju and the charms are open source. Either way, it's a great guide for those getting started.


The article mentioned Google Container Engine as one of the 'easy ways' but it didn't mention Rancher http://rancher.com/ - This is not quite as easy as GCE but I found it pretty easy.


I use Rancher in prod. I absolutely love it. I don't use Kubernetes in prod though. But if you want to run Kubernetes through Rancher, it is pretty easy.


So, I see something like this, assume they mean "from scratch", read a little way down the README and it says "The following labs assume you have a working Google Cloud Platform account and a recent version of the Google Cloud SDK (116.0.0+) installed."

The irony.


Here is Kubernetes the easy, production-ready way: https://github.com/kz8s/tack

I've spun up a few clusters using this, and I absolutely prefer learning this way. I love to get something running before I dive in and look at all the pieces and individual options. I also really love "convention over configuration". I want to study a production-ready reference implementation that simply works, with sane defaults.

I've spent a lot of time banging my head against the wall while I try to follow some complicated tutorial that doesn't work with the latest versions. Maybe this approach works for others, but it's not for me. I like to stand up a cluster, deploy a database and a real application, try to scale it, set up some test DNS records, do some load testing. Figure out the pain points and learn as I go. If something just works perfectly fine behind the scenes, then I probably don't need to learn that much about it (or I have enough experience that I already know what it does and how it works.)

This might not be a suitable learning style for a beginner, but I think I have enough experience working and experimenting with AWS, Saltstack, Chef, Puppet, Ansible, OpenStack, Deis, Flynn, Terraform, Docker, CoreOS, etc. etc. So at this point, I just prefer to evaluate new technologies by spinning them up and diving straight in.


For Kubernetes the easy way checkout this blog post on the work going on upstream for "self-hosted" Kubernetes: https://coreos.com/blog/self-hosted-kubernetes.html


It would be nice if all the Kubernetes tutorials didn't assume you're using GCE.


Exactly. I expected "hard way" to be using your local box without any external cloud provided service.


Kelsey is a national treasure. Kubernetes is getting pretty close to being ready, if it can avoid the same fate as OpenStack and the like. Interesting times.


What's nice about this is it seems to teach you without taking a GKE-first approach, like some of the other tutorials out there.


Which is doubly awesome coming from a google employee. It shows Kelsey really wants to teach k8s to everyone.


Looked at the documentation and quickly gave up on Kubernetes. It's nice and it solves real problems, but the barrier to entry is INSANE.

And it's lacking way too much documentation for deploying in production on your own cluster. It's probably gonna take years to improve.

I'm wondering: has anyone tried Kubernetes on GCE?

If Google can handle all the annoying setup and make it a mostly managed service, that would be extremely attractive. Actually, that's probably the only way k8s would be achievable in production, i.e. have someone else do it.



This seems interesting. I have to say I have heard a lot of people talking about Kubernetes but few actually using it in production.

For your CI woes there is a system that isn't hard to set up and is actually really easy to use: Mesos and Marathon with Weave (without the plugin, for now) and Docker.

Your biggest challenge is learning ZooKeeper, but really, if you're dealing with large-scale deployments, you're probably already using it or something like it.

There are Puppet/Chef/Ansible modules for installing and configuring Mesos and ZooKeeper.

Toss in Gluster as a storage driver in Docker and you're pretty much ready to go for most types of application deployments using Docker.

Heck, there's even Kubernetes integration if you're really hung up on it ;)



