
I enjoyed reading this post, but came away with more questions.

They talk about providing a service for developers to deploy stuff easily, but I'm wondering how that works in practice.

I have very little experience with Kubernetes, but the author mentions they run 'mini' Kubernetes clusters at each restaurant. Does that mean they have to deploy software/container updates to each restaurant one by one? Or is it abstracted above that level, where they can see ALL of the restaurants as one "big" cluster?




Hey! Caleb here... I'm the lead SRE building the clusters in the restaurants.

Each individual restaurant gets its own cluster (we aren't fully deployed yet). There are too many network latency challenges and too much immaturity around federated clusters to take that route (unfortunately).

We currently use gitops, which houses all the configs per cluster in a single repo per restaurant (CRAZY amount of configs!)... we call those repos Atlas, and we made a little pod called Vessel that polls and applies the configs in those repos.
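In rough terms the loop is tiny. A simplified sketch of a Vessel-style poller (not the real code; the path and interval are made up):

    # Simplified sketch of a Vessel-style reconcile loop: poll the cluster's
    # Atlas repo, pull any new commits, and re-apply the manifests.
    import subprocess
    import time

    REPO_DIR = "/var/lib/atlas"   # made-up local checkout of this cluster's Atlas repo
    POLL_SECONDS = 60             # made-up polling interval

    def run(*args):
        return subprocess.run(args, check=True, capture_output=True, text=True)

    def current_commit():
        return run("git", "-C", REPO_DIR, "rev-parse", "HEAD").stdout.strip()

    while True:
        before = current_commit()
        run("git", "-C", REPO_DIR, "pull", "--ff-only")
        if current_commit() != before:
            # Apply the whole repo so the cluster converges on the declared state.
            run("kubectl", "apply", "-R", "-f", REPO_DIR)
        time.sleep(POLL_SECONDS)

Being pull-based like this also means each restaurant only ever needs outbound connectivity to the git host, not an inbound path from some central deploy system.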

We're almost done building something called Fleet that will generate and manage those repos (Atlas) at scale, i.e., a UI where we can say "send this version to 10% of restaurants" and it will regenerate and deploy those configs to all the appropriate Atlas repos, which will then get pulled down by Vessel.
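To make that idea concrete, here's a stripped-down sketch (not our actual code; the image name, file names, and repo layout are all made up): pick a deterministic slice of the Atlas repos and regenerate a manifest in each with the new tag, then let Vessel pull it down.

    # Stripped-down sketch of the Fleet idea: pick a deterministic slice of the
    # restaurant Atlas repos and regenerate a deployment manifest in each with
    # the new image tag; Vessel in each cluster then pulls and applies it.
    import hashlib
    import pathlib
    import subprocess

    ATLAS_ROOT = pathlib.Path("/srv/atlas")     # made up: one checked-out Atlas repo per restaurant
    IMAGE = "registry.example.com/pos-service"  # made-up image name

    TEMPLATE = """apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pos-service
    spec:
      replicas: 1
      selector:
        matchLabels: {app: pos-service}
      template:
        metadata:
          labels: {app: pos-service}
        spec:
          containers:
          - name: pos-service
            image: %s:%s
    """

    def in_canary(repo_name, percent):
        # Deterministic bucketing so the same restaurants land in the canary group each run.
        return int(hashlib.sha256(repo_name.encode()).hexdigest(), 16) % 100 < percent

    def roll_out(tag, percent):
        for repo in sorted(ATLAS_ROOT.iterdir()):
            if not in_canary(repo.name, percent):
                continue
            (repo / "pos-service-deployment.yaml").write_text(TEMPLATE % (IMAGE, tag))
            subprocess.run(["git", "-C", str(repo), "add", "-A"], check=True)
            subprocess.run(["git", "-C", str(repo), "commit", "-m", "pos-service " + tag], check=True)
            subprocess.run(["git", "-C", str(repo), "push"], check=True)

    roll_out("v1.4.2", percent=10)   # "send this version to 10% of restaurants"

In a sketch like this, going to 100% is just a re-run with a bigger percentage, and a rollback is regenerating with the old tag.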

We tried doing all this with Helm but failed miserably. Maybe it was us? But templating vs gitops... the choice seemed obvious.


Wouldn't Spinnaker be suitable for something like this?


Caleb, we’re Armory, W17. This does indeed sound like an interesting use case for Spinnaker, as Jacques said. Happy to chat with you about it. DROdio@Armory.io (we are commercializing Spinnaker — http://www.Armory.io )


Your blog post turned up in slack today (congrats). My particular flavour is Concourse but I nodded all the way through.


What were the main issues you faced with Helm?


The primary challenge was just reasoning about the templates and using Helm at scale... i.e., what exactly did we deploy on those hundreds of varying clusters?

Other issues included: Tiller would sometimes become unstable... version mismatch issues between the local Helm client and Tiller... a lack of a clear, outage-free canary deployment... we even found cases where Helm would not clean up after itself during a deployment and would retain previous config settings within k8s.


For me, Helm caused more problems than it solved. Pulling in packages always seems good, but as soon as you want customization, you're back to merging in the (relatively) straightforward yaml files from the chart. Also, instead of Helm's templates (which get crazy complex and unintuitive), a simple tool like Kustomize[0] is very straightforward and allows per-environment configuration. Finally, Tiller pods do present (yet another) security risk for Kubernetes.

[0] https://github.com/kubernetes-sigs/kustomize


We should have a Medium post about this soon, but we built a set of tools called Fleet, Atlas, and Vessel. The simple premise is that Fleet generates code for 1..n clusters; by code I just mean yaml files. Atlas is a git repo per cluster that holds the declarative state for that cluster; at any time it should be able to be applied to the cluster and result in the ideal state. And then Vessel runs in each cluster and does a (I am oversimplifying a bit) git pull and kubectl apply -f.

We can use these tools to do canary deployments (generate x percent this way, y percent that way in ingress rules) across a wide range of clusters. The goal here is really to minimize configuration drift and keep a large number of fairly uniform (but not always identical) clusters in order.
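If you just want to spot-check drift from the outside, kubectl can do most of the work. Something like this (only an illustration, not necessarily how Vessel reports it) exits 0 when the live cluster matches the repo and 1 when something has drifted:

    # Simple drift check: `kubectl diff` exits 0 when the live objects match
    # the declared manifests, 1 when they differ, and >1 on error.
    import subprocess
    import sys

    REPO_DIR = "/var/lib/atlas"   # made-up local checkout of the cluster's Atlas repo

    result = subprocess.run(
        ["kubectl", "diff", "-R", "-f", REPO_DIR],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print("cluster matches declared state")
    elif result.returncode == 1:
        print("drift detected:")
        print(result.stdout)
    else:
        sys.exit(result.stderr or result.returncode)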




