Hacker News
Kubernetes 1.3 released (github.com/kubernetes)
185 points by nkvoll on July 2, 2016 | 92 comments



I'm really liking Kubernetes — we're in the process of migrating to it.

If there's one area that is in dire need of improvement, though, it's the documentation. If you look around, there is essentially no documentation that starts from first principles and goes through the different components (their lifecycle, dependencies, requirements and so on) one by one, irrespective of the cloud environment. There is a "Kubernetes from scratch" [1] document, but it's just a bunch of loose fragments that lacks almost all the necessary detail, and has too many dependencies. (Tip: have the user install from source, and leave out images, cloud providers and other things that obscure how everything actually works.)

Almost all of the documentation assumes you're running kube-up or some other automated setup, which is of course convenient, but hides a huge amount of magic in a bunch of shell scripts, Salt config and so on that prevents true understanding. If you run it for, say, AWS, you'll end up with a configuration that you don't understand. It doesn't help that much of the official documentation is heavily skewed towards GCE/GKE, where certain things have a level of automatic magic that you won't benefit from when you run on bare metal, for example. kube-up will help someone get a cluster up and running fast, but it does not help someone who needs to maintain it in a careful, controlled manner.

Right now, I have a working cluster, but getting there involved a bunch of trial and error, a lot of open browser tabs, source code reading, and so on. (Quick, what version of Docker does Kubernetes want? Kubernetes doesn't seem to tell us, and it doesn't even verify it on startup. One of the reefs I ran aground on was that Docker 1.11 didn't work, and I had to revert to 1.9, based on a random GitHub issue I found.)

[1] http://kubernetes.io/docs/getting-started-guides/scratch/


I can't agree with this enough. We are all on AWS, and the level of effort it would take to migrate to Kubernetes while maintaining our ability to spin up complete ad-hoc environments on the fly (which also serves as continual DR testing) seems too much to justify at this point. Also, I can't come out the other side with just one or two people understanding, or having any hope of understanding, how everything works :|

Likely, if I had to choose today or this quarter, we would go the Empire route and build on top of ECS. Though our model and requirements are a bit different, so we'd have to heavily modify it or roll our own.


One thing I would say is that — because of the aforementioned documentation mess — it seems more daunting than it actually is. And the documentation does make it seem like a lot of work.

All you need to do, in broad strokes, is:

* Set up a VPC. Defaults work.

* Create an AWS instance. Make sure it has a dedicated IAM role that has a policy like this [1], so that it can do things like create ELBs.

* Install Kubernetes from binary packages. I've been using Kismatic's Debian/Ubuntu packages [2], which are nice.

* Install Docker >= 1.9 < 1.10 (apparently).

* Install etcd.

* Make sure your AWS instance has a sane MTU ("sudo ifconfig eth0 mtu 1500"). AWS uses jumbo frames by default [3], which I found does not work with Docker Hub (even though it's also on AWS).

* Edit /etc/default/docker to disable its iptables magic and use the Kubernetes bridge, which Kubelet will eventually create for you on startup:

   DOCKER_OPTS="--iptables=false --ip-masq=false --bridge=cbr0"
* Decide which CIDR ranges to use for pods and services. You can carve a /24 from your VPC subnet for each. They have to be non-overlapping ranges.

* Edit the /etc/default/kube* configs to set DAEMON_ARGS in each. Read the help page for each daemon to see what flags they take. Most have sane defaults or are ignorable, but you'll need some specific ones [4]. (A rough sketch follows after this list.)

* Start etcd, Docker and all the Kubernetes daemons.

* Verify it's working with something like: kubectl run test --image=dockercloud/hello-world

Unless I'm forgetting something, that's basically it for one master node. For multiple nodes, you'll have to run Kubelet on each. You can run as many masters (kube-apiserver) as you want, and they'll use etcd leases to ensure that only one is active.
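
To make the DAEMON_ARGS step concrete, here is a rough, purely illustrative sketch of what the /etc/default/kube* files can end up looking like on a single combined master/node. Every address, CIDR and name below is a placeholder, and you should cross-check each flag against the daemon's own --help output rather than trusting this verbatim:

    # Illustrative only -- substitute your own addresses, CIDRs and cluster name.
    # /etc/default/kube-apiserver
    DAEMON_ARGS="--etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.1.0.0/24 --cloud-provider=aws"
    # /etc/default/kube-controller-manager
    DAEMON_ARGS="--master=http://127.0.0.1:8080 --cloud-provider=aws --allocate-node-cidrs=true --cluster-cidr=10.0.0.0/16 --cluster-name=mycluster"
    # /etc/default/kube-scheduler
    DAEMON_ARGS="--master=http://127.0.0.1:8080"
    # /etc/default/kubelet
    DAEMON_ARGS="--api-servers=http://127.0.0.1:8080 --cloud-provider=aws --configure-cbr0=true"
    # /etc/default/kube-proxy
    DAEMON_ARGS="--master=http://127.0.0.1:8080"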

[1] https://gist.github.com/atombender/3f9ba857590ea98d18163e983...

[2] http://repos.kismatic.com/debian/

[3] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_m...

[4] https://gist.github.com/atombender/e72c2acc2d30b0965543273a2...


You're making things really hard on yourself. Boot your nodes with CoreOS and it provides almost everything you need (except Kubernetes itself) out of the box. It all works really well together and you get automatic updates, too. I can't imagine trying to run the cluster we run on Ubuntu, trying to roll my own Docker/etcd/flannel installs.


I'm sure CoreOS is nice, but we're currently on Ubuntu, and I'm trying to reduce the number of unknown factors and new technologies that we're bringing into the mix. Ubuntu is not the issue here. (FWIW, you don't need Flannel on AWS.)


Can you expand a bit on why you don't need flannel on AWS? We're currently deploying a k8s cluster and I surely went the flannel route (following the steps of CoreOS guide to k8s) but it'd be nice to remove that setup from our deployment if possible.


AWS has VPCs, allowing you to get a practically unlimited number of private subnets.

In some cloud environments (e.g. DigitalOcean), there's no private subnet shared between hosts, so Kubernetes can't just hand out unique IPs to pods and services. So you need something like Flannel, which can set up an overlay network using either UDP encapsulation or VXLAN.

Flannel also has a backend for AWS, but all it does is update the routing table for your VPC. Which can be useful, but can also be accomplished without Flannel. It's also limited to about 50 nodes [1] and only one subnet, as far as I know. I don't see the point of using it myself.

[1] https://github.com/coreos/flannel/issues/164


Could you say how you arrange that the addresses you pick for your pods do not clash with the addresses AWS picks for instances?


Kubernetes does this for you. For example, if your VPC subnet is 172.16.0.0/16, you can tell K8s to use 10.0.0.0/16.

AWS won't know this IP range and won't route it. So K8s automatically populates your routing table with the routes every time a node changes or is added/removed.

K8s will give a /24 CIDR to each minion host, so the first will get 10.0.1.0/24, the next 10.0.2.0/24, and so on. Pods on the first node will then get 10.0.1.1, 10.0.1.2, etc.

Obviously having an additional IP/interface per box adds complexity, but I don't know if K8s supports any other automatic mode of operation on AWS.

(Note: Kubernetes expects AWS objects that it can control — security groups, instances, etc. — to be tagged with KubernetesCluster=<your cluster name>. This also applies to the routing table.)
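
For instance, tagging the route table with the standard AWS CLI would look something like this (the route table ID and cluster name below are placeholders):

    # Placeholder IDs -- tag any AWS resources Kubernetes should manage
    aws ec2 create-tags \
      --resources rtb-01234567 \
      --tags Key=KubernetesCluster,Value=mycluster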


OK, I see this is the same as what Flannel does in its aws-vpc backend, but I thought you were saying you could do better. Maybe I mis-parsed what you said.

If you're adding a routing rule for every minion then you will also hit the 50 limit in AWS routing tables.


Sorry about the confusion — yes, absolutely. One option is to ask AWS to increase it.

Flannel is just one of many different options if you need to go beyond 50 nodes. It seems some people use Flannel to create an overlay network, but this isn't necessary. You can use the host-gw mode to accomplish the same thing as Kubernetes' default routing-table-updating behaviour, but with routing tables maintained on each node instead.
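
If you do end up using Flannel, host-gw is just a backend setting in the network config that Flannel reads from etcd. Something along these lines, where the CIDR is a placeholder and /coreos.com/network is Flannel's default key prefix:

    # Flannel reads its config from etcd (default prefix /coreos.com/network)
    etcdctl set /coreos.com/network/config \
      '{"Network": "10.0.0.0/16", "Backend": {"Type": "host-gw"}}'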


Forgot to say: Kubernetes will keep the routing table up to date if you use --allocate-node-cidrs=true. That way, it does exactly the same thing as Flannel with the "aws-vpc" backend.


Awesome, I'll take a look into all that! This doesn't look too bad. Do you know if you can combine the masters/minions? Our environments are VPC-isolated, and we support ad-hoc creation, so I'd like to keep server count requirements to a bare minimum. The current from-scratch guide says it is not necessary to make the distinction between master nodes and normal nodes, and the API server, controller manager, etc. appear to be hosted as pods. This makes me happy and makes sense, but then you have something like this, which has me confused: https://github.com/kubernetes/kubernetes/issues/23174 .

On a side note, it's pretty awesome how Docker embedded the key-value store into the main binary. Appears to reduce complexity quite a bit.


You can run them on the same box just fine. There's nothing magical about any of those processes.

However, using dedicated masters (by which I mean mostly kube-apiserver) separate from worker nodes is a good idea to avoid high load impacting the API access.

(Just keep in mind that the Kismatic packages I referred to won't support this — you can't install kubernetes-master and kubernetes-node at the same time. But as you discovered, you can run everything except kubelet as pods. On the other hand, kube-apiserver needs a whole bunch of mounts as well as host networking, so to me it seems like you don't gain all that much.)

What is this Docker key-value store you mention?


https://blog.docker.com/2016/06/docker-1-12-built-in-orchest...

They are using a Raft-based store inside the engine now, so there is no external etcd dependency. IIRC they are using etcd's Raft implementation.


Interesting, thanks. Personally, I think Docker is already too monolithic, and this just looks like it adds unnecessary coupling to something that should be less coupled in the first place. I'd prefer to use etcd.

I think rkt is making some good decisions and is worth keeping an eye on. I'm not sure I love the tight coupling to systemd, but the fact that it avoids the PID 1 problem and lets containers be their own processes (separate from the "engine", which can choreograph containers through the systemd API, building on all of its process handling logic) is an improvement over Docker. In fact, rkt uses the same networking model as Kubernetes.


Replying to myself: On Ubuntu Xenial you have to start Docker with this additional flag:

    --exec-opt native.cgroupdriver=cgroupfs
Since Xenial uses systemd, there's no longer an /etc/default/docker. Instead, create /etc/systemd/system/docker.service.d/docker.conf with:

      [Service]
      ExecStart=
      ExecStart=/usr/bin/docker daemon --exec-opt native.cgroupdriver=cgroupfs --iptables=false --bridge=cbr0 --ip-masq=false
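
After creating that drop-in, reload systemd and restart Docker so the override takes effect:

    sudo systemctl daemon-reload
    sudo systemctl restart docker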


Quick question - if you're using AWS (or GCP or Azure), was there a reason that:

  ./kube-up.sh
Didn't work for you?

Disclosure: I work at Google on Kubernetes


Did you read my earlier comment (https://news.ycombinator.com/item?id=12024148)?

In short, I want and need to understand how it's put together so that I can use it.

There was someone on the #kubernetes-novices Slack today [1] who described his approach as: run kube-up, then try to deconstruct everything that kube-up did into a repeatable recipe. I went the other route, trying to understand what kube-up does and replicating it. I'm still working through things I missed or did wrong.

To be honest, I think Google's approach here is wrong. Kubernetes is being developed at a frenetic pace, but documentation is not being maintained (it's pretty lacking even if you're on GCP!), and users are understandably frustrated with the obscurity of the whole thing. It works, but it takes weeks to gather enough of an understanding of the system, and that's entirely due to lack of documentation.

The documentation is lacking at both a high and a low level. At no point does the documentation offer a big-picture view of how everything works together, nor does it offer low-level descriptions of the stack.

I also think the strong focus on kube-up is a mistake, given the lack of docs. I'm sure it works great, but it's not an option for production use, in my opinion. Terraform would have been better here. You're also using Salt — honestly, it would have been so much cooler if kube-up could just take a few inputs ("what cloud?", "what are your credentials?" etc.) and generate a finished Salt config for you, with a separate salt-cloud orchestration config for the provisioning. The current Salt config is a bit of a mess, and not really something you can build on.

Feel free to reach out to me (@atombender) on the Kubernetes Slack if you want to chat.

[1] https://kubernetes.slack.com/archives/kubernetes-novice/p146...


Great feedback! Both Kubernetes Anywhere (https://github.com/kubernetes/kubernetes-anywhere) and our documentation efforts (https://github.com/kubernetes/kubernetes.github.io) are very much in flight - and they're both coming by 1.4 (~90 days).


Really amazed by their great work. I'm looking forward to upgrading the setup at my company. Since we started using Kubernetes, we've reduced our bill to 30% of what it was, and it has made everything easier and scalable, just as if we were using the costly Heroku. It's a really useful tech for third-world startups that cannot afford to spend thousands of dollars on infrastructure. I hope I can contribute to this OSS project in the near future.


We've seen similar savings at our company. We have deployed Kube on a 6-node cluster of CoreOS nodes with 512GB each. These are dedicated servers hosted at Rackspace. We're about 30-40% utilized on RAM and maybe 15-20% on CPU. To host a similar set of services on our older Openstack environment would require at least 2-3x the number of servers. The cost savings isn't even the best part. Kubernetes has allowed us to build a completely self-service pipeline for our devs and has taken the ops team out of day-to-day app management. The nodes update themselves with the latest OS and Kube shifts the workload around as they do. This infrastructure is faster, more nimble, more cost-effective and so much easier to run.

This is the best infrastructure I've ever used in twenty years of doing ops and leading ops teams.


You folks don't know how much it means to us to hear that people are finding success with Kubernetes. Thanks for using it. We'll try to keep pushing the envelope.


Thanks, Tim. Y'all have been awesome. Thanks for the quick response to GH issues and Slack questions. I hope that we can speak at a conference someday and tell the world about how much more fun and easy Kubernetes made our jobs.


so true, the self-service aspect is indeed amazing. once developers or QA people grok the concepts and API, it can do wonders for your productivity.

also, working with k8s will probably spoil you, it's pretty annoying to "go back" to other environments, where you're confronted with problems which would be effortlessly solvable in kubernetes.


It completely spoils you. My team got to do an Openstack cluster migration this month and there was definitely some grousing. The workflow of traditional private clouds is just so tedious and flaky. We could grow our Kubernetes cluster 10x without hiring any additional engineers for the ops team.


I can't agree more with others in the thread. This entire comment warms the cockles of my cold dead heart.

I know Silicon Valley folks are infinitely pessimistic and/or grandiose, but this is LITERALLY the reason I got into this job.

Disclosure: I work at Google on Kubernetes


Have you compared the cost of maintaining your own CoreOS infrastructure at RAX just for Kubernetes to using Google's Container Engine? If your services are all containerized and deployed via k8s to begin with, it seems like you wouldn't have much reason to maintain your own infra at that point.


Yes, absolutely. We ran a four-month experiment on GCE: we built an off-site logging cluster (fluentd + Elasticsearch + Kibana). The performance was decent, but the costs of RAM and disk are way higher.

I will tell you that the economics are most definitely not there. This is a common misconception amongst the HN crowd in general--that public cloud infra is cheaper. For small footprints, public cloud makes sense but once you get into the larger footprints (300+ instances), it's far cheaper to lease dedicated hardware or DIY in colocation. We're running on approximately 40 dedicated rackmount servers for Openstack and 6 for Kubernetes. To get the equivalent amount of disk and RAM, we would pay 2-3x at AWS or GCE. We could probably cut our cost by an additional 30% by moving what we have to colo but we would lose some flexibility and would have to take on additional headcount.

From a maintainability standpoint, GCE makes Kubernetes easy which is a good thing if you've never run it before. It's not that hard to run it yourself, though. A senior-level systems engineer will be a Kube master after about two months of use. Just guessing, I think it takes about 1/4 of an engineer-week to support our Kube cluster for a week. I think we could grow our cluster 20x without a significant workload increase for our ops team.

We are in the process of automating the last few manual aspects of our Kubernetes infra: load balancing and monitoring. We're building these in the style that we've built the rest of our pipeline: YAML files in a project repo. Simply drop your Datadog-style monitoring/metrics config and your load balancer spec in your project's Github repo and the deployment pipeline will build out your monitoring, metrics, and LB automatically for you.


I'm curious - did you model reserved instance pricing, or on-demand pricing in your comparison? AWS typically charges ~1/3rd the price for a 3-year reserved instance vs. on demand pricing. This comparison would be much more apples to apples if you are purchasing hardware with CapEx that typically depreciates over 3-5 years.


This is really interesting. Are you planning to open source this pipeline automation? I'd be interested.


This is exciting. I need to update my Django Kubernetes tutorial (https://harishnarayanan.org/writing/kubernetes-django/) with some new constructs that simplify things.


Django is about the perfect use case for a Kubernetes tutorial, because it has enough moving parts that you need to do some trickery, but not 10 levels deep.


This is an excellent tutorial. Thanks!


Yea! The team at CoreOS is really excited about this release and the work that we have done as a community.

If you are interested in some of the things we helped get into this release (RBAC, the rkt container engine, simpler install, and more), see our "preview" blog post from a few weeks ago: https://coreos.com/blog/kubernetes-v1.3-preview.html


The CoreOS team and technologies have been critical to getting Kubernetes going. Thanks, Brandon.


At the risk of sounding like a mutual admiration society: Working with and learning from the folks in the community like Brian Grant, Dawn Chen, Joe Beda, Sarah Novotny, Brendan Burns, Daniel Smith, Mike Danese, Clayton Coleman, Eric Tune, Vish Kannan, David Oppenheimer, yourself Tim, and the hundreds of other folks in the community has been a great experience for me and the rest of the team at CoreOS.

Can't wait to continue the success with v1.4!


I can't imagine running Kubernetes without CoreOS. You guys make everything so easy for us (Revinate). Our systems infra workload for the CoreOS/Kubernetes cluster is a tiny fraction of what we spend on our Openstack gear.


I'm still sitting on the sidelines waiting for the easy-to-install, better-documented-for-AWS version. It's also a bit unclear to me why we are talking federation and master/slave in 2016; other systems are using Raft and gossip protocols to build masterless management clusters.

I'm watching issues like https://github.com/kubernetes/kubernetes/issues/23478 and https://github.com/kubernetes/kubernetes/issues/23174 . I'm not super interested in "kicking the tires"; I'm evaluating replacing all our environment automation with a version built around Kubernetes. Easy-up scripts that hide a ton of nasty complexity won't do the trick.

Following the issues I'm getting the impression that too much effort is being put into CM style tools vs making the underlying components more friendly to setup and manage. Did anyone see how easy it is to get the new Docker orchestration running?

Then there is the AWS integration documentation. I'm following the hidden aws_under_the_hood.md updates, but I'm still left with loads of questions, like: how do I control the created ELB's configuration (cross-zone load balancing, draining, timeouts, etc.)?

I re-evaluate after every update, and there are some really nice features being added, but at the end of the day ECS is looking more and more like the direction to go for us. Sure, it's lacking a ton of features compared to Kubernetes, and it's nigh on impossible to get any sort of information about roadmaps out of Amazon... But it's very clear how it integrates with ELB and how to manage the configuration of every underlying service. It also doesn't require extra resources (service or human) to set up and manage the scheduler.


It's funny how words can be played. The Kubernetes "master" is a set of 1 or more machines that run the API server and associated control logic. This is exactly what systems like Docker swarm do, but they wrap it in terms like Raft and gossip that make people weak in the knees. Kubernetes has Raft in the form of the storage API (etcd). This is a model that has PROVEN to work well, and to scale well beyond what almost anyone will need.

"Federation" in this context is across clusters, which is not something other systems really do much of, yet. You certainly don't want to gossip on this layer.

"evaluating replacing" really does imply "kicking the tires". Put another way - how much energy are you willing to invest in the early stages of your evaluation? If a "real" cluster took 3 person-days to set up, but a "quick" cluster took 10 person-minutes, would you use the quick one for the initial eval? Feedback we have gotten repeatedly was "it is too hard to set up a real cluster when I don't even know if I want it".

There are a bunch of facets of streamlining that we're working on right now, but they are all serving the purposes of reducing initial investment and increasing transparency.

> how easy it is to get the new Docker orchestration running

This is exactly my point above. You don't think that their demos give you a fully operational, secured, optimized cluster with best-perf networking, storage, load-balancing etc, do you? Of course not. It sets up the "kick the tires" cluster.

As for AWS - it is something we will keep working on. We know our docs here are not great. We sure could use help tidying them up and making them better. We're just BURIED in things to do.

Thanks for the feedback, truly.


I would consider "kicking the tires" to be actually running up a cluster and playing with it. One can also evaluate by reading documentation and others' reports of issues, looking for show-stopping problems. For instance, a couple of releases ago there was no multi-AZ support. The word on the street at that time was to create multiple clusters and do higher-level orchestration across them. That was a no-go for us; no need to "kick the tires".

Whatever you may think of my level of knowledge or weak knees for consensus and gossip protocols, these problems (perceived or otherwise) with setup, documentation, and management seem pretty widely reported.

EDIT: I hope this doesn't sound too negative. Kubernetes IS getting better all the time. I only write this to give a perspective from somebody who would like to use Kubernetes but has reason for pause. Our requirements are likely not standard; our internal bar for automation and ease of use is quite high. We essentially have an internal, hand-rolled, Docker-based PaaS with support for ad-hoc environment creation (not just staging/prod). We would like to move away from holding the bag on our hand-rolled stuff and adopt a scheduler :) Deciding to pull the trigger on any scheduler, though, would commit us to a rather large integration effort just to reach parity with our current solution without introducing regressions.


This frustrates me greatly, only because I agree with you so vehemently :-) We have an open issue tracker where real decisions are made, and so engineers will argue about different approaches. Compare to alternatives, where you see demos that are double-acts of good-cop vs good-cop where apparently there are no trade-offs and everything is perfect. It isn't my experience that products where the debates are hidden are better; it is certainly easier to see the compromises when the debates are public.

So: there was a big discussion about whether a single k8s cluster should span multiple AZs (which shipped in 1.2), or whether we should allow the API to target multiple independent clusters (federation, the first version of which is shipping in 1.3). The core of the argument is that multi-zone is simpler for most users, but with only one control plane it is less reliable than a federation of totally independent clusters. Federation also brings other benefits, like solving the problem of running in clusters that are not in a single "datacenter" i.e. where you need to worry about non-uniform latency. I haven't seen anyone else make a serious attempt at solving this.

So, remember that the issue tracker is filled with the unvarnished discussions that come from true open source development. I think it is an asset for you, because you don't discover those things 3 months into using your chosen product; but it is definitely a liability for k8s, because we rely on you realizing this in your initial evaluation and weighting appropriately (the devil you know vs the devil you don't). I think k8s is likely much better than you think it is, and you should come talk to us on slack and make sure of that fact! It certainly sounds like you have an interesting use case that we'd like to hear about and consider.

But yes, our docs should be better!


You might take a look at Rancher - it integrates and fully automates Kubernetes deployment, but personally, I find their Cattle scheduler is much easier to reason about, supports multi-AZ out of the box, and supports all of the features you would want (DNS-based service discovery, encrypted overlay networking, etc.)

Regarding the multi-AZ support issue - this is mostly because an EBS volume can only be attached to EC2 instances in the same AZ, and since Kubernetes has great support for persistent data volumes, you're pretty much limited to a single AZ if you're using persistent data volumes and want them to be remounted on a different instance in case of a failure. I think a more viable solution for persistent data volumes is to leverage EFS and use Convoy NFS to mount them. Now you have highly available, scalable, persistent data volumes, and you can stretch your cluster across multiple AZs.
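
In Kubernetes terms (swapping the built-in NFS volume plugin in for Convoy), the same idea can be sketched roughly like this; the EFS DNS name and the size are placeholders:

    # Illustrative sketch: expose an EFS file system to Kubernetes as an
    # NFS-backed PersistentVolume (fs-12345678 and 100Gi are placeholders).
    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-shared
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: fs-12345678.efs.us-east-1.amazonaws.com
        path: /
    EOF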


In this case, what you would do is set up two separate clusters, and spread an ELB across them. No federation required :)

Disclosure: I work at Google on Kubernetes


But, if you have persistent EBS volumes, you wouldn't be able to mount them on the other cluster if you had a failure of an entire AZ.


I do a lot of work for k8s on AWS (e.g. I'm to blame for the original under-the-hood document). I agree with most of what you've said we need to improve on, and we are working on it! We added a bunch of ELB configuration options in 1.3 and we'll likely complete the ELB feature set in 1.4 (it probably would have landed in 1.3 but for my tardiness in doing reviews). In addition, there's a well-known "trick" where you set up a service using a NodePort and then set up the ELB to point to that, if you want to go beyond what k8s offers (e.g. if you want to reuse an existing ELB, etc.). What our docs are lacking, we make up for with an excellent community (for AWS discussions, #sig-aws on the k8s slack).
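
For anyone curious, a hedged sketch of that NodePort trick might look like this (the service name, selector and ports are all placeholders); you then point your existing ELB at the chosen node port on the instances:

    # Illustrative only -- names, selector and ports are placeholders.
    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort
      selector:
        app: web
      ports:
      - port: 80
        nodePort: 30080
    EOF
    # Then configure the existing ELB to forward to port 30080 on the nodes.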

The debate between automation vs simplification is one that has gone on since k8s 1.0 and likely will continue to be had. But I think to an extent it is a false choice: I created a new k8s installation/ops tool (i.e. did work on the "automation" side), and out of that I got a fairly clear road-map for how we could simplify the installation dramatically in 1.4. In other words, some of the simplification you ask for comes by embedding the right pieces of the automation. k8s is open-source, so I have to persuade others of this approach, but I think that's a good thing, and I'd hope you'd join in that discussion also (e.g. #sig-cluster-lifecycle on the k8s slack).


Can you say more about the complexity you're worried about? The projects you mention WILL land in 1.4 (which is about 90 days away), and thousands of companies are running huge production deployments on AWS, GCP, Azure, OpenStack and on-premises. Further, plug-ins already exist for Chef, Puppet, Salt and Ansible, if you'd like to use them.

To be clear, nothing is "masterless" - please go check out the production deployments for other container management solutions, they all require a separate control plane when running in production with a cluster of any reasonable (>64 nodes) size. FYI, it's a best practice when running a cluster of any size to separate the control plane.

To your direct question, with the other orchestration tools, how would you manage your ELB? Wouldn't you have your own management? They don't (to the best of my knowledge) do any sort of integration - not even the minimum level that Kubernetes does.

Disclosure: I work at Google on Kubernetes


We are really proud of this release, both in making it much easier to get started (with a laptop-ready local development experience) and in adding large-scale enterprise features (support for stateful applications, IAM integration, 2x scale).

As others in the thread mentioned, this was the cut of the binary; we'll be talking a lot more about it, updating docs, and sharing customer stories in the coming weeks.

Thanks, and please don't hesitate to let me know if you have any questions!

Disclosure: I work at Google on Kubernetes.


The momentum and features are really unmatched by any comparable solution.

Disclosure: I do not work at Google


> laptop ready local development experience

The experience, definitely something I’m looking forward to, needs a lot of improvement if your laptop has an Apple logo on it. Hopefully some part of the team is working on that :)


Check out minikube, it's designed for running on laptops with Apple logos. :)

https://github.com/kubernetes/minikube

Disclosure: I work at Google, on minikube.


I ran into Kubernetes a week ago and found this: https://www.udacity.com/course/scalable-microservices-with-k...

Sounds pretty interesting, especially all the part about service discovery & node health/replacement.

Anyone using it for production?


We (Google) are :-)

Otherwise, there's a list at http://kubernetes.io/community/, including: New York Times, eBay, Wikimedia Foundation, Box, Soundcloud, Viacom, and Goldman Sachs, to name a few.


Duh, that's a nice list of references. I'll try to get through the documentation and tutorials. It seems to solve a lot of the troubles we (normal people) have to deal with when deploying Docker containers (on AWS, for example), among them service discovery and node health.


Don't you guys use Borg? I don't know how close that is to Kubernetes.


This course is cringeworthily shallow: short videos that don't go into the details of why stuff happens, and people screaming 'WOW this is sooo useful' all the time without explaining why it's useful.


I didn't find that Udacity course all that helpful. Especially toward the end, where it could really shed some light on the actually advanced Kube topics, the videos shorten to 60 seconds each and he just glosses over topics without any explanation why they matter.


After seeing the whole course, I'd say it lacks detail. However, I think it's enough to get a glimpse of the tool and its benefits. I don't think you can learn Kubernetes and Docker in less than 60 minutes.


1.3.0 is tagged, yes. The actual release (docs, release notes, etc) will happen early next week.


Not to get into a discussion regarding what constitutes a "Release" for any specific project, whether it's tagging, pushing, announcing [0], updating documentation, creating release notes, publishing a release blog post and so on.

A final build of 1.3 was tagged with an accompanying changelog and announcement post. I found it weird that it had no more ceremony, nor any prior submission on HN, and as it had been announced through the kubernetes-announce mailing list 17 hours earlier, I figured its existence would be interesting to the community, so I submitted it in good faith.

In any case, kudos to everybody working on it and congratulations on the release, whether it's this week or the next.

[0]: https://groups.google.com/forum/#!topic/kubernetes-announce/...


I'm glad you posted it - thanks!

My understanding is that with the timing of the US holiday, it made more sense to hold off on the official announcement for a few days. So that's why there aren't more announcements / release notes etc; and likely there won't be as many people around the community channels to help with any 1.3 questions this (long) weekend.

You should expect the normal release procedure next week! And if you want to try it out you can, but most of the aspects of a release other than publishing the binaries are coming soon.


There's no karma in waiting on actual releases, friend.


I'm looking for startups that are using Kubernetes in production who would like some free publicity.

I'm the new executive director of the Cloud Native Computing Foundation, which hosts Kubernetes. We have end user members like eBay, Goldman Sachs and NCSoft, but we're in need of startup end users (as opposed to startup vendors, of which we have many).

Please reach out to me at dan at linuxfoundation.org if you might like to be featured in a case study.


Wow, spooky coincidence - I discovered and installed this for the first time today! The docs could use some work, but generally pretty easy to get started.

Great to see an openstack provider's been added, too.


Which guides did you use?


Only the docs on the website (barring a 5-minute intro to the "Why" of Kubernetes [0]). Used the docs for getting set up locally with minikube, and also the hello-node example.


minikube is awesome, really exciting development for the community. Which platform are you on?


I'm on OSX, deploying to AWS - currently manually, potentially with Kubernetes soon!


Interested in federated clusters. How is federation being scoped and who's doing most of the work on it?


Federation is a VERY big area. Your best bet is to start with the proposals:

  https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation-high-level-arch.png
  https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federated-api-servers.md
Though there are many issues still in discussion. Anything in particular you want to work on?

Disclosure: I work at Google on Kubernetes


One feature I was hoping to see in this release was the ScheduledJobs controller. I remember seeing it mentioned in one of the RCs; did it get pushed back? This would be useful for those of us who want a more highly available cron-like system running on top of Kubernetes.


It was so close, it just missed the boat. It will hopefully be on the next boat.


Does anybody know how 1.3 is for stateful services? Can I use an API to create a persistent disk volume and adjust that volume's size just as I would any other resource, like CPU or memory? The use case being postgres/mysql instances.


You can't, in general, adjust the size of block devices purely transparently. We don't currently support blockdev resizing. Would love to talk about how to achieve that, though


Is it possible under specific circumstances such as if using LVM devices?


An LVM device is local, which is not supported as a PersistentVolume. :(


That's too bad; it would be great to have an API for that. Mesos has the concept of path and mount disks; it would be neat if Kubernetes had something similar:

http://mesos.apache.org/documentation/latest/multiple-disk/


We do! It's just not a "persistent" volume because, well, it's not persistent...
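
(For example, a node-local path can be used via the hostPath volume type; a minimal sketch, with placeholder path and image:)

    # Minimal sketch: mount a node-local path with a hostPath volume
    # (the path and image are placeholders).
    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: local-disk-demo
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: scratch
          mountPath: /data
      volumes:
      - name: scratch
        hostPath:
          path: /mnt/disk1
    EOF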


Awesome. Just digging into Docker and have recently been reading about Kubernetes.

Does anyone have examples of how they are managing deployments? I.e. deploying app updates, running db migrations, perhaps?


There was some coverage, including labs, at this week's Red Hat Summit. Once all the materials are online, that may prove a useful 1.3 reference.


Our blog post is live!

http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cl...

Disclosure: I work at Google on Kubernetes


Now if only a native Azure provider was developed it would be excellent...


The status of K8s on Azure is being updated here - https://github.com/colemickens/azure-kubernetes-status


Cole has been an absolute machine working on this - we'd love your help! The net, though, is that extending Kubernetes in this way is available to everyone - we support ~15 different cloud and OS configurations today, and we'd love to support more!

Disclosure: I work at Google on Kubernetes.


Nice! I'll follow it and try to help as I can.


There's a working implementation here. I'm wrapping up cleanup + some unit tests before sending the PR. https://github.com/colemickens/kubernetes/tree/azure-cloudpr...

Not that it's very exciting to anyone who is familiar with Services + Pod networking, but there's a video demo: https://asciinema.org/a/48294


w00t!!!


Why would I want to take my perfectly nice VMs and run them on a Windows server?


Not sure if you are serious, but while Azure is from Microsoft, it is definitely not Windows only. In fact, most of the products coming out of the "new" Microsoft pipeline aren't in any way tied to Windows.

Kudos to them, and awesome to see people working to get Kubernetes to work on Azure.


  > AWS    
  > Support for ap-northeast-2 region (Seoul)
What does this mean? How can K8S be tied into something as specific as an AWS region?


It means the `kube-up` scripts now work with ap-northeast-2 straight out of the box.

Just set

    export KUBERNETES_PROVIDER=aws
    export KUBE_AWS_ZONE=ap-northeast-2a
    kube-up.sh
Here's the PR https://github.com/kubernetes/kubernetes/pull/24464


Kubernetes can be installed from AMIs, which are region-specific.

In addition, different regions support different AWS features/products and being a newer region usually means the least amount of support. So any setup tooling or infrastructure integration needs to account for those differences and use alternatives if certain services aren't available.



