We aren't planning to support multiple nodes anytime soon. The resource overhead would probably be too much for most users just looking to try out kubernetes.
You can take a look at the k8s docs about running locally. There's a script that brings up a cluster with a configurable number of nodes on Linux.
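A rough sketch of what that workflow looks like — the provider name and the `NUM_NODES` variable are assumptions here, and the exact script/env vars vary by k8s version, so check the docs for your release:

```shell
# Hypothetical invocation -- env var names differ between k8s versions.
# Bring up a local multi-node cluster from a kubernetes source checkout.
export KUBERNETES_PROVIDER=vagrant
export NUM_NODES=3            # number of worker nodes to create
./cluster/kube-up.sh

# Verify the nodes registered with the master:
./cluster/kubectl.sh get nodes
```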
Not really. We run in a buildroot [0] based VM on your laptop so that we can run k8s components and a docker daemon cross platform (windows, osx, linux). On linux, it might be an interesting idea, but it looks like there's still an upstream issue on kubernetes to support lxc/lxd [1].
Right now, minikube supports rkt and docker. And the systemd integration of rkt is pretty nifty.
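For anyone curious, switching runtimes is a one-flag sketch — the exact flag name may differ by minikube version, so check `minikube start --help`:

```shell
# Assumed flag; docker is the default runtime if you omit it.
minikube start --container-runtime=rkt
```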
Django doesn't really seem like a framework suited to cluster computing; it's almost always backed by a relational database, and usually IO to the DB is the bottleneck, assuming you know how to use it. Can someone explain the advantages of running it like this to me?
As the other poster said, you run Django on Kubernetes for the same reason you run anything on it:
* Containers are an easy way to package an application so that it's easy to run anywhere, even if it has weird dependencies
* Eventually, you'll want to run containers on more than one machine for performance/reliability
* Kubernetes makes it easy to schedule containers across a lot of machines and have them talk to each other over a virtual network, and it provides a toolkit for a bunch of other random things you'll probably want (image updates, secret management, namespaces, authentication, jobs, etc)
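To make the scheduling point concrete, here's a minimal sketch of a Deployment plus Service for a containerized app — the names (`my-django`, `myrepo/django-app:1.0`) and ports are made up, and the API versions vary by cluster version:

```shell
# Hypothetical manifest; adjust names, image, and API versions for your cluster.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-django
spec:
  replicas: 3                 # k8s schedules these across available machines
  selector:
    matchLabels: {app: my-django}
  template:
    metadata:
      labels: {app: my-django}
    spec:
      containers:
      - name: web
        image: myrepo/django-app:1.0   # hypothetical image
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service                 # stable virtual IP in front of the pods
metadata:
  name: my-django
spec:
  selector: {app: my-django}
  ports:
  - port: 80
    targetPort: 8000
EOF
```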
So the biggest reason to run Django on Kubernetes is you're running other stuff on it, or just because it's one of the many ways to run a containerized app. Sure, you could write a Compose file and just run Docker on a single VM, but you're probably going to quickly want to add other apps or run on more than one machine, at which point you are likely going to look at Kubernetes, Swarm, maybe Mesos/Marathon, or roll-your-own container orchestrator.
Running Django on Kubernetes makes a lot of sense. Running something like Postgres on Kubernetes is the part that is admittedly more questionable. It's mostly just done for fun/as an exercise. Long term, databases will probably run in containers since everything else is. Short-term, for a serious project, I would probably just use a managed database service.
Er, the same as the advantages of running any web application on k8s. Django web applications themselves are well suited for stateless hosts. You don't need any persistence where the Django app runs, you push your DB persistence off to another host (or container backed by a persistent volume).
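A sketch of what "backed by a persistent volume" means in practice — claim name and size are placeholders:

```shell
# Hypothetical PVC so the DB's data outlives any individual pod.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF
# Then mount the claim in the Postgres pod spec via spec.volumes
# and spec.containers[].volumeMounts.
```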
I was under the impression that the advantages were being able to spin up multiple instances quickly, whereas I usually see Django running on a single server.
- Scalability. Even if it's true that the DB layer will probably be the bottleneck in your typical django app, it's nice to have a scaling strategy at every layer.
- Redundancy. Platforms like k8s and other PaaS can distribute instances of the same apps to multiple hosts / availability zones.
- Zero downtime deployment. Spin up the new version in a container, have both previous and new version running concurrently, then start routing traffic to the new app.
- Standardization. Settling on a container type to run all your apps makes for great dev and ops relationships.
- Immutable code. Know exactly which version is running in production and be confident it can't be changed manually.
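The zero-downtime bullet above maps directly to a rolling update in k8s. A sketch, assuming a Deployment named `my-django` with a container named `web` (both hypothetical):

```shell
# Swap in the new image; k8s brings up new pods, shifts traffic only to
# ones that pass readiness checks, then retires the old version.
kubectl set image deployment/my-django web=myrepo/django-app:2.0
kubectl rollout status deployment/my-django
```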
Getting development as close to production as possible means you discover problems sooner. I've spent a few days trying to get kube-solo up and running with our entire infrastructure (mongo, postgres, elastic, two python 3.5 apps, one python 2.7 app). I think, after this post, I'm going to switch over to minikube now that I know it's officially part of the kubernetes project.