Reminds me of Draft from Microsoft (https://github.com/Azure/draft). It also targets what they call the "inner loop" of the development workflow.
It looks like Skaffold watches for code changes, and Draft doesn't. But Draft automatically creates the Docker and Kubernetes configs, while Skaffold requires you to create (or copy and edit) them.
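For illustration, a minimal skaffold.yaml is roughly like this (the exact schema depends on the Skaffold version, and the image name and manifest paths here are placeholders):

    apiVersion: skaffold/v1alpha2
    kind: Config
    build:
      artifacts:
      - imageName: gcr.io/my-project/my-app   # placeholder image
    deploy:
      kubectl:
        manifests:
        - k8s/*.yaml   # your existing Kubernetes configs

Running 'skaffold dev' then watches the source tree, rebuilds the image, and redeploys on every change.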
Not directed at you specifically, but what is it with people these days feeling the need for a "disclosure" when just stating plain hard facts? Disclosures are only necessary when you state an opinion about something that you are involved with.
I think it parallels the online culture around shilling/astroturfing -- no one wants to be lumped in that category, so people are extra careful to air their laundry before someone tries to spin an armchair-investigative Google search against them after finding their employment history.
The current shortfall is the lack of some sort of long-term-supported version. Google's GKE and Amazon's (currently beta) EKS seem to be the answer for the spaces I'm in. Love K8S, but I don't want to manage it the way I currently manage operating systems.
The pace of K8S releases, and consequently of things like ingress controllers for various tech stacks, is currently a whirlwind.
Consider kops. It will change the way you work with K8s, frankly. I moved from a Terraform/Ansible setup for Kubernetes to a single command in a Makefile that builds a multi-master, production-ready K8s cluster in AWS in 15 minutes.
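The invocation is roughly something like this (cluster name, state bucket, zones, and instance counts are placeholders; see 'kops create cluster --help' for the full flag list):

    kops create cluster \
      --name=k8s.example.com \
      --state=s3://example-kops-state \
      --zones=us-east-1a,us-east-1b,us-east-1c \
      --master-count=3 \
      --node-count=5 \
      --yes

The --yes flag applies the changes immediately instead of just previewing them.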
It helps, but things like ingress controllers that work with Amazon's ELB and newer ALB load balancing change often. Or things like the overlay network. Basically, because K8S is new, version churn is high.
I'm a big K8S fan, but I want my cloud providers to deal with it instead of me.
Otherwise, it's all the worst of on-prem and cloud mixed together.
That's great, thanks for sharing. I'm currently the "why don't we wait a little" guy at an F500 where app teams are trying to manage their own K8S installs. Still feel pretty strongly that "just wait a bit" is the right answer. I'm not loved at the moment for that opinion ;)
The k8s ecosystem is definitely volatile right now and there are a lot of good projects coming from that, but yeah, there's a lot of building layers from the ground up. If you've used competing orchestration platforms or just dealt with lots of snowflake environments in the past, developing on top of k8s today is still a much much nicer experience.
Shameless plug: we've recently rewritten our platform, which glues together open-source data engineering tools, to run on k8s [0]. It may be worth checking out if you're interested in user analytics or ETL pipelines on k8s, as opposed to writing for it at the app level. It's almost all open source as well. We're rolling it out for multiple F500s currently :).
If you need an F500-friendly kubernetes platform, Pivotal has PKS. Red Hat has OpenShift. I imagine either of us would be happy to listen to what's worrying you.
Disclosure: as you might have guessed, I work for Pivotal.
Can you elaborate? Not at all a doubter here of what you're saying... I could just benefit from a tl;dr, as I'm not likely to sort the licensing issues out on my own.
Just adds another set of pricing complexity on top of AWS's. Look at the pricing for either and you'll see the per-hour plus limits on vcpu or memory, charges for going over the limit, etc.
PCF is not licensed according to inputs (cores, memory). It's licensed by application instance -- i.e., the output of what you bought from us. If you run one giant app with all the CPUs and RAM you can muster, that counts as one AI.
We like this model because (1) it roughly approximates the added value our customers derive and (2) it's easy to calculate without disagreement.
We do bill PWS based on total RAM-minutes, but that's simply because the existing hosted-PaaS market assumes that model. Our price for PWS is pretty close to what it costs us to run the bits on AWS.
Of course, I should emphasise, I do not work in the sales side of things, where prices can be discussed and Baldwin quotes abound. But I kinda like the sales folk in my office, so if you want to meet one, please email me.
Among other things, it lets you package an app and Kubernetes together into a unified installer, upgrade it, etc. No internet access or Kubernetes experience required for end users.
We are moving to Kubernetes at the moment. What I still miss is that “just run docker-compose up and everything works” setup. Maybe this or Draft from Microsoft is something to look into.
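For context, this is the kind of thing I mean -- a hypothetical two-service docker-compose.yml where one command builds and starts everything:

    version: '3'
    services:
      web:
        build: .            # build the app image from the local Dockerfile
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: postgres:10  # placeholder dependency

'docker-compose up' builds the image, wires up a network, and starts both containers with merged log output.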
We (Datawire) also wrote https://forge.sh which seems similar to Skaffold in a lot of ways. You can think of it as a Kubernetes-native Docker Compose, letting you declaratively define your app & its dependencies and then deploy them all into Kubernetes. We also wrote Telepresence which is mentioned elsewhere in this conversation, and we're working on integrating it into Forge.
Forge lets you do things like specify different profiles (QA, staging, prod) when you deploy, map branch names to profiles, do super fast incremental Docker builds, dependencies, etc.
What we've found over time, though, is that it's less about the tool and more about the best practices for how you use these tools -- that's something we're spending a lot of time on now, since our users are showing us how they're trying to use this, and it's really fascinating.
You can sort of achieve the same outcome just by having all the various service/container specs inside a single .yml file that gets fed to 'kubectl apply' -- something like the sketch below. docker-compose up does create networks and containers unique to the app you're developing, but you could have the .yml set up a custom namespace and pile everything into that and get close. No option to concat all the log output though, AFAIK.
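A minimal sketch of that all-in-one file (namespace, names, and image are hypothetical):

    # stack.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: myapp-dev
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
      namespace: myapp-dev
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: myapp/web:dev   # hypothetical locally built image

Then 'kubectl apply -f stack.yaml' brings the whole stack up, and 'kubectl delete namespace myapp-dev' tears it down.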
This looks interesting, but having to perform a Docker image build (even if most layers are cached) as part of your iteration loop seems potentially slow. We use minikube to run local stacks but have been using telepresence[1] to swap out services for ones running from a local build (outside of a container) with incremental compilation and hot reloading and all of that good stuff.
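For anyone curious, the swap looks roughly like this (deployment name, port, and run command are placeholders):

    telepresence --swap-deployment my-service \
      --expose 8080 \
      --run python app.py

Telepresence replaces the my-service pods with a proxy and routes the cluster traffic to the local process, so code changes take effect without an image build.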
Skaffold maintainer here - most of us are also minikube maintainers and we have a bunch of ideas on how to speed up this loop. Keep an eye on this repo :)
Telepresence is really neat, I saw a demo last week from another team member.
Speeding up builds is something I'm interested in. Google's FTL is one good solution; I think we'll see some more from their container tech team before long. Personally I've pitched a notion based on automatically building layers for "synthetic" images by identifying commonalities between filesystems.
I wasn't aware of telepresence. Why have you opted to go this route over using docker compose and bind mounting host volumes? Does this work across all major platforms? (Windows/Mac/Linux)
We initially went that route but changed for a couple of reasons:
1. The performance of bind mounted host volumes in Docker for Mac was pretty disappointing. We found our initial builds were substantially (~3x?) slower inside a (volume mounting) container than out.
2. Having to build all of our services when you started the stack was also slow, versus the minikube + telepresence setup where by default you run pre-built images (the same ones we deploy) and swap out services as needed. You can mitigate this with well-written Dockerfiles that do a good job of caching layers (see the sketch below), but it can be tedious.
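As an example of the caching trick, for a hypothetical Node.js service you copy the dependency manifests and install before copying the rest of the source, so the install layer stays cached until package.json changes:

    FROM node:8
    WORKDIR /app
    # Dependency layers: only rebuilt when the manifests change
    COPY package.json package-lock.json ./
    RUN npm install
    # Source layer: rebuilt on every code change, but cheap
    COPY . .
    CMD ["node", "server.js"]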
Cool, looks interesting. I just spent a couple of weeks getting a k8s setup up and running on AWS, from zero. As much as I appreciate these sorts of platforms, I did enjoy learning the inner details of how k8s components communicate and how they work.
I've been thinking about integrating a local minikube lately but haven't started yet. I may end up using Skaffold as a backend, or just deprecate kube_maker entirely if it becomes obvious that Skaffold is the way to go.
I find it odd when I see projects like this one in the github.com/GoogleCloudPlatform GitHub org. Given that the org has the same name as Google's product (GCP), I would not expect experimental or nascent projects there, so I never know what to expect. Is this an official Google product that will be supported, or an experiment by some devs at Google?