Google Skaffold – Easy and Repeatable Kubernetes Development (github.com/googlecloudplatform)
199 points by nikolay on March 7, 2018 | 43 comments



Reminds me of Draft from Microsoft (https://github.com/Azure/draft). It also targets what they call the "inner loop" of the development workflow.

It looks like Skaffold watches for code changes, and Draft doesn't. But Draft automatically creates the Docker and Kubernetes configs, while Skaffold requires you to create (or copy and edit) them.


Draft can watch for code changes via a `watch` setting in draft.toml. (Disclosure: I work on the containers team at Microsoft.)
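
For reference, in a draft.toml it looks roughly like this (from memory, so the exact keys may differ slightly; the app name is a placeholder):

    [environments]
      [environments.development]
        name = "my-app"
        namespace = "default"
        watch = true
        # seconds to wait after a change before rebuilding
        watch-delay = 2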


Not directed at you specifically, but what is it with people these days feeling the need for a "disclosure" when just stating plain hard facts? Disclosures are only necessary when you state an opinion about something that you are involved with.


It's easier to be over-honest, honestly.

Disclosure: I suspect I am partly responsible for the "Disclosure:" thing becoming a fixture.


I think it parallels the online culture around shilling/astroturfing -- no one wants to be lumped in that category, so people are extra careful to air their laundry before someone tries to spin an armchair-investigative Google search against them after finding their employment history.


The current shortfall is the lack of some sort of long-term-supported version. Google's GKE and Amazon's (currently beta) EKS seem to be the answer for the spaces I'm in. Love K8S, but I don't want to manage it the way I currently manage operating systems.

The pace of K8S versions, and subsequently things like ingress controllers for various tech stacks, is currently a whirlwind.


Consider kops. It will change the way you work with K8s, frankly. I moved from a Terraform/Ansible setup for Kubernetes to a single command in a Makefile that builds a multi-master, production-ready K8s cluster in AWS in 15 minutes.
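
Roughly, the Makefile target just wraps a single kops invocation, something like this (cluster name, state bucket, and counts are placeholders):

    kops create cluster \
      --name k8s.example.com \
      --state s3://example-kops-state \
      --zones us-east-1a,us-east-1b,us-east-1c \
      --master-count 3 \
      --node-count 5 \
      --yes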


It helps, but things like ingress controllers that work with Amazon's ELB and newer ALB load balancing change often. Or things like the overlay network. Basically, because K8S is new, version churn is high.

I'm a big K8S fan, but I want my cloud providers to deal with it instead of me.

Otherwise, it's all the worst of on-prem and cloud mixed together.


Can't say I've had those problems. /shrug


AWS Fargate will be interesting too. But, like you said for EKS, it's still in private beta as well.

I'm also keeping an eye on riff [1] as a k8s FaaS.

[1]: https://github.com/projectriff/riff


That's great, thanks for sharing. I'm currently the "why don't we wait a little" guy in an F500 where apps are trying to manage their own K8S installs. I still feel pretty strongly that "just wait a bit" is the right answer. I'm not loved at the moment for that opinion ;)


The k8s ecosystem is definitely volatile right now and there are a lot of good projects coming from that, but yeah, there's a lot of building layers from the ground up. If you've used competing orchestration platforms or just dealt with lots of snowflake environments in the past, developing on top of k8s today is still a much much nicer experience.

Shameless plug: we've recently rewritten our platform, which glues together open-source data engineering tools, to run on k8s [0]. It may be worth checking out if you're interested in user analytics or ETL pipelines on k8s, as opposed to writing for it at an app level. It's almost all open source as well. We're rolling it out for multiple F500s currently :).

[0]: https://github.com/astronomerio/astronomer


If you need an F500-friendly Kubernetes platform, Pivotal has PKS. Red Hat has OpenShift. I imagine either of us would be happy to listen to what's worrying you.

Disclosure: as you might have guessed, I work for Pivotal.


I'm aware, but the licensing implications for highly elastic environments are suboptimal for both Red Hat and Pivotal at the moment.

Thus, I continue to wait for the big 3 cloud providers to figure it out. Unrelated, but seems like an opening for Linode or DigitalOcean.


Can you elaborate? Not at all a doubter here of what you're saying... I could just benefit from a tl;dr, as I'm not likely to sort the licensing issues out on my own.


It just adds another set of pricing complexity on top of AWS's. Look at the pricing for either and you'll see the per-hour charge plus limits on vCPU or memory, charges for going over the limit, etc.


PCF is not licensed according to inputs (cores, memory). It's licensed by application instance -- i.e., the output of what you bought from us. If you run one giant app with all the CPUs and RAM you can muster, that counts as one AI.

We like this model because (1) it roughly approximates the added value our customers derive and (2) it's easy to calculate without disagreement.

We do bill PWS based on total RAM-minutes, but that's simply because the existing hosted-PaaS market assumes that model. Our price for PWS is pretty close to what it costs us to run the bits on AWS.

Of course, I should emphasise, I do not work in the sales side of things, where prices can be discussed and Baldwin quotes abound. But I kinda like the sales folk in my office, so if you want to meet one, please email me.


At Gravitational we've actually built a platform for this: https://gravitational.com/telekube/

Among other things, it lets you package an app and Kubernetes together into a unified installer, upgrade it, etc. No internet access or Kubernetes experience required for end users.


Is your product like Pivotal's PCF / PAS?

https://pivotal.io/platform/pivotal-application-service


At a glance, no. PAS is our distribution of CFAR, what you might think of as "classic" Cloud Foundry.

Our k8s distro is PKS, which is the closer one here.

Disclosure: I work for Pivotal.


Fargate is generally available.


Only for ECS, not yet EKS/Kubernetes.


We are moving to Kubernetes at the moment. What I still miss is the "just run docker-compose up and everything works" setup. Maybe this or Draft from Microsoft is something to look into.


Kubernetes support in the Edge release of Docker for Mac provides this; minikube does the same on other platforms.

We wrote a short doc on the former for Kubernetes itself if you happen to be on a Mac.

https://enterprise.astronomer.io/guides/docker-for-mac/index...

As far as app code, I highly recommend building on Helm if you're not already. I imagine that's what skaffold and draft are doing behind the scenes.
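
For example, with the Edge release's Kubernetes enabled, something like this brings up a whole stack (chart path, release name, and values file are placeholders):

    # point kubectl at Docker for Mac's built-in cluster
    kubectl config use-context docker-for-desktop

    # install or upgrade the whole stack from a Helm chart
    helm upgrade --install my-stack ./charts/my-stack \
      --namespace dev \
      --values values-dev.yaml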


We (Datawire) also wrote https://forge.sh which seems similar to Skaffold in a lot of ways. You can think of it as a Kubernetes-native Docker Compose, letting you declaratively define your app & its dependencies and then deploy them all into Kubernetes. We also wrote Telepresence which is mentioned elsewhere in this conversation, and we're working on integrating it into Forge.

Forge lets you do things like specify different profiles (QA, staging, prod) when you deploy, map branch names to profiles, do super fast incremental Docker builds, dependencies, etc.

What we've found over time, though, is that it's less about the tool and more about the best practices for how you use these tools -- that's something we're spending a lot of time on now, since our users are showing us how they're trying to use this, and it's really fascinating.


You can sort of achieve the same outcome just by having all the various services/container specs inside a single .yml file that gets fed to 'kubectl apply'. docker-compose up does create networks and containers unique to the app you're developing, but you could have the .yml set up a custom namespace and pile everything into that and get close. No option to concatenate all the log output though, AFAIK.
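
E.g., something along these lines, applied with `kubectl apply -f stack.yml` (names are placeholders):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-app-dev
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
      namespace: my-app-dev
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: my-app/web:dev
    # ...repeat for the other services, all in the same namespace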


I use stern for multi-pod log tailing: https://github.com/wercker/stern
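
E.g., to tail every pod matching a name, across restarts (pod query and namespace are placeholders):

    stern web --namespace my-app-dev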


I've switched to https://github.com/boz/kail now, as stern seems to somehow miss pods coming up.


There's no shortage of tools that do the same thing in the Kubernetes universe :)

https://github.com/johanhaleby/kubetail


This looks interesting, but having to perform a Docker image build (even if most layers are cached) as part of your iteration loop seems potentially slow. We use minikube to run local stacks but have been using telepresence[1] to swap out services for ones running from a local build (outside of a container) with incremental compilation and hot reloading and all of that good stuff.

[1] https://www.telepresence.io/
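
For anyone who hasn't seen it, the swap is roughly a one-liner (deployment name and command are placeholders):

    telepresence --swap-deployment my-service \
      --run npm run dev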


Skaffold maintainer here - most of us are also minikube maintainers, and we have a bunch of ideas on how to speed up this loop. Keep an eye on this repo :)


Telepresence is really neat, I saw a demo last week from another team member.

Speeding up builds is something I'm interested in. Google's FTL is one good solution; I think we'll see some more from their container tech team before long. Personally I've pitched a notion based on automatically building layers for "synthetic" images by identifying commonalities between filesystems.


Hey Jacques,

Do you have any more info on the idea about identifying commonalities between filesystems?

I work on Skaffold and really want to speed up builds.


Hit me up on my work email and I'll send you a link to a doc I've shared with some of your colleagues (jchester@pivotal.io).


I wasn't aware of telepresence. Why have you opted to go this route over using docker compose and bind mounting host volumes? Does this work across all major platforms? (Windows/Mac/Linux)


We initially went that route but changed for a couple of reasons:

1. The performance of bind mounted host volumes in Docker for Mac was pretty disappointing. We found our initial builds were substantially (~3x?) slower inside a (volume mounting) container than out.

2. Having to build all of our services when you started the stack was also slow, versus the minikube + telepresence setup, where by default you run pre-built images (the same ones we deploy) and swap out services as needed. You can mitigate this with well-written Dockerfiles that do a good job of caching layers, but it can be tedious.
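
For reference, the old setup was basically a bind-mounted source tree, something like this (paths are placeholders):

    version: "3"
    services:
      api:
        build: ./api
        volumes:
          # bind mount the source for hot reloading; ':cached'
          # helps on Docker for Mac, but it was still slow for us
          - ./api/src:/app/src:cached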


The mods should probably consider updating "Google Skaffold" to "Skaffold by Google", as that's not the project name.


Agreed. Only @dang can change it at this point, as editing is now disabled.


Cool, looks interesting. I just spent a couple of weeks getting a k8s setup up and running on AWS from zero. As much as I appreciate these sorts of platforms, I did enjoy learning the inner details of how the k8s components communicate and how they work.


This looks really interesting. I have a project that does something similar for the remote deployment but uses only make and kubectl.

https://github.com/davidbanham/kube_maker

I've been thinking about integrating a local minikube lately but haven't started yet. I may end up using Skaffold as a backend, or just deprecate kube_maker entirely if it becomes obvious that Skaffold is the way to go.
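
Conceptually it's just make targets wrapping kubectl, something like this (hypothetical targets; see the repo for the real ones):

    deploy:
        kubectl apply -f manifests/

    status:
        kubectl rollout status deployment/my-app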


I find it odd when I see projects like this one in the GitHub org github.com/GoogleCloudPlatform. Since the org has the same name as Google's product (GCP), I would not expect experimental or nascent projects there. I never know what to expect: is this an official Google product that will be supported, or an experiment by some devs at Google?


No matter what, it's probably best to expect that Google may deprecate any feature or project at any time.


This looks promising. My main concern is whether it requires changing the way code is built and deployed too much.

For instance, I don't want `npm run build` to be executed every time I change a JavaScript file.
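
Going from the README, the config is a skaffold.yaml along these lines (the exact schema may differ in the alpha; image name and paths are placeholders):

    apiVersion: skaffold/v1alpha1
    kind: Config
    build:
      artifacts:
      - imageName: gcr.io/my-project/my-app
    deploy:
      kubectl:
        manifests:
        - paths:
          - k8s/*.yaml

As I read it, every watched change re-runs the whole image build, so whatever the Dockerfile does (e.g. `npm run build`) happens per change unless Docker's layer caching saves you.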



