Thank you for being here and answering questions. Is there any way you can (internally) push to remove roadblocks for the insanity of Windows Home vs Pro for Docker/Hyper-V? I know this isn't possible in the short run but I think it prevents a lot of children from getting started with Docker.
Based on what Scott Guthrie's team has been able to accomplish, I am cautiously optimistic that this is possible if there is enough push for it from within Microsoft. Thank you once again for your support!
I would discourage development for Windows but not necessarily on Windows. I develop on Windows and deploy to Linux all of the time. I've also started using the Windows Subsystem for Linux.
I’ve been developing for and on Windows for over 20 years. The “Windows Tax” didn’t become a concern of mine until I started using cloud providers. The cost of Microsoft’s licenses was someone else’s problem.
But, when every resource you use is tagged and it’s very clear how much you’re spending on an implementation, the double hit of Windows becomes real. First you pay more for Windows VMs than the same size Linux VMs and then you need more resources.
I can do a lot with a 256MB-512MB RAM Linux VM. I need at least 4GB of RAM for Windows, and that's stretching it.
On the other hand, I still love .NET Core, but it's not getting the uptake that Node or even Java is - yes, that makes me sad.
It's a philosophical difference. NT doesn't do overcommit. In theory overcommit is dangerous (and if you decide that matters you can tell Linux not to do it) but in practice it's usually a huge RAM saving.
If your apparent virtual size is 2.6GB but there's actually only 240MB of resident memory, Linux will run on 256MB of RAM. NT requires enough RAM for the entire 2.6GB plus overheads.
This is especially frustrating if you have orchestration services that would have recovered from the unlikely event of OOM since avoiding OOM is literally the only reason for NT's choice.
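On Linux you can see both numbers, and the overcommit policy knob, directly. A read-only sketch (Linux-only paths; the exact values will vary by machine):

```shell
# Virtual size (VmSize) routinely dwarfs resident set (VmRSS) on Linux;
# only the resident pages actually need physical RAM under overcommit.
grep -E '^Vm(Size|RSS)' /proc/self/status

# Overcommit policy: 0 = heuristic (default), 1 = always overcommit,
# 2 = strict accounting (the NT-like behavior described above)
cat /proc/sys/vm/overcommit_memory
```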
Personally, I've run small LAMP stacks on VPSes with 256MB of RAM without major issues. Running a web server on a Windows VPS usually requires more than that (albeit not eight times as much).
A lot of families already have a Windows computer and not everyone can afford a spare computer to run Linux on, and some people's PCs can't run a full Linux in a VM either.
Windows has a lot of ways to develop for Linux/Unix now. It sure isn't FOSS, but let's not discourage a promising way to get more people into software, even if it's on Windows.
The "installing" part is the problem here.
I installed Manjaro XFCE on my parents' old PCs, and they're now enjoying it. They never did it themselves because they didn't know how to and didn't want to break their already working, but slow, PCs.
Teach that, and I'm pretty sure everyone would love the customizability that the world of Linux provides.
Do you have any specific points, or are you just trying to be edgy?
Hint: no circular arguments allowed. For example: "UNIX line endings are '\n', so they are better; see, UNIX is better." I mention this because I've heard it from friends, none of whom have ever programmed a raw terminal.
I wonder if I'm misunderstanding. Why would I want install, uninstall, etc. type actions running in a Docker container? Isn't that going to encourage people to spin up production environments that aren't reproducible?
I'm not very familiar with Ansible, etc., so maybe tools like that have strategies for building deterministic environments, but I can see a lot of people putting `apt-get` or `yum` commands in an install script.
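For instance, the kind of install script I'm imagining. A hypothetical invocation-image Dockerfile fragment (the package and version numbers are illustrative, not from any real bundle):

```dockerfile
FROM debian:11
# Non-reproducible: resolves to whatever version the mirror serves today,
# so two builds of the "same" bundle can diverge.
# RUN apt-get update && apt-get install -y curl
# Pinned: the same package version on every rebuild (version is illustrative)
RUN apt-get update && apt-get install -y curl=7.74.0-1.3+deb11u7
```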
CNAB makes reproducibility possible by providing unified lifecycle management, packaging, and distribution. Of course if bundle authors don't take care to work around problems with imperative logic, that's a risk. In practice, we see declarative models for building bundles offer more reproducibility.
Yeah. I read through some of the existing configs and I see how it works now. My first instinct when I see `cnab/app/run install` is to think it's for installing the app, but now I see it's for provisioning / deploying to the environment. As soon as I think of install/uninstall as deploy/undeploy, it clicks for me.
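For what it's worth, a minimal sketch of what that `cnab/app/run` entrypoint can look like. The spec has the runtime pass the requested action to the invocation image in the `CNAB_ACTION` environment variable; the default and the echoed messages here are just for standalone illustration:

```shell
#!/bin/sh
# Minimal sketch of an invocation image entrypoint (/cnab/app/run).
# The CNAB runtime sets $CNAB_ACTION; defaulting to "install" here is
# only so the script can be exercised standalone.
action="${CNAB_ACTION:-install}"
case "$action" in
  install)   echo "deploying components..." ;;
  upgrade)   echo "upgrading components..." ;;
  uninstall) echo "tearing down components..." ;;
  *)         echo "unsupported action: $action" >&2; exit 1 ;;
esac
```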
We may want to discuss changing the keywords from install/uninstall to deploy/undeploy if that helps signal the intent behind those actions to the user. That's great feedback, thank you!
The goal with CNAB is to be able to version your application with all of its components and then ship that as one logical unit making it reproducible. The package format is flexible enough to let you use the tooling that you're already using.
Do CNAB bundles support the ability to specify where parameter and credential details might be fetched from?
Currently, we provide developers with lab environments that wire together a small subset of containers under Docker Compose for local development, because running the full system is impractical. However, most of our lab environments have important external dependencies (e.g. Slack, SMTP gateways) that require configuration and often secrets.
One challenge of maintaining these lab environments is keeping these external configuration details up to date, so it would be helpful if the CNAB spec allowed configuration of this sort to be provided by an external provider similar to how Docker images themselves are expected to be provided by a container registry.
Have you anticipated this use case? If so, does CNAB have this type of support?
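From my reading of the draft spec, the bundle can at least declare what it needs and where to inject it, leaving resolution of the values to the runtime (e.g. credential sets in duffle). A rough sketch of a `bundle.json` fragment; field names follow my reading of the draft and may have changed:

```json
{
  "name": "mylab",
  "version": "0.1.0",
  "invocationImages": [
    { "imageType": "docker", "image": "example/mylab-cnab:0.1.0" }
  ],
  "parameters": {
    "smtp_host": {
      "type": "string",
      "defaultValue": "smtp.example.com",
      "destination": { "env": "SMTP_HOST" }
    }
  },
  "credentials": {
    "slack_token": {
      "env": "SLACK_TOKEN",
      "description": "resolved by the runtime, injected into the invocation image"
    }
  }
}
```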
Yes. The intention has always been to put CNAB and related tooling into a foundation that offers vendor-neutral IP ownership. That could be CNCF, OCI, or something else.
It seems that in this context cloud agnostic means any cloud can be supported.
I'm interested in application portability [0]. To do this with CNAB you need to add every cloud to the bundle. This is in contrast to something like Crossplane [1], which intends to support multi-cloud with a single specification.
I get that the specification is cloud agnostic. But it looks like the developer needs to write the underlying code to provision and maintain the application on the various clouds. It feels like too thin an abstraction. And it seems pretty leaky currently; the examples all have hard, specific requirements, e.g. Azure, k8s, etc.
What am I missing? Is the idea that we will create tooling to automatically create the provisioning and maintenance code?
> How is this not just an alternative to normal Docker tools?
From my understanding, you've got Docker for defining your app's services, you've got Kubernetes for orchestrating them, you've got Terraform et al. for defining/configuring your infrastructure, and now you've got CNAB/Duffle to bring all these tools and configs together under one umbrella.
> Does it run on Linux?
From the article[1] posted above:
> By design, it is cloud agnostic. It works with everything from Azure to on-prem OpenStack, from Kubernetes to Swarm, and from Ansible to Terraform. It can execute on a workstation, a public cloud, an air-gapped network, or a constrained IoT environment.
I am one of those people who learn through examples. Do you have an example of using duffle to define some sort of application? I work with this stuff, and the websites are so abstract about what it does, and the spec is so low-level...
I'm even more confused after watching this video. Is the right way to think about this that it is a common specification that enables helm and compose to interoperate?
What would be the advantage over just using Helm directly? (Not a criticism; I'm not a big Helm user yet, just kubectl, but I'd like to know the difference for future projects.) Do you plan to integrate CNAB into Kubernetes directly so we can bypass Helm in the future?
If you're only using Kubernetes, straight Helm might be a better fit.
But imagine you need to run your Helm chart in a Kubernetes environment that doesn't have access to your container images. You could build a thick bundle from your Helm chart, put it on a USB stick, sneaker-net it over to the disconnected Kubernetes cluster, hydrate a container registry, and run the Helm chart with full fidelity in the new environment.
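Roughly like this (illustrative commands only - the exact duffle subcommand names and flags are from memory and may be off, so check `duffle --help`):

```
# Export the bundle plus every referenced image into one archive
duffle export myapp --thick -o myapp-full.tgz
# ...carry myapp-full.tgz across the air gap on the USB stick...
duffle import myapp-full.tgz
duffle install myapp-prod myapp
```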
Both crossplane[1] and CNAB are attempting to play in similar spaces, understanding that deploying an application for the cloud (public or private) is more than just dumping your software into an image and giving it to a provider (be it K8s or some other IaaS/PaaS stack). There's more associated with the application, specifically what it means to orchestrate IaaS/PaaS/SaaS to realize your application.
But I see CNAB falling into the same trap as Helm and many of the package managers before them (including newer variants like charm/juju): an archive with some notion of lifecycle events is not enough, even though it's cute that the lifecycle events are encapsulated in containers, making it easier to manage their runtime dependencies.
What I think makes Crossplane's model more attractive is the notion of building on top of Kubernetes design and leveraging things like the operator pattern[2]. Now application stacks can do more interesting actions during their lifecycle, which can work to preserve availability during an event (like upgrading your application stack). Crossplane is about expanding the management of resources beyond just containers running in your cluster, but to any resource you can model in the K8s control plane and writing software that can react to events related to those resources.
We are big fans of Kubernetes operators. However, taking a dependency on Kubernetes to solve this problem is not something our customers want. CNAB's design acknowledges this and leans on the concept of invocation images to perform lifecycle management.
Take the example of deploying serverless functions and a cloud-based datastore (like CosmosDB) with a third-party DNS service. Kubernetes operators are a poor fit for this, as they presume the existence of a Kubernetes cluster.
To me it looks like CNAB invents a new way of describing and deploying an application that looks nothing like a Kubernetes API while Crossplane is trying to use the existing Kubernetes API tooling to interoperate with and leverage that ecosystem.
Just because you are using a Kubernetes API doesn't mean you are presuming a Kubernetes cluster IMHO. The work being done with virtual kubelet[1] illustrates that.
So, I guess I am confused. You have users that want to package their app in containers, and run those containers. However, those users don't want to use Kubernetes APIs to do it? Why?
Precisely, the dependency we're talking about here is "just" leveraging Kubernetes as a generic extensible control plane which makes it quite convenient to plug into for management of resources.
+1. We really wanted to leverage the ecosystem and have something that is immediately familiar, which is why we decided to use kube-apiserver (and etcd) for Crossplane. I think the K8s resource model [1] goes well beyond container orchestration.
As I said, we have lots of customers who need a packaging format that targets cloud APIs, which in some cases don't involve any containers (hence no need for Kubernetes). Functions + datastore + service bus being a good example.
I know there's lots of love for Kubernetes, containers, and operators -- with me too. Still we can't and shouldn't presume the existence of Kubernetes or Kubernetes APIs to solve the problems CNAB is tackling.
From reading the spec though it looks like everything uses containers built from a Dockerfile:
"A bundle is comprised of a bundle definition and at least one invocation image. The invocation image's job is to install zero or more components into the host environment. Such components MAY include (but are not limited to) containers, functions, VMs, IaaS and PaaS layers, and service frameworks."
So, the very first step of CNAB is to run a container. And CNAB invents a new way of configuring, lifecycling, etc, this container image.
Right. We took a dependency on a container runtime and not on a container orchestrator.
One of the examples we show is an electron app that provides a desktop installer experience for a cloud-based distributed application. We presume a container runtime for this.
We expect CNAB to play nicely with Kubernetes lifecycle management, but taking a hard dependency on Kubernetes was not deemed advantageous to CNAB's design goals.
Most of the examples are primarily container-based and the specification reflects that. We will definitely have to do a better job fleshing out the design with alternative invocation image types than OCI/docker. The azure-vm driver is one such (experimental) example.
Please note that while Crossplane uses the Kubernetes API the actual server is separate from Kubernetes. This way you can use Crossplane to provision a Kubernetes cluster on the cloud of your choice. See https://news.ycombinator.com/item?id=18601440 for more information.
This doesn't come across in the press coverage. Perhaps it's just the joint messaging with Docker. The purpose of CNAB as a generic spec isn't clear, since the examples are all with duffle and docker-app. Still reading the spec, though.
I was at DockerCon EU today when Matt Butcher announced this. I wondered how CNAB would relate to existing cloud native package managers like Helm, especially since a big part of Helm comes from Matt himself.
He told me that he and his team saw the need for something like CNAB after finishing their work on Helm 2, but that they would keep working on Helm 3. He also explained that Helm could be used to install some or all components of an application bundled in CNAB format, which makes sense.
I’m an avid user of both docker compose and kubernetes/helm, and have been very frustrated by the lack of interoperability between them. This looks like it might get us there, which makes me very happy. Nice work Microsoft and Docker.
Thanks! We're excited about the opportunity to align Compose and Helm with a standard packaging/distribution/management model. Same is true for other tooling like Terraform, Ansible, and cloud provider declarative APIs (Azure Resource Manager, etc).
NOTE: THIS HEADLINE DOES NOT MEAN THAT DOCKER WORKS IN WINDOWS
I have spent all day today trying to switch from running Docker in a Vagrant VM to running Docker for Windows, and I can say without question that Docker and Windows are incompatible as of today.
The specific issues are numerous, but primarily: it is slow and has stupid defaults; volumes will drive you nuts; don't use it with Cygwin or Ubuntu for Windows either (volumes won't work); and expect everything to crash your terminals.
The trick is to install the legacy Docker Toolbox with Kitematic and docker-compose. The desktop shortcuts don't work, so you will need to fix those as well. You will also need to ensure you are using Oracle VM VirtualBox. It's all a real pain to get working, but it seems to be OK for simple projects.
Yeah, me too, finally. I switched from running in Vagrant today, and it shouldn't have felt like an achievement. It should work sanely in Cygwin and Ubuntu for Windows, but it doesn't, and it's pretty slow and temperamental, so I'm not sure how many re-installs or restarts will be required tomorrow. We had issues with our Redis dataset being too large for the default memory allocation settings. If it's running slowly for you, you might get some pointers from this relevant ticket: https://github.com/docker/for-win/issues/1936
This strikes me as possibly being somewhat similar to Habitat from Chef but, like Habitat at launch, I'm having a hard time understanding exactly what this does.
I've been working on something similar with sugarkube[1]. It looks like it takes a different approach and aims to solve more of the toolchain. I'd be interested in feedback if anyone has the time...
Imagine you're running Wordpress on K8s. It actually needs ingress, Cert Manager for SSL certs and a DB. Locally you might want to use MariaDB for convenience, but in the cloud you want to use RDS.
Sugarkube lets you install everything in a single pass. In this example you'd create several different bundles (MS call them CNABs; I call them kapps to disambiguate them from apps, which is an overloaded term). You'd create one for nginx ingress, one for cert-manager, and one for WordPress. But the WordPress one is parameterised differently per environment, to create either a MariaDB when running locally or RDS when running in the cloud. These bundles are just stored in a git repo.
Under the hood, Sugarkube calls Make with some predefined targets - right now just install, but in future also destroy - and passes a bunch of environment variables that the kapp declares that it needs. These can be loaded from a hierarchical YAML configuration which Sugarkube reads (kind of like hiera/puppet does). Oh, and it can template files as well.
'Make' just calls whatever you've implemented - Helm, terraform, any non-k8s stuff you need (there's no dependency on K8s in the architecture). You can easily drop down and ignore Sugarkube and just work directly with tools you already know.
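To make that concrete, a hypothetical kapp Makefile under those conventions (the target names, the `DB_HOST` variable, and the chart path are my guesses for illustration, not Sugarkube's actual contract):

```makefile
# Hypothetical kapp Makefile: Sugarkube invokes predefined targets and
# passes environment variables the kapp declares (names are illustrative).
install:
	helm upgrade --install wordpress ./chart --set db.host=$(DB_HOST)

destroy:
	helm delete wordpress
```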
Sugarkube also lets you control which versions of which bundles to release to your environments. It can support multiple live environments.
A final thing is it can also spin up clusters on a variety of backends - minikube, kops, and in future EKS/AKS/GKE, etc.
So altogether Sugarkube gives you a complete solution for launching clusters (ephemeral if you like), and installing your dependencies into them (all as a single golang binary).
Check out the example project (https://github.com/sugarkube/sample-project) which launches a minikube cluster, installs nginx-ingress, cert manager and 2 wordpress instances backed by MariaDB, and then loads different sample data into both databases.
It's still in preview but it can solve a real pain point around working with K8s and deploying applications.
- https://cnab.io
- https://duffle.sh/
- https://open.microsoft.com/2018/12/04/announcing-cnab-cloud-...