Docker looks like a really interesting concept, but to say that with docker you can "write once, run anywhere" is a misrepresentation at best, and an outright lie at worst.
It isn't talking about operating system compatibility.
It's talking about being able to use the same container technology on a "choice of deployment from laptops to bare metal to VMs to private and public clouds."
Docker is a new technology, and I suspect Windows's poor support for it is going to cut into Microsoft's bottom line in the coming years. Microsoft MUST support high-performance Docker in the next year, or they are going to lose a lot of server customers, if Docker's exponential adoption curve continues for the foreseeable future.
Let's face it: Microsoft Windows is really just a legacy OS now... They either need to switch to a UNIX-derived kernel, which would allow for native Docker support (amongst a million other advantages that come from sharing a sensible common infrastructure with other OSes), or they are headed for the dustbin of history pretty soon.
Let's be honest. How many people are deploying WebLogic on Docker? How about SAP? How about any large enterprise back-office application at all?
You see, Windows Server serves an entirely different market.
The kinds of applications being deployed on OS-level virtualisation just don't get deployed on Windows anyway.
About as close as you are going to get are Java applications, and even those are usually deployed behind a point of abstraction like an application server. (At the end of the day, RHEL is still much more common for big Java apps.)
That is not to say that Windows would not benefit from some sort of OS-level virtualisation, only that it means absolutely nothing that they don't have it right now (or even for a few years).
Windows Server will continue to dominate back office deployments ad infinitum.
As much as I dislike Windows, there is a whole world of "enterprise" software running on Windows that isn't going away. SQL Server and Active Directory come to mind as Microsoft products being heavily and actively invested in by the enterprise.
If you're in the Ruby/Python/Go/Docker/whatever echo chamber it's easy to miss the other echo chambers out there. There are tons of companies out there still making a decent living from developing and maintaining software in curiously tenacious tech like Delphi, FileMaker and MUMPS.
> There are tons of companies out there still making a decent living from developing and maintaining software in curiously tenacious tech like Delphi, FileMaker and MUMPS.
Right, those are the very definition of legacy software... so where is your disagreement with me?
Those were two separate, unrelated points (though a lot of people would disagree that Delphi and MUMPS are legacy software; FileMaker surely is).
As nice as that would be, I don't see this happening until gaming gets popular on UNIX-based platforms. This is because it would then become easier for consumers to flee to another OS. With PC gaming predominantly on Windows, gamers have no choice but to use Windows to play most of the games they're interested in, which locks them into Windows.
More of an observation than anything, but Docker certainly has the corporate-ese marketing speak down pat. Much more so than most other "hot" OSS/platforms. Why such a different culture? Or are they just trying to climb into VMWare's bed?
I think Ben Golub plays a part in that. As superuser2 says, enterprises make up a large chunk of Docker's market. Github is also tapping into this segment—most of their recent hires are extensions to their enterprise support, sales, and account management teams.
Docker is more useful to people with more services to containerize, and that often means enterprise. To your average start-up with at most a couple of Rails apps, it's not nearly as much of a value-add.
Don't really agree with this. Being able to configure a new instance quickly and easily is useful no matter how large you are. For that matter, so is being able to spin up an isolated environment on a dev box as fast as you can start an editor.
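To make that concrete, here's a minimal sketch using the Docker SDK for Python (docker-py); the image and command are just placeholders:

    # Minimal sketch: spin up a throwaway, isolated environment from a dev box.
    # Assumes the Docker SDK for Python (docker-py) is installed and the Docker
    # daemon is reachable; image and command are placeholders.
    import docker

    client = docker.from_env()

    # Run a short-lived container and capture its output (remove=True ~ --rm).
    output = client.containers.run(
        "python:3",
        ["python", "-c", "print('hello from an isolated container')"],
        remove=True,
    )
    print(output.decode())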
> Being able to configure a new instance quickly and easily is useful no matter how large you are.
The comment you're replying to didn't say it wasn't useful for the "average start-up with at most a couple of Rails apps" - it said it's less useful ("not nearly as much of a value-add") compared to a large enterprise.
When your entire environment runs on half a dozen VMs you have much less to gain from faster/easier provisioning than when you're running tens of thousands.
Docker has professional sales and marketing staff. Most open source projects don't because they aren't really businesses. ElasticSearch is a billion-dollar company; then there's Pivotal, Basho, Github, Mozilla etc., all organizations that have managed not just to become popular open-source projects, but profitable companies. Docker themselves raised $15m recently.
In my view one of the key reasons to do it is to use AWS or the like. Until they have full Docker support on bare metal, that is.
P.S.
There are other reasons, e.g. Docker makes it difficult to create a separate publicly visible network interface. But these I feel will go away soon as well.
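For what it's worth, the usual workaround today is to publish a container port on a specific host address rather than giving the container its own public interface; a rough sketch with the Docker SDK for Python (the address and image are placeholders):

    # Bind container port 80 to a single host address instead of a dedicated
    # public interface for the container. Assumes the Docker SDK for Python;
    # the IP and image below are placeholders.
    import docker

    client = docker.from_env()
    client.containers.run(
        "nginx",
        detach=True,
        ports={"80/tcp": ("203.0.113.10", 80)},
    )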
> Just a reminder, adding VMWare-like virtualization layer (i.e. full virtualization) to the mix will cost you more than 40% of your CPU
… on a couple of microbenchmarks when you define “CPU” to mean “I/O”. Even if you ignore the question of whether KVM performance is the same as VMware's (hint: no), most of the charts in that paper contradict such a broad statement.
This article only mentions VMware twice in the body, both in passing. The majority of the article is a comparison of KVM and Docker. Many of the points in the article don't apply to VMware at all, or refer to things that are KVM specific limitations which VMware has already overcome.
In fact, the article says "KVM has much higher overhead, higher than 40% in all measured cases." - saying nothing of "VMware-like virtualization" or VMware ESXi at all.
While it's true that containerized "virtualization" traditionally has less overhead than a hypervisor like ESXi, the difference is increasingly small and in most environments negligible, especially considering the added features and flexibility of ESXi vs containers.
From my experience and tests, the performance difference between an app in a container running on bare metal vs the same app running in VMware has been insignificant, so to say that there is a 40% performance penalty is probably disingenuous.
That's not how it works. How many HN stories do you see about jails/zones? How many about Docker?
The ease of use of Docker and the ubiquitousness of Linux make for a disruptive combination.
The 'technically worse, but faster and cheaper' combination Docker offers compared to virtualization is exactly what The Innovator's Dilemma talks about.
I don't doubt that some people use Virtualbox who would otherwise purchase VMWare Workstation or Fusion. Those products have got to be a tiny, tiny sliver of where VMware's revenue comes from, though.
To VMware Fusion/Workstation, which isn't where they make their money anyway. Nobody talks about VirtualBox on the server where KVM is much better supported.
No, because their feature set is similar for many purposes, their stuff works well cross-platform and is free. The way I see it, VMWare was a one-trick pony and the horse has bolted, and while expanding their offering into virtual networking topology provision and serious hardware infrastructure provision has been tried, it inevitably fails in subtle ways.
Simply put: people need transparency in dynamic, modern infrastructure and they don't get it from 'put me in the middle' commercial vendors. Nor does the complexity cost of the mystical one-size-fits-all virtualization solution magically dissipate when marketers invoke the ancient spirits of enterprise requirements.
I'm not aware of many people using VirtualBox for server virtualization. That's where VMWare makes (almost all) of their money.
VirtualBox isn't better in any way than VMWare - except that it is cheaper. That isn't really enough on its own (and there are plenty of more viable competitors if zero cost is all that matters: Xen, KVM, etc.).
OTOH, Docker is cheaper, faster, much less resource intensive, and less secure. That's disruptive, and much more difficult for VMWare to fight than another conventional virtualization competitor.
You're right, of course; however, I would posit that VMWare's server popularity is based upon its historical dominance in the workstation space, which is what's under threat. I should have made this clearer. Containers are apples to paravirt's oranges to v8/JVM's dragonfruit.
Yes, and even earlier - before ESX - nobody had thought of server paravirt, and there was only workstation. But these days there are free alternatives that work on all platforms and don't hound you for needless upgrades.
Besides, it feels like the fad around paravirt is over. There's a fair argument that its original server-side popularity was mostly a hack around 'doze's crappy install/config procedure, and 'doze is dying off.
Now we're left with containers, KVM and VirtualBox... the desktop replacement of VMWare Workstation being the final nail in the coffin for its dwindling userbase.
VirtualBox would only be a "threat" to Workstation/Fusion. Products like that make up a small part of VMware's revenue. Enterprise is where the money is (products like vSphere, ESX, NSX).
> Docker, Inc. offers Docker-related products and services and is creating a network of certified professional support, training, and services providers. We are committed to keeping Docker open source under the Apache 2.0 license.
They used to be dotCloud, and presumably have some legacy clients still paying them money to keep the lights on.
They raised $15 million from Greylock and Benchmark, which I guess will help keep the wolves at bay until they a) figure out how to monetize or b) get bought by Oracle/VMWare or similar.
> The companies are working together to ensure that the Docker Engine runs as a first-class citizen on developer workstations using VMware Fusion, data center servers with VMware vSphere, and vCloud Air, VMware’s public cloud.
I've been working to get a VMWare vCAC install going at my $corp. I've been avoiding talking about Docker in meetings with them, as I thought it would be embarrassing for them; privately I think it'll reduce VMWare's power.
Wasn't there a recent study published on HN that concluded container-in-VM was a really bad idea performance- and security-wise, and that VM-in-container was really good?
That conclusion really only applies to Linux; VMware can't run VMs in containers. When all you can run is VMs, then you'll promote $X in VMs for all $X.
"Providing machines for Kubernetes in not only necessary as a pool of raw cycles and bytes but also can provide a critical extra layer of security". [0]
Maybe it's true for now, but containers may be enough security in the future.
The reasons are actually not any different than when running on a dedicated machine; running containers is a good idea regardless of whether the host is a VM. Containerization is a smaller, more granular unit of isolation than a VM, and it's complementary to virtualization, not a replacement/competitor.
Docker has its use cases: lightweight! PaaS! incremental push! 12-factor apps!
VMs have their use cases: Strong isolation! online migration! completely portable! consolidation for traditional applications!
So no, Docker will probably live alongside, next to, holding hands with, traditional VM environments, in much the same way that J2EE apps coexist with Rails.
But, the other reason that VMware won't want to buy Docker is that over the long run, these technologies become increasingly commoditized. VMware's hypervisor was innovative when it first launched, and these days, you can argue that you can get much the same functionality from KVM or Hyper-V - or even in this case, LXC and Docker for a different set of use cases.
Instead, VMware needs to make money on their management tools. Ideally, those management tools will be managing VMware's hypervisor, but they can't afford to be so choosy. So, instead, they want a "first among equals" relationship with various open source technologies so they can be in the mix in every environment, even those where they haven't sold their hypervisor.
That's why they won't buy Docker - they don't want to sell the platform, they want to manage everyone's platform.
(note: everything I said is my own opinion, not that of my employer. We are co-opetitors with VMware in some of our business, and hence I've got a conflict of interest I feel obligated to disclose).
Mainframes still exist alongside x86, but the real question is: who cares about mainframes? A similar argument was made when VMs were introduced - people said virtualization only fit a subset of use cases. Now what? Look at the next 5 years and you will see VMware in a really bad position.
I think you are right and wrong. Many people are using "full machine" virtualization today when what they wanted was just application isolation and resource control, but the only easy way to do this was to use virtualization. As recipes & tutorials come out on how to migrate existing applications from VMs to containers, I suspect people will review whether they really need "full machine" virtualization, or can just use a container.
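For example, if what people actually wanted was isolation plus resource limits, a container gives them both directly; a rough sketch with the Docker SDK for Python (the image name and limits are purely illustrative):

    # Application isolation and resource control without a full VM.
    # Assumes the Docker SDK for Python; image name and limits are illustrative.
    import docker

    client = docker.from_env()
    client.containers.run(
        "mycompany/legacy-app",   # hypothetical image built from the existing app
        detach=True,
        name="legacy-app",
        mem_limit="512m",         # cap memory the way you'd size a small VM
        cpu_shares=512,           # relative CPU weight instead of dedicated vCPUs
        ports={"8080/tcp": 8080},
    )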
P.S. Just to be clear, I think there is still a need for virtualization.
This is not true at all. Docker fits a very, very specific use case, and is a very small subset of what we use VMs for. To enhance Docker to cover those cases you would eventually end up with VMs.
VMware has a strong GUI focus for the majority of sysadmins and virtualization admins who use it.
That said, there is a huge set of command-line tools, shells, and APIs available for power users; for the next level of certification from VMware (VMware Certified Advanced Professional), you need to know how to do common GUI tasks and troubleshoot via the CLI.
There are a surprising number of tools under the hood, but if you want to compare it to something Linux-based I'm going to speculate that it isn't as extensive.
Why does every Linux admin who's worked with VMWare think the CLI tools are terrible? Do they require a JVM or other prerequisite to operate? Are they not natively packaged for common distributions? Do they use non-native conventions for the platform? (e.g. using / as an option argument prefix) Are they not simple scripts that could be ported to OS X and run from Homebrew?
If my colleagues are just misinformed, that's one thing, but I'm hearing this feedback from people I generally trust. I'd love to have a neutral opinion on the CLI toolset quality.
The vCLI is kinda terrible. It's Perl, so it will run anywhere, but installers are only available for Linux and Windows. The core vCLI tools -- esxcfg-, vicfg-, vmware-cmd -- are geared toward host and vCenter interactions. For guest creation/configuration you'll end up digging into the Perl SDK, which is included in the vCLI. It comes with many more scripts, but you'll end up hacking on them to cover all of the functionality that you need.
The PowerCLI stuff is way better. It has comprehensive coverage of everything the GUI does and conforms to PowerShell conventions. In my experience it is lacking for nothing.
Think of some simple operation you'd like to perform, like starting an existing virtual machine. Now read the documentation and try to figure out how to do it.
I looked into the vSphere APIs recently and they were pretty bad. The samples were written in C# and Java, with the Java samples being fairly well written. The C# samples were terrible.
Lovely, thanks for pointing that out. I'm a Python person myself, so I'll definitely take a look at that. My sysadmins are still deploying servers the old-fashioned way - I'd like to make their lives easier. We've automated the configuration (Puppet), but not the deployment.
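If the library in question is pyVmomi (VMware's open-source Python bindings for the vSphere API), powering on an existing VM ends up looking roughly like this; the hostname, credentials, and VM name are placeholders:

    # Rough sketch with pyVmomi: power on an existing VM by name.
    # Hostname, credentials, and VM name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # skip cert checks; fine in a lab, not in prod
    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        # Walk the inventory for VirtualMachine objects and pick one by name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "my-vm")
        vm.PowerOnVM_Task()  # returns a Task object you can poll for completion
    finally:
        Disconnect(si)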
Looking at their docs (https://docs.docker.com/installation/windows/), their idea of "running on Windows" is "running in a VirtualBox Linux VM, on Windows".
That really rubs me the wrong way and I'm surprised that the HN community lets it slide.