Docker closes $40M Series C led by Sequoia (docker.com)
135 points by yla92 on Sept 16, 2014 | 99 comments



This means Sequoia expects Docker to either go public or to be acquired for at least (10 * $40M / sequoia_ownership) in order to be considered a "win," right? (A "win" in the sense of being worth the VC's investment, not in the sense of being valuable to the world.)

The reason I say this is that a VC who merely breaks even on investments will eventually go out of business, so it would be a mistake to invest unless the expectation is that Docker might be a win for them.

Assuming Sequoia owns, say, 35%, that comes to an expected acquisition price of about $1.14B for Sequoia to earn 10x their money back.

What are some hypothetical scenarios which end with Docker going public? What are some scenarios where a company would acquire Docker for north of $1B?

I'm not trying to imply anything about Docker with these questions. Personally, I love Docker. It's just fun to theorycraft.


Generally the multiple expected is lower for later-stage rounds, as there's less risk. Also Sequoia wouldn't have got anywhere near 35%.

That said, the investors would certainly be looking for a >$1bn exit.


> Also Sequoia wouldn't have got anywhere near 35%.

Bingo!

I don't think anyone would be interested in taking a round that would dilute us this much, especially when the need for money is not pressing.


The smaller the percentage, the higher the expected acquisition price. For a 10% ownership stake from a $40M investment, the acquisition would need to be $4B just to earn 10x their money back.

I was just wondering about some scenarios where Docker could achieve that kind of price.
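(Spelling out the arithmetic, treating both ownership stakes mentioned so far as pure hypotheticals:)

    \text{required exit} = \frac{\text{return multiple} \times \text{investment}}{\text{ownership stake}}
    \frac{10 \times \$40\text{M}}{0.35} \approx \$1.14\text{B} \qquad \frac{10 \times \$40\text{M}}{0.10} = \$4\text{B}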


You haven't taken into consideration an important variable: how much of the $40M is invested by Sequoia vs. pro rata by existing investors.


Why does that matter? The other investors are hoping for returns too.


preferred vs. common, participating preferred vs. common (i.e. liquidation preference), possible pro-rata rights (though at this stage further financings are probably unlikely)

disclaimer: I work at Greylock, which was an earlier investor, though I have _zero_ inside knowledge of the deal as I work on unrelated stuff.


I was talking specifically about the equation, not about the expected returns of investors.


This doesn't actually matter, assuming the $40M bought 10%.


VMware is valued at over $40B currently. They could scoop up Docker for a couple percent of their market cap. I'm not saying that will happen, but it really wouldn't surprise me. There have been a ton of lofty acquisitions in this space.


Gods I hope not. VMware's change resistance, pricing model, and overall attitude and (lack of) responsiveness to customers are not something I want to see applied to a project like Docker.


It is obvious that Docker will be acquired very soon. I think the Sequoia investment is a very safe bet, almost a gift. Companies such as VMware are nervous because fewer VMs are deployed when Docker is used, and there is less of a performance impact.

Docker doesn't replace VMs but there is an intersection in their use cases.


By whom? And for what purpose?

If Docker was about to be acquired, why would there be any interest in raising a round? That dilutes everyone for no purpose. That's not smart.


> If Docker was about to be acquired, why would there be any interest in raising a round?

Build up a widely recognized brand. Have a complete offering (all the way up to management of containers, as the picture shows). Stay neutral, supporting all platforms equally well, for some more time (before one of the platforms may buy it). Docker is probably projecting a really ambitious picture of wanting to be the big-blue-platform-in-the-cloud in 5-10 years; a picture that, with enough rounds of funding, may not even be so far out.

> dilutes everyone for no purpose. That's not smart.

Diluting 10-40% for a company that will soon be making several times the profits is very acceptable. Even if Docker could get there "in time" by itself, going it alone would still make no sense.

But I think Docker knows they need to keep building out their biz, or they will end up open-source-zoned: everyone knows and uses your product, but you make no serious profits.

Getting open-source-zoned is not a bad thing... Just not a straight ticket to the millions.


The person I replied to was referring to this happening very soon. I was talking in the context of "very soon" :)


Red Hat and Oracle will murder each other over Docker. I expect 2 to 5 billion in 3 to 5 years.


> By whom? And for what purpose?

VMware? Google? Amazon? RedHat? HP? IBM?

> If Docker was about to be acquired, why would there be any interest in raising a round? That dilutes everyone for no purpose. That's not smart.

It moves the valuation up and it increases the chances of being acquired.


I believe the ideas of containerization used by Docker actually came FROM Google. I remember listening to a speaker talk about how it was based on what Google uses for their production servers, and that Docker is a way for them to share that technology without actually sharing their own proprietary stuff. So I doubt Google would be interested, and I think most other big companies should already have solutions to this problem (at least Microsoft and Amazon for sure).


The idea of containerization is older than Google. You can take a look at http://en.wikipedia.org/wiki/Operating_system%E2%80%93level_...

Companies such as Microsoft and VMware, which are leaders in the Windows application virtualization sector (via App-V and ThinApp), don't have complete solutions in this space.

Their solutions do a lot of tricks to simulate a container (e.g. user-mode hooking, filter drivers), while Docker uses built-in kernel features.


Maybe to have an existing Sequoia portfolio company pay an inflated all-cash amount for Docker, which effectively lets Sequoia pull money out of that portfolio company without diluting its equity in it?


Also, VCs adding pieces of the stack to their portfolio is interesting. Presumably some of the Sequoia companies will be thinking about using Docker, and Docker already has paid plans, so some of their cash "stays in the family"; they own and have a vested interest in the success of deployments further up the line, driving more revenue back down.

Not that I'd imagine Sequoia has to do this or has any issues with deal-flow, but any company doing cool things running on containers, who's going to be the VC they call first?


I'm curious as to how Docker.io and CoreOS.com live in harmony... I mean, CoreOS is built around Docker.io as a core piece, but I can see it overshadowing Docker proper in terms of money to be made in the long run.

It will definitely be interesting.


How are Docker monetizing their product? Is it just hosting and support? By open-sourcing Docker, haven't they opened the door to hundreds of competitors offering the same thing, often at a much lower cost? Is their only competitive advantage the fact that they own the project and thus understand it better and can dictate its course?

I'm sure they would make for a very interesting case study on how to do open source right.


I'll paraphrase the questions and provide answers as best I can.

> How are Docker monetizing their product?

We offer commercial support for Docker and also offer paid features on Docker Hub.

> Competition?

Docker is Apache 2 licensed. Anyone can fork Docker and start monetizing it tomorrow in a completely different way than we had anticipated.

This is actually a good thing. There's a separation between the Docker project and the company, and there's a virtuous cycle of the company aligning to the objectives of the project and the project benefiting from all of the business-y things you can do.

> Competitive Advantage

We don't own the project, the community does. We fundamentally believe the value of a platform or ecosystem is proportional to the amount of competition it brings to everyone. As per above, we have no interest in locking in a competitive advantage on the Docker project that only we can benefit from.

> Doing Open Source Right

We have a long way to go! We're certainly trying something new, but we've gathered a lot of momentum, and our focus is just to continue working with the community and our great partners, and to work diligently to deliver great product and support to our users.


That is one way of putting it. Another way is that you're creating a new de facto standard and making sure everyone needs to use it. The partnerships you make build your product into other products, making it the default option for anything someone might need to do with containers. Then you increase the visibility of the product (and thus the company) by getting lots of PR and making sure VCs and potential customers read it.

But competition has nothing to do with open source. Source code is not a competitive advantage, even if you get minor quality improvements like increased code visibility and test coverage. No open source company has ever forked code from an existing product, started a competing business, and stolen business away from the originator. Companies that provide services on top of other people's code, however, often fall victim to a better sales pitch, custom-tailored services, or a shift in direction.

And honestly, the idea that 'the community' owns the Docker project is a joke. Is the community getting 40 million dollars? Is the community making the design decisions for the product? Is the community pushing the integration of your tool with other companies and services? As far as I can see, you have a company based on a product, and you give that product away because it costs you nothing to do so. Open Source is a marketing tool, and a great one at that.


>No open source company has ever forked code from an existing product, started a competing business, and stolen business away from the originator.

Your main point stands, but one counter-example perhaps worth mentioning is MariaDB (the Enterprise version[1], not just the free version[2]). Oracle's treatment of MySQL was enough to force a reaction from the community to fork and continue to build a drop-in replacement. Although, for the same analogy to apply here, a majority of the core Docker developers would need to defect to DockerFork.

[1] https://mariadb.com/

[2] https://mariadb.org/


I'm glad you bring up partners! They're what I personally focus on 24x7, so I have a lot to say on the topic.

> The partnerships you make build your product into other products, making it the default option for anything someone might need to do with containers.

I think you can view some partnerships through that lens, but as a whole I do not believe this statement holds at all.

My #1 partnering goal is to make sure that the interest that exists in Docker can be realized on the services and products that people are using today. You'll never see us form a partnership whose conclusion, whether direct or indirect, is that the only proper way to use Docker is in combination with partner technology X.

I think you could also look at projects like libcontainer, which is written by some of the maintainers of Docker, and see that it's being used by other projects not related to Docker at all. In some cases, even by perceived competition (like Pivotal).

> Then you increase the visibility of the product (and thus the company) by getting lots of PR and making sure VCs and potential customers read it.

It is important to highlight the reasons we make these partnerships - I can assure you, it's not to get VC attention. That's completely short-sighted and unsustainable.

As best we can, we deflect the visibility of the project onto others, big or small, doing great things with Docker.

> And honestly, the idea that 'the community' owns the Docker project is a joke.

I'm not laughing. Maybe you're not familiar with how the Apache 2 license works. I'd get familiar with that. Link: http://en.wikipedia.org/wiki/Apache_License

> Is the community making the design decisions for the product?

Yes. The project's design is open. There is no privileged discussion about the Docker project; it all happens in the open on GitHub and IRC. If there's a specific area of conversation that requires in-depth collaboration, we sponsor people to meet in person. This happens regularly.

> Is the community pushing the integration of your tool with other companies and services?

Yes. Red Hat is a perfect example. Pre-0.7, Red Hat customers could not use Docker because 1) AUFS was not available on the platform and 2) Docker was not supported on Red Hat. So anyone using Docker on Red Hat at that time was breaking their agreement. That's a problem.

> As far as I can see, you have a company based on a product, and you give that product away because it costs you nothing to do so. Open Source is a marketing tool, and a great one at that.

We can argue the relative advantages and disadvantages of free vs. commercially licensed software all day. You can write Docker off as a sheer marketing ploy, but I'd say that's a pretty disingenuous statement to make about the individuals at the company.

I'll also say, the trade-off does not come without cost. It's not even close to free.


Great reply. I love the way you guys think, it makes me very happy that you guys are finding success.


The main way companies building or built around open source products make money is through services and expertise. Cloudera and Red Hat are two prime examples: both are built around open source projects but still make excellent money because of their service and support capabilities.


True. However, Linux and Hadoop are very complex beasts that really require and benefit from a lot of expertise when deployed in the enterprise. Perhaps I don't fully understand Docker, but really there isn't that much to it. It's a great product and massively useful, but does it really require such extensive knowledge to make the most of it? Docker is really just an extra layer over LXC...


> Docker is really just an extra layer over LXC

This is not correct.

The Docker engine has the ability to use the LXC userland tools as an execution environment, but it is not the primary choice. libcontainer (github.com/docker/libcontainer) is. libcontainer has adoption outside of the Docker world (Parallels and CloudFoundry, for example), which we find to be a fundamentally good thing.

We can also spend time talking about the non-technical values of Docker, if you'd like to jump in to that.


Would you say libcontainer is a reimplementation of LXC? Are they related in the sense that they use the same kernel features with the same goal of isolating processes?


I would say that the choice between libcontainer, LXC, libvirt/libvirt-lxc, lmctfy, or any of the other container providers matters in how each works with the kernel and exposes an API, but in general each has its own relative merits that suit different audiences.

We would prefer and encourage competition to exist at this level, but also want to support all of the options as Docker execution drivers.

This is a small example of our approach in general: instead of locking existing technology out, Docker is designed to integrate seamlessly with the choices you're living with today. If we don't do that well, it's a bug. Let's fix it.


Ah, thanks for putting me straight.

Yes, please expound. I am largely just a lone developer looking to use docker for my fairly small projects. I currently just use Git for deployment. Is there any value to moving to Docker?


Leaving aside some of the macro debating points on open source business models, I'm curious how often this kind of competition has ever become a problem for businesses. Docker aren't the first to create a business around open source software they develop. Has this ever killed a business before?


> Has this ever killed a business before?

I think there have been some companies that have died that way, yeah. It's tough to make money from open source, though obviously not impossible. Think how hard it is to get a regular old sell-something-for-money business off the ground, then throw in a way more complicated business model and relationships with the community, customers, and so on... it's complicated!


I would look at something like MySQL vs. MariaDB vs. ... As soon as the main shepherd of the project has shown that it doesn't have the best interest of the community and ecosystem in mind, a prominent fork gets adopted. It definitely takes time.


Docker are doing nothing new. There are a few million-to-billion-dollar open source businesses out there (Red Hat is the leader).


How do people actually use Docker?

In my world of bootstrapped, smaller apps looking for market traction, even if things go well, a few Linodes should be enough to handle most of the traffic I'll ever need to deal with, so this kind of thing is kind of foreign to me. I'm curious how people utilize it in practice.


I (or rather, Jenkins) build all my software in it; I don't actually use it for containerizing final applications.

In a nutshell, for me the value is in trivial repeatability. I can reproduce the entire build toolchain and test environment and produce artifacts, all from a few-KiB git repo which centres around the Dockerfile and submodules to dependencies.

Some of my ARM stuff takes hours to cross-compile and normally involves enormous amounts of fiddly babysitting. Dockerfiles have RUN statements (think lines of a shell script) which are cached. My adjustments toward the end of a Dockerfile take only seconds to test and produce the exact same result as if it had really run each statement from the start. That doesn't sound like much, but it turns out (for me) to be pretty liberating compared to constantly fighting other automation, where you have to dance around short-circuiting stuff to re-use bits of a past build to save time, and get only a handful of "pristine" iterations in a day (that might differ from the iterations you rolled by hand).
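To give a flavour of the pattern (package names and the repo URL here are made up, not my actual setup): slow, rarely-touched RUN steps go first so the layer cache replays them instantly, and the frequently-edited steps go last.

    cat > Dockerfile <<'EOF'
    FROM debian:wheezy
    # Slow, rarely-changing steps first: cached across rebuilds.
    RUN apt-get update && apt-get install -y build-essential git ccache
    RUN git clone https://example.org/arm-toolchain.git /toolchain  # hypothetical repo
    RUN cd /toolchain && ./build.sh                                 # hours the first time, cached after that
    # Frequently-edited steps last: only these re-run when tweaked.
    ADD build-scripts /build-scripts
    RUN /build-scripts/cross-compile.sh
    EOF
    docker build -t arm-build .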


I'd be super interested if you could share any of your ARM cross-compilation Dockerfiles.


They've got a lot of idiosyncrasies at the moment, some of them working around the fact that ADD some/directory/ could never be cached (so my build scripts maintain some.directory.tar.gz and those are ADDed instead), but I really should. I guess I'd put it under my GitHub profile (I'm also csirac2 there).

There are two types of ARM builds: ones which can cross-compile and those that can't. The ones which can't cross-compile are done with qemu-binfmts: we chroot into an ARM filesystem and run the build there.

Perhaps the only useful contribution would be the fact that I persist the ccache up to the docker host with a shared ccache volume, and that helps enormously, especially for the qemu-binfmts builds, which can be quite slow.
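Roughly like this (the paths here are illustrative, not my exact layout):

    # Mount a ccache directory from the docker host into the build container,
    # so compiled objects survive across otherwise-fresh builds.
    docker run --rm \
        -v /var/cache/ccache:/root/.ccache \
        -e CCACHE_DIR=/root/.ccache \
        arm-build /build-scripts/cross-compile.sh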


Where does your original (non-qemu) toolchain come from? Is that built inside or outside of a container?


(em)debian provides nearly enough of what I need most of the time. The trigger for going to an ARM chroot is when I can't get build dependencies installed properly on an amd64 host. Either that, or the thing I'm compiling just isn't developed to be cross-compiler friendly and it's too much work to hotwire it to be so.

For example, say I need libfoo and I have an amd64 host (being the docker container). Sometimes I just can't get the libfoo:armhf or libfoo-dev:armhf package installed because it would break/conflict with the amd64 host's version of it in some way. xapt often helps, but then sometimes screws up by re-packaging something that has an "all" arch (non-arch-specific) as something armhf-specific (e.g. foo-data). This ultimately either conflicts with the host or fails to be named properly in such a way that it meets the build-deps of the project.

Sometimes I know it would be easier in some cases to avoid the debian packaging ecosystem, but for my workflow and distribution requirements it brings a lot of benefits.

Edit: see here https://wiki.debian.org/EmdebianToolchain


Self-hosted PaaS platforms like Dokku, Flynn, Deis, etc. give you Heroku-like deployment & management without the cost of something like Heroku. All three of these are built on Docker.

I've been running Dokku on DigitalOcean lately and it has been great (though I'm planning on eventually moving to Deis, now that DigitalOcean supports CoreOS).

https://github.com/progrium/dokku https://flynn.io http://deis.io


Right now I use it for demos... We use a stack that is a little complex for someone wanting a quick taste. So provided they have a modern Linux with docker installed, they can quickly do a self-demo.

And if they don't, well, having a fresh image that is easily rebuildable makes it easier for me to give demos than to spin up fresh VMs. Arguably I could have done the same with Vagrant, but I really didn't need the overhead of the VM.

But for using it in deployment, no.


This is really cool to hear. If you're interested in having us know more about your project, I'd be more than happy to take the time. Information is in my profile.


For a good set of use cases, see https://docker.com/resources/usecases/


Truth be told, to me those are not very meaningful, as those companies operate at a scale and have requirements that are on another planet from mine. Is Docker useful for people/organizations that are not so large? The other responses seem to think so!


Bringing up a properly configured VM that stays up to date is actually a lot of work, and the more software you pile directly on top of that VM, the harder it is.

For me, my VMs are actually dead simple and relatively homogeneous. Fresh install + lock down SSH + lock down iptables + install Git + install Docker.

Then I build from the correct Dockerfile and open whichever port(s) on the VM. Bang, instant node that does [whatever]. Time to upgrade the DB/mail server/app? Build from the new Dockerfile, stop the existing container, bring up the new one. No worrying about installing/uninstalling the correct dependencies on the VM and getting into an inconsistent state.

That's what Docker does for you.
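A rough sketch of that upgrade flow (image and container names invented for illustration):

    docker build -t myapp:v2 .                       # build from the new Dockerfile
    docker stop myapp && docker rm myapp             # stop and remove the existing container
    docker run -d --name myapp -p 80:8080 myapp:v2   # bring up the new one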


That's also what any sane configuration management system does for you.

Don't get me wrong, I think docker is great, but this whole "I get simple idempotent machines" thing is a solved problem. Docker excels at bundling applications with all their dependencies, as a modern analogue to static linking, or a Linux equivalent to OS X's .app


Most configuration management systems actually make ongoing configuration management pretty hard. You end up with your supposedly declarative configuration files littered with upgrade scripts and "please uninstall that package", unless you simply wipe your systems and install them from scratch with each change.

I think the point of Docker is that it decouples that "destroy a system and start a new one" from the VPS/cloud/server provider you're using; you can talk about "logical" systems as separate from whatever they're actually running on.

Personally, my use of NixOS means that I don't have the problem, so I don't need Docker. My systems get generated from scratch (logically speaking) whenever I upgrade without having to tear down the VM, and then some simple logic figures out which services to restart.


Solved by what, Puppet and the like? It's simply too easy to get your machine into an inconsistent state as a result of someone fiddling around, and God forbid you switch from Ubuntu to CentOS or something like that.

I find it's easier to make the OS part as simple as possible and use Docker for everything else since you can strictly enforce that your Dockerfile will always build the same way regardless of which OS you build it on.


This is my experience with Chef/Vagrant and one of the principal reasons we're moving our deployment over to Docker with CoreOS. Using etcd/skydns for service discovery and fleet to deploy containers. So far it's going very well, much easier than Chef ever was.


One big difference is that if you don't declare a resource, it doesn't exist anymore. With Chef, Puppet, and friends there's a big gray area of things that are on the machine but not managed by the tool. In practice there are always things lingering on old machines that are not described by the SCM.


It changes your deployment "atom": instead of deploying compressed artefacts, you deploy containers. This has some advantages in itself (e.g. it makes it easier to do gradual upgrades of your full stack, and you can run containers side by side on the same host), but it is especially nice in combination with Mesos and Marathon, which enable you to scale out horizontally across your cluster.


Out of curiosity, how do you manage your servers? Chef, Puppet, Salt Stack? Simple shell scripts? Why not using something like Heroku if you want the flexibility of scaling only when needed?


I mostly manage them with emacs and dpkg/apt. You can build quite a lot of stuff on one Linode if you're charging money for it.


I went to read the blog post and found out that blog.docker.com doesn't support TLS 1.2, and only has one available cipher suite:

      TLS_RSA_WITH_RC4_128_SHA
Which is cool, because RC4 is broken.

docker.com actually does support TLS 1.2, but their blog subdomain doesn't :\
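For anyone who wants to reproduce the check, one way (assuming a stock openssl CLI) is:

    # Fails to negotiate if the server really has no TLS 1.2 support:
    openssl s_client -connect blog.docker.com:443 -tls1_2 < /dev/null
    # Shows the protocol and cipher chosen for a default handshake:
    openssl s_client -connect blog.docker.com:443 < /dev/null | grep -E 'Protocol|Cipher'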


Thank you. The blog is on different infrastructure from our website and Docker Hub. We'll look at this pronto! If you discover any other security issues or concerns, please send them to security@docker.com.


Yes and thank you for responding quickly to my email.


Can someone explain Docker in layman's terms and juxtapose it against something I already understand (AWS, perhaps)?


In the aughts, virtualization developed as a software layer to abstract physical hardware and provide so-called "virtual machines" that, instead of working exclusively with hardware, function as a group and share resources as a pool.

Docker provides one more layer of abstraction and grouping where a machine (or, commonly, a virtual machine) abstracts its resources in order to provide them to thread-like "containers." These containers share their resources with other containers running on a given host.

A docker container, written like a spec into a `Dockerfile`, is a way to package your application as if there was a `run.sh` that would install your OS, any dependencies and your application itself -- and, importantly, run that application after everything is installed. The host can choose to surface to the world any ports on the running container, or keep them private to itself. The container draws from the host's pool of resources so long as your application continues to run inside the "thread" managed by docker.
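A toy illustration of that analogy (image and file names are invented): the `Dockerfile` below installs the OS and a dependency, adds the application, and says how to run it; the host then chooses whether to surface the port.

    cat > Dockerfile <<'EOF'
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y python
    ADD app.py /app.py
    CMD ["python", "/app.py"]
    EOF
    docker build -t webapp .
    docker run -d -p 8080:8080 webapp   # surface the container's port 8080 on the host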


Docker is kind of like a virtual machine, except without all the overhead of the operating system so you have a much smaller package. It also uses Linux containers ( http://en.wikipedia.org/wiki/LXC) to help with this process.

The overall idea is that I can make an app and use Docker to control things like the versions of third-party software that the app relies on. So for a web app I can include the versions of MySQL, PHP, Ruby, etc. that I want, and then distribute a "dockerized" version of the app. Now when I distribute it for testing or to other servers, I don't need to worry about versions. It just works.

At least that's what I gather from reading their site last weekend. I plan to start using it for one of our projects soon.


Docker uses built-in features of the Linux kernel to provide namespaces, called containers, in which to run isolated processes. Docker runs on Linux. Each container is built from some Linux distribution, with all system calls delegated to the underlying kernel. Thus the container is mainly application code, as compared to a VM, where each instance is a full copy of the OS. Notably, a docker container does not offer a GUI unless you run VNC. So one would not ordinarily develop programs within a container.

Docker makes it easy to use features of the Linux kernel that have been around for a while. Expect Microsoft to discover this technique in a couple of years.


chroot?



I tried Docker for a little while, just to see what it is. It seems like the authors never used UNIX before. Nonstandard argument format, some strange formatting in the manual page. And the idea of downloading random software from strangers from the internet and running it on your machine creeps me out as well.


Sorry you had a bad experience. When was the last time you tried Docker? As an example...

> Nonstandard argument format

This has changed

> some strange formatting in the manual page

Also has changed

> And the idea of downloading random software from strangers from the internet and running it on your machine creeps me out as well.

So you don't use any sort of package management with the distro of your choice?

BTW - you don't have to use docker the way you describe. You can `docker import` any rootfs to create a base image and only push/pull images you have created.
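For example (paths and names hypothetical):

    # Turn an existing root filesystem into a private base image;
    # from there, only push/pull images you have built yourself.
    tar -C /path/to/rootfs -cf - . | docker import - mybase:latest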


"And the idea of downloading random software from strangers from the internet and running it on your machine creeps me out as well."

Do you compile everything you install from source on your open hardware? If so, you are by far in the minority here. If not, you're inconsistent.


Of course nobody is running random images from the internet in production. You build your own. Building custom images is not rocket science.

But to get started with docker, it's incredibly easy to download an image and have something running within minutes.


I run "random" images. However, I do so by only running automated builds with source code I can (and do) first audit. Some of those images, by the way, are official and provided by the application developer / org.


Congrats to all the team!


Thank you for being a big part in the community, specifically around boot2docker. You rock!


It's very interesting to see that Docker is getting so much recognition, money, and success -- while the real core of this thing, LXC, is rarely mentioned, and its authors are not part of this huge success.


Docker doesn't use lxc by default anymore (although you can still run the docker daemon with it if you want). It uses libcontainer, which they wrote: https://github.com/docker/libcontainer
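(If memory serves, the daemon flag for that looks something like the following; treat it as a sketch rather than gospel:)

    # start the daemon with the LXC userland driver instead of the default
    # "native" (libcontainer) execution driver
    docker -d --exec-driver=lxc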


Good to know, thanks.


Docker is one of those FOSS projects that I always want to actively contribute to, but I don't, as they seem to be doing great without any help.


Someone made a contribution earlier today as small as adding a carriage return in an RST file. That is extremely appreciated by everyone.

I'd encourage you to jump in. The IRC channel is fairly active and there's a ton of places to get started at all experience levels. Let me know if you need help.


The stewardship of community contributions seems particularly poor in Docker. It's one of the more frustrating projects I have tried to help, and that is really the only red flag. The promise of Docker is awesome.

See this thread (and previous discussions around the issue) begging for docker core to participate and getting nowhere for ~1 year: https://github.com/docker/docker/issues/7284

Is there an outline somewhere on your plan for governance and stewardship for community contributions, how proposals move through the pipeline, and whether anyone outside of Docker, Inc has the commit bit?


I'm sorry to hear you've had a frustrating experience. The governance model should be outlined in the CONTRIBUTING.md file in the repository, which should address your concerns.

There are project maintainers that are not on the Docker, Inc. payroll, and getting anything committed requires the approval of at least 2. We actually consider this a litmus test for our involvement with the ecosystem, and it's fundamentally a great thing.

As for the issue at hand, I personally understand the desire on both sides. I have been frustrated multiple times by not being able to have multiple Dockerfiles per repo, as a simple example. On the other hand, providing strict guarantees about context ensures true portability of Dockerfiles.

What I will say is that this is a topic we talk about a lot, whether it be on the issues themselves or in IRC. It's tough to get the right balance.


Docker, Inc employees have popped in to make an offhand comment at various points, similar to the one you made ("providing strict guarantees about context ensures true portability of Dockerfiles"), but actual participation is nonexistent.

This has been the case on other issues I have seen as well: either things languish, or they get magically swept into the project, with the decision happening elsewhere.

Stewardship would mean actually explaining the position above, and discussing with the community how the issue affects them in order to gain an understanding of what we are talking about.

Yelling "repeatability" with no context and then disappearing is pretty frustrating.


Jake, you're being disingenuous. Back in April I gave a detailed explanation [1] as to why I hesitated to make the proposed change. When comments kept rolling in, I followed up in June with a possible solution and a conclusion that "if somebody contributed this, we would love to merge it" [2]. To my knowledge nobody has.

I know for a fact that you are aware of this since my comment was in direct response to you.

And since comments are still rolling in (even though there is already an open call for contribution, with a pre-approved design), I am focusing on it again this week [3].

There are definitely lots of growing pains in how we run the project, and having any participation at all from you is super appreciated. But the picture you paint here is unfair and inaccurate.

[1] https://github.com/docker/docker/issues/2112#issuecomment-39...

[2] https://github.com/docker/docker/issues/2112#issuecomment-47...

[3] https://github.com/docker/docker/issues/7284#issuecomment-55...


I wonder how much of this was just a way to rearrange who owns how much, ahead of the inevitable acquisition.


"the money helps show the market that the company has stability"

Free money from some dudes unrelated to the company's business really shouldn't indicate "stability", should it?


This will help them a lot with big customers deciding whether they should use Docker, because they now have longer-term stability and support.


'Hot' here should be interpreted as a new, fresh, popular meme and buzzword: 'cool stuff for the cloud -- orchestration, you know'.

Well, in this way it is hot indeed.


Docker doesn't do orchestration and doesn't provide a PaaS.

I imagine they'll try to grow in that direction because their customers will hanker for it, but (and I'm biased here because I work for a PaaS developer) they'll find that building automagical distributed platforms is hard. Very hard.

Edit: from the blog post -- it looks like moving up into PaaS is their intention.


Why would Docker sell their dotCloud platform if they wanted to stay in the PaaS business?


God I hate it when people make excellent points directly underneath my remarks.


yep they are hot as in overhyped


Here's a copy of the blog post for anyone having trouble reading it.

Today is a great day for the Docker team and the whole Docker ecosystem.

We are pleased to announce that Docker has closed a $40M Series C funding round led by Sequoia Capital. In addition to giving us significant financial resources, Docker now has the insights and support of a board that includes Benchmark, Greylock, Sequoia, Trinity, and Jerry Yang.

This puts us in a great position to invest aggressively in the future of distributed applications. We’ll be able to significantly expand and build the Docker platform and our ecosystem of developers, contributors, and partners, while developing a broader set of solutions for enterprise users. We are also very fortunate that we’ll be gaining the counsel of Bill Coughran, who was the SVP of Engineering at Google for eight years prior to joining Sequoia, and who helped spearhead the extensive adoption of container-based technologies in Google’s infrastructure.

While the size, composition, and valuation of the round are great, they are really a lagging indicator of the amazing work done by the Docker team and community. They demonstrate the amazing impact our open source project is having. Our user community has grown exponentially into the millions and we have a constantly expanding network of contributors, partners, and adopters. Search on GitHub, and you’ll now find over 13,000 projects with “Docker” in the title.

Docker’s 600 open source contributors can be proud that the Docker platform’s imprint has been so profound, so quickly. Before Docker, containers were viewed as an infrastructure-centric technology that was difficult to implement and remained largely in the purview of web-scale companies. Today, the Docker community has built that low-level technology into the basis of a whole new way to build, ship, and run applications.

Looking forward over the next 18 months, we’ll see another Docker-led transformation, this one aimed at the heart of application architecture. This transformation will be a shift from slow-to-evolve, monolithic applications to dynamic, distributed ones.

SHIFT IN APPLICATIONS

As we see it, apps will increasingly be composed of multiple Dockerized components, capable of being deployed as a logical, Docker unit across any combination of servers, clusters, or data-centers.

DISTRIBUTED, DOCKERIZED APPS

We’ve already seen large-scale web companies (such as GILT, eBay, Spotify, Yandex, and Baidu) weaving this new flexibility into the fabric of their application teams. At Gilt, for example, Docker functions as a tool of organizational empowerment, allowing small teams to own discrete services which they use to create innovations they can build into production over 100 times a day. Similar initiatives are also underway in more traditional enterprise environments, including many of the largest financial institutions and government agencies.

This movement towards distributed applications is evident when we look at the activity within Docker Hub Registry, where developers can actively share and collaborate on Dockerized components. In the three months since its launch, the registry has grown beyond 35,000 Dockerized applications, forming the basis for rapid and flexible composition of distributed applications leveraging a large library of stable, pre-built base images.

Future of Distributed Apps: 5 Easy Steps

The past 18 months have been largely about creating an interoperable, consistent format around containers, and building an ecosystem of users, tools, platforms, and applications to support that format. Over the next year, you’ll see that effort continue, as we put the proceeds of this round to use in driving advances in multiple areas to fully support multi-Docker container applications. (Look for significant advances in orchestration, clustering, scheduling, storage, and networking.) You’ll also see continued advances in the overall Docker platform–both Docker Hub and Docker Engine.

The work and feedback we’ve gotten from our customers as they evolve through these Docker-led transformations has profoundly influenced how Docker itself has evolved. We are deeply grateful for those contributions.

The journey we’ve undertaken with our community over the past 18 months has been humbling and thrilling. We are excited and energized for what’s coming next.


It's nice to see the folks building the building blocks getting some - financial - attention.

Now, to yield a nice return on that, they just have to turn Docker into an ephemeral social photo-sharing app for blind vegan Bulldogs and say they want to change the world ;)


That's hilarious. :)

I think containers as a concept have the chance to really fundamentally change the way applications are developed, delivered, and managed in data centers going forward - whether it's my own little rack sitting in a corner office, or a large-scale, multi-DC deployment.

We're betting on that being Docker, but the worst thing that could happen is to become complacent and not recognize there's a tremendous amount of work left to do.


Trying not to be a curmudgeon - but I really don't see what the big fuss is about, or how docker is fundamentally changing anything.

Not to take anything away from Docker being a decent tool in some circumstances - but really this methodology has been around in one implementation or another for ages on Unix platforms.


That's sorta where I've ended up, too. Great, hard-working group, but Docker strikes me as a very small component of the deployment stack, the entirety of which doesn't even have much enterprise value.

What would be a good comparison for a component company like this getting to 100s of millions or $1b?


LOL. The massive and obvious enterprise value is in having a standard way to deploy and interface with isolated Linux applications along with their dependencies, plus a convenient hub for distributing and exchanging them.

No, it's not the first time that technologies with _some_ of these types of capabilities have been available, but it is the first time this powerful combination of capabilities has come together in a way that has so much momentum.


That's fair. And you're absolutely right that the technology has been around for ages.

I would ask - why is it now that it's actually being used outside of the few boutique scenarios that came before?

I personally think it's a change at various levels of the industry (the speed of ideation->delivery, inefficiency in process, DC consolidation, efficiency/density), and Docker is an integral piece in helping alleviate those problems.


Docker is just one piece of the puzzle. They still have a lot of work to do, as he said in his comment. Things like container orchestration, networking, service discovery, etc. If you read the linked blog post, they actually explain this at the end of the article.



