Docker + Red Hat OpenShift = The Tipping Point for Open PaaS? (themiddlewareman.org)
110 points by tenfourty on Oct 4, 2013 | 57 comments



> with Docker as a standard you will have a fully portable application that behaves exactly the same on one PaaS as it does on another

I wish this lie would go away. You have plenty of dependencies with a Docker app and it's trivial to either be missing them or have conflicting ones. Go ahead and copy some binaries from one random Linux distro three years ago to one made today and see if they work every time; not every chroot environment is backwards (or forwards) compatible.

While we're talking about "standards", can't we agree that LXC is still the simplest and most portable 'standard' for running Linux containers? Everyone wants a cool branded wrapper with an API, but LXC has the bare metal functionality that you need and costs you less container-specific maintenance.


Hi,

Evidently you don't feel the need for Docker and don't believe it's useful or needed, which is of course totally fine. I won't try to convince you since I've tried before and failed :) (if others are interested I cover some of the differences between docker and lxc here: http://stackoverflow.com/questions/17989306/what-does-docker...)

However there are obviously people who disagree with you and find that Docker solves a real problem beyond what lxc can do. This seems to irritate you, to the point of "bullying" every thread about docker on hacker news. Why? Is it not a good thing that we toolmakers try new ways to solve a problem? If docker doesn't do a good job, people will stop using it, and the project will fail. It's really that simple. Why not let people make up their own minds instead of going on a crusade against a project?


I can't make anyone's mind up for them. But if all they ever read are the hyperbolic marketing pieces about the glories of one tool, they're less likely to take the time to vet the claims and compare it to other tools.

There are a lot of people on HN, and many of them simply aren't familiar enough with the technology to avoid buying into every product shilled in a blog post or news article. It's like the "HN effect" is convincing people that because it's on the front page, it's somehow intellectually stimulating, superior, or factually correct, when often this isn't the case. Luckily we have comments to put forth alternate views.


Your post assumes that every one of us on here is dumb and will fall victim to these "traps", lest we be saved by your brilliant opinions and revelatory messages (hyperbolic in their own way, ironically).


It seems peterwwillis picked up on one very specific claim (that Docker is a portable standard) and called it out as false. That's exactly what we should be doing with false marketing claims, or else we end up with MongoDB all over again.

If the two of you have a history, that's fine, but I think the OP made a fair comment, and you've attacked him personally rather than replying to the seemingly-valid point he made.

I think the answer is that Docker is in fact less portable than previous standards (e.g. bare disk VM images), but is potentially a lot more efficient (now that AuFS has been dumped). Is that correct?


peterwwillis makes 3 claims: 1) docker does not allow standardized, portable deployment of applications; 2) docker is a "fancy wrapper" over lxc and adds no substantial value; 3) anyone who disagrees with this is spreading "lies" and "hyperbolic marketing claims" which will cause the readers of hacker news to form an incorrect and uninformed opinion.

I addressed points 1 and 2 by linking to this thread: http://stackoverflow.com/questions/17989306/what-does-docker.... It lists half a dozen ways that I believe docker is different from lxc. The first item in the list is "portable deployment across machines".

I then called him out on point 3 because I consider it unfair and harmful. First, it insults the intelligence of Hacker News readers ("they eat the marketing up, but I don't!"). Second, it introduces FUD as a substitute for facts. Shouting "marketing hype!" on Hacker News is the equivalent of shouting "anti-American!" in Congress. How many people will come away from this thread thinking "huh, I wonder if docker is marketing hype after all?" simply because peterwwillis claimed it? Talk about marketing hyperbole.


I take it you two do have history, because I didn't really pick up on #2 and #3 from his comment. I would suggest that every time you say "docker is not just a fancy wrapper around LXC", mentally people assume that docker is probably little more than a fancy wrapper around LXC. Methinks you doth protest too much!

For #1, you've linked to a thread in which you claimed docker is more portable than LXC. I think you guys are arguing at cross-purposes here: you're saying that you're more portable because e.g. you abstract away the IP configuration (true), while he is saying you're not portable because you're tied to specific kernel features (also true?).

I, for one, would like to hear more of Docker's benefits clearly explained (i.e. less shipping container metaphors, and more talk of IP configuration). I'd also like to hear more about the limitations of Docker (e.g. what kernel versions can I move containers between, on which kernel versions is it secure etc).

As for #2 and #3, those seem like non-fact-based arguments, so I'll let you and peter continue screaming at each other about who is more wrong on the Internet :-)


> I, for one, would like to hear more of Docker's benefits clearly explained (i.e. less shipping container metaphors, and more talk of IP configuration). I'd also like to hear more about the limitations of Docker (e.g. what kernel versions can I move containers between, on which kernel versions is it secure etc).

Have you browsed the website at all? If you look through the Docker blog (http://blog.docker.io), the docs (http://docs.docker.io) and the user mailing list (https://groups.google.com/forum/?fromgroups#!forum/docker-us...) you will find plenty of resources.

For example here are a few videos of actual people explaining how they use docker and why they like it (Ebay, Cloudflare and Mailgun/Rackspace). http://blog.docker.io/2013/08/docker-hack-day-6-lightning-ta...

There's also an online tutorial which lets you dive directly into a command-line simulator.

If you want to dig a little deeper you can also browse the dev mailing list (https://groups.google.com/forum/#!forum/docker-dev) and our IRC channel: #docker on freenode.

I'm pretty sure none of these resources mention the shipping container more than once.


I'm sorry you took offense; it was intended as friendly feedback. I'm familiar with Docker. If you have actual answers, it would be much more constructive to post relevant links.


> that's exactly what we should be doing with false marketing claims, or else we end up with MongoDB all over again

Sorry, but I don't get the reference to MongoDB. What happened there?


I was suggesting that MongoDB's marketing claims initially went unchecked here and generally. It seems a lot of people assumed that others had done the due-diligence for them.

They started using it not knowing that it was not very concurrent, that the default transaction mode allowed data-loss, that it was slow when run in safe-mode, that it was not reliable unless replicated etc. (And these were all design issues, not implementation bugs). A lot of those people were pretty badly burned, particularly in the early days.


Ah, I was unaware of that bit of history.


A few thoughts on this. Firstly, you are right to an extent: Docker doesn't give you complete portability, because of binary dependencies. But if, for example, I wanted to move my app from the same major version of CentOS or RHEL on a virtual machine to something running the same major version in the cloud, it should be relatively straightforward. You are right, though, that this won't work going from something like Ubuntu to SUSE.
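
To make that concrete, moving the same app between two hosts on the same major OS version is basically a push and a pull through a registry; here's a rough sketch (registry address and image names are just placeholders):

    # On the build host (a CentOS/RHEL 6.x VM); registry and image names are placeholders
    docker build -t my-registry.example.com/myapp /path/to/myapp
    docker push my-registry.example.com/myapp

    # On the target cloud host running the same major OS version
    docker pull my-registry.example.com/myapp
    docker run -d my-registry.example.com/myapp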

LXC currently has a gaping hole around security because it doesn't use SELinux - you might want to read this article: http://mattoncloud.org/2012/07/16/are-lxc-containers-enough/

The key bit is that we are starting to converge on a standard container for the Linux OS, and with Red Hat working to integrate SELinux we should have a pretty awesome container for our apps.

Finally, Docker adds a lot more on top of LXC, which is why people love it so much - you can see a comprehensive answer by Solomon Hykes here: http://stackoverflow.com/questions/17989306/what-does-docker...

Given the above, I'd rather standardise on an open project that adds a lot more value on top of LXC and hides that complexity away; and given that the largest Linux vendor is putting its weight behind this, I'd say this is rather awesome!


Who says LXC doesn't support SELinux?

https://github.com/lxc/lxc/blob/99282c429a23a2ffa699ca149bb7...

      <title>SELinux context</title>
      <para>
  If lxc was compiled and installed with SELinux support, and the host
  system has SELinux enabled, then the SELinux context under which the
  container should be run can be specified in the container
  configuration. The default is <command>unconfined_t</command>,
  which means that lxc will not attempt to change contexts.
      </para>
IBM guide for SELinux-protected containers: http://www.ibm.com/developerworks/library/l-lxc-security/#N1...

Additionally, LXC is "Open" since it is GPL2; it has been around since before 2006, so it's as much a de-facto standard as you can get; it's already supported by multiple other projects and distros; and it doesn't lock you into "the Docker way" of doing things - you get to choose how you implement it. It's flexible, lightweight, simple, and stable. Docker will work for many use cases, but LXC will work for all of them. It's lynx vs wget, basically.

The libvirt-sandbox project looks like a nice way to manage sandboxed selinux-supported lxc instances that you can convert to qemu/kvm depending on your needs.


Static linking. It works.


I would very much like to use Docker/containers to provision and deploy my software stacks on a beefy server I have. The ideal for me would be a whole VM in a container (nginx/uwsgi/redis/postgres/etc), but Docker can't currently do that (it only runs a single process).

Is there a practical/good way to easily do what I want, with Docker or another tool?


Did you try Dokku[1] (git hook + Heroku buildpacks + Docker)? You can build/deploy several apps to Docker containers using git push. It has a PostgreSQL plugin and exposes your app via nginx. Of course it wouldn't be 100% usable as-is, but its codebase is very hackable, around 150 LOC.

For example, I've forked it to build a μPaaS[2]: it has out-of-the-box integration with upstart (for supervision and logging) and replaces Heroku buildpacks with the ability to define the stack in a Dockerfile (which is much simpler).

[1]: https://github.com/progrium/dokku [2]: https://github.com/andreypopp/upaas
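
For a sense of the workflow, a deploy is basically a git push; a minimal sketch with made-up host and app names:

    # Add the Dokku host as a git remote (host and app names are made up)
    git remote add dokku dokku@paas.example.com:myapp

    # Pushing builds the app into a Docker container and exposes it behind nginx
    git push dokku master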


The only reason I haven't tried it is because I've never really used Heroku, but I'll give it a shot now, thanks.


I actually spawn multiple processes thanks to supervisord, which also restarts them if needed.

See https://github.com/steeve/docker-lemp
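
Roughly, the pattern is this (only a sketch; program names and paths are illustrative, not copied verbatim from docker-lemp):

    # Illustrative supervisord config for running several services in one container
    # (program names and paths are assumptions, not copied from docker-lemp):
    #
    #   [program:nginx]
    #   command=/usr/sbin/nginx -g "daemon off;"
    #   autorestart=true
    #
    #   [program:php-fpm]
    #   command=/usr/sbin/php5-fpm --nodaemonize
    #   autorestart=true
    #
    # The Dockerfile's CMD runs supervisord in the foreground (-n), so Docker
    # sees a single long-lived process that supervises everything else:
    #   CMD ["/usr/bin/supervisord", "-n"]

    # Build and run as usual (image name is made up):
    docker build -t my/lemp .
    docker run -d -p 80:80 my/lemp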


That's my preference as well, I just prefer upstart to supervisord. Does anyone know how to start it offhand? It's probably just as easy as running the upstart daemon.

EDIT: Well, supervisord looks straightforward, and you only need a single config file for everything. You have swayed me, thank you for that. This will do perfectly, I will try it today.


Can't edit on mobile, but here's the example referenced:

https://github.com/dotcloud/collectd-graphite?files=1


Fantastic, thank you. This is the approach I'll follow, and that looks very well structured.


There's a great example here of a well done Dockerfile and implementation using supervisor.


Just create a bootstrap shell script that can be run on any machine. At a high level here's what mine does:

1) Install git

2) Clone a repo with all of the Dockerfiles in it (each Dockerfile corresponds to a container)

3) Install Docker

4) Start containers using the previously-mentioned scripts

The advantage to using a shell script for this is that you can start it manually or via an automated system. All you need is the bootstrap script and you can start your app on any machine. Well, apart from the distributions the post mentions I guess.
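
A rough sketch of what such a bootstrap script can look like (repo URL, image names and install commands are placeholders):

    #!/bin/bash
    # Bootstrap sketch; repo URL, image names and install steps are placeholders.
    set -e

    # 1) Install git (shown for a yum-based host; adjust for apt)
    yum install -y git

    # 2) Clone the repo that holds one Dockerfile per container
    git clone https://example.com/yourorg/dockerfiles.git /opt/dockerfiles

    # 3) Install Docker (one common method; distro packages also work)
    curl -sSL https://get.docker.io/ | sh

    # 4) Build and start a container from each Dockerfile directory
    for dir in /opt/dockerfiles/*/; do
        name=$(basename "$dir")
        docker build -t "local/$name" "$dir"
        docker run -d "local/$name"
    done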


This is what I want the Dockerfile to do for me. For some reason, I don't really want to manage all the different components' containers myself (it's added complexity).


It's not really limited to one process: you can just run a bash script to start everything, or ideally some daemon management program.


This might be interesting too:

Pipework

https://github.com/jpetazzo/pipework

Essentially it lets you create private networks between containers which could each run its own service (one for Postgres, one for Redis, etc).
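
A quick sketch of the usage (bridge name, image names and addresses are made up):

    # Start two service containers (image names are made up)
    DB=$(docker run -d my/postgres)
    CACHE=$(docker run -d my/redis)

    # Give each one an extra interface on a private bridge "br1"
    pipework br1 $DB 192.168.1.10/24
    pipework br1 $CACHE 192.168.1.11/24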


You should check out ShipBuilder [1][2]. ShipBuilder is a freely available open-source self-hosted PaaS -- it aims to be an open clone of Heroku.

It uses Go, LXC, and HAProxy.

[1] https://github.com/sendhub/shipbuilder

[2] http://shipbuilder.io

Disclaimer: I am a committer on the project


Hmm, I had a look, but it's very light on documentation. So light that it didn't really explain what the advantage is, or why I should try it, or how it works. (I tend to tune out "open-source PaaS" offerings because they've consistently been too hard/heavy/weird to set up.)


The process docker runs in the container can be /sbin/init, which spawns an entire OS. It's just not how it's normally used.
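
A minimal sketch (image name is made up; depending on the init system you may need extra privileges or a purpose-built base image):

    # Run a full init system as the container's entry process
    # (image name is made up; some init systems need extra privileges)
    docker run -d my/centos-base /sbin/init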


Correct me if I'm wrong, but as far as I know Docker actually does run a binary called /sbin/init, it's just that they've replaced it in the base images with something that is more suited for fast startup times and running of a single process. Setting the Docker start command to init will probably not accomplish what you want.


That was true in earlier versions of Docker, but has been fixed. alexlarsson is correct :)


Hmm, I guess that could do it. I wonder if I can run the upstart daemon first to have it start my services. That would be ideal.


Contrary to what you might think, those of us working on http://deis.io/ are excited by the Red Hat / Docker partnership and its implications for interoperability. Why? Because there will never be a one-size-fits-all PaaS.

Deis happens to be built around Chef with a workflow deeply inspired by Heroku. We believe strongly in that approach. Other PaaS's are working with more experimental technologies like CoreOS and etcd, others are going the Erlang route, others seem to be writing things from scratch in Go -- with all of it Docker compatible.

We think this is fantastic for the industry and for consumers (software teams) who will soon have lots of choices in open PaaS.


I absolutely agree with you. Docker is where the community seems to be converging for standardisation around Linux containers. This is a great first step towards application portability across physical, virtual and PaaS environments. Now it will be interesting to see who/what will win when it comes to the cartridge/buildpack side of things.


Is it really where the "community" are converging?

Docker is well-known because DotCloud is pushing it, so there's money for marketing and hype.

Docker is an attempt to make DotCloud relevant.

Docker will live on as an open-source project, but if it doesn't gain traction fast enough for DotCloud to sell its services or raise extra funding, DotCloud will die.


I'd say this is a very negative comment because you are purely criticising without saying where, in your view, the community is actually going. It might be more constructive to give your opinion on where containers should go if not Docker. I agree with you that this has been a pivot by dotCloud, but my hat goes off to them for it and I totally respect them as a startup for doing it!


Hopefully the community will continue moving forward by getting actual work done. These technologies are all cute, but nothing here, except the scale of marketing, is groundbreaking.


What I like about Docker is that if I have a production server build based on CentOS 6.4 and use Docker to install tools x, y and z, then I can easily and quickly build a VM on my desktop with VirtualBox, install the base CentOS 6.4, and run the Docker builds for tools x, y, and z. At that point I have a dev environment that is as close as you can reasonably get to the production server. This means that there will be fewer integration issues in QA and fewer release issues down the road. That is what Docker buys you.

Anyone who has worked with Solaris containers will argue that there is fundamentally no difference, and they would be right. You could also skip Docker and just use build scripts written in bash and get the same results, and that is also true. Docker is a small incremental improvement over these earlier solutions, but what matters is not the size of the improvement but that it does improve the situation. The Dockerfile is cleaner and clearer than a bash script, the registry is already built for you (https://github.com/dotcloud/docker-registry), and the learning curve for new people is much reduced (http://docs.docker.io/en/latest/).

These are worthwhile improvements.
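
For concreteness, the workflow amounts to something like this (a sketch only; image names and paths are placeholders):

    # On both the production server and the local VirtualBox VM (same CentOS 6.4 base),
    # the same Dockerfiles produce the same environments (names are placeholders):
    docker build -t mycorp/tool-x /opt/dockerfiles/tool-x
    docker build -t mycorp/tool-y /opt/dockerfiles/tool-y
    docker run -d mycorp/tool-x
    docker run -d mycorp/tool-y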


They're removing the dependency on AuFS because it's not enterprisey enough, but compared to Docker (which, in my personal opinion, is very immature and lacks almost everything you'd expect from a container virtualization management tool beyond the basic features, although there are, indeed, workarounds for some cases), AuFS is granddad-level mature.


AUFS is what runs the dotCloud platform. The reality is AUFS is pretty amazing, and used in production, and stable and mature as you say.

But, having to require a patched kernel is a huge barrier for adoption.

AUFS will return as an option as soon as possible. It just won't be the default option in the long term.


Actually they're removing AuFS because it has awkward kernel dependencies and prevents Docker from running across all Linux distros. The move to device-mapper layers means Docker no longer requires a 3.8 kernel or an AuFS patch.


I didn't have any problems running AuFS on "stable" 2.6.x kernels (on Debian, Arch and Gentoo, and I can't see why other general-purpose distros wouldn't work) several years ago. I've read that, not being part of the kernel itself, it has problems keeping up with very recent (3.10/3.12) kernels, but that's about it.

Not sure enterprises want to run the very bleeding edge software for sustainable mission-critical business-to-consumer yada yada.


Enterprises don't, but they often do want to run Red Hat or SUSE.


Are all Docker images going to be required to conform to the Cartridge API, or is this another layer of standardness aimed at PaaS systems?

I have to say the Cartridge API does look quite good. Has anyone had any experience with it to share?


Hi Chris, I don't think the Docker team has gotten that far yet in deciding where they are going to go, but you can imagine that, in the future I blogged about, you might be able to layer OpenShift's gears on top of Docker - now that would be very cool! There is certainly some convergence going on here, which is really exciting for PaaS.


Does this force the other major PaaS providers like Heroku to support Docker? Seems like you'd need more providers on board for true portability. Awesome news though.


Heroku, being top dog of the PaaS market, doesn't really have a lot to gain from being able to migrate. Being a market leader, they're already the default for many projects; a standard, easily movable container for applications would quickly strip away the value they can add to a project. A commodity PaaS deployment platform would quickly drive prices down to nearly bare-metal EC2 prices rather than the high markups that Heroku currently charges.

Red Hat's in an interesting position here. They've got a lot of capital & a strong presence in the enterprise, but OpenShift is still a minor player in the PaaS market. They have the resources & the credibility to push an open container, while also being in a position to benefit financially from commodity containers - anyone that moves from Heroku to hosted OpenShift is a win for them, as is any company that goes to RH for their own cloud.


I really hope so. With the work that Red Hat and dotCloud are doing it will be possible for a Docker container to run on any Linux OS, which is a significant first step towards ubiquity. Whether Heroku's Buildpacks can then be layered on top of a Docker container is the question; not impossible, as they are open source, but will Heroku come on board? It seems Red Hat's OpenShift is making moves towards this being possible with their cartridges, which are the equivalent of the Heroku Buildpacks. The key bit is that the community seems to be converging around some standards that could lead to true application portability for PaaS, something we don't really have yet.


Jeff Lindsay wrote https://github.com/progrium/dokku which uses Heroku Buildpacks to build an app and then run it using Docker - it is a very neat little tool : )


Very true. This would be more compelling if Heroku got behind the project themselves, because their buildpacks would be very portable and possibly a future standard. It seems that Docker is becoming the container of choice, but how apps are layered on top is still very much up for debate.


Actually, for me, a combination of OpenVZ + prebuilt VMs for each of the OpenShift components needed would be perfect.

Then I could run it on the free Proxmox, which has built-in clustering for up to 16 hardware nodes.

Given how powerful today's servers are, and how much RAM they can hold, you could spend $200K on hardware (about $10K per server with 256GB RAM, plus InfiniBand switches and cabling) and have an amazing PaaS setup to offer (or use for yourself).


Lol. Still a lot of people who think that Red Hat derivatives are the only "enterprise-grade" server distros. Ubuntu and Debian work great.


Many enterprises will only use RHEL (tooling, support, market maturity etc), hence 'enterprise-grade'.


...and enterprise software vendors (e.g. Oracle) officially support their software running on RHEL. If you're paying 6 figures for a software license, the difference between APT and RPM fades quickly.


What tooling and support are you talking about that RHEL has that is not available for Ubuntu?


I guess the point here is that Enterprise Linux represents a significant share of the market and if Docker doesn't work on it then it can hardly be a standard. With Red Hat behind this project it's a significant boost to Docker becoming a standard around Linux containers. Does that make sense?



