
> Those of us long time Debian users can appreciate having a service for those (very rare) situations in which something is not already (very well) packaged.

Hi, I created Docker and am also a long-time Debian user. I disagree with your assertion that containers are a bandaid for broken systems. Containers take the best parts of solid system packaging, and make them relevant again for software being written today - not 20 years ago.

Before starting Dotcloud I used to work at a Debian-only shop, where .debs were the only accepted vehicle for software changes, company-wide. This allowed for quality ops based on a foundation of "immutable things". But as the stack grew in complexity and the infrastructure and team grew in scale, that became a nightmare.

Here's what kills Debian packages as a universal unit of software delivery:

1) Versioning hell. When 15 developers are each deploying 10 slight variations of a build, in different combinations, on the same underlying set of machines, how do you express that as deb packages? As we found out the hard way, the answer is: you can't, not in any sane way.

2) The tooling sucks for developers. Walk up to a random developer and give them a 5-minute pitch on how to package their software as a Debian package, the right way. There's a reason pkgr.io and fpm exist (see the sketch just below). They are bandaids around a fundamentally flawed developer experience.
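To make that concrete, here is a rough sketch of the fpm shortcut next to the "right way" (package name, version and paths are made up). The blessed route means maintaining a debian/ directory (control, changelog, rules, compat) and running dpkg-buildpackage; with fpm the same throwaway build can be wrapped in one command:

    # turn a build directory into a .deb in one shot (illustrative values)
    fpm -s dir -t deb \
        -n myapp -v 1.2.3 \
        -d 'libssl1.0.0' \
        --prefix /opt/myapp \
        -C ./build .

Which papers over the workflow rather than fixing it - that's the "bandaid" point.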

Side note: sometimes I wish Docker wasn't so popular and hyped. It seems that for a portion of the engineering population, anything popular is necessarily poorly designed, by people unaware of the state of the art. As it turns out, we are very aware of the state of the art, have dealt with it for many years, and argue that there is room for improvement.

Just my 2 cents.




Like I said, the rise of slick containerization (like Docker, thanks!) is really awesome and has its uses, but some people are (ab)using it as a bandaid; I wasn't saying containers are a bandaid. Chalk it up as a criticism of the abuse of the tool, not a criticism of the tool (how's that saying go? "A successful [software] tool is one that was used to do something undreamed of by its author." -- S.C. Johnson)

And I've seen the complaints that .debs can't keep up before, usually from Ruby programmers justifying slapdash Gems. Now I'll admit that reality isn't perfect, and software can be messy. But I have to ask: why in the world are there so many "slight" variations of a build? And if they're so slight, why do they break things? That just sounds like a lack of self-discipline, and I'm not sure how containers would help, other than as a bandaid over the underlying problem.

As for the difficulties of packaging, I'm glad things like pkgr.io exist to streamline the process. And I'm glad that solutions like Docker exist to streamline the container process. But if you're spinning up a VM (no matter how easy that is) to fix dependency issues, I think you might have other issues. I could be wrong, though, and as a pragmatist, I will admit that shipping wins almost every time :)


> why in the world are there so many "slight" variations of a build?

That's what happens if you tell all your developers to stop using capistrano, fabric, ftp and heroku, and tell them to create .debs out of EVERYTHING instead. They're going to (unhappily) build and install dozens of packages per hour, because that's how developers increasingly work: by deploying their stuff very, very frequently.

In our case it was the combination of 1) rapid development and staging, 2) different environments for different customers, and 3) numerous, fast-moving and precise dependencies.

1) Rapid development and staging: there are several developers, building from their respective branches. They want to deploy and test their work, sometimes in isolation (dev), sometimes as a team before releasing (staging). The overhead of deploying a new machine every time (virtual or not; in this particular case bare metal was part of the equation) was considered overkill. But that's OK, Debian packages can be named, flavored and versioned, so deploying in parallel should work out, right? Wrong, as it turns out... having to manually "scope" each deployment by generating different package names, flavors and versions basically breaks every assumption of system packaging (see the illustration after this list). And if you want to get a little fancy and, say, map particular versions/flavors to particular git commits... forget it.

2) Different environments for different customers. All of the above was multiplied by the fact that this was appliance software - each customer got its own build, which on a good day was stock and, for the more demanding customers, involved various customizations, build options, etc. This is business as usual in many "vertical" software markets where the most valuable customers always have some legacy crap software and workflows they need to integrate with.

3) Numerous, fast-moving and precise dependencies. This software had lots of dependencies; the exact versions of those dependencies, and even how they were built (and the exact build environment), mattered immensely, and the whole thing was a moving target because we frequently patched those dependencies. In this particular company the dependencies had mostly to do with video processing (custom ffmpeg builds etc), but this is also business as usual in many software-intensive organizations. The project rarely fits in a neat Rails or Spring app. There are all sorts of libraries, binaries and rpc services floating around - and deb packaging forces you to break all of those down into dependencies, further multiplying the dependency hell that your life already is. It's also increasingly typical (and good!) for developers to be very specific about the exact version and build of all their dependencies (from mysql to libssl) because it affects the behavior of their application. So you can't just wing it by swapping out their particular postgres setup with something "kind of the same thing". "It's the same version, what else do you want?" --> not good enough.
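To illustrate the "scoping" hack from 1): since dpkg only lets one version of a given package name be installed at a time, deploying developers' branches side by side on shared machines ends up encoding developer, branch and commit into the package name or version, something like (entirely made-up names):

    myapp-alice-videofix_2.1.0~20140312+gitabc1234_amd64.deb
    myapp-bob-ratelimit_2.1.0~20140312+gitdef5678_amd64.deb

At that point every dependency declaration, file path and init script that referred to plain "myapp" has to be rewritten per variant, and the version semantics the tooling assumes stop meaning anything.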

> And if they're so slight, why do they break things?

I meant that they break traditional system packaging. Simply put, system packaging is not well designed for applications which A) are made of many components, B) are updated and shipped several times per hour, and C) are deployed on a large set of heterogeneous machines.

... And I didn't even mention the case where sales come back saying "the customer wants RPMs...".

Good times.

So yeah, I'm sticking to containers and not looking back :)
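For contrast, a minimal sketch of what that kind of dependency pinning can look like on the container side - the base image, package names, versions and paths here are invented, not the actual stack described above:

    # Dockerfile - illustrative only
    FROM debian:wheezy
    # the exact dependency builds the app was tested against travel with the image,
    # including a locally patched ffmpeg shipped alongside the source
    COPY vendor/ffmpeg-custom_2.1.4-1_amd64.deb /tmp/
    RUN apt-get update && apt-get install -y libx264-123 \
     && dpkg -i /tmp/ffmpeg-custom_2.1.4-1_amd64.deb
    COPY . /opt/myapp
    CMD ["/opt/myapp/bin/server"]

The whole dependency graph, down to the patched builds, ships as one image instead of a constellation of scoped .debs.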


What's the use case for 1)? It sounds like a peculiar situation to get into.

As for 2), if you ignore the parts of the tooling which are primarily for building and publishing Debian itself and only use the bits which are oriented around building a .deb, it's remarkably simple. What sucks is the documentation, I suspect because Debian doesn't want people to know anything other than "the right way". What's right for Debian isn't necessarily right for anyone else's organisation, but figuring out which parts of the tooling usefully stand alone is not easy.
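For what it's worth, the stripped-down path can be as small as a directory with a DEBIAN/control file fed to dpkg-deb (names, fields and paths below are made up):

    mkdir -p myapp_1.0.0/DEBIAN myapp_1.0.0/opt/myapp
    cat > myapp_1.0.0/DEBIAN/control <<'EOF'
    Package: myapp
    Version: 1.0.0
    Architecture: amd64
    Maintainer: Someone <someone@example.com>
    Description: internal build of myapp
    EOF
    cp -r build/* myapp_1.0.0/opt/myapp/
    dpkg-deb --build myapp_1.0.0    # produces myapp_1.0.0.deb

None of the changelog/rules/lintian machinery is involved. It wouldn't pass Debian's own standards, but that's exactly the distinction between "right for Debian" and "right for your organisation".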



