https://launchpad.net/ does this too (though just for Ubuntu distros). It's quite nice and a pretty impressive service, considering it's provided for free.
One edge case where fpm cannot help by itself occurs in the Python (with C bindings) world, when your dev and prod environments use different libc versions. I got around that issue by running fpm inside a VM that matches the libc version of the target system.
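For what it's worth, the workflow was roughly this. A minimal sketch, assuming a Vagrant box running the same distro as production; the box name, package names and paths are purely illustrative:

    # inside a VM that runs the same distro (and therefore libc) as production
    vagrant up wheezy64 && vagrant ssh wheezy64
    # build the C extensions against that libc, then wrap the result in a .deb
    pip install --target ./vendor some-package-with-c-bindings
    fpm -s dir -t deb -n myapp -v 1.0.0 --prefix /opt/myapp ./vendor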
But it's still not as easy as it can be - so pkgr.io looks promising.
OP here, thanks for the comment. fpm is a great tool: PKGR is based on https://github.com/crohr/pkgr, which itself uses fpm. That being said, you do need to make sure your build system is close to your target system, and it is not that easy to package complex apps with specific dependencies. Generally, (web) application packaging does not have the same needs as OS packaging.
As a Rails developer, I've become tired of messing with ruby versions, gem dependencies, bundle installations, deploy-time asset precompilation on my servers, etc. ... Basically, what I wanted was a Heroku-like workflow that generates and hosts debian packages, which is exactly what PKGR provides.
As a customer of Heroku's excellent but rather expensive services, and a fan of deployment via package management, I'm certainly intrigued.
Unfortunately, checkinstall has an inconvenient bug on (at least) Debian Wheezy that requires sudo for package building in some circumstances.
My first thought was that there was no way I'd want to make my deploys dependent on yet another random webservice. But it looks like there's a standalone version, too. https://github.com/crohr/pkgr
"Uses Heroku buildpacks to embed all your dependencies within the debian package. That way, no need to mess with stale system dependencies"
That does mean, however, that it's now your job to rebuild your package whenever a library gets a security fix. Possibly worth the sacrifice, but an important one I think people need to understand.
OP here. Totally right, the final goal being that a new version of the package gets automatically rebuilt whenever a buildpack gets updated with a security fix.
Not sure whether this is a good service or not (proper Debian packaging is 10% writing scripts, 90% reading and conforming to policies), but I believe it would be much better if it could produce and maintain a "debian" branch and tags for pbuilder/git-buildpackage, so that standard package-building practices would apply.
Pkgr (and fpm, the program it uses) is not for shipping a package that people can install and depend upon as part of their own OS distribution. It's for using apt-get as your orchestration system, instead of puppet/chef/etc.
You package your application and its dependencies as .debs, create an apt mirror containing those debs, add that apt repo to your deploy targets' sources.list.d, run "apt-get install yourcorp-system-release", and relax.
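On the consuming side, that boils down to something like the following (a sketch only; the repo URL and meta-package name are made up):

    # on each deploy target (in production you'd sign the repo instead of trusted=yes)
    echo "deb [trusted=yes] http://apt.example.com/ stable main" \
        | sudo tee /etc/apt/sources.list.d/yourcorp.list
    sudo apt-get update
    # a meta-package whose Depends: line pins your app packages and versions
    sudo apt-get install yourcorp-system-release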
They should add a disclaimer that they are not a replacement for proper Debian packaging then. Lots of whiny people complain about the complexity of Debian packaging when really they don't understand the requirements for integrating nicely with every other installed program or potentially-installable program.
You can add custom before/after hooks, and thus upload freshly precompiled assets from them; however, there is currently no secure way to encrypt secrets (in your case, S3 credentials). This may come later (a la Travis CI, http://docs.travis-ci.com/user/encryption-keys/), but it is not a priority right now.
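For the curious, such a hook is just an ordinary script. This is only a hypothetical sketch (the hook wiring and the ASSETS_BUCKET variable are illustrative, and the credential problem mentioned above still applies):

    #!/bin/sh
    # hypothetical after-build hook: precompile assets and push them to S3
    set -e
    bundle exec rake assets:precompile
    # credentials and bucket must come from the environment for now, since
    # there is no encrypted-secrets support yet
    aws s3 sync public/assets "s3://${ASSETS_BUCKET}/assets/"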
No, the hooks currently supported are scripts that are run before and after the slug compilation phase. I don't think you would want to make your package installation dependent on pre/post install scripts that access the network.
According to the Debian wiki (https://wiki.debian.org/FilesystemHierarchyStandard), /opt is for "Pre-compiled, non '.deb' binary distributions", which then use the standard bin/, share/, etc. subdirectories. The "own prefix" convention is for RH variants, IIRC.
These aren't "proper" debs though -- they're not provided by the distribution, don't use the standard build tools, and so on. The closest analogy I can think of is the new packages you get from the paid portion of the Ubuntu App Store (ie, not universe), and those indeed conform to the /opt/vendor standard (with a few exceptions for things like a .desktop file).
Missing the neat config interface of pkgr (and the build service obviously!) but it's fully supported by Debian/Ubuntu now (used to build almost all of their ruby app and library packages).
At launch, only Ruby and Node.js projects are officially supported. But you can always specify another buildpack in your app's configuration to test with other runtimes (https://pkgr.io/doc#buildpack).
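As a rough sketch of what that looks like (my understanding of the config from the doc linked above; the buildpack URL is just an example):

    # .pkgr.yml at the root of the repository (see https://pkgr.io/doc#buildpack)
    echo 'buildpack: https://github.com/heroku/heroku-buildpack-go' > .pkgr.yml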
Installing dependencies for web apps at the system level is usually the wrong thing. You may have a deb but now you are prone to conflicts between that deb and the distro's debs.
OP here. For all the goodness offered by Docker to deploy apps, sometimes a good old debian package is all that's needed. The format is widely supported and battle-tested, while Docker has specific requirements in the form of a minimum kernel version, additional software to install, and additional operational knowledge.
In short, if you already have a working Docker installation, you probably don't need PKGR, though nothing prevents you from building your containers by simply apt-get install'ing the app instead of having dozens of RUN commands to install the dependencies.
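If you go that route, the container build can collapse to something like this (a sketch; the apt repo and package name are invented, and the exact run command depends on how the package exposes the app):

    cat > Dockerfile <<'EOF'
    FROM ubuntu:14.04
    # the whole runtime (ruby, gems, assets) lives inside the single .deb
    RUN echo "deb [trusted=yes] http://apt.example.com/ stable main" \
          > /etc/apt/sources.list.d/yourcorp.list \
     && apt-get update \
     && apt-get install -y my-app
    CMD ["my-app", "run", "web"]
    EOF
    docker build -t yourcorp/my-app .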
I could see pkgr being useful for creating an on-premises "enterprise" version of a web app, in situations where the target customers would like to deploy it on their own Debian or Ubuntu machines. Docker is a bit too cutting edge for that scenario.
This is exactly the same scenario I had in mind. This would allow projects to have an easier way to distribute some kind of Enterprise version of a web app. Will definitely take a look.
It seems to me that your response basically amounts to "out with the old, in with the new." I think it's better to carefully evaluate the tradeoffs of different solutions than to assume that either the old way or the new way is automatically better.
In this case, containers have definite merits. For example, you can be sure you're not inadvertently depending on something that happens to be installed on your systems but may not be explicitly declared as a dependency of your package. On the other hand, is containerization going to make it harder for admins to deploy security updates, especially for things that are usually system-wide libraries in the conventional distro packaging model, like OpenSSL?
I would go so far as to say that while containerization is incredibly useful, it's being used as a bandage for a lot of broken systems. Take, for instance, one of the most popular platforms for running Docker and other containers: OS X. Package management there is a mess at best.
Those of us long time Debian users can appreciate having a service for those (very rare) situations in which something is not already (very well) packaged. Making sure that you're not inadvertently depending on something is as easy as "apt-get install" on the target machine (usually your webserver that is running the same version of Debian as your dev machine).
> Those of us long time Debian users can appreciate having a service for those (very rare) situations in which something is not already (very well) packaged.
Hi, I created Docker and am also a long-time Debian user. I disagree with your assertion that containers are a bandaid for broken systems. Containers take the best parts of solid system packaging, and make them relevant again for software being written today - not 20 years ago.
Before starting Dotcloud I used to work at a Debian-only shop, where .debs were the only accepted vehicle for software changes, company-wide. This allowed for quality ops based on a foundation of "immutable things". But as the stack grew in complexity and the infrastructure and team grew in scale, that became a nightmare.
Here's what kills Debian packages as a universal unit of software delivery:
1) Versioning hell. When 15 developers are each deploying 10 slight variations of a build, in different combinations, on the same underlying set of machines, how do you express that as deb packages? As we found out the hard way, the answer is: you can't, not in any sane way.
2) The tooling sucks for developers. Walk up to a random developer and give them a 5-minute pitch on how to package their software as a Debian package, the right way. There's a reason pkgr.io and fpm exist. They are bandaids around a fundamentally flawed developer experience.
Side note: sometimes I wish Docker wasn't so popular and hyped. It seems that for a portion of the engineering population, anything popular is necessarily poorly designed, by people unaware of the state of the art. As it turns out, we are very aware of the state of the art, have dealt with it for many years, and argue that there is room for improvement.
Like I said, the rise of slick containerization (like Docker, thanks!) is really awesome and has its uses, but some people are (ab)using it as a bandaid; I wasn't saying containers are a bandaid. Chalk it up as a criticism of the abuse of the tool, not a criticism of the tool (how's that saying go? "A successful [software] tool is one that was used to do something undreamed of by its author." -- S.C. Johnson).
And I've seen the complaints that .debs can't keep up before, usually from Ruby programmers justifying slapdash Gems. Now I'll admit that reality isn't perfect, and software can be messy. But I have to ask: why in the world are there so many "slight" variations of a build? And if they're so slight, why do they break things? That just sounds like a lack of self-discipline, and I'm not sure how containers would help, other than as a bandaid over the underlying problem.
As for the difficulties of packaging, I'm glad things like pkgr.io exist to streamline the process. And I'm glad that solutions like Docker exist to streamline the container process. But if you're spinning up a VM (no matter how easy that is) to fix dependency issues, I think you might have other issues. I could be wrong, though, and as a pragmatist, I will admit that shipping wins almost every time :)
> why in the world are there so many "slight" variations of a build?
That's what happens if you tell all your developers to stop using capistrano, fabric, ftp and heroku, and tell them to create .debs out of EVERYTHING instead. They're going to (unhappily) build and install dozens of packages per hour, because that's how developers increasingly work: by deploying their stuff very, very frequently.
In our case it was the combination of 1) rapid development and staging, 2) different environments for different customers, and 3) numerous, fast-moving and precise dependencies.
1) Rapid development and staging: there are several developers, building from their respective branches. They want to deploy and test their work, sometimes in isolation (dev), sometimes as a team before releasing (staging). The overhead of deploying a new machine every time (virtual or not; in this particular case bare metal was part of the equation) was considered overkill. But that's OK, Debian packages can be named, flavored and versioned, so deploying in parallel should work out, right? Wrong, as it turns out... having to manually "scope" each deployment by generating different package names, flavors and versions basically breaks every assumption of system packaging. And if you want to get a little fancy and, say, map particular versions/flavors to particular git commits... forget it (there's a sketch of what that scoping looks like after this list).
2) Different environments for different customers. All of the above was multiplied by the fact that this was appliance software - each customer got its own build, which on a good day was stock, and when dealing with more demanding customers, involved various customizations, build options etc. This is business as usual in many "vertical" software markets where the most valuable customers always have some legacy crap software and workflows they need to integrate with.
3) Numerous, fast-moving and precise dependencies. This software had lots of dependencies, and the exact version of those dependencies, and even how they were built (and the exact build environment), mattered immensely; the whole thing was a moving target because we frequently patched those dependencies. In this particular company the dependencies mostly had to do with video processing (custom ffmpeg builds etc.), but this is also business as usual in many software-intensive organizations. The project rarely fits in a neat Rails or Spring app. There are all sorts of libraries, binaries and RPC services floating around - and deb packaging forces you to break all of those down into dependencies, further multiplying the dependency hell that your life already is. It's also increasingly typical (and good!) for developers to be very specific about the exact version and build of all their dependencies (from mysql to libssl), because it affects the behavior of their application. So you can't just wing it by swapping out their particular postgres setup with something "kind of the same thing". "It's the same version, what else do you want?" --> not good enough.
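To make the "scoping" point in 1) concrete, here is roughly what every deploy ends up looking like once branch and commit are encoded into the package identity (illustrative names only, not something I recommend):

    BRANCH=$(git rev-parse --abbrev-ref HEAD)
    SHA=$(git rev-parse --short HEAD)
    # one package per developer/branch/commit, all co-existing on shared machines
    fpm -s dir -t deb \
        -n "myapp-${BRANCH}" \
        -v "1.0.0+git${SHA}" \
        --prefix "/opt/myapp-${BRANCH}" \
        ./build
    # ...and now dependency declarations, init scripts and config paths all have
    # to be rewritten to match the scoped name, for every single variation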
> And if they're so slight, why do they break things?
I meant that they break traditional system packaging. Simply put, system packaging is not well designed for applications which A) are made of many components, B) are updated and shipped several times per hour, and C) are deployed on a large set of heterogeneous machines.
... And I didn't even mention the case where sales come back saying "the customer wants RPMs...".
Good times.
So yeah, I'm sticking to containers and not looking back :)
What's the use case for 1)? It sounds like a peculiar situation to get into.
As for 2), if you ignore the parts of the tooling which are primarily for building and publishing Debian itself and only use the bits which are oriented around building a .deb, it's remarkably simple. What sucks is the documentation, I suspect because Debian doesn't want people to know anything other than "the right way". What's right for Debian isn't necessarily right for anyone else's organisation, but figuring out which parts of the tooling usefully stand alone is not easy.
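To back that up, the standalone bits amount to roughly this (a minimal debhelper sketch; the package name, maintainer and version are placeholders):

    # minimal debian/ directory: control, rules, compat, changelog
    mkdir -p debian
    cat > debian/control <<'EOF'
    Source: myapp
    Maintainer: You <you@example.com>
    Build-Depends: debhelper (>= 9)
    Standards-Version: 3.9.5

    Package: myapp
    Architecture: all
    Depends: ${misc:Depends}
    Description: internal application package
     Built with a minimal debhelper setup.
    EOF
    # the catch-all rules file; note the dh line must be indented with a tab
    printf '#!/usr/bin/make -f\n%%:\n\tdh $@\n' > debian/rules
    chmod +x debian/rules
    echo 9 > debian/compat
    # dch comes from the devscripts package
    dch --create --package myapp --newversion 1.0.0-1 "Initial packaging."
    # build an unsigned binary package
    dpkg-buildpackage -us -uc -b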
It builds the packages on virtual machines for many different distributions and both 32- and 64-bit CPUs.