Debops (2014) (enricozini.org)
190 points by chei0aiV on July 27, 2015 | 93 comments



I like this a lot, and I love the term DebOps.

The way developers somehow think DevOps is (or should be) an abbreviation of "Developers doing/replacing Operations" is terrifying to me.

I'm also in the same boat as the author, in that I recommend and target Debian Stable + Backports (and some vendor/community repos when required).


A bit over a year ago I researched what "Devops" means, and the answer seemed to be developers being able to push code to production without having to involve Operations. This sounds like a good goal to have (presuming you've got good unit tests etc.) as it removes unnecessary friction for developers.

What it doesn't include is all the higher-value work people on the operations side tend to provide: thinking about rollbacks, machine failure, network failure, provisioning, capacity planning, change/configuration management, security, monitoring, etc. Which is not to say that developers never think about these things, but their focus tends to be on developing product rather than on these non-functional requirements.

I'm presently in a world of Ubuntu LTS plus a few manual backports, for reasons similar to the author's. My home systems are Ansible-managed, giving me things like wireshark installed everywhere if I need it and new machines automatically hooked into Prometheus monitoring (which is much easier with my own debs). I've seen what happens if you try to manage machines by hand, and know that a small bit of upfront work will bring dividends later.

Writing the core code is just one critical step in running software long term; let's not forget the rest of the critical steps needed to keep it running sanely in the future.


Many people have different interpretations of the term 'DevOps', but what you're describing in that first paragraph is Continuous Deployment.

Yes, it's related (in that it's often a beneficial result of DevOps), but it's not the same thing, by any definition I've seen, good or bad.


Among all the definitions I found, that was the only common theme.

I've been discussing this with others in the "DevOps" space, and in recruiting terms a "DevOps engineer" seems to mostly mean an "ops engineer". True DevOps (in the sense of developers who also care about and perform all of their own operations) is a culture, not a job role, and one that seems quite rare in the wild.


The problem is that DevOps isn't a person, DevOps is a team.

Historically large companies had strictly separate development and ops teams. DevOps is about fusing them together so they talk to each other and influence each other's thinking for the benefit of everyone.

But of course sane definitions don't make good buzzwords so now we hire "DevOps" people just like we buy "private cloud" servers.

DevOps is as much about making your developers ops engineers as Agile is about making your stakeholders developers. It's a possible side-effect, but neither necessary nor sufficient.


Definitions of the term are almost as diluted as those of "cloud". It is easier to define it by what it's not.

Look up Jez Humble for a bit more insight into the origins of the term. The term is primarily a buzzword among larger companies and is almost always attached to some engineering culture-change objective rather than to how code is deployed, or even developed and run. You know it's another management-focused trend when there are entire conferences where people say "devops" constantly without mentioning anything about code, and half the folks in attendance or speaking are consultants in suits who consider Excel formulas the extent of their coding skills.

So the common theme I see is "any way besides how we traditionally did operations." It's mostly used to mean "operations with some idea of what is being deployed on the stack above them." Most start-ups don't have this problem at all with modern infrastructure (probably no more rack and stack at your 5-man start-up), though, almost by definition, because rigidly defined roles are a Big Company Problem.


The problem modern startups have is that a developer sees he can make calls to AWS or Azure APIs and assumes that makes him qualified to define system architecture, security policies, deployment processes, etc.

"Modern" infrastructure (by which I assume you mean provisioned, destroyable VPS instances + associated services such as AWS, Azure, etc) is effectively just a new "how" - you call an API instead of deploying a config file or similar. You still need to know the "what" and the "why" to be effective.


I don't necessarily see the situation as inflated egos as much as lack of resources to do it better. I have rarely met developers that are excited to do operations work like defining and implementing system security policies, change control, and orchestration. It's a chore that's as exciting for them as doing their laundry.

I'm being very conservative with what "modern" means (within the past 20 years is about right). Traditional shops are still racking and stacking machines, and maybe deploying VMs by hand, using ITIL-style processes that desperately try to slow down system changes to cope with demand rather than speed things up, as most shops have done. Where I am now, the "traditional" IT side of the house takes roughly 5 months to provision a new server (I lead operations on anything bleeding edge, which is now standard for most start-ups).

Going from using maybe kickstart files to API calls is not as big of a deal as the fact that you can even get something on demand in any way instead of going, finding another job, quitting that job, coming back in shame, and realizing that the server you asked for is finally up.


I've definitely met developers who made the choice to say "I can do this myself, I don't need someone else to do it", and I've also met developers who, when tasked with "make our app run", simply say "OK, it's running on port 80, so it's up, right?" without any of the associated work to make it secure, reliable, backed up, etc.


The common themes, IMO, are either developers doing ops (a nightmare if you ask me) or ops teams utilising more "development"-like processes such as automated setup tools, configuration management, etc.

As I said, I don't treat "developers can push to an environment" as the defining factor or the definition of DevOps.



http://deb.robustperception.io/ is my nightly build. It's one of the few things I want to run on the very bleeding edge of.


The problem with targeting Debian Stable + Backports is that your packages are going to be perpetually out of date unless you go to the trouble of packaging up your dependencies as .debs and running your own apt server. Targeting Debian stable is fine if you can get away with using three-year-old libraries. But for a lot of languages, like Python, 3-year-old libraries are at a significant feature disadvantage compared with their current counterparts.


When you say "out of date", you probably mean feature-wise, not security- and stability-wise? For me that is an acceptable tradeoff for most packages. Most software doesn't evolve at such a pace that I need the latest and greatest (I am talking about core libs and similar). And for the packages where I miss some new functionality, I can just upgrade them manually. Take Firefox, for instance: if you don't like/trust Iceweasel, you can just install FF to /opt/ and trust Mozilla for upgrades. For me this is a win-win situation: you have a stable and secure foundation and you put bleeding-edge stuff on top when you need it.


Indeed, for myself I prefer stable releases (Debian, Ubuntu LTS, etc) and use apt pinning so that I can use newer versions of apps that I need to use. Very rarely will I have a problem; in fact the few times there is a problem it is normally solved by apt-get source ; dpkg-buildpackage.
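
(For anyone unfamiliar, that workflow is roughly the following; <package> and <version> are placeholders, and the last step assumes an amd64 build:)

    sudo apt-get build-dep <package>      # pull in the build dependencies
    apt-get source <package>              # fetch and unpack the packaged source
    cd <package>-<version>/
    dpkg-buildpackage -us -uc -b          # rebuild unsigned binary packages
    sudo dpkg -i ../<package>_<version>_amd64.deb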


Do you just use pinning straight out the gate, or do you check backports (or whatever equivalent Ubuntu has) first?


It depends; normally, in order of preference: LTS, LTS backports, then the latest release. If you want the newer version you can do apt-get install <package>/<release>. I typically set Apt::Default-Release "<release>" in a separate apt.conf.d file.
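
For example (with <stable> and <newer> as placeholder release names), the whole setup is roughly:

    # /etc/apt/apt.conf.d/99default-release
    APT::Default-Release "<stable>";

    # with both releases present in sources.list, everything stays on <stable>
    # by default, and newer packages are pulled in explicitly with either of:
    apt-get install <package>/<newer>
    apt-get -t <newer> install <package>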


Just yesterday I had a problem which was some combination of Debian Jessie's incredibly outdated Python CFFI library (0.8.6) and its interaction with OpenSSL, which took a script from launching in milliseconds to taking over 1.5 seconds, purely due to that import.


Debian stable includes pip and virtualenv. If you want to manage your python dependencies for yourself that road is definitely open to you.
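
For example, on Jessie, something along these lines (the app name is purely illustrative):

    sudo apt-get install python-virtualenv python-pip
    virtualenv ~/venvs/myapp
    ~/venvs/myapp/bin/pip install -r requirements.txt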


Mixing pip and debs loses some of the advantages of having a standard system, though. I think you're better off producing deb packages from PyPI in your build server.



I typically recommend installing virtualenv system-wide, if a reasonable version is available as a package, and managing per-app dependencies in a requirements.txt that's platform-agnostic.

The Debian ecosystem is great, but I would never recommend a developer marry an app to it.


But there's no reason you can't have your cake and eat it too; you can create a virtualenv from a requirements.txt file on your build server, then package it up in a deb package that installs those files in a custom directory (not in the shared Python modules path).

That way, you get static artifacts you can deploy (instead of deploying using pip, which requires compiling any non-pure-python package on the server), you get to use apt's dependencies for non-Python packages, like system libraries, and you can easily rollback by simply pushing the previous version of the deb package, which is cleaner than rolling back with pip, in my experience.
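
A rough sketch of that build-server step, assuming the fpm packaging tool (dh-virtualenv is another option), with names, versions and paths purely illustrative:

    # build the virtualenv at the path it will be installed to,
    # since virtualenvs bake absolute paths into their scripts
    virtualenv /opt/myapp
    /opt/myapp/bin/pip install -r requirements.txt

    # wrap the directory tree up as a deb you can push through apt
    fpm -s dir -t deb -n myapp-venv -v 1.0.0 /opt/myapp

    # deploy (or roll back) by installing the relevant version
    sudo dpkg -i myapp-venv_1.0.0_amd64.deb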


> The way developers somehow think DevOps is (or should be) an abbreviation of "Developers doing/replacing Operations" is terrifying to me.

And what about the way management thinks it means you no longer have to pay Operations people (those guys who always try to stop progress in your company)?


Sounds like another way of saying the same thing to me, no?

No one says "we don't need someone to setup/maintain our servers".

DevOps is, in every interpretation I've seen, about who/how it's done.


> No one says "we don't need someone to setup/maintain our servers".

Of course they do. And there are a lot of products which are marketed using that idea:

"AWS Elastic Beanstalk is the fastest and simplest way to get an application up and running on AWS. Developers can simply upload their application code and the service automatically handles all the details such as resource provisioning, load balancing, auto-scaling, and monitoring."

The configuration management/orchestration tools also help with that (you don't need anyone to provision the servers, take care of dependencies, etc.; you can just include these 15 Chef cookbooks and everything will be done automatically). Btw: I'm not saying configuration management tools are bad; they are a must-have. I'm just saying nothing will replace a person who knows what they are doing.


> Developers can simply upload their application code and the service automatically handles all the details

In this situation, you've basically traded your own Ops staff for a combination of your developers and whatever support AWS provides you - so you're back to a managed service like every man and his dog was using in the 90s.


I recently had to develop an Erlang application targeting Debian "stable". It was humiliating at best. Countless hours were wasted trying to get around bugs in prehistoric packages, or building new (and working) packages from scratch.

Never again.


Debian Jessie just came out 2 months ago and as far as I can tell the packages in it are pretty recent.

Were you targeting Wheezy?


Yes - in the beginning.

If you look here https://github.com/erlang/otp/releases you'll see that - even on Jessie - 17.1 is unfortunately way behind.


Jessie is using 17.3.4 (if I am reading the version correctly), see https://packages.debian.org/jessie/erlang

17.3.4 was released on 4th of November 2014, according to the link you provided.

Jessie "freeze" happened on 5th of November ( https://lists.debian.org/debian-devel-announce/2014/11/msg00... ), so I even wonder that they are shipping that version.

In my opinion that is not "way behind".

If you need bleeding edge you should move to the unstable or testing releases, move to a rolling-release distribution like Arch, or build your own.


I tried to work this out as well. It is indeed using 17.3.x, but it's hard to nail down which point release, as the Debian changelog doesn't mention when they pull a new upstream version (vs. fixing an issue in the package itself).

I figured I would just install the package and use a -v or --version arg to get the exact build version. I cannot find any way to get a patch version out of Erlang. The `OTP_VERSION` file simply says 17.3, and the most I can get from `erl` or `erlc` is "17".


OTP_VERSION lists minor releases, so that should be a "true" 17.3.


I think this is a massive downside for Linux, and saying "use unstable" just puts the onus on the user to solve it for themselves. Stuff like PHP, MySQL/MariaDB, Postgres and Python isn't part of the OS anywhere else, but it is on Linux. Making tons of third-party userspace software part of the OS distribution causes the distribution of that software to lag behind on Linux. I don't see any reason that what version of Debian you use should lock you into a default version of Erlang, instead of simply letting a user install whatever version of Erlang they need quickly and easily. For all the bleating during the systemd flame war about how Debian and Linux is about choice, it's a very constrained form of choice.

EDIT: Well, that's not true, I see the reason -- Linux isn't an OS so much as it is a family of OSes that are assembled from a lot of independently developed projects and are loosely compatible with each other, so the reason you need a Debian-specific version of Erlang is because Debian isn't quite the same as Arch or Fedora or what have you. I just don't think that it's such a great reason that users should just accept that it'll never get better.


I think you are looking at it wrong. It is not just "use unstable", it is "use unstable if you want to apt-get install stuff".

You can always install the version you want, you just have to compile it, including its dependencies if needed (using a custom path for libraries).

If you need to do that for multiple machines, build your own packages.

You have a choice how you want to get stuff on your system.

People creating software packages that are in the official repository make it easy for multiple projects to reuse stuff and it works out for most people most of the time.


Looks like 17.3 from the package name. Today it might not be "way" behind, but it will be in a year or two. Or three.

I can see the point of having a "stable" kernel, but these freezes are mostly random. Back in its day, Squeeze shipped with R14A (not B). That's not stable, it's just a random cutoff. It means that whoever did the package did not even know what A and B used to mean in Erlang releases.

That's not something you can put in production.


This is what backports is all about. More up to date packages from sid, built for the current stable release.
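
On Jessie that's one extra line in sources.list and an explicit -t flag (mirror URL illustrative):

    # /etc/apt/sources.list
    deb http://httpredir.debian.org/debian jessie-backports main

    # backports are never pulled in by default; you opt in per package:
    apt-get update
    apt-get -t jessie-backports install <package>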

If you're a heavy erlang user you could also consider helping to support the Debian packages for it, I'm sure they'd (Debian erlang team) appreciate any help offered.


Generally I start off with stable, add things via backports if possible, then build my own solution for the primary language runtime and any libraries I'm using.

The idea is to lean on stable for everything else that you don't care as much about, basically.


Live dangerously: deploy on Arch. Then you'll never have to work with outdated anything again ;)

Actually, although I've been using Arch Linux for over two years now (not in production; I'm not brave enough), I've hardly had any issues at all with its rolling updates. The worst was just having to manually delete some Java binaries from /usr/bin when the way it handled Java was updated to allow Java 7 and 8 to exist side by side.


I know a lot of people who use Arch. I myself used Arch for about 10 years. Everyone I talk to has glowing reviews of it, and they've never had any issues.

I stopped using Arch about 3 years ago after my system became unbootable after an update. It was not the first time. In the past, I would be fine reading update notes and fixing the issue. But since I started troubleshooting servers at work, I have zero patience for doing it at home.

One other annoyance that comes with rolling releases is that you should update more often, to avoid making bigger (sometimes conflicting) changes to your system. You end up reading release notes more often. I could turn on automatic updates, but I've been bitten by that in the past.

Arch also encouraged me to tinker, and I was much more likely to make breaking changes to my system than I am now just running Ubuntu. If I had the time/energy to try out new distros at home, I would probably try Nix or something similar, that emphasizes rollback capability.


The thing with "rolling releases" is that updates will bite you in the ass exactly when you have absolutely no time to deal with them.

So even if the system boots, it doesn't mean anything, because booting the system is only one out of 100+ use cases that you will never be able (or willing) to test after each pacman -Syu.


Arch has 18.0 - better indeed, but you need to keep an eye on GitHub to find out which is the actual latest version :|


Did you try this? apt-get install -t wheezy-backports erlang

On wheezy, that would have put you at 17.1 in less than a minute.


I'm actually developing on 18.0.2. At least 17.5.x would have been better.

The fact that erlang.org does not list minor releases at all probably doesn't help. One can go to erlang.org and think that 18.0 is the latest...


Since 18 was released in the past 30 days you'd need to wait a little for debian/ubuntu/etc maintainers to work it out for (eventually) production environments.

Here is a quick howto for down and dirty backporting:

https://packages.debian.org/sid/erlang

Get the debian, orig, and dsc files:

cmd: "dpkg-source -x erlang<blah>.dsc"

cmd: "cd erlang<blah>"

cmd: "debuild"

Then you have 18.0.something source packages and installable binaries for whatever debian release you have. Check the build-depends in debian/control or try apt-get build-dep erlang first.

If you want a newer release and its just a minor change like 18.0.2 surely is... then just download their current source file and rename it like the orig file you downloaded from debian. Or go update the entry in debian/changelog to properly represent it. (dch -v 1:18.0.2-1~olgeni+1 -m)

It only takes a few minutes in prep work and a couple commands to complete. Most of the time is spent waiting for something like erlang itself to compile before it pops out installable binaries for whatever version of debian you're on. Plus now you have 1-for-1 installable runtime for all your other nodes or environments without any work.


Sounds good - let's give this a spin :)


There's also DebOps[0], a collection of Ansible playbooks, a "Debian-based data center in a box".

[0]https://github.com/debops/debops


I remember going over with the author on what we should name the domain[0]. I figured it was pretty unique.

[0]http://debops.org


How do you not bundle in dependencies when coding in, say, Java? The Java packages on Debian are for the most part outdated. It's way more practical to just use Maven, build a fat jar, and be done with it.


It depends who your end users are. If the people responsible for running the code do it as their full time job, then yes, bundling the dependencies is more practical because you have a team of full-time engineers to handle security updates, rollbacks, etc. But if you're pushing it out to people who expect to just be able to fire and forget, not relying on your distro to manage dependencies for you is going to result in a world of hurt.


To not bundle dependencies, use the Java packages in Debian stable and if you find one with a missing feature or unfixed bug, then backport that from Debian unstable or from upstream.


Debs and RPMs are fine for configuring the machine, but as soon as machines and application services aren't in a 1:1 relationship, they will lead you in the direction of dependency hell. As each application will want slightly different versions, you'd have to be very careful which version you installed.

You can cheat a little. For example, I am using some system debs for Python packages, but with more than a handful of Python utility packages that gets unmanageable fast. That leads naturally to the fat-jar/virtualenv/static-binary approach with manually managed dependencies, which gives you isolation between applications, and all the associated costs.


Why do applications want different versions of libraries instead of being portable?


It's very common to have breaking changes that you need to carefully manage in a company's internal code, and where managing that as a full-on deprecation isn't worth the effort. Moving faster is often the better choice compared to never refactoring and improving the codebase.


I think I would prefer to base on proven libraries with stable interfaces. Do you have any examples of these breakages?


I'm talking about internal libraries of a company, where there is no proven library with a stable interface as it's continuously evolving - similar to a young library in the OSS world that has no reasonable substitute. It might be something like a new parameter to a function, or an additional method call you need to make in a given version.


Oh. Obviously for internal libraries you should do whatever you like, including bundling. I thought you were talking about open source libraries.


Applications that are written specifically in order to be run on the servers of a given company instead of being written to be general purpose have vastly different tradeoffs involved when it comes to choosing dependencies.


I think this is a big problem in the Java world, possibly more so than many other languages/communities.


This is the best case you could make for installing Gentoo. Dependency Hell is no longer an issue. :)


And when the library is not packaged in the repo, should I then build the package myself?

If we were in an ideal world we would do that, but unfortunately we're not, and in my experience it's way easier for the dev AND for the sysadmin to use Maven and build a fat jar.


How do you not bundle dependencies programming in anything? If you want to be portable across Linuxes, you can't even count on tcsh or env being installed at the same path. Counting on the stuff installed with the system unleashes a world of pain on the user.


> you can't even count on tcsh or env being installed at the same path

This is what tools like Autoconf and pkg-config take care of. Lots of people use them to discover executables and libraries and generate files with the right machine-specific variables in them. You should never assume that binaries are in /usr/bin or that libraries are in /usr/lib. A lot of packaging issues are caused by such assumptions.

If you use the right tools, you don't have to bundle your dependencies. Bundling introduces a serious maintenance burden onto the developers and the packagers. It's easy to avoid bundling for C libraries and things, but with the prevalence of language-specific package managers, it's become a harder problem because everyone just assumes that you will fetch dependencies through it and never use the system package manager. It's a sad state.
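
Two small examples of what that looks like in practice (the library is just an illustration):

    #!/usr/bin/env bash
    # locate bash via PATH instead of hard-coding its location

    # let pkg-config supply the machine-specific include and library flags
    cc myprog.c $(pkg-config --cflags --libs openssl) -o myprog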


Yes, but many of us have no intention of being portable across Linuxes. Our software is designed to be deployed to machines we control. If users want to deploy it to some other distro, they're free to package it themselves.


Targeting a given stable distribution means that the distribution is bundling all your dependencies.

If you're upset about that, learn how to instantiate containers on your chosen platform :-)


> I build my software targeting Debian Stable + Backports. At FOSDEM I noticed that some people consider it uncool. I was perplexed.

I am also perplexed... what are these other people doing?


They are doing stuff like this:

https://news.ycombinator.com/item?id=9952356


To the downvoters: the comments on that post mention a lot of deployment methods that people not doing Debops are using.


Deploy on CoreOS maybe.


So I develop PHP. The reason I don't use Debian stable is because the latest version of PHP on there is 5.4.41, which is behind the "old stable version". That means security fixes only.

No bug fixes that resolve problems, or modern functionality (that release is over a year old), and it will be EOLed in 1 month. That means a big delta of change that will need to be handled when Debian finally does get round to upgrading. Large deltas of change mean lots of risk.

It's much better to stay further up the crest of the wave and handle more regular updates, to minimise the size of the risk I'm bringing into my code at each release, than it is to stick to an old version and not handle the stream of new functionality that's coming in as it arrives.


Debian Stable has 5.6.9 at the moment, https://packages.debian.org/jessie/php5 - and shipped with 5.6.7 initially.

Security updates for php5 in Debian seem to have changed from backporting security fixes to staying up to date on any given upstream minor branch, including other fixes. https://security-tracker.debian.org/tracker/source-package/p...


Gah, well there's that argument out the window. We tend to use docker on CoreOS here, so it's not much of an issue.

Maybe next time I stand up a real VM I'll look at using Debian stable then.


For more up to date PHP packages on Debian, try http://dotdeb.org


+1 for DotDeb. For Jessie I would use the stock packages.


From memory, Dotdeb won't be shipping PHP for Jessie until 7.0 is released, because as you say, the stock release is 5.6 already.


Never understood why Debian ended up as the distribution of choice of the DevOps-related movements. Ubuntu is understandable: developers used it on their desktops, so when they ordered their first VPS, they chose Ubuntu. But why Debian?

Also why it looks like RHEL family is completely out of fashion? Is it considered too "enterprisy"?


Ubuntu tries a lot of different things, and the environment swings around wildly. Sometimes you'll see a howto that has different steps for each of the past four or five releases (6 months apart). Debian is much more considered in its changes, and is better suited for server environments. The Debian philosophy is quite clear, and you never have to keep an eye out for encroaching adware or phone-home stuff in the core systems. Debian also does less 'magic stuff' for you; for example, if you mistype a command, Debian will just 'not found' it, whereas Ubuntu will suggest-sell you a command... which takes time for the results to be parsed and presented.

On the desktop, Debian has quite a few more rough edges, and you wouldn't really recommend it for a newbie.

Regarding Red Hat being out of fashion, part of that is that Ubuntu was a big drawcard to bring devs to Linux because they focused on polishing the desktop, so the Debian family got an influx of users. Personally, I don't like Red Hat tooling, as I find the tools full of gotchas and their output usually full of chaff and hard to parse. That may be just a personal taste issue; I'm sure plenty of Red Hat admins find Debian tooling weird and odd.


Debian is fairly stable, has been around for ages but is still active, is completely free and community-driven, and most people are familiar with the package management and config file structure on Debian (or Debian derivatives).

On top of that, it was already a pretty popular distribution. The more people use it, the more documentation becomes available, and the more new people will pick it up.


> Also why it looks like RHEL family is completely out of fashion? Is it considered too "enterprisy"?

Well, look at the website: https://www.redhat.com/en/technologies/linux-platforms/enter.... That won't appeal to anyone outside of an enterprise chair.


My read of it (as someone in the Debian/Ubuntu world) is that the CentOS split meant new sysadmins and users tended more towards Ubuntu.

RHEL/CentOS is still around, but you don't hear about it too often.


Why Ubuntu? If you're not writing GUIs, Ubuntu is just a less-stable Debian.


> Why Ubuntu?

PPAs.


PPAs are a seriously killer feature. Easily one of the most important parts of the Ubuntu ecosystem. Not enough people realize it.


What is the killer part of PPAs? It is pretty easy to setup your own repo on a server using reprepro or aptly. Is it the build-machines-as-a-service aspect? Or the ease of adding a PPA to your system? Or the fact anyone can dump some stuff into a repo and have Launchpad/Canonical/Ubuntu bless them?


I think it's the ability to have bleeding-edge packages easily linked into your system. Other options might be available, but people simply don't know about them. And developers themselves might already provide the PPAs, which means there are many PPAs available.

It might be the worst solution, but it's the best known.


All of it together, while providing a relatively straight forward interface for the consumers of the package.


> It is pretty easy to setup your own repo on a server using reprepro or aptly

Sure, it's easy to set it up yourself. But the point is that PPAs are managed by someone else and have already been set up.

If I want some new software and there's a PPA, I can just use that PPA and be able to use it straight away.
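
For example (PPA and package names hypothetical):

    sudo add-apt-repository ppa:some-team/some-package
    sudo apt-get update
    sudo apt-get install some-package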


Using Debian isn't a panacea. Neither is any particular distribution. Anecdotally, the only personal server I've had hacked was running Debian - due to it using an old openssh version (the issue was not present in later versions). It was fixed quickly, but I got 0dayed.


The only servers I've seen (and subsequently replaced, with Debian) breached were old CentOS boxes.

The key thing here is not "Debian is bad" or "CentOS is bad" it's that you need to keep up to date with security patches. For Debian that usually means a combination of using the Security Apt repo, and for things like OpenSSH, using the Backports Apt repo.

I do agree that Debian isn't a silver bullet, but in my experience it's much easier to work with from a setup/management point of view than CentOS, particularly for small shops that aren't heavily invested in a full-blown CM tool - shell scripts and/or Debian config packages [1] can be used to fully provision one server or fifty.

[1] http://debathena.mit.edu/config-packages/
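
For reference, the two repos mentioned above are just a couple of lines in a jessie-era sources.list (mirror URL illustrative):

    deb http://security.debian.org/ jessie/updates main
    deb http://httpredir.debian.org/debian jessie-backports main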


Related to this: Outsourcing your webapp maintenance to Debian

https://feeding.cloud.geek.nz/posts/outsourcing-webapp-maint...


Yes, working smart not hard.

The value of going home on time each night, because there is a wealth of Google results to expose and work around known bugs, should never be underestimated.


Please edit title with (2014)


Am I the only one who finds this style of usually-just-one-sentence-per-paragraph almost completely unreadable on a visual level?



