Spotify position in support of systemd in the default init debate (debian.org)
234 points by possibilistic on Jan 17, 2014 | hide | past | favorite | 174 comments



Out of all the people commenting on #727708, the Spotify poster is a rep from an organization with systems to administer, just like almost every other non-ctte, non-init-maintainer person on that list. So it makes sense for Spotify to comment, but it's not really worth giving their post undue weight in the larger debate currently happening on debian-ctte.

If you want a better picture of Debian's init decision: the bug got punted to the ctte to decide on, so check out the ctte mailing list archives for December [0] and January [1]. Russ Allbery and Ian Jackson are the most vocal CTTE members on the list and support systemd and upstart respectively. The thread "Bug#727708: init system other points, and conclusion" is where Ian [2] and Russ [3] start making the case for their preliminary conclusions.

Edit: If there are other things worth pointing out on the list, put 'em here. I've really only been skimming the posts for the last few weeks. They're also considering OpenRC, but it appears not to be a real contender. Tollef Fog Heen is a Debian systemd maintainer and has posts worth reading on both what systemd actually is and what the impact of the different decisions would be.

[0] https://lists.debian.org/debian-ctte/2013/12/ [1] https://lists.debian.org/debian-ctte/2014/01/ [2] https://lists.debian.org/debian-ctte/2013/12/msg00182.html [3] https://lists.debian.org/debian-ctte/2013/12/msg00234.html


I should point out that while this is theoretically the official position of our infrastructure team here at Spotify, we've also got a lot of opinion and debate internally (like you have on any topic among a large team of engineers).

One other fun problem Spotify is going to tackle is how we're going to handle systemd vs. upstart if/when we transition from straight-up Debian to Ubuntu.


> One other fun problem Spotify is going to tackle is how we're going to handle systemd vs. upstart if/when we transition from straight-up Debian to Ubuntu.

After two years of upstart I am wholeheartedly regretting ever touching it. It's broken by design and currently in such a bad way that you actually might have to reboot the machine to fix it.

Upstart is the main reason I'm considering dropping ubuntu for something else.


Works for me. What is the problem?


For example, there is a long-standing bug [1] where an improperly configured job using "expect fork" can cause upstart to become completely confused and unfixable without a reboot (or a really hacky ruby script [2]).

As the comments discuss, it's not always clear when to use "expect fork" vs. "expect daemon", and the rules change with a `script` block. So this hits people all the time, including me.
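For reference, the distinction is about how many times the daemon forks. A hypothetical upstart job (illustrative only; `mydaemon` is made up) looks like this, and picking the wrong `expect` stanza is exactly what desynchronizes upstart's PID tracking:

```
# /etc/init/mydaemon.conf -- hypothetical upstart job (sketch)
description "my daemon"
start on runlevel [2345]
stop on runlevel [!2345]

# "expect fork"   = the process forks exactly once after starting.
# "expect daemon" = the process forks twice (classic daemonization).
# Guess wrong and upstart follows a dead PID, which is the bug above.
expect daemon
exec /usr/sbin/mydaemon
```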

It's these kinds of bugs in upstart that bite people all the time. I could go on about its obtuse configuration, but I'll just declare unequivocally that upstart configurations are my least favorite part of system administration, bar none. `start-stop-daemon` reduces some of the pain but is also difficult to debug.

1. https://bugs.launchpad.net/upstart/+bug/406397 2. https://github.com/ion1/workaround-upstart-snafu/


Why are you transitioning to Ubuntu?


We're currently on Debian Squeeze so we need to move to something as Squeeze is approaching end-of-life.

If we stick with Debian, we'll find ourselves back in this situation fairly soon, since Debian's security team only supports oldstable for one year after a new stable is released (approximately — there isn't a set timeline available).

Ubuntu LTS releases, on the other hand, give us predictability and five years of support. That makes me very happy from an infrastructure standpoint. And the more up-to-date packages in Ubuntu repos make our feature devs happier, versus having to maintain backports for modern software.


I'm going in the opposite direction - we're an Ubuntu shop, but I'm considering transitioning to Debian. Ubuntu is too much of a moving target, and seems to want to go its own way a little too much. Its focus is less on the server and more on the desktop and mobile, though I don't know how much that would affect future work.

One of our coders had to figure out something to fix a problem in his ubuntu desktop, and reported that he'd found a guide that showed it was done a different way in each of the four previous releases. I think they're doing fine for their original target audience - naive users - but I'm cautious about their constant change and go-it-aloneness.


Will those packages always be up to date even on the LTS Ubuntus?

Will you be able to support Postgres 12 and Nginx 3 on those machines?

I find it useful to deploy the application on the same OS it was dev'd on, but in a decoupled manner. The thing I despise the most is needless upgrades of the infrastructure pieces of working apps (redis, python runtime, etc). Each should get its own static version.

A couple jobs back, I would package my own JVM along with the application (in an rpm) as ops was being really slow to upgrade machines. The only thing I depend on for a box is libc.


And how did you, or how do they now, handle security updates for this bundled JVM?

Sure - Ops being slow to upgrade is annoying, Ops patching a security hole and you leaving it open could be devastating.

There is a really good reason Debian forbids embedded copies of other packages, and why I despise "solutions" like Chef's OmniBus. These things are live grenades moments away from taking the whole ship down.


We would rebuild the rpm with a new JVM and redeploy the app after testing the fix.

Ha! How many copies of Lua are living in applications that Debian doesn't have control over? There are probably package maintainers excising those as we speak.

I don't want to get into a packaging philosophy war, not enough fun over text, needs to be face to face.

This is a good read on how overzealous single-tree packaging can make a mess of things: http://vagabond.github.io/rants/2013/06/21/z_packagers-dont-...

I had to bundle in my JVM because ops wouldn't allow more than ONE on a machine. I wanted

    /opt/jvm/1.6.22
    /opt/jvm/1.7.10
And I could symlink my apps to the one I needed. New apps could get new JVMs, old apps would continue to run just fine. But they wouldn't do this because it _broke_ Red Hat file system guidelines, for whatever definition of broke.
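As a sketch of that symlink scheme (paths are hypothetical, with /tmp/opt standing in for /opt):

```shell
# Install JVMs side by side, then point each app at the one it needs.
mkdir -p /tmp/opt/jvm/1.6.22 /tmp/opt/jvm/1.7.10

# A new app gets 1.7 via its own symlink; old apps keep their old links.
ln -sfn /tmp/opt/jvm/1.7.10 /tmp/opt/newapp-jvm

# Upgrading an app later is just repointing its symlink.
readlink /tmp/opt/newapp-jvm   # -> /tmp/opt/jvm/1.7.10
```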

Reuse can absolutely cause over-coupling. I prefer to have tractable dependency graphs.

Take a look at

* http://nixos.org/nix/

* http://www.gobolinux.org (defunct)


CentOS + collections would be my answer to those problems (with, presumably, the unstated "we don't want to pay a lot of money for support" as a third). But I'm happy with yum/rpm.


> Ubuntu LTS releases ... software doesn't get updated (except security patches) for five years

> more up-to-date packages ... no need to backport modern software

I get how Ubuntu gives you both of these features, but I don't get how it gives you both of them at once...


Ubuntu LTS releases tend to backport a good number of packages. For example, if you want a fairly recent nginx, you'll find 1.4.x built for 12.04 LTS: https://launchpad.net/~nginx/+archive/stable


Is there a benefit to using the nginx PPA rather than nginx's official repos[1]?

I use Debian Stable with several upstream repos, and I've never understood why people use a PPA when upstream provides a repo (e.g., nginx).

[1] http://nginx.org/en/linux_packages.html


The particular packages you need might be obtainable from Launchpad PPAs, which Debian doesn't really have an equivalent of unless you're willing to maintain your own.

If you want to phase in updates rather than depend directly on a third-party PPA, you can also set up your own PPA and copy packages to it as you wish.


The packages at release are more up-to-date, and the official backports repos are filled with newer versions.


The recent discussion on lwn.net is also interesting:

Positions forming in the Debian init system decision: http://lwn.net/Articles/578208/



Russ's posts are an incredibly well-thought-out and well-written discussion of systemd vs. upstart, worth reading for anyone who needs background on the two.


So that you don't kill their cgi bug tracker, here's the static archive:

https://lists.debian.org/debian-ctte/2014/01/msg00287.html


Who's responsible for changing the titles on these threads?

You've removed the interesting bit: That it's Spotify's ops people putting in their two cents.


The title-changing policy here is terrible. If the majority of the discussion is based on the post title, either delete the entire post or leave it alone. Changing the title is just confusing.


There are a couple of false positives, but I think the policy is generally beneficial; it prevents a lot of title-baiting.


It's not really that difficult to come up with a sensible rule to determine which titles should be "editorialized" (and pg could even make it site-wide): old posts deserve a [year], video/pdf/other media posts should mention the general format of the media, and posts whose titles don't reflect their context (e.g. this mailing-list email) should have context prepended. All that's necessary is a word or two in [square brackets].


I agree-- the title on this post is wrong and should be changed. It should be "#727708 - tech-ctte: Decide which init system to default to in Debian. - Debian Bug report logs"


I submitted it, and the title was changed and so was the author. What's going on here?


If you submit a link that someone else already posted, it just counts your "submission" as an upvote on the older post.


If you submit a duplicate URL, HN "accepts" it then redirects you to the earlier posting.


That's probably what happened.


Did they actually change the author of the post? I don't see it in your submission history anymore.


I submitted the post, but the title was changed from

> Spotify responds to "Decide which init system to default to in Debian"

I did mistakenly link to a dynamic resource, though.


Why are the top comments on HN always about meta/secondary stuff rather than the content itself?

ps. please don't be mean, do not upvote this!


The spotify bit is the least interesting thing to me.


It's the only unique bit, though. Nothing they're saying about systemd is exactly new material, whether for Debian or anyone else.


This sort of thing actually happens a fair amount, albeit usually without a "celebrity" company and often without the organisation even being named.

It's actually a difficult email to write (especially if you are disagreeing with a proposed change) as it can easily come across as a sort of childish blackmail.


Yeah, it seems like it could range from positive to negative depending on the precise tone and implications. On the one hand, if a large Debian user believes that some decisions will make their lives easier and others harder, explaining why gives Debian useful input it can use to better serve its users. Of course, one large company's interests might be outweighed by other considerations (and the considerations of other users), but it's useful input to have either way.

On the other hand, written in slightly the wrong way it can come across as bullying: "as one of your largest users, we strongly suggest you make this decision or we will have to pursue alternatives". It's even more tricky in cases where the user is also a significant supporter of the project (whether through staff resources, donations, etc.).

This one seemed reasonably written to me. It was a little light on the technical details of why specifically Spotify finds systemd works best for their use case, but the three bullet points do give some information.


Well, to put some perspective on their "weighing in" on the issue: the folks at Spotify really do care about Free/Open Source software (especially Noa, from what I've noticed).

I interpret their "weighing in" on the issue as balanced and constructive feedback to the community, from a community member.

I think it's great when people participate, and especially when they go full disclosure; i.e. saying who they are, where they're from, and especially why (in a positive/constructive way).


Currently using systemd on debian myself, with a few of my own custom .service files and a lot of falling back to sysvinit compatibility -- would love to see more upstream packages contain .service files. Things like "automatically restart if the service dies uncleanly" being handled by the init system are a godsend compared to managing init.d scripts which try to do that job themselves (things like "service start" getting into an infinite restart loop, then "service stop" says "service is not running", because the init script is stuck in a loop but the service itself hasn't been spawned...)
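For the curious, that "restart if it dies uncleanly" behavior is a couple of lines in a unit file. A sketch (the service name and binary path are made up):

```
# /etc/systemd/system/mydaemon.service -- hypothetical
[Unit]
Description=My daemon

[Service]
ExecStart=/usr/sbin/mydaemon --foreground
# Restart only on unclean exits (non-zero exit code, signal, timeout);
# a clean exit or an explicit "systemctl stop" is left alone.
Restart=on-failure

[Install]
WantedBy=multi-user.target
```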


init scripts were originally designed to be "dumb" scripts that only performed one simple task. other scripts could then be written around them reliably, without wondering what the script might be doing in the background other than loading some options and running a program.

with modern service management, a lot of extra "intelligence" has been added to how programs are executed and managed. this leads to levels of complexity and uncertainty that eventually lead administrators to hacking the hell out of the system to be able to use it reliably, usually disabling advanced features so they can control such features themselves.

on top of that, automation of service restart is a solved problem since... decades ago? it's trivial to have a service restart if it's killed. unfortunately, once such a function is added to your init system, people use it all over the place and don't consider the consequences of services restarting automatically. eventually they get bitten by an unforeseen problem and build in limits to the service restart, etc etc. system automation is a lot harder than most people think.


> it's trivial to have a service restart if it's killed. unfortunately, once such a function is added to your init system, people use it all over the place and don't consider the consequences of services restarting automatically. eventually they get bitten by an unforeseen problem and build in limits to the service restart, etc etc. system automation is a lot harder than most people think.

All the more reason to solve it once instead of in a billion different and differently awful addon packages.

Plus, systemd is actually better at this than supervisord, daemontools, etc., because of its cgroup usage: it can reliably kill all children before restarting, even if the process forks.

Finally, I encourage you to read the systemd docs: http://www.freedesktop.org/software/systemd/man/systemd.serv...

they are quite nice, and show that, in fact, thought has gone into it.

> unfortunately, once such a function is added to your init system, people use it all over the place

I'm not super on board with this view: for almost everyone, if apache goes down, they want something to restart apache, because the problem was probably transient and apache being down is a Big Problem. Plus, most modern supervisors know about things like "restart throttling".
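As a sketch of what "restart throttling" looks like in systemd's unit syntax (directive names from the systemd 204/208 era; the values are arbitrary):

```
[Service]
Restart=on-failure
# Wait 2 seconds between restart attempts...
RestartSec=2
# ...and give up if the unit is started more than 5 times in 60 seconds.
StartLimitInterval=60
StartLimitBurst=5
```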


> All the more reason to solve it

see, this is the problem. people don't seem to understand the paradigm of system automation fully. okay, let's take driving a car automatically as an example.

where are you going? let's assume we know that all the time. now how do we get there? we have to know where we are to figure out how to get there. and on the way, will we need to stop for gas or a bathroom break? once you figure out those things (and a few more) you can get to the meat of the "code", which is how to maneuver the streets without running into anyone. that part is the easy part, because our variables are based on concrete laws of physics, traffic, etc.

the hard part is all those other variables that change based on each trip you take. did we get a flat tire? did someone run into us? is there an unexpected detour? all of these things and more may or may not happen, and you really can't account for them all.

so while it's nice to have intelligent tools, the tools should be designed to make it easy for people to customize their services to their needs, and not attempt to solve all the problems themselves. in my experience, the simpler the tool is, the fewer assumptions there are.

this leads you to build in things like monitoring, trending, alerting, quotas, resource limits, and generally design your system better to detect, withstand and prevent fault, instead of just reflexively killing and restarting anything when the eventual problems happen.

> Plus, systemd is actually better at this [..] because [..] it can reliably kill all children before restarting

... so that when the state of your database/index/locking/etc goes wacky because some job was killed before it could release the lock, you can then write a lock-cleaner-upper and a post-killing-script-execution add-on to systemd. i'm sure they already thought of that (or encountered it on a running system) so it's probably already a feature, but if not: yikes.

(side note: i did investigate systemd initially and found tons of things they either didn't think of, or hadn't released as features yet. i wouldn't have been able to use systemd as a replacement for my current system without some of them!)


> [simple shell scripts lead] you to build in things like monitoring, trending, alerting, quotas, resource limits, and generally design your system better to detect, withstand and prevent fault

Simple shell scripts might lead people in that direction, but in practise, few people actually go in that direction, and those that do don't go all the way. Systemd might not have all of those features, but it has most of them (which is a massive improvement for most services), and it doesn't stop you from adding the rest yourself (which is fine for services who care about going all the way).


Would you mind naming some of them?


> so while it's nice to have intelligent tools, the tools should be designed to make it easy for people to customize their services to their needs, and not attempt to solve all the problems themselves. in my experience, the simpler the tool is, the less assumptions there are.

I'm unimpressed by this argument. I've used sysv-init and upstart (but, to be honest, mostly the sysv emulation) with Debian and Ubuntu for years, and often see problems with the packaged init shell scripts: bad 'restart' and 'reload' logic, unreliable 'stop', inconsistent output, etc. Upstart's configuration files have been a huge boon; people just screw them up less than shell scripts. Systemd service files are the same.

And really, for most things you don't need anything complicated. You want to listen on a socket. If something bad happens, there needs to be an alert and the thing should be restarted. If it can't restart, give it a couple seconds. Eventually a meat person will wander over and figure it out.

> this leads you to build in things like monitoring, trending, alerting, quotas, resource limits, and generally design your system better to detect, withstand and prevent fault, instead of just reflexively killing and restarting anything when the eventual problems happen.

systemd is great for quotas and resource limits, because it's such a heavy user of cgroups. It's an easy win. For the rest: do you build that into your server processes?

Look, at bottom, all init systems run an executable with arguments. If you need to run a shell script to launch your daemon, you can still run a shell script to launch your daemon. The things that are important are:

* Do they do the OS bootstrapping well?

* Are they reliable?

* What tools do they provide to manage your daemons?

SysV and Upstart seem OK at the first two. SysV provides almost no tools, Upstart has a bunch of improvements, though some are arguably busted. Systemd has done a really impressive job at the third.

The general response to that is the infinitely disappointing "I have an unspecified unhandled corner case and you are violating the unix philosophy by not using 10,000 poorly written shell scripts".

> ... so that when the state of your database/index/locking/etc goes wacky because some job was killed before it could release the lock, you can then write a lock-cleaner-upper and a post-killing-script-execution add-on to systemd. i'm sure they already thought of that (or encountered it on a running system) so it's probably already a feature, but if not: yikes.

I posted a link to the man page already; if you go to it and search for 'ExecStart', you will see a whole list of things: you can specify commands to start, reload, or restart your daemon, to run after stopping, as watchdogs, etc. If you have these problems, they can be handled just fine.


Oh great, a car analogy.


> Plus, systemd is actually better at this than supervisord, daemontools, etc., already because of its cgroup usage- it can reliably kill all children before restarting, even if the process forks.

That's exactly the wrong thing for some services, for example sshd.


Which is why it doesn't do that for some services, for example sshd :P Each login session gets a separate cgroup, so the ssh daemon and all the users using it can be managed separately.


A lot of Uncertainty and Doubt you're raising there... I'm wondering if you actually used systemd?


I've never seen an init.d script that tried to restart something automatically, and attempting to do so within the script itself is folly. Any init.d script that gets into an infinite loop would be deleted immediately. The init.d script should be starting a daemon and then it should exit. If restarting a service like this is required, then the init.d script should spawn something that monitors the daemon (there are numerous tools that do this). Recent syslog-ng comes with one built in.

There's so much confusion over /etc/rc, /etc/init.d, /etc/rc?.d, init(8), inittab(5), initscript(5), what they are and how they are related. Getting the machine set up and in a usable state should be divorced from the services that the machine runs, but this is a fine line. Personally, I consider a machine "usable" if the filesystems are mounted, the machine is network accessible, and a configuration management system starts and/or sshd is running so one can log in (and even then, this last one is not a strict requirement). Service management beyond that should be handled as just another running process.

It's unfortunate that systemd is being pushed as an init(8) replacement, and doesn't seem to support running as a service itself. That being the default, with the option to replace init(8), would most likely lessen the controversy and reduce risk as systemd is developed and rolled out. "Look, you've been doing process management of individual services with systemd already, and you see the benefits; did you know that systemd can also replace init, to get those benefits system-wide?" It's not presented like that, however.


Upstart also has features to handle this scenario. For my purposes, I don't really care whether they choose upstart or systemd, just as long as the init.d scripts go away.


I've always liked Linux for the simplicity of services and how they're handled through simple, clear and very understandable init.d scripts.

Any distro which takes them away stops being my friend.


> simple, clear and very understandable init.d scripts

At best, init scripts are 5 lines of actual functionality and 100 lines of boilerplate, and in my experience very few init scripts are at their best. With systemd, the config file contains the 5 lines that actually matter and nothing else. I'd consider the latter to be much more simple, clear, and understandable.

The first example is from a project that maintains both: a particularly verbose service file next to an average init file. Compare not just line counts, but complexity (variable-filled function calls, branching logic, etc., vs. a set of key=value pairs).

https://github.com/varnish/Varnish-Cache/blob/master/redhat/...

https://github.com/varnish/Varnish-Cache/blob/master/redhat/...


Boilerplate of extremely varying quality, no less. A bad init.d script can ruin a whole day, and in my experience most packages not included in a base system are ticking time bombs.

I wouldn't describe them as simple at all. What init.d/rc.d scripts amount to is pushing all the PID 1 init work out to every individual service and making them figure it out on their own.


Great example. Another metric I like to use is to look at the file's history, to see how many times it has been patched.


Characterising the 16000+ LOC of shell in a Debian 7 init.d directory as simple and clear seems like describing sendmail configuration as easy.


Sendmail.. Eww.. Whenever one gets the idea of using a macro language to process the configuration files because writing the "raw" configuration format is too difficult, one should sit down and rethink things.


Those init.d scripts are different per distribution. With systemd, the same thing is provided in a configuration file: much simpler, clearer, and more understandable. Further, "service $FOO status" actually shows you stderr+stdout output, making it easy to debug in case of failure.


Systemd lets you still use init.d scripts to run services. It works better with services that use .service files, but it still supports the old ones.


It's the cgroups man.


/etc/inittab?


I highly recommend that anyone interested in systemd who doesn't know much about it read Lennart Poettering's initial descriptions of it. I knew very little about Linux init systems and I found this incredibly interesting and informative. http://0pointer.de/blog/projects/systemd.html


Thanks for that. Very interesting read.


Glad I'm not the only person who thinks upstart is back-assward. I've never understood how its model of "start a service iff all its dependencies are running" makes any sense: it (a) starts random other services I don't care about, just because their dependencies are met, and (b) forces me to manually track down and start all of a service's dependencies in order to start it.

Though I'd be glad if someone could explain to me how it does make sense.


(a) makes sense because it is assumed that if you have the job enabled, you want it to be started. If you do not care about those services, just disable them.

(b) is indeed a PITA.


Tangent, but the 2nd point is intriguing. How does one use cgroups to set up resource limitations? Is there any kind of decent front-end? I've seen the kernel documentation (https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt), but how do I use it? For example, if I wanted to give some shell users limited accounts where they can't use more than 512 MB of RAM and some CPU quota each, I gather cgroups can do this, but I haven't been able to figure out how to set something like that up. (Yes, I know you could handle that use case by giving each user their own virtual machine and letting VMware or Xen or VirtualBox handle the RAM/CPU quotas, but that's often not what I want.)
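For what it's worth, the raw interface is the cgroup virtual filesystem itself. A minimal sketch (requires root, assumes the memory controller is mounted at /sys/fs/cgroup/memory, and the group name "limited" is made up):

```
# Create a group and cap it at 512 MB of memory.
mkdir /sys/fs/cgroup/memory/limited
echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/limited/memory.limit_in_bytes

# Move an existing process (e.g. a user's shell, PID 1234) into the group;
# children it spawns inherit the membership.
echo 1234 > /sys/fs/cgroup/memory/limited/tasks
```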


You're in luck -- I've done a screencast on cgroups where I explain what they are and how you can use them @ http://sysadmincasts.com/episodes/14-introduction-to-linux-c...

Also, it is all command line at this point, but there are some concepts floating around about creating a gui @ http://mairin.wordpress.com/2011/05/13/ideas-for-a-cgroups-u...


Could you double-check your screencast? My understanding is limited, but I think systemd 205 changes the cgroup interaction methodology: basically, systemd becomes the single entry point into the cgroup API. You cannot touch cgroups any other way except through systemd. FYI, this is not systemd taking arbitrary control, but what the upstream kernel developers recommended.

My understanding is that Fedora 20 (which uses systemd 208) will be affected by this, as will most other distros. However, jessie is proposing systemd 204, so your method might still work.


Is there a reason for this? I'm heavily invested in Ubuntu and I'm also making heavy use of the cgroups API in its current form. Does that mean that I can't use cgroups without systemd in newer kernels? Or do I miss cgroups features?

I'm quite happy with the mountable filesystem at the moment. It's scriptable, reasonably flexible, and still easy to use for solving my problems.


Found a discussion of the changes & rationale here: http://www.freedesktop.org/wiki/Software/systemd/ControlGrou...


That sounds awful. The idea that systemd vs. another "cgroups owner" changes the API you need to use for interacting with cgroups will effectively impose a compatibility burden for every app that wants to take advantage of it.


For the screencast I was using CentOS 6.4 which is not systemd enabled.


> Tangent, but the 2nd point is intriguing. How does one use cgroups to set up resource limitations? Is there any kind of decent front-end? I've seen the kernel documentation, but how do I use it?

Use systemd: http://0pointer.de/blog/projects/resources.html

Systemd makes these kernel features finally usable.


That looks interesting for some use cases, but I don't see a way to use it to assign limits to users, or to processes that aren't started as systemd services. What I have in mind is that I give someone ssh access, they run R or matlab from the command line, but limited to 512 MB (or 1 GB or whatever) of total memory.


On RHEL systems you can use the 'cgred' service to classify processes based on the user running them or the command itself. For example, say you have user X who always logs in, runs rsync commands, and saturates the link. You can automatically classify this user or the rsync process into a cgroup that throttles network I/O (or, in your use case, memory) -- problem solved.
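A hypothetical /etc/cgrules.conf for that rsync scenario (the user, group, and cgroup names are invented; cgrulesengd is the daemon that applies these rules):

```
# <user>[:<process>]   <controllers>      <destination cgroup>
# Any rsync run by user alice goes into a throttled group:
alice:rsync            net_cls,memory     throttled/
# Everything run by members of the "students" group gets memory-capped:
@students              memory             students/
```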

There is some great documentation @ https://access.redhat.com/site/documentation/en-US/Red_Hat_E...


It's not RHEL-specific. Ubuntu and pretty much everything that includes cgroups tools has it.


You could build a matlab docker container and limit memory with 'docker run -m 512M delirium/matlab' :)

See http://docker.io


You're in luck; this specific usecase is documented on the Arch Wiki:

https://wiki.archlinux.org/index.php/cgroups#Matlab


I have been investigating this over the past couple of days for a pet project of mine (a local college needs to set up a programming competition, and I need to resource-limit compile jobs).

The way that systemd handles it is three-fold:

One, in the .service file, you can put in resource constraints [1] - which will be honored by a service on startup.

The second way is to create a .slice file [2], which is exactly that: a slice in the cgroup hierarchy to which you can assign resource limits (bounded by the slices above it in the hierarchy). All services assigned to this slice will SHARE that cgroup's resource constraints. For your specific requirement, this is the way to go. Create a user.slice (to affect all user accounts) or a user-<some uid>.slice to affect users on an individual basis (it will still inherit from user.slice).

The third way is to invoke systemctl set-property (with properties defined in [1]) to affect an already running process.

What I am attempting to use is systemd-run [3] to start transient processes (not background services) with resource constraints. However, I have filed an enhancement request about some missing features here [4]. Feel free to star it!
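To make the second way concrete, a hypothetical per-user slice (directive names from the systemd 205-era resource-control interface; the values are arbitrary):

```
# /etc/systemd/system/user-1000.slice -- hypothetical
[Slice]
# Everything in this slice shares a 512 MB memory cap and reduced CPU weight.
MemoryLimit=512M
CPUShares=512
```

The third way is the same knobs applied at runtime, e.g. `systemctl set-property user-1000.slice MemoryLimit=512M`.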

[1] http://www.freedesktop.org/software/systemd/man/systemd.reso...

[2] http://www.freedesktop.org/software/systemd/man/systemd.slic...

[3] http://www.freedesktop.org/software/systemd/man/systemd-run....

[4] https://bugs.freedesktop.org/show_bug.cgi?id=73685
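To make the second and third methods above concrete, here's a rough sketch (the unit name and limit are illustrative; MemoryLimit is the cgroup memory property documented in [1]):

```
# /etc/systemd/system/user-1000.slice.d/limits.conf -- hypothetical drop-in
# capping the total memory of user 1000's session; it still inherits any
# limits set on user.slice above it in the hierarchy
[Slice]
MemoryLimit=2G
```

The third method is then e.g. `systemctl set-property user-1000.slice MemoryLimit=2G`, which adjusts the already-running slice on the fly.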


There is runguard [1] from the DOMjudge project. It may be able to do what you are looking for.

[1] https://github.com/DOMjudge/domjudge/blob/master/judge/rungu...


well yes, and there is "isolate" from the Moe project. But systemd is teh new shineyyyy.


Have a look at the online judges created by various universities for the ACM Intercollegiate Programming Contest (ACM ICPC). Back in the day I used the one from the University of Valladolid, but I can't seem to find the source code now.


This answer explains how to create a cgroup with the quota and then assign the user to it: http://unix.stackexchange.com/a/34335

See also http://manpages.ubuntu.com/manpages/lucid/man5/cgrules.conf....


Excellent, thanks! cgrules/cgconfig seems to be what I was looking for.

Fwiw, it looks like those files don't do anything in the default Debian install; you have to install cgroup-bin to pull in the daemons that consult them.


Change their shell to a bash script that calls bash and has a call to ulimit in it (this will affect everybody, though, I think). There's another program called cpulimit that you could use.
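The wrapper-shell trick might look like this (a sketch; the path and the 1 GiB cap are arbitrary):

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/limited-bash, set as the user's login shell
# in /etc/passwd. Each process the user starts inherits the limit, but it
# caps each process individually, not the user's total usage across processes.
ulimit -v 1048576   # virtual memory cap in KiB (1 GiB)
exec /bin/bash "$@"
```

Note that once lowered, the limit can't be raised again by the user in that session.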

here's some cgroups resource limiting "resources":

http://dustinoprea.com/2013/11/05/better-resource-throttling...

http://www.janoszen.com/2013/02/06/limiting-linux-processes-...


ulimit only assigns per-process limits though, not per-user. Afaict you need cgroups to limit users' total usage across processes.


It's a lot easier for one systemd-loving company to switch to RHEL/CentOS/Fedora than for everyone else to deal with problems with systemd.

systemd is at least controversial enough in ways that matter to Debian (portability off Linux, etc.) that I think it's nice not to have a monoculture here.


Really? I've been away from Linux for a few years, but it used to be my career. I recently checked out systemd to make a service on a CoreOS box.

systemd replaces so much overlapping logic, has config files for services rather than scripts, and uses an existing known config format. Basically everything that sucked about SysV init. What's not to like?

Edit: I see you've added portability off Linux. That's a very small concern, and certainly lower than 'deep Linux integration' for most Linux users.


> I see you've added portability off Linux. That's a very small concern, and certainly lower than 'deep Linux integration' for most Linux users.

It didn't help systemd's popularity that the people behind it have tried to do disgusting stuff like convince GNOME to make GNOME dependent on it, which doesn't just affect "portability off Linux" but the portability of GNOME. (I don't believe they got that one through.)


What happened with GNOME is accurately described by myself in:

http://blogs.gnome.org/ovitters/2013/10/10/wayland-vs-usrsha...

http://blogs.gnome.org/ovitters/2013/09/25/gnome-and-loginds...

I'm a GNOME release team member. We get really nice contributions from OpenBSD. But 100% of the development is done by people working on Linux. It has been that way for at least 5+ years.

These "portability" complaints are IMO empty talk: actually contribute and make things happen. However, at the moment systemd will share various things across distributions and desktop environments, allowing GNOME developers to not need to maintain as much code as we do (e.g. the freedesktop.org ConsoleKit).

I'm guessing people won't read the blogposts, but oh well :P


This is incorrect. GNOME depends on logind, which is one component of systemd and which has ALREADY been adopted by upstart itself [1].

logind is basically a much better replacement for Consolekit (which is not actively maintained) and the Gnome devs want to use the other part of systemd (e.g. "systemd --user" ) to manage user sessions rather than write and maintain the code. It makes complete technical sense.

[1] http://phoronix.com/forums/showthread.php?78518-Ubuntu-Plans...


Unless you're concerned about compatibility with other kernels, which is something the systemd folks explicitly don't want to deal with. The biggest problem with pushing all the systemd baggage into GNOME, instead of working to create actual standards and APIs, is that the BSD folks, the OpenSolaris folks, etc., who all have their own init systems with different pros and cons, are being bullied out of any say in GNOME. Unfortunately, GNOME is packed with people from Red Hat who have no desire for things like consensus, and would rather chase the flavor of the week.


It's less a question of "don't want" than "cannot". Systemd leverages core Linux kernel features like cgroups and, soon, kdbus. All of this, mind you, is what the upstream kernel wants.

Other OSes - like Hurd and FreeBSD - lose out, because they don't have these features. The systemd developers have invited others from the community to be maintainers if they are interested in porting - but nobody has responded.

I want to leverage my Linux box better - already I'm seeing a use case for cgroups and systemd to build and contain services for myself. I don't have any problems with Hurd, but I don't want it dictating how I run my servers.

The only reason the CTTE discussion went on for this long is because of the "greater good" outside of Linux. Otherwise, systemd is the best for people living on Linux.


You're changing the goalposts. The initial discussion was about gnome, and their desires to ignore the many many other kernels it currently runs on in a short sighted desire to minimize developer issues. In those cases, it is absolutely a detriment to the community to force more and more requirements on systemd, instead of putting out specifications for generic dbus consumers and producers. In this context, I would definitely argue that it's not in the project's best interests to depend on another project that has stated they have no intention to make things work across different kernels.

This is the issue that people who only deal with linux don't understand. Were I to try to put software in gnome that depended on features only found in ZFS, I would be rightfully criticized for it. Why should Linux-only features get a free pass?


What you're saying is utterly incorrect, speaking as a GNOME release team member. Pretty poor to first complain about moving goalposts, yet at the same time misrepresent what was done.

In the beginning of 2012 (Jan/Feb) I highlighted the various problems. Nobody stood up to help out. Now it is 2014 and we do NOT rely on systemd.

Further, we DO rely on dbus APIs and can rely on ConsoleKit still.

Get your facts straight please.


What was done was certain parties made a one-sided announcement that ConsoleKit was deprecated without any discussion regarding the people who used it. Where was the call for new maintainers? Where was the call to see if the project still was viable? Re-reading the "discussion" thread still feels to all the world like a decision was made, regardless of the merits of keeping the project alive, because Lennart et al wanted to move everyone kicking and screaming to their new toy.

Furthermore, where is the GDM documentation on what it produces/consumes regarding dbus? Where's the API spec. I can find that it does use org.gnome.DisplayManager as its root, but what more from there? I have the feeling that few people are willing to help out here because Gnome feels about as transparent as a brick regarding these decisions.


Not just that, Poettering has said that he believes the only platform worth targeting is Linux.


Is Hurd successful? What about SmartOS? How many embedded boxes use BSD compared to Linux, versus ten years ago? Where's the trend headed?


Being better than SysV init is not something systemd has a monopoly on...

Portability off Linux isn't a small concern for the Debian project as far as I know

Red Hat-related distros are very good distros based on systemd and are still available; it isn't necessary for Debian to also adopt systemd in order for people to use and enjoy it


Debian has an active port to using the FreeBSD kernel, as well as a port to Hurd. So yes, portability off the Linux kernel is a concern to Debian.


In its defence, the portability problems are largely because the Linux kernel has awesome features (i.e., cgroups) and other kernels don't -- I think it's better for the world if we encourage the other OSes to add cgroups-like APIs to their kernels rather than hold Linux back


Should the linux world have adopted the FreeBSD jail API rather than spinning their own lightweight containerization API?


Why turn things around? Systemd makes Linux specific features available. Nobody ever complained about FreeBSD having jails. Yet there are complaints that Linux has Linux-only features.

So by using your FreeBSD jail example: Because there aren't any complaints regarding jails, systemd should not be complained about.


On the other hand, wouldn't that imply that people can't innovate with other solutions in the same space (cgroups) because they are locked into it by the init system depending on it?


Common feature sets don't necessarily mean lack of innovation -- for a parallel I'd say kernels = web browsers and cgroup APIs = HTML.


What systemd problems? Please elaborate.


Two things I love about this: ops guys getting involved in the debate, and the fact that Spotify has somebody whose job is to be an open-source ombudsman.


The debate moved to the CTTE long ago. Stuff like this is noise at this point.


While you're right, don't be such a grump. It's cool that some businesses try to stay involved in this :)


I have to admit, I was dragged kicking and screaming into the new init world, and really didn't want to leave SysV init. I haven't used systemd, but I have been using upstart and it's grown on me.

After reading this post, I did look at: https://wiki.debian.org/Debate/initsystem/upstart

Which does spell out some reasonable pros and cons of both systems. I'd say the biggest problem with upstart (and possibly the deal breaker) is the licensing terms. Yes, it's open source, but no, I really don't want to have to fill out Canonical's contributor license agreement if I want to contribute. I'm not really sure why Canonical doesn't just GPL it like everyone else and leave it at that.

There are some tools out there for upstart, but one thing which would be really nice would be a built-in command for visualizing the init graph in ASCII. It doesn't have to be fancy, just enough to know where your init script is going to be called in the boot process.


> I'm not really sure why Canonical doesn't just GPL like everyone else and leave it at that.

Because they want to be able to have the benefits of proprietary code for themselves (being able to provide their own software under a proprietary license to other parties), without allowing anybody else that same ability.

If it were under the GPL and each contributor retained their copyright over their submission (as is the case with the Linux codebase), this wouldn't be an issue. But by having complete ownership over all the code in the codebase, Canonical is free to change the license at any time they choose.


> If it were under the GPL and each contributor retained their copyright over their submission

DISCLAIMER: I'm not a lawyer, everything I say below may be totally wrong.

As far as I know, this is exactly what happens. Canonical's Contributor License Agreement stipulates a form of joint copyright assignment where both Canonical and the author have ownership over the contribution, thereby creating two source trees: one which Canonical has full control over, the other owned by the collective of individuals who contributed to the project (just like the Linux kernel).

Even RedHat requires you to sign a similar CLA in order to accept contributions for Fedora, 389 Directory Server (LDAP), etc. The FSF does something similar for their project, so does Node, Apache, etc. Canonical's CLA used to be nasty but since they started using the Harmony Agreements I think their terms are reasonable.

Personally, as long as they don't ask me to relinquish all and every right on my contribution and/or ask for unreasonable provisions, I'm fine with a CLA.


This is incorrect. Canonical formerly used a (non-joint) copyright assignment agreement, which was widely criticized. It later sponsored the Harmony project, which was an unsuccessful effort to popularize a standardized suite of contributor agreements. When the Harmony agreements were released in 2011, Canonical began using one of the Harmony CLAs. It doesn't stipulate copyright assignment; rather, the contributor grants Canonical a broad copyright license that is similar in effect to copyright assignment except that the contributor retains copyright ownership of the contribution. It is not dissimilar in policy to the widely-used Apache model of CLAs.

I'm a strong critic of CLAs generally, particularly these types, but they ought to be described accurately. The effect of use of such CLAs by entities that will (by employment of a majority of developers, say) hold copyright over much of a relevant codebase is that you have little pieces of code that are owned by outside contributors but licensed in under maximally broad terms which basically give the inbound entity the ability to do whatever they want with the code, short of assigning copyright in it.

> Even RedHat requires you to sign a similar CLA in order to accept contributions for Fedora, 389 Directory Server (LDAP), etc.

No (with some legacy use of Apache-style CLAs basically confined to some JBoss projects and gradually being dismantled).

I've been Red Hat's open source lawyer since 2008 so I think I can speak authoritatively on this. Red Hat does not use a similar CLA for contributions to Fedora (though at least nominally it used an Apache-style CLA prior to 2010). Since 2010 Fedora account holders agree to the Fedora Project Contributor Agreement (http://fedoraproject.org/wiki/Legal:Fedora_Project_Contribut...) which is a simple agreement that says code is licensed by default under the MIT license (and content under CC BY-SA) unless the contributor indicates otherwise. 389 basically followed the Fedora approach for reasons best understood as historical inertia (by coincidence I noticed last night there is some incorrect information about this on the 389 website).

Red Hat starts up tons of open source projects, and most of them do not require contributor agreements. When I got to Red Hat in 2008 this was already the case, if only because of the nature of Red Hat's culture at the time. But the company was on the verge of applying a uniform (Apache-style) CLA requirement across all Red Hat-maintained projects -- much like so many companies do today. I very soon realized how problematic that would be and I think one of my big accomplishments was not only to stop that from happening but also to reverse it (the trend for that set of projects that used CLAs in the past has been to get rid of them), and to formulate a strong legal-policy basis for doing so, maybe best recorded in my two-part critique of the Harmony contributor agreement suite in 2011: http://opensource.com/law/11/7/trouble-harmony-part-1 .

So please do not use Red Hat as evidence of support for use of broad contributee-friendly contributor agreements. Red Hat has been the leading company, maybe the only company, articulating an alternative viewpoint.

[Edited to fix formatting and to tone down initial scorn slightly] [Edited to acknowledge JBoss situation] [Edited to note Harmony agreements unsuccessful in that few use them]


@fontana, while I think Red Hat's Fedora CLA is better than most, I still think it's problematic that its fallback is a highly lax, permissive license, rather than "license of the project". I still don't understand why Red Hat won't default to inbound=outbound for Fedora.

FWIW, I collected a lot of links to various anti-CLA materials in my anti-Harmony blog post: http://ebb.org/bkuhn/blog/2011/07/07/harmony-harmful.html


The 'default to MIT' idea was developed primarily by Tom Callaway. The policy for documentation is in effect inbound/outbound since CC BY-SA became the standard Fedora docs license at a certain point (replacing the horrific OPL).

I think it has to be understood in light of the historical circumstances that existed during 2008-2009: the fact that Fedora had begun using an Apache-style CLA a few years earlier, and the fact that Red Hat was, for a moment, contemplating broad use of an Apache-style CLA (until about mid-2008).

"License of the project" certainly makes sense for nearly all FLOSS projects. Certain aspects of what distros do might be a little different though. Fedora is a project but it has no true 'license' as such other than the sum total of all the licenses of the pieces of the distribution and other things associated with the project (such as infrastructure projects and wiki content). So there's no single "license of Fedora". There are some Fedora-specific projects where 'license of the project' would work and I would say for those projects the Fedora contributor agreement is of dubious value. The other scenario, and the main one originally contemplated for the Fedora CLA, was the somewhat absurd case of RPM spec files. It's not clear to me that spec files, to the extent copyrightable, should match the 'license of the project' being packaged (which is often not pin-down-able to a single license).


The thing about CLAs: I always figured that if I'm working for a company (by giving them a license to close-source my contribution to a non-BSD-flavor-licensed codebase), I should be getting paid.

OFC small changes or contributions to something like the FSF have to be judged differently, but that's the big picture.


> I'm not really sure why Canonical doesn't just GPL like everyone else and leave it at that

That is about copyright, not about the license. Organizations that require this include the FSF and Mozilla, for example. See:

http://www.gnu.org/licenses/why-assign.html


But FSF/GNU contributions are guaranteed to stay under a Free Software license. Canonical is not similar. Not at all the same situation. Further, IMO FSF/GNU/Mozilla are also bad for requesting this.


The FSF doesn't require this for all GNU projects. Each GNU project can decide if they want to require a CLA or not.


I'd say the biggest problem with upstart (and possibly the deal breaker) is the licensing terms.

Upstart isn't even a real solution for Ubuntu: they still use SysV init scripts for many daemons (I don't know why exactly). Because upstart cannot monitor daemons that it started through init scripts, it's not reliable for process monitoring...

Also see: https://wiki.debian.org/Debate/initsystem/systemd#Upstart


From linux.conf.au 2014: the 6 stages of systemd: http://www.youtube.com/watch?v=-97qqUHwzGM


Yeah, that was me as well.


The most remarkable thing about doing this talk was how many people have told me, either before (on hearing what I was talking about) or after, that they had various strong opinions on the topic, but hadn't spun up systemd on their servers.

The most gratifying was the number of people who said they'd give it a try after the talk.


Was it based on previous software shipped by L.P. (PulseAudio was a difficult exercise, to say the least)? Or just the change to decades-old SysV habits that felt heavy and potentially scary?


I didn't conduct a scientific survey, but I think more the latter than the former; also, a lot of peoples' knowledge has been very much based on the public debate, which tends to the idea systemd offers benefits for desktops. Which may be true, but I see far more benefits for desktops.


(That should be "more benefit for servers", of course)


I think this is the most important bit "dependency model of systemd is easier to understand, explain and work with than the event based counterpart of upstart.". IMO upstart is over-engineered to say the least.


Systemd is the Linux init system. Using anything else at this point is folly. This debate is a tempest in a teapot stirred up by whiners and haters who are resisting what the community overwhelmingly supports as a technically superior, easier, and designed-for-Linux init system.


It's better than SysV init, but there are a dozen other init's out there, all with different properties. Not everyone cares about "linux on the desktop," and not everyone thinks that freedesktop.org is doing the best job as far as that goes either.


It's better than SysV init, but there are a dozen other init's out there, all with different properties.

And none of them leverage Linux's features as extensively as systemd. By the Linux init system, I mean the one overwhelmingly supported by most of the Linux community, and the one which is best designed to work with Linux.

Not everyone cares about "linux on the desktop," and not everyone thinks that freedesktop.org is doing the best job as far as that goes either.

Who's talking about desktops? Systemd is not a freedesktop project, and it was designed to make server administration easier by offering much finer grained control and monitoring of system services with a much better interface. Many of its most controversial features -- such as binary log files -- don't make sense except in a server context where the tampering-attestation features of systemd-journald make it a far more secure solution than plain text logs.

You should really read Lennart's essays and blog posts about systemd to get a clearer picture of its tremendous advantages.

Whatever you do, you'd best get used to systemd. It has all but won.


I'm not trying to be snarky here; but systemd is hosted on freedesktop.org, is it not? That is its primary place of development as well, no? Admittedly, I've always associated systemd as a project as part of this nebulous "effort" to bring linux on the desktop forward. I also assumed that's why they went to the trouble to integrate systemd into dbus so heavily.

I have read some of Lennart's essays -- but I generally strongly disagree with his conclusions, so I haven't been paying attention for a while. I think his solutions are over complicated and rely on too much impenetrable technology to actually save me any time. The arguments have been made elsewhere, so I won't go into it that deeply, but I think departure from the "unix way" is folly and will only make administration more difficult.

As far as getting used to it goes; this is still unix. I can still install whatever initd I like after the fact. Which I do. Yes, this means I have to do more work than I would have to do if I relied on vendor packages, but we generally use custom packages for all our critical services anyways. Since we're going to have to deal with it whether or not we're using systemd, and we're already rolling out a non-standard initd to our distributions, we don't feel we have to make any investment in learning systemd.


Freedesktop provides hosting for systemd, but it is not really a project under their banner.

I won't go into it that deeply, but I think departure from the "unix way" is folly and will only make administration more difficult.

"There is nothing more gray, stultifying, and dreary than a life lived inside a theory." --Jaron Lanier

Unix as a design philosophy is dead, or it is in serious need of a revamp in order to cope with the realities of modern systems. The complexities of modern software call for parts that function together as an integrated whole and are designed to work with each other -- not parts that fulfill one limited task and abdicate all further responsibility. It's time to let go of the 1970s conception of the Unix way if we want to build Linux into a modern system.

In fact both the dominant desktop Unix -- Mac OS X -- and the dominant proprietary server Unix -- Solaris -- employ an advanced init system similar in many respects to systemd. So there is already established precedent in the Unix realm for what Lennart is doing for Linux with systemd.

And again, there is overwhelming support for it in the Linux community, so much so that support for non-systemd configurations is already drying up in a variety of third-party upstreams. Systemd is the path of least resistance.


> Unix as a design philosophy is dead, or it is in serious need of a revamp in order to cope with the realities of modern systems.

Oh, you young whippersnapper, there is still so much in the world for you to learn.

IT is the poster child of the NIH syndrome. No other knowledge area suffers from it as much as IT does. For a system design paradigm to have survived 50+ years in this environment, it must be quite good.

Settling on dbus for IPC is about as bad as settling on CORBA would have been ten years ago. Settling on a simple Unix socket following the everything-is-a-file paradigm is just as correct today as it was in the sixties.


dbus provides useful abstractions -- like broadcast/multicast -- that are lacking in the traditional Unix IPC mechanisms and are cumbersome and slow to implement.

That's a big part of why it's superseding almost every other IPC mechanism out there for critical system messages, and why it's going into the kernel.


What made me hate upstart was that it still is willing to get the default runlevel from /etc/inittab, but it doesn't bother to emit warnings to the user that the rest of /etc/inittab is completely ignored, at least on the version of CentOS I was using. I think it's particularly user-hostile design for any program to accept configuration that does nothing (excepting whatever comment syntax a particular config file uses.)

I would vastly prefer it if it either made lots of complaints about the ignored lines in inittab, or even if it failed hard -- because then I would have to fix the problem immediately.

I have used systemd, but never actually poked around at its configuration (having just run Debian with it as a desktop user), so I can't properly judge it yet.


I don't understand how this is still anything to discuss. systemd is the only option; GNOME is already using it for some parts and will probably not hesitate to integrate it even more deeply to push Poettering's agenda.

Whether or not the sysadmins want to learn systemd and regardless of whether it's the technically superior super-init system (you can't really call systemd an init system anymore), it's going to be forced down our throats anyway. What's with the holdup?


GNOME is far from the only FOSS desktop these days. A desktop that demands you have a specific init system is not one I care to use.


Systemd provides various things that desktop environments want to use. This is not limited to GNOME. I talked to Enlightenment and they're going to make more use of various bits (e.g. user sessions). Similar with KDE.

Dismissing it entirely like you're doing and pretending this is about forcing: good luck with that. Just like distributions were forced to use it right? Not merit, just pushed?

Try coming up with something concrete. Until that the lack of specifics in your posts tells me enough.


I'm confused about debian stable vs testing vs unstable. Is there a branch that keeps relatively recent versions of packages and is reasonably usable as a daily driver?


Yes, there is: it's exactly what unstable is. Packages in there are reasonably recent and generally match upstream functionality. There are occasional compatibility problems between packages, but that is something you cannot avoid if you want "relatively recent versions".


I tend to prefer using testing for getting recent packages on a desktop -- packages automatically migrate from unstable to testing if no bugs are reported within 10 days, which tends to mean a slightly more stable system, but still with reasonably recent versions.


So unstable is reasonably stable?


Except for experimental, which isn't even a coherent system, all of debian's branches are relatively stable. For personal desktop use I find unstable (Sid) to be just fine. I know it sounds bad but my rule of thumb for avoiding major problems is this: If synaptic wants to delete a bunch of important looking packages during an update - hold off a few days for things to settle down.


Yes. Been using it on my desktop for five years, had one single instance of breakage, which is better than my Ubuntu workstation.


I'm confused about debian stable vs testing vs unstable.

As the joke goes: unstable means testing, testing means stable and stable means old.


I've always heard it joked that "unstable is two letters from usable, and stable is one letter from stale". (I personally run unstable on laptops/desktops and stable on servers, and appreciate both.)


I use Debian on all my computers - the stable branch for servers and testing for the others. Debian testing is sufficiently up to date for my taste. The very few things that are not can usually be installed anyway through backports or another repository. The last case I had (but that's perhaps a once-a-year scenario) was with Batteries [1]. I needed version 2.1 but Debian testing only has 2.0, so what I did was simply install Batteries through opam [2] rather than apt-get install it.

[1] http://batteries.forge.ocamlcore.org/

[2] http://opam.ocaml.org/


Do you ever use a PPA? Most of the custom repos are meant for Ubuntu, and I can't ever find any which will respond to "sid" or "jessie" or the like.


Nope I never had to use a PPA repository, sorry.


> Is there a branch that keeps relatively recent versions of packages and is reasonably usable as a daily driver?

For workstations where I usually want new packages I use unstable. It's not really "unstable" (IIRC Ubuntu is based on it); it always has up-to-date packages and gets bug fixes in short order. An unstable installation hasn't broken on me once in the last 4 years.

The problem with testing is that bugs take too long to get fixed (weeks to months). It's the testing environment for the next stable release, so it doesn't really change too often and the packages aren't really the newest versions.

For servers I'm driving stable because I'm rather conservative with servers.


"Testing" is probably what you want.

Unstable is all relatively recent but likely to break regularly: it is its job to break so problems are found before packages are moved on.

Testing is less up to date but should be pretty stable. Before each new release, Testing is frozen except for changes that fix problems (i.e. no new packages or version updates, except where an update/patch fixes a problem deemed significant for release); once the freeze is complete and the result declared ready, the current Testing becomes the new Stable. It used to be that Testing did not necessarily get timely security updates like Stable, so it was strongly recommended against for production and/or public use, though this is no longer the case except for a period after a new release.

"Stable" is just that: it changes as little as possible, updates generally being only to fix security problems or other significant bugs that are found. This means that packages which see a lot of active development can become quite out of date with respect to new/updated features.
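If you want testing as the daily driver with the ability to cherry-pick from unstable, apt pinning makes that reasonably safe (a sketch; the mirror URL and priority numbers are examples):

```
# /etc/apt/sources.list -- make both branches available
deb http://http.debian.net/debian testing main
deb http://http.debian.net/debian unstable main

# /etc/apt/preferences -- prefer testing; unstable only on request
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 300
```

Then `apt-get -t unstable install somepackage` pulls just that package (plus needed dependencies) from unstable while everything else stays on testing.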


That said, unstable isn't really THAT unstable...it's generally (at worst) no worse than the non-LTS versions of Ubuntu or Fedora etc


People will recommend "testing" to you but I would advise against it... My path was: stable->testing->stable, because while on testing random stuff would break (laptop wouldn't sleep when I shut the lid, UI glitches, etc.). Plus, on testing I had to frequently update packages I didn't care about updating. I use stable now and when I need a recent package I now instead: compile from source/find backports/use a vm/vagrant

It's the price I pay for a stable system. Vagrant for my dev environments makes "stable" a much easier choice.


>Is there a branch that keeps relatively recent versions of packages and is reasonably usable as a daily driver?

Recently, I've been looking for an alternative to Debian testing or unstable for the desktop that doesn't stop getting updates during the stable freeze and doesn't break afterwards. It seems like the best option here is one of the rolling release distributions based on unstable like aptosid or siduction; I am downloading an aptosid ISO right now to try it out.

Can anyone here relate their own experiences with those?


Debian "unstable" is the "bleeding edge" version. New packages go in there on a pretty frequent basis, and things change often. Sometimes there is breakage. I would strongly advise against running this unless you're capable of and willing to fix serious breakage every now and then. And dealing with package upgrade conflicts. Sometimes you can't just "update everything", because someone just uploaded X version 2.1, only package Y has a strict versioned dependency on X <= 2.0, so you need to decide what you want.

I used "unstable" on my desktop for a while, but once every 2-3 years I'd end up with a bug that would break the early boot process, and I'd have to muck around with rescue disks and chroots and reconfiguring grub and the like. So I'd advise against this, unless you want to actually get involved in the Debian project.

Now I run "testing" on my desktop. When packages have been in "unstable" for a certain period of time (typically 10 days, although this is changing) without having a Release Critical (RC) bug filed against them, and if the move won't break any other packages, then "unstable" packages migrate to "testing". It's fairly up-to-date, although sometimes a large group of packages that need to transition all at once will take a while to synchronise.[0] And there are explicit "transitions" for some widely-used base libraries when they undergo ABI changes.[1] But you generally don't need to worry about this, as it'll mostly happen behind the scenes.

Still, if you're going to run "testing", I would still recommend installing apt-listbugs, which will tell you if there are any new RC bugs in any packages you're about to install/update (e.g. that were discovered only after the package migrated to testing), and allow you to decline the update.
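For reference, apt-listbugs needs essentially no configuration; installing it registers an apt hook, and it can also be queried by hand (a sketch; the package name queried is illustrative):

```
# one-time setup: this registers apt-listbugs as an apt hook, so
# later installs/upgrades that would pull in a package with an open
# RC (grave/serious/critical) bug print the bug list and prompt
# before continuing
apt-get install apt-listbugs

# it can also be queried directly for a package's open bugs:
apt-listbugs list apache2
```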

Then, every couple of years or so, Debian is "frozen" for a few months, and no new updates are allowed that don't fix RC bugs. Once all the RC bugs have been fixed, and these fixes have all migrated to "testing", that version of "testing" becomes the new "stable". ("unstable" and "testing" can get a bit out-of-date during this time, but generally not too badly). "stable" is then "stable" in that it doesn't change at all, except for security updates, until the next "stable" is released a couple of years later. (There is also "backports" which packages newer releases of some software to go on top of stable, but you can't really rely on anything in particular getting a backport version.) This is handy for large deployments where you need to support users who will freak if their menus move around, or servers, or if you want to create your own distro and need an unchanging base platform to work from.

If you're moderately technical, and want fairly up-to-date shiny, I'd recommend Debian "testing". If you're interested in Free Software in and of itself, I'd definitely recommend it.[2]

[0] https://release.debian.org/migration/ [1] https://release.debian.org/transitions/ [2] http://www.debian.org/social_contract


Thanks for the detailed breakdown.

I currently run Arch on my main computer (and have for the past few years), Ubuntu on a few others, and have dabbled with some other distros for fun. I'm thinking of giving Debian a shot for my main computer, though.

How would you compare unstable and/or testing to Arch (if you have any experience with it)?


Testing's probably relatively similar to Arch, while I'd liken unstable to the AUR. IME though Arch makes living the rolling-release lifestyle substantially easier -- pinning certain packages from unstable can be difficult, whereas yaourt and the like make accessing the AUR easy.
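For the record, apt can do this with pinning via /etc/apt/preferences, though the parent is right that it's fiddlier than yaourt; a minimal sketch (the priority values are illustrative, not canonical):

```
# /etc/apt/preferences — track testing by default, but keep
# unstable available for hand-picked packages
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 300
```

With both suites listed in sources.list, `apt-get -t unstable install foo` then pulls just that package (plus whatever dependencies it strictly needs) from unstable.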

There's also something to be said for mindshare. Ubuntu and its PPA system make it easy for people to build packages, and due to Ubuntu's popularity, there are lots of them. AUR does the same. Finding exotic packages for Debian (eg. Steam) can prove difficult.


Sorry, I don't have any experience with Arch.


The bigger question here for me is why Spotify needs 5000 physical servers.


Not only that, but why, with that many servers, pages (playlist top, artist/album details, etc.) are often very slow to load.


Debian.org's server seems to be having trouble. This is getting traffic from both HN and /r/linux.

Cache: http://bugs.debian.org.nyud.net/cgi-bin/bugreport.cgi?msg=35...


Just saw this interesting post by Noa at Spotify (speaking on his own behalf) later in the thread, about the trade-off between monoculture on the one extreme, and effort spread too thin between competing solutions on the other:

https://lists.debian.org/debian-ctte/2014/01/msg00306.html


Keep in mind, this is coming from the same company that gives their devs root: https://www.youtube.com/watch?v=pts6F00GFuU#t=169

Otherwise, I agree with their endorsement of systemd.


[deleted]


Call me old school but I don't see why we can't just keep compiling everything from source. If one package has a dependency on another, just check if the other package is built, or refuse to build. What is driving this need for "modern package management"?


I did a custom kernel build under a RHEL beta on this ThinkPad X61s Core 2 Duo laptop. It took 3 hours and used 16 GB of hard drive space. Compiling R takes about 20 minutes. Binaries are quicker.


Because it's a time wasting process? Unless of course, everything you need to compile is relatively small.


Exactly. Time constraints and laziness are my two primary reasons for preferring a package manager over source compilation. That's not to say a good sysadmin shouldn't know how to compile from source--you should--but building every package from source and wasting time on a system with a strong package manager almost seems criminal or at least disrespectful to the people paying for that time. Package managers are a time saving device, and these days they're generally very good. If you have a need that isn't met by a modern package manager, I can't help but think it's an exceptionally niche use case.

I used to compile many of my packages from source because of tunings that weren't present in the pre-built packages or pre-built packages that weren't available/out of date (this was probably on FreeBSD some 10+ years ago, although I'm reluctant to include building from ports in this because that is mostly automated and doesn't really count). But when I migrated to Gentoo where builds are automated (and then to Arch) and later experienced the Debian/Ubuntu side of things, I began to appreciate what a colossal waste of time it was to dig through configure options, set flags, etc., much less doing everything else manually. It's just more convenient to allow the platform itself to take care of things or offload worrying about dependencies to the package maintainer who probably knows more about building these packages than I do (or care to).

Again, it's not to disparage the value of building from source (or knowing how), but I have other things I'd rather put my time into, like whining about compiling from source. ;)


I think arch has it right with the rolling release approach containing binary packages that are built with a minimalist philosophy.

Need to tweak a package to include something else or be built a different way? ABS is fantastic because you most likely won't have to change much and the package can still be kept under package management.
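For context, the ABS workflow described above looks roughly like this (a sketch of the circa-2014 tooling; the package chosen is illustrative):

```
# sync the ABS tree (PKGBUILD recipes for official packages) to /var/abs
abs

# copy a recipe somewhere writable and tweak it
cp -r /var/abs/extra/vim ~/build/vim && cd ~/build/vim
$EDITOR PKGBUILD        # adjust configure flags, add patches, etc.

# build (pulling in build deps with -s) and install the result
# through pacman, so the customized package stays under package
# management
makepkg -s
pacman -U vim-*.pkg.tar.xz
```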


I can't agree more!

I started dabbling with Arch sometime between 2011 and 2012. It's my primary desktop OS because of being both a rolling release distro and being a rolling release distro that distributes packages in binary form. Don't get me wrong, I love Gentoo, but I find my patience waiting for large builds (KDE, X, Firefox, etc) diminishing over time. The AUR also has such a vast number of packages, I can't really think of more than one or two circumstances where something I needed wasn't available (or I was too stupid to know what to search for which was probably the case).

ABS is a godsend for exactly those reasons you cite since you don't have to worry about installing tons of rubbish into the file system for cleanup later (if there is a later). Though, I've found using the ABS SVN repos handy once or twice in the event I needed an older version (and wiped my cache, not realizing ARM was available at that time).

All things considered, Arch is the best of both worlds in terms of a strong, binary package manager and the ability to tweak packages through custom builds. I wouldn't suggest it's a panacea but it comes darn close. I'll risk coming off as an evangelist, but I can't help myself: When I discovered Arch, it made Linux a tremendous joy to use and helped me rediscover the passion I felt when I first discovered Gentoo many years ago. The difference is that this passion has been sustainable. :)


That's been my experience with MacPorts, which does have some binary packages, but much lower coverage than a distribution like Debian, so fallback building from source is common. It can be really annoying when you just want to install some utility and it takes literally 3 or 4 hours to install, because somewhere in the dependency or build-dependency chain is some big and slow-to-compile thing like GTK, X.org, or a newer GCC.


Gosh, you're reminding me of those days of compiling KDE or Firefox on an old box under Gentoo.

I think I still wake up in a cold sweat from time to time. :)


Both systemd and upstart have added a level of complexity I would prefer to avoid. The simplicity of sysvinit is greatly missed.

But then again I am an old fuck so..


The complexity is there, you just don't see it. Systemd handles all types of services. Sysvinit does not. Systemd abstracts a lot, making it easier per service (by putting more in systemd). In sysvinit a lot is copy/pasted across services, which IMO makes everything more error-prone.


The person that wrote it is a "Free Software ombudsman" ... Pretty serious stuff we got going in software now. :)



