Maintainers Matter: The case against upstream packaging (kmkeen.com)
132 points by keenerd on June 15, 2016 | 95 comments



I agree with a lot of this. GNU/Linux distros are going down a very dangerous path with Snappy, Docker, Flatpak, Atomic, etc. I think a lot of this is responding to the fact that traditional systems package managers are quite bad by today's standards. They are imperative (no atomic transactions), use global state (/usr), and require root privileges. Snappy and co. take the "fuck it, I'm out" approach of bundling the world with your application, which is terrible for user control and security. Instead, I urge folks to check out the functional package managers GNU Guix[0] and Nix[1], and their accompanying distributions GuixSD and NixOS. Both Guix and Nix solve the problems of traditional systems package managers, while adding useful features (like reproducible builds, universal virtualenv, and full-system config management) and avoiding the massive drawbacks of Snappy and friends.

[0] https://gnu.org/s/guix

[1] http://nixos.org/
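To make the "no root, no global state" part concrete, here is roughly what day-to-day use looks like with Nix (a minimal sketch; it assumes the default nixpkgs channel, and Guix has equivalent commands):

  # Everything lands in the store and a per-user profile; no root required,
  # nothing in /usr is touched.
  nix-env -iA nixpkgs.hello      # install one package into ~/.nix-profile
  nix-env --list-generations     # every install/upgrade/remove is a transaction
  nix-env --rollback             # ...and can be atomically undone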


Atomic actually solves a somewhat different problem than you think.

It uses rpm-ostree[0] as a basis "to bring together a hybrid of image-like upgrade features (reliable replication, atomicity), with package-like flexibility (introspecting trees to find package sets, package layering, partial live updates)".

rpm-ostree itself is a layer above OSTree[1], which describes itself as a "git for operating system binaries" - "OSTree is a tool for managing bootable, immutable, versioned filesystem trees."
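Roughly, the admin-facing workflow on an Atomic host looks like this (just a sketch of the commands, output omitted):

  rpm-ostree status     # show the booted deployment and any pending one
  rpm-ostree upgrade    # stage a new tree; the running system is left untouched
  systemctl reboot      # boot into the new deployment
  rpm-ostree rollback   # if it misbehaves, point the bootloader back at the old tree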

Yes, it can run Docker images; but so can CoreOS or any normal distribution, and they are not based on rpm-ostree at all.

[0] https://github.com/projectatomic/rpm-ostree [1] https://wiki.gnome.org/Projects/OSTree


As a happy nixos user, I have to say that it's conceptually great, but that right now, the management tools are abysmal. Not only are the command line utilities unintuitive to use and pretty slow, but there is simply no graphical frontend, which means that I'm unable to switch family members to it.


Your complaints have been heard, and there are a lot of efforts to address them :)

There's currently an effort to make the command line utilities easier to use[0]. There's also an effort to bring PackageKit to NixOS, which should make a few GUI package managers work[1].

[0]: https://github.com/NixOS/nix/issues/779 [1]: https://github.com/NixOS/nix/issues/233


That's really good to hear, I'm looking forward to it.


Do you find a good range of packages available?

I'm downloading the live image to have a look/play


Most of the "standard" packages are already in it, and those that aren't are fairly easy to add in. I created and maintain the Matrix.org Synapse package, for instance, and it was a breeze to package myself, even without any prior experience of nixos packaging.


Snappy, Docker et al. are the newest beasts in the evolution of packaging solutions. Nix and Guix decided evolution was too slow and to skip ahead an epoch or two.


Docker isn't a packaging solution. It is an excuse to avoid packaging.


Nix was started before Docker.


You're forgetting GoboLinux, which had these issues solved long before NixOS/Guix existed.


To be fair, Nix takes it one step further.

In Nix, if you recompile the same version of a lib with new options, it gets a new entry in the Nix store (a new store path), while in GoboLinux it will replace the existing one.
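Roughly (a sketch; the version and hashes are only illustrative):

  $ nix-build '<nixpkgs>' -A openssl
  /nix/store/<hash-a>-openssl-1.0.2h
  # Rebuild the same version with different options (e.g. via an override) and the
  # changed build inputs produce a different hash, so both paths coexist:
  /nix/store/<hash-b>-openssl-1.0.2h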

I think there is an option for the GoboLinux Compile tool to behave more like Nix, but I have not explored its behavior much.


I started reading your comment and around the 3rd line I already knew it was you without looking at your nick :D I am thinking about going back to Guix soon, this was an awesome experience for me even though the philosophy was a bit too "free" for my hardware.


I wonder if the Nix approach would be more popular if a schism (between Nix and Guix) hadn't developed so early. When you talk about Nix and Guix, a novice's natural first question is, "Which one do I use?"


I wouldn't call it a schism; this sounds much too negative. They are different implementations of the same idea. Nix uses an external DSL, Guix an embedded DSL (using Guile Scheme for everything). The projects are not competing against one another. Problems that are solved to make software packageable in Nix benefit Guix and vice versa.

As a Schemer and GNU person it was easy for me to pick Guix. I've been contributing to Guix for a few years now. I find the tooling of Guix to be more accessible and more hackable.

In my experience when packaging Java for Guix it seems to me that Guix is a little more principled, where Nix would more readily accept binaries for bootstrapping (e.g. prebuilt jars).

Considering how different functional package management is compared to the alternatives I think that novices could pick one or the other; once they are used to functional package management they can better make a decision to switch to another implementation.

For bioinformatics software, though, Guix is far ahead. It's used at our bioinfo institute and in big projects like GeneNetwork.


The thing is, they are competing for package maintainers and users. A Nix package can't be used natively by Guix, and vice versa, so a package has to be written for both.


Is there a good heuristic to answer this question?


Nix, until some intrepid souls set up a large supplemental non-free repo for Guix.


Heh, my thinking basically. As Guix is a GNU project, it adheres to FSF definitions of freedom. That means there will be no proprietary drivers or similar in the official repositories.


This is true. We offer deblobbed Linux (linux-libre) by default.

However, it is very simple to customise packages, including the kernel package, e.g. to apply patches, use different sources, or to exercise your right to disagree with the Linux libre upstream on what blobs should be deleted from the kernel.

That said, I consider freedom by default a feature and it works very well on most of the hardware I use (an exception is an on-board Radeon graphics chip in a desktop machine I don't use much).

Creating package variants is almost trivial; it's certainly no harder than, say, customising Emacs. Guix blurs the lines between user and maintainer, so using custom package definitions is a supported use-case. At work even our scientist users create custom packages in case they are not available in Guix upstream yet.
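A rough sketch of what that looks like in practice (the directory and package names here are hypothetical): users keep their variant definitions in their own Guile modules and point Guix at them, no special privileges needed.

  # GUIX_PACKAGE_PATH is searched for extra package modules.
  export GUIX_PACKAGE_PATH=$HOME/my-guix-packages
  guix package -i my-patched-linux   # a custom variant defined in that directory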


Sounds very similar to my experience with GoboLinux. If the source is packaged sanely (proper support for --prefix or equivalent), a user can roll up a recipe ready for Compile with a single command and the URL of the source.


Some software is promoted on its merits and some software exists just "because it's GNU" (e.g. Hurd, Guile, Guix, gNewSense, GnuTLS, Shishi, GNUstep, Gnash, etc.). It's pretty safe to ignore the latter kind.


GnuTLS is actually a good thing. OpenSSL has become a monoculture and there are a lot of good reasons not to use it (in addition, its license isn't GPL-compatible, so it's a pain to link against in projects too).


Gnash, really? What other open alternative to Adobe's player would you recommend? For example, Gnash is the only way to get Flash support on the Raspberry Pi - is that not valuable?


As someone who works on Guile and Guix in good part because of their technical aspects and friendly community, this comes off as pretty insulting and ignorant.


I feel like this article is sort of attacking a straw man.

> The promise: Sandboxing makes you immune to bad ISVs. The reality: It protects you from some things and not others.

Well yeah, but the same could be said of maintainers. Maintainers let things through all the time, and sometimes they cause problems (hello, Debian weak keys).

The reality is a bit complicated, and it boils down to something boring like: if the maintainers are better than upstream, maintainers are good, and if they are worse than upstream, then they are bad. But that is both tautological and vacuous, so it is not an especially useful insight.

The real insight is this: there just aren't enough maintainers to go around. Debian has 1500 contributors and 57,000 packages, which is 38x as many packages as people. Now maintainers have a lot of tooling to make that more tractable, and upstream developers have their time split between multiple packages too, and some packages are important enough to get the full-time attention of several maintainers. But do we really believe 1/38th of a person is judiciously considering how to overrule upstream's decisions, attentively investigating user bug reports, and so on?

More likely, the maintainers do not even claim to be doing that, because they have not packaged the software at all. I use lots of unpackaged software; until recently nobody was packaging Chromium for example. Java packaging is really unreliable. And so on.

Ultimately, Ubuntu Snap exists because there is a lot of unpackaged software, and a lot of software packaged poorly. It would be nice if we could wave a magic wand and get 38x more maintainers, but we cannot.


Package maintainers can, and do, fuck up.

They also catch things. OpenBSD are particularly good at this, and Debian have a pretty good track record (exceptions noted) as well.

The most critical aspect of the package maintainer is that they don't have a horse in the race -- they're not representing the interests of the software developer, but of the users (or at least the OS). It's an additional step of independent review, a fresh set of eyes, and one which operates outside the disciplinary scope of the developers, at least in theory. As Celine's Second Law (from Robert Anton Wilson) puts it, accurate communication is possible only in a non-punishing situation, and that's the situation an independent maintainer is in.

Yes, shoving square pegs into round holes gets problematic, non-free software doesn't package well, and there may be delays. But independent software packaging done right adds tremendous value.

It also avoids a huge range of costs: the annoyance, security, privacy, and surveillance deadweight losses of proprietary ISV packaging. I remember seeing this in the Microsoft world in the early 2000s and being simply staggered at how bad the situation was. And yet it's precisely the same set of dynamics, and inevitable consequences, which are now infesting the mobile world. Android is all but unusable as a consequence, and Apple have announced that virtually all pre-loaded software will be removable in the next generation of iOS.

It's the position of at least some Linux distros, most notably Debian (and many derivatives, at least in part), that the interests of the users come first. That position is expressed specifically in the Debian Social Contract, the Debian Free Software Guidelines, and Debian Policy, and it is backed technically by the packaging system, bug tracking, and updates, which are the real secret sauce.

That's a lesson the technology community seems not to have learnt.

https://www.debian.org/social_contract


> sometimes they cause problems (hello, Debian weak keys).

> until recently nobody was packaging Chromium for example.

To name one example, SlackBuilds.org has had Chromium since 2010, although admittedly that's not as long ago as the Debian weak key cockup, which was 2006.

Maybe your examples could use an upgrade to the latest stable version.


Chromium may have been packaged in 2010, but it was released three years before. The Debian weak keys "cockup" may have been created in 2006, but it was discovered two years later. Maintainers had long windows of time to add value. Did they?

I don't mean anything personal by it. If I was maintaining 38 packages on my nights and weekends I'd do a bad job too.

But examples don't go out of date unless you present some force that takes 30k unmaintainable packages and turns them into 50k maintainable packages. What specific advance in software maintenance do you believe improved the art by an order of magnitude?

But if you want to talk recent examples, we could talk about how nobody's packaging Swift.


Chromium only became compatible with linux in 2010. https://googleblog.blogspot.com/2009/12/google-chrome-for-ho...

Arch Linux packaging files have source history going back to then as well.

Chromium is also an example of a package that took a long time to appear in other distros, because it's really written with the app mindset. It copies and incompatibly customizes most of its dependencies. No one would consider that a great idea for ongoing maintenance and security patching for a typical project, but of course this is a google product-oriented thing with nearly a thousand well-above-average-skill developers assigned, so it's not a problem for them to manage surprisingly huge volumes of code and changes.

Web browsers like Chromium are also a good example of the kind of modern software which doesn't work well with the debian release model, because it's "unsupported and practically broken" in like half a year. That's not true for the "make" utility, or for gimp or inkscape or libreoffice or audacious or countless other useful applications and tools which are not browsers.

I really don't like the fact that modern browsers are a crazy-train and there's just no getting off it.


Yeah, Chromium is a mess. To get and build the source you first have to get the special tool set they use from their git repo.


> But do we really believe 1/38th of a person is judiciously considering how to overrule upstream's decisions, attentively investigating user bug reports, and so on?

It looks much worse than it is, because most packages are really boring from a maintainer perspective.

For example, KDE releases a hundred or so applications. Some of them are probably very interesting to package, but for most packages, `cmake && make && make install` is just fine (maybe plus a certain KDE integration macro that your packaging tool already carries).


> until recently nobody was packaging Chromium for example.

How recent is "recently" ? Only Fedora and it's downstream distros do not ship Chromium. Everyone else has been packaging it for a few years now.


This seems like an overly optimistic view of the abilities and resources of distribution package maintainers. The reality is that no matter how hard they try, it's almost impossible for them to keep up, and even on the non-LTS Ubuntu release train I still need to use a number of PPAs and other install mechanisms to get some things I use on a daily basis.

And that's with both Canonical and Debian's resources involved in making sure that I get a relatively up to date base set of packages.

Never mind the issue of when your politics don't quite align with any distribution maintainer's as a whole.

(Note: edited to clarify 'bleeding edge ubuntu' as meaning the non-LTS release train)


When you serve out an entire system for a group of diverse users with lots of installable programs, it's important to ensure that parts will work together, especially in Unix, where namespace clashes in /bin, /lib and /etc may cause serious problems for system administrators and users. Thus, every stable release has to ensure that none of the provided software does anything nasty.

But with bleeding edge, you lose control over this. It may be worth it for some users, and not for others. That's a trade-off, and I myself am on the maintained-package-repo plus use-only-stable-releases side; thus I use FreeBSD. Debian and Ubuntu mainly provide stable server OSs, so they're on my side too. If you want bleeding edge software, you should migrate to an OS that provides bleeding edge software, though you should know that bleeding edge is called bleeding for a reason. An OS that has to release a version that'll be stable for the coming five years simply cannot provide bleeding edge packages.


> An OS that has to release a version that'll be stable for the coming five years simply cannot provide bleeding edge packages.

Wanting software more than every five years is hardly "bleeding edge".


If what you have is fine and works, why want new software? And if you want more frequent updates, you can (a) use a faster-moving branch, which BTW Debian and Ubuntu have; or (b) switch to an OS that provides you with more recent software, e.g. Arch Linux, Manjaro, Gentoo (? not that sure about this last one). Declaring something stable takes time, especially if you also incorporate new stuff into it. Even after five years of development, Debian and Ubuntu release patches because of errata in their releases.

But if you want a secure, stable OS that won't drop the eggs, and also a rolling release, well, nobody can do anything about that.


I don't want to switch to ubuntu-devel or sid just to get a new version of VLC.

I just did a `snap install vlc` and got a recent version of vlc without having to upgrade my entire operating system.


That's OK. Nobody objects to you doing it that way. It's just that there are acceptable flaws and shortcomings for some use cases, and some OSs care more about those.


With functional package management (as in GNU Guix or Nix) you no longer have namespace clashes and packages can be added without too much concern for unrelated packages.

I find functional package management to be a vast improvement over the current packaging situation, unlike incarnations of the latest bundling/appification trend such as Snappy or Docker.


I didn't say I want an entirely bleeding edge system. I said a bleeding edge ubuntu, by which I mean keeping up to date on the non-LTS track (I updated my original post to say that, because I can see where the confusion lies here).

Out of the 2500 or so packages installed on the computer I'm typing this on, I want about 10 of them to be bleeding edge. Maybe 20.


So isn't it a good thing for your productivity and security to have a separate channel for those 20 packages that change and update often (and are thus more prone to risk), while the other 2480 that you presumably just want to keep working don't bother you? Now, I last used Ubuntu 2-3 years ago and I don't know a lot about PPAs, but what are the shortcomings, exactly? Clashing dependencies (i.e. something in your PPA wants libbob 2.0, but Ubuntu has 1.8)? The packages you want don't have PPAs?


PPAs would work if they were widely adopted by all the upstream software whose latest versions people want AND the PPA infrastructure were adopted by all major Linux distributions, so that developers didn't need to package in N different formats. But as it stands, leaving it up to distribution maintainers means programs tend to lag several versions behind, while for faster updating each distribution has its own incompatible way, which means developers often give up, don't package at all and leave it to the maintainers, or just pick one method like a PPA or the AUR.
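For readers who haven't used one, consuming a PPA boils down to something like this (repository and package names are hypothetical), which also shows why it only helps Ubuntu and its derivatives:

  sudo add-apt-repository ppa:someproject/stable   # adds the repo and its signing key
  sudo apt-get update
  sudo apt-get install someproject                 # now tracked like any other package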

As far as I can see, Snaps/Flatpaks aim to change this, but at the expense of introducing much less manual oversight and inefficiencies in terms of storage, at the very least.


I'm really not sure what you think I'm arguing here. The situation I have with Ubuntu+ppas is satisfactory to me, and it is also anecdotal evidence that "leave it to the maintainers" (asserted by the OP) is not a complete solution to real problems (my assertion).


While I agree with a lot of this as well, I feel like downstream packaging has a lot of issues.

I'm working on a project (licensed under the GPLv3) and I have decided to be my own maintainer, i.e. building and distributing packaged versions of my project all by myself:

1. my project is young and unknown, I have no choice but to package it myself (I really like AUR by the way);

2. it lets me write and maintain step-by-step documentation;

3. it allows me to enforce a consistent interface for my project across operating systems, I know my project can be used the exact same way on any operating system I have packaged it for;

4. I have had bad experiences with maintainers having no idea what they are doing and completely breaking my software;

5. It allows me to build a full release pipeline including both source and binary releases.

The first point is sort of a big deal: having packages is important to drive adoption, so I need to invest my time in that. I'm not gonna wait for some random folks to package it a few years down the road (if that ever happens).

My work is not incompatible with downstream packaging, but I do intend to work closely with (or directly maintain) any downstream package.

I really feel like this is sort of a catch-22.


According to OP the maintainer fairies will come and package your project!

It doesn't even need a Makefile. They will create one for you.

(They need to package the 3870 other requested projects first, though https://www.debian.org/devel/wnpp/requested )


I presume you mean lightsd. Nifty stuff. What do you think of Suse's OBS? I believe you're squarely in its target audience.

And hats off to you for going the extra mile for your project! Your PKGBUILD even looks quite good. Would you like some assistance in writing a -git pkgbuild for people who want to try the absolute newest commit?


Thank you! Yes, I was referring to my work on lightsd.

I didn't know about Suse's OBS until this thread, I guess I need to look into it. I have been building my own CI/CD pipeline on top of Buildbot so far. It's a PITA, but leaves a lot of room for creativity, hopefully it will be rewarding down the road.

I don't know about the -git thing for PKGBUILD, what's the syntax/documentation?


SUSE's OBS is, in my biased opinion (I work at SUSE), probably one of the better ways of solving this problem for ISVs. You don't need a build infrastructure, you can get feedback from distribution maintainers and submission to the main distribution is surprisingly painless. Actually you can build packages for non-SUSE distributions on OBS as well, so it's even more versatile than just the openSUSE community's distributions.


It is briefly mentioned in the manpage, https://www.archlinux.org/pacman/PKGBUILD.5.html, but basically a pkgbuild can use a git repo as a source directly. Here's an example:

https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=pacma...

Compare it to pacman's release-based pkgbuild:

https://git.archlinux.org/svntogit/packages.git/tree/trunk/P...

The major differences are the "git" url in the sources, the provides/conflicts metadata and the dynamic pkgver() function.
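A stripped-down sketch of such a -git PKGBUILD (the project name, URL and build commands are hypothetical; adapt build()/package() to the real build system):

  pkgname=example-git
  pkgver=r123.abc1234            # regenerated automatically by pkgver() below
  pkgrel=1
  arch=('i686' 'x86_64')
  url="https://example.org/example"
  license=('GPL3')
  makedepends=('git')
  provides=('example')           # lets it stand in for the release package...
  conflicts=('example')          # ...and prevents installing both at once
  source=('git+https://example.org/example.git')   # note the git+ URL
  md5sums=('SKIP')               # VCS sources aren't checksummed

  pkgver() {
    cd "$srcdir/example"
    # monotonically increasing version: revision count plus short commit hash
    printf "r%s.%s" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
  }

  build() {
    cd "$srcdir/example"
    make
  }

  package() {
    cd "$srcdir/example"
    make DESTDIR="$pkgdir" install
  }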


Nice, maybe I'll consider this for dogfooding, thank you for the explanations!


Preface:

1. I use Arch Linux for my desktop and I like it.

2. I hate docker's repository systems (and other upstream packaging)

But this article's writer sure is putting the maintainers on a pedestal.

> More fundamentally, the maintainer is the primary line of defence and interaction between users and developers. Maintainers shield developers from uninformed users, allowing the devs to write software with less support overhead. Non-bugs are caught and filtered out. Low-quality bugs reported to the distribution's tracker often become good bugs when the maintainer reports them upstream.

...

> Maintainers also shield users from developers, offering a layer of quality control...choosing a subset of the (subjectively) best software FOSS has to offer. Maintainers will disable features that they feel act in bad faith. Maintainers' greatest power is the ability to outright say "This is not good enough for our users"

Really? Then what about this poor sap who had the misfortune of releasing one buggy minor version, and is now forced to support that for the next 5 years even though the bug was fixed in the next version?

https://news.ycombinator.com/item?id=11452432

The maintainers actively tried to sabotage his attempts to tell people to upgrade to a new version in the name of 'consistency.'

Sure, consistency is great, but only if we (the maintainers) don't have to do any work (such as backporting). We offload all the support to the developer and have him do support work for us. Then when he complains we will call him a whiner and laugh him off...


While I'm more on jwz's side in the xscreensaver issue, it's a fair bit more complex than the way you're presenting it.

You're also missing the forest for the trees - if no-one maintained xscreensaver packages for distros, then there would be considerably fewer installs of it. Centralised package management is a very powerful feature, and without it, you're reduced to the Windows style of installing software: hunt around on the web for it, try and figure out if you're getting it from the official repo, try and figure out if the download is trustworthy, install it separately, hope it doesn't sideload malware, then maintain it personally as time goes on, and maybe even suffer every second tool having its own self-updating service phoning home and nagging you (plus you may not even be able to easily uninstall the software if the author didn't offer that option). It's easy to do that for just one bit of software, but it's a real chore to do it for all your software.


Why not get the benefit of both worlds by having centralised repositories for Snaps/Flatpaks? This way developers simply have to push their builds to it. Of course, also retain the traditional packaging model for much of the underlying system, and we seem to have hit a sweet spot between established practice and the anarchy of Windows land.


Sure. I'm a pragmatist - if that would work, I'd be happy. One major problem is fragmentation/network effects though - you'd need to get enough developers onboard to make something effectively the 'default' userspace packaging system. Major distros already have a big NIH problem with system packaging, every one having its own system.

Packaging is a difficult problem - the 80% use case is easy to solve, but the edge cases really require elegant forethought. It also seems that every distro has a different idea of what that 20% is...


That's one of the reasons I love Arch: it has the AUR. Don't like the packaging? Just create your own.

> Really? Then what about this poor sap who had the misfortune of releasing one buggy minor version

Can't you do a minor version upgrade? Even the stable distros do updates all the time, just nothing major.


> The maintainers actively tried to sabotage his attempts to tell people to upgrade

Ah, poor maintainer. Releasing software with a self-destruct timer is not an acceptable way to "tell people to upgrade".


No, but it's an effective way to say "fk you" to stubborn/lazy/backwards maintainers and/or distros who won't update.


Yes, those pesky stable distributions that prioritise bug fixes are clearly to blame. While it might be annoying for you (and me to some extent), some people actually need to have a stable system. If I was installing a GNU/Linux distro for a family member I would pick Debian or openSUSE Leap over a more rapidly updating distribution -- I just got burned by such a distro yesterday and I'm still reinstalling my machine.


>While it might be annoying for you (and me to some extent), some people actually need to have a stable system.

If desktop packages like XScreenSaver are what make your system unstable, then you have worse problems...


I know, right? It's not like the maintainers can backport the bug fix in the first place, or upgrade 1 version forward.


Personally I think Nix (https://nixos.org/nix/) does this right: packages are easy to install and upgrade for users, and developers and maintainers can easily create and update them. And if someone does not like the default collection of packages, it is as easy as starting one from scratch, or forking the existing one on GitHub, but at least it doesn't require reinventing the entire system.


Pretty much.

The base problem is not ISVs or maintainers, but that the rigidity of the package managers forces maintainers to either use one version of a lib for the duration of the distro version, or play musical chairs with the package names to get around conflicts.

Either option makes it hard for third parties to produce packages.

The likes of Nix allow multiple versions of a package to exist side by side, and ensure that each program gets the version it wants.

This allows a third-party package to request a newer version of a lib than the distro provides: it will not conflict with the distro-provided packages, and the naming scheme is retained.


I second this. I'm using Guix both privately and at work to provide bioinformatics software on our cluster. We have an additional repository of package variants for things that shouldn't be part of the default collection.

Writing package expressions is usually very simple, especially since we can use package importers to automatically generate and update them.


I cannot agree with this enough. I help package the container tools for openSUSE and SLE (I'm also an upstream maintainer of one of the tools as well, so I see both sides of the picture). But I personally am against ISVs making packages (especially "universal" ones) -- if they want to provide a container deployment method then provide the Dockerfile so people can build and curate it themselves.

It's really frustrating when upstream turns around and says "actually, we provide packages -- not you". In fact, we've had cases where a certain project suggested that a reasonable compromise would be that we change the name of the package in our distribution to "reduce confusion to users". Wat.

However, I do understand their point somewhat. If you have users pinging your issue tracker and they're using distro-supplied packages, then you are getting a lot of noise. But the right way of reporting bugs is to complain to your distribution (we actually live and breathe the distro you're using!) and I feel like upstream projects should make this communication model more clear rather than complaining to distros about users submitting bugs in the wrong place.

EDIT: Thanks for the SUSE shoutout! OBS is pretty cool and the people developing it are pretty cool people too.


Yes, upstream vendors would love to have total control over how any user uses their software. But there is no guarantee that at least one of those vendors won't have a really shitty install or uninstall script which does something like this:

  #!/bin/sh
  DIR=/home/$USER/ftp
  rm -rf $DIR
  rm -rf /home/guest
  adduser guest
.... assuming that /bin/sh is symlinked to bash (it isn't always), assuming $USER exists (it doesn't always), assuming the user 'guest' doesn't already exist with important files to keep... you really don't want to know how many ISVs actually have install scripts like this.

Users can still install ISV-packaged software in distros. But ISV packages typically hurt the system: they omit vital dependency tracking information, replace files that other packages provide, won't be tracked for security patches, or get installed into non-standard paths.

--

I have been a package maintainer for 6.. no wait 8... no, 9 Linux distributions. I have created thousands of packages (easily 10,000). I have maintained forked distributions for corporate use. I have also developed dozens of individual software projects which all were designed to be packaged by maintainers, and were adopted into Linux distributions by somebody else.

And i'm telling you: Anything other than maintainers packaging software is a fucking nightmare for everyone involved. We have a good system! Devs, make your software so it's easy to package. Maintainers, package it for your particular distro. Users, just use what's in the distro, OR follow the upstream's instructions on installing _at your peril_.


Enough of this. People have the right to install the latest version of software easily without having to upgrade the whole server at once. ISVs have the right to package their own software.

If MongoDB is a commodity I can gladly stick to the distro version. If I'm developing something more cutting edge, I want (and will install) the latest version from the ISV repo.


Who said you have to upgrade a whole machine when updating a package? Literally every package manager worth its salt allows you to upgrade a single package. And you can usually specify what version you want too.
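A few illustrative examples (the package name and version are made up):

  sudo apt-get install --only-upgrade vlc    # Debian/Ubuntu: upgrade just this package
  sudo apt-get install vlc=2.2.2-5           # apt also lets you ask for a specific version
  sudo zypper update vlc                     # openSUSE
  sudo dnf upgrade vlc                       # Fedora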

Yes, ISVs have the right to package their software. But as someone who is both a maintainer of a free software project and packages software for a distribution, I can tell you that I definitely do _not_ want to deal with packaging for every distribution (especially if it's some magical "universal" package). I've used enough distributions to recognise that they all have enough differences that you really can't "just package for every distribution with one setup".


> Who said you have to upgrade a whole machine when updating a package? Literally every package manager worth its salt allows you to upgrade a single package. And you can usually specify what version you want too.

Only by playing musical chairs with the package names to avoid collisions.

And that in turn complicates dependencies.


You don't need to upgrade everything just to upgrade a single package. With traditional GNU+Linux distributions this isn't always quite true, because it's hard to install different variants side-by-side.

With functional package management every application and library gets its own namespace. Installing something is a matter of picking individual items from their own namespaces and creating a profile, the union of all outputs of all packages that are to be installed.

This makes it possible for users to install not only different major versions for selected applications but even variants of the same package (e.g. different configuration flags or different compiler).

At the bioinformatics institute we are using this to provide scientific software to cluster users. With Guix, users can create custom variants with very little effort without affecting the system state or other people's software profiles.
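A sketch of what that looks like for an individual cluster user (samtools is just one example of a package we provide):

  guix package -i samtools               # goes into the user's own profile, no root needed
  guix package --list-generations        # every transaction creates a new generation
  guix package --roll-back               # atomically return to the previous generation
  guix package -p ~/proj/.guix-profile -i samtools   # or keep separate per-project profiles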

There is still a lot that can be done by improving package management systems. Giving up by going the route of appification (like it's done with Snappy or when Docker is abused as a packaging tool) is a bad choice, in my opinion.


Yes they can; there is nothing preventing them from creating the package. But it shouldn't be the primary way of installing packages. If you really need bleeding edge for development you can build it yourself.


I'll just let my ISV do that for me, thank you.


This type of attitude completely ignores regular desktop users who may one day want to use Linux over Windows.


A regular desktop user does not want bleeding-edge software; he will do just fine with last year's VLC. (A notable exception being the web browser.)

The priority for a regular desktop user is a nicely integrated system, and that is something that ISVs cannot deliver because they don't know what to integrate with.


Except for the security stuff, I think many would be quite happy with last year's Firefox, Chrome or even IE.


Which is exactly why I've given up on desktop Unix. With Windows, instead of having to wait for the maintainers to declare a package worthy of their distribution, I can just run the vendor's installer, without any fuss or hassle. FreeBSD, IMO, gets packaging the most correct by separating user packages from the core distribution, but there are still times when they'll have an old version as well. Relying on a single app provider, be it Apple or Debian, is just silly.


It's about trade-offs. There's no magic bullet (surprise!).

For example, some folks hate the Windows way because of its drawbacks: a different installation look/feel/process for each application; no automatic updates (unless the application has updating built in, which most don't); no automatic dependencies (so each app installer has to bundle its own versions of things, which also don't auto-update); no single place to go to look for a program that does something you need; automated installations that are hard, inconsistent, or non-existent; security issues that are harder to deal with (some image library bugs from many years ago are probably still in the wild, bundled into some old version of some application); ...

Please keep in mind that "just silly" is just perspective and priorities.


I still recall Microsoft finding some bug in their Visual C++ set of DLLs. The problem was that just about any software shipped with its own bundle of those DLLs.

So all MS could do was release a scanner to look for faulty versions, and beg people to pester ISVs about getting updates.


That's a really good case against upstream packaging. Do we really want yet another Android story? I trust a maintainer who volunteers their time to ensure the package complies with the distribution's guidelines.

With universal packages we would have universal config file paths. For example: apache2 on Debian-based systems stores documents in /var/www, while something like Arch stores them at /srv/httpd. A similar thing happens with `sites-available` across distros. Different distributions do things differently, which users expect, and putting upstream developers in charge of that will make a huge mess, because it's more convenient for them to store configuration at one specific location across every distribution. Yes, that makes it a pain to write deployment scripts across different distros, but IMO consistency is a key to a good distribution.

Another key point here is testing: different distributions aim for different things. Debian aims for maximum stability and uses quite dated packages that are known to work, while other distributions are always on the bleeding edge. Will upstream backport security patches for old packages? I really doubt they would want to maintain packages for years to come.

I'm not saying it would be all bad; there certainly are great developers who would benefit from universal packaging, but that will not always be the case.


Linux has never had crap bundled? I guess the author forgot Canonical's "shopping lens" debacle, which was fully as sleazy as any Ask Toolbar bundling nonsense.


Canonical is a special case. They are just as much an ISV as a distribution. Conflicts of interest are going to arise.


And the reaction from the community was so fierce that they stopped. That was the second point the author was making. I don't recommend Ubuntu any more because of unethical actions like that, and I know quite a few people who do the same.


that's not the kind of bundling people are talking about.


Did you even read the article? That's exactly the kind of thing the author claims distribution maintainers protect users from.


"This is why Linux doesn't have spyware, doesn't come with browser toolbars, doesn't bundle limited trials, doesn't nag you to purchase and doesn't pummel you with advertising."

The shopping lens was openly added as a marketed feature, and could be turned off with one action. You could, I suppose, class it as spyware, but it wasn't surreptitiously sideloaded - it was touted as a default feature. It didn't alter the browser, wasn't limited run, didn't nag you, and while I didn't use it much, it could hardly be called 'pummelling with advertising'.

It was an openly-acknowledged experiment that failed, and was removed.



Note: It may be worth providing a bit more info about why you are linking the article. It's a relevant example of the counterpoint playing out in real-time, which people may miss from the simple preface "In other news:"

> To ensure the operating system is stable and reliable, OS distributors do not make major updates to packages such as OpenSSL during the lifetime of each release.

Not entirely true. RHEL backports bug and security fixes continuously, and select feature enhancements are rolled out with point releases. For example, RHEL 7.2 updated mod_nss, and enabled TLS v1.1 and v1.2 in NSS (I'm not sure how that interacts with Nginx, if at all). I imagine RHEL 7.3 should be coming out any time now. Maybe it will help with this.

1: https://access.redhat.com/documentation/en-US/Red_Hat_Enterp...


I don't see the contradiction between Flatpak/xdg-app and keeping packages downstream. http://openbuildservice.org/ could end up building xdg-app images in addition to the rpm, deb, Arch packages and images it already builds. Why not?

That would keep the security, reviews & processes while stopping the nonsense of requiring root, global state & single versions that rpm/deb enforce on the user.


The author's point about "dogfooding" jumped out at me more than he probably intended, but that's probably because it's an aspect of Slackware's SlackBuilds repository (SBo) that I can't say enough about.

Quite often, SBos are maintained by folks who use (and even depend on) the software they are responsible for. As such, well-tested and sanely configured software tends to be highly valued by maintainers, and package responsibilities commonly change hands when a maintainer no longer uses what (s)he's responsible for.

The mechanics of an SBo are also dead simple, making package review and modification by the user straightforward. This means that rolling your own package is straightforward, enabling you to easily become a "dogfooding" maintainer yourself for any package not already in the repo. Few distributions allow ordinary users to close the official package loop with anywhere near the ease--both technically and bureaucratically--that SBos do.

SBos are not the Perfect Package model by any means. But in terms of "dogfooding", ISVs and big distributions (especially Canonical) could stand to learn a few things from a relatively unstructured community of volunteers with a taste for the KISS principle.


Unless I'm misunderstanding, I think Chef has been doing this for years with omnibus packages.

The reason we did this is that we had to support RHEL5/6 and old Solaris and AIX distros.

We struggled for several years (and I personally struggled for at least a year) with supporting RHEL5 and it simply failed. That distro came with ruby-1.8.5 and upgrading it to even 1.8.7 was a major PITA. The ruby-1.9.2 in RHEL6 was also buggy and crashy and segfaulty and we didn't remotely have the resources to attempt to debug it and patch it for RedHat.

The reality is that we just wouldn't have been able to support those old distros at all.

I think there's a myth here that the human resources to support packaging and debugging apps against distro-supplied libraries and supporting packages magically appear out of thin air. They do not. As those supporting libraries age it consumes more and more time to support "back porting" onto aging versions of frameworks, which cannot be upgraded in the distro because of compatibility with other apps. That creates a large amount of serious shit work (seriously, nobody enjoys doing that) and open source doesn't make it magically appear.

I also think there's a misconception that users have chosen to use those versions of software. Largely they've chosen them because they need compatibility with a large body of their own custom in-house software, and their internal porting effort is costly in time and labor. Largely they don't care if you install an upgraded version of supporting libraries, so long as it doesn't impact any other running software and it's contained.

For another example of software that has been doing this for years, check out vagrant.

Admittedly, there are issues with this, since openssl vulnerabilities require the release of new software versions to upgrade the embedded copy.


I maintain that the best of both worlds is a scheme like Nix or GoboLinux. It allows multiple versions of libs to exist without having each program come packed with half a distro.


The case against distro packaging:

- Desktop Linux has a small user base. Further splitting it up with different packages for each distribution makes it even more of a non-target.

- You can only install software cleanly that is packaged by your distributor. No distributor packages software that costs money. Many distributors try to prevent you from installing software with licenses they don't approve.

- If your user is using a stable distribution they will have old software. The distribution might only push bug fixes if they are security related. Users will report bugs that are already fixed.

- Maintainers often have no idea what they're doing, because they aren't working on the project and there are rarely other developers that check their patches. Worst case this leads to something like the Debian random number generator bug.

Distribution packaging for user-facing programs has large drawbacks and marginal benefits, which can mostly be put down to not-invented-here syndrome.


All true. And yet, Steam is the most used package manager on the Linux desktop today, and the exact opposite on every single point. If distros don't adapt they will become just kernel maintainers.


Anyone tried 0install?


Not to get unnecessarily political, but there is almost certainly an analogy to be drawn here to American federalism and the state v. federal government interplay on behalf of their citizens.

Taking the analogy further, the App Store trend is analogous to the 17th amendment[1] superseding the states' right to appoint senators. The relevant implication being that tribal power inevitably gravitates toward centralization on behalf of the "user."

[1] https://en.m.wikipedia.org/wiki/Seventeenth_Amendment_to_the...



