I cannot help but wonder if "apps" aren't going to eventually replace package managers. That's certainly what Docker-like containers try to do.
It might sound crazy when you consider some of the services typically installed with a package manager, but ultimately there's no reason why an "app" cannot run as root. Many root packages aren't actually installing drivers; they're only root for historical reasons (e.g. being able to read/write to /etc).
Why, for example, couldn't a web server be an app? Or a DNS client? Or a DNS server? Sure, there are certain system services which cannot (anything installing drivers/filter drivers/core UI/sound subsystem/graphics subsystem) but MOST things can.
I'm certain containers will end up replacing package managers. This idea is being pushed by the systemd guys.
I think it's a step backwards, as you lose tight control over dependencies. E.g., in the case of a security issue, one cannot easily patch all containers if they are black boxes.
IMHO, the right tool for this is an improved package manager such as Nix or Guix (which incidentally also support containers in their own way). Perhaps the future should be a Nix-like package manager for dependency management, and container-ization for security and resource limits.
Well, at least xdg-app has the concept of "runtimes" shared among applications.
If a lib/bin in a runtime has a security issue, the whole runtime can be updated, transparently to the apps running on top of it.
A runtime might be FreeDesktop-1 or Gnome-3.14, for example.
Let's say a 0-day is discovered and patched in GTK 3.14: a new version of the Gnome-3.14 runtime is issued and downloaded by the clients.
Magically (with the help of overlayfs and co.), all the apps depending on this specific runtime have a secure GTK.
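To make the dependency shape concrete, here is a toy model (not the real xdg-app mechanism or its identifiers, just the idea): apps record a reference to a shared runtime rather than pinning individual library versions, so one update to the runtime is immediately visible to every app on it.

```python
# Toy model of shared runtimes: apps point at a runtime, not at individual
# library versions, so patching the runtime patches every app at once.
# All names here are illustrative, not real xdg-app identifiers.
runtimes = {"Gnome-3.14": {"gtk": "3.14.0"}}
apps = {"gedit": "Gnome-3.14", "evince": "Gnome-3.14"}

def gtk_version(app):
    # An app resolves its libraries through the runtime it declares.
    return runtimes[apps[app]]["gtk"]

runtimes["Gnome-3.14"]["gtk"] = "3.14.1"  # security fix lands in the runtime

print(sorted(gtk_version(a) for a in apps))  # → ['3.14.1', '3.14.1']
```

The point of the indirection is that no per-app rebuild or redeploy is needed; the apps were never copies of the libraries, only consumers of the shared layer.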
The problem is that the package managers used by most distros insist on only a single version of each package being installed at any one time.
Sort that out, the way GoboLinux does for example, and you do not need containers.
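For readers unfamiliar with it, GoboLinux keeps each version of a package in its own directory under /Programs/&lt;Name&gt;/&lt;Version&gt;, with a symlink selecting the default. A minimal sketch of that layout (built in a temp dir here; the package name and version numbers are illustrative):

```python
# Sketch of a GoboLinux-style tree: every version of a package gets its own
# directory, so nothing stops two versions coexisting on the same system.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
for version in ("2.24.28", "3.14.15"):
    (root / "Programs" / "GTK+" / version / "lib").mkdir(parents=True)

# A "Current" symlink picks the default version, GoboLinux-style.
current = root / "Programs" / "GTK+" / "Current"
current.symlink_to(root / "Programs" / "GTK+" / "3.14.15")

versions = sorted(p.name for p in (root / "Programs" / "GTK+").iterdir()
                  if not p.is_symlink())
print(versions)  # → ['2.24.28', '3.14.15']
```

With a layout like this, "which version do I link against" becomes a per-application choice rather than a system-wide constraint, which is exactly the conflict containers are often used to paper over.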
But containers are all the rage on the web these days, so ipso facto they must also be all the rage for desktop Linux.
Back before Gnome and Freedesktop, desktop Linux was a kernel-up project. More and more these days, however, it is web-down.
Meaning that web people get interested in using the L in LAMP on their desktops, start looking into Gnome/KDE, then get involved in Freedesktop plumbing, and all the while bring their web-isms ("move fast and break stuff" being the most annoying) with them further down the stack.
I use Debian all the time, and the major libraries are versioned; some have more than one version available. Debian is a very common Linux distro, and through it a lot of child distros get this as well. AFAIK, Red Hat and Fedora do this too.
So, do you mean that the state of adoption is bad for niche Linux distros?
I mean, I guess, but they tend to all be built out of packages assembled by a traditional package manager. Corralling and testing a bunch of things from a bunch of different sources and making sure they work together is hard. Plus, C library wrangling is becoming a bit of a lost art.