It makes me really sad that this is necessary. Unix has a concept of shared libraries. And somehow it managed to get ruined so irrevocably that there's no going back. This—this was a solved problem! It really was. It was solved, and then we unsolved it when we decided that 'move fast and break things' was more important than ABI stability. And now shared libraries are completely useless. I struggle to name a single c library that's still useful as a system-wide shared library nowadays. Libc/m doesn't really count because it's part of the language, and libz is small enough that everyone who needs it statically links it. Everything else is neither ubiquitous nor stable enough to be used. HOW DID THIS HAPPEN?
Private developer wants to distribute binary + shared dependency libs. On Windows they package it into an installer which unpacks it into the target destination & everything works. On macOS the user gets a folder that acts like a single file (an app bundle) within which everything is stored. Additionally, OS releases are predictable, so something targeting a minimum of macOS 10.14 has a reliable way to specify that in the toolchain & know that the prerequisite runtimes are there (Windows is a bit less elegant here but still manageable).
On Linux you have to provide RPMs for Red Hat, DEB files for Debian variants, ??? for Gentoo users. Moreover, your dependencies have to be managed in a totally bizarre way & you need special launchers to put your shared libraries somewhere private & add them to the library path, to avoid making assumptions about whether or not the user has the right prerequisites. Or you run your own apt/yum/etc servers to host your packages & play nice within each ecosystem. Additionally, some distros do periodic releases, some do rolling releases. Considering how small the Linux population is, it's more headache than it's worth to target for commercial shops that are cross-platform, since most of their customers are on the other platforms anyway. That also isn't getting into the mess that 32-bit vs 64-bit is on Linux.
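For what it's worth, here is a minimal sketch of what those launcher scripts usually paper over; myapp and libvendor are hypothetical names, and the trick is a relative rpath baked in at link time so the bundled libraries are found without touching LD_LIBRARY_PATH:

    /* myapp.c - stand-in for the vendor's application code */
    int main(void) { return 0; }

    /*
     * Hypothetical bundle layout after unpacking:
     *   myapp              the executable
     *   lib/libvendor.so   a bundled dependency
     *
     * Link with a relative rpath instead of shipping a wrapper script:
     *   gcc myapp.c -o myapp -Llib -lvendor -Wl,-rpath,'$ORIGIN/lib'
     *
     * $ORIGIN expands at load time to the directory containing the
     * executable, so the dynamic loader finds lib/libvendor.so wherever
     * the user unpacks the bundle.
     */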
Finally, the big advantage is that the release is done by the author. No more package maintainers providing questionable maintenance across a bunch of distros.
>Finally, the big advantage is that the release is done by the author
I see the opposite, maintainers doing the release, as the biggest net win of the current system, even if it seems inefficient or bureaucratic. I do not trust the authors. The only modicum of sanity and trust comes from the fact that debian/fedora maintainers are actually on your (the user's) side and have strong rules and guidelines about everything. Desktop linux doesn't have much meaningful isolation or sandboxing that malicious apps can't circumvent. It's only now that we are seeing some efforts in this direction. Still, it's quite far from something like android where you can quite safely run arbitrary applications.
It's not just the threat model. Developers are increasingly focused on fast iteration and on annoying users with constant and often unwanted updates, something debian saves users from. Very few users care about always having the latest features and bugs, or want to become beta testers. Not to mention the privacy shitshow from developers wanting telemetry or for more nefarious reasons.
Software repositories like debian and the apple app store are great because they put a layer between the developer and the users, so trust becomes one relationship with the repository instead of a 1-1 calculation with every developer.
Distributions in their current form are almost harmful. I like what they do, conceptually, but that model you're describing should only apply for the base system. I want Firefox to update ASAP, I want VLC to update ASAP.
The distribution model should only apply to libraries and base tools. And even those should be versioned so they can coexist and I can easily install any app, from the ones that want GTK1 to the ones that want GTKLatest.
Firefox is the perfect example of why I hate user facing apps updating constantly. They're always adding random features, breaking plugins (still don't have vertical tabs working properly) and shifting the UI around. It was much better back when they had stable releases.
> The distribution model should only apply to libraries and base tools.
As long as nothing breaks, it doesn't worry me how many times libc is updated; it's the user-facing changes that interrupt me that I want to avoid.
> And even those should be versioned so they can coexist easily and I'm easily able to install any app, from the ones that want GTK1 to the ones that want GTKLatest.
If they can't commit to stable releases and non-breaking API then they aren't going to commit to maintaining the 15 versions of GTK that you'd end up with on your system, that's the worst of every world.
If upstream isn't interested in maintaining a stable version (or more realistically doesn't have the resources), someone'll have to fork it, rename it and release it as "stable foo fork". Upstream makes calculated decisions (if you want to be charitable) w.r.t. the resources they have, the new features they want to add, stability etc. If those trade-offs are not what you want, you'll have to use different software. Same applies to e.g. the telemetry.
And from experience, Debian maintainers often don't look at the code of the packages they publish (e.g. jwz's time bomb in XScreenSaver), let alone backport bugfixes to the packaged version from the earliest still-maintained upstream stable version.
I think the parent means that, for opensource software, they trust their distro maintainers to read the source code and only publish trustworthy software.
I know, but with his model a random third party decides what's best for that software.
That third party has screwed the security of the package on occasion (Debian being a famous example: https://www.schneier.com/blog/archives/2008/05/random_number...), has delayed package updates for years if not decades (I don't even need to provide an example, just do a diff of stable upstream versions and your favorite distro's package versions), has even broken packages on occasion, etc. And let's not forget the frequent cases where there's a personality clash between the upstream developer and a package maintainer...
And this model also assumes that a package maintainer has the time or expertise to actually audit the code fully and correctly. Really bold assumption!
1. Define your dependencies (try to be conservative so you're not reliant on bleeding edge features)
2. Make a good app that people want with a fairly simple build step
3. Support one or two major distributions
4. Ask for help in bundling for everything else
5. Fix issues as they're discovered
If you ask nicely and people want your app, the community will help you out with the rest. Just look at Steam, which is really only supported on Ubuntu and SteamOS, but is packaged by pretty much everyone.
Private developer wants to distribute binary, have system include shared dependency libs because—guess what, they're shared. That's what we could have had. That's the problem this is getting around.
>It was solved, and then we unsolved it when we decided that 'move fast and break things' was more important than ABI stability.
I don't believe it was "solved", and I don't believe it was caused by people "moving fast and breaking things".
Linux became HUGE, in terms of the number of people involved, and there is no master body that governs anything. As an application developer, you could build against libfoo, then your app breaks on debian because they never updated libfoo apart from some strange monkeypatch, and is broken on redhat because they decided to fix bug #12494 differently from the developers of libfoo.
God himself has decided it's too hard, too complex and too much of a clusterfuck of competing motivations to expect that RedHat/Debian/FooBarLinux will all reliably provide the same shared library. And the user doesn't care; bandwidth is cheaper than disk, and it's most certainly cheaper than the user's time.
That's a pessimistic and imo unwarranted attitude. There are absolutely platforms that maintain backwards compatibility for decades on end. The most salient example is the linux kernel, but, well, linux distributions are not a monolith. Fine. How about the c language, or common lisp? Perl packages are installed relatively globally, and things don't really break. And sure there's some fragmentation there, like with sbcl-only lisp packages, or gnu extensions, but by and large I can compile an arbitrary perl/common lisp/c package with an arbitrary implementation of that language and, at least in the case of perl and lisp, it works basically the same way it did 10 years ago. This is possible, because it has happened.
I will grant, however, that it is hard. Newer languages—rust, c#, java, python, js—are trying. Of them I would say rust is the best, followed closely by c#, with python and js taking a distant 3rd and 4th, but none of these match the older languages. But—rust is newer than c# and java and python and js. They decided: we're going to make a high-quality, stable package repository, with a culture of stability in packages. And aside from stuff requiring nightly, that seems to be happening, it's still pretty good, and it's still a hell of a lot better than linux. Granted, linux has a much more difficult situation to wrangle, and because it's less monolithic than those other constructions, tragedy of the commons tends to occur, but I think all it takes is a group of people deciding that they will take the work to make things right, getting support, getting lucky, making a super-dynamic linker, and we can fix this.
Also, distributions' package update cycles are so slow that some packages never catch up to upstream. For example, x11vnc's latest version is 0.9.16, but Ubuntu/Debian has been locked to 0.9.13 for more than half a decade [1]. ddclient is also a good example [2].
In my case, I made a program that requires Python >= 3.6, but Ubuntu was stuck on 3.5 for years, so I was forced to use AppImage to bundle an in-house compiled Python 3.6.
Not to mention that Electron apps are simply not compatible with distribution's dependency management.
It was never really solved. Shared libraries create huge integration testing headaches and the linux landscape fragmented early. This means that there are a lot of combinations of libraries out there in lots of different versions. Some distributions stick to really old versions of stuff to preserve backwards compatibility. E.g. Red Hat is really awful if you are a developer since it pretty much has the wrong version of anything you'd care about. But I remember this being a real PITA when you wanted to use e.g. the current versions of java, node.js, python or whatever, and then realized that all of these were basically 2-3 years behind because the LTS version of the distribution was released just before the major release of what you needed. That, and the widespread practice of engineering around it (e.g. building from source, installing from debian unstable, etc.), results in a lot of problems.
Luckily these days you package the correct versions up as Docker images, so it's much less of an issue. The amount of space you save by sharing your libraries does not really outweigh the benefits of shipping exactly the things you need as a statically linked binary.
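As a rough sketch of the static-linking side of that trade-off (exact sizes depend on the libc, and glibc's -static has known caveats around NSS):

    /* main.c - any small program; the contents are not the point */
    int main(void) { return 0; }

    /*
     *   gcc main.c -o app            -> a few KB, needs libc.so.6 and
     *                                   whatever else "ldd ./app" lists
     *   gcc -static main.c -o app    -> hundreds of KB, and "ldd ./app"
     *                                   reports it is not a dynamic
     *                                   executable
     *
     * The extra size per binary is the disk/bandwidth being traded away
     * for not having to agree with anyone on shared library versions.
     */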
Doing the same for user space software makes a lot of sense. Apple figured this out in the nineties already. Mac software never really needed installers. Drag the app to the Applications folder, delete to remove. Just works. The only package managers on OSX are for managing OSS linux software.
Nowhere is it written that a system can only have one version of each library. Indeed, there is an entire system (soname) for resolving which of many versions of a library a binary should run with. There is no reason at all why you couldn't have a system where all the libraries are shipped in your application folder, but it uses the system ones where available. There's also no reason why every distro couldn't have every version of every library, and all software ever released would just work with the system version.
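A minimal sketch of how soname versioning allows exactly that, using a hypothetical libfoo; two incompatible major versions install side by side and each binary keeps loading the one it was linked against:

    /* foo.c - trivial source for the hypothetical libfoo */
    int foo_answer(void) { return 42; }

    /*
     * Build two incompatible major versions with distinct sonames:
     *   gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0 foo.c
     *   gcc -shared -fPIC -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0.0 foo.c
     *   ln -s libfoo.so.1.0.0 libfoo.so.1
     *   ln -s libfoo.so.2.0.0 libfoo.so.2
     *
     * A program linked against version 1 records "libfoo.so.1" as a
     * DT_NEEDED entry (visible with readelf -d), so installing version 2
     * next to it changes nothing for that program; newly built software
     * picks up whichever version the libfoo.so development symlink
     * points at.
     */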
> I struggle to name a single c library that's still useful as a system-wide shared library nowadays.
Libcurl is the only one I frequently use that seems to take it seriously. There might be others, but frustratingly few will indicate what level of binary compatibility they guarantee; it would be nice to have a list of reliable libraries. Not just for binary compatibility, but it would also be nice to know how much churn you can expect with any given library.
On the other hand, considering the state of the software industry with relation to privacy (including some open source products), running software from arbitrary sources is becoming less and less viable anyway, so I'm not sure it's a problem worth fixing.
Microsoft treats Windows APIs as a contract with the developer, with behavior defined by specifications. Are Linux libraries typically managed that rigorously?
Linux the kernel does that—see linus's rants about how 'kernel does not break userspace'. Unfortunately, someone decided at some point that versioned libraries should be a thing, and then all hell broke loose when openssl and glibc decided to break compatibility in minor versions so now it's just a free-for-all.
The problem that's being solved is that vendors are incompatible, which could easily be solved if they were compatible. They all run a linux kernel. They all use ELF and x11 and opengl. If I compile 'hello, world' on one distro, I can drop it onto another random distro and it'll still work, but after a certain threshold of complexity, that stops working. It doesn't have to stop working.
The glibc version tagging has to do with specific symbols; hello world just uses printf, which has been there since forever. (Actually the compiler probably optimizes that call down to puts, but the point stands.)
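For the curious, a quick way to see that concretely; the exact version tags and the printf-to-puts substitution depend on the compiler and glibc build:

    /* hello.c */
    #include <stdio.h>

    int main(void) {
        printf("hello, world\n");
        return 0;
    }

    /*
     *   gcc hello.c -o hello
     *   objdump -T hello | grep GLIBC
     *
     * lists the handful of versioned glibc symbols the binary needs, each
     * tagged with an old GLIBC_2.x version, which is why a trivial binary
     * copies between distros fine while anything that pulls in recently
     * added or re-versioned symbols does not.
     */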