Which users actually depend on Linux's iron-clad promise to never break binary compatibility? It's curious to me that this issue is considered so sacred, yet I rarely hear the rationale or actual user stories of people who want to run 20-year-old userland on a new kernel.
In particular, note that the promise of kernel binary compatibility does not guarantee that an old binary will run on a modern Linux distro unless the binary is statically linked. Most user-space libraries bump their major version number every so often, so it's unlikely that the required .so's for a very old binary will be present on a new system.
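To make that concrete, here is a rough sketch of what the failure looks like (the library name libancient.so.3 is made up). An old binary records the exact soname it was linked against, and the dynamic loader looks for precisely that file, the same way this manual dlopen does:

    /* sketch: what happens when an old binary's versioned .so is missing  */
    /* build with: gcc check_so.c -o check_so -ldl                         */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* an old binary asks for e.g. "libancient.so.3" (hypothetical);
           the loader wants exactly that name, not the "libancient.so.4"
           a modern distro happens to ship */
        void *h = dlopen("libancient.so.3", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "cannot load: %s\n", dlerror());
            return 1;
        }
        dlclose(h);
        return 0;
    }

The kernel keeps its side of the bargain either way; it's the missing userspace .so that kills the old binary.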
You raise a good point. Why are most open source programs dynamically linked again?
Static linking is one of the nice things about the OSX ecosystem. If only Apple wouldn't unnecessarily break their runtime environment every few releases.
> Why are most open source programs dynamically linked again?
Probably because dynamically-linked binaries are smaller, use less memory (by avoiding duplication), and can get fixes or security updates without rebuilding or re-deploying. When you're a distro it hardly makes sense to ship a copy of libc inside every single binary. Any fix/update to libc would require re-downloading basically the whole system!
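To make the sharing concrete, here's a small glibc-specific sketch using dl_iterate_phdr that prints the shared objects a process actually has mapped. Every dynamically linked program on the box lists the same libc.so.6, so one updated file on disk covers all of them:

    /* sketch: list the shared objects this process is actually using      */
    /* build with: gcc ldlist.c -o ldlist   (glibc: dl_iterate_phdr)       */
    #define _GNU_SOURCE
    #include <link.h>
    #include <stdio.h>

    static int print_so(struct dl_phdr_info *info, size_t size, void *data)
    {
        (void)size; (void)data;
        /* skip the empty name used for the main executable */
        if (info->dlpi_name && info->dlpi_name[0] != '\0')
            printf("%s\n", info->dlpi_name);
        return 0;   /* keep iterating */
    }

    int main(void)
    {
        dl_iterate_phdr(print_so, NULL);
        return 0;
    }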
Maybe the problem is insisting that all programs behave the same way. It would indeed be irritating to have every Unix tool link to its own libraries, but most well-known tools are small programs that don't get constant feature updates. It's a whole different thing for typical desktop applications: those are updated far more often than the distros, and no two distros ship the same library versions an app might need to run. Still, distros don't like it when such applications bundle their own library versions... and that just doesn't work well, except maybe for the handful of high-profile applications with enough maintainers. Even keeping an up-to-date Iceweasel on Debian caused me trouble with library dependencies just a few months ago -- and that is probably the most prominent desktop application in the free software world.
It gets even stranger in the world of free games. Games often need library fixes (a very typical situation, since games and engines are tightly interwoven) that aren't in a distro yet -- and won't be for some time, because the library isn't officially updated yet or a particular patch simply won't be included. On Windows that's no problem: modify the library source and drop in the DLL. On Linux distros... well, it's just not easily possible. Which means, funnily enough, that it's easier to modify library sources in the proprietary Microsoft world than in the free software world. We got freedom, but with library and distro gatekeepers attached, so to speak... which pretty much sucks (even for the library authors).
Strong versioning is key. Only allow links to be redirected to bug fixes, never feature enhancements. If you wrote your code for 1.0, you'll continue to link against 1.0, barring some bug fix (1.0.1). Your code should never be silently upgraded to link against 1.1. Microsoft also gets this right with the global assembly cache.
As an aside, DLL Hell was coined by Szyperski, who still works for Microsoft.
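For what it's worth, ELF/glibc has a mechanism in exactly this spirit: symbol versioning. Old binaries stay bound to the symbol version they were built against, and you can even pin a fresh build to an old version. A rough sketch for x86-64 glibc, using the well-known memcpy@GLIBC_2.2.5 trick (treat the details as an illustration, not a recipe):

    /* sketch: pinning a call to one ABI version with GNU symbol versioning */
    /* build (x86-64 glibc): gcc -fno-builtin-memcpy pin.c -o pin            */
    #include <string.h>
    #include <stdio.h>

    /* glibc still exports the old memcpy as memcpy@GLIBC_2.2.5 even though
       memcpy@GLIBC_2.14 is the default on newer systems; binding to a
       version explicitly means a rebuild never silently picks up the new one */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void)
    {
        char src[16] = "pinned version";
        char dst[16];
        memcpy(dst, src, sizeof src);
        puts(dst);
        return 0;
    }

That gives you the "bug fixes yes, silent feature upgrades no" property at the symbol level, which is roughly what the GAC gives you at the assembly level.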
Well, they are dynamically linked, but usually only to Apple-provided libraries as far as I know. It's not like every app installs a dozen new DLLs onto your system, and that's what makes the big difference. It's what makes Mac apps (at least traditionally) standalone and executable from anywhere.
I fear, however, that Apple is currently destroying this design philosophy with sandboxing. At least the application data folders have become way more complex now, and I wouldn't like to troubleshoot them anymore -- something that has always been super easy.
Technically, outside of Apple frameworks, they're typically dynamically linked to frameworks located inside the .app folder. This is by no means mandatory though.
> If only Apple wouldn't unnecessarily break their runtime environment every few releases.
What runtime-breaking changes are you thinking of? Apple's been very good about not breaking binary compatibility -- of the runtime, or of their frameworks -- between OS releases.
Sure, there are some differences between 64-bit and 32-bit systems. But if you take a 32-bit Mac app from five years ago and run it, it will still run just as well today as it did when it was first compiled.
>Why are most open source programs dynamically linked again?
To save on the bloat of shipping multiple redundant libraries with every app, and to guard against the perceived 'DLL hell' of Windows (which was fixed a decade ago). Ironic that we ended up in 'dependency hell' instead, with RPM and APT packages requiring specific versions of libraries.
Read through this excellent discussion thread if you're really interested in the problems with linking and the lack of a Common Object Model in Linux.
This is incomprehensible. "DLL Hell" is what happens when you use dynamically linked libraries without managing versions and compatibility correctly. DLL Hell can only happen when you are doing dynamic linking, so I'm not sure what you can possibly mean by dynamic linking "guarding against perceived 'DLL hell'".
Sigh, sneering and snark and also downvotes on my GP comment. That's HN I guess. Must.. resist.. temptation.. to be snarky myself.
Anyway back to the point...
>Oh boy SxS. So now you have separate DLLs for every application.
No, only if the application uses a different version that is explicitly marked as NOT compatible. If the version used is the same, you do not have separate DLLs for every application.
>How about just linking it in statically and make the applications standalone?
That will needlessly bloat up the application.
Assume 10 applications need library X version 2.3 and one application needs 2.2. With SxS, you will have one 2.3 DLL and one 2.2 DLL. If you link statically, the same code will be duplicated in 10 EXEs. Multiply this by all applications and all DLLs shared across applications. Not to mention waiting for 10 apps to update to fix a security bug.
I can see the benefits in terms of security, but not in terms of space. So what if the application folder uses half a gig more, if I never have to use installers for most applications?
And as far as your example goes: from what I understand, most Windows developers just pin a fixed library version number to specify which DLL to use. When that happens, security updates won't do any good either. IMO, as long as APIs can change between library versions, developers will always be responsible for upgrading to the newest libraries themselves. It's a nice idea, but it just doesn't reflect reality in the world of business applications, where incompatibility directly results in monetary losses.
This rant makes little sense to me. For one, linking and COM are mostly orthogonal issues, and ABI compatibility in Windows is not directly tied to COM.
I think the lack of ABI compatibility in Linux has more to do with incentives and economics: innovating while keeping the ABI stable is very difficult and resource-consuming. I don't understand why Miguel would cite Apple, as they are pretty lousy in that department, whereas MS famously spends tons of resources on that issue.
Commercial vendors do. It is surprisingly hard to maintain proprietary software for Linux, as different versions ship with different shared libraries and have different ways of doing things. Even maintaining systems that need to run on RHEL 4 through 6 takes _a lot_ of effort. And when you're creating commercial software, effort means time and money.
The people who write and deploy software to limited markets appreciate it. Have you seen how many Windows 2000 deployments are still out there? It's the same with BSD and Linux - there are a lot of 10+ year old systems out there doing useful work. Binaries that keep working save hundreds of hours of work maintaining that software.
I don't think your argument applies: you are talking about running new binaries on an old OS/kernel, but this is not guaranteed to work. New system calls and kernel interfaces are added all the time, so new binaries are unlikely to work on old kernels. The promise is in the other direction; old binaries are guaranteed to work on new kernels.
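To illustrate the forward failure mode: a binary that needs a newer system call simply gets ENOSYS from an old kernel. A small probe sketch below, using getrandom purely as an example of a "newer" syscall and guarded in case old headers don't define it:

    /* sketch: why new binaries break on old kernels -- missing syscalls   */
    /* build with: gcc probe.c -o probe                                    */
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <errno.h>
    #include <stdio.h>

    int main(void)
    {
    #ifdef SYS_getrandom
        char buf[16];
        if (syscall(SYS_getrandom, buf, sizeof buf, 0) == -1 && errno == ENOSYS)
            puts("kernel too old: getrandom() not implemented");
        else
            puts("getrandom() works here");
    #else
        puts("build headers predate getrandom(); can't even ask for it");
    #endif
        return 0;
    }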
At my office, we actually still have some Windows 3.1 servers in production alongside some Windows 2000. Replacing the software running on them has proven to be a bit trickier than previously thought.
Anyone who relies on (for example) Oracle Database on Linux. Not so much for the compatibility itself (Oracle does maintain their software), but because it builds trust between the Linux kernel team and the software vendors shipping stuff on top.
If it weren't for this promise, I think it's much less likely that people happily developing for Unix would have moved over to Linux.
I suspect Windows end-users don't care so much either, but the enterprise world does. The number of Fortune 500 companies that still use Windows XP because they can't upgrade is most likely significant. People also often underestimate incompetence and things like "we lost the source of this software" or "nobody knows how to upgrade this codebase at an affordable price".
This is the bread and butter of companies like RH (many companies are still on RHEL 5, and most of those have some legacy RHEL 3 systems, in my experience).
There are definitely occasional stories about people running old binary apps.
But I think the real reason for this policy is its simplicity. I don't know if Linux developers are disciplined enough to follow Solaris-like deprecation schedules. When you allow exceptions, there is a tendency to allow more and more of them.