Indeed there have, and they haven't taken off... in Linux. Which is one of the reasons I think the Linux Desktop is unsalvageable. If the community would rather keep reinventing the package manager, never fixing any of its problems as a distribution mechanism, than go with the obvious and simple solution, then it is no surprise they have such a small share of the Desktop.
Just use folders, guys. Classic Mac OS essentially did that (technically a single file with a resource fork), DOS did it, RISC OS did it, NeXTSTEP did it and modern macOS inherited it and still does it, and a lot of Windows applications still work that way even if they don't advertise it; I'm sure there are a bunch I'm forgetting. The Linux Desktop seems like the outlier here, insisting on spreading everything over the file hierarchy and interlocking it all like it's still a server from the '70s.
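For a concrete picture, a macOS app bundle is just a folder with a conventional layout inside (app name made up):

    MyApp.app/
        Contents/
            Info.plist       metadata
            MacOS/MyApp      the executable
            Resources/       icons, assets, translations
            Frameworks/      bundled libraries

Copy the folder to another disk and the app still runs; drag it to the trash and it's uninstalled.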
And then every package comes with its own libraries, which don't get updated and end up duplicated everywhere. It's the same reason Linux (the kernel) emphatically refuses to support out-of-tree drivers. It means you have to make the effort to package software, yes, but once you've done that you get dependency management essentially for free. And as the end user, I can update EVERYTHING on my system with one command, rather than the Windows hell of a dozen updaters constantly running in the background.
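On a Debian-based distro, for instance, that's just:

    sudo apt update && sudo apt upgrade

(or dnf upgrade, or pacman -Syu, depending on the distro), and every package on the system, libraries included, gets patched in one go.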
> And then every package comes with its own libraries
Only if they aren't part of the base OS set. This is how basically every operating system except BSD and Linux do things, and they have an order of magnitude more adoption than the Linux Desktop. Hell Android even uses the Linux kernel and has an appstore and still does that.
> It's the same reason Linux (the kernel) emphatically refuses to support out-of-tree drivers.
Well, no: that's because they insist drivers are better maintained in-tree (it forces them to be open) and because they don't want to tie their hands by supporting a stable ABI. For an example of the downside of that policy, see the nVidia drivers on Linux.
Yes, it's a tradeoff, but there are a lot of downsides to package management that its proponents completely ignore. Case in point: the prevalence of containers, used to run software without dealing with the conflicts created by intermingling everyone's dependencies, to install up-to-date software without going through some repo, or to distribute for multiple distros without maintaining packages in two dozen repositories.
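E.g. running a current nginx on any distro, without touching the host's package manager at all:

    docker run --rm -p 8080:80 nginx

Everything nginx needs ships inside the image, so nothing on the host can conflict with it.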
Even Linus distributes his own software (Subsurface) as an AppImage. Probably just a stupid Windows user.
"Case in point: the prevalence of using containers to run software without having to deal with conflicts created by trying to intermingle everyone's dependencies,"
That's a problem on GNU/Linux; it's not a problem on illumos or the BSD-based operating systems. Either don't use GNU/Linux, or package 3rd-party and unbundled software in /opt, put its configuration in /etc/opt, and configure the software to use /var/opt for its variable data (as per the FHS specification), and the problem goes away.
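For a hypothetical package called foo, that layout looks like this:

    /opt/foo/         the application itself: binaries and its bundled libraries
    /etc/opt/foo/     host-specific configuration
    /var/opt/foo/     variable data: state, logs, caches

Everything under /opt and /etc/opt is delivered by the package; only /var/opt/foo holds anything unique to the machine.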
It's the clueless developer problem, not an OS packaging problem.
Maybe for that particular problem, but it does nothing for many others. For instance, what if I want to install an application on a different disk? In grand UNIX tradition, the scheme you outlined still spreads an application's files all over the tree.
I suppose you'll call that a packaging problem too, and I agree: you should package applications as relocatable directories that contain all their non-OS-provided dependencies.
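Something like the AppDir layout that AppImage uses, roughly (names made up):

    MyApp.AppDir/
        AppRun            entry point that sets up paths and launches the binary
        myapp.desktop     menu-entry metadata
        usr/bin/myapp     the application
        usr/lib/          bundled non-OS dependencies

The whole directory can live on any disk, and uninstalling is just rm -r.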
"still spreads an application's files all over the tree"
No, only three directories: /opt for the application, /etc/opt for its configuration, and /var/opt for its data. Please read the specification, either the FHS[1] or the AT&T original[2] whence the FHS came. Good engineers seek out and read specifications before they start any planning and work.
When you package applications in this way, only /var/opt needs to be backed up.
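For a hypothetical package foo, that backup is one command:

    tar czf foo-backup.tar.gz /var/opt/foo

Everything else can be restored from the package itself.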
Right, so instead of your entire application being in one directory, it is in fact spread across three disparate ones. Why not /opt/<APP>/(var|etc)? That would make too much sense, I guess.
I've read the spec; it's crap. There is no value in following a crap spec.
You might have read it, but you didn’t understand it, and the reason you didn’t understand it is that you don’t understand the concepts behind UNIX. No matter; here is your next stop:
“The art of UNIX programming”
...punch that into a search engine, read the book. Then we shall continue.
I understand the concepts just fine. They're from the 1970s, and they probably made more sense then, but it isn't the '70s anymore. Hell, the people who made UNIX moved on and improved on it with Plan 9, and even that was decades ago.
Stop treating UNIX and POSIX like they're some kind of religion.
If you had understood them, you wouldn’t have made the statements you made. The delineation between /opt, /var/opt and /etc/opt is intentional: when the content in /opt and /etc/opt is packaged, only /var/opt/<application> needs to be backed up, because that is the variable portion, the data.

There are other factors that play into this scheme as well, like the linker mapping and the ABI versioning consumed by the runtime linker, and a separate stack of shared object libraries, since except for libc and libstdc++ the libraries shipped with the OS aren’t supposed to be linked against. That’s how I can see you haven’t grasped the entirety of the subject matter at hand, which is why you were told to go read some more.

Packaging each application in its own directory with its own libraries might be convenient, but it’s dumb because of all the duplicated library code, the storage consumption, and the nightmare that will ensue come time to patch the software. These kinds of stupidities are reserved for Microsoft®️ Windows®️ but have no place on UNIX®️, where operational maintainability and stability are the highest priorities. I’m running infrastructure across datacenters here, not putzing around with a lone application; my worries are ever so slightly broader than the concerns of individual lone desktop PC developers with only convenience in mind.