Coming from Windows a year or so ago, the Linux filesystem has been probably the most confusing thing about the switch (I've read about the logic behind /var and /bin and /usr several times but it was not intuitive to me and didn't stick). I generally have no idea where my config files are stored and can barely find where programs keep their data. And then I download some software that doesn't install itself and I have to figure out where to put it... The fact that this is all only sorta-kinda standardized between distributions, and that you often have separate config files for the system, for the user, and for the session doesn't help either.
It's not clear that GoboLinux entirely solves this issue but at first glance it looks like it's trying and the /Programs organization is pretty intuitive.
What's not appealing at all though is using a distro that doesn't have custom packages for all the software I'm using (aka is not Ubuntu or RedHat based). I'm not at all comfortable building from source--when I've tried it I've rarely been successful--and the prospect of further winnowing down my choices (from not just Linux-compatible software to software that however many GoboLinux users have uploaded to their Recipes directory) is, well, pretty intimidating.
/bin, /usr/bin, /usr/local/bin, /opt... is a mess on Unices. There is no profound idea behind it. Someone needed a folder that was not interfered with, then someone started interfering, so a new folder was born, and so forth. However, they are mostly just prioritized copies of each other. / comes first, then /usr, then /usr/local, then /opt. The structures in them are identical.
On Windows, however, about the only organized part is that applications get their own folders, but there are multiple potential locations (Program Files, Program Files (x86)), and they might also dump a bunch of things in various user-local folders, as well as the giant landfill that is C:\Windows. It lacks structure, to the point where there are even clones of UNIX structures in there (C:\Windows\System32\drivers\etc\hosts?!?).
That's one-sided and misleading. In fact, the Windows model of "/Program Files/%COMPANY_OR_PERSON%/%PACKAGE%/", "%USERPROFILE%/AppData/Local/%COMPANY_OR_PERSON%/%PACKAGE%/", and suchlike is akin to NeXTSTEP's ~/Apps, /LocalLibrary, /LocalApps, and so forth; both are from the early 1990s.
Similarly, the Windows organization of %USERPROFILE%/AppData/Local/, %USERPROFILE%/AppData/Local/Temp/, and so forth has a parallel on Freedesktop operating systems.
People have also observed on this very page that, far from being new-fangled as it has been mischaracterized here, GoboLinux has been around and doing things like this for almost a decade and a half. (Of course, as NeXTSTEP and Windows NT 3.5 show, the idea is almost a decade and a half older than that, even.) It is interesting to note that slightly earlier than that, around the turn of the century, Daniel J. Bernstein proposed a /package hierarchy (for package management without the need for conflict resolution) and a /command directory.
It has been a long time since I read those email exchanges. They are quite hilarious. However, despite the pedantry, /usr/local/bin and /opt/bin are all about not interfering with /usr/bin and /bin. /usr was, of course, made to be a home folder, and converted for the silly reasons mentioned in those old emails (although this was changed back in Plan 9 and Inferno), and /usr/bin has maintained a similar idea of holding non-essential binaries in many current distros (/bin often belonging to one package, and /usr/bin containing many independently managed packages).
You seem to have missed that I did not complain about "Program Files" being nicely organized folders containing all relevant application data (although I am highly allergic to unnecessary capitalization and poor naming), nor say that Windows was the only platform with a mess. Rather, I complained that there are multiple such program locations (Program Files, Program Files (x86), and some applications install to user folders); that Program Files contains a bunch of entries for things that aren't programs (helpers, things installed elsewhere for stupid reasons, etc.); that AppData is a hidden mess; and that the Windows folder is a ghastly landfill that contains everything under the sun, including things that should have gone in the other locations (system applications, libraries, ...), things that didn't fit anywhere (most of what's in there), and even configuration files in UNIX directory structures (/c/windows/system32/drivers/etc/hosts). It's a disgusting mess in there that cannot by any meaning of the word be considered organized.
Note that I have not used Windows for more than 60 contiguous minutes at a time in many years, so my memory may fail me.
Repeating the same thing does not make it any less one-sided and misleading. In fact, Microsoft discouraged applications from putting anything into the Windows directory or the system directory from the middle 1990s onwards.
And of course the Windows directory being a subtree containing everything that is part of the operating system is exactly the same model that people have been adopting with the "/usr merge" from AT&T Unix System 5 Release 4 onwards, and only a "landfill" if one thinks having the operating system in its own single subtree off the root is a "ghastly" idea. The range of operating systems that have done this very thing over the years, with the likes of C:\DRDOS, C:\WINDOWS, C:\OS2, /System, /boot/system, and /usr, are really not for you in that case.
By the way, your description of priorities, with / overriding /usr overriding /usr/local, was as misleading as your idea about interference rather than the far more mundane and ordinary reason of having a second disc. In fact, on systemd operating systems the default path is to search in the order /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin, which is the opposite of the order you described. FreeBSD nowadays uses a similar order, with _PATH_DEFPATH being /usr/bin:/bin, although _PATH_STDPATH is /usr/bin:/bin:/usr/sbin:/sbin. OpenBSD goes with /usr/bin:/bin:/usr/sbin:/sbin:/usr/X11R6/bin:/usr/local/bin for both.
The misleadingly simplistic and wrongly ordered /, /usr, /usr/local wasn't even the historical case, moreover. Back in the 1980s people had the likes of /usr/ucb, /usr/lbin, /usr/bin, /usr/5bin, and sometimes even /etc and /usr/etc on the search path. There were actually quite a lot of these. AT&T Unix System 5 Release 4 had them all under /usr in 1988.
First you get /bin at boot, with enough tools to mount an external drive holding /usr, and you're set. That's what I heard was the case back then.
These days it's nonsensical, and I heard about an effort to symlink one to the other and consolidate the binaries.
/opt is alright. It's where you dump your tarball, compile your custom stuff, and install it so as not to mix it up with package-managed software. Not sure how people differentiate against /usr/local though.
/usr/local vs /opt, afaik, is mostly a case of 3rd-party commercial software ending up under /opt, whereas your more personal scripts and hacks are in /usr/local.
I'm not particularly bothered about /bin vs /usr/bin, these days everything is in /usr/bin -- unless you are in an embedded scenario, and at that point everything is custom anyway.
The one location that doesn't make sense is /usr/sbin, which is hopefully being phased out.
Tbh the only thing I really miss is having ~/bin in PATH by default (and ~/lib, ~/share etc similarly picked up where necessary), since installing software in personal userspace is still pretty common in real multiuser environments; standardizing that would help, imho.
> Tbh the only thing I really miss is having ~/bin in PATH by default (and ~/lib, ~/share etc similarly picked up where necessary), since installing software in personal userspace is still pretty common in real multiuser environments; standardizing that would help, imho.
Not sure that will become standardized, it's a big security risk. Usually you want ~/bin to be at the front of your PATH so it takes priority, but the downside is... well that it takes priority. It's easy for someone to subvert the stuff in PATH because they only need your permissions.
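One common compromise (a convention, not any kind of standard) is to append ~/bin rather than prepend it, so personal tools stay reachable without being able to shadow system binaries:

```shell
# Prepending gives ~/bin priority -- convenient, but a user-writable
# directory early in PATH means anything dropped there shadows system
# commands. Appending avoids that while keeping ~/bin on the PATH.
PATH="$PATH:$HOME/bin"
export PATH

# Confirm ~/bin ended up last, not first.
last_entry=${PATH##*:}
echo "$last_entry"
```

The tradeoff is that you can no longer override a system binary with a personal wrapper, which is exactly the point.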
"These days it's nonsensical, and I heard about an effort to symlink one to the other and consolidate the binaries."
Yeah, Fedora wants to move everything to /usr, last I heard. I personally think it's a stupid idea (/usr is just a vestige of Unix's early development and was never meant to be a permanent part of the Unix filesystem layout) and would prefer everything to be under /, but whatever.
"Not sure how people differentiate against /usr/local though."
/usr/local is typically reserved for stuff built from source. OpenBSD also uses it for things installed via the package manager.
/opt is a lot more freeform. Yeah, you can create an /opt/bin or /opt/lib or what have you, but the way I've seen it used (and the way I personally use it) is something closer to Program Files (so I'll have /opt/firefox or /opt/julia or whatever). A lot of commercial software packages will install to /opt in a similar manner.
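That per-app /opt convention can be sketched concretely; here a temp dir stands in for the real /opt and /usr/local/bin so no root is needed, and the application name is made up:

```shell
optroot=$(mktemp -d)   # stand-in for /opt
bindir=$(mktemp -d)    # stand-in for /usr/local/bin

# Each application gets its own self-contained directory...
mkdir -p "$optroot/exampleapp-1.0/bin"
printf '#!/bin/sh\necho "exampleapp 1.0"\n' > "$optroot/exampleapp-1.0/bin/exampleapp"
chmod +x "$optroot/exampleapp-1.0/bin/exampleapp"

# ...and a single symlink exposes it on the PATH without
# scattering files across /usr.
ln -s "$optroot/exampleapp-1.0/bin/exampleapp" "$bindir/exampleapp"

out=$("$bindir/exampleapp")
echo "$out"    # -> exampleapp 1.0
```

Removal is then just deleting the app's directory and the one symlink, which is a big part of why commercial packages like the layout.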
Fun fact: /usr exists because Thompson and Ritchie ran out of disk space on the root disk of their PDP-11 [1] in the early 1970s.
/usr was originally where the user's files (everything but the OS) lived, but when they ran out of space they decided to just move OS stuff into it.
The whole "/bin is available on boot until /usr is mounted" story is a classic piece of post-rationalization, though it was presumably a useful distinction for a while.
The content of hier(7) looks a bit out of date, e.g. it's missing "/run" but still documents the whole archaic "/usr/X11R6" hierarchy that went away together with imake a decade ago.
file-hierarchy(7) appears to be a more modern alternative.
(Cue complaints from users of a real UNIX about how GNU/Linux is such a cobbled together poorly documented hodgepodge...)
On your last point about installing software: I've used Ubuntu and Fedora, and finally settled on Arch ~4 years ago. Interestingly, although Arch has a reputation of being a "DIY" distribution (in the sense that it targets power users), I've had way more trouble installing software on Ubuntu and Fedora than on Arch.
Arch has what is called the Arch User Repository (AUR). It's a repository of recipes for building all kinds of software, and it's submitted and maintained by the users, not maintainers of the distro itself. Whenever there's a package missing in the official repositories, I almost always find it in the AUR. AUR has no binary builds---instead, you download files and scripts needed to build the software from source. You then run the makepkg utility, and you get a package you can install. (There are also helper programs which automate this whole process for you.) The result is that the package manager tracks the files properly, and you can keep track of which version you have, can remove the software cleanly, and update it whenever you like.
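For the curious, an AUR recipe is just a PKGBUILD file that makepkg reads. A minimal, entirely hypothetical one (package name, URL, and source tarball all invented for illustration) looks roughly like this:

```shell
# PKGBUILD -- a hypothetical, minimal AUR-style recipe; every name here
# is made up. makepkg sources this file and runs the functions below.
pkgname=exampletool
pkgver=1.0
pkgrel=1
pkgdesc="An illustrative example package"
arch=('x86_64')
url="https://example.com/exampletool"
license=('MIT')
source=("https://example.com/exampletool-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  # Install into $pkgdir, which makepkg turns into a pacman package.
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Running makepkg -si in the recipe's directory builds the package and hands it to pacman, which is why the package manager ends up tracking every file.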
Today, when I'm on an Ubuntu box, it's a pain in the ass to install some less-known software. I can download the sources and then try to build, and I usually succeed, but not before I've figured out the right flags, prefixes, build-time and run-time dependencies, etc. I also get questions about this from friends who have recently switched to Linux, and I don't really have a solution to offer. Yes, there are PPAs, but more often than not, PPAs are outdated and binary-only, meaning you can't really review how the binary packages came about.
AUR really solves the problem nicely. I'd like to see such a concept in another, more beginner-friendly distribution, that I can then recommend to people around me who are considering making the switch.
I tried arch a couple of years ago (in a VM), and while I liked it and the reasoning behind it, I had certain misgivings that led me away from it being considered for any future use.
The main one being that whole AUR thing: What happens if there isn't a recipe in there? For example - you want to install something which only has a binary, no source, and can't be (legally) redistributed; what then?
Well - to "install" the software, you basically have to go through all of the steps to build your own recipe, noting down what you do manually, and eventually getting to the point where you have a script (or whatever) to install the software. Then, you undo all of your manual work, and run your Arch package to do the real install. Tada - you now have a working package for that piece of software, and have spent potentially hours to get to that point.
As a bonus, you can't redistribute that recipe, because it is based around a binary blob that was purchased and you don't have the rights to redistribute it.
At least, that's how I interpreted it. I honestly don't recall what piece of software I wanted to use, but I think it may have been EagleCAD - which has a Linux port, but is binary only. I think it only has RPM and DEB packages, so to get it on arch, you'd have to build your own package.
It wasn't so much the fact that you had to do this, just that it wasn't an easy and straightforward process (from what I recall), and if you only need to do it once or twice, it would be pretty arduous. Maybe by now (haven't looked) they have some kind of package conversion tool, which would be a great addition.
> AUR really solves the problem nicely. I'd like to see such a concept in another, more beginner-friendly distribution, that I can then recommend to people around me who are considering making the switch.
Switching operating systems and tools is always a messy experience.
Coming from Linux and macOS to Windows this year, the Windows filesystem has been... well, one of the very confusing things. Although I understand the Linux fs is messy (and macOS improves a bit on that), I'm used to it and it makes sense.
OT: the only good bit I found is the Outlook/mail/calendar/Office experience (but that will not make me stick to Windows either).
Looks somewhat interesting. Essentially, it just switches the primary index from "type of file" (e.g. binary, config file etc.) to "program this belongs to". That has some advantages, as humans are more likely to want to search by program. I do have some issues with the distro's execution of this idea, though - the path names are far too verbose, and I'd rather not have to use the shift key twice in every path. I'd take '/etc/foo.conf' over '/Programs/Foo/Config/Foo.conf' any day.
Who is searching for system package manager installed binaries in the filesystem by absolute path?
On Arch pacman -Ql <package> shows me the equivalent of a /Program/<package>/ directory.
Installed packages are organized for the benefit of the runtime environment since the user should never have to interact with the data hierarchy, much like how Android has a Data/<program name>/<random stuff> share dir with the packages cached elsewhere.
The only real problem I have with the current standard FHS is how terribly named everything is. There are mechanical reasons why having all your libraries in /lib or all your binaries in /bin is necessary to the environment and linker.
AFAIK, Gobo "just" redefines the directory hierarchy along with a small kernel patch that lets you see an equivalent standard hierarchy. It has none of the architecture for build reproducibility that NixOS and Guix boast.
Okay so correct me if I am wrong: the original root structure is still there just hidden with this Gobohide thing. (I do not like this - turn off immediately).
You download tarballs into Programs, unpack, and ./configure && make install there, and it 'just works'(TM) somehow (I like this, sort of). I keep thinking: couldn't you create a script that creates a load of symlinks (for any distro) and put it in a folder on your desktop?
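That symlink-script idea is essentially what Gobo's tooling automates. A toy version (program names invented; temp dirs stand in for /Programs and the merged bin directory):

```shell
programs=$(mktemp -d)  # stand-in for /Programs
links=$(mktemp -d)     # stand-in for the merged bin view

# Two "installed" programs, each self-contained in its own tree.
for app in foo bar; do
  mkdir -p "$programs/$app/1.0/bin"
  printf '#!/bin/sh\necho %s\n' "$app" > "$programs/$app/1.0/bin/$app"
  chmod +x "$programs/$app/1.0/bin/$app"
done

# Merge every program's bin/ into one directory of symlinks,
# so PATH only ever needs a single entry.
for exe in "$programs"/*/*/bin/*; do
  ln -s "$exe" "$links/$(basename "$exe")"
done

out=$("$links/foo")
echo "$out"    # -> foo
```

Uninstalling then means deleting one program directory and pruning the dangling links, which is roughly what Gobo's RemoveProgram does for you.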
Okay,.. help me here - what problem are we trying to solve again? I like the nice view of the applications we have, but just thinking quite often packages have their own symlinks for like lib.so.2-->lib.so.3 will this have symlinks to symlinks to... so now thinking is it really worth the effort for a nice view of the applications? What else do I get?
That thing about the bunch of symlinks actually reflects how GoboLinux was first created :). One of the devs didn't have root access on their university lab computer, so they came up with this alternate organization that let them install everything they needed inside a directory in their home folder. He grew fond of that folder organization and later on grouped up with some friends to find out what would happen if they made a custom distro where everything used the alternate organization.
As for what benefit this brings... One big one is that it makes it very easy to mix together programs installed via the package manager and programs you compiled from source by hand. There is no need for an alternate /usr/local hierarchy. Similarly, it is better at handling those situations where you might want to install multiple versions of some software. It can create different versions of that virtual root filesystem, each with symlinks pointing to a different version. The end result is similar to what you would get with chroot and other "container" tech, but in a quite elegant way.
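The multiple-versions point can be sketched the same way Gobo's Current links work: keep every version side by side and flip one symlink to choose between them (names invented; ln's -n flag here is the GNU spelling):

```shell
root=$(mktemp -d)
mkdir -p "$root/ExampleApp/1.0" "$root/ExampleApp/2.0"
echo "settings for 1.0" > "$root/ExampleApp/1.0/config"
echo "settings for 2.0" > "$root/ExampleApp/2.0/config"

# "Current" is just a symlink; everything else refers through it.
ln -s "$root/ExampleApp/1.0" "$root/ExampleApp/Current"
cat "$root/ExampleApp/Current/config"    # -> settings for 1.0

# Upgrading (or rolling back) is a single symlink swap;
# the old version stays on disk untouched.
ln -sfn "$root/ExampleApp/2.0" "$root/ExampleApp/Current"
v=$(cat "$root/ExampleApp/Current/config")
echo "$v"    # -> settings for 2.0
```

Since 1.0 is still fully intact, rollback is just pointing Current back at it.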
There's also the fact - if I understand it correctly - that the filesystem in Gobo -is- the "package database"; if it is in the filesystem, it's installed and can be looked up easily by the system. Or browsed manually. It's also friendly to manual installs; once you install things manually, it's now a part of the package database!
That's the one thing I hate about package managers - if you need to install from source, the manager has no clue about it; in fact, you can have multiple versions installed - a new version "from source" and an old version from a real package via the manager! It's also possible for the manager to uninstall stuff it knows about - along with part of the stuff of the new version, depending on various things.
I ran into that earlier when I had to upgrade and munge my Ubuntu system to get TensorFlow to work properly with Python 3 and some other stuff and crap I forget about; long story short, I can no longer perform an upgrade to my system (I'm on 14.04 too!) - the tool b0rks hard when I try, and I honestly don't recall enough to return my system back to working status (I also had to install a major upgrade to gcc - which necessitated upgrades to other things, new manual symlinks, hours of swearing).
This was all needed for the Udacity Self-Driving Car Engineer Nanodegree because of their system requirements for code and such (had I been running 16.04, it would have been easier - most likely - but that was also fraught with potential issues, which is why I didn't pursue it). Looking back on things, I probably shoulda used a container or VM, or built a new machine, but I was pressed for time. I was assured in the beginning, before the class started, that what I had would work - like many things in the course, this wasn't true (I'm not saying they lied or misrepresented stuff - my cohort was only the second cohort from November, and so I am a part of the "guinea pig" crowd as they debug the new course - paying beta tester, if you will).
A system like Gobo might have been useful for this...
On Mac OS X, putting an application folder/icon in the trash (even if some other stuff might happen in the process) has long been considered a superior approach to Add/Remove Programs or the find-the-package-name-and-uninstall-with-the-package-manager approach of most distros.
So maybe you are a die hard fan of old ways, but it clearly is a mess and not user friendly at all. Just look at why the LSB was created and more importantly, why nobody is following it 100%. If that does not seem like a problem to you I don't know what to tell you.
As for the root structure, they do mention in the article that it is optional. It can be turned off by disabling the kernel extension.
Many readers of HN won't need this explanation of the Unix/Linux/MacOS filesystem hierarchy, but for those that find it confusing here is a very brief summary. See Wikipedia for a more in depth discussion [1].
When Unix was invented, the concept of hierarchical filesystems wasn't new, notably Multics had a hierarchical filesystem, but there were conflicting visions. At the time, most other operating systems divided the filesystems between a number of top-level containers containing files, but no nested containers. IBM's contemporary time-sharing system, VM/370, supplied users with a set of top-level "virtual" drives each containing any number of files. There were no recursively nested directories.
The designers of Unix wanted a simple operating system suitable for software development, so they tended to implement an idea and then use it as much as practical in the OS design. Rather than have top-level containers like drives that contain folders that contain files, Unix just had directories and files (both implemented with the same on-device structures called inodes). This combination of simple ideas, fully generalized, can be seen throughout Unix: multiple pathnames can point (link) to the same inode, providing a form of aliasing for filenames; the API for files is generalized to include access to (possibly raw) devices, so devices show up in the filesystem hierarchy; etc.
[/]
The top level directory has the name /; unlike other systems, all filesystem pathnames start at this single point.
[/etc]
System configuration is found here. Most of these files are simple text files that may be edited by system administrators. In the past administrators might directly edit /etc/passwd to remove a user. Now things are more complex, but there is still a backwards compatible /etc/passwd file (with the password hashes themselves typically moved to /etc/shadow).
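Because /etc/passwd is plain colon-separated text, ordinary tools can read it directly; for example, listing each account's name, UID, and shell:

```shell
# /etc/passwd fields: name:password:UID:GID:GECOS:home:shell
awk -F: '{ printf "%-12s uid=%-6s shell=%s\n", $1, $3, $7 }' /etc/passwd | head -n 3

# Quick sanity check: root is always UID 0.
root_uid=$(awk -F: '$1 == "root" { print $3 }' /etc/passwd)
echo "$root_uid"    # -> 0
```

This readability is exactly why so much early Unix administration was done with a text editor.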
[/bin and /sbin]
Unix, from the beginning, was, like its inspiration Multics, a multi-user system. Reconfiguring the system, for example to add a printer or drive, normally required running in single user mode at a privileged level. For this reason the programs needed by administrators running single user were segregated and placed in /bin. The rest of the system could be left offline, useful when working on the rest of the system.
As Unix grew, more and more utilities were added to /bin until it made sense to segregate them into those needed when even a normal user might find themselves in single user mode vs. those for a system administrator doing something dangerous. The super-user type programs now go in /sbin while the essential, but far less dangerous, utilities go in /bin. Ordinary file copy is /bin/cp while the reboot command is /sbin/reboot. Some of the divisions look arbitrary to me, like /sbin/ping instead of /bin/ping, but there is probably logic behind it.
[/usr and /var]
Once booted up normally in multi-user mode, /usr contains system read-only content (for example, programs and libraries) and /var contains system content that is variable (like log files).
The /usr directory is large and is further divided into a number of second level directories.
[/usr/bin]
This is where the rest of the Unix "binaries" (i.e. programs) reside. So while ls (the list directory command) is /bin/ls, the C compiler resides in /usr/bin and is /usr/bin/gcc.
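You can ask the shell where any given command actually resolves on your system (on merged-/usr distros /bin/ls and /usr/bin/ls are the same file, so the answer varies by distro):

```shell
# `command -v` prints the path the shell would execute.
command -v ls
command -v sh

# In bash, `type -a` lists every match along PATH, which makes
# duplicates from the /bin vs /usr/bin split visible.
type -a sh 2>/dev/null || true

ls_path=$(command -v ls)
echo "$ls_path"
```

This is usually quicker than remembering which historical directory a given tool was assigned to.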
[/usr/sbin]
Like the division between /bin and /sbin more administrative commands not necessary for single user mode (see /sbin) are placed in /usr/sbin. For example, the command /usr/sbin/setquota used to set disk quotas on users is in /usr/sbin and not in /usr/bin.
[/usr/lib]
This is where Unix supplied software development libraries go.
[/usr/include]
This is where Unix supplied (predominantly C and C++) include files are placed.
[/usr/share]
System documentation, the man pages, and spelling dictionaries are examples of the files found in /usr/share. They are read-only files used by the system under normal (multi-user) operation.
By now /usr/share is full of further subdivisions since many programs and systems need a place to put their read-only information. For example there is a /usr/share/cups for the Common Unix Printing System to store information about printers, etc.
[/usr/local]
System administrators may add additional local data and programs to their system. It makes sense that these would be segregated from the rest of the files under /usr that come with Unix. The contents of /usr/local are local to the machine and are further subdivided into programs in /usr/local/bin and read-only data for these programs in /usr/local/share.
[/usr/X11...]
Once, one of the biggest subsystems in Unix was its support for the graphical user interface known as the X Window System. There were development libraries and include files, commands, man pages, and other executables. Rather than swamping the neat file hierarchy with these new files, they were placed under a single subdirectory of /usr. You will probably not need to look at this very often.
[/home]
Users' home directories are placed here; mine is /home/todd. Often /home is on a different physical partition so that it can be unmounted and a new version of the operating system can be installed without touching /home.
[/tmp]
Temporary files, for example that don't need to be backed up, are placed here.
[/mnt]
I've mentioned mounting and unmounting. Unix systems support a number of different filesystem formats, and devices can contain multiple filesystems (for example on distinct partitions). These filesystems become accessible once they are mounted into the hierarchy. Users need to temporarily (and now automatically) mount USB drives somewhere so that their files can be accessed via a pathname. In the past, users would just pick a directory to mount over. Now, it's traditional for temporary mounts to be placed in /mnt.
[/dev]
In Unix, low level access to devices is done through device drivers that support a number of file interfaces. Because they appear as special files they can be found under this directory. This is a convenience for programs because it makes naming and access to devices very similar to naming for ordinary files.
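The everything-is-a-file idea is easy to see with the pseudo-devices, which respond to ordinary reads and writes like any file:

```shell
# /dev/null discards writes and reads back as an empty file.
echo "discarded" > /dev/null
from_null=$(cat /dev/null)
echo "length: ${#from_null}"    # -> length: 0

# /dev/zero is an endless stream of zero bytes; take four of them
# and dump their values.
head -c 4 /dev/zero | od -An -tu1
```

The same file interface extends to real devices (disks, terminals, etc.), which is what lets generic tools like cat and dd operate on them.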
[Finding out more information about Unix and its hierarchy]
The Unix manual pages are a great resource. Originally, the manual was a physical book (I've still got a couple) divided into sections. Section 1 contained the commands one might type on the command line. Now it is much easier to simply use the man command to access the exact same content. Use a terminal and enter "man ls", for example, to get all the information on the ls command to list directory contents. (You will be surprised at the number of options.)
When you aren't sure of the command's name, try the command "apropos" followed by a word you'd like to find the man page for. Passwords are an important subject, so "apropos password" on my system lists 47 different manual pages with information on passwords. I see one listed as "passwd(1)". The (1) means it's in section one, so it's about a command named "passwd", the command for changing your password.
Here are the sections:
(1) Commands
(2) System calls
(3) Library functions
(4) Special files like device files
(5) File formats
(6) Recreations like games
(7) Misc info
(8) System admin and daemons
If I was looking for the format of the password file I would enter the section number 5 as the first argument to the man command: "man 5 passwd". This would show me the documentation for the password file, /etc/passwd.
Filesystem hierarchy manual page: "man 7 hier" or, since it is in only one section: "man hier". To find the name, "hier", of this man page one could have used the command "apropos filesystem".
Version control of installed packages would be awesome.
I am still on Arch coz I am lazy.
It would be perfect if it worked like Nixpkgs or Guix, but simpler.
Nix is much easier than it used to be, assuming you go all-in. Specifically, NixOS is; nix-without-NixOS has weird corners.
If you haven't tried it in the last year or two, then you might want to give it another try. Keep to some simple rules:
- Look for 'enable' toggles for particular software (for configuration.nix), rather than adding just the package. If it exists, use it; it wraps any extra configuration that's needed.
- Don't use nix-env. If you want a package installed temporarily, use nix-shell. Your system is probably single-user anyway, right? This way you can put configuration.nix in git.
- If you must use unpackaged software, then install and use steam-run. Despite the name, it's really just an Ubuntu-esque chroot. (...which duplicates the Steam runtime environment, yeah.)
Forcing people to build package recipes for everything they want to use is helpful, but not actually required. :)
I came here to ask how GoboLinux compares to NixOS, because NixOS sounds like the perfect distro to me, and your comment pushed me to actually try it, thanks.
Wow. Thanks a lot. You just covered all my questions about using nixos.
I was confused about toggles vs packages. And about nix-env for a single user.
Now I get it to the level of trying it. Thank you.
nix-env has its uses, but at least while you're learning it's better to put anything you need into systemPackages. Mainly because /etc/nixos should normally be a git repository.
There isn't a good story for doing declarative package management with user packages (nix-env isn't it), though there are a hundred hacks.
- It uses Scheme rather than Nix. Advantage Guix: Nix wasn't intended as a general-purpose language, and is only reluctantly letting itself be pushed therewards.
- It's philosophically pure. The package repository doesn't allow any kind of non-free software, whereas Nix allows it behind a global allow-unfree switch. Advantage: Nix?
- It's smaller. Perhaps as a result of the first two items, Guix has fewer developers and fewer packages.
Beyond that, they're basically the same design implemented two different ways. The Guix project is nice, and if NixOS wasn't around I'd use it. The communities have cordial relations, with a lot of overlap.
So they take the MacOS file structure to Linux? Sounds horrible to me. I always loved the structure of the Unix file system. It's simple and rather efficient. You wonder if a specific binary is installed? Look in /bin. You wonder where to configure something? Look in /etc. You want to write a program that gives you a list of all binaries installed (like a shell)? Use the PATH variable, which contains a few paths, and check their contents. In GoboLinux this variable is either huge or they use yet another mechanism. It clearly will not get better though.
The file structure should be efficient for the computer and can be abstract to the user, as they want an icon on their desktop and not much more.
I think "/usr/bin" is simple and efficient, and Fedora/CentOS have converged on that a couple of years ago (/bin is a symlink to /usr/bin on those distros). /sbin is the same (linked to /usr/sbin).
I find macOS paths atrocious (too long, too inconsistent, and derived completely outside of UNIX traditions and norms) and I certainly wouldn't want them on my Linux system.
I'm not the one suggesting a new-fangled scheme for the filesystem hierarchy. I think whoever is suggesting something hugely disruptive would need to be the ones to defend it. I'm saying, "Hey, we got this thing that's worked for a few decades. Changing it is fine, if there's a real problem with it, and solving it can't be done without disruption."
The person I responded to said the current Linux pattern of "/bin", and "/usr/bin", and "/usr/local/bin", etc. is inefficient (whatever that means). I said I disagreed (as much as one can without really understanding what is efficient about the new scheme or inefficient about the old) and pointed to a change that Fedora has made in recent years that simplifies it somewhat and recognizes that systems have changed (and the way we boot them has changed).
I'm not saying it is more efficient, I'm saying it's not less efficient (again, whatever that means in this context, you'd have to ask the person above me that I was responding to). But, I do find "one application per directory" very messy. My package manager knows where everything is so I don't have to.
This may not be awful. I'm not saying it is bad. I'm saying, "if we're going to change everything about the way we install, manage, update, and use software on Linux", there'd better be a damned good reason for it. It must be a clear improvement. This does not strike me as a clear improvement. It's clever for the sake of clever, without any significant value.
So, tell me why I'd want this. (And, "it's like macOS" sure as heck aint sellin' it.)
Have you ever had to work with a piece of software where, since you were not using Gentoo or something, your package manager had only a very old package and you had to install a new version from source?
I remember around 1999 some sysadmins would say: create your own packages from source for the stuff you need. Given the current filesystem, permissions, etc., creating your own stuff is not for the faint of heart. So what you did was search the web in hopes somebody had done it for you.
Fast forward to recent times: if you had to do devops where PHP was involved in some way, you would have felt remi was a guy that should be receiving free beer. Explain why we have arrived at that sort of situation.
Luckily nowadays, the world seems to have centered around Ubuntu, from Docker to whatnot, which makes everybody target Ubuntu at least. But to this day, I'm still a very fond user of Gentoo, even if the compile everything from source approach is a bit crazy, and its latest reincarnation, Funtoo, precisely because it gives me a machine that has everything I need when the time comes to deal with "the package not in my package manager" TM.
Sure, I remember 1999 (and I've built my own packages thousands of times since then, and still do a couple times a week, though mostly for my own software for distribution).
I understand the pain that older packages sometimes cause, but I don't think throwing away good package management in exchange for... whatever exactly it is you're suggesting this new scheme provides... is a good trade. Gentoo is not a realistic option for most situations where you need stability over some length of time.
I looked over GoboLinux, and I see they offer multiple versions of packages, and that's cool and all. There are repos that do the same for CentOS/RHEL; Software Collections Library is a very good one that is sponsored by Red Hat. And, there's stuff like Flatpak coming down the pike which allows packages to be bundled up in a container with everything it needs.
One could argue that GoboLinux foresaw these needs (it's been around since 2002, apparently) and came up with a partial solution to the problem. I can't argue with that. But, it was, IMHO, so partial that it wasn't worth the trade. I'm not sure containers-as-packages are worth the trade in some cases, either, but it seems inevitable. So, I'm embracing it in my own work. It's definitely got some advantages.
Yeah, uhm, no. /bin and /sbin contain tools fundamental for booting the OS. /usr can be mounted from an NFS resource, and as such can be missing at boot time. /usr/local is for things installed locally by the sysadmin. This split comes from the times when it was not uncommon for workstations to share installed programs using network filesystems.
This way certain directories could be mounted read-only (i.e. reside on a read-only partition/disk). Some directories were supposed to be governed by the core distribution, some were for user-installed software, etc. It makes sense given the original intentions & restrictions. It may look weird on a one-person workstation.
That's not the whole thing about the FHS. The split by kind rather than by application is also a "feature" (or a bug, depending on your opinion). I used to like the fact that documentation was in one place, binaries in another, configuration in yet another, and so on.
When you know that some default directories in Unix are the result of one of the authors running out of disk space, and were then religiously copied by everyone, complete with countless arguments for why it is the right way, how could you take it truly seriously?
Or the bad quality of keyboards in the past forcing the use of super-short command/directory names?
Typing commands was originally limited by the speed of the teleprinter (before screens became standard), not the quality of the keyboard. (The QWERTY layout was supposedly created to slow down typists who were so fast they jammed their typewriters, so I'd assume it already limited typing speed enough.)
This is also likely where the prompt character comes from. Today we can do without it because terminal emulators remember the input and execute it once the current command is done, but back then it actually prompted you to start typing.
Teleprinters are also where the word 'tty' comes from (there are tty devices, and numbered ttys in /dev that represent virtual terminals, and you can play with them a bit if you know how).
Hidden files starting with a dot were also a quirk of the ls reimplementation, done when the filesystem was made hierarchical, that was supposed to hide . and .. (the current and parent dirs). But the check it did was just for the first character being a dot, and thus the concept of 'hidden files' was born.
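That historical first-character check can be recreated in a few lines of shell (the function name here is made up for illustration):

```shell
# Re-creation of the accidental check: skip any name whose first
# character is '.', which hides more than just . and .. themselves.
show_non_hidden() {
    for name in "$@"; do
        case "$name" in
            .*) ;;                 # starts with a dot: treated as "hidden"
            *)  echo "$name" ;;
        esac
    done
}

show_non_hidden . .. .profile notes.txt
# prints only: notes.txt
```

Note that `.profile` gets swallowed along with `.` and `..`, which is exactly the accident that turned into a convention.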
Just because they were optimizing for length doesn't mean they threw mnemonic potential out the window.
And either way it's quite arbitrary, because almost all command names are still based on English, which is a second language to many (including yours truly). But I'm not complaining; I find the usual unix shortcuts rather nice and intuitive most of the time and, most of all, extremely quick to write. PowerShell makes me shudder, and I had such big hopes for it.
Many concepts are also unique to unix/computers/low-level usage so any name will be non-intuitive. If you never encountered the concept of mounting a filesystem and don't know what an inode is then 'list inodes too' and 'list mounted filesystems' are no better to you than 'ls -i' and 'df'.
And someone who knows will prefer the latter, because it's that much quicker to type and harder to mistype.
/bin is a link to /System/Index/bin. And as a matter of fact, so is /usr/bin. And /usr/sbin... all "binaries" directories map to the same place.
https://gobolinux.org/at_a_glance.html
Technically /bin and friends are still there, as hidden symlinks into /System/Index.
In theory one could have a variant of /Programs live under /usr or some such, and just aim the symlinks into the traditional FHS. But the Gobo people figured that if they were to build a distro from scratch (initial Gobolinux releases piggybacked on existing minimal distro installs) why not see how far one could go.
Unlike various other projects out there, Gobolinux plays nice with the FHS. There is no demand from them regarding how the rest of the Linux community should structure things.
The overview says that you can enable the classical /etc and /bin (and /usr/{s,}bin), which will contain links. These are hidden by default by a kernel module, but always present.
This is exactly what I have wanted for many years. This helps make Linux understandable and accessible to new computer users. Fantastic work, and thank you!
I find that to be quite a weak argument. Avoid conflicting with known standard structures such as "bin" and "etc" if necessary (which it isn't; you could theoretically run all applications in a filesystem namespace mimicking their expected structure), but that should be the only concern. Non-standard structures might clash anyway (capitalized folders in root do unfortunately occur).
Because computers were originally designed to work within the given limitations.
These days, when we have more computing power and storage than we know what to do with, we can allow computers to be inefficient for the sake of becoming easier to use.
For some people Unix is like a religion. What the GoboLinux developers did is wonderful, and yet some people will criticize them because "what about Unix traditions?!".
The FHS kinda grew organically, as best I can tell.
As the initial installs of Unix bumped into problems, local solutions were found that were then documented and presented at gatherings or in newsletters.
So place your programs in a directory structure that makes sense and use symlinks to maintain compatibility. Simple and elegant.
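The app-dir-plus-symlinks idea can be modeled in a scratch directory; the directory names below mimic Gobo's layout for illustration, but these are plain shell commands, not the real Gobo tools:

```shell
# Toy model: one versioned directory per program under Programs/,
# plus a flat bin/ of symlinks for compatibility (roughly what
# /System/Index/bin provides on Gobo).
root=$(mktemp -d)
mkdir -p "$root/Programs/Hello/1.0/bin"
printf '#!/bin/sh\necho hello from 1.0\n' > "$root/Programs/Hello/1.0/bin/hello"
chmod +x "$root/Programs/Hello/1.0/bin/hello"

# The "maintain compatibility" step: symlink into a single index dir.
mkdir -p "$root/System/Index/bin"
ln -s "$root/Programs/Hello/1.0/bin/hello" "$root/System/Index/bin/hello"

"$root/System/Index/bin/hello"
# prints: hello from 1.0
```

Uninstalling then reduces to removing one directory and its dangling links, which is the main selling point of the layout.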
But... how do I go about actually installing those programs? Web search and then git-clone / manual download? How are dependencies resolved?
Admittedly I haven't read the documentation, just the overview.
On a related note, I feel like Arch struck a nice balance with pacman and AUR.
There are two main sets of programs/scripts (most of the Gobo tools are actually shell scripts, with a few binaries or python scripts for higher performance): Scripts and Compile.
A package in Gobolinux is basically a tar-ball of the versioned dir in /Programs.
So if a precompiled version is available in the Gobo repo (sadly fairly limited, as there aren't really any resources for a compile farm available), InstallPackage will fetch the latest from there, unpack it, and run SymlinkProgram to have /System/Index updated.
Inside each package or recipe there is a Resources directory; inside there are a few files that describe the program or recipe, and a list of dependencies. These are parsed before compile or install, and additional updates/installs are suggested.
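The parse-before-install step can be sketched like this; the one-name-per-line, optional-version format below is an assumption for illustration and may not match the exact layout of Gobo's Dependencies file:

```shell
# Hypothetical Dependencies file: one "Name [Version]" per line.
deps_file=$(mktemp)
cat > "$deps_file" <<'EOF'
Glibc 2.27
Zlib
EOF

# Parse before compile/install: report what would be checked or
# suggested as an additional install.
while read -r name version; do
    [ -n "$name" ] && echo "check: $name ${version:-any}"
done < "$deps_file"
# prints:
# check: Glibc 2.27
# check: Zlib any
```

A real tool would then compare each entry against the versioned directories under /Programs instead of just printing them.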
Compile is the recipe equivalent of InstallPackage. It will parse a recipe, check dependencies, and download the source from the included url. The compilation and install will then take place in a union mount overlaying /System/Index, redirecting writes to the target dir in /Programs. After that completes successfully, SymlinkProgram will again be run to update /System/Index.
If you want to make a new recipe, there is MakeRecipe. You give it a name, a version number, and a url (though it can attempt to guess the first two from the tar-ball name if left blank), and it downloads the tar-ball, unpacks it, and sets up a basic recipe if the content matches one of the build tools it has built-in support for. Mostly a recipe is a set of steps to make things compile, like switches to pass to configure.
If you simply want a newer version of an existing recipe, there is NewVersion. All you may need to feed it is the new version number, though it may need a url as well if the one from the previous recipe is no longer valid.
It is a source-based distro, so yes, basically you download source tarballs from the web and compile them locally. Gobolinux comes with scripts to help you compile things using the custom hierarchy, and for common patterns (like autotools stuff with configure/make/make install) it is almost as easy as it would be in a regular distro. You can then write scripts to automate this (including specifying all dependencies and where you got the tarball from) and share them with other users on the Gobolinux Recipes website.
I've occasionally played with Gobo for years now. I am delighted to see it coming back to life of late.
One thing I've always thought would be ideal would be if either or both of the two Linux desktops that natively implement the idea of application directories were to support Gobo app-dirs.
It's currently pretty annoying to set up gobolinux in "home-dir" mode. I tried, and the documentation failed to link to the correct scripts, and after searching the scripts out, I got a "permission denied" error when I tried to download them, and a request for a user/password combo that wasn't documented anywhere.
What I wanted to do was create an "append only" "everything is installed" operating system on top of ipfs. Gobo, with its ability to have multiple versions of each component in the same tree, seemed perfect for that. The idea being that you just run an executable right out of the ipfs mount.
Best I understand, the "home-dir" mode, or Rootless, was deprecated because it needed a major rework to be compatible with their switch to the /usr-like /System/Index.
Gobo is a cool distro - one of my favorites. I used it for a while in college with good success. Back then I recall the process for packaging being a little rough.
I should give it another go to see how it has evolved.
The change from /System/Links to /System/Index was done in part because of the issues with packaging.
While Links used a Gobo-unique set of directory names to house the symlinks, Index basically acts like /usr/local, meaning that these days a compile is done pretty much as if one had run ./configure --prefix=/usr/local. But the install stage is captured by a union mount and the files are transferred to their designated location in /Programs.
Sadly this change makes it that much harder to run Gobolinux in Rootless form out of a home dir or similar.
Kind of interesting, but honestly I'd like to see a much more drastic departure from UNIX roots. If you want to clean things up, go all in. I wouldn't hire a cleaning service to clean 1/5th of my house (especially if they left some clutter in that 1/5th because they thought I might miss it).
A small change like this is cool but pacman/aur leaves almost nothing to be desired for dealing with my programs. I do like the idea, but I would need to see a compelling philosophy for the __whole__ system.
Have to say -- it's looking real good otherwise. Cool scifi fonts and colors, awesome by default. Apparently it also skips systemd and pulseaudio -- a big plus, a big plus indeed.
A little something to note: with the latest release there is a tool, Alien, that integrates language-specific package managers with the Gobolinux directory tree.
Meaning that downloads from cpan or similar will be placed in a sub-directory of /Programs and managed just like any other Gobolinux package.
Right now perl, python, ruby, and lua are supported (afaik).
I love the gobo fs, but I don't like the rest of the distro that much. I've often thought a bigger distro like Manjaro could use the fs and show the world how good it is.
I've been sysadmining for a long time, and even I get lost in the old dir structure sometimes; I've just gotten good at the find command.