Just in case anyone is wondering why we had (or used to have) /bin and /sbin directories as well as /usr/bin and /usr/sbin, here is my understanding of it.
It is because on old Unix systems there frequently wasn't enough space to store /usr on the root disk. Therefore /usr was a separate disk which might not be available during boot. This meant that you needed to put all the binaries you needed for boot in /bin and /sbin because /usr/bin and /usr/sbin might not be mounted yet.
This no longer makes sense in the days of large disks which fit /usr just fine, and of initrds which can mount all the disks before kicking off the things which might need those binaries. In fact I think the initrd has taken the place of /bin and /sbin on a modern system, containing copies of all the binaries needed to mount the disks (like mount and fsck).
> It is because on old Unix systems there frequently wasn't enough space to store /usr on the root disk.
It was really a single system in 1971 that kicked off this trend. Originally /usr was for user files, like /home is today (so you'd have /usr/dmr, /usr/ken, etc.). / and /usr were two physical disks, and at some point the / disk (a 1 or 2MB disk IIRC) was full, so they put some things from /bin in /usr/bin as a hack. And it's been "stuck" ever since. This is why /usr has such a weird name: it was intended for user files (/system, /software, or /additional would make more sense, abbreviated to /sys, /sw, or /add of course). Initially Unix was a quickly developing system for internal use, but it was actually used, and there was often some tension between "do the right thing" vs. "don't break too much". Some of the weird things in C are due to that as well (&'s precedence being a good example).
I guess some other systems that ran Unix ran into similar problems, but by the time Unix started to gain some adoption in the mid-to-late 70s disks were larger too, so I don't know if it was ever really needed beyond that initial research Unix system used only internally at Bell Labs. I believe they updated the disks on the original Unix system not too long after this, making the hack superfluous, but they kept it for compatibility (and because it might be useful again in the future).
I am so happy to see that this correct explanation has finally spread.
When I first started posting on HN, the standard response you would get was a retcon explanation like the parent comment's, or even more nonsensical ones like "usr == UNIX system resources".
At the time, cursory searching could not find any actual explanations for this. I started a deep dive into Usenet history, reached out to folks who knew the history, looked for scraps of info in old Unix books, etc., and found the correct explanation, as you posted. It was entirely an ad hoc solution to running out of disk space. /usr got moved to the second disk. When root got full again they moved some large binaries to /usr/bin. When the /usr disk got full, the user home directories were the easiest thing to move, so they moved to /home on a third disk. Every other explanation is purely post-hoc rationalization.
Since then I've been pointing it out when the topic comes up and I am happy that general knowledge of the actual history has started to spread.
Note: I am not taking credit for this, many other people have been correcting the story both before and after me. I'm just glad the true story is seeping into the collective tech community. Sometimes misinformation or incorrect "facts" seem to linger no matter how many times and how many people correct them... but this makes me hopeful that misinformation can be corrected.
Is that also where /usr/share got its name for application data? Data that's shared among the rest of the users, as opposed to in their respective homedirs?
share is for architecture-independent data (i.e. data which is “shared” by all architectures). Executable binaries are not shared, but things like man pages are.
I believe it dates back to the heyday of diskless workstations. It was commonplace to have dozens of workstation machines getting their OS files over NFS from a central server.
That's why the (per-machine, read-write) /var became standardized. Earlier /usr contained a mix of writable things (like /usr/log). Now it was important to make /usr something that could be NFS-mounted read-only by a bunch of NFS clients.
There were also UNIXes that ran on multiple architectures, like SunOS on 68k and SPARC. In theory, you could have them both use the same read-only /usr/share mount and save some resources on the server.
I say "in theory" because I don't know how many sites ever implemented this in practice. There were definitely places that supported mixed-architecture diskless fleets, but I'm unsure how many of them were committed to keeping the OS version pinned between them... or how many then went through the extra work to make /usr/share a separate mountpoint. I'm sure some people did it, but it's not something I remember seeing personally.
Still, I feel it's somewhat useful to help humans understand what parts of a package are CPU-specific and which aren't. Probably good that it's lived on.
Around 1990 I used Unix clusters. Not only 68000 Sun mixed with SPARC, but also the other Unix workstation vendors. There was a lot of NFS and also completely diskless machines. /usr/share always meant it's architecture-independent and can be used by machines of any architecture.
It should also be noted that, while this change is Linux-specific, it does not directly break software which also targets BSD or other *nix-like OSs.
Files that were previously in /usr/bin or in /bin can now be found in EITHER of these locations, since one symlinks the other. So no previous expectation was really broken.
Only software built on merged systems fails to run on unmerged systems. This should not really happen, since the usr merge was a one-way trip, not a flag that you're supposed to turn on and off. You'd also never build dynamically linked binaries for BSD on Linux, so that should not be an issue.
But, for some reason, Debian chose to make this merge something that individual systems could turn on and off, which is a terrible idea for part of the base system. It's like letting users pick whether they want `/bin` or `/binaries`. Having such heterogeneous setups in regards to something so basic and foundational is asking for breakage.
> Files that were previously in /usr/bin or in /bin can now be found in EITHER of these locations, since one symlinks the other. So no previous expectation was really broken.
I don't know, I just hit breakage the other day. I have /usr/bin before /bin in my PATH (which is the default on Ubuntu at least); I have muscle memory to use dpkg -S `which $foo` to figure out which package a binary belongs to, but that doesn't work if dpkg thinks the binary is in /bin (e.g. ping), since it'll ask dpkg who installed /usr/bin/ping, which is nobody.
It is small fiddly things like this, all over people's packaging and personal scripts, that break.
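For the record, the failure mode looks something like this (a hypothetical transcript on a merged-/usr Ubuntu box; output is illustrative):

    $ which ping
    /usr/bin/ping
    $ dpkg -S /usr/bin/ping     # dpkg's database still records the pre-merge path
    dpkg-query: no path found matching pattern /usr/bin/ping
    $ dpkg -S /bin/ping
    iputils-ping: /bin/ping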
Who installed '/foo/bar/baz' when '/foo' is a symlink to '/usr/bin'?
I'm 100% in favor of the dpkg maintainer's perspective of "do ugly symlink farms" and then "reap what you sow" (i.e. if you don't like there being a symlink there, then fix the offending package).
Debian didn't "choose" anything. It's just that a bunch of people who wanted merged /usr to become reality created a tool to convert systems until a project-wide decision was made. The decision has ultimately been made, but the dpkg maintainer blocks it from being implemented.
Adding to this historical reasoning, it is to be noted that in traditional Unix, /usr was pretty much the equivalent of /home in Linux systems: the parent directory for user home directories, i.e. 'usr'.
The existence of /usr/bin can probably be explained by the reasoning that, just like nowadays Linux systems often have a really big /home partition, early Unix systems eventually ran out of space on whatever volume held /bin, and with 'reinstalling' not remotely as easy an option as it would be these days, storing an extra 'bin' etc. directory on the 'larger partition for user directories' became commonplace.
That's how I learned it around 1995: the division between /[s]bin and /usr/[s]bin exists because the former are required for system boot, while the latter are mounted via NFS from a central server. Why waste local disk space when these would be identical for every system anyway (in a homogeneous environment, at least)?
While that made a lot of sense to me back then, I've become aware that it might not be absolutely accurate, though.
I just don't understand, if people want to merge /usr and /, why they insist on keeping the gratuitous /usr prefix. It can all just be rolled up into /: /bin, /sbin, /share, /lib, /include, /src, and so on.
Not technically, but I've built a number of "appliance" linux systems for clients, and to improve reliability I just make the entire disk read-only with the exception of /home and /var.
The few locations outside of /var that sometimes need to be writable (in particular, /media, /mnt, and sometimes /root) can simply be symlinked into /var.
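A minimal sketch of that kind of layout (device names, partitions, and mount options are purely illustrative):

    # /etc/fstab -- hypothetical appliance layout: read-only root, writable /var and /home
    /dev/sda1  /      ext4  ro,noatime  0  1
    /dev/sda2  /var   ext4  rw,noatime  0  2
    /dev/sda3  /home  ext4  rw,noatime  0  2

    # at image-build time, redirect the stray writable paths into /var
    ln -sfn /var/media    /media
    ln -sfn /var/mnt      /mnt
    ln -sfn /var/roothome /root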
I think the path of least resistance is symlinking /usr/bin/ to /bin, but now that I think about it I think I misunderstood GP's suggestion.
GP's idea, as I am now thinking about it, is that you could basically move everything out of /usr into /, effectively getting rid of /usr.
Symlinking /usr to / seems like a dubious idea (since we'd get weird things like /usr/etc/passwd) but turning all of its top-level directories into symlinks seems like a possibly OK idea.
Looking at my Ubuntu installation, /{bin,sbin,lib,lib32,libx32,lib64} are all links to /usr/{...}, which seems backwards to me. I think they should have hoisted everything into / and made /usr/* symlinks for backward compatibility.
macOS still has a traditional BSD style /bin (37 utilities) and /usr/bin (1000+).
(But why is /bin/sleep a 150K executable? Maybe it's a fat binary in more ways than one.)
The obvious followup question that jumped to mind is "what's the difference between /bin and /sbin?".
Both contained files required to boot in single-user mode; however, /sbin contains executables not usually executed by normal users. Is sbin short for "system binaries", maybe?
Hrmm, what distro? I haven't looked in some time, maybe something changed (Slackware here). My understanding was that static linkage in boot-related folders was to minimize the chance of accidentally losing a library/dependency and breaking booting.
Separate /bin and /sbin were so that Unix systems could start with a small, minimal, clean root disk that could be read only or effectively read only. The /usr partition was sometimes mounted separately, but this was more for proper organizational/systems design division than it was for disk space reasons.
This is still a useful division in many Unix systems today (e.g. IoT) where starting from a known-good minimal image and then layering filesystems on top can help reduce complexity and prune the debugging tree.
It's not surprising that the ibm/redhat/systemd/freedesktop crowd doesn't care about this stuff, but it's unfortunate.
Peace to Rob, but disagree with his recollection and the motivation. This was a very active topic in the early commercial Unix days as POSIX etc. were being formulated. It was very common to keep a tar backup of / and to separate out your /usr mount not because you didn't have space on /, but because you wanted to be able to lose/upgrade/etc. / without impacting /usr, and perhaps to move/copy /usr to another machine. The separation of concerns, not the disk space, was a driver for why we did it that way.
Well you are wrong. All of that was a retcon after the fact. Ask the folks from Bell Labs because that's where the story comes from:
/usr meant Users. Duh.
They had one disk because disks were expensive. The disk got full, they got a second one, so they moved /usr to the second disk.
Then the root disk got even more full and they moved some binaries to /usr/bin purely for disk space reasons.
Then /usr got full but by this time it was too baked in to change so they invented /home and moved user home directories there on yet a third disk.
Union mounts and other such solutions hadn't been invented yet. It was entirely an ad hoc solution to an immediate problem. This is simply a historical fact. I don't know where the urge to retcon a bunch of justifications for it comes from.
> Then /usr got full but by this time it was too baked in to change so they invented /home and moved user home directories there on yet a third disk.
This does not align with my memory at all. /home came much later. Both AIX and SunOS/Solaris had it early, but not anywhere near as early as the / and /usr split.
I started using Unix (as a sysadmin and systems programmer) in about 1986, and my recollection is closer to Rob's.
1. Never used tar(1) for backups till much later, always dump(1).
2. Never wanted to separate backups or upgrades of / and /usr, but always wanted and needed to do that for /usr/local and wherever we put home dirs in those days (which I don't recall right now).
3. The / and /usr thing was definitely explained to me in terms of disk space, more by comparison with the VMS system I was also managing which had 1MB drives. But I was too late to have "been there".
It's entirely possible that there were multiple rationales/retcons at various stages, maybe in serial or parallel; we could all be right. My 'been there' is also from ~1986 (AT&T 3b2, 300 baud acoustically coupled to an AAA-30). That said, the idea of / immutability and /usr variability was highly valued for its systems-encapsulation properties for many decades, regardless of how we got there. As others in this thread have indicated, it's still considered valuable for that reason in space-constrained and security-critical domains (e.g. IoT).
Democracy is pretty decent, but comes with some big flaws: tyranny of the minority, enormous amounts of back-and-forth to get anything done, big egos at any level can stop progress.
Debian is the very example of it.
Case in point: other distros forced the usr migration and very few problems were had. Debian put the idea through a committee; of course a minority wanted to keep the old behaviour, so Debian decided to support both. Guess what: supporting both means having two problems now.
> Democracy is pretty decent, but comes with some big flaws
I'd argue you are overstretching one example of one broken system to a whole class of systems. We can reframe these big egos as people who do not believe in democracy, but are great believers in feudalism. (Though I do not know all the details of what is going on in Debian; maybe I misunderstand the affair, and I don't want to mark some specific people as anti-democrats, but I don't know how to avoid it, sorry.)
We see how individual developers say "get off my lawn"; the social dynamics of collective decision-making don't mean a thing to them. Any democracy needs a legitimate way to reach consensus, and everyone needs to conform to that consensus. The legitimacy of the procedures must be enough for everyone to believe in the consensus, or at least to believe in their obligation to conform. And the more power someone has, the greater their obligation to conform.
I mean, if I'm a regular voter without any special powers to resist consensus, then I cannot break the consensus; I cannot stop the system from working without resorting to really destructive and antisocial behavior. But if I were a president, I would have power; I could resist. But if I did, then it would be not a democracy but an autocracy. If I were something in between a voter and a president, then the situation would be something in between, though the president could probably interfere with my plans and use their powers to stop me from ruining the system.
In Debian it seems every developer maintaining something important enough has the power to resist any consensus reached. And moreover, there are some who actually use their power to resist. And the system as a whole doesn't treat such behavior as undermining the system, and maybe undermining the very idea of democracy. I'd say that Debian is playing at democracy but didn't invest enough into building a mythology of democracy, into making people believe in a divine right of democratic procedures to rule them all.
Though on the other hand, it may be not a bug but a feature of the system, because it rewards people by handing them some power; I believe it helps them not to burn out too fast. It charges the community with struggles and constant fighting; it encourages people-centered procedures, not rule-centered ones.
How much of Python's success is down to the BDFL(-delegate) approach of governance? It seems a lot of projects have adopted it and Python's PEP system for managing new features. I wonder if Debian could use a similar sort of elected presidential system.
I am of the opinion that BDFL style governance is best in software. In the real world the problem is a bit more hairy, but if you have an issue with a tyrant in open source, you can just fork the project.
A BDFL solves the bureaucracy problem (they mandate, everybody implements) and the big ego problem (the biggest ego is at the top by definition). Also BDFL can have vision, something a committee will never have.
In my humble opinion, people like Torvalds and Jobs are the secret to wildly successful software.
The trick with BDFL is, of course, lucking into a suitably B BD, and hanging onto them for as much L as possible.
And benevolence is not an attribute that can be passed to one's successor.
Torvalds and Jobs have done well, but are examples of survivorship bias. What about all the software projects helmed by autocratic douchebag dictators which never went anywhere because they were unable to attract enough contributors to feed the dictator's ego to the point where the community became self-sustaining?
Debian's been going longer than a lot of other software projects, and kept going after the initial founder(s) left the project. The process is messy, sure, but it sure seems sustainable so far.
Precisely. Everyone adores a caring and benevolent dictator. The problem is that the sort of personality traits that inspire someone to pursue a position of power make it overwhelmingly likely that your dictator will be malevolent rather than benevolent. There's a reason that examples of effective BDFLs arise from cases where the dictator was in place before the project became popular.
The other problem is the continuity of leadership. Despite its flaws, one of the strengths of democracy is that the code paths for the transition of power are explicit and regularly exercised. The only reason that anybody has any faith in any potential replacement for Torvalds is that, presumably, Torvalds will hand-pick his inevitable successor. But the successor's inevitable successor will have none of the same legitimacy, and by that time (be it decades from now) I expect the project to transition away from a BDFL model out of necessity.
I find the role-based FreeBSD core team approach better adapted to projects such as Debian. They also started with a BDFL and then moved on to the core team approach. That's several BDFLs, each of them with different responsibilities. It's 9 people in the case of FreeBSD. They've tried it with 20 people and then scaled back to 9:
"Deadwood and apathy in the 20-member core team lead to creating bylaws that set up a 9-member elected core team
First elected core team in 2000 with few carryovers from old core team" [1]
It should also be an odd number if they vote, so that there won't be a tie. The same approach is used in kendo and other Japanese martial arts examinations: the number of examiners is always odd.
This also solves the "what if the BDFL is hit by a bus" problem: another member is appointed.
I can't deny Python's success, but I wouldn't hold it up as a paragon of change management, either. 13+ years after the release of Python 3 and 2+ years after 2.7's EOL, I'm still dealing with Python dependencies that don't work on Python 3 because maintainers preferred to pretend that Python 3 wasn't happening and that Python 2.7 would be around forever.
It's confusing as heck trying to figure out what I should even expect to work, because some authors treat it as obvious that their code will work on both Python 2 and Python 3, while other authors treat it as obvious that they still only support Python 2.
I've had a lot less trouble with merged /usr. I guess it's not a fair comparison, but it does suggest to me that there are more important factors at play.
Guido van Rossum openly admits the move from 2 to 3 was a bit of a disaster. I think some of the decisions made since then have been more conservative in a bid to avoid such damage again. And talk of a Python 4 is for the most part academic.
It sounds like the package authors sticking with Python 2 that you're dealing with are just stubborn beyond belief. The rest of the world has moved on whether they agreed with the changes in Python 3 or not. Hopefully if the packages are useful enough to others and the licence allows it, people will fork them and make them work with 3.
>It's confusing as heck trying to figure out what I should even expect to work, because some authors treat it as obvious that their code will work on both Python 2 and Python 3, while other authors treat it as obvious that they still only support Python 2.
I believe the grand majority of packages have already dropped 2.7 from their latest releases, given that it's been officially dead for over two years.
The Debian Project Leader election voting is going on right now, but the powers they have under the constitution mean the position is mostly a figurehead/administrator. Governance is more distributed in Debian; there is the technical committee to resolve technical disputes, the release team, the archive admin team and other teams.
I mean, C++ is designed by committee, and it's hard to deny that it's been pretty successful too, regardless of what people's opinions on the language are. Both styles have flaws, but both can be adequate if you put in the right incentives and the right people.
I'm not sure the success of C++ has as much to do with the C++ committee as the fact that there were few alternatives competing in the same space until the past decade.
I didn't say it's due to the committee, I just said it succeeded and had a committee. Python succeeded and had a BDFL, but I wouldn't say it's due to that either - lots of projects pull off the latter but not the former.
This is a trendy thing too: very often minorities who felt ignored are now pushing to get more, because the tyranny of the majority had flaws (though to me it's mostly unavoidable due to the natural economies of scale it grants).
I don't know how one can design a social system where you balance both in the right way.
> the tyranny of the majority had flaws (though to me it's mostly unavoidable due to the natural economies of scale it grants).
In a different context, this is a source of frustration for me as a (partially) blind person advocating accessibility for blind people. The world is designed around the assumption, correct for most people, that people have the high-bandwidth, low-latency sense of sight. And that does lead to the most effective interface for most people. That assumption is so thoroughly baked into everything that I sometimes think it might have been better if, through some kind of eugenics, I and people like me (blind from birth) had never been born at all, so the world could go on with that assumption (as it mostly does anyway) without leaving us out. I know eugenics is a taboo idea though, and it has its own problems.
First of all I think Eugenics would/will quickly lead to something akin to runaway selection: people will make increasingly absurd decisions mainly driven by status markers / perceptual drift and the human race will breed itself into some mad corner. So I don't believe Eugenics is ever going to do what even its most ardent supporters imagine.
Second of all the human race does not have some over-arching goals we need to meet like a business. It is or should be like a club run for the benefit of the members: let's create good conditions for people. We can tolerate a bit of complexity and diversity.
> Second of all the human race does not have some over-arching goals we need to meet like a business. It is or should be like a club run for the benefit of the members: let's create good conditions for people. We can tolerate a bit of complexity and diversity.
Individuals and groups (including businesses) do have goals though, and it seems natural that some of them feel that they're being thwarted in their pursuit of those goals by expectations that they accommodate every little minority (including the one to which I belong). The backlash against legislation requiring accessibility, for example, is real, as seen in this thread: https://news.ycombinator.com/item?id=30726471
"Poor businesses forced to build ramps and make their websites screenreader-accessible" seems a distinctly weird take to me.
If we're no longer taking even the slightest bit of care towards looking after our fellow Man, then it's not a world worth living in, to me.
EDIT: It's also not just a "tiny minority"; about 13% of the world's population have serious vision impairment. Making streets, businesses, services, and products accessible is the very least we can do. It's scary that someone who suffers from this himself would write it off as inefficiency and waste.
It's somehow similar to the way so many of us expect actually evil people to somehow come with horns or other evil-indicating visual accessories. The reality is that evil people wear suits, and jeans, and shorts and hats and look just like the non-evil people.
So it is with "reasonable". The fact that someone can phrase their objections to accessibility in a way that doesn't immediately make them sound like a prejudiced, ignorant lunatic doesn't actually mean that they are not one (it doesn't mean that they are, either, but you should remain suspicious).
For myself, I had a revelation about such matters when my daughter had major hip surgery (twice). While normally a fully mobile and athletic person, she had to spend several weeks (twice) in a wheelchair. Suddenly it became clear that the accommodations we have made in this direction are not just for people born with disabilities that prevent them from walking: any one of us could find ourselves, either temporarily or permanently, benefitting from ramps and door openers and curb cuts etc.
I am absolutely certain that the same is also true of accommodations made in the direction of visual impairment, hearing impairment, and just about any other condition that deviates from some (often hypothetical) state of "full functionality".
Please, protect yourself from the backlash from these "seemingly reasonable" people. They are ignorant, selfish and of limited scope in their thinking. You deserve better.
I saw a bit of rhetoric a while back about how it's not "disabled people and non-disabled people", it's really "disabled people and not-currently-disabled people."
Between spending several months on crutches ten-ish years ago and helping care for several elderly relatives, I'm a believer.
Then we should start with the assumption that they're not, right? I'm guessing that starting out by assuming the worst in others is one thing that contributes to the current polarization in US politics.
> Please, protect yourself from the backlash from these "seemingly reasonable" people.
What specifically do you advise that I do here? I don't want to just ignore challenges to the idea that accessibility should be a requirement. If I pay attention to these people and put in the effort to understand why they think as they do, then I can become a more effective advocate, or possibly even revise my position. Yes, the latter may lead me to question whether I should even exist, but it seems to me that a healthy mind should be able to dispassionately contemplate hypotheticals that even threaten oneself.
I think that empathy is something we should all try to cultivate more of, and I try to do this myself (much to the derisive contempt of some people who do indeed think you should just assume the worst based on the smallest possible evidence).
However, while we definitely need more empathy, the world is also full of stupidity, and there's a point in everything where you need to be able to stop putting your energy into fighting the stupid. There's a point where you have to say "no, wait, these people are not actually arguing in good faith; I've explained to them over and over again why they are wrong, why the evidence of the last N years shows them to be wrong, and they have no answer to this, and just keep repeating the same falsehoods. I'm done".
Now, if you're not at that point with the people you engage with about this, great, keep aiming for empathy and understanding. But if you are, move on.
There's a company I collaborated with once who had an epiphany when they realized they had been putting way too much energy into trying to prevent people ripping off their software. They changed direction and focused on ignoring the people who did that, and instead tried to provide the best possible customer service and support they could to people who had paid them. Things got better for everyone. I think there's a general lesson here about where best to direct one's energies.
People are not reasonable. They simply have their reasonable moments. If someone is angry because someone else asked for something then I think it's pretty obvious they're not in reasonable-mode.
You need to realize we are currently implementing eugenics; we just don't know what our goals are.
There are two types of eugenics, positive and negative. A positive eugenics program selects *for* specific qualities, like hair color; negative eugenics selects *against* deleterious phenotypes.
You still need to exercise caution. For example, selecting against depression may be bad in the long term, because the real thing we need to do is reform society to reduce depression. But there are some things, like being born with a disability, that have an obvious impact even outside the context of a functioning society, such that it seems reasonable to select against them.
W.r.t. the parent comment to yours, I'm conflicted; I've spoken to people and seen how many issues they have getting accommodations. If people were forced to care, making interfaces for blind people would be pretty easy. But many people refuse to care.
The idea of genetic drift due to lack of selective pressure has been studied (recently, and in a way not connected to racism) and the results were pretty grim. Unfortunately, due to the connotations eugenics typically carries, it has remained poorly investigated.
> the human race will breed itself into some mad corner
Bit of a tangent, but I think sexual reproduction helps to avoid this. Picture a set of points (people) in Euclidean space. Pick any two, "a" and "b", at random. Add a new point "c" at the midpoint. Unless the overall shape of this swarm is highly nonconvex, this point "c" is going to be more in the "interior". If we formalized things a little more, we could prove that this operation is a contraction. The Fixed Point Theorem would apply, etc.
So, once your set has reached some convex shape, then there's a balance between these two forces: The contracting force of sexual reproduction, and the expanding force of random mutation.
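A rough way to formalize the contraction claim (assuming, purely for illustration, that the two parents a and b are drawn independently from the population distribution):

    \operatorname{Cov}(c)
      = \operatorname{Cov}\!\left(\tfrac{a+b}{2}\right)
      = \tfrac{1}{4}\bigl(\operatorname{Cov}(a) + \operatorname{Cov}(b)\bigr)
      = \tfrac{1}{2}\operatorname{Cov}(a)

So each midpoint step halves the population covariance, and the spread you actually observe would be the equilibrium between that halving and whatever variance mutation injects per generation.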
There's also selection of course. I tacitly assume there isn't much more of that happening right now? I could be wrong though; there may be some (strong?) selection against education...
Interestingly, it's these educated who I assume would "benefit" from a eugenics regime. But, one bad meme, and the whole thing goes bad...
You could just have the government mandate a test, and any embryo that has markers (genetic or otherwise) of a chronic debilitating condition is discouraged from being brought to term. If parents do choose to proceed, the kid will fall outside of the healthcare system for its condition and the parents will have to foot the entire ongoing bill.
Eugenics does not automatically mean we all customize our embryos to be 2m tall geniuses with patterned body hair.
More of an aside, but I’ve always wondered where humans would end up if we did do unbridled genetic engineering. Is 2m the ideal height? 1.80m? 2.40m? What would we discover is the optimal IQ? Etc.
> Is 2m the ideal height? 1.80m? 2.40m? What would we discover is the optimal IQ? Etc.
I think Brave New World treats this question pretty well. We'd probably discover that there is no 'optimal' height or IQ, and we'd still need the whole span from "Big Dumb Brutes" to hyper-geniuses. The secret is to convince everybody that the IQ/physique they've been assigned is the 'actual' optimum and that they're the lucky ones everybody else should envy.
One thing to keep in mind is that eyesight is a continuum, from better than 20/20 to completely blind like you.
That means that without going the only-the-perfect-survive route, there are always going to be issues with how the world operates. For any given human problem, it's probably rare that it's optimized for even half the population.
Doesn't help you, being on the extreme of the eyesight problem, but the fact that we need to accommodate blind people also helps a bunch for people with vision issues and (commonly imperfect) correction.
So while you can consider yourself to be in the small percentage of completely blind people, you can also consider yourself to be in the majority of people with vision issues. Should we have avoided all of them in this world?
And therein lies the issue with eugenics or any sort of artificial selection of our descendants: where is that line?
And just judging by this single comment, you are at least capable of eloquent, reasonable discussion, which is probably untrue for at least half the population.
Correct. Legally blind, meaning, for example, that I can't drive, but with enough sight to read text up close if it's large enough. Edit to add: I do increasingly depend on a screen reader, though not yet for my actual programming work.
ETA 2: Sorry for the confusion in the original comment. I've become wary of terms like "visually impaired", because they reek of over-sensitivity and political correctness. But there's also something to be said for accurately and clearly communicating the actual condition.
Not only is that impractical (how can you reliably filter blindness before birth? I suppose you could test babies' eyesight at birth and kill them if they're blind?), but what about the people who go blind during their lives? They are the vast majority of cases of blindness!
It seems very odd to propose that the solution to a problem is to liquidate all people who suffer from that problem.
My first experience with someone in your position was Geordi LaForge. I found it so inspiring how he overcame - through technology - his blindness, and how his innovation became central to fulfilling his craft's mission and often securing the safety of its crew. I often wondered what would have become of him had he been born in an earlier age, without the technology to provide him with an alternative high-bandwidth, low-latency sense.
I am blessed that I have all my senses and so do my children, but I absolutely believe that the 9999 other traits that you have to share should not be lost just because of a single non-optimal trait.
> My first experience with someone in your position was Geordi LaForge. I found it so inspiring [...]
To be clear, that's a fictional character in a fictional future that may or may not ever come. Sure, we can use assistive technology, e.g. screen readers, to enable us to work. But with our current technology, that requires cooperation from developers of platforms, applications, websites, etc., and as I said in another comment, advocating for that sometimes seems futile. We do sometimes make progress though.
> I absolutely believe that the 9999 other traits that you have to share should not be lost just because of a single non-optimal trait.
Yes, Roddenberry used his fictional future to show us what our future could be. That was the point that I was trying to make without being too explicit - that could in fact be our future. If we choose it.
Maybe I'm succumbing to the cynical zeitgeist, but I figure the future will be primarily whatever the wealthy minority wants. If that's so, then maybe the best hope for people like me is a future like the one portrayed in the opening of Ready Player Two (generally not a very good book), where a VR-obsessed multi-billionaire funds work on neural interfaces, starting with implants for disabled people and culminating in the OASIS Neural Interface headset. Yes, it would feel wrong to be used as a means to an end, but it would answer the question of how a world-dominating VR as depicted in those books could be made accessible.
There are a lot of different biases in the world of tech; sightedness is just one of them.
Are there any operating systems written, or computers built, from a blind perspective, or are the blind just using the accessibility features of sighted operating systems?
> Are there any operating systems written, or computers built, from a blind perspective
Yes, but they struggle to keep up with the mainstream, especially considering the runaway complexity of the web. The one notable open-source example is Emacspeak [1]. The rest, as far as I know, have been proprietary and often overpriced products.
Just want to encourage you to keep pointing out accessibility problems loud and clear. While CSS complexity was mostly driven by ads, ecommerce, and porn, there are people who tried hard to do the right thing on the web (the Paciello Group, W3C's WCAG, etc.), thereby also contributing to unwarranted complexity, who could benefit from feedback. And I don't have to tell you, but if you think you're not affected because you're young, sight/focusing problems kick in at about the age of 50.
"Debian took a more incremental approach, in part because it strives not to make wholesale changes to users' systems like those required by a flag-day upgrade to a merged /usr. In 2016, the ability to voluntarily switch to that scheme was added, then some attempts were made for newer versions of the distribution to be installed with a merged /usr by default. [...] The location of some files was being resolved at build time to point into /usr/bin (for example), but those files only existed in /bin on the non-merged systems."
The fact that the committee took a decision that the core package manager can't support is baffling to me.
The committee took a decision that the package manager /doesn't/ support.
If I've learned one thing, it's that there's very little that software can't be made to do. Whether it should is an entirely valid question, and hopefully figured out before assessing the "could" aspect.
> The fact that the committee took a decision that the core package manager can't support is baffling to me.
Why? Presumably the committee would be thereby deciding to drive development of the package manager toward whatever is required to support the high-level goal.
From a user perspective the high level problem would be merged directories. Which dpkg apparently already supports. Just not the way the committee wants it done.
> deciding to drive development of the package manager
Which is the hilarious part: no one seems to be working on it. The dpkg maintainer isn't required to do it himself as long as he accepts reasonable patches, but only half-finished patches (that identify themselves as such) are coming in, when the reported issues are not outright ignored. Seems like Debian suffers from the same issue every open source project suffers from: a bunch of lazy and entitled-as-fuck users insisting on features without putting any effort in themselves.
Linux development in the present era is very heavily driven by Red Hat's needs and desires, and I don't think they have this problem because they don't support upgrading from one release to the next using the package manager - you have to reboot into the installer and do a kind of reinstall in place. So they can just require that all packages only have files in the non-symlinked directories after the flag day. Debian is different - you have to use the package manager to upgrade, which means you're going to be running with a mix of new and old packages at least for a while.
Red Hat is not only RHEL. Red Hat's Fedora supports distro upgrades, and they didn't have many issues pulling it off. I was using Arch Linux when the merge happened, and it was a non-event.
It doesn't seem too difficult to support; what are the edge cases?
Who owns a file in a path that includes a 'well known symlink dir'?
So maintain a list of well-known symlink dirs. When checking a file path against the database, the normalization step should check if a known symlink dir appears at any point of the path; if it does, the redirect should be resolved to the well-known target instead.
E.g. owns('/bin/bash') ... the path mutates to '/usr/bin/bash' because the location '/bin/' is already known to really be '/usr/bin/'.
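A sketch of that normalization in shell (the alias list is illustrative, and the same mapping would have to be applied to both the query and the database entries):

    # hypothetical: canonicalize paths under known alias dirs before a lookup
    normalize() {
        case "$1" in
            /bin/*|/sbin/*|/lib/*|/lib32/*|/lib64/*) echo "/usr$1" ;;
            *)                                       echo "$1"    ;;
        esac
    }

    normalize /bin/bash   # prints /usr/bin/bash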
You're spreading misinformation too: based on the article, the dpkg maintainer is trying his hardest to obstruct fixes and the implementation of the feature despite the voting and consensus.
> Democracy is pretty decent, but comes with some big flaws: tyranny of the minority, enormous amounts of back-and-forth to get anything done, big egos at any level can stop progress.
The problem is not democracy itself as a concept. In both the open-source world and in society itself, the problems arise only when the demos (the population) either grows disinterested in democracy or is small in number.
As for "tyranny of the minority" - I hope to never see that phrase again. Protections for minorities in democracies exist for a very good reason. In the case of projects such as Debian to reduce the chance of solo maintainers quitting over frustration about being overruled, in democracies to prevent atrocities and hold up human rights for vulnerable people (e.g. disabled).
> As for "tyranny of the minority" - I hope to never see that phrase again. Protections for minorities in democracies exist for a very good reason.
There is plenty of evidence for the theory proposed by Taleb[0] on this one; he wrote:
"It suffices for an intransigent minority – a certain type of intransigent minorities – to reach a minutely small level, say three or four percent of the total population, for the entire population to have to submit to their preferences."
There is the political/sociological meaning of the word "minority" that you seem to refer to, and there is the linguistic/logical/mathematical meaning (those who are not in majority or a small group).
I am pretty sure the use in the phrase you object to is of the latter form.
>> As for "tyranny of the minority" - I hope to never see that phrase again. Protections for minorities in democracies exist for a very good reason.
I digress, but how are these protections an inherent feature of democracies? Even in a democracy (most democracies are representative democracies) people can vote for policies which may discriminate against people. This would again be democracy in action.
In a liberal democracy, it is understood that the will of the majority is not the only thing that matters.
For example, having the majority vote to strip a minority of their religious freedoms is not OK.
The irony is the good parts of "democracy" aren't actually the democratic bits at all, but rather a basic respect for civil liberties (a.k.a. natural laws) which are subject to neither democratic votes nor authoritarian decrees. The key is not to lose sight of the fact that what the majority wills is not always right.
This is why attempts to impose democracy from outside tend to fail. Giving people the vote doesn't automatically lead to respect for their fellow citizens' civil liberties, which is much more fundamental.
I'm not comparing you with a Nazi, but you should understand that minorities can take over a country; that's what the Nazis did, but also the communists in Romanov Russia... China is more complicated. It's something completely different, but maybe "Nazi" is a red flag for you in comments... because it's not comfy.
EDIT: And more current: the Taliban in Afghanistan, an absolute minority, but the one with weapons, training, connections to local "war/land"-lords, and the will to take over the country.
Or the Alawites in Syria, or the Europeans in 18th century North America, or the Mongols, or a hundred other examples. There's a line somewhere between less-violent "minorities taking over a country" and more violent Mongol-like "taking over", but I would say that the situation of a minority group seizing power is not significantly less common than of a majority group seizing power.
Godwin's law is not exactly a law, but an observation. Your comment is justified, because the current zeitgeist is "majority wrong and minority right", and your comment highlights the dangers of such a process.
If you define the majority as 50%+1 and the minority as 50%-1, almost all democracies in the world have the minority in power (unless they get a 100% turnout rate with all votes contributing to parliamentary seats). Winning 60% of votes with a turnout of 70% of voters is only 42% of the population.
Nah, people staying home on polling day doesn't stop them from being part of the majority opinion. If an option gets 60% of the vote, and there are no shenanigans going on, then that option is almost certainly the choice of the majority.
That's a pretty flexible definition of "choice" ;)
Does this also hold if the vote went 51% to 49% (in dual party system)?
Democracy is there to allow us to express a preference. Not voting is exactly that, a preference to not vote, and reasons are certainly various (including the one you mention of supporting the likely winner).
> That's a pretty flexible definition of "choice" ;)
If you choose not to vote, you're putting your endorsement toward whatever everyone else decides. If nothing particularly weird or bad is going on, the people not voting should be similar to the people voting.
And you can pretend I said "preference" if you don't like the word choice. Doesn't change my argument.
> Does this also hold if the vote went 51% to 49% (in dual party system)?
No, statistically that's too close. But when you get 60% of a 70% turnout, to reverse that the rest of the population would have had to vote about 3:1 in the opposite direction. That's not likely.
>However, despite waging a campaign of terror against their opponents, the Nazis only tallied 43.9 percent of the vote on their own, well short of a majority to govern alone.
One problem that the Weimar Republic suffered from is that there were too many small parties. Absorbing them is easy if you are the Trump of the '30s and are willing to use violence behind the scenes to intimidate anyone who opposes you. Also, it helps to have connections with people who hate democracy in important political positions.
On the other side, Debian is one of the most stable platforms, more so than even commercial ones (binary-compatible CentOS is just a blink in the lifetime of Debian, for example :D).
Just because people care about the technology...
And upgrading is important for small businesses with less money, which shows how hard it is to achieve a good upgrade path.
The reasonable explanation for that is that in a monarchy, ostensibly all bodies are working towards a common goal (as dictated at the top), whereas in a democracy, in many cases there exist bodies with equal power yet conflicting interests.
Monarchies seem to work well in cases where "the top" has interests that align with the interests of the subjects, and is well-informed. Jordan seems to be a great example. Yet history is full of examples of long-standing monarchy-type organizations where after only a short time of disconnect between the governing and the governed, the entire system fails. A Frenchman could probably provide good examples.
Communism as the rule of "the proletariat" instead of a ruling nobility has created the worst horrors of the 20th century. You may think of monarchy and communism as the same thing "because they are dictatorships", because the frame of reference you are used to is contrasting everything to "democracy", but they really really are not.
I understand the problem dpkg is facing (it can't know what the canonical pathname for an object is, in some cases), but I have huge problems understanding why a symlink farm is preferable. The last thing I want to see on my Debian system is directories with hundreds or maybe thousands of symlink entries. There must be a way to let dpkg do its job if the directory-symlink approach is used instead, surely? (/bin -> /usr/bin)
If you link /bin to /usr/bin then all binaries will be available under /bin, and developers will hardcode paths like /bin/python3. This will make programs non-portable to other distributions with separate /bin and /usr/bin. The link farm doesn't have this problem.
Did that end up becoming a problem in all the other distros using aliased directories, for example Fedora, which has 10 years of experience with this scheme? A quick look on Fedora 35 with
doesn't really show any objectionable binaries being referred to via /bin (I count those I excluded in the negative lookahead as certainly non-objectionable).
(I used the variable-length negative lookbehind [1] because I wanted to reduce false positives, but just scrolling through the results ended up being easier)
I wish two things happened... all OSs around the world implementing "#!python3" (or similar) as a shorthand for "#!/usr/bin/env python3", and in the meantime using #!/usr/bin/env to transition.
The reason that hasn't happened is that `#!` is a kernel-level thing, but the `PATH` env is a userspace thing. I suspect even adding env parsing to the kernel would not be viewed as a favorable change…
It could literally be a shorthand for /usr/bin/env, actually delegating the PATH handling to user space. The path to /usr/bin/env could be a boot arg or runtime variable (sysctl). It could probably be implemented in Linux (without touching kernel code) via binfmt_misc. It's mostly a matter of people agreeing to do this.
Dealing with hard links in the real world is a bit tricky. For example, "mv a b" is very different from "cat a > b". You have to do the equivalent of the latter in order to update a file in-place without updating every single hard link, which is problematic since writing to a file is not atomic the way an unlink+rename is. It's a bit of extra logic for the package manager to deal with, while a symlink is just a symlink.
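The difference is easy to demonstrate in a shell session (file names are arbitrary):

    $ echo v1 > a && ln a b    # 'a' and 'b' are two names for the same inode
    $ echo v2 > a              # in-place write: every hard link sees the change
    $ cat b
    v2
    $ echo v3 > a.tmp && mv a.tmp a   # atomic rename: 'a' now has a new inode
    $ cat b                           # ...and 'b' is left behind on the old one
    v2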
I'm surprised that they are pushing to remove /bin and keep /usr/bin. Why not the other way round? Let's keep our binaries out of /usr! (as a first step to remove /usr altogether, since it has a confusing name).
> First, technical challenges: HURD tried to move from /usr to / and had quite a bit more trouble with it, and that trouble warned against trying it on a larger scale. It's much easier to move from / to /usr than from /usr to / ; fewer things break.
> Second, consolidation: moving into /usr gives us a single directory containing all files managed by the distro / package manager. This has quite a few advantages. It makes sharing /usr among several chroots or containers feasible (one /usr, different /etc and /var). It makes versioning and A/B upgrades (upgrade to new image, fall back to old image if new image doesn't boot) simpler and easier. It makes it easy to lock down /usr with something like fs-verity.
> I'm surprised that they are pushing to remove /bin and keep /usr/bin. Why not the other way round?
That is explained in the “Case for the usr merge” essay: having all the read-only system stuff under a single root directory makes the system much easier to manage, and simplifies useful scenarios like having the system on a network share, or sharing the host's read-only /usr across guests. With a merged /usr, you just have to manage a single mount point or directory rather than half a dozen which must be kept in sync (see the sketch below).
Also, /usr is not just /usr/bin: /sbin and /lib (and /lib64 and /lib32) are also part of the usr merge.
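A minimal sketch of the shared-/usr scenario, with illustrative paths (using the classic two-step bind mount, since a plain bind ignores the ro flag on older mount versions):

    # hypothetical: one read-only /usr image shared by two chroots,
    # each keeping its own /etc and /var
    mount --bind /srv/usr-image /srv/chroots/web/usr
    mount -o remount,ro,bind    /srv/chroots/web/usr
    mount --bind /srv/usr-image /srv/chroots/db/usr
    mount -o remount,ro,bind    /srv/chroots/db/usr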
$HOME/bin is useful as well. I like that I can have my own software (that I wrote and compiled) in my home folder and don't need to ask a sys admin to install it for me.
> What changes is simply that you need to boot with an initrd that mounts /usr before jumping into the root file system.
> On Fedora the root directory contains ~450MB already. This hasn't been minimal since a long time[…]
> There is no way to reliably bring up a modern system with an empty /usr.
These seem like workarounds for self-inflicted Fedora problems. Having a known-good /bin from boot time seems strictly better than an extra copy of some of /bin in a ramdisk that almost nobody works with because it’s quickly thrown away.
Keeping /usr/bin is a smaller change, because that’s where most binaries were before the merge. Only a few essential repair tools traditionally lived in /bin because that made each host store its own copy (rather than maybe depend on NFS).
There will always be sets of tools managed by the distro or my org or my team or me, and it’s pretty important to segregate them to avoid and resolve conflicts, so I expect at least four entries in $PATH for the foreseeable future. Plus whatever messes vendors care to dump somewhere in /opt.
Also, having everything system-specific live under a single path means easy backups, and easy immutable trees that can be A/B-swapped for seamless online updates with easy rollbacks.
No more wondering where the various tools live, since everything lives in /usr/bin and the other /usr directories.
The original point of a non-merged /bin (and /sbin) was to contain essential binaries for system recovery and early boot, that should be available even when a separate /usr/ partition is not yet mounted. Keep in mind that the approach of booting from initrd is quite non-idiomatic and used only for convenience wrt. a generic distro install; you're generally meant to recompile your kernel so that it can mount / directly at boot and find everything it needs there, without requiring to mount either a ramdisk or a separate partition. This means that early-boot stuff must live there in order to get the benefits of separate partitions in the first place.
> The original point of a non-merged /bin (and /sbin) was to contain essential binaries for system recovery and early boot, that should be available even when a separate /usr/ partition is not yet mounted.
The original point of a non-merged /bin was that /usr/bin was a spillover from when they ran out of space on the primary drive back in the 70s, so there was a slapdash split where the stuff necessary for bringup was kept in /bin and the rest would be kicked down to /usr/bin, in a very inconsistent manner depending on the system’s evolution.
In fact that’s also why users were moved from /usr (the original location, hence the name) to /home: /usr ran out of space, so they added a third disk and moved the user directories over there.
> Keep in mind that the approach of booting from initrd is quite non-idiomatic
How can you claim that it’s non-idiomatic when it’s the standard approach?
You still need an initramfs in case your bootloader/kernel doesn't support your filesystem; e.g. GRUB didn't support LUKS2, and the Linux kernel can't boot LUKS directly without userspace tools to input the passphrase, etc.
Your distribution needs an initramfs for that reason; you as a user can just recompile the kernel so that the modules it needs are built-in rather than needing to be loaded separately from a ram-disk. If you need an early-boot file system at all (as with an encrypted /), it can be provided in /boot/ and then "swap root" to / at boot like initrd does.
Nowadays a network filesystem protocol fits inside the system's initialization ROM, so all of those old issues are moot and we have completely different problems to optimize for.
Given that this was done on Ubuntu (and in other distros) and there isn't any issue (I just noticed that I have had a merged /usr for a long time), the dpkg maintainer is being a jerk.
As far as I can find, there are issues, namely that installing a package with dpkg can leave the system in a bad state. If I understand it correctly, dpkg normally prevents this by checking the paths it modifies, which ends up non-trivial if the paths are aliased.
> the dpkg maintainer is being a jerk.
He is required to accept working patches; the committee's priority seems to be removing the warnings about the broken support rather than actually fixing it.
The thing is that no one wants to interact with a maintainer hostile to the very idea of your patch. Sure, they may be "required" to accept it, but it's not going to be a fun process for anyone involved.
Part of Boccassi's concern with their patch is that it might not be merged at all due to what they perceive as "moving goalposts" and "excuses". I think that's not entirely an unreasonable concern, and no one likes to work on patches that will never get merged.
I don't know what the path forward for Debian would be here. Things seem ... difficult. Getting some of the social tension solved and having people "kiss and make up" would probably be a good start, but that can only work if people are willing.
> The thing is that no one wants to interact with a maintainer hostile to the very idea
His hostility seems to be based on actual reasons if it is true that other systems that performed the unification basically throw out any guarantees made by dpkg. Providing a patch that fixes that would get rid of those reasons.
> Part of Boccassi's concern with their patch is that it might not be merged at all due to what they perceive as "moving goalposts" and "excuses".
Boccassi is also the guy who claimed the first incomplete patch was rejected without further comment, which, going by the article, wasn't the case. So his opinion of the maintainer is at best misinformed, at worst intentionally deceptive.
Also going by the article a merge can be forced by the committee. So this seems to be a non issue.
There seem to me to be some conflicting accounts and perspectives. Boccassi took the "broken by design" snipe in reply to his patch as "this is going to get rejected", which I don't find completely unreasonable. There were also some more technical comments from other people, which some took as "this could get merged, if it gets addressed", which is not unreasonable either.
Yes, the CTTE can force a merge, but that's a pretty uncomfortable situation for everyone – at the very least it's going to be an uphill battle. I wouldn't just dismiss it as a "non-issue".
I spent a good amount of time reading up on some of the mailing list threads this afternoon, as I thought it's an interesting social problem, but with a long-simmering conflict that's been going on for years it's hard to really get to the bottom of the full context. No one here seems especially constructive. I can definitely understand people's lack of motivation in writing fully polished patches if the dpkg maintainer is constantly railing that it's all "broken by design" though.
What a pity. Merged usr is so convenient to manage.
I feel like most of the objections & problems relate to the transitory phase: to having to support both, to trying to install old packages. I was thinking Bookworm was committed to a merged /usr, but it sounds like the release after, Trixie, will be the first purely usr-merged one, when everyone should be on the new thing.
For all the dpkg maintainer's griping, it sure doesn't seem like there are viable alternative implementations to aliased directories.
Yeah, ideas and projects are often dismissed due to coming from the systemd group.
For this very reason, I'm rather bothered that gummiboot was renamed into systemd-boot. It's a very simple, nice tool, that's usable in plenty of non-systemd environments... but the naming just makes it unpopular with the crowd that doesn't like systemd.
I didn't even realize you could use it outside of systemd. I never looked at it in detail, but if something is named systemd- then I think "part of the systemd suite and intended to be used with systemd" is a fairly reasonable assumption.
I'd still be hesitant to use it, to be honest, as systemd has on several occasions broken people's systems and the response was "you're holding your phone^H^H^H^H^H systemd wrong". Well, maybe, but you broke my system and before it was perfectly fine, and that's kind of an issue for me. Especially with something as critical as booting my system, I want to be able to just rely on it without breakage "because I was using it wrong". Linus' "we don't break userland" policy was a great piece of insight. I wish systemd had a similar attitude.
Some of the systemd criticism has gone off the rails (...and then some...), but there's a number of things one could reasonably criticize about the project and development style, IMHO.
Systemd-boot is a dream. Why anyone would tolerate or think grub2 is an acceptable piece of software is beyond my understanding.
grub.cfg used to be human-editable, but has evolved & morphed into some massive gnarly twisted mess of inscrutable noise that only multiple layers of shell scripts can output. It's become a write-once-read-never disaster.
And if I recall, it's not even live. You still have to install that config.
Systemd-boot (née gummiboot) is such a huge breath of fresh air. Simple, sensible plaintext entries that one can modify in any old text editor, which take immediate effect. It's so pleasant & simple.
Alas, Debian doesn't seem to ship any hooks for updating systemd-boot with kernel updates. There's a shell script to write/remove the entries, but one has to go write their own hook & figure out the variables to marshal into the script's arguments. Please, Debian!
Oh yes, grub is a pain; no argument there. I didn't even like grub 1 and in grub 2 it got several orders of magnitude worse.
I did almost use systemd-boot a few weeks ago though; I moved my SSD to a different laptop and that somehow accidentally booted some remnant of the Windows boot manager, which automatically and helpfully hijacked the lot, and then it didn't boot in either the new or old laptop. I ended up using Grub as my distro doesn't provide systemd-boot at all (let alone update hooks), and aside from the hiccup a few weeks ago I haven't had to look at it in over a decade; so for all its ugliness it does "just work" for me, and I figured looking at alternatives would be a bit of a waste of time.
I miss the times when I used FreeBSD and the MBR bootloader they had (have?) just automatically detected things and always worked without any kerfuffle. Just dd those 512 bytes to the start of your disk and presto!
I loved grub (1)--being able to fix a boot configuration issue during boot time was such an amazing upgrade over lilo. When I first saw grub2 and learned how it worked, it struck me as the platonic ideal of second system syndrome[1]. Everything about it was more complicated, expansive, configurable. The fact that they need `grub-mkconfig` is a sign that it went horribly wrong.
I had not heard of systemd-boot, I will check it out…
> I didn't even realize you could use it outside of systemd. I never looked at it in detail, but if something is named systemd- then I think "part of the systemd suite and intended to be used with systemd" is a fairly reasonable assumption.
The tight coupling of systemd is a popular criticism of it.
And this is another fine case demonstrating how very very wrong that criticism is.
All in all extremely few systemd subsystems require systemd. Systemd's pid1 requires only one subsystem, journald, which one can mostly disable/defang if they really dislike it.
It's much more like a monorepo than a monolith. There are standard practices & libraries shared between the projects: unit files, the ability to ask for machine-readable output, and others. But these cross-cutting concerns mostly get compiled in. The actual interdependencies between subsystems are few & far between. Feels like the critics don't really understand what they are trying to criticize.
You say this, but systemd-udevd at one point would sigkill every process on the system if it was not started under systemd (or otherwise in its own cgroup).
Considering the massive amount of unnecessary and unjustifiable trouble that so many people have experienced over the years with systemd and PulseAudio, it's no wonder people would distrust everything coming from the developers and proponents responsible for such software.
That's really unfortunate naming then. Any systemd-foo that "works on it's own" is (rightly) assumed to be tightly coupled to the rest of the systemd ecosystem.
As somebody who really does not care either way, all this just sounds more like moving problems around rather than solving them. At least, I don't think there are many Linux systems where you can safely remove /bin from the path just yet. I just checked. My Manjaro install has a /bin and it is full of files that look like I would not want to lose them. So, they fixed the /bin to /usr/bin thing for some packages but clearly not all packages in Arch. I bet/blindly assume that's true in Fedora as well.
I've always struggled with figuring out where stuff lives on the file system in different linux and unix derivatives. It's never where you expect/want it to be. I've used solaris, hp ux, mac os, and linux over the years. They all have similarly named directories, mostly for weird historic reasons. Is it /var/usr/god/knows/what or in /opt/foo/bar or in /usr/var/lib or /usr/share/lib. What about /etc, /usr/etc, /usr/local/etc?
You can take almost any 1,2,3, and 4 symbol permutations of share, lib, var, bin, opt, and sbin and probably find something that expects files to exist under that path. Reducing the number of permutations would probably be helpful.
People seem to just roll the dice and create some place where shit lives based mostly on vague intentions, rules, heuristics, and interpretations associated with long-dead unix variants from the nineteen eighties, weird naming conventions, and what not.
So, what's the rule here? Some 'special' packages should install to /bin and some other not-'special' packages should install to /usr/bin? Why? Is there any agreement about what constitutes a 'special' package? Is there a good functional reason for having both directories? And then the whole bin/sbin distinction is kind of arbitrary as well. Some binaries are statically linked, others are not and require libraries to exist in yet more shared directories on a library path. It all boils down to users requiring a PATH variable (and, inevitably, a lot of other similarly named variables). And of course the order of paths is also super relevant and kind of the point. Do you look in /usr/bin before or after you look in /sbin? It's all a house of cards. Brittle by design and convention.
The notion of taking a package and then fragmenting it over a multitude of shared directories is the problem that needs fixing. The role of a package maintainer is bridging those different notions of where stuff should live between different distributions with some convoluted scripts. It's a job that should not need doing and code that should not need to be written.
The main differences between Linux distributions boil down to none of them actually having solved this problem; they all ended up doing only loosely similar but clearly different things for mostly obscure reasons. They don't agree on where stuff should live, how it should be moved/copied/linked there, where and how things are configured, etc. Most of these differences are kind of arbitrary and petty. /sbin, /usr/bin, /usr/sbin, /bin, /usr/local/bin, etc.: who cares? I pretty much need all of these in my path for things to work as intended. I don't see a good reason for more than one of these to exist.
Apple kind of got this half right with the notion of mounting a package rather than installing it. Most applications install/uninstall via drag & drop. I always thought that was a neat idea. Of course, they then made it complicated by having /Users/<uid>/Library/* and /Library/* directories anyway. So, most applications leave a lot of clutter there after you drag them to the trash-can. And they also have the usual contingent of unixy directories. And package managers like brew, macports, fink, and whatnot that sort of carved out different places in the filesystem where their stuff lives. So, I wouldn't go as far as saying that Apple solved the problem. But it does look like progress to me.
Mounting stuff rather than fragmenting it all over the place is progress. Docker does this. And so do Flatpak and Snap. Flatpak and Snap are kind of tedious in their own way (e.g. opencl support is a PITA with Darktable and other packages that need that). But at least I have some stuff that I installed with those that actually works without having to be customized for every linux distribution.
> My Manjaro install has a /bin and it is full of files that look like I would not want to lose them. So, they fixed the /bin to /usr/bin thing for some packages but clearly not all packages in Arch.
I didn't read the rest of your long comment because this is incorrect.
/bin is a symlink to /usr/bin on arch. There are no files in a folder /bin. Any package that tries to install a file in /bin will throw an error that it conflicts with the filesystem package.
Arch goes one step further, there is no /sbin or /usr/sbin either. Both are also symlinks to /usr/bin
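Easy to check (metadata trimmed from the output):

    $ ls -ld /bin /sbin /usr/sbin
    lrwxrwxrwx ... /bin -> usr/bin
    lrwxrwxrwx ... /sbin -> usr/bin
    lrwxrwxrwx ... /usr/sbin -> bin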
Instead of all this drama, the person who cannot see any way forward, despite a valid example of Ubuntu making this work, should be removed from the critical path by the instrument of a fork. Otherwise it will just be a matter of time before another bike-shedding, multi-year incident happens again.
By creating a fork and proving that it works, the discussion becomes about whether to merge patches or adopt the fork rather than sterile "what if"s.
I wish most *Unixes would go the Gobo Linux route with a Program, Users, System, Data, Mount file hierarchy that is just symlinks under the hood to the old hierarchy
Why rename? How does that benefit us? The only real difference gobo offers is separating program and system files, but I'm not even sure what the difference is there.
In the gobo documentation^1 section called "But what about Unix compatibility?":
> The GoboLinux system layout seems to be a major departure from the Unix tradition. Does this mean all programs need to adjusted so that they work with the new layout? Fortunately, the answer is no. Through a mapping of traditional paths into their GoboLinux counterparts, we transparently retain compatibility with the Unix legacy.
~] ls -l /dev/null | cut -b 45-
/dev/null
~] ls -l /bin/sh | cut -b 45-
sh -> /Programs/Bash/4.4/bin/bash
> There is no rocket science to this: /bin is a link to /System/Index/bin. And as a matter of fact, so is /usr/bin. And /usr/sbin... all "binaries" directories map to the same place. Amusingly, this makes us even more compatible than some more standard-looking distributions. In GoboLinux, all standard paths work for all files, while other distros may struggle with incompatibilites such as scripts breaking when they refer to /usr/bin/foo when the file is actually in /usr/local/bin/foo.
> You may have noticed that the Unix paths did not show up in the system root listing in the very first example. They are actually there, but they are concealed from view using the GoboHide kernel extension. This is for aesthetic purposes only and purely optional, though: GoboLinux does not require modifications in the kernel or any other system components. But our users seem to like it a lot. :-)
So it doesn't break the world. The faq^2 also indicates this wasn't a change to make Linux more newbie friendly, but in my own words I think it does.
From the original proposal from 2012 that is linked to in the article:
> The primary commercial Unix implementation is nowadays Oracle Solaris. [...] By making the same change in Linux we minimize the difference towards the primary Unix implementation, thus easing portability from Solaris.
I wonder what he means by "primary"? I'm reasonably sure that in 2012 OS X was used on more machines than Oracle Solaris.
I did a Solaris 10 course in 2008; even then it felt like it was on the way out, with some big-iron companies still running it of course, but then they still have systems even older than that.
Solaris 10 did have some neat features not found in Linux then, not just ZFS, but containers etc.
By 2012, as the concept of things like AWS was gaining traction, I'm surprised that legacy systems like Solaris were even a consideration. I'd be very surprised if Solaris was chosen in any greenfield installation (rather than extensions at existing companies with a lot of Solaris/Oracle) in the last 15 years.
And if you recall correctly, adopting his systemd in the Debian community was a long and challenging two-year debate. Debian adopted systemd three years after Fedora did. Afterwards, multiple senior contributors resigned from their positions due to extraordinary stress levels caused by ongoing disputes about systemd.
So when I tell you the Debian community mainly cares about servers, I'm not bullshitting you and I know who the fuck Lennart is.
He's workstations, and not to be snooty about it either: it's just like some people prefer crunchy peanut butter, and some people ruin their lives with creamy.
I liked the init systems of yore. I'm not asking to toss systemd out, because it's very useful in some settings; however, I prefer the init systems we had when it comes to my personal workstation.
All of my non-professional work systems are OpenBSD and I will tell you that BSD init is a friggin breeze.
Sadly that ship has sailed, but it's finally 10-12 years later that there are some niceties to systemd that make it worth it to me (systemd-resolved in particular).
Can't speak about AIX, but Solaris was massively more used than HP-UX. Solaris was still under pretty active development and adding new and cool features in 2012, while HP-UX seemed pretty much abandoned by HP. Itanium was HP's last big push on HP-UX, but after that failed to catch on they seemed to give up on the OS.
As a Debian user in a previous life, and a current NixOS user, this whole debacle amuses me greatly, as I no longer give a shit about paths like /usr/bin or /bin, beyond a handful of binaries needed to bootstrap my environment. I've even moved those paths into a read-only filesystem on some systems!
This could happen in any distribution which does not have a powerful dictator at the top. Almost nothing about this issue has to do with paths. It is one maintainer of a core package who obstructs the Debian project which is enabled by Debian's distributed nature with no powerful dictator at the top.
Yeah came here to say the same. For them, silly tradeoffs between uniformity and what might be considered ambient authority or separation of concerns. For us? Nahhh best of both worlds.
>Improved compatibility with other Unixes/Linuxes...
A quick check reveals that OpenBSD presently has separate /bin and /sbin. FreeBSD also has a separate /lib. Exactly what were the other Unixes/Linuxes that this was supposed to improve compatibility with? I hope it was not just Solaris...
OpenBSD also has /usr/X11R6, where it puts the X stuff (their X distro is named Xenocara). This is another piece of legacy from back when you might want to have both /usr/X11R2 and /usr/X11R3. These days the X11R6 directory actually contains X11R7, shrug.
FreeBSD used to do this too, but got rid of it in FreeBSD 6 (I think? Maybe 7? About ten years ago).
That's because "packages" in the BSD world (at least openbsd) are considered third-party options, the base system is considered static. Linux distributions don't really have the notion of a static base system with "package" addons, the distribution packages are the system.
The idea is, if something hardcodes e.g. /bin/grep because it was written on a system where that's where grep is, it will suddenly start working because now grep is also accessible via /bin/grep.
So it's compatibility in the sense of making things work (instead of failing over pointless differences), not in the sense of being the same.
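For instance, a script with a hardcoded interpreter path keeps working either way:

    #!/bin/bash
    # On a merged system this resolves whether bash's "real" home is /bin or /usr/bin,
    # since both paths point at the same file.
    echo "running $(command -v bash)"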
The current state of the UNIX file hierarchy is a mess. Many incarnations of GNU/Linux systems didn't contribute to a more elegant yet simplified directory structure. Just pick the one you like most and that serves your purposes.
As for merged /usr: I've recently been using Arch Linux for desktop use, which accomplishes the merge by symlinking, which is ugly. Arch certainly is not perfect. Not sure if this is the way.
So that is the reason why that weird "unsupported" message showed up on a system I was upgrading some time ago.
What I don't get... yes, it may be possible to get a dpkg database into an inconsistent state. But why is everyone pinning their hopes on a patch that may or may not appear to prevent these bugs, instead of deriving a way to fix a system with an inconsistent dpkg database?
Elaborate. I don't find the UNIX file hierarchy complicated, but there are some historical quirks and artifacts (like `/var/lib`). The difference between `/usr/X` and `/X` is about the worst of it IMHO.
Everything important is usually at most 3 levels deep and the short names are very convenient (versus "C:\Documents and Settings\xxx").
Compare to things like Windows: c:\windows\system32 that has 64-bit stuff in it, the whole c:\windows\syswow64 b.s., c:\Program Files (x86) and c:\Program Files, the hierarchy in the Windows registry which has "Windows", "Windows NT", "Microsoft Windows" (probably), etc.
All file system hierarchies contain complicated rules and systems that have been built in by the people who designed them decades ago.
I still don't know what I'm supposed to do with /usr/local, what the difference is between /usr/share and /usr/local/share, what the point of /opt is if programs install their files and dependencies in /usr(/local?)/lib anyway and why I have /usr/lib, /usr/lib32 and /usr/lib64 when only two directories and the right environment variables should suffice.
There are subfolders in /dev that feel like they don't need to be subfolders. There's /tmp and three other places that contain temporary files that should get cleaned on boot. PID files appear strewn across /var/run and /tmp/<magic directory name>.
/var can contain just about everything. Most of /var feels like it should actually be inside /var/spool but you're not going to see much in there except for a mail queue to nowhere on desktop Linux machines.
Then we come to the XDG standard everybody just blatantly ignores that tries to bring order to the chaos that is program-generated files in the home directory.
And then there's also snap. Snap looks at any directory convention, laughs, spits in your face for good measure, and creates a folder called "snap" wherever the fuck it wants to. I'd purge it from my system if the snap people hadn't convinced some tools I use to support it as the main distribution method.
I'm sure there are guides out there that explain every directory and their purpose. I've read one of those guides, noticed that at least a third of the common programs I use clearly haven't read it, and forgotten the details already. The file hierarchy of a fully-fledged desktop Linux is kind of a mess, and that's just what you get when your core system is formed by combining the work of hundreds or thousands of volunteer projects.
>I still don't know what I'm supposed to do with /usr/local, what the difference is between /usr/share and /usr/local/share, what the point of /opt is if programs install their files and dependencies in /usr(/local?)/lib anyway [...]
/opt - You put everything related to program foo under /opt/foo . Binaries, libraries, configs, it doesn't matter; all go under /opt/foo. Everything specific to foo should be under /opt/foo, such that deleting /opt/foo also wipes all traces of foo from your system.
/usr/local - Equivalent to /opt/jeroenhd, with a substructure mirroring /usr, i.e. top-level bin, lib, share directories. If jeroenhd compiles multiple things foo and bar and they all end up under /usr/local, removing them after the fact is hard due to the difficulty of determining which files are foo's and which are bar's.
You say "if programs install their files and dependencies in /usr(/local?)/lib anyway" as if you don't have a choice, but that's up to the programs. Eg anything with a configure script should let you configure the prefix to /opt/foo so that it installs there instead of /usr/local
>[...] and why I have /usr/lib, /usr/lib32 and /usr/lib64 when only two directories and the right environment variables should suffice.
/usr/lib - Architecture-independent libraries
/usr/lib32 - 32-bit libraries
/usr/lib64 - 64-bit libraries
(For the Debian family, the arch-specific libraries are mostly in subdirectories of /usr/lib named for the triple, e.g. /usr/lib/x86_64-linux-gnu)
Not sure which of these you'd excise to be left with only two, or what env vars would have to do with anything.
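For what it's worth, on the Debian family you can ask dpkg for the triple; on an amd64 box it looks something like:

    $ dpkg-architecture -qDEB_HOST_MULTIARCH
    x86_64-linux-gnu
    $ ls -d /usr/lib/*-linux-gnu*
    /usr/lib/x86_64-linux-gnu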
I'd like to point out that I'm in favor of standardization of directories. I also understand the legacy argument, since a lot of software by default hardcodes its file locations.
Directories like var, opt, usr need some rethinking. Hence the UsrMerge I presume?
What goes where? I have been a sysop for a long time. I came from a time when computers were simple by comparison to now: limited in functionality, but they just worked. I've seen many changes. In the past there were fewer directories; through time we added a couple, e.g. opt, srv, media, or dev, proc and sys. What happens when something new pops up? Create a new directory? Possibly. Maybe now is the time to consolidate and rethink certain directories? Better names, e.g. system instead of usr? Where to put libraries; maybe consolidate into one location? A location for data (stores)? Etc.
Of course the current hierarchy isn't complicated. But it's not intuitive and can be confusing. Even the Windows directories are more readable? I think naming must be more intuitive, especially with regard to new or inexperienced users. Just an idea. Making Unix accessible to everyone is a good goal.
It starts with readability, in my opinion.
Oh, I didn't mention software development, which is an entirely different ballgame with respect to device files, magic files, etc. Ugly to boot. The mantra "everything is a file" is also not true.
The Filesystem Hierarchy Standard[0][1] is used by most Linux distributions as the authoritative source and description of the various system directories. As other commenters have already noted, the hierarchy on UNIX systems has always been a bit of a mess; the FHS is an attempt to clean it up while maintaining backwards compatibility.
I've always liked the /usr distinction, with systems set up so they will at least boot without /usr mounted (at an absolute minimum, a statically linked /bin/sh is enough to get a command prompt). This is particularly valuable for setups where a large number of PCs can mount /usr read-only from a remote server (we run quite a number of diskless x86 and ARM systems like this).
Bind mount your small system's /usr to /lower, mount your bigger /upper, and use overlayfs to combine them.
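A minimal sketch, assuming /lower and /upper exist as mount points and the writable layer lives on a hypothetical /dev/sdb1:

    mount --bind /usr /lower
    mount /dev/sdb1 /upper
    mkdir -p /upper/data /upper/work    # upperdir and workdir must share a filesystem
    mount -t overlay overlay \
        -o lowerdir=/lower,upperdir=/upper/data,workdir=/upper/work /usr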
Looking forward to some blog posts discussing how to support this use case; it's a good one. I think usr-merge is so much more manageable though that it's totally worth doing.
Must admit I'm surprised there don't seem to be more users out there doing exactly this. It makes installing a new program for everyone instant (put it in /usr, now available to all) while allowing personal customisations, with the ability to reset the system to a pristine state in a second. Especially combined with diskless, the computer hardware becomes irrelevant/disposable: just grab a fresh one, boot the same image and carry on (or boot your personal image at a hot desk).
Sigh ... gobolinux solved this quite elegantly over 20 years ago, and to this day most folks don't quite grok how simple and elegant their solution was.
> /bin is a link to /System/Index/bin. And as a matter of fact, so is /usr/bin. And /usr/sbin
So, they're doing exactly the same thing as Debian does in the default merged configuration?
The issues seen on Debian are not about the system using symlinks or not, but about packages failing when exchanged between systems that still had /bin and /usr/bin as separate directories, as was historically the case, and the new merged systems. GoboLinux being a from-scratch distro with no transition history, I am not sure how it can be seen as a better example here.
I'll just copy & paste from earlier posts I made on a gobolinux-related thread some time ago (can't be bothered to rewrite the same arguments in different words)
"Anyone who wants to try out gobolinux for real would at least do some basic reading first. Its that distinct a distro that you kind of have to, and no problems with that. Not everything innovative can be expected to be digested without a modicum of work up front.
...
Sometimes the simplest alternative implementations require the most up-front understanding, because their simplicity challenges long-held preconceptions about how things should be.
That's Gobo in a nutshell.
At its heart, it's actually a very simple distro, which uses simple tools, and the filesystem, to lay everything bare in front of the user.
Funnily enough, Gobo is one of the few distros where you can refer to /bin/<any-linux-executable>, and as long as you actually have that pkg installed (yes, under /Programs), then /bin/xyz WILL be found. Guaranteed.
That's by design."
And:
"The kernel module that hides the standard dirs under / is entirely optional, and is only there for aesthetic purposes.
If you don't want it, don't load it. Absolutely nothing in gobolinux requires that kernel module to be running. Again, it's just purely for aesthetic purposes."
It's essentially like macOS's /Applications directory, except where macOS supports the traditional POSIX FHS via a hidden /private root directory, GoboLinux hides these directories with a filesystem kludge.
I recently did something similar refactoring XDG endpoints to get rid of the standard home layout. I was quite surprised everything just worked, and there wasn't an issue of directories being recreated in their original locations or file templates being missing. Quite liberating if you ask me.
I'm not sure how you're getting to this conclusion. The GoboLinux filesystem hierarchy just separates executables and shared objects by name and version [1], unlike the standard FHS where everything goes into /usr/lib, /usr/bin, etc.
For backward compatibility there are symlinks (which take no real disk space) to the standard directories, but these are hidden by the kernel so they don't show up and clutter your file manager.
If you'd actually read the article, you'd see that the problem isn't merged /usr. Other distros moved to merged /usr just fine.
Also, the decision to move Debian to merged /usr wasn't done by Poettering, it was done by the Debian technical committee. So please stop spreading such ad-hominem nonsense.
Haven't used Debian or Debian-based systems for many years. They do too much "automagic" stuff behind your back and thus tend to brick themselves, especially during major updates. Not really sure how they got so popular. The packaging ecosystem is also kind of a mess.
Huh? I have the opposite experience. Major upgrades always went flawlessly for me. With the rolling testing release on my workstation/laptop, I sometimes run into dependency problems during upgrades. But these can always be solved by a few standard techniques. Not really beginner-friendly, but quite reliable.
I have had many more problems when upgrading Ubuntu installations.
If you are running testing, it doesn't get security updates except through regular package migrations from unstable, so I recommend pulling in security updates from there manually or automatically. At least Firefox and Linux have regular security fixes in unstable, so you might want to use apt pinning to use the unstable versions. You can also use debsecan to automatically and temporarily pin security uploads to unstable using the technique mentioned here:
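For the pinning part, a minimal sketch (the file name and package selection here are hypothetical; adjust to taste):

    # /etc/apt/preferences.d/unstable-security
    Package: firefox-esr linux-image-amd64
    Pin: release a=unstable
    Pin-Priority: 500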
What killed Debian for me was when doing an upgrade of Apache and PHP it automatically restarted Apache temporarily without PHP configured, then the upgrade of PHP failed and left Apache running completely without PHP support for a while until someone noticed, and Apache happily kept serving .php files as text/plain during this time.
A package manager should in my opinion leave daemon stopping/starting to the administrator, as they know their configuration better than any package manager ever can.
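Debian does have an escape hatch for exactly this: a /usr/sbin/policy-rc.d script is consulted by invoke-rc.d before any automatic service action. A minimal sketch:

    #!/bin/sh
    # /usr/sbin/policy-rc.d: consulted by invoke-rc.d before (re)starting/stopping services.
    # Exit code 101 means "action forbidden by policy", leaving daemons to the admin.
    exit 101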
Yes, and I'm sure you could in 2013 when this happened as well, but it comes enabled by default and I learned not to enable automatic upgrades on Debian derivatives instead.
These days we mostly deploy our stuff using containers anyway, and for my own personal machines I've just switched to Arch Linux since.
From this comment, I assume you ran testing or unstable.
Stable is great for setting up a server and letting it run on its own for years without any maintenance (HN warning: no maintenance is of course never a good idea, but at least Debian makes it possible).
Testing ran into update conflicts at least once a month, but they were generally easy to resolve without bricking.
Haven't run Debian en masse for a long time; for stable servers I stick to the latest Ubuntu LTS (20.04) and have unattended upgrades on. I haven't had any issues yet.
Requesting you take cloud out of your name. We release you from the team. I regret to say we are going to have to provide you with a win11 laptop, and the authorities are on the way to claim the hardware you appear to be currently abusing.
We can now kiss goodbye to security for the embedded world and its IoT-related brethren.
It is often the directory separation of /bin and /usr/bin that is used to denote the extent of the firmware's scope: between the binaries that are required for boot-up and those required to support the applications. (Thanks, PDP-11.)
Such a wonderful boundary of IoT upgrade scope can be, and often is, expressed as a two-stage upgrade with /bin as read-only; now, not so much.
This is yet another case of IoT system engineering design getting steamrolled by diminutive desktop mindsets, despite progress by the systemd author.
Try yanking that Ethernet cable from its physical socket and watch whether your long-running, deeply-stateful process can survive. (It won't, unless you retrain it, against the KISS principle, to ignore such netdev disappearance.)
First of all, this is a reliability question, not a security one. If someone malicious already has write access to /usr/bin then you're already screwed, there's not much damage you can do with a compromised /bin that you can't already do with a compromised /usr/bin. Not to mention that they probably also have enough privileges already to do whatever damage they want to do, without having to trick some other process into running their payload.
From the reliability perspective (protecting against accidental damage), the /usr merge makes it easier to set up A/B booting and protect the entire system, rather than blessing a small subset of the system and preventing anyone from upgrading those components at all in the future.
Perhaps it would be better if you stopped implying and started to make your point explicitly. Because if your point is that there is some security boundary in Windows that allows an application installer to write to %PROGRAMFILES% but not to \WINDOWS\SYSTEM32, then you are sorely mistaken.