Understanding the bin, sbin, usr/bin, usr/sbin split (2010) (busybox.net)
378 points by kylerpalmer on May 6, 2016 | 129 comments



For all the people who think this is still an issue: it is not:

* Arch Linux: https://wiki.archlinux.org/index.php/Arch_filesystem_hierarc...

* Fedora: https://fedoraproject.org/wiki/Features/UsrMove

* Debian: https://wiki.debian.org/UsrMerge

* Ubuntu: https://wiki.ubuntu.com/FoundationsTeam/Specs/Quantal/UsrMer...

And this has been true for quite a long time. I think the last distro to do the /usr merge was Debian, and that was already 3 years ago. So I find it surprising that people still think this is an issue.

The only thing that remains is the difference between bin and sbin that Fedora/Ubuntu/Debian still maintain (Arch Linux simply merged everything).

P.S.: of course, there are still distros that use a split /bin and /usr/bin, /lib and /usr/lib, etc. OpenWRT is one of them. However, I think the major distros have already migrated.


Excuse my naivety, but I don't really understand what you mean by "This is not an issue and hasn't been for quite a long time". When I do ls / on any of my machines I still see /bin, /usr/bin and all the rest.


They are all symlinked - just different names for exactly the same directory in the file system. If you compare the contents, you will find they are identical.

Edit: Nevermind, I am wrong :)


  $ ls -1 /bin | wc -l
  127
  $ ls -1 /usr/bin | wc -l
  2271


The Ubuntu document is from 2012 and the merge is not done on 16.04. How can we know if the change is still planned, and when? Is there a roadmap somewhere?


Arch still occasionally has some packages installed to /opt. Annoying as hell, but they expect to live there for some reason. Android-studio was the latest unexpected surprise for me.


I've seen distros choose /opt for software not built by the maintainers, but simply repackaged in binary form. This would include software like oracle/sun java.


Once upon a time, I read the LSB[1] filesystem hierarchy, of which the only thing I consider useful was that non-distro-provided third party software (i.e. anything you'd buy on a disk and install) should live in opt. It didn't make a huge deal of sense then and it still doesn't now, but it sorta provides a way to know which directories can safely be rm -rf'd.

[1]https://en.wikipedia.org/wiki/Linux_Standard_Base


fwiw that's from AUR, not Arch official.


Debian only enforced that if there is a /usr partition, it must be mounted by the initramfs. Moreover, the advanced partitioning scheme in debian-installer doesn't propose a separate /usr anymore. /usr can still be a separate partition, and the split between /bin and /usr/bin still exists unless you install the usrmerge package.


"Note that Arch follows the systemd FHS convention of symlinking"

The Systemd convention? What the fuck? Does Systemd have to touch everything??


The systemd maintainers were among the first to put together a technical proposal and do a lot of the implementation work for the UsrMerge, in a way that was reusable across distros.

systemd itself does not depend on being on a UsrMerge'd system, and otherwise, the proposal does not have anything to do with systemd.

From https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...

"Note that this page discusses a topic that is actually independent of systemd."


You are quoting 2013 doco. systemd has (again) changed since then.

As of 2016, the position is that whilst there is still code in some of the programs to handle a split /usr, a significant part of the system (in particular Plug and Play device management) now references /usr and depends upon it, to the extent that it is already a requirement that /usr always be present: i.e. that it either be on the root volume or be mounted by /init (on the initramfs) before it invokes systemd.

* https://lists.freedesktop.org/archives/systemd-devel/2016-Fe...

Lennart Poettering is indeed now pushing for systemd to impose a similar requirement for /var .

* https://lists.freedesktop.org/archives/systemd-devel/2016-Fe...

* https://lists.freedesktop.org/archives/systemd-devel/2016-Fe...


Uh, I don't see how this is related to UsrMerge? You can still have separate /usr/bin and /bin directories, you just need to make sure /usr is mounted and accessible during boot. Requiring /usr be available during boot has long been the case for Linux, even before systemd came along.


So, as Linus controls the kernel, Lennart wants to control userspace?


I'm not sure Torvalds cares about control either way; he just wants to build a kernel to be proud of.


I have a sneaking suspicion one day systemd is going to try to replace the gnu in gnu/linux.


Why, when they can grab the reins?

Iirc, glibc used to be maintained by a RH guy until he got on so many people's wrong side that a fork was made, which ran for a number of years until the original was merged into it and the fork renamed.


That's one curious definition of "reusable"...


Systemd walks into a bar, shoots the owner, proclaims it is the new owner, turns the bar into a casino, adds a hotel, a grocery store and an init system.


systemd opens a new bar right next to the most popular bar in town, becomes far more popular (possibly due to better advertising), people complain that their beer tastes worse, and when other people tell them to just go to the original bar, they complain that all their friends are at the new bar.


Also closes the doors on the auto shop next door. Proclaiming you can either go through the bar to get to it, or climb over the barbed wire fence out back to get in.

Others instead use abandoned tools from said auto shop to set up a new one across the road, and get daily ridiculed by the systemd patrons for it.


systemd stopped selling just beer a long time ago. It now sells books, used cars, cement mix and cheap Chinese take away.


Systemd has basically become the enforcement arm of Freedesktop, that in turn is a cover for RH/Fedora running the Linux user space show.


> And this has been true for quite a long time. I think the last distro to do the /usr merge was Debian, and that was already 3 years ago. So I find it surprising that people still think this is an issue.

It hasn't happened on openSUSE yet :/


See https://news.ycombinator.com/item?id=11626765 for two people recently discussing the actualities of this.


Nitpicking:

If you look at the very early versions of the Unix manuals, before even /lib was invented, there was a notion that /bin was the "system" (section 1) and /usr/bin was "user software" (section 6). This fell by the wayside when they ran out of disk space on / and started putting section 1 commands in /usr/bin. At some point the system/user distinction was abandoned and everything but the games got moved to section 1 of the manual.

(Back in those early days /lib files used to be in /etc. So did /sbin. There's still a meaningful semantic difference between programs any user could run vs programs only useful to root.)

On *BSD there is no initramfs equivalent and the / partition still serves its traditional role of the minimal startup environment. And /home never existed before Linux - it was always something like /usr/home as far as I can tell.


  > And /home never existed before Linux
SunOS used /home before Linux existed.


I can't help think that initramfs is a pox on Linux. It seems to lead to all kinds of "solutions" that are bothersome at best to maintain unless you have big dollars at your back.


I've never quite understood it myself.

It's far from mandatory (I've run Gentoo for years without it) and when you have to fix stuff in the mkinitrd script, you just cry...


NixOS is one of the few distros that recognizes and solves this problem:

https://nixos.org/nixos/about.html

A solution like theirs should become the default. Their transactional package management too, given the number of times my Linux packages got broken by some ridiculous crap. That even a smaller outfit could knock out two problems, one of them major, shows these bigger distros could be gradually knocking out such issues themselves.


It's a lot more work to retroactively impose a new FS layout on an existing distribution than to come up with a new distribution.

If e.g. Ubuntu did this then

1) They'd have to fix all of the packages in the main repository, and then PPAs?

2) Lots of third party software targeted on Ubuntu would break; this would cause a lot of users to want to stay on pre-change versions of Ubuntu, which probably means it would get forked.


"They'd have to fix all of the packages in the main repository, and then PPAs?"

The NixOS site made it look like they didn't have to do that with their solution: just give a meta-view on what's already there for the users.

"Lots of third party software targeted on Ubuntu would break; this would cause a lot of users to want to stay on pre-change versions of Ubuntu, which probably means it would get forked."

This could happen with any number of changes. Might still be worth it for at least major changes like making broken packages recoverable. I can't imagine myself getting off Ubuntu just because package failures can't brick my system anymore. Some developers might gripe about having to make changes but that makes them look like assholes, not Ubuntu.

I mean, VMS had this feature in the 80's with their versioned filesystem and transactions at app level. It's about time at least the package managers of Linux can do same reliably for apps. At least one of them did to their credit.


1) A lot of Ubuntu's (really Debian's) packages are ancient software that nobody uses. That being said, some packages are not, and there is usually a bit of effort needed to get them to work. OTOH NixOS is still growing, whereas Debian is declining (Ubuntu might pick up the slack, but it's not clear)

2) Most third-party software is indeed broken out-of-the-box on NixOS; but patchelf and fhsuserenv have been developed so that it can run regardless (there's steam, flash, adobe reader... all patched to work. Running a .deb is as hard as specifying all of its dependencies).


I was about to ask if any BSD's handle this better, great to see that NixOS does.


In FreeBSD the base system stuff goes into /bin /lib and /etc, and all the ports target /usr/local as the prefix. Even the config files of the ports go to /usr/local/etc.


I really like how freebsd does it, nixos has me interested too.


Though NixOS and Guix take a different approach from FreeBSD's: they treat packages as isolated immutable objects and use an unconventional directory layout. (I wanted to note this because someone who doesn't know the OSes named might have thought that the FreeBSD and NixOS/Guix approaches are similar.)


You don't lose anything either; you can still imperatively build packages if necessary.


What do you mean? That sentence can read a few ways.


Interesting.

First there was /bin for the things that UNIX provided and /usr/bin for the things that users provided. Part of the system? It lived in /bin. Optional? It lived in /usr/bin.

In SunOS 4 (and BSD 4.2 as I recall) the introduction of shared libraries meant you needed a "core" set of binaries for bootstrap and recovery which were not dynamically linked, and so "static" bin or /sbin (and its user equivalent /usr/sbin) came into existence. Same rules for lib. /lib for the system tools, /usr/lib for the optional user supplied stuff.

Then networking joined the mix, and often you wanted to mount a common set of things from a network file server (to save precious disk space) but keep some things locally; that gave us /usr/local/bin and /usr/local/lib. Then in the great BSD <==> SVR3.2 merge, AT&T insisted on everything optional being in /opt, which had a kind of logic to it. Mostly about making packages that could be installed without risking system permission escapes.

When Linux started getting traction it was kind of silly that they copied the /bin, /sbin, /usr/bin, /usr/sbin stuff, since your computer and you are the only "user" (usually), so why not put everything in the same place? (Good enough for Windows, right?)

File system naming was made more important by the limited permission controls on files: user, group, and other is pretty simplistic. ACLs "fixed" that limitation, but the naming conventions were baked into people's fingers.

The proliferation of places to keep your binaries led to amazingly convoluted search paths. And with shared libraries and shared library search paths, even more security vulnerabilities.


> good enough for Windows right?

Only in the minds of people who don't know much about Windows.

The Windows model is, after all, for globally installed individual packages to be largely rooted at "/Program Files/%COMPANY_OR_PERSON%/%PACKAGE%/" with "%USERPROFILE%/AppData/Local/%COMPANY_OR_PERSON%/%PACKAGE%/" as a root for per-user data. The former dates all of the way back to Windows NT 3.5, the latter "merely" dating back to Windows NT 4 (where it was "Application Data" rather than "AppData").

* https://blogs.msdn.microsoft.com/cjacks/2008/02/05/where-sho...

Unix and Linux people will recognize this as akin to NeXTSTEP's ~/Apps, /LocalLibrary, /LocalApps, and so forth from the early 1990s; and to Daniel J. Bernstein's /package hierarchy (for package management without the need for conflict resolution) from the turn of the century.

* http://cr.yp.to/slashpackage.html

* http://cr.yp.to/slashcommand.html

And a few years after NeXTSTEP introduced its directory hierarchy, SunOS 5 (a.k.a. Solaris 2) introduced the System 5 Release 4 /usr merge.


> since your computer and you are the only "user" (usually) so why not put everything in the same place

The Linux (and late UNIX) way was not about changing /usr for each user, but about sharing /usr across several computers. This way, you had to administer just one computer for a network of any number of terminals.

That became obsolete only after modern package and configuration managers were created.


I wonder what is more likely in our lifetimes - a sane Linux filesystem layout, or viable fusion power. Honestly I'm not sure.


gobolinux - http://gobolinux.org/ - has had a completely different filesystem layout, for many years. It's languished in the past few.

archlinux has for a couple of years symlinked /bin, /sbin, and /usr/sbin to /usr/bin, and /lib, /lib64 to /usr/lib https://wiki.archlinux.org/index.php/arch_filesystem_hierarc...

If you make your own linux distro from scratch, you can get many of the open source pieces of a linux distro to use whatever layout you want, while some pieces do require patching and fixing ... but you do have the source :)


The problem with Gobo is that this progression never works:

New FS Layout -> User Adoption -> Profit

Because at the end of the day the layout of the filesystem is never the selling point on any given distro. It always comes down to the software - either the amount of it, or the newness of it, and that is why people flock to Ubuntu and Arch respectively.

It is much more prudent to argue for filesystem improvements in those distros, and get them implemented there, than to fork it out. Gobo demonstrated both the validity and the utility of the approach, but it is up to distro makers to actually use the evidence given that the traditional Unix layout is garbage.


Much like Gobolinux, Guix also has a completely different filesystem layout. Packages are installed into their own namespace under /gnu/store but then linked to expected locations under a profile root (such as ~/.guix-profile).


Gobolinux still houses the old layout for compatibility.

Heck, I think it could adopt any FS layout it wanted as long as some defined place to house the "Programs" tree was provided.


re gobolinux

That's the one I was trying to think of! Yeah, they solved the directory problem. NixOS solved the packaging problem. Little projects doing what they can to fix the UNIX/Linux mess.


Reasons why you still might want to keep / and /usr isolated:

1. NFS mounts. Your local or initial BOOTP image has a minimal root, your (non-root-writeable, BTW) NFS /usr has Other Stuff.

2. Mount options. Root is often (and perhaps still must be -- /etc/mtab for example -- I've stopped closely tracking discussion to make root fully read-only) writeable. It may also require other mount permissions, including device files and suid. Other partitions don't require these permissions, and there is some Principle of Least Privilege benefit to mounting such partitions without them. /usr requires suid, but not dev, and may be nonwriteable except for OS updates.

3. Recovery partition. I'll frequently stash a second root partition, not typically mounted. It has minimal tools, but enough to bootstrap the system if the primary is hosed for whatever reason (more often myself than any other). Without a clean / /usr split, this becomes more complicated.


mtab is not a problem anymore. On Arch, /etc/mtab is just a symlink to /proc/self/mounts.

As for the recovery partition, you don't need the split for that, either. Just have a live system on the recovery partition that mounts the normal root FS. Then you can chroot into there for recovery tasks.
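
A minimal sketch of that recovery flow, assuming the normal root lives on /dev/sda2 (device name hypothetical):

  # from the live system on the recovery partition
  mount /dev/sda2 /mnt
  # bind the API filesystems so tools inside the chroot behave
  mount --bind /proc /mnt/proc
  mount --bind /sys /mnt/sys
  mount --bind /dev /mnt/dev
  chroot /mnt /bin/sh   # now repair packages, bootloader, etc.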


Right, that seemed to be the mtab solution Debian were angling toward. I think there were some odd edge cases where it didn't behave well, though I don't recall what those were. Perhaps the ability to specifically edit the contents to allow fixing of fubared mounts -- almost certainly loopback or NFS, both of which get quite twitchy at times.

I don't recall my precise thinking on a clean root vs. /usr split on the recovery partition, though it may have avoided some confusion over binaries. Or perhaps that you could mount the /usr partition itself independently if you wanted, assuming primary root was hosed.

Not being able to mount a separate /usr would negate that option.


> Not being able to mount a separate /usr would negate that option.

You can mount the root to e.g. /mnt and symlink (or bind-mount) /mnt/usr to /usr, if that's what you need.
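
For example (device name hypothetical):

  mount /dev/sda2 /mnt          # the hosed system's root
  mount --bind /mnt/usr /usr    # expose its /usr at the usual path
  # or, if /usr doesn't already exist in the recovery root:
  # ln -s /mnt/usr /usr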


Bind-mounting does give you some options. Still doesn't help if root's hosed.

And it wasn't an option when I'd first come up with this clever scheme.

One of my current challenges with Linux is identifying which information/education of mine is wholly outdated. This will happen to you in time as well....


The elephant in the room is /opt, /etc/opt, and /var/opt. The System V and filesystem hierarchy specifications say that those locations are, and I quote, "for third party and unbundled applications". Yet some distributions, like for instance Debian or Ubuntu, do not even include them, precluding commercial software vendors from ever delivering software for those operating systems (no, an unbundled application can never be delivered into /usr, because that is the vendor's namespace, and an updated version from the vendor might very well bring a production application down).

/opt: application payload

/etc/opt: application's configuration files

/var/opt: application's data.

For applications with more than two configuration or data files, it is a good idea to create /etc/opt/application and /var/opt/application.

If your company is mature enough to understand what system engineering is, and employs dedicated OS and database system engineering departments, /opt/company, /etc/opt/company[/application], and /var/opt/company[/application] become very practical. If this approach is combined with delivering the application payload and configuration management as operating system packages, one only need worry about backing up the data in /var/opt, and that's it.
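
As a sketch, for a hypothetical vendor "acme" shipping an application "billing", that scheme works out to:

  /opt/acme/billing/bin/billingd         # application payload
  /etc/opt/acme/billing/billingd.conf    # configuration
  /var/opt/acme/billing/db/              # data -- the only part needing backup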


> Yet some distributions, like for instance Debian or Ubuntu, do not even include them, precluding commercial software vendors from ever delivering software for those operating systems

What? Why should it be impossible for a third-party package to just create /opt? They will probably need to extend the PATH and LD_LIBRARY_PATH, but /etc/profile.d is very much standardized AFAIK.
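
For instance, a package could drop a snippet like this into /etc/profile.d/ (all names hypothetical):

  # /etc/profile.d/acme-billing.sh
  export PATH="$PATH:/opt/acme/billing/bin"
  export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/acme/billing/lib"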


> Why should it be impossible for a third-party package to just create /opt?

It is not impossible, obviously, but /opt and such should come from the operating system's vendor, and the vendor should be the only one to decide which filesystem permissions to provide: 0755? root:bin or root:sys? root:root? bin:bin? The vendor should decide that, since a vendor is supposed to know their operating system best.

Third parties might not agree, or even decide correctly for that operating system.

This is system engineering and architecture, something which beside operating system vendors, software vendors do not have a clue about in the slightest.


Indeed Adobe Reader for Linux used to land itself in /opt on Ubuntu.

The power of run-as-su installers...


The FHS thing in Debian and Ubuntu must be relatively new, I certainly have no recollection of it back then when we looked into delivering unbundled software for it. And we did look for it, and we even combed through the Debian packaging specification back then.

Whatever it might be, or has been, if Debian and Ubuntu did get /opt, /etc/opt, and /var/opt and are now one small step closer to being System V compliant (from which FHS stems), I for one am very glad for that.


Debian does include /opt and friends, as required by FHS: https://sources.debian.net/src/base-files/9.6/debian/postins...


I like that FreeBSD dedicates a man page, HIER(7), on this topic - https://www.freebsd.org/cgi/man.cgi?query=hier - "layout of file systems"



systemd has a man page on this as well, file-hierarchy(7): https://www.freedesktop.org/software/systemd/man/file-hierar...


/{bin,sbin} is for stuff you cannot live without (e.g., sh, mkdir, chmod, rm, ln, etc.)

/usr/{bin,sbin} is for stuff you can live without but expect to be there (e.g., make, grep, less).

/usr/local/{bin,sbin} is for stuff you/your local admin installed (e.g., mosh, cmake).

Also, I use $HOME/{bin,sbin} for wrapper scripts, binaries that I need but don't want to install system-wide (single-file C utils that come without man pages, stuff like that).
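
One way to wire all that up in ~/.profile (ordering is a matter of taste; personal directories go first here so wrappers win):

  PATH="$HOME/bin:$HOME/sbin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin"
  export PATH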

I'm not sure where the confusion comes from and I don't really see any advantage in merging / and /usr. On the flip side, I do think there's value in keeping /{bin,sbin} as small as possible (because that stuff has to work).


Regarding your last sentence, why does having grep in /bin reduce the chance of mkdir working?


It doesn't reduce the chance of mkdir working, but it does increase the chance of you missing a bug in the part of your source tree that needs to be rock solid.


Did you read the link?


Yup. I'm not sure what you're hinting at though. Are you saying that's not really why they exist? True, but that's how people use them.


MacPorts use /opt/local :)


And, interestingly, HomeBrew uses /usr/local "because Apple left it for us when they abandoned this whole mess" ..


/usr/local/bin has the big advantage that it is in the default PATH.


I was about to say, I've seen /opt/local somewhere. :)


And Fink uses /sw. Which left me looking for some other place to dump stuff I compiled myself.


The /bin vs /usr/bin split makes perfect sense, but I always thought /sbin was superfluous, so I'm happy to see it being deprecated by many distros.

I expect, with the increasing moves towards app stores and sandboxing on all platforms, that the days of installing a package's contents all over the filesystem are limited, and things like xdg-app are probably going to take over, with an app being mounted into the filesystem in a sandbox as it is run.


Funny. I always thought the /bin vs /sbin split made more sense than / vs /usr. I very much prefer that my shell's autocomplete does not stop on binaries that I have no business running as a normal user, so I like that root-only tools are in /sbin.


One outcome of this that drove me nuts back in the day on Debian systems was that libpcre was installed in /usr and grep was in /bin. This meant Perl regexes were not supported, because the Debian maintainers didn't want things in /bin to depend on /usr, and didn't want to "bloat" / with something as distasteful as libpcre.


How badly would things break if we tried to "fix" this split today?


Probably not too badly if we took a phased approach.

1 - Establish a new, consistent filesystem hierarchy. New code should use the new hierarchy. Move the legacy stuff to the new hierarchy and symlink the old locations to avoid breakage.

2 - After some time, make the legacy locations read-only. This will highlight any non-conforming code.

3 - After even longer, delete the legacy paths.

It might even be easier if we could redirect writes (I'm not too versed in filesystem capabilities) from the legacy paths to the new ones.
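
Phase 1 is roughly what the UsrMove distros did. As a sketch of the idea only -- don't run this on a live system, since cp and rm themselves live in /bin, which is why distros do the move from an initramfs or inside the package manager:

  cp -a /bin/. /usr/bin/ && rm -rf /bin   # move legacy contents to the new hierarchy
  ln -s usr/bin /bin                      # leave a compatibility symlink behind
  # likewise /sbin -> usr/bin (or usr/sbin) and /lib -> usr/lib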


> Establish a new, consistent filesystem hierarchy.

systemd is kind of enforcing this already, preferring /usr (where most of the packages go) over naked paths. This is the trend on relatively modern desktop systems, which thankfully preserves standard fs hierarchy while mostly keeping everything in one place. (Unfortunately there are bunches of non-desktop uses for Linux out there.)

See also https://freedesktop.org/wiki/Software/systemd/separate-usr-i....

> It would maybe be even easier if we could redirect writes (I'm not too versed in filesystem capabilities) from the legacy paths to the new ones.

Quite a few distros (and users like Rob Landley) use symlinks from /bin to /usr/bin and the same goes for /lib. sbin is sometimes linked to bin. This does some of the redirection.

(I am not aware of the /usr/tmp and /var/etc stuff mentioned in the message.)

> After even longer, delete the legacy paths.

Nah. Millions of scripts have shebangs like #!/bin/sh and I don't really expect to see this being changed...

Workarounds using `env` are kind of crippled since a.) it introduces an extra level of dependency and indirection and b.) it stops the user from reliably passing options to the interpreter (Linux only cuts at the first space, with all sorts of other *nixen doing different things with multiple spaces).
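
To illustrate b.), take a hypothetical script starting with:

  #!/usr/bin/env python -u

Linux hands env the single argument "python -u", which it then fails to look up as a command name, instead of the two arguments "python" and "-u".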


Still, it clutters and confuses new users. I wonder if some of the fs APIs could hide the silly/redundant directories?


I much prefer compatibility symlinks actually existing in the file system over some complex, hidden pathname-resolution override rules lurking in the kernel. That way Windows lies.


> lurking in the kernel. That way, Windows lies.

Windows implements many overrides (e.g. "My Documents") as NTFS junctions with some hidden (+ system?) property set. (Sometimes you will find a proper hidden property handy, like in this case.)

Stuff like CON, NUL in non-UNC paths do hurt, though. Device files from an age without directories....


Don't get me started on Windows' bizarre, barely-documented behaviour. Magical filenames pointing to devices and file virtualization are bad enough... just try to decipher whether a given registry path is what it appears, or is redirected through yet more hidden virtualization magic some time. And we haven't even come to the truly wondrous series of overrides that occur when looking up names in the NT object namespace...


Some OSes, like Solaris, have already "fixed" this. However, the uglier thing that needs fixing is the initrd/initramfs. It served a purpose in 2002, but is now almost totally a historical artifact that adds more complication to the boot process. On distros afflicted with it, it makes boot problems even harder to fix, because you have a layer-0 filesystem with its own set of basic tools that are not used, or visible, at any other time. It kinda made sense in 2002, when you had to make sure that your boot partition was within the first 8 gigs of the disk, but nowadays it is just a source of redundancy and errors.


None. On Fedora and Arch Linux at least, /bin, /lib, and /sbin are symlinked to the same directories in /usr/.
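
A quick check on such a system (the sbin target differs per distro -- Fedora uses usr/sbin, Arch merged it into usr/bin):

  $ for d in bin sbin lib; do readlink "/$d"; done
  usr/bin
  usr/sbin
  usr/lib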


One workaround you can do is to add lots of symlinks to keep the old paths working. This is what they do in Gobolinux (a Linux distro that has a radically different filesystem organization)


One funny thing about Gobolinux is that you could still do the equivalent of a / /usr split.

You never really search /Programs for binaries, you aim that at /System/Index (housing the equivalent of the classic FHS for compatibility reasons). This means that things do not really need to live in /Programs to be useful.

And because of that you have two tools DetachProgram and AttachProgram that allow you to place a package (really just a tar-ball of the /programs branch) in an arbitrary location and have them be symlinked into place.

Thing about Gobolinux is that it does not depend on special Gobo-only features. Everything is built around tools you will find in any *nix, and any action can be performed manually if need be.

Nor does it demand that something upstream do something special to placate them, as long as --prefix or similar actually works.


I want to react to the signature FUD: GPLv3 is clearly a superior copyleft license to GPLv2 despite Linus' latently permissive opinion on the anti-TiVoization clause.


It's not FUD, and GPLv3 is not clearly a superior license. It may surprise you to learn that different people value different things and have different opinions, and so want different licenses. GPLv3 is really a pretty extreme license that imposes a lot of restrictions that a lot of people simply don't want. If GPLv3 does what you want, by all means use it, but don't denigrate the choices other people make.


I wonder if people even know what those restrictions are and how exactly the anti-tivoisation clause works. It does not forbid anyone from running software on tivoised devices, which is what some people seem to think it does. All that it requires is that if you distribute the software primarily to be used on a User Product (as GPLv3 calls it), then as part of the installation information (which in GPLv2 used to just mean install and build scripts), you must also provide the signing keys for the hardware.

This also does not mean that GPLv3 makes software signing impossible and that you must forbid users from rejecting unsigned software if they so wish. It doesn't mean that you have to distribute every secret key that you use for signing software. It merely means that you have to give your users a way to install software on the User Product as they see fit, if they see fit. It's up to the user to decide to override any signing feature or not. It's very much in spirit with GPLv2 that required installation scripts. As far as GPLv3, hardware signing keys are just another part of installation scripts (the actual term used in GPLv3 is "Installation Information").

http://radar.oreilly.com/2007/03/gplv3-user-products-clause....

https://copyleft.org/guide/comprehensive-gpl-guidech10.html#...


Or to put it another way, GPLv3 mandates that your hardware be insecure (because you cannot prevent a malicious actor from installing malicious software on someone's device, which would normally be done by requiring all updates to be codesigned by the manufacturer). And it's not just limited to software that would be installed on the hardware; companies like Apple won't even allow employees to install GPLv3 software on their work computer because if a single piece of GPLv3-licensed code makes it into iOS, even completely accidentally, the license demands that Apple release their root signing keys to the world, completely destroying the security of hundreds of millions of devices.

Also don't forget the patent stuff in GPLv3. That's not quite as scary as the anti-TiVoization stuff, but it's still pretty significant for large companies.


> GPLv3 mandates that your hardware be insecure (because you cannot prevent a malicious actor from installing malicious software on someone's device, which would normally be done by requiring all updates to be codesigned by the manufacturer).

I think that's wrong: http://www.gnu.org/licenses/gpl-faq.en.html#GiveUpKeys:

> I use public key cryptography to sign my code to assure its authenticity. Is it true that GPLv3 forces me to release my private signing keys? (#GiveUpKeys)

> No. The only time you would be required to release signing keys is if you conveyed GPLed software inside a User Product, and its hardware checked the software for a valid cryptographic signature before it would function. In that specific case, you would be required to provide anyone who owned the device, on demand, with the key to sign and install modified software on his device so that it will run. If each instance of the device uses a different key, then you need only give each purchaser the key for his instance.

It sounds like a manufacturer could ship a GPLv3-compliant User Product with two signing keys: a secret one known only to the manufacturer, and a second per-device key than can be used by the end-user to install modified software. Such a manufacturer wouldn't be forced to disclose their signing key because the user already has access to an equivalent.

I also imagine it would be kosher for the device to provide the user the option, in the name of security, to permanently clear the second per-device key, so only manufacturer updates can ever be installed.

> companies like Apple won't even allow employees to install GPLv3 software on their work computer because if a single piece of GPLv3-licensed code makes it into iOS, even completely accidentally, the license demands that Apple release their root signing keys to the world, completely destroying the security of hundreds of millions of devices.

If Apple is behaving that way, it's Apple's own fault, not the GPLv3's. They could have chosen to design their software in a way that would have given them better options if they are ever forced to comply with the GPLv3, but they chose not to.


What you've just described is a completely insecure hardware platform. Giving out a second private key that can still be used to install third-party updates is just as insecure as giving out their normal private key, the only real difference is if anyone actually looks at the code signature they can tell the difference between third-party and first-party code. Allowing the user to lock out that second key doesn't fix anything because 99.99% of users will never do that, or even know it exists.

> If Apple is behaving that way, it's Apple's own fault, not the GPLv3's. They could have chosen to design their software in a way that would have given them better options if they are ever forced to comply with the GPLv3, but they chose not to.

That makes no sense. What you said is basically equivalent to "That's Apple's fault, they could have just designed their hardware platform to be completely insecure".

GPLv3 is fundamentally incompatible with having a secure hardware platform. This is absolutely GPLv3's fault.


>GPLv3 is fundamentally incompatible with having a secure hardware platform

That's funny because Chromebooks ship with GPLv3 code.


I'm not familiar with the security model of Chromebooks. How does it work?


Out of the box, they only accept Google OS updates. For reference, the OS is basically Gentoo+Chrome.

If you've a mind to, you can put the Chromebook in "developer mode" with a magic sequence of commands (requiring physical access as it's a boot procedure). This gives you root. To protect naive users from being exploited, a Chromebook in developer mode will display a nasty warning on boot. This warning lingers on the screen for an annoying amount of time, accompanied by a beep. The only way (and it's undocumented) of skipping the wait and the beep is with Ctrl-D; pressing any other key while on this screen results in the Chromebook being completely reset. All this is to ensure that it's virtually impossible to run under developer mode unintentionally.

If all this gets too annoying and you want to use the Chromebook as a fully general device, you can blow it away by replacing the bootloader. Doing this requires disabling write-protection on the flash, which requires opening the case.

Disclaimer: I own a Samsung ARM Chromebook. For all I know the precise details may vary across the Chromebook line, though I believe they all work similarly. Fuller details are here: https://www.chromium.org/chromium-os/developer-information-f...


Interesting, but from your description it sounds like if I can get my hands on someone else's Chromebook I can still put it into developer mode and install my own software on it, right? Sure, this mostly prevents someone from using a Chromebook that someone else installed malicious software on, though if that other person had time to open the case they could even get around that. But this doesn't solve the case of I have a device with private info on it, you get the device, and want to bypass the security measures on the device to get at my data. Or in other words, it doesn't solve the situation seen in the recent Apple vs FBI case.


> Allowing the user to lock out that second key doesn't fix anything because 99.99% of users will never do that, or even know it exists.

That's easily solved: On initial device setup, the user could be asked to explicitly disable the key by asking something like "Would you like to secure your device by permanently disallowing the installation of untrusted software? Y/N"

Also, as someone else clarified below, the alternate key I was proposing is unique to the device. So any attack against you would have to be tailored to your device. That leaves the main threats as: 1) an NSA-like shipment interception, and 2) someone getting access to your user key after the device is in your possession. 1) can be mitigated by buying your device in a random retail shop, and 2) can be prevented by immediately and permanently clearing the user key.


Now you have a number of thorny design decisions.

For example, can the user change their own key? If yes, the device can never be trusted second-hand - the OS might have been modified to secretly accept more keys. If no, then the device also can never be trusted second-hand - the previous owner could well have retained the key. And which key is superior, the manufacturer key or the user key? Can the possessor of a key override actions attempted using the other key?

Also, as written, this is just a bad solution. So you ask the user on initial setup. Let's say 99% of users won't know what the question is asking. So maybe they hit "yes", and now we're just back to iPhones and no user freedom, for them or any subsequent device owners. Or maybe they hit "no", and now they're no more secure and there was really no point in asking them. Maybe it's a 50-50 split, depending on how they feel (and not related to how they would choose given informed consent). Or maybe the user is informed, in which case they are very likely to just hit "no", if for no other reason than to preserve the resale value.

Don't get me wrong, I do think this could be done right. But attempting to guarantee "security" (that is, the dubious sort of security that is manufacturer-only updates) to naive users, freedom to power users, and preserve those guarantees as the device changes hands... well, that's a Hard Problem. I think only Chromebooks have attempted it so far.


> If yes, the device can never be trusted second-hand - the OS might have been modified to secretly accept more keys.

That's always going to be a risk with second-hand devices (or even new devices). Who knows what kind sneaky things someone did to a device that they had long-term access to. You can never really totally trust something that you didn't build yourself from the ground up, so you always have to accept some level of risk.

That UI solution was just a random idea, I'm sure there are better ones. Like you said, it's a Hard Problem.


Each user would get their own private key that could only be used on their own device. It could not be used to sign software for someone else's device.


Hmm, that's actually the first suggestion I've heard that's at least technically workable, though it's pretty darn impractical to do at any kind of scale.


> though it's pretty darn impractical to do at any kind of scale.

How so? It's a solved problem. Don't smartphones already have numerous device-specific identifiers and keys loaded to them? All they would really need to do is slap an extra sticker on the device with the device-specific key printed on it. For instance, my phone came with a sticker with its IMEI printed on it.


IMEI can be recovered once you throw the sticker away. A private key that can be used to install new software cannot be recovered, or if it can that's a huge security risk.


The device should not have the private key loaded onto it at all, just the corresponding public key. The private key would only exist on the sticker and thus be unrecoverable if you threw it out or destroyed it.

The IMEI example was just to demonstrate the ability to slap a device-unique code + matching sticker on something during manufacturing at scale.


Slapping stickers on something is not a scaling issue. The secure storage and transmission of private keys is. The idea that you'd just slap the private key on a sticker and have no other record of it is not something I even considered because that's very user-hostile behavior (unless they already have the intention to mod the device software before they buy it, they're unlikely to retain the sticker). It also screws with the secondhand market, both because secondhand devices won't have their key, and because the original owner obviously had access to the key and could have tampered with the software on the device before selling it.


maybe a physical button to generate a new private key + QR code for the PK displayed on screen ?


You're confusing physical access with ownership. If I have physical access to a device, that does not mean I should have the power to install compromised software on it, because it may not be my device at all.

This is a surprisingly common oversight that the hardware freedom crowd keeps ignoring. I don't understand why so many people just implicitly assume that physical access means security should be thrown out the window.


As I mentioned in another comment in this thread, it is already done for randomized administrator and wifi passwords in consumer routers.


Actually, I think Apple is already more or less compliant with this part of GPLv3, as they allow developers to self-sign the software, right? You just have to pay for a self-signing key. That's sufficient, because it means you can run modified software. All they have to do is lift the restriction to pay for the self-signing key.


I'm talking about the OS here. But the App Store is also incompatible with GPLv3 and GPLv2 for other reasons.


Is there a problem with, say, printing the key on the device case, as is done with administrator passwords for consumer routers? Or, if you're worried about casual skimming, inside the case? Under a tamper seal if it makes you feel better? You're basically at the Chromebook security model by that point, which seems well-regarded around here.

It's pointless anyway trying to "secure" hardware against a sufficiently determined attacker with physical access. There's an argument to be made that physical access should equal software ownership, philosophically.

It's also worth noting that requiring all updates to be signed by the manufacturer does not protect you from malicious code, as manufacturer updates can also be malicious. Ultimately, the "owner" of the device should be at the top of the pyramid of trust.


Physical access should not equal ownership. Haven't you been paying attention at all to the Apple vs FBI case? Apple works very hard to keep devices secure even when they're in the possession of someone else. Obviously in this particular case a third-party firm was able to crack the iPhone (though without saying how), but you can bet Apple is doing everything they can to figure out how and fix it.


> GPLv3 mandates that your hardware be insecure

Oh please. It really doesn't, it merely mandates that the user is able to override a manufacturer's lockdown. Not that any nitwit should be able to override a user's lockdown.


A platform that is insecure-by-default and relies on users being knowledgeable and motivated enough to learn how to lock down their own devices is still an insecure platform.


People have taken their time to draw you a picture of a solution you could have come up with yourself. They are not talking about opt-in security, you are the only one. GPLv3 does not hinder security. You are furthering FUD.


> Also don't forget the patent stuff in GPLv3. That's not quite as scary as the anti-TiVoization stuff, but it's still pretty significant for large companies.

The Apache license has a nearly identical clause (in fact, I believe GPLv3 was inspired from the Apache license) and companies like Google and Apple have used the Apache license without destroying their business. Anti-retaliation clauses don't seem to be a death knell.

http://en.swpat.org/wiki/Patent_clauses_in_software_licences...


GPLv3's patent clause is effectively identical to Apache License, Version 2.0? I wasn't aware of that. I haven't done much research on the patent angle, I only brought it up because I actually talked to one of Apple's lawyers at one point about GPLv3 and they brought up the patent stuff as an issue.


Frankly, v3 just clarifies ambiguities in v2 that corporations were driving truckloads through.

It solidifies the freedoms the FSF champions.


I explicitly said a superior copyleft license. That is, I stated the value I presuppose, and in this context GPLv3's superiority to GPLv2 is clear. What's "extreme" is denying users control of their own devices, not the reverse.


GPLv3 is still not clearly superior to GPLv2. GPLv3 is only superior if you agree with the new restrictions added in GPLv3, but again, not everybody does. And you know damn well that this is purely opinion, because you already referenced Linus Torvalds's stance on this matter.

There's two issues at play with copyleft licenses. The first is making the source available to others, and the second is allowing others to install modified versions of the software. GPLv3 mandates both. GPLv2 was certainly intended to mandate both, but ends up mostly just mandating the first. If all you care about is having other people who use your software release the source to their modified versions, then GPLv2 is clearly superior to GPLv3 because it has fewer restrictions. Similarly, if you care about having source made available but you also want to have your software become as popular as possible, you may opt for GPLv2 because it's a lot more likely for companies to use your software than if it's GPLv3.


Copyleft is those user-protecting restrictions. I'll refrain from repeating myself.

By your "fewer restrictions" logic, permissive licenses would be the most copyleft. Again, the purpose of copyleft is proliferating "freedom-respecting" software, not amassing open source. Code that is open but that you can't utilize freely, because of a locked-down device or a patent, "misses the point". Preceding references emerged organically and... whoa.

Most GPLv2 bias which isn't caused by Linux' licensing is due to the kind of FUD you've perpetuated in this thread, not any actual issue with GPLv3.


> By your "fewer restrictions" logic, permissive licenses would be the ones most copyleft

Please don't strawman me. I already explained how there's two different freedoms that copyleft licenses seek to protect, and it's perfectly valid for someone to care only about the first freedom but not the second, and for such a person the GPLv2 is superior.

> Most GPLv2 bias which isn't caused by Linux' licensing is due to the kind of FUD you've perpetuated in this thread, not any actual issue with GPLv3.

Contrary to what you may believe, FUD is not defined as "any opinion you disagree with". And by calling my arguments FUD instead of actually trying to address the points I made, you're just telling me that you can't actually argue against what I said so you'd rather try and discredit me.


Ignore, pretend, you'll still be wrong. I'll call it like I see it.


Today I learned that /usr actually means user, not Unified System Resources. Damn, one less thing I can nitpick about.


I thought 'usr' stands for "Unix System Resources"?


I think someone made that up retroactively.


Well this partially answers my question at https://news.ycombinator.com/item?id=11647487


I love the quote in the signature:

GPLv3: as worthy a successor as The Phantom Menace, as timely as Duke Nukem Forever, and as welcome as New Coke.



