I think a lot of the things he lists about how things currently are (under "Upstream Projects") are either wrong or overblown.
For instance, the claim that the Linux packaging scheme doesn't work for proprietary, closed-source software. That's just plain wrong. No, proprietary software makers can't rely on distribution maintainers to package their stuff, but there is nothing preventing them from packaging it themselves and making it available in a 3rd-party repository. Both deb/apt and rpm/yum have had this facility for ages, and it's how a lot of Linux users get software the distro doesn't support (like the media stuff Fedora refuses to include) or software the distro drags its feet on updating. Proprietary vendors can easily use the same system to make their software available and to keep it up-to-date through the existing package manager, instead of running a separate "update checker" constantly in the background like is common on Windows systems. The main problem is that there's more than one Linux distro out there, and they're not compatible with each other. But as Google already does with Google Earth and Chrome, it's not that hard to just target RH/Fedora and Debian/Ubuntu/Mint.
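The vendor side of this amounts to hosting a signed repository and shipping one small config file per packaging family. Roughly (the repo name and URLs here are made up):

    # /etc/yum.repos.d/vendor.repo  (RH/Fedora)
    [vendor]
    name=Vendor packages
    baseurl=https://packages.example.com/rpm/stable/$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://packages.example.com/keys/vendor.asc

    # /etc/apt/sources.list.d/vendor.list  (Debian/Ubuntu/Mint)
    deb https://packages.example.com/apt stable main

After that, the user's normal package manager handles installs and updates exactly as it does for distro packages.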
Another important point is libraries. They complain that the library dependencies are all different on different distros. But there's nothing stopping proprietary vendors from statically linking everything, which is how they usually do it anyway, on both Linux and Windows. Sure, it bloats the binary, but it avoids a lot of library version and dependency problems, and with modern disk and RAM sizes it's not much of an issue.
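For the vendor this is mostly a matter of build flags; illustratively:

    # dynamic: the binary depends on whatever libfoo.so each distro ships
    gcc app.c -o app -lfoo

    # statically link the runtime pieces that vary most across distros
    g++ app.cpp -o app -static-libgcc -static-libstdc++

    # or fully static (note glibc discourages this, e.g. NSS still
    # dlopens shared objects, so musl is often used instead)
    gcc app.c -o app -static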
It seems to me that they're proposing an enormously complicated and bloated system here in an effort to increase usage of desktop Linux, but this isn't going to help.
Running 3rd party packaging repos does work, and we do it at Keybase, but it's surprisingly annoying. Distro upgrades disable your repo by default, and your packages need to install a cronjob to detect when it happens and fix it. And because that scenario is difficult to test, you inevitably get bugs like this one: https://bugs.chromium.org/p/chromium/issues/detail?id=660145
So it works, but I really wish we didn't have to maintain it.
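For the curious, the re-enable check is morally something like this sketch (the file path and the marker string are assumptions based on how Ubuntu's release upgrader comments out third-party sources; the real thing needs more care):

    #!/usr/bin/env python3
    # Cron job: restore our apt source if a release upgrade disabled it.
    import pathlib
    import re

    src = pathlib.Path("/etc/apt/sources.list.d/vendor.list")
    if src.exists():
        text = src.read_text()
        # The upgrader rewrites 'deb ...' lines as
        # '# deb ... # disabled on upgrade to <release>'
        fixed = re.sub(r"^#\s*(deb .*?)\s*# disabled on upgrade to .*$",
                       r"\1", text, flags=re.M)
        if fixed != text:
            src.write_text(fixed)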
This definitely sounds like a bug to me. Assuming your company is maintaining 3rd-party repos for both the older and newer versions of the distro, doing a distro upgrade should, logically, switch to your repo for the newer OS version automatically, so it's completely seamless to the user. If the distro isn't doing that, that seems like an oversight.
Have you considered filing a feature request with the distributions about this? I can see why they do it this way, but there are certainly ways around this problem. (For example, a repo's metadata could include references to the identical repo for other OS versions, so that an upgrade can automatically switch to the appropriate repo.)
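On the rpm side, half of this mechanism already exists: a repo definition can key its URL off $releasever, so as long as the vendor publishes a tree per OS release, an upgrade resolves to the right repo automatically (URL hypothetical):

    [vendor]
    name=Vendor packages
    baseurl=https://packages.example.com/fedora/$releasever/$basearch
    gpgcheck=1

As far as I know, apt has no direct equivalent, which is why the suite name in the sources.list entry has to be managed by hand or by a script.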
> The main problem is that there's more than one Linux distro out there, and they're not compatible with each other. But as Google already does with Google Earth and Chrome, it's not that hard to just target RH/Fedora and Debian/Ubuntu/Mint.
Google is quite an unusual case, though: they need to provide comparatively little end-user support for those products (relative to the absolutely huge sizes of the company and the user bases). They already have the resources to build Linux versions, and they probably want them for their large internal pool of Linux users anyway, but it is not business-critical to them whether Google Earth, or even a given version of Google Chrome, works well for you as a Linux user on your particular system. AFAIK they don't have to provide training, phone support, or even a huge amount of detailed docs as part of offering the Linux builds of those products.
For companies with proprietary, licensed products, particularly the really feature-rich ones that focus on business or professional users, the product has to work without glitches or quirks on every platform that they support. When it doesn't, things rapidly get expensive: diagnosing bugs in the field and shipping fixes is still frequently time-consuming and labour-intensive. Even maintaining a test process for five Linux distributions to validate releases is not going to be cheap, and probably not cost-effective. They also need developers, QA and support staff with a working knowledge of two families of Linux.
And if they did it, they would consistently get requests for supporting more distributions, and have to keep politely saying "no" over and over again...
It does make economic sense for some companies to produce Linux versions of their products today, but I think it's fair to say that each of them is unusual in some way. The situation needs to change for the economics to work for the average ISV.
> The scheme we propose is built around the variety of concepts of btrfs and Linux file system name-spacing
How many here use Btrfs in production? Facebook does, I think. Who else?
Tying something to the filesystem seems like a good way to optimize and get a lot of benefit without too much work, but only if that filesystem is already prevalent.
But I am a bit worried about Btrfs. Is there even a consensus on Btrfs being "the future"? There was at some point, I think. I've seen https://bcache.evilpiepirate.org/Bcachefs/ touted as the "future" of filesystems on Linux too.
EDIT: Oh, 2014. Well, it's already 2017, so the question is even more relevant: where is Btrfs heading?
I don't really have any deep insight into the status of Btrfs. A few years ago I was playing around with my Raspberry Pi, and Btrfs required me to build the newest kernel just to get it working. I also remember hearing they wouldn't take bug reports unless you were using bleeding-edge tools. Not very comforting if you were looking to roll it out into production.
On the flip side, Synology just rolled out Btrfs in an update, and I'm pretty sure it's the default partition type going forward. They don't expose all the fancy features like dedup, but the fact that it's the default shows some confidence.
So it sounds like it's being actively developed and getting closer to being trustworthy.
Synology has had btrfs for a while. Their latest release just made it available for more models.
That said, when I bought a 916 last year, I ended up sticking with just ext4 as I needed something that would definitely work. I anticipate that I will at some point switch over.
I'm pretty much at a loss as to why anyone who doesn't specifically need something that btrfs and btrfs ALONE provides wouldn't go with ZFS instead. Writable snapshots are the only such thing I can name off the top of my head, although I'm not sure how that's different from a zfs clone (one command instead of two?), and I'm sure I read something cool about 'cp' being faster. Is there anything else?
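For concreteness, the comparison I have in mind (pool and path names are just examples):

    # btrfs: snapshots are writable subvolumes by default, one command
    btrfs subvolume snapshot /data /data/snap

    # zfs: a snapshot is read-only; making it writable is a second step
    zfs snapshot tank/data@snap
    zfs clone tank/data@snap tank/data-clone

    # the 'cp' thing is, I believe, reflink copies on btrfs
    cp --reflink=always bigfile bigfile.copy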
Sure, you might need 5-10 lines of Puppet or Ansible to get it going on your Linux of choice, but that's our job, isn't it? And you're probably only using this on your storage boxes, which are likely big pets rather than cattle (I use it on Docker hosts too, but whatever).
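Something like this is the whole job on Ubuntu hosts (the host group is hypothetical, and the modprobe step may be redundant once the DKMS package is in):

    - hosts: storage
      become: true
      tasks:
        - name: Install ZFS userland (pulls in the kernel module via DKMS)
          apt:
            name: zfsutils-linux
            state: present
        - name: Make sure the zfs module is loaded
          modprobe:
            name: zfs
            state: present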
I've used ZFS in prod for at least 8 years (yay Solaris, Thumpers, Fishworks, etc.) and it's never caused me any of the problems that things like VxFS, various SANs, DRBD, etc. have, and I've run a bunch of pretty big infra on it (multiple PBs).
Let's be realistic: you should really be picking your battles. Storage is almost always better to buy (read: cheaper/easier) than to build yourself, unless you're one of the big boys (say, a CDN, or some huge site with specific requirements; those folks probably have filesystem and kernel devs on staff).
As a responsible systems person you might tell your client to just give some money to EMC or whoever, as it'll be cheaper than having contractors build something where you need to care about filesystems. But if you absolutely must build it yourself, then go with the stuff that's mature, widely used, and has all the nice features you might want from a FS (COW, snapshots, compression, etc.). Even then, your client probably won't have the staff to support your custom setup for more than about a year after you leave, and they'll end up buying anyway. Yes, I know some folks have pulled it off, but I've not seen it yet...
As things stand, I cannot trust btrfs. I don't have a single reason to use a filesystem that has hardly any users and has significant data-loss issues in specific RAID configs[0] when I have an alternative available that works perfectly and is widely used, and I have no idea why anyone else would either...
Losing data kills companies. You can't really fuck around with your software choices for this stuff if you're a responsible person; save the fancy new tech for your apps, not your infra...
There are, of course, gotchas with ZFS. You will get fragmentation on big pools over time, which will hurt perf, so you'll need to rebuild some storage nodes completely from time to time. But if you're building your own storage, that should be something you're capable of.
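(The rebuild itself isn't exotic; roughly, with made-up pool and host names:

    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | ssh newnode zfs receive -F tank

plus an incremental send to catch up before cutover.)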
If you're custom-building a few PB of redundant storage and your reason for not using ZFS is that you sometimes need to rebuild cluster nodes and can't do that, then you shouldn't be building storage at that scale.
BTRFS doesn't have that problem? Cool. I'd like to not have that problem too, but that doesn't mean I'd use BTRFS just because I need to do those node rebuilds sometimes (which is good practice anyway, isn't it?).
I'd maybe use it in places where I don't care about data safety, like as a storage driver for Docker hosts (apps only), although I'd need a reason to bother trying...
> I'm pretty much at a loss as to why anyone who doesn't specifically need something that btrfs and btrfs ALONE provides wouldn't go with ZFS instead.
Licensing: the ZFS license is, last I checked, incompatible with the GPL. That makes it a non-starter for me at least.
The SUSE folks adopting btrfs says more about SUSE than it does about btrfs. I still remember how, in the past, I got tricked into believing ReiserFS (3, not the 4 that never made it into the kernel) was a good filesystem, despite the broken fsck, the lack of defragmentation tools, and the many other issues listed here:
https://en.wikipedia.org/wiki/ReiserFS#Criticism
It was also the only time I've had a filesystem really eat my data after forced shutdowns (cutting the power).
They kept pushing ReiserFS as the default until 2006, five years after the release of ext3, which added journaling to the ext family of filesystems. Ext3 was a much better filesystem all around. SUSE only switched to ext3 because of the controversy around ReiserFS's author and the uncertain future of Reiser4, not because they admitted that ReiserFS was bad.
I will not make the mistake of putting any stock in SUSE's word again. Doing it again with btrfs shows they really have a knack for going edgy with filesystems while absolutely no other Linux distribution is willing to recommend btrfs.
I trust the Red Hat guys to show more care, and this is what they had to say about btrfs only a year ago:
"The btrfs developers keep telling us that it's not ready, so we're following that. (From one of our storage exports: "Btrfs will be ready in two years. The problem is, that's also going to be true next year, and in two years....") We try to be first where we can, but not at the cost of data loss for users.
-- Matthew Miller, Fedora Project Lead"
Doesn't look good to me.
As long as neither RH nor Debian recommends btrfs, one cannot give it consideration in good conscience.
One way to look at this is at where each company has spent its resources. SUSE has more Btrfs developers than I can count on one hand; Red Hat has zero these days. Red Hat might have more LVM and XFS developers than I can count on one hand. So it stands to reason that each company's output will be biased: they're going to support (development, QA, and tech support) the things they're spending resources on.
Considering that a big chunk, possibly the single largest chunk, of upstream development comes from SUSE, and that they have used it by default for several years on both the openSUSE and enterprise offerings, it doesn't really make sense at all that 'btrfs developers say it's not ready'. This just doesn't square. What's going on, in my opinion, is that neither Red Hat nor Fedora has the resources, nor are they willing to add resources, to triage Btrfs-related bugs, and therefore Fedora isn't ready for Btrfs, not the other way around.
Even SUSE goes very light on what is supported with Btrfs multiple-device setups, by the way. With single-device volumes I've reported a bunch of minor bugs and never lost data. The multiple-device stuff is harder to qualify: if you're familiar with the warts, you're at a net advantage over mdadm and LVM RAID. If you're not and you run into trouble, there are traps, and Btrfs's claimed focus on fault tolerance and ease of use can betray the user.
> it doesn't really make sense at all that 'btrfs developers say it's not ready'. This just doesn't square.
Just looked at this page: https://btrfs.wiki.kernel.org/index.php/Status and I can see how they would interpret that as "not ready". There are a good number of "mostly OK" entries, and one is unstable, with comments like "write hole still exists, parity not checksummed" and "auto-repair and compression may crash".
It might be good enough for some, but I can see why many Red Hat customers would not want to trust their crown jewels to anything that is "mostly OK".
Not the first time that a systemd talk proposal by Poettering got rejected from LPC. I wonder why his proposals keep getting rejected when they are so tied to the plumbing layer?
Thank you - I didn't realize this was so old. So knowing that - what happened to this idea? I've never heard of it, so I'm assuming it didn't catch on.
I don't think much development was done on this specific plan but some of the same ideas (minus the btrfs dependency) can be found in OSTree and Flatpak.
I feel that Nix solves a lot of the same problems in a better way. The ideas implemented in that project have a lot of potential, and I wish the community focused more on the user experience for new users. Since I started using Nix, I've had so many cool ideas about what it could be used for.
It's a shame that it has such a steep learning curve.
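To give a flavour of the upside: a throwaway, per-project environment is a few lines of shell.nix (the package names are just examples from nixpkgs):

    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      buildInputs = [ pkgs.python3 pkgs.ffmpeg ];
    }

Running nix-shell in that directory drops you into a shell with exactly those tools, without touching the system-wide package set.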
On one hand I am excited by all these amazing things GNU/Linux will be able to do.
On the other hand, I often miss those simpler times when you could install Slackware, boot it, and know pretty much exactly what was running, where, how, and why.
I'm not saying I want to go back, though... Nowadays I can buy a not-so-old laptop, burn an Xubuntu USB stick, and pretty much assume everything is going to work. I guess we will have to accept a tradeoff like this.
Why a new partition type and reliance on btrfs? I've had real problems with btrfs fairly recently.
Can't all this be achieved with containers and file system overlays already?
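For instance, something along these lines already gives you a read-only base plus a writable layer, with no special partition type (paths hypothetical):

    mount -t overlay overlay \
      -o lowerdir=/images/base,upperdir=/apps/foo/upper,workdir=/apps/foo/work \
      /run/apps/foo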
2. Create a constraint-based language to express package dependencies, and make the deps fine-grained (as in, separate runtime from buildtime, etc.); see the sketch after this list.
3. Either make union mounts a first-class citizen, or restrict what an application can see via containers and compose a complete system out of containers instead of creating a complete system piecemeal.
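On point 2: classic Debian packaging already separates build-time from run-time dependencies and supports version constraints, so the ask is really for finer granularity and a stronger solver on top of something like this (real debian/control fields, hypothetical package):

    Source: foo
    Build-Depends: debhelper (>= 10), libssl-dev (>= 1.1)

    Package: foo
    Depends: libssl1.1 (>= 1.1.0), libc6 (>= 2.17)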