When root on ZFS breaks on Arch Linux (mnus.de)
65 points by LinuxBender on March 26, 2023 | 95 comments



This is one of the things NixOS is good for. There can be a special version of Linux that is only updated once some regression test on ZFS passes:

`config.boot.zfs.package.latestCompatibleLinuxPackages`

That way the main maintainer can still follow HEAD, whilst the ZFS maintainer can make sure that Linux is updated for ZFS once it’s been tested (or even automatically, but I don’t know if it’s that advanced yet).


I see a lot of smart people recommending Nix, but whenever I look at it, it seems like there are a ton of sneaky config options that one must get right to get a working system...


It depends (and not any more than applications that need sneaky config). Simple systems can have quite simple options.

At the very least you can take a working copy and figure out exactly which configuration is necessary. Hard to do that without NixOS. Especially with nixos-option.

And the options are all documented in a central place. The module definitions using the options are also “easy” to search through (the language is a bit complicated, but not in a way that defeats grep).


nixos-generate-config usually produces a working config out of the box. And if you need to change some obscure config options, everything is well documented with "man configuration.nix"


I like https://search.nixos.org/options , but while nicer it runs into the "need a somewhat working system to fix the system" bootstrapping issue.


> but whenever I look at it, it seems like there are a ton of sneaky config options that one must get right to get a working system...

Well, on any system there are tons of sneaky config options that one must get right to get a working system; it's just that on non-nixos, those aren't deterministic/version-control friendly/reproducible/centralized.


And even if you don't use that, the build just fails early without consequences.


A workaround that works locally (and only locally) is to build zfs-dkms (AUR) yourself and modify it so it doesn't break on these GPL-only symbols. While the licenses would forbid distributing it as such, you can do so on your own machine just fine.

And if your root is on ZFS, make a snapshot before updating! I set up a pacman hook[0] which runs zfs_autobackup[1] which automatically manages snapshots, so I can always easily roll back to a non-broken state. The ZFSBootMenu[2] bootloader makes that extremely fast without even needing a bootable USB-drive. :)

[0]: https://wiki.archlinux.org/title/Pacman#Hooks [1]: https://github.com/psy0rz/zfs_autobackup [2]: https://zfsbootmenu.org/
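For illustration, such a hook can be as small as this (a minimal sketch using a plain recursive zfs snapshot instead of zfs_autobackup; the file path, package targets and dataset name are made up, adjust for your setup):

  # /etc/pacman.d/hooks/00-zfs-snapshot.hook
  [Trigger]
  Operation = Upgrade
  Type = Package
  Target = linux
  Target = zfs-dkms

  [Action]
  Description = Snapshotting the root pool before upgrading...
  When = PreTransaction
  Exec = /bin/sh -c 'zfs snapshot -r rpool/ROOT@pre-pacman-$(date +%s)'
  AbortOnFail
Note that AbortOnFail only works for PreTransaction hooks, so a failed snapshot stops the upgrade before anything is touched.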


I just ran into the same issue yesterday when I was updating one of my machines. Fortunately I caught the DKMS error when mkinitcpio ran after the upgrade:

  ==> dkms install --no-depmod zfs/2.1.9 -k 6.2.8-arch1-1
  Error! Bad return status for module build on kernel: 6.2.8-arch1-1 (x86_64)
  Consult /var/lib/dkms/zfs/2.1.9/build/make.log for more information.
  ==> WARNING: `dkms install --no-depmod zfs/2.1.9 -k 6.2.8-arch1-1' exited 10
Issues like these are where being on a rolling-release distro might bite you.

To mitigate the risk a bit, I always have the linux-lts kernel package installed on all of my Arch Linux machines, which provides a fallback option when the mainline kernel fails or some major kernel issue surfaces. In my case, with the LTS kernel at 6.1.21-1-lts, the DKMS build of ZFS on LTS succeeded, so it would have been a fallback option to boot even if I had missed the error for the mainline kernel DKMS.

  ==> dkms install --no-depmod zfs/2.1.9 -k 6.1.21-1-lts
  ==> depmod 6.1.21-1-lts
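For reference, getting that fallback in place is a one-off (a minimal sketch; the headers package matters so DKMS builds the ZFS module for the LTS kernel too):
  pacman -S linux-lts linux-lts-headers
  # then add a boot loader entry pointing at vmlinuz-linux-lts / initramfs-linux-lts.img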
I don't have my root on ZFS, I only use it for data storage, but if I ever did, I would make damn sure to build myself a live ISO with ZFS support and all drivers necessary for network access, ahead of time, in case I need to do an emergency repair. It doesn't need to always be on the latest kernel, just recent enough that you are able to mount all partitions and to chroot into the system.

For this issue a workaround with Kernel 6.2.8 is discussed in the AUR comments for zfs-dkms: https://aur.archlinux.org/packages/zfs-dkms This would involve editing the PKGBUILD for ZFS to patch the license for the symbols to GPL, though I'm not sure about the legality of this approach.


> For this issue a workaround with Kernel 6.2.8 is discussed in the AUR comments for zfs-dkms: https://aur.archlinux.org/packages/zfs-dkms This would involve editing the PKGBUILD for ZFS to patch the license for the symbols to GPL, though I'm not sure about the legality of this approach

In the US there is 17 USC 117(a) which might apply:

> Making of Additional Copy or Adaptation by Owner of Copy.—Notwithstanding the provisions of section 106, it is not an infringement for the owner of a copy of a computer program to make or authorize the making of another copy or adaptation of that computer program provided:

> (1) that such a new copy or adaptation is created as an essential step in the utilization of the computer program in conjunction with a machine and that it is used in no other manner, or

(Section 106 is the section that says you need permission from the copyright owner to make copies or adaptations).


But you already have permission to modify the sources? The GPLv2 gives you that permission?

I just don't understand this position -- how dare you modify the sources we expressly gave you permission to modify?


Yes, the only problem here is if you redistribute those built binaries and images.


Exactly, or, you know, maybe it's a problem. But it is the same problem I would have had anyway.


This whole GPL only symbols thing seems to happen to zfs every kernel release. They will find a way around it eventually, even if that means implementing stuff in zfs or spl.


The main problem here is Pacman's handling of "PostTransaction" hooks (which in this case builds the zfs kernel module). The whole installation should have failed but it didn't. Individual hooks can specify "AbortOnFail" but it's available to "PreTransaction" hooks only.


Like many fun adventures in Linux, this begins with adding some 3rd party repos (and/or packages) into the system. In this case doubly so because:

> I'd probably have caught the red error if it wasn't followed by tons of log output from building AUR packges :)

Although I do agree that noticing important messages even in plain pacman output can occasionally be difficult, I haven't seen many other package managers do it any better.

> But I can't, because I can only build a live ISO with Linux 6.2.8 easily, which is the original problem

That feels like a bit of a weird situation though. I'd imagine being able to build downgraded images would be useful in general, so it's interesting that apparently it's not so easy.


These sorts of events are why I know quite a number of people who do ZFS-for-everything on their FreeBSD systems but ZFS-for-everything-except-root on their Linux boxen.

Seems like in a lot of cases having all of your data on ZFS gets you almost all of the awesomeness of having ZFS around with fairly minimal downsides.

(deciding if this is true for any particular system is left as an exercise to the person designing the deployment)


This is somewhat understandable.

But to be clear, this is also an extraordinary set of circumstances -- a very recent but marked-stable kernel, on a rolling-release distro, with a backported set of changes, breaking a DKMS ZFS build.

The user was notified and was able to take remedial action. My only issue would have been -- I'm not sure Arch made this very easy.

FWIW I use Ubuntu LTS and ZFS on root works great. IMHO every distro should just ship ZFS. Ubuntu has proven this is just not a practical difficulty anymore. Let's quit the charade of making users compile their own kernel modules for each kernel release.


Oddities have been seen on other distros, and if it's a server, trying to deal with it remotely has additional challenges.

Ubuntu seems to be doing reasonably well at not shipping ZFS related oddities so far, but not everybody runs Ubuntu.


It's not a charade, it's the looming shadow of Oracle lawyers.


Who I'd note haven't done anything so far? This is what is so mysterious to me -- if this is the blockbuster case some on Reddit have made it out to be, why hasn't there been a suit against Canonical?


Does the "blockbuster case" line of thought you've seen suggest it should be easy for Oracle? From the differing legal opinions I've read it seems a legal quagmire. That's a quagmire that _both_ sides of the argument have to wade in to, just Oracle gets to choose if and when that happens. They might just be a bit gun shy after their last 10 year long open source case.


Can someone explain the purpose of GPL only symbols to me, and why existing long standing APIs keep getting changed to GPL only? From the outside it certainly feels like all it does is (intentionally?) break things.


> Can someone explain the purpose of GPL only symbols to me

The official explanation, AFAIK, is that modules using symbols marked as GPL-only are so tied to kernel internal details that they can only be considered as a derivative work of the kernel.

> and why existing long standing APIs keep getting changed to GPL only?

If you accept the explanation above, the simplest reason to understand would be that their not being marked as GPL-only was a mistake. That is: modules using these symbols would already have been a derivative work of the kernel; it's just that the developer wasn't being warned of that before.

The fact that GPL-only symbols are a new thing (originally, there was only EXPORT_SYMBOL, the EXPORT_SYMBOL_GPL machinery was only added much later) increases the chances of that happening, since older symbols might not have been marked correctly yet.


the "derivative work of linux kernel" argument really doesn't hold for ZFS, regardless of what the linux kernel developers state

it was originally written for Solaris

and today the same source tree works on FreeBSD and illumos, equally as well as it runs on Linux


That is indeed the goal, they explicitly want to break compatibility. Their responses on the mailing list even say that. GPL or GTFO.

It's definitely annoying as a user. OpenZFS is much nicer than BTRFS, but you have to accept that breakage from the kernel devs may cause you issues.


That's the inference I was trying to avoid making. It's frustrating because a big reason I use Linux is it gives me the freedom to build the system I want exactly how I want it, but a couple times a year I get this reminder that the devs are actually against me in this.


personally I will abandon Linux before I abandon ZFS


I think that is the point.. the kernel devs want to break 3rd party modules, because they don't like stuff distributed outside the kernel.


> because they don't like stuff distributed outside the kernel.

This mechanism doesn't prevent stuff from being distributed outside of the kernel at all, a GPL-licensed module could stay out of tree for a long time.

They even have a documented mode in the kernel Makefile to compile out-of-tree modules, as described at "make help": "make -C path/to/kernel/src M=$PWD modules modules_install"


Ah man been there, done that!

I now specifically use the dated archive mirrors for the Arch mirrorlist as well as the zfs-linux AUR package over DKMS. It's been a lot less of a headache dealing with it that way. I just bump the date up as far forward as the two packages are aligned and off ya go.
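For anyone unfamiliar with the dated mirrors: the Arch Linux Archive lets you pin the whole mirrorlist to a snapshot date, roughly like this (the date below is just an example):

  # /etc/pacman.d/mirrorlist
  Server = https://archive.archlinux.org/repos/2023/03/20/$repo/os/$arch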

I do wish the licensing would get sorted out because I love ZFS but dealing with always having to juggle that dependency is annoying.

Also mkinitcpio should be much more aggressive on failing on errors.


I'm on the zfs-linux AUR package in the archzfs repo and not DKMS. I'm still waiting for zfs packages for the latest kernel, but they haven't been cut yet; I'm wondering if this has something to do with the problems OP is encountering, and so there's a bit of a delay on the next zfs release.


Root on ZFS just feels too much like playing with fire to me.

Fixing an unbootable ext4 filesystem seems tricky enough already.

Am I just in the dark on how to easily repair boot issues, especially on more modern filesystems?

I feel like the OP probably had to put countless hours into working through all these issues to finally get back to a fully functioning environment.


It’s super convenient though! Snapshots, clones, compression, encryption, differential backups and whatnot. Though much of it can be had with btrfs, too, these days.

Repairing many boot issues is fairly trivial. Just chroot in, fix whatever is wrong (boot manager/UEFI vars/kernel/initramfs/…) and done. Pretty much like bcdboot on Windows or fiddling with the registry offline. Certainly, you need to know your stuff. But the basic Linux boot process is quite simple and easy to understand. There is no hidden magic except perhaps sort-of in the initramfs.

The only hard part is sometimes finding out what is wrong: Wrong/mangled kernel command line? Bug in some initramfs hook? Or actual failing hardware? But all that is mostly not specific to whatever filesystem you use.

ZFS certainly complicates things a bit in that you need a compatible live medium. But if we take a step back, you also do for LVM, LUKS, Software RAID, …
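To make the chroot repair concrete for a ZFS root: from a ZFS-enabled live medium it's roughly the following (pool, dataset and device names here are made up; adjust for your layout):

  zpool import -f -R /mnt rpool          # import the pool under an alternate root
  zfs mount rpool/ROOT/default           # mount the root dataset if it isn't auto-mounted
  mount /dev/nvme0n1p1 /mnt/boot         # the ESP / boot partition
  arch-chroot /mnt                       # then fix the kernel/module/initramfs inside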

Of course, if we’re talking about failure modes of the filesystem itself (corruption, hardware issues and the like), ZFS can give you a pretty bad time, because resources are not so readily available on the net.


It's pretty easy to avoid if you keep your previous kernel in your boot directory... new kernel won't boot? Just boot the old one until you can fix the new one.
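On Arch that takes a small extra step, since the packaged kernel image in /boot gets overwritten on upgrade; a crude sketch (file names assume the default mkinitcpio layout) is to copy the images aside before updating and keep a boot entry pointing at the copies:

  cp /boot/vmlinuz-linux /boot/vmlinuz-linux.bak
  cp /boot/initramfs-linux.img /boot/initramfs-linux-bak.img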


I should add that, in my brief searching, to learn more about my own question, I stumbled on this. It seems much akin to the very cool rEFInd bootloader:

https://docs.zfsbootmenu.org/en/latest/


(one of the ZBM developers here) - we strive to provide the best recovery environment possible. Between cloning/promoting previous snapshots through our interface, a full shell with ZFS userland tools, and the ability to chroot read/write into any boot environment (or read-only into a snapshot), we can help you out of quite a few common binds. There's usually no need to keep a recovery disc around!


This is part of what I love about Hacker News so much. It seems like almost any item posted on here quickly has someone that is involved with the project commenting on the post, offering their help and insights!

I'm looking forward to running ZBM through its paces when I get some time.

I actually found out about it from a Reddit comment by Jim Salter a.k.a. Mercenary Sysadmin, a huge ZFS advocate.


I use ZFSBootMenu on my servers. I was going to mention it. Love it.

Only had to use it once to boot to an older snapshot, but it was so much nicer than the grub console.

You do have to make sure you don't let it get too old and then upgrade your zpool to features it doesn't support, but that's always true with ZFS boot.


Why would you bother? It'll be quicker to just blow it away and reinstall.


You're not concerned about all your system settings, installed applications, and any other data worthy of preservation on your root disk? Or am I misunderstanding how Root on ZFS works/boots?


No, why would I be?

In what world would you even lose all that?


What specifically would a reinstall cover then? Is it just a boot partition/stub (FAT?) that loads the root ZFS pool? It all seems like a drastically different architecture than I'm used to seeing on a system.


Why would you keep /home on the same partition? Why would you be manually adding in apps after a reinstall?

You just script this shit.


> ERROR: modpost: GPL-incompatible module zfs.ko uses GPL-only symbol 'bio_start_io_acct'

> ERROR: modpost: GPL-incompatible module zfs.ko uses GPL-only symbol 'bio_end_io_acct_remapped'

I don't understand why there isn't a flag to ignore these and accept a tainted kernel so that this type of issue doesn't break things. Then again, pacman should have aborted the process as well.


> ignore these and accept a tainted kernel

That isn’t possible as I understand it.

Loading any non-GPL module will mark the Kernel as tainted, but that’s not the issue here.

The problem is that the ZFS module uses Kernel symbols that are marked as GPL-only, meaning that the Kernel will refuse to load the ZFS module as it isn’t marked as a GPL module. There isn’t any way to bypass this short of recompiling the Kernel to remove this restriction.

The legal status of this is uncertain, since a number of questions naturally arise:

1. Is it acceptable to mark a Kernel module as a GPL module (this is done via a macro in the source) but release it under a license that is not GPL-compatible?

2. Is it acceptable to recompile the Kernel to remove this restriction?

As far as I know neither of these have ever been tested, and there are differing opinions.


If a kernel with the restriction removed is compiled locally and is never redistributed, why would it be illegal?


The argument goes along the lines that GPL-only symbols are so tightly coupled with the internals of the Kernel that by using them you are forming a derivative work of the Kernel, as opposed to just using interfaces in the same manner (legally-speaking) as a userspace program.

Producing and distributing a module that uses these symbols under a non-GPL compatible license is already an infringement, because you have produced a derivative work and distributed it outside the license conditions.

If you recompile your Kernel to remove the restriction then you may be aiding the module developer in committing a copyright violation.

I’m not sure how much I buy this argument personally, but I have seen it advanced (I think on the LKML).

UPDATE: I found it: https://lwn.net/Articles/154603/

Quoting Linus:

> I think both them [the lawyers Linus claims he spoke to] said that anybody who were to change a xyz_GPL to the non-GPL one in order to use it with a non-GPL module would almost immediately fall under the "willful infringement" thing, and that it would make it MUCH easier to get triple damages and/or injunctions, since they clearly knew about it.

Again, not endorsing this argument, just pointing out that some people (e.g. Linus) claim this isn’t permitted.

I’m not entirely sure why people think this comment is worth downvoting: I have been very clear that I don’t really buy this argument, but I thought it was interesting and relevant to point out Linus’s take on the matter.


..."you are forming a derivative work of the Kernel"

Which in the case of ZFS is obviously nonsense. ZFS was developed on Solaris and ported to many other OSes. It's certainly not a derivative work of the Linux kernel. This is a case of the Linux kernel devs being bullies, and trying to make life harder for 3rd party modules.

If they can claim copyright violation for using a GPL-only symbol which didn't used to be GPL-only, then I think ZFS devs can claim an antitrust violation for locking them out of the kernel. Both legal concepts are absurd..


> This is a case of the linux kernel devs being bullies, and trying to make life harder for 3rd party modules.

I’m inclined to agree, but the fact that ZFS was developed on Solaris and ported to Linux is irrelevant in my opinion: the specific Kernel module using the GPL symbols would be the derivative work, not ZFS as a concept.

Your point about symbols being retroactively made GPL-only is very interesting, and I think it really undermines the Kernel developers’ position.

Again, I don’t really buy Linus’s argument at all, but I think it’s relevant to the discussion, which is why I mentioned it.


> the specific Kernel module using the GPL symbols would be the derivative work, not ZFS as a concept.

Why, as compared to a kernel module linked to non-GPL-only symbols?


> Producing and distributing a module that uses these symbols under a non-GPL compatible license is already an infringement

Producing the module is not, on its own, an infringement. It's only if you distribute or publish the result that you get into trouble with the GPL, since you're not in a position to distribute re-licensed ZFS source code[1].

Thus it should be perfectly fine to have a package which downloaded the ZFS code and compiled a kernel module for that machine, as long as the user did not distribute the compiled kernel module to others in any way.

[1]: https://opensource.org/license/gpl-2-0/ (section 2)


It wouldn't. Moreover I can't see that there is any reason that redistribution of such a kernel would be against GPLv2, the licence under which the kernel is distributed.


> It wouldn't.

Are you a lawyer? Asking in good faith, because if you are then this is the first qualified opinion I’ve seen on the matter.

Everything else has been people claiming to have received legal opinions, see for example the email from Linus in my sibling comment.


TBF, so long as we are arguing from authority, Linus isn't a lawyer either, and has produced hearsay ("I talked to some lawyers..."), but no attorney's opinion on the matter.

The reasonably charitable case for your parent comment is -- Linus suggests such linking might show "willful infringement" IF an out of tree module used these symbols, AND, as indicated by the parent, these changes were distributed together implicating the GPLv2 license's Sections 2 and 3.

Otherwise how would this ever be a problem? If I linked to a GPLv2 only interface in the privacy of my own home, never distributing the changes, what section of the license requires me to make available my changes? The GPLv2 gives me explicit permission to modify the sources?


> TBF, so long as we are arguing from authority, Linus isn't a lawyer either, and has produced hearsay ("I talked to some lawyers..."), but no attorney's opinion on the matter.

Yes, that’s essentially a rephrasing of my comment that you replied to.


Then perhaps I misunderstood your comment? It went pretty hard at a guy/gal who was simply stating the obvious -- permission to modify sources is already given within the GPLv2 itself. Imagine this colloquy --

Linux devs: "We've created a GPL only interface. Here are the sources."

Downstream user: "Okay I've modified these symbols to be not be GPL only anymore."

Linux devs: "You can't do that!"

Downstream user: "But you've given me permission to modify the sources in your license?!"

Certainly a copyright holder can give special additional permission to use software in contravention of the express terms of the license. The only problem is getting the copyright holders agreement to do so. My understanding of what you have argued is -- a copyright holder can place additional restrictions on the use and modification of software in contravention of the express terms of the license hidden within the software itself. You state:

    The legal status of this is uncertain, since a number of questions naturally arise:
    1. Is it acceptable to mark a Kernel module as a GPL module (this is done via a macro in the source) but release it under a license that is not GPL-compatible?
    2. Is it acceptable to recompile the Kernel to remove this restriction?
    As far as I know neither of these have ever been tested, and there are differing opinions.
This line of thought is problematic for several reasons. The most salient of which is -- this is the exact argument used against ZFS in the kernel: that the CDDL's additional terms/restrictions are incompatible with the GPLv2.

Imagine again --

Linux devs: "We've created a kernel module that can't be used by Russians invading Ukraine."

Other kernel devs, estates of deceased developers, etc.: "But our understanding was that all our contributed sources would be available under the GPLv2, not some weird hybrid which can be modified willy nilly with new restrictions by any future dev."

When someone says -- "Here is your license to use my software," it is reasonable to believe you can take them at their word, and there aren't additional license restrictions lurking somewhere else.


Yup, this popped up for me about a month ago on a Raspberry Pi system, fortunately not on the root, but still. On ARM the kernel_neon_begin / kernel_neon_end symbols were made GPL-only[0], which is quite puzzling on many levels.

The discussion of the reported issue on the OpenZFS GitHub is very educational, both on the maintainers' side (useful, sensible) and on the other commenters' side (kinda "thermonuclear")[1].

In the meantime I just downgraded the kernel as well, fortunately still had it in the pacman cache; then pinned it for now.

Sent an email to the patch creator / signer-off too, to feed back on that "this is not expected to be disruptive to existing users." part of the commit (as in [0]).

[0]: https://lore.kernel.org/all/20221107170747.276910-1-broonie@...

[1]: https://github.com/openzfs/zfs/issues/14555


Does Arch Linux have boot environments (or equivalent)?

     Boot Environments allows the system to be upgraded, while preserving the
     old system environment in a separate ZFS dataset.
* https://man.freebsd.org/cgi/man.cgi?beadm

    The beadm command is the user interface for managing ZFS Boot Environments
    (BEs). This utility is intended to be used by system administrators who want
    to manage multiple Oracle Solaris instances on a single system.
* https://docs.oracle.com/cd/E86824_01/html/E54764/beadm-1m.ht...

> A ZFS boot environment is a bootable clone of the datasets needed to boot the operating system. Creating a BE before performing an upgrade provides a low-cost safeguard: if there is a problem with the update, the system can be rebooted back to the point in time before the upgrade.

* https://klarasystems.com/articles/managing-boot-environments...

Or perhaps:

> In essence, ZFSBootMenu is a small, self-contained Linux system that knows how to find other Linux kernels and initramfs images within ZFS filesystems. When a suitable kernel and initramfs are identified (either through an automatic process or direct user selection), ZFSBootMenu launches that kernel using the kexec command.

* https://github.com/zbm-dev/zfsbootmenu


Yes, but like every part of ZFS on Arch, not by default. https://aur.archlinux.org/packages/zfsbootmenu


Indeed. In fact, my main reason for ZFS on root is boot environments. If something breaks, it's trivial to roll back the rootfs as it was in the old BE, before the upgrade. And I run ZFS on FreeBSD.


I believe only Ubuntu half-supported boot environments. I don't know of any other distro that even bothered to support ZFS outside of supplying the package for users to install.


Regarding the part where a wireless NIC is not allowed to be enslaved to a bridge: I believe that was allowed at one point in the past. The root cause is that a wireless frame has 3-4 MAC addresses versus Ethernet's 2. But somehow software should be able to do something about it. Why not?


This can work with MAC address translation (NAT, but on a lower level) and connection tracking. Not exactly a great solution. Alternatively, you can have one association with the access point per device, which is what some off-the-shelf repeaters do. Drivers often limit these to very low numbers.

This wasn't possible in the past either. There's WDS (4-address mode) and some drivers that did what I describe above under the hood. That's it.


That part of the story was a plot hole for me. Why did the protagonist not just set up a forwarding proxy, or IP forwarding, on the laptop?

Tor is convenient.

  https_proxy=socks://laptop:9050 curl ...


If you're running Linux with ZFS, make sure to have a ZFS-enabled live medium available. The Arch ISO is easy to modify. Various variants with ZFS exist around the net. I believe Ubuntu also ships with ZFS nowadays. I think NixOS too? Not entirely sure, it's been a while.

If you go custom, you can also add drivers/firmware for whatever exotic hardware you have!


The script at https://github.com/eoli3n/archiso-zfs makes it extremely easy to add ZFS support to any Arch ISO after it has booted. You can copy any standard ISO to a USB drive, boot off it, then run `curl -s https://raw.githubusercontent.com/eoli3n/archiso-zfs/master/... | bash` and you'll have ZFS support in a few seconds, without having anything to worry about.


> without having anything to worry about.

Nothing to worry about except network access, of course, which was the issue in this case. Much better to have it not with ZFS support already enabled.


Indeed (I guess you meant "much better to have it with ZFS support already enabled"). I simply mentioned this for people unaware of the existence of this script who don't have network issues with the Arch ISOs.


Yes, NixOS also ships ZFS in their ISO. It's been super easy coming from ZFS on Arch to NixOS, no hassles with adding custom repos, using DKMS, etc.


    Oh would it be nice if ZFS wasn't CDDL or if the CDDL/GPL compatibility would be settled.
For all practical purposes, it doesn't appear to be a problem. Ubuntu has shipped ZFS for years. As much bellyaching as we hear from some quarters about the hypothetical harms this has caused, no one has brought a suit, because my guess is -- it would be a pretty terrible case for a copyright holder from the Linux side. See [0] in addition to all the other ink that has been spilled re: this matter.

Linus may not feel comfortable with ZFS included in mainline. I'm not sure distro maintainers need to feel the same way.

[0]: https://www.networkworld.com/article/2301697/encouraging-clo...


To be fair, Larry has the deepest pockets, and it takes some balls to stand up to the likelihood of a lawsuit.


> it takes some balls to stand up to the likelihood of a lawsuit.

Likelihood? For years now Canonical has been doing what I've been told by the Reddit-rati is a blatant copyright violation and ... crickets.


Larry knows if he were to sue Shuttleworth/Canonical it could potentially bankrupt them both.


My guess is Larry could just buy Canonical and turn it into a parking lot.

But maybe this is the long game -- Larry is hoping to trap IBM/Red Hat in a suit when they add ZFS to the kernel, but I'm not sure that's how copyright litigation works. If you forgo litigation for years and now decide to sue, I'm not sure you get to assert copyright now.


I dual booted my machine with Win10 and Arch on ZFS root. Admittedly I used Win10 like 99% of the time, so there was always a laundry list of updates when I booted into Arch. It seemed ZFS broke 20% of the time I updated. It was either the silly GRUB config fix that wasn't upstreamed getting overwritten, or the ZFS module build somehow failing. I got so frustrated that I just nuked the whole install.

I realize that ZFS is a weird bolt-on but the fact that kernel modules failing to build doesn't cause significant error messages and/or failure is silly. Arch is great but Arch on ZFS felt like my Gentoo days: I want to have fun using my computer, not constantly fiddling to make my computer usable.


It's not worth your time to use grub if you run zfs... the zfs module in grub only supports the ancient feature flag set from Solaris 9/10; it's a miracle that it even works on Linux at all...

It also doesn't help that grub is spearheaded by Oracle employees, and as such, they couldn't give a rats ass about maintaining the module. (And if they do, it'll be to add more Oracle Solaris features rather than zfs on linux/bsd/illumos)


I have the kernel and zfs modules in IgnorePkg in /etc/pacman.conf. This way I have to update them explicitly.

I usually have to manually install a kernel from archive.archlinux.org/packages but I find this much less fragile than DKMS.
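In practice that looks something like this (the exact package list and the archived version below are illustrative, not a recommendation):

  # /etc/pacman.conf
  IgnorePkg = linux linux-headers zfs-dkms zfs-utils
  # once ZFS has caught up, explicitly pull a matching kernel from the archive:
  pacman -U https://archive.archlinux.org/packages/l/linux/linux-6.2.7.arch1-1-x86_64.pkg.tar.zst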


… it still continues to work on NixOS after rebooting into the previous generation. ;)


The Arch derivative CachyOS[1] has ZFS-enabled kernels by default. And their repos can be added to existing Arch installs[2], so I've found this a nice way of working around the non-ideal state of ZFS integration in Arch. (Plus their kernel choices have other interesting features.)

[1]: https://cachyos.org/

[2]: https://github.com/CachyOS/linux-cachyos#cachyos-repositorie...


Not an Arch user but a ZFS user. I skip ZFS on root, except on FreeBSD because that is the most solid. For Linux I do a basic RAID 1 for root and boot and create a ZFS mirror on the other drives. Normally I have 6 drives (the mirror plus the 2 OS drives). I then take some space on the OS drives to create a SLOG mirror for ZFS. The heavy usage comes on the ZFS mirror anyway; the OS is really just there to power VMs.


Honestly, arch seems like a fragile distro.

My experience with arch is: install the distro files, set up efistub booting, reboot... install updates a couple days later and wind up with a system that can't reboot, and if you force reboot it, it won't ever mount the root fs again.

I've never had that happen on Gentoo, or debian/ubuntu or fedora, or suse. Only arch could be so temperamental to break itself on kernel upgrades.


I've been running it for 8 years or so, and it's been fine. I have yet to need to do a reinstall on any of my computers. The rolling release has been great from a maintenance standpoint. There is the occasional problem, of course. Most recently, pipewire snuck its way in as a dependency. Not having been configured sanely, it caused some minor irritating bugs as it clashed with pulseaudio.

My newest desktop's BIOS does tend to mess up my boot configuration after a firmware update. Windows boots directly instead of grub. It's a 10 second fix from the command line, whether in Windows or in Linux.


I'm sure part of it for me has to be the efistub. I don't understand why anyone would want to bother with a bootloader in modern times because efi will let you pick and choose what to boot just fine without one.

Clobbered bootloader is also why your system goes back to windows after an update (windows has always had issues cohabiting with other operating systems that weren't windows)


I'm not too concerned about the details of how the kernel gets loaded into memory, whether it's the firmware or efistub or grub. I just like grub because it's familiar and has a nice menu that makes it easy to tweak options.


This: https://i.imgur.com/W3edB3t.png

Seems nothing changed for the past 5 years ...

Moar details here: https://is.gd/BECTL

In other words - when it comes to #ZFS - just use #FreeBSD - you will thank me later.


This is one of the reasons I use NixOS. I think it has the best ZFS support possible (at least on Linux).


My solution for problems I've hit similar to this, though not ZFS related (and by no means a catch-all), was to install linux-lts alongside linux and have boot entries for both.
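For the boot-entries part, a minimal systemd-boot entry for the LTS kernel might look like this (the root= value is a placeholder):

  # /boot/loader/entries/arch-lts.conf
  title   Arch Linux (LTS)
  linux   /vmlinuz-linux-lts
  initrd  /initramfs-linux-lts.img
  options root=UUID=<your-root-uuid> rw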


I prefer FreeBSD if I am going to use ZFS, never had a problem and I always use zroot mirrors. On linux I have had many issues, I won't use it on linux anymore.


This can't happen with NixOS.


I had a similar issue with BTRFS thanks to new, faulty DDR4.

- CRC checksum for some blocks on fs was wrong

- I discovered the faulty DDR4 and replaced it, without knowing about the FS corruption

- system started randomly kernel panicking

- no msg in logs (fs died with kernel)

- eventually I experienced a kernel panic while at the computer, and saw the msg on screen (broken checksum or something)

- there was no way to repair the filesystem. I had to recover as much data as possible, throw away some files, and start over fresh

- I use BTRFS snapshot streaming in backups, wrong checksums propagated to backups. I had to restore from 1 week old backups

- Since then I use RSYNC backups as well

- And I test all PCs with memtest monthly overnight. It is also a good stress test for CPU.


> And I test all PCs with memtest monthly overnight. It is also a good stress test for CPU.

I think that the consensus in the PC enthusiast community is that memtest is quite dated and not very good at finding instability in modern hardware.


I’m not particularly up-to-date on that subject, so someone else can probably provide more info, but FWIW PassMark’s MemTest86 has had ongoing development, while the open source MemTest86+ was effectively a dead project for nearly a decade before some recent activity - so there’s a fair amount of difference between the two.


what's the community recommendation these days?


I am not sure what consensus is, but I use this: https://memtest.org/



