NFS Shares with ZFS (klarasystems.com)
57 points by veronica_avram on Feb 24, 2022 | 25 comments



Article doesn't really answer the question. Why would I want to use the ZFS "sharenfs" feature instead of normal exports? NFS seems orthogonal to a disk-based file system's setup, and now I have to worry about configuration in two places. Is there some advantage in speed or security?


It's the same advantage as, for example, letting ZFS manage your mountpoints: the pool itself contains all the configuration related to the file system, including NFS exports. The export is inherited by child datasets, removed when the file system is destroyed, and updated when the file system is renamed.

If you'd rather manage them the traditional way, that's there too (just as ZFS offers mountpoint=legacy if you want to use fstab).
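A rough sketch of what that looks like in practice (pool and dataset names here are made up, and the sharenfs option syntax differs between illumos and Linux OpenZFS):

    # share a dataset; child datasets inherit the property
    zfs set sharenfs=on tank/home
    zfs get -r sharenfs tank/home

    # or opt out and do it the traditional way
    zfs set sharenfs=off tank/home
    zfs set mountpoint=legacy tank/home   # mount via fstab, export via /etc/exports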


Thanks for the explanation. Not really a fan, then; I would not expect on-disk file system attributes to control a network service.


It's been that way since 2006 when ZFS hit production with Solaris 10 u2. It's nothing new. Works a treat, because it makes ZFS properties the central point of system administration for many mundane and disparate tasks, like compression, mounting, SMB, NFS, et cetera, without having to worry about implementation details or running additional commands.
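To illustrate that one-stop administration (dataset name hypothetical, and sharesmb depends on your platform's SMB integration):

    zfs set compression=lz4 tank/projects
    zfs set sharenfs=on tank/projects
    zfs set sharesmb=on tank/projects
    zfs get compression,sharenfs,sharesmb,mountpoint tank/projects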


Might help to think of the filesystem attributes as being ACLs restricting sharing, with the whole ZFS array being shared except where prohibited by ACL. Just that "no ACL" means "restricted." (That's basically how Windows SMB volume-sharing works.)


Plan 9 works like this. A file server that serves disk file systems (CWFS on 9front) can be made to listen for TCP/IL connections as well, and parts of a file system or namespace can be served using exportfs (itself a file server).


I use it because it is easier and you don't need to manually run exportfs to re-export.
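For contrast, a sketch of the traditional Linux workflow this replaces (paths and networks are examples):

    # /etc/exports
    /tank/media  192.168.1.0/24(rw,sync,no_subtree_check)

    # re-read the exports table by hand after every change
    exportfs -ra
    exportfs -v   # check what is actually exported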


And boom, you have two sources of half-truths.

I admit I already hated this ZFS feature back in Solaris 10, and it was the reason for more than one unaccounted-for leftover NFS share that ended up in bright red in audit reports.


That is because it was done by hand; if it had been done via an SVR4 package, the shares could have been easily found by querying for that package on each system, and removing the package would unshare the NFS mounts.


I use this in my home server to provide the home NAS capability. Just a simple ZFS setup with auto snapshots for rollback, mirroring for disk failures, and of course regular rsync backups to LUKS-encrypted USB drives. My offsite copy was the office, but that's become more of a concern due to WFH.
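A sketch of that kind of setup, under assumed device and dataset names (the com.sun:auto-snapshot property is the one tools like zfs-auto-snapshot key on):

    # mirrored pool to survive a single disk failure
    zpool create tank mirror /dev/sda /dev/sdb

    # automatic snapshots for rollback
    zfs set com.sun:auto-snapshot=true tank/data

    # periodic backup to a LUKS-encrypted USB drive
    cryptsetup open /dev/sdc1 backup
    mount /dev/mapper/backup /mnt/backup
    rsync -a --delete /tank/data/ /mnt/backup/data/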


Sorry to hijack the thread, but has anyone successfully used NFS for anything worthwhile? Samba is fast, but has problems with unix permissions. NFS was slow as molasses every time I tried to use it.

I'm using sshfs for most of my remote filesystem needs now.


I've been exclusively using my $HOME over NFSv4 across many boxen for years. My NFS is exported by TrueNAS CORE [1] powered by an EPYC 7443P (recent upgrade). I can saturate my 100 GbE (~12.5 GB/s) with NFS traffic.
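For reference, the client side of an NFSv4 home mount like this is a one-liner in fstab (server name and export path hypothetical):

    # /etc/fstab on each client
    nas:/export/home/alice  /home/alice  nfs4  rw,hard  0 0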

In my testing, NFS beat Samba in bandwidth by a meaningful but not dramatic margin, and with much less server load (sorry, I didn't keep the data).

Even though using a NAS is slower than a native RAID0 of NVMe/PCIeG4 drives, it gives me shared, automatic snapshots, automatic replications, etc. It's totally worth it.

NFSv4 rules.

ADD: [1] TrueNAS SCALE was _just_ released and I'll be evaluating this seriously and expect to migrate to it.


> Sorry to hijack the thread, but has anyone successfully used NFS for anything worthwhile?

I'm sure it's less common these days with everything on "the cloud" (far less convenient), but in the old days entire companies ran on NFS. Thousands of workstations, with dozens-to-hundreds of NFS servers mapped to a global filesystem hierarchy, automounting as needed.

So the answer to "has anyone" is yes, lots of companies, for a couple decades.

I do miss it; it was so super convenient to have everything in what seemed like one giant filesystem.

At a tiny scale I still have the convenience at home. My file server (ZFS) exports various filesystems to all other machines in the house (via zfs sharenfs).
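On the clients that can be a plain fstab mount, or autofs for the kind of on-demand automounting mentioned above (host names and paths are made up):

    # /etc/auto.master
    /mnt/fileserver  /etc/auto.fileserver

    # /etc/auto.fileserver
    media   -fstype=nfs4  fileserver:/tank/media
    photos  -fstype=nfs4  fileserver:/tank/photos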


I still do this, also. I have "/share" NFS mounted on all my internal systems, including my Mac desktop.


I use it for everything, from storing personal files to mass-provisioning systems and software. Works like a charm.


Remotely-related question: how can you scale out a service that uses ZFS, so as to have N instances of that service on different hosts, but let them all use the same ZFS pool?

Can this be done using something like NFS+ZFS?


If you wanted to have 2 redundant hosts, you could use something like GlusterFS for the replication. I have only tested it with XFS, but apparently it supports ZFS now: https://gluster-documentations.readthedocs.io/en/latest/Admi...
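A minimal sketch of a two-node replicated volume on top of local bricks (hostnames and brick paths assumed; GlusterFS will warn that replica 2 is prone to split-brain):

    gluster peer probe host2
    gluster volume create gv0 replica 2 host1:/tank/bricks/gv0 host2:/tank/bricks/gv0
    gluster volume start gv0

    # clients mount the replicated volume
    mount -t glusterfs host1:/gv0 /mnt/gv0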


Assuming those instances all need shared access to that storage, you would have to build some sort of shared storage on top of the ZFS solution. This might mean NFS shares, or a relational database, etc, depending on the nature of the shared data. You still end up with a single point of failure at the storage layer, and a potential performance bottleneck.

If the data they need shared access to is immutable (for example, the static files served by a webserver) I would much sooner bake them into a versioned container image and share that. It'll perform better by virtue of being on unshared local storage and eliminate the single point of failure/bottleneck.

Another option for eliminating the single point of failure and performance bottleneck is a distributed filesystem like cephfs or glusterfs.


I haven't used ZFS, but you can easily use NFS to share a filesystem from a central (ZFS) "file server". NFS works very well over wireguard, since wireguard gives you "trusted" source IPs and NFS uses those to check permissions (it would also work well over yggdrasil/cjdns).
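A sketch of that arrangement, assuming the wireguard network is 10.8.0.0/24 (addresses hypothetical):

    # /etc/exports on the file server: trust only the tunnel subnet
    /tank/shared  10.8.0.0/24(rw,sync,no_subtree_check)

    # client mounts over its wireguard peer address
    mount -t nfs4 10.8.0.1:/tank/shared /mnt/shared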

I've been wondering the same for distributing storage arrays around multiple sites, with local caching. Ceph seems to be the way.


Scaling out a _service_ that uses data managed by ZFS is where NFS helps you. You simply mount the same ZFS storage in multiple hosts via NFS.

If you meant scaling ZFS itself such that the shared storage is resilient to a full host failure, I’m afraid ZFS is not your best bet. ZFS scales vertically very nicely. Heck, I have a single host with 2 x 100G NICs and 4 x 40G NICs fronting 1.2 PB of ZFS managed disks. If that host goes down, say from a motherboard failure, all 1.2 PB becomes inaccessible until I can hook up the JBODs to a different host physically.


Solaris (and by extension illumos-based operating systems) solves that problem with the pNFS protocol.


A ZFS pool is something only one host at a time can use. You don't share the pool, except maybe for redundancy with enterprise disks that can attach to two hosts; I'm not sure if OpenZFS or OracleZFS have features for that, or if you'd need to do it yourself.

You can share a filesystem (see the article for one way to configure that) or a volume (like a disk image, could be used for diskless hosts; iSCSI is a standard way of doing that).
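For the volume case, a rough sketch (names hypothetical; the iSCSI target setup itself is platform-specific, e.g. ctld on FreeBSD or targetcli/LIO on Linux):

    # create a 100G block volume backed by the pool
    zfs create -V 100G tank/vols/host1-root

    # the zvol appears as a block device to hand to your iSCSI target
    ls -l /dev/zvol/tank/vols/host1-root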


Multiple services can mount a network volume over NFS/CIFS/SMB. The pool is not something you’d see or deal with outside of the host that’s operating the pool. It’s a local concern. Externally you’d expose network shares.


do you mean file sharing? (that's what exports & sharenfs are for.)


Interestingly, TrueNAS (previously known as FreeNAS), which is based on FreeBSD, doesn't seem to use the "sharenfs" property.



