Interesting that this was tested at Framestore (the visual effects company). They used to be a Lustre shop years ago, then moved to commercial NFS servers, and must be back onto commodity hardware now.
I could see a use for both: using Lustre as a shared SAN for their flame/smoke/etc suites and using pNFS for workstation/render-node shared storage. At least that's how I've deployed things in the past.
I could see this replacing Isilon (which is built on FreeBSD) for the latter use case :)
This seems pretty cool in principle, but I got as far as a single nonreplicated MDS before I lost interest in trying it out at this stage.
Admittedly, my background is in HPC, where metadata performance and reliability requirements would preclude using something like this. Does anyone more familiar with FreeBSD than I am (which is pretty much anyone who's used it in the last 10 years) have any more information on their long-term plans for at least replicated block storage failover, or cache coherency in an active/active MDS setup?
You misunderstood. The problem is not forwarding Kerberos tokens, the problem is obtaining them. Unless some extensions are employed, you need to use password authentication. Even with Kerberos extensions, you must still use Kerberos auth instead of SSH public keys. If you try to use SSH public key authentication (instead of telling ssh to use Kerberos authentication), you can't log in to the machine, because you can't access your home filesystem, which is kerberized. In fact, depending on your configuration, you might not be able to log in at all, since the user's home filesystem might not be mounted yet (it's waiting for the automounter).
If you want to store user directories on kerberized NFS, users must use Kerberos authentication.
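For context, a kerberized home mount on a Linux client looks something like this (hostname and paths are made up):

    # /etc/fstab: sec=krb5p gives authentication, integrity, and encryption
    nfs.example.com:/home  /home  nfs4  sec=krb5p,_netdev  0  0

The mount itself is authenticated with the machine credential from /etc/krb5.keytab, but each user still needs their own ticket (from kinit or a kerberized login) before they can read their files, which is exactly the chicken-and-egg problem described above.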
Kerberos actually operates on keys. The NFS service key is typically a random one stashed in the keytab.
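With MIT Kerberos, for example, that typically looks like this (principal name is illustrative):

    kadmin -q "addprinc -randkey nfs/server.example.com"
    kadmin -q "ktadd -k /etc/krb5.keytab nfs/server.example.com"

addprinc -randkey generates a random key for the service principal, and ktadd stashes it in the keytab so the NFS service can use it non-interactively.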
I don't understand the objection to Kerberos tickets -- it's a single-sign-on system. You can obtain a ticket with a public key if you want; see e.g. http://www.h5l.org/manual/HEAD/info/heimdal/Setting-up-PK_00...
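With PKINIT, e.g. on MIT Kerberos, that looks roughly like this (paths and username are placeholders):

    kinit -X X509_user_identity=FILE:/path/to/cert.pem,/path/to/key.pem alice

i.e. the initial TGT is obtained with an X.509 key pair instead of a password, and everything downstream is ordinary Kerberos.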
I don't know how easy it would be to use SSH keys, but I once designed an authentication system for a scientific facility based on PGP keys and a web of trust that reflected administrative/social structures. That was in the days before things like Eduroam and (viable?) NFSv4, but not before AFS, which has been successfully used in the same way for a long time.
I don't know why anyone would think "Kerberized" wouldn't involve Kerberos, but it's nothing specific to user directories.
OrangeFS (née PVFS) is a userland parallel filesystem (with a kernel driver on Linux) that can use two types of public key security if you want that sort of thing, but I gather that's not widely used.
NFS tends to live in the kernel, which sounds like a terrible place for TLS. That said, IPsec lives in the kernel to some extent, so you could layer NFS on top of IPsec....
Netflix's TLS support is an extremely limited hack that suits their performance needs. It does not support initial session negotiation or rekeying. They do the former in userspace before handing off a symmetric key to the kernel, and drop connections in the latter case, relying on the client to reconnect. There's no chance it will be merged to FreeBSD; it's not a general solution.
As long as we're talking about TLS and IPsec, though, I'd point to WireGuard as maybe something viable for kernel use.
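As a sketch, tunnelling NFS over WireGuard just means mounting via the peer's tunnel address; the keys and addresses below are placeholders:

    # /etc/wireguard/wg0.conf on the NFS client
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.10.0.2/24

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = nfs.example.com:51820
    AllowedIPs = 10.10.0.1/32

    # then: mount -t nfs4 10.10.0.1:/export /mnt

All the crypto lives in the tunnel, so NFS itself can stay sec=sys.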
Red Hat worked pretty closely with NetApp and their implementation of pNFS. I've used it and it is very nice for mitigating any downtime while you perform maintenance on a controller.
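On the Linux client side there's not much to configure; pNFS is negotiated automatically once you mount v4.1 or later against a server that hands out layouts (server and paths here are illustrative):

    mount -t nfs -o vers=4.1 filer.example.com:/vol/projects /mnt/projects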
Apart from Kerberos, you can use it safely in all-squash mode, where access from certain IPs is always interpreted as a certain UID. So if you're e.g. distributing built code to your network, all-squashing to a user with read-only permissions would work (example below).
This requires a trustworthy routing config (or a way to write files using something other than NFS) but allows untrusted clients as long as someone is keeping them out of trusted subnets.
Note that unlike all-squash, root-squash is not a meaningful security measure, since an untrusted root user can just become whatever UID they want. It's simply a (useful) protection against mistakes as root.
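For concreteness, a read-only all-squash export on a Linux server might look like this (path, subnet, and IDs are invented):

    # /etc/exports
    # every request from this subnet is squashed to uid/gid 1500,
    # no matter what uid the client claims
    /srv/builds  10.0.0.0/24(ro,all_squash,anonuid=1500,anongid=1500)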
I've actually implemented NFS servers professionally (for my sins) and used various competitor implementations. It's possible with the mechanisms others have mentioned, but ultimately each vendor has its own ways of working around the "Kerberos is annoying and often badly configured on the client" issue[0] and of allowing various 'hacks' to be deployed. So if you're interested in a specific vendor's product (e.g. ONTAP), the best option is to look through their online manual for the relevant CLI/Web GUI features, usually around root squashing and the like.
[0] I actually really like it, but the reality and customer feedback is what it is...
In what way are implementations not inter-operable? The SunOS and Linux implementations, at least, coexist happily (modulo ACLs). Do others fail to implement the standard?
1. There aren't many NAS devices that gracefully interact with both Active Directory and Kerberos (which matters if you want both SMB and NFS access to the same storage). The best one I know of that's still commercially available is ONTAP (I wasn't employed by them, but by a competitor). I don't know much about its internals, but I was impressed with the features, the team, my own time playing with it, etc.
It's also the most expensive. If interoperability with AD is not an issue, then disregard.
2. Mac OS X support for NFSv4 and its variants is abysmal, to the point that I could craft packets that would cause a reboot on the latest OS (which to me screams probable 0-day, though hopefully it's been fixed since; security analysis is not my expertise). Their SMB client is really good though, second only to Windows in terms of keeping up with features, using them correctly, etc.
3. What's the use case for the mobile devices? The NFS/SMB clients I've seen for mobile are clunky as hell, but it was also never my focus.
While still only my opinion, this is true up to early 2017. I've since moved to another industry and don't actively research it anymore.
I've used krb5 and NIS in an OS X environment with NFS as the file store. My basic opinion is that OS X doesn't care much about its Unix legacy, and they change things as they see fit. Building on OS X means chasing a moving target.
If you can trust the existing machines, but are worried about new machines sniffing and spoofing, you can set up IPsec between the NFS clients and the NFS server.
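A minimal sketch with strongSwan, transport mode and a pre-shared key (addresses are placeholders; the PSK itself goes in /etc/ipsec.secrets):

    # /etc/ipsec.conf on the NFS client
    conn nfs
        type=transport
        left=10.0.0.5           # this client
        right=10.0.0.10         # the NFS server
        rightprotoport=tcp/2049
        authby=psk
        auto=start

Transport mode only protects traffic between these two hosts, which matches the "trust the existing machines" model above.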
I wish there were a middle ground where you could issue a ticket to the server, rather than one per user, but still have the traffic encrypted between the NFS client and the NFS server.