
Out of curiosity, how would you compare this particular distribution of complexity to that of setting up and using ZFS?

ZFS gives me set-and-forget sha256 checksums that are kept up to date as data is written, in real time. And the checksums are block-indexed, so 100GB files do not need to be re-hashed for every tiny change.
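
For the curious, enabling that is a one-liner (the pool/dataset names below are made up for illustration, not my actual layout):

    # enable sha256 checksums for everything written to this dataset from now on
    zfs set checksum=sha256 tank/data
    # confirm the property took effect
    zfs get checksum tank/data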

I also never have to worry about the "oh no, which copy is correct" logistics nightmare intrinsic to A->B backup jobs when they fail halfway through: both copies of my data (I'm using 2-way mirroring at the moment because that's what my hardware setup permits) are independently checksummed, ZFS verifies every read against those checksums (repairing from the other copy if needed), and a scrub verifies both copies. (And reads are fast, too, going at 200MB/s on a reasonably old i3. No idea how any of this works... :) )
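
Roughly what that setup plus a periodic integrity pass looks like, with hypothetical device/pool names (again, not my exact layout):

    # two-way mirror across two disks (device names are illustrative)
    zpool create tank mirror /dev/sda /dev/sdb
    # walk every block and verify both copies against their checksums
    zpool scrub tank
    # report checksum errors and any affected files
    zpool status -v tank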

As for cross-platform compatibility, my understanding is that if the pools are configured correctly, zfs send/recv actually works between Linux and FreeBSD. But this kind of capability definitely exceeds my own storage requirements, and I haven't explored it. (I understand it can form the basis of incremental backups too, but that of course carries the risk of needing all N tapes prior to the current one to be intact in order to restore...)
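
From what I've read (hedging, since I haven't actually run this myself), an incremental send looks something like the following, with snapshot names and hosts purely illustrative:

    # snapshot the dataset (names are illustrative)
    zfs snapshot tank/data@2024-06-01
    # send only the delta since the previous snapshot to a remote pool
    zfs send -i tank/data@2024-05-01 tank/data@2024-06-01 | \
        ssh backuphost zfs receive backup/data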

The two mostly-theoretical-but-still-concerning niggles that I think a lot of setups sweep under the carpet are ECC memory coverage and network FS resiliency.

All the ranting and raving out there about ECC (and it does read like that) means nada if only the RAM in the office file server in the corner gets that defense-in-depth treatment (if you will), while files are mindlessly edited on devices that don't have ECC and travel through consumer/SOHO-grade network gear (routers, switches, client NICs, ...) that mostly also lacks ECC. Bit of a hole in the logic there.

I did a shade-beyond-superficial look into how NFS works and got the impression that the underlying protocol mostly trusts the network to be correct and not flip bits or what have you. The only real workaround seems to be setting up Kerberos (...yay...) so you can then enable encryption, which IIUC (need to double check), if set up correctly, has AEAD properties (i.e. authenticated encryption that cryptographically proves data wasn't modified/corrupted in flight).
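
If I'm reading the docs right, the relevant knob is the sec= mount option, where krb5p is the flavor that adds encryption on top of integrity (hostnames/paths below are placeholders):

    # sec=krb5  -> Kerberos authentication only
    # sec=krb5i -> adds per-message integrity checking
    # sec=krb5p -> adds encryption (privacy) on top of integrity
    mount -t nfs -o vers=4.2,sec=krb5p fileserver:/export/home /mnt/home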

SMB gets better points here in that it can IIRC enable encryption (with AEAD properties) much more trivially. But one major (for me) downside of SMB is that it doesn't integrate as tightly with the client FS page cache (and mmap()) as NFS does, so binaries (along with all their dependent shared objects) have to be fully streamed to the client machine before they can launch - every time. On NFS, the first run is faster, and subsequent launches are instantaneous.
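
For completeness, requesting encryption from the client side is just a mount option (share/user names are placeholders, and this is a sketch rather than my actual config):

    # 'seal' asks for SMB3 transport encryption on this mount;
    # the server can also require it globally via its 'smb encrypt' setting
    mount -t cifs -o vers=3.1.1,seal,username=alice //fileserver/share /mnt/share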



