
> To add to the downsides, you can't expand RAIDZ vdevs.

> If you start with a 6 disk RAIDZ2 and want to add a couple more drives, you can't. The only way to add capacity to the pool is to add an entire new vdev.

This is mostly true, and in many situations it is true for all practical purposes. Not fully understanding this recently cost me a few hundred dollars (2x 8TB external drives) when I needed to expand my 4-disk raidz2 to a 6-disk raidz2.

However, what _is_ possible is expanding a pool by incrementally replacing its drives with larger-capacity equivalents. This wouldn't work in my situation, as I was already using the largest consumer drives available, but the next time I need to expand in a few years it may be a possibility.
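
For anyone curious what that looks like, a rough sketch of the replace-and-resilver dance (pool and device names here are placeholders, not from my actual setup):

    # let the pool grow automatically once every drive in the vdev is bigger
    zpool set autoexpand=on tank
    # swap drives one at a time, waiting for each resilver to finish
    zpool replace tank old-disk-1 bigger-disk-1
    zpool status tank    # wait for the resilver, then repeat for each remaining disk
    # if autoexpand was left off, expand a device manually after replacement
    zpool online -e tank bigger-disk-4

The catch is that you only get the extra space once _every_ drive in the vdev has been replaced, so it's a slow and drive-hungry way to grow.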

> ... expandable file system that handles parity/erasure coding as well as bitrot that can be used right now

Unfortunately I believe this to be the case. In practice, though, so long as you know up front that your data store is not expandable, it's fairly easy to work around without too much hassle. If having two zpools when you need to expand is a burden for your use case because the data will necessarily be partitioned across pools (i.e. in multiple directories), there are tools that will present multiple filesystems as a single mount point to Linux.
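
mergerfs is one such tool (a FUSE union filesystem). Roughly, and with made-up paths, it looks like this:

    # pool two zpool mount points under a single directory
    mergerfs /mnt/tank1:/mnt/tank2 /mnt/storage
    # or the equivalent /etc/fstab entry
    # /mnt/tank1:/mnt/tank2  /mnt/storage  fuse.mergerfs  defaults,allow_other  0 0

Each pool keeps its own redundancy; mergerfs just papers over the directory split so applications see one tree.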

Alternatively, mirrored vdevs [0], as opposed to a classic raidz2 configuration, are much easier to expand: you only have to replace the 2 drives in a single vdev to grow your total storage space, rather than having to replace all of your drives with higher-capacity ones as with raidz2.
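
To illustrate, growing a pool of mirrors is just a matter of attaching another 2-disk mirror vdev, or upgrading the 2 disks in an existing one (again, names are placeholders):

    # add a new 2-disk mirror vdev to an existing pool of mirrors
    zpool add tank mirror new-disk-a new-disk-b
    # or replace both disks of one mirror vdev with larger ones, one resilver at a time
    zpool replace tank old-disk-a bigger-disk-a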

> and Ceph might be more usable

After all that rambling about zfs, the main thing I wanted to touch on: I'd caution most people not to use ceph unless you already know you need it. For the average user it introduces a lot of unnecessary complexity that will just create future headaches when used in a non-distributed manner (and if zfs is an alternative, the setup must be non-distributed). Which isn't to say ceph isn't useful; it's just built to be optimal at solving a slightly different problem than a filesystem on a single machine can solve.

[0]: http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-...



