Funny, because I have the opposite experience. The main issue with btrfs is the lack of tooling that would let a layperson fix issues without btrfs-developer-level knowledge.

I've personally had drive failures, fs corruption due to power loss (which is not supposed to happen on a CoW filesystem), fs and file corruption due to RAM bitflips, etc. Every time, btrfs handled the situation perfectly, with the caveat that I needed help from the btrfs developers. And they were very helpful!

So yeah, btrfs has a bad rep, but it is not as bad as the common sentiment makes it out to be.

(Note that I still run btrfs RAID 1, as I did not find real-world reports of experience with RAID 5 or 6.)




It's funny because Facebook uses btrfs for their systems & doesn't have these issues.

ZFS lovers need to stop this CoW against CoW violence.


Someone correct me if I'm wrong, but to my understanding FB uses Btrfs only in RAID 0, 1, or 10, and not in any of the parity options.

RAID56 under Btrfs has some caveats, but I'm not aware of any anecdata within the past few weeks or months (or perhaps I'm just not searching hard enough) about data loss when those caveats are taken into consideration.


> RAID56 under Btrfs has some caveats, but I'm not aware of any anecdata within the past few weeks or months (or perhaps I'm just not searching hard enough) about data loss when those caveats are taken into consideration.

Yeah, this is something that makes me consider trying RAID56 on it. Though I don't have enough drives to dump my current data while re-creating the array :D (perhaps this can be converted on the fly?)


What does your starting array look like? If you're already on Btrfs, then I recall you can do something like `btrfs balance -d raid6 -m raid1c3 /`

https://btrfs.readthedocs.io/en/latest/Balance.html
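
Before converting, it's worth checking which profiles the filesystem is currently using; a minimal sketch, assuming it's mounted at /:

    # Show current data/metadata profiles and how much space is allocated to each
    sudo btrfs filesystem df /
    # More detailed view, including unallocated space per device
    sudo btrfs filesystem usage /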


Yeah, I'm on btrfs RAID 1 currently, with 1x1TB + 2x3TB + 2x4TB drives. Gotta love btrfs's flexibility regarding drive sizes :D

I'll have a look, thanks! I guess if that fails, it will make me test my backup strategy, which I have never actually tested before.


Out of curiosity, how much total storage do you get with that drive configuration? I've never tried "bundle of disks" mode with any file system because it's difficult to reason about how much disk space you end up with and what guarantees you have (although raid 1 should be straightforward, I suppose).


I get half of the raw capacity, so 7.5TB. Well, a bit less due to metadata: 7.3TB as reported by df (6.9TiB).
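
Roughly, since btrfs RAID 1 keeps two copies of every chunk, usable space works out to about min(raw/2, raw minus the largest drive); a back-of-the-envelope sketch for this set of drives (the mount point is just a placeholder):

    # raw = 1 + 3 + 3 + 4 + 4 = 15 TB
    # RAID 1 usable ~= min(15 / 2, 15 - 4) = min(7.5, 11) = 7.5 TB
    # What the filesystem itself estimates for the current profile:
    sudo btrfs filesystem usage /mnt/pool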

For btrfs specifically there is an online calculator [1] that shows you the effective capacity for any arbitrary configuration. I use it whenever I add a drive to check whether it’s actually useful.

1: https://carfax.org.uk/btrfs-usage/?c=2&slo=1&shi=1&p=0&dg=1&...


Just want to follow up with a correction: the command to convert data to RAID 6 and metadata to RAID 1c3 in Btrfs is `btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /`, not what I originally posted.
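
For anyone trying this, a rough sketch of the full conversion (the mount point / is assumed, and if I remember right raid1c3 needs a reasonably recent kernel, 5.5 or newer):

    # Convert data chunks to RAID 6 and metadata to RAID 1c3 in place.
    # --bg returns immediately; omit it to watch the balance run in the foreground.
    sudo btrfs balance start --bg -dconvert=raid6 -mconvert=raid1c3 /

    # Check progress of the running balance
    sudo btrfs balance status /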


> It's funny because Facebook uses btrfs for their systems & doesn't have these issues.

They likely have a distributed layer on top which takes care of data corruption and losses happening on a specific server.


FS corruption due to power loss happens on ext4 because the default settings only journal metadata, for performance. I guess if everything is on batteries all the time this is fine, but it's intolerable on systems without a battery.


The FS should not be corrupted, only the contents of files that were written around the time of the power loss. Risking only file contents and not the FS itself is a tradeoff between performance and safety where you only get half of each. You can set it to full performance or full safety mode if you prefer.
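
For reference, that knob is the ext4 data= mount option; a minimal /etc/fstab sketch (the UUID and mount point are placeholders):

    # data=ordered    default: journal metadata only, flush file data before committing metadata
    # data=journal    "full safety": journal file data too, at a performance cost
    # data=writeback  "full performance": no ordering guarantee between data and metadata
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv/data  ext4  defaults,data=journal  0  2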


True, this is file corruption.



