A disaster for enterprise perhaps. I use only a subset of features (such as subvolumes and snapshots and a little RAID 1) and have never had problems. With the way some people talk about it, it sure sounds like it never worked at all.
I believe the most recent issues I've seen documented with it are around the built-in RAID5/6 implementation not being fully stable. I haven't used any of that myself, so I can't comment on the rest of it. (I did use it on an external drive 5-6 years ago and had issues that I know have since been fixed, but I haven't retried it.)
They should have kept RAID behind yellow tape for years to come. There are just too many problems that can bite you once a physical disk fails and you really need that RAID to recover.
You've never had problems? You're lucky. I guarantee you will. It took me less than a month to lose significant data with it, under normal usage patterns.
It took me less than 3 days for btrfs to fail me, but that was in the 2.6.33-rc4 days. Had it gone better, I might have become a btrfs developer rather than a ZFS developer. In hindsight, I am happy with how things turned out.
We use SUSE with BTRFS at work. I don't know of any instance where we lost data, but the weekly filesystem rebalance runs during business hours and leaves the machine unresponsive to everything except pings, which is a little frustrating.
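For what it's worth, if that weekly balance comes from the btrfsmaintenance package SUSE ships (an assumption; older setups drive it from cron instead), the schedule can be moved out of business hours. A rough sketch, with illustrative values:

    # /etc/sysconfig/btrfsmaintenance (values here are illustrative)
    BTRFS_BALANCE_PERIOD="monthly"        # or "none" to skip the periodic balance entirely
    BTRFS_BALANCE_MOUNTPOINTS="/"         # which filesystems get balanced

    # with the systemd timer units, the window can be shifted directly:
    systemctl edit btrfs-balance.timer    # override OnCalendar= to an off-hours slot
    systemctl restart btrfs-balance.timer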
I've been using it for a good 5 years. OK, there was one time: when I removed a disk from a mirror, forced the remaining disk to operate writable as a single device, then attempted to migrate it to single-disk duplication, then shut it down before it finished migrating.
That's not a sane way to handle data, really; I was just playing around with unimportant stuff. If it had been real data, I would have mounted it read-only and copied it off.
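For anyone curious, that dance is roughly the following (a sketch with made-up device names, not a recommended procedure):

    # mount the surviving half of the mirror writable
    mount -o degraded /dev/sda2 /mnt
    # drop the removed device from the filesystem
    btrfs device remove missing /mnt
    # convert to single-device profiles; interrupting this mid-way is what bit me
    btrfs balance start -dconvert=single -mconvert=dup /mnt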
I have not had good luck with BTRFS on systems running on battery. It's been solid on my workstation for a while, though. Still, it is disappointing that it hasn't stabilized faster.
I have been using btrfs in production for years now and it has never failed me. I'm doing hundreds of snapshots and send/receives, I've reconfigured RAID layouts on the fly, and I went from a 6-disk RAID 10 of mixed sizes to a RAID 10 of same-sized disks. In some cases we have had many power failures with no data loss. We also have some setups using md RAID and some using hardware RAID...
When I hear people dogging btrfs, it just speaks to their inexperience, IMO.
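For reference, the on-the-fly reshaping mentioned above is just device adds/removes plus a balance; roughly this, with placeholder device names:

    btrfs device add /dev/sdx /mnt                              # add the new, equally sized disks
    btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt  # respread data/metadata across members
    btrfs device remove /dev/sdy /mnt                           # retire the old mixed-size disks
    btrfs filesystem usage /mnt                                 # confirm the resulting profile/layout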
I use openSUSE Tumbleweed with BTRFS on a laptop. This is nothing special - a 512GB SSD, no RAID, about 350GB of data, and a bunch of snapshots. It was basically the out-of-the-box configuration plus some snapper config to regularly snapshot /home.
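The snapper part is minimal; something along these lines, assuming the defaults openSUSE ships (the limits shown are illustrative):

    snapper -c home create-config /home        # separate snapper config for /home
    # then in /etc/snapper/configs/home:
    #   TIMELINE_CREATE="yes"                  # hourly timeline snapshots
    #   TIMELINE_LIMIT_HOURLY="12"
    #   TIMELINE_LIMIT_DAILY="7"
    snapper -c home list                       # inspect existing snapshots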
One day I shut it down and it wouldn’t boot - the BTRFS filesystem had gotten itself into a state where it would mount ok read-only but hang the system when mounting read/write. My best guess is that I shut it down (meaning a normal shutdown through the GUI) while it was running its weekly rebalance in the background. I’ve since recovered the data and rebuilt the machine, but it’s the first time in a long time that I’ve had a filesystem (any filesystem) fail me.
Huh. I had that same problem on two machines running Tumbleweed a couple of months ago. In both cases, the only working approach I found was to wipe the root partition and install from scratch.
/home was in a separate partition, so it was not a tragedy, but annoying nevertheless. This has never happened to me before, unless the underlying hardware was about to retire.
I wasn't so lucky since I had everything as one big BTRFS filesystem.
I did recover it by booting the installer in recovery mode, mounting the fs read-only, and doing a backup. Then I blew away the partition table and reinstalled.
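In case it helps anyone in the same spot, the recovery amounted to something like this from the installer's rescue shell (device and destination names are placeholders):

    mount -o ro /dev/nvme0n1p2 /mnt            # the read-only mount still worked
    rsync -aHAX /mnt/ /run/media/usb-backup/   # copy everything off
    # if even a read-only mount fails, btrfs restore can often still pull files:
    # btrfs restore /dev/nvme0n1p2 /run/media/usb-backup/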
I also learned a lesson about backups, mainly that I should have them.
But I have used Gentoo, another rolling release distro, before that (way before that, actually), and I never had such a problem with Gentoo, even on the unstable branch.
From what I hear Arch users tell about their distro of choice, Arch does not give them this kind of headache, either.
And last but not least, the reason I took the plunge and went for Tumbleweed was that the project uses extensive automatic testing to ensure they do not break anything.
Do not get me wrong, I still use Tumbleweed on both machines and do not see that changing for the foreseeable future. I know what I signed up for. ;-)
I've been running a two-disk RAID1 setup, alongside a single-disk root drive (for snapshots), for almost two years without a single issue. I think there's still a lot of FUD being spread on account of the RAID 5/6 write hole still existing, for which this is the latest update:
> The write hole is the last missing part, preliminary patches have been posted but needed to be reworked.
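For completeness, a two-disk RAID1 like the one above is about as simple as btrfs multi-device gets; a sketch with placeholder devices:

    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc   # mirror both data and metadata
    mount /dev/sdb /mnt                              # mounting either member works
    btrfs scrub start /mnt                           # periodic scrubs catch silent corruption
    btrfs device stats /mnt                          # per-device error counters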
BTRFS is officially part of Linux and apparently "ok" for simple use cases, but there is no end of reports of failures when non-trivial RAID modes are used, demanding workloads are applied, device replacement is attempted, etc., and there are performance problems under a variety of conditions. There are enough qualifications on the BTRFS status page[1] that I, for one, do not consider it 'stable.'

ZFS is a thing on Linux, but it's not in the kernel, and _when_ it breaks the kernel developers don't officially care, except when they happen to have a foot in both camps. This situation naturally limits the size of the ZFS on Linux user base; you're one of the few if you're doing it, and that's not where most production users want to be.

LVM can snapshot logical volumes and produce 'crash consistent' volumes independent of the type of file system. That's been my go-to solution given no other alternative.
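Concretely, the LVM route is just a snapshot LV you back up and then discard; a rough sketch with made-up VG/LV names:

    lvcreate --snapshot --size 10G --name data_snap /dev/vg0/data   # crash-consistent point in time
    mount -o ro /dev/vg0/data_snap /mnt/snap
    tar -C /mnt/snap -cf /backup/data.tar .                         # or rsync, dump, etc.
    umount /mnt/snap && lvremove -y /dev/vg0/data_snap              # snapshots are temporary; drop them after the backup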
The fact is Linux has trailed far behind its contemporaries in advanced file systems for... 10+ years now? Not terribly flattering.
I suspect the reason is that most production use of Linux occurs in environments that provide many enterprise storage functions independent of the operating system, so there isn't much pressure to, for instance, harden BTRFS until there aren't major deficiencies. I can snapshot/clone/restore/whatever my EBS volumes any time I wish and I can do similar with my private cloud powervault volumes as well. I trust either of these mechanisms far more than _anything_ Linux has ever provided, including LVM.
Do you mean snapshotting filesystems? If you really meant snapshotting, that would be ZFS. If you really meant a versioning filesystem, Wikipedia claims NILFS is stable:
LVM can do snapshots, and Red Hat is working on its own hybrid storage management solution based on XFS, called Stratis[0]. Red Hat deprecated btrfs recently. I don't think that means anything more than that Red Hat has lots of XFS devs and no btrfs devs.