
At least it's no btrfs. What a disaster that filesystem's been.



A disaster for enterprise, perhaps. I use only a subset of features (subvolumes, snapshots, and a little RAID 1) and have never had problems. With the way some people talk about it, it sure sounds like it never worked at all.


I believe the most recent issues I've seen documented are around the built-in RAID5/6 implementation not being fully stable. I haven't used that myself, so I can't comment on the rest of it. (I did use btrfs on an external drive 5-6 years ago and had issues that I know have since been fixed, but I haven't retried it.)


They should have kept RAID behind yellow tape for years to come. There are just too many problems that can bite you once a physical disk fails and you really need that RAID to recover.
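
For anyone who does end up there, the recovery path looks roughly like this (device names are illustrative, and details vary by kernel version):

    # Mount the surviving half of a RAID 1 degraded:
    mount -o degraded /dev/sda /mnt

    # Rebuild onto a replacement disk (use the devid if the dead
    # disk is physically gone):
    btrfs replace start /dev/sdb /dev/sdc /mnt

    # Or give up on redundancy: convert down, then drop the dead device:
    btrfs balance start -dconvert=single -mconvert=dup /mnt
    btrfs device remove missing /mnt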


Yeah, but the distributions bear some of the responsibility as well. If you use btrfs on SUSE, you cannot use RAID5/6 because it is not supported.


Same here. I've been using btrfs on my personal laptop for years and have never had any problems.


I would never trust btrfs for anything. You might as well just use /dev/null for storage


You've never had problems? You're lucky. I guarantee you will. It took me less than a month to lose significant data with it, under normal usage patterns.


It took me less than 3 days for btrfs to fail me, but that was in the 2.6.33-rc4 days. Had it gone better, I might have become a btrfs developer rather than a ZFS developer. In hindsight, I am happy with how things turned out.


We use SUSE with BTRFS at work. I don't know of any instance where we lost data, but the weekly filesystem rebalance, which runs during business hours and leaves the machine unresponsive to everything except pings, is a little frustrating.
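
If that rebalance comes from the btrfsmaintenance package (my assumption; that's what SUSE ships), the schedule is adjustable:

    # See which btrfs maintenance timers are active:
    systemctl list-timers 'btrfs-*'

    # Move the balance out of business hours, or disable it outright:
    systemctl disable --now btrfs-balance.timer

    # Period and mountpoints live in /etc/sysconfig/btrfsmaintenance,
    # e.g. BTRFS_BALANCE_PERIOD="weekly"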


I've been using it for a good 5 years. OK, I had one problem, once: I removed a disk from a mirror, forced the remaining disk to operate writable as a single, attempted to migrate it to single-disk duplication, and then shut it down before it finished migrating.

That's not a sane way to handle data, really; I was playing around with unimportant stuff. If it had been real data, I would have mounted it read-only and copied it off.
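
For reference, that migration is a balance-filter conversion; roughly:

    # With the remaining disk mounted writable (degraded):
    mount -o degraded /dev/sda /mnt

    # Convert data to single and metadata to dup on the one disk;
    # shutting down before this balance finished is what bit me:
    btrfs balance start -dconvert=single -mconvert=dup /mnt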


Well, that is very much the intended use case for a mirror...


I was not replacing the disk; instead, I was attempting to switch to a single-disk setup.


I have not had good luck with BTRFS on systems running on battery. It's been solid on my workstation for a while, though. Still, it is disappointing that it hasn't stabilized faster.


I run BTRFS on my laptop. Is yours an ultraportable? Mine is a fairly chunky system with a 100Wh battery, about 3 years old.


In my experience BTRFS has issues when the disk is allowed to go into low power mode.


I have been using btrfs in production for years now and it has never failed me. I am doing hundreds of snapshots and send/receives, I've reconfigured RAID levels on the fly, and I went from a 6-disk RAID 10 of mixed sizes to a RAID 10 of same-sized disks. In some cases we have had many power failures with no data loss. We also have some setups using md RAID and some using hardware RAID...

When I hear people dogging btrfs, it just speaks to their inexperience, IMO.

That said, I know it's not perfect, but what is?
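
The day-to-day mechanics are nothing exotic; roughly, with made-up paths:

    # Read-only snapshot, then ship it elsewhere:
    btrfs subvolume snapshot -r /data /data/.snaps/day1
    btrfs send /data/.snaps/day1 | btrfs receive /backup

    # Incremental sends only transfer the delta against a parent:
    btrfs send -p /data/.snaps/day1 /data/.snaps/day2 | btrfs receive /backup

    # RAID reshapes happen online via balance filters:
    btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt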


I use openSUSE Tumbleweed with BTRFS on a laptop. This is nothing special: a 512GB SSD, no RAID, about 350GB of data, and a bunch of snapshots. It was basically the out-of-the-box configuration plus some snapper config to regularly snapshot /home.
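
The snapper side was just the standard setup; approximately:

    # Create a config for /home; timeline snapshots then run
    # automatically on snapper's systemd timer:
    snapper -c home create-config /home
    snapper -c home create --description "manual checkpoint"
    snapper -c home list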

One day I shut it down and it wouldn't boot: the BTRFS filesystem had gotten itself into a state where it would mount fine read-only but hang the system when mounting read/write. My best guess is that I shut it down (meaning a normal shutdown through the GUI) while it was running its weekly rebalance in the background. I've since recovered the data and rebuilt the machine, but it's the first time in a long time that I've had a filesystem (any filesystem) fail me.


Huh. I had that same problem on two machines running Tumbleweed a couple of months ago. In both cases, the only working approach I found was to wipe the root partition and install from scratch.

/home was on a separate partition, so it was not a tragedy, but annoying nevertheless. This has never happened to me before unless the underlying hardware was about to die.


I wasn't so lucky since I had everything as one big BTRFS filesystem.

I did recover it by booting the installer in recovery mode, mounting the fs read-only, and doing a backup. Then I blew away the partition table and reinstalled.
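
For anyone in the same boat, the read-only rescue mount is roughly this (device name made up):

    # Mount read-only, skipping log replay and using backup roots
    # (newer kernels; older ones take -o ro,usebackuproot,nologreplay):
    mount -o ro,rescue=all /dev/sda2 /mnt

    # Then copy everything off:
    rsync -aHAX /mnt/ /backup/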

I also learned a lesson about backups, mainly that I should have them.


Tumbleweed is a rolling, bleeding-edge release. I get wonky desktops and other problems all the time when I update.


That is true.

But I have used Gentoo, another rolling release distro, before that (way before that, actually), and I never had such a problem with Gentoo, even on the unstable branch.

From what I hear Arch users tell about their distro of choice, Arch does not give them this kind of headache, either.

And last but not least, the reason I took the plunge and went for Tumbleweed was that the project uses extensive automatic testing to ensure they do not break anything.

Do not get me wrong, I still use Tumbleweed on both machines and do not see that changing for the foreseeable future. I know what I signed up for. ;-)


Are you using btrfs's native RAID support? The docs say it's really unstable, so I was scared to use it.


I run btrfs single on top of hardware or md RAID, or btrfs RAID 1 or 10. Usually the latter.

RAID 5 and 6 are unstable in the "avoid at all costs" kind of way.

This may be of use if you'd like to know more: https://btrfs.wiki.kernel.org/index.php/Status
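
If you want to see which profiles an existing filesystem is actually using:

    # Both show the data/metadata profiles (single, DUP, RAID1, ...):
    btrfs filesystem df /mnt
    btrfs filesystem usage /mnt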


I've been running a two-disk RAID 1 setup, alongside a single-disk root drive (for snapshots), for almost two years without a single issue. I think there's still a lot of FUD being spread on account of the RAID 5/6 write hole still existing, for which this is the latest update:

> The write hole is the last missing part, preliminary patches have been posted but needed to be reworked.


Is there a stable (loaded word, I know) versioning file system available for Linux?


The simple, unqualified answer is no.

BTRFS is in the mainline kernel and apparently "ok" for simple use cases, but there are no end of reports of failures when non-trivial RAID modes are used, demanding workloads are applied, device replacement is attempted, etc., and there are performance problems under a variety of conditions. There are enough qualifications on the BTRFS status page [1] that I, for one, do not consider it 'stable.'

ZFS is a thing on Linux, but it's not in the kernel, and _when_ it breaks, the kernel developers don't officially care, except when they happen to have a foot in both camps. This situation naturally limits the size of the ZFS-on-Linux user base; you're one of the few if you're doing it, and that's not where most production users want to be.

LVM can snapshot logical volumes and produce 'crash-consistent' volumes independent of the type of filesystem. That's been my go-to solution given no other alternative.
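
The LVM route is only a few commands; a sketch with made-up VG/LV names:

    # Take a crash-consistent snapshot of a logical volume:
    lvcreate --snapshot --size 5G --name data_snap /dev/vg0/data

    # Mount it read-only for backup (add ,nouuid if the fs is XFS):
    mount -o ro /dev/vg0/data_snap /mnt/snap

    # Remove it when done so the COW space is freed:
    umount /mnt/snap
    lvremove vg0/data_snap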

The fact is Linux has trailed far behind its contemporaries in advanced file systems for... 10+ years now? Not terribly flattering.

I suspect the reason is that most production use of Linux occurs in environments that provide many enterprise storage functions independently of the operating system, so there isn't much pressure to, for instance, harden BTRFS until there are no major deficiencies left. I can snapshot/clone/restore/whatever my EBS volumes any time I wish, and I can do similar with my private-cloud PowerVault volumes as well. I trust either of those mechanisms far more than _anything_ Linux has ever provided, including LVM.

[1] https://btrfs.wiki.kernel.org/index.php/Status


Do you mean a snapshotting filesystem? If so, that would be ZFS. If you really meant a versioning filesystem, Wikipedia claims NILFS is stable:

https://en.m.wikipedia.org/wiki/Versioning_file_system#Linux
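
NILFS2 versions continuously via checkpoints, which you can promote to snapshots and mount; roughly (device and checkpoint numbers made up):

    # List the checkpoints taken automatically as the fs runs:
    lscp

    # Promote checkpoint 42 to a persistent snapshot:
    chcp ss 42

    # Mount that snapshot read-only alongside the live filesystem:
    mount -t nilfs2 -r -o cp=42 /dev/sdb1 /mnt/past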


LVM can do snapshots, and Red Hat is working on its own hybrid storage-management solution based on XFS, called Stratis [0]. Red Hat deprecated btrfs recently; I don't think that means anything more than that Red Hat has lots of XFS devs and no btrfs devs.

[0] https://www.phoronix.com/scan.php?page=news_item&px=Stratis-...


The most stable solution right now is taking your favorite stable fs, like ext4 or xfs, and using LVM thin pools for snapshots.
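
A minimal sketch of that setup, with made-up names:

    # Carve a thin pool out of a VG, then a thin volume on it:
    lvcreate --type thin-pool -L 100G -n pool0 vg0
    lvcreate --thin -V 50G -n data vg0/pool0
    mkfs.ext4 /dev/vg0/data

    # Thin snapshots need no preallocated size:
    lvcreate -s -n data_snap vg0/data
    lvchange -ay -K vg0/data_snap    # snapshots skip activation by default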


There is hope though: https://www.patreon.com/bcachefs


nilfs?


Agree. Good old NIH.



