
APFS is slated for a 2017 release, yet development started as recently as 2014. By comparison, development on Btrfs started all the way back in 2007, yet many still consider it to be unsuitable for widespread deployment, particularly in mission-critical settings.

If Apple can actually pull off this turnaround so quickly, does that suggest the complaints about Apple's declining software engineering quality were overblown?

Edit: Ted Ts'o in this talk(1) (at the 8 min mark) discusses the taskforce that birthed Ext4 and Btrfs and its estimate (based on Sun's experience with ZFS and Digital's with AdvFS) of 5-7 years at a minimum for a new file system to be enterprise-ready--an estimate which definitely proved optimistic with regard to Btrfs. Will APFS be different?

1: https://www.youtube.com/watch?v=2mYDFr5T4tY




While APFS' design mirrors that of current generation file systems such as ZFS and Btrfs, its scope is much reduced. It does not target large amounts of storage or disk arrays (such as RAIDZ and the normal RAID family) with automatic error correction, it is not required to perform competitively in server scenarios (especially for traditional databases, where COW file systems tend to perform poorly), and it can completely disregard rotating hard drive performance.

Snapshots are read-only, unlike Btrfs's (but like Linux's LVM snapshots, which have been around for quite some time), and we won't even delve into online rebalancing and device addition.

APFS is a very welcome addition given that it brings Apple file systems into the 21st century, but the reason for its speedy implementation is that they purposefully constrained its goals, which is a good call in my opinion.

If you have the same workloads as Apple's customers and large amounts of SSD storage, you'll find Btrfs and ZFS both fit your needs on Linux.


[Disclosure: I work for SUSE]

> an estimate which definitely proved optimistic with regard to Btrfs.

SUSE has had Btrfs support for enterprises since SLE11SP2[1,2] (2012). And it's been the default for the root partition since SLE12[3] (2014). So it wasn't overly optimistic at all; it was actually a very accurate estimate. The same support applies for openSUSE, but I think they had it for longer.

[1] https://www.linux.com/news/snapper-suses-ultimate-btrfs-snap... [2] https://www.suse.com/communities/blog/introduction-system-ro... [3] https://www.suse.com/documentation/sles-12/stor_admin/index....


Do you have any good source to point people at when they say btrfs is unstable and corrupts data? (And I mean either positive or negative data; I don't want to be biased either way.) I'm kind of tired of people posting comments like that, which are based on "google is full of people saying this" or "I know one person who lost data" (but no idea which kernel it was - could well be from 2014).

If I look at btrfs patches on lkml, on the one hand I can see some fixes for data loss, but on the other they're usually close to "if you change the superblock while the log is zeroed and the new superblock is already committed and there's a power loss at exactly that point, you'll get corruption" - which are really obscure edge cases people are unlikely to ever hit.

So what can I look at to get a realistic picture of what's going on? (what would SUSE point me at)

(For the negative results, I know of the recent filesystem fuzzing presentation where btrfs comes out worst, but honestly I don't consider it interesting for real-world usage. Car analogy: I'm interested in how the car behaves on a typical road, not in which fizzy drinks added to the gas tank will break it.)


RE: the fs fuzzing, David Sterba, a Btrfs developer at SUSE, replied to this: https://www.spinics.net/lists/linux-btrfs/msg54454.html

As for "unstable and corrupts data", this is just not true. The many users running it on stable hardware don't have problems. I've used Btrfs single, raid0, raid10 and raid1 for years, and have never had a corruption that I didn't induce myself.

I did stumble upon parity corruptions in raid5 just days ago, however. The raid56 code is much, much newer, hasn't been as well tested, and has been regarded as definitely not ready for prime time. So that's a bug, and it'll get fixed.

Bunches of problems happen on mdadm and LVM raid too, due to drive SCT ERC being unset or unsupported, resulting in bad-sector recovery times that are longer than the kernel will tolerate. That results in SATA link resets, rather than the fs being informed which sector is bad so it can recover from a copy and fix up the bad sector.
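(To make that mismatch concrete, here's a rough Python sketch. The sysfs timeout path is the standard one on Linux, and smartctl's scterc values are in tenths of a second, but the helper itself is just something I made up for illustration.)

    from pathlib import Path

    def check_timeouts(dev="sda", erc_deciseconds=None):
        # Kernel gives up on a command after this many seconds (default 30).
        kernel_timeout = int(Path(f"/sys/block/{dev}/device/timeout").read_text())
        if erc_deciseconds is None:
            print(f"{dev}: ERC unknown/unsupported; a consumer drive may retry a bad")
            print(f"       sector far longer than the kernel's {kernel_timeout}s -> link reset")
        elif erc_deciseconds / 10 >= kernel_timeout:
            print(f"{dev}: ERC {erc_deciseconds / 10}s >= kernel timeout {kernel_timeout}s: risky")
        else:
            print(f"{dev}: ok, drive reports the bad sector before the kernel gives up")

    check_timeouts("sda", erc_deciseconds=70)  # 70 = 7.0s, as set by smartctl -l scterc,70,70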

So there are bugs all over the place, it's just the way it goes and things are getting quite a bit better.

It is totally true that Apple can produce their own hardware that doesn't do things like lie to the fs about FUA (or its equivalent of req_flush) being complete when it's not, or otherwise violate the order of writes the fs is expecting in order to be crash tolerant. But we're kinda seeing Apple go back to the old Mac days where you bought only Apple memory and drives; 3rd-party stuff just didn't happen then. The memory is now soldered onto the board, and it looks like the next generation of NVMe and storage technologies may be the same.
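(For a sense of what "order of writes" means here: the classic write-then-rename pattern below, plain Python and nothing Apple- or Btrfs-specific, only survives a crash if the device actually completes flushes when it claims to. Treat it as a sketch, not anyone's real implementation.)

    import os

    def atomic_replace(path, data):
        # If the device acknowledges flushes it hasn't actually done, these two
        # barriers mean nothing and a crash can leave an empty or garbled file.
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())      # barrier 1: file contents on stable storage
        os.rename(tmp, path)          # atomic swap of the directory entry
        dfd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
        try:
            os.fsync(dfd)             # barrier 2: directory update on stable storage
        finally:
            os.close(dfd)

    atomic_replace("settings.json", b'{"version": 2}')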

Windows and Linux will by necessity have file systems that are more general purpose than Apple's.


The thing about btrfs is, it's got lots and lots of optional features, of widely varying maturity.

Hence it's not very surprising that the supported btrfs in SUSE is a subset of all the available features: the ones that are most mature.

This is an illustrative patch from a couple years ago, not sure if still current:

http://kernel.opensuse.org/cgit/kernel-source/plain/patches....


>If Apple can actually pull off this turnover so quickly, does that suggest the complaints about Apple's declining software engineering quality were overblown?

The complaints are usually about minor parts of the OSX/iOS stack -- parts Apple might not even particularly prioritize.

A faulty filesystem, on the other hand, is something else entirely, and something they can't ship unless it's good.


I'm inclined to agree that 3 years to squeeze out a perfect file system is ambitious, to say the least. That said, consider the robustness of HFS+; APFS is blessed with desperate, captive users.


>That said, consider the robustness of HFS+

Besides being old, there's nothing much to the "robustness of HFS+". It's not like users are losing data left and right, as it's being painted. In fact it's an FS running just fine on about a billion devices...


Engineers who have worked on HFS+ believe that it actually is losing data left and right. I'm inclined to believe that they're right and that most people simply don't notice.


I used to work on the largest HFS+ installation in the world, and we saw data corruption all the time. Mostly large media files with block-sized sequences of zeroed data. We were lucky in that we had non-HFS+ file systems backing us up, but deeply unlucky in that the nature of our media was such that random corrupt blocks were much more likely to cause media-level problems rather than container-level ones, and thus were much harder to catch.
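(If anyone wants to check their own files for that symptom, a rough Python scan like the one below will find block-sized runs of zeroes. The 4 KiB block size is my assumption, and sparse or legitimately zero-padded files will show up too.)

    import sys

    BLOCK = 4096                     # assumed block size
    ZERO = bytes(BLOCK)

    def zeroed_blocks(path):
        # Return the indices of blocks that are entirely zero.
        hits, idx = [], 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(BLOCK)
                if not chunk:
                    break
                if chunk == ZERO:
                    hits.append(idx)
                idx += 1
        return hits

    for name in sys.argv[1:]:
        bad = zeroed_blocks(name)
        if bad:
            print(f"{name}: {len(bad)} all-zero {BLOCK}-byte blocks, first at block {bad[0]}")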


APFS doesn't add checksumming though, so it won't be an improvement


It doesn't have checksumming yet. I have a feeling it will be added before the final release since data integrity is one of the tenets they're pushing.


Links?


There is a lot of discussion online. Most of it can be found by searching for "hfs bitrot" on Google.

https://blog.barthe.ph/2014/06/10/hfs-plus-bit-rot/

> A concrete example of Bit Rot

> ... HFS+ lost a total of 28 files over the course of 6 years.


And the author's own conclusion, at the end of the post, was that it was due to bad hardware, not HFS+.

To quote:

>I understand the corruptions were caused by hardware issues. My complain is that the lack of checksums in HFS+ makes it a silent error when a corrupted file is accessed. This not an issue specific to HFS+. Most filesystems do not include checksums either. Sadly…
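(Which is also something you can paper over at user level until the fs does it for you: record digests once, re-verify later, and silent rot becomes a loud error. The manifest format and paths below are made up for illustration.)

    import hashlib, json, os

    def digest(path, bufsize=1 << 20):
        # SHA-256 of a file's contents, read in chunks.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(bufsize), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_manifest(root, out="manifest.json"):
        entries = {os.path.join(d, n): digest(os.path.join(d, n))
                   for d, _, files in os.walk(root) for n in files}
        with open(out, "w") as f:
            json.dump(entries, f, indent=2)

    def verify(manifest="manifest.json"):
        for path, want in json.load(open(manifest)).items():
            if digest(path) != want:
                print("MISMATCH", path)

    # build_manifest("/Volumes/Photos")   # once
    # verify()                            # on every backup run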


(Downvotes for being accurate? I'm not sure how even the "disagree" downvote applies here, as there's nothing to disagree with).


A bad filesystem would have corrupt metadata. Plain old corrupt data is the fault of the storage media, which does have its own error correction. Clearly it wasn't good enough, or the path from the HD back to the system can't report I/O errors.

BTW he didn't lose any data, since he had backups. If he had a checksummed filesystem, but not backups, he would still have lost data. Checksums, like RAID, aren't backups!


As a long time Mac OS X user, yeah, I've lost a lot of files to bit-rot in the last 10 years. Mostly MP3s with a little bit of corruption.


APFS won't solve that though


I know. And I am not happy about that. But I'm not sure if the issue really was bit rot or bugs in HFS+. I haven't had a lot of issues in years so I lean towards the latter.


Another longtime Mac OS X user here who has seen bitrot.


>> does that suggest the complaints about Apple's declining software engineering quality were overblown

The type of developer you'd have working on a filesystem is likely going to be from a different world than those who work on UI applications. As for their apps, the quality problems Apple faces are usually design-based rather than functional. When working on a filesystem, you're not going to be forced into developing a crappy application by some designer who ruins the whole thing.


Maybe Apple's software quality declined due to engineering prowess being siphoned away towards skunkworks projects like APFS.


They toyed with ZFS quite a few years back; when they opted out, don't you expect that they started a project of their own?



