Yes, this is theoretically cool and all, but when I hear something like:
"Let's start with the easy one: how do we know it's necessary?
Some customers already have datasets on the order of a petabyte, or 2^50 bytes. Thus the 64-bit capacity limit of 2^64 bytes is only 14 doublings away."
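For what it's worth, the doubling arithmetic in the quote does check out; a minimal sketch (plain Python, nothing ZFS-specific assumed):

```python
# A petabyte is 2**50 bytes; a 64-bit filesystem addresses at most 2**64 bytes.
petabyte = 2 ** 50
limit_64bit = 2 ** 64

# Count how many times a petabyte dataset can double before hitting the ceiling.
doublings = 0
size = petabyte
while size < limit_64bit:
    size *= 2
    doublings += 1

print(doublings)  # -> 14
```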
It raises a huge red flag from a business point of view. In 2004 "some customers" had datasets that would cause issues sometime around 2015 - a problem that ZFS purported to solve (among admittedly much more relevant features). By planning for a future that was possibly ten years away for "some customers" instead of focusing on what was relevant to the larger customer base, Sun managed to continue its slow profitability death march until Oracle finally snatched it up this year.
Something to keep in mind when you decide to add "cool theoretical feature X" or "unparalleled scalability" to your four month old startup...
I lack the ability to downvote you, but this comment is clearly a series of fallacies. You've assumed at least two things I don't see in evidence:
a) That the customers Sun served in 2004 didn't care about anything ten years away. That their time scale for data on disk was less than a decade. And that marketing to them about how ZFS planned for the future was not effective.
b) That building a 128-bit filesystem (as opposed to a 64-bit filesystem) substantially impacted the amount of time it took to engineer the filesystem or impacted the adoption rate by customers. Clearly those are not facts in evidence. Since we know that an integer behaves pretty much the same regardless of whether it's 32, 64, or 128 bits, it's probably safer to assume the opposite.
I take the opposite lesson: I think when planning your startup, a little thought into "are we representing data in a way which we can extend into the future?" is not a terrible idea, especially when the choice is between a 32, 64, or 128 bit integer.
ZFS has been one of the biggest selling points for Solaris over the last 5 years. There are a huge number of places that use Solaris only for ZFS support. For example, the SmugMug people have written about it a number of times.
ZFS may have been, but not the 128-bit support. I mentioned that there are "admittedly much more relevant features," which are in fact what SmugMug likes:
"ZFS is the most amazing filesystem I’ve ever come across. Integrated volume management. Copy-on-write. Transactional. End-to-end data integrity. On-the-fly corruption detection and repair. Robust checksums. No RAID-5 write hole. Snapshots. Clones (writable snapshots). Dynamic striping. Open source software."