The wholesale cost of transferring a gigabyte of data is about 1-2 cents (worst case, buying 2Gbps or more of transit). That is the very most any vendor pays, and they probably pay far less than that.
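For a sense of where a number like that comes from, here's a rough sketch of the arithmetic; the $4/Mbps/month transit price and the assumption of a fully utilized pipe are illustrative guesses, not figures from the comment above.

```python
# Back-of-the-envelope: converting a per-Mbps transit price into cents per GB.
# The $4/Mbps/month price and 100% pipe utilization are assumptions for
# illustration; real commits and utilization vary.

commit_mbps = 2000                     # a 2 Gbps commit
price_per_mbps_month = 4.0             # assumed wholesale transit price (USD)
seconds_per_month = 30 * 24 * 3600

monthly_cost = commit_mbps * price_per_mbps_month                 # $8,000
gb_per_month = commit_mbps * 1e6 * seconds_per_month / 8 / 1e9    # ~648,000 GB

print(f"~{100 * monthly_cost / gb_per_month:.1f} cents per GB")   # ~1.2 cents
```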
For storage costs, you could take the capacity of a 2TB drive, triple the cost to account for servers, RAID and other overhead, and come up with a per-GB charge per year (drives aren't replaced every year, but then there are power/cooling and other costs).
So a 2TB drive is, say, $200; triple that is $600; divide by roughly 1800GB of formatted capacity and you get about 33 cents per GB per year.
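The same calculation in code, with the drive price, overhead multiplier and formatted capacity taken straight from the numbers above (they're the commenter's assumptions, not vendor figures):

```python
# Rough per-GB-per-year storage cost from the figures in the comment above.

drive_price_usd = 200.0          # assumed price of a 2TB drive
overhead_multiplier = 3          # servers, RAID, power/cooling, etc.
formatted_capacity_gb = 1800.0   # usable space after formatting/RAID

total_cost = drive_price_usd * overhead_multiplier        # $600
cost_per_gb_year = total_cost / formatted_capacity_gb     # ~$0.33

print(f"~${cost_per_gb_year:.2f} per GB per year")
```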
I'm tired of this ZFS hype. ZFS for Linux is absolutely not mature. First, you can't distribute binaries. Second, the current version is 0.5.x, which is alpha, and after trying to compile it I can tell you it amply proved that it is, indeed, alpha quality. Third, it's missing a rather useful feature, the POSIX layer, i.e. the ability to mount the filesystem. Slight limitation, isn't it?
Then I can mention a couple of other small problems. ZFS isn't cluster-aware at all; in fact, it's terrible as a cluster filesystem. I know of a fairly large storage platform that sent back 2PB of Sun storage because, you know, aggregating 2PB by stacking iSCSI volumes and running RAID-Z on top simply doesn't work and doesn't scale, though that's precisely what Sun tried to sell them. That pretty much means ZFS isn't a perfect fit for the cloud, at least not for the people actually running it.
Cloud storage doesn't mean one gigantic filesystem. ZFS can easily handle as many drives as you can fit in one machine, and if you scale horizontally then that's all you have to worry about. It also provides checksumming, compression, and ease of management, which really matter at scale. If you need a single 2PB filesystem then you should look elsewhere, but if you need a local filesystem I don't think there's anything better than ZFS right now.
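To make the single-node, scale-horizontally model concrete, here's a minimal sketch of provisioning one such node; the pool name and device names are placeholders, and this is just standard zpool/zfs usage, not anything specific to the setups discussed above.

```python
import subprocess

# Provision one storage node: a single-machine pool with checksumming
# (on by default in ZFS) and compression enabled. Scaling out means
# adding more nodes like this, not growing one giant filesystem.

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("zpool", "create", "tank", "raidz2", "sdb", "sdc", "sdd", "sde")
run("zfs", "set", "compression=on", "tank")   # transparent compression
run("zpool", "status", "tank")                # health and checksum error report
```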
Because it's implemented as a single-system filesystem, there are no cluster capabilities built in at the moment. The problem is that building the sort of huge filesystems the cloud demands generally means clustering. Of course you can go buy a DataDirect S2A 9900 and have a nice 1PB filesystem in a rack, but the problem will be back when you need to extend further.
Can't Nasuni bundle files together? Given that for a 1 KB file the majority of the fetch time is latency, if small files were bundled into 10 KB chunks (for example), the transaction cost would go down by an order of magnitude without affecting the user experience, surely? It seems unlikely that someone would keep hitting a large number of separate 10 KB chunks for only 1 KB each without getting a significant number of cache hits.
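A quick illustration of that argument: when a fetch is latency-dominated, pulling a 10 KB bundle takes barely longer than pulling one 1 KB file, but it amortizes the per-request fee across ten files. The latency, bandwidth and fee figures below are assumptions for the example, not Nasuni's or any provider's actual numbers.

```python
# Latency-dominated fetches: compare one 1 KB object against a 10 KB bundle
# holding ten such files. All figures are illustrative assumptions.

latency_s = 0.050          # assumed 50 ms round trip per request
bandwidth_bps = 10e6       # assumed 10 Mbit/s effective throughput
request_fee_usd = 1e-5     # assumed per-transaction charge

def fetch_time(size_bytes: int) -> float:
    """Time to fetch one object: fixed latency plus transfer time."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

t_single = fetch_time(1 * 1024)     # one 1 KB file on its own
t_bundle = fetch_time(10 * 1024)    # a 10 KB bundle of ten such files

print(f"1 KB fetch:   {t_single * 1000:.1f} ms, fee per file ${request_fee_usd:.5f}")
print(f"10 KB bundle: {t_bundle * 1000:.1f} ms, fee per file ${request_fee_usd / 10:.6f}")
```

With these numbers the bundle adds well under 10 ms to the fetch while cutting the per-file transaction cost by 10x, which is the order-of-magnitude saving the comment describes.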