Cool. The use case that comes to mind is verifying that you actually got the file you think you did when looking it up in some local or remote hash-indexed storage scheme.
It would be nice if we had (it probably exists already) a standard block-based scheme, so files larger than RAM could be handled without writing to disk. Making something like a .torrent for every file, with a checksum per block, and then checking the hash of the concatenated checksums, could do it. Right? I believe the real name for this is a hash list (BitTorrent's piece hashes work this way), and the tree-structured generalization is a Merkle tree.
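A minimal sketch of that scheme, assuming SHA-256 and an arbitrary 1 MiB block size (both are choices, not a standard): stream the file in fixed-size blocks, hash each block, then hash the concatenation of the per-block digests. Memory use stays bounded no matter how large the file is.

```python
import hashlib

BLOCK_SIZE = 1 << 20  # 1 MiB per block; an arbitrary choice for illustration


def hash_list_digest(path, block_size=BLOCK_SIZE):
    """Return (top_hash, block_hashes) for the file at `path`.

    Each block is hashed independently, then the top hash covers
    the concatenation of all per-block digests (a "hash list").
    Only one block is held in memory at a time.
    """
    block_hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            block_hashes.append(hashlib.sha256(block).digest())
    top = hashlib.sha256(b"".join(block_hashes)).hexdigest()
    return top, block_hashes
```

With this, a receiver can verify each block as it arrives against the published block hashes, and verify the block-hash list itself against the single top hash.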
Really this seems like something the FS should be asked to do. ZFS seems close, but I haven't found out how to address a block by its hash yet.