
It’s a shame modern file systems don’t generate and store hashes from common algorithms as file metadata.



Some file systems, like ZFS and btrfs, support checksums at the block level. However, a file-level checksum is not trivial on a file system with random writes: imagine 1 byte changing in a 20 GB file; updating the checksum would then require a full file scan. Recalculating the file checksum from all the block checksums could be a solution, but that's far from any standard.
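
For example, a minimal sketch in Python (not how ZFS or btrfs actually store their checksums; the block size and SHA-256 here are just assumptions): derive the file-level checksum as a hash over the per-block hashes, so a one-byte write only rehashes its own block plus the short list of block hashes.

    import hashlib

    BLOCK_SIZE = 128 * 1024  # hypothetical 128 KiB blocks

    def block_hashes(data: bytes) -> list[bytes]:
        # Hash each fixed-size block independently.
        return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
                for i in range(0, len(data), BLOCK_SIZE)]

    def file_checksum(hashes: list[bytes]) -> bytes:
        # File-level checksum = hash of the concatenated block hashes.
        return hashlib.sha256(b"".join(hashes)).digest()

    # Initial state: one full scan.
    data = bytearray(b"x" * (20 * BLOCK_SIZE))
    hashes = block_hashes(bytes(data))
    checksum = file_checksum(hashes)

    # One byte changes: rehash only the affected block, then the small hash list.
    offset = 5 * BLOCK_SIZE + 17
    data[offset] = 0x41
    blk = offset // BLOCK_SIZE
    hashes[blk] = hashlib.sha256(
        bytes(data[blk * BLOCK_SIZE:(blk + 1) * BLOCK_SIZE])).digest()
    checksum = file_checksum(hashes)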


Wouldn't a Merkle tree, like the ones torrents or dat use, avoid this? You'd only have to hash the changed chunk, and then updating the root hash is really cheap.

https://datprotocol.github.io/book/ch01-02-merkle-tree.html
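
A rough sketch of that idea in Python (not the actual dat or BitTorrent layout; the chunking and SHA-256 are assumptions): keep every tree level around, so updating one chunk only recomputes the O(log n) hashes on its path to the root.

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def parent(prev: list[bytes], i: int) -> bytes:
        # Hash a pair of sibling nodes; a lone node at the end is hashed on its own.
        right = prev[i + 1] if i + 1 < len(prev) else b""
        return h(prev[i] + right)

    def build_levels(chunks: list[bytes]) -> list[list[bytes]]:
        # levels[0] = leaf hashes, levels[-1] = [root hash].
        levels = [[h(c) for c in chunks]]
        while len(levels[-1]) > 1:
            prev = levels[-1]
            levels.append([parent(prev, i) for i in range(0, len(prev), 2)])
        return levels

    def update_chunk(levels: list[list[bytes]], index: int, new_chunk: bytes) -> bytes:
        # Rehash one leaf, then only the nodes on its path to the root.
        levels[0][index] = h(new_chunk)
        for lvl in range(1, len(levels)):
            index //= 2
            levels[lvl][index] = parent(levels[lvl - 1], 2 * index)
        return levels[-1][0]  # new root hash

    chunks = [bytes([i]) * 1024 for i in range(8)]      # eight toy chunks
    levels = build_levels(chunks)
    new_root = update_chunk(levels, 3, b"\xff" * 1024)  # O(log n) work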


Yep, my last sentence also covered Merkle trees.



