SHA-1 is still perfectly fine for some applications like detecting duplicate files on a storage medium
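For the duplicate-detection use case, a minimal sketch (my own, not from any tool mentioned here) of grouping files by SHA-1 digest — any group with more than one path is a probable duplicate set:

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root: str) -> list[list[str]]:
    """Walk `root`, hash every file with SHA-1, and return groups of
    paths that share a digest (i.e. probable duplicates)."""
    by_digest = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.sha1()
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so large files don't load into memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            by_digest[h.hexdigest()].append(path)
    return [paths for paths in by_digest.values() if len(paths) > 1]
```

Swapping `hashlib.sha1` for `hashlib.md5` or `hashlib.sha256` is a one-word change, which is part of why the algorithm choice matters so little for this use.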
Absolutely agree, especially when speed is a workable trade-off and real-world hash collisions are unlikely enough to be an acceptable risk. For financial data, especially files not belonging to me, I would keep md5+sha1+sha256 checksums and maybe even GPG-sign a manifest of the checksums ... because why not. For my own files md5 has always been sufficient; I have yet to run into a real-world collision.
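The md5+sha1+sha256 belt-and-suspenders approach is easy to sketch with Python's `hashlib`, hashing each file once while feeding all three algorithms (the function name and line format here are my own; the resulting manifest is what you would then GPG-sign):

```python
import hashlib

ALGOS = ("md5", "sha1", "sha256")

def checksum_manifest_line(path: str) -> str:
    """Return one manifest line containing md5, sha1, and sha256
    digests for `path`, reading the file only once."""
    hashes = {name: hashlib.new(name) for name in ALGOS}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            for h in hashes.values():
                h.update(chunk)
    digests = "  ".join(hashes[name].hexdigest() for name in ALGOS)
    return f"{digests}  {path}"
```

Collect one line per file into a manifest, then `gpg --clearsign manifest.txt` covers all three digests at once.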
FWIW, anyone using `rsync --checksum` is still using md5. Not that long ago (until around 2014, I think) it was using md4. I would be surprised if rsync moved beyond md5 any time soon. I would love to see all the common checksum algorithms become CPU instructions. From my build's `rsync --version`:
```
Optimizations:
    no SIMD-roll, no asm-roll, no openssl-crypto, asm-MD5
Checksum list:
    md5 md4 none
Compress list:
    zstd lz4 zlibx zlib none
Daemon auth list:
    md5 md4
```