
SHA-1 is still perfectly fine for some applications like detecting duplicate files on a storage medium
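
The duplicate-detection case really is that simple; a minimal sketch with GNU coreutils (the /data path is made up):

    # print groups of files whose SHA-1 digests match
    # (each sha1sum line starts with the 40-hex-char digest, so compare that prefix)
    find /data -type f -exec sha1sum {} + | sort | uniq -w40 --all-repeated=separate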

Absolutely agree, especially when speed is a worthwhile trade-off and you accept that real-world hash collisions are unlikely and perhaps an acceptable risk. For financial data, especially files not belonging to me, I would keep md5+sha1+sha256 checksums and maybe even GPG-sign a manifest of the checksums ... because why not. For my own files md5 has always been sufficient; I have yet to run into a real-world collision.
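
A minimal sketch of that manifest approach (GNU coreutils + GnuPG; the *.dat glob is made up):

    # one manifest per algorithm, so each *sum tool can verify its own
    md5sum    *.dat > MANIFEST.md5
    sha1sum   *.dat > MANIFEST.sha1
    sha256sum *.dat > MANIFEST.sha256

    # detached ASCII signature (writes MANIFEST.sha256.asc)
    gpg --armor --detach-sign MANIFEST.sha256

    # later: verify the signature first, then the checksums
    gpg --verify MANIFEST.sha256.asc MANIFEST.sha256
    sha256sum --check MANIFEST.sha256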

FWIW, anyone using `rsync --checksum` is still using md5. Not that long ago (2014, I think) it was still using md4. I would be surprised if rsync moved beyond md5 any time soon. I would love to see all of these checksum algorithms get dedicated CPU instructions. For reference, my build reports:

    Optimizations:
        no SIMD-roll, no asm-roll, no openssl-crypto, asm-MD5
    Checksum list:
        md5 md4 none
    Compress list:
        zstd lz4 zlibx zlib none
    Daemon auth list:
        md5 md4
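
That listing looks like the capability summary `rsync --version` prints on rsync 3.2+, so it's easy to check what your own build supports; a quick sketch (the src/ path and backup: host are made up):

    # show which checksum/compress algorithms this build supports
    rsync --version

    # force MD5 explicitly for the --checksum pre-transfer pass
    rsync --checksum --checksum-choice=md5 -av src/ backup:/srv/dst/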



You're never going to _accidentally_ run into an MD5 collision. Back-of-the-envelope: for random inputs, the chance of any collision among n files is roughly n^2 / 2^129, so even a trillion files (n ~ 2^40) gives about 2^-49, odds of roughly one in 500 trillion.



