
How do we know that we've hit a wall with lossless compression? As a non-expert, I've been really impressed with the likes of zstd and lz4, which are relatively new.



I should clarify: There's still room for improvement in the speed of lossless compression/decompression operations, and that's where most improvements happen nowadays. But there's little room for improvement in compression ratio, because most modern algorithms produce outputs that are very close to random already.

More background: https://marknelson.us/posts/2006/06/20/million-digit-challen...


> because most modern algorithms produce outputs that are very close to random already.

Compressing well-compressed or random data would be impossible, but how does that indicate there's little room for improvement?



