Beating PNG is not actually hard: zlib is slow at compression, and as a general-purpose compressor it isn't especially good at graphics anyway. PNG doesn't even do RGB decorrelation, which can be a really big win (e.g. YCgCo).
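For the curious, the reversible integer variant of that color transform (YCoCg-R) is just a few shifts and adds per pixel. A rough sketch below — this is an illustration of the general technique, not anything PNG or IZ actually implements:

```python
def rgb_to_ycocg_r(r, g, b):
    """Forward lossless YCoCg-R transform (integer, exactly reversible)."""
    co = r - b
    tmp = b + (co >> 1)   # arithmetic shift = floor division by 2
    cg = g - tmp
    y = tmp + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Inverse transform; recovers r, g, b bit-exactly."""
    tmp = y - (cg >> 1)
    g = cg + tmp
    b = tmp - (co >> 1)
    r = b + co
    return r, g, b
```

The point is that luma/chroma planes compress better than raw R, G, B because most natural-image energy ends up in Y, while Co/Cg residuals cluster near zero.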
Ironically, on non-graphics material, most lossless video encoders beat PNG, too. FFV1 is particularly good at grainy images, and is probably quite difficult to beat without either a much fancier predictor (e.g. LPC instead of median) or much more costly entropy coding (context mixing).
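The median predictor in question (usually called MED, as specified in JPEG-LS; FFV1 uses the same idea) is tiny, which is part of why it's hard to beat without spending much more effort elsewhere. A minimal sketch:

```python
def med_predict(left, above, above_left):
    """MED / LOCO-I predictor: clamp to min/max of the neighbors near an
    edge, otherwise use the planar gradient left + above - above_left."""
    if above_left >= max(left, above):
        return min(left, above)
    if above_left <= min(left, above):
        return max(left, above)
    return left + above - above_left
```

Each pixel is predicted from its left, above, and above-left neighbors, and only the (usually small) residual is entropy coded.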
It would be great if the author could actually write a description of how IZ's algorithm works. The C++ source is fairly complex and has no comments whatsoever.
For fast lossless image compression algorithms that are used by the world's leading visual effects studios, check out OpenEXR:
http://www.openexr.com
From the Features page:
"The current release of OpenEXR supports several lossless compression methods, some of which can achieve compression ratios of about 2:1 for images with film grain. OpenEXR is extensible, so developers can easily add new compression methods (lossless or lossy)."
For any algorithm that’s heuristically searching a very large space (in this case, the space of model parameters to compress a given string), you can expect it to do better given more time. So it would actually be surprising if the best compressors, in terms of compression ratio, were fast.
In practice, the competitions that people like to use to measure which compressor is “best” have resource limits, and compressors from the PAQ family are at or near the top of most rankings.
http://mattmahoney.net/dc/text.html has some nice charts of the Pareto frontier for at least one competition, which let you see the most efficient compressor for a given compression ratio and vice versa. The log scale gives a sense of the diminishing returns: it's really easy to compress most structured data to half its original size, but getting from, say, 0.214 to 0.213 can be a hell of a lot of work.
Right now the CPU usage of a project I'm working on is dominated by PNG compression. An alternative with significantly faster compression times would be very useful. I'll be watching this closely, and hope that it becomes possible to decode IZ images in the browser (or implement it myself).
PNG compressors generally try several different row-filter strategies, which can get very expensive. You might be able to ask the compression library to be a little more hands-off if you don't need the best compression ratio.
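If you're driving zlib directly, the knobs look something like this (Python's zlib shown purely for illustration; libpng exposes equivalent settings):

```python
import zlib

data = bytes(range(256)) * 256  # stand-in for filtered scanline bytes

# Level 1 (Z_BEST_SPEED) skips most of the match-searching effort;
# Z_FILTERED biases the encoder toward the small deltas PNG filters produce.
fast = zlib.compressobj(1, zlib.DEFLATED, 15, 8, zlib.Z_FILTERED)
blob = fast.compress(data) + fast.flush()

# The output is still a perfectly valid zlib stream, just less tightly packed.
assert zlib.decompress(blob) == data
```

Level and strategy only affect the encoder's effort and heuristics, never the decoder, so you can tune them freely without breaking compatibility.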
Thanks for the suggestion. Unfortunately I'm already specifying the pixel delta strategy that works best with my images, and using the fastest zlib compression setting. Going with no zlib compression would turn my CPU problem into a bandwidth problem :).