The decoder is Apache 2.0 licensed. For most applications (e.g. websites, apps, games), you only need the decoder.
There is still a lot of room for improvement in the encoder, which is why (at least for now) it has the LGPL license, to ensure that improvements can be integrated in the reference implementation.
The LGPL reference implementation is what has scared me off, along with many others I know who had an interest in this work. In many corporate environments, LGPL is untouchable.
I really don't understand WHY, though, aside from paranoia.
The entire point of LGPL is supposed to be that your corporation provides a binary blob which can be linked against any other such binary blob that supports the requested interface. In this manner, someone can use any modified (usually security-updated) version of the LGPL library with the rest of your work.
I guess if you require obfuscation or signing of your company's blob, things are more difficult. However, in that case, why aren't you using a security model that signs the code you own and segments out the code you do not?
It will be great when you only need one file for all of your responsive image versions and can just take the part you need. Plug: I wrote about this, along with other types of compression, in my high-performance web app book (https://www.packtpub.com/mapt/book/Application%20Development...).
I'm not convinced that FLIF can really do that. One file for all versions is a nice idea, but in practice, the first 20KB of a progressive FLIF is often an inferior substitute for a 20KB JPEG, to say nothing of BPG.
Note that the image in the example was chosen specifically to highlight FLIF's strengths, but the math shows that you could combine the uninterlaced FLIF with the 17KB JPEG 2000 and still take up less space than the interlaced FLIF.
FLIF looks like a very impressive lossless codec indeed, and is even competitive when used as a lossy codec, but the quality of partially downloaded interlaced files just isn't competitive with lossy codecs, including lossy FLIF.
I agree: if you're fine with lossy, then it's hard to beat lossy formats with a lossless format.
Maybe in some future version of FLIF, with some DCT/DWT-like transform and an option to postpone least significant bits until the end of the bitstream, we can truly get there. But at the moment we're not there yet.
Thank you! It looks amazing. So unless processing speed or memory requirements are a handicap, FLIF could be a real generic graphics compression candidate :-)
Very good question. Do you or anyone else have a good test set of such images?
My gut feeling says that JBIG2 probably outperforms FLIF on repetitive images (e.g. text), since it specifically detects such patterns. On non-repetitive images, maybe FLIF is better. But it would be nice to have some benchmarks.
The thing about flif that feels good on the old noggin is the fact that you can trade off quality for size just by truncating the file. Imagine a decimal point at the beginning of the file, and your image is a number between 0 and 1. Approximate that number to N digits, and you get an approximation of your image.
The truncated version is basically a lower-resolution image. So if you have a 5000x5000px image and display it on your page at 100x100px, the browser only needs to download a small amount of data to show it at that lower resolution. Obviously, at a lower resolution it won't match the original pixel for pixel, but if you downloaded the whole file it would.
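The "image as a number between 0 and 1" analogy can be sketched with ordinary decimal digits (this is just an illustration of the idea, not FLIF's actual bitstream): keep only the first N digits and you get a bounded approximation of the full value, just as a truncated progressive FLIF gives an approximation of the full image.

```python
def truncate(digits: str, n: int) -> float:
    """Interpret the first n digits as the decimal fraction 0.d1d2...dn."""
    return int(digits[:n] or "0") / 10 ** min(n, len(digits))

# A stand-in for the encoded image: one long digit stream.
image_as_number = "141592653589793"

full = truncate(image_as_number, len(image_as_number))
preview = truncate(image_as_number, 4)  # "download" only the first 4 digits

# The error is bounded by the position where we cut the stream.
assert abs(full - preview) < 10 ** -4
```

The point is that every prefix of the stream is itself a valid (coarser) version of the same value, so no separate "thumbnail file" needs to exist.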
FLIF does support a "lossy" encoding method too. The compression FLIF uses (MANIAC trees) predicts what the next pixels will be. If you modify your pixels by a small amount so they are closer to what FLIF expects, the image compresses much better but still looks very similar to the original. The stored version with the modified pixels is lossless, in the sense that it perfectly reproduces the modified image. But it's lossy in the sense that you had to change the pixels in the first place to better benefit from the way FLIF analyzes and compresses images. The major benefit of this type of lossy compression is that the image doesn't degrade with multiple saves, unlike JPEG, where each save compounds the artifacts, causing the "Xerox effect". Here's a video of how FLIF benefits from its style of lossy encoding:
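The pixel-nudging idea can be sketched with a deliberately naive predictor (this is NOT the actual MANIAC algorithm, just the general technique): predict each pixel from its left neighbour, and when a pixel is already within some tolerance of the prediction, snap it onto the prediction so the stored residual becomes zero. The nudged image is then stored losslessly, but it's full of zero residuals and long runs, which entropy coders love.

```python
def nudge_row(row, tolerance=2):
    """Nudge pixels toward a naive left-neighbour prediction.

    Pixels within `tolerance` of the prediction are replaced by it,
    producing zero residuals; everything else is kept exactly.
    """
    out = [row[0]]                   # first pixel has no predictor
    for pixel in row[1:]:
        predicted = out[-1]          # naive left-neighbour prediction
        if abs(pixel - predicted) <= tolerance:
            out.append(predicted)    # snap to prediction: residual = 0
        else:
            out.append(pixel)        # keep the pixel as-is
    return out

row = [100, 101, 99, 150, 151, 150]
print(nudge_row(row))   # -> [100, 100, 100, 150, 150, 150]
```

Re-encoding the nudged output reproduces it bit-for-bit (the loss happened once, up front), which is why repeated saves don't compound artifacts the way JPEG's do.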
The encoder option -Q<quality> (that's a capital Q; lowercase -q is used at decode time to get a lower-quality preview by reading only part of the file) can be used to do lossy encoding, where -Q100 is lossless and -Q0 is usually the most lossy you would want to go. But you can go lower if you want, using a negative number (e.g. -Q-50); there is not really any lower bound.
The bitstream is kind of finalized as "FLIF16" now, just like GIF87 was finalized in 1987. Future bitstream-changing stuff will go in a new branch, which will eventually become the "FLIF17" or "FLIF18" format (whatever the year will be). Future decoders will decode both FLIF16 and FLIF1x.
On September 19, 2016 Jon Sneyers wrote "I'm going to drop the "rc" and tag what we have now as v0.2. This will be FLIF16 and all bitstream-breaking changes will have to be in a new branch, which will become FLIF17 or FLIF18 or whatever. " (https://gitter.im/FLIF-hub/FLIF?at=57df798133c63ba01a135f0d)
I'd be curious to see what the result would be if you took the best format for each individual image and added them together; would it be a significant difference over FLIF by itself?
The choice of GPLv3 licensing is unfortunate, since it will limit native browser support.
I'm open to negotiation if Adobe would want to include the FLIF encoder in Photoshop. The other contributors have signed a CLA that allows me to change the licensing to anything more permissive than GPLv3. So if Adobe wants it, they can pay for a non-copyleft license (e.g. Apache 2.0). If that ever happens, I'll be happy to share the licensing fees with the other contributors and donate some of it to the FSF.
Why would they want to pay?
Unless the format is widespread and the fee is much cheaper than implementing their own, what interest do they have in paying? I feel it's the opposite right now.
Well, IANAL, but I think the original creator of FLIF could still decide to give a special license to Adobe if they want to negotiate one, no?
EDIT: practically confirmed elsewhere in the thread by the original author, with motivation:
> There is still a lot of room for improvement in the encoder, which is why (at least for now) it has the LGPL license, to ensure that improvements can be integrated in the reference implementation
How does this compare against Dropbox's Lepton? It seems like any of these image compression formats would make sense to support in archiving software like 7-Zip, WinZip, Gzip, etc.
Lepton is a recompression format specifically for JPEG. It's lossless in the sense that you can reconstruct the input JPEG perfectly, but of course JPEG is itself an inherently lossy format.
FLIF is a lossless image format. It's not a good idea to give it an ex-JPEG file as input, just like it's not a good idea to convert JPEG to PNG, because it will need lots of bytes to losslessly compress all those JPEG artifacts that JPEG gets for free :).
You'd want to maximize adoption, and a BSD license can help there. Adoption of Xiph.org's Vorbis was supported in good part by such a decision.