I quite liked this article from about a month ago on "The Case for JPEG XL": https://news.ycombinator.com/item?id=33442281



The case is not that compelling to me…

> * Lossless JPEG recompression

Obviously, jpeg has this. I know it is supposed to be a 20% reduction in size, but that is a relatively small incremental improvement.

> * Progressive decoding

jpeg also has this, and it’s a niche feature these days.

> * Lossless compression performance

png covers this

> * Lossy compression performance

Nice, but a relatively small incremental improvement in the general case.

> * Deployable encoder

This isn’t a case for jpeg xl, just table stakes for any format.

> * Works across the workflow

Not really a reason to include jpeg xl in the browser unless it actually is being widely used across workflows. If anything, this is something of a negative, given the resulting complexity of the codec and image format, and the way it muddles the distinction between the authoring and published forms of an image.

There are negatives too, so it's not enough to be a little better for some cases. Codecs are attack vectors: an all-new complex codec is an all-new broad attack surface. And, of course, it's yet another feature to test and maintain, which costs time and necessarily pulls focus away from other things.


> Obviously, jpeg has this. I know it is supposed to be a 20% reduction in size, but that is a relatively small incremental improvement.

Let's not forget that a 20% reduction across the entire internet is very significant.
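
As a rough illustration, here's one way to check that saving on a single file, assuming libjxl's cjxl/djxl command-line tools are installed (the filenames are hypothetical):

    # Losslessly recompress a JPEG with cjxl, reconstruct it with djxl,
    # and measure the size saving.
    import os
    import subprocess

    src = "photo.jpg"  # hypothetical input

    # cjxl transcodes an existing JPEG losslessly by default
    # (it does not re-encode the pixels)
    subprocess.run(["cjxl", src, "photo.jxl"], check=True)

    # djxl should reconstruct the original JPEG bit-for-bit from the .jxl
    subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)
    assert open(src, "rb").read() == open("roundtrip.jpg", "rb").read()

    saving = 1 - os.path.getsize("photo.jxl") / os.path.getsize(src)
    print(f"saved {saving:.1%}")  # commonly in the ~20% ballpark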

> jpeg also has this, and it’s a niche feature these days.

Because we can't rely on it, I'd suspect?

> png covers this

Without covering the rest.

> Not really a reason to include jpeg xl in the browser unless it actually is being widely used across workflows.

This line of thinking will inevitably lead to a chicken-and-egg problem. Just like with EdDSA certificates and, for example, HTTP/2 PUSH (support for which was deprecated at basically the same time some frameworks finally added it).

I think the web doesn't move as fast as Google thinks or hopes it does, and that has made, and will keep making, people reluctant to adopt the next new features.


I think progressive jpeg works just fine. I recall it being quite popular in the early days of the internet, when loading a large image could take a long time over a dial-up landline. You'd get a blurry preview that would incrementally become more detailed.

I agree that it's niche these days, because modern connection speeds typically mean it's a waste of resources to iterate on the picture instead of loading it all at once. Even on mobile networks the issues are connection reliability and latency; once the image starts pouring in, it will likely arrive in its entirety quite quickly.

If you load images large enough that this becomes a real issue, you would generally (in my experience) just have a smaller independent thumbnail/preview image that you display while the large version is loading. That usually gives you more control over how things will look and avoids having a blurry mess on the user's screen.
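
For what it's worth, a minimal sketch of that thumbnail-first approach with Pillow (the filenames and sizes are arbitrary):

    # Generate a small preview to show while the full image loads.
    from PIL import Image

    with Image.open("large.jpg") as img:
        preview = img.copy()
        preview.thumbnail((320, 320))  # fits within 320x320, preserving aspect ratio
        preview.save("large_preview.jpg", quality=70)
    # The page shows large_preview.jpg first and swaps in large.jpg once it arrives.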


I don't think that's accurate. Progressive JPEG has been the biggest winner among image formats during the last 10 years. It grew from pretty much nothing to about 25% of all JPEGs today.

Much of this is powered by mozjpeg creating progressive jpegs by default. Chrome and others render them progressively, with recent (2019–21) improvements in quality. While the first round or two of updates can be noticeable, the last 40–55% of loading is usually impossible to notice as a change.
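
If you're not using mozjpeg itself, note that plain Pillow can also emit progressive scans; this is just the baseline flag, not mozjpeg's tuned scan scripts:

    # Write a progressive JPEG with Pillow (pip install Pillow).
    from PIL import Image

    with Image.open("input.png") as img:
        img.convert("RGB").save("out.jpg", quality=85, progressive=True)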


Interesting! That may be why I never noticed it. Thank you for the correction which unfortunately few people will see...


> Let's not forget that a 20% reduction across the entire internet is very significant.

In 2003. Now that we have streaming video, it's pretty minor.


I can assure you that anyone dealing with large volumes of storage and/or bandwidth for still images (i.e. pretty much any high-traffic website), where costs directly related to storage/bandwidth are measured in millions of dollars, will not consider a 20% saving to be a "pretty minor" thing.

I agree that in video the savings can be even more substantial. But keep in mind that the median web page still has zero videos, while carrying about 1 MB of images. In practice, most of the bytes transferred in web browsing are html (which is very small after compression) and images. Javascript, css and fonts also contribute to the page weight, but those tend to be locally cached since they're mostly static compared to the html and images.
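
Back-of-the-envelope with those numbers (the page-view count is a made-up illustration, not a measured figure):

    # 20% off ~1 MB of images per page adds up quickly at scale.
    image_bytes_per_page = 1_000_000      # ~1 MB of images on the median page
    saving_ratio = 0.20                   # ~20% from lossless recompression
    page_views_per_day = 1_000_000_000    # hypothetical large site/CDN

    saved = image_bytes_per_page * saving_ratio * page_views_per_day
    print(f"{saved / 1e12:.0f} TB/day")   # 200 TB/day under these assumptions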


Since you'll have to keep supporting clients without jpeg XL support, however, storage savings are unlikely to materialize any time soon, if ever.
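
To make that concrete: serving both formats means keeping both on disk. A minimal content-negotiation sketch (Flask is an arbitrary choice here, and the filenames are hypothetical):

    # Serve .jxl only to clients that advertise support in the Accept header.
    from flask import Flask, request, send_file

    app = Flask(__name__)

    @app.route("/photo")
    def photo():
        if "image/jxl" in request.headers.get("Accept", ""):
            return send_file("photo.jxl", mimetype="image/jxl")
        return send_file("photo.jpg", mimetype="image/jpeg")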


And also in 2016: "Lepton image compression: saving 22% losslessly from images"

https://dropbox.tech/infrastructure/lepton-image-compression...

It's now deprecated, though, and somewhat tragicomically they suggest switching to a different format and mention JPEG XL as an alternative.


PNG quality AND progressive decoding. AFAIK there is no other supported codec offering this.


PNGs and GIFs can be interlaced, which is similar to progressive decoding.


Not really similar. 10% of an interlaced image is effectively a bunch of horizontal lines. 10% of a progressive image is usually high enough fidelity to present the image in full colour, and even less suffices if you are happy with greyscale.
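
To put numbers on that, here's the pixel coverage after each Adam7 pass, counted over one 8x8 tile (bytes on the wire differ from pixel counts, so this is only indicative):

    # Fraction of pixels delivered after each Adam7 interlacing pass.
    ADAM7 = [  # (x_start, y_start, x_step, y_step) per the PNG spec
        (0, 0, 8, 8), (4, 0, 8, 8), (0, 4, 4, 8), (2, 0, 4, 4),
        (0, 2, 2, 4), (1, 0, 2, 2), (0, 1, 1, 2),
    ]

    seen = 0
    for i, (x0, y0, dx, dy) in enumerate(ADAM7, 1):
        seen += len(range(x0, 8, dx)) * len(range(y0, 8, dy))
        print(f"pass {i}: {seen}/64 pixels ({seen / 64:.1%})")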



