Articles about the merits of JPEG XL come up with some regularity on Hacker News, as if to ask, "why aren't we all using this yet?"
This one has a section on animation and cinemagraphs, saying that video formats like AV1 and HEVC are better suited, which makes sense. Here's my somewhat off-topic question: is there a video format that requires support for looping, like GIFs? GIF is a pretty shoddy format for video compared to a modern video codec, but if a GIF loops, you can expect it to loop seamlessly in any decent viewer.
With videos it seems you have to hope that the video player has an option to loop, and oftentimes there's a brief delay at the end of the video before playback resumes at the beginning. It would be nice if there were a video format that included seamless looping as part of the spec -- but as far as I can tell, there isn't one. Why not? Is it just assumed that anyone who wants looping video will configure their player to do it?
Besides looping, video players also deal kinda badly with low-framerate videos. Meanwhile, (AFAIK) GIFs can have arbitrary frame durations and it generally works fine.
> GIFs can have arbitrary frame durations and it generally works fine.
But we shouldn't be using animated GIFs in 2024.
The valid replacement for the animated GIF is an animated lossless compressed WebP. File sizes are much more controlled and there is no generational loss as it propagates across the internet as a viral loop (if we all settled on it and did not recompress it in a lossy format).
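For reference, here's a minimal sketch of producing such a file with Pillow (the frame file names and the 50 ms duration are placeholders):

    # Sketch: build a looping, lossless animated WebP from a list of frames.
    # Frame paths and the per-frame duration are just placeholders.
    from PIL import Image

    frames = [Image.open(f"frame_{i:03d}.png") for i in range(24)]
    frames[0].save(
        "loop.webp",
        save_all=True,              # write an animation, not a single frame
        append_images=frames[1:],   # remaining frames
        duration=50,                # per-frame duration in milliseconds
        loop=0,                     # 0 = loop forever, like a GIF's NAB
        lossless=True,              # no generational loss on re-save
    )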
Most modern video container formats support arbitrary frame durations, using a 'presentation timestamp' on each frame. After all, loads of things these days use streaming video, where you need to handle dropped frames gracefully.
Of course, not every video player supports them well. Which is kinda understandable, I can see how expecting 30 frames per second from a 30fps video would make things a lot simpler, and work right 99.9% of the time.
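As a toy illustration of why arbitrary durations fall naturally out of that design, here's a sketch of a player loop driven purely by presentation timestamps (not any real player's code):

    import time

    # Toy frame schedule: (pts_seconds, payload). The gaps are deliberately
    # uneven; a PTS-driven player doesn't care whether the stream is 30fps or 2fps.
    frames = [(0.0, "frame A"), (0.04, "frame B"), (0.54, "frame C"), (3.0, "frame D")]

    start = time.monotonic()
    for pts, payload in frames:
        # Sleep until this frame's presentation time, then "display" it.
        delay = start + pts - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        print(f"showing {payload} at t={pts:.2f}s")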
> With videos it seems you have to hope that the video player has an option to loop
<video playsinline muted loop> should be nearly as reliable as a GIF in that regard.
The one exception that I've found is that some devices will prevent videos from autoplaying if the user has their battery-saver on, leading to some frustrating bug reports.
You're doing the same phrasing weirdness as the wiki
> Every major browser
There are only 2 major ones
And this brings us back to the main point: this is not a "format" issue. Chrome could just as well support some metadata field like "loop for n" in the newer video formats, and the situation would be the same as with NAB once Safari adds it.
Come on, this is ridiculously pedantic. They used "major browser" to exclude WIP and hobby projects, to avoid pedants coming in and saying "uh Ladybird doesn't support looping GIFs yet" or whatever. But I guess there's no pleasing the pedants.
- the 5% point you're responding to doesn't refer to the original "Every major browser"
- the original response just highlights that there is no non-pedantic difference between "most browsers" and "every major browser", so that was the start of the anti-pedantism battle
No, this is ridiculous. The claim isn't about "most browsers" but the ones people actually use. You know, Firefox and those based on Chromium and WebKit. There could be 100 browser hobby projects out there which don't support NAB, "most browsers" would then not support NAB, but pretty much everyone would still be using a browser which supports NAB.
"The major browser engines" is a commonly used phrase to refer to Chromium, WebKit and Gecko (and formerly Trident and Presto). You're willfully misudnerstanding it. Please stop.
This will be my last response in this thread, this conversation is absolutely ridiculous.
The same thing stood out to me. With the popularity of animated GIFs, it's disappointing and ridiculous for a new Web-friendly image format to omit at least a simple multi-image/looping facility.
As for your question about video looping: Nothing prevents that, although I don't know of a container format that has a flag to indicate that it should be looped. Players could eliminate the delay on looping by caching the first few frames.
There is a lossless ultra-packer for existing JPEG files. It's completely reversible, you can get byte-for-byte identical JPEGs back.
Then there is "VarDCT" mode, which acts like JPEG, lossy WebP, or video codecs.
Then there is "Modular Mode", a completely different kind of codec that has different kinds of compression artifacts than JPEG-like codecs. The artifacts you see tend to be more like sections becoming more pixelated, or slight color differences; strong edges don't have ringing artifacts. Modular mode is mainly used for lossless compression, but it also allows lossy compression.
Technically it also had a fourth :^) [0] but it was spun out into a separate project of its own, jpegli [1]: JPEG, but with some tricks from JPEG XL. These include spatially adaptive quantization, quantization matrices that better preserve psychovisual detail, more efficient color spaces, and HDR (10+ bit depth) support [2].
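A quick sketch of the lossless JPEG recompression round trip described above, using the cjxl/djxl tools from libjxl (the file name is a placeholder, and I'm going from memory on the --lossless_jpeg flag, though recent versions default to it for JPEG input):

    # Sketch: round-trip a JPEG through JPEG XL's lossless recompression and
    # check that the reconstructed file is byte-for-byte identical.
    # Assumes cjxl/djxl are installed; "photo.jpg" is a placeholder.
    import subprocess
    from pathlib import Path

    src = Path("photo.jpg")
    subprocess.run(["cjxl", str(src), "photo.jxl", "--lossless_jpeg=1"], check=True)
    subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)  # reconstruct the JPEG

    original = src.read_bytes()
    restored = Path("roundtrip.jpg").read_bytes()
    print("identical:", original == restored)
    print("bytes saved:", len(original) - Path("photo.jxl").stat().st_size)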
Pretty good news! I imagine that for a while both libjxl and jpegli will supply a cjpegli binary, which will be mildly annoying at the start, but hopefully this way it'll be adopted quicker, accept more input formats, and image software will switch over to native jpegli export instead of using the libjpeg-compatible controls.
It's really excellent software. Its default output quality is storage quality, while the file size is acceptable for mobile data and cloud storage of pictures in most countries. Because it produces progressive pictures by default, it still helps when quickly swiping through a whole album of vacation pictures stored on cloud storage, and its progressive output actually reduces size rather than adding to it. And it's compatible with everything, so now I just run everything lossy I produce through its default settings until JXL becomes natively supported in Chrome and Windows.
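For anyone wanting to try the same default-settings workflow, a rough sketch (paths are placeholders; I believe cjpegli takes the same -d butteraugli distance flag as cjxl, with 1.0 being the default, but check --help):

    # Sketch: batch-convert a folder of images to jpegli output at the default
    # quality (butteraugli distance 1.0). Assumes the cjpegli binary is on PATH.
    import subprocess
    from pathlib import Path

    Path("web").mkdir(exist_ok=True)
    for src in Path("originals").glob("*.png"):
        dst = Path("web") / (src.stem + ".jpg")
        subprocess.run(["cjpegli", str(src), str(dst), "-d", "1.0"], check=True)
        print(dst, dst.stat().st_size, "bytes")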
The lossless JPEG recompression is a combination of VarDCT and some additional metadata. In fact, VarDCT should be considered a (very large) superset of JPEG1 compression. The distinction between VarDCT and Modular in JPEG XL is relatively clear, but in reality VarDCT would still use modular encoding for various data anyway, so it is hard to consider one without the other. (Compare with Opus, which also uses two main mechanisms but mixes them so well that they can't really be separated.)
It’s supported, but unfortunately no 3rd party APIs yet. It’s a bit surprising they wouldn’t ship them on launch to encourage adoption.
I make photography/camera apps and would like to support JPEG XL natively (without having to rely on 3rd party code) so I hope it’s something they add soon!
> It’s a bit surprising they wouldn’t ship them on launch to encourage adoption.
Because Apple is all-in on HEIC/HEVC. The "high efficiency" codecs that require up to a second on an M1 Pro to render an image. A comparable image in PNG renders instantly.
Oh great. Another image format no one really supports, that requires hardware decoders, and will probably take 2 seconds to decode on a modern supercomputer
Please provide some details on why you think JPEG XL only benefits megacorps – if you say it like this, it just sounds like trolling.
A couple of counterarguments on my side:
* JPEG XL heavily reduces storage requirements both for lossless and lossy compression
* JPEG XL allows to reversibly compress old JPEG to further reduce storage requirements
* JPEG XL is patent unencumbered
* JPEG XL supports very high resolution pictures (JPEG does not), native HDR, a higher number of bits per pixel, etc.
* Google wants JPEG with a gain map to support some form of HDR, and is now introducing XYB color coding, etc. They are clearly against JPEG XL
This article is from 2020. I think so far it has aged reasonably well.
But you might be interested in my more recent articles too. You can find those here: https://cloudinary.com/blog/author/jon_sneyers
> HEIC and AVIF can handle larger [than 35MP, 8MP respectively] images but not directly in a single code stream. You must decompose the image into a grid of independently encoded tiles, which could cause discontinuities at the grid boundaries. [demo image follows].
The newest Fujifilm X cameras have HEIC support but also added 40MP sensors--does this mean they are having to split their HEIC outputs into two encoding grids?
It seems like the iPhone avoided this, as 48MP output is only available as a "ProRAW" i.e. RAW+JPEG, which previously used regular JPEG and now JPEG-XL, but never HEIC.
I recently wrote a script that encodes an image to fall within a size range[0]. After toying with it, I noticed that smaller AVIF files are completely fine for web use, but identically sized JPEG XL files are not. Given ubiquitous browser support for AVIF[1], unless JPEG XL gets much better at smaller sizes, I reluctantly agree that Chrome's call to drop JPEG XL is the right one.
In what environment do you work where you need such low quality images though? In my web environment I only want the highest quality I can get at a reasonable size and I've never been interested in slightly less awful looking tiny images. In another comment I wrote about using jpegli at its default distance of 1 for everything and being happy with that size, so maybe I work in a completely different environment to you.
A normal one like everyone else I guess. No need to waste bandwidth and storage if you don't have to. If an image looks good to me, I'll try going lower until it doesn't look good, then go back one step. I've been surprised many times by just how low it can go and still look good. That script I wrote defaults to what I've settled on. (AVIF between 0.5 and 1 bpp at 1 megapixel, increasing or decreasing by square root of total pixels in image, plus JPEG fallback.)
If I change my mind, I keep high resolution originals of everything to do it again.
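A rough sketch of one way to read that sizing rule (this is my interpretation, not the actual script linked above): anchor the bits-per-pixel range at 1 megapixel, then scale the total byte budget by the square root of the pixel count rather than linearly.

    import math

    # Hypothetical helper: byte budget for a target of 0.5--1.0 bpp at 1 MP,
    # with the total budget growing/shrinking by sqrt(pixels).
    def size_budget_bytes(width, height, bpp_low=0.5, bpp_high=1.0):
        megapixels = width * height / 1_000_000
        scale = math.sqrt(megapixels)          # sub-linear growth for larger images
        low = bpp_low * 1_000_000 / 8 * scale  # bits -> bytes at the 1 MP anchor
        high = bpp_high * 1_000_000 / 8 * scale
        return round(low), round(high)

    print(size_budget_bytes(1000, 1000))   # ~ (62500, 125000) bytes at 1 MP
    print(size_budget_bytes(4000, 3000))   # 12 MP image, budget scales by sqrt(12)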
JPEG XL is supposed to have a progressive mode. Can you read a lower resolution from the file by reading only part of the file, as you can with JPEG 2000? Is there a header which tells you how much of the file to read for the desired resolution?
You have to first read the image header, an optional ICC profile and finally a portion of the first frame. This first frame might actually be a preview generated by an encoder, but should be fine for our purpose and it's not hard to seek to subsequent frames anyway. The frame itself contains its own header and all offsets to per-frame sections ("TOC"), while there is always one LfGlobal section that contains the heavily downscaled---8x or more---image in the modular bitstream, even when the frame itself uses VarDCT.
Any higher resolution would require some support from the encoder. The prime mechanism relevant here is a version of the modified Haar transform named Squeeze, which generates two half-sized images from one source image. As each output image is placed to distinct sections, only one out of two output images is needed for low-fidelity decoding. If the encoder didn't do any transformation however (often the case in VarDCT images), then all sections would be required regardless of the target resolution.
Therefore it is technically possible, and in fact libjxl does support partial decoding by rendering a partial bitstream, but anything more than that would be surprisingly complex. For example, how many bytes are needed to ensure that we have at least an 8x downscaled image? This generally needs the TOC, and yet a pathological encoder can put the LfGlobal section at the very end of the frame to mess with decoders (though no such encoder is known at the moment). Any transformation, not just Squeeze, also has to be accounted for to ensure that all of them will produce the wanted resolution once combined. Since the ICC profile and TOC already require most of the entropy coding machinery except for meta-adaptive trees, even calculating the number of required bytes already needs about a third to a half of the full decoder, in my estimate from building J40.
That said, I'm not very sure this complexity could've been radically reduced without inefficiency in the first place. In fact I've just described what I wanted when I started to build J40! I think there was an informal agreement that the ICC profile could have been made skippable, but you still need all the same stuff for decoding TOC anyway. Transformation is a vital part of compression and can't be easily removed or replaced. So any such tool would be definitely possible, but necessarily complicated, to build.
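To make the Squeeze step above a bit more concrete, here is a toy 1-D version using a plain average/difference split (the real JPEG XL Squeeze is a modified Haar transform with an extra tendency prediction, so treat this purely as an illustration of how one image becomes two half-sized ones):

    import numpy as np

    # Toy 1-D "squeeze": split a signal (even number of samples) into a
    # half-resolution average part and a half-resolution residual part.
    # Decoding only the averages gives a 2x downscaled preview; adding the
    # residuals reconstructs the original exactly.
    def squeeze(x):
        a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
        avg = (a + b) // 2          # the low-resolution half
        res = a - b                 # the detail half, stored in later sections
        return avg, res

    def unsqueeze(avg, res):
        a = avg + (res + 1) // 2    # undo the integer averaging
        b = a - res
        out = np.empty(a.size + b.size, dtype=np.int64)
        out[0::2], out[1::2] = a, b
        return out

    x = np.array([10, 12, 200, 198, 3, 7, 90, 96])
    avg, res = squeeze(x)
    print(avg)                                      # usable on its own as a 2x downscale
    print(np.array_equal(unsqueeze(avg, res), x))   # lossless when both halves are read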
Something seems wrong in this article.
The side-by-side comparison shows 4 formats:
· Original PNG image (2.6 MB)
  (actual file: "high_fidelity.png", but in fact 298,840 bytes and format JPEG)
· JPEG XL (default settings, 53 KB): indistinguishable from the original
  (actual file: "high_fidelity.png.jxl.png", but in fact 3,801,830 bytes and format PNG)
· WebP (53 KB): some mild but noticeable color banding along with blurry text
  (actual file: "high_fidelity_webp.png", but in fact 289,605 bytes and format PNG)
· JPEG (53 KB): strong color banding, halos around the text, small text hard to read
  (actual file: "jpeg_high_fidelity.jpg", in fact 52,911 bytes and format JPEG)
The comparison does not make any sense; everything is just wrong. Also, encoding the large original PNG image to AVIF gives only 20,341 bytes with no visual change, see: http://intercity-vpn.de/files/2024-10-27/upload/
I guess that is because the noise from the lossy encoding creates more entropy, which then has to be losslessly encoded as PNG, pushing the file size above the original?
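That hypothesis is easy to sanity-check with Pillow (the input path is a placeholder): lossily compress a clean PNG, decode it, and losslessly re-save the decoded pixels; the added noise usually makes the second PNG bigger.

    # Sketch: check whether lossy artifacts inflate a subsequent lossless PNG.
    # "original.png" stands in for the article's source image.
    import io
    from PIL import Image

    src = Image.open("original.png").convert("RGB")

    lossy = io.BytesIO()
    src.save(lossy, format="JPEG", quality=75)           # stand-in for any lossy codec

    decoded = Image.open(io.BytesIO(lossy.getvalue()))
    repacked = io.BytesIO()
    decoded.save(repacked, format="PNG", optimize=True)  # losslessly store the noisy pixels

    clean = io.BytesIO()
    src.save(clean, format="PNG", optimize=True)
    print("clean PNG:", len(clean.getvalue()), "bytes;",
          "lossy-then-PNG:", len(repacked.getvalue()), "bytes")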
JPEG XL is seeing strong industrial adoption where image quality matters: professional and prosumer image acquisition (as a replacement, with huge benefits, for traditional raw imaging on the iPhone 16 Pro), processing and storage reduction with Digital Negative and ProRAW, and medical imaging with DICOM. Clinics with archiving and telemedicine needs benefit massively.
Few realise that JPEG was designed for flickery low resolution analogue CRTs and slow CPUs. Nowadays we have digital high resolution screens and fast CPUs.
JPEG XL has been coming soon for as long as I can remember.
Nowadays I like to serve WebP from a CDN with the filenames being .jpg even though the content is WebP. This means listening to the request headers and serving what is supported.
If someone right-clicks and downloads, then they get a genuine JPEG, not a WebP. Same if they do a 'wget'. This is because those request headers are different, with WebP not listed as supported.
Browsers do not care about filename extensions. Hence you can serve a WebP as 'example.jpg'.
The benefit of doing this, and of having the CDN forward the request headers to the origin server (for it to encode WebP on the fly), is that the rest of the team, including those who upload images, can work entirely in the JPG space they know.
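The core of that negotiation is just a check on the Accept request header. A bare-bones sketch (hypothetical file layout, not any particular CDN's config):

    # Sketch: keep the public URL as .jpg, but serve WebP bytes when the client
    # advertises support via the Accept header. File layout is hypothetical.
    from pathlib import Path

    def pick_variant(requested_path, accept_header):
        base = Path(requested_path).with_suffix("")         # images/cat.jpg -> images/cat
        if "image/webp" in accept_header:
            return base.with_suffix(".webp"), "image/webp"  # browsers that ask for it
        return base.with_suffix(".jpg"), "image/jpeg"       # wget / right-click-save

    # e.g. pick_variant("images/cat.jpg", request_headers.get("Accept", ""))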
The mozjpeg encoder is really neat but all browsers support webp these days. Mozjpeg is JPEG for digital hi-res screens and fast CPUs. Brilliant, but too late.
What I am interested in now is larger colour spaces than sRGB. This requires a whole toolchain that supports P3 (or whatever) too.
I tried AVIF but in practical testing, webP was simply better.
JPEGLI from Google is what I want for the toolchain, with the CDN supplying WebP. Nobody cares about your images in the vast majority of applications, and it is just best to go for highly compressed WebP with artefacts just about to cut in. You also have retina screens nowadays, so using all those pixels, typically 1.5x, is better than 1x. More pixels is a better trade-off.
> Few realise that JPEG was designed for flickery low resolution analogue CRTs and slow CPUs.
To my knowledge, JPEG was initially quite slow to decode on contemporary CPUs. For example, a Usenet post in 1992 [1] stated that decoding "big" images like 800x600 took 1--2 seconds, which was considered fast at the time. JPEG itself was eventually used as a basis for the MPEG-1 video codec, so it couldn't have been too slow, but I'm not aware of other concrete performance concerns.
JPEG XL has been here a while, but Google has decided to avoid it even though Apple has adopted it. At the same file size, WebP has worse color banding, way more trouble keeping small details accurate, and compresses too much.
> JPEG XL has been coming soon for as long as I can remember.
You must not have a very long memory. The Joint Photographic Experts Group (JPEG) published a call for proposals for what would become JPEG-XL in 2018. That's not so long ago.
JPEG-2000 may have poisoned the well. I still, for some irrational reason, get a flashback of it whenever JPEG XL is mentioned.
Maybe JPEG earned that by pushing a deliberately patent-infested codec with horrible proprietary implementations? And at a time when the goddamn GIF patents were causing grief and H.264 trolls were on the rise.
I do not consider JPEG 2000 to be a failure even if it did not displace the old JPEG in all applications. JPEG 2000 “succeeded” but not on the Web. Most people have probably seen more JPEG 2000s than all other image formats combined. Whenever you go watch a movie in theatres you see 24 of them every second; roughly 173,000 for a two hour feature. There were a few open implementations like OpenJPEG, which is the one I use.
Could it be argued that a format being patent-encumbered was less of a focus back then, that "codecs are covered by patents and users of a codec must pay royalties to the patent holders" was just the more or less unquestioned way things were for groups like ISO? The original JPEG was also patent-encumbered, after all.
This is not a rhetorical question, I was way too young back then to have been aware of the conversations which were going on regarding JPEG-2000. I see that PNG was developed specifically to be unencumbered, which suggests that at least some groups were focused on it, to your point. But I can't tell if that was a mainstream concern at the time like it is now.
> Nowadays we have digital high resolution screens and fast CPUs.
The problem is that even fast CPUs have their limits.
A MacBook with an M1 Pro spends up to a second decoding and rendering a HEIC image where comparable JPG and PNG images render instantly. And this is an 11-year-old codec.
I investigated using JPEG XL for high-speed applications, but encoding time was much slower than JPEG with libturbojpeg, even with encoder complexity reduced to a minimum.
It's much, much faster than the other alternatives though. And staying with JPEG isn't really ideal, with its low quality and limited options going forward.
Not for a long time. The WebP encoder has improved a lot since MozJPEG was released. But these days we have Jpegli [1] which beats WebP at higher quality levels.
I have a similar experience, where JPEG encoding and WebP encoding use far fewer computing resources than JPEG XL or AV1, and I was curious what other people used (as I might be using the wrong library).
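For comparing notes, a rough sketch of how such a timing comparison could look (Pillow typically uses libjpeg-turbo underneath; cjxl's -e flag sets encoder effort, with 1 the fastest; the subprocess call includes process startup and PNG decode, so this is not a rigorous benchmark):

    # Rough sketch: time Pillow JPEG encoding vs. the cjxl CLI at minimum effort.
    # "input.png" is a placeholder; cjxl must be installed for the second half.
    import io, subprocess, time
    from pathlib import Path
    from PIL import Image

    img = Image.open("input.png").convert("RGB")

    t0 = time.perf_counter()
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    print(f"JPEG:    {time.perf_counter() - t0:.3f}s, {len(buf.getvalue())} bytes")

    t0 = time.perf_counter()
    subprocess.run(["cjxl", "input.png", "out.jxl", "-e", "1", "-d", "1.0"], check=True)
    print(f"JPEG XL: {time.perf_counter() - t0:.3f}s, "
          f"{Path('out.jxl').stat().st_size} bytes (includes process startup)")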
> The JPEG XL reference encoder (cjpegxl) produces, by default, a well-compressed image that is indistinguishable from (or, in some cases, identical to) the original. In contrast, other image formats typically have an encoder with which you can select a quality setting, where quality is not really defined perceptually.
We are the best, so fuck the rest. /s
Using vague language to claim superiority is not a sign of intelligence.