Progressive JPEGs: a new best practice (perfplanet.com)
133 points by ssttoo on Dec 29, 2012 | 63 comments



I don't like this suggestion of "best practice" without any numbers. Progressive JPEG uses more RAM (because you can stream normal JPEGs so you only have to buffer a row of JPEG blocks) and lots more CPU (up to 3x).

Most of the compression benefits can be obtained by setting the "optimized Huffman" flag on your compressor. e.g., a baseline JPEG will save 10-20% when "optimized" and progressive very rarely achieves a double-digit win after that.
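
For anyone who wants to check those percentages on their own images, here's a rough sketch (mine, not from this thread) using Pillow; it assumes a local test file named photo.jpg, and since it re-encodes, it only compares sizes of files produced from the same decoded source rather than doing a lossless conversion:

    # Rough size comparison of JPEG encoder settings using Pillow.
    # Assumes a local test image "photo.jpg"; results vary a lot by image.
    from io import BytesIO
    from PIL import Image

    src = Image.open("photo.jpg")

    def encoded_size(**opts):
        buf = BytesIO()
        src.save(buf, "JPEG", quality=85, **opts)  # same quality for every variant
        return buf.tell()

    print("baseline:          ", encoded_size())
    print("optimized Huffman: ", encoded_size(optimize=True))
    print("progressive:       ", encoded_size(progressive=True))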

MobileSafari eats cycles every time you inject a large image into the DOM, so (while I haven't benchmarked progressive), it seems like this new-found love of progressive JPEG is beating up one of the slower code paths in the mobile browsers. And to me, it doesn't look that good!


This brings up some important points. Yes, we need numbers. Let's get them.

Progressive jpegs do not necessarily need to use more RAM. The FAQ I linked to also says "If the data arrives quickly, a progressive-JPEG decoder can adapt by skipping some display passes." Win!

Also, why do you say "up to 3x" more CPU? Is that an estimate based on how many scans you're guessing a progressive jpeg has? A progressive jpeg can have a variable number of scans -- we used to be able to set that number, which is totally cool!

As for the compression benefits, you say "progressive very rarely achieves a double-digit win after that." We web performance geeks LOVE single-digit wins, so you can't burst our bubble that way.

Yes, Mobile Safari has trouble with images. Period. But Mobile Safari does not progressively render progressive jpegs (I wish it did). So we can make it a best practice without worrying about Mobile Safari. When the web is full of progressive jpegs, Apple will have to deal with them. It's not an evil plan, it's the right thing to do.

When you say it doesn't look that good, are you saying that for yourself personally, or are you saying it for your users? We need to think about what they see. As I say, perceived speed is more important than actual speed, and the thing that excites me most about progressive jpegs is not the file size savings, but instead the behavior of the file type in browsers that properly support it.


Progressive JPEG needs a minimum of 2 x width x height additional bytes over baseline to decode an image (maybe more, and definitely 1.5-3x more than that if you're displaying coarse scans), regardless of how many scans you have or display, because it has to keep coefficients for the entire image across N-1 scans, whereas baseline only needs the coefficients for a couple of 8x8 blocks at a time. Though if you're clever about it and sacrifice displaying coarse scans, you could reduce this somewhat.
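
To put rough numbers on that (my own back-of-the-envelope sketch, not measured):

    # Rough estimate of the extra decode memory progressive needs, following
    # the "2 x width x height" figure above (16-bit coefficients, luma only).
    width, height = 1920, 1080                 # example image size

    progressive_luma = width * height * 2      # whole-image coefficient buffer
    baseline_strip   = width * 8 * 2           # one row of 8x8 luma blocks

    print(f"progressive, luma coefficients: {progressive_luma / 1e6:.1f} MB")  # ~4.1 MB
    print(f"baseline, one block row:        {baseline_strip / 1e3:.1f} KB")    # ~30.7 KB
    # Subsampled chroma planes add roughly another 50% to the progressive figure.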

If you don't display coarse scans (and if you're comparing progressive vs. baseline CPU usage then counting such isn't fair), then approximately the only additional CPU time progressive should take is the time additional cache misses take. It's probably a wash considering that decoding fewer coefficients / less Huffman data takes less CPU.

Maybe I'll get some numbers, I'm curious now... But unless you're serving multi-megapixel images the additional CPU and memory doesn't matter. Probably not even until you're in the double digits, if then.


Thanks for the comment. Whatever detail you can add to this conversation is much appreciated. It's a neglected topic, and it's important for us to understand it better.


Exactly. The author tries to present his agenda of "optimizing everything" as valid, when he's actually optimizing for a desktop Chrome browser on a fast computer.

Written from a ten-year-old notebook. Progressive JPEGs are slower for me no matter which browser. Not using Chrome on the newest computer? Still slower. Mobile devices: the same story.

If the author never leaves his desk, has a new computer and likes Chrome browser, good for him. Others shouldn't trust him too much.


I'm nitpicking, but

  When images arrive, they come tripping onto the page, pushing other elements around and triggering a clumsy repaint.
This is easily avoided by defining the image dimensions in your stylesheet.


And when the dimensions aren't defined, then progressively encoded JPEGs don't offer an advantage either, because with both progressive and non-progressive images, browsers reserve space for images as soon as their size is known, i.e. when the file header is received.


Agreed. But would you say that progressive jpegs don't offer a visual advantage in this case? Imagine if a photo has a caption below it: baseline starts rendering far away from the caption, but with progressive we'll get the caption in the correct place without a big gap between the caption and where the photo is rendering. In the case of baseline, the photo will draw to "meet" the caption.


Or in the HTML.


Preferably in the HTML.

Unless width="" and height="" have been deprecated on <img> overnight.


Different use cases IMO. If it's a single image of variable size, use width and height. If there are a bunch of images, all the same size (thumbnails for instance), use CSS.


If you specify width and height explicitly, certain browsers will have issues resizing them (if you set max-width: 100%, Chrome will squish a larger image horizontally but leave the height as it was, while IE will scale it proportionally).


I think that's because Chrome treats width= and height= as if they were CSS. (have a look at the inspector on an element with width and height specified in the HTML)


That makes sense (thank you, I don't know why I didn't realize that myself), but if I override that with CSS height: auto, the browser no longer reserves space for the image while loading it... quite unfortunate.


Maybe you should submit a WebKit (or possibly Chromium if this is Chrome-specific) bug for this.


Why do you say Preferably in the HTML? Just curious. I vaguely recall there was a reason.


Because it has been the standard way for a long time.

And because it allows browsers to make room for the image and avoid reflowing.


    their Mod_Pagespeed service
mod_pagespeed is an open source Apache module, not a service. Google also runs PageSpeed Service, an optimizing proxy. Both support automatic conversion to progressive jpeg.

    SPDY does as well, translating jpegs that are over
    10K to progressive by default
The author has SPDY and mod_pagespeed confused; this is a mod_pagespeed feature.

(I work at Google on ngx_pagespeed.)


> Plotting the savings of 10000 random baseline jpegs converted to progressive, Stoyan Stefanov discovered a valuable rule of thumb: files that are over 10K will generally be smaller using the progressive option.

At first I thought: "What, you're opening JPEGs and saving them again? Don't you lose image quality every time you open and save in a lossy format?"

But then I read the actual source [1], and it says that `jpegtran` can open baseline JPEGs and save them losslessly as progressive JPEGs. That sounds useful!
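Here's a rough sketch (mine, not from the article) of wiring that up with jpegtran plus the 10K rule of thumb; it assumes jpegtran from libjpeg is on your PATH and the filenames are just placeholders:

    # Losslessly transcode a baseline JPEG to progressive with jpegtran,
    # but only for files over ~10 KB (the rule of thumb quoted above).
    import os
    import subprocess

    def maybe_progressive(src, dst):
        if os.path.getsize(src) <= 10 * 1024:      # small files: leave as baseline
            return False
        subprocess.run(
            ["jpegtran", "-copy", "all", "-optimize", "-progressive",
             "-outfile", dst, src],
            check=True,
        )
        return True

    maybe_progressive("photo.jpg", "photo-progressive.jpg")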

Does anyone know whether other image editing software does the same thing to JPEGs that are re-saved without modification? What about Photoshop? GIMP? MS Paint?

[1] http://www.bookofspeed.com/chapter5.html


I would not call this a "best practice", but simply an alternative.

Progressively loading photos in that manner is not a good user experience either. The first paint is often quite jarring, especially since, as pointed out in the article, over 90% of photos simply don't load this way.

For content such as photos that are contained within some sort of layout, it would be better to have a placeholder there that is the same frame as the final image size, then have the final image appear upon completion.


The first paint is often quite jarring

To me it's still better than nothing. Just like I don't enjoy websites that only paint after they're done loading.


Is it really better than a loading indicator?

Just use the title link as a reference. First you see chicken wings, then all of a sudden 6 piglets.

The transition paint is really meaningless.


Is it really better than a loading indicator?

For me, yes. It is a loading indicator, without any additional "noise".

Also, the second-to-last pass is often quite good, if not indistinguishable from the last one. So you have the full image already, while more details get filled in, instead of first having the top, then the middle, then the bottom, and to me that's less "jarring".


It's a best practice because it gives the user as much possible as soon as possible, and because there are file size savings.

When you say

it would be better to have a placeholder there that is the same frame as the final image size, then have the final image appear upon completion

isn't that exactly what progressive jpegs give you?


I think the main issue I have is that if you're in a situation where you actually notice the progressive paints happening enough for them to matter, you might want to rethink the quality level of the photo being delivered to that device over that connection speed.

The way I'd approach photos is this - no image should take more than 3 seconds to load, and anything that takes between 1 and 3 seconds gets a small progress bar (mobile) or just an empty frame with a thin border (mobile thumbnails or any web photos).

I'd take into account device, connection speed, CDNs, and jpeg compression to ensure that I meet the time requirement for the full image to load.

If the full image isn't consistently loading within that timeframe, I've already lost and need to rethink the quality of the images being delivered or if I'm designing the right app / site, because it's going to be a terrible user experience either way, progressive jpeg or not.


This is sort of tangential, but that browser chart reminded me of something I've been curious about for a while: Why do certain browsers only render foreground JPGs progressively? Is it a rendering engine limitation or an intentional one (perhaps for usability during page load)?


Guessing: rendering anti-aliased text over an image background is expensive (you have to read the background image to blend colors). Re-rendering page text multiple times while an image is loading may not be a good idea.


Browsers seem to be very lazy about this kind of thing, perhaps for that reason. Chrome dealt very weirdly with one of my sites, where I had an image, with a transparent-background iframe containing a semi-transparent-background div above it, and text above that iframe. The text's position was lagging relative to some images placed above it. I wonder if this is why.


Right?! This is very interesting and I just discovered it by chance. If it were just one browser we could write it off, but it's not. I'll try to find out.


Oh, no no no. Ill-advised idea. We've gone through this before already, and it was resolved in favor of baseline with optimization. How was it resolved? By website visitors, who hated-hated-hated progressive JPEGs. Boy, those who ignore history... cue the "Doom Song" from Invader Zim, sung by Gir.

(edited)


> visitors hated-hated-hated progressive JPEGs

I call BS. On most every web project I've been involved in, we've used progressive JPEGs for the size optimization. This was crucial in the dial-up modem days when we were inventing how web sites should best serve users, and we still do it. Even in browsers not supporting progressive rendering, perceived completion was faster because the smaller files loaded faster. Switching from baseline to progressive consistently drove higher page views and longer times on site.

I have never heard a single client say a single user complained about progressive JPEGs. I'm not saying it hasn't happened somewhere to someone. Users will complain about anything. But in billions of page views across countless clients (including pro photo clients), I haven't run into user complaints from progressive JPEGs, only measurable page view and time on site improvements in user behavior.


Assuming the image has a small thumbnail embedded in the EXIF data, maybe it would be even faster to use that resource (already included with the image), scaled to fill the image space and replaced when the image loads.

Essentially the same effect, with some back-end code. I suppose we'd need a benchmark test to find the real numbers. The main advantage would be sticking with existing file types already in use.
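
A rough sketch of extracting that embedded thumbnail on the back end (my own, assuming the third-party piexif library and that the JPEG actually carries an EXIF thumbnail, which many web-optimized files don't):

    # Pull the EXIF-embedded thumbnail out of a JPEG so it can be served
    # as a placeholder while the full image loads.
    import piexif

    exif = piexif.load("photo.jpg")
    thumb = exif.get("thumbnail")              # raw JPEG bytes, or None
    if thumb:
        with open("photo-thumb.jpg", "wb") as f:
            f.write(thumb)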


JPEG already supports multiple images per file. If you added EXIF data at the head with manifest information and web rendering hints, your thumbnail in the next frame, and your full-sized image after that, you'd be on track to awesomesauce.

(I'm sure someone has already implemented something better than this, but I am lazy.)


Then it would still be even faster in total if you do the same thing with progressively encoded images.


On that note, it would be interesting to see whether EXIF data is typically included at the start or end of JPEG files, or if it varies by compressor.


In EXIF, whether applied to JPEG or TIFF, the header comes first, then the thumbnail if present, then the primary image. Exactly as you'd want (which is more than can be said for much of EXIF, e.g. that it lacks time zone encodings).
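
If you want to check a given file, here's a quick sketch (mine, not from the thread): EXIF lives in an APP1 segment (marker 0xFFE1), which per the spec should sit immediately after the SOI marker (0xFFD8) at the start of the file:

    # Check whether a JPEG starts with SOI followed immediately by an
    # EXIF/APP1 segment, i.e. the EXIF block (and any thumbnail) is at the head.
    # (Files that put a JFIF/APP0 segment first will show up as False here.)
    def exif_is_at_head(path):
        with open(path, "rb") as f:
            head = f.read(4)
        return head[:2] == b"\xff\xd8" and head[2:4] == b"\xff\xe1"

    print(exif_is_at_head("photo.jpg"))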


>and progressive jpegs quickly stake out their territory and refine (or at least that’s the idea) //

Your browser stakes out the image area if you set height and width attributes for the image thus avoiding reflows.

Also the bracketed comment makes it sound like this feature doesn't work well?


I have observed that even when you set height and width attributes, the area is not always "staked-out." Of course we'd assume that it is. I'll need to get you some browser and version details about this.


Are you saying if you specify <img width="xx" height="xx"> that the browser doesn't reserve exactly the right amount of space for that image? Because that's exactly what it's supposed to do: if it's not, then it's a browser bug.


Update: this must have been my imagination. I confirmed that all of the most common browsers will stake out the image area if height and width is set in the css or the img tag.


>They are millions of colors and pixel depth is increasing.

Um, no. Nobody is serving anything more than 8 bpc on a web page, and nobody has a monitor that could show it to them if they did.


The real limitation is the number of colors that can be distinguished by the human eye. I've read that medical scanners and other science-y imaging gear can sample intensity with 16-bit resolution.

But if you're talking about hardware whose sole purpose is displaying images for humans to view, having more than 8 bits is just a waste of resources.


I might buy 10 bits, but it's pretty easy to distinguish 8-bit brightness levels.

Edit: Found a source quoting 450 light levels the eye can distinguish, so adding a bit of buffer for smooth transitions and imperfect gamma you'd need 10 or 12 bits.
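
For what it's worth, the arithmetic behind that (my own, not from the source I quoted):

    # 450 distinguishable levels fit in ceil(log2(450)) = 9 bits, so a couple
    # of bits of headroom for gamma and smooth gradients lands at 10-12 bits.
    import math
    print(math.ceil(math.log2(450)))   # -> 9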


Also, it depends on the color: 8 bits may be sufficient for blue, but definitely not for green.


Rods pick up blue just fine, so I don't think you can skimp on any color.


Red is picked up about half as well as green, and blue is discerned about half as well as red. You can definitely skimp on blue.


I strongly disagree. Blue is less bright to rods, that's it. If you happen to be looking at even gray, then sure the fact that blue seems a quarter as bright means you can chop off two bits. But if you look at pictures that are blue, without a lot of other colors to wash it out, you're going to have the exact same 450 distinct brightness levels over the 100:1 contrast ratio that the eye picks up.


Lots of cameras can capture more than 8 bpc too, but there's no point in displaying them that way. The reason it's useful is that it reduces loss of quality when manipulating the colors in the images.


Not just "science-y" imaging gear.

I'm scanning slides today with a Nikon Coolscan 5000ED, and it's a full 16 bits per channel of R, G, and B, and 16 bits of infrared (48 bits + 16 bits). The "raw" TIFFs are 182 MB, while the final JPEGs range between 2.2 and 6.9 MB.


There's plenty of room for increased dynamic range, if not more colors, than what 8 bits per channel gives us. See raw files from digital cameras, HDR, etc.


Good article! I'm quite surprised mobile browsers basically ignore this though. Showing the low quality image when zoomed out, and only the full quality when zoomed in would avoid the CPU issues.


Although zooming in would require a re-rendering of the image, which could cause lag/jitter.


Wow. Everything old is new again...


Exact same thing I was thinking. We used to use this technique when I first started web dev 12 years ago and had to support dial-up modems. As internet connection speeds have gotten faster, the newer breed of developers has totally ignored any optimization techniques and page loads have become extremely bloated. Now with mobile, limited speeds, and limited data plans, as you say, we have gone full circle. Kind of like responsive designs and the 100% tables of yesteryear - OK, so areas didn't reflow like they do now, or hide themselves, but the content did scale according to screen width.


Thanks for pointing this out. I don't usually like the phrase "kids these days", and truth is this has nothing to do with age and everything to do with experiencing first hand the many interesting ways users' connections can be borked.

Same holds true in video streaming, where companies got overconfident with broadband and were then surprised when the limited bandwidth and high latencies of wireless turned out to be better managed by the multi-bitrate and error-correcting streaming technologies of a decade ago.


Just realised how bad the spelling was on my comment, sheesh, the joys of typing from my smart phone!

I'm currently working in Canada, having come over from the UK, where we are much more used to cheap unlimited broadband plans, both at home and cellular. So the caps and prices here have been quite a shock! A friend told me that Netflix (or similar) streams at a much lower rate and doesn't offer HD in Canada because of this. My current broadband plan costs me $60 (+15% tax) for 130GB a month.

One thing it has taught me is you really need to consider the differences of the local markets!


I'm usually eager to jump on board with recommendations from Stoyan’s performance calendar, but Anne’s description of baseline loading doesn't comport with my experience in Webkit browsers (and possibly others). They don't "display instantly after file download", but display almost line by line as the file is downloading, with a partial image appearing from top to bottom as information becomes available. Since this particular trick is about perception -- users being given some visual indication that the image is loading, and data as soon as possible -- the difference between progressive and baseline loading seems like it should be subtler than the article suggests.


I never said baseline jpegs display instantly after file download. They render as I describe several times, top to bottom or "chop chop chop." It's progressive jpegs that display instantly after file download IF the browser does not support progressive rendering of progressive jpegs.

The difference between the rendering of the two file types is not subtle.


Thanks for an interesting article. What's your guess on when other browsers will make using progressive jpegs easier?


Guesses won't make any difference. File a rdar or open tickets instead ;)


Now if only there was a bot that could send pull requests to every GitHub repository making JPEGs progressive...


A current blog post but Firefox versions from April and August in the data?


s/SPDY/mod pagespeed/



