Sample size of one, but on mobile, my number one pet peeve (apart from popover ads and autoplay videos!) is layouts that change as the page loads. Progressive encoding with multiplexing should help make sure newly loaded images don't make article text jump around as I try to read it.
Then again, maybe this is a laziness issue, since defining a layout independent of the actual image content is something you could already do in the '90s.
As far as page layout goes, progressive shouldn't behave any differently than non-progressive, since images have an intrinsic size in the header that rendering engines use when sizing the image layer. That size doesn't change throughout the load process.
Multiplexing will deliver the necessary headers just as quickly for non-progressive images.
Two! I hate it when the page moves stuff around just as I'm about to hit a link and I hit an ad instead. Tapping accurately is hard enough with my large fingers without the page moving too.
Also:
* Popups asking me to download the app
* Auto starting video of any sort
* Not loading the complete page so it has to keep loading as you read. This is very inconvenient when sitting on the train or subway with lots of tunnels.
* A "Read full story" link/button on the story itself.
Most of these are reason enough for me to never visit the page again.
On mobile, we mostly size images with percentages because of varying display sizes. It gets ridiculous when you want to specify the dimensions, because you can't. We need a way to specify aspect ratios, and CSS doesn't provide that.
We went with a padding-top plus absolute-position-in-wrapper hack in the end (rough sketch below). That jumpiness is partly a tooling problem.
We decided not to use viewport-relative units for some reason I can't remember. I think we had paddings etc. in pixels, which would have meant using calc(), and that's another can of worms.
On second thought, cw and ch, as in container width and height, would be a great addition.
That would allow `height: 50cw`. Then we could define specific aspect ratios: `width: 100cw; height: 50cw;` would force the image into a 2:1 aspect ratio.
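For reference, the padding-top hack mentioned above looks roughly like this (a minimal sketch; the class and file names are made up). Percentage padding resolves against the container's width, so `padding-top: 50%` reserves a 2:1 box before any image bytes arrive:

```html
<style>
  /* Reserve a 2:1 box; percentage padding is relative to the width. */
  .ratio-2x1 {
    position: relative;
    padding-top: 50%;
  }
  /* Stretch the image over the reserved box. */
  .ratio-2x1 > img {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
  }
</style>

<div class="ratio-2x1">
  <img src="photo.jpg" alt="A 2:1 photo">
</div>
```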
This is the only research that claims progressive images aren't a good user experience, though. I can't confirm it with electrodes attached to my brain, but personally I like progressive images better than baseline, and they're smaller too!
True. As ever, it makes sense to A/B test these things with your userbase. Their behaviour is more important to your own sites than any research or personal preference.
If the value coming through your site is great enough that marginal improvements (on the order of 1%) are worth the time investment compared to other available improvement avenues, then this is something you should A/B test.
Hah, I'm surprised at the replies here. I guess I'm in the minority opinion (I don't like progressive loading of anything; just prepare the page, then show it to me :-).
One piece of research that measures vitals instead of just asking the users which one they liked more isn't very convincing. It could even be A/B tested by measuring how long people stay on the page or bounce rate above the fold.
Sample size: three (and counting). It's an awful problem, mobile or laptop/desktop. My greatest gripe is that I may think the page is fully loaded, click to highlight text and, boom! I'm sent off to some video or advertising page through a link that abruptly displaces the target I aimed at.
> it is possible to flag individual scan layers of progressive JPEGs with high priority and making the server push those scan layers into the client browsers’ Push cache even before the request for the respective image is initiated
Whoa, I had no idea this was possible. Isn't this a crazy layering violation (why should HTTP2 know about progressive JPEGs)? The links don't seem to provide any more information about it.
Edit: it looks like HTTP2 only talks about streams, so it's too strong to say that you can flag "individual scan layers with high priority." You can't change the order of scan layers within a JPEG file, or send the file in anything except its natural byte order, so this seems to have the same limitations as HTTP 1.x.
How can a layering violation in a protocol be fixed at the API level? An API is what it is; a good API atop a gross protocol is only lipstick on a pig.
I think he's saying that you could potentially have the prioritized output without "HTTP2 know[ing] about progressive JPEGs". E.g. specifying "these layers are high priority" versus "the first n bytes of this stream are high priority": The first would be a layering violation, but I don't see any reason the second would be (not saying anything about its practicality). Now obviously the protocol would need to support the second (at least to use it in the context we're talking about), but I read the GP as using "API" figuratively enough to encompass this.
And would pushing chunks actually require a protocol modification? AIUI http2 push mirrors the hypothetical request headers. So it could essentially push a range-request which the browser could use to partially populate its cache.
> The best way to counter negative effects of loading image assets is image compression: using tools such as Kornel Lesiński‘s ImageOptim, which utilizes great libraries like mozjpeg and pngquant, we can reduce image byte size without sacrificing visual quality.
The standard metrics these tools provide (JPEG quality or PSNR) are not enough to preserve visual quality across a variety of images. I'm working on a project to actually do that [1].
My favorite example is the actual Google logo [2]. It's 13,504 bytes and can be cut almost in half, to 7,296 bytes, probably saving gigabytes of bandwidth each day.
Image compression is a great first step. Imagemin is a great JavaScript-based tool that lets you incorporate a bunch of optimization tools (mozjpeg, pngquant, jpegtran, optipng, gifsicle, etc...) all in one (if your project can easily use JavaScript modules...). And there is a plugin [0] (that I wrote) for webpack to make it happen without any thought.
Another potentially massive step is to use the srcset [1] attribute of the <img> tag. It lets you provide a bunch of different resolutions for the same image, and the browser will choose the best one to download and render based on the physical screen pixel density, zoom level, and possibly in the future even things like bandwidth preferences or battery level.
Combine the imagemin plugin with a webpack-loader [2] that will auto-generate 5-ish different downscaled versions of an image as srcset, and you get a pretty perfect setup.
My web apps now always use the highest-resolution image I have available by default (within reason, I do cut it down to a realistic value), then automatically provide 5 different downscaled versions alongside it in the srcset, all run through a battery of optimizations to compress them as well as possible. The browser will only download the biggest one it can realistically use. Everyone gets high-quality images, and nobody wastes bandwidth just because higher-res screens are supported.
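The markup that kind of setup ends up generating looks roughly like this (a hedged sketch; the filenames, widths, and `sizes` value are invented for illustration). The browser picks the smallest candidate that still looks sharp for the current layout slot and screen density:

```html
<img
  src="photo-1600.jpg"
  srcset="photo-320.jpg   320w,
          photo-640.jpg   640w,
          photo-960.jpg   960w,
          photo-1280.jpg 1280w,
          photo-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 600px"
  alt="A photo with five downscaled variants">
```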
If it were sufficient to have high PSNR only in the 'regions of interest', then you could use something like this [1], which uses a CNN model to predict a map and multiple JPEG encodings to achieve variable-quality compression.
The AI approach is interesting but there are just too many problems with it.
MozJPEG actually uses something similar, called trellis quantization or soft-thresholding: the idea is to further quantize DCT coefficients in noisy areas. There are some limitations [1], though. But it's not an issue with locally sensitive metrics and can be improved with edge detection.
Another challenge here is to estimate the current image quality and the target error.
I never liked progressive images. Just loading vertically is far less tantalizing than having to check the pixelation to see whether you're really looking at the final product.
Why would the page jump? The rectangular size of the image is encoded in the header either way. The layout engine will already reserve the necessary space as soon as the first few bytes of the image are loaded, progressive or not.
Progressive JPEGs are actually a worse experience on browsers that don't render them progressively, like Safari (including iOS). This is why many major sites, such as Flickr, don't use progressive JPEGs (last time I checked, which was a few years ago).
I'm curious; this seems like it would be better than using an image preload/loading icon when it comes to delay, or even blank spots/placeholders.
I think I've seen this before, but I've used an overlay loading GIF that was shown over the image while the image loaded; once it finished loading, the loading GIF would be hidden.
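Roughly what that overlay looks like, for anyone curious (a quick sketch; the class names and spinner.gif are made up): the spinner sits on top of the image and is hidden from the image's load event.

```html
<style>
  .img-wrap { position: relative; }
  .img-wrap .spinner {
    /* Center the spinner over the still-loading image. */
    position: absolute;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
  }
</style>

<div class="img-wrap">
  <img class="spinner" src="spinner.gif" alt="">
  <img src="photo.jpg" alt="A photo"
       onload="this.parentNode.querySelector('.spinner').hidden = true">
</div>
```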
I'm always happy when sites don't mess with <img> like this, since it completely breaks down for non-JS browsers. :/ There are ways to do this without breaking, but it's not always done. Apparently.
Oh man, that's one of those things about the web that sucks: you've got to factor it in. I don't know the percentage, but say a basic version of the site covers, on average, 70% of users. Then you have to factor in blind users, non-JavaScript, Internet Explorer... Ahhh
It bothers me because I know it's something to address.
So, regarding non-JavaScript: most interfaces are built with JavaScript, so what percentage of users are you addressing?
I've only had the non-JavaScript case come up personally when using Tor, and I don't know/use Tor much.
I don't ever put <noscript> alerts in... ahhh, I should, though.
I use AdBlock too; not sure if that affects the desired end result for non-JavaScript users.
I was talking about myself, in this case. Quite selfish, unlike your comment :)
I use NoScript and have disabled JS in iOS Safari, for battery life and security. I'm not a fanatical NoScript user at all; it just seems slightly less bad than full JS at this point.
Graceful degradation is a better choice than <noscript> tags, where possible. Which, for <img>, it is.
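For <img> that can be as simple as always shipping a real src and letting JS enhance it when it happens to run. A sketch, with made-up attribute and file names:

```html
<!-- Works without JS: a real, smaller image is always in src. -->
<img src="photo-small.jpg" data-full="photo-large.jpg" alt="A photo">

<script>
  // Progressive enhancement: swap in the larger version when JS is available.
  document.querySelectorAll('img[data-full]').forEach(function (img) {
    img.src = img.dataset.full;
  });
</script>
```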
thanks for considering the minority, even if it doesn't pan out :)
some sites also have silly JS breakage when you block 3rd-party domains while allowing same-origin requests. They blindly assume that stuff loaded successfully.
> some sites also have silly JS breakage when you block 3rd-party domains while allowing same-origin requests. They blindly assume that stuff loaded successfully.
Oh yeah, that's the worst. I use uBlock Origin to block third party JS by default, then selectively enable domains I trust (mostly CDN services). This is usually a happy medium between raw and noscript, but it definitely breaks a lot of pages.
I have uMatrix installed just for that; I didn't know uBlock Origin could also block third-party scripts. Oh well, I also love to block all cookies and only allow first-party cookies on sites where I have an account.
Had to do it for the project website. The solution was to generate inline placeholder images and substitute the original src attributes with them when the DOM is loaded, then swap them back on scroll and click events.
The placeholder images are JS-generated solid-color PNGs, downscaled just enough to keep the same aspect ratio. SVGs would probably be much easier, but less well supported.
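For the SVG route, a data-URI placeholder that just preserves the aspect ratio could look something like this (a sketch; the 1200x600 dimensions, fill color, and data-src attribute are made up, with the real src swapped in later by script):

```html
<img
  src="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='1200' height='600'%3E%3Crect width='100%25' height='100%25' fill='%23ddd'/%3E%3C/svg%3E"
  data-src="photo-1200x600.jpg"
  alt="A photo, loaded lazily">
```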
If only a side-by-side video of the page rendering with the two methods were available.
I like progressive images, and the whole idea, but a catchy video of a page re-laying itself out multiple times as images arrive over traditional HTTP with plain JPEGs would be the most convincing.
I think at this point a better solution is to use a polyfill for WebP.
The polyfill is pretty light because the underlying image format is just a single frame of VP8. There shouldn't be much of a performance hit, since it will just render as a single frame of video in polyfilled browsers.
Nah, JPEG 2000 isn't quite as nice, because some browsers can't natively parse it :). WebP support literally already exists in all mainstream browsers; it's just a single frame of VP8. And it's still around half the size of a regular JPEG.
Not sure why we're being downvoted here; either of these is clearly a better option for speeding up image downloads.
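If you'd rather not ship a polyfill at all, the <picture> element gives you a declarative fallback instead (a different approach from the polyfill discussed above; filenames are made up). Browsers that understand WebP take the first source, and everything else falls back to the JPEG:

```html
<picture>
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="A photo">
</picture>
```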
Bootstrap was always lazy. The tool was intended for quick prototyping; now it's just a replacement for knowing what you're doing.
In other cases, it's up to you whether you want to provide the best experience or just "this will do".
http://www.webperformancetoday.com/2014/09/17/progressive-im...