Look at the details on the jpeg settings from the image itself.
Subsampling is turned off for some and on for others, which gives the target size far fewer bytes to work with.
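You can verify what the encoder actually did from the file itself. Here's a quick check with ImageMagick's `identify` (assuming ImageMagick is installed; the sample file is generated just for illustration):

```shell
# Make a sample JPEG with known settings: subsampling off, progressive on.
convert -size 64x64 gradient:red-blue \
    -sampling-factor 1x1 -interlace JPEG sample.jpg

# "jpeg:sampling-factor: 1x1,1x1,1x1" means chroma subsampling is off
# ("2x2,1x1,1x1" would mean 4:2:0); "Interlace: JPEG" means progressive.
identify -verbose sample.jpg | grep -Ei 'sampling-factor|interlace'
```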
This is a common problem with Photoshop users: they use the highest settings, which turn off subsampling, but then reduce the file-size allotment, which gives the encoder less room to work with. If you have a target file size, you get better results by turning off subsampling first, which Photoshop does not do by default until you drop the quality target very low.
This entire test has to be redone.
Use SUBSAMPLING OFF and PROGRESSIVE ON for all (jpeg) images for the web.
(and do not use default photoshop settings ever for web images)
P.S. Every time you save a file or image in Adobe products, it embeds a hidden fingerprint (beyond EXIF) that identifies your specific install. So not only does it add extra file size, every image you post can be traced on the web. Use jpegtran or jpegoptim to strip it.
Forgive me for asking, but are these settings really relevant? You've said you don't use the Save for Web feature which is Photoshop 101 for optimizing web images.
What you're describing sounds to me like someone recommending Text Edit, Apple Script, and Automator to do Unix commands because they didn't know Terminal was in the Utilities folder.
All the extra stuff you don't like is used by design studios. The metadata keeps track of the color profile, thumbnails, comments, and other data commonly used for managing large libraries of images.
"Save For Web" has the exact same problem and makes no difference unless the user is proactive and adjusts settings, the defaults are just as bad as the regular "save as".
When doing "Save For Web" Photoshop disables subsampling for Maximum and High (the default) and also does not enable progressive by default. It also adds meta.
> Some people like interlaced or "progressive" images, which load gradually. The theory behind these formats is that the user can at least look at a fuzzy full-size proxy for the image while all the bits are loading. In practice, the user is forced to look at a fuzzy full-size proxy for the image while all the bits are loading. Is it done? Well, it looks kind of fuzzy. Oh wait, the top of the image seems to be getting a little more detail. Maybe it is done now. It is still kind of fuzzy, though. Maybe the photographer wasn't using a tripod. Oh wait, it seems to be clearing up now ...
Progressive JPEGs are not smaller. They contain the same data, just rearranged. You can losslessly transform a progressive JPEG into a baseline JPEG and vice versa, without recompressing it. The jpegtran tool, included with the standard JPEG library, can do this.
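As a concrete illustration of that round trip (assuming libjpeg's jpegtran and ImageMagick are installed; the input file is generated just for the demo):

```shell
# Generate a baseline sample JPEG to work with.
convert -size 64x64 gradient:red-blue baseline.jpg

# Baseline -> progressive: same DCT data, rearranged into multiple scans.
jpegtran -progressive -outfile progressive.jpg baseline.jpg

# Progressive -> baseline: jpegtran emits baseline output by default.
jpegtran -outfile roundtrip.jpg progressive.jpg

# Decode both ends of the round trip; the pixels are bit-identical.
convert baseline.jpg a.ppm
convert roundtrip.jpg b.ppm
cmp a.ppm b.ppm && echo "lossless round trip"
```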
Not a fan of progressive myself, but it should be noted that iOS (at least iOS5), will downsample jpgs larger than 2MP; saving as progressive apparently circumvents this 'feature.'
I suppose this feature is there so that the thing takes less RAM and is less taxing on the GPU. So strike a balance and choose wisely, as you don't want four or so images killing all unfocused Safari tabs.
I assume you've learned this from somewhere in the bowels of the photoshop documentation or years of domain knowledge? I ask because as a photoshop user of 5 years I've always wanted to learn a LOT more about the nitty gritty details (I am on hackernews :P) of what I was exporting but I didn't find a solid end-all be-all resource. Any ideas where I can find one?
I rarely use Photoshop myself. I learned it from having to support Photoshop users and figure out why the images they uploaded to the CMS were so massive, i.e. 400 KB for a 600x600 photo.
What we did for the non-tech people was simply tell them to always use setting #6 on photoshop and use the progressive setting. Two steps seemed the most they could handle.
I had to go into photoshop and save the same image repeatedly under all the different settings and then examine the resulting jpeg under different tools to see exactly what it was doing.
It also doesn't help that photoshop bloats jpegs by adding hidden adobe meta to every jpeg (beyond and different from exif).
Here is a technical analysis someone did on the photoshop settings:
>It also doesn't help that photoshop bloats jpegs by adding hidden adobe meta to every jpeg (beyond and different from exif).
I believe if you use "Save for Web", it will strip out most metadata and EXIF from the image before saving. Here's a result of using "Save for Web" (1.jpg), passing through JPEGTRAN with `-copy none -optimize` (2.jpg) and `JFIFREMOVE` (3.jpg):
Because it strips any meta information contained in the file which reduces file size. It can also losslessly optimize the image. See here for details (I've written about it a short while ago): http://hancic.info/optimize-jpg-or-jpeg-images-automatically...
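A minimal sketch of that stripping step (assuming jpegtran from libjpeg is installed; the "bloated" input here is generated with an ImageMagick comment standing in for Adobe's metadata blocks):

```shell
# Make a JPEG with an embedded COM marker to simulate bundled metadata.
convert -size 64x64 gradient:red-blue -set comment "tracking blob" bloated.jpg

# -copy none drops all comment/APPn markers; -optimize rebuilds the Huffman
# tables. Both steps are lossless for the image data itself.
jpegtran -copy none -optimize -outfile stripped.jpg bloated.jpg
```

jpegoptim does the same job in place: `jpegoptim --strip-all bloated.jpg`.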
*It sounds great in theory but the previous example only saved < 200 bytes. That's not really optimization, that's overkill.*
That's a fair point, but if you've got a site that's getting hundreds of thousands or millions of views, or a large number of thumbnails, the one-time effort to shrink image size might be worth it.
how does this translate into something like ImageMagick ? I mean, if I have users uploading large (>10mb) images which I want to transform into web-optimized images, is there a server-side pipeline that would give the most meaningful results ?
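One possible server-side sketch with ImageMagick, applying the settings recommended in this thread (subsampling off, progressive on, metadata stripped). The 1600px cap and quality 60 are placeholder values to tune for your own content; the oversized input is generated here just to make the example self-contained:

```shell
# Stand-in for a large user upload.
convert -size 3000x2000 gradient:red-blue upload.jpg

# Only shrink if larger than 1600px ('>'), drop all metadata, disable
# chroma subsampling, write progressive scans, target a mid quality.
convert upload.jpg \
    -auto-orient \
    -resize '1600x1600>' \
    -strip \
    -sampling-factor 1x1 \
    -interlace JPEG \
    -quality 60 \
    web.jpg

identify -format '%wx%h ' upload.jpg web.jpg
```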
Even that being the case, the higher-resolution image would need to have awful artifacting to look worse than the lower-res image on a high-res display. Scaling up a low-res image in the browser will always make it look fuzzy; you'd need to apply sharpening filters, and you'd still never recover the detail lost in the original shrink.
Great point. I use Fireworks instead, especially for PNGs. Fireworks lets you use PNG8, and lets you set how the transparency is rendered (alpha or indexed). Large PNG files produced by Photoshop's Save for Web dialog are two to three times the size of the PNGs I generate with Fireworks.
"other companies have already started or will also start implementing this new Retina technology."
That is simply impossible for two reasons:
1) Retina display is a trademarked Apple phrase and no other company will ever have retina displays.
2) Retina is not a technology.
I think the word technology is being overused these days.
The author was simply talking about high-PPI displays when he said "this new Retina technology", which other companies have already "implemented" in their smartphone displays. Unless one is talking only about Apple products (which is not the case here) the term retina display should not be used.
For most people, Retina is synonymous with High-PPI, like Kleenex for tissue or Coke for cola. Retina is easier to say & understand. High-PPI is lost in a sea of resolution jargon.
I don't think we need terms like "retina" or "high PPI" at all. Consumers can decide which PPI is good enough for themselves.
Sharp is now producing 5-inch 1920x1080 443 PPI displays. Obviously this is much higher than anything we have ever seen, so it is not a "retina display". What should we call it? Instead of "cornea display" or "very high PPI", I prefer the term "443 PPI".
>That is simply impossible for two reasons:
> 1) Retina display is a trademarked Apple phrase and no other company will ever have retina displays. 2) Retina is not a technology.
Useless pedantic "correction" of the day award.
If you want to be fully pedantic though, you're wrong on both counts:
1) Retina might be trademarked by Apple, but nothing stops another company from signing a deal with Apple to use the name for their displays. So, "impossible"? Hardly.
2) Retina is very much a technology. Or rather, what is the definition of technology? Something that requires specific, identifiable construction qualifies as a "technology". In this case, Retina is a high-DPI screen, where "high" means a density at which it is impossible or extremely difficult for a person with 20/20 vision to separate pixels at the average viewing distance for that particular class of device. That it's also the name of a SPECIFIC implementation by a PARTICULAR company doesn't matter much; people are not lawyers.
We use a brand name as a substitute for the technology it represents all the time in other fields too. Even PC was once "IBM PC".
This is incredible. I can't believe it's taken this long for someone to realise this (at least, it doesn't seem like common knowledge to me). Just commenting for the benefit of anyone without a retina display -- the differences really are stunning. It's like night and day, and to do that while still reducing file sizes seems crazy.
So, this raises the question: should this become standard practice from now on? If not, why not?
An outdated computer or browser sucks at resizing JPEGs. So does a two-year-old high-end Android smartphone.
Minimizing HTTP requests, avoiding FOUC, using only one version of JQuery (or, even better, none), using CSS sprites... there are countless optimisations which are more important and seldom used.
Retina is a buzzword and a buzzword resolution. If you target the 0.1% rich hipsters, yeah, it's important. If you have real users browsing at 1024x768 on a four-year-old laptop over 2 Mbps broadband, it's not.
When Amazon uses Retina images, that's when it'll be standard practice.
Which is a fair point, but it depends who you're targeting: if affluent users are more likely to have high-DPI devices (iPad 3, most smartphones, high-end MacBooks), and they are, then you're going to have to make your site 'retina-ready.' And it's a mistake to think this is just a young, tech-savvy issue. How many older people are buying iPads? Do you think they'll notice, subconsciously or not, if your site looks a bit crappy?
I have a hard time believing it would be a significant issue to do a 2x image resize, especially if you provide exact image dimensions in your img tag to start with to avoid the renderer having to wait until the image has downloaded to layout the page properly. In any case, I think someone should do some benchmarks to see what kind of an issue it is in practice.
Either way, it's not a bad thing to be forward thinking with design. The amount of 'retina' devices in the world is growing exponentially. At the moment, it's largely an Apple problem, but with high DPI panels now on the market it'll soon be industry-wide.
Show me some A/B testing showing a conversion increase when Retina proofing your website.
Old people with an iPad 3 won't see the difference between a non-retina and a retina website. They'll see which is the fastest and the easiest to use (because usually, pixel-perfect guys are not too smart about UI).
Retina-ready is a scam. We have neither the tools (SVG support too weak, no <picture> element, no good navigator.connection...) to provide a meaningful retina experience while respecting other users.
You say show me the evidence for A/B testing and then state "Old people with an iPad3 won't see the difference between a non retina and a retina website." where is your evidence?
Anyone old or young that cares about speed of browsing will be running the latest browser, with good upscaling...
Retina isn't a scam; putting an SEO spin on the need to even consider preparing for the future is.
So why stop at 2x? If you want future-proofing, why not store your images at 10x, just in case?
Isn't it more of a browser problem if they have a crappy upsampler?
The cost of always doing it is primarily the extra memory used on the client, while the quality loss from poor client-side downscalers shouldn't be too bad, particularly if you target 2x resolution.
IME the best bandwidth optimization is an adblocker :)
>An outdated computer/browsers sucks at resizing jpegs.
Most of us don't care about those anymore. Anything older than IE8 is out in modern web design.
>A two year old high end Android smartphone too.
Those matter even less. A two-year-old phone is near end-of-life, since people get new contracts. And Android, despite its higher market share, sees less web usage (on a 20-80 split), probably because more of them are sold to less tech-savvy users.
Well I imagine that there would be no advantage to upscaling the original photograph, as you'd not be gaining any detail. But otherwise, I'd be curious to see what happens.
I suspect that in-browser sharpening of resized images (CSS or HTML) can make up for some of the detail lost to the lower JPEG quality.
In the 60-90 range, differences are always minimal, especially when applied to images lacking detail 'coverage' to begin with (like the test set on the site).
Bottom line: I do think the blogger is onto something.
Even if the "less than original size" thing doesn't always pan out due to inefficiencies in his compression process, it makes sense that in-browser sharpening would allow reduced file sizes for images displayed at lower than native resolution.
PS: Significant savings in JPEG size with barely any perceptible loss of detail can be achieved by anyone with JPEGMini (which, unlike JPEG 2000, is 100% compatible with all browsers these days).
I've always been vaguely aware that JPEG gives better results at a constant quality setting as you increase the size of the image.
Firstly, the 8x8 blocks become smaller relative to the image; but beyond that, I think it's simply in the nature of compression algorithms in general, and lossy compression in particular, to produce better results when there's more source material to work with.
However, I didn't expect it to improve enough to let one cut the Gordian knot that Retina displays have forced upon us by using 2x-resolution images across the board.
I would imagine that not all source images respond quite as well as others.
Also - using 2x images for all devices will surely create a quadrupling of RAM requirements which might cause performance issues.
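The blogger's core claim is easy to try for yourself with ImageMagick (assumed installed; the synthetic source image and the two quality values are just placeholders, and results will vary with content):

```shell
# Encode the same scene once at full size with low quality and once at half
# size with higher quality, then compare the resulting file sizes.
convert -size 1024x1024 plasma:fractal scene.png

convert scene.png -sampling-factor 1x1 -quality 30 full_lowq.jpg
convert scene.png -resize 50% -sampling-factor 1x1 -quality 80 half_highq.jpg

ls -l full_lowq.jpg half_highq.jpg   # compare the byte counts yourself
```

On detailed sources the full-size, low-quality file is often competitive in bytes while holding up better when displayed at half size on a high-DPI screen; on flat, low-detail sources the advantage can disappear.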
This isn't really good practice, unless you revise your cookie on every page load to make sure it is still correct; e.g. restoring an ipad2 backup to an ipad3 will leave you with the wrong images for the next 10 years.
And an OLPC-style device might have different ratios if you're browsing using the e-ink display rather than the lcd.
We probably should have been using this all along. That we can also benefit from the extra resolution thanks to touch interfaces and high-dpi displays is icing.
While I can appreciate the technique, I haven't adopted the use of @2x images for photos. Most photos tend to hold up pretty well when scaled (soft edges and relatively low contrast). If you're serving up a portfolio, sure, but I find more value in maintaining and serving @2x (based on media queries for MDPR > 1.3) for UI elements or logos not easily reproduced in CSS/SVG.
Surely this works because the compression algorithm can find more predictability in an image before its resolution is reduced. You see this effect a lot when you are trying to reach very high compression levels. However, it is difficult to know exactly what the best settings are without actively trying different configurations.
Good observation, but too bad that no one will ever see your high DPI images because the "retina revolution" isn't a thing. Most people won't ever notice the difference.