Retina Revolution - smaller images with better quality (netvlies.nl)
134 points by aggarwalachal on Oct 7, 2012 | 75 comments



This test was done COMPLETELY WRONG.

Look at the details on the jpeg settings from the image itself.

Subsampling is turned off for some images and on for others, which leaves far fewer bytes to work with at a given target size.

This is a common problem with Photoshop users: they use the highest quality settings, which turn off subsampling, but then reduce the filesize allotment, which gives the encoder less room to work with. If you have a target filesize you get better results by turning off subsampling first, which Photoshop does not do by default until you drop the quality target very low.

This entire test has to be redone.

Use SUBSAMPLING OFF and PROGRESSIVE ON for all (jpeg) images for the web.

(and do not use default photoshop settings ever for web images)

PS: every time you save a file or image in Adobe products it embeds a hidden fingerprint (beyond EXIF) that identifies your specific install. Not only does that add extra file size, it means every image you post can be traced on the web. Use jpegtran or jpegoptim to strip it.
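
Something along these lines works, if you have the command-line tools installed (filenames here are just placeholders):

    # lossless re-encode with all extra markers (EXIF, Adobe meta, comments) dropped
    jpegtran -copy none -optimize photo.jpg > photo-stripped.jpg

    # or strip the markers in place with jpegoptim
    jpegoptim --strip-all photo.jpg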


Can you write a blog article with this information, please?

I would be very happy to see a few examples of photographs compressed with the OP's method and with your method.


Correction to my suggestions for settings, since I cannot edit the original anymore.

It should say SUBSAMPLING ON and PROGRESSIVE ON

Not subsampling off. Off is the incorrect setting and makes much larger images (or reduces the available space when restricting file size).


Forgive me for asking, but are these settings really relevant? You've said you don't use the Save for Web feature, which is Photoshop 101 for optimizing web images.

What you're describing sounds to me like someone recommending TextEdit, AppleScript, and Automator to run Unix commands because they didn't know Terminal was in the Utilities folder.

All the extra stuff you don't like is used by design studios. The metadata keeps track of the color profile, thumbnails, comments, and other information commonly used for managing large libraries of images.


"Save For Web" has the exact same problem and makes no difference unless the user is proactive and adjusts settings, the defaults are just as bad as the regular "save as".

When doing "Save For Web" Photoshop disables subsampling for Maximum and High (the default) and also does not enable progressive by default. It also adds meta.


As someone else pointed out from looking at your screenshot - you're not using 'Save for Web'.

How much of your critique still applies to 'Save for web'?

Nobody should be using the normal save dialog for web images, so I'm not sure how much of what you say remains valid.


Counterpoint on progressive JPEG [1]:

> Some people like interlaced or "progressive" images, which load gradually. The theory behind these formats is that the user can at least look at a fuzzy full-size proxy for the image while all the bits are loading. In practice, the user is forced to look at a fuzzy full-size proxy for the image while all the bits are loading. Is it done? Well, it looks kind of fuzzy. Oh wait, the top of the image seems to be getting a little more detail. Maybe it is done now. It is still kind of fuzzy, though. Maybe the photographer wasn't using a tripod. Oh wait, it seems to be clearing up now ...

[1]: http://philip.greenspun.com/panda/images


On web-sized images the progressive loading is over quickly, and the benefit is an immediate placeholder, even on slower connections, instead of whitespace.

But the real reason is it also produces smaller file sizes.


Progressive JPEGs are not smaller. They contain the same data, just rearranged. You can losslessly transform a progressive JPEG into a baseline JPEG and vice versa, without recompressing it. The jpegtran tool, included with the standard JPEG library, can do this.
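
For example, something like this (filenames are placeholders; jpegtran's default output is baseline/sequential):

    # rearrange an existing JPEG into progressive order, without touching the image data
    jpegtran -progressive photo.jpg > photo-progressive.jpg

    # and back to baseline, again without recompressing
    jpegtran photo-progressive.jpg > photo-baseline.jpg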


Larger images saved as progressive JPEGs tend to be slightly smaller than standard JPEGs.

http://www.yuiblog.com/blog/2008/12/05/imageopt-4/


If you actually try it using the tool you yourself mentioned, you'll see that progressive JPEGs are in fact smaller.


Stream compression benefits from locality of data, so the ability to merely rearrange data can be quite beneficial.


Not a fan of progressive myself, but it should be noted that iOS (at least iOS 5) will downsample JPEGs larger than 2MP; saving as progressive apparently circumvents this 'feature.'


I suppose this feature is there so that the thing takes less RAM and is less taxing on the GPU. So strike a balance and choose wisely, as you don't want four or so images killing all unfocused Safari tabs.


I assume you've learned this from somewhere in the bowels of the Photoshop documentation, or from years of domain knowledge? I ask because, as a Photoshop user of 5 years, I've always wanted to learn a LOT more about the nitty-gritty details (I am on Hacker News :P) of what I was exporting, but I've never found a solid end-all, be-all resource. Any ideas where I can find one?


I rarely use Photoshop myself. I learned it from having to support Photoshop users and work out why the images they uploaded to the CMS were so massive, i.e. 400 KB for a 600x600 photo.

What we did for the non-tech people was simply tell them to always use setting #6 on photoshop and use the progressive setting. Two steps seemed the most they could handle.

http://i.imgur.com/vct3D.png (best one-shot photoshop settings for web jpegs)

I had to go into Photoshop, save the same image repeatedly under all the different settings, and then examine the resulting JPEGs with different tools to see exactly what it was doing.

It also doesn't help that photoshop bloats jpegs by adding hidden adobe meta to every jpeg (beyond and different from exif).

Here is a technical analysis someone did on the photoshop settings:

http://www.impulseadventure.com/photo/jpeg-quantization.html...


>It also doesn't help that photoshop bloats jpegs by adding hidden adobe meta to every jpeg (beyond and different from exif).

I believe if you use "Save for Web", it will strip out most metadata and EXIF from the image before saving. Here's a result of using "Save for Web" (1.jpg), passing through JPEGTRAN with `-copy none -optimize` (2.jpg) and `JFIFREMOVE` (3.jpg):

    330090  1.jpg
    329916  2.jpg
    329898  3.jpg


Yup, jpegtran is a must for post-processing adobe images.

It can also do a lossless conversion to progressive format.

Most people do not know about it though. JPEGOPTIM is another one.

You can examine what's embedded in an image at http://regex.info/exif.cgi, but there are better offline tools.


I don't get it. Why is jpegtran a must?


Because it strips any meta information contained in the file, which reduces file size. It can also losslessly optimize the image. See here for details (I wrote about it a short while ago): http://hancic.info/optimize-jpg-or-jpeg-images-automatically...


It sounds great in theory but the previous example only saved < 200 bytes. That's not really optimization, that's overkill.


Not if you have many images on the same page. Then it adds up.


If you place 1800 images on a page, you'll save enough bandwidth to now have 1801.


In response to Radley, who said:

*It sounds great in theory but the previous example only saved < 200 bytes. That's not really optimization, that's overkill.*

That's a fair point, but if you've got a site that's getting hundreds of thousands or millions of views, or a large number of thumbnails, the one-time effort to shrink image size might be worth it.


How does this translate into something like ImageMagick? I mean, if I have users uploading large (>10 MB) images which I want to transform into web-optimized images, is there a server-side pipeline that would give the most meaningful results?
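
Something like this is what I had in mind, as a rough, untested sketch (the resize limit and quality value are guesses on my part):

    # ImageMagick: shrink huge uploads, force 4:2:0 subsampling, low quality,
    # progressive encoding, and strip all metadata
    convert upload.jpg -auto-orient -resize '1600x1600>' \
        -sampling-factor 4:2:0 -quality 45 -interlace Plane -strip web.jpg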


The extra weight is for library management features (color, compatibility, previews, Bridge).

Web designers don't "Save" web images; they use "Save for Web", which strips out the extras.


Aha, the web page talks about "chroma subsampling"; that makes sense now. So it's saving less chroma than luma (color vs. intensity) information.


Even that being the case, the higher-resolution image would need to have awful artifacting to be worse than the lower-res image on a high-res display. Scaling up a low-res image in the browser will always make it look fuzzy; you'd need to apply filters to sharpen it up, and you'd still lose the detail discarded in the original shrink.


You mean like on the teeth and in the eyes in the third image set?


Great point. I use Fireworks instead, especially for PNGs. Fw lets you use PNG8, and lets you set the way the transparency is rendered (alpha or index). Large PNG files produced by Photoshop's Save for Web dialog are two to three times the size of the PNGs I generate with Fireworks.


You're doing it wrong then. PNG8 is the same in both products. PNG24 isn't optimized in Adobe products. You need to run it through a PNG optimizer.


You're right that PNG optimizers will yield better results, but PNG8 isn't the same in both products.

Photoshop won't do 8 bit opacity (as opposed to hard transparency), whereas Fireworks will.


Is the conclusion wrong?


"other companies have already started or will also start implementing this new Retina technology."

That is simply impossible for two reasons:

1) Retina display is a trademarked Apple phrase, and no other company will ever have retina displays.

2) Retina is not a technology.

I think the word technology is being overused these days.

The author was simply talking about high-PPI displays when he said "this new Retina technology", which other companies have already "implemented" in their smartphone displays. Unless one is talking only about Apple products (which is not the case here) the term retina display should not be used.


For most people, Retina is synonymous with High-PPI, like Kleenex for tissue or Coke for cola. Retina is easier to say & understand. High-PPI is lost in a sea of resolution jargon.


Not that "High-PPI" is all that useful of a term either, any more than "Ultra High Frequency" or "Very High Frequency" or "Super High Frequency".


High PPI is the literal description of the phenomenon referred to by the OP. It's not a buzzword or a trademark, it's a technical specification.


326 PPI is a technical specification. "High PPI" is a buzzword used to sell things - our current use of it will be invalid in a few years.

edit: deleted first sentence, was more off-topic than useful.


I don't think we need terms like "retina" or "high PPI" at all. Consumers can decide which PPI is good enough for themselves.

Sharp is now producing 5-inch 1920x1080 443 PPI displays. Obviously this is much higher than anything we have ever seen, so it is not a "retina display". What should we call it? Instead of "cornea display" or "very high PPI", I prefer the term "443 PPI".


Given that Apple's retina and Android's xhdpi spec fall into the same numbers, it's still fair to use the phrase 'high dpi'.


>That is simply impossible for two reasons: 1) Retina display is a trademarked Apple phrase and no other company will ever have retina displays. 2) Retina is not a technology.

Useless pedantic "correction" of the day award.

If you want to be fully pedantic though, you're wrong on both counts:

1) Retina might be trademarked by Apple, but nothing stops another company from signing a deal with Apple to use the name for their displays. So, "impossible"? Hardly.

2) Retina is very much a technology. Or rather, what is the definition of technology? Something that requires specific, identifiable construction qualifies as a "technology". In this case, Retina is a high-DPI screen, where "high" is the level at which it is impossible or extremely difficult for a person with 20/20 vision to separate pixels when looking from the average viewing distance for that particular class of device. That it's also the name of a SPECIFIC implementation of such things by a PARTICULAR company doesn't matter much; people are not lawyers.

We use a brand name as a substitute for the technology it represents all the time in other fields too. Even PC was once "IBM PC".


Actually, your post is more deserving of your little award. Congrats!


Thanks, but I did it ironically.


This is incredible. I can't believe it's taken this long for someone to realise this (at least, it doesn't seem like common knowledge to me). Just commenting for the benefit of anyone without a retina display -- the differences really are stunning. It's like night and day, and to do that while still reducing file sizes seems crazy.

So, this raises the question: should this become standard practice from now on? If not, why not?

Poor headline though.


It should not.

An outdated computer/browser sucks at resizing JPEGs. So does a two-year-old high-end Android smartphone.

Minimizing HTTP requests, avoiding FOUC, using only one version of JQuery (or, even better, none), using CSS sprites... there are countless optimisations which are more important and seldom used.

Retina is a buzzword and a buzzword resolution. If you target the 0.1% of rich hipsters, yeah, it's important. If you have real users browsing at 1024x768 on a four-year-old laptop over 2 Mbps broadband, it's not.

When Amazon uses Retina images, that's when it'll be standard practice.


Which is a fair point, but it depends who you're targeting: if affluent users are more likely to have high-DPI devices (iPad 3, most smartphones, high-end MacBooks), and they are, then you're going to have to make your site 'retina-ready.' And it's a mistake to think this is just a young, tech-savvy issue. How many older people are buying iPads? Do you think they'll notice, subconsciously or not, if your site looks a bit crappy?

I have a hard time believing it would be a significant issue to do a 2x image resize, especially if you provide exact image dimensions in your img tag to start with, to avoid the renderer having to wait until the image has downloaded to lay out the page properly. In any case, I think someone should do some benchmarks to see what kind of an issue it is in practice.

Either way, it's not a bad thing to be forward thinking with design. The amount of 'retina' devices in the world is growing exponentially. At the moment, it's largely an Apple problem, but with high DPI panels now on the market it'll soon be industry-wide.


Show me some A/B testing showing a conversion increase when Retina proofing your website.

Old people with an iPad 3 won't see the difference between a non-retina and a retina website. They'll see which is the fastest and the easiest to use (because usually, pixel-perfect guys are not too smart about UI).

Retina-ready is a scam. We don't have the tools (SVG support is too weak, there's no <picture> element, no good navigator.connection...) to provide a meaningful retina experience while respecting other users.

Speed trumps "beauty". Most often.


You ask for A/B-testing evidence and then state "Old people with an iPad 3 won't see the difference between a non-retina and a retina website." Where is your evidence?

Anyone old or young that cares about speed of browsing will be running the latest browser, with good upscaling...

Retina isn't a scam; putting an SEO spin on the need to even consider preparing for the future is.


So why stop at 2x? If you want to be future-proof, why not store your images at 10x just in case? Isn't it more of a browser problem if they have a crappy upsampler?


The cost of always doing it is primarily in extra memory used on the client, while the quality loss from poor client downsizers should not be too bad, particularly if you target 2x resolution.

IME the best bandwidth optimization is an adblocker :)


>An outdated computer/browsers sucks at resizing jpegs.

Most of us don't care about those anymore. Anything older than IE8 is out in modern web design.

>A two year old high end Android smartphone too.

Those matter even less. A two-year-old phone is near end-of-life, since people get new contracts. And Android, despite the higher market share, sees less web usage (on the 20-80 scale), probably because more of them are sold to less tech-savvy users.


Curious whether it still works if we apply it further. At which point does the pattern break down? 4x and Q20? 8x and Q10? etc.


Well I imagine that there would be no advantage to upscaling the original photograph, as you'd not be gaining any detail. But otherwise, I'd be curious to see what happens.


I suspect that in-browser sharpening of resized images (CSS or HTML) can make up for some of the detail lost at the lower JPEG quality.

In the 60-90 range, differences are always minimal, especially when applied to images lacking detail 'coverage' to begin with (like the test set on the site).

Bottom line: I do think that the blogger is onto something.

Even if the "less than original size" thing doesn't always pan out due to inefficiencies in his compression process, it makes sense that in-browser sharpening would allow reduced file sizes for images displayed at lower than native resolution.

PS: Significant savings in JPEG size with barely any perceptible loss of detail can be achieved by anyone with JPEGMini (which, unlike JPEG 2000, is 100% compatible with all browsers these days).


I've always been vaguely aware that JPEG gives better results at a constant quality setting as you increase the size of the image.

Firstly, the 8x8 blocks become smaller relative to the image; but in addition to this, I think it's just in the nature of compression algorithms in general, and lossy compression in particular, to produce better results when there's more source material to work with.

However, I didn't expect it to improve enough to let one cut the Gordian knot that Retina displays have forced upon us by using 2x-resolution images across the board.

I would imagine that not all source images respond quite as well as others.

Also, using 2x images for all devices will surely quadruple RAM requirements, which might cause performance issues.


Honestly, I did not notice any JPEG artifacts on any of those images, so I don't think it's that obvious yet.


Funny thing is, this company is called Netvlies, which is literally "retina" in Dutch. They don't seem to mention it themselves...


It seems like the article is a translation; the irony might be something the original author didn't realize.


Maybe it's time for wider support for more advanced image compression formats (JPEG 2000, WebP, JPEG XR).


This is fascinating as I discovered exactly the same thing this week (sadly after applying loads of javascript to swap retina images).

I found that a quality-30 JPEG at retina size generally looks better than a scaled quality-80 one, and is smaller.

Plus you can zoom in on any Mac, not just an iPad (something I do all the time).
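
If anyone wants to reproduce the comparison without Photoshop, a rough ImageMagick sketch like this should do it (the target widths and quality numbers are just examples, untested):

    # 2x-size image at low quality vs. 1x-size image at higher quality
    convert original.jpg -resize 1200x -quality 30 -strip photo-2x.jpg
    convert original.jpg -resize 600x  -quality 80 -strip photo-1x.jpg

    # compare the resulting file sizes; both get displayed at 600px wide
    ls -l photo-2x.jpg photo-1x.jpg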


Why not just figure out the pixelRatio once and then serve images according to that?

See this gist: https://gist.github.com/3848834


This isn't really good practice unless you revise your cookie on every page load to make sure it is still correct; e.g. restoring an iPad 2 backup to an iPad 3 will leave you with the wrong images for the next 10 years.

And an OLPC-style device might have different ratios if you're browsing using the e-ink display rather than the lcd.


Smart. It now seems like it should have been obvious all along.

You want to reduce the size of an image file from X kB down to y kB. Which method will give better-looking results?

1. Dumb, across-the-board by-two resolution reduction?

2. Smart, perceptually-tuned JPEG compression?

We probably should have been using this all along. That we can also benefit from the extra resolution thanks to touch interfaces and high-dpi displays is icing.


While I can appreciate the technique, I haven't adopted the use of @2x images for photos. Most photos tend to hold up pretty well when scaled (soft edges and relatively low contrast). If you're serving up a portfolio, sure, but I find more value in maintaining and serving @2x (based on media queries for MDPR > 1.3) for UI elements or logos not easily reproduced in CSS/SVG.


FWIW, in case you don't have a retina display, zooming in your browser works just as well to make the difference in quality visible.


Surely this works because the compression algorithm can find more predictability in an image before its resolution is reduced. You see this effect a lot when you are trying to reach very high compression levels. However, it is difficult to prove exactly what the best settings are without actively trying different configurations.



Could this technique also work for video?


This is so wrong.

1) He didn't sharpen the small images.

2) He only displayed one type of image - very bright with no shadow detail.

This theory breaks completely if you actually compare apples to apples. Like these two, both 80KB generated from a high quality image:

http://i.imgur.com/E3gaB.jpg

http://i.imgur.com/UrDEx.jpg


Why would he sharpen the small images? The browser won't.


Because then they will look sharper. You have to sharpen when you downsample images. Web design 101.


Yes, every size photo should have size-specific sharpening. Large or Small. Doesn't matter, if you are expecting people to look at it ~seriously.


Or you use an interpolation algorithm optimized for down-sizing.


Good observation, but too bad that no one will ever see your high DPI images because the "retina revolution" isn't a thing. Most people won't ever notice the difference.


Yea that pesky tablet revolution wasn't a thing either, and neither was the GUI actually. I'm still using lynx.

(No offense to lynx users.)

Snark aside, it seems like high dpi will inevitably become standard in the next few years.



