Retina display Macs, iPads, and HiDPI: Doing the Math (tuaw.com)
59 points by tuhin on March 2, 2012 | 26 comments



Mostly a good article and I agree with his main points, but I have two big disagreements.

- Viewing distances. I only sit about 16-18" away from my 13" MBP screen and only about 24" from my 24" display. This varies obviously as I don't sit in a locked position all day, but I think he's erring a bit too high on estimated view distance, which means his necessary resolution to reach "retina" level is too low.

- Screen size. Right now the 13" MBP I'm staring at has a very significant bezel that I would like to see mostly go away in an upcoming model refresh. The iPad's bezel makes sense since it's meant to be held in the hand. The MBP only needs enough bezel to fit the camera up top and needs none on the sides or bottom of the screen.

But yes, his overall point that Apple does not need to go so far as screen doubling on laptops and desktops to achieve pixels that are indistinguishable to the human eye is correct. I just think the resolution at which that point is reached on laptops and desktops is a bit higher than what he's calculated.
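
For what it's worth, the back-of-envelope formula is simple enough to check yourself (a minimal sketch, assuming the usual one-arcminute acuity figure; Swift only for illustration):

    import Foundation

    // Pixels per inch needed before individual pixels blend together,
    // assuming ~1 arcminute of visual acuity.
    func retinaPPI(atDistanceInches d: Double) -> Double {
        let oneArcMinute = (1.0 / 60.0) * Double.pi / 180.0  // in radians
        return 1.0 / (d * tan(oneArcMinute))                 // roughly 3438 / d
    }

    retinaPPI(atDistanceInches: 18)  // ~191 ppi, my laptop distance
    retinaPPI(atDistanceInches: 24)  // ~143 ppi, my desktop distance

Shorter viewing distances push the required PPI up quickly, which is why the estimated distance matters so much.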


Hi ebbv, I'm the author of the post.

I've had quite a bit of feedback that I've moved the viewing distances too far out. I measured from my own experiences, but I guess I must be atypical. It might be because I use dual monitors on the desktop (I have a 27" and a 26", so I sit back to reduce head turning) and slump when I have a laptop in my lap.

Anyway, I've expanded the spreadsheet that goes with the post to include some extra settings for closer distances and for your specific distances and devices.

Hope this helps!

Here's the spreadsheet: https://docs.google.com/spreadsheet/pub?key=0Aq8W2-V7OXqfdGV...


Yeah, the viewing distances are cack. There's no way I sit 4" further away from a 17" MBP than I do from a 13" MBA.


The landscape is getting a little messy for app developers. Not saying this is bad. It's just a fact.

Today you have to deliver .png, @2x.png and ~ipad.png image sets with your app. And there is no off-the-shelf way to reuse @2x images on the iPad, even though in most cases they'd work just fine. You can, but it requires creative coding.

Still, this results in app packages bloated with image assets in triplicate, with a fourth version soon to be added.

If you build a universal app, it seems that even someone downloading your app onto an iPod touch is going to end up with @2x, ~ipad and @2x~ipad (or whatever) images that the app will never use.

Maybe this is the beginning of the end of the universal app?


I was under the impression that not specifying a device modifier would allow the image to be used on both devices. From Apple's documentation:

Applications running in iOS 4 should now include two separate files for each image resource. One file provides a standard-resolution version of a given image, and the second provides a high-resolution version of the same image. The naming conventions for each pair of image files is as follows:

  Standard:        <ImageName><device_modifier>.<filename_extension>
  High resolution: <ImageName>@2x<device_modifier>.<filename_extension>

The <ImageName> and <filename_extension> portions of each name specify the usual name and extension for the file. The <device_modifier> portion is optional and contains either the string ~ipad or ~iphone. You include one of these modifiers when you want to specify different versions of an image for iPad and iPhone. The inclusion of the @2x modifier for the high-resolution image is new and lets the system know that the image is the high-resolution variant of the standard image.
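
Concretely, for a hypothetical asset called Button, that convention gives you up to four files:

    Button.png             (iPhone, standard)
    Button@2x.png          (iPhone, high resolution)
    Button~ipad.png        (iPad, standard)
    Button@2x~ipad.png     (iPad, high resolution)

and, as I understand it, UIImage's imageNamed: picks the appropriate variant at runtime, falling back to the unsuffixed file when a device-specific one isn't present.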

That said, it would be cool if Apple produced four different variants of an app and would send you the proper one depending on the device you download it on. When downloading with iTunes, your machine would download and store all four versions.


Maybe it marks a new rise in using vector image formats? (note: I am not an iOS developer and do not know the level of support iOS provides natively for vector formats)


You can render PDFs as UIImages [1]; there are categories around that make this easy [2].

However, this is never going to be as cheap as loading a pre-converted PNG (Apple's modified pngcrush does that conversion for you at build time). I think a lot of devs have the draw/vector vs. precomposed bitmap tradeoff the wrong way round.

Drawing all of your gradient-filled UIButtons with Core Graphics calls is a false economy compared to just loading a stretchable PNG. Almost all of Apple's UI system imagery is bitmap-based, and for good reason.

[1] http://mattgemmell.com/2012/02/10/using-pdf-images-in-ios-ap...

[2] https://github.com/mindbrix/UIImage-PDF
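
For illustration, rendering a PDF page into a UIImage boils down to something like this (a minimal sketch, not the linked category's actual code; Swift/UIKit assumed):

    import UIKit

    // Rasterize page 1 of a bundled PDF into a bitmap of the requested size.
    func image(fromPDFNamed name: String, size: CGSize) -> UIImage? {
        guard let url = Bundle.main.url(forResource: name, withExtension: "pdf"),
              let document = CGPDFDocument(url as CFURL),
              let page = document.page(at: 1) else { return nil }

        let box = page.getBoxRect(.mediaBox)
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { context in
            let cg = context.cgContext
            // PDF space has a bottom-left origin, so flip and scale to fit.
            cg.translateBy(x: 0, y: size.height)
            cg.scaleBy(x: size.width / box.width, y: -size.height / box.height)
            cg.drawPDFPage(page)
        }
    }

Even this much is doing real drawing work per image, which is the point: decoding a precomposed PNG is still cheaper.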


Ugh, yeah, this is now a righteous mess. I don't see why a user's device has to be stuffed with payload that he/she has zero chance of ever utilizing.

All it does is bloat the package and limit what devs can do while still staying under the over-3G download limit.


There's an open source view that'll let you use a single scalable PDF for things like icons: https://github.com/peyton/moomaskediconview

Disclaimer: I'm the author.


This (http://www.macstories.net/stories/retina-universal) article suggests a solution to the problem from the download-limit point of view.


Why doesn't the App Store strip the app for each platform? The DRM already blocks sideloading, so there is no concern about copying an app from iPad to iPhone.


"Retina Display" should have a very clear definition. It should refer to resolution at which anti-aliasing becomes unnecessary. Anti-aliasing is rightly classified as a hack placed on top of modern drawing systems, and one of the reasons that many games don't support it out of the box. With a high enough resolution, anti-aliasing technology will become irrelevant.

This is going to be different for each resolution depending on the distance you view it at, so I built a quick image that you can test this with: http://dl.dropbox.com/u/1437645/alias.html Put that on your phone or desktop and see how far you have to step back before the aliasing effect disappears.


From OP: "makes a solid argument for why an iPad retina display must be pixel-doubled -- i.e. 2048×1536 -- and not some intermediate resolution (just as was the case for the iPhone 4 before it). Anything else means every single existing app either has to re-scale art assets -- resulting in a fuzzy display -- or let them appear at a different size on-screen -- resulting in usability problems as the tap targets are resized. This is because every single existing iPad app is hard-coded to run full screen in 1024×768."

Doesn't this suggest that Apple is getting bitten by backward compatibility with the mass of pre-existing apps, just like Microsoft got stuck with the mass of existing software running on Windows (and with users who get too used to existing UIs/UX)?


Not exactly. This theoretically only pertains to bitmap UI elements (mostly icons). Sadly, bitmaps are everywhere, notably because they're so cheap to create and render compared to vectors, and more computation implies more energy. I suppose one could generate a cache of rasterized vector UI elements to cut down on subsequent rendering cost.

Apple has been trying to bring resolution independence to Mac OS X for quite some time, and in all honesty it worked well... for vector stuff. It broke in varying ways across iterations whenever a bitmap was involved; bitmaps ended up at best either blurry or unscaled. There's no miracle: unless you generate bitmaps at numerous sizes (as in icns files, where they range from 16x16 to 512x512 and get downsampled when scaling is needed, as on the Dock), initially small bitmaps will just look bad. The exception is a 2x factor, where you at best get no improvement (but no loss either) over a non-2x screen, or an uncanny effect when a 'fat pixel' bitmap sits next to a 'thin pixel' vector curve. And as robomartin noted [0], things are sufficiently bloated already without including full 16-to-512 bitmap sets.

What's more, 2x is computationally way simpler and much less costly for everyone. The only non-hackish, seriously viable alternative is to go all the way to vectors. A typical case of 'less is more'/'worse is better' if you ask me.

[0] http://news.ycombinator.com/item?id=3658369
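
To make 'computationally simpler' concrete, here's a toy sketch (my own illustration, not how the compositor actually works) of why an integer factor keeps bitmaps on pixel boundaries while a fractional one doesn't:

    // Where does a 1-point-wide element starting at point 3 land in pixels?
    func pixelSpan(pointOrigin: Double, pointLength: Double, scale: Double) -> (Double, Double) {
        return (pointOrigin * scale, (pointOrigin + pointLength) * scale)
    }

    pixelSpan(pointOrigin: 3, pointLength: 1, scale: 2.0)  // (6.0, 8.0): whole pixels, stays crisp
    pixelSpan(pointOrigin: 3, pointLength: 1, scale: 1.5)  // (4.5, 6.0): fractional edge, needs filtering

At 2x every point boundary falls exactly on a pixel boundary; at 1.5x it doesn't, and you're back to blurry or unscaled bitmaps.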


This is a very good post. It raises a question I've thought about in a different context before: how much bandwidth/resolution do we really need?

The human senses have an upper limit of resolution; once we reach that limit, further progress is irrelevant. So once everyone is streaming Netflix at twice that limit, where does further bandwidth/storage demand come from? Growing populations? There's a limit to that growth. "Big data"? Hardly.

We're rapidly approaching the point where an individual's need for further storage is exhausted. I think it'll be somewhere in the 10-100 PB range, which is pretty damn close.


17 years ago, Jakob Nielsen predicted a maximum of "about four tera bits per second bandwidth without compression, or one Tbps with some compression": http://www.useit.com/alertbox/9511.html

Interestingly, he also predicted that "it will probably be about seventeen years before these perfect monitors are commonplace", though I think he was just looking at VRAM requirements.


When a new product is created, companies compete initially on features. When all the essential features are there (take radios, for example: most radios have the basic features down), innovation starts. Innovation, such as portable radios, radios in your car, etc. push the industry forwards for a while.

Eventually though, innovation stops (I don't see many new radio sets today). When that happens, the product becomes almost a commodity, and another product improves so greatly upon it that the other product can be considered a new product.

Television, for example, replaced radio (two senses versus one sense). The internet, arguably, appeals to the same senses, but allows for user-created content, more freedom, etc.

When we reach the upper bound of innovation for television sets, it too will become a commodity. Perhaps it will eventually be replaced with a product that not only exceeds the capabilities of human sound and sight (for television to hit the bound for innovation, this must happen), but also incorporates something else: maybe it's another sense, maybe it's something more convenient (I guess the internet could, to some extent, be considered an evolution of television).


How much data does a 2 hour immersive holographic "film" consume?


Assuming the holographic system uses voxels, it would need to be capable of displaying 108,900 (330x330) voxels per cubic inch to match the resolution of an iPhone 4 across a flat surface. This would allow Retina-quality images when viewed from at least 11 inches away.

Assuming also that one voxel is 32 bits and not compressed, then the hologram would be 435,600 bytes per frame per cubic inch. At 24fps (you did say "film") that's 10,454,400 bytes per second per cubic inch.

Let's say it projects a hologram that fills a room the size of a Star Trek-style holodeck, a cube of maybe 10 metres on a side. That's about 400 inches on a side, or 64,000,000 cubic inches. That means the holographic stream would be 669,081,600,000,000 bytes (608.5 terabytes) per second.

So, to answer your question, a 2 hour holographic film would be 4,817,387,520,000,000,000 bytes (4.178 exabytes) in size.
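
Reproducing that arithmetic, with the same assumptions as above (330x330 voxels per cubic inch, 32-bit uncompressed voxels, 24 fps, a ~400-inch cube):

    // Back-of-envelope check of the figures above; Int is 64-bit, so no overflow.
    let voxelsPerCubicInch = 330 * 330          // 108,900
    let bytesPerVoxel      = 4                  // 32 bits, uncompressed
    let framesPerSecond    = 24
    let roomCubicInches    = 400 * 400 * 400    // ~10 m cube = 64,000,000 cubic inches

    let bytesPerSecond = voxelsPerCubicInch * bytesPerVoxel * framesPerSecond * roomCubicInches
    // 669,081,600,000,000 bytes/s (~608.5 TiB/s)

    let twoHourFilm = bytesPerSecond * 7200
    // 4,817,387,520,000,000,000 bytes (~4.18 EiB)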


Fortunately, I'd be willing to bet you could use some kind of occlusion culling to compress that stream quite a bit.


Yup, there's surely a ton of ways to compress that. Occlusion culling would be one, since you'd never see the insides of objects. For another, modulo atmospheric effects like fog and smoke, there'd be a lot of empty space between objects that even run-length encoding could compact quite a bit. I'm sure that even with lossless compression you could get it down to the petabyte range.
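
Just to illustrate how cheaply long runs of empty space collapse, a toy run-length encoder (my own sketch, nothing to do with any real hologram format):

    // Collapse consecutive equal values into (value, count) pairs.
    func runLengthEncode<T: Equatable>(_ values: [T]) -> [(value: T, count: Int)] {
        var runs: [(value: T, count: Int)] = []
        for v in values {
            if let last = runs.last, last.value == v {
                runs[runs.count - 1].count += 1
            } else {
                runs.append((value: v, count: 1))
            }
        }
        return runs
    }

    runLengthEncode([0, 0, 0, 0, 7, 0, 0])  // [(0, 4), (7, 1), (0, 2)]

Mostly-empty volumes compress to almost nothing, so the raw figures above are close to a worst case.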


This is a great post. If an iPad 3 has a "retina" display at 2048×1536, I still think a similarly high-resolution display will follow soon after for the soon-to-be-unified MacBook Pro/MacBook Air lines. If these high-resolution screens are available at 9.7" for the iPad, it's hardly a stretch to manufacture them at 13" with a slightly reduced DPI.


There's an elephant hiding behind "UI elements would look proportionally larger": such a non-doubled Retina display would show less information than the old non-Retina one. This makes perfect sense in the Lion (and Metro and Unity) single-window world, but not for people who do work.


Is anyone else wondering about speed/performance issues when scaling up iPad graphics like this? Apple must have a hot new (A6?) chip up its sleeve that will enable performance on par with what you get from an iPad 2 now, while also smoothly handling the larger images an iPad retina display requires. If not, what's the point outside of HD movies? Who cares about a retina display if graphically intensive apps all chug? Or if devs have to use non-high-res images to preserve performance... It will be interesting to see how it shakes out.

Here's hoping for an awesome chip that will make all of the graphics production rework worth it.


Well, static images need high res more than dynamic video does, so animations could run at a lower resolution and be scaled up with no real degradation of the user experience.


This can't be correct when it comes to the larger screens. I found the text on the iPhone 3G fuzzy, but not on the iPhone 4. I don't like Apple's anti-aliasing, but on the iPhone 4 it doesn't matter anymore; anti-aliasing becomes irrelevant. However, my 17-inch MacBook still has text that looks fuzzy to me, and the anti-aliasing is still bothersome. I'm really hoping there's an increase in resolution coming for the larger displays.



