Secrets of the Nexus One's screen: science, color, and hacks (arstechnica.com)
59 points by suraj on March 25, 2010 | 11 comments



Don't miss the rainbow picture on page 3; it's a pretty cool hack:

"I then developed a far more nefarious test of color fringing: an algorithm that would take an arbitrary full-color image and generate a pure grayscale stipple pattern that appears colored on the N1 screen. The interesting thing about these images is if you display them at anything other than 100% zoom, the colors disappear and you only see the grayscale stipple pattern."


I have an iPod Touch and a Nexus One in front of me now. Looked at side by side, the Touch seems drab and fuzzy and the grid layout of its pixels stands out, while the N1 is crisp and the white background of text is solid.

The thing it reminds me of most is a comparison between a CRT and an LCD. Apple seems to apply more anti-aliasing to its fonts, certainly more than is customary on Windows, and it really shows on the Touch - whatever you might say about the N1 having slightly fuzzy text when zoomed well out, the Touch's text is plainly fuzzy at normal reading size.

Things other folks have brought up, like 18-bit with dithering vs 16-bit, I'm less concerned with. The AMOLED packs a wider contrast range into those 16 bits (which by itself is enough to make banding more obvious), and dithering doesn't strictly depend on the physical pixels on the screen - it can be done in software. I expect updates can improve this.
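For what it's worth, dithering down to a 16-bit (RGB565) framebuffer really is a plain software transform. A minimal sketch using classic ordered (Bayer) dithering - illustrative only, not anything Android actually ships:

   import numpy as np

   # 4x4 Bayer threshold matrix, normalized to [0, 1).
   BAYER4 = np.array([[ 0,  8,  2, 10],
                      [12,  4, 14,  6],
                      [ 3, 11,  1,  9],
                      [15,  7, 13,  5]]) / 16.0

   def dither_to_rgb565(img):
       """img: uint8 array of shape (H, W, 3); returns the image quantized to
       5/6/5 bits per channel with ordered dithering applied."""
       h, w, _ = img.shape
       t = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w, None]
       levels = np.array([31, 63, 31])              # 2^5-1, 2^6-1, 2^5-1
       q = np.clip(np.floor(img / 255.0 * levels + t), 0, levels)
       return (q / levels * 255).astype(np.uint8)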

The Cooliris Gallery browser the N1 ships with certainly does scale down photos before drawing them on screen; the web browser behaves similarly. This is annoying, especially if you want to zoom into a section of the photo, but again it's not related to the actual display. It looks more like an optimization designed to fit the image into a texture for hardware acceleration. Other photo gallery apps don't necessarily have the same implementation (e.g. B&B Gallery).

One actual aspect of the OLED display that does bother me somewhat is the way it "fizzes". When using the screen in the dark with the brightness at the lowest setting, all lit areas of the screen can be seen to be constantly flickering and alternating between different brightness levels at very high speed, a bit like the head on a freshly poured Coke, constantly dissipating and being renewed. But this is a very minor quibble.

As to contrast ratio in bright ambient light, I haven't found it to be a practical problem. I don't use my phone much outside during the day - even if I'm using the map, I'm more likely to be doing that in the evening - but even then, it seldom gets sunny enough in the UK to make the screen hard to read at full brightness.


As has been discussed to death, the main thing is that Apple’s font rendering doesn’t go as far in aligning parts of characters to pixel boundaries. The result is that people used to reading text on Windows (etc.) will think Mac text looks fuzzy, while Mac users will think text on a Windows machine (or Android, presumably) looks unevenly spaced and sometimes wrongly shaped. It’s somewhat a matter of personal preference. I’m kinda hoping we can all use 250 DPI screens and stop worrying about it at some point.


The Nexus One (and Droid) are high enough resolution that I wonder if they benefit from any subpixel tricks. I definitely don't see any major kerning problems on my Nexus One.

With hinting on (including the patented/smart auto-hinting) in X (not OS X!), spacing issues are pretty obvious, but it at least keeps fonts reasonably crisp on my normal desktop LCDs, which have a much lower DPI than the Nexus One/Droid (and the iPhone/iPad too).


The engineering/marketing part of this article is neat (seriously, it’s clear and direct and has useful pictures; that part is highly recommended). The part about vision and color is pretty sloppy, though.

This sentence:

> “The reason this trick works is that rods are 20x as dense as cones in the retina, meaning the eye has approximately sqrt(20)=4.5x the spatial resolution in detecting intensity or luminance transitions compared to detecting color or chrominance transitions.”

... is simply false. The reason that we see lightness differences with better spatial resolution has (almost) nothing to do with numbers of rods vs. cones (indeed rods are almost absent from the fovea – the center of the retina – which we use for fine detail perception, basically the opposite of what was claimed), but instead happens because the signals from the three types of cone cells are processed into a single (monochromatic) lightness response, which we use for perception of fine detail. The signals from the cone cells are also processed into two color difference signals, but over bigger patches of retina: sort of a neurological analog to the “chroma subsampling” that happens in JPEG compression. In other words, JPEG compression does work because we see fine details monochromatically, but instead of “leveraging” this mostly irrelevant cone/rod difference as claimed, it’s really using the same basic approach (averaging/tossing out color data) that the eye itself uses.
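To make the chroma-subsampling analogy concrete, here’s a toy version of the split: one full-resolution lightness-like channel, plus two color-difference channels averaged over 2x2 blocks (roughly JPEG’s 4:2:0 mode; the BT.601 constants are just the conventional video ones, nothing retina-specific):

   import numpy as np

   def split_and_subsample(rgb):
       """rgb: float array of shape (H, W, 3) with values in [0, 1]."""
       r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
       # One combined lightness-like signal from all three channels...
       y = 0.299 * r + 0.587 * g + 0.114 * b
       # ...and two color-difference signals.
       cb = (b - y) * 0.564
       cr = (r - y) * 0.713
       # Keep y at full resolution; pool the chroma over 2x2 blocks, loosely
       # analogous to the retina pooling color over larger patches.
       h, w = y.shape[0] // 2 * 2, y.shape[1] // 2 * 2
       pool = lambda c: c[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
       return y, pool(cb), pool(cr)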

Actually, the next few sentences are somewhere between misleading and wrong, too.

> “The eye is more sensitive to quantization levels of green light than to levels of red or blue light, which is probably related to the fact that the sun emits more power in the green region of the visible spectrum than in red or blue. (The eye's spatial resolution in each of the three primary colors is approximately uniform.)”

The eye is more sensitive to green light because we use three cone receptors with the biggest overlap in sensitivity in the green part of the spectrum, so that when you add up their responses to a signal (in the combined lightness signal I mentioned), green wavelengths have more of an impact.
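The same asymmetry shows up in the standard luma weights used in video and display engineering (Rec. 709 here); they’re a crude engineering stand-in for that combined cone response, not a model of the retina:

   # Rec. 709 luma weights: green contributes roughly ten times more than
   # blue to the combined lightness signal.
   def luma709(r, g, b):
       return 0.2126 * r + 0.7152 * g + 0.0722 * b

   print(luma709(0, 1, 0))   # 0.7152 for full green
   print(luma709(0, 0, 1))   # 0.0722 for full blue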

It certainly has something to do with evolution and the spectra of reflected sunlight, but various animals have different spectral sensitivities, so the causality is complex. The sun emits light over a huge range of wavelengths (far beyond the visible spectrum in both directions), and exactly which part makes it to our eyes depends on time of day, weather conditions, altitude, etc.

The three “primary” colors are somewhere between extremely simplified model, useful engineering approach, and myth. We also have somewhat different spatial sensitivities to the three primaries in a computer display, as far as I know.

> “Also the blue and red subpixels are twice the size of green, making them twice as likely to be illuminated at the perceptual edge of a hard intensity transition than green.”

Let’s see here. 1/3 of the total area is green, 1/3 is red, and 1/3 is blue. How is it that edges are twice as likely to fall on red or blue as on green, again? (In other words, there’s some difference because of the different subpixel sizes, but whatever is meant here is poorly explained, and important assumptions about the edges in question are left unstated.)

Vision is complicated, and there’s a lot of heavy math going on in the neurons in your retina, before signals ever get transmitted through the optic nerve. It depends on adaptation to current light level and color, on what size blobs you’re looking at, on the surrounding colors, and on inferences about objects and how they’re lit. These complexities lead to all those famous optical illusions. So it’s worth using simplified models, to cut out complexity incidental to whatever you’re trying to explain. But that doesn’t give license to just make up stuff.

For anyone interested, this is a good place to start:

http://www.handprint.com/HP/WCL/color1.html

------

One last thing. I’m pretty suspicious of this:

> “Overall, it is hard to see any really good advantages to the PenTile layout.”

Presumably the engineers who designed the thing had some compelling reasons. (It’s conceivable that it was driven by a desire to make a worse display that could be better hyped by marketing, but that seems extremely unlikely to me; I’m suspicious.) I’d like to hear what those are instead of just a hand-wavey “I don’t know what they are so I’m going to imply there are none”.


"Presumably the engineers who designed the thing had some compelling reasons. (It’s conceivable that it was driven by a desire to make a worse display that could be better hyped by marketing, but that seems extremely unlikely to me; I’m suspicious.) I’d like to hear what those are instead of just a hand-wavey “I don’t know what they are so I’m going to imply there are none”."

Probably it has to do with manufacturing requirements. I think the author of the article erroneously assumes that the different colour sub-pixels should be more or less the same to manufacture. For example, the article says:

"And the layout can't logically be argued to be a limitation in manufacturing capability, because it is clear that the screen manufacturing process is able to create full-resolution green subpixels that are one-third the size of a physical pixel."

This is actually true for LCDs, because there the colour of a subpixel is determined merely by a filter placed in front of it. However, it is not true for OLEDs. In an OLED display, each subpixel is a separate light-emitting diode of the respective colour: the light is not filtered, it is generated in a specific colour to begin with. And the light an LED generates depends on its chemical composition, so each colour of LED is a physically different device.

In fact, if you remember some recent history, for most of the history of LEDs there were no blue ones, only green and red. Blue LEDs were a relatively recent breakthrough. (This is why you only see blue and white LEDs in relatively recent devices.)

So the manufacturing requirements for the different colours of LED are different, and blue LEDs are the most difficult. It is not at all surprising to learn that the manufacturing process allows green LEDs to be half the width of the blue ones.

So the PenTile layout was an attempt to minimize pixel size given the above-mentioned limitation of blue LEDs.


OLEDs also degrade over time, and blue is still the color with the shortest lifespan. Making the blue subpixel larger means it can be driven less hard for the same brightness, so it gives an equivalent output for longer.


Yes, the rods are not used at all for regular daytime vision, only for nighttime vision and in dimly-lit environments ("scotopic" vision). The rods are much more sensitive than the cones, but can't detect color. That's why you can't see colors at night.

> "We also have somewhat different spatial sensitivities to the three primaries in a computer display, as far as I know."

This is because the cornea and the lens are not perfect optical systems, so they blur the input. This blurring is much stronger at the blue end of the spectrum than at the red end. The retina has evolved to match this, so it has very few blue cones and many more green and red/yellow ones.

The result is that we have very low spatial sensitivity to blue. This page:

http://www.gamesx.com/misctech/visual.htm

has a demonstration.


One thing that I couldn't find mentioned is exactly what the pixel addressability is on the display. In figure 2, the layout of the first two rows is (using RR and BB for the double-wide subpixels):

   RR G BB G RR G BB G RR G BB G RR G BB G RR G BB G
   BB G RR G BB G RR G BB G RR G BB G RR G BB G RR G
Is the upper left pixel in this (RR G BB), which the article seems to assert, or (RR G BB G)? If I set the screen entirely to black and turn on pixel (0,0), which subpixels will light up (I don't find that Figure 1 shows this well enough)? If (0,0) lights up (RR G BB), is the second pixel at (1,0) then (BB G RR), overlapping the first? If it doesn't overlap, then the second G from the left in the first row is not part of any pixel.

If one pixel is composed of two green half-sized subpixels and one each of red and blue full-sized subpixels, then the ratio of screen area to colored subpixels is actually the same for all pixels; it's just that the green subpixel is spread out more and the order of the subpixels is different in each row -- the latter would undermine traditional, plain subpixel smoothing that assumes a consistent, regular layout on all rows.

Notice that if you count subpixel groupings in figure 2, it's exactly 5 pixels across if you use (xx G yy G) subpixel layout (where xx and yy are either RR or BB depending on which row the pixel is on).

I find this paragraph confusing considering the layout given:

> "You can see from the photo above that each logical pixel on the Nexus One screen contains one green subpixel and either one double-width blue subpixel or one double-width red subpixel. So the red and blue color channels on the Nexus One display each have half as many subpixels (480×800/2) as the green channel. Basically, half the red and half the blue spatial information in the 2D image being sent to the display is simply thrown away or spread to the nearest matching subpixel by a convolution or intensity-dispersion process."

And if (xx G yy G) defines one pixel, then this paragraph's math is just plain wrong:

> "One way to count raw pixels, ignoring the effect of all the signal processing on the PenTile display, is to calculate total effective RGB triplets on the screen. You can do this by taking a weighted sum of the total number of red, blue and green subpixels, and then converting back to an effective screen size. The total number of effective physical pixels, counted using a weighted sum, is (480×800/2)×2/3 + (480×800)×1/3 = 256,000, exactly two thirds the claimed total number of pixels (480×800 = 384,000). This is equivalent to a screen with edge dimensions sqrt(256/384) = 82% of the claimed length, or (480×82%)×(800×82%) = 392×653 ≈ 256k."

... as the ratio of the areas of each of the subpixels is the same, at least in figure 2.
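For reference, written out in full, the quoted paragraph's formula computes the following (just a transcription of the article's arithmetic, not an endorsement of its weighting):

   # Transcription of the article's weighted subpixel count (its model, not mine).
   claimed_pixels  = 480 * 800                   # 384,000 logical pixels
   red_subpixels   = claimed_pixels // 2         # 192,000 (same count for blue)
   green_subpixels = claimed_pixels              # 384,000

   effective = (red_subpixels * 2) // 3 + green_subpixels // 3   # 128,000 + 128,000
   scale = (effective / claimed_pixels) ** 0.5                   # sqrt(2/3), about 0.82
   print(effective)                               # 256000
   print(round(480 * scale), round(800 * scale))  # 392 653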

If green has more prominence in the way the human eye picks up color, then it actually makes sense to spread the green over separate (or differently shaped) areas, giving greater prominence to the harder-to-pick-up colors of red and blue, which are more concentrated and thus brighter/richer. It seems from figure 1 that the subpixel smoothing takes into account that the red and blue subpixels are ordered differently. It's also possible that the subpixel smoothing recognizes that the alternating ordering of red and blue can be exploited along the vertical axis to achieve the following layout (this is two pixels side by side):

   RR G BB G
   BB G RR G
which is also the same total subpixel area per pixel, but the whole pixels are more square (1.5 subpixels wide by 2 pixels high vs 3 subpixels (6 half-sizes) wide by 1 pixel high).

I personally don't see "waviness" in the lines on the screen, but I do see some stippling at the screen edges (easiest to see on an almost fully white lit screen, like the background (not the white border) of the HN content area). This would seem to be caused by the ordering change of R and B subpixels, not of the half-wide G subpixels.


> Is the upper left pixel in this (RR G BB), which the article seems to assert, or (RR G BB G) ?

Take a look at the photo vs. the debugger snapshot of the "3G" text. The uprights in the G are 3 pixels wide, and display as one of either:

   BB G RR G BB G

or

   RR G BB G RR G

Thus the point of the article: a "pixel" is either "BB G" or "RR G". There are 800x480 of these. But neither of them is "a pixel" (RGB) as we're used to talking about it; it's less than that. There's one (small) green and one (bigger) red OR blue subpixel in each pixel.

I had a Nokia N810, with an 800x480 LCD of approximately this size. It was gorgeous. I was expecting something similar from my N1, but got this instead. The screen is easily the most disappointing part of the N1, in my opinion. The enhanced contrast of the AMOLED doesn't come close to making up for the jagginess of the uneven pixels. Vertical, and especially diagonal, lines just look horrible.


There have been a few sites that have brought this up.

On a totally unrelated note, does anyone know if VoIP is working better on Nexus Ones these days?




