A Proposal for a High Resolution Display (2009) (bernd-paysan.de)
60 points by jevinskie on July 14, 2016 | 36 comments



The ideal would be to use hexagonal pixels for camera sensors, displays, and printer dither/halftone patterns, the same way human retinas arrange their cone cells. :-) Cf. https://scholar.google.com/scholar?q=hexagonal+image+process...

Since we have so much variety in screen shapes and resolutions today, most of the time we’re resampling images on the fly to rotate/scale them into their position on screen anyway, so the pixel data representing an image in a file isn’t getting directly sent to the same on-screen grid. Thus for most of what we display on screen, there’s no inherent need to stick to the same grid shape used for storage, and a hexagonal pixel grid would in practice look better for the same number of pixels (hexagonal grids are particularly good for representing curved shapes and reducing moiré artifacts). For that matter, there’s no inherent need to store square grids of pixels in our image formats; we could use hexagonal-pixel files and easily resample them for display on arbitrary display grids. The only thing stopping us is technical fluency with rectangular grids, and cultural/historical inertia.

(Think of the iPhone 6+, where software can’t even address the native screen pixels, and literally everything gets rendered to one size grid and then resampled for a different display grid.)
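
Just to illustrate that none of this is mechanically hard, here's a rough sketch (my own, assuming numpy and scipy are available; the lattice pitch and bilinear interpolation are arbitrary choices) of resampling a conventional square-grid image onto the centers of a hexagonal lattice:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def hex_sample_points(width, height, pitch=1.0):
        # Centers of a hexagonal lattice covering a width x height image:
        # alternate rows shift by half a pitch; rows sit pitch*sqrt(3)/2 apart.
        row_step = pitch * np.sqrt(3.0) / 2.0
        rows, cols = [], []
        y, row = 0.0, 0
        while y < height - 1:
            x = 0.5 * pitch if row % 2 else 0.0
            while x < width - 1:
                rows.append(y)
                cols.append(x)
                x += pitch
            y += row_step
            row += 1
        return np.array(rows), np.array(cols)

    img = np.random.rand(64, 64)                        # stand-in square-grid image
    r, c = hex_sample_points(64, 64)
    hex_values = map_coordinates(img, [r, c], order=1)  # bilinear resample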

Beyond that, people really need to stop using the extremely misleading CIE 1931 (x, y) chromaticity diagram when comparing color gamuts. The CIE 1976 (u', v') UCS diagram is much more uniform, and just as easy to plot (it’s just a simple projective transformation of the (x, y) coordinates). We’ve known the problems with the (x, y) diagram for >50 years now, so there’s really no excuse for its continuing ubiquity.
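
For reference, the (x, y) → (u', v') conversion is just (the standard CIE 1976 formulas):

    def xy_to_uv_prime(x, y):
        # CIE 1976 UCS: u' = 4x / (-2x + 12y + 3), v' = 9y / (same denominator)
        d = -2.0 * x + 12.0 * y + 3.0
        return 4.0 * x / d, 9.0 * y / d

    print(xy_to_uv_prime(0.64, 0.33))  # sRGB red primary -> (~0.451, ~0.523)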

For making diagrams of pixel grids, it would also be better to show less intensely colored RGB pixels, so that the G’ pixel doesn’t look so obviously less intense.


On the iPhone 6+, software can, in fact, address native screen pixels. This is even quite easy. However, that is not the default API behavior; the application must specifically request device-native pixels by setting a certain flag. Typically, general GUI apps would not bother because it's easier that way, games would flip the switch for performance, and more sophisticated apps (photo apps, perhaps) would flip the switch as well.


Here's more info about how the iPhone 6+ scales pixels in various situations if anyone's curious: http://oleb.net/blog/2014/11/iphone-6-plus-screen/


Ah, okay. I guess I misunderstood precisely how it worked. I haven’t actually ever used an iPhone 6+ myself. Thanks for the correction.

Still though, the point remains that most apps use the setting with resampling, and most users don’t seem to mind.


> (Think of the iPhone 6+, where software can’t even address the native screen pixels, and literally everything gets rendered to one size grid and then resampled for a different display grid.)

This isn't really how this works. If everything were being upsampled, you'd see obvious jaggies and upsampling artifacts. The only reason iOS can do this reasonably well is because the actual resolution is exactly 2x or 3x the resolution presented to the software. The actual rendering takes place at native resolution; graphics assets are produced at the native resolution. Upsampled rendering/assets look very obviously bad.

> The only thing stopping us is technical fluency with rectangular grids, and cultural/historical inertia.

These are factors, but there's also not much evidence that resampling from arbitrary virtual to physical resolutions will actually look good.


> The only reason iOS can do this reasonably well is because the actual resolution is exactly 2x or 3x the resolution presented to the software.

On the iPhone 6+, this is not accurate. Most apps are rendered to a 1242 × 2208 pixel resolution (3× the “logical” 414 × 736 resolution) and then downsampled to the screen’s physical 1080 × 1920 resolution afterward.
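
To make the numbers concrete, using only the figures above:

    logical  = (414, 736)             # points presented to apps
    rendered = (414 * 3, 736 * 3)     # (1242, 2208) back-buffer pixels
    physical = (1080, 1920)           # actual panel pixels
    print(physical[0] / rendered[0])  # 1080 / 1242 = 0.8696...

so everything on screen ends up scaled down by roughly 13% in each dimension.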

On OS X, the default behavior for “retina” displays is to render everything precisely to screen pixels at 2x the “logical” size, but users can alternatively select a different virtual display size, whereupon all content will be resampled for display.

> there's also not much evidence that resampling from arbitrary virtual to physical resolutions will actually look good

Every photographic or video image you look at on your computer or phone has been resampled to different pixel grids several times in the course of its production, and is typically resampled one final time when rendered to screen. There is thus lots of evidence that we can relatively seamlessly resample just about everything, and only dedicated pixel peepers will care strongly.
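
That final display-time step is typically a one-liner; a sketch assuming Pillow, with a made-up filename and target size:

    from PIL import Image

    img = Image.open("photo.jpg")                       # stored pixel grid
    on_screen = img.resize((1280, 960), Image.LANCZOS)  # resampled to the display grid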

As for non-photographic content like line art, maps, text, rendered video game output, etc., we could just render these directly to a different grid.


I thought the genius of Apple resizing was that everything is resampled down, not up.


Digital cameras have used LCDs with non-square pixels for years, but the problem with such an arrangement is that text (or any other fine horizontal/vertical line detail) ends up looking blurry or unusually jagged. Photography is (mostly) fine because the content is largely smooth gradients.

The red and blue pixels can be used for sub-pixel anti-aliasing, which can improve perceived resolution and rendering accuracy further.
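
A toy sketch of the core idea on a conventional RGB-stripe panel (my own simplification; real implementations such as ClearType also filter across neighboring subpixels to limit color fringing):

    import numpy as np

    def subpixel_pack(coverage):
        # coverage: (H, 3W) grayscale glyph coverage rendered at 3x width.
        # Each group of three adjacent columns becomes the R, G, B channels
        # of one output pixel, tripling effective horizontal resolution.
        h, w3 = coverage.shape
        assert w3 % 3 == 0
        return coverage.reshape(h, w3 // 3, 3)

    hi_res = np.random.rand(16, 30)  # stand-in for rasterized text coverage
    packed = subpixel_pack(hi_res)   # (16, 10, 3) RGB image for the panel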

Subpixel AA just makes my eyes water and I feel dizzy after a few minutes, probably because there are no real edges to focus on. I'm probably in the minority, but not the only one who prefers pixel-sharp text.


If your antialiasing is making things look fuzzy, then you are either positioning yourself too close to the display, or your display has too low a resolution.

When you’re talking about digital camera LCDs, do you mean the display on the back of the camera? Those are mostly awful, I just ignore them. :-)


It can also be a display gamma issue, which will be worse at some angles on TN displays.

But beyond that, disabling antialiasing and using a pixel font (like Terminus) gives an effectively higher perceived resolution than one would expect from modeling a display as a collection of point samples.


Isn't your first statement tautological? I'm also in the apparent minority to whom font antialiasing (subpixel or not) on low-ppi screens, e.g., 22" at 1920x1080, seems a bit blurry, even when leaning back. Certainly though, there are implementations of different quality: I find ClearType the most acceptable.

As far as I am aware, I have yet to see non-antialiased fonts rendered on a high-ppi screen, but it would be interesting to see to what degree antialiasing is still necessary on such screens.


Luckily there are now displays with ~22″ diagonal and 3840×2160 pixels.

These look much better than the 1920×1080 ones.


Reminds me of my cow-orker complaining yesterday that when he turned his screen 90 degrees (pivot into "portrait" mode), the text started to look visibly less sharp. I guess that's the problem with subpixel AA and subpixels now being wider than tall.


Depending on the OS, I think screen rotation can sometimes cause it to just give up on subpixel-AA and fall back to "fullpixel" greyscale-AA.


Windows 10 in this case.


OS X does that.


Ironically, subpixel AA works better at high resolutions than it does at low resolution. Once you can't see individual pixels anymore the blurriness goes away, and you're simply left with finer positioning.


By rotating the grid 45 degrees, the OP mitigates the problem with fine horizontal / vertical line detail.


Can you explain that? My intuitive sense is that that would make fine orthogonal detail worse.


I think this would mostly be a marginal gain (except for the wider color gamut), especially given that there would be a lot of catching up to do in the manufacturing space, but it is interesting to think about possible improvements. Here are some ideas:

- "Note that red and blue pixels need to be twice as intensive as green" Is there a necessity for pixels to be rectangular? If not, I would go and shop in https://en.m.wikipedia.org/wiki/List_of_convex_uniform_tilin... to look for a way to have fewer, but larger red and blue pixels.

https://en.m.wikipedia.org/wiki/Snub_square_tiling using green triangles looks like a decent candidate to me. https://en.m.wikipedia.org/wiki/Elongated_triangular_tiling also seems reasonable. In practice, the ideal pattern would depend on the relative brightness of the colours in the light source used (see the area arithmetic sketched after these ideas).

(I guess it would be easier to just make the familiar rectangular pattern with 4 stripes RGBG and/or with variable widths of the color bands at a slightly higher resolution than to experiment with these)

- If you are making a display where each pixel directly emits light (as opposed to an LCD, where a pixel can be controlled to let through light from the backlight) and pixels can be made transparent to the colors they do not emit, layer the pixels.
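
Back-of-envelope area arithmetic for the snub square idea above (my own, not from the comment or article): with a shared edge length, each square cell has about 2.3x the area of each triangular cell, which is the right ballpark if red and blue need roughly twice the output of green:

    import math

    s = 1.0                                   # shared edge length
    square   = s ** 2                         # red / blue cells
    triangle = math.sqrt(3.0) / 4.0 * s ** 2  # green cells
    print(square / triangle)                  # 2.309...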


There’s no inherent reason display pixels (or sensor pixels) must be any particular shape.

For an amusing idea w/r/t sensor pixel shapes, check out this paper: http://www.cis.pku.edu.cn/faculty/vision/zlin/Publications/2...


How about Penrose tiling?


A nice idea.

The trouble is our reliance on displays to reproduce frequencies (i.e., crispness) beyond the Nyquist frequency of the display.

So with the proposed Bayer pattern, we can no longer represent the horizontal and vertical lines common in text and widgets with the sharpness we expect (unlike in photographs, where we don't expect such detail).

We should be looking to satisfy Nyquist with a high enough resolution display and an optical low-pass filter between the display and the eye. Then things will all fall into place...
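
A tiny 1-D illustration of the Nyquist point (my own sketch): sampling fine stripes above the Nyquist limit without a low-pass filter turns them into a spurious low-frequency beat, while averaging before sampling tames it:

    import numpy as np

    x = np.arange(4096)
    stripes = (np.sin(2 * np.pi * x / 3.0) > 0).astype(float)  # period-3 detail

    step = 4  # sampling Nyquist is 1/8 cycles/px, below the stripes' 1/3
    aliased = stripes[::step]                          # period-3 detail aliases to period-12
    filtered = stripes.reshape(-1, step).mean(axis=1)  # box low-pass, then sample

    print(aliased[:12])   # spurious slow beat
    print(filtered[:12])  # hovers near the true 1/3 average gray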


This kind of display is poor at representing hard, orthogonal edges, which drive much of our user interfaces, lots of text rendering, and pixel art.


That depends on the real resolution, doesn't it?


Increasing resolution makes it _better_, but a square RGB arrangement at the same resolution will do a better job in the aforementioned contexts.


Isn't that Pentile?


Without the rotation, yeah... PenTile is basically:

green red green blue green red green blue green

What I don't understand about this proposal is why the rotated grid has 4 different colored squares.


The author talks about using two different greens in the 5th paragraph. It creates a wider colour gamut.


The real question is if the second green is physically realizable with cost-effective chemistry. Given that all real-world sources will start with 3-color RGB, it's questionable whether the improvement would be worth the cost. The Sony F828 camera with its RGBE sensor had the same problem in reverse, there was no advantage to having a 4th primary when everything had to be down-converted to the lowest common denominator.


Sony sensor press release: http://www.sony.net/SonyInfo/News/Press_Archive/200307/03-02...

I’d love to see some better multispectral cameras and displays. If you can get the content and hardware working together, you can get substantially better results than current tech. Of course, as you say, there’s a big chicken–egg problem.


You can't, of course, see the additional colors that would be enabled by looking at his picture of the gamut, because your current monitor can't produce them :-)


With rotation it's "Diamond Pentile" (the current popular layout on OLED smartphones). One difference though - pentile doesn't do extended gamut, at least not the way he proposes.


OLPC did the color swizzling back in 2006 http://wiki.laptop.org/go/File:PixelLayoutDiagonal.png


So they throw away 2/3 of the graphics memory?


(2009)



