Hacker News

If you’re wondering what it’s talking about when it says the triads of red/green/blue rectangles on a CRT display don’t correspond to pixels, here’s a Technology Connections video that explains it:

https://youtu.be/Ea6tw-gulnQ

But on an LCD, the display really is made up of a bunch of solid-colored rectangle-ish shapes, and if the LCD uses the standard RGB stripe pattern, each red/green/blue triplet does correspond to one pixel. So if you zoom in, “a pixel is a little square” is very close to the truth. In other words, the article shows its age…




Except that even on an LCD the boundaries between individual pixels (RGB triplets) are imaginary. An RGB triplet forming an individual pixel is just a useful abstraction: physically, when looking at the screen, an RGB triplet is no more special than a BRG triplet on the same display (ignoring the edges of the display, and that some displays do have a slightly larger gap between subpixels belonging to different pixels).


And this is not just a theoretical concern. Exploiting that fact has practical applications for sharpness, aliasing, etc. Subpixel smoothing is based on that idea, and I think it's under-exploited in graphics in general because shading pipelines are, as far as I know, stuck addressing full pixels.
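The idea can be sketched in a few lines, assuming a left-to-right RGB stripe layout (the function names and the per-subpixel offsets are illustrative, not any real API):

```python
# Toy sketch of subpixel coverage, assuming an RGB-stripe LCD.
# Instead of one coverage sample per pixel, take one per subpixel,
# effectively tripling horizontal resolution at edges (the idea
# behind ClearType-style subpixel text rendering).

def pixel_coverage(edge_x, px):
    """Fraction of pixel [px, px+1) left of a vertical edge at edge_x."""
    return min(1.0, max(0.0, edge_x - px))

def subpixel_coverage(edge_x, px):
    """Per-channel (R, G, B) coverage, sampling each third of the pixel."""
    cov = []
    for i in range(3):
        lo = px + i / 3  # left edge of this subpixel, in pixel units
        cov.append(min(1.0, max(0.0, (edge_x - lo) * 3)))
    return tuple(cov)

# An edge at x = 1.5 half-covers pixel 1:
print(pixel_coverage(1.5, 1))     # 0.5 -> one uniform grey pixel
print(subpixel_coverage(1.5, 1))  # ~(1.0, 0.5, 0.0): R on, G half, B off
```

Feeding those per-channel coverages into the framebuffer is what makes the edge look sharper than the pixel grid alone allows; the cost is color fringing if the assumed subpixel order is wrong.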



Except in games, where it seems common to run at resolutions other than the display's native one, which means you have a choice between some pixels being a different shape than others (nearest-neighbor) or even overlapping (interpolated).

And when you dig into LCD displays, you'll discover that treating pixels as uniform squares can still get you in trouble, because that blue vertical line turning into a red one actually shifts horizontally.
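A back-of-the-envelope way to see that shift, assuming left-to-right R, G, B stripe order (the helper and the sixth-of-a-pixel offsets are made up for illustration):

```python
# Toy sketch: a pure-red 1-px vertical line lights only the leftmost
# stripe of its pixel, a pure-blue one only the rightmost, so the lit
# area physically moves even though the "pixel column" is the same.

def lit_center(pixel_x, channel):
    """Horizontal center of the lit stripe, in pixel units (assumed RGB order)."""
    offset = {"R": 1 / 6, "G": 3 / 6, "B": 5 / 6}[channel]
    return pixel_x + offset

shift = lit_center(10, "B") - lit_center(10, "R")
print(shift)  # ~0.667: blue sits about 2/3 of a pixel right of red
```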

And if you look even deeper, you'll find specialized displays (like on cameras) which don't use a square grid at all.

Oh, and that only covers the display side. Cameras also have pixels, but they differ from little uniform squares in new, exciting ways. Typically a Bayer mosaic.
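A tiny sketch of what "typically Bayer" means, assuming the common RGGB tiling (the function is illustrative):

```python
# Toy sketch: a Bayer sensor measures only one color per site, tiled
# in 2x2 RGGB blocks; full-color pixels only exist after demosaicing.

def bayer_channel(y, x):
    """Channel an (assumed) RGGB sensor measures at site (y, x)."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

tile = [[bayer_channel(y, x) for x in range(2)] for y in range(2)]
print(tile)  # [['R', 'G'], ['G', 'B']]
```

Green gets twice as many sites as red or blue, roughly matching the eye's greater luminance sensitivity to green.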

To make matters worse, designers decided to hijack "pixel" to mean something else in the context of scaling, but don't be fooled. Those measures are not pixels.


>Except in games, where it seems common to run games at resolutions other than native

Not just games. Same for desktops and apps, OS-wide with a compositor and whatever scaling factor ("Retina") etc.


Is there a true 'native' resolution for 3D games? Seems the display resolution is arbitrary since the content is projected and filtered regardless.

I'm surprised you consider Retina 'non-native', since it originally was intended to exactly double the resolution so old applications could still render pixel-perfect without changes.


>since it originally was intended to exactly double the resolution so old applications could still render pixel-perfect without changes

Originally, because it just did pixel-doubling (or resolution-halving if you prefer).

Later (since three or four OS releases ago), macOS started rendering at higher resolutions and downscaling for non-pixel-doubled factors too. This non-pixel-doubled, slightly-higher-than-native/2 "Retina" mode has even been the default on Mac laptops for at least the last two or three years.
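The arithmetic of such a scaled mode can be sketched like this; the resolutions below are illustrative guesses in the spirit of a recent 14-inch MacBook Pro panel, not documented values:

```python
# Toy sketch: in a non-integer "Retina" scaled mode, apps draw at 2x
# the chosen logical size, and the compositor downscales that backing
# buffer to the panel's native resolution.

panel = (3024, 1964)       # assumed native panel resolution
looks_like = (1800, 1169)  # assumed "looks like" logical resolution

backing = (looks_like[0] * 2, looks_like[1] * 2)  # what apps render into
downscale = panel[0] / backing[0]                 # compositor's scale factor

print(backing)              # (3600, 2338): bigger than the panel itself
print(round(downscale, 3))  # 0.84: each rendered pixel covers <1 device pixel
```

So in these modes there is no one-to-one mapping from rendered pixels to hardware pixels at all.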


IIRC some Retina displays use fractional scaling like 2.1. I can't find any confirmation though.


And if you go all the way down the rabbit hole, oh shit the rods and cones in my eyes have different spacing!


Well, that's part of the transform on the receiving end. Up to this point the discussion has been about the signal from the emitting surface.


Yes, but eventually you will run into weird artifacts or limitations which, no matter how hard you debug, you can never seem to resolve. It takes profound insight to notice that the error is on the receiving end.


You can even take it further.

Did my dad ever even love me?


Pixels always being perfect squares is of course not true, but the square model is still a lot closer than the point-samples-on-a-grid model - that is, each pixel models the average color over an area. Cameras work that way because sampling light at a point would get you no signal (if you could even sample a point, which you can't), and displays work that way because you can't physically emit light from a single point. In most cases that area is squarish, or at least as squarish as the technology allows (e.g. subpixels and the Bayer pattern exist because you can't have all the colors cover the same area, at least not without creating bigger issues).

So yes, if you know the details of your display or sensor, then you can use a better model to eke out more signal. But if you don't, the little-squares model is close enough.
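The gap between the two models is easy to demonstrate with a toy 1-D example (pure illustration, not any real imaging pipeline):

```python
# Toy sketch: downsample a 1-px black/white stripe pattern by 4x.
# Point sampling ("pixel = point") aliases; box averaging
# ("pixel = little square") gives the stable grey you'd actually see.

src = [i % 2 for i in range(16)]  # alternating 0/1 stripes

point = [src[4 * i] for i in range(4)]                     # one sample each
boxed = [sum(src[4 * i:4 * i + 4]) / 4 for i in range(4)]  # average the area

print(point)  # [0, 0, 0, 0] -- the white stripes vanished entirely
print(boxed)  # [0.5, 0.5, 0.5, 0.5] -- uniform mid-grey
```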


I am keenly interested in everything you wrote here. Any links or even search terms for further reading?



I understand his point from a computer graphics perspective, even if that didn't stand up to the test of time. But why is it written with so much bluster and claims of generality? Pixels are also a data modeling concept (without a graphical representation per se) and his bold proclamations on pixels in general just don't hold water.


Well, "a pixel is three rectangles in a trench coat" would be a closer approximation.

And there are other layouts than 3 rectangles:

https://crast.net/21193/the-first-qd-oled-screens-hide-a-pec...


Or predicted we would now be using OLEDs



