In linear space (or should I say encoding), operations like +, -, multiplication by a number, averaging, or blends like a * 0.1 + b * 0.9 have a consistent, reasonable meaning with direct physical relevance. It is simple and preferable to calculate with linear values, but they don't fit into 8 bits as well as sRGB values do.

In sRGB, the usual operations give weird results instead. Adding two lights (light a + light b), for example, looks like

  c = a + b          # linear result
  c = sqrt(a² + b²)  # sRGB result (very simplified)
Instead of an average (downscaling 2 pixels into 1), you get

  c = a/2 + b/2              # linear result
  c = sqrt((a² + b²)/2)      # sRGB result (very simplified)
These use a gamma-2 approximation (real sRGB is closer to gamma 2.2 with a small linear toe), but the point is: if you operate on sRGB values with the usual +, -, *, ... operations, then instead of adding/subtracting/averaging the light/colors, you end up applying functions that look kind of similar but are not the same.
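
To make that concrete, here is a minimal Python sketch of the averaging case, using the exact sRGB transfer functions rather than the gamma-2 approximation above (the pixel values 0.2 and 0.9 are just made-up examples):

  def srgb_to_linear(s: float) -> float:
      # decode a stored sRGB value to linear light
      return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

  def linear_to_srgb(c: float) -> float:
      # encode linear light back to a storable sRGB value
      return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

  a, b = 0.2, 0.9   # two pixels as stored in sRGB (arbitrary example values)

  naive   = (a + b) / 2   # averaging the encoded values directly
  correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)  # average the light, re-encode

  print(naive, correct)   # ~0.55 vs ~0.67: the naive average is visibly darker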

We are kind of used to it when dealing with computer graphics, but this is the reason why antialiased fonts look too thin (white on black) or too heavy (black on white), downscaled images come out darker and slightly off-color, and graphics operations done in sRGB generally give disappointing results.
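
For the font case specifically, the numbers are easy to check with the same transfer functions (this is just an illustration, not how any particular rasterizer works):

  def srgb_to_linear(s: float) -> float:
      return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

  def linear_to_srgb(c: float) -> float:
      return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

  coverage = 0.5   # an antialiased edge pixel: 50% white glyph over a black background

  blend_in_srgb   = coverage * 1.0 + (1 - coverage) * 0.0   # stores 0.50
  blend_in_linear = linear_to_srgb(coverage * 1.0)          # stores ~0.735

  print(srgb_to_linear(blend_in_srgb))    # ~0.21: the sRGB-space blend emits only ~21% of white
  print(srgb_to_linear(blend_in_linear))  # 0.50: the linear blend emits the intended 50%

The sRGB-space edge pixel is much darker than true 50% coverage, which is what makes white-on-black glyphs look thin and black-on-white glyphs look heavy.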

Thanks a lot, I know about both spaces. I thought virtually all modern games work only in linear space on the GPU and avoid any compositing in non-linear space; that was my first surprise when reading the first comment.

Then someone else replied explaining that such compositing is done because of bandwidth, but I do not understand: after textures are uploaded to VRAM, PCIe bandwidth should be irrelevant. If they mean VRAM bandwidth, I would understand it for a lot of big textures... but for doing a vignette? Is it really better to go back and forth to sRGB in each compositing step just to keep the intermediate results smaller and save VRAM bandwidth?
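
For a sense of scale, some rough back-of-the-envelope numbers (assumptions, not measurements) for a single full-screen pass at 4K60 that reads and writes every pixel once:

  width, height, fps = 3840, 2160, 60   # assumed resolution and frame rate
  pixels = width * height

  rgba8_bytes   = pixels * 4   # 8-bit (sRGB) surface:          ~33 MB
  rgba16f_bytes = pixels * 8   # 16-bit float (linear) surface: ~66 MB

  # one read + one write per full-screen pass, per second
  gbps_8  = 2 * rgba8_bytes   * fps / 1e9   # ~4 GB/s
  gbps_16 = 2 * rgba16f_bytes * fps / 1e9   # ~8 GB/s

  print(gbps_8, gbps_16)

Multiply that by however many full-screen passes a frame does, and halving the per-pass traffic starts to matter on bandwidth-limited hardware.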


sRGB can be stored in 8 bits per channel; linear can not. ALU is cheaper than memory bandwidth, and the conversion is done in hardware on modern GPUs. Typically, the conversion from linear to sRGB happens as part of tonemapping, which maps linear quantities, usually from a 16-bit HDR framebuffer target, down to an 8-bit one.
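
As a rough per-channel sketch of that last step (Reinhard is used here only as a stand-in tonemapping operator, not what any particular engine does):

  def linear_to_srgb(c: float) -> float:
      return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

  def tonemap_and_encode(hdr: float) -> int:
      ldr = hdr / (1.0 + hdr)                   # Reinhard: squeeze [0, inf) into [0, 1)
      return round(255 * linear_to_srgb(ldr))   # sRGB encode and quantize to 8 bits

  print(tonemap_and_encode(0.18), tonemap_and_encode(8.0))   # e.g. 0.18 -> 109, 8.0 -> 242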

Mixed in with this is MSAA, where the resolve is typically done in non-linear space (a deliberate choice to artificially boost "smoothness"; the harshness it hides is something that does happen IRL) [0] https://mynameismjp.wordpress.com/2012/10/24/msaa-overview/
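
A toy illustration of that resolve-order point (made-up sample values, Reinhard again as a stand-in operator): four MSAA samples for one edge pixel, one of which hits a very bright HDR light.

  def linear_to_srgb(c: float) -> float:
      return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

  def tonemap_and_encode(hdr: float) -> int:
      ldr = hdr / (1.0 + hdr)                   # Reinhard stand-in
      return round(255 * linear_to_srgb(ldr))

  samples = [0.05, 0.05, 0.05, 50.0]   # made-up linear HDR samples for one pixel

  # resolve (average) in linear HDR first, then tonemap: the bright sample dominates
  linear_resolve = tonemap_and_encode(sum(samples) / len(samples))

  # tonemap/encode each sample first, then average the already-compressed values
  nonlinear_resolve = round(sum(tonemap_and_encode(s) for s in samples) / len(samples))

  print(linear_resolve, nonlinear_resolve)   # ~247 vs ~110: a near-white edge pixel vs a much dimmer one

The second path is the tonemap-before-resolve approach the linked article discusses; its smoother edge is not what integrating linear light over the pixel would give.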
