Hacker News

As a movie compositor, I'm surprised to see so much compositing being done in sRGB space ("gamma space"). I'm used to doing all operations in linear space and only converting to sRGB for display purposes. Here, they seem to employ tricks to work around the issues of non-linear compositing, like making an exclusion mask of the brighter pixels so vignetting doesn't look bad.

What kinds of tradeoffs go into deciding to composite in sRGB instead of linear in a game engine? I assume there are good reasons to do so.




Memory bandwidth.

Typically linear uses 32-bit floats per channel while sRGB uses 8-bit integers, so going linear needs 4x more data transfer.

16-bit floats exist, too, but since they became popular with AI, NVIDIA has limited their performance on gaming cards.
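As a back-of-the-envelope illustration of the bandwidth argument (my numbers, picking a 4K RGBA target purely as an example):

```python
# Rough size of one full-resolution RGBA surface at different precisions.
# 3840x2160 is an illustrative choice, not from the comment above.
WIDTH, HEIGHT, CHANNELS = 3840, 2160, 4

def framebuffer_mib(bytes_per_channel):
    """Size of one WIDTH x HEIGHT RGBA surface in MiB."""
    return WIDTH * HEIGHT * CHANNELS * bytes_per_channel / 2**20

print(framebuffer_mib(1))  # 8-bit sRGB:    ~31.6 MiB
print(framebuffer_mib(2))  # 16-bit float:  ~63.3 MiB
print(framebuffer_mib(4))  # 32-bit float: ~126.6 MiB
```

Every read or write of the framebuffer during compositing pays that multiplier, which is why the precision of intermediate targets matters so much.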


Ah, dear old Nvidia, never fails to extort you.


The other way to look at that is those being affordable for actual gaming and not perpetually sold out.


The top-of-the-line gaming cards went from $899 for a 1080 Ti to $1,299 for a 2080 Ti.

I hope AMD / RADEON and Intel put some competition in that market.


Funny you mention it, there are new rumors on Big Navi just recently:

https://www.tweaktown.com/news/70848/amd-big-navi-specs-5120...


NVidia's grasp of the market is stronger than people realize. AMD video cards are comparable on a technical level, but they struggle immensely on the software side. NVidia has built a competitive advantage over those many years of total domination, resulting in better support for most games, more advanced features, more stability, etc.


> those being affordable for actual gaming and not perpetually sold out

This is less about availability and more about market segmentation. Making sure the cheaper product line doesn't eat into a market that is served by a much more expensive product line. This happens everywhere in any industry. And you can see it best in software where a simple license file will make the difference between a fully featured suite and a severely cut down version.

Availability is controlled via another mechanism: setting the price according to market demand. If you're selling so many cards that you miss a percentage of potential sales then the price will increase. And prices did increase.


There was also the problem of that SGI patent that prevented using floats for framebuffers... floats make a lot more sense than this gamma nonsense.


AFAIK in HDR photography they often use linear uint16. I understand that it might not be enough for modeling strong highlights, but for general-purpose compositing it should be OK, perhaps with an additional layer providing ~8 extra bits of highlight resolution.


32-bit float per color channel would be 128 bits per pixel, which is really expensive. You rarely have the bandwidth to afford that.


What do you mean? Once the values are already in the GPU, there is no need to use gamma anymore.

Does the game described in the article transfer back and forth from RAM in the middle of compositing for some reason? If not, why is it compositing in sRGB space then?


In linear space (or should I say encoding), operations like +, -, multiplication by a scalar, averaging, a * 0.1 + b * 0.9, etc. have a consistent, reasonable meaning with direct physical relevance. It is simple and preferable to calculate with linear values, but they don't fit well into 8 bits the way sRGB does.

In sRGB, the usual operations give weird results instead. Imagine the stored values encode linear light with a gamma of roughly 2 (a big simplification of the real sRGB curve). When doing addition (e.g. light a + light b) on the encoded values, in terms of linear light you effectively get

  c = a + b        # linear result
  c = (√a + √b)²   # sRGB result (very simplified)

Instead of an average (downscaling 2 pixels), you get

  c = a/2 + b/2          # linear result
  c = ((√a + √b)/2)²     # sRGB result (very simplified)

The point is, if you operate on sRGB values with the usual +, -, *, ... operations, instead of adding/subtracting/averaging the light/colors, you end up applying functions that are kind of similar, but not very close.

We are kind of used to it when dealing with computer graphics, but this is why antialiased fonts look too thin (or too thick, depending on whether the text is light-on-dark or dark-on-light), why scaled images look darker (or lighter, and slightly off-color), and why graphics operations in sRGB generally give disappointing results.
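The mismatch is easy to demonstrate. A minimal Python sketch (mine, using the same idealized gamma-2 simplification as the formulas above rather than the exact sRGB curve), averaging a black and a white pixel both ways:

```python
# Average two encoded pixels: correctly (decode -> average -> encode)
# versus naively in gamma space. Gamma 2 stands in for the real sRGB curve.
def decode(s):   # gamma-encoded [0,1] -> linear light
    return s ** 2

def encode(x):   # linear light -> gamma-encoded [0,1]
    return x ** 0.5

black, white = 0.0, 1.0  # encoded values

correct = encode((decode(black) + decode(white)) / 2)  # ~0.707
naive   = (black + white) / 2                          # 0.5 (too dark)
print(correct, naive)
```

The naive average comes out noticeably darker than the physically correct one, which is exactly the "scaled images look darker" effect.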


Thanks a lot, I know about both spaces. I thought virtually all modern games only work in linear mode within the GPU, and avoid any compositing in non-linear. That was my first surprise when reading the first comment.

Then someone else replied explaining that such compositing is done due to bandwidth, but I do not understand: after textures are uploaded to VRAM, PCIe bandwidth should be irrelevant. If they are talking about VRAM bandwidth, I would understand for a lot of big textures... but for doing a vignette? Is it better to go back and forth to sRGB in each compositing step just to keep the intermediate results smaller, due to VRAM bandwidth?


sRGB can be stored in 8 bits per channel; linear cannot. ALU is cheaper than memory bandwidth, and the conversion is done in hardware on modern GPUs. Typically, the conversion from linear to sRGB happens as part of the tonemapping pass, which converts linear quantities, usually from a 16-bit HDR framebuffer target, down to an 8-bit one.

Mixed in with this is MSAA, where the resolve is typically done in non-linear space (a choice to artificially boost "smoothness" beyond what happens IRL) [0] https://mynameismjp.wordpress.com/2012/10/24/msaa-overview/
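For reference, the conversion that GPU hardware applies on sRGB reads/writes follows the piecewise sRGB transfer function (IEC 61966-2-1), not a pure power curve. A quick Python version of the standard encode/decode pair:

```python
def srgb_decode(s):
    """sRGB-encoded [0,1] -> linear light (IEC 61966-2-1)."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def srgb_encode(x):
    """Linear light [0,1] -> sRGB-encoded."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

# Round trip for a mid-gray encoded value
print(srgb_encode(srgb_decode(0.5)))  # ~0.5
```

The linear segment near zero is there to avoid an infinite derivative at black; the gamma-2 (or 2.2) shorthand people use in discussions is an approximation of the 2.4-exponent branch.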


There is also the packed float format R11G11B10 (presumably expanded to 32-bit for calculation?).
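As a sketch of what that packed format holds: R11G11B10_FLOAT stores unsigned floats with a 5-bit exponent and a 6-bit mantissa for the R and G channels (5-bit mantissa for B), and no sign bit. A hypothetical decoder for one 11-bit channel (ignoring the Inf/NaN encodings), written to show the layout rather than to match any driver's implementation:

```python
def decode_uf11(bits):
    """Decode an 11-bit unsigned float (5-bit exponent, 6-bit mantissa),
    as used by the R and G channels of R11G11B10_FLOAT. Ignores Inf/NaN."""
    exp = (bits >> 6) & 0x1F   # biased exponent, bias 15
    man = bits & 0x3F          # 6-bit mantissa
    if exp == 0:               # denormal range
        return man / 64 * 2.0 ** -14
    return (1 + man / 64) * 2.0 ** (exp - 15)

print(decode_uf11(0b01111_000000))  # exponent 15, mantissa 0 -> 1.0
```

With no sign bit, the format cannot store negative values, which is fine for light intensities and is part of how it fits an HDR-ish range into 32 bits per pixel.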



