
I can give a few. In general, the theory behind rendering treats what's on the screen as a discretization of a continuous visual signal; post-processing in particular is largely applied signal processing.

Anti-aliasing (AA) is a very clear example: the lack of it leads to moiré patterns and jaggies. A lot of AA techniques are easier to understand if you see the frame not as a discrete set of pixels but as a continuous signal being sampled, possibly at multiple sub-pixel points per pixel.
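As a minimal sketch of that idea, here is supersampling (SSAA) in Python/NumPy. The scene() function is a hypothetical stand-in for evaluating the continuous image signal at a point, and the 4x4 sub-pixel grid is just an illustrative choice:

    import numpy as np

    def scene(x, y):
        # Hypothetical continuous signal: concentric rings, a classic
        # source of moire when point-sampled once per pixel.
        return 0.5 + 0.5 * np.sin((x * x + y * y) * 0.05)

    def render(width, height, samples_per_axis=4):
        ys, xs = np.mgrid[0:height, 0:width].astype(float)
        image = np.zeros((height, width))
        # Evenly spaced sub-pixel offsets inside each pixel's footprint.
        offsets = (np.arange(samples_per_axis) + 0.5) / samples_per_axis
        for dy in offsets:
            for dx in offsets:
                image += scene(xs + dx, ys + dy)
        # Averaging the samples approximates integrating the signal
        # over each pixel, which is what suppresses the aliasing.
        return image / samples_per_axis ** 2

The point is that each pixel's value becomes an estimate of an integral over a region of the continuous signal rather than a single point sample.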

A lot of other screen effects are essentially filters applied to that signal (Sobel, Gaussian blur, ...), and the signal-processing view helps when modifying and optimizing them. A good example is identifying whether your effect is a separable filter, which can then be split into a horizontal and a vertical pass, as sketched below.
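A rough sketch of the separable-filter trick, again in Python/NumPy (radius and sigma are arbitrary illustration values):

    import numpy as np

    def gaussian_kernel_1d(radius, sigma):
        xs = np.arange(-radius, radius + 1, dtype=float)
        k = np.exp(-(xs * xs) / (2.0 * sigma * sigma))
        return k / k.sum()

    def blur_separable(image, radius=3, sigma=1.5):
        # The 2D Gaussian factors as g(x) * g(y), so one NxN 2D
        # convolution becomes two 1D passes, dropping the per-pixel
        # cost from O(N^2) to O(2N).
        k = gaussian_kernel_1d(radius, sigma)
        tmp = np.apply_along_axis(
            lambda row: np.convolve(row, k, mode='same'), 1, image)  # horizontal
        return np.apply_along_axis(
            lambda col: np.convolve(col, k, mode='same'), 0, tmp)    # vertical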

Seeing the image as a continuous signal/field being sampled is also the theoretical basis for a lot of techniques used in physically-based rendering and for effects like screen-space ambient occlusion.
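For instance, physically-based rendering treats the light arriving at a surface as an integral over the hemisphere and estimates it by sampling. A minimal Monte Carlo sketch, where incoming_radiance is a hypothetical callable standing in for the continuous field being sampled:

    import numpy as np

    def sample_hemisphere_cosine(rng):
        # Cosine-weighted direction in the local frame (z = surface normal).
        u1, u2 = rng.random(2)
        r = np.sqrt(u1)
        phi = 2.0 * np.pi * u2
        return np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])

    def estimate_irradiance(incoming_radiance, n_samples=256, seed=0):
        # Monte Carlo estimate of the integral of L(w) * cos(theta)
        # over the hemisphere. With cosine-weighted sampling the pdf
        # is cos(theta) / pi, so each sample contributes pi * L(w).
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(n_samples):
            total += np.pi * incoming_radiance(sample_hemisphere_cosine(rng))
        return total / n_samples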

Finally, if you want to write your own ray tracer, it really helps to be able to take this view of things once you get past the basics.



