High-Level Rendering Using Render Graphs (ourmachinery.com)
54 points by adamnemecek on Feb 3, 2018 | 9 comments



The trend is clear: Rendering engines are becoming compilers.

At the dawn of real-time rendering everything was both bespoke and fixed-function. You would call a different optimized codepath to draw a floor, a wall, or a ceiling. If the platform was amenable to it you might use self-modifying code to optimize the time/space tradeoff. The datasets were small enough that a simple array was always the default "right" choice until you were running out of memory.

Gradually the demands generalized and became more algorithmic than low-level in nature. DOOM's best-known optimization was using BSP trees for visibility.

Now - while there's still great demand for smarter algorithms - most of the pipeline complexity comes from this kind of generalized dependency analysis: maximizing resource utilization across a broad, complex, configurable pipeline without hand-tuning everything, which is a familiar compiler problem to have.
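
As a rough illustration (hypothetical API and pass names, not from the article), the core of that analysis is just a topological sort over passes that declare what they read and write:

    // Minimal sketch of render-pass dependency analysis: passes declare
    // which resources they read and write, and the "compiler" derives a
    // legal execution order with Kahn's algorithm.
    #include <cstdio>
    #include <map>
    #include <queue>
    #include <string>
    #include <vector>

    struct Pass {
        std::string name;
        std::vector<std::string> reads;   // input resources
        std::vector<std::string> writes;  // output resources
    };

    // A pass depends on whichever pass writes one of its inputs.
    std::vector<const Pass*> Schedule(const std::vector<Pass>& passes) {
        std::map<std::string, const Pass*> writer;
        for (const Pass& p : passes)
            for (const std::string& w : p.writes) writer[w] = &p;

        std::map<const Pass*, std::vector<const Pass*>> dependents;
        std::map<const Pass*, int> in_degree;
        for (const Pass& p : passes) in_degree[&p] = 0;
        for (const Pass& p : passes)
            for (const std::string& r : p.reads)
                if (auto it = writer.find(r); it != writer.end()) {
                    dependents[it->second].push_back(&p);
                    ++in_degree[&p];
                }

        std::queue<const Pass*> ready;
        for (auto& [p, deg] : in_degree)
            if (deg == 0) ready.push(p);

        std::vector<const Pass*> order;
        while (!ready.empty()) {
            const Pass* p = ready.front();
            ready.pop();
            order.push_back(p);
            for (const Pass* d : dependents[p])
                if (--in_degree[d] == 0) ready.push(d);
        }
        return order;  // passes in dependency order
    }

    int main() {
        std::vector<Pass> frame = {
            {"lighting", {"gbuffer"}, {"hdr"}},
            {"geometry", {},          {"gbuffer"}},
            {"tonemap",  {"hdr"},     {"backbuffer"}},
        };
        for (const Pass* p : Schedule(frame))
            std::printf("%s\n", p->name.c_str());
    }

Everything past this point - culling dead passes, aliasing transient resources, inserting barriers - is the same flavor of analysis a compiler backend does over an instruction DAG.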


One of the best talks at Siggraph last year was on Bungie's particle system in Destiny 2 (slides are here: http://advances.realtimerendering.com/s2017/index.html). The talk was an elegant breakdown of how they (1) created a CPU interpreter for a domain-specific language describing particle parameters, (2) ported it to the GPU (even running an interpreter on the GPU for parts of the production game!), and (3) converted it into a compiler for the obvious performance wins.
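
For a flavor of what step (1) might look like - the opcode set below is invented for illustration, not Bungie's actual bytecode - a per-particle expression like size = base + amplitude * sin(age) can be run through a tiny stack machine:

    // Toy stack-machine interpreter for a particle-parameter expression.
    // Each particle evaluates the same bytecode against its own age.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    enum class Op : unsigned char { PushConst, PushAge, Add, Mul, Sin };

    float Eval(const std::vector<Op>& code, const std::vector<float>& consts,
               float particle_age) {
        float stack[16];
        int sp = 0, ci = 0;
        for (Op op : code) {
            switch (op) {
                case Op::PushConst: stack[sp++] = consts[ci++]; break;
                case Op::PushAge:   stack[sp++] = particle_age; break;
                case Op::Add: sp--; stack[sp - 1] += stack[sp]; break;
                case Op::Mul: sp--; stack[sp - 1] *= stack[sp]; break;
                case Op::Sin: stack[sp - 1] = std::sin(stack[sp - 1]); break;
            }
        }
        return stack[0];
    }

    int main() {
        // size = 1.0 + 0.5 * sin(age)
        std::vector<Op> code = {Op::PushConst, Op::PushConst, Op::PushAge,
                                Op::Sin, Op::Mul, Op::Add};
        std::vector<float> consts = {1.0f, 0.5f};
        for (float age : {0.0f, 1.57f, 3.14f})
            std::printf("age %.2f -> size %.3f\n", age, Eval(code, consts, age));
    }

The same inner loop ports naturally to a compute shader with one thread per particle; the compiler step then just emits the arithmetic directly instead of dispatching on opcodes.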

Apparently running bytecode interpreters on the GPU is not uncommon: https://dolphin-emu.org/blog/2017/07/30/ubershaders/


Procedural generation and compilation of shaders have been common for a while in some engines and toolchains, too. Though it's often done offline, before shipping the game.


Long post about rendering.

Not a single pretty image.

What gives?


It's more about generic code structuring and scheduling than some breakthrough in shader or path tracing technology. It's worth a read.


Yeah, I started skimming and at the end got the impression it was a thought piece and the author hadn't even coded anything yet. I also kept wondering why the new terminology instead of "scene graph". There's probably a difference, but I didn't find it compelling enough to read in depth.


A scene graph and a "render graph" (a "frame graph" in Frostbite terminology) are not the same. I would recommend checking out:

http://www.gdcvault.com/play/1024656/Advanced-Graphics-Tech-... and https://www.gdcvault.com/play/1024612/FrameGraph-Extensible-...

for a more in depth view.

I would describe it as such: a scene graph specifies _what_ needs to be drawn, while a render graph specifies _how_ it should be drawn and what the dependencies of each drawing step are. For example, a render graph can reduce the number of image layout transitions in a frame: it looks at what layout each operation expects and figures out where to put the transitions, instead of letting a human place them by hand, which inevitably leads to unnecessary transitions once the scene becomes complex enough.
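
In pseudo-C++, that transition-placement idea might look something like this (hypothetical types and pass names; the layout names mirror Vulkan's, but this isn't real Vulkan code):

    // Walk the passes in submission order, tracking each image's current
    // layout; emit a barrier only when a pass needs a different layout
    // than the image is already in.
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    enum class Layout { Undefined, ColorAttachment, ShaderRead, Present };

    struct Use { std::string image; Layout needed; };
    struct Pass { std::string name; std::vector<Use> uses; };

    void InsertTransitions(const std::vector<Pass>& passes) {
        std::map<std::string, Layout> current;  // defaults to Undefined
        for (const Pass& pass : passes) {
            for (const Use& use : pass.uses) {
                Layout& cur = current[use.image];
                if (cur != use.needed) {  // transition only when required
                    std::printf("barrier: %s %d -> %d (before %s)\n",
                                use.image.c_str(), (int)cur, (int)use.needed,
                                pass.name.c_str());
                    cur = use.needed;
                }
            }
        }
    }

    int main() {
        InsertTransitions({
            {"gbuffer",  {{"albedo", Layout::ColorAttachment}}},
            {"lighting", {{"albedo", Layout::ShaderRead},
                          {"hdr", Layout::ColorAttachment}}},
            {"post",     {{"hdr", Layout::ShaderRead},
                          {"swapchain", Layout::ColorAttachment}}},
            {"present",  {{"swapchain", Layout::Present}}},
        });
    }

Because the graph sees the whole frame up front, it can also merge or drop barriers that a hand-placed approach would conservatively leave in.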

I didn't think this article was bad even if it is maybe less concrete than the slides I linked.


It's not about what you render, but how you render.


Imo a group of Passes should be called a Stage or a Phase, not a Module. The term "module" is too easy to overload.



