The Art of Rendering (2012) (fxguide.com)
96 points by tellarin on Dec 7, 2013 | hide | past | favorite | 23 comments



Very good article! I remember I became obsessed with 3D rendering in high school 6 years ago and actually wrote a little photon mapper myself (I also remember re-writing the Wikipedia article on photon mapping at one point).

I always had the goal of turning it into something really useful, but once college started that project kind of ground to a halt. Nowadays, if I were to do it again, I'd just do Metropolis light sampling, because I much prefer the unbiased nature of that global illumination solution to photon mapping's approximation (meaning that if you average thousands of results from an MLS render, it will converge on the exact solution to the rendering equation; this is not the case with photon mapping).
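The unbiased-vs-biased distinction can be illustrated with a toy Monte Carlo estimator (a sketch in Python, nothing renderer-specific): each sample of an unbiased estimator has the true integral as its expected value, so averaging more of them converges to the exact answer, which is exactly the property the comment above is after.

```python
import random

def mc_estimate(f, n):
    """Plain Monte Carlo estimate of the integral of f over [0, 1].

    Each sample is unbiased, so averaging more of them converges to the
    exact value, with error shrinking as 1/sqrt(n).
    """
    return sum(f(random.random()) for _ in range(n)) / n

random.seed(42)
# The integral of 3x^2 over [0, 1] is exactly 1.
estimate = mc_estimate(lambda x: 3.0 * x * x, 200_000)
```

Photon mapping, by contrast, blurs nearby photon hits together during density estimation, so the error it introduces doesn't average away no matter how many independent renders you accumulate.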

I've always wondered though if someone has found a solution to the pinhole problem: imagine you have a scene that contains a box. Inside the box is a very bright light, and outside of the box it is pitch black. Your camera is pointing at the box. If you rendered the scene, your resulting image would be completely black. Now imagine you poke a small "pinhole" in the box. In real-life, your resulting image would be well-lit because of the very bright light now bleeding out of the box. But none of the rendering techniques I know of would be able to "find" this pinhole in a reasonable amount of time without manually provided additional information. Any rendering experts know the solution to this?


Metropolis light transport with manifold exploration should be able to handle that case.
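For intuition, here is a hedged 1-D sketch of the Metropolis idea (not actual MLT, which mutates whole light paths): mutations are accepted with probability proportional to the contribution ratio, so once one sample "finds" the bright region, the chain stays there and explores it locally rather than having to rediscover it by blind chance. In real MLT the initial path comes from an ordinary bootstrap path tracer, which is what threads the pinhole in the first place; here we just start the chain at the peak.

```python
import math
import random

def metropolis_samples(f, n, x0=0.5, step=0.05):
    """Metropolis-Hastings in 1-D: samples distributed proportionally to f.

    Local mutations (x plus uniform noise) are accepted with probability
    min(1, f(y) / f(x)), so the chain concentrates where f is large.
    """
    xs, x, fx = [], x0, f(x0)
    for _ in range(n):
        y = min(1.0, max(0.0, x + random.uniform(-step, step)))
        fy = f(y)
        if fy > 0.0 and random.random() < min(1.0, fy / fx):
            x, fx = y, fy
        xs.append(x)
    return xs

random.seed(1)
# A sharply peaked "pinhole": nearly all contribution lies near x = 0.5,
# analogous to the tiny set of paths that escape the box.
pinhole = lambda x: math.exp(-((x - 0.5) ** 2) / (2.0 * 0.004 ** 2))
samples = metropolis_samples(pinhole, 50_000)
near_peak = sum(1 for s in samples if 0.48 < s < 0.52) / len(samples)
```

A plain independent sampler would land in that narrow band only about 4% of the time; the Metropolis chain spends nearly all of its samples there.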


And the follow-ups dealing with actual production renderers:

http://www.fxguide.com/featured/the-state-of-rendering/

http://www.fxguide.com/featured/the-state-of-rendering-part-...

What's interesting is how far Pixar's PRMan has fallen within 2-3 years, despite PRMan 19 (currently in alpha) being set to provide BDPT with vertex connection and merging (state-of-the-art light transport).

The problem is that, despite providing the above, they're still getting some of the basics wrong, and geometric lights still aren't being sampled correctly. In addition:

Even with fully raytraced (i.e. no REYES) setups, its instancing support is still fairly pathetic, and its texture paging support is shockingly bad, even with Pixar's own proprietary image format.

PRMan was once by far the dominant renderer in VFX, but Arnold has come along and knocked it off its perch (although Arnold still has a few issues as well).


Similarly, watch Vray closely. I have a feeling that in the next few years, Corona is going to eat Vray's lunch much in the same way that Arnold is eating PRMan's right now.


Yeah, maybe, but Chaos Group seem to be fighting back a bit with Vray 3 (much faster MC GI and BDPT support)... They still need to sort out their GPU side of things, though...

It's great that there's so much competition and so many renderers out there at the moment. This is partly because it's not that difficult to write a "close to production" raytracer (ignoring asset paging, recursive procedurals and curve support, which is where the big boys really shine).


Trust me, it's not _that_ simple :). (I'm working @ ChaosGroup on vray).


I've written my own over the past 2.5 years, mostly on the train to and from work...

Ignoring geometry and texture paging, curve support and volumetrics (all of which I'm slowly working on), it could be used in production.

But then I guess any renderer can produce a pretty picture with physically-based shading. Speed-wise, though (it's a brute-force GI renderer only, no irradiance caching), it's competitive with the other CPU renderers out there at similar settings. The harder stuff is memory efficiency (Arnold's amazing at this) and numeric stability over all the different types of data.

Given the amount of research out there, it's really not that difficult. Most of the hard stuff I found was decoupling the GUI (I've written my own content-creation / scene editor as well, which hosts the renderer) from the renderer; that involved a lot of re-writes...


Even if you ignore volumetrics and splines, a production renderer still needs tons of features. Gradually adding those increases overall system complexity exponentially, since new features should not break old ones and need to be backward compatible (scene-format-wise) as well. Consider, e.g., different sampling types for indoor/outdoor scenes or per material, handling millions of triangles with limited RAM, subdivision surfaces and sub-surface scattering, the many, many types of materials your users expect you to support (and precompiled combinations thereof), speed expectations, etc. That's really just scratching the surface of what's in a production renderer. As they say, the devil is in the details :). Of course, if you render two spheres on a plane, yeah, you can write a tracer for that in 10 minutes and even compete with others.


Working in the VFX industry, I'm well aware of what a production renderer needs :)

I wouldn't say adding features increases complexity exponentially - I guess it depends on the design of the system, but that hasn't been my experience with my stuff...

I assume by "different sampling types..." you mean BSDF importance sampling for the outgoing direction, and sampling the solid angle or PDF of taking that direction? If so, I'm not sure I'd call that especially difficult...
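For readers unfamiliar with the term: the canonical beginner example of BSDF importance sampling is cosine-weighted hemisphere sampling for a Lambertian surface, where the sampling PDF is chosen to cancel the cosine factor in the rendering equation. A minimal sketch (my own illustration, not any particular renderer's code):

```python
import math
import random

def sample_cosine_hemisphere():
    """Cosine-weighted direction on the +z hemisphere, with its pdf.

    For a Lambertian BSDF, drawing directions with pdf = cos(theta) / pi
    cancels the cosine factor in the rendering equation, which is the
    basic importance-sampling trick.
    """
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)                    # point on the unit disc...
    phi = 2.0 * math.pi * u2
    z = math.sqrt(max(0.0, 1.0 - u1))    # ...projected up to the hemisphere
    direction = (r * math.cos(phi), r * math.sin(phi), z)
    return direction, z / math.pi        # pdf = cos(theta) / pi

random.seed(7)
dirs = [sample_cosine_hemisphere()[0] for _ in range(100_000)]
mean_cos = sum(d[2] for d in dirs) / len(dirs)   # converges to exactly 2/3
```

The disc-projection trick (Malley's method) guarantees unit-length directions by construction, since r^2 + z^2 = u1 + (1 - u1) = 1.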

Building up ready-to-go materials (certainly as many as Vray ships with) undoubtedly takes time, but in the VFX industry, generally everyone writes their own shaders anyway...

Anyway, my focus is on VFX, so I can concentrate on those types of issues, whereas I guess Vray needs to work well across different domains as well (product design, archviz, etc.). That'd be where things start getting difficult, balancing out the different requirements, I'd assume:

e.g. you can save quite a bit of memory by not storing vertex normals as a vec3 of floats, but as two 16-bit ushorts which can then be mapped via a LUT to the spherical coords of the normal. This is acceptable for VFX, and even in worst-case situations (very zoomed in on a very dense mesh with lots of crinkly displacement) it's barely noticeable, but archviz people rendering at 10k x 10k might spot the issues at that size...
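The encoding described above can be sketched like this (an illustration of the idea, not the commenter's actual code; the decode side is where a real renderer would substitute a precomputed LUT for the per-lookup trig):

```python
import math

def encode_normal(n):
    """Pack a unit normal into two 16-bit integers via spherical coords:
    6 bytes of float data become 4 bytes, at a bounded angular error."""
    theta = math.acos(max(-1.0, min(1.0, n[2])))     # polar angle in [0, pi]
    phi = math.atan2(n[1], n[0]) % (2.0 * math.pi)   # azimuth in [0, 2*pi)
    return (round(theta / math.pi * 65535),
            round(phi / (2.0 * math.pi) * 65535))

def decode_normal(packed):
    """Recover an approximate unit normal from the two packed ushorts."""
    theta = packed[0] / 65535.0 * math.pi
    phi = packed[1] / 65535.0 * 2.0 * math.pi
    st = math.sin(theta)
    return (st * math.cos(phi), st * math.sin(phi), math.cos(theta))
```

The quantization step is pi/65535 radians in theta, so the round-trip error per component stays around 1e-4, which is the "barely noticeable" level the comment refers to.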


Something I've been meaning to do for a while is to write an analytic renderer that evaluates the light equation. Basically, it would be a program that takes in analytic geometric data and spits out an equation (or a system of equations) that can be evaluated in Mathematica / Maxima. The runtime complexity would be horrible, but at least it would give mathematically perfect, noiseless results. Or a hybrid approach could work, too (get the equations, do a series approximation, sample the polynomial or whatever else you end up with).
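As a small taste of what such closed forms look like (a sketch, with SymPy standing in for Mathematica/Maxima; the disc-emitter setup is my own example, not the poster's): the irradiance at a point directly below the centre of a uniform disc emitter reduces to an exact rational expression, and a computer algebra system derives it mechanically.

```python
import sympy as sp

h, r, rho, phi = sp.symbols('h r rho phi', positive=True)
# Irradiance at a point at height h under the centre of a disc emitter of
# radius r with unit radiance: integrate cos^2(theta) / d^2 over the disc,
# where cos(theta) = h / sqrt(rho^2 + h^2) and d^2 = rho^2 + h^2.
integrand = (h**2 / (rho**2 + h**2)**2) * rho
E = sp.integrate(integrand, (rho, 0, r), (phi, 0, 2 * sp.pi))
E = sp.simplify(E)   # equals pi * r**2 / (r**2 + h**2)
```

This is the classic disc form factor, noiseless and exact; the catch, as the comment says, is that the symbolic expressions explode in size once occlusion and general geometry enter the integral.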

To anyone looking to get into graphics - I strongly recommend writing a toy scanline raytracer - it's a great learning experience.
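In that spirit, here is a hedged starting-point sketch: a single ray-sphere intersection, which is the core of any toy raytracer (everything else is loops over pixels plus shading).

```python
import math

def ray_sphere(origin, direction, centre, radius):
    """Nearest positive hit distance along a unit-length ray, or None.

    Solves |o + t*d - c|^2 = r^2, a quadratic in t whose leading
    coefficient is 1 because the direction is unit length. Ignores the
    origin-inside-the-sphere case for simplicity.
    """
    oc = tuple(o - c for o, c in zip(origin, centre))
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                       # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0      # nearer of the two roots
    return t if t > 0.0 else None

# One pixel's worth of work: a camera ray straight down +z hits a unit
# sphere centred 5 units away at t = 4 (the sphere's near surface).
hit_t = ray_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0)
```

From here, the usual next steps are a pixel loop generating camera rays, a surface normal at the hit point, and a Lambert cosine term against a light direction.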


Unfortunately, ray tracing of general scenes is undecidable: http://www.cs.duke.edu/~reif/paper/tygar/raytracing.pdf


The stochastic raytracing algorithms are "exact" if they're run long enough, right? (I know I've spoken to you and Edward about this stuff at length)

Semi-relatedly, do you think the Haskell Diagrams library, with its normals tricks, would be a good pedagogical substrate for raytracing general shapes? Seems like it would allow punting on how surfaces are represented, maybe.


I have a feeling that this is one of those ideas that is going to be far far harder than you think it will be.


So far I've tried doing it on paper for an infinite plane with a single step and uniform lighting. I ended up with a scaled atan(x) as the light intensity as a function of distance from the step. I haven't figured out how to handle complex occluding geometry in a nice way.
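The arctan dependence checks out in a simplified 2-D version (assumed geometry, which may differ from the exact setup above): a point at distance x from a step of height h sees the whole sky except directions below the line to the step's top edge, so under a uniform sky the visible fraction is 1 - atan(h/x)/pi, a scaled and shifted arctan. A quick Monte Carlo check:

```python
import math
import random

def visible_sky_fraction(x, h, n=200_000):
    """Monte Carlo estimate of the fraction of a uniform 2-D sky visible
    at horizontal distance x from a step of height h (point at its base
    level). Sky directions run from 0 (horizontal, toward the step) to
    pi (horizontal, away from it); the step blocks everything below the
    line to its top edge, at elevation atan(h / x)."""
    blocked_below = math.atan2(h, x)
    hits = sum(1 for _ in range(n)
               if random.uniform(0.0, math.pi) > blocked_below)
    return hits / n

random.seed(3)
mc = visible_sky_fraction(1.0, 1.0)
analytic = 1.0 - math.atan2(1.0, 1.0) / math.pi   # = 0.75 when x = h
```

The complex-occluder case is hard precisely because the visible set of sky directions stops being a single interval, so the closed form turns into a sum over silhouette edges.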


That's exactly what I mean; the simple sanity check case is easy, but once you start layering on things like complex geometry and surface models and volume data, the problem grows in difficulty absurdly quickly.


I have been working on one and it's not that hard actually.

The most difficult part has been creating the right data model.


I wonder why they didn't mention the Cycles engine for Blender; even in its early stages it already produces reliable results.


Are there any rendering tools that automate creating, say, tens or hundreds of Amazon AWS instances to partition the work for large animations?


Considering that a single frame can require potentially hundreds of gigs of data at professional studios, the bottleneck in using AWS as a renderfarm isn't provisioning instances but sheer bandwidth.


CGTalk (now CGSociety) doesn't seem as active and lush as it was back in 2002-2007 or so. Has everyone moved to some other community?

Can anyone shed some light on other communities?


Scott Metzger is a religion. His deity is Vray.

Having worked with the haircut himself, he does do some wonderful things with Mari/Maya. He also requires super-beefy machines.


What is this considered: image processing or 3D graphics? There seems to be a lot of math and physics involved, which is great.


It's both.

It's literally the transition phase from 3-dimensional models (vector-based curves, points, and polygons) to final pixel-based raster images.

Rendering transforms the wireframe models into images.

3D graphics always involve a renderer of some kind, and there are different classes of rendering. The kind that video games use is optimized for speed and interactivity: it still rasterizes the 3D models into images, but the images are dumped to the video buffer and disposed of with each new frame, at 30 frames per second.

This article discusses renderers that are not concerned with speed or interactivity. The finished frames are retained and polished with compositing and photo-editing software, including Photoshop. In this case, the renderer is permitted to crunch numbers on a single frame for minutes, hours or days, rather than producing 30 disposable frames per second.



