
What is a physically based renderer? It sounds like ray tracing that can be done in reverse on a final image?



Traditionally, rendering has been concerned with RGB colors, because that's what the screen has to output. So your materials are defined in terms of what color objects are. Then they started adding "effects" on top, like reflectiveness, bump and normal maps, noise, etc., to make objects look more... real.

Physically based rendering approaches the problem from the opposite end. It asks: what are the characteristics of the material, in terms of simulated light interacting with a nontrivial surface, so that the thing appears on screen as it would if it were a real object with real light bouncing off its nanostructured surface?

Ray tracing is one method of rendering, but you can do physically based rendering the old way too, with fragment shaders. You just won't get global illumination, so it won't look as "real".
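
To make the contrast concrete, here's a minimal sketch in Python (purely illustrative, not any engine's actual code) of what "defining the material" looks like in the physically based workflow: you hand the shading function an albedo, a roughness and a metalness, and a standard microfacet model (Cook-Torrance with GGX and Schlick's Fresnel approximation) decides what color ends up on screen.

    # Illustrative single-light Cook-Torrance evaluation, the way a fragment
    # shader would do it per pixel. All names here are my own.
    import math

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def normalize(v):
        l = math.sqrt(dot(v, v))
        return tuple(x / l for x in v)
    def lerp(a, b, t): return tuple(x + (y - x) * t for x, y in zip(a, b))

    def shade(n, v, l, albedo, roughness, metalness, light_color):
        """Radiance reflected toward the viewer from one directional light."""
        n, v, l = normalize(n), normalize(v), normalize(l)
        h = normalize(tuple(vi + li for vi, li in zip(v, l)))
        ndl, ndv = max(dot(n, l), 0.0), max(dot(n, v), 0.0)
        ndh, hdv = max(dot(n, h), 0.0), max(dot(h, v), 0.0)
        if ndl <= 0.0 or ndv <= 0.0:
            return (0.0, 0.0, 0.0)

        alpha = roughness * roughness
        # GGX normal distribution: how many microfacets face the half vector.
        d = alpha ** 2 / (math.pi * (ndh ** 2 * (alpha ** 2 - 1.0) + 1.0) ** 2)
        # Schlick-GGX masking/shadowing term.
        k = (roughness + 1.0) ** 2 / 8.0
        g = (ndv / (ndv * (1 - k) + k)) * (ndl / (ndl * (1 - k) + k))
        # Schlick Fresnel; dielectrics get ~4% reflectance, metals tint it by albedo.
        f0 = lerp((0.04, 0.04, 0.04), albedo, metalness)
        f = tuple(f0i + (1.0 - f0i) * (1.0 - hdv) ** 5 for f0i in f0)

        spec = tuple(d * g * fi / (4.0 * ndl * ndv + 1e-7) for fi in f)
        # Energy split: light not reflected specularly (and not absorbed by a
        # metal) is diffused; the 1/pi keeps the Lambertian lobe energy-conserving.
        diff = tuple((1.0 - fi) * (1.0 - metalness) * a / math.pi
                     for fi, a in zip(f, albedo))
        return tuple((df + sp) * ndl * lc
                     for df, sp, lc in zip(diff, spec, light_color))

The same shading function can run per fragment in a rasterizer or per hit point in a ray tracer; the material description (albedo, roughness, metalness) doesn't change.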


> Traditionally rendering has been concerned with the RGB colors

I do need to point out that many (most?) so-called physically based renderers are still RGB based, with all the downsides that come with that. Mitsuba is notable for also supporting spectral rendering, a distinct feature that models light as a full spectrum instead of an RGB triplet.
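
A toy illustration of why that matters (the sensitivity curves below are made-up Gaussians standing in for the real CIE tables, and the whole thing is my own sketch, not Mitsuba code): a spectral renderer multiplies light and reflectance per wavelength and only projects to RGB at the sensor, while an RGB renderer projects first and then multiplies triplets, which in general gives a different answer.

    import math

    WAVELENGTHS = list(range(400, 701, 10))          # nm, 31 samples

    def gaussian(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

    # Stand-in sensor curves (hypothetical; real renderers use CIE 1931 tables).
    SENSOR = {
        "r": [gaussian(w, 600, 40) for w in WAVELENGTHS],
        "g": [gaussian(w, 550, 40) for w in WAVELENGTHS],
        "b": [gaussian(w, 450, 40) for w in WAVELENGTHS],
    }

    def spectrum_to_rgb(spd):
        """Integrate a spectral distribution against the sensor curves."""
        return tuple(sum(s * c for s, c in zip(spd, SENSOR[ch])) /
                     sum(SENSOR[ch]) for ch in ("r", "g", "b"))

    # A warm light source and a reflectance spectrum with a narrow green spike.
    light = [1.0 + 0.002 * (w - 400) for w in WAVELENGTHS]
    paint = [0.1 + 0.8 * gaussian(w, 530, 15) for w in WAVELENGTHS]

    # Spectral path: multiply per wavelength, convert once at the sensor.
    spectral_result = spectrum_to_rgb([l * p for l, p in zip(light, paint)])

    # RGB path: convert both to RGB first, then multiply channel-wise.
    rgb_result = tuple(l * p for l, p in
                       zip(spectrum_to_rgb(light), spectrum_to_rgb(paint)))

    print(spectral_result, rgb_result)   # the two disagree

The discrepancy gets worse with spiky spectra (fluorescent lights, dispersion, iridescence), which is where spectral rendering pays off.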


Man, they really need to do a better job of explaining on the home page what is unique about it. I know what differentiable rendering and spectral rendering are, but neither is mentioned. I have no idea what a retargetable renderer is, and it seems like nobody else uses that term at all (seriously, Google "retargetable renderer").


There are ways of faking GI, even with current techniques and without using ray tracing. Screen-space GI can look convincing (and also fail miserably). Radiosity techniques exist, etc. We've been able to fake global illumination for a while now. The results aren't perfect, but then again, that's what realtime computer graphics have been forever: a bunch of tricks that are close enough to reality, and 10 artists crying in the corner because they had to spend 5 hours moving an asset around so it looks just right.


I’ve always understood “physically based” to mean the renderer does a physical simulation (photons bouncing, refracting, having wavelengths stretched, subsurface scattering, BSDF calculations, etc.).

So much in computer graphics is just (clever) hacks upon hacks to get something that looks “good enough” but isn’t really simulating physics in any meaningful way (like SSAO, texture baking, bump mapping, etc.). These hacks are much, much faster than simulating the physical process of photons interacting with the world.

I don’t know where “physically based” originated, but my first introduction to it was pbrt, which I suspect popularized the “physically based” naming.

Differentiable rendering is the name for going from the final image and “reverse rendering” it to reconstruct a 3D model of the scene.


Some clarification regarding differentiable rendering vs. inverse rendering.

Differentiable rendering is what it says: differentiating the rendering process. Imagine that x is the scene and f(x) is the function that renders the image. Differentiable rendering is then simply taking the derivative f'(x).

Inverse rendering is the process of finding scene parameters x such that f(x) produces a given image y. This is often achieved by using differentiable rendering together with an optimization algorithm (like SGD or Adam). However, due to the nature of rendering, it's easy to get stuck in a local minimum. Therefore, even a perfect differentiable rendering engine is not sufficient for inverse rendering.
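
As a toy example of both ideas together (my own sketch, unrelated to Mitsuba's actual API): f(x) below "renders" a 1-D image of a bright blob from two scene parameters, hand-written derivatives play the role a differentiable renderer fills automatically, and gradient descent on an image loss recovers the parameters, which is the inverse-rendering part.

    import math

    WIDTH, SIGMA = 32, 3.0

    def render(pos, brightness):
        """f(x): scene parameters -> image (a smooth 1-D 'blob')."""
        return [brightness * math.exp(-((i - pos) ** 2) / (2 * SIGMA ** 2))
                for i in range(WIDTH)]

    def render_grad(pos, brightness):
        """f'(x): derivative of every pixel w.r.t. each scene parameter."""
        d_pos, d_bri = [], []
        for i in range(WIDTH):
            e = math.exp(-((i - pos) ** 2) / (2 * SIGMA ** 2))
            d_pos.append(brightness * e * (i - pos) / SIGMA ** 2)
            d_bri.append(e)
        return d_pos, d_bri

    target = render(20.0, 0.8)            # the "given image" y
    pos, bri = 16.0, 0.3                  # initial guess for the scene x
    lr = 0.05
    for _ in range(500):
        residual = [a - b for a, b in zip(render(pos, bri), target)]
        d_pos, d_bri = render_grad(pos, bri)
        # Chain rule: dLoss/dx = sum over pixels of 2 * residual * dPixel/dx.
        g_pos = sum(2 * r * d for r, d in zip(residual, d_pos))
        g_bri = sum(2 * r * d for r, d in zip(residual, d_bri))
        pos, bri = pos - lr * g_pos, bri - lr * g_bri

    print(pos, bri)   # ends up near (20.0, 0.8) from this starting guess

Start the guess far from the target and the position gradient vanishes before the blobs ever overlap, which is exactly the local-minimum failure mode mentioned above.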


Mind you, physically based renderers also use a bunch of clever hacks upon hacks to be able to run in realtime. Nobody is simulating photons bouncing in the nanostructure of a material. We just have formulas that say "for this roughness and metalness, your material will look like that, and it'll be good enough". We still abuse the hell out of UV maps to fake things, and still include ambient occlusion (mostly as a stylistic choice these days).

It's a different workflow that gives you consistency once you know what you're doing, as opposed to the days of Phong shading.


A physically-based renderer tries, as much as possible, to ground its equations and techniques in physical principles: for example, conservation of energy[1] (surfaces shouldn't reflect more light than what shines on them) and Helmholtz reciprocity[2].

This affects the choice of approaches and algorithms, such as using unbiased rendering[3], and their implementation, like using energy-conserving bidirectional scattering distribution functions[4] to describe how light interacts with a surface.

[1]: https://en.wikipedia.org/wiki/Conservation_of_energy

[2]: https://en.wikipedia.org/wiki/Helmholtz_reciprocity

[3]: https://en.wikipedia.org/wiki/Unbiased_rendering

[4]: https://en.wikipedia.org/wiki/Bidirectional_scattering_distr...
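
As a small numeric sanity check of those two constraints (my own toy code, not anything from Mitsuba), take the simplest physically based BRDF, the Lambertian f = albedo/pi: Monte Carlo integration of f * cos(theta) over the hemisphere should never exceed 1, and swapping the incoming and outgoing directions should leave f unchanged.

    import math, random

    ALBEDO = 0.7

    def brdf(wi, wo):
        """Lambertian BRDF: constant, independent of the two directions."""
        return ALBEDO / math.pi

    def sample_hemisphere():
        """Uniform direction on the upper hemisphere (z >= 0), pdf = 1/(2*pi)."""
        z = random.random()
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        return (r * math.cos(phi), r * math.sin(phi), z)

    wo = (0.0, 0.6, 0.8)                       # arbitrary viewing direction

    # Energy conservation: integrate brdf * cos(theta) over incoming directions.
    n = 100_000
    total = 0.0
    for _ in range(n):
        wi = sample_hemisphere()
        total += brdf(wi, wo) * wi[2] * 2.0 * math.pi   # division by pdf folded in
    print(total / n)                           # ~0.7, i.e. <= 1: no energy created

    # Helmholtz reciprocity: f(wi, wo) == f(wo, wi).
    wi = sample_hemisphere()
    print(brdf(wi, wo) == brdf(wo, wi))        # True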


> It sounds like ray tracing that can be done in reverse on a final image?

I forgot to mention: if the renderer respects Helmholtz reciprocity, then you can choose to either do forward rendering (rays originate at the light and bounce until they hit the camera or dissipate) or backwards rendering (rays originate at the camera and bounce until they hit a light or dissipate), or even do both, so-called bidirectional path tracing[1].

[1]: https://pbr-book.org/3ed-2018/Light_Transport_III_Bidirectio...
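
For a feel of the "backwards" direction, here's a compressed Python sketch over a toy scene of my own invention (one diffuse gray sphere under a uniform sky): camera rays bounce off the sphere, and a path terminates when a ray escapes to the sky, i.e. reaches the light.

    import math, random

    SPHERE_C, SPHERE_R, ALBEDO, SKY = (0.0, 0.0, 3.0), 1.0, 0.6, 1.0

    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(v):
        l = math.sqrt(dot(v, v))
        return tuple(x / l for x in v)

    def hit_sphere(o, d):
        """Distance along the ray o + t*d to the sphere, or None if it misses."""
        oc = sub(o, SPHERE_C)
        b = dot(oc, d)
        disc = b * b - (dot(oc, oc) - SPHERE_R ** 2)
        if disc < 0:
            return None
        t = -b - math.sqrt(disc)
        return t if t > 1e-4 else None

    def sample_hemisphere(n):
        """Uniform random direction in the hemisphere around normal n."""
        while True:
            d = norm((random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)))
            if dot(d, n) > 0:
                return d

    def radiance(o, d, depth=0):
        """Backwards tracing: follow the ray until it escapes to the sky."""
        if depth > 4:
            return 0.0                             # dissipated
        t = hit_sphere(o, d)
        if t is None:
            return SKY                             # the ray reached the light
        p = tuple(oi + t * di for oi, di in zip(o, d))
        n = norm(sub(p, SPHERE_C))
        wi = sample_hemisphere(n)
        # Throughput: brdf * cos(theta) / pdf = (albedo/pi) * cos * 2*pi.
        return 2.0 * ALBEDO * dot(wi, n) * radiance(p, wi, depth + 1)

    # Estimate one pixel by averaging a few camera rays aimed at the sphere.
    cam, view = (0.0, 0.0, 0.0), norm((0.02, 0.01, 1.0))
    print(sum(radiance(cam, view) for _ in range(256)) / 256)   # around 0.6

The forward direction is the same loop started at the light, and bidirectional path tracing connects partial paths built from both ends.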


Sidenote: the author of Mitsuba is also a contributor to https://www.pbr-book.org/, which is a great (graduate-level) introduction to this topic.


It just means that the renderer is making some attempt to mimic the way light behaves in the real world. This, hopefully, implies that the resulting image will look realistic.



