
I write a production renderer for a living :)

So I'm well aware of the trade-offs. As I mentioned, for lookdev and small scenes, GPUs do make sense currently (if you're willing to pay the penalty of getting code to work on both CPU and GPU, and GPU dev is not exactly trivial in terms of debugging / building compared to CPU dev).

But until GPUs exist with > 64 GB RAM, for rendering large-scale scenes it's just not worth it given the extra burdens (increased development costs, heterogeneous sets of machines in the farm, extra debugging, support), so for high-end scale we're likely 3-4 years away yet.




I used to write a production renderer for a living, now I work with a lot of people who write production renderers for both CPU and GPU. I’m not sure what line you’re drawing exactly ... if you mean that it will take 3 or 4 years before the industry will be able to stop using CPUs for production rendering, then I totally agree with you. If you mean that it will take 3 or 4 years before industry can use GPUs for any production rendering, then that statement would be about 8 years too late. I’m pretty sure that’s not what you meant, so it’s somewhere in between there, meaning some scenes are doable on the GPU today and some aren’t. It’s worth it now in some cases, and not worth it in other cases.

The trend is pretty clear, though. The size of scenes that can be done on the GPU today is large and growing fast, both because of improving engineering and because of increasing GPU memory speed & size. It's just a fact that a lot of commercial work is already done on the GPU, and that most serious commercial renderers already support GPU rendering.

It’s fair to point out that the largest production scenes are still difficult and will remain so for a while. There are decent examples out there of what’s being done in production with GPUs already:

https://www.chaosgroup.com/vray-gpu#showcase

https://www.redshift3d.com/gallery

https://www.arnoldrenderer.com/gallery/


The line I'm drawing is that high-end VFX / CG is still, IMO, years away from using GPUs for final-frame rendering (with loads of AOVs and Deep output).

Are GPUs starting to be used at earlier points in the pipeline? Yes, absolutely, but they always were to a degree in previs and modelling (via rasterisation). They are gradually becoming more usable at more steps in pipelines, but they're not there yet for high-end studios.

In some cases, if a studio's happy using an off-the-shelf renderer with the stock shaders (so no custom shaders at all - at least until OSL is doing batching and GPU stuff, or until MDL actually supports production renderer stuff), studios can use GPUs further down the pipeline, and currently that's smaller-scale stuff from what I gather talking to friends who are using Arnold GPU. Certainly the hero-level stuff at Weta / ILM / Framestore isn't being done with GPUs, as they require custom shaders, and they aren't going to be happy with just using the stock shaders (which are much better than stock shaders from 6-7 years ago, but still far from bleeding edge in terms of BSDFs and patterns).

Even from what I hear at Pixar with their lookdev Flow renderer, things aren't completely rosy on the GPU front, although it is at least getting some use; the expectation is that XPU will take over there, but I don't think it's quite ready yet.

Until a studio feels GPU rendering can be used for a significant amount of the renders they do (for smaller studios the fidelity will be less, so the threshold will be lower for them), I think it's going to be a chicken-and-egg problem of not wanting to invest in GPUs on the farms (or even local workstations).


I think you’re right about the current state (not quite there, especially in raw $$s), but the potential is finally good enough that folks are investing seriously on the software side.

The folks at Framestore and many other shops already don't do more than XX GiB per frame for their rendering. So for me, this comes down to "can we finally implement a good enough texture cache in OptiX / the community", which I understand Mark Leone is working on :).
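
For anyone wondering what "a good enough texture cache" boils down to, here's a minimal host-side sketch of the demand-loaded, tiled LRU idea, in CUDA-flavoured C++. Every name in it (TileKey, TileCache, loadTileFromDisk) is invented for illustration - it's not the OptiX demand-loading API, just the general shape of the problem:

    // Hypothetical host-side tile cache for demand-loaded textures.
    // None of these names come from the OptiX demand-loading library;
    // this only illustrates the LRU-over-tiles idea under discussion.
    #include <cstdint>
    #include <list>
    #include <unordered_map>
    #include <vector>

    struct TileKey {
        uint32_t textureId;
        uint16_t mipLevel;
        uint16_t tileX, tileY;
        bool operator==(const TileKey& o) const {
            return textureId == o.textureId && mipLevel == o.mipLevel &&
                   tileX == o.tileX && tileY == o.tileY;
        }
    };

    struct TileKeyHash {
        size_t operator()(const TileKey& k) const {
            uint64_t h = (uint64_t(k.textureId) << 32) ^
                         (uint64_t(k.mipLevel) << 24) ^
                         (uint64_t(k.tileX) << 12) ^ uint64_t(k.tileY);
            return size_t(h);
        }
    };

    class TileCache {
    public:
        explicit TileCache(size_t maxTiles) : maxTiles_(maxTiles) {}

        // Called while resolving tile requests recorded by the previous launch.
        // Returns the resident tile, evicting the least recently used one if full.
        const std::vector<uint8_t>& fetch(const TileKey& key) {
            auto it = cache_.find(key);
            if (it != cache_.end()) {
                lru_.splice(lru_.begin(), lru_, it->second.lruPos);  // mark hot
                return it->second.texels;
            }
            if (cache_.size() >= maxTiles_) {   // evict the coldest tile
                cache_.erase(lru_.back());
                lru_.pop_back();
            }
            lru_.push_front(key);
            Entry e;
            e.texels = loadTileFromDisk(key);   // miss: read + decode the tile
            e.lruPos = lru_.begin();
            return cache_.emplace(key, std::move(e)).first->second.texels;
        }

    private:
        struct Entry {
            std::vector<uint8_t> texels;
            std::list<TileKey>::iterator lruPos;
        };

        // Placeholder: a real renderer would decode a tile of an EXR/tx/ptex
        // file here; this just returns a dummy 64x64 RGBA8 tile.
        static std::vector<uint8_t> loadTileFromDisk(const TileKey&) {
            return std::vector<uint8_t>(64 * 64 * 4, 0);
        }

        size_t maxTiles_;
        std::list<TileKey> lru_;
        std::unordered_map<TileKey, Entry, TileKeyHash> cache_;
    };

The cache itself is the easy half; the hard part in a renderer is recording tile misses on the GPU during a launch and resolving them between launches without stalling, which is roughly what the demand-loading work is about.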

The shader thing seems easy enough. I'm not worried about an OSL compiled output running worse than the C-side. Divergence is a real issue, but so many studios are now using just a handful of BSDFs with lots of textures driving them that, as long as you don't force the shading to be "per object group" but instead "per shader, varying inputs is fine", you'll still get high utilization.
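
To make the "per shader, varying inputs" point concrete, here's a rough wavefront-style sketch: bucket the hit points by shader ID so each warp runs a single BSDF, while textures and parameters still vary freely per hit. All the names (HitRecord, shadeDiffuseKernel, shadeWavefront) are made up for the example, and it assumes a single diffuse shader for brevity:

    // Sketch of "sort hits by shader, not by object" wavefront shading (CUDA).
    // HitRecord, shadeDiffuseKernel, etc. are invented for illustration; a real
    // renderer would use its own queues and its compiled OSL / stock shaders.
    #include <cuda_runtime.h>
    #include <thrust/device_vector.h>
    #include <thrust/sort.h>

    struct HitRecord {
        int   shaderId;   // which BSDF / shader graph to run
        float u, v;       // varying inputs: texture coords, normals, params...
        int   pixel;
    };

    struct ByShaderId {
        __host__ __device__ bool operator()(const HitRecord& a,
                                            const HitRecord& b) const {
            return a.shaderId < b.shaderId;
        }
    };

    // One kernel per shader keeps every lane in a warp on the same BSDF code;
    // the *inputs* (textures, parameters) can still vary per hit.
    __global__ void shadeDiffuseKernel(const HitRecord* hits, int count,
                                       float4* out) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= count) return;
        const HitRecord& h = hits[i];
        // ... evaluate the diffuse BSDF with per-hit varying inputs ...
        out[h.pixel] = make_float4(h.u, h.v, 0.f, 1.f);  // placeholder result
    }

    void shadeWavefront(thrust::device_vector<HitRecord>& hits,
                        thrust::device_vector<float4>& framebuffer) {
        // Group the work by shader id, NOT by object/instance:
        // coherent execution, divergent data.
        thrust::sort(hits.begin(), hits.end(), ByShaderId());

        // For brevity, pretend shader 0 is the only one present and launch it
        // over the whole (now contiguous) range; a real scheduler would find
        // each shader's sub-range and dispatch the matching kernel.
        int count = static_cast<int>(hits.size());
        int block = 256, grid = (count + block - 1) / block;
        shadeDiffuseKernel<<<grid, block>>>(
            thrust::raw_pointer_cast(hits.data()), count,
            thrust::raw_pointer_cast(framebuffer.data()));
    }

Sorting per object group instead would scatter different shaders across a warp and tank utilization; sorting per shader keeps execution coherent even when the inputs vary wildly, which is why a handful of heavily-textured BSDFs works out fine on the GPU.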

The 80 GiB parts will make it so that some shops could go fully in-core. I expect we’ll see that sooner than you’d think, just because people will start doing interactive work, never want to give it up, and then say “make that but better” for the finals.


GPUs do exist with 64+ GB of RAM, virtually. A DGX-2 has distributed memory where you can see the entire 16x32 GB address space, backed by NVLink. And that technology is now 3 years old; capacities are even higher now.
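
For what it's worth, the "one big address space" part is plain CUDA peer access: once enabled, a kernel running on one GPU can dereference memory allocated on another (fast over NVLink on DGX-class machines, much slower over PCIe). A minimal sketch, with nothing DGX-specific in the API:

    // Minimal CUDA peer-access sketch: GPU 0 reads a buffer that lives on GPU 1.
    // Over NVLink this is fast enough to treat pooled memory as one large
    // address space; over PCIe it works too, just much more slowly.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void sumKernel(const float* remote, int n, float* result) {
        float s = 0.f;
        for (int i = 0; i < n; ++i) s += remote[i];  // touches GPU 1's memory
        *result = s;
    }

    int main() {
        const int n = 1 << 20;
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, /*device=*/0, /*peerDevice=*/1);
        if (!canAccess) {
            printf("no peer access between GPU 0 and GPU 1\n");
            return 1;
        }

        // Allocate the data on GPU 1.
        cudaSetDevice(1);
        float* remoteBuf = nullptr;
        cudaMalloc(&remoteBuf, n * sizeof(float));
        cudaMemset(remoteBuf, 0, n * sizeof(float));

        // Let GPU 0 map GPU 1's allocations into its address space.
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(/*peerDevice=*/1, 0);

        float* result = nullptr;
        cudaMalloc(&result, sizeof(float));
        sumKernel<<<1, 1>>>(remoteBuf, n, result);  // GPU 0 reads GPU 1's buffer
        cudaDeviceSynchronize();

        float h = 0.f;
        cudaMemcpy(&h, result, sizeof(float), cudaMemcpyDeviceToHost);
        printf("sum = %f\n", h);
        return 0;
    }

It's still NUMA, of course: the renderer has to care which GPU owns which allocations, so 16x32 GB pooled this way isn't quite the same thing as 512 GB on one board.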


Given current consumer GPUs are at 24 GB I think 3-4 years is likely overly pessimistic.


They've been at 24 GB for two years though - and they cost an arm and a leg compared to a CPU with a similar amount of RAM.

It's not just about them existing, they need to be cost effective.


Not anymore. The new Ampere-based Quadros and Teslas just launched with up to 48 GB of RAM. A special datacenter version with 80 GB has also already been announced: https://www.nvidia.com/en-us/data-center/a100/

They are really expensive though. But chassis and rack space aren't free either. If one beefy node with a couple of GPUs can replace half a rack of CPU-only nodes, the GPUs are totally worth it.

I'm not too familiar with 3D rendering, but in other workloads the GPU speedup is so huge that if it's possible to offload to the GPU, it makes sense to do it from an economic perspective.


Hashing and linear algebra kernels get much more speedup on a GPU than a VFX pipeline does. But I am glad to see reports here detailing that the optimization of VFX is progressing.


Desktop GPUs could have 64 GB of GDDR right now, but the memory bus width needed to drive those bits optimally (in the primary use case of real-time game rendering, not offline) would push the power and heat dissipation requirements beyond what is currently engineered onto a [desktop] PCIE card.

If 8k gaming becomes a real thing you can expect work to be done towards a solution, but until then not so much.

Edit: added [desktop] preceding PCIE


There are already GPUs with >90 GB of RAM: the DGX A100 has a version with 16 A100 GPUs with 90 GB each, which is about 1.4 TB of GPU memory on a single node.



