
Well it can't just be one frame total every 24 hours, because an hour-long film would take 200+ years to render ;)
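(Quick back-of-envelope check in Python, assuming 24 fps and exactly one frame per 24 hours, as in the joke above:)

    # Sanity check of the "200+ years" figure: 24 fps, one frame per 24-hour day.
    frames = 24 * 60 * 60        # frames in an hour-long film at 24 fps -> 86,400
    days = frames * 1            # one day of rendering per frame
    print(days / 365.25)         # ~236.6 years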



They almost certainly render two frames at a time. Thus bringing the render time down to only 100+ years per film.


I’m going to guess they have more than one computer rendering frames at the same time.


Yeah, I was just (semi-facetiously) pointing out the obvious: it can't be simple wall-clock time.


Why can’t it be simple wall-clock time? Each frame takes 24 hours of real wall-clock time to render start to finish. But they render multiple frames at the same time. Doing so does not change the wall-clock time of each frame.


In my (hobbyist) experience, path-tracing and rendering in general are enormously parallelizable. So if you can render X frames in parallel such that they all finish in 24 hours, that's roughly equivalent to saying you can render one of those frames in 24h/X.

Of course I'm sure things like I/O and art-team-workflow hugely complicate the story at this scale, but I still doubt there's a meaningful concept of "wall-clock time for one frame" that doesn't change with the number of available cores.
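To illustrate that arithmetic with a toy example (the node count and per-frame time here are made up, not Pixar's actual numbers):

    # If X frames all finish within 24 wall-clock hours, the effective
    # throughput is one frame every 24h / X, even though each individual
    # frame still took 24 hours on its node.
    hours_per_frame_on_one_node = 24
    nodes = 2000                                    # hypothetical farm size
    frames_per_day = nodes                          # one frame per node per day
    effective_hours_per_frame = hours_per_frame_on_one_node / nodes
    print(frames_per_day, effective_hours_per_frame)  # 2000 frames/day, 0.012 h/frame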


Ray tracing is embarrassingly parallel, but it requires having most if not all of the scene in memory. If you have X,000 machines and X,000 frames to render in a day, it almost certainly makes sense to pin each frame's render to a single machine to avoid moving a ton of data around the network and in and out of memory on a bunch of machines (see the sketch below). In which case the actual wall-clock time to render a frame on a single machine that is devoted to the render becomes the number to care about and to talk about.
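Here's a rough sketch of that kind of pinning; the frame IDs, machine IDs, and the scene-loading and render functions are all hypothetical:

    # Frame-to-machine pinning for data locality: each node loads the scene
    # into memory once, then renders only the frames assigned to it, so the
    # scene data never moves over the network mid-render.
    def assign_frames(frame_ids, machine_ids):
        """Round-robin frames across machines; each machine keeps its full list."""
        assignment = {m: [] for m in machine_ids}
        for i, frame in enumerate(frame_ids):
            assignment[machine_ids[i % len(machine_ids)]].append(frame)
        return assignment

    def render_on_machine(machine, frames, load_scene, render_frame):
        scene = load_scene(machine)       # load the whole scene into memory once
        for f in frames:
            render_frame(scene, f)        # every frame reuses the in-memory scene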


Exactly - move the compute to the data, not the data to the compute.


I suspect hobbyist experience isn't relevant here. My experience running workloads at large scale (similar to Pixar's scale) is that as you increase scale, thinking of it as "enormously parallelizable" starts to fall apart.


"Wall-clock" usually refers to the time actually taken, in practice, with the particular configuration they use, not the time that could be achieved if they configured things to minimise start-to-finish time.


It could still be wall-clock time per frame, since you can render each frame independently.


True. With render farms, when they say X minutes or hours per frame, they mean the time it takes 1 render node to render 1 frame. Of course, they will have lots of render nodes working on a shot at once.


You solve that problem with massively parallel batch processing. Look at schedulers like Platform LSF or HTCondor.
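For a flavour of what that looks like, a minimal sketch using the HTCondor Python bindings, one independent job per frame (the render script, resource numbers, and frame count are all made up, and the exact submit API varies a bit between HTCondor versions):

    # Minimal batch-submission sketch: each frame becomes its own job.
    # "render_frame.sh" and the resource request are hypothetical.
    import htcondor

    job = htcondor.Submit({
        "executable":   "render_frame.sh",      # hypothetical per-frame render script
        "arguments":    "--frame $(Process)",   # $(Process) expands to 0..count-1
        "request_cpus": "16",
        "output":       "logs/frame_$(Process).out",
        "error":        "logs/frame_$(Process).err",
        "log":          "logs/render.log",
    })

    schedd = htcondor.Schedd()                  # local scheduler daemon
    schedd.submit(job, count=86400)             # one job per frame of an hour at 24 fps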


Haven’t heard those two mentioned in a while; I played around with them while I was in uni 15 years ago :-O



