
Why can’t it be simple wall-clock time? Each frame takes 24 hours of real wall-clock time to render start to finish. But they render multiple frames at the same time. Doing so does not change the wall-clock time of each frame.



In my (hobbyist) experience, path-tracing and rendering in general are enormously parallelizable. So if you can render X frames in parallel such that they all finish in 24 hours, that's roughly equivalent to saying you can render one of those frames in 24h/X.

Of course I'm sure things like I/O and art-team workflow hugely complicate the story at this scale, but I still doubt there's a meaningful concept of "wall-clock time for one frame" that doesn't change with the number of available cores.
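
Roughly the arithmetic I have in mind, with completely made-up numbers (I have no idea what the real farm size or per-frame core slice looks like):

    # Made-up numbers; purely illustrative, not Pixar's actual setup.
    total_cores     = 50_000   # hypothetical farm size
    cores_per_frame = 100      # hypothetical slice given to one frame
    hours_per_frame = 24       # wall-clock time per frame at that slice

    frames_in_parallel = total_cores // cores_per_frame        # 500
    amortized_hours    = hours_per_frame / frames_in_parallel  # ~0.05 h

    # If the work were perfectly parallel, pointing all 50k cores at a
    # single frame should pull its wall-clock time down toward ~3 minutes,
    # so "24 hours per frame" is really a statement about how the farm
    # is sliced, not a fixed property of the frame.
    print(frames_in_parallel, round(amortized_hours * 60, 1))  # 500 2.9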


Ray tracing is embarrassingly parallel, but it requires having most, if not all, of the scene in memory. If you have X,000 machines and X,000 frames to render in a day, it almost certainly makes sense to pin each frame to a single machine rather than shuffling a ton of scene data around the network and in and out of memory on a bunch of machines. In that case, the actual wall-clock time to render a frame on a single machine devoted to that render becomes the number to care about and to talk about.
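
Schematically, something like this (machine names and counts are invented, and a real farm scheduler is of course far more involved):

    # Toy sketch of pinning one frame to one machine for data locality.
    frames   = [f"frame_{i:04d}" for i in range(1000)]
    machines = [f"node_{i:03d}"  for i in range(1000)]

    # Each machine loads the full scene into RAM once and traces every
    # ray for its frame locally, so no scene data moves over the network
    # mid-render.
    assignment = dict(zip(frames, machines))

    # Splitting one frame across many machines would instead mean
    # replicating the scene everywhere or streaming geometry/textures
    # over the network -- exactly the cost that pinning avoids.
    print(assignment["frame_0042"])   # -> node_042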


Exactly - move the compute to the data, not the data to the compute.


I suspect hobbyist experience isn't relevant here. My experience running workloads at large scale (similar to Pixar's scale) is that as you increase scale, thinking of it as "enormously parallelizable" starts to fall apart.


Wall-clock time usually refers to the time actually taken, in practice, with the particular configuration they use, not the time that could be achieved if they configured things to minimise start-to-finish time for a single frame.



