
It gives me no reasons to use it over the standard Cycles renderer.



Not even for all of the shaders, books, tutorials and other resources made available for it over the years? The analogy I make is a programming language that is so-so, but has a hundred relevant, useful libraries vs. the technically better language with hardly any libraries. Cycles has the basic building blocks to create a library of pre-defined shaders, but Renderman has over 25 years of history, and tons of RSL examples.


Show me just one really cool thing that can be done with Renderman and can't be done with Cycles. Show off that 25 years of history.


I didn't say anything could or could not be done with Cycles. I said there were 'tons of RSL examples'. Do you understand the significance of a history and body of work available to start from, rather than starting from scratch? I was using BMRT in the 90s, and reading books about procedurally-generated textures. People wrote so many shaders in RSL, most available for you to copy, modify and re-use. I am assuming there are not nearly as many OSL examples or code samples as there are RSL ones. I may be wrong. For the record, I'll use both, and I am happy to see Renderman as an option. I am particularly interested in OSL as a counterpart to RSL; Sony used it successfully, as a counterpoint to Pixar's mention here. I am also eager to test my RSL chops again too!


Most of those shaders written back then are pretty much worthless now that Renderman introduced RIS mode (replacing REYES mode) last year. Pixar now recommends writing shaders in C++ instead of RSL, as C++ shaders are just way more performant right now. OSL is pretty mature; I know that both Sony Pictures Imageworks and Double Negative use it in production. Now that Renderman also supports OSL, I'm pretty sure we're going to see OSL examples posted online. OSL shaders are awesome compared to RSL shaders, as OSL gives all control to the renderer, which allows it to do a lot of optimization (such as using the renderer's own importance sampling strategies).
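
To make the closure point concrete, here's a tiny conceptual sketch (my own illustration in C++, not Pixar's or Blender's actual API) of the difference: the shader only describes the BSDF as weighted lobes, and the renderer draws samples with its own strategy.

    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    struct Lobe {
        const char* name;   // e.g. "diffuse", "microfacet_ggx"
        double weight;      // reflectance weight contributed by the shader
    };

    // The "shader" just returns data; it never traces rays or loops over lights.
    std::vector<Lobe> evalShader() {
        return { {"diffuse", 0.7}, {"microfacet_ggx", 0.3} };
    }

    int main() {
        std::vector<Lobe> lobes = evalShader();
        double total = 0.0;
        for (const Lobe& l : lobes) total += l.weight;

        // The *renderer* importance-samples the lobes in proportion to their
        // weights; a monolithic RSL-style shader can't expose this choice.
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        for (int i = 0; i < 5; ++i) {
            double x = u(rng) * total;
            for (const Lobe& l : lobes) {
                if ((x -= l.weight) <= 0.0) {
                    std::printf("sample %d -> %s\n", i, l.name);
                    break;
                }
            }
        }
    }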


Thanks, I'll have to look even more into OSL (vs. GLSL or RSL). I would hardly use the word 'worthless' just because RIS mode was introduced last year. The basics of what the procedurally-generated-textures book taught me back then still apply: geometry, math, building up patterns in software, etc. Not to mention, the basis of such shaders just needs to be translated, and GLSL can be relatively simple to translate into OSL, as the videos by Thomas Dinges show: https://www.youtube.com/watch?v=4LQXjIDWtz0


Any movie by Pixar, Industrial Light and Magic, Double Negative, Framestore, Weta, etc.

Alternatively, a brief and by no means comprehensive list of things I've found to be better in Renderman versus Cycles:

* Bidirectional path tracing and VCM for fast, accurate caustics and SDS (specular-diffuse-specular) illumination without fireflies. [0][1]

* Extremely efficient subdivision surfaces and displacement. In Cycles, there is a performance penalty for subdivs and displacement. In Renderman, there is almost zero overhead. [2][3]

* Significantly faster subsurface scattering [4][5]

* Support for OpenVDB volumes, and much faster volume rendering/scattering/etc. [6][7]

* Importance sampling for emissive effects, such as explosions and flames. Think candles with actual fire that don't firefly! [8]

* Importance sampling of HDR maps (basically, prevents fireflies when using HDR maps; see the MIS sketch after this list) [9]

* Much better memory efficiency. Renderman is designed to handle literally hundreds of gigs of stuff in memory going in and out of core all the time. Don't have a link for this one, since this comes mostly from experience with both renderers and having used Renderman in a large production studio.

* Disney "principled" BSDF [10]

* Arbitrary geometry lights, with importance sampling [11]

* Better/faster hair rendering and hair shaders [12][13][14]

* The Denoiser in PRMan 20 is legitimately magic. [15][16]
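
To make the importance-sampling items above concrete, here's a minimal sketch (my own toy code, not RenderMan's) of Veach's power heuristic, the standard way renderers combine light sampling and BSDF sampling so that bright emitters and HDR texels stop producing fireflies:

    #include <cstdio>

    // Weight for a sample drawn from pdfA when pdfB is the competing strategy.
    double powerHeuristic(double pdfA, double pdfB) {
        double a = pdfA * pdfA, b = pdfB * pdfB;
        return a / (a + b);
    }

    int main() {
        // A glossy bounce toward a small bright light: light sampling has a
        // high pdf, BSDF sampling a low one, so MIS trusts the light sample.
        double wLight = powerHeuristic(12.0, 0.3);
        double wBsdf  = powerHeuristic(0.3, 12.0);
        std::printf("light-sample weight %.4f, bsdf-sample weight %.4f\n",
                    wLight, wBsdf);   // the two weights sum to 1
    }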

Generally, there isn't necessarily any particular feature that Renderman has that Cycles is outright missing, but almost everything in Renderman is a lot faster and more optimized (as seen in the link list below, a lot of the research on making stuff like subsurface scattering and hair extremely fast comes from Pixar and the Renderman devs/researchers in the first place). On the flip side, Cycles has a usable GPU mode, whereas I doubt Renderman will get one anytime soon. Cycles is really pretty great, but a dedicated, paid team that has to support not just Pixar's production studio but also a ton of large VFX houses produces a much faster development pace, and a much higher incentive to optimize the crap out of every inch of Renderman, than a volunteer team can.

Disclaimer: I now work for a different production renderer team at a different studio (some consider us to be a rival to Renderman, we don't see it quite that way), but I've worked at Pixar before.

[0] http://renderman.pixar.com/resources/current/RenderMan/PxrVC...

[1] http://cgg.mff.cuni.cz/~jaroslav/papers/2012-vcm/2012-vcm-pa...

[2] http://renderman.pixar.com/view/displacements

[3] http://graphics.pixar.com/opensubdiv/docs/intro.html

[4] http://graphics.pixar.com/library/ApproxBSSRDF/paper.pdf

[5] http://graphics.pixar.com/library/PhotonBeamDiffusion/paper....

[6] http://www.openvdb.org/

[7] http://renderman.pixar.com/resources/current/RenderMan/rfmOp...

[8] http://graphics.pixar.com/library/MISEmissive/paper.pdf

[9] http://renderman.pixar.com/resources/current/RenderMan/PxrSt...

[10] http://renderman.pixar.com/resources/current/RenderMan/PxrDi...

[11] http://renderman.pixar.com/resources/current/RenderMan/risLi...

[12] http://renderman.pixar.com/resources/current/RenderMan/PxrMa...

[13] http://graphics.pixar.com/library/DataDrivenHairScattering/p...

[14] http://graphics.pixar.com/library/ImportanceSamplingHair/pap...

[15] http://renderman.pixar.com/resources/current/RenderMan/risDe...

[16] https://renderman.pixar.com/view/denoiser


DNeg are still using Mantra (and Clarisse) for a fair amount of stuff (Ant-man is their first production using just RIS in PRMan 19/20), Framestore are solidly Arnold now, and Weta are using their own PRMan clone (but a path-tracer) called Manuka.

PRMan's volume support in RIS is still pretty poor - even in 20 - they're still using generic Woodcock tracking, which just doesn't work well: the max extinction coefficient is global to the whole volume, so it's very inefficient, as you can't localise the step distance efficiently or importance sample the density integration. Similarly, in RIS, importance sampling for emissive volumes is non-existent (unless you write your own interior integrator).
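
To illustrate the problem, here's a hedged sketch (my own toy code, with a made-up density field) of generic Woodcock/delta tracking; note how the single global majorant sigmaMax forces tiny tentative steps even through the nearly empty part of the volume:

    #include <cmath>
    #include <cstdio>
    #include <random>

    std::mt19937 rng(7);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    // Stand-in density field: one thin dense slab inside a huge thin volume.
    double sigmaT(double t) { return (t > 40.0 && t < 41.0) ? 5.0 : 0.01; }

    int main() {
        const double sigmaMax = 5.0;  // global majorant, dictated by the slab
        double t = 0.0;
        long steps = 0;
        bool hit = false;
        while (t < 100.0) {           // 100 units of mostly-empty volume
            t += -std::log(1.0 - u(rng)) / sigmaMax;  // tentative free flight
            ++steps;
            if (t < 100.0 && u(rng) < sigmaT(t) / sigmaMax) { hit = true; break; }
            // otherwise a "null" collision: wasted work in the thin region
        }
        std::printf("%s at t=%.2f after %ld tentative steps\n",
                    hit ? "real collision" : "escaped", t, steps);
    }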

Hair rendering in RIS is only fast-ish because they tessellate to triangles (based on the shading rate), which means stupid memory usage.

Pixar's (Disney's) denoiser doesn't really work with hair.


AFAIK Industrial Light and Magic used Arnold for a lot of movies, Framestore used Arnold for Gravity (I'm not sure what they use now), Double Negative uses a whole range of renderers (with a layer between their OSL shaders and the renderers) and Weta uses their own in-house renderer, Manuka, now. Most studios switched away from Renderman as its support for ray tracing was extremely lacking until a year ago. On your list of points:

* Bidirectional path tracing and VCM are surely nice, but you're absolutely fine with just path tracing. Artists have learned how to avoid SDS paths and how to optimize their scenes for path tracing. It would require a lot of time to train artists to optimize their scenes for VCM, and with the amount of diffuse noise (which is unavoidable) I'm not even sure it's worth it.

* I doubt the subsurface scattering in Cycles is slower than Renderman's, as it only supports bicubic and gaussian profiles, which are even easier to evaluate and sample than the approximate BSSRDF ;). Those profiles are extremely lacking (and I doubt they're enough these days), but SPI used bicubics and gaussians a few years ago [1].

* I absolutely can't imagine that Cycles doesn't have environment map sampling, which is something pretty basic and explained in PBRT (a small sketch of the idea appears after this list). Sure, Cycles probably doesn't have an implementation of "Portal-Masked Environment Map Sampling" by Bitterli, Novak and Jarosz yet, so I'm sure Renderman is better at it, but Cycles is decent at it.

* Same for arbitrary geometry lights, but I'm sure Renderman does a better job at it (although Pixar is probably limited by Solid Angle's patents on importance sampling quads and other stuff, Cycles doesn't have to deal with patents).

* AFAIK Cycles has a state of the art hair shader, but it seems to lack the importance sampling scheme by Eugene d'Eon.
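
For reference, here's roughly what that PBRT-style environment sampling looks like (my own 1-D toy; real maps use a 2-D marginal/conditional pair): build a CDF over texel luminance and draw directions in proportion to brightness, so the sun texel gets sampled deliberately instead of being a firefly source.

    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        // Toy "environment row": dim sky plus one very bright sun texel.
        std::vector<double> lum = {0.1, 0.1, 0.1, 50.0, 0.1, 0.1};

        std::vector<double> cdf(lum.size());
        double sum = 0.0;
        for (size_t i = 0; i < lum.size(); ++i) { sum += lum[i]; cdf[i] = sum; }

        std::mt19937 rng(1);
        std::uniform_real_distribution<double> u(0.0, sum);
        std::vector<int> hits(lum.size(), 0);
        for (int s = 0; s < 10000; ++s) {
            double x = u(rng);
            size_t i = 0;
            while (cdf[i] < x) ++i;   // linear search; binary in practice
            ++hits[i];                // pdf for texel i is lum[i] / sum
        }
        for (size_t i = 0; i < hits.size(); ++i)
            std::printf("texel %zu sampled %d times\n", i, hits[i]);
    }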

I absolutely agree with you that Renderman is a more complete and faster renderer, but Cycles is absolutely amazing if you keep in mind that it was completely developed by a small team of hobbyists. Cycles isn't meant for production rendering, but it's amazing for hobbyists.

[1] http://dl.acm.org/citation.cfm?id=2504520


> * Bidirectional path tracing and VCM are surely nice, but you're absolutely fine with just path tracing. Artists have learned how to avoid SDS paths and how to optimize their scenes for path tracing. It would require a lot of time to train artists to optimize their scenes for VCM, and with the amount of diffuse noise (which is unavoidable) I'm not even sure it's worth it.

This is 100% not true. Forward path tracing is not adequate for anything except direct illumination. Any enclosed space that gets most of its illumination from some sort of bounce is not feasible without some sort of illumination caching. Bidirectional path tracing and VCM do not carry any additional complexity from an artist's point of view. Also, avoiding SDS paths is not always something that artists can simply do, and any illumination that is missing is room for improvement.

Do you have links to Solid Angle's patents? Renderman actually samples emissive geometry exceptionally well.


There are very few scenes that need bidirectional or VCM (in VFX, anyway) - in theory they converge faster for indirect illumination, but because both methods need to be used in incremental mode (1 sample per pixel over the entire image), you significantly lose cache coherency for texture access (even for camera rays), as you're constantly thrashing the texture cache, meaning renders are a lot slower. There are much better ways of reducing this indirect noise in production (portals, blockers).

On top of this, it's also very difficult to get the ray differentials correct when merging paths, so you end up point-sampling the textures, meaning huge amounts of texture IO.


So there are two different things there: bidirectional tracing with and without VCM. VCM takes longer to trace, but takes care of outlier samples that can't be oversampled away in practice.

When it comes to any sort of bounce, forward raytracing is painful, anything that helps is good.

Most renderers don't take texture cache coherency into account much at all, which makes me think you work for Disney?


Per iteration, bi-directional is extra work too. Obviously these integration methods are much better at finding certain hard-to-find light paths/sources, but my point is that in VFX, it's generally good enough to fake stuff by just turning shadow/transmission (depending on renderer) visibility off for certain objects to allow light through.

It's rare that we actually have glass/metal objects with lights in/around them such that bi-directional / VCM actually makes sense - even for stuff like eye caustics we've found that uni-directional does a pretty good job. And for other situations, like car headlights behind glass with metal reflectors behind the bulb, you just turn transmission visibility off for the glass part; yes, it's not fully accurate (in terms of refraction and light-leak lines), but we're generally using IES profiles for lights like this, so we get accurate light spread patterns anyway.

Well, they do, in that camera rays (and light rays in bi-directional/VCM) generally end up using the highest-resolution mipmap levels of textures, so you're reading a lot more data for these samples, hence pushing stuff out of cache much more. We've seen this with PRMan 19/20 in RIS: using incremental mode can be a 3x slowdown in some cases compared to non-incremental, as the camera rays are much more coherent per-bucket in non-incremental, so the highest-resolution mipmap tiles are kept in cache much more. With incremental, you're only sending the bucket-size number of samples and the equivalent texture reads for the camera rays, with secondary bounces generally using much smaller/lower mipmap tiles for the texture request (and you can get away with box filtering these in 95% of cases), then moving on to the next bucket, which will probably need completely different high-resolution mipmap tiles for its camera rays. With texture IO often being the bottleneck in VFX rendering, this is a huge issue.
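
Here's a toy simulation (my own, nothing to do with PRMan's actual cache) of that effect: an LRU tile cache sees vastly more misses when samples hop bucket-to-bucket each pass (incremental) than when each bucket is finished before moving on, because every bucket's high-res mip tiles get evicted before it's revisited.

    #include <cstdio>
    #include <list>
    #include <unordered_map>

    struct LruCache {
        size_t capacity;
        long misses = 0;
        std::list<int> order;                                 // front = most recent
        std::unordered_map<int, std::list<int>::iterator> pos;
        explicit LruCache(size_t c) : capacity(c) {}
        void access(int tile) {
            auto it = pos.find(tile);
            if (it != pos.end()) {
                order.erase(it->second);                      // hit: refresh entry
            } else {
                ++misses;                                     // miss: maybe evict
                if (order.size() == capacity) {
                    pos.erase(order.back());
                    order.pop_back();
                }
            }
            order.push_front(tile);
            pos[tile] = order.begin();
        }
    };

    int main() {
        const int buckets = 64, tilesPerBucket = 8, passes = 16;
        LruCache coherent(32), incremental(32);

        // Non-incremental: finish all samples of a bucket, then move on.
        for (int b = 0; b < buckets; ++b)
            for (int p = 0; p < passes; ++p)
                for (int t = 0; t < tilesPerBucket; ++t)
                    coherent.access(b * tilesPerBucket + t);

        // Incremental: one sample pass over every bucket, repeatedly.
        for (int p = 0; p < passes; ++p)
            for (int b = 0; b < buckets; ++b)
                for (int t = 0; t < tilesPerBucket; ++t)
                    incremental.access(b * tilesPerBucket + t);

        std::printf("misses: coherent=%ld incremental=%ld\n",
                    coherent.misses, incremental.misses);
    }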

Nope, still in London for a bit, then off to NZ...


Hmmmmm.

So you're next in line to try to pull the sword from the Manuka stone?


Forward path tracing is what is used to render more than 95% of graphics you see in movies today. It's not great for closed spaces and VCM is better, but it's definitely possible to render everything with forward path tracing. People have been doing it for years, bidirectional methods have only started to get used in production very recently.


Forward path tracing with direct lighting only has been done for 'years' in a loose sense (about 4-5 years). Pure forward path tracing, without caching of secondary illumination, is being done now, but at great expense of render time and artist time to deal with the inevitable noise.

It is something that people do get to work in the end, but it is a consistently painful disaster.

So basically forward path tracing is not adequate. That doesn't mean it can't be forced to work through huge render farms, hacks, compositor painting and who knows what else.


I agree it's possible to do better than forward path tracing, but a big part of why nearly everyone switched to it was that it significantly reduces artist time, is a lot less painful and requires fewer hacks than the methods used before it.

The state of the art keeps improving, but I've never heard anyone call the switch to forward path tracing a painful disaster; quite the opposite.


Fewer hacks, yes; less painful I have to disagree with. It is the present and of course the future, but what I have seen is that the hacks shift to doing anything possible to reduce noise, and the artist's time shifts to doing whatever they can to deal with noise. It never ends up being 'fire and forget', because renders end up either overkill on CPU time or with noise somewhere in the image.

Forward raytracing as it currently stands is awful to use at a large scale for final motion-blurred images that can be sold to a client, but because it is simpler, it is still better than the alternatives. That is because getting the same results out of (REYES) Renderman was something left to a handful of gurus and fanatics.

Now we have a state where renders take 8 hours per frame per character per pass but people still like it better. So be it, but that is still very painful.


All fair points!

From your username, would you happen to be Marcos or someone else from Solid Angle? If so, big fan of Arnold. :)


Nope, sadly not. This username was one of the few computer-graphics-related usernames left on Reddit, and solid angle sampling is so much cooler than area sampling. You're not the only person who thought this, so it's probably about time I switch to a different username. I'm also a big fan of their work, though; I would love to work for that company once I'm done with university.

Btw, I also really enjoy reading your blog, keep up the good work. Looking forward to the day that you release the source of your renderer.


I am not sure, but can Renderman use the GPU? I read that if you use OSL in Cycles, rendering is restricted to the CPU only; the node editor can utilize the GPU, but OSL cannot.


I'm pretty sure every movie company you named could have rendered the same thing in Cycles just fine. You still can't name a single important thing that can't be done with Cycles.

Speed is good, but you cherry-picked some niche thing that's probably faster than Cycles (which you haven't provided evidence for). That's a silly argument.

EDIT:

Thanks for the updated post, much more substance.


The burden of proof here is on Cycles. There's 30 years of history showing off Renderman, and everybody in the CG industry knows that it's awesome. Not many people have even heard of Cycles. What movies have used it? Why is it better than the existing rendering software? Is it compatible with the Renderman spec? How easy is it to set up a render farm with it?


This. In theory Cycles can be used to make the same kind of movies Renderman can. Please name a few examples of cases where this was done.

On the flip side, when people say Renderman can be used to make production-level stuff, they can point to literally any film with CG made in the past 25 years as evidence.


Post updated with detailed list and links for each list item.

At studios I've been at, we did tests with various renderers. Sure, we could have rendered a film in Cycles, but it would have taken much longer.


I'm pretty sure you can't.

Let's say you want to render realistic skin using Cycles. You can't: the only BSSRDF profiles that Cycles offers are simple bicubic and gaussian profiles, and these wouldn't work for rendering realistic skin. Cycles doesn't even offer a simple dipole. Renderman, on the other hand, offers a dipole, it offers the state-of-the-art Photon Beam Diffusion, and it offers Pixar's recent approximate BSSRDF, which is almost as accurate as PBD and as easy to compute as a gaussian or bicubic profile. So you'd have to implement your own BSSRDF shader, which is only doable for large studios such as Weta that have their own R&D department.

Or let's say you want to render hair. You can do this with Cycles, but their importance sampling code is not state of the art (last time I checked), which means that you can probably expect double the amount of noise in Cycles. You could still use it for rendering, but computing time is quite limited (especially if artists want to compute previews), so you'd either have to implement your own hair shader, or purchase a ton of extra hardware.

Let's say you want to render a scene with tons of triangles and textures; you probably won't be able to do that in Cycles. They don't use special tricks for quantizing those triangles in memory. Their code for caching textures (sometimes multiple terabytes for a single scene) is also not as good as Renderman's, especially when using the GPU (a GPU isn't very good at constantly streaming terabytes of data into it), which means that your texture data either needs to be limited, or you'd need to throw a lot of rendering machines at it.

Cycles can do most stuff that Renderman (or Arnold) can, but that's not the point. If you paid me to work on a renderer fulltime for a year, I could produce a feature-complete renderer, but it wouldn't be optimized. The code in general wouldn't be optimized, but probably more importantly the ray intersection code and the importance sampling code wouldn't be, which means that renders will be slow and noisy. Pixar has a whole team working on optimizing their code and a whole team of researchers working on improving the importance sampling. Cycles is made by hobbyists who are doing an excellent job (Cycles is an amazing renderer for amateur users), but it's just not in the same ballpark as Renderman or Arnold.
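
To show what "unoptimized importance sampling means noise" looks like in the simplest possible case, here's a toy comparison (mine, not from any renderer): estimating the integral of cos(theta) over the hemisphere (exact value: pi) with uniform directions versus cosine-weighted ones. The cosine-weighted estimator has zero variance here; the uniform one is noisy.

    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        const double PI = 3.14159265358979323846;
        std::mt19937 rng(3);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        const int n = 1000;

        double uni = 0, uni2 = 0, cosw = 0, cosw2 = 0;
        for (int i = 0; i < n; ++i) {
            // Uniform hemisphere: pdf = 1/(2*pi), so cos(theta) is uniform
            // in [0,1] and the estimator is cos(theta) * 2*pi.
            double cu = u(rng);
            double eu = cu * 2.0 * PI;
            uni += eu; uni2 += eu * eu;

            // Cosine-weighted: pdf = cos(theta)/pi, estimator = pi (constant!).
            double ec = PI;
            cosw += ec; cosw2 += ec * ec;
        }
        double mUni = uni / n, vUni = uni2 / n - mUni * mUni;
        double mCos = cosw / n, vCos = cosw2 / n - mCos * mCos;
        std::printf("uniform: mean=%.3f var=%.3f\n", mUni, vUni);
        std::printf("cosine : mean=%.3f var=%.3f\n", mCos, vCos);
    }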

Cycles is an absolutely amazing renderer, but it just lacks a lot of stuff that you'd need during production of feature movies. If you want to use Cycles for your own work, I can absolutely recommend it; it will have everything you need, and it's open source!


Just one nitpick: Arnold supports the same BSSRDF profiles and has been used for realistic skin rendering in movies. The gaussian profile is actually suitable for rendering realistic skin: by using a combination of multiple gaussians, you can very accurately match measured human skin profiles.
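
For the curious, a sum-of-gaussians profile is easy to sketch (the weights and variances below are made-up placeholders, not a measured skin fit; a real fit uses several terms per colour channel):

    #include <cmath>
    #include <cstdio>

    // Normalized 2-D Gaussian in radius r with variance v:
    // integrates to 1 over the plane.
    double gauss2d(double v, double r) {
        const double PI = 3.14159265358979323846;
        return std::exp(-r * r / (2.0 * v)) / (2.0 * PI * v);
    }

    // Radial diffusion profile R(r) as a weighted sum of Gaussians.
    double profile(double r) {
        const double w[3] = {0.30, 0.50, 0.20};    // placeholder weights
        const double v[3] = {0.005, 0.05, 0.30};   // placeholder variances (mm^2)
        double R = 0.0;
        for (int i = 0; i < 3; ++i) R += w[i] * gauss2d(v[i], r);
        return R;
    }

    int main() {
        for (double r = 0.0; r <= 2.0; r += 0.5)
            std::printf("R(%.1f mm) = %.5f\n", r, profile(r));
    }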


Yeah, such profiles can certainly be used for skin rendering. I know that SPI used the method you mentioned on a few movies. There will be visible flaws, though: the skin, and especially the lips, will look too waxy when using gaussians or bicubics. Weta Digital used Quantized Diffusion on Prometheus for this reason [1]. Pixar has an upcoming talk on a simple yet highly accurate BSSRDF profile at this SIGGRAPH, so let's hope the folks at Blender implement that in Cycles.

[1] http://www.fxguide.com/featured/prometheus-rebuilding-hallow...


> it will have everything you need and it's open source!

Not everything for animation, yet. BTW, for those who don't know, Solid Angle is the company behind the Arnold renderer, and the creator of Cycles, Brecht Van Lommel, left the Blender Foundation not so long ago to work for them. The quantity of people contributing to Cycles is pretty small compared to other areas of Blender.


> I'm pretty sure every movie company you named could have rendered the same thing in Cycles just fine.

Movies aren't made by software, they're made by massive teams of people who share overlapping expertise.

Do you want to write Lua for Kerbal Space Program, or Perl for NASA?


http://renderman.pixar.com/view/movies-and-awards

Although it should be pointed out PRMan wasn't the only renderer used for quite a few of these movies (definitely the later ones) - multiple studios work on movies these days, and until a year ago other renderers like Arnold and VRay were very close to knocking Pixar completely out of the market (even ILM switched to using Arnold for a couple of years).



