I've spent my entire life studying this stuff and feel extremely lucky on so many levels: that we have these incredible computers which can actually do the computation, that we happen to have such a powerful theory that can model and even predict visual reality, and perhaps most of all, that a single person can learn all this stuff and carry it around with them everyday - it's really changed how I see the world.
My deepest thanks go to those who have developed and passed all this on through their papers and books; it's literally what I'm getting the meaning of life from.
What I also think is amazing is that algorithms improve all the time. For example, the Blender Cycles render engine was recently rewritten; using the latest knowledge and algorithms, it is now more than twice as fast.
Yep, and that design had already been presented ages ago[0] by the Finnish geniuses who basically designed Nvidia's ray tracing tech[1] (as well as the state of the art in GANs[2]), and was even implemented in some lesser-known renderers like Indigo Renderer :)
These days I think the most exciting research is in path guiding and many-light sampling techniques.
Yeah, except that Cycles' focus isn't on accurate physical rendering, but rather on quick-and-dirty, visually pleasing approximations.
Its primary purpose is to be used by artists for SFX-type work, but if you're looking to make really accurate renderings of complex lighting scenes (e.g. accurate architectural renderings), I would not use Cycles; look at LuxCore [0][1] instead.
I would say very close to the top is the amazing depth of sampling methods, in particular Quasi-Monte Carlo (QMC) methods. I'll try to give a small sample of that.
Basically what you're doing in MC is estimating expected values (and thereby integrals: average height times base width equals area under the graph) using random numbers, to be roughly thought of as a rnd() function which gives you back a float in [0,1).
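To make the "average height times base width" idea concrete, here's a minimal sketch in Python (function names are mine, purely for illustration):

```python
import random

def mc_integrate(f, n=100_000):
    # Average height of f at n uniform random points in [0, 1),
    # times the base width (here 1), estimates the area under f.
    return sum(f(random.random()) for _ in range(n)) / n

# Estimate the integral of x^2 over [0, 1); the exact value is 1/3.
estimate = mc_integrate(lambda x: x * x)
```

The catch is that the error of this estimator only shrinks like O(1/sqrt(n)), which is where the quasi-Monte Carlo ideas come in.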
However, using actual random number generators gives a very slow convergence rate, as the numbers chosen will tend to clump; on the other hand, if you just use perfectly evenly spaced points, you'll suffer from correlation problems.
Consider a 1-D numerical integration problem: instead of using rnd() to pick points in your integration range, you start with some seed value in [0,1), and each time you need a new number, you add phi - 1 (the fractional part of the golden ratio, about 0.618) to the existing number and mod with 1.0 to get the new one. If you plot this sequence on a [0,1) number line, you'll see that it divides up the interval into finer and finer pieces really nicely, magically keeping a good distance from all the previous values, without even knowing them!
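That additive recurrence is only a few lines; a rough Python sketch (my own names, seed chosen arbitrarily):

```python
import math

def golden_sequence(seed=0.5, n=16):
    # Repeatedly add the fractional part of the golden ratio
    # (phi - 1, about 0.618) and wrap around with mod 1.0.
    step = (1 + math.sqrt(5)) / 2 - 1
    points, x = [], seed
    for _ in range(n):
        points.append(x)
        x = (x + step) % 1.0
    return points
```

Sorting the output and looking at the gaps shows the "magic": at any n, the points carve [0,1) into at most three distinct gap lengths (the three-distance theorem), instead of the clumps and voids you'd get from rnd().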
Why does this work? Well, there's a lot of beautiful theory about phi (also relating to biology and nature, e.g. seashells and the distribution of sunflower seeds), in particular how it is in some sense (continued fraction) the "most irrational" number. Does it work well for other numbers, and can it be extended to higher dimensions? Sure, there are some interesting things you can do: http://extremelearning.com.au/unreasonable-effectiveness-of-...
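From memory, one of the higher-dimensional generalizations in that article is an additive recurrence per dimension built from the plastic constant (the "R2" sequence); a rough sketch, with the constant written out from my recollection rather than any reference:

```python
def r2_sequence(n=100):
    # The plastic constant: the real root of x^3 = x + 1 (~1.3247),
    # playing the role phi plays in one dimension.
    g = 1.32471795724474602596
    a1, a2 = 1.0 / g, 1.0 / (g * g)
    # One additive recurrence per dimension, each with its own step.
    return [((0.5 + (i + 1) * a1) % 1.0, (0.5 + (i + 1) * a2) % 1.0)
            for i in range(n)]
```

Plot these against the same number of rnd() pairs and the difference in clumping is immediately visible.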
This isn't even scratching the surface of things like the Halton, Hammersley and Sobol sequences, the human visual response to different kinds of noise that makes blue noise preferable, etc. Here's a wealth of recent research on all that: https://sites.google.com/view/myfavoritesamples
Rendering is particularly beautiful in that it touches on so many fields, from pure maths, to applied maths and numerical methods, to physics, to human biology and perception, aesthetics, both low- and high-level programming concerns, even philosophy: if we can simulate visual reality to such high precision, it lends credence to the idea that we ourselves are simulated. To me the apparently obvious truth of this is up there with understanding that the iron that makes our blood red can only be formed in supernovae.
In case anyone is confused by the idea that “random numbers [...] will tend to clump”, I can’t resist mentioning the glowworm story referred to in the Wired article “What does randomness look like?”[1] and how people showed the seemingly purposeful clustering of V-1 impacts during the London Blitz was purely due to randomness[2].
IIRC, he moved on from inventing a brand-new rendering methodology inspired by old research in nuclear physics [0], to designing the S2 system at Google [1], to building one of the largest money-making machines in the ads world at Google [2][3].
Note: the 4th edition of this book will likely be out by the end of this year, and will cover GPU path tracing. (I believe the free web edition will be released a few months after the hardback.)
The Early Release (still in-development) version of pbrt-v4 can be found here:
I am eagerly waiting to buy one. I am extremely lucky that I took stellar graphics courses during my university years. They have by far been the most awesome things that I have done in computer science. The high one can get from a good render coming out of a light simulation is immensely pleasing.
If any of you have time to go through the book, go through it slowly and carefully. There are a lot of nuances that aren't apparent on the first attempt, but it is comprehensible to most of us. The authors have explicitly marked the chapters that can be skipped on the first pass. No matter how many times one goes through the book, it will still feel like a gem!
Oh excellent. That’s something I will definitely look at. I remember having a lot of fun with some of the pbr exercises many years ago, and haven’t had a chance to look at how to do this sort of stuff efficiently on GPUs.
Haven't seen it mentioned in here: Matt Pharr is also behind ispc, a C-like SPMD language (a subset of C, more or less?) with an optimizing compiler (based on LLVM, IIRC). It can target Intel AVX (up to AVX-512, I think?) and is quite fun to work with. It can help out if you're in a rut of using intrinsics and cursed CUDA kernels. A fresh take on a programming model.
It's very interesting to see not only the pure tech part, but also the disheartening lack of interest at Intel in the subject. A missed opportunity, I think... You can feel a lot of fun turning into a lot of... sadness? by the end.
Really a big fan of all things Matt Pharr; I hope he has the time of his life at Nvidia and that they put him to good use!
When Knuth came up with the term/idea of "literate programming" and the idea of programs as works of literature, in the paper (http://literateprogramming.com/knuthweb.pdf) he joked (I think?) that he hoped that, just as there are Pulitzer prizes for literature, there may one day be Pulitzer prizes awarded to computer programs.
Well, reality is even stranger than imagination: no computer program has won a Pulitzer prize, but this computer program won an Oscar! Here's a video of the awards ceremony, where Kristen Bell and Michael B. Jordan introduce the recipients, and at 2:12 Matt Pharr comes up and says "needless to say, when we started writing a book we never expected an Academy Award would be the end result of that effort" and thanks Donald Knuth for literate programming: https://www.youtube.com/watch?v=7d9juPsv1QU
If you would like to take some first steps into this domain I can recommend Ray Tracing in One Weekend [1]. Not only will you learn how to write a path tracer, but it also touches on things like motion blur and animation. For me it was one of the best starting points because it was very simple to follow.
“Physically Based” always seemed to be a grammatical error to me.
“Physically” is an adverb, which should modify an adjective or verb. So parsing the phrase, “physically based” implies that “based” is used as an adjective or verb, which would imply that “physically based rendering” is a special form of “based rendering”, which makes no sense.
“What kind of rendering is it? Based rendering. What kind of Based rendering? Well, Physically Based, of course.”
It should probably instead be “physics based rendering”, right?
Insider fact on why this makes sense: the intention was not to call it "physics based", because we are approximating a lot of things. It is named "physically based" because it gets as close as it can to reality, but it is not reality! That way we always stay aware that we are just running a simulation, not the real thing. The authors also make this point in the introduction section. [0]
I always love seeing PBR here. I work in atmospheric science, in particular modelling of radiation in the atmosphere, and the equations that we use are identical to those of PBR. The main difference is that in the atmosphere you are more concerned with processes along the ray, e.g. Rayleigh scattering, clouds, or absorption/emission from trace species in the atmosphere, while in rendering it is all about surface effects.
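For anyone curious what "processes along the ray" means concretely, the simplest building block both fields share is Beer-Lambert attenuation; a toy sketch assuming a homogeneous medium (the function name is mine):

```python
import math

def transmittance(sigma_t, distance):
    # Beer-Lambert law: the fraction of light surviving travel over
    # a given distance through a homogeneous medium with extinction
    # coefficient sigma_t (absorption plus out-scattering).
    return math.exp(-sigma_t * distance)
```

In a renderer, sigma_t would come from a participating medium like fog or smoke; in an atmospheric code, from the optical properties of gas and cloud layers, with sigma_t varying along the ray instead of being constant.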
> while in rendering it is all about surface effects.
I would have agreed with you back in the 80's.
But since then, taking care of atmospheric / volumetric effects in rendering is absolutely essential to rendering realistic scenes, especially in movie SFX.
I'm not going to claim that SFX folks care about being physically accurate when doing volume rendering, but they do care a very great deal about clouds, fog, subsurface scattering, light rays through mist, etc.
One of the best technical books I've read. A lot of books on the subject are math heavy but lacking in implementation detail, others gloss over the math and implement the easy stuff.
This book has a great mix IMHO, going into the math as well as tricky implementation details yet in a very approachable manner.
Lots of great times were had as a result of this book.
I agree. I also started this book, but in order to learn, I have to implement things myself. If everything is already provided, though, how do you avoid the trap of just copying and pasting everything, which gives you the illusion that you learned it?
I read in another book about how to learn that there is something called the illusion of competence: if you passively read something a couple of times, you will think you learned and understood it, but in reality you just recognized it. Without those external triggers and cues, you couldn't solve it on your own.
> This book assumes existing knowledge of computer graphics at the level of an introductory college-level course, although certain key concepts such as basic vector geometry and transformations will be reviewed here.
Had this book on my "to read" list for a while. Given that it appears to be more rigorous than my previous contact with the world of rendering, and since this is just a hobby of mine:
What prior knowledge (topics, math, other sources or books) offers solid preparation for fully tackling the PBR book?
In terms of math, you will want to have at least a decent understanding of derivatives and integrals, as well as "basics" like logarithms/exponentiation, trigonometry and sums.
While it goes through vectors and such from the bottom up, it does so quite quickly, so it will probably be a bit steep if you haven't had any linear algebra before.
Partial derivatives show up a lot, so it's probably good if one has some prior experience with them.
Overall though the book doesn't assume a lot as I recall, and goes through the constructions from a fairly basic level and up.
Computer Graphics: Principles and Practice is a fairly reasonable substitute for an introductory graphics course, along with a good understanding of vectors and matrices. Other than that, all you really need is the willingness to go and find resources when you don’t feel you fully grasp something.
You might also want to try this sort of thing in a study group. You definitely get more out of exercises if you have people to discuss them with.
I've only read the first edition, but I'm a bit surprised to read that as I don't recall them assuming any knowledge, except maybe some basic geometry and vector math.
I think there has always been some assumption that you know a bit about raster images, and colour and compositing. I remember it being the type of thing that anybody who has been involved in graphics code probably already knows so well that it’s second nature, but I also know these are things not every programmer has internalised.
So, can I just start reading this book chapter-by-chapter implementing what I learnt in a language of my choice? What is the optimal way to go through this book?
Totally, you can do it. But the book is so huge that you will miss most of the details, so you can also use it as a reference. First follow Ray Tracing in One Weekend to get the overall idea of path tracing; then you can start on PBRT.
As for the language of choice, anything will be fine. The book will build up a solid foundation. The only thing that can stop you is the frustration you may face when you can't figure things out. It can be confusing at times, but please give it your time and let it sink in. After some time, you can review it with completely fresh eyes and fix a lot of issues!
The one thing to keep in mind is that even though it's written in a literate programming style, it's not in order. So implementing it in your own language (say, Rust) is completely possible, but it won't be easy, especially since the book's code is optimized, idiomatic C++, and its patterns can be challenging to get past the borrow checker.
The code in the book is in C++, and several of the exercises assume you are experimenting with the provided renderer. I seem to remember at least one exercise led me to learn quite a bit about AVX intrinsics to see if I could beat the compiler at vector math.
Rust may be tricky because you will need to render pixels in parallel, and most higher-level languages take a lot of skill to make fast.