This is an unpopular opinion, but nobody knows how to make anything look real. The quicker you accept and internalize this, the easier it is to dive into graphics programming. It reveals graphics programming for what it is: A bag of tricks that mostly look good in certain constrained circumstances.
When I first started out, I chased book after book looking for knowledge. "Someone must know how to make things look real, right? Everyone is talking so authoritatively on the topic." No, no one knows. There's no source of knowledge that represents the cutting edge of the field, because the cutting edge is whatever happens to look pretty good today. And that's mostly thanks to very good art, not very good techniques.
Just dive in and start doing geometric puzzles. Look at it like a game, not like a quest for knowledge. If you have fun with it you'll go further than any book will take you.
I love the motivation and message of this comment - to not worry and just dive in. But a few people most definitely do know how to make some very realistic graphics. So much so, that I guarantee you've seen some in movies that you didn't know was fake. I used to make CG movies, and I can't spot the best CG anymore.
The trend in film rendering is, across the board, graduating from "tricks" that work to physically based processes, to whatever degree is feasible. The setups are getting simpler, not more complex. The surface modeling, material modeling, color processing, lighting and rendering have all moved by leaps and bounds in the last 10 years. It is the process of shifting from art to physics, and the job of CG lighting technicians is becoming closer to the job of stage lighting technicians, because the lighting and rendering is physically based now.
It's possible there are developments you don't know about. It's also possible you're suffering from evidence of Sturgeon's law and not seeing the top 10% clearly; there is plenty of crappy CG that doesn't look real, the vast majority of it doesn't look real. But the best of it is realistic and getting better every single year, and the people studying it do know some ways (and are currently adding more) to improve the realism when the computational power arrives.
As I mentioned below, the only technique that produces effective results is if you mix actual, real footage with CG. It's true that people can't spot the CG in that situation, but that's different from the discussion of how to create fully-simulated realistic video.
I'll give you a simple way to defeat any question of whether we know how to make something look real: What does it mean to multiply two colors?
If you chase down the logic, the true answer is "It's meaningless. It just happens to be an approximation that looks good in most cases." But it has nothing to do with how light works in real life. Yet every engine multiplies colors because the alternative is too computationally expensive -- and it still wouldn't produce realistic results because we don't source our art from real life. Artists typically control the content, and any art-driven pipeline is doomed to look pretty good but not real.
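To make the point concrete, here is what "multiplying two colors" amounts to in practice: a per-channel product of two RGB triples. This is a sketch with made-up values, not any particular engine's code:

```python
def multiply_colors(surface, light):
    """Component-wise multiply: the standard engine approximation for
    a diffuse surface color lit by a colored light. Both arguments are
    linear RGB triples in [0, 1]."""
    return tuple(s * l for s, l in zip(surface, light))

# Hypothetical values for illustration:
red_brick = (0.7, 0.3, 0.2)    # surface albedo
warm_light = (1.0, 0.9, 0.7)   # incoming light color

print(multiply_colors(red_brick, warm_light))
```

Note there is no wavelength anywhere in this computation; whatever physical meaning it has is inherited from how well three broad channels stand in for a full spectrum.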
> the only technique that produces effective results is if you mix actual, real footage with CG
This isn't true.
> I'll give you a simple way to defeat any question of whether we know how to make something look real: What does it mean to multiply two colors? If you chase down the logic, the true answer is "It's meaningless."
It's not meaningless at all, it's a close approximation of light absorption that happens to be measurably and perceptually indistinguishable from the physical spectral absorption process, save for a few corner cases that are being worked on. If the result is 99.9% accurate physically, and 100% perceptually indistinguishable to humans, then it's a valid predictive physical model. It has everything to do with the result of what light does in real life, and if it didn't, we wouldn't be using it to approximate light.
Your thinking here seems to back up my suggestion that you might have missed out on some of the recent developments. Subsurface scattering, for example, is modeling absorption more physically, and replacing the simple multiplication with a simulation process, because we now have the computational power to do so.
Your argument is attacking the non-simulation aspects of rendering without addressing whether a simulation that is simpler than real life is acceptable. If I can't tell the difference, does it count? Because I'm 100% certain that color multiplication is not the problem when it comes to CG not looking real.
It's not meaningless at all, it's a close approximation of light absorption that happens to be measurably and perceptually indistinguishable from the physical spectral absorption process, save for a few corner cases that are being worked on.
It's amusing that you brush it off with "Oh, there are a few corner cases." Those corner cases are why it doesn't look real.
And no, it's not 99.9% accurate. You may be thinking of constrained scenes, where e.g. you shine a laser on a substance of a specific color and then measure the resultant color combination. But the complexity of real life defies such analysis.
If you're going to say I've missed some recent work, you'll need to cite sources. Then we can debate those.
EDIT: To clarify:
Your argument is attacking the non-simulation aspects of rendering without addressing whether a simulation that is simpler than real life is acceptable. If I can't tell the difference, does it count?
My argument is that if you get a bunch of people together, show them simulated video and real video, and ask "Which of these are simulated?" they will correctly identify the simulated video as "not real" with significant accuracy -- given modern techniques, probably >95% accuracy. The simulation needs to be of a non-trivial scene, like a waterfall or a valley. When you show real video side-by-side with simulated techniques, there's no contest.
If we truly knew how to make simulated video that looks real, without mixing any real-life footage, then the observers in the above scenario wouldn't be able to do any better than random chance. But they can, because we can't.
I did cite a source: subsurface scattering is an example of simulating light absorption. It is an example of things that you just claimed aren't getting better actually getting better.
You're attacking my arguments and getting more hyperbolic without any examples. What specifically doesn't look real? What are you actually claiming? What is your criteria for whether something is "real"? What corner cases are you thinking of that cause color multiplication to break down so frequently that it's a bad approximation most of the time? Can you give some examples of state of the art CG that intended to improve realism but doesn't look real?
I'm not claiming that everything looks real, nor that all CG is realistic. I'm claiming that CG is getting better over time, and that some things are already indistinguishable from real. The number of CG things that look realistic is going up over time, and it used to be 0. There is a trend here, and it contradicts your original thesis that nobody can render anything realistically.
My 99.9% number wasn't a claim, it was a made up number (which I thought was obvious, sorry). I said "If the result is 99.9% accurate... then it's a valid model" to back a point: the point is that if multiplication is predictive then it's a valid model. That's how all of physics works. Acceleration under gravity is an approximation.
You haven't demonstrated that multiplying doesn't work, you've only stated an opinion. I'd like to see some examples of what you mean, because it appears to work very well from where I'm sitting. The colors of grass, bricks, wood -- diffuse materials -- are very closely approximated by multiplication, enough that we can in fact measure how good the approximation is, and humans cannot tell the difference. Therein lies the problem with your argument -- if I can't tell the difference, that is my definition of realistic. It doesn't matter what happened under the hood. You seem to be claiming that only reality is good enough to be realistic, because anything else is cutting corners.
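To illustrate the kind of measurement I mean, here is a rough sketch comparing true spectral multiplication against per-band ("RGB-style") multiplication. The spectra are made up, and three box-filter bands stand in for real CIE color matching functions, so this is only a toy version of the real measurement:

```python
import math

# Made-up smooth spectra sampled at 1 nm from 400 to 699 nm.
lam = [400 + i for i in range(300)]
reflectance = [0.2 + 0.5 * math.exp(-((l - 650) / 80) ** 2) for l in lam]  # reddish surface
light = [0.6 + 0.4 * (l - 400) / 300 for l in lam]                          # warm-ish illuminant

def band_avg(spectrum, lo, hi):
    """Average a spectrum over one crude box-filter band."""
    vals = [s for l, s in zip(lam, spectrum) if lo <= l < hi]
    return sum(vals) / len(vals)

# Ground truth: multiply the full spectra, then reduce to bands.
product = [r * e for r, e in zip(reflectance, light)]

errors = []
for lo, hi in [(400, 500), (500, 600), (600, 700)]:       # "B", "G", "R"
    true_val = band_avg(product, lo, hi)
    approx = band_avg(reflectance, lo, hi) * band_avg(light, lo, hi)
    errors.append(abs(approx - true_val) / true_val)

print([round(e, 4) for e in errors])
```

For smooth spectra like these the per-band relative error stays in the low single-digit percent range, which is the sense in which the approximation is measurably good; spiky spectra (lasers, some LEDs) are where it breaks down.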
I'm not sure I understand what you mean about real life defying BRDF measurement. One of the ways that CG is getting more realistic is precisely through various gonioreflectometers, some of which shine lasers and measure the output from all angles. Material catalogs are currently being constructed and sold to CG companies using higher and higher resolution measurements of exactly what you're claiming isn't possible and doesn't help. People buy them because they improve realism.
> My argument is that if you get a bunch of people together, show them simulated video and real video, and ask "Which of these are simulated?" they will correctly identify the simulated video as "not real" with significant accuracy -- given modern techniques, probably >95% accuracy.
Every year Autodesk runs the test "Fake or Foto". http://area.autodesk.com/fakeorfoto Less than 10% of people are getting them all right this year, and a considerable number of people are under the 50% line. This isn't scientific, of course, but see if you can score 100%. This is an indicator that CG is pretty good. Will you admit it if you don't score 100%?
Earlier you made an argument that stills are looking okay, but moving things aren't. The problem with that argument vs color multiplication is that color multiplication is used on stills, so if that's what's breaking down, stills should be obviously unrealistic.
It's very difficult to figure out what the core of your argument is. We can't create simulated video indistinguishable from real life, and I've given you an experiment that will prove that we can't.
You haven't demonstrated that multiplying doesn't work, you've only stated an opinion. I'd like to see some examples of what you mean, because it appears to work very well from where I'm sitting.
It's not an opinion that multiplying colors has nothing to do with how light behaves in real life. I even said that it was an approximation that works fairly well, so if you're going to simply ignore the things that I did say, this discussion isn't going anywhere productive. The point is that it's an approximation, and it's partly why we subconsciously recognize simulated video as fake.
Every year Autodesk runs the test "Fake or Foto"
Obviously, photos don't work. It doesn't pass the realism test. This conversation is about video -- the human visual system processes video completely differently. It's not just a matter of taking still frames and stringing them together. The test is invalid. If you use video (of non-trivial length, with non-trivial scene complexity -- any nature video will do fine), you'll see the participants' accuracy skyrocket to nearly 100% correctly identifying simulated video.
If you're truly curious about the reasons why a simulated video looks fake, look into some books about the neuroscience of visual processing and color perception. One of the fundamental tenets is that colors affect colors around them. To make something that looks real, you need to get the colors exactly right. Even a small departure from reality will ruin the entire effect. That's partly why multiplying colors is problematic, since it results in a departure from real life behavior. The other half of this is to ignore any test involving still frames. We don't perceive still frames the same way as video -- it's why video compression is different, for example -- so we can't use stills in any test of realism.
Whenever someone points out that we really don't have a clue how to make simulated video indistinguishable from real life, someone comes out of the woodwork to point out all the reasons why it's right around the corner. That's been false for a decade, and it's not looking any better for the upcoming decade. It's easy to prove me wrong: Get a bunch of simulated videos together and show them to observers, mixed with real videos. They'll spot the real videos every time, if you don't use constrained or simplified scenes. Nature videos work well.
It seems like people just don't like the idea that graphics programming is a bag of tricks. They want it to be deeper. But you can throw in all the physically-based techniques you want, and the resulting video still won't look real.
I have to go to an appointment now, but maybe we can continue this in a few hours if you want.
> It's very difficult to figure out what the core of your argument is. This conversation is about video ...
Ah, I see the problem. You're right. I thought I was debating the idea that "nobody knows how to make anything look real.. no one knows." I just checked, and your first post didn't say anything about video. Your second one mentioned it in passing, but I didn't realize it was a constraint on what I could talk about. I see why I'm confused, and why I'm confusing you. I'm sorry! Honestly. I am indeed thinking of some other things besides 100% fully simulated video of nature that is unconstrained, when I try to make the claim that some people do know how to render some things realistically.
Here's a pretty good CG video, in my opinion. Which parts look fake to you at a glance? http://vimeo.com/15630517
> You appear to be offended by the idea that we can't create simulated video indistinguishable from real life. But we can't, and I've given you an experiment that will prove that we can't.
That's a negative. Personally, I don't think I can prove a negative with any experiment. Are you sure it's provable?
Here's the core of my argument, the part that I thought I was debating. I think realism (undetectable to people) has been achieved with: material samples, constrained physics simulations, stills images of architectural scenes, limited still images of natural scenes, elements in video (mixing live and CG footage), fully CG video environments for short periods of time, humans & faces but only in fairly constrained situations for short periods of time. I don't think realistic humans have been achieved in general. I do think realistic simulated video - that meets your criteria - will happen eventually, and I don't know when or claim anything about when.
> Obviously, photos don't work.
But you can demonstrate some color multiplication problems in fake photos, right? You're ruling out still images yet the only problem you've cited is one that affects every single pixel of all still CG images.
> Get a bunch of simulated videos together and show them to observers, mixed with real videos. They'll spot the real videos every time, if you don't use constrained or simplified scenes. Nature videos work well.
Okay, fair enough. I don't know what "constrained or simplified" means. Your goal posts could be anywhere, so I definitely can't win. I don't think this is easy though -- the best CG is very expensive still, making something that looks very realistic is difficult. I could agree here and now that no CG ever rendered yet passes the unconstrained environment and complexity test when it comes to realism, and I would agree that realism is easier to achieve the more constrained and simplified the scene is. My argument is that the threshold for where too complex triggers unrealism is moving in the direction of more complex over time.
> It seems like people just don't like the idea that graphics programming is a bag of tricks. They want it to be deeper. But you can throw in all the physically-based techniques you want, and the resulting video still won't look real.
Now I'm getting really confused. Graphics is a bag of tricks, I don't have a problem with saying that, so I don't know which people you're talking about. Those of us practicing graphics have been saying that all along.
But you're saying that it can never happen? Using all the physically based techniques now existing and ever to be invented, it will never happen? Even if I simulated reality outright, I still wouldn't get there; no simulation ever will?
I can see that you've thought about this a lot, and I can see that you know a lot about graphics. I honestly thought you were saying we're not physically based enough yet, and I was trying to show how we're getting there, but now I'm not sure I understand what your claim is, or what we're talking about. I do suspect we're getting down to what my friends call the "dictionary problem" - agreement that is accidentally violent due to miscommunication over a few words.
You can easily blow his claim out of the water by showing a video that is fully computer rendered but looks real.
Parts of the video you offered look very realistic, but the content breaks the sense of realism, e.g., vegetables shattering into tiny pieces, rocks tumbling upwards, etc.
i read this discussion as sillysaurus3 arguing the wrongness of ALL models, while not acknowledging dahart's examples of utility: constrained/simplified for some is good enough for others.
> It's not meaningless at all, it's a close approximation of light absorption that happens to be measurably and perceptually indistinguishable from the physical spectral absorption process, save for a few corner cases that are being worked on. If the result is 99.9% accurate physically, and 100% perceptually indistinguishable to humans, then it's a valid predictive physical model.
> Subsurface scattering, for example, is modeling absorption more phyically, and replacing the simple multiplication with a simulation process, because we now have the computational power to do so.
Surely everyone has to accept that whatever we do here is always going to be an approximation: we aim for a result that looks similar to reality without following what's really going on at a low level. We're not going to be able to model individual photons, atoms, electrons, quantum effects etc., so it's always going to be an approximation/abstraction.
The goal should be whether it looks indistinguishable not if it is indistinguishable.
Yes, absolutely. If the metric for realism isn't perceptual indistinguishability, then we might need to rethink what the word "realism" means. I don't think anyone here is arguing against that, though. I think @sillysaurus3 is saying that the approximations we make in our simulations will always have large errors that cause visible artifacts, that unrealistic CG will always be perceptually distinguishable from real video.
Modeling individual photons is happening in research rendering code, just not on the same scale as reality. Path tracing renderers using spectral colors & modern shading models are getting there. We might use a few million photons at a time to render a room instead of the quadrillions that reality uses. People are thinking about modeling individual atomic interactions, electrons, and quantum effects. There might even be experimental renderers already that do this, I wouldn't be the least bit surprised. If not, it'll happen pretty soon.
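As a toy illustration of the photon-counting idea, here is a Monte Carlo estimate of the hemispherical irradiance integral, whose exact value is pi. It shows the "a few million samples instead of quadrillions" tradeoff in miniature: the error shrinks as the photon count grows. This is a sketch, not a renderer:

```python
import math
import random

def estimate_irradiance(n_photons, seed=1):
    """Monte Carlo estimate of the hemisphere integral of cos(theta) d(omega),
    which equals pi analytically. For uniform solid-angle sampling of the
    hemisphere, cos(theta) is itself uniformly distributed on [0, 1]."""
    rng = random.Random(seed)
    mean_cos = sum(rng.random() for _ in range(n_photons)) / n_photons
    return 2.0 * math.pi * mean_cos   # 2*pi = hemisphere solid angle

for n in (100, 10_000, 1_000_000):
    print(n, estimate_irradiance(n))
```

The standard error falls off as 1/sqrt(n), which is exactly why renderers can get away with far fewer "photons" than reality uses: past a certain sample count the residual noise drops below what a viewer can see.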
BTW, there is no super distinct line when you're talking about the simulation of atomic interactions and electromagnetism, even for the 100% full simulation of all subatomic particles individually. I'm not sure that's possible yet, but I am pretty sure it's unnecessary for "realism" in the context of film making.
There are shading models that account for electromagnetic & atomic effects. Physically based shaders obey Helmholtz reciprocity and energy conservation, among other things. Some of them account for multiple atomic level reflections. This is a statistical way of accounting for atomic effects, just like color multiplication is a statistical way of accounting for spectral absorption. New shading models every year are accounting for smaller and smaller margins of error from the previous approximations. Check out this crazy shading model from 25 years ago that is based on electromagnetism theory and accounts for atomic level shadowing & masking: http://www.graphics.cornell.edu/pubs/1991/HTSG91.pdf I think this paper still wins the award for most equations in a Siggraph paper. It didn't win any realism awards though. :P
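For a concrete (and deliberately trivial) instance of those two properties, a Lambertian BRDF is a constant, so Helmholtz reciprocity holds by construction, and numerically integrating it times cos(theta) over the hemisphere recovers the albedo, which energy conservation requires to stay at or below 1. A sketch with an assumed albedo of 0.8:

```python
import math

def lambertian_brdf(albedo, wi=None, wo=None):
    """Perfectly diffuse (Lambertian) BRDF: constant, independent of the
    incoming/outgoing directions, so f(wi, wo) == f(wo, wi) trivially."""
    return albedo / math.pi

def hemispherical_reflectance(brdf_value, steps=1000):
    """Integrate brdf * cos(theta) over the hemisphere (midpoint rule).
    Energy conservation requires the result to be <= 1."""
    h = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) * h
        # d(omega) = sin(theta) d(theta) d(phi); phi integrates to 2*pi
        total += brdf_value * math.cos(theta) * math.sin(theta) * h * 2 * math.pi
    return total

print(hemispherical_reflectance(lambertian_brdf(0.8)))  # recovers ~0.8
```

Real physically based shaders (GGX and friends) run the same kind of check against far more complicated lobes; this is just the simplest BRDF that passes it.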
I wouldn't say that in a material model you're actually multiplying colors together. It just happens that when you record the amount of reflected low-, medium-, and high-frequency light under a spectrally flat light source, you can display these data as light and perceive a color.
With fancier equipment, you could record the full reflectance spectrum from every angle -- and polarization, too, why not -- and be able to predict what the (simple) material would look like under every light source. Maybe you're worrying about fluorescence, but then just record the data under varying monochromatic light sources.
If you're just talking about how RGB is spectrally anemic, sure, that's why lighting your house with only R, G, and B LEDs makes everything look so strange.
Spectral multiplication, as I understand it, is just a consequence of the law that if you increase the intensity of received radiation by a certain factor, then the intensity of the reflected radiation will increase by the same factor. Is there some important nonlinearity I'm missing? Where is the meaninglessness?
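The nonlinearity isn't in the physics, it's in the reduction to three channels: two materials with identical band averages (the "same RGB") can reflect completely different amounts of a narrow-band light, and per-channel multiplication cannot distinguish them. A toy sketch with made-up spectra, using a single red band:

```python
# Sample one band, 600.0 .. 699.9 nm, at 0.1 nm resolution.
lam = [600 + i * 0.1 for i in range(1000)]

flat = [0.5] * len(lam)                               # flat 50% reflector
spiky = [1.0 if l < 650 else 0.0 for l in lam]        # same band average (0.5)
led = [1.0 if 660 <= l < 680 else 0.0 for l in lam]   # narrow red LED

avg = lambda xs: sum(xs) / len(xs)

# Same "R channel", so per-channel multiplication predicts identical results:
pred_flat = avg(flat) * avg(led)
pred_spiky = avg(spiky) * avg(led)

# The spectral ground truth disagrees: the spiky material reflects nothing,
# because its reflectance is zero everywhere the LED emits.
true_flat = avg([r * e for r, e in zip(flat, led)])
true_spiky = avg([r * e for r, e in zip(spiky, led)])

print(pred_flat, pred_spiky, true_flat, true_spiky)
```

This is exactly the "RGB is spectrally anemic" point: under broadband light the error averages out, under narrow-band light it does not, which is why rooms lit by pure R, G, and B LEDs look strange.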
Why not just win the argument by linking to a (YouTube) video demonstrating photorealistic ray tracing? :) I would if I could, but I haven't found any raytraced animation that is indistinguishable from recorded video.
Sadly, the best CG is paid for and expensive, thus not available on YouTube. I also don't know where the best CG in recent movies is, because I can't tell it's CG, and they don't always talk about it.
You can find lots of articles like "10 Scenes You Didn't Know Were CGI" https://www.youtube.com/watch?v=61ETzC1UbM4 I doubt this meets the criteria or will satisfy @sillysaurus3, they're almost all a mix of real footage & CG.
Sorry, I'm not convinced. :) The video talks about editing footage to add or remove details using cgi. The images are a better example, but those are all comparing cg vs heavily retouched images.
on the blu-ray disc of pixar's finding dory, closely examine the sand and water (ignore the living creatures) in the first few minutes of the short film piper. you know a-priori it's not real, but i'd like to hear people describe what they think looks off with the environments (not the creatures). try to normalize that with the intent of a director trying to craft an identical ambiance from a recorded video (color temperatures may be made warmer in post, increased saturation, more vignetting, etc).
in particular, i don't think people watching that night-time sequence of deepwater horizon in theaters would identify it as entirely computer generated.
From my experience this is very much true of the current state of real-time rendering (and there's nothing really wrong with that, because it's just not possible to run anything resembling physically accurate algorithms in real-time). However, in the world of offline rendering and raytracing, there has been considerable work on doing physically accurate rendering. Of course a lot of approximations are still made due to memory/CPU constraints, but it is a different world. "Physically Based Rendering" by Pharr and Humphreys is a good intro to this way of doing things.
If that were true, we'd see the evidence in Hollywood. But nobody knows how to make a realistic fully-simulated video. The reason everyone believes that it's just around the corner is because those academics talk with authority on the topic, and the still frames look pretty convincing. But still frames are completely different from video -- the human visual system processes video differently.
The only technique that we know produces realistic video is if you mix actual, real footage with simulated content. That's very effective, but it's unsatisfying for obvious reasons. I think it hints at a way toward fully simulated realistic video, though.
To me the most interesting thing is not just a fully-simulated video but a fully-simulated interactive scene using VR or AR, and that is obviously an even bigger challenge. I don't personally think that either of these objectives are even close to being just around the corner, but I do think we are moving toward them. I have no idea how many additional orders of magnitude in computing power would be necessary to create a convincing simulation. The journey in that direction is a fun challenge though, right?
I did just read your previous posts, and it sounds like you have a pretty interesting history of working on all of this stuff.
I can't judge the popularity aspect of it but "nobody knows how to make anything look real." assumes obviously that the goal is to make it "look real".
There are so many interesting aspects of modern computer graphics that have nothing to do with rendering realism. My favorite is visualization, or using graphics to create a visualization of a process that is not visible normally like fluid dynamics. But there are lots of others.
I think that if you start from the most featured APIs you end up jumping in with a hugely steep learning curve that really hinders learning. Starting from the basics, very simple OpenGL or WebGL rendering, learning the algebra behind projections so you can think about what you want to see, and then the notions of texturing and shading, all help get from not knowing anything to knowing something much more efficiently.
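As a taste of the algebra behind projections, the core of the pinhole camera model is just a divide by depth. This is a minimal sketch; real OpenGL/WebGL pipelines express the same idea with 4x4 homogeneous matrices plus clipping:

```python
def project(point, focal_length=1.0):
    """Pinhole perspective projection: a camera-space point (x, y, z),
    with z > 0 in front of the camera, maps to the image plane by
    scaling x and y by focal_length / z."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# The same object twice as far away appears half as large:
print(project((2.0, 2.0, 2.0)))   # (1.0, 1.0)
print(project((2.0, 2.0, 4.0)))   # (0.5, 0.5)
```

Once this divide is intuitive, the standard projection matrices are just a way of packaging it (together with the near/far clip planes) into a single linear transform.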
"This is an unpopular opinion, but nobody knows how to make anything look real."
Unpopular? By whom? Computer graphics is a crafty bag of tricks sprinkled on top of very concrete linear algebra and computational geometry.
That said, it's actually hard to create appealing images even in plain old photography, so anyone thinking computer graphics will make the artistic side any easier is in for a disappointment.
The math needs to work, but you need artistic talents to make anything look good/real.
What looks good, what looks real, is actually an active area of psychophysics research in high-dynamic range image capture and rendering.
The real world is high dynamic range, but human vision is much more limited, and also spatially variant and adaptive. So we get multiple optimized "snapshots" of a scene, rendered in real time and mapped into what our brain considers one scene. Look toward a high-key, high-brightness area of a scene: the retinal gain function clamps down, the iris closes, and you get properly rendered details in those areas. Look toward a tree-shaded bench, at a rock under that bench: the iris opens, the gain function increases, and again the shadow details are rendered. Cameras are just starting to do this.
But that's capture, and it's still HDR. Now you have to re-render all of that if you're going to reproduce it on a lower dynamic range device like a computer display. There aren't that many HDR screens on the market yet, but those are also coming, and they would let us skip that next-stage re-rendering, which is far more subjective; it's still an active research question why some renderings look real and others look cartoonish.
And the same would be true for computer generated graphics rather than captures.
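A minimal sketch of that next-stage re-rendering is a global tone-mapping operator. The classic Reinhard curve below compresses unbounded HDR luminance into a display's [0, 1) range; real pipelines also handle color, exposure, and the local adaptation effects described above:

```python
def reinhard(luminance):
    """Reinhard global tone mapping: maps HDR luminance in [0, inf)
    into [0, 1) for a low-dynamic-range display. Monotonic, so bright
    things stay brighter, but highlights are compressed hard."""
    return luminance / (1.0 + luminance)

# Deep shadow, mid-gray, bright highlight, direct sun (relative units):
for hdr in (0.01, 1.0, 100.0, 10000.0):
    print(hdr, reinhard(hdr))
```

Note how the top four orders of magnitude get squeezed into the last few percent of the output range; choosing where and how to spend that range is exactly the subjective part of the re-rendering problem.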