It's stunning that this runs on a single NVIDIA GeForce GTX 680 in an editor without any visible lag at all. I wonder why tessellation wasn't enabled, though; it may have been because of performance issues.
> You're going to need the latest hardware to get access to all that graphical power, though. The demo we saw was running on a top-of-the-line NVIDIA GeForce GTX 680 graphics card, and we were told explicitly that the engine was currently being targeted for the next generation of console hardware rather than the Xbox 360 and PS3
Coming from a film background, it is clear there is a lot of conflation happening in these articles between light that 'behaves' like it would in the real world, and 'good' lighting. That is why you get quotes like in this article 'I light my scene by dropping a sun in'.
In reality, there is a whole lot more to lighting than simply having lights and materials that behave naturally. Sure, that gives you more realistic-looking results out of the box, but it doesn't mean you don't have to sculpt, massage, and tune your lighting to achieve the artistic result you want, especially for interiors, or scenes with characters in the foreground.
The quote is about overcoming a technical constraint that forces laborious placing of secondary lights to restore behavior that you get for free in the physical world.
It's not claiming that design or art in lighting is unnecessary. It's freaking obvious that's necessary.
To elaborate: Only about a decade ago, 3D game lights were almost uniformly "additive spheres" supplemented with some ambient or directional light. They didn't bounce or diffuse properly. Environments and characters each had separate lighting treatments, making the characters look like "cut outs" in most circumstances. Reflective surfaces were a special case feature, and most details could only be represented with a cleverly painted texture. Everything was faked, and as such, the art of a lot of these games had to hew closely to tech limitations.
Between then and now we've gradually gained enough GPU power to move towards a unified, real-time lighting model with most of the desired real-world properties. Although limits still exist, we can finally start addressing lighting primarily from the designer's perspective.
Achieving a specific mood or "feel" for an environment requires someone who is very sensitive to lighting, color, and architecture, and who has knowledge of techniques to achieve certain looks.
For example, if you want an ominous feel to a cathedral, you need to know which light colors read as more ominous than others, what direction the lighting should come from (maybe bottom-up rather than from the top), what types of lights to use (many dim point lights, or a single directional light?), whether light scattering through a medium like fog would look good, etc. You also have to keep functional aspects in mind, like: can the player still see items and the things he needs to do, even with all of this specific lighting?
Another example might be that while lighting and materials behave naturally, sometimes they still behave in undesirable ways. Suppose the angle of the sun reflects off a reflective surface right into the player's face at the specific location where he needs to do something. If you can't move the surface, can you shift the light? Will that affect the scene much? Maybe now that you've shifted the light, the rays that were falling through a tree branch so beautifully aren't there any more.
Getting back to UE4, if its lights behave more like real-world lights it may allow game artists to translate real-world lighting knowledge into games or let them spend less time working around kinks in the engine.
Definitely true. We draw our experience with light from the real world, so the closer an engine gets to the real world, the faster artists can prototype and test lighting setups.
As a still photographer who sometimes works with multiple off-camera lights (strobist-style) I would love to be able to experiment in software to see a very close approximation of how a setup would look.
In all seriousness, go download a build of blender that has the GPU renderer built in. It is totally handy for what you are talking about. Hit me up via my email in my profile if you want help getting started, the interface can be a bit non-intuitive.
I have a plugin for Cinema 4D called "HDRI Studio Kit" that is basically a set of predefined studio lights and lighting setups. Makes it very simple to light objects using traditional photographic methods. You have various tent setups, soft-boxes, umbrella lights, etc...
There's a lot of untapped creativity from the masters of film and television production in gaming, because the skills don't map well. I'm definitely keen to see what happens to that situation once the engines get closer to simulating the real world. The possibilities are endless, since suddenly a great lighting director's skill set is much more valuable to a team working on a AAA title.
That isn't to say this doesn't happen now, just that it's only going to get better.
Your post is very informative. Thank you for that.
If I may ask, don't you think decisions related to the "feel" of any game/entertainment device rests squarely on the shoulders of the art director?
I feel that statements like the one on Ars are not meant to mock the artists but instead to convey the message: "Hey programmers, next time the artists/director want these lights with these reflective properties, you can do it very quickly instead of hacking through messy code."
Now, personally, what I think is going to happen is that with even more control, and fewer instances of programmers telling the artists "I could do that for you, but we're going to blow the next six months' worth of budget for five people. Is that effect still important now?", artists are going to be able to come a step closer to realizing their intended vision.
Yeah, I completely agree with you here. For me, I am a huge advocate for learning as much as we can from film in the production of games. If we look at the way technology is going, in 100 years I think we will see a huge blurring of the lines between games and film, there will be much much more cross-compatible skills and methods.
You explained it very very well. In film it is par for the course to have hundreds of square feet of lighting gear crowded around the subject of the shot just to get that perfect look that the DP and director are going for.
Natural light is boring, and often distracting. As a very simple example, I live in an old house and have windows broken into small square panels. For a scene where someone is looking through that window I would put a light outside to make a shadow on their face for dramatic effect, even if the sun was not really coming from that direction or if it was raining (which usually softens shadows). But if I was showing the same person looking back into the room, then the shadow on their back would be a distraction from what was inside the room, so I would soften the lighting or put a screen to limit the natural light, and fill in the darker areas of the room with a soft light.
Look at old film noir movies on TV; they have very exaggerated lighting for dramatic effect. Often you are reducing light to remove information from the scene, so that the viewer can focus on the dramatic subject.
Even when they do shoot outdoors during the day in films, you have scrims, bounce cards, 12K lights, and all manner of things to control the light. Also, DPs try to shoot exteriors during golden hour, shortly after sunrise or before sunset, to get softer and warmer light. If you have to shoot midday, you try to use a scrim, or you pray for clouds to soften the harsh light.
Also, none of the graphics engines really capture the wave nature of light: interference, diffraction, etc. Radiosity captures a bit of this, but from an intensity/energy point of view.
The bit where he made a change to the source code, let it recompile on its own, got a little alert in the editor when it finished, and then had the change apply, all without restarting the engine... WOW!
Unity does this now. You can recompile a file while your game is playing in the editor, and the running code is replaced. And I almost never use it -- I should figure out why that is.
That's because UE4 probably contains a lot better quality of code (and better GPU acceleration) than the horribly slow and buggy Flash plugin that powers that video.
Incredibly stunning tech. Now, I'm gonna go fantasize about an open-source version of this with updates every month (instead of every 3 years) -- as a compromise, I could do without the whole hot-reloading IntelliEditor shebang and consoles support -- a concentration on PC hardware that "will be commodity in 2 years, on mobile in 4 years, but can already be bought as state of the art now" (aka Kepler GPUs) would be enough.
To the pros: in your opinion, which of the many FOSS engines out there (many of which carry a large legacy code-base supporting soon-outdated modes of operation such as DX9 or lower, or GL < 4.0) is the most likely candidate to offer a deferred pipeline incorporating lighting / particles such as we see here in this UE4 demo, at that level of performance and (expected but also demoed) robustness? Again, their "industry" tack (smart editor / console support, "we license only to pro developers" stance etc.) would not be required ... only the "metal".
Back in the day, OGRE, Crystal Space and Irrlicht used to be the dominant FOSS ones.
Crystal Space is a bit outdated now. Irrlicht is stuck at DX 9.0 and OpenGL 3.x
OGRE seems the most likely candidate (ie the project is the most active and has been for years). Although it is also stuck at SM 3.0 - so no geometry shaders or access to the compute pipeline.
I am likely to be missing out on some other engines here.
Also I'm wondering about the lighting. Some folks on Twitter called it dynamic Global Illumination but here they just call it indirect lighting. I wonder -- are they still using an ambient light? More specifically, what's their GI algorithm? Might be something like the voxel approach sported by @icare3D but then it seems that one's just barely interactive, not as "real-time" as this stuff -- or well it might be on a GTX680 with a really-lo-res voxelization.
Voxels would be too heavy. I have worked with volume rendering before and it's way too expensive to do it at this scale (although their scene with the light passing through the volumetric smoke column kind-of defied this).
(Of course the (fairly lo-res compared to your screen) voxelized geometry representation is not rendered directly to screen a la Atomontage or Gigavoxels -- the GI is based on real-time scene voxelization and approximate cone-tracing.)
Well that's awesome because at least there's (A) a paper for this (grokking it is another matter) and (B) seems like it runs really smoothly on next-gen GPUs!
Lo-res being the key :) I can see how the voxel representation would be computed and made resident in GPU memory. What's amazing to me is that such a highly branched algorithm is now possible in real-time on a GPU! (We've had branching available since SM 3.0, but it was "not recommended".)
The physical renderer in Cinema 4D is pretty fantastic, and calls it Indirect Illumination. It's pretty crazy slow at rendering this stuff. I've had 6h+ renders, per frame.
In some cases, it involves calculating the angle and falloff of the original light, and placing extra lights at the right angles to simulate the bounce light.
I agree it's incredibly neat -- but let's say I'm just personally most interested in the core engine side of things -- much less a however-advanced "Game Studio" of sorts. Is it compelling -- you bet.
Eventually I'd expect a "linux of 3d realtime rendering" to emerge. It will then find its way into nearly everything that gets put on a screen from the smallest phone apps to the largest multi-screen games.
This stuff is really fascinating. Does anyone have any recommended reading or general advice as to how to learn more about game/graphical engine implementation? I mean, from the ground up. (I'm a CS undergrad and the closest we've gotten to learning about this has been the computational geometry chapter in CLRS, but that was a mildly esoteric introduction to everything that is possible in this field.)
I read Jason Gregory's "Game Engine Architecture" cover-to-cover and I'd highly recommend it for someone new to the industry who's interested in learning about game development. Gregory is a developer at Naughty Dog (Uncharted series).
Indeed. Having been in the game industry for close to a decade now (apropos of this thread, my first job was at Epic Games as an engine programmer working on UE2 and UE3), I can't say I personally learned a lot from Gregory's book (outside of some tidbits on character animation, where the book is especially strong), but I can say that it's the best book on the subject I've read.
An older book like Eberly's 3D Game Engine Design was more a loosely knit compendium of theory and algorithms. Eberly's more recent 3D Game Engine Architecture is closer to Gregory's in intent. Unfortunately, it's more a reflection of the idiosyncrasies of the author and his personal code base than an investigation of game engines as developed and used in the industry.
I've also heard good things about Mike McShaffry's Game Coding Complete. McShaffry is a game industry veteran, so he knows what he's talking about. However, it seems to be targeted at rank beginners, so a lot of the material might not be very useful to you.
Below that tier, of course, you have an endless array of how-to books written by incompetents. The worst is the "here's my shitty homebrewed codebase" variety.
For rendering in games, you'll need a separate book. Real-Time Rendering is the canonical reference. The first and second editions were my favorites. The third edition added a lot of information on recent techniques but at the expense of reading too much like an annotated bibliography, skating from topic to topic while rarely going into enough detail to be useful and often listing several alternative solutions to a given problem without comparing their relative merits and trade-offs. The first editions flowed more like textbooks and were thus better suited as introductions. Some of that might be hindsight bias on my part; I'm curious what others think who first learned the subject from the third edition.
I also want to give a shout-out to Christer Ericson's Real-Time Collision Detection book. It's full of clear explanations and practical, hard-earned knowledge that you can't get anywhere else. Plus, the accompanying code is robust and well written, a far cry from the usual crap that either straight-up doesn't work or is inextricably entangled with a massive code base (I'm looking at you, Eberly!).
It seems that each time I come back and read your comment, you've added a new recommendation. Awesome!
That Real-Time Collision Detection book sounds particularly interesting to me. I had resigned myself to just using Bullet in the toy engine I'm working on since I couldn't find any good resources on the subject. I'll check that out!
The only book you're missing is the one that explains how to get your foot in the door ;)
Game Engine Architecture and Real-Time Collision Detection were both prescribed material in my games technology course (done as a double major with CS). Cool to hear that they're recommended by industry folk :) Gregory's book is especially good and taught me a lot. +1!
While people mentioned NeHe, and it's where I started years ago, it's widely outdated. Here's one that helps you get started with modern GPU rendering concepts in OpenGL.
On the flip-side, if you're interested in rendering realistic scenes that appear physically accurate but aren't suitable for real-time rendering (useful for movies, ads, etc.), Physically Based Rendering (aka PBRT) by Pharr and Humphreys should be your go to book.
NeHe's OpenGL tutorials used to be a good (although now kind of outdated) starting point. Shaders didn't use to exist back then. But still a good start:
A game engine is much more complex than a graphics engine.
Maybe somebody can point to guides for creating physics/sound engines and AI/gameplay stuff as well?
When I started learning I spent a lot of time reading NeHe tutorials. But modern OpenGL feels so far removed from those tutorials that it feels like learning a completely different API.
The best tutorials I've found for modern OpenGL is "Learning Modern 3D Graphics Programming" by Jason McKesson. The only problem I have with these tutorials is that there aren't enough! http://www.arcsynthesis.org/gltut/
I really appreciate all of the responses and have already put many of the recommended texts on my wishlist. (And I just bought Game Engine Architecture as well, too bad you didn't post an Amazon affiliate link for yourself!) But I just wanted to point out that since this tutorial is free, I immediately checked it out, and I am absolutely blown away. This will certainly keep me busy until GEA arrives! Thanks so much, it looks to be a great starting point for learning more.
Edit: I also wanted to point out that in the author's preface, he discusses what he considers the downside to most introductory texts and tutorials in 3D graphics programming - presenting "fixed functionality" that allows newcomers to more quickly use the tools at hand by abstracting much of the foundational information away.
While this is certainly useful for experienced developers learning a new derivative technology on top of what they already know, I have always found this approach for introductory stuff frustrating. At the end of the day I may come away pseudo-understanding a "higher level" concept, but ultimately much has been abstracted away and I am left ignorant and, as the author says, "Programming thus becomes akin to magical rituals: you put certain bits of code before other bits, and everything seems to work."
I wish I knew of more texts like this for other fields where over or premature abstraction could endanger comprehension. (For example, I know I would like to see a similar approach taken to other complex topics, like networking.)
Hey, glad I was able to give you some useful information! If you ever find yourself in Toronto, say, for an indie game jam (http://www.tojam.ca/home/default.asp hint hint), you can buy me a beer ;)
If you're looking for something lower level, the book you probably want is the white book - Computer Graphics: Principles and Practice. I have the 2nd edition from 1992, which is the still the standard intro graphics textbook for many CS departments. Though Amazon says there will be a 3rd edition coming out at the end of this year!
Another highly recommended book (also recommended in another comment here) is Real-Time Rendering, but I've only used bits and pieces from this one, so I don't know how good it is for folks just starting out. Still probably one you'll want to add to your shelf if you continue on in the field.
Oh, and also also, head over to YouTube with some snacks and a drink, sit down, and watch the weekly Overgrowth game developer videos from Wolfire Games. It's both inspirational to see what other people are doing, and a great demo of concepts that you'll read about in GEA, such as animation blending: http://www.youtube.com/playlist?list=PLA17B3FAA1DA374F3&...
Although, when I go through the code, it feels more like a copy of the (legendary) Red Book. Missing vector/pixel (or fragment in GL-speak) shaders as well.
If you want to know all of the math and functions involved in pushing individual pixels to the screen and building something along the lines of a Quake 2-level engine in software, this book is a pretty good introduction to it all:
Note that this is the type of programming now implemented in hardware GPUs, and in libraries such as DirectX and OpenGL. If you were to write a modern game, you would do it on top of one of these hardware-accelerated libraries, and you wouldn't be writing this type of code in software anymore. But if you really want to learn how to do these things from the "ground up", this book can help you build that foundation.
It might be worthwhile to take a look at certain types of game engine architectures.
Since modularity is one of the key points of how this editor and game engine work so well together, it's probably a good idea to have some notion of how to structure game components to lessen the 'ball of mud' feeling.
The graphics stuff is amazing, but I'd be also interested to know how they achieve hot-loading with C++. It seems to me that Erlang's philosophy (or FP in general) would be of great advantage here - just have the engine handle entity state and make all of the game world manipulating functions referentially transparent. This way one can swap code between world "ticks" (or update cycles) without worrying about breaking something.
I don't know how efficient something like that would be though; somebody mentioned cache issues with lumping heterogenous properties together, but maybe this could be optimized behind the scenes.
At this point, seeing this kind of extremely high production value animation used to enact this kind of fantasy scene instantly makes me think of hackneyed backstory and plot focus-group optimized to death to appeal to slightly dull 14-year-olds. You don't throw out that sort of development budget to try out something with even a hint of narrative experimentalness, and you can always rely on there being kids with disposable income who haven't yet been saturated with cliches to the point of fatigue.
"This looks very well done, it's probably bad" is a strange heuristic to have.
"This looks very well done, it's probably bad" is a strange heuristic to have.
It's the same for Hollywood movies, isn't it?
When the producers want to focus your attention on how expensive and uniquely complicated a game or movie production was, that's a sign that the actual content was a secondary concern.
I see what you're saying, but the target audience here is the game development community. The animation was clearly designed to show various capabilities: indoor lighting, outdoor lighting, outdoor scenes, interiors with various materials, fluids, etc. etc. As with movies, it's still up to the team to use the tool to create a worthwhile experience.
We've certainly come a long way since Demon Attack and Pitfall! :)
I remember 23 years ago, trying to do 'ray tracing' on my Amiga would take... hours for one frame. It looked pretty good, but... wow. This is jaw-droppingly cool.
Why are people getting so excited about realtime global illumination and code hot-swap? CryEngine 3 already supports both, and has done so for the past year or so.
Realtime global illumination has been possible for the past few years. I believe Crytek was the first studio to make a game engine with support for the same (www6.incrysis.com/Light_Propagation_Volumes.pdf)
And the code hot-swap feature in the freely-available CryEngine 3 SDK isn't just for Lua, there's the CryMono project which adds support for hot-swapping C# scripts.
Very beautiful! The demo has a nice Skyrim/LOTR feel to it, and the lava flows were just gorgeous! If that lava flow generation is completely procedural, then I'm extremely impressed.
Writing a highly parametrized engine like that with JIT script compilation is plain awesome!
There is no script. It's all done in C++ with the project broken up cleverly into DLLs which can be reloaded at run-time without interruption. And I believe the lava flow is done with animated meshes.
UE3 had Unreal Script which was a custom scripting language and that has been removed from UE4. Kismet is more data than script. There's no JIT-compilation.
To be clear, Kismet is a flow-based visual scripting language where you connect boxes together. In UE3 Kismet boxes are scripted in UnrealScript. In UE4 they are written in C++ and automatically reloaded when the code is re-compiled.
I have mixed feelings about this. UnrealScript had a lot of interesting ideas; particularly its module system (a kind of sideways inheritance where you could override an ancestor in a package, but not have to override all the descendants to have them inherit the behaviour - really handy for third-party plugins); the synchronous animation playback, effectively using continuations behind the scenes; how it handled synchronization of variables for client/server; and to a certain degree the "modes" that objects could be in, a way of switching behaviour of a whole slew of methods en masse.
On the other hand, object-oriented approaches are not great for simulating huge numbers of entities, where column-major array-based layouts are a lot more efficient.
I wonder if Kismet data-flow graphs may end up getting too complicated for their own good. They look like they could do with a textual representation. The obvious manual iteration involved in the creation of the orrery in the gametrailers demo video posted in the comments elsewhere here shows how painful this approach can be.
> a kind of sideways inheritance where you could override an ancestor in a package, but not have to override all the descendants to have them inherit the behaviour - really handy for third-party plugins
This sounds like the virtual classes that Tim was talking about ten years ago. (For the curious, the first programming language with virtual classes was BETA from Aarhus University.) They never made it into UnrealScript. His language research after that time was less incremental and sought to uproot almost everything about programming games. He moved away from object-oriented programming and more towards functional programming, specifically type theory. When I arrived in 2004, he was all about dependently typed languages, the big inspiration being David McAllester's Ontic. For a long time his plan was that his new language would be used to implement most if not all of UE(n+1). Eventually that ambition had to be tempered by reality and thrown aside; I'm not sure if he's still working on programming language design.
> particularly its module system (a kind of sideways inheritance where you could override an ancestor in a package, but not have to override all the descendants to have them inherit the behaviour - really handy for third-party plugins
Do you know if this can be done in UE1? I'm actually developing something called "NewNet" in UT99 for fun - plus, more people actually still play that game than all the others! Basically it fixes the movement lag associated with high pings and simulates zero ping by keeping track of the positions of all actors from within the past second or so, and "rewinding" the server to that saved position according to the ping of the shooter. There are mods for UT2003/UT2004 (UE2) and UT3 that do this, but nothing for UT99 - although there's the ZeroPing mod which is way too easy to exploit, as it's clientside hit detection. People said it couldn't be done for UT99 but I've managed to get a working prototype, even though it's quite messy and hacked together. It'd be great if I could do what you've mentioned, as it'd clean things up quite a bit.
The hacks I've had to do to get it to work right have been pretty silly, and there's a lot of duplicate code, because I had looked for something like you mentioned (the sideways inheritance), but maybe I didn't look long enough; I couldn't find it for UE1.
I don't get it: what's OO got to do with storing entities in column-major storage? Store the objects in a 2D array.
Normally you don't even store entities like that, I've seen all sorts of crazy data structures, such as storing them spatially, so you can ignore visual updates.
Or am I completely not getting what you're saying here...
Polymorphism with objects implies an indirection to data with variable size, and also usually an object graph involving pointer chasing; lots of indirection is bad for performance. Also, cache-efficient manipulation favours non-indirected, tightly-packed data.
So instead of:
class Entity { Vector location; Foo foo; Vector velocity; }; // etc.
Entity entities[]; // one array of heterogeneous structs
In a way, this system reminds me of Light Table[1] in its ability to change some code/settings and get instant feedback. It totally changes the way people create content because it becomes so much more accessible and so much faster to see changes -- you can play around with a lot of different approaches more quickly, resulting in more room for experimentation.
Pretty amazing stuff! Really curious to see the tools that power all of this, as well as hear the architect's (or architects') vision for all of this
Being able to compile code and reload it while the engine is running is something that has existed in many engines for several years. What's cool here is that the engine keeps running while editing and compiling, then it auto-reloads and pushes the changes when compiled.
Yeah definitely -- I guess what I meant was that there's now an interface to manipulate code, without having to touch code, and then it automatically goes through the process you mentioned