Hacker News
BioShock Infinite Lighting (solid-angle.blogspot.com)
198 points by jamesmiller5 on March 4, 2014 | 43 comments



> Programmers don't generally have reels, but we do have blogs. I've been explaining the rendering work I did on BioShock Infinite quite a bit due to recent events, and I thought it made sense to write some of it down here. For the bulk of development, I was the only on-site graphics programmer. As Principal Graphics Programmer I did quite a bit of implementation, but also coordinated and tasked any offsite rendering work.

Kudos to the OP... I wish more graphics programmers did this. I got into programming because I wanted to do game development, though I never ended up doing it, but I'm still fascinated by the actual work... not so much the gameplay programming (though I love reading about design choices), but just, well, how it differs from web development. I loved programming in OpenGL, but I've wondered: how much linear algebra do you juggle in your head on a day-to-day basis in the professional world? What is the equivalent of "Rails", if any, for graphics work - and if there isn't such a thing, what about graphics development makes such a conceptual framework impossible or unrealistic? And is graphics programming as constrained by the artist workflow as web development is by content-makers/writers?

---

And a little more, both on and off topic: reading the OP makes me sad that Bioshock Infinite had such a beautifully realized world... but focused a disproportionate amount of gameplay on how best to grind generic policemen's faces with a meathook.


> Kudos to the OP...I wish more graphics programmers did this.

While graphics programmers never update their blogs as often as many would want (which is true of almost all bloggers with interesting things to write about), if you instead follow a large set of graphics programmers' blogs, you get new posts all the time.

And, good news! Someone has been collecting them :)

http://svenandersson.se/2014/realtime-rendering-blogs.html


> Kudos to the OP...I wish more graphics programmers did this.

In my experience this mostly happens when game programmers are switching jobs. Compared to the open-source community, or the indie community, console game developers rarely publicize their work. To be fair, most of them are under NDA and under a time crunch.

> what is the equivalent to "Rails"

Unity... it's the "Rails" of game development in general. You can do simple things quickly, and you can dig deeper into shaders and (if you have the commercial version) some advanced visual effects. Of course it's not a substitute for writing your own, but unlike writing your own engine, you might actually finish a project with Unity :-)


> To be fair, most of them are under NDA and under a time crunch.

The notable exception being, of course, id / Carmack, who open-source their engines after a number of years. <3


I have my doubts that idTech5 will ever be GPL'd


Unity is great for rapid development but it is a very poor model to learn from.

It's riddled with terrible amateur mistakes, awful design choices and some truly embarrassing bugs. The overhead is huge, and its respect for memory and performance falls really far short of what I've seen working on in-house code or with popular engines.

The editor, however, is very polished, and if you want to make a game rather than learn, it's definitely the way to go.


I came to Unity from a background as a professional engine and graphics programmer, and initially I also turned up my nose, always muttering 'I could do this better if I had the time'. But once I got over that and realized I'll never have the time by myself, I found Unity to be actually well designed, embracing the component-based organization that has become an industry standard, with a well-thought-out API. It's not without bugs, but nothing I would call amateurish or embarrassing. These days they have (IMHO) some of the best engineers in the industry improving their engine. That's how they extract money from mostly satisfied customers every couple of years for upgrades.

There is a good bit of overhead from the Mono .NET scripting runtime, which is never going to be as optimal as (hard-to-write-correctly) native code, but then again it's light years ahead of most game scripting designs. I've also seen some in-house AAA engines, and expensive licensed engines, with the same performance and memory issues. That's why they hire people like AO to beat an engine into shape.

Do you have any links or references to substantiate "amateur mistakes, awful design choices and some truly embarrassing bugs"?

Granted, one embarrassing thing about Unity is the infamous and prolonged lack of any viable built-in UI solution.


the most obvious bug problem i had was with localisation (i've heard this is fixed, but the fix hasn't shipped yet) where it constantly returned en-US. this is pretty embarrassing imo... it shouldn't be possible to ship in this state, and when someone raises the issue - imo you fix your build process so it can't happen again. that might be a big ask but it's not unreasonable imo...

i had some real issues with the physics not being deterministic and not handling the most common edge cases, like high-speed objects with thin colliders. i'm inclined not to blame PhysX... tbh part of this was poor game design on our part that it was a problem - we should have known better than to want to rely on deterministic physics...

i don't think they have implemented components particularly well, but that may be personal taste - everything having a transform is reasonable, but the implementation of transform itself i find terrifying and confusing. it really seems to be an elaborate tree thing containing the whole hierarchy...

if i want to iterate over all of the components of a given type i have to use some scary-looking functions with 'find' in their names, which suggests to me they don't really understand the implementation benefits of components...

scales live in the matrices...
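To unpack "scales live in the matrices", here's a rough sketch (in Python rather than Unity C#, with made-up names `scale_matrix` / `extract_scale`): when scale is baked into the transform matrix rather than stored as a separate field, you have to recover it from the lengths of the matrix's basis columns.

```python
import math

def scale_matrix(sx, sy, sz):
    """A 3x3 transform with only scale baked in (rotation omitted for
    brevity). The columns are the scaled basis vectors; there is no
    separate per-axis scale field to read back."""
    return [[sx, 0.0, 0.0],
            [0.0, sy, 0.0],
            [0.0, 0.0, sz]]

def extract_scale(m):
    """Recover the per-axis scale as the length of each basis column
    (only valid when the matrix has no shear)."""
    return tuple(math.sqrt(sum(m[r][c] ** 2 for r in range(3)))
                 for c in range(3))
```

This is why baked-in scale is annoying: extraction is lossy in general (signs and shear are ambiguous), so "where does the scale live?" becomes a real question.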

the maths library is missing lots of common things and is messy (eg. Vector3.Lerp vs. Mathf.Lerp). some of this is because c# makes performant syntactic sugar hard with its lack of clean non-temporary references (you can use boxing, but that is, in theory at least, slow)
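For readers outside Unity, the Vector3.Lerp vs. Mathf.Lerp split looks roughly like this (a Python sketch, not Unity's actual implementation; the clamping behavior matches my understanding of Unity's documented API):

```python
def lerp(a, b, t):
    """Scalar linear interpolation; t is clamped to [0, 1], which is
    what Unity's Mathf.Lerp documents."""
    t = max(0.0, min(1.0, t))
    return a + (b - a) * t

def lerp_v3(a, b, t):
    """The same operation applied per component of a 3-vector -- the
    functionality Unity exposes separately as Vector3.Lerp."""
    return tuple(lerp(ax, bx, t) for ax, bx in zip(a, b))
```

The complaint above is that these are one operation living under two differently-styled names.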

the random number generator is not very good - maybe it's a mersenne twister or something suitably complex, but the results are easily beaten by the usual LCGs (which have their own classic problems).
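For reference, a minimal sketch of the "usual LCG" the comment means - the classic 32-bit linear congruential generator with the well-known Numerical Recipes constants:

```python
class LCG:
    """Classic 32-bit linear congruential generator:
    x' = (1664525 * x + 1013904223) mod 2^32
    (the Numerical Recipes constants). Fast and deterministic, with the
    classic LCG weaknesses in the low-order bits."""
    def __init__(self, seed=1):
        self.state = seed & 0xFFFFFFFF

    def next_u32(self):
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state

    def next_float(self):
        # Uniform in [0, 1)
        return self.next_u32() / 2.0 ** 32
```

Determinism from a seed is exactly what makes this kind of generator easy to test and to replay, which is often what game code wants.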

the game object interface is not very complete. where are get and set position? i apply them to the transform? does it cache this? in short, my confidence is broken when something like moving an object is handled poorly - it's a classic case of something you often want to cache until the next frame, and imo that should be a clear part of the entity/game object system, not something you have to investigate or write yourself. i find myself writing functions to get lists of children - finding that whether the built-in functions include self as a child is vague and ambiguous, so i make aliases for them to avoid getting confused by that... finding parents and children requires going through the transform? that is - to me - severely broken encapsulation. forget the class keyword - the spirit of OO is worth following imo (if not the die-hard 'everything is an object' approach).

the overheads of .net are there but they do not cause me a problem afaik.

all my problems can be summed up as 'i don't like writing this code i always have to write for any game - why isn't it in your engine already? i would have put it there.'

the editor is really good - i can't say that enough. it's a small sentence but the implications are enormous... it's what makes unity so good for making things quickly.

really, as an argument, that is super significant... making tools, regardless of how, is enormously time consuming compared to working around the issues i describe.

i've worked at 'AAA' studios - some of them are pretty appalling too. lots of programmer wankfest code for 'performance' which is completely unjustified if you go and measure it... tbh i'm embarrassed by the state of gamedevs. i stayed out of the industry for a long time (my fault), assuming i wasn't good enough - when i finally got in, the most overwhelming lesson was that the quality bar is much lower than i could have imagined... even in the engine, rendering, audio etc. specialist areas...


i forgot to mention the terrible iteration time on device and the lack of reasonable control over update order... both of which have conspired to waste a considerable amount of my time this morning :)


I find this stuff fascinating; I used to mess with OpenGL quite a bit in high school/college, but then moved to HCI in grad school and iOS dev/backend web dev later when I moved into the industry.

I'd love to eventually work on that kind of stuff full time- how would one go about making the transition? Do you basically have to learn/work on projects like this during your free time, and hope your "portfolio" gets you a job?


> Do you basically have to learn/work on projects like this during your free time, and hope your "portfolio" gets you a job?

I always thought this, and i applied for my first industry job with a game demo and some tech demos built on a moderately over-engineered engine, and for my second with an fps demo with most standard last-gen (360/PS3) lighting features, support for real-world data formats (quake 2/3 bsp, md2, obj, 3ds, lwo, png, tga, jpg etc.), and a tools pipeline involving radiant and the appropriate game bsp compilers to create levels.

sadly, most people i have worked with have made no serious demos or games in their spare time - it's much less common than i had imagined.

knowledge about things like rendering, audio, AI and networking is rarer and will help - however, 'gameplay programmer' is a real role, and despite it seeming like a super generic one, there is a skill to creating good, flexible gameplay systems for designers, or to picking the right hacky function to make something 'feel' 'fun'.

oh, and be prepared for low pay on the way in - you don't have to tolerate it, but i've heard salaries ranging from /actually very illegal/ to £20k for new juniors. i took £17k for my first - if i had known better, and especially the pay rates of my compatriots, i would have asked for more from the beginning...

also, try agencies if you don't know many games companies... you will never be able to rid them from your life, and they will do stupid things like call you at your current job and speak to your boss when they can't get through to your personal phone - but it's their job to find you something, or to be honest if they think it's not possible.


Did the portfolio work for getting a job? For some reason the games industry seems quite impenetrable to me, so I'd like to know what worked/didn't for getting a job in it.


i had a friend who recommended me, and that's always the best way - however, the only real good advice i can give is to keep trying. some of the juniors i have encountered found their first jobs through agencies, former university colleagues, applying to their favorite games company, etc.

however, yes, the demos helped considerably... a demo is a very strong proof of knowledge and skill. a working demo of reasonable complexity is worth more than any level of academic background or past experience outside the industry imo. you just need to get it seen.

as a programmer i can't really offer much advice for design, art or production though. i imagine design and production are basically luck or attitude to get into - everyone wants to do them and the skill is difficult to measure. artists tend to have impressive portfolios and/or showreels demonstrating past work and personal projects...

A surprising amount of production staff have a background in the QA department, in my experience, too... QA is definitely a way in for all the disciplines, but I have no first-hand experience of how that works.


> I'd love to eventually work on that kind of stuff full time- how would one go about making the transition? Do you basically have to learn/work on projects like this during your free time, and hope your "portfolio" gets you a job?

It's really that easy, as strange as it sounds. Just find some graphics stuff you like doing, and do it a lot.


I'm a big proponent of unrealistic lighting in games, for two reasons:

1) Trying to simulate realistic lighting in realtime is a fool's errand - even with every trick we know, lighting can never look completely realistic on today's hardware.

2) There is a certain art to unrealistic lighting. We see reality all the time, and it is fairly boring; why not take advantage of the simulation to produce something visually interesting? I recently revamped my lighting for an unrealistic but stylistic look:

  http://i.imgur.com/t1gC4ME.png


Side note: there's been a recent push towards "physically based rendering" (in games; film folks made that shift a few years ago). This means making light behave in a physically plausible way, which is different from "photorealistic rendering".

Probably the best example is Pixar's rendering style - it's not "realistic", but it is very much at the forefront of "physically based".


Good point - I agree that we can learn a lot from the way realistic lighting works in order to produce stylistic results. For example, I do a bit of radiosity, which is sort of physically correct but with nowhere near enough samples to be accurate, and it is done in screen space. Even though it is a far cry from "realistic", it adds a lot visually IMHO.


This is partially because of the difficulty of debugging a completely ad-hoc pipeline.

When you have 50 moving parts and the final image doesn't look right, it's hard to know where the problem is. Instead, you try to make each part adhere to "reality" by making them "physically based", thus enabling objective testing.


Yes. It can also be cheaper to produce, since the same material can look good under many different lighting conditions. An artist can make the textures & material once and they will mostly look "correct" wherever they are placed - as opposed to older ad-hoc models, where a different lighting setup often meant at least some parts had to be redone.

Just wanted to make a distinction between "physically based/plausible" and "photorealistic", since I've seen both being mixed up a lot of times.


Obligatory link to Errant Signal's discussion of photorealism in games: http://www.youtube.com/watch?v=FRTsl1jCqq8


It looks like we will see the history of art repeating itself in games... imagine a "Van Gogh world", for instance, with colors and shapes moving/morphing without pattern.

It also reminds me of "Das Cabinet des Dr. Caligari", where they used expressionism in the costumes, the acting and the set.

Probably that's where the future of rendering is: freedom from reality. We already have one reality, don't we? :)


Couldn't agree more. Chasing that realism dragon is good for some things but a mistake for most.


I want a lighting algorithm that handles static and dynamic geometry uniformly, with respect to lighting and shadows, and still manages to look "good enough", if not necessarily cutting edge. Does such a thing exist?

The last such approach that I remember was Doom 3's stencil shadows, but it handled only direct lighting. Now that people are used to approximate indirect lighting, we get these huge piles of hacks and we have to reinvent them for every new art style...


Crassin's "Interactive Indirect Illumination Using Voxel Cone Tracing" [1] might be of interest. A snippet from the second page:

> We handle fully dynamic scenes, thanks to a new real-time mesh voxelization and octree building and filtering algorithm that efficiently exploits the GPU rasterization pipeline. This octree representation is built once for the static part of the scene, and is then updated interactively with moving objects or dynamic modifications on the environment (like breaking a wall or opening a door).

It takes a somewhat beefy computer to run, but the results are impressive. I don't have a link handy, but there was a UE4 editor demo showing some of it a few months ago. IIRC it's been axed as a feature because it wouldn't have run well enough across all of the platforms they were targeting.

[1] https://research.nvidia.com/publication/interactive-indirect...


I'm a big fan of the paper you cite and of Crassin's work in general. However, in my opinion that algorithm is only viable on current high-end PC GPUs, and it remains to be seen how useful it can be on the latest consoles, which also want to support large dynamic worlds.

The original article describes the careful technical tradeoffs and mixing/matching of techniques required to pull off a game like Bioshock Infinite on ancient hardware (at least relative to current PCs and high-end GPUs).

Interestingly, when Unreal Engine 3 was first being demonstrated to potential licensees and in the media, it was using a very high-quality and elegant one-pass-per-light rendering algorithm. The demo scenes were basically a room or hallway with several colored lights, and the resulting multicolored shadows cast by an animated character. In 2004/5 this blew people away, and a lot of game executives signed expensive licensing deals with Epic. No games to my knowledge actually shipped with that type of lighting, at least on a console. Then Epic and engineers like the author of the article spent the next several years (an entire console generation, really) retrofitting UE3 with actually practical rendering technology that involved a lot of sacrifices, careful balancing, and artist headaches.

This generation Epic is showing a mind blowing UE4 demo with Voxel Cone Tracing and the cycle starts again.

One thing is very clear to me, Irrational threw out a ton of invaluable institutional knowledge when they laid off that team. I suspect competing studios, especially those using Unreal, are scrambling and having a bidding war trying to hire them.


So, Voxel Cone Tracing doesn't actually treat static and dynamic geometry the same. It makes two hierarchical voxel trees: one static and pre-built, the other dynamic and generated per frame. This is really the only way it can stay performant.


Not quite. They're all the same to the lighting/shadowing calculations, but it doesn't re-voxelize geometry that didn't change between frames.

From section 4.2:

> Both semi-static and fully dynamic objects are stored in the same octree structure for an easy traversal and a unified filtering. A time-stamp mechanism is used to differentiate both types, in order to prevent semi-static parts of the scene to be destructed in each frame. Our structure construction algorithm performs in two steps: octree building and MIP-mapping of the values.


You mean that one? (second video into the article, first of the second block)

http://www.joystiq.com/2012/06/08/unreal-engine-4-demo-decon...


That was probably the biggest part of the idTech4 legacy--unified lighting and shadows, no bullshit, no takesybacksies.


Except for one intractable drawback: shadow volumes could eat up an unbounded amount of GPU fill rate, depending on the shadow-caster geometry and your camera position. As a thought experiment, imagine trying to use stencil shadows in a jungle scene - something Crysis handles just fine with shadow maps. Everything is a tradeoff: smoke and mirrors, tons of bullshit and, often due to schedule pressure, a lot of takesybacksies.


Yep. But we can dream, can't we? :(


I mean, you can just use shadow buffers on everything. And something like a screen-space AO to get a little bit of GI effect. It'll look good enough.


actually, a lot of games still use direct lighting in the same style as Doom 3, but with an added ambient term (like Quake 4) and with shadow maps, because somehow (I don't agree) the artefacts of shadow maps are considered more tolerable than the hardness of stencil shadows. a lot will only allow a single shadowing light source as well...

there is an argument that high-quality shadow maps can encode the penumbra, but current-gen console hardware is miles away from that quality (top-end PC hardware is not).

nowadays there are also performance arguments against stencil shadows - we would rather spend our fill rate on HD resolutions, deferred passes and post effects. there is also the potential patent issue (is that still a thing? it's embarrassing, really, for such a trivial algorithm :( )


Well, Whitted's ray tracing. Trouble is that it's expensive: the cost grows with every pixel you add, and the recursive reflection/refraction rays multiply with every bounce.
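To put rough numbers on that (an illustrative sketch, not from the thread): a Whitted-style tracer's primary-ray count grows linearly with resolution, while the recursive reflection/refraction branching grows exponentially with bounce depth.

```python
def whitted_ray_count(width, height, max_depth, branch=2):
    """Upper bound on rays traced by a Whitted-style tracer: one primary
    ray per pixel, each spawning up to `branch` secondary rays
    (reflection + refraction) at every bounce. Linear in pixel count,
    exponential in bounce depth."""
    rays_per_pixel = sum(branch ** d for d in range(max_depth + 1))
    return width * height * rays_per_pixel
```

At 1080p with 4 bounces this bound is already 1920 * 1080 * 31 ≈ 64 million rays per frame, which is why it was nowhere near real time on 2014 hardware.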


Good article. Lots of high-level generic detail :)

> Dynamic shadows from toggleable lights would be projected into this buffer using a MIN blend

Just wanted to quote this because, as much as I love id and JC, this bugged the crap out of me in Rage. Overlapping shadows from the same light source do not combine IRL :P


To me, the lighting in Infinite was completely overwhelmed by everything looking like you were viewing it through a cloud of flour whenever the scene was 'bright' at all. I nearly stopped playing several times because of that issue alone.


As a web developer this looks very complicated, reading stuff like this makes me feel useless. I wonder whether game developers feel the same reading about the web stack.


Yes, very much so.

I'm a graphics programmer at Unity (the game engine), and recently I toyed around with Rails. That made me realize how much assumed knowledge and jargon there is in any particular field.

As a complete noob, all the information in Rails' case sounds like "yeah you just rake your gems, bundle the capistrano and don't forget to bootstrap the angular node" -- stop, WHAT?! And all this while trying to do a very simple "hello, world" CRUD app.

I'd imagine graphics programming sounds exactly like that from outside, just with different words. "Oh yeah, you just swizzle your lanes, dispatch the predication queries and don't forget to Fresnel your BRDFs" (no, this sentence does not make any sense whatsoever).

Of course, if I spent even a month in the Rails/web field, I could at least navigate without bumping into everything. Spending a year or two would probably make me comfortable. Same with game or graphics programming. The author of the original blog post has been doing this for 20 years...


Yeah, that's exactly how your jargon sounds to me :) I fiddled with Unity; everyone was saying it's "simple"/"easy"... Sure it is :)))

You know, it's funny how people outside our industry perceive us (at least in my experience) - I mean, you say you're a programmer, and it's immediately presumed that you just know computers and coding, but we have so many branches, languages and platforms that at times it's absolutely overwhelming.

Outside of the basics of programming (you know: algorithms, runtime analysis, basic code structure) we have almost nothing in common. It now starts to make sense why the big companies interview in exactly these areas.


"So you're a dentist? Good, that means you know how to do surgery. Both are in the medical field, right?" :)


> yeah you just rake your gems, bundle the capistrano and don't forget to bootstrap the angular node" -- stop, WHAT?! And all this while trying to do a very simple "hello, world" CRUD app.

Try flask --> http://flask.pocoo.org/


Most game developers are not this good -- I spent some time in the game industry (coming from a webdev background), and while it was important for me to know C++, I didn't really have much graphics experience and was still really useful. Gameplay and UI code end up being a huge part of what goes into games, and you don't need this level of expertise for that.

That said, graphics engine development is very technically and mathematically advanced, I would say much more so than the web stack.


Yeah, I see how a lot of the skills apply in both fields; however, they feel a universe apart (at least to me). I remember trying to make a pretty simple game (think space invaders clone) in Java and, if I'm honest, I struggled (a lot). Now when it comes to the web (both front-end and back-end) I'm flying. Having said all of that, it might just be that I'm a shitty programmer.



