Finding your home in game graphics programming (alextardif.com)
247 points by poga on Jan 1, 2022 | 126 comments



Indeed the author has caught on that graphics is now much wider. Yet his post still focuses on the APIs. A bias he admits to.

APIs are the easy part of graphics programming. Now perhaps I say that out of bias, as I focus on shaders and artist work pipelines. The author focuses on APIs and thus reached for a simpler API as a teaching on-ramp, while I would have new learners run the other way and learn the basics of Blender.

Learn Blender and basic 3D modeling. Then dig into understanding how a model goes from disk, to CPU, to GPU. Understand what a draw call is, then batching in its many forms, using a commercial engine. Once you know what everything should "look like" and what the state of the art is, only then reach for APIs and write a from-scratch renderer.

If the learner has no idea about frustum culling, or triangle stripping, or instancing, or uniforms, or texture binding, a fresh from-scratch C++ renderer is a horrible learning experience.
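For readers who haven't met those terms yet, here is a minimal, hedged sketch (OpenGL 3.3-style calls; identifiers like shaderProgram, cubeVAO and albedoTexture are made up, and all setup plus a valid GL context are assumed) of what a single draw call with a uniform and a bound texture boils down to:

    // One draw call, assuming the shader, VAO and texture were created during setup.
    glUseProgram(shaderProgram);                        // select the vertex/fragment shader pair

    // "Uniform": a per-draw constant visible to every shader invocation.
    GLint mvpLoc = glGetUniformLocation(shaderProgram, "u_modelViewProjection");
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvpMatrix); // mvpMatrix: 16 floats, column-major

    // "Texture binding": attach a texture to the unit the shader samples from.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, albedoTexture);

    // The draw call itself: the VAO remembers the vertex/index buffer bindings.
    glBindVertexArray(cubeVAO);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);

Instancing and frustum culling then become questions of how many of these calls you issue and for which objects.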

Edit: When learning, use either Unreal or Godot. You need source access. Time spent learning Unity is time lost. The author should not suggest Unity is in any way equal for learners. Digging into the renderer requires breakpoints and source code.


IMO the article is better advice than the parent comment.

Learning from Unreal or even Godot to write a basic renderer is pretty wild. It would be like telling someone who wants to make a model rocket that they should study the Falcon Heavy.

Making a basic renderer in C++ is orders of magnitude simpler than trying to deconstruct a game engine that (according to a Google search) has over 2 million lines of code.


If you want to be a rocket engineer indeed the Falcon Heavy is a better study reference than making a fresh design in your backyard.

The hard part of learning is figuring out what you do not know. The easy part is reading a tutorial or API documentation.


I would say, a rocket engineer should definitely start with a simple rocket to be designed on their own - before proceeding to something modern and complex.

And I did not study rocket science, but in my CS studies we definitely started with the simple ("hello world") basics, too, before touching real systems, so I would think, they do likewise. Because it makes sense. How can you understand and design something complex, if you cannot even make a simple version of it?


> If you want to be a rocket engineer indeed the Falcon Heavy is a better study reference than making a fresh design in your backyard.

If you can't yet do so much as slap together some Estes kit and launch it in your backyard, what gives you the impression that you'd have the requisite knowledge to be able to gain anything from studying the Falcon Heavy?


Well, some people think failure is the best teacher. As an example, the first generation of German rocket engineers, whose work culminated in the Saturn V and thus enabled the first moon landings, basically started from scratch.


So your advice is to just suck it up and accept that interested people need to spend billions and study for decades on the issue, because learning from previously successful endeavors is a worse teacher. Right.

It's fine to just half-ass it if the interest is only superficial or for a hobby. Otherwise it might create a very unrealistic expectation in the student, because the knowledge gained by doing it that way is pretty worthless in a professional setting.


No, I was not suggesting education is not useful.

But if one is to study rocketry without an aeronautics degree I'm guessing it's better to start from hobby level engines and vehicles and then scale up.

There is a lot of added complexity due to size and mission constraints that is vehicle-specific in the Falcon Heavy, and I'm not sure how one would even use it as a learning platform - unless there is a university curriculum built around the vehicle.

There are a lot of basics that model rocketry probably gives a good intuition for, but then, to my limited understanding, it becomes (drumroll) rocket science.


Learn from the past but don't take it as gospel because people probably worked with different constraints and assumptions that might be completely different today.


Different people want to learn from different sides. Some prefer it as you say; others prefer learning how to paint pixels first, then go from there and paint triangle structures, then add stuff like effects, maybe generate water and waves, use pixel data to create shadows, write your pixel data and shapes to GPU RAM and use it to update future calls, etc.

And then, once you know how to draw things, you start looking at how to import models to draw; instead of starting with a big complex object, you learn exactly what each instruction does to the pixels. Then you use what you learned to make the large structure you imported look like it did in Blender; rather than following a tutorial on how to make it look the same, you use the skills you picked up when you learned the basics. That way you learn how to create small samples to debug, etc., because you did the small samples in code before you did the Blender import.


The modern APIs can be quite challenging, on the contrary. Binding models, synchronization and barriers can be quite tricky. I group my conceptual knowledge of a render runtime into four main parts:

1. Graphics Systems. These are the platform APIs and how to use them.

2. Frame graph. The different passes in your frame and the purpose of each, and how they flow together, and how the code around this is organized.

3. Object layer. How draw calls are batched, sorted and submitted. This is the glue that determines which models end up in which passes.

4. Material layer. This includes shaders, resource binding, and more of the tech art side of things.

There's a lot to be done beyond the API world, but there's also a lot to be done on the CPU side and not just the shader side.
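As a rough, hypothetical illustration of what that object layer can look like (real engines pack many more bits into their sort keys; the layout below is invented for the example), draws are often reduced to a compact key and sorted so that consecutive draws share as much state as possible:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Toy draw record: a 64-bit sort key plus whatever the backend needs to submit it.
    struct DrawItem {
        std::uint64_t key;
        int           meshIndex;   // index into some mesh table (hypothetical)
    };

    // Key layout (made up): pass in the high bits, then shader, material, depth.
    std::uint64_t MakeKey(std::uint8_t pass, std::uint16_t shader,
                          std::uint16_t material, std::uint16_t depth) {
        return (std::uint64_t(pass)     << 48) |
               (std::uint64_t(shader)   << 32) |
               (std::uint64_t(material) << 16) |
                std::uint64_t(depth);
    }

    // Sorting by key groups draws by pass, then shader, then material,
    // which minimizes state changes when the list is submitted.
    void SortDraws(std::vector<DrawItem>& items) {
        std::sort(items.begin(), items.end(),
                  [](const DrawItem& a, const DrawItem& b) { return a.key < b.key; });
    }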


Unity render pipeline source is available here: https://github.com/Unity-Technologies/Graphics

All the C# code running in the editor and runtimes is here: https://github.com/Unity-Technologies/UnityCsReference

The code that interfaces directly with the platform API at the C++ level is restricted (you can get access but it's not really a viable option for a beginning graphics programmer :)). For many platforms the APIs themselves are proprietary and therefore that code cannot be shared easily.

There's pretty good tooling support around graphics debugging: https://docs.unity3d.com/Manual/RenderDocIntegration.html and https://docs.unity3d.com/Manual/FrameDebugger.html


Your point on the learning difficulties imposed by unity's obfuscation is very important.


I learned unity about 5 years ago and got burnt out on it after a few personal projects and a few freelance side gigs I took that used it. But I was not a decent programmer then and I have earned a CS degree in the time since. I actually used 3DS Max and Blender more than Unity but have since been distracted with web development and once I’m settled in my new job I am planning to write a raytracer. Any other recommendations?


Check out TinyRenderer, a 500-line CPU renderer with some essential parts similar to OpenGL, such as a z-buffer, vertex and pixel shaders, texture mapping, shadows, etc., with articles explaining how things work:

https://github.com/ssloy/tinyrenderer
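To give a flavor of what it covers, here is a hedged sketch of the per-pixel depth test such a CPU rasterizer performs (the buffer layout and the "greater is closer" convention are assumptions of this sketch, not taken from tinyrenderer's code):

    #include <cstddef>
    #include <vector>

    // Only write a pixel if it is closer than what the z-buffer already holds.
    // zbuffer is assumed to hold width*height floats, initialized to -infinity.
    bool DepthTestAndWrite(std::vector<float>& zbuffer, int width,
                           int x, int y, float z) {
        float& stored = zbuffer[std::size_t(y) * width + x];
        if (z > stored) {      // greater-is-closer convention in this sketch
            stored = z;
            return true;       // caller writes the color
        }
        return false;          // occluded: keep the existing pixel
    }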


Oh this is awesome! Thanks!


https://www.gabrielgambetta.com/computer-graphics-from-scrat...

This walks through the creation of a raytracer and a rasterizer in a way that was pretty easy for me to understand. It’s not exhaustive; it’s just right.


The book RayTracingInOneWeekend[1] is a good starting point for people looking to write a raytracer.

[1]https://raytracing.github.io/books/RayTracingInOneWeekend.ht...


> APIs are the easy part of graphics programming. Now perhaps I say that out of bias, as I focus on shaders and artist work pipelines. The author focuses on APIs and thus reached for a simpler API as a teaching on-ramp, while I would have new learners run the other way and learn the basics of Blender.

Is there some sort of taxonomy of graphics programming that lays out these different areas of work? Are they recognized or common specializations or is each practitioner’s situation unique and circumstantial?


Thanks for sharing! As someone who's not familiar with graphics programming, I would say recommending starting with DirectX is probably the wrong choice: it looks just as complicated as other solutions, but will lock you into the Windows ecosystem. In contrast, OpenGL/Vulkan are widely ported. Same story with the undecided recommendation for Unreal/Unity/Godot: only Godot of those three is free software, and it has an amazing community. Even if you end up using something else in production, it's probably a good place to start since you'll find friendly people and the source code to dig into in case you run into oddities.

As an aside, I was curious about "The technical interview [LEAKED]" but the linked article itself links to Dropbox, which won't let me download it without an account. If you really want to share that piece of content, maybe consider uploading a copy to your blog or the Internet Archive? Thanks for the read!


All my work with game graphics is basically software rendering, but I have it on good authority from actual game developers that DX11 specifically is actually a pretty darned good API that is a pretty good match to how current GPUs actually work. Not to mention that it has pretty good tooling for debugging. Metal is also supposedly pretty good.


I was able to open "The technical interview [LEAKED]" without a dropbox account. Just had to close the login modal.


Vulkan is terrible to get started with. It's not so much a learning curve as a learning electric fence.

DirectX11 was a remarkably friendly API to learn.


I disagree; if you stop chasing infinite progress it's very manageable. As hardware peaks, now is the perfect occasion to settle on something older and make a stand there.

They can deprecate it all they want; they won't be able to remove it. In fact, they will probably maintain and improve it for eternity.

I have chosen OpenGL (ES) 3 and I'm making my 3D action MMO client on top of that.

Metal, Vulkan and DX12 bring nothing of gameplay value to the table; it's "visual only", embarrassingly parallelizable problems they help with, which increases motion-to-photon latency in most cases.

The fragmentation is a sign of desperation.


The game I'm working on uses compute shaders for pretty much all the gameplay.

These APIs are just to control the GPU, they are just complicated to give developers flexibility. Just because you can't think of uses for these features for gameplay doesn't mean they don't exist.

Perhaps I'm misunderstanding "The fragmentation is a sign of desperation" but Metal, Vulkan and DX12 have very clear reasons they exist:

-Metal was created by Apple to unlock features that their GPUs had that OpenGL ES 3.0 does not support. This is pre Vulkan so that wasn't an option.

-Vulkan addresses the desire by developers to make more low level use of GPUs. AMD Mantle started the trend but it became clear that either a standard needed to be developed or every platform/gpu maker would have their own API.

-DX12 was created as the successor to DX11 to address the same issues as Vulkan but in the way that the Microsoft team wanted to.


Compute shaders are trying to fit an elephant on a scooter. The reason Unity has poor animation performance with Mecanim is that they use compute shaders for skinned mesh animation so they can apply shaders across assets, but that forces them to send all model data every frame, sealing the fate of that engine to the scrap heap.

I think the path forward will be massively parallel microcontrollers with some new memory management replacing the MMU in CPUs allowing for the CPU and GPU to merge. Look at the RP2040 microcontroller for how that is progressing, it will take decades if it ever happens.

GPUs will never do general purpose computing as well as CPUs, they don't handle branching the way you need so the only way to solve this problem is to make the GPU more like a CPU but then manage to scale the memory access.

The VAO was the last meaningful feature added to OpenGL, companies are desperate to always develop new things because that is how our economy works now, but eventually our stored energy is going to deplete and we will have to settle on things long term.

I mean, with the newest APIs you are throwing a lot of still very performant hardware on the junk pile... I'm targeting the Raspberry Pi 4 as my lowest-performance device, at 5W with a single CPU core and the GPU maxed out...


Kind of a weird New Years fever dream tangent. Bear with me:

Say we have a scene to render and a frame can be rendered in 10ms given the chosen game engine or graphics API or whatnot. There is a hypothetical optimum code for rendering that scene that can do it in less time. And the delta between those times is the price you pay for API abstractions, human abstractions (eg. maintainable code vs clever code), and other things.

Is this a concept with known terminology?

I was thinking tonight, “my hardware is running this game at 45FPS and I know that it could hypothetically be WAY faster if we put a ton of money and brilliant engineering behind it.”

And that got me thinking about how you would go about measuring efficiency loss or waste. We’ve all experienced it: two comparable games running at vastly different performance levels given the various technical and business choices made.

Furthermore, I’ve been very curious about why there isn’t a tool that can provide a concrete answer to, “what part of my PC is bottlenecking performance?” I’ve been playing a game that’s running slow and noticed GPU is never above 80% and CPU is pegged at 100% so I shuffled my computer hardware to essentially upgrade my CPU by a lot. Didn’t have any effect.


NVIDIA's internal term for this is "Speed of Light", or "SoL", though that's more focused on theoretical GPU throughput and efficiency.

Most of the time that you see you're locked on the CPU, well, rendering is a huge cost, but oftentimes it's gameplay code, it's the sound engine, it's the file I/O system, it's the Lua scripting. Rendering might very well be a very small amount of the CPU frame time, though it's still there too. As a game developer, when I profile a game, I usually don't see the render thread popping up as much. Things I remember:

* Some audio engine code that worked by running an expensive hashmap lookup many, many times a frame (fixed by changing how the hashmap lookup worked, which accidentally broke an older game, then had to add a configure switch to configure between fast and slow)

* AI pathfinding code that recursively subdivided a Bezier spline. Recursion was unfortunately slow on a specific console platform due to a compiler bug (and this was in 2019!). Fixed by changing it to an explicit while loop (sketched after this list).

* A wind simulation system that simulated an entire world's worth of wind physics. Game engine was originally built with much smaller levels in mind, but carried forward for 15 years and now the levels were much bigger. Fixed by only simulating what was needed in the shot.

These things are a huge balancing act, and we fix them by profiling and getting things to run on our minspec hardware. And sometimes we miss.
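A hedged sketch of that pathfinding fix, purely to show the recursion-to-loop transformation (the types and the flatness test are invented, not the actual game code):

    #include <vector>

    struct Segment { float t0, t1; };   // a parameter range of the spline (hypothetical)

    // The recursive version conceptually calls subdivide(t0, mid) and subdivide(mid, t1)
    // until a segment is flat enough. This does the same work with an explicit stack,
    // avoiding deep call chains entirely.
    void SubdivideIterative(float t0, float t1, float flatness,
                            bool (*isFlatEnough)(float, float, float),
                            std::vector<Segment>& out) {
        std::vector<Segment> stack{{t0, t1}};
        while (!stack.empty()) {
            Segment s = stack.back();
            stack.pop_back();
            if (isFlatEnough(s.t0, s.t1, flatness)) {
                out.push_back(s);                  // emit a line segment
            } else {
                float mid = 0.5f * (s.t0 + s.t1);  // split and revisit both halves
                stack.push_back({mid, s.t1});
                stack.push_back({s.t0, mid});
            }
        }
    }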


It's hard or impossible to know that delta. The amount of complexity that goes into rendering a modern scene is beyond what could be formally specified without putting nation-state level resources and decades of time into it. Even then, you'd have to go through a massive amount of analysis and development to find the most optimal implementation for a particular problem, and it would only be a local optimum, not guaranteeing maximum performance for the entire scene. Lastly, we don't necessarily know the absolutely fastest algorithms for any particular tasks that might be executed in rendering a scene. At best you could use statistical methods to guess at what the delta is compared to a likely optimum based on similarly complex scenes that are recognized as maximally optimized by known techniques (e.g. the demo scene)


Appreciate the response. That’s in line with my amateur musings.

It’s funny how it might be impossible to quantify but it’s often trivial to qualitatively know, “this ought to be way faster…” (but also the biases that make us think that given we don’t appreciate what some games have to do to render that state…)


It's easy to "trivially know" because it's really expensive to disprove the statement, so you continue believing in your original one.

I am not in game development, but was recently looking into some deep learning networks. A lot of compute, a bazillion layers of abstractions (erm, Python), but at the end of the day, 98% of the time was really spent calculating the matmul. I was lucky that it was easy to check and estimate the abstraction cost (not in great detail), and disprove my original thought.

Had I not checked it, I'd still "trivially know" that far more perf loss happened because of abstractions.


The term you're looking for might be "abstraction penalty".

For tools that can identify where the bottlenecks are, I recommend watching Emery Berger's talk "Performance Matters" from CppCon 2020 (either of the two links below should work). He introduces some innovative new profiling tools for this sort of analysis.

https://www.youtube.com/watch?v=koTf7u0v41o

https://www.youtube.com/watch?v=VzyhpbrC2Bs


The guy is unconvincing.

I don't believe RAM layout is that important. Of course it can affect perf, but slightly, not by 40%. For instance, Windows has ASLR (address space layout randomization) for security reasons, enabled by default. Microsoft would not do that if it would cost 40% of performance at random.

However, there are quite a few random factors. Random choices made by C++ optimizers can contribute about 20% of randomness. Computer thermals when running the test can contribute 40%.

Too much statistics for my taste. For practical purposes, a simple "best of 5 runs" is usually adequate. If that benchmark is too unstable (typical for microbenchmarks with runtime measured in nanoseconds), "best of 20".

The statistics are probably wrong because these distributions are not Gaussian. The execution time of programs is bounded from below for several reasons (no time machine, CPU throughput) but not from above (if you're really unlucky, the computer may stall and the program will never complete); this factor creates a large asymmetry in the distribution.

It's hard to selectively slow down code to emulate a performance profile. You'd want the same power consumption, and the same load on external devices like disks and the GPU.


> I don't believe RAM layout is that important. [...]

ASLR has nothing to do with the type of memory layout discussed here. ASLR only impacts compiled code/data, and only entire shared objects/executables at a time.

A bad memory layout can have a huge impact on perf. A simple example is the iteration order of a 2D array, where not doing sequential access can result in a ~5x slowdown.
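A minimal sketch of that access-pattern difference (array sizes and types chosen arbitrarily):

    #include <cstddef>
    #include <vector>

    // Row-major order: consecutive inner-loop accesses are adjacent in memory,
    // so the cache and prefetcher do their job.
    long long SumRowMajor(const std::vector<int>& a, int rows, int cols) {
        long long sum = 0;
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c)
                sum += a[std::size_t(r) * cols + c];
        return sum;
    }

    // Column-major traversal of the same row-major array: every inner-loop access
    // jumps `cols` elements ahead, which defeats the cache for large arrays.
    long long SumColumnMajor(const std::vector<int>& a, int rows, int cols) {
        long long sum = 0;
        for (int c = 0; c < cols; ++c)
            for (int r = 0; r < rows; ++r)
                sum += a[std::size_t(r) * cols + c];
        return sum;
    }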

> Computer thermals when running the test can contribute 40%.

Only if you forgot to apply thermal paste.


> A bad memory layout can have a huge impact on perf. A simple example is the iteration order of a 2D array, where not doing sequential access can result in a ~5x slowdown.

I know all that stuff, but the presenter doesn’t talk about RAM layout of data structures. They talk about a few [kilo]bytes offset caused by differently sized environment variables, and layout differences caused by linking order.

> Only if you forgot to apply thermal paste.

Try benchmarking single-threaded code in two cases: in a cold state, and when the rest of the CPU cores are running something like a CPU stress test (but not accessing I/O or the L3 cache, i.e. not directly consuming any shared resources). You will easily get above a 40% difference, despite the thermal paste.


ASLR only randomizes a handful of base addresses of code and data sections, heap, stack, etc., but it doesn't change how data items or functions are located relative to each other.

Memory layout is most definitely important just because memory accesses have such a high latency. This cost may be hidden by prefetching and the cache hierarchy, but keeping the caches well fed is exactly why memory layout matters.


> Memory layout is most definitely important just because memory accesses have such a high latency

Indeed, but that’s not what the video is about. They don’t discuss how to implement cache friendly data structures.

They tell how small random differences introduced by the size of environment variables, and by linking order, affect performance. The statement seems to be based on this article: https://users.cs.northwestern.edu/~robby/courses/322-2013-sp... The problem with that article is that it's entirely based on one synthetic test. And that test is rather unnatural IMO; that's not how people usually write performance-critical code.


I see such effects somewhat regularly, albeit not at an overall impact of 40%, with precise code layout having the biggest impact, leading to different L1i and iTLB hit ratios. Of course, that requires execution costs to be well spread around, rather than all in a small amount of code.

In my case, working on postgres, this is partially caused by the old school recursive row-by-row query executor model...


I know it can happen, but I would expect the result to be a couple of percent.

OTOH, I did observe up to 20% randomness based on other choices made by the optimizer, compiler and linker. In my experience, the main source was different decisions about what to inline, especially for release builds with LTO/LTCG. For me, a good workaround was compiler-specific forceinline/noinline function attributes.

However, my C++ code is probably very different from what's in postgres. I usually write and optimize manually vectorized and OpenMP parallelized numeric stuff, FP64 or FP32, doing little to no I/O.


You may be interested in FGASLR, which can do per-function random offsets.


Gonna take a wild guess: Cities: Skylines? I upgraded from a 1600X/GTX 1060 system to a 10th-gen-i7/RTX 2080 system and I still can't get higher than 35fps. A city with 3k citizens and a city with 100k citizens may run at 35fps and 32fps respectively. Though I can run it with supersampling at 250% now and my GPU still doesn't bottleneck it (300% supersampling is what finally cracks my fps, jumping from 30-ish to 5).

It's so weird to me, the game is definitely CPU bound, I would imagine it's almost all because of AI. So I would expect performance to decline linearly as my city grows. But it doesn't. It's frustrating when I'm playing a small town, but also pretty amazing when I'm playing a big metropolis.


Pillars of Eternity 2.

The ignorant part of me wants to scream, “it’s a mostly 2D game with almost no calculations being made!” But of course I don’t know what’s actually going on behind the scenes.


Oh wow. Never played it myself, but some friends do so I've seen a bit of the game. Given the genre and graphics I would definitely not expect it to have such poor performance!


Dwarf Fortress.

The graphics are... spartan is a generous way to put it.

But it will tax your system! It's doing some really crazy stuff behind the scenes.


What you seem to be describing is the maximum theoretical efficiency. It's a common thing in lots of fields, and it's valuable because it gives you a goal you can aim for, and know that you have reached the theoretical limit of performance. As a director of mine once put it, it's important to know when to stop [trying to improve something]. For example, the Carnot cycle gives us a theoretical maximum efficiency of a thermodynamic engine, and the Cramér–Rao bound gives us an upper bound on how much accuracy we can extract from noisy measurements.
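For concreteness, the Carnot bound mentioned above is simply (in LaTeX notation):

    % Theoretical ceiling for any heat engine operating between a hot reservoir
    % at absolute temperature T_h and a cold reservoir at T_c:
    \eta_{\max} = 1 - \frac{T_c}{T_h}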

I don't know what this would be, or how we could determine this, in the graphics world, but it's still a very valuable and interesting idea.


Thanks for the term. That helps with searches. :)


You might be interested in the Factorio blog posts (also 2D, also a game where graphics aren't usually the bottleneck) that talk about optimization, like this one:

https://factorio.com/blog/post/fff-204


There's a concept in the research literature called "superoptimization" -- finding the absolute smallest program that meets a given specification. It's worlds away from being used for anything larger than a small handful of operations, but it's very close to what you're asking about.


Appreciate the term. Another lead for me to google and learn about.


So many words and not one single mention of ray tracing, that's pretty disappointing.

As someone who's spent basically their entire career in offline rendering, it never fails to surprise me how many people think that using someone else's API is the alpha and the omega of GP.


What should the article have included about ray tracing to be complete?


Not only is ray tracing the future of rendering, it's by far the best way to learn basic graphics programming theory, in 100% your own tiny code.

I've literally taught a 12 year old this way from absolute scratch, and have been teaching it this way for over 20 years now; it's a bajillion times more upfront complexity to do a rasterisation engine from scratch, so in practice almost nobody does this, and we're back to using someone else's code / silicon to draw your pixels.
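As a hedged example of how small that "tiny code" can be, here is the ray-sphere intersection at the heart of most first ray tracers (my own notation and conventions, not taken from any particular course):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3  Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Returns the nearest positive hit distance along the ray, or -1 on a miss.
    // Ray: origin + t * dir, with dir assumed normalized.
    float RaySphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
        Vec3  oc   = Sub(origin, center);
        float b    = Dot(oc, dir);                 // half of the usual 'b' term
        float c    = Dot(oc, oc) - radius * radius;
        float disc = b * b - c;                    // discriminant of the quadratic
        if (disc < 0.0f) return -1.0f;             // the ray misses the sphere
        float t = -b - std::sqrt(disc);            // nearer of the two roots
        return (t > 0.0f) ? t : -1.0f;
    }

From there, shading, shadows and reflections are just more rays.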

All this is quite apart from the massive recent development of hardware ray tracing. Eventually your OpenGL driver will be emulating rasterisation on modern ray tracing hardware, because there's just no getting around the fundamental need for ray tracing to make progress in CG. All credit to Nvidia for going in big on ray tracing, to the point where many young people apparently think they invented it, RTX On etc..

It's in my opinion ridiculous for an article that purports to talk about GP in general to have exactly zero mentions of ray tracing, and nonzero usage of the phrase "full stack".


Raytracing is one of those promised techniques that has always been "the future", just like fusion power. The actual future will most likely be various hybrid solutions that (among other things) also make use of the raytracing support in modern GPUs. The triangle rasterizer most likely won't go away, it's a too obvious optimization method, it will just be augmented with other specialized hardware units (like texture sampling and raytracing).


Hi floh, big fan! :) I agree, RT won't fully displace rasterisation, but the balance of rasterisation versus RT silicon usage will definitely shift over time, and at some point you have enough power for "legacy" rasterisation rendering via RT that it becomes not worth it to include in top end HW.

It's also possible that with high enough pixel density (e.g. 4K 27-32") and a bit of assistance from the tensor units, you can get better performance vs quality through non-uniform/irregular subsampling; a more extreme example of this is foveated rendering[0]. Something like an extension to current DLSS, more in the direction of compressed sensing[1].

[0] https://en.wikipedia.org/wiki/Foveated_rendering

[1] https://en.wikipedia.org/wiki/Compressed_sensing


Interesting. Do you have a go-to resource you would recommend to somebody learning RT?


Certainly, there was already a mention in this thread about Ray Tracing in One Weekend, which is pretty good (IMO mostly on account of its brevity): https://raytracing.github.io/books/RayTracingInOneWeekend.ht...

Also good is Scratchapixel: https://www.scratchapixel.com/

Once you've written a few derpy ray tracers, dig into https://pbrt.org/

I like to paraphrase a famous Go (the board game) proverb: "Get your first 5 ray tracers over with as quickly as possible."


Thanks this is excellent! Funny, I play some Go but never heard any variation of that proverb. Maybe that's why I haven't made Dan 1 yet...


Lose Your First 50 Games As Quickly As Possible: https://senseis.xmp.net/?LoseYourFirst50GamesAsQuicklyAsPoss...

I'm still DDK, but can at least appreciate some of the beauty of the game, and finding little parallels throughout life etc :)


The most fun I had with (2D) graphics programming was trying to reinvent the whole wheel myself in a high-level language (C# in this case). My approach was to simply output bitmap images, subsequently compressed into JPEG by way of libjpeg-turbo, at a high enough framerate to produce a believable result in a web browser, other clients, or even as input into ffmpeg.

To my surprise, I was able to get a fairly stable 60fps@1080p (in chrome) using this approach. Not 1 line of GPU-accelerated "magic" required. Now, this is just a blank image sent to the client view each time. The "fun" part was figuring out how to draw lines, text, etc. without absolutely crippling the performance. I got far enough to determine that a useful business system/UI framework could hypothetically be constructed in this way. Certainly, a little too constrained for AAA 3D gaming, but there are many genres that could fit a smaller perf budget. Controlling the entire code pile is an extremely compelling angle for me and the products I work on.
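For anyone curious what "figuring out how to draw lines" means at this level, here is a minimal sketch of the classic Bresenham approach over a raw pixel buffer (the commenter used C#; this sketch is C++, and the buffer layout is an assumption of the example):

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Draw a line into a 32-bit RGBA framebuffer using integer-only Bresenham.
    // The buffer is assumed to be width*height pixels, row-major.
    void DrawLine(std::vector<std::uint32_t>& pixels, int width, int height,
                  int x0, int y0, int x1, int y1, std::uint32_t color) {
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;
        while (true) {
            if (x0 >= 0 && x0 < width && y0 >= 0 && y0 < height)
                pixels[std::size_t(y0) * width + x0] = color;   // plot, clipped to bounds
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }              // step in x
            if (e2 <= dx) { err += dx; y0 += sy; }              // step in y
        }
    }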

I could have started with D3D, Vulkan, et. al., but I can guarantee I would not have spent as much time or learned as much down those paths. I have already spent countless hours screwing with these APIs (mostly D3D/OpenGL) over my career, only to end up with a pile of frustrating garbage because I couldn't be bothered to follow all the rules exactly the way they wanted me to. I personally view the current state of graphics APIs and hardware as extremely regrettable and "in the way".

To me, a modern graphics card and its associated proprietary functionality stack is like the most recent Katy Freeway (I-10) expansion in Houston. A totally hopeless attempt at solving a much deeper and more difficult question (i.e. traffic or how to create fun/useful visual experiences).


With "standardized" game engines such as Unreal Engine and Unity and somewhat standardized asset creation pipelines (PBR texturing) a team can start building a great looking game with good graphics and probably good performance by just using "gameplay programmers". Back in the day it was impossible to create good looking games at all without a "game graphics programmer". Maybe on mobile that is different. I would focus on graphics programming in a more broader sense, and maybe consider going into (medical) or CAD visualization fields.. And there I guess targeting OpenGL / Vulkan is a much better platform to be "efficient in".

What I am trying to say is that trying to make "_game_ graphics programming" your home is maybe not the best career choice for a graphics programmer.


I agree with you. 90% of making things is mainly knowing the “general workflow” of dedicated tools and iterating as fast as possible by building a dedicated pipeline.

Low-level graphics programming is still a thing and it would be a lie to say it's gone, but it's highly specialized. A low-level graphics developer would spend their days focusing on low-level things. Before the Unreal/Unity era, even big game companies tried to share those devs between projects.


Computer graphics is where I found my love for software, math, and art. Needless to say, the first time I got to code lights in raw GL, my mind was illuminated :)

Shameless plug: If you'd like to know how to build a simpler version of THREE.js from scratch in WebGL, you can check out my book: https://www.amazon.com/Real-Time-Graphics-WebGL-interactive-...


When people say to me "I learned all that maths in school and it was useless", I think of how much of it came in useful when I wrote my first 3D engine in my teens. Suddenly it wasn't so boring any longer.

When I started coding graphics in the early 80s practically every video game was a one-person show - every line of code, every graphic and every sound effect and line of music. By the time I started doing game development as a job in the mid-90s we were up to small teams of about 10 people, and we could all still go out together for a beer after work. Now look at it... unbelievable how many people are needed for a modern AAA online game. I can't even fathom it.


Absolutely! When you take into account just the artistic aspects (e.g. modeling, texturing, rigging, etc. with professionals specializing just in particular aspects like facial structures, etc.), it really is quite remarkable.

I imagine we'll get more efficiency as procedural and generative techniques become more powerful: characters, worlds, mechanics, etc. are all programmatically generated with little-to-no intervention.


> I imagine we'll get more efficiency as procedural and generative techniques become more powerful: characters, worlds, mechanics, etc. are all programmatically generated with little-to-no intervention.

This absolutely has to happen because humans are really crap at some things, like animating human beings. When I was developing a 3D game in the mid-90s I watched our character animator trying to do the animations by hand and you can almost never get it to look remotely realistic; even now with mo-cap and bones and all the tech, it still looks totally fake (a big recent example for me is Thanos in the MCU who moves like a wooden puppet). It needs that ML layer to sit above everything and fix all the little details that make humans look human and move human.


I wrote a game engine once over a 5 year period and used it for exactly one project[0] before acknowledging just how complicated it would be to maintain, given the progress of other engines. I was constantly reading papers for new techniques, trying to decide if they were feasible, while trying to navigate the ancient-to-modern spread of OpenGL documentation and tutorials.

In the end I'm pretty proud of what I came up with. I had Blender integration so Blender could be used to design my scenes, animations, etc., and exported via my custom engine format. I had a Lua-based scripting language to design things that were dynamic at runtime. And I had a bunch of cool (to me) shaders, like depth of field, gaussian blur, and glow. I had some advanced techniques like light probes[1] that I (kind of) open sourced.

The end result of what I wanted to accomplish though (a project) was different from what I was spending most of my time working on (the engine), so I had to call it quits eventually. It was an excellent learning experience though, and I would recommend it to anyone who wants to learn graphics from the ground up.

0. https://www.arwmoffat.com/work/normal-distribution 1. https://github.com/amoffat/blender-lightprobes/blob/master/_...


If you are wanting to dip your toes in graphics/gamedev and don't want to do low-level rendering, nor super heavy Unity, consider Raylib or the C# Raylib Wrapper I wrote.

- https://www.raylib.com/

- https://github.com/NotNotTech/Raylib-CsLo


Great suggestion, thanks very much. Looks like just the right thing to move onto from pygame.


For learning the basics of GPU programming Metal is a great API to start, especially if you come from a C++ background. It’s clean, straightforward, ergonomic, relies on same old concepts like pointers, references and templates and does not require you to jump through weird hoops like DX12 or Vulkan. And once you understand the concepts it’s easy to move to any API.


Could someone suggest a good book or articles about 3D software rendering from scratch? I want to use just the plain Win API or SDL to make, let's say, a 3D rendered cube. Is "Computer Graphics: Principles and Practice" 2nd edition all that I need, or do you recommend other books?


The suggested site https://learnopengl.com is great and I've seen similar websites for Vulkan and WebGPU.

I also recommend grabbing the suggested book, Real-Time Rendering, as well. I've found it to be great learning material, having a well-detailed chapter dedicated to each of the effects used in modern renderers. It works great as a reference, though it doesn't have any code samples, and it doesn't concern itself too much with API details. In my learning flow, I go through a chapter and try to implement one of the techniques mentioned for the effect, usually referencing one of these web resources.


Yeah, I've heard about this site. But I need a book about software 3D rendering. I want to build something very simple but without any APIs or libraries such as OpenGL or DirectX. And the 4th chapter of Real-Time Rendering seems like what I need.


I'd be curious to hear others chime in, but I feel like the situation is very similar to this article talking about games. Do you want to get pixels to the screen/file? Shaders and materials (authoring or implementing)? How commercial renderers are organized? My job is mostly using commercial tools, but a lot of us have made toy renderers, read books, and taken classes to reimplement the fundamentals.

It's been a while, but a few common, IMHO approachable, sources are:

https://github.com/ssloy/tinyrenderer/wiki - a rasterizer that starts with drawing a line

https://raytracing.github.io/ - a basic raytracer that incrementally adds features

https://www.pbrt.org/ - I've heard good things from people who have gone through the whole book. I haven't taken the dive, but thumbed through it and jumped around.

I wouldn't dismiss realtime stuff, either. Often, the concepts are similar but the feedback loop is much faster. I liked the UE4 docs on shaders talking about pbrt and the simplifications they chose when implementing it. There's a bunch of resources out there. I don't think single source is comprehensive. I say, start with something simple and find resources on specific things you want to know more about.


Let's just say that I want to create a text file with the 3D coordinates of an object (a cube or sphere), and my software renderer should read this file and display it on the screen. And I can zoom in/out, change the coordinates of vertices with a mouse, rotate and move it. This cube can be wireframe or colored, and if it's wireframe I want to see which edges are visible and which are not. And to render this cube I want to use only setpixel/drawline functions from the Win API or SDL. Thanks for the links.


That sounds like a fun challenge. If you're constraining yourself to use as few libraries as possible, I'd go with OBJ [1] for the 3D mesh and PPM [2] for writing images. It's easy to implement a bare-bones reader/writer and some OSes (like macOS) can show them in the file browser. Raytracing in One Weekend goes over PPM. There are a bunch of header-only libraries that handle different file formats, like stb_image [3]. I usually end up using those when I start dealing with textures or UI elements. I don't use Windows so I haven't used their APIs for projects like this. I'd usually go for imgui or SDL (like you mentioned). tinyraycaster, a sibling project of tinyrenderer, touches on those [4]. I liked LazyFoo's SDL tutorial [5]. Good luck!

[1] https://en.wikipedia.org/wiki/Wavefront_.obj_file

[2] https://en.wikipedia.org/wiki/Netpbm#PPM_example

[3] https://github.com/nothings/stb

[4] https://github.com/ssloy/tinyraycaster/wiki/Part-4:-SDL

[5] https://lazyfoo.net/tutorials/SDL/index.php
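As a sketch of how little the PPM route mentioned above takes (binary P6 variant; the RGB buffer layout is an assumption of this example):

    #include <cstdint>
    #include <fstream>
    #include <string>
    #include <vector>

    // Write an 8-bit RGB buffer (row-major, 3 bytes per pixel) as a binary PPM (P6).
    // Many image viewers, and the macOS Finder, can open the result directly.
    bool WritePPM(const std::string& path, const std::vector<std::uint8_t>& rgb,
                  int width, int height) {
        std::ofstream out(path, std::ios::binary);
        if (!out) return false;
        out << "P6\n" << width << " " << height << "\n255\n";
        out.write(reinterpret_cast<const char*>(rgb.data()),
                  static_cast<std::streamsize>(rgb.size()));
        return bool(out);
    }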


Yeah, I know about PPM but I didn't know about .obj files. That's great. And SDL is a good cross-platform choice because Linux doesn't have anything similar to the Win API's GDI. Thanks for the links.


I highly recommend this course "3D Graphics Programming from Scratch"[0] to dive into software rendering. It uses the SDL and basically starts from first principles.

https://courses.pikuma.com/courses/learn-computer-graphics-p...


Yes, it seems like that is what I'm looking for, thanks.


This online article series is quite good for Vulkan on Rust: https://hoj-senna.github.io/ashen-aetna/


Seems like some chapters on perspective are what I need. But I guess I have to buy a book about 3D math for game development.


Try pikuma.com. It covers a complete software renderer from scratch using C and SDL.


What should I do if I want to draw 2D lines? I just want to draw nice looking, efficient 2D lines. I am aware there are many high level engines and libraries that let you draw lines, but I have tried them all over many years and they are all bad. This is a serious plea - please help.


On Windows, the best way is often Direct2D https://docs.microsoft.com/en-us/windows/win32/direct2d/dire...

On Linux, you have to do that yourself. The best approach depends on requirements and target hardware.

The simplest case is when your lines are straight segments or polylines of them, you have decent GPU, and you don’t have weird requirements about line caps and joins. In that case, simply render a quad per segment, using 8x or 16x MSAA. Quality-wise, the results at these MSAA levels are surprisingly good. Performance-wise, modern PC-class GPUs (including thin laptops and CPU-integrated graphics in them) are usually OK at that use case even with 16x MSAA.
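A hedged sketch of that "quad per segment" expansion, done on the CPU before uploading vertices (the math is standard; the struct and function names are made up for the example):

    #include <cmath>

    struct Vec2 { float x, y; };

    // Expand a 2D segment (a -> b) of the given width into the four corners of a
    // quad (two triangles), offset along the segment's unit normal.
    void SegmentToQuad(Vec2 a, Vec2 b, float width, Vec2 outCorners[4]) {
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len <= 0.0f) len = 1.0f;                 // guard against a degenerate segment
        float nx = -dy / len, ny = dx / len;         // unit normal to the segment
        float h  = 0.5f * width;
        outCorners[0] = {a.x + nx * h, a.y + ny * h};
        outCorners[1] = {a.x - nx * h, a.y - ny * h};
        outCorners[2] = {b.x + nx * h, b.y + ny * h};
        outCorners[3] = {b.x - nx * h, b.y - ny * h};
    }

Rendered with 8x or 16x MSAA, those quads already look like decent antialiased lines.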

If MSAA is too slow on your hardware but you still want good quality AA, it’s harder to achieve but still doable. Here’s a relevant documentation from my graphics library for Raspberry Pi4: https://github.com/Const-me/Vrmac/blob/master/Vrmac/Draw/VAA...

Update: if you only need to support nVidia GPUs, they have a proprietary GL extension, NV_path_rendering, which does pretty much the same thing as Direct2D, but is not exclusive to Windows.


Thank you!


Check out my free open-source book "A Bitmapper's Companion"; it's a reference book about 2D graphics algorithms with code examples in Rust. https://github.com/epilys/bitmappers-companion


Thanks!


Have you tried SDL2? I’ve used it in the past for 2d stuff and didn’t have any problems with it.


>The more I wrote, the more I understood the issue was more complex than that

This is why discussion of a topic and the ability to elucidate it in speech or writing are important. It is easier to examine ideas in an orderly format. I often model conversations in my head about my opinions (about work, things in my home life, just in general).

As a side note: Strong ordering of ideas is something that can be developed naturally, but it is also a skill learned through willing engagement with oft-maligned academic topics like humanities (liberal arts) courses. This is especially true of learning ordered writing. I often see devs who fulfill requests precisely, literally as they are requested, and then bemoan how it doesn't even make sense. Then I see devs who demand detailed conversations to pick apart each word of a request so they don't just provide what was asked for, but also what was needed. ( Sometimes to the frustration of the requestor for whom their needs seem obvious since they live every day with the area of work)

I'm not claiming humanities are some sort of magic bullet on this, only that they can be a convenient path to skills that are actually useful, albeit subtly hidden by a surface-level course title that fills a 3-credit literature requirement. And yes, some people don't need it. But colleges (in the US, my area of familiarity) do have a mechanism for getting out of a lot of them if you don't need it: take AP exams that demonstrate you already have those skills and get the requirements waived.

You may wonder, since most have had such courses forced on them, how I could be correct. Well: 1) I make no claim that it always works. And 2) look above for my caveat of willing engagement. Blowing them off as useless won't get you much out of them.

If I had to make a recommendation, I'd direct comp sci majors to fill as many of those courses as possible in philosophy courses, which is an area defined by the requirement for ordered thinking and dissection of meaning on a very granular level. For literature requirements I'd recommend Shakespeare which also often focuses around dissecting multiple levels of meaning from the text. (If you have taken a Shakespeare course and don't know the multiple meanings of "3-legged stool" then you didn't read carefully.) For history, choose courses that focus on the enlightenment and renaissance time periods.


Would've appreciated some books on this list, if people still use those; YouTube tutorials would be great too!


I've been trying to learn how to make games in my spare time on-and-off for 20 years (wow time does fly). Here's my postmortem to serve as a warning for younger versions of me.

My first mistake was underestimating how much math one needs to know. Spoiler: it's all math - mostly linear algebra. That's why any graphics / physics / gamedev book has at least one math chapter. You do need it for everything - from coloring something to making things move on the screen.

My second mistake was not making games. One would think that it would be an obvious thing to do, but it's so easy to underestimate the amount of time it takes to understand graphics programming (in retrospect, just the size and weight of the "Computer Graphics - Principles and Practice" book should have been a gigantic red flag). I ended up writing game engine code instead of game code. Being able to draw triangles on the screen is just not as satisfying for me as making cars chase each other and it also doesn't move the needle on getting a game out the door. In practice this translates to maybe reading "Programming Game AI" instead of "Real-Time Rendering" (given limited spare hours) or actually finishing "Nature of Code" instead of reading "Game Engine Architecture".

Arguably my biggest mistake was being stubborn, but not stubborn enough to persevere through all the crap. Early on I didn't want to use an engine because I generally like to understand things from first principles and I didn't understand how deep the rabbit hole went. It turns out that the first principles in game graphics are closer to a collection of progressively more horrible hacks on how to achieve realism in real time without ray tracing, because that's too slow, and how to patch all the side effects of that reality (don't drop a book on implementing shadows on your feet). People with PhDs describe these techniques in scientific looking papers, which then get discussed at large conferences, referenced in books, etc and normalized. Want to learn a new technique? Read the (dense) paper. Other people decide then to take the best hacks, accelerate them in hardware and expose them as APIs. Decades of optimizations and hacks built on top of each other pushed the entry bar higher and higher, to the point where the amount of complexity one needs to wrap their head around is just not realistic for a beginner, unless they are stubborn enough and diligent enough to persevere through all of it. I should have paid that $100 to use the Torque engine, instead of buying the OpenGL Reference Manual, OpenGL Superbible and the OpenGL Shading Manual.

Lastly I fell into the trap of buying the books, but not working on games, aka "all the gear but no f*cking idea". My shelf is filled with highly regarded books that I read and some I gave up on (because they were over my head). I released 0 games. That said it's a good way to discover gems like "The Ray Tracing Challenge" book. Learning new things is fun and addictive.

I came to the realization that I would have really benefited from having a mentor, a well-defined and structured curriculum, and being a bit more humble about my own ability to jump into this field, but, most importantly, I should have just made games - crappy-looking, never-going-to-make-money, but fun games. That's why I learned to code in the first place, after all.

These days I am using Godot and am finding it easy to use, especially after spending all this time fiddling in the graphics world. However, having children greatly reduced the amount of time I can dedicate to this hobby.

It was rather painful and cathartic to type this out.

P.S. I tell myself I'll find time to read "Physically Based Rendering" (another book that would make a hole in the floor if dropped), "Realtime Collision Detection" (more fun math!) and the newest edition of "Real Time Rendering" (its website is a nice resource to find other graphics books to purchase and never read).


this was a grueling read mate. go make them games, maybe something for your kids.


I have a background in traditional painting (and sculpture, to a lesser extent) and have been trying to learn to program. Graphics programming has been a concrete motivator. Thank you for sharing this article, I found it helpful!


Agree learnopengl.com is a good place to start, but I think people should really write a software renderer from scratch before venturing too far on Vulkan / D3D12. It's easier, more fulfilling, and much more helpful.


for anyone interested in a good tutorial on it,

PragProg has a really good tutorial for building a software-based ray tracer. It's language-agnostic. I'm following it in Rust, but you can use anything that can output to a canvas.

https://pragprog.com/titles/jbtracer/the-ray-tracer-challeng...


I've heard that the salary for game programmers is pretty low compared to other fields. Is this also true for graphics programmers in general? E.g. jobs in AR/VR.


I am a former game dev who worked for one of the top AAA game studios in the world in a lead programmer role. At my first FAANG job, my compensation was like 2x higher than game dev. Now it’s many times higher still but admittedly I have been out of gaming for 15 years and have a senior leadership role. I generally do think that for a given talent/experience level the rest of the industry pays much more with way better work-life balance, it’s a combination of supply/demand and business model (at least for AAA) vs SaaS. You can strike it big in games if you are lucky (e.g. create Minecraft) but it’s much easier to do that at a startup or big tech.


The reason salaries for game programmers are so low is that there is an effectively endless supply of new crops of graduates who have told themselves since they were 12 that they want to make video games for a living. Classic supply and demand dictates that, with such a high supply, big studios can get away with paying very little (and can extract additional "benefits" as well, such as demanding insane hours and so on).


No


Can we say that modern AAA graphics is way more complex than a single human can handle, unless perhaps they started graphics programming many years ago?


No, but it is a full time job.

It's getting much more complicated. Here's an hour long talk on how Unreal Engine 5's Nanite works.[1] This is a huge breakthrough in level of detail theory. Things that used to be O(N) are now O(1). With enough pre-computation, rendering cost does not go up with scene size. See this demo.[2] You can download the demo for Xbox X/S and PlayStation 5, and soon, in source form for PCs. Explore 16 square kilometers of photorealistic city in which you can see a close-up view of anything.

The format of assets in memory for Nanite becomes far more complicated. GPUs need a redesign again; UE5 has to do more of the rendering on the CPUs than they'd like. What they need to do is simple but not well suited to current GPU designs. It's such a huge win this approach will take over, despite that.

New game engines will need to have something comparable. Maybe using the same format. Epic doesn't seem to be making patent claims, and the code for all this is in the Unreal Engine 5 download.

(Basic concept of how this works: A mesh becomes a hierarchy of meshes, allowing zooming in and out of levels of detail. The approach to subdivision is seamless. You divide the mesh into cells of about 20 triangles. Then, each cell is cut by one or two new long edges across the cell. These become the boundaries of a simpler cell with about half as many triangles, but no new vertices. The lower-resolution cell has no more edges than the higher-resolution cell. There's a neat graph theory result on where to cut to make this work. This process can be repeated recursively. So, one big mesh can be rendered at different levels of resolution depending on how close the camera is to each part. The end result is that the number of rendered triangles is determined by the number of pixels on the screen, not the complexity of the model. Hence, O(1).)
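To make the cluster-LOD idea a bit more concrete, here is a toy sketch of picking a level from such a hierarchy by projected screen-space error. Nothing here reflects Nanite's actual data structures or math; it only illustrates the "use the coarsest level whose error stays under a pixel" idea:

    #include <vector>

    // Toy cluster-LOD level: each coarser level roughly halves the triangle count
    // and carries a precomputed geometric error in world units (fields hypothetical).
    struct ClusterLevel {
        float worldError;     // deviation of this level from the source mesh
        int   triangleCount;
    };

    // Levels are ordered fine -> coarse. Pick the coarsest level whose error,
    // projected to the screen, stays below one pixel; fall back to the finest.
    int SelectLevel(const std::vector<ClusterLevel>& levels,
                    float pixelsPerWorldUnitAtThisDistance) {
        for (int i = int(levels.size()) - 1; i >= 0; --i) {      // coarsest first
            float errorPixels = levels[i].worldError * pixelsPerWorldUnitAtThisDistance;
            if (errorPixels <= 1.0f) return i;
        }
        return 0;                                                 // finest level
    }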

Then, all that needs to stream in from the network. That's probably Unreal Engine 6. In UE5, the whole model has to be downloaded to local SSD first. Doing it dynamically is going to require precise download scheduling, and edge servers doing partial mesh reduction. Fun problems.

Then the metaverse starts to look like Ready Player One.

[1] https://www.youtube.com/watch?v=eviSykqSUUw

[2] https://www.youtube.com/watch?v=WU0gvPcc3jQ


The closest thing to the metaverse in the form of open worlds is flight simulators right now. They already have an enormous amount of content to model the entire planet. A lot of that content is generated rather than designed. MS obviously did a nice job with this, and e.g. X-Plane and FlightGear have pretty detailed worlds as well. Most of these worlds are generated from various data sources (satellite imagery, altitude data, OpenStreetMap, etc.). MS does the same but adds machine learning to that mix to generate plausible-looking buildings and objects from the massive amount of data they have access to.

Fusing that with what Epic is doing is basically going to enable even more detailed gaming worlds where a lot of the content is generated rather than designed. Flight simulator scenery is bottlenecked on the enormous number of triangles needed to model the entire world. Storing them is not the issue. But rendering them is. Removing that limitation will allow a much higher level of detail. X-Plane is actually pretty good at generating e.g. roads, rails, and buildings from OpenStreetMap data, but the polygon counts are kept low to keep performance up. Check out e.g. simheaven.com for some examples of what X-Plane can do with open data. Not as good as what MS has delivered, but not that far off either. And they actually provide some open data as well, which simheaven recently integrated for X-Plane.

Flightgear actually pioneered the notion of streaming scenery content on demand rather than pre-installing it manually. X-plane does not have this and MS has now shifted to doing both installable scenery and downloading higher resolution stuff on demand while you are flying.

Once you hit a certain size, pre-installing all the content is no longer feasible or even needed. Once these worlds become so large that exploring all of it would take a life time (or more), it's more optimal to only deliver those bits that you actually need, when you need them. And doing that in real time means that even caching that locally becomes optional. And even the rendering process itself does not have to be local. E.g. NVidia and a few others have been providing streaming games with cloud based rendering. That makes a lot more sense once you hit a few peta/exa bytes of scenery data.

Another interesting thing is generating photo realistic scenery from simple sketches with machine learning trained on massive amounts of images. This too is becoming a thing and is going to be much more efficient than manually crafting detailed models in excruciating amounts of detail. MS did this with flight simulator. Others will follow.

Interesting times.


I think we give too much credit to the graphics part of the metaverse. As long as people can immerse themselves in X, I'd say it's good enough. We have had metaverses since the 80s. On my side, I enjoy classic games much more than modern ones too; maybe I'm just weird.


Nanite doesn't do any rendering on CPU and does a lot of work that is traditionally done on CPU (culling) on GPU. Nanite does use a software rasterizer for most triangles, but it runs on GPU.


Oh, I think you're right. The nested FOR loops of the "Micropoly software rasterizer" are running on the GPU.[1] I think. I thought that was CPU code at first. They're iterating over one or two pixels, not a big chunk of screen. For larger triangles, they use the GPU fill hardware. The key idea is to have triangles be about pixel sized. If triangles are bigger than 1-2 pixels, and there's more detail available, use smaller triangles.

[1] https://youtu.be/eviSykqSUUw?t=2134


Much like any other field, it is wide and takes a lot of learning and study. But it is not unobtainable. I started in 2017 I'd say and I feel like I have a good enough grasp of the field of graphics, enough to read and implement papers, and watch modern SIGGRAPH talks.


If you're not imposing the requirement that someone create a AAA game, then no: I definitely know enough artist+devs who can create a high end engine from scratch as well as the art assets.

I can definitely drop into any part of the graphics pipeline from graphics programming, to art creation, AI, etc., and have done so on multiple projects.

That said, doing it all as a single individual is a massive undertaking and incredibly unlikely.

That's why it comes down to capability vs execution. It's definitely possible for a single individual to be able to do each part of the process over multiple projects, but unlikely they could do it all on a single one by themselves.


> artist+devs who can create a high end engine from scratch as well as the art assets...drop into any part of the graphics pipeline from graphics programming, to art creation

Is there a job title for this role? I'm not stellar at art or programming but I'm 'really good' at both and everything in between.


Yep! The catch all term is Technical Artist.

I say "Catch All" because understandably there's a massive variety in types of technical artists (FX, shaders, rigging, generalists, tooling focused etc..) but that's the effective term.

You can join the forum and Slack at https://discourse.techart.online/ to learn more.


Awesome, thanks!


Ultimately it's just one elegantly simple equation that defines the entire field and all of the complexity of rendering realistic graphics: https://en.wikipedia.org/wiki/Rendering_equation

Unfortunately it is impossible to solve exactly for real-world scenes, so we just keep refining better and better estimates. :)
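
For reference, the equation in the form given on that page, with the wavelength and time arguments dropped:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i

Outgoing radiance at a point is its emitted radiance plus all incoming radiance reflected toward the viewer, weighted by the BRDF f_r; since L_i arriving at one surface is the L_o leaving another, the recursion is what makes exact solutions intractable for real scenes.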


Even the rendering equation isn't fully realistic; it doesn't capture wave-optics phenomena like, say, diffraction from a grating.


My reaction when reading this article was surprise at that perspective; it's not one I've seen before. A lot of comments here, as well as the article itself, hint that the issue is that the field has grown so much in the last decade that there's too much for a beginner to get into.

I've worked with graphics programming since before these changes, so I might simply have had the benefit of watching it unfold while already having a decent grasp of the state of the art at the time.

Is it more complex than any one person can handle?

If we're talking grasp of the theory, I don't think so. I feel I have a rough idea of what tech is out there and how things work throughout most layers of hardware and software. And I'm not the most knowledgeable. But I dedicate quite a few hours every week to reading up or brushing up on things.

If we're instead talking about a single person being responsible for the full graphics pipeline in a AAA game, that certainly seems like too much. The projects I've been part of have ranged from teams of ten programmers using an in-house engine to AAA productions based on fully featured engines maintained by a dedicated team. On both ends there were multiple programmers working on the graphics. I've also been the sole graphics programmer on a AA/large indie game using Unity. That's definitely possible and required me to understand and make modifications to most levels of the graphics pipeline.


Depends on how many is "many". Most graphics programmers on AAA games have 5-10 years of experience, and all of them understand the whole pipeline. Games are realtime, so you have a natural ceiling on how complex you can get before you run out of time and drop frames. In VFX you can have people specializing in something like vegetation or fabrics, with 20 years of experience in just that, and nobody else on the team even approaching their level of expertise.


You can learn it all; it's just that one person isn't enough to do all the work required for a big project. It's like full-stack web engineers: you can learn backend and frontend, but a big project has enough work that people don't need to know both.


Maybe, but AAA as a positive value descriptor is a misnomer.

I've come to realize that AAA gamedev is a way to replace actual artistic taste with "output" that can then be scaled deterministically via the normal corporate software engineering levers we all know and hate.

You can make an absolutely beautiful game as an individual or small team. This game can be 2D or 3D or anywhere in between. You can pass on things that AAA games can't because you can have good taste & artistry and AAA doesn't even try to have those things.

It's just programming and digital art; people mythologize this stuff way too much. Can you replicate a AAA game on your own? No, of course not. But you can make a game that is just as visually stimulating and enjoyable.

So overall, I just ignore any talk that treats "AAA" as anything of value or meaning beyond an org chart turning labor into $60 products.


[redacted] Wrong post!


wrong post? :)


Oh. Absolutely!


Graphics Programming needs to be restructured.

There's a lot of focus on API-driven learning: Vulkan, D3D12, D3D11, etc.

Wrong approach.

Learn fundamentals of what the hardware is capable of. For each feature, write down a few ideas of how you could use it by itself. Write down a few ideas of how you could combine it with other features. Think of “I want to do X. What features can allow me to do that?” The APIs, although they are not easily interchangeable, use the same hardware and can often do the same things with a bit of effort.


The problem is that when you're starting out in graphics, you don't even know what you're capable of doing with the hardware you have, and the distance between manipulating things on the hardware and getting something on the screen can seem daunting.

My advice: start out with OpenGL 1 (the old glBegin()/glEnd() stuff) or Raylib (raylib.com), where you can get accustomed to the really basic stuff (primitives, transformations, etc.). Then do the LearnOpenGL tutorials to learn modern graphics concepts (vertex buffers, shaders, render passes, etc.). Once you've done this you might have enough knowledge to make a simple game/visualization/application, and you'll know yourself whether you need to study D3D12/Vulkan or not.
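
If it helps to see how small that first step really is, here is a minimal "hello triangle" sketch in old-style OpenGL 1.x immediate mode, using GLUT for the window and context (assumes GLUT and an OpenGL driver are installed; on Linux, link with -lGL -lglut):

    #include <GL/glut.h>

    // Draw one colored triangle each frame using immediate mode.
    void display() {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
        glEnd();
        glutSwapBuffers();
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("hello triangle");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }

The modern vertex-buffer/shader path from LearnOpenGL does the same thing with quite a bit more setup, which is exactly why starting with the old API lowers the barrier to getting pixels on screen.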


But it is kind of hard to come up with ideas for what to do on a GPU when you don't know how a GPU works or what it is capable of. You have to learn at least one API before you can start thinking about what to do on a GPU; otherwise it is like being a non-programmer brainstorming programming ideas.


>The APIs, although they are not easily interchangeable, use the same hardware and can often do the same things with a bit of effort.

Actually it's the other way round. GPU architectures can vary wildly, especially when you include mobile GPUs, which are tiling architectures. Graphics APIs are the least-common-denominator interface that makes targeting all of them possible.


Not at all; that's why AAA game engines have all been based on a plugin architecture for their backends, which allows them to take best advantage of each piece of hardware with no need for a least-common-denominator interface.

Those that try to target everything with a single API are fooling themselves, which is why in the end they all end up with extension spaghetti and multiple code paths while pretending to still use a single API.


Care to elaborate further?


Sure, OpenGL is portable and works everywhere, right?

Well, first of all, OpenGL and OpenGL ES aren't the same thing, so depending on the versions the set of APIs differs; right there you already have alternative code paths.

Then some features are only available as extensions, so you need to query for them at runtime and program according to what is available.

Some hardware capabilities are exposed through different vendor-specific extensions for the same feature, so yet another execution path.

The shader compilers are quite different across vendors, some stricter than others or supporting different sets of GLSL extensions, so yet another execution path.

Finally, even when everything works, there are driver or hardware bugs to work around, yet another execution path.

In the end it is OpenGL, portable everywhere, yet the engine looks like it is using multiple kinds of APIs just to work around every kind of issue.
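
A rough sketch of the kind of branching this forces, assuming a GL 3.0+ context is already current and functions are loaded through something like GLAD (the loader choice and the helpers hasExtension/setupTextureFiltering are illustrative, not from any particular engine):

    #include <cstring>
    #include <glad/glad.h>  // or any other function loader

    // Enumerate the driver's extension strings and look for one by name.
    static bool hasExtension(const char* name) {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char* ext = (const char*)glGetStringi(GL_EXTENSIONS, (GLuint)i);
            if (ext && std::strcmp(ext, name) == 0) return true;
        }
        return false;
    }

    void setupTextureFiltering() {
        // One capability, two (or more) execution paths, and this pattern
        // repeats for every optional feature, vendor extension, and driver quirk.
        if (hasExtension("GL_EXT_texture_filter_anisotropic")) {
            // path A: enable anisotropic filtering via the extension
        } else {
            // path B: fall back to plain trilinear filtering
        }
    }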

Naturally, getting a rotating cube running everywhere is easy; the problem is having a proper engine with Unreal-like capabilities (we are talking about AAA here).


When you're learning instead of making something for production, it may be wise to focus on a single platform (whether it be Windows, Linux, or mobile) and start from there.



