Announcing Microsoft DirectX Raytracing (microsoft.com)
400 points by mxfh on March 19, 2018 | 223 comments



The only issue is that a lot of this is already possible in the current renderers at a reduced cost.

- Soft shadows work in WebGL fairly well using PCSS: https://clara.io/player/v2/8f49e7c3-7c5e-43f0-a09c-33a55bb1b...

- Translucency can be faked effectively using a few different methods. https://clara.io/view/5c7d28c0-91d7-4432-a131-3e6fd657a042

- Screen space ambient occlusion, if using SAO, is amazing. SAO test: https://clara.io/view/2e1637a7-a41d-4832-923a-e6227d1ebaaa

- Screen space reflections also work. Our reflections are this: https://clara.io/player/v2/b55f695a-8f4a-4ab0-b575-88e3df8cd...

- Fast high quality depth of field: https://clara.io/player/v2/ce7d91ed-1163-4cbc-b842-929adc4ef...

- Real-time global illumination: https://www.siliconstudio.co.jp/middleware/enlighten/en/

So while I think that raytracing is awesome, it generally will not increase existing real-time render quality that much. In my experience with the game industry, even if you have a better way of doing things, if it takes more CPU/GPU cycles than a hack that achieves basically the same quality, it will not be adopted. It is that simple.


While the above is accurate, I think it's a bit one-sided.

Physically Based Rendering has become the dominant approach in VFX because it's a huge simplification and productivity boost. Integrating tons of special-purpose hacks into one renderer isn't just hard for the renderer devs; the control parameters exposed to artists become an absolute nightmare. Combining the PBR perspective with raytracing greatly simplifies both sides of this.

Not every game is trying to be the next Far Cry. Ray tracing will be adopted by teams that value that unification and simplification over getting the very last bit of performance possible by a pile of hacks. As the hardware improves, which still seems likely, we'll see the balance point of who makes that call shift in favor of tracing IMO.


I agree that PBR is awesome, but PBR doesn't slow things down; it is just a replacement of less accurate equations (Blinn-Phong ambient, diffuse, specular) with better equations (GGX albedo, roughness, metallic, clearcoat, sheen) that are generally the same complexity.
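To make the "same complexity" point concrete, here's a rough Python sketch (function names and test values are mine, purely illustrative) comparing the normalized Blinn-Phong specular distribution with the GGX one. Both are a handful of multiplies per shading point, which is why swapping one for the other is roughly neutral on render time.

```python
import math

def blinn_phong_ndf(n_dot_h, shininess):
    """Classic normalized Blinn-Phong specular lobe."""
    return (shininess + 2.0) / (2.0 * math.pi) * n_dot_h ** shininess

def ggx_ndf(n_dot_h, roughness):
    """GGX/Trowbridge-Reitz normal distribution (alpha = roughness^2)."""
    a2 = roughness ** 4
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# Same per-pixel cost ballpark for either lobe.
print(blinn_phong_ndf(0.95, 64), ggx_ndf(0.95, 0.3))
```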

It is sort of neutral on render time for the most part.

In time, I am sure we will be in a raytraced future if GPU trends continue.


Physically based rendering is _not_ the Disney shader and renaming your diffuse maps to albedo because it sounds more sciencey.

It’s about the wholesale switch to a fully path-traced framework with a consistent mathematical foundation, as opposed to the layered system of hacks and chained prepasses that prevailed in the early 2000s.


This. You can use PBR concepts in an old-school RenderMan-style pipeline, but that's not why it's been a big deal in VFX. With a PBRT-style renderer, everything is handled in a single Monte Carlo framework. And once you get an unbiased renderer that meets your standards, you can implement schemes like irradiance caching or photon mapping that introduce bias, but greatly accelerate rendering (and reduce noise). That optimization can be done once, for everything past and future.
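As a toy illustration of "one Monte Carlo framework", here's a minimal Python sketch (the `environment` function is an invented stand-in for whatever is being integrated): the same estimator structure handles direct light, indirect bounces, occlusion, and so on, just by changing what gets evaluated per sample.

```python
import math, random

def environment(direction):
    """Toy 'sky': bright above the horizon, dark below (stand-in for any light)."""
    return max(direction[2], 0.0) * 2.0

def sample_cosine_hemisphere():
    """Cosine-weighted direction about +Z; pdf = cos(theta)/pi."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def estimate_irradiance(num_samples):
    # One estimator for everything: sample, evaluate, divide by the pdf.
    total = 0.0
    for _ in range(num_samples):
        d = sample_cosine_hemisphere()
        total += environment(d) * math.pi  # L * cos / pdf; the cosine cancels
    return total / num_samples

for n in (4, 64, 1024):
    print(n, estimate_irradiance(n))  # noise shrinks as samples grow
```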

That's why I'm saying the combination of PBR and tracing is a huge productivity boon to both sides, art and tech. It has a performance cost, but if/when hardware can pay that cost in real time, games absolutely will use it.


Worth adding that if we get more hardware support for deep neural networks on the GPU (e.g. tensor cores), training a denoising DNN on the noisy renders can save a lot of computation and fits neatly into the pipeline.

If the ray tracing can be done at or around 1 ray per pixel or less, using a network trained to merge information over multiple frames and upscale, we could probably get away with even less. Maybe more so if we can feed in a depth map, a velocity map, and a flat shaderless rendering or other information to help guide the DNN.
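A very rough sketch of that idea (shapes and buffer names are hypothetical, just to show the plumbing): the cheap auxiliary buffers get stacked alongside the noisy 1-spp image as extra input channels for a denoising network.

```python
import numpy as np

def build_denoiser_input(noisy_rgb, depth, velocity, albedo):
    """Stack the noisy render with auxiliary buffers so a denoiser can tell
    real edges from noise. Channel layout here is purely illustrative."""
    return np.concatenate([noisy_rgb, depth[..., None], velocity, albedo], axis=-1)

h, w = 4, 4
inp = build_denoiser_input(
    np.random.rand(h, w, 3),   # noisy path-traced color (1 spp)
    np.random.rand(h, w),      # depth
    np.random.rand(h, w, 2),   # screen-space motion vectors
    np.random.rand(h, w, 3),   # flat-shaded albedo
)
print(inp.shape)  # (4, 4, 9)
```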

Might even end up faster than current raster renders.

See https://www.chaosgroup.com/blog/experiments-with-v-ray-next-...


With 4K around the corner for regular gamer screens, and top-of-the-line GPUs that cannot handle that at an acceptable speed yet, I think it's a long time before we see that. Maybe when a $200 GPU can do 4K at 60 fps, but until then a 5-10% gain is seen as worth chasing.


Every pass doesn't have to be rendered at native resolution, and most aren't. When you're running a game at 4K on your PS4 pro or modern PC, very often the 4K image is being produced using a mix of temporal anti-aliasing (accumulating a full image over time across multiple frames), heuristic upsampling, etc. Many post-processing passes run well below screen resolution even if you're running at 1080p - things like SSAO and even particle effects are typically rendered at a lower resolution.

Also, many modern games have been hitting 60fps at 4k on mid-high tier cards for years - you can hit 60 at 4k with reasonable quality settings in a game like MGS5 or GTA5 on a single 970. If you get a 1080 way more stuff runs great, even at higher settings.

Many modern games have an internal resolution slider as well, so you can set the internal resolution to 80 or 90% and still get your UI rendered at full 4K resolution. If the game has temporal anti-aliasing the difference tends to be hard to spot.

Finally, if you've got a freesync or gsync monitor, a few dips down to 50fps at 4k aren't going to be super noticeable, and the games look great. :)


>It has a performance cost, but if/when hardware can pay that cost in real time, games absolutely will use it.

Which has been said since the early 2000s. But the question is, when? Or will it ever come?

If you look at roadmaps, we surely don't have that in the next 5 years; it's unlikely even in 10.


Actually, PBR is a very important workflow innovation in rasterizing engines; see my post above!


That's not really what it's about! It's not about achieving physical correctness: you can have a PBR renderer selectively break energy conservation for artistic reasons. It's not about ray tracing either: current game engines like Unreal and Unity are considered PBR because they use a roughness/metalness workflow.

PBR is about having maps that are easy to understand through physical analogy, and simplifying the workflow so that it's mostly shared across render engines, applications, and pipelines. It's about the ability to create textures in Mari or Substance, which use rasterizing engines, and see the final render in Arnold look nearly identical.


Non-3D artists should note that “Physically Based Rendering” or PBR is merely a standard for defining materials. It is becoming something of an industry standard in real-time and offline rendering workflows. From a real-time (games) perspective, PBR represents a leap up in realism, but from an offline (VFX) perspective it offers no improvement in realism, and often a downgrade from past norms if implemented with no additional effort. [EDIT: As per pxl_fckr’s comment, I must be short on some history of PBR. I stand corrected.]


It actually means a lot more than that in offline VFX. It was the games industry that decided it meant “use the Disney shader”.


I updated my comment. I guess I need to read up on that. I read of so many areas in which PBR stops short in offline contexts, like using constant BRDFs which is a little unnecessary with offline rendering. What am I not understanding there?


I’m not quite sure of what you’re asking exactly here? What do you mean by constant brdfs?


I don't think pure raytracing and ray-traced global illumination are going to be adopted by many games if there is still such a giant problem with noise. That is far from a solved problem, even in multi-hour-per-frame visual effects renders.


This paper shows pretty good results from essentially throwing an RNN at it:

[1] http://research.nvidia.com/sites/default/files/publications/...


Where will the 1000-frame fly-through and the 10 noisy frames that they use come from? I find that paper very disingenuous when they don't count these things in their filtering time.

Not only that, but they compare everything with "1 sample per pixel", knowing that the SURE filter uses per-pixel variance statistics that require more than one sample per pixel to start working.


I guess with games there is already temporal anti-aliasing (TAA), so that would be an effective denoising technique if one was using noisy ray/path tracing.


That's definitely popular for effects that use ray casting. At SIGGRAPH and GDC over the past couple years, a good fraction of the graphics presentations I went to ended with "and then we added TAA because we could only afford a handful of samples per pixel per frame".
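For anyone unfamiliar, the core of that trick is just an exponential moving average over frames. A toy Python sketch (the noise model and blend factor are invented; real TAA also reprojects history with motion vectors and clamps it against the spatial neighborhood to avoid ghosting):

```python
import random

def shade_noisy(true_value):
    """Stand-in for a few-samples-per-pixel ray-traced result: truth plus noise."""
    return true_value + random.gauss(0.0, 0.5)

def temporal_accumulate(true_value, frames, blend=0.1):
    """Blend each new noisy frame into a running history buffer."""
    history = shade_noisy(true_value)
    for _ in range(frames - 1):
        history = (1.0 - blend) * history + blend * shade_noisy(true_value)
    return history

print(temporal_accumulate(1.0, 60))  # converges toward 1.0 as noise averages out
```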


What do you mean by noise? I know that path tracing is incremental and produces a lot of noise at first, but there are other algorithms than path tracing. Also, there are good denoising filters that can be used in a fragment shader.


>In my experience with the game industry, even if you have a better way of doing things, if it takes more CPU/GPU cycles than a hack that achieves basically the same quality, it will not be adopted. It is that simple.

That's not always the case. Physically based materials, tonemapping, and many more techniques were widely adopted in game engines because they're easier to get proper and predictable results with. It's the same with most rendering techniques based on real physics: they are usually easier to work with, which is a very real advantage that allows devs to make more realistic environments in less time.

It's the same process that led the non-realtime world to physically based renderers, although it is slower in GPUs because of obvious hardware constraints.


In the realtime performance vs developer time tradeoff, the magnitudes are important... and also, their value.

For a long time, games pushed the graphics envelope, but that doesn't seem the case any more. So perhaps that's less important today, and GPUs have overshot gamer needs.

OTOH, development costs of AAA games are incredibly high, with larger worlds at higher res. It's even worse at 4K.

So the question of the magnitudes remains.

2018: the year of raytracing in games, finally?

A data-point is that Imagination has had hardware raytracing for several years now: https://www.imgtec.com/legacy-gpu-cores/ray-tracing/ But without publicised success.


The cost of games is not the render engine; remember that most use UE4 or Unity. 90% of the cost of a game is asset, gameplay, and UI/UX development.


By "larger worlds at higher res" I meant higher cost of asset development.


I spend a lot of time doing lighting TD work using non-realtime renderers. Every one of those examples is pretty good for realtime, but also looks soooo wrong! It's not anywhere near the same quality as a proper raytracer!

- That's not what an AO pass should look like. But you shouldn't even need AO if your GI/shadows look good.

- GI should have detail to it; every realtime example is sooooo blurry.

- The screen space reflections sometimes look OK, but it really depends on your scene; you get all sorts of reflections in the wrong place.

- Yucky DOF without the nice bokeh on highlights, and all those edge problems you get with a z-space blur.

The beauty of raytracing with a unified sampler is that it makes the algorithm for each of the features you listed incredibly simple, and it distributes CPU/GPU time to what's important in that part of the image.

A scene with lots of motion blur: more primary samples, fewer samples for GI/shadows/reflections - automatically, based on how much GI/shadow you see.

A scene with lots of GI: more samples in GI, fewer for reflections, etc. - automatically.

You can have a complex scene with reflections/GI everywhere, then turn heavy DOF on and get faster frame times.
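A back-of-the-envelope sketch of that kind of budget shuffling (all numbers and effect names are made up): split a fixed per-pixel sample budget roughly in proportion to how noisy each effect currently is.

```python
def allocate_samples(budget, variance_estimates):
    """Split a per-pixel sample budget roughly in proportion to estimated
    variance, so noisy effects get more rays than already-clean ones.
    Rounding means the totals aren't exactly preserved; this is a sketch."""
    total = sum(variance_estimates.values())
    return {name: max(1, round(budget * v / total))
            for name, v in variance_estimates.items()}

# e.g. a frame dominated by heavy DOF needs fewer GI/reflection rays
print(allocate_samples(16, {"primary/DOF": 8.0, "gi": 1.0,
                            "reflection": 0.5, "shadow": 0.5}))
```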


I'm not sure it's fair to describe those as hacks. Those techniques are also used for animated movies, which take many CPU-years to render. Those "hacks" are used even when artists have huge amounts of hardware, no real-time constraints, and need extremely high quality.

Techniques other than raytracing have an artistic place, not merely a pragmatic one.


Aren't animated movies pretty much universally ray traced?


No-ish:

https://en.wikipedia.org/wiki/Pixar_RenderMan

These days they're pretty much 100% path-traced. But most animated films out there in the history of 3d animation were rasterized.


The edit window has passed so I'll reply to my own comment with a nifty tidbit. The early days of animated films were particularly fun to watch as someone with a background in computer science, as it was a one-upmanship competition between the major animation studios (Pixar and DreamWorks) on technology. Monsters, Inc. had hair. Shrek had a scene with physically realistic caustic effects of poured milk (really, go watch it -- there's an entirely gratuitous scene where a glass of milk is poured to show that they can; milk, as a semi-translucent, non-homogeneous liquid, is VERY hard to render, as it turns out). Finding Nemo had lots of water effects, etc.

The challenge of all of this was doing it using rasterization techniques, or integrating more realistic techniques using path tracing into a standard rasterization pipeline in a way that didn't kill performance. But now render farms are big enough and technology has advanced far enough that they can just path-trace everything.


Pixar didn't use ray tracing much until 2013 with Monsters University.

https://www.theverge.com/2013/6/21/4446606/how-pixar-changed...

Nowadays ray tracing is common, but as recently as five years ago it was rare. Not all high-quality animation is ray traced.


PBR proved that we have reached the point where the industry has had enough of hacks in some domains.

Everybody in CG knows that raytracing is the grail, in that it allows a unique, universal rendering model, and that triangle-based techniques are always hacks aimed at approximating a raytracing result (or even better, a radiosity-based algorithm).

We have always surfed on the edge between cramming more millions of triangles into our graphics cards and being able to make a complex calculation per pixel. It is arguable that if hardware manufacturers had taken a different path, GPUs would be more comfortable with raytracing now, as it requires a very different architecture (making lookups into a big scene cheap vs. matrix operations).

From time to time, big players test the water with raytracing and often are not followed by the crowd of developers, who are afraid to change their ways.

I wish that one day we cross that bridge.


I think you meant to compare rasterization with raytracing ? You can raytrace triangle meshes and you can rasterize non-triangular geometry.


> - Screen space ambient occlusion, if using SAO, is amazing.

Yeah, no. It still looks like "draw a blurry black halo around edges and call it a day" ambient occlusion.


You can combine SSAO with (very coarse) global illumination probes to achieve a very convincing effect.

DOOM (2016) is a good example of this: http://www.adriancourreges.com/blog/2016/09/09/doom-2016-gra...

SSAO has been overused/applied poorly (just like the brown 'bloom'/color grading effect in mid-2000s games), which is why people often dislike it.


There is also hierarchical SAO.


I really wish I had kept a bookmark for an article over at filfre.net, because in it a game dev, from back around the C64 era I believe, lamented how whenever there was a big step up in hardware performance, game developers had a bad habit of getting stuck trying to show how far they could push the hardware while ignoring game mechanics and similar.

Effectively whenever some new piece of hardware hit the market, the quality of games would be rolled back by a number of years as companies released what may well be glorified tech demos.

And as of late I wonder if the pace of GPU development has led to perhaps the most prolonged period of such tech demos.


Perhaps that is what has allowed the indie scene to thrive as well as it has? I'm ok with that.

Honestly I'm not sure why gamedevs seem to care so much about this kind of thing (marketers on the other hand...). Minecraft was worth $2B and intentionally looked like it was rendered in mode 13h.


On the one hand I agree with you, but on the other there’s Horizon Zero Dawn, Battlefield 1, Tomb Raider, Hellblade: Senua’s Sacrifice, NHL 17 and 18, Journey, the Little Big Planet series, Cities: Skylines, GTA V...

I dunno, I think gaming is starting to come out of that prolonged oooh-look-shiny-effects kind of era and into a pretty creative space.


Cities: Skylines is a good example of artistic direction over raw graphical power. It looks nice because of its Nordic architectural design and stylish GUI, despite running on a relatively old version of Unity without many bells and whistles.


> Screen space reflections also work

Screen space reflections are an amazing trick and work really well for _some_ things, like reflective floors, bodies of water, or objects the player might be holding near the camera (e.g. a weapon).

Since they can't reflect the back sides of objects, or anything outside the camera frustum (which includes the player model), they can't be used for things like mirrors.

The appearance of the reflections being 'clipped' as objects leave the frustum when you move your head is also quite jarring in VR.


Well, maybe providing an API with fallbacks (which, let's say, incorporate those hacks) is the path towards incremental adoption. Maybe that is the big news here.


If raytracing is practical you don’t need to deal with the limitations imposed by the tricks used by current renderers. All this faking has a price too.


Maybe all these techniques exist because ray tracing wasn't really possible, so a lot of workarounds were found. If ray tracing becomes mainstream and researchers focus their attention more on that, perhaps some awesome new things will be developed that aren't possible in the existing paradigm.


I was at the GDC talks today. The gist is that this is for filling in the gaps that screen space tricks simply cannot handle because the reflected surface is off screen.


I can imagine that raytracing for a few effects will be possible when you switch on "Ultra" settings on desktop games, for the guys who have V100s in their machines.


> a lot of this is already possible in the current renderers at a reduced cost.

Maybe this misses the point of standardizing an API? Isn't there considerable value in having a simple framework that can achieve all those effects, and be shared and communicated easily with others? I'm sure a lot of studios would welcome faster dev times, fewer authoring tools, and fewer geometric limits for their rendering effects.

Also worth considering is that all those specific tricks are very limited, either in applicability or in quality or both. Screen space reflections only work on flat surfaces. PCSS still needs a hand-tuned shadow map. DOF has problems at image space contact points. Etc. Let's not oversell today's workarounds as so good they should be preserved forever.

> even if you have a better way of doing things, if it takes more CPU/GPU cycles than a hack that achieves basically the same quality, it will not be adopted. It is that simple.

That's true for any one specific game on any one specific platform. OTOH, that framing misses the trend: there has been consistent and very strong pressure towards higher fidelity, more realism, better physics, faster processors, higher resolution, and overall larger numbers of everything, ever since the first video game ever made. Ray tracing might not be adopted today, but one thing I will guarantee is that next year the CPU & GPU cycles used for rendering will exceed today, and it'll be even more the year after that. When I think about that, I feel like seeing ray tracing take over in games is inevitable.


> Isn't there considerable value in having a simple framework that can achieve all those effects, and be shared and communicated easily with others?

Historically, gfx APIs went the opposite direction. The fixed-function pipeline was replaced by shaders. Now we rely on the API basically to do rasterization and little else. I think in ray tracing you would want a similar distillation. So yes, the ray cast operator might be wanted in the API, but I think computations like importance sampling from the conditional BRDF are better left in shaders.

And beginners who just want to get triangles on the screen will copy/paste a bunch of code they don't fully understand, like now :)


I think the distillation is there already; if you want conditional BRDF importance sampling in DXR, it goes in the shaders, they're not providing that kind of fixed functionality in the same way that rasterization APIs used to.

By "simple framework" I mean the ability to trace rays, rather than the API specifics. What I had in mind is that by being able to trace rays at all, you can very easily get all the effects on the GGP's feature list, with less effort, wider generality, and higher quality than having to code up the buffering tricks required for things like screen space reflections or percentage closer shadows.

Sadly, I doubt the copy-pasta problem is going away any time soon... I think the trend is that it's getting worse.


Maybe, but it may make it substantially more accessible and easier for the average developer that isn't a domain expert in a dozen areas of graphics rendering technology and doesn't want to use a pre-built engine.


This will make it easier to port existing shaders from high end VFX to real-time, like RenderMan and OSL shaders for sure. Trivial even.


What makes you think people are rushing out to replace all these effects? At first blush this doesn't seem directly aimed at video games.


https://www.youtube.com/watch?v=LXo0WdlELJk

I'm pleased to see that the tech video includes spherical mirrors. As I understand it, all raytracing demos are obliged by universal law to include at least three reflective balls in any promotion of the technology.

One day, someone will figure out a game where these super-shiny ball bearings are a critical part of the gameplay, and at that point, raytracing will finally take off...


Even in AAA games of today with amazing graphics, I still see polygons in cylindrical, conical and round objects (I'm not talking about raytracing but regular rendering). Everything looks so realistic, but a well or bucket [1] or some other cylindrical object you encounter looks not round but faceted, breaking the suspension of disbelief.

Raytracing may bring more roundness, but why not quadric surfaces or some other fast-enough-to-render method of real roundness in rasterizers? It may be expensive (although one surface would replace many polygons...), but it'd solve one of the last remaining uglinesses :)

[1] https://imgur.com/gallery/AdDyT8B


That's just because the industry is stuck using triangles in the art pipeline (and doesn't put as much effort into the low-polys as it used to). If developers were willing to use some variation of a patch-based pipeline so that they could utilize the tessellation hardware that has existed for years, it wouldn't be a problem. I've heard artists are scared of it due to how amazingly awful some editors managed to make the NURBS editing experience, but with a decent editor it really wouldn't be a problem.

Sadly tessellation largely just gets used to screw over AMD performance (for those sweet Nvidia kickbacks; if you're wondering why some games like Crysis 2 tessellate flat surfaces into an insane number of polygons, this is why).


> I've heard artists are scared of it due to how amazingly awful some editors managed to make the NURBS editing experience, but with a decent editor it really wouldn't be a problem.

That's simply not true. I started getting into (high end) 3D in the early 90's. I've probably used every NURB or higher order surface modeling tool under the sun since then.

In fact, there are many amazing modeling tools for working with NURBs or other higher order surfaces.

They just all don't even come remotely close to polygon modeling when precision is secondary and workflow/ease of use is paramount.

So the answer in the VFX world, for years, has been subdivision surfaces. They have almost all of the good of bi-cubic patch modeling and almost all of the good of polygon modeling. What's more, existing polygon modeling tools can easily be upgraded to subdivision surface modelers by merely enforcing 2-manifold topology and adding the ability to display an [approximation to] the limit surface in real time.

Some of the schemes have nice properties. For example, after one step of Catmull-Clark, the entire surface consists of quads. And when treating each local grid of quads as the control polyhedron of a cubic B-spline patch (not a NURB but a UNRB: a uniform, non-rational B-spline patch), the surface of the patch is equal to the limit surface that would be obtained by the subdivision scheme.


>> the surface of the patch is equal to the limit surface that would be obtained by the subdivision scheme.

That's not true if any of the vertices is extraordinary - in other words, a vertex shared by more or fewer than 4 quads. The good news is that subdividing those quads each into 4 smaller ones will result in 3/4 of the surface area meeting the right criteria. Or rather, the area that's hard to deal with becomes 1/4 the size.

The ultimate coolness of subdivision surfaces is that every vertex on a subdivided mesh is a linear combination of the original vertices. The weights do not change during animation; the weights only change if the topology does. Another nice thing is that meshes at different LODs contain the vertices of the lower-LOD mesh.
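The surface case (Catmull-Clark proper) is more involved, but the 1D analogue - cubic B-spline subdivision of a closed curve - shows the same structure: every refined vertex is a fixed linear combination of the originals, so the weights can be baked once and reused every animation frame. A quick Python sketch (the square is just a test input):

```python
def subdivide_closed_bspline(points):
    """One round of cubic B-spline subdivision on a closed polygon.
    Each output vertex is a fixed linear combination of the inputs,
    so the weights depend only on topology, never on animation."""
    n = len(points)
    out = []
    for i in range(n):
        prev, cur, nxt = points[i - 1], points[i], points[(i + 1) % n]
        # refined vertex point: (prev + 6*cur + next) / 8
        out.append(tuple((a + 6 * b + c) / 8.0 for a, b, c in zip(prev, cur, nxt)))
        # new edge point: midpoint of the edge
        out.append(tuple((b + c) / 2.0 for b, c in zip(cur, nxt)))
    return out

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(subdivide_closed_bspline(square))
```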

They really do everything with one exception - they can't exactly represent quadric surfaces which are so important in CAD tools.


(I do geometry but I'm not in the gaming field)

I'd guess that this is because the fastest way to compute a precise silhouette of an arbitrary shape (like a quadric surface) is to do a ray test for every pixel that might hit the surface. You could project the surface onto the image plane to get boundary curves, but (a) determining the curves that make up that silhouette is a complicated geometric process that's prone to numerical error in weird edge cases (of which there are lots), and (b) once you have the projected curves, you still have to actually shade the interior, which requires you to project back onto the original surface to figure out the normal, texture, etc. At that point, you're essentially doing little ray tests for each pixel anyway, so why not just raytrace the whole thing? The global illumination approximation is much, much better with raytracing, and modern GPUs can do it in real time.
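For reference, the per-pixel test being described is quite small. A rough Python sketch of intersecting a ray with a general quadric x^T Q x = 0 in homogeneous coordinates (degenerate cases such as a == 0 are ignored; the sphere matrix is just an example):

```python
import math

def ray_quadric(origin, direction, Q):
    """Nearest positive hit of a ray with the quadric x^T Q x = 0 (4x4 Q)."""
    o = list(origin) + [1.0]
    d = list(direction) + [0.0]
    def quad(u, v):
        return sum(u[i] * Q[i][j] * v[j] for i in range(4) for j in range(4))
    a, b, c = quad(d, d), 2.0 * quad(o, d), quad(o, o)
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# Unit sphere at the origin: x^2 + y^2 + z^2 - 1 = 0
Q_sphere = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]
print(ray_quadric((0, 0, 5), (0, 0, -1), Q_sphere))  # 4.0
```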

By the way, modelling everything as an analytic surface rather than as a discretized polygon mesh is called "Boundary Representation" or b-rep, and it's what's used in solid CAD programs that engineers use, like Solidworks or Inventor. Engineers and designers like analytic surfaces because they have "infinite" resolution, but even those CAD programs almost invariably polygonalize b-rep to a mesh and then render the mesh, because it's expensive to hit-test all the surfaces involved; computers only recently became able to do that fast enough for real time interaction, and most CAD programs are old old old.


> By the way, modelling everything as an analytic surface rather than as a discretized polygon mesh is called "Boundary Representation" or b-rep

I thought that was iso-geometric analysis? What is the difference?


Isogeometric analysis is a means of doing finite element analysis over boundary representation models; FEA is used to physically simulate materials, and usually requires a polyhedral mesh (a mesh where the elements are 3D, instead of the "normal" kind where the elements are 2D polygons). IGA is a way to skip conversion of your b-rep to a mesh while doing that sort of simulation.


I think that's just a matter of how many subdivisions/polygons they decide to devote for those assets. For example, that bucket is pretty inconsequential in the overall scene so they can save some polygons on it, to give more toward something like a character or weapon model that is more prominent, more of the time, since more polygons means higher storage and computational requirements.

Something like tessellation [1] can be used to dynamically tune how many subdivisions to give a model based on certain parameters, such as camera distance.

[1]: https://en.wikipedia.org/wiki/Tessellation_(computer_graphic...


Would you prefer the world's greatest virtual concrete slab? https://techreport.com/review/21404/crysis-2-tessellation-to...


No, there the extra polygons do nothing visually useful


Fallout 4 probably set a limit on how many polygons an object could have because the scenes had to support hundreds of objects.


My understanding is that rasterization really made Portal hard to make. They had to jump through a bunch of hoops to make it all work. Meanwhile, I understand that adding portals to ray-traced Quake was like an afternoon's exercise. I'm also annoyed that there's a map in Overwatch where the corner mirrors don't actually reflect anything :P

I think generally, with many of the tricks that have come out of SIGGRAPH over the last decade, we've reached a point where, computationally, raytracing really is starting to have the advantage over rasterization in terms of both pixel count and object count. The struggle for the next 30 years, however, is going to be overcoming the tooling built around rasterization, and to a lesser extent the head start rasterization has had in terms of hardware.

Not to sound too crazy but I also have a strong suspicion that if you tied a physics engine to a raytracer, you might get your physics 'for free' which could be a big deal for pushing forward more realistic physics too.


> Not to sound too crazy but I also have a strong suspicion that if you tied a physics engine to a raytracer, you might get your physics 'for free' which could be a big deal for pushing forward more realistic physics too.

Why is that?


To my knowledge, the biggest part of game physics is collision detection. (There are a lot more floats in the geometry of a rigid body than there are in its motion vectors.) If you had a perfect raytracer for "free," you could fire out a fiber bundle from moving objects in the direction of their motion to find out what they'd hit.


The main thing with collision detection is narrowing what to search. Consider a game with a million spheres: what can you do to narrow it down so you don't have to test each sphere against 999,999 others every frame? Firing out a fiber bundle is akin to saying, "find the distance between each pair of spheres and compare that distance to the sum of the radii of the spheres." Yes, that is a test for sphere-to-sphere collision, but it misses the problem.
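A minimal sketch of the kind of broad phase being described (a uniform grid; the cell size and the toy scene are arbitrary). The point is that far-apart spheres never get pair-tested at all; only spheres sharing a grid cell become candidate pairs.

```python
from collections import defaultdict
from itertools import product

def broad_phase_pairs(spheres, cell_size):
    """Uniform-grid broad phase. spheres: list of (x, y, z, radius).
    Returns candidate index pairs that share at least one grid cell."""
    grid = defaultdict(list)
    for i, (x, y, z, r) in enumerate(spheres):
        lo = [int((c - r) // cell_size) for c in (x, y, z)]
        hi = [int((c + r) // cell_size) for c in (x, y, z)]
        for cell in product(*(range(l, h + 1) for l, h in zip(lo, hi))):
            grid[cell].append(i)
    pairs = set()
    for members in grid.values():
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                pairs.add((members[a], members[b]))
    return pairs

spheres = [(0, 0, 0, 1), (1.5, 0, 0, 1), (50, 50, 50, 1)]
print(broad_phase_pairs(spheres, 4.0))  # the distant sphere is never tested
```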


How is that free though? You have to fire more rays for each moving object. That's not cheap. And the only thing you get is variable accuracy collision detection.


You wouldn't have to fire out more rays if you could use the same ones that are used for calculating light bounces.


You'd have to be lucky enough to have a ray of light bouncing in the right position and direction, and you'd have to be clever enough to notice it at the right moment (without slowing down the regular light tracing algorithm by adding a billion comparisons everywhere to figure out if it matches a motion vector). I don't want to be a naysayer after only considering it for two minutes, but it doesn't sound very practical to me.


Sounds like it would lead to interesting "not a bug, it's a feature!"-worthy result of clipping through objects in the dark




One tricky thing to code is when you as the player look into a mirror; the trick usually is to create an exact replica of the room you're in and of your player, and use that like a skybox.

But with this you could have all the particle effects work properly in the reflection.


> One tricky thing to code is when you as the player look into a mirror; the trick usually is to create an exact replica of the room you're in and of your player, and use that like a skybox.

Actually, mirrors can be easily implemented as a pre-render step: you just re-render the scene from the right perspective (a virtual camera) and then paste that onto the polygons that are acting as the mirror.
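A small sketch of that "virtual camera" step, assuming a planar mirror (the plane and camera values here are just illustrative): reflect the camera position and view direction across the mirror plane, render from there, and project the result onto the mirror polygon.

```python
def reflect_point(p, plane_point, plane_normal):
    """Reflect a point across a plane (normal assumed unit length)."""
    d = sum((pc - qc) * nc for pc, qc, nc in zip(p, plane_point, plane_normal))
    return tuple(pc - 2.0 * d * nc for pc, nc in zip(p, plane_normal))

def reflect_direction(v, plane_normal):
    d = sum(vc * nc for vc, nc in zip(v, plane_normal))
    return tuple(vc - 2.0 * d * nc for vc, nc in zip(v, plane_normal))

# Mirror lying in the x = 0 plane; real camera at x = 2 looking toward it.
camera_pos, camera_fwd = (2.0, 0.0, 1.0), (-1.0, 0.0, 0.0)
mirror_point, mirror_normal = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
print(reflect_point(camera_pos, mirror_point, mirror_normal))  # (-2.0, 0.0, 1.0)
print(reflect_direction(camera_fwd, mirror_normal))            # (1.0, 0.0, 0.0)
```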


Frame rates will drop as you look in the mirror, no?


It should sort of even out, because in the area of the mirror there is very little first-order scene to render - it is just a simple polygon - so there is mostly just the second-order scene. This is how mirrors have been done in video games for the past two decades.

Example in Three.JS: https://threejs.org/examples/?q=mirror#webgl_mirror


Spheres are just a super simple and dead obvious way of showing that your rendering engine is able to do realistic reflections.

In the demo, there are actually lots of much more subtle reflective surfaces that contribute to the realism of the scene.

> .. critical part of the gameplay ..

Strictly speaking, graphics hasn't been a critical part of gameplay for a long time. I think we've mostly gone beyond the point where increasing graphics capabilities actually enable new gameplay.


The exception being VR/AR


How about a game that happens outside a pawn shop?


Marble Madness?


Portal :p ?


AMD has a free ray tracing renderer called Radeon ProRender https://pro.radeon.com/en/software/prorender. Supported by Maya and Blender among others ...


It's too slow to be relevant.



Can you elaborate: what are you testing and comparing to? Are you looking at a real-time application (gaming), or is your comment about offline rendering (visual effects/animation)? I don't work on this project, but I can forward the feedback to the developers.


> From each light source within a scene, rays of light are projected, bouncing around until they strike the camera.

Which is definitely not how raytracing is done if you want to get an image before the end of the world occurs. Rays are traced from the camera, and then back to the light sources. Some amount of pre-lighting can be done by tracing photons forwards from lights, but not the main image generation.
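A bare-bones sketch of that camera-first direction (single hard-coded sphere and point light, no shading model, all values invented): shoot a primary ray from the eye, then a shadow ray from the hit point back toward the light.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the nearest hit distance t > epsilon along the ray, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    for t in (-b - math.sqrt(disc), -b + math.sqrt(disc)):
        if t > 1e-4:
            return t
    return None

# Camera at the origin looking down -z; one sphere and one point light.
eye, pixel_dir = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)
center, radius = (0.0, 0.0, -3.0), 1.0
light = (5.0, 5.0, 2.0)

t = hit_sphere(eye, pixel_dir, center, radius)
if t is not None:
    hit = tuple(e + t * d for e, d in zip(eye, pixel_dir))
    to_light = [l - h for l, h in zip(light, hit)]
    dist = math.sqrt(sum(v * v for v in to_light))
    shadow_dir = tuple(v / dist for v in to_light)
    occluded = hit_sphere(hit, shadow_dir, center, radius) is not None
    print("hit at", hit, "in shadow:", occluded)
```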


In path tracing (https://en.wikipedia.org/wiki/Path_tracing), which is what the VFX industry uses primarily, both directions happen.


Does anyone use path tracing for live renderings? I think that is the question.


Demosceners are starting to, for sure.

https://www.pouet.net/prod.php?which=69642



epilepsy warning

The demo is amazing on its own as a piece of art, let alone that it's done programmatically, let alone that it's rendered in real time, let alone that it's just 4k.


Brigade Engine, for example: https://home.otoy.com/render/brigade/


Is that even released yet? Or demo-able? I thought it was just "in progress" for ages?



"The Future of AI and GPU Rendering"

I've used Octane and I think what they are best at is hype.


They're not even very good at that. The framerate of the VERY FIRST clip with the statue of liberty at the start of that video is horrendous.


Has Octane 4 really been released? I see the demo video, but going to the website, everything is talking about Octane 3, not 4. Even the purchase page says "Octane 4": https://home.otoy.com/render/octane-render/purchase/



Blender's path tracing engine, Cycles, supports realtime use in the editing viewport. Unless your machine is quite powerful, however, it doesn't render anywhere close to 30fps. Instead, it iteratively updates the viewport. It is fast enough that this is tolerable and useful for many workload+workstation combinations.


There is a nice Quake 2 demo showing various settings: https://youtu.be/x19sIltR0qU

But I think we are still some time away from real-time noiseless path tracing, although more clever denoising filters for the first samples are getting much better.

And of course you can decide to use a very low number of bounces. This will give a darker, inaccurate result but can still give a feel of global illumination.


People are definitely working toward it, at least.


Looks like it was updated to the following which is closer:

> Depending on the exact algorithm used, rays of light are projected either from each light source, or from each raster pixel; they bounce around the objects in the scnee[sic] until they strike (depending on direction) either the camera or a light source


That's great... if you are on Windows. I get that Microsoft works on Microsoft technology, but it really doesn't help adoption or global development as a whole. DirectX doesn't work anywhere except Windows PCs and the Xbox. That's no biggie for Microsoft, but it's bad for everyone else.


AMD just announced their own real-time ray tracer based on Vulkan:

https://www.anandtech.com/show/12552/amd-announces-real-time...


There is value in having competition. My guess is that if it works out and Nvidia and AMD have it implemented in silicon, then they will push to have those features accessible through a Vulkan interface.

I love open standards as much as the next guy, but for something like this, where the utility is still an open question, I'd rather someone just make an implementation for their own technology. The alternative is that your open standard is full of half-baked experiments, and trimming a standard is a huge pain.


Yes to competition, but this isn't really that, this is plain vendor lock-in, and it is not really competing anywhere except within the Windows ecosystem itself, where it practically has a monopoly.

If DirectX were to be used on Linux and macOS, you'd have some real competition, but that is not the case.


> DirectX doesn't work anywhere except Windows PC's and the Xbox.

And most new phone chipsets. And a lot of old phone chipsets. And a lot of ARM-based single board computers. And a lot of non-Xbox consoles like the Switch have hardware support for DirectX...

DirectX is in a lot of places you would not expect.


What? All of those places have hardware support for DirectX, but none of them (barring Windows phones) actually have DirectX running on them.


iOS and Android both don't support DirectX, so the hardware technically being compatible is irrelevant, especially with Windows Phone now discontinued.



A separate tech demo from Remedy: https://www.youtube.com/watch?v=70W2aFr5-Xk


This doesn't even look like it runs at a full 60 frames per second unless they are speeding it up in the editing.


I haven't seen evidence one way or the other, but it's specifically labeled "real-time GPU raytracing."

Do you have evidence it doesn't run at 60 fps?


Well, the video is clearly dropping frames so obviously not maintaining 60fps...


"Real-time" has never meant 60fps. It typically just means interactive framerates - fast enough to drag a camera around or move an actor. You can get away with calling 20fps realtime, and lots of stuff runs at 24 or 30fps. (Obviously nobody wants a game like Doom or Battlefield to run at 20 fps, though.)


Friendly reminder: DirectX is a proprietary, Windows-specific API, with its associated lock-in. Don't fall for such a simple trap.



Have fun shipping a real product on OpenGL without a proprietary binary driver blob.


Both nouveau and amdgpu are good enough for every linux game in my steam library.


Same here, except for all the newer games I would probably like to add to my library but can't, because they're Win/DX only...

Nouveau is still problematic because of the lack of access to the NVIDIA proprietary code, i.e. the extra bits like fan management, monitors, etc. That, and its performance still just isn't there against the proprietary drivers. AMD drivers are pretty good, and Wine performance has increased significantly over the last year or so.

Hopefully Vulkan encourages more widely supported game dev, it's a terrible thing that gamers are practically locked into the Windows OS by way of proprietary graphics stack and the old catch-22 of 'no market for Linux games/no games on Linux.'


Just like they are locked into every games console ever built and are completely fine with that.

The gamer culture is not the same as the FOSS one; cool games, getting hold of IP, and belonging to a specific tribe (e.g. PS owners) are more relevant than freedom of games.

Currently Vulkan only matters on Linux and flagship Android devices.

It remains to be seen if Microsoft will ever allow ICD drivers on the Store or what is the actual use of Vulkan vs NVN on the Switch.


Reasonable compromise, compared to being aggressively locked into a proprietary platform.


Use OpenGL or Vulkan if you want. Modern (AZDO) GL and Vulkan wouldn't exist if Direct3D 9 & 10 hadn't dragged the state of the art forward by actually having reasonable APIs and well-specified behaviors.

Direct3D has had world-class debugging tools and a reference renderer for years, meanwhile when I'm trying to ship OpenGL games the shader compiler doesn't even work the same across multiple PCs. (Vulkan fixes this stuff, yay!)

It sucks that DX is proprietary but the proprietary nature of it means that it can achieve things that are only possible with full integration - just like Metal on iOS and OS X.


And PS2 SPUs, PS3 libGNM, libGCNM and Nintendo GX, GX2, NVN.

Apple saved OpenGL from being irrelevant thanks to NeXTSTEP and their OpenGL ES push on iOS, because they needed to attract devs. Now, with the commoditization of middleware among AAA studios, that is no longer relevant.

And it remains to be seen if Vulkan will ever get any adoption beyond Linux and flagship Android phones.


Considering Windows still has ~90% of the market, this is not a trap. It's a matter of practicality.


1) Still a trap

2) DX12 works on Win10 only, which is deployed to way less than 90% of PCs


That attitude just adds to the problem.


While true, it's worth noting that these developments will likely impact the next Xbox model too.


Anyone remember Intel's Larrabee project? It featured HW-accelerated raytracing. They had Quake Wars running on it: https://en.wikipedia.org/wiki/Quake_Wars:_Ray_Traced or video: https://www.youtube.com/watch?v=mtHDSG2wNho


Imagination and OTOY also experimented with hardware raytracing on mobile: https://www.imgtec.com/legacy-gpu-cores/ray-tracing/


Yes, I was seated at the GDCE 2009 Intel session where they told us that Larrabee would be the future of graphics programming. Yeah, right.


Graphics programmers have been predicting this revolution for almost as long as I can remember. It's quite exciting that so many of the big players seem to believe it is almost time.


> Graphics programmers have been predicting this revolution...

Mostly it has seemed like it's been everyone except the actual graphics programmers.


Yep. There's a billion and a half reasons you wouldn't use full-blown path tracing for games. It'll always perform worse regardless of what you do, the frame budgeting is likely going to be a total nightmare, the tree search you need to trace rays requires too much branching to map well to GPUs (it's perfectly possible, just another absolute nightmare as your complexity goes up), console GPUs are (and have always been) extremely underpowered in terms of compute (but good with textures), etc.

Limited uses? Sure. Otherwise hell no. It's never been more than a gimmick GPU vendors use to show off (in real-time, offline rendering is something else entirely).


Maybe GPU vendors will offer specialized spatial lookup trees or something.


To clarify: that's more of a problem in programming the GPU in a portable manner, and that may or may not turn into a problem depending on the implementation. GPUs can do loops that run a different number of times per thread just fine, at the cost of having to wait for the slowest to finish. Things start to turn sideways if you want diverging branches though (at least AMD kind of has support for this, but it's limited). If you want animations to work there's a good chance your search would become pretty complicated and you run the risk of smacking into that.

Also, I doubt GPU vendors would add special-purpose hardware like that. They'd much rather just add some new shader instructions to help doing it.


> Also, I doubt GPU vendors would add special-purpose hardware like that. They'd much rather just add some new shader instructions to help doing it.

The PowerVR guys added special purpose hardware for doing spatial lookups for raytracing.


And it's awesome.[1] Getting shadows right for the specifics of a game is a nightmare of balancing shadow acne and performance. Just being able to do one-size-fits-all shadows on desktops would be a major step forward.

[1]: https://www.imgtec.com/blog/ray-traced-shadows-vs-cascaded-s...


For reference, VSM/4MSM are a set of shadow mapping techniques that don't suffer from acne, at the cost of suffering from bleeding problems that also need biasing.

Shadows are probably one of the places where raytracing makes a lot of sense though. It's one of few shadow rendering techniques that don't blow a ton of precious fillrate, and it shouldn't take too many rays.


In the days of ML based AI, seeing raytracing + revolution is both refreshing and funny :)


Actually, raytracing is a domain with strong potential for AI, as there is a lot of work using ML models to de-noise ray-traced scenes in real time.

Edit: Yeah, NVidia actually just announced a big project on that exact subject: https://www.youtube.com/watch?v=jkhBlmKtEAk


Pixar is also using Machine Learning to denoise path traced images: http://graphics.pixar.com/library/MLDenoisingB/paper.pdf


Ha, I would never have guessed this use case. Very fortunate for Nvidia to provide both GPU and NPU cores for the task :)


I was also wondering if the brief tangent in the article - that DXR uses more compute than graphics cores (given availability) - may also indicate that they are using something of an ML approach in the raytracing renderer here?

Obviously, it's hard to extrapolate from just a press release, but with Microsoft's big ML kick of the last few years, breakthroughs in ML steps in raytracing might seem like a reason for the press release.


It reminds me of the ray-traced game I made a few years ago :) http://powerstones.ivank.net/

If you don't move, the image "improves" over time. You can change the resolution in the top left corner. Play it in fullscreen. It works on phones, too.


Raytracing is sort of like nuclear fusion: it's been a few years away for my entire life. It's not clear what has changed yet.


Well, I imagine today’s GPUs would be more than capable of doing an amazing-looking game at 320x240 or maybe 640x480 at a reasonable frame rate.

The problem is at this point people want it at 1080, 4k or 8k.


The goalposts have also moved in terms of quality. Raytracing these days no longer just means point light sources and mirrors, but full global illumination with bidirectional path tracing.

Rasterization has improved a lot over the years, so the meaning of "raytracing" has to improve to be competitive.


And here's a nice demo from Remedy https://youtu.be/70W2aFr5-Xk


This has a lot of artifacts and does not look better to me than full fledged games that have already been released.


It's a tech demo. It's not supposed to look better than retail games, it just shows off some new experimental technology. If you check out the slide deck for the demo they compare each technique with existing solutions (i.e. SSAO vs traced AO, SS reflections vs traced reflections) and the advantages are very obvious.

If you want to see how much existing techniques suffer compared to tracing, just check out the absolutely miserable, incredibly ugly, just disgusting screen space reflections in the brand-new Crytek game Hunt: Showdown. The water is a nightmare.


It looks great, but it also clearly suffers from the noise issues inherent in most raytracing. In this particular demo I didn't find it distracting, but on a lower-performance computer I would expect much worse noise (instead of the usual compromise of removing effects).


The article says that MS (like everyone else) expects there to be fewer and fewer "fixed-function" features in a GPU with more shifting over to software-defined shader code. But isn't this just an introduction of more fixed-functionality to the pipeline? So what's the advantage of doing it this way versus writing a ray tracer in the current compute shaders?


APIs like a ray tracing API can be implemented in driver software (on-CPU) or in device firmware (on-GPU, but programmatic, not in silicon). Technically this also applies for the old fixed-function pipeline, of course, but it's worth considering the difference.

A raytracing API also isn't forced into the pipeline for every rasterized polygon like old features - hardware T&L, geometry shaders, etc - were. It's something you use on-demand in a compute or fragment shader.


The main issue with current compute shaders is intersecting with arbitrary geometry in a fast way. I expect that the main thing they will add is a raycast() function that propagates the ray to the next surface, similar to how RenderMan shaders or OSL work.


I can't help but feel this is aimed at making future Augmented Reality (i.e. HoloLens) applications integrate into their host environments better... or perhaps I'm just giddily over-extrapolating possibilities!


Good point. Ray tracing makes compensating for lens distortion trivial.


I'm not sure what the big deal is here; the demoscene has been doing realtime raytracing for many, many years:

https://news.ycombinator.com/item?id=11848097


Large scene graphs full of dynamically allocated objects with arbitrary motion and animation are a very different problem than a flyover of a static landscape or a Mandelbulb that you’re tweaking parameters on.


Well, if we can get a standardized raytracing api built into any os, that’s a pretty big deal.

Also, ray tracing, marching, and casting have been real-time for a while, for some definition and degree. Graphics has always been about tradeoffs, though, and this seems markedly more general-purpose than anything I’ve seen before.

I’m excited, even though I don’t have any Microsoft products outside my Xbox.


In my experience with Blender, you can create very detailed, photorealistic images without raytracing, but using raytracing makes it much easier to make the same images. It takes longer, but it's probably the direction we're generally going with 3D rendering, given Moore's law.

Anyway, this seems a little overhyped.



So I guess this is to mesh with the new HW support from Nvidia and AMD? Because ray tracing wasn't usually the province of GPUs.

https://www.anandtech.com/show/12546/nvidia-unveils-rtx-tech...

https://www.anandtech.com/show/12552/amd-announces-real-time...

I haven't touched a PC game in ages; it would be fun to come back and see some epic like HL3 done up in this.


This is quite exciting for augmented reality applications - I think it's a reasonable assumption that the next iteration of Hololens will have some sort of hardware acceleration for this (like Nvidia's RTX w/ Volta announced today).

Relevant:

https://arstechnica.com/gadgets/2013/01/shedding-some-realis...

"I am 90% sure that the eventual path to integration of ray tracing hardware into consumer devices will be as minor tweaks to the existing GPU microarchitectures." - John Carmack


The article's title says raytracing, but the actual text describes path tracing, no?


Can this be applied to Path tracing? If so, would this be able to accelerate something like OctaneRender?

I stumbled across this a year or so ago and was amazed at how the realistic image is built up after rotating. https://home.otoy.com/render/octane-render/


I can't help but imagine the massively power-hungry machines hosting the GPUs necessary for real-time rendering of Pixar-level movie graphics.

Won't this resemble the power demands of cryptocurrency mining rigs, except actually used for gaming? As if the amount of energy we use per capita weren't already bad enough...


So where are we realistically with ray tracing? What kind of performance can be expected with modern GPUs?




The Seed tech demo advertises depth of field - so does this renderer simulate a lens? Is this common in raytracing?


Yes, simulating lenses is standard for physically-based rendering.
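The usual model is a thin lens: jitter the ray origin over an aperture disc and aim each jittered ray at the point where the pinhole ray crosses the plane of focus. A minimal Python sketch (camera at the origin looking down -z; aperture and focus values are arbitrary):

```python
import math, random

def thin_lens_ray(pixel_dir, aperture_radius, focal_distance):
    """Generate one depth-of-field ray. Points on the focal plane stay sharp;
    everything else blurs into bokeh as many such rays are averaged per pixel."""
    # Where the original pinhole ray crosses the plane of focus.
    t = focal_distance / -pixel_dir[2]
    focus_point = tuple(t * c for c in pixel_dir)
    # Sample a point on the disc-shaped lens aperture.
    r = aperture_radius * math.sqrt(random.random())
    phi = 2.0 * math.pi * random.random()
    origin = (r * math.cos(phi), r * math.sin(phi), 0.0)
    direction = tuple(f - o for f, o in zip(focus_point, origin))
    return origin, direction

print(thin_lens_ray((0.1, 0.0, -1.0), aperture_radius=0.05, focal_distance=5.0))
```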


Awesome. Can't wait to test this out.


Ray Tracing is also a lie! Are they giving us an unbiased renderer?

Looks interesting anyway.


Can this somehow work with AutoCAD, etc., so that construction fragments could be previewed in real time with raytracing and global illumination turned on?


"CAD"-like programs from the animation and VFX worlds (Houdini, Modo, etc) already do this, so if Autodesk really wanted to they could almost certainly add real-time path tracing to AutoCAD.


Is this open source?


No, DirectX is proprietary closed-source technology from Microsoft.


Why not path tracing?



We merged that discussion here.


How much of this is due to crypto? Do they have a raytracing chip that's much less general-purpose and specific to raytracing (and incompatible with mining)?


So why DX12 and not Vulkan? It's time for MS to start helping the industry instead of proliferating lock-in and complicating things for engine developers.


I don't know why you're getting downvotes. DirectX is harmful for consumers and only serves to lock in games and users to Microsoft Windows.


Microsoft is the dominant platform for games. What financial gain would they derive from pursuing a non lock in strategy?


Sometimes I just wish Valve* would pop up and announce: 'oh so HL3 is coming out soon, and it will be a Steam Linux exclusive for the first 6 months.' While not likely, something like that would be a really interesting gauge of how likely genuine Linux adoption could be...

* I use Valve due to their apparent Linux push with SteamOS (what's the go with that btw!?)


Most people are not Linux fanatics like you. 99.9% of people do not mind using Windows as long as it works.


According to the Stack Overflow developer survey, the percentage of developers using Windows dropped below 50%. Nobody actually likes using Windows (except for a few Stockholm syndrome sufferers and zealots who drink the MSDN kool-aid); it's just that people are forced to use it due to third-party application support like video games. It seems like every week there's a new article on how Windows 10 is shit for users, whether it's spying on them with telemetry, serving unwanted ads, or resetting user settings after forced updates.

Windows is such an anti-brand that they couldn't even get customers to buy a Windows phone after a $500 million advertising campaign.

https://insights.stackoverflow.com/survey/2018#technology-de...


Sorry to disappoint you, but I am that developer with Stockholm syndrome who left Linux and went back to Visual Studio, C++, and .NET.

Also, as a former IGDA member and attendee of a few GDC conferences: game developers only care about shipping games and their IP.

AAA studios don't care 1 second about APIs to make a better world.

Adding a new rendering backend to a games engine is a trivial task, when compared to the pile of features a game engine needs to support.

Also most Windows developers don't care about Stack Overflow surveys.


> Adding a new rendering backend to a games engine is a trivial task

You keep saying this, but it remains false. If it were so trivial, studios wouldn't have a hard time adding such backends and wouldn't need to hire third-party porting experts when they decide to do it. You can see how long it takes major engines like Unreal to make a fully functional backend (since features added to such engines are publicly communicated). It's very clear it's not trivial at all.

And MS and co. obviously do all they can to keep this difficult; that's the main idea of the lock-in they designed to tax developers with.

What's bad though, is your justification of this practice.


It's trivial in the sense that it's low technical risk. I've not worked in the games for a while. At one company with an inhouse engine the rendering backend was initially DirectX9 written mostly by one person. He then implemented the Xbox360 backend. Another person did the backend for PS3 (OpenGL based). Don't have the exact timings it was ten years ago, but after the initial material, geometry lighting pipeline was done (and that's independent from the backend), the engine guy was never on the critical path.

They added Wii, DirectX 10, iOS and Android backends while I was there. None of these were ever considered risky and none had more than one person working on it. Each console/platform has its own quirks in how to optimise the scene for rendering, but having something rendering on screen is pretty much trivial once you have the machinery in place (rough sketch below).
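
To make "the machinery" concrete, the pipeline code only ever talks to an abstract backend interface, conceptually something like this minimal sketch (made-up names and a heavily simplified surface, not our actual engine code):

    #include <cstddef>  // std::size_t

    // Opaque handles the pipeline passes around; each backend maps them
    // to its own GPU objects (D3D buffers, GL VBOs, and so on).
    struct MeshHandle   { unsigned id; };
    struct ShaderHandle { unsigned id; };

    // The material/geometry/lighting pipeline is written once against this
    // interface; each platform gets its own subclass (D3D9Backend,
    // GLES2Backend, ...).
    class IRenderBackend {
    public:
        virtual ~IRenderBackend() = default;

        virtual bool         Init(void* nativeWindow) = 0;
        virtual MeshHandle   CreateMesh(const void* vertexData, std::size_t vertexBytes,
                                        const unsigned* indices, std::size_t indexCount) = 0;
        virtual ShaderHandle CreateShader(const char* source) = 0;

        virtual void BeginFrame() = 0;
        virtual void DrawMesh(MeshHandle mesh, ShaderHandle shader,
                              const float modelViewProj[16]) = 0;
        virtual void EndFrame() = 0;  // present / swap buffers
    };

Porting to a new platform means filling in one more subclass of something like that; the scene and material code above it doesn't change, which is why the backend work was never on the critical path.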

I can't speak for Epic, they are making an engine for every possible game and every possible rendering scene which is a harder problem than what we were doing. But the rendering backend isn't the hard part.


> It's trivial in the sense that it's low technical risk.

The problem is not in the risk, but simply in the cost itself. It's an extra tax to pay. However quality can also suffer, see below.

> I can't speak for Epic, they are making an engine for every possible game and every possible rendering scene which is a harder problem than what we were doing. But the rendering backend isn't the hard part.

The story of Everspace illustrates my point. They were bitten by multiple issues in the OpenGL backend of UE4, and it took Epic a long time to fix some of them. Epic's resources are limited, and they are obviously more focused on the more widespread backends. Which is exactly the result lock-in proponents are trying to achieve.


Sure, there is effort involved, but from my point of view it's a small one (a couple of man-months on a project with 50+ coders). I'm going to have to make adjustments between the platforms because the hardware is different. Even on a DirectX PC, you can often have a lot of differences between the rendering scene for Nvidia, AMD and integrated Intel GPUs.

I don't know exactly what issues Everspace had with UE4, but if you want to have a fun night, go out with some Epic licensees and get them to tell you war stories about the issues they've had when they tried to do something Epic hadn't done in their own games. You're paying Epic for the "battle testing", and often they didn't fight those battles.

Part of the reason I left the games industry is that once you've worked at a studio with an internal engine, it is extremely frustrating to work on AAA games without the freedom to walk over to the engine programmer and get them to move the engine closer to what you need.


> Part of the reason I left the games industry is that once you've worked at a studio with an internal engine, it is extremely frustrating to work on AAA games without the freedom to walk over to the engine programmer and get them to move the engine closer to what you need.

Internal engines are also, on average, less cross-platform, simply because big publishers and shareholders don't want the very expenses that creep into development because of lock-in. That's why many Linux releases of such games use source or binary wrappers rather than proper native rendering to begin with. This highlights my point above.


I disagree with the characterisation that internal engines are less cross-platform because of lock-in; the big publishers don't care about lock-in. It's not part of the calculus in deciding whether to support a platform or not.

A port of a game is more than changing the low-level APIs used to control the hardware. It's the hardware of the platform that decides the complexity of producing the port.

Linux is a special case because it's the same hardware the Windows version runs on. Your market is people who want to play the game but aren't dual booting. Most of the issues with producing your port are going to come down to driver incompatibilities and the fact that every Linux system is set up a little bit differently (the reason Blizzard never released their native Linux WoW client[1]). It's not a big market and there are loads of edge cases.

For big publishers and AAA development, they're not looking to break even or make a small profit. They need to see multiples of return on their money or they aren't going to do it. Using a shim is cheap and doesn't hurt sales enough to matter to them.

[1] https://www.phoronix.com/scan.php?page=news_item&px=OTA0NQ


I'm looking at publishers who do release Linux games using internal engines. Most of them use binary or source wrapping. Only a minority are implementing proper native rendering in those engines. And I bet it's based on cost considerations like I said above. How would you explain it otherwise?

And I'm sure that cost plays a role when a small market is evaluated. The higher the cost, the less likely such a publisher is to care, because the prospect of profit is also reduced. So it goes back to my point. Lock-in proponents like MS and co. benefit from lock-in by slowing down the growth of competition.


I agree that cost is a consideration of doing the port. From my experience, what rendering API is used at the bottom is a very small factor in that cost calculation.

I think where we disagree is that I don't think of the lower level API as being much of a lock in. The better graphics programmers I know have pretty extensive experience with the various flavors of DirectX and OpenGL. The general principles are the same and good programmers move between them easily.


> I think where we disagree is that I don't think of the lower level API as being much of a lock in.

Lock-in here doesn't mean they have no technical means of implementing other graphics backends; it means that the implementation is hard.

A lot of common middleware supports Linux just fine. It's graphics that's usually the biggest hurdle. People have the expertise to address it, but it's still a tax to pay. And supporting different distros is a very minor thing in comparison.

If graphics is not the biggest issue, what is then in your opinion?


> If graphics is not the biggest issue, what is then in your opinion?

Graphics is the biggest issue, but the issue isn't at the API level. It's in the driver and hardware differences below that layer.

The "tax" as you call it, comes mostly from the hardware drivers leaking through the abstraction. Part of this is AAA game developers fault since they are attempting to use all the GPU with edge-case tricks to eke out more performance.


And you keep not understanding how the games industry works.

Make yourself an IGDA member, attend GDC, go to Independent Games Festival, network with people there and see how many would actually share your opinion.


Facts speak for themselves. Your "how it works" can't argue with reality. It took Unreal more than a year to add a Vulkan backend. And the work isn't even fully complete:

https://trello.com/c/lzLwtb5P/124-vulkan-for-pc-and-linux

So the claim that it's trivial is a fallacy. It's surely doable, but it's a substantial effort.


1 - Adding a new rendering backend to a games engine is a trivial task, when compared to the pile of features a game engine needs to support. I did not say it was a trivial task by itself.

2 - Check how many in the industry actually care about your 3D API freedom goals. Even Carmack now rather likes DX, in spite of his earlier opinions.

3 - Every big fish and every major indie studio is using Unreal, Unity, CryEngine, Xenko, Ogre3D, Cocos2d-X, or whatever else rocks their boat.

If you are happy playing Don Quixote, by all means keep doing it.

Game studios won't change their culture just because of some guys having HN and Reddit 3D API flamewars.


All you said doesn't change the fact that it's a substantial effort. Whether it's easier than other features is irrelevant to the point above. It's hard overall.

So it's an extra cost for developers, who need to spend time on it, and it's exactly the cost MS and other lock-in freaks benefit from, since it increases the difficulty of releasing cross-platform games (one more difficult thing to address). The higher the difficulty, the greater the likelihood of some games remaining exclusives, which is exactly what lock-in freaks want.

And if you claim that this difficulty is offloaded from most game developers to third-party engine developers, it's still a problem. Longer development periods, more bugs, harder support: all of that contributes to some not making cross-platform releases as well.

There are no two ways about it, lock-in is evil, and your justification of it is very fishy (you must be working for one of the lock-in pushers).


As I said, I'm a Windows developer with Stockholm syndrome, one who has experience of how the games industry works.

Nowadays doing boring enterprise consulting, with focus on UI/UX.

Experience of reality: how people in the industry think, and what those people actually consider project costs.

Gamasutra articles are all available online. Try to find any postmortem complaining about proprietary APIs in the "what went wrong" section.

Experience, not demagogy.


> Experience of reality: how people in the industry think, and what those people actually consider project costs.

I trust more the experience of those who actually work on porting games and explain the relevant difficulties they encounter. And no one says it's trivial. On the contrary, they say that rendering is the hardest part, and the most costly one to port. So it's very clear that lock-in proponents who are against cross-platform gaming (MS and co.) benefit from this hurdle and strengthen it by pushing their APIs.

See: https://boilingsteam.com/icculus-ryan-gordon-tells-us-everyt...


Ryan Gordon, who has made a living porting AAA games to Linux since the Loki Games days.

Someone who would actually lose this source of income if you had what you wish for, and thus be forced to look for other kinds of consulting services in the games industry.

Again, not understanding how the industry works, plain demagogy.

When a games developer sees a 3D API manual for the first time, their first thought is "What cool games can I achieve with it?" not "Is it portable?".


> Someone who would actually lose this source of income

He can find something else to do, for example working in engine development directly.

So far you have been engaging in demagoguery about how trivial porting is and in justification of lock-in, even when facts to the contrary were shown to you directly. I see no point in taking your word for it against those who are actually known to be working in this field.

> When a games developer sees a 3D API manual for the first time, their first thought is "What cool games can I achieve with it?"

Until their publisher or shareholders knock them on the head and stop their cross-platform releases because of the cost of supporting more APIs. Goal (of lock-in supporters) achieved.


You mean you left Linux for .NET. Visual Studio is just an IDE, and C++ is cross-platform, last I checked.

You appear to be speaking on behalf of all/Windows developers. Perhaps not 'most' think that way? What is your evidence other than your stated status and related anecdotes?

Why should we care what AAA studios care about? Should we be happy they've continued to push DX as default?

What do you do when DX doesn't provide what is needed for a game? Too bad? Fight with MS?

Overall, shouldn't we want gamers to enjoy the games being developed as widely as possible on a range of platforms? It's a little lazy to just fall back on the old catch-22 of 'no market' because 'no games' because 'no market...'


I am just yet another guy on the Internet, with personal anecdotes like everyone else.

It is easy to find out how the industry actually thinks, and whether I am just writing nonsense.

Go spend some time reading Gamasutra (all articles available online), Connections, MakingGames Magazine, or the free sessions and slides at GDC Vault.

If there is a university game development degree programme nearby, attend their monthly meetings.

Then you can see for yourself what the game development culture is like and what is actually relevant.


> the percent of developers using Windows dropped below 50%.

Yeah, not sure how this is relevant. Developers are not your average CoD, LoL, WoW, etc. players.

I'm not trying to defend anything, as I don't even use Windows myself; it's just that, from a business point of view, making sure DX and "gaming" in general are locked to Windows makes absolutely 100% sense, because, again, most gamers do not care whether they have to use Windows or not, they just want to play games.


This is exactly the problem. Gamers are paying for Windows, just to play games, because the free options aren't usable, because of DX.

They're not choosing Windows because it does X or Y better or because it's a consciously preferred choice, they're choosing it because it's the option they get by default and because it keeps being reinforced through support of proprietary systems like DX.

Though these are only anecdotes, I sure do see a lot of comments from people willing to support/use Linux if not for the lack of games/particular apps. As you state, gamers don't care what OS they use as long as it works. And well, Linux is free and works; it's just not well enough supported because of the chicken-and-egg of no market size vs no products available, so no market size. I.e. if AAA games were developed with full Linux support, gamers wouldn't care if you gave them Linux as the default instead, would they?


Good luck keeping their business alive with those 1% Linux buyers.


What would Microsoft have to gain? Absolutely nothing. What do consumers gain? A hell of a lot.

That's why it's foolish to assume Microsoft will change their ways somewhere down the line. The point is that it is us, the users, who should be choosing alternatives, not celebrating vendor lock-in.


The same as Apple, Sony and Nintendo, I guess.


That's not a justification for being lock-in jerks. Would you appreciate using ActiveX instead of HTML? We are lucky MS didn't succeed with that one.


Yeah, so lucky we got locked into Chrome instead. Now every week I stumble upon a website that works only in Chrome.


They have learned in other areas of their business that using and contributing to FOSS is a win for everybody.


In Satya's Microsoft, it's pretty much guaranteed that DirectX is going to go the route of .NET and be largely open and multi-platform. I'd almost bet that within a year we'll get a Hacker News article with just that announcement...


How exactly would this benefit Microsoft? They thrive on the lock-in DX provides and actively use it to move people to Windows 10. It's just like how they benefit from overly complicated Office formats.

And needless to say, it was exactly under Satya Nadella that they started Windows S, where you can't use any API other than D3D.

They opened .NET because they wanted to get it onto the server side, since Windows has very limited exposure in that market. But even today .NET is still not multi-platform in terms of UI, and for that reason a lot of Windows-specific .NET software wasn't ported. And it's already more than 3 years since the release of .NET Core.


If Satya wanted to do something useful, he could support Vulkan, instead of pushing DirectX.



Great, but how can other engines benefit from it?



