Ray Tracing Without Ray Tracing API (diaryofagraphicsprogrammer.blogspot.com)
50 points by ingve on Sept 8, 2018 | 16 comments



Based on the last 20+ years of graphics development, it is easy to project that when a large part of the ray tracing codebase is owned by a hardware vendor, various bugs will be introduced with each driver release.

Also based on the last 20+ years in graphics development: in real-time applications you've been using APIs with vendor-specific implementations for quite some time now, and you've been dependent on driver quality for just as long. So, what's the author's point?

In offline rendering, people do write everything themselves, with code paths to accelerate certain stuff where possible by vendor (Embree, GPGPU...). You can bet people will implement support for RTX as well; in fact, they already are. But implementing your own ray tracing on the compute/shader paths will NOT utilise the dedicated unit on RTX cards that is meant for these new accelerations, the RT Core.


The likelihood of a feature being functional is usually based on two things: its hidden complexity, and how frequently used it is. HW ray tracing suffers on both of these counts (for now). It won't be truly common outside of major engines for years, and it hides a lot.

Any developer who is in the awkward middle ground of building their own technology without the resources of a major engine (both in terms of manpower and in relevance to hardware vendors) should expect everything on the bleeding edge to be partially or fully broken, often with no timely recourse. I'm not faulting the vendors too much: the software x hardware situation is absurdly complicated, and spending limited driver resources on a smaller company's use case instead of, say, Unreal and Unity, doesn't make a lot of sense.

But if you're one of those smaller companies, you pretty much have to build the fallback implementation first using common low level primitives that have been tested a million times across the industry. If you don't, your product simply won't work for many (and sometimes most) users. Even when their hardware is supposed to support it.

And speaking from unfortunate experience, the up-front cost of building the fallbacks is often less than trying to make the fancier things work consistently, even though the fancier thing is ostensibly doing the work for you.


I'm not convinced the part of ray tracing implemented in hardware on the new NVidia cards (primarily BVH traversal and ray-triangle intersection) is significantly more complex than things that are already implemented in hardware (rasterization, anisotropic filtering, tessellation, scheduling, compression, etc.), and those tend to be pretty solid.
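
For reference, the ray-triangle test is compact enough to sketch in full. Here's a minimal, illustrative C++ version of the classic Möller-Trumbore intersection (names and structure are mine, not from any vendor API); this test plus BVH traversal is essentially what the new hardware fixes in silicon:

    #include <cmath>
    #include <optional>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // Möller-Trumbore: returns the distance t along the ray at which it hits
    // triangle (v0, v1, v2), or nothing on a miss.
    std::optional<float> rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
        const float kEps = 1e-7f;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 p = cross(dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < kEps) return std::nullopt;  // ray parallel to triangle plane
        float inv = 1.0f / det;
        Vec3 tv = sub(orig, v0);
        float u = dot(tv, p) * inv;                      // first barycentric coordinate
        if (u < 0.0f || u > 1.0f) return std::nullopt;
        Vec3 q = cross(tv, e1);
        float v = dot(dir, q) * inv;                     // second barycentric coordinate
        if (v < 0.0f || u + v > 1.0f) return std::nullopt;
        float t = dot(e2, q) * inv;                      // hit distance along the ray
        if (t <= kEps) return std::nullopt;              // hit is behind the origin
        return t;
    }

The hard part isn't this function; it's running billions of these tests per second against a BVH while keeping the memory system fed.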

I also expect the hardware ray tracing to be heavily used for offline rendering before it becomes mainstream for real time so there will be plenty of usage from demanding customers to iron out any issues.


It's true that features of similar or greater complexity do tend to work pretty well already, but those features tend to be the ones that are practically fundamental and used everywhere. (edit: but I have hit bugs with scheduling too, and maybe tessellation and anisotropic filtering, but those cases were weird enough that it was hard to tease out the true cause.)

Ray tracing has a good shot at getting there (and probably in less than 5-10 years), and developers who can afford the suffering will hopefully pave the way, but early adopters should be very wary.

To be clear, I'm not catastrophically pessimistic. I intend to do some work with it, but I'm not under any illusions. I once encountered five distinct blocking driver bugs in the span of a single week. I just assume everything is broken until proven otherwise.


I think drivers tend to be more problematic than hardware; I'd actually expect more issues with driver-implemented support for ray tracing APIs running as compute shaders than with the hardware-accelerated support.


Yup, that's definitely true. It's going to be a fun transition period.


> primarily bvh traversal and ray triangle intersection

I've been waiting for a concise explanation of what the damn hardware on these cards actually does since the release announcement. Thank you.

So meshes are still pivotal. I was hoping for something that might lend itself to accelerating rendering of other types of object representation.


You can write your own custom intersection shaders to support other types of geometry, but they will run on the regular compute hardware rather than on the dedicated hardware for ray-triangle intersection. I believe they will still benefit from the BVH traversal hardware.
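
To illustrate what such a shader has to compute, here's a sketch in plain C++ of an analytic ray-sphere test (illustrative only; in DXR this logic would live in an HLSL intersection shader that reports the candidate hit distance back to the traversal via ReportHit, and none of these names come from the actual API):

    #include <cmath>
    #include <optional>

    struct Sphere { float cx, cy, cz, r; };

    // Analytic ray/sphere intersection: the kind of test a custom intersection
    // shader performs for procedural (non-triangle) geometry. Assumes the ray
    // direction (dx, dy, dz) is normalized.
    std::optional<float> raySphere(float ox, float oy, float oz,
                                   float dx, float dy, float dz,
                                   const Sphere& s) {
        // Solve |o + t*d - c|^2 = r^2, a quadratic in t with leading coefficient 1.
        float lx = ox - s.cx, ly = oy - s.cy, lz = oz - s.cz;
        float b = lx * dx + ly * dy + lz * dz;
        float c = lx * lx + ly * ly + lz * lz - s.r * s.r;
        float disc = b * b - c;
        if (disc < 0.0f) return std::nullopt;  // ray misses the sphere entirely
        float root = std::sqrt(disc);
        float t = -b - root;                   // nearer of the two roots
        if (t < 0.0f) t = -b + root;           // origin is inside the sphere
        if (t < 0.0f) return std::nullopt;     // sphere is behind the ray
        return t;
    }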


No one is forcing you to go full ray tracing for real time; that isn't even attainable yet. Hybrid solutions are where we'll be soon enough with RTX (and hopefully something similar from others). In a hybrid pipeline, of course you would need a fallback for most things; that's what you already do with modern features in today's APIs/cards as well.

The thing with ray tracing, though, is that if anything, complexity is greatly reduced compared to rasterization. That's kind of the whole point of it: you trade more required computing power for less complexity and more elegance.
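
To make the "less complexity" point concrete: the entire control flow of a ray tracer is one recursive function. A sketch with made-up names (Scene, Hit, etc. are illustrative, not from any API); all of the real cost hides inside the scene intersection call, which is exactly what the RT hardware accelerates:

    #include <optional>

    struct Color { float r, g, b; };
    struct Ray   { float origin[3], dir[3]; };

    struct Hit {
        float reflectivity;  // 0 = fully diffuse, 1 = perfect mirror
        Ray   reflected;     // ray bounced off the surface at the hit point
    };

    struct Scene {
        std::optional<Hit> intersect(const Ray&) const;  // BVH traversal + primitive tests
        Color background(const Ray&) const;              // environment lookup on a miss
        Color directLight(const Hit&) const;             // shadow rays toward the lights
    };

    // Compare this to a raster pipeline: no clipping, binning, depth buffering,
    // or screen-space tricks. Reflections and shadows fall out of the recursion.
    Color trace(const Scene& scene, const Ray& ray, int depth) {
        std::optional<Hit> hit = scene.intersect(ray);
        if (!hit) return scene.background(ray);
        Color c = scene.directLight(*hit);
        if (depth > 0 && hit->reflectivity > 0.0f) {
            Color r = trace(scene, hit->reflected, depth - 1);
            c.r += hit->reflectivity * r.r;
            c.g += hit->reflectivity * r.g;
            c.b += hit->reflectivity * r.b;
        }
        return c;
    }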


Using hardware-provided APIs might take more QA later, but I'm sure creating your own ray tracing implementation from scratch costs way more developer and QA time up front (assuming similar quality).


I had the same thought during the SIGGRAPH presentation. The API they are providing is a step back, too high-level, like going from Vulkan back to OpenGL.

It is nice to have for starters, but it will die quickly if, in the near future, somebody comes up with an improved, incompatible technique that can be implemented over the existing low-level APIs.


Who is "they"? NVIDIA provides multiple interfaces, accessible through various APIs or natively. The point of the hardware is to provide specific interfaces and data types that make RT faster; these are not limited to a specific graphics API.

You can use it through OptiX, which is a proprietary low-level C++ API based on CUDA, or you can use it with Vulkan or DX12's DXR; the level of abstraction depends on which API/interface you'll be using.

On top of that, NVIDIA provides ready-to-use hybrid ray tracing libraries, currently built on top of Microsoft's DXR, with Vulkan support coming within the next 2-3 months.

But you don't have to use their hybrid path tracing or denoiser; you can develop those completely on your own.


I was talking specifically about DXR.


DXR doesn't seem to be any more high-level than Vulkan RT. If you want, you can go native, but it isn't clear to me what advantages you would even get, considering how low-level DX12 already is.


Well, from the NVIDIA perspective, that's exactly the point. If they figure out better techniques later, they can just implement them in a driver update, provided everyone's using DXR. If you use your own low-level thing, they can't upgrade it for you.


Just noting that ray tracing doesn't belong to anyone, and the article is taking a shot at the real-time ray tracing hype recently deployed into everyday hardware.



