Ray Tracing Denoising (alain.xyz)
138 points by Impossible on Sept 26, 2019 | 44 comments



It's only briefly mentioned, but NVIDIA's recurrent denoising autoencoder is pretty amazing.

https://research.nvidia.com/publication/interactive-reconstr...

Two Minute Papers did a nice explanation of it:

https://www.youtube.com/watch?v=YjjTPV2pXY0


Yes, it's very nice. I think they went a bit too far with their deep learning anti-aliasing solution DLSS, though. I read somewhere here that AMD potentially had an algorithm that surpassed it on many metrics using no ML. Seems like that thing about holding a hammer for so long that everything starts looking like a nail.


Also, DLSS was received rather poorly. The performance impact doesn't justify the increase in perceived quality, and the artifacts when it fails are pretty jarring.


I thought DLSS speeds up the rendering with RT ON.


In all games to date, it's better both in terms of perf and aesthetics to run with RTX disabled and TAA enabled. There's a bunch of blog posts and youtube videos like [1] about this.

There's a bit of a meme going around that DLSS stands for 'doesn't look so sharp'

[1] https://www.youtube.com/watch?v=3DOGA2_GETQ


I watched all these holy wars as well, and I’m on the OFF side too as a personal preference (framerate >> “realness”). But how does disabling RTX relate to DLSS speed at all? NVIDIA’s main selling point was that DLSS makes RTX bearable by rendering at a lower resolution and then upscaling via a trained network, which traditional AA cannot do at roughly the same quality level.


> and aesthetics to run with RTX disabled

Did you mean to say "run with DLSS disabled"? RTX contains all the new features including ray tracing, which is not the same as DLSS. There are several games where many people feel they are indeed much better aesthetics-wise with RTX on, not off. At the very least it's a matter of preference, and not "better" in general to have RTX off.


I meant to say with RTX disabled. Turning on RTX (and let's call DLSS free at this point) costs a performance penalty that's equivalent to meaningfully upping resolution or AA settings without RTX. And people prefer the latter. This might change as games start making better use of RTX functionality, but this is where things are today.

For n=1 I've played Metro with RTX on lower settings and without RTX on higher settings, and I prefer without. I think realtime raytracing came out a hw generation too soon.

https://au.ign.com/articles/2019/04/17/what-is-ray-tracing-a...

https://www.techradar.com/au/news/we-tested-ray-tracing-in-c...


> And people prefer the latter

You prefer the latter. Many people prefer ray tracing on. In fact, the main complaint online is about the cost of the cards; very few people seem to contest that scenes where ray tracing is used properly and artistically have superior aesthetic quality and superior realism. (Assuming that's what you mean, since you keep using the term "RTX" and it's unclear whether you're talking about ray tracing, DLSS, etc.)


And yet I cite three unrelated sources that all corroborate what I said with detailed analysis, and you cite... opinion?

> assuming that's what you mean, since you keep using the term "RTX" and it's unclear what you talk about, whether it's ray tracing or DLSS etc.

I use the term the same way NVIDIA uses it. RTX is anything an RTX core accelerates. Still confused? I think that might have been the intention of their marketing team.

> very few people seem to contest that scenes where ray tracing is properly artistically used, have superior aesthetic quality

This is quantifiable bs. Ray tracing as a technique is superior to rasterisation, but only with sufficient flops. And the current generation of hardware does not yield that critical number. So we get 'ray tracing', but so subdued and limited that existing approaches just flat out look better and also perform better.

https://www.youtube.com/watch?v=CuoER1DwYLY

Or, if you want a more approachable comparison of RTX vs. not-RTX, consider Minecraft+RTX [1] vs. Minecraft+PTGI [2].

[1] https://www.youtube.com/watch?v=91kxRGeg9wQ

[2] https://www.youtube.com/watch?v=Y2WqX6Iu6cU


You linked a performance analysis of RTX cards in Control, a general overview of ray tracing and how it applies to gaming and some youtube video from almost a year ago about DLSS not being implemented very well in one game (which has since been much improved).

None of these corroborate the idea of RTX effects being aesthetically inferior, or that this is a widely held opinion.

Consider watching these for an up-to-date take on the subject.

https://www.youtube.com/watch?v=blbu0g9DAGA

https://www.youtube.com/watch?v=yG5NLl85pPo


‘Potentially had an algorithm’ - if it is a potential algorithm, this implies it had not been sufficiently performant to work in practice, though.


Bad wording on my part, it's definitely in use. I dug up the context, it's discussed in the second top-level comment here: https://news.ycombinator.com/item?id=20698721


Well, it's very good for still images; the problem is that it doesn't work well on video, since the way it works is not temporally consistent.


Aw this didn't mention orthogonal array sampling, another sampling technique aimed specifically at ensuring good distribution in higher dimensions: https://cs.dartmouth.edu/~wjarosz/publications/jarosz19ortho...


It actually does mention and link that paper.


oh woops! didn't catch it


Can't wait until ray tracing is more mainstream. I know NVIDIA has their share of uncool things but I really appreciate them making the RTX series, hopefully we'll see more and more of this.


With AMD bringing ray tracing next year to consoles and desktop we won't have to wait long.


At one time, writing a ray tracer was a rite of passage when getting into computer graphics. I never did it, but this week I was trying to figure out how to find the intersection point of a ray and an oblique cone, and all of the resources I found were related to ray tracers.

I'm thinking that maybe I should actually write a ray tracer. Is that still a worthwhile thing to do, or has the world moved on?

FWIW, I never solved the ray-oblique cone problem...


Still worthwhile in my opinion! But full disclosure, I happen to be working on ray tracing problems.

Writing them is still really fun, it’s no less useful today for learning things than it was 20 years ago. You can get amazing pictures with not very much code, and the algorithms are really satisfying to understand & implement.

There are also still tons of low-hanging fruit. You’d think the easy problems would be mined out by now, but they’re not. New developments are actively happening with intersection primitives, color handling, sampling, direct lighting, shadowing, the list goes on. If you want to do research, you don’t have to dive that deep to find something unsolved that is solvable.

For an oblique cone intersection, I don’t know the right answer, but the oblique cone is a skew transform of a regular cone, right? You might be able to use a regular cone intersector, but pre-transform the ray by the inverse skew transform?

This resource is fantastic for finding intersection building blocks and code examples: http://www.realtimerendering.com/intersections.html


I've heard good things about 'Ray Tracing in One Weekend' https://raytracing.github.io/


For a more modern twist on it I’d suggest writing a path tracer (bonus points if done on the GPU). It’s not really any more difficult than an old-fashioned ray tracer, but the results are much more graphically impressive.


Here's my solution for a regular cone intersection test (see rayint_cone):

https://github.com/jimsnow/glome/blob/master/GlomeTrace/Data...

One trick is to simplify the problem to just doing a ray-intersection test with an axis-aligned cone, and to handle the general case not by complicating the intersection test but by applying the inverse transform to the ray itself. You can use the same trick to support oblique cones: just figure out what skew transform you want to apply to the cone, and apply the inverse of that transform to the ray.
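
To make that concrete, here's a minimal Python sketch of the same idea (not the glome code; the canonical cone, helper names and example skew matrix are just illustrative choices). It intersects against a canonical cone with apex at the origin, axis along +z and a 45-degree half-angle, and handles an oblique cone by pushing the ray through the inverse of the cone's object-to-world matrix. The transformed direction is deliberately left unnormalized so the returned t is also valid for the original world-space ray.

    import numpy as np

    def intersect_canonical_cone(o, d):
        # Nearest positive hit of the ray o + t*d against x^2 + y^2 = z^2, 0 <= z <= 1.
        a = d[0]**2 + d[1]**2 - d[2]**2
        b = 2.0 * (o[0]*d[0] + o[1]*d[1] - o[2]*d[2])
        c = o[0]**2 + o[1]**2 - o[2]**2
        if abs(a) < 1e-12:            # degenerate quadratic: ray parallel to the cone surface
            ts = [] if abs(b) < 1e-12 else [-c / b]
        else:
            disc = b*b - 4.0*a*c
            if disc < 0.0:
                return None
            sq = disc ** 0.5
            ts = sorted([(-b - sq) / (2.0*a), (-b + sq) / (2.0*a)])
        for t in ts:
            z = o[2] + t*d[2]
            if t > 1e-9 and 0.0 <= z <= 1.0:   # clip to the finite cone
                return t
        return None

    def intersect_oblique_cone(o, d, M):
        # M is the cone's 4x4 object-to-world transform (it may include a skew/shear).
        Minv = np.linalg.inv(M)
        o_local = (Minv @ np.append(o, 1.0))[:3]  # transform the origin as a point
        d_local = Minv[:3, :3] @ d                # linear part only; deliberately not
                                                  # renormalized, so t stays valid in world space
        return intersect_canonical_cone(o_local, d_local)

    # Example: a cone sheared 0.5 units along x per unit of z, hit by a downward ray.
    skew = np.eye(4)
    skew[0, 2] = 0.5
    t = intersect_oblique_cone(np.array([0.2, 0.0, 5.0]),
                               np.array([0.0, 0.0, -1.0]), skew)
    print(t)   # ~4.87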


It cannot be that difficult, mail me the question and I should be able to help you. See my profile.


For a practical demonstration of what denoising can theoretically do, I recommend Quake II RTX. There is a dev mode with a bunch of rendering settings, many more than in an average game; they were added because the game is more of an RTX tech demo at this point.

There is a switch that turns denoising of the ray-traced output on/off: it makes a tremendous difference, to the point where, looking at the noisy image, it's hard to imagine that extracting the denoised version is even possible.


Something I've wondered is whether technology like this could eventually be self-defeating for hardware manufacturers. Rather than the evolution of graphics deriving from improved accuracy of the optical simulation fuelled by advances in computational power, it may instead derive from optimising subjective video quality, similarly to video codecs.

While accurately simulating optics is needfully computationally expensive and gives special-purpose graphics hardware an advantage, it's not clear that psychologically subjective high quality graphics (i.e. generating visuals which are inaccurate but convincing to humans) has such a need.


>While accurately simulating optics is needfully computationally expensive and gives special-purpose graphics hardware an advantage, it's not clear that psychologically subjective high quality graphics (i.e. generating visuals which are inaccurate but convincing to humans) has such a need.

What you're describing is rasterization, which has been the industry standard (at least for games) for decades.


Techniques used to create realism with rasterization (e.g. normal mapping; shadow mapping; screen-space anti-aliasing) are still simulations of optics, just not entirely faithful ones.

Generating visuals with an autoencoder, albeit hinted by noisy physically-based raytracing, is not an optical technique; detail is generated from a visual statistical model, not an optical simulation.


> noisy physically-based raytracing...detail generated from a visual statistical model

That is an optical simulation :)


The raytracing is, but you don't see the result of the raytracing, you see the output of a neural network inventing detail based on higher definition training data. It's like seeing some blurry dots through a microscope, then drawing a sketch of detailed cells, based on your memory of pictures you've seen. The microscope is an optical system, but the sketch is the result of memory and style transfer, not simulation of optics. Hypothetically, you could have no understanding of the behaviour of light in producing the detailed sketch.


I think the success of deep learning is quite unfortunate. There are a lot of areas where "throw an ANN at it" has become a go-to even though they're basically inscrutable blackboxes with minimal theoretical guarantees.


I think it's a question of complexity. The idea of ray tracing is actually really simple compared to modern day rasterized graphics.


So far, what I've read about denoising is always in the context of image post-processing, but it seems to me that some of these techniques could be used just as well to identify the areas of the image that the denoiser is most uncertain about, so that you can trace more rays in those directions.
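
A toy sketch of what that could look like, using plain per-pixel sample variance as the "uncertainty" metric rather than an actual denoiser's confidence output, and with a made-up radiance() function standing in for a real ray tracer:

    import numpy as np

    rng = np.random.default_rng(0)
    W, H = 64, 64

    def radiance(x, y):
        # Stand-in for tracing one ray through pixel (x, y): a smooth image
        # with a noisy, high-variance band in the middle.
        base = 0.5 + 0.4 * np.sin(x * 0.2)
        noise = rng.normal(0.0, 0.5) if 24 < y < 40 else rng.normal(0.0, 0.02)
        return base + noise

    # Pass 1: a few uniform samples per pixel, tracking mean and variance.
    n0 = 8
    samples = np.array([[[radiance(x, y) for _ in range(n0)]
                         for x in range(W)] for y in range(H)])
    mean = samples.mean(axis=2)
    var = samples.var(axis=2, ddof=1) / n0        # variance of the mean estimate

    # Pass 2: spend an extra ray budget where the estimate is noisiest.
    budget = W * H * 4
    extra = np.floor(budget * var / var.sum()).astype(int)
    for y in range(H):
        for x in range(W):
            n = extra[y, x]
            if n > 0:
                new = np.array([radiance(x, y) for _ in range(n)])
                mean[y, x] = (mean[y, x] * n0 + new.sum()) / (n0 + n)

    print("extra rays in noisy band:", extra[25:40].sum(),
          "elsewhere:", extra[:25].sum() + extra[40:].sum())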


Sure, if you want to reduce error at some cost, you can use a noise metric to identify where you should send more rays. The premise of denoising is that it's cheaper and you've already spent enough time on the analytical algorithms. Also, there is a chance that the noise/variance is due to a high-variance feature which (a) would have been fine to leave out and (b) causes a cascade of "noise-driven" ray tracing.


What's fascinating to me about this is that it sounds like future renderers may end up working very much like we think the brain does. There is a virtual world, but very little raw data about that world is used directly, just a small sample, and the rest of the image is filled in by a neural model that is able to infer how the whole scene should look based on its a priori understanding of how things like light and depth work.


Great post but disappointingly few images. It would have been really interesting to see these techniques applied on a standard scene with before/after comparisons.


Shirley published a paper in 1991 showing that low discrepancy samplers worked well in a ray-tracing context. So I wouldn't say that's particularly new.


I'm not sure what exactly you are responding to, but low discrepancy sampling is not at all what this page is about. There have been a lot of papers on many different techniques with various upsides and drawbacks when it comes to reducing noise.

Comparing this overview to one of the most basic techniques that is used everywhere and is a given is like reading an article on a modern car engine and dismissing it because you saw someone light some gas on fire 30 years ago.


I'm sorry you took such offence to my comment. I guess I should have quoted the statement that I was replying to within the overview section: "Recently, the use of low discrepancy sampling [Jarosz et al. 2019] and tileable blue noise [Benyoub 2019] has been used by Unity Technologies, Marmoset Toolbag and NVIDIA in real time ray tracers."


Those papers are about specific low discrepancy sampling patterns, their ease of use, their speed, their flexibility and the scalability of their properties into higher dimensions. Papers written by knowledgeable researchers in 2019 are now used in all the things you listed.

I understand you know what low discrepancy samples are, but equating the very first demonstration that random sampling wasn't ideal for ray tracing to the state of the art that has evolved over three decades of research is ludicrous.

I don't know why you are desperate to be dismissive but it has no basis whatsoever in reality.


The paragraph was a chronological account of when techniques were introduced with accompanying citations. The 1998 paper in the preceding sentence would lead the reader to believe that low discrepancy sampling came after the robust sampling methods (and potentially even that low discrepancy sampling was a new technique). Both have had further work that continues to this day, and both are much more complex and well-understood today.

> desperate to be dismissive

How is clarifying what I was commenting on "desperate"? I feel like you're trying to escalate here.


whoa, calm down. The post makes it sound like low-discrepancy sampling is a recent development, and 'rbkillea is pointing out that it is 30 years old.

That's it, no need to attribute malice anywhere. I think you are reading way too much into 'rbkillea's comments.


There is nothing about low discrepancy sampling being a recent development. This article is about recent research, and suggesting that anyone involved would imply that one of the most trivial aspects of rendering is somehow new is total nonsense.

There is a recent paper about generalizing n-rooks sampling to higher dimensions which seems to have been misunderstood by yourself and others. It was written by researchers who already have dozens of high profile papers on many different topics.
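
For reference, the basic 2D n-rooks (a.k.a. Latin hypercube) construction is only a few lines; this sketch is the textbook 2D version, not the higher-dimensional generalization from that paper:

    import numpy as np

    def n_rooks_2d(n, rng):
        # One jittered sample per column stratum and per row stratum of an n x n grid.
        xs = (np.arange(n) + rng.random(n)) / n
        ys = (np.arange(n) + rng.random(n)) / n
        rng.shuffle(ys)                  # decorrelate the row/column pairing
        return np.column_stack([xs, ys])

    rng = np.random.default_rng(1)
    pts = n_rooks_2d(16, rng)
    # Every row and every column of the 16x16 grid contains exactly one sample.
    print(np.bincount((pts[:, 0] * 16).astype(int), minlength=16))
    print(np.bincount((pts[:, 1] * 16).astype(int), minlength=16))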



