I've always wondered why games didn't implement that, but now that I think about it, I can't imagine a neat solution (in 30 seconds).

Imagine a 3rd person action game. A and B split up and have two perspectives. They end up facing in opposite directions. How do you reconcile the camera views?

Maybe you could have some heuristic that only does it when their perspectives are "close enough", but what value does that bring?

For some kind of top-down/isometric type game I could see how that might work.

EDIT: actually read the article after getting curious. All of this is covered; extremely cool article. I don't quite understand ray tracing enough to understand exactly why it is faster. Is it because rasterisation starts from the camera but ray tracing starts from the light source? So you can amortise more calculations if you start from a light source rather than from individual perspectives?




> I don't quite understand ray tracing enough to understand exactly why it is faster. Is it because rasterisation starts from the camera but ray tracing starts from the light source? So you can amortise more calculations if you start from a light source rather than from individual perspectives?

The problem is that rasterization is a bastardized raytrace. The camera's view volume, a frustum (a truncated pyramid), is transformed into a unit cube and flattened; every pixel then amounts to a parallel "raytrace" straight through that cube. To get multiple viewports, rasterization sets up an entirely new render pass, complete with its own boilerplate code and API calls, as if each view were its own game with its own screen. A four-way split was akin to brute-forcing four of these bastardized raytraces.
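To make that concrete, here is a minimal sketch of what split-screen costs under rasterization: each view needs its own view-projection matrix and a full re-submission of the scene. Mat4, Scene, setViewport and submitDrawCalls are hypothetical stubs, not any real engine's API.

    // Hedged sketch: rasterized split-screen is "the same game twice".
    struct Mat4     { float m[16]; };               // placeholder matrix type
    struct Scene;                                   // whatever holds the meshes
    struct Viewport { int x, y, w, h; };

    void setViewport(const Viewport&);              // restrict the output region
    void submitDrawCalls(const Scene&, const Mat4& viewProj);

    void renderSplitScreen(const Scene& scene,
                           const Mat4 viewProj[2],  // one camera per player
                           const Viewport vp[2]) {
        for (int player = 0; player < 2; ++player) {
            setViewport(vp[player]);
            // All vertices are transformed and rasterized again from scratch
            // through this player's matrix; nothing from the other player's
            // pass is reused.
            submitDrawCalls(scene, viewProj[player]);
        }
    }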

Raytracing simplifies this by admitting the above is terrible, and giving every pixel its own explicit transform (a ray) rather than that ugly pile of hacks. I think of it as a "camera shader", sitting alongside vertex and pixel shaders as a way to decide, per pixel, where its view of the scene comes from.
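Roughly like this (all types and helpers below are illustrative stubs, not a particular API): the camera becomes a per-pixel function from pixel to ray, so split-screen is just each pixel choosing its own camera while the scene, BVH and shaders stay shared.

    struct Vec3  { float x, y, z; };
    Vec3 operator+(Vec3, Vec3);
    Vec3 operator*(Vec3, float);
    Vec3 normalize(Vec3);

    struct Color { float r, g, b; };
    struct Scene;                                   // one shared scene / BVH
    struct Ray   { Vec3 origin, dir; };
    Color trace(const Scene&, const Ray&);          // the usual tracing loop

    struct Camera { Vec3 pos, forward, right, up; float tanHalfFov; };

    Ray generateRay(const Camera& cam, float u, float v, float aspect) {
        // u, v in [-1, 1] address a point on the camera's virtual image plane.
        Vec3 dir = normalize(cam.forward
                           + cam.right * (u * cam.tanHalfFov * aspect)
                           + cam.up    * (v * cam.tanHalfFov));
        return { cam.pos, dir };
    }

    Color shadePixel(int px, int py, int width, int height,
                     const Camera& camA, const Camera& camB,
                     const Scene& scene) {
        bool left = px < width / 2;                 // left half = player A
        const Camera& cam = left ? camA : camB;
        int   halfW = width / 2;
        float lx = float(left ? px : px - halfW);   // pixel within its half
        float u  = 2.0f * (lx + 0.5f) / halfW - 1.0f;
        float v  = 1.0f - 2.0f * (py + 0.5f) / height;
        return trace(scene, generateRay(cam, u, v, float(halfW) / height));
    }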


I found this part not well covered in the article either. But to your question: ray tracing starts from the camera. Typically you define the direction of a ray for each pixel, that ray bounces around the scene to calculate a final color, and that color is rendered as the color of that pixel.

My assumption is that the shared scene and shaders might play a role here, although the direction of the rays will still vary.
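For reference, the "ray bounces around to calculate a final color" part usually looks something like this sketch; HitInfo, sampleBounce and the Scene methods are stand-ins, not a specific renderer's API.

    struct Vec3  { float x, y, z; };
    struct Ray   { Vec3 origin, dir; };
    struct Color {
        float r, g, b;
        Color  operator*(Color o) const { return { r * o.r, g * o.g, b * o.b }; }
        Color& operator+=(Color o)      { r += o.r; g += o.g; b += o.b; return *this; }
    };
    struct HitInfo { Color emitted, albedo; /* position, normal, ... */ };
    struct Scene {
        bool  intersect(const Ray&, HitInfo&) const;   // BVH traversal lives here
        Color skyColor(Vec3 dir) const;
    };
    Ray sampleBounce(const HitInfo&);                  // picks the next direction

    Color trace(const Scene& scene, Ray ray) {
        Color result     = { 0, 0, 0 };
        Color throughput = { 1, 1, 1 };                // light surviving so far
        for (int bounce = 0; bounce < 4; ++bounce) {
            HitInfo hit;
            if (!scene.intersect(ray, hit))            // escaped: add sky light
                return result += throughput * scene.skyColor(ray.dir);
            result     += throughput * hit.emitted;    // light sources we hit
            throughput  = throughput * hit.albedo;     // surface absorbs some light
            // The next (secondary) ray goes off in an essentially random
            // direction for a diffuse surface, which is why rays from
            // neighbouring pixels stop resembling each other after one bounce.
            ray = sampleBounce(hit);
        }
        return result;
    }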


A big part of raytracing is checking which objects are intersected by a ray. When you have two cameras very near one another, most rays from the second camera will intersect the same objects as the corresponding rays from the first camera.


If I recall correctly, this would not matter too much in practice.

The reason for this is that while the primary rays are a coherent bundle, the secondary rays are all over the place. Essentially, as soon as a ray hits its first object, it bounces off in a random direction into the scene. And for every primary ray, you might end up spawning a dozen secondary rays.

I believe this is also the reason dedicated raytracing hardware is needed for GPUs. While a coherent bundle is fairly trivial to parallelize, incoherent rays are going to absolutely wreck your SIMD performance. You really want dedicated hardware to properly manage this, for example by doing BVH traversal and collecting all secondary rays that hit the same object for a single mostly-coherent render pass.
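That regrouping is sometimes done in software too (often called wavefront or stream path tracing). A hedged sketch of the sorting step, with made-up struct names and a materialId field filled in by traversal:

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 origin, dir; };

    struct SecondaryRay {
        Ray ray;
        int pixelIndex;   // which pixel's colour this ray contributes to
        int materialId;   // filled in by BVH traversal / hit lookup
    };

    // Sort so that rays hitting the same material sit next to each other;
    // the shading stage can then run over contiguous, mostly-coherent runs
    // instead of neighbouring SIMD lanes all wanting different shaders.
    void regroupByMaterial(std::vector<SecondaryRay>& rays) {
        std::sort(rays.begin(), rays.end(),
                  [](const SecondaryRay& a, const SecondaryRay& b) {
                      return a.materialId < b.materialId;
                  });
    }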



