> I don't quite understand ray tracing well enough to see exactly why it is faster. Is it because rasterisation starts from the camera but ray tracing starts from the light source? So you can amortise more calculations if you start from a light source rather than from individual perspectives?
The problem is that rasterization is a bastardized raytrace. The camera's view, represented as a frustum (a truncated pyramid), is transformed into a cube and flattened. Every pixel is then a parallel "raytrace" along the flattened cube's depth axis. To get multiple viewports, rasterization creates an entirely new rendering pass, complete with its own boilerplate code and API calls, as if it were its own game with its own screen. A four-way split screen was akin to brute-forcing four bastardized raytraces.
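The frustum-to-cube step described above can be sketched with an OpenGL-style perspective matrix. This is a hypothetical illustration, not code from the thread; the function names are made up, but the matrix itself is the standard one:

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """Standard OpenGL-style perspective matrix: maps the view
    frustum (a truncated pyramid) into the [-1, 1] NDC cube."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, p):
    """Apply the 4x4 matrix to point p = (x, y, z), then do the
    perspective divide by w -- the "flattening" step."""
    x, y, z = p
    v = [sum(m[r][c] * (x, y, z, 1.0)[c] for c in range(4)) for r in range(4)]
    w = v[3]
    return (v[0] / w, v[1] / w, v[2] / w)
```

A point inside the frustum, e.g. `project(perspective(90, 1.0, 1.0, 100.0), (0, 0, -5))`, lands inside the cube with every coordinate in [-1, 1]; after that, every pixel just marches straight "into" the cube in parallel.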
Raytracing simplifies this by admitting the above is terrible, and instead gives every pixel its own transform (a raytrace) rather than that ugly pile of hacks. I think of it as a "camera shader", sitting alongside pixel and vertex shaders as a way to dynamically compute each pixel's view of the scene.
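The "camera shader" idea above can be sketched as per-pixel ray generation. Again a hedged illustration with invented names, assuming a pinhole camera at the origin looking down -z:

```python
import math

def primary_ray(px, py, width, height, fov_y_deg):
    """Generate the camera ray through pixel (px, py): each pixel
    gets its own transform instead of sharing one flattened cube."""
    aspect = width / height
    scale = math.tan(math.radians(fov_y_deg) / 2.0)
    # Map the pixel centre into [-1, 1] screen space.
    x = (2.0 * (px + 0.5) / width - 1.0) * aspect * scale
    y = (1.0 - 2.0 * (py + 0.5) / height) * scale
    # Camera at the origin looking down -z; normalise the direction.
    dx, dy, dz = x, y, -1.0
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    origin = (0.0, 0.0, 0.0)
    direction = (dx / length, dy / length, dz / length)
    return origin, direction
```

Because each ray is computed independently, a four-way split screen is just four different camera functions over four pixel regions; there is no need to rebuild the whole rendering setup per viewport.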