> Their combination of a simple algorithm and stunning results are hard to beat.
That's exactly what fascinates me about raytracing. It's pretty straightforward: you literally emulate light bouncing around. But given enough resources, the results can be surprisingly good-looking.
Out of interest: the method of literally emulating light bouncing around seems to be called photon tracing (http://en.wikipedia.org/wiki/Photon_tracing). It creates realistic renderings of just about anything, if you can afford the time to render it. Rendering times are prohibitively high, though, because this method simulates lots of photons that never hit the sensor.
Ray tracing, although very powerful, has some limitations because it goes the other way: It casts rays from the sensor into the scene.
could you elaborate on where the symmetry breaks down?
From your comment and another one here, it seems that some of the photons that don't hit the sensor still affect the image (either that, or for some photons you can't get back to the source starting at the sensor). In other words, if a light source emits a trillion photons everywhere in the room, and your sensor (the idealized camera) catches ten million photons, then it's not strictly equivalent to trace those ten million paths back to the light source and ignore the ones that didn't make it to the idealized camera.
If it were equivalent, you wouldn't need "photon tracing and ray tracing" or bidirectionality.
But I'm struggling at where the symmetry breaks down, since it seems that tracing a path back from the sensor along the same angles should produce the same result as tracing it forward from the light source - for the photons that directly or indirectly (after a certain number of bounces or passing through the materials) make it to the idealized camera.
But the ones that don't make it to the idealized camera - how can they still affect the frame? Or (equivalently), when is it the case that you can trace it forward to the camera but not backward from the camera?
Why do you need bidirectionality? Where does the symmetry break down?
When you do photon tracing, you begin with beams of known intensity, wavelength and direction, because you start from the light source. Everything that ends up hitting the sensor combines to give the image.
When you do ray tracing, you don't know up front that 10 million photons were captured. You know how many rays you are going to send based on the resolution of the image. This also means there are issues with, say, light coming through narrow apertures, because you limit yourself to a finite number of rays.
If you use photon tracing, and a light source sends 10 million photons of which 70,000 are absorbed, that's 70,000 data points to define the image, which may be plotted on a 200x200 canvas. If you use ray tracing with a 200x200 canvas you will have the same size of image, but only 40,000 samples. Even if you expand the canvas, there may be some photons which can reach the camera but whose path the ray tracer can't find.
For instance, say some photons hit the camera at pixel x-coordinates 10.0003 to 10.0005 through a fine aperture.
This will illuminate the camera in a photon tracing simulation. The ray tracer will not capture that photon unless its resolution were 10,000 times higher: it will trace back from pixels 10 and 11, but it can't do every intermediate step. The photon tracer can account for those circumstances.
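To make the sampling gap concrete, here's a tiny sketch (made-up numbers, not from the actual demo) of a camera-side tracer sampling pixel centers across that fine sliver of light:

```javascript
// A sliver of light reaches the sensor only between these pixel x-coordinates
// (e.g. through a fine aperture):
const sliverMin = 10.0003, sliverMax = 10.0005;

// A camera-side tracer takes a fixed number of samples per pixel; does any
// sample across pixel x = 10..11 land inside the sliver?
function sliverSampled(samplesPerPixel) {
  for (let i = 0; i < samplesPerPixel; i++) {
    const x = 10 + (i + 0.5) / samplesPerPixel; // sample position
    if (x >= sliverMin && x <= sliverMax) return true;
  }
  return false;
}

console.log(sliverSampled(1));     // false: one ray per pixel misses the light
console.log(sliverSampled(10000)); // true: ~10,000x the sampling rate finds it
```

A photon traced forward from the source lands wherever it lands, so it deposits energy in that pixel regardless of how finely the camera side samples.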
A simple case: the sun illuminating a large white room through a small window. The light creeps all over the room, not just the directly illuminated spot, which is all you'd get if you did a simple camera-side trace.
The issue at hand is: we don't experience it, but light behaves as a system in dynamic (thermodynamic) equilibrium. You don't see the transients, because the equilibrium takes nanoseconds to converge, but when you turn on a light your whole room is exchanging photons until it settles at a rate where every surface is receiving as much as it's radiating plus what's wasted as heat, and the total heat equals the energy coming in through the light sources. Finding those values means solving the so-called light transport equation -- and this clearly requires probing geometric information from all over the place.
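For reference, the light transport (rendering) equation mentioned above is the standard Kajiya formulation; nothing here is specific to this demo. Outgoing radiance at a surface point x is the emitted radiance plus the reflected fraction of all incoming radiance:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Since the incoming radiance L_i at x is the outgoing radiance L_o of other surface points, the equation is recursive, which is exactly why solving it needs geometric information from the whole scene.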
In the sunlit room case, all that light is a) heating the room and b) going back outside. Now let me try to build a physical model based on your intuition: this equilibrium dynamic depends on the rate of absorption/reflection of each surface. If you have experience with EE, the room is essentially a resonator, where the walls reflect some light and absorb some until the room is absorbing as much as there is light coming in. The less absorbing the walls are, the greater the Q factor of the resonator: the light is going to bounce a lot and build intensity before it gets absorbed. This is why dark-walled rooms are so dramatically, well, darker than white rooms. And the Q factor is not bounded: if your walls were perfect mirrors, the intensity would build up with time to +inf.
With bi-directional path tracing it works something like this:
* trace a photon from the light source with x bounces
* trace a ray from the camera with x bounces
Now the endpoints of those traces are not connected, but you can calculate the probability that they can 'see' each other, and use that to compute the light transport from the light source (photon) to the camera (pixel).
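A toy sketch of that connection step (2D, invented names, with the BSDF/cosine factors and the actual visibility ray omitted), just to show the shape of the computation:

```javascript
// Endpoints of the light sub-path and eye sub-path after x bounces each.
// 'throughput' is the energy fraction accumulated along each sub-path.
function geometryTerm(a, b) {
  const dx = b.x - a.x, dy = b.y - a.y;
  return 1 / (dx * dx + dy * dy); // inverse-square fall-off only; a real
                                  // tracer also multiplies in cosine terms
}

function connect(lightEnd, eyeEnd) {
  // A real implementation first traces a shadow ray to test visibility;
  // here we assume the endpoints can 'see' each other.
  return lightEnd.throughput * eyeEnd.throughput * geometryTerm(lightEnd, eyeEnd);
}

const contribution = connect(
  { x: 0, y: 0, throughput: 0.8 }, // photon path endpoint
  { x: 3, y: 4, throughput: 0.5 }  // camera path endpoint
);
console.log(contribution); // ~0.016, i.e. 0.8 * 0.5 / 5^2
```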
A nice and thorough book about raytracing-like techniques (including photon tracing) and their capabilities is "Advanced Global Illumination" (http://sites.edm.uhasselt.be/agibook). It is very interesting.
Very nice indeed! I especially like the fact that the source is so well annotated, which makes me wonder: why bother with minifying it at all and not simply use the unminified source on Jsfiddle?
> Note that the goal here was to make the source code as small as possible, not clarity; so even the original code before minification is a horrible mess. This doesn't do justice to the elegance and simplicity of proper raytracer code; I'm writing a book to right this wrong.
Raytracing is not exactly a lightweight calculation. My first raytracer was TurboSilver 3D on the Amiga, in 1990 (actually one of the first commercial raytracers ever produced). "Photorealistic" images at 7.5 MHz. For an image like this, you'd set up the scene, hit the render button, and grab a quick lunch. When you came back, the scene would be about 2/3rds rendered, and you'd watch it for a while, thrilled by every new pixel that pushed itself onto the screen. Then you'd go get coffee and hope the scene was done when you got back.
Now, the same scene (say, 320x200px) renders in an eyeblink on my phone, driven by a high-level universal scripting language I can tweak at will. This is beyond amazing. It's fucking transcendent.
(Oy vey, I feel old. Where'd I put my dentures? BTW: get off my lawn, etc.)
Thanks for sharing this. The page also has a presentation, as well as several ports of the same program to other languages like OCaml, Haskell and JavaScript. Interesting to see that the OCaml version is on par with the C++ version performance-wise, but is 47% shorter. The Haskell version, however, is not shorter and is 4.5 times slower?
And if you like that (spheres only), you might also like this (general triangles-based), in 444 lines of Scala (and OCaml, Python, Ruby, Lua, Scheme, C++, and C):
Impressive not only from a "ooo-that's-clever" perspective, but it's also readable, tweakable code. Many of these tiny JS toys aren't things you can really do anything with; this example is a great basis for learning.
Glad you liked it. It should be far more readable, though; the constraint here was code size in bytes, so even the non-minified code (http://gabrielgambetta.com/tiny_raytracer_full.js) is quite hacky (especially with the global vars, the horrible "tImageData" hack, the assignments within if conditions, putting all the sphere data in a flat array and playing with the indices,...)
Those "[anything] in [n] lines of JavaScript" pieces are a great way to dive into the code in order to learn. It is usually much easier to read through 30 lines than through 3000 lines. So it's probably not very important that these projects are not finished products.
I'm currently trying to write a raytracer in Python (I know, I know) but I'm already at like 200 lines and no reflections, refraction, shadows or light sources :(
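If it helps, the core of such a raytracer is just the ray-sphere intersection; here's a minimal sketch (in JavaScript to match the demo, but it translates to Python line for line; the names are mine):

```javascript
// Ray-sphere intersection: the workhorse of a minimal raytracer.
// Ray: origin O + t * direction D (D normalized). Sphere: center C, radius r.
// Returns the nearest positive t along the ray, or null if the ray misses.
function intersectSphere(O, D, C, r) {
  const oc = [O[0] - C[0], O[1] - C[1], O[2] - C[2]];
  const b = 2 * (oc[0] * D[0] + oc[1] * D[1] + oc[2] * D[2]);
  const c = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - r * r;
  const disc = b * b - 4 * c;           // a = 1 because D is normalized
  if (disc < 0) return null;            // missed the sphere
  const t = (-b - Math.sqrt(disc)) / 2; // nearer of the two roots
  return t > 0 ? t : null;
}

// Ray from the origin straight toward a unit sphere centered 5 units away:
console.log(intersectSphere([0, 0, 0], [0, 0, 1], [0, 0, 5], 1)); // 4
```

Shading, shadows and reflections are all built on repeated calls to this: shoot a ray, find the closest hit over all spheres, then recurse from the hit point.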
This is one of the coolest things about JavaScript, IMHO: you can see the result and interact with it directly in your browser. Right in this thread you'll find someone tweaking the code for supersampling, and you can apply the change yourself and see the result without hassle.
This creates an uncontrollable and very positive ripple effect.
Impressive, especially since it is still fairly readable. On the other end of the spectrum, I give you raytracing on the back of a business card, in C++.
I think this is awesome. However, I think it would be even more awesome to get a walkthrough of the code and the thought process behind minifying it in a blog post. The past days with all the jsfiddles have been nice, so don't get me wrong on that part. I just think it adds more value if the code is somewhat explained (I know ray tracing is explained in tons of places). Just my two cents.
Change the "w" variable to > 9000 and it completely kills Safari; you can't even close the tab or the browser. Sweet demo though, I remember being amazed the first time I saw this sort of thing.
Awesome! I've tried to look into building a simple raytracer many times, but either the math got me every time or I just didn't come across the right "raytracer for dummies" tutorials.
For many years I also tried to build a simple ray tracer. The math got me too, but in the end I was able to write different types of render engines.
What helped me:
* learn about vectors
* learn about (vector) normalization
* learn about camera models http://www.ventrella.com/Ideas/Camera/Arm_Camera.pdf
* learn about pixels (pixels are just dots without size)
* use a 3D coordinate system that works for you (mine is: x = right, y = forward, z = up)
* start with a simple camera model (always looking forward)
* normalize all direction vectors
* place the screen of pixels 1 unit in front of the camera
* normalize the screen of pixels
* know that the center pixel is on the forward vector of the camera
* try to calculate a vector from the camera position to the top left of the pixel screen in front of the camera
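Those steps, sketched with the same conventions (x = right, y = forward, z = up; forward-looking camera; screen 1 unit ahead; the function and variable names are illustrative):

```javascript
// Generate a primary ray for pixel (px, py) on a w x h image, using the
// conventions above: camera at the origin looking down +y, screen 1 unit ahead.
function primaryRay(px, py, w, h) {
  // Map the pixel to [-0.5, 0.5] on the screen plane (center pixel -> forward).
  const x = (px + 0.5) / w - 0.5;
  const z = 0.5 - (py + 0.5) / h;   // flip so +z is up
  const d = [x, 1, z];              // screen plane sits 1 unit in front
  const len = Math.sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
  // Normalize the direction, as the checklist insists.
  return { origin: [0, 0, 0], dir: [d[0] / len, d[1] / len, d[2] / len] };
}

const center = primaryRay(49.5, 49.5, 100, 100);
console.log(center.dir); // [0, 1, 0]: the center pixel looks straight forward
```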
I'd buy it. Please post the link once it's done, or if you want any proofreaders or people to type in the code to check for typos, I'd be happy to help.
(On a not-particularly-raytracing-related note, I can really recommend the lesson about colour spaces to anyone interested; it was the first time the point of CIE xy and sRGB became clear to me. Also, for anyone wondering why the colours in f.lux / Redshift shift the way they do, check the "blackbody radiation" chapter.)
There's plenty of links on the web, but if you're really serious (it can be quite addictive writing one), get hold of the "Raytracing/Rendering bible": Physically Based Rendering by Matt Pharr and Greg Humphreys.
There's also a plethora of freely available research on cutting-edge techniques to read.
Regarding minification, as someone else asked: I originally wrote this for a JS1K competition some years ago, so it's not geared towards readability, but towards small code size.
I had a really, really ugly trick that saved me a couple of bytes, which I removed before posting to HN. I noticed I was using canvas.getImageData() and canvas.putImageData(). So I did this: `var tID = "tImageData"; canvas["ge"+tID](); canvas["pu"+tID]();` You can see that in the original source: http://gabrielgambetta.com/tiny_raytracer_full.js
Yes, that is (my 5-year-old idea of) ZX Spectrum Basic. It follows the format of the listings published in Microhobby (http://microhobby.speccy.cz/mhf/031/MH031_08.jpg - in particular, look at the "NOTAS GRAFICAS" sidebar). I grew up with that :) My programming has improved; my drawing abilities, not so much.
That is before minifying. The variable name "math" was chosen for readability in the non-minified version. It can then be renamed during the minification process to a short identifier, in this case "E". Since the Math object is used quite a lot, it is worth the assignment.
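For the curious, the trick is just this (identifier chosen to match the comment above):

```javascript
// One assignment aliases the Math object; every later use then costs
// "E." instead of "Math.", saving bytes once it's used a few times.
var E = Math;
console.log(E.sqrt(16) + E.max(1, 2)); // 6, same as Math.sqrt(16) + Math.max(1, 2)
```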
I don't really understand why these things are considered impressive. All of the technically challenging bits are handled by the enormous stack of technology involved.
Granted, the JS is highly size-optimised, and that probably required some reasonable effort, but I see nothing clever here at all. Also, the fact that it's 35 lines is a weird measure - I can see ways to reduce that trivially - why isn't everything to do with the render baked into the final loop?
The byte count is a more useful measure.
4k/64k executable demos are much more impressive - even though there is a stack of technology underneath, you have to do things like butcher your C/C++ standard libraries or not use them at all...
> I don't really understand why these things are considered impressive. All of the technically challenging bits are handled by the enormous stack of technology involved.
For one thing, it's a reminder of how awesome the technology stack that we all take for granted is.
To be fair, it's not exactly a "raytracer", it's a program capable of rendering that specific scene (i.e. bunch of spheres) using a recursive raytracing algorithm, which isn't awfully complex in itself. Not trying to say it's bad or anything, just want to point out that it's not very surprising for something like this to be under 100 loc in a high-level language.
A tiny tip coming from a friendly place, if I may: your comment comes off as slightly pedantic and elitist. I'm sure it was not your intention to put down the original author of this demo, who is probably very excited about what he worked hard on, but rather to highlight how deep a field raytracing is and to encourage him not to be content with this first venture, and to explore more of it.
A good way to do that is to write your comment from a "Yes, and..." perspective, rather than a "Yes, but..." perspective. In one case you're showing the author how tall the wall in front of him is; in the other, you show him how you yourself climbed that wall when you were in his position.
and you find a blog with 1.5 semi-coherent posts and a shitty sidescroller? :-)
I'm not criticizing the author or his work, I'm just saying that writing a recursive function that does p + kv recursively shouldn't take a lot of code. Do I need to end each comment with a smiley face to prevent it from being taken the wrong way?
I agree with you, it's not a very complicated thing. But that's the beauty of it - raytracers are simple but produce stunning images (it is a raytracer, though - it traces rays! Plus it does ambient + point lights, reflections, shadows... the kind of stuff raytracers do)
It doesn't take a lot of code, sure. But "a cross-platform game framework in 100,000 lines of C++" is not the current fad in HN ;)
I don't think it's a matter of adding a smiley at the end of each comment (although I love smileys, so I do that too :-)
I think it's more a matter of saying "Yes, good job on figuring out how to do p + kv recursively! Here are links to {papers|books|lectures|other code} that go further if you want to explore this topic more" rather than saying "Oh, you figured out how to do p + kv recursively, just like millions of other CS freshman. Good for you".
It's the little details that make online communities pleasant :)
To be fair, you are wrong. It is totally a ray tracer: there's a variable at the top that describes the scene, and you can hack the positions and see how the scene changes. It's also proper ray tracing as far as the algorithm used to cast rays goes: it finds the closest intersection and so forth. The most obvious limitation is just the support for a single kind of geometric primitive: the sphere.
Well, to answer that question, let me make an analogy. There's no required set of features for calculators either.
Technically a tiny box that can only add two numbers can be considered a calculator, since it calculates. I guess we might call it so, but it isn't the same, is it?
It's just that people tend to think about povray and renderman when "raytracers" are mentioned, and thus, fitting a raytracer into 30 lines of javascript may seem impossible, while actually it's entirely possible because the core idea of the algorithm is quite simple - this is what this demo shows and it is exactly the reason it's a good demo.
>It's just that people tend to think about povray and renderman when "raytracers" are mentioned, and thus, fitting a raytracer into 30 lines of javascript may seem impossible
Not really. At least neither I, nor (I'd say) the HN crowd, would expect something like povray or renderman given the title of the article.
What we'd expect, an implementation of a ray tracing algorithm rather than a full-featured program, is exactly what we got.
link: http://www.gabrielgambetta.com/tiny_raytracer.html