It uses a Lisp-based scene description language (with macros!) and WebRTC to form a P2P network of compute nodes, entirely in the browser, with near-native performance thanks to dynamic compilation to AsmJS.
edit: I reproduced your scene, give it a bit to render.
edit: Wow, you have a lot of neat scenes!
edit: And here you go, rendered at ~2.5 million samples a second. Thanks JumboCargoCable, whoever you are! (You can set your nick in the settings menu accessible via the gear icon in the top left.) https://i.imgur.com/UvdBhq1.jpg and scene http://bit.ly/2yYciCS though I think I made it too bright.
edit: Some people appear to have buggy systems that always return black pixels. :-(
edit: Could whoever is SilkyDoorGame please post their CPU, OS, and browser?
If you email hn@ycombinator.com we'll send you a repost invite. I don't want to do it now because once a particular theme (in this case ray tracers) has made the front page it's usually not a good idea to post another one too soon.
Yeah, in retrospect my mistake was to bet on a single announcement post instead of a series of smaller posts over the months I developed it. That's the sort of thing you only realize is a mistake after you've made it, though.
It's not strictly speaking a "scene description language", it just looks like that at first glance.
It's a fully capable compiled programming language, which I happen to have written a raytracer in. Check out the "pathtrace" tab.
The advantage is if there's some issue with the raytracer, you can fix it yourself. And you can use arbitrary scripts for making scenes. (Though the scene in memory must not exceed 32MB, which may limit you somewhat.)
The (R, G, B) values that you calculate are physical quantities (an amount of photons, or Watts). But when you double the number of photons, the human eye sees it as roughly 40% brighter, not 2x brighter.
The RGB values that you give to the monitor (through the canvas) were designed around the human eye's response, so rgb(40,40,40) looks 2x brighter than rgb(20,20,20).
Long story short: when you have your physical Red value between 0 and 1, do Red = Math.pow(Red, 1/2.2);
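The comment above uses JS's Math.pow; the same gamma encoding in Python/NumPy, as a minimal sketch (function name and clamping are my own, not from the article):

```python
import numpy as np

def to_srgb(linear):
    # Gamma-encode a linear radiance value for display.
    # Clamps to [0, 1] first, then applies the 1/2.2 power curve.
    return np.clip(linear, 0.0, 1.0) ** (1.0 / 2.2)

# Doubling the physical energy brightens the displayed value
# by a factor of 2**(1/2.2) ~= 1.37, i.e. ~37%, not 2x:
ratio = to_srgb(0.5) / to_srgb(0.25)
```

This matches the "40% brighter, not 2x" observation above: the ratio of encoded values for a 2x energy step is about 1.37.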
As the author of the original version, I'm very glad to see such an improvement!
My IPython Cookbook contains increasingly optimized versions with Cython as an illustration of how to use Cython to accelerate Python code. The fastest Cython version is 300x faster than pure Python; a lot of Python/NumPy overhead is bypassed by reimplementing the logic in, basically, C. The OpenMP multicore GIL-releasing version is roughly 4x faster than the fastest Cython version on a quadcore computer. (https://github.com/ipython-books/cookbook-code/tree/master/n...)
For anyone without any idea how to do this, Ray Tracing in a Weekend is a good introduction.
It teaches you to produce the image on the cover in a pretty short time.
https://www.amazon.com/gp/product/B01B5AODD8
Yeah, that is definitely a cool book. A few months back I started to go through it, doing the implementation in Racket, but got distracted with other things.
The code should run directly in DrRacket without modification, but I obviously didn't finish the book to get to the cover picture.
The C++ code in the book is pretty straightforward (he leans on practicality), so it is kind of fun to directly port it to a language and then slowly change it to be idiomatic in the target language. I was learning a lot about Racket (my first project in it) in the short time I was going through the book. I need to get back to it...
While cool, it should be pointed out that this way of ordering things doesn't really scale with scene complexity (more objects, more complex triangle meshes requiring acceleration structures) or image size: the number of masks required to determine visibility quickly becomes prohibitive.
One of the great things about raytracing (at least the basics, before you get to more complicated light transport) is how simple the normal recursive algorithm for rendering a scene is. The method in the article complicates that greatly with the mask passes; I guess it could be termed a wavefront renderer.
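For reference, the classic recursive form really is short. A minimal sketch (not the article's code; the sphere tuples, colors, and mixing weights here are my own illustrative assumptions):

```python
import numpy as np

def intersect_sphere(O, D, center, radius):
    # Distance t along the ray O + t*D to the sphere, or inf on a miss.
    # Assumes D is unit length, so the quadratic's leading coefficient is 1.
    OC = O - center
    b = 2.0 * np.dot(D, OC)
    c = np.dot(OC, OC) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return np.inf
    t = (-b - np.sqrt(disc)) / 2.0       # nearer intersection
    return t if t > 1e-6 else np.inf

def trace(O, D, spheres, depth=0, max_depth=3):
    # The whole recursive algorithm: find the nearest hit,
    # shade locally, then recurse along the mirror direction.
    if depth > max_depth:
        return np.zeros(3)
    hits = [(intersect_sphere(O, D, ctr, rad), ctr, col)
            for ctr, rad, col in spheres]
    t, center, color = min(hits, key=lambda h: h[0])
    if not np.isfinite(t):
        return np.array([0.1, 0.1, 0.3])           # background (assumption)
    P = O + t * D                                   # hit point
    N = (P - center) / np.linalg.norm(P - center)   # outward surface normal
    R = D - 2.0 * np.dot(D, N) * N                  # mirror reflection direction
    return 0.7 * color + 0.3 * trace(P, R, spheres, depth + 1)
```

The wavefront approach trades this directly readable recursion for batched mask bookkeeping, which is exactly the complication the comment is pointing at.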
I wrote a pure python (No Numpy) ray tracer as a learning exercise. Spoiler: it was slowwwwwwwwwww.
I converted to NumPy and it was just slow. I then went to array broadcasting (which required surprisingly few code changes due to NumPy being pretty awesome) and it became fast.
__s basically covers it, but my initial switch from pure Python to NumPy just involved changing my vector 'class' (just a tuple in reality) into NumPy arrays. While the basic vector multiplications and additions became way faster, the overhead of creating hundreds of tiny NumPy arrays was a killer.
So, just like in the original article, instead of creating a single 1x3 array I created an Mx3 array, where M represented as many rays as I could fit into memory at once (I have quite a weedy machine).
Thanks to NumPy broadcasting, exactly the same code that, say, subtracts the origin from a single ray vector also works to subtract that single origin vector from a multidimensional array of ray vectors.
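A small sketch of that broadcasting point (the image-plane points here are made up for illustration):

```python
import numpy as np

O = np.array([0.0, 0.0, -1.0])          # single camera origin, shape (3,)
targets = np.array([[-1.0, -1.0, 0.0],   # hypothetical points on the image plane
                    [ 1.0, -1.0, 0.0],
                    [-1.0,  1.0, 0.0],
                    [ 1.0,  1.0, 0.0]])  # shape (M, 3), one row per ray

D_one   = targets[0] - O                 # one ray direction, shape (3,)
D_batch = targets - O                    # identical expression; O is broadcast
                                         # across all M rows, shape (M, 3)

# Batched normalization: keepdims keeps the norms as an (M, 1) column
# so the division broadcasts back over the (M, 3) directions.
D_batch = D_batch / np.linalg.norm(D_batch, axis=1, keepdims=True)
```

That is why the conversion needed surprisingly few code changes: `targets - O` is the same source text whether `targets` is one vector or a million.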
Anyone besides me disturbed that one of the code samples had a function that took 3 parameters, 2 of which were 'O' and 'D'?
I had to look at it a few times before I realized those were different variables.
It's been a long-o time since I did any 3d physics or rendering code (in C), maybe even since before the WWW, but I don't remember this convention... (I mean - sure - it makes sense, but I don't recall the two 3-space triples being necessarily called that even in things like GL)
OpenGL doesn't do raytracing where you have a ray origin and direction though, so you wouldn't have seen it there.
Using the full terms or shortening them to Dir and Orig are more conventional in my experience.
It gets even more fun when you get to evaluating BSDFs for materials and you have variables like wi, wo, and different people use them in different ways :)
Out of curiosity, I did a little digging around. I really need to take a step back and fully understand what is going on with the code, but a quick trot through cProfile showed a lot of time spent in the dot method of vec3, primarily via abs(self) under the norm method.
There's a useful Python library called numexpr (http://numexpr.readthedocs.io/) which can speed up NumPy operations by leveraging multiple cores, etc. (and Intel's VML library if you have it installed). It's one I've been aware of but never got around to trying out.
At a quick stab, it seems like it can't take properties of classes? Either way, modifying dot a bit:
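The code sample didn't survive here, so this is only a hypothetical reconstruction of the kind of change: a vec3-style dot product rewritten so numexpr evaluates the whole expression at once. The operands are passed as plain local arrays, since (as noted above) numexpr can't read properties of classes:

```python
import numpy as np
import numexpr as ne

def dot_numpy(ax, ay, az, bx, by, bz):
    # Plain NumPy: every * and + allocates a full temporary array.
    return ax * bx + ay * by + az * bz

def dot_numexpr(ax, ay, az, bx, by, bz):
    # numexpr compiles the expression string and evaluates it in
    # cache-sized blocks, potentially across multiple cores; it picks
    # up ax..bz from the local scope by name.
    return ne.evaluate("ax * bx + ay * by + az * bz")

# Two batches of a million 3-vectors, components stored as rows.
a = np.random.rand(3, 1_000_000)
b = np.random.rand(3, 1_000_000)
```

For small arrays the per-call compilation/dispatch overhead dominates, which is consistent with the 400x400 vs 2000x2000 timings reported below.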
At 400x400 this slows things down a little. Once you get above about 800x800 it starts to draw equal. By the time you get to 2000x2000 it's shaving some 10% off the execution time.
edit: here's a quick stab at using numexpr at just the most obvious places, without trying to consider major code refactoring. Note this bumps up the resolution to 2000x2000
Are the reflections in examples like this physically correct? My brain kind of expects the floor to be strongly curved when reflected in the sphere. Maybe it's just the unfamiliar, unrealistic environment?
Yes, for perfect mirror surfaces. The scene isn't very physically realistic, but the reflections are doing the right thing given the scene. The floor is strongly curved in the reflections. The reflection of the horizon line isn't, only because the camera is near the floor and looking almost level. If the camera were up higher looking down, the horizon reflection would be more strongly curved.
The simple answer is no, but unless there is an error the rays will at least be traced at the correct reflection angle. All the light you see bouncing off hard surfaces is some sort of reflection, and even for sharp reflections there are a lot of details.
One is Fresnel falloff towards edges, which is simple. Another is depth of field carrying into reflections, which is more difficult and not common (yet).
I'm pretty sure the strong curve is because (normal) floors are finite planes, and in real life we usually don't look from so close to the sphere's equator.
Once I wrote a raytracer in Lisp based on pg's code in ANSI Common Lisp, so I've seen the results of changing camera positions.
The demos that come with Embree do simple stuff like this in real time, which would be about 450 times faster, so I wouldn't call this 'reasonably speedy'.
You could pack the raytracer and the scene into a GLSL fragment shader without any trouble whatsoever and achieve something greater than 30fps out of the gate.
Check out https://www.shadertoy.com/ - all of this crazy stuff is generated using two triangles and a fancy shader. Almost every 3D scene there is generated using some form of ray casting or ray tracing on the GPU.
After some time you can get this: http://renderer.ivank.net/balls.jpg :)
Edit: I am glad you like it! I also made this fully-GPU renderer (actually, it is a game): http://powerstones.ivank.net/