I remember reading a very similar article a few years back (4-5 years ago): the text "written" with spheres, some of the images, fitting it on a business card.
Edit: googling "ray tracer on a business card" gave me this link[0] which is what I had read before.
But actually reading (and not skimming) the submitted link gave me this paragraph:
> Sometime around September 2009, I posted a version of my program to the now-defunct ompf.org (whose demise is lamented here among other places), a web forum dedicated to the real-time raytracing community.
So that's why it sounded so familiar. Oh, and the "spher-y" text was just the first image, which is also cited in the other post from 2013 that Google found.
A shout-out to my former officemate Paul Heckbert who wrote the first business card ray tracer. He also wrote a ray tracer in PostScript when we realized that the processor inside our laser printer was more powerful than the processor inside our real computer. Needless to say, the PostScript ray tracer was epically slower.
Author here - woke up this morning and saw a big bump in site traffic which led me back to HN -- nice to see folks are reading! Happy to answer any questions here or in the Disqus comments on my blog.
Raytracers are really cool! On the other side of the (not so minimal or efficient) spectrum, I made my own JavaScript-based raytracer using canvas and web workers:
I can also recommend the more in-depth, Academy Award winning book "Physically Based Rendering: From Theory to Implementation" [1] by Matt Pharr and Greg Humphreys (and Wenzel Jakob for the soon-to-be-released third edition). The book has nearly 1200 pages, though, so unlike Shirley's book you probably won't be able to read it in a single weekend. It covers both the theory and the implementation of rendering techniques which are close to the state of the art, and it provides many pointers to interesting papers.
I have a question about this book. How are you supposed to follow along with it? Peter Shirley's book encourages the reader to program the renderer along with the explanations. It doesn't seem like that is possible with PBRT, but I might be misunderstanding something.
PBRT is big because it is written using literate programming and therefore includes (almost[1]) the entire source code for the ray tracer in print, which arguably makes it easier to follow than other books on the topic.
The PBRT ray tracer is also a production quality ray tracer and uses many advanced techniques you won't find in hobby ray tracers, such as differential geometry, which naturally increases its size.
There are exercises at the end of each chapter. Most exercises ask you to implement something new in pbrt, often something described in a (SIGGRAPH) paper. Pbrt is very modular and it's quite easy to add a new acceleration structure, a new shape, a new light, etc. I haven't done these exercises myself though, I mostly use the book as a reference while implementing my own renderer.
pbrt is a raytracer's code + a book to explain the code and the maths involved. You don't really need the code to read and understand the book. It's an amazing idea and the award is truly deserved. The point is not to teach you how to _write_ a raytracer but to explain how a production raytracer _works_.
My apologies. I'm so used to having rip-offs like Elsevier and such in academia that I assumed that you (the author) were also in the same predicament.
When the search on the common academic piracy pages showed this book, I assumed that I was only harming the parasitic journal companies. Evidently, I wasn't :/ For that, I apologise.
The publishing industry has gotten worse in a lot of ways in the ~10 years since we did the first edition. The whole for-profit academic journals thing makes me sad in general, etc.
However, it's also impressive how much work how many people put in to publish a book (illustrators, proof-readers, copy editors, professional layout, etc., etc.). For a specialist book like PBR that sells ~1500 copies a year, no one involved is making a ton of money from the process including, as far as I can figure, the publisher.
Even though the most important reason I've done my part of that work is to try to spread knowledge as widely as possible, that all does make me want to do my small part to discourage piracy. :-)
I keep meaning to read this, but the price tag (~$100) keeps putting me off for something I'd just read mostly for enjoyment. One of these months I'll bite the bullet and order it.
The investment of effort and time you would need to put into the study of a serious book, whether for enjoyment or self-education, especially of an over 1000-page book like this one, seems to me incomparably bigger than the few dozen dollars you would have to pay for it. If, on the other hand, all you want is to have it on your bookshelf - just in case - then, of course, the book may end up being merely an expensive souvenir.
Eh, it's a leisure activity for me: I read Principles and Practice for fun. It's a question of whether I want to spend a good chunk of this month's leisure budget on this book. It's out of impulse-purchase range, so I keep putting it off. One of these days... :)
It is well worth the money. I still have my first edition copy on my desk, the second edition on my nightstand and the money ready for the third edition. It easily replaces several other, pricey books.
The book is expensive, but it's worth every dollar in my opinion. I'd recommend waiting a few months though, the third edition will be released before the end of year.
Since we're on the subject ... is real-time tracing feasible in the next decade? i.e. if we had a load of cores, can we parallelize the heck out of it?
>> is real-time tracing feasible in the next decade?
It's feasible today, it just depends on what quality level, scene complexity, and frame rate you're looking for. I can trace the standard bunny model with 2 light sources in a 640x480 window at >10 FPS on my dual core AMD64 from 2005. The problem is that we want better surface models, global illumination, and higher resolution. OTOH, we can get 8 core 3GHz+ processors today, so that makes simple renderings go pretty well. You should be able to render very complex geometry at HD resolution without any lighting effects at very interactive rates, but if that's all you want, just throw triangles at OpenGL.
Regarding your last point .. I thought the point of ray tracing is that you get a lot more realistic images than with OpenGL (at least code that is not significantly optimized). If you want 60fps VR, that's 16ms of latency for everything including rendering. In fact, if the user moves their head, there might even be a tighter deadline (I don't know what the number is but I think this is called motion-to-photon latency).
>> Regarding your last point .. I thought the point of ray tracing is that you get a lot more realistic images than with OpenGL
That's correct - you CAN get more realistic images. I guess what I was trying to say is that the better (photo realistic) rendering quality is not real-time yet on high resolution displays, but simple rendering using ray tracing techniques can be near real time. But if all you want is simple rendering with phong shading you're probably not going to bother with ray tracing.
> Regarding your last point .. I thought the point of ray tracing is that you get a lot more realistic images than with OpenGL
With basic (i.e. Whitted) ray tracing, you get shadows, reflection, and refraction. You can do that in OpenGL too, but it's more work and you have to do go through some unnatural contortions and/or use approximations that might look convincing but aren't physically accurate.
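The Whitted-style recursion described above can be sketched in a few dozen lines. This is a hypothetical, heavily simplified illustration (1 sphere, 1 directional light, grayscale "radiance"), not code from the submitted article: a primary ray finds the nearest hit, a shadow ray tests visibility toward the light, and a reflection ray recurses.

```cpp
// Minimal sketch of Whitted-style ray tracing. The scene (one unit sphere at
// the origin, one directional light) and all constants are illustrative only.
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(Vec3 b) const { return x * b.x + y * b.y + z * b.z; }
};

// Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for the nearest t > 0.
// Assumes d is unit length.
bool hitSphere(Vec3 o, Vec3 d, Vec3 c, double r, double& t) {
    Vec3 oc = o - c;
    double b = oc.dot(d);
    double disc = b * b - (oc.dot(oc) - r * r);
    if (disc < 0) return false;
    t = -b - std::sqrt(disc);
    return t > 1e-6;
}

// Returns a grayscale "radiance" for the ray: ambient + diffuse + reflection.
double trace(Vec3 o, Vec3 d, int depth) {
    const Vec3 center{0, 0, 0};
    const double radius = 1.0;
    const Vec3 lightDir{0, 0, 1};            // direction toward the light
    double t;
    if (!hitSphere(o, d, center, radius, t))
        return 0.1;                          // background
    Vec3 p = o + d * t;
    Vec3 n = (p - center) * (1.0 / radius);  // unit normal on the sphere
    double diffuse = std::max(0.0, n.dot(lightDir));
    // Shadow ray: with one object, only the sphere's far side can occlude.
    double ts;
    if (hitSphere(p + n * 1e-4, lightDir, center, radius, ts))
        diffuse = 0.0;
    double color = 0.2 + 0.6 * diffuse;
    if (depth > 0) {                         // reflection: recurse per bounce
        Vec3 r = d - n * (2.0 * d.dot(n));
        color += 0.2 * trace(p + n * 1e-4, r, depth - 1);
    }
    return color;
}
```

A rasterizer gets the same three effects only via shadow maps, environment maps, screen-space tricks, and the like; here they fall out of the same intersection routine.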
Soft shadows, focal blur, and motion blur can be supported by tracing more rays per pixel.
The big leap in realism (the kind where you would say ray tracing is definitely better than what you would see in a modern game) comes when you add global illumination, which is computationally a lot harder than basic ray tracing because it requires a large number of rays per pixel. It works by random sampling, so you can generate a blurry, grainy image fairly quickly, but noise-free images take a lot longer.
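That "blurry, grainy first, clean later" behavior is just Monte Carlo convergence: the standard error of a pixel estimate falls as 1/sqrt(N) in the sample count, so halving the noise costs 4x the rays. A small stand-in demo (estimating the area of a quarter disc, true value pi/4, rather than an actual light-transport integral) shows the effect:

```cpp
// Why path-traced images start grainy: each pixel is a Monte Carlo estimate
// whose noise shrinks only as 1/sqrt(N). We estimate a stand-in "pixel
// integral" (area of a quarter disc = pi/4) and watch the error fall as the
// sample count grows. This is illustrative, not a renderer.
#include <cassert>
#include <cmath>
#include <random>

double estimate(int numSamples, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int inside = 0;
    for (int i = 0; i < numSamples; ++i) {
        double x = u(rng), y = u(rng);
        if (x * x + y * y < 1.0) ++inside;  // sample lands in the quarter disc
    }
    return double(inside) / numSamples;      // noisy estimate of pi/4
}
```

With 16 samples the estimate is visibly off; with a million it is tight — and that 1/sqrt(N) rate is exactly why the last bit of noise cleanup in a path tracer takes so long.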
It's been feasible for about a decade on modest hardware, just not at super-high resolutions and framerates that most people expect from modern GPUs. (Outbound was a sort of a technology demo/student project game that I remember being sluggish but playable on a Core 2 Duo.)
Also, plain raytracing can look kind of bland. Most global illumination algorithms are based on ray tracing, but require tracing a very large number of rays. So, really the question is more whether we can get to real-time path tracing, which is a harder problem.
Another problem is tooling. Game developers know how to get good performance on GPUs, but ray-tracers have completely different performance characteristics. Re-building a bounding volume hierarchy is, in general, O(NlogN), so you have to be careful about partitioning your scene into things that move/deform and things that don't and only rebuild the parts of the BVH that need updating.
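The static/dynamic split mentioned above works because a deforming mesh usually doesn't need a full O(N log N) rebuild: if the tree topology is still usable, you can refit every node's bounding box bottom-up in O(N). A minimal sketch, with a hypothetical flat-array node layout and 1-D "boxes" to keep it short (not taken from any real engine):

```cpp
// Refit vs. rebuild: when geometry deforms but the BVH topology stays usable,
// update bounds bottom-up in O(N) instead of rebuilding in O(N log N).
// The node layout and 1-D AABBs here are illustrative only.
#include <algorithm>
#include <cassert>
#include <vector>

struct AABB { double lo, hi; };  // 1-D boxes keep the sketch short

struct Node {
    int left = -1, right = -1;   // child indices; -1 means leaf
    int prim = -1;               // primitive index if leaf
    AABB box{0, 0};
};

// Bottom-up refit: leaves take their primitive's new bounds, and each
// interior node takes the union of its children's boxes.
void refit(std::vector<Node>& nodes, int idx, const std::vector<AABB>& prims) {
    Node& n = nodes[idx];
    if (n.left < 0) { n.box = prims[n.prim]; return; }  // leaf
    refit(nodes, n.left, prims);
    refit(nodes, n.right, prims);
    n.box.lo = std::min(nodes[n.left].box.lo, nodes[n.right].box.lo);
    n.box.hi = std::max(nodes[n.left].box.hi, nodes[n.right].box.hi);
}
```

The catch is that refitting never changes topology, so after large deformations the boxes overlap badly and ray traversal slows down — which is why engines typically refit for a while, then rebuild (or rebuild only the dynamic subtree).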
On the open-source side, I know that Blender uses Cycles, which re-renders the ray-traced preview in the viewport on each change, and it has done so for quite a while already.
Blender only runs raytraced previews when you set the view mode to "Rendered," not in normal operation. It's tough to work in because you don't get any of the usual UI like selection highlight; it's just a straight render defaulted to relatively low quality.
Great for a quick preview though, especially while setting up lighting. And if you've been good with naming your objects you can always select things out of the document tree.
The problem is that as ray tracing advances, so do the hacks on top of rasterization to enable more realisticish images. So rasterization keeps winning. Someday, though...
The computation is, but the scene description isn't. In the general case, any ray can hit any object from any intersection, which means effective use of cache lines is the hard bit.
Must admit, when I first saw the url I started wondering how on earth Mark Zuckerberg has the time to be writing raytracers. Made more sense when I scrolled to the bottom.
Can someone comment on the appeal or utility of a raytracer when path-tracing produces infinitely better results in offline, and real-time path-tracing has started to appear (and will likely be mainstream in a matter of years)?
Path-tracing is basically a different rendering strategy that's built on ray-tracing, so if you write a ray tracer it's not too hard to turn it into a path tracer later. Ray tracers are generally simpler and easier to write; it was probably also easier for the author to fit a ray tracer on a business card.
If you're interested in a non-minimal, production-quality, blazingly fast Apache 2-licensed raytracer, you could do worse than Intel's Embree project: https://embree.github.io
Worth noting that Embree is a ray tracer in a different sense of the word: It's a library to accelerate the tracing of rays only. All other behavior (such as light transport, materials, etc.) is left to the user. They do have an example renderer implemented in it though.
As for open source path tracing renderers, some well known ones are:
I highly recommend checking out the winning entry of the 2016 ShaderToy size coding competition - they implement a full raytracer in 584 chars https://www.shadertoy.com/view/4td3zr
That's an apples-and-oranges comparison. Not that it isn't impressive in its own way, but having a huge library of built-in vector math and graphics primitives makes it a lot easier than doing it in a language like C.
A few months back was the first time I ever endeavored to make code smaller regardless of readability. It was for the home page of my website; it took hours to get it under 1 MB, but it felt very rewarding.
That's when I ran into the second lesson mentioned in this article: it doesn't render fast. In my case, even though all the assets are downloaded, it's still too heavy.
Either way, when you get time to work on this type of stuff it can be pretty fun.