A minimal raytracer (mzucker.github.io)
395 points by ingve on Aug 18, 2016 | 61 comments



I remember reading a very similar article a few years back (4-5 years ago): the text "written" with spheres, some of the images, the fact that it was written on a business card.

Edit: googling "ray tracer on a business card" gave me this link[0] which is what I had read before.

But actually reading (and not just skimming) the submitted link gave me this paragraph:

> Sometime around September 2009, I posted a version of my program to the now-defunct ompf.org (whose demise is lamented here among other places), a web forum dedicated to the real-time raytracing community.

So that's why it sounded so familiar. Oh, and the "spher-y" text was just the first image, which is also cited in the other post from 2013 that Google found.


A shout-out to my former officemate Paul Heckbert who wrote the first business card ray tracer. He also wrote a ray tracer in PostScript when we realized that the processor inside our laser printer was more powerful than the processor inside our real computer. Needless to say, the PostScript ray tracer was epically slower.

https://www.cs.cmu.edu/~ph/


Since you didn't give a link, is it this one?

http://fabiensanglard.net/rayTracing_back_of_business_card/i...


Yes, that was the one -- I was a bit distracted and didn't notice that I didn't include the link. Can't edit my comment now.


Author here - woke up this morning and saw a big bump in site traffic which led me back to HN -- nice to see folks are reading! Happy to answer any questions here or in the Disqus comments on my blog.


Raytracers are really cool! On the other side of the spectrum (not so minimal or efficient), I made my own JavaScript-based raytracer using canvas and web workers:

https://github.com/jleppert/raytracing.js

It was based on the excellent introduction Ray Tracing in One Weekend (http://in1weekend.blogspot.com/2016/01/ray-tracing-in-one-we...).


While it has nothing to do with obfuscation, I can recommend Peter Shirley's blog http://psgraphics.blogspot.com/

and the "Build a renderer in a weekend"

http://in1weekend.blogspot.com/2016/01/ray-tracing-in-one-we...

I've recently been making a Julia version from the C++ source code

https://github.com/lawless-m/Jender


I can also recommend the more in-depth, Academy Award winning, book "Physically Based Rendering: From Theory to Implementation" [1] by Matt Pharr and Greg Humphreys (and Wenzel Jakob for the soon-to-be-released third edition). The book has nearly 1200 pages though so, unlike Shirley's book, you probably won't be able to read it in a single weekend. It covers both the theory and the implementation of rendering techniques which are close to the state of the art and it provides many pointers to interesting papers.

[1] http://pbrt.org/


I have a question about this book. How are you supposed to follow along with it? Peter Shirley's book encourages the reader to program the renderer along with the explanations. It doesn't seem like that is possible with PBRT, but I might be misunderstanding something.


PBRT is big because it is written using literate programming and therefore includes (almost[1]) the entire source code for the ray tracer in print, which arguably makes it easier to follow than other books on the topic.

The PBRT ray tracer is also a production-quality ray tracer and uses many advanced techniques you won't find in hobby ray tracers, such as differential geometry, which naturally increases its size.

[1] Some utility classes are left out.


There are exercises at the end of each chapter. Most exercises ask you to implement something new in pbrt, often something described in a (SIGGRAPH) paper. Pbrt is very modular and it's quite easy to add a new acceleration structure, a new shape, a new light, etc. I haven't done these exercises myself though, I mostly use the book as a reference while implementing my own renderer.
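
To give a rough idea of what that modularity looks like, here's a generic sketch of the kind of interface involved. This is NOT pbrt's actual Shape class (see the book/source for that), and Aabb, Vec, Ray and Hit are placeholder types, but adding a shape to a renderer structured this way mostly comes down to implementing a bounds query plus a ray intersection test:

  // Hypothetical sketch, not pbrt's real interface.
  struct Shape {
    virtual ~Shape() {}
    virtual Aabb bounds() const = 0;                         // fed to the acceleration structure
    virtual bool intersect(const Ray &r, Hit &h) const = 0;  // nearest hit along r, if any
  };

  struct Sphere : public Shape {
    Vec center;
    double radius;
    Aabb bounds() const override;                        // axis-aligned box around the sphere
    bool intersect(const Ray &r, Hit &h) const override; // solve the ray-sphere quadratic
  };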


pbrt is a raytracer codebase plus a book that explains the code and the maths involved. You don't really need the code to read and understand the book. It's an amazing idea and the award is truly deserved. The point is not to teach you how to _write_ a raytracer but to explain how a production raytracer _works_.


For those who are poor and still wish to read this, go here:

REDACTED as per author's request


Hi, Kefka.

I'm the author of the book.

Could you please not post links to pirated versions of it on HN?

Thanks, Matt


My apologies. I'm so used to rip-offs like Elsevier in academia that I assumed you (the author) were in the same predicament.

When the search on the common academic piracy pages showed this book, I assumed I was only harming the parasitic journal companies. Evidently, I wasn't :/ For that, I apologise.


No worries!

The publishing industry has gotten worse in a lot of ways in the ~10 years since we did the first edition. The whole for-profit academic journals thing makes me sad in general, etc.

However, it's also impressive how much work, and how many people, it takes to publish a book (illustrators, proof-readers, copy editors, professional layout, etc., etc.). For a specialist book like PBR that sells ~1500 copies a year, no one involved is making a ton of money from the process, including, as far as I can figure, the publisher.

Even though the most important reason I've done my part of that work is to try to spread knowledge as widely as possible, that all does make me want to do my small part to discourage piracy. :-)


I keep meaning to read this, but the price tag (~$100) keeps putting me off for something I'd just read mostly for enjoyment. One of these months I'll bite the bullet and order it.


The investment of effort and time you would need to put into studying a serious book, whether for enjoyment or self-education, especially a 1000-plus-page book like this one, seems to me incomparably bigger than the few dozen dollars you would have to pay for it. If, on the other hand, all you want is to have it on your bookshelf just in case, then of course the book may end up being merely an expensive souvenir.


Eh, it's a leisure activity for me: I read Principles and Practice for fun. It's a question of whether I want to spend a good chunk of this month's leisure budget on this book. It's out of impulse-purchase range, so I keep putting it off. One of these days... :)


It is well worth the money. I still have my first-edition copy on my desk, the second edition on my nightstand, and the money ready for the third edition. It easily replaces several other pricey books.

If you are an ACM member, it is also available to you as an eBook: http://learning.acm.org/books/book_detail.cfm?id=1854996&typ...


The book is expensive, but it's worth every dollar in my opinion. I'd recommend waiting a few months though; the third edition will be released before the end of the year.


Thanks for the suggestion, I'll keep an eye out.


Seconded, his "Ray Tracing in One Weekend" is great. Simple, and so satisfying once you start producing images.


Since we're on the subject ... is real-time tracing feasible in the next decade? I.e., if we had a load of cores, could we parallelize the heck out of it?


>> is real-time tracing feasible in the next decade?

It's feasible today; it just depends on what quality level, scene complexity, and frame rate you're looking for. I can trace the standard bunny model with 2 light sources in a 640x480 window at >10 FPS on my dual-core AMD64 from 2005. The problem is that we want better surface models, global illumination, and higher resolution. OTOH, we can get 8-core 3GHz+ processors today, so that makes simple renderings go pretty well. You should be able to render very complex geometry at HD resolution without any lighting effects at very interactive rates, but if that's all you want, just throw triangles at OpenGL.


Regarding your last point .. I thought the point of ray tracing is that you get a lot more realistic images than with OpenGL (at least compared to OpenGL code that is not significantly optimized). If you want 60fps VR, that's about 16ms of latency for everything, including rendering. In fact, if the user moves their head, there might be an even tighter deadline (I don't know what the number is, but I think this is called motion-to-photon latency).


>> Regarding your last point .. I thought the point of ray tracing is that you get a lot more realistic images than with OpenGL

That's correct - you CAN get more realistic images. I guess what I was trying to say is that the better (photorealistic) rendering quality is not real-time yet on high-resolution displays, but simple rendering using ray tracing techniques can be near real-time. But if all you want is simple rendering with Phong shading, you're probably not going to bother with ray tracing.


> Regarding your last point .. I thought the point of ray tracing is that you get a lot more realistic images than with OpenGL

With basic (i.e. Whitted) ray tracing, you get shadows, reflection, and refraction. You can do that in OpenGL too, but it's more work and you have to go through some unnatural contortions and/or use approximations that might look convincing but aren't physically accurate.

Soft shadows, focal blur, and motion blur can be supported by tracing more rays per pixel.

The big leap in realism (the kind where you would say ray tracing is definitely better than what you would see in a modern game) comes when you add global illumination, which is computationally a lot harder than basic ray tracing because it requires a large number of rays per pixel. It works by random sampling, so you can generate a blurry, grainy image fairly quickly, but noise-free images take a lot longer.
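
To make that concrete, here's a rough C++ sketch of the Whitted part and the jump to random sampling. This is my own sketch, not the article's code: scene_intersect, scene_occluded, background and light_pos are assumed helpers, and the Vec type is stripped down to the bare minimum.

  #include <algorithm>
  #include <cmath>

  struct Vec {
    double x, y, z;
    Vec operator+(Vec b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec operator-(Vec b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec operator*(double s) const { return {x * s, y * s, z * s}; }
  };
  double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
  Vec normalize(Vec v) { return v * (1.0 / std::sqrt(dot(v, v))); }

  struct Ray { Vec origin, dir; };
  struct Hit { Vec point, normal, albedo; double reflectivity; };

  // Assumed (hypothetical) scene helpers:
  bool scene_intersect(const Ray &r, Hit &h); // nearest hit, if any
  bool scene_occluded(const Ray &shadow_ray); // anything between point and light?
  Vec background(const Ray &r);               // color returned on a miss
  extern Vec light_pos;                       // single point light

  // Whitted-style recursion: direct light via one shadow ray, plus one
  // deterministic mirror-reflection ray per bounce. (A real implementation
  // would offset the hit point slightly to avoid self-intersection.)
  Vec trace(const Ray &r, int depth) {
    Hit h;
    if (!scene_intersect(r, h)) return background(r);

    Vec c = {0, 0, 0};
    Vec to_light = normalize(light_pos - h.point);
    if (!scene_occluded({h.point, to_light}))  // shadow test
      c = c + h.albedo * std::max(0.0, dot(h.normal, to_light));

    if (depth > 0 && h.reflectivity > 0) {     // mirror bounce
      Vec refl = r.dir - h.normal * (2.0 * dot(r.dir, h.normal));
      c = c + trace({h.point, refl}, depth - 1) * h.reflectivity;
    }
    return c;
  }

  // Global illumination replaces the single mirror ray with *random* bounce
  // directions and averages many samples per pixel; noise falls off roughly
  // as 1/sqrt(samples), which is why clean images take so much longer.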


It's been feasible for about a decade on modest hardware, just not at the super-high resolutions and framerates that most people expect from modern GPUs. (Outbound was sort of a technology demo/student project game that I remember being sluggish but playable on a Core 2 Duo.)

Also, plain raytracing can look kind of bland. Most global illumination algorithms are based on ray tracing, but require tracing a very large number of rays. So, really the question is more whether we can get to real-time path tracing, which is a harder problem.

Another problem is tooling. Game developers know how to get good performance on GPUs, but ray tracers have completely different performance characteristics. Rebuilding a bounding volume hierarchy is, in general, O(N log N), so you have to be careful about partitioning your scene into things that move/deform and things that don't, and only rebuild the parts of the BVH that need updating.
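
For what it's worth, a common trick is to build the static world's BVH once and give the moving/deforming objects their own small BVH that you refit every frame: refitting (recomputing boxes bottom-up over a fixed tree topology) is only O(N), versus O(N log N) for a rebuild that re-sorts primitives. A hedged sketch, where Aabb (an axis-aligned box) and merge(a, b) (the union of two boxes) are assumed helpers:

  #include <vector>

  struct BvhNode {
    Aabb box;
    int left = -1, right = -1;         // -1 marks a leaf
    int first_prim = 0, prim_count = 0;
  };

  // Recompute every node's box bottom-up without touching the tree topology.
  // Assumes each leaf holds at least one primitive.
  Aabb refit(std::vector<BvhNode> &nodes,
             const std::vector<Aabb> &prim_boxes, int i) {
    BvhNode &n = nodes[i];
    if (n.left < 0) {                  // leaf: union of its primitives' boxes
      n.box = prim_boxes[n.first_prim];
      for (int p = 1; p < n.prim_count; ++p)
        n.box = merge(n.box, prim_boxes[n.first_prim + p]);
    } else {                           // inner node: union of the two children
      n.box = merge(refit(nodes, prim_boxes, n.left),
                    refit(nodes, prim_boxes, n.right));
    }
    return n.box;
  }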


I remember reading a while ago some articles about Lucasfilm's real-time rendering work, planned for their future flicks. Actually, here's one of the articles: http://www.loopinsight.com/2013/09/24/lucasfilm-pushes-the-b...

For open source, I know that Blender uses Cycles, which fires off ray-traced renders in the normal views on each change, and it has been that way for quite a while already.


Blender only runs raytraced previews when you set the view mode to "Rendered," not in normal operation. It's tough to work in because you don't get any of the usual UI like selection highlighting; it's just a straight render defaulted to relatively low quality.

Great for a quick preview though, especially while setting up lighting. And if you've been good with naming your objects you can always select things out of the document tree.


The problem is that as ray tracing advances, so do the hacks on top of rasterization to enable more realisticish images. So rasterization keeps winning. Someday, though...


Judge for yourself with these nVidia demos

https://www.google.com/search?q=nvidia+real+time+raytracing&...



Oh, you young people. Next decade? Try 24 years ago. A little game called Wolfenstein 3D used raycasting.

(Yes I know the subtleties that distinguish ray[tracing|marching|casting].)


> (Yes I know the subtleties that distinguish ray[tracing|marching|casting].)

Even worse than not knowing them, you think you do and have no idea.



It seems to me that all physics simulation is highly parallelizable because, well, physics is parallel :)

But please correct me if I'm wrong.


Physics is also very sequential in time: you cannot know what will happen at time t+1 until you know exactly what happened at time t.


Not exactly true; there are cases where you can isolate islands of objects and be sure they won't interact.


The computation is, but the scene description isn't. In the general case, any ray can hit any object from any intersection point, which means effective use of cache lines is the hard bit.


There is also "Second Week" released which I greatly recommend and the author is also very responsive via email.


And also "For the rest of your life" which is part three.


I have been doing the same thing, albeit a bit more slowly, with Python :) It's great fun once you start seeing something tangible on the screen.

Then it becomes very difficult to stop ;)


Thanks!


Must admit, when I first saw the URL I started wondering how on earth Mark Zuckerberg has the time to be writing raytracers. It made more sense when I scrolled to the bottom.


You could also reformat the URL to see their profile.

https://mzucker.github.io/2016/08/03/miniray.html

becomes

https://github.com/mzucker

GitHub Pages uses the profile name as the github.io subdomain.


Also:

  >> Once I made a robot to play video games for me.



Can someone comment on the appeal or utility of a raytracer when path tracing produces infinitely better results offline, and real-time path tracing has started to appear (and will likely be mainstream in a matter of years)?

Also, a path tracer in 100 lines of C++: http://www.kevinbeason.com/smallpt/


I'm no graphics programmer, but I was under the impression that path tracing is a subset of ray tracing that came out of this paper: http://artis.inrialpes.fr/Enseignement/TRSA/CookDistributed8...

Is this not correct?


Distribution ray tracing is not exactly path tracing. Proper, physically correct path tracing was formulated by James Kajiya in this paper:

http://www.cse.chalmers.se/edu/year/2011/course/TDA361/2007/...


Path tracing is basically a different rendering strategy that's built on ray tracing, so if you write a ray tracer, it's not too hard to turn it into a path tracer later. Ray tracers are generally simpler and easier to write; it was probably also easier for the author to fit a ray tracer on a business card.
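
Roughly, the change looks like this. It's only a sketch with assumed Vec/Ray/Hit types and helpers (sample_cosine_hemisphere, scene_intersect, background, mul, Rng), in the spirit of smallpt rather than its actual code: instead of shooting one deterministic reflection ray, you shoot a random one and average lots of samples per pixel.

  // Hypothetical sketch of the ray tracer -> path tracer change. With
  // cosine-weighted hemisphere sampling, the cosine term and the sampling
  // pdf cancel, so each bounce is simply weighted by the surface albedo.
  Vec radiance(const Ray &r, int depth, Rng &rng) {
    Hit h;
    if (!scene_intersect(r, h)) return background(r);
    if (depth <= 0) return {0, 0, 0};

    Vec dir = sample_cosine_hemisphere(h.normal, rng);  // random, not mirror
    Vec indirect = radiance({h.point, dir}, depth - 1, rng);

    return h.emission + mul(h.albedo, indirect);        // mul = component-wise product
  }

  // One pixel = the average of many such random walks, which is where both
  // the extra realism and the noise come from.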


If you're interested in a non-minimal, production-quality, blazingly fast Apache 2-licensed raytracer, you could do worse than Intel's Embree project: https://embree.github.io


Worth noting that Embree is a ray tracer in a different sense of the word: it's a library that accelerates only the tracing of rays. All other behavior (such as light transport, materials, etc.) is left to the user. They do have an example renderer implemented with it, though.

As for open source path tracing renderers, some well known ones are:

  - LuxRender: http://www.luxrender.net/
  - Mitsuba: https://www.mitsuba-renderer.org/
  - Tungsten: https://github.com/tunabrain/tungsten (uses Embree)


I highly recommend checking out the winning entry of the 2016 Shadertoy size-coding competition -- they implement a full raytracer in 584 chars: https://www.shadertoy.com/view/4td3zr


That's an apples-and-oranges comparison. Not that it isn't impressive in its own way, but having a huge library of built-in vector math and graphics primitives makes it a lot easier than doing it in a language like C.


A few months back was the first time I ever endeavored to make code smaller regardless of readability. It was for the home page of my website; it took hours to get it under 1 MB, but it felt very rewarding.

That's when I came across my second lesson, mentioned in this article: it doesn't render fast. In my case, even though all the assets are downloaded, it's still too heavy.

Either way, when you get time to work on this type of stuff it can be pretty fun.


OK, I will say it since no one else is saying it: why do C programmers keep writing one-letter variables?

We have hard drives with terabytes of storage, and descriptive variable names do not increase the file size of the binary anyway.

So if your program is 1 kB because you shaved out the whitespace and made it illegible, don't boast. Instead, shame on you.

It is still an impressive piece, by the way, and I would have been fine with it if it were a few tens of kilobytes.


It was a submission for the IOCCC, you know, the International Obfuscated C Code Contest.


That moment when you add a few bytes to your char array to make your program leet.



