
Naive question:

Does your last paragraph have any implications for VR? Could something like the Oculus Rift benefit from raytracing technology? Or have I misunderstood what you are saying?




I only skimmed the paper referenced below [1]. But as I understand it at the moment, you essentially measure the area of the screen where high quality is important (using an eye tracker), and with that you can allocate your rendering resources better. The downside is that the rendering looks rather strange to anyone who looks at it without proper alignment of the high-quality region. So VR goggles seem to be a good candidate for the technology, if they include an eye tracker, because they guarantee a single viewer and provide a frame to attach the eye tracker to.

[1] http://research.microsoft.com/apps/pubs/default.aspx?id=1766... (thanks mynameismiek)
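For a concrete picture of the resource allocation described above, here is a minimal sketch in Python. The function name and falloff constants are made up for illustration, not taken from the paper: the idea is just that the ray/sample budget per pixel falls off with distance from the tracked gaze point.

    import math

    def samples_for_pixel(px, py, gaze_x, gaze_y,
                          max_spp=64, min_spp=1, falloff_px=300.0):
        """Illustrative sample budget: many rays per pixel near the gaze
        point, decaying toward a floor in the periphery."""
        # Eccentricity approximated as screen-space distance from the gaze point.
        dist = math.hypot(px - gaze_x, py - gaze_y)
        # Exponential falloff; constants here are guesses, not from the paper.
        spp = min_spp + (max_spp - min_spp) * math.exp(-dist / falloff_px)
        return max(min_spp, int(round(spp)))

    # Example: gaze at the centre of a 1920x1080 frame.
    print(samples_for_pixel(960, 540, 960, 540))   # full budget at the fovea
    print(samples_for_pixel(100, 100, 960, 540))   # only a few samples far out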


The idea is that raytracing can degrade gracefully, so there's a sliding scale between rendering time and rendering quality for every pixel. And yes, this means that, theoretically, the computer could naturally degrade pixels that you're not currently looking at.

I suspect this would look highly unnatural in practice, though, as even static scenes would flicker and change, especially with reflections and any surface or technique that uses random sampling methods (which is virtually every algorithm that is both fast and looks good).
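To make the flicker concern concrete, here is a toy Monte Carlo estimate of a single pixel (not any real renderer's code): with only a few random samples the estimate changes noticeably from one frame to the next, which is exactly the temporal noise described above.

    import random
    import statistics

    def estimate_pixel(spp, seed):
        """Toy Monte Carlo pixel: average of random samples of a noisy
        integrand, standing in for path-traced radiance."""
        rng = random.Random(seed)
        return sum(rng.random() for _ in range(spp)) / spp

    # Re-render the same pixel over several "frames" with different seeds.
    for spp in (1, 4, 64):
        frames = [estimate_pixel(spp, frame) for frame in range(100)]
        print(f"{spp:3d} spp: stddev across frames = {statistics.pstdev(frames):.3f}")
    # Low sample counts vary a lot from frame to frame; that variation is the flicker.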


> I suspect this would look highly unnatural in practice

In paired tests, test subjects were unable to see a difference between the foveated image and the normally rendered one. The flicker can be removed by blurring the peripheral image until it is no longer visible. And because your brain is used to a very low sampling rate outside the fovea, that actually helps hide the artifacts: they occur naturally in your vision anyway.
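A rough sketch of that blur-the-periphery idea, using Pillow's Gaussian blur as a stand-in for whatever filter the paper actually uses; the radii and the sharp/blurred blend are illustrative guesses.

    from PIL import Image, ImageDraw, ImageFilter

    def foveate(frame, gaze_xy, fovea_radius=200, blur_radius=8):
        """Blend a sharp frame with a blurred copy so that only the region
        around the gaze point stays sharp. Radii are illustrative guesses."""
        blurred = frame.filter(ImageFilter.GaussianBlur(blur_radius))
        # Soft circular mask: white (keep sharp) around the gaze point, black elsewhere.
        mask = Image.new("L", frame.size, 0)
        gx, gy = gaze_xy
        ImageDraw.Draw(mask).ellipse(
            (gx - fovea_radius, gy - fovea_radius,
             gx + fovea_radius, gy + fovea_radius),
            fill=255)
        mask = mask.filter(ImageFilter.GaussianBlur(fovea_radius // 4))
        # Sharp where the mask is white, blurred where it is black.
        return Image.composite(frame, blurred, mask)

    # frame = Image.open("rendered_frame.png")
    # foveate(frame, gaze_xy=(960, 540)).save("foveated_frame.png")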



