Hacker News

I'm not sure I get your comment.

In my mind, the GPU can render $x pixels per frame at 60fps, so the pixel budget for each distinct view is $x / 45.
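To make that arithmetic concrete, here's a back-of-envelope sketch in Python. The budget number is purely illustrative, not tied to any specific GPU or display:

```python
# Split a fixed per-frame pixel budget evenly across 45 views.
# The 8-million-pixel budget below is a made-up illustrative number.

def pixels_per_view(total_pixels: int, views: int = 45) -> int:
    """If the GPU can fill `total_pixels` per frame at 60 fps,
    each distinct view gets an equal share of that budget."""
    return total_pixels // views

budget = 8_000_000
print(pixels_per_view(budget))  # 177777 pixels per view, roughly 486x365
```

So even a generous fill-rate budget shrinks to a fairly modest per-view resolution once it's split 45 ways.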




I think it's a little more complex than that. In addition to rasterizing a large number of pixels, the entire vertex pipeline has to run 45 times to generate 45 projections of the scene's geometry. You're correct, though, that the rest of what happens in a frame (physics, animations & other state updates) does not have to run 45x.
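A minimal runnable sketch of where the 45x cost lands in a frame (all the function names here are hypothetical, just stand-ins for the real pipeline stages):

```python
# Sketch: simulation (physics, animation, game state) runs once per frame,
# while the vertex/raster work runs once per view. Counters stand in for
# the actual GPU work so the 1x vs 45x split is visible.

NUM_VIEWS = 45
counters = {"sim": 0, "vertex": 0}

def update_simulation():
    # physics, animations & other state updates -- shared by all views
    counters["sim"] += 1

def render_view(view_index):
    # vertex transform + rasterization for one camera/projection
    counters["vertex"] += 1

def render_frame():
    update_simulation()          # 1x per frame
    for v in range(NUM_VIEWS):   # 45x per frame
        render_view(v)

render_frame()
print(counters)  # {'sim': 1, 'vertex': 45}
```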


This is an application where ray tracing for primary rays should shine. Instead of having to project the scene 45 times, you only need 45 sets of ray bundles, which is really efficient. The acceleration structures are shared between views. With a few pixel-reordering hacks you can essentially generate all viewpoints as a single high-resolution frame.
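A sketch of that "one big frame" idea (hypothetical numbers and layout, no real BVH): every pixel of a single large render target maps back to a (view index, local pixel) pair, and a primary ray is generated from that view's camera. The acceleration structure would be built once and shared by all 45 ray bundles.

```python
# Map pixels of one combined frame back to (view, local pixel), then
# generate a primary ray per pixel. Views are tiled side by side here;
# resolutions and eye spacing are illustrative, not from a real display.

VIEWS = 45
VIEW_W, VIEW_H = 160, 90      # per-view resolution (made up)
BIG_W = VIEW_W * VIEWS        # width of the combined render target

def pixel_to_view(x, y):
    """Map a pixel of the combined frame to (view, local_x, local_y)."""
    return x // VIEW_W, x % VIEW_W, y

def primary_ray(view, lx, ly):
    # A real implementation would offset the camera origin per view;
    # the tuples here just stand in for (origin, direction).
    origin = (view * 0.01, 0.0, 0.0)  # hypothetical eye spacing
    direction = (lx / VIEW_W - 0.5, ly / VIEW_H - 0.5, 1.0)
    return origin, direction

# Every pixel traces against the same shared scene / acceleration structure:
view, lx, ly = pixel_to_view(400, 10)
print(view, lx, ly)  # pixel x=400 lands in view 2, local x 80
```

The nice property is that the per-pixel work is uniform: the GPU just sees one big frame of rays, and only the camera parameters vary per region.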


This is exactly what I was thinking about.

With ray tracing you get this almost for free.


You're right. This is essentially meaningless: "Now that GPUs can reliably generate 60 FPS ...."

That depends entirely on what you're rendering. You could reliably hit over 60 FPS for decades, for some content. And even now you can still be stuck at 0.000000001 FPS for other content.

Saying "GPUs can generate 60 FPS" is presenting the situation as if framerate were a function of hardware only, whereas the reality of the situation is that it depends at least as much on the software.


Yeah, this is totally cart-before-the-horse. Cards aren't built to target 60fps specifically; they're built to perform as many operations as possible. Then content creators push the card as hard as they can while attempting to maintain 60fps. The graphics-engine authors out there are always clamouring for more ops, with active plans for how to use them, so any extra memory or ops the GPU makers provide will almost immediately be used up. Further complicating things, display technology is always advancing, adding more pixels to render.

I remember the moment when Crytek released a real-time raytraced demo of one of their games running on the best hardware available at the time. It felt like the hardware was finally capable, and now it would be a slow march to the end of raster graphics. Then 4K displays came along and totally exploded the number of pixels to render, and that was pretty much the end of that talk, at least for a couple of decades.


I should have said something different and not even mentioned a specific framerate. What I meant to say is that GPUs nowadays can handle most rendering with ease, but the product in this link will need a lot more.


For one, prerendered 3D video could be as detailed as the transfer pipe & display would allow.

For another, the worst case is $x/45 pixels per view, but I would think that might improve as 3D programmers figure out optimizations for multi-angle view rendering.
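One way to see why it might beat the worst case: some per-frame work (acceleration-structure builds, shadow maps, skinning) can in principle be shared across all 45 views. A toy cost model, with purely illustrative numbers:

```python
# Toy cost model: if a fraction `shared` of single-view work can be reused
# across views, total frame cost grows slower than 45x. The 0.4 below is an
# invented example fraction, not a measurement.

def frame_cost(single_view_cost, views=45, shared=0.0):
    shared_part = single_view_cost * shared          # paid once
    per_view_part = single_view_cost * (1 - shared)  # paid per view
    return shared_part + per_view_part * views

print(frame_cost(1.0))              # 45.0 -- naive worst case
print(frame_cost(1.0, shared=0.4))  # ~27.4 -- 40% of the work reused
```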



