A look at the PowerVR graphics architecture: Deferred rendering (imgtec.com)
40 points by luu on Feb 1, 2016 | 14 comments



The article linked in the first paragraph provides some good context: http://blog.imgtec.com/powervr/a-look-at-the-powervr-graphic...

Graphics developers need to know this stuff because certain operations can be very, very expensive on tile-based deferred renderers. Reading back the framebuffer, for example: on an immediate mode renderer the GPU just has to queue up a DMA, but on a tile-based deferred renderer all of the previous commands have to be flushed first. Differences like these are part of the motivation for newer APIs like Metal and Vulkan.
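To make the cost concrete, the readback in question is nothing exotic - roughly the sketch below (assuming a GLES 3.0 header; read_back_framebuffer and the RGBA8 destination are illustrative, and desktop GL with a loader behaves the same):

    #include <GLES3/gl3.h>   /* assumed header; any desktop GL loader is equivalent */

    /* Synchronous readback straight into client memory. On a TBDR the driver
       must flush every queued draw for the current render target and resolve
       the tiles before these pixels exist; on an immediate-mode GPU the colour
       buffer is already in memory, so this is much closer to a plain copy. */
    static void read_back_framebuffer(int width, int height,
                                      unsigned char *dst /* RGBA8, width*height*4 bytes */)
    {
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, dst);
    }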


Reading from the frame buffer is, in general, extremely cheap on TBDR architectures, not expensive. The frame buffer for the current render is stored in tile memory which is fast to access. Accessing the frame buffer with an immediate mode renderer will involve a load from main/global memory.
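The cheap case is reading the current pixel's value from inside the fragment shader, where it comes straight out of tile memory. A sketch using the EXT_shader_framebuffer_fetch extension (assuming a driver that exposes it, which tile-based parts such as PowerVR typically do; u_src is an illustrative uniform name):

    /* GLSL ES 1.00 fragment shader using EXT_shader_framebuffer_fetch: the
       destination colour of this pixel is read directly from on-chip tile
       memory via gl_LastFragData, so programmable blending like this is
       cheap on a TBDR. Stored as a C string for use with glShaderSource(). */
    static const char *programmable_blend_fs =
        "#version 100\n"
        "#extension GL_EXT_shader_framebuffer_fetch : require\n"
        "precision mediump float;\n"
        "uniform vec4 u_src;\n"
        "void main() {\n"
        "    vec4 dst = gl_LastFragData[0];\n"
        "    gl_FragColor = u_src + dst * (1.0 - u_src.a);\n"
        "}\n";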


I think I was a bit ambiguous. I was imagining something like glReadPixels() with a PBO target, which won't work unless all of the tiles have been rendered, but is just a DMA or pack operation in an immediate mode renderer.
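For reference, that path looks something like this (a sketch assuming GLES 3.0; start_async_readback is just an illustrative name):

    #include <GLES3/gl3.h>

    /* Readback into a pixel-pack buffer: glReadPixels() returns once the
       transfer is queued rather than when the data arrives, and the result
       is only guaranteed to be present when the PBO is later mapped. On a
       TBDR, queuing it still forces all pending tile work for the render
       target to complete before the copy can actually happen. */
    static GLuint start_async_readback(int width, int height)
    {
        GLuint pbo;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, (GLsizeiptr)width * height * 4,
                     NULL, GL_STREAM_READ);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                     (void *)0);              /* offset into the PBO, not a pointer */
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        return pbo;
    }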


Doesn't a framebuffer read involve a pipeline stall, even on an immediate mode renderer?


Yes, framebuffer reads (glReadPixels) are bad for everyone.


Is that true? I'm talking about PBO targets here, not system memory.


glReadPixels() forces you to synchronise with the CPU at the point you call it, and stops the driver from carrying on with further GL commands until it completes. Doesn't matter what you're rendering to. So even if it's cheap on the GPU, it stops the driver making forward progress which could be very beneficial to performance.


Really? I thought that was the whole point of reading into a PBO, that you're not synchronizing with the CPU yet, and you read from the PBO into system memory at some later point.
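The pattern I have in mind looks roughly like this (a sketch, assuming GLES 3.0 sync objects; the fence is assumed to have been created with glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0) right after the readback above, and try_finish_readback is an illustrative name):

    #include <GLES3/gl3.h>
    #include <string.h>

    /* Poll, don't stall: only map the pixel-pack buffer once the fence that
       was inserted after glReadPixels() has signalled, so the CPU never
       blocks waiting for the GPU. Returns 1 if the pixels were copied out. */
    static int try_finish_readback(GLuint pbo, GLsync fence,
                                   int width, int height, void *dst)
    {
        GLenum r = glClientWaitSync(fence, 0, 0);   /* timeout 0: just poll */
        if (r != GL_ALREADY_SIGNALED && r != GL_CONDITION_SATISFIED)
            return 0;                               /* GPU not finished yet */

        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        const void *src = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                                           (GLsizeiptr)width * height * 4,
                                           GL_MAP_READ_BIT);
        if (src)
            memcpy(dst, src, (size_t)width * height * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        glDeleteSync(fence);
        return 1;
    }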


Edited out a minor freakout I had, thinking PowerVR was claiming to have invented TBDR when it's been in the literature for years. Well, guess what: I'm totally wrong. PowerVR did invent tiled rendering, in 1996. That's what I get for being 2 when the paper came out.


An unfortunate consequence of that is that you can't discard or alpha-test pixels without a major performance loss. This means if you want detailed stuff you have to render everything as transparent, and then it's CPU z-sorting time.
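For the curious, the kind of alpha-tested "cutout" shader that hits this is nothing exotic - a sketch in GLSL ES (u_tex and v_uv are illustrative names):

    /* Alpha-tested cutout fragment shader (GLSL ES 1.00). The discard means
       the hardware can no longer resolve visibility for the pixel from depth
       alone, which defeats the deferred hidden-surface removal a TBDR relies
       on - hence the performance cliff described above. */
    static const char *cutout_fs =
        "#version 100\n"
        "precision mediump float;\n"
        "uniform sampler2D u_tex;\n"
        "varying vec2 v_uv;\n"
        "void main() {\n"
        "    vec4 c = texture2D(u_tex, v_uv);\n"
        "    if (c.a < 0.5) discard;\n"          /* the alpha test */
        "    gl_FragColor = c;\n"
        "}\n";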


I remember back in the day a very popular Dreamcast demo showing a deck of very big cards jumping around the screen without slowdown - mainly because of the tile rendering the Dreamcast's PowerVR chip did.

The only downside was transparency (and I guess it still is), so people used to do checkerboard transparency (e.g. imagine a chessboard - the white pixels stay, the black ones are discarded) - then it was fun to combine this with the other transparency techniques out there. Not sure how well it worked, but people tried it.
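That checkerboard trick is usually called screen-door (or stipple) transparency. In modern GLSL it would look something like the sketch below; the Dreamcast did it with fixed-function punch-through rather than a shader, so this is only an illustration of the idea, and the names are made up:

    /* Screen-door ("checkerboard") transparency: instead of blending, kill
       every other pixel in a checker pattern so the surface stays opaque as
       far as sorting and the depth buffer are concerned. */
    static const char *screendoor_fs =
        "#version 100\n"
        "precision mediump float;\n"
        "uniform vec4 u_color;\n"
        "void main() {\n"
        "    float checker = mod(floor(gl_FragCoord.x) + floor(gl_FragCoord.y), 2.0);\n"
        "    if (checker < 1.0) discard;\n"
        "    gl_FragColor = u_color;\n"
        "}\n";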


Is this a real hardware architecture, or is it a software API on top of a general SIMD architecture like GCN?


A lot of this just sounds like REYES implemented on hardware. Essentially what Pixar did with the Pixar Image Computer in the early/mid 80's.

https://en.wikipedia.org/wiki/Reyes_rendering


The tiling concept is similar, but the overall algorithm is a bit different, in that with REYES micropolygons are shaded first before being visibility tested: that's why REYES is so amazing at doing displacement.

It does mean, however, that REYES can suffer from huge overdraw issues when objects are small on screen (unless the shading rate is set appropriately).

You can still cull objects that aren't on screen ahead of time though, so you make a saving here.

Pretty much all rasterisers can suffer from this issue to some degree, which is why using raytracing for initial visibility testing scales far better (assuming acceleration structures are used) with bigger scenes (hundreds of millions of triangles). And it's one of the reasons why almost all offline renderers used for VFX have moved to raytracing.
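To make the ordering difference explicit, a stubbed-out sketch (stage names are illustrative, not real Pixar code): REYES shades the diced grid and only then resolves visibility, whereas a rasteriser - including a TBDR, after its tiling pass - resolves visibility first and shades what survives.

    #include <stddef.h>

    typedef struct Grid Grid;              /* a diced grid of micropolygons */

    /* Stubs standing in for the real stages. */
    static Grid *dice(const void *patch)   { (void)patch; return NULL; }
    static void  displace(Grid *g)         { (void)g; }
    static void  shade(Grid *g)            { (void)g; }
    static void  sample_and_hide(Grid *g)  { (void)g; }   /* z-test per sample */

    static void reyes_render_patch(const void *patch)
    {
        Grid *g = dice(patch);
        displace(g);           /* displacement just moves grid points: cheap here */
        shade(g);              /* shading happens BEFORE visibility...            */
        sample_and_hide(g);    /* ...so occluded micropolygons were shaded for
                                  nothing - the overdraw issue mentioned above    */
    }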



