I was working on a problem recently that's remarkably similar to the author's rendering of hundreds of thousands of cubes (each arbitrarily oriented in space), and looked into geometry shaders some. There were several points where I thought I'd be able to use them to implement some resource-saving trick that occurred to me—but was discouraged by further reading each time.
Ugh! Graphics and GPU compute were split by choice because OpenGL programming is an esoteric art; the only thing that can fix this is simplification - something I don't trust Khronos with, given my miserable experience with OpenCL compared to CUDA.
Let's hope that since they had input from Valve, Epic, and other users of the API instead of just producers, they are doing a better job than they did with OpenCL.
For post-processing, Capcom's MT Framework used geometry shaders to generate primitives in lieu of scattered read-modify-writes to pixels. I'm not sure if the bang for the buck was there, but the concept was clever enough that it stuck with me.
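I don't know what MT Framework's shaders actually looked like, but the usual version of the trick is to feed the pass one point per splat, expand each point into a small quad in the geometry shader, and let rasterization plus blending do the "scatter" for you. A rough GLSL sketch (splatSize and uv are names I made up; the points are assumed to arrive already in NDC, w = 1):

    #version 330 core
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    in float splatSize[];   // hypothetical per-point radius, set by the vertex shader
    out vec2 uv;            // lets the fragment shader shape/weight the splat

    void main() {
        vec4 c = gl_in[0].gl_Position;  // assumed to be in NDC already (w = 1)
        float r = splatSize[0];
        // Emit a quad around the point; the fragment shader then blends the
        // splat into the target, so the read-modify-write happens in the blend stage.
        uv = vec2(0.0, 0.0); gl_Position = c + vec4(-r, -r, 0.0, 0.0); EmitVertex();
        uv = vec2(1.0, 0.0); gl_Position = c + vec4( r, -r, 0.0, 0.0); EmitVertex();
        uv = vec2(0.0, 1.0); gl_Position = c + vec4(-r,  r, 0.0, 0.0); EmitVertex();
        uv = vec2(1.0, 1.0); gl_Position = c + vec4( r,  r, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }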
They are used a lot for generating normal vectors on the fly: since you take in a whole triangle, you just do a cross product of two of the edges and you're done.
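For what it's worth, the flat-normal case really is only a few lines. A minimal sketch (worldPos is a hypothetical varying carrying world-space positions from the vertex shader):

    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    in vec3 worldPos[];        // hypothetical world-space position from the vertex shader
    flat out vec3 faceNormal;

    void main() {
        // Cross product of two edges gives the face normal.
        vec3 e1 = worldPos[1] - worldPos[0];
        vec3 e2 = worldPos[2] - worldPos[0];
        vec3 n  = normalize(cross(e1, e2));

        // Pass the triangle through unchanged, tagging each vertex with the normal.
        for (int i = 0; i < 3; ++i) {
            faceNormal  = n;
            gl_Position = gl_in[i].gl_Position;  // clip-space position, passed through
            EmitVertex();
        }
        EndPrimitive();
    }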
From my experience, they are fast enough as long as you don't emit more than 2x the primitives you take in.
They are also useful for culling primitives.
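The culling case is just "emit nothing and the primitive disappears." A sketch with a made-up distance test (viewDepth and maxDrawDistance are hypothetical, supplied by the vertex shader and the application):

    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    in float viewDepth[];          // hypothetical per-vertex view-space depth
    uniform float maxDrawDistance; // hypothetical cutoff

    void main() {
        // Drop the whole triangle if every vertex is past the cutoff;
        // returning without EmitVertex() culls the primitive.
        if (viewDepth[0] > maxDrawDistance &&
            viewDepth[1] > maxDrawDistance &&
            viewDepth[2] > maxDrawDistance)
            return;

        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }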
They are not suitable for tessellation, which funnily enough is better done by tessellation shaders. Tessellation shaders are actually pretty flexible, and can do most of the things that you might have thought geometry shaders would be suitable for.
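For comparison, a minimal pass-through tessellation pair looks like this: the fixed-function tessellator does the actual splitting, the control shader only sets the levels (hard-coded to 4 here purely for illustration), and the evaluation shader places the generated vertices.

    // tessellation control shader
    #version 400 core
    layout(vertices = 3) out;

    void main() {
        gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
        if (gl_InvocationID == 0) {
            gl_TessLevelInner[0] = 4.0;
            gl_TessLevelOuter[0] = 4.0;
            gl_TessLevelOuter[1] = 4.0;
            gl_TessLevelOuter[2] = 4.0;
        }
    }

    // tessellation evaluation shader
    #version 400 core
    layout(triangles, equal_spacing, ccw) in;

    void main() {
        // gl_TessCoord is the barycentric coordinate of each generated vertex.
        gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                    + gl_TessCoord.y * gl_in[1].gl_Position
                    + gl_TessCoord.z * gl_in[2].gl_Position;
    }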
I guess there are uses out there that they're more ideally suited for, but it'd be great if they did what they do... only fast :)
Now I'm excited about this, though (https://www.khronos.org/vulkan)—which I only noticed through the author's blog.