Hacker News

I'd also be interested in details on this, but I assume the gl.enable() API changes fundamental things about the rendering pipeline. It enables features like depth testing and stencil testing (both involve an extra buffer) and face culling (additional tests after the vertex shader). For blending in particular, I think the previous value has to be read back from the framebuffer before the new fragment is written. Changing this stuff is probably not a trivial operation and requires a lot of communication with the GPU, which is slow (just a guess).

If you want to change blending for each draw call, you can change the blend function (gl.blendFunc) instead, or just return suitable alpha values from the fragment shader.
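To make the alpha-value trick concrete, here is a minimal sketch (in Python, illustration only) of what the fixed-function blend stage computes under the common gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA) setup; the function name and per-channel scalar model are simplifications, not a real API:

```python
# Sketch of the fixed-function blend stage for the common setup
# gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA).

def blend_src_alpha(src_rgb, src_alpha, dst_rgb):
    """out = src * src_alpha + dst * (1 - src_alpha), per channel."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

# Half-transparent red over white:
print(blend_src_alpha((1.0, 0.0, 0.0), 0.5, (1.0, 1.0, 1.0)))  # (1.0, 0.5, 0.5)

# With alpha = 1.0 from the fragment shader the destination term drops out,
# so the draw behaves as if blending were disabled:
print(blend_src_alpha((1.0, 0.0, 0.0), 1.0, (1.0, 1.0, 1.0)))  # (1.0, 0.0, 0.0)
```

The point: a shader emitting alpha = 1.0 gets opaque output without any gl.disable(gl.BLEND) call in between draws.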




> Changing this stuff is probably not a trivial operation and requires a lot of communication with the GPU, which is slow (just a guess).

The GPU underneath looks a lot more like Vulkan than it does like OpenGL. Changing state should, in general, not require communicating with the GPU at all; that happens once you draw something (or do other operations like compiling shaders or creating textures).
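As a toy illustration of that deferral (not real driver code; all names here are invented), the driver can record state setters as cheap CPU-side flags and only resolve them into GPU commands when a draw is actually issued:

```python
# Toy model of a GL driver's deferred state handling. Illustration only:
# real drivers are far more involved.

class ToyDriver:
    def __init__(self):
        self.state = {"BLEND": False, "DEPTH_TEST": False}
        self.gpu_commands = []          # stands in for the GPU command stream

    def enable(self, cap):
        self.state[cap] = True          # cheap: just flips a CPU-side flag

    def disable(self, cap):
        self.state[cap] = False

    def draw(self):
        # Only here does the accumulated state turn into GPU work.
        self.gpu_commands.append(("SET_STATE", dict(self.state)))
        self.gpu_commands.append(("DRAW",))

d = ToyDriver()
d.enable("BLEND")
d.enable("DEPTH_TEST")
d.disable("BLEND")          # none of these touch gpu_commands
d.draw()                    # one state snapshot + one draw reach the "GPU"
print(len(d.gpu_commands))  # 2
```

However many enable/disable calls happen between draws, only the final snapshot is sent, which is why the setters themselves can be cheap.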


Yeah, but the problem specifically with GL is that it is almost unpredictable what actually happens at 'draw time', because small GL state changes can be amplified into big GPU state changes.
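A hypothetical sketch of that amplification (invented names, not any real driver's code): if the hardware bakes blend state into a monolithic pipeline object, which Vulkan exposes explicitly via VkGraphicsPipelineCreateInfo, then toggling one "small" flag between draws forces the driver to look up or build a whole new pipeline:

```python
# Hypothetical driver-side pipeline cache, to illustrate state amplification.

pipeline_cache = {}
compiles = 0

def get_pipeline(shader, blend_enabled, depth_test):
    """Key the full pipeline on every piece of baked-in state."""
    global compiles
    key = (shader, blend_enabled, depth_test)
    if key not in pipeline_cache:
        compiles += 1                       # the expensive part
        pipeline_cache[key] = f"pipeline#{compiles}"
    return pipeline_cache[key]

# Toggling one flag between draws doubles the pipelines needed:
get_pipeline("sprite.frag", blend_enabled=False, depth_test=True)
get_pipeline("sprite.frag", blend_enabled=True,  depth_test=True)   # new build
get_pipeline("sprite.frag", blend_enabled=False, depth_test=True)   # cache hit
print(compiles)  # 2
```

Whether a given toggle is a cheap register write or a full pipeline rebuild depends on the hardware, which is exactly why GL draw-time cost is hard to predict.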


Yeah, that's definitely an issue. Vulkan has some of the same issues; they're just moved to pipeline creation time.



