
I'm mostly asking this question out of ignorance, so forgive me. But I'm a web dev and it's always been my understanding that browsers mostly use the CPU to render CSS, only utilizing the GPU in certain scenarios. Is Muon stating that they're entirely leveraging the GPU? If so that sounds super interesting.



Yes-- Ultralight (the renderer underneath) has two modes: pure-CPU or pure-GPU. The GPU renderer does all drawing on the GPU using tessellated path geometry and pixel shaders.

All painting is actually emitted as virtual GPU draw calls; the interface is here: https://github.com/ultralight-ux/Ultralight-API/blob/master/...

Platform-specific implementations (D3D11 / D3D12 / Metal / OpenGL) are provided in the AppCore repo: https://github.com/ultralight-ux/AppCore
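
For a feel of the pattern (this is a hypothetical sketch, not the actual Ultralight GPUDriver header; see the link above for the real interface), the idea is that the renderer records abstract resource and draw commands, and each platform backend replays them against its native API:

    // Hypothetical sketch of a virtual GPU driver (illustration only).
    // The renderer allocates IDs and records commands; a backend
    // (D3D11 / D3D12 / Metal / OpenGL) owns the native objects.
    #include <cstdint>
    #include <vector>

    struct GPUCommand {
        enum class Type { ClearRenderBuffer, DrawGeometry } type;
        uint32_t geometry_id;      // which tessellated mesh to draw
        uint32_t render_buffer_id; // which target to draw into
        uint32_t indices_count;    // index range for this draw call
    };

    class VirtualGPUDriver {
    public:
        virtual ~VirtualGPUDriver() = default;
        virtual void CreateGeometry(uint32_t id,
                                    const std::vector<float>& vertices,
                                    const std::vector<uint32_t>& indices) = 0;
        virtual void DestroyGeometry(uint32_t id) = 0;
        // One frame's worth of recorded commands, replayed in order.
        virtual void UpdateCommandList(const std::vector<GPUCommand>& commands) = 0;
    };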


Tessellated path geometry? Have you stopped using the signed-distance field path implementation? If so, any reasons why or lessons learned?


Hah, I could write a whole post on this topic, but ultimately, after experimenting with many different approaches on real-world hardware, tessellating the paths and using multi-sampling for AA (you can limit the MSAA to a single area via an offscreen buffer) was the most reliable performer.
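
To illustrate the offscreen-MSAA trick (my reading of the approach, sketched in plain OpenGL 3.3+; this is not Ultralight's actual code, and bounds_w / bounds_h / dst_x / dst_y / main_fbo are assumed variables):

    // Confine MSAA to an offscreen buffer sized to the path's bounds,
    // then resolve into the single-sampled main target.
    GLuint fbo, msaa_rb;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &msaa_rb);
    glBindRenderbuffer(GL_RENDERBUFFER, msaa_rb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 8, GL_RGBA8,
                                     bounds_w, bounds_h); // 8x MSAA, path bounds only
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, msaa_rb);

    drawTessellatedPath(); // hypothetical helper: issues the triangle draws

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, main_fbo);
    glBlitFramebuffer(0, 0, bounds_w, bounds_h,
                      dst_x, dst_y, dst_x + bounds_w, dst_y + bounds_h,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST); // MSAA resolve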

Real-time SDFs on the GPU still have a definite advantage when it comes to performing strokes, fills, glows, pseudo-blurs, and other complex effects in a fill shader, but older hardware (especially older integrated graphics) had unacceptable performance when evaluating the Béziers.

The alternative is to cache the SDF (precompute it on the CPU or GPU, then upload it to VRAM), but then you start running into memory-bandwidth and texture-memory concerns.
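
The appeal of the fill-shader approach is that coverage falls out of one distance evaluation per pixel. A minimal sketch of that math in plain C++ (the same arithmetic a fragment shader would run; a circle stands in for the far costlier Bézier distance evaluation mentioned above):

    #include <algorithm>
    #include <cmath>

    // Signed distance from point (px, py) to a circle's edge.
    float sdCircle(float px, float py, float cx, float cy, float r) {
        return std::hypot(px - cx, py - cy) - r;
    }

    // Map distance to [0,1] coverage over a ~1px smoothstep band;
    // this is the analytic anti-aliasing that makes SDFs attractive.
    float coverage(float dist) {
        float t = std::clamp(0.5f - dist, 0.0f, 1.0f);
        return t * t * (3.0f - 2.0f * t);
    }

    // A stroke falls out of the same field: shade where |dist| is
    // within half the stroke width.
    float strokeCoverage(float dist, float half_width) {
        return coverage(std::fabs(dist) - half_width);
    }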

I may still bring it back and lock it to certain hardware, but I think we are still a generation or two away from using/abusing shaders for vector paths.

I should also mention that I spent the last two years rewriting our CPU renderer, which now has pretty amazing performance across a wide range of hardware (no GPU required, so it can run on a headless server or other device). I started by forking Skia's core rasterizer and wrote a parallel dispatch layer on top (CPU core counts keep increasing, so we can start treating them like GPUs for certain ops; see Intel ISPC).
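
A toy sketch of that dispatch idea (my illustration, not the actual Skia-derived rasterizer): carve the target into horizontal bands and rasterize them on independent worker threads, which is also the shape that maps naturally onto ISPC's SPMD-over-cores model:

    #include <algorithm>
    #include <cstdint>
    #include <thread>
    #include <vector>

    // Stand-in for the real per-band scan conversion.
    void rasterizeBand(uint32_t* pixels, int width, int y0, int y1) {
        for (int y = y0; y < y1; ++y)
            for (int x = 0; x < width; ++x)
                pixels[y * width + x] = 0xFFFFFFFFu; // fill with white
    }

    void rasterizeParallel(uint32_t* pixels, int width, int height) {
        int n = int(std::max(1u, std::thread::hardware_concurrency()));
        int band = (height + n - 1) / n;
        std::vector<std::thread> workers;
        for (int i = 0; i < n; ++i) {
            int y0 = i * band;
            int y1 = std::min(height, y0 + band);
            if (y0 >= y1) break;
            // Bands are disjoint, so workers need no synchronization.
            workers.emplace_back(rasterizeBand, pixels, width, y0, y1);
        }
        for (auto& t : workers) t.join();
    }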


Despite being a noob in this space, I'm very interested in it. Have you come across Raph Levien? I've seen him pop up in a few HN threads that discuss GPU SVG rendering, and he's doing some interesting things: https://raphlinus.github.io/


Most browsers render nearly everything with the GPU. Chrome and Firefox render most of the browser via Skia, and Skia can render with many different native GPU APIs or fall back to software rendering. Safari renders via Core Graphics, with compositing GPU-accelerated through Core Animation.


Firefox now renders with WebRender, not Skia.


It really depends on the CSS property you are using. The majority of the older/simpler properties are traditionally rasterized on the CPU, but things like transform, filter, or any 3D effect will promote the element to its own GPU-composited layer (e.g. transform: translateZ(0) or will-change: transform are the classic ways to force an element onto the GPU).



