
I hope soon we'll have a shot at rethinking the browser's rendering stack. Not necessarily à la WebRender [0], since that has to stay backwards-compatible, but something with a clean slate.

I imagine programmable layout, GPU contour rasterization, motion-blur, programmable shaders, layer blending, LOD and infinite zooming, and render-to-texture (for putting 2D surfaces in a 3D scene).

None of this is superfluous once you bring AR/VR into the mix. Soon people will want to put on their AR/VR glasses and read books, select pieces of text, open links, etc. There are no APIs in a web browser that allow them to do that. Ergo => flock to native.

Quite a bit of work remains for that to happen. A whole bunch of APIs need to be exposed in the browser: native text rendering, a better-than-WebGL2 graphics API [1][2], a way to make the page accessible to non-humans (e.g. how do you make the page reader-friendly? how do you do "Find in page"? how do you index the content?), etc.

[0] https://wiki.mozilla.org/Platform/GFX/Quantum_Render [1] https://github.com/KhronosGroup/WebGLNext-Proposals [2] https://github.com/gpuweb/nxt-standalone



