Very interesting, and oddly reminiscent of the shift from fixed-function to programmable graphics APIs, right down to the restricted "worklets" being sort-of-analogous to shader programs.
I do wonder whether "as performant as native CSS features" is setting up unrealistic expectations. Same ballpark, maybe, but I'd have thought native will always be easier for browser vendors to optimize, even leaving aside the language/runtime differences (C++/Rust versus JS/WASM).
> Same ballpark, maybe, but I'd have thought native will always be easier for browser vendors to optimize
It will be. And things like the CSS Paint API are based on APIs that are poor matches for GPUs (but are easy for browsers of today to implement). I expect that those who are looking for Houdini for performance reasons will end up quite disappointed.
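To make the mismatch concrete: a paint worklet is basically an immediate-mode 2D canvas callback. A minimal sketch (the custom property name is made up):

```js
// checker.js: runs inside the restricted paint worklet scope.
// The drawing surface is a canvas-style 2D context, i.e. the kind of
// immediate-mode API that's easy to implement but not a natural GPU fit.
registerPaint('checkerboard', class {
  static get inputProperties() { return ['--checker-size']; }

  paint(ctx, geom, properties) {
    // Fall back to 16px squares if the property isn't set.
    const size = parseInt(String(properties.get('--checker-size')), 10) || 16;
    for (let y = 0; y < geom.height; y += size) {
      for (let x = 0; x < geom.width; x += size) {
        if (((x + y) / size) % 2 === 0) {
          ctx.fillRect(x, y, size, size);
        }
      }
    }
  }
});
```

The page loads it with CSS.paintWorklet.addModule('checker.js') and uses it as background-image: paint(checkerboard).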
More broadly, I have seen the "replace the native implementation with a JavaScript implementation for better performance" idea tried many times, and I can't think of a single time it's actually worked.
I didn't get the impression from that article that "replace the native implementation with a JavaScript implementation for better performance" was the goal at all. Rather, the goal seems to be "move away from a situation where one holdout browser (traditionally IE, more recently Safari/iOS) could scupper a new feature completely, and toward one where holdouts are just slower if they don't support that feature natively". There have been lots of examples of that strategy working - jQuery/QSA, asm.js etc.
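Concretely, that strategy tends to look like this (a sketch; the worklet file and fallback hook are placeholders):

```js
// Use the native Paint API where it exists; holdout browsers still
// work, they just take the slower (or plainer) path.
if (typeof CSS !== 'undefined' && 'paintWorklet' in CSS) {
  CSS.paintWorklet.addModule('checker.js'); // hypothetical worklet file
} else {
  // e.g. load a JS/canvas polyfill here, or fall back to plain CSS.
  document.documentElement.classList.add('no-paint-api');
}
```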
Is that a misleading impression? Does Houdini have another agenda not covered by the article?
No, that is an accurate impression. The ultimate goal of Houdini is to give developers the tools to innovate, unlike the current situation where all innovation happens at the spec level and devs have to wait (often years) for browser adoption.
That's accurate, and totally reasonable. If the web is going to move forward, there needs to be a certain acceptance that older browsers should work, but they don't deserve to work well.
And definitely not as well as the latest and greatest.
The target market is library/framework authors, especially those wishing to write polyfills for future layout algorithms that haven't been adopted by every browser yet.
Most things should be within the same ballpark. The trouble at the moment is that CSS polyfills are an order of magnitude (or two) off native performance.
For example: current CSS layout polyfills (grid, regions, etc.) suffer from exactly this.
Where you will see performance improvements is when developers create more expressive layouts/paints/etc. For example, you can implement a grid layout with a bunch of nested flexboxes, or you could do it with the CSS Layout API. Because the CSS Layout API requires fewer divs, and may avoid N^2 layout passes, chances are it'll be faster.
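As a rough sketch of what that could look like with the draft CSS Layout API (the API surface may still change, and '--grid-cols' is an invented property):

```js
// layout-worklet.js: a naive fixed-column grid, per the draft spec.
registerLayout('simple-grid', class {
  static get inputProperties() { return ['--grid-cols']; }

  async layout(children, edges, constraints, styleMap) {
    const cols = parseInt(String(styleMap.get('--grid-cols')), 10) || 3;
    const colInlineSize = constraints.fixedInlineSize / cols;

    const childFragments = [];
    let blockOffset = 0;
    let rowBlockSize = 0;

    for (let i = 0; i < children.length; i++) {
      // One layout pass per child, instead of the repeated passes a
      // nested-flexbox emulation can trigger.
      const fragment = await children[i].layoutNextFragment({
        fixedInlineSize: colInlineSize,
      });
      fragment.inlineOffset = (i % cols) * colInlineSize;
      fragment.blockOffset = blockOffset;
      rowBlockSize = Math.max(rowBlockSize, fragment.blockSize);
      if (i % cols === cols - 1) {
        blockOffset += rowBlockSize;
        rowBlockSize = 0;
      }
      childFragments.push(fragment);
    }

    return { autoBlockSize: blockOffset + rowBlockSize, childFragments };
  }
});
```

Per the draft, the page loads it with CSS.layoutWorklet.addModule(...) and the container opts in with display: layout(simple-grid), with no extra wrapper divs.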
But this really gets at the point of Houdini: to inform browsers/specs about what is important. If a grid/masonry polyfill becomes widely used, it will help inform CSS work.