I hadn't heard about this before; I'm clearly out of the loop these days. With buy-in from smart people at Webkit and Mozilla, I've got to assume that it really will be an improvement over the status quo.
But, boy, it feels awkward at first sight. To my eye, it's weird to see what is essentially a low-level optimization instruction as part of CSS (which is usually all about presentation rather than implementation). Also, the article makes it clear that there are a lot of ways that people could mistakenly use this property in unhelpful ways. And I'm still not entirely clear on what they mean by "It’s not possible to remove will-change if it is declared in the style sheet": is this really a CSS property that can be turned on without scripting but never turned off without it? That makes me uncomfortable.
So why isn't this a case where we just need browsers to get smarter? In the same sense that an optimizing compiler can in most cases produce better performance than a human writing machine code by hand realistically could, can't we expect browsers to gradually get better and better at spotting these rendering optimizations? (To take an example from the article, couldn't the browser recognize that there's a transform rule for :active and automatically enable something equivalent to will-change for :hover itself?)
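To make that concrete, I mean something like this (the selector is made up; the pattern is the one from the article):

    /* the change itself is declared on :active... */
    .flyout:active { transform: translateX(100%); }

    /* ...and the hint the article says to add by hand, one state earlier */
    .flyout:hover { will-change: transform; }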
I made the original proposal for will-change on Mozilla's behalf. I agree with your concerns; we've had them since 2010 and kept pushing back on proposing this. But with HiDPI mobile and 4K desktop it's really difficult to get by without it.
It is nearly impossible to predict changes that some pages make in perfectly reasonable ways, especially changes triggered from JavaScript. It's true that in some cases a will-change can be inferred from the style sheet, but there are still too many changes triggered from JS events. You won't lose any performance by not using will-change, and we will continue improving how we infer the hints. Once an element is declared as will-change, the browser can weigh the hint against the platform-specific costs of the various possible optimizations, like layerizing the element, and ignore it if it isn't suitable for the device.
If we can't predict the change, then we're left trying to be ready to change a new element within one vsync interval to avoid any latency. That's fine if you're dealing with a small element, but a large change like a fullscreen CSS page flip triggered from JS is going to be nearly impossible to perform without a frame or more of latency.
In the case of HiDPI mobile you typically have 2 million pixels (8 MB at 4 bytes per pixel) to deal with, on a 1 GB/s memory bus, with a frame time of 16 ms. That means that if you touch 16 MB, or twice the size of the screen, then you have zero chance of performing that change without introducing delays. And this is without even considering the CPU/GPU cost required to perform the change. The situation may also get tough on a 4K desktop without a discrete GPU, where you have about 31 MB per fullscreen surface on a 10 GB/s bus.
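To spell out the back-of-the-envelope arithmetic (the 4 bytes per pixel and the bus/frame figures are the assumptions above):

    // rough frame-budget arithmetic for the HiDPI mobile case
    var screenBytes      = 2e6 * 4;      // ~2 million pixels * 4 bytes/pixel = ~8 MB per fullscreen surface
    var busBytesPerFrame = 1e9 * 0.016;  // 1 GB/s bus over a 16 ms frame = ~16 MB moved per frame, at best
    console.log(busBytesPerFrame / screenBytes); // ~2: touch more than ~2x the screen and you miss the frame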
These are all theoretical throughput maximums. In practice, on Android and Firefox OS, when a style change occurs on large and complex elements you can't just use SIMD to rasterize the page at memory-bandwidth speeds; instead you're bottlenecked on rasterizing complicated content. It's typical to see these complex changes cause 100 ms of latency. Once the rasterization is done the animation can continue smoothly, but you're still left with it skipping at the beginning.
Was there any consideration of making this feature JS-only? From the linked article (and some of the discussion here), it sounds like using this in an actual stylesheet may not be easy to do well (and is often not recommended). And from what you've said here, it's JS changes that are hardest to predict from within the browser. So why expose this as a CSS feature rather than a JS one?
Imagine you're writing something that does fullscreen page flips in a native app vs. a web app. An example is the Android homescreen vs. the Firefox OS homescreen, but I often see web pages with a similar effect.
* In a native app you're going to have to decompose your page into textures that you will animate (your widget toolkit will likely make this easier).
* Without will-change you're going to hope that the rendering engine infers your animation in a reasonable amount of time, getting bad performance while you wait, and hope that it doesn't fall off that optimization path.
* With will-change, on the B2G homescreen we just have to add a one-line change (roughly the sketch below).
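Something along these lines; the actual selector in the B2G homescreen is just a guess here:

    /* hypothetical rule: tell the engine this element is about to be transformed */
    .homescreen-page {
      will-change: transform;
    }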
Heck, the browser could even do a lot of things that wouldn't even be an option via CSS. For a transition set to occur on :hover, the article suggests that a human might want to add a will-change property to :hover on the containing element (which could potentially be quite large). But the browser could in principle trigger the equivalent will-change optimizations for just a small buffer region surrounding the element with the transition. Or, if you wanted to be really ambitious, the browser could trigger them only when the mouse is nearby and actually moving toward that element. I couldn't do any of that via CSS (and I shouldn't have to be the one to do it via JS).
Shouldn't this be something that the browser should auto-optimise with a smart renderer?
In the example given with element:hover and element:active, it would be trivial to parse all the possible CSS transforms. Then live profiling of the page would give data on the likelihood of particular transforms. The browser could then optimise as necessary given hardware limitations and current resource constraints. Even transforms added via JavaScript or onhover would be caught by live profiling.
The given use case of stopping flickering when a layer is created should be treated as a bug.
This feels hacky, and likely to be used erroneously or abused - at which point browsers will be forced to smartly ignore it, leaving us with yet more cruft. Performance tuning is a fine art, and the browser is in a better position to do it than your average web developer.
Style properties such as transforms can be changed at runtime by JavaScript (and frequently are for things like dragging operations). There is no practical way for the profiler to detect which styles of which elements your JavaScript will attempt to modify.
In such cases -- where it is JavaScript's responsibility to mutate the CSS at runtime -- it seems like the 'will-change' property should also be added/activated using JS.
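Something like this, say (the selector and events are made up for illustration; the idea of applying the hint ahead of the change and removing it afterwards is from the article):

    // apply the hint on an early signal, drop it once the work is done
    var panel = document.querySelector('.panel');
    panel.addEventListener('mouseenter', function () {
      panel.style.willChange = 'transform';   // a transform is likely coming soon
    });
    panel.addEventListener('transitionend', function () {
      panel.style.willChange = 'auto';        // release the hint (and whatever resources it held)
    });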
However, most of the CSS transforms I write are pre-defined in the CSS and activated by simply adding/removing classes using JS. This seems like something the browser should be perfectly capable of predicting without my help, since the JS isn't actually mutating the CSS.
This reduces to the same problem mentioned above (which is essentially the halting problem). The browser has no way of predicting which elements you will add or remove these classes from.
Automatic preparation would be possible for transforms/animations that are applied in pseudo selectors such as :hover, but not in any other cases.
Live profiling is not the panacea you seem to think it is. The purpose of this is to allow the browser to choose the optimal layout for an element before that layout is required - in other words, before any code you might live-profile has actually run. That's the point.
Basically it is that argument, though in this case it doesn't seem to be too technically difficult, especially compared to some of the black magic already done by modern JS engines.
And anyway, even if it were technically hard, the major browser vendors have the resources and the benefits are obvious. I would much prefer this to adding yet more complexity to the spec.
I believe the renderer is where we'll see major innovation over the next year; it really is the next bottleneck. It's amazing (and depressing) how easy it is to jank modern web browsers even on very powerful hardware.
My thoughts as well. This is an implementation optimisation - it seems wrong to put this kind of stuff into user space, especially as it seems like something that should be possible to solve with a smarter rendering engine.
What will-change optimization is used under the covers is an implementation detail. Declaring what types of changes you want optimized is something best suited to the page developer. Letting the rendering engine infer and prepare for changes you want to make is something engines already do; it works, but only so well.
The problem isn't technical; this is a perfectly fine solution to the question of how to remove a certain class of jank. It's adding more stuff to the already bloated web specs, having it supported forever, and then the mental load of wondering what browsers will actually do, given that it's an implementation detail.
I can easily see this leading to more unintended hacks: for example, WebKit may be less aggressive than, say, Blink in weighting the hints, and so web developers will then add back the 3D transform hacks in addition to will-change.
The target audience is also web developers, many of whom are inexperienced, and I guarantee that even given all the warnings someone will apply it blindly to *. Hence any production implementation would have to have heuristics to catch abuse; again, more complexity that will have to exist forever.
While I'm willing to give them the benefit of the doubt - these guys are super smart and it's got broad support - it just feels so heavyweight to actually change the spec for what seems like a relatively niche problem that can be mostly mitigated. Also, by the time it's supported everywhere (iOS 9?), mobiles will be 2-4x faster.
I think 'flicker' is the wrong word. What the author means is delay/latency/jank.
'Profiling' or inferring the animation is what engines already do. The problem is that as an author you're not getting predictable performance. Imagine making a tweak and then rendering engine X or Y no longer infers the animation, and now you have a large performance regression.
Live profiling wouldn't work for social reasons. When a frontend engineer demos to his executive, or a startup CEO displays a MVP to a prospective customer, they usually get a link on a cold browser, and if the transition is janky, they're like "Nope, don't want." So every frontend engineer worth his salt will continue to use the null transform hack to keep things smooth even on the initial transition.
I was one of the Blink team's initial "reference customers" within Google for this, and we were pretty adamant that if there's any jank at all between when the user taps on a link and when the transition starts firing, it was a no-go for us. We've had launches canceled because they added to user-perceived latency instead of subtracting from it.
But couldn't you apply the same argument to many things which cause an initial slowdown (and possibly jank), such as static assets (which won't be cached and may need DNS resolved) and JS, which will be interpreted until it is JITted? The CPU caches will also still be cold. Why is this different?
Savvy web developers employ techniques to eliminate those sources of jank as well, e.g. static assets are base64-encoded as data: URLs and inserted via inline JS after the page layout has loaded (note: this technique may not be necessary under SPDY), and JS is late-loaded via the defer attribute, with clicks captured and replayed once the JS to service them is available. This is the reason for the jsaction library that Google recently open-sourced.
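As a rough illustration of the first technique (the id and payload here are placeholders, not how any particular site does it):

    // swap in a pre-inlined asset only after the initial layout has rendered
    window.addEventListener('load', function () {
      var hero = document.getElementById('hero');           // placeholder id
      hero.src = 'data:image/png;base64,iVBORw0KGgo...';    // truncated placeholder payload
    });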
The reason it's different from cold CPU caches is that all these sources of jank can cause a user or potential customer to say "Nope, I want a native app that loads instantly." You should care about sources of latency that are perceptible to the user, not ones that are invisible. We ran a number of tests and found that the time required to render a texture to GPU VRAM was user-perceptible on mobile devices.
If you declare an animation class in your stylesheet but don't apply it to any element, then load a remote JavaScript file that adds that class dynamically, no rendering engine will be able to apply this optimization.
The animations that are applied dynamically are the interesting ones. They are the ones most likely to end up being used in performance-critical applications.
This would be caught by live profiling, the same way a JS engine can figure out that a particular function is only called with ints and then create a specialised version.
Admittedly this would run slowly the first few times, until the renderer can gather enough data.
Running slowly the first few times is what will-change is trying to avoid. If you read the article, there is an explicit section on applying will-change separately from the actual transformation so that the rendering engine prepares to animate/transform the object ahead of time.
A lot of people are complaining about putting this into CSS, but I think that people are forgetting that using null transform hacks[0] is already the status quo for making animated elements more performant.
What this property does is let us continue doing the same thing, but in a way where the intent is clear ("I'm using this property to make the animation on this element more performant" vs. "I want to add a transform that does nothing"). Furthermore, it lets the browser decide whether it wants to add any optimization, and which sort of optimization is best.
While it would be nice if we didn't have to do this hinting at all, the fact of the matter is that it's hard for the browser to predict what will happen on the page in the future, and we're already having to do this (just in the cryptic null transform way), so it's nice that the web platform is letting us do it in a standardized way that communicates our intent clearly.
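For anyone who hasn't run into the hack, the two versions look roughly like this (selector made up):

    /* the old hack: a transform that does nothing, purely to coax the engine into layerizing */
    .animated-menu { transform: translateZ(0); }

    /* the new, intent-revealing equivalent */
    .animated-menu { will-change: transform; }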
I feel like this is a great example of the... continuing OK-ness of the web as a platform. It feels hacky, but it's considerably less hacky than using unrelated CSS properties, so it's progress and I'm happy with that.
In my dream scenario there's no need to do hinting at all. Most other platforms don't require it. But I am living and working in the real world, which has a significant legacy of web technology.
The web doesn't require it either--we've made it this far, after all. I bet every platform has developers who wish they had simple tools to boost performance.
The web doesn't require it either--we've made it this far, after all.
That is a perfect encapsulation of the "OK-ness" he's talking about. Yeah, it works just fine. Yeah, the web doesn't "require" a smart graphics processing engine, it probably didn't "require" videos or the canvas or CSS transitions at all. But "really good" modern platforms have these capabilities, and IMO there's nothing wrong with wanting or trying to push the web to be as good a platform as possible, and it is frustrating (but understandable) that there is so much legacy baggage "holding it back."
I get that sentiment in general, but what does it have to do with a new hinting feature? Would hinting look different without legacy baggage? Would it not be needed? I don't see how it's possible to create an ideal browser that is fast at everything and also memory efficient. And there are always legitimate reasons to want to juice the most performance out of your pages.
I can’t think of another platform (in the sense of a programming environment) which is being optimised to be read milliseconds before execution from a network socket, with more code being loaded on the fly from remote servers. In C++ you have all your code at compile time, and even in Python/Ruby/Perl/Bash etc. it would be highly unusual for the shell to anticipate the equivalent of wget -O foo ${URL} && . foo.
Given the constraints, having to add some hints doesn’t seem all that bad to me.
How about "look ahead"? If there are transition or keyframe properties related to variations of the selector, why not do this automatically? I understand the overhead, but from the standpoint of hackiness this would be the technology working as expected.
How could a browser "automatically" know what changes I'm going to make to the DOM in a dynamic web app? Having the browser try to guess what transitions will be needed in the near future based on "related" selectors sounds way more hacky than a straightforward property that declares it.
As the article notes, for all but trivial examples these are likely to be added via JavaScript. The static CSS file with ":hover" and ":active" was just an example.
Saddling every element in the DOM with a data-will-change="TRUE" or data-might-change="POSSIBLY" and utilizing a heavy jQuery plugin to inject browser-specific CSS rules.
Which works fine in a CSS-only environment. But a lot of the time these properties are being set in JS, which is going to be very difficult to predict.
Is there a way to make this a JS-only interface or property, then? Heck, the article made it sound like using it directly in stylesheets would often be a bad idea anyway.
I generally try not to manipulate properties directly through JavaScript, preferring to declare a class for the purpose. That doesn't seem like an unreasonable requirement for proper performance? (I mean, compared to having stuff like `will-change`.)
It is an unreasonable requirement in places, though. Before now I've made things like swipeable elements that track the user's touch - i.e., you touch down, pull your finger left, and the box follows you. That's not possible to do in pure CSS.
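Roughly this kind of thing (names made up) - and it's also exactly the case where you'd set will-change from JS right before the drag and clear it afterwards:

    var card = document.querySelector('.card');
    var startX = 0;
    card.addEventListener('touchstart', function (e) {
      startX = e.touches[0].clientX;
      card.style.willChange = 'transform';                // hint just before the drag starts
    });
    card.addEventListener('touchmove', function (e) {
      var dx = e.touches[0].clientX - startX;
      card.style.transform = 'translateX(' + dx + 'px)';  // per-frame mutation that no class toggle can express
    });
    card.addEventListener('touchend', function () {
      card.style.willChange = 'auto';                     // drop the hint when the gesture ends
    });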
While it may be functionally great, it just smells wrong to have a CSS property that you're pretty much only supposed to set via JS that you're supposed to remove after you've set it.
I don't understand why people complain about this. It makes perfect sense: when you change your CSS (maybe adding `opacity` to `transform: translate`), you will notice this property and remember to update it. If it stays in your HTML or JS, you (or whoever changes CSS later) won't keep it up-to-date and the optimization hint will be wrong (which can hurt performance).
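For instance (a made-up rule), when the hint lives right next to the transition it describes, they get edited together:

    .menu {
      transition: transform 0.3s ease, opacity 0.3s ease;
      will-change: transform, opacity;   /* add opacity to the transition, and this line is staring at you */
    }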
I haven't done much with CSS transformations, but I really like this change and think it's a step in the right direction. So much of the web development I've done has involved weird hacks to achieve the desired result, and I think that making will-change a valid property will help make things more clear. Hopefully this trend will continue.
From what I'm reading, you shouldn't have the property in a style sheet at all. The author implies that every usage of the property requires resources that are never released as long as the property is still active. The property in a style sheet means those resources are never released.
Thus, the suggestion to use the property through JavaScript so that it can be applied and then removed. That just seems totally weird to me.
only WebKit and Firefox Nightly builds have implemented [it]
[...] There is also an intent to ship in Blink too.
(Opera is based on Chromium, which uses the WebKit fork "Blink")
Off-topic: for the first time in years, I'm looking at the opera.com website. There was a lot of useful information in their blog posts back when they still worked on their own browser engine.
Small gotcha for people who have been keeping an eye on that from afar for a while: make sure to notice the property is now called "will-change", not "will-animate" anymore.
While I understand the need for the property, it is absurd to put this in a style sheet document :( Oh well, I have mostly given up on the doc/style/code separation anyway.
A mostly technical article about CSS opens with an explanation of what the CPU and GPU are and where they are located. I'm very surprised that those two terms even need explanation.
Acceleration using the GPU is what will-change is all about.
Also, especially among web developers there are many people who barely know what they are doing, so it's a good idea to comprehensively cover the context (I know it helped me, since I'm mostly a back-end developer who does some front-end on the side).