I probably should have been clearer in the article. I was trying to strike a balance between:
- presenting what I believe to be a compelling and exciting possible future for the web (especially considering it has a viable polyfill story)
- getting developers excited about this future and thinking about how it could integrate with existing tooling
and:
- asking for feedback on the KV Storage and Import Maps APIs themselves
- encouraging developers to experiment and/or sign up for the origin trial
It's not an easy balance to strike, and in this case I probably should have emphasized more that this is still in the experimentation phase.
It's got to be frustrating to write something like this up and immediately see people jump to, "Google is taking over the world again."
To be clear, this looks really promising. I particularly like the way that import maps are polyfilled. I kind of wish standard modules were flat-out required to be imported in versioned form, since that would open the door to better API versioning on the web in general, but... whatever, that's what the standards process is for, and it looks like that's something people are at least already talking about.
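For anyone who hasn't looked at the proposal, the usage is roughly this. This is a sketch based on the explainer as it stood at the time; the `std:` specifier and export names were still provisional and could change through the standards process:

```js
// Sketch of KV Storage usage as proposed (specifier and exports provisional).
// The polyfill story mentioned above: an import map can resolve this specifier
// to the native built-in module where it exists, and fall back to a polyfill
// URL everywhere else, so the application code stays the same in both cases.
import { storage } from 'std:kv-storage';

async function saveUserPrefs(prefs) {
  // Async and backed by IndexedDB, so it doesn't block the main thread the
  // way synchronous localStorage access can.
  await storage.set('user-prefs', prefs);
}

async function loadUserPrefs() {
  return (await storage.get('user-prefs')) ?? {};
}
```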
But the unfortunate side of things is that because of Chrome's history, it's really easy to read posts like this as, "here's a new thing, and btw we're shipping it tomorrow." That's not your fault, it's just what the environment is like right now.
> All your users should benefit from better performance, and Chrome 74+ users won't have to pay any extra download cost.
To me that sounds like, "post Chrome 74, we will have this feature turned on for production sites." If that's not the intent, and Chrome isn't planning to turn this feature on early, then I'm much more excited about the proposal.
It seems a bit unfortunate to quote a sentence out of context and then misinterpret it like that. Let's expand the quote by one sentence:
> If your site currently uses localStorage, you should try switching to the KV Storage API, and if you sign up for the KV Storage origin trial, you can actually deploy your changes today! All your users should benefit from better performance, and Chrome 74+ users won't have to pay any extra download cost.
I'm not sure what that extra sentence changes. I would still (and in fact, did when I read the release) interpret both sentences together as, "you can enable this today via an origin trial, and in Chrome 74+ it'll be enabled permanently." I certainly wouldn't read it as, "you should sign up for an origin trial and then provide us with feedback so we can continue to evolve the spec."
I realize that to no small degree that's my own fault for not understanding the specifics of origin trials, but how many people reading this article already understand origin trials?
I don't want to derail things -- I just wanted to get clarification that this wasn't going to be HTML imports again. At the end of the day, I don't care how the announcement is worded as long as it's not actually being shipped in the next release as on by default for every site. The spec itself looks really interesting and I'm excited to see it develop.
I've updated the article to emphasize that this is indeed something we're experimenting with, and explain a bit more how built-in modules go through the standards process (in general and in Chrome).
That 56K number isn't gzipped; gzipped it's only 18K (there's also some inline JS, some webpack boilerplate, and then analytics.js).
The reason for its size is that my site is my playground. It's where I get to experiment with all the things I want to experiment with.
I also work on quite a few open source projects, which I usually test on my site before releasing them publicly just to make sure they work in production without errors.
Article author here. Yep, not hiding that fact (I could have easily used a trace with minified code, but I didn't to point this out).
Two things though:
1. I used to work on Google Analytics, and I've created a lot of open source libraries around Google Analytics, which I use on my own site because I like to test my own libraries (and feel any pain they may be causing). The way most people use Google Analytics does not block for nearly this long.
2. I've updated my Google Analytics libraries to take advantage of this strategy [1], and I'm working with some of my old teams internally to see if they can bake it into GA's core analytics.js library, because I strongly believe that analytics code should never degrade the user experience.
> Yep, not hiding that fact (I could have easily used a trace with minified code, but I didn't to point this out).
Kudos to you for your honesty here! I was a bit confused by your question "So what’s taking so long to run?" when it seemed pretty clear what was taking so long to run. If the goal were simply "speed up the pageload/FID", removing browser analytics (in favor of server-side analytics, for example) would seem to be at least an _option_ to immediately achieve that end.
Right, when I said "what's taking so long to run?", in my mind I was thinking there'd be one obviously slow thing that I could just remove or refactor, but it turned out that it wasn't any one single slow function/API causing the problem.
And yes, clearly removing the analytics code would have also solved the problem for me, and in many cases, removing code is the best solution.
In this particular case I couldn't remove any code because I was refactoring an open source library that a lot of people use. I wanted to try to make it better for input responsiveness in general, so people who use the library (and maybe don't know much about performance) will benefit for free.
Also, I wanted to help educate people about how tasks run on the browser's main thread, and how certain coding styles can lead to higher than expected input latency.
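To make that last point concrete, here's a generic sketch of the kind of change that helps (not the author's actual library code): splitting one long synchronous loop into chunks so the browser can respond to input in between.

```js
// Generic illustration: process a large array in ~50ms chunks, yielding back
// to the main thread between chunks so queued input events can be handled.
function processInChunks(items, processItem) {
  return new Promise((resolve) => {
    let i = 0;
    function doChunk() {
      // Work for at most ~50ms (the rough "long task" threshold), then yield.
      const deadline = performance.now() + 50;
      while (i < items.length && performance.now() < deadline) {
        processItem(items[i++]);
      }
      if (i < items.length) {
        setTimeout(doChunk, 0); // give pending input a chance to run
      } else {
        resolve();
      }
    }
    doChunk();
  });
}

// Usage (hypothetical names): processInChunks(bigArray, expensiveWork).then(() => console.log('done'));
```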
It's likely not "blocked" by analytics since pretty much all analytics libraries get loaded async.
However, scripts loaded before the `load` event do delay it, and analytics scripts are typically loaded with the lowest priority, so they're usually last and thus the ones you notice in the bottom-left corner of your window.
But the only way they'd be "blocking" anything is if the site was waiting for the load event to initialize any critical functionality (which it shouldn't be).
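As a rough illustration of why gating on `load` is risky (hypothetical `initCheckoutButton`, not from the article): even an async analytics script delays the `load` event until it has finished downloading and executing, so anything waiting on `load` waits on analytics too.

```js
// Hypothetical example of critical init gated on `load` vs. DOM readiness.
function initCheckoutButton() {
  document.querySelector('#checkout')
    ?.addEventListener('click', () => { /* critical purchase flow */ });
}

// Anti-pattern: waits for every subresource, analytics included.
window.addEventListener('load', initCheckoutButton);

// Safer: initialize critical functionality as soon as the DOM is parsed.
document.addEventListener('DOMContentLoaded', initCheckoutButton);
```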
Shouldn't, but this is absolutely common in the wild. Another one I run into often (which forces me to disable my ad blocker) is UI actions that first log the action in X analytics library and then execute the function. When X library is blocked, the code path never reaches the function. Off the top of my head, flight/hotel reservation websites are particularly bad about this.
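A sketch of that failure mode (hypothetical names; `reserveFlight` and the `analytics` global are stand-ins):

```js
// Fragile: if the analytics script was blocked (e.g. by an ad blocker),
// `analytics` is undefined, the first line throws, and reserveFlight() never runs.
document.querySelector('#reserve')?.addEventListener('click', () => {
  analytics.track('reserve-clicked');
  reserveFlight();
});

// Safer: do the real work unconditionally and treat analytics as best-effort.
document.querySelector('#reserve')?.addEventListener('click', () => {
  reserveFlight();
  try { window.analytics?.track('reserve-clicked'); } catch (err) { /* ignore */ }
});
```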
Article author here. Most of the comments so far are talking about using lots of tabs, but the parts of the article I think are actually most interesting haven't been mentioned at all.
As I was doing my research for the article, I found a lot of things that really surprised me. And I feel pretty confident in saying that most web developers aren't aware of these things either.
Here are my top four:
- We shouldn't use the unload event. Ever.
- The unload event often doesn't fire when closing tabs/app on mobile
- The pagehide/pageshow events even exist (virtually no one I've talked to knows what they do; most people think they're about page visibility).
- In browsers that implement a page navigation cache, you can click a link to navigate away and then navigate back with the back button, and all your JS code is exactly as it was before you navigated.
On this last point, try doing this in the console:
1. Write a promise that resolves in a setTimeout after 5 seconds.
2. After 1 second, click a link to navigate to a new page.
3. Then navigate back with the back button: in a browser with a page navigation cache, the timer picks up where it left off and the promise still resolves, as if you'd never left.
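If you want to actually watch this happen, here's the kind of snippet you could paste into the console (a sketch; what you see depends on the browser implementing a back/forward cache, which not all do):

```js
// Paste into the DevTools console, then (within a second or so) click a link
// on the page and come back with the back button. In a browser with a page
// navigation cache, the timer is frozen while the page is in the cache and
// resumes when you return, so the promise still resolves; without one, the
// page is reloaded and the pending promise is simply gone.
const started = Date.now();
new Promise((resolve) => setTimeout(resolve, 5000)).then(() => {
  console.log(`Resolved after ${Date.now() - started} ms of wall-clock time`);
});
```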
You can see the last point in action on a number of places. I can think of two sites in particular that pop up "You will be signed out in XX [decreasing number] seconds unless you click 'Continue'." messages. I can come back to the site's tab three hours later, and the countdown just started!
This isn't true. The proposal was discussed many times in the WebPerf working group and went through a TAG review before shipping.
Microsoft made a public statement of support for the API, and Firefox engineers were actively involved in many of the design and implementation discussions.
Service Worker already introduced the need (or you could argue possibility) to have multiple configs since SW code has a much different transpiling baseline compared to legacy browser code. And module code just adds one more level to this.
I think webpack dev server will update if this practice becomes popular, but for the moment I think your idea of only building the es2015 build in dev mode is probably the best temporary solution (as long as you make sure to run your test suite in multiple environments and include the legacy build there).
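For anyone wondering what "multiple configs" can look like in practice, here's a hypothetical babel.config.js sketch (target values are illustrative, not a recommendation) with separate environments for the module build, the legacy build, and the service worker:

```js
// Hypothetical babel.config.js: one environment per output bundle, so a dev
// server can build only the "modern" (ES2015+ modules) bundle while CI and
// production produce all of them. Target values are purely illustrative.
module.exports = {
  env: {
    modern: {
      presets: [['@babel/preset-env', { targets: { esmodules: true } }]],
    },
    legacy: {
      presets: [['@babel/preset-env', { targets: 'defaults' }]],
    },
    // Service workers only run in browsers that support service workers,
    // so their transpiling baseline differs from the legacy bundle's.
    sw: {
      presets: [['@babel/preset-env', { targets: { chrome: '45', firefox: '44' } }]],
    },
  },
};
```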