This feature is really cool; however, the relative parse-time and compile-time improvements are not always meaningful to end users. I would be curious to see the absolute & relative load-time gains.
When I was benchmarking Firefox on the same feature [1], back in November 2017, I noticed that reddit loads so little JavaScript that any improvement on it does not really matter.
Also, I wish that the benchmark website domains were not truncated, such as "http://www.", and that they better distinguished websites such as "http://reddit." (for http://reddit.musicplayer.io) as opposed to "http://www.reddit." (for http://www.reddit.com).
[1] https://blog.mozilla.org/javascript/2017/12/12/javascript-st...
Websites are built to fit within the existing constraints of the web platform. It's important to look beyond what sites are doing today and instead look at what they cannot do. It's basically this story for web performance [1]: you're suggesting profiling and optimizing against website designs that 'survive' and make it to production, while failing to observe the designs and sites that never ship because of performance issues.
In general, parse time and compile time are meaningful enough that large websites spend a considerable amount of effort and energy playing 'code golf' to keep JS size down; otherwise engagement suffers.
1-2% on a large scale is pretty significant. If each request takes, say, a second on average and is then reduced by 1%, that's 990 ms: 10 ms saved per request.
Let's say that, on average, 5 uncached pages are opened per day per Chrome-mobile user. I don't feel like looking up the total number of Chrome-mobile users, so let's estimate it at 500 million.
5 page loads * 500,000,000 users * 10 ms = 25,000,000,000 ms saved per day; that's about 289 days of page loading saved on a global scale, every day.
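For what it's worth, a quick script to sanity-check that arithmetic (the 5 loads/day, 500 million users, and 10 ms/load figures are just the rough assumptions above):

    // Back-of-the-envelope check; all inputs are the rough assumptions above.
    const loadsPerUserPerDay = 5;
    const chromeMobileUsers = 5e8;   // assumed 500 million users
    const msSavedPerLoad = 10;       // 1% of an assumed 1-second load

    const msSavedPerDay = loadsPerUserPerDay * chromeMobileUsers * msSavedPerLoad;
    const daysSavedPerDay = msSavedPerDay / (1000 * 60 * 60 * 24);

    console.log(msSavedPerDay);   // 25000000000 ms
    console.log(daysSavedPerDay); // ~289.35 "days" of page loading saved per day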
I don't personally think measuring the improvement in terms of time is very interesting; it's hard to interpret. But each ms of loading costs a certain amount of battery, so the amount of electricity saved daily on a global scale because of a 1-2% improvement is fairly significant.
Only if your phone were spending all its time parsing JavaScript. Maybe 1% (being generous) of its CPU time is spent doing that, even on a JS-heavy page, so you're saving 1-2% of 1%. Really not much at all.
Facebook.com, LinkedIn.com, and Google Sheets ship ~1MB of JS compressed for their main pages, which is 5-10 MB of JS uncompressed. So JS parsing time ends up taking hundreds of milliseconds on initial load.
And of course, people want to build even richer experiences (live streaming, 360 media, etc) so there is a lot of appetite for loading more code in the same amount of time.
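If you want a rough feel for that parse/compile cost yourself, here's a crude sketch you can paste into the DevTools console (the bundle URL is a placeholder, and V8 compiles most functions lazily, so this only approximates the up-front cost on first load):

    // Crude, unscientific probe: fetch a large bundle and time how long the
    // engine takes to parse/compile it without executing the page's code.
    // Note: may be blocked on sites with a strict CSP (no 'unsafe-eval').
    const source = await (await fetch('/path/to/big-bundle.js')).text();
    const t0 = performance.now();
    new Function(source);   // parses (and partially compiles) but never runs it
    console.log(`${(performance.now() - t0).toFixed(1)} ms to parse/compile`);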
Everything is great. The only thing is that ever since the introduction of Ignition in Chrome 59 (2017-06-05) and the retirement of Full-codegen from V8, I have been recording a terrible regression on bellard.org/jslinux. Compared to Chrome 58, each newer version is 3-4x slower.
Have you reported these performance regressions on their bug tracker? [1] I can't find one. Are you just timing how long the kernel takes to boot, or are you measuring something else?
In my experience, they have been quite responsive to quality reports. Like Mozilla, they have many tools to assist QA.[2]
Initially, it seemed to be a temporary regression. Later it looked like it might be related to the Meltdown/Spectre mitigations. Currently, it is well known to the developers and is patiently waiting for improvement.
https://bugs.chromium.org/p/chromium/issues/detail?id=827497...
Can I use a timing attack to determine whether my script has been seen before, as a fingerprinting measurement? Meaning, is there a way, via timing checks, to determine whether this is a cache hit, which could tell me something about the user? Or is it per-domain and effectively like storing a bool in local storage?
While true, I was under the impression that there wasn't a cross-domain cache that wasn't opt-in. Again, though, maybe this is per-domain so it's moot.
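For illustration, here is roughly the kind of probe the question describes (the URL is a placeholder; this times load + compile + execute together, so an HTTP cache hit would confound it, and whether any difference is observable cross-site depends on how the cache is keyed):

    // Sketch of a naive timing probe. It times the whole script load, so it
    // cannot distinguish a code-cache hit from an HTTP-cache hit on its own.
    function timeScriptLoad(url) {
      return new Promise((resolve, reject) => {
        const start = performance.now();
        const s = document.createElement('script');
        s.src = url;
        s.onload = () => resolve(performance.now() - start);
        s.onerror = reject;
        document.head.appendChild(s);
      });
    }

    timeScriptLoad('https://example.com/big-library.js')
      .then((ms) => console.log(`fetched + compiled + ran in ${ms.toFixed(1)} ms`));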
V8 engineer here: basically, it's a serialized walk through the object tree, starting from the top-level script. Predominantly, this means bytecode, scoping metadata, and strings.
Good question: unfortunately not. We only really generate machine code for hot, optimized functions, using TurboFan (all the other machine code, like builtins and the interpreter, is loaded from the "snapshot"). This code is pretty execution-specific, and will include things like constants, raw pointers, logic specific to the hidden-class transition trees, etc. Relocating this when serializing/deserializing a cache would be expensive, and we'd probably also have to serialize half the state of the current VM heap, so it would overall be a net loss.
Additionally, our current caching is only after initial top-level execution, where hopefully[0] you haven't run enough code for it to be hot yet, so there wouldn't be any optimized code to cache anyway.
[0] I say "hopefully" because getting code hot during initial execution would usually mean that execution is stalling the main thread at startup, which isn't great.
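If you want to poke at what that serialized blob looks like, Node.js exposes V8's code cache through the vm module; a minimal sketch (this mirrors the idea, not Chrome's actual disk-cache format or keying):

    // Minimal sketch using Node's vm module to produce and reuse V8 cached data.
    const vm = require('vm');

    const source = 'function add(a, b) { return a + b; } add(1, 2);';

    // First "visit": compile and serialize the cache blob (bytecode, metadata, ...).
    const first = new vm.Script(source);
    const cachedData = first.createCachedData();   // Buffer of serialized cache data

    // Later "visit": hand the blob back so V8 can skip parsing/compiling.
    const second = new vm.Script(source, { cachedData });
    console.log('cache rejected?', second.cachedDataRejected); // false if it was usable
    console.log('result:', second.runInThisContext());         // 3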
I'd like to think so, though wouldn't dynamic typing mean branching is still possible, so there's likely still some runtime checking before you can be sure a precompiled bit is valid? All fascinating stuff.
Could someone add a note to the title that this applies to JavaScript on the V8 engine?
I thought this was something related to caching actual executable code, for any language.