New JavaScript techniques for rapid page loads (chromium.org)
229 points by tosh on March 18, 2015 | 62 comments



The mobile link is a little jarring on desktop (?m=1). Here it is without mobile:

http://blog.chromium.org/2015/03/new-javascript-techniques-f...


What it looks like on my vertical monitor: http://i.imgur.com/mzGr6Da.png

ALL HAIL THE MIGHTY CHROMIUM.


It was actually worse for me. The logo filled the entire vertical space, so I thought maybe the page was broken until I scrolled down :D

Kind of funny coming from the Chrome blog running in Chrome, or is it a little sad?

Edit: I guess the mobile version causing problems on desktop kind of makes sense.


If you define

   #header-inner img {max-width: 100%;}

instead of "width: 100%" in the CSS, the image scales down when the screen becomes narrower than the graphic's pixel width, but still stays at 100% of its own pixel width when more space is available. So no ugly upscaling. https://i.imgur.com/9TH3Al7.png


I wish they would have used an SVG.


Hahaha that's pretty awesome.



Looks the same on an iPad.


Blogger's ?m=1 is a really nasty hack which seems to break so many things that I'm surprised Google has let it hang around for so long.

Another annoyance is that unless you specifically configure Google Analytics to ignore that parameter every blog post gets two sets of metrics, one for the desktop version, and one for mobile.


Ouch. Do teams at Google not talk to each other?


To be precise, code caching has been in major browsers for a while, but it was an in-memory cache. Chromium is the first to make it persistent with this update. That said, what's cached contains no optimization results; it's more like the output of basic parsing and translation into machine code.


It is pretty hard to cache optimized JavaScript, I've been told, due to the generated code containing inline caches and other tricks. But in at least some cases it is possible: for example, Firefox has had persistent code caching of asm.js content for a while now (https://blog.mozilla.org/luke/2014/01/14/asm-js-aot-compilat...), which is the actual fully optimized machine code.


It looks like a logical next step is to store the optimized and compiled code. And after that, you can perform more aggressive optimizations on frequently-accessed scripts.


Yeah, if pages stored hashes of their scripts, a script could be pre-executed before you even visit the page, or common libs like jQuery could be pre-parsed into binary structs in RAM.
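For what it's worth, there's a browser proposal in this direction, Subresource Integrity, where the page declares the hash of the script it expects; a browser could in principle key a parse/compile cache off that hash. Roughly (the URL and hash are placeholders):

    <!-- integrity is the base64-encoded SHA-256/384 of the exact file
         contents; crossorigin is required for cross-origin scripts. -->
    <script src="https://code.jquery.com/jquery-1.11.2.min.js"
            integrity="sha256-<base64 hash of the file>"
            crossorigin="anonymous"></script>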


It needs to happen. It's such a simple and effective approach that it's really puzzling to see it not being implemented anywhere.


Is the next step that Chrome accepts JS in the form of precompiled binaries from websites? Sounds a bit futuristic, but compiling would be reduced to checking the security of the binary.


> To be precise, code caching has been in major browsers for a while, but it was an in-memory cache. Chromium is the first to make it persistent with this update.

Does this mean developers will need to put in (additional) effort to do cache invalidation when they've updated their code?


Developers already have to account for caching, so this shouldn't change anything. Overriding the cache to roll out an update is generally done in 2 ways:

1) Changing the file name, so it's a completely new file to the browser and thus downloaded (usually by incrementing the version number, like myApp.1.0.1.js).

2) Setting different cache rules on the server for different file types. If the updates aren't critical, you can let the cache run its course. So your cache rules could be 1 month for images, 1 day for HTML, and 1 hour for JS files; see the sketch below.
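As a rough sketch of approach 2, assuming a Node/Express static server (the paths and lifetimes here are purely illustrative):

    var express = require('express');
    var app = express();

    // Serve ./public with different Cache-Control lifetimes per file type:
    // long-lived for images, shorter for HTML and JS so updates roll out
    // reasonably quickly without having to rename files.
    app.use(express.static('public', {
      setHeaders: function (res, filePath) {
        if (/\.(png|jpe?g|gif)$/.test(filePath)) {
          res.setHeader('Cache-Control', 'public, max-age=2592000'); // ~1 month
        } else if (/\.html$/.test(filePath)) {
          res.setHeader('Cache-Control', 'public, max-age=86400');   // 1 day
        } else if (/\.js$/.test(filePath)) {
          res.setHeader('Cache-Control', 'public, max-age=3600');    // 1 hour
        }
      }
    }));

    app.listen(3000);

Approach 1 is then just a build step that renames the bundle (myApp.1.0.1.js) and updates the script tag, so the browser sees a brand new URL.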


V8-based projects can benefit from this as well: http://www.hashseed.net/2015/03/improving-v8s-performance-us...


Omg, so does that mean plv8 (V8 inside Postgres) isn't caching the JIT'd JS functions that might be executed thousands of times a day?!


Not sure. If it spins up a new V8 instance every time, then no. But I don't think that happens. It probably keeps a V8 instance around, so in-memory code caching should already work.

Edit: looking at the plv8 code, it doesn't seem to create a new Isolate for every request. So I think for the same instance of PostgreSQL the in-memory cache should also hit.


Interesting that async scripts are parsed on a different thread - I didn't know that. So maybe best practice now is to add JS to the <head> with async, so it has the most time to parse?


Best practice is to always add [async] or [defer] attributes. Document location will always be a hint to the browser, but you shouldn't toss it in the head if you don't need it till later.


If your scripts are going to be downloaded asynchronously because you've used "async" or "defer", you should place them before any blocking resources, like CSS for example, otherwise you're sat waiting for that blocking resource to download before you even start downloading the scripts.

This is why my script tags end up in the head now, because I want to place them before the CSS tags, and I want to fetch the CSS before the body starts being displayed.

I tend to use defer rather than async because I want the scripts to execute in the order I add them to the page, and only once the body has been fully constructed.
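Concretely, the kind of <head> I mean looks something like this (file names are just placeholders):

    <head>
      <!-- Deferred scripts start downloading right away but execute in
           document order only after the DOM has been built. -->
      <script src="/js/vendor.js" defer></script>
      <script src="/js/app.js" defer></script>
      <!-- The render-blocking stylesheet comes after the script tags so,
           per the reasoning above, its fetch doesn't hold up the script
           downloads from starting. -->
      <link rel="stylesheet" href="/css/main.css">
    </head>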

That said, if you have to support old browsers, make sure you research which ones support async/defer, and which ones have buggy implementations. caniuse.com is your friend.

[*] If I'm wrong with any of the above, please correct me. Just checked your profile and you seem well qualified to contradict me.


You don't have to order things yourself to better address blocking behavior. Browsers have already tackled that optimization for you. https://groups.google.com/a/chromium.org/forum/#!topic/loadi...


> Best practice is to always add [async] or [defer] attributes.

Can you give a reference for that?

HTML5 Boilerplate, Yeoman's generator-webapp and Google's Web Starter Kit do not come with these script attributes OOTB.

EDIT: ha I just noticed your username. Maybe you can tell me why those projects don't come with async script tags?


Because we've been wimps. :) If you don't care about IE8 or IE9 you can use [defer]. [async] is always fine, but less attractive. But srsly [defer] is equivalent to [theresnodocumentwriteinhere].


I was not familiar with the async attr in a script tag before. Thank you for pointing this out. I also use the scaffolding tools the other commenter noted and never saw this attribute before.


Code caching should have been at the top of that post. Sure, it's not as exciting as streamed parsing, but my guess is that it will be far more beneficial to the user.

Funnily enough, I was toying around with cross-domain localStorage and ways to cache JavaScript for this very purpose. Sadly, I found the overhead was a bit too high for only first-load benefits, and the extra complexity didn't really help things either. In any case, nice to see something of this nature going into the browser directly.
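For the curious, the general shape of what I was experimenting with was roughly this (a sketch, not my actual code, and ignoring the cross-domain part, which needs an iframe + postMessage bridge):

    // Cache script text in localStorage keyed by URL + version; on later
    // visits inject it from the cache instead of hitting the network.
    function loadScriptCached(url, version) {
      var key = 'jscache:' + url + '@' + version;
      var cached;
      try { cached = localStorage.getItem(key); } catch (e) {}

      function inject(text) {
        var s = document.createElement('script');
        s.text = text; // still parsed and compiled on every load
        document.head.appendChild(s);
      }

      if (cached) { inject(cached); return; }

      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.onload = function () {
        try { localStorage.setItem(key, xhr.responseText); } catch (e) {}
        inject(xhr.responseText);
      };
      xhr.send();
    }

Even when it works you only skip the network fetch; the parse and compile still happen on every load, which is exactly the part the browser-level cache addresses.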


Exactly my thought on which would be more important.

Though one thing is even more surprising to me: I thought parsing was the easy/cheap part of the compilation process. Why, then, would streaming parsing be such a big deal, or am I missing something? Especially next to the additional complexity of parsing the HTML and doing layout/rendering, I would have thought JS parsing would be a minuscule part. Can anyone enlighten me?


> pages loading as much as 10% faster.

In other words, it's not, which was kind of my point. However, it is cool.

I suspect the devs had to modularize / logically separate parsing from compiling/executing somewhat before they could complete the next phase (for Chrome 42), which is the script caching part: cached scripts won't need re-parsing before compilation/execution.


This is especially important for large app startup time.


I've discovered an incredibly efficient and reliable JavaScript technique for rapid page loading: disable JavaScript. I use the NoScript plugin in Firefox and just whitelist a few reliable Ajax sites like Google or whatever. My experience has been that JavaScript and modern web design philosophies make 90% of websites slower, uglier, and more difficult to navigate.


I find it really interesting that NoScript users seem so vocal and adamant about their plugin on this site, at least. Many sites that don't work with NoScript have condescending comments from NoScript users, and they don't hesitate to proudly let their makers know. They often make comments like this, too. What is it about this plugin that seems to be correlated with these comments? Is browsing the Internet with such a big chunk of functionality missing really so much better? (Real questions, here.) I've heard HN doesn't even work with NoScript, which makes this all the more interesting to me.


Not sure about NoScript but I just disabled JS using Chrome Developer Tools and HN seems to work fine.

To answer your question, I think that NoScript users tend to appreciate (and even expect) minimalism... consequently some get irritated when even a simple blog can't load without JS. I get the appeal of the minimalist, standards-adhering, machine-parseable website, but it's not a big source of emotion for me :P


Thank you, very good answer! It makes a bit more sense to me. Also tested in Chrome w/o Javascript, and it does indeed work. Upvoting's just a little more inconvenient.


Is browsing the Internet with such a big chunk of functionality missing really so much better?

I've been a NoScript user for 3 days. I'd say the experience is better until I hit a useful site that requires JavaScript; when that happens it takes one or two attempts to enable JavaScript properly.

I would not recommend it for people who do not like messing around with their browsers. However, I intend to stay with NoScript because page loads are faster and incredibly little functionality is missing.


In general, Javascript is fine. But some heavy sites kill my notebook's battery. CSS-heavy sites add another hit by warming up the GPU. Some even spin up the fan. Browsing without JS can be a better experience, when the alternatives are warm & loud or totally drained.

Oddly, native applications rarely seem to have much impact unless they're doing a hard task, like video encoding.


"Is browsing the Internet with such a big chunk of functionality missing really so much better"

Using NoScript does not mean "functionality missing". It's just disabled by default - and personally, most of the time I find the "missing functionality" unnecessary, and in most of the places where it's not, I already have a whitelist entry in place.

HN works perfectly. Lots of pages even work better with NoScript, as it basically cuts a lot of bloat. If some don't, enabling JS is one click away.

Anyway, there are lots of pages which are kind of expected not to work correctly without JavaScript - and that's fine. But if, for instance, Hacker News wouldn't work in its current form without JS, that would be a valid thing to complain about, because there's absolutely no reason for that. There are even some pages that implement a loading animation in JS in such a way that when JS is disabled, the whole page is obstructed by a loader that never disappears - and that's just reckless coding that should be fixed.


Fewer ads, less tracking, fewer lightbox modals, fewer hover context menus, fewer elements that move when you're trying to click something and make you click something else, fewer stupid "intelligent" tooltip helpers, fewer weird fonts, less frontend gimmick of the week, acceptable page load time. Basically I hate everything on every webpage except the text and maybe sometimes a picture. Most content should degrade gracefully to text, because most sites aren't doing anything truly special enough to be an exception. When a site doesn't do this it makes me mad. Am I entitled? Yeah, probably, but that's my explanation.

I also suspect that a lot of people, like myself, still consider the WWW a linked document system rather than a fat client application terminal, we just want documents with links to other documents, and sometimes simple forms.


There's always one like you in threads like this.

<sigh>


Umm, where's the fun in that?


I'm surprised they're not doing the streaming for synchronous code yet considering it's just parsing and compiling and not yet executing. Perhaps in a future update.

I'd still love to see some sort of bytecode for the web, though. But I love the advancements all the same.


like Java? :)


Yup, or .NET with its MSIL. Then they could change the JavaScript standard every year if they wanted to and it wouldn't matter, as it would compile to the same bytecode. Naturally, bytecode improvements would happen slowly, but that's not as big of a deal.

It just makes sense to me, but I haven't seen anyone working on it, just occasionally talking about it. So maybe it doesn't make sense to the people working on web browsers, I dunno.


> Yup, or .NET with its MSIL. Then they could change the JavaScript standard every year if they wanted to and it wouldn't matter, as it would compile to the same bytecode.

Not sure why you need bytecode for that. You could just have a relatively stable base JS standard as the common target, and then a more rapidly changing set of advanced/experimental JS features implemented through compilers that translate the resulting enhanced JS-plus-extra-features languages back down to base JS. Heck, you could do the same thing for lots of other, non-JS languages that compile to base JS. In fact, that's exactly what is done today.

Bytecode isn't magic; it's just another language you have to write an interpreter for -- just not a human-readable one.
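For example, a compiler like Babel takes newer-standard source and emits the base-JS equivalent, roughly along these lines (a hand-written approximation, not actual compiler output):

    // Newer-standard source:
    const add = (a, b) => a + b;
    let label = `sum: ${add(1, 2)}`;

    // Roughly what a compile-to-base-JS tool emits:
    var add = function (a, b) { return a + b; };
    var label = "sum: " + add(1, 2);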


I wonder how they persist the cached code, and if it is signed etc. Seems like it could be an attack vector if it was altered on the filesystem or something like that.

I'm sure they thought of that, so I'm just curious. I suppose I could look through the Chromium source but I'm probably not actually smart enough to figure it out.


If someone else has write access to your filesystem, you're already screwed.


Well, sort of, but it's not that simple. Let's say they write the cached data to $somedir/jitcache, and they also expose some other API that indirectly allows you to store data at $somedir/foobar. If they fumble the checks in this other API so you can manage to write arbitrary data to $somedir/jitcache instead of $somedir/foobar, an attacker suddenly has a powerful tool to escape the JS sandbox, even if the flaw didn't allow them to write outside of $somedir.

I'm not suggesting it's likely that this will be an issue, but it's a legitimate question to ask, as there's a long history of people combining flaws that let them write to a limited set of locations with other flaws that let them trick some tool into executing whatever they manage to get written to disk.


"so you can manage to write arbitrary data to $somedir/jitcache instead of $somedir/foobar"

That's already escaping from the sandbox, which would most likely cause much more trouble. By that logic we could worry about absolutely anything ;)


This shows that being lazy can actually make you faster. I wish more languages had good support for laziness.


Chromium?


Chromium is the open-source version of Google Chrome.


More than that, it's a project which a number of other browsers also rely on and contribute to, for example Opera.


JavaScript?


Ah, I clicked expecting developer-oriented techniques.

Does anyone have any resources they can recommend about writing faster, more efficient JavaScript? I'm not a developer; I learned JavaScript to hack some things together, and obviously I taught myself awful practices, but I can't identify/change them.


I am not a full-time JavaScript dev, but some general advice:

Code review is a great tool for learning good practices. Get your code reviewed by a good developer and work on the comments. Sometimes review comments are just the personal views of the commenter, but in general they can be very helpful.

You said that you taught yourself awful practices. If you can identify a practice as awful, then you can probably fix it without anybody else's help by simply googling. In my opinion the real challenge is identifying whether a piece of code is awful or not; once you can do that, correcting it is much easier.


Follow Paul Irish and Addy Osmani for starters. Superherojs.com has a list of stuff to read in their "how browsers work" section... other than that, even the experts admit a lot of it is black magic - there are some fundamental things (DOM access and writes are slow), but it's hard to nail down concrete rules.


Superherojs seems to be down for me.


Hmm, likewise. Too bad; that was one of my favorite references (though it was getting a bit dated).


Not quite as convenient, but at least you can still find the links here:

https://github.com/superherojs/superherojs/blob/gh-pages/ind...


This is not only JavaScript-related, but good web page speed practice in general. Though it is old, most of it is still fair game. http://yslow.org/



