Concatenating JS doesn't necessarily make things faster, since (as the article mentions) modern browsers will download up to 8 assets in parallel. If the JS is delivered as separate files, the browser doesn't have to wait for all of it to be downloaded before parsing and executing any of it; the files can be parsed (and potentially executed) in parallel.
AFAIK the scripts cannot be parsed and executed in parallel since they're not explicitly async. The browser doesn't know if any of the following scripts may depend on the previous ones (think jQuery), so it just downloads them and then waits to parse and execute them in order, blocking rendering.
It's true that delivering them in parallel may in some cases reduce the actual download time, but given the small file sizes Apple is serving, the connection overhead (TCP handshake, HTTP headers, slow start, ...) just makes it worse. Most browsers (especially mobile ones) aren't even going to download more than 4-6 files at a time, since they're not using domain sharding.
>It's true that delivering them in parallel may in some cases reduce the actual download time, but given the small file sizes Apple is serving, the connection overhead (TCP handshake, HTTP headers, slow start, ...) just makes it worse. Most browsers (especially mobile ones) aren't even going to download more than 4-6 files at a time, since they're not using domain sharding.
So what I'm wondering is: if you target that 4-6 connection limit and get all your JS files to load in one connection group, can you benefit from the parallel downloading while not blocking anything? Like this: http://i.imgur.com/iuriZ.png
Seems to me like the handshake issue goes away if all the connections are made at once, and you're able to cache files more efficiently. If you make a change to one or two files you can just invalidate those caches rather than the cache of your single mega file. Just some thoughts.
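Roughly what I mean, as a sketch (file names and version strings invented):

    <!-- Rarely-changing libraries: cached for a long time, only invalidated
         when the version in the filename changes -->
    <script src="/js/vendor-1.8.2.min.js"></script>

    <!-- Site-wide code that changes occasionally -->
    <script src="/js/common-20120814.min.js"></script>

    <!-- Page-specific code that changes often -->
    <script src="/js/store-20120820.min.js"></script>

    <!-- Three requests instead of one mega file: they download in parallel
         within the connection limit, and a change to the page script only
         invalidates the cache of the last file -->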
>The browser doesn't know if any of the following scripts may depend on the previous ones
Yes it does. That's why the way they're ordered on the page is important. If I include jQuery after I'm trying to make jQuery function calls, it won't work (in all browsers I've tested, anyway).
You didn't get my point, probably because of my terrible English :) The browser has no way of knowing in advance whether a script depends on any of the previous ones, so it cannot optimize this specific case and parse/execute that script before the earlier ones have been executed. You can manually tell the browser to load the scripts and execute them as soon as they're ready, in no particular order, via the async attribute. If you don't, the browser is going to assume that the order is important and execute them strictly in order.
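A minimal illustration of the difference (filenames made up):

    <!-- Default: the files may download in parallel, but they are parsed
         and executed strictly in document order, because app.js might
         depend on jquery.js -->
    <script src="/js/jquery.js"></script>
    <script src="/js/app.js"></script>

    <!-- async: each script executes as soon as it has downloaded, in no
         guaranteed order; only safe for scripts with no dependencies on
         each other -->
    <script src="/js/analytics.js" async></script>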
The best advice I could ever offer on this topic is to know your audience. If you have a server in the U.S. and you have a large customer base in China you will generally be better served by fewer larger requests than several smaller requests as the latency on each request is often the biggest killer of performance.
Now, I may be wrong on this, but here are my thoughts. If the requests are asynchronous, yes, more requests will be much worse over long distances. However, as you can see here http://i.imgur.com/iuriZ.png, Chrome, Firefox, and IE since version 8 (IE 6 and 7 were limited to 2 connections, I think) will establish multiple simultaneous connections to each domain serving assets. So what I'm saying is that the best strategy might be serving ~4 or 5 JS and CSS files, with likely-to-change files separated from unlikely-to-change ones. Serving different asset groups from different domains would speed this up even more. Just some (disconnected) thoughts.
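For example (hostnames hypothetical):

    <!-- Each hostname gets its own pool of parallel connections, so more
         assets can be fetched at once -->
    <link rel="stylesheet" href="http://static1.example.com/css/site.css">
    <script src="http://static1.example.com/js/libs.min.js"></script>
    <script src="http://static2.example.com/js/app.min.js"></script>
    <!-- The scripts still execute in document order; the cost is an extra
         DNS lookup and TCP handshake per extra hostname -->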
Even if it were faster in modern browsers with a high speed connection (see MartinodF's comments) it'd still be worse on high latency connections like mobile. The number of requests has a huge impact on page speed when you get 100ms+ of latency per request.
Concatenating JS doesn't make sense when you are a site like Apple.
They have a lot of mini sites that are quite different from each other, e.g. Store, iPad, Home Page, Support, Developer, that you may access directly and which may have 90% of their JS in common. Keeping the files separate means that shared 90% stays cached as you move between sections, so they trade off first load for subsequent visits.
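Sketched out (paths are made up, not Apple's real ones):

    <!-- Every mini site references the same shared files, so the common
         90% stays cached as you move between sections -->
    <script src="/global/js/core.js"></script>
    <script src="/global/js/effects.js"></script>

    <!-- Only the section-specific file differs -->
    <script src="/store/js/storefront.js"></script>

Concatenating per section would bundle that shared code into every mini site's single file and force it to be downloaded again on the first visit to each new section.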
"External JavaScript files in <head> – All of those JavaScript files nested inside the <head> tag, further delaying the start of page rendering."
Sometimes the head is the best place to put the javascript. I didn't look into what javascript they are loading there, but there are times the user experience is improved by it.
That actually makes <head> the preferred place for loading scripts.
Also, I find it a bit extreme to recommend putting inline JavaScript in a <script> tag. I'm OK with trying to maximize performance, but please, do not recommend producing unseparated and unclean code. Concatenating the JavaScript into a single file (and compressing it) is more than enough; having one single request to get all the JavaScript is not so bad.
What's unseparated about it? Maintenance doesn't have to be performed on the rendered final product. Template systems and processing pipelines are pretty common for letting devs keep code structured in useful ways while still allowing for optimal end results.
For example, setting up event delegation for enhanced elements is best done in the <head> before the elements load. If you set up all your user-event listeners at the bottom of <body>, then there will be a short window where your users are interacting with elements which will either A) do the non-enhanced behaviour or B) do nothing whatsoever. Neither A nor B is ideal.
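A bare-bones version of that pattern (the data-action convention is just an example):

    <head>
      <script>
        // Delegate clicks at the document level before any elements exist;
        // clicks on elements rendered later still bubble up to document,
        // so the enhanced behaviour works as soon as an element appears.
        document.addEventListener('click', function (e) {
          var el = e.target;
          while (el && el !== document) {
            if (el.getAttribute && el.getAttribute('data-action') === 'open-menu') {
              e.preventDefault();
              // ... enhanced behaviour here ...
              return;
            }
            el = el.parentNode;
          }
        }, false);
      </script>
    </head>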
Google Analytics is generally the only JS I put in the head. If some code is absolutely required for the page to work, though, then it may make sense to put it ahead of the content.
What if you are displaying different elements based on the user's location/country (e.g. currency, contact details), and you use JavaScript to detect and do this?
You wouldn't want the page to load first and then have these elements appear.
@jaffathecake I supplied a single example. Also, you might have a very JavaScript-heavy web application whose users will have the files cached 99.9% of the time, so retrieving the files isn't an issue.
So you would delay the whole page just because of some location specific elements? How about render what you got first so the user has something to look at and then fill in your specific bits later?
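Something along these lines, as a sketch (the endpoint and element are invented):

    <!-- Render a sensible default immediately... -->
    <span id="price">$499</span>

    <script>
      // ...then fetch the visitor's region after load and swap in the
      // local details, instead of blocking the whole page on detection.
      window.addEventListener('load', function () {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/api/region'); // hypothetical endpoint
        xhr.onload = function () {
          var info = JSON.parse(xhr.responseText); // e.g. {"localPrice": "499 EUR"}
          document.getElementById('price').textContent = info.localPrice;
        };
        xhr.send();
      });
    </script>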
Possibly. I am just saying that you might have things that are more important to do before the page loads that might warrant putting it in the head. It's not a set rule that javascript in the head is always bad, you just need to know the tradeoff and make a decision.
Sometimes you render content with JavaScript and want to avoid FOUC (Flash of Unstyled Content); also, sometimes you have JavaScript polyfills that you want loaded as soon as possible.
But I assume that in Apple's case, they are doing it even when they shouldn't be.
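For the FOUC/polyfill case, the typical shape is something like this (class name and file path are just examples):

    <head>
      <script>
        // Flag that JS is available before anything renders, so CSS rules
        // scoped to .js can hide the pieces that script will re-render,
        // avoiding a flash of the unenhanced content.
        document.documentElement.className += ' js';
      </script>

      <!-- A polyfill that later scripts assume is present has to load
           (and block) up here rather than at the bottom of the body -->
      <script src="/js/json2.js"></script>
    </head>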
The mod_pagespeed service does some of these optimisations (plus others). It's almost certainly applying JS concatenation, and in the end there's not much difference in load time.
Question: have the average file sizes of websites gone up proportionately to bandwidth increases? It seems to me like bandwidth has increased enormously, but file sizes have capped out between 600KB and 1MB. Shaving tenths of a second off page loads might not be as important as improving the speed of render.
I can't speak for every country, but as I work for a French telecom, I happen to know the figures for this on our own network.
Firstly, the vast majority (around 80%) of browsing on mobile devices is going through wifi. It turns out that at least in France, most people that use mobile devices seem to use those devices in areas where they have wifi available (at home, or in the office) most of the time. This of course means that as soon as you increase landline data speeds, you also increase mobile data speeds, because most mobile usage is routed through landlines.
Secondly, even when clients are out and about, they often have 3G coverage which is not too far off wifi speeds (a typical 3G connection has about a third of the bandwidth of a typical landline/wifi connection). OK, it's a third of the speed, but it's the same order of magnitude, and it only applies about 20% of the time.
What this means is that a mobile user is getting data at (100 * 0.8) + (33 * 0.2) = 86% bandwidth of a landline connection. This means that a 16% increase in landline bandwidth would be enough to balance out everyone moving to mobile devices. Landline bandwidth has of course improved a lot more than 16% in the last few years, and not everyone has moved to exclusively mobile device web-browsing. So yes, I think it's fair to say that the average bandwidth of web users has gone up, at least in France.
1/3 is not within the same order of magnitude in base 3 or lower; I tend to think in base 2 at least as much as base 10 when considering order-of-magnitude issues.
Put another way, if I coded something where the performance delta was 1/3 or 3x, I would certainly be complaining or bragging about an order of magnitude change.
Order of magnitude without any base specified is commonly understood as being base 10, and it is extremely confusing to use other bases, as no one else uses them as 'orders of magnitude'.
You can talk about logarithmic scales without risk of confusion.
On T-Mobile in the US, my 3G (HSPA+) connection frequently sees 13Mbps down and ~5Mbps up with latencies of <50ms. That's higher than the average broadband connection in the US, AFAIK (for reference, my home wired connection is 6Mbps up and down).
I think the author merged two terms in his analysis of performance that are really separate:
- Response Time (how long does it take per task)
- Throughput (how many tasks can we complete per unit of time)
For some of us, getting better response times means requests are in and out quicker, so it's easier to serve more throughput. For Apple, they have plenty of capacity, so new product launches aren't really a motivation to improve response time. They will be able to support that volume anyway. The motivation would be to improve user experience.
For me, the most noticeable thing is the time it takes to load that enormous .png file. It's only 500KB but it still took 2 or 3 seconds to load for some reason.
Ironically, Google Pagespeed tells us this about zoompf.com:
High priority
Compressing the following resources with gzip could reduce their transfer size by 235.5KiB (72% reduction).
Compressing http://zoompf.com/js/jquery-ui.min.js could save 142.7KiB (74% reduction).
Compressing http://zoompf.com/js/jquery.min.js could save 62.5KiB (66% reduction).
Compressing http://zoompf.com/wp-content/themes/NewZoompf/style.css could save 24.9KiB (77% reduction).
Compressing http://zoompf.com/js/animations.js could save 4.0KiB (75% reduction).
Compressing http://zoompf.com/.../wp-page-numbers.css could save 1.4KiB (73% reduction).
Medium priority
The following cacheable resources have a short freshness lifetime. Specify an expiration at least one week in the future for the following resources:
http://zoompf.com/images/background_pages.png (expiration not specified)
http://zoompf.com/images/clipboard.png (expiration not specified)
http://zoompf.com/images/handles.png (expiration not specified)
http://zoompf.com/images/pages.png (expiration not specified)
http://zoompf.com/images/report.png (expiration not specified)
http://zoompf.com/images/streak.png (expiration not specified)
http://zoompf.com/js/animations.js (expiration not specified)
http://zoompf.com/js/jquery-ui.min.js (expiration not specified)
http://zoompf.com/js/jquery.min.js (expiration not specified)
http://zoompf.com/.../wp-page-numbers.css (expiration not specified)
http://zoompf.com/wp-content/themes/NewZoompf/style.css (expiration not specified)
http://zoompf.com/.../freedownload.jpg (expiration not specified)
http://zoompf.com/.../freeperformancescan.jpg (expiration not specified)
http://zoompf.com/.../logo-disrupt.jpg (expiration not specified)
http://zoompf.com/.../logo-virgin-america.png (expiration not specified)
http://zoompf.com/.../social-icons-32.png (expiration not specified)
http://zoompf.com/.../video-icon.png (expiration not specified)
They basically don't follow their own advice. Credibility -> toilet.
That's a subtle fallacy. It confused me at first, then I realised that 'pointing out hypocrisy' isn't the fallacy, it's 'claiming the argument is wrong because it's hypocritical' that's the fallacy.
edit: on further examination, the examples given in the Wikipedia article are really bad, because they're merely pointing out hypocrisy ("but you're -foo-") rather than claiming the argument is wrong. Person 2 in each example could quite happily be in full agreement with the argument and make the same comment.
Yes, exactly. Maybe I should edit the article to include this, it's an important aspect. Sure, the person might be hypocritical, but this doesn't invalidate the argument.
That's silly. Often when giving advice on complex issues you're simply providing a heuristic. Start here; there might be some edge cases involved, but 95% of the time this is the way to go.
Most of the advice they gave affects performance for a single user, not the performance of the site when scaling to many users. Only the number of requests for JavaScript files would actually affect the scaling of the site, and not by much, since they are just static files.
I didn't confuse them - I explicitly distinguished between them. Scalability is a category of performance - it is performance at scale. Most of these suggestions may help with individual browser loading, but will not make a significant difference in scalability. However, the premise for the investigation was the high level of traffic that was imminent - a scalability challenge.
If you don't want to use Zoompf (even if they don't follow their own advice), you should take a look at Steve Souders' blog for web performance: http://www.stevesouders.com/blog/
I'm using the RequestPolicy addon for Firefox (I realize I'm in a tiny minority...). When visiting a website that uses domain sharding for the first time, that means I have to allow cross-site requests to the other domains.
My plea to web admins: try to at least make the names of the shards recognizable. I've seen sites where the domain is essentially "mynewspaper.com" and it needs data from "xac1h139a.com" to display correctly. Now go find the right domain to allow among the dozens of cross-site requests that such sites are often making...
Edit: This is a comment on his suggestion to use domain sharding. His site is fine.