How fast is apple.com? (zoompf.com)
81 points by leak on Oct 24, 2012 | 71 comments



Concatenating JS doesn't necessarily make things faster, since (as the article mentions) modern browsers will download up to 8 assets in parallel. If the JS is delivered as separate files, the browser doesn't have to wait for all of them to be downloaded before parsing and executing any of them; the files can be parsed (and potentially executed) in parallel.


AFAIK the scripts cannot be parsed and executed in parallel since they're not explicitly async. The browser doesn't know if any of the following scripts may depend on the previous ones (think jQuery), so it just downloads them and then waits to parse and execute them in order, blocking rendering.

It's true that delivering them in parallel may in some cases reduce the actual download time, but given the small file sizes Apple is serving, the connection overhead (TCP handshake, HTTP headers, slow start, ...) just makes it worse. Most browsers (especially mobile ones) aren't even going to download more than 4-6 files at a time, since they're not using domain sharding.


>It's true that delivering them in parallel may in some cases reduce the actual download time, but given the small file sizes Apple is serving, the connection overhead (TCP handshake, HTTP headers, slow start, ...) just makes it worse. Most browsers (especially mobile ones) aren't even going to download more than 4-6 files at a time, since they're not using domain sharding.

So what I'm wondering is: if you target that 4-6 connection limit and get all your JS files to load in one connection group, can you benefit from the parallel downloading while not blocking anything? Like this: http://i.imgur.com/iuriZ.png

Seems to me like the handshake issue goes away if all the connections are made at once, and you're able to cache files more efficiently. If you make a change to one or two files you can just invalidate those caches rather than the cache of your single mega file. Just some thoughts.
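Concretely, I'm picturing something like this (file names and version stamps made up):

    <!-- stable group: big libraries, cached with far-future headers -->
    <script src="/js/vendor-v3.js"></script>
    <script src="/js/framework-v3.js"></script>
    <!-- volatile group: only these URLs change on a deploy -->
    <script src="/js/site-v127.js"></script>
    <script src="/js/page-home-v127.js"></script>
The browser can open the connections for all of them at once, and a deploy only invalidates the cache of the volatile group.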


>The browser doesn't know if any of the following scripts may depend on the previous ones

Yes it does. That's why the way they're ordered on the page is important. If I include jQuery after I'm trying to make jQuery function calls, it won't work (in all browsers I've tested, anyway).
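e.g. (paths made up):

    <script src="/js/jquery.min.js"></script>
    <!-- fine: jQuery has already been executed by the time this is parsed -->
    <script src="/js/my-plugin.js"></script>
Swap the two lines and the plugin throws, because $ isn't defined yet.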


You didn't get my point, probably because of my terrible English :) The browser has no way of knowing in advance whether a script depends on any of the previous ones, so it cannot optimize this specific case and parse/execute the script before the other ones have been executed. You can manually tell the browser to load the scripts and execute them as soon as they're ready, in no particular order, via the async attribute. If you don't, the browser is going to assume that the order is important and load them one by one.
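For example (file names made up):

    <!-- downloaded in parallel, but parsed/executed strictly in this order, blocking rendering -->
    <script src="/js/jquery.js"></script>
    <script src="/js/carousel.js"></script>

    <!-- async: each script runs as soon as it arrives, in no guaranteed order, without blocking -->
    <script src="/js/analytics.js" async></script>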


That becomes true when your scripts are massive. With the sizes Apple are using, the connection & HTTP overhead become the bottleneck.


Well, images.apple.com is Akamai'd (22ms for me) and it also supports keep-alive.


The best advice I could ever offer on this topic is to know your audience. If you have a server in the U.S. and a large customer base in China, you will generally be better served by fewer, larger requests than by many smaller ones, as the latency on each request is often the biggest killer of performance.


Now, I may be wrong about this, but here are my thoughts. If the requests are made sequentially, yes, more requests will be much worse over long distances. However, as you see here http://i.imgur.com/iuriZ.png, Chrome, FF, and IE since 8 (6 and 7 were limited to 2 connections, I think) will establish multiple simultaneous connections to each domain serving assets. So what I'm saying is that the best strategy might be serving ~4 or 5 JS and CSS files, with likely-to-change files separated from unlikely-to-change files. Serving different asset groups from different domains would speed this up even more. Just some (disconnected) thoughts.

Also, I'm sure Apple has servers in Asia.


Even if it were faster in modern browsers with a high speed connection (see MartinodF's comments) it'd still be worse on high latency connections like mobile. The number of requests has a huge impact on page speed when you get 100ms+ of latency per request.


When a browser encounters a <script src> tag it will usually block until the script is loaded before moving on (when async isn't used). The result is a staircase pattern: http://www.webpagetest.org/result/121024_GH_A1Q/1/details/

For assets like images it would allow parallel connections until the configured limit.


The JS assets 4 through 8 all commence loading at the same time.


Concatenating JS doesn't make sense when you are a site like Apple.

They have a lot of mini sites that are quite different from each other (e.g. Store, iPad, Home Page, Support, Developer), that you may access directly, and which may have 90% of their JS in common. So they trade off first load against subsequent visits.


I would agree with you except it seems they aren't setting the caching headers correctly either.
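For static assets you'd expect something along these lines (a sketch, not what apple.com actually sends):

    HTTP/1.1 200 OK
    Content-Type: application/javascript
    Cache-Control: public, max-age=31536000
    Expires: Thu, 24 Oct 2013 20:00:00 GMT
With far-future caching like that, the 90% of JS the mini sites share would only be fetched once.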


"External JavaScript files in <head> – All of those JavaScript files nested inside the <head> tag, further delaying the start of page rendering."

Sometimes the head is the best place to put the javascript. I didn't look into what javascript they are loading there, but there are times the user experience is improved by it.


They could be using http streaming, like the feature introduced in rails 3.1 : http://weblog.rubyonrails.org/2011/4/18/why-http-streaming/

That actually makes <head> the preferred place for loading scripts.

Also, I find it a bit extreme to recommend putting inline JavaScript in a <script> tag. I'm OK with trying to maximize performance, but please, do not recommend producing unseparated and unclean code. Concatenating the JavaScript into a single file (and compressing it) is more than enough; having a single request to get all the JavaScript is not so bad.


I think he meant something like:

    <script><%= File.read("file.js") %></script>
Not that they should literally move the script inline.


In which case loading the scripts async would be the right thing to do. Early loading, but non render-blocking http://www.whatwg.org/specs/web-apps/current-work/multipage/...


What's unseparated about it? Maintenance doesn't have to be performed on the rendered final product. Template systems and processing pipelines are pretty common for letting devs keep code structured in useful ways while still allowing for optimal end results.


For example?


For example, setting up event delegation on enhanced elements is best done in the HEAD before the elements load. If you set up all your user-event listeners at the bottom of BODY, then your users will have a short window where the elements they're interacting with will either A) do the non-enhanced behaviour or B) do nothing whatsoever. Neither A nor B is ideal.
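A minimal sketch of what I mean (class name made up, old-IE attachEvent omitted):

    <script>
      // runs in HEAD, before any of the enhanced elements exist in the DOM
      document.addEventListener('click', function (e) {
        if (e.target.className === 'buy-button') {
          e.preventDefault();
          // enhanced behaviour here, even though the rest of the JS hasn't loaded yet
        }
      }, false);
    </script>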


Nah, have a small inline script that records events and replays them to your full script when it loads.

Or, if something only works with js, don't show it until the scripts have loaded, but let plain content load & render in the meantime.
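A rough sketch of the record-and-replay idea (names made up):

    <script>
      // tiny inline stub in HEAD: remember interactions until the real code arrives
      window._pendingClicks = [];
      document.addEventListener('click', function (e) {
        if (!window.appReady) {
          e.preventDefault();
          window._pendingClicks.push(e.target);
        }
      }, false);
    </script>
When the full script loads it sets window.appReady and walks _pendingClicks, handing each recorded element to the real handlers.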


Yeh, both viable solutions. Tbh, I can't think of any other reason to have JS in the HEAD.


Google Analytics is generally the only JS I put in the head. If some code is absolutely required for the page to work though then it may make sense to put it ahead of the content.


What if you are displaying different elements based on the user's location/country (e.g. currency, contact details), and you use JavaScript to detect and do this?

You wouldn't want the page to load first and then this appear.


@jaffathecake I supplied a single example. Also, you might have a very JavaScript-heavy web application where users will have the files cached 99.9% of the time, so retrieving the files isn't an issue.


So you would delay the whole page just because of some location-specific elements? How about rendering what you've got first, so the user has something to look at, and then filling in your specific bits later?
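e.g. something like this after the initial render (the endpoint and class names are made up):

    // fetch the visitor's locale once the page is already visible, then patch prices in place
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/geo', true);
    xhr.onload = function () {
      var geo = JSON.parse(xhr.responseText);
      var prices = document.querySelectorAll('.price');
      for (var i = 0; i < prices.length; i++) {
        prices[i].textContent = geo.symbol + prices[i].getAttribute('data-amount');
      }
    };
    xhr.send();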


Possibly. I am just saying that you might have things that are more important to do before the page loads that might warrant putting it in the head. It's not a set rule that javascript in the head is always bad, you just need to know the tradeoff and make a decision.


@redguava ...but you can't come up with a single example?


Sometimes you render content with JavaScript and want to avoid FOUC (Flash of Unstyled Content); also, sometimes you have JavaScript polyfills that you want loaded as soon as possible.
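For the FOUC case, the usual trick is a one-liner at the top of HEAD (the class names are just a convention, a la Modernizr):

    <html class="no-js">
    <head>
      <script>
        // runs before anything in BODY exists, so CSS keyed off ".js" can
        // hide script-rendered areas without them ever flashing unstyled
        document.documentElement.className =
          document.documentElement.className.replace('no-js', 'js');
      </script>
      <!-- rest of HEAD and BODY as usual -->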

But I assume that in Apple's case, they are doing it even when they shouldn't be.


And they didn't say anything about what Apple did to make it fast. This, for instance:

  Betty:~ lelf$ dig www.apple.com

  […]

  ;; ANSWER SECTION:
  www.apple.com.		1434	IN	CNAME	www.isg-apple.com.akadns.net.
  www.isg-apple.com.akadns.net. 21 IN	CNAME	www.apple.com.edgekey.net.
  www.apple.com.edgekey.net. 21234 IN	CNAME	e3191.c.akamaiedge.net.
  e3191.c.akamaiedge.net.	20	IN	A	23.32.109.15

  […]
(If you really don't understand: http://en.wikipedia.org/wiki/Akamai_Technologies http://en.wikipedia.org/wiki/Content_delivery_network)

EDIT: formatting, links


I would really prefer to see how much improvement there is if you make all the changes. That would put into perspective what the optimisation means.


The mod_pagespeed service does some of these optimisations (plus others). It's almost certainly applying JS concatenation, and in the end there's not much difference in load time:

http://www.webpagetest.org/result/121024_K0_78b60a17cae988da...

EDIT: Actually, it is applying some concatenation, but it's not concatenating everything: http://www.webpagetest.org/result/121024_35_049258ec5056f5b4...


Question: have the average file sizes of websites gone up proportionally to bandwidth increases? It seems to me like bandwidth has increased enormously, but file sizes have capped out between 600 KB and 1 MB. Shaving tenths of a second off page loads might not be as important as improving the speed of render.


Has the average bandwidth of web users really gone up, if you include the increase of mobile usage?


I can't speak for every country, but as I work for a French telecom, I happen to know the figures for this on our own network.

Firstly, the vast majority (around 80%) of browsing on mobile devices is going through wifi. It turns out that at least in France, most people that use mobile devices seem to use those devices in areas where they have wifi available (at home, or in the office) most of the time. This of course means that as soon as you increase landline data speeds, you also increase mobile data speeds, because most mobile usage is routed through landlines.

Secondly, even when clients are out and about, they often have 3G coverage which is not too far off wifi speeds (a typical 3G connection has about a third of the bandwidth of a typical landline/wifi connection). OK, it's a third of the speed, but it's the same order of magnitude, and it only applies about 20% of the time.

What this means is that a mobile user is getting data at (100 * 0.8) + (33 * 0.2) ≈ 87% of the bandwidth of a landline connection. This means that a roughly 15% increase in landline bandwidth would be enough to balance out everyone moving to mobile devices. Landline bandwidth has of course improved a lot more than that in the last few years, and not everyone has moved to exclusively mobile web browsing. So yes, I think it's fair to say that the average bandwidth of web users has gone up, at least in France.


1/3 is not the same order of magnitude in base 3 or lower; I tend to think in base 2 at least as much as base 10 when considering order-of-magnitude issues.

Put another way, if I coded something where the performance delta was 1/3 or 3x, I would certainly be complaining or bragging about an order of magnitude change.


Order of magnitude without any base specified is commonly understood as being base 10, and it is extremely confusing to use other bases, as no one else uses them as 'orders of magnitude'.

You can talk about logarithmic scales without risk of confusion.


On T-Mobile in the US, my 3G (HSPA+) frequently sees 13 Mbps down and ~5 Mbps up with latencies of <50ms. That's higher than the average broadband connection in the US, AFAIK (for reference, my home wired connection is 6 Mbps up and down).


I'm staying at a Hyatt right now for my job. My cell phone has faster internet than the hotel does.


Sizes have kept pace with broadband to ensure most of the internet still feels like dialup.


No kidding. Today's front page at theverge.com is 11MB. That's something like 15 seconds of transfer time alone at 6 Mbit/s.


More bandwidth is not the answer for webpages...

The overhead of TCP and HTTP makes latency the limiting factor - see Mike Belshe's "More Bandwidth Doesn't Matter (Much)" - http://www.belshe.com/2010/05/24/more-bandwidth-doesnt-matte...


Yes, page sizes have steadily increased over time. See http://httparchive.org/


I think the author merged two terms in his analysis of performance that are really separate:

- Response time (how long each task takes)

- Throughput (how many tasks can be completed per unit of time)

For some of us, getting better response times means getting in and out quicker, so it's easier to serve more throughput. For Apple, they have plenty of capacity, so new product launches aren't really a motivation to improve response times. They will be able to support that volume anyway. The motivation would be to improve user experience.


Well, zoompf.com takes a minute to respond at the moment. So maybe they're not the ones to judge.


Tu quoque...

However, it is curious to discover they have quite hefty JS files loading in their own HEAD element.


If Hitler called you cruel for kicking a dog, he'd still be right.


I was thinking the same thing, but still this doesn't make what he says any less true.


For me, the most noticeable thing is the time it takes to load that enormous .png file. It's only 500KB but it still took 2 or 3 seconds to load for some reason.


Ironically, Google Pagespeed tells us this about zoompf.com:

High priority

    Compressing the following resources with gzip could reduce their transfer size by 235.5KiB (72% reduction).
    Compressing http://zoompf.com/js/jquery-ui.min.js could save 142.7KiB (74% reduction).
    Compressing http://zoompf.com/js/jquery.min.js could save 62.5KiB (66% reduction).
    Compressing http://zoompf.com/wp-content/themes/NewZoompf/style.css could save 24.9KiB (77% reduction).
    Compressing http://zoompf.com/js/animations.js could save 4.0KiB (75% reduction).
    Compressing http://zoompf.com/.../wp-page-numbers.css could save 1.4KiB (73% reduction).
Medium Priority

    The following cacheable resources have a short freshness lifetime. Specify an expiration at least one week in the future for the following resources:
    http://zoompf.com/images/background_pages.png (expiration not specified)
    http://zoompf.com/images/clipboard.png (expiration not specified)
    http://zoompf.com/images/handles.png (expiration not specified)
    http://zoompf.com/images/pages.png (expiration not specified)
    http://zoompf.com/images/report.png (expiration not specified)
    http://zoompf.com/images/streak.png (expiration not specified)
    http://zoompf.com/js/animations.js (expiration not specified)
    http://zoompf.com/js/jquery-ui.min.js (expiration not specified)
    http://zoompf.com/js/jquery.min.js (expiration not specified)
    http://zoompf.com/.../wp-page-numbers.css (expiration not specified)
    http://zoompf.com/wp-content/themes/NewZoompf/style.css (expiration not specified)
    http://zoompf.com/.../freedownload.jpg (expiration not specified)
    http://zoompf.com/.../freeperformancescan.jpg (expiration not specified)
    http://zoompf.com/.../logo-disrupt.jpg (expiration not specified)
    http://zoompf.com/.../logo-virgin-america.png (expiration not specified)
    http://zoompf.com/.../social-icons-32.png (expiration not specified)
    http://zoompf.com/.../video-icon.png (expiration not specified)
They basically don't follow their own advice. Credibility -> toilet.
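(For what it's worth, gzip is just content negotiation between browser and server, roughly:

    GET /js/jquery.min.js HTTP/1.1
    Accept-Encoding: gzip, deflate

    HTTP/1.1 200 OK
    Content-Encoding: gzip
    Vary: Accept-Encoding
which makes it an odd thing for a performance company to skip.)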


Ah, the ole "tu quoque" fallacy.

http://en.wikipedia.org/wiki/Tu_quoque


That's a subtle fallacy. It confused me at first, then I realised that 'pointing out hypocrisy' isn't the fallacy, it's 'claiming the argument is wrong because it's hypocritical' that's the fallacy.

edit: on further examination, the examples given in the wikipedia article are really bad - because they're merely pointing out hypocrisy ("but you're -foo-") rather than claiming the argument is wrong. Person 2 in each example could quite happily be in full agreement with the argument and make the same comment.


Not following one's own performance advice isn't hypocrisy. Performance might just not matter as much to Zoompf.


Yes, exactly. Maybe I should edit the article to include this, it's an important aspect. Sure, the person might be hypocritical, but this doesn't invalidate the argument.


It's like a chain smoker telling you that smoking is bad.


Yep, and it is bad.


That doesn't make their advice wrong.


That however makes me skip their advice forever.


That's silly. Often when giving advice on complex issues your simply providing a heuristic. Start here, there might be some edge cases involved but 95% of the time this is the way to go.


I am ignoring your advice due to your grammatical mistake.


I did not give advice.


Their site may just not be worth optimizing, as it's not as popular as apple.com.


Most of the advice they gave affects performance for a single user, not the site's ability to scale to many users. Only the number of requests for JavaScript files would actually affect the scaling of the site, and not by much, since they are just static files.


You're confusing performance and scalability.

You can have a slow-loading site that will scale to hundreds of millions of users.

You can have a very fast site that will die under a thousand users.


I didn't confuse them - I explicitly distinguished between them. Scalability is a category of performance - it is performance at scale. Most of these suggestions may help with individual browser loading - but will not make a significant difference in scalability. However, the premise for the investigation was the high level of traffic that was imminent - a scalability challenge.


If you don't want to use Zoompf (even if they don't follow their own advice), you should take a look at Steve Souders' blog for web performance: http://www.stevesouders.com/blog/

He literally wrote the book on it.


Thanks for this, I hadn't seen it before and it is very informative!


Just a case of the cobbler's children having no shoes, no?


What is this?? Someone is spamming here.


I'm using the RequestPolicy addon for Firefox (I realize I'm in a tiny minority...). When visiting a website that uses domain sharding for the first time, that means I have to allow cross-site requests to the other domains.

My plea to web admins: try to at least make the names of the shards recognizable. I've seen sites where the domain is essentially "mynewspaper.com" and it needs data from "xac1h139a.com" to display correctly. Now go find the right domain to allow within the dozens of cross-site requests that such sites are often making...

Edit: This is a comment on his suggestion to use domain sharding. His site is fine.



