Someone here probably knows this: How much latency would you cut if your average page request to msn.com, yahoo.com, etc resulted in a single instantly-full-speed download of an archive of all the content the browser requires to show the page?
Actually, hitting msn.com and yahoo.com now, it looks like each takes <2 seconds on this computer on a normal-ish broadband connection without any cache. NYTimes and Bing took about 5, but they had all the useful stuff up in <2.
I suppose it'd be desirable for all of those to be <0.1 seconds, but that'd be darned hard considering normal ping latencies. Between 2 seconds and 0.1 seconds, I'm not sure how much I care. I still see the delay, but I don't think it makes much difference to me in normal surfing.
Then the browser only downloads the zip file once and everything else is cached. I can't seem to find the link anymore, though, because everything in the search engines is so frickin' SEO'd that all I can find with "zip" in the search terms is WinZip or WinRAR or something about putting a link to a zip on your website...
I haven't used mod_deflate myself, but my understanding is that it compresses each individual response, rather than packing a set of files into one zip that is sent back as a single response.
The advantage over mod_deflate is that you pay the latency of only one request for perhaps hundreds of files, rather than a separate request's latency for each file on the site.
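To make that distinction concrete, here's a minimal Python sketch of the two approaches. The file names are made up, and this is only an analogy for what mod_deflate does inside Apache, not its actual mechanism:

    # Per-file compression vs. one bundled archive (illustrative only).
    import gzip
    import io
    import zipfile

    ASSETS = ["index.html", "style.css", "app.js", "logo.png"]  # hypothetical files

    def deflate_style(files):
        """One response per file, each gzip-compressed individually
        (roughly the mod_deflate idea): N requests, N round trips."""
        responses = []
        for name in files:
            with open(name, "rb") as f:
                responses.append(gzip.compress(f.read()))
        return responses

    def bundle_style(files):
        """All files packed into a single zip sent as one response:
        one request, one round trip for the whole set."""
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
            for name in files:
                z.write(name)
        return buf.getvalue()

Both end up roughly the same number of bytes on the wire; the difference is how many request/response exchanges it takes to get them.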
Right. The server can immediately know all the files that the client could need (and, with some coordination, does need). The client, though, has to fetch the first one to learn what others it may need, then request those, and those may require still more. Each level of recursion costs you at least one round trip of latency, and usually much more, because the links aren't at the very top of the page, the server isn't instantly fast, and the browser has its own rules for when it requests things.
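Here's a rough Python sketch of the discovery step the browser does one round trip at a time, and which the server could in principle precompute up front. The tag coverage is deliberately minimal and the file name is hypothetical:

    # Collect the asset URLs referenced by a page (the set the server
    # could know ahead of time instead of waiting for the client to ask).
    from html.parser import HTMLParser

    class AssetCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.assets = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag in ("img", "script") and attrs.get("src"):
                self.assets.append(attrs["src"])
            elif tag == "link" and attrs.get("href"):
                self.assets.append(attrs["href"])

    collector = AssetCollector()
    collector.feed(open("index.html", encoding="utf-8").read())
    print(collector.assets)

Note that CSS files found this way can themselves pull in images and fonts, which is exactly the extra level of recursion, and the extra round trips, described above.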
You also have to imagine that most major sites already use some form of edge caching like Akamai. My guess is that this service would replace that and transparently add asset aggregation and/or compression, something along those lines. Unless this is unprecedented technology, it's probably a CDN with optimization features as the emphasis.
If everything on the page is from one server, it will all go over one connection. In the case of an HTTP cache, all the connections are made by the cache, and one connection is made to the cache by the end-user box.
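For what it's worth, that connection reuse is just ordinary HTTP keep-alive. A quick Python sketch (host and paths are placeholders):

    # Several requests over one persistent TCP connection.
    import http.client

    conn = http.client.HTTPConnection("example.com")
    for path in ("/", "/style.css", "/app.js"):
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read()  # drain the body before reusing the socket
        print(path, resp.status, len(body))
    conn.close()

You still pay a round trip per request, though; you just avoid re-doing the TCP handshake each time.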
But that's about using the same TCP connection to make multiple HTTP requests, whereas the proposal is to use one TCP connection and one HTTP request, e.g.
COMPOUND-GET /index.html
and the server sends a .gz of all the files / data required to render index.html. (Not sure that would work so well with very dynamic, cross-linked, JavaScript-heavy sites, but text and images might be possible.)
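Purely as a sketch of what the server side of that could look like; no such method exists in real HTTP, so this approximates it with a plain GET and a hard-coded asset list:

    # Hypothetical "compound GET": respond with one tar.gz containing the
    # page plus everything it references.
    import http.server
    import io
    import tarfile

    PAGE_ASSETS = {"/index.html": ["index.html", "style.css", "app.js"]}

    class CompoundHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            files = PAGE_ASSETS.get(self.path)
            if files is None:
                self.send_error(404)
                return
            buf = io.BytesIO()
            with tarfile.open(fileobj=buf, mode="w:gz") as tar:
                for name in files:
                    tar.add(name)
            data = buf.getvalue()
            self.send_response(200)
            self.send_header("Content-Type", "application/gzip")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        http.server.HTTPServer(("", 8000), CompoundHandler).serve_forever()

The hard part isn't the bundling; it's that the browser would need to understand the archive, and the server would need to know which assets the client already has cached so it doesn't resend them.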