Hacker News

Someone here probably knows this: How much latency would you cut if your average page request to msn.com, yahoo.com, etc resulted in a single instantly-full-speed download of an archive of all the content the browser requires to show the page?

Actually, hitting msn.com and yahoo.com now, it looks like each takes <2 seconds on this computer on a normal-ish broadband connection without any cache. NYTimes and Bing took about 5, but they had all the useful stuff up in <2.

I suppose it'd be desirable for all of those to be <0.1 seconds, but that'd be darned hard considering normal ping latencies. Between 2 seconds and 0.1 seconds, I'm not sure how much I care... I still see the delay, but I don't think it makes much difference to me in normal surfing.




I saw something the other day where Firefox will allow you to browse a website that is zipped up and you can use urls like this:

http://somewebsite.com/somezip.zip?/index.html
http://somewebsite.com/somezip.zip?/images/logo.png

Then the browser only downloads the zip file once and everything else is cached. I can't seem to find the link anymore, though, because everything in the search engines is so frickin' SEO'd that all I can find with "zip" in the search terms is WinZip or WinRAR or something about putting a link to a zip on your website...

This is the closest I can find: http://www.aburad.com/blog/2008/05/view-contents-of-zipjar-f...


You're probably thinking of a combination of this:

http://kaioa.com/node/99

Which explains how to use JAR archives in Firefox instead of CSS sprites to optimize page loads, and this:

http://limi.net/articles/resource-packages

Which is a proposal for a universal standard of packaging resources like this.
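For reference, the trick from the first link works by pointing CSS at entries inside a single JAR archive via Firefox's jar: URL scheme, so the browser fetches the archive once and resolves each entry locally. A sketch (the domain and file names here are made up):

```css
/* Hypothetical example: icons.jar is one archive holding many images.
   Firefox downloads it once; each url() resolves inside the archive.
   This is Firefox-specific -- other browsers don't support jar: URLs. */
.logo {
    background-image: url(jar:http://example.com/icons.jar!/logo.png);
}
.search-icon {
    background-image: url(jar:http://example.com/icons.jar!/search.png);
}
```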


Yep, that's it. I think that'd be a really cool way to speed up the web.


Is this somehow different than mod_deflate?


I haven't used mod_deflate myself, but my understanding is that it compresses each individual response, rather than bundling a set of files into one zip that is sent back as a single response.

The advantage over mod_deflate is that you pay the latency of just one request for perhaps hundreds of files, rather than per-file latency across the whole site.
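The distinction can be sketched with stdlib tools (file names and contents below are made up): mod_deflate-style compression gzips each response separately, so the client still pays one request per file, while a resource package puts every file into one archive fetched with a single request.

```python
# Sketch: per-response gzip (mod_deflate style) vs. one zip for everything.
import gzip
import io
import zipfile

# Hypothetical page assets
files = {
    "index.html": b"<html>...</html>",
    "style.css": b"body { margin: 0 }",
    "logo.png": b"\x89PNG...",
}

# mod_deflate style: one compressed body per response -> len(files) requests
per_request_bodies = {name: gzip.compress(data) for name, data in files.items()}

# resource-package style: every file in one zip -> a single request
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, data in files.items():
        zf.writestr(name, data)
package = buf.getvalue()

print(len(per_request_bodies), "requests vs. 1 request of", len(package), "bytes")
```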


Right. The server can immediately know all the files that the client could need (and, with some coordination, does need). The client, though, has to fetch the first file to learn what others it may need, then request those... and those may require more. Each level of recursion costs you at least one round trip of latency, likely much more, because the links aren't at the very top of the page, the server isn't instantly fast, and the browser has its own rules for when it requests things.
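As a back-of-envelope sketch (all the numbers here are assumptions, not measurements), the waterfall cost grows with the depth of the reference chain, not just the file count:

```python
# Assumed round-trip time and discovery depth -- purely illustrative.
RTT = 0.05     # seconds per round trip
LEVELS = 3     # e.g. HTML -> CSS -> background images

waterfall = LEVELS * RTT    # sequential discovery: one round trip per level
single_bundle = 1 * RTT     # everything arrives in one response

print(f"waterfall: {waterfall:.2f}s, bundle: {single_bundle:.2f}s")
```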


Yep, that's right. The network is the bottleneck.


You also have to imagine that most major sites are already using some form of edge caching like Akamai. My guess is that their service would replace it and maybe transparently add asset aggregation and/or compression, or something along those lines. Unless this is unprecedented technology, it's probably a CDN with an emphasis on optimization features.


If you want to do this yourself for your office or ISP, do a Google search for 'squid proxy'.


It's good, but it's still many requests, each paying network latency; it could be improved if it were one connection and one download for the whole page.


http://en.wikipedia.org/wiki/HTTP_persistent_connection

If everything on the page is from one server, it will all go over one connection. In the case of an HTTP cache, all the connections are made by the cache, and one connection is made to the cache by the end-user's box.
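A minimal demonstration of HTTP keep-alive with the Python stdlib: two requests over a single TCP connection to a throwaway local server (the paths and payloads are made up for the demo):

```python
# Two HTTP requests over one persistent connection.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 keeps the connection alive

    def do_GET(self):
        body = f"hello from {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
results = []
for path in ("/index.html", "/logo.png"):   # two requests, same connection
    conn.request("GET", path)
    resp = conn.getresponse()
    results.append((path, resp.status, resp.read()))
conn.close()
server.shutdown()
print(results)
```

Each request still costs a round trip, though; keep-alive only saves the per-request TCP setup, which is what distinguishes it from the one-request bundle idea below in the thread.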


But that's about using the same TCP connection to make multiple HTTP requests, whereas the proposal is to use one TCP connection and one HTTP request, e.g.

COMPOUND-GET /index.html

and the server sends a .gz of all required files / data to render index.html. (Not sure that would work so well with very dynamic, crosslinked, and JavaScript-heavy sites, but text and images might be possible.)
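A hypothetical server-side sketch of such a handler: statically scan the requested page for local references and bundle the page plus its assets into one archive. (The tag/attribute scan and the toy "filesystem" are simplifications I've made up; dynamic JavaScript would defeat a static scan like this.)

```python
# Sketch of a COMPOUND-GET style handler: page + local assets in one zip.
import io
import zipfile
from html.parser import HTMLParser

class AssetCollector(HTMLParser):
    """Collect local src/href references from an HTML document."""
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and "://" not in value:
                self.assets.append(value)   # local references only

def compound_get(path, read_file):
    """Bundle the page at `path` plus its local assets into one zip."""
    html = read_file(path)
    collector = AssetCollector()
    collector.feed(html.decode())
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(path, html)
        for asset in collector.assets:
            zf.writestr(asset, read_file(asset))
    return buf.getvalue()

# Toy "filesystem" standing in for the document root
site = {
    "index.html": b'<html><link href="style.css"><img src="logo.png"></html>',
    "style.css": b"body { margin: 0 }",
    "logo.png": b"\x89PNG...",
}
bundle = compound_get("index.html", site.__getitem__)
print(zipfile.ZipFile(io.BytesIO(bundle)).namelist())
# -> ['index.html', 'style.css', 'logo.png']
```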




