FasterWeb Wants To Make The Entire Web Up To Ten Times Faster In 2010 (techcrunch.com)
14 points by vaksel on July 19, 2009 | 23 comments



"Here's a company we heard about, trying to do something that sounds ridiculous. They won't tell us how they're going to do it, nor will they name any of their clients. They have funding, though, and since no funded startup ever failed, we (TC) will breathlessly report on their existence."

Here's a litmus test for TC to use: if a) a startup makes a wild claim and the only corroboration you can find is from their funders or b) your article requires the use of the phrase, "so we'll just have to take their word for it," rest assured you can skip this story.


It is indeed a terrible story, but the VC has a decent-looking track record: http://videolectures.net/yoav_andrew_leitersdorf/


What an absurd claim. I went to read the story to find either the meat, or to spot a grammatical error or missing specifics:

One VC firm, YL Ventures, believes that it can. And they’ve seen it in action, so we’ll just have to take their word for it, for now.


Yet another post that proves that TC is as respectable a news source as CNN and other American news organizations that do not ask the hard questions and do not criticize bullshit.

TC is an entertainment tabloid, and it's messed up because you can also see that they want to be respectable and report useful information. They have a pretty good company database, they've collected some info into a quarterly analysis package, etc., but they still perpetuate the Web 2.0 celebrity gossip.


My previous company built technology in this space, based on Squid and some custom tools. I vividly recall competitors making bold, outlandish claims like these on a monthly basis back then. None of them ever came to pass. They required too much infrastructure build-out, too much cooperation from ISPs and website owners, and the companies behind them demanded too much involvement and too much money from all parties in the chain. There was a CDN rush back then, too, and only a few managed to raise enough money and build out fast enough to be successful at it. This kind of technology requires a build-out similar to a CDN, but with far more involvement from parties that are very unlikely to have an interest in being involved.

I've had long conversations with folks at Akamai, Red Swoosh, and many others, in this particular area...and it's astonishing how much money it takes to build this stuff out, and finding profitable ways to build out small (making it useful enough to make money on without first spending millions and signing on dozens of ISPs) is very difficult. I just don't think there is that much VC money floating around right now, even if (big giant if) they've actually figured out technology that works on the scale they're promising.


Someone here probably knows this: How much latency would you cut if your average page request to msn.com, yahoo.com, etc resulted in a single instantly-full-speed download of an archive of all the content the browser requires to show the page?

Actually, hitting msn.com and yahoo.com now, it looks like each takes <2 seconds on this computer on a normal-ish broadband connection without any cache. NYTimes and Bing took about 5, but they had all the useful stuff up in <2.

I suppose it'd be desirable for all of those to be <0.1 seconds, but that'd be darned hard considering normal ping latencies. Between 2 seconds and 0.1 seconds, I'm not sure how much I care... I still see the delay, but I don't think it makes much difference to me in normal surfing.


I saw something the other day where Firefox will let you browse a website that is zipped up, using URLs like this:

http://somewebsite.com/somezip.zip?/index.html
http://somewebsite.com/somezip.zip?/images/logo.png

Then the browser only downloads the zip file once and everything else is cached. I can't seem to find the link anymore, though, because everything in the search engines is so frickin' SEO'd that all I can find with zip in the search terms is WinZip or WinRAR or something about putting a link to a zip on your website...

This is the closest I can find: http://www.aburad.com/blog/2008/05/view-contents-of-zipjar-f...


You're probably thinking of a combination of this:

http://kaioa.com/node/99

Which explains how to use JAR archives in Firefox instead of CSS sprites to optimize page loads, and this:

http://limi.net/articles/resource-packages

Which is a proposal for a universal standard of packaging resources like this.
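
For anyone who wants to play with the idea, here's a minimal sketch (the file names and host are made-up examples) that bundles a page's static assets into one archive with Python's zipfile module and prints the jar: URLs Firefox understands for addressing files inside it:

    # Sketch: bundle a page's assets into one archive, fetched in one request.
    # File names and the host below are hypothetical examples.
    import zipfile

    ASSETS = ["css/site.css", "images/logo.png", "images/sprite.png", "js/app.js"]
    ARCHIVE = "pack.zip"

    with zipfile.ZipFile(ARCHIVE, "w", zipfile.ZIP_DEFLATED) as pack:
        for path in ASSETS:
            pack.write(path)  # assumes the files exist locally

    # Firefox's jar: scheme can address files inside a remotely hosted archive:
    for path in ASSETS:
        print("jar:http://example.com/%s!/%s" % (ARCHIVE, path))

The archive itself is cached like any other object, so the per-file requests disappear after the first hit.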


Yep, that's it. I think that'd be a really cool way to speed up the web.


Is this somehow different than mod_deflate?


I haven't used mod_deflate myself, but my understanding is that it compresses each individual response, rather than bundling a set of files into one zip that is sent as a single request.

The advantage over mod_deflate is that you pay the latency for only one request for perhaps hundreds of files, rather than paying it for each file on the site.


Right. The server can immediately know all the files that the client could need (and with some coordination.. does need). The client though has to get the first one to learn what others it may need, then request those.. and those may require more. Each level of recursion costs you at least 1 round-trip latency, likely much more due to the links not being at the very top of the page, the server not being instantly fast, and the browser's rules for requesting things.
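
Some rough back-of-the-envelope numbers, just to make the cost concrete (the RTT, resource counts, and connection limit are all assumptions, not measurements):

    # Back-of-the-envelope: serialized discovery vs. one bundled download.
    # Every number here is an illustrative assumption.
    rtt = 0.08                          # seconds per round trip on broadband
    requests_per_level = [1, 12, 25]    # HTML -> CSS/JS -> images in the CSS
    parallel = 6                        # connections a browser opens per host

    serialized = 0.0
    for n in requests_per_level:
        batches = -(-n // parallel)     # ceiling division
        serialized += batches * rtt     # each discovery level waits on the last

    bundled = 1 * rtt                   # one request for one archive

    print("serialized fetch: ~%.2fs of pure latency" % serialized)
    print("bundled fetch:    ~%.2fs of pure latency" % bundled)

Even ignoring transfer time, the serialized version spends most of its time waiting on round trips.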


Yep, that's right. The network is the bottleneck.


You also have to imagine that most major sites are already using some form of edge caching like Akamai. My guess is that their service would replace it and maybe transparently add asset aggregation and/or compression, or something along those lines. It seems like, unless this is unprecedented technology, it's probably a CDN with optimization features as the emphasis.


If you want to do this yourself for your office or ISP, do a Google search for 'squid proxy'.


It's good, but it's still many requests over network latency and could be improved if it was one connection, one download, for the whole page.


http://en.wikipedia.org/wiki/HTTP_persistent_connection

If everything on the page is from one server, it will all go over one connection. In the case of using an HTTP cache, all the connections are made by the cache, and one connection is made to the cache by the end-user box.
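
You can see the reuse from code, too. Python's http.client will keep using one TCP connection for several requests as long as the server allows keep-alive (the host and paths below are just placeholders):

    # Sketch: several HTTP requests over a single persistent TCP connection.
    # Host and paths are placeholders; any keep-alive server will do.
    import http.client

    conn = http.client.HTTPConnection("example.com")   # one TCP connection
    for path in ["/", "/style.css", "/logo.png"]:
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read()   # drain the body before reusing the socket
        print(path, resp.status, len(body), "bytes")
    conn.close()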


But that's about using the same TCP connection to make multiple HTTP requests, whereas the proposal is to use one TCP connection and one HTTP request, e.g.

COMPOUND-GET /index.html

and the server sends a .gz of all the files / data required to render index.html. (Not sure that would work so well with very dynamic, crosslinked, and JavaScript-heavy sites, but text and images might be possible.)
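
Purely as a thought experiment, since COMPOUND-GET isn't a real method and the asset list here is hard-coded rather than discovered by parsing the HTML, the server side might boil down to something like:

    # Thought experiment: what a server might do for a hypothetical
    # "COMPOUND-GET /index.html". A real version would parse the page
    # to discover its dependencies instead of taking a hard-coded list.
    import io
    import tarfile

    def compound_get(page, assets):
        """Return one gzipped tarball containing the page and its assets."""
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode="w:gz") as bundle:
            for path in [page] + assets:
                bundle.add(path)          # assumes the files exist on disk
        return buf.getvalue()             # body of a single HTTP response

    payload = compound_get("index.html", ["style.css", "logo.png"])
    print(len(payload), "bytes in one response")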


Making a website 2 to 10 times faster is not that hard: put it on a CDN. The usual suspects for increasing performance are (a quick sketch of the first two follows the list):

- compression

- caching (origin, edge, browser)

- persistent connections

- tcp-optimizations
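
To make the first two concrete, here's a minimal WSGI sketch (nothing FasterWeb-specific, just the standard tricks) that gzips the response body and sets a far-future cache header:

    # Minimal WSGI sketch of two of the usual suspects: compression and
    # browser caching. Purely illustrative; nothing here is FasterWeb-specific.
    import gzip
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        body = b"<html><body>hello</body></html>" * 100
        headers = [("Content-Type", "text/html")]

        # Caching: let browsers and proxies keep this response for a day.
        headers.append(("Cache-Control", "public, max-age=86400"))

        # Compression: only if the client advertised gzip support.
        if "gzip" in environ.get("HTTP_ACCEPT_ENCODING", ""):
            body = gzip.compress(body)
            headers.append(("Content-Encoding", "gzip"))

        headers.append(("Content-Length", str(len(body))))
        start_response("200 OK", headers)
        return [body]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()

Persistent connections and TCP tuning live below this layer, in the server and the kernel.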


Caching is easy. The hard part is knowing when to expire an object in cache.


That's only the hard part if the site doesn't tell you when to expire it. Squid (and many others) have excellent cache expiry and replacement heuristics, but Squid can only cache 20-35% of the web, because it's simply not safe to cache the rest of it: either it's session-based and could be different for every user, or it's SSL-encrypted and can't be seen by the proxy, or it explicitly disallows caching with Cache-Control or Expires headers, etc. And even if the site doesn't tell you how long you can cache something, it's reasonably safe to guess based on the age of the object: a five-year-old object is probably not going to change in the three days it takes to run through a full replacement in your cache, while one that is 30 minutes old could possibly change dozens of times, so Squid will send an If-Modified-Since a few minutes later, with gradually lengthening periods between checks as the age of the object increases.
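
Squid's heuristic is roughly "an object may stay fresh for some fraction of its current age"; a toy version of the idea looks like this (the 10% factor and the caps are illustrative, not Squid's actual defaults):

    # Toy version of age-based freshness, in the spirit of Squid's heuristics.
    # The 10% factor and the min/max caps are illustrative, not Squid's defaults.
    import time

    def heuristic_ttl(last_modified, factor=0.1, min_ttl=60, max_ttl=3 * 24 * 3600):
        """Seconds to serve from cache before revalidating (If-Modified-Since)."""
        age = time.time() - last_modified     # how old the object already is
        return max(min_ttl, min(age * factor, max_ttl))

    five_years_old = time.time() - 5 * 365 * 24 * 3600
    half_hour_old = time.time() - 30 * 60
    print(heuristic_ttl(five_years_old) / 3600, "hours before rechecking")
    print(heuristic_ttl(half_hour_old) / 60, "minutes before rechecking")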


Yeah, I wonder if they're just systematizing the YSlow suggestions.


Woo! Caching and compression! I feel like it's 1998 again!



