SPDY Brings Responsive and Scalable Transport to Firefox 11 (hacks.mozilla.org)
91 points by twapi on Feb 3, 2012 | 43 comments



This is terrific news. Now if only Apache would start bundling spdy with their new releases, instead of expecting admins to hunt down the right mod (http://code.google.com/p/mod-spdy/) and hopefully remember to always keep both up to date.

At least nginx is going to start including it: https://twitter.com/#!/nginxorg/status/150112670966747137


That's fantastic news re nginx! The last I heard was that they were evaluating it and it's good that they are now committing to deliver support for SPDY.


Is Apache mod-spdy useful yet? The project page says "still an early beta and is not yet suitable for production environments" and everything I've read elsewhere says it's not ready. But things have a way of moving fast, is that outdated info?


My understanding from when I was working with the Mozilla networking team was that it wasn't very good at all.

I think we used node-spdy to run unit tests, as it was one of the more complete implementations.
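
For anyone curious, standing up a node-spdy server for those tests looked roughly like this (a sketch from memory, so treat the exact option names and the key/cert paths as placeholders; the module mirrors Node's https API):

  // Minimal node-spdy server sketch; createServer mirrors https.createServer.
  var fs = require('fs');
  var spdy = require('spdy');

  var options = {
    key: fs.readFileSync('server.key'),    // placeholder paths
    cert: fs.readFileSync('server.crt')
  };

  spdy.createServer(options, function (req, res) {
    res.writeHead(200, { 'content-type': 'text/plain' });
    res.end('hello over SPDY\n');
  }).listen(8443);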


Perhaps someone with more experience with network protocols can explain the hype about SPDY, since I cannot seem to figure it out. Looking at the features Google is touting, I can't help but feel underwhelmed:

- Single Request per Connection. It seems that HTTP 1.1 already addressed this with pipelining.

- FIFO Queuing. I feel like the client is in a better position to know in what order the page needs to be rendered than the server. Why shouldn't the server respond in the order that the client asked for?

- Client Initiated Request. Wouldn't it be better to inform the client of what it needs rather than just guessing that the client needs these files and sending them down the pipe? It seems that this feature might waste bandwidth when it could have hit the cache.

- Uncompressed headers. For slow lines, compressed headers might be nice if they were very large. That said, I think a better solution to compressing data is to not send it at all. (If you want to increase speed, do you REALLY need to send the User-Agent and Referer at all?) The smallest data is the data that isn't sent.

- Optional data compression. SPDY is forcing data compression? That seems wasteful of power, esp. for mobile devices when sending picture, sound, or video data.

Of course, this list is all just blowing smoke until it's actually tested. However, I couldn't find an independent study of SPDY performance.


To address some of your points:

- Pipelining is still strictly in-order request-response, though. With SPDY, you can send multiple requests at once, and the server responds to them in whatever order it likes.

- The point of removing the FIFO queuing is that the server can start responding with simple files before the more expensive resources are calculated. The HTML itself can take a while to generate server-side, whereas CSS and JS files are usually served straight off disk.

In a FIFO model, the 200ms the client spends waiting for the HTML to be generated is just wasted. You could be using that time to download CSS, JS, images, etc. (see the toy timing sketch at the end of this comment).

- There are two options for server-initiated requests in SPDY. One is where the server says, "Since you requested this resource, you'll probably also want this, this, and this" (i.e. it sends the client links to the related resources). The other is where the server actually pushes those related resources down the connection itself, without waiting to be asked.

In the first case, the client can begin processing those other files (e.g. checking its local cache or actually making a request for them) before the original resource has finished downloading/parsing. In the second case, it could be that the original resource and the "sub-resource" (e.g. an HTML file and its attached CSS file) have similar caching rules, so if a client requests one it's likely to request the other anyway.

- SPDY also has options for not including those kinds of headers (e.g. User-Agent, Host, Accept-*) on every request. But even when you do that, compression still has benefits: once you've removed all the redundant data, what's left still compresses well, so why wouldn't you?

- I agree there are certain kinds of content which don't benefit greatly from compression. But on almost all platforms, CPU power is abundant relative to network capacity. In fact, I can't think of a single platform where that's not the case...
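
To make the FIFO point concrete, here's a toy timing model in JavaScript (the numbers are made up: 200ms to generate the HTML, 50ms to transfer each file, and the static files are ready immediately):

  // Toy model: FIFO pipelining vs. SPDY-style multiplexing on one connection.
  var resources = [
    { name: 'index.html', generateMs: 200, transferMs: 50 },
    { name: 'style.css',  generateMs: 0,   transferMs: 50 },
    { name: 'app.js',     generateMs: 0,   transferMs: 50 }
  ];

  // Pipelined FIFO: each response has to wait for the one before it.
  var clock = 0;
  resources.forEach(function (r) {
    clock = Math.max(clock, r.generateMs) + r.transferMs;
    console.log('pipelined   ' + r.name + ' done at ' + clock + 'ms');
  });

  // Multiplexed: the wire is still shared, but the server sends whatever
  // is ready first, so the static files go out while the HTML is generated.
  var wire = 0;
  resources.slice()
    .sort(function (a, b) { return a.generateMs - b.generateMs; })
    .forEach(function (r) {
      wire = Math.max(wire, r.generateMs) + r.transferMs;
      console.log('multiplexed ' + r.name + ' done at ' + wire + 'ms');
    });

In the FIFO case the CSS and JS don't finish until 300ms and 350ms; multiplexed, they're done at 50ms and 100ms, inside the window that pipelining would have spent idle.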


HTTP pipelining is quite different. When the server takes a long time to respond to a single request, it stalls the entire pipeline. To use parallelism efficiently you have to open multiple TCP connections, but that is less efficient than a single connection because they all have to slow start.

As for header compression, have you checked how large headers are these days? They can easily be 1.5 KB.
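
For a rough sense of what compression buys on a block that size, you can deflate a typical-looking set of request headers with Node's zlib (the values below are made up, but sized like real browser traffic):

  // Rough size check: deflate a typical request-header block.
  var zlib = require('zlib');

  var headers =
    'GET /static/app.js HTTP/1.1\r\n' +
    'Host: www.example.com\r\n' +
    'User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0\r\n' +
    'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n' +
    'Accept-Language: en-us,en;q=0.5\r\n' +
    'Accept-Encoding: gzip, deflate\r\n' +
    'Referer: http://www.example.com/some/long/landing/page?utm_source=news\r\n' +
    'Cookie: sid=abc123def456; prefs=compact; analytics=9f8e7d6c5b4a\r\n\r\n';

  zlib.deflate(headers, function (err, compressed) {
    if (err) throw err;
    console.log('raw: ' + headers.length + ' bytes, deflated: ' + compressed.length + ' bytes');
  });

And because SPDY keeps one compression context per session, headers on subsequent requests (which repeat most of those values) compress even better.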


- Pipelining would be pretty cool if it worked. There are a number of problems with it:

  * head of line blocking
  * transparent proxies that don't support pipelining properly
  * error detection

- FIFO queuing: Why is the client in a better position to know in what order the page needs to be rendered than the server? Isn't the server the one that knows all the resources that need to be sent to the client?

- Client initiated request: Yeah, server push is a complicated area. But there are some cases where server push is clearly superior. For example, data URI inlining ruins cacheability. It's better to server push an already cached resource that can be RST_STREAM'd than inline a resource and make it larger and uncacheable.

- While we'd like to get rid of headers as much as possible, it's still impractical to completely eliminate headers like User-Agent.

- SPDY does not force data compression, and optional data compression has been removed in draft spec 3.


Awesome, sounds like good news for the web all-round!

Any news on if other browser vendors (Microsoft, Apple...) are on board?


SPDY doesn't provide a way to proxy-cache public assets, so it's bad for places that use such caching to mitigate low bandwidth / high latency connections.


which is to be expected with a TLS connection. OTOH you get to drop all the extra TCP handshakes, you get better congestion control, you get header compression, and that encrypted connection is often a really good thing.

http://bitsup.blogspot.com/2011/09/spdy-what-i-like-about-yo...

But yes, there remains no silver bullet, and you'll have to pick the right tool for the job.


Forcing the use of TLS for SPDY seems to have been intentionally crippling.


Why should a new protocol offer the option to operate insecurely?

Nothing stops a proxy from handling SPDY, if the client trusts it to do so and doesn't mind getting MITMed.


Transparent caching is part of what makes HTTP so great. Security is relative; you can authenticate content without encrypting it if there is nothing sensitive about the content.

The authentication can happen external to the http or spdy transaction as well.


> you can authenticate content without encrypting it

True, but insecure HTTP provides neither authentication nor encryption.

> if there is nothing sensitive about the content.

Encrypting only sensitive content leaks information to observers and attackers, namely when you transmit sensitive content, and which servers you connect to when you do so. Encrypting all content eliminates that information leak.



Wouldn't this be solved by sending a Cache-Control: public, max-age=<long-duration-in-seconds> header, so that the assets can be stored in ISP proxy caches as well as browsers' on-disk caches?


The problem is that ISP caches are essentially impossible with SPDY, because of mandatory TLS.


I'd consider that a feature, not a bug. Transparent proxies considered harmful, especially when done without informing their users.

A SPDY client that trusts a particular proxy could easily allow that proxy to operate on its behalf, and SPDY would actually make that far more efficient.


Ah, now we are closer to Mongrel2 coming with SPDY.


SPDY is great and I'm looking forward to support all around, but looking at that waterfall example, why the heck would anyone develop a website that loads that many elements all at once?

No-one has a viewport that large. You can do lazy image loading in just a few lines of javascript, without jquery. Scripts can be deferred, etc.
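
Something along these lines is usually enough (a sketch; it assumes images ship with a tiny placeholder src and keep the real URL in a data-src attribute):

  // Swap in real image URLs as images approach the viewport.
  function loadVisibleImages() {
    var imgs = document.querySelectorAll('img[data-src]');
    for (var i = 0; i < imgs.length; i++) {
      var rect = imgs[i].getBoundingClientRect();
      if (rect.top < window.innerHeight + 200) {   // 200px look-ahead
        imgs[i].src = imgs[i].getAttribute('data-src');
        imgs[i].removeAttribute('data-src');
      }
    }
  }

  window.addEventListener('scroll', loadVisibleImages, false);
  window.addEventListener('load', loadVisibleImages, false);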

I fear SPDY is going to be yet another way to allow shoddy, lazy website functionality.


It may not be difficult to do lazy image loading in JavaScript, but ideally you wouldn't have to do it at all. It doesn't encourage lazy or shoddy work; it just allows web developers to focus on creating value rather than on figuring out how to efficiently load images. It's kind of like the argument that garbage collectors are bad because they encourage people to use memory-inefficient designs, when in fact garbage collectors are good precisely because they let you focus on more important problems (like design and features). Sure, every tool has a different way to shoot yourself in the foot, but that doesn't mean you shouldn't use the tool.

tl;dr If SPDY lets me write a website without worrying about how many elements are loading at a time, then I'll have more time to spend on actually building a product.


  You can do lazy image loading in just a few lines of javascript, without jquery. Scripts can be deferred, etc.

Those are hacks. I'd rather have a browser that loads stuff really fast and not have to jump through hoops like spriting, etc. If you're really concerned about deferring the loading of images, that could also be a feature of the browser.

Also, consider that asking the question, "why the heck would anyone develop a website that loads that many elements all at once," is like asking, "why the heck would anybody develop a program that doesn't fit on a 1.44MB floppy?".


Why are you loading every single asset on a page that cannot be seen entirely on the screen? Or for a few scrolls?

Why are you loading assets for the footer when only above the fold is seen on initial page load?

In your floppy comparison, that would be like copying over an entire 16GB SD card just to view a few pictures you took this morning.


HTML is a declarative format, so the presence of an <img> tag doesn't mean it has to be loaded eagerly. You could turn it around and ask why the browser is loading assets that can't be seen.


> why the heck would anyone develop a website that loads that many elements all at once?

I just opened my G+ page and count 50+ user thumbnails. Many modern social websites have lots and lots of small unique user-specific images.


You could sprite a user's most common friends to improve this.


A sprite per user, that changes every few days?

100,000,000 users, many with upwards of 1000 friends, with no obvious way to determine the most frequent?

You could sprite the top 100 users, but that wouldn't have much benefit to most people. Top 1000 is likely too big to send out to every user.

Just sending them as we do now is lots of HTTP requests. If we could somehow reduce the requests... Ah, pipelining, or even better the built-in feature of SPDY.

In reality right now, Google+ uses SPDY for everything but the avatars, which are served over normal HTTPS from https://lh3.googleusercontent.com/. Wonder why?


You can do a lot of things, but why not just speed up the base functionality? Lazy-loading of images speeds up the experience over HTTP, and it will also speed up SPDY. There's only so much optimization you can do at the highest level. At some point, you need the lower levels to just go faster.


SPDY is not great.

For instance, header compression using a shared prefix dictionary saves only a handful of bytes (~100) over TLS compression -- worthless, not to mention it's already had several versions of the prefix dictionary.

SPDY's developers didn't even test against HTTP tunneling, which in practice (as seen in current Firefox and Opera) works just as well. The 'head of line blocking' is not a big deal in practice.


Can you link me to what you mean by HTTP tunneling - assuming you do not mean pipelining.

"Tunneling" to me sounds somewhat like proxying but I think you mean something else?


SPDY may be technically sound but...

Mozilla Corp: Wholly 0wned Subsidiary of Google Inc


Why? AFAICS, SPDY is open and available for everyone who wants to implement it.

What's so bad about companies making it open, free and open source, thus contributing to a better Internet experience for everyone? No one limited SPDY to Google sites - Microsoft and Yahoo are free to implement it for their servers and enjoy browsing speedups with Chrome and (soon) Firefox 11.


So will my web server handle HTTP 1.1, SPDY, and HTTP Next then?

Is Mozilla acting in the interest of the Open Web or was this a bargaining chip with their $300M default search provider deal?


Yes. Apache already has mod_spdy and nginx has plans to implement it. Browsers fall back to regular HTTP if the server doesn't speak SPDY. There are no disadvantages for you.


SPDY may be technically sound but...

Traffic is harder to debug. Google continues to act like it owns the Web.

These are disadvantages for the world.

Bring on the downvotes of the naive Googlers! This place was getting boring anyway...


I think programmers have achieved harder tasks than detecting which protocol the server speaks and speaking its language, starting with the best available protocol and falling back to simpler and more common ones.


Mozilla refused to implement NaCl and WebP; it looks like they're evaluating Chrome features on a case-by-case basis.


Perhaps it does look like that. It also looks like Mozilla has a healthy incentive to shadow Google in the form of their recent search deal.

Is Mozilla doing the best thing for the Web or further cementing the current browser landscape? Will it be better when I have to write my crawlers to work with HTTP 1.1, SPDY, and HTTP Next?


You don't even have to write your crawlers to work with HTTP 1.1; all the web servers out there will respond just fine to HTTP 1.0 requests, even though HTTP 1.1 is ubiquitous and has been in use for over a decade now. What makes you think that SPDY-enabled web servers won't be compatible with HTTP?


SPDY may be technically sound but...

My objection to this turn of events is almost entirely related to the behavior of Google and their search deal with Mozilla. Mozilla has compromised their principles again and again for Google and it is accelerating.

I'm tired of Google's doublespeak and lies about "Open Web" and open standards. If SPDY takes off, we will all have to think about supporting it and if it's built correctly that will be fine. I seriously have my doubts.

It would have been much less suspicious if the second browser to adopt SPDY was Safari or IE. Right now, it looks like Google is essentially bribing an "independent" browser vendor to implement their half-baked standards so they can turn around and claim "it's a web standard!".

It's like 1999 but now with corporate protectorates!


What makes you think you have to do anything? If SPDY takes off, all you have to do is upgrade your HTTP library to take advantage of it. Besides, the servers still speak HTTP, so your crawlers will keep working even if you don't upgrade.


SPDY requires an explicit upgrade of the connection.



