Google Modifies HTTP, Makes Chrome 50% Faster (conceivablytech.com)
141 points by peternorton on April 11, 2011 | 78 comments



A draft of the SPDY specification is here:

http://dev.chromium.org/spdy

Mike Belshe gave a presentation on SPDY to the IETF HTTP working group at the most recent IETF meeting:

http://www.ietf.org/proceedings/80/slides/httpbis-7.pdf

We'd love for more folks to implement SPDY, both clients and servers.


Sounds like HTTP 1.2?

Since Google's servers and Chrome already support it, the chicken-and-egg problem might be solved. Firefox has an incentive to implement it to make Google search faster, and other web services should implement it to provide a smoother user experience.


HTTP 2.0 (1.2 would have to be backwards compatible)


Yes. And just to back this up, here is the relevant excerpt from the FAQ:

Q: Is SPDY a replacement for HTTP?

A: No. SPDY replaces some parts of HTTP, but mostly augments it. At the highest level of the application layer, the request-response protocol remains the same. SPDY still uses HTTP methods, headers, and other semantics. But SPDY overrides other parts of the protocol, such as connection management and data transfer formats.
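
To make the "data transfer formats" part concrete: where HTTP/1.1 begins with a plain-text request line, every SPDY frame begins with a small binary header. Here's a rough Python sketch of the 8-byte control frame header as described in the draft spec (SYN_STREAM is frame type 1); take it as an illustration, not a reference implementation:

  import struct

  # Control frames: control bit + 15-bit version | 16-bit frame type,
  # then 8-bit flags | 24-bit payload length (per the SPDY v2 draft).
  def control_frame_header(frame_type, flags, length, version=2):
      word1 = 0x80000000 | (version << 16) | frame_type
      word2 = (flags << 24) | (length & 0xFFFFFF)
      return struct.pack('!II', word1, word2)

  # SYN_STREAM (type 1) with a 10-byte payload: 80 02 00 01 00 00 00 0a
  print(control_frame_header(frame_type=1, flags=0, length=10).hex())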



As interesting as SPDY is, a number of problems are keeping it from being adopted:

1) Only Chrome supports it, and even there it isn't fully supported. One really painful omission is support for switching from normal HTTP to SPDY without using NPN.

2) It requires the TLS NPN extension to function seamlessly. That extension is only now being put into the OpenSSL package, which is why you need Chrome to do anything with SPDY: it applies a special patch to its internal version of OpenSSL. (A client-side sketch of NPN follows at the end of this comment.)

It is good that people are taking interest but there is still a lot of work to do.

After seeing http://www.igvita.com/2011/04/07/life-beyond-http-11-googles... I decided to fiddle with doing the same in Node.js: https://gist.github.com/911761. I'll be shoring it up into a real module, but it is going to require changes to the zlib compression module, the bufferlist/binary module, and the put module, and to be really useful it will require a special build of Node.js and OpenSSL.
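
For anyone who wants to poke at NPN itself, here is a minimal client-side sketch. It uses Python's ssl module, which only gained set_npn_protocols in a later release (and has since deprecated it in favor of ALPN), and it assumes an NPN-capable OpenSSL underneath, which is exactly the problem described above:

  import socket
  import ssl

  # The client advertises the protocols it speaks, in preference order;
  # the server picks one during the TLS handshake.
  ctx = ssl.create_default_context()
  ctx.set_npn_protocols(['spdy/2', 'http/1.1'])

  with socket.create_connection(('www.google.com', 443)) as sock:
      with ctx.wrap_socket(sock, server_hostname='www.google.com') as tls:
          # Returns None if the server did not participate in NPN.
          print('negotiated:', tls.selected_npn_protocol())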


Chrome doesn't use OpenSSL. It uses NSS. At least on Linux and Mac; I believe it uses the system-native crypto libraries on Windows.


It doesn't use it in the browser, but it does use it for tools relating to SPDY. See:

http://src.chromium.org/viewvc/chrome/trunk/src/net/tools/fl...

Regardless, neither OpenSSL nor NSS has NPN in a release version. NSS doesn't seem to have merged the patch yet: https://bugzilla.mozilla.org/show_bug.cgi?id=547312


Ah, indeed. Thanks for the pointer!


You can enable SPDY on your Apache servers with mod_spdy... http://code.google.com/p/mod-spdy/


Has anyone here tested this? It seems to have seen little work for a long time.


How long until someone releases a module for nginx to do the same?


Might be a while.

Igor was asked about this a while ago and expressed his dislike for it. Not sure if things have changed since.


Unlike HTTP, SPDY is not plain text (it uses a packet structure similar to TCP headers).

While Wireshark's usability has been constantly improving, I will miss being able to do:

  $ telnet google.com 80
  GET / HTTP/1.1
  Host: google.com


This is a reasonable concern, but the reality is that:

  $ telnet google.com 80
  GET / HTTP/1.0
is just going to become:

  >>> use SPDY;
  >>> SPDY->dump('http://www.google.com/');


Plain-text formats have always been slower for things that are not plain text. But even 30 years ago, when computers were far slower, Unix designers decided plain text was still the way to go, because it was easier to debug and easier for humans to work with. No specialized tools required, no poring over hex dumps. HTML won over other document formats. JSON and XML won over binary formats. Any coder can look at JSON and see what is being transferred, without the aid of anything but a text editor. Plain-text marshalling formats for binary data (e.g., base64) are still useful for pasting data into an email or adding ssh keys to authorized_keys with "cat >>".

This is perhaps a long way of saying that I don't expect tool support to make SPDY any nicer.
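
To make the base64 point concrete, a trivial Python round trip: arbitrary binary bytes become text that survives email, chat, and "cat >>".

  import base64

  blob = bytes(range(256))                       # binary data, mostly unprintable
  text = base64.b64encode(blob).decode('ascii')  # safe to paste anywhere
  assert base64.b64decode(text) == blob          # lossless round trip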


Less true for tcpdumps, which I believe was the previous poster's intention, since he mentioned Wireshark.

Also, piping together a bunch of Unix commands will suffer a little.


swig + libpcap should get you there for any mainstream language (assuming you're on a real OS).


SPDY is always encrypted, so it would be more fair to compare it with HTTPS, where you can't do anything like that anyway.


Sure you can. openssl s_client -connect google.com:443

Not sure what the equivalent is on Windows, though.


Sure, but that's not telnet. I would assume you could make a similar tool for debugging SPDY too.


Not surprisingly, it's... openssl s_client -connect google.com:443


They probably won't switch off HTTP support for a very long time :) So you won't miss anything.


Sigh. Those who abandon Unix principles deserve whatever they get.


Efficiency. You can keep your Unix-principled autoconf scripts, thank you.


It'll be interesting to see how fast other browsers/servers start supporting this. I'd love to see Google add it to App Engine sites, for one thing.


App Engine already does.


Interesting. I see it used on one of my Python GAE apps but not the other. One uses .appspot.com and the other a custom domain.

Edit: The one at appspot.com has SPDY.


Are you using SSL on the appspot.com app? IIRC SPDY requires SSL.


Strangely, the one at appspot.com isn't using SSL, but it is going over SPDY anyway according to Chrome.


About time! SPDY's been out for a while!



Would be great, except I can't use Chrome anymore as Flash crashes every time, and no one seems to know why.


http://www.google.com/support/forum/p/Chrome/thread?tid=0d78...

I've seen two things implicated in several Google support threads about the issue: first, having both the system and Google's sandboxed Flash versions enabled as plugins; and second, an older version of Trusteer Rapport (typically offered by banks).

Both may be fixed by now, but if it's still crashing, try about:plugins, enable Details at top right, and disable the system version of Flash.


I'm having the same issue; every time I need to use Flash I open Safari or FF. It's one of those things I find so strange about giant companies like Microsoft or Google: something this important isn't getting addressed.


Are you on a recent MBP? I had to turn off hardware acceleration in the version of Flash that shipped with Chrome before it would act (relatively) stable.


I'm having the same issues and for that reason I've moved back to Firefox 4, since it seems to have improved quite a bit since I moved to Chrome.

What seems rather ironic is that I only started having these issues with Chrome after they added the Flash sandbox, which was supposedly made to stop Flash from crashing the browser. The result is that Flash is always crashing the browser, something it never did before.

Sometimes I think Google needs a bigger QA-team.


Try adding --disable-internal-flash to your chrome shortcut, it'll use regular flash instead of the sandboxed version.


Actually, I believe the sandbox was added to prevent Flash from accessing parts of your machine that it shouldn't be able to access.

This is not about preventing Flash from crashing, but about preventing your machine from getting owned by an exploit for one of the countless discovered and unpatched Flash security flaws.


> Sometimes I think Google needs a bigger QA team

That's an understatement :)

When I visited Google Zürich (admittedly not as crucial as Mountain View) I learned that unit testing was mostly used so they had pretty red/green lines on those LCD screens in the hall, and that any dev could break the version /in production/ pretty easily with a misplaced commit...


  > ... I've moved back to Firefox 4 ...
That would be great, except FF4 doesn't let me log in to HN.



That sounds like a competitor to the page speed plugin, but anything running over HTTP/1.x has fundamentally limited performance.


It seems like the nginx web server is not going to implement SPDY anytime soon... http://forum.nginx.org/read.php?2,22517

Too bad.


Well, it didn't sound like a categorical "This will never happen!", more like "Eh, I don't like this about it, so I'm not planning on it yet."


> The result is a dramatically increased page load performance that only works between Chrome (as it includes SPDY support) and Google’s servers (which supports the features for Google sites.)

Reminds me a lot of MS's strategy of adding incompatible features to existing standards.


No.

Embrace & Extend is a well known, very destructive mechanism for subverting standards.

Enhancing existing standards with experimental extensions is a well known very useful mechanism for improving widely used standards.

As with anything to do with technology you need to look at the details to determine exactly what is happening on a case-by-case basis.

Just saying "that sounds like MS" isn't useful without examining the details. For example, many of Microsoft's extensions to HTML were very useful (eg, XMLHTTPRequest), whereas others weren't. It's a case-by-case thing, and asserting this is always bad is a very shallow interpretation.

TL;DR: Details matter. Experimenting by extending standards isn't always bad.


Yea, I have come to think that embrace and extend is not the worst thing MS did. For example, extending ODF would have been far less bad than creating OOXML.


Extending ODF was not possible because Sun would not allow it. Sun's IP licenses for ODF effectively gave them veto power over attempts to add things to ODF that they did not approve, so that was the end of that. Sun's position was that ODF would support exactly the feature set needed by StarOffice.

If you ignore IBM's and Sun's massive FUD campaigns against OOXML, and actually compare the specs, you'll find that OOXML is not anywhere near as bad as they claimed, and in many ways is better than ODF. ODF does have nicer markup; I'd much rather read or write an ODF file by hand. On the other hand, ODF is incomplete in major areas, and other areas are imprecise. (Sun and IBM actually tried to use this as a point in their FUD campaign, slamming OOXML for having too much detail.)


It seems likely that Sun's IP licence was specifically written to stop Microsoft from doing its well-documented embrace, extend, extinguish routine on ODF like they did to Sun's Java. Instead they just embraced, extended and extinguished the entire idea of a standard XML office format. Nice work.

Weirdly, Microsoft seem to have incompatibly forked their own OOXML format and are in no rush to fix that now that they've seen off the competitive threat posed by ISO standardisation of a competing format.


StarOffice and Microsoft started work on XML formats at around the same time, and most of the subsequent histories are largely parallel. There was no EEE here.


I'm a long way from being an expert in the area, so I won't comment on this exact situation, but an overall thought: the difference between MS and Google is that MS wanted a monopoly and was generally quite sloppy, whereas it's in Google's interest to have a faster internet for everyone, not just their own users. Plus, they have the ability to push for adoption of features developed for their browser/servers, which could lead to other browsers and servers (would it be Apache etc. that would have to implement it?) adding SPDY support.


How is that different? MS wants market share, Google wants market share; a faster internet for everyone is just a means to an end: more people using Google services. I'm sorry for not buying into the Google hype, but keep in mind they're a business like everyone else and they are in it to make money. If you ever need proof that Google is doing this for money like everyone else, just take a look at the history of their advertising services.


I'm not saying that Google isn't doing this for business reasons; it just happens that in Google's case, they benefit more from all browsers being better than from trying to make Chrome kill off other browsers.


This is similar to Microsoft happily pushing and supporting new bus/interface/port standards to allow many various hardware companies to bring newer, better, faster hardware to the consumer.

It wasn't because they were nice or had the consumer's best interest at heart. They just wanted PCs to be faster, because they had a monopoly that sat atop PCs. So they benefit when PCs get faster, get replaced and stay ahead of would-be competitors.


FYI Ajax was a "feature" of IE5.

And I truly hope I can one day be as sloppy as Microsoft and build software that reaches only 95% market share.

You need to remember that before you were born "stuff happened".


One of those incompatible features that they added ended up becoming Ajax.


They were actually adding it via a well-defined extension point (ActiveX instantiation), not by extending the existing spec (JS/DOM).

That happened later once the usefulness of XmlHttpRequest was noted and other browsers added it directly to their JS support before it was formally specified.

So this is exactly the kind of extension (via clean extension points) that Google is doing with SPDY. It is different from, say, implementing a <marquee> tag directly in the HTML renderer, which doesn't provide a nice extension point.


Except that the SPDY protocol is open, in the sense that Firefox and other browsers can implement it if they want.


Google's bottom line is a result of the volume of internet content consumed. Their presence means that no matter where you go, you encounter AdSense ads.

Google wants the whole web experience to be faster. They don't benefit in locking you in. I imagine Apache/Nginx/etc will all have SPDY support in due time (while still defaulting to HTTP or a hybrid setup) and it will be yet another nice enhancement to the web experience.


BTW, if you think what the IE monopoly did to web standards was bad enough, look at what the Netscape monopoly did to web standards. Saying that Netscape had a bad track record of following them would be underestimating the impact.


In a word, javascript. Imagine if we had a good language instead!


Is this why gmail in Chrome seems to be hanging on me a lot lately?


> Click on View live SPDY session to see all SPDY connections at a given time – all Google properties work with this technology.

This reminds me of Microsoft skipping part of the three-way handshake for IIS-to-IE connections to reduce the latency to their servers as compared to Apache.


Except this is within spec, given a particular (and different) spec.


And SPDY is well documented. Everyone can implement it, even IE if they wanted to. Microsoft has a knack for keeping the internal workings of protocols and file formats a secret, or at least very ambiguously documented.


That's the great thing about specs - so many to choose from!


The same can be said, and was originally coined, about standards.


The Unix-Haters Handbook attributes that quote to Admiral Grace Murray Hopper, but without citing the source.

Wikiquote cites others as having said it, including Andrew Tanenbaum, Patricia Seybold, and Ken Olsen.

http://en.wikiquote.org/wiki/Grace_Hopper


I'm sure it was independently thought of by many people (me not included; I read it years and years ago...). Can't remember from whom, or if it was attributed at all. It has, to me, the feel of a Henry Spencer-ism.


Indeed, but the comment I replied to said "spec" so I adapted it.


Right:

1) Create unfortunate speed hack so browser written by you is faster with servers written by you.

2) Declare it a spec without other existing implementations.

3) ????

4) PROFIT


And what about the bugs that Chrome encounters every now and then? Like not displaying images sometimes, or problems with Facebook chat? Please fix those.



The image loading issue the OP was referring to was reported back in September 2009 [0] and has still not been resolved for everyone, along with this [1] infamous issue reported in October 2008. The Chromium team absolutely seem to prioritise adding new features over dealing with serious long-standing issues in the browser. Not that they're entirely different from the competition in this regard.

[0] https://code.google.com/p/chromium/issues/detail?id=20960

[1] https://code.google.com/p/chromium/issues/detail?id=3543



You're honestly expecting him/her to contribute code to a project just to have a reasonable browsing experience?

Really?


No, but instead of complaining about bugs he can either submit a bug report and let someone get around to fixing it, or dig into the source and fix it himself.



