Since Google's servers and Chrome already support it, the chicken-and-egg problem may already be solved. Firefox has an incentive to implement it to make Google search faster, and other web services should implement it to provide a smoother user experience.
Yes. And just to support this, here is the relevant excerpt from the FAQ:
Q: Is SPDY a replacement for HTTP?
A: No. SPDY replaces some parts of HTTP, but mostly augments it. At the highest level of the application layer, the request-response protocol remains the same. SPDY still uses HTTP methods, headers, and other semantics. But SPDY overrides other parts of the protocol, such as connection management and data transfer formats.
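To make the "data transfer formats" part concrete, here is a minimal Python sketch of the SPDY framing described in the draft-2 spec: every frame starts with an 8-byte header, and a single high bit distinguishes control frames (which carry a protocol version and frame type) from data frames (which carry a 31-bit stream ID). This is a reading aid, not an implementation.

```python
import struct

def parse_frame_header(data: bytes) -> dict:
    """Parse the 8-byte header that starts every SPDY frame.

    Control frames set the high bit and carry a version and type;
    data frames clear it and carry a 31-bit stream ID instead.
    (Layout per the SPDY draft-2 framing spec.)
    """
    word, flags_len = struct.unpack("!II", data[:8])
    flags = flags_len >> 24          # high byte: frame flags
    length = flags_len & 0xFFFFFF    # low 24 bits: payload length
    if word & 0x80000000:            # control bit set
        return {"control": True,
                "version": (word >> 16) & 0x7FFF,
                "type": word & 0xFFFF,
                "flags": flags,
                "length": length}
    return {"control": False,
            "stream_id": word & 0x7FFFFFFF,
            "flags": flags,
            "length": length}

# A SYN_STREAM (type 1) control frame header: version 2, FLAG_FIN, 10-byte payload.
hdr = parse_frame_header(bytes([0x80, 0x02, 0x00, 0x01, 0x01, 0x00, 0x00, 0x0A]))
```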
As much as SPDY is interesting there are a number of problems that are keeping it from being adopted:
1) Only Chrome supports it, and even there support is incomplete. One really painful omission is support for switching from normal HTTP to SPDY without using NPN.
2) It requires the TLS NPN extension to function seamlessly. NPN is only now being added to the OpenSSL package, which is why you need Chrome to do anything with SPDY: it applies a special patch to its bundled copy of OpenSSL.
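For anyone unfamiliar with NPN: during the TLS handshake the server advertises the protocols it speaks and the client picks one, so SPDY can be negotiated without an extra round trip. Here is a toy model of the client-side selection logic (a hypothetical helper for illustration only; real NPN happens inside the TLS stack):

```python
def npn_select(server_advertised: list, client_preferences: list) -> str:
    """Simplified model of TLS Next Protocol Negotiation (NPN).

    The server advertises its protocol list in the handshake; the
    client picks its most-preferred protocol from that list. If there
    is no overlap, NPN still lets the client name a protocol, so we
    fall back to the client's first choice.
    """
    for proto in client_preferences:
        if proto in server_advertised:
            return proto
    return client_preferences[0]

# A SPDY-capable server and client agree on spdy/2; a plain HTTP
# server causes a fallback to http/1.1.
assert npn_select(["spdy/2", "http/1.1"], ["spdy/2", "http/1.1"]) == "spdy/2"
assert npn_select(["http/1.1"], ["spdy/2", "http/1.1"]) == "http/1.1"
```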
It is good that people are taking interest but there is still a lot of work to do.
After seeing http://www.igvita.com/2011/04/07/life-beyond-http-11-googles... I decided to fiddle with doing the same in Node.js: https://gist.github.com/911761. I'll be shoring it up into a real module, but it will require changes to the zlib compression module, the bufferlist/binary module, and the put module; to be really useful it will also require a special build of Node.js and OpenSSL.
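The zlib changes are needed because SPDY compresses its header blocks with zlib seeded by a preset shared dictionary of common header strings, which stock zlib bindings didn't expose at the time. A sketch of the idea in Python (3.3+), using an abbreviated made-up dictionary; the real SPDY dictionary is longer and fixed by the spec:

```python
import zlib

# Abbreviated stand-in for SPDY's preset header dictionary. The real
# dictionary is defined by the spec; this one is invented for the demo.
DICT = (b"optionsgetheadpostputdeletetraceacceptaccept-charset"
        b"hostuser-agentcontent-lengthcontent-type")

def compress_headers(block: bytes) -> bytes:
    """Compress a header block with a zlib stream primed by DICT."""
    c = zlib.compressobj(zdict=DICT)
    # Z_SYNC_FLUSH emits all pending data so the peer can decode the
    # block immediately without closing the stream.
    return c.compress(block) + c.flush(zlib.Z_SYNC_FLUSH)

def decompress_headers(data: bytes) -> bytes:
    """Inflate a header block using the same preset dictionary."""
    d = zlib.decompressobj(zdict=DICT)
    return d.decompress(data)

block = b"hostexample.comaccepttext/htmluser-agentdemo"
wire = compress_headers(block)
assert decompress_headers(wire) == block
```

Both sides must be primed with the identical dictionary, which is why it has to be baked into the protocol rather than negotiated.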
Plain-text formats have always been slower for things that are not plain text. But even 30 years ago, when computers were even slower, Unix designers decided plain text was still the way to go, because it was easier to debug and easier for humans to work with. No specialized tools required, no poring over hex dumps. HTML won over other document formats. JSON and XML won over binary formats. Any coder can look at JSON and see what is being transferred, without the aid of anything but a text editor. Plain-text marshalling formats for binary data (e.g., base64) are still useful for pasting data into an email or adding ssh keys to authorized_keys with "cat >>".
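For example, a handful of bytes that would be unprintable on their own become safe, pasteable ASCII:

```python
import base64

# Arbitrary binary data, including a NUL and a high byte -- hostile to
# terminals, email bodies, and "cat >>" alike.
raw = bytes([0x00, 0xFF, 0x10, 0x80])

# base64 turns it into plain ASCII text that survives copy/paste...
text = base64.b64encode(raw).decode("ascii")
assert text == "AP8QgA=="

# ...and the round trip recovers the original bytes exactly.
assert base64.b64decode(text) == raw
```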
This is perhaps a long way of saying that I don't expect that having tool support makes SPDY any nicer.
I've seen two things implicated in several Google support threads about the issue: first, having both the system and Google's sandboxed Flash versions enabled as plugins; second, an older version of Trusteer Rapport (software typically offered by banks).
Both may be fixed by now, but if it's still crashing, open about:plugins, enable Details at the top right, and disable the system version of Flash.
I'm having the same issue; every time I need to use Flash I open Safari or Firefox. It's one of those things I consider so strange about giant companies like Microsoft or Google: something this important isn't getting addressed.
Are you on a recent MBP? I had to turn off hardware acceleration in the version of Flash that shipped with Chrome before it would act (relatively) stable.
I'm having the same issues, and for that reason I've moved back to Firefox 4; it seems to have improved quite a bit since I switched to Chrome.
What seems rather ironic is that I only started having these issues with Chrome after they added the Flash sandbox, which was supposedly made to stop Flash from crashing the browser. The result is that Flash is always crashing the browser, something it never did before.
Actually, I believe the sandbox was added to prevent Flash from accessing parts of your machine that it shouldn't be able to access.
This is not about preventing Flash from crashing, but about preventing your machine from getting owned by an exploit for one of the countless discovered and unpatched flash security flaws.
> Sometimes I think Google needs a bigger QA team
That's an understatement :)
When I visited Google Zürich (admittedly not as crucial as Mountain View), I learned that unit testing was mostly used so they had pretty red/green lines on those LCD screens in the hall, and that any dev could break the version /in production/ pretty easily with a misplaced commit...
The result is dramatically increased page-load performance that only works between Chrome (which includes SPDY support) and Google's servers (which support the feature for Google sites).
Reminds me a lot of MS's strategy of adding incompatible features to existing standards.
Embrace & Extend is a well-known, very destructive mechanism for subverting standards.
Enhancing existing standards with experimental extensions is a well-known, very useful mechanism for improving widely used standards.
As with anything to do with technology you need to look at the details to determine exactly what is happening on a case-by-case basis.
Just saying "that sounds like MS" isn't useful without examining the details. For example, many of Microsoft's extensions to HTML were very useful (eg, XMLHTTPRequest), whereas others weren't. It's a case-by-case thing, and asserting this is always bad is a very shallow interpretation.
TL;DR: Details matter. Experimenting by extending standards isn't always bad.
Yeah, I have come to think that embrace-and-extend is not the worst thing MS did. For example, extending ODF would have been far less bad than creating OOXML.
Extending ODF was not possible because Sun would not allow it. Sun's IP licenses for ODF effectively gave them veto power over attempts to add things to ODF that they did not approve, so that was the end of that. Sun's position was that ODF would support exactly the feature set needed by StarOffice.
If you ignore IBM's and Sun's massive FUD campaigns against OOXML, and actually compare the specs, you'll find that OOXML is not anywhere near as bad as they claimed, and in many ways is better than ODF. ODF does have nicer markup--I'd much rather read or write by hand an ODF file. On the other hand, ODF is incomplete in major areas, and other areas are imprecise. (Sun and IBM actually tried to use this as a point in their FUD campaign, slamming OOXML for having too much detail).
It seems likely that Sun's IP license was specifically written to stop Microsoft from doing its well-documented embrace, extend, extinguish routine on ODF, like they did to Sun's Java. Instead they just embraced, extended, and extinguished the entire idea of a standard XML office format. Nice work.
Weirdly, Microsoft seem to have incompatibly forked their own OOXML format and are in no rush to fix that now that they've seen off the competitive threat posed by ISO standardisation of a competing format.
StarOffice and Microsoft started work on XML formats at around the same time, and most of the subsequent histories are largely parallel. There was no EEE here.
I'm a long way from being an expert in this area, so I won't comment on this exact situation, but an overall thought: the difference between MS and Google is that MS wanted a monopoly and was generally quite sloppy, whereas it's in Google's interest to have a faster internet for everyone, not just their own users. Google also has the ability to push for adoption of features developed for their browser and servers, which could lead to other browsers and servers (would it be Apache etc. that would have to implement it?) adding SPDY support.
How is that different? MS wants market share, Google wants market share; a faster internet for everyone is just a means to an end: more people using Google services. I'm sorry for not buying into the Google hype, but keep in mind they're a business like everyone else, and they are in it to make money. If you ever need proof that Google is doing this for money like everyone else, just take a look at the history of their advertising services.
I'm not saying that Google isn't doing this for business reasons - it just happens that in Google's case, they benefit more from all browsers being better than trying to make Chrome kill off other browsers.
This is similar to Microsoft happily pushing and supporting new bus/interface/port standards to allow many various hardware companies to bring newer, better, faster hardware to the consumer.
It wasn't because they were nice or had the consumer's best interest at heart. They just wanted PCs to be faster, because they had a monopoly that sat atop PCs. So they benefit when PCs get faster, get replaced and stay ahead of would-be competitors.
They were actually adding it over a well-defined extension point (ActiveX instantiation), not by extending the existing spec (JS/DOM).
That happened later, once the usefulness of XMLHttpRequest was noted and other browsers added it directly to their JS support before it was formally specified.
So this is exactly the way to extend (by using clean extension points) that Google is using with SPDY. It's different from, say, implementing a <marquee> tag directly in the HTML renderer, which doesn't go through a nice extension point.
Google's bottom line is a result of the volume of internet content consumed. Their presence means that no matter where you go, you encounter Adsense ads.
Google wants the whole web experience to be faster. They don't benefit in locking you in. I imagine Apache/Nginx/etc will all have SPDY support in due time (while still defaulting to HTTP or a hybrid setup) and it will be yet another nice enhancement to the web experience.
BTW, if you think what the IE monopoly did to web standards was bad, look at what the Netscape monopoly did to them. Saying that Netscape had a bad track record of following standards would be understating the impact.
>Click on View live SPDY session to see all SPDY connections at a given time – all Google properties work with this technology.
This reminds me of Microsoft skipping part of the three-way handshake for IIS-to-IE connections to reduce the latency to their servers as compared to Apache.
And SPDY is well documented; everyone can implement it, even IE if they wanted to. Microsoft has a knack for keeping the internal workings of protocols and file formats secret, or at least very ambiguously documented.
I'm sure it was independently thought of by many people (me not included; I read it years and years ago...). Can't remember from whom, or if it was attributed at all. It has, to me, the feel of a Henry Spencer-ism.
The image-loading issue the OP was referring to was already reported in September 2009 [0] and is still unresolved for everyone. Along with this [1] infamous issue reported in October 2008. The Chromium team absolutely seems to prioritise adding new features over dealing with serious long-standing issues in the browser. Not that they're entirely different from the competition in this regard.
No, but instead of complaining about bugs, he can either submit a bug report and let someone get around to fixing it, or dig into the source and fix it himself.
http://dev.chromium.org/spdy
Mike Belshe presented about SPDY in the IETF HTTP working group meeting at the most recent IETF meeting:
http://www.ietf.org/proceedings/80/slides/httpbis-7.pdf
We'd love for more folks to implement SPDY, both clients and servers.