Another dirty little secret is that WebSocket does not (yet) work over SPDY, despite the fact that both protocols essentially originated in-house at the same company.
Both are effectively hacks on existing, well-established protocols, but at different layers. With WebSocket, you cannot use your tried and tested HTTP parser. With SPDY, you cannot use your tried and tested SSL implementation. With a little foresight, both could have been implemented using only SSL Next Protocol Negotiation (SPDY's approach) plus a completely separate, orthogonal SPDY/WebSocket parser. This would also remove some of the more "innovative" features of WebSocket (Sec-WebSocket-Key and that magical fixed UUID).
This is a testament to how heavily rushed both protocols were, and the kind of thing that could be avoided with a more strongly community-led process, which was missing at least in the early stages of WebSocket. I have no idea what it was like for SPDY, but given that the "innovative" header compression feature somehow survived, I'm guessing not good.
The discussion on the original article is in the context of basing HTTP 2 on SPDY. In other words, the very community-led process you say does not exist.
What's impressive is that at no other company could you wake up one morning and say, "HTTP is slow, so let's change that." Google, amazingly, can do stuff like that, and I'd rather have them ship early than not ship at all.
(Well I guess Microsoft could have done it but they would have screwed it up like every other net technology they introduce and then we'd have two internets or something worse.)
I'm not sure it's fair to say that Microsoft has "screwed up" every technology they've had a hand in. Ajax, notably, is a Microsoft invention. What I do think is a fair statement is that Microsoft has historically been slavishly backwards-compatible even with obvious design mistakes and implementation artifacts, whereas the other two 800-pound gorillas (Google and Apple) have a mantra of never keeping legacy tech around when it can safely be replaced. The latter results in cleaner technologies; the former results in longer-living technologies. As a programmer, I prefer the Google/Apple take, but it's worth noting that Microsoft's variant has a clear advantage for the user in at least the short-to-medium term.
>Microsoft's variant has a clear advantage for the user in at least the short-to-medium term.
The approach also often comes out on top when the client is a business and accounting considerations take the lead, as opposed to B2C situations.
When you divide the cost of a Microsoft license by 7 (number of years a business will be running XP on a workstation) it turns into a really great price.
The reverse compatibility mentality is universal in the enterprise.
To put that into proper context, XmlHttpRequest was a hack by a single person at Microsoft -- with zero input or coordination or working groups or long term planning -- on the MSXML team to do a favour to the Outlook team. Further it could only happen because of ActiveX, and paralleled various other light request tools that existed at the time.
XmlHttpRequest is a great demonstration of a hacker getting a solution out there.
EDIT: Downvotes? I am intimately aware of how XmlHttpRequest came into existence, having been closely involved with its birth, so if someone has some correction to add, please add it. But in no universe does XmlHttpRequest vindicate Microsoft on moving web standards along. Their contribution was unintentional and largely accidental.
During my interview with that team, one of the kids proudly took credit for the span tag. A weekend hack. Committed the code and shipped it. No review. SOP.
Being more standards-minded at the time, I wanted to throttle the kid.
With the benefit of hindsight, I see that all the jitter and experimentation was pretty much ideal. If a feature proves useful, other browsers may adopt it, and it may become a de facto standard.
Two internets looks like the direction Google are taking us, not Microsoft. No other company is so willing to bypass the standardization process and force other browser vendors to use their technology or face compatibility issues.
SPDY, NaCl, and Dart are among the examples. Developed behind closed doors where nobody else can have a say, then released as an "open standard", but at the same time slipped into the browser and promoted for use - thereby eliminating the option for others to propose an alternative, or any major (breaking) change.
I'm all for pushing forward with technology, but Google should open their doors earlier, or we'll end up in a situation where they're effectively dictating the direction of the web and other browser vendors are busy playing catch-up.
It'll only be impressive if Google actually take the advice from Opera et al, and adjust their SPDY protocol to suit everyone - not just Chrome users. (And the same for NaCl, Dart, and the next "new standard" Google push for).
… which has a clunky interface, annoying limitations and since release has been pushed forward by others (i.e. Mozilla).
Microsoft has had a few first-to-market ideas but they're horrible at continuous improvement because their core business is freezing an API and supporting it for a decade so enterprises feel comfortable using it.
HTTP wasn't, and isn't, broken. Google alone had the combination of a financial incentive to improve, engineers capable of improving, and few application-level opportunities to exploit instead. This is literally the last thing you should worry about optimizing. Very few of us are privileged enough to work at a place that is so short on real problems we can worry about one like this.
In this industry we have a 15-25 year rewrite cycle. The cycle always begins with something deceptively simple that wins wide support through being easy, and ends with multilayered nonsense of dizzying complexity. Efficiency was not a design goal of HTTP. "Changing" that isn't just insulting to the spirit of HTTP, it expresses a complete lack of regard for history.
I don't know how we're going to break the rewrite cycle, but I can tell you heaping complexity atop complexity appears to be exactly the engine driving it, and just when things reach the pinnacle of the absurd, someone comes along with a disruptive, simple alternative and takes over the world with it. SPDY is late-game technology. The end of this loop is near, and we'd be better served keeping an eye open for the start of the next one than trying to master the intricacies of one more difficult and unnecessary improvement.
This is only true if you narrowly define the scope to be what HTTP started as: synchronous, unidirectional, and only concerned with single files[1]. When you consider that the problems many, many people - not just Google - are dealing with break at least one of those assumptions, it's clear that HTTP is incomplete.
1. SPDY will be nice for anyone using much JS/CSS on their site
I don't think I could disagree with you more thoroughly than I do. Your argument amounts to saying the postal service is incomplete because mail trucks don't have jet engines on top.
First, HTTP pipelining exists.
Second, HTTP's inefficiencies establish a bounds on your performance. HTTP itself does not define your application's performance, unless you've already mined every optimization opportunity at the levels above. Do you ensure all your CSS and JavaScript assets are bundled into single files and minified? If the answer is "no," you have no business complaining about HTTP's performance—managing your assets better will improve performance far more than using SPDY alone will. What about your caching story? What about your database, is it indexed properly? These are tangible improvements that you can make in your application today that can and will improve performance for the end user.
At my work, there is a CGI app written in C that does some atmospheric calculations against a database. The old database was Oracle, the new one is PostgreSQL. The CGI now takes twice as long to run as it did before, which is noticeable because it used to take about 5 seconds, so now it's taking 10, for one day of calculations. Looking into the problem, we found a place where an index would be a good thing to add. So we added it, and the performance didn't change at all.
It turns out, the app is sending a separate SQL query for individual data points. The app needs hundreds of thousands of data points to do its work, so it's sending many thousands of queries on each request. One query can fetch all of the data this program needs in about 0.3 seconds. So this app is, in effect, not measuring the performance of the database itself at all; instead it's measuring the overhead of running an instantaneous query on Oracle and Postgres, multiplied by several thousand. It turns out that the overhead of setting up and performing a query is about twice as high with Postgres. This fact never matters in practice though, because when you notice it you're doing something stupid. Nobody ever says "you should switch to Oracle, because that Postgres's queries take 4 milliseconds to setup instead of 2." (The fact that this app is written in C and could easily be outperformed by a shell script is also telling.)
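The anti-pattern in that CGI app is the classic "N+1 query" problem: per-query overhead, multiplied by thousands, dwarfs the actual work. A hypothetical sqlite3 sketch of the difference (table and column names invented for illustration):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (station INTEGER, value INTEGER)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [(i % 100, i) for i in range(10_000)],
)

# Per-point approach: one round trip per station (what the slow CGI does).
start = time.perf_counter()
slow = [
    conn.execute(
        "SELECT SUM(value) FROM readings WHERE station = ?", (s,)
    ).fetchone()[0]
    for s in range(100)
]
per_point = time.perf_counter() - start

# One query that fetches everything in a single pass.
start = time.perf_counter()
fast = [
    row[1]
    for row in conn.execute(
        "SELECT station, SUM(value) FROM readings"
        " GROUP BY station ORDER BY station"
    )
]
single = time.perf_counter() - start

assert slow == fast  # identical results, wildly different cost
print(f"100 queries: {per_point:.4f}s, 1 query: {single:.4f}s")
```

As the anecdote says, once you notice the per-query overhead dominating, the fix is to stop paying it thousands of times, not to shop for a database with cheaper overhead.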
This is exactly why HTTP is in no sense incomplete or narrowly defined. Google is the one with the narrowing requirements: they know exactly how much money HTTP's inefficiencies are costing them and are in a position to throw engineering time and energy at that number to decrease it. They are also in a position to optimize every other corner of their stack, and presumably they have. This is not true anywhere else.
SPDY is somewhat beneficial for consumers. But it undermines the simplicity and clarity of HTTP. That's what makes this late-game technology. SPDY is SOAP and CORBA to HTTP's RPC. Is it a better definition? Probably. But it's also harder and the benefits are insignificant except at scale.
… and is not currently usable, nor as full-featured as SPDY even if correctly implemented by all vendors
> If the answer is "no," you have no business complaining about HTTP's performance—managing your assets better will improve performance far more than using SPDY alone will
Flat-out wrong: there are many use cases which require multiple requests (you only mentioned CSS/JS but images are significant, too) and this would be exhibit A for an HTTP deficiency: you've internalized the idea that a visitor to the site must download and process EVERYTHING you could ever use to avoid making multiple requests, wasting bandwidth and device CPU/memory because you're trying to use asset packaging as an end-run around protocol shortcomings.
As for databases, this was a fascinating and completely irrelevant digression.
could be avoided with a more strongly community-led process
Would that have yielded something better? I would argue that it would have yielded nothing.
Though nonetheless I am a bit confused by your comparison of the two. WebSockets are not "another way of doing HTTP". SPDY is. They are separate solutions for different problems.
SPDY and WebSocket both have protocol negotiation and framing but they are implemented differently. That's why Microsoft's S+M proposal essentially recasts SPDY on top of WebSocket so they can share the negotiation and framing code.
Despite being entirely different beasts from a design perspective, they solve essentially the same problem: bidirectional communication designed to minimize latency over a single established TCP connection, with protocol-internal notions of channels and framing.
I love the anecdote (and loathe working with that stereotype), but don't think it applies here. We're not exactly drawing a box labelled "DATA", one labelled "SERVER", and drawing a line between them: these protocols are significant, incompatible implementations that do almost the same thing internally, and interact equally badly with the remainder of the stack externally. To say the similarity is only superficial seems inaccurate.
It's a bit like redesigning a car from scratch just because you need snow chains for certain roads, and upholstery covers for certain passengers.
This "architecture astronaut" phrase. I don't think it means what you think it means.
The two descriptions you give below describe essentially, not superficially or "astronautically", the same kind of thing, optimized for slightly different use cases (not even THAT different).
It would have yielded the same thing as the W3C's decade-long stagnation during the development of XHTML: everyone gets bored and they set up the WHATWG instead.
> I have no idea what it was like for SPDY, but given that the "innovative" header compression feature somehow survived, I'm guessing not good.
Considering the overwhelming majority of HTTP requests are nearly all headers, and request headers see 88% compression with SPDY [0], you seem to have an awfully negative view of header compression.
The complaint is about how the compression is implemented:
Header compression is a good feature with real world applications,
and deflate with persistent context is a good approach to achieve
it. A fixed dictionary is probably not very effective as it
complicates the implementation while only providing initial value to
the compression. As an example, this is how the average request
sizes compare between HTTP and SPDY in different modes, using the
set of captured headers used to train the current SPDY 3 dictionary.
HTTP                               821.1
HTTP zlib compressed               543.5
HTTP compressed with dictionary    497.0
SPDY                               913.7
SPDY zlib compressed               606.5
SPDY compressed with dictionary    517.0
I.e. Just putting the HTTP request in a SPDY stream (after removing
disallowed headers) only differs by 20 bytes to SPDY binary header
format with dictionary based zlib compression. The benefit of using
a dictionary basically goes away entirely on the subsequent request.
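Both effects in those numbers are easy to demonstrate with Python's zlib (the header text and "dictionary" below are invented stand-ins, not the real SPDY 3 dictionary): a preset dictionary buys a win on the first request, while a persistent compression context makes subsequent requests tiny, because the first request itself becomes the dictionary.

```python
import zlib

# Invented stand-ins: a tiny "dictionary" of common header fragments and
# two requests that share most of their header text.
DICTIONARY = (b"GET HTTP/1.1 Host: Accept: Accept-Encoding: gzip,"
              b" deflate User-Agent: Cookie:")
req1 = (b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
        b"Accept-Encoding: gzip, deflate\r\nUser-Agent: demo/1.0\r\n\r\n")
req2 = (b"GET /style.css HTTP/1.1\r\nHost: example.com\r\n"
        b"Accept-Encoding: gzip, deflate\r\nUser-Agent: demo/1.0\r\n\r\n")

def deflate(data: bytes, zdict: bytes = b"") -> bytes:
    """One-shot deflate, optionally primed with a preset dictionary."""
    if zdict:
        c = zlib.compressobj(9, zlib.DEFLATED, zdict=zdict)
    else:
        c = zlib.compressobj(9)
    return c.compress(data) + c.flush(zlib.Z_SYNC_FLUSH)

# First request: the preset dictionary provides an initial win.
plain = deflate(req1)
with_dict = deflate(req1, DICTIONARY)
print(len(req1), len(plain), len(with_dict))

# Persistent context: compress both requests with ONE compressor, so req2
# deflates against req1's history and shrinks dramatically.
c = zlib.compressobj(9)
first = c.compress(req1) + c.flush(zlib.Z_SYNC_FLUSH)
second = c.compress(req2) + c.flush(zlib.Z_SYNC_FLUSH)
print(len(first), len(second))
```

This is the argument being made above: once the persistent context is warm, the fixed dictionary contributes almost nothing, so its implementation cost is hard to justify.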
> While the concerns is valid, flow control looks like overkill to something where a per-channel pause control frame could do the same job with less implementation and protocol overhead.
Coming from messaging world I can say: nope. "Pause" frames are a very, very bad idea and lead to more problems than they solve. Windowing is much better engineering choice.
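The difference is easy to see in miniature. With windowing, the receiver grants credit ahead of time and the sender can never overrun it; a hypothetical sketch of per-stream credit accounting, loosely modeled on SPDY 3's WINDOW_UPDATE (names invented):

```python
class StreamWindow:
    """Credit-based flow control: the sender may only transmit as many
    bytes as the receiver has granted in advance."""

    def __init__(self, initial: int = 64 * 1024):
        self.credit = initial

    def can_send(self, nbytes: int) -> bool:
        return nbytes <= self.credit

    def on_send(self, nbytes: int) -> None:
        if not self.can_send(nbytes):
            raise RuntimeError("flow-control violation")
        self.credit -= nbytes

    def on_window_update(self, delta: int) -> None:
        # The receiver grants more credit as it drains its buffer.
        self.credit += delta

win = StreamWindow(initial=10)
win.on_send(8)
print(win.can_send(4))   # only 2 bytes of credit remain
win.on_window_update(6)
print(win.can_send(4))   # sending may resume after the update
```

The engineering argument for windows over pause frames: a pause frame races with data already in flight, forcing the receiver to buffer an unbounded amount anyway, whereas a window bounds the in-flight data by construction.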
> Also note that TCP provides the URG channel for exception messaging.
Which doesn't work.
> 2.6. Push ... The client has the option to read and discard this information, but that may be a costly waste of bandwidth.
It's worth noting that caching and Push don't play together.
All your assertions are unsubstantiated. I'm not saying they're wrong, I'm just pointing out that you haven't explained any of your claims. You needn't go to any great length to cite sources, but a sentence or two of facts or explanation to match each assertion would make your post more useful to read.
The biggest takeaway I get from this is that Opera has done this before but never shared it with the outside world. At a minimum we know that the kind of speed-up SPDY proposes is possible and apparently used in production systems.
Of all the parties Google should listen to for SPDY feedback, Opera is probably the top of the list. It looks like Opera took it quite seriously and shared a lot of good work with us all, and I thank them for it. I'd say their point about the asynchronous headers is one that requires serious addressing ASAP. For one example, is it valid to push down a header redeclaring the encoding of the response at the very end? What would that even mean? It's a good point.
I'm having a lot of trouble with people imputing arguments to me that I'm not making this week. My point is that they are known to have this experience and that the experience is highly relevant - not that Opera Mini solves anything else.
Sorry for not being clear. I completely agree that Opera's experience is relevant and useful. They have done some great work on optimizing mobile web performance and it would be fantastic to draw on that expertise to make open standards which everyone can benefit from. However, don't forget that SPDY and related technologies create a problem for Opera, which is based on a closed protocol and ecosystem. Personally, I believe in an open web based on open technologies which can be implemented by anyone. Without releasing code or a spec, it's easy to take a position that your approach is better - since nobody can scrutinize you to prove otherwise.
Where do you see them claim their approach is better? Preferably with quotes.
It seems likely - though I can't prove it - that behind a couple of statements like "As defined the feature is not powerful enough to push non-request related content (such as new RSS items)" lies the fact that their approach does do it (and that they possibly had to add it later, only after discovering it was a problem). But the document is very carefully written strictly as an examination of SPDY, with no braggadocio I can see, nor any trace of marketing beyond the initial statement of "Hey, we've done some stuff and here are some observations we are in a position to make".
If you really think we (i.e., Opera) have that much interest in closed ecosystems, you're wrong. Yes, Mini uses a closed protocol, but ultimately it's just another web browser. What data format we use to transfer the mostly-rendered page down to the client isn't of much interest. In reality, Mini teaches little relevant to SPDY (because what it sends over the wire is very specific whereas SPDY has to cope with arbitrary content).
Because SPDY "looks like" SSL at layer 5, while SCTP does not: common deployments of Squid (one of the most popular HTTP proxies on earth) will pass through SPDY if they allow SSL, without any config changes.
On the other hand, SCTP has much stronger interactions with basically any deployed firewall/proxy anywhere, lacks a widely used implementation for popular operating systems (Windows!), etc. For many networks, deploying SCTP might mean replacing millions of dollars' worth of networking hardware.
For one obvious reason, SCTP won't make it through firewalls. Part of the design goal of SPDY is to minimize changes at every other layer of the web protocol stack.
Changing the transport requires patching the kernels of billions of installed devices. SCTP is not available on Windows, Mac, or any popular mobile device I know of.
> 2.5. Asynchronous headers ... While this is already possible to do with chunked encoding trailer, it is not a feature in popular use.
Can anyone point to a working example of this use of a chunked encoding trailer? It seems like a solution to the primary problem that came up on the discussion of HTTP Streaming: http://news.ycombinator.com/item?id=4042247
> And if it for instance needs to redirect the user, it can’t change the status/headers to a 302/Location if it’s already ...
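For reference, the trailer mechanism being asked about puts headers after the final zero-length chunk. A minimal sketch that builds such a body (the trailer header name is invented for illustration):

```python
def chunked_with_trailer(chunks, trailers):
    """Build a raw chunked HTTP/1.1 body whose trailer section carries
    headers computed only after the body was streamed (e.g. a checksum)."""
    out = []
    for chunk in chunks:
        out.append(b"%x\r\n%s\r\n" % (len(chunk), chunk))
    out.append(b"0\r\n")  # the zero-length chunk ends the body ...
    for name, value in trailers:
        out.append(b"%s: %s\r\n" % (name, value))  # ... then the trailers
    out.append(b"\r\n")
    return b"".join(out)

body = chunked_with_trailer(
    [b"hello, ", b"world"],
    [(b"X-Body-Checksum", b"3adbbad1")],  # hypothetical trailer header
)
print(body.decode())
```

The response would also need `Transfer-Encoding: chunked` and a `Trailer: X-Body-Checksum` header up front. Note this only defers headers announced in advance; it cannot retroactively change the status line, which is exactly the redirect limitation the quoted passage is pointing at.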
They sound defensive. That's probably a good thing.
Can someone explain how Microsoft's S+M proposal differs and how well it would work in the real world with proxies, firewalls, NAT etc? (I'm being lazy and haven't read the paper)
The biggest difference between SPDY and Microsoft's proposal is that Microsoft's proposal is only theory at this point. The SPDY proposal has dozens of independent implementations behind it and 3 years of operational experience.
I'm not trying to dis the proposal; just point out that it is in its infancy. To jump start it, Microsoft started with SPDY at its core, but changed the syntax.
It's a 2200-word document written by someone whose native language isn't English, and there is a single verb that doesn't agree with its subject. Since that's the only actual complaint you appear to have about the document, I'll assume you agreed with the very well reasoned, well researched viewpoints therein. Overall worth reading, I'd say.
Opera Software is located in Norway. Let's see how much "respect" you have for the language if you have to write in Norwegian.
I bet their English is better than yours. You didn't even start your first word with a capital 'S', so you made a mistake within your very first sentence.
Why is it when people say things like "lack of respect for [insert language here]", it's always to defend their own lack of respectful behavior towards, you know, actual human beings?
When complaining about somebody else's non-native English, consider using capital letters if you don't want to be yet another instance of http://en.wikipedia.org/wiki/Muphry%27s_Law .