Hello HTTP/2, Goodbye SPDY (chromium.org)
426 points by Nimi on Feb 9, 2015 | 179 comments



I got a copy of Poul-Henning Kamp's critique "HTTP/2.0 - The IETF is Phoning It In" off the ACM website before the link went dead. Here's a bit of what he said about it:

"Some will expect a major update to the world’s most popular protocol to be a technical masterpiece and textbook example for future students of protocol design. Some will expect that a protocol designed during the Snowden revelations will improve their privacy. Others will more cynically suspect the opposite. There may be a general assumption of "faster." Many will probably also assume it is "greener." And some of us are jaded enough to see the "2.0" and mutter "Uh-oh, Second Systems Syndrome."

The cheat sheet answers are: no, no, probably not, maybe, no and yes.

If that sounds underwhelming, it’s because it is.

HTTP/2.0 is not a technical masterpiece. It has layering violations, inconsistencies, needless complexity, bad compromises, misses a lot of ripe opportunities, etc. I would flunk students in my (hypothetical) protocol design class if they submitted it. HTTP/2.0 also does not improve your privacy. Wrapping HTTP/2.0 in SSL/TLS may or may not improve your privacy, as would wrapping HTTP/1.1 or any other protocol in SSL/TLS. But HTTP/2.0 itself does nothing to improve your privacy. This is almost triply ironic, because the major drags on HTTP are the cookies, which are such a major privacy problem, that the EU has legislated a notice requirement for them. HTTP/2.0 could have done away with cookies, replacing them instead with a client controlled session identifier. That would put users squarely in charge of when they want to be tracked and when they don't want to—a major improvement in privacy. It would also save bandwidth and packets. But the proposed protocol does not do this.

[He goes on to tear a strip off the IETF and the politics behind HTTP/2.0 ...]


That critique was a poor one. In fact, what you quoted was the meat of it, with little to back up the hyperbole, except for the argument that we shouldn't have all connections encrypted because that will make the NSA work harder to break all encryption. Which is not a good argument.

The discussion on it covered it pretty well: https://news.ycombinator.com/item?id=8850059

edit: it's still in google cache if anyone else wants to read it for themselves: https://webcache.googleusercontent.com/search?q=cache:3i6EwF...


> This is almost triply ironic, because the major drags on HTTP are the cookies, which are such a major privacy problem, that the EU has legislated a notice requirement for them.

Which, of course, is useless, since any browser supports turning off cookies.

As an EU citizen, my experience of this regulation is simply that I have to click "OK" to accept cookies on all the EU sites I visit.

I apologize if this comes off as a rant, but it really is annoying to constantly be presented with "This site uses cookies. Continue?" when I visit a site. :)


Agreed, it is completely useless. I have never heard of anyone benefiting from this in the slightest. A side effect could have been that people actually stopped and learned what cookies are, but as I've asked my non-technical friends, no one has bothered to do this. I can't say I blame them.


My experience is the same, with an added facepalm every time I see it.

Is it an EU thing then? Does it appear or not depending on the origin IP address of the request?


Yes, it is law in various EU countries that requires websites to ask permission to store cookies. However, now that the legislators have seen the effect and have educated themselves some more, the law is already being effectively retracted, at least in my country (NL).


Which is why, as a tech community, we should have attempted to come up with a better solution. A session identifier controlled by the client (say, just a UUID, which can store no data from the server), with the associated UI to cleanly "logout" or "reset" a session with a website, may have alleviated privacy concerns without breaking the functionality we originally were looking to add to HTTP.

Yes, this could not have been rolled out to everyone immediately, but neither is any other addition to JS, HTTP, HTML, CSS, &c. We should help build the future, not simply accommodate the past all the time.
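
A minimal sketch of the idea, assuming a hypothetical "Session-Id" request header and a purely client-generated random token ("reset" is just minting a new token, "logout" is sending none):

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
        "net/http"
    )

    // newSessionID mints a random 128-bit token entirely on the client side;
    // the server can correlate requests but never chooses or stores data in it.
    func newSessionID() string {
        b := make([]byte, 16)
        rand.Read(b)
        return hex.EncodeToString(b)
    }

    func main() {
        sessionID := newSessionID() // "reset" = call this again; "logout" = stop sending it

        req, _ := http.NewRequest("GET", "https://example.com/", nil)
        req.Header.Set("Session-Id", sessionID) // hypothetical header name
        fmt.Println(req.Header)
    }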


Considering (relative to now) cookies were added really early on... it would have made sense to have a user/browser token that was only available to a single site, with a reset option.

Another thing that's a little irksome is that nobody uses http auth, because there's no easy logout option.

I will say I do like parts of http/2 being there... I think that dnssec + tls should have been part of the official mix. At the very least CA pricing has fallen into a reasonable range (about $10/month) for wildcard certs. Another thing that took too long is SNI.

Overall though, I think people have gotten pretty spoiled when it comes to technology (myself included)... OMG it takes a whole second and a half between clicking login and being able to see my bank statement. I remember when it was 15-seconds... I think everyone should experience a modem ANSI interface at 9600bps... (not just because I still like BBSes and ANSI art).


I 100% agree with you on the auth part. Mutual auth via something like SRP built into the browser would be a huge boon to building web sites that don't have to handle the plain-text password. Some nice chrome on that, or even an interface with HTML (<form method="mutual-auth-login"> <form method="mutual-auth-logout"> or something) would be super nice.

> OMG it takes a whole second and a half between clicking login and being able to see my bank statement. I remember when it was 15-seconds

What's worse is that people seem to get offended when you say "I have a crappy computer and a crappy connection, this is too bloated". I actually started running NoScript solely to block analytics and ads that were causing page rendering to be delayed for 10+ seconds. I had the page. It was all there, but the browser had to wait for all resources before it'd do a full render. All of that immediately stopped with NoScript.


HTTP/2 may actually help PHK get what he wants sooner. Proposing to make major syntax and semantic changes would have turned HTTP/2 into an extremely contentious 10-year project. Splitting that into more manageable chunks (syntax in HTTP/2, semantics in HTTP/3), combined with the iterative process that was used to evolve SPDY into HTTP/2, may be more tractable.

Of course, no process will help you if everyone disagrees with your proposals.


> Splitting that into more manageable chunks (syntax in HTTP/2, semantics in HTTP/3), combined with the iterative process

Sounds great!

Just one question: how many versions of the (now flabbergastingly complex) HTTP protocol will I need to support in my application and libraries? Because as we all know, once deployed on the internet, something will never be updated and will need to be supported forever, meaning you can never obsolete that HTTP/1.1 and /2.0 code.

HTTP/1.1 was a fantastic protocol in that it survived for almost 2 decades unchanged. Here we have HTTP/2.0 and people are already talking about what we will need to add to HTTP/3.0.

If HTTP is going to end up being the new MSIE, we can only blame ourselves because we allowed Google to use its dominance to push a protocol the internet didn't need.


Whatever PHK wants it to be, HTTP/2 is a great step forward from where we are today. Check this out: https://http2.golang.org/gophertiles

This is going to make the web so much faster, particularly on mobile devices.


> Whatever PHK wants it to be, HTTP/2 is a great step forward from where we are today.

A hugely bloated, binary protocol is better than the simple, text-based one we have today? I greatly disagree. HTTP/1.1 could use an update, but HTTP/2 was not the answer.


I wonder if anyone complaining about binary formats has ever written a high performance parser.

HTTP's text format, in particular, is a mess. You can continue headers from one line to another. You can embed comments into header values. Seriously. Comments. In a protocol's messages. It's moronic and indefensible. Why anyone would prefer that probably comes down to thinking that text equals easy to implement, or something like that.
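
For anyone who hasn't run into it: both of those are legal in HTTP/1.1, so a compliant parser has to handle something like the following (a sketch; the folded X-Custom value and the parenthesised User-Agent comment are the culprits):

    package main

    import "fmt"

    func main() {
        // One logical request: X-Custom is a single header value folded across
        // two lines (obsolete line folding), and User-Agent carries a
        // parenthesised comment, both permitted by the HTTP/1.1 grammar.
        raw := "GET / HTTP/1.1\r\n" +
            "Host: example.com\r\n" +
            "User-Agent: Foo/1.0 (yes, this part in parentheses is a comment)\r\n" +
            "X-Custom: first half of the value\r\n" +
            "  second half, continued on the next line\r\n" +
            "\r\n"
        fmt.Print(raw)
    }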


I never said HTTP/1.1 is perfect. There are many optimizations and fixes possible, of which you've alluded to one.


Only the framing is binary. The PDUs are actually still plain text and 95% (perhaps more) of the semantics of HTTP/1.1 have been retained. Just pointing it out because your comment makes it sound like HTTP/2 is completely binary.

Also, how is HTTP/2 more bloated than HTTP/1.1? It simply adds a little framing; most of the bloat of HTTP/1.1 is retained. And as others have pointed out, HTTP/1.1 is hardly 'simple'.

Somebody on the IETF HTTP WG mailing list (may have been PHK himself) pointed out that their server (Varnish?) spends something like 30% of its CPU time parsing/processing just the Date header because it's in plain text. Plain text is not a panacea. We have nice tools like Wireshark these days which make it much easier for humans to read both binary and plain text protocols.
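
Roughly what that per-message text parsing looks like with Go's standard library, as an illustration (http.ParseTime has to try the three date formats HTTP/1.1 allows):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Every response carries a Date like this; a proxy or cache has to turn
        // the text back into a timestamp to reason about freshness.
        t, err := http.ParseTime("Mon, 09 Feb 2015 18:00:00 GMT")
        if err != nil {
            panic(err)
        }
        fmt.Println(t.Unix())

        // And formatting it back out is another pass over the same text form.
        fmt.Println(time.Now().UTC().Format(http.TimeFormat))
    }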


> Just pointing it out because your comment makes it sound like HTTP/2 is completely binary

Functionally what's the difference? If I'm unable to read the entire message without a decoder, does it matter if I can read part of it?

> It simply adds a little framing;

I think you forgot the whole reïnventing Layer-4 in Layer-7 thing. That is a travesty -- adding tons of complexity to solve a problem that it shouldn't be solving.

> We have nice tools like Wireshark these days which make it much easier for humans to read both binary and plain text protocols.

And the more tools _required_ to debug a problem, the harder it becomes to debug.

> spends something like 30% of its CPU time parsing/processing just the Date header because it's in plain text.

Plain text is not a panacea; however, I believe it is preferable to a binary protocol for ease of debugging and hackability. If there are issues like the HTTP Date header causing trouble for many people, maybe we should look into how that could be made easier? A better format? Does the Date header even need to exist? (I actually don't think it does, along with Server, UA, and a couple others.)

Additionally, I really wish that HTTP/2 would have actually attempted to solve problems faced by smaller HTTP users, and not force a behemoth of complexity (remember Layer 4 in Layer 7) on users who do not want that. I outline what I was hoping to be solved at https://github.com/jimktrains/http_ng.


HTTP/1.1 is also pretty bloated; it's just that nobody bothered to implement half the features correctly (pipelining, for instance). HTTP/2 is much the same.


>This is going to make the web so much faster, particularly on mobile devices.

The performance benefits are overblown: https://news.ycombinator.com/item?id=8890839


HTTP pipelining doesn't work in practice, so comparing it to SPDY or HTTP/2 is a waste of time.


Every Android phone was using pipelining until Google removed it. Every iOS device has used pipelining since iOS 5.

These phones work in practice.


That's one powerful yet easy-to-understand example!


> Check this out: https://http2.golang.org/gophertiles

This page confirms that I'm not using Google's trainwreck protocol and tells me the config-parameters I need to ensure I keep my browser this way.

    Unfortunately, you're not using HTTP/2 right now. To do so:
    Use Firefox Nightly or go to about:config and enable "network.http.spdy.enabled.http2draft"
    Use Google Chrome Canary and/or go to chrome://flags/#enable-spdy4 to Enable SPDY/4 (Chrome's name for HTTP/2)
This is quite good, although probably not for the reason the original authors intended.


> I got a copy of Poul-Henning Kamp's critique "HTTP/2.0 - The IETF is Phoning It In" off the ACM website before the link went dead

There is a copy on the mailing list: http://lists.w3.org/Archives/Public/ietf-http-wg/2015JanMar/...


Who exactly was it that voted for HTTP2? Is the debate leading up to the vote public?



Looking forward to when HAProxy support for HTTP/2 lands since they refused to implement SPDY support.

Here's a list of common servers' support for SPDY/HTTP2: https://istlsfastyet.com/#server-performance


Current HAProxy already supports the handshake of SPDY/HTTP2 via NPN and ALPN. You have to route to the proper backends. You also need to provide an HTTP/1.1 fallback implementation for incapable clients. Once set up, that works very well. I am using it for our blog (https://blog.cloudno.de)


What about SSL Termination? Would it still work if I terminate?


In the described setup, HAProxy is doing SSL termination. See the gist for the cert and crypto parameters. This is getting an A+ from ssllabs.


Thank you!


Maybe you could share a scrubbed config?


Sure, here is a gist: https://gist.github.com/dvbportal/cccccbbf6163cfbbbce6

The frontend definition advertises the spdy and http/1.1 protocols via NPN (this should now be ALPN; HAProxy supports it).

The ssl_fc_npn ACL routes the SSL-terminated traffic to the appropriate backends.

Nginx is configured to serve two backends with one port for each protocol. There can be multiple instances with round robin, if necessary.

This setup scales and is extensible for additional protocols.


Thanks for sharing. Can the SPDY frontend only be tcp based though, not http? The reason I ask is because my setup does all the routing (path and subdomain based) with http frontends.


Unfortunately, it has to be TCP. You can define other HTTP frontends though, but then you lose SPDY for that frontend.

You can use the routing capabilities of nginx in the backend to break the traffic down further.


Thanks! I'll go take a look now.


HTTP/2 might have version 2 syndrome.

A better way would have been to keep SPDY separate, as there is usefulness there, and then, for HTTP/2, to get there incrementally using an iteration of something like AS2/EDIINT (https://tools.ietf.org/html/rfc4130), which does encryption, compression and digital signatures on top of existing HTTP (HTTPS is usable as it is today but not required, since the best compression/encryption the server supports is used). That standard still adheres to everything HTTP and hypertext-transfer based and does not become a binary file format, but relies on baked-in MIME.

An iteration of that would have been better for interoperability, security and speed. I have previously implemented it directly from the RFC for an EDI product, and it is used for sending all financial EDI/documents for all of the largest companies in the world (Wal-Mart, Target, the DoD) as well as most small and medium businesses with inventory. There are even existing interoperability testing centers set up for testing and certifying products that do this, so that the standard works for all vendors and customers. An iteration of this would have fit in just as easily and been more flexible on the security, compression and encryption side, all over plain HTTP if you want, since it encrypts the body.


I've used AS2 extensively (in EDI) and to be frank, fuck that. AS2 is a really bad version of HTTPS: you take HTTPS, you remove the auto-negotiation (email the certificates!), you disable CA certificate checking (self-signed for all the things), and then you allow optional HTTPS on top of AS2 (which is a huge nightmare in its own right).

Imagine this scenario, two people want to interconnect, here's the process:

- They insecurely email their public key (self-signed) and URL (no MitM protection)

- You insecurely email your public key (self-signed) and URL

- They have a HTTPS URL

- Now the thing to understand about AS2 is that when you connect to THEM you give them a return URL to confirm receipt (MDN) of the transaction.

- HTTPS becomes a giant clusterfuck in AS2 because people try to use standard popular HTTPS libraries (e.g. that do CA checking, domain checking, and other checks which are fine for typical web-browser-style traffic, but not for specialised AS2 traffic) but in the context of AS2 where certificates are often local self-signed (some even use this for HTTPS), and the URL is rarely correct for the certificate, they fall over all of the time.

- Worse still, some sites want to use either HTTP or HTTPS only, so when you connect to an HTTPS URL but give them an HTTP MDN URL, sometimes they will work, sometimes they will try the HTTPS version of the URL then fall over and die, and other times they will error just because of the inconsistency.

Honestly, I used AS2 for over five years and, looking back, it would have saved everyone hundreds of man-hours to have just used HTTPS in the standard way and implemented certificate pinning (e.g. "e-mail me the serial number," or heck, just list it in your documentation).

The only major advantage of AS2 is the MDNs. However even there there exists massive inconsistency, some return bad MDNs for bad data, while others only return bad MDNs for bad transmission of data (i.e. they only check that what you send is what is received 1:1, so you could send them a series of 0s and get a valid MDN, because they check the data later and then email).

To be honest I hate MDN errors. They don't provide human-readable information in an understandable way. They're designed for automation which rarely exists in the wider world (between millions of different companies with hundreds of systems).

Give me an email template for errors any day, that way there can be a brief generic explanation and formatted data, to better explain things. The only thing MDNs do well is data consistency checking which is legitimately nice, however almost every EDI format I know has that in it already (i.e. segment counters, end segments, etc).

If I was to re-invent AS2, I'd just build the entire thing on standard HTTPS. No HTTP allowed, no hard coded certificates (i.e. you receive a public key the same way your web browser does), certificate pinning would be a key part, and scrap MDNs in place of a hash as a standard header in the HTTPS stream. Normal HTTP REST return codes would be used to indicate success (e.g. 200 OK/202 ACCEPTED, 400 Md5Mismatch/InvalidInput/etc).

That way nobody has to deconstruct an MDN to try and figure out the error. And a small handful of HTTP codes is much easier to handle than the information barrage an MDN contains anyway; it is both easier to automate and easier for humans.
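
A rough sketch of that shape, with a hypothetical Content-Sha256 header standing in for the MDN and ordinary status codes signalling the outcome (the header name and route are made up, not any existing standard):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "io/ioutil"
        "net/http"
    )

    // receiveDocument accepts an EDI payload over plain HTTPS and answers with
    // normal HTTP status codes instead of a signed MDN.
    func receiveDocument(w http.ResponseWriter, r *http.Request) {
        body, err := ioutil.ReadAll(r.Body)
        if err != nil {
            http.Error(w, "read failed", http.StatusBadRequest)
            return
        }
        sum := sha256.Sum256(body)
        if hex.EncodeToString(sum[:]) != r.Header.Get("Content-Sha256") {
            // Plays the role of a "bad MDN": the bytes did not arrive intact.
            http.Error(w, "HashMismatch", http.StatusBadRequest)
            return
        }
        // Plays the role of a "good MDN": received intact, processed later.
        w.WriteHeader(http.StatusAccepted)
    }

    func main() {
        http.HandleFunc("/inbound", receiveDocument)
        // Certificate pinning would be a client-side concern in this scheme.
        http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil)
    }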


I wasn't saying use AS2 directly, but an iteration with all of the earlier pain points solved; it is a decade old now. There are some things that wouldn't be needed, and an iteration would be needed.

The thing that AS2 got right was that it rides on top of an existing infrastructure of MIME/HTTP. The other part is doing encryption/compression of any type specified by the server/client. And there is some benefit to encryption/compression/digital signing over plain HTTP.

HTTP/2 might be the first protocol for the web that isn't based on MIME for better or for worse. We are headed to a binary protocol that is called Hypertext Transfer Protocol.

HTTP/2 looks more like TCP/UDP, or a small layer on top of them like something you might build for multiplayer game servers. Take a look at the spec and look at all the binary blocks that look like file formats from '93: https://http2.github.io/http2-spec/. It is a munging of HTTP/HTTPS/encryption in one big binary ball. It will definitely be more CPU intensive but I guess we are going live either way!

Plus AS2 was a huge improvement over nightly faxing of orders; large companies were doing that as late as 2003. AS1 (email based) and AS3 (FTP based) were available as well, but HTTP with AS2 is what all fulfillment processes use now. And yes, it has tons of problems, but the core idea of encryption/compression/signatures/receipts over current infrastructure is nice. Everything else you mention exists and those definitely are the bad parts, though much of that wouldn't be needed in the core.


SPDY came and went before I had to implement it. Phew.

On a serious note: it's nice to see ALPN being used in HTTP/2.


ALPN has been used with SPDY for a while now. It's one of the nice improvements that fell out of testing/iterating SPDY in public. The NPN approach was a bad idea since the client drove what got picked (with NPN, the server tells the client which protocols it supports in the ServerHello and the client picks whatever it wants. ALPN reverses that.)
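
As a concrete illustration (a sketch with Go's crypto/tls, assuming cert.pem/key.pem exist): the server just lists what it is willing to speak, the protocol is settled inside the TLS handshake, and you read the result off the connection afterwards.

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
        if err != nil {
            log.Fatal(err)
        }
        cfg := &tls.Config{
            Certificates: []tls.Certificate{cert},
            // Protocols this server is willing to speak, most preferred first.
            NextProtos: []string{"h2", "spdy/3.1", "http/1.1"},
        }
        ln, err := tls.Listen("tcp", ":8443", cfg)
        if err != nil {
            log.Fatal(err)
        }
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        tlsConn := conn.(*tls.Conn)
        if err := tlsConn.Handshake(); err != nil {
            log.Fatal(err)
        }
        // Whatever the two sides agreed on, e.g. "h2" or "http/1.1".
        fmt.Println(tlsConn.ConnectionState().NegotiatedProtocol)
    }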


Sorry I didn't make it clear: that's my sentiment as well. I'm glad it survived, so to speak.


Are there any good reverse proxies out there that support HTTP/2? Right now I'm using varnish, but I'd love to switch over to something supporting this.


Can someone explain to me the actual upside of header compression? I work on a fairly major educational site, and calculating it now, our request + response headers come out to 1,399 bytes. Gzipping them, they come out to 1,421 bytes. A small net increase.

Am I missing something? Do some people have so many cookies that this makes a difference or something?


The header compression in HTTP 2.0 isn't based on gzip or anything like that. The CRIME attack pretty much killed those approaches dead. It's more akin to differential updates for headers during the lifetime of the connection. So if you request a lot of files with fairly similar headers, you'll effectively only have to transmit the bulk of the headers once, while the other requests will efficiently re-use the previously transmitted fields.

So to answer your question: Header compression as employed in HTTP 2.0 helps if you do many requests with similar headers on the same connection.
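
You can see the effect with the hpack package from golang.org/x/net (Go's implementation of HTTP/2 header compression): encoding the same header set twice on one connection, the second copy shrinks to a handful of bytes because it mostly just references the table built up by the first. A sketch:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2/hpack"
    )

    func main() {
        var buf bytes.Buffer
        enc := hpack.NewEncoder(&buf)

        headers := []hpack.HeaderField{
            {Name: ":method", Value: "GET"},
            {Name: ":path", Value: "/style.css"},
            {Name: "user-agent", Value: "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101"},
            {Name: "cookie", Value: "session=abc123; prefs=dark; tracking=xyz"},
        }

        // First request on the connection: most fields are sent literally
        // and inserted into the shared dynamic table.
        for _, hf := range headers {
            enc.WriteField(hf)
        }
        fmt.Println("first request:", buf.Len(), "bytes")

        // Second request with identical headers: mostly short indexed
        // references into the table built by the first request.
        buf.Reset()
        for _, hf := range headers {
            enc.WriteField(hf)
        }
        fmt.Println("second request:", buf.Len(), "bytes")
    }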


> So to answer your question: Header compression as employed in HTTP 2.0 helps if you do many requests with similar headers on the same connection.

In general, HTTP/2.0 seems to be about improving things if you do many requests over the same connection.


I think this is correct - because that is one big area where TCP and HTTP really fall short.

HTTP/2 should be a huge improvement for lowering latency in anything but the simplest web sites, without having to use ugly hacks such as sprite sheets, putting all your javascript/CSS in one file (which is sub-optimal in terms of caching if you want to update, say, a single javascript file, because then a user with a full cache has to re-download the huge concatenated javascript file instead of just the one source file that changed), or spanning things across multiple domains to get around browsers having a connection limit.


Doing many requests over the same connection is very common.


And one of the fundamental goals of HTTP/2 was to get clients to create only a single connection to a server.


right. The other key fact is that request header compression is necessary to make parallelism work when interacting with TCP congestion control. It's critical.


> Am I missing something?

Google has about 200 different tracking cookies with lots of redundancy, which compresses considerably better than that.

Google's aim with SPDY was to be able to track you across all HTTP-requests without the bloat of the tracking-cookies causing you to exceed a normal ADSL MTU-size and thus having the tracking cause packet-fragmentation and potentially reduced performance.

Possibly good goals for all the wrong reasons. And again it's Google's needs above those of the internet at large.


As far as I can tell, the compression is separate for the headers, does not use deflate, and (most importantly) is over the entire session, not just one request.

This would benefit from the fact that in one request headers are not repeated, but over multiple requests they certainly are.

http://www.greenbytes.de/tech/webdav/draft-ietf-httpbis-head...


Unfortunately, right now Apache doesn't support HTTP/2 at all. There was a mod_spdy, but it's pretty much dead. Apache took it over from Google some time ago, but since then nothing has happened.


This is what happens when you let Google (or other big corporations) write internet-standards.

If it isn't community-driven, you can't expect it to be implemented in the places the big corp doesn't care for.

So in this case, Apache, one of the major drivers for propelling the WWW, may end up not supporting a "crucial" WWW-related standard, because the community was never invited.

If anyone still has any doubts why letting Google control internet-standards is bad, this is currently my best example.

Technically speaking, the internet is the result of what we come up with, when we all work together. Not working together will quickly end up as not working at all.


I think the reality here is, this is what happens when you let companies fight over a standard in private.

What I saw on the HTTP/2 mailing lists was "We have a new standard." "It demands SSL, but we don't want that." Then, SPDY is everywhere, let's use that.

Shortly after it was "Omg, we can't call it spdy, because then Microsoft's interests will be left behind and Google will have won. Let's abandon the mandatory SSL requirement and rename SPDY to HTTP2..."

I feel like we've all lost here.

We implemented SPDY at Twitter - the savings were fantastic and the browser performance, amazing. Google and FB did the same. It's nearly like, 800M users said it was great, can we move on now?


Does anyone know if Cloudflare has plans to implement HTTP/2? Right now they support SPDY.

I found the answer from their blog:

"Part of the service CloudFlare provides is being on top of the latest advances in Internet and web technologies. We've stayed on top of SPDY and will continue to roll out updates as the protocol evolves (and we'll support HTTP/2 just as soon as it is practical)."


We've been talking about it. I think it's just a question of when.


Since CloudFlare is an OpenResty (nginx + lua) shop, they'll likely get it as soon as it's in nginx.


OpenResty does not include SPDY as there are incompatibilities with it and Lua. But I'm sure CloudFlare has the engineering resources in house to decide what they want to support and when :)


Do you know if they support SPDY downstream too?


Anybody know when nginx will support it?


They say it already does: "Right now, both the Apache and nginx web servers support HTTP/2" http://moz.com/blog/http2-a-fast-secure-bedrock-for-the-futu...

The thinking is, I believe, that "SPDY/4 revision is based upon HTTP/2 wholesale" http://http2.github.io/faq/ and nginx already supports SPDY via ngx_http_spdy_module. http://nginx.org/en/docs/http/ngx_http_spdy_module.html Version 3.1 though...

So it's either there or almost there.


Here's the related thread on the mailing list: http://mailman.nginx.org/pipermail/nginx/2015-February/04658...


Is HTTPS mandatory on HTTP/2 like it was on SPDY?


> Is HTTPS mandatory on HTTP/2 like it was on SPDY?

Not in terms of the protocol spec, but most major browser vendors have indicated that they only intend to support HTTP/2 in-browser over TLS connections, so in practice for typical, browser-targeting use cases, it looks like it will, at least initially.
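
For what it's worth, on the server side that's already not much work; a minimal sketch with the golang.org/x/net/http2 package (assuming cert.pem/key.pem exist), where clients that don't speak HTTP/2 fall back to HTTP/1.1 over the same TLS listener via ALPN:

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "golang.org/x/net/http2"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // r.Proto is "HTTP/2.0" for h2 clients, "HTTP/1.1" otherwise.
            fmt.Fprintf(w, "hello over %s\n", r.Proto)
        })

        srv := &http.Server{Addr: ":8443", Handler: mux}

        // Advertise "h2" via ALPN on top of the normal TLS config; clients
        // that don't speak HTTP/2 negotiate http/1.1 on the same port.
        if err := http2.ConfigureServer(srv, nil); err != nil {
            log.Fatal(err)
        }
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }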


That's so lame. It's so easy to set up a new website today; this is going to be a huge burden in the future. Some of us still make websites for fun, not as businesses. I guess I have to buy a cheap ssl certificate from some sleazy website every time I feel creative.


I'm puzzled, did you miss the announcement? The EFF, Mozilla, and others are creating a CA that will give free certs to everyone: https://letsencrypt.org/


Will they give out wildcard domain certificates?


No.


It's not live yet.


Neither is HTTP/2, and much less is HTTP/1.1 deprecated, so I don't see the problem.


Actually, HTTP/2 is already more widely-used than IPv6. :-)


> Implying that IPv6 is widely deployed.


This isn't 4chan.


Or we're about to see a whole lot of market incentive to make it easier to set up a site under TLS, which may only happen with this kind of unilateral move.


In the beginning of the internet, when it flourished and bloomed, it did so because it was open to tinkering, hacking and fun!

You could 1. set up a server, and just 2. enter the IP, and you had a website to hack on!

Now... You need to not only do that, you need to understand DNS and then pay a registrar to get a domain, you probably need to get hosting, because your router's secure ISP-side admin-interface may already be hogging port 443, you need to read up on how SSL/TLS and how certificates work in order to correctly request one, and then you need to figure out how those bits and pieces from that process fits into the server-stack you have decided to run with, and then you need to setup all those extra things server-side, which may or may not involve learning quite a bit of Unixy or sysadminy-things.

Phew

And that was step 1. See the contrast? In the playful internet of past times and glory you would have your product/experiment be done by now. In your "secure" internet you're merely done bootstrapping.

Yes, if you already have that, or already know all that stuff, that might not stop you moving along. For the amateur, who still should be the most welcome of all people on the internet for it to keep evolving, that's a complete show-stopper.

He'll just say "fuck this shit" and most likely decide that this is just too big a task to even bother starting.

Forcing SSL everywhere on everyone is bad for the internet. I don't need to do everything "securely", and I sure as hell don't intend to setup every single experiment I do securely. Make it too much of a hurdle, and maybe I'll just stop experimenting instead.

And then, if not the internet, its spirit dies.


You are misunderstanding. The requirement for CA-issued certificates and most of the other things you are ranting about will still apply only to HTTPS, which will still be optional. HTTP URIs in HTTP/2 will only need self-signed certificates which can be generated automatically by the server. Once servers get good support for it, it will not be any harder than HTTP/1.1.


> HTTP URIs in HTTP/2 will only need self-signed certificates which can be generated automatically by the server.

Where in the spec is there anything saying that HTTP URIs in HTTP/2 require any kind of certificate? Anyhow, I think it's moot because all of the major browser vendors that have committed to HTTP/2 support have also announced they will support it only for HTTPS URIs, so what HTTP URIs require really only matters for non-browser HTTP-based applications that plan to use HTTP/2.


> they will support it only for HTTPS URIs

No. They may do that now but the intention is to support HTTP URIs that force TLS but allow self-signed certificates.

See https://wiki.mozilla.org/Networking/http2

"There is a separate, more experimental, build available that supports HTTP/2 draft-12 for http:// URIs using Alternate-Services (-01) and the "h2-12" profile. Sometimes this is known as opportunistic encryption. This allows HTTP/2 over TLS for http:// URIs in some cases without verification of the SSL certificate. It also allows spdy/3 and spdy/3.1 for http:// URIs using the same mechanism. "


I wasn't familiar with that, but that approach for HTTP URIs doesn't appear to be a spec requirement. Is there any indication that other browser vendors are going to follow that approach with HTTP URIs?


You can still use HTTP/1.1.


So HTTP/2.0 was launched as the replacement for HTTP/1.1, but without support for one of the most commonly used features.

Is this a joke?


What happens if someone built a service based on it? Should they never trust browsers to keep alive even the shitty (in comparison to free and standardised HTTP/2) features? What's great about the web is that 20-year-old services still work in the latest runtimes (browsers).


It is always risky to build a service based on something that is not yet standardized. SPDY was in progress to be standardized, but the process ended up with parts of it in HTTP2, making SPDY unnecessary, as I understand things.

It would be the right thing for Google to remove SPDY at this point, otherwise it would be running a nonstandard protocol that other browsers do not run, which can lead to fragmentation - as we saw just recently with an API that sadly Google has not removed despite it being nonstandard (FileSystem in the WhatsApp "Web" app).

edit: To clarify, I mean what Google is doing with SPDY sounds like the right thing. I don't mean it should remove it right now, I meant it was the right thing to do, right now, to announce it would be removed after a reasonable delay (and 1 year sounds reasonable).


> SPDY was in progress to be standardized, but the process ended up with parts of it in HTTP2, making SPDY unnecessary, as I understand things.

The "progress to be standardized" for SPDY was SPDY being chosen as the basis for HTTP/2; as I understand for a while SPDY has been being updated in parallel to the HTTP/2 development work to continue to reflect the state of HTTP/2 + new things the Google SPDY team wants to get into the standard, but its been clear for a long time that the intent was that SPDY as a separate protocol would be unnecessary once HTTP/2 was ready for use.


To be fair Google did happily kill Gears when HTML5 became a viable [early draft] standard.


Agreed, Google did the right thing to remove Gears.

My concern is because, overall, Google has a bad track record in this area: FileSystem is still enabled, WebSQL is still enabled, PNaCl is still enabled. Edit: and H.264 was never removed despite announcing the intent to do so.


> PNaCl is still enabled

Eh? What is the spec competitor to PNaCl? asm.js is a cute trick but it still lacks threads which is easily one of the biggest features of PNaCl. So what actual viable alternatives are there to PNaCl?


We can discuss alternatives to PNaCl, but that isn't really the issue. Even if you have something you believe has no peer at the moment, that doesn't mean you can ship it without regard for the standards process. It's still wrong for all the usual reasons.

Of course, not having a good alternative might mean that the other parties in the standards process should take another look at it. But again, that's a totally separate issue from whether it is ok to just ignore the standards process and ship whatever you want, which is what Google is doing here.


> that doesn't mean you can ship it without regard for the standards process. It's still wrong for all the usual reasons.

What Google is doing with PNaCl is the standards process. Standards start life by being not-standards that someone shipped and enough people liked to make it into a standard.

There is nothing wrong here, nothing whatsoever. This is exactly how the process should work. Design-by-committee standards suck. Standards that won through raw competition? Those are all the good ones.


While I agree with you that competition is crucial, and without experimentation we will get nowhere, it is worth remembering that IE6 and all of its specific behaviors "won" through "raw competition".

Often things win not through fair competition. For example, WebSQL "won" on mobile because WebKit won on mobile, and WebKit happened to have WebSQL. If WebKit had had, say, the Audio Data API (which it did not), then the Audio Data API would have "won". Neither of those APIs won or would have won on its own merits, but because it was backed by the 800 pound gorilla in the space. (I chose Audio Data as an example because it is not in the same space as WebSQL, i.e. not competing with it, and was a nice API, that failed).

And the problem is that PNaCl will fragment the web, and already has. That's a serious problem - for everyone but Google.


> it is worth remembering that IE6 and all of its specific behaviors "won" through "raw competition".

It is worth noting that the findings in the antitrust actions in the US over Microsoft's illegal and anti-competitive behavior in establishing IE's dominance indicate that that claim is, at best, misleading.


I would argue the opposite, in fact - that it shows what happens with pure unrestrained competition. Which leads to monopolies and other forms of competition suppression, ironically, of course.

Regardless, we don't need to agree on that point. There are plenty of other examples in tech (and outside) of things winning through "raw competition" that are just not that good.


But again, that's a totally separate issue from whether it is ok to just ignore the standards process and ship whatever you want, which is what Google is doing here.

But isn't that exactly what happened with SPDY? They shipped it unilaterally first, then later on it got standardized as HTTP2.


I don't think it's the same.

SPDY initially began inside Google. But rather quickly it got enthusiastic interest from multiple outside parties, and a standardization process began. We can see the (successful) end of that process now. Yes, it's true that many standards begin that way.

PNaCl also began inside Google. Discussions regarding it, and the PPAPI on which it depends, ranged from opposition (e.g. because PPAPI duplicates existing web APIs, and because it's a plugin API, which browsers are trying to move away from) to being ignored. Google continued to work on it, enabled it on the Chrome Web Store, and, despite no change in the response of the community over a period of years, enabled it for web content. Over a year has passed since then, and it seems clear that (1) no browser vendor outside Google thinks PNaCl is a good idea, and (2) no significant interest has been shown from non-browser vendors either (Google itself is the main user of PNaCl).

Also, to make things even worse, during all that time, PNaCl has not been brought to any standards body.

Another large difference is that, in practice, SPDY didn't pose a compatibility threat to the web. It has clear fallbacks so that content still works without browser support for it. And while it is possible bugs could still cause breakage, it didn't happen in practice. So moving forward somewhat quickly with SPDY was still arguably a responsible thing to do.

Whereas, PNaCl is already showing fragmentation problems, with several Google properties using PNaCl and consequently only working in Chrome.

There is therefore every reason for Google to disable PNaCl, because it is nonstandard and bad for the web to keep it on. Unless Google simply does not care about standardization here.


Not to mention not removing H.264 after promising to do so.


Thanks, right, I forgot that one.


WebSQL is so much nicer than the key/value store Firefox insisted on :(


WebSQL is nicer to use, but requiring every browser to be bug-for-bug compatible with SQLite 3.0.17 (or whatever it was) forever and ever is not nice for browser developers.


> It would be the right thing for Google to remove SPDY at this point

It's going away, just maybe not soon enough for everyone's tastes. From the blog:

> We plan to remove support for SPDY in early 2016


I think that's a reasonable timeline, actually - sorry if what I wrote was confusing in implying "right now". I meant to say "It would be the right thing for Google to announce the timeline to remove SPDY at this point in time."

A year's heads-up gives people plenty of time to update their sites, and sounds fair and reasonable.


Nothing is being lost with regards to SPDY. Old versions were never supported; if you wanted to use SPDY you had to commit to keeping the server updated.

So effectively they have just announced a long term support edition of SPDY. What an odd time to complain about the lack of long term support.


The server should fall back to HTTP anyway, so this should not be a problem.


Well how about the fate of the cute little protocol called QUIC?


Maybe QUIC is the prototype for HTTP/4.


QUIC is not a replacement for HTTP; it works below it. See https://docs.google.com/document/d/1lmL9EF6qKrk7gbazY8bIdvq3...


I think they want QUIC to be TCP/2. Its design goals around slow start, congestion control, and RTT reduction are squarely aimed at TCP's shortcomings.


Anybody aware of a good C++ server framework supporting most of HTTP/2, including websockets?



Facebook's proxygen has HTTP/2 support "in progress": https://github.com/facebook/proxygen


Yes, but I believe they don't support websockets yet. At least, searching their github for "websockets" gives only two broken links.

UPDATE: I noticed somebody wrote websocket support [1], but it hasn't been merged into master yet.

[1] https://github.com/kekekeks/proxygen


If you need only realtime push then Server-Sent Events work over HTTP/2.
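
A minimal SSE handler sketch using Go's standard net/http; nothing HTTP/2-specific is needed on the server, the event stream just rides on whichever protocol version was negotiated (the browser side is just new EventSource("/events")):

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func events(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/event-stream")
        w.Header().Set("Cache-Control", "no-cache")

        flusher, ok := w.(http.Flusher)
        if !ok {
            http.Error(w, "streaming unsupported", http.StatusInternalServerError)
            return
        }

        for i := 0; i < 5; i++ {
            // Each event is plain text: a "data: ..." line followed by a blank line.
            fmt.Fprintf(w, "data: tick %d\n\n", i)
            flusher.Flush()
            time.Sleep(time.Second)
        }
    }

    func main() {
        http.HandleFunc("/events", events)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }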


Does somebody have good nginx configurations for HTTP/2? It's good that browsers are going in this direction, but at the moment I have no clue how to implement HTTP/2 (is there a SPDY fallback?) on my nginx server :(


HTTP/2 is an ugly mess of taking something simple and making it more complex for minimal benefit. It could have been so much better than a binary mess.

The ones who take simple concepts and add complexity are not engineers; they are meddlers.

It could be as long lived as XHTML.

I was hoping for more SCTP rather than a bunch of kludge on top of what is a pretty beautiful protocol in HTTP 1.1. Protocol designers of the past seemed to have a better long view, mixed with the interoperability-focused simplicity you like to see from engineers.


Please tell me where the simplicity in line folding and comments-in-header-values is. Or having special handling for some headers, but not others.


You can still deconstruct the message and it is simple and easier to debug even if less exact.

In binary, if there is one flaw the whole block is bunk, i.e. off by one, wrong offset, binary munging/encoding, other things. As an example, if you have a game profile that is binary, it can be ruined by corruption on a bad save or transmission.

Binary is all or nothing, maybe that is what is needed for better standards but it is a major change.

What is easier, a json or yaml format of a file or a binary block you have to parse? What worked better, HTML5 or XHTML (exactness over interoperability)?

Granted, most of us won't be writing HTTP/2 servers all day, but it does expect more of the implementations, for better or worse.

The classic rule in network interoperability is be conservative in what you send (exactness) and liberal in what you accept (expect others to not be as exact).


The "classic rule" aka Postel's Law, has proven to be disastrous. The idea of resuming a corrupted message is a totally flawed concept. At best, it introduces compatibility issues. This is essentially the history of HTML and browsers, each one needing to implement the same bugs as other popular versions.

SIP is another IETF gem, which takes its syntax from HTTP. And guess what? It's impossible to have unambiguous parsing in the wild! Why? The whole liberal in what you accept bad idea. So A interprets \n as a line ending, even though the spec says \r\n. B interprets it another liberal way, and assumes you didn't mean to transmit two newlines in a row, so it'll keep reading headers. End result: you can bypass policy restrictions by abusing this liberal-ness and get A to approve a message that B will interpret in another way. Yikes. And, since the software for both is so widely deployed, there is little hope of solving the problem. In fact, the IETF essentially requires you to implement AI as you're supposed to guess at the "intent" of a message.

So you're sorta proving my point, that people are thinking "oh it's just text" and then writing shitty, sloppy code, and they're giddy cause it sorta worked, even from a two line shell script. And then further generations have to deal with this mess, because these folks just can't bear to get line endings right or whatnot.


Keep in mind you are still going to have lots of these same problems you mention inside the binary blocks and header blocks. Just the specific annoyances of HTTP 1.1 will be gone but new ones will appear.

Going binary does not make it suddenly easier, it just slices it up and adds a layer of obfuscation.

Easier to know what the hell is going on across a wire with current formats and debug them. Utopia interop does not exist so Postel's Law has gotten us this far. Being text no doubt makes it easier to debug and interoperate, otherwise we'd be sending binary blocks instead of json. Unless you control both endpoints, Postel's comes into play and simplicity wins.

We are moving in a new direction for better or worse and going live. I feel like it is slightly off the right path but sometimes you need to take a wrong step like SOAP did to get back to simple. We'll see how it goes.


A binary protocol's parsing is usually something like read 2 bytes from the wire, decode N = uint16(b[0] << 8) | uint16(b[1]), then read N bytes from the wire. A text-based protocol's parsing almost always involves a streaming parser, which is tricky to get correct, and always more inefficient.
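
Spelled out, that binary-side loop is about this much code (a generic length-prefixed frame reader as a sketch, not the actual HTTP/2 frame layout):

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // readFrame reads one length-prefixed frame: a 2-byte big-endian length
    // followed by exactly that many bytes of payload.
    func readFrame(r io.Reader) ([]byte, error) {
        var hdr [2]byte
        if _, err := io.ReadFull(r, hdr[:]); err != nil {
            return nil, err
        }
        n := binary.BigEndian.Uint16(hdr[:]) // N = uint16(b[0])<<8 | uint16(b[1])
        payload := make([]byte, n)
        if _, err := io.ReadFull(r, payload); err != nil {
            return nil, err
        }
        return payload, nil
    }

    func main() {
        wire := []byte{0x00, 0x05, 'h', 'e', 'l', 'l', 'o'}
        frame, err := readFrame(bytes.NewReader(wire))
        fmt.Println(string(frame), err) // "hello <nil>"
    }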

Besides, I think this is a moot point, because chances are that fewer than 100 people's HTTP2 implementations will serve 99.9999% of traffic. It's not like you or I spend much of our time deep in nginx's code debugging some HTTP parsing; I think it's just as unlikely we'll be doing that for HTTP2 parsing.

Also, HTTP2 will always (pretty much) be wrapped in TLS. So it's not like you're going to be looking at a plain-text dump of that. You'll be using a tool, and that tool's author will implement a way to convert the binary framing to human-readable text.

Another way to put it is that the vast majority of HTTP streams are not examined by humans and only examined by computers. Choosing a text-based protocol just seems a way to adversely impact the performance of every single user's web-browsing.

Another another way to put it is that there is a reason that Thrift, Protocol Buffers, and other common RPC mechanisms do not use a text-based protocol. Nor do IP, TCP, or UDP, for that matter. And there's a reason that Memcached was updated with a binary protocol even though it had a perfectly serviceable text-based protocol.


Agreed on all points. Binary protocols are no doubt better, faster, more efficient and more precise. I use reliable UDP all the time in game server/clients. Multiplayer games have to be efficient, TCP is even too slow for real-time gaming.

Binary protocols work wonderfully... when you control both endpoints, the client and the server.

When you don't control both endpoints is where interoperability breaks down. Efficiency and exactness can be enemies of interoperability at times, we currently use very forgiving systems instead of throwing them out and assert crash dump upon communication error. Network data is a form of communication.

Maybe you are right, since it is binary, only a few hundred implementations might be made and those will be made by better engineers since it is more complex. Maybe HTTP is really a lower level protocol like TCP/UDP etc now. Maybe since Google controls Chrome and the browser lead and has enough engineers to ensure all leading implementations and OSs/server libraries/webservers are correct then it may work out.

As engineers we want things to be exact, but there are always game bugs not found in testing and hidden new problems that we aren't weighing against the known current ones. Getting something new is nice because all the old problems are gone, but there will be new problems!

It will be an all-new experiment we try, going away from text/MIME-based to something lower-level, complex and exact over simple and interoperability-focused. Let's see if the customers find any bugs in the release.


>Binary protocols work wonderfully... when you control both endpoints, the client and the server.

IP is all binary and I don't think it's a case of one party controlling all endpoints.


Binary protocols are usually far easier to implement both sending and receiving. There is far less ambiguity.

In fact, the newline problem I mentioned? It was not easier to diagnose, and was only caught by using tools checking it as a binary structure.

Postel was just flat wrong, and history shows us this is so. JSON is popular because it was automatically available in JavaScript, and people dislike the bit of extra verbosity XML introduces. JSON is also a much tighter format than the text parsing the IETF usually implements.

Postel's law also goes against the idea of failing fast. Instead, you end up thinking you're compliant, because implementations just happen to interpret your mistake in the right way. Then one day something changes and bam, it all comes crashing down. Ask any experienced web developer the weird edge cases they have to deal with, again from Postel's law.

And anyways, you know what everyone uses when debugging the wire? Wireshark or something similar. Problem solved. Same for things like JSON. Over the past months I've been dealing with that a lot. Every time I have an issue, I pull out a tool to figure it out.

Do you know the real reason for the text looseness? It's a holdover from the '60s. The Header: Value format was just a slight codification of what people were already doing. And why? Because they wanted a format that was primarily written and read by humans, with a bit of structure thrown in. Loose syntax is great in such cases. Modern protocols are not written and rarely read by humans. So it's just a waste of electricity and developer time.


Yeah using binary protocols seems to be the new hotness. It all makes me feel so old with my preference for simple ascii text files.


Actually, XHTML was a simplification.


Did you close that tag in that textarea from some third-party content? If not, your whole view is broken. It was a layer of hopeful standardization that was too hopeful and counted too much on implementers being exact. It was a nice attempt but was quickly abandoned in favor of HTML5.

I guess the same thing applies to HTTP/2: sometimes you have to dumb it down or simplify it a little; the smartest way, the one that relies on implementers, might be a leap that is too hard to make. The best standards are the simple ones that cannot be messed up even by poor implementations. Maybe the standards for protocols developed in the past looked at adoption more, as they had to convince everyone to use them; here, if you force it, you don't need to listen to everyone or simplify, which is a mistake.

While code and products should be exact, over the wire you need to be conservative in what you send and liberal in what you accept in standards and interoperability land.

In another area, there is a reason why things like REST win over things like SOAP, or JSON over XML, it comes down to interoperability and simplicity.

The more simple and interoperable standard will always win, and as standards creators, each iteration should be more simple. As engineers, we have accepted the role of taking complexity and making it simple for others, even other engineers or maybe some junior coder that doesn't understand the complexity. What protocol is the new coder or the junior coder going to gravitate to? The simple one.


XHTML was an overall improvement to HTML imo. If you're going to use XML, at least be consistent with this choice. HTML was not, XHTML is.


It was better; from an engineering aspect it should have won, and the world would have had more precision and validity/verification of content.

But from an interoperability aspect (relying on implementations) the market didn't think it was better or we'd be using it still. HTML5 won because it was simple and met many needs demanded by the market.

The simple standards that provide more benefits, but most importantly are highly focused on interoperability and simplicity, win, always, even if they seem subpar from an exactness standpoint.

At one point in time SOAP had the same religious hype surrounding it that HTTP/2 seems to have. But sometimes you have to take a step to realize you are slightly off path according to the market; it is not about what you might want to design or what should win, but what happens with interoperability in the market. HTTP/2 and XHTML-type standards are steps to something better, but are too top-down or ivory-tower even though they have lots of awesome and needed features.


HTML5 is a mix of XHTML and HTML4, but with new features. Its syntax resembles XHTML more than HTML and thus it is more XML-compliant, although you can ignore strict syntax [0]. HTML5 is both XHTML-like and HTML4-like, so it is no surprise that it has taken over the market.

Note that I don't especially like HTTP/2 and believe that a hack like SPDY should not have made it into a standard. More time and care should be taken with a central protocol like HTTP (central in that it is used a lot).

[0] Source: http://www.techrepublic.com/blog/10-things/10-things-you-sho...

>Instead, the HTML5 spec is written so that you can write HTML5 with strict XML syntax and it will work


Technically HTML was a derivative of SGML. That said, as someone who did a fair amount of parsing of old-timey HTML...it would have been really nice if it was XML.


Well, by the time the first spec was written it was defined to be an SGML application; when TimBL first implemented it, it was "roughly based on SGML". As far as I'm aware, except for the W3C Validator, no other serious HTML implementation treated HTML as SGML.

And it couldn't have been XML, because HTML predates it.


For those slamming HTTP/2.0, how do they rate SPDY?


SPDY was great for Google and allowed them to change and take hold of HTTP/2.

It saved them lots of money I am sure in improved speed but at the trade-off of complexity and minimal adoption of the standard because it wasn't beneficial to everyone. HTTP/2 is a continuation of that effort by Google which I would do if I were them as well probably. But in the end both are not that big of improvements for what they take away.

Of course I use both, but I don't think they will last very long until the next one; it was too fast, and there are large swaths of engineers who do not like being forced into something that has minimal benefits when it could have been a truly nice iteration.

HTTP/2 is really closer to SPDY and I wish they would have just kept it as SPDY for now, and let a little more time go by to see if it is truly useful enough to merge into HTTP/2. HTTP/2 is essentially SPDY from Google, tweaked and injected into the standard, which has huge benefits for Google, so I understand where the momentum is coming from.

Google also controls the browser, so it is much easier for them to be the lead now on web standards changes. We will have to use it whether we like it or not. I don't like the heavy hand they are using with their browser share, just like the Microsoft of older days (i.e. plugins killed off, SPDY, HTTP/2, PPAPI, NaCl, etc.)


SPDY is a great prototype that exemplifies why you should write a prototype: to show the problems with your design. It's unfortunate that the HTTP/2.0 committee decided to ignore the flaws and go with the prototype design.


Google just loves exerting their power. It will take more than Chrome devs declaring it a done deal to make this happen. The browser is only half the issue. Web servers must get on board for this to matter. Obviously Safari, Firefox and IE have some say in this too.


Pretty much everyone in the industry is on-board with HTTP/2. It's not just Google.


Hardly a surprise..


@klapinat0r - welcome to the club. I was just about to say the same.


I'm not ever supporting HTTP/2. For something "monumental" enough to be called the whole second revision of HTTP, what have we really gained? A Google-backed "server push" mechanism and some minor efficiency additions? Add to that the fact that SPDY was pushed through as HTTP/2 because nothing else was ready.

Please.

Downvoters: although I don't usually do this, I'd ask you to enter into a discussion with me instead of just hitting the down arrow. Do you honestly think my discussion is worth being silenced?


This attitude is exactly how you make sure that nothing ever changes or improves. It is "the perfect is the enemy of the good" exemplified. HTTP/2 is a huge improvement over HTTP in many very important ways. True, it's not perfect, but guess what? 2 is not the last version number out there. We can switch to HTTP/2 now and fix the rest of the problems with HTTP/3.

Moreover, it seems like we are collectively getting better at upgrading technologies: IPv6 adoption has finally got some momentum; HTTP/2 is actually happening. With lessons learned from the HTTP => HTTP/2 transition, HTTP/3 could happen in five years instead of in another fifteen.


> This attitude is exactly how you make sure that nothing ever changes or improves.

On the contrary, we are in desperate need of such attitudes in software. We need everyone to stop jumping to every new thing with silly promises. We need to start choosing quality over quantity. We need substantial, well-researched improvements.


I think you're confusing quantity with the end result. The quantity is about experimentation. The quality comes as the winning products are refined over time; the low-quality products never gain mass traction and are discarded. That's exactly how it should work. These things are complementary, not mutually exclusive.

That process is how innovation happens quickly. It's also how you frequently discover new things you weren't looking for, which is how a lot of innovation happens (by accident). Rapid iteration is in nearly all cases vastly superior to turtle-speed iteration.


Playing Devil's advocate, wasn't SPDY the experimentation part? Why the need for HTTP/2?


"Low quality products never gain mass traction and are discarded" Yet we're still kicking IPv6


I appreciate your optimism, but do realize that there isn't really a massive improvement unless you're Google. I don't see this as worthy of the "/2" suffix; Google might like it because it allows them to make their tech the standard, but other than that it's unnecessary marketing.

HTTP has never been the bottleneck. I think IPv6 is excellent and a needed, massive improvement especially since IPv4 is no longer tenable. HTTP/1.1, however, still works quite well and keeps a larger feature set in some circumstances. It's less insane because it's not made by W3C or IETF or any other hugely bureaucratic group; however, that doesn't mean it's better either.

I can't wait for HTTP/3! Hopefully this time they won't rush it.


Check out https://http2.golang.org/gophertiles and tell me that HTTP isn't the bottleneck, especially on high-latency connections. This is going to make the web so much faster.
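
(For the curious, the server side of a demo like that is not much code. Here's a minimal sketch in Go using the golang.org/x/net/http2 package; the tile directory, port and certificate filenames are placeholders, not anything from the actual demo.)

  package main

  import (
    "log"
    "net/http"

    "golang.org/x/net/http2"
  )

  func main() {
    mux := http.NewServeMux()
    // Serve lots of small tile images from one directory. Over HTTP/2,
    // requests for them are multiplexed onto a single TCP connection.
    mux.Handle("/tiles/", http.StripPrefix("/tiles/", http.FileServer(http.Dir("./tiles"))))

    srv := &http.Server{Addr: ":8443", Handler: mux}
    // ConfigureServer layers HTTP/2 (negotiated via TLS ALPN/NPN) onto a
    // plain net/http server.
    if err := http2.ConfigureServer(srv, &http2.Server{}); err != nil {
      log.Fatal(err)
    }
    // cert.pem and key.pem stand in for a real certificate and key.
    log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
  }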


On some small sites I've worked on, switching to SPDY shaved about 20-30% off our load times. And all we had to do was add "spdy" to our nginx.conf. That's like the definition of a win.
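
(For anyone wondering what that change looks like: with an nginx built with the SPDY module (1.5.10 or later), it's roughly one word in the listen directive. The hostname and paths below are placeholders.)

  server {
    listen 443 ssl spdy;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # rest of the server block unchanged
  }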


If that doesn't convince you to support HTTP/2, then nothing will: in the benchmark at https://www.httpvshttps.com/, HTTP/1.1 is 5x-15x slower! These insane perf gains are possible only thanks to HTTP/2, specifically thanks to its support for multiplexing. Please read the spec and understand the technical implications before criticizing.
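
(A rough way to see where the multiplexing win comes from: fire a pile of concurrent requests at one host and let them share a single connection. A sketch in Go; with Go 1.6+ the standard client negotiates HTTP/2 automatically for HTTPS servers that support it, and the host, paths and request count here are made up.)

  package main

  import (
    "fmt"
    "io"
    "io/ioutil"
    "net/http"
    "sync"
  )

  func main() {
    var wg sync.WaitGroup
    // Against an HTTP/2 server, these all become streams on one TCP+TLS
    // connection, instead of queueing behind a ~6-connections-per-host
    // limit as they would over HTTP/1.1.
    for i := 0; i < 180; i++ {
      wg.Add(1)
      go func(i int) {
        defer wg.Done()
        resp, err := http.Get(fmt.Sprintf("https://example.com/tiles/%d.png", i))
        if err != nil {
          fmt.Println("error:", err)
          return
        }
        defer resp.Body.Close()
        io.Copy(ioutil.Discard, resp.Body) // drain so the connection can be reused
      }(i)
    }
    wg.Wait()
  }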

On an unrelated note: I found this tidbit of humor in the RFC draft (https://tools.ietf.org/html/draft-ietf-httpbis-http2-16):

  ENHANCE_YOUR_CALM (0xb):  The endpoint detected that its peer is
  exhibiting a behavior that might be generating excessive load.


I can point you to a benchmark that disagrees: http://www.guypo.com/not-as-spdy-as-you-thought/

I promise you that I've considered the spec and its implications. Where are we now?


First comment:

"This study is very flawed. Talking to a proxy by SPDY doesn't magically make the connection between that proxy and the original site use the SPDY protocol, everything was still going through HTTP at some point for the majority of these sites. Further, the exclusion of 3rd party content fails to consider how much of this would be 1st party in a think-SPDY-first architecture, where you know you'll reduce round trips, so putting this content on your own domain all together would be better, anyway."

In other words, the guy benchmarked SPDY _slowed down by the HTTP connections behind it_!


Sure. It illustrates how any benchmark can be flawed when it's tailored to the point it's trying to make. The author of that article thought this scenario was "more realistic." What is realistic to him is not what is realistic to other people.

And thus, benchmarks are unhelpful.

I care about feature sets and major improvements, not minor down-to-the-wire fixes. If this were called HTTP/1.2 or something I'd be less critical, but there are so many issues and flaws left unfixed, with unhelpful bikeshedding occurring over perceived "performance".


> And thus, benchmarks are unhelpful.

No, they are helpful! Especially real-world benchmarks. Sure, you can cook up utterly flawed benchmarks (like the one you pointed to), but that doesn't mean all benchmarks are unhelpful. A good engineer knows which benchmarks matter and which don't. You don't seem to be able to do that.

> If this were called HTTP/1.2 ...

The mere fact that you brought this up (no matter how much backpedalling you do after this comment) makes your criticism look even stupider. You should judge the spec based on its technical content, not on whatever arbitrary version number was assigned to it. Talk about a bikeshed argument (http://en.wikipedia.org/wiki/Parkinson's_law_of_triviality)


> benchmarks

The linked benchmark is flawed, as dlubarov noted above. I wanted to note that it is easy to write a flawed benchmark. And in this case, benchmarks are unhelpful, because my major lament is not efficiency or the lack thereof; it is the lack of any new features or any consideration of the other pain points that exist on the Web today.

> doesn't mean all benchmarks are unhelpful

Didn't mean to imply that, although I can see how it could be read that way. Rest assured, I only believe benchmarks are unhelpful here. Oftentimes a benchmark is the best way to quantify usability, such as DoYouEvenBench (http://www.petehunt.net/react/tastejs/benchmark.html)

> backpedalling

I won't backpedal. This is important, actually, because the name it's given lends it some intrinsic hype. Let's say you're Google, and you're pushing a web standard that benefits you more than anyone else. What's more likely to be adopted, "HTTP/1.2" or "HTTP/2"? It is important, in my opinion.


> The linked benchmark is flawed, as dlubarov noted above.

No, I already replied to him. You and he should spend some time looking at the Chrome network console while visiting some of the top 500 sites. It is very common for sites to be exactly like that: tons of small requests for small resources.


Okay. Fair enough. I concede that efficiency is important and that SPDY / HTTP/2 can improve upon it. But I don't believe this is worth the hype, because the exposed feature set is otherwise tiny. Efficiency is cool, yes, but I'm personally waiting until HTTP/3 fixes the other things that are wrong with the Internet before I implement anything. I think the amount of effort that goes into this is not worth the result. Why include tons of small resources on your page if they're not necessary? Why revamp a protocol entirely if all you have to do is stop including tons of small resources?


> Okay. Fair enough. I concede that efficiency is important and that SPDY / HTTP/2 can improve upon it.

Great, I appreciate you recognize this.

> Why include tons of small resources on your page if they're not necessary?

But it _is_ necessary. In every single one of the examples I gave in my reply to dlubarov, it is necessary:

- There are 100+ small images, icons, etc., and all of them are displayed on the nytimes.com homepage.

- All of the thumbnail pictures of ebay items on a listing are displayed to the user.

- The 50+ map tiles downloaded when browsing Google Maps are all necessary.

- Etc

You seem to fail to realize that in 2015, not every web page can be dumbed down to a blob of static HTML and no more than 2-3 images. The modern web is complex. We needed a protocol that can serve it efficiently.


It's not just web _pages_, either. Many people spend their days working in web applications, which HTTP/2 helps tremendously.


> there are so many issues and flaws left unfixed

Can you explain this point?


So your only real issue is with the version number.


Not really a fair benchmark. It's making tons of requests with tiny payloads, so that most browsers will hit a connection limit and requests will be queued up.

Heavily optimized pages like google.com use data URLs or spritesheets for small images, and inline small CSS/JavaScript.
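
(For anyone who hasn't seen the data URL trick: the image bytes are base64-encoded straight into the reference, so the browser never makes a separate request for them. A quick sketch in Go; icon.png is just a placeholder.)

  package main

  import (
    "encoding/base64"
    "fmt"
    "io/ioutil"
    "log"
  )

  func main() {
    b, err := ioutil.ReadFile("icon.png") // any small image
    if err != nil {
      log.Fatal(err)
    }
    // The printed string can be dropped into an <img src="..."> attribute
    // or a CSS url(...), trading one HTTP request for a larger document.
    fmt.Printf("data:image/png;base64,%s\n", base64.StdEncoding.EncodeToString(b))
  }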

On the bright side, reducing the need to minimize request count will make our lives as developers a bit easier :-)


Many of the sites I visit frequently are exactly like that: tons of requests with tiny payloads.

The nytimes.com homepage makes 100+ requests to tiny images.

Same thing for the yahoo.com homepage.

An ebay.com listing page makes many requests to small thumbnails of items on sale.

And so on... This makes it a perfectly fair benchmark IMHO.


I don't know how you're assessing those pages, but bear in mind that

- Counting images can be misleading, since well-optimized sites use spritesheets or data URIs.

- If you're using something like Chrome's dev console to view requests, a lot of them are non-essential requests which are intentionally made after the page is functional.

- HTTP connection caps are per host. The benchmark is making hundreds of requests to one host, whereas a real page might make a dozen requests to the main server, a dozen to some CDN for static files, and a dozen to miscellaneous third parties.

- The benchmark is simulating an uncached experience; with a realistic blend of cached/uncached, HTTP 1 vs 2 performance would be much more comparable.

HTTP/2 is an improvement, but if people expect a "5x-15x" difference, they're in for a big disappointment.


That's an extremely fair comparison because it shows how it is possible to avoid odd optimisations like the one you mentioned.


> These insane perf gains are possible only thanks to HTTP/2, specifically thanks to its support for multiplexing.

What's sad about this is that if you load this site with pipelining enabled you get the same speed benefits as with HTTP/2 or SPDY, but Google would never know this, since they never tested SPDY against pipelining.

> ENHANCE_YOUR_CALM (0xb):

> Please read the spec and understand the technical implications before criticizing.

Please understand that -- technically -- this protocol is an embarrassment to the profession and to those involved in designing it.


HTTP pipelining is busted for a variety of reasons. Support exists in most browsers but it's disabled by default because it makes things worse, on balance.
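
(For context, pipelining just means writing several requests down one connection before reading any responses, e.g. something like the Go sketch below; the host and paths are placeholders. The catch is in the comment: responses must come back strictly in order.)

  package main

  import (
    "bufio"
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "net"
    "net/http"
  )

  func main() {
    conn, err := net.Dial("tcp", "example.com:80")
    if err != nil {
      log.Fatal(err)
    }
    defer conn.Close()

    paths := []string{"/a.css", "/b.js", "/c.png"}
    // Send all requests back to back, without waiting for responses.
    for _, p := range paths {
      fmt.Fprintf(conn, "GET %s HTTP/1.1\r\nHost: example.com\r\n\r\n", p)
    }

    // Responses must arrive in exactly the order the requests were sent:
    // one slow resource blocks everything behind it, and a server or proxy
    // that mishandles the queue corrupts the whole connection. HTTP/2's
    // independent streams avoid that failure mode.
    br := bufio.NewReader(conn)
    for range paths {
      resp, err := http.ReadResponse(br, nil)
      if err != nil {
        log.Fatal(err)
      }
      io.Copy(ioutil.Discard, resp.Body)
      resp.Body.Close()
    }
  }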


At the time SPDY came out, Opera and the Android Browser had pipelining on by default, and Firefox was about to turn it on by default as well. They held off only because of the promise of SPDY, not because pipelining "is busted". Pipelining works fine in almost all cases.

And if you only enable pipelining to known-good servers over a non-MITM SSL connection -- exactly like SPDY does -- then there is absolutely no problem with it and it performs similarly to SPDY. But I have no doubt you will continue spreading the party line from your employer, who couldn't be bothered to even test this.


Firefox wasn't about to turn it on by default (yes, there was work being done to see how workable it was, but there was no decision to ship it). It's well-known that pipelining causes all kinds of bizarre breakage with badly behaved servers and proxies (and the latter are where the implementations are especially bad).

Opera had enough problems that it resorted to some pretty crazy-complex heuristics for when to enable pipelining; it would've been nice for those to have been published, but that never happened. Determining what a known-good server is over SSL isn't that easy.


> Determining what a known-good server is over SSL isn't that easy.

Just the opposite. Both Firefox's and Chrome's discussions of pipelining claim that "unknown" MITM software is why they didn't turn on pipelining. Nobody knows what this software is (it could be malware). But whatever this mystery software is, it can't look inside SSL, so pipelining within SSL was just as doable as inventing SPDY.

If Google hadn't pushed SPDY, pipelining was going to happen, and the unknown bad software would have been fixed or blacklisted. The Android Browser used pipelining for years until Google replaced it with SPDY. Mobile Safari has been using pipelining since 2013 (probably why it wins the mobile page-load-time benchmarks). Pipelining works.

Yes, some endpoints could be buggy; for instance, IIS 4 (on Windows NT 4) was blacklisted in Firefox. Introducing a new, more complicated protocol just because of 10-year-old outdated software is not a great way to solve problems.


The endpoints can be (and often are, in absolute terms) buggy. TLS stops bad proxies from breaking stuff, but it doesn't stop endpoints from breaking stuff.


The only explanation for this nonsense phenomenon seems to be that HTTPS ports are generally less used, which isn't an argument in favor of HTTPS.


That has nothing to do with anything. SPDY and HTTP/2 reduce the number of TCP round trips needed to transmit information.

"Nonsense phaenomenon?" Not to anyone who actually looks at how the protocols are implemented!


> I'm not ever supporting HTTP/2.

Congrats? Want a cookie or something?

Do you have an actual complaint with the spec or do you just want to be an old man yelling at a cloud?


My "actual" complaint is that it's not enough to be a major version and that it's a system that only benefits large corporations with data to pre-push, with no other benefits.

You can think differently, of course, but after looking at this (https://news.ycombinator.com/item?id=8824789) I reconsidered my previously positive view on it.

(Also, I'd love a cookie)


So you'd be ok if they called it HTTP/1.2?

Snark aside, it's a standardized way of allowing different architectural patterns that can benefit use cases we haven't even seen yet. Yes, those architecture patterns currently benefit large corporations, but they're not being implemented at the expense of anything else. HTTP is a remarkably complete and flexible protocol.

What other benefits were you expecting to see that aren't already part of HTTP/1.1?


From https://news.ycombinator.com/item?id=8825001:

  * no more easy debugging on the wire
  * another TCP like implementation inside the HTTP protocol
  * tons of binary data rather than text
  * a whole slew of features that we don't really need but that please some corporate sponsor because their feature made it in

  * continuing, damaging and absurd lack of DNS and IPv6 considerations
    * most notably the omission of any discussion of endpoint resolution

Fixing anything related to DNS, DNSSEC, IPv6, or anything else would have made this closer to "HTTP/2."

And as I said in another thread: yes. Calling it HTTP/1.2 would actually have made me a little happier. This isn't the next new, big thing. This is a minor improvement, if not a minor regression.


You do realise that the overwhelming majority (99%) of HTTP traffic is transferred to or from large companies like Google and Facebook? If it benefits their clients, then it benefits most of the web. HTTP/2 is particularly beneficial for the developing world, where latencies are higher. The world is bigger than you.

Also WTF does HTTP have to do with DNS, DNSSEC, and IPv6? Talk about layering violations...

Also, I think a total wire-protocol change warrants a major version number increase, not that it matters at all.


I don't think anyone's selling it as the next new, big thing. It's just a version increment on HTTP; the one that includes DNS/DNSSEC/IPv6 changes can be called HTTP/3000 for all I care. You don't have to use these features if you don't like them; they may make sites from companies like Google harder to reverse engineer, but HTTP is currently used for a lot more than text data. You just seem to confuse "corporations want it" with "bad".

And honestly, IPv6 is probably the biggest "big corporate" feature out there. Any big company providing access to more than 16 million devices (and yes, they do exist) has a very urgent need, since the 10.0.0.0/8 private network only contains about 16 million addresses (2^24 = 16,777,216).

At the end of the day, it's only a standard. As proven by SPDY, "big corporations" like Google are going to implement whatever the heck they want to, then ask for it to be included in the standard. I'm all for a system that makes it easier for companies to get their technologies standardized as part of an open standard - they're spending the investment dollars, but we all benefit from the capability.


Wait, it being a binary protocol is a good thing. No longer will we have proxies mangling Upgrade handshakes and such.

Header compression, server push and proper multiplexing (which avoids all the problems with pipelining) are all features most applications will benefit from.
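
(To make server push concrete, here's a minimal sketch of what it looks like from application code, using the http.Pusher interface that Go's net/http gained in Go 1.8, well after this thread; the asset paths and cert filenames are made up.)

  package main

  import (
    "log"
    "net/http"
  )

  func handler(w http.ResponseWriter, r *http.Request) {
    // On an HTTP/2 connection, push assets the page is about to ask for,
    // so the client gets them without an extra round trip of discovery.
    if pusher, ok := w.(http.Pusher); ok {
      if err := pusher.Push("/static/app.css", nil); err != nil {
        log.Println("push failed:", err)
      }
    }
    w.Write([]byte(`<html><head><link rel="stylesheet" href="/static/app.css"></head><body>hello</body></html>`))
  }

  func main() {
    http.HandleFunc("/", handler)
    // HTTP/2 (and therefore push) is only negotiated over TLS here;
    // cert.pem and key.pem are placeholders.
    log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
  }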


Network debugging tools like Wireshark will have pretty printers for HTTP/2, just like they do for TCP/IP.


This is pretty much nonsense. There are a bunch of useful improvements in HTTP/2. It's not perfect, but I'd rather see incremental improvements of this sort than, e.g., the ridiculously extended adoption times we've seen with IPv6.



