HTTP 2.0 First Draft Published (infoq.com)
68 points by Garbage on Nov 30, 2012 | 64 comments



Personally (i.e. not speaking on behalf of my employer) I'm not terribly keen on SPDY being HTTP 2.0. It fixes HTTP/1.1 pipelining by adding a layer of binary complexity to a protocol that has succeeded by being simple and textual.

Also, it's only sort of a layer, because it depends on details of HTTP (such as the existence of headers) to operate, so it is not an independent layer. In networking I'm much happier when layers are independently specified.

Also, I think the implementation of header compression is poor because it uses a fixed dictionary based on current HTTP headers, with no provision for negotiating a different one going forward. It would be better if the compression dictionary were negotiated on session creation. Notice how the dictionary has already been updated between drafts; once it's in the standard, updates will not be possible.
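To make the fixed-dictionary point concrete, here's a toy sketch (Python, with a made-up two-line dictionary rather than SPDY's actual one) of how a preset zlib dictionary shrinks header blocks, and why both ends have to hold exactly the same dictionary bytes up front, with no way to negotiate a different one:

    import zlib

    # Hypothetical preset dictionary; SPDY ships a much larger fixed one
    # full of common header names and values. Both peers must hold the
    # exact same bytes -- there is no negotiation step.
    PRESET_DICT = b"accept-encoding: gzip, deflate\r\nuser-agent: Mozilla/5.0\r\n"

    headers = b"user-agent: Mozilla/5.0\r\naccept-encoding: gzip, deflate\r\n"

    plain = zlib.compressobj()
    with_dict = zlib.compressobj(zdict=PRESET_DICT)

    plain_out = plain.compress(headers) + plain.flush()
    dict_out = with_dict.compress(headers) + with_dict.flush()
    print(len(plain_out), len(dict_out))  # the dictionary version is smaller

    # The receiver can only decompress if it holds the identical dictionary.
    d = zlib.decompressobj(zdict=PRESET_DICT)
    assert d.decompress(dict_out) == headers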

And SPDY requires TLS, which means that if it were HTTP 2.0 it would require the deployment of certificates everywhere. That's an added expense for a web site operator. Unless SSL certificates suddenly become free and trivial to deploy, that would make HTTP 2.0 both costly and complex.

I would much rather see a proposal to change HTTP so that pipelining works, using textual headers and not requiring TLS.


Poul-Henning Kamp said many of the same things when this came up last time, and I agree with both of you.

https://www.varnish-cache.org/docs/trunk/phk/http20.html

IMO, an HTTP 2.0 standard based on SPDY is a small but meaningful illustration of the divergence from an idealistic egalitarianism to an oligarchy of a handful of huge organizations. The issues at stake here are basically monetary for Google and a few other companies. There's very little technical finesse here despite the added complexity, and certainly little to no reimagining or benefit for the vast majority of independent site operators or web developers.


requires TLS which means that if it were HTTP 2.0 it would require the deployment of certificates everywhere. That's an added expense for a web site operator. Unless SSL certificates are suddenly free and trivial to deploy

Get the browser vendors to stop being idiots, and treat self-signed SSL connections as identical to plaintext connections.

Or, even better, get them to implement a notary model [http://perspectives-project.org] (as a default rather than an extension) that doesn't allow a single insecure CA to break the security of the entire web.


The first step towards an alternative to the multiple-single-points-of-failure CA model is already underway: it's Trevor Perrin and Moxie Marlinspike's TACK project (currently an Internet Draft). TACK lets clients cache certificate pins for a site, typically after first contact, so that any site on the Internet can have the same protection that Google's properties have today by virtue of the hardcoded pins in Chrome.

As Moxie Marlinspike has said, on HN even, TACK is a sensible and achievable first step towards a substrate that we can build notaries or other trust models on top of. It doesn't require years of study and consideration; it merely extends something that browser vendors already do for their preferred sites.
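To illustrate the pin-on-first-contact idea, here's a simplified trust-on-first-use sketch in Python. It is not TACK's actual mechanism (TACK pins a separate signing key rather than the leaf certificate), and the pin-store filename is made up:

    import hashlib, json, socket, ssl

    PIN_FILE = "pins.json"  # hypothetical local pin store

    def cert_fingerprint(host, port=443):
        # Fetch the leaf certificate and hash it. No CA validation here;
        # this only demonstrates the pin-on-first-use bookkeeping.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check_pin(host):
        try:
            pins = json.load(open(PIN_FILE))
        except FileNotFoundError:
            pins = {}
        fp = cert_fingerprint(host)
        if host not in pins:
            pins[host] = fp  # first contact: remember the pin
            json.dump(pins, open(PIN_FILE, "w"))
            return "pinned on first use"
        return "pin matches" if pins[host] == fp else "PIN MISMATCH: possible MITM"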

Meanwhile, browsers can't simply treat sites under self-signed certificates as normal plaintext HTTP sites. The user reached the site through an HTTPS URL, which promised them security. When the browser detects and warns about a self-signed certificate, it is telling the user "this site is lying about its security". The simple way to understand this: start by asking what a browser should do when the Citibank Online Banking Login presents a broken cert, and then ask how the browser should know when it's OK for a site to present as merely "not encrypted" (ie, HN login) and when it's not OK (ie, online banking). It can't. The browser has to assume that HTTPS sites with broken certificates are sensitive.

Remember also, the "broken certificate" case is exactly what happens when an attacker intercepts a TLS connection for a MITM attack.


Meanwhile, browsers can't simply treat sites under self-signed certificates as normal plaintext HTTP sites. The user reached the site through an HTTPS URL, which promised them security.

The user reached the site by clicking on a link or bookmark, and doesn't know or care about http vs https.

start by asking what a browser should do when the Citibank Online Banking Login presents a broken cert

It should not show the green "Citigroup Inc (US)" at the left of the address bar.

If I type "citicards.com" into the address bar, I end up redirected to an SSL site with an EV cert. If my DNS got hijacked, I would probably end up not redirected to the SSL site at all, rather than redirected to a site with a broken cert. So non-SSL sites are just as dangerous as sites with bad certs, and should be presented the same way.

how the browser should know when it's OK for a site to present as merely "not encrypted" (ie, HN login) and when it's not OK (ie, online banking). It can't. The browser has to assume that HTTPS sites with broken certificates are sensitive.

The browser should visually distinguish sites that are safe for sensitive info from those that are not. Plaintext and self-signed SSL are both not safe. Sites with EV certs are supposedly safe. Sites with other CA-signed certs are also supposedly safe, but slightly less so.

So, show EV sites with the green name by the address bar, like recent browsers do now. Show sites with other CA-signed certs with the little lock icon, and maybe color it light green. Show plaintext and self-signed sites with nothing at all, and maybe color the address bar slightly red. But, do this identically for non-signed and self-signed sites.


Your HTTPS "session" with your bank isn't just one connection that can be checked a single time when you first connect; it's hundreds of individual HTTPS connections, each of which needs to be verified, or an attacker will just corrupt the least obvious connection and use that to break the security of the whole app.


Browsers already complain if a site mixes http and https, why can't they complain if security levels are mixed at all (plaintext, self-signed, normal-CA-signed, EV, TACK-pinned)?


> The user reached the site through an HTTPS URL

Were I in charge, that's the detail I'd change. An "https" URL is actually requesting a certain quality of security service and should fail without trustworthy authentication, but using insecure TLS opportunistically (e.g., caching the result from a prior "Upgrade" header or OPTIONS request) for an "http" URL would be fine, since cleartext would also have been fine.
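A sketch of how a client might probe for that and cache the answer (RFC 2817 defines an Upgrade: TLS token for this; very few servers actually implement it, so treat this purely as an illustration of the opportunistic idea):

    import http.client

    def supports_tls_upgrade(host):
        # Cleartext OPTIONS probe; look for an Upgrade advertisement.
        # A client could cache a positive answer and opportunistically
        # use TLS for future "http" URLs to this host.
        conn = http.client.HTTPConnection(host, 80, timeout=5)
        conn.request("OPTIONS", "*", headers={"Connection": "Upgrade"})
        resp = conn.getresponse()
        upgrade = resp.getheader("Upgrade", "")
        conn.close()
        return any(t.strip().upper().startswith("TLS") for t in upgrade.split(","))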


I think this makes sense too, in the context of the security model we have now, but I'd rather see a better trust model that allows everything to be encrypted all the time safely. I think we'll have it eventually.


What's the status of implementations and/or toolsets for TACK? Can I test it on my clients' SSL websites? Are there settings in any popular webservers for activating TACK?

(Oddly, I care least about the browser for this particular question.)



I never thought about it from that angle. You are right, it makes no sense for browser vendors to display a big fat warning page because someone signed a certificate himself, while they never let you know that 90% of the websites you use are transferred in plaintext over a wholly untrusted network.


Again: that big fat red warning is the only thing that happens when an attacker hijacks the DNS for Citibank.com at your ISP, redirects traffic to herself, and intercepts all the TLS online banking connections. Without the big red warning, you might as well not have certificates at all, because nobody would ever notice MITM attacks on TLS. None of the cryptography in TLS works unless users can be assured that every single HTTPS request will generate a big red warning box if the certs don't check out.

I absolutely understand how confounding the self-signed certificate warning seems when viewed solely in the context of normal sites operating under normal conditions. But that warning box doesn't mean "this site could be a whole lot safer if it was just configured better!" It means what it says: "you are probably under active attack".

Yes, there's a Bayesian problem here: most of the time, you are not actually under attack, and the site operator is in fact just using a poor configuration. But the browser can't know that; it has to assume you're under attack, because there's no other signal available to it to determine otherwise.


Yeah, except, since as you say, "most of the time, you are not actually under attack", that's exactly what end-users think, and learn to ignore the warning regardless. So, while you may save some tech-savvy users, all your normal users will just click through. I've seen this happen first hand on multiple college campuses with dozens of users where I've been the IT Support/Network Admin. Too many false positives and you end up with security theatre instead of actual protection.


Which is why the warnings are getting steadily more annoying.

What's the alternative? There is no difference on the wire between a self-signed certificate for a site that simply doesn't care about certificates, and a self-signed certificate that is the sole marker of an attacker having hijacked the TLS connection of a site that very much does care about its security. A MITM attack looks identical on the wire to a self-signed cert.


"What's the alternative? There is no difference on the wire between a self-signed certificate for a site that simply doesn't care about certificates, and a self-signed certificate that is the sole marker of an attacker having hijacked the TLS connection of a site that very much does care about its security. A MITM attack looks identical on the wire to a self-signed cert."

How about adding another signal? It sounds like you're arguing with the sea, expecting normals to change.

Use the recent https-only header that says "if you ever see an insecure connection to this site, it's a bug", and pre-populate the list.

Stop assuming that users will eventually get it, and design a better product.


How does HSTS (the "https-only header") help with self-signed certificates? HSTS doesn't mean "this site won't work if its certificate is self-signed".


Does the HSTS list store the signature? Seems like if you see an HSTS site later convert to self-signed, that's always a security breach.


> Does the HSTS list store the signature?

It doesn't matter. The ONLY thing HSTS does is tell the browser to make future requests over HTTPS. If an HSTS site switches to a self-signed cert between my visits, the browser will still warn me, because the new cert is suspicious.
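For reference, opting in is a single response header. A minimal WSGI sketch (the max-age value is just an example):

    # After the first HTTPS visit, conforming browsers remember to use
    # HTTPS for this host for max-age seconds, so later plain-http links
    # and bookmarks get rewritten to https before any request is sent.
    def app(environ, start_response):
        start_response("200 OK", [
            ("Content-Type", "text/plain"),
            ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
        ])
        return [b"hello over https\n"]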


So instead of telling people "you're safe unless Joey cries wolf", tell people "you're safe if Joey says there are no wolves".


I don't understand this comment so I'll just repeat: a TLS connection that uses a self-signed certificate by design and a MITM attack against a site look identical on the wire.


So do an unencrypted normal site and an unencrypted attack site (like my spam folder is full of links to).

Most self-signed sites are fine, but because a few are MITM attacks, they should all throw big warnings (which users will learn to ignore, because of poor signal-to-noise).

Most unencrypted sites are fine, but because a few are attack sites, ....

The criterion for a user to enter sensitive info should not be "the connection is encrypted" or even "the connection is encrypted and goes to the server it says it does", but "I know the real-world identity of who I'm talking to".

The criterion to warn the user against entering sensitive info should be the negation of that: "I don't know the real-world identity of who I'm talking to".

The means of warning the user should be such that the user does not need to act when not trying to enter sensitive info. So rather than pop-ups or interstitials that have to be dismissed -- and that the user can learn to dismiss -- there should be some persistent UI cue (like the green "Citigroup Inc (US)" beside the address bar) that the user can check when needed.

    Browser: "Bad cert! The world's gonna end!"
    User: "That's just a discussion forum, I don't care."
    Browser: "Bad cert! The world's gonna end!"
    User: "That's Joe's blog, I don't care."
    Browser: "Bad cert! The world's gonna end!"
    User: "yeah yeah, that's nice"
    Browser: "Bad cert! The world's gonna end!"
    User: "Would you shut up already!"
    User: "...Why is my bank account empty!?"


I don't understand why you keep saying "attack site". The problem the self-signed cert warning addresses isn't malicious sites; it's attackers who intercept TLS connections to online banks and swap out the Verisign certificate with a self-signed certificate.

The warning isn't about the site; it's about the connection.


  > it's attackers who intercept TLS connections to online 
  > banks and swap out the Verisign certificate with a self-
  > signed certificate

How is a MITM attack with a self-signed certificate any different from redirecting a user to an HTTP site instead of an HTTPS site? Do you really think that 'Joe Sixpack' knows the difference between http and https the way that you do?

You're saying that users can distinguish between the following possibilities:

- A site is over http, so it's insecure (and no big red warning).

- A site is over possibly compromised https with a big red warning, so it's insecure.

- A site is over verified https with a big green "everything is ok" light, so it's secure.

You seem to think that the average user knows that HTTPS needs a "red light/green light" system, but HTTP does not (because it's inherently insecure). I posit that the average user doesn't 'get' the difference between http with no warning and https with a green light. Why not have a system that is simple for the end-user? E.g.:

  - green light == secure
  - no green light == insecure
  - no need to differentiate between http/https for
    the average end-user


The warning isn't about the site; it's about the connection.

Yes. Which is a bad idea. It conflates an absence of network shenanigans with "the user knows who he's talking to".


I'm really having trouble following your argument here. The only reason the user wouldn't know who he's talking to is shenanigans. You're complaining about the shenanigans alert.


The suggestion is to tell people "you are safe if there is a green padlock in the address bar", and only display the padlock if the certificate is signed by a trusted authority.


Again, apart from the fact that the absence of a green padlock is an insufficient alarm for "your site is being hijacked by a MITM attacker", a session with a web application consists of many hundreds of individual connections.


Yes. Of course we still need the certificate system and the big fat warning boxes.

But you didn't address the problem here. You are essentially arguing we should keep the steel-bolted doors, and I agree, but that doesn't preclude us from doing something about all the traffic that doesn't even have padlocks.


HN doesn't promise me encrypted access to content, so I do not care whether it's encrypted or not. HN is not an online bank.

My bank however obviously must promise me encrypted access, or else nobody would use it.

The browser has no way of knowing whether a site should be encrypted; only that it says it is encrypted.


>My bank however obviously must promise me encrypted access, or else nobody would use it.

Hahaha, you're joking, right?


The core issue is that encryption is useless without authentication. A MITM could just replace the original self-signed certificate with his own and read the decrypted plaintext while proxying the request so the user doesn't notice.


Yes; more importantly, a MITM can replace a validly signed certificate with a self-signed certificate. If browsers are lax about self-signed certificates, all TLS connections are weakened, not just the ones that "opt out" of "good" certificates.


Not exactly. The fat warning is there so that you don't get a false sense of security by seeing a little lock.

But then again, we're now moving to a "if it's self-signed, you can't access the site, you can't even bypass the check" world.

I wonder how much Verisign pays some of us.


we're now moving to a "if it's self-signed, you can't access the site, you can't even bypass the check" world.

wait wait wait what?

I am currently in a space where we have to deal with self-signed certificates all the time. I fully accept that people like me will have to deal with more hassles from this so that people don't get their paypal accounts hijacked as easily. But I've yet to find out that I simply cannot connect to a site with a self-signed certificate.


Have a look at http://www.imperialviolet.org/2012/07/19/hope9talk.html for a glimpse of the future. :)


Verisign did not pay Mozilla to make it harder to click through the broken certificate warning.


Do you mean that you can't bypass the certificate warning anymore in Firefox? (or will not in the near future)

If that is so, what is your reasoning behind it?


I work for one of the said vendors (and I'm being vague on purpose here). I took part in the discussion. I made the same point you did. People don't care. The argument was quickly dismissed with a "go get a cert for free at StartSSL". Seriously.

I think the Internet is a social change, and it does indeed transform people into idiots. People in groups have always been more stupid than individuals. And the Internet makes us all one freaking huge group of people.

Apparently if you don't share the popular idea you must be destroyed (you'll see this happen often in HN comments too).

Otherwise, here's another project that works on the same principles as Perspectives but has a better back-end (albeit WAY less marketing), IMO: http://web.monkeysphere.info/


I somewhat agree; however, instead of making pipelining work in HTTP, they should remove it altogether and focus the HTTP protocol on just the request/response structure and headers. Then, create a second protocol -- say HMWP (HTTP Multiplexed Wire Protocol) (not a real protocol, I just made that up) -- that focuses on pipelining, server push, header compression, etc. This layering approach really feels more "internetty". It would also allow independent development of network optimization and application handling.

This split would also help address conflicts between cache/proxy/balancer developers and application developers.
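A toy sketch of what framing in such a hypothetical wire layer could look like (stream IDs plus length-prefixed payloads, so ordinary textual HTTP messages could ride on top unchanged; the frame layout here is invented for illustration):

    import struct

    # Hypothetical frame: 4-byte stream id, 4-byte length, then payload.
    # The payload stays an ordinary textual HTTP/1.1 message, keeping the
    # request/response layer independent of the multiplexing layer.
    def pack_frame(stream_id: int, payload: bytes) -> bytes:
        return struct.pack("!II", stream_id, len(payload)) + payload

    def unpack_frames(buf: bytes):
        frames, offset = [], 0
        while offset + 8 <= len(buf):
            stream_id, length = struct.unpack_from("!II", buf, offset)
            frames.append((stream_id, buf[offset + 8:offset + 8 + length]))
            offset += 8 + length
        return frames

    # Two requests interleaved on one connection:
    wire = (pack_frame(1, b"GET /a HTTP/1.1\r\nHost: example.com\r\n\r\n") +
            pack_frame(3, b"GET /b HTTP/1.1\r\nHost: example.com\r\n\r\n"))
    print(unpack_frames(wire))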


If you don't want to deal with certificates just keep using HTTP 1.1.

If you want the speed and security improvements of 2.0, certificates will get cheaper and easier to obtain. Most of the certificate authorities' costs are one-time (validating identity). Issuing subsequent certificates costs them nothing. Pricing will adjust accordingly with increased demand (and in some cases already has).

A fixed dictionary for header compression is a lot better than no compression dictionary. Negotiation would be added complexity/latency. Sure you could argue either way but it doesn't seem like a big deal.

Google has done a great job on SPDY. It's proven at scale and performs really well. Making it a standard is a great move.


It seems like it would be a lot simpler to fix the current pipelining infrastructure (already in the spec but not used?) and tag pipelined request/responses with request IDs.
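Something like this, assuming a hypothetical Request-Id header (no such header exists in HTTP/1.1 today), so that pipelined responses could come back out of order and still be matched to their requests:

    # Client-side sketch: tag each pipelined request, have the server echo
    # the tag, and match responses by tag instead of by arrival order.
    pending = {}

    def send_request(request_id: int, path: str) -> bytes:
        pending[request_id] = path
        return (f"GET {path} HTTP/1.1\r\n"
                f"Host: example.com\r\n"
                f"Request-Id: {request_id}\r\n\r\n").encode()

    def match_response(response_headers: dict) -> str:
        # Pop the request this response belongs to, whatever order it arrived in.
        return pending.pop(int(response_headers["Request-Id"]))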


Have you tried getting your voice heard in the standardization process?


> Unless SSL certificates are suddenly free

Oh god, why do so many people not know about StartSSL.com? They've been offering free SSL certificates for quite some time.


I think it will be quite tragic if HTTP 2.0 ends up being a binary protocol. The discoverability and readability of the basic text based Internet was, in my opinion, one of its most fantastic qualities. It meant that as a young hacker I could just look at things and see how they work and then progress to learning and experimenting by writing very naive partial implementations of said protocols. I am not sure the bits saved justify the obfuscation of everything.


My favorite part about it is that everything will be encrypted by default in the future.


And require SSL certs, which cost how much again? It's quite an expense for hobby sites, non-profits, and such.


An even larger problem, IMHO, is that the two most widely deployed platforms worldwide (Windows XP and Android < 3) do not support SNI, which forces you to use one IP address per domain.

So now we are moving to protocols that mandate SSL while at the same time we are quickly running out of IP addresses (and getting correctly working IPv6 on the two platforms in question is about as difficult as getting them to support SNI).
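For what it's worth, this is all SNI buys you on the server side: one IP, one listening socket, certificate picked from the hostname the client sends in the TLS handshake. A rough Python sketch (cert/key paths are made up; clients without SNI never send the hostname and just get the default cert, hence the one-IP-per-domain workaround):

    import ssl

    default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    default_ctx.load_cert_chain("default.crt", "default.key")       # hypothetical paths

    site_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    site_ctx.load_cert_chain("example.org.crt", "example.org.key")  # hypothetical paths

    def pick_cert(ssl_sock, server_name, initial_ctx):
        # Called during the handshake with the SNI hostname (or None).
        if server_name == "example.org":
            ssl_sock.context = site_ctx  # swap in the per-site certificate
        # otherwise the default context (and certificate) is used

    default_ctx.set_servername_callback(pick_cert)
    # default_ctx would then be used to wrap accepted server sockets.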


$0; use StartSSL. They're in the trusted root CAs for Chrome and Firefox, which are the only browsers that support SPDY so far (I think they're trusted in all major browsers, but I don't spend my day examining IE6's list of trusted CAs).

The actual cost is in requiring a dedicated IP per SSL site, which many hosts charge monthly for. Unless SNI is built into the spec (I browsed around briefly and didn't see a mention, other than a request for it being a requirement in a forum mid-2011), anyone intending to host multiple HTTP 2.0/SPDY sites on the same box will need to buy a dedicated IP per domain. It's also something of a pain to add SNI support to an existing host, but so is adding SPDY support.


SSL certs are free: https://cert.startcom.org/


Like most (all?) digital products, the price of SSL certificates will converge to zero with higher usage. So mass adoption might be beneficial after all.


I believe the Internet should stay free, and making your own website should certainly stay free.

Right now, you don't have to have a domain. You don't have to pay for an SSL certificate. You don't have to pay for hosting. The only thing you actually need to pay for is the link to the ISP.

And it's not as if there aren't better, free alternatives either. Ultimately it's not about the price tag. It's about having a third party controlling your stuff. It's more about freedom than free as in beer.


Then continue using HTTP/1.1.

SSL needs to verify the site's identity to be effective, period. For a certain level of trust (EV certs, for example) that requires humans doing work, at least for now. Humans cost money. StartSSL's free certs work perfectly well for non-EV requirements, which basically amount to a verification level of "someone who can read email on this domain has requested a cert for it" - which can be, and is, completely automated and therefore available for free.


That reply is wrong on so many levels.

First of all, everyone wants the benefits of HTTP/2.0, obviously. Else I'd be using gopher, thank you very much.

Then, StartSSL is a company that happens to give free certs -- for one single subdomain. Got two subdomains? Gotta pay. They can also decide to make those non-free at any given moment, if they feel like it.

The only part I agree with is paying for EV certificates. But you should NOT need to pay, and you should NOT need a third party to be responsible for YOUR certificates if you do not want to.

And again, there are quite a few distributed trust models around that work well and do exactly that, but they get great pushback from vendors since, by nature, they don't bring in as much money.


It took me a minute to figure out that "what do you mean, not free? Surely wordpress will just buy a *.wordpress.com certificate?" was not what you're going for.


Well yeah, unless you own *.wordpress.com, the certificate and domain belong to wordpress.com, not you. You're in their hands.


Some sites, like embedded products that are shipped to customers, are not compatible with the forced-SSL model at all.

If my fridge has a webserver I need to be able to talk with it.


I understand the pain of deploying SSL with shrink-wrapped software, but that should not be a reason for us to just say "oh f... it, let's just talk to our devices using plain text and adopt hope as our new security model". My original statement still stands: with mass adoption of SSL we will have new challenges and will find new solutions.


but that should not be a reason for us to just say "oh f... it, let's just talk to our devices using plain text and adopt hope as our new security model".

That sounds to me like the exact solution you are proposing.

If I'm selling lightbulbs with built-in webservers in an "only SSL signed by a CA" universe, I can only let people talk to them in plaintext and hope no one breaks in.

Right now I can sell lightbulbs with built-in webservers that people can talk to secretly. And with TACK I could keep someone from eavesdropping on me.



It's about the same price as a domain unless you're doing wildcard certs.


Does anyone else have any experience setting up SPDY on a web server? Since SPDY is already supported by Chrome and Firefox, I'd be interested in at least experimenting with it if there is an Apache or Nginx based solution which is stable enough to work in production. Of course I would also need to serve old style HTTP for those that are using a browser which doesn't support SPDY but if there is a way to serve SPDY to those who can support it I'm all for trying it out.


There's mod_spdy for Apache 2.2 developed by Google employees:

http://code.google.com/p/mod-spdy/

And a spdy patch for nginx from one of the main nginx developers:

http://nginx.org/patches/spdy/

Wordpress and CloudFlare both use the nginx patch, so it should be production-ready.

I use mod_spdy and haven't had any problems.
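If you want to confirm it's actually being negotiated once it's set up, a quick client-side check is possible from Python, assuming an OpenSSL build with NPN support (SPDY is advertised via NPN during the TLS handshake):

    import socket, ssl

    def negotiated_protocol(host):
        # Ask for spdy/3 and see what the server picks during the handshake.
        ctx = ssl.create_default_context()
        ctx.set_npn_protocols(["spdy/3", "http/1.1"])
        with socket.create_connection((host, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.selected_npn_protocol()

    print(negotiated_protocol("www.google.com"))  # e.g. 'spdy/3' if supported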


And here we go again, another design by committee to haunt us. If this piece of crap sticks, we will all have to keep using HTTP 1.1, which by the way isn't that bad. First WebSockets, and now this. Just great.




