Strengthening HTTP: A Personal View (mnot.net)
51 points by tkorotkikh on Aug 6, 2014 | 49 comments



It's disappointing to hear that the idea of requiring TLS with HTTP/2 has lost traction. For me, TLS-everywhere was the carrot on the stick.

I recognize that getting consensus is hard work, but I don't think creating another encryption-optional protocol and letting vendors duke it out over security is going to end well for the users.

  HTTP is a deployed protocol with lots of existing 
  stakeholders, like proxy vendors, network operators, 
  corporate firewalls and so on. Requiring encryption 
  with HTTP/2 means that these stakeholders get
  disenfranchised.
I'd like to hear the arguments of the potentially-disenfranchised stakeholders first hand. Is it mainly because it makes it harder to sell or use products that allow traffic snooping?


One obvious change here is that it would make CA-signed certificates mandatory for all HTTP2 web servers - is that really a situation we want?


That doesn't have to be the case. You could still allow self-signage, with all of the security caveats that presents.

Who knows. Maybe that arrangement could even spur a sorely needed push for a free certificate trust network and get rid of CAs entirely.


Self-signed certs are much harder to get browsers to accept these days. The "I know what I'm doing" button and process are becoming ever more complex, and I wouldn't be surprised if they just start going away in favor of a list of trusted root CAs, which you may or may not be able to control as a user, depending on your browser. Which sucks. But anyway.

StartSSL is one place where you can get a free cert for your website today (and yes, they charge for revocation, but revocation is pretty ineffective anyway). I got a free cert from them, but my mobile browser doesn't trust it, so I decided to shell out $10 for a cert that's more widely auto-trusted by browsers. Not a huge cost, IMO.


I can't speak for anyone else, but I certainly do.

Regardless of the form of PKI employed, there's going to be a cost associated with validating the identity of the parties you're communicating with.

For the average Joe, certs that require domain ownership validation are pretty cheap these days -- certainly on par with domain name registration fees. As with domain names, people need to just start treating it as a necessary cost of running a website.

To be clear, I am far from a fan of X.509, but I'm not holding my breath for something better to come along (and be widely deployed) this decade. So let's use what we've got.


Those who don't have CA-signed certificates can still use HTTP/1.1; I don't think it's that big of a deal.


I'd like to hear why maintaining status quo for stakeholders should ever be a valid argument for a technology standard.


Because for a standard to actually become standard, you have to make it desirable for the people who you're trying to talk into switching to it.


I hate this new HTTP. They seem to have taken a beautifully simple concept and added so much complexity it's ugly and horrible and awful.


I hear this a lot, but without a consistent argument as to why that's the case. Do you have anything more than that to offer?

HTTP 1.1 is relatively simple, but it's also a bottleneck.


Don't shift the burden of proof: it is up to HTTP/2 proponents to demonstrate that the benefits are greater than the costs, and from everything I've seen the benefits are meager and the costs are large.


I'm not trying to shift the burden here, but the point made was along the lines of "HTTP2 is rubbish" - I've seen this a lot, with little to back it up.

But I'd say some of the benefits were:

- Server push support

- Multiplexed requests, header compression, and other performance improvements

- Mandatory encryption support

Downsides are (from what I understand):

- Not a plaintext protocol

There may be more downsides, which I'm happy to hear about.


Mandatory encryption support sounds like a good idea, but isn't that what the article was saying they aren't going to do? I would say the other things you mentioned are features, but not necessarily benefits. Server push is not something I want, and if I did, web sockets are probably a better solution. And those alleged performance improvements have yet to show a significant performance increase on real-world web sites.


> - Server push support

Couldn't a multi-part document already do this?

> - Mandatory encryption support

Couldn't they just have said "OK, HTTP 1.2 MUST be done over a TLS connection?" Also, didn't the TLS-always idea go away recently?

The downsides also include reïmplementing much of level 4 (congestion management, flow control, &c) and all of the complexity that goes with it.


Here is a thread about issues in serving it efficiently: https://groups.google.com/forum/#!searchin/mechanical-sympat...


Are you claiming HTTP2 is not more complicated than HTTP 1.1?


It is more complex - why is that inherently a bad thing?


Complexity is bad, all other things being equal, because it takes longer to implement, is more likely to contain bugs, etc.


Right, but obviously that's not the case here - there are actual benefits of using HTTP2. The fact that it's more complex doesn't necessarily (and I would argue, definitely doesn't) outweigh that.


I realize I don't have a well argued set of reasons. But sometimes you just look at something and think there has to be a better way than this.


> For example, in the current design of HTTP the decision as to whether to use encryption is completely up to the server; the only thing the user can do is observe whether a URL is “HTTP” or “HTTPS” (or maybe watch a lock icon) and decide whether they can continue surfing.

This seems a strange characterization. As in any other network protocol, if the client and server don't agree then nothing happens. If either insists on something the other finds unacceptable (e.g. a 404 response to "GET /your-secret-plans HTTP/1.1") then the transaction doesn't take place. Perhaps this could be made more explicit via a header, but what would that really gain?


I think he's saying that clients could decide beforehand whether they want to go into HTTPS-only mode, for example.

But this is something clients can actually do today, without the need for a new protocol or any awareness on the server side. URL isn't HTTPS? Don't load it. I actually did a Firefox plugin that implements this, called http-nowhere.
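
The whole policy fits in a few lines. Here's a minimal sketch in Python, assuming a generic fetch callable for illustration; the real http-nowhere add-on hooks into Firefox's request pipeline rather than wrapping a function like this:

  # Minimal sketch of an HTTPS-only client policy (illustrative only).
  from urllib.parse import urlparse

  def fetch_https_only(url, fetch):
      """Refuse to load anything that isn't an https:// URL."""
      if urlparse(url).scheme != "https":
          raise ValueError("refusing to load non-HTTPS URL: " + url)
      return fetch(url)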


> But this is something clients can actually do today, without the need for a new protocol or any awareness on the server side. URL isn't HTTPS? Don't load it. I actually did a Firefox plugin that implements this, called http-nowhere.

Yes - and my understanding is that the browser developers are going to require all HTTP2 connections to be over TLS anyway.


Beyond everything else, someone should have stopped and said "Wait, this is level 4 stuff we're implementing in level 7. There has to be a saner way"

I also find the naming very disingenuous. This is not even an HTTP-style protocol, it should not be named HTTP2.

As for people who want to require CA certs everywhere: Are all the free certs out there accepted by all browsers? (Honest question).


There goes HTTP2's most exciting feature. I guess IETF will remain as useless as ever. Strong encryption on the Internet will need to arrive organically from certain projects catching momentum, and then Internet stakeholders can adopt them as they are, or risk being left behind. IETF standards will always have a multitude of compromises to please all the top Internet stakeholders (even if that's detrimental to the Internet ecosystem and its security).

Also, are those 1 or 2 (that we know of) NSA employees still shaping crypto policy at the IETF?


Forget the hand-waving and excuses, TLS needs to be mandatory, period.

This alone would make me forgive just about any other problem with HTTP/2.


I disagree for a number of simple reasons.

- Cost. You now require every domain owner to also pay for certificates (or get a free one, but the process is roughly the same).

- Limited choices. The keys to the kingdom are owned by a few corporate interests.

- Encrypted browsing is simply unnecessary for certain classes of sites (non-logon, informational sites).

- It increases the barrier for entry to put a basic website up. As well as knowing some HTML and CSS, you also need to know about certificates and keys, how to deploy them, and how to keep them safe.

- Internal network communication becomes a lot harder with self-signed certificates. You've now got to generate certs and add them to trust stores.

Leave the choice to the site operators and the users. Fix the CA system before forcing it upon everyone.

I have HAProxy set up as the SSL termination point. It fronts about 15 different applications. But there's no easy way to decrypt at the proxy and then re-encrypt to the backend servers.

We're not ready for mandatory SSL.


> Cost

The certificate issue is a browser problem. Self-issued certificates are trivial to produce, and in the era of mandatory TLS they probably should not produce the huge scary browser warnings we see today. I'm not a UI expert, but I could envision a "medium severity" browser warning that states the connection is encrypted but the certificate can't be verified. I don't see the logic behind abandoning encryption entirely rather than tolerate some level of self-signed certificates.
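
To make "trivial to produce" concrete, here is a rough sketch with the Python cryptography package (the hostname and one-year validity are just illustrative assumptions, and this assumes a reasonably recent version of that package):

  # Sketch only: generate a private key and a self-signed certificate.
  import datetime
  from cryptography import x509
  from cryptography.x509.oid import NameOID
  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import rsa

  key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  # Self-signed: subject and issuer are the same name.
  name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.com")])
  now = datetime.datetime.utcnow()
  cert = (
      x509.CertificateBuilder()
      .subject_name(name)
      .issuer_name(name)
      .public_key(key.public_key())
      .serial_number(x509.random_serial_number())
      .not_valid_before(now)
      .not_valid_after(now + datetime.timedelta(days=365))
      .sign(key, hashes.SHA256())
  )

  # Write out the key and certificate in PEM form for the web server.
  with open("key.pem", "wb") as f:
      f.write(key.private_bytes(
          serialization.Encoding.PEM,
          serialization.PrivateFormat.TraditionalOpenSSL,
          serialization.NoEncryption(),
      ))
  with open("cert.pem", "wb") as f:
      f.write(cert.public_bytes(serialization.Encoding.PEM))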

> Limited choices.

Arguably the whole CA system is in need of serious reform. This is orthogonal to HTTP/2 and TLS.

> Encrypted browsing is simply unnecessary for certain classes of sites (non-logon, informational sites).

I could not disagree more. Encryption is--from an efficiency standpoint--literally just some bit-twiddling (after key exchange). The efficiency cost is trivial, the social benefits are enormous. It's very much worth it.

> It increases the barrier for entry to put a basic website up.

This is one advantage of baking it into the protocol. If the HTTP/2 implementation takes care of the details of encryption (including certificate generation if necessary), website authors don't need to be concerned with it. Everything will just work.

> Internal network communication becomes a lot harder with self-signed certificates.

As stated above, certificates are not barriers. If your network is "trusted", then why would you care if the certificates are self-signed? You are arguing instead to not use encryption at all! Do you see how backward this is?


>why would you care if the certificates are self-signed? You are arguing instead to not use encryption at all! Do you see how backward this is?

(I've chopped your quote a bit so that it applies to the internet in general rather than just a trusted network, which is what I think you're arguing for. Correct me if I'm wrong).

I do see how backward it is, but I don't think we have good enough solutions in place to protect against impersonated TLS. Users currently expect "safe and secure" from HTTPS, which is why there was a push to throw big scary warnings for non-authenticated TLS. Perhaps this is a UI problem, but until users can reliably tell the difference between an authenticated connection and a merely encrypted one, I don't think we're ready for TLS everywhere without authentication.

If it is a problem that can be solved, it can be solved now.

I'm more concerned that my dad will have his banking session impersonated rather than his general browsing sessions snooped on after the fact. I would prefer that neither was possible, but I'm not aware of any good solution.


Actually, I think I've changed my mind after writing this. There can be two and only two states: HTTPS with authentication, and HTTPS without authentication. Authenticated connections can have the little lock symbol; non-authenticated would just look like regular HTTP does now. Then we just need to make self-signed certificates easier to create and manage.


Your local network may be trusted, but self-issued certificates are effectively worthless to regular internet users, because they make MITM attacks trivial. This is the reason they have huge scary warnings.

Encryption without authentication and trust defeats passive snooping but does absolutely nothing to protect against MITM, which is a real and prevalent threat.

Setting HTTP/2 to encrypt everything but removing the big warnings that appear when trusted authenticity cannot be established would be a net security downgrade, because the lay user will trust their connection when they ought not.


> Encryption without authentication and trust defeats passive snooping but does absolutely nothing to protect against MITM, which is a real and prevalent threat.

Sure. But non-encryption leaves you vulnerable to both passive snooping and MITM.

> Setting HTTP/2 to encrypt everything but removing the big warnings that appear when trusted authenticity cannot be established would be a net security downgrade, because the lay user will trust their connection when they ought not.

How about only showing the padlock for certificated sites, and requiring a certificate when a secure request is made (http2s:// ?), but allowing encryption with a self-signed certificate for "insecure" URLs (http2://) and just not showing the padlock?


Exactly. An analogy is with password salting: you don't salt passwords because it makes the hashes unbreakable, you do it to increase the amount of work attackers have to do to successfully recover the passwords.

Even without verified certificates, mandatory TLS would require a MITM attacker to be present at the time of the connection (not passively record traffic and attack later) for every connection they wanted to snoop on. For a stateless protocol like HTTP, that's a massive amount of work to do if you want to snoop on any large scale. The increase in work required is probably even greater (relatively) than the increase achieved in the example of password salting (which is considered a security no-brainer)!


You're conflating TLS and the current CA model. I'm all for TLS-everywhere and ditching the CA model (no, there is no easy alternative to the model we have come to build and accept; it will have to be gradually phased out).

Here's how the security warnings are in current browsers:

- HTTP: no warning

- untrusted HTTPS: warning

- trusted HTTPS: no warning

This is dumb. We explicitly say that untrusted HTTPS is worse than raw HTTP, even though it preserves more privacy at very little cost. We should swap the first two levels.
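
Concretely, the swap I'm proposing is something like this (a rough sketch with hypothetical flags, not any browser's actual security-state machinery):

  # Proposed ordering: raw HTTP becomes the worst case, untrusted HTTPS
  # is treated like plain HTTP is today, trusted HTTPS keeps the padlock.
  def warning_for(encrypted, cert_trusted):
      if not encrypted:
          return "warning"
      if not cert_trusted:
          return "no warning, no padlock"
      return "no warning, padlock"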


That would be totally valid if the CA industry wasn't so terrible. But given that encryption without authentication and trust is more or less theatre, we need a solid replacement for the current CA scene before mandating TLS is sane.


Disagree; purely informational websites have no need for TLS.


The need for TLS is lower for purely informational websites, but there are a number of problems with this attitude. For one thing, the content of the pages you visit still leaks information about you, even if the page isn't customized per user.

Second, authenticated, trustworthy SSL connections provide MITM protection and prevent modification of the content you receive - the transition from HTTP to HTTPS is generally going to be vulnerable to sslstrip-like attacks, homoglyph attacks, etc., and content can be modified maliciously in transit over a plain HTTP connection (a less drastic form of this has happened many times, with ISPs injecting tracking cookies in transit, or Comcast adding JavaScript to pages when you get close to your bandwidth limit).

In any case, the cost of all-TLS is really not so high that it should be reserved for the edge cases who want or need secure connections for all of their internet communication.


On the other hand, what's the problem with using TLS in cases where you don't think it's useful? I can see a few reasons, but none of them seems a good enough reason not to use TLS everywhere:

- TLS is expensive: my gut says this is wrong, but I'd love to see some numbers. Ilya Grigorik [0] has done some experiments here, and the overhead doesn't look that bad to me.

- TLS is complicated: true, and we have to rely on tried and tested implementations. But I'd say that's needed for whatever security mechanism we use (and we want security, right?)

- TLS requires certificates from the flawed CA infrastructure we have: wrong; public-key authentication isn't even the only authentication scheme possible with TLS, it's just the first one we think about (and also the most tested one).

Do you have other counter-arguments?

[0] https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-...


Those two reasons are enough to me. The web is supposed to be for everyone. A small restaurant that just has directions and a menu on their website shouldn't have to deal with the headache of setting up HTTPS.


If all HTTP connections enforced TLS, how would there be any extra burden?


1) There are regimes that would lock you up (or worse) for looking at some "purely informational" sites.

2) Defaults matter and developers make mistakes. Without required TLS, many sites that should be encrypted won't be (they forgot, the site grew into something it didn't use to be, they had no idea what TLS was, etc.)


Ruining the web in order to prevent any possibility of bad things happening is something I'm firmly against. I have several small blogs and websites that I absolutely would not bother with if I had to pay for and set up https on each.


Myself, I like the idea of opportunistic encryption. Why not require TLS for HTTP/2 but not require authentication for http:// URLs?


It's hard to see what benefit that would offer. Non-authenticated TLS is trivially vulnerable to MITM attacks. This is especially the case because I can't foresee a situation in which a website would put the effort in to implement opportunistic TLS, but not implement straightforward authenticated HTTPS…


It completely prevents passive surveillance, however. Sure, you can MITM, but the point is that now, to look at anything, you have to mount an active MITM attack, unlike today, where most traffic is unencrypted and can be surveilled passively. This makes surveillance more difficult. Net gain for everyone.


I'll concede that passive surveillance would not be possible. However, I think that benefit is marginal - AFAIK, unencrypted support in HTTP2 is optional and won't be supported by browsers in any case.


> Non-authenticated TLS is trivially vulnerable to MITM attacks.

I don't think a well-implemented TOFU/POP policy would be "trivially vulnerable", but it would still accommodate self-signing. Standardizing this would have been a worthy goal for IETF.
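
For what I mean by TOFU, here's a rough sketch in Python (the pin file and bare SHA-256 fingerprint are simplified assumptions; a real policy also needs expiry, key rotation, and a recovery UI):

  # Trust-on-first-use: pin the certificate fingerprint the first time a
  # host is seen, and flag any later change as suspicious.
  import hashlib, json, os

  PIN_FILE = os.path.expanduser("~/.tofu_pins.json")

  def tofu_check(host, der_cert):
      fingerprint = hashlib.sha256(der_cert).hexdigest()
      pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
      if host not in pins:
          pins[host] = fingerprint  # first use: trust and remember the pin
          with open(PIN_FILE, "w") as f:
              json.dump(pins, f)
          return True
      return pins[host] == fingerprint  # afterwards: must match the pin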


I'd argue that even if it is MITM-vulnerable, it's still very useful, as it makes passive surveillance impossible.


This article has a great response to that argument: https://www.tbray.org/ongoing/When/201x/2014/07/28/Privacy-E...



