Explicit Trusted Proxy in HTTP/2.0 (ietf.org)
109 points by rdlowrey on Feb 25, 2014 | 45 comments



Wasn't this already discussed on Hacker News, in quite some detail, yesterday? And wasn't the big revelation that this only applied to traffic that was not CA verified and thereby was inherently man-in-the-middle-attackable anyway (as the actually-secure https connections are marked in a way where this feature does not apply), making this a misunderstanding?


I thought the whole point of HTTP/2.0 was to make traffic encrypted by default, not to leave big vulnerability holes like this in the protocol. Saying "it just keeps things as they were before" doesn't make me feel better.

Why are we moving to HTTP/2.0 otherwise? For a 5 percent increase in speed? The big selling point of HTTP/2.0, from my perspective, was the "always-on encryption".


You're referring, I think, to opportunistic encryption. As Brad Hill pointed out yesterday: the security situation with HTTP/2.0 opportunistic encryption is analogous to that of OS X with the SecureTransport TLS validation bug. In neither case is "encryption" any more than cosmetic.


Like running ssh without checking fingerprints is analogous to running telnet.

I for one used to think like that. I had an A4 paper with all my computers' fingerprints in my pocket and painstakingly checked it every time I was at a new computer. At my university, studying in the computer security program, I think I was the only person checking fingerprints. Not even the system administrators did it.

I guess in practice, ssh today is nothing more than cosmetic compared to telnet.


If you use SSH and ignore key fingerprint warnings then yes, your use of SSH is cosmetic. Competent operators freak out when they get an unexpected key warning.
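
For reference, that check is easy to script: the fingerprint in OpenSSH's warning is just base64(SHA256(host key)). A rough Python sketch, using the third-party paramiko library and a placeholder host name:

    import base64
    import hashlib

    import paramiko  # third-party: pip install paramiko

    # Fetch the server's host key over the SSH transport.
    transport = paramiko.Transport(("host.example.com", 22))
    transport.start_client()
    host_key = transport.get_remote_server_key()
    transport.close()

    # OpenSSH-style fingerprint: base64(SHA256(key)) with padding stripped.
    digest = hashlib.sha256(host_key.asbytes()).digest()
    print("SHA256:" + base64.b64encode(digest).rstrip(b"=").decode())

Compare the output against the fingerprint you recorded out of band.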

I don't understand the comparison you're trying to make between SSH and a proposal to transparently MITM a protocol that is designed to be transparently MITM'd. Unless your gripe is that we shouldn't have protocols like that to begin with, in which case I agree, but you should direct your angst to the people who proposed HTTP/2.0 OE, not this proposal.


I have no gripe, since I no longer consider that better-than-nothing security to be bad.

In return for using ssh over telnet, I get security against any passive attack, and against attacks after the first login. Thus the functionality is, on a technical basis, superior to telnet (except if you use IPsec, in which case telnet is better than SSH).

A personal question: when you set up a new personal laptop or server, do you check the fingerprint of every ssh connection? Do you prune the CA list and remove every entry whose trustworthiness you can't personally vouch for? This is, after all, what SSL requires of each user, so it would be interesting to know if a founder of a software security company does this on his own personal equipment.
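
To put a number on that CA question: Python's ssl module will dump whatever trust store it picked up from the OS, which shows how many roots you're implicitly vouching for. A quick sketch; what gets loaded varies by platform:

    import ssl

    # create_default_context() loads the system trust store on most platforms.
    ctx = ssl.create_default_context()
    cas = ctx.get_ca_certs()
    print(len(cas), "trusted root CAs")
    for ca in cas:
        # "subject" is a tuple of single-item RDN tuples; flatten it.
        subject = dict(rdn[0] for rdn in ca["subject"])
        print(subject.get("organizationName"), "/", subject.get("commonName"))

On a stock install the count is typically well over a hundred.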


No, I copy over my SSH configuration so that I don't need to do that.


How can you securely copy over the configuration? This sounds like a chicken-and-egg problem.


I think the problem is that always-on encryption is always-off caching...


I call that a feature. I'm way past sick of flaky transparent proxies with bad caching behavior.


That's not true; your browser still handles caching just fine. More bandwidth trumps caching anyway, and caching forward proxies will be a thing of the past.


Heh, bandwidth can help, but caching helps primarily with latency.


The latency benefits of an extra caching layer are minimal, especially since HTTP pipelining and CDNs will continue to exist.


Judging from http://hillbrad.typepad.com/blog/2014/02/trusted-proxies-and... it looks like opportunistic encryption is not meant to convey security, but rather add obscurity to in-transit plain-text traffic from the perspective of any in-between listeners (and he has no qualms pointing the finger at governments there).

Sadly he now seems to have changed his mind about the validity of this approach, mostly because users and devs alike dislike complexity in their decision process as to what is secure and what is merely obscured.


It should be discussed as long as it's new or interesting to someone, so people can form their own opinion of a potentially important security issue. Your statement is suspiciously close to trying to force a consensus opinion that there is nothing to see here. While I give you the benefit of the doubt, this type of steering of online discussion benefits those who would like to weaken internet security. It might help to link to the previous discussion as well.


Given that I summarized the pretty-simple point from the previous conversation, if you really think that's wrong you should say why. The Internet has this core problem that people don't learn from discussions and then keep saying things that are clearly, obviously, and even trivially false, giving emotional weight to ideas that actively harm the understanding of others. I mean: look at yourself... in addition to not responding to the specific factual detail I laid out, you are refusing to even take thirty seconds to find the previous discussion yourself... this is work you should have done before ever coming here to post something on this subject... it is the basic, core due diligence that I would argue is not just a courtesy but a responsibility for those who join online discourse :/. This "you may have said something factual, but damn it I like being angry at people for no reason, so I'm going to ignore what you said and pull the 'you are trying to steer the conversation' card" is ridiculous... facts should steer conversations :(.


Yes.


A possible objection to the proposal is that those who opt not to use "trusted" proxies will be making themselves more visible, like users of Tor are more visible.


As discussed yesterday, this is not a new MITM vulnerability. To make this work you need to establish a TLS connection to the proxy which is verified in the usual certificate authority way. Note that the standard says that user agents that discover they're talking to a trusted proxy should obtain user consent to talk to that proxy.
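
To make "verified in the usual certificate authority way" concrete: the client-to-proxy leg is an ordinary TLS handshake with chain and hostname validation. A minimal Python sketch (proxy.example.net is a placeholder):

    import socket
    import ssl

    ctx = ssl.create_default_context()  # system CAs, hostname checking on

    # The handshake fails unless the proxy presents a certificate that
    # chains to a trusted root and matches the name, the same bar any
    # HTTPS origin has to clear.
    with socket.create_connection(("proxy.example.net", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="proxy.example.net") as tls:
            print(tls.version(), tls.getpeercert()["subject"])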

Any situation in which someone can force your machine to trust one of these proxies is a situation when they had administrator access to your machine anyway, and in that situation you're already screwed.

Would it kill HN to actually read one of these specs instead of just whining about it?


I don't really care to argue this point, so I'll just explain why I find this extremely problematic. What percentage of browser users have any concept of how TLS works? An exceedingly low number. You're essentially creating a dragnet to capture and decrypt the contents of transfers for a huge number of people who likely have no idea that they're volunteering their (sensitive) information. Browser users are not TLS experts. They will click right through warnings without a second thought. No, this standard doesn't harm the very small minority of people capable of protecting themselves. It only takes advantage of everyone else. That is why, to me, dismissing this offhand as no big deal is seriously negligent. Yes, I've read the draft. Yes, I have the technical experience and qualifications to fully understand what it proposes. And yes, I believe this is an egregious thing to propose.


The TrustedProxy standard specifically says that it must not be invoked for HTTPS URIs. TrustedProxy doesn't interact at all with TLS as it's understood now.


That's a totally understandable fear. Personally, I trust the ability of user-agents to help users make informed decisions in this area, but I can understand why you don't. Nevertheless, even with this proposal HTTP/2.0 will be substantially more secure than HTTP/1.1 is, at least in the aggregate.

It's also worth noting that this is a proposal. You didn't actually make this mistake yourself but I do want to highlight it: the HTTP WG is not yet discussing this as anything more than a suggestion (see http://lists.w3.org/Archives/Public/ietf-http-wg/2014JanMar/... ). If you are worried about this sort of proposal becoming a draft, I highly recommend you join the working group and keep an eye on the proxy discussions.


The same thing is possible today by getting users to install a new CA and maybe configuring a proxy for them. It doesn't seem like these proposals would make this significantly easier.


This simply means that phone/tablet manufacturers together with carriers will pre-install and trust the proxy certificates of the carrier, without any end user consent.

This will easily allow the carriers to perform their duty of Lawful Interception.


This is not a new security hole. Carriers can do this today and transparently MITM all current HTTPS traffic: no new risk is present.


Sure it is. The old way, you either:

* get an alert when going to e.g. www.google.com, as the carrier tries to hijack the session with a fake certificate;

* the carrier has its own versions of libraries/browsers/etc. installed on the phone that disable the certificate checks/alerts that would pop up when it hijacks a session; or

* the carrier has actually gotten hold of a certificate for www.google.com (which I'm sure is doable, but harder) and is thus able to perform a successful MITM attack.

With this approach, the carrier just needs to generate its own certificate, install it on the phones it sells, and can then proxy any service it wants without user alerts. A significantly lower bar to entry.


The "old way" is actually to install a new CA on the device. Then the proxy can just dynamically create dummy certificates signed by that CA.

This is simple on the client and avoids any security warnings. It's supported in quite a few firewalls and even Squid, so it would be very easy for a carrier to roll out tomorrow if they needed to.
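
For anyone wondering what "dynamically create dummy certificates" looks like in practice, it's roughly the following. A sketch with the third-party cryptography library; ca_cert and ca_key stand in for the CA that was pushed onto the device:

    from datetime import datetime, timedelta

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    def mint_leaf(hostname, ca_cert, ca_key):
        # Fresh key for the forged leaf certificate.
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
        cert = (
            x509.CertificateBuilder()
            .subject_name(subject)
            .issuer_name(ca_cert.subject)  # signed by the installed CA
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.utcnow())
            .not_valid_after(datetime.utcnow() + timedelta(days=1))
            .add_extension(
                x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                critical=False,
            )
            .sign(ca_key, hashes.SHA256())
        )
        return cert, key

The proxy mints one of these per hostname, caches it, and the client sees a chain that validates cleanly against the CA it was told to trust.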


Only with SIM-locked phones from specific providers, I presume. Otherwise cert pinning will alert pretty quickly.
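
And that alert would come from a check along these lines: hash the leaf certificate's SubjectPublicKeyInfo and compare it to a baked-in pin. A sketch with the third-party cryptography library; the pin value is hypothetical:

    import base64
    import hashlib
    import ssl

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    EXPECTED_PIN = "base64-sha256-of-real-spki"  # hypothetical baked-in value

    def spki_pin(hostname, port=443):
        # Fetch the leaf certificate and hash its SubjectPublicKeyInfo.
        pem = ssl.get_server_certificate((hostname, port))
        cert = x509.load_pem_x509_certificate(pem.encode())
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

    # A carrier-minted certificate carries a different public key, so the
    # pin mismatches even though the chain validates against the carrier CA.
    if spki_pin("www.google.com") != EXPECTED_PIN:
        raise ssl.SSLError("SPKI pin mismatch, possible MITM")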


The problem is that this feature can (and will) be used as a legal backdoor for ISPs to snoop on traffic, simple as that.

Even if they don't want to, they will probably have to after a subpoena. Whereas if you don't implement this and other backdoors in the protocol, you won't be able to do it. At least not transparently.


No, it can't. You're arguing very confidently about a proposal you haven't even read. What you've instead done is take the headlines about the proposal at face value, and then construct an argument by reasoning about what a global TLS MITM would mean for the Internet.


Actually I did, here are the relevant parts[1]:

users should be made aware that, different than end-to-end HTTPS, the achievable security level is now also dependent on the security features/capabilities of the proxy as to what cipher suites it supports, which root CA certificates it trusts, how it checks certificate revocation status, etc. Users should also be made aware that the proxy has visibility to the actual content they exchange with Web servers, including personal and sensitive information.

Now the question is, did you???

I've seen the link[2] you posted, and I didn't find ANYWHERE the part where it specifically talks about HTTP and not HTTPS. There's even the passage above, which makes things even more complicated...

[1] http://tools.ietf.org/html/draft-loreto-httpbis-trusted-prox...

[2] http://hillbrad.typepad.com/blog/2014/02/trusted-proxies-and...


Since the entire point of Brad's post is the distinction between http/https as it applies to HTTP/2.0 and specifically TrustedProxy, I call "shenanigans" on the idea that you actually read either of these.


I got what you meant the second time I went through both of the texts, my bad. I thought HTTP/2.0 was about an always-on TLS layer, which turns out to be false. And to be fair, it's hard to work out the exact meaning from the draft, especially since they added the 6th paragraph (quoted above), which in this case doesn't really make sense. If the connection is non-encrypted anyway, why ask the user for permission to tunnel the connection through TLS?

ps. I really did read them both; I just tend to be a little strong in my opinions.


Before people start associating this with actual HTTP/2.0, it is worth emphasizing that this is a separate document. None of this "trusted proxy" MITM nonsense is in the HTTP/2.0 draft: http://datatracker.ietf.org/doc/draft-ietf-httpbis-http2/?in...

Thankfully, it seems fairly unlikely that the trusted proxy thing is going to get anywhere: It serves the interests of Ericsson and AT&T, but not those of the HTTP/2.0 spec authors (who are from Google and Mozilla) or server and browser vendors that will have to implement HTTP/2.0.


Some context: http://lauren.vortex.com/archive/001076.html

"What they propose for the new HTTP/2.0 protocol is nothing short of officially sanctioned snooping."


The post you've linked to is technically inaccurate and highly misleading. Here's Brad Hill's rebuttal: http://hillbrad.typepad.com/blog/2014/02/trusted-proxies-and...


Thanks for the link, it does clarify matters.


I particularly like how the Privacy section is completely blank.


Section 6 (Security Considerations) is truly shocking. And Section 7 (Privacy Considerations)? Whaddya know? It's empty!


In some third-world countries you cannot get a telecom license unless you "implement" this, or your license can easily be revoked or canceled.

In Russia, for example, there are explicit regulations which say that no telecom company can operate unless it provides "monitoring and law-enforcement facilities".

My guess is that every country nowadays has regulations of this sort, so telecom equipment manufacturers are forced to "add required functionality". Of course, the US has such "secret" regulations too.

So it is much better to face reality and standardize this shit, to reduce the pain of telecom "workers".


It is an improvement compared to HTTP/1.1, in that it allows for opportunistic encryption, and it is those connections that can be cached (or, if you prefer, snooped). This will still make it harder for the NSA and similar agencies to do mass surveillance without leaving traces. They would either have to insert their own certificate or get the private key from the ISP. That is far more difficult to do in a covert manner. This alone makes HTTP/2.0 an improvement.


NSA will have the ISP keys, that's a given.


For American ISPs, yes. For ISPs in some allied countries, probably. For all ISPs in every country in the world? Unlikely. And furthermore, that would require a nationwide (or worldwide) scheme where the NSA gathered or issued keypairs for every certificate at every ISP. That is much more expensive than just tapping the lines, which is part of the point here, and some data would probably even be off limits. It would also be hard to keep an operation like that hidden, as they could for many years with the current methods.

I have no illusion that the NSA can be stopped if they target someone, but it should be possible to make it impractical to just tap plaintext from the internet backbone as they do today. If data is generally encrypted _unless_ they mount a MITM attack, it will be too expensive to just collect everything.

This is of course not enough in itself, but it is certainly a step in the right direction.


I understand that HTTP/2.0 needs to address both scalability and security, but the proposed "trusted" proxies smell really bad. Knowing what we know today, namely that the current level of security offered by HTTP/1.1 is barely adequate to protect web citizens from real and present threats, shouldn't we be radically rethinking HTTP security?


This would be an awesome term project for students studying computer security: find the problems in the draft, if there are any.



