I know this may be a bit off-topic, but since we're discussing HTTP/2 already, I can't help but wonder. The recent attacks (BEAST, CRIME, etc.) have all relied on being able to build an oracle from (sent) body content and uncover header elements. Attacks only get better over time, so I expect to see even more attacks like this.
If we're going to overhaul the HTTP spec in any case and go for framed messaging - why not go for separate header/body compression AND separate encryption keys? As far as I understand, that would block the whole family of attacks. Separate compression contexts would prevent the attacker from building an oracle from the body to attack the header. Separate keys and encryption contexts should prevent padding and mode oracles against the header.
I know crypto is hard, and I know there would be devilishly tricky details to figure out. So I have to assume someone has already thought of this, had it vetted for flaws, and then discarded the idea.
But if so, why? Apart from requiring at least twice the key material in negotiation (compute an oversized shared key, then split it into header/body and likely HMAC keys), are there any other obvious or non-obvious technical reasons not to do this?
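To make the separate-contexts half of this concrete, here is a minimal sketch using Python's zlib (my own illustration, not anything from the spec; the separate-keys half is omitted): each side gets its own compressor state, so attacker-controlled body bytes can never prime the dictionary used to compress a header secret.

```python
import zlib

# Hypothetical illustration: two independent DEFLATE contexts, one for
# headers and one for bodies. Nothing fed to the body compressor can
# influence how the header compressor encodes a secret.
header_ctx = zlib.compressobj()
body_ctx = zlib.compressobj()

def compress_header(data: bytes) -> bytes:
    # Z_SYNC_FLUSH emits all pending output while keeping the
    # compression dictionary alive across frames.
    return header_ctx.compress(data) + header_ctx.flush(zlib.Z_SYNC_FLUSH)

def compress_body(data: bytes) -> bytes:
    return body_ctx.compress(data) + body_ctx.flush(zlib.Z_SYNC_FLUSH)

# Attacker-controlled body content...
compress_body(b"Cookie: session=guess0000")
# ...cannot shrink the header frame below, because the contexts are separate.
frame = compress_header(b"Cookie: session=s3cr3tvalue\r\n")
```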
I think the focus on the dichotomy between the header and the body is misguided. If you separate them then an attacker can't attack the header using the body, but that's just inviting the attacker to find a way to insert data into the header to attack other header fields, or find secrets in the body that shouldn't be revealed.
Also, it seems like you could get the same benefit as using a separate key and encryption context by just padding the header to a block boundary.
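A rough sketch of that padding idea (my own illustration; the block size is just an example): pad the serialized header to the next block boundary before encryption, so small changes in header length stop showing up in the ciphertext.

```python
BLOCK = 16  # e.g. the AES block size

def pad_to_block(data: bytes, block: int = BLOCK) -> bytes:
    """Pad with zero bytes up to the next block boundary, so the
    ciphertext length only changes in whole-block steps."""
    shortfall = (-len(data)) % block
    return data + b"\x00" * shortfall

# Headers differing by a few bytes encrypt to the same length:
assert len(pad_to_block(b"Cookie: a=1")) == len(pad_to_block(b"Cookie: a=12345"))
```

Note this only coarsens the leak to block granularity rather than eliminating it, which is the point the next comment makes about length-based attacks generally.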
The real problem with these attacks is that they aren't attacks on the cipher at all. You could be using a one-time pad and the attacks would still be effective. What they really are is an attack on content-revealing efficiency. The only defense is to prevent the length of the message from depending on the existence of matches between attacker-supplied plaintext and secrets, and doing that in a way that allows non-cryptographer web developers to not screw it up basically comes down to this: you have to disable TLS protocol compression. It exists at the wrong layer of abstraction, because the TLS layer doesn't know what data is acceptable to compress against what other data.
But you still want to use compression, so the compression has to exist at a higher level in the stack, i.e. in HTTP or even HTML, where the web developer can specify exactly which data has come from an untrustworthy source so that it won't be compressed against secrets.
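To make the "content-revealing efficiency" point concrete, here is a toy CRIME-style oracle in Python (an illustration I've constructed, not an actual exploit): when attacker-supplied data matches a secret in the same compression context, the compressed output gets shorter, and that length difference is visible on the wire even through perfect encryption.

```python
import zlib

SECRET = b"Cookie: session=s3cr3tvalue1234"

def observed_length(attacker_guess: bytes) -> int:
    # The oracle: the attacker's guess and the secret share one DEFLATE
    # context, as they did under TLS-level compression. The length of the
    # output is visible on the wire no matter how strong the cipher is.
    return len(zlib.compress(attacker_guess + SECRET))

# A guess that matches the secret gets back-referenced away and
# compresses smaller than one that doesn't:
miss = observed_length(b"Cookie: session=zyxwvutsrqponml")
hit = observed_length(b"Cookie: session=s3cr3tvalue1234")
assert hit < miss
# Real attacks like CRIME refine this byte by byte to recover the secret.
```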
One thing that I think is consistently screwed up in security proposals is the question of what to do when a connection might be compromised.
Users invariably click through the big red web page because they still need to use their email at the end of the day, and that page provides no information as to what's going on.
If the certificate doesn't verify, don't give up - try a different route to it (sadly we've taken away source-routing for other security reasons). If that fails then start a Tor session and try connecting through that.
Then show me a little diagram that shows where we think the problem is so I can think about what might be the problem.
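A rough sketch of what that retry logic could look like (my own illustration; the requests library, its SOCKS support, and the default Tor port 9050 are assumptions, not anything from a spec):

```python
import requests  # assumes requests[socks] is installed

TOR_PROXY = {"https": "socks5h://127.0.0.1:9050"}  # default Tor SOCKS port

def fetch_with_fallback(url: str) -> requests.Response:
    try:
        return requests.get(url, timeout=10)
    except requests.exceptions.SSLError:
        # The certificate didn't verify on the direct route; try the same
        # URL over Tor. If the cert verifies there, the direct path is
        # the likely point of interference - which is what the diagram
        # suggested above could then show the user.
        return requests.get(url, proxies=TOR_PROXY, timeout=30)
```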
The problem is the false positive rate. It's like strict parents and car alarms. Car alarms go off without good reason all the time, they're ubiquitous and thus almost completely useless. When was the last time you heard a car alarm and investigated whether a car theft was in progress?
Similarly, a parent who tells their kids that pot will kill them instantly, prevents them from watching PG movies, and so on, is more likely to have their advice completely ignored when there is an opportunity to do so. And then you end up with kids who get into oxycodone, binge drink, and have unprotected sex.
The strictness of browser security is pretty ridiculous and not very helpful to the end user. If I go to Google News on my phone I get half a dozen messages about certificate mismatches, which I have to dismiss individually. It's a dumb system with even worse UI. Except the UI is the foundation on which a sense of trust or alarm is built.
I would like to see the HTTP/2 spec allow plain http traffic, with the requirement that such traffic must be signed. This would allow publicly accessible resources (including javascript) to be cacheable without compromising security. Random thoughts (a sketch of the signing follows the list):
* The signing could be done in headers (HTTP/1.0 compatible)
* Works with all existing internet caches that don't modify pages they cache
* (some proxies inject headers into pages, so the spec should be resistant to this and throw out all unsigned headers)
* Signature must cover cache expiry and full page url (including FQDN, port number etc)
* don't send cookies, user-agent or other identifying details over http 2 (helps with caching, and privacy)
* Could have a requirement that all https 2 traffic can only link to resources on https or http 2
* If the signature in the http 2 message fails, the browser could fall back to https 2
* A redirect could sign the content it is redirecting to (to securely load resources from 3rd party CDN)
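As a sketch of what the signing could look like (everything here is hypothetical: the header name, the choice of Ed25519 via Python's cryptography package, and the exact layout): the signature binds the full URL, the expiry, and the body together, so a cache can serve the response but cannot alter any part of it undetected.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

def signing_input(url: bytes, expires: bytes, body: bytes) -> bytes:
    # Bind the URL (with scheme, FQDN and port), the expiry and the body
    # together so none of them can be swapped out independently.
    return b"\n".join([url, expires, body])

key = Ed25519PrivateKey.generate()

url = b"http://example.com:80/static/app.js"
expires = b"Expires: Thu, 01 Jan 2015 00:00:00 GMT"
body = b"console.log('hello');"

signature = key.sign(signing_input(url, expires, body))

# The server would ship `signature` in a (hypothetical) response header,
# e.g. X-Content-Signature; the browser verifies with the site's public key:
key.public_key().verify(signature, signing_input(url, expires, body))
# verify() raises InvalidSignature if anything was tampered with.
```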
Is it folly to continue with https given that the certificate authority tree of trust has been compromised?
Maybe baby steps are a good option, but I'd like to eventually see all traffic signed with keys stored in a distributed ledger like namecoin. Good on the IETF for moving forward though.
Even if certificate authorities are compromised by major governments, SSL is still enough for now to keep my credit card safe from the shady guy in the back corner of the coffee shop. Or to keep my Google searches private from the techies at my ISP.
There are some upcoming workarounds that will help too, like cert pinning or DANE.
There are issues with trusting CAs in a PKI, but you have to recognize that it is still a good model. What we have to worry about is insiders and protecting contract integrity. There are other ways Google can collect statistics, and there are items Google doesn't need and has absolutely no use for. In the past, people discovered Motorola collecting/logging the HTTP requests you were making, and that is the kind of thing Google should not collect. Protecting against malware installed on Google's servers, or against a hub attached in an AT&T network room, are all necessary things to do.
In any case, if the PKI trust model is broken, then nothing can be trusted. A decentralized network still relies on a degree of trust in the software developers and the hardware engineers.
Yes, we also need to tackle the crazy mess of certificate authorities and their monopoly on trust from browsers, but that doesn't mean encryption is not worthwhile.
It has been trivial to bolt encryption onto things for a long time now. Encryption is worthless without trust. Ubiquitous encryption even more so, because it's no longer a surprise when something is encrypted; it's just expected that you have your snooping client MitM the traffic with dummy certificates.
It doesn't stop 3-letter agencies (though really, people should stop pretending that's where the real threat lies for most people) and it doesn't stop informed hackers or intrusive corporate firewalls.
It doesn't stop any organisation capable of compromising your certification chain.
That might be a TLA going after a root CA. We could debate the ethics of that but I'm not sure it's helpful for this discussion to go over all of that again.
It might be an "intrusive corporate firewall", but if you're doing anything personal and sensitive from work equipment then I'm somewhat lacking in sympathy. No doubt that equipment was provided to help you do your job, and it is quite possible that your employer has statutory and/or regulatory obligations to meet regarding the control and auditing of data, which is probably what any intrusive scanning on the way out is for. As for scanning on the way in, you only have to look at how many idiots open any e-mail attachment even if it's under a neon sign saying "Trojan here!", and how many people try to use things like personal webmail over HTTPS from work computers, to see a strong argument for inspection as far as corporate IT is concerned. (I do think it is in the interests of all concerned for employees to be properly informed that their "encrypted" communications may not be, along with any legitimate justifications that apply. A throwaway line about monitoring communications on page 17 of the contract is not a good way to build trust, IMHO.)
But how exactly is an "informed hacker" going to compromise something like your on-line banking if it's encrypted this way? Probably the biggest practical benefit of current HTTPS implementations is that they do stop Little Johnny Wireshark spying on everyone else in the cafe or Little Johnny ISP monitoring the private communications of his customers. Even if the chain of trust is compromised and someone has the encryption keys, it's still only that one party who can intercept your communications, and you're still guarding against monitoring across the rest of an untrusted network.
Intrusive corporate firewalls MitM SSL sessions because the client is already compromised (IT installs the firewall's cert as a trusted CA). Any solution can't route around a situation where you don't trust the client machine.
Right but that's my point: it doesn't matter if it's encrypted. It matters if it's encrypted AND you've established trust.
I'd go so far as to say that trust is actually more important - whether someone can read my messages matters less than verifying they're what the sender intended to send.
I agree it's really important, but we need both parts of the puzzle, not just one. Out of interest, what would your proposal for a new trust model for server communication be?
Web-of-trust rather than CA-rooted, with more attention paid to who the signers are.
i.e. if I'm using my bank, then what matters to me is whether the bank is certified by my government, not VeriSign or whoever. If it's a foreign bank, then it matters whether their government trusts them, and it matters to me whether MY government trusts them.
If it's NOT a bank, then maybe my trust requirements are different. The UI and the details are everything here, though - I think we've placed way too much import on that little green padlock icon without doing enough to educate users, moment to moment, about what it means.
It is a separate issue as long as HTTP2 does not force any reliance on commercial CAs, or CAs other than the server owner. If it did, it would be the enemy of secure web traffic and would merit being rejected or modified.
But AFAIK, or if I've understood correctly, the proposals for HTTP2 continue allowing "self signed" certs. So all we need is better UI in the browsers, and some public means of verifying the association of a particular cert with a server, over time.
This scheme would reduce the vulnerable area to the case of TLAs coercing secret keys - which is a lot smaller than it is now, with corrupt third-party CAs (and browser UI obstructing attempts to work around that system).
You cannot hide from the 3-letter agency. Your goal should be to prevent trawl-fishing of the traffic. If you make the traffic even slightly more expensive to capture and decrypt, the 3-letter agency will have budgetary problems down the line.
If they want to be in your PC, they will be. You should just make mass surveillance prohibitively expensive.
I feel that while it’s good that we’re encrypting HTTP more and more, there is too much focus on it, and other protocols are being neglected. Most SMTP and instant message traffic, for example, is not encrypted and authenticated.
> HTTP/2 to only be used with https:// URIs on the "open" internet. http:// URIs would continue to use HTTP/1 (and of course it would still be possible for older HTTP/1 clients to still interoperate with https:// URIs).
Seems like the most viable option, and leaves open the option of later implementing TLS Relaxed.
Given the proposal to encrypt everything, I wonder if the architects are going with standard SSL or the extended validation SSL. Will we need to provide full business details and such just to get HTTPS on our websites, or will a simple credit card payment complete the process?
Given the current state of IPsec/IKE, making it "required" is pointless. AFAIK there is no standard for opportunistic session setup, so random hosts don't know how to speak IPsec to each other even if they both implement it.