There is so much open wifi nowadays that non-https should really start to be considered harmful. A large portion of website visitors are probably connecting via Starbucks, airport wifi, etc., which means their session cookies are basically public information.
So even given the mass surveillance problems, non-https connections need to start being treated as Bad Practice and discouraged by the sysadmin community.
If only it were that simple. As it stands now, the problem with HTTP/HTTPS is that encryption is an all-or-nothing proposition. There should be three modes supported:
1. Unencrypted (for casual/ad-hoc web servers serving cache-able static assets)
2. Encrypted-untrusted (for casual/ad-hoc web servers that just want to thwart basic Wi-Fi cookie sniffing)
3. Encrypted-trusted (for most established websites)
Right now, #2 isn't a viable option, because HTTPS servers with self-signed certificates are correctly treated as dangerous. A URL that designates the HTTPS protocol must always require a trusted connection.
The solution is (relatively) simple: HTTP 2.0 should allow encrypted-untrusted connections for URLs designated with the HTTP protocol.
Exactly: I believe that's actually the plan from browser-makers, that the "S" designates not whether you want to use encryption, but whether the encryption will be required to be trusted. We'll see what actually happens.
Going forward, hopefully TLS 1.3 or a later version will be able to encrypt enough of the exchange that a passive sniffer can't tell the difference between #2 and #3. Ideally they won't be able to see the certificate, or there'll be a way to negotiate anonymous DH as a minimum that an attacker cannot observe (without that in place right now, I've argued against aDH remaining in TLS 1.3).
Is encrypted-untrusted really a good idea? MITM attacks over wifi are nearly as easy as cookie sniffing. What sort of sites would fit into this category?
Absolutely everywhere you use HTTP now. It's never worse than plaintext, and it's an effective replacement for it.
Encrypted-untrusted prevents mass passive surveillance, and requires it to be active MITM. And - as I've pointed out on the TLS WG list a few times - if an attacker can't tell you're being opportunistic, they might not be willing to risk being detected. We can make them take that risk by reducing the visible distinguishers there.
Sorry, maybe I wasn't clear... With encrypted-trusted, if you try an MITM without a valid certificate, the browser will show a big error screen and not send any cookies. With encrypted-untrusted the browser wouldn't do that, so the cookies would still be sent. You don't even need to route the connection; you could just present an "Error connecting to server" page and most users would just think the wifi isn't working.
My question is: given that, what is the purpose of encrypted-untrusted? It's no more secure than HTTP for anything that uses session cookies or the like. Sure, the connection is encrypted, but all you are doing is stopping people from seeing that you are watching My Little Pony shows on YouTube. If it's a public hotspot at the coffee shop, they can probably see you watching it (IRL) anyway. Given the connection is untrusted, it wouldn't be hard for the NSA to do a MITM attack at the ISP level.
Thanks, that makes sense. As long as people understand that although it's encrypted it's still not really safe. That could be the hard part though, as I expect a lot of people assume anything encrypted is safe (related, see malicious SHA1 - https://news.ycombinator.com/item?id=8136526).
Here's one extremely important use-case for #2: router/server admin interfaces. For most people it's out of the question to deal with certificates here, but the advantages of end-to-end encryption are still obvious.
That's just one example, of course. There are countless others.
#3 can't be spun up on an ad-hoc basis. It requires a domain name and a signed certificate. This costs time and/or money, and simply isn't feasible for the smallest websites.
#2 can allow the most simple web server instance created in five minutes to be guarded against passive snooping by governments and hackers.
For this to work we need free certificates. Many still keep referring to StartSSL, but there you just pay double later on in revocation fees. https://www.startssl.com/?app=43
What, your domain doesn't have DNSSEC because your registrar doesn't offer it? They'll have to offer it next year, if they want to renew their contract with ICANN...
Everyone says that, but revocation doesn't even work anyway.
- Any time a certificate could be impersonated by an attacker, the attacker is also able to block a revocation check.
- Live revocation checks (A) kill performance, and (B) leak information as a side-channel.
- You could solve this by predownloading every single revocation ever. But that is massive (infeasible for mobile), never going to be up to date, and leaks information about private domains.
See Adam Langley's blog for more about OCSP vs CRLsets.
No, I don't think we need free certificates, I think we need people who can carefully evaluate an untrusted certificate and make their own decision. The whole idea of a CA is pretty broken at current scale. Meanwhile, self-signed certificates are seen as a red flag. Should they be?
Opportunistic encryption of all sessions doesn't require any certificates. All that's needed is a key exchange algorithm and browser support. The UI would make it clear when a certificate was presented and validated, but no cert is needed for unauthenticated encrypted connections.
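The key exchange mentioned above can be sketched in a few lines. This is a toy finite-field Diffie-Hellman run showing how two parties agree on a secret with no certificate involved; the parameters and names are purely illustrative (a real deployment would use a standardized group such as one from RFC 3526, or elliptic-curve DH inside a TLS library):

```python
import secrets
import hashlib

# Toy parameters for illustration only -- far too small for real security.
P = 2**127 - 1   # a prime modulus
G = 3            # generator

def dh_keypair():
    """Pick a random private exponent and compute the public value."""
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

# Browser and server each generate an ephemeral keypair...
b_priv, b_pub = dh_keypair()
s_priv, s_pub = dh_keypair()

# ...exchange only the public values, then derive the same secret.
browser_secret = pow(s_pub, b_priv, P)
server_secret = pow(b_pub, s_priv, P)
assert browser_secret == server_secret

# Hash the shared secret down to a symmetric session key.
session_key = hashlib.sha256(browser_secret.to_bytes(16, "big")).digest()
```

Note that nothing here verifies who is on the other end, which is exactly why this defeats passive sniffing but not an active MITM.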
The next step up from that would be unsigned public keys with pinning (like ssh) to reduce MITM, and finally above that would be identity verification on the level of https.
Which is why the pinning bit comes in. If I connect to server X at home, get their self-signed cert and then connect later at starbucks on a MITM'd connection, I will know something is wrong.
Best solution? Not by a long shot, but it is a reasonably simple one to implement and does improve things.
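The trust-on-first-use check really is simple to implement. A minimal sketch (function and store names are made up here) that pins a SHA-256 fingerprint of the server's certificate on first contact and flags any later change, the way ssh handles known_hosts:

```python
import hashlib

def check_pin(host: str, cert_der: bytes, pin_store: dict) -> str:
    """Trust-on-first-use: pin a cert fingerprint per host, like ssh."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    if host not in pin_store:
        pin_store[host] = fingerprint     # first contact: remember it
        return "pinned"
    if pin_store[host] == fingerprint:
        return "ok"                       # same cert as before
    # Either the server rotated its keys or someone is in the middle --
    # the client cannot tell which; that ambiguity is the tradeoff
    # discussed in this thread.
    return "MISMATCH"

store = {}
assert check_pin("example.com", b"cert-A", store) == "pinned"
assert check_pin("example.com", b"cert-A", store) == "ok"
assert check_pin("example.com", b"cert-B", store) == "MISMATCH"
```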
You need to complete that sentence before it is a real suggestion: "and either the keys were changed or there is an active MITM. The browser then ..".
It would not be acceptable to trade the ability to change SSL keys for a trust-on-first-use model. There are solutions to this of course, but also tradeoffs. These need to be taken into account.
> You don't have to care about MITM to benefit from opportunistic encryption.
How? An encrypted connection subject to MITM is as secure as a non-encrypted one. Attackers will stop listening for plaintext credentials at WiFi spots and will just launch their favorite HTTPS MITM tool instead.
Certificate pinning is flawed because you cannot revoke certificates from your clients. How would you distinguish a MITM attack from a certificate change? And if you can't revoke certificates, what happens after an attacker gains access to the private keys? Definitely flawed model.
HTTPS without identity verification is even worse than HTTP. IMHO a false sense of security is more dangerous than no security. Self-signed certificates must not be advertised as secure by browsers (the famous lock icon) because they are not! This would render them invisible for most users.
Self-signed certificates are only viable for tech-savvy users, and even then the dangers are too many and the burden too heavy (did the server change certs, or am I being subject to a MITM attack?) for them to be useful.
MITM isn't the only threat to Internet users. I can see three levels of security, each of which provides more security for users but less convenience for server operators:
1. Opportunistic encryption without certificates or identity pinning -- protects against dragnet surveillance and packet sniffing, but not MITM.
2. Encryption with pinned self-signed certs -- protects against second connection MITM, but not stolen certs without some revocation design.
3. Encryption and identity verification with CA- or WoT-signed certs -- protects against first connection MITM except by powerful adversaries.
How can you (or an 'average user', whoever that might be) carefully evaluate untrusted certificates? Wouldn't I need some kind of detached information, like the certificate's hash signed by an already-known GPG key?
The CA idea is broken but not too easy to replace. http://convergence.io/ could be a few steps forward.
You could have a URL format that includes the signature of the key. Big sites would transparently add the signature and the browser would give a warning if the signature is different. You could add the same signature information to cookies and warn the user if it changes.
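To make the idea concrete, here is one way such a URL format could look. Everything here is invented for illustration (no browser implements this): the URL fragment carries an expected key hash, and the client refuses keys that don't match it:

```python
import hashlib
from urllib.parse import urlsplit

# Hypothetical scheme: the fragment carries the expected key hash,
# e.g. https://example.com/#pin=sha256:abcd...  (format made up here).
def key_matches_url(url: str, server_key: bytes) -> bool:
    frag = urlsplit(url).fragment
    if not frag.startswith("pin=sha256:"):
        return True   # no pin embedded in the URL: nothing to check
    expected = frag[len("pin=sha256:"):]
    return hashlib.sha256(server_key).hexdigest() == expected

key = b"server-public-key"
pinned = "https://example.com/#pin=sha256:" + hashlib.sha256(key).hexdigest()
assert key_matches_url(pinned, key)
assert not key_matches_url(pinned, b"attacker-key")
```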
There is a big tradeoff here: The ability to enter URLs manually.
This is what we tell users to do today for important sites, and not click in links in mail or elsewhere. It is not obvious that there is a net benefit in security.
> How can you (or an 'average user' whoever that might be) carefully evaluate untrusted certificates?
Well, you decide whether you want to trust the person on the certificate, and if the security ever changes again that'd be the red flag. Initial trust is always difficult.
I completely agree, but what of the trouble of getting certificates? I have a hard time figuring out who to trust - the cheaper providers seem scammy and the expensive ones seem to be charging a lot. Self signed certs issue a big fat warning that deters users.
The cheaper ones aren't scammy - quite frankly it is the expensive ones that are the scammy ones!
I have used cheapssl.com (now ssls.com) for a number of years, probably a total of 20 sites, without a single issue.
That being said, namecheap or gandi.net also both offer reasonably priced certs.
Just so you are aware, if you think the $1,000,000 insurance policy offered with the more expensive providers is worth _anything_, be sure to read the fine print first.
The only thing of potential value from the more expensive providers is an EV cert to get the green URL bar. Not that it actually provides even a little bit more protection (it doesn't), but users see it and have been conditioned to immediately associate it with trust.
Devil's Advocate: there is no world wide cabal of wifi hackers trying to steal the Facebook login cookies of starbucks customers. The lack of security is real, but the dangers are overblown.
Currently on the front page is an article[1] about Russian malware pen testing for SQL injection. Your attack surface increases dramatically when you have valid session cookies.
A good example of this is reddit, which is not https for logged-in users with write ability (and thus PostgreSQL write ability). If you have malware that sniffs public wifi traffic for reddit session cookies, you can easily start to test all those write calls for exploits.
> Devil's Advocate: there is no world wide cabal of wifi hackers trying to steal the Facebook login cookies of starbucks customers. The lack of security is real, but the dangers are overblown.
Perhaps not, but consider for example that the Starbucks near people with valuable IP (e.g., many Starbucks in Silicon Valley, in NYC, in Washington DC, in Redmond WA, in Cambridge MA, in Beverly Hills, in the Hamptons, etc.) would be valuable targets.
A valuable target will always get hacked because they'll be specifically targeted. For most of them it'll be from spearphishing, but there's a multitude of ways to attack people even if https is the default. All well known celebrity hacks are done either by spearphishing or breaking into an account independent of contact with the person.
And in general it's probably not a good idea to set world-wide standards on the most used high-level protocol in the world based on a few rich people who use the net from an insecure device.
> A valuable target will always get hacked because they'll be specifically targeted. For most of them it'll be from spearphishing, but there's a multitude of ways to attack people even if https is the default.
Yes, there are other vulnerabilities, but that's not a reason to fail to protect against this one (taking that reasoning to an extreme, all security is pointless because there always are other vulnerabilities). The purpose of security is to increase the cost of a successful attack; https has a high ROI in many cases. In the example you give, spear-phishing is much more expensive than sniffing wifi.
At the very least, regular http should support starttls to allow encryption. That at least requires your attacker to go to the trouble of man in the middle.
STARTTLS mixes concerns in a very bad way and is a horrible hack. Let it die with FTP and SMTP.
Really, TLS should just be a mandatory part of the protocol. I haven't read any convincing reason why it isn't, other than vague hand-waving, eg, "we're just a standards body and can't enforce policy".
Faced with a choice of obtaining a TLS certificate or just going for the old HTTP/1.x protocol, it'd be a hard sell for HTTP/2 for any one-off, quick hack experiment/microsite/internal APIs...
The problem is browsers are mostly lousy at giving a good user experience with self-signed certificates. 99% of the time, self-signed doesn't matter except if the certificate changes unexpectedly - i.e. the service just isn't that important.
The 1% of sites it does matter for are things like banks and the like, where you need to hammer into users heads that certificates should be valid via other means.
Of course, it's casually accepted that it's a-ok for companies to MitM their employees encrypted connections anyway, so I don't really know where that leaves us.
STARTTLS has risks over and above pure use of TLS - in particular where it's been used in IMAP and POP3, an attacker injecting plaintext commands before STARTTLS/STLS occurs, tricking an early (plaintext) login and other such shenanigans.
This is why the SMTPS port is being resurrected, and STARTTLS-type upgrades are not considered good practice in future - port assignments in future are likely to take that into account.
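The plaintext-injection flaw is easy to see in a toy model. This simulation (a simplified pseudo-protocol, not real IMAP/POP3 code) shows a naive server that keeps processing its input buffer across the TLS upgrade, so a command an attacker appended in plaintext "survives" into the supposedly secure session; the fix is to discard any already-buffered bytes when STARTTLS is processed:

```python
# Toy simulation of STARTTLS plaintext injection. Real vulnerable servers
# made exactly this mistake: commands received before the upgrade stayed
# in the buffer and were executed after it.

def naive_server(wire_data: bytes):
    executed_after_tls = []
    tls_active = False
    for cmd in wire_data.split(b"\r\n"):
        if cmd == b"STARTTLS":
            tls_active = True   # upgrade -- but the buffer is NOT flushed
            continue
        if tls_active and cmd:
            executed_after_tls.append(cmd)
    return executed_after_tls

# The attacker appends an extra command to the client's plaintext STARTTLS.
injected = b"STARTTLS\r\nDELETE INBOX\r\n"
assert naive_server(injected) == [b"DELETE INBOX"]
```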