> Today only 0.5% of HTTPS connections made by Chrome use TLS 1.0 or 1.1.
It's nice to have this stat from Google. A lot of people make wild claims about how widely used TLS 1.0 is based on a glance at their webserver logs. But if you look more closely you'll see that a large percentage of 1.0 traffic is actually just hacked machines scanning for vulnerabilities, filling your logs and using your resources. It's not real traffic, and TLS 1.1 is microscopic in comparison to 1.0. As a percentage, very few sites would be harmed by disabling 1.0/1.1. Most sites can simplify their TLS stack, and shed more bad traffic than good, by going TLS 1.2-only.
So I'm very glad to see Google deprecating and planning to disable 1.0/1.1.
To see the disabling of 1.0/1.1 over time, check out SSL Pulse from SSL Labs (in the "Protocol Support" chart):
https://www.ssllabs.com/ssl-pulse/
TLS 1.0 has been dropping like a rock. Maybe it's too late to be an "early adopter" of 1.2-only but regardless I think tech sites/people should lead the way there.
Note that "connections made _by_ some particular web browser" (in this case Chrome) and "connections made _to_ some web site" (and seen in your logs) are not the same kind of thing.
Both client and server will try to negotiate the best version they and their peer can manage, but the populations of those peers are different so the statistics are not necessarily well correlated.
One problem Chrome doesn't have to worry about, but your site might (especially if its demographics skew towards people with older and possibly out-of-support systems), is outdated web browsers.
Also, I'm going to link the diediedie draft, which has since been superseded by a version with a more corporate name. I was amused that the IETF's rendering script crashed when trying to display it, so rather than link the current one (which renders fine), here's the original, with the funnier name and the crash:
This is important: from a client perspective (e.g. Chrome's), this makes sense.
But from a server perspective, you still have trouble with old clients, particularly Windows XP.
Slowly disappearing but not gone yet.
Fortunately SNI (i.e. not requiring a dedicated IP per SSL customer) is becoming so commonplace that a lot of those browsers are getting broken anyway, which will probably speed up that deprecation.
PCI-DSS standards require companies to drop support for TLS 1.0 (though the deadline did get extended to June 2018), which will probably be one of the more significant driving factors.
Most people just won't bother to change their configurations, short of some kind of external pressure. Support for new protocols will often be driven by "Is it supported out of the box" on appliances, Apache, Nginx etc. shipped by distributions.
edit: HTTP 2.0 support at 31%? That's higher than I expected, and good to see. Wonder how much of that is driven by Cloudflare etc.?
As someone who scans the internet (but not with hacked machines), I try every version of TLS to determine support. If you support older versions I scan more frequently to see when they are turned off.
Feel free to point to this comment as evidence that logs should not be trusted.
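(For anyone curious what such a scan looks like, here's a rough sketch in Python, one handshake per protocol version. The hostname is a placeholder, and note the caveat in the docstring: a modern ssl build may itself refuse to offer 1.0/1.1, so real scanners use a more permissive TLS stack.)

    import socket
    import ssl

    def supported_versions(host, port=443):
        """Probe which TLS versions a server will negotiate, one handshake each.

        A minimal sketch: a version missing from the result only proves what
        *this* client could offer; OpenSSL's security level may block 1.0/1.1
        locally regardless of what the server supports.
        """
        accepted = []
        for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                        ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
            ctx = ssl.create_default_context()
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE  # only the handshake matters here
            ctx.minimum_version = version
            ctx.maximum_version = version    # force exactly one version
            try:
                with socket.create_connection((host, port), timeout=5) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host):
                        accepted.append(version.name)
            except (ssl.SSLError, OSError):
                pass
        return accepted

    print(supported_versions("example.com"))  # e.g. ['TLSv1_2', 'TLSv1_3']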
I've run TLS 1.2-only on my site for a year or two, and I've not encountered any problems. The ecommerce websites I manage are TLS 1.1 and 1.2 only, and I've not heard complaints.
> Today only 0.5% of HTTPS connections made by Chrome use TLS 1.0 or 1.1.
Curious who the 0.5% are. Presumably Chrome does its best to avoid falling back to TLS 1.0 or 1.1 unless necessary, so these are likely sites that just don't support 1.2.
One thing that really stings here is that only in the past few years have OS vendors, library vendors and so on started paying attention to how people use their crap to build things. So software written and custom sites developed before that awakening are often flawed, because the developers tried their best but weren't experts, and weren't given the opportunity to ask for "you know, good security, whatever that is at the moment?" but instead had to pick specific values for things they didn't understand, including protocol versions, cipher suites, key lengths, trusted roots...
For example, in Python you can ask for a specific TLS version ("Oooh, TLS 1.1 just came out, let's use that") or you can ask for what it calls "SSLv23", which means any protocol in the SSL/TLS family whatsoever. But only in very recent 3.x Python releases can you easily specify "um, anything above TLS 1.0 is OK?" even though that is clearly both more useful AND more likely to result in the software working as intended in future.
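Concretely, with Python's ssl module (a sketch; PROTOCOL_TLSv1_1 is deprecated in current releases, which is rather the point):

    import ssl

    # The old, brittle way: pin exactly one protocol version.
    pinned = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1)  # "TLS 1.1 just came out!"

    # The flexible way (Python 3.7+): express a floor, not a pin.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_1   # anything above TLS 1.0 is OK
    # maximum_version stays at MAXIMUM_SUPPORTED, so when the library grows
    # TLS 1.3 support this code picks it up automatically.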
In fact, software developers reading this: you probably didn't implement TLS yourself (OK, the six smug people thinking "Yes I did", this isn't for you, you've already read and perhaps written similar rants before). TLS 1.3 was finished earlier this year. How many programs that you wrote more than six months ago are you sure will "just work" with TLS 1.3 if the relevant libraries or OS features are upgraded to support it? How many do you think will tell the library/OS/framework/whatever "Oh, no, I definitely want TLS 1.2" even though what they actually want is security, and TLS 1.3 would improve that?
> For example, in Python you can ask for a specific TLS version
libcurl has the same thing. A really common problem in PHP ecommerce libraries and applications is the misuse of CURLOPT_SSLVERSION to pin a specific protocol version. Then their payments stop working every few years ...
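The same trap, sketched via pycurl rather than PHP (constant names from pycurl; note that whether these constants mean "exactly" or "at least" this version has varied across libcurl versions and TLS backends):

    import pycurl

    c = pycurl.Curl()
    c.setopt(pycurl.URL, "https://payments.example/")  # hypothetical endpoint

    # The misuse: hard-coding whatever version seemed current at the time.
    # When the payment gateway later drops TLS 1.0, every request fails.
    c.setopt(pycurl.SSLVERSION, pycurl.SSLVERSION_TLSv1_0)

    # Better: leave the default and let libcurl negotiate the best version
    # both ends support, now and after future upgrades.
    c.setopt(pycurl.SSLVERSION, pycurl.SSLVERSION_DEFAULT)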
Maybe somebody more familiar with TLS can set me straight here, but I find it surprising that SNI still exists and there isn't much effort to replace it. To me it seems like an odd hole to punch into the encryption layer. Back when I was doing security research I wrote a TLS MITM and I distinctly remember thinking "wow thanks SNI this makes it so much easier".
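To make that concrete, here's roughly how little it takes to lift the hostname out of a captured ClientHello, since the record is sent before any encryption is in effect (a minimal sketch assuming a well-formed, unfragmented record):

    import struct

    def extract_sni(client_hello):
        """Pull the plaintext SNI hostname out of a raw TLS ClientHello record."""
        pos = 5                        # skip record header: type(1) version(2) length(2)
        pos += 4 + 2 + 32              # handshake header(4), client version(2), random(32)
        pos += 1 + client_hello[pos]   # session ID (1-byte length prefix)
        (cs_len,) = struct.unpack_from("!H", client_hello, pos)
        pos += 2 + cs_len              # cipher suites (2-byte length prefix)
        pos += 1 + client_hello[pos]   # compression methods (1-byte length prefix)
        (ext_total,) = struct.unpack_from("!H", client_hello, pos)
        pos += 2
        end = pos + ext_total
        while pos < end:               # walk the type/length/data extension triples
            ext_type, ext_len = struct.unpack_from("!HH", client_hello, pos)
            pos += 4
            if ext_type == 0x0000:     # extension 0 is server_name
                # list length(2), entry type(1), then name length(2) and the name
                (name_len,) = struct.unpack_from("!H", client_hello, pos + 3)
                return client_hello[pos + 5 : pos + 5 + name_len].decode("ascii")
            pos += ext_len
        return None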
Thanks for the links. Sounds like the main reason it hasn't been addressed is DNS, if the client is just going to make a DNS request before their TLS session, the host is effectively leaked anyway. Sadly DNSSEC wouldn't address this as it only provides integrity and not confidentiality.
A Firefox Nightly can be configured to do D-PRIVE (specifically DNS-over-HTTPS to Cloudflare) and do eSNI, and thus connect to a default configured Cloudflare site without any indication to other parties about which one...
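If you want to poke at the DNS half of that yourself, Cloudflare's resolver also answers DNS-over-HTTPS queries in a JSON flavour (their documented extension, not the RFC 8484 wire format); a quick sketch:

    import json
    import urllib.request

    # Ask Cloudflare's DoH endpoint for an A record over ordinary HTTPS,
    # so the query never crosses the network as plaintext port-53 DNS.
    req = urllib.request.Request(
        "https://cloudflare-dns.com/dns-query?name=example.com&type=A",
        headers={"Accept": "application/dns-json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)

    for record in answer.get("Answer", []):
        print(record["name"], record["data"])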
Some other cloud providers or CDNs have made interested noises, if those noises weren't just for the public record they might begin doing the exact same thing in short order, especially if the Firefox tests go well.
This is a step in the right direction but it's not perfect. The name of at least one domain that the responding server must host is still leaked. This can be a non-issue if the same IP is hosting hundreds of domains (e.g. CloudFlare) or pointless if it's just hosting one site.
I just had an idea that might be able to work around this though:
1. Create a new TLD: .ip. All *.ip domains are valid IP addresses (in some encoding, e.g. 74-125-138-139.ip, or anything else) and they always resolve to the IP address specified.
2. Automatically issue certificates for each host for each of the IPs that they serve on. (Thank you, Let's Encrypt.)
3. Every new connection can then use the IP-domain as the eSNI originating host, because you know every host is serving its own IP-domain over HTTPS (a sketch of the encoding follows).
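The encoding step itself is trivial; a sketch, purely hypothetical since no such TLD exists:

    import ipaddress

    def to_ip_domain(addr):
        """Encode an IPv4 address as a name under the hypothetical .ip TLD."""
        return str(ipaddress.IPv4Address(addr)).replace(".", "-") + ".ip"

    def from_ip_domain(name):
        """Recover the address an .ip name must always resolve to."""
        return str(ipaddress.IPv4Address(name[:-len(".ip")].replace("-", ".")))

    assert to_ip_domain("74.125.138.139") == "74-125-138-139.ip"
    assert from_ip_domain("74-125-138-139.ip") == "74.125.138.139"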
This doesn't solve the fact that server IPs are still fairly unique, so a reverse DNS lookup might be enough to find the host, but it doesn't leak any more information than the IP header already leaks, and it doesn't require leaning even more on increasingly centralized proxies like CloudFlare.
Certificates already support IP addresses. They just need to be public (i.e. not RFC1918 space) and for legacy browser compatibility the IP needs to be in the commonName and subjectAltName.
In terms of browser compatibility the situation is:
The address must appear as a SAN ipAddress to work in modern browsers like Chrome and Firefox
BUT
The address must appear as either a SAN dnsName or as the X500 series Common Name to work with older Microsoft SChannel implementations.
Key root programme rules and the Baseline Requirements mean that:
IP addresses must not appear as a SAN dnsName (they're IP addresses; writing them out as text doesn't mean they're now part of the DNS system) but only as a SAN ipAddress
The X500-series Common Name must be the textual representation of one of the SANs (doesn't matter which one).
As a result, the only compliant certificates for IP addresses that also work in older IE / Edge releases do this (sketched in code after the list):
1. Write exactly one IP address as a SAN ipAddress
2. Write the same IP address, but as a text string as the Common Name.
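Expressed with Python's cryptography package, a certificate shaped that way looks roughly like this (self-signed purely for illustration; a public CA would do the signing, and the address is a placeholder):

    import datetime
    import ipaddress

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    ip = ipaddress.ip_address("203.0.113.7")  # placeholder public address

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    # Trait 2: the same address, as a text string, in the Common Name.
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, str(ip))])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed for the sketch
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=90))
        # Trait 1: exactly one SAN ipAddress entry.
        .add_extension(x509.SubjectAlternativeName([x509.IPAddress(ip)]),
                       critical=False)
        .sign(key, hashes.SHA256())
    )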
There are a LOT of certificates that do something else. Some of them work but aren't compliant (and so get finger-wagging from people like me); some are compliant but don't work on older Windows systems (which may be OK if you're building a new system for, say, CentOS users; who cares if it works on Windows?). Only the pair of traits I described above manages to be both compliant and working, and it's limited to a single IP address per certificate.
Hey, at least Windows 10 finally groks SAN ipAddresses, in another decade we might not need a workaround.
Is this just a restriction on current CAs? I have a self-signed certificate on my router (more out of curiosity than any practical benefit), and it comes up fine on https://192.168.1.1/
Yes, this restriction applies only to public CAs. The purpose is to prevent someone from getting, for example, a 192.168.1.1 cert and then using it on another network in a MITM attack.
Worth keeping in mind that without SNI, we wouldn't have anywhere near the current level of HTTPS adoption either.
It wasn't that long ago I had to sell clients on a separate IP address just to set up HTTPS. Let's Encrypt using SNI allowed me to secure everyone for free.
The real issue with deprecating TLS 1.0 and CBC ciphers is that you need a minimum of Android 4.4 for TLS 1.2.
Might not be reflected in browsing stats, but according to this: https://developer.android.com/about/dashboards/ it's 3.8% of Android users. And that number undercounts Android devices that don't use Google Play Services.
This particular blog post and plan is regarding removing support of old TLS versions and options on the client. Which means that all server administrators need to do is enable TLS 1.2 and modern options.
Old clients (such as old Android versions) will still be fine, if the servers don't also turn off older versions/options.
KitKat was released 4 years ago, Chrome plans to drop Jelly Bean on Android anyway, and genuinely, is enforcing minimums such a bad idea?
When you're handling your own PII and potentially others' (contacts, addresses, phone numbers etc.) I think having an absolute minimum requirement is beneficial for the Internet as a whole; for example, it shouldn't be easy to use Windows XP to access the Internet, for good reason, I'd hope (I can't verify that, but I doubt modern browsers still work there).
This is good news all around, forcing people to upgrade is a good idea in the long run.
I feel like Google and Mozilla are being too lenient here. In my opinion it doesn't matter if you wait 1 month or 2 years to remove support for old TLS versions; affected parties probably won't notice or act until it happens. On the other hand, I understand that Google and Mozilla are very careful about users' experience and their market share.
Shameless plug: I've been trying to tackle this front via removal of dependencies[1]. We have been relying on more and more complex and heavy protocols for any kind of transport security, just because the browsers do it this way. I don't think TLS is the right approach to all transport security nowadays.
[2] is a comparison of OpenSSL with the various experimental implementations of Disco (tl;dr: 700k LOC vs 1k LOC)