SHA-1 Deprecation: No Browser Left Behind (cloudflare.com)
99 points by jgrahamc on Dec 23, 2015 | 40 comments



I don't understand the rationale. SHA-1 is being sunset as broken crypto, meaning that with time a motivated attacker can find a collision and forge a cert for a SHA-1-secured site. Serving a SHA-1 cert to users whose browsers cannot understand newer cryptography seems like claiming that the security of those users' connections isn't something you consider meaningful, since those users remain vulnerable to SHA-1 attacks. This merely raises the barrier to entry for attacking those users (compared to plaintext), except those users will potentially be unaware anything is wrong.

As the article states, 37 million visitors were using outdated crypto - that's 37 million potential sets of private information I could skim with a forged cert. Does protecting those users not matter?


Cloudflare and others believe [1] the risk can be much reduced by new, carefully enforced issuing policies on the part of certificate authorities.

The vulnerability of SHA-1 isn't that it's feasible to generate an input that will produce a checksum of your choice - it's that it's feasible to generate two inputs that will produce the same checksum. This is orders of magnitude easier, due to the birthday problem [2].
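To make the square-root saving concrete, here's a small illustrative sketch (mine, not from the thread) that runs a birthday search against SHA-1 truncated to 32 bits. A collision turns up after roughly 2**16 attempts rather than the 2**32 a preimage would need - the same saving that puts generic SHA-1 collisions near 2**80 work while preimages stay near 2**160:

```python
import hashlib
import itertools

def truncated_sha1(data: bytes, nbytes: int = 4) -> bytes:
    """First `nbytes` bytes of the SHA-1 digest (a 32-bit toy hash here)."""
    return hashlib.sha1(data).digest()[:nbytes]

# Birthday search: hash distinct messages until any two of them collide.
seen: dict[bytes, bytes] = {}
collision = None
for i in range(2**22):  # cap is far above the expected ~80k attempts
    msg = i.to_bytes(8, "big")
    digest = truncated_sha1(msg)
    if digest in seen:
        collision = (seen[digest], msg)
        break
    seen[digest] = msg
```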

The SHA-1 hash the certificate authority signs isn't computed over just the public key in the certificate; it also covers metadata like the domain name and the powers the certificate grants. So to successfully exploit the weakness of SHA-1 you not only have to find a collision between good and evil certs, you also have to get a CA to sign the good cert without fiddling with the metadata and hence changing the SHA-1.

The proposal is that CAs should start systematically adding a random serial number to every cert, i.e. always fiddling with the metadata, so it's impossible to choose the SHA-1 the CA will sign. So even if you find a collision, you won't be able to get a cert issued that lets you exploit it.

For example, maybe I know that sha1("the public key for good.example.com is asdfqwerzxcv and it isn't a CA") = sha1("the public key for evil.example.com is rtyufghjcvbn and it is a CA") = 7d97e98f8af710c7e7fe703abc8f639e0ee507c4. If I ask a CA to sign the good cert, they give me a signature for 7d97e98f8af710c7e7fe703abc8f639e0ee507c4, and I can copy that signature onto the evil cert. With this proposal, the CA would instead only agree to sign "the public key for good.example.com is asdfqwerzxcv and it isn't a CA, and the CA's random number is 994782906". Because I can't control the random number, I can't control the SHA-1 of the cert they issue to me.
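A toy sketch of that mitigation (the cert text is a stand-in for real DER-encoded TBSCertificate data, and `issue` is a hypothetical CA routine, not any real API):

```python
import hashlib
import secrets

def issue(tbs_cert: str) -> tuple[str, str]:
    """Hypothetical CA issuance: inject >= 64 bits of CA-chosen entropy
    into the serial number before hashing, so the requester can't predict
    (and therefore can't collide against) the digest that gets signed."""
    serial = secrets.randbits(64)
    signed_text = f"{tbs_cert}, and the CA's random number is {serial}"
    return signed_text, hashlib.sha1(signed_text.encode()).hexdigest()

good = "the public key for good.example.com is asdfqwerzxcv and it isn't a CA"
_, digest_a = issue(good)
_, digest_b = issue(good)
# Two issuances of the same request hash differently, so a precomputed
# good/evil collision pair is useless to the requester.
```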

Unfortunately CAs have a poor track record of complying with their own issuing policies, so some people doubt CAs could be relied on not to mess this up.

[1] https://blog.cloudflare.com/why-its-harder-to-forge-a-sha-1-... [2] https://en.wikipedia.org/wiki/Birthday_problem


Protecting those users _does_ matter. As CloudFlare points out, many people simply cannot upgrade their devices to something that supports SHA2. Instead, the choice is either (1) they don't get crypto at all, because SHA2 won't work on their device, or (2) those users get SHA1, which is better than nothing. CloudFlare is advocating stance 2.


Or option 3 for truly important data: they don't get access at all. It would be inconvenient for that block of users, but it would avoid doing something that looks (to the end user) more secure than it is and lulling them into a false sense of security that means they don't see any need to upgrade.

Looking at the figures for who would get excluded (the largest being 6%), that could sound pretty bad, but if that is a percentage of devices then it isn't so bad: most of them may be old mobile devices, and the people who own them may have another machine (a laptop or desktop, for instance) from which they can access the functionality.

Unfortunately, for a commercial site, excluding people this way, even just a fraction of a percent of them, might be impossible to justify to the powers-that-be, even with the "not wanting to enable people to do something insecure" argument.


If the site decides that the data is too valuable, they can probably ask CloudFlare to disable this feature for their site. If the user decides that the data is too valuable, they can decide not to access that data with vulnerable devices.


Correct. All our customers can disable SHA-1 fallback from our control panel. While a US-based bank, for instance, may decide based on its risk assessment that it's better not to support SHA-1, a Syria-based refugee organization may reasonably reach a different risk assessment and therefore decide to support it.


> but it would avoid doing something that looks (to the end user) more secure than it is and lulling them into a false sense of security that means they don't see any need to upgrade.

On that note, does CloudFlare still allow for https (cloudflare issued ssl) to http (on the backend)?


Is it feasible for websites to detect which certificate (SHA-1 or SHA-2) is being served and display a warning banner at the top of the page if SHA-1 is used? (Similar to the cookie warnings most sites are now required to display to EU visitors.)

Although maybe the precedent of serving content conditional on the cipher used might not be a good one to set.


With the push for HTTPS everywhere, those old devices won't get content at all.


Case 1) isn't that they access the site in plaintext, but that they don't get access at all. Case 2) could easily be that their credit card gets emptied.


Besides what others have pointed out, there's also the economic angle. It may be worth the estimated $X (for whatever value of $X you believe) to predict a collision, forge a cert, and MITM all of a website's traffic. It may not be worth nearly as much if the prize is only the ability to MITM 2% of that website's traffic. Maybe it's 6% under oppressive regimes, but it's still the same amount of work for a much smaller prize.

Of course we'd like to protect all 100%, but this is about tradeoffs. Assuming downgrade attacks are as preventable as they claim, I think it's respectable that they're making this kind of effort to reduce the impact.


In addition to what sister comments said, it's also a matter of your threat model. While SHA-1 might no longer be sufficient protection against targeted attacks by powerful attackers, it's still good enough to protect against a random person snooping on another random person on an open WiFi network.


In addition, many Android phones less than five years old (pre-Gingerbread) don't support SHA-2.

TLS 1.2 (RFC 5246) added support for SHA-2 in August 2008.

Android Froyo was apparently released in May 2010 without support for TLS 1.2 or SHA-2.

Why the delay?

Edit:

Just found this:

> Android has the technical capability of handling SHA-256 certificates right from version 1.0. In practice, some users may encounter issues with validating certificates that use cross certificates (these help chain certificates to alternate roots). 1.6 improved this issue for some users, with the issue being resolved as of version 2.2.

https://support.globalsign.com/customer/portal/articles/1499...

which helps explain the situation with SHA-2, but not the delay in implementing TLS 1.2.


Until the last few years, anything newer than TLS1.0+SHA1 was considered crypto-wonk neckbeard territory. Sure, it was standardized, but asking a vendor to implement it would get you an eyeroll and a "sure, we'll get right on that, uh huh".


Also, pre-Gingerbread has been abandoned by everyone by now.

Even Gingerbread has been abandoned in the last 18 months.


> The Firefox team has spoken publicly about the drop in downloads they experienced when they moved mozilla.org to only support SHA-2 certificates

I was wondering why I couldn't download Firefox from my work computer with the latest Chrome; it seems older browsers aren't the only things that have problems with SHA-2 (hint: corporate proxies).


I'm baffled by this. I think what's happening is similar to the streetlight effect, also called a drunkard's search. We're focusing on the wrong problem.

The article claims SHA-1 is "increasingly vulnerable to potential collision attacks". That is, theoretically vulnerable, not yet demonstrated. But there's all this frantic activity to fix it. For some reason it's interesting to a lot of people.

Here's why I'm confused. There have been very real, in the wild, documented MITM attacks against users due to malfeasance of certificate authorities, many of which are state actors. And yet, today, my copy of Firefox 43.0.1 still has a plethora of CAs it trusts (for all domains?). Including trusting known bad actors such as TÜRKTRUST.

I think that truly fixing the CA mess should be infinitely higher priority than worrying about SHA-1. Why doesn't my browser implement proper pinning for every site it visits? Okay, so Chrome pins some Google certificates. But 99% of the Internet is vulnerable to MITM and everybody just wrings their hands and promises eventual future fixes. Moxie Marlinspike has been calling attention to this for years.

But I don't follow the certificate pinning stuff carefully. Has the problem been solved? I've previously used Certificate Patrol, but that software had some serious limitations so I gave up on it.


Browsers do implement pinning via HPKP; every site can use it, and it adds trust-on-first-use protection. It is not deployed widely yet, but that's not the fault of the browsers. People need to start using HPKP if they're worried about CA failures.
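For reference, HPKP pinning is just a response header (RFC 7469). The pin is the base64 of the SHA-256 hash of a certificate's SubjectPublicKeyInfo, and the spec requires at least one backup pin; the pin-sha256 values below are made-up placeholders, not real keys:

```
Public-Key-Pins: pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM="; pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g="; max-age=5184000; includeSubDomains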

Also there is another effort to fix the greater CA problem called Certificate Transparency. The idea is to have publicly verifiable logs of all certificates. It is not fully deployed yet, but it already helped to uncover misissuances of certificates.

Apart from that, the policies for CA operations are much clearer and stricter these days.

I've recently written a 2-part article for LWN about all these issues: https://lwn.net/Articles/663875/ https://lwn.net/Articles/664385/


Is that title a play on the similarly failed education policy called "No Child Left Behind"?

Such policies fail for a reason. You're not supposed to cater to the least common denominator. That way you'll slow your progress down much more than you would otherwise.

You can't support broken security for another decade just because 2% of the users will continue to use the tools that are broken. I'm willing to bet there will still be Windows XP users 10 years from now unless everyone just decides to leave them behind and purposely break their apps' support for it. And they'd be doing those guys a favor.

Here, it also teaches OEMs that they can continue to leave their devices on ancient versions of Android, because someone else (hint: like Cloudflare or Facebook) will solve that problem for them down the line. When instead, the people who bought the phones from Huawei or Xiaomi, or whoever sold those Android 2.3 devices last, can see their devices stopped working properly, curse those companies and buy from someone else next time.


This is not "catering to the lowest common denominator."

That expression would be appropriate if CloudFlare decided to keep everyone on SHA-1 just because less than 2% of users couldn't use SHA-2. But that's not what they're planning to do. The 98% of users that support SHA-2 will be given SHA-2 certificates.

Your argument about Windows XP is moot because XP SP3 can handle SHA-2 certificates just fine. So even if we got rid of SHA-1 now, people will still be running XP SP3 10 years from now.

I used to have a Nexus One that ran Android 2.3. I didn't curse Google or HTC when it became too old to be useful. I don't think people will curse Huawei or Xiaomi, either. We're getting used to planned obsolescence, the exceptionally long life of Windows XP notwithstanding.


I don't see what the hurry is in deprecating SHA-1, when there was no such hurry for things like RC4, weak DH, and SSLv3, even though those were known to be broken.


Surely you missed https://eprint.iacr.org/2015/967.pdf , which prompted the CA/Browser Forum not to delay the SHA1 sunset.


Thanks, that's more worrying than the 2012 article linked on the Cloudflare blog. The paper you linked says the attack 'will cost between 75K US$ and 120K US$ and will plausibly take at most a few months', not $700,000 like the Cloudflare blog does.


That's for a freestart collision, which is not the same as a real-world attack. It's a big warning sign, but real attacks aren't that bad yet. https://www.schneier.com/blog/archives/2015/10/sha-1_freesta...


I wonder how they manage the SHA-1 fallback? I guess it might be a proxy that decides, based on the user agent.


By examining the ClientHello.
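For the curious, here's a rough sketch of the kind of check involved (my guess at the mechanism, not Cloudflare's actual code): TLS 1.2 clients advertise SHA-256 support in the ClientHello's signature_algorithms extension (type 13, RFC 5246), so an edge server can parse that and fall back to the SHA-1 certificate only when SHA-256 isn't advertised:

```python
import struct

SIGNATURE_ALGORITHMS = 13  # extension type from RFC 5246
SHA256 = 4                 # HashAlgorithm id from RFC 5246

def supports_sha2(client_hello: bytes) -> bool:
    """client_hello is a Handshake message (msg_type 0x01 onward)."""
    pos = 4           # skip msg_type (1 byte) + length (3 bytes)
    pos += 2 + 32     # client_version + random
    sid_len = client_hello[pos]
    pos += 1 + sid_len                                # session_id
    (cs_len,) = struct.unpack(">H", client_hello[pos:pos + 2])
    pos += 2 + cs_len                                 # cipher_suites
    comp_len = client_hello[pos]
    pos += 1 + comp_len                               # compression_methods
    if pos >= len(client_hello):
        return False  # no extensions block at all: a pre-TLS-1.2 client
    (ext_total,) = struct.unpack(">H", client_hello[pos:pos + 2])
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack(">HH", client_hello[pos:pos + 4])
        pos += 4
        if ext_type == SIGNATURE_ALGORITHMS:
            # skip the 2-byte list length; then (hash, signature) byte pairs
            pairs = client_hello[pos + 2:pos + ext_len]
            return any(pairs[i] == SHA256 for i in range(0, len(pairs), 2))
        pos += ext_len
    return False  # per RFC 5246, no signature_algorithms implies SHA-1
```

In reality the decision could also weigh the advertised cipher suites and protocol version; this looks at only one signal.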


Wouldn't this make modern browsers vulnerable to a downgrade attack via an active MITM?


Modern browsers are removing SHA-1 support. The MITM would not be able to downgrade them to something they've removed.


This is unrelated, but maybe someone here has an answer: what is up with the Cloudflare captchas? About half the time I get served easy street-sign captchas, which always work on the first try; but when the opposite happens, I have taken to simply not visiting the page. I often need 20-30 tries to get through. My attention span is rarely that long, so I just end up not visiting those sites.

Is it the choice of the cloudflare client, or is it just cloudflare trying to make the internet hostile to the clients of my ISP?


Cloudflare seems to have blacklisted a big range of VPN IPs recently. If you have the problem mostly on a single site, you can ask the site to whitelist your VPN IP range in the Cloudflare settings.

Regarding the street-sign vs. actually hard captchas: you always get the street-sign/easy captcha if you are logged in with a non-new Google account (one that has solved a lot of captchas).


Cloudflare breaks badly with Tor.


How are you connecting to CloudFlare sites? Direct? VPN? Tor?


Either via VPN or through my regular ISP. I get the captchas regardless. I know my ISP (a small local one) has had some misuse problems, so I understand that there are captchas. What I don't understand is why they don't rely on captchas that are actually readable.

I know computers are great at captchas, but now they are a serious usability problem.


"In a Silicon Valley tech company, where most employees get a new laptop every year"

Is that really the SV standard? In the European tech companies I have worked in, people look at you funny if you replace your laptop more often than every 18-24 months.


I haven't done a survey, but at a previous startup it was 24 months, though you could get a new one after 12 months with approval from a manager that would basically always be approved if you asked.

At my current company there's no policy, but every new hire is given a new MacBook Pro.

Given the normal costs of employing someone, a new laptop every year is a tiny fraction of the overall compensation so in general, it should be a non issue.


The main problem in my experience is on the bean counter side. Depending on the company, when you buy equipment it is an expense for the company (i.e., you don't have to pay taxes on the money used for buying the equipment). However, you usually can't (depending on the country/state/province/whatever) declare the entire expense in the first year. You often get it as the value of the item depreciates. Some countries have bizarre rules for depreciation. For example, in Japan, you might get stuck on a 20 year straight line depreciation for a telephone (even if it is a mobile one). So you get 5% of the value as an expense in the first year (tax law is complicated, so I'm just giving you the worst case example -- there are cases where you can get 100% depreciation in the first year as well).
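A quick sketch of the straight-line arithmetic described above (illustrative numbers only):

```python
def straight_line_schedule(cost: float, years: int) -> list[float]:
    """Equal annual depreciation expense over the asset's legal lifetime."""
    return [cost / years] * years

# A 20-year schedule means only 1/20 (5%) of the purchase price can be
# expensed each year -- e.g. a 40,000-yen phone yields 2,000 yen per year.
phone = straight_line_schedule(40000.0, 20)
```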

Anyway, each thing you buy, you have to track the depreciation of. You can sell the used item for a loss which means more paper work. Or you can pile the items up until they have fully depreciated. But, basically, it is a huge pain in the back end. If you have an accountant on staff who is already up to their eyeballs, it is not unusual for them to grumble very loudly about having to deal with the developers' obsession for new shiny toys.

If you have a company who is willing to throw money at problems to make them go away (don't try to track the equipment as an expense if you don't have the capacity to do so), then it won't be a problem. But as always, the people problem is far more difficult to solve than the technical problem. Personally, I cut startups some slack if they haven't got all their ducks in a row on small issues like this. YMMV.


Of course startups are a special case, since they often do not last long enough to require a laptop refresh policy at all.


I wonder if some brave sites could get together to end of life browsers earlier by refusing to serve pages to those user-agents.


But to what end? The entire brouhaha leaves the impression that ridding the world of "obsolete" software is comparably important to ending world poverty and hunger.

Why is dragging everything and everyone onto the "2 months and you are out" treadmill of such great importance?


> Why is dragging everything and everyone onto the "2 months and you are out" treadmill of such great importance?

I can't let aging tech threaten my https connections! Have you even heard of MITM attacks? That data is just too important! I only let 47 of my most trusted javascript embeds be positioned to slurp data, change the contents of my pages and track my users!




