Author here. It has gotten kind of hard to follow what has happened, so here's a chronology:
1. In September, Chrome 53 was released, which enabled mandatory Certificate Transparency for Symantec certificates due to Symantec's history of incompetence. Some website operators, such as Chase, asked Symantec to submit their certificates to Certificate Transparency logs in such a way that the certificate wouldn't be trusted by Chrome, triggering the ERR_CERTIFICATE_TRANSPARENCY_REQUIRED error. This is when I wrote this blog post.
2. Last week, an internal timebomb expired in older versions of Chrome causing this error message for any website using a Symantec certificate issued since June. Basically, Chrome contains a list of Certificate Transparency logs that it trusts, and this list has a 10 week expiration date. So if Chrome was built more than 10 weeks ago, there would be no trusted Certificate Transparency logs, and therefore any certificate that was supposed to be logged (such as new Symantec certs) would be untrusted and display this error message. The Chrome team was able to fix this within 24 hours by remotely disabling CT enforcement in Chrome. (When Chrome starts up, it fetches a list of feature flags from a Chrome server using a system called Finch which is independent of the normal upgrade system.)
3. Today, the Chromium packages in several Linux distros, including Ubuntu, became 10 weeks old. For some reason, they have not picked up the Finch update, and so they are displaying this error message for all Symantec certificates issued since June. This is not confirmed yet, but the current hypothesis is that the distros have disabled Finch for privacy reasons. It will probably require a distro package upgrade to fix.
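If you want to check whether a particular certificate has SCTs embedded in it at all, a rough way is to dump it with openssl. This is just a sketch, example.com is a placeholder, and it only shows SCTs embedded in the certificate itself (SCTs delivered via the TLS extension or stapled OCSP won't appear):

$ openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null \
    | openssl x509 -noout -text | grep -A 3 "CT Precertificate SCTs"

On OpenSSL 1.1.0+ the extension is printed as "CT Precertificate SCTs"; older builds only print the raw OID (1.3.6.1.4.1.11129.2.4.2).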
» The Chrome team was able to fix this within 24 hours by remotely disabling CT enforcement in Chrome. (When Chrome starts up, it fetches a list of feature flags from a Chrome server using a system called Finch which is independent of the normal upgrade system.)
After reading the blogpost, it seems like it was working as intended. Why "fix" it?
The expiration was designed back when Certificate Transparency was only required to make Extended Validation certificates display a green bar. The intention was never to make certificates fail entirely when a Chrome build was more than 10 weeks old. Now that CT is being used for more than just a green bar, the log list expiration is being revisited.
» When Chrome starts up, it fetches a list of feature flags from a Chrome server using a system called Finch which is independent of the normal upgrade system.
I'm not a Chrome user. But that sounds awful at first. What is the idea behind this service? Is there any documentation about the 'features' these flags can enable/disable?
I understand that I'm paranoid at times AND I really dislike Google, but why would you have an extra channel to influence deployments, other than offering a global update file?
I fail to see this as any more harmful than the auto updating feature. I understand the concern of multiple avenues to phone home as being worse than one, but it's negligible considering it's the same company. Coupled with all of their other services for security incidents, prediction, auto correct, spelling, usage stats, dangerous page warnings, etc, I think it's just another log on the fire and not worth being concerned about specifically.
It actually is almost nothing like the auto updating feature.
Finch is mainly used for A/B testing. It doesn't push actual updates; all it does is turn existing features (flags) on and off. It's used for quick A/B tests or incrementally rolling out new features, and is designed to be more agile.
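To make the "flags, not code" distinction concrete: Chromium exposes command-line switches that override what Finch would decide for you. The study and feature names below are made up for illustration; the real active studies show up under "Variations" in chrome://version:

$ chromium-browser --force-fieldtrials="SomeStudy/Enabled/" --enable-features=SomeFeature

The binary already contains both code paths; the server (or these switches) just picks which one runs.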
The biggest difference I see is the speed. You're right, it doesn't make a difference from a technical point of view but it illustrates again just how much power they have over chrome - and thereby over the web: They can push a policy update to the bigger part of Chrome's userbase in 24 hours.
It should be entertaining when car makers use the same strategy: "Oh yeah, we have this system where every time you turn the ignition, your car polls us and updates its assistant, motor, steering and airbag settings. Don't worry though, we mostly just use it for randomized tests and field trials."
Because Google doesn't want bugs in your (their) browser affecting their revenue.
Interesting. If you're a victim of MITM attacks, I wonder if attackers can abuse that to disable some certificate-check features and make spoofing SSL sites easier.
I use LetsEncrypt on a (personal) site or two and on a couple for work.
We've got customers that mostly run Windows, however, and it's a helluva lot easier to, for example, just get a three-year certificate from <vendor>, install it, and then forget about it for the next three years.
Fortunately, I don't (usually) have to deal with the Windows boxes (i.e. actually installing and/or configuring the certificates) but I'm often the one acquiring the certificates.
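For what it's worth, the Let's Encrypt side of that workflow is pretty painless once it's scripted. A minimal sketch with certbot (paths and domains are placeholders; on some distros the client is still packaged as "letsencrypt"):

$ certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com
$ certbot renew    # run from cron to handle the 90-day expiry automatically

It's the Windows/manual-install boxes where the three-year cert still wins on convenience.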
CertSimple only does EV, and we do it completely differently from every other company: we check as much of your company's details before you pay, matching your order to a registered/active entity, flagging up things before asking for your credit card number, and helping you resolve any missing identification steps based on your company, order and the domain names involved.
I've been on HN for a decade and was at YC in Mountain View last week for the 10-minute final interview (we didn't make it, which I blame on me being a jetlagged mess). OTOH the AirBnB we stayed in used one of our customers as their ISP.
We're used by a bunch of companies HN folks might know, including Travis, Tito, and Monzo.
The Symantec "this website is ssl protected by"-badge has high customer recognition and will raise conversion rates a tenth of a percent or two.
I wouldn't believe it if I hadn't seen it repeatedly with my own eyes: most trust images will get you a small bump, but for some reason Symantec performs the best. It's understandable and yet very frustrating!
It's not "Symantec", it's not "Norton", it's the "VeriSign checkmark". That's why they also bought the checkmark from VeriSign (which was also their corporate logo) when they acquired the CA business, and VeriSign went on and adopted a new corporate logo.
The checkmark has been around since at least 1997, and throughout its existence it has been associated with the most trustworthy institutions. It has become synonymous with trust. That checkmark is one of Symantec's most valuable assets.
For small startups, probably not. Large organisations have far more stringent processes around certificates. Auditing, validation, testing, regulatory requirements, EV, support contracts and SLAs, etc.
Apart from the reasons posted earlier, "legacy" also comes into play.
Symantec acquired the CA business from the old VeriSign in 2010, which back then was the largest CA and had a large corporate client portfolio accustomed to paying $$$$ per certificate per year. Corporations can become very reluctant to change, even if it benefits them, and there are some who just pay up every year without researching alternatives or even being aware of any.
You will find that there are a lot of similarities with the domain name industry where there is also a company operating with a VeriSign heritage: Network Solutions. Network Solutions also charge premium prices, because their legacy customers expect those. And although market share is slowly eroding over the years and cheap (and even free) competitors rise, more than enough legacy customers stay on board to remain very profitable for many years to come. What definitely helps is that most of the processes in the domain registration and CA industries can be automated and don't need a lot of maintenance.
We use wildcard certs for a number of domains to support affiliate subdomains. So affiliate.example.com and affiliate2.example.com are all served by the same servers and thus all need to validate with one cert.
There's a limit to how many SANs you can fit in one cert. There is apparently no defined upper bound; it depends on the implementation. 25-100 names seems to be the common limit supported by most CAs.
In the parent poster's case, it sounds like a good use case for a wildcard cert. They may have thousands of affiliates, and may not know all of the affiliates ahead of time, so with a SAN cert they would need to reissue the certificate each time a new affiliate signs up.
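For comparison, here's roughly what the SAN approach looks like at the CSR level; the names are placeholders, and every new affiliate means regenerating the CSR and reissuing. (Note that -addext needs a reasonably recent OpenSSL; older versions need the SANs in a config file instead.)

$ openssl req -new -newkey rsa:2048 -nodes -keyout affiliates.key -out affiliates.csr \
    -subj "/CN=example.com" \
    -addext "subjectAltName=DNS:example.com,DNS:affiliate.example.com,DNS:affiliate2.example.com"

A wildcard cert for *.example.com sidesteps that churn entirely, at the cost of one key covering every subdomain.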
I can't speak for Symantec, but folks use SSLMate (which resells Comodo) because of the customer support, the wildcard certificates, the lack of rate limits, and the central management of certificates (which integrates with Cert Spotter, our Certificate Transparency monitor). Also, some of our customers have special requirements and unfortunately can't automate certificate issuance. These customers want year long certificates with email validation, but still prefer using SSLMate's command line workflow over the clunky web interfaces of most CAs.
Ugh, that's the worst. I work on servers where the GUI can only be accessed using HTTPS, but it's all internal so many clients don't bother. And I don't know why browsers sometimes don't let me say "continue anyway", but it's super frustrating.
Using HSTS on a website will generally prevent your browser from allowing you to continue, which is fair: the website owner has explicitly indicated the website should only ever be used over an encrypted connection, and that is not the case.
To be RFC compliant, it MUST NOT allow the user to ignore the errors.
12.1. No User Recourse
Failing secure connection establishment on any warnings or errors
(per Section 8.4 ("Errors in Secure Transport Establishment")) should
be done with "no user recourse". This means that the user should not
be presented with a dialog giving her the option to proceed. Rather,
it should be treated similarly to a server error where there is
nothing further the user can do with respect to interacting with the
target web application, other than wait and retry.
Essentially, "any warnings or errors" means anything that would cause
the UA implementation to announce to the user that something is not
entirely correct with the connection establishment.
Not doing this, i.e., allowing user recourse such as "clicking
through warning/error dialogs", is a recipe for a man-in-the-middle
attack. If a web application issues an HSTS Policy, then it is
implicitly opting into the "no user recourse" approach, whereby all
certificate errors or warnings cause a connection termination, with
no chance to "fool" users into making the wrong decision and
compromising themselves.
You can configure most major browsers to forget HSTS entries they already know about, so to the question of who owns your browser wrt honoring HSTS, I'd say it's the user.
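If you're not sure whether a site sets HSTS in the first place, the header is easy to check; just a sketch, with example.com as a placeholder:

$ curl -sI https://example.com/ | grep -i strict-transport-security

In Chrome/Chromium you can then query or delete the stored entry for a domain under chrome://net-internals/#hsts.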
I get the error on the NIST web site [0] as well. I do have the "Proceed to ..." option, though, as you can see in my screenshot [1].
The same thing happens on the Chase Bank URL [2] mentioned in the article. Clicking "Proceed to ..." then redirects me to another URL [3] which throws up the same error. If I click "Proceed to ..." on that URL, I then get an ironic login page [4].
$ lsb_release -d
Description: Ubuntu 16.04.1 LTS
$ apt show chromium-browser | grep ^Version
Version: 53.0.2785.143-0ubuntu0.16.04.1.1254
$ chromium-browser --version
Chromium 53.0.2785.143 Built on Ubuntu , running on Ubuntu 16.04
Perhaps there's a setting somewhere that you've toggled? Maybe it's HSTS or something similar?
> In my experience the past couple days, I get the warning on about 10-25% of major web sites.
I'm finding about the same, including major sites like Mint.com and Amazon (or part of Amazon's CDN for images, anyway). I'm running a slightly older Ubuntu and as of now, Chromium 53 is the latest in the repos...
"Too many websites have chosen redaction incorrectly"
I purchased a Symantec cert from ssls.com for one of my sites. I wasn't given the option of redacting anything, yet I'm seeing this error with Chrome 53 on Linux. (I also have Chrome 54 on another computer, and it's working fine.)
There are clearly other ways of ending up with a certificate that triggers this.
tl;dr: Symantec have tried to implement a broken version of Certificate Transparency on their certs even though the IETF has not finished the spec.
As such, new Symantec certificates don't work in newer versions of Chrome.
Crazy but true.
Kudos to the site owner - clear and simple and authoritative explanation
(Although I see the hand of politics behind this: "Hey, we really fucked up the Google.com certificates. The board insists we do what Google wants and implement full certificate transparency by June 30. And it took two hours to explain it to the board, so I am not going back to explain that it's all changed. Just implement the most recent IETF draft. Then I can tell the board it's done. What's the worst that can happen!")
I am in China. When I open baidu.com or zhihu.com, Chromium throws the privacy error.
There is an interesting workaround: on the privacy error page, type "badidea" and the browser will automatically proceed to the target website. But it may be a "bad idea".
Now I use Firefox to open these sites.
This bit me on both the NYT and WSJ websites this past week using Chrome 53; the CDNs they were using both broke with this error. Upgrading to Chrome 54 seemed to solve the problem for me (I'm using arch linux, fwiw).
Forgive my ignorance, but if you're issuing certificates for internal hostnames that you want to keep private, why would you need a public cert? Wouldn't an internal CA be better?
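For anyone wondering what that looks like in practice, a bare-bones private CA is only a few openssl commands. This is a sketch only: names and validity periods are placeholders, real deployments would add subjectAltName and proper extensions, and you'd distribute ca.crt to the clients that need to trust it:

$ openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -keyout ca.key -out ca.crt -subj "/CN=Example Internal CA"
$ openssl req -new -newkey rsa:2048 -nodes -keyout host.key -out host.csr -subj "/CN=intranet.example.internal"
$ openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 825 -out host.crt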
This is affecting Australia's Suncorp internet banking too. We reported the certificate error to them yesterday, and I've just updated that with a link to this article.
Somewhat off-topic but what a horrible bank they are.
A friend of mine's father almost had a heart attack trying to refinance their underwater house. Despite him being a US veteran, with the government willing to sponsor their whole mortgage through a special aid program, Chase chose not to accept a check from said agency. If it were a check from him they would gladly take it, but not from the government. The government, of course, will not write the check to him instead, so it's a catch-22.
They are currently filing a lawsuit against Chase (thank God my friend has a good lawyer who took it pro bono), but it just shows to what extent they will go to make an extra buck.