HTTPS in the real world (robertheaton.com)
162 points by mbaytas on Dec 6, 2018 | 26 comments



Maybe a stupid question, but what's the difference between OCSP Stapling and just rotating your TLS certificate more often?

It feels like at the end of the day, my server is essentially asking some authority to sign something that says "yes you are you for the next 24 hours".

I could keep the same keypair unless I want to "revoke" it. It also doesn't seem any more expensive, because in both cases the authority is just signing something?


Not a stupid question. In fact, Google proposed requiring either OCSP or a short-lived certificate on exactly the basis you're describing, namely that they have the same consequences; Google's infrastructure is already set up to allow short-lived certificates without problems. That proposal didn't go anywhere, but there's nothing wrong with it in theory.

In the real world, though, a lot more Certificate Authorities are set up to bulk-issue OCSP responses. In theory OCSP responses could be generated "live", but in practice what happens is that periodically (e.g. once per day) you decide which certs are revoked, bulk-sign OCSP GOOD answers for all the other certs, and upload those pre-signed answers to a CDN. Now your signing infrastructure is isolated from the uncontrollable horde of requests.
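Concretely, the bulk pre-signing step looks roughly like this sketch using golang.org/x/crypto/ocsp (the issuer and responder certificates, the responder key, and the list of serial numbers are placeholders for whatever the CA's own pipeline provides):

    // Sketch: pre-sign a GOOD OCSP response for every non-revoked serial,
    // valid for 24 hours, ready to be pushed to a CDN.
    package ocspbulk

    import (
        "crypto"
        "crypto/x509"
        "math/big"
        "time"

        "golang.org/x/crypto/ocsp"
    )

    func signGoodResponses(issuer, responder *x509.Certificate, key crypto.Signer,
        serials []*big.Int) (map[string][]byte, error) {

        now := time.Now()
        out := make(map[string][]byte, len(serials))
        for _, serial := range serials {
            tmpl := ocsp.Response{
                Status:       ocsp.Good,
                SerialNumber: serial,
                ProducedAt:   now,
                ThisUpdate:   now,
                NextUpdate:   now.Add(24 * time.Hour), // clients must refetch tomorrow
            }
            der, err := ocsp.CreateResponse(issuer, responder, tmpl, key)
            if err != nil {
                return nil, err
            }
            out[serial.String()] = der // upload each blob to the CDN, keyed by serial
        }
        return out, nil
    }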

In contrast, a lot of CAs (not the biggest one in volume terms, which is Let's Encrypt, but most of the others) have humans involved operationally in issuance, which means higher issuance volumes drive up costs, and they'd have to recoup those costs by charging more. So for you as a customer of those CAs, OCSP looks more attractive.

In the non-stapling modality, OCSP also doesn't require any changes to your five-year-old unsupported custom server software, whereas replacing the certificates every week might mean somebody has to manually paste text from a file into a crappy web form and press "Replace certificate" every single week. Ugh. Such a server will of course turn out not to be capable of OCSP stapling, so if we want that we could have just upgraded it to support a sane certificate automation protocol rather than teach it how to do OCSP stapling...
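To make "sane certificate automation" concrete, here is a minimal sketch of the zero-touch style using Go's golang.org/x/crypto/acme/autocert; the domain name and cache directory are placeholders, and this is just one illustration, not the only way to wire up ACME:

    // Sketch: obtain and renew certificates automatically via ACME
    // (Let's Encrypt by default), answering the TLS-ALPN-01 challenge
    // on the same listener.
    package main

    import (
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            HostPolicy: autocert.HostWhitelist("example.com"), // placeholder domain
            Cache:      autocert.DirCache("certs"),            // keys and certs persisted here
        }
        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: m.TLSConfig(),
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("hello over automatically managed TLS\n"))
            }),
        }
        // Certificates are fetched on first use and renewed before expiry;
        // no weekly copy-and-paste into a web form required.
        srv.ListenAndServeTLS("", "")
    }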


I guess that issuing a new certificate requires some logging (for example, to the Certificate Transparency logs) and also domain verification, while stapling doesn't require anything like that, so it's a more lightweight operation.


Why can't the certificate issuer log a certificate, marking it as renewable for 6 months (or whatever), and not log the renewals?

That would require changing Google's standard, but it's a ridiculously small change.


Technically I don't see any difference. Whether you're signing a timestamp or a certificate, it's the same.


Issuers are expected to log every certificate in a public append-only log (most don't). And clients are expected to consult that log and report certificates that are not there (to whom?).

That creates a difference between issuing a certificate and merely attesting that it was not revoked.


I wondered the same thing when setting up OCSP stapling. My conclusion was that the largest benefit is to the issuer, because OCSP allows them to use a different private key for initial issuance and for status responses. So the issuing key could be stored with much stricter security.

OTOH I assume that in reality half the world's CAs store their signing keys in an Access database on an unpatched Windows XP laptop, so it may be moot.


Technical communities like this have to accept their role in perpetuating false narratives and a naive perspective of the world where everyone acts in good faith.

Like the exploding complexity in browsers, touched on recently, which now makes it impossible for small teams to develop one, in effect rewarding billion-dollar companies and guaranteeing centralization and vested interests. Once done, these changes are nearly impossible to reverse, which makes it all the more important for the scrutiny to happen while it is happening.

Similarly, there is something completely disingenuous and false about those who have been pushing SSL on the pretext of 'concern' for end-user privacy and surveillance, when the tech community's response, in both comment and action, to Snowden's and Assange's revelations and to the invasive surveillance by Google, Facebook and others remains embarrassing if not non-existent, and in the case of the latter even supportive, again promoting centralization and a few interests.


Thank you, although I fear you may get downvoted to oblivion for your tone.

Something that people need to consider is that the tech folks behind things like CT and cert pinning (many of whom I know personally) have true technical motives, but their employer entertains them because it protects against ad injection.

We haven't seen robust development of alternatives like DNSSEC not because they are worse, but because it isn't in the commercial interest of the powers-that-be.


We haven't seen robust development of DNSSEC because it is worse.


Hmm, went into this article skeptical, but came out agreeing with it more than I expected. Makes some excellent points, and highlights some real challenges of the ecosystem.

But actually, I'm optimistic about Web security in general. Especially with regard to HTTPS, things have improved a lot, and it looks like they will continue to get better! We really have come a long way in a few years toward baking privacy into the Internet. In general, nowadays attackers have to be well-funded, have a lot of time on their hands, and/or focus on specific targets to break this encryption. (Aside from the usual side channels, 0-days, etc., which exist, but which are also a different problem.)

1. Private keys staying private: a few years ago there was a catastrophic flaw, Heartbleed (in OpenSSL, which most web servers rely on), that handed out private keys like candy. These flaws are rare, and modern web servers written in memory-safe languages are much less susceptible to such vulnerabilities (cough Caddy). Of course, securing one's system to prevent break-ins is paramount.

2. Revocation: It's still mostly broken, because it's generally not enforced, and when it is, that can take days. Recent research that I believe has yet to be published proposes a way to scale CRLs at the regional or organizational level with 99.99+% cache hit rate. Fun fact: with Let's Encrypt, you can revoke anyone's certificate... if you have their private key. Another fun fact: revocation is really, really, really not awesome when it's used as a censorship weapon.

3. OCSP: Again, I agree with the article here. Vanilla OCSP is not awesome. OCSP stapling, however, is actually pretty awesome! But only if done right. Unfortunately, only one web server does it "right" out-of-the-box (cough Caddy). OCSP stapling is broken enough in most mainstream web servers that I would generally advise against using them with Must-Staple certificates. More details: https://gist.github.com/sleevi/5efe9ef98961ecfb4da8#gistcomm... and https://caddyserver.com/blog/certificate-management-policies -- basically, OCSP stapling is great when web servers implement it robustly and conservatively, and when it doesn't require any configuration. Highly recommended in that case. (A minimal stapling sketch follows this list.)

4. Key rotation: Again, I know of only one web server that does this by default and automatically (cough Caddy).

5. Certificate transparency: As of now, there is not enough useful consumption of CT logs. We have TBs of data but no one to read them. So, we're close to integrating a CT monitor into Caddy so that your web server can report if it finds a certificate issued for a name it is serving that it did not request: https://github.com/mholt/caddy/pull/2274 (And even if a misissuance is identified, what is one to do about it?)
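To make point 3 concrete, here is a minimal sketch of the stapling mechanism itself, using Go's standard library and golang.org/x/crypto/ocsp. This is not Caddy's implementation; the file names are placeholders, and the hard parts a production server must add (periodic refresh before NextUpdate, backoff, keeping the last good staple when the responder is down) are exactly what's left out:

    // Sketch: fetch a signed OCSP response from the CA and staple it to the
    // TLS handshake.
    package main

    import (
        "bytes"
        "crypto/tls"
        "crypto/x509"
        "io"
        "net/http"

        "golang.org/x/crypto/ocsp"
    )

    // fetchStaple asks the certificate's OCSP responder for a signed status
    // and returns the DER response to attach to the handshake.
    func fetchStaple(leaf, issuer *x509.Certificate) ([]byte, error) {
        req, err := ocsp.CreateRequest(leaf, issuer, nil)
        if err != nil {
            return nil, err
        }
        // Assumes the certificate lists at least one OCSP responder URL.
        resp, err := http.Post(leaf.OCSPServer[0], "application/ocsp-request",
            bytes.NewReader(req))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        der, err := io.ReadAll(resp.Body)
        if err != nil {
            return nil, err
        }
        // Check the responder really signed a response for this certificate.
        if _, err := ocsp.ParseResponseForCert(der, leaf, issuer); err != nil {
            return nil, err
        }
        return der, nil
    }

    func main() {
        // cert.pem is assumed to contain the leaf followed by its issuer.
        pair, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
        if err != nil {
            panic(err)
        }
        leaf, _ := x509.ParseCertificate(pair.Certificate[0])
        issuer, _ := x509.ParseCertificate(pair.Certificate[1])

        if staple, err := fetchStaple(leaf, issuer); err == nil {
            pair.OCSPStaple = staple // sent to clients in the handshake
        }

        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: &tls.Config{Certificates: []tls.Certificate{pair}},
        }
        srv.ListenAndServeTLS("", "")
    }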

I want to emphasize that just because PKI is not a totally solved problem does not mean that every site should not be using HTTPS. Surely this will be controversial (even though, in my mind, it is not a question), but I'm going to sleep for the night so I won't be able to answer right now. (And I'll pre-empt the argument that HTTPS, or at least DV certs, are bad because it makes phishing sites look safe: that's not why PKI was invented. I'll go out on a limb and suggest that even phishing sites deserve HTTPS... not because of the merits of the site, but for the sake of the poor victim.)

(Note: I am the author of the Caddy web server.)


For 2., note that Let's Encrypt just automates a facility every CA provides (and is required to provide). A compliant CA must have a means by which you can point out that a private key has been revealed, and they'll revoke certificates for the corresponding public key. It's just that for Let's Encrypt this is automated the same way as everything else.

You SHOULD NOT post private keys; if you find a key that you think shouldn't have been shown to you, you can prove you have it without posting it anywhere. That's what Let's Encrypt does in the revoke-with-key modality. A poor man's way to do this with tools a typical Linux system has is to construct a CSR using that private key, but with a Subject that makes it obvious this is a bogus request, e.g. CN="mholt should not have this private key" instead of any real subject. Bad guys don't learn the private key by seeing this CSR, but everyone (who understands cryptography) learns that you must have the private key or its moral equivalent.
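A rough Go equivalent of that poor man's proof (the key generation below stands in for parsing the key you were never supposed to see, and the CommonName is just the example string above):

    // Sketch: build a CSR whose Subject makes the request obviously bogus,
    // proving possession of the private key without publishing it.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "os"
    )

    func main() {
        // Stand-in for the leaked key; in reality you would parse the PEM
        // file that was shown to you.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        tmpl := x509.CertificateRequest{
            Subject: pkix.Name{
                // Deliberately absurd subject: the point is only to prove
                // key possession, not to get a certificate issued.
                CommonName: "mholt should not have this private key",
            },
        }
        der, err := x509.CreateCertificateRequest(rand.Reader, &tmpl, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
    }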

The requirement to revoke if a private key is revealed is why that craziness with the reseller happened earlier this year: they'd been keeping all their customer keys in escrow (if you allowed them to generate the key; never do this), and when they said "Oh, these keys are leaked" they sent all the keys to the Certificate Authority. The CA went "Oh, these really are the private keys, huh" and revoked all the affected certificates. The Baseline Requirements don't have an option for "But, but, my reseller is an idiot, pretend you haven't seen the keys". So don't trust your idiot reseller with your keys, and then they can't do that to you.


One note regarding revocations.

Recently the private key for a Brazilian bank's main domain (and internet banking frontend) was leaked. The leaker tried to ransom the bank but wasn't satisfied, so he went to the press with a detailed report of what he had, including a message signed with the bank's key as proof. The bank denied everything.

But the more interesting thing was that, when confronted about the key, they said it was indeed legitimate, but that their site had already been using a new certificate for a while, so everything was OK. And part of the press bought it, including sites targeted at technical audiences. That's how little many people in the real world know about how PKI actually works.

It took a few weeks until the leaked cert was finally revoked. And now I wonder if it was really the bank who did it.


> 4. Key rotation: Again, I know of only one web server that does this by default and automatically (cough Caddy).

This misses the point: whilst nginx/Apache don't handle this for you, automated ACME clients like Certbot (and e.g. Lego) do, and will ensure you get a fresh keypair at renewal.

> 5. Certificate transparency: As of now, there is not enough useful consumption of CT logs. We have TBs of data but no one to read them.

Don't these sorts of monitoring services already exist? https://sslmate.com/certspotter/ https://developers.facebook.com/tools/ct/

I don't disagree that adoption is probably low, but it seems more appropriate for a specialised service (as part of your existing monitoring infrastructure) to handle this.
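For what it's worth, the log side of this is a thin HTTP+JSON API. A minimal sketch of polling a log's signed tree head per RFC 6962 (the log URL is a placeholder; a real monitor would also page through /ct/v1/get-entries and match certificate names against the domains it watches):

    // Sketch: read a CT log's current signed tree head.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    type signedTreeHead struct {
        TreeSize  uint64 `json:"tree_size"`
        Timestamp uint64 `json:"timestamp"`
        RootHash  string `json:"sha256_root_hash"`
    }

    func main() {
        // Placeholder log URL; every RFC 6962 log exposes the same endpoint.
        resp, err := http.Get("https://ct-log.example.org/ct/v1/get-sth")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var sth signedTreeHead
        if err := json.NewDecoder(resp.Body).Decode(&sth); err != nil {
            panic(err)
        }
        fmt.Printf("log has %d entries as of timestamp %d\n", sth.TreeSize, sth.Timestamp)
    }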


> This misses the point: whilst nginx/Apache don't handle this for you, automated ACME clients like Certbot (and e.g. Lego) do, and will ensure you get a fresh keypair at renewal.

So you're agreeing with him: nginx and Apache don't do it by default and automatically, they need an external tool. Which is yet another step to add and monitor as an admin, which means fewer people will go through the hassle of doing it.


>So you're agreeing with him: nginx and Apache don't do it by default and automatically, they need an external tool. Which is yet another step to add and monitor as an admin, which means fewer people will go through the hassle of doing it.

The vast majority of people using Let's Encrypt are going to be using an external tool such as Certbot to automatically generate and renew their certificates for them. It's the officially endorsed method for getting certificates [1]. It's more hassle to do it without such tools.

Proclaiming that existing solutions don't do key rotation is therefore misleading. The _web server_ in isolation might not be directly responsible for it, but then Apache and Nginx aren't responsible for injecting sponsor advertisements into your HTTP response headers either, so there's perhaps some precedent for Caddy doing more than it should.

[1] https://letsencrypt.org/getting-started/


This is the officially endorsed solution because the Let's Encrypt project assumes none of the web servers have this functionality out of the box. Caddy does, and that's what the whole discussion is about.

> more than it should.

We're entering subjective territory here, but I think you can agree that doing proper HTTPS with rotating certificates is probably a "should" for easy-to-use servers in 2018.


The Unix way is small tools for small jobs.

If you have to debug the OCSP renewal, would you rather debug an entire web server (and update it), or just the behind-the-scenes renewal tool, which doesn't require an interruption to ... web serving?


I know and agree with that philosophy; however, there is a limit to how much you can decouple things and keep the whole system maintainable. Just look at the woes of our peers embarking on the microservices journey. This is especially relevant for Caddy, which focuses strongly on ease of use. Now of course it's not the perfect answer for everyone; different solutions for different people, I guess.


First I have to admit I know nothing of Caddy. This is the first I've heard of it.

However, I don't think this is a case of different strokes. Even at a hobby level (one web server instance, not load balanced, per site or group of sites), in today's infra we can expect use of Let's Encrypt and therefore Certbot, can we not? I mean, if we're talking about stapling at all, we're talking about enough infra that we do the type of automation that will include Certbot.

Once you're at that point, I cannot agree that it's easier, and more manageable, to include the functionality within the web server.

Perhaps if Caddy integrated acquiring and renewing the cert itself, not just stapling, then I'd have a different opinion.

Now at the point where you are load balancing, with or without an HTTPS proxy, in my experience debugging and maintaining smaller components is easier. Yes, interactions can create hard-to-debug problems, but large complex "monoliths" are worse. And we are talking about a sufficiently discrete component here. That said, I'm not an SRE. Back before SRE was a thing, I was a "LISA" sysadmin though.


Yeah you should take 2 minutes to check out Caddy: https://caddyserver.com/docs/automatic-https

> Perhaps if Caddy integrated acquiring and renewing the cert itself, not just stapling, then I'd have a different opinion.

It does, so.... does that change your opinion? (That's, like, the point of Caddy.)


I think a valid point of criticism is that this increases the centralization of the web.

I think it's already a notable shift that an HTTPS server must periodically connect to the internet in some way to get a renewed certificate - but with OCSP Stapling, the requirement seems to be that the server queries the CA in realtime, i.e. has a permanent internet connection.

All of this is clearly necessary to keep HTTPS secure (as the article described very well) and wouldn't be a problem if the end goal weren't to make HTTPS the only option to serve web pages - but as things are going now, together with DoH, it really feels that browsers have changed from being tools to view HTML documents to front-ends for yet another platform.


> I think a valid point of criticism is that this increases the centralization of the web.

this = ??? OCSP stapling?

I fail to see how it increases centralization.

> I think it's already a notable shift that an HTTPS server must periodically connect to the internet in some way to get a renewed certificate - but with OCSP Stapling, the requirement seems to be that the server queries the CA in realtime, i.e. has a permanent internet connection.

Both points you've made here suffer from oversimplification.

1. The web server does not need to connect to the internet; an agent does. And the agent doesn't need the private key either, just a CSR signed with it, so the server never has to share the key.

2. The server does not need to query in real time, just once a day, and it can be done in advance, not "on demand".

> All of this is clearly necessary to keep HTTPS secure (as the article described very well)

Perhaps this is just a difference of opinion, but I don't think the article makes the case that this helps at all. Consider a state-run CA that mints certificates for state purposes (and of course client browsers are forced to use the trust anchor). It simply doesn't include the stapling option in the cert, or it points to its own OCSP server.


Thanks Matt, I love Caddy and use it for most of my sites. It's a real example of "it just works".


> But actually, I'm optimistic about Web security in general.

Care to elaborate? HTTPS is neither the most important nor the most challenging aspect of "Web security in general". I guarantee that when you go to any Marriott website, you are getting to it via HTTPS.

Most phishing destinations are https-protected.

etc.


For 3: other servers, e.g. H2O, do stapling out of the box too.



