Firefox 32 Supports Public Key Pinning (monica-at-mozilla.blogspot.com)
188 points by jonchang on Aug 27, 2014 | hide | past | favorite | 100 comments



I wish that this sort of stuff would come down to API-level interfaces.

For example, for the longest time Python's SSL library wouldn't even verify SSL certs:

https://wiki.python.org/moin/SSL

And would gladly connect to MITM'ed sites. I think this is rectified, but the information I've found is conflicting.

It seems to me that securing API endpoints is even more important than securing end users, as a much larger quantity of data could flow through an API than through a browser.


Python's SSL state has been worse than other languages' because of the 2.x -> 3.x transition: 2.7 was left broken for longer than ideal. Eventually, they decided to backport most of the network security improvements (http://legacy.python.org/dev/peps/pep-0466/); see the timeline there.

Notice that this applies to the standard library; many people use the requests library, which not only offers a superior API but also more security by default.
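For anyone landing here later, the fixed behaviour is easy to demonstrate with the post-PEP 466 stdlib API; a minimal sketch (the hostname handling is illustrative, not a complete HTTP client):

```python
import socket
import ssl

# ssl.create_default_context() (Python 3.4+, backported to 2.7.9 by
# PEP 466) turns on chain verification and hostname checking, unlike
# the old bare ssl.wrap_socket(), which verified nothing by default.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

def fetch_head(host):
    # The handshake raises an SSL verification error on a bad or
    # MITMed certificate instead of silently connecting.
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return tls.recv(1024)
```

requests performs the equivalent verification by default on every call, which is why it's the usual recommendation.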


If you are making SSL requests with Python you will want to use service_identity[0] to perform certificate validation. Hynek has some good information on his blog[1].

[0]: https://github.com/pyca/service_identity

[1]: https://hynek.me/talks/tls/


I wonder how this is going to work. I've been using an add-on to pin certificates for half a year now and it's hell on some websites. It worked nicely for my bank for a while, but they now employ the same technique as Google, Twitter, Facebook, Akamai, etc., changing certificates and even their CA seemingly at random. You'd think I was being MITMed, but I'm pretty sure that's not actually the case.

Edit: I should read more closely, found it:

> the list of acceptable certificate authorities must be set at time of build for each pinned domain.

So it's directly in Firefox's source code right now. Pretty much useless for anyone but a few big sites.

And the pinning RFC doesn't sound much better. It makes the client store something about sites they visited, which roughly translates to supercookies.


> And the pinning RFC doesn't sound much better. It makes the client store something about sites they visited, which roughly translates to supercookies.

I don't follow. If the user visits a site for the first time (ever) over a secure connection, they will become much more resilient to MITM for all future connections (including the ones where it updates).

That's a win in my book. At least it is a win against the "hacker" MITM threat. It won't be as useful against states/governments, since there might never be a secure connection at all.

But I'd take a solution NOW that works-ish over a flawless solution that may never arrive. Security in depth and all that jazz. The ultimate solution is some kind of secure DNS infrastructure that delivers information about HTTPS certificates (which I believe is also in the works).
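The trust-on-first-use idea above can be sketched in a few lines (a hypothetical toy pin store, not how Firefox actually stores pins):

```python
import hashlib

# Hypothetical trust-on-first-use pin store: remember the hash of the
# public key seen on first contact, and treat any later mismatch as a
# possible MITM.
class PinStore:
    def __init__(self):
        self._pins = {}  # host -> sha256 hex digest of the SPKI bytes

    def check(self, host, spki_der):
        fingerprint = hashlib.sha256(spki_der).hexdigest()
        known = self._pins.get(host)
        if known is None:
            self._pins[host] = fingerprint  # first visit: pin it
            return True
        return known == fingerprint  # later visits must match the pin

store = PinStore()
assert store.check("bank.example", b"key-A")      # first visit, pinned
assert store.check("bank.example", b"key-A")      # same key: fine
assert not store.check("bank.example", b"key-B")  # changed key: alarm
```

This also shows the failure mode discussed downthread: a legitimate key rotation looks exactly like an attack unless the site announces the new pin in advance.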


I'm not saying it's entirely bad, but it's something some users will want to disable for privacy concerns. The current model works well enough to allow widespread online banking and although this RFC will certainly make it more secure, there are also disadvantages.

I happen to know that Chrome hardcodes a list of EV certificates (or at least I read so a while ago); it could do the same for CAs. Or the browser could ask a central server which CA belongs to a certificate fingerprint. Not that different from OCSP, except that it'd probably be run by the browser manufacturer instead of the CA.


> If the user visits a site for the first time (ever) over a secure connection, they will become much more resilient to MITM for all future connections

... as long as the site never cycles its certificates, and bugs such as Heartbleed never force it to.

Not a win in my book. In practice, this will be either useless or amount to a tiny nagscreen that users click through with their brains on autopilot.


  And the pinning RFC doesn't sound much better. It makes the 
  client store something about sites they visited, which 
  roughly translates to supercookies.
So what? The server can't read it back out of the client store again. The comparison to cookies is completely spurious.


> The server can't read it back out of the client store again.

Actually it can, by making the client connect to various subdomains with various certificates (valid or not). Like with image caches and everything else that supercookies use, something that is meant to be cached between requests is always detectable.

> spurious

Nice day to you, too...


Okay, maybe not entirely spurious, but how much information can a malicious tracker actually extract using HPKP + invalid certs, and the resulting failed handshake?

  1. The client is new enough that it supports HPKP 
  (information you get from the default UA string anyway, 
  but could be used to smoke out someone using a blank UA 
  string)

  2. The client has connected to this server once before.
That's, what, one bit? A bit and a half?

You could add a new subdomain once a day, then invalidate the cert 24 hours later, to get more bits of information, but this is an attack that requires buying a new cert once a day, forever, which costs money. (StartSSL gives you one free cert, but only one) (It wouldn't cost a state sponsored attacker any money to create certs, of course...)

Is there even any way to implement PKP that wouldn't leak client information via failed handshakes? Is killing pervasive monitoring worth enabling an expensive supercookie attack?

(One dumb mitigation: ignore handshake failures due to PKP, proceed with the connection as normal, but throw away the data. You'd have to load and parse the requested page in the background, though, or else the attacker would notice that the client isn't requesting inline images/scripts...)

---

EDIT: Ah, the draft mentions the supercookie attack: http://tools.ietf.org/html/draft-ietf-websec-key-pinning-20#...


> EDIT: Ah, the draft mentions the supercookie attack

I didn't actually know that. Nice to see they thought of it while designing, but beyond the attack scenarios I don't see anything that can be done about it, for users or for websites...


This isn't really intended for the end user to use. It's for a web service to let Mozilla know that it will only use XXX root CA to issue its certs; Firefox then only trusts valid SSL certs from the proper root CA for that web service.

Google has had pinning enabled in Chrome for its own properties for a while (and for some others now, I believe). In fact, that is how they detected an intrusion at one of the other root CAs.


> In fact that is how they detected an intrusion in one of the other Root CAs

I know that and that's great, but why only Google? Sure it is Chrome we're talking about, but it is still a mainstream browser. They could make it a bit more neutral by including, say, https-enabled sites from the Alexa top 10k.

> This isn't really intended for the end user to use.

Of course, what I meant was how small websites will get their sites secured. Do we have to do a pull request for every browser out there every time we order a new certificate? It doesn't seem manageable for the developers nor the browser builders.


Well nothing is stopping a top site from changing their Root CA, so it would be impossible to ship pinned certificates without coordination between Google and the site.


> I've been using an add-on to pin certificates for a half a year now and it's a hell on some websites.

Do you mean Certificate Patrol? I also tried it and have given up. Way too "noisy". Let's hope that Firefox eventually implements a more refined solution.


Does anyone know why they didn't go with TACK?


TACK is a TLS extension. It would have to be added to the TLS 1.3 RFC.


TACK doesn't have to be added to any RFC in order for browsers to adopt it. They could have integrated TACK, but chose not to. I wish they'd go the other way on that.


Sure, unless you want a standardized interface that everyone can reliably implement. Once it's in the RFC, people can start implementing it, but if you implement before there's a published standard then you're stuck with a broken implementation, or you break backwards-compatibility.


So you'd prefer Firefox hadn't implemented public key pinning? Because it's not in the RFC either.


It wouldn't be the first RFC to begin life as an implementation instead of a spec document.


Which is the right way for an RFC to begin!


I see, thanks.


Maybe FF and Google should just become CAs? It would remove the extra step, as currently both pinning and registering with an existing CA are required.


They are.

Google for sure, and Mozilla maintains a list of them that ships with Firefox. If that isn't the ultimate level of trust (deciding which CAs your browser will trust) then I don't know what is.


Both Google and Mozilla have their hands in some ways tied about which CAs they support. Their browsers have to work on the Internet as it is, not the Internet as they want it, and so both certainly include CAs that their owners would prefer they didn't.

Also, of course, there's the fact that both Mozilla and Google give their users control over which CAs they include. The UI for that functionality is unfortunate. But on the other hand, if generalist developers really want to stick it to NSA, that's a good project they can work on without screwing over users with bad cryptography.


That must be a legal minefield, as they'd then be in competition with established CAs that use their browser as a sort of platform (unfair competition or something). Being a CA also seems to be a messy business; I don't know if Mozilla would have the resources for that. Google seems to have a CA of its own, but doesn't seem to be signing certificates other than its own with it.


I've been told by people on those teams that the legal issues behind this (or any other interference with the CA system) are a big concern.


Why should we trust them?


With the current setup, people are trusting Mozilla/Google to: give them the correct software, update silently, determine which CA certificates to trust by default, and determine which certificates are valid by pinning.

The CA is trusted to do one thing: determine which certificates are valid.


Not really. For Firefox, anyone can build from source (see also Iceweasel) and disable automatic updates. For Chrome it's mostly the same, but with Chromium instead.


For people who build from source, all control rests with the user. They are responsible for their own security, and they do not need to trust anyone. The question of who should have trust invested in them does not involve them, as they operate outside the system.


Can you verify that the binary download of Firefox is compiled from that source unmodified?


There's work in progress to allow this. https://bugzilla.mozilla.org/show_bug.cgi?id=885777


Just building from source doesn't guarantee anything.

Firefox is giant. It shouldn't be hard for a malicious party — should one appear someday — to hide some tiny backdoor somewhere in a more-than-a-hundred-megabyte source code tarball.

Verifying GPG signatures of the tarball could prevent some (but not all) issues, but from my observations it's rarely done. And when I have seen it done, the public key's origin wasn't thoroughly verified; it was just blindly `gpg --recv-keys`'d from a keyserver.


This comes up again and again. Sure, the individual user is unlikely to wade through the complete delta for every published version, but it's not uncommon for packagers to be involved upstream as well (with a few unfortunate outliers, of course). Backdoors have been caught this way before.


There's a difference between "building from source" and "obtaining software from a trusted party". Your suggestion (trusting the packagers) implies the latter and is almost irrelevant to the former. If one trusts a team to catch possible issues, one may trust the binary this team builds as well.


Yes, why should we trust our browser company with the security of our SSL connections. That's like locking your car with a key made by the company that manufactured it!


> Other mechanisms, such as a client-side pre-loaded Known Pinned Host list MAY also be used.

Fantastic addition, IMO. You could distribute/sync hash lists online and offline. Awesome.


The problem I've run into with public key pinning is captive portals. Mobile operating systems or browsers need to provide a better user experience for captive portals.


iOS has had a good user experience for captive portals for a long time (since iOS 3, I think), as has Mac OS X. They automatically detect when a captive portal is present by making a background connection to an Apple-owned property and trying to download a small text file; if that fails for any reason, they bring up a modal panel showing the captive portal and let the user log in; then they detect when Internet access is active and close the panel (or, lately, let the user close it with a "Finish" button). What's more important is that the operating system and applications are not given the "network is on" signal until the whole process is finished, so you don't get a dozen failures from applications that try to refresh in the background and can't reach their backend servers.

This said, I really hate that the Wi-Fi Alliance has left us in this very sad state. Like many similar committees, they move too slowly; captive portals arose from a real-world problem that wasn't solved by the Wi-Fi standard, so people had to come up with weird DNS/HTTP interceptions that fail in so many regards it's not even funny. If there's somebody to blame, it's not operating systems for not adding weird heuristics like Apple did, but the Wi-Fi Alliance for not bringing a good solution for handling hotspots to market soon enough.


OS X's solution is really nice; I only wish other platforms would do the same. Microsoft does a similar connectivity check on Windows, yet doesn't deal with the portals.


I believe both Windows and Android give you a notification that there is a captive portal active, but they just open it in your default web browser instead of a dedicated one.


I wish they'd do the dedicated thing. I hate opening my browser to see all my tabs showing either warning triangles or portal pages. I want to get that over with before opening my browser.


Could it be that Microsoft won't "force" the user to use Internet Explorer because of anti-trust concerns?


Doubt that. You wouldn't be forced to use it here anyway.


>if it fails for any reason, they bring up a modal panel showing the captive portal

AFAIK, technically, there's no standard address for a captive portal's location, so iOS probably tries to navigate to a website. The router then poisons the DNS response and the captive portal loads. Android devices do the same thing, to my knowledge. My Samsung pops up a notification that, when tapped, opens the browser and tries to navigate to Google. If a login happens before the user opens the notification (for example, via an app running a background service like the Fonera app), the notification is cleared automatically.


Actually, most captive portals don't poison the cache (because that would break the web site you tried to load), but answer on port 80 for any and all IP addresses.


To my knowledge you can use a TTL of 0 and then you have no problems.


Wouldn't that cause problems with (a) browsers that cache dns for a little while anyways, and (b) clients arriving on the captive net but already holding a long-lived cached proper dns result?


I've never used iOS so I didn't know it does this, but I've always wondered why Android doesn't do the same thing.



It has a notification telling the user that they need to sign on, but it also tells other applications that a Wi-Fi connection is available, which causes all of them to try to sync.


Apple customers, for whom phoning home on every network connection is good UX. What is wrong with you people?

I had this behaviour crash a captive portal daemon once; it was fun tracking down the owner of the device in question in the building. "CapDaemon crashed, the request came from around room A2.011!", "Let's get him before he's gone!", "Hey, mister, do you own an iPhone or iPad? Yes? Please come with us. We need to have a debug session with you."


It is good UX. For privacy concerns, it's already phoning home on every network connection for push notification registration (at the very least), so your objection is moot to me.

To handle this less centrally, you would need a distributed list of URLs to be used for captive portal checks, with servers handled by different entities, and each device selecting one at random from the list. This wouldn't change the UX though.

For your other remark, if the captive portal crashes for a standard HTTP GET to a normal URL, you can't really blame this on anybody else but the captive portal developers.


  To handle this less centrally, you would need a distributed 
  list of URLs to be used for captive portal checks, with 
  servers handled by different entities, and each device 
  selecting one at random from the list.
I dimly remember that the way Android does it is by requesting a list of randomly generated garbage domains (like http://aghepodlln/) and seeing if they get a response. If they do: captive portal.


Here are the major implementations:

http://www.apple.com/library/test/success.html

http://clients3.google.com/generate_204

http://www.msftncsi.com/ncsi.txt

And some other implementations of dubious longevity:

http://start.ubuntu.com/connectivity-check

http://network-test.debian.org/

Now, if only NetworkManager could add support for this.
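A sketch of what such a check looks like, using two of the probe URLs above (the injectable `opener` is just there so the logic can be tested offline; real implementations also handle redirects and retries):

```python
import urllib.request

# Each OS fetches a known probe URL with a known response; anything
# else (a login page, a redirect, a timeout) suggests a captive portal
# intercepted the request.
PROBES = [
    ("http://clients3.google.com/generate_204", 204, b""),
    ("http://www.msftncsi.com/ncsi.txt", 200, b"Microsoft NCSI"),
]

def behind_captive_portal(opener=urllib.request.urlopen):
    for url, want_status, want_body in PROBES:
        try:
            with opener(url, timeout=5) as resp:
                if resp.status != want_status or resp.read() != want_body:
                    return True  # response was rewritten on the way
        except OSError:
            return True  # probe unreachable: treat as captive/offline
    return False
```

The trick, as described for iOS above, is to run this before telling applications that the network is up.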


> so your objection is moot to me

"...because Apple already fucked it up elsewhere so double fuckup doesn't give extra points."

> you can't really blame this on anybody else but the captive portal developers.

I don't think I did; that's just how I found out about this particular Apple-ism. Also: this behaviour enabled us to track down that device and its owner physically, too. Think about the sensibility of this "feature" in that light. Every other device would have enjoyed relative anonymity amongst the other devices in the building.


We're speaking of a device where Apple can silently push kernel-level code over-the-air at any moment. Surely an extra IP connection to an Apple server doesn't change anything in the picture, but it does help millions of people use their devices without getting weird error messages or certificate errors every time they connect to a hotspot.

If you don't trust Apple with your IP, you shouldn't use a device which runs their kernels; that's a no-brainer. If you do trust them, though, you might appreciate the way they jump through multiple hoops not to handle too much of your sensitive personal data; see for instance the design of iCloud Keychain: http://www.apple.com/ipad/business/docs/iOS_Security_Feb14.p...

So it's not like the impact of an IP connection is not taken into account in design considerations; it's just that it doesn't look so important in the big picture of the security and privacy implications of using such a device. For personal passwords, different avenues are taken.


What kind of crazy bug did CapDaemon have that a single iOS device was able to crash it without even trying?!


I don't remember... it was perl?


Don't captive portals already have a problem with every HTTPS website? I don't see why PK pinning would cause any problem that doesn't already exist.


Yeah, many captive portals do have a problem with HTTPS, in my experience from 50+ hotels in the last few years.


Which is not surprising at all, since the whole point of HTTPS is to prevent anyone other than Google from displaying a valid web page at https://www.google.com/

If a captive portal is able to display a valid web page when you try to visit an HTTPS website, that would be a serious red flag.


My solution is to open http://example.com/ (instead of, e.g., Google) after connecting to Wi-Fi. It doesn't use SSL (so it works), and I have little use for it the rest of the time (so if the Firefox DNS cache gets messed up, it doesn't matter).


Wrong way round. Captive portals need to provide a better user experience for browsers, one that doesn't involve an MITM! See 802.11u: there's no excuse for not using it.


Excuses for not using 802.11u do exist: client support simply isn't there. iOS has only supported it since iOS 7, Android doesn't support it (and generally won't for many years, even if Android L includes it), Windows doesn't generally support it, OS X doesn't appear to support it, and the list probably goes on. It will take 5 years at minimum before hotspot owners provide this functionality, as there needs to be major client adoption first.

Just because your solution is problematic doesn't mean your point is incorrect, however. The problem here is indeed with captive portals.



When you support DANE, you trust the various governments responsible for the various country domains not to MITM the owners of said domains.

I'm sure everybody has a different subset of governments they would be putting in the "trustworthy" bucket and it's not up to the browser vendors to make a political statement there.

Browser vendors would have to trust all governments equally, and AFAIK none of them have publicly stated their policies regarding MITMing DNSSEC, nor how well they protect their DNSSEC signing keys.

CAs have to follow quite rigorous protocols if they want to be included in the browser default list and they have all financial incentives to comply.

Governments don't have to follow anything, and even if they did, they would have every incentive not to comply.

This is why DANE, while otherwise sounding like a really good idea, is ultimately doomed to failure. No browser wants to take responsibility for less-than-stellarly performing governments and no browser wants to make a political statement by only supporting DANE for certain top level domains but not others.


The CA model is broken (500+ CAs, a race to the bottom on price, security not part of their business model). DANE is no better. What we really want is the ability to withdraw trust. There is no point for me to trust some Iranian CA; why should I want to? Yet today I have to trust it, because you can't just remove a CA without breaking a percentage of websites for yourself. And you can't simply erase a CA from existence, because many customers rely on a single CA and each of them would have a broken website.

Please read about Convergence by Moxie Marlinspike. It solves the removing of trust problem.


With DANE it's immediately visible whom you have to trust, and that can't easily be changed. If it's a .se domain, you know that only the Swedish government can MITM it; with the current CA model, any CA is able to authorize a MITM.


That would be great if looking at .COM wasn't an immediate guarantee that the USG could MITM something.


Don't use .com if you distrust them. It's still orders of magnitude better than the CA model.

The web site owner gets to choose which top-level domain to use and trust. It is not the end user who is supposed to evaluate how much trust to put in each CA. That alone is the most important point right there.


This makes absolutely no sense to me. DNSSEC is a forklift upgrade of a key piece of the architecture of the Internet. We should incur that cost so that all of the most popular sites on the Internet will end up with the USG as their CA? And that's "orders of magnitude" better than what we have now?


Today, for a .com, there are a large number of CAs (let's call it 100?) that can sign a cert. Additionally the registrar or the registry (VeriSign) can change NS and DS records due to a US court order (or otherwise) and the new destination could get a domain control validated certificate.

If DANE were adopted and the current CA system abolished, then the registrar or the registry could still change the NS and DS records to takeover a domain, but that takes us from 100+ parties capable of signing a cert to 2 parties that are already part of the system.


It is. The trifecta of three-letter agencies can expropriate .com domains and generate valid certificates for them today. But they can't do this for most of the other TLDs.

No cryptography in the world can protect you from a fully legal domain transfer. So who better to be your CA than the registrar who has this power anyway?


Also, not a "forklift upgrade". The work is done. DNSSEC is live.

Your resolver probably already supports it. Your TLD probably already publishes the records.


You can look at TACK as a baby-step towards Convergence.


DANE is about 500x better than the current CA model, using your estimation. All CAs are allowed to issue certificates for almost every (not entirely true, but pretty much so) domain, while only one top domain is involved in DANE trust.

I don't need to trust the Iranian top domain (which is what you mean, not "government") for all of the web that is not in .ir. I don't know about you, but 100% of my web use falls in that bucket.

And if you want to visit an .ir domain, you need to trust the Iranian domain registry regardless of whether you use CAs, DANE or even Convergence. If they change the legitimate owner of the domain (which is what you mean here!), they can just as easily get a legitimate SSL certificate under every trust model in practical use.

(Indeed, if a legitimate owner of a domain couldn't get a legitimate certificate, there'd be no possibility to do things like rotate your keys and change your certificate. That would be an even bigger problem than broken CAs.)

So DANE makes sense. And while there are known issues with DNSSEC, trusting governments is not one of them. Not more so than with any other model.


DANE is just the hash of a site's cert stored in DNS as a TLSA record. I get that it is then signed with DNSSEC, but I don't see how this involves any governments; it's still the site operator putting the hash of their cert in the TLSA record. Can you elaborate on how you're "trusting governments" when using DANE?
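For concreteness, a sketch of the record in question (RFC 6698 presentation format; usage 3 / selector 1 / matching type 1, i.e. a SHA-256 over the SubjectPublicKeyInfo, computed here over dummy bytes):

```python
import hashlib

# The site operator publishes this record at _443._tcp.<domain> and
# signs the zone with DNSSEC; no CA is involved in producing it.
# Usage 3 = DANE-EE (match the end-entity key directly),
# selector 1 = SubjectPublicKeyInfo, matching type 1 = SHA-256.
def tlsa_record(domain, spki_der, usage=3, selector=1, mtype=1):
    digest = hashlib.sha256(spki_der).hexdigest()
    return "_443._tcp.%s. IN TLSA %d %d %d %s" % (
        domain, usage, selector, mtype, digest)

record = tlsa_record("example.se", b"dummy-public-key-der")
assert record.startswith("_443._tcp.example.se. IN TLSA 3 1 1 ")
```

The government question comes in one level up: the validity of the record depends on the DNSSEC signatures of the parent zone, which for a ccTLD chains through the registry.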

IMO it's much easier for the NSA, GCHQ, etc. to compel a CA to give site operators broken certs than to deal with site operators rolling their own certs and using DNSSEC/DANE. Even if Sweden controls .se, example.se can create its own cert and stick its hash in its DNS. Does your model of "trusting government" enter the picture because the entire domain is then signed by .se?


Suppose you could use both the normal PKI and DANE? That way both a signed certificate and a DNSSEC MITM would be needed. Also, in some cases DANE alone is "enough" (unless they start to MITM all websites).


The difference is that you only have to trust the one your domain ends in, and you have to trust that one anyway.


It's about the users, though. You run SSL for the users, not for your own sake. If you as the server owner trust the government behind, say, .ch, that's fine for you. But if browsers supported DANE, then all of their users would also have to trust the government behind .ch.

Or rather: the users trust the browsers to show an SSL warning if there's an indication that the connection is being MITMed.

Today, the browsers trust CAs (hand-picked, subjected to stringent rules) to tell them whether a connection is being MITMed or not, and they then tell the users.

In the case of DANE, the UI presented to the user is the same (blow up or don't blow up), so the trust given by the user to the browser is also the same. But now the browsers can't rely on CAs that they control to some extent (via said rules and monetary incentives); they must rely on governments, often without oversight or clue and with every incentive to MITM connections.

That means that by trusting DANE, browsers force users to also trust DANE, and users might not want to trust some or all of the entities behind DNSSEC.


The percentage of government controlled TLDs is rapidly declining though. And making the CA an integral and evident part of the domain and url is a vast improvement to the current state of affairs.

Is the USA-controlled DNS root a problem here? Hopefully interference would require highly visible and suspicious changes to the DNS root.


I do not understand your first paragraph. If the last 5 years have demonstrated anything, it's that the US DOJ more or less controls the 3 most popular TLDs. I am not seeing the same decline in government control you do. Please be more specific?

Also: I have a really hard time seeing how baking a CA into the fabric of the Internet is a vast improvement over the current situation we have now, in which we are continually tormented by our reliance on CAs.


I meant what I said: there are 700+ TLDs listed at https://www.iana.org/domains/root/db and the percentage controlled by governments is declining - currently 297 country-code TLDs and a handful of other state controlled domains (including .com). ICANN has been giving out new top-level domains pretty generously and the vast majority of the new ones are non-state domains. Hence, the percentage of state/government controlled TLDs is declining.

No matter what the PKI system is, there will be more and less trustworthy actors around. I agree that many state-controlled TLDs are currently quite popular, but I don't see them as generally less trustworthy than the commercially operated TLDs. Both groups will contain some iffy elements, but I don't know if there's any way to build a system where iffy actors can't play. At least they can only mangle their own domains with DANE. And it sounds like DNSSEC should be quite a bit more tamper-evident than our current CA system.


Proposed ideas must be compared to the current situation, not to a utopia. DANE is not the best solution; trusting governments is a failure right there. But it is an improvement over the current mess.


This sounds identical to Google's CRLSet: basically a list of pinned certs inside the source code.

> In the future, we would like to support dynamic pinsets rather than relying on built-in ones. HTTP Public Key Pinning (HPKP) [1] is an HTTP header that allows sites to announce their pinset.

OK, cool. It requires an initial safe connection once, like HSTS.
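For reference, the HPKP header announced over that first safe connection looks roughly like this (pin values are dummy base64); a toy parser shows the moving parts:

```python
# A site announces base64 SHA-256 hashes of acceptable public keys,
# which the browser caches for max-age seconds after one safe
# connection -- trust-on-first-use, like HSTS.
def parse_hpkp(header):
    pins, max_age, include_sub = [], None, False
    for directive in header.split(";"):
        directive = directive.strip()
        if directive.startswith('pin-sha256="'):
            pins.append(directive[len('pin-sha256="'):-1])
        elif directive.startswith("max-age="):
            max_age = int(directive[len("max-age="):])
        elif directive == "includeSubDomains":
            include_sub = True
    return pins, max_age, include_sub

# Dummy pin values; a real deployment pins the live key plus a backup.
hdr = ('pin-sha256="cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs="; '
       'pin-sha256="M8HztCzM3elUxkcjR2S5P4hhyBNf6lHkmjAHKhpGPWE="; '
       'max-age=5184000; includeSubDomains')
pins, age, subs = parse_hpkp(hdr)
assert len(pins) == 2 and age == 5184000 and subs
```

Note the second pin: the draft requires a backup pin so that a key rotation doesn't lock returning visitors out.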


This is nothing like Google's CRLSet. CRLSet is just a way of collecting the CRLs from a ton of different CAs and pushing them out to Chrome browsers easily, without users having to individually download them all from the CAs.

Chrome has its own TLS pinning implementation that basically works the same way as Firefox's. See https://src.chromium.org/chrome/trunk/src/net/http/transport...


Meanwhile, Chrome doesn't even support OCSP (certificate revocation checking) for performance reasons, not even after Heartbleed.

I hope that doesn't sound like fanboyism, but not being able to communicate a certificate revocation properly is worrisome.


That's because OCSP doesn't work. The real-world Internet routinely breaks OCSP queries, which results in Firefox (and other browsers) soft-failing them: if OCSP doesn't work, the browser goes ahead with the connection. The security problem here is trivially observed.

TLS certificate revocation is a mess. We know what the solution will look like: it'll be something like HSTS, except that instead of caching "this site must use HTTPS", we'll also cache "this site must use OCSP stapling", and the OCSP data will be conveyed in-band in the HTTPS connection. It's hard to ding Chrome for not supporting something that doesn't really exist yet.

So, no: not "for performance reasons".

Incidentally: Chrome more or less invented certificate pinning.


OCSP is still useful; it's just not a panacea. Getting timely notification that cert A is invalid, or that server B should no longer be using cert A, is still good. It doesn't solve every problem (like somebody blocking access to the CA to prevent the OCSP check), but it's not worthless.

It can be defeated if the user or his ISP is pwned. It might kick in too late to properly save the user if the connection to the CA is too slow.

But it still does nicely handle the case where a server and its cert are no longer trustworthy.


OCSP mitigates attacks by adversaries who control cryptographic secrets and connectivity (or else those secrets don't do any good). It's defeated trivially by manipulating connectivity just enough to break OCSP. It's not useful at all.

This is a common pathology when thinking about security. Lots of things appear to work when they aren't being tested by a real attacker, but don't work at all when they are. It's like an insurance policy that you pay small premiums to that then vanishes when a disaster strikes.

Like I said upthread: there's a way to make something like OCSP work: the OCSP signaling can be moved in-band, so it can't be attacked separately from the TLS session itself, and then made sticky through an HSTS-like mechanism. But until OCSP gets a "Must-Staple" mechanism, it provides literally no real benefit.
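The "Must-Staple" idea can be sketched as a simple decision rule (hypothetical code, assuming a mechanism along the lines later standardized as the TLS Feature extension): once the certificate itself declares that a stapled OCSP response is required, a missing staple becomes a hard failure, so blocking the OCSP responder no longer helps the attacker.

```python
# Sketch of Must-Staple: a required-but-absent staple is a hard fail.

def accept_connection(cert_requires_staple, stapled_response):
    """stapled_response is None (absent), 'good', or 'revoked'."""
    if stapled_response == "revoked":
        return False
    if stapled_response == "good":
        return True
    # No staple was presented:
    if cert_requires_staple:
        return False  # hard fail: stripping the staple doesn't work
    return True       # today's soft-fail behaviour

print(accept_connection(True, None))   # False: attacker can't strip it
print(accept_connection(False, None))  # True: legacy soft fail
```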


Obviously stapling improves on OCSP, and OCSP is flawed. But OCSP is better than nothing. It's not 100% useless - if a server or a cert has been owned, OCSP will protect the user from that. If the user has been owned, only stapling will save them, so obviously a move towards stapling is a good idea.

But don't let the perfect be the enemy of the "better than nothing".


But OCSP is the only thing that's widely supported as of now. You can't on the one hand say "don't blame Chrome for not supporting something that doesn't exist" while on the other hand it rejects something that's widely deployed and even considered a requirement for CAs.

Being able to revoke your certificate, even if the mechanism has problems, is better than not being able to.

OCSP is still the standard way of communicating certificate revocations, and even with all the HTTP extensions you still need some way to revoke certificates.

Unlike most alternatives, OCSP is supported out of the box by IE, Firefox, Opera and Safari. Only Chrome has it disabled by default. Most people revoked their old certificates after Heartbleed. This is an example of where you need an alternative to just pinning a key.

So you are saying that because an attacker has to make sure the OCSP connection isn't working, OCSP is worse than having no possibility of certificate revocation at all?

Not saying that it's absolutely secure. Hopefully everyone knows that there are flaws in HTTPS/SSL. Zooko's triangle[1] even gives you a hint why.

Also I am curious. What better, more widely used way of Certificate Revocation do you know?

[1] https://en.wikipedia.org/wiki/Zooko%27s_triangle


OCSP fails often enough that requiring a verified response would break the internet. Thus, browsers which support OCSP treat a failure to fetch OCSP data as a soft fail (i.e. they ignore it).

The problem is that if you can MITM someone, you can deny them access to the OCSP service and cause a soft fail, which makes OCSP worthless.


Just because it doesn't stop a full MITM between the CA and the client doesn't make it worthless. It still protects the user from trusting a server that is no longer trustworthy.

If a certificate is worth issuing when a server is trustworthy, it's worth revoking when the server loses that trust.


We are talking about adversaries who control both secrets and connectivity. Not because those are the adversaries we care most about, but because those are the adversaries that key revocation contemplates. The notion of "full MITM" versus "partial MITM" versus "passive-only" attacker does not apply.


Revocation, in the case of a pinned key, means setting the previous key's max-age to 0 and pinning a new key. This must be done before the attacker is able to MITM.
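A minimal sketch of the rotation described above, from the client's side (hypothetical cache code, assuming RFC 7469 semantics where a max-age of 0 tells the user agent to forget the pinned host):

```python
import time

# Hypothetical client-side pin cache: host -> (pins, expiry time)
pin_cache = {}

def observe_header(host, pins, max_age):
    """Process a Public-Key-Pins header from a validated connection."""
    if max_age == 0:
        pin_cache.pop(host, None)  # revoke: forget the old pinset
    else:
        pin_cache[host] = (set(pins), time.time() + max_age)

observe_header("example.com", ["OLDKEYHASH="], 5184000)      # original pin
observe_header("example.com", [], 0)                         # revoke it
observe_header("example.com", ["NEWKEYHASH="], 5184000)      # pin new key
print(pin_cache["example.com"][0])  # {'NEWKEYHASH='}
```

The "before the attacker is able to MITM" caveat is what makes this work: the new header is only trusted because it arrives over a connection that still validated against the old pinset.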


Yeah, makes sense, I guess. But I'm more and more worried about the web "platform"; it just gets more complex each day.



