Is HTTP Public Key Pinning Dead? (qualys.com)
128 points by okket on Sept 6, 2016 | 140 comments



I figured interesting words would probably start to trickle out about HPKP after the last month or so of talks. Opinions are split on how much damage RansomPKP and other hostile pinning attacks can do. I'm inclined to believe that it's a big enough risk to warrant some action (such as more diligence by free certificate authorities -- the crux of the talk was that rekeying at up to 20x per week enables the attack). Others, notably Mozilla, are taking the wait-and-see approach. Google and Let's Encrypt were the most resistant to changing anything at all.

Anyway, Scott Helme's writeup did a quick and solid job of summarizing a lot of it. It's linked from the Qualys piece, but here it is if you'd rather get to it directly: https://scotthelme.co.uk/using-security-features-to-do-bad-t...

Full Disclosure: buu700 (https://news.ycombinator.com/user?id=buu700) and I did the AppSec Glory talk at Black Hat / DEF CON and developed the RansomPKP attack pattern.

Edit for Fuller Disclosure plus added thoughts: I'd rather not see HPKP die because we've done some cool stuff with it over at Cyph. I'd rather see HPKP be refined with extra requirements, possibly some sort of tie-in with CAA (that little-known standard for enforcing which CAs can issue certs for your domain).
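For anyone who hasn't run into it, CAA is just a DNS record type (RFC 6844). A hypothetical zone-file entry restricting issuance to a single CA might look something like this (domain, CA, and contact address are all placeholders):

    example.com.  IN  CAA  0 issue "digicert.com"
    example.com.  IN  CAA  0 issuewild ";"
    example.com.  IN  CAA  0 iodef "mailto:security@example.com"

The issuewild ";" line forbids wildcard issuance entirely, and iodef tells a conforming CA where to report requests it refused.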

Speaking of CAA, we did a show of hands in both the Black Hat and DEF CON versions of the AppSec Glory talk. Five hands across thousands of attendees. That's it.


I don't think blocking rapid re-keying on the CA side is going to do much good to prevent this kind of thing. It would help with the pattern described in RansomPKP, but that's not the only option available to attackers. The web server does not need direct access to the (maliciously added) private key - a separate service under the attacker's control that holds the private key and signs stuff for the web server would be sufficient (this is basically what CloudFlare does with Keyless SSL[1]).
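To make that split concrete, here's a minimal Python sketch of a keyless-style signer; the endpoint and wire format are made up for illustration and are not CloudFlare's actual protocol:

    # The TLS terminator never holds the private key: it forwards each
    # handshake value to be signed to a separate signer service and uses
    # the returned signature. An attacker who owns the web server can
    # simply point SIGNER_URL at a signer they control, so CA-side
    # re-keying limits don't stop this variant.
    import requests

    SIGNER_URL = "https://signer.internal.example/sign"  # hypothetical

    def remote_sign(handshake_digest: bytes) -> bytes:
        resp = requests.post(SIGNER_URL, data=handshake_digest, timeout=2)
        resp.raise_for_status()
        return resp.content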

I see some potential for CAA to help prevent these attacks. Limiting issuance to a small number of CAs would help, especially if those CAs are ones you have a business relationship with and have agreed to some sort of out-of-band confirmation process for all certificate requests (I believe this is fairly common for high-risk domains). There's been some discussion about account key pinning via CAA in the ACME WG. Account keys could be kept separate from the web server (possibly even in an HSM), so this could also help mitigate this problem.

That being said, none of that is going to do any good while checking CAA records is optional for CAs, not to mention that just about no one knows it even exists (even in the infosec community, as you have demonstrated). Hopefully we'll see some movement on that front in light of recent mis-issuance reports.

[1]: https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-tec...


> CAA (that little-known standard for enforcing which CAs can issue certs for your domain)

I'm sure the CAs absolutely LOVE this - customers are signaling that they will stay with CA 'X'.

Given 'vendor lock-in' (or 'guaranteed recurring revenue'), cert prices would drop through the floor for the first year, only for renewals to increase.


CAA is just a DNS record that can be changed at any time, so there's no vendor lock-in.

CAs that implement CAA request that record and check whether they're permitted to issue the certificate; if they're not, they should refuse issuance.
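A rough sketch of what that check amounts to on the CA side, in Python with dnspython (>= 2.0); real RFC 6844 processing also climbs the DNS tree toward the root and follows CNAMEs, which is omitted here:

    import dns.resolver

    def caa_permits(domain: str, ca_id: str) -> bool:
        try:
            answers = dns.resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return True  # no CAA RRset at this name: issuance unrestricted
        issue = [r for r in answers if r.tag == b"issue"]
        if not issue:
            return False  # CAA present but no "issue" property: don't issue
        # the value may carry parameters after ';', so compare only the CA id
        return any(r.value.decode().split(";")[0].strip() == ca_id
                   for r in issue)

    # e.g. caa_permits("example.com", "letsencrypt.org")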


1) CAA tries to address the threat of an attacker getting a cert issued for your domain - but carrying out such an attack inherently requires taking over your DNS records or your domain account, and if they can do that, they can also reconfigure your CAA record.

2) CAA requires the CA to perform an additional operation (retrieve and check that DNS record). Not all of the 300-600+ CAs will do this, and all it takes is one non-compliant CA for the attacker to get his fraudulent cert.


1) No, it is not necessary to have full control over your domain's DNS to obtain a certificate. There are various domain validation mechanisms. Having full control over the HTTP server (as mentioned in the article) would be sufficient, as would having access to certain email addresses associated with your domain.

2) I don't know why you're telling me this - it's not related to either your original post or my reply. I'm quite aware that CAA is only really effective if it is mandatory (in fact, I mentioned it in a sibling comment a couple of hours ago).


>CAA requires the CA to perform an additional operation (retrieve and check that DNS record). Not all of the 300-600+ CAs will do this, and all it takes is one non-compliant CA for the attacker to get his fraudulent cert.

CAA seems like another good step that the browsers can demand after a CA fucks up - just as Google has mandated that some CAs log their certificates to Certificate Transparency after they've mis-issued a cert, browsers should also require CAA on threat of removal from their trust lists.


Yeah - in theory. But in reality the browsers are reluctant because of all the other sites a CA has issued certs for - no browser wants to be singled out for blocking unrelated sites.


For what it's worth, my impression is that CAA will be mandated by the CA/Browser forum at some point. But, indeed, that's the main weakness of CAA—it requires that substantially all CAs support it.


There are too many paying entities to appease - not just hundreds of CAs but various browser vendors as well. Either the MUSTs will be changed to SHOULDs, or the CA/Browser body itself will fragment.

Look no further than some of the past transgressions browsers have let CAs get away with.


Maybe, but there's no memory effect associated with CAA and you can change your configuration at any time. You're not actually locked in.


> with CAA and you can change your configuration at any time

so too could an attacker. If an attacker is able to set HPKP, then they could just as easily reconfigure CAA to a CA that issues them a cert for your domain.


The attack described in the article (web server compromise) would not necessarily give you access to the DNS of your domain (that's putting a lot of eggs in one basket).

It's certainly an effective defense-in-depth mechanism.


> It's certainly an effective defense-in-depth mechanism.

1) We've seen time and again that complexity is the enemy of security. Generally, the more moving parts, the more likely for another flaw. This is less defense-in-depth than it is added complexity. The attackers have time on their side to figure out where the next hole is.

2) The OP suggests that HPKP failed because of practical reasons (domain admins are scared to death of getting it wrong and bricking their site) - things like CAA only add to the complexity.


1) I don't see how it can possibly be worse than the status quo. CAs already rely on DNS for domain validation. This is just another DNS query, with the reply being a whitelist of permitted CAs, not a replacement for domain validation. If the CA fails to follow the whitelist, it's not worse than a CA that does not implement CAA. Without pointing out evidence that shows how this addition could make things worse, I don't think this is a good argument against CAA.

2) CAA was introduced in this discussion as a possible solution to the RansomPKP problem. It is not a requirement for HPKP and the only effect on HPKP it would have is to actually reduce the risk associated with that attack.


> CAs already rely on DNS for domain validation

Not all - only a subset of certs issued

> If the CA fails to follow the whitelist, it's not worse than a CA that does not implement CAA. Without pointing out evidence that shows how this addition could make things worse, I don't think this is a good argument against CAA.

because the USER can't tell. The user is under the impression of increased security (because upgrading to browser version X.Y said it now supports CAA) yet the user doesn't know which cert was issued by a CA that checked CAA. And you can't add yet another indicator to the UI because the user is already numb from just the certificate itself.

> actually reduce the risk associated with that attack

and as I tried to point out to you, it's ineffective against the threat of an attacker gaining access


> Not all - only a subset of certs issued

I'm not sure I follow. My point is that if you're arguing that a DNS query is some kind of added complexity, then I have bad news for you, because DNS queries are already a part of all domain validation methods (whether it is email, HTTP, DNS, etc.)

> because the USER can't tell. The user is under the impression of increased security (because upgrading to browser version X.Y said it now supports CAA) yet the user doesn't know which cert was issued by a CA that checked CAA. And you can't add yet another indicator to the UI because the user is already numb from just the certificate itself.

This has nothing to do with browsers. It's a mechanism to improve the CA domain validation process. It can help mitigate certain vulnerabilities in the domain validation process. For example, WoSign recently mis-issued certificates for github.com due to a bug in their domain validation system. If they had used CAA, and github.com had a CAA record indicating WoSign is not permitted to issue certificates for that domain, it's quite possible that the certificate would not have been issued.

CAA is not a replacement for HPKP, and no one is arguing that. It's a useful defense-in-depth mechanism that could help prevent mis-issuance in many cases, but it's not (and never has been advertised as) a solution to the problem of a fully-compromised CA. In addition, as has been mentioned in this thread, it might be useful in mitigating RansomPKP to a certain degree in the future.

> and as I tried to point out to you, it's ineffective against the threat of an attacker gaining access

No, you have not pointed that out. I have pointed out that you need access to the DNS in order to change (and bypass) CAA records, whereas you only need control over the web server in order to obtain a certificate that can be used to ransom a domain.


> I'm not sure I follow. My point is that if you're arguing that a DNS query is some kind of added complexity, then I have bad news for you, because DNS queries are already a part of all domain validation methods (whether it is email, HTTP, DNS, etc.)

We're talking about CAs issuing certs - DNS queries for email, HTTP are outside of CA cert issuance.

> This has nothing to do with browsers. It's a mechanism to improve the CA domain validation process.

The browsers need to decide when to trust a cert - and if/when CAA becomes involved in cert issuance, then I suggest this DOES have something to do with browsers. Furthermore, you're suggesting co-existence between CAs that check CAA and CAs that don't - which implies that either the browser or the user has to make a determination of whether to trust.

But we're talking past each other. So using your example, think of it like this: how will you as a user know, when you visit GitHub, that another WoSign incident hasn't happened? Or rather - how would a LAYPERSON know they are secure? GitHub might detect it - but how would ordinary USERS?

> If they had used CAA, and github.com had a CAA record indicating WoSign is not permitted to issue certificates for that domain, it's quite possible that the certificate would not have been issued.

that (wrongly) assumes ALL CAs trusted by EVERY browser will perform the CAA check before issuing a cert for GitHub.

> CAA is not a replacement for HPKP, and no one is arguing that. It's a useful defense-in-depth mechanism that could help prevent mis-issuance in many cases, but it's not (and never has been advertised as) a solution to the problem of a fully-compromised CA.

The problem is not so much a "fully-compromised CA" as it is that we're trusting a whole pile of CAs, and not all of them behave the same.

> you need access to the DNS in order to change (and bypass) CAA records, whereas you only need control over the web server in order to obtain a certificate that can be used to ransom a domain

Any attacker that can gain access to web site admin credentials can also get the DNS credentials.

> CAA is not a replacement for HPKP, and no one is arguing that. It's a useful defense-in-depth mechanism that could help prevent mis-issuance in many cases

You're using defense-in-depth again. Replace "-in-depth" with "-added-complexity". Here's a directly relevant example: HPKP arose from deficiencies in static pinning, which arose from deficiencies of the CA/SSL ecosystem, which arose from deficiencies of plaintext traffic. It's not adding security "in depth" when all you're doing is resolving deficiencies in existing deployed solutions. The "depth" is single-level, not multiple.


> We're talking about CAs issuing certs - DNS queries for email, HTTP are outside of CA cert issuance.

Still doesn't make any sense to me. Whether a CA performs a DNS query in order to do domain validation via email, http or to check a CAA record doesn't matter.

> The browsers need to decide when to trust a cert - and if/when CAA becomes involved in cert issuance, then I suggest this DOES have something to do with browsers.

The domains that a CA "vouches" for are part of the certificate. Browsers tell CAs how to determine domain ownership (by telling them to follow the Baseline Requirements). Implementing CAA means making the CAA check a mandatory component of these requirements. In other words, this would be just another step and if the domain appears on a certificate, the CA would indicate that the CAA check was successful. Browsers are the ones who would mandate this, yes, but there would be no other change necessary.

> Furthermore, you're suggesting co-existence between CAs that check CAA and CAs that don't - which implies that either the browser or the user has to make a determination of whether to trust.

I'm not suggesting that. I'm suggesting making CAA mandatory. The scenario you're describing is the status quo, by the way: Some CAs implement CAA (Let's Encrypt, DigiCert); others (the majority) don't.

> But we're talking past each other. So using your example, think of it like this: how will you as a user know, when you visit GitHub, that another WoSign incident hasn't happened? Or rather - how would a LAYPERSON know they are secure? GitHub might detect it - but how would ordinary USERS?

Yes, we're talking past each other. My argument is that CAA would prevent certain mis-issuances, such as the GitHub/WoSign example. The certificate would not have been issued in my example. It is not a mechanism against a fully-compromised CA; that's what HPKP is for, as I have already said.

> that (wrongly) assumes ALL CAs trusted by EVERY browser will perform the CAA check before issuing a cert for GitHub.

You're wrongly assuming that ALL CAs have the same domain validation vulnerability that WoSign had. I'm saying: Given the WoSign vulnerability, CAA would have probably prevented the mis-issuance.

Also: see previous comments regarding the effectiveness of CAA prior to it being mandatory.

> The problem is not so much a "fully-compromised CA" as it is that we're trusting a whole pile of CAs, and not all of them behave the same.

Which is why I have (for the n'th time) indicated that CAA is only fully effective if all CAs implement it (or in other words: if it becomes mandatory).

> Any attacker that can gain access to web site admin credentials can also get the DNS credentials.

What? RCE on the web server is not the same thing as full access to the DNS. You're not making any sense.

> You're using defense-in-depth again. Replace "-in-depth" with "-added-complexity".

Again, you fail to demonstrate how this complexity does more harm than good. You can make this argument for any change.


> Still doesn't make any sense to me. Whether a CA performs a DNS query in order to do domain validation via email, http or to check a CAA record doesn't matter.

Brush up on CA cert issuance. You seem to be assuming that all CAs perform similar levels of due diligence before issuing certs. They don't, they differ widely. Some go much further than simply DNS validation.

> I'm suggesting making CAA mandatory.

For practical reasons, I am skeptical this will happen. Too many paying entities. In the spec/contract, the MUSTs will be lowered to SHOULDs.

> The certificate would not have been issued in my example.

IF the CA checked the CAA...

> CAA would have probably prevented the mis-issuance.

Exactly - "probably".

> CAA is only fully effective if all CAs implement it (or in other words: if it becomes mandatory).

1) It won't happen. Some CAs may have (or already have) implemented it, but how is the browser/user to know which have and which haven't? 2) It's unclear that it is even fully effective.

> Again, you fail to demonstrate how this complexity does more harm than good. You can make this argument for any change.

Introducing a new mechanism requires a demonstration that the added complexity is worth the effort. And in this case, it is clear that unless everyone implements it, there is no added benefit. Added cost without benefit is a bad start.


> They don't, they differ widely. Some go much further than simply DNS validation.

You're missing my point. You're acting like adding yet another DNS query equals some massive increase in complexity. However, all certificate issuance requires DNS queries. I'm quite aware of other validation steps (like for OV and EV). These steps are performed on top of the domain validation steps I mentioned. I have never argued that those are the only steps.


Server software might be able to help with some of the foot-gunning mentioned in Scott Helme's writeup. For example, web servers that were just configured to do cert pinning for the first time could default to giving a 5-hour grace period where the max-age was clamped below a day, followed by a 2-day grace period where the max-age was clamped below a week. Problems are likely to show up immediately, so starting with a low max-age makes sense.

This wouldn't help with malicious scenarios though; attackers would just disable the grace period.
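As a rough illustration of that grace-period idea, here's a hypothetical WSGI middleware sketch; no real server ships this, and first_pinned_at (a Unix timestamp of when pinning was first enabled) would have to be persisted somewhere:

    import re, time

    def clamp_hpkp(app, first_pinned_at):
        def middleware(environ, start_response):
            def sr(status, headers, exc_info=None):
                age = time.time() - first_pinned_at
                if age < 5 * 3600:
                    cap = 86400                     # first 5 hours: cap at a day
                elif age < 5 * 3600 + 2 * 86400:
                    cap = 7 * 86400                 # next 2 days: cap at a week
                else:
                    cap = None                      # grace period over
                fixed = []
                for name, value in headers:
                    if cap is not None and name.lower() == "public-key-pins":
                        value = re.sub(
                            r"max-age=(\d+)",
                            lambda m: "max-age=%d" % min(int(m.group(1)), cap),
                            value)
                    fixed.append((name, value))
                return start_response(status, fixed, exc_info)
            return app(environ, sr)
        return middleware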


You could do that on the client side too though. Have the clients refuse to long-term pin anything until they've seen it multiple times over multiple days.


Doing it client-side defeats the purpose of certificate pinning. Three years after you rolled out pinning, new users would still be using max ages that were clamped too low.


Just curious: What did you mean by you "did a show of hands?" Do you mean that's the number of people that had heard of or understood CAA?


Literally:

1) raise your hand if you've heard of CSP (more than half the room at both talks)

2) raise your hand if you've heard of HPKP (half at DEF CON, probably a third at Black Hat)

3) raise your hand if you've heard of CAA (I gave this one something like 15 seconds for people to think. The number of hands v. attendees was a rounding error.)


Next time you're testing that, you might also include Certificate Transparency.

Mandatory CT seems like it solves the same problems HPKP does, without the risk. Now if we can just finish getting to mandatory CT...


Certificate Transparency (especially if mandatory) guarantees detection after the fact (if we assume the domain owner is properly monitoring known logs). It also adds a certain chilling effect that will probably cause attackers to look for other methods of compromising the target rather than risk sacrificing one of the CAs under the attacker's control (which, arguably, is what intelligence agencies are already doing as many high-profile sites have deployed HPKP).

It's not a real-time protection mechanism for targeted attacks and such - the mis-issuance event won't be noticed immediately, and revocation is both slow and, well, broken in general.

The existence of HPKP is still a good thing for the Gmails of this world, but I agree that mandatory CT is probably good enough for 99 out of 100 sites out there.


I can imagine extensions to CT that would make it better at combating real-time attacks before they can affect anyone.

For instance, suppose a new certificate didn't become valid until a certain amount of time after publication of the corresponding CT log entry, to give the domain owner (or rather, automated processes working on their behalf) time to shoot it down somehow? You could choose how much of a delay you wanted, trading off between security and potential downtime (if you lose all access to your existing certificates, can't revoke them, and have to issue entirely new ones).

If you use any non-automated CA, you should know when renewing your certificates, so you could open up a window to renew; in the absence of that, a fully automated process could notice a new certificate for your domain and automatically shoot it down.

If you use an automated CA, you (or the automated process) should similarly know the window of time you will renew in, though in that case someone could learn that and take advantage of that window. On the other hand, given sufficient confidence in the automated CA you could use CAA with it.
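A sketch of that automated watcher, assuming a crt.sh-style JSON search endpoint (the field names below are what crt.sh happens to return, not a stable documented API):

    import requests

    KNOWN_SERIALS = {"04a1b2..."}  # placeholder: serials of certs you issued

    def check_ct(domain):
        rows = requests.get("https://crt.sh/",
                            params={"q": domain, "output": "json"},
                            timeout=30).json()
        for row in rows:
            if row.get("serial_number") not in KNOWN_SERIALS:
                # hypothetical response: page someone, or trigger the
                # "shoot it down" step described above
                print("Unexpected cert:", row.get("issuer_name"),
                      row.get("not_before"))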


> such as more diligence by free certificate authorities -- the crux of the talk was rekeying at up to 20x per week enabling the attack

You are wrong if you think this is the crux of the attack. Let's Encrypt is unique in rate limiting issuance to 20 a week. Other CAs will happily give you unlimited reissues once you pay a small amount for the initial cert (e.g. $15), and they have an API for it too!


Well aware, but not having to pay for it makes it more widely accessible. Any nation-state attacker or attacker with the will to also compromise an identity or two will have no problems either compromising a CA or swiping an owned credit card respectively, but will either of these actors really go out of their way to ransom a site for up to a five figure sum? The former has no incentive to make money, and the latter makes money through identity theft as-is.

For what it's worth, we're aware of rapid rekeying with other CAs because Cyph uses rapid key rotation with DigiCert right now and has since before Let's Encrypt entered open beta.


> Well aware, but not having to pay for it makes it more widely accessible. Any nation-state attacker or attacker with the will to also compromise an identity or two will have no problems either compromising a CA or swiping an owned credit card respectively, but will either of these actors really go out of their way to ransom a site for up to a five figure sum? The former has no incentive to make money, and the latter makes money through identity theft as-is.

These CAs have resellers that sell certs anonymously, accepting Bitcoin as payment. If the upside is a five-figure sum, $15 is nothing.

> For what it's worth, we're aware of rapid rekeying with other CAs because Cyph uses rapid key rotation with DigiCert right now and has since before Let's Encrypt entered open beta.

Odd then that you mentioned Let's Encrypt 5 times in your Chrome security report (and reported this to the Let's Encrypt security team, as if it's their problem), but didn't mention DigiCert once: https://bugs.chromium.org/p/chromium/issues/detail?id=620776


> Odd then that you mentioned Let's Encrypt 5 times in your Chrome security report (and reported this to the Let's Encrypt security team, as if it's their problem), but didn't mention DigiCert once: https://bugs.chromium.org/p/chromium/issues/detail?id=620776

It's not odd when you consider the point I'm re-quoting below (which isn't really negated by Bitcoin reselling since Bitcoin anonymization is non-trivial and since it still requires payment):

> Well aware, but not having to pay for it makes it more widely accessible.

As you noted, the bug report is readily accessible for anyone to view, and it entirely supports the quoted assertion above. In the end, we'd rather draw attention to the free CAs and HPKP-supporting user agents first and foremost, in hopes of locking down the problem where it's most conveniently exploited, because it is their problem, at least in our eyes. We can tackle the resellers later, and the entire problem of the broken CA model thereafter.

Hopefully that satisfies you.

Edit: shot you an email. Something from your tone strikes me as this being a personal matter for you. Our goal here isn't to antagonize anyone, just to make sure we've done our due diligence in disclosing the issue to those on the path of least resistance for attackers.


This is going to sound glib, but here goes:

1. The fact that HPKP can create this kind of gigantic foot-gun is a pretty good sign that it actually works. So much of security technology turns out to be cosmetic. The stuff that isn't often is hazardous.

2. Don't HPKP your blog. Don't HPKP your cat-sharing startup. Key pinning is a good thing, but it doesn't need to be universally deployed.

3. In order to hijack your site with HPKP, an attacker needs to take control of your server; they probably need RCE. In other words: if you can't safely HPKP your secure messaging system (cough), you probably aren't yet qualified to be running it: something worse happened to your users than HPKP misconfiguration.

If there's a capability we're missing right now, it's probably the ability for sites with low security requirements to opt out of HPKP. Remember, the threat model for HPKP isn't fraudsters or trolls; it's state-level adversaries.


> 2. Don't HPKP your blog. Don't HPKP your cat-sharing startup. Key pinning is a good thing, but it doesn't need to be universally deployed.

Just to be clear, this is equivalent to saying don't bother with SSL for your blog or cat-sharing site (because without pinning someone else can get a cert for your blog/cat-sharing site).

> the ability for sites with low security requirements to opt out of HPKP.

opt out?! It's off by default.

> Remember, the threat model for HPKP isn't fraudsters or trolls; it's state-level adversaries.

Again, by saying HPKP is only for high-security sites (now you're saying only for those threatened by state-level adversaries), you are essentially saying SSL is useless for everything less.


> > the ability for sites with low security requirements to opt out of HPKP.

> opt out?! It's off by default.

I think tptacek meant that a site owner should be able to explicitly declare that nobody should be able to pin a key, so that nobody can hijack the site in the future.


An opt-in would make more sense, since people using HPKP are those that know the most about it. Just like an opt-out, it would have to be via a different channel than HTTP headers.


Fine, but take note that this leaves the door open for others to get a cert for your domain.


That's an odd argument to make. You're essentially saying if you can't afford the most secure safe for your jewelry, just put it in a basket in front of your door instead of getting a cheaper safe that keeps most thieves out.


Suggesting that HPKP is only for sites worried about nation-state adversaries means deploying SSL without HPKP, which leaves the door open for adversaries to get a cert issued for your domain.


Right, I don't think you got my point. You don't always need the best option. CAs are far from perfect, but the threat model for a blog or a cat-sharing site is unlikely to include adversaries who are capable of compromising a CA (whether they're nation-state adversaries or not). SSL without HPKP is probably good enough for you in that case. Saying that the traditional CA system without HPKP is as bad as a plaintext protocol is silly.


You are underestimating 1) the number of CAs embedded in your browser, and 2) how easy it is to get one of those CAs to issue a cert for someone else's domain.


I think "opt out" means opting out of being able to be pinned, not just choosing not to pin. Instead of pinning a certificate, you might pin "This domain will not be pinning a certificate before $date."


Fine, but then you're leaving open the possibility for others to get a cert for your domain.


Not bothering with TLS for a blog is likely to be a good idea.


That's outdated thinking. Look at the efforts over the past ~year to get SSL everywhere. Non-SSL sites already take a hit in Google's rankings[1] (maybe you want Google visibility for your blog or cat-sharing site), and other consequences are not far off.

[1] HTTPS as a ranking signal: https://webmasters.googleblog.com/2014/08/https-as-ranking-s...


Well, TLS (along with access controls) for his sing-along blog would have really helped Dr. Horrible stay ahead of the authorities.

In all seriousness, though, there are plenty of blogs where which page a user visits might, for instance, out a gay teen via gateway logs, hint at a private health issue, etc.


> In other words: if you can't safely HPKP your secure messaging system (cough), you probably aren't yet qualified to be running it

And of course, if you do find a defect like this in a web based secure messaging system that relies on HPKP, it'd be great if you'd report it directly to the service and allow them to fix the defect, per responsible disclosure anyway.


Well, the real risk here is that it creates a new consequence for being hacked by a criminal, right. So I can't cop out and say "people should report instead of playing HPKP games".

On the other hand, it's an internalized risk, which is kind of a beautiful thing: so many of the risks we create through poor security are externalized to end-users --- so much so that you can run a secure messaging site, get owned up, cost thousands of people their privacy, and still keep chugging along as if nothing happened. This particular risk though just wallops the owners of the site.


Unless I'm misunderstanding, it doesn't just wallop the owner of the site; it wallops all of that site's users as well. I know Google is less likely to be walloped, but imagine a site similar to Google Docs or Gmail going down until the key expires. That's not just going to affect the site, it affects all of its users.


Users can still get their data; the owners need to stand up another hostname. They're inconvenienced. But the attack is devastating to companies, who depend on user convenience to attract and retain users.


> Well, the real risk here is that it creates a new consequence for being hacked by a criminal, right.

No disagreement from me here, which was one reason we set out to talk about RansomPKP at BH/DC in the first place :) That said, I do think this can, and should, be reported to site owners. You're not going to see much innovation if people don't take time to futz around with technologies and see what they can do with them. HPKP has plenty of potential to hurt, but as you alluded to, it's generally just your own feet in harm's way.

> This particular risk though just wallops the owners of the site.

Yeah, that's what the (cough) team was going for.


Sorry, I was being snarky, because there's a secure messenger that apparently did commit PKP suicide.


Is there a way to mitigate the risk of automated PKP ransom attacks without just pinning yourself before they find you? My impression after reading this is that the mere existence of HPKP (as implemented) forces you to either use it or be ready to go down for 60+ days if you get owned.


Oops. The article mentioned fronting your servers with a proxy that strips the HPKP header. Not perfect, but it's something.


HPKP is a workaround, not a solution, to the problem of trusting every CA in the world to certify every site in the world — it's a symptom of the fact that the economics of CAs are completely, totally and fundamentally out-of-whack: relying parties have no relationship with the CAs they trust.

There are two things a CA can do: it can a) certify that a key is associated with some real-world identity or b) certify that a key is associated with some DNS name. The first is actually pretty desirable, but … there is no global namespace of real-world identities, so there's nothing a CA CertCorp can certify other than 'public key 0x1234ABCD0987FEDB is associated with CertCorp identity X,' which is … not nearly as desirable, or useful.

The second possibility is actually pretty useful, but also imperfect. To a huge extent, Amazon really is amazon.com; Google really is google.com. But, of course, .com could always decide to reallocate hostnames at will, regardless of its contracts (there may be contractual, civil or criminal remedies, of course).

Where does HPKP come into all this? It really doesn't: it's a half-assed way for a site to say, 'you visited me once, and trusted CA X to verify my identity then; in the future, please trust CAs X & Y' — but it does nothing at all to actually ensure that CA X was reliable in the first place! After all, CA X could have issued a certificate to some malefactor, who served the HPKP instructions to only trust X & Y.

It's smoke and mirrors, all the way down.


There's nothing stopping you from pinning exclusively to keys under your control. A pin counts as satisfied if the hash matches at least one of the keys in the certificate chain, including the end-entity certificate itself. In other words, if you do not trust any CA, HPKP can still help you.

For most sites, pinning to a small set of CAs is sufficient and a vast improvement, but if you chose to trust no one, that's fine too.
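For reference, a pin is just the base64 of the SHA-256 hash of the DER-encoded SubjectPublicKeyInfo (RFC 7469). A minimal sketch using the Python cryptography library:

    import base64, hashlib
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    def spki_pin(pem: bytes) -> str:
        # works for any cert in the chain, including your own end-entity
        # cert or a backup key's cert, so you never have to pin a CA
        cert = x509.load_pem_x509_certificate(pem, default_backend())
        spki = cert.public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

    # Public-Key-Pins: pin-sha256="<pin>"; pin-sha256="<backup>"; max-age=5184000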


Here's an issue not mentioned in the article, which is what gives me the half-baked feeling about HPKP:

Thought experiment: (just using companies I think HN will be familiar with, I'm not involved with these.)

1. You're Tumblr or Basecamp or Slack with thousands/millions of user.example.com subdomains

2. You want to protect all of them with a root HPKP policy with includeSubdomains, because users will be bouncing between them, and with that many trust-on-first-use gaps, it'd be pointless otherwise.

3. And then you want status.example.com hosted at StatusPage, blog.example.com hosted at Medium, and someapp.example.com hosted via GitHub Pages with a CNAME. Assume there are long-lived links to all of these subdomain URLs.

You can even assume that the third parties implement HPKP, and that they'll never have a hiccup.

There's no way to make this work without either:

A) Removing includeSubdomains, no longer catching a large number of cases as users traverse your site, and barely solving a problem anymore

B) Serving a concatenation of every third party's pins, opening up gaps huge enough that suddenly every CA is authorized again, not solving a problem at all.

C) Noticing something I haven't.

And note that there are security (not merely vanity) reasons for using subdomains instead of paths for hosting user-generated content. I don't think this is too crazy of a scenario.


That's fairly easy to fix by putting your users on user.fooapp.com and your corporate pages on *.foo.com, just like you're already (for a different reason) putting static resources on foo-static.com.


I think a plumbing-level security implementation detail should be a little less intrusive than having to move either the corporate domain or millions of users' addresses. I can't imagine Tumblr rolling out HPKP now with this workaround (though GitHub did do the .com -> .io thing, so who knows).

The spec should've just included excludeSubDomains=


For security, especially at the plumbing-level, simplicity is a virtue.


This. It's already best practice because fooapp.com should be on the public suffix list (which allows Let's Encrypt to be used with subdomains, among other things).


There's still a decent-sized contingent of non-technical people that look at exampleapp.com the way I'd look at wellsfargo-online-banking.com. I don't blame them: subdomains imply authority.

While I'm sure there's more, github.io, herokuapp and GAE/Appspot are the only services I know of following this practice, and they cater to a developer-level technical audience with the assumption that they'll work out a better name.


I was wondering whether this is more a problem with how third parties handle SSL custom domains than with HPKP. I would have thought SSL custom domains would work like this: you upload the certificate/private key you want to use, or you click a checkbox saying you're happy to use Let's Encrypt. But I'm guessing that for various reasons (no SNI support, etc.) some of these companies are using a massive shared certificate.

Also, if you are building a product that competes with someone doing custom SSL domains, it might be useful to check whether your competitor is using a shared cert, because you can get a list of the companies that are paying for the premium product :)
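A sketch of that check (the hostname is a placeholder): pull whatever certificate a host serves and list its SAN entries; on a big shared cert, that's effectively the customer list.

    import ssl
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    def san_names(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
        san = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName)
        return san.value.get_values_for_type(x509.DNSName)

    # e.g. print("\n".join(san_names("example.com")))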


This is a really good point, and I feel like the two issues are intertwined. I think HPKP being heavily adopted as it's currently specced makes it even harder for third parties trying to do the Right Thing, and for their customers to take advantage of it.

Most of the problems I've seen with pushing HTTPS and the vulnerable CA problem forward really do break down at the "third party provider" boundary, which is a little surprising, because it's an amazingly common situation now, and has been for a while.


EDIT: I was mistaken about something fundamental in how TLS operates. Unchanged question included below:

I have an amateurish question. Please correct me with counterpoints if I'm mistaken.

In an attempt to 'explain like I'm five', HPKP adds an additional checksum, so that the server can say, 'if you see any certs whose Subject-Public-Key-Info doesn't match this checksum, those aren't me, because I generated this checksum off of my cert's Subject-Public-Key-Info.'

But since the Subject-Public-Key-Info is (by definition) public, a CA could issue an owner-unsolicited cert for the same site, containing the same Subject-Public-Key-Info, which would pass pin validation and allow the site to be impersonated [1]. This may happen if the CA is rogue, or if the CA makes a mistake and issues a certificate to some bad actor other than the actual site operator.

So if HPKP is designed to protect against CA errors (malicious or accidental), how can it do that when it can't protect against CA errors, malicious or accidental?

[1] https://tools.ietf.org/html/rfc7469#section-4.5


If the attacker doesn't have the private key that matches the Subject-Public-Key-Info, then he won't be able to impersonate your server given such a cert. (If he does, then there's no need to produce a replacement cert; the official one is good enough.)


The issue is that HPKP is as user-friendly as GPG but without the same benefits (learning curve vs. ROI). Also, IMO the underlying problem is choosing a CA in the first place. None of them have 'Skin in the Game'[0]. How do you actually decide which CA is trustworthy? Trust is measured by how long they have been in the business and how many trust-related incidents[1] they have had. And how is that confidence measured/created? (Hint: it isn't math but whoever has the best marketing or lowest price - not the most trustworthy certs.)

CAs are a business model impatiently waiting to be replaced (with something decentralized). It's a self-serving quasi-monopoly that all other vendors that want to implement trust must use.

[0] http://www.fooledbyrandomness.com/SITG.html

[1] https://www.schrauger.com/the-story-of-how-wosign-gave-me-an...


Afterthought: I wouldn't actually call HPKP dead. It's a prerequisite if you want to do any PKI-related functions on your site. E.g., <keygen> support was dropped by most browsers and the industry is heading towards native clients. Nevertheless, WebCrypto[0] is supposed to offer the same type of functionality, and you need both HSTS and HPKP if you want to secure your implementation in any meaningful way.

[0] https://www.w3.org/TR/WebCryptoAPI/


May I ask a stupid question? The last time I heard about HPKP, it was as a solution to prevent an attacker from installing a fake root CA certificate on the client system (a mobile device) so that they could observe the behavior of a mobile app by posing as its API server (for purposes of reverse engineering). Is such an attack possible, and is HPKP a reasonable solution to it?


Locally trusted CA certificates circumvent HPKP protection, because many organisations use TLS MITM to spy on their staff, and browsers must support that in order to work properly in those environments. But this only makes sense for browsers. If you are making your own application, you can implement pinning any way you like; the simplest way would be to embed the public key in the application and check it without HPKP. Though if your app is popular, you'll probably find that on some networks your connections are MITMed and you (or the user) can't do anything about it, so it's a choice between dropping security and not working at all.
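A rough sketch of that embed-and-check approach in Python (EXPECTED_PIN is a placeholder for your key's SPKI hash):

    import base64, hashlib, socket, ssl
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    EXPECTED_PIN = "placeholder-base64-of-sha256-of-spki"

    def connect_pinned(host, port=443):
        # verify the live connection's leaf cert against the embedded pin
        ctx = ssl.create_default_context()
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der, default_backend())
        spki = cert.public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
        if pin != EXPECTED_PIN:
            sock.close()
            raise ssl.SSLError("public key pin mismatch")
        return sock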


> the simplest way would be to embed the public key in the application

Yes, but one eventually faces the need to update the certificate on the server, and still support installed applications (which fails when apps have an obsolete certificate pinned). Hence the need for smooth certificate updates, and the good smell of HPKP. But it addresses another need, hence my confusion.


No, HPKP is not intended as a mechanism to prevent that. In fact, all user agents I'm aware of do not check pins if the certificate chain leads to a custom CA certificate (one that has been installed manually, as opposed to the default root certificates the user agent ships with). (I believe this is a MAY in the RFC, but all major browsers have implemented it that way).

Mobile apps typically have other key pinning mechanisms that are preloaded (i.e. baked into the binary), but that's typically easy to bypass if you're the owner of a device; it's not really effective as a mechanism against reverse engineering.


> Mobile apps typically have other key pinning mechanisms that are preloaded (i.e. baked into the binary), but that's typically easy to bypass if you're the owner of a device; it's not really effective as a mechanism against reverse engineering.

Your comment makes me realize that HPKP can bootstrap itself, and is unlike regular pinning (with certificate bundled in the binary, as you say).

So do you agree that a powerful enough owner of the device should be considered able to setup a server which poses as the regular server the app talks to, and sniff any request the app sends to its server?

(edited for spelling)


> So do you agree that a powerful enough owner of the device should be considered able to setup a server which poses as the regular server the app talks to, and sniff any request the app sends to its server?

I think any attempt to prevent that is doomed to fail, similarly to how DRM has had absolutely no effect on the availability of pirated content.


HPKP is ignored by browsers when the certificate chain presented by the website includes a user-installed root CA. This is so enterprises can continue to MITM their employees' connections on their work machines.


This happens, see Superfish and friends. HPKP does not protect against this; it's an entirely different set of problems.

HPKP's only purpose is to protect against malicious or compromised CAs.


Yes, like a rogue one installed through any jailbreaking technique, isn't it?


No, it is not; locally-installed CAs (roots) bypass HPKP.


DANE certificate pinning seems like it solves most of the issues HPKP does, without the imminent danger.

It won't stop a dedicated attacker from MITMing a connection if they can block DNSSEC. However, if someone uses a false BGP route to hijack your IP, they can't also forge your DNS answers.

You'd still be fucked if you connect through a hostile local wifi network, though.


The downsides of DANE are substantial:

* DANE is effectively a key escrow system. COM's keys would be held by the USG.

* In the same vein: people with cosmetic TLDs like .LY would be forced to concede their TLS policy to governments like Libya. Similarly: the Internet would be conceding that protocols should work differently and be less trustworthy depending on what country you're in.

* The "last mile" on DANE is spoofable, because DNSSEC doesn't protect the link between browsers and DNS servers; it's a "server-to-server" protocol. In particular: your ISP could still override pins!

* DANE doesn't actually work in practice, because of all the networks that block odd-looking DNS lookups; this is Adam Langley's point from "Why Not DANE In Browsers".

* Stale or misconfigured DNSSEC signatures knock whole sites off the Internet with no diagnostics. Unlike a broken HPKP pin, there's no UX saying something's wrong with the site; it just doesn't exist anymore. This is what happened to HBO Now on Comcast, and why Comcast users occasionally can't reach .GOV sites (which mandate DNSSEC).

* DNSSEC is far, far more expensive to deploy than HPKP (HPKP is just a header you add to your web server configuration).

* The cryptography behind DANE is 1990s-grade. The DNSSEC roots are still RSA-1024! (Don't worry! They've got the upgrade to RSA-2048 on the calendar!)


> * DANE is effectively a key escrow system. COM's keys would be held by the USG.

They can already temporarily change DNS records and get a certificate from any legitimate CA.

> * In the same vein: people with cosmetic TLDs like .LY would be forced to concede their TLS policy to governments like Libya.

They are already doing this; see the first point.

> * The "last mile" on DANE is spoofable, because DNSSEC doesn't protect the link between browsers and DNS servers; it's a "server-to-server" protocol. In particular: your ISP could still override pins!

Run a local resolver. If you have some trusted server, you can use DNS over TLS.

> * DANE doesn't actually work in practice, because of all the networks that block odd-looking DNS lookups; this is Adam Langley's point from "Why Not DANE In Browsers".

Then those networks are bad. For what reason should a network be interested in anything more than the UDP/TCP layer of a DNS packet? Again, you can use DNS over TLS.

For other readers, look at the discussion in https://news.ycombinator.com/item?id=12382752


Regarding your first point: no, they can't. See what happens if they try to exploit the DNS to get a Google certificate. (Notice the last several years of Google punishing CAs).

It is very important to DNSSEC advocates to convince people that our security can't be any better than the security of the .COM zone, which is controlled by the US Government through Verisign. But nobody accepts that level of security today. Arguments that we can escrow all our security with Verisign are DOA, post Snowden. Move on.

Regarding the rest: you can't persuasively rebut arguments by conceding they're correct.


> Regarding your first point: no, they can't. See what happens if they try to exploit the DNS to get a Google certificate. (Notice the last several years of Google punishing CAs).

If the U.S. government, or the Russian government, or the Luxembourgian government orders a CA subject to it to produce a certificate for a given key and hostname, it will be given a certificate for that key and hostname.

If the U.S. government, or the Russian government, or the Luxembourgian government orders a DNS provider subject to it to resolve all requests for a domain to a different IP than it would have otherwise, those requests will be so resolved.

If the U.S. government, or the Russian government, or the Luxembourgian government orders a network subject to it to route all traffic for one IP to a different IP it controls, that traffic will be so routed.

> It is very important to DNSSEC advocates to convince people that our security can't be any better than the security of the .COM zone, which is controlled by the US Government through Verisign. But nobody accepts that level of security today.

Everybody accepts that level of security today, because that's the level of security we have today.

Actually, it's better than the level of security we have today, because at least a DNS-level CA system would mean that no-one other than someone who can compel .com can take over a .com hostname. Right now any government in the world can do so, at the drop of a hat.

Restricting that to the government a company or person is subject to greatly reduces the attack surface.


If the US government orders a CA to mint a bogus certificate for Google Mail, they will get the certificate, and then the CA will get put out of business, one way or the other.

There are consequences to suborning the TLS CA system. DNSSEC advocates pretend there aren't, because they have to, or their arguments become untenable, because there cannot plausibly be similar consequences for suborning the DNS: we know for a fact, with repeated existence proofs, that governments will manipulate the DNS (like DOJ did with file sharing sites), and we know .COM isn't going anywhere.

DNSSEC advocates will grudgingly acknowledge one half of the modern CA system security model: key continuity (pinning).

What they seem unable to acknowledge is the other half. To use Moxie Marlinspike's terminology, which he coined in an argument against DNSSEC: trust agility.


> If the US government orders a CA to mint a bogus certificate for Google Mail, they will get the certificate, and then the CA will get put out of business, one way or the other.

Probably, but in the year (or more, considering that StartSSL is still trusted) until the CA's trust is removed, that certificate still works.

Besides, if you accept that governments manipulate DNS, why do you deny that they manipulate DNS to get domain-validated certificates? Because if not, we are back to trusting Verisign for all websites that aren't statically pinned (with the added 'bonus' that those statically pinned sites can be wrecked by their pin at any time).


> we know for a fact, with repeated existence proofs, that governments will manipulate the DNS (like DOJ did with file sharing sites), and we know .COM isn't going anywhere.

Yeah, like I've posted before, this is actually US gov SOP.

The US government asserts super-jurisdiction over .COM/.NET/.ORG and will seize such domains at will [1] [2] [3], which should make every non-American think twice about using those TLDs for anything. Your only recourse will be the American justice system, which you will have to navigate as a foreigner. Meanwhile, ccTLDs are under the jurisdiction of their respective countries. Of course, the US is still ultimately in control of the root zone, but I consider messing with that the nuclear option, after which the US will surely have to hand it over to the UN, as has been discussed at times.

[1] https://yro.slashdot.org/story/12/03/06/1720230/us-asserts-s...

[2] https://en.wikipedia.org/wiki/Domain_name#Seizures

[3] https://en.wikipedia.org/wiki/Operation_In_Our_Sites


> Regarding your first point: no, they can't. See what happens if they try to exploit the DNS to get a Google certificate. (Notice the last several years of Google punishing CAs).

What? Is there an article about it on HN? They won't punish CAs for that, because it won't be the CA's fault that the USG modified a DNS zone.

> It is very important to DNSSEC advocates to convince people that our security can't be any better than the security of the .COM zone, which is controlled by the US Government through Verisign. But nobody accepts that level of security today. Arguments that we can escrow all our security with Verisign are DOA, post Snowden. Move on.

Because it can't be better with DNS, which is hierarchical by design. You would have to create your own domains as TLDs in the root zone. Oops, you now have to trust the root zone owners! So you could create a blockchain-based domain system. But it won't be DNS. And you will argue that a blockchain name system is even more expensive to implement than DNSSEC, because it is slow and requires a lot of storage space.

> Regarding the rest: you can't persuasively rebut arguments by conceding they're correct.

Last-mile problems aren't covered by DNSSEC; there is another RFC for that.


You've missed my point. The reason you can't DNS spoof to obtain a Google certificate is that the certificate is pinned, and CAs are monitored (through both pins and through CT).

The rest of this comment continues to concede the arguments I made previously, but pretend that those concessions are rebuttals. It's really only in DNSSEC threads that I notice this peculiar rhetorical strategy deployed.


Sure, Google is defended, as are some other big sites. However, this isn't a scalable solution. Are you really going to argue that the sites small enough not to be statically pinned don't deserve to be safe from state-level attacks?


No? That's what HPKP does: it allows sites of any size to pin certificates.


I'd say the article shows plenty of issues with HPKP. It also overlooks the fact that HPKP only works from the second visit onwards.


For that one user. But part of the point of HPKP is that it raises the stakes for CAs who would issue bogus certificates: someone will have their pin broken, browsers will notice, CAs will be forced into CT or worse.


I have kinda lost faith in the stakes for CAs. It seems like they have to fuck over a really big client for anything to happen.

In general, the CA system is fucked and we need a new one. Note that I never said DANE should be the replacement for CAs. Instead, I wanted DANE pinning to replace HPKP.

I wonder if any of this is even relevant when you consider downgrade attacks.


Everyone has lost faith in the CA system. The problem is that DANE doubles down on it by creating a new hierarchical PKI, this time de jure controlled by governments. And it's not even a good PKI! It's a PKI built around 1990s cryptography.

DANE is not the answer.

Mandatory CT and enforcement of CA Baseline Requirements by browsers is the immediate step. A decentralized trust model is the step after that. None of these steps require the USG to escrow our keys.


DANE doesn't quite double down; it reduces the attack surface to your TLD instead of all CAs. It sucks as a replacement for CAs. However, when it comes to pinning, it is an alternative to HPKP that avoids the costs mentioned in the article, giving up power to the TLD as the price.

In any case, I wouldn't mind if the USG was one of the parties that get to vouch for my key. Many attackers don't have control over the USG. Obviously, that requires multiple other parties that also vouch for my key.

I'm also quite interested in what you think the decentralized trust model looks like. Specifically, the out-of-the-box experience.


> DANE is effectively a key escrow system. COM's keys would be held by the USG.

That is one mode of DANE. There is also a pinning mode of DANE, where you restrict what certificates can be used to sign yours. This is then an add-on to the CA system, so you'd still need to convince a CA to give you a certificate to MITM.

Besides, as others have said. Those same DNS servers can already break the current systems as they can simply rebind and get a certificate through any CA.

> The "last mile" on DANE is spoofable The last mile is only spoofable if you don't locally run a DNSSEC validating resolver. This is easy enough to set up.

> DNSSEC is far, far more expensive to deploy than HPKP (HPKP is just a header you add to your web server configuration).

Does that hold when you take into account the safeguards required to sanely set up HPKP? I'd say the article explains quite well the issues with deploying HPKP.

Sure, DANE and DNSSEC aren't going to replace the CA system. They can certainly complement it though.


If you don’t trust the DNS, you’re pretty much screwed anyhow. The solution to the “last mile” problem is to run a private resolver on each machine. The DNSSEC root KSK (master) key is 2048 bits, and the ZSK is due to change to 2048 bits this year, after careful consideration of all the factors.


First: no, not only is it not true that we must trust the DNS, but not having to trust the DNS is literally part of the thesis statement of SSL/TLS.

Second: you only need to break the weakest of the keys to attack the system. It's 2016, and we're talking about deploying a system that aspires to RSA-2048 or ECDSA, both of which are outmoded.

It actually gets worse. DNSSEC was designed in the mid-1990s with the assumption that cryptography was too expensive to deploy at scale. It's for this reason that DNSSEC uses signing-only constructions and requires support for offline signers: that is, support for people who will sign once, offline, and then never touch cryptography again.

All sorts of badness flows from support for offline signers. In particular: DNSSEC responses themselves are not encrypted. Despite relying on the most expensive cryptography primitive (public key signatures), the protocol offers no privacy at all.

This would be funny if it weren't for the fact that it turns out you need online signers anyways. Without online signers, there's no way to simultaneously (a) authenticate negative responses (ie: prevent someone from spoofing an NXDOMAIN response) and (b) not reveal all the names in a DNS zone. Surprisingly (at least to the people on the working group), pretty much everyone demands (b), and so pretty much every real-world DNSSEC deployment is an online signer, so that it can synthesize NSEC records to defeat people trying to dump zones.
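
To make the enumeration problem concrete (the names below are illustrative): with offline signing, nonexistence is proven by pre-signed NSEC records that name the next real entry in the zone, so every negative answer leaks a name:

    ; a query for the nonexistent "anything.example.com" returns the
    ; pre-signed NSEC record covering the gap:
    alpha.example.com.  3600  IN  NSEC  mail.example.com.  A RRSIG NSEC
    ; ...which proves the queried name doesn't exist, but also reveals
    ; that mail.example.com does; repeat past each revealed name until
    ; the whole zone has been walked.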

It's an incoherent protocol that deliberately handicaps itself in an attempt to achieve a design objective that turns out not to be possible to achieve.

Moreover: the more DNSSEC gets deployed today, the harder it will get to deploy fast, modern, misuse-resistant curve signature schemes. That's because anything you try to deploy on DNSSEC needs to be supported by pretty much all the DNSSEC resolvers, because DNS is an infrastructure protocol, not an application-layer protocol.

It's for the same reason that DNSSEC failures cause sites to vanish completely from the Internet rather than generating pop-up warnings.


And how do you get a TLS certificate? By… proving that you control the DNS. Come back when there’s a working attack against the crypto of DNSSEC; being “outmoded” is not a security flaw.

DNS is, on a very fundamental level, cached by slave servers, which do not, and should not, need access to the DNSSEC key to do live signing of records. DNSSEC was made for security, and explicitly not encryption; bringing up its lack of encryption like it’s a fundamental flaw in DNSSEC is a strawman argument.

Regarding enumerating zones, you do know of the NSEC3 record (widely deployed for years) and the work on NSEC5, right? They were made to fix that and, as I understand it, were successful.

Arguing against DNSSEC because we could in theory have something better reminds me of those arguing against SSL because the CA model is flawed. Sure, it’s not perfect (what ever is?), but it’s what we have, and it’s better than the practical nothing we would have if we keep arguing about theoretical perfection over actual deployment of working code.


You are like the 9th person on this thread to make the same banal point about the relationship between DV certificates and the DNS. It's not accurate. See: rest of thread.

I don't so much mind people making the same arguments over and over again on different threads, but when you can read the thread and see the clever point you're about to make has already been litigated, I'm less sanguine.


You keep offering HPKP as a solution in a thread about an article that shows how HPKP sucks. That's why we keep repeating the same point.


The article doesn't show that HPKP "sucks". The article shows that HPKP can be dangerous. PKP isn't going anywhere, and we've been relying on it for years: it is likely that at least one CA has been penalized for breaking PKP pins.

This is very much unlike DNSSEC. True statement: we could publish the secret keys for the root ZSK on Pastebin today, and nothing on the Internet would break, because nobody relies on DNSSEC for their security.


Protocols that require online signing are impossible to cache and hard to scale. Even OCSP is often used with pre-produced offline signed responses for that reason (but they are periodically re-signed of course). So that's probably why DNSSEC supports offline signatures.

And while I agree that confidentiality of DNS would be a step in the right direction, there would still be many other leaks, e.g. the SNI field in TLS, or the destination IP address. To completely hide who is accessing which site, you'd need some kind of mixnet or onion routing.


> First: no, not only is it not true that we must trust the DNS, but not having to trust the DNS is literally part of the thesis statement of SSL/TLS.

Uh. Certificates are issued by checking DNS entries, and you are saying that we don't have to trust it.


That is correct, yes.


Your usual example here is how hard it is to get a domain-auth cert for google.com, but what about a small e-commerce site? What should they do to prevent a bad guy from getting a domain-auth cert by spoofing DNS? Honestly asking for my education (and others).


They should get an OV cert and use HPKP for their domain. That is: if they're worried that a state-level attacker is going to MITM their users with a fake certificate.


OV certs do almost nothing to prevent MITM attacks. Unless you expect users to recognize when an OV cert is replaced with a DV one.

Moreover, HPKP means that I have to trust my first visit to a site. Any new site could be MITMed. Heck, HPKP means that anyone who gets to MITM your domain can wreck it for however long a pin is valid.


It's easy to "win" arguments if you cherry pick parts of them and ignore the rest.


It's easy to "win" arguments if you try to come of better, rather than seeking for mutual understanding.

(yep, this ain't really seeking for mutual understanding)


Thanks. Is that attack only available to state-level attackers? I thought that--absent the measures you detail above--it was possible for a criminal enterprise to fool a CA into issuing a domain-authenticated cert.


The attack is too complicated for fraudsters, because it's multi-stage. They have to perpetrate a heist to get the certificate, and then another heist to MITM people to use the certificate (you can potentially use the DNS for both attacks, but they're still different attacks). DNS spoofing is not common in the real world, because it has low payoff.


Thank you.


If you find any of these points convincing, please have a look at this FAQ, esp. the DANE section:

http://blog.easydns.org/2015/08/06/for-dnssec/


This is the third time you've posted this link (and that's fine; it's far from the third time I've written an anti-DNSSEC comment --- though I do take the trouble to rewrite them each time).

Each time, I'm going to point out two things:

(a) there was an HN thread that addressed it:

https://news.ycombinator.com/item?id=10019029

(b) the author of the post actually mailed and apologized for its tone. I don't know how much of it they continue to stand by, but I think they might write it differently now regardless.

(I reserve the right to cut/paste this particular comment in the future).


Hum, no.

DANE is centralized, so it solves the same problem the CA system solves, just slightly better.

Key pinning is a peer-to-peer form of key distribution; it touches the same problem the PGP network touches, but solves a different subset of it.


Yes, DANE is centralized, but note that I was talking about DANE pinning. That means an attacker would still need to convince a CA to issue a certificate in order to break a site secured by DANE pinning.

So DANE pinning adds significant security compared to the base CA system.


Oops, yes. I misread it.


I won't use or recommend HPKP until there's a way to recover a domain with lost keys. I'm thinking of something like a bit in the certificate that overrides HPKP; that bit should be very thoroughly verified by the issuer, something like EV. The risk of losing a domain is unacceptable and outweighs the security gains.


Well, there's an idea: require a special certificate (via an extension) to enable pinning.


I recently had an idea to make HTTPS more secure for the average user: what if Google were to check certificates when crawling a site, ensuring that its keys did not change in a malicious way?

Now, sites could present a different cert to Google IPs than to users. To solve this problem, the HTML anchor element could be augmented with the certificate's fingerprint. The user's browser would make note of the fingerprint and alert the user if something is off.

I think embedding information about a cert's fingerprint or its root CA in links makes a lot of sense, as it would allow a sort of "web of trust" between linked sites.
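
As a sketch of what that might look like (the attribute is entirely hypothetical; no browser implements anything like it today):

    <!-- hypothetical markup: the linking site vouches for the target's key -->
    <a href="https://blog.example.org/"
       data-cert-pin="sha256/<base64 SPKI hash of the target's key>">
      a blog I trust
    </a>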


The most practical attack for the average MiTM is going to be the coffee shop wifi scenario. That scenario will never end up seen by Google.

On a related note, Google's crawler submits to Certificate Transparency logs any certificate it hasn't seen before. Since anyone can monitor the transparency logs, something similar to what you describe is already available to us.
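
For example (a sketch, assuming crt.sh's public JSON interface over the CT logs; the domain is a placeholder):

    # list every logged certificate for example.com and its subdomains
    curl -s 'https://crt.sh/?q=%25.example.com&output=json'

Any certificate in that output that you didn't request is a red flag.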


That's more or less what Perspectives (https://perspectives-project.org/) set out to do: a bunch of notaries would crawl websites, note their TLS certs, and distribute them to whoever wanted to visit the site, to provide a point of comparison.

It didn't take off, mostly because of potential privacy problems and because there were not enough volunteers to make it scale properly. But the idea was sound.


I get the idea, but it sounds like it would make the problem of link-rot 100x worse, and the number of broken links would quickly desensitise users to the warning.


Obviously it'd be optional, and it'd only make sense for dynamically-serving systems like a web search engine.

And for a multi-domain serving system, like a blog host: it could serve links to other blogs it hosts (on completely different domains) with cert hashes, and suddenly make it very hard to MITM any of the served sites without MITM'ing all of them.


I'm not sure how reliable this would be. E.g. an attacker with sufficient capability to launch an attack like RansomPKP (which requires a web server compromise or equivalent) would, at first glance anyway, also be able to undermine these mitigations.

Technion's point about certificate transparency is probably the closest to solving what you're aiming for.


The article makes some good points, but I don't believe the risk of malicious pins has any effect on adoption. It doesn't really matter whether your site currently uses HPKP or not - the attack works either way, as long as the user agent supports HPKP. Once an attacker has compromised your server in a way that allows them to add arbitrary HTTP headers, you're doomed.

That's not to say that things like RansomPKP aren't problems worth thinking about, but when it comes to adoption, time would probably better be spent improving tooling and documentation.


Like tptacek said, "Don't HPKP your blog".

There's only a handful of sites that need HPKP and that's okay. That doesn't mean there is something wrong with HPKP. It's a useful tool for people with a specific threat model. If you are one of these sites then deploy HPKP and handle the operational responsibility that it comes with.

If you aren't one of these handful of sites then do not deploy HPKP. HPKP is a huge footgun, and the benefits are not super large in the average case.


I think high profile sites are better served by static pinning, which seems simpler to maintain and also comes with the benefit of preloading. Thus, if you have those sites at one end, and lots of small/no-large-risk sites who don't need any pinning on the other end, you're left with no one for HPKP to serve in the middle. But, if HPKP were easier to deploy, perhaps more sites would use it.


> Or, if you’re lucky, they seek ransom, giving you a chance to get the backup pinning key from them and keep your business.

That's a rough choice -- let the business stumble/fail by moving to a new domain or accept the keypair from an untrusted party who's shown themselves to be untrustworthy. What's to stop them from sharing the private key with other ne'er-do-wells?


You can gradually migrate to another, non-compromised key: generate a new key, add HPKP headers covering both the compromised and non-compromised keys, keep that up long enough that all users who got the bad HPKP headers receive the new ones, then replace the certificate and set the HPKP header to include only the new keys.
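
Concretely, with placeholder pin values and a 30-day max-age (a sketch):

    # phase 1: serve both pins until every cached policy has been refreshed
    Public-Key-Pins: pin-sha256="<compromised key>"; pin-sha256="<new key>"; max-age=2592000

    # phase 2: once the longest previously served max-age has elapsed,
    # drop the compromised pin
    Public-Key-Pins: pin-sha256="<new key>"; pin-sha256="<new backup key>"; max-age=2592000

The catch is that phase 1 has to last at least as long as the largest max-age previously served, so every cached policy picks up the new pin set before the old pin disappears.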


The problem is that if you dull HPKP, you potentially weaken the benefit that really high-profile targets gain, and therefore make it less useful for them.

Perhaps the answer is that only a small number of websites actually need/want this functionality, and that's okay? Simply dulling it to broaden take-up may have a net negative effect.


The article did mention this, but I will reiterate that mobile apps can benefit a lot from key pinning. That's a huge chunk of total internet activity. So, not that dead if you ask me.

The scenario of an attacker having root(?) access to the machine warrants equal or greater concern than bricking your website, in my opinion.
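
For instance, with OkHttp on Android (a sketch; the hostname and pins are placeholders), pinning is a few lines, and because the pins ship inside the app there's no header for an attacker to tamper with:

    import okhttp3.CertificatePinner;
    import okhttp3.OkHttpClient;

    // Reject any TLS handshake with the API host whose chain doesn't
    // contain a public key matching one of the pinned SPKI hashes.
    CertificatePinner pinner = new CertificatePinner.Builder()
        .add("api.example.com", "sha256/<base64 SPKI hash of server key>")
        .add("api.example.com", "sha256/<base64 SPKI hash of backup key>")
        .build();
    OkHttpClient client = new OkHttpClient.Builder()
        .certificatePinner(pinner)
        .build();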


Just to clarify: in my article I discuss mostly HPKP as defined in RFC 7469. I think static pinning works great (and also has the benefit of being preloaded). Mobile and other types of pinning where you control both sides of a conversation also work well and are easier to get right.


Wouldn't a new record in DNS be a safe, dynamic way to publish the certs' signatures?


It's too easy for the same attacker to MITM both the certificate and the DNS.


He says "bricking your website" while he means "your website will appear as if it has an invalid certificate".

Bricking means breaking something beyond repair.


No, when HPKP breaks, your site no longer works, period. The error page doesn't allow clicking through. Some browsers (e.g., Chrome) support manual editing of the HPKP configuration, so some users might be able to get around the problem, but that's unlikely to work for many.

Try it here: https://pinning-test.badssl.com


Most users won't click through on a prompt that a website is insecure though. For that majority of users the website is well and truly "broken beyond repair".

The only thing that matters is what your users experience, not what your server serves.



