Here's the full list of DNS attacks DNSSEC protects against:
* Resolver cache poisoning
* MitM upstream from resolver
That's it. That is all that DNSSEC does. Meanwhile, here are some attacks or concerns that are actually seen in the real world and which DNSSEC does nothing for:
* Passive data collection
* Malicious DNS resolvers
* Registrar account compromises
* Local network MitM against endpoints
And these are the real-world attacks which are enabled or made worse by DNSSEC:
* DNS Amplification DDoS
* DNS server denial of service (on-the-fly signing or validating signed responses adds non-trivial overhead)
* Zone enumeration (NSEC and NSEC3 are hilarious; see the sketch after this list)
* DNS server compromise through RCE exploits (not exclusive to DNSSEC, but handling signed responses adds a lot of complexity to servers and almost all DNS RCE exploits we've seen recently have been in the DNSSEC handling part)
Add to this the complexity of key handling and the risk of breaking your own zones and the calculation is pretty clear - on the whole, DNSSEC is actively making things worse and not providing a clear benefit for anyone.
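To make the zone-enumeration bullet concrete: in a plain NSEC-signed zone, anyone with a stock resolver library can walk the whole zone, since each NSEC record names the next owner. A minimal dnspython sketch (untested; `example.org.` is just a placeholder for an NSEC-signed zone):

```python
import dns.resolver

# Each NSEC record names the next owner in the zone, so you can hop
# from name to name and enumerate every label. NSEC3 hashes the names,
# but the hashes can still be cracked offline.
zone = "example.org."
name = zone
names = [zone]
while True:
    nsec = dns.resolver.resolve(name, "NSEC")[0]
    nxt = nsec.next.to_text()
    if nxt == zone:  # wrapped around to the apex: the walk is complete
        break
    names.append(nxt)
    name = nxt
print(names)
```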
And even this part is mostly solved by DoH. (Unless someone can MitM you and spoof certificates, which means you're being targeted by an organisation with serious resources...)
The problem with DoH is that it reinforces the certificate hegemony.
Anyone can set up DNSSEC and run it without applying for authorisation from another party. But DoH requires a certificate signed by a CA trusted globally.
This is a super important point! The parent post, which is quite good, forgets the qualifier that in practice DNSSEC is exclusively a server-to-server protocol: it's what your DNS server uses to attempt to authenticate records from other servers. But when you get the records, you don't get the signatures; you just get a single bit in the header that says "I pinky swear that I checked the signatures".
It's wild that we're even considering rolling more of this out in 2023.
> But when you get the records, you don't get the signatures; you just get a single bit in the header that says "I pinky swear that I checked the signatures".
Clients absolutely do get RRSIGs (and relevant NSEC/NSEC3); however, they don't necessarily validate them.
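You can see both halves of this with a quick dnspython query (sketch, using a public validating resolver):

```python
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

# Ask a validating resolver for a record with the DO (DNSSEC OK) bit
# set, so RRSIGs are included in the response.
q = dns.message.make_query("example.com", dns.rdatatype.A, want_dnssec=True)
resp = dns.query.udp(q, "8.8.8.8", timeout=5)

# The AD ("authenticated data") flag is the resolver's one-bit promise
# that it checked the signatures for us.
print("AD flag:", bool(resp.flags & dns.flags.AD))

# The RRSIGs do arrive alongside the answer; a client *could* validate
# them itself, but stub resolvers almost never do.
for rrset in resp.answer:
    print(rrset)
```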
Thus they provide an interesting foundation to build upon, for example DANE or SSHFP. Not very well-known or common, but they show the potential of what's possible when there's a way to publish cryptographically authenticated records in DNS.
Though support in that space is very patchy as well, and maybe going nowhere. In any case, it serves as an example that DNSSEC can be built upon, even if the new ideas don't necessarily make it.
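To show what building on it looks like in practice, a DANE-EE (usage 3) TLSA check is only a few lines. A dnspython sketch, assuming a usage-3/selector-0/matching-type-1 record (`example.com` is a placeholder and would need to actually publish one):

```python
import hashlib
import ssl

import dns.resolver

host = "example.com"  # placeholder; must actually publish a TLSA record

# TLSA records live at _port._proto.<name>
tlsa_records = dns.resolver.resolve(f"_443._tcp.{host}", "TLSA")

# Fetch the server's certificate and hash it the way matching type 1
# expects: SHA-256 over the full DER certificate (selector 0).
pem = ssl.get_server_certificate((host, 443))
der = ssl.PEM_cert_to_DER_cert(pem)
digest = hashlib.sha256(der).digest()

for tlsa in tlsa_records:
    if (tlsa.usage, tlsa.selector, tlsa.mtype) == (3, 0, 1):  # DANE-EE
        print("match" if tlsa.cert == digest else "MISMATCH")
# (Of course this only means anything if the TLSA lookup itself was
# DNSSEC-validated.)
```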
In both cases what it boils down to is that "it's an alternative to the current CA system". Sure, alternatives are good, but why does it need to be coupled to DNS? If I have a .ly domain, do I really want to be forced to trust the Libyan government for my security?
What do you think a ccTLD is, if not a TLD controlled by a country, meaning the government of that country? In what world would it be reasonable for you to have a .ly domain and expect the Libyan government not to do whatever they like with your domain? Like, you know, the way everybody already trusts the U.S. government with every .com?
But that is exactly the point: in the world we are in right now, I DON'T have to trust the TLD operator to make sure my services are secure. Sure, there is a risk that the TLD operator will shut down my domain completely, but that is a different risk from the TLD operator MitM-ing my services.
If my CA misbehaves, I can use a different CA. Not trivial, but doable. What do I do if my TLD operator misbehaves, or the TLD changes ownership (like the .org case a few years ago)? Change my domain?
Not really; thanks to SPF, huge TXT records are all over the place and can be used for amplification just as well as RRSIG records.
> * DNS server denial of service (on-the-fly signing or validating signed responses adds non-trivial overhead)
People said this about HTTPS, too. They were wrong.
> * Zone enumeration (NSEC and NSEC3 are hilarious)
Firstly, NSEC5. Secondly, no secret data should be in DNS. It’s not designed for it.
> * DNS server compromise through RCE exploits (not exclusive to DNSSEC, but handling signed responses adds a lot of complexity to servers and almost all DNS RCE exploits we've seen recently have been in the DNSSEC handling part)
Not on the authoritative server side, surely? Sounds like a DNS resolver problem.
NSEC5 is a research paper, not a deployed standard. But thanks for calling this out, because it gives us an opportunity to reflect on the fact that, confronted with the problem of enumerable sensitive DNS labels, the best cryptographic minds of the IETF DNS working group came up with a 1990s password file.
One of the ways you can tell that DNSSEC advocates are operating in bad faith is by catching them attempting to argue that nothing in the DNS is secret to begin with. First, that's obvious gaslighting; disabling public zone transfers has been a security best practice since the mid-1990s, and no network security audit would fail to flag you for enabling them. Second, if the DNSSEC advocates actually believed that, they wouldn't have done NSEC3, white lies, and (now, apparently, at some point in the distant future) NSEC5.
"DNS names aren't supposed to be secret" is not a good faith argument. It's not your argument: it is a popular trope of DNSSEC advocates. I don't mean to imply anything about you personally --- you're an anonymous abstraction to me, so there's nothing I could reasonably say about you as a person. But the argument you're making is what it is, and I've characterized it, I believe, accurately.
Later
I edited this slightly to make clear that I'm characterizing a trope, not an HN thread.
> "DNS names aren't supposed to be secret" is not a good faith argument.
Yes it is. Here I am, making it. It’s not even about DNSSEC; it has been the truth ever since the DNS itself was designed. Having secret data in DNS is doomed to fail in any number of ways, since DNS was never designed for it.
And you can’t get away with claiming “Oh, I wasn’t calling you a bad faith gaslighter, only other people. Who are making the same argument as you. And I am saying this in a reply to your comment for no reason in particular.”
Since you are, incredibly, doubling down on calling my argument a bad faith argument (instead of responding to it), I have no recourse but to leave it at that.
> Meanwhile, here are some attacks or concerns that are actually seen in the real world and which DNSSEC does nothing for:
DNSSEC also does not press your suit or iron your shirt. How is this a criticism of DNSSEC? A security technology cannot possibly solve all problems, and not solving some specific problems should never be a strike against it.
There is one case where I think DNSSEC provides a lot of value: domain validation for the purpose of issuing a TLS certificate.
To get a DV certificate, the most common kind, you just have to prove you control the domain. If someone can MitM the DNS request used to verify domain control, then they can get a certificate issued for your domain. Certificate Transparency logs help protect against this, as long as you are watching them and notice the bad cert before too much damage is done. Granted, there isn't a big risk of MitM between the nameserver and a CA that validates from multiple locations. But a state actor might be able to pull it off.
Still, I'm not sure that is worth the complexity of DNSSEC. Especially since you really only need signatures on a few specific records.
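For context on how thin the thing being attacked is: the DNS leg of an ACME DNS-01 validation amounts to one TXT lookup. Roughly, as a dnspython sketch (the digest value is a placeholder):

```python
import dns.resolver

domain = "example.com"
# In DNS-01 this is the SHA-256 digest of the key authorization,
# base64url-encoded; just a placeholder here.
expected = "<base64url digest of the key authorization>"

# Whoever can spoof this one lookup, as seen from the CA, gets a cert.
answers = dns.resolver.resolve(f"_acme-challenge.{domain}", "TXT")
satisfied = any(
    b"".join(txt.strings).decode() == expected for txt in answers
)
print("challenge satisfied:", satisfied)
```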
On-the-ball CAs are already doing multi-perspective lookups from multiple global POPs to reduce the theoretical potential for this attack (which is probably extraordinarily rare). By far the most common way people lose control of their domains (and their certificate identities) is by getting their registrar account phished.
> On-the-ball CAs are already doing multi-perspective lookups from multiple global POPs to reduce the theoretical potential for this attack (which is probably extraordinarily rare).
It was certainly visible when a previous employer tasked me to go looking for it, in I'd guess 2018 or so. I have no details from that work because by convention I keep nothing, although by mistake I do still have a front door key to the office, which I really ought to get back to them one day.
IIRC the main targets were military and government entities in less important (so not G7) countries. We were focused on email at the time, but these days perhaps you would target other infrastructure and the shape of the attacks would look different. Since "Governments of poor countries" isn't an attractive sales target for a startup I don't think that work progressed beyond my investigation.
The Web PKI assumes the DNS is trustworthy, which in practice means DNSSEC or else just hoping. You are in team hope, and I think you ought to make that clear to people.
It is simply not true that the WebPKI relies on the continuous global trustworthiness of the DNS.
And a fact that you're not communicating to your audience here is that the ordinary way DNS is subverted to trick CAs is by phishing registrar accounts.
I think it's quite telling that the examples here all involve BGP subversion, which DNSSEC doesn't coherently mitigate: universal DNSSEC deployment would change the exploit you'd deploy to exploit BGP, but it wouldn't break the vulnerability.
Out of curiosity—how does DNSSEC fail to address this specific vulnerability? Wouldn't it make it impossible to get an HTTPS/DNS record response for this zone, taking it from a "fail dangerous" MITM attack that could violate confidentiality to a simple "fail safe" DOS attack that only harms availability?
Where it's just one out of hundreds of Let's Encrypt certificates.
Two simple things are missing to make any of this infrastructure useful at all:
- A system as easy as ACME for domain holders to revoke any certificate. A few months ago, Cloudflare started issuing certificates for domains where they're only the DNS host. They issued these certificates through the Google CA without informing the domain owners. I only found out through a CT alert. The response when I contacted both Google and Cloudflare was a shrug and a runaround, and the certificates were never revoked.
- A monitoring system that will only alert me on new certificates in the CT logs that it can't find on my infrastructure. Alternatively, an external system that will alert me if it sees a new certificate for foo.example.org that it can't see being used at foo.example.org - the low number of false positives should still make it worth it.
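Even a crude version of the second item is small if you lean on crt.sh's public JSON endpoint (a sketch; the inventory set is a placeholder, and a serious monitor should consume the logs directly):

```python
import requests

# Hypothetical inventory of serial numbers for certs we actually deployed.
known_serials = {"04a1b2c3d4e5f6"}

# crt.sh exposes CT log data as JSON; %.example.org matches subdomains.
rows = requests.get(
    "https://crt.sh/",
    params={"q": "%.example.org", "output": "json"},
    timeout=30,
).json()

for row in rows:
    if row["serial_number"].lower() not in known_serials:
        print("unexpected cert:", row["common_name"], row["serial_number"])
```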
> A system as easy as ACME for domain holders to revoke any certificate.
ACME in fact supports this, and I expect at some point CAs will be required to support it.
In the meantime, I'm working on a tool that will take a certificate, identify the true issuer, look up the CA's problem reporting email address, and provide an email template with the right words to make the CA care. If the CA supports revocation over ACME the tool will take care of revocation automatically (although you'll still have to demonstrate control over the domain).
(Note however that your example is flawed because the certificate was actually authorized by virtue of making Cloudflare your DNS provider. If you don't like that, your recourse is to pick a different DNS provider which is more respectful of their customers.)
> A monitoring system that will only alert me on new certificates in the CT logs that it can't find on my infrastructure.
Commercial CT monitors, like Cert Spotter (my product) or Hardenize, support integration with your certificate issuance infrastructure so you're only notified if the certificate is unknown. I only get CT alerts for my domains if I actually need to care.
The open source version of Cert Spotter can execute a script when it discovers a certificate, and I know of users that use this to cross-check against their inventory and only send an alert if the certificate is unknown.
> ACME in fact supports this, and I expect at some point CAs will be required to support it.
If you use ACME to get a certificate from CA1, then you can use ACME to revoke the same certificate from CA1.
But if the cert was issued by CA2 (via ACME or otherwise), there is nothing you can do about it. And even if the 'rogue' cert was also issued by CA1 (but under a different account), unless you have the private key there is nothing you can do to revoke it; only perhaps get a completely new cert that 'supersedes' it.
No, ACME allows you to revoke a certificate if you can successfully complete a challenge for every domain in the certificate (see https://www.rfc-editor.org/rfc/rfc8555#section-7.6 "an account that holds authorizations for all of the identifiers in the certificate"). The certificate need not be issued from the same account.
Once root programs require all CAs to support ACME, then all you need for automatic revocation is a mapping from CA to ACME directory URL (this could perhaps be disclosed through the CCADB).
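For reference, the revocation request body in RFC 8555 §7.6 is tiny; the real work is the JWS signing and completing the challenges. A sketch of just the payload:

```python
import base64
import json

# The certificate to revoke, DER-encoded (path is a placeholder).
with open("rogue-cert.der", "rb") as f:
    der = f.read()

payload = {
    # base64url without padding, per RFC 8555
    "certificate": base64.urlsafe_b64encode(der).rstrip(b"=").decode(),
    "reason": 1,  # keyCompromise; see the RFC 5280 CRLReason codes
}
# This payload gets wrapped in a JWS signed by an account that holds
# authorizations for every identifier in the certificate.
print(json.dumps(payload))
```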
I’m not sure how CT logs actually work, but is it impossible for the CA to issue a certificate that is not entered into the CT logs? If it’s possible, then governments would surely just require that, and then it would only be caught by interception of the actual certificate being used.
Many major browsers expect CT and won't accept a certificate from a default CA without it being in CT. So what matters is less whether such a certificate can be issued than whether a browser will accept it (which many popular ones won't). It therefore becomes relatively noisy and detectable if such a certificate is deployed at any kind of scale.
In essence, a cryptographic proof that the certificate was submitted to CT logs needs to accompany the certificate. That proof can be embedded in the certificate itself, delivered via an OCSP staple, or sent in a TLS extension.
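You can inspect the embedded variant yourself with Python's `cryptography` package (sketch):

```python
import ssl

from cryptography import x509

# Grab a live leaf certificate and dump its embedded SCT list.
pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

scts = cert.extensions.get_extension_for_class(
    x509.PrecertificateSignedCertificateTimestamps
).value
for sct in scts:
    print(sct.log_id.hex(), sct.timestamp)
```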
There is not currently a good story for non-browser clients. Generally, non-browser clients don't enforce CT, and those that do are at risk of breaking if they don't stay on the ball with changes to the CT ecosystem. Browser makers can stay on the ball because they are well-resourced and competently staffed; non-browser apps, not so much.
Case in point: earlier this year a bunch of CT-enforcing Android apps were suddenly unable to establish any TLS connections because Google stopped publishing a JSON file which these apps should never have been consuming in the first place. There was plenty of warning that this would happen, but the author of the Android CT library was not on the ball, and app developers were not keeping their dependencies up-to-date: https://groups.google.com/g/certificate-transparency/c/38Lr9...
I hope this will get better with time and we will find a way to safely extend the benefits of CT to non-browser apps. I think we're more likely to find success with CT than with DNSSEC, but there is no free lunch.
Browsers won't accept a certificate unless it comes with proof that it was submitted to a CT log.
So a government could MITM you, but they'd have to burn a CA to do it, whether you personally noticed the attack or not. That makes it a very high-cost attack.
This is theoretically true but not really true in practice right now.
If a CA misissues a cert for something major, like Facebook or Google Mail, and Google or Mozilla find out, my current belief is that they'd be in for a world of hurt.
But if a CA misissued such a cert for a single specific target, without a CT SCT, neither Chrome nor Safari will report that (currently, CAs are explicitly allowed to issue non-CT-logged certs; the check on that is that Chrome and Safari won't honor that certificate --- a reason, by the way, to reconsider Firefox). If Google found out that you'd misissued a non-logged Google Mail certificate, you'd get nuked. But there's nothing currently in place to make Google find that out.
It's clear what tweaks to the system would be needed to make this work the way it would "ideally" work, and the problems are mostly not technical; you'd just have Chrome (or Safari, or Firefox) report certs without SCTs in its default configuration. But that kind of surveillance isn't really a thing right now.
I've been cagey about this in past discussions because my understanding was that the Chrome team did do some of this kind of surveillance informally. And I believe they did --- but I'm told that stopped being a thing years ago. Now they just don't accept certs unless they're logged, and that's that.
> If a CA misissues a cert for something major, like Facebook or Google Mail, and Google or Mozilla find out, my current belief is that they'd be in for a world of hurt.
It doesn't even need to be major. Misissuing for example.com and test.com were major factors in the distrust of Symantec and Certinomis, respectively.
> It's clear what tweaks to the system would be needed to make this work the way it would "ideally" work, and the problems are mostly not technical; you'd just have Chrome (or Safari, or Firefox) report certs without SCTs in its default configuration.
This would require a pretty big paradigm shift which is hard to see happening. But as long as clients require SCTs (Firefox needs to hurry up already) this is not really necessary.
> I've been cagey about this in past discussions because my understanding was that the Chrome team did do some of this kind of surveillance informally. And I believe they did --- but I'm told that stopped being a thing years ago.
I'm pretty sure this has never been the case. I think at one point Chrome may have reported certificates for Google domains that were not issued by a Google CA, but this was unrelated to CT.
Or maybe you're thinking of the Googlebot, which logs the certificates it sees while crawling the web.
I may have misconstrued things I was told by team members, or you might have, but either way it doesn't matter, because it's not happening now. You were right to call this out on the last DNSSEC thread, and I want to make sure I'm not endorsing a WebPKI security feature that doesn't currently exist. :)
The proof is an SCT, a signed document from the Log which says "I promise I logged this pre-certificate". [[ In some cases it'll be the actual certificate, but for most real world TLS certificates it's a pre-certificate with the same substantive details, if you think about it you will see why the SCT baked inside your certificate cannot mention the actual document it is baked inside ]]
Now, of course at the moment this document is created, there's no way to be sure this claim is true. The logs are a distributed system.
Twenty-four hours later (the Maximum Merge Delay, a global policy decision), the log should, if you interrogate it, be able to show you a log entry matching the SCT, which traces to the agreed Merkle Tree head for that log. If it cannot, in principle that log has failed and must be distrusted. This happens (to one of the dozens of public logs, somewhere) a few times per year on average.
In principle, everybody who sees the log could agree that they see the same Merkle Tree head, thus the certificate really is logged. In practice the mechanisms to ensure this works, a Gossip protocol, do not exist and have never been deployed in practical use.
It's probably fine, but without ever completing the system as designed, it does have flaws.
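For the curious, the Merkle tree construction itself is the easy part; the hard part is the gossip and auditing machinery around it. Per RFC 6962 (a toy sketch):

```python
import hashlib

# RFC 6962 domain-separates leaves from interior nodes with a one-byte
# prefix, so a leaf can never be confused with a node.
def leaf_hash(entry: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

# Tree head of a toy four-entry log, built bottom-up. An SCT is a
# promise that an entry will appear under such a head within the MMD.
leaves = [leaf_hash(e) for e in (b"a", b"b", b"c", b"d")]
root = node_hash(node_hash(leaves[0], leaves[1]),
                 node_hash(leaves[2], leaves[3]))
print(root.hex())
```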
> In principle, everybody who sees the log could agree that they see the same Merkle Tree head, thus the certificate really is logged. In practice the mechanisms to ensure this works, a Gossip protocol, do not exist and have never been deployed in practical use.
It has been - the commercial instance of Cert Spotter gossips STHs with Chrome's SCT auditing infrastructure.
There is no way to prove that every cert from every CA gets logged in CT. How can I know for certain some CA out there is not creating certs and not logging them?
> Some see the current DNSSEC costs simply as teething problems that will reduce as the software and tooling matures to provide more automation of the risky processes and operational teams learn from their mistakes or opt to simply transfer the risk by outsourcing the management and complexity to larger providers to take care of.
> I don’t find these arguments compelling. We’ve already had 15+ years to develop improved software for DNSSEC without success. What’s changed that we should expect a better outcome this year or next? Nothing.
We’ve had the X.509 certificate infrastructure for 30+ years, and it’s only recently become mostly safe and automated enough for people to deploy without risk. (This includes new risks, like accidentally sending HSTS headers with too large timeouts to the world.) DNSSEC will get there, too.
> We’ve had the X.509 certificate infrastructure for 30+ years, and it’s only recently become mostly safe and automated enough for people to deploy without risk. (This includes new risks, like accidentally sending HSTS headers with too large timeouts to the world.) DNSSEC will get there, too.
The WebPKI didn't improve on its own; it got there largely because web browser makers have high-leverage points to exert influence, which they used to make sure improvements got done.
It certainly helps that we're moving on from "It's irrelevant" to "Actually this is important and it's bad if it's broken". When would you say that happened for the Web PKI? Certainly not last century, when the Web PKI came into existence. Maybe if you squint you could argue the Baseline Requirements in 2011 were a starting point? Or maybe the Ten Blessed Methods, which were only a few years ago?
The browsers do have leverage over the TLDs, and over the DNS resolvers, in the form of how they choose to treat names and name services. For example check the "trusted resolver" policies at Mozilla and Chrome, which IIRC got big US ISPs to swear off nonsense like injecting bogus results into DNS in exchange for having the browser work with their DNS servers.
Browsers have leverage over CAs because CAs have to ask browsers to be included in the first place, and because distrusting a CA only temporarily breaks websites until they can switch to a new certificate.
What kind of leverage do you think a browser could practically exert over a TLD?
As just one example, they can change presentation. Remember the CAs were able to sell an otherwise useless change in presentation for a considerably increased price, you will still occasionally run into people who thought that was crucial to their business.
Oh, people are already working on it; they do it because:
1. Doing DNSSEC manually is work.
2. Doing DNSSEC manually is risky, which all these outages show.
The more DNSSEC usage increases, the more work it will be to do it, and the more people will mess up, and the more the incentives to automate it will increase.
I'm not sure what you mean about deploying without risk only recently. X.509 without recent additions like HSTS could never lock your users out for extended periods of time. If you really messed up in some way, they could still click through the errors and continue while you bought and deployed a new cert.
With DNSSEC you can be down for days by messing up just the very basics.
In the past you frequently saw even large sites down with an expired HTTPS certificate, for days or even longer. And if you mess up your HSTS headers, what recourse do you have? None, from what I recall.
If you get an expired certificate, you can fix it close to immediately; it's all under your control. With DNSSEC you can't do anything until the cache expires.
If the large sites were down for days because of expiry, that's their own failure, not a result of unexpected complexity.
> We've had the X.509 certificate infrastructure for 30+ years
You could argue that because TLS is both free and ubiquitous, DNSSEC is just an added burden, given the outages. Personally, I don't see Big Tech pouring resources into DNSSEC like they do into HTTP/TLS/WebPKI.
Adding to this: virtually none of the large tech firms have adopted DNSSEC. It's usually news when one of them does, and that news story usually takes the form of that firm dropping off the Internet for half a day because DNSSEC ate them.
I work in a field where we deploy software as a service for customers in various regulated fields, and DNSSEC is something that auditors look for as a checkbox to check that DNS is secured.
So far we've let AWS Route53 deal with the whole DNSSEC thing, but it has made moving hosted zones from one AWS account to another more difficult and troublesome than it should have been, and making sure that DNSSEC is set up on all of them is frustrating and annoying.
Thankfully we haven't encountered any issues with DNSSEC signing causing downtime, but I expect it will happen when someone misconfigures a DNSSEC record somewhere and now we gotta wait for the caches to empty/clear.
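For anyone setting this up, the Route 53 flow is roughly the following with boto3 (a sketch from memory; the zone ID and KMS key ARN are placeholders, and the KMS key must be an asymmetric signing key created beforehand):

```python
import boto3

r53 = boto3.client("route53")
zone_id = "Z0123456789ABCDEF"  # placeholder hosted zone

# Create a KSK backed by the KMS key, then enable signing for the zone.
r53.create_key_signing_key(
    CallerReference="ksk-2023-01",
    HostedZoneId=zone_id,
    KeyManagementServiceArn="arn:aws:kms:us-east-1:111122223333:key/placeholder",
    Name="ksk1",
    Status="ACTIVE",
)
r53.enable_hosted_zone_dnssec(HostedZoneId=zone_id)

# GetDNSSEC returns the DS material you then hand to the registrar --
# the step that's easy to botch when zones move between accounts.
print(r53.get_dnssec(HostedZoneId=zone_id)["KeySigningKeys"])
```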
Which audit requires DNSSEC? SOC2 certainly doesn't. ISO doesn't. HITRUST doesn't. PCI doesn't. FedRAMP might (we'll see how much longer that remains the case).
Most firms aren't DNSSEC-signed, most especially those with well-resourced security teams, the kind that collect all the available audit certifications because they have teams dedicated to doing that.
It's a requirement from some of our customers in the financial industry, as well as gov't. It's something various security firms have asked for, and it's a checkbox we will continue to just check. No matter how useless I think it is.
That's fine, I understand. I'm not telling you how to live your life. All I'll say is: there are SOC2 firms that will also tell you to turn DNSSEC on. You can tell them to go fuck themselves, and you'll be just fine. I don't know if that's the case with your NY-FI customers (none of the major investment banks are signed, nor are any of the exchanges, nor are the largest hedge funds and prop traders, but that's neither here nor there); only you know how to handle them.
I'm just here to establish that the mainstream security audits definitely don't require DNSSEC. :)
My experience with these types of audits and security questionnaires is that there are usually a lot of items that aren't actually important either in general, or for your specific company/product. And often if you can provide a compelling argument why you don't actually need something, the customer will usually be ok with it. But the more of those there are, the longer it takes to close the deal, or finish the audit, and the more resources you have to spend on it.
This is why 'cookie cutter audits' are fairly useless; a standard should establish a framework, not result in endless bickering about stuff that isn't even applicable.
There is a huge difference here when it comes to who pays. If a customer of the company pays for the audit you can expect a whole pile of trouble because the auditor wants to show that they've delivered value, whereas if the company being audited pays for it then it tends to go the other way. Ideally the audit would be exactly the same no matter who pays for the audit and it should be customized to the company (and market).
The finance systems I’m familiar with generally don’t rely on DNS for security and, more commonly, don’t rely on it at all. For systems operating over the open Internet, the most common security mechanisms involve pre-shared key pairs. For systems not on Internet, IP (v4) addresses are generally used directly.
I’m actually generally disappointed that TLS in the context of web browsers essentially only uses the CA system for identification and security. It’s wrong at both ends of the spectrum. There are systems where manual key distribution would be fine, and avoiding the CAs as a weak point would be beneficial. And there are systems where the CA system flat out does not work (e.g. anything where the servers are not connected to the Internet and do not even have every-few-days Internet access). Neither of these use cases are handled well by web browsers. (DNSSEC doesn’t help either, of course.)
This is why I said “do not even have every-few-days Internet access”. Without some kind of recurring connectivity, certificate renewals are impossible. (The client could, in theory, help, but that’s not a thing right now.)
Systems like this are all over. Almost every IoT device is an example. More seriously, networking hardware is in this category. You can’t make connectivity to networking hardware depend on a working network with Internet access to obtain certificates without introducing a circular dependency and the possibility of making it very hard to recover from downtime.
As a result, security of network control planes is often abysmal.
I'm not clear here if "Internet access" means incoming access, outgoing access, or neither.
To be clear, the solution above requires no incoming access. It's 100% client-side; there's no server access. (Client to the DNS API, client to ACME.)
If the device does not have client access to anything then the certificate can be obtained from another device. You would then need some mechanism to put the certificate on the device.
I agree that IoT devices are problematic, they either need client access to the Internet (which most don't get) or they need to run a "server" of some kind to receive the certificate updates. If you're lucky this could be automated.
I have found it relatively easy to just run a local CA for my isolated network. I didn't find it too difficult to set up. I run around 600 servers and about 20x that in IoT devices.
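Minting the root itself is a few lines with Python's `cryptography` package (a sketch; the name and lifetime are placeholders, and you still need issuance and rotation tooling on top):

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "lab-root-ca")])
now = datetime.datetime.now(datetime.timezone.utc)

root_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed root
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                   critical=True)
    .sign(key, hashes.SHA256())
)
```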
And how did you provision any of this? Did you take each device, connect to it while on that same network in a completely unauthenticated manner, and provision certificates? Can you send links to webpages on these devices to ordinary browsers that aren’t provisioned with the certificates? Would someone who doesn’t fully trust you and your infrastructure be willing to install your certificate?
> You can’t make connectivity to networking hardware depend on a working network with Internet access to obtain certificates without introducing a circular dependency and the possibility of making it very hard to recover from downtime.
> C) provision certificates for this address using the DNS-Challenge approach rather than the HTTP-challenge approach.
The other "bonus" is that due to CT it leaks the internal name to basically everyone on the internet. It may or may not be a problem but definitely something to be aware of.
>> (e.g. anything where the servers are not connected to the Internet and do not even have every-few-days Internet access
It is possible to add your own root certificate to clients on the LAN, so the equivalent of hand-distributed keys. Granted, this leads to unwelcome other problems (you can't limit your root to your local domain), so it's not often done, but it is possible.
Personally I prefer the DNS-challenge approach I described in a sibling comment.
NameConstraints used to break in Safari: if you didn't set them mandatory, the constraints would be ignored and you could sign anything; if you set them mandatory, the certificate wouldn't validate. But it looks like support was added in 10.13.3, circa 2018, so probably good to go.
Might break some things, and I'm not aware of any mainstream uses, but obviously it'd be really handy if it's usable.
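If you want to experiment, the extension itself looks like this with Python's `cryptography` package (sketch; the subtree is a placeholder):

```python
from cryptography import x509

# Constrain a private root to one subtree; marking it critical is what
# used to trip older Safari versions.
name_constraints = x509.NameConstraints(
    permitted_subtrees=[x509.DNSName("internal.example")],
    excluded_subtrees=None,
)
# ...then, on the CA certificate builder:
# builder = builder.add_extension(name_constraints, critical=True)
```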
This is actually something important: auditors are all over the place and the 'checkbox types' tend to be insistent on all kinds of busywork that the standard they are auditing against doesn't require at all. As the company being audited you are definitely in a position to push back (but maybe not with that particular language).
If you know the standard you are audited against as well or better than your auditor you're in a good position to spot such deviations and avoid a bunch of unnecessary (and sometimes conflicting) 'requirements'. The less competent auditors might not like this but they're probably not going to challenge serious pushback and if they do you're free to contact their employers and ask for someone more qualified.
There's a historic DNSSEC requirement from the USG, which is actually a big chunk of the reason anyone invested in DNSSEC in the early 2010s. But then it got rescinded, but then it stuck in some of the OMB documents, and it's become a weird political football (in the fleeting moments anybody thinks about it), so we're in a weird limbo where every once in a while teams like Slack turn it on to close some FedGov business and fall off the entire Internet for half a day when DNSSEC eats them.
And from there, it’s more recently worked its way into the minimum mandatory set of requirements in the StateRAMP baseline and the state specific variations such as TX-RAMP and AZ-RAMP.
We’re working with customers who will be subject to DNSSEC requirements due to business with state universities, and we will be trying to make a case with the sponsoring agencies to avoid deploying on their domains.
The Australian government’s official cybersecurity guidelines recommend - but do not require - Australian government agencies to implement DNSSEC, “where possible” and especially on their “principal domains”. In a competitive procurement, if another vendor says “you want DNSSEC? No problem!”, while you are saying “we don’t support that” or trying to argue against the recommendation - that is unlikely to help in winning the deal. If your product is miles ahead of the competition, it might not matter; if it is close, this might be the thing that loses it.
It's funny how the USG might be conflicted with both an interest in DNSSEC adoption, and an interest in preventing its adoption.
Less than a year ago I was having DNS problems reaching a subnet in the spaceforce.mil domain (via AFRC Desktop Anywhere). After some troubleshooting, I determined that the DNS server for their subnet was not configured for DNSSEC and my client was configured to refuse unauthenticated resolvers. Everything worked fine once I turned off DNSSEC locally.
DNSSEC can prevent spoofing via MITM, which is something that lots of governments do, so I can see why they might not want everyone to adopt it.
On the other hand, DNSSEC can help to secure network communication, so I can see why they might want to adopt it.
> DNSSEC can prevent spoofing via MITM, which is something that lots of governments do, so I can see why they might not want everyone to adopt it.
Only if you're NOT using HTTPS, right? In which case your traffic is already trivially spoofable, so you're not really gaining anything else by using DNSSEC there.
Does Route53 let you manage the DNSKEY RRset (so you can add a DNSKEY to the set yourself) or do they have some sort of facade you have to interact with instead?
That's an answer to a different question. I asked if they let you put a DNSKEY RR in the RRset they serve. (To be clear: I'm not asking if they will import private keys and sign with them or anything like that, just if they'll let you place an additional DNSKEY in the RRset they serve.)
I had to add a DS record to a Route53 zone just the other day, and when I did, I didn't see a DNSKEY option in the list of records. Don't count on that 100% because I wasn't looking for it, but I definitely don't recall seeing it.
For some reason I get rather annoyed by people who write lengthy blog posts about hot topic news of the day, especially those who mostly do handwaving.
> Even if you have a team of DNS experts maintaining your zone and DNS infrastructure, the risk of routine operational tasks triggering a loss of availability (unrelated to any attempted attacks that DNSSEC may thwart) is very high - almost guaranteed to occur.
What an absurd statement. Yeah, some people have had issues. But most of those did not have a trained DNS team, or even a single trained person. I ran DNSSEC for hundreds of TLDs as a one-man team. I'm not particularly smart or special... most TLDs have not had a DNSSEC outage. NZ did because they made mistakes, which could happen with any technology. Expired certs, for example, are much more prevalent. Should we throw away the CA system too?
I'm not even a DNSSEC advocate, really. I just find it bizarre that so many people attack it as impossible to do. It's not. Attack it on its merits or lack of necessity instead.
> What an absurd statement. Yeah, some people have had issues. But most of those did not have a trained DNS team, or even a single trained person. I ran DNSSEC for hundreds of TLDs as a one-man team. I'm not particularly smart or special... most TLDs have not had a DNSSEC outage. NZ did because they made mistakes, which could happen with any technology.
The evidence base is growing that even with a well-funded, competent DNS team, it's entirely possible to shoot yourself in the foot, with a minimum time to recovery not entirely within your control. This is not a good technology with well-thought-through failure modes.
> Expired certs, for example, are much more prevalent. Should we throw away the CA system too?
When my website cert expires, my entire domain and all its endpoints don't completely become unreachable with no easy workaround. The impact is very different.
> I'm not even a DNSSEC advocate, really. I just find it bizarre that so many people attack it as impossible to do. It's not. Attack it on its merits or lack of necessity instead.
It's a bit like the arguments around programming languages like C/C++. Just because you think you can write C with zero memory issues doesn't mean we should be encouraging everyone else to.
We should have started with DNSCurve and moved to DNSSEC later. The problem with the engineer/developer persona is that we intrinsically don't understand the value in "progress, not perfection".
It's pretty easy to say what people back in the 90s should have been working on in retrospect. The zeitgeist around privacy only really changed in the late 2000s if I recall. Shortly afterward DNSCurve appeared and since it's arguably easier to deploy than DNSSEC it's noteworthy how little impact it has had.
The worst part is actually that very few TLDs support DNSSEC, so it's still a chicken-and-egg problem (and the available tooling remains atrocious).
But what alternative is there for securing the dns chain?
DNS-over-HTTPS requires time sync and has the usual CA problems, and it only covers the path from the resolver to the client; it's not used for authoritative servers.
DNSCrypt? Sounds good, but the server key is published in the NS record, which gets published in the old cleartext DNS. So we are still not securing the authoritative part.
Unfortunately DNSSEC is still the only option to secure the whole DNS chain.
> But what alternative is there for securing the dns chain?
You're asking the wrong question. The better question would be: What do I want to achieve by securing the DNS chain, and can I do that in a different way?
What you ultimately want is to make sure you're communicating with the correct other party. And the way to achieve this is TLS with certificates, validated via the WebPKI.
I'm not saying TLS+WebPKI is perfect. But it certainly works a lot better than DNSSEC (which, let's be honest, does not work in practice).
Yeah, ok, but then you need to ask if DANE achieves that. And the answer to that is clearly "no".
DANE never made it past an experimental browser plugin, and as far as I know that isn't even developed any more. In the time where DANE achieved nothing, we got Let's Encrypt, we got CT, and a whole bunch of other improvements to the WebPKI.
DANE technically achieved that; I played around with an OpenSSL plugin for DANE back in 2010 (when the root (.) was signed). But yeah, I get your point.
Right now, the path ahead is the stapled DANE TLS extension.
Basically, the idea is that you can use simple unauthenticated DNS to get the domain name, just like now. Then you get the complete DNSSEC-authenticated DNS chain for the DANE record as a part of the TLS handshake: https://datatracker.ietf.org/doc/rfc9102/
This seems to be the best of both worlds:
1. We keep DNS as a simple and nimble UDP-based service, without trying to cram signatures into each packet.
2. Since we HAVE to use TLS anyway to achieve any meaningful security, we can just transmit the full DNS chain (up to the root zone!) with signatures easily as a part of the TLS handshake.
3. The client then just needs to validate this chain, and it only needs to have the root zone's key as the root of trust (see the sketch after this list).
4. The root zone's key changes fairly infrequently (once in a decade), so IoT devices can use it to bootstrap themselves.
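To sketch what one link of step 3's validation looks like with dnspython (checking that a zone's DNSKEY RRset verifies under its own KSK; a real client repeats this from the root down, matching each child's KSK against the parent's DS record):

```python
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdataclass
import dns.rdatatype

zone = dns.name.from_text("example.com.")

# Fetch the zone's DNSKEY RRset along with its covering RRSIGs.
q = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
resp = dns.query.udp(q, "8.8.8.8", timeout=5)

dnskeys = resp.find_rrset(resp.answer, zone, dns.rdataclass.IN,
                          dns.rdatatype.DNSKEY)
rrsigs = resp.find_rrset(resp.answer, zone, dns.rdataclass.IN,
                         dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)

# Raises dns.dnssec.ValidationFailure if the signature doesn't verify.
dns.dnssec.validate(dnskeys, rrsigs, {zone: dnskeys})
print("DNSKEY RRset verifies under its own KSK")
```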
The stapling extension for TLS is a dead letter. Work on it was abandoned a couple years ago, when it finally occurred to the working group that control of a CA would allow you to strip the stapled DANE proof off the handshake, which defeats the entire purpose of the extension. DANE advocates counterproposed a pinning mechanism and got laughed out of the discussion by the browsers, for whom "pinning" is a cursed word.
How would that work if you only trust DANE? And if you have a control over a CA, then you can just issue yourself a certificate for any host anyway, so it's a moot point.
What exactly did DANE achieve? Replacing CAs? I must've missed something, last time I checked every webpage on the planet was still using CA-signed certificates.
> Right now, the path ahead is stapled DANE TLS extension.
This thing I find really funny. It basically boils down to "we secured DNS. ok, we figured out it doesn't really work. Ok, how about let's stuff our DNS security system into TLS, because over DNS it does not work? Then we have secure DNS!"
Though I heard this idea many years ago, and deployment appears nonexistent, so... probably not gonna happen either.
The RFC provided a standard, but people choose to ignore it.
> This thing I find really funny. It basically boils down to "we secured DNS. ok, we figured out it doesn't really work. Ok, how about let's stuff our DNS security system into TLS, because over DNS it does not work? Then we have secure DNS!"
Not quite. DNSSEC secured DNS, but it turns out that this by itself is not important in today's world.
However, we do have a very fragile situation where half the Internet now depends on the goodwill and competency of just one organization for their security (I'm talking about Let's Encrypt).
> Though I've heard this idea many years ago, and it appears deployment is nonexistent, so... probably not gonna happen either.
The TLS extension is a fairly new idea, and it's only now starting to get some traction.
This is the second time on this thread you've claimed that a DANE stapling extension is in the works. But it's not: it was proposed and shot down, years ago.
That's factually untrue. RFC 9102 was published two years ago in order to gather experimental data: https://datatracker.ietf.org/doc/rfc9102/ The relevant RFC for DANE is still active, and it hasn't been obsoleted ("shot down").
The major vulnerability of DANE is a MITM attacker's ability to "downgrade" connections to regular PKI in mixed PKI+DANE deployments. So at worst DANE is no better than PKI.
The other problems are the lack of anything like certificate transparency for DNS updates, the inability to quickly invalidate erroneous records in caching resolvers, and the general complexity of DNSSEC setup.
Geoff Huston basically says that _browsers_ bailed on the stapled DANE TLS extension, which is quite true. It doesn't mean that work on DANE and DNSSEC has stopped. Yes, it's taking time, but it's still ongoing.
As for CT, we don't need _everyone_ to adopt it; just the most relevant TLDs would be enough. And there is work going on to add something like it for DANE. It's also slow, because there's a desire to keep DNS names out of the logs (current CT logs expose ALL the DNS names, making it easy to map the internal infrastructure of any company that depends on the PKI).
Meanwhile: there will never be anything like CT for DANE and DNSSEC, for at least two big reasons:
1. Browser vendors have no leverage with DNS registrars to force them to deploy it, and that kind of leverage was required to get CT as far as it has been.
2. The DNS top level domains that are required to implement something like CT are controlled by sovereign states, which aren't going to participate in a single global transparency log; in particular, you'd be surprised to see any of the US or UK TLDs conceding to anything like this.
You have links here to DNSSEC automation work, but nothing to any kind of "DANE Transparency", because, so far as I know of, nothing like it has even been meaningfully proposed.
> What you ultimately want is to make sure you're communicating with the correct other party. And the way to achieve this is TLS with certificates, validated via the WebPKI.
That really only works well for web servers. If your service isn't the one found at the A/AAAA lookup for a given name it's much more difficult to obtain a cert for that name, and without that clients are reliant on what's in DNS alone to make the association.
I'm not entirely sure what scenario you're talking about, I can't think of one. If your issue is that you can't get a cert without an A record, well, set an A record. I don't see how that's a problem, except if you make it one.
WebPKI, while called that, is not just used for the web. It is used to secure e-mail servers (for IMAP+POP3 trivially, for SMTP it needs a bit more work -> MTA-STS). It also works for all kinds of more obscure things like IRC.
Most modern services are HTTPS under the hood anyway, but there's really nothing stopping you from using TLS+WebPKI for other services, too.
I don't know how to put this in a more palatable way but you're looking at the world from the confines of a webdevs point of view. Can you really not imagine that there's any other protocol on the internet than HTTP? That there might already be a web server listening at a given name? That the person in control of that webserver isn't you?
Regardless of protocol, yes only one person/org should be in control of (and able to obtain certs for) a particular domain.
Unless you're suggesting Person A should have port 443 on www.something.com and Person B should have port 444, and each gets their own (valid) www.something.com cert? Because that has some very clear problems.
Generally services not found directly at A/AAAA records for a name are found via another record that contains a hostname (HTTPS/SVCB, SRV, etc) at a leaf node below the name. So `_xmpps-server._tcp.example.net` might contain the hostname `hosted-provider.example.com` in which case `hosted-provider.example.com` will need to respond with a certificate for `example.net`, unless you trust the DNS, in which case it can respond as `hosted-provider.example.com`.
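Concretely, the lookup in question (a dnspython sketch, using the names above):

```python
import dns.resolver

# The SRV target is often a hostname run by a different operator; the
# question is then which name its TLS certificate has to cover.
for srv in dns.resolver.resolve("_xmpps-server._tcp.example.net", "SRV"):
    print(srv.priority, srv.weight, srv.port, srv.target)
```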
Well it looks like they are in this instance and neither they nor you have done anything to suggest otherwise. TLS and WebPKI have great usability for webservers but non-webservers cannot even approach being as smooth as for example Caddy's "Automatic HTTPS" configuration.
> What you ultimately want is to make sure you're communicating with the correct other party. And the way to achieve this is TLS with certificates, validated via the WebPKI.
The thing is TLS with certificates doesn't always achieve this and we know this because there have been real world successful attacks. E.g. intermediate CA certs being issued to bad actors, certificates for major domains being issued to people who don't control them, etc. The issue with the CA ecosystem is not that they're evil, it's just that they don't validate what regular users expect them to validate and that they are also subject to human error, governmental and commercial pressure.
The more common issue with the validation CAs can perform for DV certs is that it just tells me that you controlled a domain at some moment in the past (pretty much up to a year ago). To mitigate these weaknesses we introduced watching CT logs for your domains, HSTS, and CAA record restrictions. Unfortunately, if you do secure your infrastructure in these ways, you have added complexity and will likely discover how your infrastructure can break in new and interesting ways too.
We could skip all the issues we have with DNSSEC using the same technology and security model that TLS+WebPKI has: DNS over TLS, talking directly to the authoritative server rather than going through a caching resolver. With no (non-local) caching you don't have any cache invalidation for keys, which is one of the major problems in deploying DNSSEC.
Servers talking to servers do handle errors a bit differently than browsers. A browser can ask the user whether to ignore the error, but for a server the answer needs to be to close the connection. If the certificate for the DoT server is broken then the domain goes down, just as a TLS+WebPKI SaaS service breaks with a broken certificate. The impact, however, would be just as major as with DNSSEC errors.
I have personally tried to advocate a bit that people should give up public/shared resolvers. The added response time on modern machines is quite minor, and in terms of loading a web page or sending an email, a few milliseconds here and there don't really have a large impact. My argument for that has primarily been about privacy and security (no insecure path between stub and resolver), but I hadn't previously considered the major stability benefit of removing caching.
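What that model looks like from the client side, as a dnspython sketch (8.8.8.8/dns.google stands in here, since few authoritative servers speak DoT yet):

```python
import dns.message
import dns.query
import dns.rdatatype

# One authenticated hop: TLS on port 853, server identity checked
# against the WebPKI exactly the way a browser checks a web server.
q = dns.message.make_query("example.com", dns.rdatatype.A)
resp = dns.query.tls(q, "8.8.8.8", server_hostname="dns.google", timeout=5)
print(resp.answer)
```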
This is false! The point isn't that TLS 1.3 is a better protocol than DNSSEC (though: it extremely is). It's that the entire WebPKI ecosystem has features DNSSEC does not have, has no timeline towards having, and probably never will have, because DNSSEC is de jure controlled by sovereign countries who will not agree to a global transparency log.
I think you are missing the whole point of what is being suggested. There is no DNSSEC if clients are talking directly with authoritative servers using DNS over TLS. No signatures, no sovereign countries controlling keys, nothing defined by DNSSEC at all. Once a client is securely connected to an authoritative server over TLS, everything the server transmits is by definition authenticated. The client needs to traverse the tree of authoritative servers by itself, but that is the only drawback.
Of course they support DNSSEC. It's a government controlled key escrow system for those domains. I'm surprised they haven't tried to mandate its adoption by statute.
Governments control the roots! If they want the DNS PKI to assert that a host they control is MAIL.GOOGLE.COM, to a particular target, they can do that. They cannot trivially do that with the Web PKI.
You don't understand how DNS works. The root is controlled by a US organization (not the government, but whatever) and it delegates individual TLDs to other organizations in other jurisdictions.
So if the US government wants to fake (for example) mail.google.cz it needs to:
1. Create a fake .cz record and sign it with the root KSK.
2. Make sure only you see this record (so they must control the network path to you).
3. Create a fake response.
This is technically possible, but highly unlikely. It's also going to be very visible and easily detectable.
I think the attack tptacek is suggesting is where a government replaces your key with their key, and your nameservers with their nameservers (perhaps with a passthrough for non-interesting records), at which point they can send whichever responses they want with a valid signature.
In your example the Czech government would be the bad actor, but the US government could be the bad actor for .com.
The thing is, I can pick and choose the government to host the website. If I don't like the Czech government's policies, I can use a domain in Turkey or Libya.
The hierarchical nature of DNS makes it impossible for the US to covertly mess with that.
This is strictly better than the current situation where de-facto any government can get a certificate for your domain name by putting pressure on one of the countless CAs. And quite a few governments even have their own CAs recognized by browsers.
This is how you know the argument has lost the plot: your response to government manipulation of Internet cryptography is that Google can just leave .COM.
It's not used by DNS root servers because it's designed to secure communications between clients and caches, not between caches and authoritative servers.
Swedish regulation requires DNSSEC for government domains (MSBFS 2020:7 for the DevLegalOps nerds like me out there).
I mostly agree with the author, except for one issue: ACME is currently vulnerable to DNS MitM attacks. Let's Encrypt reduces this risk by spreading validation across the globe, to decrease the likelihood of a successful DNS MitM. However, I feel that DNSSEC is the sound solution here.
They are protecting against the wrong attack. MitM is mostly theoretical. An irresistible government order (supplemented with people holding guns) for the DNS provider to insert the "correct" _acme-challenge record is something real, and DNSSEC makes this attack impossible.
Those same people with guns can also force your registrar to request removal of the DS record. Or in the case of ccTLDs, the people with guns might already control your TLD operator and can just answer the acme-challenges themselves.
TLS isn't worth much if someone hijacks your zone and obtains X.509 certificates for your services. A smart attacker with enough resources could even re-encrypt the intercepted traffic and control enough addresses to avoid running into per prefix resource limits or other obvious failure modes.
Thanks in large part to our research partnership with the group at Princeton CITP, Let's Encrypt has deployed a system we call multi-perspective validation. We validate domain control from multiple network perspectives so that an attacker would have to pull off several simultaneously successful BGP attacks in order to hijack our validation. It's not perfect but it makes it very difficult to do. There are other improvements coming that will further improve effectiveness.
I have to confess that I don't deeply understand DNSSEC. I've set it up on domains in AWS (Route53) and GCP (Cloud DNS) and found it pretty simple and never had any issues (only .com and .org, no weirder TLDs). Are all the problems that people complain about only relevant if you manage all the DNS infrastructure yourself (rather than just letting GCP/AWS handle the KSK, rotation, etc)? Or have I set up a ticking timebomb that's going to be a big outage at some point?
The point of DNSSEC for TLDs is to prove that the data coming from the server is actually correct.
For TLS to work, you need to validate the authenticity of the host you are connected to. Obviously you can use mutual auth, but you still need to authenticate the CN. You can use a custom CA, but that requires prior setup.
DNSSEC proves nothing about correctness, it just helps show that a record was published by someone with access to the private key pointed to in the DS record.
What's "correct" or not is an entirely more nebulous thing.
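To make that concrete: the entire claim the parent zone pins down is a digest of the child zone's key. A dnspython sketch (the DNSKEY below is an example blob, not a real zone's key):

```python
import dns.dnssec
import dns.name
import dns.rdata
import dns.rdataclass
import dns.rdatatype

zone = dns.name.from_text("example.com.")
# Flags 257 marks a KSK; algorithm 13 is ECDSA P-256. The key material
# here is just a placeholder example.
dnskey = dns.rdata.from_text(
    dns.rdataclass.IN, dns.rdatatype.DNSKEY,
    "257 3 13 mdsswUyr3DPW132mOi8V9xESWE8jTo0dxCjjnopKl+GqJxpVXckHAeF+KkxL"
    "bxILfDLUT0rAK9iUzy1L53eKGQ==",
)
ds = dns.dnssec.make_ds(zone, dnskey, "SHA256")
print(ds)  # what the parent publishes; it says nothing about "correctness"
```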