For DNSSEC (easydns.org)
43 points by indolering on Aug 6, 2015 | 77 comments



The authors of this post missed the fact that there's an FAQ linked at the top of the post, where I rebutted all of these objections (and many better ones) 8 months ago. Rather than tediously recapping the same points again, I'll just direct their attention to the link:

http://sockpuppet.org/stuff/dnssec-qa.html

Dan Kaminsky, speaking at the Black Hat CSO Summit this Tuesday, attempted to advocate DNSSEC to the room. I wasn't in the room yet to see it, but I'm told the suggestion was met with loud, sustained laughter. DNSSEC is not going to happen. There is absolutely no chance that, in the wake of the Snowden leaks, we are going to forklift out a piece of core Internet infrastructure so that the security of .COM, .CO.UK, and .NET can be permanently and irrevocably signed over to the US Government. Move on.


Also see my other HN thread on how to fix it.


I tried to address all the major points in your FAQ and Adam Langley's follow-on blogpost as well, but I will take another crack at it.


Hey, I mistook the QA for your FAQ post. My overall thesis hasn't changed, but I will follow up on the new info in your QA post : )


With the current CA system, you have to worry about far more than just the CAs.

An attacker just needs to MITM the connection between the target website and a CA of his choosing, and the CA will give the attacker domain-validated certificates for the target website.

So the current SSL in browsers can be compromised not just by the 600+ CAs, but also by anyone who can launch MITM attacks on the internet (e.g. the thousands of ISPs who can publish BGP routes).

For email-based domain validation, the attacker doesn't even need MITM capabilities -- catching the domain validation email in passive bulk data collection is sufficient. The NSA can trivially get valid certificates for arbitrary domains even without the cooperation of any CA! And if the attack on the target website is discovered, it would look like $RANDOM_CA is to blame, even when the CA did nothing wrong (unless you consider the whole idea of domain-validated certificates to be wrong).

DNSSEC+DANE has the potential to be much much more secure than the status quo. The problem is that no one wants to be the first to implement client-side DNSSEC validation and get blamed for failing to connect to incorrectly configured domains.


That is inaccurate. Several mainstream vendors have implemented DNSSEC+DANE clientside resolvers. Chrome had one. OS X briefly had one. They were removed. DNSSEC was implemented on mainstream platforms, then taken out.


Is there a way to see if any CA has issued a cert for a site? If all CAs posted the certs they sign to some database, then there could be services that alert you and you'd know right away and raise an alarm. If such a system doesn't exist, why not?


Certificate Transparency (http://www.certificate-transparency.org) is a Google initiative working on just that. It's already required for EV certs in Chrome.
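The tamper-evidence in CT comes from an append-only Merkle hash tree; the hashing rules in RFC 6962 are simple enough to sketch. The following is an illustrative Python implementation of just the leaf/node hashing and tree-head computation (the `cert-N` entries are placeholder bytes, not real log entries):

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962: a leaf hash is SHA-256 over a 0x00 prefix plus the entry;
    # the prefix domain-separates leaves from interior nodes.
    return hashlib.sha256(b"\x00" + entry).digest()

def merkle_root(hashes: list[bytes]) -> bytes:
    # RFC 6962 tree hash: split at the largest power of two strictly
    # smaller than n, recurse on each side, combine with a 0x01 prefix.
    n = len(hashes)
    if n == 1:
        return hashes[0]
    k = 1
    while k * 2 < n:
        k *= 2
    left = merkle_root(hashes[:k])
    right = merkle_root(hashes[k:])
    return hashlib.sha256(b"\x01" + left + right).digest()

entries = [b"cert-1", b"cert-2", b"cert-3"]
root = merkle_root([leaf_hash(e) for e in entries])
print(root.hex())
```

Because any change to any logged certificate changes the root, a log operator who quietly drops or rewrites an entry can be caught by anyone comparing signed tree heads.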


Have any CAs publicly committed to submit all their generated certs to that?

Does https://pki.google.com/ do that?


Chrome currently requires that all extended-validation certs with an issuance date of 1/1/2015 or later be submitted to the CT logs. (Otherwise it will not show up in the UI as an EV cert.)

As with all browsers, only a subset of CAs are trusted to provide EV certs in the first place, regardless of whether they include the EV bits in the certificate. And this policy only applies to EV certs, not to all certs by an EV-capable CA.


With some exceptions, at least. For example, twitter.com shows up as an EV green bar while my browser tells me there's no CT information available. The CT website mentions that a list of exceptions would be generated, but I haven't been able to find any public list, and there's been no update to their rollout plans in over 7 months [1].

Oddly enough I noticed the other day that certificate-transparency.org isn't even accessible over https.

[1] http://www.certificate-transparency.org/ev-ct-plan


The twitter.com cert I see in my browser was issued on 9 September 2014, which is before the 1 January 2015 threshold.

(Eventually this should go away because a compromised CA key can obviously just lie about the time, but this is just a migration step.)


You're right, I missed that at the top of the doc. I assumed all of the details would be in the numbered list.


I think a good chunk of the foot-dragging on CT has been a concern about customers not wanting certs for private names to be publicized (yet wanting publicly-valid certs for those names). It's unclear if there are concrete customers with this actual concern, but the argument goes that if Uber gets a cert issued for lyft-acquisition.uber.com in advance of an announcement, they don't want that leaked in CT logs.

The current Internet-Draft for RFC 6962bis allows them to register this in CT logs as "?.uber.com", which is enough for Uber's sysadmins to raise an alarm if something matching that pattern was issued without their knowledge.

https://tools.ietf.org/html/draft-ietf-trans-rfc6962-bis-08#...


This is unconvincing. For instance:

"There are methods to prevent leaking" isn't the same as "leaking is prevented by default." This sounds like a protocol design flaw that will end with implementors being blamed.

I think the way forward is a protocol like Stellar, not one like DNSSEC. Also, EdDSA or GTFO


> "There are methods to prevent leaking" isn't the same as "leaking is prevented by default."

This is from the zone enumeration section and plain DNS doesn't do anything to prevent zone enumeration by default either.

> EdDSA

We are working on standardizing EdDSA, but it takes a long time to get to deployment.


Good, come back when it's standardized. Until then, stamps [REJECTED] on the DNSSEC folder. Death to '90s crypto.


I thought part of successfully killing off '90s crypto was deploying useful, obviously-correct systems without waiting for standards committees to bikeshed them to death (cf. CFRG).


Sorry what were you saying? I couldn't hear you over the bitter arguments about endian-ness.


> Until then, stamps [REJECTED] on the DNSSEC folder death to 90's crypto

If you read TFA you would see that we are transitioning to P-256. FWIW, ECC crypto is really slow and we will need to transition to post-quantum crypto in another 10 or 15 years.


"ECC crypto is really slow"? Please explain what you mean by that.


ECC signature verification is slower and big resolvers are complaining about it. Ed25519 is faster than the equivalent P curve but it's still slower than log-based crypto.


The alternative in DNSSEC to ECDSA isn't "log based crypto".


I wrote the article. ECC crypto can mean ECDSA, which is also flawed '90s crypto. I specifically demanded EdDSA, which can more easily be implemented at high speed and in constant time.

I'm sorry you're so laser focused on your own writing, but the DNSSEC debate is bigger than you.


That's an implausible claim, because even the IRTF CFRG can't standardize EdDSA.

At any rate, you can say this about literally any bad cryptosystem. "We're working on standardizing X" is just another way to say "someday maybe we'll have X".


So as I read #1, I'm trying to figure out what angle they're trying to argue here. The fact that the current TLD and CA systems are baked around the concept of central authorities who are controlled either directly or indirectly by governments is known, and it's not a valid justification for DNSSEC also having that same pitfall. We're already well aware of the threats to security that come from centralized control like that; let's not build a new system that puts all our eggs in the same basket.


The argument is that if you choose a traditional TLD your eggs are in the centralized basket. All of the mitigation techniques proposed by Marlinspike for HTTP can be applied at the DNS level as well.

And even if you choose a decentralized DNS solution, you still need DNSSEC.

The point is that DNSSEC has little to do with the centralization issue and that it can do a lot to improve the security and privacy of the Internet as a whole.


> And even if you choose a decentralized DNS solution, you still need DNSSEC.

But that's not true. DNSCurve does provide a decentralized trust mechanism. If you are willing to embrace the pitfalls and advantages of such an approach, you don't need DNSSEC.


As covered in TFA, DNSCurve provides encryption between a DNS resolver and the client. It doesn't allow domain owners to sign their records.

Dan Kaminsky did a good job taking DNSCurve apart back in 2011: http://dankaminsky.com/2011/01/05/djb-ccc/

"I observe this is essentially a walk of Zooko’s Triangle, and does not represent an effective or credible solution to what we’ve learned is the hardest problem at the intersection of security and cryptography: Key Management. I conclude by pointing out that DNSSEC does indeed contain a coherent, viable approach to key management across organizational boundaries, while this talk — alas — does not."


"If you are willing to embrace the pitfalls and advantages of such an approach, you don't need DNSSEC."

That was in reference to Zooko's Triangle. You have to give up one of the other two properties if you are doing decentralized. The obvious choice is giving up trustworthy human-readable host names.


cbsmith: I'm not sure what you mean ... using a traditional TLD means that you made the choice to add third party trust to your system. Adding the CA system just adds additional third parties that you have to trust.


SSH relies almost exclusively on the first connection to a server being correct and an attacker being unable to perpetuate a MITM attack against a given host.

Do you rely almost exclusively on the first connection to a server being correct? Apparently. Does SSH? No.


In practice, I have few ways of verifying SSH fingerprints that don't involve trusting the SSL PKI.


DNSSEC got PKI wrong, but what it got right was building authentication (DANE, etc) on a standard protocol (DNS) for exchanging key/value data.

Like DNSSEC, whatever protocol comes next ought to support offline signing of k/v data, so that it can be served securely by organizations that can't afford HSMs. And it should be used in a similar way to DANE -- for bootstrapping other protocols in a general way.
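For a concrete sense of the DANE half: a TLSA record is exactly such a signed key/value entry, published at `_port._proto.name`, whose value is (certificate usage, selector, matching type, digest). A minimal sketch of building the rdata, with a placeholder byte string standing in for a real key's DER-encoded SubjectPublicKeyInfo:

```python
import hashlib

def tlsa_rdata(spki_der: bytes, usage: int = 3, selector: int = 1,
               matching_type: int = 1) -> str:
    # RFC 6698 TLSA presentation format: usage selector matching-type digest.
    # Usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo), matching
    # type 1 (SHA-256) is the common "pin this exact key" combination.
    digest = hashlib.sha256(spki_der).hexdigest()
    return f"{usage} {selector} {matching_type} {digest}"

# Placeholder bytes standing in for a server key's DER-encoded SPKI.
fake_spki = b"\x30\x82\x01\x22placeholder-spki"
record = tlsa_rdata(fake_spki)
print(f"_443._tcp.example.com. IN TLSA {record}")
```

The record itself carries no secret, so it can be generated and signed offline along with the rest of the zone, which is the property the parent comment is pointing at.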


> DNSSEC got PKI wrong, but what it got right was building authentication (DANE, etc) on a standard protocol (DNS) for exchanging key/value data.

~~Huh? That is what a PKI consists of....~~

> Like DNSSEC, whatever protocol comes next ought to support offline signing of k/v data

~~DNSSEC was, from the outset, designed to support offline signing.~~

Update: Whoops, I misread what you wrote! : )


> Huh? That is what a PKI consists of....

How do you sign a key/value data set with a certificate from Verisign? Are there tools that can automatically follow this delegation?

In DNSSEC once your zone is signed, you can upload an entire signed tree of subdomains. With current CA's, you get one certificate signed for xxxxxxx.com, and have to go back to Verisign for any subdomains (keys) you decide to add later.

I can't add a signed value for __custom_protocol__._wellknown.me.com that uses the existing x509 infrastructure. If I add another server, I can't delegate to it.

> DNSSEC was, from the outset, designed to support offline signing.

Yes... that's what I said. The replacement should support it, too.


> How do you sign a key/value data set with a certificate from Verisign?

If your domain root is signed by Verisign, you get your domain key signed through your registrar. The actual procedure varies with the registrar, from impossible to fully automated. DNS is exactly a key/value dataset.

I just don't understand the point of your other questions:

>Are there tools that can automatically follow this delegation?

Just list the domain. You'll see all delegations there.

> In DNSSEC once your zone is signed, you can upload an entire signed tree of subdomains. With current CA's, you get one certificate signed for xxxxxxx.com, and have to go back to Verisign for any subdomains (keys) you decide to add later.

> I can't add a signed value for __custom_protocol__._wellknown.me.com that uses the existing x509 infrastructure. If I add another server, I can't delegate to it.

Are you arguing that the specific certificates are better? Why?


> Just list the domain. You'll see all delegations there.

> Are you arguing that the specific certificates are better? Why?

We seem to be having a communication issue.

From [1]: "Delegation problem: CAs cannot technically restrict subordinate CAs from issuing certificates outside a limited namespaces or attribute set; this feature of X.509 is not in use. Therefore a large number of CAs exist on the Internet, and classifying them and their policies is an insurmountable task. Delegation of authority within an organization cannot be handled at all, as in common business practice."

With DNSSEC this is not a problem. Once I have a DS record pointing to my zone, I can delegate as much as I want. This allows a great deal of flexibility, as I've already mentioned.

[1]: https://en.wikipedia.org/wiki/X.509#Problems_with_certificat...
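The delegation mechanics are compact: a DS record in the parent zone is just a hash over the child's DNSKEY, so the parent commits to the child's key without ever touching the child's zone contents. A sketch of the RFC 4034 digest computation, using made-up key bytes rather than a real public key:

```python
import hashlib
import struct

def name_to_wire(name: str) -> bytes:
    # DNS wire format: each label is length-prefixed and the name ends
    # with a zero byte; lowercased for RFC 4034 canonical form.
    wire = b""
    for label in name.rstrip(".").lower().split("."):
        wire += bytes([len(label)]) + label.encode("ascii")
    return wire + b"\x00"

def ds_digest(owner: str, flags: int, algorithm: int, pubkey: bytes) -> str:
    # DS digest (digest type 2, SHA-256) = SHA-256(owner-name | DNSKEY RDATA),
    # where DNSKEY RDATA = flags | protocol (always 3) | algorithm | key.
    rdata = struct.pack("!HBB", flags, 3, algorithm) + pubkey
    return hashlib.sha256(name_to_wire(owner) + rdata).hexdigest().upper()

# Hypothetical KSK (flags 257) for example.com, algorithm 13 (ECDSA P-256).
digest = ds_digest("example.com", 257, 13, b"not-a-real-public-key")
print(digest)
```

Only this one digest lives in the parent; everything below it (subdomains, TLSA records, further delegations) is signed by keys the child controls.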


Ah, I think the delegation problem with the X.509 / HTTPS PKI is a (fairly massive) implementation facepalm, not anything inherent to a PKI itself. So I think that's why people were confused by your statement "DNSSEC got PKI wrong" -- that doesn't seem wrong, it seems like a good thing for a PKI to support this. Do you think that a DNSSEC replacement should match the X.509 PKI in this regard?


Ah, that makes sense. Sorry for the confusion I created. I meant that DNSSEC PKI is wrong in the sense that it is hierarchical, and the root keys are centrally controlled by a biased party. A better PKI would probably look like Convergence [1], where trust is derived from a hybrid CA/"web of trust" system formed by a network of notaries.

The delegation implemented in DNSSEC is a good thing. I didn't think about how that feature would quite reasonably be lumped in with "PKI".

Oops.

[1]: http://convergence.io/


Marlinspike abandoned Convergence a year after it debuted. It's currently broken on most (all?) platforms.

As covered in the article, Convergence-style triangulation of cryptographic information is application/protocol neutral. Indeed, a DNS-level implementation would be much more powerful than what can be accomplished with HTTPS.


He moved from Convergence to TACK, which is on hold during the rollout of HPKP, which is like 80% of the value proposition of TACK.


What’s in the missing 20%?


It's still vulnerable to MITM attacks on the first connection.


Ah. What is TACK’s proposal to prevent that from happening?


What does "moving on to ECC" in HTTPS have to do with the weak crypto in DNSSEC?

It seems to me like they're brushing this issue aside, especially with their "but seriously, you don't really expect the NSA to spend tens of millions of dollars to crack a server's security, do you?" comment.

Um, if it's a site that serves millions of people I most certainly do! I'd also expect them to spend that much on any Lavabit-like service with tens of thousands to hundreds of thousands of users, too.


Where do you see that? The article seems to only talk about moving to ECC in the context of better DNSSEC security and message sizes that are an order of magnitude smaller than RSA's.


They are moving to ECC for DNS record signing, not HTTPS certificates.


Personally I think a new DNSSEC2 based on for example online signing might be a good idea.


There is nothing stopping you from doing online signing with DNSSEC. FWIW, DNSSEC is on its third major iteration, 2/3rds of the TLDs have deployed it, and major DNS service providers are getting into the act.


You can even avoid domain enumeration by doing online signing with DNSSEC. It's just that people still prefer the offline option.


And the offline signing option adds complexity to the protocol.


Ok, my point about people preferring the offline option wasn't criticism.

I'd say its being the preferred option is strong evidence that the extra complexity is worth it.


So you are arguing to remove the offline signing capabilities?


Yes.


That would force companies to either own all of the front-end server farms or store their keys on machines in untrustworthy environments.


Yes: a choice between two very unattractive options. Isn't DNSSEC great?


Uh? You can run everything in offline mode, unless you really care about trivial zone enumeration.


I am not talking about the current DNSSEC, of course.


I.e., the way SSL/TLS is run today.


What do you mean?


Signing DNS records using a machine connected to the internet, instead of passing the zone files to an air-gapped signing machine over a sneakernet. Eventually, you run into the issue of updating the "offline" machine and many places just have an "online" machine that gets the zone files through a highly restricted interface.


We're currently working on providing DNSSEC for our customers. The problem with online signing is that we really don't want all the keys to be physically present on our DNS servers, as many of them are hosted in other companies' data centres.

Having a central online signing server is bad for availability, as DNS down time is really not acceptable.

That basically only leaves offline signing.


So have three online signers.


Yea, HSMs are not that expensive I think.


DNSSEC has several problems, some of which are in the design, as tptacek likes to mention.


This post directly addresses each point that Ptacek raised in his "Against DNSSEC" blogpost and FAQ.


It addresses them poorly and unconvincingly. Same argument we've seen over and over -- now in a nice clean FAQ!

Doesn't change the design problems. Just tries to hide them behind half-truths and "but there's nothing better..."


But what do you mean by "online signing" ?


One of the problems is that it exposed DNS zone information, and one of the reasons it did that was that it was designed to require signing only once; after that, the private key didn't have to be used again until the zone info changed.


Are you talking about zone enumeration? That's covered in the article.


Can you provide an NSEC white-lie response to an arbitrary query without an online key?

(I'm not sure if this is what was being asked, but I'm curious about the answer to that, either way.)


No, not in the narrowest way. You can go 'somewhere in between' at the cost of blowing up your zone size tremendously, but it's not worth it.


Don't think so.


I believe so: http://dankaminsky.com/2011/01/05/djb-ccc/#whitelies

(Phreebird is an online signer, but I don't see why an offline signer couldn't generate these proofs.)


Because you can't predict offline which names you need to obscure with fake NSEC records.

If there is any company in the world actually using Phreebird in production, I'd --- for more than one reason --- like to know about it.


I don't know of any either, but there are plenty running PowerDNS in online 'white lies' signing mode. And then, of course, there is Cloudflare.

(And indeed, it cannot be done offline - although doing much narrower NSEC/NSEC3 ranges than 'normal' could be done offline).
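For reference, the "white lies" scheme (RFC 4470) synthesizes, per query, an NSEC record whose range covers only the queried name, so nothing else in the zone leaks -- which is exactly why it needs the key online. A rough sketch of the epsilon-neighbor construction, glossing over the full canonical-ordering and label-length edge cases the RFC handles:

```python
def successor(qname: str) -> str:
    # The name sorting immediately after qname in DNSSEC canonical
    # order: prepend a new leftmost label containing a single null byte.
    return "\\000." + qname

def predecessor(qname: str) -> str:
    # A name sorting immediately before qname: decrement the last octet
    # of the first label and pad with 0xff octets (simplified; RFC 4470
    # spells out the label-length limits this sketch ignores).
    first, _, rest = qname.partition(".")
    lowered = first[:-1] + chr(ord(first[-1]) - 1)
    return lowered + "\\255" * 63 + "." + rest

def white_lie_nsec(qname: str) -> tuple[str, str]:
    # NSEC owner -> next-name pair proving only qname's nonexistence.
    return predecessor(qname), successor(qname)

owner, nxt = white_lie_nsec("doesnotexist.example.com")
print(f"{owner} NSEC {nxt}")
```

Since the covering pair depends on the specific name queried, it can only be signed at query time -- the point tptacek and ahu are making about offline signers above.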



