Taking a step back here... shouldn't proving ownership of a domain involve the domain registrar in some way, rather than whoever happens to host your DNS? The registrar knows who you are, so maybe it could provide an interface to let you prove to others that you own the domain. Like, I dunno, let the owner of the domain generate a unique one-time key valid for 30 minutes, and provide an API at a standard URL that allows anybody to check ownership if given that key.
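Sketched out, the registrar side could be as small as this (everything here, the endpoint shape and the token handling included, is hypothetical, not any real registrar's API):

```python
import secrets
import time

# Hypothetical registrar-side store of one-time ownership tokens.
# In a real deployment this would live behind the registrar's API.
_tokens = {}  # token -> (domain, expiry_timestamp)

TOKEN_TTL = 30 * 60  # 30 minutes, as proposed above

def issue_token(domain):
    """Domain owner asks the registrar for a one-time ownership key."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = (domain, time.time() + TOKEN_TTL)
    return token

def check_ownership(domain, token):
    """Anyone (e.g. a CA) asks the registrar: does this key prove ownership?"""
    entry = _tokens.pop(token, None)  # one-time: consumed on first use
    if entry is None:
        return False
    claimed_domain, expiry = entry
    return claimed_domain == domain and time.time() < expiry

# Usage: owner requests a key, hands it to the CA, CA verifies exactly once.
t = issue_token("example.com")
assert check_ownership("example.com", t)      # valid the first time
assert not check_ownership("example.com", t)  # already consumed
```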
DNS security is a red herring. This is a BGP attack; the attackers control IPs. They can intercept traffic --- including DV cert checks --- without touching nameservers.
> They can intercept traffic --- including DV cert checks --- without touching nameservers.
Huh? I'm talking about certificate issuance and domain ownership here. If a CA can't verify the domain with the registrar then they fail to issue a certificate, it's as simple as that. It's not like they can get a forged response over HTTPS...
> An attacker who controls arbitrary IP addresses can get a DV cert issued without control of the DNS.
What? How? Did you read my comment at all? I was saying the CA needs to have a way to verify ownership with the domain registrar. Over HTTPS, obviously. An attacker can't forge a response, so the worst case is the cert doesn't get issued, which it very much shouldn't be if ownership cannot be verified.
I can't tell whether you're talking about how you think things should work or making claims about how they actually do work. If the latter: no. DV certificate checks don't use HTTPS to validate ownership. The point of DV certificate checks is to provision HTTPS in the first place.
If you control BGP, you can thwart DV checks and get a certificate issued. You can do that without touching the DNS at all.
> I can't tell whether you're talking about how you think things should work or making claims about how they actually do work.
How can you not tell? I was extremely explicit that this was the former in the very first sentence of my initial comment:
>>>>>> Taking a step back here... *shouldn't* proving ownership of a domain involve the domain registrar some way, rather than involving whoever happens to host your DNS?
The only reason we have DV certs is because people find normal certs painful to apply for. They're a convenience. Similarly: LetsEncrypt does HTTP challenges, relying on the DNS as a side effect, because lots of people who need certificates do not control their DNS records, even if they "own" the domain.
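For concreteness, the HTTP-01 challenge Let's Encrypt uses boils down to serving a token-derived string over plain HTTP at a well-known path; here's a rough sketch of the key-authorization format from RFC 8555 (the account key below is a toy placeholder, not a real ACME key):

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk):
    """RFC 7638 thumbprint: SHA-256 over canonical JSON of the key."""
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def key_authorization(token, jwk):
    """The string the client must serve at
    http://<domain>/.well-known/acme-challenge/<token> -- plain HTTP,
    which is the whole point: there's no HTTPS yet to rely on."""
    return f"{token}.{jwk_thumbprint(jwk)}"

# Toy account key (a real one would be the ACME account's public JWK).
account_jwk = {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."}
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA", account_jwk))
```

Note the validation fetch happens over plain HTTP, which is exactly why an attacker controlling the route to the web server can pass the challenge.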
How does either solve this if the bad actor is able to inject routes at will? Just redirect whois.iana.org to their servers. Then they can prevent you from accurately determining who the registrar is in the first place.
Where is the information about whether or not you should expect a TLD to be signed stored? Is it on the client side? If a client receives an unsigned whois response, what does it do?
This isn't something I was aware of, and I'm not having much luck in finding out the implementation details.
> Where is that information on whether or not you should expect a TLD to be signed stored? Is it on the client side? If a client receives an unsigned whois response, what does it do?
I can imagine a zillion different approaches... most obvious (not necessarily the best) one being to ask an IANA server over a normal TLS connection whether it should expect a TLD's records to be signed. And you can obviously cache that response for a while.
Remember the point here is to validate domain ownership, which everybody already understands to be a big deal. If you can't get any trustable response from anybody, then I would expect it is your duty to refrain from issuing a certificate for that domain.
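One possible shape for that idea (the IANA endpoint here is purely hypothetical; in the actual DNSSEC design, resolvers learn whether a TLD is signed from the signed DS records in the root zone):

```python
import time

# Sketch of the "ask once, cache for a while" idea, assuming some
# hypothetical trusted fetch (e.g. an HTTPS query to an IANA endpoint).
CACHE_TTL = 24 * 3600
_cache = {}  # tld -> (answer, fetched_at)

def is_tld_signed(tld, fetch, now=time.time):
    cached = _cache.get(tld)
    if cached and now() - cached[1] < CACHE_TTL:
        return cached[0]
    answer = fetch(tld)  # the trusted lookup, whatever form it takes
    _cache[tld] = (answer, now())
    return answer

calls = []
def fake_fetch(tld):
    calls.append(tld)
    return tld in {"com", "org", "de"}  # pretend these TLDs are signed

assert is_tld_signed("com", fake_fetch) is True
assert is_tld_signed("com", fake_fetch) is True  # served from cache
assert calls == ["com"]                          # only one real lookup
```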
There are some drafts, and some NICs that have custom WHOIS solutions over custom authenticated protocols that don’t integrate with the normal WHOIS functionality.
But the larger issue is that there is no globally integrated WHOIS system – e.g. to view a WHOIS record for a .de domain you currently need to solve a captcha and provide a valid reason (and can't do it in an automated way).
Crypto maximalists don't like additive security mechanisms that reduce the risk and potential scope of attacks, but may not address every possible attack vector. Often the effect of crypto maximalism is reduced security. In the real world it takes multiple mitigating technologies to address a problem, and perfection is not always achievable. See RFC7435.
DENIC has been compliant with the GDPR for months already, and its WHOIS database is still running – you just can’t expose all fields of the WHOIS to any unauthenticated viewer.
I don't think it makes sense to force every domain owner on the planet who wants HTTPS to understand how to generate, store, use, and maintain (more) public/private keys, especially in a secure manner. Having to deal with SSL certificates is enough of a pain already, and for this one you'd have to use another program to sign requests, so there's an added layer of pain. In fact, I don't think it makes sense to add to the burden of domain owners so much at all, even if they could all do this.
Not to mention that, more practically, there's no need (and potential harm) in tying the ownership records to a particular cipher. Really, it's the registrar's job to deal with domain ownership; ideally it shouldn't involve you at all. And as a near-corollary, registrars could probably handle long-term key security better than the average domain owner.
In my mental model, the requirements of the modern web are way above what they were in the 1990s.
Even a single static landing page would be expected to have traffic tracking, if only for the sake of load balancing.
And even if the page really is just that - a static page - the chance it's hosted by the content owner themselves is shrinking every day.
So frankly - yes. I believe today is the right time to start adding/replacing layers in the stack. After all, we've come a long way in terms of software deployment tools, so the stack is far more interchangeable now.
Uh, how? Are you assuming that a single BGP leak would be enough to cause e.g. a letsencrypt misissuance for the domain? It sounds like (from other comments) they have a global round-robin resolver setup for their DNS challenges.
> 2. Only hijack one of the ranges, and do not respond to other domains (causing SERVFAIL), so other domains will resolve unaffected
I think that's exactly what they did. I saw people posting SERVFAILs during the outage.
They do, and they don't. They have a round-robin setup for which resolver validates the DNS challenge - however, they do not validate the DNS challenge from several resolvers[1]. So, if that particular resolver got caught in the BGP leak while doing a challenge verification, you could get a valid cert.
Lots of ifs and buts - but it is certainly possible.
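The usual proposed fix is validating from multiple vantage points and requiring agreement; a toy sketch of that quorum idea (not Let's Encrypt's actual implementation):

```python
from collections import Counter

def quorum_validate(answers, quorum):
    """Accept a DNS challenge answer only if at least `quorum` of the
    vantage points agree on the same response. A BGP leak that captures
    only one resolver's view then fails to reach quorum."""
    if not answers:
        return None
    value, count = Counter(answers).most_common(1)[0]
    return value if count >= quorum else None

# Three vantage points; one was caught in the hijack: still fine.
assert quorum_validate(["ok-token", "ok-token", "evil-token"], quorum=2) == "ok-token"
# Single-resolver validation, and the hijacker caught that resolver: accepted.
assert quorum_validate(["evil-token"], quorum=1) == "evil-token"
```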
No, it clearly would not have. In a BGP attack, attackers control IP addresses. Whatever your signed DNS record points to, attackers can simply inject prefixes for.
The invalid BGP route advertisements redirected folks to the attacker's DNS servers, which served invalid responses to queries.
In this instance, had myetherwallet.com been signed, and had a user been using a DNSSEC validating resolver, they would not have visited the invalid site. The attacker's DNS responses would not have validated.
Had myetherwallet.com been DNSSEC signed anyone using Google's public DNS resolver at 8.8.8.8 would not have visited the invalid site.
Again: the attackers control IP addresses. All of them (or, enough to randomly claim AWS addresses from places in the topology nowhere close to AWS). It doesn't matter what the DNS says.
So with global anycast BGP, people usually use it for short transactions, preferably single-packet requests, and redirect longer TCP streams to non-anycast addresses so that route flapping doesn't cause problems. The idea is that if one DNS packet goes to ns1 and the next goes to ns2 on the other side of the world, we're cool: I get my answer from both, as those two packets are different queries. But if packet 1 of my TCP stream goes to lb1, and packet 2 goes to lb2 on the other side of the world? lb2 is going to send me an RST packet, because it has no idea about the TCP connection I'm speaking of.
In the legitimate world, this usually means you global anycast your DNS servers, which then respond to queries with IP addresses of load balancers that are only announcing from one location, out multiple providers.
I think this maybe could be extrapolated to a BGP attack of this nature and could explain why the attacker, in this case, focused on attacking the DNS server rather than a different piece of the stack.
I'm guessing that your global anycast setup in the legitimate world is going to be way more reliable and flap a lot less than whatever the attacker can cobble together out of upstreams with inadequate prefix filtering.
Note, I'm not trying to claim that the attacker couldn't take over the end machines that serve longer TCP streams; I'm just pointing out that the attacker has good reason to attack DNS first if possible, for the same reasons as the legitimate operators have for using global anycast for DNS and not for HTTP.
(as further trivia, I have seen setups where the http server was global anycast, but that webserver only returned really short responses, 3xx redirects, again redirecting the request to a non-anycast webserver.)
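That redirect pattern is tiny in practice; a sketch of it (the hostnames and region names are made up):

```python
# Sketch of the "anycast frontend that only 302s" pattern: the anycast
# node answers in one short response and pushes the long-lived TCP
# session to a regional, non-anycast hostname.
REGIONAL_HOSTS = {
    "us-east": "lb1.us-east.example.com",
    "eu-west": "lb2.eu-west.example.com",
}

def redirect_response(region, path):
    host = REGIONAL_HOSTS.get(region, REGIONAL_HOSTS["us-east"])
    # Short enough to fit in the very first response; no long TCP stream
    # ever rides the anycast address, so route flaps can't break it.
    return f"HTTP/1.1 302 Found\r\nLocation: https://{host}{path}\r\n\r\n"

print(redirect_response("eu-west", "/download"))
```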
no, in this case, the attackers controlled the IP address of the DNS server, not of 1.2.3.4 (the actual web server)
DNSSEC would have prevented you from accepting the invalid records their fake DNS servers served.
I think there's also a tradeoff. If they had redirected a large chunk of AWS traffic to their machines it would have been hard to handle, by redirecting Route53 and poisoning DNS they get a big bang for their buck because the public resolvers (like 8.8.8.8, 9.9.9.9 and 1.1.1.1) will take the poisoned records and do the real work of serving the DNS records.
I don't see how this really solves this attack. It solves the specific way they did it - standing up their own DNS server on the IPs they started advertising - but really, all they have to do is announce the IP(s) that are in the DNSSEC-signed records. Then you're still getting the traffic, and nothing throws any red flags on the DNSSEC side, because nothing has changed as far as DNS goes.
The problem is that when you can hijack the routing for an IP, there's nothing that can realistically be done, with how the internet works today, to protect against it. Basically every system of validation requires access to some outside authority that can verify authenticity, and if you can announce the routes to that authority's IPs from your own infrastructure, you can fool the validation.
If I'm not mistaken, in this scenario they only hijacked the DNS traffic and used it to resolve a domain to another IP - i.e., they changed the DNS zone. DNSSEC would have prevented this, as any change to the zone needs to be signed, so it would have required them to also get the private key used to sign the zone.
It fails to solve the issue if the DNS resolver doesn't check DNSSEC signatures, though.
If they had managed to hijack the route to the server itself, there would have been no need to hijack DNS at all - that's a different issue, which can't be solved at the DNS level, since DNS isn't involved in that attack.
Well, that's specifically my point - fundamentally the two attacks are the same. The key part here is the BGP leak itself. This specific attack would potentially be solved by DNSSEC, but if you can poison the internet routing table, you can just advertise a route for the IP you want to steal traffic from just as easily.
The potential downside there is the specificity - last I checked, the internet routing table won't accept anything more specific than a /24 - so if you can't announce a more specific route, AS-path length becomes the next tie-breaker. You might be in a situation where you can announce the DNS CIDR with more specificity but not whatever IP is in the A record... but that's pretty far outside of your control and could just as easily flip the other way.
All of this is to say: I think the important part of this whole thing was the BGP hijack, and not necessarily the lower level specifics, because the hijack isn't dependent on a lot of those specifics.
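To make the specificity point concrete, here's a toy best-path selection: longest matching prefix wins, shortest AS path breaks ties (prefixes are documentation ranges, ASNs are made up):

```python
import ipaddress

def best_route(destination, routes):
    """Toy BGP-style selection: most specific matching prefix wins;
    AS-path length breaks ties. `routes` is a list of (prefix, as_path)."""
    ip = ipaddress.ip_address(destination)
    candidates = [
        (net, path) for net, path in
        ((ipaddress.ip_network(p), path) for p, path in routes)
        if ip in net
    ]
    return max(candidates, key=lambda r: (r[0].prefixlen, -len(r[1])))

routes = [
    ("203.0.112.0/22", ["64500"]),           # legitimate aggregate
    ("203.0.113.0/24", ["64666", "64501"]),  # hijacker's more-specific
]
net, path = best_route("203.0.113.9", routes)
# The more-specific /24 wins despite its longer AS path.
assert str(net) == "203.0.113.0/24"
```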
Wonder if services like Let's Encrypt were affected. I can imagine a scenario where a small hijack of DNS could allow properly signed certificates to be issued for domains the requester doesn't own. If I operated a CA service, I would carefully examine the requests received during this time frame. Maybe someone can audit the Certificate Transparency logs for anomalous activity during this period.
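crt.sh, for example, exposes CT data as JSON (`https://crt.sh/?q=<domain>&output=json`), so once fetched, the audit is mostly filtering by issuance time; a sketch over sample stand-in data (the window below is approximate):

```python
from datetime import datetime

def certs_in_window(entries, start, end):
    """Filter already-fetched CT log entries (crt.sh-style JSON with a
    `not_before` field) down to those issued inside the incident window."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    return [
        e for e in entries
        if start <= datetime.strptime(e["not_before"], fmt) <= end
    ]

# Sample data standing in for a real crt.sh response.
entries = [
    {"issuer_name": "C=US, O=Let's Encrypt", "not_before": "2018-04-24T11:27:00"},
    {"issuer_name": "C=US, O=Let's Encrypt", "not_before": "2018-03-01T00:00:00"},
]
hits = certs_in_window(
    entries,
    datetime(2018, 4, 24, 11, 0),  # approximate incident window
    datetime(2018, 4, 24, 13, 0),
)
assert len(hits) == 1
```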
BGP hijacks sadly happen on a weekly basis. It’s not new and there are hundreds more cases. It’s only noteworthy this time because high profile targets were affected instead of these being used for spear hijacking, etc.
In a theoretical scenario like this where a hijacker used Let's Encrypt to receive a cert, rather than using a self-signed one, it's worth mentioning that there is a defense against that: HPKP - though much like HSTS, this is only effective for browsers that have previously visited the site.
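For reference, an HPKP pin is just the base64 of the SHA-256 of the certificate's DER-encoded SubjectPublicKeyInfo (RFC 7469); the bytes below are placeholders, not a real key:

```python
import base64
import hashlib

def spki_pin(spki_der):
    """RFC 7469 pin: base64(SHA-256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

fake_spki = b"\x30\x82\x01\x22placeholder-spki-bytes"  # not a real SPKI
pin = spki_pin(fake_spki)
# The header a site would send (a real deployment also needs a backup pin):
print(f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000')
```

The same pin can be produced from a real certificate with openssl: `openssl x509 -in cert.pem -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64`.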
It seems like they would have had much better and more effective targets. They could have attacked Bitcoin or other cryptocurrency mining pools and just used the hashing power. It would have worked almost instantly, and they wouldn't have had to deal with SSL/TLS. Say a pool mines 20% of the blocks: after 2 hours (roughly 12 blocks network-wide) they would have mined ~2 blocks of 12.5 BTC, so 25 BTC, around $220K.
(Although they would likely not get all the hashing power, because not everyone's traffic is taking that route.)
How could redirecting traffic of a Bitcoin mining pool make an attacker money? The transaction to which the block rewards goes is part of the block hash and thus still under full control of the miner that has the matching private key.
So what’s happening with the deployment of RPKI[0]? It would seem that owners of IP address space would have the incentive to use it - but perhaps routers don’t implement BGPSec widely enough?
RPKI doesn't really solve the problem, as it is only a proof of origin. You can still inject routes; just make sure you add the original AS as the origin of those routes.
BGPsec would solve this issue, but it is currently not deployed at all (and won't be for a long time).
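A toy version of RPKI origin validation makes the gap obvious: forge the rest of the AS path but keep the legitimate origin AS at the end, and the check still passes (the ROA table and ASNs here are made up, and real validators handle maxLength violations more carefully):

```python
import ipaddress

# Toy ROA table: (prefix, max_length, authorized origin AS).
ROAS = [("203.0.113.0/24", 24, 64500)]

def rpki_origin_valid(prefix, origin_as):
    """Simplified origin validation: does some ROA cover this prefix
    and authorize this origin AS?"""
    net = ipaddress.ip_network(prefix)
    for roa_prefix, max_len, asn in ROAS:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.subnet_of(roa_net) and net.prefixlen <= max_len:
            return "valid" if origin_as == asn else "invalid"
    return "not-found"

assert rpki_origin_valid("203.0.113.0/24", 64500) == "valid"
assert rpki_origin_valid("203.0.113.0/24", 64666) == "invalid"  # wrong origin
# The parent's point: an attacker who announces the prefix with the
# legitimate origin AS (64500) at the end of a forged path still passes,
# because only the origin is checked, not the path.
assert rpki_origin_valid("203.0.113.0/24", 64500) == "valid"
```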
An interesting aspect of this crypto boom over the last year has been the stress test on public internet infrastructure. Just the other day myetherwallet was hit with a nasty hack.
They did. To have an AS number you've gotta register it with a regional internet registry (in this case RIPE, as noted by d215). You can always look up an AS number and get ownership details.
Sure, let's use blockchain for everything. I mean, why not? It's clearly the perfect solution to every problem that has ever occurred in any distributed system.
Security, like DNSSEC provides - but only 1% of domains use DNSSEC. There are other advantages, like uncensorability and direct access to DNS records; the disadvantage is speed.
This implementation requires you to download tens of gigabytes of data to any device that you want to resolve a name on. It's a non-starter. (For that matter, last time I looked, Ethereum was at a point where it's not actually possible to "keep up" with the blockchain on a machine that isn't in a datacenter.)
Or a solution like the hash-based domains that Tor uses. I know a lot of commercial sites balk, but it really is more secure. And instead of just leasing a domain at the whims of a company easily swayed by politics and lawyers, you actually own your domain on the Tor network.
You wonder why we don't propose replacing all of DNS with Tor .onion addresses? After all, it's so easy to remember/verify that DuckDuckGo is https://3g2upl4pq6kufc4m.onion/
It is easier than ever to brute-force generate a readable Tor address. Even with my 2010 graphics card I generated superkuhbitj6tul.onion (for my superkuh.com and superkuh.bit (namecoin) domains) in about 30 minutes. With today's graphics cards you can go much further than that number of characters.
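The brute force itself is simple to sketch: a v2 onion address is the base32 of the first 80 bits of the SHA-1 of the DER-encoded public key, so each extra chosen character multiplies the work by 32 (random bytes stand in for real RSA keys here; real generators like the GPU tools above make fresh keypairs):

```python
import base64
import hashlib
import os

def onion_from_key(key_der):
    """v2 onion address: base32 of the first 80 bits of SHA-1 of the
    DER-encoded public key, lowercased. 10 bytes -> 16 base32 chars."""
    digest = hashlib.sha1(key_der).digest()[:10]
    return base64.b32encode(digest).decode().lower()

def brute_force_prefix(prefix):
    """Keep trying random 'keys' until the address starts with `prefix`.
    Expected work is roughly 32**len(prefix) attempts."""
    attempts = 0
    while True:
        attempts += 1
        addr = onion_from_key(os.urandom(128))
        if addr.startswith(prefix):
            return addr, attempts

# A 2-character prefix costs ~32**2 = ~1024 tries, i.e. milliseconds.
addr, n = brute_force_prefix("su")
assert addr.startswith("su") and len(addr) == 16
```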
But yeah, there's still the trailing sequence, and there are companies that don't understand how simple it is to generate a near-match of their address.
Even if you made this a one click service the trailing characters are going to be a killer. Companies care a lot about their brand. And it would require a total mindset change for people to verify that they are going to superkuhbitj6tul.onion and not superkuhbit6g4tfr4.onion by verifying the cert that was offered up by the destination site.
You mean superkuhbit6g4tf.onion. Although longer Tor based address hashes are coming soon for better security (https://blog.torproject.org/tors-fall-harvest-next-generatio...) so the trailing length of random chars will be even longer. Kind of mitigates my objection.
Still, Tor hidden services come with DoS/DDoS protection built in as well. Something Cloudflare and its centralized service don't like to acknowledge.
Cloudflare's business is providing DoS mitigation. Cloudflare blocks Tor users by default. Tor provides DoS protection for free. It's no citation but it's certainly reason to believe.
That's great news. I looked it up and it seems like it's almost been 2 years; not a very long time. I understand defaulting to re-captcha is a compromise and no better option exists for Cloudflare but you'll have to excuse me for not noticing the difference when I encounter "Your network is sending out automated queries" so often it's effectively a block.