Hijack of Amazon’s domain service used to reroute web traffic for two hours (doublepulsar.com)
1048 points by coloneltcb on April 24, 2018 | 284 comments



Getting a basic certificate issued is incredibly easy these days. If you own the DNS resolution, you can get a cert.

Here are a few ways I can think of to make this harder for an attacker (a quick sketch of the CAA and HSTS pieces follows the list):

* Add a Strict-Transport-Security header to all HTTPS responses. This means the attacker will need to get a valid cert (still easy if you can hijack BGP).

* Pick a preferred SSL issuer and stick with them. Add a CAA DNS record only allowing that one issuer.

* Go through the EV SSL process with your preferred issuer. Now modify your CAA DNS record to add an EV policy. This should make it very difficult for an attacker to get a cert issued during a BGP hijack.

* DNSSEC? Make sure to choose an SSL issuer that will correctly test DNSSEC?
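
To make the CAA and HSTS pieces above concrete, here's roughly what they look like from the command line (domain, issuer and max-age below are placeholders, not recommendations):

  # Check the CAA record (shown as dig would return it):
  dig example.com CAA +noall +answer
  #   example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
  # Check the HSTS response header:
  curl -sI https://example.com/ | grep -i strict-transport-security
  #   strict-transport-security: max-age=63072000; includeSubDomains; preload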


If I control the network, I can strip off any security that might be in DNS. I can change or remove CAA records, DNSSEC, etc. Those things currently fail open, with the exception of DNS replication, which will take "refresh" time to fail. I can then get new certs for the DNS I now control. HSTS just means use HTTPS. It doesn't validate the previous cert. That is HPKP which almost nobody uses unless they control the client, such as mobile apps.


One could argue that in a perfect world the combination of DNSSEC and CAA should stop attacks of this nature. However, this only holds up under a rather limited set of circumstances:

1. The targeted domain would need to make use of both DNSSEC and CAA in the first place. I doubt that's true for more than a tenth of one percent of all domains out there.

2. The attackers would have to only target DNS resolution, and not the IP ranges hosting the targeted domain itself.*

3. Every single CA out there would have to follow the Baseline Requirements to the letter and fail closed for DNSSEC-enabled domains.†

I'm not sure if that makes for a good argument in favour of wading into the cesspit of DNSSEC.

* By targeting the IP ranges the site is hosted at, rather than the authoritative DNS, an attacker would be able to complete an "Agreed-Upon Change to Website" challenge as defined in the Baseline Requirements (http-01 in ACME). There's work being done to somewhat mitigate this by extending CAA to support whitelisting domain validation methods.

† From what I recall, when researchers looked into this shortly after CAA became mandatory for CAs, a number of CAs were found to have issues with DNSSEC enforcement. Additionally, some argued the language in the Baseline Requirements regarding CAA and DNSSEC was quite ambiguous.
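
For context on the first footnote: the http-01 check really is just the CA resolving the name and fetching a token over plain HTTP from whatever IP answers, so controlling the address space is enough. A rough sketch (the domain, the IP and $TOKEN are placeholders):

  # Roughly what a CA's http-01 validation boils down to:
  dig +short A example.com
  #   192.0.2.80    <- whoever controls this prefix answers the next request
  curl "http://example.com/.well-known/acme-challenge/$TOKEN"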


If you have DNSSEC with CAA, you can use that to choose a CA which has a contractual relationship with you. That contract can say anything you both like, for example it can say:

"All issuances under example.com shall first be approved by telephone call to our security office on 1-234-567-8900, and the certificate issued shall have a notBefore timestamp no earlier than 24 hours after an SCT included in the certificate"

This requires no new technology that I can see. If CAA and DNSSEC work together as intended (which I admit is a big "if", but they can and should), the Bad Guys now need to mess with the phone system for approval, then wait past the MMD and hope you aren't watching for their shenanigans. Or break into a CA and issue for themselves; that's a much higher bar, albeit one reachable at great cost.


If the DNS servers providing that message are hijacked due to BGP, the hijacker can simply not serve that notice and the registrar doing the lookup would never see it.


This is under the assumption that CAs fail closed when they encounter DNSSEC errors (which, by some interpretations, is mandatory according to the Baseline Requirements).

Merely hijacking the authoritative DNS for a domain does not defeat DNSSEC, which chains from the root to TLDs to registered domains. In other words, and assuming all CAs are perfect w.r.t DNSSEC, you cannot spoof a "No CAA record here" response for a DNSSEC-enabled domain merely by hijacking the authoritative DNS or a resolver.
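
You can watch that chain-of-trust behaviour from the command line with a validating lookup tool. A minimal sketch, assuming delv from BIND 9.10+ and an example domain:

  # delv validates DNSSEC from the root down; a spoofed "no CAA record here"
  # answer for a signed zone fails validation instead of silently succeeding.
  delv example.com CAA
  # typically prints "; fully validated" for a correctly signed zone,
  # or a resolution failure for bogus/stripped answers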


Assuming that all CAs are perfect w.r.t. DNSSEC is a big assumption to make. Has anyone tried verifying that for every CA?

I had also been referring more to CAA than to DNSSEC. It is a premium feature at certain registrars, such as GoDaddy, which hinders adoption.


Yes, but under such a scheme, once I've wrestled back control of DNS, your ability to use those ill-gotten certs is reduced or eliminated.

Today, if I got a basic cert for your domain by temporarily hijacking DNS, I can later use it to MITM you even without DNS poisoning/takeover.


> HSTS just means use HTTPS. It doesn't validate the previous cert. That is HPKP which almost nobody uses unless they control the client, such as mobile apps.

I thought HPKP is TOFU, not (necessarily?) preloaded? Meaning you don't need to control the client for the client to be able to take advantage of it?


(For those like me who may not be familiar with the acronym, TOFU = Trust On First Use)


This came up when the root servers for .io were hijacked. It's really the same problem. Short of monitoring for such anomalies, the only saving grace would be HPKP which is just too risky to use outside of mobile apps. Why the attackers in this case didn't get valid certs, I have no idea. It takes seconds to get a wildcard now.


HPKP can be either TOFU or preloaded. Google maintains a preload list that is included with Chrome, Firefox, Opera, Safari, IE 11 and Edge, but that is not a scalable solution for the future.
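
For reference, TOFU pinning is just a response header the browser remembers on first visit; something like this (the pin values below are dummies, not real hashes):

  curl -sI https://example.com/ | grep -i public-key-pins
  #   public-key-pins: pin-sha256="<hash of current key>";
  #       pin-sha256="<hash of backup key>"; max-age=2592000; includeSubDomains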


There might be a small preload list for HPKP. I'm pretty sure Google uses key pinning on their domains in Chrome. Not sure if they use HPKP for that, though.


I think Chrome now ignores HPKP headers and only respects the preload list. The risk of someone hijacking a domain and inserting a malicious HPKP header had become too high.


None of these steps would help against a BGP hijack, since the IP address space itself is being stolen, not just the domain name. Heck, with a good hijack, the attacker could even get himself a valid automated certificate from Let's Encrypt.


Forgive my ignorance but I have a few questions:

* Why would HSTS help in this case? While HSTS is active, does it prevent clicking through the warning (which was done here)?

* How would a CAA record help against cert issuance in this case? Does it only help against compromise of the authoritative server during the remaining TTL of the record in recursives, AND if the CAA record points to something that doesn't have on-demand DV like Let's Encrypt?


HSTS does indeed prevent users from clicking through the warning - it makes handshake / trust chain failure fatal with no "proceed anyway" button.


Sorta. In most browsers you can manually delete the HSTS property on a per-domain basis. Presumably, anyone doing this knows the risks.


In Chrome, it's a real pain in the ass, and it's buried deeply. It's not available at all in the settings UI (not even in advanced settings).


You can type "badidea" at the warning page to skip it.


Changed to "This is unsafe" (followed by enter) in Chrome 65. Try it at https://badssl.finn.io/


Doesn't work for me (anymore) in Chrome 65. Used to work.


I think it was changed to something like "thisisunsafe" now.


Just checked this on Chrome 65 at https://badssl.finn.io/; it seems to work (you also need to press enter).


That's correct, tested it with https://badssl.finn.io/


Had no idea about that 'badidea' workaround in Chrome. Just tried it and it actually worked.


In Chrome, clearing the cache clears the saved HSTS records that are not preloaded in the browser.


> * Why would HSTS help in this case? While HSTS is active, does it prevent clicking through the warning (which was done here)?

Yes, HSTS (if preloaded or cached by the browser) prevents clicking through the warning, although there's a secret code you can type (in Chrome, at least) to get around it still.

> * CAA record

Yeah, a CAA record wouldn't help. Presumably the CA in question is going to fetch that record freshly, and while the DNS is hijacked, the attackers can return whatever they like. DNSSEC could conceivably help, but that's a pretty big can of worms.


DNSSEC only "helps" if you can get CAA working reliably with it, because otherwise, its only role in this attack is to lock CAs to a particular IP address. But this is a BGP hijack: attackers control IP.


HSTS wouldn't help users clicking through warnings, but it's a good thing to have (myetherwallet doesn't use HSTS).

A CAA record would only help during the remaining TTL. Once it expires, it doesn't matter.

So yeah, these seem like decent steps to help protect but certainly not going to 100% prevent an attack like this one.


>HSTS wouldn't help users clicking through warning

Actually it would have! Chrome and possibly other browsers do not allow clicking through certificate validation errors on sites with HSTS. For example, try to get to https://badssl.finn.io in Chrome.


Sorry, how does that help if the attackers purchase a new "valid" SSL certificate, since they control the DNS and thus email?


It does not help for that scenario, but it forces the attackers to jump through another hoop, and publish that a new cert was issued for the domain.


Once you control DNS, it doesn't require email; you can just use Let's Encrypt. The Let's Encrypt verification does not check HSTS.

Makes sense, because it keeps HSTS clear of the lockout scenario that makes HPKP so scary.


Is it possible to get a TLD registrar to set a very short TTL like 5min on your NS record? Then you can switch to a backup DNS hosted on another network fast.


Yes, you can set a short TTL on your NS record, but that would not keep this attack from hijacking your site. This attack intercepts the client's DNS lookup, so the DNS server a browser is talking to is ill-behaved. No amount of correct setup here will work, because the ill-behaved DNS server will just replace your setup with whatever that server wants.

There is no strong defense against this as a website. With an app, the solution would be certificate pinning. You could try HPKP, but that comes with a host of issues and I think it is being deprecated.


Wait: say for example.com the .com registrar sets a 5-minute TTL on the NS record for example.com, which resolves to 1.2.3.4. That means that your DNS server is at 1.2.3.4, serving your A and MX records. An attacker BGP-hijacks 1.2.3.4. You change the NS record in your .com registrar settings to 5.6.7.8, which is not compromised. Notice I am not talking about your A or MX records, which you controlled on a compromised IP, but about the NS record that the .com registrar controls. So after 5 minutes the browsers contact a non-compromised nameserver at 5.6.7.8 to get your A records.

I think that, unless the TLD registrar itself gets hijacked, this mitigates the attack on your own DNS once the NS TTL expires.
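
For what it's worth, you can check what TTL the parent zone actually hands out for your delegation by asking a TLD server directly (names below are examples; .com delegations are normally published with a two-day TTL):

  dig @a.gtld-servers.net example.com NS +noall +authority
  #   example.com.  172800  IN  NS  ns1.example-dns.net.
  #   example.com.  172800  IN  NS  ns2.example-dns.net.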


You are right, I was thinking of a different attack: something like a BGP hijack of 8.8.8.8, 1.1.1.1 or similar commonly used DNS resolvers.

Here, though, people using Route 53 for DNS probably can't move away from it, as they are stuck on Amazon.


I'm seeing HSTS on https://www.myetherwallet.com/

  curl -I https://www.myetherwallet.com/
  strict-transport-security: max-age=63072000; includeSubdomains; preload


Presumably that's fairly new. The domain hasn't made it on the HSTS preload list shipped by browsers yet, so my guess would be they started using HSTS in response to this attack.


It was added at most a few hours earlier, as a response to the attack. That's when I checked, and it did not have HSTS at that point.


I see. That is indeed bad that it wasn't there before.


Wait, MyEtherWallet doesn't use HSTS? I thought they had a vulnerability analysis done a few months ago. I feel like this is something that should have been caught.


Wow, that's scary. I even told someone recently, jokingly of course, they forgot to add HSTS for their mvp app.

But MEW doesn't have HSTS? I would never use it personally on a public Wifi, but many people will for sure and they have no idea they'd be MITM'd.


> But MEW doesn't have HSTS? I would never use it personally on a public Wifi, but many people will for sure and they have no idea they'd be MITM'd.

Even without HSTS, a bad actor would have to either trick a user into installing a root cert or trick a certificate authority into issuing a cert for the domain. Both of these are possible and have happened in the past, but they're also a requirement for the attack you mention, which you seem to have completely forgotten about.


No they wouldn't; without HSTS a bad actor (e.g. public Wi-Fi) could just do an SSL-stripping attack. Sure, observant users would notice that the page isn't over HTTPS, and with browsers adding warnings on all HTTP pages that'll become more obvious, but it's still not something most people notice.

Are you thinking of HPKP?


ah, yes, I was confusing HSTS with HPKP.

Now I need to re-read the whole thread with this context. Thanks for the correction!


HSTS does prevent clicking through the warning. Here's a test page: https://subdomain.preloaded-hsts.badssl.com/


Choosing a particular CA doesn't help, since attackers can pick whichever CA is optimal for their attack.


> * Pick a preferred SSL issuer and stick with them. Add a CAA DNS record only allowing that one issuer.

It may be worth also suggesting HPKP here. That would be effective for many customers.

> * DNSSEC? Make sure to choose an SSL issuer that will correctly test DNSSEC?

This basically means you can't use Route53, which is a deal-breaker for a lot of people running services.


GCP supports DNSSEC; I moved all my domains away from Amazon because of this.

It was very easy to set up on GCP, too.


Use Cloudflare. Unlike Amazon's, their DNS is free, and it is supposed to be among the fastest.

Then again, DNSSEC is useless against a BGP hijack where the nameservers for the domain are replaced with malicious ones.


Believe me, I'd very much like that. I'm better aware than most of their technical capabilities and speed.

When I say "a lot of people running services", I don't mean me. I mean a lot of people I know and associate with. A lot of them have grown to rely on close integration between DNS and all their other infrastructure services. That CF covers them technically isn't helpful - they expected it all under one roof and they need to not spend cycles on thinking about something marginal to them like DNS.

One of the biggest selling-points of AWS is that it puts all your services in one spot. It limits orchestration work and makes life simpler. Asking people to leave that for security improvements may be necessary, but it's not a small ask.


I thought HPKP was essentially deprecated - too many footgun scenarios and not enough specificity on "what to pin" since CAs can use / issue with multiple intermediates.


There are definitely a series of foot-gun-y scenarios with HPKP, just as with full DNSSEC. It's just one of the few things I can think of that gives you the ability to detect when a cert has changed unexpectedly.


A CAA DNS record won't help much if the attacker can just go to the certificate issuer and request a valid SSL certificate because they own DNS. Plus can't they just change the CAA DNS record?


If you have DNSSEC that protects against CAA spoofing, that should also protect against A-record spoofing. Unless we fail open on A records but closed on CAA records.


If you are using DNSSEC, hopefully the issuers are validating it.


I'm not able to find any information on Let's Encrypt and DNSSEC validation. Anyone know if they validate it?


Yes, (last I checked at least) Let's Encrypt validates DNSSEC. This causes problems periodically for users who expected something to work, but the DNSSEC setup is wrong and they never noticed before because nothing checked it.


For ultimate protection, add HTTP Public Key Pinning (HPKP) as well.

Warning: this is only to be used by experts. Otherwise, your site is going to become permanently/long-term inaccessible by the browser if you screw up just once.


An Expect-CT header would have mitigated this attack (but not for new users).


MyEtherWallet (which was the target) has an EV SSL certificate. That could be why they couldn't get it as easily.


Sorry, why couldn't they just replace the EV certificate with a regular "valid" certificate that they purchased from a valid CA if they controlled DNS and thus email?


Wonder if services like Let’s Encrypt were affected. I imagine a scenario where a small hijack of DNS could allow for properly signed certificates for domains that are not owned. If I operated a CA service, I would carefully examine the requests received during this time frame. Maybe someone can audit the Transparency Logs during this period for anomalous activity.
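
Anyone can do a rough version of that audit themselves, e.g. via crt.sh (assuming its JSON endpoint; the domain below is just an example):

  # List CT log entries for a domain; look for issuance timestamps that
  # fall inside the hijack window.
  curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq .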


I would assume they go far out of their way to resolve the address from a large number of distributed DNS resolvers - the question is whether they just round-robin instead of issuing multiple simultaneous queries, but I would hope that's part of why the DNS test usually takes >60 seconds. If they're treating DNS as an immutable database instead of as a consensus protocol, they'll always be open to poisoning attacks.

If any certs were issued for hijacked domains (which, as far as I've read, was only one, not using Let's Encrypt), it's a pretty glaring failure on the issuer, assuming they used "DNS Validation"
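
A crude client-side version of that "consensus" idea (resolvers and domain below are arbitrary examples):

  # Ask several independent resolvers for the same record and compare;
  # disagreement during a suspected hijack is a red flag.
  for r in 1.1.1.1 8.8.8.8 9.9.9.9 208.67.222.222; do
    dig @"$r" +short A example.com
  done | sort | uniq -c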


Let's Encrypt does not currently validate from multiple perspectives but we're working on it. We're cooperating with a research team at Princeton to make sure our strategy is an effective mitigation against BGP attacks.

Any given production validation right now may come from one of two of our datacenter locations, but not both.

You may see multiple validation requests if you use our staging servers, but that's just an early version of multi perspective validation which doesn't work well enough to promote to production. There are some performance issues, and the external validation points are all on AWS, which uses the same autonomous system (AS) across datacenters. In order to effectively defend against BGP attacks we're probably going to have to deploy validation servers on 3+ different ASes so that our network perspectives are sufficiently diverse.

If multi perspective validation is effective and nothing is done to significantly improve BGP security (we're not holding our breath) then I expect multi perspective validation to be a requirement for all CAs down the line.


If you can do validation from a perspective where your ISP can send you a full BGP table, you should be able to keep track of how recently the most specific route to that destination has changed; something that has changed recently or that currently has an AS path different than what has historically been typical may be suspect.

AWS does have some network diversity in different regions; they apparently do not operate their own backbone to interconnect the different AWS regions.

(And for all that Hurricane Electric seems to have not been doing a great job of filtering BGP announcements from peers, they do seem to have pretty aggressive pricing on colo space in their Fremont data center where they can provide a full BGP feed.)


I'm trying really hard here not to be snide, but it is amazing to me that an organization that is responsible for securing the world wide web is basing that security on the hope that nobody can spoof 3 ASes at once.

Just give up on BGP. Strongly suggest people use DNSSEC. For TLDs that don't support DNSSEC, require a public key issued to the registrar. You (the Certificate Authority) can get the public key from [R]WHOIS, IRIS, RDAP, whatever method you like, directly from the registrar. This is standard practice using the thin WHOIS data model, where .com and .net require domain registrars to maintain their own customers' data. Other TLDs simply run thick WHOIS servers that store all the data for their domains.

Registrars are already required to pass up Delegation Signer records to a TLD to support DNSSEC. Passing on a similar record to a Certificate Authority would be practically the exact same thing. So we know the registrars can support it, and we know it would ensure that people would only be able to generate certificates for those domains that they actually own (and not "whomever can control the IP space currently").
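
For comparison, the Delegation Signer record is the piece the registrar already passes up to the TLD on your behalf, and it is publicly visible (domain below is an example; key tag and digest are placeholders):

  dig example.com DS +noall +answer
  #   example.com.  86400  IN  DS  12345 8 2 <sha-256 digest of the child zone's KSK>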


We should not "give up on BGP". What we should do is improve security in all layers. This includes BGP and, as you mention, DNS. DNSSEC should be mandatory, just as TLS is, for any business that takes itself seriously.


Not to start another DNSSEC melee (as I start another DNSSEC melee..) but for it to be effective we really need browsers to a) be able to tell whether a DNS response was properly DNSSEC signed or not, b) produce some large, scary, red warning for ones that aren't, mark them "Not Secure", etc. [1]

There is a major chicken-and-egg/game-theoretical problem though: any browser that does that today will piss off/irritate its users, forcing them to use other browsers or older versions of the same browser, as DNSSEC is not widely deployed on the corporate side. And until most/all browsers do something major to make the current DNS security crisis obvious to large numbers of users, most companies won't care enough to deploy DNSSEC.

At least it will be interesting to see whether, and how, this eventually gets solved.

[1] Certain abysmal DNSSEC cryptographic choices, such as 512-1024-bit RSA, should also be addressed.


> There is a major chicken-and-egg/game-theoretical problem though: any browser that does that today will piss off/irritate its users, forcing them to use other browsers or older versions of the same browser, as DNSSEC is not widely deployed on the corporate side. And until most/all browsers do something major to make the current DNS security crisis obvious to large numbers of users, most companies won't care enough to deploy DNSSEC.

This seems analogous to the problem browsers faced with Flash. Perhaps they can leverage the same incremental approach here.

For example: 1) Validate DNSSEC. If present and valid, the HTTPS "green lock icon" gets a bonus glow. 2) 6 months later, not having a DNSSEC response gives a little red X badge on the "green lock icon". 3) 6 months later, not having a DNSSEC response graduates to a little ignorable info/warning box near the location bar. 4) 12 months later, you have to click through a big scary warning to access the site.

Essentially, start by providing a carrot and then introduce a progressively larger stick. Communicate the whole plan up front so that large organizations can get the ball rolling.


> Strongly suggest people use DNSSEC. For TLDs that don't support DNSSEC, require a public key issued to the registrar.

I think this is absolutely unreasonable and damaging to me as a small player. I don't want to give my registrars more power to mess with me; they already refused to support DNSSEC on my domain because I am not using their hosting service, and I don't want them to be responsible for whether I can get certificates or not. DNSSEC is also way too hard to set up (especially if, for example, I use OVH's name servers and another company as registrar); we need a DNSSEC equivalent of Let's Encrypt's certbot for it to be usable.


DNSSEC is a limited hack. Ok? I only suggested using it at all because it can be used right now to secure the process of generating certs. There is a very good reason I don't want to put all my eggs in that basket.

The root servers and TLD are not supposed to be the authoritative source of trust for your domain. That is the registrar's job. That is the registrar's only job: control my domain, and don't let anyone fuck with it. Hence, they should be the ones to keep a record that you can control to tell people where trust should lie. If you don't like your registrar, you can transfer your domain to a different one.

The purpose of DNS is to be a phone book. When you call a number from the phone book, the person you are calling might not be who you expect, even if you got the number from a very trustworthy source. Perhaps a spy is sitting on the other end of the phone. After you connect to the phone number, you should authenticate the other end.

Why not put the authentication information in the phone book? Because you got the phone book from one place. The spy could have compromised the phone book, and now you have no way to know if your authentication is real. It is a better idea to have one phone book for calls, and one separate authentication book that you get from a separate source.

DNSSEC's security depends on one child publishing one key to one parent. If you attack that one factor successfully, the security is broken, and there are multiple vectors to attack. There is no defense in depth. For a highly motivated state actor, this is not difficult to circumvent. The more you lean on DNSSEC, the more fragile the entire internet's security becomes. A better method is to decentralize and distribute trust into multiple organizations and processes, so that it would be very difficult to compromise an entire internet system.

I haven't mentioned the many fundamental problems with DNSSEC rollout and use because I think those are mainly a problem of lack of incentives, and are addressable. Go ahead and use DNSSEC if you want. Just don't tie a rope around your neck by making all internet security dependent on it.


Per https://github.com/letsencrypt/boulder/blob/master/bdns/dns.... it seems they round robin. But they are aware of the issue in the spec: https://ietf-wg-acme.github.io/acme/draft-ietf-acme-acme.htm... - “Querying the DNS from multiple vantage points to address local attackers” is a mentioned mitigation that a server could implement.

Seems like a reasonable basis for a pull request.


> why the DNS test usually takes >60 seconds

The server-side part of DNS validation takes about a second. The delay is all about clients waiting for their authoritative DNS servers to update. If you use a fast DNS provider, there's no reason to wait longer than necessary.

> If any certs were issued for hijacked domains (which, as far as I've read, was only one, not using Let's Encrypt), it's a pretty glaring failure on the issuer, assuming they used "DNS Validation"

It wouldn't be a compliance failure, though. CAs are not required to be invulnerable to BGP hijacking attacks.


It doesn't matter how many DNS resolvers you have. If you can spoof BGP, and use it to spoof the DNS resolvers, and the authoritative name server, you can stand up your own DNS that says anything you want it to say.

The only thing you cannot do is fake out DNSSEC. If you use a TLD which supports DNSSEC, you can make sure records have to be signed with your key. If fake DNS records aren't signed with your key, and the CA uses a validating stub resolver, and the CA is checking for CAA records, they will reject the attempt to create the cert. In theory.

Here's the thing. The CAs know about these flaws and are still not providing more secure ways to prevent these attacks. How could it be more secure? The domain registrar could accept a public key from you at the time of purchase. At that point, nobody should ever be allowed to generate a cert anywhere without you signing off on it with your key. Is this some magically impossible technical challenge which mere mortals can't comprehend? Hell no. They literally just need to write an ASCII file in a database next to your domain name. But we aren't demanding it of them, so they aren't doing it. So attacks like this leave everyone vulnerable, because nobody cares enough to fix it.


And if you lose the key, as thousands of people would every day at a large registrar?


Use the same method you use to pay for the domain to reset the key. This would require logging into your account at the registrar, or contacting support and providing enough verifiable material to reset it.

If the registrar signed a message that had your key in it, they could re-sign it with a replacement key. So now you can 1) verify who originally assigned the domain, 2) record the public key, and 3) see a verifiable history of changes, as long as the old history is included in successive signings. This could be used as part of a public record a la Certificate Transparency, or a blockchain-style system. If the attacker spoofs BGP and social-engineers your registrar, you (or whomever) can see if anyone creates a new key.


But on a big, distributed site the DNS is going to vary greatly depending upon the location of the query.


I'm going to assume that Let's Encrypt wasn't impacted, for the reason that there are several reports that the hijacked MyEtherWallet website was apparently running a self-signed SSL cert - and people were actually clicking through the warnings before handing over passwords.

Surely they would have realised they could make the whole thing a lot more profitable if any SSL provider was impacted.


But if you route all traffic for a specific domain through your own web server, surely you could complete the Let's Encrypt verification steps as well. They just check for a specific file on the remote web server, right?


Yes, but they weren't routing traffic for that web server to their own server. They were routing the IPs of the DNS servers to their own server, and then just handing out whatever DNS answers suited them.

In turn, if your own DNS wasn't configured to use a DNS server with a poisoned, fraudulent address, a web-server-based verification landed on the valid server, not the attacker's.


Okay, thanks for clarifying!


Isn't this exactly the problem that EV certificates solve? By going the extra mile to validate identity instead of just trusting DNS ownership?

Not that there's an easy way for LE to validate identity automatically - governments need to do a better job of providing electronic/smart identification, in which case a physical chip ID serves as a second factor (alongside DNS ownership) for automatic EV certs - but the weaknesses of only having DNS ownership are pretty much why EV certs exist.


It was certainly noticed in under 2 hours. The Outages list thread about the issue started at 11:54 UTC: https://puck.nether.net/pipermail/outages/2018-April/011257....

I'm sure that there were ops teams working on it before then. The people involved in actually fixing the problem wouldn't have been posting to public mailing lists.


> The people involved in actually fixing the problem wouldn't have been posting to public mailing lists.

Surely, but their coworkers should have been.


Looks like it was about an hour and a half before anyone in that thread figured out there'd been a BGP hijacking.


How do we know it was unnoticed? It could very well have been noticed and taken two hours for Amazon to mitigate it. I don't know the technical process behind making such a correction, but in any large organization there are steps that have to be followed. It's not like some kid typing out a shell command on his basement Linux box.

Inflammatory/clickbait headlines do not make the internet a better place.


Ok, we've taken that word out of the title above.


If no one mentioned this important outage, it's likely that many/most/all people didn't notice.


How big was the attack? You can't fool all the people all the time. But fooling all their targets for even 30 minutes is a significant achievement considering what info they could have obtained. What do we know about the victims? What reason do we have for believing it was a random victim attack? Besides the insane difficulty part..


Wow. Just wow. These BGP vulnerabilities are ridiculous. Imagine someone taking over DNS for even a small subset of people and being able to basically just rewrite the internet as they see fit, completely taking control of anything. Even without being able to get a valid SSL certificate you could do a lot of damage. For example, let's say I rewrote requests for SomeNationalBank.com to my proxy server. I make a request to the real SomeNationalBank.com with the weakest set of ciphers allowed, and then pass those bytes along to the client. Then I can try and decrypt it and log into your bank account as soon as I do.

Or, better yet, I wait for you to log into an unsecured site like neopets.com or something, and then pair that unencrypted password with the fact that I saw you request the domain for a certain bank, and if your password is the same (which it is for most people) then that can also get me in.

We need a new more secure system to replace DNS.


>For example, let's say I rewrote requests for SomeNationalBank.com to my proxy server. I make a request to the real SomeNationalBank.com with the weakest set of ciphers allowed, and then pass those bytes along to the client. Then I can try and decrypt it and log into your bank account as soon as I do.

What you're describing is called a Downgrade attack, and it can be detected. The newer the software at both ends, the better (more capable) the detection.

In TLS 1.3 both sides need to have experienced exactly the same conversation right up until the encryption switches on, or else encryption fails and they abort. If you sneak in, say, a message to the client saying "No, do crappy old crypto" that was never actually sent by the server, the two conversations won't match and the connection fails. If you sneak in a message to the server saying "I can only do crappy old crypto" that was never really sent by the client, again: no match, no connection.

In earlier versions of TLS, you can't fake the version number or tweak the ciphersuites or again the encryption fails and no connection is made.

Only downgrading to SSL 3.0 or lower gets you somewhere, and no modern client software will allow such a downgrade.
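
If you want to see what a particular server will still negotiate, a scanner does the legwork. A sketch assuming nmap's ssl-enum-ciphers script and an example host:

  # Enumerate the TLS/SSL versions and cipher suites the server accepts:
  nmap --script ssl-enum-ciphers -p 443 example.com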


How similar was this to the BGP attack by Russia in late 2017?

https://arstechnica.com/information-technology/2017/12/suspi...


Ostensibly they were the exact same thing?


>by Russia

Sounds like xenophobic bullshit.


More correct to say "Russian Government"


The correct thing to say is "ISP in Russia"; it could be Norwegian hackers using their routers, for all you know.


Check out IPFS' DNS alternative -- IPNS. No blockchain needed, it's pretty solid.


The issue with IPNS is that it's based on public/private-key crypto, which means the "domain" is a hash and has no name recognition. While still useful, it's not going to make its way into the mainstream anytime soon.

For instance, I host my personal site on both IPFS and normal web traffic.

https://fn.lc/ipns/Qmea45XwFtdwaCGAPLRMxFmoUP5YLnknc2WGCGQ3H...

https://gateway.ipfs.io/ipns/Qmea45XwFtdwaCGAPLRMxFmoUP5YLnk...


The first one took 61 seconds to load, while the second one took 121 seconds. It doesn't seem very practical.


I did. It's not very useful or performant, unfortunately.


SomeNationalBank.com in my experience uses 1 or 2 ciphers which are not the most modern, but are secure enough. For a few reasons there are almost no banks which would allow you to drop lower than aes-128-cbc.


Certain configurations of aes-128-cbc using certain software stacks have been found to have vulnerabilities. But if you really couldn't decrypt it, you could at least save anything you think might have the password in it, and then wait until you can decrypt it at some point in the future.


DNSSEC adoption would prevent a hijacker from manipulating responses.


No, it wouldn't. It comically wouldn't: in a BGP hijacking attack, the attackers control IP. It doesn't matter what your signed DNS record points to; attackers will just make that address theirs.


I think in this instance DNSSEC would have prevented the entire attack. If I understand the sequence of events correctly, the route hijack was done to the auth DNS servers hosting myetherwallet.com. Then these rogue DNS servers redirected users to a fake myetherwallet.com.

If myetherwallet.com had been DNSSEC signed then users who validated the signature chain would have not been redirected to the fake site. But it's not DNSSEC signed.


In a BGP hijacking attack against a DNSSEC-signed zone, the attacker just looks at which IP address is in the signed A record, and then injects that prefix.

Note that "the auth servers hosting myetherwallet.com" aren't some random IP addresses in Tallinn, Estonia. It's Route53!

I get what you're saying with your counterfactual, that DNSSEC "breaks the exploit", requiring the attackers to use a slightly different exploit. But who cares? The slightly different exploit has the same predicates.

That's what's so funny about the narrative that DNSSEC has some role to play in today's attack. It would almost make sense if the narrative was "we could use DANE to get rid of all CAs and then it wouldn't matter if you could trick a DV validator". But, no, people are talking about how DNSSEC would help LetsEncrypt, in the event of a global BGP hijacking. Admit it! That's funny.


If I understand the discussion correctly, I think tptacek is right but he's not explaining his position well, which might be why he's been downvoted.

I think he's saying: let's say the correct IP address for example.com is 192.0.2.80. Instead of hijacking the prefix containing example.com's nameservers, an attacker could just hijack 192.0.2.0/24 and immediately get a DV cert. Within seconds they would be up and running and DNSSEC wouldn't have done a thing to prevent it.


DNSSEC plus CAA today would let you tell Let's Encrypt "No thanks, automation is too dangerous for my high-value names". If the hijacker fakes the answer, DNSSEC fails, Let's Encrypt says "DNSSEC failed, no issuance". If the hijacker allows the real answer, Let's Encrypt says "CAA says forbidden, no issuance".

I posted in response to pfg's comment above a mechanism that high value targets could choose, which is modelled on how Facebook behave today but (since we're proving a point here) more extreme. Make a deal with a CA, forbid all other CAs from issuing.

The other thing about Let's Encrypt is that they wouldn't be a good target for such an attacker because they are seeing the world from more than one place. You specify particularly a global BGP hijack, but those are hard, which is why this wasn't global. Tricking just, say, the original San Francisco office of Let's Encrypt doesn't get the job done.



I don't know what IPSEC has to do with this.

Your second link is a reference to what is now called DANE. There is an alternate universe in which DANE helps this problem. In that universe, there are no CAs anymore (otherwise, DANE is effectively just another CA, and attackers will just choose whichever CA allows them to execute their attack).

But DANE is already dead on arrival. Both Mozilla and Chrome flirted with DANE support a few years ago, and then withdrew it. Adam Langley wrote a post on why DANE support turned out to be untenable.

There's another big reason why you shouldn't be excited for DANE deployment, which is that it is essentially a TLS key escrow system. The most important TLDs in the world are controlled by world governments with massive SIGINT infrastructure. DANE turns those TLDs into the new Web PKI roots.


> There is an alternate universe in which DANE helps this problem. In that universe, there are no CAs anymore

There is another, even less likely, universe where DANE works. Here, every cert needs to be validated by both DANE and a CA. Security wise, this would be an improvement over just CAs. However, the large governments would still be able to MitM TLDs in their control (just patriot act some CA).

This is where I point out that the current CA system is also controlled by those who control the TLD, because domain validation depends on the DNS system. But that is an old argument.

There is some hope for the CA system through the use of certificate transparency logs. Notably, this system moves away from trusting the DNS system. Sadly, it is mostly a detection-after-the-fact system, but there is hope.


The TLS WG is currently haggling about, effectively, DANE's future. The thing up in the air was, should this technology for saying "Hi, I can do DANE" be pinned? e.g. a client sees "I can do DANE" and it says to itself "OK, I will now expect DANE certificates from this TLS service for... 24 hours? 10 minutes? 10 years? An amount of time defined in some new parameter?" Or maybe it never does this. Last I looked at the thread, it seemed like "Never do this, make yet another TLS extension to specify if you want pinning" is the choice of the working group.

(The reason this isn't an application header like HSTS is that DANE advocates want to bake this into TLS itself, so that it works without needing new application features in every single TLS-using application)


The TLS WG can haggle about DANE all it wants, but DANE is a dead letter: it was tried in two of the four major browsers and failed, and a third (Safari) briefly had nascent framework support for it and jettisoned it. Without support from the browser vendors, DANE is irrelevant.

The big problem Chrome had with DANE was that they couldn't (and presumably still can't) assume reliable DNSSEC/DANE record lookups. If you can't fail closed with DANE, then all DANE really is is another CA; attackers can "strip" DANE from connections, trivially, and be exactly where they are today. Meanwhile, end-users pay the price in latency and reduced performance for a cosmetic security improvement that most people don't care about.


So, you should not be surprised (but perhaps you are anyway) that how to work around not having DNSSEC on end systems is exactly what this work was about. Specifically, the idea is you shove DNSSEC RRs into TLS. Now the client doesn't need working DNSSEC, because the DNSSEC records were delivered with the connection. Hence the pinning question.


There's also https://tools.ietf.org/html/rfc4956, which allows a domain to sign that a range of DNS entries does not exist; under this system, a client resolver could demand cryptographic proof that a domain was missing certain records, which stops the MitM RR-stripping attack.


You're describing authenticated denial, which has been a characteristic of DNSSEC since 1994.


Yes, exactly. The RFC I linked to specifically refers to Authenticated Denial in the Security Considerations section.


I'd expect there are limits to BGP hijacking. Certainly not every BGP router can hijack any IP. Moreover, the more specific the hijack, the clearer the target.

At this point in this attack, we are not sure of which domains have been hit. With a BGP attack, the specific IPs would've been quite public.


Buuuuut.... not if you hijack the destination route. It helps if the DNS servers are hijacked, but if you also snag the destination route's IP then you can still get the certificate and serve traffic. Sneaky sneaky.


Why does AWS (http://status.aws.amazon.com) say that only Google resolvers were affected?

Between 4:05 AM PDT and 5:56 AM PDT, some customers may have experienced elevated errors resolving DNS records hosted on Route 53 using DNS resolvers 8.8.8.8 / 8.8.4.4. This issue was caused by a problem with a third-party Internet provider. The issue has been resolved and the service is operating normally.


It seems like either the title is misleading, or the article does a poor job of explaining the situation. From my reading of the article text, it seems DNS traffic was rerouted to Route53, which the attackers then used to serve false DNS records. That does not sound like Route53 was hijacked at all, just that the attackers happened to use the service it provides.


Traffic to Route 53 was rerouted to an alternative DNS server. But that has nothing to do with my question re. AWS calling out Google in particular in their status update.


Some Route53 traffic was redirected somewhere else. Traffic can only be hijacked from neighbors that actually accept the routes. You would think Google of all networks would have effective ingress filtering to prevent this, but it seems like they did accept it in this case.


I don't think ingress filtering on Google's edge would have helped if the rerouting happened in any of the transit ASes between AWS and Google.


If BGP is open and anybody can modify routes, why don't these attacks happen constantly? What prevents these sort of attacks and how are they fixed?


There's an informal trust chain. You only set up BGP peering with companies you trust, and you implement filtering of their traffic and advertised routes down to what you know can be trusted.

You know that if you do something nefarious, or if someone at your downstream does something nefarious, your upstream will identify it and block your traffic, or the part of it that they identify to be the issue.

Thus as long as you need to appear on the internet, you do your best to play nice, otherwise your traffic gets blocked, and you also do your part to identify and block traffic when something goes bad.


So if BGP is trust based and hard to compromise then how were these Russians able to?

> They re-routed DNS traffic using a man in the middle attack using a server at Equinix in Chicago.


Because large networks do not register all their prefixes or enforce policies on what they accept from other large networks. So a large network can do this. Allowing a small network to do this demonstrates incompetence. Frankly, I am having a hard time believing that HE.NET failed that badly.

The AS7007 incident demonstrated this to be a bad idea back in the nineties.


> Frankly, I am having a hard time believing that HE.NET failed that badly.

Believe it. HE has many, many NetIron-based routers in their network and many, many ASNs as transit customers. The hardware doesn't have the capacity to implement complete filtering for everyone.

Between hardware-constrained equipment being replaced and the tireless work Job Snijders (& others) are doing on route security, these kinds of episodes will become far more difficult and occur much, much less often.


Half if not all of HE's business model is to sell dirt-cheap, flat-rate IPv4 with BGP to other companies in major carrier facilities. I find it extremely hard to believe that unfiltered access to HE.NET is possible.


It is not open.

Sane networks register prefixes. Other sane networks do not accept prefixes that are not registered to parties announcing them.

If your network does not register prefixes, you need to fire them and move to someone else. If your network does not accept only registered prefixes, you need to fire them and move somewhere else.

Not registering prefixes/filtering based on registrations is having sex without condoms with a toothless hooker behind a dumpster who charges $5 for a full service.
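
Checking whether a prefix/origin pair is actually registered is something anyone can do against the IRR, e.g. via the RADb whois server (the prefix below is a placeholder; substitute the one you care about):

  # Look up route objects covering a prefix; the "origin:" field should
  # match the AS you actually see announcing it.
  whois -h whois.radb.net 198.51.100.0/24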


This comment is only partially true; the realities of managing customers on routers with thousands of routes preclude this careful management at the rock-bottom tier-1 transit prices of the lowest common denominators. The way HE & Cogent operate for large web hosters at data centers is by trust and prefix counts. The churn of various web operators between hosting companies is massive.


Didn't Cogent pioneer massive use of multihop BGP to do prefix filtering outside the low-powered edge in the nineties?


Starting in 2000, yes.


Just guessing here, but it's only open to the companies and countries that are peering with one another, isn't it?

If so then companies that intentionally advertise routes that are not theirs to advertise would lose their peering agreements with others because other companies would no longer want to peer with them.


How would you see this in ThousandEyes? I see he has a screenshot of it in the post, and doesn't appear to be logged in.

Was troubleshooting this for the two hours, until I started reading reports of the BGP leak. Not in networking, so don't fully understand how it works.

I'd just like to know how to troubleshoot this in the future, and check for leaks, if any. Are there any other online tools I could use? Obviously don't have an edge router or anything like that.
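
One set of tools that doesn't need an edge router: the public route collectors. For example, RIPEstat's data API (assuming its routing-status call; the prefix below is a placeholder) shows who is currently originating a prefix and how visible it is:

  curl -s 'https://stat.ripe.net/data/routing-status/data.json?resource=198.51.100.0/24' | jq .

bgpstream.com (mentioned further down the thread) and public looking glasses like RouteViews are other options for spotting suspicious announcements after the fact.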


> How would you see this in ThousandEyes?

Check out this video analysis from the team on the hijack: https://youtu.be/YXm4GJMUlP0


Thanks! Do you know how I can access that page he is on? I'd like to have a click around myself.


Here's the link he starts on in the video:

https://webysvi.share.thousandeyes.com/view/tests/?roundId=1...


A website I own was affected by this. I got an alert from our monitoring saying the website was down for 1hr 2min 59sec. Luckily, they didn't redirect it to anything.

I have other domains using Route53 (and hosted at AWS, just like this one) that weren't affected, AFAICT.


> I got an alert from our monitoring saying the website was down for 1hr 2min 59sec.

That's a very accurate time. What system do you use that allows sampling at under-1-second intervals? My Nagios boxes poll every minute, so an outage could be 2 seconds or nearly 2 minutes, and Nagios would report the downtime as 1 minute.


To be accurate, that’s a very precise time. We don’t know if it’s accurate or not.


Could be a process that checks every minute with slightly inaccurate measurement, which leads to 1h3m minus 1 second being reported.


They could be alerting based on metrics, instead of "pinging" the site.


Yes, if the site serves up lots of traffic (multiple hits per second), then the hole in the logs would certainly give an indication.

I get very suspicious when people quote things accurate to 1 part in 1000.


Experienced the same behavior on only one of my hosted zones. It started around 11:28am UTC.


> This traffic was redirected to a server hosted in Russia, which served the website using a fake certificate — they also stole the cryptocoins of customers.

What does this sentence mean? Sorry if I'm being slow. I thought the whole point of certificates is that you can't generate a legitimate one for a domain you don't control, so messing with name resolution wouldn't affect it.


Certificates can be issued with simple domain validation. If you can receive an email directed to webmaster@crypto.com, you can get a valid cert. So if you can put up a fake DNS server to impersonate Amazon's DNS servers and then start receiving traffic meant for Amazon (BGP route hijacking), you can accomplish domain validation to receive an illegitimate certificate.


Or by controlling the IP (as with Let's Encrypt).


Thank you.


I thought the whole point of certificates is that you can't generate a legitimate one for a domain you don't control

But they did control it. The domain's name servers were pointed to Route 53 IPs, which were controlled by the attackers. If they set up a DNS server, they can point those entries to any server they want. They certainly could have gotten a DV SSL certificate for any of the domains affected.


It means just that. Since they had control of DNS, they could have easily gotten a DV certificate, but in this case, they didn't and people clicked through certificate warnings.


Here's a source on Twitter showing some documentation of the self-signed certificate that myetherwallet.com users ignored warnings about: https://twitter.com/GossiTheDog/status/988785871188045825


> people clicked through certificate warnings

Which is a shame. They could have prevented users clicking through warnings by implementing HTTP Strict Transport Security.


Relatively easily, you'll need to be prepared with a mix of DV providers to make sure you don't get screwed by DNS caches.


Usually yes, but that's only the case if you try to spoof DNS/routing for end users. If you can fake the DNS/routing for the certificate authorities, you can trick them into giving you a certificate.


According to crt.sh, there were no new domain-validated certificates issued for MyEtherWallet.com - https://crt.sh/?Identity=%.MyEtherWallet.com


Maybe they didn't mean "fake certificate" as we think of it. Maybe they meant a valid certificate that they got from something like LE since they could show they hosted the domain (via the DNS check or the HTTP check).


They rerouted DNS traffic, which would've let them point the domain at their own servers, which would've let them successfully get a new SSL certificate.


Apparently they didn't, though. Reports are that they just served an untrusted certificate and people clicked through all the warnings (or their browsers failed to detect the faulty cert, or some middlebox proxy failed to).


This same attack can be used to generate valid TLS certs for any website, for example using Let's Encrypt. The best part? Your target doesn't need to use Let's Encrypt at all. Anyone can use them to forge certs for any domain. Of course, this is possible with most other cert providers, but Let's Encrypt automates it.

So.... TLS certs mean jack squat if you can pull off a BGP+DNS attack. You might want to start pinning certs for your bank's website. (but not using HPKP, Google is removing it from Chrome)


The solution to this is the CAA issue record combined with DNSSEC. DNSSEC ensures that an attacker can't spoof your DNS even if they have your DNS traffic directed to them globally, and then CAA ensures that only the issuers you want can create certificates for your domain. This is a solved problem, it just requires caring about it.


CAA depends on CAs (all 650 of them) to respect it, and for all of them to implement everything perfectly (it was shown in 2017 that they don't). This also depends on DNS again, which again not everyone will do properly, so CAA requests from CAs can be hijacked. So CAA is stupid.

The client should be getting the list of authorized CAs for the domain from DNS on first connection, and/or from inside the first cert, with expiry dates. It should require that, if there is a new CA, its authorization be signed by a previous CA or cert. The client needs to verify the entire chain of authorized changes throughout history, rather than just believe any cert from any CA at any time. And there should probably be a certificate transparency pooled service which the client checks the first time they see a new chain or new root of a chain (or something, I'm making this up as I go). In the current case the domain owner may eventually figure out something's up, but the client may not, and changing BGP/revoking certs/etc. is ineffective when the attack is already in play.

This is the only time you will ever see me say this, but we kind of need Blockchain here.


CAA support wasn't mandatory for WebPKI CAs until 2017-09-08 so it's likely that it's better supported now than it was last year.


Assuming every CA's support was perfect (again, 650 of them, so not likely), CAA still does not stop these attacks. Domains have to implement CAA. Most domains do not, so most domains remain vulnerable.

Even if the domain does, if either they or a CA screw anything up, they are vulnerable. The client will accept any cert signed by any CA. One rogue cert screws the security.

Compare this to if the client was verifying it using a blockchain. The blockchain provides a record that the client can verify, not only from multiple peers that share the same records, but at the sources of the original records, to verify a CA was authorized, and to verify a specific domain owner created a cert. This is a much higher barrier to attack than what CAA provides. Even if this verification were optional, it would be a much better authenticity guarantee than what clients get today.

The other thing a blockchain would provide is protection against state actors. If the FBI barges into Verisign waving a FISA warrant, they could possibly force Verisign to generate valid certs outside of the CT or CAA process. A blockchain could show the client that a cert it received was never in the blockchain. Comparing the blockchain to CT could further emphasize rogue certs.


How do you get 650?

I count significantly fewer than 100 organisations. I started with the CCADB (the database shared by Mozilla and Microsoft) for all intermediates, cut those that aren't part of the Mozilla root programme, and then asked for the "CA Owner" field, which avoids thinking a single owner is really several distinct entities when it isn't.


Well, they still pretty much mean the same thing. It's just that effective control of the domain temporarily changed hands. This is a well-known limitation; we just don't typically see DNS hijacked this close to the core.


What's also interesting is that in the comments someone stated they'd used a GoDaddy certificate issued on April 7. Maybe they had access to their account, or to GoDaddy's internal systems?


Hi

I'm the GM for Security Products at GoDaddy, responsible for the GoDaddy CA. I have checked with the team and do not see any certificates issued to: http://myetherwallet.com/ (including DV).

If someone has an image of what was seen, please send it to me at tperez@godaddy.com so that we can investigate further.

Thanks

Tony


I guess the comment was unfounded. Can't seem to find anything about it with the information available now; they appear to have only used a self-signed cert, which users ignored.


Like I said: anyone can generate a valid cert for any domain. You don't need to hack Godaddy or anyone's account. Create a new account, hijack the target's DNS, generate a valid cert for the domain you want.


> hijack the target's DNS, generate a valid cert for the domain you want.

That’s hardly “anyone” – it requires non-trivial resources and, thanks to certificate transparency, is going to be noticed and revoked relatively quickly. That further lowers the likelihood that someone capable of the attack will burn it for a limited window of access.


You consider two hours a limited window? To automate capturing traffic for any domain on the internet? Just as one example, you could initiate bank transfers for everyone doing online banking at any bank. Malware sitting on desktops has been doing this instantly, automatically, using valid sessions from the client side, for like a decade. No reason they couldn't do this sitting in the middle.

On top of this, many high profile hacks in recent years involve attacking the protocols used by banks to transfer funds. Many BGP attacks in the past have attacked the networks of major payment processors as well as banks. Again, you don't need two hours, or even one hour, to pull off this attack. Give me 10 minutes and a good connection.

It is "anyone" because literally anyone can use literally any CA to create the certificate, after they begin the attack. Who has the resources to do this attack? Anyone who can read a book on BGP and get on a backbone with the right provider. At least 13 high profile BGP attacks have happened in the past decade and a half. There's 328 "possible" BGP hijacks listed on bgpstream. https://www.google.com/search?q="Possible+BGP+hijack"+site%3...

Is Joe Schmoe script kiddie going to be doing many BGP attacks? No. But that's not the attacker I'm scared of. I mean, even the current attacker burned their capability to collect a fake currency. Obviously, there is not a high bar to who will use this access once they get it.


> You consider two hours a limited window? To automate capturing traffic for any domain on the internet? Just as one example, you could initiate bank transfers for everyone doing online banking at any bank. Malware sitting on desktops has been doing this instantly, automatically, using valid sessions from the client side, for like a decade. No reason they couldn't do this sitting in the middle.

Again, I'm not saying that this isn't a big deal, but that you're really downplaying the level of effort required for the worst-case scenarios. Besides the average script kiddie not being likely to compromise an ISP to send the malicious announcements, your online banking scenario would only work for the people actively logging in during those two hours to banks which don't use IP fixing, out-of-band confirmation or other precautions to actually transfer money (every single financial institution I use has a period measured in days with human-on-phone verification to set up a new outbound transfer target); whose security staff don't check certificate transparency logs; and you'd have to develop the code to collect credentials and run the transfer for each bank, taking care to avoid triggering key-pinning and other high-visibility errors.

Malware is similar: yes, it'd suck if you could get a valid HTTPS cert for someone's automatic update service, but these days those certificates are pinned and most systems use signed updates, which would require a separate exploit to avoid.

One thing to remember is that this is coming after a decade of everyone responsible in the tech industry trying to protect against malicious state actors. This attack is what happens all the time to people in countries with repressive governments.


I know precious little about how DNS / certs work, yet it took me only an hour or two to install gitlab on some random cloudserver with a valid certificate. It's sort of a joke how easy it is now.


It's surprising that GoDaddy allow the issuing of the cert. I'd expect them to require some proof of ownership.


They do require proof of ownership, but it's recently become common to support validation by proving that you control the domain or web site being validated so that the proof is purely electronic, e.g.: https://www.godaddy.com/help/verify-domain-ownership-html-or...

In a case like this, the attackers had sufficient control to pass those kinds of checks - although people are saying they didn't do that, and just relied on enough users clicking through the certificate warnings.


That proof of ownership is generally "publish a DNS record or a file at this specific URL", both of which hijacking DNS would permit you to do.
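
In ACME terms, that "proof" usually boils down to one of these two checks (the token and domain are illustrative, per the http-01 and dns-01 challenge styles):

    # http-01 style: the CA fetches a token you serve over plain HTTP
    curl http://example.com/.well-known/acme-challenge/<token>

    # dns-01 style: the CA looks up a TXT record you publish
    dig _acme-challenge.example.com TXT +short

Either one is trivially satisfiable by whoever controls where the domain's traffic actually goes.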


Excuse my ignorance but how does this allow someone to forge TLS certs for a given website/domain name that already has active certs? Wouldn't they have to revoke existing certs and then get new ones during the attack to pull this off?


Step one: Hijack the DNS entries so they point at your site.

Step two: Obtain a certificate from Let's Encrypt using website validation.

Step three: Proxy traffic through a proxy that presents that SSL cert, and capture anything you like. In the event that a website legitimately has a higher-class Extended Validation cert or other such thing, hope users don't notice the downgrade from an Extended Validation cert to a merely correct SSL cert, which is a pretty solid hope.

Certs don't have any sort of overriding or superseding property. A given cert and its chain of signatures is either accepted because you trust the signers, or it isn't; at the time you are making that decision, you don't have access to know whether any other certs have been issued since then or anything.

One thing that comes to mind is that browsers really ought to scream if they see a domain that was previously Extendedly Verified drop down to a Let's Encrypt-level cert. Though how much good this would do in practice is hard to tell... "Hmmm, what's this about the SSL cert being less good? It still has one, right? Let's just click through this... hmm... looks like the web site I expected... must just be the computer being weird enters password"
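
Short of the browser doing it for you, about the only practical check today is spot-checking who issued the cert you're actually being served; a quick sketch with openssl (example.com standing in for the site you care about):

    echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
      | openssl x509 -noout -issuer -subject -dates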


For some reason I thought CAs at least talked to each other to see if someone else had issued a valid cert for a given domain, along the lines of owning a domain name itself through a registrar. Clearly that only works when DNS isn't being abused like this...


Why should websites be limited to having only one certificate?

For example, if you have a service running on multiple backend servers, you might want to have one certificate per server for all of your domains.


That use case still wouldn't explain a domain having certs from two different CAs.


You can generate as many certificates as you want for a given name. There is no master list of all the live certificates out there and there are no checks for existing certificates when generating new ones (otherwise we'd never be able to renew certs until the old ones expire, resulting in downtime).


You can have multiple certificates for a single host.


The new, fake cert just has to validate for your phish/drive-by/cred-jacking site that is mimicking a real site. You are not planning a long-term takeover, just making sure the new site passes checks by your victims.


You don't need to revoke existing certificates to generate/forge new ones.


I noticed while setting up some hobby servers that as long as my IP agreed with DNS, I could get a cert from Let's Encrypt.
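
For the curious, it really is roughly one command once DNS points at a box you control; a sketch using certbot's standalone mode (example.org is a placeholder, and port 80 must be reachable for the http-01 challenge):

    sudo certbot certonly --standalone -d example.org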


How did they manage to steal coins if they didn't get a valid cert? Did people logging into MyEtherWallet just ignore the invalid cert warnings and log in anyway?


In addition they could be behind rubbish proxies that aren't passing the warning forward. I know ours at work will happily re-sign that expired cert to MITM you for compliance, and the user will see that green padlock.


I get that companies need to control access to their internal networks. But surely when you do it this badly you have to realize you are only making your network less secure.


So far as we've been able to tell, all the middleboxes on the market for this sort of purpose are worse than useless. If you remember that NCSC blog post that annoyed Adam Langley, its author Ian Levy insists that "there are some good products out there". I actually replied to that comment, requesting a list of these "good products" which presumably are warranted by the NCSC not to actually be worse than useless. Unsurprisingly, Ian has elected not to list any such products. There aren't any.


Yes, multiple people on reddit have said that's exactly what they did.


Could be, could also be API clients not doing cert validations


https://pulse.turbobytes.com/results/5adf2844ecbe40692e003ad...

Some traceroutes captured during the incident. The results that show "Target unreachable" are the ones seeing the hijacked paths


A lot of people in here are talking about securing their DNS setups but not their BGP setups.

For any BGP operators out there, you could automate your inbound/outbound BGP prefix filters using IRR/LIR databases, peeringdb.com, RADB etc. Tools like https://github.com/snar/bgpq3 can help with auto-generation. Even manual prefix lists will do, anything is better than nothing!
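
As a rough example of the bgpq3 approach (the AS-SET and filter name are placeholders for whatever your customer has registered in the IRR):

    # generate a JunOS prefix-list from an IRR AS-SET
    bgpq3 -J -l CUSTOMER-IN AS-EXAMPLE-CUSTOMER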

It would be good if operators could also use AS-SETs so that other operators can see what downstream ASNs should be seen via their peers (and which ones should not).

Also, BGP RPKI (Resource Public Key Infrastructure) based ROAs (Route Origin Authorisations) can help to ensure that prefixes are announced by their true originator.

Most networks follow the Pareto rule - 20% of prefixes learnt via BGP (in some cases even less!) are the source/destination for 80% of their traffic flows. Some operators write explicit filters for that 20% of prefixes, blocking them from all their BGP peers except the origin peer sessions.

None of these techniques is foolproof, and we all need to play along to improve their impact. They cost nothing but man hours, and we can all benefit. As a metaphor: no encryption is unbreakable, but we can try to make it unappealingly difficult.


This is another one of the articles that says absolutely nothing.

The only reason such an attack was possible is that some of the providers did not filter announcements from hobos claiming to be able to advertise AWS space. That could only have happened if:

a) AWS did not register all their policy prefixes

b) providers did not apply filters on the transit announcements from those that announced AWS prefixes based on the policies registered by AWS.

This is a modern-day "let residential IPs send data directly to tcp/25 and don't check".


If you think the majority of BGP speakers world-wide are applying these announcement filters you are sadly mistaken.


I don't remember the last time I was able to just announce anything I wanted, either as a customer or a peer, to any large network. So as long as the large networks filter their customers and peers, it seems to minimize the exposure. [1]

[1] That statement excludes Google whose idea of filtering is to let me announce anything I want and build filters based on what I announce.


So they obviously couldn't or didn't hijack certs, so they were depending on people who clicked through or APIs/systems that didn't do cert checks?


Update: see other comment chains invalidating the security I'm claiming HSTS provides.

APIs will probably go directly to https://, and hopefully cert failures will cause the API to blow up with minimal harm.

If you want to visit my web site, and you need to type it in, you'll probably just put realms.org into the URL bar of your browser. Most secure websites, such as paypal.com, have something called HSTS https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security to force the browser to always hit the site with https, even if you request it over http:

    dana@araman:~$ curl -D - https://paypal.com 2> /dev/null |grep -i strict-transport
    Strict-Transport-Security: max-age=63072000
    dana@araman:~$

So even if going to paypal.com lands you on a blackhat server, the cert will still blow up, protecting you.

Slightly related: much to my annoyance, it seems that neither AWS ELB nor ALB support sending HSTS, which is a real bummer.


Yep, this is nasty to defend against.

Most of our internal infrastructure handling GDPR-data would be fine with this incident, since we encoded mutual validation and trust into a lot of systems. Clients need to authenticate to be authorized, obviously, but clients also authenticate the server against company-internal secrets. This is done using a bunch of secret, self-signed CA-Chains or managed ssh/known_hosts files.

But that kind of system is quite impossible to implement without control over both the server and the client side. And if the client has to get the trust information from somewhere, you have a tradeoff: flexibility vs security. Either the source of trust in a client can be changed quickly (which an attacker can exploit), or you provide a secure connection that, as a service provider, you can only change with six years of planning.


> Slightly related: much to my annoyance, it seems that neither AWS ELB nor ALB support sending HSTS, which is a real bummer.

I didn't know that either one lets you set any kind of header - I've always set a passthrough on the upstream, so it's guaranteed no matter what LB or routing framework is used.

    curl -sI -H 'Host: myhost.mydomain.com' https://long-alb-descriptor.us-west-2.elb.amazonaws.com/ | grep Strict
    Strict-Transport-Security: max-age=15552000; includeSubDomains; preload


Except, it would have been trivial for them to get Lets Encrypt to issue them a valid cert.


It depends; Let's Encrypt might be performing requests from different sources (and then it would need a bigger BGP hijack to fake them).


Yes, Let's Encrypt acknowledges that they perform their validations from multiple observation points on the Internet. They (intentionally) don't provide a list.

This occasionally annoys people who have an idea like "I will firewall everything but whitelist Let's Encrypt" and then find out that while you're welcome to try to puzzle out some way to make this work, Let's Encrypt won't help you and when it breaks you get to keep both halves.


If LetsEncrypt is actually hosted within AWS, then a network request from them to Route53 might not even hit the public internet.


I assume that did not work because the challenge would fail; it looks like only Google DNS would return invalid results.


Yup, I saw that after I commented. I edited and put a warning at the top of my post.


If you use Route53, I encourage you to check on crt.sh whether any SSL certificates were issued for your subdomain over the last few days.

Example: https://crt.sh/?Identity=%.MyEtherWallet.com

Keep in mind that there is a certain time lag between the time a certificate is issued and when its entry is inserted into the crt.sh database.
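
If you'd rather script the check than click around, crt.sh can also return JSON (a sketch assuming crt.sh's JSON output and jq installed; swap in your own domain):

    curl -s 'https://crt.sh/?q=%25.example.com&output=json' \
      | jq '.[] | {issuer_name, not_before, name_value}'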


Your advice should get much much more attention. You got mine, thanks.



We experienced the same thing. DNS lookups via Google's public DNS servers were failing for one of our hosted zones.

Interestingly, this only seemed to be a problem with Google DNS; other large providers as well as smaller ones were not affected.


As I mentioned in the previous thread; DNS lookups via Azure also seemed to be affected.


We saw the same thing. DNS stopped resolving while using Google's DNS for about 2h.

There's a Route 53 status update on their page that says basically that: it only affected Google DNS.


That’s simply wrong. Best case, this is ignorance on the Route53 marketing team. Worst, it’s actively misleading. Disappointing either way.


"Between 11am until 1pm UTC today, DNS traffic-the phone book of the internet, routing you to your favourite websites-was hijacked by an unknown actor."

Is it worth keeping a record of the IP addresses for "your favorite websites"?

For example, with IP addresses saved, would this enable reaching the websites even if DNS is not working?

How often do these "favorite websites" change IP addresses?

Are they all the same in that regard?

(The frequency with which they change IP addresses.)

Is it worth paying attention when one of them changes its IP address?

Is it worth noting where these addresses are thought to be located (e.g. the countries)?

What if several of "your favorite websites" each change their IP address on the same day, the same week or even the same month?

Is this worth noting?


Depends on how the website is hosted. If it's a tiny website on a single server/VPS, then it's going to be fairly stable with the IP. For anything else, you're out of luck. Shared hosting or PaaS will change IPs when needed internally, or to rebalance the traffic, or when new hosts come up. Anyone with large enough traffic will use some edge network like Cloudflare which would also get a different address depending on your location.

Some IPs may stay stable for a long time, but virtually every change to them is going to be intended. And you can't easily tell which is which.


"Anyone with large enough traffic will use some edge network like Cloudflare which would also get a different address depending on your location."

What if the request is always sent from the same location?

What if when the user moves location, e.g., from one country to another, she updates her stored IP addresses?

"And you can't tell easily which one want."

Personal experience as a user is that when it is an "intended" change, the previous IP address no longer hosts the content.

Personal experience is that this is surprisingly rare.

In the case of the more common load balancing, the user may store several IP addresses for the website, all of which continue to serve the content. She can choose which of those she prefers.

Personal experience is that this works very well.

Further questions:

If this was a BGP hijack, why did the hijackers target the IP addresses of DNS servers, rather than the target IP addresses of the websites?

If they had targeted the IP addresses of the websites, then what would be the possible mitigations, if any?


> What if the request is always sent from the same location?

Then you may get the same IP.

> What if when the user moves location, e.g., from one country to another, she updates her stored IP addresses?

It doesn't have to be a different country. Different ISPs with different peering may get different results. Moving between your home wifi and your phone tethering may get different results. So depending on your usage style, it's between "change multiple times a day" and "no change".

> Personal experience as a user is that when it is an "intended" change, the previous IP address no longer hosts the content.

In some cases I'm sure it's true. In others, it isn't. Basically it's not a reliable indicator.

> If this was a BGP hijack, why did the hijackers target the IP addresses of DNS servers, rather than the target IP addresses of the websites?

Haven't seen the site's announced block, but it's possible that it's announced as a /24 block. This attack was possible because Amazon announced a /23 and the attacker could announce something more specific.


"So depending on your usage style, it's between "changes multiple times a day" and "no change"."

For me, for each ISP, it's been "no change". Can only relate personal experience.


Something similar happened once in 2014 [0]

> That BGP hijack allowed the hacker to redirect the miners’ computers to a malicious server controlled by the hijacker.

Also, the entire blockchain ecosystem is very susceptible to BGP hijacks [1]

[0]:https://www.wired.com/2014/08/isp-bitcoin-theft/ [1]:https://btc-hijack.ethz.ch/


Can someone ELI5 how this works? Like, how do they physically do this?


I’m not an expert, but I think it goes something like this —

You register an AS and get a dedicated server [0] at an internet exchange point (IXP) and install a BGP router on it. Your router advertises a route to the original route53 IP, with a shorter AS path to the target IP than the route advertised by the “real” Amazon router. You peer your router with at least one provider who peers with the real route53 servers (they accept your route advertisements).

Then you redirect incoming traffic on port 53 to a MITM DNS proxy, where you either do nothing or return an IP address you control, which could be anywhere.

At the IP address you control, you run a transparent HTTP proxy that signs HTTPS responses with the valid cert you obtained by spoofing MX replies in the DNS proxy. You then either do nothing, or modify the page to return arbitrary content (like rewriting bitcoin addresses).

[0] or you get a VPS at a provider willing to advertise BGP routes on your behalf without validating them


Not an expert either, but I think it was slightly different.

Firstly, I believe BGP hijacking works by announcing a more specific prefix rather than by path length. That is, to get to 3.3.3.3, a 3.3.0.0/16 route takes precedence over a 3.0.0.0/8 route.

Secondly, this attack did not obtain a valid cert. They probably could have, using spoofing (either of MX or just of A records and using Let's Encrypt), but they did not. This might be because the BGP hijack did not affect any CA, or it might just have been laziness.


Yeah, you're correct. There was a great Black Hat presentation on this in 2015: https://www.blackhat.com/docs/us-15/materials/us-15-Gavriche...

Really the hardest part of the attack, and what makes it so infrequent, is the social engineering aspect of registering the AS and getting an influential peer to accept your advertisements. To really cover your tracks you would need fake identities, fake businesses, fake bank accounts...


What's an AS?


AS = Autonomous System (see Wikipedia). This is the corporation or entity to which an IP prefix (min /24) is registered, and which shows up in queries on sites like ipinfo.io.

BGP Path to Target IP prefix = ordered list of autonomous systems to reach target IP prefix

The point of BGP is to find the shortest AS path to an IP. Each AS has at least one router, and each router has at least one other AS peer. A router will forward the packet as far as it can given the peers it knows, until the AS list is length 0, at which point the controlling AS routes traffic to its destination.


Autonomous System.

Basically, a top level Internet shard. You need to become a Real ISP to pull off this attack.


That may be technically true, but not by any layman’s definition of an ISP. An AS can be a tiny company (I’ve been one) as long as it has some IP space SWIPed (registered) to it with ARIN/RIPE. All you need is to fill out a form.

Then you need to find other ASes to peer with you of course.

Also note that technically all you need is the ability to advertise routes. That could be separate from AS status if your server provider is willing to advertise routes from their AS on your behalf (like vultr does, I think). In that case the provider is acting irresponsibly and should validate the routes.


Confusion prevention squad:

> That is to get to 3.3.3.3 A 3.3.0.0/16 route takes precedence over a 3.0.0.0/32 route.

I think you mean:

> That is to get to 3.3.3.3 A 3.3.0.0/16 route takes precedence over a 3.0.0.0/8 route.

a /32 is _more_ specific.


Thanks, you are totally right. Not sure what caused that brain fart. Luckily, I could still edit.


It is extremely unlikely that they did what this article claimed.

If they did, this is what would have happened (this ignores specific implementations which may make certain parts of this impossible, which in turn would completely block this kind of attack):

1. Attackers would have noticed that either AWS does not register their prefixes or some of the prefixes were not registered or their registered policies did not match the views of AWS from the location that was to be attacked.

2. Attackers identified transit networks that did not enforce AWS registered policies for routes advertised by AWS as well as their peers that had the same flaw.

3. Attackers obtained credentials to the router(s) and monitoring/provisioning systems used by the transit networks that would have been attacked.

4. After obtaining access to (3) and disabling monitoring/provisioning/version control systems(!)/automated backups/verification(!), the attackers brought up a GRE-type tunnel to some destination that the attackers controlled.

5. Attackers shimmed a static route for the address space that the attackers needed to attack pointing it to the local side of the GRE-type tunnel thereby sending the traffic to the servers controlled by the attackers.

6. Using a few other techniques (most likely another GRE-type tunnel), the attackers created a way for MITM traffic to get back to the attacked router.

7. Attackers added the route that was being MITM into IGP protocol of the router they got credentials to. Attackers adjusted outbound filters between this router and other routers from (2) to allow this route to be announced to the routers in (2).

8. Networks in (2) would not filter AWS announcements, so anyone who was single-homed to networks in (2) would be no wiser that their traffic never reached the affected prefix. Those that receive transit from (2) via BGP and do not do full policy-based prefix filtering would be affected if those networks did not engage in hot-potato-type routing.

All in all, it is extremely unlikely that a full global BGP hijack happened.

All of this needed to be done without configuration changes being picked up or overridden by management systems, which is extremely unlikely on non-hobo-run networks identified in (2).


It looks like Amazon announces /23s, but there were announcements for the more specific /24s today, starting from around 11:05:42 UTC; the routes were withdrawn at 12:55:35 (to be more specific, that's when RIPE's probes saw the changes).

https://stat.ripe.net/205.251.192.0%2F24#tabId=routing

As far as I know, most ASNs only filter routes coming from their direct customers (if even that, and even in that case they are likely to accept those prefixes from their own peers), so once a route gets into the tier 1 ISPs' routing tables, it's going to be propagated pretty much everywhere.


Well, in that case Amazon needs to hire better backbone engineers. For anycast they should always be announcing both covering routes and de-aggregated routes.

They should also be constantly monitoring views of the AWS network from other BGP-speaking networks, allowing them to immediately detect such poisoning. If another important BGP-speaking network does not want to provide them the view, they can go to a single-homed BGP customer of that network and buy the view.


You need access to a BGP router (hack one, for example), then using the router you announce your target IP range as yours and your BGP neighbors update their BGP tables. Now when someone near those routers wants to connect to a certain IP, they connect to you, because the shortest path to that IP in the BGP tables is through you.


I'd guess that they hacked a weakly secured BGP router and spread the new routes from there.


Physically? Hammering on the keyboard, sitting in a dark room, wearing a hoodie. It's not like you need to start pulling cables to do this sort of thing; internet routing is all software and configuration based, which is pretty much its core characteristic.


I wonder why they did not reroute traffic for their targets directly. I assume the amount of traffic the Route 53 servers get is quite a lot to handle, and failure to do so would make a lot of people aware of some issue. Also, how is it possible to reach the original servers in order to forward non-targeted lookups to them? Is it possible to have your own routes not affected by your own BGP announcements? Maybe they were able to do this by using EC2 internal DNS resolvers?

Maybe Route 53 uses individual IP addresses for the nameservers of registered domains? Or at least uses them in a highly segmented way? I guess this would help to handle the amount of traffic quite a bit if so. Otherwise it seems implausible that the mentioned site is the only target, or that there were just a few targets, because that should be easier to pull off.

On the other hand, I am not sure if these targets might share IP addresses with a large pool of websites, in which case handling "only" authoritative DNS requests for them (and any others that happen to have the same IP addresses set up as nameservers) would be a lot easier. How does Route 53 handle delegation exactly? Would I have to handle all NS requests that happen to resolve to Route 53, or just some specific set of domains? Only handling these delegation-related NS requests might reduce the amount of traffic a lot in any case. But maybe they set up glue records for registered domains. In this case the amount of traffic they would have to handle would be even larger, and I guess it would disrupt quite a lot of DNS clients because they would not be able to sign the zone properly.

I also wonder why they were not able to issue a new valid certificate. This appears to be very easy even without any interruption at this stage. This in particular feels like it was only done in order to direct attention away.

Hopefully a more detailed writeup appears because this seems rather interesting.


Can I check my understanding?

(Somehow) A BGP route was advertised and accepted for the IP range of Amazon's Route 53, so that some / many DNS requests hit a fake server instead of the real Amazon one. The only DNS requests that changed seem to be for a cryptocurrency site, and they were redirected to a site that seems to have then grabbed their passwords and then emptied their wallets on the real site.

?

Mitigation strategies that come to mind

- certificate pinning (where I have my browser only accept certificates that I have approved, not just ones signed by a CA); this has not really taken off

- Certificate warning - surely browsers should shout if a certificate that was valid for domain X changes today? (suffers from similar problems to pinning)

- BGP route acceptance. We are out of my comfort area now, but this is surely where the real solution lies - why did people accept a BGP change to redirect Route 53 to Russia? Can routes be advertised with signing?


No, BGP is actually terrifyingly easy to hijack and it's surprising this doesn't happen more often. There's really nothing at the protocol level to indicate that the router you're peering with is actually authorized to advertise the routes they're sending. I could configure my company's edge routers right now to advertise these same AWS ranges if I wanted to and those would be sent to our upstream peers.

What can be done is filtering the routes you receive from peers. Any ISP that knows what they're doing will coordinate ahead of time with their customers which routes the customer will be advertising, and whether the customer really owns that IP space. Then the ISP sets up rules on their routers to only allow those specific networks to be advertised by you, and drop all other advertisements you might send. This can be done manually via a Letter of Authorization, or by querying some central registry like http://www.radb.net/. But clearly it can be done incorrectly or not at all, and that opens the door for route hijacks like this one.
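
As an illustration, checking what route objects the IRRs hold for a prefix is a one-liner against RADB (using the Route 53 prefix from the RIPEstat link elsewhere in the thread):

    whois -h whois.radb.net 205.251.192.0/24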

BGP is an old, old protocol in tech terms. The RFC for the current version dates from the 1990's. The go-to textbook reference for it (Sam Halabi's Internet Routing Architectures) was published in the year 2000 and is still considered up to date. It's really not equipped at all for the sophisticated threats seen on the modern internet.


> surely browsers should shout of a certificate that was valid for domain X changes today

Why? If the new cert is valid, why sound any alarms?


I visit my bank site and my browser gets a certificate for barclaysbank Ltd, to expire in 12 months. Tomorrow I visit the same site and get a different certificate, or one from a different CA.

Should my browser warn me?

I don't know. The chances are high that 99% of people will click OK, and 99% of the 1% left will look at it and think "how do I verify this?"

That is probably the reason browsers don't bother with pinning and the rest - there is simply no chain of trust I as an individual can possibly use to verify a fraction of the sites I visit.

but still ... if I pay my landlord every week in cash and one day someone else turns up and says "hi i am your landlord, pay me" I have visual signals to warn me of a possible problem.

The signals are there - my browser could fingerprint the servers, the headers, the response times, a lot could be done. But then most days my browser would act like Donald Sutherland and point at people and say "they have been replaced" and I would have no means to tell.

We really have become reliant on one piece of technology, haven't we?


> The signals are there - my browser could fingerprint the servers, the headers, the response times, a lot could be done.

You realize this stuff can change by the minute, nay, second?

> but still ... if I pay my landlord every week in cash and one day someone else turns up and says "hi i am your landlord, pay me" I have visual signals to warn me of a possible problem.

No, it's more like you walk into your landlord's office and there is a new secretary. Sure, usually there won't be, but it's not unheard of for companies to get new secretaries.

> I visit my bank site and my browser gets a certificate for barclaysbank Ltd, to expire in 12 months. Tomorrow i visit the same site and get a different certificate, or from a different CA.

> Should my browser warn me?

If the new cert is valid, and there are no CAA records (which, admittedly a dns hijack would make worthless), then why should the browser warn you? Everything is valid, what's wrong?

What you should be asking yourself is why we don't use mutual authentication for banks. The answer is banks think we're all idiots and couldn't handle it.


What separates BGP hijacking from BGP anycast? Amazon is already advertising a bunch of routes to its Route 53 IPs for BGP anycast. How can an upstream peer tell if a new advertisement is another anycast route vs. a hijacked prefix? Is the only way to tell by validating that the advertisement came from Amazon?


The issue here is that the attackers were advertising more specific prefixes, and when doing route selection, routers pick routes to these over less specific prefixes that cover the same ranges.

e.g. if Amazon was announcing 205.51.192.0/21, that would cover all the /24s that were announced in the hijack. Not sure what Amazon actually announces, this is all just examples.

Say I'm trying to get to 205.51.192.12, for example. If I have two routes, one advertising 205.51.192.0/21 and another advertising 205.51.192.0/24, the router will forward the traffic to the next-hop advertising the /24, all else being equal.

So if you have the ability to advertise routes over BGP to the internet, you advertise a more specific route than what someone else is already advertising, and your peer has no filtering in place with regard to what they're accepting from you, you can potentially hijack IP space like this.

ETA: Anycast DNS generally advertises the exact same prefix to multiple peers. That way, the best path to that IP will be dependent on where your traffic is coming from, and will hopefully come in over the closest peering to you.
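
One way to see which announced prefix currently covers a given address is Team Cymru's IP-to-ASN whois service (the address here is just the made-up one from the example above):

    whois -h whois.cymru.com " -v 205.51.192.12"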


As much as it's bad for routing table inflation, given how much content they host, Amazon should probably consider de-aggregating their DNS resolver prefixes to /24 announcements, which would keep the effect much more local for the majority of the Internet.


Also experienced this today - was doing user testing on an app, and suddenly our AWS hosted app that uses Route 53 stopped working, didn't resolve. When I checked http://status.aws.amazon.com/ it showed all green for Route 53 though.

Luckily I noticed the app kept on working on my own devices where I had previously switched the DNS to 1.1.1.1 (Thanks HN for the heads up on that a few weeks ago!) - otherwise would have been a waste of half a day in an expensive user testing lab.

Assumed it was Putin's blocking of Amazon and Google IP's that had been on the news, but guess it was just hackers after some ETH. Or was it?



It's sad that HPKP is being deprecated. It's one of the best ways to defend against an attacker with BGP hijack capabilities, assuming you pin your own public key. Difficult to scale though, and prone to disastrous misconfiguration.


Too many people have foot-gunned themselves with that one. As someone who has to readily fight technical fires for other people (when they should have googled their problems) I am glad to see HPKP go.


Honestly there are a lot of ways for people to foot-gun themselves when dealing with providing a service. HPKP isn't perfect, but deprecating it just for the sake of keeping some incompetent people safe is just silly. Might as well remove HSTS and not roll out Expect-CT because everything allows someone to foot-gun themselves.


The difference between HSTS, Expect-CT and HPKP is that the former two offer a way out (support HTTPS, provide qualified SCTs) whereas HPKP can effectively brick your domain for a couple of months, and it's not even hard to pull off.


HPKP can brick your domain if you use HSTS alongside it; if you don't, then one can fall back to http. Not an ideal solution, but it's a way out. HPKP is an extreme and possibly dangerous measure that should be carefully considered (maybe it shouldn't be TOFU but TOTU - trust on third use), but I still think it's not reasonable to totally deprecate it. There are cases where such a measure would provide clear and strong security benefits.


That sounds nice and all but you don't have to support it


Route 53 DNS queries getting black-holed at a small ISP in Ohio: https://pbs.twimg.com/media/Dbka0oAVMAAFEXx.jpg


Isn't DNS hijacking a major problem? I could go to a Starbucks and set up my own DHCP server on my laptop. A certain fraction of the WiFi devices would probably pick up my DHCP-provided DNS, which would reroute requests to my own DNS server. Nobody would ever know... or am I misunderstanding something about HTTPS?

Or if I was a malware writer I would discreetly target hosts files or the local DNS configuration - exploits in certain routers to try to reconfigure the complete local network and such.


You are missing something about HTTPS: you can't produce a valid certificate for the site you're trying to impersonate, and therefore users would notice.

HTTPS operates on the assumption that DNS is insecure.


Let's hope your local Starbucks has device isolation enabled so that isn't possible.


I will find out and report back.


That would only work for websites that do not use HSTS.


> which served the website using a fake certificate

This would cause an untrusted certificate error in browsers.

> failed to obtain an SSL certificate while man-in-the-middle attacking

The sentence doesn't make sense. MITM is by definition an attack where an actor was able to impersonate the target. In this case, they presented a certificate that was not signed by a CA in the root CA file on the clients. So this can be qualified as a failed MITM attempt.


Time to use the Ethereum Name Service (ENS). As long as you keep your private key safe, your domain name is safe.


What's with the clickbait title? It is simply a falsehood.

Here's the update from the horse's mouth:

[5:19 AM PDT] We are investigating reports of problems resolving some DNS records hosted on Route53 using the third party DNS resolvers 8.8.8.8 and 8.8.4.4 . DNS resolution using other third-party DNS resolvers or DNS resolution from within EC2 instances using the default EC2 resolvers are not affected at this time.

[5:49 AM PDT] We have identified the cause for an elevation in DNS resolution errors using third party DNS resolvers 8.8.8.8 / 8.8.4.4 and are working towards resolution. DNS resolution using other third-party DNS resolvers or DNS resolution from within EC2 instances using the default EC2 resolvers continues to work normally.

[6:10 AM PDT] Between 4:05 AM PDT and 5:56 AM PDT, some customers may have experienced elevated errors resolving DNS records hosted on Route 53 using DNS resolvers 8.8.8.8 / 8.8.4.4. This issue was caused by a problem with a third-party Internet provider. The issue has been resolved and the service is operating normally.


The horse's mouth, in this case, speaks nonsense. Multiple /24s were announced more specifically to the DFZ, and that has absolutely nothing to do with Google's DNS resolvers. "Simply a falsehood," indeed.

https://twitter.com/InternetIntel/status/988792927068610561

https://arstechnica.com/information-technology/2018/04/suspi...


The horse's mouth actually does say it; you'll note they said:

"This issue was caused by a problem with a third-party Internet provider."

As soon as I read this and saw the MyEtherWallet announcement, I called it last night: that's exactly what they did, and today I find out that was the case.


Wow, MyEtherWallet is trusted for transacting very non-trivial amounts. What would a good mitigation strategy be for these types of attacks?


Using a hardware wallet and verifying that the address on your hardware wallet is the same as the one you entered into the website.


Making sure the cert is valid.


Also do a dry run before each usage, generating a throwaway wallet, and inspect the network tab / use fiddler to check if there is any suspicious sending of data.

Even this isn't 100%, as they could seed keys if a bad person gets control of the website. This has been done many times in the IOTA community for example, albeit those sites were dodgy from the get-go.

So I'd probably get a snapshot of their client code when you trust it, and serve it to yourself locally thereafter.

It's a sad state of affairs that people use services like this. I have done so myself :-(

The reason is that the Ethereum desktop clients suck. More than suck, they are untenable. The official client never synced even after running for 2 weeks. I tried this twice. It is awful.


Also, you can save the MEW webpage locally and use it.


Is there any official post mortem available?


How does one prevent an attack like this? APIs and Browsers.


With all the talk of blockchain, DNS should be one of the best uses.


Does anyone know if this happened to affect Amazon's Seller Central?

Amazon sellers (including me) couldn't access their control panels for several hours last night.


Was this related to the myetherwallet.com hack? They hijacked BGP and DNS to redirect traffic to their own page.

https://altcoinreport.co/my-ether-wallet-hacked/


Yes, the article says the only compromised site they are aware of was MyEtherWallet.


Surely related (and warning!):

During this period I received an email from:

    ship-confirm@amazon.com
purporting to notify me of the despatch of a non-existent order; with an attached .zip.

Nothing about the message, except the .zip and suggestion of an order's existence, was at all suspicious - all links point to amazon.com.

I checked on amazon.com (without following a link) and on amazon.co.uk (where I would usually shop) and an order with the given number does not exist for my account.


That's just regular spam. I bet if you checked you would see the email was not actually sent by Amazon, and did not have a DKIM signature.


Maybe. I've deleted it now, but the headers looked OK to me. If I'd actually ordered something (e.g. it had come a day earlier) I'm sure I'd have taken it as being legit.


Why is an obvious phishing email related?



