Issue with TLS-SNI-01 and Shared Hosting Infrastructure (letsencrypt.org)
160 points by jlgaddis on Jan 10, 2018 | 77 comments



> At this time, we believe that the issue can be addressed by having certain services providers implement stronger controls for domains hosted on their infrastructure.

Requiring unrelated third parties to take specific action(s) to prevent Let's Encrypt from incorrectly issuing certificates doesn't sound like a valid solution to this issue.

What about the shared hosting providers that are "affected" that decide to do nothing to "fix" this?


To answer your second question specifically: Let's Encrypt intends to blacklist those providers' IPs from using tls-sni-01 proof of control.

Now, requiring unrelated third parties to take action is (perhaps unfortunately) nothing new for DV. Back in the day, lots of web mail providers got bitten because some CA would decide that, say, receipt of email at ssl-certificate-officer@example.com was proof of control of example.com. So you'd just sign up for web mail, ask for the address ssl-certificate-officer, and bingo, you'd get a certificate for a few bucks. This was exploited over and over.

Even today, the Ten Blessed Methods let a CA choose any email address you listed in WHOIS (fair enough, don't list anybody else's address in WHOIS) _or_ any of 'admin', 'administrator', 'webmaster', 'hostmaster', or 'postmaster'.

So when you build a web mail system you need to explicitly blacklist those five email addresses even if you would never need an account named "webmaster" or "admin". If you fail to take this specific action, anybody can get certificates for your domain from any of the commercial CAs that use a method complying with 3.2.2.4.4.

Let's Encrypt already disabled a feature of http-01 validation before going live to avoid needing bulk web hosts and CDNs to make a sensible configuration choice, because surveys suggested that lots of them had left insecure default choices enabled.

Or another email example (again, one not affecting Let's Encrypt): lots of companies purchased anti-phishing systems that scan all incoming email to check for dodgy links. These systems typically dereference all the links, check what they find, and quarantine suspicious emails. Several CAs had systems which sent an email saying "Just click here to authorize issuance". The anti-phishing system follows the link, now it's "clicked", and anyone could get a certificate for these domains because the anti-phishing system would "authorize" anything they requested...

Now, all these are historical rather than current examples, but they do illustrate that "requiring unrelated parties to take specific actions" won't be new or unique to Let's Encrypt.


> So when you build a web mail system you need to explicitly blacklist those five email addresses

Actually, you should blacklist all of the addresses listed in RFC 2142¹. Also consider this blog post for additional addresses to block: https://ldpreload.com/blog/names-to-reserve

1. https://tools.ietf.org/html/rfc2142
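For illustration, here's a minimal sketch (hypothetical Python, not any particular mail platform's code) of the kind of reservation check a web mail signup flow needs, covering the five blessed names plus the RFC 2142 role mailboxes:

    # Local parts a CA may accept as proof of domain control (BR 3.2.2.4.4).
    CA_VALIDATION_NAMES = {"admin", "administrator", "webmaster",
                           "hostmaster", "postmaster"}

    # Role mailboxes reserved by RFC 2142 (names already listed above omitted).
    RFC_2142_NAMES = {"info", "marketing", "sales", "support", "abuse",
                      "noc", "security", "usenet", "news", "www",
                      "uucp", "ftp"}

    def signup_allowed(local_part):
        # Reject reserved local parts at account creation time.
        return local_part.lower() not in (CA_VALIDATION_NAMES | RFC_2142_NAMES)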


> "Let's Encrypt already disabled a feature of http-01 validation before going live"

I'm curious what this was. Can you elaborate or point me somewhere I can read about that?


Sure, http-01 as defined originally has a flag for whether to do the validation with plaintext HTTP on port 80 (as today) or using HTTPS on port 443.

Now, suppose http://no-ssl.customer1.example/ is a site Customer 1 is running on a bulk host, and it doesn't have SSL. We (the attacker) ask Let's Encrypt for an HTTPS version of the http-01 validation for this name, no-ssl.customer1.example.

A DNS lookup of no-ssl.customer1.example gives Let's Encrypt the IP address of the bulk hosting server. They connect to port 443 of that address, and they do SNI and say they expect to talk to no-ssl.customer1.example, then they do an HTTP request with Host: no-ssl.customer1.example and a GET for the relevant .well-known/acme-challenge/ file - what happens?

A common default configuration of web servers used by bulk hosters says if anything doesn't match, let's use a "default" such as the first (often alphabetically first) named virtual host to serve the answers. No error, just assume that must be what the request was really for.

Since there is no SSL version of no-ssl.customer1.example, the server drops through to this default, and the web server offers the certificate and content for aaaaa.attacker.example, which we, the attacker, have prepared. This includes the ACME answer file, so Let's Encrypt is satisfied we have "control" over no-ssl.customer1.example and grants us a certificate we clearly shouldn't have.

Apache and nginx both do this by default. Lots of bulk hosts are set up to use them without a configuration that prevents this happening, and they didn't seem to be in any hurry to fix it.
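To make the failure mode concrete, here's a tiny simulation of that fallback (hypothetical Python standing in for the server's vhost selection; names are from the example above):

    # Unrecognized SNI/Host values fall through to a "default" vhost
    # (often the alphabetically first one) instead of raising an error.
    def select_vhost(requested_name, vhosts):
        if requested_name in vhosts:
            return vhosts[requested_name]
        return vhosts[sorted(vhosts)[0]]  # silent fallback, no error

    vhosts = {
        "aaaaa.attacker.example": "attacker content, incl. ACME answer file",
        "www.customer2.example": "a legitimate customer site",
    }

    # The validator asks for a name with no HTTPS vhost of its own...
    print(select_vhost("no-ssl.customer1.example", vhosts))
    # ...and gets the attacker's vhost, so the challenge "succeeds".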

So, Let's Encrypt disabled this feature for their production release.


Thanks for expanding. That's interesting. So they disabled http-01 over TLS [0]. Two questions:

1. Isn't the default virtual host practice still a problem even without TLS/SNI? Apache chooses a default virtual host on port 80 in the same way. Is that "acceptable" simply because it's more common for the default port 80 virtual host to be controlled by the hosting provider? That's not as robust as one would hope.

2. Wouldn't http-01 over TLS be made safe if the certificate was required to be valid for the domain being challenged? Granted, there's a TLS chicken and egg problem there, but that's beside the point.

[0] https://tools.ietf.org/html/draft-ietf-acme-acme-09#section-...


> What about the shared hosting providers that are "affected" that decide to do nothing to "fix" this?

The problem is that I can upload a self-signed certificate on a shared host for a domain I don't own.

This, in and of itself, should not be a problem. I did nothing wrong, and the shared host did nothing wrong either: it only allowed me to upload a self-signed cert for a nonexistent domain. IMHO, those two actions should not allow someone to get a valid cert for a domain they don't control, whose only fault is that they pointed their A record at the shared host.

I don't think TLS-SNI is sufficient proof of Domain Control. IMHO it should be sacked.


As someone with a long history in shared hosting, I have to agree. Basically, answering a ".acme.invalid" SNI request proves control only of the ".acme.invalid" name, and not of the actual domain name in any way. I don't see any way this scheme can work.

The only way I can see this working is to pre-install, on the actual domain name, a self-signed certificate using the same private key as the certificate being requested. I think that can work, can't it? I'd have to try it with openssl..

They may be able to fix it for larger providers but you'll forever have many smaller providers that will never mitigate the issue.

The obvious problem with that is temporarily killing the real SSL certificate during initial setup. Though in theory they could re-use the private key.


> the shared host did nothing wrong as well

I'm not so sure about that. Is it really a good idea to let customers configure security settings (e.g. the TLS cert used) for domains they don't own?

I suppose as long as they're only doing it for invalid domains it shouldn't be an issue, but what if a customer were to set a cert for sitea.example (which they don't own) and then later the real owner of sitea.example comes along and wants to set up their site on the same host?

Just doesn't seem like good practice to me.

Maybe the solution here is to make the domain name sent by the SNI challenge a subdomain of the domain being validated? If a shared hosting site allows customers to set TLS certs for subdomains of a _completely different customer's_ domain name, that's most definitely a security issue regardless of whether or not TLS-SNI validation exists.


> Maybe the solution here is to make the domain name sent by the SNI challenge a subdomain of the domain being validated?

FWIW, Matthew Hardeman just made an excellent post on mozilla.dev.security.policy which discusses this solution and some potential problems with it in detail: https://groups.google.com/d/msg/mozilla.dev.security.policy/...


This is an interesting detail from Ryan Sleevi:

> So I don't think this buys any improvement over the status quo, and actually makes it considerably more complex and failure prone, due to the cross-sectional lookups, versus the fact that .invalid is a reserved TLD.

So shared hosting services should be more likely to recognize that .invalid is an explicitly reserved TLD than to correctly walk subdomains and recognize "effective TLDs" (which vary in the number of dot-separated elements).


Indeed, the fact that the hosting providers do neither is a strong sign that TLS-SNI-01, TLS-SNI-02, and the mechanism I proposed would all be deficient at really making this mechanism secure again.


Agreed. It may be necessary to abandon TLS-SNI. Maybe ALPN will work out.


There are equivalent attacks on both the DNS and HTTP challenges as described in the Baseline Requirements and ACME, and those are expected to be mitigated (and are) by hosting providers. Let's Encrypt isn't bound by the Baseline Requirements to mitigate this for these hosting providers, but is doing so out of esprit de corps while the web figures out how it wants to handle this.

Hosting providers haven't previously thought of TLS serving as an individual user service (though the Baseline Requirements do) and are having to work out the kinks of that.


I don't think they are really equivalent. We are talking about Domain Validated Certificates, and for DNS and HTTP "attacks" you need to be able to serve content in the name/as part of that domain.

In this attack you need to be able to serve content in the "name of" a made-up TLS name under an IP you share with the domain you attack (which is very common).


My "Hosting providers haven't, previously, thought of TLS serving as an individual user service before (though the Baseline Requirements do) and are having to work out the kinks of that." covers what you're trying to get at here.


I agree completely with that sentence factually and as an explanation why we have such issues, but still think HTTP and DNS are fundamentally different to this issue with the TLS-SNI challenge.

The domain under attack is not part of the actual challenge process here. As a hosting provider I never see it and it plays no role in the decision of what information I reply with. At no point do I serve content under the "wrong" domain. At no point does the attacker show any control over the domain being validated.


Well, no, the DNS resolved to these endpoints, just like in an HTTP attack, and then this is as if the Host header wasn't checked by the HTTP provider. You and these few specific hosting providers didn't think they had to look at the SANs, but, in fact, the Baseline Requirements expected them to verify that the SANs in the certs are controlled by the same user.

There's a reasonable disagreement, but I (and others[1]) liken this to the "postmaster@" attacks. At some point, for every protocol the hosting provider handles, we always end up having them do a bit more work than they thought they had to do, but them's the breaks when dealing with the modern internet.

[1] https://twitter.com/sleevi_/status/951041801368035328


I don't advocate for such hosting providers to not mitigate that attack. It's a real problem and needs a real solution, no matter the technical or political reasons that lead to this. Those hosting providers might not ever want to use LE for their customers and might arguably not be "at fault", but still their customers are at risk and they should take steps to protect them.

But I still think it's a different problem in this case. In the end, I suppose my argument is that this is a design flaw in this challenge and we ideally should not use it in its current form, just as we should no longer use postmaster@ for domain validation (though the technical argument against postmaster@ is, again, a fundamentally different one).

Edit: I realized I was wrong and removed one part of my response regarding IP lookup as a positive sign of domain ownership.


Very curious what the DNS attack would look like: as far as I understand it, you need to be able to create arbitrary TXT records for the domain in question. Given that you won't be able to create TXT records for a domain you don't have control over (e.g. example.com), I can't quite figure out what the attack would look like.


Slightly mitigated by only affecting the domains of people who chose to use such a hosting provider; but I agree this doesn't seem to meet the bar we should expect from a CA.


> Over the next 48 hours we will be building a list of vulnerable providers and their associated IP addresses. Our tentative plan, once the list is completed, is to re-enable the TLS-SNI-01 challenge type with vulnerable providers blocked from using it.

This is a terrible solution.

You're assuming you can scour the internet and find every shared hosting provider affected, and add them to a list. And then you're also going to keep that list updated. That's crazy.. an impossible task.

So after this is done, Let's Encrypt will continue issuing certificates improperly on any shared hosting providers that didn't get added to this list. Attackers just need to find a provider and check a list now.

I'm actually kind of appalled that this is what you've come up with.


I concur in full with your commentary here.

LetsEncrypt, through thoughtful planning and action -- even when it was inconvenient for them -- has established themselves as a shining beacon of best practice.

For better or worse, when you become the best practice example, you are held by the world -- and especially by your peers -- to a higher standard.

I cannot imagine how they would see a return to production status of TLS-SNI-01 as anything other than painting a target on themselves for their various detractors to take aim at.

It is no secret that numerous other parties in the CA space would love to see them slip and fall.


I think this issue can be described as a form of confused deputy problem.

Is not the real solution to have the confused deputies stop acting in a confused manner?

If this issue were confined to a handful of small hosting companies almost no one uses, then I don't think this would be described as a problem with Let's Encrypt - it would be described as a problem with those hosting companies.

The only party that can truly fix the issue of the incorrect user being granted access (directly or indirectly) to a domain pointed at shared infrastructure is the operator of that infrastructure.


I came here to say this. What's more, the spec was agreed upon, in relatively public forums, with a voice from the community. Crappy shared hosting providers are going to mostly ignore their customers and perpetuate insecure scenarios while they continue to bill exorbitant rates that exploit the customers' ignorance or inertia. That has been the case for some time, and will continue to be the case, this is just another symptom.


> What's more, the spec was agreed upon, in relatively public forums, with a voice from the community.

It was agreed upon and no one caught this issue. Now we know the issue.

There's nothing wrong with using a protocol you think is correct. There is something wrong with using a protocol you know is incorrect, but continue to use it anyway.

The entire internet should not be required from now to forever to workaround LE's mistake. LE should fix their protocol.

And worse, this protocol isn't even needed for LE. They could remove it, and everyone could use one of the two others that are secure, and LE would be just fine, and everyone -- even those crappy shared hosting providers -- would be perfectly secure.

LE created this issue all by itself, and is capable of fixing it all by itself. LE should do that.


LE "created" this issue in the sense that they were the first to formalize an implementable specification for automated verification of authorized domain name use.

In comparison to the prior relatively unspecified approach to verification, it's still an improvement.

"The last person who touched needs to take ownership over anything anyone can blame on the change" is the management style which leads to enterprise IT being unable to get anything done. Because at that point, the safest thing is to never change anything - regardless of how bad things currently are.

Sometimes the world changes, and other parts of the technology ecosystem need to adapt.

[Editing to add since I can't reply to you]: The fundamental "flaw" here is that it's possible for people to get self-signed certificates served for domains where they haven't validated ownership of the domain.

Hosting companies can't be simply adjusting their routing tables for anyone who asks. If you are pointing a domain name at an IP address which will accept routes from any untrusted party, that's simply not a secure situation.

A signed certificate might be good evidence of some authority for a domain, but a self-signed certificate used in a challenge process most assuredly is not.


The difficulty is the comparative security posture.

The HTTP-01 and DNS-01 challenges really don't require the web hosts to get their $h1t together to improve security. (Technically, HTTP-01 could for some attack scenarios, but that would still be more difficult than this TLS-SNI-01 attack.) The TLS-SNI-01 mechanism does. Maybe it's the method that's deficient, rather than the web host.

(In truth both are, but one of them can be fixed in a timely fashion, albeit with a bullet.)


In TLS-SNI-01/02 there's SAN-A and SAN-B (added in 02). These are random values.

Those random values are all known to the attacker. Importantly, they contain no information on which (real) domain these random values correspond to.

These random values are the only values sent and used by LE to verify domain ownership.

How would a provider map random values to a domain? Only LE knows what random values correspond to which domains.
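For reference, here's a sketch of the tls-sni-01 name derivation as I recall it from the ACME drafts (tls-sni-02 adds a second, token-derived SAN, but the point is the same): everything is computable by the requester alone.

    import hashlib

    def tls_sni_01_name(key_authorization):
        # Z = lowercase hex SHA-256 of the key authorization; the
        # validation name is the two halves of Z under .acme.invalid.
        z = hashlib.sha256(key_authorization.encode()).hexdigest()
        return "{}.{}.acme.invalid".format(z[:32], z[32:])

    # key_authorization is "<CA token>.<requester's key thumbprint>", so
    # the requester can precompute this name and upload a matching
    # self-signed cert; the victim domain never appears anywhere.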


This spec may only be usable by providers issuing certificates they manage on behalf of their customers. It's possible the protocol is unusable on a shared IP address for customer-managed certificates.

Even today, there are many hosting companies where you can go ahead and set up an account for "gmail.com" - and they will happily deliver their customers' messages intended for "gmail.com" to it. And there are hosting control panels whose "solution" to this problem is a hardcoded list of high-value targets for which end customers cannot create accounts.

But the general issue is that providers need to be mindful of what changes they make to their routing tables. Not all inputs to the system are trustworthy.


> You're assuming you can scour the internet and find every shared hosting provider affected, and add them to a list. And then you're also going to keep that list updated. That's crazy.. an impossible task.

I think you underestimate the capabilities of modern infosec tooling. Essentially the whole of the internet can be scanned, in some respects, in durations measured in hours. What's more, some systems are constantly updated (such as certificate transparency), and there are relatively easy ways to identify bad actors via whitelists and behavioral monitoring. All that being said, if you have a legitimately better idea, voice it in a meaningful way, and I am sure they will at least listen. That was the part _in_their_post_ about "taking community feedback" you must have missed.


The underlying assumption of TLS-SNI-01/02 is that IP ownership = ownership of all domains using that IP. But as long as SNI is part of the TLS standard, that is an incorrect assumption.

So the list of affected hosts needs to include every shared hosting IP -- not just those that use LE, but all of them. They can then remove IPs once they verify LE-specific workarounds have been applied.

But they need to scan IPv6 as well, since, although SNI is less necessary with IPv6, it can still be used. So they need to scan the entire IPv4 and IPv6 address space (regularly, to keep the list updated) looking for shared IPs. And we'll assume their scanner won't be blocked or interfered with in any way, so they'll actually be able to determine which IPs are being shared.

As far as a solution, I think I've been very clear in my other posts. The underlying assumption in these protocols is incorrect, and that makes them fatally flawed. They should be discontinued entirely (at least until we've all moved to IPv6 and SNI has been removed from the TLS standard).


I think IP ownership does = stewardship of all domains using that IP from a DV certificate perspective. The moral of the story is don't point a domain you value at a sketchy host's IP. The list of the things that need to happen to work around poorly managed hosting providers in this scenario is overblown. No one should host anything they think is important on shared hosting. Full stop. That is about as much of a reality as the above statements around SNI, but it is something individuals can actually act upon.


Since the issue is only domain-based, could they start with a list of domain names instead of every IP?


They would need to resolve all of the domains, and compile a list of IPs.. but one problem that jumps out to me is geodns/round robin policies/etc. If LE makes a dns request, they can't be guaranteed that they're seeing all of the IPs for a domain.

For example.. if a number of domains are at a CDN (that does not use anycast)... they may all resolve to a single IP (to LE, from the location they're requesting from)... but really that CDN may have hundreds of IPs that are all valid for those domains. LE would then add that single IP to the list as a shared IP, but LE verification requests sent to those other IPs would still be vulnerable.


So would a whitelist of providers/IPs be sufficient? Whitelists can be much easier to maintain.


It may make sense as a stopgap measure.

Even then, you have a CA sticking out its neck on the assurances of a web host that isn't accountable to the root programs and isn't accountable to the CAB Forum.

If that web host swears they don't have the issue, LE tests them and whitelists them, and then subsequently (at a customer's request, or just to be nasty) the web host reverts and allows this exploit, the web host won't be held accountable. The CA will.


Ok, in this scenario, we have a web host with an adversarial entity on its server that commits a crime.

By the same token, if that web host were hacked and used to obtain a nefarious certificate, would the CA be accountable? It seems to me that, as a customer, if you point your domain (which you must do somehow) at a hosting provider, then any DV cert issued with that hosting provider's infrastructure should be considered the responsibility of the hosting provider and the domain owner. I think you and rgbrenner are making perfectly valid points for high-value infrastructure, which in my view has very little to do with these hosting providers. The fact that people can upload certificates at all for domains whose ownership they have not proved (to the hosting provider) is disturbing in and of itself, even if it is quite common.


Even if you could find them all and make automatic determinations as to whether or not they facilitate the vulnerability. (You really can't automate that as you need to be a customer of the host to really attempt to pull off the exploit.)

Even if... You would only know for the set of web hosts for the time period you checked each one.

New ones come and old ones die every day.


You don't need to keep a master list of IP addresses up to date, you only have to test an IP when a request for a cert comes in from that IP.


The only domains vulnerable to attackers are those pointed at the shared hosting provider with this issue. So you can protect your own domain by hosting it with a provider you know does not have this issue. And if a shared provider has this issue, they might have others which compromise the isolation of accounts/domains. And major websites which host themselves are not at any risk.


I agree with your position entirely, save for the "and if a shared provider has this issue, they might..."

No one ever told the hosted service providers that they should explicitly guard against TLS SNI names unrelated to any name of the customers they host. That certainly doesn't follow from any obvious logic.

I don't believe that a service provider who has this susceptibility is necessarily any more likely to surface other threats.


I can create "773c7d.13445a.acme.invalid" in almost all shared hosting control panels I have access to.

When I send an HTTP request with Host: 773c7d.13445a.acme.invalid, the server responds with a file from ~/domains/773c7d.13445a.acme.invalid/public_html or a similar directory available through my FTP account.

When I connect using openssl s_client ... -servername 773c7d.13445a.acme.invalid, the server sends a certificate configured for 773c7d.13445a.acme.invalid in my control panel.

Is this a problem for Let's Encrypt? Doesn't Let's Encrypt's verification require creating files with random names in http://example.com/.well-known/acme-challenge where example.com is the certificate's common name?


> Doesn't Let's Encrypt's verification require creating files with random names in http://example.com/.well-known/acme-challenge where example.com is the certificate's common name?

Are you asking whether this is an issue for the http-01 challenge?

If so, the answer is no, because if you wanted to use this to obtain a cert for some domain you don't own, the DNS response for that domain would have to already point to the shared hosting server you're configuring. (Which would imply there's already another customer using that domain.)

If you can serve content from another customer's domain who is on the same shared host as you, that's a serious security vulnerability with the hosting platform without respect to whether or not Let's Encrypt exists.


> Is this a problem for Let's Encrypt? Doesn't Let's Encrypt's verification require creating files with random names in http://example.com/.well-known/acme-challenge where example.com is the certificate's common name?

That applies to the http-01 challenge. The tls-sni-01 challenge works solely based on the returned certificate. If the SAN value in the certificate matches the SNI value sent by the validation server, the challenge succeeds.
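To make that concrete, here's a minimal sketch of the check the validation server effectively performs (not Let's Encrypt's actual code; assumes Python with the third-party 'cryptography' package):

    import socket
    import ssl
    from cryptography import x509

    def tls_sni_01_check(ip, sni_name, port=443):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False      # the challenge cert is self-signed
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((ip, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=sni_name) as tls:
                der = tls.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        san = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName).value
        # The challenge passes iff the fabricated name is in the SANs.
        return sni_name in san.get_values_for_type(x509.DNSName)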

Would you mind sharing which control panel you tested this with?


DirectAdmin, the most popular webhosting control panel in my country.

In my opinion this is not a bug because when I need to test a website, I often create an invalid hostname on the server and add the server's IP address to my computer's /etc/hosts. When I need HTTPS, I upload a certificate for the test hostname signed by my private CA.


Thanks. I signed up for the first shared web hosting provider I could find that uses DirectAdmin and was able to reproduce this. I'll bring this up in the relevant thread on mozilla.dev.security.policy, this is definitely concerning.


Ultimately, if you're attempting to prove hostname control, it would seem fundamentally incorrect to make a request with a Host header or SNI hostname set to anything other than the hostname requested for issuance.

I don't really understand why this didn't come up while considering the protocol initially.

Why aren't we using ALPN for this?

(ALPN is used to negotiate HTTP/2 without prior knowledge of the server's capability - https://en.wikipedia.org/wiki/Application-Layer_Protocol_Neg...)


Yes... as long as SNI is used and valid on the internet, it is incorrect to assume IP = domain control, which is the assumption that the TLS-SNI-01/02 challenges are based on.

These challenges are fatally flawed in their current form, and should be discontinued.

(Your solution in the first sentence changes it to domain control = domain control. Obviously correct.)


> it would seem fundamentally incorrect to make a request with a Host header or SNI hostname set to anything other than the hostname requested for issuance.

Pondering why they'd do it the way it is right now, my best guess would be that it allows you to get a LetsEncrypt certificate while still having a valid certificate from another CA. For example, example.com currently has a Symantec certificate that is soon going to be invalidated, and I want to swap over to an LE cert with 0 downtime.


You could still do that! Just accept LE’s requests with the existing Symantec cert and respond as needed.

Or use the DNS challenge.

Or use something like ALPN instead.


How would LE do SNI validation using the Symantec cert though? I can't add anything to it without invalidating it. My understanding is that that's why they do the .acme.invalid SNI responses; so that the server can respond with a self-signed cert that's not going to interfere with anything else.

DNS and ALPN are both great solutions, but here's why neither of them works all that well in my specific circumstances, while TLS-SNI works awesome.

- I can provision VMs at will, but I don't personally have access to DNS. To get a DNS record added, I need to submit a change request. I haven't done the DNS challenge, and I'm not sure if a new TXT record gets generated when you renew a cert or if it's static after initial creation, but either way, DNS very much adds a painful manual step to cert provisioning here.

- ALPN would be great, if it were baked into the default Centos Apache :). That, I suspect, would be a long time coming...

Right now I generally just use HTTP validation, because I've worked out a pattern that generally works pretty good. I set up an Apache rule that auto-redirects all traffic from http to https anyway; I just add a condition to it so that /.well-known/ gets served off the filesystem before redirecting everything else.


You need to re-do a proof of control periodically, in normal use it will be for each renewal. So yes, you'd need to do the juggling to sort out a new TXT record every time.
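(For reference, a sketch of how the dns-01 TXT value is derived, per the ACME spec as I understand it; the token is freshly issued by the CA for each challenge, which is why the record changes at each renewal:)

    import base64
    import hashlib

    def dns_01_txt_value(token, account_key_thumbprint):
        # key authorization = "<fresh CA token>.<account key thumbprint>"
        key_authorization = "{}.{}".format(token, account_key_thumbprint)
        digest = hashlib.sha256(key_authorization.encode()).digest()
        return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

    # Published as: _acme-challenge.<domain>  IN TXT  "<value>"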

For what it's worth I believe Let's Encrypt would happily follow your redirect to the HTTPS site, so long as within a reasonable number of steps you give it the answer it was looking for, the redirect isn't a problem. It also doesn't check certificate validity (after all, it's doing validation, that would be a chicken-and-egg problem) for the redirected HTTPS connection. But just serving the answer directly works fine and is probably more future proof.


I don't see why you couldn't do the HTTP validation, but over the existing HTTPS cert you have, without having to accept non-TLS connections.

To be honest, I just don't think the SNI challenge has any place here. Although I totally recognise it is an easier option where you're more restricted, the fundamental security concerns are more important.


Your sibling comment suggests that the LetsEncrypt verifier would potentially follow a redirect to HTTPS, which I'm going to have to experiment with today because it'd keep things simpler here.

Best I know, currently HTTP validation is required to happen on port 80. And that works awesome for bootstrapping; on a new install, you don't need to provision a fake self-signed cert to be able to request a real one. If it can happen over HTTP or HTTPS, that would be amazing.


It seems... very strange that it doesn't allow the use of port 443/HTTPS with a self-signed cert. It's exactly the same thing.


[Copying my comment from https://news.ycombinator.com/item?id=16112237]

Unfortunately, this means that Traefik's default Let's Encrypt integration (without setting a DNS provider) no longer works. Although the logs now say "could not find solver for: http-01", it actually uses tls-sni-01.


The entire Golang autocert functionality relies on this - which presumably Traefik is using


Through a twist of fate, I was looking into this over the weekend, and you're correct about Traefik.


My _guess_ at this point is that there's a bunch of scary corner cases with the behaviour that makes this possible so that if your bulk host or CDN does this you need to be _very_ careful even if Let's Encrypt abolishes tls-sni family proofs of control. If I'm right, fixing this is pretty urgent for anyone with untrusted users sharing an IP address.

It looks very much as if affected hosts/CDNs are letting any customer impersonate any other customer with the only caveats being that you've got to get in first and you need the victim customer to make the config change that causes this to be useful.

Suppose I am victim.example, I have a contract with Funky Jim's Hosting, and I have configured *.clowns.victim.example to go to a server at Funky Jim's, but I've only ever bothered setting up www.clowns.victim.example, images.clowns.victim.example and js.clowns.victim.example in Funky Jim's config system. I use HTTPS, so the valuable Clown Login tokens available on www.clowns.victim.example are protected - aren't they?

Some loser, let's call her Sad Carol, buys a basic account from Funky Jim and tells him she wants to add a new site, busted.clowns.victim.example. Funky Jim OKs it, because that's "harmless", right? No need for Carol to prove to Jim that she owns it; if she doesn't, why would it matter? But wait: busted.clowns.victim.example actually matches my wildcard DNS entry, so it goes to the right server and serves up Carol's site - now Sad Carol can get all my bloody Clown Login cookies sent to "her" web site just by injecting "her" site into an ad inventory or whatever.


An excellent example of a plausible attack scenario.

Having said that, I think what we're all dying to know is what was it, initially, that made Carol so sad?

And, indeed, was it that same sadness which stagnated, fermented, and evolved into the obvious hatred and antisocial behaviors that she exhibits here?



Does this mean you can collide with legitimate certificates on these hosting providers? Like, if someone is hosting www.foo.com, can I upload a www.foo.com certificate, and it will choose which cert to use based on some random factor? Or maybe it only works on names that haven't been uploaded, so I could block someone from uploading certificates for a future domain they might want to use.


Maybe, but that is a different issue.

If as a hosting provider I have accounts for foo.com and bar.com, I certainly need to make sure to never reply to requests for foo.com with data from the account bar.com.

But I might still allow the account bar.com to upload a certificate that includes bar.com, www.bar.com, bar.io, getbar.com, or any other domain name, even if those are not registered on my platform.

So what if bar.io is actually not owned by the same person as bar.com? The DNS for bar.io does not point to my platform, no real user will connect to me.

If an attacker is also able to change the DNS for bar.io (say, by an MITM attack on a public wifi), it is of no real consequence that he uses my platform. He could just as well respond with an IP entirely of his own and do everything himself. That is why we have SSL in the first place.


So if someone has www.foo.com registered with a provider (but has wildcard DNS pointing *.foo.com to the provider), would it be possible for someone who has another domain, bar.com, to upload a cert that would be used for x.foo.com? Because even though no content would be served from bar.com, that is kind of a weird situation.

Like, I think the way ACME tls-sni works is broken, because it should be using the DNS hierarchy to make the SNI request [maybe something like: 773c7d.13445a.acme.foo.com]. But maybe some providers are still broken even with this fix, because they let people upload certs with names that belong to other clients.


I'd say no, though it depends on the provider.

Afaict the vulnerability is simply "users can upload arbitrary certs and have the server use them based on SNI", which does have some legitimate use cases (private CA).


> At this time, we believe that the issue can be addressed by having certain services providers implement stronger controls for domains hosted on their infrastructure.

Is DigitalOcean one of these service providers that need to implement stronger controls?


Does this basically mean that Let's Encrypt will no longer work in a virtual hosting setup?

What about existing certs? Can they at least be renewed when they expire?


Let's Encrypt works just fine in a virtual hosting setup using the http-01 or dns-01 challenges. In fact, all (shared/virtual hosting) users in my company automatically get certificates this way.


Is there a way to tell whether I am using http-01, dns-01 or TLS-SNI-01? I'm using the letsencrypt nginx module.


Seems to use tls-sni-01. This is my guess from looking at the certbot-nginx source directory: https://github.com/certbot/certbot/tree/b1826d657ffc7c278041...


That's correct.

Certbot can use different plugins for validating the name and for installing the certificate.

You can make HTTP-01 validation work by using "certbot -a webroot -i nginx -w /path/to/whatever -d example.com -d www.example.com".

https://community.letsencrypt.org/t/solution-client-with-the...


This is not specifically a vulnerability in TLS-SNI-01 - more like a major misconfiguration by the shared hosting provider, effectively sharing ACME secrets.

I guess TLS-SNI-01 does share part of the blame for making this possible in the first place.

It would be great if TLS-SNI-01 could be salvaged, for all the binaries out there - but I don't think a blacklist is a great long-term solution. How about a strict one IP = one domain lockdown until you can prove that you don't share challenges? i.e., negative responses.

EDIT: See reply


I disagree. It's a fatal flaw in TLS-SNI-01/02. Here's why:

All of the information required to solve the challenge is in the attacker's control. SAN-A and SAN-B (in v02) are known by the attacker. And no action or information from the target domain is required to solve the challenge.

Every shared hosting provider and CDN that lets you pick a domain and upload a custom certificate can be used to solve the challenge.

Your solution of restricting it to one IP = one domain just reduces the vulnerable sites to those that don't use LE and are on a shared provider. (Also note: if the provider does not implement LE, then all of their customers are vulnerable to this attack; with your solution, it's one customer for each IP owned by the service.)

As long as SNI is part of the TLS standard, it is incorrect to assume IP = domain ownership.


You're right. Thanks for clarifying.


How does TLS-SNI-02 fix this issue?

...

Edit: I've removed most of my post here... This was already acknowledged in the article. 02 is vulnerable also.


I think the only fix is to create a TLS-SNI-03 in which the only dnsName component in the self-signed certificate is a well-known child of the domain label to be validated.

When validating www.abc.com, the SNI and dnsName would be well-known-acme-pki.www.abc.com, and the certificate should have some other parameter stuffed with a challenge response, defined as being calculated over a random token provided to the requestor by the CA and the requestor's account key.
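A rough sketch of that proposal (entirely hypothetical; no such challenge type exists):

    import hashlib

    def proposed_sni_name(domain):
        # Validating www.abc.com -> well-known-acme-pki.www.abc.com, so
        # the challenge name sits under the domain being validated.
        return "well-known-acme-pki." + domain

    def proposed_challenge_response(token, account_key_thumbprint):
        # Bound to the CA's random token and the requestor's account
        # key, so another customer on the same IP cannot precompute it.
        return hashlib.sha256(
            "{}.{}".format(token, account_key_thumbprint).encode()
        ).hexdigest()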



