Nice. Good to see this has a shared storage option (Redis) for load balanced servers. The LE HTTP challenge gives no guarantee which A record it will use. Similarly, if your LB chooses another server even behind the same IP, it'd have to share the challenge state. I haven't read deeply, but I assume it handles the race conditions that can occur with auto renewal (i.e. different servers triggering auto renewal at the same time, overusing LE and hitting the quota faster). It was something I struggled with during the implementation of pluggable storage adapters for Caddy[0].
If you are going to this scale, there is another way: run your own DNS server and use the DNS-01 challenge. My setup works with [0]letsencrypt.sh, [1]pdns_api.sh, [2]PowerDNS 4 (MySQL backend w/ replication) and a little script that distributes the certs and restarts services. The nice thing is that this works for anything you can put in DNS; you do not need to respond to an HTTP request for a name (think mail server).
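Roughly, the renewal cron in a setup like that boils down to something like this (a sketch, not my exact script: the domain, paths, and host names are placeholders, and pdns_api.sh is the hook that inserts/removes the _acme-challenge TXT records via the PowerDNS HTTP API):

```shell
# Issue/renew via DNS-01; the hook script manages the TXT records.
./letsencrypt.sh --cron \
  --domain example.com \
  --challenge dns-01 \
  --hook ./pdns_api.sh

# Then a small script of your own can push the certs out and reload
# services, e.g. (host name and paths are hypothetical):
rsync -a certs/ mail1:/etc/ssl/acme/ && ssh mail1 systemctl reload postfix
```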
Yes, as long as you're using a shared storage adapter (like Redis), then for the most part, duplicate registration or renewal requests should be handled.
The approach to locking we're using is somewhat simplistic (and the specifics depend on the storage adapter you're using), so there might be some rare edge-cases in which 2 requests to Let's Encrypt slip through. However, that shouldn't actually affect the functionality (the last response simply wins). Here's a bit more detail in the code: https://github.com/GUI/lua-resty-auto-ssl/blob/v0.8.2/lib/re...
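The gist of the locking is, heavily simplified, something like the sketch below (this is not the module's actual code: `storage`, `register`, and the `auto_ssl_locks` shared dict are stand-ins). Note the lock is per-server, which is why a cross-server race can still let two requests through; since both end up writing the same kind of cert to shared storage, the last writer simply wins.

```lua
-- Sketch: serialize issuance per domain behind a shared-memory lock,
-- and re-check storage after acquiring it, so concurrent requests on
-- the same server don't both call Let's Encrypt.
local resty_lock = require "resty.lock"

local function issue_once(domain, storage, register)
  local cert = storage:get_cert(domain)
  if cert then return cert end

  local lock = resty_lock:new("auto_ssl_locks") -- a lua_shared_dict
  local elapsed, err = lock:lock("issue:" .. domain)
  if not elapsed then return nil, err end

  -- Another worker may have finished while we waited for the lock.
  cert = storage:get_cert(domain)
  if not cert then
    cert = register(domain)        -- talks to Let's Encrypt
    storage:set_cert(domain, cert) -- last writer wins across servers
  end

  lock:unlock()
  return cert
end
```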
Re: hitting the quota faster, they have a staging environment that they explicitly mention as a target for client development. Probably not a helpful tip to you now, but others may find it useful.
Wow, this is awesome. LetsEncrypt is great but I still find it somewhat clumsy to use in production. This is the first no-downtime solution I've seen.
Hey this is pretty cool. Would be even cooler if it held off on the SSL handshake with the initial client until after the certificate was issued by LetsEncrypt. If it's only a couple of seconds it might be possible.
Author here. If I'm understanding you correctly, I believe that's how things are already working. The very first time a client hits a new domain, the SSL handshake initiates the certificate registration with Let's Encrypt (assuming the domain is part of the whitelist of allowed domains). The certificate is registered in the background, while this first request is paused. The SSL handshake is then completed with the first client once the certificate is successfully issued. This does lead to the very first client's request being delayed a few seconds, but this is a one-time delay (per allowed domain).
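Concretely, the handshake hook looks roughly like this in the nginx config (this mirrors the README's documented usage; the fallback cert paths are placeholders):

```nginx
server {
  listen 443 ssl;

  # Runs during the handshake: looks up (or registers) the cert for the
  # SNI hostname, pausing this first client until issuance completes.
  ssl_certificate_by_lua_block {
    auto_ssl:ssl_certificate()
  }

  # Fallback self-signed certificate, served only if issuance fails;
  # nginx requires these directives to be present at startup.
  ssl_certificate /etc/ssl/resty-auto-ssl-fallback.crt;
  ssl_certificate_key /etc/ssl/resty-auto-ssl-fallback.key;
}
```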
Or it would be even better if it requested/generated all certificates when the configuration file is parsed, and delayed nginx from taking those values into account until valid certificates have been obtained.
Author here. Our current approach is perhaps a bit different, since we're not actually parsing the nginx config file, so we don't have knowledge of the domains at startup. Instead, we're relying on the "allow_domain" Lua function to be defined which provides a way to determine which domains should be allowed. By making this a Lua function, it allows for the logic behind this to be very flexible and dynamic (for example, nginx could handle wildcard requests to any domain, and then you could lookup what domains to allow SSL registration for from another dynamic source you might already have).
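As a sketch of what that looks like (following the README's init_by_lua usage; `lookup_customer_domain` is a hypothetical function standing in for whatever dynamic source you already have):

```lua
init_by_lua_block {
  auto_ssl = (require "resty.auto-ssl").new()

  -- Decide per-handshake whether a domain may have a cert registered.
  -- Because this is plain Lua, the check can hit redis, a database,
  -- an internal API, etc.
  auto_ssl:set("allow_domain", function(domain)
    return lookup_customer_domain(domain) ~= nil
  end)

  auto_ssl:init()
}
```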
But I do like the idea of allowing this to be handled at startup too. Thanks for the idea!
Not having this feature built into NGINX is the reason why I transitioned my production servers over from it to Caddy. Having SSL just "dealt with" is the biggest time saver ever.
I had been playing around with https://github.com/jwilder/nginx-proxy and docker-gen to handle automatic Let's Encrypt generation for my docker containers.
The shared storage redis backed option is great. I hacked together something that served .well-known from an NFS mount to overcome my issues when behind a proxy / LB.
My $work has a very terrible DNS infrastructure that needs lots of work before we can implement dns-01.
I wonder if there would be a way to integrate Let's Encrypt with router admin interfaces. It would be really cool if consumer routers used a valid https certificate on their web ui
This is awesome.. hopefully we can bake nginx inside docker. Without something like this, we could not spin up nginx because there was a circular dependency on the certificate files.
Yes, by default the module requires that you explicitly whitelist the domains you want to allow certificate registration for. So while you could allow any domain to be registered, that's not recommended for this precise reason. But the whitelist is defined as a Lua function, so you have quite a bit of flexibility in integrating the whitelist logic with other sources of information.
Author here. I hadn't really considered it, but it should be possible. Since we're handling this with nginx and Lua, it made it pretty trivial to handle the simple HTTP challenge (since we can easily intercept the /.well-known/acme-challenge request), so that's why we went with that approach, but other approaches should be possible.
I guess I figured DVSNI should be even easier since you're already evaluating the domains of incoming TLS connections and picking which cert to respond with from there.
I haven't actually looked into DVSNI in too much detail before, so that could definitely be the case. I'll have to investigate a bit more (or any pull requests are always welcome). Thank you for the tip!
ah this would nicely handle the let's encrypt wildcard issue. too bad I just forked $$$ for a wildcard cert. but hey will turn off renew and have a go with this :)
Bear in mind that you could potentially hit a rate limit depending on the number of subdomains you have (or a different rate limit depending on how the plugin requests particular certs). I'm sure a small number of users will become very aware of this issue very soon.
This is a reason that being at least somewhat aware of the overall list of names to be covered ahead of time could be helpful.
Apparently this requires OpenResty. OpenResty is an app server type platform built on Nginx. This does not appear to work on your standard run-of-the-mill Nginx system package.
OpenResty is a cool project, but it brings a lot of additional infrastructure along with it that you may not want.
It would be nice if this was clearly indicated in the title of this post.
Update: see comment below; the README has been clarified. You can run this with just the ngx_lua module, so you don't necessarily need all of OpenResty.
Compiling nginx from scratch to include ngx_lua is a bummer because of all the added questions that need to be answered (packaging, repositories, automation, etc).
It becomes 98% of the time spent enabling this awesome feature (at least on CentOS and Fedora, from what I can tell there aren't RPM packages for ngx_lua).
That has been one of my major driving forces behind creating Dockerfiles/docker-compose files for my personal/work projects, to stand up quick environments. Docker support on CentOS 5 doesn't exist, but luckily I only have a few servers left from that era.
Please, please stop using the name "SSL" and use "TLS".
As others replied, "X.509" would be even more correct, but "SSL" is completely incorrect, because that protocol is insecure and shouldn't be used. Using "SSL" suggests that the software supports that protocol, although it doesn't (nor should it).
0 - https://github.com/mholt/caddy/pull/913