Let's Encrypt might be one of the most important initiatives for a secure web. I applaud all their great work.
The fact that they have chosen to reduce certificate lifetime in order to encourage automation is a really big win for the security of the web as a whole.
The only hiccup I've run into is that if you run too many tests during automation setup, they start denying further requests from you for weeks or longer under "too many certificates issued for that domain".
If you are trying out the client for the first time, you may want to use the --test-cert flag, and a domain name that does not receive live traffic. This will get certificates from our staging server. They won’t be valid in browsers, but otherwise the process will be the same, so you can test a variety of configuration options without hitting the rate limit.
It sounds like you are thinking of this as a tool for testing deployment configuration, but I could also see using it for internal test environments. The QC person adds the test CA to their browser; if they see an "untrusted connection" warning, something is wrong. Would that model be supported?
For internal test environments, you'd probably want to run your own ACME server[0] and use certs from that if you can. Then you only need to trust your internal CA, which you can manage, rather than the test one that Let's Encrypt offers.
Trying Let's Encrypt out for the first time, I stumbled over this. --test-cert worked flawlessly after I figured out the kinks; as soon as I removed the flag, it stopped working :/
For testing, just use a local install of the boulder server, https://github.com/letsencrypt/boulder . It is very straightforward to run, especially when using their Docker scripts. That is how I check my setup against Let's Encrypt during development/testing on my laptop.
Maybe they should have single day "test certs" for which you can issue an unlimited number on a given day for a given domain. Not even completely sure this would work but it might be convenient to have even with the limit increase.
Given that the resource being limited here is signing operations performed by their HSM, this wouldn't really help much. (OCSP responses wouldn't have to be signed for quite as long, but still.)
There's a staging server issuing untrusted certificates with significantly higher rate limits for this purpose. With the reference client, it's a matter of using the --staging flag.
The only problem with reducing the lifetime at the moment is that all of the solutions to use this for windows are ... lacking.
Without using the various hacks people have created, you can generate the certificates on linux and then move them over to windows, but doing this all the time is extremely inconvenient.
I hope that this gets addressed at some point, because encryption is important for everyone, not just linux admins.
"The Let's Encrypt Client is BETA SOFTWARE. It contains plenty of bugs and rough edges, and should be tested thoroughly in staging environments before use on production systems."
And NGINX support is still labeled "highly experimental".
Not complaining though; these things take time. Thank you for bringing free encryption to the masses, Let's Encrypt!
Any plans to make the official client based on Go? I wasn't too happy about having to download a bunch of Python stuff on my server just to get an SSL cert. Reminded me of the days of yore when you had to fiddle with Perl modules just to run basic scripts.
Unfortunately, lots of Go code on GitHub has significant oversights, this one included. I remember reporting a DoS bug in a different Go ACME library, identical to one I found in acmetool in less than 60 seconds:
In case it is not obvious: anyone at a privileged position on the network can fill resb with enough data that the program panics due to OOM and crashes. ioutil.ReadAll really needs a big warning in the docs, because I have seen this pattern far too often.
I'm not sure if, conceptually, the term "official client" is still appropriate after the project is moved to the EFF and the rename is done. It's basically a move to ensure a vibrant client ecosystem which encourages users to pick the client that best fits their needs.
If you're looking for a Go client, lego[1] is awesome.
Why Go? I can't think of any reason to prefer Go over any other language for this project. I'd prefer a security-oriented program to be written in a safer language, actually.
Could you explain what you mean by "safer"? If you mean memory safe or free from undefined behavior, Go is exactly that. If you mean a language that has excellent native crypto libraries rather than wrappers over OpenSSL, Go provides that too. To answer your specific query, Go makes more sense for a LE client compared to Python because you'd simply need to run a binary instead of fiddling around with the source on your server.
Go is not memory safe. It admits null pointers and a whole host of incompleteness bugs.
Rust and Haskell are both examples of safe languages. These languages admit very few bug classes. Both also compile to binaries; I'm not sure why you're touting that as a feature of Go.
That's not even a useful feature in this case. Running a python program is just as easy as running a binary from the user's perspective.
This is assuming you have the correct version of python installed, right? What if you were on CentOS and the python version is 2.6? Or on Alpine and you simply didn't have python at all?
What if you download a binary and a dynamic library is missing? (This is what happens with GHC on Alpine. Binaries will expect glibc. Packages fix this problem, but they also fix the Python problems.)
Another example: I recently wanted to run IDA on Arch Linux, but there are no 32-bit Qt5 packages. Compiling Qt5 is more painful than installing Python.
I've been using it on nginx for about two months. It was pretty easy to set up and works on most browsers.
The browsers that don't work are Chrome and IE on Windows XP, and Android 2.3.7 or older, I guess.
Steve Jobs was pretty publicly opposed to donating to charity, but he's been gone for a while. If you Google "Apple Donations" you'll find a whole slew of articles detailing some of the contributions they have made since Tim Cook took the reins. For example, they match employee donations both in money and time.
I don't know how much it helps, but if you shop on Amazon: the EFF is a sponsor of LE, and the EFF is an option for Amazon Smile. The tricky part is remembering to use smile.amazon.com instead of www, but Amazon has started reminding users to go there, and there are also browser plugins that force it.
The node/express middleware works flawlessly for me. I am really stoked that Let's Encrypt is happening.
Multi-microservice apps in Node just got a lot easier and cheaper to deploy for my side projects and small consulting clients. And of course it's safer when you never have to think twice about doing TLS.
That said, I'd still opt for a wildcard certificate for anything enterprisey.
Thank you to all the people who worked for years to make it happen.
> That said, I'd still opt for a wildcard certificate for anything enterprisey.
Many sites use wildcard certs because they're the simplest way of getting SSL for your entire domain. There is one huge problem with that, though: Sharing the private key between different services means that the security of your private key is only as good as the weakest link. If any of your services suffer a compromise that leads to a private key leak (e.g. Heartbleed), all your other services are now vulnerable to MitM as well. Not just that, but weaknesses in your TLS configuration on one service could have implications on other services as well, as demonstrated by the DROWN attack.
That is not to say that there are no valid use-cases for wildcard certificates. If you're, for example, hosting a SaaS where each customer gets a custom subdomain, but the infrastructure behind each subdomain is essentially the same, there's no additional risk and wildcards are the perfect solution. If you're just getting a wildcard certificate because it's the most convenient solution, however, you're probably weakening some of your infrastructure in exchange for that.
You seem to be assuming that there needs to be a distinctive backend codebase for each domain covering the wildcard certificate.
This is something entirely orthogonal to the number of certificates. You could have 1000 codebases with different attack vectors under one SSL certificate, or 1 codebase with the same attack vectors serving up 1000 SSL certificates.
What I was trying to say was that it's a good idea to have one key per DISTINCT($attack_surface), which includes anything from physical security to the software stack, in order to limit the damage during a compromise. Throwing wildcards at everything tends to go against that principle, even though I admit there are use-cases where wildcards are entirely appropriate (namely those where each subdomain has the exact same attack surface anyway), and using something else isn't worth the trouble.
> That said, I'd still opt for a wildcard certificate for anything enterprisey.
Let's Encrypt can be used to generate a certificate for every subdomain, though it's a little more work compared to just using a wildcard certificate.
If you are running Windows Server as your Webserver (many Enterprise places do), and the version of IIS is <7.5 (i.e. Server 2008, still under extended support until 2020), you can't assign more than one Cert to the same IP which sucks. SNI is only available from Windows Server 2012 onwards, and that doesn't work with Windows XP as the client (Still approx 10% of users globally).
Agreed, and before Let's Encrypt that was insanely expensive for absolutely no reason whatsoever. I sincerely hope that paying through the nose for certs becomes something only enterprise orgs do, just for the backend support service.
I really really wish there was a client with decent support for using DNS-based verification, mainly so that I can generate certificates for mail servers (since they don't run a local httpd on the same machine).
Even if it was just spitting out a string that I needed to add/edit in my zone files (self-hosted DNS) every ~90 days, I'd be fine with that.
I haven't had much luck finding clients w/ support for this. I did find one that supported using Route 53, etc., via the APIs, but that didn't help me any.
If anyone is working on something like this, I'd be happy to throw a few dollars their way.
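For reference, the string published for DNS-based validation is mechanical to derive: per ACME's dns-01 challenge, you put a TXT record at _acme-challenge.<domain> whose value is the unpadded base64url SHA-256 digest of the challenge's key authorization. A sketch in Go (the key authorization below is a placeholder; a real client gets the token and account-key thumbprint from the ACME exchange):

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// dns01Record returns the DNS name and TXT value to publish for a dns-01
// challenge: the value is the unpadded base64url SHA-256 digest of the
// key authorization ("<token>.<account-key-thumbprint>").
func dns01Record(domain, keyAuthz string) (name, value string) {
	sum := sha256.Sum256([]byte(keyAuthz))
	return "_acme-challenge." + domain, base64.RawURLEncoding.EncodeToString(sum[:])
}

func main() {
	// Placeholder key authorization; print a ready-to-paste zone-file line.
	name, value := dns01Record("mail.example.com", "token123.account-key-thumbprint")
	fmt.Printf("%s. 300 IN TXT %q\n", name, value)
}
```

So a client that "just spits out a string to add to your zone file" every ~90 days is entirely feasible.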
You could use the official client with the --standalone flag — that's actually what nginx users have to do right now, as there's no automagic vhost configuration for nginx with the official client yet. The client spins up an HTTP listener on port 80 or 443 for the domains you've specified and listens for the challenge. Then your certs are live in /etc/letsencrypt.
What's the plan if LE ever goes down / commercial / rogue? Millions of sites will have to get new certs manually within 90 days or less or they will all break. Wouldn't it be healthier if LE had some "competition" with compatible protocols as backup?
Let's Encrypt is actually an implementation of the ACME (Automated Certificate Management Environment) specification, developed by the IETF[1]. The goal is to get other CAs to adopt the standard as well.
In the meantime, other CAs are starting to offer free DV certs (Symantec being the most recent example), although there's no word yet on ACME compatibility.
That's why they renamed the client: https://letsencrypt.org/2016/03/09/le-client-new-home.html So, they want competition. Creating the competition is up to others ;-) (and should be relatively easy compared to what LE has already done)
Great News! I've been using Let's Encrypt through Laravel Forge and it's far and away the best experience I've had with SSL. Let's Encrypt is great for all web developers and end users.
Anyone know what, if anything, this means for the rate limits on issuing certs on a given domain? That was the one thing that was stopping us from making SSL completely automatic on our customers' domains.
It's also worth pointing out that you can have up to 100 SANs in a certificate, so you can technically issue for up to 20x100 (2000) subdomains per week.
Forgive my ignorance, but can anyone please explain this a bit more? I've only configured SSL on small, single-server sites, where I buy an SSL cert once a year. I believe LE certs expire after 6 weeks? Why would you ever even need to issue 5 certs for the same domain in a week?
And/or to expand on Spivak's answer: static.example.com, api.example.com, db1.example.com, db2.example.com, www.example.com, www2.example.com, loadbalancer.example.com, etc.
[ed to add:]
Note that while you could just use example.com, or add alternate names, that would mean a compromise of www1.example.com also compromises the key used for db1.example.com etc. - which would largely defeat the purpose of splitting up different services across different VMs/zones/machines for security compartmentalization, because you'd only need one copy of the key to MITM all services.
(For some setups, where everything is a set of web apps/services and TLS is terminated at the load-balancer/reverse proxy, this is a moot point -- but that's not always a good idea. See e.g. how Google suddenly rushed to encrypt their intranet after it turned out the NSA had been happily snooping on everything from the "inside" via datacenter links, which I assume were across rented "dark"/dedicated fiber.)
They don't support wildcard, but they do support alternate names, so you can have multiple domains on the same cert. I know it's not the same, but if you have a known list of subdomains, you could still use it.
I have a list of 75,800 subdomains on one of my sites, several hundred sometimes added in the course of a day. On my IPFS HSHCA subdomain CORS proxy, I have ∞. It's not mechanically sensical to just sit there banging on a centralized server all day to get an updated certificate.
Wildcard certs start at $100/yr (for -the- cheapest) and go quickly up from there.
Many people are not using SSL for many use cases because of this, and that's bad. Why leave them out in the cold for doing things like enforcing CORS security?
It's mind boggling to me that they could leave them out of the final product.
Out of curiosity, have you thought about registering neocities.org as a Public Suffix? Aside from bypassing the rate limits, neocities.org seems like it would belong on the list, given that you're (more or less) delegating control over subdomains to other parties. The PSL has implications for cookie scope as well (which might or might not be a concern for you).
I would very much like to see wildcard support in ACME and hope that Let's Encrypt will adopt it eventually (although I think clients should offer it as an opt-in, as not to encourage practices which are bad for security), and I think that both of your examples are a good fit for wildcard certificates. I'd probably still stick with a regular wildcard as well - $100/yr vs. managing ~1k+ SAN certificates sounds like a bad trade-off - but I thought I'd mention it anyway.
Wasting money is always a problem. I've got a $500+ bill coming up for a multi-year wildcard renewal that I'd really love to spend on pretty much anything else. And I only need to pay it because I can't use *.site.org as a "subdomain" on Let's Encrypt. Ponder the absurdity of that.
FWIW, I donated to Let's Encrypt in its early stages (and considered sponsorship), so it's not like I'm trying to just freeload here. I'm feeding a lot of money into a broken system and I hate doing it.
One last thought: Google is about to start docking sites for not using SSL, which means that a lot of sites are basically going to be forced to buy these expensive certificates in order to play. This is a really artificial and sad barrier to entry for small startups. Security and privacy shouldn't just be the purview of the economically privileged.
FWIW I just donated[1] USD 100 to neocities.org - not that I took any of your comments here as soliciting donations. It's of course not meant to go towards just SSL, but I'm sure a handful of people here with an actual Software Engineer income can chip in the remaining 400.
With the wonderful work you have been doing on this project, I think it's terribly wasteful that you have to spend time worrying about trivial things like paying for a wildcard cert, when you could be playing with other, more useful stuff. (Even if I don't host anything on it myself -- I'm much too much of a masochist to outsource my web hosting when I can rent a dedicated server for much more, and do much more work, to end up with the same service ;-) And, yes, I too hope that Let's Encrypt will support wildcard certs soon.
Last I saw, they were working on it; there were just a lot of questions about how to validate that control was properly at the parent domain level and not just at the subdomain. They wanted to get whatever was needed fleshed out and made part of the ACME protocol before doing anything, so that it wouldn't just be some hack they added for Let's Encrypt on top of ACME.
If you wish to allow the creation of subdomains by users, or if you dynamically handle unlimited subdomains for any purpose, then Let's Encrypt won't be able to help you just yet. Luckily, wildcard support is at least planned, so these use cases should eventually be supported.
I've been using Let's Encrypt for my personal blog and I definitely appreciate the free security that it does offer, even without wildcard certificates.
So, they've solved a major problem... but not your problem?
Honestly, wildcard certs would make my job a lot easier too, but with automatic renewal I'm more than happy to deal with multiple certs and stop giving the commercial CAs my money.
I would love to use LE, but I can't because my use case is all about an internal network. I don't want self-signed certs. I want to leverage our internal domain system, especially the hosts with internal IPs. While I understand the design of LE doesn't encourage any of the above, I do hope one day LE has a solution to end the pain of self-signed certs (without modifying the host CA file).
If your internal domain is also a public (ICANN) domain, you can use DNS-based validation to get a certificate. Basically, you create a TXT record with a challenge token and make sure Let's Encrypt can resolve that record. Your actual internal domains don't have to be publicly resolvable or routable for that to work.
This is still a WIP in the official client, but there are a number of other clients with dns-01 support[1].
Publicly trusted CAs are not allowed to issue certificates for internal domains (i.e. made-up names which don't end with an ICANN TLD.)
How does turret19.local talk to commandserver.local? I imagine there has to be someone in the middle who says yes, 192.168.1.24 is actually the command server and not camera3.local or public webserver.local.
How is it supposed to work? How can LE know that you are 192.168.1.24? Maybe we'd need some kind of certificates that are only valid within a certain zone? How would that work?
Besides the automation features, what's the difference between these guys and, say, StartSSL, who also provide free SSL certificates? If I only have one or two personal domains for toy projects, why would I switch?
That grade is actually mostly about your TLS configuration (available cipher suites, etc.), and not about the certificate. Aside from some major no-goes (like SHA-1 certificates), TLS security is about server configuration; certificates are relatively straightforward in that respect.
It's true that they embrace a lot of best practices, like short certificate lifetimes, automation, logging all certificates to CT log servers (which StartSSL started doing quite recently as well), etc. The client also helps with securing your TLS configuration, among other things.
I wonder if NameCheap will ever offer this to customers. I guess it would eat into their profits if they did, considering they sell SSL certs, but I never bought a cert from them, and unless I absolutely have to / am being paid to, I wouldn't bother doing so either way. If domain registrars pushed this it could raise more awareness, that and web hosts elsewhere. I wonder how DigitalOcean could aid in this endeavour as well (LEMP/LAMP images with LE integrated as an option?).
I was pleased to see Dreamhost (who I use) offering free Let's Encrypt for any domain bought/hosted with them. Quick & easy setup. Would love to see other registrars/hosts do the same.
I use WebFaction, and so far they're totally quiet about whether there is any plan to support Let's Encrypt. That just does not gel with their "Hosting for Developers" tagline. Contemplating moving my sites to a VPS at another host.
The web has a new server. If I trick the server into thinking that stuff I write gets hosted on example.com, it will generate a certificate asserting that I have a right to post stuff on example.com. It's easy for me to get that certificate revoked, but hard for the legitimate owners of example.com to do so.
Comparing this to self-signed certificates, I can think of a bunch of drawbacks, but I can't see any advantage. I hope I'm missing something.
Part of getting a certificate from Let's Encrypt involves getting a file from them that you then serve from a certain URL on your domain. Someone without enough control over your server to be able to do that won't be able to issue a certificate for it. This is called a "domain-validated" certificate.
It's not self-signed: Let's Encrypt is a certificate authority, which signs the certificates it issues.
Open Sans does not render very well on some platform combinations.
Edit: I should say your specific version of Open Sans. Your 14272-byte version of OpenSans-Regular.woff renders like this, whereas other sites' 21956-byte version renders fine.
If you can't rely on a platform/browser to properly render a font as popular as Open Sans, it's really not worth working around it... I run Firefox on Linux as well and don't have any issues. It's very likely on your end somewhere.
I'm using Open Sans (locally hosted as WOFF) on that site, so I'd say it's unlikely to be the font itself. Maybe your browser has issues with EOT/SVG fonts.
Could likely be something to do with fontconfig on your system.
You can expect at least a huge difference in how Open Sans bold text is rendered between platforms. On some (OS X) it's very, very bold; on some others far less so (font-weight bold/700).
Designing a harmonious and consistent UI under these conditions is difficult.
What does this mean to you? Note this is an honest question. Does it mean you're rebuilding your server from scratch and therefore requesting a new key each time for the same domain? Or does it mean something else?
I wonder if anyone has created a simple middle service that can locally cache the certs so you talk to the cache server?
When I was first testing out my automation I was using real keys instead of the staging server, so every time I tested I was eating up a cert.
In the end though, I have more than 7 certs for my main domain. With a bunch of services built up over the years, I have roughly 14 or 15 certs (so it took me 3 weeks to get all my certs created). Subdomains count against your domain limit, so "mail.example.com", "blog.example.com", and "www.example.com" count as 3 certs for "example.com".
Are you 100% sure subdomains count against your limit? I just asked that question on letsencrypt recently and was told there is no limit to subdomains.
The issue is when you try to set up the client. Oops, typo in something, let's try again. Oops, forgot to set permissions. Again. Oops, forgot a domain. Oops, forgot to sudo it this time. Oops, limit ran out.
Having Cisco onboard is scary, but it doesn't imply that backdoors have been added to LE.
Sponsors only give money to support LE development and infrastructure.
Hmm, sponsors don't "only give money"... they usually expect something in return, which is understandable.
To me this is a deal-breaker. Cisco did so many bad things in the past in terms of privacy that the only good news is that now I know to stay away from Let's Encrypt.
Could you elaborate on how Cisco being a sponsor affects your trust in Let's Encrypt? It's in the nature of the CA system that it's only as strong as its weakest link, and there are dozens if not hundreds of CAs of questionable trust.
This seems like a conceptual misunderstanding of how TLS works. Let's Encrypt does not have access to your private key and does not have the ability to decrypt your traffic. They put a stamp on your certificate saying "Yep, this key belongs to this domain" - that's it.
Been using ReliableSite.net for years for our hosting, brilliant company and will always recommend them. What a pleasant surprise to see them as a new sponsor!
Dynamic or static DNS? I've currently got a Pi running my site, but haven't bothered with setting up HTTPS. Probably also because it seems a bit more complicated if you use a non-standard web server (in my case thttpd). But on the other hand, I barely looked into it at all.
Dynamic. I use Amazon Route53 to manage the DNS, and have a script that checks my public IP every 15 minutes; if it's != the one it has seen before, it updates Amazon Route53 to the new IP.
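That check-and-update loop can be sketched roughly like this (the IP-echo URL and the Route 53 call are assumptions about what such a script would use; the actual update is left as a comment):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// currentIP asks an IP-echo service what our public address is. The response
// is capped at 64 bytes, since an IPv4/IPv6 literal is tiny.
func currentIP(client *http.Client, echoURL string) (string, error) {
	resp, err := client.Get(echoURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(io.LimitReader(resp.Body, 64))
	return strings.TrimSpace(string(b)), err
}

// needsUpdate reports whether the DNS record must be rewritten.
func needsUpdate(lastSeen, current string) bool {
	return current != "" && current != lastSeen
}

func main() {
	// Run every 15 minutes from cron or a ticker:
	//   ip, err := currentIP(http.DefaultClient, "https://checkip.amazonaws.com")
	//   if err == nil && needsUpdate(lastSeen, ip) {
	//       // UPSERT the A record via the Route 53 API, then persist ip as lastSeen.
	//   }
	fmt.Println(needsUpdate("203.0.113.7", "203.0.113.8")) // true
}
```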
Actually, setting up LE on FBSD was even easier than that. We've had LE working on two of our FBSD servers for 4 months: running the py27-letsencrypt client with certonly and -d domainname -d ... installed the certs in /usr/local/etc/letsencrypt/live/<domain>/...
Then pointing nginx to the "live" cert location and setting a cron job to renew every 4 weeks or so pretty much takes care of the whole process. I notice the FBSD LE client was just upgraded; not sure what new features it offers, but it certainly should work at least as well as before.
Sure, the documentation isn't perfect yet. The client is being split from the CA in the near-future (as well as renamed), so this aspect of the documentation will need to be reworked anyway.
That being said, I sincerely hope that anyone responsible for TLS deployment knows how to use a search engine and enter "letsencrypt windows".
Well I see people are voting me down for having the audacity to ask about compatibility with the second most popular web server in existence. Oh noes the ebil Microsoftz!
Seriously, though. Maybe I misunderstand. Wasn't the entire point of this project to make it as easy as possible to add and maintain a SSL cert for any website? If that's their mission statement, then they're really, really, really far away from that goal and I re-assert that they probably shouldn't be calling it "production-ready".
If I'm wrong, and that's not their goal, well, fair enough.
All I'm saying is I run a website based on IIS and I go to their website, there's absolutely no hint that they support Windows in any fashion whatsoever. Nor even a hint that I might be able to Google for alternative clients that do.
Now I think that's a pretty damned big flaw. Maybe I'm the only one.
EDIT: (As an aside, the documentation is part of the project, so if it's "not perfect yet", as you claim, that also supports that they're not ready to leave beta yet.)
Let's Encrypt is, primarily, a CA. CAs don't "support" web servers or operating systems. They provide certificates, which are trusted by (most) clients.
Traditional CAs leave the actual configuration of said certificates completely up to you. Let's Encrypt, in addition to having a protocol that's based on an IETF specification, offers a client which tries to completely automate this part, as well as renewal. Right now, this only works for apache (and, experimentally, nginx) with the official client. Note that just this alone is significantly more than what you'd get from a traditional CA. In addition to that, you also have a number of third-party clients, many of which do this for IIS as well.
The stability or "feature-completeness" of the CA Let's Encrypt is completely independent from the client ecosystem. Let's Encrypt provides certificates, and those certificates work on Windows - that's pretty much it. That's why the beta label was removed.
I'm a consumer. I have a web server on IIS. I come to their site. How do I know this?
---
The answer is: I don't. I hit the "getting started" page. I see instructions for Linux. I don't have Linux. I say "oh well," and leave.
In fact that scenario is pretty much exactly what happened a few weeks ago when someone told me about let's encrypt. Which explains the post I made to this thread today.
We've already established that this is a documentation issue worth fixing, but surely you're not suggesting a lack of IIS-specific documentation on the website is reason enough to keep using the beta-label? Have you ever had the pleasure of buying a certificate from $RANDOM_CA_RESELLER? I'd have no trouble sticking an "alpha" label on the majority of those!
It seems you're misunderstanding what I meant with $RANDOM_CA_RESELLER. There are tons of random hosters who resell SSL certificates (sometimes as a white-label product), and many of them have terrible documentation. RapidSSL, one of the biggest CAs, is certainly not what I had in mind there.
You're also missing my point that a documentation issue doesn't mean a service deserves the "beta" label.
Again: I thought the goal was to make it as easy as possible to add a SSL cert to any website.
If you make the process require knowledge of acronyms like "ACME" or "LE CA", well. You've not attained that goal. You're not even close. That's the message I'm trying to get across here.
If I misunderstand the goal, then by all means, correct me.
I disagree. When you go to the webpage linked above and hit "getting started", there's not even a hint that their system works in OSes other than Linux.
Possible there is support for Windows and IIS, but their own website strongly suggests otherwise.
EDIT: if they consider their client and their CA as two different products, well, right now the website conflates the two badly. And there's absolutely nothing there to suggest it supports Windows/IIS.
> there's not even a hint that their system works in OSes other than Linux.
> And there's absolutely nothing there to suggest it supports Windows/IIS.
Their "system" is an API for issuing certificates. There's nothing platform specific about that.
> right now the website conflates the two badly.
From https://letsencrypt.org/getting-started/: You’re welcome to use any compatible client, but we only provide instructions for using the client that we provide.
So what part of this is going out of beta? The protocol, the CA, or the client?
Now I'm just confused.
But I still say: if their mission is to make it as easy as possible to install a SSL cert on any website, as long as they don't have any (apparent) support for Windows/IIS, they're a long, long, long way away from fulfilling that mission and probably shouldn't be leaving beta.
Could not disagree more. They are a CA with a very short expiry window, and without the proper client software to automate renewal it's not practical to use at all.
Yup, agree completely, anyone voting you down is ignorant of the issues.
I was excited to use this and kept up with the news for over a year. Then, when it launched, there were no really practical Windows clients, and after fighting with it for a while we decided it was cheaper and easier to just renew our paid cert instead of wasting any more time with it.
I hope there is a viable automated Windows client for this before we have to renew again next year, but so far everything points to this being more of a non-Windows thing.