Agree with this - it's one of the few missing features of Let's Encrypt. I think more SaaS websites will be inclined to use the service, especially ones with a user.saas.com subdomain structure.
From memory, their infrastructure is all meant to be HA, and unless you're doing stupid things like Caddy did[1], small windows of downtime shouldn't cause much issue for ongoing operations anyway - you will likely start attempting to renew certs weeks before they expire.
Distributing a CA makes it less secure, not more; ultimately the thing that "is" a CA is the private key, which ought to be stored in an HSM and never let out.
I meant a clone/copy of the (hopefully open) Let's Encrypt server stack running as a service in several well-known cloud datacenters - of course under a different name.
Why? So there's not a single point of issue. Now that it has 25% market share, it's a high-profile target, and given the short cert lifetimes, aggressive renewal is necessary, meaning you could receive an underhanded cert if someone gets access to that high-profile target. If Let's Encrypt is not a startup and doesn't plan to make money, then there should be no problem releasing the whole service as a DEB/package so that every big hoster can run a clone under a different name. Or is Let's Encrypt in the business of gathering analytics and selling the usage data? I think not.
Just to clarify, do you want to give every one of those big hosters the keys to the internet, as in either a new root CA or one that's cross-signed by an existing CA?
If you're not giving them that, I'm not sure what they would do with just the CA server (which is indeed open source, by the way.) If you're giving them the keys ... well, do you really want to trust every single big hoster with the keys to the internet? They would still have to pass (very expensive) audits, apply for root inclusion, etc., so it wouldn't be as simple as running the Let's Encrypt server stack.
Amazon and Google run their own independent CAs already, with Amazon offering free issuance as part of some of their products (with non-extractable keys). I'd expect Google to offer something similar in the near future. I'm reasonably confident that these companies know how to run a CA, but I'm not sure I would trust many other hosters with something like that.
I'm quite sad that they're doing the wildcard thing. There are RFCs stating the problems they cause[0]. It would have been nice to steer people away from this awful security practice.
I wrote a very opinionated and ranty blog post which goes into more detail (but strongly implies people are lazy :( )
Do you have an opinion on Sandstorm's use of wildcard certificates and randomized hostnames for each application session? [0] They insist that this provides many desirable security features. (Note that the free HTTPS certificates provided by the Sandcats.io service are renewed every week. [1])
I'm the tech lead of Sandstorm, not dijit, but let me see if I can interpret his views relative to Sandstorm.
dijit's complaints seem to focus on the case where a wildcard is shared by many logically separate services, as a convenience vs. getting a separate cert for each. This is probably the most common use of wildcards in practice, and it is indeed bad.
None of dijit's complaints apply to the case where the entire wildcard is really served from a single logical service that needs to generate lots of short-lived hostnames for browser-side sandboxing purposes, which is what Sandstorm is doing. Sandstorm is possibly the only infrastructure in existence which is trying to do this at a scale that legitimately cannot be solved without wildcards.
I think dijit is trying to say that each logical service should have its own certificate that does not overlap with any other service. For this purpose, a Sandstorm server is logically only one service, and as long as no other services serve from the same domain, the properties dijit is worried about should be no different from a service with a non-wildcard cert.
Regardless of the format, the statement still stands, even if it's overly combative. There is functionally no requirement for wildcard SSL that can't be met another way; the other ways are, in almost all cases, better for user security. Not designing for it (just like not designing for cloud failure) isn't someone being belligerent about nothing; it really only emerges out of laziness or unwillingness.
(or you have a legacy application which is architected a certain way, which is similar to my scenario of having a "non-cloud-ready" application... it's not a good reason to architect things this way in the future)
I question the accuracy of an article that claims insurance should make a difference in SSL certificate selection. To my knowledge, the policies on SSL certificate insurance are deliberately and carefully crafted to exclude basically every likely compromise scenario so that no SSL vendor has ever had to pay one out.
I think that's a reasonable assertion, the insurance is not meant for the company it's meant for the user. The /idea/ is that if you are buying something on ebay and the SSL cert says "Ebay.com" then as a consumer your payment is protected from fraud if that entity is /not/ ebay.com.
In reality this almost[0] never happens, but it's probably rare for a CA to give out certs without some strict checking because of these insurances, not in spite of them.
But yes, I agree that it's not a good argument since it's basically a marketing gimmick and consumers aren't aware of such "protections".
I guess if you're the sort of person who buys $1 nonsense on eBay this could even sort-of work.
But if you bought something actually valuable, the insurance is useless _even if it paid out_ which it never has. This is because the huge headline dollar value is the _total_ sum insured and individual claims are strictly limited to a far smaller amount.
Let's try an analogy. Imagine if Coca-Cola took out $10M insurance against Coke causing brain damage. Then it turns out Coke has caused serious and irreversible brain damage to everyone in the world who drank it in the last 50 years. Oops. The insurer says OK, just individually post us proof your brain damage was caused by Coke and you'll get 5¢ each. It won't cover the cost of postage, but too bad, up to 200 million people can claim on this insurance, and at $10M total that's five cents each so that's the maximum claim.
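To make the arithmetic explicit (using the hypothetical numbers from the analogy above):

```python
total_insured = 10_000_000   # hypothetical $10M total policy limit
claimants = 200_000_000      # hypothetical number of eligible claimants
per_claim = total_insured / claimants
print(per_claim)             # 0.05 dollars: five cents per person
```

The headline figure is the total sum insured, so the more people a compromise affects, the less each claim can possibly be worth.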
That's how the SSL "Insurance" works. People who can prove they reasonably relied on the insured certificate AND can prove they were damaged financially as a result can claim up to a fixed tiny sum of money, which isn't worth doing.
The insurance firms selling it know this is worthless, it would be illegal to sell such pointless insurance products to consumers in most of Europe, but fortunately they sell it to the Certificate Authority which isn't a consumer, and the CA has no reason to care that it's useless, it sounds impressive.
I doubt the web will ever be fully encrypted. There will always be tons of people who don't care enough. I've come across dozens of high information static websites that were set up like 10+ years ago where the author will probably never be back to configure tls.
It's so strange that you have to ask someone's permission to use encryption[0]. Certificate authorities should've been (should be?) optional. SSH came out around the same time and its Trust-on-First-Use is a much better system.
The original impetus for SSL/TLS was to permit credit card transactions. For a long time only banking websites and retail checkout pages were encrypted. In that light it makes more sense why the industry focused so heavily on certificate authorities. People were told never to enter their credit card number unless the "secure" icon was displayed; if it did, then they could trust the site much like they would trust any brick & mortar store.
Back in the early days you couldn't get a certificate without faxing in a copy of your business registration. A certificate authority's digital imprimatur implied something substantial about the credibility of a site, however imperfect the process. But as the certificate authority market diversified there was a race to the bottom in terms of credible certificate authorities. We've been at that bottom for so long that it does seem ridiculous that we didn't have Let's Encrypt earlier.
> Back in the early days you couldn't get a certificate without faxing in a copy of your business registration.
Well well, guess who could easily spoof that one.
> A certificate authority's digital imprimatur implied something substantial about the credibility of a site, however imperfect the process.
Not just the process. SSL downplay attacks and lack of certificate pinning already existed back in those days.
As long as those attacks are unknown you may still be reasonably secure against a MITM from a banana state government or a random ISP.
Also, let's not forget that running your entire website on HTTPS was expensive in resources before the 2010s.
So the argument "because we can" makes sense. That doesn't mean all that information has to be encrypted. However, if you want to harm a surveillance state, one effective act is generating significant noise. Uninteresting, encrypted data is noise and potentially yields plausible deniability.
The other argument is "because we have to". Different attacks have been demonstrated on that one: hostile networks such as ISPs injecting ads, hijacking DNS, open WiFi, impersonating fraudulent websites are just a few examples.
> Well well, guess who could easily spoof [faxing in a copy of your business registration].
I worked for an ISP in 2000 and we had to both send and receive faxes on "company headed notepaper" as a means of authenticating a request. All you needed was a word doc with the company name in bold at the top of the page.
When we hit this particular bureaucratic speed bump (mostly dealing with domain name registrars), and having no "company headed notepaper" ourselves (we were 20 people) we just fabricated it. It always worked.
Anecdotal evidence, but for Japan that seems to be changing. Popular Japanese art site Pixiv rolled out TLS to the entire site in the past year or so, before that it was only used during login.
As far as Korea goes, they've got a whole different bag of worms going on, at least for a couple more years. Look up "South Korea Internet Explorer" for some astounding stuff if you've not heard of this before.
> I doubt the web will ever be fully encrypted. There will always be tons of people who don't care enough.
I don't think the Web will become fully encrypted by virtue of everyone caring enough to move to HTTPS.
The knockout punch for unencrypted sites will come when browsers make the decision to not load plain HTTP sites by default in order to protect their users. At that point, for the vast majority of people, the Web will be 100% HTTPS because they'll never see a site that isn't HTTPS.
We just have to get the encryption percentage high enough to allow browsers to finish the job.
What about the other part, the vast quantities of information that will never be encrypted because the sites remain but the people have moved on or are gone? Why does that deserve a knockout punch?
Not trying to defend the idea, but the domain will expire at whatever point the registrant doesn't fund renewal anyway. Is that potentially arbitrary sunset date "better" by virtue of being more distant? If the authors care to let the sites exist yet prefer to abandon stewardship now, they should transfer ownership now and to someone who cares enough to encrypt as needed.
Some of these sites may be on shared hosting where the hosting provider doesn't provide SSL for custom domains, or provides it for a fee which is unjustifiable to the website author[s], so it may not even be that they don't care enough, but that they don't want to shell out personal money for say a static, hobbyist website.
Well, there's always services like CloudFlare which would offer free certificates - thus even allowing for pages hosted on GitHub Pages to be served through HTTPS.
They don't deserve a knockout punch any more than FTP or Gopher deserved it, but unfortunately commercialisation of the web and the merging of static pages into dynamic apps (feature creep) and one gargantuan application (the browser) to interface with all of it and one gargantuan entity to discover it (Google), means that minnows are going to have to go with the overall flow.
I understood it to mean that FTP has mostly been replaced by HTTP, through no fault of FTP itself, but rather because that's what serves the interests of the people who implement things. That is, FTP still works, and works as well as ever, but its usage is dropping.
The browser providers have clearly made that decision for the vast majority of users: default safety trumps information access.
We've yet to see if users will either (A) follow the on-screen, flashing red prompts, or (B) be trained to click through the warning screens due to their sheer prevalence.
Even in case B, it's a non-negligible probability that browser vendors respond by disallowing click-through. Just another step towards (re-)making the web into a nice, safe walled garden.
Those sites aren't really under any threat from an interstitial like you currently get on an expired certificate (which I assume is what the parent means by "refuses to load by default"). It would be a knockout punch for any site trying to build an audience, but it doesn't actually block the site. I don't think anybody is suggesting that insecure archived sites be deleted from the internet.
Oh I can't wait for that day. Then I can sell an Internet enabled appliance with a certificate that lasts just as long as the warranty! Recurring revenue here I come!
What prevents you from already selling an appliance that stops working, certificate or not, after the warranty expires? Sounds more like a lawsuit than recurring revenue to me.
> What prevents you from already selling an appliance that stops working, certificate or not, after the warranty expires?
That sounds malicious -- whereas having it no longer work after the certificate expires sounds like security, and everyone likes security. With enough PR, you can string together something that sounds like a legitimate concern, like "with privacy being a major concern, encryption has progressed leaps and bounds past what we had two years ago -- so even if we were to renew the certificates, we simply do not think our customers' data would be safe when handled by these old devices."
The most chilling part is the casual way that data is discussed, because the consumer of the future will be okay with their refrigerator gathering data to drive advertising profit.
It seems that if something stops working for a "legitimate" technical reason (like, your camera stops working because of no server access, even though you only ever used the camera on your LAN anyway) the customer can't win the lawsuit.
> The knockout punch for unencrypted sites will come when browsers make the decision to not load plain HTTP sites by default in order to protect their users.
That would be a terrible idea: it would break every "http:" link in existence, requiring people to edit billions of documents. And if "modern" browsers started pretending that "http:" meant "https:", it would break every other browser and lots of bots.
Sadly, because of SNI, browsers send the domain name in cleartext, so I do not feel there is much to gain in going all encrypted. HTTPS/HTTP2 is broken by default in this regard and there is no way to fix it other than using a VPN. At least with DNS you can do something about it yourself.
It looks like TLS 1.3 still puts server_name into ClientHello and that isn't encrypted.
I can't see how you could encrypt SNI without pushing TLS out to at least two round trips, which would suck for the Web although it might be tolerable in other applications.
One option that does occur to me would be to shove say, a 32-bit SHAKE(FQDN) instead of an FQDN in as the identifier. This way we don't show eavesdroppers the actual FQDN we wanted, although they can try to guess and discover if their guess was right at their leisure. So it doesn't prevent a sophisticated attacker verifying that we connected to vpn.example.com not www.example.com on the same IP address + port, but it does make it essentially impossible for them to find out that our server has a site named xy4298-beejee-hopskotch-914z.example.com by inspecting the traffic.
The server knows all the hosts it serves for, and can work out 32-bit SHAKE(name) for them and pick the right one or reject it without revealing anything extra. The birthday paradox means this is likely to have conflicts above a few tens of thousands of vhosts per server, but that's already getting into deeply bulk hosting territory where you don't care too much about security anyway. Ordinary people aren't doing much more than hundreds of distinct sites per IP address so conflicts for them would be extremely rare.
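A minimal sketch of the server-side lookup for that scheme, in Python; `shake_128` stands in for SHAKE, and the hostnames are just illustrative:

```python
import hashlib

def sni_tag(fqdn: str) -> bytes:
    # 32-bit SHAKE-128 digest of the lowercased hostname
    return hashlib.shake_128(fqdn.lower().encode("ascii")).digest(4)

# The server precomputes the tag for every vhost it serves.
vhosts = ["www.example.com", "vpn.example.com",
          "xy4298-beejee-hopskotch-914z.example.com"]
tag_to_host = {sni_tag(h): h for h in vhosts}

def resolve(tag: bytes):
    # Pick the matching vhost, or None to reject the handshake.
    # An eavesdropper sees only the 4-byte tag: they can confirm a
    # guessed name, but can't cheaply enumerate names they don't know.
    return tag_to_host.get(tag)
```

With only a few hundred vhosts per IP, the chance of two tags colliding in the 32-bit space is negligible, which is the point made above.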
I can passively sniff all traffic and know where my users are going; that is a big deal. Building an IP -> DNS mapping is a lot of work AFAIK; sure, it can be done, but executing a get_hosts_by_ip() on every HTTPS packet is hard. In contrast, this is enough to get all hostnames from SNI:
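As a rough illustration of how little work that extraction is (in practice you'd use a capture tool such as tshark with an SNI field filter), here is a toy Python parser that builds a minimal ClientHello and pulls the server_name back out. It assumes a single well-formed record and is not production code:

```python
import struct

def build_client_hello(hostname: bytes) -> bytes:
    # SNI extension: a server_name_list with one host_name entry
    entry = b"\x00" + struct.pack(">H", len(hostname)) + hostname
    sni_list = struct.pack(">H", len(entry)) + entry
    sni_ext = struct.pack(">HH", 0x0000, len(sni_list)) + sni_list
    exts = struct.pack(">H", len(sni_ext)) + sni_ext
    body = (b"\x03\x03" + b"\x00" * 32   # legacy_version + random
            + b"\x00"                    # empty session_id
            + b"\x00\x02\x13\x01"        # one cipher suite
            + b"\x01\x00"                # null compression
            + exts)
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body
    return b"\x16\x03\x01" + struct.pack(">H", len(handshake)) + handshake

def extract_sni(record: bytes):
    if record[0] != 0x16:                # not a TLS handshake record
        return None
    hs = record[5:]                      # strip the 5-byte record header
    if hs[0] != 0x01:                    # not a ClientHello
        return None
    p = 4 + 2 + 32                       # handshake header, version, random
    p += 1 + hs[p]                       # skip session_id
    p += 2 + int.from_bytes(hs[p:p + 2], "big")   # skip cipher suites
    p += 1 + hs[p]                       # skip compression methods
    end = p + 2 + int.from_bytes(hs[p:p + 2], "big")
    p += 2
    while p < end:                       # walk the extensions
        etype = int.from_bytes(hs[p:p + 2], "big")
        elen = int.from_bytes(hs[p + 2:p + 4], "big")
        if etype == 0x0000:              # server_name extension
            nlen = int.from_bytes(hs[p + 7:p + 9], "big")
            return hs[p + 9:p + 9 + nlen].decode("ascii")
        p += 4 + elen
    return None
```

The hostname sits in plaintext a fixed walk into the handshake, which is all a passive sniffer needs.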
I do agree that there are more things to consider about privacy, but the call for all websites to be encrypted is so naive. If that is what you want, it would be enough to extend chunked HTTP with an adler32 at the end of each response/chunk, and it would be a lot easier to work with than TLS.
1. TOFU won't scale - if I have a single SSH host, I can easily verify the server key out of band, and then never change it (though that's quite an insecure proposition if you ask me). How would you do this for _every single website you visit?_ And if you don't verify out-of-band (like calling up the host), how do you know that you're not being massively MITMed? And then when you leave your house, go to an airport, and get a "your certificate changed" warning, all it means is that _now_ you're connecting to the real page, and not the MITM page.
And even if you verified the original cert and now get a "site certificate has changed". It could mean that the owner rotated his private key. What do you do? 99% will just say "false alarm, ignore, move on".
2. WoT - It's a false sense of security. You're trusting that random people on the internet will 1. bother doing _any_ verification before signing someone's key and 2. keep their private keys safe from botnets. And the network has a perverse incentive - the less verification a group does, the more cross-signing they'll do, and the more "trusted" it is.
Of course, airports will DNS-poison you to serve up the stupid Wi-Fi ToS. google.com never loads due to an HTTPS error, so at airports I usually use aoeu.com or some other non-encrypted site to agree to the ToS-by-poisoning.
Exactly, a stolen key and no verification and you're back to traffic in the clear. Just because you do some encryption doesn't mean you're somehow protected.
I think SSH and HTTPS have very different use cases, which is why their cert structure is different. Most people ssh to servers they either own or have a working relationship with. Https, on the other hand, is very often used to connect to sites that you have no relationship with, but you still want to know that you are connecting to who you think you are.
SSH works on the premise that you should know, or have a way to check, the key for the server you are connecting to. That really doesn't happen with HTTPS.
How are you sure it's only your grocery list? You trust the networks and states that the content is transferred through to not inject anything potentially malicious or otherwise undesirable into the content? The only way you can be relatively confident that's not happening is if the content is delivered via HTTPS, surely.
Right now it's just your grocery list. In the future when it's sucked up into the algorithms used by the underwriters of your health insurance and life insurance, and you can't quite figure out why you aren't getting the healthy-eating-disguised-with-some-ambiguous-name discount, maybe your grocery selection is to blame.
> I assume you are hinting at app operator will sell user’s data? Or do you mean parent’s network provider does?
Yes and yes. ISPs are racing as we speak to develop the technology to data mine their users and inject their own ads. Hopefully we will get the web encrypted fast enough to make it less economically attractive.
They want to know which sites you visit, what you search for, which movies you watch and what you shop for, and package it and sell it to anyone who will pay.
Individual developers could do the same, and we should find ways to prevent that, but they have less negotiating and lobbying power and users have more choice in what apps to use.
Honest question: is there any reason why a static website should have SSL? Unless you have encrypted DNS, whoever is stealing the data will know the server you're connecting to, and therefore will have all the exact data anyway. So HTTPS is quite literally a waste, no?
Sorry for the ignorance, it's a legitimate question.
Even if the site is supposed to be static, an active attacker can still inject dynamic code into it.
In one somewhat recent example, a non-HTTPS page was modified in transit by an attacker to inject Javascript code which did a denial-of-service attack against github. Had the page used HTTPS, the attacker would not be able to inject that Javascript.
Then perhaps browsers should not execute any code from a non-HTTPS source. That would nicely cover the old static page case. Any modifications to the html would be visible to the user.
If the malicious JS is served over https, it could still be inserted into a static page served over http without the user knowing. The static page needs to be served over https to avoid tampering from MITM attacks.
The browser would obviously have to be smart enough to mark code that came from any sort of http source, even those embedded in https pages or https loaded by http pages, as non-executable. That would probably be the easiest way to do it anyway. Once we hit http, the level of trust drops and stays dropped.
Having thought about this a bit more, that could be a browser option we could use today without any general convention. Turning on a "no script execution without HTTPS" option would break very little and would prevent more than just MITM attacks.
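Part of this already exists as an opt-in for site owners, though not as the global browser setting proposed here: a page served over HTTPS can send a Content-Security-Policy header telling the browser to refuse, or upgrade, any plain-http subresources. For example:

```
# Refuse to fetch any subresource over plain http on this page:
Content-Security-Policy: block-all-mixed-content

# Or rewrite http:// subresource URLs to https:// before fetching:
Content-Security-Policy: upgrade-insecure-requests
```

The missing piece is exactly what the comment asks for: a user-side default rather than a per-site header.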
Among other things, they can tell you're reading wikipedia but not that you're reading about <controversial subject of the day>, and they can tell that you're browsing youtube but not able to sell data that you watch PewDiePie every day. They're also prevented from messing with the data - providing incorrect information, blocking individual pages, and inserting advertisements.
A number of web pages from better writers than me will argue why you need HTTPS. E.g. it provides integrity (so the coffee shop WiFi can't insert ads in your site), and browsers only enable some features on HTTPS sites.
It's a great question. One is protection from ISPs. Over http, they can see everything you're browsing -- the exact page and data -- whereas with https I believe they only see the domain you're connecting to (e.g. they know you're on Hacker News, but not which thread). Worse, the ISP can potentially modify pages, inject code and advertisements, etc. In the US, we know that ISPs have done these things and now sell browsing info to advertisers.
This was enough to convince me to add https to my static blog.
See it not only as encryption, but a sealed envelope that verifies the integrity of the website which could otherwise be compromised by a man in the middle, unfortunately often an ISP to insert billing reminders, ads and promotions. Perhaps more nefariously by a hacker to replace your bitcoin wallets and Paypal links with his own...
I'd be interested to see whether the total number of SSL sites has expanded because of Let's Encrypt.
I remember one of the arguments the Comodo CEO made on the forum was a rant about Let's Encrypt attacking their business model. While there were a lot of weird things in that rant, it does seem reasonable that a free service will erode the paid, commercial offering. So I would be curious whether Let's Encrypt is enabling people who otherwise would not have gotten an SSL cert, as well as the extent to which Let's Encrypt is taking away customers.
I would also be curious to see what happens when wildcard SSL certs are launched.
> And because 85 percent of those sites never had HTTPS before, it's already significantly boosted the total fraction of sites that are encrypted on the web as a whole. Based on numbers Mozilla gathers from Firefox users, encrypted sites now account for more than 42 percent of page visits, compared with 38.5 percent just before Let's Encrypt launched. And Aas says that number is still growing at close to one percent a month.
Let's Encrypt has been of enormous service to me. It's not just about saving money -- it's about changing my perspective and expectations to "encrypted by default".
This is something that is going to save a company I work for thousands annually. Extended Validation certificates for dozens of domains and subdomains. Now we just plug certbot into crontab and forget about it, forever!
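For anyone curious what "plug certbot into crontab" amounts to, here is a sketch of an /etc/cron.d-style entry; it assumes certbot is installed and nginx runs under systemd (swap the post-hook for whatever actually serves your certs):

```
# Twice a day, renew any certs nearing expiry and reload the web server;
# `certbot renew` is a no-op for certs that aren't close to expiring.
17 3,15 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```

Since Let's Encrypt certs last 90 days and renewal kicks in well before expiry, a missed run or two costs nothing.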
Thank you for all the work you do. It is a great service you have given the world at large.
We don't do any sort of major ecommerce and I have never drunk the EV Kool-Aid. It's the same encryption either way, just EV has an extra CA "stamp of approval". Considering how much I trust your average CA[1], i.e. not at all[2]...
At least, that's true of DV certs. Having worked at a legaltech company for a couple of years with lawyers, I've gained some appreciation for the role of governance. I can still see EV certs having a place in this ecosystem.
DV certs though ... largely felt like a scam to me, back when I learned how they work in the early days of the web.
> So I would be curious if Letsencrypt is enabling people who otherwise would not have gotten an SSL cert
Since Lets Encrypt, every last thing I put on the Internet leverages SSL. Prior to Lets Encrypt, I had purchased a single SSL cert, ever, because I don't have the money to throw at every little thing I like to create and play with.
Admittedly the plural of "anecdote" is not "data", but assuming I'm not special, my suspicion is "very much yes".
> So I would be curious if Letsencrypt is enabling people who otherwise would not have gotten an SSL cert, as well as the extent Letsencrypt is taking away customers.
For my personal stuff, it is both. I paid for a single cert on my little server, but it hosted a handful of domains. It wasn't practical to secure the rest with only a single IP, and too expensive to pile them all on a single cert. Let's Encrypt replaced one paid cert and secured 5 other domains that I otherwise wouldn't have.
This bled into work, where we replaced all paid certs (except wildcards, coming soon) and secured hundreds of domains that were not secured before.
For me, I would never have bothered securing my personal site and small project sites with SSL. With Lets Encrypt they are all SSL now. I assume many devs with side sites are doing the same, but that may be just a drop in the bucket.
I'd be shocked if it hasn't. The announcement from Google a year or so ago that sites not using SSL would suffer a small but undefined down-ranking in search results really helped drive adoption too, and doubtless drove some into the arms of Let's Encrypt and their free certificates.
I switched from self-signed to LE for all of my personal sites. I had gotten some other free SSL cert provider up and running for one site, but it was a lot of steps; I would probably have paid $5 per year or so over free for something as convenient as LE, were LE not free.
Dropping marketshare for others may not mean falling business. It's possible that the market size is growing fast enough for the commercial ones to actually be seeing a growth in business.
This could definitely be true. I've set up SSL on a bunch of sites that otherwise wouldn't need it at all just because the Let's Encrypt client is so easy to get working.
Every CA is growing except StartCom who were distrusted by most players last year (and Microsoft, belatedly, remembered this year) and have recently declared they're throwing in the towel, won't try to get back their trust.
The big categories which shrank are "None" and "Invalid Domain", ie people who didn't have SSL, or were on shared hosting and never switched it on for their site or whatever.
Is your hypothesis that Cloudflare doesn't _pay_ them for those certificates?
How about cPanel and others who bundle Comodo certificates?
No, I think Comodo are doing very nicely off this. They've correctly judged that they're never going to be the Maserati of this industry; they need to be Ford or Toyota, and too bad if some people sneer.
This is especially likely given that many sites considered HTTPS in earnest only when Google started aggressively factoring it into search rankings or when they needed HTTP/2 for performance. Search & site speed are a lot easier to make a business case for than privacy, unfortunate as that may be.
It is hard to compete with free. I think if some CA made things as easy as the ACME protocol and the associated tools did, yet still charged $9/year, it would have delayed things a bit. But the end result would have been the same: $0 is the winning price.
The weird (and pleasant) thing here is that the $0 price is backed not by a shady business, but by a non-profit [1] that hires a stellar team of TLS/DNS/Internet experts that does most of its job openly and in a replicable way.
It's not that easy. I mean the code is there but the infrastructure needs to be provisioned and it takes time to gain trust in the CA world, see CACert.
CACert is a weird example because their model was completely at odds with how everybody else (yes now including Let's Encrypt) does things.
CACert says this is trustworthy because you have to know somebody who knows somebody who knows somebody, etcetera. The profiles with a single photo of a swimsuit-wearing young woman and a generic-sounding name that claim to know me from school on Facebook every few weeks can tell you what happens to that idea at scale.
In contrast all the trusted public CAs (commercial or not) have a bunch of employees either directly making validation decisions according to some company policy or writing software to automatically make such decisions.
In theory maybe CACert's model really could work, but what it definitely can't do is work the same way as everybody else. So it was very hard to get anyone to give them a chance, still less after they started to have internal squabbles.
You are correct that gaining trust takes time, but what a conventional CA does (with the exception of maybe Verisign and Thawte, which are _really_ old) is first get an existing trusted CA to sign a subCA certificate saying it trusts the new CA. The ordinary "Let's Encrypt Authority X3" certificate is signed by IdenTrust (it says "DST Root CA X3" on it, but Digital Signature Trust no longer exists). There's another copy of the same certificate signed by ISRG, but that's only trusted in much newer software: modern Firefox, current macOS or iOS, but not, say, Internet Explorer 10.
> CACert is a weird example because their model was completely at odds with how everybody else (yes now including Let's Encrypt) does things.
Well, CACert insisted on validating people, but it turns out that it's not really necessary to know your customer to issue DV certs according to the Baseline Requirements. Let's Encrypt understood this and just did the minimal required job to be accepted (it's still a lot of work).
Instead of verifying people I'd gladly see X.509 replaced with OpenPGP w.r.t. trust model so that I could see who trusts who and why. OpenPGP has a mode of hierarchical trust with trust signatures, additionally they can be limited to a domain, that could be used to give people power to issue their own certificates for their own domains.
I've never mentioned that to be a dealbreaker for anyone, least not for me. Just wanted to make it clear that any organizational form can be shady, not just corporations.
It's just a statement, with no emotion added to it from my side.
Personally, I backed away from it right away and went for paid certs with a client. Not great when it "kinda, sorta" works on Amazon Linux. The cost of the certs from Namecheap seemed trivial compared to digging through GitHub issues to find out why the particular version of certbot running on a cron task hadn't renewed properly on a (customer production) EC2 instance. Screw that.
Honestly I've been holding back on going with lets encrypt at my 9-5, partly because certificates through namecheap/comodo are only $9.95 per year. I'm surprised by how much market share it has already, time to jump on board. I imagine paid for certificates will be dead in a few years.
I’d been in the exact same not-worth-the-hassle-to-change-for-only-$9.95 mindset on one of my side projects. That is, until last year when Comodo engaged in some really shady behavior directed at Let’s Encrypt [1]. At that point, I decided there was no way I was going to keep rewarding Comodo with my business, not even a measly ten bucks’ worth.
And now I couldn’t be happier I switched to Let’s Encrypt. Automatic renewals mean no more working through the partially-manual, email-based process we had to use with Comodo. And free+automated means I don’t think twice about using https on every new subdomain. Really, I don’t think at all about https anymore; it just works.
I've recently been working on moving a chunk of our infrastructure to a fully automated deployment. It consists of several microservices on various subdomains within 3 top-level domains.
For the dev and staging environments, I used a sub-domain prefix: app1.staging.example.com, downloads.staging.examplecdn.com, etc. There's a script that looks through the deployed systems in its environment and configures the proxy server appropriately -- including running certbot if needed to get certificates.
I used let's encrypt initially so we could actually test real SSL without the hassle of new certificates, especially when we wanted to deploy a new environment. It worked so well we just left it in when we deployed to production.
The cost of certificates is nothing compared to the effort to get someone to put it on the company card, renew it (and answer questions about what it is a year from now, yes we still need it), manage the private key, etc. Let's Encrypt is better in every way.
Let's Encrypt provides wonderful services, but don't forget that in business, when something is free, it might end up costing you money.
Not only are these certs valid for just 3 months, but what if their renewal services have hiccups? Not wishing them anything bad, but I can imagine that for whatever reason (including the root cert being revoked if they happen to have issued certs to sites that normally wouldn't get them, like c.p.) 35% of the net could be rushing to a local cert provider for a paid solution or face no traffic to their site at all.
Besides, the Thawte days of 1999, when a single cert was $150, are long gone; you can get a decent Comodo cert for $4.99 per year these days.
Due to the lack of automated renewal procedures, I've seen many more small websites die for a day or so once a year with "traditional" certificates (when people forget to manually renew) than those with certificates maintained via letsencrypt.
The actual solution to this should be for any of those providers to provide the ACME certificate management protocol, and then if letsencrypt fails, I can point certbot at it.
I was always on that end - at a long-ago job I'd somehow fallen into the position of looking after a server box for our subdomain that sat under someone's desk.
Every year I'd preempt it and email our procurement and IT teams asking for a renewal months in advance (knowing the red tape means I'd get it JIT).
And, every year it wouldn't be renewed or bought until the day after expiry when I had enough complaints from people to light a fire under someone's arse.
The whole cert-package back and forth was a nightmare; I'm glad LE exists with auto renewals (even manually pushing a renewal with one line is easier than the old process).
There's always going to be stuff that renews automatically when you don't want it to, or stuff you have to renew manually when you'd rather not.
I have a wonderful suggestion for anyone who wants to combat this problem: put the end date in your calendar! If you're a busy person, set multiple notifications, or learn to look ahead in your calendar.
I don't think there's any situation I've seen where I don't want my certificates to be renewed. As a sysadmin, the more repetitive tasks I can automate, the better.
You must've seen automated subscriptions which you did not want to renew?
My comment was more a general statement to combat a type of procrastination (a deadline which must be met slightly before its date), specifically with the mechanic of both automated and manual subscriptions. Calendar reminders work wonderfully with _both_.
I can also think of various scenarios where you don't want certificates to be renewed. Or well, maybe you don't care, but the users would. Like when your server no longer works, or got confiscated. A short lifetime helps reduce the number of such lingering certificates.
Automated payments are a different matter from automated infrastructure that's already been paid for.
I have automated tickets created when I need to do things that I haven't automated yet, but, I still want to automate as much as possible.
I have not run into the problems that you're suggesting I should've. If a server no longer works or was confiscated, why would I still have DNS pointing at it? And does it really matter if I have some servers renewing certificates unnecessarily anyway?
Wildcard certs are still expensive. And the convenience of an automated process plays a big role. CAs have been asleep at the wheel, neither improving their products nor offering a convenient way to manage renewals.
Totally agree. If CAs were smart they'd be implementing the ACME protocol, and attaching it to their (paid) backend systems. There's value to be added like longer expiry times and long-term assurances, a centralized management portal, wildcard and EV certificates, and higher (or no) re-issue limits that would go a long way to getting businesses using it.
I was surprised by how easy it was to set up, and the huge advantage is that you never have to worry about renewals, or the various problems associated with forgetting to renew, ever again.
Let's Encrypt isn't interested in S/MIME. You are, of course, welcome to try to repurpose their software and/or the ACME protocol to provision S/MIME certificates.
Although S/MIME isn't a real Wild West like some types of X.509 certificates, it has seen much less oversight than SSL/TLS ("Web PKI") certificates. There's also not really a great appetite for cleaning it up. If you want to be the hero in this story there's definitely an opening for that.
If all you need is basic domain validation, which is the case for pretty much any site that just wants encryption and doesn't need real identity validation, Let's Encrypt is good enough. Add their tools for quickly generating and updating certs (so long as you're not using IIS) and it's no surprise that they're leading now.
Agreed, but with the exception of EV-SSL/TLS, certs are fungible and therefore no site needs 'real' identity validation, because you can't actually see the identity anyway (short of running cert patrol or opening up the details page). It's also a crying shame that chrom* hid those details in developer tools now -- when before, at least, you had a fighting chance of seeing crypto/expiration/etc details. (Of course, with DV certs being treated the same as OV certs, someone with a DV cert could just forge all of the metadata, as long as they control the domain; DV CA's don't verify the metadata, by definition.)
In other words, I remember faxing data into verisign or thawte back in the day (not even that long ago), but OV is just not worth paying for anymore: OV provides zero value, now that DV certs are accessible. Someone who had access to your domain could just get a DV (or LE) cert and no one would be the wiser.
I'm not sad to say that Comodo wrote their own death certificate (ha) by racing to the bottom. It's hard to beat free and open source.. the only thing they've got to hold onto now is EV, and that's of diminishing value now as well.
> (Of course, with DV certs being treated the same as OV certs, someone with a DV cert could just forge all of the metadata, as long as they control the domain; DV CA's don't verify the metadata, by definition.)
The Baseline Requirements require certificate authorities to be able to verify the correctness of the information that they include in each certificate -- including for DV. See the first subsections of section 3.2.2 of
Good link, but you should read it more carefully. That metadata (i.e., Organization) is only verified if the certificate is purchased for that purpose. In other words: would you like us to do extra organization validation? Pay more. Otherwise, it's cheaper if we only validate the domain.
It's kind of crazy, actually: it's more expensive for you to prove who you are than it is for the attacker to NOT prove who they are. Security should be about raising costs for the attacker, not lowering them!
And, as you know, neither Let's Encrypt nor any other DV CA checks or verifies any information except control of the domain itself.
That's why those certs are called Domain Validated instead of Org Validated ;)
This is not what the Baseline Requirements state and it's also not how issuance works in practice. The validation requirements for things like "Organization" always apply "[if] the Applicant requests a Certificate that will contain the countryName field and other Subject Identity Information."
If you submit a CSR with any of these fields, and you're requesting a DV certificate, the CA will discard those fields. Any sane CA implementation will cherry-pick only a whitelisted set of fields from the CSR, make sure the values have been validated according to the Baseline Requirements and root policies, assemble the certificate from that data and sign the result, ignoring all other fields (or discarding the request).
CAs that run something like sign(user_submitted_csr) would not last long in any of the major root programs.
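As a rough illustration of that whitelisting (hypothetical field names and plain dicts stand in for real X.509 structures; this is not any actual CA's code), a sane DV issuance path looks something like:

```python
# Sketch of CSR subject whitelisting for a DV certificate: only validated,
# whitelisted fields survive into the signed cert; everything else the
# applicant put in the CSR is silently discarded.
DV_ALLOWED_SUBJECT_FIELDS = {"commonName"}  # DV: only the domain survives

def build_dv_subject(csr_subject, validated_domains):
    """Keep only whitelisted, validated fields; drop the rest."""
    subject = {}
    for field, value in csr_subject.items():
        if field not in DV_ALLOWED_SUBJECT_FIELDS:
            continue  # organizationName, countryName, etc. are discarded
        if field == "commonName" and value not in validated_domains:
            raise ValueError("domain not validated: " + value)
        subject[field] = value
    return subject

csr = {
    "commonName": "example.com",
    "organizationName": "Totally Real Bank Inc.",  # unverified, must not survive
    "countryName": "US",
}
print(build_dv_subject(csr, {"example.com"}))
# -> {'commonName': 'example.com'}
```

The point being that `sign(user_submitted_csr)` never happens: the CA assembles the certificate itself from validated data only.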
The BRs don't have any requirements about charging or not charging applicants money for certification functions, and I don't see an exemption allowing unverified information to be included in a certificate if the applicant isn't charged for it.
It looks like you were speaking to my comment above regarding forging details, and now I see what you're saying (that a DV provider like LE will just discard extra details, or reject the cert, rather than signing them). Thanks for the correction.
The broader point is that because that metadata is effectively invisible, an OV cert has the same practical value as a DV cert; even to a security-minded audience, OV provides literally no extra value over DV for the website's audience/customers. EV does add something, on some browsers, but the positive benefits of EV are diminishing.
And because OV ~ DV to its users, the security value of OV over DV is nil; they are interchangeable for the purposes of protecting a website.
They might have more value, depending on the app, on other protocols, but for HTTPS, browsers have effectively rendered them identical.
> (Of course, with DV certs being treated the same as OV certs, someone with a DV cert could just forge all of the metadata, as long as they control the domain; DV CA's don't verify the metadata, by definition.)
I don't know if every DV CA does this, but I got a couple of StartSSL's free certs a while back (before LE was around, and before the WoSign acquisition and subsequent debacle), and I recall the documentation saying "because this is a DV cert, we're just going to ignore all the metadata in your CSR and generate a certificate with those fields blank", or something along those lines.
>the only thing they've got to hold onto now is EV, and that's of diminishing value now as well.
Value in terms of cost, or in terms of utility? And if the latter, care to explain why? I know almost nothing on the business/political side of certificates.
Just pointing out that browsers are de-emphasizing the "green bar" (and, on mobile, almost non-existent), so it's of decreasing visibility/value to the website customer (and also the certificate purchaser).
I wish they would make Microsoft IIS a first-class citizen. If they are true to their mission, they should set any political reasons aside and provide official support for all platforms. AFAIK there are no technical reasons preventing them from implementing support for IIS and Azure Web Sites. There are a number of user-created solutions for both that work well, but I would feel more comfortable using one in production with official vendor support. So either Microsoft should fund this or Let's Encrypt should, and I wonder why neither of them has.
Let's Encrypt doesn't directly produce any client software. ACME clients are built by our community and the community builds what they feel like building. If something hasn't been done yet, or isn't where you want it to be, that reflects community priorities, not our politics.
We'd love to work more closely with Microsoft. We've been talking with Microsoft and various Microsoft community members about building great support for IIS since before Let's Encrypt launched.
I've used https://github.com/Lone-Coder/letsencrypt-win-simple to good effect; it's under active development, and has Azure DNS support. Doesn't help with Azure-hosted sites as such, but works great with IIS.
Not trying to start a Slashdot-class flame war or anything, but why are you on a platform that is so... belligerent to being on a network? Microsoft has been dead to me in the datacenter for at least ten years, for reasons far beyond the peanut-gallery "hur hur M$ $ucks!" There's oodles of options, many of which are far simpler to manage than a Microsoft stack, with commercial support available. I'm genuinely curious, what's keeping you there?
Because I downvoted you, I'll say that critiques like this are totally ineffective without examples. I haven't worked with Windows/IIS in like ten years, but I recall they had APIs for certificate management and http, system scripting languages (including javascript), scheduling tools and so on. Not sure what else you would want or need. If you have a substantial argument, it would be interesting to hear it.
(And I'll say I'm only posting this because I've had some not-good experiences with certbot. It is essentially a big foreign environment nailed onto your host and another 'thing' to tend to. Or perhaps you would describe it as Ubuntu Linux being "network belligerent.")
I greatly appreciate the comment giving further context to your downvote. That is all too rare around here! Big criticisms I have around the recent Microsoft server offerings:
- Opaque and uncommon/needlessly-unfamiliar command-line management toolkit (seriously, what is so difficult about 'output text, receive text as input' in 2017?)
- I cannot, yet, fully manage a Microsoft host from the CLI. What I consider to be 'fully managed' revolves around: installing software, updating software, adding/removing/'managing' users, setting up basic server-like features such as simple firewall rules, viewing common log-files ('user X logged in', 'software Y tried to execute Z task, the result was A'), starting/stopping/troubleshooting daemons, etc
- Pick a major config-management system (my direct experience is with Puppet/Chef/Ansible/Salt). Configuring them to play nice with Windows-anything is an order of magnitude more difficult than even the most baroque *nix-like operating system.
- Licensing. Srsly, charging for the operating system, THEN the web server, AND the 'remote desktop server', PLUS the mail server... is downright punitive. Microsoft does exactly none of these 'well', but they still think they should be able to charge money for it. Their customers, in my opinion, must be masochists.
Lest I be accused of being completely unfamiliar/anti-Microsoft, here's a couple of things I feel they do quite well:
- remote/centralized user management (seriously, Active Directory's extensions and integration of LDAP/Kerberos are quite impressive). Pity it's not even remotely open-sourced, as such any modifications (hesitant to use the word 'improvements') are effectively prohibitively difficult to bring forward unless you're a Microsoft employee, directly assigned to the Active Directory group, in good standing with your direct manager and his manager's manager.
- AAA (authentication/authorization/accounting) - given their AD prowess above, they'd have to be a special kind of stupid to foul this one up. To their credit, it is amazingly easy to assign RBAC to a group of users and apply it site/directory/'forest'-wide, then go back and bean-count exactly which users did what and when, if needed. For a lot of environments, this is a huge plus.
Still, for the vast majority of 'webby' environments, in my professional opinion, Microsoft hasn't been able to hack it for at least a decade now. The market has moved on to more powerful, less costly, more manageable/scaleable platforms. If you're running an application, or even serving basic web content, on a Windows-anything, you're needlessly wasting your business time/money/agony if you choose to implement it on a Microsoft stack.
> - Opaque and uncommon/needlessly-unfamiliar command-line management toolkit (seriously, what is so difficult about 'output text, receive text as input' in 2017?)
> - I cannot, yet, fully manage a Microsoft host from the CLI. What I consider to be 'fully managed' revolves around: installing software, updating software, adding/removing/'managing' users, setting up basic server-like features such as simple firewall rules, viewing common log-files ('user X logged in', 'software Y tried to execute Z task, the result was A'), starting/stopping/troubleshooting daemons, etc
^^^ Solved by PowerShell (You can even install a GUI-less version of WS2016 and all these will work.)
> - Pick a major config-management system (my direct experience is with Puppet/Chef/Ansible/Salt). Configuring them to play nice with Windows-anything is an order of magnitude more difficult than even the most baroque *nix-like operating system.
^^^ I use Ansible Tower to manage a mixed WS and Linux fleet with great success; I only need to run `kinit` once a day to obtain a ticket, and all logins afterwards are passwordless.
>- Licensing. Srsly, charging for the operating system, THEN the web server, AND the 'remote desktop server', PLUS the mail server... is downright punitive. Microsoft does exactly none of these 'well', but they still think they should be able to charge money for it. Their customers, in my opinion, must be masochists.
...because they're all separate pieces of software?
I run this on my home network. It is pretty difficult to set up as I was not able to dig up really satisfactory documentation. It must be said also that I am pretty inexperienced with AD at all so there is that.
I recently switched all clients to Linux and plan to abandon this approach as I was after proper user/authorization management between my Windows 7 clients and a Linux based NAS. With Linux the integration via ssh/sshfs is much easier and fits better to the Linux authorization model.
As a comparison: setting up AD properly on all parts cost me at least several weekends. Setting up sshfs took only a few hours. And the latter is much more responsive...
> but a paid solutions is not the topic of this thread.
AWS has their own CA that issues free SSL certs for their customers for use with stuff like CloudFront, I don't see any reason Microsoft couldn't do the same if they're not willing to play nice with Let's Encrypt.
Ohh, I wasn't aware that AWS had a free SSL cert solution. I also don't see any reason why Microsoft doesn't provide a similar service for their customers.
"It requires an Enterprise Support Agreement to ask that kind of questions to the IIS / Azure Web App product team and I do not have that."
Sounds like you're running the wrong web server for your needs, then. Every OSS web server has several methods for getting LE certificates implemented mostly by volunteers (my company implemented it in Virtualmin, both the OSS projects and commercial products, within a couple days of it being available, maybe even during the beta I don't recall exactly, and we're just a couple of developers), and you don't have to have an enterprise contract to make suggestions.
I'm hoping Microsoft releases a tool/blade for managing certs in general, even with a marketplace with auto-renewals, etc. The third-party extension for Azure web apps for Let's Encrypt isn't bad, it's just a pain to set up the first go-round.
Which was quite confusing... I mean, if Google does not redirect to https, it was well within the range of sane assumptions that Samsung didn't support https, but I see that they actually do, thanks!
Browser usage patterns are very different in South Korea due to the Korean laws requiring obsolete (and no-longer-supported by modern systems) encryption for all retail transactions. That may have something to do with it.
Which (shared) web hosts remain opposed to Let's Encrypt?
Namecheap (my host, until I get around to switching) still is, for business purposes. (their customer service rep gave me some BS about "security" as to why, but it's really because they make a ton of money selling certs)
Most of the big hosts have exclusive agreements with certain CAs that prevent them from offering free SSLs through Letsencrypt. They will usually install a cert for you, sometimes for a fee, but never in an automated way, so even if you are using Letsencrypt, you'll need to open a ticket / pick up a phone every 90 days. You can't just write a cron and quit worrying about it, which is the central value proposition for LetsEncrypt/ACME as far as I'm concerned.
Get off namecheap. They got big by standing up to GoDaddy, but their prices are just as bad now. As you said, they make money through their SSLs.com so they will never support LE. They became the villain.
as opposed to... putting all of our eggs in any number of CA baskets, where the CA's are basically random companies of random ethics in random locations and where the loss of any one of our eggs means the loss of ALL of them?
At least one major commercial CA does offer ACME to paying customers. If you want ACME and want to use their CA, and have whatever their entry level enterprise price is burning a hole in your pocket, they will take your money.
They basically position it as "As well as integrating with Windows we also make everything work automatically with your weird Unix stuff like Apache or nginx" but it's an ACME service under the hood.
ACMEv2 (the Internet Standard RFC when that finally gets published) is a bit nicer for a commercial CA because it spells out how you use ACME to say e.g. "Hey, I'm paying customer #383829, here is proof - give me certificates on my account". The only easy way this could have worked in ACMEv1 wasn't terribly compatible with the limited understanding of cryptography that say Steve in accounting has.
I think this is fantastic, it's amazing to see how Let's Encrypt just blasted off. It kind of scares me, one company having such a large presence, but I think it's one of the best companies it could have happened to. :)
Fun fact: The serial numbers are required to be random! They're unique, but random (since they are huge numbers there's no problem achieving both).
This is done because it reduces the danger from collision attacks of the sort which worked on MD5 and are likely to be possible for SHA-1. The random serial number makes it impossible to guess what the signature on the certificate you're getting will be before it's issued to you, so you can't use a collision attack (other types of attack could work, but those have never turned out to be practical on a modern crypto hash even when it's "broken" like MD5).
They chose to make the serial numbers random because the serial is near the start of the certificate document, and collision attacks are only defeated by randomness that appears as early as possible in the document.
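For reference, generating such a serial is simple; this sketch mirrors the common approach (as in e.g. the Python `cryptography` library) of drawing 20 random octets and dropping one bit so the DER-encoded integer stays positive, comfortably above the minimum of 64 bits of CSPRNG output the Baseline Requirements have mandated:

```python
import secrets

# Sketch: a 159-bit random serial is unique with overwhelming probability
# and unpredictable to an attacker preparing a hash-collision certificate.
def random_serial_number():
    return secrets.randbits(159)  # 20 octets, top bit clear -> positive

a, b = random_serial_number(), random_serial_number()
print(a != b, a.bit_length() <= 159)
```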
We use Let's Encrypt for new environments at Datica [1]. It's one of the better features we've implemented. LE is one of the better companies out there. Truly doing useful work.
I'm not sure that's the right way to present this (or phrase it).
It's more that their presence enlarged the number of HTTPS sites (or services) by 50% (totally a thumb-in-the-wind estimate :-D ) than that they're "stealing" customers from other companies.
Of course part of it is people migrating certificates to Let's Encrypt, but I find it more important that the HTTPS surface has grown that much.
For the top 10 million sites examined by w3techs (which looks at sites to see if they use technologies like PNG, WebM, embedded fonts, whatever) HTTPS went from under 15% to almost 50% since they started measuring. Less than half that growth is Let's Encrypt, their growth is still spectacular, but to put things into perspective, if they had the _same_ growth and the whole HTTPS market didn't grow they would now control 110% of the market, yet in fact every other major CA grew at the same time.
I get that HTTPS is good for anything that's got sensitive data, but what really is the point of enabling it if you've got a static, old-school informational page? Aside from avoiding getting dinged by search engines?
There are a couple of reasons. Encryption also preserves data integrity. That means your ISP cannot inject ads or tracking scripts. I’ve heard that this is already quite common in some countries (including on mobile networks in the US). It also means that nobody who sits between the server and your internet connection can inject falsified information into that static page.
You also get more privacy. An attacker can see that you are accessing Hacker News, but not necessarily which comment section you are on (though it can sometimes be inferred from the page size).
You might not care about that for your old-school static site, but someone who lives in an oppressive country might, because it can make a big difference whether they only access Hacker News for the latest and trendiest node framework or are actively researching the comments on articles about how their country does something bad.
Herd immunity. If only sensitive data uses HTTPS, all HTTPS use is suspicious. If everything is HTTPS, the fact something uses HTTPS no longer makes it suspicious/interesting. It also makes policies banning access to HTTPS websites (i.e. clumsy traffic control) unfeasible.
Have you seen those Intel-ME vulnerabilities that were on the front page yesterday and people kept saying they were not as bad as they look, because an attack would need to come from your LAN?
Injecting some Javascript into your static, non-sensitive pages is a great way to start an attack from inside your LAN.
More stringent validation methods won't help with the ever-present possibility of private key compromise. As long as that's a real possibility and revocation is broken (which it clearly is), longer certificate lifetimes are a liability. Renewal needs to be automated so you don't care how often you have to renew.
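The renewal decision a cron job makes is simple enough to sketch; the 30-day window below matches certbot's default of attempting renewal once fewer than 30 days remain on a 90-day certificate, so a few days of CA downtime never threatens a live site:

```python
from datetime import datetime, timedelta

# Sketch: renew well before expiry, not at the last minute.
RENEW_WINDOW = timedelta(days=30)

def should_renew(not_after, now):
    """True once the remaining lifetime drops inside the renewal window."""
    return not_after - now <= RENEW_WINDOW

issued = datetime(2017, 11, 1)
not_after = issued + timedelta(days=90)  # Let's Encrypt's 90-day lifetime
print(should_renew(not_after, issued + timedelta(days=20)))  # False: 70 days left
print(should_renew(not_after, issued + timedelta(days=65)))  # True: 25 days left
```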
Let's Encrypt will sign your ECC keys now, but we'll sign with our RSA keys. We'll likely have our own ECC trust chain some time next year.
Is anyone using letsencrypt with nginx+docker? How do you take care of the chicken-and-egg situation when you spawn an nginx docker container: nginx cannot start without a valid certificate, and you cannot generate a valid certificate without nginx.
I haven't looked into this for about six months - I just bought a wildcard SSL certificate for $70 and called it a day.
Is the reason that nginx wouldn't start because you're specifying the certificate location in the config?
So when it goes to start up, it will fail because there is no certificate in place? (If not, then tell me why nginx won't start.)
If that's the case, don't link to where the certificate should be. That'll let nginx start; then the certbot-nginx tool can run, and it'll add links to the generated certificates directly to your config files as part of the initialisation.
All it does is insert the paths to the certificate files before the final closing bracket of any server block that listens on port 443.
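A naive sketch of that rewrite (the real certbot-nginx plugin parses the config properly; this version assumes one server block per string and is purely illustrative):

```python
# Insert ssl_certificate directives before the final closing bracket of a
# server block, but only if it listens on port 443.
def add_cert_paths(server_block, cert, key):
    if "listen 443" not in server_block:
        return server_block
    directives = (
        "    ssl_certificate %s;\n"
        "    ssl_certificate_key %s;\n" % (cert, key)
    )
    closing = server_block.rstrip().rfind("}")  # final closing bracket
    return server_block[:closing] + directives + server_block[closing:]

conf = """server {
    listen 443 ssl;
    server_name example.com;
}
"""
print(add_cert_paths(conf,
                     "/etc/letsencrypt/live/example.com/fullchain.pem",
                     "/etc/letsencrypt/live/example.com/privkey.pem"))
```

A server block listening only on port 80 passes through unchanged, which is why the port-80 redirect setup mentioned below keeps working.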
I use Ansible and automate the install of new servers/sites; unless there's something extra different with Docker, the automation process should still work fine. It also still works if your config is set up to redirect all incoming traffic from port 80 to 443.
One-time run commands and dependent commands are difficult in docker init. So I have to fall back on a startup manager like supervisord. I tried that last time, but it was very funky.
Docker containers are like short-lived machines with no dependency management in init. But $70 for 3 years is a cheap price to pay versus giving up docker, nginx (vs traefik), etc.
Still, it seems unusual to not be able to run a set of commands on initialisation.
I haven't looked at Docker in a long time, but isn't there something called an entryscript? I'm not sure I'm remembering the name correctly -
- yep, entrypoint script not entryscript.
I remember something about setting up Wordpress in Docker and the official docker image has this type of script that initialised the database just once.
Might be worth looking into just for interest (maybe not for fixing this 'issue' as you point out its cheap to get a 3 year cert). I imagine one time init commands would be useful to have in their own right anyway.
I used it, and I used a combination of manual and automated steps.
It wasn't worth it in the end, perhaps you could look into traefik or caddy. Both can automatically request and refresh certificates for you, caddy is good for hobby projects and is very easy to configure, traefik offers a lot of features and can be a bit harder to setup, but is truly awesome if you connect it to an orchestrator for automatic request routing.
Because a docker container spawns with the nginx executable as its entrypoint - and because of the missing letsencrypt certificate, nginx is unable to start.
Docker services are not long running processes on a base OS. The entire OS is pretty much freshly created when you spawn a container. This gets to be an issue with letsencrypt.
What we do is take a 3-year certificate and bake it into the docker build. So we only have to mess with it very infrequently. I can shut down and scale my nginx containers (meaning: spawn new ones). Since the certificate is baked into the image, it works seamlessly.
I would argue that you are using images wrong. An image should not contain credentials, and should be independent of deployment. We use the same image to test, deploy to staging, etc.
We always have a manually managed reverse proxy in front. For us, it takes care of TLS, caching, and makes me personally feel better than having the app HTTP/TLS stack facing the internet. This is just an nginx container with `/etc/nginx` mounted.
We also run certbot in a container. The two share a volume holding certs, and we do `certonly --webroot` to grab new certs. The container is not permanent, but launched from a script that essentially wraps certbot. Just need to disable the TLS vhost for a bit manually, and don’t forget to setup cron to refresh.
> We always have a manually managed reverse proxy in front. For us, it takes care of TLS, caching, and makes me personally feel better than having the app HTTP/TLS stack facing the internet. This is just an nginx container with `/etc/nginx` mounted.
Well, that is both your prerogative as well as your expenditure. We work in regulated spaces (finance in India) and don't get to have a lot of leeway in hosting and infrastructure. Docker is a lifesaver that way. Which is why we like Let's Encrypt, but it is a blocker for those of us using Docker to run nginx itself.
Not sure I follow, we run nginx and certbot in Docker too? It's just that we manually manage its configuration, and need the extra (not too cumbersome) steps to get a cert up.
I usually copy a vhost config from one of our templates, comment out the TLS vhost, reload nginx, request cert, uncomment TLS vhost, reload nginx.
So, I'm not sure what's not possible in your setup? Unless there's a limitation on what kind of volumes you can connect to your containers?
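The comment/uncomment dance above can be sketched as a small text transform. The `# TLS-BEGIN`/`# TLS-END` markers in the vhost template are an assumption, and the nginx reloads and cert request itself are elided:

```python
def toggle_tls(conf: str, enable: bool) -> str:
    """Comment out (or back in) the TLS vhost between marker lines."""
    out, inside = [], False
    for line in conf.splitlines():
        if "# TLS-BEGIN" in line:
            inside = True
        elif "# TLS-END" in line:
            inside = False
        elif inside:
            if enable:
                # strip one leading '#' that we added when disabling
                line = line[1:] if line.startswith("#") else line
            else:
                line = "#" + line
        out.append(line)
    return "\n".join(out)
```

Disable, reload nginx, run certbot, re-enable, reload again; the transform round-trips, so the template comes back byte-for-byte.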
If the server isn't started yet, the easiest way is to use the HTTP validation. Once it's started, placing the challenge file in the webroot is easiest. You can use both methods.
Would be nice to see the graph in absolute terms: how much of this is people switching from other companies to Let's Encrypt, and how much of it is more websites getting SSL?
Does this count parts of the market that will be gone soon? Legitimate question: didn't StartCom recently say they won't issue new certs and will shut everything off in 2020?
StartCom hasn't had its own market share since late last year when most browsers distrusted it. It's been reselling certs from Comodo and Thawte, probably contributing to the market share of these other vendors.
After spending a ton of money on SSL certs over the years, I not only salute Let's Encrypt but also fart in the general direction of those orgs who fail to support it.
Even though automated renewals are absolutely superior, as encouraged by LE, they do require some work to adopt. For a lot of businesses, it's easier to have someone manually renew every year or two than to do the migration work of setting up automated LE certs.
Many people who don't live in the tech news bubble like we do might not have even heard of let's encrypt, or haven't realized that they could use it. Or the bad old "never touch a running system" mantra at work.
Because some people/businesses are afraid of free things, and others would prefer to not have their certs rotate every 90 days (even if it's done automatically).
Side note: I use Amazon certificates (ACM), and I had someone try to "verify me" using Extended Validation procedures. ACM doesn't offer EV (it's listed as one of its limitations), but the real questions I have are:
What is Extended Validation?
Why would I want Extended Validation?
Why would I look more legitimate to someone looking for Extended Validation? Because my business/personal information would be associated with the certificate or something?
I'm not sure if this is still the case, but for some time, Twitter served both EV and non-EV certificates depending on where the visitor was located. I don't think they ever publicly explained this behaviour.
> Because my business/personal information would be associated with the certificate or something?
Exactly. EV certs are tied to a business, and said business' identity is verified in the process. Where for a domain validated (DV) cert the CA only verifies that you control the domain/the server it is pointing to, an EV cert also has the business name (and browsers generally show that). If you own ringaround.com, I can register ringaround.io and get a cert for that and try to impersonate your website to users, but I'll have a harder time getting an EV cert for ringaround Ltd.
The limitation, of course, is that this requires users to actually check/notice that the cert isn't an EV one, which is why the usefulness of EV is questioned.
Actually, only a slightly harder time. If the company hasn't already locked down all the TLDs for their given name, then it's probably a piece of cake to register that company name in some locale and then get the EV cert for it. (I've done the EV verifications, and they're not really that challenging; and how could they be, since they have to validate companies all over the world, with varying amounts of paperwork, etc.) Keep in mind that companies in the United States are registered (generally with the Secretary of State) in their state, so now you've got fifty different ways of verifying just U.S. companies. Someone registered ringaround, Ltd in California already? Just register it in Nevada, or Florida, or Delaware, or... you get the picture. Many of these can be registered in about fifteen minutes with a credit card.
But not to pick on the U.S.: if you're from outside the country, the U.S. is actually a pretty nice place to base your company. But if you were the EV company, how would you verify a company in Nevis or Timbuktu? How would you REALLY know if that company is even legit, or if the country just hands out "Corp" or "Ltd" or "IBC" to anyone who pays $50 on a credit card?
EV isn't quite a joke, but it's not really as useful as the companies pushing it make it out to be.
In principle CAs aren't supposed to hand out EV certificates without a way to actually make sure the Subject entity exists. It's common for EV to only be available for certain countries, because you're right that it's not obvious how to check that the government of Mali really authorised you to operate a company named "Ringaround, Ltd" in Timbuktu (a city in their country).
The basics are: first, there needs to be some sort of government agency that can say authoritatively which companies exist in their country, and either give a date when they were created or a "serial number" in some sort of register, preferably via a secure online API. Second, the country must have some reliable "business directory" or similar that lists authoritative contact details for that type of business, for example Dun & Bradstreet. This is used to phone the business up and ask to talk to someone about this certificate they supposedly want issued.
However, you're quite right that places like Delaware or the United Kingdom, despite having a reputation as perfectly law-abiding places, actually have very lax regulation for starting dodgy companies; the only reason scammers aren't buying EV certificates for dodgy company names in those places is that it doesn't matter. The day we make ordinary users demand an EV certificate to trust they're really dealing with "PayPal" is the same day scammers will start brass-plate companies in London or Delaware named "Pay A Friend, Incorporated" or "My Pal, Ltd" or "PP Internet Payments" or whatever. Fools will still get separated from their money. No technical fix (and EV is a technical fix) can prevent that.
Could https be used to track persons on the internet using some phase of the protocol? Perhaps the "random" number generated after a handshake is not so random and actually identifies you as a user?
Sure, your communications are encrypted by what people perceive as an infallible algorithm, and all serious websites with forms force you to use it, but at what cost?
I _think_ the number you're talking about is the session key. The session key is agreed by both parties. Key Agreement is actually some pretty clever mathematics, but it goes like this (except the real numbers are enormous and I've used small ones):
You: "Let's pick a session key, let's use method A, with the magic numbers 15 and 29. I chose my random number, and with A, 15 and 29 the answer was 4."
www.google.com: "Cool yes, method A with numbers 15 and 29 is fine by me. I picked a random number and my answer was 3."
Now both you and Google can determine the secret session key, because each of you knows _your_ secret random number and the number the other person got by using _their_ secret random number with the special method. But even if the other person lied and always picks the same number, the _result_ is random, because you did your part of the trick properly.
Nobody else knows it's 9, even if they eavesdropped on this conversation taking place, because they need one of the secret random numbers to work it out OR they need to solve a mind-bogglingly hard mathematical problem to get the answer without knowing the secret.
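The exchange above is toy Diffie-Hellman key agreement. Here's a runnable version using the same public numbers 15 and 29, with secret exponents chosen so the announced values come out to 4 and 3 as in the dialogue. (With these exact parameters the shared key actually works out to 13 rather than 9; the numbers above were purely illustrative.)

```python
# Toy Diffie-Hellman with tiny numbers; real TLS uses enormous ones.
# "Method A with magic numbers 15 and 29" becomes generator g = 15
# and prime modulus p = 29.
p, g = 29, 15

a = 26                  # your secret random number (chosen so g^a mod p == 4)
b = 23                  # the server's secret (chosen so g^b mod p == 3)

A = pow(g, a, p)        # you announce 4
B = pow(g, b, p)        # the server announces 3

# Each side combines the other's public value with its own secret:
shared_you = pow(B, a, p)
shared_srv = pow(A, b, p)
assert shared_you == shared_srv == 13
print(A, B, shared_you)   # 4 3 13
```

An eavesdropper sees p, g, 4 and 3, but recovering the shared key from those means solving the discrete logarithm problem: trivial at this size, infeasible at real key sizes.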
It is obviously very easy to evade cookies and IP tracking, so I'm not even sure why these are suggested.
However, if the https protocol itself is bugged at the TLS level, what can you do? What if the last few bytes of the ssl session number are always generated according to your unique hardware specs? What then?
I'm aware that this looks a lot like FUD, but it is rather a question, because I'm not peddling an alternative. Why implicitly trust any protocol?
With Wildcard certificates coming to Let's Encrypt, I think they will only increase in users. (https://letsencrypt.org/2017/07/06/wildcard-certificates-com...)