I really wish Let's Encrypt had a solution for those of us with lots of "internal" devices that neither need public access nor are capable of running an automated renewal job (routers, switches, IoT gizmos, etc).
All of these things come with half-baked self-signed certs, and web browsers and mobile devices these days are making it increasingly difficult to actually stomach such certs for anything at all.
I wish the "SSL All The Things!" crusaders would consider this use case once in a while, and actually offer a usable solution.
(Some of us could go through the trouble of setting up a private CA to deal with this issue, but getting everything to trust your own personal CA root seems like almost as much of a hassle at times.)
I could not agree more. Information Technology, for reasons unknown to me, seems to foment a kind of extremism, with a large helping of "if you are NOT doing it this way, you are what is killing the world," and the "SSL All the Things!" people are a great example of it. I've heard people suggest that you should just dump your web-serving software in favor of a different webserver that works better with Let's Encrypt, never mind that a lot of people in the real world are running niche proprietary software that is tightly integrated with very specific webservers. And, it has been argued to me, the solution was to never have gotten into that service sector in the first place.
I think many here don't get that working outside a tech startup is very different.
The IT budget is shrinking every year instead of growing with company profits, and sometimes a 3-person team needs to handle 300 users plus machines that are nowadays tightly interconnected. These 300 users are essentially a threat to the company because they often click on anything they get via email.
So you need to encrypt everything as much as possible to decrease the attack surface inside the network.
> These 300 users are essentially a threat to the company because they often click on anything they get via email.
I agree, but I have to mention as well that in general all teams (not only IT) have had their budgets and headcounts cut, and in the end everybody is very stressed most of the time, which in turn makes everybody pay less attention to what they're doing.
In my case (IT), last year I received an email titled "ACTION REQUIRED", and I just blindly opened it and clicked the link to the document I was supposed to fill out. Luckily, it was just a well-crafted internal phishing test (which of course I failed, so I then had to take an online course). In my opinion, the problem is that nowadays we're permanently under pressure to deliver/react/do stuff on very short lead times, and after a while it's hard not to start unwittingly forgetting security best practices.
This year the company I work for has also started quite often using external companies (websites) to provide some services, which makes identifying phishing emails even harder for me.
You can terminate SSL at nginx and forward over a Unix socket or local port to a service bound only to localhost. Then you can load-balance across the nginx instances. You can even deploy the same certs to all of them.
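A minimal sketch of that layout (the hostname, port, and paths below are placeholders, not anything from this thread):

```nginx
# nginx terminates TLS; the app itself only listens on loopback
server {
    listen 443 ssl;
    server_name app.internal.example.com;

    ssl_certificate     /etc/ssl/internal/fullchain.pem;
    ssl_certificate_key /etc/ssl/internal/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;   # or unix:/run/app.sock
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The same server block (and the same cert files) can then be deployed to every nginx instance behind the load balancer.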
> lots of "internal" devices that neither need public access nor are capable of running an automated renewal job (routers, switches, IoT gizmos, etc).
How is that a CA's responsibility? I wish vendors would set up a proper API for dealing with that on their devices. Quite often you need to poke around strange web interfaces just to install a certificate.
Let's Encrypt needs to verify that you're in control of the hostname. This can happen with either HTTP or DNS verification. DNS verification with certbot requires almost no setup (i.e. the machine running certbot doesn't need to be exposed to the public internet) if you're using a supported DNS provider (there are many) for the zone.
So the missing part is NOT in Let's Encrypt's infrastructure; it's in the device API. You can run the certificate renewal script just about anywhere (e.g. a small server or VM within your network that can access the internet); but then, how would you automatically deliver the certificate to the device? That's the missing link.
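For illustration, this is all a dns-01 challenge actually proves: the TXT record value is just base64url(SHA-256(token "." account-key-thumbprint)), which any box with openssl can compute. The token and thumbprint below are made-up placeholders; a real ACME client gets them from the ACME server and your account key:

```shell
# Hypothetical values; in practice the ACME client obtains these.
TOKEN="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ"
THUMBPRINT="9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA"

# dns-01: publish base64url(SHA-256("<token>.<thumbprint>"))
# as the TXT record for _acme-challenge.<hostname>
printf '%s.%s' "$TOKEN" "$THUMBPRINT" \
  | openssl dgst -sha256 -binary \
  | openssl base64 -A | tr '+/' '-_' | tr -d '='
```

The point being that validation only ever touches DNS, so the device itself never needs to be reachable from the internet.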
The request wasn't (only) asking for a solution by the CAs but by the '"SSL All The Things!" crusaders' which includes creators of browsers and mobile devices enforcing stricter rules and a request to take these things into more consideration.
In my words: We need solutions for non-technical individuals to run a somewhat secure environment locally without being confronted with security warnings everywhere. A CA can play a role in that, but probably not fully solve it.
Are you joking? The primary reason one would want one certificate per device is that you are directly connecting to the device, instead of letting the device connect to the vendor API, which is already behind HTTPS. Your idea will just lead to a lot of bricked IoT devices once the vendor takes its certificate-renewal API down.
I don't understand. The device should expose an API (instead of a web interface) to generate CSRs and upload certs (ideally the key should never leave the device). What is the vendor API?
One of the challenges in this space is similar to captive portal detection. It’s not enough to have a solution, you need the vendors of those devices to adopt the solution.
Unfortunately, just like captive portals want to avoid detection by your OS [1], the vendors of these devices want to do the least possible work, for the lowest possible cost, never updating or supporting the device once sold. Any new solution is more work for them, especially early on when things are still in development/in flux and may require changes over time, since the problems in this space are actually rather difficult.
It’s not that this problem isn’t important, but rather that there’s no clear good path here. However, even still, the alternative (of HTTP) is much worse for the Web, so TLS is still promoted aggressively.
[1] The reason captive portals want to avoid OS detection is that the OS-spawned UI generally doesn’t have access to your browser cookies. The ecosystem has mostly ossified around a few captive portal vendors, who want your cookies for advertising/demographics/traffic, or want to offer “Sign in with Social Network” (to both track you and advertise to you). So they want you to use your main browser, not the OS UI, and thus actively avoid OS detection. It’s sad.
> One of the challenges in this space is similar to captive portal detection
I really wish someone had thought of that problem when defining the 802.11 specs. Though I suspect the idea hadn't even occurred to them at the time.
This was much worse back in the mid-00's when WiFi was a newer use case. Seemed like flipping open my laptop in a public place back then would suddenly cause all the network-connected widgets on my desktop to go absolutely berserk as a result of portal nonsense.
Well, Let's Encrypt has a solution: DNS authentication, which doesn't require Let's Encrypt to establish a direct connection to the device. But you still need a way to automatically renew the certificate (your DNS provider must have a supported API) and deploy it automatically. That's not a problem for Let's Encrypt to solve, though.
Even when you have a private CA, you still have the issue of getting everything in your environment to trust it, even in an enterprise setting, in my experience. Most workstations and servers are managed and have the root cert in the trust store, but then you have a long tail of appliances, containerised applications, OSS software that doesn't use the OS trust store, and BYO devices.
With publicly trusted certs you instead trade the trusting-CA-issue for the issuing-issue.
I'm starting to think that it's easier to use publicly trusted certs with FQDNs for internal use: some kind of ACME proxy and portal, which is publicly accessible and can respond to ACME inquiries. Maybe in combination with some non-LE CA that can issue long-lived certs for cases where updates can't be automated (e.g. network appliances). But then there's the Certificate Transparency issue, leaking internal names, etc.
The dream would be a LE issued sub-CA limited to *.internal.example.com, so you can have an "internal" CA that is publicly trusted.
That would mean I could issue a valid, worldwide-trusted certificate that will work on your internal site.
Using public domains for internal sites is the way to go, in my opinion. As long as you pay the bills, you're guaranteed to control your domain, you can issue any kind of certificates, and so on. If you have a nice naming scheme you won't leak that much info to the Internet. A lot of companies I know do that, on the recommendation of their security teams (due to issues with using .local or .example.com domains).
> If you have a nice naming scheme you won't leak that much info to the Internet.
IME even if you leak the name of every internal service, you're still more secure than training all users to ignore certificate errors, which is what internal CAs do 85% of the time.
Maybe I was unclear, but I was wishing for a situation where the Name Constraints extension was widely enough supported that LE would issue scoped sub-CA certs for *.whatever.yourdomain.com, to which I hold the private keys.
But yes, I agree with your larger point. However, a "nice" naming scheme might conflict with a "secure" naming scheme. E.g., perhaps admin-{not-yet-released-product-line}.acme.com is the most reasonable domain name, but it would be unsuitable if it were publicly announced via CT logs.
Anyway, having the domains in the CT logs, and naming them with that in mind, is probably a tradeoff worth making.
I use DNS auth for internal certificates. I have multiple domains that only have entries on 10.0.0.0/8, mostly as *.identifier.domain.tld. With DNS auth they all have valid certificates from LetsEncrypt.
DNS auth doesn't work with split-horizon DNS though, since the LE process will update the internal view instead of the external one.
It sounds like you've put all your RRs in one flat zone visible internally and externally, but then that breaks device network portability, since mail.example.net on 10.x.x.x won't be reachable once outside the internal network.
ACME and LE were really developed without an understanding of how certs work in the IT World. They were developed by folks who spin up a service on AWS and think that's how the world works.
Do you actually have that, or are you suggesting it as a solution? Unless something has changed, you can't get certs for subdomains unless they are on the public suffix list, and getting on that list is non-trivial.
Then I use Caddy server as a reverse proxy on the internal network, configured to do DNS challenges to get certificates. Here's the plugin for AWS Route53, for example: https://github.com/caddy-dns/route53 - The challenge just verifies that I control the domain through DNS, and I get a certificate, no problem.
It's been working perfectly for a few years on our internal networks. Was the OP asking for something different? I'm not entirely sure what the 'public suffix list' has to do with subdomains, but I definitely have a valid certificate right now for *.a.domain.tld, served internally and provided by Let's Encrypt.
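For anyone curious, a hypothetical Caddyfile for that kind of setup might look like this (the domain, upstream port, and use of the Route53 plugin are assumptions, not the poster's actual config):

```Caddyfile
# Wildcard cert for the internal zone, validated via dns-01 through Route53.
# AWS credentials come from the environment; the caddy-dns/route53 plugin
# must be compiled into the Caddy binary.
*.a.domain.tld {
    tls {
        dns route53
    }
    reverse_proxy 127.0.0.1:8080
}
```

Caddy then handles issuance and the 90-day renewals automatically, with nothing on the internal network exposed to the public internet.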
The public suffix list is (or was) used to tell whether foo.domain.com is considered a different user than bar.domain.com.
If domain.com is on the list, then foo.domain.com is a different user than bar.domain.com. If not, they're considered a single user and the rate limits apply.
Those rate limits are fine for almost all internal IT uses. The 50 certificates per week limit does not count renewals, that's 50 new hostnames added every week. Maybe larger shops need to roll out certificates a bit slower in order to not exceed that limit but it's still a pretty generous limit.
The public suffix list only affects rate limits with Let's Encrypt. The limit is still quite high IIRC; you just need proper backoff and smoothing between devices.
> (Some of us could go through the trouble of setting up a private CA to deal with this issue, but getting everything to trust your own personal CA root seems like almost as much of a hassle at times.)
This is the actual problem... setting up your own CA for internal networks could be automated relatively easily.
The problem is not the self-signed cert or CA. The problem is managing trust on the devices themselves.
Imagine you want to trust only _your_ self-signed cert or CA root for a specific service. Good luck making that work.
This issue is so common that there should be baked-in functionality in every piece of software to allow for it. It's often totally missing or implemented incorrectly (self-signed certs or custom CAs are often trusted in _addition_ to the system CA roots!).
That seems like a nifty tool, but it doesn't seem to do anything to address the difficulty of getting your private CA trusted on more than one of your devices. I'd like to be able to access my router and switch configuration interfaces from more than one client PC, and perhaps even from a smartphone.
Getting your private CA trusted on a smartphone seems to keep getting more and more difficult and/or annoying with every iteration of the various mobile OSes.
If it was possible to make a private CA cert that was only valid for hosts under a single domain, and then for mobile devices to STFU about you wanting to install it (as it wouldn't be as "risky" from their POV), life would be a little easier.
Same here. I run OpenWRT on my home routers, and your comment prompted me to do a quick web search. uacme looks promising for luci. I’d love to hear HN experiences with it.
FWIW, tapping on .crt files in Chrome offers to open them with the certificate manager on Android, where it's then straightforward to install them. Unfortunately this also produces some very irritating (non-removable) notifications... >.<
...And I can't deny I was fetching the certificate from my PC across the room over HTTP, not HTTPS.
https://github.com/acmesh-official/acme.sh works for me for most of my devices at home. pfSense and Proxmox both have plugins for it. Each device requests its own certificate for a host on a private subdomain of a domain I own, e.g. hostname.private.domain.com.
That still only works if your device is able to reach the internet. We have a bunch of servers which aren't directly on the internet, because they don't need to be.
You can run acme.sh on another device using dns-01 authentication. Then you can move the certificates to your servers in some way. This would mean one machine having access to the internet and the private servers though.
But also, LE certs expire after 90 days, meaning this manual process needs to be repeated every <90 days.
Though, I'd be curious to know what the threat model for such a server is. Why is it not allowed to connect to the Internet? Can a different internal machine connect to it? And if so, is that machine allowed to connect to the Internet? What happens if someone connects to that machine and opens a tunnel through it to the Internet (if at all possible)? Understanding the threat model will tell you how best to use the dns-01 challenge to generate certs and keep them updated.
Alternately, you could use a fast CA solution like Netflix's Lemur or step-ca, or even just openssl, make your own CA, and distribute it to everything. If your threat model says that stuff should never connect to the outside world, why risk it by repeatedly moving things in from outside? Just generate a closed CA for your closed environment, trust it once, and move on.
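A sketch of the "just openssl" route (the names and lifetimes below are arbitrary picks; a real deployment would at least add subjectAltName and key-usage extensions):

```shell
set -e
cd "$(mktemp -d)"

# 1. The private root: this is the single cert your devices need to trust
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 825 -subj "/CN=My Internal CA"

# 2. A key and CSR for an internal host
openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr \
  -subj "/CN=switch01.internal.example.com"

# 3. Issue the host certificate from the private CA
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 398 -out host.crt

# 4. Sanity-check the chain
openssl verify -CAfile ca.crt host.crt
```

The hard part, as noted elsewhere in the thread, is not these four commands but getting ca.crt into every trust store.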
Reminds me of how either Xiaomi, or some other fairly big smart-things company, bricked a few million of their smart lightbulbs when their root cert expired and they had no valid cert left to sign new firmware.
Maybe some kind of wildcard approach would be helpful, where you generate your Let's Encrypt certificate with a wildcard, and are able to distribute it automatically to every device via a standardized API.
That way only one device (possibly a Raspberry Pi) would actually be confronted with the task of generating the certificate and validating the verification requests from LE's servers.
This is what I'm doing at home: on a public VPS I run a custom-made DNS/HTTP server, which is pointed to by the NS record for the subdomain int.example.com at my DNS provider for the domain example.com. So everything but int.example.com is under the control of the DNS provider; int.example.com is under the control of my custom DNS/HTTP server.
The local Raspberry Pi invokes certbot, gets the authentication tokens from LE's servers, POSTs them to the DNS/HTTP server via HTTP, LE's servers query the DNS server for the TXT record, and certbot then continues with its work of storing the new certificates. Finally, a script distributes these certificates to the local HTTP servers.
The thing about these certificates is that they are wildcard certificates for *.int.example.com, so I get one wildcard certificate covering server-1.int.example.com, server-2.int.example.com, ...
If the IoT devices had a standardized API for deploying certificates to them, I could access the IP camera via https://ipcam-1.int.example.com.
The public DNS/HTTP server does not respond to requests for any of the *.int.example.com subdomains. The local Pi-hole has all the DNS records for those subdomains.
Let's Encrypt revolves around short-lived certificates and automated renewal. I think your best bet is an internal "service" (could be in a docker container) that does the updating. If you can SSH into the device, you can write a script that does it.
> Some of us could go through the trouble of setting up a private CA to deal with this issue
I've read about doing that for development purposes, but it sounds complicated. So I wonder: couldn't there be an open-source solution, maybe an npm module, that creates a private CA for me when I run it?
I mean if it is complicated to create a private CA, why not automate that task?
The solution is somewhat simple: have Let's Encrypt (or any root CA) issue an intermediate CA certificate constrained to your domain. You'd "only" need devices to trust the root CA, and you could issue certs at will. The problem is that this use case was/is a threat to the CA business model, and it is not really supported in the current cert infrastructure.
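For what it's worth, the X.509 machinery for this already exists (the Name Constraints extension), and openssl can demonstrate it locally, even though no public CA will issue you such an intermediate today. All names below are made up:

```shell
set -e
cd "$(mktemp -d)"

# A root CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -days 30 -subj "/CN=Example Root"

# An intermediate that may only issue under .internal.example.com
cat > int.ext <<'EOF'
basicConstraints = critical,CA:TRUE,pathlen:0
keyUsage = critical,keyCertSign,cRLSign
nameConstraints = critical,permitted;DNS:.internal.example.com
EOF
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Example Constrained CA"
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -days 30 -extfile int.ext -out int.crt

# A leaf inside the permitted namespace validates...
printf 'subjectAltName = DNS:router.internal.example.com\n' > ok.ext
openssl req -newkey rsa:2048 -nodes -keyout ok.key -out ok.csr \
  -subj "/CN=router.internal.example.com"
openssl x509 -req -in ok.csr -CA int.crt -CAkey int.key -CAcreateserial \
  -days 30 -extfile ok.ext -out ok.crt
openssl verify -CAfile root.crt -untrusted int.crt ok.crt

# ...while a leaf outside the permitted subtree is rejected
printf 'subjectAltName = DNS:www.victim.example\n' > bad.ext
openssl req -newkey rsa:2048 -nodes -keyout bad.key -out bad.csr \
  -subj "/CN=www.victim.example"
openssl x509 -req -in bad.csr -CA int.crt -CAkey int.key -CAcreateserial \
  -days 30 -extfile bad.ext -out bad.crt
openssl verify -CAfile root.crt -untrusted int.crt bad.crt || true  # fails: outside the permitted subtree
```

So technically the constraint can be enforced at validation time; the missing pieces are universal client support for critical name constraints and, above all, a CA willing to issue such a certificate.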
Yep, as long as relevant specs are not implemented by the majority of things touching trust (including appliances with very long update cycles), this is, sadly, "no bueno".
ECDSA is fantastic for all the reasons touched on in this article, but I’m sure the use of NIST curves will draw some criticism. As far as I know, no cryptographer seriously believes the curves are backdoored, but it nonetheless seems to be a hot topic of online discussion; DJB lists them as “unsafe”. Of course, other curves like Curve25519 don’t have the same widespread deployment that the NIST curves enjoy, so this is a very sensible choice.
Even better than curve25519, I would prefer to see the related curve448 supported. I haven't heard any reason to prefer the NIST curves when open alternatives exist. From what I understand, curve25519 is significantly faster than the NIST curves, in addition to being better understood.
The only real advantage I know of with the NIST curves is that they might be more widely supported, which you mentioned, and which is all the more reason why it would be good for an organization like Let's Encrypt to push the industry forward with regards to curve448 support.
I wouldn't mind if they offered options for what kind of certificate you would like them to issue, so people who favor NIST curves could get a certificate signed that way, and people who prefer curve25519 or curve448 could get a certificate signed that way, but I also realize that more maintenance burden is never fun.
But, those are just my thoughts on the issue... maybe I'm completely wrong.
Other keys violate the Baseline Requirements which are common rules agreed between the major CAs and Browser vendors. Now, a CA could choose to just disregard the BRs, but then they're automatically in violation of the root trust programme rules for the major root trust programmes - at least Microsoft, Apple, Mozilla, Google, then arguably Oracle (because Java) and some other smaller outfits. So that's probably the end of that Certificate Authority, at least for the Web PKI.
Individual root programmes also have their own rules, and so in this case Mozilla's rules are important because they further restrict keys to RSA or ECDSA P-256 or P-384 only.
Now, those are just policies, and you can change policies, but that's not up to Let's Encrypt. If you believe Mozilla needs to support Curve25519, that's a bunch of software development you can volunteer to do, followed by lobbying all their competitors, because it's futile if only Firefox supports it anyway.
The reasons for using the NIST curves might have less to do with technical merits or security, and more with regulations and certifications. Certifying something that includes already-certified cryptography (the NIST curves) is easier/cheaper than otherwise.
As for backdoors, nobody really knows... the nature of the algorithm itself makes it practically impossible to even detect a backdoor when there is one. So far there's no evidence of one, though. So if there is a backdoor, whoever has knowledge of it is damned good at keeping it a secret; in my opinion a bit too good to be likely. That said, NIST has been caught with its pants down before, which is rather awkward for the organization supposed to certify such things. One can even wonder what worth or real meaning certifications still have after they got caught with what they did. I'd personally consider it mostly a US national affair, but the reality is that the rest of the world is still dominated by US standards when it comes to the internet.
All in all, the reasons for using the NIST curves might be more about economics and checkbox security than about actual security or efficiency.
Stodgy, conservative organisations like banks and the military created the market for hardware security modules, FIPS-140 dongles, smartcards, TPMs and so on.
And they couldn't care less about sticking it to the NSA by rejecting the NIST curves. Whereas if you say "government standards compliant military grade encryption" they like that a lot.
If you aren't one of those organisations, but you'd still like to use a HSM/TPM/smartcard, no Curve25519 for you.
In modern TLS, the keys on an SSL certificate don't directly encrypt the session key used to protect communications between your browser and the web server. Instead, the certificate key only signs the Diffie-Hellman share that the web server uses. Web-server operators are free to use Curve25519 for their ECDH shares if they want to; the curve used there does not have to match the certificate's curve.
Even if the NIST curves are backdoored, the backdoor wouldn't be able to passively decrypt TLS sessions. At best it might allow someone with knowledge of the backdoor to impersonate a web server, but that's an active attack, and it would be very risky for the attacker: they'd run a huge risk of disclosing their backdoor capability.
Active attacks are effective and not that risky. For most web services, the attacker only needs to intercept a single transaction to get the session cookies, and then you are pwned. Since they only need one, they can hit you with a DNS cache-poisoning attack and a simultaneous clickjacking attack while you're on public WiFi.
In general, I'm skeptical of any "this attack isn't really practical" arguments. They have a way of becoming practical enough for high-value targets, so it's not something I want to use.
This feels like a trap people fall into when considering how attacks work. You're right, any one instance of active TLS interception is vanishingly unlikely to be detected. But an adversary conducting that attack at any kind of scale will be deploying instances of that attack regularly. The cost of detection can be very high --- it's not "public opprobrium", as so often seems to be the mental model on HN, but rather disclosure of sources and methods to other adversaries, who are themselves exquisitely well equipped both to detect stuff and also to work back to expand the scope of tradecraft lapses in drastic and surprising ways.
Here, though, it's even simpler: you're positing an attacker who is exploiting a cryptographic vulnerability unknown to science simply in order to conduct an individual TLS interception. The consequences of the disclosure of such an attack --- one that would be discernible by amateurs --- go far beyond what curves people select in the future.
The US and a few others routinely sink billions of dollars into weapons systems they'll only ever be able to use once, like strategic nuclear submarines.
So the fact that a backdoor could only be used briefly until it was discovered, doesn't mean that some agency wouldn't invest a lot of effort into developing it, just in case.
Can you link to a config example for nginx for what you’re describing in your first paragraph? Or a web framework in any language? I’ve never seen that done before, where the DH shares don’t match the certificate type. But, I’m definitely not an expert on the low level details of TLS.
All your DH shares already don't match the certificate type: you use ECDH with either NIST P-256 or X25519, and you likely use an RSA certificate.
The two asymmetric systems (one for key exchange and one for authentication) have nothing to do with each other; you can mix and match them freely (e.g. the post-quantum key-exchange experiments run by Google and Cloudflare).
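As a concrete (hypothetical) nginx sketch of that mixing, answering the config question above: an RSA certificate for authentication while preferring X25519 for the ECDHE key exchange. Paths and names are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;                      # placeholder

    ssl_certificate     /etc/ssl/site/fullchain.pem;  # contains an RSA public key
    ssl_certificate_key /etc/ssl/site/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;
    # Curves offered for the ephemeral key exchange; chosen
    # independently of the key type inside the certificate
    ssl_ecdh_curve X25519:prime256v1;
}
```

There is no directive tying `ssl_ecdh_curve` to the certificate because the protocol itself keeps the two roles separate.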
FIPS 186-5 and SP 800-186 (currently in final draft) will have new curves:
> NIST is proposing updates to its standards on digital signatures and elliptic curve cryptography to align with existing and emerging industry standards. As part of these updates, NIST is proposing to adopt two new elliptic curves, Ed25519 and Ed448, for use with EdDSA. EdDSA is a deterministic elliptic curve signature scheme currently specified in the Internet Research Task Force (IRTF) RFC 8032, Edwards-Curve Digital Signature Algorithm. NIST further proposes adopting a deterministic variant of ECDSA, which is currently specified in RFC 6979, Deterministic Usage of the Digital Signature Algorithm and Elliptic Curve Digital Signature Algorithm. Finally, based on feedback received on the adoption of the current elliptic curve standards, the draft standards deprecate curves over binary fields due to their limited use by industry.
Yes, there isn't adequate browser (or other client) support to use Curve25519 signatures in a public CA's certificate.
I'm actually kind of confused about exactly what would be required and exactly what the state of it is, so maybe someone can chime in -- I just know it's not ready.
Another thing I don't understand is what the threat model looks like for applying hypothetically backdoored curves. That is, can you interactively or offline attack ECDHE, ECDSA, or both? Can you derive the private key from the public key, by inspection and calculation, by observing random signatures, by obtaining chosen signatures, or something else?
"Also in 2018, RFC 8446 was published as the new Transport Layer Security v1.3 standard. It requires mandatory support for X25519, Ed25519, X448, and Ed448 algorithms.[24]"
I haven't confirmed this myself, but it seems like anything that supports TLS 1.3 (a lot of things, relatively speaking) supports both curve25519 and curve448.
> Another thing I don't understand is what the threat model looks like for applying hypothetically backdoored curves.
Obviously, no one has proof that the NIST curves are backdoored, so who knows what kind of backdoor they would have? No one really wants to find out the hard way, though.
> "Also in 2018, RFC 8446 was published as the new Transport Layer Security v1.3 standard. It requires mandatory support for X25519, Ed25519, X448, and Ed448 algorithms.[24]"
This is oversold, if not simply factually incorrect.
RFC 8446 says
> A TLS-compliant application MUST support digital signatures with rsa_pkcs1_sha256 (for certificates), rsa_pss_rsae_sha256 (for CertificateVerify and certificates), and ecdsa_secp256r1_sha256. A TLS-compliant application MUST support key exchange with secp256r1(NIST P-256) and SHOULD support key exchange with X25519
x448 is not mandatory. x25519 is not mandatory, it's only a SHOULD. In addition, key exchange is for diffie-hellman ephemeral keys; that's separate from the keys used in the certificate. I don't think there's widespread support for X25519 (or X448) in X.509 certificates.
There’s a small point of terminology that’s important for getting a clear idea of the state of support for curve25519 in IETF protocols.
X25519 is the name used for key exchange (Diffie-Hellman) which has been widely supported for some years now since it was added to TLS.
Ed25519 is the name used for digital signatures (Ed is short for Edwards) which is what you need for certificates. It was a few years later to arrive in OpenSSL because TLS didn’t need it.
Note that this document is (of course) not an implementation, or even an explanation of how to implement support for this, but only a description of how to spell X25519 in ASN.1 in order to write it into a PKIX X.509 certificate.
So this document is a precursor to widespread support for X25519 in certificates only in the same way that coming up with a name for your kid is a precursor to giving birth. It's not strictly necessary, and it's certainly not the most important thing you needed to do, but I guess it can be part of the process.
> A lot of things, but probably not enough things for a certificate authority that has to support pretty much everything in widespread use to use.
TLS 1.3 is, in browser-centric scenarios, about 50% of clients today. Put another way: at least half of the internet does not support TLS 1.3, and that share is larger still if you are in the embedded, large-enterprise, or similar spaces.
It will take a while to close out that last 50%, and I do not blame Let’s Encrypt for taking the clearly practical approach here (by far).
Could a certificate authority not have several root certificates? I really don't see any reason they would have to have only one root certificate.
They could let people choose what kind of certificate they want issued to them, and the default could be whichever is most widely supported or whatever.
People can already choose what kind of certificate they want issued to them, because the CA does not generate your private keys.
Them giving you a certificate with an RSA public key in it is impossible if you submit a Certificate Signing Request with an ECDSA public key in it (or vice versa).
Then it's only down to what intermediate certificate you want to serve. That's the constraining factor; since you don't get to choose which private key the CA signs your certificate with, you have to use the intermediate certificate whose public key corresponds to the private key that made that signature.
The sensible thing for a CA to do is to sign with (and supply) an ECDSA intermediate if they get handed an ECDSA CSR, and an RSA intermediate if they get handed an RSA CSR. I imagine this is what Let's Encrypt does already.
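To make the first point concrete: the key type is fixed on the subscriber's side before the CA ever sees anything, because it's baked into the CSR (a sketch with placeholder names):

```shell
set -e
cd "$(mktemp -d)"

# Generate an ECDSA P-256 private key; the CA never sees this file
openssl ecparam -name prime256v1 -genkey -noout -out server.key

# The CSR carries the matching EC public key, so any certificate
# issued from it can only contain that EC key, never an RSA one
openssl req -new -key server.key -subj "/CN=www.example.com" -out server.csr

# Inspect the CSR: the public key algorithm is id-ecPublicKey
openssl req -in server.csr -noout -text | grep "Public Key Algorithm"
```

All the CA chooses is which of its own keys (and hence which intermediate) signs the result.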
It can't be what they do already because Let's Encrypt does not currently operate an ECDSA intermediate. This document describes a hierarchy they've just recently issued, and which is not yet in production use.
If you present a CSR for an ECDSA public key today, Let's Encrypt will issue a certificate signed by their RSA intermediate Let's Encrypt Authority X3, for your ECDSA key.
They haven't actually specified whether you'll get certificates in the new ECDSA hierarchy from the same API endpoint or need to use a different endpoint.
Supporting a variety of options instead of just doing one thing has its own costs in terms of added complexity and maintenance. Maybe they thought about it and decided it wasn't worth it, at least not for now. There's nothing prohibiting them from offering more curves in the future.
I disagree: DJB and others have researched how backdoors in elliptic-curve cryptography (and other algorithms) could work. In Dual_EC_DRBG we even know exactly how they would work.
While I don't believe there's any publication that explains the exact mathematics of a backdoor that could have been used in the NIST curves, I think the likely algorithmic consequences for EC security assumptions probably are understood -- just not by me. :-)
I'm sure this is about the mathematical soundness of the prime numbers used in elliptic curve cryptography, but wouldn't it be much easier to access half the internet's traffic if root certificates were compromised instead?
When you say access, do you mean intercept and decrypt?
I’m fairly sure root certificates only affect the browser’s trust of a server’s public key and are not involved in the encryption itself; for that you’d still need the server’s individual private key.
And even then I’m still not sure if the private key is sufficient given modern encryption protocols. Hopefully someone else can chime in to elaborate here.
Regarding your last point, even if you can monitor (but not alter) all traffic on the Internet, in real-time, all of the time, forever, that does not let you decrypt any traffic (now or in the future) if the parties are using forward-secret ciphersuites (those whose key exchange is performed with ephemeral private keys, instead of the key corresponding to the public key in the certificate).
At the moment, these are the DHE- and ECDHE- ciphersuites in TLS 1.2 and below; forward secrecy is mandatory in TLS 1.3 (RSA key exchange was removed).
This only becomes a concern if your adversary can alter traffic in real-time (because they can use a compromised root to obtain their own certificate for the service you're talking to, and pretend to be them to you, and you will then negotiate your keying material with them instead, which naturally gives them the ability to decrypt the traffic before forwarding it along to its intended destination). Several technologies, such as HPKP, CAA, and CT, are designed to eliminate or mitigate this.
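You can see the TLS 1.3 change from the ciphersuite names alone: the suites no longer name a key-exchange algorithm at all, because ephemeral (EC)DHE is the only option. A quick check with Python's stdlib (assumes a Python linked against OpenSSL 1.1.1 or newer):

```python
import ssl

# List the TLS 1.3 suites enabled in a default client context. Note that
# the names carry no key-exchange or certificate algorithm (no "ECDHE",
# no "RSA"): key exchange is always ephemeral in TLS 1.3.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
tls13_names = [c["name"] for c in ctx.get_ciphers()
               if c["protocol"] == "TLSv1.3"]
print(tls13_names)
```

On a typical build this prints TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256 and TLS_AES_128_GCM_SHA256.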
If you can obtain the theoretically short-lived, but certainly not truly ephemeral, STEK (Session Ticket Encryption Key) from a server, then any session that enabled session tickets, even if those tickets were never used, can be decrypted using the STEK.
"As far as I know, no cryptographer seriously believes the curves to be backdoored, but nonetheless seems to be a hot topic of online discussion. DJB lists them as “unsafe”."
I'm not very well educated in this space. Is DJB historically prone to exaggeration?
The NIST curves are unsafe regardless of whether they are backdoored. To use them safely, you have to carefully prepare your private key, because some keys are unsuitable. For Curve25519, you just need a decent secure random source; all keys in the keyspace are good.
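For contrast, here is what that "preparation" amounts to on Curve25519: the RFC 7748 "clamping" step, which turns any 32 random bytes into a valid private scalar. A minimal Python sketch (illustrative only, not a full X25519 implementation):

```python
import os

def clamp_x25519(k: bytes) -> bytes:
    """RFC 7748 scalar clamping for X25519 private keys."""
    b = bytearray(k)
    b[0] &= 248    # clear the 3 low bits: forces a multiple of the cofactor 8
    b[31] &= 127   # clear the top bit
    b[31] |= 64    # set bit 254, fixing the scalar's effective length
    return bytes(b)

# Any 32 bytes from a decent CSPRNG become a usable private scalar.
priv = clamp_x25519(os.urandom(32))
```

No rejection sampling, no key validation step: every output of the CSPRNG maps to an acceptable key.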
Apart from clamping not being fully and clearly justified in most of the earliest specs and papers (which led to a few broken and/or insecure implementations), it makes the whole narrative of the curve being "safe" quite arguable and leaves a flavor of marketing in the mouth (the NIST P curves have cofactor 1, so they are "safer" in that respect).
"NIST promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic interests of U.S. companies"
Because it would destroy their credibility in this field forever. Like, it's not impossible, but it'd have to be "burn all this accumulated trust right down" important. They can advance the US' interests in more subtle ways by just being above board.
Not that that's any guarantee with this administration, of course, but in general it's sound.
...While also making it more resistant to differential cryptanalysis (called DCA from now on). DES died because of the small keyspace. Without the NSA/NBS changes it would have been insecure from the start and would have been a lot easier to crack once DCA was out in public.
The changes proposed made DES harder to crack for the NSA, but less "future safe" once computational power caught up. I don't know enough about DCA to know whether DCA-weak-DES could have been made safer with something like 3des, but I wouldn't want to bet on it.
Working in the finance sector, I haven't seen push back because I haven't seen anybody or anything use ECDSA. ^^
Usage and support are relatively low. Most people have no idea what ECDSA is or how to use it. There are some issues in Java 7 and 8 if you try to use ECDSA. I personally don't find elliptic curves and their parameters easy to grasp, and I have a math background (multiplying large numbers was much more palatable); I think a developer would quickly hit a roadblock if they tried to work with it.
Long story short: there are no benefits to ECDSA (over RSA), so there is really no reason to push for it.
The only place where DSA is somewhat used is SSH keys. SSH has its own crypto routines; they added support for and pushed DSA for some time. Keys are a bit shorter, so they're nicer to copy-paste to GitHub or DigitalOcean.
(Not sure people realized this is sarcasm.) Right because if they believe something crazy like there being a backdoor in something recommended by the government then people will just say they are not a real cryptographer. Because of course no part of our government ever does anything tricky like backdoors and even if one part of the government did, it could never influence another part that made recommendations to the public for what security algorithms to use. And of course anyone suggesting that such a thing might be possible is just an insane conspiracy theorist.
I don't want to go too far OT, but had a question if someone wouldn't mind answering it or pointing me to some materials.
I've been digging into DNS and TLS lately. Given the way ACME works (essentially using DNS to verify you own a domain before giving you a cert), what's the reason you can't just put your public key in a DNS record? I'm assuming it has something to do with security issues at the DNS level, but don't really know.
This is basically the idea behind DANE. The problem is that DNS was historically unauthenticated and so just as vulnerable to on-path attacks as HTTP - perhaps more so due to the distributed, hierarchical nature which enables certain types of takeovers.
DANE assumes that DNSSEC is in place, so that the DNS records are authenticated. This allows the DNS records to be used to deliver TLS trust information in a way that is still ultimately rooted in a certificate authority (whoever operates DNSSEC for the TLD).
Adoption has been limited, mostly because adoption of DNSSEC itself has been limited, arguably because DNSSEC is overcomplicated and high-maintenance, but also just due to a lack of motivation. Similarly, users couldn't care less about how TLS trust is delivered, and DANE has never had a big corporate champion to push for it, so there just hasn't been the drive. Industry seems to have chosen DNS-over-HTTPS as the long-term solution to many of DNS's problems, and it clashes somewhat with DANE; I'm sure you could make them work together, but it would definitely be awkward.
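For concreteness, a DANE TLSA record is just three parameters plus a digest of the certificate or public key. A minimal Python sketch of the common "3 1 1" form (the SPKI bytes below are a stand-in for illustration, not a real key):

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    # Usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo),
    # matching type 1 (SHA-256): "pin this server's public key",
    # bypassing the Web PKI entirely (per RFC 6698).
    return "3 1 1 " + hashlib.sha256(spki_der).hexdigest()

# Stand-in bytes; in practice this is the DER-encoded
# SubjectPublicKeyInfo extracted from the server's certificate.
record = tlsa_3_1_1(b"\x30\x59fake-spki-for-illustration")
print(record)
```

The resulting string is what you would publish at `_443._tcp.example.com` - and it is only as trustworthy as the DNSSEC chain protecting it.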
As other comments said, DNS is vulnerable to man-in-the-middle attacks without DNSSEC, and DNSSEC is not widely adopted.
However, even with DNSSEC, bob could obtain domain.tld's keys and then use them to MITM alice's DNS requests, getting her browser to show a malicious site for domain.tld complete with a trusted certificate. Furthermore, bob can do this for alice and only alice, decreasing the odds of detection by a large margin.
With LE you gain:
- Targeted attacks are harder to pull off, since you would need to successfully attack the LE servers/networks first and then your actual target.
- LE participates in the Certificate Transparency initiative, meaning every time a certificate is issued for a domain it gets recorded.
These things are particularly relevant when you consider that in most cases the DNS is either directly or indirectly controlled by state actors at its root. With DNS-only, they could pull off such targeted attacks without anybody noticing. With LE, they can still pull off these attacks, but there would be a paper trail when they did.
DNSSEC is not about protecting against MITM attacks on DNS traffic. DNSSEC is about validating that DNS records have not been tampered with. Are you thinking of DNS over HTTPS and DNS over TLS?
MITM involves monitoring and/or altering the traffic between two endpoints which believe they are talking directly. If the DNS traffic between Alice's client and her configured DNS server can be intercepted, DNSSEC provides no real protection, because unlike TLS, all the information needed to verify signed records is provided in band via DNS. Once the traffic can be intercepted and return traffic spoofed, the attacker can return the appropriate records to make it appear that the response is properly signed, without any need for the DNSSEC keys used at the authoritative DNS server for that domain.
DNS over HTTPS, DNS over TLS, and DNSCurve are intended to protect against this type of interception/tampering and should be used alongside DNSSEC.
I think you are incorrect here. DNSSEC protects from the root servers' (.) keys downward. If you have a compliant DNSSEC resolver it should authenticate the entire chain using the pre-shared keys for the root domain.
For a MITM attack to work in this scenario, the attacker would have to first get fake keys for the root domain (that normally come with the OS) into your computer, at which point not even DNS over anything would work because they should be able to install TLS certs too.
As far as I know, all these DNS-over-TLS flavors' advantage is that they prevent eavesdropping, not spoofed traffic (because DNSSEC can do that too as per my explanation above).
The parent comment is more right than not: end-systems in fact don't perform DNSSEC validation, but rather rely on their DNS servers to do it for them. On the network path between an end-system and its DNS server, no cryptography is deployed to protect DNS records; rather, a single bit in the header signals to DNS resolvers that some DNSSEC validation has been performed, somewhere. Along that network path, any middlebox can tamper with the DNS.
It's not an academic concern, because the two most common modes of deployment for DNS are "with ISP DNS servers", in which case one of your most important and active DNS adversaries is on-path, and "with services like 8.8.8.8", in which case the whole Internet is between you and the thing that validated DNSSEC for you.
It's not a good system, and it's true that DoH does a better job at addressing this problem.
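For reference, the "single bit" in question is the AD ("authenticated data") flag in the DNS header. A small stdlib sketch of where it sits (bit positions per RFC 1035/4035; the query ID 0x1234 is arbitrary):

```python
import struct

# DNS header flag bits. A validating resolver sets AD to tell the stub
# "I checked DNSSEC for you" - nothing on the wire proves it, so any
# on-path middlebox between stub and resolver can forge or strip it.
QR = 1 << 15   # response
AA = 1 << 10   # authoritative answer
TC = 1 << 9    # truncated
RD = 1 << 8    # recursion desired
RA = 1 << 7    # recursion available
AD = 1 << 5    # authenticated data (DNSSEC validated upstream)
CD = 1 << 4    # checking disabled

flags = QR | RD | RA | AD                      # a typical "validated" response
header = struct.pack(">HHHHHH", 0x1234, flags, 1, 1, 0, 0)
print((flags & AD) != 0)
```

The stub resolver simply trusts that flag, which is exactly the gap DoT/DoH close on the stub-to-resolver hop.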
It's certainly true that if you want your resolver to live on a separate node that you trust, yet there's a network you don't trust between you and that node, DPRIVE attempts to solve that and DNSSEC does not.
But without DNSSEC a middlebox still gets to tamper anyway, since there's no authenticity of the answers you receive, you're just moving your trustworthy source of these potentially bogus answers for whatever that's worth.
Thomas believes your ISP is "one of your most important and active DNS adversaries" and yet astonishingly he trusts them to provide you with authentic DNS answers.
If you do local validation, which you're free to, DNSSEC solves the problem.
If you do remote validation but you trust the remote DNS server to do that on your behalf correctly, DNSSEC solves the problem and then any DPRIVE protocol brings those correct answers home intact.
If you believe there are certainly authentic answers where Google is (I have a lot of used iron available, central Paris area, it's in the form of a big tower so you can't miss it, buyer collects) then just DPRIVE lets you get their authentic answers. Notice that Google does not warrant that this is true, and "where Google is" might mean a data centre controlled by your country's censors.
I don't understand why you think I trust residential ISPs with the DNS, since that's the opposite of what I believe --- residential ISPs are actively monetizing the DNS.
The example you cited is "with ISP DNS servers". But if you in fact don't trust them, DPRIVE on its own doesn't help you.
Same way upgrading from HTTP to HTTPS doesn't prevent you getting misinformation from Conservapedia. The answers on the site are bad, so ensuring nobody "tampers" with those bad answers doesn't really help you.
DPRIVE is good stuff, and one of the things that's great about it is that it unsticks the deployability problem for DNSSEC client validation, but I'm guessing that's not the part you're keen on.
I'm lost. The point of using DOH (which isn't DPRIVE, which is itself a moribund project) is not using ISP DNS servers in the first place. Can you maybe rephrase this somehow?
DPRIVE is the working group that owns all the ongoing DNS privacy work, including RFC 7226 (upon which RFC 8484 DoH depends, and for which a bis document is probably to be expected next year) and the forthcoming BCP document describing best practices for all these privacy-preserving DNS services. The existence of a short-lived WG that spun up, consumed a good first draft, and spat out a single DoH document (RFC 8484) doesn't change that. I have no idea which "moribund project" you're referring to and I doubt it's relevant here.
As to "not using ISP DNS servers in the first place" I guess you do not consider Comcast to be an ISP?
Or perhaps you didn't hear. Both Chrome and Firefox will automatically give Comcast's customers Comcast DNS over HTTPS. Mozilla secured a TRR agreement from Comcast in exchange, and presumably Google feels that this agreement plus any undisclosed paperwork from Comcast that they were shown as part of their onboarding process is enough to likewise trust it automatically in Chrome. We expect to see more ISPs doing this over the next few years. If you cannot beat them, join them, eh?
I understand what you're trying to do with that mic-drop snark about Comcast, but obviously I don't think it's a good idea to direct your DoH queries to Comcast.
There's a working group that hopes to own everything on the Internet, but DoH happened outside that process, and was deployed (thankfully) over the objections of people involved in the group.
I wouldn't characterize it as "mic-drop snark" it's just that your assumptions were wrong.
There's actually a pretty big contingent on HN that wanted this, they see the alternatives as either my DNS query goes to the ISP I've already chosen and contracted with, or to Google/ Cloudflare/ OpenDNS who I maybe don't like and don't deal with otherwise - and they'd prefer the former.
The "point" of DoH is actually that it sidesteps ossification. One of the big differences from Mozilla's user privacy focused TRR is that Google's programme cuts to the chase immediately. Your DoH service must work for well formed queries you don't understand, in particular HTTPSSRV. Why? Because the "point" is to get to a world where we can actually use technologies like QUIC and ECH and we can't deploy those technologies at scale in a world where your 1990s ad-tech DNS server responds to anything other than an A query with confused silence.
One of the things DoH allowed them to throw in is a way to express "I won't tell you the answer because censorship" as distinct from just giving bogus answers. Again Google's programme is specific that it's fine to censor stuff, but Comcast are obliged to use this mechanism so that Chrome can tell users "No, you aren't allowed to look at Porn Hub" rather than "Huh, Porn Hub doesn't exist". And again, even though I'd guess you're like me and don't want censorship lots of people do so defeating censorship is not "the point" either.
You continue to make a strong case for not using Comcast's DNS servers. Since I entered this conversation believing nobody should use Comcast's DNS servers, I'm confused why you're making that case to me personally, but whatever, I agree.
What happens if both DNS requests are hijacked? Could an attacker return both a different IP and a different public key that correspond to a malicious server?
Edit: I should clarify, I think the issue stems from DNS being a potential attack vector itself. You can't blindly trust what DNS tells you. This is precisely the problem that certificate chains (issued by a trusted third party) purport to resolve -- one of trust.
Well, there's DNSSEC. Implementation details (such as lack of practice in key rotation, TTLs, etc.) aside, DNSSEC works, and Let's Encrypt validates it.
Then there are Certificate Transparency logs. That's a passive measure at this point, but it's a measure regardless.
Let's Encrypt checks DNS validation and DNS CAA from multiple PoPs, but I don't think it's enforced by CA/B requirements to do so (happy to be corrected).
One possible reason is that it's more likely that your individual DNS is being modified/tampered with than LE's servers. This way your individual ISP can't mess with things.
From the article I can see that using ECDSA instead of RSA could save up to 400 bytes per connection - that's nice.
Aside from certificate and key size, my understanding is that ECDSA is less computationally intensive than RSA, but this only gets a brief mention in the article. Does anyone know how CPU use compares for a typical TLS connection setup with ECDSA vs RSA?
Here's a 2017 Cloudflare blog post that compares the performance of ECDSA sign and verify operations (using 256 bit keys) against RSA sign and verify operations (using 2048 and 3072 bit keys) on various CPU architectures:
- Do the graphs mean that the "sign" operation of OpenSSL when using ECDSA is many times faster and the "verify" operation is ~4 times slower (compared to RSA)?
- I suppose that "sign" is linked to issuing certificates, while "verify" would be involved in checking certificates?
If it were just checking and issuing certificates, faster signing and slower verifying would be a bad tradeoff, since certificates are issued once and used many times; certificate size benefits notwithstanding.
The really nice thing performance wise is that when sending a certificate via TLS, you need to sign the TLS exchange up to that point. Making that faster means you can support a lot more connection handshakes with the same server.
It makes sense to use ECDSA for leaf certificates, because the TLS server can then handle more clients compared to an RSA-based certificate of the same strength (the private-key operation is much cheaper with ECDSA and is needed for every TLS handshake). The client, of course, needs a few more cycles to verify the signature, but that is not noticeable most of the time.
IMHO it does not really make sense to use an ECDSA root certificate unless you are in a very constrained environment where every byte counts. The root certificate is never transferred to the client during a TLS handshake, so the size benefit is minimal (the intermediate certificate will be a bit smaller, because ECDSA signatures are smaller), but signature validation will take more cycles on the client in every TLS handshake.
Other than that, it is a good thing that Let's Encrypt now has an ECDSA root. If researchers find a problem with RSA in the future, we have an alternative ready to use.
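As a back-of-envelope on the size point (these are my approximate typical DER sizes, not figures from the article):

```python
# Approximate DER-encoded sizes in bytes; exact values vary by a few bytes.
RSA2048_SPKI, RSA2048_SIG = 270, 256   # public-key blob, signature
P256_SPKI, P256_SIG = 91, 72           # public-key blob, worst-case DER signature

# Each certificate in the chain carries one public key and one signature
# from its issuer, so switching both to P-256 saves roughly:
per_cert_saving = (RSA2048_SPKI - P256_SPKI) + (RSA2048_SIG - P256_SIG)
print(per_cert_saving)  # ~363 bytes per certificate sent in the handshake
```

That lines up with the article's "up to ~400 bytes per connection" once you account for which certificates in the chain actually change.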
That blog article is a poor explanation of the issue.
The root CA from Namecheap was expiring. They tried to recreate it, changing only the date, so they could continue issuing certificates to customers with it in the same way.
They hoped users/systems would accept their newer CA automatically after the old CA expired. CAs are additive: there are many configured on a system, and it's standard practice to add new ones while keeping the existing ones.
This blew up in their face monumentally, because having two nearly identical CAs conflicts. Things failed to verify after the original CA expired.
They all become untrusted if the verifier does its job. That said, a root doesn't issue site-level certificates, so the intermediates and the issued certs themselves would expire first.
Any untrusted, expired, or revoked certificate in the certificate chain (which includes the root, the leaf certificate, and all intermediates) means the client will not trust the connection.
It would be nice if they included a TL;DR notice at the top saying: “1. If you are a user of Let's Encrypt today, this announcement does not imply that you need to do anything at all, and 2. The main thing is that new certificates are smaller and will take up less bandwidth, and are also safer.” In any case, thanks for the great, free service!
In hindsight, Let's Encrypt is good at backwards compatibility and actionable information. I suppose the target audience is technical.
LE is phasing out their v1 API, and they email all users of v1 who recently issued certificates with it. I suppose the majority of non-technical users of LE don't even visit the web site, apart from downloading the ToS PDF.
Remember that *.domain cannot span dots in certificates, and consider the consequences for SNI if you try to use the FQDN as a dot-separated namespace beyond the "flat" model.
The convergence of certificate issuance, domain names, domain-matching logic, configs, port binding, and information leakage: it's a nightmare. The 5-tuple be damned; the higher protocol layers are now deciding how to de-mux your service.