Cool, but as others said fairly easy to do on your own already. Sort of related but a problem I have been trying to solve is, how to create a trusted certificate for a new device on a private network automatically without any configuration? For example, imagine you are turning up a new router or switch with a Web UI for management. Traditionally this is served on http initially and you can optionally install a cert and enable https. It would be nice to have a way to have this happen automatically to obviate the need for http at all. Also consider that this private network may not be connected to the internet at all and so can't rely on outside servers.
I was kicking around the idea for an RFC that would be something like this. Your DHCP server would have an option to act as a CA (or point at one), with the ACME protocol enabled. Then, you add DHCP option fields telling clients to trust this new CA certificate, but only for hosts on this local network (to avoid being able to MITM outside connections). Then clients on this network would be able to request certificates from the CA using ACME automatically, and others on the network would automatically trust them, assuming support for this standard was added to the OS, browsers, etc.
There is SCEP (https://en.m.wikipedia.org/wiki/Simple_Certificate_Enrollmen...) which allows that and is often found on network devices.
You need a PKI which exposes a SCEP endpoint (ejbca or dogtag supports this).
Whether that certificate actually gets used for HTTPS is up to the device's SCEP client implementation (or something else on the device), though.
On servers, certmonger can do SCEP, IIRC. On private infrastructure, FreeIPA provides a packaged Dogtag, and you can create your own certificate profiles. Clients enrolled in FreeIPA have certmonger installed to refresh certificates.
> You need a PKI which exposes a SCEP endpoint (ejbca or dogtag supports this).
Uhh...
> [...] ejbca [...]
Now you have two problems.
What I mean is, if you’ve been already running EJBCA for whatever reason then this is perhaps reasonable, but if your current setup is at the level of typing `openssl req` into a terminal (whether that’s a good idea or not), it sounds like a lot of additional complexity. (Can’t say anything about dogtag.)
I’ve been waiting forever for somebody to add an ACME backend to the Go SCEP library[1], but it doesn’t look like that has happened. In the meantime it includes a fairly competent standalone CA server at the abovementioned invoke-openssl-by-hand level.
Note that SCEP basically requires a trusted network, though, from what I remember.
You can get somewhat close to that with getlocalcert. When the device first boots, it calls the getlocalcert API to register a free, anonymous domain name (negative: it's a UUID domain, ugly). You'd need outbound connections to api.getlocalcert.net and an ACME certificate authority (Let's Encrypt) to issue a certificate. When the user connects to http://<ip-address> it redirects to https://<uuid>.localcert.net, which the browser trusts since the cert is from a public CA. No need to manually trust a cert. The user would bookmark it and all future use is HTTPS-only.
If I see usage like this, there are ways I can make this cleaner.
> it calls the getlocalcert API to register a free, anonymous domain name
...rather than to get an actual certificate.
I didn't dig past the front page, but as far as I can see, they don't issue HTTPS certs at all, easy or otherwise; they offer managed DNS subdomains. You want a cert, you have to get it somewhere else.
As a security engineer I'm wary of tools that offer to issue certificates for you. You've got to trust that they won't keep a copy of your private key. ACME clients [1] generate keys on your device, which is the recommended way.
But you're right. Issuing Let's Encrypt certificates on behalf of users could simplify the process even more.
Solved for both Windows and Linux (Debian, Arch, Fedora). I may well have solved this for OSX too, but I am not buying Apple hardware just to test it.
What my solution does is check for certificates created by the project during a build step. If the certificates don't exist it creates them, installs them in the OS, and also installs them in the browser. Installing them in the browser is required on Linux, and only for Firefox on Windows. These are cert chains containing a self-signed root, an intermediary CA, and a local domain cert.
I have these certs configured to work with my own domains so that I can connect to a subdomain addressed to a loopback IP and the cert recognizes that domain, but the domain "localhost" works as well. Sometimes it's nice to access a real domain to avoid any restrictions imposed upon accessing the address "localhost". You just have to change the domains at the bottom of your OpenSSL option files.
Here is how I solved it with vanilla TypeScript in Node.js (also requires locally installed OpenSSL):
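The script boils down to openssl invocations along these lines (a rough sketch - file names, domains, and lifetimes here are placeholders, not the actual code from the project):

```shell
# Sketch of the chain the build step creates: self-signed root ->
# intermediate CA -> leaf cert with SANs for the local domain.

# 1. Self-signed root CA
openssl req -x509 -newkey rsa:2048 -sha256 -days 3650 -nodes \
  -keyout root.key -out root.crt -subj "/CN=Local Dev Root"

# 2. Intermediate CA, signed by the root
openssl req -newkey rsa:2048 -nodes -keyout ca.key -out ca.csr \
  -subj "/CN=Local Dev Intermediate"
printf 'basicConstraints=critical,CA:TRUE,pathlen:0\nkeyUsage=critical,keyCertSign,cRLSign\n' > ca_ext.cnf
openssl x509 -req -in ca.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -days 1825 -sha256 -extfile ca_ext.cnf -out ca.crt

# 3. Leaf cert for the local domain, signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=dev.example.test"
printf 'subjectAltName=DNS:dev.example.test,DNS:localhost,IP:127.0.0.1\n' > leaf_ext.cnf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -sha256 -extfile leaf_ext.cnf -out server.crt

# Full chain as served by the local web server
cat server.crt ca.crt > chain.pem
```

root.crt is what gets installed into the OS and browser trust stores; chain.pem plus server.key is what the dev server actually serves.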
Sounds like you're describing a process similar to SCEP over HTTPS. That is available (albeit Windows-hosted) and fully functional across any client OS.
Or, it's super easy to roll your own using letsencrypt.
1. Buy your own public domain (such as companyname.dev)
2. Set up a LetsEncrypt wildcard certificate with DNS validation
3. Update your /etc/hosts to something like `127.0.0.1 companyname.dev`
We have this working with multiple developers, each renewing their certificates themselves. Works great, it's simple, and we don't need to trust an extra third party.
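For anyone copying this recipe, step 2 with certbot's manual DNS flow looks roughly like this (a sketch, not our exact commands - and most DNS providers have certbot/lego plugins that automate the TXT record instead of the manual prompt):

```shell
# Issue a wildcard cert via the DNS-01 challenge; certbot prints a TXT
# record (_acme-challenge.companyname.dev) for you to create at your DNS host.
certbot certonly --manual --preferred-challenges dns \
  -d companyname.dev -d '*.companyname.dev'

# Step 3: point the name at loopback for local development.
echo '127.0.0.1 companyname.dev' | sudo tee -a /etc/hosts
```

This obviously needs a real domain and access to its DNS, so treat it as the shape of the commands rather than something to paste blindly.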
Correct, a distinction with this service is that the domain names are free. One part of my decision to build this service was the fall of Freenom (free domain names).
Typically free domain name services are full of spam, malware, and other junk. getlocalcert seeks to avoid that as it only permits private network usage. My hope is that this model can serve that niche while avoiding abuse.
I'm potentially using this as a portfolio piece for my freelance consulting work [1]. Many clients require NDAs, so it can be tough to talk about specific, recent accomplishments.
Operational costs are low: around $40/year in domains and $20/month in servers. Soliciting donations could probably cover that (some similar free services work that way).
Last time I checked, those wildcard LetsEncrypt certs take more work to get, like passing a DNS-based TXT record challenge. Then once you have the wildcard certs, they only last 3 months. Once obtained, they can be manually copied into the LAN using a tool like wormhole. There are a lot of manual steps here, which is far harder than letting certbot auto-renew certs in the cloud - usually requiring no manual intervention once you succeed that first time.
One of my inspirations for getlocalcert is a tool to make DNS-01 easier.
acme-dns lets you add a CNAME to another DNS zone, which lets you issue certificates for the former domain name using a convenient API for the latter zone. Seriously, read about it, it's awesome.
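The delegation itself is a single CNAME in the zone you care about, something like this (placeholder names; the target host comes from acme-dns's registration API):

```
_acme-challenge.app.example.com. 300 IN CNAME <registered-uuid>.auth.acme-dns.io.
```

After that, your ACME client only ever updates the acme-dns zone over its API; the credentials for your real DNS provider are never exposed.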
All the steps are automatable though. I know because I built it into my own server engine.
This service looks like the same thing. I guess if you're limited to certbot then you can only do what it does, but I'm guessing there's lots of alternative software that'll do the DNS challenge if you look.
> Once obtained, they can manually be copied into the LAN using a tool like wormhole.
We haven't needed to copy the certs around the LAN. It works fine with devs just individually running certbot renew as needed.
Yes, it took us a fair bit of fiddling around to work out how to do it, but the final result is super simple. So I definitely would have considered a project like this in the past, but now that we've got the scripts for it, it's pretty simple.
Somewhat related - I made a bridge server [1] that lets ACME clients use standard RFC2136 to solve DNS-01 challenges for internal names without them needing credentials for the actual DNS backend (Route 53 in my case).
I did exactly the same for our local-cloud products.
Our local-cloud program connects to our "certificate server", and asks for a name/ip combination.
Our certificate server gets it using API access to our "local-cloud" domain. The local machine receives it.
So the end user does not have the Domain credentials. They have credentials to our cert server, but those have very limited value (and would need to be decrypted first.)
Interesting idea, but I think I'll stick with DNS challenge and LetsEncrypt. Lazy me really needs to finally sit down and automate the process (all the pieces are in place on my registrar's side). Did the private CA thing for almost two decades, and yes, it was a PITA, but even that was mostly because I failed to invest the time to automate it (Linux and Windows devices aren't _that_ hard to manage with OSS tools, printers might be a harder challenge; Android refusing to play nice with private DNS just makes it easier to disregard it altogether, as do all those IoT firmwares that don't even pretend to give you a choice). Ultimately, the current infrastructure, like that for message encryption, is just too damn convoluted. Eventually it might get fixed, but I'll probably be long retired to an Internet dead zone by then (raising goats can't be harder than managing fleets of application servers, can it?).
Haven't used this myself, but I did find out about Lego (https://github.com/go-acme/lego) recently and used it to get a Let's Encrypt cert for a local network website I have using the DNS challenge. It was fairly straightforward.
I've a simpler system for using an encrypted network for local/private use.
- Set up Tailscale on the server and personal device
- Deploy apps (self hosted tools that I use) on the wireguard interface created by Tailscale
- Deploy caddy as a reverse proxy for all these tools and use `.internal` domain
- Use Adguard's DNS rewrites feature to answer custom DNS response for specified `.internal` domains. The A record contains the IP assigned by Tailscale (wg0 interface) for that device.
Using this setup, as soon as I connect to Tailscale, I can access `miniflux.karan.internal` on a secure encrypted network without worrying about configuring HTTPS for local domains or renewing SSL certs.
It is straightforward to create your own Root CA and use it to sign certificates for your private network, using openssl.
Ensure that you implement the v3 extensions with @altnames for the certificates you issue, with a "DNS = <FQDN>" entry (or you can use an IP address instead of an FQDN; I have not experimented with that). .local domain names work fine.
If you do not implement "@altnames" the certificate will work for tools like curl, wget, and "openssl s_client", but not for browsers. (In my notes someplace I am too lazy to go and read there is a reference to the RFC that documents that clients do not have to use the subject/CN, meh)
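For reference, the relevant stanza in the OpenSSL config looks roughly like this (an illustrative fragment; section names and values vary with your setup):

```
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @altnames

[ altnames ]
DNS.1 = myhost.local
DNS.2 = localhost
IP.1  = 192.168.1.10
```

You then point openssl at it when issuing, e.g. with `-config openssl.cnf -extensions v3_req`.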
There are other tools (than openssl) that do this too.
Using this service means you really do not have a private network
I've been pretty frustrated with how private CAs are supported. Your private root CA can be maliciously used to MITM every domain on the Internet, even though you intend to use it for only a couple domain names. Most people forget to set Name Constraints when they create these and many helper tools lack support [1][2]. Worse, browser support for Name Constraints has been slow [3] and support isn't well tracked [4]. Public CAs give you certificate transparency and you can subscribe to events to detect mis-issuance. Some hosted private CAs like AWS's offer logs [5], but DIY setups don't.
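For what it's worth, issuing a name-constrained root with plain openssl is straightforward (a sketch assuming OpenSSL 1.1.1+ for -addext; the domain is a placeholder) - the catch, as above, is that client-side enforcement is spotty:

```shell
# Root CA that, for clients honoring name constraints, can only sign
# certs under one private domain instead of the whole Internet.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -sha256 \
  -keyout nc-root.key -out nc-root.crt \
  -subj "/CN=Constrained Home CA" \
  -addext "basicConstraints=critical,CA:TRUE" \
  -addext "keyUsage=critical,keyCertSign,cRLSign" \
  -addext "nameConstraints=critical,permitted;DNS:internal.example"
```

Marking the constraint critical means clients that don't understand it should reject the chain outright rather than silently ignore the restriction.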
Even still, there are a lot of folks happily using private CAs, they aren't the target audience for this initial release.
You might get away with installing your RootCA on your in-office PCs that you own.
Good luck though getting me to install it on my (external contractor) laptop or my (employee) phone, or my (hosted) VMs. -you- may plan to only use the CA for good, but I don't trust you (and by extension all your IT staff, present, past, and future) to be angels.
Exactly. It's one of the reasons I use separate VMs per client when I freelance. I want to install the cert, so I don't see the cert warning messages or accidentally click through a real MITM attack, but never on the host OS. That could let the client MITM all sorts of stuff like my email to other clients if Thunderbird syncs at a client site.
Kinda nuts actually that you can’t install a Root CA and tell the OS “but only ever use it for these and these domains”. That would solve this elegantly.
> Your private root CA can be maliciously used to MITM every domain on the Internet
I cannot see how. Do you mean as a specific attack on a computer with the private Root CA installed, if the attacker gets their hands on the Root CA private key?
There are two parts. The first is getting access to the private CA's private key. This is easy for insiders who already have access, but challenging for external attackers. The second is performing a MITM attack. You need to establish an in-the-middle network position and then craft certs for the sites you want to intercept. Insiders may already have the needed network position. Outsiders may find this challenging. Crafting a bogus certificate is trivial with a private CA.
Your private root CA can only be used maliciously if it's compromised. If you use certificate transparency to publicly leak all your local network administrative activities, then you won't need to worry about someone compromising your network because you've already compromised yourself.
Compromise of private CAs isn't always difficult. People store unencrypted keys on SharePoint or send them in email along with the password. There's also cases of rogue IT department staff using the private CA intended to be used for `*.corp` to MITM employee Internet traffic, so "compromise" isn't needed for harm to occur. For companies that do it right, yeah, the concern is minimal.
I think the risk of leaking internal domain names is real, but overblown. If you find a vulnerable internal service via CT logs you still need to connect to the private network to exploit it. If you're connected to the private network, you could have enumerated subdomains many other ways. Early knowledge of vulnerable subdomains can speed up an attack or target selection, but this is still a two step process. Most subdomains are pretty boring anyway; it's not surprising to learn that $corp has subdomains like webmail, wiki, etc. These names don't leak version or brand information.
I was playing with the idea of doing CT log poisoning, where you register and renew certificates for decoy subdomains alongside your real subdomain names. If you add enough noise, CT logs are no longer useful for enumerating private subdomains.
Sharepoint and email servers aren't public. Transparency logs are. Small companies might not care if the public knows what they're doing. In fact they'd probably benefit from the attention. However other companies are concerned, since leaks can fuel media rumors, harmful stock trades, or worse.
A root CA leak is hard to detect, has tremendous consequences, and adding limitations to the cert (TLD/domain) don’t work across every OS.
OTOH, leaking the existence of your private services (which are likely inaccessible over the internet for the general attacker) is a much smaller risk, and can be contained with wildcard certs as a workaround.
> It is straightforward to create your own Root CA and use it to sign certificates for your private network, using openssl.
I guess I'll be the one to say it: many people, myself included, get intimidated by using OpenSSL. The CLI has a bunch of different options, and dealing with the inherent complexity of a PKI makes it all a bit of a mess: you end up hunting online for the right invocations for whatever you need to do, and even then you aren't sure whether everything is done correctly.
At least that mirrors my own experience in the past and that of most folks I've talked to, both in private and prod projects. That's why I personally find software like Keystore Explorer actually decent for even things like running your own CA (probably for a small homelab, where you don't need automation): https://blog.kronis.dev/tutorials/lets-run-our-own-ca
And yet, even then I don't have the confidence that I've done things the "right" way; frankly, probably not. That's even before you get into things like staying on top of CVEs and keeping everything updated - overall, staying safe is very challenging. There's probably also the need to configure a web server, like Apache with SSLCACertificateFile, and just plenty of things to go wrong along the way, especially if you need mTLS as well.
> Using this service means you really do not have a private network
> many people, myself included, get intimidated by using OpenSSL
Unsurprising. It is intimidating. I think a bit old fashioned in that sense. Once was a time when men were men because they could configure sendmail. All a bit silly really.
There are other alternatives. Too few, tho. It took me a week to work it all out using OpenSSL.
If you use the Redbean web server, I put a lot of thought into making local network SSL easy. https://redbean.dev/#ssl Basically all you have to do is create a single key signing key which is installed on all your client devices. Then whenever you want to spin up a web server on any given host, you just give Redbean access to the key signing key, and it'll create RSA/ECDSA serving certificates automatically, which list all of the local system's hostnames and network IPs in the correct manner (which is a nightmare to figure out how to get OpenSSL to do). In fact you can even embed the key-signing key inside your Redbean so that when you SCP it to another machine and run it via SSH, it'll just do the right thing there as well, and Chrome won't shut it down. Or you could email your redbean single-file executable to a coworker, who just has to double click on the thing, and it'll start up a valid HTTPS server on their workstation which you can access, for easy local web dev collaborations.
This sounds very much like the famous HN comment saying that Dropbox is unnecessary because you can construct an analog using rsync and half-dozen other tools.
It's like saying that having a chair is unnecessary because you have a log, an axe, a saw, etc, and can build a seat to your liking when need be.
The thing is that when the need actually comes to be, you want a seat right now and of a known working kind, and strongly prefer a comfortable one. So thank the carpenter.
I've never had success with local CAs and self-signed certs on iPhone, despite going through the whole rigamarole of creating and installing an MDM profile with the trust root. Even after doing that, apps and Safari behave as if the certificate was untrusted. Is there some documentation you've successfully used and wouldn't mind pointing to?
Caddy will do this for you as long as you don't give it an ACME config. It handles the root CA creation as well as all of the certificates for any domains that you define in the Caddyfile, so all you have to do is find the root CA it generates and install it on your devices.
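Concretely, a minimal Caddyfile for that mode looks something like this (hostname and upstream are placeholders):

```
app.internal.test {
    tls internal
    reverse_proxy 127.0.0.1:8080
}
```

`caddy trust` will install the generated root into the local machine's trust store; for other devices you can fish the root cert out of Caddy's data directory and distribute it yourself.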
> all you have to do is find the root CA it generates and install it on your devices.
Unfortunately that's the hard part - getting Firefox/Chrome/Edge/Safari and the OS/libc/curl to all trust the CA. It's a little easier on Linux, but still convoluted.
And with an actual trusted CA the trust is way too wide (every domain).
Not to mention revocation lists and certificate renewals (and revocations)...
A dedicated letsencrypt "internal" domain is probably the sweet spot for most uses these days.
Maybe a serviceN.int.example.com where int.example.com allows for dynamic DNS updates/DNS challenges.
I've been thinking about setting up powerdns/coredns/knotdns with rfc dynamic updates for a dedicated internal domain - but for now tailscale with magic DNS mostly fills the need (unfortunately not with ssl for all k8s internal services, "only" VPN).
I've considered doing that for my home network, but if my local CA was ever compromised, someone would be able to generate certs for any site (such as my bank), and my devices would trust those certs if they were able to MitM me. Browsers don't support name constraints, so I couldn't restrict the CA to signing just my local domain.
Yes, I know, I'm small potatoes, and that would require a very targeted attack, but the idea still bothers me.
I really hope DANE will become more popular (and widely supported) some time. Works great on air gapped networks without the need for a publicly trusted CA or Let's Encrypt. No ACME daemon to monitor, just put your public key in a DNS record and forget about it.
I've usually seen DANE paired with DNSSEC, and on the internet it feels required. DANE on an air gapped network is new to me, do you just skip the DNSSEC part? I'd be fearful of joining a network that puts bogus DANE TLSA records for google.com, for example.
You're right that DANE kind of implies DNSSEC. Technically it can go without, but it's quite pointless to do because you cannot trust your TLSA record without DNSSEC.
DNSSEC works in an air gapped network when you deploy your own trust anchor in your DNS. I wouldn't touch a domain name that you don't own yourself (like google.com) but instead only use a domain name you purchased.
It surprises me that DANE is even listed on caniuse.com! I expected it to be way too exotic to be on that list. I'm under no illusion that browsers are going to support this anytime soon, unfortunately.
Now let's hope I didn't wake up tptacek to lecture us on how DNSSEC is bad and how it will eat your children. ;)
My preferred solution for this is a combination of Traefik and Smallstep StepCA. I run a collection of tools via Docker compose, and new certificates are issued automatically based on the hostname.
The only problem is that I have to maintain a list of DNS entries for the StepCA container as extra hosts. I haven’t found an elegant solution for this part.
If your DNS provider has an API, you can hook into that for internal-only web servers; this handy code supports several dozen APIs so you don't have to re-invent the wheel:
It's inspired by acme-dns, but getlocalcert is actually using PowerDNS with a Django API that's compatible. I originally built the tool as a fork of acme-dns, but lost traction somewhere, so I switched to pdns and it worked great.
Even easier would be to eliminate ICANN DNS and LetsEncrypt as mandatory dependencies for using HTTPS. If I'm the CA then maybe I know what I want to trust. Not every computer network user has the same use cases. One size does not fit all, so to speak. If some computer network users want to use third parties to help them figure out who to trust, then they can do so; it's always an option. But currently there are some companies standing in the way of letting any computer network user determine trust themselves. Currently, delegating trust to third parties isn't optional, it's mandatory.
The whole point of a local DNS and local CA though is to insulate yourself from cloud expenses and failures. Our DNS system has had 0 outages, whereas our Google Fiber uplink has approx one failure a month (weather, construction, maintenance, upgrades, unscheduled outages).
It's just as easy to deploy a trusted CA using MDM as it is to deploy this.
The split-dns setup [1] does a pretty good job of insulating users from cloud failures. Devices on your network use local DNS, no cloud needed. ACME clients start renewing certificates 30 days before expiration, and retry daily. The cloud API would need to be down 30 days in a row for the certificate renewal to fail. It's quite failure tolerant. If Let's Encrypt had an enormous outage, you could switch to ZeroSSL. Caddy actually fails over automatically [2].
Part of the reason I've been working on this is that the "distribute" step is quite difficult to do at scale across an ever changing set of operating systems and devices. Using a free public CA like Let's Encrypt lets you avoid that challenge since it's trusted out of the box everywhere.
Doesn't that require being able to add your root CA on each device you access the application from? That can be a chore in some cases, esp when you want to share the application with others.
For services I host on my Tailscale/Headscale network, I just use DNS challenges. With Cloudflare and Caddy, it's as straightforward as adding:
    tls {
        dns cloudflare {env.CLOUDFLARE_AUTH_TOKEN}
    }
This is a poor solution to a non-problem. 99% of "private networks" don't need HTTPS for most things, full-stop. Those that do, the correct solution is to deploy a private CA and use internal DNS. The implication being you do not trust your network. So the solution is to.....use public DNS and Let's Encrypt garbageware and leak details of your internal network so you can pretend you're now more secure because the cafeteria menu is hosted over HTTPS? Save the black hats some time and just email them the Visio of your core network.
If you don't have access to install the root certificate then by definition you don't control the private network, simple as that.
> when you want to share the application with others
This is literally the definition of no longer a "private network".
> 99% of "private networks" don't need HTTPS for most things, full-stop.
The encryption HTTPS provides isn't important if you trust your network. However server authentication is important if your devices move between networks (phone, laptop, etc). Applications don't know you switched networks, they just want to connect and will happily send sensitive data to an attacker if you ever connect to a malicious network. There was an example of `git push` leaking the entire commit history mentioned on HN not too long ago.
> This is a poor solution to a non-problem. 99% of "private networks" don't need HTTPS for most things, full-stop.
The problem with your security model is that you assume your private network is flawless. Do you really think you can trust every device that connects to your network to be secure? Including your Wi-Fi router? Because I don't, even for devices I personally bought. Even less so in a corporate setting.
> use public DNS and Let's Encrypt garbageware and leak details of your internal network
I've seen certbot take 1+ hour to install on GCE micro VMs because it's so bloated. I've tried implementing the ACME protocol to avoid needing to use it, but it's one of the most byzantine processes I've seen. Let's Encrypt takes so much leverage from the edge to give us something that costs a latte. It's how freedom dies.
All my internal and external services inside and outside of my headscale/tailscale network can use a single cert this way. I generate a single cert and then reuse it everywhere.
interesting, wonder if this will help me get access to the Geolocation API for my ESP32 captive portal devices
on connect, a user is presented with an option to preselect a wifi network and input their geographic location. I managed to stuff a zip code lookup table into the code, but lat/long is really cumbersome to input, and using the phone's location sensor requires an HTTPS server
I wrote about something along these lines a couple of years ago [0] and thought the idea was dead due to rate limiting. But does LE now regard each subdomain as having its own limit, rather than taking that of the parent domain?
I'm fairly ignorant about certificates, but would anybody here happen to know whether this tech could be easily implemented on a Synology NAS? I enabled HTTPS on a NAS, but figuring out how to get a Let's Encrypt cert or even a locally trusted cert to work seemed more difficult than anticipated.
step-ca is excellent for running your own private CA with ACME support. Individual software support for defining your own ACME directory isn't always there, but such is the life of running your own private PKI. Worst case, you manually issue a cert like you would have done anyway.
Side question: Does anyone know a simple caddy-like solution, but for non-HTTP traffic? For example, I want automatic SSL certificates for redis, mongodb, postgresql.
Doesn't this only work for HTTP too? It may work with MongoDB because it talks HTTP through TCP 27017, but PostgreSQL, for example, has a proprietary protocol on TCP 5432.
Another approach to solving this problem is to use Tailscale and their new SSL feature [1]. This of course is dependent on that you use Tailscale, but it is pretty handy and does give you a Let's Encrypt cert.
The web/cloud world is so much nonsense. First, why do you even need HTTPS on a trusted network. Also if you want a trusted network, why not just use IPSec? HTTPS was specifically built to work over the Internet because it was too hard to switch from IPv4.
Second, if it's private, surely you can just use your own Certificate Authority, instead of paying a tax to be listed on someone else's Certificate Authority.
Well, because my local instance of <WebApp> says, <Feature> isn't available unless I use HTTPS.
Also, HTTPS with a public CA just works on most systems, whereas IPsec requires Client configuration. It's still easier to roll out than switching from IPv4.
For personal or experimental things, I may use a local CA, but getting certificates for the internal subdomains of my company is trivial when you've already got a public domain on a server that's capable of using DNS-01 challenges.
Enabling HTTPS is just way easier than actually ensuring you can trust your local network. Many real-life middle-class companies start out with just having a network and not thinking about security at all. Some companies may even have untrusted internal networks by choice by allowing BYOD.
> First, why do you even need HTTPS on a trusted network.
PCI compliance among many other regulatory issues. You will be required to show that data is not only encrypted at rest but is encrypted through your entire transport chain regardless of any physical security. The threat model assumes that an attacker is able to temporarily access network resources undetected and can use simple sniffing tools to egress sensitive data. This will not be optional.
Trusted networks are a flawed concept, unless you mean a software-backed tunnel. And I'm glad the slightly-less-mad TLS won over IPSEC with some certificate extension.