Easy HTTPS for your private networks (getlocalcert.net)
272 points by 8organicbits on July 10, 2023 | 120 comments



Cool, but as others have said, this is fairly easy to do on your own already. Somewhat related, a problem I have been trying to solve is how to create a trusted certificate for a new device on a private network automatically, without any configuration. For example, imagine you are turning up a new router or switch with a web UI for management. Traditionally this is served over HTTP initially, and you can optionally install a cert and enable HTTPS. It would be nice to have a way for this to happen automatically, to obviate the need for HTTP at all. Also consider that this private network may not be connected to the internet at all and so can't rely on outside servers.

I was kicking around the idea for an RFC that would go something like this. Your DHCP server would have an option to act as a CA (or point at one), with the ACME protocol enabled. Then you add DHCP option fields telling clients to trust this new CA certificate, but only for hosts on this local network (to avoid being able to MITM outside connections). Then clients on this network would be able to request certificates from the CA using ACME automatically, and others on the network would automatically trust them, assuming support for this standard was added to the OS, browsers, etc.


There is SCEP (https://en.m.wikipedia.org/wiki/Simple_Certificate_Enrollmen...) which allows that and is often found on network devices. You need a PKI which exposes a SCEP endpoint (EJBCA or Dogtag support this). Whether the certificate is then actually used for HTTPS is up to the device's SCEP client implementation, or something else on the client, though.

On servers, certmonger can do SCEP, IIRC. For private infrastructure, FreeIPA provides a packaged Dogtag and lets you create your own certificate profiles. Clients enrolled in FreeIPA have certmonger installed to refresh certificates.


> You need a PKI which exposes a SCEP endpoint (ejbca or dogtag supports this).

Uhh...

> [...] ejbca [...]

Now you have two problems.

What I mean is, if you’ve already been running EJBCA for whatever reason then this is perhaps reasonable, but if your current setup is at the level of typing `openssl req` into a terminal (whether that’s a good idea or not), it sounds like a lot of additional complexity. (Can’t say anything about Dogtag.)

I’ve been waiting forever for somebody to add an ACME backend to the Go SCEP library[1], but it doesn’t look like that has happened. In the meantime it includes a fairly competent standalone CA server at the abovementioned invoke-openssl-by-hand level.

Note that SCEP basically requires a trusted network, though, from what I remember.

[1] https://github.com/micromdm/scep


You can get somewhat close to that with getlocalcert. When the device first boots, it calls the getlocalcert API to register a free, anonymous domain name (negative: it's a UUID domain, ugly). You'd need outbound connections to api.getlocalcert.net and an ACME certificate authority (Let's Encrypt) to issue a certificate. When the user connects to http://<ip-address> it redirects to https://<uuid>.localcert.net, which the browser trusts since the cert is from a public CA. No need to manually trust a cert. The user would bookmark it and all future use is HTTPS-only.

If I see usage like this, there are ways I can make it cleaner.


> it calls the getlocalcert API to register a free, anonymous domain name

...rather than to get an actual certificate.

I didn't dig past the front page, but as far as I can see, they don't issue HTTPS certs at all, easy or otherwise; they offer managed DNS subdomains. You want a cert, you have to get it somewhere else.


As a security engineer I'm wary of tools that offer to issue certificates for you. You've got to trust that they won't keep a copy of your private key. ACME clients [1] generate keys on your device, which is the recommended way.

But you're right. Issuing Let's Encrypt certificates on behalf of users could simplify the process even more.

[1] https://docs.getlocalcert.net/acme-clients/#setting-txt-reco...


Solved.

Solved for both Windows and Linux (Debian, Arch, Fedora). I may have solved this for OSX as well, but I am not buying Apple hardware just to test it.

What my solution does is check for certificates created by the project during a build step. If the certificates don't exist it creates them, installs them in the OS, and also installs them in the browser. Installation in the browsers is required on Linux, and only for Firefox on Windows. These are cert chains containing a self-signed root, an intermediate CA, and a local domain cert.

I have these certs configured to work with my own domains so that I can connect to a subdomain addressed to a loopback IP and the cert recognizes that domain, but the domain "localhost" works as well. Sometimes it's nice to access a real domain to avoid any restrictions imposed upon accessing the address "localhost". You just have to change the domains at the bottom of your OpenSSL option files.

Here is how I solved it with vanilla TypeScript in Node.js (also requires locally installed OpenSSL):

* OpenSSL option file 1 - https://github.com/prettydiff/share-file-systems/blob/master...

* OpenSSL option file 2 - https://github.com/prettydiff/share-file-systems/blob/master...

* Certificate library - https://github.com/prettydiff/share-file-systems/blob/master...

* Certificate interface from build tool - https://github.com/prettydiff/share-file-systems/blob/master...

* Certificate installation - https://github.com/prettydiff/share-file-systems/blob/master...

If you have any questions just open a GitHub issue on the project.


Sounds like you're describing a process similar to SCP over HTTPS. That is available (albeit Windows-hosted) and fully functional across any client OS.



Or, it's super easy to roll your own using letsencrypt.

1. Buy your own public domain (such as companyname.dev)

2. Set up a LetsEncrypt wildcard certificate with DNS validation

3. Update your /etc/hosts to something like `127.0.0.1 companyname.dev`

We have this working with multiple developers, each renewing their certificates themselves. Works great, it's simple, and we don't need to trust an extra third party.
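The three steps above might look something like this in practice. This is a hedged sketch, not the poster's actual setup: the domain, the DNS plugin, and the credentials path are all placeholders, and certbot with a registrar DNS plugin (here certbot-dns-cloudflare) is just one way to do step 2.

```shell
# 2. Request a wildcard cert via the DNS-01 challenge
#    (requires certbot plus a DNS plugin for your registrar)
sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d 'companyname.dev' -d '*.companyname.dev'

# 3. Point the name at loopback for local development
echo '127.0.0.1 companyname.dev' | sudo tee -a /etc/hosts
```

Each developer runs the renewal themselves, so no private keys ever need to be copied between machines.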


Correct, a distinction with this service is that the domain names are free. One part of my decision to build this service was the fall of freenom (free domain names).

Typically free domain name services are full of spam, malware, and other junk. getlocalcert seeks to avoid that as it only permits private network usage. My hope is that this model can serve that niche while avoiding abuse.


How will you make money?


I'm potentially using this as a portfolio piece for my freelance consulting work [1]. Many clients require NDAs, so it can be tough to talk about specific, recent accomplishments.

Operational costs are low: around $40/year in domains and $20/month in servers. Soliciting donations could probably cover that (some similar free services work that way).

[1] https://www.robalexdev.com/


With your work history, you probably don't need any portfolio projects ;)


Yeah nice! Good to know


Last time I checked, those wildcard LetsEncrypt certs take more work to get, like passing a DNS-based TXT record challenge. Then once you have the wildcard certs, they only last 3 months. Once obtained, they can be manually copied into the LAN using a tool like wormhole. There are a lot of manual steps here, which are far harder than how certbot will auto-renew certs when in the cloud - usually requiring no manual intervention once you succeed that first time.


One of my inspirations for getlocalcert is a tool to make DNS-01 easier.

acme-dns lets you add a CNAME to another DNS zone, which lets you issue certificates for the former domain name using a convenient API on the latter zone. Seriously, read about it, it's awesome.

https://github.com/joohoi/acme-dns/

That tool is open source and self-hostable. getlocalcert also provides this feature, but as a hosted service. Choose the method you prefer.

https://docs.getlocalcert.net/tips/validation-domain/

Once DNS-01 is easy, wildcard certs are easy. Here's the docs for setting up a wildcard cert via getlocalcert: https://docs.getlocalcert.net/acme-clients/lego/
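The delegation acme-dns describes amounts to one record in the zone you want certificates for (names below are illustrative, not from the docs):

```
; In your main zone: delegate the ACME challenge label to the acme-dns zone
_acme-challenge.internal.example.com. IN CNAME d420c923-bbd7-4056.auth.example.org.
```

Your ACME client then updates the TXT record on the right-hand side via acme-dns's API, and the CA follows the CNAME during validation, so your main zone's credentials never touch the machine requesting certificates.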


Thanks for the explanations! Great links.


All the steps are automatable though. I know because I built it into my own server engine.

This service looks like the same thing. I guess if you're limited to certain tools then you can only do what it does, but I'm guessing there's lots of alternative software that'll do the DNS challenge if you look.


> Once obtained, they can manually be copied into the LAN using a tool like wormhole.

We haven't needed to copy the certs around the LAN. It works fine with devs just individually running certbot renew as needed.

Yes, it took us a fair bit of fiddling around to work out how to do it, but the final result is super simple. So I definitely would have considered a project like this in the past, but now that we've got the scripts for it, it's pretty simple.


The big difference I can see is that getlocalcert.net does not require the user to run a DNS server.


Just the whole domain, or does each dev make their own subdomain?


We set up each project with its own subdomain... so an /etc/hosts file would be something like

  127.0.0.1 client7.companyname.dev our-amazing-product.companyname.dev postgres2.companyname.dev

So then we can use the same settings, etc. across devs.

Then each dev just runs the script to renew the certs as they are needed (no need to share certs between devs)


Why not put project.companyname.dev in DNS? (Pointing to 127.0.0.1, ::1?)


Yeah that seems more ideal


Somewhat related - I made a bridge server [1] that lets ACME clients use standard RFC2136 to solve DNS-01 challenges for internal names without them needing credentials for the actual DNS backend (Route 53 in my case).

[1] https://github.com/schlarpc/rfc2136_bridge/blob/main/src/rfc...
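For context, an RFC 2136 dynamic update for a DNS-01 challenge is just a small scripted exchange. This is a hedged sketch (server address, TSIG key name/secret, and hostnames are all placeholders) of the kind of update such a bridge would forward to the real backend:

```shell
# Push a DNS-01 challenge TXT record via RFC 2136 (BIND's nsupdate tool),
# authenticated with a TSIG key
nsupdate -y 'hmac-sha256:acme-key:BASE64SECRET==' <<'EOF'
server 192.0.2.53
update add _acme-challenge.app.internal.example.com. 60 TXT "challenge-token"
send
EOF
```

The appeal of bridging is that the ACME client only ever holds the (scoped) TSIG key, not credentials for the hosted DNS provider.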


I did exactly the same for our local-cloud products.

Our local-cloud program connects to our "certificate server", and asks for a name/ip combination.

Our certificate server gets it using API access to our "local-cloud" domain. The local machine receives it.

So the end user does not have the Domain credentials. They have credentials to our cert server, but those have very limited value (and would need to be decrypted first.)


Interesting idea.

I have a workflow for creating AWS credentials that are restricted to doing the LetsEncrypt DNS challenges for just a single sub-domain, and that seems to be working well. https://linsomniac.gitlab.io/post/2019-09-10-letsencrypt-wit...
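As a rough illustration of scoping credentials that narrowly, a Route 53 IAM policy can now restrict record changes by name and type. This is a sketch with placeholder zone ID and domain, using newer Route 53 condition keys; the linked post predates them and may use a different mechanism:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "route53:ChangeResourceRecordSets",
    "Resource": "arn:aws:route53:::hostedzone/Z0000000EXAMPLE",
    "Condition": {
      "ForAllValues:StringEquals": {
        "route53:ChangeResourceRecordSetsNormalizedRecordNames": ["_acme-challenge.app.example.com"],
        "route53:ChangeResourceRecordSetsRecordTypes": ["TXT"]
      }
    }
  }]
}
```

A leaked credential like this can only touch one TXT record, which limits the blast radius to mis-issuance for that single name.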


If you absolutely need a hostname that is not "localhost" you should still configure your own host record and use a utility like mkcert.[1]

Most of the time, you really shouldn't need this, and your local HTTPS development should just require:

    mkcert -install
    mkcert localhost
In the directory of your app, etc.

If you're deploying private applications, these records should exist in your intranet DNS resolver instead.

[1]: https://github.com/FiloSottile/mkcert


Interesting idea, but I think I'll stick with DNS challenge and LetsEncrypt. Lazy me really needs to finally sit down and automate the process (all the pieces are in place on my registrar's side).

Did the private CA thing for almost two decades, and yes, it was a PITA, but even that was mostly because I failed to invest the time to automate it (Linux and Windows devices aren't _that_ hard to manage with OSS tools, printers might be a harder challenge; Android refusing to play nice with private DNS just makes it easier to disregard it altogether, as do all those IoT firmwares that don't even pretend to give you a choice).

Ultimately, the current infrastructure, like that for message encryption, is just too damn convoluted. Eventually it might get fixed, but I'll probably be long retired to an Internet dead zone by then (raising goats can't be harder than managing fleets of application servers, can it?).


Haven't used this myself, but I did find out about Lego (https://github.com/go-acme/lego) recently and used it to get a Let's Encrypt cert for a local network website I have, using the DNS challenge. It was fairly straightforward:

  $ export NAMESILO_API_KEY=...
  $ export NAMESILO_POLLING_INTERVAL=10
  $ export NAMESILO_PROPAGATION_TIMEOUT=1800
  $ export NAMESILO_TTL=3600
  $ lego --email <email> --dns namesilo --domains *.<domain>.com run


LEGO works great for this kind of thing. If you want to try with a localcert.net domain, the instructions are here: https://docs.getlocalcert.net/acme-clients/lego/


I've a simpler system to use encrypted network for local/private use.

- Setup Tailscale on the server and personal device

- Deploy apps (self hosted tools that I use) on the wireguard interface created by Tailscale

- Deploy caddy as a reverse proxy for all these tools and use `.internal` domain

- Use Adguard's DNS rewrites feature to answer custom DNS response for specified `.internal` domains. The A record contains the IP assigned by Tailscale (wg0 interface) for that device.

The Caddyfile is as simple as:

  http://miniflux.karan.internal {
    reverse_proxy 127.0.0.1:3000
  }

Using this setup, as soon as I connect to Tailscale, I can access `miniflux.karan.internal` on a secure encrypted network without worrying about configuring HTTPS for local domains or renewing SSL certs.


This is unnecessary

It is straightforward to create your own Root CA and use it to sign certificates for your private network, using openssl.

Ensure that you implement the V3 extensions with @altnames for the certificates you issue, with a "DNS => <FQDN>" (or you can use an IP address instead of an FQDN; I have not experimented with that). .local domain names work fine

If you do not implement "@altnames" the certificate will work for tools like curl, wget, and "openssl s_client", but not for browsers. (In my notes some place I am too lazy to go and read, there is a reference to the RFC that documents that you do not have to use the subject/CN, mēh)
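A minimal sketch of the above (all names, IPs, and filenames are placeholders): a self-signed root plus a leaf certificate carrying the v3 subjectAltName extension that browsers require.

```shell
# 1. Create the root CA key and self-signed certificate
openssl req -x509 -newkey rsa:2048 -sha256 -days 3650 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=Example Private Root CA"

# 2. Create a key and CSR for the device
openssl req -newkey rsa:2048 -nodes \
  -keyout device.key -out device.csr -subj "/CN=router.internal"

# 3. The v3 extension file: without subjectAltName, curl/wget/s_client
#    accept the cert but browsers reject it
printf 'subjectAltName=DNS:router.internal,IP:192.168.1.1\n' > v3.ext

# 4. Sign the CSR with the CA, attaching the extension
openssl x509 -req -in device.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -sha256 -extfile v3.ext -out device.crt
```

You'd then distribute ca.crt to clients and install device.key/device.crt on the web server.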

There are other tools (than openssl) that do this too.

Using this service means you really do not have a private network


I've been pretty frustrated with how private CAs are supported. Your private root CA can be maliciously used to MITM every domain on the Internet, even though you intend to use it for only a couple domain names. Most people forget to set Name Constraints when they create these and many helper tools lack support [1][2]. Worse, browser support for Name Constraints has been slow [3] and support isn't well tracked [4]. Public CAs give you certificate transparency and you can subscribe to events to detect mis-issuance. Some hosted private CAs like AWS's offer logs [5], but DIY setups don't.

Even still, there are a lot of folks happily using private CAs, they aren't the target audience for this initial release.

[1] https://github.com/FiloSottile/mkcert/issues/302

[2] https://github.com/cert-manager/cert-manager/issues/3655

[3] https://alexsci.com/blog/name-non-constraint/

[4] https://github.com/Netflix/bettertls/issues/19

[5] https://docs.aws.amazon.com/privateca/latest/userguide/secur...
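For reference, constraining a private root at creation time is a small addition to the openssl config; this is a sketch (section name and domain are placeholders) of the v3 extensions a name-constrained root would carry:

```
# openssl.cnf fragment: extensions for a name-constrained private root
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage        = critical, keyCertSign, cRLSign
# Limit this root to one internal namespace; a leaked key then cannot
# mint trusted certs for arbitrary Internet domains
nameConstraints = critical, permitted;DNS:.internal.example.com
```

The catch, as noted above, is that clients must actually enforce the constraint, and support has historically been uneven.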


Let me put it this way;

You might get away with installing your RootCA on your in-office PCs that you own.

Good luck though getting me to install it on my (external contractor) laptop, or my (employee) phone, or my (hosted) VMs. -you- may plan to only use the CA for good, but I don't trust you (and by extension all your IT staff: present, past, and future) to be angels.


Exactly. It's one of the reasons I use separate VMs per client when I freelance. I want to install the cert, so I don't see the cert warning messages or accidentally click through a real MITM attack, but never on the host OS. That could let the client MITM all sorts of stuff like my email to other clients if Thunderbird syncs at a client site.


Kinda nuts actually that you can’t install a Root CA and tell the OS “but only ever use it for these and these domains”. That would solve this elegantly.


Even if you could tell the OS, it would fail because much (most?) software has an OS-independent path to validate the certificate.

OpenSSL for example uses a custom pem file that the program developer supplies.


> Your private root CA can be maliciously used to MITM every domain on the Internet

I cannot see how. Do you mean as a specific attack on a computer with the private Root CA installed, if the attacker gets their hands on the Root CA private key?


I think OP was envisaging the attack being done by the person whose root CA private key it is.


> I think OP was envisaging the attack being done by the person whose root CA private key it is.

Still very convoluted. I cannot see the problem, beyond some very special cases.


There are two parts. The first is getting access to the private CA's private key. This is easy for insiders who already have access, but challenging for external attackers. The second is performing a MITM attack. You need to establish an in-the-middle network position and then craft certs for the sites you want to intercept. Insiders may already have the needed network position. Outsiders may find this challenging. Crafting a bogus certificate is trivial with a private CA.


Your private root CA can only be used maliciously if it's compromised. If you use certificate transparency to publicly leak all your local network administrative activities, then you won't need to worry about someone compromising your network because you've already compromised yourself.


Compromise of private CAs isn't always difficult. People store unencrypted keys on SharePoint or send them in email along with the password. There's also cases of rogue IT department staff using the private CA intended to be used for `*.corp` to MITM employee Internet traffic, so "compromise" isn't needed for harm to occur. For companies that do it right, yeah, the concern is minimal.

I think the risk of leaking internal domain names is real, but overblown. If you find a vulnerable internal service via CT logs you still need to connect to the private network to exploit it. If you're connected to the private network, you could have enumerated subdomains many other ways. Early knowledge of vulnerable subdomains can speed up an attack or target selection, but this is still a two step process. Most subdomains are pretty boring anyway; it's not surprising to learn that $corp has subdomains like webmail, wiki, etc. These names don't leak version or brand information.

I was playing with the idea of doing CT log poisoning, where you register and renew certificates for decoy subdomains along side your real subdomain names. If you add enough noise, CT logs are no longer useful for enumerating private subdomains.


Sharepoint and email servers aren't public. Transparency logs are. Small companies might not care if the public knows what they're doing. In fact they'd probably benefit from the attention. However other companies are concerned, since leaks can fuel media rumors, harmful stock trades, or worse.


More to the point, if you cannot keep one file a secret, your problems are quite large.

Those sorts of problems are common, true. That is to say, competence is rare in computing.

That is not surprising given the almost complete lack of accountability in this space.


The risk in these two cases is vastly different.

A root CA leak is hard to detect, has tremendous consequences, and adding limitations to the cert (TLD/domain) doesn't work across every OS.

OTOH, leaking the existence of your private services (which are likely inaccessible over the internet for the general attacker) is a much smaller risk, and can be contained with wildcard certs as a workaround.


> It is straightforward to create your own Root CA and use it to sign certificates for your private network, using openssl.

I guess I'll be the one to say it: many people, myself included, get intimidated by using OpenSSL. The CLI has a bunch of different options, and dealing with the inherent complexity of a PKI makes it all a bit of a mess: you try to find the right invocations for whatever you need to do online, and even then aren't sure whether everything is done correctly.

At least that mirrors my own experience in the past and that of most folks I've talked to, both in private and prod projects. That's why I personally find software like Keystore Explorer actually decent for even things like running your own CA (probably for a small homelab, where you don't need automation): https://blog.kronis.dev/tutorials/lets-run-our-own-ca

And yet, even then I don't have the confidence that I've done things the "right" way, frankly probably not. That's even before you get into things like staying on top of CVEs and keeping everything updated - overall staying safe is very challenging. There's probably also the need to configure a web server, like Apache with SSLCACertificateFile and just plenty of things to go wrong along the way, especially if you need mTLS as well.

> Using this service means you really do not have a private network

That's a fair point, though!


> many people, myself included, get intimidated by using OpenSSL

Unsurprising. It is intimidating. I think a bit old fashioned in that sense. Once was a time when men were men because they could configure sendmail. All a bit silly really.

There are other alternatives. Too few, tho. It took me a week to work it all out using OpenSSL.


If you use the Redbean web server, I put a lot of thought into making local network SSL easy. https://redbean.dev/#ssl Basically all you have to do is create a single key signing key which is installed on all your client devices. Then whenever you want to spin up a web server on any given host, you just give Redbean access to the key signing key, and it'll create RSA/ECDSA serving certificates automatically, which list all of the local system's hostnames and network IPs in the correct manner (which is a nightmare to figure out how to get OpenSSL to do).

In fact you can even embed the key signing key inside your Redbean so that when you SCP it to another machine and run it via SSH, it'll just do the right thing there as well, and Chrome won't shut it down. Or you could email your redbean single-file executable to a coworker, who just has to double click on the thing, and it'll start up a valid HTTPS server on their workstation which you can access, for easy local web dev collaborations.


Redbean may work well for a domain auto-registration idea I mentioned elsewhere [1]. I've been looking for an excuse to play with redbean.

[1] https://news.ycombinator.com/item?id=36675925


This sounds very much like the famous HN comment saying that Dropbox is unnecessary because you can construct an analog using rsync and half-dozen other tools.

It's like saying that having a chair is unnecessary because you have a log, an axe, a saw, etc, and can build a sit to your liking when need be.

The thing is that when the need actually comes to be, you want a seat right now and of a known working kind, and strongly prefer a comfortable one. So thank the carpenter.


The thing is when you build a private network you want it to be private.

The thing is when you are securing a network you are building things with powerful tools.

Thing is using powerful tools can backfire badly.


What is often not straightforward is trusting this CA on every existing and future client.


Especially if it's an iPhone.


> Especially if it's an iPhone.

That was my exact problem, and why I learnt this.

It is easy to install and easy to remove from an iOS device.


I would love to know how!

I've never had success with local CAs and self-signed certs on iPhone, despite going through the whole rigamarole of creating and installing an MDM profile with the trust root. Even after doing that, apps and Safari behave as if the certificate was untrusted. Is there some documentation you've successfully used and wouldn't mind pointing to?


This has changed recently, because I did a lot of fighting with this at one point.

Download the certificate, import/install it, and then do this: https://support.apple.com/en-us/HT204477


Thank you! This is new since the last time I tried setting up a CA for my home network, and it looks like just the ticket.


Caddy will do this for you as long as you don't give it an ACME config. It handles the root CA creation as well as all of the certificates for any domains that you define in the Caddyfile, so all you have to do is find the root CA it generates and install it on your devices.
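For reference, forcing Caddy's built-in CA for a site is one directive; this is a sketch (hostname and upstream port are placeholders, not from the thread):

```
router.internal {
    tls internal
    reverse_proxy 127.0.0.1:8080
}
```

With no ACME config, Caddy issues the site cert from its own locally generated root, which you then export and install on your devices.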


> all you have to do is find the root CA it generates and install it on your devices.

Unfortunately that's the hard part - getting Firefox/Chrome/Edge/Safari and OS/libc/curl to all trust the CA. It's a little easier on Linux, but still convoluted.

And with an actual trusted CA the trust is way too wide (every domain).

Not to mention revocation lists and certificate renewals (and revocations)...

Dedicated letsencrypt "internal" domain is probably the sweetspot for most uses these days.

Maybe a serviceN.int.example.com where int.example.com allows for dynamic DNS updates/DNS challenges.

I've been thinking about setting up powerdns/coredns/knotdns with RFC 2136 dynamic updates for a dedicated internal domain - but for now Tailscale with MagicDNS mostly fills the need (unfortunately not with SSL for all k8s internal services, "only" VPN).


As mentioned in the excellent documentation from TFA - I should probably use acme-dns for this:

https://github.com/joohoi/acme-dns/

https://docs.getlocalcert.net/tips/validation-domain/#self-h...


I love this idea. Except I have devices on my network that don't allow me to do that. Company lock downs etc.

So I just pull in stuff from LetsEncrypt.


As a rule I don't connect to my local services from my company laptop—it sits in the guest network isolated from the rest of the network.


I've considered doing that for my home network, but if my local CA was ever compromised, someone would be able to generate certs for any site (such as my bank), and my devices would trust those certs if they were able to MitM me. Browsers don't support name constraints, so I couldn't restrict the CA to signing just my local domain.

Yes, I know, I'm small potatoes, and that would require a very targeted attack, but the idea still bothers me.


My local CA is on a raspberry pi that is offline. :)

However, my paranoia level is high.


I really hope DANE will become more popular (and widely supported) some time. Works great on air gapped networks without the need for a publicly trusted CA or Let's Encrypt. No ACME daemon to monitor, just put your public key in a DNS record an forget about it.
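The "put your public key in a DNS record" step can be sketched like this (hostname and filenames are placeholders): computing a DANE-EE "3 1 1" TLSA payload, i.e. the SHA-256 digest of the certificate's SubjectPublicKeyInfo.

```shell
# Create a key/cert for the service; self-signed is fine for DANE-EE (usage 3)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout svc.key -out svc.crt -subj "/CN=service.internal"

# SHA-256 over the DER-encoded public key (selector 1, matching type 1)
hash=$(openssl x509 -in svc.crt -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 | awk '{print $NF}')

# The record to publish in your (DNSSEC-signed) zone:
echo "_443._tcp.service.internal. IN TLSA 3 1 1 ${hash}"
```

Since the digest only changes when the key changes, you can renew or reissue certificates for the same key without touching DNS.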


> Works great on air gapped networks

I've usually seen DANE paired with DNSSEC, and on the internet it feels required. DANE on an air gapped network is new to me, do you just skip the DNSSEC part? I'd be fearful of joining a network that puts bogus DANE TLSA records for google.com, for example.

Browser support for DANE is at 0%, unfortunately.

https://caniuse.com/?search=dane


You're right that DANE kind of implies DNSSEC. Technically it can go without, but it's quite pointless to do because you cannot trust your TLSA record without DNSSEC.

DNSSEC works in an air gapped network when you deploy your own trust anchor in your DNS. I wouldn't touch a domain name that you don't own yourself (like google.com) but instead only use a domain name you purchased.

It surprises me that DANE is even listed on caniuse.com! I expected it to be way too exotic to be on that list. I'm under no illusion that browsers are going to support this anytime soon, unfortunately.

Now let's hope I didn't wake up tptacek to lecture us on how DNSSEC is bad and how it will eat your children. ;)


Me too. But the CA and browser mafia is too intertwined to let this change anytime soon.


Are you being facetious or do you actually believe this?


My preferred solution for this is a combination of Traefik and Smallstep StepCA. I run a collection of tools via Docker compose, and new certificates are issued automatically based on the hostname.

The only problem is that I have to maintain a list of DNS entries for the StepCA container as extra hosts. I haven’t found an elegant solution for this part.


MiniCA[0] works for this; it's quite trivial to set up and stamp out certs.

[0] https://github.com/jsha/minica


This leverages the ACME DNS server which has a REST API:

* https://github.com/joohoi/acme-dns

If your DNS provider has an API, you can hook into that for internal-only web servers; this handy code supports several dozen APIs so you don't have to re-invent the wheel:

* https://github.com/AnalogJ/lexicon

* https://pypi.org/project/dns-lexicon/

* https://dns-lexicon.readthedocs.io/en/latest/user_guide.html


> leverages the ACME DNS server

It's inspired by acme-dns, but getlocalcert is actually using PowerDNS with a compatible Django API. I originally built the tool as a fork of acme-dns, but lost traction somewhere; I switched to PowerDNS and it worked great.


Even easier would be to eliminate ICANN DNS and LetsEncrypt as mandatory dependencies for using HTTPS. If I'm the CA then maybe I know what I want to trust. Not every computer network user has the same use cases. One size does not fit all, so to speak. If some computer network users want to use third parties to help them figure out who to trust, then they can do so; it's always an option. But currently there are some companies standing in the way of letting any computer network user determine trust themselves. Currently, delegating trust to third parties isn't optional, it's mandatory.


My way of doing private SSL (not necessarily the easiest):

* own CA, to be distributed to all systems via Ansible playbook or Dockerfile directives

* Hashicorp Vault with enabled PKI engine

* Ansible Hashivault module [1]

* Ansible role & playbook to tie it all together

* CI environment for automated deployment of SSL certs to target systems

Works flawlessly once set up, including restart/reload of affected services. Might do a writeup on my personal blog at some point.

[1] https://github.com/ansible-collections/community.hashi_vault


The whole point of a local DNS and local CA though is to insulate yourself from cloud expenses and failures. Our DNS system has had 0 outages, whereas our Google Fiber uplink has approximately one failure a month (weather, construction, maintenance, upgrades, unscheduled outages).

It's just as easy to deploy a trusted CA using MDM as it is to deploy this.


The split-dns setup [1] does a pretty good job of insulating users from cloud failures. Devices on your network use local DNS, no cloud needed. ACME clients start renewing certificates 30 days before expiration, and retry daily. The cloud API would need to be down 30 days in a row for the certificate renewal to fail. It's quite failure tolerant. If Let's Encrypt had an enormous outage, you could switch to ZeroSSL. Caddy actually fails over automatically [2].

Re. MDM, I've never seen a perfect rollout.

[1] https://docs.getlocalcert.net/dns/split-view/

[2] https://caddyserver.com/docs/automatic-https#errors


I recommend just using something like XCA [1]. Just configure your own root CA and distribute it.

[1] https://hohnstaedt.de/xca/


Part of the reason I've been working on this is that the "distribute" step is quite difficult to do at scale across an ever changing set of operating systems and devices. Using a free public CA like Let's Encrypt lets you avoid that challenge, since it's trusted out of the box everywhere.


“And distribute it” feels very “draw the rest of the fucking owl”-y to me


Doesn't that require being able to add your root CA on each device you access the application from? That can be a chore in some cases, esp when you want to share the application with others.

For services I host on my Tailscale/Headscale network, I just use DNS challenges. With Cloudflare and Caddy, it's as straightforward as adding:

  tls {
    dns cloudflare {env.CLOUDFLARE_AUTH_TOKEN}
  }

to the site's configuration in the Caddyfile


This is a poor solution to a non-problem. 99% of "private networks" don't need HTTPS for most things, full stop. For those that do, the correct solution is to deploy a private CA and use internal DNS, the implication being that you do not trust your network. So the solution is to... use public DNS and Let's Encrypt garbageware and leak details of your internal network, so you can pretend you're more secure because the cafeteria menu is hosted over HTTPS? Save the black hats some time and just email them the Visio of your core network.

If you don't have access to install the root certificate then by definition you don't control the private network, simple as that.

> when you want to share the application with others

This is literally the definition of no longer a "private network".


> 99% of "private networks" don't need HTTPS for most things, full-stop.

The encryption HTTPS provides isn't important if you trust your network. However, server authentication is important if your devices move between networks (phone, laptop, etc.). Applications don't know you switched networks; they just want to connect and will happily send sensitive data to an attacker if you ever connect to a malicious network. There was an example of `git push` leaking an entire commit history mentioned on HN not too long ago.


> This is a poor solution to a non-problem. 99% of "private networks" don't need HTTPS for most things, full-stop.

The problem with your security model is that you assume your private network is flawless. Do you really think you can trust every device that connects to your network to be secure? Including your Wi-Fi router? Because I don't, even for devices I personally bought. Even less so in a corporate setting.


If you don't trust your router I recommend trying out an open source firmware, like OpenWRT or DD-WRT.


> use public DNS and Let's Encrypt garbageware and leak details of your internal network

Wildcard letsencrypt + internal DNS means you don't leak anything and you don't have to deal with the awkwardness of installing certificates.
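As a rough sketch of what that looks like in a Caddyfile (assuming a Caddy build with the Cloudflare DNS plugin and a CLOUDFLARE_API_TOKEN env var; the domain and upstream are placeholders):

  *.internal.example.com {
    tls {
      dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy 127.0.0.1:8080
  }

The DNS-01 challenge proves control via a TXT record on the public zone, so none of the internal hosts ever need to be reachable from the internet, and only the wildcard itself shows up in certificate transparency logs.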


> use public DNS and Let's Encrypt garbageware and leak details of your internal network

I've seen certbot take 1+ hour to install on GCE micro VMs because it's so bloated. I've tried implementing the ACME protocol to avoid needing to use it, but it's one of the most byzantine processes I've seen. Let's Encrypt takes so much leverage from the edge to give us something that costs a latte. It's how freedom dies.


This is resurrecting the mythical "secure internal network". It's been tried, a lot, and found wanting.



Yes, you need to install the root CA into each device's root of trust.

Inconvenient? Maybe. Security rubs against the grain of convenience, true.


I released tabserve.dev recently.

It gives you an HTTPS URL for localhost by using your browser as a reverse proxy.

Take a look: https://tabserve.dev


As others have noted, I'm not sure what this adds unless you cannot use letsencrypt to get wildcard certs.

I wrote about that here:

https://blog.katarismo.com/2023-06-04-expose-any-private-ser...

All my internal and external services inside and outside of my headscale/tailscale network can use a single cert this way. I generate a single cert and then reuse it everywhere.


interesting, wonder if this will help me get access to the Geolocation API for my ESP32 captive portal devices

on connect, a user is presented with an option to preselect a wifi network and input their geographic location. I managed to stuff a zip code lookup table into the code, but lat/long is really cumbersome to input, and using the phone's location sensor requires an HTTPS server


I wrote about something along these lines a couple of years ago [0] and thought the idea was dead due to rate limiting. But does LE now regard each subdomain as having its own limit, rather than taking that of the parent domain?

[0] https://3dbrows.dev/


The limit can be increased in some cases. https://letsencrypt.org/docs/rate-limits/#a-id-overrides-a-o...


I think options are expanding now. ZeroSSL offers ACME certs without rate limits. You'll likely need to pay them if you're getting more than three.


I'm fairly ignorant about certificates, but would anybody here happen to know whether this tech would be easy to implement on a Synology NAS? I enabled HTTPS on a NAS, but figuring out how to get a Let's Encrypt cert (or even a locally trusted cert) to work seemed more difficult than anticipated.


step-ca is excellent for running your own private CA with ACME support. Individual software support for defining your own ACME directory isn't always there, but such is the life of running a private PKI. Worst case, you manually issue a cert like you would have done anyway.
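For example, Caddy can be pointed at a private ACME directory per site (sketch only; the hostname is a placeholder and the directory path/port depend on how your step-ca instance is set up, and Caddy must also trust the step-ca root certificate):

  internal.example.com {
    tls {
      ca https://ca.internal:9000/acme/acme/directory
    }
    reverse_proxy 127.0.0.1:8080
  }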


Side question: Does anyone know a simple caddy-like solution, but for non-HTTP traffic? For example, I want automatic SSL certificates for redis, mongodb, postgresql.


Caddy can act as a proxy for those services. It's ridiculously easy to set up.

  mongo.mydomain.com {
    reverse_proxy 127.0.0.1:27017
  }


> but for non-HTTP traffic?

Doesn't this only work for HTTP too? Caddy's reverse_proxy speaks HTTP, but MongoDB (TCP 27017) and PostgreSQL (TCP 5432) both use their own binary wire protocols, not HTTP.


I think you need the layer4 module to get this working.

https://caddyserver.com/docs/modules/layer4
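Something along these lines should work as a sketch (JSON app config; handler names follow the caddy-l4 README, but ports and addresses are placeholders and the exact fields may differ by module version):

  {
    "apps": {
      "layer4": {
        "servers": {
          "postgres": {
            "listen": [":5433"],
            "routes": [{
              "handle": [
                { "handler": "tls" },
                { "handler": "proxy",
                  "upstreams": [{ "dial": ["127.0.0.1:5432"] }] }
              ]
            }]
          }
        }
      }
    }
  }

Note that clients then get a plain TLS-wrapped stream on :5433 rather than the database's own STARTTLS-style negotiation, which not every client supports.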


stunnel should work for all of the above
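e.g. a minimal stunnel server-side service section (paths, ports, and the service name are placeholders):

  [postgres]
  accept = 5433
  connect = 127.0.0.1:5432
  cert = /etc/stunnel/cert.pem
  key = /etc/stunnel/key.pem

Run stunnel in client mode on the other end to wrap/unwrap the TLS, since the database clients themselves may not speak a plainly TLS-wrapped protocol.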


> "Easy HTTPS"

> Displays a somewhat overcomplicated diagram with a bunch of arrows and scary technobabble words.

Ping me again when HTTPS actually becomes easy


Another approach to solving this problem is to use Tailscale and their new SSL feature [1]. This of course depends on you using Tailscale, but it is pretty handy and does give you a Let's Encrypt cert.

[1] https://blog.viktorpetersson.com/2022/12/23/securing-service...


The web/cloud world is so much nonsense. First, why do you even need HTTPS on a trusted network? Also, if you want a trusted network, why not just use IPsec? HTTPS was specifically built to work over the Internet because it was too hard to switch from IPv4.

Second, if it's private, surely you can just use your own Certificate Authority, instead of paying a tax to be listed on someone else's Certificate Authority.


Well, because my local instance of <WebApp> says, <Feature> isn't available unless I use HTTPS.

Also, HTTPS with a public CA just works on most systems, whereas IPsec requires client configuration. It's still easier to roll out than switching from IPv4.

For personal or experimental things, I may use a local CA, but getting certificates for my company's internal subdomains is trivial when you already have a public domain on a server that's capable of DNS-01 challenges.

Enabling HTTPS is just way easier than actually ensuring you can trust your local network. Many real-life mid-sized companies start out with just having a network and not thinking about security at all. Some companies may even have untrusted internal networks by choice, by allowing BYOD.


Stop relying on web stuff built by others full of artificial limitations?

Either write your own code or choose your dependencies more carefully.


> First, why do you even need HTTPS on a trusted network.

PCI compliance among many other regulatory issues. You will be required to show that data is not only encrypted at rest but is encrypted through your entire transport chain regardless of any physical security. The threat model assumes that an attacker is able to temporarily access network resources undetected and can use simple sniffing tools to egress sensitive data. This will not be optional.


That's a whole lot of nonsense. What if you push your data using a custom binary protocol over UDP multicast?

Are you suggesting that regulations make it so that everything should be a slow and inefficient https webapp?


Trusted networks are a flawed concept, unless you mean a software-backed tunnel. And I'm glad the slightly-less-mad TLS won over IPSEC with some certificate extension.


how do you know your network is trusted and/or private?

trick question: you can't

even the network links between hosts in a single rack in a DC can be vulnerable


I own the machines, the switches, the routers and the cables.


And you're certain that no one will ever access your network negligently or with malicious intent?


where does all of that hardware live?

unless it's in your home, it's not trustable


Have you never been to a datacenter?

They require your fingerprints for entry and everything is heavily monitored.

You can even get a stealthy cage if you don't want anyone to be able to see what kind of network equipment you have.


and a government warrant bypasses all of that

there was this whole thing with Edward Snowden a few years ago, maybe you remember?



