A word of warning: client-side support of name constraints may still be incomplete. I know it works on modern Firefox and Chrome, but there's lots of other software that uses HTTPS.
This repo links to BetterTLS, which previously audited name constraint support, but BetterTLS only checked name constraint support on intermediate certificates, not at the trust anchors. I reported[1] the oversight a year back, but Netflix hasn't re-engineered the tests.
Knowing how widely adopted name constraints are on the client side would be really useful, but I haven't seen a sound caniuse style analysis.
Personally, I think the public CA route is better and I built a site that explores this[2].

[1] https://github.com/Netflix/bettertls/issues/19

[2] https://www.getlocalcert.net/
I prefer to assign an external name to an internal device and grab a free SSL cert from Let's Encrypt, using the DNS challenge since internal IP addresses aren't reachable by their servers.
Yep. I tried the custom-root-CA approach for a long time, but there were just too many problems with it:
* Loading it into every device was more work than it sounds. We have Android, iOS, Mac, Windows, and Linux, all of which have their own rules.
* Even once loaded, some applications come with their own set of root CAs. Some of those have a custom way of adding a new one (Firefox), others you just had to accept the invalid cert each time, and still others just refused to work.
* I deploy my self-hosted stuff with Docker, which means that not only does each device need to have the root CA added to it but every Docker image that talks to the internal network needs to have it as well. This ends up being a mix of the previous two problems, as I now have to figure out how to mount the CA on an eclectic bunch of distros and I often then have to figure out why the dockerized application isn't using the CA.
In the end I settled on a DNS-challenge wildcard SSL cert loaded into Caddy, with Caddy terminating TLS for everything that's on my home server. It's way simpler to configure the single server (or even 2-3 servers) than every single client.
> I deploy my self-hosted stuff with Docker, which means that not only does each device need to have the root CA added to it but every Docker image that talks to the internal network needs to have it as well. This ends up being a mix of the previous two problems, as I now have to figure out how to mount the CA on an eclectic bunch of distros and I often then have to figure out why the dockerized application isn't using the CA.
FWIW, I solve this problem with wildcards + a central reverse proxy for containerized apps. I host most services on a subdomain of the machine that hosts containers, like "xxx.container.internal", "xxx2.container.internal", etc. Instead of each container doing its own SSL, I have one central reverse proxy container that binds to 443, and each app container gets put on an internal Docker network with the reverse proxy. The reverse proxy has a wildcard certificate for the host system domain name "*.container.internal" and you can just add an endpoint for each service's SNI. I'm using Zoraxy, which makes it very easy to add a new endpoint in a couple of clicks when I install a new app, but this works with lots of other reverse proxies like Caddy, Nginx, etc. If containers need to talk to each other over the external endpoint for some reason and thus need the root CA, you can mount the host system's certificate store into the container, which seemed to work pretty well the one or two times I needed to do it.
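For anyone wanting a concrete starting point, a minimal sketch of that host-trust-store mount, assuming a Debian/Ubuntu-style host and an image whose bundle lives at the usual path (Alpine and RHEL images keep theirs elsewhere); xxx.container.internal is the hypothetical service name from above:

    # Install the root CA on the host and rebuild the bundle:
    sudo cp my-root-ca.crt /usr/local/share/ca-certificates/my-root-ca.crt
    sudo update-ca-certificates   # regenerates /etc/ssl/certs/ca-certificates.crt

    # Bind-mount the rebuilt bundle read-only into a container that needs it:
    docker run --rm \
      -v /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:ro \
      curlimages/curl -fsS https://xxx.container.internal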
I haven't really solved the annoyance of deploying my root CA to all the devices that need it, which truly is a clusterfuck, but I only have to do it once a year so it isn't that bad. Very open to suggestions if people have good ways to automate this, especially in a general way that can cover Windows/Mac/iOS/Android/various Linuxes uniformly since I have a lot of devices. I've experimented with Ansible, but that doesn't cover mobile devices, which are the ones that make it most difficult.
Historically, before wildcard certificates were suddenly available for free, this leaked all internal domains to the internet, but now it's mostly a solved problem.
I don't understand why that is such a huge problem. The alternatives have much more severe problems, everything from reusing a wildcard in many places to running your own PKI.
It depends on your risk profile, but there are definitely people who'd rather run their own PKI than permit threat actor reconnaissance by publishing internal hostnames to CT logs.
When this information is useful, you've either got fundamental security issues that need to be addressed long before this, or you're dealing with threat actors with significant capabilities. In the latter case you've probably already taken this into account when you're building your stuff, or you have the capability and technical understanding to properly roll out your own PKI.
The overlap between people who suggest that you either run your own PKI or just distribute a wildcard certificate and people who have the technical understanding to do this in a secure way is minuscule. The rest are probably better off using something like Let's Encrypt.
I've used this method for development successfully (generating CAs and certs on Mac with mkcert), but Apple has broken certificates in iOS 18. Root CAs are not showing up in the trust UI on iPhones after you install them. It's a big issue for developers, and has broken some people's E-mail setups as well. Also some internal software deployments.
Apple is aware of it, but it's still not fixed in iOS 18.1.
These are exactly the challenges and toil I ran into over time with my self-hosted/homelab setup. I use regular domains now as well with DNS challenges for Let's Encrypt. I've been experimenting lately with CloudFlare Tunnel + Zero Trust Access as well for exposing only the endpoints I need from an application for local development like webhooks, with the rest of the site locked behind Access.
I used to run a wildcard cert with DNS challenge from LE, with CloudFlare Tunnel to expose my internal server to the interwebs.
I have since switched to Ubiquiti products, and now I just run a WireGuard server for my road-warrior devices. I would use CloudFlare Tunnel again if I ever needed to expose anything publicly.
I do this as well, but be aware that these external names you're using for internal devices become a matter of public record this way. If that's okay for you (it is for me), then this is a good solution. The advantage is also that you run no risk of name clashes, because you actually own the domain.
I decided to try split DNS to avoid leaking the internal IPs, but it turned out a bit more fragile than I imagined.
Android in particular is finicky, ignoring your DNS server if it doesn't like your setup. For example, if it gets an IPv6 address, it requires the DNS server to also have an IPv6 address, or it'll use Google's DNS servers.
It works now but I'm not convinced it's worth it for me.
I use CNAME records and it works on everything except Windows, where it works sometimes.
Basically, CNAME record from service.myserver.com to myserver.internal on a public DNS server, A record from myserver.internal to 1.2.3.4 on private DNS server.
I think I could maybe get it working on Windows too by tweaking the TTLs. Currently both DNS servers are automatically setting the TTL and I think Windows freaks out about that.
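To make the record layout above concrete, a sketch with placeholder names, a placeholder 1.2.3.4 address, and 192.168.1.53 standing in for the internal resolver:

    # Public zone (any public DNS host):
    #   service.myserver.com.  CNAME  myserver.internal.
    # Internal resolver only:
    #   myserver.internal.     A      1.2.3.4

    # From inside the network, the chain should resolve end to end:
    dig +short CNAME service.myserver.com          # -> myserver.internal.
    dig +short A myserver.internal @192.168.1.53   # -> 1.2.3.4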
This is exactly what I do. After seeing how much of my internal network was exposed in certificate transparency logs, I noped out and just do a DNS challenge for a wildcard for almost everything.
Now I have a nice script that distributes my key automatically to 20 or so hosts and apps, and I have a real SSL cert on everything from my UDM Pro to my Synology to random Raspberry Pis running containers. Most of these have domain names that only resolve on my local network.
This is made possible by a fairly robust DNS setup that consists of not only giving A records to all my hosts automatically, but also adding in CNAMEs for services and blocking almost all outbound DNS, DNS over TLS, DoH, etc.
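A rough sketch of what such a distribution script could look like (hostnames, paths, and the reload commands are placeholders):

    #!/bin/sh
    # Push the renewed wildcard cert/key to each host, then reload whatever serves TLS there.
    CERT=/etc/letsencrypt/live/example.com/fullchain.pem
    KEY=/etc/letsencrypt/live/example.com/privkey.pem

    for host in udm-pro synology pi-1 pi-2; do
      scp "$CERT" "root@$host:/etc/ssl/wildcard.crt"
      scp "$KEY"  "root@$host:/etc/ssl/wildcard.key"
      ssh "root@$host" 'systemctl reload nginx 2>/dev/null || true'
    done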
> be aware that these external names you're using for internal devices become a matter of public record this way
Yes, I sometimes think about that, but have come to the conclusion that it's not likely to make any difference. If someone is trying to infiltrate my home network, knowing internal IP addresses isn't really going to help them, because by the time they can use them, they're already in.
> If someone is trying to infiltrate my home network
I don't think the publishing of host names was mentioned as a concern for small home networks, but more for larger organisations that might be subject to a coordinated break-in or simply have trade secrets¹² that might be hinted at by careless naming of resources.
----
[1] Their next big product/enhancement, as yet unannounced even within the company, for instance.
[2] Hmm, checking what is recorded against one of DayJob's domains I see clues as to who some of our clients are. Not really a significant issue for security at all, but I know at least some of our contracts say we shouldn't openly talk about that we provide services to that client³ so I'll drop a message to the ISC to suggest we discuss if we need to care about the matter…
[3] Though that is mostly in the form of not using their logos in our advertising and such.
You can use a wildcard of the form *.internal.example.com, or use names that do not relate to the service if you want to obfuscate the tech stack used.
The only thing public is that you may have an internal network with nodes.
I last looked at Let's Encrypt maybe 8-9 years ago. I thought it was awesome but not suitable for my internal stuff due to the HTTP challenge requirement, so I went down the self-signed CA route, stuck with that, and didn't really keep up with developments in the space.
It was only recently that someone told me about the DNS challenge, and I immediately ported everything over with a wildcard cert - it's been great!
Let's Encrypt + DNS challenge + a DNS provider with a Let's Encrypt-compatible API for modifying records works fantastically well for getting "real" HTTPS/SSL working for private IP addresses, and the automatic renewals make it largely set-and-forget with very little config or setup required.
I've had working validly signed SSL on literally all my private home self-hosted services and load-balancers internally for years this way.
It also easily switches to a production-like setup if you later decide to host something on the public internet.
This sounds like something I'd want to do! Is the idea that you'd have a public domain name like "internal.thatcherc.com" resolve to an internal IP address like 10.0.10.5? I've wondered about setting this up for some local services I have but I wasn't sure if it was a commonly-done thing.
Obligatory: if DNS validation is good enough, DANE should've been too. Yes, MITM could potentially ensue on untrusted networks without DNSSEC, but that's perfect-being-the-enemy-of-good territory IMO.
This would allow folks to have .internal with auto-discovered, decentralized, trusted PKI. It would also enable something like a DNSSEC on/off toggle switch for IoT devices to allow owners to MITM them and provide local functionality for their cloud services.
DANE rollout was attempted. It didn't work reliably (middleboxes freak out about DNSSEC), slowed things down when it did, and didn't accomplish any security goals (even on its own terms) because it can't plausibly be deployed DANE-only on the modern Internet. Even when the DANE working group came up with a no-additional-RTTs model for it (stapling), it fell apart for security reasons (stripping). DANE is a dead letter.
It happens. I liked HPKP, which was also tried, and also failed.
This would be cool, but I think we're still a far way off from that being an option. DANE requires DNSSEC validation by the recursive resolver and a secure connection from the user's device to that resolver. DoH appears to be the leading approach for securing the connection between the user's device and the resolver, and modern browser support is pretty good, but the defaults in use today are not secure:
> It disables DoH when [...] a network tells Firefox not to use secure DNS. [1]
If we enabled DANE right now, then a malicious network could tell the browser to turn off DoH and to use a malicious DNS resolver. The malicious resolver could set the AD flag, so it would look like DNSSEC had been validated. They'd then be able to intercept traffic for all domains with DANE-validated TLS certificates. In contrast, it's difficult for an attacker to fraudulently obtain a TLS certificate from a public CA.
Even if we limit DANE to .internal domains, imagine connecting to a malicious network and loading webmail.internal. A malicious network would have no problem generating a DANE-validated TLS certificate to impersonate that domain.
I'll concede that DNSSEC is not in a good spot these days, but I don't know if that's really due to its design or lack of adoption (it's in similar territory as IPv6 TBH). DoH is (IMO) a poor workaround instead of "fixing" DNSSEC, but it's unfortunately the best way to get secure resolution today.
Putting aside the DNSSEC issues, IMO, DNS should be authoritative for everything. It's perfectly decentralized, and by purchasing a domain you prove ownership of it and shouldn't then need to work within more centralized services like Let's Encrypt/ACME to get a certificate (which seems to be becoming more and more required for a web presence). A domain name and a routable IP should be all you need to host services and prove to users that domain.com is yours, and it's something I think we've lost sight of.
Yes, DANE can create security issues, your webmail example is a perfectly valid one. In those situations, you either accept the risk or use a different domain. Not allowing the behavior because of footguns never ends well with technology, and if you're smart enough to use .internal you should understand the risks of doing so.
Basically, we should let adults be adults on the internet and stop creating more centralization in the name of security, IMO.
It is not in similar territory as IPv6. We live in a mixed IPv4/IPv6 world (with translation). IPv6 usage is steadily and markedly increasing. Without asking to be, I'm IPv6 (on ATT Fiber) right now. DNSSEC usage has actually declined in North America in the preceding 18 months, and its adoption is microscopic to begin with.
IPv6 is going to happen (albeit over a longer time horizon than its advocates hoped). DNSSEC has failed.
This is the customary comment by me that this is far from the prevailing view. From my viewpoint, DNSSEC is steadily increasing, both in demand and in amount of domains signed.
As I usually have to point out to you, registrars can’t add DNSSEC to domains. Only DNS server operators can do that. They often have to have the cooperation of the registrar to do it, but not always; sometimes, if the registry supports CDS/CDNSKEY records, the DNS server operator can add DNSSEC all by themselves. And why would DNS server operators add DNSSEC to their domains, unless the domain owners wanted them to?
Meanwhile: the graphs I posted in the preceding comment are pretty striking. If you haven't clicked through yet, you should. I've pointed out previous, minor drops in DNSSEC deployment in the US. The current one is not minor.
If you can’t get the technical details right, maybe you should hold your peace; this is a technical discussion. I also think you posted the wrong link.
> The current one is not minor.
Maybe not, but I do not know the cause, and you have not proposed one either. Do you have a theory about what happened in late 2023? We’ll have to see if this trend continues; the graphs you linked do show a slight upward turn right at the end of the graphs.
DANE without DNSSEC isn't a good idea. DoH secures the connection between the user's device and their recursive resolver, but it cannot secure the connection between the recursive resolver and the authoritative name servers. If you're using DANE you need a stronger guarantee that the records are valid.
“Customarily” must not necessarily mean “major email domains”. As I understand it, DANE is commonly used by organizations wanting to secure e-mail between them; DANE enforcement is agreed by both parties, and then used without issue.
I know that is what MTA-STS is for, but haven’t heard from people actually using it. I have, however, heard from people using DANE in the way i described.
(For those who don’t know, MTA-STS is basically DANE but for people who hate DNSSEC. And are OK with requiring every mail server to also have a web server running.)
Respectfully, what are you talking about? The four largest email providers in the world all run STS. You can test for yourself: just do `dig +short txt _mta_sts.DOMAIN`. I stopped looking after I saw that even Yahoo Mail does it.
I’m talking about hearing from actual people running their own e-mail infrastructure, or who are at least working closely with some local party to run it for them. What the humongous e-mail providers do is largely irrelevant for the purposes of what people in general should do with their own systems; what large providers do is rarely in the interests of anyone else but themselves.
(Also, your test is wrong. It should be “_mta-sts”, not “_mta_sts”.)
You frequently argue that DNSSEC usage only counts when it’s added to domains by the domain owners, not when it is added by DNS server operators (who are frequently also the registrars). Why then should MTA-STS usage count if it’s done by the few huge centralized e-mail providers?
When it’s purposefully set up by actual people, I only hear about DANE. It’s only when talking about huge e-mail providers that I hear about MTA-STS. And, as I said previously, those huge providers probably chose MTA-STS not for any reason which benefits their regular users, but for reasons which benefits only themselves, being a huge operator.
No! These aren't comparable problems. MTA-STS is designed to defeat SSL-stripping. It works whether specific users know about it or not. That is not the case for DNSSEC. This is why MTA-STS is overwhelmingly deployed across email users, and DNSSEC has less than 5% deployment (and falling). Thanks for the opportunity to clarify.
If you're wondering why DNSSEC never took off, these kinds of exchanges are illustrative!
> It works whether specific users know about it or not. That is not the case for DNSSEC.
I am baffled by this claim. DNSSEC works completely transparently to the user.
Also, we were comparing the specifics of MTA-STS to DANE, not to DNSSEC. Both MTA-STS and DANE solves the same problem, i.e. fake X.509 certificates and/or protocol degradation (SSL stripping). DANE has the potential to solve the same problem for every protocol, not just SMTP, while MTA-STS is both specific to e-mail, and stupidly requires an additional web server on every SMTP server.
> and falling
It’s actually rising again, according to your sources.
In recent years, you seem to have dropped all pretense of arguing against the specifics of DNSSEC, which is good, but you have then resorted to argumentum ad populum. However, this is a bad form of argumentation unless you can explain why DNSSEC is not as popular as it could be. For instance, what happened in late 2023 to cause the dip?
Yeah, that's what I do. If you use anything other than Cloudflare, though, it's really, really hard to get the authentication plugins going on every different web server. Every server supports a different subset of providers and usually you have to install the plugins separately. It's a bit of a nightmare. But once it's dialled in it's OK.
I didn't like this approach because I don't like to leak information about my internal setup, but I found that you don't even have to register your servers in public DNS, so it's OK. Just the domain has to exist. It does create very temporary TXT records, though.
I use Dynu.com as my DNS provider (they're cheap, provide APIs and very fast to update which is great for home IP addresses that may change). Then, to get the certificates, I use https://github.com/acmesh-official/acme.sh which is a shell script that supports multiple certificate and DNS providers. Copying the certificates to the relevant machines is done by a custom BASH script that runs the relevant acme.sh commands.
One advantage of DNS challenge is that it can be run anywhere (i.e. doesn't need to run on the webserver) - it just needs the relevant credentials to add a DNS TXT record. I've got my automation wrapped up into a Docker container.
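For reference, the acme.sh side of a DNS-01 wildcard issuance looks roughly like this; the dns_dynu hook name and the credential variable names are assumptions here, so check acme.sh's dnsapi docs for your provider:

    # Provider credentials acme.sh expects (variable names are provider-specific):
    export Dynu_ClientId="xxxx"
    export Dynu_Secret="xxxx"

    # Issue a wildcard cert via the DNS-01 challenge:
    acme.sh --issue --dns dns_dynu -d example.com -d '*.example.com'

    # Install it where the web server expects it and hook a reload:
    acme.sh --install-cert -d example.com \
      --key-file       /etc/ssl/example.com.key \
      --fullchain-file /etc/ssl/example.com.crt \
      --reloadcmd      "systemctl reload nginx"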
Not OP but I have a couple of implementations: one using caddyserver[0] as a reverse proxy in a docker-compose set up, and the other is a Kubernetes cluster using cert-manager[1].
> The Name Constraints extension lives on the certificate of a CA but can’t actually constrain what a bad actor does with that CA’s private key
> Therefore, it is up to the TLS _client_ to verify that all constraints are satisfied
> However, as we extended our test suite beyond basic tests we rapidly began to lose confidence. We created a battery of test certificates which moved the subject name between the certificate’s subject common name and Subject Alternate Name extension, which mixed the use of Name Constraint whitelisting and blacklisting, and which used both DNS names and IP names in the constraint. The result was that every browser (except for Firefox, which showed a 100% pass rate) and every HTTPS client (such as Java, Node.JS, and Python) allowed some sort of Name Constraint bypass.
That’s the danger of any solution that requires trusting a self-signed CA. Better to just trust the leaf certificate, maybe make it a wildcard, so you only have to go through the trust-invalid-cert dance once?
I want to be able to import a cert into my browser and specify what to trust it for myself. “Only trust this cert for domain.com”, for example.
The name constraints can give me a hint about what it’s designed for, but if I import a cert to MITM devsite.org, I don’t want that cert working for mybank.com.
I did some research, write-up and scripting about the state of X.509 Name Constraints, so that people you give your CA cert to don't need to trust you not to MitM them on other domains.
Packaged into a convenient one-liner to create a wildcard cert for the new .internal TLD.
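For the curious, a name-constrained CA of roughly that shape can be put together with stock OpenSSL (1.1.1+ for -addext, bash for the process substitution); this is a hedged sketch of the general technique, not the repo's exact script:

    # Root CA that clients will only trust for names under .internal:
    openssl req -x509 -newkey rsa:2048 -sha256 -days 1825 -nodes \
      -keyout internal-ca.key -out internal-ca.crt \
      -subj "/CN=Internal CA" \
      -addext "basicConstraints=critical,CA:TRUE" \
      -addext "keyUsage=critical,keyCertSign,cRLSign" \
      -addext "nameConstraints=critical,permitted;DNS:.internal"

    # Wildcard leaf under that constraint, signed by the CA:
    openssl req -newkey rsa:2048 -nodes -keyout wild.key -out wild.csr \
      -subj "/CN=*.home.internal"
    openssl x509 -req -in wild.csr -CA internal-ca.crt -CAkey internal-ca.key \
      -CAcreateserial -days 365 -out wild.crt \
      -extfile <(printf "subjectAltName=DNS:*.home.internal")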
For example, my government uses a non-standard CA and some websites rely on it. But importing the CA obviously makes them able to issue google.com and MITM me if they want to. And they already tried, so trust is broken.
I imagine something like generating a separate name-constrained certificate, signing the existing CA with this name-constrained certificate (I think it's called cross-signing or something like that), and importing that into the OS, expecting that the browser will use the name constraints of the "Root-Root" certificate. Could it work?
Yes, I do this at work to restrict my company CA to company servers [1]. You generate your own CA and cross-sign the other cert with any constraints you want. It works great, but requires some setup, and of course now you have your own personal CA to worry about.
[1] Yes, company is ok with it, most of my team does it, and this makes everyone more secure. Win-win.
I assume that the mentioned “some setup” involves not only distributing the new root CA, but also somehow prepopulating the old cross-signed certificate, as the services know nothing about it and thus will not send it in their cert chain. Or am I overlooking something?
Niklas, if you are reading this - it was a pleasure to interview with you some 6 (or so) years ago :) thanks for the script and the research, I will make use of it.
Looks good, but I want to MitM my network. I want youtube.com to redirect to my internal server that only has a few approved videos. My kids do some nice piano lessons from youtube, but every time I let them, they wait until I'm out of the room and then switch to something else. There are lots of other great educational videos on youtube, but also plenty to waste their time on. (I want this myself as well, since I won't have ads on my internal youtube server - plus it will add an extra step and thus keep me from getting distracted by something that isn't a good use of my time to watch.)
Increasingly that kind of requirement puts you in the same camp as oppressive nation states. Being a network operator and wanting to MitM your DNS makes you a political actor. Devices you paid for, but don't actually own, will end-run your efforts by using their own hard-coded DNS servers. (See https://pc.nanog.org/static/published/meetings/NANOG77/2033/...)
Fortunately I own my firewall. Though mostly I'm talking about linux machines that I own and control the software on.
Though I fully understand I'm in the same camp as oppressive nation states. But until my kids get older I'm in charge; I need to set them up for success in life, which is a complex balance of letting them have freedom without allowing them to make too many bad decisions. Not getting their homework done because they are watching videos is one of the bad decisions I'm trying to prevent.
Importantly, this is a reasonable thing to do because sites like Youtube are designed to draw their attention away from whatever important thing they're doing so that Youtube can serve them advertisements. So anyone thinking a parent trying to control what their kid watches is oppressive somehow is pretty deeply in the wrong. As a parent myself I would consider doing this to keep my son from falling into the traps that are set by giant multinational internet companies like Google to get him to form habits around Google instead of habits around what he wants or needs out of life.
So really instead of thinking about this like "parents are acting like nation states" I think it's much better to think of it like "parents are countering corporate nation states."
It's totally reasonable. My position is that I think network operators and owners should be able to do what they want. I was just pointing out that virtually any time a network operator or owner wants to control the traffic in their network, a certain crowd comes out of the woodwork and decries abuse by bad actors.
Size and scope matters. Large network operators that have a de facto monopoly on Internet access in many places should absolutely not be able to do what they want, but this is a function of their market power, not something inherent to any network operator.
I was thinking more about embedded devices that people buy but don't own (Chromecast devices, "Smart" home doodads, etc). You can stick them in a VLAN and filter their access to the Internet but they're inscrutable inside and have opaque, encrypted communication with their "mother ship".
I think your goal with your kids is laudable. I do the same thing. It limits the ability to use off-the-shelf devices and software, and I'll get more flak about it as my daughter gets older and is excluded from the "social" applications that I can't allow her to use because they're closed-source and not able to be effectively filtered. I'll burn that bridge when I get there, I suppose...
> Devices you paid for, but don't actually own, will end-run your efforts by using their own hard-coded DNS servers.
Not just devices; Jetbrains software has hardcoded DNS too. I've had to resort to blocking its traffic entirely because of the sheer number of servers and ports it tries in order to work around my DNS blocking; now I allow traffic only during license/update checks. I'm sure other large vendors do something similar.
With MikroTik, and presumably other vendors, you can force DNS to your own DNS server. I do this so I can Pi-hole everything and see what sneaky things devices are doing.
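On a plain Linux router the same trick is a couple of NAT rules; a sketch assuming the Pi-hole sits at 192.168.1.2 and the LAN interface is br-lan (MikroTik has its own syntax for the same idea):

    # Redirect any plain-DNS traffic from LAN clients to the local Pi-hole,
    # excluding the Pi-hole itself so its upstream queries don't loop:
    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.2 -p udp --dport 53 \
      -j DNAT --to-destination 192.168.1.2:53
    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.2 -p tcp --dport 53 \
      -j DNAT --to-destination 192.168.1.2:53

    # Optionally reject DNS-over-TLS so clients fall back to the redirected resolver:
    iptables -A FORWARD -i br-lan -p tcp --dport 853 -j REJECT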
Sort of. That doesn't help if they're doing DoH and you're unwilling to MitM all the SSL (and, if you are, then you have to worry they've pinned certs).
What services are you self hosting for local YouTube? Right now I just hand pick videos and they get lifted by my plex server, but having a nice route to my internal YouTube will be great for when my kids get to that age!
Currently I'm not. I would like to, but I'm not sure how to make it work. If I have a youtube video that I downloaded, I can make youtube.com point to my own web server, but everything after the domain needs to point to the correct things to make it play and I'm not sure how to do that (I also haven't looked).
You'll probably have an easier time blocking youtube (or the Internet in general) on the devices in question and running something like Jellyfin locally to serve your library.
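If it helps, building that library can be as simple as pointing a downloader at each approved video and letting Jellyfin index the folder (yt-dlp and the output path here are assumptions, just one way to do it):

    # Fetch an approved video into the folder Jellyfin serves:
    yt-dlp -o '/srv/media/kids/%(title)s.%(ext)s' \
      'https://www.youtube.com/watch?v=VIDEO_ID'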
The hard part is that my kids' online piano lesson embeds youtube videos for the lesson. They have enough other content that I paid for an account for my kids, but the videos point to youtube, not someplace they host, which means I can't just block all of youtube. This is a common way to do things - my kid's school often sends them to some youtube video for some lesson.
Of course once you finish one youtube video it switches to a "you might want to watch next" which is not the educational content I want them on.
If you control the client (e.g. you can use librewolf), then you could do something like this greasemonkey script to rewrite youtube iframes into a locally hosted video file with the same name as the youtube video id:
The event listener might have an annoying perf impact, and if the sites with the embed don't use javascript to build the page, you might be able to leave it off.
Dumb question: lots of folks are talking about name constraints not being understood by old clients since they don’t understand that extension. But is this not exactly the point of critical designation in extensions: is the client not supposed to fail if it comes across a critical extension it doesn’t understand?
For one thing, the fact something's supposed to fail on unexpected input doesn't always mean it will fail.
For another, some implementations thought they understood name constraints, but had bugs in their implementations. For example, applying name constraints correctly to the certificate's Subject Alternate Name but not applying them to the Common Name.
As for the overall X.509 ecosystem (not limited to name constraints), the certificate validation logic of common clients accepts various subtly, but completely, invalid certificates, because CAs used to sign (or even use as root certificates) various kinds of invalid certificates. One can probably even find a certificate that should logically be trusted but isn't even a valid DER encoding of the (TBS)Certificate.
I went down this path, but installing CA certificates is a pain. There isn't just one trust store per device, there are many. Make your own CA if want to find out how many there are...
Like others I went with just having my own domain and getting real certs for things.
Is "name constraints" new? I wanted to do something similar a decade or two ago and found I'd have to be trusted for all domains, which I wanted to avoid.
It's been around since ~2008 when rfc5280 was released.
But it's long been stuck in a cycle of "CAs won't issue name-constrained certificates because not all clients support it properly" and "Clients don't bother to support it properly because CAs won't issue name-constrained certificates"
And even if today's clients all support it properly - there will always be some users running ancient smart TVs and android phones that haven't received a software update in a decade.
A decade ago, name constraints was available, but support wasn't really there. I was looking into making a company CA for internal tools, but I didn't want to be able to MITM employees going to unrelated sites, and I couldn't mandate specific browsers, so we ended up using a commercial CA for everything.
It looks like support is fairly wide now, but you'd probably still need to test and confirm it works with all the tools you want, and there's still some risk to users in case the constraints don't catch everything.
* Some functionality is off-limits for sites loaded via HTTP. (Another commenter mentioned clipboard access.)
* Browsers will display annoying warning symbols whenever you try to access sites via HTTP.
* If you live in a shared living space such as an apartment you probably don't have control over your home network.
* Even if you have control over your network, a single compromised IoT device is enough to sniff your internal network traffic, assuming WPA2. (Probably not super likely tbh.)
It doesn't even have to be on the router, just the same network segment plus some ARP spoofing tricks (assuming your switch doesn't have ARP spoofing protections or they haven't been enabled) could be enough to MitM a connection.
I travel between networks with my phone and laptop. Software will ping out using whichever network I'm on, trying to connect to its backend. If I connect to hostile/compromised WiFi, those connections are at risk.
Can't any client on the same wifi read your traffic by just putting their wifi card into promiscuous mode? Obviously depends on who uses your wifi and your threat model, but that seems like a problem.
One use case I hit just recently is web apps hosted on my internal network without HTTPS: Firefox won't let me click the "copy to clipboard" buttons on those pages.
One headache I've had with internal LE certs is bots abusing the CT logs to attempt probing internal names. As a result, I started requesting wildcard certs from LE. Somehow that feels less secure, because even though I'd probably recognize abuse of the cert - friends and family wouldn't. It's the same reason I don't want less technically adept friends and family having to deal with my own CA. Install one arbitrary cert ... what's the problem with this random, sketch one I downloaded?
No, why would I want that? Do I not trust my switches and routers to send the packets to the right host? Do I not trust my DNS to send the right address for the hostname? Do I not trust the other devices on the network to not be sniffing? Okay maybe that one.
Browsers could stop with their false warnings about password forms being insecure then I'd be happy.
That's a different question now. Before it was "do I not trust my switches and routers?". Now the question seems to be "do I trust all devices on my network?".
If you have guests on your network, surely they have a different level of trust.
Anyway: if you trust _all_ devices on your network (including routers, PCs, phones, printers, light bulbs, air conditioners, thermostats, doorbells, scales, TVs, set-top boxes, coffee machines), and the software running on them to be bug-free, unrootkited, unhackable, and up-to-date, and their vendors to be perfectly honest and reliable, and those vendors' employees, hardware, and software supply chains to be perfectly trustworthy, forever, then yes, you don't need any form of cryptography in your network.
Otherwise, you need something like TLS for it to be secure.
Yeah I guess I did move the goalposts. Phones are a grey area. Printers and everything after are a no-no and people would be insane to connect them but the only people who know that are the only people who would use this or be capable of using this.
If your audience is North American English speakers, on the login page, use "For the full experience".
Using fullest in the context of 'lives life to the fullest' is grammatically correct but English is strange and for this context you'd want the former.
One problem with wildcard certs is that any host can impersonate any host within the wildcard zone.
It would be great to be able to get a certificate for an intermediary CA, that is limited to one domain. And then use this CA to issue certs as needed.
If only there was a system to hint in DHCP (or a v6 RA) what certificate authority serves the .internal domain for the current network.
Devices would treat .internal as special and would validate that the hinted CA only applied to that subdomain, and would only use that CA when connected to the corresponding network.
Or maybe the DHCP/RA could hint at keys to use to validate DNSSEC for the internal DNS server, and the CA cert could live in a well-known TXT record…
Then you could have all devices work with internal certs out of the box with no config. One can dream…
I wonder if it's just better to not deal with name constraints and self-signed certs. Let's Encrypt issues certs for domains with DNS validation.
so why wouldn't something like this work:
- designate sub domain for private network usage (ie, *.internal.example.dev)
- issue certificates using ACME compatible script/program (ie, lego) for devices (ie, dev1.internal.example.dev, dev2.internal.example.dev)
You don't have to deal with adding self-signed certs to trust stores on devices, and you don't have to deal with the messiness of name constraint compatibility across apps. Just plain ole TLS.
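A rough sketch of that flow with lego; the cloudflare provider and the CLOUDFLARE_DNS_API_TOKEN variable are assumptions, since each DNS provider lego supports wants its own credentials:

    # Credentials for the DNS provider hosting example.dev:
    export CLOUDFLARE_DNS_API_TOKEN="xxxx"

    # Issue a cert for one internal device via the DNS-01 challenge:
    lego --email admin@example.dev --dns cloudflare \
      --domains dev1.internal.example.dev run

    # Renew later (e.g. from cron):
    lego --email admin@example.dev --dns cloudflare \
      --domains dev1.internal.example.dev renew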
This is about creating and using a domain-restricted CA, which is then used to create server certificates. Point being that your (tech savvy) friends are willing to install the CA because it can only ever validate some specific subdomains (and not MITM the entire internet).
I'd rather see something with 90 days and ACME. Not sure why there isn't a simple certificate management tool that does this and maybe even brings a simple UI?