Running one’s own root Certificate Authority in 2023 (wejn.org)
216 points by jandeboevrie 12 months ago | 161 comments



I really like step[1] and step-ca[2] for this, it's a lot less fiddly than having to drive OpenSSL directly.

1. https://github.com/smallstep/cli

2. https://github.com/smallstep/certificates
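To give a flavor, here's a minimal sketch of both tools (names and lifetimes are illustrative, and exact flags may vary between versions):

    # ad-hoc certs with the step CLI alone
    step certificate create "Example Root CA" root.crt root.key --profile root-ca
    step certificate create internal.example.net leaf.crt leaf.key \
      --profile leaf --ca root.crt --ca-key root.key --not-after 2160h

    # or run step-ca and let clients renew over ACME
    step ca init
    step ca provisioner add acme --type ACME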


Wouldn't it be nice if LetsEncrypt could issue you a (1) name constrained, (2) 90-day limited intermediate CA with just the (3) DNS-01 challenge? I argue that such an intermediate CA would have no more authority than a wildcard cert which you can get today, so they should be able to issue it. [1] Everything supports name constraints now, which used to be an issue but isn't anymore. [2]

Then stick it in step-ca and issue all your certificates with internal ACME.

This would solve a lot of problems, such as leaking private hostnames in the certificate transparency log, or hitting issuance rate limits on LE servers.

[1]: https://news.ycombinator.com/item?id=29811552

[2]: https://bettertls.com/


> Wouldn't it be nice if LetsEncrypt could issue you a (1) name constrained, [...] intermediate CA

Unfortunately, name constraints don't work in all clients. Name constraints are an 'extension' to the standard and supporting them is optional.

According to [1]

> Before relying on this solution to protect our users, we wanted to make sure browsers were really implementing Name Constraints verification and doing so correctly. The initial results were promising: each of the browsers we tested (Chrome, Firefox, Edge, and Safari) all gave verification exceptions when browsing to a site where a CA signed a certificate in violation of the constraints.

> However, as we extended our test suite beyond basic tests we rapidly began to lose confidence. [...] The result was that every browser (except for Firefox, which showed a 100% pass rate) and every HTTPS client (such as Java, Node.JS, and Python) allowed some sort of Name Constraint bypass.

And even if someone went around and fixed every TLS library in the world, there'd still be millions of devices out there that never get security updates, like smart fridges and ancient android tablets.

There's a major chicken-and-egg problem: Because nobody will issue name-constrained certificates, clients don't have much reason to prioritise it as a feature. And because name constraints don't work perfectly everywhere, nobody will issue name-constrained certificates.

I doubt we'll ever see a trusted CA issuing name constrained intermediate CA certificates.

Of course, they could offer an API where if you've passed auth for *.example.com you can issue a cert for any subdomain below that without any further validation...

[1] https://netflixtechblog.com/bettertls-c9915cd255c0


That post is almost 7 years old. My second link is to the test suite mentioned at the end; if you look at it, you'll see that name constraints are now universally supported. I don't think this take is valid anymore.


And for old devices, Let's Encrypt should mark nameConstraints as a critical extension, so that old devices that don't understand it just fail to accept the certificate rather than ignoring the constraint - that way it can't be used rogue.
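For reference, this is roughly the extension block that implies - a hypothetical openssl x509v3 section, not anything Let's Encrypt offers today:

    # illustrative openssl.cnf extensions for a name-constrained intermediate
    [ v3_constrained_ica ]
    basicConstraints = critical, CA:TRUE, pathlen:0
    keyUsage         = critical, keyCertSign, cRLSign
    nameConstraints  = critical, permitted;DNS:example.com, permitted;DNS:.example.com

Marking it critical means a client that doesn't implement name constraints must reject the chain instead of silently ignoring the restriction.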


I don’t necessarily disagree, but to point out: issuing an intermediate CA does change the authority model a bit, insofar as it turns a single trusted issuance into a windowed lease to perform arbitrary issuances.

On a practical level, the latter is more logistically complex (and needs to be reconciled with other hard-fought battles, like CT). Given that it’s roughly the same as a wildcard certificate in terms of end-user use, it’s IMO understandable that this isn’t a priority for the CA ecosystem to support.

(The use case of circumventing CT is probably a non-starter as well. The Web PKI doesn’t want CT loopholes!)


Nobody has ever adequately explained how a wildcard cert presents a meaningfully different security profile than a name constrained CA. Wildcard certs used to identify a subdomain are not logged in CT and nobody is freaking out about it. And if your answer is "because nested subdomains" you should also explain the differential risk of nested subdomains compared to wildcard certs, not fully qualified certs. I don't see how it's meaningfully different.


One accepts an arbitrary subdomain as a matter of verification policy, and the other has broader PKI implications: how should verifiers handle name constraints with multiple prospective paths? How should they handle CAs that issue longer-lived certificates than the CA/B guidelines permit? CAs that allow end-entity certs with known insecure algorithm choices, etc?

In short, allowing user-controlled name-constrained intermediate CAs opens up multiple cans of worms that the ecosystem is not currently prepared to handle. Presenting a compelling argument for these user-controlled CAs means explaining (and getting buy-in) for solutions to the above.


Thanks, I appreciate a response with some substance.

> name constraints with multiple prospective paths

Can you expand on this for me? Is this really a problem?

> longer-lived certificates than the CA/B guidelines permit

The iCA cert would only be valid for 90 days (or less, they could make it 30 or 15 days). Would a cert be valid if the iCA cert that issued it is expired?

> known insecure algorithm choices

Disallow bad algorithms by policy. Evidence of issuance using a bad algorithm gets the iCA revoked and the ACME account locked out. Many clients don't allow insecure algorithms by their own policy.

If you're securing your internal network with bad algos but it never touches the wider internet, does it make a sound? Would this be better or worse than installing self-signed root CAs everywhere?


> Can you expand on this for me? Is this really a problem?

Sure, happy to: the issue here is that the user’s root might present multiple prospective validation paths, none of which have to agree on the total constructed set of permitted/denied name constraint subtrees. Path validation also doesn’t impose an order, meaning that two clients can (correctly!) disagree on the set of names that a validation path can issue.

Is it really a problem? Maybe: the various Web/Internet PKI standards aren’t clear about how to handle situations like this, and things that change the Web PKI to rely more heavily on correct extension handling have historically revealed lots of fragile/noncomplying clients.

> Would a cert be valid if the iCA cert that issued it is expired?

Nope, I made a mistake here: this wouldn’t be an issue, for the reason you’ve said.

(Having an ICA that expires every 90 days would impose other logistical challenges, however: you’d either need to include it in the server response along with the leaf, or lean heavily on an extension like AIA to retrieve the current ICA certificate.)

> If you're securing your internal network with bad algos but it never touches the wider internet, does it make a sound? Would this be better or worse than installing self-signed root CAs everywhere?

Probably no worse for the private network scenario, although I think the proposed solution here ends up confounding the public and private cases: the certs issued under a NC’d ICA would also be valid on the public Internet, and may end up intentionally or unintentionally depending on this behavior.

If I had to TL;DR my opinion here, I think it would be: “all of these things are solvable or addressable at the policy level, but the Web PKI has historically been brittle to assumptions that currently standardized things are correctly respected by the client ecosystem” :-)


It would be impossible to enforce that issued certs are submitted to certificate transparency logs, and this would break the security model around the CA system.


I don't understand the difference between a wildcard certificate and an intermediate certificate here.

An intermediate certificate is more secure, because you can use different certificates for different subdomains instead of sharing the wildcard certificate's private key with every subdomain server.

Whether the leaf certificates hit CT or not is not relevant at all. What matters is whether the intermediate certificate hits CT.


A wildcard covers one single level of sub-domains. An NC'd CA can be used to issue for anything: name constraints are 'enforced' on the client side, and many clients don't support them. Running a public CA - even with a name-constrained one - is a challenge to do properly.


One of the parents claims that everything supports name constraints now, so my suggestion was with this assumption in mind.


Wouldn't the name constrain address this?


It would address the policy side, but not the transparency side (which is arguably critical to the Web PKI’s current security model).


I thought that the main problem that certificate transparency addresses is the fact that multiple certificate authorities can all issue certificates for all domains (i.e. there is an overlap in the name space).

So when a CA gets compromised in some part of the the world (or gets manipulated by a state actor that effectively runs the CA) somebody could just emit a valid certificate for a domain you control.

In the case of private name-restricted intermediate CAs, that can be addressed by logging the issuance of the intermediate CAs.


> I thought that the main problem that certificate transparency addresses is the fact that multiple certificate authorities can all issue certificates for all domains (i.e. there is an overlap in the name space).

Yep, that's a "main" thing that CT is intended to address. But it has other useful applications as well: because anybody (including website operators) can monitor the transparency log, they can additionally assert that their own trusted CA is not issuing more than one certificate for the domains they control.

Logging only the name-constrained intermediate CA would make third-party monitoring less useful. It would also have a perverse effect on attacker incentives: the attacker now only needs to compromise a user-controlled CA with a likely weaker security posture.

(That being said, maybe these tradeoffs are worth it! I don't have a strong opinion about that, other than my impression that the status quo with Let's Encrypt + intentionally leaking a few private subdomains isn't really that bad.)


Yeah, the tradeoff here is that this is intended to replace the use of a wildcard certificate, which wouldn't benefit from logging every hostname anyway.


And they support ACME. I've been running a smallstep CA off of a Nitrokey HSM 2 [1] w/ PKCS #11 for my homelab for a few years now.

1. https://shop.nitrokey.com/shop/product/nkhs2-nitrokey-hsm-2-...


Can you please give bullet points of your CA setup and more broadly your home lab? It sounds … awesome.


Nice to see open source hardware token usage in the wild.


I completely agree that dealing with openssl is fiddly at best when you roll your own CA. But if your org knows how PKI works it's not a big deal: a bash script or two in the simple case, or a Flask app at the most complex.

If your org doesn't know how PKI works shouldn't you be paying a vendor that does?


That's really neat, thanks for the pointer. ;)


I eventually need to publish an article about how to run an HSM backed root CA on the cheap with m of n auth.

Using nitrokey and some glue scripts you can get the cost below $500. If anyone is interested, let me know.


I've just started down that route. I've got the Nitrokey HSM 2 in the mail, and have heard the advice on using two levels (root on the key, and an intermediate on the device for easier revocation). I mainly want to issue client certificates so that I can expose internal sites on the public Internet via proxy without having to require a VPN for all of my users, though I'm also interested in certificate-based SSH.


Yes, please! I would be interested. Currently i'm fiddling around with vault as an ICA, so this sounds like a good next step


+1 sounds like an interesting read


Please do


an article like that would be great!


“Cultural” technical issues are so frustrating to me. A certificate is fundamentally just a type of credential, like a password, but for historical reasons they’re treated like getting citizenship papers. There’s always this ceremony even in scenarios where it makes zero sense — such as internal-use certificates used for a gRPC API server behind a load balancer.

Why - for the love of God why - can’t I just obtain a cert like this directly out of a secret store such as an Azure Key Vault!?

These things are already full hardware security modules (HSMs) with all of the capabilities required to run just about anything short of a public Root CA and maybe even that too.

But no.

NO!

Script it yourself. Make a root cert, “upload” it, make a cert, sign it, upload it, link it, renew it, re-configure it, and on and on. Oh… you wanted a CRL too? A 1kb file? Ha-ha! No. Make it and host it yourself!!

It’s absurd.

So many services depend on a simple CA->cert setup: VPNs, API auth, load balancer back-ends, clusters, etc…

But mark my words: no cloud will have a trivial turnkey solution this decade.

This is because running a CA is culturally accepted to be a hard problem. It is! If you’re DigiCert. It isn’t if you’re building a three-server internal-use cluster. But the problem is hard, you see? It's an accepted fact! Everyone knows it! Ceremony is required. We can’t just hand your server a 1kb credential file! That would be… unconventional!

It’s just not the way things are done, so stop asking.


It’s about damn time we were able to get domain-limited intermediate signing certs from a public CA and issue our own “real” certs. This is fully supported in the standards. Is anyone offering this product affordably yet?


The other “half” of the issue is getting the OSes, browsers, and devices to support the standard as well. That’s a whole other can of worms.


Like gorkish wrote: No new standards or standard changes affecting any of those necessary. It's only a matter of will and culture on part of CAs.


Name constraints are an optional feature in the standards. A client can ignore the constraints and be completely standards compliant.

Should the CAs issue intermediate certs that are only secure if a client implements an optional feature?

And even if most web browsers support name constraints properly - who knows if that cheap network webcam does, or that old mail client, or that 20 year old retro PC game?


This isn’t strictly true.

If you want to uphold the name constraints in your CA cert, mark the field as critical. At that point clients that don’t understand them should fail validation of the CA cert.


So it may have limited use-cases today if you require full compatibility for all clients. One example where it does work: controlled internal networks like the one discussed in the article.

Just like you presumably already wouldn't issue LE certs when you need to support clients with ancient CA bundles.

How do you think TLSv1.3 ever got rolled out?


Domain limiting is implemented using X.509 nameConstraints, which the last time I checked were not supported on Apple devices.

Edit: this has been fixed in macOS 10.13.3. I don't know about iOS.


macOS 10.13.3 was released in January 2018, well over five years ago. Can we please stop repeating the "can't use it, it's not supported" line?


Interesting idea. What use cases would this make easier or even possible? At first, it sounds like more work (you’re now your own CA but without full freedom of a truly self hosted one).


The main advantage is that this CA and all downstream certificates would be globally-trusted (limited to the domain), which is not the case for a custom CA.

Security-wise it shouldn't be any worse than wildcard certificates which are already a thing. It would actually improve things, because the user can now issue downstream certificates much more granularly without having to interact with the root CA (so you can issue fully offline for an intranet, or issue extremely-short-lived certificates that would otherwise run afoul of root CA's rate-limits).


It would improve things for everybody but the certificate authorities.

They're selling something with a marginal cost of zero for $50 each. A wildcard certificate costs more not because it is materially different or "harder" to issue, but because it replaces many $50 certificates. Thus, it "must" cost more, or everybody would just use wildcard certificates everywhere and reduce profits at the large public CAs.

It is actually possible to get a domain-specific CA like the one you're thinking of. I saw one at a large department of education. It allowed unlimited issuance of certificates such as HTTPS, mail-signing, document-signing, and some other types that could be locked to a DNS domain. However, there was still a cost per certificate and the up-front cost was huge, something like $100K.


I made a comment up-thread, but name-constrained CAs can issue for anything. It's enforced client-side, and not supported in far, far too many places to work. You'd be giving everyone the ability to issue for anything. Not to mention that running any kind of public CA is harder to do properly than most people think.

I get the negativity towards public CAs, but much of what you said isn't quite right, either.


The only "safe" way to introduce these would be to make a certificate format that's intentionally incompatible with existing implementations; that way only new implementations (which are aware of the domain constraint) will accept it where as old implementations would just reject the certificate as invalid/corrupt.


Hopefully Let’s Encrypt continues to put the pressure on them, so eventually the other cert authorities start offering things worth paying for.


LetsEncrypt already destroyed that "$50 ceremony for $0 cost" business model.


This is a decent rant, and I mostly share your frustration.

But at least GCP and AWS have certificate authority products which essentially do work the way you want them to:

https://cloud.google.com/certificate-authority-service

https://aws.amazon.com/private-ca/

Azure may well have one too, I just don't use their service.


You just answered a rant about too much ceremony by linking to an entire service that costs a staggering USD 400 / month. That’s the cost of an FTE in like half the world… to issue what are essentially “fat passwords”.

Imagine with a straight face trying to sell someone a “password generator service” when in reality that’s just a one line script snippet.

The disconnect between the physical reality on the ground and how it is treated by industry is just absurd.

Ref: https://aws.amazon.com/private-ca/pricing/

Getting a signed cert issued should be a one-liner script referencing an AWS KMS or an Azure KeyVault!


Azure will let you tap into a HSM too


> Ceremony is required.

Ceremony, as in “a well established, rigorously implemented, meticulously documented, and diligently audited process” is required to establish and maintain trust. A CA promises that folks showing up bearing their certificate _are who they say they are_ - this doesn’t mean it is perfect (there are plenty of cases where public CAs went wrong) and some random person can place their trust in that assertion.

The whole _point_ of a PKI is trust, and why would this be any different for internal systems?


Trust can be established in many different ways. Through paperwork (literally the case for some public CAs!), via automation, or some other means.

The difficulty in running a CA at a very large scale (public or private) is that the root certificate revocation becomes prohibitively difficult. For this reason, its security is critical and much ceremony is warranted.

A CA doesn’t have to be a giant basket with millions of eggs in it. I have scenarios where I need a CA cert just so it can sign two (2) leaf certificates to make some load balancer happy, or to establish a site to site VPN or whatever. This is just "marking" the issued certs with a common attribute so that a load balancer can trust 'n' servers without having to have 'n' distinct certificates in its trust list.

That scenario does not need the same ceremony as DigiCert or the Microsoft kernel driver signing root certificate. It’s just a “shared secret” in a Cloudformation/Terraform/Bicep template.

If that CA gets compromised, I can just re-run the template to generate and deploy a new one! No ceremony. Just press play.
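To make it concrete, the entire "ceremony" for that scenario can be a sketch like this (placeholder names, plain openssl in bash):

    # throwaway CA whose only job is to give two backends a common trust anchor
    openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
      -keyout ca.key -out ca.crt -subj "/CN=lb-backend-ca"

    for host in backend1.internal backend2.internal; do
      openssl req -newkey rsa:2048 -nodes -keyout "$host.key" \
        -out "$host.csr" -subj "/CN=$host"
      openssl x509 -req -in "$host.csr" -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 90 -extfile <(printf "subjectAltName=DNS:%s" "$host") -out "$host.crt"
    done
    # the load balancer trusts ca.crt; if anything leaks, delete and re-run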

Not everyone gets married like Beyonce at an exotic island destination with 500 rich & famous guests. Some people just go down to the local marriage registrar's office, get the paperwork done, and go home.


> The difficulty in running a CA at a very large scale (public or private) is that the root certificate revocation becomes prohibitively difficult. For this reason, its security is critical and much ceremony is warranted.

Unless everything end-to-end, from the CA to the final certs, is managed by yourself or your immediate team, the scale is irrelevant: some ceremony would be required, as would some processes that are internal to the team. My personal approach to avoid much of this is to use really short-lived certs to avoid the revocation theatre, but that doesn’t avoid the need to ensure that the CA is fundamentally secure.

> If that CA gets compromised, I can just re-run the template to generate and deploy a new one! No ceremony. Just press play.

Indeed! That, and re-distribute and implement trust for your CA Chain to all machines that need it. Your examples work fine (and I agree with) for small-scale use-cases, but if you run PKI for anything except yourself, you have to take care of the P for the KI.

Look, I am no fan of the rent-seeking bullshit the big public PKI vendors make us go through, and the incredibly high cost per certificate model. Charge me a reasonable fee for verification, and charge me a reasonable fee to set up my certificate vending endpoint, and be done. The same goes for internal use-cases. But the fact remains that verification and process are required. Fortunately, ACME for internal use and sane APIs and workflows such as those Vault or smallstep provide make things very easy and cost effective. As for $400 per month for Azure certificates? Run your own instance, with your own (supposed) management processes, and see how it tallies up…


A password is a secret. A certificate is fully public (it’s useless if not). I don’t see how they’re similar in that dimension. Or how certificates are possible without a third party (a core reason for all the ceremony). Passwords get away with not needing a third party involved because they’re a prearranged process. Certificates aren’t, I need to be able to hit any website at any point, for the first time.


The key phrase here is internal use. Think authentication between two micro services, not HTTPS for someblog.com.

The third party in this case is just a file: the internal CA key.

Try to think of this in the following terms: the issued certs are just signed tokens, nothing more. They’re also conveniently a public-private key pair, but that’s not the point.

The point is that if you used a 1kb random password for service to service auth and someone tried to sell you a $400/mo service to generate them, you’d laugh in their face and then have security escort them from the building.

Sprinkle a tiny bit of cryptography on top and suddenly people think $4,800 annually is a bargain for 1kb passwords.

I can buy a decent used car for that kind of money.


There are ways to use internal CA certificates to authenticate microservices, though they're mostly based on Kubernetes. Take Istio's solution, for example: https://istio.io/latest/docs/tasks/security/authentication/m...

Very few people still use TLS client authentication, but it does work. You can definitely get it up and running for less than 400 dollars per month. The magical keyword for Google here is "mutual TLS"

If you're using a Windows based system, you can do this stuff automatically through Active Directory: https://learn.microsoft.com/en-us/windows/security/operating...

The thing is, credential management is very difficult, and people are willing to pay up to solve very difficult tasks. If you build a (non-Kubernetes) certificate management solution, you'll probably want to sell it for big bucks as well. Generating certificates is just an openssl command line on a timer; the difficulty is getting those certificates deployed in the right places automatically. It's all possible through some basic scripting, but to do it right you need to solve more than just the certificate part.


> The point is that if you used a 1kb random password for service to service auth and someone tried to sell you a $400/mo service to generate them, you’d laugh in their face and then have security escort them from the building.

Go ahead, run an internal secret store that issues properly trustworthy internal certificates. It isn’t hard, but neither is it cheap. You oversimplify and gloss over a few _really_ important aspects of why and how certificates work. A certificate _acting like a password_ in some cases doesn’t mean it actually _is_ a password, and if you don’t really need the additional functionality to the point where the cost outweighs the delivered value, you are using the wrong tool for the job.

Running a CA is not a technical challenge. Running a _trustworthy_ (there is that word again) CA is process heavy and involves quite a few people doing work on a regular basis, and without those processes you may as well just use something else - like an untrusted CA, or some kind of token issuing service (which, shockingly, also costs money to run properly).

A few really good options around DIY CAs were mentioned in this discussion, smallstep is pretty good for example. Nothing (except possibly your $org’s security policies) prevents you from throwing that up somewhere and start vending certs.

The fact that it is an internal CA to be used for internal purposes makes no difference, unless you believe that “everything internal is trusted because it is internal” in which case many other discussions on a very wide variety of subjects are due.


Internal here means I don’t need the certificates trusted by a third party.

Having built many Enterprise PKI systems — including Smart Card auth — I do know the complexity involved. I can prattle on for hours about how the Key Recovery Agents should be distributed and stored, and how “offline” means no network cables at all you dimwit.

I also know that there is virtually no difference between a root CA certificate and a signed leaf certificate.

They’re both just files.

The difference is the amount of ceremony.

DigiCert’s Root CA certificate files had a lot of ceremony — with good reason.

But the CA for “I need five devs in India to get VPN certs with a common root” is practically zero.

No, it does not take a “a lot of infrastructure” to host a 1kb file. It really doesn’t, and your persistent confusion is my point: you are simply unable to let go of your preconceptions.

Just last week I needed a pair of certs with a common root for a load balancer’s back end. Not for transmitting NSA secrets over intercontinental backhaul.

I already have access to a bona fide HSM! For pennies!

Why can’t I be allowed to use that pre-engineered secure certificate storage system for its intended purpose!?


PKI can provide non-repudiation while signed tokens and API keys cannot. There's a big difference transmitting a bearer token vs establishing a TLS connection.


I get that you don't get it... that's my point.

A public-private key pair is clever cryptography, yes, but INTERNALLY within a network they're not Magic(tm) that requires a multi-billion dollar market cap company to issue with some Indian call centre verifying my identity papers.

The same cryptographic algorithm has two wildly different uses: one that is $0.0000001 in value, and one that requires a third party organisation that needs to pay their employees and can maybe justify asking for tens of dollars. (Narrator: Let's Encrypt showed that they can't justify this either.)

People conflate the two and then try to charge $50 for the $0.0000001 use-case, which is a markup of five million percent. That's what's upsetting. It's just so absurd, and people blink slowly and then start suggesting $40 options as-if that discount somehow makes it okay. Or they start talking about "all the things you get" for that $50, when it just doesn't apply.

There should be a trivial set of commands along the lines of:

    New-AzKeyVaultRootCertificate -VaultName 'xyzinternal' -Name 'ContosoAPIServiceRoot'

    New-AzKeyVaultSignedCertificate -VaultName 'xyzinternal' -RootCertificateName 'ContosoAPIServiceRoot' -DnsName 'apisvc1352.internal.cloud'
You can emulate the above with a 20-line script now, but it's fiddly, and doesn't cooperate with Bicep deployment templates. Similarly, there ought to be a built-in renewal mechanism (which is JUST 'cron' for the love of God!), but instead requires Azure Functions, layers of complex authorisations, and who knows what else...
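Roughly, the fiddly version looks like this - a sketch, where the openssl parts are standard, `az keyvault certificate import` is a real command, and the vault/cert names are just the hypothetical ones from the example above:

    # local root + signed leaf (placeholder names)
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout root.key -out root.crt -subj "/CN=ContosoAPIServiceRoot"
    openssl req -newkey rsa:2048 -nodes -keyout svc.key -out svc.csr \
      -subj "/CN=apisvc1352.internal.cloud"
    openssl x509 -req -in svc.csr -CA root.crt -CAkey root.key -CAcreateserial \
      -days 90 -out svc.crt

    # bundle and push into Key Vault
    openssl pkcs12 -export -inkey svc.key -in svc.crt -certfile root.crt \
      -passout pass: -out svc.pfx
    az keyvault certificate import --vault-name xyzinternal --name apisvc1352 --file svc.pfx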


> People conflate the two and then try to charge $50 for the $0.0000001 use-case

Then… don’t buy certificates? Use letsencrypt or run your own CA? There are tons of options out there.


You can't use Let's Encrypt for private DNS zones in the general case.

"Running your own CA" like it's a big ceremony is precisely what I'm saying ought not to be necessary.

Do you "run" your own random password generation service PaaS with custom Kubernetes controllers and everything? That's what someone else suggested, and not in jest!


But it's not really that much work. Disclaimer, I'm the author: 1) https://gruchalski.com/posts/2020-09-07-certificate-authorit..., 2) https://gruchalski.com/posts/2021-03-28-firebuild-rootfs-grp.... There are many options for various levels of entry.


A certificate is a key pair with both a public and private part. The private key is needed to sign data, while the public key can only validate that signature.

You can think of the private key as equivalent to a password, in the context the parent is talking about.


Thanks for the clarification. It’s a misuse of that term then. A certificate is not a key pair. A certificate is very clearly and narrowly defined: a statement of the form “this public key belongs to this name” (in this case a DNS name, but can be any name), signed by a (mutually) trusted authority. Once you trust that authority, you trust the connection of name to key.


Actually running a CA, even if only for private purposes, without certain regret down the road involves more than creating an OpenSSL cnf file, creating a root cert/key, and running with it. That said, it's a starting point. If you're looking to use more modern (i.e., faster) crypto than RSA keys, maybe check out my spin on a CSR generator wrapping `openssl`, available at https://johannes.truschnigg.info/code/tls_req_gen

If you need a self-signed cert instead, maybe try https://johannes.truschnigg.info/code/tls_cert_gen


I run a squid proxy with TLS intercept on a raspberry pi, with my own CA.

I have things set up so that the RPi connects to a WiFi, and then a cable from the RPi goes to another WiFi router.

I connect my MacBook Pro to that other router.

This way the MacBook Pro cannot reach the internet.

Then I set the http and https proxy configs in Firefox so that it goes via the squid on the RPi. And I have the root CA from the RPi trusted in Firefox.

Additionally I have set some env variables and added my root CA cert to some cert storages on the computer, so that git can clone via squid, and I can install and update things with brew etc.
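The glue is roughly this (host, port and paths are placeholders for my setup):

    # point CLI tools at the squid on the RPi
    export http_proxy=http://10.0.0.1:3128
    export https_proxy=http://10.0.0.1:3128

    # let git trust the intercepting CA
    git config --global http.sslCAInfo ~/squid-ca.crt

    # macOS: add the CA to the system keychain so other tools pick it up
    sudo security add-trusted-cert -d -r trustRoot \
      -k /Library/Keychains/System.keychain ~/squid-ca.crt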

It works great :D

But then I tried to set up my iPhone to also connect to that WiFi. I think I managed to trust my root CA on the phone. But I couldn’t manage to set up the http/https proxy on the iPhone and so for now only the MacBook Pro can use it, and not the iPhone


macOS uses certificate pinning for some .apple.com and .itunes.com sites. If you pass all your traffic through the proxy, some stuff like the app store will not work. Do you bypass the proxy for those or just let them fail?


I do that on purpose. I don’t want macOS itself to reach the internet. Only Firefox, brew, etc


I think iOS has http proxy settings in the wifi configuration for a given network? Haven’t tried recently.


You can use a transparent proxy to avoid this


I went with DNS based Let's Encrypt for internal certificates, since I'm okay leaking my internal DNS names.

> An obvious downside of this is having to guard a bunch of secrets and the need to rotate the host certificates yearly – because Apple says so.

The guarding-secrets thing makes me too uncomfortable with managing my own CA. I'm sure it'd be fine, but there are other equivalent and safer ways to do it. Name constraints are a thing in the spec for restricting your CA to specific domains (which is amazing), but browser/etc support was crappy when I looked at it, and maybe it's getting better? I don't understand why name constraints aren't implemented everywhere. Unless an enterprise environment is doing TLS inspection, name constraints are a way saner implementation.


> I don't understand why name constraints aren't implemented everywhere.

They have weird semantics, especially in scenarios with multiple prospective validation paths: path `EE -> ICA' -> ICA'' -> TA` might result in different name constraints than `EE -> ICA' -> ICA''' -> TA`, resulting in hard-to-debug validation errors (or successes) depending on the user's trust store state.

(I don't believe that's why Chrome doesn't support them, however. Chrome's stated reason[1] is that they don't support them on user roots because RFC 5280 doesn't require them to, which is IMO a correct reading of the spec.)

[1]: https://bugs.chromium.org/p/chromium/issues/detail?id=107208...


I used to have my own local root CA as well, but now I'm trying Let's Encrypt with DNS-01. What is the easiest combination of software to try it? I have failed miserably trying OPNsense + ACME client plugin + Cloudflare DNS + HAProxy / NGINX. I would get 100% SSL Labs certs but somehow the reverse proxy won't forward to internal services. Next I am gonna try Caddy for the reverse proxy as it has SSL with LE built in. Let's see.


I found LE + CF DNS trouble-free.

Dockerfile:

```
FROM certbot/certbot
RUN pip3 install certbot-dns-cloudflare cloudflare
```

docker-compose.yml:

```
volumes:
  - ${CREDENTIALS_DIRECTORY:-.}/cloudflare.ini:/cloudflare.ini
  - ${STATE_DIRECTORY:-./certbot}/:/etc/letsencrypt/
  - ${LOGS_DIRECTORY:-/var/log/certbot}/:/var/log/letsencrypt/
command: " \
  certonly \
  --non-interactive \
  --agree-tos \
  --email postmaster@foo.bar \
  --preferred-challenges dns-01 \
  --dns-cloudflare \
  --dns-cloudflare-credentials /cloudflare.ini \
  --dns-cloudflare-propagation-seconds 30 \
  -d foo.bar,*.foo.bar"
```


A friend of mine runs dns01 thusly: https://ipng.ch/s/articles/2023/03/24/lego-dns01.html


I've had a lot of success with https://github.com/dehydrated-io/dehydrated . It exposes the different parts of the process (deploy challenge to DNS, deploy cert to filesystem, etc) as hooks, so it's pretty easy to integrate with anything and however you want, if you don't mind writing a bit of bash. There's a few scripts out there that use Cloudflare that you can use as well.


This ACME client looks promising, but I haven’t tried it yet: https://github.com/go-acme/lego


do you use https/have a cert in your webserver as well, or just on the proxy?


The one thing you can’t do with Let’s Encrypt is generate a certificate with a CN of localhost which, since browsers are getting really picky about mixed HTTP/HTTPS content, is a real issue with local development using certain web features.


Why not register a domain, get a cert for it, and point it at 127.0.0.1? Then nothing can complain.


The other advantage of running your own PKI is you can intercept and decrypt arbitrary traffic on the network.


No. Knowledge of the CA private key does not allow for this. In all cases you have the webserver's private key, whether you issue the cert yourself or if someone else does.

If you have the web server's private key, you can decrypt the traffic, but only if PFS ciphers are not used.

It is a common misconception that knowledge of CA keys allows you to decrypt traffic. It does not. It allows you to issue valid certificates. It's only used for signing.

It's the webserver's key that protects the traffic. The webserver's operator has that key regardless of who the CA is. In the case of PFS cipher suites (eg EDH) an ephemeral key is used for confidentiality and the endpoint keys are just used for integrity and key authentication. Even with the webserver key you aren't decrypting those streams.

In no case does the CA private key help you decrypt traffic.


> Knowledge of the CA private key does not allow for this

Of course it does. You just generate certificates for every TLS handshake you see and MitM them.

And yes, it works for PFS too. There is no difference for MitM.

> In the case of PFS cipher suites (eg EDH) an ephemeral key is used for confidentiality and the endpoint keys are just used for integrity and key authentication. Even with the webserver key you aren't decrypting those streams.

You never encrypt data with a key from the certificate, even without PFS. That would be too slow. You generate a pre-master key and then use it to derive keys for symmetric algorithms, like AES or ChaCha20, that are actually used for data stream encryption. The only difference between PFS and non-PFS is that in the former case the pre-master is generated using some form of DH, with the promise of the keys being destroyed after some time, and in the latter case the client just generates a random key, encrypts it with the server's public key and sends it to the server.


MitM is not "intercept and decrypt arbitrary traffic on the network" which is what was asserted. You are describing an active attack, while fiddlerwoaroof was discussing a passive one.


I wasn’t distinguishing the two sorts of attacks.


My router blocks it to protect against DNS rebind.


If it's just for development, you could set it up in the hosts file.
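E.g. a single line (hypothetical name):

    # /etc/hosts on the dev machine, bypassing the rebind-protecting resolver
    127.0.0.1   dev.example.com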


> is a real issue with local development using certain web features.

Is it? I thought browsers treat localhost/127.0.0.* specifically as if it were served over https, even if it isn't - otherwise, you could basically forget developing anything locally.

Is there any feature which doesn't treat localhost as a secure origin?

I figure you can always buy a hostname, get a cert using the DNS-01 challenge, then resolve the domain to 127.0.0.1 though - or getting back to the OP and running a custom CA.


I have a dashboard I run via nginx on localhost that makes a bunch of requests to various https endpoints. It definitely doesn’t just work unless you have a trusted SSL certificate and run localhost as HTTPS


Huh, that's odd. Gonna test this as well then.


I think it’s the mixing of HTTP and HTTPS that most browsers don’t like.

If you develop locally and it’s only HTTP with no HTTPS, then I think it works.


I always thought that "mixed content" was defined as mixing "secure" and "insecure" origins (where a "secure origin" could be either https://something or http://localhost) and not literally mixing http and https urls. But this would explain the problem.

To my knowledge, https on loopback doesn't give you any kind of added security: Everything with enough privileges to capture the encrypted packets on loopback also has enough privileges to just capture the unencrypted memory directly. And "localhost" is also a single "domain", so a cert wouldn't even give you the ability to distinguish between different origins (as other ports or hostnames that resolve to 127.0.0.1 would do)

So it's just some bureaucratic hoops to jump through to satisfy browsers.

Edit: I remember reading about some objection that the "localhost" domain could be resolved to something other than 127.0.0.1, either through the hosts file or through a faulty DNS resolver.

I think those objections were addressed in the "let localhost be localhost" proposal which mandated that the hostname "localhost" must be hardwired to 127.0.0.1 in the OS/browsers/resolvers etc and must never be permitted to point to anything else.

But maybe this proposal didn't gain traction and so browsers are defending against such rebound localhost domains.

In that case, I'd try and check if "http://127.0.0.1" works, as the IP address can't be rebound in the same way as the hostname.

Edit2: And of course there is the issue with everything that is defined on top of TLS itself, e.g. ALPN and HTTP2. If you want to test anything involving that, you'll of course need to run TLS on localhost and are gonna need a cert too.


You also can't generate a cert for a double wildcard, like *.*.mydomain.com, or an entire domain, although I'm unsure if that is possible with your own CA either.


You can generate the certificate but browsers don't like multiple wildcards. Any application following the rules set out by RFC 6125 should reject multiple wildcards as far as I can tell.

Some browsers (notably Firefox) used to support multiple wildcards, but then again it also trusted domain certificates signed by other domain certificates for years, so that's not much to go by. These days, I don't think a single browser will accept *.*.foo.bar.


I’ve also struggled with this. Is there an elegant solution that you’re aware of? Everything I’ve tried feels really rickety.


I've been using cfssl[1] to generate a root certificate + a localhost certificate and then trusting the root.

[1]: https://github.com/cloudflare/cfssl
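For anyone curious, the core of it is just two commands (ca-csr.json and localhost-csr.json are the usual cfssl request definitions you write yourself):

    # root CA
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    # localhost cert signed by it
    cfssl gencert -ca ca.pem -ca-key ca-key.pem localhost-csr.json | cfssljson -bare localhost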


For localhost, there’s not much downside to a self-signed cert.
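E.g. with OpenSSL 1.1.1 or newer it's a one-liner (then trust cert.pem wherever you need it):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout key.pem -out cert.pem -subj "/CN=localhost" \
      -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"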


Browsers won't offer to save passwords on self-signed sites.


You might be able to get around this using the Chrome "flags" page, search for unsafely-treat-insecure-origin-as-secure.


Chrome flags are pretty annoying to use, especially if you use the same browser for regular browsing.


> Name constraints are a thing in the spec for restricting your CA to specific domains (which is amazing,) but browser/etc support was crappy

It's well supported now. I use it and it works for OpenSSL, Firefox, and Safari.

Personally, I don't think there's much to gain from using public PKI for internal infrastructure. I already manage secrets on my personal devices and this is no different. Also, being able to issue certs for .home.arpa domains is nice too.


Thanks, I've incorporated the name constraints into the article now. (it is indeed supported by Apple and FF just fine)


Very nice!


> I went with DNS based Let's Encrypt for internal certificates, since I'm okay leaking my internal DNS names.

Let's Encrypt offers wildcard certificates; there is no reason to have internal DNS records exposed.


Did you use elliptic curve instead of RSA?


I've done similar for something like 8 years with vault as my intermediate issuer, almost exclusively using cert-manager once that was mature enough, and my own little utility before that. It's so nice getting certs for side projects or self hosting in an instant and with an encrypted (pgp) offline (flash drive in a safe) CA I'm never really worried about having to reroll. Installing the CA is pretty trivial on most devices and means I don't have to worry about CTLs or rate limits, which is especially helpful when I'm hacking on a saas side project that ends up requesting 10+ certificates every test run.
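For anyone wanting to replicate it, the Vault side is roughly this sketch (paths, names and TTLs are illustrative; the offline root signs the CSR out of band):

    vault secrets enable pki
    vault secrets tune -max-lease-ttl=8760h pki

    # CSR for the intermediate; sign it with the offline root, then install the result
    vault write -field=csr pki/intermediate/generate/internal \
      common_name="Homelab Intermediate CA" > ica.csr
    vault write pki/intermediate/set-signed certificate=@ica.pem

    # a role that cert-manager (or anything else) can issue against
    vault write pki/roles/homelab allowed_domains=example.internal \
      allow_subdomains=true max_ttl=72h
    vault write pki/issue/homelab common_name=app.example.internal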


Easy-rsa to the rescue. Been using it for a while, works great and makes life easier :)

Link: https://github.com/OpenVPN/easy-rsa

Summary from that page:

easy-rsa is a CLI utility to build and manage a PKI CA. In laymen's terms, this means to create a root certificate authority, and request and sign certificates, including intermediate CAs and certificate revocation lists (CRL).
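Day-to-day it boils down to a handful of commands (names are illustrative):

    ./easyrsa init-pki
    ./easyrsa build-ca nopass                  # pki/ca.crt plus the CA key
    ./easyrsa build-server-full web1 nopass    # key + CSR + signed server cert in one go
    ./easyrsa gen-crl                          # emit pki/crl.pem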


It's funny how I go from "YEAH! CA's SHOULD be a quick one-liner!" to "Should laymen be generating Root CAs?" inside of like 10 seconds of scrolling.


Doesn't the first Apple link specifically say the 398-day limit doesn't apply to self-signed CAs?

> This change will affect only TLS server certificates issued from the Root CAs preinstalled with iOS, iPadOS, macOS, watchOS, and tvOS.

> This change will not affect certificates issued from user-added or administrator-added Root CAs.

The second link about the other restrictions (including <=825 days validity) does appear to apply to all CAs.


And yet, my homegrown root CA cert with 3650 days of validity hums along just fine...

[edited: but since I also want to have host certs that are on various internal servers, the short validity applies to them]


I am not sure if the same rules apply to 802.1x authentication, but we use self signed certs with 2 year validity for EAP-TLS and have never had any issues on iOS devices


This is not really an Apple thing, it's an industry trend (and a good one IMO). Apple's generally applying the same criteria Chrome is: https://chromium.googlesource.com/chromium/src/+/HEAD/net/do...


Seems Chrome is specifically making an exception for custom root CAs though:

> This will only apply to TLS server certificates from CAs that are trusted in a default installation of Google Chrome, commonly known as “publicly trusted CAs”, and will not apply to locally-operated CAs that have been manually configured.


Looking into it further, that's actually Apple's policy as well.


I'm building an app whose GUI runs in the browser talking to a local http server. The app (if and when it is ready) would be distributed as a standalone executable. A bit like Atom/Electron I guess.

What I haven't figured out yet is how the browser-GUI could talk to its local backend-server over https. Can an exe contain its own root-certificate-authority somehow in a way that the app-exe can work without having to update that certificate part ever?


Dear god why? People using ports (even on ::1) is super annoying when it collides with development. Hunting down the offender is not always straightforward and introduces a new failure mode for you (when I’m already using that port for something else). Also, it opens the door for literally any other software (or xss attacks) to connect and issue api calls you may not expect.

Just create a native app. Use something like C# MAUI or whatever language you’re familiar with. Your users will thank you for a smooth experience.


There's a set of ports allocated for HTTP by the standards. The app will choose to use the first one available, starting with 80. If all such ports are in use, then tell the user: sorry, all ports are already in use; please stop some apps already running on your PC which may be using these ports.


So you have to run your app as root on Linux and windows firewall is going to have a field day? I don’t know what happens on a Mac, but I assume it will also require root to use low ports. Why not use port numbers dedicated to this kind of thing (1024 – 65535) or better yet, just use pipes and avoid opening a network server at all.


what's wrong with using a dynamically allocated port?


How do you have a dynamic port assignment and know how to connect to it without publishing it somewhere? As far as I know, that is impossible.


The app will open the browser on the URL it responds to, including the port it found to be available.


Why? What's the potential attack you're preventing by encrypting communication of two local processes?

You'd have to install a shady custom CA system-wide on your users, for a green lock that's 100% placebo. Just run firefox/chrome in app mode and hide the URL bar.


That's a good point. Maybe it is not needed. I'm thinking maybe in the future there could be a use-case to allow also others to connect to the app running on my PC.

Basically it is a content-creation app. You use it to develop content you want to later publish and which you can develop and QA on your PC. But when used in an team-setting there might be a use-case for the QA-group to easily see the latest version of the content, even though they would not be allowed to edit it.

I'm also thinking maybe some organizations might have a rule that only https: can be used.


Trivia question: what is the limit, if any, on how many DNS names (SANs) one can include in a single self-signed certificate? It is common to see TLS proxies that will generate certificates on the fly as the SNI in the ClientHello is received. Is this due to a limit on how many SANs one can include in a single certificate?

(Maybe the performance degradation of generating a certificate on the fly is less than using a certificate that includes 100s of SANs.)


I'm not sure about a hard limit in the X.509 standard (would need to dig into the RFCs) - but the BadSSL site has two test domains that have certificates with 1,000 and 10,000 SANs respectively:

https://1000-sans.badssl.com

https://10000-sans.badssl.com

1,000 works in Firefox and Chromium, but 10,000 gives `SSL_ERROR_RX_MALFORMED_HANDSHAKE` in Firefox, and `ERR_SSL_PROTOCOL_ERROR` in Chromium. OpenSSL won't connect to it either - it gives `read_state_machine:excessive message size:ssl/statem/statem.c:610`

So in practical terms, the answer seems to be somewhere between 1,000 and 10,000.


I think user-perceptible performance degradation might set a lower practical ceiling than purely technical limitations would. I've seen as many as ~200 SANs on a single cert -- but I'm not claiming any special expertise or insights.


I made a web server / microservices thing that issues certs for clients from a CA root it automatically generates. Then internal reverse proxy connections use that cert, so the whole path is TLS encrypted with full cert validation.

https://github.com/fsmv/daemon


Looks like step-ca/step-cli [1] and mkcert [2] have been mentioned. Another related tool is XCA [3] - a gui tool to manage CAs and server/client TLS certificates. It takes off some of the tedium in using openssl cli directly. It also stores the certs and keys in an encrypted database. It doesn't solve the problem of getting the root CA certificate into the system store or of hosting the revocation list. I use XCA to create and store the root CA. Intermediate CAs signed with it are passed to other issuers like vault and step-issuer.

[1] https://smallstep.com/docs/step-ca/

[2] https://github.com/FiloSottile/mkcert

[3] https://hohnstaedt.de/xca/


Shameless plug, there's also https://github.com/linsomniac/rgca

I've been using it at work for the last year for our certs and it's been quite nice. It can do pre/post hooks as well, so it directly commits the updated CA serial files to our git repo.


I tend to treat service TLS certificates more like shared keys than PKI. Too many pieces of software don't handle revocation, it's easier to regenerate the CA and the entire set of certificates when you change your setup.


If you control all the clients/browsers (i.e. you can immediately modify the required trust stores), you don't have any use for PKI whatsoever.

PKI's use case begins with shipping a trust store into the wild, where it will run unchanged for months or years.


The better solution is to do what vault does. Use only ephemeral certificates for servers and clients. It wouldn't be too hard to change them every week or so using the ACME protocol.


ACME has a revocation workflow. Having my lost certificates be valid for a week is still unacceptable.


https://hohnstaedt.de/xca/

This doesn't cover deployment of the CAs to clients, but it's easy free desktop software.


There have definitely been many guides and I took a stab at this a few months ago https://github.com/leonletto/ca-for-labs. I tried to make it simple enough for anyone who is wanting to build an internal lab. Happy to receive any feedback or requests. No web interface yet. Thinking about building an interface that conforms to the ejbca api?


Yahoo's in-house CA for doing this at scale:

https://www.athenz.io/


It's actually really difficult to import custom CAs now on Android. Since version 7 you can only import into the user store and apps ignore this one by default so it's pretty useless.


This is more like "making and installing own root certificate".

I expected the article to be about actually being a certificate authority that every browser will trust by default.


Is there an Open Source program I could just run on the command-line, to have my own Certificate Authority? Or just to create a certificate?

Or do I need to know all the gory details myself?


mkcert if you need only on one system - usually for development. step-ca/step-cli if you need a CA at residential or office domain. XCA if you don't mind a GUI and know how to get the root CA certificate into the system/browser trust stores.

links in this comment: https://news.ycombinator.com/item?id=37542794
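The mkcert flow, for reference (hostname is illustrative):

    mkcert -install                             # create a local CA and add it to the trust stores
    mkcert myapp.test localhost 127.0.0.1 ::1   # issue a cert + key for those names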


Thanks for the links.

Let me re-phrase my question slightly:

Are there any npm modules I can install which would allow Node.js to automatically serve HTTPS?


I'm afraid I don't have a firsthand reply to that question. Sorry! Have you given this a try? https://nodejs.org/api/tls.html#class-tlsserver


Thanks for the link


mkcert is the easiest


I've done this previously, about 10 years ago, for a distributed kv store I wrote.

Not a lot of fun.

Meanwhile, I'm guessing, there are new useful utilities out.

One I remember was easyCA or was it easyPKI?


HashiCorp Vault supports being an ACME server. Why not use that?


To be fair, this was only introduced in the most recent major version release not too long ago.

But I agree, it's a great feature and worked as advertised after following their guide. Doesn't sound like a ringing endorsement but in my experience it's rare for custom CA tooling to Just Work and until now none of them resulted in a fully functional ACME provider to serve up your certs.


I agree. I just set it up the other day in a container. It’s going to be our ACME provider. It is stupid easy


Because:

1. ACME is a dumpster fire prone to MITM attacks.

2. Without an HSM (an additional investment) it's a super bad idea to host your root CA signing key somewhere.


This is an internal, airgapped network.

We stood up the root CA, created the certificate, imported it, then destroyed the root CA. It’s a common security practice. The root CA can then never be compromised.


If you destroy the CA, how do you issue new certs via ACME?


Sub CAs or Intermediate CA

The root CA certificate is used to establish trust in the chain of trust, but it is not directly involved in the certificate issuance process once the trust has been established.
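With plain openssl the pattern is roughly this (illustrative; an HSM-backed or Vault-backed setup would look different):

    # root, generated on an offline/airgapped machine
    openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-384 -nodes \
      -days 3650 -keyout root.key -out root.crt -subj "/CN=Internal Root CA"

    # issuing (intermediate) CA, whose key lives with the ACME server
    openssl req -newkey ec -pkeyopt ec_paramgen_curve:P-384 -nodes \
      -keyout ica.key -out ica.csr -subj "/CN=Internal Issuing CA"
    openssl x509 -req -in ica.csr -CA root.crt -CAkey root.key -CAcreateserial -days 1825 \
      -extfile <(printf "basicConstraints=critical,CA:TRUE,pathlen:0\nkeyUsage=critical,keyCertSign,cRLSign") \
      -out ica.crt

    # distribute root.crt to the trust stores, then destroy the root key
    shred -u root.key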


"Only" missing how to safely distribute, trust and revoke/renew the root cert - and how to enforce/distribute revocation lists for certs...


"only" out of scope.

But based on the comments here, I guess you could use the smallstep CA with Nitrokey HSM if that's your jam...


Well, the title says: "Running one's own root Certificate Authority in 2023".

"Running a CA" is pretty much dominated by managing certificates? Including distribution and revocation - not just issuing?



OPNsense has built-in easy CA/PKI management.


Using RSA in 2023 for a CA that is expected to be around for an extended period of time is just silly.

ECDSA 384 or longer depending on the expected CA lifetime and security margins (see https://www.keylength.com/) is a saner choice.

Also, https://blog.trailofbits.com/2019/07/08/fuck-rsa/


Makes no difference in his setup as he can rotate the CA and the rest of the infrastructure quickly.


HashiCorp Vault is a one-stop shop for this. It's an amazing piece of software.


Agreed. I've introduced an internal, self-signed CA using Vault, Ansible and Jenkins for my personal infrastructure. It issues certs via a pipeline job and restarts / reloads affected target services if needed.

I might do a writeup soon on this, it's not even that complicated.



