Caddy is the greatest thing since sliced bread. It is such a good reverse proxy and a paradigm-shifter with its automatic certificates and HTTP/3 support. It's a great example of just how good high-quality Go software can be. (Thank you, Matt Holt)
> I still have nightmares about trying to set up SSL with nginx and my own self-managed certificates.
For anyone who needs to run their own CA (which I'm now doing for my homelab), I've found that GUI software like KeyStore Explorer is a sufficiently easy and lazy way of doing that, and it actually works well, both for securing regular sites and for doing mTLS: https://keystore-explorer.org/
For what it's worth, using OpenSSL directly and automating it for more frequently rotated certificates would also be doable, just not quite as pleasant.
> Shoutout to Let’s Encrypt as well for making this so much easier!
For ACME stuff, Caddy will be excellent and honestly is probably the best option out there right now!
Nginx (and certbot) or Apache (and mod_md or certbot) will get you most of the way there as well, though the route will be a bit longer.
Caddy is amazingly simple to set up. Automatic HTTPS is a killer feature.
I have to use Envoy at work for gRPC services and I want to quit the industry every time I have to edit their YAML/protobuf monstrosity of a config system.
Envoy config surely is complex, but it's also the most flexible and robust way of managing config on a large scale I have come across.
The way Envoy lets you create clusters of Envoys, then just set up their config to come from a centralized config source over a gRPC connection, is honestly the sanest way of managing thousands of proxies at scale I have found. Trying to push nginx (or any other config-as-a-file proxy) updates at scale is a nightmare of its own.
We manage a large number of Envoy clusters, where the state of how proxying should happen is all contained within a SQL database whose rules and records change dozens or hundreds of times a minute. There is one service responsible for monitoring the DB and translating it into Envoy configs, then pushing them out to thousands of Envoy processes. It has been extremely reliable and consistent: for a given input, it always produces the same output. It's very easy to unit test, validate, and verify, then push the update.
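The shape of that setup is Envoy's dynamic-config (xDS) model: the proxy boots with almost nothing static and streams the rest from a control plane. A rough bootstrap sketch (the node IDs and the `xds.internal:18000` address are made up for illustration; real deployments also enable HTTP/2 on the xDS cluster):

```yaml
# Hypothetical Envoy bootstrap: listeners and clusters are not
# defined here -- they are streamed over a gRPC connection to a
# central xDS control plane (ADS).
node:
  id: edge-proxy-1
  cluster: edge-proxies

dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster
  lds_config:
    resource_api_version: V3
    ads: {}
  cds_config:
    resource_api_version: V3
    ads: {}

static_resources:
  clusters:
    # The only static cluster: how to reach the control plane.
    - name: xds_cluster
      type: STRICT_DNS
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: xds.internal
                      port_value: 18000
```

The DB-watching service then only has to speak the xDS gRPC protocol; every proxy picks up changes without a file push or a reload.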
Nginx, and Caddy I'd imagine, are great at set-it-and-forget-it configs or use cases. But envoy is a programmable proxy where you can have dozens of clusters with different configs that get updated dozens of times a minute. I don't know of any other proxy that offers something like that.
Caddy does (some of) that too actually. It has a live config API and support for clusters and synchronized configs and TLS cert management. It can also get the proxy upstreams dynamically at request-time through various modules. Some of the biggest deployments program/automate Caddy configs using APIs and multi-region infrastructure.
Envoy is definitely a powerful & useful tool, we use external auth to centralize our authentication, I just dislike editing large yaml documents with 10 levels of indentation.
My websites run on HTTPS because of how easy Caddy makes it. Caddy made it possible for me. I cannot thank Matt Holt enough for creating Caddy and making it available to all of us.
I haven't used Caddy and I'm sure it's great, but you could have used nginx or anything else as well. Offering HTTPS is honestly pretty easy these days.
I've been using nginx for years and switched to Caddy just because I was so fed up with configuring nginx to automatically renew TLS certs issued by Let's Encrypt. This is so much easier and reliable with Caddy.
I know about certbot and have considered it, but our customers can use their own custom domain names, which means we need to select the certificate based on the SNI hostname, and that's a bit tricky in nginx. It's possible to use the $ssl_server_name variable in the ssl_certificate and ssl_certificate_key directives, but then the certificate will be loaded from disk for each TLS handshake. You also need to be careful about race conditions when refreshing a certificate, to ensure that the certificate and the key match. Overall it's doable, and people do it, but it's not as straightforward as just using Caddy.
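For reference, the variable-based approach looks roughly like this (paths are hypothetical; nginx supports variables in these directives since 1.15.9):

```nginx
server {
    listen 443 ssl;
    server_name _;

    # Certificate chosen per-connection from the SNI hostname.
    # Flexible, but the files are read on every TLS handshake,
    # and cert + key must be swapped atomically on renewal.
    ssl_certificate     /etc/nginx/customer-certs/$ssl_server_name.crt;
    ssl_certificate_key /etc/nginx/customer-certs/$ssl_server_name.key;
}
```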
It's really opinionated about it though. I still don't know how to stop it from trying to get certificates for specific hostnames. It seems to work with everything auto, or nothing at all.
That's its value really. It has the defaults you usually want with minimal boilerplate. If you need/want something more complex it's not necessarily the right tool any more.
I say this not as any kind of dig against Caddy but I feel like the entire value proposition is that its default configuration covers the 90% case so well. Sometimes being easy to use with good defaults goes a really long way.
Define the host as http://hostname in the config instead of just hostname, and Caddy will serve only HTTP for that site without trying to get a certificate. You can have a separate https:// site config that is different as well.
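A minimal Caddyfile sketch of that (hostnames made up):

```Caddyfile
# The http:// scheme disables automatic HTTPS for this site only;
# Caddy will not try to obtain a certificate for it.
http://internal.example.com {
	reverse_proxy localhost:8080
}

# Sites without a scheme (or with https://) still get automatic HTTPS.
example.com {
	reverse_proxy localhost:8081
}
```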
In simple terms, you can think of a reverse proxy as an HTTP server that acts as a middle man. For a simple case of why you might use one: SSL/TLS can be a pain to set up in your web application code. So what you can do is write your web application and not worry about SSL/TLS certs at all, then place a reverse proxy in front of it and configure the reverse proxy for SSL/TLS. This way you're not dealing with that complexity in your code, and someone else is managing it. From there, the reverse proxy takes requests and reroutes them to your web application. My reverse proxy is exposed to the internet on port 443; when a packet hits it, it knows to reroute the traffic to a server running on my machine at localhost port 8080.

You can also use a reverse proxy as one single ingress point for many web applications. The reverse proxy will know that requests for http://MyCoolWebApp.com go to localhost:8080 and requests for http://MyOtherCoolWebApp.com go to localhost:8081.
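In Caddy, that two-app setup is about as small as a config gets (using the example hostnames and ports above; TLS on 443 is handled automatically):

```Caddyfile
# Requests are routed by Host header to the right backend.
mycoolwebapp.com {
	reverse_proxy localhost:8080
}

myothercoolwebapp.com {
	reverse_proxy localhost:8081
}
```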
Reverse proxying is just one task that a web server can perform. Caddy also has a file server (directly serving files from disk or from some virtual filesystem), can write static responses, can directly execute code via plugins like FrankenPHP, can manipulate/rewrite/filter/route the request and/or response, etc. Just look at this list https://caddyserver.com/features
No, the article says that total IT employment increased by 700 jobs last year. The first derivative declined but was still positive (barely), so the second derivative was negative.
Apple switched to USB-C because making separate hardware SKUs to comply with the EU would be costly. Even then, they only did so after creating a new standard and accessory ecosystem to replace lightning port revenue (MagSafe).
When it comes to allowing third-party app stores, Apple is only going to do that in regions where law forces them to do so.
Apple already had and has multiple hardware SKUs for the iPhone to support different regions, for example the US phones have no SIM tray while other areas still did/do.
They were already using USB-C on the Mac and iPad so I don’t believe the EU mandate forced anything but maybe the timeline.
That said, I wouldn't be surprised if side-loading is limited to the markets where it's required.
From what I understood Apple will allow third-party app stores soon due to EU regulations, and is already (reportedly) in process of making that happen.
So the question is: should there exist a method that means “bring down the whole ship”. (Or, should that method be something more severe than “panic”, and panic should be redefined with a more limited scope.)
I agree. I like C# where you can expect libraries to play nice. In theory it is possible for C# library to crash your app but it is definitely not on the trodden path.
But I do prefer return values over exceptions! I have not used Go, and I think the err != nil boilerplate might drive me a little insane, but Haskell's option types seem like a nice fit.
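For anyone who hasn't seen it, the boilerplate in question looks like this. A small sketch (the `parsePort` function is just an invented example of a fallible call):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort shows Go's idiomatic error-return style: every fallible
// step returns (value, error), and the caller checks err explicitly.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("not a number: %w", err)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range")
	}
	return n, nil
}

func main() {
	for _, s := range []string{"8080", "99999", "abc"} {
		// The err != nil check repeats at every call site --
		// this is the pattern people either love or hate.
		p, err := parsePort(s)
		if err != nil {
			fmt.Printf("%q: error: %v\n", s, err)
			continue
		}
		fmt.Printf("%q: port %d\n", s, p)
	}
}
```

Errors are plain values here, which is closer in spirit to Haskell's `Either`/`Maybe` than to exceptions; the trade-off is the repeated check at every call site.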
Yep! This and a few userChrome.css modifications is all it took for me. A few CSS transitions and hidden elements and voilà, it functions better than Edge does.