There are some features in Caddy that are simply outstanding - HTTP/2 and Let's Encrypt integration, to name two - and both pretty much work out of the box with zero configuration.
On the other hand, there are still some gaping holes - for example, to block (or allow) a 192.168.0.0/17 IPv4 subnet in Caddy, one needs to do all of the following:
- Install an addon [1];
- Which used to require recompilation; with the 0.9 release you can just click an option during download, yay!
- Add 128 ranges to cover this single subnet (see the sketch below): 192.168.0.0-255, 192.168.1.0-255, ... 192.168.127.0-255. The configuration doesn't support subnets, only ranges - and only ranges in the last octet, i.e. 192.168.1.0-255 (meaning 192.168.1.0/24), or implied ranges by omitting trailing octet(s), i.e. 192.168 (meaning 192.168.0.0/16) [2].
Oh, and IPv6 filtering doesn't exist at all.
[1] https://caddyserver.com/docs/ipfilter
[2] Which is mildly confusing notation too, since the traditional UNIX inet_aton() call would interpret this as 192.0.0.168. Try typing "ping 192.168" on Linux.
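If you're wondering where the 128 comes from: a /17 contains 2^(24-17) = 128 /24 blocks, which you can sanity-check with a few lines of Go (standard library only, nothing ipfilter-specific):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // 192.168.0.0/17 covers 192.168.0.0 - 192.168.127.255.
        _, subnet, err := net.ParseCIDR("192.168.0.0/17")
        if err != nil {
            panic(err)
        }

        count := 0
        // Walk the possible /24 blocks and keep the ones inside the /17.
        for third := 0; third < 256; third++ {
            if subnet.Contains(net.IPv4(192, 168, byte(third), 0)) {
                fmt.Printf("192.168.%d.0-255\n", third)
                count++
            }
        }
        fmt.Println("ranges needed:", count) // prints 128
    }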
Except iptables doesn't work at the per-host or per-URL level (well, technically one could try string matching, but that would be just a crude hack).
I love the super-simple configuration, but it comes with some tradeoffs. For example, last I checked, I couldn't forward some requests for a hostname to another machine while serving certain URLs (e.g. /static) from the local machine.
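To be concrete, here is roughly the behavior I was after, sketched in plain Go with the standard library (the backend address and paths are made up):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Hypothetical backend on another machine.
        backend, err := url.Parse("http://10.0.0.5:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        mux := http.NewServeMux()
        // Serve /static from the local disk...
        mux.Handle("/static/", http.StripPrefix("/static/",
            http.FileServer(http.Dir("/var/www/static"))))
        // ...and forward everything else to the other machine.
        mux.Handle("/", proxy)

        log.Fatal(http.ListenAndServe(":80", mux))
    }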
On a related note, I haven't used Caddy but have used lego (which Caddy uses under the hood), and lego makes it amazingly easy to get certs via the command line and a DNS challenge.
Literally, one line (without needing root). Happy user.
Sure, if you want to serve TCP reset / ICMP port-unreachable to blocked users, then yes - iptables or external firewall is a more natural way to do it.
But what if you want to serve different content to "blocked" users? E.g. a login page, a "we're sorry" page, a redirect to a read-only version of the site, a reminder to connect to the office VPN, anything like that...
Another example - more than one site on the same (server) IP, where only some of them are (client) IP-restricted.
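For illustration, this is the kind of thing I mean - a sketch in plain Go (the subnet and the wording of the page are made up) that serves the real site to clients inside an allowed range and a friendly reminder to everyone else:

    package main

    import (
        "log"
        "net"
        "net/http"
    )

    // vpnOnly serves the real handler to clients inside the allowed subnet
    // and a reminder page to everyone else, instead of dropping them.
    func vpnOnly(allowed *net.IPNet, next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            host, _, err := net.SplitHostPort(r.RemoteAddr)
            if err == nil && allowed.Contains(net.ParseIP(host)) {
                next.ServeHTTP(w, r)
                return
            }
            w.WriteHeader(http.StatusForbidden)
            w.Write([]byte("Please connect to the office VPN to reach this site.\n"))
        })
    }

    func main() {
        _, allowed, _ := net.ParseCIDR("192.168.0.0/17") // hypothetical "internal" subnet
        app := http.FileServer(http.Dir("/var/www/site"))
        log.Fatal(http.ListenAndServe(":8080", vpnOnly(allowed, app)))
    }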
Considering that firewalls are the best tools to "filter" by IP ranges, I'd suggest this "hack": Reroute the "blocked" packets (my favourite method is DNAT with iptables) to a different webserver which only serves the special pages.
I would love to hear experiences from people/companies using Caddy in production (for example, replacing nginx). The project looks very promising, but for production usage one would also like to know that a decent group of other people is already relying on it.
I only use it to host a couple low traffic personal sites but so far it's been fast & reliable. Configuration is easy because things like SSL cert management are baked in. It's perfect for my needs. Not sure if I'd recommend it for anything with a lot of diverse traffic, but it serves up basic WordPress blogs and node.js apps fine.
The one thing that bothers me (which is not Caddy's fault at all) is the plugin system. If I understand correctly, I have to recompile Caddy for every plugin I want to use, right? Sounds like a limitation with Go which is really unfortunate.
I'm a Gentoo user myself, so from a personal perspective I don't see any problem, but in a professional environment it's another thing I have to do - install (and update) Go from somewhere because it's obviously not in the CentOS repositories (at least not in an acceptable version), set up a build process, make sure to rebuild for updated versions, etc. Just dropping libraries into some directory to be loaded as plugins would arguably be easier, and is kinda "standard" for plugin systems.
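For context on why a rebuild is needed at all: Go plugins in this style are usually just packages that register themselves in init() and get pulled in via a blank import, so the plugin set is fixed at compile time. A generic sketch of the pattern (hypothetical import paths, not Caddy's actual API):

    // host/registry.go - the app keeps a plugin registry filled at compile time.
    package host

    var plugins = map[string]func() string{}

    func Register(name string, fn func() string) { plugins[name] = fn }

    func Run(name string) string { return plugins[name]() }

    // hello/hello.go - a "plugin" is just another package that registers itself.
    package hello

    import "example.com/host" // hypothetical module path

    func init() { host.Register("hello", func() string { return "hi from a plugin" }) }

    // main.go - enabling the plugin means adding the blank import and rebuilding.
    package main

    import (
        "fmt"

        _ "example.com/hello"
        "example.com/host"
    )

    func main() { fmt.Println(host.Run("hello")) }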
I've been using Caddy on CentOS for quite a while. There is a repo on Copr that lets you install it minimal or with plugins, and it works just as you would expect.
>Isn't that the same for Nginx? You still have to compile when you add new plugin?
Nope. That has been fixed in recent development versions, which support loading plugins as shared libraries. For distros that's very important, because it allows them to ship much more configurable nginx packages - plugins can now be added as needed.
I'm not a big fan of this either. Now you need to keep track separately of what your snowflake build enables, and when it's time to update - or if you want to add something else - you need to go back, fill out the form manually again, and update manually.
Also, it makes distro packaging dreadful, since you can either ship nothing and be useless for nearly everyone, or ship everything and surprise users if they switch to the official builds and find out stuff is missing.
Nginx used to be as bad, but that's been fixed recently.
Personally Caddy is also not very useful for me, since I use a reverse proxy, and Caddy didn't seem very helpful there the last time I tried. Oh, and it would have been nice to be able to make it generate self-signed certificates for staging environments.
Go does not allow dynamic library loading. So either your plugin system has to be backed by a third-party language with an interpreter/VM/whatever coded in Go, or you have to rely on some kind of RPC mechanism. So yeah, it's a huge limitation of Go that makes it unsuitable for a wide range of use cases.
Go does allow consuming dynamically linked libraries, but it's not as simple as with C. For example, you have probably seen OpenGL bindings for Go - that's one of those things that would not work without dynamic linking.
The issue is that the Go runtime has its own ideas about how the memory layout and stack layout should look, and these ideas are not compatible with __stdcall or __cdecl, so when calling C code the runtime has to do a cleanup. That used to involve a thread switch; now it involves a full register swap.
What Go does not allow is putting Go code into a dynamic library and then calling that from another Go application - i.e. native Go plugins for Go apps.
Since Go allows you to call C libraries, and allows you to write C libraries, couldn't you write a Go program that loaded a Go library but just talked over a C-style API?
> Since Go allows you to call C libraries, and allows you to write C libraries, couldn't you write a Go program that loaded a Go library but just talked over a C-style API?
It would be like writing a Go interpreter in C. Pointless.
> What Go does not allow is putting Go code into a dynamic library and then calling that from another Go application - i.e. native Go plugins for Go apps.
That's exactly what I said, you're talking about dynamic linking not dynamic library loading.
It allows dynamic loading, but only C code, not Go code. You can do dlopen()/LoadLibrary() and then dlsym()/GetProcAddress(), but only with C calling convention.
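If anyone wants to see what that looks like, here's a rough cgo sketch: dlopen() a plain C shared library and call a symbol through a small C trampoline, because Go can't invoke a raw C function pointer directly (libm/cos is just an example):

    package main

    /*
    #cgo LDFLAGS: -ldl
    #include <dlfcn.h>

    // Go can't call a raw C function pointer, so go through a tiny C helper
    // that knows the (C) calling convention of the symbol we looked up.
    typedef double (*unary_fn)(double);
    static double call_unary(void *fp, double x) { return ((unary_fn)fp)(x); }
    */
    import "C"

    import "fmt"

    func main() {
        // Load libm at runtime and look up cos() - plain C code, not Go code.
        handle := C.dlopen(C.CString("libm.so.6"), C.RTLD_LAZY)
        if handle == nil {
            panic("dlopen failed")
        }
        defer C.dlclose(handle)

        sym := C.dlsym(handle, C.CString("cos"))
        if sym == nil {
            panic("dlsym failed")
        }
        fmt.Println(float64(C.call_unary(sym, C.double(0)))) // prints 1
    }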
> It allows dynamic loading, but only C code, not Go code. You can do dlopen()/LoadLibrary() and then dlsym()/GetProcAddress(), but only with C calling convention.
That's a C feature, not a Go one. You still can't do that - i.e. call a Go-compiled library from a Go executable at runtime, the way one can load a JAR and execute some Java code from a Java program at runtime - because the language doesn't allow it.
No matter how you people try to spin it, this is impossible to do with Go directly. Period. Using C involves crazy overhead and complexity which is absolutely not worth it.
> What Go does not allow is putting Go code into a dynamic library and then calling that from another Go application - i.e. native Go plugins for Go apps.
Yes it does, since version 1.5, but the toolchain doesn't support it yet across all supported targets.
I might try it as a learning exercise on the state of Go tooling, but I have better things to do with my life than proving things to strangers on the Internet.
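For anyone who does want to try it, my understanding (details may be off, and it's limited to a few targets like linux/amd64 for now) is that the flow looks roughly like this - an ordinary Go package built into a shared object plus a binary dynamically linked against it:

    // greeter/greeter.go - an ordinary Go package, nothing plugin-specific.
    package greeter

    func Hello() string { return "hello from a shared object" }

    // main.go - resolved against the shared greeter library at load time.
    package main

    import (
        "fmt"

        "example.com/greeter" // hypothetical import path
    )

    func main() { fmt.Println(greeter.Hello()) }

    // Rough build steps (Go 1.5+, linux/amd64):
    //   go install -buildmode=shared std
    //   go install -buildmode=shared -linkshared example.com/greeter
    //   go build -linkshared -o app main.go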
It's awesome to see Caddy come so far. We started using Caddy a little over a year ago, when we needed a replacement for nginx as a reverse proxy that could talk directly to Mesos to figure out routing. At the time I rewrote the reverse proxy middleware to get the functionality I needed, but we ended up maintaining our own fork (which is now far behind), because we needed our own plugins (and a Mesos reverse proxy didn't seem useful enough to integrate into Caddy core), so it's great to see first-class support for plugins.
Thank you for your work on the proxy middleware, Nimi! Hopefully in the future we can get it to the point where you won't have to go to all the work to maintain a fork.
Can Caddy work as a reverse proxy to other backend services? And if so, can I use QUIC for the backend and plain TCP for the front-end? Would that give me any benefit?
Yes, if you mean HTTP/1.x. Also, you can serve static files from the proxy itself. Everything not matched by an explicit "proxy" directive would be served from local files in the same directory as the Caddyfile.
> can I use QUIC for the backend and plain TCP for the front-end? Would that give me any benefit?
Caddy only has a QUIC server implementation. But I'm inclined to think that if you have a QUIC->TCP flow going on, you will lose most of the benefits of QUIC at the TCP part.
GoKit does that (http://gokit.io) - I heard about it on the Changelog podcast [1]. Really cool set of scripts. Haven't used it personally though. There's also a Changelog episode about Caddy [2], although I'm sure some of that will be outdated starting today.
When I went to use Caddy (because I love the idea of it), I was disappointed to find that there was no yum repo.
Of course, this makes sense because you have to compile the features in.
But, it would still be nice to have deployment automate-able. Maybe an Ansible role that combines the feature list you need and downloads it via an API.
It's the major, and only reason I quit and went back to nginx.
The repo does contain some init/service scripts for using Caddy on various Linux and BSD distributions. They are created and maintained by the community, but this combined with the binary should make it quite easy to package: https://github.com/mholt/caddy/tree/master/dist/init
Because then you also need to host a feed, and keep it updated when new releases become available, and keep it updated when new plugin releases become available, and ensure the feed stays up, and maintain patches, and maintain required dependencies. All the things that package maintainers (thank you!) do for the ecosystem. Unless there's a substantial gain, why not just stick with Nginx?
For what it's worth though FPM is awesome, and has made my life better a number of times. If you have to have software that isn't packaged and you aren't familiar with packaging, look into FPM.
Right, it's not a tonne of work if you need it or really want it. But it's still extra effort to move away from a supported package provided by the package management software. Not providing repos means you lose the users who might want to play around with it but don't have any packaging experience. This shouldn't be very controversial.
I would refer you to https://caddyserver.com/docs/faq for a detailed breakdown of that. The second point answers your question, with the standout for me being that Caddy focuses on serving static files. Also, it serves all pages via HTTPS by default.
NGINX is a battle-tested, high-performance, general-purpose server written in C, while Caddy looks more like a collection of diverse Go libraries bundled together. I don't see a lot of protocols implemented by Caddy itself. There is also the question of the Go garbage collector's performance and overhead; there are no GC pauses with C.
This is a great project though, the author is young and talented.
The Go GC with 1.5+ (and especially the work currently going on for 1.7) makes it all but unnoticeable for anything that isn't soft realtime (think HFT).
For 99% of users, it is simply not noticeable anymore. Also note that I've worked at said HFT nanosecond latency types of firms the past 9 or so years.
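And if you'd rather measure than take anyone's word for it, the runtime exposes GC pause stats directly - a minimal sketch for poking at your own workload:

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    func main() {
        // Generate some garbage so the collector has work to do.
        for i := 0; i < 50; i++ {
            _ = make([]byte, 10<<20)
            time.Sleep(10 * time.Millisecond)
        }

        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // PauseNs is a circular buffer; the most recent pause is at (NumGC+255)%256.
        fmt.Printf("GC cycles: %d, total pause: %s, last pause: %s\n",
            m.NumGC,
            time.Duration(m.PauseTotalNs),
            time.Duration(m.PauseNs[(m.NumGC+255)%256]))
    }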
I just replaced nginx with caddy on a staging server. Works flawlessly, very easy to install. The docs could use some improvement though, especially in the way of examples.