Caddy 0.10 Released (caddyserver.com)
190 points by anc84 on April 21, 2017 | 122 comments



So far I'm loving Caddy. Super simple to configure, and the automatic management of SSL certificates is magical. However, for all of the simplicity, they are still pretty resistant to properly packaging it up in the repositories. By far the longest part of getting it running is the manual configuration of launch daemons, working directories, and permissions. For software that prides itself on dead-simple management, the basics of getting it into a package manager should not even be a discussion.

With that said, I moved 100% of my sites from Nginx to Caddy, since the config file setup is around three lines per site.


Of the 5 announcements we made yesterday, one was milestones for 0.11, 0.12, and 1.0 (their details will be published to GitHub soon). One of the main purposes of 1.0 being stable is so that we can find ways to distribute Caddy better for people who want to use package managers. I know holding out isn't a popular decision, but we really want to make sure we get it right given Caddy's unique needs with regards to plugins.


That's very nice to hear! I prefer this "make it great and release when it's perfect" attitude because it means way better quality and care I can rely on for the long term. Thanks for making Caddy, I'm pretty sure I will use it at some point!


Awesome! Thanks for chiming in, Matt.


>the basics of getting it into a package manager should not even be a discussion.

Last time I looked into it, their problem was that Caddy plugins must be configured at compile-time, because runtime plugins seem to be hard to do with Golang (besides RPC). Because there's a handful of Caddy plugins, they'd either have to provide a slim package with no plugins, or ship with all or only some blessed plugins. Obviously all approaches have drawbacks … they discussed the option of shipping their own package manager for Caddy plugins, too (shudder).

See: https://forum.caddyserver.com/t/packaging-caddy/61


Yeah, I've dug through that forum post a hundred times by now; the best option is the public script: https://getcaddy.com/


Nginx had the same issue. Ubuntu ships 3 or 4 versions - nginx-light, nginx, nginx-full, nginx-extras - for different numbers of modules included.


With dynamic modules support now in Nginx, that practice should end.


As Michael Lustfield said in Debian bug #790623 in October 2015.


> [...] their problem was that Caddy plugins must be configured at compile-time, because runtime plugins seem to be hard to do with Golang [...] they'd either have to provide a slim package with no plugins, or ship with all or only some blessed plugins.

Or ship the source in the package and build the target binary in the post-install. Or do what Debian does with Exim and nginx: provide several packages, each with a different set of plugins compiled in. And these are just ideas off the top of my head.


Go somewhat recently gained dynamic plugin support; I wonder if that can be leveraged in Caddy?

https://beta.golang.org/pkg/plugin/


Yes, but at the moment it's Linux-only.


Caddy is a web server written entirely in Go.

Features:

- Easy configuration with Caddyfile

- Automatic HTTPS via Let's Encrypt; Caddy obtains and manages all cryptographic assets for you

- HTTP/2 enabled by default (powered by Go standard library)

- Virtual hosting for hundreds of sites per server instance, including TLS SNI

- Experimental QUIC support for those that like speed

- TLS session ticket key rotation for more secure connections

- Brilliant extensibility so Caddy can be customized for your needs

- Runs anywhere with no external dependencies (not even libc)


Are you sure it doesn't even depend on libc? Most Go programs do end up depending on libc by default; they have to be specially compiled to avoid the libc dep (IIRC, anything that depends on net or net/http or something of that sort).


Go ships with both a pure Go and a libc-based DNS resolver. Some systems (like OS X) don't let Go make DNS requests directly. For more see https://golang.org/pkg/net/#hdr-Name_Resolution


> Some systems (like OS X) don't let Go make DNS requests directly.

I don't think this is a reasonable summary of the situation on OS X, which is that if you turn the firewall on and then enable blocking of incoming connections on 10.11 and earlier, DNS responses seem to get dropped. Characterizing that as a blanket disallowing of direct DNS requests is misleading. (I'm not defending OS X here; it's just that summaries like this turn into cargo cults.)


So is it ready for "prime time"? Should I be considering it over Nginx for side projects or real projects?


imho - no

Unlike Apache which served me well for 14 years without failure or interruption, I've had 3 hours of downtime in 2 months with Caddy. I haven't dug into the code, and I don't know Go, but my sense is that it needs to do much better at compartmentalising and isolating failures, and providing meaningful diagnostics for things like configuration errors.

Two specific examples:-

- If caddy can't obtain SSL certs for a configured domain, it completely fails to load. As an ops guy, I don't want 5 virtual hosts down because of a problem with a sixth, non-critical domain.

- A missing "}" in my Caddyfile caused a load failure with a message "invalid email address". Took me 30 minutes to find the problem. An error saying "Missing } in Caddyfile" would have been far more helpful.

The project has great promise, but from an ops perspective, you'd be pretty bonkers to prefer Caddy over the more mature alternatives.


I asked this question yesterday. People ask if it's "production ready" but that's like asking if something is "secure" -- I don't really know what that means. Many people use Caddy in production; it's also great for local development. People use it for "real" projects. (I do.) If it is a good match for your needs, give it a try! Start small if you're nervous, but you'll probably like it.


> but that's like asking if something is "secure"

no, that's asking if an enterprise grade project is fronted with Caddy.

> People use it for "real" projects. (I do.)

Sure, but you're not Facebook, Google or Oracle, or big brand X or Z. If you can get the Coca-Cola or Nike website to run on Caddy, it will change that perception.


Caddy uses Go's standard libraries for most of its web serving parts, so you know at least that the HTTP server code is used by Google, Cloudflare, etc. Same goes for TLS and many other parts.

However most companies out there aren't Facebook, Google, Oracle, Coca Cola or Nike anyway.


Using Go standard HTTP libraries doesn't guarantee things are stable.

Here's an example how a missing option (that is Go 1.8+ - which is a recent release) can lead to a DoS: https://github.com/containous/traefik/issues/1322


Yeah, as happens in software in general, there aren't any guarantees. I could give you a link to Heartbleed, too. Caddy wasn't affected.


Well, yes, although security and stability are different matters. All software has bugs; I just said that the fact that something uses Go's HTTP library doesn't automagically mean it's production quality. There are always many ways to accidentally shoot oneself in the foot.

(Caddy, in my experience is production quality. I've used it in a few projects and haven't had any serious issues. Maybe that's just my use cases, though.)


Stability is relative. Knowing that it uses standard Go networking libs is an indication of how stable this thing is.


>no, that's asking if an enterprise grade project is fronted with Caddy.

Just because you'll be the first to use something for some purpose doesn't make the tool unfit for said purpose.

The parent post's point stands: "production-ready", like "secure", is a highly relative term.


I think I'd disagree.

While Caddy doesn't do that, production readiness is something that can be objectively measured. Define some metric, like "what % of users encounter stability-related issues/error conditions", throw in telemetry, and there will be solid numbers. Many do that with their own software deployments, using Sentry or similar solutions, deciding on whether their beta deployments are "production ready" or "not yet". Of course, that only works with a large enough user base; otherwise the error margins are too high for the number to be meaningful. It is relative only in regard to whether, say, "0.01% (just a random number) of users have encountered severe issues" is "way too many" or "oh, it's just a few".

Security, on the other hand, is a different beast. Formal proofs aside - trying to define some similar metric "how many installations were hacked" or "how many security issues were found" isn't really meaningful.

But I think it would be really off-topic. =/


You're missing my point, I think. What you're describing is an epistemological problem, i.e.: "how do I know this thing is production ready?". This problem is (mostly) independent of whether or not the thing actually is production-ready.

The point is that knowing about previous use in production is one way of knowing to what extent something is production-ready. There are other ways (e.g.: well-controlled tests).

As for your point about security, I couldn't disagree more strongly. Comparing histories of compromises and mitigations across various projects is, in practice, a very useful metric. See OpenBSD for an example.


Yes, it's pretty nice and sometimes is even nicer than nginx in some regards. E.g. it notices DNS changes without SIGHUP, so if your app is containerized, Caddy just works when you start and stop containers, when nginx requires special tricks.

However, it lacks in some areas, for example if you need some complex regex-based routing it could be somewhat unpleasant (while probably doable, the configs would look really messy).


> Caddy just works when you start and stop containers

That doesn't sound great for zero downtime deployment but

> notices DNS changes

That sounds like it could support zero downtime (new container started, DNS records updated to this new container and then stopping old container).

Is this something like DNS SRV support in Nginx Plus? [0]

[0]: https://www.nginx.com/blog/service-discovery-nginx-plus-srv-...


Wait... why would a web server need to know about changing DNS?


It doesn't need to know about the changes, but it surely needs to notice them as they happen.

For example, FLOSS nginx does DNS lookups just once on startup. On config (re)loads, to be exact.

E.g. when you have "proxy_pass http://spam;" or "uwsgi_pass spam:9000;", you'll end up with permanent 502 (until a SIGHUP) after you've started a new "spam" Docker container and then removed the old one.

There are well-known workarounds like "set $backend "http://spam"; proxy_pass $backend;", but it's nicer when you don't even have to think about this and know that TTLs are properly honored. Caddy does that.
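Spelled out, that nginx workaround looks roughly like the sketch below (the 127.0.0.11 address is Docker's embedded DNS resolver and an assumption about the environment; `valid=` overrides how long lookups are cached):

```nginx
# Using a variable forces nginx to re-resolve "spam" at request time
# instead of pinning the IP resolved at config load.
resolver 127.0.0.11 valid=10s;

location / {
    set $backend "http://spam";
    proxy_pass $backend;
}
```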


Absolutely use it for side projects! I have used it for a lot of small throw-away sites and loved every minute.

It's up to you whether you trust it for large products, but so far it has been fairly magical for me.


The MitM detection is interesting. How long until these appliances start altering the HTTP request's User-Agent header to avoid detection (or adding logic such that the handshaking process mimics that of the browser), though?

To me, HPKP with preloading seems like a more reliable approach (and browsers shouldn't allow this to be overridden [1]).

[1] If this breaks corporate MitM attacks, great. This practice always struck me as incredibly invasive and I'd personally be concerned if an employer was doing this. I think some traffic data should be able to be logged and timestamped in case of abuse (e.g. SNI info, TCP connections, DNS lookups) but I don't see a need to intercept application data - and honestly, it seems like the vast majority of these appliances hinder security.


> How long until these appliances start altering the HTTP request's User-Agent header to avoid detection (or adding logic such that the handshaking process mimics that of the browser), though?

A TLS proxy that does this is meddling with the application layer and is thus broken (IMO). A TLS proxy that doesn't want to be detected shouldn't mess with the application layer.

The best thing a TLS proxy can do (other than be turned off) is preserve the characteristics of the original TLS connection exactly.

Indeed, Caddy's implementation is mainly for detecting TLS proxies that do a lousy job at it.


I'd also be concerned if an employer was doing it, unless there was a good reason - and I can think of a few of those. For example, companies that deal with sensitive health, financial or legal information. They may need assurances (or at least a paper trail) that are a lot stronger than "we don't MITM and we trust our employees to do the right thing".


Even with those reasons, I believe the risk outweighs the potential benefits. It's not just about trusting the intentions and assurances of the company, but also their competency and knowing they're not being compromised etc. Hackers don't care about paper trails. These are things you typically can't measure.

If you have untrustworthy employees, you have bigger problems than you can solve by MITM-attacking everyone.


I've worked at a couple financial institutions where all http/https traffic is proxied transparently (with internal key installed on corp computers)... certainly makes working with command line utils interesting... HTTP_PROXY HTTPS_PROXY are mostly friendly, but some aren't so nice.


> For example, companies that deal with sensitive health [..] information

I can understand this motivation but I'm not sure in practice the SSL interception really gives you more than the traffic metadata I already described. I think this is interesting though, so I've sent an FoI request to NHS England out of curiosity to see if they're doing anything like this.


How does it compare with https://traefik.io/ (reverse proxy, written in Go, automatic HTTPS / letsencrypt) ?


Traefik is purely a proxy, Caddy is a webserver.

Traefik can efficiently deal with proxying HTTP traffic but cannot serve files and is not as easy to configure if you're not using Docker or similar.

Caddy can serve files or PHP but is not as good with proxying HTTP traffic.


Yep, I use the similar eBay/fabio, which pulls my LE certs from hashicorp/vault; much better than leaving them in a filesystem. Can't imagine using a proxy/ingress/LB without automatic reactive routing these days. All I have to tell Fabio is what interface to listen on.


Why do you say it's better to pull them from Vault? Unless MTLS is set up for talking to Vault, you are pulling secrets unencrypted (and maybe unauthenticated) over the network... a local file system is not bad for small-scale deployments if set up with the proper permissions.


It looks like traefik has a few more features tbh, I'll definitely be giving this a serious look.


Awesome product. I'm using it at work and it's incredibly fast. Configuration is easy, documentation is clear. The systemd unit provided in the repo is insane: all the latest security and isolation options are in it, a great inspiration for writing good systemd units.



I would just like to bring up how much I admire the way they want to profit off Caddy[1][2]: sponsorships and focused development, followed by "remember, Caddy is open source". My only feedback would be that they introduce a "$50/mo; my bank is not big enough" tier for people who want to endorse their model/software.

Look at nginx, where new functionality is hidden behind a paywall. I don't want to deny them [nginx devs and sales people] their well-deserved money, but it pushes me away.

[1]: https://caddyserver.com/blog/options-for-businesses

[2]: https://caddyserver.com/pricing


Thanks for the feedback! We are mindful of people who want to contribute funds but aren't able to because of the price. We get a lot of requests for stickers and T-shirts and even jackets/sweaters; so we might go that route for individuals being able to contribute.


Why not introduce a $50/month (hell, even a "pay what you want") option, with no sponsorship benefits (mentions, link, etc.)? Depending on volume, you could still offer Slack access as a benefit, perhaps if paid up front for a year.


That's possible! We wanted to gauge initial interest first but let me talk to Cory about this.


I would be interested in something that made it so that when I donated, an additional plugin was included in the download that output my support of Caddy in a header, or as some kind of call to be included in web pages.

Something similar to how SVG badges are used in many Git repos to display Travis CI build results or download counts. A similar badge could be supplied via plugin to supporters that shows how much the user has donated to the Caddy project or that they're a contributing member.


PSA for those who use caddy's proxy "without" configuration item - it is broken in 0.10: https://github.com/mholt/caddy/issues/1604

This bit our app during testing.


Surprised there isn't more talk about the MITM detection here. Anyone got a TLDR of how this is supposed to work? I'm going to read the full doc when I get home, but I'd be interested in hearing people's opinions on how accurate it is likely to be.


The authors of the original paper [1] identified that the set of client cipher suites advertised by each browser can be used to fingerprint and identify a browser.

Caddy records the cipher suite advertised by the client during the TLS handshake and then later examines the client's user agent. Using the fingerprinting techniques mentioned in the paper, Caddy then determines whether or not the advertised user-agent is compatible with the user-agent that it inferred through the client cipher suites.

TLS interception proxies establish their own TLS connection to the server. Depending on what underlying TLS library the proxy uses, it also has its own unique fingerprint. When the TLS proxy forwards the user's request, Caddy detects the mismatch and flags it as a MITM.

[1] https://jhalderm.com/pub/papers/interception-ndss17.pdf


Anyone have experience using this as a dynamic reverse proxy? I need to proxy certain requests to private container ports, where the port isn't known until the container is booted, and containers come up and down as users require them.


Depending on what that private container is serving up, an API gateway might work better. Tyk is one that's written in Go. It has a REST API and hot reload, so it should be able to handle your use case of dynamically allocated ports.

http://tyk.io


You can use service discovery tools like Consul or etcd. The basic idea is that on container/app boot, you register your app with the service discovery service (you give it your IP and current port), and it stores information about all your running apps.

Afterwards you can use the Consul/nginx integration (it will dynamically regenerate the nginx config on every update and reload it).

Tyk, mentioned here, also has Consul service integration and much more.


Sounds like Traefik.io, mentioned above, should do the trick there. Looks like an interesting tool...


Nginx can proxy to a server specified in a variable. All you have to do is define that variable through Perl or Lua to get the port dynamically from somewhere, like a file.


eBay/fabio or Traefik.io both fit that description. Linkerd and Envoy probably do too but I have less experience with those.


Pricing page is pretty bad. Draws your eyes with primary colors and bold fonts to $5000/yr and $9900/yr. I had to swallow the sticker shock and look around the page to see the weird, faded side-bubble telling me it's free.


I always really liked this video by Matt:

https://www.youtube.com/watch?v=ZyVA9tuif4s

The way he explains things makes me wish he did tutorials for programming languages.


They mention that 'Default Timeouts' have been disabled, urging users to 'Act according to your threat model!'.

I'm not sure I understand. How is not having these timeouts a security threat? Someone could potentially open up enough HTTP connections to starve others from having the opportunity to do so?


This is true, but slowloris attacks don't require opening that many connections. We've seen one or two instances where buggy (or malicious?) clients were slowlorising Caddy instances, but we were too eager to enable timeouts by default, I think.


No, not having timeouts is rather bad practice. There are a lot of clients out there that never close connections for various reasons. And since TCP doesn't use healthchecks and most systems by default have limits on the number of descriptors per process, your web server in default configuration will simply leak descriptors and memory over time until the whole thing stops working. The only sane choice here is to have timeouts enabled everywhere and explicitly allow them to be disabled, not the other way around.


Believe me, we wanted to do this, but it broke a lot of WebSocket connections and other legitimate, long-lived connections. It confused many users. It's hard to know what is legitimate and what is not. I wish we could configure timeouts on a per-request basis, but that's not possible without some serious hacking around the net/http lib for now.

I do encourage setting timeouts when you are able to. It's easy, for example:

    timeouts 30s
I have heard of occasions where people have done this and it made their servers breathe again. At this point it's just up to your discretion/judgment.


If I remember correctly, websockets have ping/pong support on both ends for that. So to handle them properly you have to detect connection upgrades and then start issuing ping requests to clients periodically, closing their connections if they fail to respond in time, but still using timeouts everywhere.


Yeah, after an upgrade of Caddy on my server the syncthing ui broke. Not sure why but with timeouts the ui just doesn't work. It was a little surprising but I think it's more a problem syncthing should solve.


Yeah, you could take all the file descriptors by opening a whole bunch of connections.


Does Caddy have a pluggable extension set where you can write your own middleware, in Go?


It does and it's documented on the wiki. I've written a few plugins privately, like implementing SCEP into caddy and adding a few other features.

Due to the nature of the plugin system, you'll have to build Caddy yourself if you add customizations like that.



Random question: is there a way to start Caddy as root so it can bind to port 80 (for example) then change the user so a non-root user can send a `USR1` signal to Caddy to get it to reload the configuration?


At least with systemd caddy starts as a non-root user, if you use the provided unit-file: https://github.com/mholt/caddy/tree/master/dist/init/linux-s...


Is there something like `setcap` for macOS?

    Give the caddy binary the ability to bind to privileged ports (e.g. 80, 443) as a non-root user:

        sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/caddy


There isn't. There's a port of authbind, but I don't think anyone was able to get it working with caddy. The only solution I've heard of working is using the pf firewall to port forward, for example, 8080 to 80. Like this: https://salferrarello.com/mac-pfctl-port-forwarding/


Better systemd integration would use the LISTEN_FDS mechanism, letting the service management subsystem open the privileged-access listening socket as instructed by a socket unit.
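The socket-unit side of that would look something like the sketch below (a hypothetical caddy.socket; it only works if the server actually consumes the LISTEN_FDS descriptors systemd passes in, which the comment above is proposing, not describing):

```ini
# caddy.socket — systemd binds the privileged ports itself and hands
# them to the matching caddy.service as inherited file descriptors.
[Socket]
ListenStream=80
ListenStream=443

[Install]
WantedBy=sockets.target
```

The service then never needs root or setcap, since it never calls bind() on a privileged port itself.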


Yes. From their GitHub readme:

Running as root: We advise against this. You can still listen on ports < 1024 using setcap like so: sudo setcap cap_net_bind_service=+ep ./caddy


Any benefit in using this inside a docker container instead of nginx? No need for SSL or many of the other features I'm seeing listed here since it's all behind an Amazon ELB.


We are using Caddy as a simple reverse proxy in Docker environments. The configuration is a bit simpler than nginx and we love the tiny Docker images we can create (not sure how large an nginx-full installation is).

That being said, we did run into a few issues that forced us to go back to older Caddy versions, like broken websocket support or the timeout issue in 0.9.5. Also, sometimes the documentation is a bit lacking and unclear. DNS resolution seems to be flaky sometimes (we're using alpine-based containers and sometimes Caddy just won't resolve names of other containers, even though a curl inside the container can resolve the names just fine).

So if you've got a working nginx setup, I'd say stick with it. For new projects it is worth checking out Caddy. The issues we ran into occurred early in our development process; they didn't just suddenly happen in production, so once you've tested everything, Caddy just works.


When would I use Caddy, rather than using nginx and configuring Let's Encrypt for myself? Honest question.


It's a lot easier than nginx + letsencrypt + cron job: there is zero config to get Let's Encrypt up and running and certs renewing automatically.

It supports HTTP/2 out of the box and will keep improving as the Go net/http library does. Performance is good (certainly good enough for 99.9% of the static sites out there; I haven't specifically benchmarked against nginx but response times didn't change much). I'd recommend it as a proxy if you need one, particularly if you need one that handles your TLS certs.


Why would I use nginx and go through the hassle of configuring let's encrypt and sane SSL settings when I could just use caddy?

But seriously, for all "hobby" projects I've started recently, and even some production services now, I go straight to caddy.

My config consists of a 1 line global config file:

    import sites-enabled/*
And 90% of the individual services have this config:

    example.com {
      proxy / http://localhost:5000 {
        header_upstream Host {host}
      }
    }
Compared to nginx it's just so much simpler and so far I haven't found myself missing a single feature. (In fact, caddy has active health checks, which nginx only supports for its extortionate enterprise fee)


What's the performance like? Can you use it as a proxy server?


I use it exclusively as a proxy server.

I've never noticed any performance issues, I think I saw a benchmark where it said caddy can handle 2000req/s, which is at least an order of magnitude or 2 faster than any upstream service I've used it in front of.


>I think I saw a benchmark where it said caddy can handle 2000req/s

Those are application level numbers (e.g. with logic and all).

Caddy should be able to handle an order of magnitude more req/s than that easily.



yeah, I clearly misremembered the benchmark

It was slightly slower than nginx, but not enough to impact typical applications


2000 req/s is tragic. We have services with 100,000 req/s, so I need a reverse proxy with at least 500,000 req/s. What is the reason for this low performance?


I am not sure why this is getting downvoted, care to explain?


My guess: at 500K rps on a single proxy you're probably using 40GbE or higher, right? So you're a fairly niche user and should have some sort of in-depth knowledge of this space. Your current setup should already have some sort of horizontal scaling system in place. Just throwing out that you "need" 500K rps came across as fake boasting/clueless/whiny.

I'm sure HN would love a more in-depth comment where you say how you're accomplishing things today and what you expect out of general purpose reverse proxies, etc.


If your HTTP request size is 2048 bytes and the RPS is 500,000, we get 1,024,000,000 bytes/sec, which is ~8 Gbit/s; let's say with overhead we are hovering around 10 Gbit/s. Based on the response size you can do a similar calculation. As you can see in the TechEmpower benchmarks, frameworks with 3M+ req/s are not uncommon.

https://www.techempower.com/benchmarks/#section=data-r12&hw=...

This is on a Dell R720xd dual-Xeon E5 v2 + 10 GbE. Now if I chose a technology that cuts this performance 200x, I am wasting resources.

I like the idea of Caddy, but before we can consider using it in production I need to make sure it can keep up with the performance requirements we've got.


Yeah but as a proxy you need to add the request and response and you get that much full duplex. Those numbers are also with pipelining returning a tiny static response - basically as close to a TCP echo test as you can get.


It can act as a reverse proxy, yes. I would also love to see some performance benchmarks.


Proxy-server performance is fine.

File-serving performance — not so good.


A lot of the other responses have been focusing on how simple Caddy is to get started with - batteries included. We've been using Caddy for a bit over 2 years in production. At the time we had just moved our infrastructure from an assortment of self-written python scripts to Mesos/Marathon. We were looking for a solution that would reverse proxy to our app servers without having to rely on DNS. I initially thought about writing a plugin for nginx before I found Caddy. When we found Caddy, it was a better solution for us as it was far easier to write a mesos integration for rather than trying to integrate it into nginx.


I use Nginx in production, but caddy on my dev machine, where I might have dozens of test sites running at any time. Those sites all live on subdomains of a dedicated dev domain and I rarely create more than 2 or 3 per week so I can always just use https with certificates generated on the fly. In Nginx I would need to constantly re-run certbot to add subdomains manually.


For all local/non-public instances I just use "tls self_signed" and have Caddy generate a cert on demand.


I feel like the niche Caddy's trying to fill is for people who don't want to bother with that, or don't want to learn. If you already know how to set up nginx and LE (like you and I do), its only appeal is relatively minor: potentially saving a quarter or half hour.


> its only appeal is relatively minor

Is that factual, or also something you feel like?

To us simple folks, you know, it looks as if Caddy has a number of use cases where it's a much neater fit than nginx. We like the extreme simplicity and the transparency of it all. Some of us also like the Go it's written in. What with the magic Go modularity, you can actually embed this server in your web-app. Or is it the other way round? Comes in handy, at any rate.


> What with the magic Go modularity, you can actually embed this server in your web-app. Or is it the other way round? Comes in handy, at any rate.

It's the other way round.

You can embed your app in the server if you want to get its features for free, though given it uses net/http and most of the features are exposed there you should probably just use that directly instead if you are doing anything significant.


That was a feel. I never thought of the idea of embedding it. Thanks for teaching me more!


When you're a solo entrepreneur/developer/sysadmin/whatever hat I need to wear, streamlining processes as much as possible is a good thing. Using something overly complex, just to use it, is silly. I stumbled upon Caddy because I was struggling with getting Let's Encrypt set up properly for all my sites on Nginx. Now, with a few lines of config, all my sites use, redirect to, and update their certificates automatically. One less piece of bullshit I need to sift through for all the projects I'm working on. In my opinion, something like SSL certificates SHOULD be an automated process.


I'm not opposed to learning, but it's like learning emacs when nano legitimately covers all your use cases: potentially useful, but unconvincing ROI. And if we pretend for a moment that it's really only a quarter hour saved, why would I not save that time if all other things are held fixed?

Caddy isn't the universal solution, but it sounds like you're rationalizing using the more complex tool because you know it and have been using it rather than because it's necessarily superior.


The appeal is huge for a huge number of people. It's an opinionated web server. I've used nginx for probably 6-7 years, and not once have I done something I'd consider that special (I've set up fpm, passenger, gzipping of content, TLS, regenerating my DH params, etc).

Every single thing I've done could be implemented in an opinionated way by the web server. I'd love to just say "php" and have it do the right thing.

In fact, I'd go as far as to say, it would be amazing if I could just say "proxy ____ drupal" instead of php, and it'll add the drupal rules as well.


It all depends on how complex your setup is. Caddy isn't just easy to set up; it is easy to set up every part of it, until you reach the end of its functionality.

I can set up both nginx and apache (and get paid for it), I've automated LE for new sites via ansible, etc. Nowadays, if I can do it with Caddy, I just use that and proceed to the next task. Also remember that easy (and short) configuration means easier maintenance.

Devs also love it; they can develop locally using advanced functionality (e.g. proxying an API that doesn't provide CORS).


I think over time there will be a similar server written in Rust, where at least we can be sure that there are no buffer overflows... until then nginx is probably better in every situation.


Unless it uses unsafe code (which Rust also provides) Caddy wouldn't have buffer overflows either, since Go is garbage collected and doesn't allow for the kind of pointer arithmetic C does in normal code.


Caddy is written in go, not C, so I'm not sure why it should be especially vulnerable to buffer overflows...


Do we really need someone to mention Rust in every thread?


No, it just makes sense to have a fast and safe HTTP server at the same time.

Go is on average 3x slower than C according to benchmarks, and it's hard to get rid of this issue because of how the language is architected. I just don't see Caddy as a possible long-term replacement for nginx.


As a member of the Rust Evangelism Strike Force, I am morally obligated to. /s


someone had to say it :p


why would there be buffer overflows in caddy?


I made the complete switch to Caddy just because the configuration is simplified and it has features like git webhooks built-in. Setting up BasicAuth on a directory is one line in the config. And it's fast. No issues so far, been using Caddy for over a year now.


I vouched for this comment to bring it back from dead. dev247, you seem ghosted for all the account's lifespan for no apparent reason. Contact hn@ycombinator.com to get reset to a proper user account!


So did I. What's the deal with this absolutely harmless comment going all grey and dead?


The account was probably automatically flagged by some algorithm 3 years ago. Yay for smart machines and no human oversight!


This is the first I've heard of Caddy and I'm really impressed by the feature set. Looks like a breath of fresh air, can't wait to try it.


Is there an official Docker image yet?


No, but this is the one I usually recommend for now: https://hub.docker.com/r/abiosoft/caddy/


Is ECDHE_ECDSA possible on Caddy?



