
HAProxy is an enterprise load balancer that's available through Red Hat or other OSS vendors. Nginx is just so easy to configure...



HAProxy is a wonderful load balancer that doesn't serve static files, thus forcing many of us to learn Nginx to cover the static-file-serving scenarios.

Caddy seems like a wonderful alternative that does load balancing and static file serving but has wild config file formats for people coming from Apache/Nginx-land.


I keep a Caddy server around and the config format is actually much, much nicer than nginx's in my experience. The main problem with it is that everybody provides example configurations in the nginx config format, so I have to read them, understand them, and translate them.
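For a taste of the difference, here's a minimal reverse proxy in both formats (hostnames and ports are illustrative); note that Caddy also provisions the TLS certificate automatically:

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/ssl/example.com.pem;
        ssl_certificate_key /etc/ssl/example.com.key;
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }

The Caddyfile equivalent:

    example.com {
        reverse_proxy 127.0.0.1:8080
    }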

This works for me because I already knew a fair bit about nginx configuration before picking up Caddy but it really kills me to see just how many projects don't even bother to explain the nginx config they provide.

An example of this is Mattermost, which requires WebSockets and a few other config tweaks when running behind a reverse proxy. How does Mattermost document this? With an example nginx config! Want to use a different reverse proxy? Well, I hope you know how to read nginx configuration because there's no English description of what the example configuration does.
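For what it's worth, the WebSocket-relevant part of such an nginx example usually boils down to a handful of directives (8065 is Mattermost's default port; the rest is illustrative):

    location / {
        proxy_pass http://127.0.0.1:8065;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

Caddy's reverse_proxy handles the Upgrade handshake out of the box, which is exactly the kind of fact a plain-English description would surface and a bare nginx snippet doesn't.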

Mastodon is another project that has committed this sin. I'm sure the list is never-ending.


> The main problem with it is that everybody provides example configurations in the nginx config format, so I have to read them, understand them, and translate them.

This is so real. I call it "doc-lock" or documentation lock-in. I don't really know a good scalable way to solve this faster than the natural passage of time and growth of the Caddy project.


LLMs baby! Input nginx config, output caddy config. Input nginx docs, output caddy docs. Someone get on this and go to YC.


You're absolutely right. I'm going to do this today.

It's clear from this thread that a) Nginx open source will not proceed at its previous pace, b) the forks are for Russia and not for western companies, and c) Caddy seems like absolutely the most sane and responsive place to move.


LLMs do a horrendous job with Caddy config as it stands. They don't know how to differentiate Caddy v0/1 config from v2 config, so they hallucinate all kinds of completely invalid config. We've seen an uptick of people coming to the forums for support with configs that don't make any sense.


For just blasting a config out, I'm sure there are tons of problems. But (and I have not been to your forums, because...the project just works for me, it's great!) I've had a lot of success having GPT4 do the first-pass translation from nginx to Caddy. It's not perfect, but I do also know how to write a Caddyfile myself, I'm just getting myself out of the line-by-line business.


You could've used the nginx-adapter and skipped the faulty LLMs:

https://github.com/caddyserver/nginx-adapter


Thanks for the link! Maybe less thanks for the attitude, though--I'm well-versed in how these tools fail and nothing goes out the door without me evaluating it. (And, for my use cases? Generally pretty solid results, with failures being obvious ones that fail in my local and never even get to the deployed dev environment.)


> This is so real. I call it "doc-lock" or documentation lock-in. I don't really know a good scalable way to solve this faster than the natural passage of time and growth of the Caddy project.

I think you're totally right here - gaining critical mass over time as the battle-tested solution. On the other hand, doc authors who prefer Caddy will likely stop providing sample Nginx configs, and then someone else will complain about that on HN.

"Battle tested" can be seen differently of course, but in my opinion, a statement like the following,

> IMO most users do require the newer versions because we made critical changes to how key things work and perform. I cannot in good faith recommend running anything but the latest release.

from https://news.ycombinator.com/item?id=36055554 , by someone working on Caddy, doesn't help. Maybe in their bubble (can I say your bubble, since you're from Caddy as well?) no one really cares about LTS; everyone just uses "image: caddy:latest" and everything runs in containers managed by dev teams - just my projection on why it may be so.


How would you imagine this working in practice? Should one provide instructions for unwrapping the Docker images/Dockerfiles a project uses (quite a few projects lean on Docker/containers nowadays rather than a regular system setup) in order to, say, set the same thing up in FreeBSD jails? Where does it stop?


Just for completeness' sake, and probably not useful to many people: HAProxy can serve a limited number of static files by abusing back-ends and error pages. I have done this for landing pages and directory/table-of-contents pages. One makes a properly formed HTTP response with the desired headers embedded in it, configures it as the error page for a new back-end, and uses ACLs to direct specific URLs to that back-end. Then just replace any status codes with 200 for that back-end. It's probably most useful to those with a little hobby site or landing page that needs to give people some static information while the rest of the site is dynamic. This reduces moving parts and reduces the risk of TIME-WAIT assassination attacks.

This method is also useful for abusive clients that one still wishes to give an error page to. Based on traffic patterns, drop them in a stick table and route those people to your pre-compressed error page in the unique back-end. It keeps them at the edge of the network.
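A rough sketch of the trick as described (paths and ACLs are illustrative; landing.http must be a complete raw HTTP response whose embedded status line already says 200 OK):

    frontend fe_main
        bind :80
        acl is_landing path /landing
        use_backend be_static if is_landing
        default_backend be_app

    backend be_static
        # No servers here: every request "fails" with a 503 and gets the
        # error page, which is really our pre-built static response.
        errorfile 503 /etc/haproxy/pages/landing.http

    backend be_app
        server app1 127.0.0.1:8080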


FYI: Serving static files is easier and more flexible in modern versions of HAProxy via the `http-request return` action [1]. No need to abuse error pages and no need to embed the header within the error file any longer :-) You even have some dynamic generation capabilities via the `lf-file` option, allowing you to embed e.g. the client IP address or request ID in responses.

[1] https://docs.haproxy.org/dev/configuration.html#4.4-return

Disclosure: I'm a community contributor to HAProxy.
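A minimal sketch of that approach (paths illustrative):

    frontend fe_main
        bind :80
        # Serve the file directly from HAProxy; no back-end needed.
        http-request return status 200 content-type text/html file /etc/haproxy/pages/landing.html if { path /landing }
        default_backend be_app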


Nice, I will have to play around with that. I admit I sometimes get stuck in outdated patterns due to old habits and being lazy.

> I'm a community contributor to HAProxy.

I think I recall chatting with you on here or via email, I can't remember which. I have mostly interacted with Willy in the past; he is also on here. Every interaction with HAProxy developers has been educational and thought-provoking, not to mention pleasant.


> I think I recall chatting with you on here or email, I can't remember which.

Could possibly also have been in the issue tracker, which I helped bootstrap and maintained for quite a while after initially setting it up. Luckily the core team has taken over, since I've had much less time for HAProxy contributions lately.


That's the best part -- you can choose your config format when using Caddy! https://caddyserver.com/docs/config-adapters


True and I've made use of the Nginx adapter, but the resulting series of error messages and JSON was too scary to dive in further. The workflow that would make the most sense to me (to exit Nginx-world) would be loading my complex Nginx configs (100+ files) with the adapter, summarizing what could not be interpreted, and then writing the entirety to Caddyfile-format for me to modify further. I understand that JSON to Caddyfile would be lossy, but reading or editing 10k lines of JSON just seems impossible and daunting.


Thanks for the feedback, that's good to know.


> but has wild config file formats for people coming from Apache/Nginx-land.

stockholm syndrome


the syntax of nginx configs might not be hard, but its semantics (particularly [0]) is eldritch evil I don't relish dealing with

[0] https://www.nginx.com/resources/wiki/start/topics/depth/ifis...
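The linked page's canonical example of that evil - two `if` blocks that both match, yet only the last one takes effect:

    location /only-one-if {
        set $true 1;
        if ($true) {
            add_header X-First 1;
        }
        if ($true) {
            add_header X-Second 2;
        }
        return 204;
        # The response carries only X-Second: the last matching "if"
        # replaces, rather than extends, the location's configuration.
    }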


I can see that. But for me, I was so very relieved to no longer deal with Apache config files after switching to Caddy.


A load balancer shouldn't serve static files. It shouldn't serve anything. It should... load balance.

I can see why you'd want an all-in-one solution sometimes, but I also think a single-purpose service has strengths all its own.


For a lot of web apps, having an all-in-one solution makes sense.

nginx open source does all of these things and more wonderfully:

    Reverse proxying web apps written in your language of choice
    Load balancer
    Rate limiting
    TLS termination (serving SSL certificates)
    Redirecting HTTP to HTTPS and other app-level redirects
    Serving static files with cache headers
    Managing a deny / allow list for IP addresses
    Getting geolocation data[0], such as a visitor’s country code, and setting it in a header
    Serving a maintenance page if my app back-end happens to be down on purpose
    Handling gzip compression
    Handling websocket connections
I wouldn't want to run and manage services and configs for ~10 different tools here but nearly every app I deploy uses most of the above.

nginx can do all of this with a few dozen lines of config and it has an impeccable track record of being efficient and stable. You can also use something like OpenResty to have Lua script support so you can script custom solutions. If you didn't want to use nginx plus you can find semi-comparable open source Lua scripts and nginx modules for some individual plus features.

[0]: Technically this is an open source module to provide this feature.
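To make that list concrete, here's a sketch of how several of those duties can sit in one config (hostnames, paths, and zone sizes are illustrative):

    # Rate limiting: 10 req/s per client IP.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;    # HTTP -> HTTPS redirect
    }

    server {
        listen 443 ssl;                          # TLS termination
        server_name example.com;
        ssl_certificate     /etc/ssl/example.com.pem;
        ssl_certificate_key /etc/ssl/example.com.key;

        gzip on;                                 # compression
        deny 203.0.113.7;                        # IP deny-list entry

        location /static/ {
            root /srv/app;                       # static files...
            expires 30d;                         # ...with cache headers
        }

        location / {
            limit_req zone=perip burst=20;
            proxy_pass http://127.0.0.1:8080;    # reverse proxy to the app
            proxy_set_header Host $host;
        }
    }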


Quite interesting - in theory, a "pure" load balancer shouldn't, but in practice most of my LBs do, especially for small projects. Even for larger projects I combine proxy_cache on the LB, making it serve static files or a site's public content, while splitting load for dynamic content over several application servers.

And I think it's fine.


Caddy config is no worse than HAProxy.



