Nginx 1.25.0: experimental HTTP/3 support (nginx.org)
379 points by thunderbong on May 23, 2023 | 283 comments



I want to add that I have learned so much by following the nginx mailing list for more than a decade.

Can we give a huge round of applause to Maxim Dounin for community support and technical excellence?

Maxim and team are answering the deepest of technical questions patiently and to the point.

Every time I read into those threads I am impressed by Maxim. By his dry communication style, his precision, and his patience. It's inspiring.

When he replies (which is likely to be the case), you typically get the problem you presented restated in his own words, with precise language/terms. Very likely he provides a solution. Or a precise quote from the reference docs or spec describing why something doesn't work, conceptually. Or a patch (he often replies with "here's a patch that should work", showing a clean diff).

So: https://mailman.nginx.org/pipermail/nginx/ -- highly recommended if you want to learn more about HTTP and web servers in general.

By the way: for forcing DNS re-resolution (mentioned elsewhere in this thread) in the open source version, there is a weird but extremely powerful workaround (which really works; we used it in DC/OS successfully for years), also see https://github.com/dcos/dcos/tree/master/packages/adminroute....

It was of course Maxim who described this little trick in the mailing list in 2011 :-) https://forum.nginx.org/read.php?2,215830,215832#msg-215832

It's still highly relevant in 2023 for controlled dynamic service discovery with nginx.
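(For the curious, a minimal sketch of that trick, with placeholder names and addresses: putting the upstream host in a variable forces nginx to resolve it per request through the resolver directive, instead of once at startup.)

    resolver 127.0.0.11 valid=10s;   # e.g. Docker's embedded DNS; valid= caps how long an answer is reused

    server {
        listen 80;

        location / {
            # a variable in proxy_pass defeats startup-time resolution;
            # the name is re-resolved via 'resolver' while requests are served
            set $backend_host "backend.internal";
            proxy_pass http://$backend_host:8080;
        }
    }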


Is mailing list the recommended way to get help on nginx?

I recently took on the mistake/challenge of using nginx as an SSL reverse proxy for a bunch of non-SSL services running in docker containers.

To my dismay there is no decent documentation for what I thought would be a common use case - namely docker for everything, including nginx.

* SSL was easy enough - I have a wildcard certificate and nginx does have good docs on setting it up

* Docker networking was a bit of a pain - but I solved it by making a separate network.

* proxy_pass is where I got really bogged down - I had to rewrite location /api and serve it from the internal network + port.

        location /api/ {
            # strip the /api prefix before proxying
            rewrite ^/api(.*)$ $1 break;
            # proxy_pass http://172.19.0.3; # also works
            proxy_pass http://172.19.0.1:9090;
            # most likely something else is needed to fix relative paths
        }
So now I have the problem that the proxy works for mysite/api/index.html but not for any relative paths, i.e. static/css/style.css is not loading (but docker exec -it mycontainer curl does work)

Maybe it is Google's fault but it seems near impossible to find a good AUTHORITATIVE reference on setting up a reverse proxy server with nginx.


Not entirely clear why you need to rewrite if you're also doing vhost based "routing" (wildcard certificate).

But as the saying went with apache, if you have a routing problem, you can fix it with mod_rewrite - now you have two problems!

You might want to (re)read:

https://nginx.org/en/docs/http/request_processing.html

and skim:

https://nginx.org/en/docs/http/load_balancing.html

And (re)read:

https://nginx.org/en/docs/http/ngx_http_proxy_module.html#pr...

It seems dubious that you need any rewriting for your setup.

You might need a handful of server blocks (vhosts) with either proxy_pass or a few locations with proxy_pass?
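Roughly, a sketch of what that could look like with no rewriting at all (only the 172.19.0.1:9090 address is taken from your snippet; everything else is an assumption): when proxy_pass carries a URI part (the trailing slash here), nginx swaps the matched /api/ prefix for it before passing the request upstream.

    server {
        listen 443 ssl;
        server_name myproject.myorg.org;

        # /api/whatever -> http://172.19.0.1:9090/whatever
        # (the URI part on proxy_pass replaces the matched /api/ prefix)
        location /api/ {
            proxy_pass http://172.19.0.1:9090/;
        }

        # if the backend's pages reference absolute paths like /static/...,
        # those requests never hit /api/, so they need their own location
        location /static/ {
            proxy_pass http://172.19.0.1:9090/static/;
        }
    }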


Indeed I am wary of rewrite from apache days.

So using a subdomain should solve routing issues - api.myproject.myorg.org instead of myproject.myorg.org/api ?

Two issues - my wildcard cert is *.myorg.org, so I'm not 100% sure it would cover subdomains of subdomains.

Second issue - you'd need to set up DNS for subdomain of subdomain, would you not?

Sadly the DNS setup would require opening a support ticket in myorg with an uncertain chance of completion...


> my wildcard cert is *.myorg.org so not 100% it would cover subdomains of subdomains.

it won't:

https://www.rfc-editor.org/rfc/rfc2818#section-3.1

> Matching is performed using the matching rules specified by [RFC2459]. If more than one identity of a given type is present in the certificate (e.g., more than one dNSName name, a match in any one of the set is considered acceptable.) Names may contain the wildcard character * which is considered to match any single domain name component or component fragment. E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.


Ah, I assumed you already had subdomains set up. Path based routing should be fine, but you probably still don't need rewriting - just "mount" the appropriate proxies in appropriate location blocks (read over the examples in the documentation carefully).



This sounds like an nginx-based k8s ingress-controller. See:

https://kubernetes.github.io/ingress-nginx/deploy/


I still love nginx, but it's a damn shame they gate some really useful features behind their nginx plus license, namely service discovery via DNS SRV records, cache purge requests and request queueing for upstreams.


I think they have selected the features for Plus perfectly. Service discovery, request queueing etc. are only required above a certain backend size, and in this way almost 100% of hobby sites are fine with the open source version. And the hobby sites are getting, for free, reliability and security that could otherwise only come with a paid service.


No personal offense intended, but I really /hate/ this justification. "Open Source" should not be equated with "Hobby Projects". It should be possible to run large-scale important things on pure open source.

My employer (Wikimedia, which is a non-profit) has been running all our server infra on pure open source for many years, for very principled reasons. We even publish all of our internal infrastructure config in public git repos as well. We're an open-source-only shop that's doing very non-hobby work. We have thousands of servers, we run our own CDN on multiple continents, we have somewhere around a billion unique users per month, and our average global rate of inbound HTTP requests is around 135K/second.

However, it keeps getting harder to stick to our open source principles every year as more projects shift towards open-core models with this kind of "Anyone who needs feature X is surely a big corporation making billions who doesn't mind running closed source software" type of thinking. It's not just bad for us, it's bad for the whole larger open source and Internet ecosystems if all the important bits for running at scale are locked up and hidden away in closed-source software.


This is a rather weird point to make, considering the way Wikipedia has handled the collection and usage of funds. Investing in its technical infrastructure would be a less controversial usage of their money than what they are doing currently.


As a theoretical exercise, I'm not sure that's strictly true. If you're reliant on donations, then income is never guaranteed. If you suddenly fall short of targets, maybe you have to lay some staff off. But stopping your license fees means you then have to pay someone to switch to an alternative.

Avoiding recurring payments you can't easily stop makes reasonable financial sense to me. Particularly for a business that doesn't sell anything.


As a theoretical exercise, you could also choose to "lavish" funds on the developers of the OSS platform instead of a recurring license fee for use, accomplishing a more outreach oriented use of funds than paying non-profit "employees" north of market with remarkable packages. If funds get tight, you can simply stop lavishing on the OSS, while enjoying ongoing use. Such a license can definitely be arranged -- I've done it at scale.


Let’s say you need a feature in nginx. You can either pay for an nginx plus license, or fork the oss nginx and develop the feature in-house.

Many people assume that developing it in-house is a one-time cost. However, in my experience, code is not an asset but a liability for which you will have to pay interest in the form of maintenance. More specifically, in this case there will most likely need to be adjustments made to the in-house extension as the oss nginx version it was built on top of is updated over the years.

My point here is that you should probably treat in-house code as a recurring cost, because in practice it usually is.

The question then becomes which recurring cost is higher: The license or the in-house maintenance?


What are they currently doing?



I wholeheartedly agree with you, and I am so glad that wikimedia is pushing to stay truly open source.

Accepting these "sprinkles on top" licenses is highly foolish. I do hope that the true free and open source licensed software prevails in the server space. As many as possible must continue to hold out and push forward with new better and better scalable fully open source software.


> My employer (Wikimedia, which is a non-profit) has been running all our server infra on pure open source for many years, for very principled reasons

Curious how deep does it go? Do you only use servers whose firmware has open source code? Do you require all drivers to be open source?

> It should be possible to run large-scale important things on pure open source.

It is definitely possible. With enough developer time, you could create a copy of any software. Every paid or open-core software runs on the philosophy that it is cheaper to pay them rather than to build it yourself.

Which brings us to the core issue: developer time is not free or cheap. The fact that software quality and reliability depend on the money behind the software means paid software quality should in general be higher. While Wikimedia could get enough money through donations, no one donates to open source projects amounts that would add up to a full-time developer salary; likely not even Wikipedia (correct me if wrong).


You are very much the exception that proves the rule. What other non-profit needs anything close to that kind of capacity?


The NFL was a non-profit from 1942 till 2015.


nginx opensource is more than enough to have a very capable solution. All the bells and whistles that are fenced off are more or less for people who don't want to bother engineering their infrastructure properly. Or can't do that for some reason. The only thing that may really be needed is name resolution inside an upstream. But you can work around it in many cases. If there is something you need in OSS - consider asking in the mailing list. Best case - you'll have a workaround (an unlikely outcome: the feature will be moved to OSS). Worst case - you lose nothing, don't you? It's quite difficult to get decent feedback on a thing like a webserver. You can't add a pop-up window or email your users. You have to rely on active feedback. A vague comment somewhere on the internet doesn't count as such.


You are correct, but you are an outlier. The vast majority of organizations serving the quantity of requests that Wikimedia is serving are commercial entities.


DevExpress is closed source, but they give sources to customers under a promise that they won't leak. Maybe nginx can do something like that? Or openresty has a module for your needs?


This is called “source available”, and is not the same thing as “open source”.

https://en.m.wikipedia.org/wiki/Source-available_software


In case they worry there's something horrible hidden there.


Have you tried contacting nginx and explaining the use case/problem as you did here? A lot of software companies have different or free licenses for cases like yours.


In Wikimedia's case it's more likely a matter of principle than cost. They have plenty of money despite what the donation popups make it seem. They want open source because it's open, not because it's cheap.


AFAICT nginx has a true open-source edition, which anybody can fork. Was there enough interest in the community in developing such features in the open-source code base?


Most large projects make money. Wikipedia is one of them (they have more than sufficient funds). If you are a company that writes open source software you have to earn money somehow. I think aiming paid features at customers that make money (i.e. large enterprises) is logical.


I like to overbuild my own things as a way to build a skillset, and I've basically only been doing that on software where only extremely specialized features are gated (e.g. Vault's HSM integration; I can't justify the cost of an HSM anyway).


[flagged]


I'm sure you like your site, but to be honest anything that doesn't make at least $3,675 per year is not a business, it's a hobby. Nobody said that hobbies are insignificant, they just don't (necessarily) make money.


If it were $3675/business/year, I might agree. But $3675/server-instance/year is insanely expensive for many very real businesses. That's $300/month! Imagine a business like Shopify, but with one server instance per customer. No way can you afford to do that if you have to buy nginx pro for each instance. Unit economics still matter to lots of businesses.

That said, you can run quite a big site without any of these features just fine. Also nginx is open source and you can implement these features yourself, either using C or Lua, relatively easily as well.


nginx is a reverse proxy. You're not meant to be running dozens of instances, you're supposed to stick one instance in front of a whole collection of application instances. Your nginx costs don't scale with your server load, they scale with your infrastructure complexity.


Nginx is a lot more than a reverse proxy, including a not-too-shabby static resource host. In fact Netflix ultimately serves their videos from Nginx.


Profit is the difference between hobby and not?

Non-profits are not hobbies.


I didn't say anything about profit. Non-profits do (and should) make and spend money, too.


Charities should make money?

Best to you.


It's one cost of so very many these days.

These prices (per instance!) add up; there are many other pieces of software clamoring for their share, hardware costs, networking costs, labor costs. It's death by a thousand cuts.


You're saying anything that makes under $4k a year isn't a business, but you're using it to justify that anything that makes under $8k a year isn't a business. Because you're earmarking those profits to go to a vendor, which then are not profits.

Are people not entitled to make $300 a month because they owe Nginx for a bot throttling feature? Can you run a small website without throttling these days?


> Are people not entitled to make $300 a month because they owe Nginx for a bot throttling feature?

Where is this attitude of entitlement coming from? Are people entitled to use Nginx's code for free? Yes, free software is a wonderful thing, and I do believe that most software should be free. But the reality, in this world, is that software developers need to eat, and relying solely on donations only works for a handful of projects (as does relying on selling support). Open-core is a reasonable middle ground.

If somebody wants to reimplement that feature on top of Nginx and release it under a free license, there's absolutely nothing stopping them. If you want to use an already written and well-integrated version from Nginx themselves, then you can pay up.


On the other hand, though, why does anyone believe they're entitled to a bot-throttling feature for free?

Certainly someone could write and release a third-party, open-source nginx module to do bot throttling, if they wanted to.


Big Same feels from me as you. I watched countless good things at Heroku get locked behind enormously expensive “enterprise” contracts.


It's not a significant amount of money for any business that gets this amount of traffic. Being this stingy on software your business relies on deserves derision.


I am biased, but Kong Gateway has built SRV resolution in the OSS product, which runs on top of Nginx: https://docs.konghq.com/gateway/latest/how-kong-works/load-b...

It has been built by extending Nginx with an OpenResty-based DNS resolver instead. You could even extract it away from Kong itself and use it standalone.


And, come to think of it, some additional great features would be automatic TLS certs via Let's Encrypt and maybe even being able to use a shared cache with multiple instances of nginx.


I put off adding TLS certs to my personal website for years. It's just a few static files served by nginx on a server - there wasn't a great reason to bother, I thought.

It took me almost exactly 3 minutes start to finish with letsencrypt. Where 'start' was "I should stop putting that off" and typing "letsencrypt.com" into a browser, and 'finish' was nginx serving up https:// on all my domains. I'm genuinely curious what nginx could possibly do to improve the situation there.


It's true that setup is simple enough but what other servers like Caddy provide is automated renewals.

You don't have to mess with having a .well-known always accessible on each domain, for example. Caddy does it for you and only when it's needed.
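For comparison, the nginx + certbot webroot pattern usually boils down to keeping one challenge location reachable over plain HTTP. A rough sketch (paths and domain are assumptions, and certbot --webroot is assumed to handle renewals from cron/systemd):

    server {
        listen 80;
        server_name example.org;

        # certbot --webroot -w /var/www/letsencrypt drops challenge files here
        location /.well-known/acme-challenge/ {
            root /var/www/letsencrypt;
        }

        # everything else goes to HTTPS
        location / {
            return 301 https://$host$request_uri;
        }
    }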


Caddy targets single instances. No coordination is required to avoid receiving multiple certificates for multiple instances of the same domain.

Nginx makes its profit from big businesses, which would require coordination to avoid multiple certificates. For comparison, it is worth noting that to ensure such coordination, Traefik requires an Enterprise plan and a separate agent. Such businesses also often have their own mechanisms for storing certificates, so they do not necessarily want that integrated into Nginx.


With Caddy, you can cluster easily by either sharing its filesystem data storage across machines, or configuring a different storage driver (redis, consul, db, etc.) and they will automatically coordinate for cert issuance. Caddy writes a lock file to the storage which prevents multiple instances from stepping on each other's toes.


I recently moved to Caddy. Out-of-the-box TLS, HTTP3, Dumbed-down simple configuration, static files... it has been a breath of fresh air.


Caddy is amazing.

Still irritated that they ship it with an unprotected admin endpoint enabled by default on localhost:2019 that eats JSON and allows reconfiguration of the webserver like adding new sites that enable further attacks.

Put "{ admin off }" as a separate block in the root Caddyfile to disable it.


Our view is that localhost:2019 is inherently protected though - that only allows requests from the same machine. If you're running the machine with shared users, then of course it's up to you to further secure things.

That said, see https://github.com/caddyserver/caddy/issues/5317, we are considering changing the default, but that would be a breaking change although it would likely be a transparent change for most users. The default that makes the most sense depends on the platform and installation method, which is why it's complicated.


SSRF is a thing, just because you trust the code doesn't mean you've eliminated all security risks. The tools we use in the industry should all have secure defaults.

Glad to hear you're considering changing the default!

It would be enough for me if Caddy generated a password (that's hard for attackers to predict) on first launch, set that in a config file it has write access to (autosave.json for example), and then required Basic auth using this password unless the configuration specified otherwise. My problem is that this endpoint is entirely unauthenticated.


Have you demonstrated SSRF on a server that is not running insecure or untrusted code (i.e. is not already compromised)?

We have yet to see this, but if we do see a practical demonstration, we're happy to reconsider.


You've never seen an application with an SSRF vulnerability? I've encountered multiple working as a penetration tester.


This is a horrible approach to security. Bad. Bad. Bad.


Making decisions based on demonstration of practical attacks is "bad bad bad"?


Assuming that everyone on localhost should have admin access to the web server is an extremely bad default, yes. Listening on ports that the user has not requested is bad enough, but doing so without any authentication on that interface is bonkers.


What is the problem here, that is any different from allowing untrusted code on your machine? Yes, if you don't trust your other users you need to lock down your system. That's not new or unique to Caddy, and indeed you can lock down Caddy too. Once the machine is popped locally it's already game over, authentication can't prevent that.


Large enterprise codebases will almost always have some security vulnerabilities in them but are still considered trusted code. Caddy is a fine alternative for ingress to such applications if you ask me, maybe a little on the immature side, but it's certainly getting there. An admin might absolutely pick Caddy for ingress in order to keep config simple, writing a Caddyfile, and ending up with the aforementioned admin endpoint enabled by mistake.


Defense works in layers. Punching through one of those layers makes security worse even if that layer was not the first or the primary defense.

The problem here is that a privileged process (and yes, a process that has access to ports 80 and 443 is privileged even if it doesn't run as root) gives unauthenticated control to less privileged processes.

With good security design you don't lock things down as needed but instead start fully locked down and open up only the access you really need.


> With good security design you don't lock things down as needed but instead start fully locked down and open up only the access you really need.

This. @mholt what we're arguing for is just to have secure defaults.


Also written in a memory-safeish language.


Yes -- hugely underrated. Memory safety vulnerabilities are the cause of 60-70% of exploits [0] including the infamous Heartbleed. Caddy is not susceptible to these types of vulnerabilities. That we continue to deploy C code to the edge -- including software we think is hardened like nginx or Apache or OpenSSL -- boggles my mind.

[0]: https://www.memorysafety.org/docs/memory-safety/


Caddy does it all automatically and you don’t have to remember to do things by hand.


Check what Caddy does. I think that is what the OP is after.


Lots of folk have mentioned Caddy, but I'd like to also mention Traefik which kind of does the same thing but I found it to be less of a pain in the arse.


Traefik has a very verbose config with non-sensible defaults; Caddy works out of the box with 1-3 lines to setup your domain and sensible defaults.

What's the pain?


Traefik, because it is better architected, has auto discovery of docker services, and is battle tested.


Link for others curious about the discovery:

https://doc.traefik.io/traefik/providers/docker/

(but preferring Caddy so far)


That can be done with Caddy as well: https://github.com/lucaslorentz/caddy-docker-proxy


This didn't work when I tried Caddy before, and it was the entire thing I needed to do.

I should revisit Caddy and see how it gets on.


It didn't cope with services starting and stopping well, and couldn't provision services from docker (at least when I looked at it).


Could you elaborate? We'd like to improve where practical.


Try Caddy. I was mind blown when I fired up a simple Caddyfile + Docker and got SSL out of the box.


> And, come to think of it, some additional great features would be automatic TLS certs via Let's Encrypt and maybe even being able to use a shared cache with multiple instances of nginx.

Well, for what it's worth, certbot is still pretty good: https://certbot.eff.org/

That said, with servers like Caddy supporting automatic TLS and even Apache httpd having built in support now with mod_md, Nginx feels more like an outlier for not supporting ACME out of the box: https://httpd.apache.org/docs/2.4/mod/mod_md.html

For my personal sites I actually use Apache because it's pretty good and has lots of modules (including OpenID Connect support for Keycloak etc.), but Nginx has always been really easy to install and setup, especially for serving static files (e.g. built front end apps) or being a basic reverse proxy.

Here's a bit more about how I use Apache: https://blog.kronis.dev/tutorials/how-and-why-to-use-apache-...

And a little bit about some of the occasional warts of Nginx: https://blog.kronis.dev/everything%20is%20broken/nginx-confi... (though otherwise it's fine)

That said using HTTP-01 instead of DNS-01 across multiple nodes with the same hostname is needlessly complex in most cases and DNS-01 doesn't always have the best support in web servers (Apache in particular is just like: "Yeah, we can run whatever script you want, but the contents and integrating with your DNS server is up to you").


You just want to get a free ride, and they want to feed their kids.


Certbot is practically a one liner in a cron job once you fill out the config.


Does Certbot still pull in 40-odd dependencies? I wanted to use it, but ugh, the amount of extra libraries it wanted to pull in. acme.sh does it all with a single bash script (plus a few supporting binaries, to be fair)


Not sure, I think I've only used the version they distribute on pip. Which can be a bit obnoxious if you have to build the python cryptography package for your target os/platform but if you don't it's a fairly minor install.

There are quite a few other options besides certbot, I was just suggesting it as a "install + one liner" for adding auto renewing ssl certs to a server.

https://letsencrypt.org/docs/client-options/


Blocked by that stupid snap crap :(



Ubuntu on a server? Just use Debian lol.


LE is dead easy to integrate using Docker; the work related to integrating it into webservers themselves (IMO) isn't worth it.


There's literally zero work with Caddy. I'm not sure I understand what you're trying to say.

Also, there's massive advantages to having TLS issuance built into the webserver, such as proper OCSP stapling, having an active process to trigger renewals as soon as a revocation happens (hearing about it via OCSP), solving the ACME TLS-ALPN challenge without extra steps, and unique features like On-Demand TLS that many SaaS companies are relying on to provide a custom domains feature to their customers. None of those things are possible unless it's tightly integrated in the webserver.


How would you rather the devs generate revenue?


I'm just daydreaming here, completely understand they have a business to run and they probably thought long and hard about which features to put into which license. No criticism intended, just something that eventually might catch up on them if another http server/proxy can offer everything that nginx does plus the missing items.


They’re owned by F5, so selling support and services.


it is really, really hard to sell Nginx. Their conversion rate is about 0%. They've tried really hard to do so, but nobody would buy it.


I thought they were making decent revenue now?


because they sell Plus.


I know of companies that buy Plus just for the support and don't even use the Plus features.


Oh, I thought their only product was Plus and I didn't realize that the comment was referring to something else! Which proves the point the comment was making very well!! So from what I understand, they also sell regular nginx support.


What support and services do you actually need, though? This would create a perverse incentive to make the software more complex and unreliable and then sell the resources to fix it for you.

Vs just actually selling the software and having it work flawlessly with minimal work.


Have you ever worked in a large enterprise? They always want support for stuff like this.

Always


The way that Red Hat and Nextcloud do.


RedHat provided an entire platform that you pay support for... including supporting nginx. How can nginx (the company) replicate the same thing?

Pay for the features you want. You will be better off for it. Expecting every company to follow the RedHat model is unsustainable.


If people want that badly enough under a FOSS license, there's nothing really stopping them from implementing it themselves, as an add-on or a fork of the BSD-licenced codebase.

If it's actually gated, not enough people care enough to get together and make and maintain a better one themselves.


I found that most of the time it was worth the time to invest a few hours/days into implementing the missing feature with Lua/OpenResty, if there wasn't already an Open Source extension available on GitHub.


Damn shame people won't pay for features they claim to need.


The thing is, they aren't 'needs', just nice-to-haves that are provided for free with other open-source software. Thus, when trying to choose technology for personal projects, people don't choose your software; they choose a 'competitor.' Thus when it comes time to choose software for a green-field project or startup, the non-free solution isn't chosen because devs aren't familiar with it.

Literally, at work (a startup), we use Traefik (which is horrible imho) instead of nginx (which I've used literally everywhere else) because the devs who originally started this project had never used it on a personal project.

So the issue isn't paying for it, it is about making it useful enough for personal projects that people use it for personal projects that it then gets used on professional projects. Right now, it is barely useful for personal projects unless you know what you're doing.


This is a great point. People use Caddy because for simple cases the configuration is seemingly "easy", and more importantly it includes Let's Encrypt out of the box. I've tried to argue with these users that nginx has many powerful (and necessary) features, to no avail. Traefik is a similar story: It integrates neatly with Consul for service discovery so it's a go-to tool for TLS termination in front of a service mesh.

Do I personally need DNS SRV support? No, I have a templated Nomad config that will re-render and reload the nginx config if the consul upstreams change. Setting this up though is definitely a bigger hurdle than just specifying a Consul service as an upstream.


I mean, if nginx offered a free personal license that worked on 5 servers (which would match up to Ubuntu Pro's free personal license), I'd be all over that. I have 3 servers and I'm very unlikely to dish out $500 a month, per server, to get nginx+, but I could sell that to my employer if I was already familiar with the technology or could convince my coworkers to go give it a try and see what they thought.

It seems they're more interested in high-touch sales (which is unlikely to happen on a dev team), vs. organic growth.


There is of course the issue of consistency: you can't be said to believe in free software (which is an ideology based around the rights of users) if you operate a proprietary software vendor.

A vendor that releases some free software and some proprietary software is a proprietary software vendor. (This describes, for example, both Microsoft and nginx.)


Yes, but people have to be ABLE to pay for your software. I simply cannot afford nginx+ for a hobby, and I'm a "free" asset when I make decisions for a company that CAN pay. Also, a startup may use the 'personal' license pre-revenue, then when it is time to scale up, they have to pay. Their features are coupled to your product, and they are going to pay because they have to (or rewrite a lot of code), a 'trial' simply isn't going to allow this to happen.


Maybe "these users" are not as braindead as you think? nginx still defaults to settings that made sense in 2005. I usually go for caddy now not because I am not familiar with nginx, but because it doesn't require me to drop hundreds of lines of config across a dozen of includes to get a properly configured HTTP server. Automation doesn't make much sense in our use case, and it gets tiring maintaining a separate ansible playbook just for nginx setup.

Caddy has sane defaults for many settings and I only need to add a couple of response headers, drop the domain name, and where to proxy the requests to. It takes care of maintaining things like the list of TLSv1.2 ciphersuites.

Adding more domains takes two lines of config thanks to parameterized includes — another important thing nginx misses that has to be implemented manually via scripts using something like envsubst (or again, full-on automation which is problematic if many of your servers are just a little bit different from each other — for good reasons).

Apache httpd has it by the way: https://httpd.apache.org/docs/current/mod/mod_macro.html



Or maybe, just maybe, devs are sick of software that pretends it’s still 1995. Nginx is awful to work with, and has an awful community.

The cherry on top is the OpenResty “community”. You know, the one that calls you an idiot for not pretending Lua is orgasmically good; the one that declares you “just” implement thousands of lines of code because there’s no possible way a useful library could exist; the one with the ecosystem full of awful, worse-than-js-leftpad garbage that hasn’t been updated in three years.

Give me a break. F nginx and openresty.


Not all of us are enterprise customers with enterprise budgets. There is no middle option. It starts at $3,675 annually per instance. That’s more than the price of most of my servers.


It's per server

We do pay them, for a bunch of boxes. But I can't afford to run their plus version in front of every microservice. Or even in front of our entry point server. That's 5 times as many machines, plus staged versions of the app, plus other services running on the same boxes.

If you're running anything that is not multithreaded, you're running a reverse proxy in front of it. And even then you can still improve server availability by running a reverse proxy.


I love paying for software features that I need, but anything over $100 a month is too much for my personal budget for side projects. For example, I really want to run HA Traefik with Let's Encrypt shared across three instances in my datacenter rack, but Traefik Enterprise costs many times more than what I can afford to pay, so I make do with one instance because the service discovery features for Nomad are fantastic. Same for Hashicorp Vault. I'd pay $100/mo for an Enterprise Vault for the HSM integration.

I really wish companies would come up with SMB pricing to help us side project hackers out so we could grow into paying for the bigger plans. Also, I understand why they don't. SMB support is a huge PITA.


Or maybe we just like open source...


Weird how Nginx is like the only webserver in the history of mankind that offers caching capabilities but doesn't handle DELETE or PURGE verbs
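For what it's worth, the usual open source workaround is the third-party ngx_cache_purge module; a rough sketch, assuming that module is compiled in (zone names, paths and addresses are placeholders):

    proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;

    server {
        location / {
            proxy_pass       http://127.0.0.1:8080;
            proxy_cache      appcache;
            proxy_cache_key  $uri$is_args$args;
        }

        # purge endpoint provided by the third-party ngx_cache_purge module
        location ~ /purge(/.*) {
            allow 127.0.0.1;
            deny  all;
            proxy_cache_purge appcache $1$is_args$args;
        }
    }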


Just use traefik and move on with your life. Apache and Sendmail and xinetd send their regards too.


How dare they try to make money and have a successful business.



Check out Envoy. The config has a bit of a learning curve but it's quite powerful


Don’t shame an amazing open source project for trying to be financially sustainable.


> Don’t shame an amazing open source project for trying to be financially sustainable.

While one can talk about corporate contributions and basically ownership of some open source projects (the F5/Nginx case included), I also think that the funding is a big challenge for many projects out there.

Now, this is from 2019, but I found the article "Software below the poverty line" to be a useful if a bit grim look at the way things are: https://staltz.com/software-below-the-poverty-line.html


I've been running Cloudflare's implementation of HTTP3/QUIC called Quiche[1] on my server's NGINX for over a year. It powers several websites and has served hundreds of millions of responses. It was a little weird to set up, but I've not encountered any issues with it so far.

It will be interesting to see how their native implementation compares.

[1] https://github.com/cloudflare/quiche/tree/master/nginx


NGINX now has its own QUIC implementation? I have to look again.

Implementing QUIC seems like no fun and there are almost no implementations. Almost everybody claiming HTTP/3 support uses Quiche under the hood for QUIC (besides some outliers and AWS, who are one of the very small group of orgs with their own QUIC lib). I was under the impression NGINX kept building on the current foundations with Quiche.


I think you’re understating how many quic implementations exist. Google, Apple, F5, Facebook, Fastly, Cloudflare, Microsoft, AWS all have separate implementations servicing significant production traffic, most of them open source. That doesn’t even count the smaller, language-specific implementations. Searching for quic interop tests is a good way to discover the various implementations.


ASP.NET Core can use QUIC. It uses the MsQuic library (libmsquic).


Oh, right! There is also MS. I forgot to mention their lib as I'm not working with any MS technologies. Apologies.


IIS is still waiting for it, you have to run some esoteric commands to try it out and it doesn't reliably function.


Well, it's not ready for prime time:

https://interop.seemann.io/

A more or less full list of implementations is here:

https://github.com/quicwg/base-drafts/wiki/Implementations

But I personally would have qualms with all the C implementations. Crypto and very complex protocols are nothing one should implement in C, imho. This cries for disaster. And what's left, especially in a usable state, is quite underwhelming currently.

Higher level libs / frameworks like Java's Netty and Rust's Hyper chose Quiche for QUIC.

The only other ready-to-use libs I've found were aioquic for Python and quic-go. Those are also the only ones with a working WebTransport feature currently.

Looking at the above-linked interop tables, MS will need some time to catch up. QUIC is indeed complex.


We really, REALLY need a "standard" QUIC API, this thing is getting out of hand IMHO.


This was called for. QUIC is UDP based, and shifts the burden of implementing stuff like congestion control into userspace. There are already plenty different congestion control algorithms in the (linux-)kernel for TCP so you can expect a myriad of different implementations (and configurations) in QUIC - but this time on a per-program level.


Ah yes, the waylandification of a protocol.


I do understand that, but I'd still like a "common" basic API for those people that just want to transmit data efficiently from point A to point B. If QUIC is harder than TCP, people will just stick to that instead of switching to QUIC. I don't say we need a SOCKET_QUIC, but it would be nice to have at least a way to open a connection that's similar on every library.


I wish Nginx would improve their developer experience. Their deployment patterns and configuration are horrid compared to Caddy which is amazing to use.


Caddy has a lot of great features, but (at least as of last year) Nginx has the edge on documentation. Caddy’s docs were almost useless. Hopefully they’ve improved since I last used it.


Caddy cannot be found in the default repositories of Debian or RHEL. This raises the question of why one would use such a server. Personally, I am hesitant to download a random pre-built executable from Github, even if it is open source. I would much rather use the apt or dnf version, as anything else seems like just another toy server.


Debian's requirements for packaging Go software are unreasonable. They expect every single dependency to be individually packaged. The total dependency chain of Caddy ends up being massive. We (the Caddy maintainers) don't have the time necessary to allocate to a single distribution, to package and maintain every single dependency individually, when all we want to do is ship a single static binary (plus some support files).

Instead, we ship with our own debian repo, hosting graciously provided by CloudSmith https://caddyserver.com/docs/install#debian-ubuntu-raspbian. This is packaged via CD with GitHub Actions, and you can verify the authenticity of the build since it's signed by Matt Holt's GPG key.

For RHEL, it's in COPR, and that's the best you'll ever get for similar reasons https://copr.fedorainfracloud.org/coprs/g/caddy/caddy/


Adding to Francis's input, the release artifacts (not the .deb packages, which are signed with Matt's key) published on GitHub are authenticated with Sigstore tooling[0]. You can verify that the artifacts and the .deb packages were not tampered with, down to the byte! The builds are reproducible and verifiable. FUD doesn't have any room to loiter.

You can also build it from source using the `buildable` source archive artifact that includes all the deps so it can be built in air-gapped machine. Like its sibling artifacts, the source archive is signed, the signature is published, the signing certificate is available, and the checksum is published and also signed. What's so concerning?

[Disclaimer: Affiliated with Caddy]

[0] https://www.sigstore.dev/how-it-works


What's the reasoning behind that packaging requirement on Debian? Thanks for working on caddy by the way! I find it very neat.


Debian only ships free software (in main, but that's a detail).

This is actually enforced and there are processes in place to ensure that it stays that way.

This means that all new software that Debian packages is audited by a group of volunteers, the ftp-masters team, they check copyright, license and stuff like that.

If all binaries in Debian vendored all of their dependencies, this would cause a lot of extra and duplicated work for the ftp-masters, a team that already has a lot to do.

Same with security, if a popular go library needs to be patched to fix a security problem, then it's easier to do that in one place instead of patching it in N different binary packages.


Honestly, I don't understand it fully. I just know the barrier-to-entry is too high for us to spend time on it. We don't have contact with any debian packaging maintainers that would be willing to work with us. But https://go-team.pages.debian.net/packaging.html is one of my main resources for my understanding of their requirements.

And it goes without saying that Debian in general tends to release much slower than we'd be comfortable with. We don't want users running outdated and potentially insecure versions of Caddy. Best if users keep up to date by using a first-party installation method where we have control over the distribution pipeline.


All software in Debian needs to be Free software - the user must be able to modify and run it (ie recompile after modifying). And for software packaged for Debian that means being able to work with "apt-get source" and "apt-get build-deps". This of course includes dependencies.

That creates a bit of a split between Debian packages and language specific packages like rust crates, golang, python eggs or ruby gems.

There's some friction there, but the reasoning makes sense (but it is ok to disagree of course).


Isn't caddy fully free and open source software?


Certainly AFAIK. I didn't mean to imply otherwise. But FOSS as distributed in binary packages by Debian needs to remain possible to inspect, modify and build via the Debian (source) mirrors - hence all dependencies need to be packaged too (as opposed to living their separate existence somewhere "go get" may be able to retrieve them from - or not, ten years from now - and your respirator depends on a certain version of caddy for its status display...).


Ahh, that makes sense!! I get why Debian maintainers would want that, but it does seem quite hard to manage as a developer. I went down the rabbit hole of how different Linux distros manage their repository packaging after your original comment, so thanks for that :).


> Debian's requirements for packaging of Go software is unreasonable. They expect every single dependency to be individually packaged.

Based on the sibling comment that points out a volunteer has packaged caddy for Debian 12 - that work has been done?


>Caddy cannot be found in the default repositories of Debian or RHEL.

Debian 12 (bookworm) will have it: https://packages.debian.org/bookworm/caddy


FWIW, that was created by someone not affiliated with the Caddy project, and looks to no longer be maintained (latest is v2.6.4, but it has v2.6.2). So as a maintainer of Caddy, I cannot recommend using that repo.


This is the official Debian repository. The package versions are frozen in each major Debian release. However, they may backport security and bug fixes.

In practice, in the case of less popular packages, they do this on demand, when someone requests it in the bug tracker.


Well, users should know that if they report issues while using releases from that source, we can't reasonably help them, and that they should use an official release to get bug and security fixes promptly.

I want to emphasize that we have no contact at all with the people maintaining that Debian package, they've never reached out to discuss anything. We're absolutely open to that (and they know where to find us, not hard to contact us either on GitHub, Twitter, our forums, here, etc).


It's exactly the same way tens of thousands of other packages have been shipped for decades, including many other web servers like nginx, httpd, lighttpd. No need to paint so much drama over this.

They will contact you if the need arises. It's the same usual process that has been used since the 90s to great success.


Users will reach out to us first, not to debian, because we're easier to reach for help (via social or our forums). If they tell us they're using an outdated version which doesn't have the fix for what they need, I have no other choice but to tell them to stop using the debian-maintained package, and use our officially maintained package.


> Users will reach out to us first, not to debian, because we're easier to reach for help

Maybe. That is indeed a risk with third party distribution.

But do note that Debian has its own support channels, and infrastructure (like the "reporting" tool: https://packages.debian.org/stable/utils/reportbug ).


Oh please, you do have plenty of other choices.

It's ok to not want to support older versions or downstream packages (even if imo there is value in doing so) but don't be a drama queen and claim you can't.


Choices such as? How else would we get the user to run an updated release with the fixes they require?


Has anyone actually done any research on how good the backporting of security fixes is in frozen distros?

Maybe it's pretty good for very popular packages, but how about the more niche ones (and when it comes to Debian I'm not sure how popular Caddy is in their view)?


Anecdotally, my experience has been okay... but not great -- you can end up with something Frankenstein would create

The versions often feel arbitrary and don't line up. For example... I've been watching this for years:

https://bugs.launchpad.net/ubuntu/+source/firewalld/+bug/183...

This is more on the edge case side of things, too. Not really security patch related -- but a consequence of picking/choosing component levels

With this the firewall can randomly just stop being effective

When things aren't exactly upstream, the knives you're juggling get a little bigger and more unbalanced.



This comment is so out of touch with how Linux distributions work. This is the package most Debian users probably should be using, unless they absolutely require one of the newer versions.


IMO most users do require the newer versions because we made critical changes to how key things work and perform. I cannot in good faith recommend running anything but the latest release.


> we made critical changes

That's exactly why people (including me) tend to like LTS - no critical changes till the next release. Upgrades for security with minimal surprises. I go further and often use unattended-upgrades on my Ubuntu fleet. I don't want version bumps until I explicitly ask for them, as much as possible.


A lot of users require stability and this is how stable software distribution works. Only security fixes get backported, but no functional changes. It is unfortunate that Caddy hasn't adopted a segregated LTS and non-LTS approach, but that's not Debian's fault.


> we made critical changes

In two patch versions? With minor version unchanged?


While it is convenient to have software prebuilt in a trusted repo, these repos are more about providing toolchains. If something isn't in the repo (or the repo version, as is often the case, is out of date), use the toolchain to build what you want.


Caddy provides their own yum repo and I'm pretty sure it's in EPEL too.


Just build it from source?


And then watch it like a hawk for vulnerabilities and rebuild as needed. No thanks.


The docs were brilliant for v1, it wouldn't surprise me if they were the spec for a great user experience and the code came second.

Despite v2 supporting a very similar config file, the documentation doesn't emphasise that and tries to steer you towards its API, confusing JSON config syntax etc.

It's still a very good web server for very few lines of config, but I don't relish trying to learn something new from its docs like I used to.


> and tries to steer you towards its API, confusing JSON config syntax

I disagree. The docs don't do that. What you're probably talking about is the Getting Started guide https://caddyserver.com/docs/getting-started, which is a tour of how Caddy works, so it first shows you the "bare-metal" look at how it works, then it introduces the Caddyfile which allows you to simplify your user experience.

There's even a comparison table between the two https://caddyserver.com/docs/getting-started#json-vs-caddyfi... which explains when you'd want to use JSON (i.e. if you want programmable, API-based usage) or Caddyfile (i.e. for quick-and-simple hand-written config, 95% of users choose this).

I recommend starting from https://caddyserver.com/docs/caddyfile-tutorial or https://caddyserver.com/docs/caddyfile/concepts to get an idea of how the Caddyfile works.


That might be true now, but I spent a lot of time trying to use caddy v2 and not being able to, since the only docs were for json and some things were available in the caddyfile and some in json.

If it is fixed now then that is great, but it took at least a year (I'm guessing more) after the launch of v2.


Of course it wasn't going to be perfect at the initial release, nothing ever is. Caddy v2 was a complete rewrite from v1, so there was a lot of TODOs to polish it up.

Definitely take another look now, there's been a ton of progress since then, 3 years ago. The initial v2 release was in May 2020, soon after the pandemic hit.


> Of course it wasn't going to be perfect at the initial release, nothing ever is

What I'm trying to say is that on launch v2 was not a good replacement for v1, especially in the docs area. I've seen quite a few major version bumps in OSS, and it feels like docs is an area that is usually neglected, and for quite a while (at least a year, I'd say more) the v2 docs were not useful for someone who had not participated in the caddy v2 community discussions.

I'm just trying to describe what led me to go from an avid caddy proponent back to an nginx user.

I'll take another look next time I have a project that needs a http/s server!


> Now you know that the Caddyfile is just converted to JSON for you.

> The Caddyfile seems easier than JSON, but should you always use it? There are pros and cons to each approach. The answer depends on your requirements and use case.

Followed by a table comparing the json and caddyfile approaches. What's the confusion?


> The docs were brilliant for v1, it wouldn't surprise me if they were the spec for a great user experience and the code came second.

I agree with this, v1 was an excellent user experience, I actually ran it on my servers for way too long. There was also the Wedge fork which might have helped with the EOL but sadly it didn't go anywhere: https://github.com/WedgeServer/wedge

> Despite v2 supporting a very similar config file, the documentation doesn't emphasise that and tries to steer you towards its API, confusing JSON config syntax etc.

Others responded to this a bit more, but while I agree that different config types are a confusing experience, at the same time I appreciate that they support something like that in the first place. I might not use it often, but it's nice that you can.


It's almost like the docs were written for someone upgrading from V1 (or familiar with V1) instead of a newcomer who knows nothing (like the V1 docs were written for).


That's not true. The docs are written with the expectation that the user understands how the web works. We can't reasonably teach that in our docs. Instead, users should read MDN for that stuff. If you were coming from v1, the only page that makes that assumption is the upgrade guide https://caddyserver.com/docs/v2-upgrade. Everything else is either a getting started guide, a tutorial, or reference docs for Caddyfile and JSON config.


I didn't say it was true, I said it just seems like it. As an example: https://caddyserver.com/docs/caddyfile/directives/php_fastcg... -- it shows the syntax, but nowhere on the page does it tell you WHERE to put it in the config file. Is it top-level? Do I nest it in something else? Keep in mind, most people are starting with a mostly blank file, and zero context about how Caddy works (whether or not they understand how the web works). This page won't answer the basic questions of where it is allowed, and neither will the getting-started docs, which tells you to make a json file instead of a Caddyfile but the docs for the thing I looked up doesn't look like json (maybe I know this, maybe I don't). It's all very confusing for someone looking in the docs for a solution and instead has to learn how everything works, whether they want to or not.


That's what this page is for: https://caddyserver.com/docs/caddyfile/concepts#structure. You shouldn't be reading directive docs before the concepts page. The directives page https://caddyserver.com/docs/caddyfile/directives at the top also tells you where these go, which is where you must have come from to find that directive's page in the docs. The information is all there, you just need to read it. Read things in order, don't skip steps; don't skip steps in the getting started guide either, that defeats the purpose of the tour it gives you.


Google is not going to take me to that page first when I search for how to do something in Caddy. It's going to take me to the directive, most likely.

To say it again:

> It's all very confusing for someone looking in the docs for a solution and instead has to learn how everything works, whether they want to or not.


How can we fix this without repeating the Concepts page on every page of our docs?

Maybe we should just un-index all docs pages except the intro pages.


I think there are several ways to handle that. Assuming you are using templates for the docs:

1. A simple link like this: https://docs.k3s.io/installation/configuration#:~:text=For%2....

2. An aside at the top that informs the person where to get more information with a link.

3. An example showing the structure of the file.

4. A link to examples/tests in the repo showing how it can be used.

I would think that a simple aside in your template would work wonders. Maybe saying something like:

> See our page on [directives](link) to learn how to best use this in your configuration.


Couldn't agree more, I absolutely LOVE Caddy, but the docs were truly awful the last time I had to look, all forums, etc also referenced v1 a lot which was really frustrating.


I think we've improved them a lot since then. A new website is in the works which should improve them even more.


Personally I love the nginx configuration; I find it intuitive and simple to work with. I for example use it to proxy traffic to a linux network namespace or unix sockets. Nginx also has great performance, so I use it to offload SSL/TLS connections. It also runs on few resources and I've used it on tiny VPSes with only 128MB of memory.


I like the general syntax but some things are just annoying or at least surprising like add_header replacing all headers added in parent scopes which would otherwise be inherited.


Speaking of documentation, there’s this weird edge case where =404 in try_files causes $args to no longer work(!) and I ended up wasting an entire day on it.

It’s too bad, I only moved away from caddy because of weird disconnection issues when reverse proxying certain apps.


I'm not sure I understand. Please write a bug report if you think you found a problem https://github.com/caddyserver/caddy/issues


I am not at all convinced that the measly 1-4% performance we'll manage to eke out of this is worth the effort and complexity.


Latency at higher percentiles (crappier internet connections) improves pretty meaningfully in most of the articles I've seen. Here's a recent one from Dropbox:

https://dropbox.tech/frontend/investigating-the-impact-of-ht...

(Discussed at https://news.ycombinator.com/item?id=36027702.)


Quote from article:

  For the majority of our global users, HTTP3 reduced network latencies by 5-15ms (or 5%). While this is an improvement, these wins would appear negligible to the average user. At p90, however, HTTP3 demonstrated massive improvements, with a latency reduction of 48ms (or 13%)—and at p95, a reduction of 146ms (21%). This could be explained by the fact that HTTP3 is better at handling packet drops in parallel connections by eliminating head-of-line blocking; because packet drops are more likely to occur in networks with suboptimal connection quality, the benefits of HTTP3 are more visible at the higher percentiles.


Given that the majority of people use the web over often quite crappy mobile links, HTTP/3 is actually a significant win from the user-experience perspective. It can be all the difference between "site is completely unusable" and "site is slow, but you get through".

Of course it's true that QUIC is a complexity monster. OTOH, HTTP/3 itself is actually quite simple once you have the QUIC layer implemented. A simple HTTP/3 server is no more than this:

https://github.com/aiortc/aioquic/blob/main/examples/http3_s...
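And on the nginx 1.25 side, the experimental HTTP/3 listener boils down to roughly this (a sketch based on the QUIC docs; certificate paths and hostname are placeholders, and details may still change while the feature is experimental):

    server {
        # QUIC/HTTP-3 and classic TCP+TLS listeners side by side
        listen 443 quic reuseport;
        listen 443 ssl;

        server_name example.com;
        ssl_certificate     /etc/nginx/certs/example.pem;
        ssl_certificate_key /etc/nginx/certs/example.key;
        ssl_protocols TLSv1.3;      # QUIC requires TLS 1.3

        location / {
            # tell clients arriving over TCP that HTTP/3 is available
            add_header Alt-Svc 'h3=":443"; ma=86400';
            root /var/www/html;
        }
    }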


The average amount of time users are willing to wait for a web page to load will remain the same, thus any latency reduction will simply be eaten up by more crap being served, because greed. The constraints aren't technical, they're human.


> The average amount of time users are willing to wait for a web page to load will remain the same

People have definitely gotten more impatient over the years.


Actually, you're right. TikTok, Instagram et al. demonstrate that people have gotten more impatient... to get more crap!


QUIC is also more of a response to the havoc wreaked by network middleware, ageing kernel network stacks, and arbitrary censorship. In another sense, QUIC is an attempt to revive the end-to-end principle.


Interesting, how does QUIC help with 'arbitrary censorship'?


It makes perfect-forward-secrecy TLS cipher modes mandatory, which makes it impossible for man-in-the-middle hardware to intercept and read users' connections while still pretending to be secure.

There was quite a bit of pushback on this in the IETF from financial institutions that think they have mandatory obligations to spy on their employees.

Here is a relevant HN discussion thread from 2016 about TLS 1.3, most of which applies to HTTP/3: https://news.ycombinator.com/item?id=12641880


> There was quite a bit of pushback on this in the IETF from financial institutions that think they have mandatory obligations to spy on their employees.

To be fair, they do have mandatory requirements to prevent their employees from doing some things online in some cases. For example - some of the rules around coordination on a trading floor: https://www.sec.gov/rules/sro/nyse/2017/34-80374-ex5.pdf

Or - in many cases they are legally required to retain a copy of communications sent, and there are a large number of sites that offer diverse services banks want that also happen to have "chat/email" hidden as a feature. That's legally communication, and they often can't collect and retain it.

Long story short - they don't really care so much, because many of them are already doing this collection now in other ways... my first job out of college 13 years ago was helping large banks transition this monitoring and policy enforcement to browser extensions (Guess who was grumbling about the MV3 changes in chrome, for very similar reasons).

Now they're moving to directly adding the monitoring in the OS/Kernel


> Now they're moving to directly adding the monitoring in the OS/Kernel

Does this mean financial software has rootkits built in? Good to know!

So this means every banking computer is fundamentally compromised at the OS level. Let's see how long it takes until this backfires. Could be a nice global firework when it goes off.

Who exactly builds those rootkits? How well are they protected against supply chain attacks?


Good to know, thanks for the perspective. I only know that there was huge pushback against these somewhat niche (but still important) requirements making TLS 1.3 less secure for everybody. I'm glad somebody held firm.


They are of course free to use http in a closed environment, without any encryption. Or use any internal proprietary protocols. There is absolutely no reason to mandate that the world follow their 'requirements'.


Access to some external sites might be necessary in some cases. They still have to monitor such connections.


This also means you can tunnel http3 traffic through cloudflare without them decrypting it?


Well, if your intention is to use Cloudflare's network for your H3 tunnels, then expect an API for it soon: https://blog.cloudflare.com/building-privacy-into-internet-s...


See: https://ooni.org/post/2022-quick-look-quic-censorship/

I assume Google and co. will fix this if it ever starts to seriously benefit platforms like KiwiFarms, which in the last year was being blocked by CenturyLink, a major US ISP. I also predict these QUICfixes will be met with broad enthusiasm by HNers.


And a good improvement on the head-of-line blocking that HTTP/2 introduced?


That performance impact is worth millions of dollars in improved conversions.


Bear in mind QUIC was primarily designed to improve advertisement penetration by making it harder for good actors to interdict and remove bad domains. (Something like dns-over-http/3 is, allegedly, referred to internally at Google as the anti-Pi-hole)

plus, in real-world use cases it's probably a perf loss running TLS like this fwiw.


> Bear in mind QUIC was primarily designed to improve advertisement penetration by making it harder for good actors to interdict and remove bad domains.

Are there any proof points for this claim besides this assertion?

QUIC is more like a modern TCP. What you do with such a protocol is unrelated to the protocol as such. You can open secure connections and stream data with it. That's all. Everything else is on the application side.

> Something like dns-over-http/3 is, allegedly, referred to internally at Google as the anti-Pi-hole

This claim sounds like anti QUIC FUD.

Nothing can stop you from using a Pi-hole like device as your primary DNS resolver!

(OK, I admit Google could try to hard-code their DNS servers in Chrome. But I'm very unsure they would make it through the following shit storm in one piece.)


Not OP, but Google do hard-code their DNS servers in other products (e.g. Chromecast), so they'll bypass your Pi-hole. It is possible to intercept DNS traffic to 8.8.8.8 and redirect it to your own router, however. With DNS-over-HTTPS that's impossible short of installing custom SSL root certs on the device, which is close to impossible. But that's got nothing to do with QUIC or HTTP/3; it's effectively just DNS over an encrypted, authenticated channel.


It might be a good idea to not let software/hardware you don't trust onto your own networks.


Some (most?) browsers support a non-transparent forward proxy. But you really have to trust it, because you're giving it man-in-the-middle control over all of your browser sessions.

It dates back to a more draconian era of firewall management, but has also worked its way into DHCP (https://en.wikipedia.org/wiki/Web_Proxy_Auto-Discovery_Proto...).


But doesn't snooping/modifying that traffic still require breaking TLS, which is only possible if you can install a root certificate on your device?


> Google could try to hard-code their DNS servers in Chrome

More relevantly, there's no reason they couldn't do the same before HTTP/3. Even with DNS traffic hijacking, they could just as well do DNS-over-TLS. Infiltrating advertising-related DNS is completely orthogonal to HTTP/3; agreed that the gp comment is FUD.


Google is an advertising company. Why on earth would anything they do NOT be in order to improve advertisement delivery? I don't understand the water-carrying in this thread, but filter bypass has been the primary criticism of this technology since it was invented.


> Bear in mind QUIC was primarily designed to improve advertisement penetration by making it harder for good actors to interdict and remove bad domains.

What? What is your source on this? How does the protocol stop you from using e.g. uBlock to filter the domains at the application level?


It doesn't. HN users tend to believe that everything is some crazy conspiracy targeted at them and their pi hole setup.


I just wish Nginx had the good will to include decent metrics. The built-in stubs are far from enough.

It's the main reason why I'm thinking about migrating everything to Caddy.


That's the issue with doing one FOSS version + one paid version: some things will be gated behind the paid version and probably never end up in the FOSS one, as it drives people to purchase the paid version.

In this particular case, the ngx_http_api_module offers way more monitoring options but is gated behind nginx Plus.


I think that if there's one feature they should put behind Plus, that's monitoring.

Only "serious" companies need that (as a hobby project maintainer you probably don't need it)... and if you really want to make sure you don't give them a cent, you can build your own monitoring on top of nginx easily enough.


Interesting! I’ve been using Caddy for a while and actually recently went back to Nginx, just because I’m now self-hosting PeerTube and there was a ready-made Nginx config to proxy the PeerTube server.

But if Caddy can serve metrics which I can then collect with for example Grafana, that’s very interesting!


Yep, Caddy has metrics: https://caddyserver.com/docs/metrics

But caveat: we don't have maintainers who understand metrics currently, so it's nowhere near as good as we'd hope it to be. Help wanted!


Caddy's metrics have been disabled for like 9 months because they cause a massive performance penalty

https://github.com/caddyserver/caddy/issues/4644


HTTP metrics (and only HTTP metrics, not all metrics) were changed to being opt-in, support wasn't removed. See my docs link above. But yes, there are performance considerations we're not satisfied with, and we need help to get them resolved.


They're not disabled, just off-by-default for now.


I'm in the same boat. I started to use Nginx for side projects a while ago. When things became a little more serious and I needed decent metrics, my only option was to upgrade to Nginx Plus. So now I'm learning Caddy.


nginx has been a fantastic project ever since its launch with little real competition out there.

That said, I've been running it since the early days, on super heavy production loads and never felt the need to have some more "decent" metrics out of it. I assume you're referring to real time metrics here.

Most if not everything can be gathered from the logs, which nginx is very flexible with.
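For example, adding the timing variables to a custom log_format gives you per-request latency and upstream timing that a log pipeline can turn into percentiles (a sketch; the format name and log path are arbitrary):

    http {
        log_format timed '$remote_addr "$request" $status $body_bytes_sent '
                         '$request_time $upstream_response_time $upstream_addr';

        access_log /var/log/nginx/access.log timed;
    }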

I fail to understand why some missing metrics in a great product would make you want to migrate everything to Caddy.


Because I use it as a gateway for fastcgi and as a TLS initiator, mostly. Nginx doesn't allow you to have distinct metrics for the two use cases, which is far from ideal.

My tests in controlled environments show that Caddy is absolutely fine for my workload and has perks that make my life easier. Some have been mentioned in this thread already.

Metrics are a must-have in our case, as we have many things to manage and few people to maintain them; alerts are the only way to grow. For alerts you need metrics.


Regarding metrics from logs, you might agree that that's overkill nowadays.


I don't disagree, but there are always multiple solutions to problems we face, and the solution is not always simply to move on to the next product.

No product would ever cater to each and every need, and that's fine.

You have choice: if something doesn't satisfy you, you can sometimes fix the problem, improve the product or write your own module (when it's open source), or switch to another solution. Look at the opentelemetry comment below as another possibility to get what you want done.

There are so many choices today, and you enjoy the freedom of choice. If something else works better for you, use that other something.

There's no need to complain.


Not sure if this would help, but saw it some months ago - https://github.com/open-telemetry/opentelemetry-cpp-contrib/... - I've not used nginx myself, so not sure how viable/useful this is


How do Nginx and Caddy compare with Traefik in this regard?


Hear, hear! There are 2 or 3 other goodies they lock behind their paywall which I think it would be prudent for them to release so they stop ceding ground/mindshare.


Anyone else used to pronounce it “en-ginks”? It wasn’t until I started working as a web developer that I learned the truth. And it took me a minute to piece together that “engine X” was the same thing.


I say Engine X. Enjinks is cute. As long as you don't say "you-buntu" we can be friends.



I still do. It's hard to force myself to read it as "engine x"; "n-jinks" just looks like the natural way.


it will always be "en jinks" to me


always cute to hear a recruiter pronounce it


Since OpenSSL will "never" support quic, what is this using? BoringSSL?


Most QUIC implementations I've seen so far indeed use BoringSSL.

Not sure about the status quo in NGINX. They had an HTTP/3 implementation on Quiche (a Rust lib implementing QUIC, and HTTP/3 as a side project, but AFAIK NGINX never used that part, only the QUIC protocol implementation). But after reading a post here I'm not sure they still use Quiche (and with it BoringSSL). Maybe they now have their own QUIC implementation (with support for other crypto libs). But given QUIC is complex and therefore hard to implement, I would actually expect they still base their HTTP/3 efforts on the by now quite popular Quiche library. But maybe I'm wrong in this regard. Have to look into that.


> They had a HTTP/3 implementation on Quiche (a Rust lib implementing QUIC, and HTTP/3 as a side project

AFAIK that was a Cloudflare project, and since then they moved away from nginx due to its stagnation.


Or LibreSSL.


I guess now is the ideal time for me to experiment with LXD or Docker images based on Alpine Linux or Debian Slim for QUIC support in Nginx.


Debian Slim all the way. musl is only good when you are building for and testing against musl. Third party software on musl has burnt me enough times to know that sticking to glibc is the way to go unless you like that sort of pain.

Footnote: I actually love musl, just for my own software that explicitly targets it and is validated against it.


I would strongly avoid Alpine.

Most software isn't written with musl in mind! And that shows.

You can run into all kinds of extremely hard-to-debug issues and especially massive performance problems. Stuff may slow down to a fraction of the performance compared to regular Linux distros.

If you want maximally small containers, go with Distroless.


Any documentation on what parts of HTTP/3 are and aren’t supported by this implementation?



Is there anything like nginx based on hyper?


While I don't think it is hyper-based, there is this, built in Rust: https://www.sozu.io/

I haven't used it so I can't vouch for the quality.


linkerd2 proxy uses hyper, but I don't know where exactly.


OpenLiteSpeed has had HTTP/3 for a while now, and supports Apache-style configs, RewriteRule etc. My go-to web VPS setup at the moment is CyberPanel, which sets everything up nicely for you. I used to hand-roll everything, but life's too short these days.


Any docker images for this? I noticed there is no official one: https://hub.docker.com/_/nginx/


Update: nginx:1.25 up now.


nginx by default adds a version header in the response. I did not expect that nginx.org themselves would be 2 minor versions behind latest.
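(If you'd rather not advertise the exact version, the stock build at least lets you hide it; this trims the version but keeps the Server header itself:)

    http {
        server_tokens off;   # "Server: nginx" instead of "Server: nginx/1.25.0"
    }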


So this is QUIC, right?


Well, HTTP/3 is standardized off the initial QUIC implementation (UDP), yes.

Update: I guess, technically, HTTP/3 is the protocol built on top of the QUIC protocol.


Exactly, it uses QUIC as the transport layer. You can do more with QUIC, there are beta implementations to use it for WebRTC data channels[1] and there have been experiments to use it for WebRTC media transport, for example[2].

[1] https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/...

[2] https://www.w3.org/2011/04/webrtc/wiki/images/6/69/Media_ove...


WebRTC? I'm not sure WebRTC has a bright future.

AFAIK nobody ever implemented WebRTC on top of QUIC. The new kid on the block is called WebTransport.

WebTransport could be described in short as "WebRTC / WebSocket, the good parts". It's way simpler than WebRTC and solves some issues with WebSocket, like the lack of a low-latency unreliable transport (WebTransport can send datagrams without having to use a reliable connection channel for that). A big difference to WebRTC is: all the STUN / TURN complexity is gone. WebTransport in the end gives you more or less "raw" QUIC streams or datagrams. It's almost like a socket interface for web user agents, only with all the features of QUIC.

Given you have already HTTP/3 (which implies you have also QUIC) WebTransport is a pretty simple and slim addition on top:

https://datatracker.ietf.org/doc/html/draft-ietf-webtrans-ht...

(The only thing I'm missing is client authentication. It would be really good to do this in-band, and for example allow sending client certs with the WebTransport CONNECT request. For some reason this was left out on purpose. I just couldn't find out why.)

What I'm not sure about is how good WebTransport would be in a P2P scenario. But given libp2p implemented WebTransport, it would seem that it's good for that also. (So it could replace WebRTC completely.)


> A big difference to WebRTC is: All the STUN / TURN complexity is gone

but STUN / TURN is there for a reason. I always thought of the P2P scenario as the main use case for WebRTC.


I don't know about any significant use case of WebRTC in the P2P space. OTOH, like mentioned, libp2p implemented WebTransport. So WT seems to be adequate for the P2P use case too. (No clue about the details there, though. So maybe they had to jump through hoops to make it happen. But it doesn't look like there would be major show stoppers.)


It looks like STUN/TURN can be integrated with QUIC so you don't need additional ports, but I wouldn't say "the complexity is gone" ;-)

https://w3c.github.io/p2p-webtransport/

> This specification extends the WebRTC [WEBRTC], ORTC [ORTC] and WebTransport [WEBTRANSPORT] APIs to enable peer-to-peer operation using QUIC [RFC9000]. This specification supports the exchange of arbitrary data with remote peers using NAT-traversal technologies such as ICE, STUN, and TURN. As specified in [RFC7983bis], QUIC can be multiplexed on the same port as RTP, RTCP, DTLS, STUN and TURN, allowing the API defined in this specification to be utilized along with the functionality defined in [WEBRTC] and [ORTC] including communication using audio/video media and SCTP data channels.


Oh, that's cool! :-D

It implements the missing parts on top of WebTransport. I was especially sad that WT doesn't do client authentication. You can't send credentials with a CONNECT request. But this thingy adds exactly this. Great!

Also, having some standardized way to open QUIC streams through NAT sounds nice. (Even though I think the proper fix for this issue would just be IPv6.)

Frankly it's very early days. It's not implemented, and not even considered currently.

> It is not a W3C Standard nor is it on the W3C Standards Track.

But as I'm currently implementing a WebTransport lib, maybe I should try to add the features from this draft. It would be fun to have a P2P-ready WT lib. Only the NAT traversal part I would leave out for now, I guess, as I'm not sold on the idea that this complexity is strictly needed. NAT just needs to finally die…


Essentially. It's HTTP semantics on top of QUIC on top of UDP. Google and Microsoft open-washed their implementations designed for mega-corp use cases through the IETF and now it's called HTTP/3. One special feature of HTTP/3 as implemented that's really good for for-profit use cases but terrible for human-person use cases is that it cannot establish a connection to a non-CA TLS endpoint. That is, if you want a random internet visitor to be able to visit your HTTP/3 website, you have to get continued approval from a third-party incorporated CA. This makes it pretty much useless for things like IoT and human uses at home. But it's great for things involving money.


Could you please stop spreading uninformed FUD?

> it cannot establish a connection to a non-CA TLS endpoint

That's plain wrong. Self signed certificates and internal CAs work just fine.

The whole point of HTTP/3 is: You can't establish an unencrypted connection. And that's a very good idea!

I'm running at this very moment an HTTP/3 development server on this machine here. I did not have to ask anybody for permission to do so.

(Actually I'm right now building a WebTransport server. WebTransport has even stricter rules for certificates, but even there it's still possible to connect to an endpoint that uses a self-signed cert that isn't signed by any CA cert.)


> Could you please stop spreading uninformed FUD?

Inform me then. Did you compile boringssl or openssl+quic or whatever TLS lib yourself and enable the proper flags so you could do this? You and I both know that doesn't count. You certainly can't if you're using a binary distributed browser made by Microsoft, Google, Apple, or even Mozilla. If you look at the traffic you're sending it's probably hitting the HTTP/1.1 endpoint first then going to http/3 for further traffic.

Internal CAs work, but that's internal and irrelevant to being able to host a website visitable by random people.

>You can't establish an unencrypted connection. And that's a very good idea!

That's a very good idea for incorporated persons and their websites that involve transfers of money and other private details. The trade-off is that the entire system is more fragile and complex and needs continuous, constant updating and approval from a third-party corporation. These are very bad traits for making a personal website that can last more than a few years. It's bad for the longevity of the web and thus its health.


> Did you compile boringssl or openssl+quic or whatever TLS lib yourself and enable the proper flags so you could do this?

No, I did not.

> You certainly can't if you're using a binary distributed browser made by Microsoft, Google, Apple, or even Mozilla.

That's the part that simply isn't true.

I'm using for testing a Chromium derivative with a stock Blink engine (Vivaldi).

You can use self signed certs just fine. (But using a custom CA set up by `mkcert` is actually the simplest way for a dev setup).

Chrome has a `--ignore-certificate-errors-spki-list=${CERT_HASH}` flag enabling the use of self signed certs for HTTP/3, given the right cert hash.

I admit that this isn't something an average user could do. You need to invoke some `openssl` voodoo to generate the hash appropriate for usage with the mentioned browser flag from a given cert. But that's actually a feature, imho, as it makes talking someone into casually starting their browser with this flag for some arbitrary domain quite difficult, or in a lot of cases even impossible. (And the addition of CA certs by an unauthorized user can be prevented by other means).

For WebTransport (where it's anticipated that the endpoints could quite well be some ephemeral machines, maybe even without DNS records) you can pass just a cert's SHA-256 fingerprint to the `WebTransport` constructor on the user agent. This will make the browser accept the designated (self-signed) cert without any further checks.

> If you look at the traffic you're sending it's probably hitting the HTTP/1.1 endpoint first then going to http/3 for further traffic.

No, I don't even have an HTTP/1 (or /2) endpoint in this setup. I need to pass `--origin-to-force-quic-on=localhost` explicitly, as Google's engine is otherwise too stupid to recognize the HTTP/3 server. (ALPN currently also seems to have issues besides this, judging from some comments online. But I'm not using this mechanism anyway at the moment and just have a pure HTTP/3 endpoint.)

> That's a very good idea for incorporated persons and their websites that involve transfers of money and other private details.

It's a good idea in general.

Have you ever considered that alone knowing who visits which website when is privacy related information?

Meta-data is often even considered more interesting than the actual data. The USA has boldly stated things like: "We kill people based on metadata"…

Encrypting just everything is the only way forward! Otherwise sending or receiving encrypted traffic would already be a data point as such—a data point which could be used against someone.

> The trade-off is that the entire system is more fragile and complex […]

Yes, it's a trade-off.

Also, I think that the CA system is fundamentally broken.

But this is nothing new coming with QUIC!

> […] and needs continous constant updating and approval from a third party corporation. These are very bad traits for making a personal website that can last more than a few years.

I'm not buying this argument any more. Before something like Let's Encrypt existed you would have been right with that argument. But since then this point is moot.

You don't need any "approval", you just need to prove that you own the domain for which you'd like to have a cert. This is a completely automatic and anonymous process. Set up once, it will work as long as something like Let's Encrypt exists. (And Let's Encrypt very likely won't disappear anytime soon!)

> It's bad for the longevity of the web and so it's health.

No, it makes no difference.

An unmaintained website will go away sooner or later anyway. You need at least to pay bills to keep it up. At least…

But besides that, this point is also moot. Nothing of this whole digital stuff will last very long. Ever tried to open an ancient file format? By ancient I don't mean 20 000 years old like some stone carvings, I don't mean 2 000 years like some papyrus scroll, not even 200 years like a somewhat older book; I mean a file as "ancient" as something made by some firm that went out of business 20 years ago…

And talking about "health" in the context of the web, given the current state of the internet, is a joke in its own right. I don't want to offend anybody, but that's just the truth. The web is broken beyond repair. And that's mostly not even for technical reasons. (Even though there would also be more than enough of those. But QUIC is actually one of the technologies that are more or less sane—even if complex—and a step in the right direction. Every middle-box it kills on its way is a huge win for the net!)


I appreciate the level and thorough response.

>Have you ever considered that alone knowing who visits which website when is privacy related information?

I get this a lot. To be clear, I'm not against encryption. I am against only allowing connections to sites which are encrypted using a third-party incorporated entity's tools. HTTP+HTTPS is definitely the way to go, so that people can choose the HTTPS endpoint if they want but still access the site if that fails for technical reasons (lack of maintenance when ACMEv2 came around and ACMEv1 was dropped, etc., etc.) or malicious ones. The problem is that HTTP/3 only allows the one mode.

> But this is nothing new coming with QUIC!

Correct. But HTTP/3 on QUIC does make it much, much more of a problem because only 0.000001% of worldwide users are going to be passing --ignore-certificate-errors-spki-list=${CERT_HASH} to chrome after their browser first prevents the link from working.

>An unmaintained website will go away sooner or later anyway. You need at least to pay bills to keep it up. At least…

I know too many late-90s/early-2000s websites to count that haven't been touched in the last decade+. And I know they would not exist now if they had relied on HTTPS-only or HTTP/3.


> HTTP+HTTPS is definitely the way to go so that people can chose the HTTPS endpoint if they want,

This opens up the way to downgrade attacks.

Imho there should not be any unencrypted traffic on the net. Not even the technical possibility for it, as long as you're using standard software. Call me an old-school crypto-nerd, but I just don't see any alternative. Everything else is going to get exploited. There is just too much initiative from very powerful factions. So crypto needs to be enforced at a very fundamental level. Security by design, privacy by design!

> HTTP/3 on QUIC does make it much, much more of a problem because only 0.000001% of worldwide users are going to be passing `--ignore-certificate-errors-spki-list=${CERT_HASH}` to chrome after their browser first prevents the link from working.

I agree that the requirement for CA signed certs is sub-optimal.

I for my part only care about the (forced) encryption.

And that's actually all QUIC requires. The protocol does not force checking certificate chains. (Otherwise the above-mentioned switch wouldn't be possible while having a compliant implementation.)

The check of the cert chain is an implementation detail of the HTTP/3 stack in browsers AFAIK. (I could be wrong here and HTTP/3 may require "WebPKI"-trusted certs; I didn't read all of the spec until now.)

I would of course love it if the CA-based "WebPKI" got replaced by something decentralized with self-service options.

Having CA certs you didn't install yourself in your browser (after thorough consideration!) is a major security risk, imho. Just look at the list… You're "trusting", in the end, more or less everybody with money and power on this planet. That's not how it should be. But I don't know how an alternative could look. (And I guess nobody really knows. Maybe you?)

But something like Let's Encrypt, which only checks the part that actually matters—namely whether you own the domain for which you'd like a cert, no further questions asked—is imho as close to "decentralized self-service" as it gets at the moment.

> I know too many late 90s/early 2000s websites to count that haven't been touched in the last decade+. And I know they would not exist not if they relied on HTTPS only or HTTP/3.

Why do you think HTTP/3 would have prevented those sites from lasting as long as they did?

Getting and renewing certs is a one-time setup. I'm pretty sure it will just keep working for years to come once up and running.


Yes. A website run by a human person is vulnerable to downgrade attacks in the same way that a human person is vulnerable to rocket artillery. In some contexts, like, say, active war zones or hosting a cryptocurrency market, it matters. But in most cases human people don't actually have to worry at all. Especially since the downgrade "attack" is not really an attack at all. And you're only "vulnerable" to it if you execute javascript. Otherwise there's no intrinsic damage in using HTTP. That only applies to commercial/money-exchanging contexts and things like hospitals.

>Why do you think HTTP/3 would have prevented the sites to last as long as they did?

If the site were HTTP/3-only, then its cert would have expired or its update system would have broken, and browsers would not be able to access the site.


now let's wait for Chromium's proxy support for HTTP/3


I am ashamed to say that my eyes misread HTTP/3 as Web3 for a couple of seconds. For a moment, I was confused as to what Nginx has to do with Web3.


Considering Web3 is mostly used as a marketing scam, it usually has "to do" with everything.


HTTP/3 is a Google/MS open-washing scam. They've successfully pushed a protocol designed entirely for large corporate use cases as a general HTTP version which is supposed to be used by human persons as well as corporate persons (entities). Big surprise, it's terrible for human person use cases.


That makes two of us.


Only .htaccess support is missing from nginx. Good web server.


Related (opinionated) https://www.nginx.com/resources/wiki/start/topics/examples/l... "Stop using .htaccess. It’s horrible for performance. NGINX is designed to be efficient"


.htaccess doesn't have to affect performance; this is more of a historical artifact. With an HTTP server expecting a read-only site root, you could only read every .htaccess once, when you replaced the site root filesystem with a new version. A user could still see a read-write filesystem, only with a "save" button (command, etc.) that created and mounted a new read-only site root.

While it is a performance and security burden, people reject .htaccess too readily. It has enabled users who aren't quite programmers to assemble sites out of web applications and components that live in different subdirectories. It has clear value. (Not that I think nginx should implement it.)


I think the bigger issue is that the use case .htaccess was designed for (multiple users sharing a single physical server) just isn't really a thing anymore AFAIK. If you're just managing rules in your own container somewhere, there's no sense keeping the logic in multiple places.


> With an HTTP server expecting a read-only site root, you could only read every .htaccess once, when you replaced the site root filesystem with a new version

In that case you would still need some way to trigger a reload of the htaccess, right?

If so, is there a usability difference to just including nginx configs on the same locations?


> This happens for EVERY request.

It doesn't need to, though.

These days you only need .htaccess for compatibility with old PHP apps. Stacking another instance of Apache on top feels bloated.


For those old PHP apps, you can probably convert the .htaccess into nginx config.
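For the common front-controller case it's usually a one-liner; assuming the typical rewrite-everything-to-index.php .htaccess, the nginx equivalent is roughly:

    # .htaccess original (typical PHP app):
    #   RewriteEngine On
    #   RewriteCond %{REQUEST_FILENAME} !-f
    #   RewriteCond %{REQUEST_FILENAME} !-d
    #   RewriteRule ^ index.php [QSA,L]
    #
    # nginx equivalent inside the server block:
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }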


2006 called and would like you to come back


.htaccess is self inflicted pain, no need to downvote :)


nginx conf is self-inflicted pain...



