Freenginx: Core Nginx developer announces fork (nginx.org)
1131 points by bkallus 11 months ago | 475 comments



Worth noting that there are only two active "core" devs, Maxim Dounin (the OP) and Roman Arutyunyan. Maxim is the biggest contributor that is still active. Maxim and Roman account for basically 99% of current development.

So this is a pretty impactful fork. It's not like one of 8 core devs or something. This is 50% of the team.

Edit: Just noticed Sergey Kandaurov isn't listed on GitHub "contributors" because he doesn't have a GitHub account (my bad). So it's more like 33% of the team. Previous releases have been tagged by Maxim, but the latest (today's 1.25.4) was tagged by Sergey.


This isn’t just “a core nginx dev” — this is Maxim Dounin! He is nginx. I would consider putting his name in the title. (And if I were F5, I’d have given him anything he asked for to not leave, including concessions on product vision.)

That said, I’m not sure how much leg he has to stand on for using the word nginx itself in the new product’s name and domain…


> not sure how much leg he has to stand on for using the word nginx itself in the new product’s name and domain

Pretty sure they can't really do anything to him in Russia. Russia and the US don't recognize each other's patents, same as China.


What do patents have to do with this?


Many people seem to confuse trademarks and patents...


I know the difference, I just don't care enough. The same thing applies - you would have to register a business and trademark in Russia and pay taxes to have any legs


It's a .org domain, the registry is held in the USA.


Right, they will just go after the domain forcing either a rename or a move to a Russian domain


nginx is simultaneously a registered trademark of F5 and imnsho mouth garbage for English speakers. This is a good opportunity to rename the project.


It's pronounced "engine X".


Exactly—there's never been a better time to actually make the name look like it sounds.


We already have a mess that is Dapper and Dapr. No thank you


So… just add two `e`s?


This.


He was working for free for the last two years, and F5 was quite happy about it.


He *is* nginx ?

https://freenginx.org/hg/nginx

I don't see it. Sure, he contributes. But in the last 3-4 years he definitely does not look like he is nginx based on that log. Or am I looking in the wrong place?


And this is why counting commits doesn't give you an accurate picture of productivity.

(Regardless, if you scroll back past March 2020, the timeline "resets" to this past year, and you see a ton of Dounin commits. Looks like an artifact of how the hg web viewer deals with large, long-lived branches getting merged.)


I think the mercurial log is not doing us any favors here, most of the first few pages is the history of the `quic` http/3 support branch which indeed Maxim is not working on. Scroll past it and he'll be much more prevalent. See for example the log of stable-1.24: https://freenginx.org/hg/nginx/shortlog/420f96a6f7ac


And that's how 100x developers don't get the recognition they deserve.


Philosophically, if a lead developer is doing most of the commits on a project, then they are monopolizing both the code and the decision making process, which is a sure way to kill a project.

If the basketball or soccer team captain were also a ball hog, they'd have trouble keeping the bench full.

When you become lead, you have to let some of the code go, and the best way I know to do it is to only put your fingers into the things that require your contextual knowledge not to fuck up. If you own more than 10% of the code at this point, you need to start gift-wrapping parts of the code to give away to other people. If you own more than 20%, then you're the one fucking up.

Obviously this breaks down on a team size of 2, but then so do concerns about group and team dynamics.


> which is a sure way to kill a project

Nginx is one of the most widely used open source projects in the world. It's hard to read this without laughing, as if it's still to be determined whether Nginx could be considered successful.


>Philosophically, if a lead developer is doing most of the commits on a project, then they are monopolizing both the code and the decision making process, which is a sure way to kill a project.

Or, just follow me here, an open source project really doesn't get that many dedicated contributors beyond the leads, and everyone else may casually drive by.


I think there are problem domains this applies to, such as CRUD applications, and projects where deep understanding of core components makes it difficult to scale teams horizontally, as it effectively requires a hive mind.


If the 'core components' are half of the project, there's no core. It's just important and less important components.

And in all likelihood if you are expecting a core competency in enough domains for the situation you reference to be true, it's because you have a bad case of NIH, you aren't concentrating your efforts in the areas your company is purportedly focused on. That makes it difficult not only to scale up a team, but also to scale it down. The first major revenue hiccup you encounter may be your last.

If you are concentrating on a narrow domain you intend to be experts in, then that will be 15-25% of the code. Meaning to maintain a decent bus number, you only need to be primary on about 10%, if you have half a dozen people or so.


What does "doing" a commit mean?

Crafting the change? Applying the commit?

The former is where we should strive for heterogeneity. The latter is a janitorial duty that should be guarded and centralized.

Do not underestimate the importance of janitorial duties though! That is the way we build culture and community, and that is the only scalable way to build any quality beyond "it compiles".

Reluctance to accept commits and keeping a strong culture is something that is common to all successfully scalable open source projects.


Philosophically and realistically if there is no clear management then everyone will do whatever they want and the project will fail.

And yes, there are lots of companies with bad management, bad decisions, or even surviving despite bad management. But most of the time the management at least gives some direction.

You sound like one of those freeloader types who don't contribute.


What an absurd thing to say. You do realize that a single active main developer + a bunch of drive by contributors is the NORM for most open-source projects, right?


> which is a sure way to kill a project

Nonsense


Good luck with your empire building sans team building.


How about we live without empires?


> then they are monopolizing both the code and the decision making process, which is a sure way to kill a project.

I believe that was my thesis. Who are you agreeing with here?


There's something wrong with the list. It's ostensibly sorted reverse chronologically but scroll further and you'll see it go from 2020-03-03 to "9 months ago" and from there on it's all him.


Judging from the graph view (https://freenginx.org/hg/nginx/graph), it has to do with the QUIC branch landing onto the main branch, suggesting he had little role in the QUIC development but heavy role outside of it.


You should have googled his name; you would have known within seconds. I mean, it's everywhere nginx (or its development) is mentioned.


Is this what the security disagreement is about: https://mailman.nginx.org/pipermail/nginx-announce/2024/NW6M...?


Yep. Maxim did not want CVEs assigned.


It would be worth flagging in this comment that you represent F5. I didn't realize that until I found your other comment below.


Why wouldn't he want CVEs assigned?


I haven't read the content of the patches to understand the impact of the bugs, but from my own experience [0] I can suggest a few reasons:

- CVEs are gold to researchers and organizations like citations are to academics. In this case, the CVEs were filed based on "policy" but it's unclear if they are just adding noise to the DB.

- The severity of the bug is not as severe as greater powers-that-be would like to think (again, they see it as doing due diligence; developers who know the ins and outs might see it as an overreaction).

- Bug is in an experimental feature.

I'm not saying one way is right or not in this case, just pointing out my experience has generally been that CVEs are kind of broken in general...

[0]: https://github.com/caddyserver/caddy/issues/4775


To summarize: the more CVEs a "security researcher" can say he created on his resume, the more impressive he thinks he looks. Therefore, the incentive to file CVEs for any stupid little problem is very high. This creates a lot of noise for developers, who are forced to address what is sometimes nonsense filed as "high" or "critical".


So true...

If you run a web app of any sort, and you don't have "X-Frame-Options: Deny" in your headers, you'll get lots of "researchers" (that are probably bots) e-mailing you that you have a CRITICAL security issue.

"Beg bounties", we call them.


The issue you linked to is an excellent example of why everyone and their dog is becoming a CNA these days. It's the only way to keep CVE spam at bay. The system has been broken by the gamification of CVEs and is in desperate need of reform.


"Denial of service" is never a security bug; it's a huge mistake people have started classifying these things as such to start with. Serious bug? Sure. Loss of security? Not really.


> "Denial of service" is never a security bug

That very much depends on what service is being denied. Nginx is _everywhere_. While not a direct security concern for nginx (instead an availability issue), it could have security or safety implications for wider systems. What if knocking out nginx breaks a service for logging & monitoring security information? Or an ambulance call-out management system? Or a payment processing system for your business at the busiest time of your trading year? There are many other such examples. This sort of thing is why availability can be considered a security matter and therefore why DoS vulnerabilities, particularly those affecting common software, are handled as security issues of significant severity.


Almost every bug can be considered a security bug under the wrong set of circumstances.

With fairly cheap DDoS services you can "just" order, you can knock most servers offline anyway. Internet reachability is rarely safety-critical, and if it is, that's probably a huge design flaw somewhere, because there are tons of reasons outside of your control that can make the internet not work for either the server or clients.

Is all of this inconvenient and (potentially) a serious problem? Sure. But not "zomg criminals have credit card records / can spoof random domains / read private data / etc. etc." type serious.


> Almost every bug can be considered a security bug [...] With fairly cheap ddos services...

A DoS bug and a DDoS attack are very different things. One is a flaw that can bring a service down, the other is a brute force technique for making a service unusable. You can DDoS services without exploiting bugs.


I am aware; my point is that "denying the service" is pretty easy even without the presence of any bugs in the service. Stealing credit cards on the other hand...


We could argue that about almost anything though. There are always secondary effects possible and sometimes even likely. I can only think of the proverb/poem - "For want of a nail".


In those cases you just know that any problem can cause you trouble, so you pay attention to all problems including low severity ones like DoS, performance slowdowns or lack of bells and whistles.


Many security specialists view security as described by the CISSP material (Certified Information Systems Security Professional). Loosely speaking, that means ensuring the confidentiality, integrity, and availability of the system (including data received, data stored, and data sent).

Viewed in this light, a bug that enables a successful Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack is a security bug. A bug that causes a DoS or DDoS, but is not exploitable, would not be a security bug (e.g., some idiot added an infinite loop to the startup code). That's where issue triage comes in: a bug should never be assigned before it's triaged. Sometimes triage results in 'we don't know enough' and someone gets assigned to evaluate the bug to answer specific questions before triage can be finished. After triage it gets assigned - or, even better, a developer with a matching skill set chooses it to work on for the next release/sprint/etc.


Eh, it's widely considered that part of security is availability.

But I agree DoS is kind of a strawman since everything connected to a network is vulnerable to some form of DoS without extensive mitigation.


> "Denial of service" is never a security bug.

What about serving a certificate revocation list, with another system relying on, say, a one-day-old cache? (Sure, that's "fail open" - but still...).

Or proxying LDAP for sync to a central auth/authz system?

Ed: proxy giving access to logging system goes down - alert on failed logins silenced, disabling rate limits for brute force attacks?


Almost any bug in those kinds of systems is a potential security bug. Not having the service available at all is probably among the least critical types of bug that can happen.


AFAIK, mandatory OCSP is turned off by default. Exactly because it fails regularly. Try to turn it on and see how it goes.


For how long does it fail? Because I have not seen any availability issues with OCSP stapling (including must staple in the cert) using Let's Encrypt.
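
For context, the nginx side of stapling is only a few directives; a minimal sketch (certificate paths are placeholders, and must-staple itself is a property of the certificate, not of this config):

  server {
      listen 443 ssl;
      ssl_certificate         /etc/ssl/example.com/fullchain.pem;  # placeholder paths
      ssl_certificate_key     /etc/ssl/example.com/privkey.pem;
      ssl_trusted_certificate /etc/ssl/example.com/chain.pem;

      ssl_stapling        on;   # fetch and staple OCSP responses
      ssl_stapling_verify on;   # verify them against the trusted chain
      resolver            1.1.1.1 valid=300s;  # needed to reach the OCSP responder
  }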


You are clueless


Please provide a counter-argument, a suspiciously fresh trollish account. Shallow dismissals are against the guidelines.


>The most recent "security advisory" was released despite the fact that the particular bug in the experimental HTTP/3 code is expected to be fixed as a normal bug as per the existing security policy, and all the developers, including me, agree on this.

>And, while the particular action isn't exactly very bad, the approach in general is quite problematic.


What does any of that have to do with assigning a CVE?


> Maxim did not want CVEs assigned.

... to this specific bug in an experimental feature.

Originally I read your comment as Maxim doesn't want to use CVEs at all.


MegaZone as in Usenet MegaZone?


No, a MegaZone. Haven't you heard, we come in six packs now. ;-)

Yeah, very, very likely one and the same. Since 1989.


Wow, that's a throwback. I was an ISP person back in the Portmaster era. You're at F5 now, I guess!

Can you say more about the CVE thing? That seems like the opposite of what Maxim Dounin was saying.


Yeah, I've been with F5 since 2010 - gotta love those old PortMasters though, Livingston was good times, until Lucent took over. I was there 95-98.

I don't know what else there is to say really. The QUIC/HTTP/3 vuln was found in NGINX OSS, which is also the basis for the commercial NGINX+ product. We looked at the issue and decided that, by our disclosure policies, we needed to assign a CVE and make a disclosure. And I was firmly in that camp - my personal motto is "Our customers cannot make informed decisions about their networks if we do not inform them." I fight for the users.

Anyway, Maxim did not seem to agree with that position. There wasn't much debate about it - the policy was pretty clear and we said we're issuing a CVE. And this is the result, as near as I can tell.

Honestly, anyone could have gone to a CNA and demanded a CVE and he would not have been able to stop it. That's how it works.


Oof. Presumably Dounin had other gripes about the company that had been building up? This seems like a pretty weird catalyst for a fork. Feels more like this was the last straw among many.

I get that CVEs have been politicized and weaponized by a bunch of people, but it seems weird to object that strenuously to something like this.


Oh my god, the Internet is such a small place. Good to hear you're doing well - we interacted a bit when I was running an ISP in the 90s as well. (Dave Andersen, then at ArosNet -- we ran a lot of PM2.5e and then PM3s).

And appreciate the clarification about the CVE disagreement.


Those were great times. I learned a hell of a lot working at Livingston, because we had to. We were basically a startup selling to ISPs right as the Internet exploded and we grew like crazy. Suddenly we're doing ISDN BRI/PRI, OSPF, BGP, PCM modems, releasing chassis products (PM-4)... Real fun times, always something new happening. I even ended up our corporate webmaster since I'd been playing with web tech for a few years and thought it'd be a good idea if we had a site. Quite a way to jumpstart a career.

And the customers were, by and large, great.


I don't know much about this situation, but from what I've read, you were clearly in the right. It doesn't matter if the feature is in optional/experimental code. If it's there and has a vulnerability, give it a CVE. The customers/users can choose how much they care about it from there.

> Honestly, anyone could have gone to a CNA and demanded a CVE and he would not have been able to stop it. That's how it works.

I recently did exactly that when a vendor refused to obtain a CVE themselves. In my case, I was doing it as part of an effort to educate the vendor on how CVEs worked.


You bring up NGINX+, a commercial product with a CVE reporting policy, but just from reading the docs on it, it doesn't support QUIC or HTTP/3. So I guess I can see why the maintainer would be mad about a commercial policy applying to noncommercial work in the absence of any real threat.


https://www.nginx.com/blog/quic-http3-support-openssl-nginx/

I know there are other mentions - it's been in the commercial product since R30, hence the CVE.


> Honestly, anyone could have gone to a CNA and demanded a CVE and he would not have been able to stop it. That's how it works.

Even if third parties can file CVEs, do you think it hits different when the parent organization decides to do so against the developer's wishes? Why do he and F5 view the bugs differently? It sounds like the fork decision was motivated less by the actual CVEs and more by how the decision was negotiated (or not at all).

(PS. Thanks for participating in the discussion.)


Personally, I think it's more honest if the parent org does not try to contest a CVE being assigned to a legitimate issue. If a CNA gets a report of a vulnerability in code, even if it's an uncommon configuration, they should be assigning a CVE to it and disclosing it. The entire point of the CVE program is to identify, with a precise identifier, the CVE, each vulnerability that was shipped in code that is generally available.

Based on my observation of various NGINX forums and mailing lists, the HTTP/3 feature, while experimental, is seeing adoption by the leading edge of web applications, so I don't think it could be argued that it's not being slowly rolled into production in places.


[flagged]


> Nothing suspicious about that at ALL no sir nothing to see here.

Yes, very suspicious that he didn't want to issue CVEs for

checks notes

Two DoS attacks that only apply to users who explicitly enabled experimental QUIC support (by default it's disabled)


I don't see anything more in that mailing list thread beyond the post you linked to.

Where was the disagreement hashed out, so I can read more?


Internally at F5 (where I work as a Principal Security Engineer in the F5 SIRT and was one of the people responsible for making the call on assigning the CVEs).


Given this fork still boasts a 2-clause BSD license, the corporate nginx can still make the effort to backport patches. It's certainly harder than having a single converged development branch, but how closely they track Maxim's work is ultimately up to them.

If nginx continues to receive more attention from security researchers, I imagine Maxim will have good reasons to backport fixes the other way too, or at least benefit from the same disclosures even if he does prefer to write his own patches as things do diverge.

Though history also shows that hostile forks rarely survive 6 months. They either get merged if they had enough marginal value, or abandoned outright if they didn't. Time will tell.


I'm curious to see where this fork will go. The whole situation is a mess:

- nginx is "open core", with some useful features in the proprietary version.

- angie (a fork by several core devs) has a CLA, which sounds like a bait and switch waiting to happen, and distros won't package it

- freenginx is at least open source. But who knows if it'll still be around by June.


I remember being surprised by the open core thing some years ago.

I had been an Apache user for quite some time, and thought I'd take a look at the (at that point, a few years old) "new" shiny thing. I found that something as simple as LDAP authentication required a paid plugin; a free Apache module has been available for this for ages. That made nginx a non-starter for this particular use case.

I wonder if the fork will accumulate free plugins for things that the old core required paid plugins for, slowly eroding their business case.


Most of these simple premium features/plugins were probably funded by companies because they had business value. It's probably unlikely freenginx will re-create them without those contracts.

Unpaid Open Source developers tend to focus on interesting/cool core stuff and ignore all the stuff businesses care about (like LDAP authentication).


Apache is unpaid and fully open source, though


And businesses prefer the open core Apache alternative.


FWIW, nixpkgs packages angie


I admit I haven't followed this issue closely, but what is he talking about?

>In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position.


We (F5) published two CVEs today against NGINX+ & NGINX OSS. Maxim was against us assigning CVEs to these issues.

F5 is a CNA and follows CVE program rules and guidelines, and we will err on the side of security and caution. We felt there was a risk to customers/users and it warranted a CVE, he did not.


I worked there before and after the acquisition. F5 Security was woefully incompetent. We spent 3 months trying to get approval for a web hook from Gitlab -> Slack, including endless documents (Threat Model Assessment), and meetings - god, the meetings - at one point on a call with 35 people. So I feel Maxim’s pain trying to deal with that team at F5.

On the other hand nginx core developers (the Russians) were arrogant to the point of considering anyone else as inferior and unworthy of their attention or respect, unless they contributed to nginx oss. They managed that project secretively and rewrote most “outside” contributions. They also ignored security issues - one internal developer spotted security issues with NGINX Unit (a failed oss project 20 years out of date before it started) and was told to fix the issues quietly and not to mention “security” anywhere in the issue messages or commit history.

So I can imagine exactly how these meetings would have gone, I’m sure it was the last straw!


I can agree to this. I worked there too, and it took 2 months to get a simple approval for a similar project, despite preparing extensive TMA documents, etc


This seems like a much larger story than the fork, given the install base of nginx.

For clarity are you referring to CVE-2024-24989 and -24990 (HTTP/3)?


This is confusing. The CVE doesn't describe the attack vector with any meaningful degree of clarity, except to emphasize how you'd have to have a known unstable and non-default component enabled. As far as CVEs go, it definitely lacks substance, but it's not some catastrophic violation of best practices. It hardly reflects poorly on Maxim or anything he's done for Nginx. This seems like an extreme move, and it makes me wonder if there's something we're missing.


it's most likely the last straw rather than the sole reason.


Maybe, but he only mentioned disagreements on security policies. Doesn't sound very convincing as a last straw, especially from a marketing standpoint when trying to gain more traction for his fork.


Yes, those are the two CVEs I was referring to. All I know is he objected to our decision to assign CVEs, was not happy that we did, and the timing does not appear coincidental.


QUIC in Nginx is experimental and not enabled by default. I tend to agree with him here that a WIP codebase will have bugs that might have security implications, but they aren't CVE worthy.


We know a number of customers/users have the code in production, experimental or not. And that was part of decision process. The security advisories we published do state the feature is experimental.

When in doubt, err on the side of doing the right thing for the users. I find that's the best approach. I don't consider CVE a bad thing - it shouldn't be treated like a scarlet letter to be avoided. It is a unique identifier that makes it easy to talk about a specific issue and get the word out to customers/users so they can protect themselves. And that's a good thing.

The question I ask is "Why not assign a CVE?" You have to have a solid reason why not to do it, because our default is to assign and disclose.

I don't think having the CVEs should reflect poorly on NGINX or Maxim. I'm sorry he feels the way he does, but I hold no ill will toward him and wish him success, seriously.


FWIW, in my project the main reason we don't issue security advisories for "unsupported" code ("experimental" or "tech preview") is to reduce the burden for our downstreams: many of our immediate downstreams are expected by their users to apply every single security patch, regardless of whether they even use the affected functionality. For cloud providers doing this across a massive fleet, this is a fair amount of work that's worth avoiding if we can.

On the other hand, since the definition of "supported" is specifically designed to help downstreams, if it were known that some bit of code was widely used in production, we'd be open to declaring it "security supported", regardless of whether we thought it was "finished" or not.


Recently I had to support a client who had a "no CVEs in a production deploy, ever" policy.

The stack included Linux, Java, Chromium, and MySQL. It took multiple person-years of playing whack-a-mole with dependencies to get it into production because we'd have to have conversations like:

  Client: there's a CVE in this module 
  Us: that's not exploitable because it's behind a configuration option that we haven't enabled
  Client: somebody could turn it on
  Us: even if they somehow did and nobody noticed, they would have to stand up a server inside your VPC and connect to that
  Client: well what if they did that?
  Us: then they'd already have root and you are hosed 
  Client: but the CVE
  Us: 
So I definitely appreciate any vendor that tries to minimize CVEs.


In their defense, if the latest version of a module has a CVE, then it's either a 0-day, or an unsupported module.

In either case, you should probably do something about it.


That's just a braindead policy.

Really, really dumb. Not at all good security, just checking boxes.


I mean, yeah, but that's the way big bureaucratic organizations get sometimes. Bigger means more likely to have a brain-dead policy like this, but also more money... so, do you give up the money, or do you accommodate their policy while trying to minimize the cost?


Oh, I'll cash their check. I'll tell them, in professional terms, why they should change their policy, but I'll still cash the check.


> The question I ask is "Why not assign a CVE?"

There are tons of reasons why you wouldn't, but the core reason for this fork probably isn't really about the CVEs as such. It's either the final straw in a long line of disagreements, or the entire thing was handled so badly that he no longer wants to work with these people. Or most likely: a combination of both.

I once quit after a small disagreement because the owner cut off my explanation of why I built something the way I did with "I don't care, just do what I say". This was after he ignored the discussion on how to design it, and ignored requests for feedback when I was building it. And look, I don't mind re-doing it even if I don't agree it's better, but I did put quite a lot of thought and effort into it and thought it worked very well. If you don't even want to spend 3 minutes listening to the reasons why it's like that then kindly go fuck yourself.

It's not the disagreement as such that matters, it's the lack of basic respect.


What does the policy say about reporting security issues with experimental/not-enabled-by-default/unstable code?


As an outsider to this whole thing (having discovered this issue in this thread, like pretty much anyone), the CVE rules simply say that you cannot assign a CVE to vulnerabilities in a product that is not publicly available or licensable. Experimental, but publicly available features are still in scope.

This makes sense IMHO: experimental features may be buggy, but they may work in your limited use case. So you may be inclined to use them...except you don't know they expose you in a critical way.


Exactly - this very question came up. And pretty much everyone looked at me as I'm the one who sits on every CVE.org working group (BTW, the CVE rules are currently being revised and in comment period for said revision) and I explained exactly that - just because it is experimental doesn't mean it is out of scope.

Also, something that keeps getting lost here, the CVE is NOT just against NGINX OSS, but also NGINX+, the commercial product. And the packaging, release, and messaging on that is a bit different. That had to be part of the decision process too. Since it is the same code the CVE applies to both. This was not a rash decision or one made without a lot of discussion and consideration of multiple factors.

But one of our guiding principles that we literally ask ourselves during these things is "What is the right thing to do?" Meaning, what is the right thing for the users, first and foremost. That's part of the job, IMHO. Some vendors never disclose anything, but that's not how we operate. I've written a few articles on F5's DevCentral site about this - "Why We CVE" and "CVE: Who, What, Where, and When" are particularly on topic for this, I think.


All features have limited use cases, but experimental features may be buggy in all use cases, which is exactly what happened here. A CVE is uninformative there, defects are implied; might as well create a CVE for every commit: "something happened, don't forget to redeploy".


  The question I ask is "Why not assign a CVE?"
Exactly: why not? Glory to the Linux kernel, which is on its way to assigning a CVE for everything :)


That's a whole different discussion - which isn't as dramatic as it is being made out to be.

Other hats I wear (outside of my day job) include being on every (literally, every) CVE.org Working Group and being the newly elected CNA Liaison to the CVE Board. This has been a subject of discussion and things are a bit overblown right now, IMHO. Some of the initial communications were perhaps not as clear as they could have been. But it isn't going to be every kernel bug being a CVE - not every bug is a vuln.

I'm also one of the co-chairs for the upcoming VulnCon in Raleigh, NC. Just a plug. ;-)


Answering the original question you posted to me a bit down-thread with this important context: the answer to "why not issue a CVE?" is the same reason that you don't call every random car burglary or graffiti an act of terrorism.

While I agree the whole Linux CVE thing is a bit overblown, as an outside observer the new policy [1] does not read like they are super happy with CVE in general.

Too bad the CFP is closed for VulnCon, it might be fun to do a "Assume everything is wrong and you can't do anything the way you do it now - how do you build CVE 2.0" (also that title is too long).

1. https://lwn.net/ml/linux-kernel/2024021314-unwelcome-shrill-...


We got around 150 submissions for 30ish panel slots over three days, so we're good there. Schedule should be out soon.

The CVE program has grown and changed a lot the past few years, and the rules are undergoing a major revision right now (comment period currently) taking in a lot of the feedback. And the rate of CNAs joining has been picking up rapidly as global interest in the program has increased.

No one thinks it is perfect, but that's why a lot of us are active in the working groups and trying to keep moving things forward.


EVERYTHING.

Found a missing comma in the documentation of a function? Yup - That's a CVE ;p


Why did he not want CVEs assigned?


I think you'd have to ask Maxim. My take is he felt experimental features should not get CVEs, which isn't how the program works. But that's just my take - I'm the primary representative for F5 to the CVE program and on the F5 SIRT, we handle our vuln disclosures.


I'm inclined to agree with your decision to create and publish CVEs for these, honestly. You were shipping code with a now-known vulnerability in it, even if it wasn't compiled in by default.


if it's not compiled in by default, then you aren't shipping the code! Somebody is downloading it and compiling it themselves!


Incorrect. Features available to users still require a minimum, standard level of support. This is like the deceptive misnomer of staging and test environments that are provided to internal users but used no differently than production in all but name.


Nobody does it like that though; what the vendor declares unsupported is unsupported.


That... is the definition of shipping the code: the code is being shipped to the people downloading and compiling it for themselves.


If the feature is in the code that's downloaded, regardless of whether or not the build process enables it by default, the code is definitely being shipped.


BRB, filing CVE's against literally any project with example code in their documentation...


That's actually supported by the CVE program rules. Have at it if you find examples with security vulns.


I've actually seen CVEs like that before, I agree that's bonkers but I have seen it...


Given how frequently people copy and paste example code… why is that surprising? Folks need to be informed. CVEs are a channel for that.


Pssst: People who copy+paste example code aren't checking CVEs


Yes. It's no different from any optional feature. Actual beta features should only be shipped in beta software.


You and I have very different notions of "shipped". It's open source code, it's being made publicly available. That's shipped, as I see it.


This is an insane standard and attempting to adhere to it would mean that the CVE database, which is already mostly full of useless, irrelevant garbage, is now just the bug tracker for _every single open source project in the world_.


This. CVE has become garbage because "security researchers" are incentivized to file anything and everything so they can put it on their resume.


Why is it insane? The CVE goal was to track vulnerabilities that customers could be exposed to. It is used…in public, released versions. Why wouldn’t it be tracked?


Because it's not actually part of the distribution unless you compile it yourself.

It is not released in any sense of the word. It is not even a complete feature.

I am actually completely shocked this needs to be explained. Legitimate insanity.


It's in the published source code, as a usable feature, just flagged as experimental and not compiled by default. It's not like this is some random development branch. It's there, to be used en route to being stable. People will have downloaded a release tagged version of the source code, compiled that feature in and used it.

By what definition is that not shipped?

> I am actually completely shocked this needs to be explained. Legitimate insanity.

Right back at you.


I've had an optional experimental feature marked with a CVE. It's not a big deal as it just lets folks know that they should upgrade if they are using that experimental feature in the affected versions.


>just flagged as experimental and not compiled by default

Are UML diagrams considered in scope too?


UML diagrams are not code. You cannot file a CVE for something that is not an actual (software or hardware) implementation.


> to be used en route to being stable

Where did you get this info? It might be that the feature is actively being worked on and the DoS is a known issue which would be fixed before merge. Lots of projects have a contrib folder for random scripts and other things which wouldn't get merged without some review, but users are free to run the scripts if they want to. Experimental compile-time build flags are experimental by definition.


You're all also missing the fact that the vuln is also in the NGINX+ commercial product, not just OSS. Which has a different release model.

Being the same code it'd be darn strange to have the CVE for one and not the other. We did ask ourselves that question and quickly concluded it made no sense.


"made no sense" from a narrow, CVE announcement perspective, but Maxim disagrees from another perspective:

    > [F5] decided to interfere with security policy nginx
    > uses for years, ignoring both the policy and developers’ position.
    >
    > That’s quite understandable: they own the project, and can do
    > anything with it, including doing marketing-motivated actions,
    > ignoring developers position and community.  Still, this
    > contradicts our agreement.  And, more importantly, I no longer able
    > to control which changes are made in nginx within F5, and no longer
    > see nginx as a free and open source project developed and
    > maintained for the public good.
I'm not sure what "contradicts our agreement" means but the simple interpretation is that he feels that F5 have become too dictatorial to the open source project.

The whole drama seems very short-sighted from F5's perspective. Maxim was working for you for free for years and you couldn't find some middle ground? I imagine there could have been some page on the free nginx project that listed CVEs that are in the enterprise product but that are not considered CVEs for the open source project given its stated policy of not creating CVEs for experimental features, or something like that.

To nuke the main developer, cause this rift in the community, and create a fork seems like a great microcosm of the general tendency of security leads to wield uncompromising power. I get it. Security is important. But security isn't everything and these little fiefdoms that security leads build up are bureaucratic and annoying.

I hope you understand that these uncompromising policies actually reduce security in the end, because 10X developers like Maxim will tend to avoid the security team and, in the worst case, hide stuff from it. I've seen this play out over and over in large corporations. In that sense, the F5 security team is no different.

But there should be a collaborative, two-way process between security and development. I'm sure security leads will say that they have that, but that's not what I find. Ultimately, if there's an escalation, executives will side with the security lead, so it is a de facto dictatorship even if security leads will tend to avoid the nuclear option. But when you take the nuclear option, as you did in this case, don't be surprised by the consequences.


OK - I need to make very clear that I'm speaking for myself and NOT F5, OK? OK.

Ask yourself why this matters. What is the big deal about having a CVE assigned? A CVE is just a unique identifier for a vulnerability so that everyone can refer to the same thing. It helps get word out to users who might be impacted, and we know there are sites using this feature in production - experimental or not. This wasn't dictating what could or could not go into the code - my understanding was the vuln wasn't even in his code, but came from another contributor. So, honestly, how does issuing the CVEs impact his work, at all?

That's what I, personally, don't understand. At a functional level, this really has no impact on his work or him personally. This is just documentation of an existing issue and a fix which had to be made, and was being made, CVE or no CVE. And this is worth a fork?

What you're suggesting is the best thing to do is to allow one developer to dictate what should or should not be disclosed to the user base, based on their personal feelings and not an analysis of the impact of that vulnerability on said user base? And if they're inflexible in their view and no compromise can be reached then that's OK?

Sometimes there's just no good compromise to be reached and you end up with one person on one side, and a lot of other people on the other, and if that one person just refuses to budge then it is what it is. Rational people can agree to disagree. In my career there have been many times when I have disagreed with a decision, and I could either make peace with it or I could polish my resume. To me it seems a drastic step to take over something as frankly innocuous as assigning a CVE to an acknowledged vulnerability. Clearly he felt differently, and strongly, on the matter. Maybe he is just very strongly anti-CVE in general, or maybe he'd been feeling the itch to control his own destiny and this was just the spur it took to make the move.

His reasons are his own, and maybe he'll share more in time. I'm comfortable with my personal stance in the matter and the recommendations I made; they conform with my personal and professional morals and ethics. I'm sorry it came to this, but I would not change my recommendation in hindsight as I still feel we did the right thing.

Only time will tell what the results of that are. I think the world is big enough that it doesn't have to be a zero sum game.


Docs say it's compiled into the Linux binaries by default:

http://nginx.org/en/docs/quic.html

"Also, since 1.25.0, the QUIC and HTTP/3 support is available in Linux binary packages."


I guess a vulnerability doesn't count unless it's default lol. Just don't make it default and you never have any responsibility, nor do those who use it or use a vendor version that has added it to their product.


>I guess a vulnerability doesn’t count unless it’s default lol.

It's still being tested. It's not complete. It's not released. It's not in the distribution. The number of people that have this feature in the binary AND enabled is less than the number of people that agree that this should be a CVE.

CVE's are not for tracking bugs in unfinished features.


It IS in the code that anyone can compile to use or integrate in projects as is the OSS way. Splitting hairs because it’s not in the default binary is absurd. Guess all the extra FFMPEG compilation flags and such shouldn’t count either.


You know that random thing you mucked around on Github X years ago then forgot about, and it's amongst 30 other random repos?

Should people file a CVE against that?


(not explicitly asking you, MZMegaZone) Does anyone understand why a disagreement about this would be worth the extra work in forking the project?

I'm not very familiar with the implications, so it seems like a relatively fine hair to split - as though the trouble of dealing with these as CVEs would be less than the extra work of forking.


It probably wasn't. There's likely something else going on. Either Dounin had already decided to fork for other reasons, and the timing was coincidental, or there were a lot of reasons building up, and this was the final straw.

Or he's just a very strange man, and for some reason this pair of CVEs was oddly that important to him.


[flagged]


If you have more information, share it (I don’t think you do, as all you could say was “I’m sure”.). People actually involved sharing their side is a unique advantage of HN. Empty ad hominem attacks are not allowed here, and you have no right to tell anyone to “get out of here”.


Could you expand on your reasoning here? I'm genuinely curious what makes you react in this way?

To me it seems like a very simple disagreement with policies, and a reaction to the implications of the decision that was made and the impact it has on the agreed relationships.


I don't get it... does he not know about angie [1]? It was created by NGINX core devs after the F5 acquisition, if I'm not mistaken, and it's a drop-in replacement for NGINX.

[1] https://github.com/webserver-llc/angie


angie is run by a corporate entity that could do exactly what F5 did.


> not run by corporate entities

> webserver, llc


This surely is the question. Why not Angie?


Could be related to the fact that Angie offers 'pro' version: https://wbsrv.ru/angie-pro/docs/en/

From statement: "Instead, I’m starting an alternative project, which is going to be run by developers, and not corporate entities"


Hm.

I guess this consultancy-on-a-paid-version model doesn't bother me (and clearly didn't bother the developer of freenginx while they were paying him).

But a double fork can't be good.


> clearly didn't bother the developer of freenginx while they were paying him

Clearly it did, so much so that he gave up all that pay.


That is not why he gave up all the pay, is it? F5 closed the Moscow office.


You are right, my mistake.


I assume USA companies are by far the highest revenue source for Nginx Plus. Both of these forks seem to be based in Russia. How is a USA company supposed to pay either of these vendors for their consulting or Pro versions?

How long until F5 submits requests for domain ownership of freenginx.org, and how quickly does Angie get takedown requests for their features that look remarkably similar to Nginx Plus features (e.g., the console)?


> features that look remarkably similar to Nginx Plus features (e.g., the console)

It's illegal for products in the same space to have similar features?


Please compare the two and let us know if you think "similar" is the right word.


Compare what? Console/dashboard is open sourced by F5, so anybody can fork: https://github.com/nginxinc/nginx-plus-dashboard


Thanks, I was trying to find the license for the nginx console but thought it might just be part of the plus offering only.


The main criticism is that it requires signing a CLA, so they might switch to a non-free license any day now.


But anyone, including you and me, could re-license an MIT/BSD-licensed open-source project under a different license, including a non-free one. A CLA does not affect that.


> Unfortunately, some new non-technical management at F5 recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position.

Ah, I completely forgot F5 was involved in this; probably most everyone else did too, and F5 gets no money from this. Shouldn't matter to them, do they even have competition in the enterprise load balancer space? I spent 9 years of my career managing these devices; they're rock solid, and I remember some anecdotes about MS buying them by the truckload. They should be able to cover someone working on nginx, maybe advertise it more for some OSS goodwill.


I dunno about rock solid. I’ve had plenty of issues forcing a failover/reboot, multiple complicated tickets open a year, etc. But we have a sh ton of them. To be fair, some are kernel bugs with connection table leaks, SNAT + UDP, etc.

Buuuut, they have by far the best support. They’re as responsive as Cisco, but every product isn’t a completely different thing, team, etc. And they work really well in a big company used to having Network Engineering as a silo. I’d only use them as physical hardware, though. As a virtual appliance, they’re too resource hungry.

Nginx or HA-Proxy are technically great for anything reasonable and when fronting a small set of applications. I prefer nginx because the config is easier to read for someone coming in behind me. But they take a modern IT structure to support because “Developers” don’t get them and “Network Engineers” don’t have a CLI.

For VMWare, NSX-V HA-Proxy and NSX-T nginx config are like someone read the HOWTO and never got into production ready deployments. They’re poorly tuned and failure recovery is sloooow. AVI looked so promising, but development slowed down and seemed to lose direction post acquisition. And that was before Broadcom. Sigh.


I'm very out of date so take my opinion with a grain of salt. The customer support I received from F5 when they acquired a telco product was about the worst support I've ever seen. Now this wasn't the general LB equipment that F5 has the reputation around, it's some specific equipment for LTE networks.

We'd get completely bogus explanations for bugs, escalate up the chain to VPs and leadership because there was an obvious problem with training, understanding, and support for complex issues, and get the VPs trying to gaslight us into believing their explanations were valid. We're talking things like: on our IPv4-only network, the reason we're having issues is due to bugs in the equipment receiving IPv6 packets.

So it's one of those things where I've personally been burned so hard by F5 that I'd probably, to an unreasonable level, look for other vendors. The only thing is, this was a while ago, and the rumors I've heard are that no one involved is still employed by F5.


I completely get this. I feel like every product I’ve had outside of a vendor’s wheelhouse has gone that way. We just use the BigIP gear from F5 and they’re better than the load balancers we used in the past. Thank god Cisco just abandoned that business.

I can’t imagine them supporting telco gear. The IPv6 thing has me LOLing because I just had a similar experience with a vendor where we don’t route IPv6 in that segment and even if we did, it shouldn’t break. Similarly, a vendor in a space they don’t belong that I imagine we bought because of a golf game.

A thing I dread is a product we’ve adopted being acquired… and worse, being acquired by someone extending their brand into a new area. It’s also why we often choose a big brand over a superior product. It’s not the issue of today, but when they get bought and by who. I hate that so much and not my decision, but it’s a reality.

It’s also a terrible sign if you’re dealing with a real bug and you’re stuck with a sales engineer and can’t get a product engineer directly involved.

I have a list of “thou shalt not” companies as well, and some may be similar where a few bad experiences ruined the brand for me. Some we’re still stuck with and I maaaay be looking for ways to kill that.


> I have a list of “thou shalt not” companies

Can you share that list?


First, I don’t make these decisions but sometimes have influence. These opinions are my own and not my intentionally unnamed employer, and might be flat out wrong. This list is very focused on big companies at stupid scale with a lot of legacy… applied tech.

Generally my rule is “except for their very core product.” But this is full “hate everything” that pops into my mind:

RedHat won’t accept gifted patches for critical bugs in their tools that they won’t troubleshoot themselves. Getting the patch upstream means you get to use it in the next major version years later. That predates IBM. I won’t use their distribution specific tooling anymore. Outside the OS sucks worse. If I hear ActiveMQ one more time… [caveat: I probably hate every commercial Linux distro and Windows because my nonexistent beard is grayer than my age]

IBM… kind of feel sad about it, but they now suck at everything.

Oracle has good support, but they’re predatory and require an army of humans to manage inherently hodgepodge systems. Also creates an organizational unit of certified admins that can’t transition to alternatives because they’ve only memorized the product. Cisco’s the same except the predatory part and without many good alternatives for core DC gear.

CA, Symantec were awful pre-Broadcom and even worse now that they’re Broadcom’s annuity. Where products go to die.

Trellix (ex-McAfee) is like the new Symantec or something.

There’s more I wish I could list for you, but can’t for various reasons.

On the other end, Satya has made MS a reasonable choice in so many things. Still a lot that sucks or is immature, but still… I didn’t think that was possible. I had to shift my mindset.


When was this? I worked with them 2009-2018, support was really top notch. We could get super technical guys on the call and even custom patches for our issues, but our usage was relatively simple. I contrast them with McAfee products we've used, now that was a complete shitshow as a product and support.


The last two companies I've worked for have paid for Nginx+ since software LB is all we really need.

Handling a few thousand RPS is nothing to nginx, and doesn't require fancy hardware.

That said, it replaced Kemp load balancers, which it seems is the next biggest competitor in the hardware load balancer appliance space.


The world has moved on in the sense that "good enough" and cloud eat into their balance sheets, I'm sure, but there are loads and loads of banks and legacy enterprises that maintain their ivory tower data centers, and there's nothing to replace these with AFAIK. Google has Maglev, AWS perhaps something similar, MS no idea, everyone else just buys F5 or doesn't need it.


Amazon used to run entirely behind Citrix NetScaler hardware; no F5 at all. This was back in the early 2010s so I assume things have changed by now.


Yup - there was a massive internal push to move off of SSL terminating LBs back in ~2018


How come?


Cost.

Now, SSL termination is done at the host level, using a distributed SSL termination proxy developed by S3 called "JBLRelay"


Lots of people are using haproxy


My org moved off nginx for haproxy after we learned that (at the time, maybe it changed) reloading an nginx config, even if done gracefully through kernel signals, would drop existing connections, whereas haproxy could handle it gracefully. That was a fun week of diving into some C code looking for why it was behaving that way.


How did you come to that conclusion? I always believed a reload spawned new workers and let the old one drain off.


Yes I reload nginx all the time and it doesn’t drop connections. I just use the debian nginx package. Not sure what the gp is talking about.


Nginx abruptly drops http/1.1 persistent connections on reloads. This has been an issue forever and Maxim refused to ever fix it, saying it was to spec (yes it was, but there are better ways to deal with it).

It’s a reason why many large, modern infra deployments have moved away from nginx.


It doesn't drop it, it's just not persistent on reload, isn't that what you mean? Actually dropping a connection mid-request is something I haven't seen nginx (or indeed Apache) do for many years despite doing some weird things with it.

I can see where you're coming from, but it's not unreasonable behaviour, is it? Connections need to be migrated over to the new worker, and that's how all major servers do it. If that's a problem then maybe something designed as a proxy only, instead of a real server, is the way to go?


It doesn't drop mid request. But it closes the TCP socket abruptly after any in flight requests are completed. Clients have no idea the connection is closed, and try to reuse it and get back a RST. In heavily dynamic environments where nginx reloads happen frequently, it leads to large amounts of RSTs/broken connections and high error rates (you can't necessarily auto-retry a POST, a RST could mean anything).

The sane approach is connection draining - you send a `connection: close` response header on the old worker, then finally remove any remaining idle connections at the end of the drain.

In http/2 it's not an issue as it has a way for the server to explicitly say the connection is closed.
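
For anyone stuck with this behaviour, a sketch of the related tuning knobs (not a fix for the abrupt close described above; they only bound how long old workers and persistent connections hang around):

  # main (top-level) context: cap how long old workers linger after a reload;
  # connections still open when the timer expires are closed.
  worker_shutdown_timeout 30s;

  http {
      # Limit how long any one persistent connection lives, which narrows
      # (but does not eliminate) the reuse-after-close race on reloads.
      keepalive_timeout  15s;
      keepalive_requests 1000;
  }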


What you describe is basically how persistent http works, is it not? Even a persistent connection terminates at some point. Which web server does not work like that?

I guess you could send the connection header on draining, but anything less than what the big servers do is bound to cause some compatibility problem with some niche client somewhere. I can certainly see why a web server with millions of installs would be reluctant to change behaviour, even if it is within spec.

I can only guess at the use case here, but maybe something designed from the start as a stateless proxy and not a general purpose web server would be a better fit.


I'm late to return to the thread, but this was the exact scenario we hit. We had it behind a CDN as well as behind an L4 load balancer for some very high volume internal consumers, and when it would just blast back RST packets, the consumers would freak out and break connection, returning errors on their end that weren't matched in our logs, unless we were lucky and got a 499 (now Maxim can talk about standards). As a general purpose reverse proxy for many clients on the Internet, I'm sure that's fine, but in our use case this made nginx unpredictable and no longer desirable.


Isn't the typical behaviour of an application to re-establish the persistent connection on demand? I wonder what the requirement is to have these persistent with no timeout.


Yep. Persistent connections are bound to fail sooner or later anyway, so a robust application should have its own recovery.


We went in the opposite direction, not because haproxy was bad, but because nginx had a simpler config, and I think we were paying for haproxy but don't pay for nginx.

all that said, neither drops existing connections on reload


Another issue with nginx IIRC is that it allows HTTP request smuggling, which is a critical security vulnerability.


That's been fixed for years. The CVE I can find was resolved in 1.17.7 (Dec 2019), and further hardening was applied in 1.21.1 (Jul 2021).


Can nginx send requests upstream over HTTP/2?

I see this question has remained unanswered for a couple of years.

https://security.stackexchange.com/questions/257823/what-are...


It cannot. There's more detailed reasoning at https://trac.nginx.org/nginx/ticket/923, but the tl;dr is:

> nginx is already good at mitigating HTTP desync / request smuggling attacks, even without using HTTP/2 to backends. In particular because it normalizes Content-Length and Transfer-Encoding while routing requests (and also does not reuse connections to backend servers unless explicitly configured to do so)


nginx supports graceful reloading and I’m pretty sure it has for a very long time - there are references to it in the changelog from 2005

https://nginx.org/en/docs/control.html
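For reference, the graceful reload is just a signal to the master process; a minimal sketch, assuming a typical packaged install with the default pid path:

    # check the new configuration first
    nginx -t

    # ask the master process to re-read the config: it starts new workers
    # and signals the old ones to finish in-flight requests and exit
    nginx -s reload

    # equivalent: send SIGHUP to the master process directly
    kill -HUP "$(cat /var/run/nginx.pid)"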


AVI if you're using VMware already


I'm pretty sure that AVI just wraps Nginx, even though they claim otherwise.

I think this because Nginx has a bunch of parsing quirks that are shared with AVI and nothing else.


HAProxy is an enterprise load balancer that's available through Red Hat or other OSS vendors. Nginx is just so easy to configure...


HAProxy is a wonderful load balancer that doesn't serve static files thus forcing many of us to learn Nginx to fill the static-file-serving scenarios.

Caddy seems like a wonderful alternative that does load balancing and static file serving but has wild config file formats for people coming from Apache/Nginx-land.


I keep a Caddy server around and the config format is actually much, much nicer than nginx's in my experience. The main problem with it is that everybody provides example configurations in the nginx config format, so I have to read them, understand them, and translate them.

This works for me because I already knew a fair bit about nginx configuration before picking up Caddy but it really kills me to see just how many projects don't even bother to explain the nginx config they provide.

An example of this is Mattermost, which requires WebSockets and a few other config tweaks when running behind a reverse proxy. How does Mattermost document this? With an example nginx config! Want to use a different reverse proxy? Well, I hope you know how to read nginx configuration because there's no English description of what the example configuration does.

Mastodon is another project that has committed this sin. I'm sure the list is never-ending.
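For comparison, the Caddy side of such an example usually collapses to almost nothing, since Caddy v2's reverse_proxy passes WebSockets through without extra directives. A sketch with an illustrative hostname and Mattermost's usual 8065 backend port:

    chat.example.com {
        reverse_proxy localhost:8065
    }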


> The main problem with it is that everybody provides example configurations in the nginx config format, so I have to read them, understand them, and translate them.

This is so real. I call it "doc-lock" or documentation lock-in. I don't really know a good scalable way to solve this faster than the natural passage of time and growth of the Caddy project.


LLMs baby! Input nginx config, output caddy config. Input nginx docs, output caddy docs. Someone get on this and go to YC.


You're absolutely right. I'm going to do this today.

It's clear from this thread that a) Nginx open source will not proceed at its previous pace, b) the forks are for Russia and not for western companies, and c) Caddy seems like absolutely the most sane and responsive place to move.


LLMs do a horrendous job with Caddy config as it stands. It doesn't know how to differentiate Caddy v0/1 config from v2 config, so it hallucinates all kinds of completely invalid config. We've seen an uptick of people coming for support on the forums with configs that don't make any sense.


For just blasting a config out, I'm sure there are tons of problems. But (and I have not been to your forums, because...the project just works for me, it's great!) I've had a lot of success having GPT4 do the first-pass translation from nginx to Caddy. It's not perfect, but I do also know how to write a Caddyfile myself, I'm just getting myself out of the line-by-line business.


You could've used the nginx-adapter and skipped the faulty LLMs

https://github.com/caddyserver/nginx-adapter
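If I remember the workflow right (treat this as a sketch), you build a Caddy binary that includes the adapter plugin and then run caddy adapt against the old config:

    # build a caddy binary that includes the nginx config adapter
    xcaddy build --with github.com/caddyserver/nginx-adapter

    # convert an existing nginx config into Caddy's native JSON
    ./caddy adapt --config /etc/nginx/nginx.conf --adapter nginx --pretty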


Thanks for the link! Maybe less thanks for the attitude, though--I'm well-versed in how these tools fail and nothing goes out the door without me evaluating it. (And, for my use cases? Generally pretty solid results, with failures being obvious ones that fail in my local and never even get to the deployed dev environment.)


> This is so real. I call it "doc-lock" or documentation lock-in. I don't really know a good scalable way to solve this faster than the natural passage of time and growth of the Caddy project.

I think you are totally right here: gaining critical mass over time as a battle-tested solution. On the other hand, authors of docs [who prefer Caddy] will likely stop providing sample Nginx configs, and someone else will complain about that on HN.

"Battle tested" can be seen differently, of course, but in my opinion, statements like the following one,

> IMO most users do require the newer versions because we made critical changes to how key things work and perform. I cannot in good faith recommend running anything but the latest release.

from https://news.ycombinator.com/item?id=36055554 , by someone working on Caddy, don't help. Maybe in their bubble (can I say your bubble, since you are from Caddy as well?) no one really cares about LTS stuff and everyone just uses "image: caddy:latest", with everything in containers managed by dev teams; that's just my projection of why it may be so.


How would you imagine this working in practice? Should one provide instructions on how to unwrap the Docker images/Dockerfiles a project uses (quite a few lean on Docker/containers nowadays rather than a regular system setup) in order to, for example, set up the same thing in FreeBSD jails? Where do you stop?


Just for completeness' sake, and probably not useful to many people: HAProxy can serve a limited number of static files by abusing back-ends and error pages. I have done this for landing pages and directory/table-of-contents pages. One just makes a properly configured HTTP page that has the desired HTTP headers embedded in it, configures it as the error page for a new back-end, and uses ACLs to direct specific URLs to that back-end. Then just replace any status codes with 200 for that back-end. Probably mostly useful to those with a little hobby site or landing page that needs to give people some static information while the rest of the site is dynamic. This reduces moving parts and reduces the risk of time-wait assassination attacks.

This method is also useful for abusive clients that one still wishes to give an error page to. Based on traffic patterns, drop them in a stick table and route those people to your pre-compressed error page in the unique back-end. It keeps them at the edge of the network.


FYI: Serving static files is easier and more flexible in modern versions of HAProxy via the `http-request return` action [1]. No need to abuse error pages and no need to embed the header within the error file any longer :-) You even have some dynamic generation capabilities via the `lf-file` option, allowing you to embed e.g. the client IP address or request ID in responses.
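A rough sketch of what that can look like (paths and ACLs are illustrative; see [1] below for the full set of options):

    frontend fe_web
        bind :80
        # serve a small static landing page straight from HAProxy
        http-request return status 200 content-type "text/html" file /etc/haproxy/static/index.html if { path / }
        default_backend be_app

    backend be_app
        server app1 127.0.0.1:8080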

[1] https://docs.haproxy.org/dev/configuration.html#4.4-return

Disclosure: I'm a community contributor to HAProxy.


Nice, I will have to play around with that. I admit I sometimes get stuck in outdated patterns due to old habits and being lazy.

> I'm a community contributor to HAProxy.

I think I recall chatting with you on here or email, I can't remember which. I have mostly interacted with Willy in the past. He is also on here. Every interaction with HAProxy developers have been educational and thought provoking not to mention pleasant.


> I think I recall chatting with you on here or email, I can't remember which.

Could possibly also have been in the issue tracker, which I helped bootstrap and maintain for quite a while after initially setting it up. Luckily the core team has taken over, since I've had much less time for HAProxy contributions lately.


That's the best part -- you can choose your config format when using Caddy! https://caddyserver.com/docs/config-adapters


True and I've made use of the Nginx adapter, but the resulting series of error messages and JSON was too scary to dive in further. The workflow that would make the most sense to me (to exit Nginx-world) would be loading my complex Nginx configs (100+ files) with the adapter, summarizing what could not be interpreted, and then writing the entirety to Caddyfile-format for me to modify further. I understand that JSON to Caddyfile would be lossy, but reading or editing 10k lines of JSON just seems impossible and daunting.


Thanks for the feedback, that's good to know.


> but has wild config file formats for people coming from Apache/Nginx-land.

stockholm syndrome


the syntax of nginx configs might not be hard, but its semantics (particularly [0]) is eldritch evil I don't relish dealing with

[0] https://www.nginx.com/resources/wiki/start/topics/depth/ifis...


I can see that. But for me, I was so very relieved to no longer deal with Apache config files after switching to Caddy.


A load balancer shouldn't serve static files. It shouldn't serve anything. It should... load balance.

I can see why you'd want an all-in-one solution sometimes, but I also think a single-purpose service has strengths all its own.


For a lot of web apps, having an all-in-one solution makes sense.

nginx open source does all of these things and more wonderfully:

    Reverse proxying web apps written in your language of choice
    Load balancer
    Rate limiting
    TLS termination (serving SSL certificates)
    Redirecting HTTP to HTTPS and other app-level redirects
    Serving static files with cache headers
    Managing a deny / allow list for IP addresses
    Getting geolocation data[0], such as a visitor’s country code, and setting it in a header
    Serving a maintenance page if my app back-end happens to be down on purpose
    Handling gzip compression
    Handling websocket connections
I wouldn't want to run and manage services and configs for ~10 different tools here but nearly every app I deploy uses most of the above.

nginx can do all of this with a few dozen lines of config and it has an impeccable track record of being efficient and stable. You can also use something like OpenResty to have Lua script support so you can script custom solutions. If you didn't want to use nginx plus you can find semi-comparable open source Lua scripts and nginx modules for some individual plus features.

[0]: Technically this is an open source module to provide this feature.
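To make that concrete, a stripped-down sketch of a single nginx config covering several of those bullets at once (hostnames, paths, and upstream ports are illustrative):

    events {}

    http {
        gzip on;

        upstream app_servers {
            server 127.0.0.1:3000;
            server 127.0.0.1:3001;
        }

        server {
            listen 80;
            server_name example.com;
            return 301 https://$host$request_uri;    # redirect HTTP to HTTPS
        }

        server {
            listen 443 ssl;
            server_name example.com;
            ssl_certificate     /etc/ssl/example.com.pem;    # TLS termination
            ssl_certificate_key /etc/ssl/example.com.key;

            location /static/ {
                root /var/www/example;    # static files...
                expires 7d;               # ...with cache headers
            }

            location / {
                proxy_pass http://app_servers;    # reverse proxy + load balancing
                proxy_set_header Host $host;
            }
        }
    }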


Quite interesting. In theory a "pure" load balancer shouldn't, but in practice most of my LBs do, especially for small projects. Even for larger projects I combine proxy_cache on the LB, making it serve static files, or use it to serve a website's public content while splitting the load for dynamic content over several application servers.

And I think it's fine.


Caddy config is no worse than HAProxy.


There is another fork already from some "ex-devs from the original team" https://angie.software/en/ https://github.com/webserver-llc/angie


Thanks, I've never seen this fork mentioned before. This alone is compelling:

"Simplifying configuration: the location directive can define several matching expressions at once, which enables combining blocks with shared settings."


Also owned by a for-profit company who offers a pro version.


Gotta pay the bills somehow


Maybe a coop of sorts could be formed where they pull in funds from sponsorships. A non-profit, maybe. Devs could "lease" themselves to corporate sponsors and work on the project, plus spend some percentage of their time on features the sponsors need. Sponsored development.

IDK, it could be a way to do it, pay the bills and then some, and also limit the negative impacts of being a public business or a VC-funded growth startup.


That doesn't work. For example Apple, benefiting from the FreeBSD Foundation's work, never gave a single penny back to them and never sponsored any project within the Foundation. A million dollars a year would mean the world to the Foundation, and would be less than a rounding error on Apple's balance sheet.


Per the discussion at https://news.ycombinator.com/item?id=39374312, this cryptic shade:

> Unfortunately, some new non-technical management at F5 recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position.

Refers to F5's decision to publish two vulnerabilities as CVEs, when Maxim did not want them to be published.


>freenginx.org

IANAL, but I strongly recommend reconsidering the name, as the current one contains a trademark.


They could take the Postgres naming approach.

Ingres was forked; the post-fork version of Ingres was called "Post"gres.

So maybe name this new project "PostX" (for Post + nginx).

Though that might sound too similar to posix.


Postgres name is said to be a reference to ingres db, not a fork of ingres.

> The INGRES relational database management system (DBMS) was implemented during 1975-1977 at the Univerisity of California. Since 1978 various prototype extensions have been made to support distributed databases [STON83a], ordered relations [STON83b], abstract data types [STON83c], and QUEL as a data type [STON84a]. In addition, we proposed but never prototyped a new application program interface [STON84b]. The University of California version of INGRES has been ‘‘hacked up enough’’ to make the inclusion of substantial new function extremely difficult. Another problem with continuing to extend the existing system is that many of our proposed ideas would be difficult to integrate into that system because of earlier design decisions. Consequently, we are building a new database system, called POSTGRES (POSTinGRES).

[https://dsf.berkeley.edu/papers/ERL-M85-95.pdf]


Isn't this a bit pedantic?

Fork vs "hacked up [Ingres] enough ... Consequently, building a new database system" named Postgres.


"Postginx" has a nice ring to it, could be an alcoholic beverage, a name of a generation, or even a web server.


gintonx


Sounds like a character from the Asterix comic books :)


Go roman? nginxii ?


are we at the twelfth fork? :)


... and postfix


Not necessary. It’s not like F5 is going to go to Russia and file suit against any of them.


Maybe not today, but one day they might. Better to start with a workable long term name.


nginy?


Bump each letter in nginx and we get.... ohjoy!


Insane find. Brilliant!!


Dude, please, just create a fork & explain the name. ohjoy sounds perfect and the meaning is brilliant. This must be it.


This might even look like enough a reason to spend the rest of their life maintaining it.


Wow, that's perfect!


Jesus Christ. That’s incredible.


There was also a time when the "ng" suffix was used to denote "next generation", so they could go with nginxng :)


How about EngineF?


https://my.f5.com/manage/s/article/K59427339

All F5 contributions to NGINX open source projects have been moved to other global locations. No code, either commercial or open source, is located in Russia.

yeah, yeah


Is called "rage-fork" perhaps this. So proposed title: nginx dev rage-forks over security disagreement with boss company

But then perhaps he also has every right to do it, even though AFAIR the original author was somebody else.


Rage-fork doesn’t show up anywhere in their announcement, nor does it read like they’re doing something specifically out of rage.

Everyone has a right to fork the project. Only time will tell if they get a critical mass of developers to keep it going.


It's worth pointing out that Maxim Dounin is, by himself, likely critical mass for Nginx. Since he started in 2011 he is by far the most active contributor to the codebase.


Surely "Nginx" is trademarked, copyrighted, etc. A cool and collected fork would do some basic work to avoid trivial lawsuits, consider the other forks already in the space, and write up a bit on how this fork will be different from the others.


A quick glance at USPTO and https://www.f5.com/company/policies/trademarks confirms this.


Russia has laws on the books that allow them to exempt domestic operations from international IP enforcement and to nullify any damages if the entity has a connection to an "unfriendly state."


Of course it is not mentioned, it is implied. The term "rage-fork" is a made-up one; I'm not sure whether anyone else uses it in this form. But if you imply that there was no rage in the decision to fork, well, that's something to have doubts about: it very much seems the move was made with some significant emotion in it, although somewhat concealed in the original post.


Igor, the original author, left in 2022 according to wikipedia: https://en.wikipedia.org/wiki/Igor_Sysoev


Why does the identity of the original author matter here?


In my opinion the original author did a really good job, so I found it interesting to know where and whether he might continue his vision.

Edit: I see now from the hg history that Igor hasn't been coding on Nginx for a decade actually.


Indeed, the original work done by a single dev (Igor) to get the nginx project running was very impressive, both time-wise and in the volume of code produced. I can't really recall why he left, but other comments around the thread imply such forks have happened more than once.

As a sidenote, I believe people who start projects and then run them in an excellent manner should be praised, supported, and noted; beyond that, their identities don't need to matter. It very much matters that some particular person with the weird nick burntsushi created the wonderful tool rg and kept growing it for a long time. Likewise, I can bet that for projects such as Cosmopolitan C, it absolutely matters that jart started and did it.


One of the most heavily used Russian software projects on the internet https://www.nginx.com/blog/do-svidaniya-igor-thank-you-for-n... but it's only marginally more modern than Apache httpd.

In light of recently announced nginx memory-safety vulnerabilities I'd suggest migrating to Caddy https://caddyserver.com/



After using Nginx for something like 15 years I dropped it a couple of years ago.

Using Caddy instead.

A point came where I realised I didn't enjoy Nginx. Configuring it was hard and it felt brittle.

A particular pain point is certificates/ssl. I absolutely dreaded doing anything with certificates in Nginx.

When I heard that Caddy automatically handles SSL/ certificates I jumped the nginx ship and swam as fast as I could to Caddy.


What a coincidence: some days ago I was reading some HN posts related to lighttpd and I found [1]. The link is dead and it has inappropriate content, so use archive.org. The author doesn't go into much detail about why nginx being purchased is a problem, but rather into how to configure lighttpd. And the first comment predicts the hypothetical case of F5 being problematic.

[1] https://news.ycombinator.com/item?id=19413901


I have been using lighttpd, which can also host static content and do proxying. On top of that, lighttpd supports cgi/fastcgi/etc out of the box, and it takes only 4 MB of memory by default at start, so it works for both low-end embedded systems and large servers.


I recently needed to build a docker image to run a static site. I compiled busybox with only its httpd server. It runs with 300 KB of RAM, using a scratch image and tini.

I didn't compile fastcgi support into my build, but it can be enabled.
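Roughly what such an image can look like, for the curious; a sketch that assumes statically linked busybox and tini binaries sitting next to the Dockerfile:

    # assumes ./busybox is a static build with only httpd enabled,
    # ./tini is the static tini binary, and ./site holds the pages
    FROM scratch
    COPY tini /tini
    COPY busybox /busybox
    COPY site /www
    EXPOSE 8080
    ENTRYPOINT ["/tini", "--"]
    # -f: stay in foreground, -h: web root, -p: listen port
    CMD ["/busybox", "httpd", "-f", "-h", "/www", "-p", "8080"]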


Yes, busybox httpd or civetweb is even smaller; both are around 300 KB.

For tini do you mean https://github.com/krallin/tini? How large is your final docker image, and why not just use alpine in that case, which is musl+busybox?


Yep that tini. The docker image is about 1.90mb. It's a repack of https://homer-demo.netlify.app/ I pre-gzipped a few of the compressible file extensions too so they can be served compressed.

In this case, I didn't need alpine. I generally aim to get the image as minimal as possible without too much hassle. I end up doing stuff like this a lot when I feel a community image may be too bloated and something like alpine or distroless could be used instead. Entrypoint scripts have all kinds of env vars and a shell dependency; I'd rather rebuild the image to cater to my needs, execute the binary directly, and mount in any config via k8s.


I used it to avoid having to learn lots of stuff about web configuration that bigger servers might require. Between lighttpd and DO droplets, I could run a VM per static site for $5 a month each with good performance. I'm very grateful for lighttpd!


It seems every time I read about a project being forked, the fork uses the (probably) trademarked name, only to need a rename a few weeks later.


Just curious: how do folks make a living from free contributions not associated with any company? Is it sponsorships, or do they do some contract work on the side? It feels like these devs are so underappreciated for the tremendous work they do; so much of software is built on these projects, and companies don't sponsor them or do the right thing!


Living in Russia could be very cheap compared to other countries. If you own a flat and you don't need cars or travel, then it's possible to live a few years just on money saved from your previous software job.


But it still seems like a massive system failure that this can't support the folks who have built something used by so many industries.


I'm hoping the fork will allow having code comments.


Tangent, but I got curious about contributing, so I went to the Freenginx homepage; it looks like this project will be organized over a mailing list. I would love it if someone would create a product that gives mailing lists a tolerable UI.


Have you tried HyperKitty/Postorius? Does it get closer to what you would consider tolerable?

https://mail.python.org/archives/list/mailman-users@python.o...


SourceHut? It’s a forge organized around an email rather than pull request workflow.


F5 is spinning this to be about not disclosing CVEs, when the truth is more that the experimental code that was flagged was not considered production ready, and whoever is running it should know they are on their own. This CVE is an obvious bug, and

when your KPI is CVEs per month, every bug looks like a CVE.

F5 wants this feature prioritized over what Maxim planned, and Maxim doesn't have to comply; he is a volunteer.


It was already mentioned in the other thread, but it looks like F5 owns the trademark for the Nginx name. Maxim should consider rebranding the project to avoid any legal blowback.


As I suggested elsewhere [0] if you bump each letter in nginx you get... ohjoy!

[0] https://news.ycombinator.com/item?id=39376657


I feel like scrambling the letters to "ginnx" (pronounced jinx?) or something might be better.


Fun but bad!


I hope he implements the least connection load balancing option for free users.


So - The big question...

Is the fork going to allow you to change the nginx Server response header (A PAID feature in the current fork...) without requiring you to mod it in and recompile it? :p

Yes - You read that correctly. They refuse to accept PR's to add additional functionality because that functionality is restricted to the paid version :p


Anyone have more info about the changes nginx made?


Page won't load for me, Wayback Machine caught it:

https://web.archive.org/web/20240214184151/https://mailman.n...


I dunno seems like a tempest in a teapot. Not sure why Maxim would not want CVEs to be assigned to something. Maybe it was just the final straw after a series of bad interactions. Every project has a lifespan, sometimes trying to keep them going forever is not the answer. I will miss nginx a lot if I need to migrate though.


no - the CVE process is at the center of new broad laws in the EU regarding business registration and security assurances. You are exactly wrong about the significance of this fork, basically.

see EU CRA


Time for me to slowly start looking for an alternative.

There was a time when I wanted to move away from it and was eyeing HAProxy, but the lack of the ability to serve static files didn't convince me. Then there was Traefik, but I never looked too much into it, because Nginx is working just fine for me.

My biggest hope was Cloudflare's Rust-based Pingora pre-announcement, which was then never published as Open Source.

Now that I googled for the Pingora name I found Oxy, which might be Pingora? Googling for this yields

> Although Pingora, another proxy server developed by us in Rust, shares some similarities with Oxy, it was intentionally designed as a separate proxy server with a different objective.

Any non-Apache recommendations? It should be able to serve static files.


Maybe take a look at Caddy (https://caddyserver.com/)



That's a third party plugin, not core Caddy.


Ohhh I didn’t realize.

Then nothing


And?

(That isn't about Caddy, rather a third-party plugin.)


Also, it's not always about vulnerabilities directly, but how well / fast things are patched.


Have you finally decided to match FQDN URLs correctly?

I'd love to get rid of the part of my clients' codebase that starts with // workaround for broken caddy servers


I'm going to third the suggestions for caddy, I've replaced nginx as a reverse proxy in a couple places with caddy and it's been so much easier to maintain.


Caddy, simple & easy, almost zeroconf.


I mean I’m not sure how it’s good to want to move to a dev who is against CVEs and disclosures…


I think people are seeing this as a very generic "big bad globocorp destroying OSS community", and not moving past the headlines. I'm with you, this seems like a foolish thing to decide to fork the project over. Probably there is other conflict brewing, and this was just a convenient opportunity.


Did I miss something regarding that Maxim didn't want CVEs and disclosures? I was not aware of this. And F5 are the ones wanting to add the CVEs (as happened in the announcement which was released an hour earlier)?

I could have sworn that I've read about Nginx CVEs in the past.


Well it seems he didn’t think this particular thing should have one despite the criteria being clear.


I did miss the post [0] where he explains that bugs in experimental features should not be assigned a CVE.

In that case I'd agree with his view, though I think his reaction is a bit over the top.

[0] https://freenginx.org/pipermail/nginx/2024-February/000007.h...


Note for some reason Maxim chose to link to http://freenginx.org, instead of https://freenginx.org


typo? it forwards to https anyway.


Pretty sure that's an extension on your part. Neither chrome/firefox/cURL redirects that to https for me.


You're right, I forgot that HTTPS auto-forwarding is the default in my browser.


Are you sure? not for me


Wondering also whether Igor and Maxim are OK, what with the geopolitical situation there.


I stopped using Nginx when I needed the ability to bind to an Ethernet interface (whose IP address was not yet available) and the Nginx developers refused to support this.

Before you ask why I would do that: all my Ethernet interfaces get their IPs dynamically, created on an on-demand basis, and I only wanted ONE specific interface (non-public) to host HTTP/HTTPS.

And no, we did not want to jerry-rig some fancy nginx config-file shell-script updater to run whenever an IP address gets assigned/reassigned.

Here lighttpd and Apache came to the rescue.


Is F5 trying to kill the original nginx? [Cf. Microsoft's hostile takeovers]


Seems like an annoying but necessary thing, so let's give the original a quick death and migrate to freenginx.

Infrastructure like that should not be run by for-profit corporations anyway; it will always end up like this sooner or later.


Did we find out why the dev of freenginx did not want the nginx CVE that caused this fork? Some context would be nice, as it seems like a weird reason to fork.



IIRC from reading the post, the reasoning was that the bug was in a feature which was marked as experimental (HTTP/3).


My biggest gripe as an internet keyboard warrior with an opinion is not being able to understand the source control and build process of Nginx.

Probably a skill issue but when I last tried to compile Nginx from the Github mirror I spent hours trying to figure it out. I wish there was a GitHub page with an easy to understand build process... and that I could just run "cargo build --release" lol


./configure

make

make install

I just ran this to be sure I wasn't delusional and it took only 2 minutes.


Really?

https://github.com/nginx/nginx/tree/branches/stable-1.24

I cloned this and it doesn't have a makefile or configure script

Neither does the official repo?

https://hg.nginx.org/nginx/file/tip

Do you run it from /auto/?
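If I remember right, the release tarballs ship a top-level ./configure, while in the source tree the same machinery lives under auto/, so a checkout builds roughly like this:

    # from a clone of the source repository, the script lives in auto/
    auto/configure --prefix=/tmp/nginx-test
    make
    make install

    # release tarballs from nginx.org ship the familiar top-level script
    ./configure --prefix=/tmp/nginx-test && make && make install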


Oh snap, F5 just Hudson’d themselves.


If I ever need nginx I'll use freenginx. But funny enough all my services run in Traefik these days. 15 years ago Apache httpd was the norm, and lately nginx has been, and now I can't even think of a reason to use it.


Apache my beloved


I only use Apache on one server, as a DAV server, since I could not find a simple DAV server (Nextcloud is already too much for that), with nginx as a frontend for it, since I use some servers with Unix sockets, which Apache still doesn't support.


Dissatisfaction, like water, will always find its level.


Curious how to support Maxim despite Russia complications.


bitcoin solves this


Can it un-swap the behavior of SIGTERM and SIGKILL please?


Swap SIGTERM and SIGQUIT behavior? I don't think you can catch SIGKILL.



Correct. The only other untrappable signal is SIGSTOP.


Judging from the comments of the guy from F5, it seems that Maxim didn't wanna assign a CVE to the latest vulns. I wonder why


Just looking at comments here makes me feel like this is pretty much underrated.


Innovation is being kept hostage by MBAs, marketing, PR and recruiters


How the heck am I supposed to pronounce that? "Free-en-gen-icks"?


Freen Ginks


F5 closing its Moscow office: is this a result of US sanctions?


I hope some people will find the time to help him.


This fork should use the Apache Foundation for its hosting and things.


Bravo!


Godspeed


It is scary to think about how much of web relies on projects maintained by 1 or 2 people.


Not that scary when you remember there are some systems that haven't been significantly updated for decades (e.g. the Linux TTY interface). A lot of stuff can just coast indefinitely, you'll get quirks but people will find workarounds. Also this is kind of why everything is ever so slightly broken, IMHO.


> Also this is kind of why everything is ever so slightly broken, IMHO.

OTOH, things that update too often seem to be more than slightly broken on an ongoing basis, due to ill-advised design changes, new bugs and regressions, etc.


The problem with bug-filled, frequently updating software is usually that they don't release changes fast enough, ironically.

Apple routinely holds back changes for a .0 release for advertising reasons. This means that they routinely have big releases that break everything at once. Bugs could come from 4 or 5 different sets of changes. But if they spread out changes… bug sources would be way more easy to identify.

And bug fix velocity going up could mean people stop treading water on bugs, and actually get to making changes to avoid entire classes of bugs!

Instead, people think the way to avoid bugs is to avoid updates, or do it all at once. This leads to iOS .0 releases being garbage, users of non-rolling release Linux distros to have bugs in their software that were fixed upstream years ago, and ultimately to make it harder to actually fix bugs.


As a user, my problem is that I receive functional or design changes that I didn't want and that make the software worse for me. So I tend to avoid updates. e.g. the last time I updated Android was for that webp cve. Otherwise I just want it to stay the way it was when I bought it, not how some new product designer wants to make it to show their "impact". Especially when it's things like "we're going to silently uninstall your apps (Google) and/or delete your files (Apple) and add nag screens when you turn off our malware (Google again) or add ads (Microsoft)".

I do regularly install updates on my (Linux) desktop/laptop because guess what? It consistently works exactly the same afterward. Occasionally new formats like jxl images just start working everywhere or something. But otherwise it has just continued to work unchanging with no fanfare for the last decade or so. It's amazing to me how much higher quality in that way volunteer software is compared to commercial software.


This means they should either push updates quickly on an ongoing basis, or not push them at all and provide service packs at regular intervals like Windows XP and 7 used to do.


I think that with things that don't update often, we just get used to the broken parts. People learned to save every five minutes in Maya since the app crashes so often, for example. Every now and then, a PuTTY session will fill the screen with "PuTTYPuTTYPuTTYPuTTYPuTTY[...]", but it's been that way for at least 20 years, so it's not that remarkable.


The "PuTTY" string is because a program sent it ^E: https://the.earth.li/~sgtatham/putty/0.67/htmldoc/Chapter4.h...


When I was in Systems/Linux Operations you wouldn’t believe how many tickets from other internal teams we supported that said “Putty is down” in the title. It never ceased to make me chuckle every single time.


if not a secret, where have you moved to and why, is it better/worse?


tangent but i havent seen that happen on any of my putty clients in years and i use it everyday, so i think that finally got fixed? or maybe was a side effect of something stupid


next question: why are people still using putty


Putty met my needs in 2004 and my needs haven't changed. It still works as good in 2024.

I'm not 100% sure when I started using putty, but I definitely used it in 2004. I still need a ssh client and terminal emulator for Windows. I still don't want to install a unix like environment just to have a terminal. I still don't want tabs in my terminal, lots of windows works just fine. I still need X11 forwarding so I can run programs on remote systems and display them on Windows (VcXsrv is an easier to get going X server than others I've used on Windows).

I might like to have something that can do whatever magic so I can do gcloud and aws auth on my remote machine without cutting and pasting giant URLs and auth blobs to and fro all the time; but I'm using an auth token that needs to stay connected to the Windows machine. In a more integrated corp environment this would probably be Kerberos/Active Directory/magic?


The difference in 2024 is that windows ships openssh client and server as a built-in optional component and it also ships a workable terminal emulator. No WSL needed in either case.

(But yeah I'm still using putty, too)


I started dropping PuTTY when WSL1 and later the native openssh client landed in Windows. I was missing the ability to use ~/.ssh/config, compared to PuTTY's GUI way of changing things, especially en masse, like updating the JumpHost for 10+ servers (saved sessions in PuTTY terms) with no inheritance of options.

So I haven't been using PuTTY since, I guess, ~2018 or so. Not insisting others should stop using it, of course.
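For anyone wondering what ~/.ssh/config buys over saved sessions, a tiny sketch (hostnames illustrative) where one edit changes the jump host for a whole group of servers:

    Host prod-*
        ProxyJump bastion.example.com
        User deploy
        IdentityFile ~/.ssh/id_ed25519

    Host prod-web1
        HostName 10.0.1.11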


Microsoft stopped shipping HyperTerminal, last I checked. It wasn't really worth the effort to make it do SSH.

I'm not really a fan of cmd or powershell, although I guess I could use them in a pinch. Wouldn't look like what I'm used to though. :p


HyperTerminal is for greybeards :)

What was meant is Windows Terminal


same. if i want a term, it's putty. windows shell and builtin ssh is a backup for when i am working from a foreign system


Because Windows does not have a good SSH implementation and PuTTY has always worked extremely well for me as a serial and SSH terminal (also, it starts up instantly and never crashed on me).

Are there any better alternatives?


Doesn't Windows ship OpenSSH these days?


It does. I use Windows Terminal and the native OpenSSH client literally daily.


I like having a library of hosts to choose and maybe multiple tabs in one place, and although there are some slightly less cumbersome PuTTY frontends like KiTTY (please keep your expectations very very low), I'll rather use WinSCP (no quantum leap in usability either). Edit: to those suggesting W10 command line - yes it's there and works, but it's just that, a command line, not much help when you have dozens of servers.


Windows 10 natively supports SSH as far as I can tell, I don’t use it a ton, but haven’t had any issues just typing ssh username@domain


Many people I know just use SSH from the WSL CLI.


You can run SSH from a Windows terminal without even having the WSL installed...


I do, like 99% of the time, and in quite specific cases from the host machine (Windows native openssh), mainly because my environment is in WSL in terms of dotfiles, command-line prompt, shell history, and so on.


That's a very limited terminal in terms of capabilities.

Then there's things like x11-style copy-paste.


PuTTY is from before WSL, and old habits die hard.


I used to use KiTTY, because it is more versatile.


They're used to it, tutorials online recommend it, admins install it out of inertia, some places have old Windows versions, etc.


Why shouldn't people use putty?

I still use putty because it does what I need it to do. No need to change just because MS has their own terminal application, which, besides, I'm far from trusting.


You trust them to run the entire OS and every stack included in it, but not to make an ssh client?


There's trust in the security sense, which yeah, you're stuck with the whole deal.

But there's also trust in the rely on sense. Which at least I try to compartmentalize. I can trust Microsoft (or Google) to make an OS I can rely on to run other people's apps. If Microsoft or Google want to provide apps, they'll be evaluated as they are, not with a bias because the OS provider shipped them.


The client shipped with Windows is literally OpenSSH.


I don't have a problem using OpenSSH, really. But I'm not going to use a Microsoft terminal emulator, unless it has some advantage over the terminal emulator I've been using for decades, when the Microsoft product has no advantage other than Microsoft included it with the OS so I can save a 3.5 MB download. Same reason I don't use Internet Explorer / Edge / new Edge or Windows Media Player. On a level playing field, I would never use those products (well that's not true, IE 3 was ok when it came out, IE 4 and 6 were good when they were new, but I don't have a time machine), so why use them because the field is unlevel.


There are some obscure settings that putty supports that other terminals do (did?) not. It's been a while so I don't remember all the details, but for example, some systems expected the DEL key and not Ctrl-C to interrupt. You can change the interrupt key with `stty` on other terminals, but it only allows setting the key to a single character, and DEL is typically an escape sequence.


It's great for serial and raw connections on Windows.


If you want to move fast, you must accept that things break.

If you want things not to break, you must slow down.

It isn’t reasonable to ask for these two things at once:

* lots of change

* stability


And the reasonable answer is to judge proposed changes in terms of their impact to stability, and make relevant risk/reward tradeoffs (and ideally including risk mitigation within the scope of the change).

The current milieu seems dramatically skewed toward churning out low-value changes without sufficiently considering the impact to stability, causing frequent breakage, and resulting in net negative value.


That only helps if it stays static. For example, if the Linux TTY interface was unchanged for decades to such a degree that nobody worked on it, but then had a vulnerability, who would be able to fix it quickly?


This already happened with the kernel console, no more scrollback. https://security.snyk.io/vuln/SNYK-UNMANAGED-TORVALDSLINUX-3...


I recognize it fixed a security issue, but nonetheless it's very inconvenient. I don't always have tmux at hand, especially when the system is booting in some degraded mode...


Perhaps someone with more knowledge can chime in. But, my impression is that there are vulnerabilities with TTY, it's just that we stay educated on what those are. And we build systems around it (e.g. SSH) that are secure enough to mitigate the effects of those issues.


SSH was a replacement for Telnet. But any weaknesses at the TTY level is orthogonal to that, right?

Unless you mean, having thin clients use SSH as opposed to directly running serial cables throughout a building to VT100 style hardware terminals, and therefore being vulnerable to eavesdropping and hijacking?

But I think when we talk about TTY we mostly don’t refer to that kind of situation.

If someone talks about TTY today, I assume they mean the protocol and kernel interfaces being used. Not any kind of physical VT100 style serial communication terminals.


I miss rooms of green and amber screen terminals hooked up via serial cable. As an undergrad I remember figuring out how to escape from some menu to a TTY prompt that I could somehow telnet to anywhere from. Later, I would inherit a fleet of 200 of them spread across 12 branch libraries. I can't remember how it worked except that somehow all the terminals ran into two BSDi boxes in the core room of the central library, and it had been hardened so you could not break out of the menus and telnet to arbitrary places. Over a year I replaced them all with windows machines that ran version of netscape navigator as the shell with an interface that was built in signed javascript. It was the early days of the web, and we had to support over 300 plug ins for different subscriptions we had. The department that ran the campus network didn't want to let me on the network until I could prove to them everything was secure.


SSH was a replacement for RSH, not telnet.


This was on HN two(?) days ago: https://news.ycombinator.com/item?id=39313170

> I wrote the initial version of SSH (Secure Shell) in Spring 1995. It was a time when telnet and FTP were widely used.

> Anyway, I designed SSH to replace both telnet (port 23) and ftp (port 21). Port 22 was free. It was conveniently between the ports for telnet and ftp. I figured having that port number might be one of those small things that would give some aura of credibility. But how could I get that port number? I had never allocated one, but I knew somebody who had allocated a port.

Emphasis mine.

Cheers.


https://docs.oracle.com/cd/E36784_01/html/E36870/ssh-1.html from man page: It is intended to replace rlogin and rsh, and to provide secure encrypted communications between two untrusted hosts over an insecure network.


Where does this idea come from? I see it repeated a lot, but it's not correct.

rsh was common on internal networks, but almost never used on the wider Internet. telnet was everywhere all across the net.

ssh was a revelation and it replaced telnet and authenticated/non-anonymous ftp primarily.

And also sometimes rsh, but less importantly.


How could it be incorrect? rsh was clearly modelled after rlogin, and ssh was clearly modelled after rsh.

The command line options were almost identical for an easy switch. ssh even respected the .rhosts file! Last time I checked, that functionality was still in place.

Both the rlogin-family of commands and the telnet/ftp-family were in use across the Internet, certainly in cases where Kerberos was used. I would think telnet was more common, certainly so outside the UNIX sphere of influence, but things like Kermit also existed.

They all got SSL-encapsulated versions in time, but Kerberos solved authentication for free, and for the simpler use cases ssh had already taken over by then. And in the longer run, simple almost always wins!


Agree that ssh was modeled after rsh. But rsh was a different kind of security problem, which wasn't really relevant on the wider Internet.

ssh solved the "pass credentials in cleartext over untrusted networks" problem. Consequently it replaced telnet and ftp. It also duplicated the functionality of rsh and rcp, so those protocols became irrelevant. But that was not the important goal.

> Kerberos solved authentication for free,

This made me laugh. Kerberos didn't do anything for free. :)

Even in Athena, Kerberos had reliability problems. In the wider world, it was very hard to find a well-managed Kerberos implementation. Things are different now!


I wonder how many of these things that are just coasting are gonna have issues in 14 years.


They're open source.


Nginx is still evolving a lot though.

Eg: http3 support was stabilized with 1.25.1 , which came out June 2023.


This isn't one though. I think the issue he is talking about is around the CVEs that came out with the HTTP3 implementation. This is an area of very active and complex development.


Not the web though


Certainly the web can mostly coast indefinitely. There are webpages from decades ago that still function fine, even that use JavaScript. The web is an incredibly stable platform all things considered. In contrast, it's hard to get a program that links to a version of Zlib from 10 years ago running on a modern Linux box.


> Certainly the web can mostly coast indefinitely.

I'm not sure about that, for anything besides static resources, given the rate at which various vulnerabilities are found at and how large automated attacks can be, unless you want an up to date WAF in front of everything to be a pre-requisite.

Well, either that or using mTLS or other methods of only letting trusted parties access your resources (which I do for a lot of my homelab), but that's not the most scalable approach.

Back end code does tend to rot a lot, for example, like log4shell showed. Everything was okay one moment and then BOOM, RCEs all over the place the next. I'm all for proven solutions, but I can't exactly escape needing to do everything from OS updates, to language runtime and library updates.


this problem -- great forward compatibility of the web -- has been taken care of with application layer encryption, deceitfully called "transport layer" security (tls)


The web is the calm-looking duck that is paddling frantically. Do you want to be using SSL from the 90s, or have IE vs. Netscape as your choice, etc.? Nostalgia aside!


HTTP 1.1 isn’t really changing is it?

That and a small collection of other things are standards based and not going though changes.


Sure, but HTTP3 was proposed in 2022.


Yeah but you can just continue to use HTTP/1.1, which is simpler and works in more scenarios anyway (e.g. doesn't require TLS for browsers to accept it).


You could have stayed with HTTP/1.0 as well. Or Gopher.


Without HTTP/1.1 either the modern web would not have happened, or we would have 100% IPv6 adoption by now. The Host header was such a small but extremely impactful change. I believe that without HTTP/3, nothing much would change for the majority of users.


But also, the only thing in most of the organizations I've been in that was using anything other than HTTP 1.1 was the internet facing loadbalancer or cloudflare, and even then not always. Oh yeah we might get a tiny boost from using HTTP/2 or whatever, but it isn't even remotely near top of mind and won't make a meaningful impact to anyone. HTTP/1.1 is fine and if your software only used that for the next 30 years, you'd probably be fine. And that was the point of the original comment, nginx is software that could be in the "done with minor maintenance" category because it really doesn't need to change to continue being very useful.


Maybe you just haven't been in organizations that consider head-of-line blocking a problem? Just because you personally haven't encountered it, doesn't mean that there aren't tons of use cases out there that require HTTP/3.


>Maybe you just haven't been in organizations that consider head-of-line blocking a problem?

I have not. It is quite the niche problem. Mostly because web performance is so bad across the board that saving a few milliseconds just isn't meaningful when your page load takes more than a second and mostly is stuck in javascript anyway. Plus everybody just uses cloudflare and having that CDN layer use whatever modern tech is best is very much good enough.


Sure, but there's video streaming, server to server long polling bidirectional channels, IOT sensors and all sorts of other things you probably use every day that can really benefit from HTTP3/quic.


Meanwhile my anaconda installation died after a casual apt-get update lol

I now believe that every piece of software should be shipped as a container to avoid any system library dependencies.


That is what Snap is for, but there are… issues


IME, the best software is written by "1 or 2" people and the worst software is written by salaried teams. As an end user, it's only the encroachment by the later that scares me.


Yep. IME the only way to make a salaried team of 10 devs work efficiently is to have enough work that you can split it cleanly into 5-10 projects that 1-2 people can own and work on autonomously.

Too bad every team I've ever worked on as a consultant does the opposite. The biggest piles of shit I've ever seen created have all been the product of 10 people doing 2 people's worth of work...


On one hand, projects developed by 2 passionate devs; on the other hand, a team of entry- to mid-level devs working on someone else's project for the money.

That team changes every 6 months when another company offers more money. If only one or two people are working on a project, that's a high risk for the company.

If you got one or two highly skilled people in that team of 10, you are lucky. Managers don't want them to work alone on their project, they want them to help the team grow.


Yes and no. Small 2-person teams are vastly more efficient, but who will take over when they quit/retire/die? Larger teams have more continuity, I think.


I don't worry when it's open source, as if it's that valuable someone will pick it up, or corps would be forced to. I do wish those 1 or 2 devs got more support monetarily from the huge corps benefitting.


It's not that scary. If a project everyone depends on is broken and unmaintained, someone else will manufacture a replacement fairly quickly and people will vote with their feet.

NGINX is the de facto standard today, but I can remember running servers off Apache when I began professionally programming. I remember writing basic cross-browser SPAs with script.aculo.us and prototypejs in 2005, before bundlers and react and node.

Everything gets gradually replaced, eventually.


I still deploy Apache httpd, because that’s what I know best, and it works.


You can also probably host without a reverse proxy. Also there are alternatives like Caddy. IIS!! And I imagine the big clouds would swoop in and help, since their expensive CDNs and gateways rely on it, or maybe the Kubernetes maintainers, since most likely they use it.


Best memberberries ever


For the vast majority of use cases nginx from 10 years ago would not make a difference. You actually see the nginx version on some html pages and very often it's old.


nginx from 5 years ago has some pretty nasty actively exploited CVEs.


Not scary at all. I think much better of such projects compared to ill-functioning multi-people projects which get worse and worse over time.


>> It is scary to think about how much of web relies on projects maintained by 1 or 2 people.

This is one reason maintainability is very important for the survival of a project. If it takes an extra person to maintain your build system or manage dependencies or... or... it makes it all the more fragile.


HTTP/1, HTTP/2 and HTTP/3 are huge standards that were developed, considered and separately implemented by hundreds of people. It's built in C which has an even more massive body of support through the standard, the compilers, the standard libraries, and the standard protocols it's all implemented on.

1 or 2 people maintain one particular software implementation of some of these standards.

It's interesting to think of what a large and massive community cheap and reliable computation and networking has created.


I mean at that point you might as well talk about the people building microchips and power plants. You can always abstract down, but you're ignoring the fact that nginx is ~250k very important LOC with huge impact on the world. That is non-trivial in its own right.


Exactly, you might as well. It compares precisely to the original hyperbole that two people are somehow the linchpin to the entire internet.

For example, how much of that code is the mail server component and how much is the http component? How much does http/1, or http/2 or http/3 take up? How much of that is necessary to keep the internet actually running?

I'm not suggesting it's trivial but the original perspective was highly overblown. To think of it another way, if these two men died tomorrow, how much of an impact would it actually have? Some, to be sure, but the internet wouldn't even notice.


Evergreen xkcd is evergreen. https://xkcd.com/2347/


That's why they work well. Not corrupted by corporate systems or group governance. Individuals have better vision and take responsibility.


I think if 2 people designed most of the world’s water treatment plants, that’s not scary.

If 2 people are operating the plants, that’s terrifying.


We detached this subthread from https://news.ycombinator.com/item?id=39373804. Nothing wrong with it (well, it's a generic tangent but not a nasty one), but I'm trying to prune the large thread.


It is also why companies don’t buy SaaS services from single founders or small companies where risk of key people leaving is high impact.


Expand on that comment for me, because it has high impact. I dont doubt the surface logic, but the implication is that to succeed in B2B SaaS, you _must_ be sufficiently well funded to have a decently sized staff team. That is, there are no organic 2 person startups in B2B SaaS. Is that really true?

(Obviously once bigco buys such a startup's offering, that startup needs to hire, fast)


You probably can get your foot in the door with a $500-a-month recurring payment if some dev/employee wants to try stuff out and their manager puts in a credit card.

But that is peanuts, basically no different from B2C for me, and not something you can put on the "customers that trusted us" banner on your landing page.

If you want a big company to rely on your services and have 50-100 users, each seat paid at $500 a month, from a single company, that is not just some manager swiping a CC; for that you have to have a team and business continuity.


This is your semi-annual reminder to fork and archive offline copies of everything you use in your stack.


There's plenty of copies of the code. That doesn't help with the actual problems with the setup.


Obligatory XKCD: https://xkcd.com/2347/


[flagged]


0000?


And physical access to the football.


Had a nuclear launch code. He doesn’t remember it anymore.


Relevant Xkcd comic: https://xkcd.com/2347/


Just use Apache


As someone who used Apache 1.3.x through 2.x heavily from 2000 to 2015, I respectfully disagree with this statement. Nginx and Traefik are easier to configure, have better communities and in most cases perform better.

Traefik Opensource is my go-to for almost all of my use cases these days, and I have never stopped and said, hmm, I wonder if Apache would do better here. It is that good.


Apache still can't work as a reverse proxy for servers that utilize unix sockets.


NGINX are FSB shills.


I don't understand why some people use Russian software! Especially in this day and age.


In some cases there are no great alternatives that fit the needs. I have not found anything that matches LFTP, using the mirror subsystem over SFTP and connecting to chroot'd SFTP servers. It replicates the behavior of rsync in a chroot, SFTP-only environment. The only downside is that, since there isn't a syncing daemon on the other side, directory enumeration is much slower. File transfers are far faster, however, as it can open as many SFTP sessions as desired for batches of files, or even for one big file, with the only limit being the bandwidth from client to server.
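
For the curious, a minimal sketch of what that kind of LFTP mirror run over SFTP can look like (the host, user, and paths here are made-up placeholders; --parallel moves several files at once, --use-pget-n splits a single large file across multiple connections):

    # hypothetical example: mirror a remote chroot'd SFTP directory locally,
    # using up to 8 concurrent SFTP sessions and 4 segments per large file
    lftp -u backupuser, \
         -e 'mirror --continue --verbose --parallel=8 --use-pget-n=4 /data /srv/mirror; quit' \
         sftp://files.example.com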

As for nginx, I have been able to make use of HAProxy and Apache just fine. Long ago Apache was slower than nginx, but ever since APR 1.7 and Apache 2.4 they are about the same performance-wise. Some here don't like the configuration syntax, but I am used to it.


While I may share the general sentiment, there's a freaking lot of "Russian software" out there which you may knowingly or unknowingly use.

There is JetBrains, for example.

But there is also core-js, a little polyfill library used by well over half of all high-profile websites. It, too, is written by a Russian national.

If you excised all contributions by Russian nationals from PostgreSQL or the Linux kernel, they would be left in a not very runnable state, I'm afraid.

On the other hand, it's not like you are giving them money directly, unless you do; I can also see that in, say, both Linux and PostgreSQL there are enough people from the "geopolitical opposition" that even if the Russian contributors were asked by some stern people from the Apparat to sneak in something backdoory, it would be sniffed out rather quickly and prevented from going much further.

So the tl;dr is that there is no simple response.


Regarding JetBrains - both owners have renounced their Russian citizenship and are now citizens of Cyprus: https://www.forbes.com/profile/valentin-kipyatkov/ Plus the HQ is in Prague, Czech Republic: https://www.economist.com/europe/2021/05/27/russia-puts-the-...


Interesting. They founded the company in the Czech Republic but chose to become citizens of Cyprus.


Likely because Cyprus can make you a citizen, figuratively speaking, overnight, for enough money. Czechia wants you to spend some years there and integrate first.


I think being open source is important here; I don't care if you're German or Russian or Finnish or Chinese or what your government's policies are, as long as we can inspect what's going on. "Trust but verify".


You can still sneak in nefarious stuff if you have a high enough reputation score that most people who actually put their eyeballs on the code tend to trust you blindly, especially if development on the project is so active that nobody has the bandwidth to inspect all the changes.

There is also the mighty bystander effect at play: surely, someone else is going to look at it. Someone else will have time to test it. He's our hero, the Someone-Else-Man!

Mind you, it only takes getting caught once, and your mountain of reputation will poof out of existence in an eyeblink. This is the price.

And asking to downplay a vulnerability "because it's in an experimental module not built by default" would make me suspicious on the simple grounds that even if a module is experimental, you ship it alongside your stable code, and for sure someone builds it and is using it. Depending on who those users might be, there could also be parties interested in them not patching the vulnerability for as long as possible.

This sounds paranoid for sure, but your being paranoid doesn't mean there's nobody out to get you!


I never said it's perfect, but at least I have an opportunity to inspect things.


Yes, this is exactly the counter-argument I always make whenever someone says "but there are never enough eyeballs to inspect open source, see Heartbleed, so why bother, it's not better", blablabla.

But at least I have the option, dammit! Contrary to proprietary software, where your problem will be diligently filed into a ticket, given a number, and left to rot.

Which doesn't change the fact that people are lazy and do turn a blind eye... :-( and sometimes the Someone-Else-Man won't come and save the day. But that's just life.


Jetbrains is Czech and most Czech people would get offended by your characterization of them as Russian.


There are quite a few companies and quite a lot of real estate in Czechia owned by Russians, even if said Russians hastily changed their citizenship in 2022.

It’s sometimes beneficial to pose as an EU-based business if a purely Russian business was either sanctioned or considered too risky/dirty/shady to deal with.

So while Czechs don't like to be equated with Russians, not all of them would quite sing „běž domů, Ivane" ("go home, Ivan") or share the feeling.


Sales, yes. Development, however, was in Saint Petersburg until not so long ago.


Because they aren't short-sighted like some others?


Well, maybe this core dev can implant some better malware into it and update the defaults.

Nginx loves to pretend it's 1995. It barely has HTTP/3 support and does insanely stupid things by default.

No wonder people move to HAProxy, Traefik, Caddy, etc. Cloudflare doesn't use it anymore, for good reason.


There is no news other than this individual post. I wish he had described it more. It says it is free, but where is the GitHub page for it?


> It says it is free but where is the github page for it?

Not sure if serious, but you do realise that free is not at all about having a GitHub page?

Maxim has been working on nginx for years and just forked the project so that he can continue working on it. The license remains the same as the original nginx project and you can already download its sources here: https://freenginx.org/en/download.html


I honestly didn't see the download button. I thought the web page was broken because the design looked super ugly and untrustworthy. My first instinct was to ask for a repo here.


They don’t use GitHub

http://freenginx.org/hg/nginx
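
If you just want the source locally, a Mercurial clone is a one-liner (assuming hg is installed; the log command is just to peek at recent history):

    hg clone http://freenginx.org/hg/nginx
    cd nginx && hg log -l 5    # show the five most recent changesets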


There's a world of software out there that's not on github or using git.

"I don't always git clone, but when I do, it's hg clone"


I have mixed feelings about GitHub's dominance. They have created a Facebook for devs and raised collaboration on software to a new level, but I can't help but feel like I'm renting storage space in someone else's private shop when I use them. Yes, you get engagement; yes, you get one link to share your dev profile and timeline grid in a CV; but it's a for-profit business run by MS.


“but I can't help but feel like I'm renting storage space in someone else's private shop when I use them”

I’ve been looking for the words to put to that feeling myself but was unable to pinpoint it so well.

I loved GitHub at first. “Look at all the cool stuff I made” was kinda a way of showing my capabilities (and is still a great way today!) but somewhere along the way it became a platform for egos and star stroking and blind following into the nights. They improved their search but it could be so much better. Not everyone has a graphic designer on staff to make pretty README.md’s


I'm much less concerned by this because GitHub seems to have the lowest vendor lock-in of any platform. If you want to switch platforms, it should be as easy as changing your upstream and pushing. Switching from MySpace to Facebook never looked like that.
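
As a rough sketch of what that switch can look like in practice (the remote URL below is a made-up placeholder for whatever non-GitHub host you pick):

    # point the existing clone at the new host and push the history over
    git remote set-url origin git@git.example.org:me/myproject.git
    git push --all origin     # branches
    git push --tags origin    # tags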


Sure, you can switch, but will others too? That's the problem: once something gains a dominating critical mass, individual actions stop mattering. You can replace Google with another search engine as easily as opening a new tab; does that make its dominance any less scary?


>> I have mixed feelings about GitHub's dominance.

Yep - I remember what happened with SourceForge/VA Linux. I actually paid for GitHub when it first came out, just to fund it.

Still makes me nervous, tbh.


Good old times, freshmeat.net also ;)


I was working at a startup where a potential VC sent over their questionnaire with a question about where the GitHub repo was located. Since we were not using GitHub but a totally different Git hosting service, I was forced to move the repo to GitHub just because of this question.

Some people just don't have a clue and only know buzzwords.


Not everyone uses GitHub. They are using something else: https://freenginx.org/en/docs/contributing_changes.html



They are using mercurial! This is such a breath of fresh air.


Fossil would be a breath of fresh air, and I use mercurial at work.


The source code repo is here; not everything needs a GitHub account to be free: http://freenginx.org/hg/nginx



