Do svidaniya, Igor, and thank you for Nginx (nginx.com)
1384 points by nrvn on Jan 18, 2022 | hide | past | favorite | 346 comments



I really take for granted how well Nginx works across a number of web backend functions.

Some of the container/orchestration world has tried to supplant the need for it as a reverse proxy, but you get so many goodies out of the box just by sticking this in front of your app, and for very little overhead.

I remember the pre-Nginx days and all of the struggles people routinely ran into with options like Apache or other reverse proxy tools.


I mean there was lighttpd before nginx and they have/had pretty similar structures, weights, etc.

I feel like I knew at one point why it got so thoroughly supplanted by nginx but I don't remember now why that happened.


The difference is that nginx really works. I had Panoramio, a photo website featured in Google Earth / Maps, using Apache. It started to fall over under load, and I quickly switched to lighttpd. It was faster, but it kept crashing, hitting OOMs, etc. I fixed a memory leak and a few more bugs, but it still crashed every now and then, so I looked for alternatives.

This was 2006 and nginx was the only realistic alternative on the market. It worked beautifully since day 1. It saved my startup. Next year we got acquired by Google.

I only got one crash with nginx, and it was partially my fault. I had an "expires 30y" on some images, and one morning in Feb 2008 I came to the office and the whole site was down. After a very quick gdb session under panic, I realized it was trying to get a weekday name from an array with a negative index. Nginx was adding 30 years to the current date, which put it past 2038 and overflowed. Igor fixed the issue in hours, and he graciously explained that I could have used "expires max" instead.

Nginx has powered all my startups since then (Freepik, Flaticon, Slidesgo, Besoccer).

This guy has added more real value to the economy than most unicorns. A true hero.


Panoramio, Freepik and Flaticon? Man, you just collapsed what I thought was an early Spanish startup success story and two different US corporations into a single person :D Maximum respect.


Wait, you made Flaticon? I would like to say thank you. Before I truly got into software I was a humble associate consultant, and I honestly don't know how I would have made all those decks without you.


Thank you! My partner Alejandro Sánchez is actually the one who had the idea for Flaticon, and Fernando Fernández did most of the initial implementation. When we hired Fernando he was flipping burgers at Burger King :)


No thank you. I honestly don't know where I would be without you and Fernando. Those initial presentations gave my bosses the confidence to let me hang out with the engineers and start messing around with the code base even though I didn't know how to code. A few years later and I had my first CS paper published.

From the bottom of my heart, thank you.


How did you find Fernando?


IIRC we posted an internship. He was doing vocational training and applied. He didn't have any previous experience, but he was good in the interview. After the internship we hired him.


At the Burger King :-)


Panoramio was so good. I had photos there. People wrote me comments. Then Google just killed it. Fuck them.


Major Panoramio fan here. I have traveled a lot of places on that site. Thank you for making it :)


Yeah thanks to this thread I am definitely now remembering running into issues with memory usage and crashes with lighttpd.


Thanks for making Freepik


Panoramio user here! Big fan!


panoramio was amazing, thank you! a shame it was shutdown by g*gle.


The performance engineering in NGINX back then was really quite something.

This classic 2007 tutorial starts by pointing out that NGINX parses the HTTP verb by looking at the second letter first, so that if it's O it knows to check for POST or COPY!

https://web.archive.org/web/20070505051653/http://www.riceon...


Doing Boyer and Moore proud.

I think that if you want to support all verbs, you face at least 3 'ambiguities' whichever of the first, second, or third characters of the string you check first. (It must be one of the first three, as the shortest verbs are 3 characters long.)

First checking the first character is ambiguous between POST, PUT and PATCH. First checking the second character is ambiguous between HEAD, DELETE, and GET. First checking the third character is ambiguous between GET, PUT, OPTIONS, and PATCH. [0]

edit As danachow points out, the verbs are not all used with the same frequency. If real-world performance is the goal we'd presumably want to optimise for the GET case, which presumably means first checking the first character, as the 'G' is unique to GET.

[0] https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods


A bit of a micro-optimization, but couldn’t you interpret 4 bytes (including the trailing null on the three-letter verbs) as a 32-bit unsigned int and then do integer comparisons or a case statement?


NGINX does exactly that (after finding the space character after the verb):

https://github.com/nginx/nginx/blob/7587778a33bea0ce6f203a8c...

There are several points to note: this trick only works if the architecture supports unaligned memory access; macros are used instead of functions, so you don't have to rely on the compiler to do inlining; and the shifts and ORs should get optimized out by the compiler. I once tried something similar in C#, but sadly the JIT doesn't do that optimization.


> This trick only works if the architecture supports unaligned memory access

Presumably then it's relying on undefined behaviour.


Pretty sure it's just implementation-defined, in the sense that the implementation defines the allowed alignment, if nothing else (i.e. on x86 the allowed alignment is 1 byte even if that's not optimal).


I don't think so. What I can find on this topic with a quick search points to it being outright UB.

https://stackoverflow.com/a/47622343/

https://stackoverflow.com/a/28895321/

https://blog.quarkslab.com/unaligned-accesses-in-cc-what-why...


Ah nice! I was interested in looking but don’t know the codebase.

That method does look better. My idea assumes that the method has already been checked to see if it is a valid method, which has its own cost. Otherwise, POSABC would be parsed as POST. Their method does that more or less on the fly.


You could add a computed goto to really spice things up.


call it "adventure plumbing"


This is close to what nginx does today, if your platform supports unaligned reads. The check for 'O' is a specific fast-path only for HEAD, most others are done via ordinary comparisons.


Most requests in reality are GET. So if you first check for G,

  if (m[0] == 'G') {
      r->method = NGX_HTTP_GET;
      break;
  } else if (m[0] == 'P') {
      switch (m[1]) { ... }
  } else if ...
this should be fast enough.


> this should be fast enough.

'Should' implies uncertainty; 'fast enough' implies there are no speed constraints or requirements. This particular bit of code - checking what request type is coming in - will be executed trillions of times across millions of servers across decades; it's the kind of thing you want to be as fast and secure as possible, so it's worth removing this uncertainty and guesswork about performance.


How would this work if the method field was something like "Gibberish"? Would this have any security vulnerabilities?


Yep, you can generally get request splitting style vulnerabilities especially as a reverse proxy if you misparse reqs. (And sometimes even if you don't, one reason a rev proxy architecture is somewhat insecure).

Hopefully the actual code does a full parse after guessing what to try by the character test heuristic.


If it's "Gibberish" I'd suggest it's safer to treat it as a GET request regardless.

Something I don't see many people talk about, but I've set up web instances with both read and write clusters, similar to DBs, and you get the nicety of more oomph from the read cluster by making sure it only hits read DB nodes and disabling any framework machinery for tracking changes on a model.


"Safer to assume" is making a lot of assumptions already. What if nginx itself or some other application behind it makes assumptions about the maximum length of a request method, and it causes a buffer overflow?

I expect anything that talks HTTP to reject requests with invalid HTTP methods. HTTP is a well-known standard and servers do not need to accommodate sloppy, wrong, or malicious implementations.


Did you experience any read after write inconsistencies that present as heisenbugs?


Would you expect that to be more of an issue under GP's method than with traditional sharding and replication? Presumably writes are then replicated to all instances, just not all instances accept them.


Seems likely an optimal algorithm could get there just by looking sequentially at the first three letters in order. Would be an interesting bit of code golf.


These tricks are cute, but at the time nginx's performance came from much more fundamental design decisions.

First, the async model was literally years ahead of Apache. It leaned heavily on interfaces like epoll to manage large connection pools with a small number of processes, while Apache still used a thread or process per connection.

Second, it removed exactly the right features - those with minimal benefit and high performance impact. The classic example is .htaccess, which adds (at least) one stat to every single request, but in practice was only needed for the horrible multi-tenant LAMP reseller setups of the day - everyone else was fine with static centralized configuration.


More on nginx vs apache here, for those interested:

The Architecture of Open Source Applications - volume II - nginx:

https://www.aosabook.org/en/nginx.html


note: this thread is about nginx and lighttpd. I don't think anyone is under any illusions that apache was way behind the curve either of those were setting.


Nice! I can't be the only one who went looking and spent way too much time trying to acquaint myself with C :)

Here's where I ended up: https://github.com/nginx/nginx/blob/363505e806feebb7ceb1f9ed...

PS, I'm not totally sure, but they definitely use the count of letters as an optimization, and it seems they increment the bits associated with each type, so the order of the bits behind each NGX_HTTP_GET etc seems to matter...!

https://github.com/nginx/nginx/blob/67d160bf25e02ba6679bb6c3...

Someday I will understand :)


It's been a while since I have C/C++'d so some of my terminology is probably wrong.

ngx_http_v2_parse_method iterates through all the tests, starting with the first test (GET).

They compare request method string length to test string length then on matching lengths compare each character in the request method string to the test string.

Fully matching strings set the request method numerical value from the test value and returns OK.

Any non-matching characters GOTO the next test.

After that it does a sanity check on the request method string characters (A to Z or _ or -) and returns OK or DECLINED as appropriate.

For the macros defining the HTTP method numerical values I think they're set up that way as bitmasks for bitwise operations.

For example, they do things like[1]

  if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD)))  
Here the HTTP request method value is bitwise ANDed against (0x00000002 OR 0x00000004).

Any non-zero result is true and a zero result is false. So if the request method's bit matches either the GET or HEAD bit, the AND is non-zero and the negated conditional is false.

[1] https://github.com/nginx/nginx/blob/a64190933e06758d50eea926...


Yes, nginx code is something you spend a lot of time studying and hope one day you can do as well.


Does it really look at second letter first or is that snippet taken out of context (it isn’t implied that it does in that email, just that it doesn’t use a library function) ? Since most requests are GET it still makes sense to handle that case first. Though after trying to common cases looking at the second letter for the P subcases may save some branching.


For those who want to check the current implementation: https://github.com/nginx/nginx/blob/429150c1fa78317bdb19de38...


From my limited experience, lighttpd has non-stellar documentation and the community (including devs) is kinda rude. nginx has better documentation and a much more welcoming support community. At least for the tasks I use it for (proxying a bunch of random services on a home server), the syntax is a lot easier than lighttpd's, and it's easier to bring in goodies as modules that would require recompilation on the lighttpd side. It might be a bit of a chicken-and-egg problem, but the nginx binaries are also a lot more up to date in the package managers I used. On a Raspberry Pi I ended up needing to compile from source to get a modern version, which got kinda annoying.


Not sure which year (decade? :P) you're talking about but in the beginning (before 2010, maybe 2007-08?) Jan was still kinda involved in the German PHP scene and I can't imagine that to be true, I only remember good interactions and we were one of the heaviest users of lighttpd back then. But the docs were never that great, and it seemed to be a one man show, later with a very small team, that's when I think it stalled and only picked up pace years later.

But there indeed came a time (maybe 2010ish?) where nginx took the lead with great strides and most people (even the die-hard fans) mostly moved to nginx; that's about when it was clear that it would probably win and stay for the foreseeable future. Back in those circles at least (see some other comments for the PHP-FPM story), Apache was only kept for setups with lots of other dependencies, like mod_svn or WebDAV; if you "only" needed a webserver to front PHP, it was nginx.

I also remember many people holding out on adopting Apache 2.2 for a looong time.


Years ago, I was working at a small startup here in Austin (named “Ihiji”), and it was my job to completely rebuild their cloud systems infrastructure. They wanted to use lighttpd instead of Apache. It took a while, but with a bit of a kickstart from a friend who worked at Opscode Chef (thanks, Matt Ray!), I was able to get that done.

With Apache, at their current max load, the systems would just completely fall over, but with lighttpd, at that same load, the system breathed a little hard. I could push lighttpd 10x more before it fell over.

Then we started looking at replacing the haproxy solution, and I looked at nginx. I tried it out. I also tried out replacing lighttpd with nginx. And no matter how hard I pushed nginx, I couldn’t get the damn thing to breathe hard. I couldn’t even push the load average up over 1.0.

We went back to lighttpd and haproxy because those tools gave us better monitoring and logging, but we always held nginx in our back pocket in reserve, in case we needed another 10x beyond what lighttpd+haproxy could do.

And yes, I did get invited to Edinburgh to do a nice little talk on the subject.

They had their ups and downs over the years, but ihiji did end up getting acquired by Control4, and the founders are now off doing other weird and cool things.


It was believed that lighttpd leaked memory quite badly, and there was also a period of time when development/updates on lighttpd dramatically slowed down. Neither was a concern in the early days of nginx.

https://serverfault.com/questions/330413/lighttpds-memory-le...


My memory is hazy but I think I remember running into an actual memory leak in lighttpd circa 2008.

We were serving dynamic content via FastCGI, IIRC.

This was a long time ago and I'm pretty hazy on the details, but I'm pretty sure I remember finding memory leak bug discussions on the lighttpd website around that time, and no clear answers on how to avoid it.


Back in 2008, if you had a FastCGI backend that produced a large body, such as a file download, lighttpd would allocate the same amount of memory and forget to free it afterward. Everything would work fine for a while, but then someone attaches a large file on your busy PHP forum and bam! Your 720MB linode is thrashing like there's no tomorrow.

The proper workaround was to send an X-Sendfile header to instruct lighttpd to fetch the file itself, instead of serving the content directly from the backend. It was also more efficient, but it required changes to backend code that made it less portable. I don't know if the bug was ever fixed as lighttpd development had slowed to a crawl and nginx arrived just in time to take away all the market share.
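For reference, the lighttpd side of that workaround was a one-line opt-in on the FastCGI backend definition (socket path illustrative); the backend then sends an X-Sendfile header naming the file and an empty body, and lighttpd serves the file itself:

```
fastcgi.server = ( ".php" => ((
    "socket"            => "/tmp/php.sock",
    "allow-x-send-file" => "enable"   # trust X-Sendfile from this backend
)))
```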

The introduction of PHP-FPM around the same time was another factor that favored nginx, because lighttpd typically integrated with PHP using a more fragile setup called fcgi-wrapper. PHP-FPM was much nicer to work with, and nginx could even load-balance across FPM pools. A lot of WordPress sites switched to nginx and never looked back.


Just to say thank you for this closure ~15 years later. I was on team lighttpd back in the day (I think I liked the config syntax more? I honestly can't remember) but this bug caused a lot of grief and at the time, it appeared like no one knew the cause. Still, lighttpd/fcgi was still better than the epic custom Apache C++ module disaster in the previous iteration.


Yeah, the mod_fastcgi/mod_fcgid split on the Apache side was a total mess. The Apache modules also took on the thankless task of spawning and managing PHP-CGI processes themselves, whereas nginx had FPM take care of it. IIRC lighty's spawn-fcgi wrapper design was halfway between these two approaches, but it wasn't particularly stable.


Thank you for explaining this so clearly. Like the other poster commented, this gives me a sense of closure.


I used lighttpd from 2007 until early 2009. Then the company I was working for at the time went to managed hosting for about a year, and during that time, we had to use Apache (possibly even with the prefork MPM). When we decided to go back to self-managed dedicated servers in early 2010, I chose nginx. What I remember from that time was that nginx was rising in popularity, and it was being used by WordPress.com. Also, unlike the 2007 setup, this early 2010 setup had a load balancer with three application servers behind it (using FastCGI I think), and if I remember correctly, it was easier to do that with nginx than lighttpd.


Lighttpd has received very nice updates lately (HTTP/2, etc.) and I use it daily for mission critical servers (facing private customers) behind HAProxy, Varnish and the like. No problems so far after 5 years.


I like Traefik, not sure if it "as light", but it works well for me in the personal setting, I use it just by adding some lines to my docker-compose file, no further configuration required. It sits in front of several services and automatically uses Let's Encrypt for certs.


I remember seeing performance comparisons between lighttpd and nginx back in 2006 or so, and I'm pretty sure nginx was able to handle more requests per second & had lower memory usage.

Back then people were getting into using VPS services like linode, and memory usage mattered a lot.


lighttpd consumes about 4MB while nginx takes 8MB, so on deeply embedded devices lighttpd is still preferred even today.


I’m sure that when it was only Nginx and Apache for reverse proxy options it was the only way to go, however, these days for rev proxy, I prefer HAProxy for enterprise and Caddy for personal stuff..


I tend to use haproxy only in situations where I don't have access to a 'proper' load balancer (e.g, something which does tcp connection state failover and all that jazz); make it listen on lo0 on each client server, who then just talk to 127.0.0.1:whatever..

It does mean you are doing health checks from each client service/app, but it only eats a few mb of ram, and then you don't have to deal with making your haproxy service HA :}


Use Caddy for enterprise stuff, too. Enterprise deployments deserve memory safety, too.


In January 2021, Nginx overtook Apache as the #1 web server on the Internet that users can install:

https://i.imgur.com/pjU1G61.png

https://trends.shodan.io/search?query=http+port%3A443#facet/...

I'm a bit surprised it didn't happen earlier as it feels like it's been the dominant choice for tech people.


Too many shared hosts with LAMP not updating their stack in ages, I suppose.


WordPress had some say in this, I think.


Definitely.

I've founded and grown several webhosters, one specialised in WordPress. Our HTTP stack was varnish->nginx(loadbalancer)->nginx->phpfpm.

It was a pain. Not even WordPress core could (can?) run all its features; e.g. the SEO-friendly-URL thing relied (relies?) heavily on - I kid you not - rewriting the .htaccess file from the CMS. Really: the CMS rewriting webserver configuration files from the web.

Let alone all the plugins and themes. The community of plugin and theme devs is generally professional, but there is a staggering amount of stupidity to be found. Like a payment-processing plugin that would write all its payments into [bankaccount-number].txt files. Web-readable. Obviously a severe security breach for one of our clients. The plugin devs' reaction? "Not a bug: we include a .htaccess that denies access to those text files. So no one can read them but the plugin." I can't even...

Point being: WordPress is highly coupled to Apache. If you want smooth experience of hosting, just go for Apache. Or don't use WordPress. I'd advise the latter.
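For the curious, this is the classic block WordPress core has rewritten into .htaccess for pretty permalinks over the years, alongside the usual nginx counterpart, which is a static one-liner the admin writes once (exact rules vary by version):

```
# .htaccess, as written by WordPress itself:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

# nginx equivalent, configured statically by the admin:
# location / { try_files $uri $uri/ /index.php?$args; }
```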


I don't recognize anything about the WP Apache marriage.


With nginx becoming more popular this may have changed.

But I just looked, and the official lesson on "installing wordpress" mentions only Apache[0]. As does the "how to install WordPress"[1]. Though checking a recent download, I see that it no longer ships with .htaccess by default, so apparently things are changing.

[0] https://learn.wordpress.org/lesson-plan/local-install/ [1] https://wordpress.org/support/article/how-to-install-wordpre...


It still surprises me that NGINX beat out Apache so quickly even though Apache had way more modules and was/is entirely free vs. NGINX which is more or less "open core" with some nice features requiring commercial licensing.


On the other hand, the unreadable weird-ass pseudo-XML configuration files of Apache made anyone touching them wish for something better.

I also expect ngx_lua did a lot for adoption, the fact that you could always "shell out" to lua if you needed was a huge boon even just for peace of mind.


> On the other hand, the unreadable weird-ass pseudo-XML configuration files

If I have one gripe about NGINX it's that its configuration is a still-half-baked DSL with quirks you wouldn't expect, and when you hit an error you don't get great feedback.

Examples: You can have an if clause, but no else attached. You can't have an if clause with multiple conditions. Finally, "if [might be] evil." 1

You end up writing a bunch of partitioned control flow statements and you're never really sure at what level of config hierarchy they would best be applied.

I love the product, but NGINX's semi-declarative, hierarchical blocks aren't night-and-day better than Apache's XML.

1 https://www.nginx.com/resources/wiki/start/topics/depth/ifis...


I agree with you completely. Nginx's config syntax is better than Apache's but it still feels like mystery meat. Can you use this directive or option within this block? Maybe, maybe not. If not, why? Who knows. It's just not allowed to use map within a location block and that's just how it is, okay?

My dream web server has Nginx's capabilities and Lighttpd's Lua configuration files/scripts. Is that what ngx_lua does? I've heard of it before but never really gave it a look.


With the rewrite and map blocks it is maybe a little easier for you to write fewer if statements…. https://stackoverflow.com/questions/47724946/nginx-rewrite-b...


Oh I've used these plenty but there are still conditionals which sometimes require or are most clearly defined with if statements, particularly complex redirects that rely on a number of individual conditions to be met.

I've seen these manifest in the wild as stuff like:

  if ($thing ~* (match)) {
      set $setWeirdVar "Y";
  }
  if ($otherThing = "value") {
      set $setValue "${setWeirdVar}E";
  }
  if ($thirdCondition ~ (another|match)) {
      set $setValue "${setWeirdVar}S";
  }
  if ($setValue = YES) {
      # do a thing here
  }

As clunky as that is, I've found it recommended in SO threads.
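One commonly recommended alternative to such chains is to compute the flag once with a map on a combined key, outside any location (the variable names here are illustrative):

```
map "$thing:$otherThing" $do_thing {
    default          0;
    "~*match:value"  1;
}
```

Since map is evaluated lazily at the http level, the result can then be tested with a single if inside the server or location block.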


To be fair, NGINX config is not better: an ad-hoc grown soup of syntax without a clear concept to govern it all.

I would prefer a simple JSON file any day. Or some Lispy S-expressions. Or some TOML or well structured XML and XSD even.

NGINX makes you learn another language just for one tool and its config, which mostly (always?) does not need to be anything more than a declarative config.


No JSON, please. You can't have comments. A JSON config would be a deal-breaker for me to use a server.


Oh gosh, I had to try and figure out an Apache config file some time last year - it was a real slog trying to figure out what was happening thanks in no small part to the poor documentation of their pseudo-XML.


You can do similarly in Apache with the Perl sections...

    <Perl>
    # dynamic perl config goes here..
    </Perl>


It should be remembered that NGINX is used as a reverse proxy for a lot of servers behind the scenes. That NGINX is the web server identified up front doesn't mean as much as it might because of this architectural construct. I use NGINX to front sites that have Apache on the back end, and as a result, the Internet spiders think my websites are running NGINX rather than Apache. NGINX is incredibly easy to configure as a reverse proxy, image router, and SSL front-end. Thanks, Igor.


> That NGINX is the web server identified up front doesn't mean as much as it might because of this architectural construct.

The exact same argument can be made to explain why nginx is undercounted. A lot of setups will run nginx behind proxies, so you'll count a proxy: a Varnish, a single nginx, cloudfront servers (are they running nginx?) while in reality there may be many nginx-es running.

Nonetheless: nginx is a gift, and thanks go out to Igor, regardless of how well the spiders can count the number of nginx instances.


Granted I've done exactly this before, but why put Nginx in front of Apache? In my experience it added headaches without any real benefit.

(Unless you don't mean Apache webserver but rather some other Apache product)


Nginx manages a ton of connections better and can serve static files very fast. It can then multiplex the dynamic requests into fewer connections to Apache. If you mean why not only use nginx, I would guess that's easier than changing your legacy systems to use nginx (e.g. if you have a ton of htaccess files). It's also possible you got better performance with mod_php although most people seem to claim that php-fpm with nginx is faster.
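A minimal sketch of that split, with illustrative paths and ports: nginx answers clients, serves static files itself, and multiplexes the remaining requests over pooled connections to Apache:

```
upstream apache_backend {
    server 127.0.0.1:8080;                   # Apache behind nginx
    keepalive 8;                             # pooled backend connections
}

server {
    listen 80;

    location /static/ {
        root /var/www;                       # served directly by nginx
        expires max;
    }

    location / {
        proxy_pass http://apache_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";      # required for backend keepalive
        proxy_set_header Host $host;
    }
}
```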


> If you mean why not only use nginx

Yes, this is what I meant. I originally wrote a SaaS app that was hosted through Apache and ended up putting NGINX on top of it for the aforementioned reasons. But eventually testing showed that removing Apache just made the whole thing a whole lot more manageable. I have friends with similar anecdotes. Just putting NGINX in front from the get-go would have saved a lot of tech debt.


2012-2015 I worked at a shared hosting company and towards the end of my tenure there we revamped the architecture to be centered on nginx (SSL termination, HTTP2 support, etc.) and invested quite a bit in API and GUI support for rewrite rules, redirects, etc.

However, for better or worse, a lot of the software people want to run on shared hosting come with a .htaccess file and documentation for how to configure it otherwise. So we gave customers a choice to put Apache behind nginx.

Unfortunately I left too early to learn what %age of customers ended up enabling Apache, but they're still running this architecture today.


I have done this to host multiple services (running using multiple users and setups) from one host.


These days the cool kids love to call reverse proxies "load balancers" (when you have n>1 backends).


We added Nginx to our hosting environment in front of Apache and knew a bunch of other folks who did the same. The outwardly visible adoption of Nginx was not necessarily zero-sum with Apache’s footprint at first.

In my case we scaled Drupal and Wordpress sites by using Varnish as a reverse proxy cache in front of Apache. But then we wanted to go HTTPS across the board, which Varnish does not handle. So we terminated HTTPS in Nginx and then passed the connection back to the existing Varnish/Apache stack. I know other folks just skipped or ripped out the Varnish layer and used Nginx for both HTTPS and caching.
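The termination layer described here is only a few lines of nginx config (certificate paths and the Varnish port are illustrative):

```
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;

    location / {
        proxy_pass http://127.0.0.1:6081;    # Varnish, then Apache
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```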

At the time both Drupal and Wordpress (and other popular PHP projects) depended on Apache-specific features for things like pretty URLs and even security. Over time, the community engineered away from those so there was little reason to prefer Apache anymore.


The web changed. We moved away from static HTML pages and CGI scripts to monolithic application servers in Java, Ruby, Python, etc. Apache excelled at static content sites and simple auth scenarios (remember .htaccess files?) but became painfully complex for proxying application servers. Nginx was doing exactly what was needed at exactly the time it was needed.


And yet interestingly, nginx started in 2002, which was still old-school internet. So really, it was ahead of its time.


2002 was the start of the glory days of Java web monoliths, big monstrosities with Spring; Rails, Django, etc. came a couple of years later, and monolithic app servers really started to take off.


Painfully complex proxying? Can you explain? I still use Apache as my go to HTTP server and proxying is just 2 config lines.


Around here, Apache was heavily used for its mod_php. It could run php embedded without complex fcgi setup.

Then everyone moved to ruby and python (and also perl) and mod_php stopped being an advantage.


Everyone moved to Ruby and Python? In your bubble perhaps, but PHP is probably more popular than Ruby and Python combined globally.


Definitely not everyone, and you might be right based on actual number of websites, but the zeitgeist definitely moved to Rails and Django for a while.


Somehow I still see .htaccess files in projects that aren't that old (and in a few cases never used Apache).


Yes - this would be my take as well.


Back when Apache was beaten there was no commercial licensing in Nginx.

Also the Apache that was beaten was Apache 1, which was fork-only, and that was the whole reason Nginx was written in the first place.

Then Apache did Apache2 with mpm modules and badly missed the mark. After that Apache was doomed. No async support == dead. It was that simple.


This jibes with my memory of that time as well. Apache just couldn't keep up with Nginx's async speed, and if you weren't having to deal with PHP (before FastCGI's adoption), there was no real reason to use Apache.

And post-FCGI's adoption, you didn't need to use Apache, so... why use it?


mpm_event though from Apache 2.4 was async and kind of great.


I think the modules were Apache's curse, they made it possible to bring down Apache. Speed is great, but Always Responding is a more important feature. I'm sure most Nginx configurations could have been done with Apache without any real performance issue, but Apache hurt its own reputation by doing extra things poorly.


Nearly all the performance reviews between stock Apache and Nginx at the height of the hype were like comparing Word vs Notepad: an Apache installed from a distribution package (with its full range of enabled modules) versus an Nginx compiled from source with nothing extra. A well-built vanilla Apache is perfectly fine for real-world use, at the same level as Nginx, because by the time you are close to the limits of these pieces of software, your scalability problems are elsewhere.


I'm reminded of how Linux beat GNU Hurd, or how systemd is slowly replacing SysVinit. Highly modular systems often lose out to more monolithic ones, since they tend to be slower, more complex, and harder to use in practice, despite their theoretical advantages.


At the time there was no commercial Nginx, only open source. Also, Apache was a huge pain to configure for anything other than serving static files. Nginx config was a delight to deal with by comparison.


Yes - this. Building my first web site (we didn't call them apps back then) and wrangling with Apache and OpenSSL to enable encryption was ... not fun.


Tried Caddy yet? It provides really compact configuration, and if needed it can be reconfigured through its API.


For me, the simplicity of Nginx is what makes it win out over Apache.

I've always felt like Nginx "just works" by default and creating configurations is relatively easy.
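For what it's worth, the oft-cited "a few lines" claim holds up; here is a minimal reverse-proxy sketch (the upstream address and server_name are made up for illustration, and this is a fragment of the `http` block rather than a complete nginx.conf):

```nginx
# Fragment of an http { } context; names and ports are illustrative.
upstream app {
    server 127.0.0.1:8080;      # the backend application server
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app;            # hand the request to the backend
        proxy_set_header Host $host;      # preserve the original Host header
    }
}
```

The equivalent Apache setup of that era typically spanned several modules and noticeably more directives.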


Yeah I remember starting a project with Apache in 2017 I think, and when I was discussing the (very quick) move to nginx it appeared that Apache's default settings are great for a personal home page and not much else, while nginx's default settings seem to handle a moderately busy e-commerce site (or more) with no trouble at all.


To me, it coincided with async (long polling/comet/SSE), more live, web applications. Apache had a horrible story around this, with one thread per connection (I believe Apache 2 may have had an optional execution model, which was also uncomfortable for some reason).

I used lighttpd for this, mentioned in another thread, rather than nginx, which was a similar breath of fresh air coming from Apache -- not only for the event loop model built around epoll and friends, but also the configuration and general deployment.


Back in 2005-6, nginx was so far ahead that a generation of engineers adopted it… its use of signals for zero-downtime upgrades (USR2) is still one of the best features few other servers get right.

The syntax to configure is clear enough while not being super verbose…
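For reference, the upgrade mentioned above is a short signal dance; a sketch only, assuming the default /run/nginx.pid location (the path varies by build/distro):

```shell
# On-the-fly binary upgrade via signals (after installing the new nginx binary).

kill -USR2 "$(cat /run/nginx.pid)"          # fork a new master running the new
                                            # binary; the old master's pid moves
                                            # to /run/nginx.pid.oldbin

kill -WINCH "$(cat /run/nginx.pid.oldbin)"  # gracefully stop the old workers

kill -QUIT "$(cat /run/nginx.pid.oldbin)"   # shut down the old master once the
                                            # new one is confirmed healthy
```

Existing connections finish on the old workers while new connections land on the new ones, so nothing is dropped.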


These days systemd, with its file descriptor store, makes implementing live updates of a service without dropping a single connection rather straightforward. But Nginx managed to do that on its own a long time before systemd.
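The file descriptor store is mostly a matter of unit configuration plus an sd_notify(FDSTORE=1) call from the service; a sketch (the unit description and binary path are made up):

```ini
[Unit]
Description=Example service that survives restarts without dropping sockets

[Service]
Type=notify
ExecStart=/usr/local/bin/myserver
NotifyAccess=main
# systemd keeps fds the service hands over via sd_notify(FDSTORE=1)
# across restarts, and passes them back on the next start.
FileDescriptorStoreMax=128

[Install]
WantedBy=multi-user.target
```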


Nginx always worked better and didn't need to be tuned like Apache until you got to really enormous scales, which were rare, while even a little load on Apache would require tweaking settings and experimentation.


I remember in the early 00s WAMP/LAMP was the stack of choice for getting quickly set up to write web applications, but configuration was often painful, especially on the Apache side. At that time I was working on hobby projects, like one private server I used to administer. When Rails came out it was just a breath of fresh air compared to PHP, and I distinctly remember switching to it. NGINX was also picking up steam at that time.


Convenience is often worth a lot more than the ultimate in flexibility.

This is why email is now more or less the domain of a couple of very large companies.


Which modules do you miss in nginx that are free in Apache?


I don't know how surprising it is, considering ease of use and "just works" beats all other considerations every time. If it didn't, we'd still all be using Novell.


As an open source developer and commercial OSS startup founder myself, Nginx gave me a lot of confidence to challenge status quo. Apache was so revered that you would have been crazy to think you could improve it, but he did and that really had an impact on me.


Fun story time: a few years back I worked at a major EU "traditional" (non-FAANG) IT company, and they were using Apache for handling web traffic. Rumour was that nginx, being already a backbone of half of internet, was dismissed as "too new" :) (we're talking mid-2010s)


Haha that reminds me of a company I worked at that used some MS library for .net.

I think it was like Microsoft.WebMatrix.Data

It essentially was a micro ORM written using dynamic, but it had no caching, so it performed terribly with all the reflection. Dapper was a drop-in replacement, but it was dismissed as being “demoware” despite it running Stack Overflow. I left that place 2 weeks later.


This story really shows the hype of nginx. It wasn't the backbone of half the internet until 2021.

Don't get me wrong, I am an nginx user now for the past decade at least but when it first came out I was very skeptical. People were saying apache was too bloated but you could already run apache with as few modules as possible so that was a false argument.

Then there was the c10k challenge of course. Basically, a lot of hype for nginx but it came out on top in the end so I guess it doesn't matter.


Waiting for everybody else to test the product before you migrate is perfectly common sense strategy. Especially if that product does not give you any special edge over competition.


I think the irony is that newness is irrelevant once something is being used at a certain scale. You can battle-test more in an hour at scale than a small-scale project could in ten years.


How do you battle-test in an hour the ability of the upstream developer to provide security fixes? To provide updates at all as the ecosystem develops (e.g. the rise of systemd, taking advantage of advancements in worker models, SSL library API changes, new Lua versions)? Ability to keep backward compatibility with modules?

Your approach might have led you to invest heavily in lighttpd at some point in time.


Also: how do you battle-test a security track record?

It takes years to tell if serious vulnerabilities are being found often or not.


By the mid 2010s it had definitely proved its mettle.

Arguing that it wouldn't have provided enough benefit to justify the switch is different than saying it was unproven by that point.


I think some of the pressure to update products is irrational. Just because something is newer and better is not by itself a reason to upgrade.

If Apache did everything they needed, I can imagine a company completely forgoing investigating Nginx, and this might have been the cause of that kind of statement. Or maybe this was just a way to explain it to younger devs who could not understand "don't break it if it works". We don't know.

The correct way to make this kind of decision (and many others) is to look at the RoI and your available bandwidth to run multiple projects.

I am still keeping some very old (but still actively developed) products. I am busy with other projects and there just have not been any pressure to update. When I have some time available I prefer to choose a project with highest RoI rather than update stuff because of peer pressure.


Well I think you and I are saying the same thing. Don't chase the shiny new thing.

That said, by that time Nginx was a proven performance upgrade over Apache 1.x and 2.x. Quantifying that value is tough but it certainly had value attached to it.


Whether there is any value depends heavily on your application.

If your Apache is responsible for 0.1% of your costs then this is at most what you can save, even if Nginx was magically zero cost (like zero to install, maintain, zero computing resources, zero outages, zero hiring, zero project risks, etc.)

From my experience, most projects have way more important problems to solve and better opportunities to pursue than chase those very small improvements. Frequently it does not matter if one is 10 or even 100 times faster than the other.


Again, we don't disagree. But the purported reason for rejection was the newness of Nginx.


No, switching creates risks. Risk of configuration errors leading to downtimes or vulnerabilities, risk of unexpected delays in deployment, risk of running into bugs that the users are unaware of.

Many software projects fail by facing delays due to excessive complexity and tech churn. Moving carefully helps.


> No, switching creates risks.

Absolutely - but again, that's apparently not why NGINX was dismissed as an option.


I remember the days circa 2009 when the Nginx docs pages still had lots of Soviet-style graphics... those were the days :)


Yeah that was great. I think we all felt like we had a secret superpower few others knew about.


My job as an intern was to write an Nginx plugin for a specific type of filtering for high performance - boss was a bit of a masochist ;)


That’s actually pretty cool. Wish I found an excuse to need to do that.


I'd love to see this if anyone has a screenshot from this era!



Love it! Thank you for sharing!


Yup, look at that logo especially!


I remember watching a video from some conference where Igor participated. As soon as he says "Hello, I'm Igor Sysoev, creator of nginx" the audience bursts with extra-long applause. He even had to tell them "Come on guys, you haven't heard my presentation yet"


Would love to see that talk!


Here are relevant Russian discussions on OpenNET[0] & LOR[1].

N.B. From Nginx company history on Wikipedia:

> On 12 December 2019, it was reported that the Moscow offices of Nginx Inc. had been raided by police, and that Sysoev and Konovalov had been detained. The raid was conducted under a search warrant connected to a copyright claim over Nginx by Rambler—which asserts that it owns all rights to the code because it was written while Sysoev was an employee of the company. On 16 December 2019, Russian state lender Sberbank, which owns 46.5 percent of Rambler, called an extraordinary meeting of Rambler's board of directors asking Rambler's management team to request Russian law enforcement agencies cease pursuit of the criminal case, and begin talks with Nginx and with F5.[2]

[0] https://www.opennet.ru/opennews/art.shtml?num=56535

[1] https://www.linux.org.ru/news/opensource/16745652

[2] https://en.wikipedia.org/wiki/Nginx#History


Was the case resolved? Wikipedia doesn't provide any further information?


Yes, Wikipedia -- in broad strokes -- just sucks, check what happened to the Scottish Wikipedia (but there are any number of issues in the English one as well, the "no credentials" policy made sure scientists shun it because they don't want to endlessly argue with neckbeards with an agenda).

Anyways, per https://tadviser.com/index.php/Company:Nginx everything was dropped in Russia; there's a lawsuit in the US, but the court dismissed the whole thing at first instance in 2021. I expect that one to go exactly nowhere.


That's why we rely on people like you to update the article at Wikipedia; your services are invaluable!


When I do this, with sources, I find some power user has reverted the change within minutes or hours without so much as a meaningful comment.

Not sure when it happened, but Wikipedia has long been a collection of fiefdoms, jealously guarded by power users and their sycophants.


If only said power users used their time and energy to fix content instead of just revert it.


Not in a million years. I am not touching that with a ten-foot barge pole.


Thank you. This was a great source.


I think the characterization on Wikipedia is also incorrect. Igor seems to have had a permission directly from the CTO to open-source the code, but 10 years later the company claimed that the CTO was not in a position to do so.


The problem is that the CTO, who is rather famous in the Russosphere, only gave verbal permission, and only mentioned this happening when he was long gone from the company.

The lesson here is that, open source or not, you always need real documents to demarcate your IP, otherwise you're asking for trouble later down the line.

In typical US or UK companies software written would just go to the company, period. Here's a good article from Spolsky on how this works:

https://www.joelonsoftware.com/2016/12/09/developers-side-pr...


Thanks for further clarification. Verbal approvals are not good.

However, your reference to Spolsky is not correct as nginx was not a side project, but a core work project that powered all of Rambler's properties. The situation is similar to a Yahoo employee open-sourcing the Apache Traffic Server (very similar project with similar timelines, by the way; Rambler was once the "Russian Yahoo", while Yandex is the "Russian Google") and then Yahoo 10 years later claiming the open-sourcing was illegal. I understand that something may have been done wrong (and that's why I appreciate Eclipse and Apache legal team support and due diligence), but I have a hard time believing that Rambler didn't notice its core internal project being open-sourced for 10 years.


The subtle thing here is that in the US and the UK, things created "in the course of employment" (as defined in the UK) would typically be owned by the employer. This is enforced either through a contract (e.g., in the US), or by both national law by default and the contract (e.g., in the UK). So the employee would have no right to just open-source things, or even create a business around them later.

This is why I mentioned Spolsky's article - it explains the reasoning behind this situation very well.

In Russia things are different. I don't really understand all the legal details. It all boils down to Sysoev's contract and his precise duties.

Anyways, Sysoev was a system admin at Rambler at the height of its popularity in the early 00s. I believe that Rambler management back then did not really understand the importance of OSS, or even software in general, or the search engine business in particular. This is not dissimilar to the Yahoo story.

So Rambler just ignored the whole thing back in the day. So did Sysoev - the public is not aware of any written permission he was given in regards to Nginx. This situation lacks the legal clarity necessary for a working business... Now that Nginx is bigger than Rambler ever was, scavengers decided to check if they can find a dollar or two here.


You can find a summary at https://tadviser.com/index.php/Company:Nginx

TL;DR: the Russian investigators closed the case of Rambler Group against Nginx/F5 in 2020 "for the absence of a crime event". Another company co-owned by the same owner of Rambler Group started a case in the USA, but it was dismissed by a court in California in 2021.


One could only hope to build software as great as NGINX, keep it up for 20 years and receive a send off like this.

Bravo


Thanks Igor for making a difference! I've always been after simple tools that do their job well, and, for the past 15 years, nginx was always one of them. Good luck with everything that comes next for you.


I know this is massively off-topic (have a good well paid "retirement" Igor), but I assumed that it would be written as

Dos Vidaniya

instead of (the correct)

Do svidaniya

My Russian studies are limited to listening to Sean Connery in The Russia House, and I guess I took "Dos" from the Latin languages. Odd.


It's worse than that: the first thing I scanned the article for was whether he is alive. A title like that without a pre-defined context could, in my native Russian perception, mean something much worse than just leaving a company. I'm glad he's doing well; I really enjoy NGINX as a casual user. It is a great gift to people. Удачи (good luck) or всего хорошего (best wishes) would not have triggered such a reaction for me.

I've just counted: only in the 13th paragraph could I get the answer.


As another native Russian speaker, to me the headline explicitly did mean that Igor is alive, and the subsequent meeting would be in the physical plane of our existence. Had it been one of the closer synonyms of "Good bye", I myself would have surmised the worst.


I know zero Russian, or any other language other than English. I had a very similar reaction to you. It sounded almost like an obituary.


maybe they've updated, but the fourth paragraph is currently "we announce today Igor has chosen to step back..."

which would seem to imply "not dead". but given the tone of the first three paragraphs i think even that is a bit too late in the post to clarify.


Yeah, but that's like the fourth paragraph.

I know hardly any Russian, only about enough to recognise "da svidanja" as "goodbye", so I'm not sure "in what language" I digested the headline (=link here on HN) -- probably a bit of all of my eclectic blend of European ones... But to me it certainly felt fifty-fifty whether it was a "changed jobs" or a "dearly departed" post. Checking which it was probably constituted my main reason for clicking through.


I "scanned" the article twice, so could have missed. First time I did quit, because HN comments are often more clear and useful instead of "reading" every noise they publish out there. And I did not find what I was looking for. The second time I scanned again after I posted, just to find the paragraph if it was there at all. Even if it was in the 4th originally, it's still far too off. It should have been in the first sentence of the first paragraph.


It sounds the same to an American English speaker without knowing a word of Russian, since we can infer context, and we use the same farewell structure for the deceased.


> the first thing that I scanned in the article is if he is alive

Same thing, my first reaction was "oh my god, no, please no" and I rushed to see if he's alive.


Maybe it was just some sort of click bait to keep the reader on the page...


I would rather think machine translation and learning foreign languages should be better in general. But that is probably a C2-level subtlety, so if a non-Russian was writing that I could understand.

I imagine a situation: EN copywriter asks a RU colleague how to say "Goodbye", gets "Do svidaniya" as a transliteration without a context, and just puts it there. Which sounds like farewell.


Do (до) is basically “till”

Svidaniye (Свидание) has several meanings:

- most common modern single-word usage is for date as in “romantic date”

- archaic is for “meeting” that remained in this goodbye form.

So “do svidaniya” is literally for “till we meet again” :-)


I wonder why is it romanized to "do", when it's read as "da"?


Because it's spelt "do" in Russian. It's only due to stress and the fact it's a preposition that the way it is said becomes "da".

As the preposition is so small, it's considered together with the following word, which in the genitive has its stress on the "а".


English. The answer is almost always English, and its "quirky" way of transliteration.


I don't think so. From what little Russian I know, the original is до свидания, with an O, but in most dialects an unstressed O is pronounced A: https://en.wikipedia.org/wiki/Russian_phonology#Unstressed_v...


It's even more convoluted than that. Due to vowel reduction, unstressed o is pronounced as a, but до is one syllable so the o is stressed. When pronouncing до separately, it would definitely be pronounced as do, but до свидания is always said quickly as if it was one word, so the до turns into an unstressed syllable.

As a side note, Russian pronunciation is remarkably uniform compared to languages like English. Dialects do exist but in practice the vast majority of speakers will pronounce things very similarly. The phonetics are also so different from English that the typical English speaker will not pronounce any Russian words close to native pronunciation.


> As a side note, Russian pronunciation is remarkably uniform compared to languages like English.

AFAICT, that goes for absolutely every other language too. Nothing even comes close to English for inconsistency in pronunciation (and even more so, spelling).


Interesting, thank you. Years ago I tried to learn Russian by myself, but it was quite difficult and I was too busy with other stuff. I hope one day I can learn it.


I had the same thought. I think it's more Slavic vs Germanic/Romance. "sv", without a vowel, doesn't exist in any English word I can think of. However, in Russian, consonant clusters like that are pretty common. See also, from the article, Sberbank. I'd bet there are plenty of examples in reverse too.


"Svelte" is probably the most common English one.


Ah that's a good one!


s: with, together

vid: videt' = to see, vid as in video

anie: just a suffix like "ing" in english

"till together-seeing"


The process is called “Romanization of Russian” [1], and there are various standard ways to do it.

[1] https://en.m.wikipedia.org/wiki/Romanization_of_Russian


I thought it was a typo and meant to say "to". Why not use Cyrillic here? Bit odd.


Dos Svidanya would be a great cocktail name.


Or early PC software.


do (till) svidaniya (seeing; I think it's called a gerund in English grammar)


Oh my gosh, I thought he passed away.


Same. An editorialised title may have been preferable, but I understand the rules here generally don't allow that.


An inserted translation of the literal meaning might have been allowed, since the rules also say (ISTR) that this is an English-language website.

Colloquialise that from the formal exact "Til we meet again" to be a bit more informal (because that can still be read as "...in Heaven"), and you'd get something like

    "Do svidaniya (=See you later), Igor, and thank you for Nginx"
...which probably would have been much less likely to make half the readers start to think he'd died.


> a novel architecture

Is this simply referring to event-driven I/O (using select, epoll, or the like), or something else? I'm pretty sure event-driven, as opposed to forking or thread-per-connection, web servers were well established by 2002, though perhaps primarily in commercial products like Zeus.


I also wondered this. Similar:

> In particular, Igor sought to solve the C10k problem – handling 10,000 concurrent connections on a single server – by building a web server that not only handled massive concurrency but could also serve bandwidth‑hogging elements such as photos or music files more quickly and efficiently.

...I'd love to hear more details



There is also the worker model and the zero-impact reload (but I don't know if Zeus had that too).


With Almaty, Kazakhstan in the news recently because of protests [1] , I thought this part was interesting

> Igor came from humble beginnings. The son of a military officer, he was born in a small town in Kazakhstan (then a Soviet republic). His family moved to the capital Almaty when he was a year old.

[1] https://www.bbc.com/news/world-asia-59927267


All the soviet republics are former imperial Russia territories and have substantial russian minorities. Of course, Russia would never admit doing colonization like its western counterparts, but just spreading culture and civilization around.


“Igor has chosen to step back from NGINX and F5 in order to spend more time with his friends and family”

I wonder what this is really about.

Either way, Nginx is the reason I and many others were able to survive without raising venture capital, because we didn't need a massive horizontal cluster of Apache servers consuming 20 to 100 MB per concurrent connection. Personally I scaled above 100k concurrent connections on a single front-end nginx box with 6 Apache application servers on the back end in 2007, thanks to Igor's incredible work. He really has made a massive contribution to the fundamental plumbing of the Web and should be recognized for it.


> I wonder what this is really about.

Maybe just that? That's pretty much what retirement is about, also. Doesn't say that Igor won't work on personal or even non-personal projects at all, just that more time will be available for, well, friends and family.


It is standard resignation speak. It’s the same thing I told HR to tell my team when I decided it was time to leave, because it is what everyone else says when they resign.

I left due to burnout and failed negotiations. I did share the burnout with my team individually.


I will disagree with this: this is the completion of a P&L merger - a golden parachute, as some know it. Gus also recently left.


perfect opportunity to spend some time with the family, though


Perhaps. However, I do find it slightly 'off' that his departure is announced in the form of an article, and not a personal statement in which he would have more or less unavoidably had to have elaborated on the reasons for choosing this moment to depart.

At least in my personal experience, when this happens to a senior leader it's usually because they had been informed by others that they WOULD be resigning.


Exactly.

> a high schooler in the mid‑1980s

Seems like a good time to spend with family to me.


It's been 20 years, it could be true.

Usually not, but that's the nice part of this kind of letter; it's always plausible.


I think it's usually true. But that's what makes it so useful to people casting around for something to cover for reasons that might be less savory or less pleasant. And that resulting contrast means we really notice it in high-profile cases as an excuse, making it almost a cliche.


Usually? But then I wonder what it would say if it were the case? Hopefully in the majority of cases, things do end this "boringly".


Playing the numbers, most founders move on through a couple new ventures before retiring to hang out with friends and family.

I'm not saying "usually" like it's 98%... more like 70% odds.


I think he was detained recently by the police because there is a claim that he started Nginx while employed at another company. Seems like some sort of shakedown by politically connected people.


Can someone summarize what allowed nginx to surmount the C10K problem? Was it some clever trick or just good software design?


I was working on adding custom sharding for a reverse proxy in Nginx ten years ago. The code was absolute bare-bones. No comments, no tests. And still it worked really well. Scary and cool, were my thoughts at the time.

There are three things I think stood out (not tied to C10K):

1. The configuration format is light-weight. Compared to Apache, lighttpd and others at the time, you could build a static file server or a reverse proxy in just 3-4 lines of configuration. It lowered the bar of entry, and is probably what led to wide adoption.

2. The core of Nginx was (is?) an async data pipeline. The individual modules (proxy, file system) defined how the pipes tie together, but the actual pumping of data was done by the core engine. You never had to care about epoll(2) and the like; you just defined the DAG. And that was easy to do correctly even in bare-bones C. This was a good architecture.

3. Single-threaded IIRC, which might be the C10K answer you were looking for. Apache had the complicated configuration where you had to decide whether to use prefork, or threads, or...

Lastly, it was fast. Probably because of (2), and a prerequisite for (3).


If memory serves me, nginx succeeded by relying on epoll primitives for handling many connections rather than spinning up a thread per request by default like apache did at the time. That was the big difference back then. These days I imagine Apache has adopted/honed these same techniques.


Yep, epoll was a big part of it, making it work around async I/O rather than threads.

If someone is interested in reading more, "Flash: An Efficient and Portable Web Server" is a good read on the topic: https://www.usenix.org/legacy/events/usenix99/full_papers/pa.... It has no relation to Adobe Flash.

epoll has the advantage of operating in O(1) time rather than O(n) time as well which becomes important when you have a lot of file descriptors.

I'd also note that epoll landed with Linux 2.6, so it wasn't really available before 2004. Apache was created in 1995, long before epoll, and Nginx was initially released in 2004. It's one of those situations where you introduce a new capability, like epoll handling lots of FDs in O(1) time, and someone finds a way to use it to make something great.


Although, similar facilities were already available in other systems such as Solaris, Windows NT, FreeBSD.


Igor was from Russian FreeBSD community so kqueue was probably the first.


Was epoll around and no one was using it for a web server yet? It seems like something that would have been put in the kernel explicitly at the behest of web servers.


IMO Nginx takes advantage of the fact that most web workloads are I/O bound. Its tight-loop main thread, coupled with asynchronous delegation, enables it to stay single-threaded. It doesn't spawn a new thread per request, which means it doesn't need additional memory to handle new requests.

This is a very good article which goes into details, highly recommended

http://aosabook.org/en/nginx.html


I can't help having bad feelings about this. The first thing I could think of was what Google has become during the post-Brin-Page era.

I hope I'm wrong but this could be an indicator of some changes F5 is about to introduce.


I mean, as soon as F5 bought nginx you could have predicted that.


A side note left out in the history mentioned here is that Igor first[1] developed two third-party modules for Apache, mod_deflate and mod_accel[2]. I think especially the latter was a big step towards NGINX already. It was a much more capable replacement for the mod_proxy module that was bundled with Apache, and that would slurp up the response from the server as fast as it could, storing it in a local file cache while starting delivery to the client immediately (and optionally re-use the cache for future requests); it freed up the back-end server quickly, which was very helpful to reduce the number of concurrent processes in a system that used fork (as the Perl based systems I was working on did). It made performance better than FastCGI (as at least the Apache FastCGI implementation would not do the step of slurping up the response and copying it to a temporary file, and thus tie down the back-end process until the response was fully delivered to the client).

My interpretation of the history is that Igor first solved the scalability problems for a direct need (IIRC he was working for a large Russian website at the time[3]), while doing that probably realized that the Apache code base could be replaced whole-sale to do more than just HTTP proxying, and introduced the async approach to make it scalable itself, too.

[1] IIRC I saw the public NGINX announcement a few months after starting to use mod_accel. [2] Amazingly the page is still up: http://sysoev.ru/en/apache_modules.html (and it is still linking to the "Babelfish English translation" that I made by auto-translating and manually cleaning up the docs and that I hosted on a DynDNS domain that I've long since lost). [3] Rambler, from reading the other comments here.


Impressive. I thought the only way you were allowed to quit working on an open source project was to commit a new version where you delete everything and introduce a chunk of code to put clients into an infinite loop. I'm impressed Igor was able to find an alternative. /s

(More seriously though, his work is impressive and I hope his next adventure is at least as fulfilling).


Igor and the rest of the Nginx team earned commercial revenue by offering premium modules. Later on they were acquired by F5 Networks. He achieved the goal.


Unrelated, but I've always thought the F5 acquisition of NGINX made little sense. I think F5 saw the writing on the wall a little late, panicked, and bought the first competitor they could come up with that showed up in a Gartner quadrant.

So much of the NGINX product that overlaps with F5's competitive offering is essentially already implemented, for free, by people who are already completely competent in load balancing. Unless companies seek to reduce risk by bolting on a support contract... why F5 at all?

Can someone from the biz side of the HN house chime in, perhaps?


F5 was very much a hardware company in a world of software vendors. The high and low end of the hardware market has shrunk rapidly as cloud has taken over. At high end, cloud providers like Amazon, Facebook and Google are building their own hardware[0] and at the other end, companies are increasingly just using those cloud providers.

For any large company, the easiest way to enter a new market is to just buy some of the competition. Of Nginx's competitors, most are either not "enterprise-y" enough (Caddy) or are already part of the CNCF (Traefik, Envoy). Really, I think the only other option could have been HaProxy.

[0] https://www.geekwire.com/2017/amazon-web-services-secret-wea...


Well, if I may offer a view: you are correct in your second paragraph. The aspect that most people miss is that F5 is not only an enterprise product, but also serves primary systems (energy/gas/etc.) and telcos (3/4/5G). The acquisition of NGINX complements that footprint. Just a different view. Be well.


I will say that as someone who managed a leased hosting environment years ago, we dropped our expensive F5 device once it became clear we could do the same load balancing in Nginx for the cost of a virtual server. Even with a small support contract from Nginx it was way cheaper.


I mostly see it in businesses that aren't very competent and/or have a support/licensing partner that presents them a tunnel vision they cannot deviate from. F5 specifically is usually also just 'what they always had', or part of a bundle.


They had no integration plan, so I assumed they bought it to get some street cred for the push to become a modern devops software company (instead of hardware), and also to kill nginx as it was beginning to win sales. I used to work at nginx and F5.


iRules are what makes the F5 appliance special: they enable you to run TCL on a variety of events (e.g. HTTP_REQUEST, CLIENT_CONNECTED...). Some typical uses are request and response modification, logging, and bug fixes on closed-source web applications. Also, the F5 device supports various permission levels to enable delegation/separation of administrative privileges.


It's so cool that they wrote a little biography of him. As they say, his software powers the majority of websites on the planet: that's such a huge accomplishment, but normally not for the kind of job that gets biographies written about it.


So reading between the lines, the commercial aspirations won out over the core ideals of the creator, who is now being pushed out completely, likely with some form of compensation package so he won't make a big fuss, so they can now dive headfirst into a pool of money while abandoning any open source ideals?

So natural question, how long until they start squeezing more money out of users and how severe will it be for hobbyists and small companies?


To non-russian speaking users: "Do Svidaniya" means "Goodbye".


To add a bit more, "svidaniye" means a date, and "Do svidaniya" literally means "till [our next] date". In English, "see you" or "see you later" translates closest to the original phrase in Russian.


Many indo-european languages use something like "see you again" in their goodbyes. Specifically, notice the V and D from the Proto-Indo-European root "weyd-"[1] meaning "to see":

* arriVeDerci (Italian)

* auf WieDersehen (German)

* ¡hasta la Vista! (Spanish)

* do WiDzenia (Polish)

* do sViDaniya (Russian)

* la reVeDere (Romanian, (because I'm Romanian and it seems like an important language to mention))

[1]: https://en.wiktionary.org/wiki/Reconstruction:Proto-Indo-Eur...


wieder means "again" and has nothing to do with sight.

sehen means "to see" and does not have your PIE root.


It's interesting, because "svidanyie" in Russian is probably closely related to "svitani" in Czech, which means "sunrise", nothing else. But more literally "svitani" would probably translate as "meet again". Fascinating how meanings morph and sometimes the original meaning is fairly obfuscated.


Funny, as a Ukrainian I never connected it this way, precisely for the same reason. In Ukrainian, “svitanok” means sunrise, with the root “svit” (world, but also light). In Russian, “svidanie” should have the root “vid” (view, sighting). So, to me, those are words with totally different roots :) https://ru.m.wiktionary.org/wiki/свидание points to https://ru.m.wiktionary.org/wiki/vidět in the etymology, among other things. But your interpretation would have been quite cool, though I don’t meet people often at the sunrise!


You see, "svit" exists in Czech too. It's used mostly as "svit slunce", meaning sunlight. Then, in a broader sense, "svitit" is a verb meaning "to light" - so it's fairly obvious that "to shine a light", "a sunrise", and "to greet someone" are all related to "meet again" or "see again". It's all variations of the same thing, basically. At least it seems so; I am no linguist.


Linguistic sources say that svet/svit/light/sunrise/world and videt/see/meet have completely different proto-indo-european roots - kweyt and weyd respectively.

So nice theory, but no. “Svid” is not a root here, it’s “s”+”vid”.


interesting, didn't know that! can you point me to those resources to learn more, please? fascinating topic for me.


it's not. it's from "videt'", to see


Yeah, but it's still probably related. "videt" and "vitat" (to welcome) probably share the same etymological base, and "svitat" is kind of like "see again", or "welcome again". But yeah, this is digging kinda deeper into the meaning than you normally think of in regular use of our languages.


Because we get to see the Sun again each morning...? (WAG)


svidanie stems from vid, which is a root of "to see".

s-vid... - means completed action, as in "to spot".

...vid-anie - a noun version of the same.

s-vid-anie - an event of managing to see someone/something.

do ... - until ...

Pretty logical, actually.


> In English, "see you" or "see you later" translates closest to the original phrase in Russian.

Often used ironically by movie villains. In English it might almost be taken as a threat.


Doesn't it literally mean "Until we see each other [again]", which is even closer to "See you"?


It does mean exactly that, though not literally (there are only two words in the phrase literally).


Close enough. There are very close matches in Italian ("arrivederci"), French ("au revoir"), Romanian ("la revedere"), but for English these are the closest.


"Do svidaniya" sounds more formal than "See you" though.


That is true, thank you for pointing it out. My personal preference is "vsego Vam dobrogo", which is even more respectful (it translates roughly as "All the kind (best) to You").


I think in general, it's probably best to avoid this kind of etymological explanation of the 'actual meaning' of some word or expression because they tend to obscure the usage and, well, actual actual meaning. Saying 'goodbye' in English doesn't really mean saying 'God be with you' to someone, even though that's what it's originally a contraction of. They're fine as etymology, of course!


>Saying 'goodbye' in English doesn't really mean saying 'God be with you' to someone

except, of course, when it does.

I regularly re-pronounce holiday as holy-day internally, and the same with welcome and well-come; i am, of course, a weirdo.


When I say/hear "do svidanya", I usually don't consciously interpret it as "till next meeting" each time, it's a set phrase and it's usually perceived as a whole, "goodbye". Only you if you put some effort to pay attention to the actual roots that you realize, oh that actually means "till next meeting" (especially since in modern Russian, "svidanye", when used alone, now means "romantic date"). Same with "hello" which literally means "be healthy", it registers as just "hello" in my brain, I don't immediately think "they're wishing me good health". But maybe it's just me.


>except, of course, when it does.

To a good first approximation, zero people mean "god be with you" when they say "goodbye". That might be the etymological origin of the word, but meanings shift over time.


Most people don't become polyglots either, though, so that's not a good measure of the utility. It's not a question of whether learning etymology would be useful to the man on the Clapham omnibus; it's a question of whether it is useful to language-learners.

Knowing the etymology of, e.g., "goodbye" makes it click faster/not be weird when you learn, e.g, that "hello" is "Dia is Muire duit" (God and Mary be with you) in Irish Gaelic.


and putting utility aside almost completely, it’s just pleasing to know things.


except, you know, the person you're conversing with this very moment, so that's a really bad (local) first approximation!


> except, you know, the person you're conversing with this very moment, so that's a really bad (local) first approximation!

Well, there's that saying that all models are wrong, and some are just more useful than others.

My country is thankfully becoming more secular as demographics change, so hopefully that first approximation improves over time.


because forgetting things about the only tool we have for communication, imperfect though it is, is always an improvement. How else will we rediscover them badly?


On the contrary, I find etymological explanations often most enlightening. Although, I am fully aware that it might just be an origin of a word or a phrase and not the contemporary meaning. Your argument sounds defeatist to me. To paraphrase it: since the 'actual meaning' alone is not a perfect explanation we should not even try to understand the roots of words and their heritage. Then you can stop understanding the world altogether, because no amount of knowledge will remove all obscurity and contradictions. Yet human knowledge is prospering.


I concur, one can learn a lot about the culture when learning a new language. For example, in Swedish you say hello simply with “hej”. And as you may have guessed, Swedes are not big on introductory formalities even in the business context. On the other hand, German and many other languages have different words for you and You (“du” and “Sie”, pronounced Zie). As one may guess, You do not address unfamiliar people with “du”, there is a process to get to know people to the point when You ask a person if you shall switch to a “first-name basis” (this phrase gives a hint that in English, one uses the last name to address an unfamiliar person instead).

Edit: I was deeply impressed when I saw a photo of one of Einstein’s letters after he migrated to the US. Such a highly accomplished scientist still opened his letters to colleagues with “Sehr geehrter Herr Professor Dr.” (highly esteemed dr. professor, sir). This tells me all I need to know how cultured Einstein was.


Your paraphrase sounds more like some hypothetical argument you'd like to engage in, not anything I actually wrote.


Well, if that was not your intention, I am sorry. But it was my honest understanding of what you wrote.


As a Russian learner, it seems a bit wrong not to transliterate that as "vsevo Vam drobovo" even though I understand that's not how it's spelled originally. For whatever reason, that's what my brain is expecting.


I actually wanted to write “vsevó Vam dóbrava” (o not under stress often becomes a) but changed it to a grammatical transliteration for some reason.


My favorite "bye" in Russian is "davai" which literally means "give!". Go figure.


"Until we meet again"


Auf Wiedersehen!


Genau! Actually, this perfect duality also extends to other phrases, like “priyatnava appetita”, just like Guten Appetit! At the same time, in Ukrainian you’d say “smachnogo”, similar to Swedish “smaklig måltid” (lecker Mahlzeit), as “smak” means taste in both languages.


smaklig måltid

When I see these accents I can't help but read it in Hatari voice.


In Russian the very informal version is "Пока". Literally "Till" with implied "till we meet again".


"Farewell"?


Not exactly. You bid farewell with “proschayte” (literally begging for forgiveness) or “vsevo dobrava” (wishing all the kind/best), but “do svidaniya” has a hint of looking forward to meet your counterpart again.


Something like "Hasta luego" in Spanish? Literally "Until later".


I started with Apache long ago, and then I moved to Nginx. 15+ years have passed.

Why did Apache never become a real competitor? I didn't A/B test with Apache and Nginx - I just read slashdot and later HN, and I trusted people who ran much bigger sites. But how could Nginx take such a lead that Apache could never catch up to?


You can do a lot with the one-process-per-connection model of Apache 1.3 and 2.x mpm-prefork, but there are some limitations. And I think the code reorganization to support all the different mpms was a lot of work, with not a lot of benefit for everyone, so some organizations stayed with 1.x. There's a lot of jack-of-all-trades in Apache, but if you want to handle specific conditions, you really need to tune it properly. And some of the out-of-the-box stuff was just wrong.

If you're pre-forking, you really shouldn't scale the number of children up and down; it should always run the maximum number you want to run (whatever that is). Because a connection ties up a child, you need to do a bunch of stuff to limit the time Apache spends touching each request: you really should run an OS with accept filters, and not touch the connection until the request is ready; you need a large socket buffer so you can write the whole response, close the socket, and let the OS finish sending it while Apache works on the next request; and you need to disable HTTP keep-alive (or do something crazy: Yahoo had a 'cheapalive' daemon that would pass client sockets back and forth with (y)apache --- when a connection went idle, it was passed to cheapalived, which put it into a select (or kqueue/epoll, I don't remember) loop; when a socket had data to read, it was sent back to (y)apache to process the request).
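Roughly, the fixed-pool tuning described above looks like this in mpm-prefork terms (a sketch only; directive names are Apache 2.4, the numbers are arbitrary examples, and the accept filter line is FreeBSD-specific):

```apache
# Run the full pool from the start; never scale children up or down.
StartServers          256
MinSpareServers       256
MaxSpareServers       256
MaxRequestWorkers     256

# Don't let idle clients pin a child between requests.
KeepAlive             Off

# Don't hand the socket to a child until a full HTTP request has arrived.
AcceptFilter http httpready

# Large send buffer so the child can write the response and move on.
SendBufferSize        262144
```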

But the apache documentation wouldn't guide you into any of this, really. You'd try a normal seeming config, get things that worked, until it got busy, then spun up too many children and ground to a halt, probably swapping excessively on the way. Or just had too many people trying to do keep-alive and have 0% cpu and doing nothing. Or maybe spinning your wheels trying to get threaded mpm's to work (but they don't work with most of the popular mod_X's anyway). The market seems to be telling us that a big kqueue/epoll is the solution we want, but well tuned pre-fork can also do a lot; it kind of depends on if your bottleneck is the http server or if your bottleneck is your application code. If you've written your application code well enough that the http server is the bottleneck; congratulations! (or you might be a static content server, but then your bottleneck might well be your NICs or your OS TCP stack)


>I just read slashdot and later HN, and I trusted people who ran much bigger sites. But how could Nginx take such a lead that Apache could never catch up to?

I read that in the last 10 years or so, this is what has happened with other successes like Slack etc.: the users are becoming the decision makers, as opposed to CXOs handing down tools. I think developer advocacy was one of the major reasons.


Yes, this is the kind of story that gets dramatised in movies: high-functioning autism, a habit of attention to detail, a principled insistence on doing things "just right", and enough free time and freedom from distractions to carry it through.

I remember when I saw the nginx code for the first time - I was so impressed that I wanted to build libnginx.a (of core data structures) for my own projects. Never happened.

The only comparable story I know is Rob Pike - similar principles and obsessions with doing things just right, which, in turn, is related to good mathematics: finding and using just the right abstractions by generalising from actual patterns.

If you want to learn C programming (and programming in general), read nginx and unit (nxt). Look how systematic, minimal, and just right everything is.

I'd like to thank Igor for teaching me and for being an example.


This motivated me to learn a little bit about nginx architecture. This article is nice: https://www.nginx.com/blog/inside-nginx-how-we-designed-for-...

Why are the nginx workers implemented as processes sharing memory, rather than threads? Is that so they can have different privileges in the Linux permissions/ownership model?

Is it easy to create multiple processes sharing memory in Linux? How do I go about doing that?


It took a good 5 minutes read of the entire blog post to understand that "Do Svidaniya" is actually "До Свидания" (Russian), meaning "until next time/Goodbye".


Listen to more Zemfira...


"Want me to kill the neighbours who won't let you sleep?"

Wow, for a rare occasion, that one translates rather nicely.


I thought maybe it was somebody's name and just a really horribly formed sentence. Because, ya know...


i speak russian fluently and still parsed the title as "Do" + <indian name>


I'm not fluent, but I've been studying for 3 years now. I hate when people use latin characters to 'sound out' russian words. When I see Cyrillic, my brain immediately switches to Russian. When it's latin sounding out Russian, it takes a stupid amount of time for me to realize and comprehend what they are trying to say.


Accurate transliteration should be fine though. There are also Slavic languages, such as Polish or Czech, that use the Latin alphabet.

One of the annoying things about Latin transliteration is that people use the phonetics of different Western languages. Should you use y or j for palatal sounds? There also seem to be French-influenced transliterations for vowels. These can be inconsistent depending on the source.


> Accurate transliteration should be fine though. There are also Slavic languages, such as Polish or Czech, that use the Latin alphabet.

Yeah, that leads to the question of which transliteration to use.

> One of the annoying things about Latin transliteration is that people use different phonetics of western languages.

Eng Michael(?) Gorbachev, Ger Michail Gorbatschow, Swe Mikhail Gorbatjov...

> Should you use y or j for palatal sounds?

'Xackly: Eng/Ger Boris Yeltsin, Swe Boris Jeltsin.


> I hate when people use latin characters to 'sound out' russian words

Except nobody knows how the Russian characters even sound:

https://youtu.be/m0i8IBZklZg?t=9


I'm an experienced dev with deep knowledge of full-stack web as well as infrastructure, but I've never put nginx in front of any app. I want to, but I honestly avoided it because I thought it was difficult. This may not be the right place to ask, but is there a good guide for someone who's deep into nodejs who just wants to set up nginx on Debian in front of a node server with https, and just see how it goes?


I'm just a fool on the internet, but if your appserver is NodeJS, you might want to consider HAProxy over nginx (I say this as a fan of nginx).

The reason being that (unless my information is stale), NodeJS will happily accept all the connections thrown at it, eventually causing each connection to be starved of compute capacity and finally falling over. HAProxy is able to keep a connection queue and feed a maximum of (for example) 4 concurrent requests to the backend(s), thus providing back-pressure to incoming requests. Makes it a lot easier if you need to eventually scale your app horizontally, too.
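For illustration, that queueing behavior is roughly a `maxconn` on the backend server line (a fragment with made-up names and ports, not a complete config):

```
frontend web
    bind :80
    default_backend node_app

backend node_app
    # Requests beyond 4 concurrent wait in HAProxy's queue (up to 30s)
    # instead of piling onto the Node process.
    timeout queue 30s
    server app1 127.0.0.1:3000 maxconn 4
```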


> HAProxy is able to keep a connection queue and feed a maximum of (for example) 4 concurrent requests to the backend(s), thus providing back-pressure to incoming requests.

NGINX can do this as well, either with the "max_conns" parameter for upstreams, or (trickier, but perhaps more effective when the upstream is async) in combination with rate limiting:

    limit_req_zone $server_name zone=root:10m rate=100r/s;
    limit_req_status 429;

    location / {
        limit_req zone=root burst=100 delay=4;
    }


Thank you, that's good advice.


Given your experience, it should be extremely simple, especially if all you want to do is drop Nginx in front of your NodeJS server. All you have to do is install it and add a line in the config that points traffic to your NodeJS instance. Btw, oftentimes people use Nginx for SSL termination and use plain http to the backend (if on the same instance).

Let's Encrypt can automatically add lines to the nginx config that enable SSL, but in some cases it doesn't work properly and the config is malformed. In any case not hard to fix.
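A minimal sketch of that setup (the domain, cert paths, and port 3000 are placeholders for your own values; the cert paths assume Let's Encrypt/certbot defaults):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # Plain http to the Node process on the same host.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```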


NGINX course on https://acloudguru.com/ is good.


It's not that hard. DigitalOcean has good guides.


> Do svidaniya

I wonder if using English letters to write Russian phrases is acceptable practice for native Russians.

Does it sound respectful, neutral or like a mockery?

I don't mean in context of that post, which obviously is respectful, but in general. Especially when unicode is a thing and you could just write до свидания


Transliteration is extremely common in all Slavic speaking countries.

Side note: the letters are not 'English', they are Latin (or Roman).


It's common practice amongst Russian/Ukrainian/Belorussian/Bulgarian speakers to write in Latin alphabet when Cyrillic is not available. I know for a fact that this is taught at school in Russia. Not sure if in other countries.


Native Russian speaker here. It is a perfectly acceptable and common practice when one wants to include Russian phrases in an otherwise non-Cyrillic text. "Doveryai no proveryai", etc.


What if he was Japanese? Would you prefer to see "Sayonara" or "左様なら"?


I know this is getting very off-topic, but Japanese speakers would probably not write 左様なら either - it's usually written in kana: さようなら.

Back on topic, as a speaker of another language using Cyrillic, for me romanisation is perfectly normal/expected in this context. I don't expect English speakers to have to learn a new alphabet just to be able to read the title of a blog post which is otherwise in English.


I don't know. I'm not Japanese. My question is about how people from original culture feel about latin transliterations.


While it is a little more difficult for us to read transliterated text, it is pretty common, and the only way to write something in Russian without a Cyrillic keyboard. So no worries, it is quite acceptable.


I think it depends :)

Back when texting (SMS) was still a big thing, you had a choice: either write in Latin letters and enjoy a 160-char limit per message, or write in Cyrillic and have it reduced to 70 chars. Many were doing the former. I assume many other countries with non-Latin alphabets had the same.


That depends on the person who is listening/reading.


Are you Russian? How would you feel about it if you read it?


Feels respectful, but again even for me personally it would depend on other factors.


I still remember when it was new and so small that a novice could review it without a headache. It was, to me, the answer to attacks like slowloris and the C10K problem. And the config is by far more pleasant than Apache or IIS. I use it to reverse proxy all the time! Thank you Igor and best wishes.


[flagged]


It was downvoted because you originally posted it to the wrong thread (https://news.ycombinator.com/item?id=29981188, instead of https://news.ycombinator.com/item?id=29985871). I've moved it to the right thread now.

Can you please review the site guidelines and stick to them in the future? You broke more than one of them badly here.

https://news.ycombinator.com/newsguidelines.html


Sorry


Thank you Igor for Nginx. Love using it!


My first HN post was "Nginx established as a company"[1]. I guess this starts the third chapter?

1. https://news.ycombinator.com/item?id=2776622


Is the title a reference to hitchhiker's guide to the galaxy or is that too far-fetched?


I half-suspected that it might be, but have no data to support that hunch. Guess it partly depends on how "So long" translates into Russian; was the title there "Do Svidanya, and thanks for all the fish"?


Ha, I didn’t read it like that but now I see it totally could have been a reference.


It always amazes me to see that even big projects start from one idea, from one person.


What an impact, great job Igor!


But is it pronounced "Ee-gore" or "Eye-gore"? https://www.youtube.com/watch?v=RyU99BCNRuU


Ee-gor, with r a palatalized alveolar tap.


It's a pity they chose to ignore cloud native and got completely overtaken by Envoy. Legacy software now, and Unit has been worked on for almost 6 years and nobody uses it or knows about it.


I still remember pronouncing it as nging-eks, not engine-eks.


"At last!" - said Igor and installed lighttpd ;)


Just want to give a hat tip to Igor and say thanks for giving the world Nginx, which I've successfully been using to power production Linux servers for over a decade.


After suffering for years with Apache in the late 90s and early 2000s NginX was nothing less than a revelation. Thank you so much Igor.


I chose NGINX as a web server in my book, because if you don't know what to use, use NGINX. Thanks Igor for all your work!


All my business servers are fronted by Nginx. I can't thank you enough Igor for such immense contribution to developers.


Am I a bad person if the mention of "Igor" immediately reminds me of Young Frankenstein?


Thank you for all that you have done, Igor!


You’ll be missed pal.


Now if only Putin and Biden could follow this model and cooperation for the greater good


Wow, you usually don't start a headline with "do svidaniya", unless it's for an obituary.

Glad to see everything's fine.


My heart also skips a beat if I see a news article starting with "<First> <Last>, <occupation>".


People need to stop with this Cloudflare captcha madness. I literally accessed nginx.com a few hours ago and now it hit me with a captcha again.


Also... it's nginx.com, owned by F5. Am I the only one who thinks it's a little weird, borderline embarrassing, that they're not fronting traffic themselves? "We make the best load balancers and web server. That's why we... outsourced our ingress to another company that doesn't use our load balancers and only uses our web server as part of their stack. Let the experts deal with heavy traffic, y'know?"


Maybe they don't want multiple TB/sec of ddos? They make the software, not necessarily should have the equipment to eat that kind of traffic.


Are you using Safari by any chance? I used to get a lot of these on Safari, but issue is almost gone on Firefox. I think it's connected to how Safari handles cookies.


Chrome Mobile (Android).


Cloudflare uses a heuristic trust model where it pulls multiple trust signals from the client. It can use several things (including stable IP address, cookies, and I think even a bit of JavaScript grabbing a nonce from local storage).

If you run with a lot of "identity fuzzers" (browsing through Tor, JavaScript off, cookies banned), Cloudflare can't build its trust heuristics and needs to challenge-response more often. I suspect there's overlap between HN readers and use of those sorts of tools, so I think there is a disproportionate number of people around here who run into this issue (whereas most "regular" folk almost never see a Cloudflare challenge / response).


I always encounter a "random" stop-and-solve-me-a-visual-puzzle when visiting an Australian forum related to SOHO networking equipment. According to the description on the challenge page, "completing the CAPTCHA proves you are a human and gives you temporary access to the web property". Thank you, Cf, I guess?

(To be fair, consecutive requests don't get this treatment, just the one in which I jump there from eg. a search result.)


They are breaking the web, plain and simple. Google with AMP, Cloudflare with their idiotic captchas.


No server is obligated to vend my client data.

More importantly: no server is obligated to vend data until it falls over and dies, depriving all users access to that data if it isn't mirrored.

I think Cloudflare has honestly done an admirable job of coming up with a novel solution to the problem of loadbalancing and traffic-shedding in a world with a small-but-persistent percentage of hostile actors.


What's the alternative? Abusive traffic is the norm rather than the exception. This gets into the same argument as when people say they don't need a CDN; it's a relic from when bad actors were rare.


If all I'm doing is requesting an HTML page, why do they have to send me a CAPTCHA? I understand that my request is coming from an IP belonging to a VPN and maybe someone used the same IP nefariously, but I doubt they requested the same page I have.


You can knock over a server with malicious requests of static content. In fact, what you are describing (requesting different pages) is the first step to trying to defeat a firewall rule that would protect against that attack.

This is the era after the invention of Low Orbit Ion Cannon. Attacks that would previously have been technically sophisticated can now be done with a few GitHub downloads and either many volunteers or many compromised machines.


Speaking of which, is it possible Cloudflare is paid by Google to display re-captchas? Someone needs to train those deep neurons.



Doesn’t Cloudflare use Hcaptcha now? They dropped Google because of the cost.


One might argue that it's the abusive actors Cloudflare blocks who are breaking the web.


> Cloudflare uses a heuristic trust model where it pulls multiple trust signals from the client.

The wording "heuristic trust model [...] trust signals from the client", would not be out of place in the context of a sigint discussion.


Just plain Chrome Mobile, nothing else, no VPN.


Interesting. The only other thing that immediately comes to mind is that the web site owner may have blocked the entire country (https://support.cloudflare.com/hc/en-us/articles/200170136-U...). This would be easiest to verify by seeing if other people with similar phone configuration standing next to you get the same experience.


This guy is not an ethnic Russian, by the way; he is not Kazakh either.



