
Even without optimization. Peak HN traffic is on the order of 20 requests per second.



Without optimization, Nginx defaults to allowing a maximum of one worker process with 512 connections and a 60s timeout. 20 unique visitors per second will lead to 500 errors in less than a minute.
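To put numbers on it, the relevant knobs live in nginx.conf. A sketch with illustrative values (not OP's config, and stock defaults vary a bit by distro packaging):

  # /etc/nginx/nginx.conf
  worker_processes auto;        # stock default is 1; "auto" starts one worker per CPU core

  events {
    worker_connections 4096;    # stock default is 512; every open keep-alive connection occupies one
  }

  http {
    keepalive_timeout 15s;      # stock default is 75s, distro configs often ship 60-65s; shorter frees slots sooner
    keepalive_requests 1000;    # how many requests one keep-alive connection may serve
  }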


Are you assuming each request takes a second, or how long? A blog should return fast enough that this isn't the case

Edit: Yes, even WordPress. Unless there are 50 plugins, an unoptimized theme, zero caching, and a resource-constrained server.


It does not matter how fast the backend is. A persistent HTTP connection lasts 60 seconds after the last client request, unless the browser goes out of its way to explicitly close the connection.

P.S. OP's website uses Apache, but the same issue of overly conservative limits still applies.


There's no way the connection just sits idle and the worker can't serve other requests for the full timeout, right? That just sounds... wrong. And it's not consistent with load testing I've done before with an nginx setup.


Apache will spawn a process (at least with prefork) and the process will wait for a keep-alive connection to send a new request.

Everything old is new again. Gotta tune it out.
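The usual knobs are the keep-alive timeout and the prefork process limits. A rough sketch with illustrative values (not OP's config):

  KeepAlive            On
  KeepAliveTimeout     5       # 2.4's documented default; older configs often carried 15s or more
  MaxKeepAliveRequests 100

  <IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers      150   # hard cap: one process per connection under prefork
    MaxConnectionsPerChild   0
  </IfModule>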


Is this true? So Apache basically launches a Slowloris attack on itself for every connection?


With prefork there is one process per connection. Look at server-status. There was a threaded version as of 2.4 but I don’t think it worked well.


Depends on the web server's capacity for concurrency (async) or parallelism (threads).


I can’t imagine a case where the browser wouldn’t have the decency to close a connection.


I would expect they keep it open for an improved user experience. Much like the prefetching that browsers seem to do by default.


> zero caching

<strike>When each request is from a different person, you get essentially zero cache.</strike> Nope, server-side caching reduces back-end processing.


There are multiple strategies for caching on the server; without them, IIRC, the PHP code gets interpreted on each request, files get parsed, and the database gets hit on every request.

There's FastCGI caching in nginx, PHP opcode caching in php-fpm, and WP-specific cache plugins like Super Cache. At least this was the case ~10 years ago.
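A minimal FastCGI cache sketch for the WordPress case (zone name, paths, and the php-fpm socket are illustrative, not anyone's real config):

  http {
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
      set $skip_cache 0;
      # keep "dynamic" behavior for POSTs and logged-in users
      if ($request_method = POST)                { set $skip_cache 1; }
      if ($http_cookie ~* "wordpress_logged_in") { set $skip_cache 1; }

      location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 60m;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
      }
    }
  }

The opcode side (OPcache) is separate from this and is generally on by default in modern PHP (opcache.enable in php.ini).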


Also for $5/mo you can use Cloudflare APO to cache WordPress pages at the edge. Yes it will cache even the "dynamic" pages (unless you're logged in of course)


$5/mo is roughly my server electricity cost. The Cloudflare offer is not for me.


That's fine, I was just listing another option. It should be noted that you should still do server-side caching; this just lets you serve from Cloudflare's caching layers too.


Also something like Varnish, which is what Wikipedia uses.


You are correct. I didn't think of the back-end delay because I have a static site (with a comments plugin loaded separately).



Are you sure? My understanding was that nginx would fill up all free connections up to the max, but then would begin draining idle connections so it could create new ones for new visitors.


I can only find a reference to what you said with regard to upstream connections. Are you sure this also happens with downstream clients?

https://www.nginx.com/blog/overcoming-ephemeral-port-exhaust...
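If I'm reading it right, that post is about keep-alive toward upstreams, which is a separate mechanism from client-side keep-alive. The upstream side looks roughly like this (upstream name and address are illustrative):

  upstream app_backend {
    server 127.0.0.1:8080;
    keepalive 32;                     # idle upstream connections each worker keeps open
  }

  server {
    location / {
      proxy_pass http://app_backend;
      proxy_http_version 1.1;         # upstream keep-alive needs HTTP/1.1...
      proxy_set_header Connection ""; # ...and the Connection header cleared
    }
  }

Downstream client keep-alive is governed by keepalive_timeout and keepalive_requests instead.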



