Hacker News

Out of curiosity, in what scenarios do you see a forking web server being faster than an evented server that balances requests across cores and can direct a request to the core with the best cache for that request?

I completely agree with the need to go async. The hard part is that many operations can block under the hood without offering an async interface. For example memory allocation, or even a plain memory access if the pages were not actually committed when malloc returned.
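One common workaround for operations that have no async interface is to push them onto a thread pool so the event loop is never stalled. Below is a minimal asyncio sketch of that pattern; `blocking_call` is a hypothetical stand-in for any synchronous operation (a blocking client library, DNS lookup, large allocation, etc.), not anything from the discussion above.

```python
import asyncio
import time

def blocking_call():
    # Hypothetical stand-in for any operation with no async interface.
    time.sleep(0.05)
    return "done"

async def handler():
    loop = asyncio.get_running_loop()
    # Run the blocking operation in the default thread pool so the
    # event loop stays free to service other requests meanwhile.
    return await loop.run_in_executor(None, blocking_call)

async def main():
    # Ten "requests" overlap their blocking work instead of serializing it.
    start = time.monotonic()
    results = await asyncio.gather(*(handler() for _ in range(10)))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results[0], elapsed)
```

This doesn't make the operation itself asynchronous, of course; it just bounds how much of the server a blocking call can stall, at the cost of a thread per in-flight call.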




What does it mean to balance requests across cores? To run T event loop threads/procs, where T is tied to the number of CPU cores? So like, a pre-forking, multi-proc, evented server?

I actually can't think of a case where a multi-threaded/forking-only web server would be faster than that. Again, assuming complete support for async libraries used throughout the web application.

Are there any web servers that have this architecture? NodeJS obviously doesn't. *

* Actually, for maximum absurdity, it looks like Kore, the web server we are currently discussing, has this architecture
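For the sake of concreteness, here is a minimal sketch of that architecture: a parent binds one listening socket, pre-forks N workers, and each worker runs its own event loop over the shared socket. This is only an illustration of the pattern, not Kore's actual implementation; worker count, the protocol, and the shutdown logic are all placeholders.

```python
import os
import selectors
import signal
import socket

NUM_WORKERS = 2  # in a real server, tie this to the CPU core count

def worker_loop(listener: socket.socket) -> None:
    # Each worker process runs its own event loop over the shared
    # listening socket; the kernel distributes incoming connections.
    sel = selectors.DefaultSelector()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)
    while True:
        for _key, _mask in sel.select():
            try:
                conn, _addr = listener.accept()
            except BlockingIOError:
                continue  # another worker won the race for this connection
            with conn:
                conn.setblocking(True)
                conn.recv(1024)
                conn.sendall(b"hello from pid %d" % os.getpid())

# Parent: bind once, then pre-fork the evented workers.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(16)
port = listener.getsockname()[1]

pids = []
for _ in range(NUM_WORKERS):
    pid = os.fork()
    if pid == 0:
        worker_loop(listener)  # child never returns
        os._exit(0)
    pids.append(pid)

# Exercise the server from the parent, then tear the workers down.
replies = []
for _ in range(4):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"GET /")
        replies.append(c.recv(1024))

for pid in pids:
    os.kill(pid, signal.SIGTERM)
    os.waitpid(pid, 0)
print(replies)
```

A production version would additionally set `SO_REUSEPORT` (so each worker gets its own accept queue) or pin workers to cores, which is what makes the "best cache for the request" routing possible.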


Any non-asynchronous application.


Why would you have a web server that doesn't process requests async?


There are a few reasons I can think of.

1) Client libraries you might need to use in your web service might not be available in asynchronous versions.

2) Blocking code is much easier to write than asynchronous code.

3) Your server code is CPU bound, so there's no benefit to an asynchronous model.

4) If your web app runs in an asynchronous server and your app crashes, it'll crash the whole server. On the other hand, in a forking model, only the client that the child is serving will be impacted; the other workers will be unaffected.

5) Memory leaks are easier to contain in a forking model, assuming the child can exit or be killed after N requests.
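Points 4 and 5 can be sketched with a small supervisor loop: fork a worker, let it serve at most N requests, reap it, and fork a fresh one. A worker that crashes is handled the same way, since the supervisor just notices the exit and respawns. The pipe-based "requests" and the constants here are purely illustrative.

```python
import os

MAX_REQUESTS = 3   # recycle the worker after this many requests
TOTAL = 8          # total "requests" to process

def worker(read_fd: int) -> None:
    # A worker serves at most MAX_REQUESTS, then exits; any leaked
    # memory is reclaimed by the OS when the process dies.
    for _ in range(MAX_REQUESTS):
        if not os.read(read_fd, 1):
            break  # EOF: supervisor has no more work for us
    os._exit(0)

read_fd, write_fd = os.pipe()

served = 0
while served < TOTAL:
    batch = min(MAX_REQUESTS, TOTAL - served)
    pid = os.fork()
    if pid == 0:
        os.close(write_fd)
        worker(read_fd)
    # Supervisor: feed the current worker one batch of requests.
    for _ in range(batch):
        os.write(write_fd, b"r")
    served += batch
    if served >= TOTAL:
        os.close(write_fd)  # EOF tells the last worker to stop early
    os.waitpid(pid, 0)      # worker exits after its batch; loop re-forks

print("served", served)
```

This is essentially the pre-fork lifecycle that servers like Apache (`MaxConnectionsPerChild`) and gunicorn (`max_requests`) expose as configuration.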



