Hacker News

Was quite excited to try out a little websocket server with Kore till I saw it forks per connection. I don't really want 20k processes handling 20k connections; I was really hoping for an event loop.



Kore does not fork per connection.

It uses an event-driven architecture with per-CPU worker processes. The number of workers can be controlled in the config.
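For what it's worth, the worker count is a one-line setting in the Kore config file. From memory it looks something like this (directive name unverified, so check the Kore docs):

```
# kore.conf (sketch; directive name from memory)
workers 4    # e.g. one worker process per CPU core
```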


Evented I/O is great for extremely high concurrency, but that isn't always the right thing to optimize for. A forking web server might be faster for users, depending on the application.

Lastly, you can't just have an event loop without also building an entirely async platform around it. For an event loop to work well, every operation, from file reading to network requests, needs to be completely async.
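To illustrate the point with a toy sketch (nothing Kore-specific, just a sequential dispatch loop in C): an event loop runs handlers one at a time, so if any handler makes a blocking call, every other ready event waits behind it.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* A toy single-threaded event loop: handlers run to completion, one at
   a time.  If any handler blocks (simulated here with sleep), every
   other ready event waits behind it -- which is why all I/O in an
   evented server must be non-blocking. */

#define MAX_LOG 16
char event_log[MAX_LOG][16];
int  log_count;

void record(const char *name)
{
    snprintf(event_log[log_count++], sizeof(event_log[0]), "%s", name);
}

void blocking_handler(void)
{
    record("blocking:start");
    sleep(1);               /* a blocking syscall stalls the whole loop */
    record("blocking:end");
}

void fast_handler(void)
{
    record("fast");
}

void run_event_loop(void)
{
    /* Both events are "ready" at the same time, but the loop can only
       dispatch them sequentially. */
    void (*ready[])(void) = { blocking_handler, fast_handler };
    for (unsigned i = 0; i < sizeof(ready) / sizeof(ready[0]); i++)
        ready[i]();
}
```

The fast handler was ready the whole time, but it doesn't get to run until the blocking one finishes, which is exactly the failure mode a synchronous library call introduces into an evented server.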


Out of curiosity, in what scenarios do you see a forking web server being faster than an evented server that balances requests across cores and can direct a request to the core with the best cache for it?

I completely agree about the need for async. The hard part is that many operations block without exposing an async interface. For example memory allocation, or even plain memory access if the memory was not truly backed at malloc time (a page fault on first touch).


What does it mean to balance requests across cores? To run T event loop threads/procs, where T is tied to the number of CPU cores? So like, a pre-forking, multi-proc, evented server?

I actually can't think of a case where a multi-threaded/forking-only web server would be faster than that. Again, assuming complete support for async libraries used throughout the web application.

Are there any web servers that have this architecture? NodeJS obviously doesn't. *

* Actually, for maximum absurdity, it looks like Kore, the web server we are currently discussing, has this architecture


Any non-asynchronous application.


Why would you have a web server that doesn't process requests async?


There are a few reasons I can think of.

1) Client libraries you need in your web service might not be available in asynchronous versions.

2) Blocking code is much easier to write than asynchronous code.

3) Your server code is CPU bound, so there's no benefit to an asynchronous model.

4) If your web app runs in an asynchronous server and your app crashes, it'll crash the whole server. On the other hand, in a forking model, only the client that the child is serving will be impacted; the other workers will be unaffected.

5) Memory leaks are easier to contain in a forking model, assuming the child can exit or be killed after N requests.
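Point 4 is easy to demonstrate with a stripped-down sketch (no sockets, just the process model): the parent forks a child per "request", and a crash in the handler only takes down that child, never the server.

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork-per-request sketch: run one request handler in a child process.
   Returns 1 if the child crashed (was killed by a signal), 0 if it
   exited cleanly.  Either way the parent -- the "server" -- survives. */
int run_request_in_child(void (*handler)(void))
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;           /* fork failed */
    if (pid == 0) {
        handler();           /* child: serve exactly one request */
        _exit(0);
    }
    int status;
    waitpid(pid, &status, 0);            /* parent: reap the worker */
    return WIFSIGNALED(status) ? 1 : 0;
}

void ok_handler(void)       { /* pretend to serve a request successfully */ }
void crashing_handler(void) { raise(SIGSEGV); /* a bug in the app code */ }
```

The same isolation also enables point 5: because each child owns its own address space, the parent can simply let a worker exit (or kill it) after N requests and any leaked memory goes with it.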


Really?

| Event driven architecture with per CPU core worker processes

So each process should be able to handle a lot of concurrent connections, just like nginx.

But when I tried the websocket example, I saw only the first worker process responding whenever a websocket was created.


Unless you are optimising for space, there is no real reason to use C for I/O-bound processes (for which event loops are ideal); you may as well use Python (or even JS if you must), as your performance will be dominated by I/O time.


There's always Vibe.d from the D programming language. Granted, you would have to write in D, but most C libraries are available. Concurrency is definitely accounted for, since D supports it natively in the language. Worth a look if you're seriously considering a 'native' approach to web development.



