There was a very good talk about this at #nodeconf this week by Tom from Joyent. The basic summary was that threaded network servers like Apache approach the speed problem by pre-allocating resources, a chunk for each thread/instance. The issue is that each machine has a finite amount of resources (RAM mostly), and blocked threads eat up a slice of those resources even when doing no work. That sets an upper bound on the number of simultaneous connections that can be handled.
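To make that resource bound concrete, here's a rough back-of-the-envelope sketch (the 16 GB and 8 MB figures are just my own illustrative assumptions, not numbers from the talk):

    // Back-of-the-envelope bound on thread-per-connection concurrency.
    // Illustrative numbers only, not figures from the talk.
    var ramBytes       = 16 * 1024 * 1024 * 1024; // assume a 16 GB box
    var perThreadBytes = 8 * 1024 * 1024;         // assume ~8 MB stack + bookkeeping per blocked thread
    console.log(Math.floor(ramBytes / perThreadBytes)); // ~2048 simultaneous connections, tops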

In an event-based system the overhead for each connection is at least three orders of magnitude lower, sometimes four or five (hope I'm remembering this right). That translates into a _considerable_ increase in the number of connections that can be handled simultaneously; not quite the same number of orders of magnitude, since other parts of the system become the bottleneck, but still dramatic.
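For contrast, here's a minimal sketch of what the event-based side looks like in plain Node (just the stock net module, not anything specific the Joyent folks showed): every idle connection is just a small closure plus socket buffers on one shared event loop, not a dedicated thread stack.

    // Minimal event-based echo server: one process, one event loop,
    // no per-connection thread or pre-allocated stack.
    var net = require('net');

    var server = net.createServer(function (socket) {
      // Each connection costs only this closure plus socket buffers;
      // while it sits idle it holds no thread and burns no CPU.
      socket.on('data', function (chunk) {
        socket.write(chunk); // echo the data straight back
      });
    });

    server.listen(8124, function () {
      console.log('listening on 8124');
    });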

This was a data-driven observation, not conjecture or assertion. I listened in during a break while the nodejs guys argued about approaches to getting TLS working better; they were worrying about the 1MB overhead of a TLS connection because that's a significant fraction of a connection handler's footprint. Think about that for a minute in the context of a threaded Apache instance. 1MB matters? Wow!

I'm new to this area, but this was very interesting stuff, and it seemed related to the various discussions below about whether a threaded system can do what an event-based system can do.
