> This is only true if you think your application can do a better job of scheduling than the underlying OS kernel. In many (if not most) cases, this is false.
Why wouldn't you be able to virtually always do a better job of scheduling events (when you control all the code competing for resources) than a generic OS scheduler?
> Small fork()-ed processes can still compete with clever poll(), /dev/epoll, or kqueue()-based servers, if you can keep each instance lightweight.
Do you know of any examples of ultra highly scalable fork()ing servers?
> Why wouldn't you be able to virtually always do a better job of scheduling events (when you control all the code competing for resources) than a generic OS scheduler?
Simply put, I would say that "you" (where "you" is an average webapp developer) probably don't understand scheduling, event-driven programming, or memory management as well as the average kernel developer. I of course don't mean this to extend to anyone in this thread. Imagine, though, giving your average PHP or ASP developer, who may struggle to implement a basic sort algorithm, the problem of implementing cooperative multitasking in a scalable way.
> Do you know of any examples of ultra highly scalable fork()ing servers?
Under real-world workloads, I still consider Apache to be "highly scalable." In my experience, given a 1:1 investment in hardware for front-end and database servers, the RDBMS craps out long before the webapp does, so being able to accept another 5000 incoming connections is only going to hurt you: those connections just pile more load onto a database that is already the bottleneck.