
Node does support two concurrency models: non-blocking I/O and worker processes.

Non-blocking I/O is no magic pony, so if you need access to more CPU cores, you start creating worker processes that communicate via Unix sockets. I would argue that this is a superior scaling path to threads, because if you can already manage the coordination between shared-nothing processes, moving to multiple machines comes naturally.

Otherwise I agree with the post. Nothing will allow you to scale a huge system "easily".




There are features you give up by relying on IPC. Unix sockets are cheap, sure, but you still lose performance to the marshal/unmarshal step on every message, and that cost could be largely avoided with shared immutable memory.

I'd much rather Node have a multi-threaded Web Workers implementation that uses immutable shared memory for message passing. Not coincidentally, I'd much rather push the real distributed-scaling problems (such as coordination and marshalling/unmarshalling) up to a much higher level in my application stack. As it stands, I need marshalling in two places: once in the IPC layer and again in the app's scaling-out/distributed layer.



