
> Bad Use Cases: CPU heavy apps

I wouldn't say that CPU-heavy apps are a bad use case for Node (i.e. V8). In math/statistics, for example, V8 stacks up well against other scripting alternatives. See e.g. the benchmarks here:

http://julialang.org/

Of course it's still not as fast as C++, but not terribly off either. Expect dramatic improvements here thanks to the push for making web apps faster.




In that example, V8 javascript was 400x slower than the alternatives for matrix multiplication. That's because it's not binding to the BLAS libraries like the 'real' statistical languages do. Of course, Node supports calling out to C APIs, but that's not an argument for javascript; dynamic languages will naturally be terrible at math compared to C.

More to the point: Node does non-blocking I/O, but CPU work is blocking. This means a CPU-heavy request queues up all the requests behind it, each waiting for access to the CPU. You're better off using a threaded architecture for CPU-heavy work.


That's silly. No one in their sane mind would serve web requests AND do heavy processing in the same thread - no matter what language.

In Node, you would have a process responding to web requests, and launch a separate process -- via a convenient child_process with built-in pipe communication -- for doing the heavy CPU work.

The only case where that wouldn't make sense is if you need BOTH a ton of concurrency AND heavy CPU work. In that case, threads would make sense as they would be cheaper memory-wise. But my point remains: CPU-heavy applications don't necessarily rule out Node as a feasible alternative.

PS: V8 was 40x slower, not 400, in that one benchmark result you picked.


Well, in Java I serve RPC requests and do heavy processing from the same thread all the time, and it works great.

What would we have to do in Node to get the same level of performance? We'd need a front-end instance delegating requests to eight back-end instances, one per core? Writing all those bytes over internal pipes and blowing the L1/L2 cache as the data moves across cores? At a certain point, isn't it just easier and more effective to have one thread spinning an accept loop and a thread pool handling the requests, all within one process bound to a port?

As I said upthread, I'm using node for a side project right now and I'm liking server-side js for a variety of reasons, but squeezing every cycle out of the CPU is not one of them.


"That's silly. No one in their sane mind would serve web requests AND do heavy processing in the same thread - no matter what language"

Sure you could; it depends on what you're doing. In many cases it's fine for a request to take a second to complete, but blocking the entire server for that second is not.

I guess you define "heavy processing" a little differently, but in an event-driven model you can't do any (blocking) processing.



