
Anything CPU-bound is a very bad fit for Node. But most web services are IO-bound, and Node is excellent for them.
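To illustrate the IO-bound case, here's a minimal sketch (the endpoint and timings are made up): while the awaited calls are in flight, the single thread keeps serving other requests.

  const http = require('http');

  // Simulated IO: resolves after a delay, like a db or upstream call would.
  const fakeIo = (ms, value) =>
    new Promise((resolve) => setTimeout(() => resolve(value), ms));

  http.createServer(async (req, res) => {
    // While these "IO calls" are pending, the event loop is free to
    // accept and progress other requests.
    const [user, orders] = await Promise.all([
      fakeIo(20, { id: 1 }),
      fakeIo(20, []),
    ]);
    res.end(JSON.stringify({ user, orders }));
  }).listen(8080);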



> But most web services are IO-bound

Node's still relatively slow for those workloads.

https://www.techempower.com/benchmarks/#section=data-r18&hw=...

See how far down in each section you have to scroll to find node, even for workloads that are purely "accept a request and respond with a static string". You'll see lots of Java and Go on your way down.

And most services do far more compute than just shoving bytes between services. There's request parsing, response encoding, and usually at least a bit of data manipulation.
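Even the encoding step alone is measurable CPU work. A rough illustration (the payload is made up and the number is machine-dependent; the point is that it's pure CPU):

  const payload = { items: Array.from({ length: 1000 }, (_, i) => ({ i })) };
  console.time('stringify x1000');
  for (let i = 0; i < 1000; i++) JSON.stringify(payload);
  console.timeEnd('stringify x1000');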


This benchmark mainly measures the performance of the HTTP parsing and database libraries, often in a suboptimal default configuration. In node land, they're admittedly not amazing, but nothing about the language prevents them from being better.

For example, for a long time the only reason node was slow on these benchmarks was the built-in URL parser. Replacing it with this carefully written module https://www.npmjs.com/package/fast-url-parser resulted in a 2x improvement on the benchmark. I haven't looked closely at the situation recently, but I imagine it's still quite similar: lots of low-hanging fruit lying around, stalled due to backward-compatibility concerns.
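The swap is close to a drop-in, since fast-url-parser mirrors the legacy url API (a sketch; not verified against the current version of the module):

  // Instead of: const url = require('url');
  const url = require('fast-url-parser');

  // Same legacy parse() signature as the core url module.
  const parsed = url.parse('http://example.com/a?b=1', true);
  console.log(parsed.pathname, parsed.query); // "/a" { b: '1' }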

For proof, find "es4x" in the benchmark list: it basically replaces the entire stack of HTTP parsing and database libraries with the ones from vert.x and runs JS on Graal, even though Graal is currently at least 2.5x slower than V8 in terms of JS performance: https://github.com/graalvm/graaljs/issues/74

Node core (and the libraries around it) has unfortunately stalled in the "good enough" zone for quite a while. The good bit is that it stays in the good-enough zone even after you add your own code on top.


I think your point actually proves something important that I ended my initial post with: there is almost never a "pure IO" workload; compute is an extremely important part of any service. Given Node's concurrency model it's even more important, since compute can block other operations.
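A deliberately bad sketch of that failure mode: one request doing synchronous compute stalls every other request on the process.

  const http = require('http');

  http.createServer((req, res) => {
    let sum = 0;
    // Seconds of synchronous CPU work; nothing else on this process
    // (other requests, timers) runs until the loop finishes.
    for (let i = 0; i < 5e9; i++) sum += i;
    res.end(String(sum));
  }).listen(8080);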


I had another glance at the framework benchmark, and the quality is absolutely dreadful.

I removed the unnecessary middleware bloat (a Pug HTML renderer middleware for an API server, really? body-parser and a form parser even for endpoints where they're not being used?) and switched to standard pg instead of pg-promise (standard pg also supports promises; pg-promise hasn't been needed for quite a while now).
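For reference, the pg side of the change looks roughly like this (table and column names follow the TechEmpower schema; the connection string is a placeholder):

  const { Pool } = require('pg');
  const pool = new Pool({ connectionString: 'postgres://localhost/hello_world' });

  // pg's query() returns a promise, so plain async/await works without
  // the pg-promise wrapper.
  async function getWorld(id) {
    const { rows } = await pool.query(
      'SELECT id, randomnumber FROM world WHERE id = $1', [id]);
    return rows[0];
  }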

The performance went from 600 req/s to 5500 req/s on the db benchmark: a 9x improvement with 10 minutes of work. I think that's a pretty damning result for the quality of the TechEmpower framework benchmarks, at least when it comes to node. This is just standard libraries and practices, not even hacks like replacing the built-in URL parser with fast-url-parser.


Submit that to them then?


It might be a good idea, although I fear that if I only fix one node framework and keep the rest intact, it will create a false impression that that framework is somehow amazing.

Still better than a false impression that nodejs is somehow slow.

They really need some QC though.


Maybe we'll have to disagree because I'm still pretty sure nodejs is somehow slow.


You don't have to trust me; here is a diff of the work I did that you can apply: https://gist.github.com/spion/2779ae6dc9552c229c1eeacd90c03b...

You can run

  ./tfb --test express-postgres

and compare.

If you can't wait for all the tests to complete, a representative one can be obtained more quickly by running

  ./tfb --test express-postgres --type db --query-levels 10 --concurrency-levels 128


I agree, although for server-side rendering of web frontends it's still the best choice, even though producing a vdom and rendering it to a string is usually CPU-bound.
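The CPU-bound part in concrete terms (React here as a stand-in for any vdom library; the component is made up):

  const React = require('react');
  const { renderToString } = require('react-dom/server');

  // renderToString runs synchronously on the event loop, so this is
  // pure compute for the duration of the render.
  const App = ({ name }) => React.createElement('h1', null, `Hello ${name}`);
  const html = renderToString(React.createElement(App, { name: 'world' }));
  console.log(html); // e.g. <h1>Hello world</h1>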



