Node.js has jumped the shark (unlimitednovelty.com)
321 points by bascule on Oct 4, 2011 | 126 comments



Am I the only one that considers all of this ranting about Nodejs to be a little bit strange?

I would have never expected a post that was obviously a troll to prompt this much of a reaction on both sides of an issue. That so many of these rants and counter rants made it to the front page of Hacker News is somewhat discouraging.

I have been playing around with node for a few months, and I have tried to stay completely out of this "conversation". With that said, I would like to contribute just a few points:

Bad programmers will be bad programmers regardless of the tools that they use. If they use node and fail to write code that is completely non-blocking, then that is what we call a teachable moment. There is no magic button, all technologies have downsides and tradeoffs.

People keep talking about how nodejs is not good for computationally intensive tasks, but V8 is not a slow environment. Am I the only one that puts computationally intensive tasks into a queue to be taken care of by a pool of separate processes? I am only just getting into web programming, and it seemed fairly obvious to me that you would not put something like that into your main event loop.

Also, if you find that you must put something computationally intensive in your main event loop, then you should use something like node-proxy or nginx to proxy those requests to a number of "nodes".

Over and over again, I have seen that people complain that node is not a good multithreaded environment. Well, yes... That is the tradeoff of using something that is closely tied to the concept of the event loop.

If you are using node because you feel comfortable with threads and you need threads, then you are making a serious mistake. If you are a new programmer and you are using node because someone told you that it is cool, then you are making a serious mistake. If you are using node because you have a problem that can be solved or addressed with an event loop and you understand the tradeoffs inherent in this approach, then you are doing the right thing.

To use nodejs effectively will often require rethinking your approach to fit the tool that you are using.


So, I'm not a node.js guy, and nothing against it, but being a server-side guy in general, I'll paraphrase a quote that Zed Shaw paraphrased from Chinese Kung Fu novels:

"So the intermediate guy is doing all of these backflips and spinning roundhouse kicks and all that, it's very impressive, you couldn't imagine being in that good control of your body. He decides he's pretty good and spars with a master. The master barely even moves and puts the guy on his ass."

I think infatuation with event-driven i/o is one of those intermediate programmer things. We all took an advanced systems course in college, saw the literature from single-CPU days about how it makes so much sense, and more importantly saw how damn clever it was. There are situations where it makes a lot of sense (static file server), and then there are situations where you say "Wait, so you want me to run 8 instances of this thing in order to utilize my 8-core server?".


I think in this case, we have yet to see the master put anyone on their ass. We have non-masters slinging insults and ineffective demonstrations.

Don't typical Python and Ruby deployments also need 8 processes to utilize 8 cores? If I'm not mistaken, this is in fact how Heroku works.


As far as I'm aware, they do. GIL means that even using threads, they still need 8 processes to utilize 8 cores.

There are more languages than those 3, though, and if those 3 are the languages that you're considering, evented vs blocking i/o is way beside the point when it comes to performance. A super-simple thread-per-connection program with blocking i/o calls from C or Java will demolish the most sophisticated evented system you could ever design in a scripting language. That was my "barely even moves" vs "spinning roundhouse kick" comparison.


I did some benchmarks a few years ago playing around with different techniques in C#.

If you spin up a thread for each connection, and the connections are short lived, the time spent spinning up a thread will dominate the time spent communicating on the socket. Event loops perform much better than one-thread-per-connection in simple web services because they don't have the extra thread overhead. It's no accident the hello world app on nodejs.org is this sort of service.

But by far the fastest technique I found was to use a thread pool and an event loop together. When there was any work to do (new connection or new data), I scheduled the next available thread from a thread pool to do the work. This technique requires that you deal with both callback hell and thread synchronization. But it's blazing fast. Way faster than you can get with a single nodejs process.

The code is also many times more complicated, so it really depends on what you're doing. Despite knowing how to make a super high performance server, I do most of my work in nodejs; it performs fine in most applications.


Could you take a look at SignalR? (http://www.hanselman.com/blog/AsynchronousScalableWebApplica...)

* Since this is dealing with async operations, an IIS worker thread takes an incoming request and hands it off to be processed on the CLR thread pool. At this point, the IIS worker thread immediately becomes available for processing more requests

* Threadpool threads will not be tied up by open connections and waiting for IO. Only when executing actual code/work will they be in use

* To explain why threadpool threads are not tied up: async operations use async I/O. As soon as an I/O operation begins, the calling thread is returned to the thread pool to continue other work. When the I/O completes, a signal is sent and another thread is picked up to finish the work.


> A super-simple thread-per-connection program with blocking i/o calls from C or Java will demolish the most sophisticated evented system you could ever design in a scripting language

Are you really sure about that? Perhaps 5 years ago, but have you tried recent versions of node (V8) or Python? I have, and even as an old-timey programmer, I'm impressed.


Summary:

Ted: "Node is cancer, because it's not the one true tool that can do everything! It may be good at IO bound code but it's not so hot at CPU bound stuff".

Node hackers: "Yes it is the one true tool!".

Me: face-palm.

OK, threads can be used to do anything, but they are hard, while async is pretty easy (unless you want to do CPU-bound stuff). Async sucks for CPU-bound stuff, but that's not the problem it's trying to solve (and anyone who makes a big fuss over it from either side is a fool). But none of this really matters much until you actually need to scale.

If you are doing something simple, C or Java will be best, simply because they are faster. But not everyone uses C or Java, and it's not like speed matters that much when your App server can scale trivially, your DB is the real bottleneck, and you only have 3 users (one of whom is your cat).

If you need a lot of connections, some of which block (due to a call to a web service like BrowserID or Facebook - and there's a lot more web sites that need to be optimized for web APIs calls rather than calculating Fourier transforms), you need lots of processes (which is too heavy in Python), lots of threads (and then you need thread-safety, which is a pain, and very easy to screw up), or something async like Twisted or Tornado. Given that Tornado is already really easy to use, and basic async stuff is fairly easy to get right, the choice is easy for me. (I don't know enough about JS, Node, and V8 to really comment on Node, but I'll just assume it's roughly equivalent).

The thing is, I just don't trust threads (at least, not if I'm doing the code). There's far too many ways you can have weird bugs that won't show up without a massive testing system, or when you get a non-trivial number of users. And you can't integrate existing libraries without jumping through a lot of hoops.

Using callbacks looks like "barely even moves" to me, while multi-threaded code looks like a "spinning roundhouse kick", but then, maybe I'm just not good at multi-threaded code.

I guess the most important thing is that threads need to be 100% thread-safe. Async code only has to be "async-safe" in the bits that need to be async. It looks like this:

    @async
    def do_something():
        do_something_not_async_safe()
        do_something_async(callback=finish_up)

    def finish_up():
        do_something_else_not_async_safe()
Whereas threads look like this:

    def threaded_code():
        do_something_100_percent_thread_safe()
        do_more_thread_safe_stuff()

        # fail here when you have a few users
        do_something_that_uses_an_unsafe_library()

        more_thread_safe_stuff()
If you really need to do CPU bound stuff, async sucks. You can create a second server, which handles the CPU bound stuff (and call it using a web interface which is async safe). Or there are pretty easy ways to call a subprocess asynchronously. But arguing over the merits of async programming using Fibonacci sequences as a talking point is not even wrong. Anyone who brings it up is just showing themselves to be a complete tool. That might be what Ted was trying to do (as many of the Node responses have been unbelievably lame), but it doesn't prove anything except that the internet has trolls, and plenty of idiots who still take the bait.


Please, show us even one Node developer who posted "Yes it is the one true tool!" – or even a sentiment that is remotely similar.


https://github.com/glenjamin/node-fib is close enough. Trying to explain how Node can do concurrent Fibonacci without calling another process is falling for the troll's trap - Node isn't "the one true tool", and you don't need to try to prove that it is. The trolls will just say that Fibonacci is too trivial, and you wouldn't use the same approach (co-operative multi-threading) for less trivial CPU-bound tasks.

You can use a hammer to bang in screws, but it's not usually the best way. If you are using co-operative multi-threading as a way to get Node to handle concurrent CPU-bound requests, you are Doing It Wrong (TM). In general, it's better to create a second (possibly not async) server to handle CPU intensive stuff, or fork stuff off to a subprocess (depending on the actual task). There might also be other ways - I'm not an expert.

I'm sure that if glenjamin had a less trivial CPU-bound task, he/she would handle it a better way (depending on what the task was). But to the Node haters, node-fib is just troll food. It's also an interesting example of how node.js works, but the trolls don't care.


Hm...

Bunch of people with limited social skills or sanity, in a virtual desert, having flame-wars with strawmen?

Sounds like burning man.


No, Heroku doesn't work that way; Heroku lets you spin up processes, and those processes can be threaded. I use JRuby and get one process that can use all cores.


I think it's safe to say that typical Ruby deployments (as I qualified) are not using JRuby, especially not those on Heroku. Thus, even if the process spawns threads, they will be limited to one core at a time.

Whether JRuby's threads are actually running on multiple cores simultaneously depends on what mode Heroku is running the JVM in.



Except that event-driven io is actually the simpler solution compared to threads, isn't it?


Depends. For an application where you have significant CPU work, a simple worker pool (not coded by you) with an accept loop

while (s=accept()) { dispatchToWorkerPool(s) }

And an application-programmer method handleConnection(s) is pretty darned simple. Just do blocking I/O from your threads and rely on the machine to swap in and out appropriately.

For the next level, you can have an evented i/o layer that passes sets of buffers back and forth to your worker pool. That's more complex but still spares application programmers from having to worry about either eventing or threads, they just worry about passing back the right bytes.

Here's where things get interesting, though. It turns out someone studied this (in Java) and actually found that stupidly throwing 1k threads at the problem with blocking i/o performed better than a clever nonblocking server, due to linux 2.6's threading and context-switching improvements.

http://www.mailinator.com/tymaPaulMultithreaded.pdf

Some of that might be specific to Java's integration with Linux, but think about it: the Linux guys did a pretty good job at getting the right thread to wake up and assigning it to runnable state. Moving to evented i/o might be the right move for your workflow but it also might make no difference at the cost of some additional complexity.

tl;dr context switching has gotten cheaper/better as linux has improved, 16 CPUs vs 1 CPU changes the math about context switching vs using select(), and worker pool libraries mean you shouldn't actually manage threads yourself.


That thread pool isn't actually that simple. How many threads do you use? If you throw 1,000 threads at the problem with a 2MB stack each, that's 2GB of DRAM you've thrown away (instead of 20MB * ncores per Node process) -- DRAM that could be caching filesystem data, for example, which could have a huge impact on overall performance.

With Node, the DRAM and CPU used scales with the number of cores and actual workload. With a thread pool, the DRAM used scales at least with the number of concurrent requests you want to be able to handle, which is often much larger than you can handle simultaneously (because many of them will be blocked much of the time).

Assuming you're not willing to reserve all that memory up front, the algorithm for managing the pool size also has to be able to scale quickly up and down (with low latency, that is) without thrashing.


What a load of crap... you just wasted 2GB of address space, not of DRAM... You'll "waste" exactly up to the amount of stack each thread uses, rounded up to PAGE_SIZE, which is usually 4KB

Let me guess, a nodejs fan?


You're right -- it's the touched pages that count. In many cases, that's a few MB per stack, which is what I said.


Alternatively, create the socket, fork N processes and have each of said processes run an accept loop. Assuming they're CPU bound, this is going to be pretty much just as efficient and you don't have to prat about with threads -or- evented I/O.

Ain't UNIX grand?


It depends, but I don't think either one is inherently simpler than the other. As a general rule of thumb, I find that asynchronous solutions are generally better for I/O bound stuff while threads are better for computationally expensive stuff.


I think this post is about ridiculing the childish behavior of the Node.js fandom. I have to agree that I've never seen anything quite like this before except the Church of St. Jobs. I always thought that we software engineers are reasonable people. But apparently a subset of us aren't.

However reluctant I may be to join in this time-wasting back-and-forth, I'm just glad that someone has taken the time to point out just how inexperienced the crowd is and how ridiculous the whole situation is. We have enough technology in the world already. We don't need another one that makes making mistakes so much easier than before.

Lastly, I'm extremely disgusted by the misleading tag lines Ryan Dahl put on Node's front page. Programming isn't easy. It can only be easy for so long. Scaling is even harder because it actually requires deep knowledge and insight into how computers and networks work, something that I very much doubt that many of the current Node.js crowd understands. If your solution to your multiprogramming problem is to spawn more Node.js processes, maybe you've picked the wrong tech.


I've seen it from the following communities: PHP, Ruby, Perl, Python, Java, C++, C, C#, Haskell, Emacs, Vim, Common Lisp, and Scheme.

People are tribal. Some people are attracted to tribes, become attached without really knowing why, and start having the strong urge to fuck with the other tribes. Apparently this was good for the survival of the human race. Perhaps it's a bug now that we should consciously compensate for, like the desire to eat a box of doughnuts. mmm.... doughnuts... Good when you were a caveman. Bad when you sit in front of a desk for 16 hours a day and sleep for the other 8.

What's the alternative to scaling an application by running multiple copies? Scaling an application by running multiple threads? Exact same thing. The only advancement over this model is when the runtime can automatically parallelize your code, and you actually scale linearly when it does that. In the meantime, if you use a database server and not an in-memory database, guess what, you're "using queues" and "scaling by starting processes". People do it because it's easy and it works.


This whole thread/process/event queue non-sense is really ruining my appetite. Mmm fried bacon sausage wrap on a stick…

Guess what. When the packets come in from the network, they sit in an event queue. When Apache takes a request event, it takes it from a queue and hands it off to a process. If you are proxying, the webapp server in the back takes the request events from a queue and hands them off to a thread. When you make a DB call, the SQL goes to an event queue and the DB processes the queries one by one. Real-world web apps always have been, and always will be, built with a combination of event queues, threads, and processes.

Nothing to see here. Moving on… oh look! Takoyaki!


__People are tribal. Some people are attracted to tribes, become attached without really knowing why, and start having the strong urge to fuck with the other tribes. Apparently this was good for the survival of the human race. Perhaps it's a bug now that we should consciously compensate for__

That this tribal behavior occurs among software engineers is rather disappointing. Computers are pretty much the leading edge of technology in many respects; technological achievements by mankind should be proof that we're able to use our brains in more advanced ways than basic instinct and produce great achievements like modern software.

This flood of programmers who prefer to join a tribe instead of enjoying the good things from many different 'tribes' just makes me wonder how many of us are really devoted to doing something useful/positive.


  > That this tribal behavior occurs among software engineering 
  > is a rather disappointing fact. 
It occurs amongst software engineering humans. All humans get this to some degree, it's basic ingroup/outgroup psychology.

We're all humans here. It's nothing to do with "devoted to doing something positive" or "using our brains in more advanced ways." It's just the reality of being an evolved ape, and the lack or presence of this trait doesn't make anyone any better/worse than anyone else.


Yes, thank you for writing this. We are all humans, even if we don't want to be, and we have to think about our actions in the context of our genetic programming. In the case of tribalism, even though it's a strong feeling, we have to ignore it because it doesn't get us anything in programming language debates.

The best attitude to have is one of acceptance and an open mind, because the right programming tool applied to the right problem can make solving that problem orders of magnitude less difficult. You can have programmer friends even if you don't unconditionally hate the enemy. In fact, it seems, most people don't care about who you don't hate.


"I always thought that we software engineers are reasonable people. But apparently a subset of us aren't."

Unfortunately no one is immune to it, not even engineers, scientists, Wall St., etc. I try to remind myself of this and keep my baser instincts in check by periodically rereading pieces like Charlie Munger's 'On the Psychology of Human Misjudgement'[1], the list of logical fallacies [2], and anything I can find relating to the psychology and neuroscience of judgement, decision making, perception and bias. Becoming emotionally invested in anything makes it more difficult to admit you're wrong about it and change your mind in the presence of refuting data.

[1] http://duckduckgo.com/?q=munger+on+the+psychology+of+human+m...

[2] http://duckduckgo.com/?q=list+of+logical+fallacies


The whole target of his dissatisfaction is this quote on the Node home page: "Almost no function in Node directly performs I/O, so the process never blocks. Because nothing blocks, less-than-expert programmers are able to develop fast systems." That's bullshit. Evented programming isn't some magical fairy dust. If your request handler takes 500ms to run, you're not going to somehow serve more than two requests per second, node or no node. It's blocked on your request handling.

And all that stuff Apache does for you? Well, you get to have fun setting that up in front of the nodejs server. Your sysadmin will love you.

Basically if you're doing a lot of file/network IO that would normally block, node is great. You can do stuff while it's blocked, and callbacks are easier to pick up and handle than threads. But how often does that happen? Personally my Rails app spends about 10% of its time in the DB and the rest slowly generating views (and running GC, yay). AKA CPU-bound work. AKA stuff Node is just as slow at handling, with a silly deployment process to boot.


Whoa, whoa. "Just as slow at handling?" It's most likely an order of magnitude faster at handling those.

But you're right, deployment isn't a totally solved problem. Unless you just use Heroku: http://devcenter.heroku.com/articles/node-js

I mean, Node was created in 2009. I don't see anybody bragging about how easy it is to deploy yet; just that it's fast and easy to understand.


[deleted]


It's possible I'm misinterpreting the results (or the benchmark is flawed), but this seems to imply that it is often the case: http://shootout.alioth.debian.org/u32/benchmark.php?test=all...


Things will be a little different on x64 but you seem to be reading the data just fine ;-)

http://shootout.alioth.debian.org/u64/benchmark.php?test=all...

The problem isn't really "the benchmark is flawed", the problem is that most of us seem to wish for data that tells us more than it can, magical wishful thinking on our part - Will the application I haven't designed yet be faster, and scale better, and be completed by us faster, if we write it in X or Y? :-)

http://shootout.alioth.debian.org/dont-jump-to-conclusions.p...


V8 is a lot faster than Ruby, so I'd expect page rendering to be a lot faster with Node than with Ruby (supposing both use good rendering engines).

Also it seems a lot easier to "outsource" computationally intensive tasks to other processes with Node than with Rails. With node you can stall the response until you receive a result of the computation (within limits), and use the wait time for other tasks. With Rails if you do that, you block one thread of your limited pool of threads. As an example: with my first Heroku Rails app (free hosting) I discovered the pool of threads was one.


That's a limitation of Rails, not Ruby. One could use Node.js in the same manner. Ruby has been doing async I/O in the form of EventMachine longer than Node.js has even existed.


NodeJS builds on some C async framework, so pretty much every language build on C could go the Node route - but it would be a lot of work. Personally I wish somebody would adapt it for Arc - or maybe not, so that if I ever apply to YC again I can do that for bonus points :-)

EventMachine is OK, I suppose, but I have heard that it has some warts. Are there any good web frameworks that build on EventMachine? Though I suppose the basic Sinatra-Style framework would not be that hard to build...


As of 1.3, Sinatra can do streaming using an evented webserver like Thin, which uses EventMachine. Goliath may also be worth checking out.


> Am I the only one that puts computationally intensive tasks into a queue to be taken care of by a pool of seperate processes?

Thank you. I really find it hard to believe that the reaction to Ted's post was to mess around with Fib implementations rather than to just say, "use a background queue," and move on.

If Ted has a point at all, it could be that he thinks that people using nodejs don't realize they're running in a loop. Maybe that's fair.


Ted hates queues, too.

But I don't, so instead of playing tough-guy and picking on random open source communities that don't want me, I get to write cool computer programs. Maybe we should all do the same.


What's his critique of queues?


It's here: http://teddziuba.com/2011/02/the-case-against-queues.html

TL;DR - It's not so much about queues, but about stacks, i.e. new software stacks. His proposition is that quite often you're better off with existing systems. He specifically mentions syslog, so you're logging your tasks and then the consumers monitor this log. Prevents data loss and lets you potentially restart lost tasks.

(That's his argument. I'd agree if you'd have to reimplement something like that in the pre-built *MQ solutions, but I don't know enough about all of them, maybe that – and more – is already in there.)


Interesting. I actually agree with his points (they are consistent with lessons I've learned overusing queues in the past).

I still use queues today, only more judiciously and with the issue of failure states a very well defined part of the design.


He seems to argue against having a blocking worker wait on the queue, which is exactly what you wouldn't do with Node.js.


So I've worked on an older open source project that does exactly what you suggest. The main loop accepts socket requests using non-blocking IO, figures out how to handle that request, and then forks a child process to handle it appropriately.

This can work great and can scale very nicely, but it also has some major drawbacks compared to other solutions. The whole point of the original post was that single-threaded non-blocking event loops are very fragile. You can easily end up accidentally freezing the entire application.

Other problems with forking event loops include:

- Simple Event loops require you to manually CPS-transform your code. This isn't an issue for a simple request-response cycle, but it quickly turns into spaghetti code for anything more complex. For a college class I once wrote a toy non-blocking P2P client in Python using entirely non-blocking IO, while all the other students chose to use threads. My client was very performant, but the code ended up being FAR more complex than everyone else's solutions for the simple reason that I had to code in CPS the complex multi-step interactions that occur between peers.

- If any of your tasks are not independent and must interact with each other, you have to start dealing with IPC, which can be very difficult to get right without a good message passing library. Even with one, it can still be more complicated than using threads. Clojure's STM, for instance, is useless without a shared memory model.

- Your application is not portable to Windows if you rely on forking.

And these are just some of the problems with using a simple non-blocking event loop to drive your entire application. Throw in the fact that you are using javascript, a very unscalable language never designed to do anything more than provide interactivity in the browser and that has incredibly sparse library support, and you've got lots more problems on top of that. Node.js just isn't as useful as its very enthusiastic proponents claim it to be.
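To make the CPS-transform complaint above concrete, here is a hedged illustration (step1/step2/step3 are hypothetical stand-ins for async I/O operations): three sequential steps that would be three plain lines with blocking I/O become a nested callback chain.

```javascript
// Each step models an async I/O call: it takes a continuation (callback)
// instead of returning a value directly.
function step1(cb) { setImmediate(() => cb(null, 1)); }
function step2(x, cb) { setImmediate(() => cb(null, x + 1)); }
function step3(x, cb) { setImmediate(() => cb(null, x * 10)); }

// Blocking style would read: r = step3(step2(step1())).
// Evented style nests each continuation inside the previous callback:
step1((err, a) => {
  if (err) throw err;
  step2(a, (err, b) => {
    if (err) throw err;
    step3(b, (err, c) => {
      if (err) throw err;
      console.log('result:', c); // prints "result: 20"
    });
  });
});
```

Multiply this by every multi-step interaction in a protocol and the spaghetti the commenter describes follows quickly.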


This post from Glenn Vanderburg a few years back is incredibly relevant:

http://www.vanderburg.org/blog/Software/Development/sharp_an...

Especially this excerpt:

"Weak developers will move heaven and earth to do the wrong thing. You can’t limit the damage they do by locking up the sharp tools. They’ll just swing the blunt tools harder."


It seems we now have three camps:

1. "Node is the magic bullet."

2. "Node sucks."

3. "Nothing is that simple. Learn you some computer science."

#3 is the correct answer, but I'd like to see this point explored in more detail. The current back-and-forth isn't productive. What would be productive is if we had an informed discussion of what Node is good at, what not to do with it, and how it compares with other tools and approaches.

Specifically, I'd love to hear opinions on some of these points:

* Node is opinionated about threads. Negatively so. See http://nodejs.org/#about . Do you agree with these claims?

* Non-blocking IO doesn't solve every performance problem under the sun. Far from it. But what problems does it solve exceptionally well?

* Is Node a breakthrough in computer science? Events and callbacks have been around for ages. What new ideas does Node bring to the table?

* Why exactly is Node supposed to be so scalable? Is scaling to more machines easier with Node than with other approaches? How about scaling to more cores on the same machine? Does Node provide any special support for distributing a task across multiple workers and possibly over the network, e.g. with something like MapReduce?


I've been trying to debunk the overly negative views on threads in the Node community. Here is my presentation on fibers and my common-node project (includes benchmarks): http://www.slideshare.net/olegp/server-side-javascript-going...


+1 I like node (and ringojs) because it's a cool javascript interpreter with a nice set of libraries for practical use.

I can buy the argument that node's single-thread architecture can make things faster, but not before understanding why. I've googled this quite a lot every now and then since node showed up, but no one is willing to explain why; everyone just assumes it's faster, lighter, and easier to scale for some reason. Not doubting it in the slightest, I'd just like to know why that is so.


#3 isn't even necessary -- you're telling someone to go learn calculus when all they really need is basic algebra, at most.

2-3 hours spent on layman's summaries of the practicalities of modern CPU architectures (special attention to cache coherence, what fun), user vs. kernel space and switching between them, select() and its cousins, and TCP would do most of the people involved in these debates a world of good.


I'd venture that 2-3 hours is far too little for someone to learn the topics you list from the ground up. But I'm just nit picking :). Your point is well-taken.

But one big problem is that these are unknown unknowns for many people. Suppose a new but intelligent programmer finds Node and wants to decide whether it's good for some new project. How would that programmer know what to look up? How would s/he know that, e.g., user vs kernel space is even a relevant concern in the decision to use a single-threaded, event-based framework?


All right guys, this is getting absurd.

I am having a ton of fun using NodeJS to actually build stuff. It's fast, it's scalable, it's maintainable, it's great.

Yet another pants-on-head troll has decided that NodeJS is a crime against his tech religion? I don't fucking care.


Actually, this is getting pretty funny. Don't bring your logic and reason to this and ruin for those of us who just bought a bag of popcorn.


This is the worst rant I've read in a while.

It starts with lambasting the fact that Ted Dziuba didn't intend the Fibonacci example to be the one role model of comparing Node.js and other languages, and then the Node.js community (or at least a few members of it) rallied around showing that Fibonacci was actually fast in Node. I agree with this. We should be talking about the big picture, not just single implementations.

Then, it goes on for 4 paragraphs about how slow Node.js was with Fibonacci.

Wasn't it the point that Fibonacci wasn't the point? If the point is to show that Node.js is inefficient with computation, and Fibonacci was just a dumb example, why was the rest of the article about the dumb example mentioned previously?


The point was that putting any $computationally_expensive_fn in your event loop is a terrible idea; in this case he just set $computationally_expensive_fn = fib.

The point is, unless this event loop pretty much does nothing or hands the work off to a pool and returns immediately, it is utterly unscalable and needs a multiplexer to sit in front of it.

Most servers do:

while(1) { $conn = accept_conn(); new EventThread($conn); }

Thus it's simply multiplexed. Alternatively, most other current web languages take the approach of integrating with Apache, which does this one level [up/down?] and they don't have to worry about it at all.

The point is that you can write safer code in other languages even if it is slower by itself. With node.js you can do some really dangerous things if you go it alone. You must deploy node with something else, but that's kind of an afterthought left up to the dev, and loads of people won't, because they're under the mistaken impression that node.js being faster than php/python etc. alone solves all problems and excuses them from sane, scalable coding practices. At best it just puts the problem off a little further down the road. Anything built into a webserver doesn't have to worry about this, while node will.


I thought the point of this article was to generate a cliff notes version of the last week of arguments, coming from the side of someone who thinks node is cancer. Why are we even discussing this?


Yeah. I really want to see this discussion taken in a direction where we're actually talking about Node.js and its computational (non-?)efficiency, instead of this needless back-and-forth.


>Wasn't it the point that Fibonacci wasn't the point?

Not of this post. This guy saw the wrong-track rebuttal and it was his rabbit hole into node. Apparently he wasn't impressed.


Best part of the post was where someone suggested he increase his heap size past 1GB to get a fibonacci algorithm to run.

   After pointing this out, a member of the Node.js
   community (post now deleted) suggested I might have an
   obsolete version of Node with a 1GB heap limit and that
   I recompile without the 1GB restriction so that this 
   retarded algorithm can continue eating up all my system RAM.


It is curious that an array with 1 million doubles would take anywhere near a gigabyte in V8. 1000000 * 8B is about 8MB. Even with 5000% overhead for the data structure itself, that's only 400MB. Where did the other 600 MB go?


Call stack?


Author seems to have missed that node-fib is a joke.

Ted Dziuba is clearly a person suffering from profound intellectual insecurity, and a lack of technical acumen. We should offer him our sympathy rather than enable his self-destructive behavior by praising his "previous trolling" activities.

This article, like the poor sad unfortunate Mr. Dziuba himself, seems to be combatting a strawman: the idea that Node.js users and developers are advocating that it be used for everything. It's a fun platform to use, and pretty good for a lot of tasks, but any grownups close to the project (and there are a few of us) are quite vocal about the fact that it is targeted at a very specific use case: IO-bound network programs.

People get excited about the programs they use, and want to see how well they can do things. When your hammer is fun to swing, you want to see what happens when you hit stuff with it. Ted Dziuba has previously advocated using xargs instead of Hadoop for parallel map-reduce, so I would have thought he might understand that sometimes you use the tool that you understand (if it works) rather than something else, even if that other thing might be objectively better for that particular task.

I don't know which node users articles like this or Ted's are talking about. I've been using node almost as long as Ryan, and I totally love it. I don't know how we could be clearer about what node is and isn't good for.

At NodeConf, in the Committer Panel, someone asked, "What isn't node good for?" We all rattled off a variety of things. The node community is actually pretty sane. We just laugh at stuff like this, since it's so ridiculous, and make jokes like node-fib.


> Ted Dziuba is clearly a person suffering from profound intellectual insecurity, and a lack of technical acumen.

Perhaps, as the one responding to a criticism of a piece of software with personal attacks, you are the one that is insecure? I can understand how, as a fanboy of something that was criticized, you would now take it upon yourself to hurl insults on the person who has wronged you so. This is the internet after all!

> We just laugh at stuff like this, since it's so ridiculous, and make jokes like node-fib.

Actually, you opened this comment by personally attacking Ted, because he insulted some software you happen to be a fan of. I think the most likely reason that you would contradict in such an obvious way something you just wrote is that you don't have a great deal of working memory. Looking at your code on github (https://github.com/isaacs) seems to confirm that theory.


> I can understand how, as a fanboy of something that was criticized

Can you define "fanboy" for me, please? I don't know what it means.

> Actually, you opened this comment by personally attacking Ted

I wasn't attacking Ted. Actually, my heart really does go out to him. I mean, he's a human being, right? Could you imagine living with such a crippling psychological disorder? Just think of it. The pain, the emptiness. The worst part is that he's almost certainly too proud to ask for help. It's tragic.

> I think the most likely reason that you would contradict in such an obvious way something you just wrote is that you don't have a great deal of working memory.

Sorry, I lost you like a third of the way through that sentence.


How is saying that someone has a psychological disorder not attacking them?


Your comment is indicative of the anti-mental-health sentiment in our society: that diagnosing or treating mental health issues is an admission of failure, rather than a sign of courage and responsibility.

You should check your priors, and maybe help create an atmosphere where people with these problems don't feel like their only options are denial or self-destruction.


Since you're giving out free diagnosis over the internet, would you mind doing me?


I'm just saying we should be supportive of Ted's recovery, rather than enable him, that's all.


Your sarcasm sensors are failing. Might want to get those checked.


tl;dr;

Don't use JavaScript for numerical computation. Don't block your IO loops for CPU intensive stuff.

commentary:

I've never met a Node hacker who would ever do any of those things. This post should be called "Straw man Node.js n00bs jump the shark."

I've written web apps in a wide variety of dynamic languages, and you know what, I've never done anything CPU intensive in the request response loop, ever. This has nothing to do with Node, fibonacci in PHP is just as bad an idea (although it might take you a little longer to realize it, as you've probably got lots of PHP threads running behind your Apache).


If you've been following this argument from the beginning (I'm very sorry to say that I have), you'll notice that Ted's point is not that you WILL do CPU-bound tasks in Node, but that Ryan Dahl's misleading statements on Node.js's front page are absurd. Of course no one with any sense will put CPU-bound jobs on the same process/thread/event queue as the one processing the request. But you do have to realize: it is exactly because no one will ever do the above that Node's existence is rather pointless. Threads/processes or events, you'll still pass off the heavily computational stuff to a different processing queue. Events or threads, when that single process starts topping out, you'll still have to launch another process. What makes Node.js different from any technology that has come before? Nothing, except that it comes with a server and it's written in JS. If you like the Node REPL and JS, that's fine, just don't propagate the urban myth that's on Node's front page. I think this is Ted's point IMHO.


My first reaction to this whole business: why would anyone run fibonacci on the same machine that is serving traffic?

For that matter, why would anyone write a CPU-intensive web application in anything but Native Client?


This only proves that many in node.js community and some trolls outside of it don't really understand programming at the event level, BUT insist on writing about it.

First, you shouldn't judge anything by the worst of its kind.

Second, why do I keep seeing related stories piling one misunderstanding on top of another on Hacker News?


Hmm, sounds more like reddit is the thing that has jumped the shark :)

Most people using Node I've had contact with are firmly in the 'right tool for the job' camp. No one sane does CPU intensive tasks inside the event loop... that's what delayed job processor stuff is for.


That's my takeaway as well. Who cares how long a webserver takes to do a Fibonacci loop anyway? The point of a web server is to process lots of relatively small requests quickly and concurrently. Testing node.js with a Fibonacci loop is like testing a Ferrari by seeing how much weight it can tow.


Testing a Ferrari by seeing how much weight it can tow would actually get you a pretty useful performance metric.

The point of this whole thing is that you cannot reasonably do complex computations in Node. While most "Web 2.0" applications probably don't need to do that in response to a request, there are also many web applications that do something that might reasonably be called complex, CPU-intensive processing in response to a user request (although most such things are hard to scale anyway). So it really boils down to "right tool for the job" - and for many practical (but maybe unpopular) problems, Node simply is not the right tool.


This is a serious question, because I honestly don't know the answer. What purpose does node serve in more complex (i.e. non-static page) situations?

Should it just be the intermediary between the users and your database process? Heck, should the database process also handle all the formatting of the data as well as the queries? That is fairly CPU intensive (and doesn't scale well at all), especially if you're getting into larger data sets.


I'm just tired of this, it's beyond dumb.

I remember the spectacle of the Java hype machine and the dot-com 1.0 hype machine. In comparison the node.js hype doesn't even register on the meter. Moreover, I don't see a rash of people misusing node.js to the degree that is prevalent for pretty much every other web technology out there. Are some people going to misuse technology? Always. Is the node.js community out there trying to proselytize to the world, selling node as the solution to every problem on the planet for every developer on the planet regardless of whether it makes sense? Not in the least.

This is a faux-controversy.


And the point of this is?

You use Node.js for writing evented programs using Javascript. It is awesome for that. It is better than PHP, and a bunch of other solutions, for actually handling web clients. Heck, it brings us back to the days where we can handle the C10K problem without much work! And it's friggin Javascript!

All these politics are missing the point. This blog post is missing the point. Things SHOULD be simple. There SHOULD be an insanely fast evented solution to do things. And there is. Get over it.


I'd never heard of the C10K (http://www.kegel.com/c10k.html) but I love scaling issues and this looks to be some fascinating reading. Thanks!

EDIT: Interesting anecdote... Cal Henderson is the author of "Building Scalable Websites" http://www.amazon.com/gp/product/0596102356 and has worked in "the trenches" as the lead engineer of Flickr and a developer at b3ta.

Apparently his newest company http://tinyspeck.com/ is using node.js for their game engine.

Assuming he's planning on scaling to a reasonable size, that seems to be a pretty resounding endorsement that there's at least something going for it. I mean ... that guy's got a bit of experience in working at scale.


I'm very glad the node.js community got trolled so effectively. It's been clear for a long time that most of these advocates are not grounded at all in reality, and it was very entertaining to have them revealed in their ignorant zealotry. Evented solutions have been available in every modern language for some time. They aren't used that often, though, because they have serious inherent limitations!

I understand the appeal of node.js. It makes the web stack feel much less complex, and I like to reduce complexity as much as the next guy. But in the end, node.js is just a thin wrapper on top of a simple C library. In the end, it really only excels at solving a small, narrow, niche problem.


I use node.js for long-lived connections (sending/receiving large amounts of data, or long-polling). This is a task to which it is ideally suited. Putting CPU-intensive operations inside a HTTP handler is something that is obviously not going to work in a single-process, single-threaded event-driven framework. Ted's complaining because node.js is unsuitable for something that it's not intended to be used for.

Defending CGI, in my opinion, also hurts his credibility. The "good old days" weren't so good. Spawning a new process for each request? Re-establishing database connections every time? That shit only worked because there were three people on the internet at the time.


> Defending CGI, in my opinion, also hurts his credibility. The "good old days" weren't so good. Spawning a new process for each request? Re-establishing database connections every time? That shit only worked because there were three people on the internet at the time.

Apparently the server behind SQLite.org and Fossil-scm.org, which gets 250M requests/day, spawns a new HTTP server for each request. There's no database, though.

http://www.mail-archive.com/fossil-users@lists.fossil-scm.or...


fossil-scm.org uses SQLite database (in WAL mode), and the website is the repository itself.

http://www.fossil-scm.org/index.html/doc/trunk/www/selfhost....


Oops. Thanks.


This article is terrible. It boils down to "Ted stirred up some shit in the node community, which I like to troll because I wrote my own programming language and see node as the enemy. As a result of the shit-stirring, people that don't know much about programming defended node. Meanwhile, I wrote my own programming language that nobody uses because nobody is as smart as me, the creator. What's up dawg?"

OK. You have to realize what the community is and isn't. The community isn't every person with a blog. Those people are fanbois; and they exist everywhere.

You've got to filter the noise out. Don't submit every article about something to HN. Don't tell your friends "hey, read this article about a guy memoizing fib, completely missing the point that it was an example CPU-bound algorithm". This is all noise, people that don't know what they're talking about talking.

So, is accidentally blocking an issue in node? YUP! Is leaking space in Haskell something you should worry about? YUP! Is passing a string to a function that requires an int something to worry about in Ruby? YUP! Should you lose sleep at night worrying about whether your Java application is leaking memory? YUP! Should you have nightmares about input data causing your C program to write to memory it didn't allocate? YUP!

Why do you think writing working software is difficult? Because we all have phenomenal tools but are just morons? Nope, we're all morons and we have shitty tools too. All programming languages suck. So you must use them for their strengths, not their weaknesses.

Node.js' strength is that it was designed from the ground up to do everything that can be done asynchronously asynchronously. This is mostly due to the standard library and a bit of C, rather than something intrinsic to the runtime or language. The problems it runs into are CPU-bound computations. People are afraid to split applications into many processes and have some existing tool "scale" them as necessary. As a result, node.js does not work for them, because it doesn't have Java-style threads, which is the hammer they want to use to drive in their screws.

Anyway, I don't really even like node.js that much, but it just feels worth pointing out that no other language solves the denial-of-service problem. If you use preemptive multitasking, you eventually run out of memory for threads. If you use an event loop, blocking starves the other handlers. If you use multiple processes, your process table fills up. The question is: how are you going to deal with it. If you write a thread-based application, you have to figure out how to collect blocked threads (and hope the OS schedules your collector thread). If you write a process-based application, you have to figure out how to collect hung processes. If you write an event-based application, you have to figure out how to ensure you never block. In the end, all three problems are equally hard. It's just that node makes it easy to make 100000 connections hang without killing your Chrome session with your blog editor in it. Write a CGI script in C, use mpm_prefork or something, hit Apache with a million connections, and watch the OOM killer annihilate your system.

Programming is hard. Let's stop blogging.


> You've got to filter the noise out. Don't submit every article about something to HN. Don't tell your friends "hey, read this article about a guy memoizing fib, completely missing the point that it was an example CPU-bound algorithm". This is all noise, people that don't know what they're talking about talking.

>... Programming is hard. Let's stop blogging.

The problem with this sentiment is that right now too many programmers equate popularity with quality. Their only aesthetic is "X number of people follow the project on github". Combine this with the relative inexperience of most programmers and we've got a situation where you can easily flood the market with shitty technologies that only work because everyone believes in them, not because they actually work.

I'm not really talking about Node here, but more your assertion that "this is all noise and us serious real programmers should ignore it". The sad reality is, real serious programmers should speak out about shitty popular technologies before every job requires "20 years Node.js experience".

I'd love for the state of the art to be defined by the state of the art, but sadly, it's currently defined by the best marketing and propaganda. Answering that propaganda with criticism and writing better software is the real answer.


Didn't a similar event occur a few years back (on HN as well)? Around 2006-2007, with Ruby on Rails and the framework wars? Big waves, big disagreements, flame wars.

Good prediction though Zed, I'm seeing ghetto Ruby (Rails related code) lately.

Then it becomes the language wars between newer dynamic/functional languages.

Now it becomes Node.js vs the rest of the world. Are we going to see crappy JavaScript code soon?

PS: At the very least, Rails brought real productivity value to the table (because all the other frameworks sucked back then).


crappy JavaScript code soon

Is there any language in which a larger quantity of crappy code has already been written?


PHP runs a close second


Good point. Should've put "Server-Side JS code" maybe?


"Programming is a pop culture" strikes again, basically.


I think the way we talk and reason about tech has jumped the shark.

* JavaScript is a nifty little language.
* V8 is a nifty little VM.
* Node is a nifty little project.
* Isn't it lovely that Ryan Dahl had an idea and, like, actually did it! He doesn't write ranty blog posts (often), he writes code for people!
* There are worse things to rant about. Things that come to mind are the GFC, the US Govt. bailout, indications it didn't work, education, unemployment. Lots of worse things than Node JS.

P.S. If you ever deployed a Rails app to production then, well my friend, the joke really is on you!


Jumped the shark posts have jumped the shark.


"Jumped the shark" considered harmful.


"Jumped the shark" posts are an antipattern.


"Jumped the shark" joke comments are the sort of thing HNers vote down.


Is """'"Talking about being down-voted before you voice an unpopular opinion" posts have jumped the shark' is dead!""" the "Considered Harmful"-Killer?


I think that's what happened. I guess my meta joke was misunderstood...


jokes are not appreciated


Jumping the shark means some organization has run out of ideas for one of its products and ends up resorting to the absurd to keep people interested. I'm not sure how this relates to node.js.

From my perspective, there was an over-abundance of node.js articles on HN between 6 and 12 months ago. It got to the point where there was a huge backlash and since then there has been relatively minimal coverage of node.js. In other words, node has transitioned from its rapidly fluctuating transient state to its long-term steady state and articles like this just won't resonate with anyone anymore.


Besides entertainment value doesn't all of this back and forth reduce down to the age old "use the right tool for the job in front of you"?

Node.js as it currently stands isn't the best solution for applications that will involve CPU intensive tasks. Perhaps at the outset of a project you don't know when and where those types of tasks will pop up so basing an entire architecture on it might be risky.

However, it seems to me that regardless of the above, there are tasks Node excels at and it would be silly to dismiss it as a technical solution all together. I'm building browser based games with it and love working with Node. The bulk of the logic is in the client and Node serves as the glue that allows people to play games together from different clients. Nothing I have read gives me pause about this implementation.


I'm pretty sure the only sane way to implement CPU-intensive tasks in Node and browsers is web workers. I'm not sure about the state of web worker support in Node, but the resulting code should be straightforward. Even so, the speed of JavaScript is limited, so any serious number crunching will be a pain in the ass.

Node and browsers are seldom used for CPU intensive tasks for now, so it's not a priority.

There are alternatives that handle node's current warts elegantly, can do serious number crunching painlessly, but just don't have the cool factor.


Javascript numbers can only reliably represent integers up to 2^53, as they are implemented as 64-bit floats [1]. Taking F(0) = 0, F(78) = 8944394323791464 still fits below 2^53, but fib(79) and onwards are almost certainly going to be wrong.
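
For the curious, the precision cliff is easy to reproduce. A minimal sketch in plain JavaScript, assuming the usual F(0) = 0, F(1) = 1 indexing:

```javascript
// Iterative fib in plain JavaScript doubles.
// F(78) = 8944394323791464 is below 2^53 = 9007199254740992, so it is
// exact; F(79) = 14472334024676221 is odd and above 2^53, so it rounds
// to a neighboring even integer and arithmetic on it is no longer exact.
function fib(n) {
  var a = 0, b = 1, t;
  for (var i = 0; i < n; i++) { t = a + b; a = b; b = t; }
  return a;
}

console.log(fib(78));                        // still exact
console.log(fib(79) - fib(78) === fib(77));  // false: precision lost
```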

The one-millionth fibonacci number, fib(1000000), has 208988 digits when written as a decimal. It takes about a minute to compute fib(1000000) with python 2.6 and write it to file.

I am completely missing the point, but this does amuse me. Framework/language pissing match descends into performance benchmark battle where no-one cares about correctness?

1. http://www.jwz.org/blog/2010/10/every-day-i-learn-something-...


A minute is too long:

  $ time curl -s http://localhost:1597/sicpfib/1000000 >sicp.out

  real    0m1.856s
  user    0m0.000s
  sys     0m0.000s

  $ wc sicp.out
       0      1 208988 sicp.out

https://github.com/zed/txfib


I have a question about Node.js; I hope it's legitimate.

If I have an endless loop in some method that is supposed to generate some part of some webpage of my webapp, will it stall the whole app until I restart the server?

From my tests it seems so.

What additional facilities are required so that a single bug won't kill the whole app for all users?

The rant mentions putting nginx in front of node.

UPDATE:

It seems that there is a tool called monit that can restart your server when it stalls:

http://howtonode.org/deploying-node-upstart-monit
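
To answer my own question in part: yes, a single blocking handler stalls everything, and it's easy to demonstrate. A minimal sketch (not from any of the linked posts):

```javascript
// One synchronous busy-loop starves every other callback: the timer
// below is due after 10ms, but cannot fire until the loop finishes,
// because node runs exactly one thing at a time.
var start = Date.now();

setTimeout(function () {
  console.log('timer fired after ' + (Date.now() - start) + 'ms (wanted 10ms)');
}, 10);

var end = Date.now() + 200;
while (Date.now() < end) {}   // simulates a handler that never yields
```

Which is exactly why people put a supervisor like monit, or a proxy with failover, in front of node: inside one process there is no recovery from a handler that never returns.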


Isn't it the point of all this madness that jumping the shark for these small and trivial problems is fun and thought provoking?


Jumping the shark is the moment when something, such as a TV show, or in this case a community around a web framework, begins to decline in quality beyond recovery, so “jumping the shark for these small and trivial problems” really doesn't make sense.


No, "jumping the shark" is a single event so monumentally stupid that the show can never recover from it.

Node.js didn't jump the shark. It has some stupid users, but what language doesn't? That he got stupid answers from the community doesn't mean Node.js itself is bad.


I remember having debates like this about the TRS-80 in middle school. These guys need to give it a rest, srsly.


I decided to take it a step further and add fibonacci generation as an Express Middleware.

https://github.com/nuckchorris/express-fibonacci


I'd like to point out that by the time this article was posted, and thanks to some constructive discussion about the approach, the node-fib "project" on github has been updated with a much faster recursive approach which doesn't use loads of memory and doesn't resort to memoisation, but still doesn't block the loop and serves almost as many requests/sec.

My point here, and in writing the lib in the first place, was that naive implementations are naive in any language/vm. I lump splitting an algorithm across the loop into the same band as deciding this task should spawn a thread/worker. And using child processes as workers is still an option in node.
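
The "splitting an algorithm across the loop" pattern looks roughly like this. A sketch only, not the node-fib code; it uses `setImmediate` (older node would use `process.nextTick`):

```javascript
// Compute fib(n) in bounded slices so the event loop keeps turning:
// other callbacks get a chance to run between slices.
function fibAsync(n, cb) {
  var a = 0, b = 1, t, i = 0;
  (function slice() {
    var stop = Math.min(i + 1000, n);   // at most 1000 iterations per turn
    for (; i < stop; i++) { t = a + b; a = b; b = t; }
    if (i < n) setImmediate(slice);     // yield to the loop, then continue
    else cb(a);
  })();
}

fibAsync(30, function (result) {
  console.log('fib(30) = ' + result);
});
```

This and handing the work to a child process are two flavors of the same decision: never let one computation own the loop.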


I don't get why all these back and forth rants get upvoted.


Geez, so many generalizations and false arguments. Some of you could use a basic reasoning class (possibly Philosophy 201 or something of the sort).


Right tool for the right job. Solving real problems is hard. Unix, Microsoft, BeOS, Python, Java, Brainfuck, whatever. Experts tend to be overspecialized and one-dimensional. Pundits have their own agendas. Religion closes the mind's eye. Forget about your preconceptions and embrace the world around you.


Does the same criticism apply to code written with Greenlet/Gevent (see for example the Brubeck web server)?


Yes, not yielding control from a function in any evented framework will block the main event loop.

In programming languages that support coroutines, yielding a sleep every few loops will release the event loop. In languages without coroutines like javascript you will need to write your algorithm in CPS to release the event loop.


Huh. Just write the algorithm in, say, C, and have node do a callback when the algorithm is complete.


I don't see how discussions like this signify the decline of NodeJS. Quite the opposite. Lots of people are thinking about how to do this kind of computation the right way, within the context of NodeJS.

Honestly this discussion has kind of highlighted how bad almost all other platforms are and how much room for improvement there is.


Actually what I have found it has highlighted is just how many of the Node advocates completely and utterly fail to understand their competition. I really, really wanted to think it was just a few isolated people, but the evidence suggests that the meme that Node is actually some sort of multiprocessing breakthrough has spread further in the last few months, rather than dying out.

So let me lay it out for you: Node's multithreading is primitive. There are in fact other languages that can simultaneously do long-running computations and serve web pages at the same time, because they are not single-threaded under the hood. That is a Node limitation, not a fundamental computing limitation. Many of those other environments can also trivially use multiple cores within the same OS process.

We are discussing the limitations of Node with regard to long-running computation precisely because it is a limitation that Node in particular has, and one that other environments do not. We are discussing this limitation precisely because the casual presumption in your last sentence, that Node is the epitome of programming platforms and therefore if it has a problem then everything else must suck too, is false.

I am staggered at the degree of ignorance of other platforms being displayed by the Node partisans here, because it's not "well, OK, that's nice, but in practice I don't care", which I would disagree with but would at least consider debatable (and I mean that quite honestly), but rather the Node partisans just continuously talking past the point entirely, with little evidence that they even understand what is being said. I say this from experience over the past couple of days, where it's pretty clear the Node partisans seem fundamentally incapable of considering the concept of "an environment that does not have the fundamental limitations of Node".

Several of you seem to be expressing confusion about why some of us aren't coming around to seeing the light. To you all, I'd recommend this essay: http://www.paulgraham.com/avg.html Hint: Node is blub.

Let me be clear about my motivation in writing this, and generally trying to tamp down on the Node hype. First, it's bad for Node. I've seen this cycle before, and as fun as the ride up is, when it is predicated on false claims it tends to explode at some point and the whole thing goes down in flames. (A process that may now be starting.) Second, those of you who only know Node and think it is the epitome need to have it explained to you that there are in fact other tools that are more useful in these cases, so if you ever encounter those problems you can use the right tool for the job. Otherwise, you'll blow untold manhours trying to force an old paradigm to do something you could have done in a more modern one much faster and more effectively. I don't hate Node; I hate the hype, and the damage it is doing to a large group of people by lying to them about what the competition does.


how many of the Node advocates completely and utterly fail to understand their competition

I agree, but as someone who is generally excited about Nodejs, I see it as necessary growing pains. These kinds of criticisms and back-and-forth flamewars don't make a language or platform fade into obscurity; they force it to mature, either technically or socially. They also have the benefit of bringing its most ideological advocates down to earth. I feel as though I watched this whole process with Rails, which went from The Greatest Thing Ever(tm) to simply another framework with some big advantages to go with its known disadvantages.

And now it's happening with Nodejs, and I think that's ultimately a good thing.


As somebody who got into Ruby shortly before Rails (because I'd heard it was a "modern Smalltalk") and was writing JavaScript back when DHTML was the new shiny thing, that's my perspective too. The evolution of Node.js reminds me a lot of Rails. It lent an air of legitimacy to a language that a lot of people used to look at sideways if at all, and it's rather overhyped by an influx of people who were plagued by the problems it solves, but it is fundamentally pretty good even when you drill past the hype.


My take on that article is that the fact that node is written in "blub" could equally be seen as an advantage.

Popularity will also mean a broad based skill set ready to work for you when it comes hiring time.

Of every developer who's worked in Ruby, PHP, C#, Blub... they've all had to have their hands on Javascript. So this one-ring-to-rule-them-all isn't necessarily about being the best & most powerful language ... but it's the one that might be easier to hire for.

What of the social cachet attached to working in the "cool" language?

I wrote a project in node and I loved it. I learned a mountain about Javascript. And in that respect, it's been a resounding success.

It has even been mildly successful running at ~1000 uniques a day, ~3000pv. In retrospect, it was absolutely the wrong tool for the job. Totally stupid. Should have used PHP & MySQL. (Or even node+mysql for that matter).

Regardless - I'm now a lot clearer on what a good use of node is vs. what a bad use is. (Hooray for deliberate professional practice).

I've read about some other languages that "already do what node says it does" - I've heard "Ruby's twisted something-or-other does that", either LISP or SCALA or something.

But I couldn't work in those languages and simultaneously increase my knowledge and understanding within my current professional practice (in a way that is directly relevant). With node/javascript I am able to get both.

I'm not actually sure if my point is very well made. It's certainly not a rebuttal to what you've said (Or even a very effective redirect for that matter).

I'd love to know more about the other stuff & the competition (as you say), but I was attracted to node. I've only got so many hours a day to program. I've got to start somewhere and picking up node (to me) seemed a really fantastic place to start.


If you really want to find out what makes Node.js tick, do yourself a favor and spend some quality time with C and the related I/O APIs, like select/poll, epoll, kqueue, AIO and all that stuff.

After that you'll have 3 revelations:

(1) everything sucks badly

(2) non-blocking I/O really is available on every platform and in every programming language

(3) you'll learn to appreciate older developers that have solved these problems years ago, without ranting on stupid blogs


Bulldonkey. Javascript on the client side will continue for the foreseeable future, which means those developers will want to use the same language to do server-side work. Just because it hasn't taken hold doesn't mean it won't.


Another boring, silly "jumping the shark" post


The man is butthurt about trolls trolling trolls trolling a man who's butthurt about bad software.

All I have to say: the cycle continues.


I finally decided to do some research into what Node.js actually is, and it turns out that it is just another web server. Seriously? That is what all the fuss is about? Its main feature seems to be that it has an event-driven loop. Yay, an event-driven loop; it's not like they haven't existed for a really long time. Whoever is marketing this thing must be a marketing genius.

On the other hand, the real appeal might be that it allows a lot of programmers who only know JavaScript, or who feel most comfortable with JavaScript, to do server-side programming with JavaScript. Something that, as far as I know, wasn't available before, which is why this thing has become popular.



