Thoughts on Rails, Node, and the web apps of today (paulbjensen.co.uk)
83 points by paulbjensen on July 12, 2012 | 99 comments



Node is maddening for one simple reason: asynchronous code shouldn't need to look like asynchronous code, and in fact it's much safer and less verbose if it doesn't.

The whole performance argument is completely beside the point. Yes, we get it, in-process asynchronous parallelism is where it's at. But it simply doesn't follow that we need to live in inversion-of-control hell.

Even the simplest node program ends up drowning in error-handling code, because there's no way to automatically propagate errors back to the right place. Every function needs to handle error conditions immediately, because you can't throw an error up the stack, because your stack is completely meaningless.
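To make that concrete, here is a minimal sketch (fakeReadFile and readConfig are made-up stand-ins for illustration, not real node APIs):

```javascript
// Hypothetical stand-in for fs.readFile, following the (err, data) convention.
function fakeReadFile(path, cb) {
  if (path === 'missing.json') return cb(new Error('ENOENT: ' + path));
  cb(null, '{"port": 8080}');
}

// Every layer must forward errors by hand; nothing propagates up the
// (meaningless) stack automatically.
function readConfig(path, cb) {
  fakeReadFile(path, function (err, text) {
    if (err) return cb(err);       // forget this line and the error vanishes
    var config;
    try {
      config = JSON.parse(text);   // a throw here would never reach the caller
    } catch (e) {
      return cb(e);
    }
    cb(null, config);
  });
}
```

A throw inside the callback would land in the event loop, not in whoever called readConfig, which is why every callback starts with the same two lines.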

This is a huge step backward. It feels like coding directly against system calls in C.

It doesn't need to be this way. There's no reason your language runtime can't be smart enough to switch contexts automatically, so that your code gets to be "blocking" even though the actual OS-level process is never blocked. Erlang does this, Stackless Python does this, any Lisp with good old call-with-current-continuation does this.


It's absolutely a step back, and the Node community is way too harsh on those who point this out. But they are probably making the right call by not forking JavaScript. If you want a synchronous style it's easy enough with node-fibers. The problem is mostly that there isn't a cultural consensus around a node-fibers-based ecosystem of libraries.


This is absolutely correct. The lack of consensus is the heart of the problem. There are several nice solutions (node-fibers being one), but none of them is dominant, so none of them work out of the box with everything else.

I'm convinced the only way you get this kind of consensus is by building a feature into your language's standard library. This is one big reason that languages with robust "batteries included" standard libraries have been successful. It makes it so much easier to interoperate when everyone can just assume the same abstractions are always available.


For those interested in the fibers way of doing things, I recommend checking out https://github.com/scriby/asyncblock. It provides an easy-to-use abstraction to get the best of both worlds (straight-line code without blocking).

I don't think there really needs to be a fibers based ecosystem. Isn't it best if modules don't rely on fibers such that they can be reused in either context?


> It feels like coding directly against system calls in C.

I disagree. Coding directly against system calls is much harder in C, because you don't have the advantage of closures and flexible typing. But, that aside, node is intended to be a very low-level library that facilitates higher-level extensions and abstractions in userland. It is more like C than it is like Python, and that is by design.

Node is JavaScript on the server. It is not JavaScript-like, or Compiles-to-JavaScript, or Erlang-with-semicolons-and-braces on the server. We don't mess around with the language runtime, we take it mostly as-is, and there is a huge benefit to doing that.

When and if V8 implements generators (as they are likely to do somewhat soon), then I expect we'll see a lot of experimentation in this area in userland modules. They'll have to be run with a --harmony_generators flag, most likely, but they won't need to be compiled or do scary bad-touch things with threads and stacks.

When the area has been explored a bit in those userland modules, and one or a few of them are popular and good and intuitive to use, and V8 moves generators out from behind a flag, and they're fast enough to be used in node without introducing performance regressions, then we'll investigate adding something like this to node-core.
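For the curious, those userland generator experiments end up looking something like this. A minimal sketch of the pattern (the `run` driver and the thunk convention here are illustrative, not a real node API):

```javascript
// A tiny driver: each `yield` hands back a "thunk" (a function expecting a
// node-style callback), and the driver resumes the generator when it fires.
function run(genFn, done) {
  var gen = genFn();
  function step(err, value) {
    var result;
    try {
      result = err ? gen.throw(err) : gen.next(value);
    } catch (e) {
      return done(e);       // uncaught errors all arrive in one place
    }
    if (result.done) return done(null, result.value);
    result.value(step);     // wire the yielded thunk back into the driver
  }
  step(null);
}

// Usage: straight-line code, with errors catchable by ordinary try/catch.
function readThunk(name) {
  return function (cb) { cb(null, 'contents of ' + name); }; // fake I/O
}

run(function* () {
  var a = yield readThunk('a.txt');
  var b = yield readThunk('b.txt');
  return a + ' | ' + b;
}, function (err, result) {
  // result === 'contents of a.txt | contents of b.txt'
});
```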

Part of the reason why you meet such backlash from people in the Node community when you complain about "callback hell" is that the model is very simple, and it really is not as bad as it looks at first. JavaScript's bulky "function" keyword does make CPS quite a bit uglier than it is in Scheme, but it's a very reasonable approach to the problem which is extremely extensible.

The crappiest part in my opinion is doing `if (er) return cb(er)` all the damn time. Domains make that a little bit easier, but you're just trading one bit of boilerplate for another, so I don't know. Generators are probably the ideal approach to that problem, but I'm personally not sure they're worth the complexity cost they introduce. I am often wrong, and try to be quick to admit it. We'll see how they change the shape of things once they're a real thing and not just an idea.

In the meantime, use named functions. Use the "async" utility. Use Stream interfaces and .pip() them to one another. And most of all, Don't write big apps! Write small modules that each do one thing, and assemble them into other modules that do a bigger thing. You can't get into callback hell if you don't go there.
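The first suggestion, named functions, is easy to show. A sketch with hypothetical fetchUser/fetchPosts stand-ins:

```javascript
// Hypothetical async stand-ins following the (err, data) convention.
function fetchUser(id, cb) { cb(null, { id: id, name: 'ann' }); }
function fetchPosts(user, cb) { cb(null, [user.name + "'s first post"]); }

// Naming each step keeps the code flat: no pyramid of anonymous functions.
function showPosts(id, cb) {
  fetchUser(id, onUser);

  function onUser(err, user) {
    if (err) return cb(err);
    fetchPosts(user, onPosts);
  }
  function onPosts(err, posts) {
    if (err) return cb(err);
    cb(null, posts);
  }
}
```

Each step reads top to bottom, and a stack trace at least names the function that blew up.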


If you're writing really sensitive things then callbacks are great, but I strongly disagree that this should be considered "good enough" for app-level code. With the current state of the node community, though, it would be very, very awkward to even introduce something else right now. node-fibers is definitely annoying since you have to compile it, but even if you didn't, you'd still be wrapping all of your favorite existing libraries to get a nice interface. As for back-pressure etc., there's no reason why that doesn't work just as well, if not better, and implicitly, with a sync style; many languages already do this really well. I love node, but the core community needs to stop being dismissive of other concepts and thinking that callbacks are simply "the way to go" when they're simply a code smell for many if not most applications.

Stuff like "hey @nodejs people lets make a website called http://fibersarestupid.com in which we provide education on how to use callbacks and streams" certainly doesn't help, it just makes node as a community look childish, maybe the site should be called iDontUnderstandFibersThereforeIDismissThem.com... come on.


> The crappiest part in my opinion is doing `if (er) return cb(er)` all the damn time.

That is the heart of my complaint. And it's why I made an analogy to system calls, because there you end up doing the same thing -- manually propagating error codes.

> It is more like C than it is like Python, and that is by design.

I agree, which is why the original article simply makes no sense when it presents Node as a competitor to Ruby. They aren't really comparable.


Like I said in my nodeconf talk: "Callbacks are Hard... in C!" JS callbacks are a very elegant tool for a very hard problem. In node you have the power and responsibility to manually decide when your thread of execution stops and when it resumes. That's what is hard. Coroutines are another tool for the same problem, but they come with their own set of problems and complexities. Callbacks at least are very simple to understand and reason about.


It seems like so many arguments against Node, or JavaScript in general, are based on ignorance of JavaScript and on programming techniques (like deep nesting) that you would never use in other languages anyway.


Typo: s/\.pip\(\)/.pipe()/


Word!


I'm disappointed that the discussion has (yet again) turned into debating the merits of Node rather than the much more important topic of where to place the MVC logic (client vs server), regardless of the language/framework chosen. Node was just a personal preference for the OP, based on his performance requirements.


The reason these types of arguments are generally perceived as being ignored is because they generally argue a point that everyone who has experience with asynchronous programming already realizes. You need to change your level of abstraction. That can be through language choice, or through flow control libraries, or streams, or switching to a monadic coding style, or replacing nesting with a more functional approach.

If you prefer that the language does that for you, then that's great. There's enough viable options out there to fit anyone's preferred asynchronous style. Node exposes a lot of the low level elements of asynchronous programming (much like the low level programming you could do in C). You can stick with that, or use a flow control library, or employ any other number of viable strategies to avoid deeply nested callback soup. You get to pick the level of abstraction you work on, and not everyone wants to work on the highest levels of abstraction (where the language or framework handles everything and you just write synchronous style code).

The node community could probably do a better job of directing people to good documentation on those different levels of abstraction (it's a bit silly to assume someone will know to use streams, or monads to clean up the style of their code). When node first started to gain traction, it felt like almost everyone wrote a flow control library. Now the problem is solved in almost every way imaginable. The problem is that new users don't know where to look to find those 3rd party solutions, or how to evaluate the tradeoffs.
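Those flow-control libraries are small at heart. Here is a sketch of the core of an async.waterfall-style helper (illustrative, not the real async library):

```javascript
// Run tasks in order, feeding each task's results to the next.
// Any error short-circuits straight to the final callback.
function waterfall(tasks, done) {
  var i = 0;
  function next(err) {
    var args = Array.prototype.slice.call(arguments, 1);
    if (err || i === tasks.length) {
      return done.apply(null, [err].concat(args));
    }
    tasks[i++].apply(null, args.concat([next]));
  }
  next(null);
}

// Usage: each step stays at the same indentation level.
waterfall([
  function (cb)    { cb(null, 2); },
  function (x, cb) { cb(null, x * 21); }
], function (err, result) {
  // result === 42
});
```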

You're right to find it maddening. A lot of other people did as well. It doesn't need to be maddening though, and while you have a valid point, it's not especially relevant given the number of possible solutions (you just have to know how to find the style you're after).


The problem with the "solve it with a library" answer is interoperability. You and your dependencies usually need to agree on which paradigm you're going to use.

The Node ecosystem clearly has no consensus on this point. So you spend a lot of time gluing pieces together.

And I do know how to find the style I'm after. At the moment I'm doing a lot with Q, and I have also used node-fibers. But in either case, you end up doing a lot of plumbing, because you end up depending on other people who chose different (or non-existent) high-level flow control abstractions.


Care to share a link to Q? I haven't heard of it before, and as you might guess, it's almost impossible to find it through search :)

Edit: forget it. Someone else mentioned it in another comment: https://github.com/kriskowal/q/


Another point: you're implying that not blocking the OS-level process is in itself a good thing. But that then requires that your runtime implements its own scheduler, in effect duplicating work the OS is already doing.


This is true, and in an ideal world we would definitely leave it up to the operating system. I would love to see the Linux kernel get so good at massive parallelism that we can simply do it that way.

But as far as I know, it's still not competitive. Which makes sense, since the kernel offers far stronger separation guarantees.


Your argument is valid. But Node's lightweight-ness comes exactly from the async non-blocking model. The more concurrent connections you have, the bigger its advantage. Node can scale without needing to increase memory as much.

If you'd like a sync model on node, fine, add a layer like threads-a-gogo and you are ready. By doing so, you give up some of the lightweight-ness and will have increased memory consumption. It is your choice and it depends on your use case. If you expect just a few hundred connections, or you have lots of memory, you don't need to think about node.


Except that you can have the benefits of lightweight async development without the problems that JS-style inversion of control brings.

Raising an exception and dealing with it in the same lexical scope sure is nice.

The author lists some examples, another is: http://www.gevent.org/


Yes, coroutines are promising.


This completely misses the point. You don't get rid of non-blocking single OS thread. You get rid of the horrible API to use it. Node says "here's a low level API, now write an ad-hoc version of green threads on top of it". But other languages have been saying "we already wrote and tested a solid green threads implementation on the low level async API for you, you can just use that" for years now.


Even green threads increase memory consumption and add implementation overhead.

Anyway, the call for a sync model is nearly as old as node is and I agree that a solid thread implementation should be part of node for those who like to code this way.


Show me the benchmarks. I have no reason to believe that the buggy, ad-hoc versions being written over and over again in every node app are faster than writing one good version and using it.

And it is not a call for a sync model. It is a call for a sane API.


I get your call for a well-designed API, and I am with you.

About a benchmark, I have none, but it is the nature of threading: threads need to store and switch contexts. Even if it is just a few hundred KB each, with 100,000 connections this easily multiplies to a gig of additional memory use. In the real world it is often a few MB per thread, making that scale impossible even for a mid-class server.


No, that's not how green threads work, which was the whole point. You can have an API that presents you with "threads" but has no actual threads underlying it. It is just a state machine running async, event-driven code under the hood. The overhead of such a system is no higher than it is using the naive and error-prone event loop approach.

Here's a paper on using userland threads to get the API advantages of threading, with the scaling advantages of event driven programming: http://static.usenix.org/events/hotos03/tech/vonbehren.html


Programming against system calls doesn't suffer from this problem. In fact, if you use synchronous system calls, the OS does exactly what you're asking for: switching contexts when appropriate.


That's true. I was referring to the inability to throw meaningful exceptions, which is a side-effect of Node's inversion of control.

Basically it means you can't rely on exceptions at all, so you're back in a "always check for the error code" world.


Sounds like what you want is a good Promises library. They guarantee that errors float back up to the original caller (if not handled directly). Promises might be baked into ES6. For now these are the two best implementations:

https://github.com/kriskowal/q/
https://github.com/cujojs/when


Yes, I've used Q, and it helps a lot.

But until the Node community can agree on a standard for this, library writers can't make their APIs dependent on these techniques, so we're stuck at the lowest-common-denominator.

And even if everybody was using Q, it's still an awful lot of boilerplate compared to saying the same thing in a language with coroutines.

And there are painful design choices that make it unlikely everyone will ever agree on a promise library. For example: Q guarantees that promises will resolve on a different stack than where they were created. This is nice, it helps you reason about the code. But there are places where you absolutely need to resolve a promise on the same stack (Node's IO handlers, or parts of the window.openDatabase API), and dealing with the edge cases is really gross.


Library authors won't, and I would argue shouldn't, make their code dependent on a particular promises implementation. When/if either the ES standards body or the Node authors choose a method, it will become the default. Until then individuals will use the library they like, or none at all; we've gotten by without promises in the 10 years that JS has exploded in popularity.

By the way, Q can easily wrap node-style callbacks in a single line.
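For reference, that one-line wrapping is Q.denodeify; its core idea can be sketched in plain JavaScript (using a native-style Promise as a stand-in for a Q promise):

```javascript
// Turn fn(args..., cb) into a function returning a promise: the (err, value)
// pair becomes a rejection or a fulfillment.
function denodeify(fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    return new Promise(function (resolve, reject) {
      fn.apply(null, args.concat([function (err, value) {
        if (err) reject(err); else resolve(value);
      }]));
    });
  };
}

// Usage: wrap a node-style function once, then chain as usual.
var double = denodeify(function (x, cb) { cb(null, x * 2); });
double(21).then(function (v) { /* v === 42 */ });
```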


> Yes, we get it, in-process asynchronous parallelism is where it's at

We actually got it in the 90s when Tcl had this, only better...


IcedCoffeeScript...


IcedCoffeeScript is a nice attempt, but it doesn't really address my core concerns. For example, it's not exception safe (according to its tutorial).


Every piece of Node code I've seen has looked like a giant mess, for the reason you state. I agree, it doesn't have to be this way.


Am I the only one out there that _doesn't_ think single-page apps are the future? It reminds me of the mid-2000s, "AJAX all the things!"

I might be wrong about it, just not on this bandwagon at all. I still find the true power in the internet is hypertext, and single page apps seem to break it. Of course, I also prefer articles over video and not having a firehose of information thrown at me. Can someone point me to a way to use Node that actually makes sense for something that doesn't have to be near-real-time?


You are not the only one.

Client side apps are strangely messy, and make it impossible for search engines to index your site. This is exactly why Twitter is largely walking away from their client-side web site.


There is a whole class of web applications where it does not even make sense for search engines to index your site: todo lists, real-time chat, finance, and so on. So yeah, if you think it is important for your app to export content, then by all means don't build a single-page app.


> Am I the only one out there that _doesn't_ think single-page apps are the future? It reminds me of the mid-2000s, "AJAX all the things!"

I absolutely agree with you. In my opinion the mid-2000s AJAX craze has died down into "AJAX where it's useful, otherwise do whatever." I expect the single page app idea will do the same.

I think the end result will be dedicated pages as the "homepage" of some topic that give full access to that thing and partial access to related concepts. That preserves some of the "I just want to fiddle with the kerblob, don't make me navigate away from what I'm doing" idea without throwing out the hypertext/linking/other beauties that make the internet great.


You seem to be mixing up web pages, which almost never need to be single-page (except if they're a book or something), and web apps, where the concept of a page doesn't make sense.

So no, we will probably never see many single-page blogs, but it makes sense for your bank application to be single-page.


I completely understand that, and agree, but for the 99% of websites that are somewhere in-between a "page" and an "app", this sort of punditry is not really helpful. Most websites are interactive in some way, and also benefit from search engine indexing and linkability. Blogs like these seem to think all apps are true apps (DO something on the web), and others think they are pages (CONSUME something on the web). The truth is that most web applications are a hybrid: you DO something that persists so others can consume it.


The problem is that single-page apps are great for delivering what 10 years ago would have been a standalone desktop app. You're just now delivering it via HTTP.

But no one really qualifies their fanaticism. Sure, node.js is great! Clearly we should use it for our brochure-ware sites too!


> Can someone point me to a way to use Node that actually makes sense for something that doesn't have to be near-real-time?

I'd like to see a CMS+Framework like Drupal built in node. Serving an entire dynamic site (say, a corporate site with customer portal) in node could be extremely sweet.

Edit to point out: just because it's in node doesn't mean it's a single-page, ajaxy app. It's obviously the killer application for now, but as time goes on and we link together more and more with external APIs (that might not answer you right away), the async model will make more sense for even a mid-level webapp or customer portal IMO.


I'm surprised no one considers that single page apps can still work without breaking hypertext. HTML5 history lets you do that and fall back to full pages. Just look at what GitHub does in their repo view.


I started a side project months ago, and the first question I agonized over was: 'web app' or 'api app'.

We went through all the screen mock ups several times before any code was laid down. First thing I knew was it was going to be - for me - a lot quicker to just stick with Rails over node. I played with node before and didn't see my project 'needing' it.

I thought a lot about using Spine. Still hadn't gotten over the Backbone hump, so that was out. And in the end, I just decided that for prototyping and just getting something ready, I needed to stick with the easy and most productive route and just make it a page to page web app. Most of the world still finds this approach acceptable and I was just trying to not get sucked into the technology vortex. 'Real artists ship' I kept telling myself. I did end up using MooTools which seems so old school now, but it totally works fine and I prefer it over jQuery. I felt like a sorta ajax Rails app here and there would be good enough for what I needed to do.

Over the past 8 months, I found pockets of time here and there to put down some code, in spite of a newborn and contracting at a few different startups. I haven't regretted using any of the tech I used and I've been mostly focused on just delivering functionality and getting something ready for people to use. Real artists ship.

Will my Rails app handle 100s of requests a second? Probably not. But, I'm going to love having that problem. It will mean that people will actually be using the app and then I can start thinking about how to scale it up/out.

The shop I'm working with now has a bunch of young kids that are all about node. I think it's great and they even took an old Rails app and rewrote it in node because no one there knew Rails. Ok, dunno if I woulda done that, but it all boils down to - I think - what people are comfortable with.

I'm not that comfortable with node yet. I did see the whole bubbling-up-errors thing and the spaghetti-code possibilities when I was using it. Like some other posters have said, there are MVC frameworks out there that can alleviate or solve these issues. But at this point I just feel like most apps - most users - will not need or appreciate realtime. I've seen pretty pictures and buttons and ui put the sparkle in more people's eyes than ajax (and thusly realtime) ever has.

Is everything going that way (to realtime)? Absolutely. Are we there yet? No way. But I definitely see the writing on the wall for Rails as a page-to-page framework. Maybe they'll come up with something - I've already seen evented php frameworks, so there's no reason to think this will not get sucked into Rails somehow. And I've been using Rails as an api platform for years already, so the whole api discussion doesn't surprise me. But for now, I think there's still plenty of life left in Rails to make apps and companies on.


You are certainly not the only one, and as you point out this is nothing new. Web development is incredibly fad-driven, and the vast majority of web developers can't even tell you why they are jumping from fad to fad.

If you are writing a mail client, or chat app or something, then a single page javascript app makes sense. But I don't think those apps are the majority now, and I don't think they will become the majority any time soon.


[dead]


I'll stop being mischievous, but basically the single-page app replaces the desktop app of the past. This means:

- We build for the browser not the OS, no dependency in an organisation on windows, linux etc, it will even work on your ipad and mobile. This simplifies desktop support.

- server can easily publish updates frequently, very agile and responsive to change

- Takes advantage of the cloud and collaboration, your web app now has an API (your desktop app didn't!) so you can share data, link to other services and other web apps. Think of google docs, why would you create a word document and email it around or save on a share when you can share it in a web app and even have multiple people edit the document at the same time!

I believe the browser is replacing the OS, not surprisingly it doesn't do a very good job and it's a pain to work with but the end result is so much better for your users.

Lastly, I don't understand the suggestion that ajax was a fad from the mid-2000s?? It's used everywhere, and mostly :) to good effect!


> basically the single page app replaces the desktop app of the past

Yes, and I am saying this is shortsighted. We went from silos of information and data on a single computer and evolved into creating information in the public internet. Why go backwards? Outside of a small niche or two is there really a need for desktop apps hosted online? If they don't interact with the larger web, then just leave them on the desktop.

> - Takes advantage of the cloud and collaboration

I find this particularly ironic, since single-page apps actually break the standard of collaboration that has been around since the birth of the internet. I should be able to refer someone to a specific piece of information, they should be able to share it to others. And I'm not talking about hashbangs, just good old HTTP links.

> I believe the browser is replacing the OS

I don't. Browsers can't exist without an OS, for one. We're always going to need an OS of some fashion, and it's always going to have opinions and lock us in to certain paradigms. And that's actually a GOOD thing. I sure hope this misguided fad of the OS-less world will fade soon.


I'm not sure why Sinatra was not considered in the mix here. Sinatra is great for building simple APIs in Ruby. Heck, Express essentially started out as a clone of Sinatra for Node.

One challenge in the Node community is the ecosystem. There are not nearly as many libraries in npm as there are in RubyGems. Of that small set of libraries, a large portion of these have been abandoned. Of that smaller set that haven't been abandoned, there are large portions of missing functionality, are not very stable, or are constantly changing.

There are certain cases that Node works really well for right now, like building real-time chat apps. But having worked on some medium-scale Node projects recently, I constantly find myself re-implementing Rails functionality or rewriting common Ruby gems due to the lack of mature libraries.


Very true, libraries for nodejs are usually not as mature as those for Ruby. That said, the most useful ones are actively maintained; when I have a question or submit a pull request, it is generally answered/merged within a day.

As the OP says, Nodejs is not nearly as mature and proven as Rails, but I believe (and that's really a faith-based opinion) that this kind of architecture is the future.


+1 on Sinatra. Sinatra is simple and is great for shipping. Most of your apps will never have to scale to millions of users, so ship something fast with a tool that you know. Node.js is probably a premature optimization for most projects.

Get your project out the door; if you have to rewrite it down the road, that's fine - at least it's no longer tightly coupled to your whole web app's views/controllers.


I think the author emphasized performance. Sinatra might be Express's predecessor, but it doesn't have Node's runtime behind it to deliver performance (async I/O, low memory footprint, etc).


"deliver performance" is too abstract. I usually throw varnish in front of my ruby apps to get performance. I find this gives me a more productive programming environment (ruby), and insane performance.

Node only becomes useful for me when I need to keep a lot of connections open and handle them simultaneously, and a few other rare situations like that.


Performance is one of the few legitimate arguments the Node community makes. It is wicked fast out-of-the-box compared to Ruby out-of-the-box.

Unfortunately there are many other factors involved in building software that matter as well.


If you want async sinatra, try https://github.com/kyledrake/sinatra-synchrony


Tried running Sinatra on JRuby?


I'm sure Node is generally faster and consumes fewer server resources than Rails. How about speed of development, though, for things that do involve a relational database on the server? Rails has so much in terms of convenience functions that it's hard to jump to anything else, outside of applications where it's clearly not suited. Erlang has been faster and better at async than even Node since before Node even existed, but it's painful to use after getting used to Rails, because it feels like a lot more typing and a lot more doing stuff by hand that Rails handles easily.


I agree, Erlang is very interesting (especially for the hot-code swapping capability). I've been looking into it recently, specifically a project called Elixir (http://elixir-lang.org) which is a language built on top of the Erlang VM.


Also, check out Chicago Boss!


Erlang also has the benefit of being able to compile to native code, if you wish to do so.


Well, V8 kinda barfs out native code too... I know it's not the same thing[1], but it's not as if, to run a loop 10 times in Node.js, the parser has to read the line "console.log('hello');" 10 times, tokenize it, generate a parse tree, etc.

I just wanted to point it out because I think most people don't know about it, and that's why they are amazed at Node.js's speed (how can an interpreted language like JS be so fast?!).

[1]: It's definitely slower than native code generated by Erlang, I'm not saying it's faster or even on par; but still, it's much better than an interpreter! This StackOverflow answer is relevant: http://stackoverflow.com/a/4220550/347353


http://shootout.alioth.debian.org/u32/which-programming-lang...

Actually does show V8 as being faster than Erlang's HiPE... I guess it pays off to have someone like Google sinking all that work into making things fast.


Yes. But let's hope that Go thing doesn't distract them...


> I see a picture emerge where the back-end will become purely an API component

Not so fast.

That makes sense for highly interactive apps with private content (Trello, Gmail, etc), not so much for public-facing and content-driven sites.

That is, unless you're OK with sacrificing that organic Google traffic to your site?

If not, you have two options: do the entire rendering + biz logic on the server (in which case Rails still makes a lot of sense, though some Node frameworks are getting there), or duplicate your client-side code on the server so that Google's hashbangs work.

The bigger picture is that the web + search engines are kind of broken right now: There's a tremendous opportunity to make progress and use the architecture pointed out by the OP, but as of today there is no simple solution to make the architecture play nicely with Google without duplicating the work on the server (Meteor seems to be making progress there).

Until a simple solution is found, I think we're stuck with the dilemma and will have to continue to do MVC both on the client and on the server.

It sucks.


I assume he's talking about web apps, not web sites.


> I have abandoned Ruby and Rails in its entirety

So I thought he was talking about both. (As I once considered doing, too).

Also very frequently the distinction is rather blurry: Is Twitter an app or a site?


When you have a content-driven site that has to be indexable while supporting user interaction, usually server-side generated HTML with client-side AJAX (preferably with graceful degradation) can be an effective pattern.

For line-of-business apps or apps in general that can benefit from a rich interface and possibly local storage, full client-side presentation layer (HTML or native) coupled to server-side pure web services can be effective.


Dammit, it's not real-time web. There are no time constants considered, designed for or proven.

At best it's "live" web or "quick update" web.


Sorry.


Sorry, I didn't mean to come over as angry. No need to apologise.

It just bothers me that the term real-time is being misused a lot nowadays.


Yep, I like the term Live Web.


I understand.


Realtime-ish. ;)


Rails developers seem to be having this revelation a lot recently. I don't think it's a particularly new idea, and I don't see why Rails vs Node vs Java has anything to do with it. MVC is ok for a backend that's "just an API" (although there are probably better ways to structure things, such as CQRS). The language is irrelevant apart from the reasons it's always been relevant (speed of development, ease of deployment, testability, etc.).

I think the point at the end is worth expanding on, though: as long as your API is an API and not a thin connector to a specific backend technology, you can in theory swap out parts or the whole thing. Some time ago my company decided to build a new "web application", and there was some debate over how to structure the communication between (at the time) Flash and the backend. Some argued in favor of a proprietary protocol that makes it very easy for Flash to talk to Java and share data structures. Had we gone with that approach, we would have: 1. not had an easy path to an "external API" by leveraging the identical internal API already built, 2. been stuck with Flash, or at least had a much harder time converting to HTML when that became obviously the right thing to do, and 3. been stuck with Java on the backend, with no clean service interface between the backend and frontend models.

It's always necessary to take some shortcuts in development, and I believe it's very important to choose the right shortcuts. Compromising on interfaces and encapsulation is almost always a terrible idea; better to spend the time hiding an ugly implementation behind a clean interface so that you at least have the option to fix it later.


We had this revelation already back in 2002!

Back then, I was working for a startup that had a product similar in concept to Rails but based on Apache/Tcl, developed around 1998. The initial architecture ideas were taken from AOLserver, in case anyone still remembers it.

Around 2001 we reached the conclusion that it wasn't scaling any longer for the type of loads we needed to handle, and that a better language infrastructure was required.

After some research, the framework was ported to .NET, in those days still in beta, but its JIT could already yield much better performance than our home-grown solution.


> Back then, I was working for a startup that had a product similar in concept to Rails but based on Apache/TCL developed around 1998. The initial architecture ideas were taken from AOL Server, in case anyone still remembers it.

Which Apache Tcl thing? Who are you, by the way? Your profile doesn't show anything.

BTW, Rivet can be scaled, like anything else can, although you're right that given the same resources, a compiled language is going to be more efficient.

These guys use Rivet successfully:

http://flightaware.com/

- A former developer of Apache Rivet.


This was way before Rivet was known, at least in Portugal.

I'll be sending you an email describing how it was.


Ah, AOLserver and ArsDigita, the heartbreaking technology tale of the early web. Such great ideas, ruined by branding. Whatever became of all those people, and what are they working on now?


Having built with Node, Ruby, Python, and PHP, and having even done a web services framework in Node, I can say that the author is both right and wrong. Right - Node is fast/faster, and Rails is probably overkill for making web API stuff. Wrong - single-page apps are the future.

I write single-page apps, especially mobile JS apps, BUT Rails-style frameworks are still incredibly important for 1 huge reason - search engines. If you have a content site, you will have a web front end that is indexable for at least the next 5-10 years. Single-page apps are not as indexable/crawlable yet.

So, will you write some single-page apps? Yes. Are they the one true future? No. The future is probably a combination of native apps on various devices, a traditional web front end that is indexable, and probably some single-page JavaScript stuff powering admin panels and highly interactive portions of your site that don't need to be indexable.

For example, ReMeme (http://reme.me) has a web front end that is indexable by Google, it has mobile apps, and the whole thing is driven by a backend API built on a Sinatra app. At this point I have potentially 4 different platforms talking to the API, not just a "single page web app". If it ever gets onto more mobile platforms, or even the desktop, that's even more.

The web's a big deal and will stay that way, but you'll probably find yourself writing a bunch of interfaces in different languages, platforms, etc. before you know it. Rails vs. single-page isn't the debate. Pick a tool and ship your API; it doesn't matter if it's Rails or Node or Python or PHP.

Once you ship your API, your bigger problem will be how do you manage the complexity of developing for so many platforms?


> BUT Rails-style frameworks are still incredibly important for 1 huge reason - search engines.

A project I toyed with a while ago but never finished (because I don't need it right now) was an app server that can render and serve Backbone views on the server, for search engines and to speed up the first page load (after which most clients would switch to client-side rendering). See https://github.com/stdbrouw/backbone-express. https://github.com/developmentseed/bones does something similar. There's a ton of reasons why people would want to stick with Rails or another server-side framework, but searchability isn't necessarily the big problem.
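The core idea behind those experiments can be sketched in a few lines: if the view template is a plain function, Node can render the first page load (for crawlers and fast startup) and the browser can re-render on model changes with the exact same code. All names below are illustrative, not from backbone-express or bones.

```javascript
// A template shared verbatim between server and client.
function postTemplate(post) {
  return '<article><h1>' + post.title + '</h1><p>' + post.body + '</p></article>';
}

// Server side: wrap the fragment in a full document for the initial load.
function renderPage(post) {
  return '<html><body>' + postTemplate(post) + '</body></html>';
}

// In the browser, the same postTemplate would be called from a Backbone
// view's render(), e.g. this.$el.html(postTemplate(this.model.toJSON())).
```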


I am involved with a Rails project where the creative developer decided a single-page app was what we needed. We use the views/partials as fetching pieces, calling them via AJAX, and all is well. The added benefit is that every unit of namespace has its own home, so SEO for everything is achieved. I am a huge advocate for the newer and better, but Rails is a great solution for single-page applications. Also, it's great to write code in Ruby whenever one can; c'mon, life is too short!


> This decoupling of MVC is well-supported, but you then find that you're using Rails as an API alongside being a web application; It's not a clean definition of responsibilities.

I'm not so sure about this part. To me, it feels like the difference between a web app and an API is simply the fact that an API returns machine readable data (be it JSON, or XML, or whatever). The JSON/XML/etc is just another way of presenting the same data - you should still be able to share a lot of the M and C between your two Vs.
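Sketching that point: one model and one "controller" function, with only the representation (HTML view vs JSON API) chosen at the edge - which is roughly what Rails' respond_to does inside a controller. The names and data here are illustrative.

```javascript
// Shared "M": the data, regardless of who is asking.
const users = [{ id: 1, name: 'Ada' }];

// Shared "C": the lookup logic, regardless of output format.
function showUser(id) {
  return users.filter(function (u) { return u.id === id; })[0] || null;
}

// Two "V"s over the same data: machine-readable JSON, or an HTML fragment.
function present(user, format) {
  if (format === 'json') return JSON.stringify(user);
  return '<li>' + user.name + '</li>';
}
```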


We've been using both Node.js and Rails for the last 4 months. JavaScript is an amazing language, and CoffeeScript makes it even better. After writing a lot of Resque jobs today I have to say my Ruby-fu is not strong, and yet - the Node.js ecosystem is so, so not mature enough for prime time.

The reason I wrote my jobs in Resque? Confidence. If shit breaks, it will be on my side, not inside a package required by another package somewhere deep inside node_modules. I'm reading the source of almost every Node package I use. I check all the issues on GitHub before trying to play with it.

While Node core is stable, everything is so damn fast (thanks, Redis), and I enjoy every minute of developing with this stack, I find myself reinventing the wheel far too often - instead of focusing on my product.


What about that old saying: "The right tool for the job"?


Can you not mix Rails with backbone.js and the like? I thought it was possible.


Backbone.js was originally developed as part of the front-end of a Rails app: DocumentCloud.org

... that said, it's being used with a vast variety of different backends these days: http://backbonejs.org/#examples


Not only that, but there's a whole book on the subject (I'm not affiliated with it):

https://workshops.thoughtbot.com/products/1-backbone-js-on-r...


You can, it just makes some of Rails' niceties unnecessary. Views and "Controllers" are handled up front by Backbone. You just need routing and models from Rails.


> You can, it just makes some of Rails' niceties unnecessary. Views and "Controllers" are handled up front by Backbone. You just need routing and models from Rails.

Backbone.js sends CRUD requests to the Rails app, which are handled by the controller, and the main page is a Rails view.


Backbone is built with that intention in mind. It is quite opinionated in that regard as well. Look at how Backbone collections work to understand why.
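That opinion is visible in Backbone's REST convention: a collection with url "/tasks" makes fetch() issue GET /tasks, and models in it map their CRUD operations onto the usual verbs - exactly the routes a Rails resource exposes. `syncRequest` below is an illustrative stand-in for the routing logic inside Backbone.sync, not actual Backbone code.

```javascript
// Backbone.sync's method-to-verb convention, sketched as a plain mapping.
function syncRequest(method, url, id) {
  switch (method) {
    case 'create': return { verb: 'POST',   path: url };
    case 'read':   return { verb: 'GET',    path: id == null ? url : url + '/' + id };
    case 'update': return { verb: 'PUT',    path: url + '/' + id };
    case 'delete': return { verb: 'DELETE', path: url + '/' + id };
  }
}
```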


Very good article, and exactly how I experienced my last projects. There have been other articles on HN lately about the changing role of the MVC pattern. The views and models are now on the client. This is ideal because the client can respond to many actions without calling the server. The server's role changes to a data-driven backend exposed via an API. The API, however, can also be built using MVC on the server.

Node.js is an ideal candidate for the server, because it is very well suited for building APIs.
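Part of why Node feels natural in that API role: a handler is just a function from request to JSON, with no view layer in sight. A minimal sketch, with an illustrative route table; wiring it into require('http').createServer is only a few more lines.

```javascript
// Illustrative route table: "METHOD /path" -> function returning plain data.
const routes = {
  'GET /status': function () { return { ok: true }; }
};

// Dispatch a request to a route and serialize the result as JSON.
function handle(method, path) {
  const fn = routes[method + ' ' + path];
  if (!fn) return { status: 404, body: JSON.stringify({ error: 'not found' }) };
  return { status: 200, body: JSON.stringify(fn()) };
}
```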


Paul has articulated clearly what I had as a visceral reaction after attending this year's RailsConf: Rails is being relegated to being an API machine, and DHH et al are none too happy about it: http://amillionbetterthings.com/2012/04/26/the-rails-times-t...


I heard the first call of "Rails is dead" around 7-8 months ago. I completely disregarded it then, and I disregard it even more now.

I choose the right tool for the right task. Linger on that over-used saying for a moment.

I use Rails when I'm not familiar enough with the domain (this is the business of software engineering; we are never as familiar with the domain as the domain experts) and it provides me a platform for radically fast development. In the end I may also have a product that can withstand 2-3 years of evolution before we need to scale, if that's even needed (premature optimization...).

I also use Rails when the quality bar is set high, and when I need tons of libraries, and I need them fast. In Node, the quality level of such libraries is much lower (if you haven't seen it, you haven't been doing Node long enough - I hadn't seen it in my first half year on Node either). The number of quality libraries/npm modules is probably several orders of magnitude smaller than in the Ruby gem world. You're going to have to wait 4 years until it reaches a comparable level of diversity and quality.

And I use Ruby when I don't care about slow clients, or when doing system-level work.

At any given time, I keep open the option of using JRuby, which has performance comparable to MRI (the "normal" C implementation of Ruby) and better on a considerable number of criteria. IMHO invokedynamic on the JVM will be a game changer in terms of people considering JRuby against Groovy or Scala.

I use Node.js when I know the domain will remain small and non-complex, and I'll be dealing with slow clients (real users).

To top all of that, backend systems will NOT turn into thin API servers. Not by any chance. Twitter themselves are rolling back their SPA and bringing back server-side template rendering. The number of problems you get into while moving all of your logic and rendering to the client would require a huge blog post, which I'll probably write one day. Most people are not aware of them because they never came from full-fledged, enterprise-level desktop apps - I used to design CADs and am pretty familiar with complex client-side architectures. There are so many dragons there; I'm very happy I now live on the server side, where things are much more predictable.

So no, Rails is not dying and neither is Ruby. With the advent of progress on the JVM and JRuby they never will -- this whole discussion reminds me of "Java is about to die" when Scala came around and "Java is about to die" when Oracle came around. And guess what - it didn't (go read about Java 8).

tl;dr: I use both, and I'd recommend anyone else do too. The server side will never be just a thin API, Ruby and Rails will not die (at least not by that sword), and the only thing one might want to work on is a good foundation of intuition about when to use which.

@jondot


Ilya Grigorik is the man. He had all the best ideas; he had implementations and presentations about asynchronous Rails, about how Node was nothing special, nothing that could not be done with Rails. I wish he were still an integral part of the Rails community. A huge loss.


How big an application have you built with Node.js?

How complicated was the business logic?


I see where this question is going. The logic was rather simple, and even so the minified client-side JS was already more than 200KB. The optimum cut probably lies somewhere in between, with logic on both sides.


The reason Rails is going to continue being awesome, even in the API-dominated web apps of tomorrow, is the asset pipeline. There's just nothing else out there that manages your assets quite the same way. In fact, the asset pipeline makes it dead simple, and even efficient, to program big, high-performance apps in CoffeeScript. Sure, it doesn't force asynchronous behavior, but I feel that's more of a feature than a flaw. Sometimes asynchronous calls are just not possible (as in the case of Authorize.Net's API), and you need to build something a little more robust and easy to test when things go wrong. Therefore, I feel Rails is actually the best choice for an API backend: it has many drop-in monitoring tools available, you can enhance performance by dropping frameworks you don't need, and you can code asynchronously with it but don't have to.
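For anyone who hasn't seen the asset pipeline: the heart of it is a Sprockets manifest (conventionally app/assets/javascripts/application.js) whose comment directives declare dependencies that get concatenated and minified for production, with CoffeeScript files compiling transparently alongside. The file names under the require_tree directives below are illustrative.

```javascript
// Sprockets manifest - these directives are processed at asset compile time.
//= require jquery
//= require underscore
//= require_tree ./models
//= require_tree ./views
```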


Great article; I still wonder why more people haven't caught on to Node, Mongo and related tech. Rails isn't bad - it's full of good ideas and has a rich ecosystem - but at the same time it has aged terribly in lots of areas.


I don't get it, because I have been doing asynchronous server-side programming for years in C, C++, C#, and Java.

It's bad enough that I have to deal with JavaScript on the UI side.


Could you imagine a dedicated JavaScript developer accessing your code through a messaging service? Then you wouldn't have to deal with it at all...

* why the downvote? Is it for being stupid, or for sounding snarky?



