I was lucky to be able to catch a preview presentation of this "Holy Grail" project at SpainJS, last summer. It's very interesting stuff. I think that one of the more important caveats to mention is that to the extent that your application is composed of rich interactions, visualizations and interactivity, and logged-in-user-only content, that stuff tends to remain on the client-side alone ... but for your basic rendering of JavaScript models to a flat HTML page, this is a great project to keep a close watch on. In particular:
* Caching of shared static HTML for fast-as-possible page loads.
* Google indexing and searchability.
* Backbone apps that feel more like a series of pages and less like a single-page app.
... are all things that are addressed quite neatly by Rendr.
Jeremy, I think you're mistaking this for Keith Norman's SpainJS presentation (http://www.youtube.com/watch?v=jbn9c_yfuoM). He proposes the same approach, but I don't know if it ever got past a demo. Although it seems like they may be using some form of this at Groupon in production.
Anyway, it is exciting, isn't it? This is just the beginning for us -- we've had to make a few hacky design decisions to be able to ship, but I think we will get the kinks worked out. The trick, and the challenge, seems to lie in choosing the right set of abstractions to hide the code complexity from the application developer. I hope to open source it as soon as I can, to benefit from the input of luminaries such as yourself!
Oh yeah, and the offer to give a Tech Talk at Airbnb next time you're in SF still stands :)
Oh my goodness. I totally am mistaking it for that presentation. It's the same general concept, and they also called it the "Holy Grail". Apologies for the confusion.
Jashkenas, do you have any plans to venture into this area?
I was just talking today about how I'm tempted to try some of the other libraries that are going in this direction, but whenever I look at their code I'm envious of how clean Backbone is. Seriously, the biggest turn-off to Angular is reading the code and seeing that its authors aren't as nit-picky (pseudo-OCD, whatever you want to call it) as you are. Just curious.
I've got no immediate plans to play around with any Backbone-on-the-client-and-server stuff ... the next web app library in the works is only semi-related:
The basic idea is that for many public-facing websites (think NYTimes, natch, or Airbnb's search result listings, for example), the usual Rails convention of "Hey, here comes a request, let me generate a page just for you" is fairly inappropriate. Lots of "publishing" applications will melt very quickly if Rails ever ends up serving dynamic requests. Instead, you cache everything: on disk with Nginx, in Memcached, or in Varnish.
But you know when the data is changing -- when an article has been updated and republished ... or when you've done another load of the government dataset that's powering your visualization. Waiting for a user request to come in and then caching your response to that (while hoping that the thundering herd doesn't knock you over first) is backwards, right?
I think it would be fun to play around with a Node-based framework that is based around this inverted publishing model, instead of the usual serving one. The default would be to bake out static resources when data changes, and you'd want to automatically track all of the data flows and dependencies within the application. So when your user submits a change, or your cron picks up new data from the FEC, or when your editor hits "publish", all of the bits that need to be updated get generated right then.
It's only a small step from there to pushing down the updates to Backbone models for active users ... but one step at a time, right? No need to couple those things together.
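The inverted model described above can be sketched in a few lines of plain Node. To be clear, this is a hypothetical illustration, not the API of any real framework: pages declare the data keys they depend on, and publishing a change to a key bakes out only the affected pages.

```javascript
// Hypothetical sketch of the "bake on change" model (not a real library).
// Pages register the data keys they depend on; publishing a change to a
// key regenerates only the pages that reference it.
const pages = new Map(); // name -> { deps, render }
const baked = new Map(); // name -> last baked output

function definePage(name, deps, render) {
  pages.set(name, { deps, render });
}

function publish(changedKey, data) {
  for (const [name, page] of pages) {
    if (page.deps.includes(changedKey)) {
      // In a real framework this would write a static file for
      // Nginx/Varnish to serve; here we just keep it in memory.
      baked.set(name, page.render(data));
    }
  }
}

definePage('article-42', ['article:42'], (d) => `<h1>${d.title}</h1>`);
definePage('home', ['article:42', 'article:43'], (d) => `<li>${d.title}</li>`);

// Editor hits "publish": both dependent pages are rebaked, nothing else.
publish('article:42', { title: 'Hello' });
```

Tracking the dependency graph automatically (rather than declaring `deps` by hand) is the hard part the comment alludes to.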
>But you know when the data is changing -- when an article has been updated and republished ... or when you've done another load of the government dataset that's powering your visualization. Waiting for a user request to come in and then caching your response to that (while hoping that the thundering herd doesn't knock you over first) is backwards, right?
>I think it would be fun to play around with a Node-based framework that is based around this inverted publishing model, instead of the usual serving one. The default would be to bake out static resources when data changes, and you'd want to automatically track all of the data flows and dependencies within the application. So when your user submits a change, or your cron picks up new data from the FEC, or when your editor hits "publish", all of the bits that need to be updated get generated right then.
You mean most things don't already do this? I've been working on a personal blog engine with this as one of the core ideas (basically all static assets and pages are compiled on edit), and I thought it was a pretty obvious way to go about it. Looks like I'm indeed not the only one to think of it, but how "new" you present the idea as is a bit surprising to me.
For simple problem domains (blogs, mom-and-pop store websites, etc.) it's trivial to pre-generate content. Larger content systems can have a much more complicated dependency tree, and then you face a choice between keeping the dependency logic accurate and regenerating the entire content set on any change.
It also turns out that content sets that change infrequently but unpredictably are a pain to cache. You can cache them for a short time (as long as stale content can be tolerated), but then you lose cache effectiveness. Or you can cache forever with some sort of generational/versioned cache, but that doesn't play well with named, public resources. Telling your visitors and Google that it's yourdomain.com/v12345/pricing, not yourdomain.com/v12344/pricing, doesn't really fly.
I definitely concur with your surprise about it being novel though. I think that for many situations it's just easier to run extra boxes to handle the increased load of generating dynamic content on the fly over and over again. It's good for SuperMicro and AWS. It's not so good for the planet.
I'm very excited to see Jeremy's approach to addressing the problem.
> "Hey, here comes a request, let me generate a page just for you"
In a blogging context, stuff like Wordpress where pages are generated per request, then cached to handle any form of serious load just rubs me the wrong way... Such an infrastructure to display a few pages seems ludicrous.
> The default would be to bake out static resources when data changes, and you'd want to automatically track all of the data flows and dependencies within the application. So when your user submits a change, or your cron picks up new data from the FEC, or when your editor hits "publish", all of the bits that need to be updated get generated right then.
... so this is exactly what my WIP custom blog engine (ultimately meant to replace my posterous blog) looks like, initially composed of markdown source and makefiles, then ramped up to some rake tasks and a ruby library. An entity change (edit post, add comment...) should trigger generation of each page referencing it exactly once, and possibly immediately.
"But you know when the data is changing -- when an article has been updated and republished ... or when you've done another load of the government dataset that's powering your visualization. Waiting for a user request to come in and then caching your response to that (while hoping that the thundering herd doesn't knock you over first) is backwards, right?"
I'm not trying to pick a fight or anything, but it sounds like you're arguing against lazy loading?
Eager caching is very situational, and not something you want to do unless you can reasonably anticipate the thundering herd or have very few items or have unlimited resources to generate and store a complete cache.
Your comment reminds me of a CMS-like system I built over a decade ago based on plain old unix 'make'. 'make' tracked all the dependencies to determine what parts of the site needed to be updated when new content was added or updated. Content was authored unstyled in a simplified subset of html, and make ran it through an XSLT to do styling and aggregation, like to build indices. The whole thing worked very well, building over 2k pages in seconds. I still miss it!
The cool part is that Rendr builds a backbone hierarchy around server generated HTML, attaching the rich interactions after the page has been displayed. Although, if the user starts to interact before that view hierarchy has been constructed, for example by clicking on links, the behavior won't be "rich". (However that would require some fast clicking!)
The author (spikebrehm) speaking here - that's a good observation! It's possible that the user will click on a link before the JS finishes downloading. Luckily, we've taken care to use real URLs for all links, and our pages render fully on the server as well, so if they click before the Backbone views initialize, they will fall through to the server. Not the fastest experience, but still fine at the end of the day.
This is exactly what I've been waiting for. I can't wait til this is open-sourced! Was just thinking about trying out Angular because Backbone just wasn't cutting it, but this is going to make me hold out.
Random piece of feedback: it's weird to use data-model_id (instead of data-model-id). I assume you're trying to match some pre-existing naming convention (but JS tends towards camelCase anyways...), but I think it would be better to go with dashes as that is HTML attribute standard. That was the only part that looked sloppy to me.
Another thought: Did you guys experiment with event-based logic for postRender instead of pre-defined method hooks? I find the pre-defined method approach hacky feeling.
Agreed, the underscore in the data attribute isn't ideal. I initially tried to use camelCase, but alas, we often forget that DOM attribute names are case-insensitive: write data-modelId and the parser hands it back to you as data-modelid.
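For what it's worth, the browser's own answer to this is the dataset API, which maps dashed attributes to camelCase properties (data-model-id becomes el.dataset.modelId). A small converter mirroring that mapping, purely to illustrate the rule:

```javascript
// The HTML parser lowercases attribute names, so data-modelId silently
// becomes data-modelid. The standard dataset API compensates by mapping
// dashes to camelCase: data-model-id is exposed as el.dataset.modelId.
// This helper mirrors that conversion (illustrative only):
function dataAttrToProp(attr) {
  return attr
    .replace(/^data-/, '')
    .replace(/-([a-z])/g, (_, c) => c.toUpperCase());
}

dataAttrToProp('data-model-id'); // -> 'modelId'
```

Note that an underscore survives untouched, which is why data-model_id "works" — it just isn't idiomatic.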
Why not stick to conventions and go with dashes? Underscores are out of place in JavaScript and the DOM... If I'm forced to use underscore_case in Rendr when it's different from the rest of my codebase (or other codebases I don't control, which are all camelCase JavaScript), it's going to feel unclean from the start.
Or you could say I'm not forced to use it, but then I'm using camelCase for variables and underscore_case for data attributes... the two don't match, so why have the non-standard underscore_case at all?
(This would be an example of one of the things that makes libraries feel kludgy, and one of the reasons I love how consistent Backbone is, like I was talking about further up the thread with Jashkenas.)
I'm open to that. Like I've said, there's definitely room for cleanup of the implementation. Either way, this isn't a detail that's exposed to the application developer.
Yeah, it's unfortunate that all of JavaScript uses camelCase instead of underscores, but since it does, I'd rather use it everywhere instead of using underscores, which aren't standard in either JavaScript or HTML.
DerbyJS is a pretty cool project. I was checking it out a while ago but there was no way to hide private data (user details etc) from the client. It seems they have made a start on that now though[0]. Might be time to take another look.
This is really interesting, although I'm still new enough to node and complex JS apps that I'm struggling to take it all in. That being said, could a hack-ish approximation of what Airbnb is doing be accomplished by rendering the first call of an app in phantom.js and pushing it out in the response stream?
Questions about session state management are popping up in my head but maybe that's some of the secret sauce in the Rendr portion of Airbnb's app.
Great work guys, this is really pushing the boundaries of full stack app development with js.
Author here (spikebrehm). Good question. Yep, you could accomplish something similar by booting up PhantomJS or node-chimera [0] and scraping yourself, but that seems hacky and hard to scale. Justin Tulloss of Rdio talked about this approach in his Airbnb Tech Talk [1], saying that the tricky part is determining when the page is actually done rendering, especially if you have a bunch of JavaScript that's performing DOM manipulation.
Fantastic. This kind of thing, and Meteor, is really what Node is made for. Until recently, Node web frameworks have mostly been "let's do RoR or Sinatra, but look, it's in JS! #Neat". Sure, there's some benefit in terms of developer skill set and avoiding context-switching; but still, the great promise of SSJS is running the same actual code on both sides.
To be honest, Meteor is nice and a very ambitious project but the fact that you are writing an application with it means you are probably wedded to it.
That's why I've been using Express for my middleware, npm modules as needed, and mongoose as my db wrapper (or if I want to switch it out for redis, I can do so easily). Not sure how easily I can port my "meteor app" to a different framework that comes along.
I've been playing with Meteor lately, and I can certainly see some element of lock in when using it. That scares me naturally. However, the positives are:
1) Speed: It's insane how little work has to be done to stand something up that is decent and functional.
2) Packages: Not a long list, but the ones that exist are ridiculously easy to implement.
Seriously, the sheer amount of work that you end up NOT doing almost makes it feel like cheating. I can completely understand why people don't like it. I fully intend to build something in Derby soon, and to examine Rendr. But I'm wanting to build an enterprise focused metadata management app, and Meteor seems to me to be the right way to go, because at the end of the day I just want to get something built that provides a capability.
Agreed. You can still get a lot of the code-reuse benefits without lock-in by using tools like Browserify and Socket.io. With Browserify (or any of the other CommonJS packagers), I can use the same lib directory for both my client and server code, and all my utilities/templates/validators/algorithms/NPM modules work on both. One of the most awesome things about Socket.io that's not really emphasized in the docs is that the API for client and server is almost exactly the same, so you can change your mind almost on a whim whether you want a certain RPC to work on the client or the server.
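As a concrete (if contrived) example of that shared lib directory, a single CommonJS module can hold a validator used by both the Express server and the Browserify bundle. The file path and API here are just illustrative:

```javascript
// lib/validate.js -- one file, required by server code and bundled for
// the browser by Browserify. Path and API are illustrative, not from
// any particular project.
function isValidEmail(s) {
  // Deliberately loose check: something@something.tld
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(s);
}

module.exports = { isValidEmail };

// Server: the signup route calls isValidEmail before touching the DB.
// Client: the form's submit handler calls the very same function first,
// so the two sides can never drift apart.
```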
To me, it's a lot easier to "grok" exactly what's going on when you're using a simple middleware layer and a simple socket framework, than when you're using a full-stack solution like Meteor.
That's not the holy grail. I'll tell you what the holy grail is:
A web app that runs completely on CSS.
No need for stupid web servers, but since web servers are handy we'll build one with css.
And I scoff at HTML; but because of performance I made css compile to html too.
I'm working on a project that I'll be unveiling as my 'open source masterpiece'; it's just a little thing I call node.css. That's right, CSS bindings to C++. No more stupid C++ either.
Strap on some CSS build automation and what do you get? That's right, the holy grail. I'll call it CSS on Rails.
What's more, I've already done it and launched my current employer's flagship product on it. Hope it doesn't screw the entire business over the long haul. Oh well, I can switch jobs if that happens and pretend I never posted this.
I'd much, much rather move Django to the client than JS to the server. With CoffeeScript, this would be more tolerable, but even sacrificing Python, I love Django's architecture so much I don't want to give it up.
I'm utterly confused. The main benefit of this approach is summed up in the paragraph:
"Compare this with serving the full search results HTML from the server.... It feels 5x faster."
But doesn't Ruby on Rails (and most other web stacks) already do server-side rendering VERY well, backed by enormous communities? JavaScript is useful for things like infinite scroll, interactive client-side calendars, or making browser-based games; but a web search page is absolutely simple in RoR and doesn't need Backbone or JavaScript at all.
Don't get me wrong, I love JavaScript and Backbone/Angular. But why push logic to the client side for a search page, and then try to pull client-side technologies back to the server in an effort to resolve performance problems that are already solved by existing technology?
In the words of Carl Sagan, "why not skip a step?" Unless you just love JavaScript so much that you're willing to recreate Rails on the server with it; that would be a sensible reason to do it.
Not trying to troll, just thought I'd throw this perspective out there.
Not to mention SEO. The points outlined in the top comment (caching of shared static HTML for fast-as-possible page loads; Google indexing and searchability; Backbone apps that feel more like a series of pages than a single-page app) are all things Rendr accomplishes that are already taken care of by not building a full-blown client-side JS app in the first place.
I think us developers are running into the same issues or pitfalls that we ran into during the rise of flash, where we move everything to the client because we can. The problems that Rendr is solving seems to be the same problems caused by giving too much responsibility to JS.
It sounds like you are really questioning the usefulness of client-side JS apps in general. Certainly not every app makes sense to be a "single-page app". But if you are creating a single-page app, then why not also allow it to render on the server?
I like to think of Rendr as just another Backbone app, that happens to be able to serve HTML from the server as well.
There is the straw man again. If you are creating a completely client side app and you find it's better to have some work done on the server side... why not use mature existing server side technology along with client side technologies. It's not a one or the other scenario.
Why work hard to move client-side technologies to the server? JavaScript is not so good that I want it on the server side. And server-side languages are not hard to learn, nor is it hard to find people who know how to use them.
Yes, I think I am too. I can see the need for single-page apps for really immersive applications like games, or possibly for very small targeted applications like reading mail or showing stats, but in the latter case you probably wouldn't run into ten seconds' worth of loading scripts.
I think I'll understand when I run into the problem Rendr solves...
You're missing the part about how single-page, client-side apps are much more performant _once_ the page loads. Spike is trying to get the best of both worlds (hence "Holy Grail").
Server-side gives you fast page load times.
Client-side gives you fast user interaction times.
That was the straw man I was thinking I saw in the post.
Why not use Rails to load the page (no BYO framework involved) and then use JavaScript/Backbone/etc. on the client side, as opposed to bringing the client-side technologies server-side to accomplish the same thing?
I'm thinking specifically of the search page example that was represented as a screenshot in this blog.
This sort of integration has been happening more and more with the data format as well, for me. After around the third mobile client for a server, I'm tired of native data models and just want the closest thing to the server's JSON (and the DB's, in the case of Mongo). I don't need it repeated in SQL DML, JSON, Java, Objective-C, C#...
Great article, especially in coming up with a clean way to reach the holy grail. I remember from Spike's tech talk some months back that they initially chose DerbyJS, but it seems from the blog post that they went with Express instead. Curious to find out why.
Really hoping EmberJS gets this feature as well. It's possible now (with some fancy hacks), but I've read they plan to bake it into the framework (along with tight Rails integration). Basically it just comes down to generating the initial view HTML on the server (by running the initial API call and render); subsequent calls would then be handled by the client.
Doesn't have to be 100% Node.js front to back (you can still write your APIs in another language), but Node provides that handy bridge to get the server side rendering of client code.
I was just thinking about this today. The web devs I'm working with are talking about ditching Rails entirely in favor of in-browser template rendering by JS. With a backend also in JS, it seems to clearly be a winning development environment for new projects today. Just the diminished learning curve would be enough for me to look closely at this. This is coming from a very non-front end guy, though, so I might be a little starry-eyed.
Hey guys, looks pretty good.
I'm interested in the decision to move to server-side templating. You mention that loading the JS, then making an AJAX call, then rendering the page was slower than just serving up HTML.
Did you guys try bootstrapping JSON data server-side? That way you could avoid making an AJAX request on load.
Do you have any perf numbers to share?
Author here. So, yes, bootstrapping the JSON on initial pageload would be a lot faster than waiting for the Backbone.Router to fetch it (for the Rails + Backbone approach). But therein lies the problem I was talking about: to bootstrap the data for a particular URL, and also have the client request the proper data for that URL in response to a pushState event, you end up duplicating the mapping from app URLs to API resources on both the client and the server, and you need access to the same API on both sides as well.
We avoid this with Rendr by defining routes, controllers, models, etc in a way that can be run on both sides.
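A rough sketch of what "defining routes once" can look like. The names here are illustrative, not Rendr's actual API: a single route table plus a matcher that an Express adapter and a Backbone.Router adapter could both consume.

```javascript
// Illustrative sketch (not Rendr's actual API): one route table shared
// by both sides. The server mounts these on Express; the client feeds
// the same table to Backbone.Router. Either way a URL resolves to the
// same controller action, which fetches the same API resource.
const routes = [
  ['listings',     'listings#index'],
  ['listings/:id', 'listings#show'],
];

function match(path) {
  for (const [pattern, action] of routes) {
    // Turn 'listings/:id' into /^listings\/([^/]+)$/ and try it.
    const re = new RegExp('^' + pattern.replace(/:\w+/g, '([^/]+)') + '$');
    const m = path.match(re);
    if (m) return { action, params: m.slice(1) };
  }
  return null;
}

match('listings/42'); // -> { action: 'listings#show', params: ['42'] }
```

Because both environments run the identical matcher, a pushState navigation on the client and a cold HTTP request on the server resolve to the same controller, which is the duplication the comment above is describing.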
I've been looking for resources on when to use Rails vs. when to use Node.js. I've been programming in Ruby/Rails for 2 years and picked up Node about 9 months ago. Any tips or resources for learning which tool to use depending on the problem?
I'd think the use case addressed in the article (sharing Backbone views that can be rendered on either the client or the server) would be the sweet spot for Node.
I'd just like to note that those working on Backbone single-page apps hosted by an ASP.NET site can also achieve this idiomatically, with the use of the Nustache view engine and controller actions that can do HTTP content negotiation.
Excellent stuff, thanks a lot for sharing. The Airbnb tech talks have been great, you guys are breaking some great ground and it's cool that you're so enthusiastic to share your discoveries.
While the library (Rendr) is written in CoffeeScript, you would of course be able to write your application code in JavaScript, or anything that compiles down to it.