Principles of Rich Web Applications (rauchg.com)
534 points by rafaelc on Nov 4, 2014 | 148 comments



This is a great article. It's extremely thorough, and touches upon most of the difficulties I've encountered in my (limited) experience coding JS on the web, as well as several I hadn't even considered.

My only complaint, if it can be considered a complaint, is that the author doesn't address the real-life costs of implementing all his principles. The main one is the complexity of the server- and client-side architecture required to implement these principles, even for a minimal application like TodoMVC [0].

I agree that user experience is extremely important, and perceived speed is fundamental, but I certainly don't think it's important enough to justify the cost of figuring out how to implement all these principles, especially for a startup or other small team of developers.

Of course, the hope is that tooling will quickly progress to the point that these principles come essentially for free just by following the best practices of whatever libraries/frameworks/architectures you're using. There was probably a time where the basic principles of traditional static web apps (resourceful URLs, correct caching, etc.) also looked daunting for small teams, but that's quite manageable now with Rails/Django/etc. (and maybe earlier frameworks).

[0] http://todomvc.com/


There is one such framework: http://derbyjs.com/ It's been in development for about 3 years, so it certainly has a cost to implement. It is being used in production by Lever (YC) to ship very usable enterprise software.

There is even a TodoMVC example being submitted for Derby: https://github.com/derbyjs/todomvc/tree/master/examples/derb...

Another nice thing is that the templating engine can be used client-side only if you want: https://github.com/derbyjs/todomvc/tree/master/examples/derb...


Client-server frameworks (in JVM: http://vaadin.com/ or in JS: https://www.meteor.com/) are something that help (or should help) overcoming the complexity of all this. Mostly taking care of UX, communication and optimization parts. I guess they are not for those who want to build everything from scratch, but speed up teams quite a lot.


Very important point, especially for the HN audience. There are some really good principles here, but in reality you don't need to worry about most of these as an early startup. If you use modern frameworks then a lot is taken care of already.

Things like stale code on the client is simply not an issue until you've got a lot of people using your app. Don't worry about it.


Fair point. One should be particularly wary of the lure of 'liveness' and reactivity. They are nice things to have but don't underestimate the expense in terms of performance, stability and complexity (events are spaghetti).


JavaScript "is the language of choice for introducing computer science concepts by prestigious universities"!? God help us all.

I see the link to the Stanford course, but I hope it's still in the minority. JS is not a language that I would want to teach to newbies, especially if those newbies are on a path towards a CS degree.


FWIW, CS 101 isn't a required (or popular) Stanford course, and isn't part of the CS major. The intro series is 106A/106B/107, which are taught in Java, C++, and C respectively.

There are plans to switch out Java for Python eventually but no major classes are taught in JavaScript--the closest it gets is a graduate Programming Languages class that spends a few weeks on JavaScript to illustrate closures and first-class functions.


There's no undergraduate class that illustrates closures and first-class functions (whether in javascript or otherwise)? That seems odd...


The undergrad core (and degree) is 90% Java, C, and algorithms.

To be fair, though, many undergrads take graduate classes and the department encourages it--the only difference is a 2XX course number instead of 1XX, and sometimes a bit more rigor.


I wasn't aware that JS was used in CS university courses, but I'm not so offended by it. The nasty parts of JS (type coercion WTFs, void, with, etc.) aren't very relevant to teaching basic CS principles, and it's got everything you need unless you consider "normal" object-orientation a basic CS principle. It's got all the flow control, loop constructs, obvious anonymous function syntax, etc. you would expect.


Additionally, it's one of the only popular prototype-oriented languages. I think having to teach some of the less-than-sane parts of JavaScript can be a distraction from what's important in a 100-level course, but for a higher-level course that teaches different paradigms, it makes sense to teach JavaScript.


It's not the language of choice for introducing CS to college students at all. The author should do some research first before making such a statement. No sensible CS departments would do something like this. For some reason, I see many "web" people think that web development is the center of Computer Science. It is not. Web dev is not even a required course for many CS undergrad programs.


And web development shouldn't be the center of a Computer Science curriculum. Web development happens to employ many of the concepts one learns about in a Computer Science curriculum such as data structures, algorithms, asynchronous and synchronous communication, and UI interactions.

But, like a machine learning course, or a data science course, which also touch on many concepts which are part of a core Computer Science curriculum, it is not a center because the center of a good Computer Science curriculum is the core concepts through which all other ideas flow (algorithms, data structures, message passing and communication, programming language design, compilers and interpreters, object oriented design, functional programming, etc).

Generally all the rest is (and should be) elective. And most curriculums presumably require electives so students can delve into areas they're interested in applying the aforementioned skills to.


Any particular reason? Considering its relation to C, fairly simple, highly available and super-widely used, seems like an OK choice to me.


I'm not even a JavaScript hater, but: its relation to C is merely syntactic, and is to its detriment, frankly. And, simple? Assuming a hypothetical comprehensive encoding of programming language semantics, I can't imagine that JS would be anywhere near the top of the list for shortest complete descriptions of same. That and the constant churn of libraries du jour in the JS world makes me think that it is far from an optimal language for the beginning programmer.


Javascript is probably semantically closest to Scheme - it's a language built around closures and one universal data type (lists in Scheme, objects/dicts in JS). Scheme is a pretty good choice for intro CS courses, because of its simplicity and the need to re-implement many common language constructs from first principles.

JS shares this property, but also gains the benefits of being widely used in industry and available in every web browser, with a good debugger in Firefox and Chrome. I'd never really considered it as a teaching language, but I think it'd be a great choice.


Immutability-by-default is a necessary, if not sufficient component of the semantics of Scheme that is lacking in JavaScript, and it's that same lack that leads me to think that any comparison between the two languages is basically spurious.


Scheme certainly has mutability, through set!, set-car!, set-cdr!, and procedures based upon them. It's not the default - you always know when you're mutating something - but then, there's a pretty direct syntactic translation into Javascript, where = without a var always means you're mutating something.
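To make the parallel concrete, here's roughly how those forms map onto JavaScript (the Scheme equivalents appear only in the comments; this is a sketch of the syntactic analogy, not a claim of full semantic equivalence):

  // binding vs. mutation, JS side of the analogy
  var x = 1;                         // like (define x 1)
  x = 2;                             // like (set! x 2) -- mutation, no var
  var pair = { car: 1, cdr: null };  // a cons-cell-ish object
  pair.car = 3;                      // like (set-car! pair 3)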

Why is this important for students, anyway? Shouldn't they be exposed to mutability fairly early on?


It's not important for students, I just dispute the notion that JavaScript has any special affinity with Scheme beyond that of any other language with first-class functions. Anyway, the major advantage of immutability by default is that it enables under-the-hood optimizations like structural sharing that make FP the paradigm of least resistance. To the extent that JavaScript doesn't do that, it doesn't really deserve to be called a functional language any more than e.g. Ruby.



Straight from the horse's mouth:

http://brendaneich.com/2008/04/popularity/

JS certainly does have other influences, notably Self and Java. But from that article:

"As I’ve often said, and as others at Netscape can confirm, I was recruited to Netscape with the promise of “doing Scheme” in the browser....I’m not proud, but I’m happy that I chose Scheme-ish first-class functions and Self-ish (albeit singular) prototypes as the main ingredients. The Java influences, especially y2k Date bugs but also the primitive vs. object distinction (e.g., string vs. String), were unfortunate."


Did you have tons of C/C++ libraries thrown at you in 100 level courses? Probably not, I'm guessing.

JavaScript is great because it runs in a browser, provides immediate feedback without having to compile/recompile and allows for all the simple flow control concepts mentioned by other posters.


Well, I assume OS X is quite popular among students. I'd say opening terminal and typing "python" or "irb" beats the hell out of "running in browser".


I don't see running in the browser as a benefit. It means that a required step for a neophyte in producing meaningful output is getting a handle on the utterly brain-dead DOM API.


It's a programming language for the web. Not only is running in the browser a benefit, it's a necessity.


There's 'console.log', which outputs to the debugger's console.


But then the mere fact that a computer comes with a browser already installed is not a notable advantage. I can get a Python REPL running on a computer with an amount of effort and know-how comparable to that necessary to install Chrome.


One of the "benefits" is that learning Javascript (ignore the DOM for now, it's not necessary to touch this to learn Javascript) requires nothing more than a browser and notepad. Hell most browsers have debuggers already built in. I can see the appeal.


I can't stand how the Facebook feed updates in realtime. I read a bit, leave the tab, and when I come back to it, it has updated, so I have to find my place again (which I don't do, I just go "screw facebook" and close the tab ^^). The same goes for forums: if I want to see new or changed posts, I'll hit F5 -- and when I don't press F5, it's not because I forgot, but because I don't want to. Pressing it for me is a great way to make me go elsewhere; or in the case of Facebook, to stay and resent you.

I don't need to know in realtime how many people are on my website. I need to know how many there were last week, and compare that with the week before that. Likewise, I don't really need to see a post the instant it was made. At least for me, the internet is great partly because people choose what they do, how, and at what pace, because it's more like a book and less like TV, and making it more like TV is not an improvement.

This is not against the article per se, which I found very very interesting and thorough, just something I have to get off my chest in general. Though I really disagree with the article when it comes to the facebook feed, I think that should serve as an example for what not to do.

Please, think twice, and never be too proud to get rid of a shiny gimmick when it turns out it doesn't actually improve anything. Let's not sleepwalk to a baseline of stuff we just do because everybody does it and because it's technically more complex. As Einstein said, anyone can make stuff more complex :P


I would argue that in the case of Facebook's updating feed, the idea is fine; the implementation is buggy.

What should happen is that the new content is loaded above, and the document scroll position is instantaneously updated to preserve the previous position. This would allow new content to come in without disrupting your experience.
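A minimal sketch of that idea (assuming the feed lives in a scrollable container element):

  function prependItems(feedEl, newItemsHtml) {
    var before = feedEl.scrollHeight;
    feedEl.insertAdjacentHTML('afterbegin', newItemsHtml);
    // Compensate for the height we just added so the content the user
    // was reading stays exactly where it was.
    feedEl.scrollTop += feedEl.scrollHeight - before;
  }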


That's fine for a few updates, but what if you have a lot? Memory may be cheap, but why not use it for interesting things? I mean, what problem does an auto-updating feed or endless scrolling solve that isn't already easily dealt with by pagination, Expires headers and manual refresh? Other than "user stickiness is not 100% yet" I mean, which I don't recognize as a valid problem.


"The internet is great partly because people chose what they do how and at what pace, because it's more like a book and less like TV, and making it more like TV is not improvement."

Well worth thinking about.


What's great about the internet is that "it" doesn't have to choose whether to be more like a book or more like TV. It can do both things equally well, and whether you're making something "book-like" or "TV-like" it's cheaper to get started than it is to publish an actual book or TV program.


It can do both equally well, but not at the same time.


A single-page app is just this: a web page that never reloads itself or its scripts. It doesn't matter how much content the initial page had on it, and it certainly doesn't mean that you send an empty body tag. The history APIs even let the back button and URL bar behave exactly as the user expects, but without a single round trip if you already have the data and resources.

While it's certainly true that the first page may load slower, and you'll load a few scripts as well, you never need to reload those again. Frameworks like Angular encourage you to use a "service" mindset that capitalizes on this property.

The longer you use a single page app, the fewer round trips you will have. If you ask me, your communication should only be for raw materials (scripts, templates) that you won't need to validate or request again during the current session, and raw data (json). This is more loosely coupled, more cacheable at all the different levels, and more scalable in large part due to the decoupling.

Once the initial view loads, I totally agree that you should intelligently precache all your resources and data asynchronously in the background to usher in the era of near zero-latency user interactions. Preferably, you do this in an order based off of historical behavior/navigation profiling to best use that time/bandwidth you have before the next click.

I get the impression articles similar to this one that there was once a similar mindset surrounding mainframes and dumb terminals. The future is decentralized, web included.


I think people have different use cases that they aggregate under "web application". If you are building a desktop replacement application then I can see that initial load might not be such a big deal, but if you are building a less complex application like the Twitter UI then server-side rendering makes more sense. It's all context specific in the end. Too hard to make general claims.


Actually, I think twitter is a prime candidate for a JSON-driven client-rendered application. It's a pretty static shell, with a static tweet template iterated over large amounts of tweet data.

You could completely eliminate a huge number of round trips by just getting tweet data and user data after the initial pageload. The user data is quite cacheable and you could create a single endpoint that allows the user to precache the required user data (display name, photo url, id, etc) for everyone he follows in one round trip.
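A hedged sketch of what that could look like (the endpoint and field names are made up, and fetch is used just for brevity):

  var userCache = {};

  function preloadFollowedUsers() {
    return fetch('/api/following/profiles')          // hypothetical endpoint
      .then(function (res) { return res.json(); })
      .then(function (profiles) {
        profiles.forEach(function (p) { userCache[p.id] = p; });
      });
  }

  function renderTweet(tweet) {
    // No extra round trip: author info was precached in one request.
    var author = userCache[tweet.userId];
    return '<li><img src="' + author.photoUrl + '"> <b>' +
           author.displayName + '</b> ' + tweet.text + '</li>';
  }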


https://blog.twitter.com/2012/improving-performance-on-twitt...

They moved away from client-side rendering. Quoting:

There are a variety of options for improving the performance of our JavaScript, but we wanted to do even better. We took the execution of JavaScript completely out of our render path. By rendering our page content on the server and deferring all JavaScript execution until well after that content has been rendered, we’ve dropped the time to first Tweet to one-fifth of what it was.


I'm not a Twitter historian, so I may be wrong, but I'd be willing to bet that their client side rendering allowed them to scale the way they did. When your customers' computers are doing half the work or more, adding a customer costs you half as much or less.


The future is decentralized, web included.

Decentralized work is a bad thing. Instead of Mozilla/W3C solving some problem for everyone, we get every website coming up with their own clever solutions for nearly everything. (Also, thousands of clients re-doing the work that could be done once and served from cache.)

Also, single-page apps increased reliance of the entire web on the handful of CDNs and framework providers.


> every website coming up with their own clever solutions

I take it you're not a fan of open source, then? :-D I don't think the problem you describe has anything to do with decentralization, though. You can have people thinking they know better than the rest of the world writing COBOL for AS400s.

Further, I don't think you can soundly argue that decentralization is a bad thing. For one, it's the only way you can scale horizontally, whether that's done on the server side or by deferring appropriate work to the client side. For another thing, as OP's article states, round trip time has a theoretical lower bound. The only way to improve performance past a certain point is to distribute closer to your users.

I 100% agree by the way that if it can be cached it should be. I don't, however, think that necessarily extends to a SPA. JSON is just as cacheable as HTML if the interface is designed with proper REST semantics.


Open Source needs to be used first. Do you want to rely on websites implementing something like spell-checking? In your native language? I don't. I want common problems to be solved in the browser and I want to have some reasonable control over my browsing experience and some consistency - something SPAs actively undermine.

How are SPAs different from Flash? Sure, you have some remnants of standard HTML there, but with canvas, local storage and JS compilers, those two technologies look remarkably similar both in terms of advantages and problems. You get a blob of non-semantic, imperative code that does non-standard things.

The funny thing is, many of the stated advantages of SPAs can be easily implemented in document-based web apps. A lot of the repetitive info could easily be removed with proper caching, and it would be even more efficient with cacheable client-side includes. Not only would it be roughly as efficient, it would require much less work to implement.


*I get the impression from reading articles similar to this one...


This is not a great article and is resistant to criticism by virtue of its excessive length. Still, I'll try to point out some major flaws.

1. Single page apps have many drawbacks that are conveniently not mentioned: Slow load time, memory bloat, persistence of state on reload, corruption of state and others. SPAs are just another misguided attempt at making web apps more like desktop apps. Web apps are network applications - if you remove the network communications portion, what is the point of them?

2. JS is a great tool to make pages more responsive. This has been the case for years and I am lost on why the author writes on and on about it without any poignant observations or facts.

3. Using push (web sockets) is a valuable tool for accomplishing particular features. This does not mean that more is better and we should start using it for everything. Server pull is a strong feature of the web and is arguably a key to much of its success.

4. Ajax is great, no argument.

5. Saving state as a hash value in the URL not only puts JS actions into history, but makes them visible and bookmarkable as well (a quick sketch follows this list). Push state is a quagmire.

6. The need to push code updates is one of the problems caused by SPAs that is not needed in normal web apps. Even so, this could be solved with a decent, as yet unimplemented, application cache.

7. Predicting actions is overkill. If you focused on doing everything else well, there is no need to add significant amounts of complication. More code = poorer performance and decreased maintainability.
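For point 5, a small sketch of hash-based state (the view names here are hypothetical):

  function showView(name) {
    // a stand-in render hook; a real app would swap views here
    document.body.setAttribute('data-view', name || 'home');
  }

  // The back button, bookmarks and shared links all work for free:
  window.addEventListener('hashchange', function () {
    showView(location.hash.slice(1));
  });

  // "Navigation" is just setting the hash; the browser records it in history.
  function navigateTo(name) {
    location.hash = name;   // e.g. "#search"
  }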


I agree with all of your points but number 1. I think a good SPA should be a network-reliant application whose complexity is demanded by the use case. And a good framework for one should provide near-instant load times: a small core library, with additional resources and logic only loaded on demand.


I think of good applications as a collection of SPAs. There is no benefit to forcing every single feature of an app into only one page. Ajax means we can implement more functionality without a page reload, which is good to a degree, but at a cost of slower initial load, losing URL state, and difficulty tracing UI behavior to underlying code.

I am working on a legacy SPA now and it's horrible. (Our users have learned to refresh frequently due to uncertainty of state.) I am not sure how much to blame on poor implementation as opposed to inherent weaknesses of SPA architecture.


Almost all of the SPA frameworks out there right now are nightmarish. I think React is pretty good but limited. What you're describing is probably a much better way to handle the needs of most sites.


Furthermore, pushing most of the UI layer to the client reduces server load and results in a more horizontally scalable solution.


With a little work you can make a single page app that keeps the state preserved in the URL. The reason you try to remove all blocking on network communication is the obvious one: performance.


Is there any web application framework, presumably encompassing both client-side and server-side code, that implements these principles? I'm guessing that Meteor comes closest.


N2O is another one.

https://github.com/5HT/n2o

It is really different and extreme -- in a good way. It supports both server-side and client-side rendering. It seems mobile performance and concurrency were the main goals in building it.

But it also takes things to another level: for example, it establishes a WebSocket connection after it loads the page, then lets you shuffle page data over as a binary (!) payload.

I've only played with it as I am not doing much web related stuff at the moment but it looks pretty cool.


DerbyJS (http://derbyjs.com/) is built specifically for these principles. I'm working on big updates to the documentation right now, but I can say that we have a great solution for Server HTML + Client DOM rendering, realtime data updates, and immediate optimistic updates on user interaction in the client.

By building on ShareJS, DerbyJS uses Operational Transformation for collaborative editing of data in realtime, which means that users can see data updated immediately in their browser even if other users can edit the same thing at the same time. This is the same approach used by Google Docs.
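For readers unfamiliar with OT, here is a toy illustration of the core idea -- transforming one concurrent insert against another so both sides converge. This is not ShareJS's actual API, just a sketch:

  // An op inserts `text` at `pos` in a shared string.
  function apply(doc, op) {
    return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
  }

  // Rewrite `op` so it can be applied after `other` has already been applied.
  function transform(op, other) {
    if (other.pos <= op.pos) {
      return { pos: op.pos + other.text.length, text: op.text };
    }
    return op;
  }

  // Two users edit "helo" concurrently:
  var a = { pos: 3, text: 'l' };   // user A fixes the typo
  var b = { pos: 4, text: '!' };   // user B appends punctuation
  // Applying them in either order converges on "hello!":
  apply(apply('helo', a), transform(b, a));   // "hello!"
  apply(apply('helo', b), transform(a, b));   // "hello!"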


Also, here is a video from a very informal talk I gave on similar concepts last year at our office: https://www.youtube.com/watch?v=iTC5i63eOzc


Nitrogen for Erlang (http://nitrogenproject.com/) does much of this. N2O, also mentioned in another post, is a variant of the Nitrogen approach.

Basically, you build your application frontend with pure Erlang terms to construct server-side templates, then make live changes to the interface over websockets (as of the soon-to-be-released v2.3), ajax, or comet (if websockets are not available, or the websocket connection crashes, it'll reconnect or fall back to comet/ajax).

I recently did a talk at the Chicago Erlang Conference about it: https://www.youtube.com/watch?v=nlV4gm8SpVA


GWT is quite close; it allows both client & server code to be written in Java, with the client side then compiled to JavaScript. A lot of the criticism of SPAs here doesn't really apply to it, as all your CSS/JavaScript files, and even most of the images, are compiled into one HTML file, and only that file is loaded at page load. That reduces the number of HTTP requests, and with gzipping and caching, your page loads up quite quickly.

All of the other things, including history management / back button, are built into GWT.


By principle 1, "Server rendered pages are not optional", I think GWT is missing a big piece: it doesn't let you choose between server-side & client-side rendering -- instead it's designed to make client-side rendering easy.

Yes, GWT can be quick, but it still has to make those extra round trips to get data from the server. If it had an option to render the initial page wholesale on the server, it could get that first impression up faster.

In order to add that to GWT, you'd have to emulate the first few rounds of communication on the server --- incorporate a server-side DOM model & fake communication layer, either using the cross-compiled JS or Java. Maybe leverage the debugging tools?


GWT UiBinder doesn't permit server-side rendering, but other templating systems do (Google Flights for example uses GWTSOY, Google Inbox uses something we call JsLayout)

GWT has always supported minimizing the number of HTTP requests. For example, GWT RPC long supported pre-serializing the first couple of requests into the initial HTML, so that you don't need to do an AJAX request before you can render.

It's also one of the first, if not the first, framework to support automatic asset packaging, sprite-sheeting, etc (since 2009). This inlines all CSS, small images, and other resources into a single chunk, which can be downloaded in one HTTP request.

I am working on an architecture I call POA, Page Oriented Architecture, which combines the best of Web Components, React-JS style binding, and server-side rendering into a single framework for GWT.

Google already uses "patching" to deliver small pieces of JS, which we call DeltaJS. So when you visit inbox.google.com, the server computes the diff between what you have in local storage and what has been recently pushed, and sends down a patch that updates your cached copy. I'm looking at combining this with ServiceWorkers for GWT so that all GWT apps can leverage offline support and DeltaJS patching by default. Hopefully I'll have something to show by the GWT.create conference (gwtcreate.com), we'll see.


> JsLayout

I went looking for Inbox's template engine and came across:

https://code.google.com/p/google-jstemplate/wiki/HowToUseJsT...

The "jstcache" attribute looks the same. Although, looking at the code.google.com project's source, it doesn't look like any code/non-wiki changes since ~2008. So maybe a forgotten attempt at open sourcing the/precursor-to JsLayout?

POA sounds interesting; hopefully it'll have a great unit testing story too?


Yeah, that looks like an early version of it. I'll ask why we aren't developing it in the open anymore (it may have been because of rapid changes needed by Inbox).

The central idea of Page Oriented Architecture is to return to the Web 1.0 era of "A URL for everything", where the application consists of a bunch of pages, and if desired, any page can be rendered server-side.

However, what really happens behind the scenes is that GWT processes all the pages for the whole site and synthesizes an application, with each page behind a GWT "split point". So you have your cake and eat it too: an application that is developed a 'page at a time' with crawlable, indexable, server-side URLs for everything, while at the same time you get a monolithically compiled, globally optimized single-page application out of it.

There's an old set of slides here: https://docs.google.com/presentation/d/1JobclkctBvciYZ8CzHIo...

However, that's since been superseded by a system I'm working on that makes everything look like Polymer, only it's monolithically compiled and optimized, using ReactJS-style virtual DOM diffing.

I want to combine this with DeltaJS techniques and ServiceWorker to produce offline-by-default high performance mobile web apps out of the box.


> Google already uses "patching" to deliver small pieces of JS, which we call DeltaJS. So when you visit inbox.google.com, the server computes the diff between what you have in local storage and what has been recently pushed, and sends down a patch that updates your cached copy.

Does that mean that if a user has an app cached, but something changes within only a single split point in that app, then the user only has to re-download that split point, vs re-downloading the whole app again?


I'm not sure how that'll work yet. I don't maintain the DeltaJS infrastructure, I just use it, and as of now Google Inbox doesn't use code splitting; however, that is the focus of post-launch activities: reducing download size, making startup faster, etc.


Actually, with GWT you don't have to load your whole app on page load. You can divide your code, and only load the code needed to display a loading / splash page, on initial load. This way, there's only about ~100KB of code loaded initially, which brings up a splash screen very quickly, and then the next batch of code is fetched and displayed.

Also all of these files are cached once loaded, so they don't have to be loaded again until they are changed.


A splash/loading page is a workaround to avoid the user seeing a blank/unusable page while your app loads.

The goal of server-side rendering is to immediately show the user everything he is supposed to see, instead of what a typical single-page app does: load, show a splash page, fetch the initial data and render it on the page.

From your description, GWT does not offer that.


Fair enough, but if you render the page on the server, you will likely have to do some processing (e.g. database/API calls) in order to get the data to display. The screen will be blank while all of this is going on.

If you go the splash / loading page route, users will immediately see the loading screen come up.

Plus, with the 'render on server' route, if users visit any other page, they will have to wait for a full page reload, whereas with an SPA, they will just see a loading icon for a second or three before the new page comes up.

It's a lot more responsive overall, and the pros outweigh the cons in my experience.


The article is advocating for a framework that does both -- it renders the initial impression on the server (yes, including a few DB reads), snapshots it as an SPA, sends it to the browser, and then lets the user continue the interaction that began on the server.

It adds significant server complexity for what some would call a minimal improvement in setup time. The article is advocating it for exactly that: overcoming that (maybe minimal) setup time on the client, which includes one or more server round trips.

In many cases, this will undoubtedly be premature optimization. But if the framework made it easy, then it might simplify your transaction model. You could unbundle transactions, knowing that you aren't paying a latency tax on them, at least on startup, since it's all happening on the server.


I would rather have something show up for the user immediately, than have him see a blank screen while DB calls are going on at server. But, to each their own.


My perception of how fast a page loads is that if you show me an 'app loading...' splash screen, it makes me feel your page is slow and bloated. I react better to a blank screen with the browser's progress bar moving. I suppose it's because it allows me to shift blame away from your webpage to the quality of my internet connection.


I'd say cheerp pretty much nails it. http://leaningtech.com/cheerp/


Meteor fails the server rendering principle; at the moment it serves an empty body and the client does all the rendering.

Server-side rendering is on the roadmap, though.



[Wt](http://www.webtoolkit.eu/wt) seamlessly degrades from websockets, over ajax with polling, down to rendered HTML, and it's extremely fast, being written in C++. It also qualifies for many other things. Check it out.


React seems to do pretty well on this list (though I don't think it helps with hot code reloading, for instance). It allows for very clean server-side initial rendering (via nodejs) and subsequent client-side updates (with a stateless DOM).


He does mention React as being a library that hits some of the points, some of them are just your own personal UX/project decisions which a framework would have no say over.


Rendr from Airbnb is pretty close: https://github.com/rendrjs/rendr


This is not the right question to ask.

Instead of trying to find one tool that solves all of your problems, try understanding what your problems are. Once you have a list of problems, ask how you can solve each of those problems. When you have solutions, THEN you figure out which technologies are best for implementing those solutions.


The reason frameworks exist and are often very useful is that if a set of problems is common enough, it is more efficient to do your first few steps once and wrap the outcome up in commonly usable components. "Web application with good UX" is a very common set of problems, so it is sensible to wonder if the leg work has already been done.


That's pretty much the argument I had to make on my current project. We've got an internal smart client app-in-a-browser that we built last year after throwing away a legacy system. Unfortunately, we were on a tight deadline, and most of the team had little serious JS experience. The client codebase just grew randomly, with no real architecture or design, and turned into a giant maze of nested jQuery callbacks and 2000-3000 line JS files. I'd suggested we look into some MVC frameworks before we started development, but then I wound up working on a separate task and no one had time to actually go research, select, and train everyone on a new framework. During the development process, one dev in particular was busy building his own little framework from scratch. Looking at it later, he was basically reinventing Backbone.View, including templating and saving a jQuery reference.

After we hit our initial development deadline, we had time to actually research Angular, Ember, and Backbone. We settled on Backbone on the grounds that we could refactor our codebase incrementally, and it seemed to fit our use case best. Unfortunately, this later led to clashes with the "write it myself" dev, as he didn't want to use any outside frameworks beyond jQuery.

Anyway, having used Backbone for the last year, it definitely has a lot of limitations, but it also provides a very definite set of well-tested pieces that clearly help solve common development use cases. So yeah, very much in agreement with you there. Why try to totally build something from scratch if someone else has already done it for you, and it's been battle-tested by other developers?


I agree, but I stand by my statement. Asking "What framework should I use?" is not a good sign. Using the process I outlined could easily answer this question.


If you start with "how common is my problem?" and the answer is "extremely", then it is reasonable for the next question to be "should I use a framework that solves this common problem?", and you should have very good reasons if you want to answer "no" to that. If you answer "yes", then you have quickly arrived at "what framework should I use?". I guess perhaps we agree that "what framework should I use?" should not be the first question, but I don't think it takes long to get there.

I personally prefer (and find it more fun and a better learning experience) to build things from scratch with no frameworks at all, but I don't think it's a very savvy business decision unless you're doing something very novel.


Here's a few thoughts:

I can see that in terms of bandwidth, an SPA can be more efficient than a normal HTML page. But this makes a few assumptions. First, that your JS package never changes. As soon as one character changes in your package, the cache is invalidated and the whole package needs to be downloaded. Like you said, it's application specific. But if your app has ~3 pageviews per session, it becomes very hard to justify the use of an SPA.

As for acting as soon as there's user input, this can be done with an SPA or not. One thing to mention, though, is that pull-to-refresh is something that is gradually falling out of favour.

Besides those 2 things, insightful post.


> I can see that in terms of bandwidth, an SPA can be more efficient than a normal HTML page. But this makes a few assumptions. First, that your JS package never changes. As soon as one character changes in your package, the cache is invalidated and the whole package needs to be downloaded.

Sure, but there are strategies against this, right? Generally, vendor and third-party code doesn't change often, so minify and bundle all of that together. Then you've got your core application code, which you attempt to keep as small and fast as possible.

I will say, I'm not as experienced as the author of this piece, but at the end of the day I feel like the author is making blanket statements that honestly don't hold up to the reality of what users actually want. I think the piece also makes assumptions about your stack and your resources -- yes, if you've got incredibly fast, top-of-the-line servers, server-side rendered pages are probably a better idea, as the time difference between a JSON payload and the page being rendered by the server is much smaller.

On the other hand, even a cheap Rails (i.e. slow) server with a CDN handing off the client code can shove some JSON out no problem, and it can do it very fast -- even the worst-off users usually see only around 300ms of total receive time, which is generally only 100-200ms slower than your average server's render time of the page alone.

Furthermore, it lets you offload who is delivering said content- if a CDN is giving up all that Javascript, then the initial render times may actually not be that much slower than if it was server rendered.

----

I also get the feeling a lot of people are making the mistake right now of assuming that because there's been a lot of evolution in the frontend framework world in the past 2 years, we're also hitting peak performance, which couldn't be further from the truth. Angular apparently (I'm going off of what I've seen many say about 1.0 in the post-2.0 world) completely bungled performance the first time around, but it'll be better next year. Ember is already well on its way to being fast, and by summer of next year is going to be blazing quick with all of the HTMLBars innovations.

I think we're barely getting started figuring out frontend frameworks. Even if right now it may not be the best idea for your personal use case, I'd check back once a year until the evolution slows down to make sure you don't end up regretting not jumping in.


webpack has a way to make this efficient. It can package resources as chunks, and the files are written as hash.extension, so only those that changed get downloaded again.

https://www.youtube.com/watch?v=EBlUng3IU4E#t=1323
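Something along these lines in the webpack config (a sketch based on webpack's documented [name]/[chunkhash] placeholders; the paths are made up):

  // webpack.config.js
  module.exports = {
    entry: {
      app: './src/app.js',        // changes often
      vendor: './src/vendor.js'   // changes rarely
    },
    output: {
      path: __dirname + '/dist',
      // Content-addressed file names: a chunk's URL only changes when its
      // contents change, so unchanged chunks stay cached on the client.
      filename: '[name].[chunkhash].js'
    }
  };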


If you are using AMD, the individual modules will be cached, so when your application changes, only the corresponding module will have to be re-downloaded.


There are a few issues with these:

1) "Server-side rendering can be faster" - the information in this part quietly ignores the fact that:

  * even if you have server-side rendering, you are still going to load external javascript/css files

  * browsers optimize multiple resource loading by opening multiple concurrent connections and reusing connections

  * you can and should use cdn (hence actually lowering the 'theoretical' minimum time)

  * browsers cache excessively - and you can make them cache even for longer

  * the fact that rendering on the server side takes a lot of CPU and hence increases response time dramatically the more requests are made

6) While reloading the page when the code changes is a good idea, hot-updating JavaScript is a really bad idea -- beyond the fact that it's terribly hard, that it will most likely result in memory leaks in the end, and that as far as I know no one is doing it, it'll be extremely hard to maintain or debug.

The rest of the principles are quite true, informative and should be practiced more often (assuming you actually have the time to engage in these kind of improvements as opposed to making more features).


Just to nuance your 1) points about server-side rendering, with some points in its favor:

• You only need HTML and CSS loaded to show content to your user; the JS loads while the user is viewing content, so it has some time to be ready before the first interaction.

• Client-side rendering still feels slower than showing stuff with only HTML+CSS.

• For page content that changes a lot, if you rely on a CDN for HTML pages, you need to update content with JS on page load, and you either end up with a wait-while-we-are-loading splash or a blinking Christmas tree.

• If your HTML is small enough, the cache-checking round trip is not that much faster than loading the content, while JS rendering will need the cache round trip AND a data-loading round trip. You can eliminate some HTML round trips with cache expiration, but at the expense of reliable deployments.

• Still, JS rendering/updating can be slower than server-side CPU rendering, especially on mobile devices.


There is a very simple way to get both maximum caching (without the cache round trip, i.e. the ETag or Last-Modified check) and reliable deployments: use version identifiers in the URL and a never-expiring cache.

The only thing the browser should always load is your base HTML; then have a single linked JS/CSS bundle that is concatenated and compressed, whose URL changes every deployment. Most web frameworks already have a way of doing this (Rails, Django, etc.).
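A sketch of the idea using Express (an assumption -- any stack works the same way): the HTML is always revalidated, while the fingerprinted assets never are.

  var express = require('express');
  var app = express();
  var ASSET_VERSION = '20141104a';   // bumped on every deployment

  // Versioned assets: cache "forever", no revalidation round trip needed.
  app.use('/assets/' + ASSET_VERSION,
          express.static(__dirname + '/public', { maxAge: '365d' }));

  // The base HTML is never cached, so it always points at the current assets.
  app.get('/', function (req, res) {
    res.set('Cache-Control', 'no-cache');
    res.send(
      '<link rel="stylesheet" href="/assets/' + ASSET_VERSION + '/app.css">' +
      '<script src="/assets/' + ASSET_VERSION + '/app.js"></script>'
    );
  });

  app.listen(3000);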


This is not entirely true, I believe:

1. With server-side rendering browser can get HTML and display it before other resources are downloaded, parsed and executed (for JS).

2. In the pre-HTTP/2 world, resource requests can be expensive, since browsers limit the number of outstanding requests.

3. Some users still use slow phones with slow and laggy network connection. Server-side rendering can improve experience for them a lot.


When using server-side rendering with something like React, you can send a pre-rendered HTML shell with text content quite quickly. I think this gives users a very nice experience, especially compared to staring at a blank page!


Wow, the views counter... Haven't read the article yet, just astonished at the rate of increase...


Do you know how the counter works? I am a JS noob. I see at the bottom of the script he's updating the counter, but who is calling the update?


It's a WordPress website that communicates with a Socket.IO server. I wrote about how to accomplish this here: http://socket.io/blog/introducing-socket-io-1-0/#integration

It's true to the spirit of the post as well: the count gets rendered on the server, then reactive updates come in realtime and the view is updated.
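For the curious, the moving parts boil down to something like this (a sketch, not the actual code behind the post):

  // server
  var io = require('socket.io')(3000);
  var views = 0;                       // in reality this would be persisted
  io.on('connection', function (socket) {
    views++;
    io.emit('views', views);           // push the new count to every open tab
  });

  // client: the count is first rendered into the HTML on the server,
  // then kept fresh over the socket
  var socket = io();                   // or io('https://realtime.example.com') if hosted separately
  socket.on('views', function (n) {
    document.querySelector('.views').textContent = n;
  });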


Love your work, but the yellow flashes did encourage me to scroll that counter above the fold before I started reading. I get distracted easily.


Thanks for the info!


Chrome devtools shows he's got a WebSocket open. In the Network tab you can filter on WebSockets, find the request, and there's a Frames tab showing all the frames (unfortunately does not update in realtime).


> there's a Frames tab showing all the frames (unfortunately does not update in realtime).

Take another look in Canary ;)

http://gyazo.com/fae74178ac1aefea3d4b19993f366bbd


Nice!


> Server rendered pages are not optional

> Consider the additional roundtrips to get scripts, styles, and subsequent API requests

If you're using a framework like GWT, it compiles all of the relevant CSS files, JavaScript, and UI template files into one .html file. Then there are only one or two HTTP requests to download this HTML file, and the server only has to handle requests for fetching data, updating or adding stuff, etc. You can also gzip + cache this .html file to make it even smaller.

It runs lightning fast, too.


Another problem I had with that point is that there is no theoretical limit to the number of requests that can be made in parallel. If it takes 100 requests to get all your data, and the browser can handle 100 requests in parallel, you can get all your data with that same theoretical 50ms latency floor. This is a pedantic point, because as far as I know, all browsers currently limit parallel requests to a fairly low number, but it isn't true that more requests require more wall-time latency in a theoretical sense.


Often on mobile phone browsers and such, the number of parallel requests is much lower. That's where something like GWT can help a lot, as it can cut your 100 requests down to just 2.


100 requests was purposefully hyperbolic. In reality, everything I've done recently has been 3 static requests (html, js, css), and however many requests for images and data are necessary. It's really just the requests for data (which I don't think GWT helps with?) that end up being "extra", because server-side rendering still usually requires separate requests for js, css, and images.


> 100 requests was purposefully hyperbolic.

Have you checked out your average content web site?


Very fair point, but I was sort of not considering images. Although it's probably still a fair point, much of the rest is third-party ad junk, which isn't really affected by the server-side vs. client-side rendering question.


FYI, most of your images are also compiled into that single .html file I mentioned. This includes both the images used in css files, and the ones in <img /> tags. That means only one http request for loading your css, js, and most of your images.


That's really intriguing. Not sure I've seen anybody else do it that way. Thanks for enlightening me!


No problem. Just to clarify, only the small / medium sized images are inlined in this way. The large images are done as their own http request, so the browser doesn't slow down with rendering them.


Actually the ad stuff would largely go away if everything moved server side.


How so? You mean by having the server send the user's info to the ad provider? That doesn't seem to be a common setup, because of the advantages of cookies for tracking users across sites. Maybe I'm missing something though.


When it is done server side, any tracking cookie work has to be done ahead of time. So you have a single trip from the browser to get the ad, and maybe a second trip to confirm it was successfully shown. That's it.


Ah, but that won't work for the requests to third-party services that are for the purpose of telling those services "the user identified by the cookie in this request just visited this site" rather than actually showing ads, will it?


Absolutely it will (if that's what they want it to do). As I said, you've already done the cookie join, so you know what the magic userid is that the third party has in their system for that user.


This isn't as awesome as it sounds. When inlining everything, there is no control over the prioritization of resources. Large files like images can block the rendering of your layout, giving the appearance of being slow even if the overall download time is less. HTTP/2 has stream prioritization to solve this problem and is much more cache-friendly.


> When inlining everything there is no control over the prioritization of resources.

You can use split points to divide up your code, only the resources in a given split point are loaded. Example: If the user is viewing your 'Sign up' page, then only the resources for the 'Sign up' page will be loaded.

> Large files like images can block the rendering of your layout

Only small to medium files are inlined. Large files are downloaded as usual.


If I recall, only very small images are inlined as data URLs, and other are either sprited or left alone. There's a lot of control over these optimizations.


This might sound good on paper, but have you actually tried using a large GWT application daily? I use Google AdWords every day, and it's one of the slowest and most frustrating web experiences ever. And if Google can't get a GWT application right, who's to say you can?


I've been developing GWT apps for over a year, and without any doubt, they are the fastest-running apps I've built, compared to everything I've built using regular JavaScript / AngularJS. My clients have also used terms like 'insanely fast', 'runs like native' (on mobile), etc. referring to sites done in GWT.

I've used adwords a little bit, and it runs fine for me. The only slowness I can think of is when you request keyword / bid data and it fetches it from the server. That takes a while. But for that, it probably has to query a gigantic dataset on the server, and that's probably the reason for the slowness.

You can contrast that with Angry Birds' html5 version, which was also written with GWT.


But Google has not been pushing GWT for a long time now; they don't want more adoption of the framework. Instead, they want you to start using Dart, so beware of GWT's future.


Sure, Google isn't pushing GWT, but it does have a dedicated GWT team and also just launched a brand-new product based on GWT. More to the point, given the acceleration of the GWT community, it honestly doesn't matter how much Google proper cares about it. It's a stable and growing platform (e.g. http://gwtcreate.com).


https://news.ycombinator.com/item?id=8554339

Also, GWT has been open source for 2-3 years, and its development has been going on steadily. Even if Google were to abandon GWT, it would continue to be used and developed by others.


Are you suggesting that we now need a sophisticated framework in order to concatenate files together?


It does a lot more than just concatenate them: http://www.gwtproject.org/learnmore-sdk.html


I didn't call it a sophisticated framework for nothing. Of course it does a lot more than that.

The point is, it is ridiculous to suffer client-side rendering latency penalties due to an absurd number of round trips loading all these different components. Concatenation is NOT a hard concept, folks. Even if you aren't using GWT, there is no excuse for letting this drive the problem.


Seems like a well-reasoned set of opinions at first blush. I'll have to give it more time to sink in for the most part, but the one bit that elicited immediate disagreement from me was the particular illustration of predictive behavior. There is unquestionably value in some predictive behaviors (e.g. making the "expected path" easy) but breaking with the universal expectations of dropdown behavior doesn't seem like a strong example to follow.


Funny enough I've had to deal with many of these when implementing http://platform.qbix.com

I pretty much agree with everything except #1. Rendering things on the server has the disadvantage of re-sending the same thing for every window. I am a big fan of caching and patterns like this: http://platform.qbix.com/guide/patterns#getter

You can do caching and batching on the client side, and get a really nice consistent API. If you're worried about the first load, then concatenate all your js and css, or take advantage of app bundles by intercepting stuff in phonegap. Give the platform I built a try, it does all that stuff for you, including code updates when your codebase changes (check out https://github.com/EGreg/Q/blob/master/platform/scripts/urls... which automagically makes it possible)

I would say design for "offline first" and other stuff should fall into place.


For real-time updates in response to user actions, which is a bigger concern: average latency, or its variance?

Example: Server generates a sine wave which gets displayed as a rolling chart waveform on the client. As client spins a knob to control the amplitude, the server-generated stream should change (sine wave is a trivial example, representative of more complex server-side computation).


The real-time updates he's talking about don't require server-side processing - the google homepage switching immediately to the search view for instance - that processing can be contained within the Javascript application, and state is simply maintained against the server (and then by extension across other instances of the application).

I don't imagine he's suggesting we try the same approach where server-side processing is required.

If I have misunderstood you then I apologise.


You're right, and your explanation helped me understand the article better when I read it again. Thanks.


I really like the simplistic principles http://roca-style.org defines for web applications.

I find single page applications way too complex. The amount of code duplication is horrific. So everyone ends up building platforms like GWT or Dart in order to hide that overhead. But that does not mean that things get simple.

(Maybe I'm getting old.)


I can see where you're coming from but I find that React (with node on the server and a RESTful database) eliminates a lot of the code duplication because I can run the same view rendering logic on the client and the server.

ROCA is an appealing idea, but my concern is that in order for the the-API-is-the-web-client approach (which ROCA as I understand it seems to advocate) to work you end up mixing two entirely separate levels of abstraction: what may be a good abstraction on the API level may not be a good abstraction on the UI level. It's sufficient if your web app is just an API explorer, but not every app lends itself to that.

You could say that then we shouldn't be building those apps, but that's simply not realistic.


A neat example is github.com. When browsing a repository it refreshes only the relevant part of the page. But the URL changes and can be used to navigate to a specific resource.

But as the article points out is often the case, at github.com the HTML loaded does not include the already rendered resource; it must be pulled in via a separate request.


The way GitHub works is pretty decent, but also pretty basic. It uses PJAX, so the HTML is still rendered on the server but the body content is updated.

It still has a few issues though, I work on flakey connections now and again and sometimes it just gets stuck - it would be nice if the request were retried automatically after a few seconds.
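Roughly, the PJAX approach boils down to something like this (a sketch, not GitHub's actual code; fetch and Element.closest are used for brevity):

  function loadFragment(url) {
    return fetch(url, { headers: { 'X-PJAX': 'true' } })  // server returns a partial
      .then(function (res) { return res.text(); })
      .then(function (html) {
        document.querySelector('#container').innerHTML = html;
      });
  }

  // Intercept in-app links: swap only the content area and update the URL.
  document.addEventListener('click', function (e) {
    var link = e.target.closest && e.target.closest('a[data-pjax]');
    if (!link) return;
    e.preventDefault();
    loadFragment(link.href);
    history.pushState(null, '', link.href);
  });

  // Back/forward also swap content without a full reload.
  window.addEventListener('popstate', function () {
    loadFragment(location.href);
  });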


Great example! Github needs to give their code some TLC though. It seems like new comments on a pull request don't show up until after I push a new commit.


> "Server rendered pages are not optional"

I don't get this; in my opinion they are optional. You can show the iOS PNG placeholder (shown in the next item), which is very static and cacheable content, while fetching your highly dynamic data from a database or somewhere else.

It feels like the first principle contradicts 2, 3 and 4.


The solution you offer is exactly what server rendering is meant to stop. You should load content as fast as possible and that means rendering it on the server.

Please stop putting loading icons and spinners where your content should be.


I hear you on this, but the notion that client side is inherently a bigger download is kind of crazy, no?

Heck, if the concern is really about having lots of round trips, rather than server side rendering, you could have the server side stitch the client side components and still allow client side rendering. In fact, doing it that way makes it a heck of a lot easier to avoid reloading the entire page each time. Some kind of weird disconnect here.


It depends. I've worked at a company where the compressed and gzipped JavaScript file was over 1MB because there were multiple libraries being included for the use of one function.

That's obviously an extreme example.

Ultimately it's the engineer's job to make good decisions.


> That's obviously an extreme example.

No, I think that's missing the point. Sure it could be larger, but presuming you are trying to optimize the experience, there is nothing that would require doing it server side.

Why would the total payload needed to render the page client side _have to be larger_ than if it were rendered server side? Unless you are talking about rendering an image with client-side logic instead of sending a PNG/JPEG (in which case, sure, but that isn't what most people are talking about), I can't quite see it.


Just one thing to point out: it seems as if a lot of your SPA arguments are predicated on the idea that apps don't chunk and/or stream their logic. While the front-end SPA framework I use is currently pretty bad for SEO, almost none of the download or latency issues are applicable...


I think these are great ideals to strive for, but they seem lower priority than a couple things that they can get in the way of if you're not careful.

First, in your quest to show me the latest info, please please please don't introduce race conditions into my interface. I don't want to go to hit a button or a key or type a command, and have what that means change as I'm trying to do it.

Second, it's often important to me what has happened locally vs. what is reflected on the server (especially if that's public). Please do update the interface optimistically in response to my actions rather than sitting and spinning, but please also give me some indication of when my action is complete.
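In code terms, something like this is what I mean -- optimistic, but honest about what the server has actually confirmed (the endpoint and class names are hypothetical; fetch is used for brevity):

  function postComment(listEl, text) {
    var li = document.createElement('li');
    li.textContent = text;
    li.className = 'pending';           // e.g. rendered slightly faded
    listEl.appendChild(li);             // optimistic: shown immediately

    fetch('/api/comments', {            // hypothetical endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: text })
    }).then(function (res) {
      li.className = res.ok ? 'confirmed' : 'failed';
    }, function () {
      li.className = 'failed';          // tell the user it didn't go through
    });
  }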


"A slightly more advanced method is to monitor mouse movement and analyze its trajectory to detect “collisions” with actionable elements like buttons."

Is this a joke?


Read this: http://bjk5.com/post/44698559168/breaking-down-amazons-mega-...

for another useful example of this technique.
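The trick in that post is essentially a point-in-triangle test: keep the submenu open while the cursor stays inside the triangle formed by its last position and the submenu's near corners. A rough sketch:

  function sign(p1, p2, p3) {
    return (p1.x - p3.x) * (p2.y - p3.y) - (p2.x - p3.x) * (p1.y - p3.y);
  }

  function pointInTriangle(p, a, b, c) {
    var d1 = sign(p, a, b), d2 = sign(p, b, c), d3 = sign(p, c, a);
    var hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
    var hasPos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(hasNeg && hasPos);
  }

  // `last` is where the cursor left the menu item; topCorner/bottomCorner
  // are the submenu's nearest corners. While this returns true, don't hide it.
  function headedTowardSubmenu(mouse, last, topCorner, bottomCorner) {
    return pointInTriangle(mouse, last, topCorner, bottomCorner);
  }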


This is totally different and makes sense. You simply observe whether the mouse moves off within a certain angle, so as not to make the submenu disappear. This is cheap and useful. It helps the submenu not disappear while you are moving toward it, without having to move the mouse exactly along the tiny space that connects the two menus. This can actually save like 10 seconds each time (for inexperienced users)!

The other one will AJAX-preload a dropdown's content when it detects that the current mouse trajectory is in line with it. Come on.


No. Similarly Microsoft has done some work to improve input latency by predicting what gesture you're about to do once you start touching the screen.


Given the limited number of gestures this is cheap and easy. This is as easy as "did the user go left or right after touching? Ok, then did it go up or down?".

Analyzing a trajectory is a completely different thing.


When you are developing a web application for phone, tablet and desktop, is it a good principle to use the same HTML for all three and a separate CSS for each device? Is there a case where this would cause problems?


It depends on how this is set up. If you are using a template engine, then I don't see why this would be a big deal, as long as it's a technical decision followed throughout the project. If you are not using a template engine and are using JavaScript to throw things around (like a bunch of jQuery piled on top of each other), then it becomes an issue.

Are you using any kind of server side framework (Like Django/Rails)?

Are you using any kind of client side framework (like Angular)?

Are you using any kind of layout framework (like Bootstrap)?


After a certain scale, it's better to just decouple mobile and tablet from web.


Can we first discuss design 101, i.e. don't put blinking elements in the user's periphery unless it's something really super important? The page view counter not being such a thing.


It could be something important. I don't know the exact intention behind this particular view counter, but consider for instance the scenario of using such a counter as an element aimed at enhancing credibility and establishing trustworthiness. You'd be more likely to read an article knowing it was also read by other 100K people, wouldn't you?



