For truly trivial things like upvoting comments, fine.
But for sending an e-mail? Not in a million years. I want to see the spinner, and then know that it was actually sent, so I can close my web browser.
E-mails can sometimes be terribly important things.
If my e-mail web app always instantaneously tells me "sent!", then I never have any idea if it actually was -- how long do I have to wait before it tells me, "sorry, not sent after all"? What if the app doesn't get back an error code, but the connection times out? What if the app doesn't implement a timeout?
Basically, if I don't get a real, delayed, "sent" confirmation, then I know there was a problem and can investigate or try again. But if I get an instantaneous "sent" confirmation, and then don't get a "sorry, there was a sending error" message, I can't be 100% confident that the data actually got to the server, because maybe there was a problem with triggering the error message. And since I'm a web developer, I can imagine all SORTS of scenarios that a programmer might not account for that would prevent an error message from being displayed.
But there's a third choice, which I hope is the sort of thing he meant:
as soon as you send a message, it goes into a little list on the side of your screen of things that are transferring to Google's servers. You can see it there, and you will see it go away when it has been transferred, so you know what's going on. But in the meantime, you can go back to your inbox, look at other emails, or do whatever else you want. That's how an asynchronous interface should be done.
One thing I didn't notice the article mentioning is that it's possible to have blocking only for certain parts of an interface. So if you press a "load picture" button, then maybe a gray square with a spinner will appear, but the rest of the interface should continue working as usual.
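Roughly, in plain JavaScript (just a sketch; the element ids, CSS classes and image URL here are made up):

    // Sketch: block only the element being loaded, not the whole page.
    var button = document.getElementById('loadButton');
    var slot = document.getElementById('pictureSlot');

    button.addEventListener('click', function () {
      slot.className = 'loading';            // gray square + CSS spinner
      var img = new Image();
      img.onload = function () {
        slot.className = '';
        slot.appendChild(img);               // replace the spinner with the picture
      };
      img.onerror = function () {
        slot.className = 'error';            // only this square shows the failure
      };
      img.src = '/picture.jpg';              // the rest of the page keeps working
    });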
I think this is very key - you can take away blocking from a lot of scenarios without hurting user confidence/comfort if and only if you add an "aside indicator" to give the user peace of mind about important things that they want confirmation on and about the progress of the system. But, yes, they should be asides.
I think he more means like the "Background Send" lab in Gmail, which gives you progress ("Sending in background..." -> "Sent.") while still allowing you to continue to browse the UI.
The default "Sending..." notation blocks you on the same page, doesn't it?
This is what MS Outlook does, as much as you might hate the program. Outlook gives you lots of information on whether and when your mail is still sending, if there was an error, etc. It is tolerant of closing the window or losing network connection. And sending a mail doesn't get in the way of doing other things in the program.
Maybe I don't have experience with many other mail programs, but this is what Thunderbird does also; it's not unique to Outlook. I hate Outlook because it's slow (search and UI-wise) and because it's stupid enough to think that storing all its mail data in a monolithic file or a small number of files is a good idea -- hello, corruption problems.
This is kind of what the Gmail mobile web app does: as soon as you click send, a blue bar at the bottom shows there's a pending outbound message and you're free to browse other mails until it gets sent.
Technically, seeing the spinner doesn't block you from doing other things, if there are other UI elements that exist on the page. Problem is, sometimes, web devs make the entire page a lightbox, and you can do nothing else except watch the spinner.
One of the Gmail labs allows you to do this (assuming I understand what you mean) - it's called "Background Send".
So instead of blocking and showing the "Sending..." message, it redirects you to the main inbox and shows a "Sending in background..." message, until the message has been sent. Of course, Gmail is so fast for me that usually I'm barely back at the inbox before the message is finished! :)
But at that point, you still have a blocking call to put it on the queue at Google's servers. Which can just as well be, well, a mail server (which maintains a queue of its own). So adding another layer of abstraction on top of it kind of defeats the purpose.
Interesting point, but what happens if you click "send!" and then close your laptop. Is your message sent or not?
While your point about the server end being a queue is true, there's an expectation that once your message is offloaded onto Google's queue, they will reliably process it in a reasonable amount of time.
How is that call to put it on the queue "blocking"? "Blocking" is being used here to describe the UI preventing the user from further interaction (such as reading a different email) until the server acknowledges receipt. You can still wait for queue acknowledgement without preventing the users from browsing to other parts of the app.
Ah, but the queue doesn't exist on Google's servers. It's purely on your computer.
So the idea is that instead of time-consuming, blocking operations, you have fast blocking operations that put things in queues, and the queues then handle the slow operations.
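As a sketch of that pattern in JavaScript - `sendToServer` and `renderPendingList` are stand-ins for whatever actually performs the network call and draws the little pending list:

    // Sketch: enqueue instantly, let a background loop do the slow network work.
    var queue = [];
    var sending = false;

    function enqueue(message) {
      queue.push(message);          // the fast, local part
      renderPendingList(queue);     // hypothetical: the little list on the side
      drain();
    }

    function drain() {
      if (sending || queue.length === 0) return;
      sending = true;
      sendToServer(queue[0], function (err) {   // the slow part happens here
        sending = false;
        if (err) {
          setTimeout(drain, 5000);  // leave it queued, try again shortly
        } else {
          queue.shift();            // delivered: drop it from the pending list
          renderPendingList(queue);
          drain();
        }
      });
    }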
The SINGLE biggest e-mail scaling mistake we made at AOL was to insist that the client stay in a wait state until we were Absolutely. Positively. Sure. that the message was guaranteed to have been delivered.
It ended up turning what could be a shared-nothing transaction (like every other ISP) into a network-wide two-phase commit requiring (at the time) millions of dollars of fault-tolerant hardware.
Meanwhile, guess what? You have no idea if that mail is queued at your ISP because the destination is down, sitting on a volume with a write-back cache on a RAID drive with a dead battery. You can never be 100% confident unless you have delivery status notifications, and those are pretty much dead these days.
There's a huge difference between "we have received your email and will make every effort to deliver it to the recipient" and "the recipient has received your message in their inbox". For purposes of "can I close my browser window?", the former is fine! Sounds like AOL decided on going for the latter?
Yep. It wasn't even so much a conscious initial decision as a conscious decision not to change the semantics once we got big.
At first, all mail was local, and so naturally was stored in a database, like most non-Internet mail servers of the day. It was what Dave Crocker called "rock mail" - I stick a message under a rock, and you know to look under that rock for your message.
The bigger we got, the harder that was, in the days where horizontal scaling by moving I/O to another machine was just as expensive (because the network was even slower than the disk). But we were sure that our distinction was important, and Internet-style queued mail was widely considered flaky (due in no small part to our own poor Internet delivery, no doubt). So we kept it, to the point of storing user mailboxes on Tandem NonStop machines that did multi-site replication with SQL implemented at the drive controller level.
Many of our scaling challenges were due to decisions that made sense early on and that we consciously refused to ditch; I wrote a bunch up here:
Gmail actually has a wonderful middle-ground solution.
In Google Labs, there's a feature called "background send." I love it. It shows "sending in the background," allowing you to go do other things. If you try to close the tab/browser, it warns you the same way it warns you when sending normally.
> "If my e-mail web app always instantaneously tells me "sent!", then I never have any idea if it actually was"
But you already have no idea if it actually arrives, let alone at the right mailbox, let alone was read/processed - until you get an asynchronous response.
And even if a web app blocks on 'sending mail', you can still suffer timeouts and disconnects, meaning the case remains of occasionally having to refresh and manually verify an action truly went through.
So if you're not concerned about those things, why be concerned about blocking on 'sent'? And if you are concerned about those things, how does non-blocking materially increase your concern? At least an asynchronous UI could automatically follow-up on a client-side send command that was never acknowledged.
Don't get me wrong; I can see an argument for operations that you truly do want to block all activity on until you receive a pass/fail. Email just doesn't strike me as a particularly good example of that.
Until the message has made it from my computer to the mail server, plenty of things I do could stop it from going out:
- I could quit my browser
- I could close my laptop
- If my train goes into a tunnel, I could lose my internet connection
…and I might never find out that the email didn’t make it, because my mail provider might never have found out that I was trying to send one. I need to know when the message is safely out of my hands.
> "I need to know when the message is safely out of my hands."
So what do you do with IMAP clients? Those almost always have an async UI. Or clients operating through an intermediary (BES)?
I understand that for some messages, sure, you want an acknowledgement. I just don't see how that process is notably different for an async web client compared to what's already out there on desks and phones and particularly as compared to a blocking web UI.
If you had a blocking UI, any of those 'interruption' events could occur while you're staring at a spinner. And you (rightly) wouldn't know or feel confident that the message was sent until you re-established your connection and verified the item had made it to your sent items.
Which is the same as it would be with an async client: it's an important email, so until you saw it in the Sent Items folder, you wouldn't have the warm-and-fuzzy feeling.
Not to mention that in any Internet mail system, the Sent Items folder has no real link to what's been sent; the fact that your outgoing emails also get appended to your Sent Items folder is a pure client convenience, done via a different protocol over a different connection with a different copy of the message body, and it's quite possible for them to show up even if the transmission itself failed.
Right, when things are important, you could just break the user's perception that things are happening instantaneously and give an asynchronous notification when the operation succeeds, instead of just when it fails, couldn't you?
Outlook and Apple Mail already solve this issue by showing a notification (audible or visual) when the email was sent. I don't see how the situation you're describing is any different from what has already been solved in desktop applications.
If your email never got sent to the server in the first place, why couldn't the website use local storage to detect that and report an error? Gmail uses constant POSTing of drafts here to solve that issue (not sure if it uses local storage).
I don't see why this UI couldn't show you a notification if Send was pressed and communication to the server wasn't successful. Especially since it is doing so much client side work already.
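A sketch of how that could work - the /send endpoint and the `notify` toast helper are made up, and the draft is parked in localStorage until the server confirms:

    function sendMail(draft) {
      localStorage.setItem('draft:' + draft.id, JSON.stringify(draft));  // parked locally first

      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/send');                       // hypothetical endpoint
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.onload = function () {
        if (xhr.status === 200) {
          localStorage.removeItem('draft:' + draft.id);
          notify('Sent.');                             // hypothetical toast helper
        } else {
          notify('Sending failed - your draft is still saved locally.');
        }
      };
      xhr.onerror = function () {                      // request never reached the server
        notify('Sending failed - your draft is still saved locally.');
      };
      xhr.send(JSON.stringify(draft));
    }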
I hear you, but I think that this can be solved with the UI instead of a blocking approach. The synchronization status of the app (and even individual records) can be shown without preventing a user's next action. As Alex states in his article, you can also warn users about pending requests when they attempt to leave the page or close the browser.
This isn't how most client-side email apps work. For example, in both the Mail app for Mac and iOS, the UI for sending is dismissed immediately. You know when it's sent when you hear the whoosh sound (or when you look and see that there is nothing left in your outbox).
Bottom line: you can have a non-blocking UI while still communicating to the user when there are problems.
I'm sure there are other ways we can indicate that an email that was supposed to have been sent was sent without blocking the whole UI. I think that was the point. Don't block the user from getting stuff done.
This. In the case of Gmail specifically, there can actually be data loss, if you're disconnected from the network and the app optimistically closes out the message screen. So not only was it not sent, it's not even in your Drafts folder, because the web page couldn't reach the server.
The discussion was on the pitfalls of the user not knowing something was sent, and one of them is data loss. The fact that a web app /can/ ameliorate a lost connection is orthogonal. If the user was told the email was "sent", he's going to feel free to close out all instances of e.g. Gmail, so that clever Javascript isn't going to be running. If -- if -- he re-uses this same browser in the future (which is not guaranteed, since he thinks the email was sent and may have no concept of browser storage anyhow), the email may send then, which could be hours, days, weeks or months later -- unexpected behavior which will cause unexpected results.
I think there's a better solution for e-mail. The moment it reaches the server the UI unblocks.
Later, if there is an issue with sending your e-mail it alerts you inside your browser. If you're unreachable through your web browser it sends you a text. What's wrong with that?
So what you are saying is that the server, upon receiving the email request, immediately sends a 200 OK response and passes the email request to another server (or spawns a new thread). Isn't this probably what happens anyway? In the case of Gmail it's likely a Comet request that is held until the email is finished sending. Is the former method really that much better?
Or could it be that you only expect the "sent!" message because that's been the norm up until now? Would you agree that at the very least a "sending" message should be non-blocking?
Ah, thick clients are coming back again, and now we've reached the point where people start trying to build asynchronous applications because they're frustrated with choppy UI.
Unfortunately, pretending the network isn't there doesn't make it so. The flakiness has to come out somewhere, sometime. Either you make the user wait now, or you explain later, after you've lied about what you did. It's a tricky tradeoff.
Let's fast-forward to the end of the movie: You'll end up with a zillion special cases that are impossible to test properly. You'll decide to restore sanity by replicating the data into a client-side store with low latency and high reliability, so you can go back to a synchronous UI that your developers can reason about. All the craziness will be in a background process that syncs the client and server stores, which will still have to cause weird behavior as reality demands it, but at least the logic is contained. (I just described an IMAP mail client, or--for a Normandy-invasion-scale example--some versions of Microsoft Outlook.)
Then a new thin client platform comes along where you can't do all that complicated client-side stuff. The cycle repeats.
There are significant costs for real world apps to what the OP is suggesting, and you can't abstract them away in a framework or library, as much as you may wish it to be the case.
Nice post. I'd like to briefly respond to the bit about the difference between Spine, which generates pseudo-GUIDs for models created on the client, later overwriting them if the server responds with a real id; and Backbone, which has a "cid" (client ID) for every model regardless of the canonical server ID.
The reason why Backbone provides a persistent client id for the duration of every application session is so that if you need to reference model ids in your generated HTML, you always have something to hang your hat on. If I have '<article data-cid="c530">' ... I can always look up that article, regardless of whether the Ajax request to create it on the server has finished or not. With Spine's approach: '<article data-id="D6FD9261-A603-43F7-A1B2-5879E8C7926B">' ... I'm not sure if that id is a real one, or if it's temporary and can't be used to communicate with the server.
Optimistically (asynchronously, in Alex's terms) doing client-side model logic is tricky enough in the first place, without having to worry about creating an association based off a model's temporary id. I think that having a clear line between a client-only ID and the model's canonical ID is a nice distinction to have.
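To make the distinction concrete, here's a rough framework-agnostic sketch of the cid idea (not Backbone's actual implementation, just the shape of it):

    // Sketch: a cid is a throwaway, session-local handle that is valid
    // immediately, regardless of whether the server has replied yet.
    var cidCounter = 0;
    var modelsByCid = {};

    function createModel(attributes) {
      var model = { cid: 'c' + (++cidCounter), id: null, attributes: attributes };
      modelsByCid[model.cid] = model;
      return model;   // usable right away, e.g. <article data-cid="c1">
    }

    // When the server acknowledges the create, attach the canonical id;
    // anything referencing the cid keeps working either way.
    function onServerCreated(cid, serverId) {
      modelsByCid[cid].id = serverId;
    }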
Couldn't you just allocate a pool of IDs to each session (open client), and let the client generate real, unique IDs directly from it? That way you wouldn't need collision detection, synchronization, etc. You only need a big enough ID space, or a way to reuse the IDs of the pool that weren't actually used by the client (by including them in new pools).
That's what (N)Hibernate calls the hilo algorithm. Compared with GUIDs, it has the advantage that IDs are somewhat sequential. (Random keys are terrible for index fragmentation.)
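The hi/lo idea, sketched in JavaScript (not any ORM's actual code; `fetchNextHi` is an imagined call that asks the server for the next block, shown synchronous for brevity):

    // Sketch: the server hands out a "hi" block, the client generates real ids
    // locally until the block runs out, then asks for another one.
    var BLOCK_SIZE = 100;
    var hi = null;    // assigned by the server via fetchNextHi()
    var lo = 0;

    function nextId(fetchNextHi) {
      if (hi === null || lo >= BLOCK_SIZE) {
        hi = fetchNextHi();          // one round-trip per 100 ids, not per record
        lo = 0;
      }
      return hi * BLOCK_SIZE + lo++;
    }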
I am afraid that would create a mess in the DB (it's just my guess), though your idea sounds nice to my ears. The hardest part would be if a session runs out of IDs, I guess. One can never allocate an optimal pool of IDs for every user; there are always going to be "bad cases".
From the server perspective, it stores the real ID and the CID given by the client. In the extremely rare case where the CID could be duplicated, the server could send back an error with a unique CID and the client would update itself.
That way, we're sure the ID is always unique on the server, but the client simply uses a cid. In fact, the 'cid' could even be abstracted away by calling it ID. I.e. the server has two ids, the server one and the client one; the client doesn't need to know the server-side one.
Sure, if you want to get fancy you can always do something fancier.
But in many apps, your server-side IDs are auto-incrementing MySQL or Postgres ids, or even a Flickr-style ticketing ID server. You really don't want your DB to be worrying about what are essentially transient JS/HTML references.
> "In the extremely rare case where CID could be duplicated, the server could send back an error with an unique CID and the client would update itself."
That's exactly the catch.
If you still need to consider CID collisions and write code to handle a potentially-different canonical server ID, there's no conceptual or complexity savings. You're not doing 'only' a CID; you're doing all the same work.
Well, I don't want to go into deep maths, but the chance of creating the same cID is infinitesimal, so one doesn't have to handle this case gracefully. I.e. just refresh the page or do a soft refresh (clear the models and re-send the data from the DB, which isn't huge since that part is already coded for the initial loading). It's not the same as if it had a 1/10 chance and you had to be clever to fix it. I mean, even without this problem, it happens from time to time that the best software needs a refresh because of a small bug.
That might be a practical improvement for some applications. But at a framework level I don't see any clear reason to prefer that approach. Particularly as the savings evaporate for any application that can't handle collisions 'less than gracefully'.
And, personally, it still 'smells' to me. I know that's not an objective argument, but there it is. It feels like a particularly leaky abstraction that will end up causing more trouble than it spares.
edit: clarified sarcasm that looked ambiguous on second glance.
That isn't right. The GUIDs should be internal only.
This leads us to the complexity of the "Async" UI in general.
In order to redirect (navigate) the user to the proper URL you DO need to wait for the response from the server.
But a real async UI should just allow the user to do something else. It doesn't mean it should not WAIT for the server response.
But from reading the article he says that with Spine, using either the temp or final ID in your app will work. It seems to handle the temp to final ID linking under the hood.
That's assuming I'm reading the article correctly.
That is not correct. The GUID is only preserved within one single page. Go to the example, create a page, copy the link and open it in a new tab. You will see.
The author managed to pick the worst possible example of a site 'doing it wrong.' First, GMail practically invented the asynchronous UI, you'd think they know what they're doing. And, of course, they do. The reason it blocks when you send an e-mail is because that way you can be sure the damn thing was actually sent.
I love the feeling of immediacy users get when using AJAXy applications over render-view-submit-rerender applications, and my users actually comment on this to me (not in as many words, but they say that it is "light", "fast", "easy to use", etc), but the development cost of going the extra mile to asynchronous strikes me as likely to be very high indeed. It already costs me about 5x development time to do something client side versus server side, just because of how much time wiring up Javascript takes. (And praying it doesn't break, because Javascript is orders of magnitude harder to test than Ruby is.) The costs for rewriting the entire app to exist simultaneously in the browser and the server, and to magically never fall out of sync even when users do something user-y, scares the heck out of me.
The whole toolchain for reasoning about stuff happening in the browser is still lagging a few years behind what we have on the server, which is a related but larger problem. We have Firebug, which gets us truly revolutionary features like "output log messages... in a browser!" and "inspect the internal state of objects in memory... in a browser!" But many of the rest of the cutting-edge developments from the 60s and 70s haven't quite made it to the browser yet, or they're not yet at the point where they can be used by mortals. (Selenium: I want to love you, and yet I can't actually use you for anything because you break my brain.)
And then some young-gun will come along and create Ruby on Rails for the client side and all these frustrations will be abstracted away. Something being hard to do doesn't mean it shouldn't be pursued.
I really dislike the attitude of "errors are rare, so don't spend much time on them" espoused by the article. Errors are rare in the sense that you will often miss them, but most of your users will run into them.
Let's say your AJAX requests have a 0.1% chance of failure. If your users perform a thousand actions each on average, then roughly 63% of your users (1 - 0.999^1000 ≈ 0.63) will have been exposed to your error flow. Hope it's better than "Sorry, an error occurred."
Individual errors are rare compared to successes. Overall errors happen all the time.
I wouldn't say he's saying spending less time on them. He's saying that they are the exception, so don't make the whole interface depend on the possibility of them. Proceed assuming the optimal case and when they do happen, provide a safe, friendly means of dealing with them.
Amazon: 100 ms of extra load time caused a 1% drop in sales (source: Greg Linden, Amazon).
Google: 500 ms of extra load time caused 20% fewer searches (source: Marissa Mayer, Google).
Yahoo!: 400 ms of extra load time caused a 5-9% increase in the number of people who clicked "back" before the page even loaded (source: Nicole Sullivan, Yahoo!).
Answered my own question, but will leave it here for anyone else interested: does anyone have the sources for those facts?
I recently used several of the techniques described, but I carefully chose when and where to implement them. For example, when a user "deletes" an item, rather than removing anything from the DOM before the request, I hide the appropriate elements, send the request, and if successful, remove DOM elements. The advantage of this approach is that the UI feels snappy, but it is easy to fall back if something goes wrong. Being optimistic that things will "just work" is alright in a fairly controlled environment, but when mobile is introduced, a mix of optimism with a soft fallback is a good approach.
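Something like this, as a sketch (the element lookup and DELETE endpoint are made up):

    // Sketch of "hide first, delete for real, fall back on failure".
    function deleteItem(itemId) {
      var el = document.getElementById('item-' + itemId);
      el.style.display = 'none';                   // instant feedback, nothing removed yet

      var xhr = new XMLHttpRequest();
      xhr.open('DELETE', '/items/' + itemId);
      xhr.onload = function () {
        if (xhr.status >= 200 && xhr.status < 300) {
          el.parentNode.removeChild(el);           // confirmed: really remove it
        } else {
          el.style.display = '';                   // soft fallback: bring it back
          alert('Sorry, that item could not be deleted.');
        }
      };
      xhr.onerror = function () {
        el.style.display = '';
        alert('Sorry, that item could not be deleted.');
      };
      xhr.send();
    }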
From the UX perspective this approach is still wrong. Any sort of operation that may fail needs to provide an intermediate indication of an in-progress activity.
For example, if an item is updated and the backend balks, but only 10 minutes later, there is no clear and concise way to indicate this error to the user unless the item was marked as "in progress". If the backend is normally snappy, then it might make sense to delay showing the in-progress indicator (so that the majority of users won't ever see it), but discarding it altogether is not the way to go.
Another example, say there is a list of items keyed by a name. I delete A, then rename B into A, and then the deletion of A fails. Ain't that a pretty mess to shovel yourselves out of?
That's not to say that there aren't certain UIs that could be made to work in an "instant" fashion, but realistically there just aren't very many of them.
You make some good points, but I don't think the approach I describe is wrong, unless like you said, I allowed requests to go on indefinitely. In the scenario you describe, it certainly makes sense to block the UI if editing uniquely identifying information is co-mingling with delete actions. I think the point we are both trying to make is that creating responsive applications should not be at the expense of the user's understanding of what is happening. This is a difficult balancing act, but perception is an important part of the UX, and whether we like it or not, if using an application feels faster it will generally be perceived as a better experience.
> ... it certainly makes sense to block the UI ...
There's no need to block the UI. It is perfectly sufficient to disable just the affected item.
> ... creating responsive applications should not be at the expense of the user's understanding of what is happening.
In this case you are bound to repeat Microsoft's Distributed COM fiasco. They tried to blur the line between accessing in-process, in-machine and over-the-network services behind an abstract API. It was nice in theory, but in practice it was a disaster. It is really hard to write a meaningful app - even an asynchronous one - when an API call can take anywhere between a few ms and several seconds to complete.
In case the parallel is not clear - their idea was the same as yours: "devs need not know what's happening". This does not work. Devs need to know, as do users in your case. Perception is indeed an important part of the UX, no arguing there, but the UI needs to be designed in a way that precludes users from making false assumptions that would prove frustrating and disastrous should the backend go kaput. Faking snappiness does the opposite; it's make-believe.
There's something to be said about user confidence and a user's confidence level directly correlating to their productivity using your app.
I could not have thought of a worse example than removing a progress indicator from sending an email. Making an "async UI" work in a fluid way that provides confidence to the end user is much harder than simply changing the state immediately and hoping that 99% of the time, it works.
Error handing can be a pleasant experience if done correctly, and in this blog post it's just an afterthought.
Here's a better way to do it (a rough sketch in code follows the list):
- I click "Send Mail"
- My UI changes as if it were sent, allowing me to do other things in the meantime.
- I receive a growl notification in some other part of the UI that tells me the email has been successfully sent.
- If 1 second has gone by and I did not receive a response from the server to confirm that the mail has been sent, I will see an indicator that tells me that the sending is in progress, where the growl indicator would have been.
- If it is an error, the indicator changes and allows me to click it to go back to the mail composition view.
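A rough sketch of that flow (`send`, `showNotice` and `showError` are hypothetical helpers):

    // Sketch: optimistic "sent" flow with a delayed in-progress indicator.
    function sendMail(draft) {
      var settled = false;
      var pendingTimer = setTimeout(function () {
        if (!settled) showNotice('Sending...');    // only appears if it takes > 1s
      }, 1000);

      send(draft, function (err) {
        settled = true;
        clearTimeout(pendingTimer);
        if (err) {
          showError('Could not send - click to return to your draft.', draft);
        } else {
          showNotice('Sent.');                     // the growl-style confirmation
        }
      });
    }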
The concept of providing perceived performance is not new, but the details are in the execution, and you will shoot yourself in the foot if you don't cover all the little details that are required to make something like this work.
Otherwise, some company is going to implement some jarring async UI incorrectly and piss off a lot of users.
Yes, blocking a UI is bad, but notifying the user of progress and task completion is a very good thing.
There's a difference between a non-blocking UI, and the UI that hides progress of an operation that actually takes time. I'm all for non-blocking UIs, sure, let me do other things while I wait. I'm not so crazy about hiding progress. Call me a control freak, but I do want to see that the action I requested is actually completed, not just appears to be.
I don't think that's true for most users. Coders might think of it that way because they understand the complexity of apps.
When an average user completes an action and sees the results instantly, they're not wondering if something went wrong, or if something is ongoing. They've already had UI feedback suggesting a successful result.
It goes without saying that mechanisms in the back-end need to be implemented in order for AUI to provide a great user experience.
For instance:
- On error, an action should be retried.
- Long-lived processes should be queued and upon failure, requeued.
Hopefully, the user won't reload the whole JavaScript app before this action is successfully completed, and should never notice it failed in the first place.
It's not perfect, but I find that it is better. There's definitely work to be done on the error-handling front. The UI should be able to clearly notify the user when something they previously did didn't work.
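A rough sketch of the retry idea from the list above, shown client-side here for illustration (`performAction` and `showError` are made-up names):

    // Sketch: retry a failed action a few times with growing delays before
    // surfacing an error to the user.
    function runWithRetry(action, attempt) {
      attempt = attempt || 1;
      performAction(action, function (err) {
        if (!err) return;                          // succeeded; the user never noticed
        if (attempt >= 3) {
          showError('"' + action.label + '" did not go through.');
          return;
        }
        setTimeout(function () {
          runWithRetry(action, attempt + 1);       // back off: 1s, 2s, ...
        }, 1000 * Math.pow(2, attempt - 1));
      });
    }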
You're saying "hopefully" the user won't visit a different site in the same tab or close down the browser entirely after completing an operation. That's quite the hope.
Merely notifying the user, long after, that an error occurred and his work is not saved is hardly sufficient. Even assuming that the user doesn't leave your app, they could easily be off doing something else miles away from that operation.
You do know that you can show the user a prompt warning them that they'll lose unsaved data if they close the page or go to another URL, right? That solves this issue.
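For example (a sketch; `pendingRequests` would be incremented and decremented around each Ajax call):

    // Sketch: warn before leaving the page while requests are still pending.
    var pendingRequests = 0;

    window.onbeforeunload = function () {
      if (pendingRequests > 0) {
        return 'Some of your changes are still being saved.';  // browser shows a confirmation dialog
      }
      // returning nothing lets the page close without a prompt
    };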
Then what happens? How long is the user expected to sit there waiting for unsaved data to save that might never save?
And what about the second point: how useful is it to alert the user that something they thought was done actually wasn't, long after they've stopped caring about it?
That's a UI issue. You can show a more prominent loading progress indicator if they elect to stay on the page after trying to leave it, so that they know when it's safe to exit.
And on the second point, presumably you'd always show some kind of indication that the message is sending, just not one that blocks the rest of the UI.
This is how desktop mail clients I've used work. The sending indicator is small, but if I try to exit before it's finished sending, it blocks the exit and alerts me about it.
The problem with using email as an example is that it's too perfect. Everyone is comfortable with the concept of the outbox where messages go to be sent. Email messages are fully self-contained and independent from every other message. And while in an email client, all you do is send and receive messages, so any UI related to that is fully expected.
The real question is how this would apply to applications that are slightly more complicated. An app where operations have consequences beyond just the single item you are working on; where users are clicking "save" and not "send"; and where users move between working on entirely different entities. It seems like this adds a lot of additional complexity for very little perceived performance gain.
Obviously you need to weigh the gains vs the development time required for adding and maintaining this, as with anything.
In terms of the user experience, if the perf gains are small enough, then the chance that the user tries to exit while a request is in-progress should be slim, so interrupting their exit is fine as an edge case, and the gains in responsiveness should be weighed against your core user or business metrics - several hundred ms extra delay per action can have a significantly negative impact.
If the gains are large enough that the user is likely to interrupt something when exiting, then blocking the entire UI for each request is a terrible experience and you need to do something about it anyway.
You can tell the truth 90% of the time and still be considered a liar. The value of your feedback goes through the floor if it doesn't accurately reflect the state of the system. Telling the user that something is done when it isn't is plainly and simply wrong.
Nice idea, definitely not new. There is one major problem with this approach - you save a document, request processing, navigate away from the page, start new work, and after 30 secs your request fails. Now the code complexity for you to handle this situation is high. Multithreaded/asynchronous systems are always hard.
The author specifically talks about that in the article saying that you can catch "leaving a page" and notify the user that there's still something pending and data would be lost.
And in the case of a big error, you can either refresh the page, or clear the models and resend them in JSON format to stay in sync with the server. It might be a little more complicated, but with a good framework, not that much more. I.e. multithreaded code is a pain; asynchronous or not, it's a pain. And, in the rare case where you really need to wait for the server to answer back, well, use a loader. But there's a difference between using a loader when you absolutely need it, and using it everywhere. Think of how apps work on the desktop: everything is lightning fast, but on some rare occasions, it blocks. What is better? Something that always blocks or something that sometimes blocks?
No, but that's not the point. If the web page is telling you "Wait, your email is being sent" and you close your browser, do you expect the email to be sent? Or if it says "Please wait, the document is being saved" and you close the application?
I.e. the point is that being asynchronous makes it feel smoother. If the browser crashes while data is being transmitted, there's nothing you can do - Ajax, asynchronous or whatever. So what will happen is the data will be lost. The asynchronous part doesn't resolve all problems; it just feels faster for the user.
And, by the way, why the ":)" at the end? Is it because you were happy? Personally, I find that a bit provocative. (I.e. in gaming, people would say "You suck :)" or, if they'd crushed you, they would just say ":)". It's bad manners.) But then, if you were happy and just wanted to show it, sorry for this comment.
A lot of data entry is hard to unwind and correct in those kinds of situations; where a multi-step process is fully filled out and submitted based on key initial data that's later found to be wrong/invalid.
But I find that you often wind up having to write code to deal with that anyway, to handle cases of inadvertent user errors.
(e.g. I reserved a flight, room and rental car in Kansas City, KS -- but I was supposed to be reserving a flight, room and rental car in Kansas City, MO.)
So while I'm intimately familiar with and sympathetic to the challenge and complexity involved; I don't know that it's additional challenge or complexity.
I'm curious how SEO will play into this trend of async UIs and JS frameworks.
In 2 of the example studies given (Amazon and Yahoo!), we're talking about content/commerce sites where rankings matter.
If you reduce load time by Xms and increase conversions by Y%, your net gain could still be negative if you get bumped to page 3 for important searches and lose traffic.
Do any of these JS frameworks consider SEO and have appropriate features built-in? (I'm thinking of things like hash fragments)
Can someone who runs a content/commerce site that cares about SEO comment on this?
I'm about to launch a new version of my business's website today which incorporates the asynchronous UI concept while keeping SEO in mind.
I settled on building a JS-free version of the website using the templating system I've developed for the backend, and then loading in JS at the end of page load which replaces and rebuilds the site into an interactive UI for users with JS enabled.
Assuming Google doesn't try too hard to execute JS on the page, it should get a clean, "normal" version of the site, with all text & menus and everything else accessible, while users get something a little bit different (but with the same content).
We do the same thing. Our CMS delivers views based on the type of agent and whether JS is enabled or not. It allows our users who aren't able to handle all the fancy UI stuff (we have some blind users) to use the site without losing any functionality, much like Gmail static. With the proper design it's really simple to extend and add content this way.
this article is talking about UIs, and more specifically user-interaction driven interfaces. it shouldn't be relevant to SEO at all, because it concerns what happens after the page loads, not the initial page load (which is what the search engine sees)
A nice solution here is to have a 'traditional' application with new page loads for links, so that search engines can follow them. You then enrich this application with Javascript, placing click handlers on the links to replace their default action (loading a new page) with a similar action performed through Javascript/AJAX.
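For example, something along these lines (a sketch; the `data-async` attribute and `loadInto` helper are made up):

    // Sketch: links work as normal page loads for crawlers and no-JS users,
    // but get upgraded to Ajax navigation for everyone else.
    document.addEventListener('click', function (event) {
      var link = event.target;
      if (link.tagName !== 'A' || !link.hasAttribute('data-async')) return;
      event.preventDefault();                  // cancel the normal page load
      loadInto('#content', link.href);         // fetch and render the same URL via Ajax
      history.pushState({}, '', link.href);    // keep the address bar in sync
    });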
Gmail optimistically updates the UI in many cases, for example when starring a message or marking read/unread. Not doing so for send was a very conscious decision due to the severity of the failure cases.
For me, when something loads too fast, I think something broke, because my brain has been wired to learn that actions through a web browser are generally not instantaneous and take a bit of time. Even if it's just a fraction of a second.
I really like this idea, but for some reason I think my brain would be more comfortable with an Ajax spinner appearing for 300ms rather than an instant page load. For instance, I built something recently which loaded images on a page via Ajax calls. It happened very quickly, 50ms maybe. The loading seemed way too fast, so I actually delayed the images by about 300ms. It seemed a much more comfortable delay, and a few of my non-developer friends agreed.
Is there a sweet spot, or am I crazy? Let's just ignore amazon and google's data for the sake of argument.
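A sketch of that kind of minimum-delay floor (`fetchImages` and `showImages` are made-up names):

    // Sketch: never swap content in faster than some minimum, so the change
    // reads as deliberate rather than broken.
    var MIN_DELAY = 300;   // ms; the "comfortable" floor

    function loadImages(url, showImages) {
      var started = Date.now();
      fetchImages(url, function (images) {     // hypothetical Ajax helper
        var elapsed = Date.now() - started;
        setTimeout(function () {
          showImages(images);                  // never sooner than MIN_DELAY after the click
        }, Math.max(0, MIN_DELAY - elapsed));
      });
    }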
I think something like this will be key. For me, as absurd as it sounds, the delay is the indicator. That indicator needs to be replaced if actions are to become instantaneous.
I liked this article a lot, so please don't think I'm being negative. The only thing that I sort of disagreed with was "we should optimize for the most likely scenario"
I disagree.
1) optimization is fragility
2) the extremes will inform the average
The "most likely scenario" is a visitor with a fast-enough Internet connection that a few hundred ms more won't matter.
So we should build for the extremes? Well, that's a little extreme (see what I did there?). But if you point to stats like "5-9% hit the back button...", that is not the most likely scenario...it's, well, 5% to 9% of the scenarios...
There's a documentary called Objectified that examined this with physical products, check it out. I think when developing and/or designing for speed, the "most likely" person is the least of your worries. The people still rocking slow dial-up connections are the ones who will be impacted...design and develop with them in mind.
One example from Objectified was a toothbrush. When they targeted extremes and made a handle that musclebound roidheads, people with MS, and old people could easily use (i.e. the extremes of human mobility), the "average" consumer was more than taken care of and the extremes were satisfied.
If you develop and design for the slow browsers and the wonky old Internet connections, or at least keep them in mind, the normal folks will be more than satisfied (ideally).
Sorry to be so picky it just caught my eye and I felt compelled to chime in whilst waiting for school to end...
For what it's worth, Gmail offers a Labs feature that enables asynchronous email sending (i.e. `Send` is clicked, and you go back immediately to the last location, while the email is sent in the background).
Whoop, I get to reuse a comment I made on an earlier article, almost verbatim:
"Now that the client is the MVC execution environment with the client-server interaction used mostly for asynchronous data replication, plus some extra invokable server-side behaviours, we can congratulate ourselves on having more-or-less reinvented Lotus Notes."
This is exactly the philosophy that Google Wave had, except they called it an "optimistic" UI - it always assumes that every action will succeed on the server side.
It solved all these problems mentioned and more - for example, it used the operational transform algorithm to merge your changes with those of other users on the same page and update the client state to reflect this asynchronously. It also could continue working without a network connection - it'd just keep queuing your requests, and when you plug in the network again, it'd just start working again, albeit possibly with a big backlog of changes to merge together.
These are the kinds of problems you might have to start thinking about if you want to go down this path. Remember that Google Wave died from its own complexity.
> Again, this is an exceptional event, so it's not worth investing too much developer time into.
I have to disagree here. Exceptional events are exceptionally important here, since so much progress is hidden from the user. It is absolutely critical to inform the user of what happened, so their expectations aren't broken, and to cleanly recover so the application is not in an incorrect state. I think this is the most important thing to invest developer time into in an application built in this way. Otherwise, you'll lose customer confidence due to unexpected behavior or even lost/corrupted data.
I totally disagree with updating the UI BEFORE the request gets back. It's wrong for so many reasons. They all boil down to the fact that server state is independent from the client state.
The speed argument also doesn't hold. If requests take too long to process, then you either have a problem with your API (doing something synchronously on the server side which should be done asynchronously, granularity problems, ...) or your server is freaking slow. At worst a request should take under 100ms of pure server time. Add latency and you have 300ms.
a sync problem on the server can't be worked around on the client side. You would end up introducing complexity in an unstable, unaffordable and insecure client.
also, actions like filling a page with data from a db do require the client to wait for the server to complete.
It's always the eternal "good for users" vs "good for programmers". I.e. when creating a new language, one must make a choice: should it be easier to read/use for the coder, or easier to implement from the developer's side?
And, if we look carefully at the past, it seems that it always starts with "easy for the coder first" -------> "easy for the user". For example, when the first examples of Ajax came out, it was really hacky and most programmers would never have believed what they'd see today.
So, I think that you are half right with the "introducing complexity in an unstable, unaffordable and insecure client." Maybe with the actual technology and framework, you are right. But I'm certain that in the following months/years, we'll go toward the road of a better UI.
And I still believe that it's not as hard as people think to make the UI update first and sync later. 99.9% of the time, the server returns "ok" or something we already knew. In the remaining 0.1%, we have to choose if we really want to make it to 100% - but in these rare cases, a hard refresh is perfectly fine.
I think that sync'ing client and server state is a concept that most people do understand. For instance, the Dropbox UI clearly shows sync'ing between client and server. Mail apps show spinners to indicate messages being sent. Asynchronous UIs and their subtle cues have been around for quite a while.
Stating that this is the "future of web UI" implies that most of us will have to develop duplicate logic on the client and server side, possibly in different languages (as the author actually does). While he mostly talks about validation, it seems to me that plain validation alone will not suffice - we would have to keep lots of business logic duplicated as well. And "duplicate" usually means "almost the same, but with a bunch of edge cases not behaving exactly the same way".
From reading through the various responses on this post, I believe one very feasible and worthwhile solution for asynchronous UIs is to maintain what has been referred to as a transaction log somewhere in the UI, visible to the user, containing all requests and their subsequent status/response messages as the relevant events fire. This assumes that any actionable items would trigger immediate changes to your UI in favor of the "success" case. It would be up to you whether you'd like to revert those changes if a failure occurs in an event response.
This would remove the dreaded "blocked UI" scenario because everything appears to happen instantaneously, however there would be failsafes in place when something goes wrong (the infrequent scenario).
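A minimal sketch of such a transaction log (`addLogRow` and `updateLogRow` are imagined DOM helpers):

    // Sketch: one visible log entry per request, updated when it settles.
    var nextEntry = 0;

    function trackedRequest(label, doRequest) {
      var entryId = ++nextEntry;
      addLogRow(entryId, label, 'pending');              // visible immediately
      doRequest(function (err) {
        updateLogRow(entryId, err ? 'failed: ' + err.message : 'done');
      });
    }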
To me it seems more a matter of order reversal in how we handle AJAX calls (assuming you aren't using an async/evented system).
I can, however, think of downsides. Take, for instance, a scenario where you may have a nested tree of actionable items that have prerequisites on one another's completion. You could chain the events, but you might end up with a queue unbeknownst to the end user. Worse, a failure might occur at the parent level which leads to failures for all subsequent calls. I myself am not sure what a good alternative to this might be in terms of non-blocking UIs.
This asynchronous sending of emails sounds nice but it reminds me of the times when I started using email some 15 years ago. I would sit at a Unix terminal, fire up Pine, write all my emails and hit send with no delay or blocking, go to sleep and hope that during the night some script actually succeeds in sending those emails.
I really appreciated this post. Everything Alex mentioned in his article I learned through trial by fire doing mobile development. Performance was critical and UI responsiveness was a must. It was then that it dawned on me that all the same techniques could be applied to a web application just like Alex mentioned. Most web developers seem to get stuck in the framework rut. All the tools and techniques are there to build something fast and responsive.
If there is one thing I can truly appreciate about what he is trying to do with spine it's the client id generation and request queueing. This has got to be the core of what makes good "AUI". Every developer dealing with remote requests should have this in their back pocket. 101 stuff.
... but really, if you're building a large JS app, it doesn't matter particularly which library you pick -- the main benefit is simply to get your state out of the DOM and into rich, reusable models that make it easier to reason about and manipulate.
I have some APIs I have to call that take up to 5 seconds to return and resist caching (hotel availability, for instance). Would those delays become even more jarring with an approach like this?
You pointed out a good example of where this wouldn't work. However, you can still make everything else asynchronously fast, and use a loader when it really needs to wait for 5 secs.
For instance, take Gmail. Some parts might be hard to use that way - for instance, chat, where you do need to receive the other person's answer before you can show it. However, adding labels, deleting a message, etc. can all be made asynchronous.
Totally agree with the premise that user actions should provide responses instantly, I was building those kinds of responsive UIs 2 years ago. But, I have a problem believing that the future of web applications is based on serializing all ajax requests and duplicating model validation on the client. Come on, this is 2011, this technique isn't new. Let's work on things that will really change the state of the art.
'Async UI' has its uses but, in the case of email, I don't think the benefits to users have much substance. Yes, perception is a critical design factor we must all deal with on a daily basis, but we shouldn't forget that a 'magic show' entertains at best and offers no real value to users. 'Magic' by another name is hoodwinking, and can easily induce confusion and anger when misapplied.
Great post however many things don't fit the pattern. For example, lacking precognition, your search app can't know what people will search for. There are many similar UI examples, where you can't do stuff until you get input from the user, leading to a fundamentally synchronous (from a user's perspective) transaction.
I couldn't agree more. For connected web based games this is a requirement. https://www.switchpoker.com/client makes use of asynch calls to give the appearance of an extremely responsive UI.
I did a bit of work towards this last year: the user experience is really nice, comparable to Silverlight or Flex. Only both Silverlight and Flex have a much nicer development experience at the cost of a plugin.
I prefer to put the worker queue on the server. It's not as snappy, but it's snappy enough. And queued up operations will not get lost if the browser is closed/crashes.
What are people using on the backend for apps like this? I was just starting an app with an approach like this (both for these reasons and to harmonize across web and mobile clients) and I was planning on using RoR given its first-class support for JSON and maturity. Thoughts?
Rails and Node are both good options for the backend. I found Alex's screencast on integrating Spine and Rails with the spine-rails gem to be a good introduction:
Well the website is mostly static; that'd be overkill. Maybe the author wanted to use another library to create the documentation automatically. But, you've got a point saying that if it's not your first choice for simple static pages, it's a bit scary to use it for huge production website. I.e. Django might be overkill for a simple static page, but it's still trivial to use it for that.