Avoiding Event Chains in Single Page Applications (code-experience.com)
130 points by dmnd on Aug 31, 2014 | 63 comments



I've read this 3 times and I still can't see what the advantage of this dispatcher is.

In the event model, events that deal with model A also have to be dealt with in models B, C, and D. Wouldn't this lead to an exponential explosion?

The fact that B has to explicitly wait for A in the code also seems prone to errors. If A changes to no longer deal with that event, will B block forever?

And this still builds an event chain, because B is blocking on A. I don't see how this is different from an event chain, except that the dependency is non-obvious (whereas a callback chain/future chain will at least show what data is being passed into B from A).

This is maybe OT as well, but this article could really do with some concrete examples in the explanations. "Model A" and "Model B" are not examples.


> If A changes to no longer deal with that event, will B block forever?

No, A would just no-op, and control will return to B. This is all written in the context of JavaScript, with a synchronous dispatcher and no threads which can block.
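
Roughly, with a Flux-style dispatcher, the semantics are something like this (a sketch; the store logic is made up for illustration):

    // register() returns a token; waitFor() runs the named callbacks
    // synchronously, right away, if they haven't run yet in this dispatch.
    var tokenA = appDispatcher.register(function (payload) {
      if (payload.type === 'user-clicked-x') {
        // if A stops caring about this event, this branch is simply
        // never taken -- the callback returns immediately (a no-op)
      }
    });

    appDispatcher.register(function (payload) {
      if (payload.type === 'user-clicked-x') {
        appDispatcher.waitFor([tokenA]); // returns once A's callback has run
        // ...now update B using A's fresh state...
      }
    });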


Isn't it simply a case of 'spaghetti code' vs 'explicit and clear code'?

The dispatcher takes a potential mess of calls between various components, centralizes them and makes the sequence of events and dependencies much easier to reason about.


It took me a while to come round to it, but this kind of madness is something that Ember's event/render loop and implicit property dependency graph make really, really easy to solve.

People knock Ember for its complexity but in reality it's simply a complete/coherent solution for the kind of problems people like this are iterating towards - and watching their code get more complex as they do.


Having built large Ember and Flux apps, I find Ember a big opaque mess.

Computed properties which depend upon other computed properties never seem to update quite right. The object system is tightly coupled to the template system. It's a monolithic framework that depends upon hidden magic to bind the pieces together.


> implicit property dependency graph

How similar is Ember's dependency tracking to Knockout's?

With Knockout I've had exactly the problem described in the article, where properties end up depending on each other in a chain, and it is hard to keep track of what actually happens when data changes across multiple view models. It gets particularly hard when the inevitable special cases appear, where really I want to do something slightly different halfway down the chain depending on what initiated the change, but all I have is a generic "property changed" event. How does Ember handle that?
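
To make the problem concrete, here is a contrived sketch of the kind of chain I mean (the fields are made up):

    // each computed depends on the previous one, so a single change
    // to price() ripples down the whole chain
    var price   = ko.observable(100);
    var tax     = ko.computed(function () { return price() * 0.2; });
    var total   = ko.computed(function () { return price() + tax(); });
    var display = ko.computed(function () { return '$' + total().toFixed(2); });

    // downstream, all I ever see is "something changed" -- never *why*
    display.subscribe(function (v) { console.log('re-render:', v); });
    price(120); // fires tax -> total -> display with no originating context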


It looks to me like the case where this is required is actually screaming out for a new event type, not fixed dependency order. B shouldn't be listening for the same event as A is if B depends on A getting something done. A should be firing a new event which B can listen for.

Is there some subtlety I've missed here?


The flaw with such a strategy is that it does not scale. (We used to do exactly what you described.) In a big "eventful" app, you will have way too many events to keep track of, and you will constantly be browsing through different modules to understand the chain.

Let's assume the user clicks somewhere, so an event "user-clicked-x" is fired that model A listens to.

So you know that model B needs to change as well, and you fire a new event type from model A: say "model-a-changed-because-of-user-clicked-x", or a more generic event, say "model-a-changed".

Both will cause headaches, because they impose quite a high cognitive load. With the generic one, you have the issue that sometimes you want A to change but not B, which leads to lots of conditionals (all of which you need to remember).

With the specific one, you have two events that mean almost the same thing, but are different. With every model C and D, you need to carefully evaluate which event you listen to.

In isolation, this is perfectly fine. But if you have 10+ of those things interacting with each other, it will be a big mess. You will have 30-40 different events that map out a hierarchical order that you need to track down through multiple modules every time you change something.

If you do not abstract the chain away, the complexity is simply too high. You'd have to keep roughly 15-20 links in your head, which is too much. With Flux or similar patterns, you just have to keep one pattern in your head.

In the end it's a simple story of abstraction. Your mind can only deal with a couple of different things. If you reach the threshold, you need to abstract. Flux shows one way to do this (and it is only useful if you've reached the need-for-abstraction threshold).

Having one or two short event chains is perfectly fine. If you have 5 different user interactions that have a complex ripple effect through the state of your app, you need to do something, or development speed slows.


This I can well believe, but I'm interested in this bit:

> you fire a new event type from model A, say "model-a-changed-because-of-user-clicked-x" or a more generic event, say "model-a-changed".

These two seem to be on opposite ends of a spectrum where I'd try to pick a middle point. I wouldn't want the semantics of "this is a reaction to a UI event" anywhere past the first event; it's way too detailed. I'd try to pick something like "model A's date field changed", or ideally something more meaningful like "the user updated their address".


This still introduces a new link, which will overload your brain if you have too many of those.

To give you some context:

In the app I'm developing now, we've got roughly 25 global user interactions (meaning an event that will be triggered with a DOM event). Most of these events affect more than one model. Quite a lot of them also trigger complex event chains.

Multiple models have complex dependencies that would form branched dependency chains with conditionals.

It's just impossible to have everything in your mind at all times, which you need to if you want to extend a chain without bugs.

Now you might not have this issue in your projects, because they don't require that kind of interactivity. If that's the case, don't bother; keep doing what you are doing.

But if you ever realize that things get pretty complex in one of your projects, then you know where to start :).


At Floobits we use "actions" in addition to model events for our Flux-like implementation. Views usually trigger actions; modifying models triggers events on the model. Views themselves do not usually trigger model updates.

So think about opening up a file in a web editor, that's an action. Many things might care about when a file is opened. The tab UI, maybe a log, other users connected to your session. Sometimes views correlate well to model updates, like a form, but many times they do not in which case a different event type can be useful.
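
Sketched out, it looks something like this (the names are illustrative, not our actual API):

    // a view fires an *action*, never a model update
    dispatcher.dispatch({ type: 'OPEN_FILE', path: 'src/app.js' });

    // several stores react independently to the same action
    dispatcher.register(function (action) {        // tab UI store
      if (action.type === 'OPEN_FILE') { /* add a tab */ }
    });
    dispatcher.register(function (action) {        // log store
      if (action.type === 'OPEN_FILE') { /* append a log entry */ }
    });

    // mutating a model later fires an ordinary model *event*
    buffer.on('change', function () { /* re-render the editor */ });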


This makes sense. I've thought for a while that separating commands and events was the right thing to do, even if they use the same distribution mechanism.


That this works automatically and ensures all events will be seen appropriately.

That is, now he just has to declare it in the interested party -- whereas in your case, the programmer will have to manually manage who gets what.


Well, right now, he has to explicitly listen for one event in two places, and annotate the order correctly.

With two separate events types, he would only have to listen for the events.


Nope, he doesn't have to annotate the order correctly. Except if you mean locally, in which case it's trivial.


I'm suggesting the difference is between:

    A: listen for X
    B: listen for X
    B: execute after A
and

    A: listen for X
    B: listen for Y


it's amazing how JavaScript is becoming the new Java.

everyone thinks that adding a ton of extra complexity in the guise of a simpler api will make it more maintainable.

in this example you now have 4 events under the hood and two event dispatchers. but it seems simpler because you only "see" two and one dispatcher.

while it may make things just a little bit simpler to maintain, you may enter debug hell when there are bugs in that added under-the-hood complexity... which will often be code you're unfamiliar with. that makes the article's main argument (helping debug edge-case bugs) kind of double-edged


writer of the article here:

I simply cannot follow the argument about "added complexity".

The use case defines the minimum number of links you have to make, in this case 4.

Now you have two options: You just roll with it (we used to do that), or you try to abstract away a few steps (we do this now with the dispatcher).

The dispatcher here is roughly 100 lines of fairly simple, unit-testable code, so I really cannot see where the elusive bugs would come from. 100 lines that you will easily save if your app is complex enough, by the way.
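
To give a feel for it, here is a stripped-down sketch (not our production code, and most error handling is omitted):

    function Dispatcher() {
      this.callbacks = {};  // token -> callback
      this.pending = {};    // token -> started during current dispatch
      this.handled = {};    // token -> finished during current dispatch
      this.lastToken = 0;
    }

    Dispatcher.prototype.register = function (callback) {
      var token = 'cb_' + (++this.lastToken);
      this.callbacks[token] = callback;
      return token;
    };

    Dispatcher.prototype.dispatch = function (payload) {
      this.pending = {};
      this.handled = {};
      this.payload = payload;
      for (var token in this.callbacks) {
        if (!this.pending[token]) this.invoke(token);
      }
    };

    // run the given callbacks now if they haven't run in this dispatch
    Dispatcher.prototype.waitFor = function (tokens) {
      for (var i = 0; i < tokens.length; i++) {
        if (this.handled[tokens[i]]) continue; // already done: no-op
        if (this.pending[tokens[i]]) throw new Error('circular waitFor');
        this.invoke(tokens[i]);
      }
    };

    Dispatcher.prototype.invoke = function (token) {
      this.pending[token] = true;
      this.callbacks[token](this.payload);
      this.handled[token] = true;
    };

Model B's callback just calls waitFor([tokenA]) before reading A's state, and the dispatcher guarantees the ordering within a single synchronous dispatch.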

With the dispatcher, you can abstract away a couple of steps consistently across your app. Now, I fully agree that you need a certain threshold of complexity to make it worthwhile.

But you cannot avoid the original complexity of your use case.

There is no alternative to decent abstraction if you reach a certain level of complexity, because it is given externally.

Other concepts like two-way databinding do exactly the same.


It seems all you've done is inverted and hidden the sequence dependency in a series of waitFor chains instead of event chains, which in some ways is worse; compare

    A:
     update;
     notify B;
    B:
     update;
     notify C;
    C:
     update;
     notify D;
    D:
     update;
with

    D:
     wait for C;
     update;
    C:
     wait for B;
     update;
    B:
     wait for A;
     update;
    A:
     update;
The former is a clear sequential path; the latter builds up and then unwinds in a stack-like fashion. It's like the difference between non-tail calls and tail calls.

> It's only a technical nuisance that one model needs to wait for the other to update.

I would certainly not view a sequence dependency that's critical to correct operation as a "technical nuisance" to be hidden away; it's an important fact.


yep, that's all it does. It just inverted the sequence dependency. You call it "worse". But I argue that it is a much better mental model in the specific use case of updating state given an external disruption from the user.

You see, what you call a clear sequential path is what I call evil :). We had tons of those in our app, and it was so hard to keep track of them; every time I hadn't looked at one for a while, I had to invest 10 minutes to track down the event chain.

There is one simple reason for this: if you have a sequential path, you lose the context after the first link. A updates because of "user-clicked-x", and B updates because A updates. But if you're working on B, you ask yourself: why the heck did A update in the first place?

You might argue that you are always aware of this, but we found that when our app grew more complex, we actually didn't always know.

I'm sure you could find a solution where you keep the sequence and don't lose context, but then you have to state the dependency in the wrong place: In model A you have to say that model B should update. But that's all wrong because A shouldn't care about models that depend on it. The models that depend on A should care!

So by inverting the whole sequence dependency, you gain the following:

1. in every model you are aware of the context, i.e. the original event that triggered the state change in the first place

2. you are also aware of what other models this model depends on (it is explicitly stated, not hidden like you said).

This means that you can work on model B and extend it without ever looking at other models, while knowing exactly the origin of your state change. In my opinion, this is decoupling at its best. It also helps unit testing greatly.
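
For instance, B can be tested in complete isolation. A sketch (assuming a Mocha-style runner; createModelA/createModelB are hypothetical factories):

    it('updates B after A on user-clicked-x', function () {
      var dispatcher = new Dispatcher(); // a small real dispatcher, no mocks
      var a = createModelA(dispatcher);
      var b = createModelB(dispatcher);  // B registers a waitFor on A's token

      dispatcher.dispatch({ type: 'user-clicked-x' });

      // by the time dispatch() returns, A has updated and B has read A's
      // new state -- the order is guaranteed, so the test is deterministic
      assert.equal(b.derivedValue, a.value + 1);
    });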

While I agree that inverting the dependency is a drawback, I've gained things that, at least in my opinion, heavily outweigh that drawback.


> A updates because "user-clicked-x", and B updates because A updates. But if you're working on B, you ask yourself why the heck did A update in the first place?

Stack traces are a thing. Chrome even has asynchronous stack-traces.

A updates because of "user-clicked-x", but it might also update because of "user-clicked-y". Or because of "user-clicked-z". So now I have 3 things to add to A and to B, because B needs to refresh every time A does. And if you forget about one of these, then suddenly B is out of sync.

A "solution" to this problem already exists in terms of event listeners. If B listens to any change on A, then A doesn't have to deal with B, but B updates properly and doesn't have to deal with all event types A has to deal with.


>Stack traces are a thing. Chrome even has asynchronous stack-traces.

Now this argument line has dissolved into plain silliness.

He showed how he reduced the cognitive load and made the dependencies more explicit and locally evident IN THE CODE, and you tell him to use "stack traces" for that?


From what I understand, this methodology increases code duplication, which leads to higher risks of bugs later on.

I'm trying to understand whether this is truly the case, or if I'm missing something.


Well, I remember all these things just too well from our Backbone app (not Backbone's fault, but ours!), and I would never go back there again.

> So now I have 3 things to add to A and to B, because B needs to refresh every time A does

We used to do this. A lot. It was a big mess. Because in reality (for us), it is never that clear cut. You almost never have dependencies so complete that tight coupling is the right way to model them. There are always exceptions that you need to be aware of.

We found that decoupling reduces cognitive load greatly, and changing the relationships can be done an order of magnitude quicker, because "listen to, except if this and that happens" doesn't scale. There are just too many this-and-thats if your app is big.

I know that the status quo method always looks easier than something new, because the new is unfamiliar. But restriction of communication flow is, in my opinion, the ONLY way to keep order in a big, complex app.

Just consider this: I've tried both ways extensively, and I favor the Flux way. I might be an idiot, but there is a possibility that I'm not. This should encourage people to really try this once to avoid status quo bias.


Two-way databinding does not always do the same thing. Angular's digest cycle is frame-based, where everything is recalculated until you hit a fixed point. This is more analogous to the event dispatcher (A is updated, then on the next pass B will be) than to your dispatcher example.
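
Very roughly, a digest is a loop like this (a caricature of the idea, not Angular's actual implementation):

    // re-run all watchers until nothing changes (a fixed point);
    // only then does the view reflect the final state
    function digest(watchers) {
      var dirty;
      do {
        dirty = false;
        watchers.forEach(function (w) {
          var value = w.get();
          if (value !== w.last) {
            w.last = value;
            w.onChange(value);
            dirty = true; // this change may invalidate earlier watchers
          }
        });
      } while (dirty);
    }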


The thing is that the hidden complexity will not appear, as rendering and state management are now separate. Both can be unit tested in isolation. Components are modular and composable.


Mmm... as I get older and (hopefully) more experienced, I have become doubtful of this point. Apparent simplicity often comes from ignoring the subtle difficulties in a problem. Adding a layer is often the disagreeable but reasonable thing to do.


Isn't this simply trading one level of complexity for another?

Maybe the OP is just giving a bad example, or maybe I'm missing something - but one thing I really don't like in his example is that model B's 'waitFor' call needs to know about model A, and that model A has an appDispatch. This kind of tightly coupled code isn't going to do anything useful for your codebase if you have a large application. It might make debugging easier right now because things are more explicit, but it will make you less flexible in the long term.

It also doesn't solve the problem that you still need to understand that A has to finish before B can start - which I suppose most people would simply solve with an additional event. The OP does mention this in the article, but I'm not convinced that the trade-off is worth it here. If I'm to explicitly state dependencies within the event handling code, then any time I want to change how things handle events, I've got to remember exactly which of my dependents reference me. This isn't helpful - and in fact it may be more painful than the current situation.

I guess my main problem here is that the benefits this brings just aren't enough (in my view) to offset the potential pain. To me, it kinda just feels like cutting off your nose to spite your face. You might make one area a bit easier to debug, but you lose out in other areas too.

On the surface the code may look more sensible and easier to think about, but in practicality I'm not convinced that this won't introduce additional pain later on.


A model listening to other models? Yuck, that's just wrong. In an MVC application, only controllers should update models. Just because Backbone lets you listen to model events from anywhere doesn't mean it's a good idea.

IMHO application events should simply bubble up through the DOM, and get caught by appropriate controllers (the Backbone.View class in Backbone parlance) that then modify the appropriate models. If need be, it's the top-level controller for the app, e.g.:

    // listen for a bubbled custom event; the payload rides on e.detail
    document.body.addEventListener('app:action', function (e) {
      modelA.doSomething(e.detail);
      modelB.doSomethingElse(e.detail);
    });

I'd take a simple approach like that over an event bus/dispatcher/coolest-new-pattern-since-sliced-bread any day - and sleep at night knowing that if something happens to me, any employee will be able to parse and debug my code.

PS And no, I don't use Backbone, but my own 500 LOC MVC framework that's got zero dependencies and is about 100x faster.


Backbone is considered a high-quality javascript project, written by a very talented and experienced developer.

Could you explain what you do differently in your MVC framework to achieve those gains?


Easy - he doesn't support anybody else's requirements, app designs, or coding practices.


OT, but I'm looking for a comparison between Angular and React/Flux, preferably with a real case study. Working with templates that update themselves without having to bother with anything else just seems so much easier...


I do not have much experience with Angular; however, I feel the arguments over different frameworks depend a lot on the specific nature of your app.

two-way databinding is perfectly fine and really helpful for a lot of apps. However, it will screw you for certain use cases.

In my opinion, it really boils down to how much global interaction you have in your app. If you simply render a lot of models that can only be changed through the view, then I don't really see a reason for a Flux-like architecture. Two-way databinding is perfectly fine there. (React vs whatever then boils down to performance vs toolset.)

If you do a lot of graphical stuff that allows for heavy interaction, filtering and other kinds of things, I believe that two-way databinding will be your downfall. You need to clearly and explicitly map out how data flows through your app. That is where an architecture like Flux shines.

So ask yourself this question:

"How often can a user do something in your app, and a lot of Views need to rerender?"

If the answer is "often", then you should consider something like Flux. If not, Angular might be a better fit, especially if you already know it.


My comparison is here: http://noelwelsh.com/programming/2014/08/17/angularjs-vs-rea...

It isn't a 100% fit for what you're looking for -- there is no case study -- but it might be useful to you.


I have had lots of problems trying to understand and use Angular. Most people don't. The TodoMVC site has comparisons for a simple app:

http://todomvc.com/

The main problem is that a TODO app is too simple. But if the app were more complicated it would be much harder to prepare and read the examples.


If ModelB needs to wait for ModelA, and it already is aware of ModelA (which your code requires) - why not make ModelA into an eventDispatcher, have ModelB listen to ModelA, and have ModelA dispatch an event when it's done (or write a small onX function in ModelA)?
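
Something like this (a sketch, assuming an EventEmitter-style mixin on the models; the method names are made up):

    // A announces when it's done; B subscribes directly to A
    modelA.handleUserClick = function (payload) {
      this.recompute(payload);
      this.emit('recomputed', this); // tell dependents A is up to date
    };

    modelB.follow = function (modelA) {
      modelA.on('recomputed', function (a) {
        modelB.updateFrom(a); // no global dispatcher, no waitFor needed
      });
    };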

It's also clearer, easier to implement, already implemented in tons of frameworks and easier to unit test (since you no longer need to mock the appDispatcher).

You should probably try to avoid using a global eventDispatcher (as any other global), whether you can specify order of execution or not.


Coming from Backbone.js, this post was extremely helpful to me. Thank you. I do recognize the exploding complexity of event chains.


You may also be interested in an open source library I wrote called EventAPI https://github.com/benaston/event-api


Does anyone know how this dispatcher is different from WPF dispatcher? Maybe that's the way to sell it.


IMHO WPF doesn't have a good enough reputation to be used to market other products.


not for marketing per se, but to help people understand what it is and why it's good. A lot of people know WPF, and you could use the analogy.


"But when somebody uses his mouse to click somewhere and expects something to change, things might get tough. If you're lucky, it's just a local action, like a dropdown menu that needs to open. If you're unlucky, it's some complex filter action on your data heavy app that causes three ajax requests and changes to seven models."

Reading this, I start to wonder whether isolated JavaScript components and a jQuery pub/sub system publishing the 'newclick' event from the server wouldn't be simpler and more modular...


So Flux is poor man's BizTalk.


Yes -- but only if you half-understood Flux, didn't read the article, and just wanted to make a snarky comment.


Now that's a reassuring yes :)

What I meant is it seems (I've never used or seen it before) to be closer to a message broker than a bus - http://www.udidahan.com/2011/03/24/bus-and-broker-pubsub-dif...


Complex.


How? Everything about this seems incredibly straightforward.


Call me old-fashioned, but function calls are pretty nice events.

Rather than heaps of code and weird concepts, you can just fire the event into the A and B models by calling a method on each. Simple!

  A.getFilteredDays(payload);
  B.getFilteredDays(payload);
That's only two lines of code compared to 30 or so. This scales up because with more models, you have less code.

In practice these are easier to debug, because the debugger supports function calls with stack traces, and easier to read, because people understand function calls. They are also faster.

This is why I think that long article is complex. Remember that OO uses messages and objects. Method calls are events. Also remember that there is an M in Document Object Model (DOM). So you see we already have a Model, and events?

Please also consider how much less code there is in the jQuery version of the todomvc app compared to the react one. http://todomvc.com/


Are you saying there is a chance we go back to jQuery as the superior solution? :)


I'm with you, bub.

The people arguing against you don't know what they're talking about. They're stuck in a paradigm where "de-coupled event-driven architecture" is a holy goal and can't see their feet on the ground anymore.


Hmm, this answer seems to have missed 7-8 years of the web's development, and shows a preference for convoluted, messy codebases with huge cognitive overload...


Explain how smaller, simpler and faster code is worse.


Smaller: Not really. You just avoid some K of helper code already built (by the React team). Your code will end up being bigger than what you'd have written on top of React -- because it will also have to reimplement and handle all the common cases (either poorly, or time-consumingly).

Faster: could be, could be not. Without profiling this is an empty statement. Besides, "premature optimization is the root of all evil".

Simpler: Nope. You have to manually track all the interactions, and you strongly couple things together with your "OO and functions" idea. It might be simpler to just churn out code initially (instead of understanding an architecture), but it soon becomes an ad-hoc mess.

Let me put it this way: it's not like you have discovered a novel way of building stuff compared to these coders that overcomplicate things. What you propose is what these engineers have already tried, used for a decade or so, and found unscalable and wanting.

It can work for small, uncomplicated pages, but it's not a solution to modern single-page web apps.

You might think that you are proposing a clear and simple way of coding as opposed to something like overengineered J2EE patterns mess.

But what you describe is more like the "Why use procedural code etc, GOTOs for flow are simpler and faster", or "why use functional programming, imperative is simpler and faster".


The code I mentioned is definitely smaller than in the article.

It is faster, because it is just a function call. It doesn't create any event objects, or have to go through six layers of other function calls.

It's simpler because there is less code, and there are fewer concepts.

No, it's not novel or new. Messages as function calls, and MVC comes from 70s smalltalk.

It's not strongly coupled, for a few reasons. Firstly, in JavaScript it is simple to dynamically reassign objects and functions. The receiving object does not care where the call came from; only the sending object knows where it is sending to. And from coupling theory in software engineering, we know that if there are a small number of outputs, the strength of the coupling is still low. Luckily, in many apps there are often only a few places where you want to send a message. http://en.wikipedia.org/wiki/Coupling_%28computer_programmin...

Yes, if there are lots of outputs it may be useful to reduce the coupling. In my experience, even with large apps (100 person teams), this is often not the case.

It is often useful to know what your app is doing by looking at where the events are going. Just using function calls makes this very easy, both in source code and in the debugger. So this also needs to be weighed up in the decision of how you are passing events around.


>The code I mentioned is definitely smaller than in the article.

Yeah, so this is a first sign of misreading the dangers.

Of course the code you mentioned is "smaller than in the article". The code in the article is a toy example to illustrate a specific point.

It's when you start to build a full app, with all the (necessary for the requirements) complexity, that it's gonna get much more unwieldy than the code one would write on top of React.

>It is faster, because it is just a function call. It doesn't create any event objects, or have to go through six layers of other function calls.

Again: premature optimization. After you build your colossal temple of function calls in a NON TOY example, it will either be slow (because you'll have re-introduced abstractions and layers yourself) or it will be a complex spaghetti of cross-calls.

>It's simpler because there is less code, and there are fewer concepts.

Fewer concepts != simpler. Assembly has fewer concepts too.

And it's only "less code" because you're comparing a "toy example + framework" with a "toy example without framework". It's after that level that it gets hairy.

>No, it's not novel or new. Messages as function calls, and MVC comes from 70s smalltalk.

Yeah, and it's what all these coders going to React etc have tried already for a decade and found out that it doesn't cut it with modern apps, because the environment they run in is nothing like a Smalltalk MVC application.


Oh boy, jQuery soup. Sounds fun to unit test.


Unit testing has been done with jQuery since it came out in 2006. Rather than one giant monolithic app, jQuery promotes putting things into plugins or separate libraries.

What stops you from unit testing function calls? Nothing.


It also promotes procedural code for modifying the DOM, not declarative. This makes unit testing much harder imo, since you're in the realm of either working against a real DOM, or mocking everything.


In the example from the article we are talking about, it would be easy to mock out the one function call for modules A and B. There's no DOM manipulation in the article.

Keeping DOM-modifying code separate is good practice in either case. Then you don't need to involve the DOM in much of your code. Doing end-to-end testing is also completely OK.

It's worth repeating, so I'll say it again... It's better to keep DOM manipulation code separate. Then you don't need to mock or simulate anything, except where needed. You do need to test DOM-manipulating code somehow.
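
Concretely, the split might look like this (illustrative):

    // pure logic: trivially unit-testable, no DOM involved
    function filterDays(days, query) {
      return days.filter(function (d) {
        return d.name.indexOf(query) !== -1;
      });
    }

    // thin DOM layer: kept separate, exercised end to end (jQuery here)
    function renderDays(days, $list) {
      $list.empty();
      days.forEach(function (d) {
        $list.append($('<li>').text(d.name));
      });
    }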

AngularJS also uses the DOM in their tests: https://github.com/angular/angular.js/blob/master/test/ngAni... Here's a jquery-ui test for comparison: https://github.com/jquery/jquery-ui/blob/master/tests/unit/s...


With React there really is no notion of manipulating DOM, so there is no need to test it.


Of course it eventually touches the DOM. It also has a virtual DOM.


Right, but it's declarative. You only care about how the DOM looks right now.


disappointing...thought you meant F.lux



