In my experience, observing individual properties on model objects just isn't very useful in real world apps. Cocoa has long offered "key-value observing" (KVO) for this purpose, but other mechanisms like notifications tend to be more practical, usually because listeners need some context in order to do the best thing in response to a change, and an individual property change won't carry any of that context.
Here's a practical example... Imagine an editor where an on-screen element is deleted. The controller might do a bunch of stuff such as:
If there's a listener on "container", it will get notified before the selection has been cleared. That's probably not the intention.
If we know that's going to happen, we could move the selection manipulation before the container access... But then we have fragile code that depends on implicit behavior in a model class. It's always better to send an explicit event.
I now subscribe to the React-style "wait, why are you listening to anything?" mindset. However, if you're going to go down the mutable path, I think KVO was particularly well designed.
The main reason was that it made composition pretty sane and created clear places to handle changes. With KVO, you implement your KVO-compliant mutation methods, and all changes then get funneled through them (most notably the container ones). Compare the mess that is NSView (addSubview:/insertSubview:positioned:relativeTo:/removeFromSuperview/etc.) to the post-KVO NSTreeNode. NSTreeNode simply exposes .mutableChildNodes, which you treat as a normal array (addObject:/etc.), and after adding a child, that child's parentNode is set to the parent automatically. This was great for lots of reasons:

1. No need to re-create API; you just use NSMutableArray's existing API. So [someNode.mutableChildNodes removeObjectsWithPredicate:somePredicate] handles all the parentNode relationships for free. Compare this to pre-KVO methods, where every time you wanted to take advantage of a neat NSMutableArray feature you'd have to re-expose it yourself (like removeSubviewsWithPredicate:), or wind up having to copy out the array, mutate it, then re-assign with setSubviews:.

2. Less room for error, since all your cleanup code just happens in one funnel: removeObjectFrom<Key>AtIndex:.
Again, I believe this was a great local maximum for this style of programming, one that was never fully explored, since the move toward more KVO-ness was eclipsed by the attention shifting to iOS. I think most of the problems are better solved through React's top-down approach.
I like the React model, but it does make one key assumption: that DOM mutation is so slow that re-rendering and diffing the entire virtual DOM for every update is basically free in comparison. I like the simplicity of React's approach, but this assumption makes me a little uncomfortable.
React's assumption isn't actually true. DOM manipulation is fast: all modern browsers use a dirty-bit system, so modifying the DOM is basically a few pointer swaps. Layout, however, is slow, and dirtying the DOM forces at least one layout as soon as control returns from JS. If you intersperse measure and modify calls, it forces even more.
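To make the measure/modify point concrete, here's a minimal sketch (assuming a page with some hypothetical `.item` elements) of the interleaving that forces repeated synchronous layouts, and the read-then-write batching that avoids it:

```js
var items = Array.prototype.slice.call(document.querySelectorAll('.item')); // hypothetical elements

// Interleaved: each offsetHeight read after a style write forces the browser
// to recompute layout synchronously, once per iteration.
items.forEach(function (el) {
  var h = el.offsetHeight;            // read (may force layout)
  el.style.height = (h * 2) + 'px';   // write (dirties layout again)
});

// Batched: do all the reads first, then all the writes, so the loop forces
// at most one layout.
var heights = items.map(function (el) { return el.offsetHeight; });
items.forEach(function (el, i) {
  el.style.height = (heights[i] * 2) + 'px';
});
```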
However, what is true is that:
1. Both JS and the DOM are fast enough that you can lay out most reasonable pages within the threshold of visual perception (~150-200ms).
2. If you need to lay out the page during an animation frame, you're hosed anyway. You have a frame budget of 16.67ms, and layout will take 100ms+ on mobile.
And so the practical, qualitative conclusion is the same: you can use React for your basic page transformations and rendering, but you need to use CSS transforms and offload the work to the GPU if you want to have a prayer of making animations work on mobile. React also protects you from completely fucking up performance (which is trivially easy with jQuery), so many real-world pages from inexperienced webdevs see a very significant speedup.
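As a rough illustration of the transform point (a sketch, assuming an element with the hypothetical id "box"): animating transform or opacity can usually be composited on the GPU without touching layout each frame, whereas animating a property like left or height dirties layout every frame.

```js
var box = document.getElementById('box'); // hypothetical element
var x = 0;

function step() {
  x += 4;
  // transform changes are typically handled by the compositor,
  // so no layout is forced per frame
  box.style.transform = 'translateX(' + x + 'px)';
  if (x < 300) {
    requestAnimationFrame(step);
  }
}

requestAnimationFrame(step);
```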
"diffing the entire virtual DOM for every update is basically free in comparison"
If you use something like Immutable.js you can avoid a lot (most? all?) of the diffing since comparing trees of objects becomes a simple object reference comparison. This is common in the Om world, and becoming more popular in JS too.
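For example (a sketch in the createClass style of the period, assuming React and Immutable.js are loaded and that the hypothetical `row` prop is an Immutable.js Map), the check collapses to a reference comparison:

```js
var Row = React.createClass({
  shouldComponentUpdate: function (nextProps) {
    // Immutable data means any change produced a new object,
    // so reference inequality is the only test needed -- no deep diff.
    return nextProps.row !== this.props.row;
  },
  render: function () {
    return React.createElement('li', null, this.props.row.get('label'));
  }
});
```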
If you have a complicated, deeply interconnected model, updating immutable data structures properly doesn't seem much easier than properly implementing shouldComponentUpdate().
You could use a lens to make the programming easier, and the logarithmic depth property of DOM-like trees makes it likely that allocation will also not be a problem.
Is it not easy because the syntax is cumbersome (e.g. chained calls to setIn([...]) in Immutable.js), or because it is fundamentally difficult? The former is specific to Immutable.js and many other JavaScript libraries; other languages have solved that problem rather elegantly (cursors, lenses). If it's the latter, then I doubt shouldComponentUpdate will be much less complex than the state update function (in terms of complexity, lines of code, ease of understanding, etc.).
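For reference, the Immutable.js syntax in question looks something like this (a small sketch with made-up data; the path strings are exactly where typos can hide):

```js
var Immutable = require('immutable');

var state = Immutable.fromJS({
  editor: { selection: { start: 0, end: 0 } }
});

// Deep update via a string path -- concise, but a typo in 'selection'
// only shows up at runtime, and renames aren't caught by refactoring tools.
var next = state.setIn(['editor', 'selection', 'end'], 42);

console.log(state.getIn(['editor', 'selection', 'end'])); // 0 -- original untouched
console.log(next.getIn(['editor', 'selection', 'end']));  // 42
```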
As an anecdotal data point, that assumption survived approximately no time at all during a recent exercise I was involved in, evaluating how to build a new and reasonably large web app. With similar scepticism but trying to keep an open mind, we did an experiment allowing the entire DOM to rerender from a top-level React component when the underlying data model changed. It performed pretty much as badly as everyone thought it obviously would.
We then switched to exploring more standard approaches like having individual components monitor the relevant model events and manage their own state, using React's rendering model at a much finer granularity. We haven't finished our investigations yet and there are some other concerns about React's performance under certain conditions, but so far, the finer-grained approach seems more promising and is our most likely way forward for this particular project.
Was your data mutable or immutable? The former requires you to write shouldComponentUpdate logic that isn't just old == new. If you didn't write any of that logic and didn't use immutable data, then re-rendering everything from the top down on every single change is of course going to perform terribly.
If you want to be more adventurous, you could look at alternative virtual DOM implementations: http://vdom-benchmark.github.io/vdom-benchmark/. What you'll find is that React is actually the slowest one out there. For instance, on dbmonster InfernoJS sees 50-100% more fps, and on my machine ~20x better benchmark performance. For the next most popular implementations, look at virtual-dom, which is leveraged in a variety of frameworks like https://github.com/ashaffer/vdux, or Google's Incremental DOM, https://github.com/google/incremental-dom.
We've also found that some of the other virtual DOM implementations that are popping up can be somewhat faster than React's, though I confess I can't remember exactly which ones as I write this.
But I don't think that really matters. Fundamentally, once we have a moderately complicated data model such as configuring a few thousand entities of perhaps 50-100 different types with assorted properties each and assorted constraints between them, even a relatively passive scan over the model to rerender the UI from the top following each transaction on the data model does not appear to be a viable architecture today.
Fortunately, that architecture is also unnecessary, as a modest amount of finer-grained integration work presents a much more realistic alternative. The declarative style of specifying a UI still has considerable potential as a way of composing manageable components; we just need a scalable way of decomposing the overall design to match, and for now it seems that means smaller-scale components need to be aware of their own concerns in some cases.
> we did an experiment allowing the entire DOM to rerender from a top-level React component when the underlying data model changed.
That's really not how you'd write a large idiomatic React application.
Conceptually, you should be running render() methods on the actual items which have changed, and a very small number of parent components with little-to-no DOM elements. You should also be executing a relatively small number of shouldComponentUpdate() methods which should be extremely fast to execute.
If you are doing that, then where exactly was the performance bottleneck, and how was your alternative approach avoiding it? I mean, sure, if there's simply no possible way for you to figure out what to re-render cheaply, then your app will have poor performance. But that doesn't seem to be true from your other comments.
In fact, it sounds like your second approach might be closer to a normal idiomatic React app than your first implementation. Incidentally, are you using some sort of flux architecture?
> That's really not how you'd write a large idiomatic React application.
That's a bit No True Scotsman for me. Just in the replies to my previous comment, there are several quite clear yet apparently contradictory views expressed about how these things are supposed to work. I doubt that anyone, probably including the React developers themselves, yet has enough experience to sensibly determine what is and isn't idiomatic or best practice here. In any case, I'm far more interested in effective results than dogma, which is why we've been doing practical investigations into what really happens when the tools are used in different ways.
> Conceptually, you should be running render() methods on the actual items which have changed, and a very small number of parent components with little-to-no DOM elements. You should also be executing a relatively small number of shouldComponentUpdate() methods which should be extremely fast to execute.
And this is where the assumptions started breaking down in our experiment.
For one thing, we have a somewhat complicated data model and a somewhat complicated UI to present it. Managing the relevant shouldComponentUpdate mechanisms was getting tedious even in our short-term experiment. If you're rerendering top-down in a heavily nested UI, it seems you need to provide shouldComponentUpdate in quite a lot of places just to deal with the parent-child relationships with acceptable efficiency, and each of those implementations is a potential source of errors because now you have multiple sources of truth.
Also, for things that aren't extremely fast to execute because they rely on derived data, you wind up with questions of what intermediate results to cache and where. Caching intermediate results goes against the kind of approach much of the React community seems to advocate, hence props in getInitialState being an antipattern and all that.
Time will tell, but once the abstraction started leaking in these kinds of ways, it wasn't clear to us that trying to follow the pure, declarative approach that a lot of React advocacy promotes was really better than other strategies.
> If you are doing that, then where exactly was the performance bottleneck, and how was your alternative approach avoiding it?
Bottlenecks were as above, among others.
Avoiding them comes down to the simple principle that if each component attaches to listen to events in a modular way and sets its own state accordingly, then the default is essentially not to rerender anything and has zero overhead. As soon as you start making active decisions about what to rerender, you immediately have the overhead of those decisions to worry about, and sometimes that overhead is significant if you're, say, responding to every character typed in a text box.
If you respond at a more local level, you can choose the granularity of responses to keep the overheads more controlled, which it seems so far is effectively getting the best of both worlds.
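As a sketch of that finer-grained style (the model object, its on()/getSeries() methods, and the Chart component are all hypothetical), each component subscribes to just the events it cares about and only re-renders itself:

```js
var ChartTile = React.createClass({
  getInitialState: function () {
    return { series: this.props.model.getSeries(this.props.id) };
  },
  componentDidMount: function () {
    var self = this;
    // Subscribe only to changes affecting this tile; on() is assumed to
    // return an unsubscribe function.
    this.unsubscribe = this.props.model.on('series:' + this.props.id, function (series) {
      self.setState({ series: series }); // only this component re-renders
    });
  },
  componentWillUnmount: function () {
    this.unsubscribe();
  },
  render: function () {
    return React.createElement(Chart, { data: this.state.series });
  }
});
```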
> In fact, it sounds like your second approach might be closer to a normal idiomatic React app than your first implementation. Incidentally, are you using some sort of flux architecture?
In the sense that user interactions send meaningful messages to an internal component, that component is responsible for updating some stored data accordingly, and the internal component then signals changes that other parts of the UI can respond to, yes, it's somewhat like Flux in overall architecture.
It's always been clear that React alone isn't enough to write a full application; you need to combine it with some other pieces. The official recommendation from Facebook is to use flux, and the most popular flux implementation (redux) is quite good, but you can also use Om-style approaches (Omniscient, Baobab, etc.), or wire it up to legacy Backbone models, or whatever. But you need something; you can't write an MVC app with only the V and expect good results.
You guys tried to write a large "pure" React application, and it failed in all the ways that large React applications always fail. And you learned from it, and hacked together a kinda-sorta-fluxish solution, and it's working for you, and that's great.
...but if you want to take another look at it someday, I'd highly suggest checking out redux. It's very slick, and it was explicitly designed to solve the exact issues you ran into.
(In particular, a redux app is built of a small number of smart components which respond to a specific subset of events, each managing a tree of "dumb" components, which gives you the granularity you were looking for. And then you can use the PureRenderMixin and immutable data, which gives you, for free, the very fast shouldComponentUpdate methods you were looking for; no need to ever write your own since, as you found, that way lies madness. The docs even cover how to properly cache derived data, another issue you noted running into. And so on.)
Sorry, but I think you have completely misunderstood me. We're not avoiding the popular Flux-style tools like Redux because we don't understand the need for a model behind the rendering components. We're avoiding them because they are nowhere near powerful enough out of the box for our data modelling and constraint enforcement needs. We're not kinda-sorta-fluxish, Flux is kinda-sorta where we were about five years ago on the Web side and probably closer to a decade ago if you include applications written in languages other than JS for native clients and the like. And we don't want to write a full application in React; we already have 90% of the application and we're just experimenting with whether React might offer a cleaner architecture for the easy bit in our next version.
Ah, that is a very different situation than I believed you were discussing. In which case... sure, that sounds logical. If you already have a good architecture and a very large, complex app, of course it's going to work better to keep the architecture you have than to switch to the sort of naive React design that you'd normally find in a TodoMVC example project. :)
(...of course, that doesn't really tell you much about React; seems like all you really learned is that badly architected apps perform badly. Not sure that one needed much testing. And of course, your initial comment seemed to be making a very different claim.)
I'm sorry if my earlier posts weren't clear. I mentioned the kind of scale we're working on in another post -- the previous version is under 100K lines of code so still not really that large or complex as UIs go. But yes, it's a little more challenging than TodoMVC. :-)
In any case, our current view is that there is a useful middle ground where React may work well for us. The app in question predates just about all of the modern JS UI libraries/frameworks, at least in anything close to their current forms, so much of the rendering and event management code right at the front end is built with jQuery, early toolkits, and in many cases a lot of home-grown code. All of this still works pretty well, but quite a large proportion of it is functionality that every UI library in the universe provides today, so there's little reason to maintain another version any more. The rest of the app does have a good architecture that has stood the test of time, but for the next big update we're trying to solve much the same issues as everyone else with managing UI code as the app grows.
React is of interest to us for several reasons, from being a useful templating mechanism, through providing a way of composing components that are relatively clean and modular in design, to its efficient DOM updates and declarative nature. These too are presumably the features that make it attractive to many of its users. But it is also of interest for this project precisely because its scope is tightly confined to just the rendering parts of the system and it doesn't try to lock the whole app into its own framework.
We just confirmed early that some of the hype really is only that, and that scaling up to the kind of UI that has more than a few hundred data points on screen at one time is not as easy as some of the advocacy might suggest, so we're exploring how we can use the tool to take advantage of what it clearly does offer and play to its strengths without paying too high a price. In that respect, it's really no different to any other library we'd evaluate.
"You may be thinking that it's expensive to change data if there are a large number of nodes under an owner. The good news is that JavaScript is fast and render() methods tend to be quite simple, so in most applications this is extremely fast. Additionally, the bottleneck is almost always the DOM mutation and not JS execution. React will optimize this for you by using batching and change detection. [...] Don't underestimate how fast JavaScript is relative to the DOM."
Putting everything in one map and naively rendering it is a pattern that comes from the ClojureScript subset of the React community, and it works because the cljs wrappers assume cljs's immutable data structures and all implement shouldComponentUpdate for you.
React isn't magically fast, it's just O(n_size_of_diff). Getting it to perform well here requires you to figure out how to prune the size of the diff. You're doing it by splitting the app state into finer grained trees (each tree is pruning the branches it's not listening to) but any gains you're picking up from doing this could be done on the single root solution with appropriate shouldComponentUpdate methods.
The alternative (and easier) way of reducing the diff size is to simply window your data before passing it into the render, which can work for things like grids and lists but doesn't sound like your problem.
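A minimal sketch of windowing (row height and viewport size are assumed fixed here): only the rows that could be visible are handed to render, so the diff is bounded by the window size rather than the full data set.

```js
function visibleRows(rows, scrollTop, rowHeight, viewportHeight) {
  var first = Math.floor(scrollTop / rowHeight);
  var count = Math.ceil(viewportHeight / rowHeight) + 1; // +1 for a partial row at the bottom
  return rows.slice(first, first + count);
}

// e.g. hand only visibleRows(allRows, container.scrollTop, 24, 600) to the grid component
```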
You also need to occasionally reshape your data to make the diff smaller. I work on a search app that thrashed for 1.2s every time new search results came in. The problem was that the results were invalidating a recursive menu component, which was re-rendering 4 times per update. Avoiding the invalidation fixed that specific delay, but the menu was still slow. Switching the menu's data organization from a tree of objects with `open` attrs to a static tree + open path, and only rendering the visible nodes, solved the perf problems in the app.
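A rough sketch of that kind of reshaping (names hypothetical): the tree itself never changes, the only mutable state is the open path, and rendering walks just the visible portion.

```js
// Static tree: never mutated, so a reference check shows it as unchanged.
var menuTree = {
  id: 'root',
  children: [
    { id: 'reports', children: [{ id: 'monthly', children: [] }] },
    { id: 'settings', children: [] }
  ]
};

// The only state that changes when the user clicks around.
var openPath = ['root', 'reports'];

function isOpen(id) {
  return openPath.indexOf(id) !== -1;
}

// Collect just the nodes that are actually on screen.
function visibleNodes(node) {
  var out = [node];
  if (isOpen(node.id)) {
    node.children.forEach(function (child) {
      out = out.concat(visibleNodes(child));
    });
  }
  return out;
}

// visibleNodes(menuTree) -> root, reports, monthly, settings
// (children of closed nodes are skipped entirely)
```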
> Getting it to perform well here requires you to figure out how to prune the size of the diff.
The most efficient way of doing that is by never doing an unnecessary diff in the first place.
Everything after that is a trade-off between convenience and overhead. Sometimes it will be worth incurring some overhead for greater convenience, sometimes it won't.
> You're doing it by splitting the app state into finer grained trees (each tree is pruning the branches it's not listening to) but any gains you're picking up from doing this could be done on the single root solution with appropriate shouldComponentUpdate methods.
But shouldComponentUpdate is the wart that makes the whole abstraction leaky. The assumption that such tests have negligible cost is essential to the assumption that React used in the style you're advocating is efficient, but with non-trivial data models that won't necessarily be the case. Even if that assumption holds under any particular set of circumstances, you still now have two sources of truth, with more code to write and more scope for introducing errors.
At that point, it seems you're not necessarily any better off than you would be with a more traditional, event-driven architectural style, you're just making different trade-offs. For example, given a parent-child relationship between components, rendering top-down using React might avoid some of the need to co-ordinate updates if underlying data is appearing or disappearing. On the other hand, it might also mean writing a bunch of boilerplate shouldComponentUpdate functions just to keep rendering efficient.
My assumption was that writing the handful of shouldComponentUpdate implementations I think you'd need is simpler than handling the coordination. You seem to have a reasonable understanding of the tradeoffs.
> At that point, it seems you're not necessarily any better off than you would be with a more traditional, event-driven architectural style
Not having to implement the DOM state updates and rAF update batching are fairly big wins. The only other way I know of doing this is via data binding, and the difference there is that individual updates are more efficient but establishing the bindings is usually O(n_size_of_input); in practice, every app I've worked on that did data binding eventually broke the approach. Having the shouldComponentUpdate escape hatch has allowed me to push much larger volumes of data through the system without it breaking.
I've implemented a number of reasonably complex apps but the two largest have been on immutable datastructures where React's assumptions always hold unless you screw something up. The largest non-immutable app I've written was a 25k LoC layout builder where the two shouldComponentUpdates for vdom pruning were pretty straightforward.
> Not having to implement the DOM state updates and rAF update batching are fairly big wins.
I agree, and these are two of the main reasons we're interested in React. We've had in-house code doing some similar things for a long time, more the batching than rAF in our case, but for much the same reasons. Now that mainstream libraries are offering similar tools, with reasonable trade-offs in terms of functionality offered versus level of dependency, there's not much reason to maintain our own in the next generation of the UI.
In case you're interested, our model code also presents its interface in the form of immutable data structures, with quite fine-grained events available to monitor quite specific parts. But even with that, in something like our small multiples test case, where we're plotting a number of interactive SVG charts using modestly derived data but thousands of underlying data points, it appears that the indirections and tests do still add up to a noticeable level of overhead. Of course that's more demanding than what a lot of web apps will ever need, and our conclusions for our project won't necessarily be anyone else's conclusions for theirs.
Isn't this supposed to be a non-issue because React can intelligently compare the higher-level components of your application and decide whether they need to be re-rendered?
The trouble is, those comparisons might be relatively cheap compared to a DOM update, but they still aren't free. Even with an efficient overall algorithm, and with all the multiples having a key property set to minimise unnecessary changes, if you're dealing with hundreds of components in your page that need rendering at any given time, and that rendering in turn relies on significant conditional logic or derived information, then those comparisons to see what rendering is needed (or, if you prefer, to determine whether re-rendering is needed) start to add up.
What's the scenario where you need to re-render hundreds of components? Is this something like a state change propagating down a long line of components? If you assume a roughly balanced, tree-like structure to your components, though, wouldn't even this still be bounded logarithmically by the number of components on the page?
That's an optimisation. It's not core to the React model.
If DOM mutation was ever to be faster than the cost of the diffing then they'd just remove the diffing and your applications would still work the same with the same API.
I've interacted with KVO for years before iOS ever came out and I can honestly say it's just the worst. It's stringly typed with zero compile-time validation so as soon as your model key paths change your app breaks in the least expected places. Then throw bindings in the mix and your key path constants are in your nibs too! God have mercy.
I'm not the least bit surprised they discarded bindings in iOS, then (effectively) discarded KVO in Swift.
Next time, try playing with creating an observer protocol:
I have no interest in returning to any programming model that fundamentally revolves around complex sync-up code (also, yes we "tried" this aplenty long before KVO/bindings).
As I mentioned in my previous post, I felt KVO was a really interesting design path, to be sure, but I agree with you that it had its fair share of issues. That being said, observer patterns are just incremental improvements on this. Conceptually it's still the same: figuring out ways of getting different parts of a system to eventually "settle" on the same value. Sure, strict typing and whatnot can make this safer, but it's ultimately a lot of code that has no real connection to the actual problem your program is trying to solve.
Personally I find that immutable-everything types (through persistent data structures), combined with immediate-mode programming for rendering, have more or less taken away the cognitive overhead I used to have around this set of "problems". You have a model, which can lead to a new model in the future. You then have a render function that is a function of this model. It's completely deterministic, there is only one path things can go down, and you can grab the state of your program at any time and keep re-rendering it to figure out what's going on -- there's no "implicit" conglomeration of state through the value of all the free variables in your program at any particular point in time and their knowledge of each other.
If I were to return to iOS programming today, I'd probably give React Native a stab.
To me tomorrow land is being able to change code and see it live update without recompile or provisioning profiles or what have you.
KVO's API could be a lot better (even just making it target-action would be a huge improvement), but at the time it was created you couldn't do a block based API (since they didn't exist yet), and thread-safe automatic deregistration without zeroing weak pointers would be very complex and have a lot of overhead.
I was thinking that observing properties could be useful for hooking on to changes to some internal state that a third party library maintains, but doesn't provide a callback for (for instance, for persistence and/or synchronization).
Is there a more idiomatic way to work around this in JS other than Object.observe if you don't want to resort to dirty checking or patching the library to provide a callback?
O.o seems like a bad option in your case, since not only would you be relying on an internal API that could change at any point, but you'd be relying on the shape of its internal data staying the same too.
In my opinion this is a good move. As the post notes, the JS world is moving away from the side-effect-heavy code that motivates Object.observe. JS doesn't need any more legacy mistakes enshrined in the language spec, and without an overwhelming need for this, it should not be adopted.
> As the post notes, the JS world is moving away from the side-effect-heavy code that motivates Object.observe.
It may well be that Object.observe was the wrong level of granularity, and I agree that language specs should evolve cautiously.
That said, I think it is far too early to make generalisations as broad as your statement above. React-style development is still a tiny fraction of overall web development, and React itself isn't even at version 1.0 yet and IMHO still has significant fundamental architectural concerns that haven't been fully explored. It promotes some interesting ideas, and I'm glad someone with the profile of Facebook is challenging assumptions, but in turn reasonable people could challenge a lot of the ideas coming out of the React team, and we're still far from understanding the long-term trade-offs of that style of building for the web.
Bummer. The latest web trend is immutability and event streams, but javascript data structures are almost all mutable by default, and O.o is a great way to take advantage of that fact. Instead of abstracting away data changes behind events, why not use data directly? Instead of fighting the language, why not (at times) embrace mutability?
Alternatively, why not embrace immutability and add native immutable data structures to JS? I believe there is a TC39 proposal for this, in fact. Even if O.o had become mainstream, I would've stayed away from it. The immutability trend is not due to the impracticality of dealing with object observation, but due to inherent issues with mutable state.
Ha! I actually commented on this guy a few months ago. JavaScript is designed in such a way that I think true, Clojure-like immutable data structures are not possible. We'd either need to overload comparators (===, ==) so that they compare immutable data by value (rather than by reference, which is the case for their mutable counterparts), or implement Java-style .equals methods for immutable objects (or as static methods on their constructors). Neither seems to be a great fit for JavaScript.
I think performance is the reason, but that's only speculation from a long-time JavaScript developer with plenty of in-the-trenches experience on resource-constrained platforms.
The platforms that do not shy away from data and embrace mutability at all times -- I believe those are called databases, and when necessarily distributed for scale, that mutability is the enemy of consistency.
Indeed, Proxy always seemed like the more general version of Object.observe. It not only "tells you when state changes", but lets you write code to say what that means.
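A minimal sketch of that idea with a set trap (the notify callback is a stand-in for whatever change handling you want):

```js
function observe(target, notify) {
  return new Proxy(target, {
    set: function (obj, prop, value) {
      var old = obj[prop];
      obj[prop] = value;
      if (old !== value) {
        notify(prop, old, value); // you decide what counts as a "change" here
      }
      return true; // signal success to the runtime
    }
  });
}

var model = observe({ count: 0 }, function (prop, oldValue, newValue) {
  console.log(prop + ':', oldValue, '->', newValue);
});

model.count = 1; // logs "count: 0 -> 1"
```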
What's the state of Proxy? Are the people behind it still working on it? Are we going to be able to use it? Will it be fast?
As far as I know Proxy is impossible to polyfill. There are some patches which make old browser versions of Proxy meet the spec, but I just don't think you can close the Proxy gap with pure javascript.
It's possible if you also translate all of the code that uses the polyfilled object by converting every MemberExpression to a function call, but that's probably a nightmare for performance.
What happens when I add a property to the object that I'm proxying after the Proxy is already instantiated? You wouldn't be able to detect the addition and add getters/setters for the property unless you use some sort of O.o or dirty checking on the underlying object.
JavaScript is all over eventing. But such a system requires cooperation or control between those firing and those subscribing to events. When the object we're talking about is e.g. an HTMLIFrameElement and it changes its own height it would be neat to be able to peek into object attribute changes.
No, JavaScript is a language embedded in browsers. Web specifications detailing standard events are all over eventing, because browsers use event loops. On a related matter, Node does, too. JavaScript does not require an event loop.
Any misunderstandings within this system are a misunderstanding regarding DOM events. HTMLIFrameElements don't change on their own. If they change based on the window resizing, you listen to the "resize" event, etc.
It actually does now that Promise got added to the language. For extra fun the interaction of the JavaScript event loop and the HTML/DOM event loop is not actually specified and getting it specified will be rather exciting: both sides want to "own" the event loop...
I wonder how that works in practice outside of web environments, considering the spec doc doesn't mention it anywhere. I get the premise of having Promises in the language requiring an event loop, but without one, you'd be creating objects that never get resolved across frames (since they don't exist), which is still entirely possible.
I guess the only issue is that what happens if you have unresolved promises when your Job queues go empty is implementation-defined, because an implementation is allowed to terminate when there are no Jobs. But if it doesn't terminate, resolving the Promise will enqueue a PromiseReactionJob. And I guess not actually call NextJob; that part is up to the implementation.
Hovering is handled via the HTML/DOM event loop, since it needs to be coordinated with mouseover events.
Animations, though, do need to be worked into this somehow. And they need to be able to run on a completely independent event loop to some extent so you can do animations on the compositor thread, not the DOM thread.
We still have Object.defineProperty (ES5) where you can have functions for get and set. I use this for database ORM for native Node.js object persistence.
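Something along these lines (a sketch; track and onChange are hypothetical names), which is roughly how ORM-style change tracking can work without Object.observe:

```js
function track(obj, key, onChange) {
  var value = obj[key];
  Object.defineProperty(obj, key, {
    get: function () { return value; },
    set: function (next) {
      var prev = value;
      value = next;
      onChange(key, prev, next); // e.g. mark the record dirty for persistence
    },
    enumerable: true,
    configurable: true
  });
}

var user = { name: 'Ada' };
track(user, 'name', function (key, prev, next) {
  console.log(key + ' changed:', prev, '->', next);
});

user.name = 'Grace'; // logs "name changed: Ada -> Grace"
```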
What are the implications of this for Angular 2.0? Aren't they basically relying on this feature to make the framework efficient? Or was that just for improving Angular 1.x?
I hope they keep it in V8, if only for debugging. There are some cases where this is the only practical way to see what is mucking with your object and when. If they take it out of the spec, it would be nice if they could add it as a method to console or something, at least.
I understand wanting to discourage directly mutating Objects, but I wish we knew what was going to replace O.o. IMO, we should have access to something like NodeJS' EventEmitter in the browser natively.
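In the meantime, a minimal emitter is easy enough to hand-roll (a sketch, not the Node API):

```js
function Emitter() {
  this.listeners = {};
}

Emitter.prototype.on = function (event, fn) {
  (this.listeners[event] = this.listeners[event] || []).push(fn);
  var self = this;
  // Return an unsubscribe function.
  return function off() {
    self.listeners[event] = self.listeners[event].filter(function (l) { return l !== fn; });
  };
};

Emitter.prototype.emit = function (event, payload) {
  (this.listeners[event] || []).forEach(function (fn) { fn(payload); });
};

var bus = new Emitter();
var off = bus.on('saved', function (doc) { console.log('saved', doc.id); });
bus.emit('saved', { id: 42 }); // logs "saved 42"
off();
```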
I think the main thing that hamstrung Object.observe was that observations were asynchronous (scheduled as a microtask). It made data flow very difficult to follow for developers (particularly when driven by data binding systems).
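For anyone who never used it, the asynchrony looked like this (illustrative only, since the feature is being withdrawn): change records were delivered in a batch, as a microtask, after the synchronous code that made the changes had finished.

```js
var model = { count: 0 };

Object.observe(model, function (changes) {
  // Runs later, as a microtask, with the changes batched together.
  console.log('observed:', changes.map(function (c) { return c.name; }));
});

model.count = 1;
model.count = 2;
console.log('synchronous code finishes first');
// Output order: "synchronous code finishes first", then "observed: ['count', 'count']"
```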
This is unfortunate. I used O.o to implement type-safe lenses in my TypeScript project (https://github.com/wereHamster/avers). I always cringe when I see Immutable.js `c.setIn(['some','deeply','nested','field'], x)`: so much opportunity for typos, and no support for refactoring.
For a while now I've wanted to use proxies, but they are only supported in Firefox, and there's no timeline for when they'll get to Chrome. I'll have to see what I can use as a replacement.
So they expected major frameworks to switch to an unfinished spec without a clear ETA for approval and that didn't work out?
I think this is a major blow to web performance.
We are bound to perform O(every object on page) computations on every event, instead of O(tiny bit that's actually changed).
This is how, even with devices that have gigs of RAM and close-to-GHz CPUs, we still measure the FPS of a page with one table and 5 images.
> and hope to remove support from V8 by the end of the year (the feature is used on 0.0169% of Chrome pageviews, according to chromestatus.com).
I hope nobody uses that stuff in production... even if there are polyfills, their performance is nowhere near that of the native implementation.
> After much discussion with the parties involved
It would be interesting to know the content of these discussions.
Any implications for Google Polymer? I haven't kept up, but I remember from the time that I was working with it, Polymer depended on (i.e. built upon) either a native implementation of Object.observe or a polyfill that performs "dirty checking". The latter is not exactly a high performance underpinning for a library (though it works, sure).
Ah, thanks... I read the headline, glanced quickly at the link and typed up my polymer-related question as it came to mind. I should have actually read the letter. :-)
Junior devs? I could see this being yet another gross tool used by seasoned and junior devs alike, with the former trying to be clever with over-engineered solutions and the latter not knowing better.
A sad day for Javascript. Why were only frameworks considered? Some of us still prefer vanilla JS. Also, more options are better than fewer. This is like saying, "Hammers have become quite popular lately, so we've cancelled plans to make a screwdriver."
I used to work on a legacy Java codebase that made heavy (mis)use of the Observable pattern, and I must second your sentiment.
Invariably a system acquires enough stuff going on that you get "observer storms" of feedback, or people find really weird couplings that are not immediately obvious because something several modules away is observing something it doesn't need to.
In order to observe an Observable object, you need to have a reference to it. Adding an Observer to something that you shouldn't isn't possible without introducing it as a dependency. If someone does inject something so they can observe it, it's easier to spot the coupling.
It sounds like the legacy code you got stuck maintaining put logic inside the observers, which isn't the Observer Pattern's fault. In my own experience, not using event patterns like the Observer Pattern is probably the biggest cause of difficult-to-maintain code.
The original web 1.0, where each state had a URL and was rendered sequentially from scratch, was actually arguably much better than the stateful, event-driven UI mess we have now.
Moving from raw native UI frameworks (very similar to today's JS UI libs) to HTML pages was a big paradigm shift, but it was IMO better for most standard apps.
I've been thinking for a long time that it would be a good idea to build a client-side library where each state has a URL. Essentially, just take the classic backend view-rendering technique and move it to the client. It could work out really well, I think. Event-driven techniques can then be reserved for really interactive apps like games.
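A toy sketch of that idea (the routes and render functions are hypothetical): the URL hash is the single source of state, and every navigation just re-renders the matching view from scratch.

```js
function renderInbox()    { var el = document.createElement('h1'); el.textContent = 'Inbox';     return el; }
function renderSettings() { var el = document.createElement('h1'); el.textContent = 'Settings';  return el; }
function renderNotFound() { var el = document.createElement('h1'); el.textContent = 'Not found'; return el; }

var routes = {
  '#/inbox': renderInbox,
  '#/settings': renderSettings
};

function render() {
  var view = routes[location.hash] || renderNotFound;
  document.body.innerHTML = '';        // throw the old state away, web 1.0 style
  document.body.appendChild(view());   // rebuild the current state from the URL
}

window.addEventListener('hashchange', render);
render();
```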
Classic example... I login to the EMR system my GP's office uses, and click around a bit, looking at test results, notes, etc. Then, for whatever reason, I happen to reload the page or click the "back" button. Boom, I'm out of the application and have to login again from scratch. Grrrrr... SPA's and this fancy stuff have their place, but let's not pretend that it doesn't come with a cost.