> Immutable data unlocks powerful memoization techniques and prohibits accidental coupling via shared mutable state.
Memoization is still more expensive than just tracking changes and avoiding unnecessary recomputations directly (it only starts winning when doing dynamic programming). There is a point about accidental coupling, but that is more a correctness issue than a performance one.
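To make the comparison concrete, here's a minimal sketch (hypothetical names, not from any framework) of the two approaches: a memoized function pays a cache compare on every call, while a dirty bit set at the point of change lets the reader skip even that.

```javascript
// Memoization: every call pays a cache lookup/compare.
function makeMemo(fn) {
  let lastArg, lastResult, called = false;
  return (arg) => {
    if (called && arg === lastArg) return lastResult; // compare on every call
    lastArg = arg;
    lastResult = fn(arg);
    called = true;
    return lastResult;
  };
}

// Dirty bit: the writer records the change, so the reader
// needs no comparison at all to know whether to recompute.
class View {
  constructor(model) { this.model = model; this.dirty = true; this.rendered = null; }
  set(model) { this.model = model; this.dirty = true; } // change tracked at the source
  render(fn) {
    if (!this.dirty) return this.rendered; // no compare, just a flag check
    this.rendered = fn(this.model);
    this.dirty = false;
    return this.rendered;
  }
}
```

The dirty-bit version shifts the bookkeeping to the mutation site, which is exactly the "managed mutable state" trade-off being discussed.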
In the high-performance computing field, use of immutable data structures is suicidal; even tries are an order of magnitude slower than in-place mutable data structures. And he only compares against naked shared mutable state, not against managed mutable state with change propagation.
And that gets to the end of the talk: the real reason is they want to avoid frameworks that already solve this problem by tracking changes, and React avoids that since you can just do the diff post facto. Ok, I get that.
Yes, but immutability means that reference checks are enough to spot unchanged areas. And we're not in HPC land right now, we're writing JavaScript. In that world, Om (immutable) is vastly faster than Backbone, Knockout, Angular, Ember &c (mutable). Partially because, when you can reason more clearly, it's easier to do something about the performance.
Reference checks are conservative in spotting unchanged areas (if the references are equal, there is definitely no change; if not, there may still be no change) unless all values are interned, which is a bit expensive. Also, diffing is only needed at all when you need to compare values anyway; dirty bits are otherwise sufficient to mark changes.
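A quick illustration of the one-sided nature of the check on immutable data:

```javascript
// Reference equality is a one-sided test on immutable data:
//   same reference  => definitely the same value;
//   different refs  => says nothing, the values may still be structurally equal.
const a = { items: [1, 2, 3] };
const b = a;                     // shared reference
const c = { items: [1, 2, 3] };  // equal contents, fresh allocation

console.log(a === b); // true  -> safe to skip work
console.log(a === c); // false -> must assume it may have changed

// Interning (hash-consing) would make the check exact -- equal values
// would share one reference -- but you pay a table lookup on every
// construction to maintain that invariant.
```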
I'm not really familiar with the web ecosystem, but why would it differ so much from say C#/WPF? or a system based on change propagation instead of diffing?
If everything is immutable then a reference check is all you need. The key is to avoid deep copies. Observables are problematic when changes can ripple through your model/view-model, resulting in multiple DOM changes. This equally applies to WPF.
WPF has a retained scene graph, so no diffs are necessary, all changes are just O(1).
Reference equality only tells you what definitely hasn't changed; it of course can't tell you that two values are still equal when their references differ (unless completely interned, of course). For React, that's fine: a false inequality just means some extra work; there are other applications where it's not ok.