This seems to be describing "reactive programming" as that term is used in practice, but FRP is actually something significantly more constrained.
The term was, to my understanding, invented by Conal Elliott shortly after his work with Fran. To his mind, FRP is defined as describing synchronous, continuous signals in a language with denotational semantics that clearly suggest those signals. "Events" are included in the denotational model in order to consider signals which change "infinitely quickly".
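For reference, the denotational model behind that description is tiny. A rough sketch in Haskell (the actual Fran/Reactive types differ in detail, so treat this as illustrative):

type Time       = Double        -- conceptually continuous time
type Behavior a = Time -> a     -- a value defined at every instant
type Event a    = [(Time, a)]   -- a time-ordered list of discrete occurrences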
It turns out that these programs are usually executed using synchronous or even asynchronous event-passing networks, but that should be considered purely an implementation detail. Further, implementation events are not the same as FRP "Events".
Typically, to my understanding, things in the "reactive extensions" family of code are not FRP, nor are they intended to be. They are influenced by Elliott's work, I'm sure, but they tend to describe effectful, asynchronous event-passing networks directly. Erik Meijer has a talk about this [0].
So it is certainly reasonable to describe "reactive programming" using this mental model, although it turns out that "reactive programming" is sometimes avoided as a term. I think the reason is two-fold: (1) "functional" is kind of sexy today and (2) "reactive programming" has become incredibly diluted as a term and it's difficult to really understand what anyone might be talking about any more.
I think the first reason is compelling if a bit cheap, but the second reason should be a cautionary tale against misusing the term "FRP". If it goes the way of "reactive" we will have lost even more of our ability to speak to one another precisely.
I think part of the problem is that if we restrict the usage of the term FRP to only include Elliott's work, then there is currently a void in how to characterize all these derivative forms, and not many are offering an alternative classification.
This is why I appreciated Evan's talk in attaching labels to different portions of the space.
I agree and would like to see a proliferation of more well-designed technical terms. I think the rest of the space sort of lacks enough clustering/commonality/analysis to pick too many terms yet—the places which are well-defined already have names like RtFRP and AFRP.
For another take on Reactive Programming from Erik Meijer, check out "Duality and the End of Reactive" [1] (video).
I watched it just last night and found it very thought provoking (so much that I stole some of the concepts to create transduce-async [2] this morning).
The problem I always run into with reactive programming is that of glitches, and stuff being recomputed more often than necessary.
Glitches can appear when, for example, two inputs of a certain "functional box" change almost at the same time. When input 1 changes, the output changes, and then when input 2 changes, the output changes again, thus resulting in lots of superfluous computation in the chain that follows this output.
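A minimal sketch of the classic "diamond" glitch under naive push-based propagation (all names here are illustrative, not from any particular library):

import Data.IORef

-- d depends on b and c, which both depend on a. Naively pushing a change to a
-- through b and then through c recomputes d twice, and the first recomputation
-- sees a stale c (the glitch).
main :: IO ()
main = do
  a <- newIORef (0 :: Int)
  b <- newIORef 1   -- b = a + 1
  c <- newIORef 0   -- c = a * 2
  let recomputeD tag = do
        bv <- readIORef b
        cv <- readIORef c
        putStrLn (tag ++ ": d = " ++ show (bv + cv))
      setA x = do
        writeIORef a x
        writeIORef b (x + 1) >> recomputeD "after b"   -- glitch: c is still stale here
        writeIORef c (x * 2) >> recomputeD "after c"   -- now consistent
  setA 10   -- prints "after b: d = 11" (glitchy), then "after c: d = 31"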
The nice part about reactive programming is supposed to be that stuff gets recomputed incrementally, but in practice this often does not happen!
There is nothing in reactive programming that implies being incremental, but Glitch does that anyway (in the sense that tasks whose dependencies haven't changed aren't replayed). On the other hand, Glitch might replay tasks more than is optimal.
> There is nothing in reactive programming that implies being incremental
Yes there is. Because if it's not incremental, then you could replace any "reactive" program by one which just recomputes its outputs every time something changes, from scratch. Reactive programming solves this by only recomputing parts which change.
Well then, functional reactive programming is not incremental, especially implementations based on pull (i.e. polling). Event-stream-based systems (like Rx) provide incrementally growing event streams, but that's it. I'm not even sure if they suppress event firing on value streams when a value matching the last one comes in.
I mean, it seems like reactive computations should be incremental, yeah, but you won't find many (any?) models that deliver on that. On the other hand, you have plenty of incremental models that aren't reactive (e.g. SAC, self-adjusting computation). Glitch is designed to be reactive and incremental, so it does "replay when something changes" and is able to suppress change propagation if/when a change fizzles out.
> then you could replace any "reactive" program by one which just recomputes its outputs every time something changes, from scratch. Reactive programming solves this by only recomputing parts which change.
I find that to be completely desirable. That behavior is exactly what I want... simply done more efficiently [0].
[0] Also modulo a whole bunch of local state. AFRP handles this well by having a good notion of what it means to "switch an arrow in", but it's been a challenge for applicative/monadic FRP. Rx programming (i.e. "not FRP") tends to solve this problem by ignoring its existence and just littering local state everywhere.
Just to note, many definitions of "reactive" are not based on recomputation at all (just react to those events dammit, whatever that means), and even for those that are, there are many other issues to consider beyond incremental performance, like state preservation across a stepped recomputation, or the implicit state that arises from aggregating over an event stream (as in FRP). Note that FRP is incremental in the time dimension (add a new time point to the program and you don't have to recompute the program for previous time points), but we really want more than that (don't recompute parts of the program execution that are not affected by changes in the time point).
It isn't clear to me that arrowized FRP is incremental. I think some of the less pure FRPs are (like Flapjax) given that they do a topological sort of the signal graph to repair it (if they were going to redo everything, the sort wouldn't be necessary).
To be clear, I'm trying to state that a (nice) recomputation semantics is desirable because it means it's possible to reason about your program while ignorant of time. Obviously, an implementation that can speed things up via incremental computation is preferable, and that ought to be nice as well.
AFRP is a good example here. AFRP semantics are easily stated in a recomputational way. It makes it necessary to talk about causal AFRP and bounded-history AFRP, which are nice terms for thinking about a computation (if sort of obvious). Then, the efficient implementations (of causal AFRP) are themselves incremental for efficiency.
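To give a rough idea of what "stated in a recomputational way" looks like, here is the usual idealized AFRP semantics (a conceptual sketch, not how Yampa et al. are actually implemented):

type Time     = Double
type Signal a = Time -> a
-- a signal function maps whole input signals to whole output signals;
-- "causal" adds the side condition that the output at time t may depend
-- only on the input at times <= t, which is what makes stepwise
-- (incremental) implementations possible
type SF a b = Signal a -> Signal b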
Perhaps I'm not understanding the terminology, but every implementation I've ever seen consumes input incrementally. E.g.
{-# LANGUAGE TypeOperators, ExistentialQuantification #-}
-- two alternative encodings of the same automaton arrow:
data i ~> o = A (i -> (o, i ~> o))              -- each step consumes an input and yields an output plus the next arrow
data i ~> o = forall s. A s (i -> s -> (o, s))  -- the same thing with the hidden local state (and its seed value) made explicit
in each case, the inputs are consumed sequentially, the outputs returned immediately, and the local state updated for the next time around.
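For concreteness, a driver for the first encoding might look like this (a sketch; runA is my name, not from any library):

-- feed a finite list of inputs through the automaton, one step at a time,
-- collecting the outputs
runA :: (i ~> o) -> [i] -> [o]
runA _        []     = []
runA (A step) (x:xs) = let (y, next) = step x in y : runA next xs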
If you're referring to the ability for a new event to update the signal network only partially (in Elliott's terminology, "push" semantics), then there's Amsden's TimeFlies library.
1) Is "the chain that follows this output" pulling the data, or is the data being pushed to it? If you used a pull model rather than a push model, then inputs 1 and 2 could change a million times but no computation would be executed until the "chain that follows this output" requested the latest value.
2) If you need a push model, couldn't you just use .throttle / .debounce? I feel like FRP has plenty of tools to tackle this problem.
If your worry about recomputation is about efficiency, then admittedly FRP is probably not your best choice of paradigm. FRP consistently chooses reduced code complexity at the expense of efficiency (with the assertion that mutable state / imperative code is inherently more complex in a complex system than .flatMap.throttle.etc, which is obviously debatable).
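To illustrate point 1, here's a toy sketch of pull-based sampling (illustrative only, not any particular FRP library):

import Data.IORef

-- pull model: changing the inputs does no work; the derived value is only
-- computed when the consumer asks for it
main :: IO ()
main = do
  input1 <- newIORef (0 :: Int)
  input2 <- newIORef (0 :: Int)
  let output = (+) <$> readIORef input1 <*> readIORef input2   -- a recipe, not a cached value
  mapM_ (writeIORef input1) [1 .. 1000000]   -- a million changes, zero recomputation
  mapM_ (writeIORef input2) [1 .. 1000000]
  output >>= print                           -- one computation, on demand: 2000000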
Very interesting. One thing I notice about this architecture is that in the example given, the fourish features (loading suggestions, displaying suggestions, reloading all, reloading one) are pretty tightly complected together (especially further on, for example in the combineLatest area). Is this necessarily the case? Is there some better way of accomplishing this?
It's not functional reactive in the Elliott sense. It is reactive in the general meaning of reactivity, but that is also a spectrum (actors don't work well for UI, for example).
If FBP is a form of dataflow programming, I think you could say it is reactive.
I would try to separate dataflow from the FRP style that reifies the time step into first-class values, e.g. with "signals" or "behaviors", or from the "arrowized" forms of FRP.
I've studied FBP in more depth than FRP, but my conclusion was that they do achieve very similar things with vastly different semantics and terminology.
One of the interesting aspects of FBP and many dataflow systems is that it can also be defined as a schema for actor-model messaging. i.e. you still have the general actor model as the low level abstraction, but then you assume actors to be archetypal and immutable between runs of the network, existing only to implement a finite, typed, addressable number of "in" and "out" queues on each actor instance, which are subsequently configured into the final network. An explicit model of time is not used - "all messages delivered" is the standard termination state, but if one breaks the Morrison model to add local actor state again, it becomes trivial to defer message delivery across runs of the network.
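A toy sketch of that shape, with components as stateless loops over typed in/out ports (here just unbounded channels for simplicity; real FBP uses bounded buffers, and all names are illustrative):

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan
import Control.Monad (replicateM_)

-- an archetypal, stateless component: it exists only to read packets from its
-- in port and write packets to its out port
doubler :: Chan Int -> Chan Int -> IO ()
doubler inPort outPort = do
  x <- readChan inPort
  writeChan outPort (x * 2)
  doubler inPort outPort

main :: IO ()
main = do
  wire1 <- newChan
  wire2 <- newChan
  _ <- forkIO (doubler wire1 wire2)          -- instantiate the component into the network
  mapM_ (writeChan wire1) [1, 2, 3]          -- send packets in
  replicateM_ 3 (readChan wire2 >>= print)   -- prints 2, 4, 6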
[0] http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Key...