[dupe] Introduction to Reactive Programming (gist.github.com)
192 points by lpsz on Nov 1, 2014 | 31 comments



This appears to be describing "reactive programming" as that term is used in practice, but FRP is actually something significantly more constrained.

The term was, to my understanding, coined by Conal Elliott shortly after his work on Fran. To his mind, FRP means describing synchronous, continuous signals in a language whose denotational semantics clearly reflect those signals. "Events" are included in the denotational model in order to account for signals which change "infinitely quickly".
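To make the denotational flavor concrete, here is a minimal sketch of the classic Fran-style model (my own illustrative Haskell, not any particular library's API): a behavior denotes a function of continuous time, and an event denotes a time-ordered list of occurrences.

    type Time = Double

    -- a behavior denotes a value at every point in continuous time
    newtype Behavior a = Behavior (Time -> a)

    -- an event denotes discrete occurrences, ordered by time
    newtype Event a = Event [(Time, a)]

    -- sampling is just function application in the model
    at :: Behavior a -> Time -> a
    at (Behavior f) = f

    -- e.g. the behavior whose value is the current time itself
    time :: Behavior Time
    time = Behavior id

    main :: IO ()
    main = print (time `at` 3.0)   -- 3.0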

It turns out that these programs are usually executed using synchronous or even asynchronous event-passing networks, but that should be considered merely an implementation detail. Further, implementation events are not the same as FRP "Events".

Typically, to my understanding, things in the "reactive extensions" family of code are not FRP, nor are they intended to be. They are influenced by Elliott's work, I'm sure, but they tend to describe effectful, asynchronous event-passing networks directly. Erik Meijer has a talk about this [0].

So it is certainly reasonable to describe "reactive programming" using this mental model, although "reactive programming" itself is sometimes avoided as a term. I think the reason is twofold: (1) "functional" is kind of sexy today, and (2) "reactive programming" has become so diluted as a term that it's difficult to know what anyone might be talking about any more.

I think the first reason is compelling if a bit cheap, but the second should be a cautionary tale about misusing the term "FRP". If it goes the way of "reactive", we will have lost even more of our ability to speak to one another precisely.

[0] http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Key...


I think part of the problem is that if we restrict the term FRP to Elliott's work alone, then there is currently a void in how to characterize all these derivative forms, and not many people are offering an alternative classification.

This is why I appreciated Evan's talk, which attaches labels to different portions of the space.


I agree and would like to see a proliferation of well-designed technical terms. I think the rest of the space lacks enough clustering/commonality/analysis to pick many more terms yet; the places which are well-defined already have names like RtFRP and AFRP.


For another take on Reactive Programming from Erik Meijer, check out "Duality and the End of Reactive" [1] (video).

I watched it just last night and found it very thought-provoking (so much so that I stole some of the concepts to create transduce-async [2] this morning).

[1]: http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Key...

[2]: https://github.com/transduce/transduce-async


Previous discussion on this article:

https://news.ycombinator.com/item?id=7964873


> "FRP is programming with asynchronous data streams."

Maybe, if you drop the F. For clarification on FRP, there was a great talk at Strange Loop this year: http://www.youtube.com/watch?v=Agu6jipKfYw


Thanks so much. This is one of the best talks I've seen in a long time.


The problem I always run into with reactive programming is that of glitches, and stuff being recomputed more often than necessary.

Glitches can appear when, for example, two inputs of a certain "functional box" change almost at the same time. When input 1 changes, the output changes, and then when input 2 changes, the output changes again, thus resulting in lots of superfluous computation in the chain that follows this output.
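To see the shape of the problem concretely, here is an illustrative sketch in plain Haskell with IORefs (no particular framework, names are made up): a derived value a + b is eagerly recomputed on every push, so a "simultaneous" change of both inputs is observed as two updates, the first of which is a transient glitch value.

    import Data.IORef

    main :: IO ()
    main = do
      a <- newIORef (1 :: Int)
      b <- newIORef (1 :: Int)
      let recompute label = do
            x <- readIORef a
            y <- readIORef b
            putStrLn (label ++ ": a + b = " ++ show (x + y))
      -- the intended transition is (1,1) -> (2,2), but it arrives as two pushes
      writeIORef a 2 >> recompute "after input 1"   -- prints 3: the glitch
      writeIORef b 2 >> recompute "after input 2"   -- prints 4: the intended result

Everything downstream of that output then runs twice, once on the bogus intermediate value.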

The nice part about reactive programming is supposed to be that stuff gets recomputed incrementally, but in practice this often does not happen!


You might want to check out Glitch, a programming model that attacks those problems head on with replay and rollback:

http://research.microsoft.com/en-us/people/smcdirm/managedti...

There is nothing in reactive programming that implies being incremental, but Glitch does that anyway (in the sense that tasks whose dependencies haven't changed aren't replayed). On the other hand, Glitch might replay tasks more often than is optimal.


> There is nothing in reactive programming that implies being incremental

Yes there is. Because if it's not incremental, then you could replace any "reactive" program by one which just recomputes its outputs every time something changes, from scratch. Reactive programming solves this by only recomputing parts which change.
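For a toy picture of what "only recompute the parts which change" buys you, here is an illustrative Haskell sketch (dirty flags by hand, not any real library): a change to one input re-runs only its own downstream computation.

    import Data.IORef
    import Control.Monad (when)

    main :: IO ()
    main = do
      a <- newIORef (1 :: Int); dirtyA <- newIORef True
      b <- newIORef (1 :: Int); dirtyB <- newIORef True
      outA <- newIORef 0
      outB <- newIORef 0
      let refresh = do
            da <- readIORef dirtyA
            when da $ do
              putStrLn "recomputing f a"
              readIORef a >>= writeIORef outA . (* 10)
              writeIORef dirtyA False
            db <- readIORef dirtyB
            when db $ do
              putStrLn "recomputing g b"
              readIORef b >>= writeIORef outB . (* 100)
              writeIORef dirtyB False
      refresh                                    -- both cells computed initially
      writeIORef a 2 >> writeIORef dirtyA True   -- only a changes
      refresh                                    -- only "recomputing f a" is printed

A from-scratch version would re-run both computations on every change; the results are the same, it's just wasteful.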

PS: Thanks for the link, I'll look into it!


Well then, functional reactive programming is not incremental, especially implementations based on pull (i.e. polling). Event-stream-based systems (like Rx) provide incrementally growing event streams, but that's it. I'm not even sure if they suppress event firing on value streams when a value matching the last one comes in.

I mean, it seems like reactive computations should be incremental, yeah, but you won't find many (any?) models that deliver on that. On the other hand, you have plenty of incremental models that aren't reactive (e.g. SAC). Glitch is designed to be reactive and incremental, so it does "replay when something changes" and is able to suppress change propagation if/when a change fizzles out.


> then you could replace any "reactive" program by one which just recomputes its outputs every time something changes, from scratch. Reactive programming solves this by only recomputing parts which change.

I find that to be completely desirable. That behavior is exactly what I want... simply done more efficiently [0].

[0] Also modulo a whole bunch of local state. AFRP handles this well by having a good notion of what it means to "switch an arrow in", but it's been a challenge for applicative/monadic FRP. Rx programming (i.e. "not FRP") tends to solve this problem by ignoring its existence and just littering local state everywhere.


Just to note, many definitions of "reactive" are not based on recomputation (just react to those events, dammit, whatever that means), and even for those that are based on recomputation, there are many other issues to consider besides incremental performance, like state preservation in a stepped recomputation, or implicit state that arises from aggregating over an event stream (FRP). Note that FRP is incremental in the time dimension (add a new time point to the program and you don't have to recompute the program for previous time points), but we really want more than that (don't recompute the parts of the program execution that are not affected by changes at the new time point).

It isn't clear to me that arrowized FRP is incremental. I think some of the less pure FRPs are (like Flapjax), given that they do a topological sort of the signal graph to repair it (if they were going to redo everything, the sort wouldn't be necessary).


To be clear, I'm trying to say that a (nice) recomputation semantics is desirable because it means it's possible to reason about your program while ignoring time. Obviously, an implementation is preferable if it can speed things up via incremental computation, and that ought to be nice as well.

AFRP is a good example here. AFRP semantics are easily stated in a recomputational way. That makes it necessary to talk about causal AFRP and bounded-history AFRP, which are nice terms for thinking about a computation (if sort of obvious). Then the implementations (of causal AFRP) are themselves incremental for efficiency.
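To illustrate what I mean by "stated in a recomputational way" (illustrative Haskell, not any concrete AFRP library's definitions): read a signal function as a function on whole sampled input histories; "causal" then means output n depends only on inputs 0..n, and "bounded history" means it depends only on a fixed-size window of them.

    -- a signal function as a function on (sampled) input histories
    newtype SF a b = SF { runSF :: [a] -> [b] }

    arrSF :: (a -> b) -> SF a b        -- pointwise; causal with zero history
    arrSF f = SF (map f)

    delaySF :: a -> SF a a             -- output n is input (n-1); bounded history
    delaySF x0 = SF (\xs -> zipWith const (x0 : xs) xs)

    totalSF :: Num a => SF a a         -- running sum; causal, but unbounded history
    totalSF = SF (scanl1 (+))

    -- the recomputational reading: extend the input history, re-run from scratch
    main :: IO ()
    main = print (runSF totalSF [1, 2, 3, 4 :: Int])   -- [1,3,6,10]

An efficient implementation computes the same outputs while only touching the newest sample.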


Do you have any citations for AFRP implementations being incremental? I haven't been able to find much in my own literature survey.


Perhaps I'm not understanding the terminology, but every implementation I've ever seen consumes input incrementally. E.g.

    -- an arrow as a step function: one input in, one output out,
    -- plus the continuation arrow to use for the next input
    data (i ~> o) =           A (i -> (o, i ~> o))
    -- equivalently, with an initial state and an explicit step function
    data (i ~> o) = forall s. A s (i -> s -> (o, s))
in each case, the inputs are consumed sequentially, the outputs returned immediately, and the local state updated for the next time around.

If you're referring to the ability for a new event to update the signal network only partially (in Elliott's terminology, "push" semantics) then there's Amsden's TimeFlies library.


1) Is "the chain that follows this output" pulling the data, or is the data being pushed to it? If you used a pull model rather than a push model, then inputs 1 and 2 could change a million times but no computation would be executed until the "chain that follows this output" requested the latest value.

2) If you need a push model, couldn't you just use .throttle / .debounce? I feel like FRP has plenty of tools to tackle this problem.

If your worry about recomputation is about efficiency, then admittedly FRP is probably not your best choice of paradigm. FRP consistently chooses reduced code complexity at the expense of efficiency (with the assertion that mutable state / imperative code is inherently more complex in a complex system than .flatMap.throttle.etc, which is obviously debatable).
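As a tiny illustration of point 1 (plain Haskell with IORefs, no FRP library involved): in a pull model the derived computation is just an action you run when you want the value, so intermediate input changes cost nothing.

    import Data.IORef

    main :: IO ()
    main = do
      input1 <- newIORef (0 :: Int)
      input2 <- newIORef (0 :: Int)
      -- "the chain that follows" is only a recipe; nothing is computed yet
      let derived = (+) <$> readIORef input1 <*> readIORef input2
      mapM_ (writeIORef input1) [1 .. 1000000]   -- a million changes, zero recomputation
      writeIORef input2 42
      print =<< derived                          -- computed once, on demand: 1000042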


I found this to be a great introductory course on the topic (Erik Meijer is one of the instructors):

https://www.coursera.org/course/reactive


I just can't get what the buzz is about. I must be missing something. Was I doing reactive programming when I was doing signals/slots in Qt/C++?


Very interesting. One thing I notice about this architecture is that, in the example given, the four-ish features (loading suggestions, displaying suggestions, reloading all, reloading one) are pretty tightly complected (especially further on, for example in the combineLatest area). Is this necessarily the case? Is there some better way of accomplishing this?


and streams are..

  (define-syntax scons
     (syntax-rules ()
        ((_ x expr) (cons x (delay expr)))))

  (define scar car)

  (define (scdr xs) (force (cdr xs)))

  (define snull? null?)
and filter for streams is, OMG..

   (define (sfilter f xs)
     (cond ((snull? xs)   '())
           ((f (scar xs)) (scons (scar xs)
                                 (sfilter f (scdr xs))))
           (else (sfilter f (scdr xs)))))
and of course..

  (define (smap f xs)
      (if (snull? xs)
          '()
          (scons (f (scar xs))
                 (smap f (scdr xs)))))
I am doing reactive programming? Am I?


To follow that line, all of the users of electronic spreadsheets were/are doing RP :) (not completely a joke actually)


> all of the users of electronic spreadsheets were/are doing RP

I think you are absolutely right. One of the popular examples of where RP is useful is in implementing a spreadsheet program.
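A toy, deliberately non-incremental sketch of that example (illustrative Haskell, nothing resembling a real spreadsheet engine): cells are literals or formulas over other cells, and "reacting" to a change is just re-deriving the outputs from the new inputs.

    import qualified Data.Map as M

    -- a cell is a literal or a formula over the other cells' values
    data Cell = Lit Double | Formula (M.Map String Double -> Double)

    eval :: M.Map String Cell -> M.Map String Double
    eval sheet = values
      where values = M.map go sheet
            go (Lit x)     = x
            go (Formula f) = f values   -- lazily pulls whatever it references

    main :: IO ()
    main = do
      let sheet = M.fromList
            [ ("A1", Lit 2)
            , ("B1", Lit 3)
            , ("C1", Formula (\v -> v M.! "A1" + v M.! "B1")) ]
      print (eval sheet M.! "C1")                            -- 5.0
      -- "react" to an edit of B1 by re-evaluating with the new input
      print (eval (M.insert "B1" (Lit 10) sheet) M.! "C1")   -- 12.0

A real spreadsheet recomputes only the cells downstream of the edit, which is exactly the incrementality discussion elsewhere in this thread.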


That has no notion of time or distance between "events".


So they have "ticks" in a stream? OK, one more cond clause.)


> "Reactive Manifesto sounds like the kind of thing you show to your project manager or the businessmen at your company."

> "FRP is programming with asynchronous data streams."

So, is Akka reactive or not?


It's not functional reactive in the Elliott sense. It is reactive in the general meaning of reactivity, but that is also a spectrum (actors don't work well for UI, for example).


How similar is this to Flow-Based Programming by Paul Morrison?


If FBP is a form of dataflow programming, I think you could say it is reactive.

I would try to separate dataflow from the FRP style that reifies the time step into first-class values, e.g. with "signals" or "behaviors", or from the "arrowized" forms of FRP.


I've studied "fbp" in more depth than "frp" but my conclusions were that they do achieve very similar things with vastly different semantics and terminology.

One of the interesting aspects of FBP and many dataflow systems is that they can also be defined as a schema for actor-model messaging. I.e. you still have the general actor model as the low-level abstraction, but you assume actors to be archetypal and immutable between runs of the network, existing only to implement a finite, typed, addressable set of "in" and "out" queues on each actor instance, which are subsequently configured into the final network. An explicit model of time is not used - "all messages delivered" is the standard termination state - but if one breaks the Morrison model to add local actor state again, it becomes trivial to defer message delivery across runs of the network.
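A very rough sketch of that shape (illustrative Haskell, ignoring concurrency and bounded buffers, and nothing to do with Morrison's actual APIs): components are archetypal and stateless, each exposing typed in/out ports, the "network" is just the configuration that wires ports together, and "all messages delivered" is when the input is exhausted.

    type Port a = [a]                  -- an ordered message queue on a component

    double :: Port Int -> Port Int     -- an archetypal, stateless component
    double = map (* 2)

    evensOnly :: Port Int -> Port Int
    evensOnly = filter even

    -- the network configuration: connect evensOnly's out-port to double's in-port
    network :: Port Int -> Port Int
    network = double . evensOnly

    main :: IO ()
    main = print (network [1 .. 10])   -- [4,8,12,16,20]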


Nice - I've been waiting to find a spot to jump into the dataflow style, and the combination with actors you describe is interesting.



