To be clear, I'm saying that a (nice) recomputation semantics is desirable because it lets you reason about your program without reference to time. Obviously, an implementation is preferable if it can speed things up via incremental computation, and that ought to be possible as well.
AFRP is a good example here. AFRP semantics are easily stated recomputationally, over whole input signals. That framing makes it natural to talk about causal AFRP and bounded-history AFRP, which are useful (if somewhat obvious) ways to characterize a computation. The efficient implementations of causal AFRP are then themselves incremental.
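For concreteness, here's a minimal sketch of what I mean by a recomputational reading (the SF type and the list-based signal model are mine for illustration, not from any particular AFRP library): a signal function is literally a function on whole input streams, and causality is a property you can state about it with no mention of evaluation order.

-- A recomputational semantics: a signal function maps the whole
-- input stream to the whole output stream.
newtype SF i o = SF { runSF :: [i] -> [o] }

-- A running sum, defined with no reference to time or stepping.
sumSF :: Num a => SF a a
sumSF = SF (scanl1 (+))

-- Causality is then a property of an SF: for every n, the first n
-- outputs depend only on the first n inputs. sumSF is causal;
-- SF reverse is not.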
Perhaps I'm not understanding the terminology, but every implementation I've ever seen consumes input incrementally. E.g.
-- (with TypeOperators; the second also needs ExistentialQuantification)
data i ~> o = A (i -> (o, i ~> o))              -- step returns the output and the next transformer
data i ~> o = forall s. A s (i -> s -> (o, s))  -- Mealy style: an initial state plus a step function
In each case, the inputs are consumed sequentially, each output is returned immediately, and the local state is updated for the next step.
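To make "consumed sequentially" concrete, here's a hypothetical driver for the first encoding (runA and count are my names, not from any library): each step produces one output and the transformer to use on the rest of the input.

-- Feed inputs one at a time; each step yields an output and the
-- continuation for the remaining input.
runA :: (i ~> o) -> [i] -> [o]
runA _     []     = []
runA (A f) (x:xs) = let (y, a') = f x in y : runA a' xs

-- A counter whose local state lives in the continuation's closure.
count :: a ~> Int
count = go 0
  where go n = A (\_ -> let n' = n + 1 in (n', go n'))

-- runA count "abc" == [1,2,3]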
If you're referring to the ability of a new event to update only part of the signal network (in Elliott's terminology, "push" semantics), then there's Amsden's TimeFlies library.