Applicative WTF? (plover.com)
95 points by lainon on Oct 23, 2018 | hide | past | favorite | 29 comments



It seems to me that people often overthink applicative. It falls right out of functor. Normally when you call fmap, you pass it a function that takes a single parameter. What happens if you pass it a function that takes more than one parameter (the other parameters being curried)? fmap applies a value from the functor to the function. Since it only passes one parameter, the result is a partially applied function. The result is that you end up with a functor containing partially applied functions.

For example, take a list of number and a function that takes 2 parameters, x and y. Maybe the function adds x to y (it doesn't matter). Pass these to fmap. Your result will be a list of partially applied functions with x being set to the values in the list. Now we want to apply the y parameter. We need something similar to fmap, but rather than taking a function and a functor, it needs to take a functor containing functions and a functor containing data. It will then apply the data to the functions and you will end up with a functor containing the result.
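A minimal sketch of that list example (the names `partials` and `results` are mine):

```haskell
-- fmap with a two-argument function yields a list of partially
-- applied functions; (<*>) then supplies the y parameter.
partials :: [Int -> Int]
partials = fmap (+) [1, 2, 3]

results :: [Int]
results = partials <*> [10, 20]
-- results == [11, 21, 12, 22, 13, 23]
```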

That's all applicative is. It applies the subsequent parameters to the result of having run fmap on functions that have more than one parameter. It shows up in a lot of different places, though and is incredibly handy. When I'm writing FP style code in languages that don't normally curry parameters, I often find myself currying parameters precisely because I want applicative ;-). It's just super convenient.

NB: pure being part of applicative is really interesting. I don't know for sure, but I'm relatively sure that a functor is applicative IFF you can define pure for it, which is really interesting to think about.

Edit: weird wording


> It seems to me that people often overthink applicative. It falls right out of functor. Normally when you call fmap, you pass it a function that takes a single parameter. What happens if you pass it a function that takes more than one parameter (the other parameters being curried)? fmap applies a value from the functor to the function. Since it only passes one parameter, the result is a partially applied function. The result is that you end up with a functor containing partially applied functions.

I blame this on Haskell's overreliance on currying. If applicative was explained in terms of (pure and) liftA2 aka map2, which is completely equivalent to defining it in terms of ap, the analogy with Functor would be much more obvious.
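A sketch of that framing (`ap'` is a name of my own):

```haskell
import Control.Applicative (liftA2)

-- fmap lifts a one-argument function; liftA2 lifts a two-argument
-- one, which makes the step from Functor to Applicative uniform.
sumMaybes :: Maybe Int
sumMaybes = liftA2 (+) (Just 1) (Just 2)  -- Just 3

-- (<*>) is recoverable from liftA2, so the two presentations
-- really are equivalent:
ap' :: Applicative f => f (a -> b) -> f a -> f b
ap' = liftA2 ($)
```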

> NB: pure being part of applicative is really interesting. I don't know for sure, but I'm relatively sure that a functor is applicative IFF you can define pure for it, which is really interesting to think about.

Not true. Const b (i.e. f a = b for all a) forms a valid functor, and you can define pure as long as there's at least one value of type b (pure a = v, where v is that value), but if b is some type that does not form a monoid (e.g. nonzero octonions) then Const b does not form an applicative.
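A sketch of that counterexample with a local Const type (the standard library's Const has the same shape):

```haskell
newtype Const b a = Const { getConst :: b }

-- A lawful Functor: the function is simply ignored.
instance Functor (Const b) where
  fmap _ (Const b) = Const b

-- pure only needs some value of type b, but (<*>) must combine the
-- two wrapped b values, which is why the standard instance demands
-- Monoid b. Without a monoid there is no lawful choice.
instance Monoid b => Applicative (Const b) where
  pure _ = Const mempty
  Const x <*> Const y = Const (x <> y)
```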


Cheers! That's a great example.


That's a great explanation. This is similar to the post's example with tree of functions <*> tree of values = tree of results.


> Last time I used Haskell, Applicative wasn't even a thing. I had read the McBride and Paterson paper that introduced applicative functors, but that was years ago, and I didn't remember any of the details.

In the previous post he learns how to use an Applicative.

In this post, he learns how to define Applicative for a new type he's created.

Yes, Applicative is abstract enough that it takes a few concrete examples to get a feel for it and grok what its purpose is.

This is the work that's required to learn a new abstraction. Any abstraction sufficiently different from your previous experience requires this kind of effort to become legible to you.

Once you do, it kind of just 'snaps' and you get it.

Then you get to leverage that knowledge in all the places the abstraction is reused. Instead of having to read library code that does its data manipulation in a bespoke way, you can just look at the type and say "yes, this is an Applicative; I know how to manipulate it, and it will obey laws I can depend on", without reading the source code.


Given that he already knew how to define a Monad instance for his type, the Applicative instance could simply be:

    import Control.Monad (ap)
    instance Applicative Tree where
       pure  = return
       (<*>) = ap
Also, given a Traversable instance—which is essentially just "fmap with effects"—one can get Foldable and Functor for free via foldMapDefault and fmapDefault.
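A sketch of that second point, with a hypothetical Pair type; only `traverse` is written by hand:

```haskell
import Data.Traversable (fmapDefault, foldMapDefault)

data Pair a = Pair a a deriving (Eq, Show)

instance Traversable Pair where
  traverse f (Pair x y) = Pair <$> f x <*> f y

-- Functor and Foldable fall out of traverse for free:
instance Functor Pair where
  fmap = fmapDefault

instance Foldable Pair where
  foldMap = foldMapDefault
```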

The really interesting cases are the ones where you can define an instance for Applicative but not Monad.


An example of this is the two possible applicative instances for lists. One is based on the monad instance and takes a cross product when presented with two lists. The other zips its arguments; it is the more interesting one imho:

    import Control.Applicative 
    newtype ZList a = ZList { runZList :: [a] } 
    instance Functor ZList where 
        fmap f = ZList . fmap f . runZList

    instance Applicative ZList where 
        pure a = ZList [a]
        (<*>) f x = ZList $ zipper (runZList f) (runZList x)
                where zipper (f:fs) (x:xs) = f x : zipper fs xs 
                      zipper [] _ = [] 
                      zipper _ [] = []
Difference of behaviour:

    *Main> ZipList [(*3),(*5),(*6)] <*> ZipList [1,2,3,4]
    ZipList {getZipList = [3,10,18]}
    *Main> [(*3),(*5),(*6)] <*> [1,2,3,4]
    [3,6,9,12,5,10,15,20,6,12,18,24]


Yes, though that definition results in:

    pure id <*> ZipList [x, y, z] == ZipList [x]
and thus breaks one of the Applicative laws:

    pure id <*> v == v
The standard ZipList definition is `pure a = ZipList (repeat a)` (an infinite list).
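A quick check that the infinite pure restores the identity law (`lawHolds` is a name of my own):

```haskell
import Control.Applicative (ZipList(..))

-- Zipping against `repeat id` keeps every element, so
-- pure id <*> v == v holds for the standard instance.
lawHolds :: Bool
lawHolds = getZipList (pure id <*> ZipList [1, 2, 3 :: Int]) == [1, 2, 3]
```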

The issue with defining a Monad instance for ZList is that mapping the function on the right of (>>=) over the ZList on the left gets you a ZList of `ZList b`, with no meaningful way to zip these ZLists together.


Right

It is interesting that he knew this fact, and it would be nice if this code could be automatically derived too, but I think that should not replace the learning process.

It is more of a nit for advanced users


Indeed, the learning process is important. I think it's particularly instructive to look at how `ap` is implemented, and what is left out in the transition from Monad to Applicative:

    mf `ap` ma = do
       f <- mf
       a <- ma
       return (f a)
With a Monad, the action on the right of the bind operator is a function of the result of the action on the left. The second operand of the `ap` function, however, does not have any access to the result of the first operand. The two operands are evaluated independently, apart from their side-effects. That reflects the fundamental difference between Applicative and Monad: an Applicative instance just sequences effects and combines results, whereas a Monad instance allows effects to depend on previous results.
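The contrast in miniature, using Maybe (a sketch; the names are mine):

```haskell
-- Monad: the second step can inspect the first result.
dependent :: Maybe Int
dependent = Just 3 >>= \n -> if n > 0 then Just (n * 2) else Nothing

-- Applicative: both actions are fixed up front; only the
-- results are combined.
independent :: Maybe Int
independent = (+) <$> Just 3 <*> Just 4
```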


Does this mean that many Monads have two possible Applicative instances? For the two orders of "running" mf and ma?


All Applicatives have those two variants, not just Applicatives which are also Monads. There is a Backwards newtype in Control.Applicative.Backwards which reverses the order of effects for any Applicative:

    GHCi> import Control.Applicative.Backwards
    GHCi> :i Backwards
    newtype Backwards (f :: k -> *) (a :: k)
      = Backwards {forwards :: f a}
    GHCi> forwards $ (++) <$> Backwards ("foo" <$ print 1) <*> Backwards ("bar" <$ print 2)
    2
    1
    "foobar"
As you can see, the result is the same but the effects are reversed. The definition is simple:

    instance Applicative f => Applicative (Backwards f) where
       pure = Backwards . pure
       Backwards f <*> Backwards a = Backwards $ (flip ($)) <$> a <*> f


I don't know about "many", but certainly some - e.g. you could form a "reversed MonadWriter" applicative that would write the messages from ma before the messages from mf. For a lot of monads the effects are commutative though, in which case the "two possible" Applicatives are actually the same, e.g. Maybe obviously behaves the same whichever order you "run" mf and ma in.


My "many" exactly meant to exclude the commutative case :)



Yet another reason why learning Algorithms + Data Structures + Paradigms is much more powerful than focusing too much on language specifics.


The reason GHC won't infer the definition of Applicative is that there can be multiple valid `Applicative` instances for a type (unlike `Functor`, which has a unique (non-trivial) instance).

The canonical example of this is lists, with 2 valid instances of `Applicative`.

The author of the post seems to have this realization, but I wanted to call it out, just in case.


Any Applicative can be turned backward, although the backward version will not necessarily be compatible with the Monad instance, and sometimes the backward one is the same as the original.


That's true, but only one of the instances is 'compatible' with the Monad definition, which requires (<*>) = ap (the first 'recipe' he showed for implementing Applicative).


I think that law is slightly too restrictive; there’s a useful class of Monad instances where it doesn’t hold: commutative monads, in which:

    x <- f
    y <- g
Is equal to:

    y <- g
    x <- f
Provided that “x” is not free in “g”. A good example is an async monad that provides concurrency in (<*>) and only blocks when there’s a data dependency in (>>=)—the result is the same regardless of evaluation order, but the implementations (and performance characteristics) are different. You could handwave it away by arguing that they’re “morally equivalent”, but I think it pays to be precise about properties like commutativity of actions.
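Maybe is one everyday commutative monad; swapping independent binds cannot change the result (a sketch, with names of my own):

```haskell
-- The two orderings from the parent comment, desugared:
ab, ba :: Maybe (Int, Int)
ab = Just 1 >>= \x -> Just 2 >>= \y -> pure (x, y)
ba = Just 2 >>= \y -> Just 1 >>= \x -> pure (x, y)
-- ab == ba == Just (1, 2)
```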


> the result is the same regardless of evaluation order, but the implementations (and performance characteristics) are different. You could handwave it away by arguing that they’re “morally equivalent”, but I think it pays to be precise about properties like commutativity of actions.

You have to decide whether they're equivalent or they're not, and if they're not equivalent (for your purposes) you probably shouldn't provide both instances. If you implement (<*>) and ap but they're not equivalent, a future maintainer will get a nasty surprise sooner or later.


I mean if you can’t observe any difference from safe code, then despite having different intensional/operational definitions, they are extensionally equivalent, and that’s mostly what I care about.

The ApplicativeDo desugaring, which uses the Applicative combinator instead of bind in exactly the circumstance I described above, was added specifically to support this use case when we (at Facebook) were writing Haxl, which is just such an example of a commutative monad; it’s built on IO, but doesn’t expose IO to safe code, so there’s no way to observe the concurrency from inside Haxl.

Granted, I’m biased because I have a somewhat unpopular definition of “effect” and “side effect”—if an effect can’t be observed from code by a particular observer that is safe wrt some property, then to me it’s not a side-effect.

It’s pretty widely accepted that within an ST action, mutating an STRef is a side effect to any observers within that action, but from the POV of the caller, the code is pure and thus side-effect–free; but I argue that mutating an IORef within an IO action is also not side-effectful, say from the POV of another thread, if the IORef is never shared—e.g.:

    -- Morally equivalent to “pure 1”.
    do
      x <- newIORef (0 :: Int)
      modifyIORef' x (+ 1)
      pure x
Again, all I’m really saying is that you need to be precise about what model of effects you’re talking about, and what properties you guarantee re. safety, commutativity wrt other effects, &c.


> if you can’t observe any difference from safe code, then despite having different intensional/operational definitions, they are extensionally equivalent, and that’s mostly what I care about.

That's true if the only things you care about are the things that are visible from safe code. But if you care about performance characteristics (which is presumably the whole point of this kind of monad) then code that gives the same result but has different performance characteristics is not equivalent for your purposes.

Ultimately, if x and y are marked as equivalent then a future maintainer will expect to be able to blindly replace x with y - and by convention that's true of (<*>) and ap. So you shouldn't define (<*>) and ap in such a way that replacing one with the other will change important characteristics of the code - you should only make things look equivalent in your code if they are equivalent for your purposes.


Was that last line supposed to be "readIORef x" instead of "pure x"? If you return the IORef then the result is equivalent to "newIORef 1", not "pure 1"—and newIORef does have observable side-effects. (Two IORefs are distinct even if they hold the same value.)


Tree is just the free monad over the Pair functor, what's the problem?
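A sketch of that claim (names hypothetical): the free monad over a two-hole functor is a leaf-labeled binary tree, with Pure playing the role of the leaf constructor.

```haskell
{-# LANGUAGE DeriveFunctor #-}

data PairF x = PairF x x deriving Functor

data Free f a = Pure a | Roll (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Roll fx) = Roll (fmap (fmap g) fx)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Roll fg <*> x = Roll (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Roll fx >>= k = Roll (fmap (>>= k) fx)

-- Free PairF a is isomorphic to a binary tree with values at
-- the leaves; (>>=) grafts a subtree onto every leaf.
type Tree a = Free PairF a

leaves :: Tree a -> [a]
leaves (Pure a)           = [a]
leaves (Roll (PairF l r)) = leaves l ++ leaves r
```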


"Digging deeper into why this worked this way was interesting, but it's bed time, so I'm going to cut the scroll here." I can very roughly intuit why this is: the difference between the two Applicative instances the author noticed was that the shape of the first tree was being fitted into the second tree vs the shape of the second into the first. So if you "bind" the lefthand side of (<*>) first, its shape will be "fitted" to the righthand side, and if you do the opposite you get the second "fitted" to the first shape.

I believe it's the same as iterating over the righthand side of (<*>) first in its implementation instead of the lefthand side, like in their first, wrong implementation?

But my reasoning feels loose here. I probably just need to look at their implementation of Monad Tree closer. :-)


I find it a little easier to decide what the Applicative instance "should" do by starting with

    pair :: f a -> f b -> f (a, b)

from which you can derive the rest using pure and fmap
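A sketch of that derivation (the helper names are mine); `pair` itself is expressible with fmap and (<*>), and conversely:

```haskell
pair :: Applicative f => f a -> f b -> f (a, b)
pair fa fb = (,) <$> fa <*> fb

-- Recovering (<*>) and liftA2 from pair, pure, and fmap:
apFromPair :: Applicative f => f (a -> b) -> f a -> f b
apFromPair ff fa = fmap (\(f, a) -> f a) (pair ff fa)

liftA2FromPair :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2FromPair g fa fb = fmap (uncurry g) (pair fa fb)
```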


Yes, and actually I wanted that to be the actual method of Applicative (although I called it "liftPair" instead, but maybe "pair" is better), because it makes sense to me; I agree with you.

(Although for optimization purposes it can help to also have the other stuff it already has too, in case you want to use an optimized implementation instead of the default implementation.)





