Swift Functors, Applicatives, and Monads in Pictures (mokacoding.com)
125 points by mokagio on July 14, 2015 | 100 comments



If functional languages had called them the Mappable, Applicable, and FlatMappable interfaces, and used map(), apply(), and flatMap() instead of operators, it would have avoided a lot of confusion. But I guess that's how you avoid success at all costs.


Honest question: Is there a reason 'flatmap' is any better other than it is what you learned it as somewhere?

I'm not convinced English has a word or convenient phrase for "monad", and it doesn't strike me that "flatMap" is it. How do I "flatMap" Haskell's STM monad? Or a probability monad? (I know the mechanics of the answer, I am specifically referring to the English-induced intuition.) I don't see the intuition being particularly correct; flatMap is really tied to List, which isn't a particularly great thing to be tied to since it isn't used much. List's monadic interface is cute for some little algorithms, sure, which makes it disproportionately appear in examples, but list's monad implementation has complexity O(m^n) (where m is the size of the lists and n the number of lists you're going to consider) and you need some algorithm that gives you some serious culling on that before it's practical for very much.
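
To make the O(m^n) point concrete (a sketch in Swift-2-era syntax): each nested flatMap over the list monad multiplies the number of combinations considered.

    let xs = [1, 2, 3]
    let ys = [10, 20, 30]

    // m = 3, n = 2: already m^n = 9 combinations, and every
    // additional nested flatMap multiplies by another factor of m.
    let pairs = xs.flatMap { x in
        ys.map { y in (x, y) }
    }
    // pairs has 9 elements: (1, 10), (1, 20), ... (3, 30)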


> flatMap is really tied to List

It's tied to collections, not specifically to List. A number of (but certainly not all!) important monads are (or can be viewed as) collections, including Optional/Maybe (which can be viewed as a collection with cardinality restricted to being either 0 or 1).
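
A sketch of that view in Swift (Swift-2-era syntax; toArray and half are hypothetical helpers):

    // Optional seen as a collection of zero or one elements.
    func toArray<T>(x: T?) -> [T] {
        return x.map { [$0] } ?? []   // nil -> [], .Some(v) -> [v]
    }

    func half(n: Int) -> Int? {
        return n % 2 == 0 ? n / 2 : nil
    }

    let a: Int? = 4
    let b: Int? = nil
    toArray(a)        // [4]
    toArray(b)        // []
    a.flatMap(half)   // Optional(2)
    b.flatMap(half)   // nil -- like flatMapping an empty collection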


> It's tied to collections, not specifically to List

Actually, it transcends collections in general. As others have noted, the term "flatMap" is the concatenation of "flatten" and "map." There are other containers which support flatMap and satisfy the Monad laws[1]. An example of one in Scala is the Future[2] type. Its flatMap is described as:

  Creates a new future by applying a function
  to the successful result of this future, and
  returns the result of the function as the new
  future.
Conceptually, this behaves similarly to how a flatMap for a collection type behaves. Due to the nature of the Monad laws[1], this similarity is to be expected.

1 - http://eed3si9n.com/learning-scalaz/Monad+laws.html

2 - http://www.scala-lang.org/api/2.11.7/#scala.concurrent.Futur...
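
A minimal, hypothetical Future-like type sketched in Swift-2-era syntax (assumed names; deliberately not thread-safe, and not Scala's scala.concurrent.Future) shows that its flatMap has the same shape as a collection's:

    final class Future<A> {
        private var result: A?
        private var callbacks: [(A) -> Void] = []

        // Deliver the value and fire any pending callbacks.
        func complete(value: A) {
            result = value
            for cb in callbacks { cb(value) }
            callbacks = []
        }

        // Register interest in the eventual value.
        func onComplete(cb: (A) -> Void) {
            if let v = result { cb(v) } else { callbacks.append(cb) }
        }

        // "Creates a new future by applying a function to the
        // successful result of this future" -- the Scala description
        // quoted above.
        func flatMap<B>(f: (A) -> Future<B>) -> Future<B> {
            let next = Future<B>()
            onComplete { a in f(a).onComplete { b in next.complete(b) } }
            return next
        }
    }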


Well that's just it, though. The proper way of generalizing your notion of mapping is Functor and your notion of flattening is Monad. So the intuition behind the word has bought nothing.


> The proper way of generalizing your notion of mapping is Functor and your notion of flattening is Monad.

You are quite right that the "map part" of flatMap is Functor, as Monads are a model of Functor. IOW, if there exists a Monad for some container, then there also exists a Functor for it.

However, flattening in and of itself does not fully define the flatMap (a.k.a. "bind") part of a Monad. It is its use together with the map operation that makes flatMap/bind satisfy the rules for being a Monad.

Flatten on its own is extremely useful of course. It's often used in definitions of Join type classes.


It does given a Functor definition. Typically in Category Theoretic presentations Monad is defined as a functor T along with eta and mu, operations that are (a -> T a) and (T (T a) -> T a), 'unit' and 'join'.

The rules for Monad are perfectly well-stated in terms of join—perhaps even better stated if you're one to like Category Theoretic diagram chasing.


Again, you are quite right. I did not present the other ways to define a conforming Monad and may have mistakenly implied that the unit/flatMap form is "the" way to make one.

For completeness, the forms I am aware of are (in no particular order):

* unit and flatMap

* unit and compose

* unit, map, and join

The valid form which you reference is a model of the last one.

AFAIK, for any container 'T' to be able to satisfy the Monad laws, at least one of the aforementioned three combinators must exist for the container.
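
For instance, a quick sketch with Swift arrays (hypothetical helper names) of how two of the formulations interconvert: join from flatMap, and flatMap from map plus join.

    // join (flatten) defined via flatMap:
    func join<T>(nested: [[T]]) -> [T] {
        return nested.flatMap { $0 }
    }

    // flatMap recovered from map + join:
    func flatMapViaJoin<T, U>(xs: [T], _ f: T -> [U]) -> [U] {
        return join(xs.map(f))
    }

    flatMapViaJoin([1, 2]) { n in [n, n * 10] }  // [1, 10, 2, 20]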


> AFAIK, for any container 'T' to be able to satisfy the Monad laws, at least one of the aforementioned three combinators must exist for the container.

All three of them must, actually, since if one of them exists, all of them do (and you can define the other two in terms of the one you know.)


"Compose" being Kleisli composition?


> Actually, it transcends collections in general.

What I mean when I say flatMap is tied to collections is in the same sense that I read the poster to whom I was responding when they said it was tied to list, that is, that the intuition that the name "flatMap" leverages is tied to that construct. Obviously, the operation it describes is more general, but the farther you get from collections, the less useful the intuition that the name leverages is.

(Whether this is better or worse for comprehension than the Haskell style of naming Monads and other related constructs and their associated operations is endlessly debatable and highly subjective.)


Gotcha. In reading the post to which you originally replied, it appeared to me that there was an underlying assumption of Monads being specific to collections. My bad.


Interesting quantification of Option. Mapping Boolean logic to binary arithmetic somehow?

It's also impressive how much complexity arises from that notion of multiplicity.


Follow-on question, are there any useful monadic constructs not tied to collections (including optional as you said)?


IO, State, Parsers, and STM are useful monads for which the collection metaphor doesn't seem to be particularly helpful.


STM's a good one. I'd consider I/O and State as solving a self-inflicted problem -- I can have side-effect-creating I/O and State in just plain C with barely any type system.


Is Maybe tied to collections? State? Reader? Cont? What about Eff from extensible-effects? I use these every day.


As mentioned elsewhere here, Maybe is naturally considered a container of zero-or-one elements. The others are good examples, though.


Parsers are an obvious example.


Silly and wonderful, there is always

    data Fake a = Fake
which is trivially a monad

    instance Monad Fake where
      return _ = Fake
      Fake >>= _ = Fake
yet it clearly contains nothing.


Grandparent asked for useful though :P


Who says Fake isn't useful? It's very useful for stating that nothing can happen :)


Future, Either, Writer, and Free are the non-collection ones I use most often.


Futures are another example.


I'd call a Future a threadsafe queue with one element going through it IMO; it's just as much of a collection as Optional.


That's the beautiful thing about types which satisfy the Monad laws. Because both Future and Option (in Scala if not other languages) satisfy these laws, I think it perfectly valid to consider a Future to be a collection type with a cardinality of 0 or 1 that also abstracts latency.


Yeah, but I can do that without category theory, it's just obvious. I still don't understand what 'monad' offers over 'flatMappable' besides the ability to use 'endofunctor' in the same sentence.


> I still don't understand what 'monad' offers over 'flatMappable'

It's several characters and syllables shorter, and doesn't appeal to an intuition which is distant from many of the productive uses of the construct.


That's true. The concept of "flatmapping" an array is easy to get, but what about flatmapping an optional/maybe value?


> ... what about flatmapping an optional/maybe value?

The same concepts apply. With an option type, flatMap can be defined in Scala Option[1] types as:

  flatMap[B](f: (A) ⇒ Option[B]): Option[B]
For the empty option situation (None in Scala), the result is a None. For the populated case (Some in Scala), the result is what f produces when given the value contained in the original Option[A] type. Of course, f is free to produce a None for its result should it so desire.

What makes Monads neat IMHO is that so long as the container type adheres to the Monad laws[2], then the developer can fully rely on the behaviour conforming to a common set of expectations.

1 - http://www.scala-lang.org/api/2.11.7/#scala.Option

2 - http://eed3si9n.com/learning-scalaz/Monad+laws.html
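
The same two cases in Swift (a sketch; reciprocal is a hypothetical helper):

    func reciprocal(n: Int) -> Double? {
        return n == 0 ? nil : 1.0 / Double(n)
    }

    Int("4").flatMap(reciprocal)   // Optional(0.25)
    Int("0").flatMap(reciprocal)   // nil -- f produced the empty case
    Int("x").flatMap(reciprocal)   // nil -- empty option in, empty out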


I think the parent was saying that "flatMap" failed to give them intuition around Option, not that Option wasn't a monad.


It certainly doesn't have to be flatmap. But the idea is that someone takes some time to think up an English name and does a bit of user testing so that it makes sense to regular programmers.

This seems to be the approach that Elm is taking. (Although they're reluctant to add monads at all since they take ease of use seriously.)


We all know that naming is one of the hard problems in computer science. Monads in programming have been around for 20+ years now, and you are certainly not the first one to complain about their name. So where are the better alternatives? Surely someone must have come up with a better one by now?

Instead we get nice "English" names which only work for certain monad instances and fail to adequately describe others.

Elm uses "andThen" instead of "bind", which works pretty well for the monadic types in its core library. But you need to start stretching your imagination to see why "andThen" makes sense for other monadic types. Honestly, I'm fine with "andThen", especially in the context of Elm and the monads it implicitly uses. But in the wider world of programming where exotic but useful monad instances exist I'd rather stick to the most general name.

By the way, the more pressing reason for Elm not having monads is the fact that it doesn't have typeclasses (or a similar concept) and doesn't allow you to parametrize over higher-kinded types (like Maybe).


What's wrong with 'bind' ?


What's being bound?

Perhaps "then" would work. JavaScript promises use it to good effect, and they're a monad.


Smaller computations are being bound into a larger computation, where the monad provides the code that is executed 'between the lines'.

(BTW, my own take on renaming a monad into something less scary is daisychain, or, slightly more to-the-point, chainable.)


    (>>=) :: M a -> (a -> M b) -> M b
The `a` is bound in the second argument.
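
In Swift terms (a sketch), the closure parameter plays the same role: flatMap binds the wrapped value to a name for the rest of the computation.

    let m: Int? = 3
    let result = m.flatMap { a in   // a is bound to 3 here
        Optional(a + 1)
    }
    // result == Optional(4)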


Ahahah, that might be true :) Although because those concepts come from category theory, i.e. https://en.wikipedia.org/wiki/Functor and there's more to it than just "it defines map", keeping the "academic" name seems fair enough.


The problem is that the category-theory view of functors and monads and monoids completely dominates the literature.

Imagine if, when you first started programming, and you wanted to do something with numbers, and you found out that in your programming language, the way you did that was to use something they called "Integers" (I know, right? ridiculous). So you google "Integers" and you find https://en.wikipedia.org/wiki/Integer, which helpfully tells you

"In elementary school teaching, integers are often intuitively defined as the disjoint union of the (positive) natural numbers, the singleton set whose only element is zero, and the negations of natural numbers."[1]

And you get the impression that to add a couple of numbers together you're actually going to have to understand cardinality and group theory and so on.

That's kind of where we are with monads, with the insistence on the academic language.

Though it does suggest that I could have fun by persuading beginning programmers that to really grok text processing they need to start off with a thorough grounding in string theory.

[1] I checked with my son, who goes to elementary school. I'm not sure he even knows what a disjoint union is. I'll have to have words with the teacher.


I'd forgotten how spectacularly academic Wikipedia's math articles are. It actually seems a little better than it used to be, but for example good luck learning what the hell a quaternion is by reading the Wikipedia article, even if you have sufficient background to understand and use them in practice. The German article is much better.


Right. You don't want to try to learn something mathematical from the Wikipedia article on it - Quaternions are a great example. The trouble with most of the writing about monads is that it's all at the level of the opening paragraph of a Wikipedia math article.

We have to do better at finding a way to teach this stuff.


When I started using quaternions for rotations, which -- if we're honest -- is the #1 practical application, I had much more success building my intuition around this Wikipedia section:

https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotati...


> The trouble with most of the writing about monads is that it's all at the level of the opening paragraph of a Wikipedia math article.

IME, that's not at all true; most of the theory-heavy sources I've seen are explicitly framed as alternatives to the large body of intuition-first tutorials that avoid theory almost entirely, focusing on a few popular applications (usually, collections and IO.)


Being able to use something and knowing the whys are two different things. Many people know Calculus (practice), but not many know Real Analysis (the theory behind Calculus, the academic bit).

"In mathematics, the quaternions are a number system that extends the complex numbers."

Naturals are a subset of integers. Integers are a subset of rationals. Rationals are a subset of reals. Reals are a subset of complex numbers. The latter are a subset of quaternions. It's not difficult to construct each number system. If you look inside any undergrad math textbook, you'll find the exact same writing style showcased in the Wikipedia article on quaternions.


You're exaggerating :) From the wiki link, the very first definition works just fine. In fact, I don't see anything about unions or disjointness in there. Certainly, no group theory. You don't need to know how to construct, count (cardinality), or know the properties of integers (number/group theory) to know what they are for the purposes of programming. No, there's no cabal making everything more difficult than it needs to be ;D I think wiki always gets unfairly maligned for this sort of thing.


Are there any nice, one-step-at-a-time introductions to Haskell that stick to lowly, commonplace English rather than insisting that "just a monoid in the category of endofunctors" is universally preferable because technically it technically is technically more technically correct?


I haven't read it but you might want to check out coolsunglasses'[0] book, Haskell Programming from first principles[1]. He wrote it specifically for people who have no background in this stuff.

[0] https://news.ycombinator.com/user?id=coolsunglasses

[1] http://haskellbook.com/


Nobody insists "monoid in the category of endofunctors" is universally preferable. It's a joke.

What you'll usually see people insist on is describing monads by their operations, rather than through metaphor. That is, a monad is a type parametrized by some other type, with the following two functions:

    1) return :: a -> m a
    2) (>>=) :: m a -> (a -> m b) -> m b

such that three laws hold:

    a) return x >>= f == f x
    b) m >>= return == m
    c) (m >>= f) >>= g == m >>= (\x -> f x >>= g)

For monads in Haskell, this is the technically correct definition.

That said, most people will tell you the best way to understand monads is to look at a few basic monad instance implementations (Maybe, Either a) and program with them, slowly learning new instances.
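
As a sketch, here are the laws instantiated at Swift's Optional (return is wrapping in an Optional, >>= is flatMap; f and g are hypothetical):

    func f(x: Int) -> Int? { return x + 1 }
    func g(x: Int) -> Int? { return x * 2 }
    let m: Int? = 3

    // a) left identity: return x >>= f == f x
    Optional(3).flatMap(f)               // == f(3) == Optional(4)

    // b) right identity: m >>= return == m
    m.flatMap { Optional($0) }           // == m == Optional(3)

    // c) associativity
    m.flatMap(f).flatMap(g)              // Optional(8)
    m.flatMap { x in f(x).flatMap(g) }   // Optional(8)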


To be fair, I don't know a single person that explains things using

> "just a monoid in the category of endofunctors"

In fact, even among people who DO know the category theory necessary to understand this expression it's considered a very obtuse and useless explanation. Especially in the context of Haskell.

As for good readable introductions, I consider Wadler's paper "Monads for Functional Programming" to actually be a pretty good introduction. Another good intro is "You Could Have Invented Monads! (And Maybe You Already Have.)" http://blog.sigfpe.com/2006/08/you-could-have-invented-monad...


To add to chongli's suggestion, I learned Haskell (and had a great time doing it) from "Learn You a Haskell": http://learnyouahaskell.com/


Those intuitions do not generalize correctly.

It's better for people new to the concepts to realize that they don't understand than to incorrectly believe they understand (based on mediocre collection-oriented analogies).


There's more to the interfaces than merely the operators; there are laws, too. Besides that, the operators give a big advantage when sequencing operations, something you can't do with map() notation.


When you implement an equality operator, or addition/multiplication operators on a type there are 'laws' too, but that doesn't stop it coming down to 'has an equals method'.


Then why not have both? Applicative Functors have both `fmap` and `<$>`. `>>=` could easily be an alias of `flatMap` or `bind` or whatever.


I'm right there with you. I'm a Haskell user; I love user-definable operators.


Because code is read more than it's written; multiple ways of writing the same thing makes it easier to write but harder to read.


Multiple ways of writing the same thing makes it easier to read, often, because you are more free to use the form that is more readable and intuitive in context; that's one reason why natural languages (including written languages) tend to have multiple ways of expressing the same ideas.


Because 1) those concepts already have proper names;

2) there is a ton of (even predating our modern programming era) useful literature using proper names;

3) those proposed “non-confusing” names have zero meaning for a lot of monads/applicatives/… I feel like 99% of those who want them have only seen containers.


Joinable would be a much better name than FlatMappable. FlatMap makes it apply too much to collections.


"flatmap" hints at it consisting of two components, flattening and mapping. Both are to be seen in a generalized context though, and the words are familiar to many when thinking about collections.

1. Mapping - the same as what Functor/Mappable provides. We use this to "map" over a structure, producing a nested structure.

2. Flattening: we just built our nested structure, time to squash it again.

In types akin to templates,

    Structure<A>
    ------------------------ (map something suitable over it)
    Structure<Structure<B>>
    ------------------------ (flatten it)
    Structure<B>
It's not a terrible name, but it probably doesn't help seeing the bigger picture either. "Flattening" a parser, a subroutine, an empty thing -- that's not really something very intuitive.
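
The two steps, sketched with Swift arrays:

    let words = ["ab", "cd"]

    // 1. Mapping: Structure<A> becomes Structure<Structure<B>>
    let nested = words.map { Array($0.characters) }
    // [["a", "b"], ["c", "d"]]

    // 2. Flattening: squash Structure<Structure<B>> into Structure<B>
    let flat = nested.flatMap { $0 }
    // ["a", "b", "c", "d"]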


Join (flatten) is a fundamental monadic operation.

    join :: m (m a) -> m a
I have a parser of a single character and a way to generate new character parsers:

    char :: Char -> Parser Char
    x :: Parser Char

    let y = fmap (char . toUpper) x :: Parser (Parser Char)

It's a parser which creates a new parser out of the input it consumes.

I do think that fmap/pure/join is the easiest-to-understand set of defining functions for monads.


join is one possible fundamental monadic operation. You can also define monads in other ways, such as an endofunctor whose Kleisli structure forms a category. In this case, you have the identity arrow (called return in Haskell) and Kleisli composition (<=< in Haskell) as fundamental operations. Neither of these is more fundamental than the other. You can convert back and forth,

    f >=> g = \x -> f x >>= g
    m >>= f = (f <=< id) m
    
    join = (>>= id)
    m >>= f = join (fmap f m)
So if you're focused on composing monadic operations, thinking about it in terms of >=> is often a good idea.
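
A sketch of >=> for Swift Optionals (Swift-2-era operator syntax; half is a hypothetical helper):

    infix operator >=> { associativity left }

    // Kleisli composition, specialized to the Optional monad.
    func >=> <A, B, C>(f: A -> B?, g: B -> C?) -> (A -> C?) {
        return { x in f(x).flatMap(g) }
    }

    func half(n: Int) -> Int? {
        return n % 2 == 0 ? n / 2 : nil
    }

    let quarter = half >=> half
    quarter(8)   // Optional(2)
    quarter(6)   // nil -- the second half fails on 3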


One thing that I really like about Haskell's practice of naming interfaces after mathematical constructs is that it makes it clear what the rules for those interfaces "should be" and changes the character of a few kinds of discussion.

If we called them "Mappable", "Applicable", "FlatMappable" (yuck), "AssociativeAppendableWithIdentity"... then there would be disagreement about whether some new construct "is really" Applicable. Is summation really "Appending"?

And these arguments are high stakes, because the most important thing about an interface is that it tells the user what they can expect. If I ask for an Appendable, and get something that doesn't follow the rules I think Appendable should have, then... whose code is broken?

In Haskell, it's entirely clear whether a new type "is really" a Monoid or Functor - does it obey the laws? Okay!

Whether "container" is a good way to think of Functor is recognized as a purely pedagogic question (not that there's never drama in pedagogy, but those people getting work done can opt out without worry that it will break their code when they try to interface with others).


I don't think applicable is the same as apply, am I wrong?


eh, it's close. if the semantics are

    apply(fun,args)
it's not much of a mental stretch to do

    apply([fun1,fun2], [arg_for_fun1,arg_for_fun2])
it's probably better to just suck it up and learn the precise terms though. a lot of hazy thinking can hide behind overloaded words.


I appreciate that this article is just a quick code snippet rewrite but it really could have done with some quick (even if ugly) edits on the graphics and as a result ditched all the Haskell asides.

Assuming of course that I'm correct in my assumption that the article is aimed at Swifties looking at FAM and not Haskellers looking at Swift.


People often complain about Haskell syntax and operators looking ugly (I disagree) but dang:

    infix operator <^> { associativity left }

    func <^><T, U>(f: T -> U, a: T?) -> U? {
        return a.map(f)
    }
Gnartown!


What is so ugly about it? Swift tries to be somewhat close to the usual C-syntax (it's supposed to be the successor of Obj-C, after all).

For someone used to C-syntax, I guess the main deviation is that the return type is at the back and that there's a func keyword. As for the operators, there is the addition of generics (what is wrong with <>?), -> to describe the signature of a function (a very sensible choice IMO) and ? for Optionals (again, fine for me).

I don't really see how a statically typed language can substantially improve this, but I'd be happy if someone can prove me wrong :-)


It's not clear to me why it's necessary to include <T, U> at the start of the function definition. In Haskell, unquantified type variables are implicitly universally quantified, i.e. you assume that the signature must be valid for any types T and U.

That's the only thing I think could be substantially improved, though. Personally I find the style where you separate the type signature from the function definition easier to read.

How about

    <^> : (T -> U) * T? -> U?
    <^> (f, a) {
        return a.map(f)
    }
or even

    <^> : (T -> U) * T? -> U?
    <^> (f, a) -> a.map(f)
which is already pretty close to the Haskell equivalent

    (<^>) :: (t -> u) -> Maybe t -> Maybe u
    (<^>) = fmap
Arguably, Haskell syntax could be improved with more built-in syntax

    <^> : (t -> u) -> t? -> u?
    <^> = fmap
though I haven't thought about what this means for parsing the language.


Explicit is better than implicit. Does Haskell allow functions to have value parameters that aren't declared?


Depends what you mean by "not declared". Haskell doesn't require you to provide type signatures, so if you don't put a type signature on a function, you can have parameters that aren't explicitly declared (though they still have an inferred static type, so you could use ghci to see what all the parameters are). If you do put any type signature on the function, then all parameters would have to be included.


If I make a typo when using a value parameter in a function definition, can I accidentally introduce an extra value parameter?

If I make a typo when using a type parameter in a function definition, can I accidentally introduce an extra type parameter?


> If I make a typo when using a value parameter in a function definition, can I accidentally introduce an extra value parameter?

No.

> If I make a typo when using a type parameter in a function definition, can I accidentally introduce an extra type parameter?

You couldn't introduce a new concrete type. You could introduce a new type variable, but the type signature would still have to typecheck, so it would be difficult to come up with a non-contrived example where that would happen.


How do your examples specify that T and U are generics?


As I said, in Haskell "unquantified type variables are implicitly universally quantified" i.e. type variables (which are always lowercase in Haskell, to distinguish them from concrete types which are uppercase) are always generic.

So it's true that in the Swift examples, you would need a convention to distinguish type variables from concrete types, or else you need to explicitly mark them as generic.


Thank you! This actually explained what is going on perfectly.


    let post = Post.findByID(1)
    if post != nil {
      return post!.title
    } else {
      return nil
    }
Or, as a cool C-like would put it:

    let post = Post.findByID(1)
    return post?.title
Just saying!


That is how it's done in Swift. Are you saying there are C-like languages that do the same thing?


Well, mine does ( https://github.com/FeepingCreature/fcc ). I don't think it's very widespread, sadly.

Example: https://github.com/FeepingCreature/fcc/blob/master/tests/tes...


Nice! If you want it to be, you should add some information and examples. Or a simple website, even.


No I mean the syntax. :)

For my language, I'll do some proper advertising on it once I figure out version two; v1 is very much a testbed for playing around with language design.


Interesting. And people think Swift is easy to learn. Not all of it though.


Swift itself is quite easy to learn, and you don't actually have to do any functional programming in it if you don't know how to do so. I'll bet most people writing in Swift are doing little more than they would have done in Objective-C since it fully supports the OO paradigm. Over time, more and more will adopt some of the neat things you can do with first-class functions.


Swift is actually quite simple to pick up, and thanks to its static typing the compiler helps along the way. This functional side of what you can do with the language seems harder because of the cryptic concepts, and on top of it there is no real built-in support. Overall though, once you've tried them out a bit they are not that hard, and actually very useful :)


I was expecting a Chipotle menu


Why even have the concept of a monad in Swift? The language is neither lazy nor purely functional, so the problem monads were introduced to solve doesn't exist. You can just write regular code to do I/O.


Monads exist regardless of whether you call them that. They are a mathematical idea. An algebraic structure.

If you made a language that used conditionals based on 1 and 0, it could do boolean logic, even if you didn't include boolean values explicitly.

Monads exist in just about every programming language. They weren't introduced to solve a problem. They already existed, and just happen to be useful to solve certain problems.


I've read that synchronously executed code from a text file is actually a form of monad in itself. Is this true?


Sequences of bytes form a monad. Sequences of machine instructions form a monad. (Indeed, for any x, sequences of x form a monad). This is occasionally a useful fact (e.g. it's what tells you that you can debug an assembly program by debugging the first half, then debugging the second half starting from the state it was in at the end of the first half).

The definition of a monad is really simple - a lot of things form monads. Which is what makes them so powerful, because if you write a function that works on a generic monad, you find you can use that function for a lot of things.


Your question is meaningless as you've written it. An object equipped with a "return" and a "bind" operation is a monad if the operations follow the three monad laws of left identity, right identity and associativity.

To say whether something is a monad or not, you must first identify the object you're talking about and describe its "return" and "bind" operations.


> They weren't introduced to solve a problem.

Monads were definitely introduced to Haskell as a way to solve the problem of I/O -- see Section 7 of http://www.scs.stanford.edu/~dbg/readings/haskell-history.pd... for more details.

It's true that monads existed as a concept in category theory before they were used in programming languages, but they didn't really catch on until their use in Haskell. Certainly you don't see articles about applying other esoteric category theory concepts to Swift programs.


Monads are generally useful beyond IO, and pure programming style is available (and recommended) in almost every language.

From the article you linked:

"Although Wadler’s development of Moggi’s ideas was not directed towards the question of input/output, he and others at Glasgow soon realised that monads provided an ideal framework for I/O."


Yes, definitely. To be clear, I'm a fan (and long-time user) of monads in Haskell, and I think they're a great solution to the problem of embedding impure effects in a pure language. I just don't understand why you'd use them in a language where you can just write the impure effects in a direct style, especially when there's no 'do' notation equivalent.


You don't usually need them to do IO and state, but they're useful for things like optional and error values. Otherwise, it is mostly pedagogical.

Besides, writing things that exist in other languages is useful when you're writing other languages. If you were using C to write a new language with closures (for instance), you're going to have to implement closures in C. That's a useful exercise, even if nobody is going to use such an inevitably clunky construct in C directly.


You use them precisely when you need a computational/imperative context which differs from the natural ambient one of your language. E.g., Promises in Javascript.


I/O is not the only thing monads are useful for, far from it. Having the concept of a monad lets you write generic code that will operate on any monad (just as e.g. having the concept of a collection lets you write generic code that will operate on any collection, or indeed having the concept of a number lets you write generic code that will work for any number rather than having to write your program as a big lookup table with explicit cases for every number).

Once you have a library of functions that work with any monad, custom monads are really easy to work with. So you can do things like http://typelevel.org/blog/2013/10/18/treelog.html or http://michaelxavier.net/posts/2014-04-27-Cool-Idea-Free-Mon... .


Generic operations on monads are great, but you need code written in terms of monad operations to take advantage of them. Monadic code in Swift has a few problems.

1. There's no do-notation in Swift. Your code will be littered with weird operators that make it (at least slightly) harder to read.

2. It's less efficient. The runtime system has to create and destroy additional closures, increment and decrement additional reference counts, and so on.

3. It's difficult to interface with procedural code, since your monadic code must be pure.

So when do you write monadic code versus procedural code? In practice, the monadic parts of your code will end up associated with particular tasks that lend themselves to the monadic style. But why not then specialize this code to its particular task? In the Treelog example, using specific Treelog terminology instead of generic monad operators would make the code easier to read and understand.


When you have specialized code you use the specific terminology - just as when you have code that works only with a list, you'll use list terminology rather than generic collection terminology. You use the monadic parts when you need to abstract over it, just like with generics.

E.g. I have three client reports that share a common superclass but have different requirements. One of them needs some extra aggregated statistics (Writer, similar to treelog). One of them needs us to make an async call to their system for each row (Future). One of them needs some data from redis. But the superclass logic of "read a bunch of rows from our database, combine the per-row results into a single report" is common. So the superclass uses the generic monad operations, and the subclasses have the specific implementations, and I don't have to copy/paste the same code for the three different reports.


It might help to look at it another way.

You can consider monads as being a design pattern that's been shown to be useful when dealing with laziness.

In Haskell they are endemic as the language itself is lazy. However, they are also useful in eager languages where one wishes to model a behaviour in a lazy manner.

The most common examples of the latter are probably: modelling temporal delays in async code (with Futures); delaying processing of exceptions (with Results) and delaying handling of nulls (with Optionals).

Where monads come in handy is that their composability allows you to decide precisely when you want to handle the relevant behaviour.

As with all design patterns they're no silver bullet. They have trade-offs, as do their alternatives, be they callbacks, exceptions, etc. But they can be useful.



