What I Wish I Knew When Learning Haskell (stephendiehl.com)
164 points by psibi on Dec 7, 2013 | 75 comments



Agree with the advice about monads. You're going to think monads are weird because the first one you encounter is used to hide a particular detail of how Haskell works in the real world, namely that Haskell can't mutate objects, but that running your program mutates the real world. Ignore this use case, because while interesting, it's a messy implementation detail, not something to model in your own code.

Use monads for bailing out of computations early (Maybe/Either) or passing some state along through a computation (State). When you're comfortable with those concepts, look at ST for a generalization of what IO does. That is something you can actually use to simplify your own programs.
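
For instance, here's a minimal sketch of the early-bailout pattern with Maybe (readMaybe comes from Text.Read in base):

    import Text.Read (readMaybe)

    -- Each step can fail; the do-block stops at the first Nothing.
    parseSum :: String -> String -> Maybe Int
    parseSum a b = do
        x <- readMaybe a  -- a Nothing here short-circuits everything below
        y <- readMaybe b
        return (x + y)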


The State monad is what really sold me on the concept. Right before encountering a tutorial that mentioned it, I had just spent some time writing some code in Common Lisp, where I was trying to find a nice abstraction to avoid threading a state variable through absolutely everything. So this came across as "dear god, the abstraction does exist!" Whereas the IO monad at first blush comes across as "lol PLs geeks realized they had to actually do I/O", which is not as compelling, in part because that version comes across as a solution to a self-inflicted problem.

It's slightly disappointing, though, that State doesn't work that well when translated back to Lisp: http://marijnhaverbeke.nl/monad.html


State is exactly the right abstraction for this. When people using other languages come up with an "abstraction" for state, it always becomes a testing/debugging nightmare. With State, it really is just a parameter to your function, which you're allowed to hide if you feel like it.
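
A minimal sketch of that, where the hidden parameter is just an Int counter (Control.Monad.State is from mtl):

    import Control.Monad.State

    -- get/put are sugar over threading the Int parameter by hand
    label :: String -> State Int String
    label s = do
        n <- get
        put (n + 1)
        return (show n ++ ": " ++ s)

    -- evalState (mapM label ["a", "b"]) 0 == ["0: a", "1: b"]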


Exactly. IO is nearly the worst first Monad to learn, because none of the internals are visible, making it look far too magic. (Only something like Cont would be worse.) The best choice would be something like Writer or State, where you can see exactly what the abstraction hides, why you'd want it, and how it would look if done via explicit state-passing.


Yes, something like WriterT Error is a great "default state" for your functions: they can produce a correct answer, an error, and some logging information. Programmers love punting things like error handling to higher levels of their program; why not do that with logs, too? (And, given a correct implementation of Writer, if you ignore the log messages, they aren't computed. Try that with Java!)
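
A small sketch of the plain Writer part of that idea (Control.Monad.Writer is from mtl):

    import Control.Monad.Writer

    halve :: Int -> Writer [String] Int
    halve n = do
        tell ["halving " ++ show n]
        return (n `div` 2)

    -- runWriter (halve 10 >>= halve) == (2, ["halving 10", "halving 5"])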


Do not agree! Monads were always this vague weird thing until I started using IO heavily. Once I was comfortable with IO, applying the abstraction to other things was obvious. I've also helped some other people learn Haskell, and being able to just explain how IO works is much easier than trying to explain the Monad abstraction.


Yes, it's easier to explain a specific concrete example than an abstract concept. You don't study literature by learning about iambic pentameter and then ending your studies; you also read Shakespeare to see how it's applied in real life. But you also don't learn to read by starting with Shakespeare. You start with something simpler. The IO abstraction is deep and the implementation messy. So start with State or Writer or something, and go from there.


Sure, but IO gives you answers to the main questions that new people have. How do I write hello world? How do I read a line from standard input? How can you DO things in a language without side effects? You can't even write programs without IO, and it's really not confusing.

On the other hand, I think avoiding do-syntax makes sense at first.


Would someone please explain what a monad is and why they're useful, in down-to-Earth, simple language that an engineer can appreciate?

I know (and use) basically every other CS concept. Monads, though, I've never bothered looking into, because it seems like everyone who talks about them can't resist the urge to use flowery descriptions of their possibilities, rather than examples of their pragmatism.

Also, it's suspicious that pg has never once mentioned monads: https://www.google.com/search?q=site%3Apaulgraham.com%20%22m...

If they're useful, you'd think one of the best hackers would have said something about it. He spent a good deal of his career talking about closures, types, objects, functions, computing theory, lambda calculus, etc. But no monads.

So, are they useful? Would anyone please give examples of their usefulness?

EDIT: You know what's worse than hero worship? Not being able to make a reasonable argument which uses someone as an example without being accused of hero worship.


They're a very simple abstract interface that happens to capture a surprisingly large class of data structures and computations. You need to implement two operations to make a monad: (1) take a normal value and put it in the monad (2) add a function to the pipeline of stuff being performed on the monad. If your language gives special syntax for these two functions, then you get to write generic code that does lots of different but related things depending on which monad that code gets called with. So, monads are just a handy and often-occurring interface. There's not a ton to understand here conceptually, you just use them and get used to the abstraction.
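
Concretely, those two operations are the whole interface; this is the core of the standard Haskell Monad class:

    class Monad m where
        return :: a -> m a                  -- (1) put a normal value in the monad
        (>>=)  :: m a -> (a -> m b) -> m b  -- (2) add a function to the pipeline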

edit: pg likes dynamic languages, monads don't pop up there as much. They're more useful when (1) you have special syntax for them and (2) the type system helps you keep track of what code is not in a monad, generically using any monad, or using one particular monad. So, you'll typically only see monads in expressive/high-level static languages.


Ok. So if I understand correctly, I could give a monad an open file descriptor to "numbers.txt" as its value, along with a "read line" function and a "convert string to int" function, and I'd get back a list of numbers?


Here's a weird idea for you. Imagine you could overload the semicolon in C-like languages; you could make it any function you want! This is a little bit of the idea behind monads. You can build a pipeline of chained functions and the monad you choose can make decisions in between stages. One simple example of such a decision is an early return/exit; this use case is covered by the Maybe monad.
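
Concretely, the Maybe "semicolon" is just pattern matching; this is essentially the standard instance:

    -- Stop the pipeline at the first Nothing; otherwise keep going.
    instance Monad Maybe where
        return        = Just
        Nothing >>= _ = Nothing
        Just x  >>= f = f x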

Another cool one is the continuation monad. This lets you stop a pipeline and return a partial result, along with a function that lets you resume where you left off. This approach is commonly used in parsing, where you want to stop and restart depending on the availability of input. Another use for the continuation monad is to transform a chain of nested callbacks into linear code, eliminating callback hell.


You could do all that with IO, sure. But the fact that IO happens to be a monad is completely irrelevant to doing that stuff.

Monads are important because of the abstraction, not specific use cases. If you're thinking in terms of specific use cases, you're already wrong. The monad abstraction lets you abstract out all kinds of control flow operations that apply to types of the right shape. It turns out to be a very general abstraction - it appears in a lot of places. And it turns out to be a sufficiently-powerful abstraction to lift any Turing-complete control flow from functions on simple types to functions on the monadic types.

So, the abstraction is general and powerful. Why isn't it used in all languages? Because most languages don't have anything like the bookkeeping mechanisms necessary to make it syntactically lightweight. Haskell, on the other hand, is sufficiently not-terrible to provide enough facility for abstraction to actually take advantage of monads. (All programming languages are terrible. Haskell is just less terrible than most.)


Here's the simplest reason in a sentence:

We want to write functions that don't involve IO (which we call "pure functions"), but we want to be able to use these functions on values that have been tainted by IO.

Ex.

The function abs takes an Int and returns its absolute value. No IO there; it takes an Int and returns an Int. However, in a pure language like Haskell, an integer that has been obtained through IO is of type "IO Int". Because Haskell is a pure language, we see that value as "tainted" by IO; it and anything derived from it are irreversibly tainted. A monad lets us pull the Int out of the IO context, get its absolute value with abs, and then stick that back into the IO context. That way, IO-tainted stuff keeps its label warning us that it's IO-tainted, but we can still use pure functions on it. This is necessary because if abs returned an IO Int, we wouldn't know whether or not abs does IO. That is to say, we couldn't get pure values back when we process pure values with abs.
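
A tiny sketch of that round trip:

    main :: IO ()
    main = do
        n <- readLn :: IO Int  -- pull the Int out of its IO context
        print (abs n)          -- abs stays pure; print puts the result back into IO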

This can be done for any "context", but IO was the most pressing one, and monads are what made IO in pure languages easy enough for them to be usable for everyday problems.

Also, the Paul Graham worship here is pretty bad, but being suspicious of something just because he hasn't mentioned it in an article is really bad.


> But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well.

I dunno, looks like pg's words here are perfectly applicable to monads. Good enough for me!


First, if you can handle it, I recommend the original paper proposing monads as a solution for a wide variety of problems that every functional language has to face: http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/ba...

IMO, you have to understand the problems monads were meant to solve before you can understand why monads are a good solution.

Here was one key to my understanding monads. In a purely functional language you want everything to be referentially transparent. That is, an expression should be able to be substituted for its value at any given moment in time. At the very least, you'll want the ability to denote which parts of your program are referentially transparent and which aren't.

This can be expressed as a utopian vision: in a purely functional language we'd like everything — everything — to be a value.

We now need some way of translating code with side effects into code that returns a value. One reasonable abstraction is to say that the "value" is not the work itself — once the work is done it's out of the bag, after all — but rather the "value" is the machine which can perform that work.

To a mathematician this screams "functor." If you want to move between the two spaces repeatedly this screams "adjoint functors." There's some work you want to do in Space A that's more easily expressed in Space B, so you take a thing in A, transform it to a thing in B, do the work in B, and then transform back to A.

Think in, e.g., Ruby: sentence.split(' ').map(&:reverse).join(' ')

split(' ') and join(' ') aren't quite inverses of each other, but they are adjoint. These pairs take us from the land of space-separated strings to the land of arrays and back to the land of space-separated strings.

You see that all the time in programming and mathematics. I sometimes call it the "Super Mario Brothers" maneuver when I'm trying to teach it to students. Mario wants to get to the end of the level, so he hits a block, goes up the vine to cloud land, runs through cloud land, and then comes back down.

This is a bit hand-wavey, but it's "morally correct" IMO. I mention functors because I'm hoping you've seen the maneuver I'm describing and monads are a special type of functor.


I was reading up on Monads because of the reactive course. [0] I translated the code in "Monads for functional programming" (the linked paper) to Python a few weeks back – [1]

Not sure if it will actually help anyone, but it did help me in realising that I had not understood some code, but had instead just glossed over it. The State Monad, for example. [2]

--

[0] https://www.coursera.org/course/reactive

[1] https://github.com/rm/wadler

[2] https://github.com/rm/wadler/blob/master/wadler-2.8.py


From the article, the Eightfold Path to Monad Satori:

    Don't read the monad tutorials.
    No really, don't read the monad tutorials.
    Learn about Haskell types.
    Learn what a typeclass is.
    Read the Typeclassopedia[1].
    Read the monad definitions.
    Use monads in real code.
    Don't write monad-analogy tutorials.
[1]: http://www.haskell.org/haskellwiki/Typeclassopedia


1. Monads are an exceedingly common abstraction that can help you avoid code duplication, especially the duplication of control structure code. In Java:

    String postIdString = getHttpRequestParam("postId");
    if ( postIdString == null ) {
      return null;
    }
    Integer postId = Integer.parse(postIdString); // assuming a null-returning parse helper; Integer.parseInt would throw instead
    if ( postId == null ) {
      return null;
    }
    return db.getPostById(postId);
It gets even uglier when you can't just short-circuit out of the whole function in the error case by returning null. This pattern of checking for null between statements is captured by the Maybe monad. In Haskell it would look like this:

    runMaybeT $ do
        postIdBS <- MaybeT $ getParam "postId"
        postId <- hoistMaybe $ fromBS postIdBS
        MaybeT $ getPostById postId
The Maybe monad implements the core pattern, then that behavior is extended to other contexts by the MaybeT monad transformer. This example just lets you handle failure without giving you an indication of where the failure happened. If you need error reporting along with that, you can get it very easily with the Either monad and its corresponding transformer EitherT like so:

    runEitherT $ do
        postIdBS <- noteT "getting post ID" $ MaybeT $ getParam "postId"
        postId <- hoistEither $ note "parsing post ID" $ fromBS postIdBS
        noteT "retrieving post from database" $ MaybeT $ getPostById postId

2. Monads in Haskell provide a way to isolate and control side effects. I gave a presentation about this almost a year ago.

http://vimeo.com/59215336

The presentation is built around two real examples of software engineering problems that I encountered in my career. One was encountered in a Java project, and one in a Haskell project. The thesis of the presentation is that monads are much more useful when combined with Haskell's strong static type system and purity. This also suggests a pretty good explanation of why pg doesn't talk about them--he's more of a dynamic language guy.

Here are a few of the concluding bullet points from the presentation.

Monads can:

3. Help you prevent data corruption in concurrent applications while pursuing shorter critical sections and reduced lock contention.

4. Prevent bugs that might arise from confusion between different phases of execution.

5. Guide exploration of complicated problems.

6. Expand your dictionary of concepts and ways of thinking about a problem.


Monads are number 1 from that list.

Numbers 2, 3, and 4 are the result of a powerful static type system, purity, and having typed IO. Nothing you listed has anything to do with being a monad. They're just examples of things you can do with types.

Your numbers 5 and 6 are sort of accurate. It's true that you can say "Hey, I think this type might form a monad," and explore from there. But if it does, it's nothing greatly interesting - you just get to take advantage of a bunch of existing library code. (And a little syntactic sugar, in Haskell.)


Right, my presentation is clear that it's not just monads, but rather the interaction of purity, strong types, and monads that gets us those benefits. I list them when people ask about monads because, while they might not be what monads are actually about, they were big things that Haskell gained with the introduction of monads. So I think they're an important part of the explanation that we give to Haskell newcomers.


> I know (and use) basically every other CS concept.

Really? So you know and use row types, linear logic, rank-N types, impredicative types, red-black trees, generalized algebraic data types, continuations, kind-level polymorphism, pattern matching, pi calculus, the Curry-Howard isomorphism, linear programming, dynamic programming, Bayesian inference, support vector machines, context-sensitive grammars, Feistel networks, one-time pads, vEB trees...


I think of it this way; I dunno if it's the right way.

Programs should be these beautiful mathematical models that describe things - stick something in, get something out. Stick the same thing in each time and get the same thing out each time.

Unfortunately to be useful programs have to be allowed to deal with messy stuff that can have side effects or some other ugly situation, or read input from a user or other programs.

You use monads to be explicit about where your program is messy and where it isn't messy.

That can be stuff like IO, or stuff like Maybe (I dunno if what I'm getting here will even have a result - if it does, then it'll be Just (the result), otherwise it'll be Nothing), but the point is there is some messiness in there and you want to know it's there; otherwise you don't know which parts of your program have messiness in them.

Perhaps you can think of them as atomic actions in concurrent programming? You declare something as atomic because you want to know the stuff inside the atom at least won't be affected by the rest of the concurrent messiness, so you can think about the pieces inside it as a non-concurrent piece.


> In functional programming, a monad is a structure that represents computations defined as sequences of steps. A type with a monad structure defines what it means to chain operations, or nest functions of that type together. This allows the programmer to build pipelines that process data in steps, in which each action is decorated with additional processing rules provided by the monad. As such, monads have been described as "programmable semicolons"; a semicolon is the operator used to chain together individual statements in many imperative programming languages, thus the expression implies that extra code will be executed between the statements in the pipeline. Monads have also been explained with a physical metaphor as assembly lines, where a conveyor belt transports data between functional units that transform it one step at a time. They can also be seen as a functional design pattern to build generic types.

http://en.wikipedia.org/wiki/Monad_(functional_programming)


I've never used monads, but what you say makes me think of something:

Suppose I had a Javascript array of functions. And I pass that array into another function that performs that series of functions in order while maybe doing something in between each one. For example, maybe I can pass that list into a function that checks to make sure the result of each call isn't null or undefined. Or maybe I can pass the same list into another function that checks to make sure none of the results are NaN. Would this be an example of monadic behavior?


Yeah. That somewhat resembles monadic behavior. Passing a container (list in this case) of functions and applying each of those functions to a container (again a list in this case) of arguments sounds a lot like an applicative functor. All monads are applicative functors though. Doing something in between each call definitely is more monadic than applicative, since that's more or less what the bind (>>=) function can do.
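
A rough Haskell sketch of your idea, using Maybe as the "check between stages" (the step list and pipeline here are made up for illustration):

    -- Chain a list of possibly-failing steps, stopping at the first Nothing.
    checked :: [Double -> Maybe Double] -> Double -> Maybe Double
    checked steps x0 = foldl (>>=) (Just x0) steps

    -- e.g. checked [\x -> Just (x + 1), \x -> if isNaN x then Nothing else Just x] 2.0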


I was glad when the author of the article advised against learning Monads by analogy, because it's one of those subjects where analogies serve only to confuse people. Personally, my brain maps badly to Haskell syntax, so let's do this in JavaScript (I know, I know, not so powerful and everything):

In fact if you used jQuery, you already know what a Monad is, probably without realizing it. Consider the following code:

  $('.myelements').filter(filterFunction).css('color', '#ff0');
You can follow along what this code does by following the dots. First, all elements matching the ".myelements" class are returned, these are then filtered, and the result of that gets a css modification.

Every time you wrote a method that returns an object, you already did monads in principle. They can be used to chain things together. While Haskell provides semantics that explicitly deal with the mechanics of monads, pretty much every dynamic language has them.

Here's what it looks like in PHP, for example:

  $obj
    ->bla()
    ->beep()
    ->borp();
You can use monads to make your code more concise, to simply chain function calls, or to iterate over a list, but they're also often used when sifting through databases, since querying amounts to filtering and applying attributes to items. For example, an SQL-like database query might be expressed like this:

  table1.select(all).where({ ref : 1 }).orderby('id');
Of course, that's an API which is a bit dishonest since it acts as if the code really goes through all those items (which would be a performance disaster with huge databases), so when you see something like this it's often just a shortcut to assemble a query. And this goes for monads in general, they're really more of an API style. A monad itself is an abstract concept, a paradigm.

> Also, it's suspicious that pg has never once mentioned monads

For a theory why pg specifically hasn't written anything about monads yet, here's an article that I believe was on HN in the past about the difficulties of comfortably using monads in Lisp: http://marijnhaverbeke.nl/monad.html - but I guess the short explanation would probably be that Lispers tend to deal with monad-worthy problems in other ways.


While there might be some structural similarity of function chaining to bind operators in Haskell, jQuery isn't a monad. The burden of proof for being a monad is explicitly demonstrating adherence to the monad laws. To quote prolific Haskeller Don Stewart "I am pretty skeptical that the jQuery API as a whole could satisfy the monad laws".


No, you have a completely incorrect understanding of Monads.

You're using jQuery as a functor, not a monad.


I'm replying to you, but this comment is intended for freyrs3 as well.

> No, you have a completely incorrect understanding of Monads.

To re-use your slightly ad-hominem argument structure: "No, you have a completely incorrect understanding of Monads!" Of course the word "completely" here is just an unnecessarily flamebaity word choice in describing either of us. Clearly, we both have some understanding of monads. Let's look at the facts instead:

A functor applies an operation to a wrapped value. But monads apply functions that also return a wrapped value. It should be easy to see why people think the jQuery chain pattern is indeed monadic.

For what it's worth, I'm far from alone in asserting that jQuery is indeed a monad.

Since Haskell people claim to be the only enablers of "true monads", they would of course nitpick about patterns implemented in other languages. But religion aside, the OP asked for the possible uses of monads, and there they are. That the example also shows functors is just an added benefit, though not at all relevant to the discussion.


> For what it's worth, I'm far from alone in asserting that jQuery is indeed a monad.

And a lot of people think that the reason astronauts float on the ISS is because there's no gravity there.

> Since Haskell people claim to be the only enablers of "true monads", they would of course nitpick about patterns implemented in other languages.

Yes, because monad is an actual term and calling something a monad just because it sort of looks like it if you squint doesn't make it one. You can't write return in the 'jQuery monad', because there's no way to wrap arbitrary data types, only things like DOM elements. Something like a promise is closer to a monad, because you can turn any value into a promise, but IIRC there are still issues around it that I don't remember offhand.


See Douglas Crockford's opinion on this subject: http://www.youtube.com/watch?v=dkZFtimgAcM


So I don't particularly feel like watching an entire hour-long video about something I'm already very familiar with for the sake of an argument on the internet; I'm just going to skim the video and look at the slides.

He describes monads as "a loophole" in the idea of function purity. This isn't true; monads aren't a way to 'cheat' any more than passing an accumulator through like

  factorial acc 1 = acc
  factorial acc n = factorial (acc * n) (n - 1)
is. He says something about state and functions and closures that I can't understand and how the state is 'always different', and it's just nonsense as far as I can tell.

He also calls values 'monads', which isn't accurate terminology at all; they're usually called monadic actions.


Again, jQuery or any arbitrary API is not monadic until one shows that the monad laws hold for it. It doesn't matter how many people assert it to be true; whether an API is monadic is a mechanically provable fact, and as of yet there is no proof that jQuery is monadic.


Just because lots of people think jQuery is a monad doesn't make it true. Also, even if you're not interested in that high level of anal-retentive precision in your terminology, I would argue that's still not the main point of monads. I talk about this more in the vimeo presentation linked in my other comment here.


>But monads apply functions that also return a wrapped value.

Yes, but they also must follow the monad laws.

>It should be easy to see why people think the jQuery chain pattern is indeed monadic.

It is not easy for me to see. In any of your examples, are you returning the same encapsulated structure from the lambda you pass in? It doesn't look like it.

>For what it's worth, I'm far from alone in asserting that jQuery is indeed a monad.

Yes, a lot of people have a misunderstanding of monads.

>Since Haskell people claim to be the only enablers of "true monads",

I'm not a Haskell person. I write monadic code in both Scala and C# to great effect.


Consider for a second that in all the semicolon languages you know, the semicolon is a redefinable function.

Think hard about what you could do then.

That is why the Haskell community sometimes dubs them "programmable semicolons".


Let's start with an example: imagine that you are writing code to control a robot. You want to be able to do something like:

  forward(5)
  raiseArm(5)
  
however, your program needs to be single threaded, and needs to quickly respond to the shutoff command. One solution would be to make every function exit quickly and get called in a loop until it's done. This means that every function is essentially an object with the methods init() and step(), where step returns true when the action is finished.

Suppose we define forward(int) and raiseArm(int) as described. We would like to be able to easily define a function that moves forward then raises the arm. This new function would be combined(a,b)=forward(a) >> raiseArm(b).

Now, imagine we have a function that moves forward until it hits a wall, then returns the distance traveled. We want to move forward until we hit a wall, then return to our initial position. If we could block this would look like:

  x=forwardUntilWall()
  backward(x)
Instead of having forwardUntilWall() return x, it can store x in itself and let the bind operator pass it to the init function of backward(). This would let us write the above as:

  forwardUntilWall() `bind` backward.
Using do-notation (which is just syntactic sugar), we could also write that as:

  do x <- forwardUntilWall()
     backward(x)

Which looks a lot like our blocking version.

Imagine that you have a function setArm, and you want to set your arm sequentially at various positions. Normally you would do:

  for (each x in positions):
    setArm(x)
Using monads, you could do:

  forEach (positions) (setArm)
or using do-notation:

    forEach (positions) $ \x-> do
      setArm(x)
In Haskell, forEach is called "forM_", and needs to be imported from Control.Monad. Of course you could avoid importing it and simply define it yourself.

Also, we are not restricted to combining our specific monad with only the general monadic operators. For example, we can easily define a function that will take 2 monadic operations and return a monadic operation that runs both of them concurrently.


Someone linked to this illustrated introduction in another thread: http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...

It should give you an idea of what monads are, but probably won't tell you why they're useful. They pop up in lots of places in programming, but you might not notice them at first because most languages have a way around them. They're more explicit in Haskell because the type system is powerful enough to make them both explicit and easy to use, via the combination of parametric polymorphism and type classes.

Here are some examples of monads:

  * Option types (sometimes called nullable types).
  * Many kinds of collections: lists, arrays, trees (but not sets, strictly speaking, because some invariants don't hold when treating sets as monads).
  * Types with logging attached (Writer monads).
  * Continuations.
Furthermore, Haskell's list comprehensions can be thought of as syntactic sugar for monadic operations, in addition to the usual do notation.
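
To make the sugar concrete, both of these produce the same list via the list monad:

    pairs, pairs' :: [(Int, Int)]
    pairs  = [ (x, y) | x <- [1, 2], y <- [10, 20] ]
    pairs' = do
        x <- [1, 2]
        y <- [10, 20]
        return (x, y)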

Monads are useful because they let you add context to normal values -- any context you want, really. Often, the context just gets called "state", but lots of things are state. The monad operations are also useful because they allow you to force a sequence of execution, which is partly why they're also so useful for performing IO: Haskell is designed so that the compiler may re-order operations when it makes sense, and bind lets programmers place a sequential order on parts of the code. This is why the "pipeline" analogy is sometimes used.

An excellent article on monads is "You Could Have Invented Monads", which guides you through discovering several different monads using practical problems: http://blog.sigfpe.com/2006/08/you-could-have-invented-monad...

As for pg, monads don't pop up too often in untyped languages. While you can implement the monadic operations in an untyped language, it makes much less sense to do so. This is especially true for impure languages such as Lisp.


Why can't you define a set monad? The first definition I can think of is to make it behave like Haskell's list monad.

That is to say that "xs >>= (\x -> f x)" would take every element in the set "xs", pass each of them into f, and union the results into a single set.
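
That bind is easy enough to write as a plain function; the catch (see the replies below) is the Ord constraint it needs:

    import qualified Data.Set as Set

    -- The bind described above: map f over the set and union the results.
    bindSet :: Ord b => Set.Set a -> (a -> Set.Set b) -> Set.Set b
    bindSet xs f = Set.unions (map f (Set.toList xs))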


Set basically has an Ord constraint on the type of things you have a Set of since most of the functions on it have an Ord a constraint, and Monads can't be constrained in what types they wrap. This is also why it's not a Functor; you can define

  setMap :: (Ord a, Ord b) => (a -> b) -> Set a -> Set b
but that's still more restricted than fmap.


Because a monad is also a functor, but Set fails to be a functor.


Because monads are abstract, and very general, there turn out to be many different ways to think of monads, and each of the perspectives is enlightening in its own way. In fact, really the ONLY thing that monads provide is an abstraction for many concepts that you probably are already familiar with. In Haskell, the Monad type class simply allows you to use the same syntax for these different operations.

In other programming languages, I'm sure you already use the very same functions that are fundamental to monads; you just don't have a syntactic construct that says "Hey, these things share a similar structure!"

Most of the other replies focus on IO (and on the do-notation side of things), and so I'm not sure that they'll help your confusion.

We'll need to start with functors. Without getting too formal, functors describe "containers" that can hold objects of any type. A list is a functor, a "Maybe" is a functor, the result of a computation (i.e., "IO") is a functor… lots of things are functors!

And since functors can contain anything, they can certainly recursively contain themselves - that is, they can be nested! We can have a list of lists, or a "Maybe (Maybe a)" (hey,… that seems like an awfully redundant thing to construct). When we talk about "IO", we can have an "IO (IO a)": that is, we can describe a computation that returns to us another computation!

And with these functors that I've mentioned, we realize something interesting: We would often like to flatten these nested structures. We use "concat" to reduce a list of lists to just a list; how to reduce a "Maybe (Maybe a)" to a "Maybe a" is obvious; for IO, we might want to take our description of a computation that returns a description of another computation ("IO (IO a)") and convert that into a description of a computation that runs the first computation, and upon receiving the result of that computation (which is a computation itself), run the resulting computation, and then return the result of THAT as our final result.

Also, for these things, we have an idea of how we would inject a pure value into the structure. We can create a list with a singleton element, or "Just" our element, or the computation that does nothing interesting (no missile launches) and just returns our element.

Haskell is just wacky enough to let us use the same syntax for all of these rather different ideas (which only share the similarity of having the monadic structure)!

NOW, I personally don't think that the trouble that people have with monads is related to the abstraction and the shared syntax - I think it has more to do with the use of Haskell's "do" notation. Learn what "do" notation is sugar for! Learn what "bind" (>>=) is (it's basically "fmap" followed by the flattening operation I described above).
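
In code, that combination is a one-liner (a sketch; join lives in Control.Monad):

    import Control.Monad (join)

    -- (>>=) is just "map, then flatten":
    bind :: (Functor m, Monad m) => m a -> (a -> m b) -> m b
    bind m f = join (fmap f m)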

I can't promise you that learning monads is something necessarily pragmatic (except for the fact that I think Haskell is the perhaps the MOST pragmatic language, and you'll need to learn monads to be comfortable with Haskell). But the world would be a pretty dull place if we only did what was pragmatic.


Monads are an interface, often over container types, that allows you to manipulate and build up values in the container easily. They come with associated laws that allow for this kind of containerized processing to represent an enormous array of interesting real-world tasks---failure, non-determinism, CPS, state, local context, logging, IO, random processes, parallel computation, graph building, the "builder pattern", and logic programming all came to mind immediately.

Concretely, imagine computations that might fail. In Haskell we represent this by the "possibly empty container with one slot" called Maybe. Maybe Int is either "just" an integer written (Just 3) or nothing at all, written Nothing. Frequently you'll use many failing processes all at once to build a result and want the failure of any one of them to propagate. You might do a bunch of map lookups to build a Person.

    -- lookup :: Ord k => k -> Map k v -> Maybe v

    type Name = String
    type Address = String
    type Age = Int

    data Person = Person Name Address Age

    type AddressMap = Map Name Address
    type AgeMap     = Map Name Age

    mkPerson :: AddressMap -> AgeMap -> Name -> Maybe Person
    mkPerson addys ages name =
      case lookup name addys of
        Nothing -> Nothing
        Just addy -> case lookup name ages of
          Nothing -> Nothing
          Just age -> Just (Person name addy age)
That pattern of diving down deeper into a potentially failing result happens to follow the Monad interface. We think of each possibly failing value as being able to "bind" a next action which is only carried out if the computation doesn't fail. In other words, we package up the Just branches as continuations.

    mkPerson addys ages name =
      bind (lookup name addys) (\addy ->
        bind (lookup name ages) (\age ->
          return (Person name addy age)
        )
      )
The `return` bit makes the non-failing computation (Person name addy age) into a non-failing Maybe computation---it injects a non-monadic computation into a monad in "as boring of a way as possible". (Also note that `bind` is usually called (>>=) so you can use it infix.)

In general, this pattern is so nice that there's some special syntax sugar for it called `do notation`.

    mkPerson addys ages name = do
      addy <- lookup name addys
      age  <- lookup name ages
      return (Person name addy age)
which desugars to exactly the same thing as the previous example.

The crazy bit is that this pattern, once you become familiar with the monadic interface, is extremely pervasive. Usually when you recognize it in some kind of computation it sheds new light on both how that computation can be manipulated and how it can be combined with other kinds. It's not infrequent to combine monads into a complex notion like---"a potentially infinite non-deterministic failing computation carried out in context of some configuration data where we can log and deterministically open and close resources"... which is written directly as the type

    ReaderT ConfigData (WriterT LogMessage (LogicT (ErrorT ErrorType (SafeT m))))
Or, we can define a parser as a potentially-failing computation which modifies some parser state in the context of the current parse

    newtype Parser a = Parser (ReaderT Context (MaybeT (State Stream)) a)

    parsePerson :: Parser Person
    parsePerson = do
      name <- parseName
      _    <- parseSeparator
      addy <- parseAddress
      _    <- parseSeparator
      age  <- parseAge
      return (Person name addy age)


Wikipedia goes a long way, you know? You could do a lot worse than get it from there: http://en.wikipedia.org/wiki/Monad_%28functional_programming...

The term monad comes straight from Category Theory, a branch of mathematics. There are some rules, and anything that obeys those rules is a monad.

From the point of view of programming languages, any set of values/objects for which a particular set of functions exists is a monad. See wikipedia for what they are, exactly.

Monads exist in both dynamically and statically typed languages, but it is more difficult to explain them in dynamic languages, where you don't have types to explain the properties.

So many things can be monads. For example, collections can easily be monads, for it is easy to define "join" and "fmap" for them (one of the two possible ways of defining a monad):

* "join", aka "flatten", means that if we have a collection of a collection (a list of lists of x, a set of sets of y, etc), we can "flatten" it one level (that is, get a list of x or set of y);

* "fmap" means that, if we have a function that changes the objects inside that collection, we can create a function that changes the collection so that the objects inside it are modified. For example, if we have a function from strings to numbers, we can create a function that converts a list of strings into a list of numbers.

A list comprehension in Python is an example of these rules put in practice, though when using "if" you are going beyond these rules.

There are other monads besides collections, and the usefulness of monads is that they work like an interface or API that is shared by all monads.

Going to Python again, consider the syntax:

  listC = [f(x, y) for x in listA for y in listB]
The function f doesn't care that x and y came from listA and listB, and if Python supported any kind of monad, then you could write something like this:

  monadC = [f(x, y) for x in monadA for y in monadB]
And that would work for any monad you passed. You would have abstracted away that you had lists, making your code more generic and, therefore, more reusable and less repetitive.

To go with an example you use elsewhere: if you have a monad containing a file descriptor, a read-line function, and a convert-string-to-number function, then you'd get back a monad of a number:

  monadN = [int(readline(fd)) for fd in monadFD]
Note that monads don't define a way to extract that number from inside them, but that really isn't necessary, as you can continue to apply functions:

  monadVoid = [print(n) for n in monadN]
If your original monad was a list of file descriptors, you'd print a list of numbers, one for each file. A Maybe monad (aka Option monad) would either contain a file descriptor or not, and you'd either print a number or not accordingly.

Another type of monad, the Future monad (also known as Promise), executes the functions it is given asynchronously. So if the monad containing the file descriptor was a Future monad, the code above would execute and finish, but the number could be printed well afterward. I'll come back to this example at the end.

The usefulness of particular monads depends on the context, but the ones I'm familiar with mostly help avoid side effects in functional programming. If you don't avoid side effects, you have no use for them. If you do, you'll inevitably come up with something equivalent to monads like IO or State.

And the usefulness of the fact that they are monads comes from the fact that you can abstract away the specific type of monad they are, and work generically.

So, coming back to our number-printing program, let's say we are using a Future monad so that the number gets printed asynchronously, and we wish to test the program. If we use the Future monad, we'll probably have to sleep for a while and then see if the number was printed or not -- a test that may fail if the program takes too long to execute (say, the file is on a remote computer and has to be downloaded first).

One alternative is that we replace the Future monad with the Identity monad. The Identity monad just holds a value, so any operation on it is executed synchronously. We can test our program using the Identity monad, so we do not have to wait, and run it in production using the Future monad, so our program runs asynchronously.

We can do that because they are both just monads.


I really wish people would start treating languages as languages. `(m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)` is not easy to understand because the symbols don't have names. I can't read it. How is "m >>= f" pronounced?


> How is 'm >>= f' pronounced?

How do you pronounce '4 + 5'? It's easy, because you don't try to pronounce '+', you pronounce 'plus'. With '>>=', there are a few names that all mean the same thing.

There are two primary names, 'bind' and 'chain', and a description 'feed':

bind (the result of) m to (the input of) f

chain (the result of) m into f

feed m to f

There are probably a couple more things that can be said, in different contexts, but these three cover the majority use case.


Thanks for the very complete answer!

That those terms are not mentioned in the article (titled "What I Wish I Knew When Learning Haskell", so apparently meant for people new to the concepts) is my actual criticism. The ability to instinctively replace those symbols with the appropriate terms is something that has to be learned - and easy to forget to teach. He talks about "don't learn monads by analogies" because he's maybe not fully aware that he is using "bind", "chain", "feed" when actually reading those expressions - all of which are analogies.


It's called "bind". Here's how to memorize that:

One Type to rule them all, One Type to find them, One Type to bring them all and in the Lambda >>= them


:D Thanks!


>How is "m >>= f" pronounced?

The pronunciation is a sort of bestial hissing that is difficult to describe; when you say it correctly, the terminal may be slightly moist.


I'll start using it.


Warning: I think I'm right about these things, but I'm also very new to Haskell.

> How is "m >>= f" pronounced?

If I'm not mistaken ">>=" is the bind function and could be written like:

    bind m f
Meaning that "m >>= f" is pronounced as "bind m to f". What it's doing is:

    main = do
        m <- someActionReturningMonad
        f m
Are you saying that a language that uses symbols isn't a language? Is Japanese not a language, since it uses symbols (glyphs? correct me if I'm wrong)?

If you are talking about programming languages specifically, do you believe that "plus" should be used in place of "+"? Now I'm not going to get absurd and assume you mean using "parenthesis_start" in place of "(", but just in case... I will ask if you are.

Also, in Haskell specifically there are matching functions for those.


No, see below. I just think that if you write that something is "easy to understand" and then go on to explain it using symbols unknown to the reader, without giving the symbols pronounceable names, you're doing it wrong. One of the tricks that helped me most when learning more advanced math was to learn "reading"/"pronouncing" formulas first and understanding them second. It feels very nice and intuitive. That's why I'm always annoyed when people explain something using very domain-specific symbols and don't provide any way to "read" them.


I misunderstood your parent comment then I think. I prefer the Haskell tutorials (especially for beginners) that use the function and then provide the shorthand symbol to do it with.

You can also search those symbols on Hoogle[1] or if it is a builtin keyword the Haskell keywords page[2] can be useful.

1. http://www.haskell.org/hoogle/ 2. http://www.haskell.org/haskellwiki/Keywords


But (>>=) is a control flow abstraction, just like for and while in imperative languages.

We call it bind, but there's no pragmatic point in writing bind in its infix form, because of the prevalence of the pattern.

Would you rather have that, or

    (m `bind` f) `bind` g = m `bind` (\x -> (f x) `bind` g)


Programming languages are not natural languages. And where imperative programming can benefit from something that looks like a series of instructions resembling natural language, functional, declarative programming works better with the language being closer to mathematical notation.


If I can't talk to you about it (e.g. I have to literally say "Oh, you forgot to declare a way to apply the greater-than-greater-than-equals operator!"), it's hard to actually reason about a problem domain. It's like the magic moment where you go from stumbling through cryptic signs to being able to fluently read (aloud) complex math or logic formulas. It doesn't need to resolve to a normal, casual sentence you'd drop chatting with your cab driver - but there should be a way to pronounce it.

P.S.: I guess what I'm trying to say is - every normal mathematical expression has a way to pronounce it. "plus", "for all x", "there exists an x", "sum of", "element of", ...


You wouldn't have to say "Oh, you forgot to declare a way to apply the greater than-greater-than-equal operator!", you would say "Oh, you forgot to declare a way to apply the bind operator!". Now that you know that, you can stop saying that ">>=" doesn't have a name? You are upsetting poor bind :/

Also here is a great explanation with pictures that helped me understand it:

http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...


Thanks! That article is pretty awesome. It would be slightly more awesome if it used the analogies "bind"/"feed"/"chain" that apparently are more often used to speak about it. But I could consider calling it the "shove" operator from now on. ;)


Hey, no problem! I used to see Haskell as needlessly complex, unapproachable, and not usable in the real world, and now realize that was a mistake, so I just try to show people what I've found out/learned when I can :)


I would read it like this:

The result of the computation m bound to the function f. The result of that bound to the function g.

...is equivalent to

lambda x where the result of f of x is bound to the function g.


I think of it as "feed m to f".


Except what gets fed to f is the 'unwrapped' value, not the whole monad


m bind f


Anyone got any solutions for cabal hell? I barely use Haskell because 50%+ of everything I try to install fails and I don’t have a freaking clue how to fix it.


If you're using cabal to install applications from Hackage, it gets a bit hairy. But if you just want to resolve your dependencies:

    mkdir project
    cd project
    cabal sandbox init
    cabal install dependency
    cabal repl # for a ghci instance inside the sandbox
It's still recommended to generate a .cabal file and specify your dependencies there.


Use the new sandboxing feature of cabal: http://coldwa.st/e/blog/2013-08-20-Cabal-sandbox.html


Take a look at this [1], particularly the second answer, which links to a Github Gist for "ghc-pkg-clean." I tend to use that when I run into trouble.

[1] http://stackoverflow.com/questions/7961604/fixing-issues-not...


Try checking the version of ghc you have installed.


It's humorous that most of the article is not about monads, yet the entire discussion around it is about monads. Seriously, monads are only a small part of Haskell.


  SET : Group
  MAN : Lie Group
  TOP : Topological Group
  GRP : Objects
The last line should really be "GRP : Abelian Group"

See http://en.wikipedia.org/wiki/Eckmann-Hilton_argument


As someone who has been learning Haskell for the past ~6 months, this is very valuable. I will be studying this and toying around with the examples quite a bit. Thanks for this resource!


Not sure how I feel about this thread.

A hundred folks who probably haven't investigated how to effectively teach adults try to explain monads in the space of 3 tweets (probably not the best medium).

And then we wonder why folks steer towards less abstract tools. For every person who visits and gains monad satori, how many have we turned away, never to return?


Holy shit how did I never know about the ":r" shortcut??? This changes everything.



