Monads as a Programming Pattern (samgrayson.me)
233 points by charmonium on Aug 12, 2019 | 80 comments



I think monads highlight something underappreciated about programming, which is that different people regard very different things as "intuitive". It almost seems that modalities of thinking about programming are bigger, external things than programming itself. Certainly they're barriers to learning.

Like Lisp, there seems to be about 10% of the programmer population who think "ah this is obviously the clearest way to do it" and the remaining 90% who go "huh?", and the 10% are really bad at explaining it in a way the others can grasp.

The two monad explainers that really resonated with me were:

- How do you deal with state in a language that would prefer to be stateless? Answer: wrap the entire external universe and all its messy state up into an object, then pass that down a chain of functions which can return "universe, but with some bytes written to the network" (IO monad)

- If you have a set of objects with the same mutually pluggable connectors on both ends, you can daisy-chain them in any order like extension cables or toy train tracks.

(It's a joke, but people need to recognise why "A monad is just a monoid in the category of endofunctors" is a bad explanation 99% of the time and understand how to produce better explanations)


There is a saying that the difference between poetry and math is that poetry is about giving different names to the same thing, and math is about giving the same name to different things.

Grokking monads really requires the adoption of the mathematical mindset of finding commonalities in things that at a first glance appear completely different. Tell an average OO programmer that lists, exceptions, dependency injection, and asynchronous execution all share a common structure, and they will probably give you a blank stare.

Of course, just the fact that abstracting over those things is possible doesn’t mean it is useful. In a pure FP language it might be necessary, but why should I bother with weird mathy things in my imperative language that has side effects and global state? You really have to start by explaining why composability is such a nice thing to have, and that gives the motivation for various FP patterns that are, fundamentally, all about composability.


>Of course, just the fact that abstracting over those things is possible doesn’t mean it is useful. In a pure FP language it might be necessary, but why should I bother with weird mathy things in my imperative language that has side effects and global state? You really have to start by explaining why composability is such a nice thing to have, and that gives the motivation for various FP patterns that are, fundamentally, all about composability.

But then you start running into the nasty issue that when you need to compose multiple monads, you find out that monad transformers don't always commute!


Yeah. And then you start hearing about these things called algebraic effects…


>but why should I bother with weird mathy things in my imperative language that has side effects and global state

There are good reasons, like encapsulation, etc. But the real reason is that it just doesn't feel right, and there is this urge to fix it.


Because in your imperative language monads can still be a useful abstraction. For example they can give you stuff like async-await or null-safe method chaining for free.
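
A rough sketch of the null-safe chaining point, assuming a hypothetical Maybe wrapper and made-up User/Address shapes (none of this is from a particular library):

    // Sketch: a tiny Maybe-like wrapper whose bind short-circuits on
    // null/undefined, much like the ?. operator does.
    class Maybe<T> {
      private constructor(private readonly value: T | null) {}

      static of<T>(value: T | null | undefined): Maybe<T> {
        return new Maybe(value ?? null);
      }

      // bind: apply f only if there is a value, otherwise stay empty
      bind<U>(f: (value: T) => Maybe<U>): Maybe<U> {
        return this.value === null ? Maybe.of<U>(null) : f(this.value);
      }

      getOrElse(fallback: T): T {
        return this.value ?? fallback;
      }
    }

    // Hypothetical nested data: user -> address -> city, any step may be missing.
    interface Address { city?: string }
    interface User { address?: Address }

    const city = (user: User): string =>
      Maybe.of(user.address)
        .bind(addr => Maybe.of(addr.city))
        .getOrElse("unknown");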


Which is exactly my point about composability.


John von Neumann famously said that in mathematics you don't understand things, you just get used to them. I think monads are an example of this, and all the attempts to make them intuitive before you use them are a huge waste of time. Many mathematical concepts are like this. Take compactness. At first, the definition of compactness seems a little random, but it's useful for proving things, so you keep using it, and after you do enough proofs you develop an intuition for it. It feels very clear and fundamental instead of random.

Once you have this feeling of obviousness, how do you transmit it to the next person? Many things can be explained, but in math it was figured out long ago that at any given time there are many things we don't know how to explain, and aren't sure we ever will be able to explain in a way that transmits understanding faster than experience can instill it. The best thing you can do for someone is give them the definitions and some problems to work on. We can't rule out that someone may eventually come up with a brilliant explanation that provides a shortcut to understanding, but we know from experience that some things persistently defy our efforts to explain them. If hundreds of people's earnest attempts to explain something have failed, then perhaps teachers should keep trying, but learners should not waste their time with these experiments; they should skip the explanations and seek active engagement with the idea through problem solving.

That's how I feel about monads. I can't absolutely rule out the possibility that someday an effective way to explain them will be found, but I think we can agree at this point that there is ample evidence that people who want to understand monads should not waste their time waiting for the right analogy to be blogged and posted on HN. They should just start programming, and soon enough they too will feel like a great explanation is on the tip of their tongue.


Love that von Neumann quote!

I have had this experience: I've moved much faster learning to use math tools when a friend much more advanced than me suggested that I can use a tool even if I don't know how it works. It's as if I stopped building a house because I wanted to see whether I could build the hammer myself!

(Well, this is also my experience programming, implementing the low level stuff myself is a much more attractive exercise than actually assembling off-the-shelf parts into something people would actually use!)


I think you're roughly right, though I shy away from "intuitive" since it can come across as excluding people who haven't learned certain things yet.

What I've observed is that a lot of learning programming is about becoming comfortable with thinking of more and more concepts as first-class entities: turning larger pieces of code, procedures, and patterns into objects/values you can pass around, hold, create, etc.

1. A small-scale one I see a lot is that many programmers don't realize boolean expressions produce values. Instead, they think they are syntax that can only be used inside control flow statements. It is a mental leap to realize that you can go from:

    if (thing == otherThing) { doStuff(); }
    moreStuff();
    if (thing == otherThing) { doLastStuff(); }
To:

    var sameThings = thing == otherThing;
    if (sameThings) { doStuff(); }
    moreStuff();
    if (sameThings) { doLastStuff(); }
2. In some sense, recursion is about thinking of the procedure itself as an operation you can use even while being in the middle of defining it. The mental trick is: "Assume you already have this procedure; how could you use it while defining it?"

3. Closures are another big one where you take a piece of code and turn it into a value that you can delay executing, pass to other procedures, etc.


I see what you're saying. Unfortunately that's not the only problem with monads. Monads have two problems:

- They're explained badly.

- Once they're explained well, many (including me) think it's a bad idea (not the monad, the motivation behind its use in pure-FP).

You start with a pure functional formalism, because you like to be stateless. Then you realize that avoiding statefulness is impossible in computing. So you try to shoehorn state into your stateless state of affairs (no pun), while at the same time refusing to admit that you're not stateless anymore.

The larger issue: some folks appear to think that imperativity is a subset of declarativity.

What that really means is that they're saying computing is a proper subset of math.

And by that, what they're really saying is that actions are a subset of words.

In other words, if you write something on a piece of paper that describes some action in the real world, (roughly speaking) that action happens or is supposed to happen automatically.

That's not how the world works. That's not how computers work. And I'm sorry to say that's not how programming works.

Computing is not a subset of mathematics. And Mathematics is not a subset of Computing. Same goes with Physics (Physics is not a subset of Mathematics, despite there being way more math used in Physics than CS).

Physics, Computing, and Mathematics are the holy trinity of the reality that we live in. You have to give each of the three the respect they deserve, and only try to make connections between the three, and NOT try to make any of them a subset of the other.


> Computing is not a subset of mathematics. And Mathematics is not a subset of Computing. Same goes with Physics (Physics is not a subset of Mathematics, despite there being way more math used in Physics than CS).

if i understand you correctly, are you saying that mathematics in the classical sense is not “computing” (von neumann machines?), just as math is not physics, but math is used to model physics

then the proper way to think about the problem is: can there be a math that models actual computing (does that include physics?) as it is today, and is that what we already have in our programming languages today?

and finally, maybe your main point is, with purely functional languages, there is a math invented (lambda calculus?) to describe a stateless ideal that is shoehorned into a reality that it doesn’t describe?

apologies for the random thoughts, just trying to grok everything ^_^


I think what I'm trying to say also goes by a well known aphorism by Alfred Korzybski: The map is not the territory [1].

We like to model stuff in computing and stuff in physics, using mathematics. What we're really doing is creating maps (math models) of the territory (computing, and physics). We notice there is some part of the stuff not fully mapped, so we make our map more complex and more elaborate. We keep making our map more and more complex not realizing that not everything about the territory can be mapped, because territory has an existence independent from the map. No matter how complex we make the map, it cannot explain away the territory itself.

What pure-FP advocates are claiming is that pure-FP is all that's needed for computing. Then they notice (or someone points out) that there is stuff like computing state that is not pure. The pure-FP people go 'no problem, we have monads'. But now they've made pure-FP more complex, and it's not pure anymore. And it now depends on external execution to happen to have state updated. What they appear to be doing is trying to explain away imperative and stateful aspects of computing, which are there as an inherent part of the computer, existing and running in the real world.

You might be interested in a past discussion here [2].

[1] https://en.wikipedia.org/wiki/Map-territory_relation

[2] https://news.ycombinator.com/item?id=17645277


This is exactly why I wanted to write this post. I feel like the pattern is valuable besides "we need it to have stateful computation." I have to mention this use for completeness, but I wanted to focus on using monads as another abstract software development pattern, because that's how it is applicable in non-FP languages (async/await for example).


thanks, really appreciate the thoughtful answer and i’ll take a look at those links as well

much food for thought


I think I approximately understand monads, but I find the "wrap the entire external universe" type of explanation to confuse me a bit. It's just the result of one IO computation + a way of handling / unwrapping it!

When trying to map (or bind :) a monad to OOP/imperative programming, it strikes me as more straightforward to think that, e.g., the IO monad is an object that encapsulates the result of a network operation, together with some utility functions for dealing with the result and not having to deal with the unwrapping of the result. Kind of like Futures or Promises.

(Now, here the real FPers will say that the comparison is flawed because of certain FP desiderata like referential transparency, but that's beyond the extent to which I've internalized monads.)


> result of one IO computation

But for many people, they will be used to IO returning nothing, or error values they can ignore. So the case has to be made as to why you need a monad there at all.

(I've just had the slightly disturbing realisation that C++ streams are also monads... the documentation never uses this term.)


> I've just had the slightly disturbing realisation that C++ streams are also monads

You might like [1].

It's part of Bartosz Milewski's extensive discussion of C++[2], but I'm unsure how that mass can be accessibly approached... maybe by starting around 2011. I very fuzzily recall liking his "Re-discovering Monads in C++" (2014)[3], but I didn't quickly find video. Apparently he gave a "Monads for C++" talk in 2017[4].

[1] https://bartoszmilewski.com/2014/04/21/getting-lazy-with-c/ [2] https://bartoszmilewski.com/category/c/ [3] https://www.slideshare.net/sermp/rediscovering-monads [4] https://www.youtube.com/watch?v=vkcxgagQ4bM


> I think monads highlight something underappreciated about programming, which is that different people regard very different things as "intuitive". It almost seems that modalities of thinking about programming are bigger, external things than programming itself.

I think this also. You need to pick a language that fits your problem area, but also one that fits your brain.


> How do you deal with state in a language that would prefer to be stateless?

Encapsulation? No one gets direct access to the state. Instead, there are methods or functions for dealing with the state indirectly, crafted to protect those outside.

> Answer: wrap the entire external universe and all its messy state up into an object, then pass that down a chain of functions which can return "universe, but with some bytes written to the network" (IO monad)

Sounds like "Outside the Asylum" from the Hitchhiker's Guide to the Galaxy universe. Basically, someone decided the world had gone mad, so he constructed an inside-out house to be an asylum for it.

http://outside-the-asylum.net/

"A monad is a type that wraps an object of another type. There is no direct way to get that ‘inside’ object. Instead you ask the monad to act on it for you." How is a Monad anything different than a particular kind of object wrapper?

https://www.infoq.com/presentations/functional-pros-cons/


1) Adding getters and setters does not make a program stateless. Your race-conditions and side-effects just take more steps.

2) "A monad is a type that wraps an object of another type. There is no direct way to get that ‘inside’ object. Instead you ask the monad to act on it for you." Your objection to this quote is right, because the quote is wrong.


>How is a Monad anything different than a particular kind of object wrapper?

It _is_ a particular type of wrapper object. That's where the whole "in the category of endofunctors" comes in. An endofunctor being a functor from something to itself.

You have IEnumerable<SomeObject> and that lets you do SelectMany to flatmap some internal nested set of objects down to IEnumerable<SomeOtherObject>.

The shape of what you get back doesn't change. You get back an IEnumerable of something. That has a specific contract on which you can do specific operations regardless of the object it is wrapping.

The other piece of the puzzle is that it is monoidal. A monoid is just a collection of objects + a way of combining them + a 'no-op'. This is usually worded something like "a set of objects with an associative binary operator and an identity".

The classic definition "a monad is just a monoid in the category of endofunctors" is worth picking apart piece by piece. But it's also utterly useless, because you have to spend quite a while picking it apart, then looking at concrete instances of every one of the individual abstractions to understand what the hell each part individually looks like, and then put it back together in your mind.

That definition is classically used as a joke because it's so terse you have no hope of understanding it without a lot of study, yet at the same time it's so precise it's all the information you need!


What exactly do you mean by stateless? Encapsulation in the OOP sense is not stateless, because two identical method-calls may not return the same value. Example: if you have an ArrayList, calling `size` at one point in time might have a different result than calling `size` now. That's why I say the list 'remembers' its 'state'.

An object wrapper has to have the object somewhere in memory, you just can't touch it. With a monad, the object might not even exist yet (example: Promise) or there might be more than one (example: Collections).


The person you're replying to is talking about handling mutable state in a "stateless" way. That's a big distinction.


> The person you're replying to is talking about handling mutable state in a "stateless" way. That's a big distinction.

How so? Why is it such a big distinction? Why isn't that just encapsulation? "Handling mutable state in a 'stateless' way" is basically just Smalltalk style Object Oriented programming. (As opposed to Java or C++, which has some differences.)


The value of monads is that they fold side-effects into the return values.

  dirty languages: 

  input -> function_a -> output/input -> function_b -> output 
               ^                             ^
               |                             |
          side_effect_a                 side_effect_b
               |                             |
               v                             v
          lovecraftian_primordial_soup_of_global_state

  pure languages: 

  input -> function_a -> output/input -> function_b -> output 
               ^                             ^
               |                             |
          side_effect_a                 side_effect_b ------>
               |
               +-------------------------------------------->
If C++ were pure, the type-signatures would look like

  (output, side_effect_a) function_a(input);

  (output, side_effect_b) function_b(input);
The drawback is that the type-signature of function_b(function_a()) becomes complex. Now, function_b needs to accept and pass on the upstream side-effects. To compose function_a and function_b, we need to convert the type-signature of function_b to

  (output, side_effect_b, side_effect_a) function_b(input, side_effect_a); 
Fortunately, ">>=" converts function_b under the hood, which allows us to write

  function_a(input) >>= function_b >>= function_c >>= function_d
and pretend that each ">>=" is just a ";" without wrestling with compound inputs and compound returns.
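
To make that ">>=" bookkeeping concrete, here is a minimal TypeScript sketch (the Logged name and log-lines-as-side-effects are assumptions for illustration; this is essentially the classic Writer idea, not the comment's C++):

    // Sketch: each step returns its result together with the effects it
    // produced; bind concatenates the effects so callers never thread them by hand.
    class Logged<T> {
      constructor(readonly value: T, readonly log: string[]) {}

      static of<T>(value: T): Logged<T> {
        return new Logged(value, []);
      }

      // The ">>=" of the diagrams above: run f on the value and
      // carry the upstream side effects forward automatically.
      bind<U>(f: (value: T) => Logged<U>): Logged<U> {
        const next = f(this.value);
        return new Logged(next.value, [...this.log, ...next.log]);
      }
    }

    const functionA = (n: number): Logged<number> =>
      new Logged(n + 1, ["functionA ran"]);

    const functionB = (n: number): Logged<number> =>
      new Logged(n * 2, ["functionB ran"]);

    // Logged.of(1).bind(functionA).bind(functionB)
    //   => value 4, log ["functionA ran", "functionB ran"]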


It is encapsulation with the mandatory law stating that an encapsulation of an encapsulation must be as deep as a single encapsulation.

Note that this informal statement doesn't necessarily mean you have to be encapsulating data. A behavior, a contract, an assertion, compositions of all of these etc can also be encapsulated.

Fun ideas (not necessarily true but fun to think about):

* Monads kinda convert nesting to concatenations.

* A monad is the leakiest of the abstractions. You are always a single level of indirection away from the deepest entity encapsulated within.

* What's common among brackets, curly braces, and parentheses is that they are monadic if you remove them from their final construction while keeping the commas or semicolons.

Very important note: You should have already stopped reading by now if you got angry due to these false analogies.


A few observations which might also be false.

We can have arbitrarily nested monads:

  monadicObj.bind((T value) => monad.wrap(monad.wrap(value)));
Remember, `bind` only unwraps one layer. Without it unwrapping one layer, programs would continue accumulating large stacks of monads in monads.

I would also point out that it only collapses abstractions of the same kind; Maybe's bind only unwraps one layer of Maybes. If you have a Promise<Maybe<Foo>> where Foo contains potentially more monads as instance-variables, those don't all get collapsed.

I like the 'converting nesting to concatenating' observation.

Sometimes we do need parentheses though, because not every operation is associative: 5 - 2 - 1 is not the same as 5 - (2 - 1). Basically, minus does not form a monoid, so the parens matter.


Any references that explain the "converts nesting to concatenation" idea? I find it fascinating, in particular because I write a lot of code that works on deeply nested data structures -- structs of values (which can be structs) or lists of values. The distinction between struct, list, and value and the need to treat them differently in code is interesting and annoying, and goes beyond merely working with functors and applicatives. I don't understand lenses at all, but I understand monads.


Do ctrl-f for "Nested Operator Expressions" in this piece: https://martinfowler.com/articles/collection-pipeline/

Some other references helped me along the way:

* http://www.lihaoyi.com/post/WhatsFunctionalProgrammingAllAbo...

* http://learnyouahaskell.com/chapters


Reading a little more about it, I think the concatenation idea more properly fits with the join operator, which acts like a flatten function.
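
For arrays, at least, that flattening is built in; a tiny sketch:

    // join for arrays is literally "remove one level of nesting"
    const nested: number[][] = [[1, 2], [3], []];
    const flattened: number[] = nested.flat();   // [1, 2, 3]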


>A monad is a type that wraps an object of another type.

So, the Adapter Pattern, but for types?


Monadic is a type-class. Like how Equatable is a type-class. Adapters essentially add a specific type to the type-class the client is looking for.


I think the programming pattern paradigm is the right way to explain monads (as you can tell from my own monad explanation: https://kybernetikos.com/2012/07/10/design-pattern-wrapper-w...). The category theory language around it is off-putting to working programmers, and many of the ways people explain it are by trying to introduce yet more terminology rather than just working with the perfectly adequate terminology that working programmers already have.

I think part of it is that lots of languages don't have sufficient abstraction ability to encapsulate the monad pattern in their type system, and those that do tend to be academically focused. That doesn't mean you can't (and don't) use monads in the other languages; it's just that you can't describe the whole pattern in their type systems.

I was pretty sad that the discussion around javascript promises sidelined the monad pattern.


In my experience it’s easiest for people to understand monads when they’re presented as an abstract data type (e.g. what I wrote at https://codon.com/refactoring-ruby-with-monads#abstract-data...) rather than a programming pattern, because despite having “abstract” in the name, abstract data types are a relatively concrete thing that programmers already know how to use.


I watched that talk a while ago and it was great, thank you.


> The category theory language around it is off-putting to working programmers

Off-putting to some working programmers. I am a mathematically-minded working programmer who prefers mathematical and type theoretical explanations quite strongly since they just click for me.


This seems a pretty good introduction to monads.

There is a cliche that no-one can write a good introduction to monads. I don't think that is true. My opinion is more that monads were so far from the average programmer's experience they could not grok them. I think as more people experience particular instances of monads (mostly Futures / Promises) the mystique will wear off and eventually they will be a widely known part of the programmer's toolkit. I've seen this happen already with other language constructs such as first-class functions. ("The future is here just not evenly distributed.")


"Introduction to monads" articles generally miss the mark because either 1) they insist on using Haskell syntax throughout, and this is most likely to be unfamiliar and obtuse to programmers looking for this kind of articles. Expecting people to learn a new syntax at the same time as a new concept is bound to be confusing. At least it was for me when I first came across the idea. Or 2) they go through a bunch of examples with various names and number of methods, like Maybe and Collection in this article, and the reader is supposed to infer the common structure themselves. At least this article goes through the formal definition, but I think that ideally that should come first, as it is easier to see the structure of the examples once you have established a mental model.


> they insist on using Haskell syntax throughout, and this is most likely to be unfamiliar and obtuse to programmers looking for this kind of articles.

Indeed, that is a big part of the problem.

I find "Functional Programming Jargon" [1] extremely approachable (if you already know modern JS) even though it has been pointed out that their definitions might not be "pure"/correct enough.

[1] https://github.com/hemanth/functional-programming-jargon#mon...


you know of something like this in Python? I know enough js to read that but I'd prefer python


Sadly, no :(


I like the comparison with first class functions. I do feel like they are more commonly understood nowadays than when I first started programming ~12ish years ago.

I think because languages like Java are evolving towards a world where those things are common, the average programmer is 'forced' to learn those concepts.


I think for newbies there are two separate aspects to explain: first an intro to algebraic structures perhaps using groups as an example, then monads in particular.

It’s important to emphasize that algebraic structures are abstractions or “interfaces” that let you reason with a small set of axioms, like proving stuff about all groups and writing functions polymorphic for all monads.

With monads in particular I think the pure/map/join presentation is great. First explain taking “a” to “m a” and “a -> b” to “m a -> m b” and then “m (m a)” to “m a”. The examples of IO, Maybe, and [a] are great.
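
A small TypeScript sketch of that presentation, using Maybe as the "m" (the names pure/map/join here are just placeholders, not from any library):

    // Illustrative Maybe shape (sketch only)
    type Maybe<T> = { kind: "just"; value: T } | { kind: "nothing" };

    // "a" to "m a"
    const pure = <T>(value: T): Maybe<T> => ({ kind: "just", value });

    // lift "a -> b" to "m a -> m b"
    const map = <A, B>(f: (a: A) => B) => (m: Maybe<A>): Maybe<B> =>
      m.kind === "just" ? pure(f(m.value)) : { kind: "nothing" };

    // "m (m a)" to "m a"
    const join = <A>(mm: Maybe<Maybe<A>>): Maybe<A> =>
      mm.kind === "just" ? mm.value : { kind: "nothing" };

    // bind then falls out as "map, then flatten one layer"
    const bind = <A, B>(m: Maybe<A>, f: (a: A) => Maybe<B>): Maybe<B> =>
      join(map(f)(m));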

You can also mention how JavaScript promises don’t work as monads because they have an implicit join semantics as a practical compromise.


You really don't need to introduce groups or other algebraic structures to understand monads, and if your goal is to teach monads I believe it is harmful to do this.

The average programmer is much more likely to encounter monads (e.g. error handling, promises), than they are to encounter groups in an abstract context. Unnecessary maths will drive people away. Making a big deal of axioms, reasoning, and all this stuff that functional programmers love (including myself) is the approach that has been tried for the last 20 years, and it has failed to reach the mainstream. If you want to reach the average programmer you need to solve problems they care about in a language (both programming and natural) they understand.


Until functional people stop speaking in such an elitist way ("me" vs. the average programmer), people will be driven away from these concepts.


Elitist functional programmers are probably just average programmers that didn't run away screaming at the first sign of math, but instead just confronted what was presented to them and took the time and effort to properly grok the material.


> groups as an example

Why not go all the way and teach functors and applicatives before monads? Then the student can see that monads are just a small addition built on top of the other two. Functors, in particular, are very easy to grasp despite their intimidating name. They just generalize map over arbitrary data structures:

    l_double : List Int -> List Int
    l_double xs = map (* 2) xs

    f_double : Functor f => f Int -> f Int
    f_double xs = map (* 2) xs
Applicatives are a little bit trickier but once you get them, there's only a tiny jump to get to monads. Taught this way, people will realize that they don't need the full power of monads for everything. Then, when people learn about idiom brackets [1], they start to get really excited! Instead of writing this:

    m_add : Maybe Int -> Maybe Int -> Maybe Int
    m_add x y = case x of
                     Nothing => Nothing
                     Just x' => case y of
                                     Nothing => Nothing
                                     Just y' => Just (x' + y')
You can write this:

    m_add' : Maybe Int -> Maybe Int -> Maybe Int
    m_add' x y = [| x + y |]
Much better!

[1] http://docs.idris-lang.org/en/latest/tutorial/interfaces.htm...


Functors are not useful for much on their own, so they are difficult to motivate.

The Haskell formulation of applicatives doesn't make much sense outside of languages where currying is idiomatic, which rules out most languages. In these languages you tend to see the product / semigroupal formulation, and here applicatives become a bit trickier to explain as you need more machinery.


Functors enable the Fix functor and, from there, the whole universe of recursion schemes, so I'm not sure that I agree that they're not useful on their own.


Sure but virtually nobody cares about how to finitize a recursive function when they are trying to learn a new programming paradigm. Recursion seems to work just fine in languages that don't have any of these bells and whistles.

"Hey you can implement Fix" is like saying "now you can program in green" for most readers.


Oh, certainly. My assumption was that the GP meant that functors aren't useful on their own _in general_, rather than in the particular context of someone just getting into the typed functional paradigm. And, of course, recursion does work just fine in other languages, but (and I'm saying this more for posterity than as a retort since I assume you're well aware of this point) recursion schemes offer a layer of abstraction over direct recursion that eliminates the need to manually implement various recursive operations for each of the data structures you have at hand. As with a lot of the higher-level constructs in languages like Haskell, that level of abstraction may not have any practical benefits in most domains, but it's nice to have when it does offer a benefit in a particular domain.


Is this parody?


It is pretty hilarious. Map is such a commonplace concept, people are familiar with map and filter over arrays, but OP makes the exact same mistake 90% of people explaining this stuff do. He used Haskell / Idris syntax.

If someone already can intuitively/naturally read Haskell syntax, they've likely already looked at a bunch of these tutorials and read about functor/map.

For those you're targeting who don't read Haskell syntax naturally, explaining something in a language they don't speak is of zero use to them.

The above comment managed to make something people already know confusing.

Monoids are nothing but a design pattern generalizing string append and the empty string.

Functors are simply a design pattern generalizing "map()" over arrays.

Monads are simply a design pattern generalizing "SelectMany()" over arrays.

It turns out that these patterns are significantly more powerful than most programmers realize, and by learning the underlying design pattern you'll be able to recognize novel situations where you can apply them and write much better/simpler code and APIs.
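
In everyday TypeScript/JavaScript terms, with no new syntax, those three sentences look roughly like this:

    // monoid: string append plus the empty string as the no-op
    const appended = "foo" + "" + "bar";                 // "foobar"

    // functor: map() over an array
    const doubled = [1, 2, 3].map(x => x * 2);           // [2, 4, 6]

    // monad: flatMap() over an array (SelectMany in C# terms)
    const pairs = [1, 2, 3].flatMap(x => [x, x * 10]);   // [1, 10, 2, 20, 3, 30]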


I get the intention, but it's even harder to understand with this Java/C# syntax. I feel like if you're gonna talk about FP you should probably lead with the Haskell or Scala code (or similar) and provide OOP stuff for reference in case it's not clear. It ends up being so verbose that I don't think many people see the 'point'.


To me, reading examples in a syntax I'm familiar with helps me much more than reading them in a language where the concept might be elegant to express, but hard for me to get into.

I think it’d be best to include examples in a number of languages with a lanugage selector, actually. That way, people who are fluent in functional languages can read that version, and others can read the one they are more fluent in.


Here's a real-life tangential example: I'm learning Thai. When I first started, it was useful to see things written in the Latin alphabet to get a grasp, but it's often difficult or inaccurate to try to express that language without the system designed to express it. For instance, many romanizations lack a tone marker for a tonal language, or don't show vowel length when it matters. Or, for a laughable example, พร, meaning 'blessing' but also a common woman's name, is traditionally romanized as 'porn' with its silent r, when 'pawn' is a more neutral pronunciation for all English dialects (or /pʰɔːn/). Also, not being able to read or write the language, you will have a difficult time effectively engaging with the community and understanding the world around you.

The point is, functional languages are designed to express these concepts with much, much less cruft with extra features like currying, immutability-by-default, and type classes. Yes, use your current language to get oriented, but if you're going to really learn it, pick up a proper syntax to express it.


Sure, but to extend your metaphor, using Haskell to teach non-Haskell programmers about monads is like using Thai to teach English speakers the history of Thailand; the language might be better suited for the topic, but the people you're teaching don't know it, and learning it is orthogonal to the actual thing you're trying to teach them.


That's why I said have both with the emphasis on the one that better expresses it so people know what you're aiming for. I've written FP blog posts in the past with Javascript to accompany just in case it wasn't clear because of syntax. But if you'd just looked at the Javascript, your takeaway will probably be similar to many FP-in-JS articles of 'why so much ceremony?' The 'other' language is merely a bridge. And if you can only communicate from the bridge language's perspective, you won't be able to grasp intermediate topics.


This seems pretty good; on my mental checklist of "common monad discussion failures" it's only a half-point off, because I'd suggest for:

"A monad is a type that wraps an object of another type. There is no direct way to get that ‘inside’ object. Instead you ask the monad to act on it for you."

that you want to add some emphasis that the monad interface itself provides no way to reach in and get the innards out, but that does not prevent specific implementations of the monad interface from providing ways of getting the insides. Obviously, Maybe X lets you get the value out if there is one, for instance. This can at least be inferred from the rest of the content in the post, since it uses types that can clearly be extracted from. It is not a requirement of implementing the monad interface on a particular type/class/whatever that there be no way to reach inside and manipulate the contents.

But otherwise pretty good.

(I think this was commonly screwed up in Haskell discussions because the IO monad looms so large, and does have that characteristic where you can never simply extract the inside, give or take unsafe calls. People who end up coming away from these discussions with the impression that the monads literally never let you extract the values come away with the question "What's the use of such a thing then?", to which the correct answer is indeed, yes, that's pretty useless. However, specific implementations always have some way of getting values out, be it via IO in the case of IO, direct querying in the case of List/Maybe/Option/Either, or other fancy things in the fancier implementations like STM. Controlling the extraction is generally how they implement their guarantees, if any, like for IO and STM.)


I think it's important to separate the issue of what monads are/how they're used from the question of when they should be used at all. While monads are very useful for working with various streams/sequences even in imperative languages, they are used in Haskell for what amounts to effects in pure-FP, and that use ("Promise" in the article) has a much better alternative in imperative languages. Arguably, it has a better alternative even in pure-FP languages (linear types).

Here's a recent talk I gave on the subject: https://youtu.be/r6P0_FDr53Q


The problem with monads is they are horrible without some form of syntax sugar. I like the metaphor of "programmable semicolon", but in languages without some built-in support, the "semicolon" becomes repetitive boilerplate which is more code than the actual operations happening in the monad.


I like the "do" notation in Haskell because it boils down the meaning of monads to the following:

Monads let you break the functional programming paradigm that a function should return the same value each time it is called, e.g.

    do
        x <- getInput
        y <- getInput
        return (x + y)
here getInput is called two times and each time it has a different value. When you now think about how this can happen in a functional language you have to understand what a monad does.

The eureka moment came when I learned about flatMap in Scala, which is nothing else but the bind function in Haskell ("just flatmap that sXXt"), and voilà, that's how to use monads.
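
A rough TypeScript parallel of the same shape (getInput here is just a stand-in that yields a different value each call): each .then plays the role of bind/flatMap, and the nested callbacks are what do-notation hides. As noted elsewhere in the thread, Promises flatten implicitly, so this is only an approximation of bind.

    // Stand-in for an effectful input source.
    const getInput = (): Promise<number> => Promise.resolve(Math.random());

    // The do-block above, written out with explicit "binds":
    const sum: Promise<number> =
      getInput().then(x =>
        getInput().then(y => x + y));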

See the following explanation:

https://medium.com/free-code-camp/demystifying-the-monad-in-...


It was hard to explain 20 years ago, but today, if someone has used enough of things like Reactive Extensions, Promises, LINQ, async/await, Optional, etc., there's a good chance they'll start to wonder about the similar pattern behind them all, and then they can understand the abstraction very easily.


I once googled "functional programming for category theorists" and obviously got "category theory for programmers" instead (incidentally, I use Milewski's book in this reverse way).

I still have a rudimentary understanding of functional programming (apart from the canonical "it's just an implementation of lambda calculus"). And I have to say that without exercise and training one grabs at wisps and mist. In mathematics it's also like this: you often have your favorite prototypical monad, adjunction, group, set, etc. (e.g. the adjunction Set->Set^op by powerset is a strong contender). And I view axiomatic systems, in essence, as a sort of list of computational rules (e.g. transitive closure).

I haven't found some idiosyncratic project to code in Haskell yet though...


I guess I don't get all this "monad" stuff. This article talks about three types of monad: an optional, a list, and a future.

However an optional is really just a list constrained to size 0 or 1. And a future is often called "not truly a monad."

So I question the value of explaining this abstraction in great detail over so many articles when people struggle to come up with more than 1 concrete example of it (Lists), an example that engineers have already understood since our first month coding.

Maybe somebody can speak to this more.


Other replies talk about how Promises can be implemented as monads. Aside from that, you can come up with your own, too.

One monad that I occasionally use is something I'll call "Tracked". For "return" (when we make a new instance of the monad) we store a pair (initialValue, initialValue). For "bind" (when we act on what's in the monad) we only ever touch the second value in the pair, returning (initialValue, transformedValue).

That way, you can know where this piece of data came from. I've gotten a lot of mileage out of Tracked<Result<T>>: when one of your Results is an exception, then you can check what piece of data ended up triggering that exception. Yes, you could do this without the Tracked monad, but doing it monadically means that most of your functions don't need to know or care about tracking the initial data; you can just Apply those simpler functions and the Tracked instance will do it for you.
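
A minimal TypeScript sketch of that Tracked idea (the names and methods are made up from the description above, not from any library):

    // Sketch: a pair of (where this value came from, what it is now).
    class Tracked<T, V> {
      constructor(readonly initial: T, readonly current: V) {}

      // "return": store the pair (initialValue, initialValue)
      static of<T>(value: T): Tracked<T, T> {
        return new Tracked(value, value);
      }

      // Apply a plain function that knows nothing about tracking:
      // only the second half of the pair changes, the origin rides along.
      // (A full monadic bind would likewise keep `initial` untouched.)
      apply<U>(f: (value: V) => U): Tracked<T, U> {
        return new Tracked(this.initial, f(this.current));
      }
    }

    // Tracked.of("  42 ").apply(s => s.trim()).apply(Number)
    //   => initial "  42 ", current 42
    // If a later step throws, `initial` still tells you which input caused it.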


> And a future is often called "not truly a monad."

I think you've misunderstood this observation. It is possible to write a future library that gives a monad (I use such a library regularly). But the most common ones do not, because it happens to be a decent design decision in dynamically-typed languages to not quite obey the monad equations.

The next interesting monad is probably Haskell's Either (or OCaml's Result, if you're into that). It is only a slight twist on the optional monad. Where optional's None case contains no data, Either's Left case can contain data.
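
A small sketch (made-up TypeScript names) of that "Left case can contain data" point:

    // Illustrative Either shape: like optional, but the failure side carries information.
    type Either<L, R> = { kind: "left"; value: L } | { kind: "right"; value: R };

    const bindEither = <L, A, B>(
      e: Either<L, A>,
      f: (a: A) => Either<L, B>,
    ): Either<L, B> => (e.kind === "right" ? f(e.value) : e);

    // Example: parsing keeps the reason for failure instead of just "nothing".
    const parseNum = (s: string): Either<string, number> =>
      Number.isNaN(Number(s))
        ? { kind: "left", value: `not a number: ${s}` }
        : { kind: "right", value: Number(s) };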

After the collections (list and optional), either, and future monads, the difficulty to understand useful monads without first understanding the category of monads jumps considerably. If you're interested, the next ones to look at would be the reader, writer, state, and continuation monads. There's also the classic example from Haskell, the IO monad.


You can go a very long way in programming without ever needing to explicitly use a monad for anything. The usual route into needing them is something like Haskell's IO: if you want as much as possible of your code to be stateless and immutable, how do you deal with IO, which inherently changes state external to the program?

If you've learned from the assembly end of programming upwards, it can be very hard to see the need for them at all.


You can go a very long way in programming without realising you have inadvertently been using monads present in the language/standard library :P


> However an optional is really just a list constrained to size 0 or 1.

True, but you've only written about them as data types, which misses the monadic part. What makes them monads is that you can join any Optional<Optional<T>> into an Optional<T>. Likewise you can join a List<List<T>> into a List<T>. It is that principled 'smooshing down' of the data types that makes them monads.
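
A quick sketch of that 'smooshing down' for the Optional case (made-up TypeScript shape):

    // Illustrative Maybe shape (sketch only)
    type Maybe<T> = { kind: "just"; value: T } | { kind: "nothing" };

    // join removes exactly one layer of wrapping
    const joinMaybe = <T>(mm: Maybe<Maybe<T>>): Maybe<T> =>
      mm.kind === "just" ? mm.value : { kind: "nothing" };

    // joinMaybe(Just(Just(3)))  => Just(3)
    // joinMaybe(Just(Nothing))  => Nothing
    // joinMaybe(Nothing)        => Nothing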


Promises, as exposited in the article, are an interface for bona fide monads. I think Promises are the best example of monads because they don't just wrap a type in memory. Maybe is kind of a subset of List (as you mention), and for both the contents are available in memory. Promises are not like Lists, and their contents can't just be pulled out, so you have to use `.then` (monadic bind) to do computation on them.


> However an optional is really just a list constrained to size 0 or 1.

What?


The sentence you quoted would make more sense if you interpret "is really just" like a mathematician, i.e. they're 'equivalent' in the sense that you can 'map' everything important about either in both directions.

More concretely, "an optional" is something that either 'has a value' or 'does NOT have a value'. So it "is really just a list constrained to size 0 or 1" in the sense that an optional that 'does NOT have a value' is equivalent to a list of size 0, i.e. a list with no contents/members, and an optional that DOES 'have a value' is equivalent to a list of size 1, i.e. it's 'value' is the single/unique member of the list.

Think of statements like 'is really just' in the sense that an integer between 0 and 255 'is really just' 8 ordered bits – they're 'equivalent' in the sense that you can map all possible values of those integers to all possible values of 8 ordered bits, and vice versa, and (importantly in a mathematical sense) in a way such that every integer is mapped to a single set of 8 ordered bit values and every set of 8 ordered bit values is mapped to a single integer. In mathematics that's often described as an 'isomorphism' which is, in working programmer terminology, just a way to convert back and forth between two sets of values 'perfectly and losslessly'.


An empty list (length zero) is "None" and a single element list is "Some" or "Just" whatever the lone element in the list is.

And of course the list is constrained to not contain more than a single entry.


He means that the type `Optional a` is isomorphic to the type `ListOfLengthAtMostOne a`. This is because we can pair their values up perfectly: Nothing ~ [] and Just a ~ [a].
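
Spelled out as a sketch in TypeScript (the names are illustrative):

    // Illustrative Maybe shape (sketch only)
    type Maybe<T> = { kind: "just"; value: T } | { kind: "nothing" };

    // Nothing ~ []
    // Just a  ~ [a]
    const toList = <T>(m: Maybe<T>): T[] =>
      m.kind === "just" ? [m.value] : [];

    const fromList = <T>(xs: T[]): Maybe<T> =>
      xs.length > 0 ? { kind: "just", value: xs[0] } : { kind: "nothing" };

    // For lists of length 0 or 1, fromList(toList(m)) gives back m and
    // toList(fromList(xs)) gives back xs, which is what "isomorphic" means here.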


Another really good functional (as in related to how they work) explanation of monads:

http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...

It's somewhat more than a monad explanation -- it covers functors and applicatives, and is somewhat Haskell-specific, but it was one of the guides that really clicked for me when I was trying to grok monads.


Wow, nice post. I can also recommend Mark Seemann's take on monads: https://blog.ploeh.dk/2019/02/04/how-to-get-the-value-out-of...


This really was excellent. It actually helps with the category theory def.


This reminded me of a joke about monads: "A monad is just a monoid in the category of endofunctors, what's the problem?" [0]

[0]: http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m..., 1990 Haskell entry



