Functors and Monads for People Who Have Read Too Many “Tutorials” (jerf.org)
359 points by zdw on June 26, 2021 | 248 comments



I strongly dislike Haskell's focus on theory. Just show me examples of how to get things done and then the concepts will become clear. The Haskell and mathematics communities too often act like ivory towers that humans have to become enlightened to understand, and that simply is not how my brain is wired. I'm not sure if there have been any studies on the usability of languages, but I think that Haskell would come out very, very poorly. There's a reason why Python is so popular.


For context: Haskell was the 4th or 5th language I learnt and the first pure functional one. My graduation thesis work was in Haskell and its backend/compiler, so at one point I was living/breathing Haskell.

> ...too often act like ivory towers...

I 100% agree with this. The Haskell community as a whole, and various tutorials etc., aren't optimised for a newcomer.

For instance, you don't see "stand up a web app in Haskell in 10 minutes" tutorials. Historically it may have worked in Haskell's favour to slowly induct learners, but in this age where languages are fighting for mind space I don't agree that's a good approach anymore.

Simon Peyton Jones (SPJ), one of the earliest and most famous champions of Haskell, coined the phrase "avoid success at all costs" in a certain context, which may have had some role in making Haskell seem ivory-towerish.

A language, like any living being, needs to adapt to survive and thrive. It's unfortunate to see Haskell remain a niche language even after close to 3 decades.

> and that simply is not how my brain is wired.

This, however, I disagree with. I mean in a general sense, not specific to how your brain is wired. It all depends on the first couple of languages that one learns and how they are learnt and taught. 10-15 years ago people would find it hard to grok Python's functional concepts. But as it began to be taught as the first language in universities and those graduates joined the working population, you see how natural it is for them to grok Python.


> This, however, I disagree with. I mean in a general sense, not specific to how your brain is wired. It all depends on the first couple of languages that one learns and how they are learnt and taught. 10-15 years ago people would find it hard to grok Python's functional concepts. But as it began to be taught as the first language in universities and those graduates joined the working population, you see how natural it is for them to grok Python.

That must be the reason why Edsger W. Dijkstra once stated [0]:

> "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."

... and ...

> "The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense."

... and ...

> "APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums."

I must be one of those mentally mutilated programmers with no hope of regeneration :)

---

[0]: https://www.cs.scranton.edu/~mccloske/dijkstra_quotes.html


Dijkstra. A smart man. But also an arrogant prick. The typical professor living in his own bubble who never dealt with businesses and commercial activities. Those statements are total nonsense.


When BASIC, for example, redefined what "variables" are and what the equals operator means, it did cause a generation of programmers to drift away from the long-established mathematical definitions. This persists to this day in languages like Python. I have had to explain to my daughter that Python variables and equals are not like those in her maths lessons. Dijkstra was right to call out the bad decisions that were made, although I do concede his methods of doing so were not ideal.


FORTRAN is seven years older than BASIC and uses the equal sign and variables in the same way. Here is an excerpt from the 1957 paper by Backus et al.:

"Evaluate the expression on the right of the = sign and make this the value of the variable on the left."


Good point, but IIRC Dijkstra didn't like Fortran either. ALGOL (1958) used ":=" for assignment, as did Pascal some years later.


His statements make a lot of sense: the first language learned often has a profound effect on programming habits. That's why proverbs like "one can program in COBOL in any language" appeared.


I think that some of the quotes are pretty toxic, regardless of who they are by. Programming languages are just tools and as such, they should be reasonably easy to use. Programming languages should serve people, not the other way around.

Elitism and gatekeeping do precisely nothing to help people solve problems with code.


> Elitism and gatekeeping do precisely nothing to help people solve problems with code.

I agree 99% here. The 1% is where you want to shock someone to break through. Not sure if that was Dijkstra's intention, but I assume it was. He was a teacher as well, writing books, giving lectures. I think he was genuinely concerned.


> I think that some of the quotes are pretty toxic, regardless of who they are by.

They are indeed, aren't they?

I mean, they straight up sound like bullying those who happen to not think alike or share the same opinion.

Those who are in the right are able to form rational arguments, raise concerns about specific problems, and offer solutions. But no, not in this case. The only reason for anyone to not be a diehard supporter of the author's point of view is that they have mutilated minds and are coding bums.


> Those who are in the right are able to form rational arguments, raise concerns about specific problems, and offer solutions. But no, not in this case. The only reason for anyone to not be a diehard supporter of the author's point of view is that they have mutilated minds and are coding bums.

Don't we take things a bit too seriously these days? I started programming with BASIC and I find the quotes kinda funny - I don't take them too seriously and I am not sure if Dijkstra was being dead serious here either. For example: would anyone really think Dijkstra was in favour of locking COBOL teachers behind bars?


> Don't we take things a bit too seriously these days?

I agree that this self righteous fake outrage targeted at those who point out bullying and abuse gets a bit too tiring, and adds nothing to the discussion.

This is a very good example. Calling out the lack of rational arguments or reasoning on a technical subject, and the resort to ad hominem and bullying to fill that void, would be expected to be met with explanations of the technical merits of the original claims.

But no, here we are wasting time with chatter that boils down to "why aren't you ok with being subjected to abuse? Don't we need to roll over and shut up when someone throws ad hominems?"

I guess we don't? But it seems that some people whine when they can't take what they are dishing out.


> Elitism and gatekeeping do precisely nothing to help people solve problems with code.

Unfortunately, elitism and gatekeeping "solve problems with code" indirectly, by keeping inadequate people away from developer roles where they would make problems worse with negative-value contributions. Software mistakes are common and expensive: there are very strong incentives to predict and prevent them by following processes and by putting someone competent in charge.

It is of course possible for elitism and gatekeeping to select the wrong sort of people (for example, not recognizing terrible people outside a manager's area of expertise), but even intolerable toxic attitudes (e.g. hiring graduates from certain universities) can effectively keep away dangerous people.


I'm not entirely sure that I can agree with that statement in its entirety, at least in the context of this particular type of toxic gatekeeping.

If there are people who are inadequate at writing code and architecting software, then they should:

  - not manage to pass their university classes and not get a degree or a programming qualification
  - not manage to pass their internship by generating some value and proving that they're capable of learning and self-improvement
  - not manage to get the necessary certificates for a particular technology, as a vague proof of basic competency
  - not manage to solve the take home tasks that they're given by a company that's about to hire them or pass technical interviews
  - not manage to pass onboarding for some months and therefore should be fired
  - not manage to deal with their duties as a developer and therefore should be fired
Of course, depending on different cultures, these things could change (e.g. bootcamp instead of university education, personal projects instead of certificates), but none of those involve calling someone: "...mentally mutilated beyond hope of regeneration," just because they used BASIC, PHP, Python, or any other easy language that let them solve easy problems without getting too deep into the internals of programming languages and how computers work.

Furthermore, the difference is in attitude - if a person doesn't pass one of the above, they can probably just upskill themselves and spend more time refining their craft, reading books, working on projects etc., whereas dismissing them entirely is likely to demotivate them and so they'll never achieve anything. It might also be selecting for the wrong types of people - those who simply brush off criticism like that and don't care, as opposed to those who are more sensitive, something that should hardly matter in regards to developing code.

There has to be a better and more constructive way of criticizing people and even dismissing them: for example, saying "Hey, your programming knowledge seems okay, but you should work on your system design skills. Try applying for a job in a year again," vs simply ghosting them.

> Software mistakes are common and expensive: there are very strong incentives to predict and prevent them by following processes and by putting someone competent in charge.

Lastly, this feels like the job of:

  - the compiler, for errors and warnings
  - the IDE and language server, for general suggestions
  - the linter, for code style rules
  - static analysis tools like SonarQube, for additional code checks
  - testing frameworks (including code coverage gates), for unit, integration and end-to-end tests
  - manual code reviewers, after everything above has been resolved by the code author
  - QA specialists, after all of the previous checks have been successfully passed
As for having competent leaders, senior developers, architects, security specialists etc., I wholly agree! That's not to say that the culture of software development needs to be a parody of anti-social behaviour.


Most of the current generation of programmers have been mentally mutilated by Object Orientation in the C++/Java/C# style.

I include myself. It has been a definite struggle to onboard functional concepts and solve problems in a functional style. Just doesn’t come naturally.


> Most of the current generation of programmers have been mentally mutilated by Object Orientation in the C++/Java/C# style.

Why do you feel the need to denigrate people who are experienced in imperative and/or object-oriented paradigms?

If there is any paradigm that holds its own value, why not praise the value it adds instead of resorting to nothing but ad hominems?

I mean, if something was so unequivocally better, then wouldn't this perhaps have resulted in a massive adoption rate, and wouldn't name-calling pop up less frequently? But no, the popular thing to do is to just denigrate those who haven't jumped on that particular bandwagon.


> if something was so unequivocally better, then wouldn't this perhaps have resulted in a massive adoption rate

That's not what we see happen in life. Just look at how dearly we hold on to coal plants. My country's energy is 90+% coal-based and there is insurmountable opposition to nuclear power, and great suspicion of solar (the government recently started financially deterring the installation of solar panels).


> That's not what we see happen in life. Just look at how dearly we hold on to coal plants.

What a non sequitur. Adopting a programming paradigm in the code that we write on a daily basis has precisely zero to do with the infrastructure cost of switching energy sources.

This non sequitur about coal plants is even more absurd and ridiculous once we factor in the fact that the world is already moving away from coal to renewable energy sources, which is quite costly, resistant to change, and subject to an awful lot of special interest groups, and yet during that timeframe people still chose not to bother with functional programming fundamentalism.

If an idea is good then it holds its own against alternatives. Even in their pet projects people tend not to even bother with pure FP frameworks. While hundreds of millions of euros are being spent on wind farms, people like you and me don't even bother spending a few minutes to get a hello world going with Haskell, even though it's effortless and trivial. Why do you think that's happening?


He's right though.

Yeah, given enough time a good idea will certainly manifest itself. But that process can take a long time, centuries even.

Let's not forget that there are commercial interests in keeping certain languages down and pushing others up. There are huge companies like Google, Facebook, ... pushing for languages (and frameworks).

It's often easier to stick to something existing because change requires effort. And I think that's why coal was mentioned - it's easier to stick to it than to switch to something else, so it will take time but eventually it will happen.


> He's right though.

He really isn't, and the blatant attempt to poison the well is a clear indicator of the lack of arguments.

I repeat: people don't even bother spending a few minutes trying to get a pure FP hello world going. Zero barriers to entry, zero challenge, zero resistance.

But in spite of the lack of any obstacle or challenge, pure FP fails to present a value proposition that justifies even a five minute effort from pretty much anyone.

> Yeah, given enough time a good idea will certainly manifest itself. But that process can take a long time, centuries even.

This baseless assertion holds no water in software development. It does not take a multi-generation epiphany to convince someone to use pure FP. All it takes is a single person willing to invest their time.

But still, those who do invest their time in these ivory-tower experiments don't see value that justifies pursuing any further effort, and thus don't bother pushing it anywhere beyond a cursory test.

Why is that?

> It's often easier to stick to something existing because change requires effort.

This assertion holds no water at all, as pure FP frameworks have existed for decades and still people do not bother with them. Why is that? And how can you hold such a cognitive dissonance of claiming something is so great and yet it does not exist in practice because no one ever bothered to take advantage of such greatness?

Also, are we supposed to ignore the fact that even Microsoft offers pure FP languages that integrate with their .NET stack, and still no one bothers with them at all, even when their use could be contained to very small and self-contained modules?

The truth of the matter is that pure FP fails to gather attention beyond navel-gazing ivory-tower types because it quite blatantly fails to present a value proposition. That's the core of the issue. I mean, languages like Rust are growing in popularity like wildfire in spite of their radically different and restrictive take on resource management, because they present a clear and unequivocal value proposition. But pure FP frameworks, in spite of having a head start of decades and a foothold in dark corners of academia, fail to convince the public that it's in their best interest to even peek in that general direction. Why is that? Do we really need to resort to absurd conspiracy theories to get an answer to that question?


The answer is quite simple.

First, FP required/requires more resources on average. With time progressing and better hardware coming, this point becomes less important.

The strengths of FP, such as easier concurrency, are becoming more relevant as well.

However, even universities take time to catch up. And developers often stick to the style they first learned (or a similar style at least). In addition, they have already created and invested in an ecosystem.

But look at all the new programming languages that are getting created. They include more and more FP features, starting with the easier ones that have a direct impact on productivity (such as lambdas), and it keeps growing. Even languages such as Java are slowly starting to move in this direction.

Or do you want to deny that?

Also, I have no idea what you mean by "FP frameworks".


is that really equivalent though?

there is no govt forcing/de-incentivising functional programming afaik...


If people are mentally mutilated, it is by having been led to believe there is something useful called "Object-oriented", or "Data-oriented", or "Functional", or what-have-you-oriented programming.

Carpenters don't learn "hammer-oriented" building, or "saw-oriented", "screwdriver-oriented", "chisel-oriented", or "glue-oriented". They learn to use hammers, saws, screwdrivers, chisels, and glue, and use them all at different times, sometimes one more than others. Machinists don't learn "lathe-oriented", "drill-oriented", "mold-oriented", or "welding-oriented" fabrication. They do all or any of them strictly according to what they are making, and what they are making it out of.

"This-" and "that-oriented" is a entirely a sales method used to package up coursework and seminars. It is as utterly stupid to design a whole language around one of them as it would be for a carpenter to try to run a screwdrivering business. A language is useful exactly to the degree that it enables building whatever you might be called upon to build.

That is not to say any generally useful language is equally good for any purpose. Wood is better for some products, metal for others. A metal violin would be weird, a wooden gun action would be stupid. But violins often have metal bits and guns often have wood stocks.


> Programming languages are just tools and as such, they should be reasonably easy to use.

I'd like to argue about how simple is simple enough. If I need a carpenter to build stairs in my house, I would expect a good carpenter to know and use all the appropriate tools, no matter how complex. Of course, most things are doable just fine with only a hatchet, but I'd expect a professional to choose a hatchet because it's better, not because they never bothered to learn the proper tools.

So whenever a language puts "easy to learn" as their main feature, I see it as a guide "how to build stairs with just a hatchet".


Ah yes, my brain has been permanently mutilated preventing me from reaching the one and only programmer nirvana found in functional programming. Give me a break.


> For instance, you don't see "stand up a web app in Haskell in 10 minutes" tutorials. Historically it may have worked in Haskell's favour to slowly induct learners, but in this age where languages are fighting for mind space I don't agree that's a good approach anymore.

There's now the IHP framework that allows you to build Haskell web apps in like 10 minutes. If you're curious, check it out: https://www.youtube.com/watch?v=UbDtS_mUMpI It's been called "Haskell on Rails", and for good reason.


> the phrase "avoid success at all cost" in a certain context which may have had some role in making Haskell seem Ivory Towerish.

This phrase is often misunderstood: it's "avoid (success at all costs)", not "(avoid success) at all costs". In other words, don't optimise for mass-market adoption at the expense of everything else. Languages that have arguably done so have ended up extremely complex and riddled with corner cases.


The phrase is often taken as having a single true meaning, but I think the sum of them is really what makes the quote so good.

It's easy to follow a rule too far and end up way overcorrecting the original problem. What I like is that both meanings balance each other :)

Some popular, mass-market-friendly deliverables don't cost very much at all; avoid them and you're definitely avoiding success at all costs, but with both meanings at once this time.


I don't know much Haskell, but isn't Haskell's goal to serve as a vehicle for research in functional programming languages, rather than to offer a language that's useful for real-world work?

As I understand it, the core Haskell folks don't especially care about Haskell's (admittedly limited) real-world usage. Or, if you prefer: real-world usage is a side-effect, not a value.


I'd say that among pure functional programming languages (i.e. excluding Erlang, Scala and OCaml) Haskell is the language most suited for real-world usage.

Yes, there is a lot of research being done on Haskell, but a lot of the research is centered on areas that more or less directly contribute to typical real-world usage. Consider the recent introduction of linear types, for example: they're exciting from a type-theoretic perspective, but could also improve some real-world scenarios (even though at this point linear types are still quite new, so there aren't a lot of examples out there).


Conversely, I like that there's a language whose first priority is not "how do I become economically useful to the media industry as fast as possible."


a detail that doesn't contradict anything you said but may be useful for someone unfamiliar with it:

"avoid success at all costs" is usually meant to be parsed as "avoid [success at all costs]" i.e. don't compromise the language just to make it more popular.

https://news.ycombinator.com/item?id=12056169


Some context: I was taught Haskell, Prolog and Java at university (2006-2010). Coming from chaotic JavaScript and PHP scripting, all were hard to grok, but I enjoyed them all. After a couple of years only Java stuck in my working memory. So: I could program in Haskell, I've probably used monads and functors fluently, but the concepts still don't click. I have the same problem with grammatical concepts like adjectives, adverbs, and the more complicated stuff. I obviously use them all the time, but I'm not fluent in the theory.


> > and that simply is not how my brain is wired.

> This, however, I disagree with.

I think it might well be correct, actually. I think that it might be the difference between abstract and concrete thinkers on that standard personality test. Abstract thinkers will find that Haskell makes more sense; concrete thinkers will find that C makes more sense.


> I strongly dislike Haskell's focus on theory.

First of all, I completely understand where you're coming from.

But the fact that Haskell embraces computer science, unlike most other programming language communities, is what attracts me to it the most. I can geek out about the mathematics that leads to simpler, safer software with like-minded people. I am always learning from Haskellers more than from any other programmers.

But it does take a lot of learning and unlearning to become a fluent functional programmer. This isn't because functional programming is more complicated than other paradigms (au contraire), it's because schools and universities have been mostly teaching OOP for the past 2-3 decades—unfortunately. As a result, most programmers have such a big gap in their knowledge that it can feel too daunting to dive in.

We could definitely do a better job teaching the theory and explaining how it's useful. That would be better for everyone: better for curious minds who want to expand their programming skills, and better for me because I'd have more company to discuss it with.


> But it does take a lot of learning and unlearning to become a fluent functional programmer.

Just no.

The reason Haskell is so complicated to become fluent in has nothing to do with the inherent complexity of functional programming. The Haskell community is just in love with complex abstraction for the sake of abstraction, extremely terse and hard-to-understand syntax (see for example the obsession with introducing convoluted operators), and generally favours hard-to-understand styles like point free where pipes are a lot easier to follow. The Haskell community is full of people who are here to geek out rather than produce software. Haskell is what happens when you let one-upmanship lead your language design.

Meanwhile, you can learn SML, F# and OCaml in a couple of days, gradually enjoy the functionality they have to offer and benefit from a nice community. I really don't understand why anyone would choose Haskell.


> generally favours hard-to-understand styles like point free where pipes are a lot easier to follow

No they don't. Every Haskeller I know (myself included) acknowledges that sometimes point free style is better, and sometimes it isn't. None of them would unilaterally declare it better, and most generally avoid it except in very simple situations.

> The Haskell community is full of people who are here to geek out rather than produce software.

The Haskell community is full of people who are here to geek out about the best ways to produce software. This means they embrace tools like mathematics, and are generally extra thoughtful about structuring programs in principled ways. It's a wonderful community of people who care about the details of their craft and treat it like a skill to be honed over time. I have a lot of respect for that.

If I could rant for a moment: I'm really sick of spending many years of my life investing in myself and my ability to produce software with the best tools humanity has to offer, only to have these people on Hacker News tell me I'm just doing mental masturbation when it seems like they don't even understand what they're criticizing.


Function application pipelines and function composition are closely related. Here's a "pipe" in Haskell:

  \x -> h $ g $ f x

At a certain point you realise that in many cases seeing the `x` is not just useless but needless visual noise. The following is identical and more readable once you've internalised composition:

  h . g . f

And this extends to a cute trick in which, if you need to explicitly provide that data to begin with, you can do this:

  h . g . f $ x

I'm now working with Haskell, having previously come from PHP and JS/TS. Composition is more readable; it's just harder to get started with because it's different from what you're used to.
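
To make the comparison concrete, here is a minimal, self-contained sketch (the functions f, g and h are made up for illustration); both pipelines compute the same thing:

  f :: Int -> Int
  f = (+ 1)
  g :: Int -> Int
  g = (* 2)
  h :: Int -> String
  h = show
  -- "pipe" style, with the argument visible:
  pipeline1 :: Int -> String
  pipeline1 = \x -> h $ g $ f x
  -- point-free composition, same behaviour:
  pipeline2 :: Int -> String
  pipeline2 = h . g . f
  main :: IO ()
  main = print (pipeline1 3, pipeline2 3)   -- ("8","8")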

As for the "geek out" comment, of course, I enjoy programming for the intellectual sake of it. I'm not product-driven. Don't make the mistake of thinking everyone's the same as you, nor that of thinking there aren't benefits to being so intellectually-motivated. I don't know if it's intentional but your comment comes across as rather self-righteous.


If you don't understand why anyone would choose something that many people have chosen, you might want to be a little less bold in announcing what motivation those people must all have.


Yeah, um... let's be sure to apply that on all sides.

I can't count the number of times I have read, here on HN, that people only choose imperative programming or OOP because they're ignorant or stupid. Because if they actually understood FP then of course they would choose it, so the only reason they don't is because they haven't learned or can't learn.


[flagged]


> They want to geek out and feel smart.

Jesus Christ, I am so sick of you and all the people like you who repeat this lie. I've invested a lot of time learning about category theory, domain theory, type theory, etc. so I could become the best version of myself as a programmer, and I have seen very real benefit from this investment. Only to have you and these other people in HN tell me I'm just trying to "feel smart". Your arrogance is so off-putting to me.

When did HN become so anti-intellectual? If you don't understand something, then either learn it or don't—but you don't need to bash other people for their own efforts.


> Only to have you and these other people in HN tell me I'm just trying to "feel smart". Your arrogance is so off-putting to me.

> When did HN become so anti-intellectual? If you don't understand something, then either learn it or don't—but you don't need to bash other people for their own efforts.

Listen, I have both an advanced degree in mathematics where I mostly did algebra, and I have worked on large software projects with hard constraints in the aerospace industry. I have nothing against people learning category theory. It's an interesting intellectual endeavour, but people who think they are learning domain theory and category theory in order to be better engineers are just dicking around. It's not anti-intellectualism. It's a fundamental misunderstanding of what and where the problem actually is. Actually, this last sentence sums up my issue with Haskell pretty well now that I think about it.


Haskell embraces PLT (programming language theory). Computer science is much larger than PLT, and much of it ignores PLT entirely.

In particular, most algorithms research and papers are expressed in imperative pseudo-code, as that is, in fact, much easier for humans to intuitively reason about than complex category theoretical concepts.


> much easier for humans to intuitively reason about than complex category theoretical concepts.

We're not talking about any complex category theoretical concepts here. Functors, for example, are one of the first ideas one would learn in a category theory course. Likely even in the first lecture.

Even the most advanced functional programs use only the most basic categorical constructs.

And when you say "imperative pseudo-code", you've already conjured an implicit monad involving state, I/O, etc. Functional programming just teaches us that it's not the only one—and there are simpler ones that may be more appropriate for the given situation.


> And when you say "imperative pseudo-code", you've already conjured an implicit monad involving state, I/O, etc. Functional programming just teaches us that it's not the only one—and there are simpler ones that may be more appropriate for the given situation.

No, I have not. The fact that you can recreate state and I/O using monads does not mean that anyone using state is implicitly using a monad - monads have very specific properties that imperative code often doesn't have.

It's only Haskell's laziness that makes it require monads for i/o, by the way. Other pure functional languages don't use them. For example, in Idris, IO and state are not monads, they are effects - tracked at the type system level, but without all of the commodity of monads. For example, you can have a single do-block in Idris that does IO and state operations without needing any monad transformers or lifting.


> monads have very specific properties that imperative code often doesn't have.

You don't know what you're talking about. In the context of programming language theory, monads were first used by Eugenio Moggi precisely for the purpose of specifying the semantics of imperative programming languages. Only later did Wadler realize that they could be useful as user-defined constructs within a programming language.

The "very specific properties" are actually just the identity and associativity laws of a certain monoid. These are trivially satisfied by imperative-style code including the "algorithms research and papers are expressed in imperative pseudo-code" you mentioned: you can write an imperative program that does nothing when sequenced with other programs (identity), and A; B; C does not require explicit grouping (associativity).

> It's only Haskell's laziness that makes it require monads for i/o, by the way. Other pure functional languages don't use them. For example, in Idris, IO and state are not monads, they are effects - tracked at the type system level, but without all of the commodity of monads. For example, you can have a single do-block in Idris that does IO and state operations without needing any monad transformers or lifting.

This is an incoherent train of thought. Haskell's laziness does not require the explicit use of monads, nor are monads only useful in the context of laziness. Haskell could have used Idris's IO system instead—algebraic effects work just fine in lazy languages. But also it is not true that Idris programs do not use monads just because you don't see the word "monad" in Idris programs. Effects give rise to a monad, mathematically; namely, the free monad for the algebraic theory. Read the seminal work by Plotkin and Pretnar, for example.


Laziness requires monads for IO, but monads are useful for much more than IO, even in eager languages.


Absolutely, I meant "don't use them for IO", didn't mean to imply they are not used at all in other pure, strongly typed functional languages.

I still think they don't often come up in algorithms research, though.


> I still think they don't often come up in algorithms research, though.

Why are you so fixated on algorithms research in your attempts to dismiss the usefulness of functors and monads?


You completely missed my purpose. The OP was claiming that Haskell is superior to some other programming languages because it "embraces computer science". I was pointing out that computer science is larger than PLT, and that much of computer science (algorithms research being a very productive field within CS) is not in fact using FP or PLT concepts to a large extent.

Basically, I am only trying to dismiss the idea that Haskell is somehow closer to CS or more scientifically correct than other programming languages; instead, it is as related to CS as other languages are, just choosing to focus on a different area than many others.

I also want to dismiss the idea that CS == FP, that imperative style reasoning is just some second-tier style not used in rigorous circles - much of the forefront of CS is indeed using imperative style constructs more than FP ones.


How can you say it's easier for humans to reason about when your test subjects almost exclusively got started with imperative programming? It's a slightly biased data set.


Python is probably easier to learn for most beginners, but that doesn’t mean it’s easier to write programs in. Context matters when talking about usability.

I’ve used functors and monads to solve tons of issues and have literally no clue about the theory behind them.


Agreed. Haskell solves real problems and it's quite a good language once you learn it; it's the cultural focus on theory rather than humans that I don't like.


> it's the cultural focus on theory rather than humans that I don't like.

That's completely fine. Haskell is not a single community. There are two big camps that often interact with each other: the academic and the industrial. Keep in mind the origin of Haskell is academic and its original purpose is to test and implement ideas from programming language theory. That is still there and will continue. GHC optimizes for letting people experiment with language extensions. Industrial interest didn't start to grow until somewhere around 2008-2012. It is a small community and will likely remain so.

I am a big Haskell user myself. There are many theoretical things I don't understand or need to touch, but I appreciate their contributions to the language and the community. I do agree that Haskell has lots of room to improve on tutorials for non-academics. Maybe I will get inspired to write something.

It's completely fine if you don't want to use it. However, you may find Haskell users' enthusiasm for the language insufferable, hehe. I do hope anyone who chooses to interact with the Haskell community finds us welcoming.


But the tutorials simply ignore the real-world use cases. Lots of concepts are actually useful for solving problems (even in other languages).

You need to give the reader some real-world examples. A concept that dangles in the brain without connecting to other things won't live long. Your brain is likely to `optimize it out` because it is unused.


I thought "Real World Haskell"did just that very well.


If you have engaged with the Haskell community (I mean, actually talked to folks as you learn or use the language) and you came away with this impression, I would be very surprised. To learn Haskell, just do it, you don't need theory. For people who are interested in it, it's there.


What I haven't ever really seen is a good Haskell Cookbook. All the references seem to head for the ozone. Show me dumb little grounded snippets.



http://book.realworldhaskell.org/ does this in a tutorial style.


There is no real focus on theory in the language itself, but a significant part of the community, indeed, enjoys it.

For balance, here's another common opinion about tutorials: http://dev.stephendiehl.com/hask/#eightfold-path-to-monad-sa...

> [...]

> Read the monad definitions.

> Use monads in real code.

> Don’t write monad-analogy tutorials.

In the end, the two opinions about learning monads coexist, and people have done fine following either way.


The thing about learning from examples is that it’s far too easy to learn the wrong thing from them, or nothing at all. When writing examples, one has to spend additional effort on explaining which parts of the example are ‘moving parts’ and which are ‘fixed’, i.e. which elements of the example are inherent to the problem you’re solving, and which are just placeholders you should replace with your own. Many don’t bother; it’s why every Medium ‘tutorial’ basically reduces to ‘here’s a bunch of code you can copy-paste without understanding’.

With a ‘theoretical’ description, like a BNF grammar or a Unix man page listing every possible option the program accepts, this is immediately obvious. Haskell’s focus on ‘theory’ follows the same philosophy.


Usable for what, though? If the criterion is correctness, I'd bet good money that Haskell would beat python in any fair comparison. Just because there are languages that let you forget the complexities of your domain doesn't mean these complexities vanish when you use said languages.


In terms of learning I’m the exact opposite. I like to start with theory, then apply it into practice to affirm that the theory is correct (or that it isn’t, and then make my own improvements). Not starting with theory feels too much like being directionless to me, or creating your own theory/mental model of what happened, which could be and is often wrong.

In any case, all knowledge needs to be elevated to some level of theory because a correct theory guarantees the repetition of correct application.


I’ve been learning OCaml recently and have been really frustrated with the same issues. I’ve been writing this page for people like me who want to learn through examples: https://o1-labs.github.io/ocamlbyexample/ (still a work in progress).


It's the same for me. I don't know Haskell yet, but it reminds me of the hard time I had with mathematics in school. It all changed when I started self-studying by first learning to do exercises and memorizing properties, and then learning the theory afterwards. For example I really enjoyed the first Andrew Ng machine learning course on Coursera, where you learn to do matrix calculations and backpropagation by hand for practical applications without much theory. It was great for building the intuition first.


If you want to just get things done, and then learn the concepts of FP, I always suggest using Elm. In Elm I had been using andThen functions without even knowing they were related to monads.


Haskell folks are just regular IT folks who missed their calling by not becoming mathematicians.


After years of seeing people try to understand abstractions like a functor from the definition first, I’ve started to think it’s the wrong way to do it.

This post helps on some of these points, like explaining that e.g. functors are not always containers.

I’d love to see someone try taking it further: just hashing out lots and lots of substantially different examples, showing how they are similar to each other, and progressing into higher level transformations, before finally showing the definition of a functor, and how it maps to each of those cases. Forget monads. Can we just start with functors and see how far we get with that?


> I’d love to see someone try taking it further: just hashing out lots and lots of substantially different examples, showing how they are similar to each other, and progressing into higher level transformations, before finally showing the definition of a functor, and how it maps to each of those cases. Forget monads. Can we just start with functors and see how far we get with that?

I think this is how a lot of people learn math and programming anyway. Build theory out of intuition.


Sadly, in my experience, math is often taught theory first, followed by almost trivial or an insufficient number of examples, with more helpful examples left as exercises for the reader with no solution provided.

Post-secondary math classes were actually amazing to cultivate the habit of persisting through long periods of feeling like a complete idiot. But I have to wonder what the same content would have been like had it been example-driven.

It reminds me of learning a language through exposure to endless examples and internalizing rules without being explicitly taught them, versus deliberately memorizing the grammar rules of a foreign language and then applying the rules to specific examples.


>Sadly, in my experience, math is often taught theory first, followed by almost trivial or an insufficient number of examples, with more helpful examples left as exercises for the reader with no solution provided.

Ugh. So much this.

I've had so much difficulty grokking mathematical concepts and procedures in my life. In high school, I fell behind so hard.

Algebra I was doable, and while learning Geometry I felt like I already knew everything about the subject. I never studied and the admins concluded that I must be cheating, given my performance in other courses.

I concluded that in order to learn these things, I had to have "something to do with it". So to speak, some relatable examples to which I could apply the concepts in order to internalize the rules. This strikes me as the source of the never-ending choruses still echoing through so many study halls, ringing out, "WHEN ARE WE GOING TO NEED THIS???"

It feels so good when the concepts click, it's like learning you had a superpower. Suddenly, the baroque, mysterious walls of lines instantly resolve into such crisp focus, it truly is an amazing epiphany, Ah-Ha!

I feel like I could have really enjoyed the discipline, but I never achieved this feat in my educational career. I had problems that dramatically redirected the course of my life. This is probably why computers appealed to me so much, because of the direct cause-and-effect nature of interfacing with them. The Hacker Manifesto comes to mind, but I digress.

I have Narcolepsy Type II and probably ADHD (as you may infer from my comment history) and let me tell you...


I really like when they present examples and counterexamples to help you realize the subtleties of the definition.

Often when presented with a definition it's easy to get fixed with examples that are too specific and then to conflate the definition with something a bit stronger.


I feel you ... going from high school to university math was a shock for me. Almost all proof and no handcraft. Going to lectures was a waste of time since I had no chance of understanding. I studied mechanical engineering, not a math program!

How I actually learned the math was by examples and easy exercises, not understanding proofs.


I have enjoyed reading https://dev.to/choc13/series/12008 which uses real world examples to explain Monads, Applicative, Lenses, etc.


I feel like these tutorials always fall apart when they start introducing Haskell syntax and say "That's Monad, simple as that!"

  class Functor f where
    fmap :: (a -> b) -> f a -> f b
  class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
  -- lined up as in the article, with fmap's arguments flipped:
  --   flip fmap :: f a -> (a ->   b) -> f b
  --   (>>=)     :: m a -> (a -> m b) -> m b
Sorry, that's not doing much for me! I am now going to look for a tutorial on Monads that's written in, say, Python, so at least I don't have to learn an abstract new language alongside the abstract new concept.


> I am now going to look for a tutorial on Monads that's written in, say, Python

It probably won’t be very helpful. The definition of a monad is 90% in the types.

People use Haskell to talk about monads for two reasons:

1. Its type system is expressive enough to represent them (parametric polymorphism and higher-kinded types)

2. It’s one of a very small number of languages that doesn’t let users wiggle their way around needing some kind of principled effect system, so you really can’t get away without monads. Only in such an environment can the social pressure to take shortcuts be overcome, forcing people to structure their code carefully enough that the rich concept of monads isn’t reduced to a few occasionally used and probably incorrectly implemented corner cases like optional values and maybe futures.


Wait. Assuming one does not need or want to learn Haskell, what would then be the purpose of learning about monads?

There seems to be a disproportionate amount of tutorials about monads. Not being able to provide a sufficient number of examples in other languages makes monads seem much less useful. Maybe the next article to write is "why should you learn about monads".


If you never want to learn Haskell (or PureScript or similar) then the author even says in the `If They're So Wonderful Why Aren't They In My Favorite Language?` section:

    "If you don't need IO sequencing, or purity because your language doesn't care about purity, or the list implementation's nondeterminism because that's a parlor trick, or STM because your ecosystem doesn't use it... there's not much point to worrying about it. Monad implementations in languages other than Haskell fail to take off because there generally isn't even a single thing that is improved by working with it through a monad interface, let alone a family of such things."

    "[...]It's definitely an interesting idea, and if you are a curious programmer, it's worth learning about this particular interface and what it can do. But it's not something you generally directly take back to your language"


There are two subtly different things here: "monads" as a concept, and specific monads. If I ask you what async functions, nullable types, exception handling, sequenced evaluation, and arrays have in common, and you only know, say, Java, Scheme, and Python - you'll probably not see anything.

Nonetheless, these are all monads, and learning specific monads helps in lots of other languages. Optionals are nice in Java. Arrays in Lisps. Error handling in Scala. But the "universality" of monads escapes these languages. I would kill for better monadic error handling in Go, but conversely I would have little use for monadic I/O or futures.

The critical mental step is seeing the commonality in all of these, which is really only possible in languages at that level of type sophistication. And necessary in Haskell, where the particular combination of purity, laziness, and HKTs means monads are the only decent way to do most of these. Once you see that commonality you can develop tools - patterns of mental reasoning about code and idioms for writing code - you can also use when dealing with the specific monads or nearly-monads in other languages. But you can't show that property in examples in a single other language, and it's difficult to show how some deep structure is shared in examples spanning five languages.
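
Within Haskell itself, though, the shared shape is visible just from the specialised type of (>>=) (a sketch for intuition, nothing more):

  -- The one signature:       m a        -> (a -> m b)        -> m b
  -- Maybe (nullable types):  Maybe a    -> (a -> Maybe b)    -> Maybe b
  -- Lists (flatMap):         [a]        -> (a -> [b])        -> [b]
  -- Either (error handling): Either e a -> (a -> Either e b) -> Either e b
  -- IO (sequenced effects):  IO a       -> (a -> IO b)       -> IO b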


I think that in the end I would wish for an article that distills the last part. "How to benefit from monadic nature of various concepts in any language". I believe that it goes beyond flatMapping over async functions, nullable types and arrays, and using higher order functions.


Monads are a very effective technique for expressing custom requirements in your type system, e.g. "this function must be called only in a database transaction", "this function writes an audit log entry", "this function requires this level of authorization". The kind of thing you might think you would have to do via AOP/decorators/monkeypatching, you can usually do in plain old code (with all the attendant advantages) via a monad.

If your type system can't express them then you lose most of the advantages and they're pretty pointless.
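
As a sketch of that idea in Haskell (the names Transactional, Connection and runTransaction are made up for illustration, not from any real library): if the only way to run a `Transactional a` is through `runTransaction`, then "this function must be called only in a database transaction" becomes something the type checker enforces.

  -- A minimal sketch, not a real database library.
  data Connection = Connection            -- stand-in for a real connection type
  newtype Transactional a = Transactional (Connection -> IO a)
  instance Functor Transactional where
    fmap f (Transactional g) = Transactional (fmap f . g)
  instance Applicative Transactional where
    pure x = Transactional (\_ -> pure x)
    Transactional f <*> Transactional g = Transactional (\c -> f c <*> g c)
  instance Monad Transactional where
    Transactional g >>= f = Transactional $ \c -> do
      a <- g c
      let Transactional h = f a
      h c
  -- The only way to discharge the wrapper; a real version would also
  -- begin/commit/rollback around the action.
  runTransaction :: Connection -> Transactional a -> IO a
  runTransaction c (Transactional g) = g c
  -- Any function with this return type can now only run inside a transaction:
  insertAuditLog :: String -> Transactional ()
  insertAuditLog msg = Transactional $ \_conn -> putStrLn ("would write: " ++ msg)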


I find this perspective intriguing; can you elaborate on that? For example, why `m a -> (a -> m b) -> m b` but not something else?


> why `m a -> (a -> m b) -> m b` but not something else?

it's basically continuation-passing-style (`a -> m b` is the "continuation"), you might as well ask "why can you represent so many control-flow things using CPS?". idk why, but you can!

from another angle, you could compare Monad with the less powerful Applicative. a formulation[0] that's easier to parse than the usual one[1] is:

  class Functor f => Applicative f where
    unit :: f ()
    pair :: f a -> f b -> f (a, b)
if you're familiar with JS Promises, a rough analogy would be

  (>>=)  (Monad)       ≈ Promise.then
  pair   (Applicative) ≈ Promise.all
  unit   (Applicative) ≈
  return (Monad)       ≈ Promise.resolve
ignoring parallelism, you can implement Promise.all using Promise.then, but not the other way around.
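
in Haskell terms, the same asymmetry looks like this (a sketch; `pairM` is a made-up name):

  -- pair / Promise.all is definable from (>>=) and return alone...
  pairM :: Monad m => m a -> m b -> m (a, b)
  pairM ma mb = ma >>= \a -> mb >>= \b -> return (a, b)
  -- ...but (>>=) cannot be recovered from unit/pair alone, because the second
  -- computation would need to depend on the result of the first.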

---

[0] Called "Monoidal" here: https://stackoverflow.com/q/45267953 [1] https://en.m.wikibooks.org/wiki/Haskell/Applicative_functors...


Some people describe monads as "programmable semicolons" - they're pretty much the general concept of sequencing, "do this then do that", or indeed of imperative code. I don't think they're necessarily the ideal abstraction - something like ArrowChoice is "better" in a lot of cases - but in practice they seem to come up a hell of a lot, you can represent almost anything as a monad.


Why monads and not something else? Because monads appear to be a/the mathematical structure underlying the denotation of programs with effects.

Why that type signature? You only have two options: that, or m (m a) -> m a. The former is more popular for ergonomic reasons mostly. It doesn’t really matter.
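
Concretely, the two formulations are interdefinable; a standard sketch:

  import Control.Monad (join)
  -- (>>=) in terms of join (and fmap):
  bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
  bindViaJoin ma f = join (fmap f ma)
  -- join in terms of (>>=):
  joinViaBind :: Monad m => m (m a) -> m a
  joinViaBind mma = mma >>= id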


Because so many types above are m of something. You as a programmer get to decide how that m type works, so you have a lot of control there.


The point about Haskell is that it forces you to, or at least heavily nudges you to, understand monads and the rest of the typeclass hierarchy. However, you use the properties of monads all the time. The Stream API in Java and LINQ in C# are examples of abstractions built around monads and their properties.

The definition of a monad, as it relates to programming, emerges when you realize that an Option type, a Result/Either type, a List type, and a bunch of other non-container types all have the equivalent of a `flat_map` function. Write out a few mathematical rules for how a well-behaved implementation should compose and you have a Monad and its rules.

This is why another approach to learning monads that isn't focused on the theory introduces it as a design pattern instead. I find this to be great for intuition but horrible for understanding personally, having used both approaches to teach people about Monads IRL.


There are many examples of monadic patterns used in other languages, in particular to encapsulate failure or nullable values. I've also seen a lot of codebases that have implemented a custom "pipeline" class that can be composed together, such that some transformation or effect occurs every time a value passes from one pipeline to another. These are usually monads too, allowing for some minor deviations from the formal definition.

Nevertheless, language ecosystems outside Haskell, Scala and F# don't recognise this as a general programming pattern, although given the current direction towards FP in the JavaScript and Rust communities I think it's a matter of time.


Monads (and functors and applicative) are definitely useful outside of Haskell.

At first glance monads seem less useful in languages with built-in mutable state that freely allow side effects. However, if you're writing somewhat more high-level, domain-specific code there are plenty of concepts that can be modeled using functors and monads very elegantly.

Parser combinators, distributed data fetching, reactive data streams, composable tree traversals, etc

I use them all the time in TypeScript.
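
To give a flavour of the first of those in the thread's Haskell notation, a bare-bones parser combinator type is a Functor/Applicative/Monad in a handful of lines (a sketch, not a production library):

  newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }
  instance Functor Parser where
    fmap f (Parser p) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)
  instance Applicative Parser where
    pure x = Parser $ \s -> Just (x, s)
    Parser pf <*> Parser pa = Parser $ \s -> do
      (f, s')  <- pf s
      (a, s'') <- pa s'
      Just (f a, s'')
  instance Monad Parser where
    Parser p >>= f = Parser $ \s -> do
      (a, s') <- p s
      runParser (f a) s'
  -- a tiny combinator: consume one character satisfying a predicate
  satisfy :: (Char -> Bool) -> Parser Char
  satisfy ok = Parser $ \s -> case s of
    (c:rest) | ok c -> Just (c, rest)
    _               -> Nothing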


If monads exist outside of Haskell, why isn't there a text which explains them without Haskell knowledge? Where would I find such a text and description?

I would really like to understand what they are, but so far have not been able to find any explanation which tries to explain them in terms of English, math or some commonly known programming language.


This is probably the best resource I've come across for discussing this subject: https://fsharpforfunandprofit.com/posts/elevated-world/


Thanks, but the article loses me at what this "elevated world" is. It doesn't even try to explain what it is and quickly goes over to F# syntax, which I don't understand. Why is there no explanation in terms of English, math, and any common programming language? If Functors and Monads are real concepts, they should be describable in the terms listed above; if they are only describable in terms of functional languages, I would consider them a pure artifact of those languages :)


There's a problem here as the author himself notes.

I can provide you a simple example of a function with a monadic interface; you will understand it in any language, but that will not actually help you conceptualize the idea of monads or influence your thinking.

You will just think "that's simple, what's the fuss about it?" and miss the pattern altogether.

Imagine asking the user to provide a number. In most languages you will get a `string | null` value, and you wish to get a real number from it by parsing the string if the user entered one.

In this case, assuming you have a function `parseNo` that goes from `string -> int | null`, you can create a special function which takes a `string | null` value along with the `parseNo` function and gives you back `int | null`.

Turns out that function can be defined for many different data types and it has special useful properties.

In this case it allows us to compose functions neatly, without having to do a whole bunch of null checks.

But in other scenarios it can do a lot more.
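
Translated into Haskell so there's something runnable to poke at (a sketch; `parseNo` here is just a stand-in for whatever parsing function you have), the "special function" is simply `(>>=)` for Maybe:

  -- string | null becomes Maybe String, int | null becomes Maybe Int.
  parseNo :: String -> Maybe Int
  parseNo s = case reads s of
    [(n, "")] -> Just n
    _         -> Nothing
  -- Chaining through (>>=) means no explicit null checks anywhere:
  readNumber :: Maybe String -> Maybe Int
  readNumber input = input >>= parseNo
  main :: IO ()
  main = do
    print (readNumber (Just "42"))    -- Just 42
    print (readNumber (Just "oops"))  -- Nothing
    print (readNumber Nothing)        -- Nothing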


Sounds like you are describing higher-order functions as they are known from Lisp?


Yes, higher-order functions: a function that takes a function as an argument or returns a function. In Lisp you can think of a higher-order function as taking data, processing it, and then returning it, just like any other normal function, because code is data.


You have a container with `a` inside, let's call the container `m`. You have `m a`.

You can provide a function with the type signature `a -> b`. With this we run the function underneath `m` and map `m a -> m b`.

Concretely this might be, for example, lists: given a function `Int -> String` I can map `[Int] -> [String]`. What the functor abstraction gives you is a consistent, lawful way to define this phenomenon for virtually every type you'd ever want to "map". Likewise monads for "flat mapping".

This explanation was 90% accurate. What's missing is that not every functor or monad is a "container", for example Haskell's `IO`, which rather represents an action of sorts to impurely get a value. But the intuition will build all the same if you play with it a bit. It's all in the type signatures.
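
A concrete sketch of the above with the list and Maybe functors (the function name is made up, nothing more):

  -- one plain function, a -> b:
  describe :: Int -> String
  describe n = "number " ++ show n
  main :: IO ()
  main = do
    -- fmap runs it "underneath" the container, whatever the container is:
    print (fmap describe [1, 2, 3])   -- ["number 1","number 2","number 3"]
    print (fmap describe (Just 7))    -- Just "number 7"
    print (fmap describe Nothing)     -- Nothing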


Say you have a source of A, and a way of turning an A into a source of B. An operation that uses these two inputs to produce a source of B is a monad.

This is about 90% of the way there (and TFA actually contains a very similarly phrased description, right before the monad definition you complained about).

The problem is that English does not have the right words to describe the abstract idea behind "source", "way of" and "operation" precisely and rigorously.

Regarding "any common programming language": In a way, this is like trying to explain a concept like, say, "the adjoint operator" [0] to someone only aware of integers. Or trying to explain the idea of the Liskov Substitution Principle [1] to someone who just learned assembly.

In each case above, the language used to describe the concept is multiple layers of abstraction too low to succinctly pin down the concept in question. Haskell brings (or rather is) a very rich language suited specifically for accurately describing the elements involved. That's why 1) the concept of a Monad is visible/cleanly expressible in the first place, and 2) people use it to explain the concept, even when (like this blog post) not aiming at a Haskell (or Haskell learner) audience.

Disclaimer: I'm just a lowly OOP programmer, not written a line of Lisp or Haskell in my life. Watching from the sidelines for now.

[0]: https://en.wikipedia.org/wiki/Hermitian_adjoint

[1]: https://en.wikipedia.org/wiki/Liskov_substitution_principle


Say you have a source of A, and a way of turning an A into a source of B. An operation that uses these two inputs to produce a source of B is a monad.

Did you mean to write "Say you have a source of A, and a way of turning an A into a B. An operation that uses these two inputs to produce a source of B is a monad"?

What exactly do you consider a "source"? And the way of turning an A into a B I would call a function. And combining the "source of A" with a function that converts A's into B's, a concatenation or composition of functions, or a way of currying.

And yes, I am fully aware that you sometimes need to learn the vocabulary to have a minimal discussion. But considering I have a degree in math and CS, and have learned at least 10 programming languages, especially Lisp with all of its contexts, I am highly suspicious of something that cannot be expressed in plain language. But thanks, your explanation comes closest of any I have seen so far.


> Did you mean to write "Say you have a source of A, and a way of turning an A into a B. An operation that uses these two inputs to produce a source of B is a monad"?

No. That would be the Functor concept explained in TFA. A Monad is specifically what I wrote.

> And combining the "source of A" with a function that converts A's into B's, a concatenation or composition of functions, or a way of currying.

Yes, pretty much. See the article (where one of the main points is specifically that the functor concept is not a highly complex thing. Just a name for a kind of thing - an abstraction).

> What exactly do you consider a "source"?

That's exactly the point where the "genericness" of the concept (consider: it originates from category theory) precludes discussing it in more specificity. It could be a Maybe<A> (std::optional in C++), it could be a List<A>, it could be a PotentialFutureUserInput<A> (aka IO in Haskell), it could be a (C#) Enumerable... Anything that "wraps" (in any sense) another type. A functor allows you to apply a function that transforms the inner type into another without leaving the wrapper. A monad allows you to apply a function to the inner type, transforming it into a different now wrapped type, with the monad implementation taking care of "flattening" the wrappers. I will avoid the attempt to come up with an example, seeing as the article also criticizes the abundance of bad monad examples.

Here, I've written a C++20 concepts version of Monad. Well, I hope I did. But even if correct, it's just not something C++ can express well, much less use well: Since functions are not first-class citizens, you can't spell "a function from A to M<B>" in a way the compiler can deduce. That's why you need the extra template argument F and corresponding constraint FAMB.

https://godbolt.org/z/b3PoPYeE9


There isn't one unique elevated world, as he goes on to say. Two he lists we are familiar with - options and lists. If you've ever used Option(Some|None) and pulled out a value from Some - the Option is the elevated world, the Some|None is the elevated value.


In most languages you will lack the Monad type and helper functions, and you'll have available many shortcuts that undermine efforts to set up monadic expressions.

So something can behave similarly to a monad, but the type systems don't stop you self-sabotaging your code or misunderstanding good composition.

The best way to learn is to learn Haskell deliberately. More so than Go, the language forces you to compose better code, and even to learn the values of FP.

If one is uninterested in learning CS or expanding one's toolbelt, it'll take some years until industry catches up.


So you are saying you cannot describe what a Monad is without the presence of an FP language?


I didn't say that. Learning Haskell may help the budding learner, though won't enforce the monadic laws.


The entire body of category theory is where all of this stuff comes from originally, not Haskell.

> tries to explain them in terms of english, math

There are a million English-language blog posts, and it’s a mathematical concept. Just look up “monads are monoids in the category of endofunctors” if you want more of that…


I was looking for a simple explanation about what a Monad could be, not a mathematical theory.


> It’s one of a very small number of languages that doesn’t let users wiggle their way around needing some kind of principled effect system, so you really can’t get away without monads.

To put that a bit differently: Haskell is a strait-jacket. In the name of purity and laziness, it takes away a lot of things that other languages give you. Monads are the hoop that is left, which you have to jump through in order to be able to do what you can do in a more straightforward way in other languages.

I'm saying this in a negative way precisely because you said it in a positive way.

But which is it really? Is it a positive or a negative? Depends on who you are and what you're doing. Depends on how much your program is improved by not having the wiggle room, and how much of the time you're doing what is easier to do in other languages.

("Easier" might not be the right word. Do notation isn't that bad of a notation for expressing sequencing in. It's a bit cumbersome, but it's not terrible - it's not like building the sequence monads by hand. It's still producing them under the hood, though. Whether that's a problem depends on how badly you need speed, how much pressure the garbage collector is under, how much of the program needs to be sequential, and so on.)


Uniqueness types are a way to get around monads.


Linear types let you safely do effect sequencing/mutation without monads, but I don’t think it extends much beyond that.


It also doesn't help that that definition removes pretty much every aspect of what makes a monad a monad. It also deviates quite a bit from the mathematical definition.

A definition like:

    bind f (x:xs) = f x
would fit this type signature, but would very much not be a monad.

What makes a monad a monad is that it is a functor (so a special parametrized type of sorts) with

- a function to wrap a single value (wrap :: a -> M a)

- a function that can flatten nested versions (flatten :: M (M a) -> M a)

- such that wrapping twice and flattening is the same as wrapping once

- such that when flattening a triply nested structure it doesn't matter which level you flatten first.

And of course it also needs to satisfy the functor axioms, which are also frequently overlooked. They're not too difficult, though: they just mean that 'fmap' needs to play nice with identity functions and function composition.
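
In Haskell terms, "wrap" and "flatten" are `return`/`pure` and `join`; a small sketch for two everyday monads:

    import Control.Monad (join)

    wrappedList :: [[Int]]
    wrappedList = return [1, 2]            -- [[1,2]]

    flattenedList :: [Int]
    flattenedList = join [[1, 2], [3]]     -- [1,2,3]

    flattenedMaybe :: Maybe Int
    flattenedMaybe = join (Just (Just 5))  -- Just 5

    -- The identity laws in this vocabulary:
    --   join (return (Just 5))      == Just 5
    --   join (fmap return (Just 5)) == Just 5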

And Haskell in particular suffers quite a bit from the fact that currying is a first class concept, while in category theory this is reserved for a rather special class of categories. This means that using functors in combination with functions of multiple arguments suddenly requires you to think about how to evaluate a tree of functions on a tree of values, where in most other languages you'd end up with a tree of tuples at worst. Granted you can do more if you solve those problems, but it requires you to think about concepts that even category theorists consider pretty 'out there'.


To be fair, this particular article is titled 'Functors and Monads For People Who Have Read Too Many "Tutorials"'... after your tenth monad tutorial, you start getting good at reading types!


I found this useful as a Rust dev: https://varkor.github.io/blog/2019/03/28/idiomatic-monads-in...

I agree with you entirely that the Haskell syntax is probably one of the biggest issues with monads being approachable.


This blog post is maybe 10% about monads and 90% dealing with why you can’t have them in rust (no HKTs). I hardly think that’s more approachable…


Is that true, though? You can't define a generic monadic interface, yes, but you can implement the same interface for your data types. That makes it less useful, as the post says, but still pretty powerful in some cases.


It was for me.


What's difficult about Haskell syntax? Or do you mean "it's an issue because it doesn't look like C, Java or Python?".


For me it is indeed the latter. I can glean a lot of interesting stuff from other languages as long as they look a bit familiar. With Haskell I usually come to a dead stop pretty quickly.

I've tried to read up on some Haskell basics, but the main issue is I don't have any real incentive to use Haskell; I have other hobbies that are more tempting, so I forget pretty quickly.

The Java-ish version of the Functor interface for instance was quite readable to me, ugly or not.


In many ways Haskell (and other languages in the ML family) are LISP with a lot less parentheses.


In which ways?

I don't find lisp syntax confusing either.


Well, that's Haskell for you.

Haskell folks love arrows and one letter variable names.

I don't know why, I guess it's the algebra background of most Haskell folks.

When I wrote some things in Haskell, I tried to use full words for function names and variable names, but it felt like I was the only one. People just love to name stuff `a`, `b`, `m`, `f`, `k`.


The one-letter variables are all type parameters, and pretty much every programming language uses one-letter variables for type parameters. And I can't think of a better syntax for "functions from a to b" than (a → b). In a Java-ish syntax it'd look like:

    interface Functor {
        <T, U> Self<U> fmap(Self<T> source, Function<T, U> f);
    }
which is just torture. What would it look like in your ideal syntax?


Give the types useful names, like everything else. Recently I did a review for a new peer and was pleasantly surprised that they used proper names for generic arguments. At first I stumbled of course, because I wasn't used to generic arguments being longer than a few characters but I think it helped the readability a lot.

    interface Functor {
        <Source, Target> Self<Target> fmap(Self<Source> source, Function<Source, Target> f);
    }

Same code, but you get an additional hint of what's going on.


Alright, that's fine, but the person I replied to was complaining that "Haskell folks love arrows and one letter variable names", and my point was that most programmers love one letter variable names in this context.


Monads and Functors aren't particularly exciting in Python because properties or contexts can be expressed with e.g. boolean flags, and those properties or contexts can be ignored when applying functions to unrelated parts of an object.

  # milk_cows :: Person -> Tired Person
  def milk_cows(person):
    ...
    person["tired"] = True
    return person

  # feed_pigs :: Person -> Tired Person
  def feed_pigs(person):
    ...
    person["tired"] = True
    return person

  # birthday :: Person -> Person
  def birthday(person):
    person["age"] += 1
    return person

  # willy :: Person
  willy = {"age": 32, "tired": False}

  # fmap and >>=
  print(milk_cows(birthday(feed_pigs(willy))))
In Haskell there would be a 'Tired a' type with a Monad instance to express the tiredness property and a Functor instance to apply functions to it with no impact on or knowledge of tiredness, e.g. aging.
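
For comparison, here is a rough sketch of what such a 'Tired a' type might look like in Haskell (the `Tired`, `Person`, and farm-chore names are just made up to mirror the Python above; the instances are essentially Writer over a Bool flag):

    data Tired a = Tired Bool a deriving Show

    instance Functor Tired where
      fmap f (Tired t x) = Tired t (f x)              -- e.g. aging ignores tiredness

    instance Applicative Tired where
      pure = Tired False
      Tired t1 f <*> Tired t2 x = Tired (t1 || t2) (f x)

    instance Monad Tired where
      Tired t x >>= f = let Tired t' y = f x in Tired (t || t') y

    newtype Person = Person { age :: Int } deriving Show

    milkCows, feedPigs :: Person -> Tired Person
    milkCows p = Tired True p
    feedPigs p = Tired True p

    birthday :: Person -> Person
    birthday p = Person (age p + 1)

    -- fmap and >>= : feed pigs, have a birthday (tiredness untouched), milk cows
    willysDay :: Tired Person
    willysDay = fmap birthday (pure (Person 32) >>= feedPigs) >>= milkCows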


The closest (but very badly broken) analogy to Monads is one of the many Java standard interfaces.

Would you agree that introducing a particular interface means showing the interface definition in question?

What should follow (just like in Java) is a few examples of actual implementations and how they fit into that interface.


the first one means that flip fmap is a function which, given an "f a", and a function from a to b, will produce an "f b".

for example, if "f" is "list of" and a is "integer" and b is "string", then flip fmap would take as input a list of integer, and a function from integer to string, and produce a list of string.

If you've done some topology: [or, maybe a is a particular topological space and b is another topological space, and f is "the fundamental group of"; then given an element of the fundamental group of a, and a continuous function from the topological space a to the topological space b, it will give you a corresponding element of the fundamental group of b.]

a and b are both types of things, f takes in a type, and outputs another type. so, a and f a are both types (usually different types). the thing is that f doesn't just let you make a type from another type, it also lets you translate a function from the type a to the type b, to a function from the type f a to the type f b.
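
A small concrete instance of the list case described above (`flip fmap` just takes the container first, then the function):

    -- flip fmap :: f a -> (a -> b) -> f b
    strings :: [String]
    strings = flip fmap [1, 2, 3] show   -- ["1","2","3"]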


> Sorry, that's not doing much for me!

It's easy if you first learn about Haskell notation and type classes, which pretty much every online tutorial covers. What you quoted isn't arcane, you're just unfamiliar with it (just like my grandma would find C-like syntax illegible: she just wouldn't know anything about it).


I agree wholeheartedly that Functors and Monads should be thought of as interfaces. And that much of what complicates them is separating the implementation details from the interface details.

However, I can't help but feel that the author and I have some common flaws.

The first being verbosity. I often notice the same thing in my own writing: I will see a well-written paragraph and think "this is good stuff" and then wonder "but what purpose does it serve here?" Stuff like dedicating 5 paragraphs to discussing whether Functor should be a noun (I get the point, but that's a lot of words to make it), or a paragraph with caveats about how Haskell does name resolution.

The second being that we both seem to have fallen victim to the monad tutorial fallacy (https://byorgey.wordpress.com/2009/01/12/abstraction-intuiti...). But I'm not 100% sure about this one. I don't think it's wrong to call monads interfaces in the same way that it's wrong to call them burritos. However, our framework of thinking does seem to align with what that author refers to as fallacious.

Still was a good enough read that I felt the need to comment partway through :)


I think it is at least somewhat wrong to say that monads are just an interface. For instance, when the author says (paraphrased) “you could remove every trace of monads from IO and it would still be usable, albeit clunky”, they’re actually talking about removing the monad interface, but not the monad structure — it would still rely on a function of type (IO a -> (a -> IO b) -> IO b) that’s expected to follow certain rules. In my book, that is a monad — whether or not it’s called “monad”, and whether or not it’s formalized as such.

(This isn’t an entirely new kind of concept in programming, either; see “patterns”.)


I am interpreting this as "monads are a structural type, not a nominal type".

which to me is an interesting debate given Haskell's emphasis on strong interfaces strictly enforced.

We seem to be converging on a world where we agree that declaring and enforcing types is very important, yet also that it's the structure of the type, not its name, that is most critical. At some point, with sufficiently advanced type inference, do you end up in a world that looks a lot like fully dynamic languages, where the actual declaration of the type is purely optional?


The verbosity is a direct effect of the several discussions I've had over the years on this topic. It has actually had a number of word-removal and paragraph-revival passes done on it.


It helped me to understand what is going on. I think I've been hung up on functor sounding like a noun for a long time. I thought there was some extra magic I wasn't getting that made it into a noun. The verbosity helped me a lot.


I found that section really helpful. I've been writing Haskell (on and off) for years but I hadn't realised that the noun thing had created a sort of mental speedbump, which is now gone.


That's fair, and I hope that I didn't come across too harsh in my comment. I elided a paragraph of praise I thought was redundant to my first few sentences.


> the monad tutorial fallacy

The paragraph starting with "I am aware of the notorious effect" directly addresses this, do you disagree with it?

Also calling it an interface isn't an analogy.


> The paragraph starting with "I am aware of the notorious effect" directly addresses this, do you disagree with it?

I suppose my answer should be "yes," but let me try to explain my convoluted reasoning:

Understanding monads simply in terms of interfaces is what I think is the silver bullet to comprehending them. It's what helped me "get" them. So they are my "burrito."

> Also calling it an interface isn't an analogy.

I agree, I apologize if I came across this way.


As someone who studies both CS and pure math, it's quite interesting when I see posts that take a very operational reading of functors and monads, instead of their shorter and more precise categorical definition (as a mapping from one category to another that acts homomorphically over morphisms). That's not necessarily wrong, but it's an important thing to keep in mind when learning category theory through a programming context (e.g. Category Theory for Programmers by Milewski). When I tried to learn category theory in this way, past a certain point the abstractions felt too unnatural and arbitrary (e.g. lack of examples for adjunctions and free objects), whereas a good mathematical treatment of category theory would provide plenty of examples across different areas of mathematics such as algebra, topology and logic.

I strongly agree with valenterry's comment[1] here: that post gives an approximate definition of functors, even omitting the functor laws. In particular, it is explaining what an endofunctor is (in the context of FP, you mostly deal with one category anyway). Furthermore, even in Haskell, there is no such thing as the category of Haskell types[0] (because of things like divergence and partiality).

For the curious, I've recommended Program Design By Calculation[2] to a number of people to show how category theory can be used as a powerful tool to reason about programs in an FP context while not compromising on the mathematics (the target audience is undergrad CS majors).

EDIT: In particular, I have a bit of issue with this sentence after stating the signature of fmap, since it does not reference lawfulness.

> This is the full definition. There is no additional hidden machinery that Haskell is somehow automatically invoking when you "use a Functor".

[0] http://math.andrej.com/2016/08/06/hask-is-not-a-category/

[1] https://news.ycombinator.com/item?id=27638565

[2] https://www4.di.uminho.pt/~jno/ps/pdbc.pdf


Because most people suck at taking abstract mathematical concepts and seeing how they could apply to code. Including me. Let's take the definition of a function:

> A function is a relation for which each value from the set of first components of the ordered pairs is associated with exactly one value from the set of second components of the ordered pairs.

Cool, I have no idea what the fuck this means and how it is useful to me.

Oh wait, I use them every day.


> Cool, I have no idea what the fuck this means and how it is useful to me.

> Oh wait, I use them every day.

Do you really? Do the `functions` you use even fit the above definition?

I bet you can think of an infinite number of functions (bool -> bool). The above definition admits only 4 such functions.


There are only 4 (2**2) such pure functions. For type (Bool -> IO Bool), there are infinite (∞**2) such pure functions.
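
For concreteness, those four total functions can be written out directly (the names are arbitrary):

    idB, notB, constTrue, constFalse :: Bool -> Bool
    idB          = id
    notB         = not
    constTrue  _ = True
    constFalse _ = False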


That is the first definition stated in first-order logic, sure. But the operational reading of a (computable) function being something that takes in input and produces an output (via some algorithm) is still correct. It is possible to be both clear and correct with operational analogies. However, monads and functors don't readily admit an easy operational reading like functions.


> Furthermore, even in Haskell, there is no such thing as the category of Haskell types[0] (because of things like divergence and partiality).

Divergence and partiality can easily be modeled by the category of complete partial orders and Scott-continuous maps. This is what you learn in, e.g., a course on domain theory and/or denotational semantics.

Andrej Bauer is mainly arguing that Hask isn't a category because no one has formally specified it. For example, we would need to come up with a policy on when two arrows are considered equal, and it's not clear what that notion of equality should be.


Right, domain theory and denotational semantics let you talk about ⊥ in an order-theoretic setting, and I think it's also rich in connections between topology and lambda calculus. Though it might be the case that if functors already pose a pedagogical conundrum for programmers, Scott-continuity might be even harder to get across.

A quote from Bauer's post is particularly relevant:

> [I am arguing against] the fact that some people find it acceptable to defend broken mathematics on the grounds that it is useful. Non-broken mathematics is also useful, as well as correct. Good engineers do not rationalize broken math by saying “life is tough”.


Bauer has clearly had a different experience with classical mechanics and engineering physics courses than I did…


Do you happen to know of any programming language which has explicit support for multiple categories?


Lean[0]'s mathlib has quite a nice formalisation of category theory[1], with plenty of examples of concrete categories, even sheaves and toposes.

[0] https://leanprover.github.io/programming_in_lean/#01_Introdu...

[1] https://github.com/leanprover-community/mathlib/blob/master/...


I suspect this is still not satisfying for people wanting to really know what a Monad is. Perhaps these simplified explanations can get away with explaining Functor (more so the "mechanics" of it) but it starts breaking down for Monad.

I don't think there is a shortcut... these are highly abstract concepts.

Abstraction is about exposing the most essential parts of something... Category Theory is perhaps the mathematics of abstract composition, exposing the most essential parts of how to combine (in the abstract). Monad and Functor arise from CT, and thus, are very abstract concepts, built from the axioms of an already abstract subject matter. This is why it's hard to "simplify" them any further. The formal definition of a Monad is itself the simplest way to expose it. The fastest way to grapple with it is through the axioms of CT, precisely because CT is arriving at this concept by abstraction; by only thinking about the most essential parts of it.

That is probably not a satisfying explanation (to tell someone you need to learn axioms and laws before understanding this concept), but the alternative is to simplify and/or abstract something that is already as simple and as abstract as can be, albeit with esoteric names, and thus inevitably complicating it and shrouding the concept instead.

I think the reason so many Monad tutorials aren't satisfying is that they attempt to explain both a mathematical, abstract concept and also how it applies to programming, at the same time (without explicit separation). The latter is practical, the former is very abstract, and the bridge between them is very complex and usually left out, leaving the reader scratching their heads over how and why you go from one to the other.

It's worth mentioning Bartosz Milewski here (links below), as he has bridged this gap for many, but as mentioned, there is no shortcut.

https://bartoszmilewski.com/

https://www.youtube.com/channel/UC8BtBl8PNgd3vWKtm2yJ7aA


There was a particular moment I remember from Milewski's monad video, where he likened monad tutorials to a situation where you arrive on a remote island and the islanders ask you to explain what a function is. One's first instinct might be to give a bunch of examples: for instance, you can have a function that takes a list of fishermen and orders them by the number of fish they caught, or a function to turn barley into beer, etc.

Upon hearing this, the islanders think that functions must be some magical panacea, and so you backtrack on that and try to talk about functions as being like animals, they take input and produce output (good luck with function composition).

The situation is like that with monads. They are far removed from the world of programming (the fact that there were connections at all to effects was remarkable!), but as such, you have to learn the ambient theory in which monads reside to really grok it.


The functor description is quite good.

If the author is reading this, I might make a suggestion: connect the hash map example to the function example.

Mathematically, a function is typically defined as literally a (potentially infinitely-long) map of unique keys (domain) to values (codomain). In programming, the two are not identical, especially in mutable languages, but the hash map example would be like memoizing function composition for a set of inputs specified as keys in the hash map.
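
To sketch that connection in Haskell (just standard `Data.Map` and the function functor; the names here are mine, not the article's):

    import qualified Data.Map as Map

    -- A Map k is a Functor over its values: a finite, "memoized" mapping.
    table :: Map.Map String Int
    table = Map.fromList [("a", 1), ("b", 2)]

    doubled :: Map.Map String Int
    doubled = fmap (* 2) table             -- fromList [("a",2),("b",4)]

    -- Functions are functors too: fmap for ((->) r) is composition,
    -- i.e. transforming the "values" of a possibly infinite mapping.
    addTenThenDouble :: Int -> Int
    addTenThenDouble = fmap (* 2) (+ 10)   -- \x -> (x + 10) * 2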


> For their own good reasons, mathematicians use "functor" as a concrete noun. Haskell copied this, hence how often you hear about "a functor".

> However, interfaces are adjectives that describe a data structure, which is why in many languages they are often named with -able suffixes (Iterable, Serializable, etc.) Programmers are better off thinking of it as Functable or Fmapable or something like that.

Fmapable makes much more sense to me. Naming is hard.


To be honest I feel like the functor concept is too abstract and subtle to the point where it's probably not very useful in the context of programming.

Worse... their usage in programming languages is very unlike how homomorphisms are typically used in their usual context, so appealing to the math seems like it's just confusing learners without payoff.

To give an analogy... it feels like trying to explain the chain rule by explaining pullbacks between tangent spaces. Not technically wrong, but now they're confused and asking questions whose answers wont help them.


Of course, the chain rule explained via differential topology is simply functoriality!

IME, category theory was much more valuable in differential topology in helping solidify why the structures and their compositional properties make sense. You get to work with several, honest-to-god categories and functors between them, especially when tangent bundles come into the mix.

Now go to programming, there's really only one category at play, and I'm not sure how worthwhile it is to apply a (more contrived) version of CT that only has endofunctors and being cartesian closed.


Haha, I feel like that's how I started to get CT concepts too. Topology and geometry really feel like the "natural" context for those ideas.

For programming... I feel like they're basically trying to do logic in a roundabout way. I suspect that (finite) model theory might be more useful for such applications if they really want theory.


> To be honest I feel like the functor concept is too abstract and subtle to the point where it's probably not very useful in the context of programming.

I think it's a small convenience to have the programming language automatically derive `map` instances for you, but I think you're right for pretty much the same reason group theory's huge but monoid theory barely exists: the `Functor` typeclass is too simple to do anything interesting with it.


> it's probably not very useful in the context of programming

This puzzles me. It turns out it's very useful in the context of programming in practice (i.e. Haskell programmers use it and find it useful), so what do you mean? I don't know CT so I wouldn't know how well it maps (pun intended) with the Math concept, but regardless, whatever abstraction is there in Haskell that programmers call "functors" is tremendously useful.


`Functor` instances need to exist: they're obviously useful. You'll see a million `map` methods in every programming language.[0] What you won't see is a unification of them all under one interface. And the point, in my view, is that this unification doesn't really buy you much. I can't think of a single algorithm that starts with "let T be a mappable type".

[0]: https://doc.rust-lang.org/std/?search=map


I mostly agree with what you are getting at here. However occasionally it is useful to abstract over all Functor or all Monads, etc.

For example, when using the Van Laarhoven free monad[0] implementation, I'll quantify over all monads. While there is a simple data type definition for free monads[1], it suffers from a well-known issue that the bind operation has quadratic complexity. The Van Laarhoven free monad circumvents this problem (at least in the common case where you are not trying to "execute" the intermediate monadic constructions). There are also continuation-based approaches for free monad implementations as well.

The benefits of free monads are the typical ones described in "Data types à la carte"[2], where you can, for example, instantiate your free monad in the IO monad for production, but use a pure monad instance for (parts of) your test harness.

(Generic implementations of free monads and their operations is another example of quantifying over all Functors, but I'm assuming for the sake of argument that you'd prefer to repeat that boilerplate over and over again for all your specific free monad instances.)

[0] http://r6.ca/blog/20140210T181244Z.html

[1] https://hackage.haskell.org/package/free-5.1.7/docs/Control-...

[2] http://www.cs.ru.nl/~W.Swierstra/Publications/DataTypesALaCa...


CT may be appropriate for further evolving Haskell as a language, but for programming implementations, mathematical categories aren't directly useful. There Haskell is more like any other programming language, just that it's FP.


I'm not debating that; instead, I'm arguing functors are immediately useful in languages like Haskell. Don't know enough about CT to debate that point.


They are. Abstractions like functors make the language more similar to other structured languages. Otherwise you'd need explicit recursion all over the place, which is more error-prone.


The concrete instances are demonstrably useful, since they're... actually used! However, in Haskell they're basically one kind of very concrete object. Invoking the abstract machinery of CT to explain list concatenation just has the hierarchy backward. It's like telling someone that inner-product-induced norms on Hilbert spaces are extremely useful when you're talking about taking the absolute value.


Abstractions (Functors, Monads, and otherwise) raise the question of "What value is this providing?" in the context of a specific problem. If the cognitive complexity added by an abstraction is worth the value it provides, I'll use it. If not, I won't.

Example: I do mostly embedded programming, in Rust. My focus is on building working devices. I use abstractions and build APIs that are useful for solving specific problems I have now, or anticipate in the future. E.g. an init function for an ADC that does the register writes described in the reference manual, so I can write `let adc = Adc::new()` instead of a series of reg writes.

It's common in embedded rust programming to make heavy use of generics, traits etc, and other things that leverage the type system to make it so that as many mistakes are caught at compile time as possible.

This sounds great in theory, but if a given abstraction makes type signatures a mess, doesn't allow me to configure a pin correctly due to an edge case, and provides no advantage over a quick check that I'm using the right pin, I won't use it. I see these Haskell examples as using abstractions by default, without questioning whether it's appropriate for a given use.


The article doesn't mention lawfulness until the very end. But lawfulness is a crucial part of the definition and usage.

Because, a lot of the convenience and ergonomics that we can derive using these abstractions comes from being able to rely on the laws. Without them, they are just not really useful.

And when looking at the examples, I can totally see that someone reads

> That's all Monad is. It is an implementation of some "method" that conforms to that interface specification.

and then makes "Set" or "HashMap" implement the interface unlawfully, and bad things happen when they are least expected.


In my opinion, in a programming context, that's Haskell specific. Most programmers in other languages won't worry about the laws. The laws are less useful and less relevant in strict languages, as is the case with many laws, which is why Haskell programmers are just about the only ones discussing them.

I label this my opinion quite on purpose, because I see the alternative viewpoint. I think you'll be more successful talking about lawfulness once someone gets the mechanics, given how much that has been a stopper over the years!


Don't get me wrong - I'm not saying the article is bad or not useful.

However, Monad and Functor are well defined terms. If you write an article where you complain about confusing explanations and try to do better, then you must start with a disclaimer: "I'm not explaining Monads as they are defined in math. I'm explaining something that is quite similar to it to make it more practical and later explain the actual term" or so.

Otherwise you are just adding to the confusion that people experience.

> The laws are less useful and less relevant in strict languages, as is the case with many laws, which is why Haskell programmers are just about the only ones discussing them.

I'm not sure if I would agree with that. But even if I would, the laws are still really important, even in strict languages. I've implemented quite a few monad instances in Scala (which is a strict language) and I'm speaking from experience.


I don't know why just saying the following block of text, combined with walking through a bunch of examples and how each of the examples fulfills those properties, wouldn't be sufficient:

    If F is a functor, then for any A, B, C, f, g such that f : A -> B and g : B -> C
    (where we write composition with a semicolon, so that the composition of f and g
    is written f;g : A -> C):

        F(f)   : F(A) -> F(B)
        F(g)   : F(B) -> F(C)
        F(f;g) : F(A) -> F(C)
        F(f;g) = F(f) ; F(g)

    Also, if id_A : A -> A is the identity on A, then F(id_A) : F(A) -> F(A) is the
    identity on F(A).

(in case they are unfamiliar with the notation f : A -> B , then this can be explained in each of the first few examples.)

I guess it might also be good to give some counterexamples, and point out how the counterexamples fail to be functors, and why they are also not nice.


What do you mean by lawful? This is the first time I've ever heard someone use that term in this context.


Monad and Functor make some additional promises, beyond what can be expressed in the types, that some users rely on.

In the same way that we expect addition of numbers to obey the commutative law and associative law. (Except, of course, that float addition isn't associative...)


Floats are a great example actually.

Implementing "unlawful monads" would be equal to giving someone a float and telling them "you can use this like a rational number from math". And in fact it might work for the longest time - until it doesn't (think rounding-errors and similar) and then the confusion will be big.

With monads it's the same, but potentially even worse, because it's rather easy to test float addition behavior, but for certain monads that can be much harder.


Haskell has a library called monad-validate that I think is a great example of both what the monad laws are important for, and why it's important to understand their reasons instead of just following advice literally in every possible situation.

Just as you want to break addition associativity so you can approximate real numbers, there are times when you want something to behave like a monad, but not follow its laws.


In addition to what has been said: if you are a Java developer, then this article will be helpful for understanding: https://www.sitepoint.com/how-optional-breaks-the-monad-laws...

It gives a lot of theoretical background, but if you've read the article here and already know the Optional type, you can skip it and go directly to "A Real World Example".


Thanks! This link should have been the main news; it's so much easier to think of 'unit' as an object constructor, and of 'bind' as flatmap. The article also does a great job of explaining the purpose of the monad laws.


Monads should satisfy the "monad laws": left identity, right identity, and associativity.

IIRC, Haskell relies on two out of the three, but I don't remember which two. But if you write a "monad" that doesn't satisfy them (even though it has the right function signatures), then you're going to get bogus results.


Time for an analogy and why math is supreme:

x+y+z is associative.

x-y-z is non-associative.

Functor, monoid and monad laws allow for undefined evaluation order, lazy evaluation, parallel execution and the same results for the same parameters. But only if the laws hold can such things be guaranteed when using certain abstractions. Associativity is one obvious caveat that might break abstractions over collections when using divide & conquer mechanics such as function currying.
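
A tiny illustration of why associativity is what lets you leave evaluation order undefined (plain folds; the example is mine, not the parent's):

    -- Splitting the work differently only agrees when the operation is associative.
    sumsAgree :: Bool
    sumsAgree = foldr (+) 0 [1, 2, 3, 4] == foldl (+) 0 [1, 2, 3, 4]      -- True

    diffsDisagree :: Bool
    diffsDisagree = foldr (-) 0 [1, 2, 3, 4] /= foldl (-) 0 [1, 2, 3, 4]  -- True
    -- foldr: 1-(2-(3-(4-0))) = -2,   foldl: (((0-1)-2)-3)-4 = -10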


Pre-conditions and post-conditions of API calls would be a rough analogy - although fairly inaccurate.

I.e. it's not just the interface that matters, but also rules about the implementation.


This part could be more simply explained:

> The problem is that a "do" block in the list monad chains together several of these flatMap calls. It is very easy for the transformation function to result in the computation growing exponentially as each one expands one item from before into one or more items. You have to have something else in the problem that prevents that from happening. You may see a lot of cute little algorithms like solving the n-queens problem, but in practice, this is not something you see used a lot in Haskell. The mathematical transform is cute, but hard to control because it's a bit too simple. In particular, this has no ability to "notice" that it is revisiting possibilities it visited before. That would take more machinery.

I think he's saying that it's doing a depth-first search without any caching. I'm wondering if Haskell has some other nice way of specifying depth-first searches that performs better?
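
For a feel of what "chains together several of these flatMap calls" means, here's a small list-monad do block with (in a comment) its desugaring; the exact example is mine, not the article's:

    -- Each bind can fan out, so chained binds multiply the branches explored.
    pythagorean :: Int -> [(Int, Int, Int)]
    pythagorean n = do
      a <- [1 .. n]
      b <- [a .. n]
      c <- [b .. n]
      if a * a + b * b == c * c then return (a, b, c) else []
    -- Desugars to: [1..n] >>= \a -> [a..n] >>= \b -> [b..n] >>= \c -> ...
    -- pythagorean 13 == [(3,4,5),(5,12,13),(6,8,10)]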


Quite often what you want is LogicT which gives you more control and can significantly improve performance over List.


This is precisely how my Programming Languages CS professor explained it. It probably contributed to Haskell being my go-to language.

This dialectic style adopted by the article for explanation is effective in exploring confused understandings. (A bit Socratic.) Not all writing needs to be technical writing like in a manual -- narrative explanations can be useful too.


Let me see if I can sum this up...

Functor = map

Monad = flatMap

Did I understand the article correctly?

edit: this is an honest question. The way I read his explanation of Functor is that it's a function that implements map (list as input -> list as output).

His explanation of Monad said that it takes a List as input, where each item returns its own List, and then the function returns a flat List of all the results.


That's the version for a list. But list is too simple to really see why people use these abstractions.

flatMap doesn't let you implement the audit log example, or work on a binary tree, let alone try to implement something like IO.


I have no idea why this is downvoted.

For all the functor tutorials, it really is nothing more than the interface and rules that define map. And Monad is essentially flatmap.

That's what's so absurd about all of this.


I think the downvotes are because it kind of sounds like you didn't read the article and are saying the things that the author directly states as a "misconception":

    "It is a common misconception that "monad is the same as flatmap", which has even made its way into some "monad libraries" in non-Haskell languages. flatmap is not "monad". flatmap is the particular implementation of the monad interface on lists. If you understand flatmap, you understand the monad implementation for the list datatype... you do not understand "monads" in general."
Maybe you think the author is wrong, but your comment didn't read like you were addressing these points.


Or maybe the article isn't understandable. Sorry, I tried to read the article and just got confused; I don't understand what the article is trying to say. So it would be a good start to present the understanding one might get from that article and then get constructively corrected where this understanding is wrong. In my eyes, this is the only way to understand a complex topic which isn't explainable in common language.


So, great call-out. My mention of the downvotes wasn't in reference to my own comment, but to the parent I was replying to, who is clearly newish to Monads. And their summary is actually fairly accurate.

If you can read and understand this comment (https://news.ycombinator.com/item?id=27639779) then you understand 80% of the practical use of Monad.

And the majority of people who understand that comment, could now implement the flatMap/join/>>= for the "Maybe a" Monad. And could probably implement ">>" for it as well.

And if you can implement "bind" for a randomly requested Monad, while you may not know what a Kleisli Category is, nor be able to define the monad laws and might still make a mistake implementing one, you have the essential concept of a Monad.


It's not "essentially flatmap" for more complicated data structures or data sources unless your definition of "flatten" is "do any arbitrary thing".


My definition of flatten would be: m (m a) -> m a

Definitely not arbitrary but maybe not useful for many monads (??)


That's just a signature though, and the implementation could do so many things for complex and/or strange data structures.

So in a sense yes it's "just" this function that could do anything following the signature.

But now you're in the same place you already were, looking at a simple type signature with massive hidden implications, and if you treat it like a list you'll miss those implications.

"flatmap generalized to a vast field of dissimilar types" is much more complicated than just "flatmap"


Yeah, in formal terminology that’s “join”. Join + return + monad laws is a perfectly reasonable definition of a monad.


you need fmap as well.


Please explain a parser monad in terms of flat map.

To be concrete, I'm talking about the kind with the semantics that if `Parser X` parses an `X`, and `Parser Y` parses a `Y`, `>>` between them parses an X then a Y. I propose that you must explain that in terms of flatMap in order to say that this is a correct understanding of monads.


Sure.

flatMap() is a specific function that you already understand. It is the most common instance of a general fn pattern that is used for the specific data type Array.

The flatMap equivalent function when used on the data type Promise is called then().

The special function you are asking to understand ">>" is simply another instance of then(). This one still waits for the promise to complete - the only difference is that the lambda you pass to then() doesn't have access to the result of the previous promise.

If you're using >>/flatMap on a Parser datatype, then instead of running the lambda after the first promise has completed successfully, it "runs" it after the first parser has completed successfully.

flatMap and (all of its other data type specific names) are the essence of a Monad.
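
To make that concrete, here is a minimal sketch of such a parser type in Haskell; the `Parser`/`runParser`/`char` names are illustrative, not from any particular library:

    newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

    instance Functor Parser where
      fmap f (Parser p) = Parser $ \s ->
        fmap (\(x, rest) -> (f x, rest)) (p s)

    instance Applicative Parser where
      pure x = Parser $ \s -> Just (x, s)
      Parser pf <*> Parser px = Parser $ \s -> do
        (f, rest)  <- pf s
        (x, rest') <- px rest
        Just (f x, rest')

    instance Monad Parser where
      -- The "flatMap": run the first parser, feed its result to the next one.
      Parser p >>= f = Parser $ \s -> case p s of
        Just (x, rest) -> runParser (f x) rest
        Nothing        -> Nothing

    -- Parse one specific character.
    char :: Char -> Parser Char
    char c = Parser $ \s -> case s of
      (x:rest) | x == c -> Just (c, rest)
      _                 -> Nothing

    -- X then Y: (>>) is just (>>=) ignoring the first result.
    ab :: Parser Char
    ab = char 'a' >> char 'b'   -- runParser ab "abc" == Just ('b', "c")

With that in place, `ab` consumes an 'a' and then a 'b', which is the sequencing semantics asked about above.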


> flatMap and (all of its other data type specific names) are the essence of a Monad.

Well, the extra sauce on top of an Applicative Functor that makes a Monad; but the essence of an Applicative Functor is also part of the essence of a Monad, and the essence of a plain Functor is part of the essence of an Applicative.


Not only did this 'answer' not reference any specific properties of flatMap; it didn't reference any specific properties of parsers either. I hold it wholly insufficient and incorrect.


I loved this. It helped me get monad better, and improved my understanding of functor.

What I am still unclear on is the difference between applicative and monad. Both, to my mind, have this concept of combining "wrapped" values. Currently my intuition for the two is exactly the same. So my intuition is probably wrong.

I would love a similar explanation to this post on applicative vs monad.


A "functor" lets you take a unary operation on simple values, `a -> b`, and apply it to a value which structures some `a`s within a larger context (`f a -> f b`). A classic example is (to use some Java syntax) `List<A>`; it is not difficult to write a method with Java signature `<A, B> List<B> map(Function<A, B> f, List<A> list)`.

Functors let you chain together unary (non-branching, straight line) processing pipelines. If your processing pipeline works over any functor, then it can be applied to any structure that acts as a source of values, and it will leave that structure unchanged around the processed data.

An "applicative" is a functor that also lets you take two values, one structuring some `a`s and one structuring some `b`s, and cross them together to produce a result that structures some tuples `(a, b)`. If I have two lists, I can cross them together to get a list of all pairs of elements between the two lists (like a CROSS JOIN in SQL).

Applicatives let you merge two separate processing pipelines together, combining data from both sides together into a single stream. If you can combine two pipelines into one, you can combine any number of them; this is what makes them strictly more powerful than general functors, which cannot support multiple parallel data sources.

A "monad" is a functor that also lets you take a value with two layers of nested structure, `f (f a)`, and meld/flatten/unify those two layers into a single layer of the same kind (`f a`). If I have a list of lists, I can unwrap the inner list structures, lifting all of their elements to the top level. Or, more viscerally (for me), if I have a binary tree whose leaves contain binary trees, I can graft the inner tree at each leaf onto the outer tree, producing a deeper single tree.

Monads let you accumulate some inner structure produced by a stage of your pipeline into the outer structure your values came from. In turn, this reveals the values under the inner structure to downstream pipeline elements.
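
A small list-based illustration of the applicative/monad difference described above (with an applicative, the shape of the result can't depend on the values inside; with a monad, it can):

    -- Applicative: cross two independent "pipelines" together.
    pairs :: [(Int, Char)]
    pairs = (,) <$> [1, 2] <*> ['a', 'b']            -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]

    -- Monad: the next stage depends on the value produced by the previous one,
    -- so the output's shape can vary with the data.
    replicated :: [Int]
    replicated = [1, 2, 3] >>= \n -> replicate n n   -- [1,2,2,3,3,3]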


Articles like this and the subsequent comments are just a long-running, inside joke of who can out-pretentious one another, right?


Whether you know the functor laws, or you misapply functors, you have to talk about it to be properly applicative ;)

The community is very much focused on learning, by teaching the little one knows.


I know this is just for Haskell but someone needs to start explaining category theory at a level 10 year olds can understand or something. Have struggled with confusion in the same way there.

So far all I got from this is that functors are functions?


A functor is two things:

- a parametrized type, like for instance List: you can have lists of integers, floats, functions, or any other type

- a way of mapping (in the sense of List.map) between instances of the parametrized type, given regular functions. This mapping should behave nicely with respect to regular function composition.

If your parametrized type is F, then the mapping operation turns ("lifts") regular functions A -> B into functions (F A) -> (F B) in a reasonable way: "if you give me a function then I can transform any list using it".

Functors can be used to "decorate" values: you can model exceptions (some value or an error), promises (some value not yet computed), state (some value + side effects) and many other kinds of effects using functors.

So in general an effectful computation will have type A -> (F B) for some functor F modeling the effect. And now you have a problem: how to compose effectful computations in order to build larger programs out of smaller ones? You cannot simply compose A -> (F B) with B -> (F C) since the types don't match.

What you need is a way of transforming an effectful computation B -> (F C) into a lifted function (F B) -> (F C) so that you can compose A -> (F B) with (F B) -> (F C) in order to get A -> (F C).

That device should satisfy some properties to make effectful composition work the way you expect (in particular there should exist identity effectful computations), and is called a monad.

A monad is a functor + some extra operations (bind/return) satisfying properties that will make effectful computations (using this functor) composable.

The type of bind is generally written (F A) -> (A -> F B) -> (F B) but really it is the same as (A -> F B) -> (F A -> F B) above (takes an effectful computation, gives back a lifted function).
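
A tiny sketch of that "compose A -> (F B) with B -> (F C)" step, using Maybe as the effect F (the function names are made up for the example):

    import Control.Monad ((>=>))
    import Text.Read (readMaybe)

    parseInt :: String -> Maybe Int          -- A -> F B
    parseInt = readMaybe

    half :: Int -> Maybe Int                 -- B -> F C
    half n = if even n then Just (n `div` 2) else Nothing

    -- Kleisli composition glues them into A -> F C:
    parseThenHalve :: String -> Maybe Int
    parseThenHalve = parseInt >=> half
    -- Equivalently via bind: parseThenHalve s = parseInt s >>= half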


I'm sorry I still barely understand what's going on here. Too many unfamiliar terms. Should note I don't have a CS nor math background, just basic programming.

Here's my best understanding so far:

A functor is a function that takes in a list of things and like a regular function, outputs a list of modified things...

So how is this different from a function?


A functor is a certain kind of function (there's a pedantic point to be made, but it's mostly irrelevant). However, most functions are not functors.

To give a simpler example, we say a function, f, is monotone when for x<y, f(x)<f(y); that is, it preserves order. Every monotone function is obviously a function, but functions like x^2 are not monotone.

Basically a functor is a function which preserves some other properties. I won't go into detail as many have before me, but that's the gist.


Interesting, thank you!

For those who like me wanted to see the edge case for the non-monotonic function, if you do:

-2 < -1

-2 is in fact less than -1.

However:

(-2)^2 < (-1)^2 ==becomes==> 4 < 1

And 4 is obviously not less than 1.


> What you need is a way of transforming an effectful computation B -> (F C) into a lifted function (F B) -> (F C) so that you can compose A -> (F B) with (F B) -> (F C) in order to get A -> (F C).

isn't this just applicative? What is needed to turn applicative into monad?


Clearer and more succinct than TFA, thank you


This was better than the tutorial, thank you.


I got a bit obsessed with this stuff (CT, HoTT, Haskell, etc.) and ended up studying math in college because of it. My advice is not to bother with CT. It's out-of-context, very abstract math, and you won't find any compelling examples or applications at any accessible level.

I don't want to discourage you from learning math, but I think this isn't a good place to spend too much thought at first.


My interest in category theory is for a completely different reason. I too have had to struggle to understand it, but I don't plan on using it in an applied or programming sense.

I've heard a bunch of people say that category theory is almost like a 'bird's eye view of mathematics'. That many of its concepts unify many mathematical concepts under one umbrella, and reveal the similarities and connections between mathematical subdisciplines.

The main reason this interests me is that I'd like to learn math much differently than its taught.

To me, to learn the same concept under different subdisciplines with different names, is O(N) learning.

However, if someone put together a category theory book where they list a concept, and then they do this:

"Now, any mathematician can (easily) see that every major area of mathematics is a category.

- In Set Theory the arrows are functions

- in Topology they are continuous functions

- in Group Theory homomorphisms

- in Linear Algebra they are linear transformations

- in Differentiable Geometry they’re smooth maps, and so on…

But what’s important is that we shifted from focusing on the objects to focusing on the functions, the ways in which we transform the objects. This is basically the category-theoretical perspective: it’s the functions that matter."

(Taken from this blogpost: https://catsinthejungle.wordpress.com/2008/11/15/category-th... )

Do I understand any of that? No. But what attracts me to this format is it (MAYBE...) approximates O(1) learning in math, so long as it can also point out, for every concept in every field, how they differ/relate to every other concept.

So what this does is it helps teach me the 'unifiers' of mathematics, and helps me see connections between fields which can be useful for creativity. E.g. I realize I'm doing X algorithm here from linear algebra and that is O(N), whereas if I just shift fields, and use Y algorithm from Group theory I might make it O(log n) because it differs in this way or whatever it is.

Besides that I see it as a potentially efficient way of learning a lot of math.


Math is at its heart about intuition; formalism is just a tool. One uses the formalism but one has to see past it. Roughly, it's the difference between knowing that a group is a set with a certain binary operation, and knowing that a group is a particular formalization of symmetry.

While a shortcut to understanding sounds nice, I don't think you'll find it by focusing on abstract formalism.


To me this would be another way of developing intuition, via analogy across disciplines. E.g. Volk's Metapatterns draws patterns (though perhaps less rigorously and more loosely) across disciplines, giving one intuition about many things.

In this case, seeing the connections across the disciplines isn't necessarily something I see or am pursuing because I'm searching for a shortcut (although I did mention O(1)), I just see that as a side benefit. What appeals to me more is developing the intuition you mention via the analogies. I feel like if the idea is restated in different ways (not necessarily just terms, but perhaps visualizations/re-conceptualizations) a more holistic/intuitive/fuzzy idea emerges rather than anything rigorous necessarily, but you know a lot more math than me so maybe I'm saying nonsense.


To be honest if you're sufficiently motivated and you find it interesting then go with it. Better to be motivated about an idiosyncratic method than ambivalent about the whole thing.


I think category theory is just the wrong way to look at it. A category is quite an abstract thing and it takes a reasonable mathematical background just to have some good examples to work with (I don’t really think type system categories are good to work with.)

I prefer to just write down the type and stare at it and think about what sort of functions could have that type. Later you can look at the rules. And this mostly means one needs to have written a little (eg) Haskell so that the one can read simple types. The signature of fmap looks like:

  fmap :: (a -> b) -> functor a -> functor b
It should be easy to guess what sort of types and functions could satisfy that. The rule restricts the possibilities more:

  fmap (\x -> f (g x)) a == fmap f (fmap g a)


>If They're So Wonderful Why Aren't They In My Favorite Language?

While monads as a general interface might not be in your favorite language, individual implementations of monads might very well be. JS Promise, C# Task and Rust Future are all asynchrony monads - their bind is .then / .ContinueWith / .and_then respectively (with some caveats; the first two overload their bind to also work as fmap).


It's not just "some" caveats: then auto-unwrapping means that you cannot use then as fmap reliably. In JS there is no such thing as Promise<Promise<Number>>. You cannot await/then the outer promise so that you can do something with the inner promise.

In practice it's not too much of a problem, but it does mean that if you use some FP library, you cannot trivially wrap a promise to make it a monadic type that will work with the library's monad utilities.
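
For contrast, a quick Haskell sketch (mine, using only the Prelude and Control.Monad): the nested value is perfectly legal, and flattening it is an explicit, separate step.

  import Control.Monad (join)

  nested :: Maybe (Maybe Int)
  nested = Just (Just 3)     -- nothing collapses this automatically

  flattened :: Maybe Int
  flattened = join nested    -- Just 3; the flattening is explicit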


So, just out of curiosity: I know in ECMAScript this is probably mainly for the purpose of giving you a convenient way to lift either (α → β) or (α → Promise β) into (Promise α → Promise β) without static typing or having two "then" functions, by forcing β and Promise β to be disjoint. It also allows looser composition underneath, since you can choose which kind of return to provide at runtime without having to re-wrap the bare case manually. But are there any other reasons for Promise of Promise to be squashed away like that?


Apart from simplicity, it was designed to be used with "thenables". If the value that a Promise resolved with had a `then()` member, the value would be treated like a first-party Promise. So even if there was a separate fmap-then and bind-then, the fmap-then would have to deal with thenables and act like bind-then anyway.

There's a bunch of history in [1], particularly [2] and [3], if you care to read them.

[1]: https://github.com/promises-aplus/promises-spec/issues

[2]: https://github.com/promises-aplus/promises-spec/issues/75

[3]: https://github.com/promises-aplus/promises-spec/issues/94


Brevity:

  class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b

Now, I am comfortable with upper case for type and lower case for instance.

But m is an instance of a function, not a variable. But m, a, and b all look like variables. You are already hammering me with the brevity here.

Maybe put the functions in a serif font, and variables in a sans-serif? Or use different colors? Other languages seem to be more helpful with the visual cues.

Solving problems in SQL is a blast, so it seems like Haskell should be an easy lift, but the brevity of the syntax has sent me packing for years.


> Now, I am comfortable with upper case for type and lower case for instance.

Monad is a typeclass, not a type. m is a variable, whose value is a type constructor.

> But m is an instance of a function, not a variable.

On the type level, m is a variable representing a type constructor; a concrete value it might have is List. It is an instance of a typeclass, not a function.

> But m, a, and b all look like variables.

m, a, and b are all type-level variables; the value of m is a type constructor (which is essentially a type-level function), while the values of a and b are types.
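
A concrete instantiation might make that clearer. A rough sketch, with made-up names (only Maybe and (>>=) come from the Prelude):

  -- with m = Maybe, a = Int, b = String, the signature specializes to:
  --   (>>=) :: Maybe Int -> (Int -> Maybe String) -> Maybe String

  lookupName :: Int -> Maybe String
  lookupName 1 = Just "one"
  lookupName _ = Nothing

  example :: Maybe String
  example = Just 1 >>= lookupName   -- Just "one"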


> m, a, and b are all type-level variables; the value of m is a type constructor (which is essentially a type-level function), while the values of a and b are types.

Let's try to apply that knowledge, and use my tribe's creed that "similar things should look similar, and different things should look different":

    -- α, β types
    -- m type constructor
    (>>=) :: m α -> (α -> m β) -> m β

    -- a, b types
    -- 𝘮 type constructor
    (>>=) :: 𝘮 a -> (a -> 𝘮 b) -> 𝘮 b

I think that's much better. GP's complaint was correct: writing different concepts with the same notation is confusing.


There are a lot of unspoken assumptions, like what m usually means. The brevity is supposed to make higher concepts expressible, however. Monad m is just a basic building block.

It's hard to find good intro material. The above could scare someone else away, even Haskellers.

Maybe an intro on how to read the examples, explanations, etc. could aid?


May good fortune land all over you.


My point is that keeping all that straight at the single letter level is brutal.

If it were a physical building, Haskell would be brutalist.


Haha, I read this originally as "Functors are Monads from the class of People who Have Read Too Many 'Tutorials'"


A response to the common proposal of “we should call functors Mappables instead”: Names Do Not Transmit Meaning, https://www.parsonsmatt.org/2019/08/30/why_functor_doesnt_ma...


This is a similar argument to not naming code directories after anything reasonable, and instead naming them after types of fruit, because only by reading the code in each directory can you really understand it, and it always changes anyway.

I think it’s better to at least try to make a name that transmits partial meaning or a general gist of the idea.


> I think it’s better to at least try to make a name that transmits partial meaning or a general gist of the idea.

If we didn’t use formal terms for math concepts, mathematicians would waste all their time arguing about the colloquial definitions of the colloquial words they tried to map onto abstract concepts. It’s better to have a clean break and force people to learn the specific formal meaning rather than try to limp along on a busted half-intuition.


the terms actually do come from an intuitive place if you look at the underlying math. it's called a functor because it does something function-like - mapping arrows in the domain category to arrows in the codomain. in programming, we think of this as a higher-order function that lifts a function on a category (of types) to a function on a different category (of types). it's only when you try to elide that explanation that the name appears to come from nowhere.

similarly, monads bear a strong resemblance to monoids - in fact, it's the same relationship as functions and functors and it's quite precise. these intuitions only become available to you with the math, though, and programmers have a strange aversion to mathematics for people who do it day in and day out.


What I find annoying in OCaml and functors is that it makes code really hard to follow. You’re wondering how that function is defined or what exactly is that type? Now you have to try to jump through many functors, their implementation and their instantiations. If IDEs could facilitate that process it’d be fine, but they can’t atm.


Thanks. Well written. Readable.

This sentence from it sums up many of my conversations about FP:

> It probably also doesn't help that we have a function literally called >>=, which is hard to pronounce. It's pronounced "bind", which is only slightly more helpful in that that still fails to evoke any useful imagery


Thanks, jerf! This is really good stuff.

One question on functor: Does the thing returned have to be "the same shape" as the original? That is, if I have a list, and I fmap on it, do I always get a list back? Or am I only guaranteed to get some kind of functor back? If I am guaranteed to get a list back, am I guaranteed that it has the same number of elements?

To use your initial language, is it fair to say that a functor takes a source of a, and a function that converts a to b, and returns a source of b of the same "shape" as the original source of a?

If it's true, that makes the function example clearer to me. Of course it has to return a function, because the shape of the input was "function".

(And, did the article answer this and I just missed it?)


The answer to your first question is yes. You can see that in the types.

  fmap :: (a -> b) -> f a -> f b

The `a` becomes a `b` (which could be a different type, but not necessarily), but the `f` is the same `f`.
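
For the list case, for instance:

  fmap (+1) [1, 2, 3]   -- [2, 3, 4]: still a list, still three elements

The same-length part is pinned down by the functor laws rather than the type alone: an instance that dropped or duplicated elements would violate fmap id = id.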


The idea of "adjointness" is present like everywhere in math. I wrote up something on it https://github.com/adamnemecek/adjoint.


> Title is literally true. This may not be the best place to learn about these concepts for the first time, because I'm going to focus on knocking down the misconceptions about them.

I've recently been interested in learning more math to apply to programming. Does anyone here on HN know where I can find quality education online (free) I can use through the rest of the summer? I only got to geometry as my highest maths in high school.


I have enjoyed Jerf's content for years. I really enjoy this content as well. Jerf if you read this, excellent stuff, thank you.


I don't really like the description of the IO type. Rather, I would say that the type "IO x" is the type of I/O operations that produce a result of type x. The functor/monad functions that can be used include:

- "return x" produces an I/O operation that does nothing and produces the result x.

- "x <$> y" or "fmap x y" produces an I/O operation that does the same thing as y, but the result value is the function x applied to the result of y.

- "x >>= y" produces an I/O operation that, when performed, performs x and then passes the result to the function y, which returns the I/O operation to be performed next.

- "join x" takes the I/O operation x whose result is itself an I/O operation; the new one first does x and then the I/O operation which is the result of x.

An I/O operation might have different results when performed multiple times, even if it is the same I/O operation.
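
A minimal sketch of those four in action (the program itself is made up; it only needs the Prelude plus Control.Monad and Data.Char):

  import Control.Monad (join)
  import Data.Char (toUpper)

  main :: IO ()
  main = do
    -- fmap: the same I/O operation as getLine, but with the result transformed
    line <- fmap (map toUpper) getLine
    -- (>>=): the result of one operation is passed to a function
    -- that returns the next operation to perform
    return line >>= putStrLn
    -- join: an operation whose result is itself an operation; perform both in turn
    join (return (putStrLn "done"))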

Also, I can say that generators in JavaScript seem to form two different monads actually (as far as I can tell), as described by the following table:

  yield x       yield*x
  return x      return yield*x

Also, functors and monads are more than just the interface; they also have to satisfy certain mathematical properties (such as "fmap id = id", etc).


This post brings me back to this idea for industrial programmers: https://www.simplehaskell.org/


I think the reason so many people have trouble understanding functors and monads is that they are not complex constructions, but they look useless when you first see them.

There is nothing inherently interesting about functors and monads. Haskell, as a research project, discovered that you could isolate IO in a lazy language inside a type, using a monadic construction to chain operations, which allows you to track side effects. That was interesting; the rest, less so.

Now the Haskell community somewhat fetishized these abstractions for reasons which are their own. They must be apparent to the community at large. They are not to me.


> Haskell, as a research project, discovered that you could isolate IO in a lazy language inside a type using a monadic construction to chain them which allows you to track side effects.

Sounds like LINQ in C#. I wonder if there's any correlation.


LINQ is worked on by some of the same people developing/designing Haskell, and these methods are slowly being injected. LINQ is monadic and has a bind function that generalizes beyond IEnumerable.


the reason for "fetishizing abstractions" is that you can write libraries that work with the generic structure of those abstractions, and are therefore automatically useful for all the specific cases. the design of ruby's "Enumerable" library in the standard lib is a good example: it's a huge collection of methods that works for anything that supplies an 'each' iterator.
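
haskell has the same thing one level up: something like replicateM from Control.Monad is written once against the Monad interface and then works for every instance (a rough sketch, type annotations added for clarity):

  import Control.Monad (replicateM)

  -- one generic function, three different monads
  lists :: [[Int]]
  lists = replicateM 2 [0, 1]        -- [[0,0],[0,1],[1,0],[1,1]]

  maybes :: Maybe String
  maybes = replicateM 2 (Just 'x')   -- Just "xx"

  readTwo :: IO [String]
  readTwo = replicateM 2 getLine     -- perform getLine twice, collect both lines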


I was specifically talking about these abstractions as monads and functors not abstraction in general. Enumerable is a lot less abstract than both monads and functors.

I personally don't think Haskell stands at the right level of abstraction, but that's a personal opinion.


'do' notation is a good example of where the monad abstraction comes in handy.
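
A rough sketch of what it buys you, using Maybe (safeDiv and calc are made-up names):

  safeDiv :: Int -> Int -> Maybe Int
  safeDiv _ 0 = Nothing
  safeDiv x y = Just (x `div` y)

  -- with do-notation, the failure plumbing disappears from view...
  calc :: Int -> Maybe Int
  calc n = do
    a <- safeDiv 100 n
    b <- safeDiv a 2
    return (a + b)

  -- ...but it is just sugar for chained (>>=)
  calc' :: Int -> Maybe Int
  calc' n = safeDiv 100 n >>= \a -> safeDiv a 2 >>= \b -> return (a + b)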


The monads video by "Fun Fun Function" finally explained functors and monads for me in an understandable way.


Actually, just learn the construct `Promise.then().catch()` and you already know how to use a monad in your code. The `then` is the `bind` of the monad. What's complicated here? Want `do` syntax?

  a <- method b
  c <- method a

is equivalent to

  a = await method(b)
  c = await method(a)

Again, using JavaScript to learn monads in the real world is a better way to learn.


That doesn't tell me a single thing about what 'monad' means.

It's like having someone run hello world in java and declaring they know what 'static' means.


That’s an example of a specific monad, not the definition of the general concept of a monad.

Also, JS promises are not really monads. They lack denotational equality, so they can’t be said to obey the monad laws.
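
For reference, the laws in question, in the standard Haskell statement:

  return a >>= f    ==  f a                       -- left identity
  m >>= return      ==  m                         -- right identity
  (m >>= f) >>= g   ==  m >>= (\x -> f x >>= g)   -- associativity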


Look, another nebulous post on functional programming.


If it can't be explained in a tutorial or two - is it a good idea to use in (most) production environments?

(I know, I know - your environment is a special case).



