This is very thorough, and I'm sure it's helpful to a lot of folks. So thank you to the author. For me, it's probably a syntactical issue, but I find things such as the below too hard to comprehend.
What did we get just by doing this? Let's see:
λ> :type EitherIO
EitherIO :: IO (Either e a) -> EitherIO e a
λ> :type runEitherIO
runEitherIO :: EitherIO e a -> IO (Either e a)
So already we have a way to go between our own type and the combination we used previously! That's gotta be useful somehow.
OK, looks like a pair of parentheses moved. I have seriously no idea why that is interesting or useful. Conceptually what's going on is probably not complicated, but that syntax makes it inscrutable to me. (I could be wrong and the problem could be at a deeper level of understanding.)
They're different representations of the same thing.
This is useful because we can do easy error handling in the EitherIO monad but we can't run it. We can run the IO monad but handling errors in it sucks ass.
So we handle errors in EitherIO, and make it runnable (i.e. convert to the IO monad) by calling runEitherIO.
It's like calling a function that takes a pair when some other API gives you a two element list. The same kind of data, just munging the shape so things work. In pseudocode:
// EitherIO and runEitherIO in this analogy.
fun toList((a, b)) -> [a, b]
fun toPair([a, b]) -> (a, b)
// Gotta convert to a "runnable" pair before calling the function.
var latLngList = [blah, blah]
var resultPair = api1ThatRequiresPair(toPair(latLngList))
// Convert representations to do operations with a different API.
api2ThatRequiresList(toList(resultPair))
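For what it's worth, the same shape-munging reads almost identically in real Haskell. Here's a runnable sketch (toPair/toList are made-up helpers for the analogy, not anything from the article):

```haskell
-- Converting between two representations of "the same kind of data",
-- just like EitherIO/runEitherIO convert between
-- EitherIO e a and IO (Either e a).
toList :: (a, a) -> [a]
toList (a, b) = [a, b]

toPair :: [a] -> (a, a)
toPair [a, b] = (a, b)
toPair _     = error "expected exactly two elements"

main :: IO ()
main = do
  let latLngList = [59.3, 18.1] :: [Double]
  -- Munge the shape so the pair-based "API" can consume it,
  -- then munge it back for the list-based one.
  let resultPair = toPair latLngList
  print (toList resultPair)  -- prints [59.3,18.1]
```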
data EitherIO e a = EitherIO {
runEitherIO :: IO (Either e a)
}
But, for the sake of clarity, it can be written as
data EitherIO e a = MakeEitherIO {
runEitherIO :: IO (Either e a)
}
The author pointed out two functions that this EitherIO declaration gave us. One is a constructor, MakeEitherIO, with type
IO (Either e a) -> EitherIO e a
In other words, MakeEitherIO takes a value of type IO (Either e a) and returns a value of type EitherIO e a (you could think of this as "wrapping" the original IO value).
The second function, runEitherIO, is an accessor function for EitherIO's named record field "runEitherIO". It has type
EitherIO e a -> IO (Either e a)
That is, runEitherIO takes a value of type EitherIO and returns the internal value of type IO (Either e a) (you could think of this as "unwrapping" the IO value).
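Putting the two together, the wrap/unwrap round trip looks like this (a minimal sketch reusing the MakeEitherIO constructor name from above; the Right 42 value is just an arbitrary example):

```haskell
-- The record declaration gives us both functions at once:
--   MakeEitherIO :: IO (Either e a) -> EitherIO e a   (wrap)
--   runEitherIO  :: EitherIO e a -> IO (Either e a)   (unwrap)
data EitherIO e a = MakeEitherIO { runEitherIO :: IO (Either e a) }

main :: IO ()
main = do
  -- Wrap an IO (Either String Int) into an EitherIO String Int...
  let wrapped = MakeEitherIO (return (Right 42)) :: EitherIO String Int
  -- ...and unwrap it again so it can actually be run in IO.
  result <- runEitherIO wrapped
  print result  -- prints Right 42
```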
This tutorial is awesome and really tightened up a recent application I wrote. I've been trying to grok monad transformers for a while and I'm going back over the transformers paper[0] since I feel this tutorial helped things "click" for me.
It's worth looking at the Eff language, which is an OCaml variant. Much easier to understand, especially if you're monadically challenged. This paper is a great introduction to the idea of algebraic effects:
All formulations I know of are slow right now since there always has to be some kind of clunky search mechanism through the handlers at runtime. That may be fixed at some point.
In the meantime, mtl is almost extensible effects and is very fast.
Can someone give an elevator pitch of extensible effects? Monad transformers are very useful and rather easy once you get used to them, although not without their downsides. What advantages do extensible effects have?
Extensible effects are easier to compose in arbitrary ways. If you have a monad transformer stack you can only have each type of transformer once in each stack before you have to start using lift . lift . lift to get to the inner layers as the type class functions will only get you to the outermost instance.
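A minimal sketch of that pain point, using a hypothetical stack with two StateT layers (one Int, one String) built on mtl's Control.Monad.State: the MonadState class functions only reach the outermost StateT, so touching the inner one needs an explicit lift.

```haskell
import Control.Monad.State

-- Two StateT layers of the same transformer type. The type class
-- resolves `modify` to the outermost (Int) layer only.
type TwoStates = StateT Int (StateT String IO)

bump :: TwoStates ()
bump = do
  modify (+ 1)            -- outer Int state, found by the type class
  lift (modify (++ "!"))  -- inner String state needs an explicit lift

main :: IO ()
main = do
  ((_, n), s) <- runStateT (runStateT bump 0) "hi"
  print (n, s)  -- prints (1,"hi!")
```

With a third duplicated layer you'd be writing `lift . lift`, which is the composition annoyance the comment above describes.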
Try[1] in Scala standard library is also similar in concept. Erik Meijer covers this subject very well in his talks[2].
[1] "The Try type represents a computation that may either result in an exception, or return a successfully computed value." http://www.scala-lang.org/files/archive/nightly/docs/library/index.html#scala.util.Try
[2]http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Keynote-Duality
Slight nit with title: Monad Transformers can be used to model exceptions, but are more general - really they are a way to compose monads on a per-type basis (because all monads do not compose together automatically "for free" like functions or Functors).
I think that the title is meant to be interpreted as "(A Gentle Introduction to Monad Transformers) – or, (Values as Exceptions)" rather than "(A Gentle Introduction to ((Monad Transformers) – or, (Values as Exceptions)))". That is, I think that the intention is to introduce the reader to monad transformers by reifying exceptions into values (and thus arriving at monad transformers), rather than to give the reader a complete understanding of monad transformers, which are (in this hypothetical world) the same thing as exception-reifying values.
holy hell, why are tutorials like this always in Haskell? While I'm sure there are a lot of people using Haskell, there are a whole lot more in Java or Python.
People that use Haskell tend to care more about this sort of thing than people that use those languages. Or alternatively, if someone cared about this sort of thing they would use Haskell.
There are quite a few layers between what's discussed there and this article. In particular, static typing is an important component of monad transformer stacks, though it's possible to express them without it. Without types, however, it's hard to have a conversation about what's actually going on.
I might also say, people who are using Haskell have to care about this kind of stuff, because the language more or less forces it on them. Java or Python, not so much.
There are plenty of monad tutorials in languages other than Haskell but Haskell is probably the language where the point of the article is not obscured by the syntactic ceremony of the language.
Spend some time learning the basic Haskell syntax (it's really not that hard) and you'll be able to get to the "meat" of such articles much, much faster than if the article were written in any other language.
The thing I miss most is the Maybe monad. In idiomatic Haskell, Maybe serves roughly the purpose that null serves in Java -- a value may or may not be present. You don't need monads to query a Maybe value -- it works roughly like a Java Optional -- but working within the Maybe monad allows you to write a series of operations as though the Maybe value always has an accessible value and know that the whole series will short-circuit to a Nothing value if a Nothing is ever examined.
For a series of operations, is that one monad or a series of monads?
If I have to use a monad for each operation, that feels like the same order of magnitude of effort as checking for null after each operation. If it's just one monad, then it feels like the monad saves work.
For that matter, the monad may still save work, because I only have to think about how I handle Nothing in one place, at the end of the series of calls. Whereas with null, I have to think about how to handle it after each call that can return null.
Just one. If you have a series of functions that could return Nothing (Maybe's version of null), you chain them like:
foo >>= bar >>= baz
This will automatically return Nothing if any of the chained functions does. This does require all of the functions to have the type (a -> Maybe b), indicating that they take a value and can return either another value or Nothing. If you have a function with type (a -> b), indicating that it is guaranteed to return something, you can wrap it like so:
foo >>= return . bar >>= baz
A complete usage might look like:
f x = fromMaybe defaultValue $ Just x >>= foo >>= bar >>= baz
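To make that concrete, here's a self-contained version where a hypothetical `halve` (it succeeds only on even numbers) plays the role of foo/bar/baz:

```haskell
import Data.Maybe (fromMaybe)

-- A function of type (a -> Maybe b): halving fails on odd numbers.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- Chain three halvings; any Nothing short-circuits the whole chain,
-- and fromMaybe supplies the default in one place at the end.
f :: Int -> Int
f x = fromMaybe 0 $ Just x >>= halve >>= halve >>= halve

main :: IO ()
main = print (map f [8, 12])
-- 8  -> 4 -> 2 -> 1, so f 8 = 1
-- 12 -> 6 -> 3, then halve 3 is Nothing, so f 12 = 0
-- prints [1,0]
```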
Are you implying that Java cannot be used to make useful software?
No, of course not. Are people without feet useless?
What does it mean to say that Java does not have feet?
It means that it lacks a capability for which it also lacks a need (shoes). This capability is that of expressing purity via the type system. Monads - and by extension, monad transformers - are a tool designed to assist the expression of imperative algorithms in a pure context.
The reason I phrase it this way is because people often make the mistake of assuming Haskell is lacking expressive power due to its purity. This is untrue. Haskell is perfectly capable of expressing messy, mutable, imperative algorithms just like any other language. Other languages, on the other hand, struggle to express many of the pure algorithms and data structures used in Haskell; they simply lack the ability to enforce purity within the language and are thus reduced to purity by convention.
Does lack of ability to enforce purity equal lack of ability to express the algorithm/data structure? Or does it just equal lack of ability to prove the purity?
You can only express some kinds of purity if you can enforce it at the type level. An otherwise pure data structure that contains mutable elements is a violation of purity; a contradiction. If you cannot express a data structure which guarantees that its elements (and its elements' fields/elements) are pure, you cannot express a pure data structure (also known as a value).
There are a lot of tutorials around monads and monad topics using Haskell because... Haskell is one of the few languages which uses monads in a principled and explicit way.
Haskell introduced monads as a way of solving certain problems Haskell had. So if you want to use Haskell effectively, you have to learn about monads. Monads are useful in Haskell.
But that doesn't mean they are as useful in every language. So other languages don't NEED tutorials on monads as much as Haskell does. This isn't because those other languages are worse (less "principled" or "explicit") than Haskell. It's because they aren't structured to make it important to have tutorials on monads. Instead you worry about whatever the specific issues are for those other languages.
Expanding on the other replies, monads are not useful in most languages because most languages are not powerful enough to use them. The syntax is too heavy, because you need lots of function closures. The type system is too weak, because using monadic values without higher-kinded types is much more difficult and less useful.
The fact that everybody tries to import monadic values into their languages is evidence that they would indeed be useful; the fact that such libraries are always useless in practice and incapable of even transliterating the Haskell, and often fail to even implement the full feature set of monadic values in Haskell even if you're willing to type reams of code, is why you don't use them in other languages.
You don't ignore monads in non-Haskell languages because the other languages are too powerful to need them, hurr hurr Haskell is teh weaks... you ignore monads in other languages because those languages are too weak to usefully implement them. If they could, you'd see them, and I am very, very confident over the next 10 years you're going to see more and more languages coming out that really have and really use monadic values, because they will be strong enough to do so, and once you have that, the value proposition is just too good to ignore. They are too darned useful to ignore.
> The fact that everybody tries to import monadic values into their languages is evidence that they would indeed be useful...
Could you give a bit more detail here? From where I sit, I certainly don't see people trying to import monadic values into the languages I use, and I don't see much evidence that they'd be useful. (I've been trying to get my mind around monads and when and how to use them, and one of the problems I have in learning them is that, as far as I can tell, they don't look like the solution to any problem I've ever actually had.)
"From where I sit, I certainly don't see people trying to import monadic values into the languages I use, and I don't see much evidence that they'd be useful."
By importing them, I mean by writing articles like this, which invariably produces code with such an overhead to use it that nobody ever would.
They don't appear to solve any problems partially because you're already "in" a monad; program execution flow is already a monadic value itself. They partially don't appear to solve any problems because you can't really "think" in them, because, as I mentioned, most languages are simply too weak to make them convenient enough to use. And they partially don't appear to solve any problem because the samples used by all the "tutorials" are all pretty trivial; it's sort of like dismissing Object Orientation because obviously it's only useful for modeling relationships between shapes, animals, and cars or something, since that's all anybody ever talks about.
The reason you don't discern the presence of any problem is that you happily purr along as an effective 'human compiler' in a language with a [relatively] lower level of abstraction. You cannot detect that you're actually doing compilation from higher-level constructs because those higher-level concepts have not been formed in your mind (and you haven't been exposed to languages which capture those concepts more precisely/succinctly). It's quite likely, in fact, that your [manual] compilation process is not consistent or repeatable and you end up with a lower fidelity translation (i.e., with lots of noise interspersed) than a truly mechanistic compiler would.
I probably should have said this in my previous comment (the GP), but I mainly work in embedded systems. I need to be able to reason about the low-level behavior of the system more than I need better high-level abstractions.
And getting to write large amounts of my system inside monads (because of lots of mutable state)... that doesn't look like a solution to me. It looks more like a problem.
Are there high-level abstractions that would simplify my view of this? Probably. Would getting those abstractions to behave as needed on the low level be a net win, or a net loss? My money is on the "net loss" side.
That's a bet I would take any day and I think you probably don't need the low level access you think you do. It's largely due to historical architectural decisions that people think so.
"Atom is a Haskell DSL for designing hard real-time embedded software. At Eaton, we use it for automotive control systems. Based on guarded atomic actions, and similar to software transactional memory, Atom enables highly concurrent programming without the need for mutex locking. In addition, Atom performs compile-time task scheduling and generates code with deterministic execution time and constant memory use, simplifying the process of timing verification and memory consumption in hard realtime applications. Without mutex locking and run-time task scheduling, Atom can eliminate the need and overhead of RTOSs for many embedded applications."
Yeah yeah, they are all strong independent languages that don't need no monads. What do I care.
EDIT: you don't strictly need something like monad transformers, though. In general, monads are just another type class. A special one since IO is implemented with them, but other than that, they are not essential.
I'd really like it if foundational language/programming advocacy articles led with a few links into github usages. I mean, I suppose it's possible that it's so new that it hasn't actually been used, but honestly, if you're writing an article about it you should probably be able to link into your own examples, at least.