While cringing at the cliche: monads were the key ingredient.
Let's not turn this into a cliche! Monads are powerful. I find do notation much easier to read in many cases:
do subscribe
   assertSubSucceeded
   startRoundtripTimer
   assertReceivedWebHook "channel_occupied"
   stopRoundtripTimer
   unsubscribe
   assertReceivedWebHook "channel_vacated"
If you haven’t come across monads before, don’t worry, you can think of the monad we are using as “code that does IO”.
On top of this, once you have a basic intuition for how monads work, you start gaining the ability to describe what your program does at the type level. For example, this block of code requires a read-only environment (Reader), has logging output (Writer), and accesses websockets (WebSocketT).
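To make that concrete, here is a small self-contained sketch (hypothetical names, not our actual test code) of how the constraints alone document what a block can do:

{-# LANGUAGE FlexibleContexts #-}
-- A rough, hypothetical sketch: the constraints say the block reads config,
-- appends log output, and does IO (IO stands in here for the websocket layer).
import Control.Monad.Reader (MonadReader, asks, runReaderT)
import Control.Monad.Writer (MonadWriter, tell, runWriterT)
import Control.Monad.IO.Class (MonadIO, liftIO)

data TestConfig = TestConfig { channelName :: String }

roundtrip :: (MonadReader TestConfig m, MonadWriter [String] m, MonadIO m) => m ()
roundtrip = do
  chan <- asks channelName
  tell ["subscribed to " ++ chan]
  liftIO (putStrLn ("pretend websocket traffic on " ++ chan))
  tell ["unsubscribed from " ++ chan]

main :: IO ()
main = do
  ((), logLines) <- runWriterT (runReaderT roundtrip (TestConfig "test-channel"))
  mapM_ putStrLn logLines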
Also, the fact that Haskell is a less mainstream language led to a few obvious disadvantages: less documentation, not quite as many libraries, poorer tooling (although having on-the-fly type errors show up with ghc-mod is already a massive win over Ruby).
Yes, you essentially need to be prepared to do any or all of the following if you want to use Haskell commercially:
1) Patch libraries and tooling.
2) Engage communities and maintainers (bug reports, mailing list, etc).
3) Implement functionality (from scratch) which "should be there".
If you can't, or won't, do these things, Haskell will be much more painful to use in the real world.
I agree that the do notation does look nice. The one thing that I like about the >> operator is that it makes it clear that you are combining two functions. But to be honest I don't feel strongly either way.
I definitely agree with your point on monads. I tried to keep this post more accessible, but I hope to write a more technical post in the future on how we used monad transformers when writing these tests. We ended up using WriterT, ReaderT, LoggingT, and EitherT for the "test components". Unfortunately the websocket library we used did not play well with this (it is callback based).
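For a rough idea, the stack had a shape along these lines (a simplified guess at the general shape, not the real code, and with ExceptT from transformers written where we used EitherT from the "either" package):

-- Hypothetical shape of such a stack; field names are made up.
import Control.Monad.Reader (ReaderT)
import Control.Monad.Writer (WriterT)
import Control.Monad.Logger (LoggingT)
import Control.Monad.Trans.Except (ExceptT)

data TestEnv = TestEnv { appKey :: String, appSecret :: String }

type TestFailure = String

-- Read from the outside in: read-only environment, accumulated output,
-- structured logging, short-circuiting failure, and IO at the base.
type TestM = ReaderT TestEnv (WriterT [String] (LoggingT (ExceptT TestFailure IO)))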
And I also agree with your final points. Having said that, I had very low expectations in this area going in, and it actually turned out to be not nearly as bad as I expected.
At the risk of being a little pedantic, >> isn't combining two functions, e.g.
Just () >> Nothing == Nothing
Nothing >> Just () == Nothing
It's perhaps viable to think of (m >> f) as modifying/continuing the m effect/computation using a continuation which is constantly the f effect. But that's about as close to "combining two functions" I can get it without specifically picking a monad that is a function.
On the other hand, >=> is absolutely combining two functions. It's exactly (flip (.)) in a Kleisli category over some monad.
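To make that concrete (the helper functions below are invented for the example):

import Control.Monad ((>=>))

-- Two Kleisli arrows over Maybe:
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

halve :: Int -> Maybe Int
halve n | even n    = Just (n `div` 2)
        | otherwise = Nothing

-- (>=>) composes them the way (.) composes plain functions,
-- with the Maybe effect threaded through:
headThenHalve :: [Int] -> Maybe Int
headThenHalve = safeHead >=> halve

main :: IO ()
main = do
  print (headThenHalve [10, 99])  -- Just 5
  print (headThenHalve [3])       -- Nothing
  print (headThenHalve [])        -- Nothing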
By "combining" I just meant an operator that takes multiple functions and returns a new function based on those arguments. I guess these "functions" are a special case because they are constant monadic functions. Do you mean to say composition rather than combination when you are about >=>?
Yeah, I'm happy to abide by this all when the caveats are specialized, but >=> is definitely and directly what's being appealed to.
I meant combination just so as to generalize the idea of "composition". If you're willing to call Kleisli composition "composition" generally then I won't complain :)
Yes, but let's be careful not to confuse beginners that thought they were on the path to understanding and then you come and say "it's not combining two functions".
Well, it is combining two morphisms - that you can't deny. And most people use the shorthand word "function" to mean "any morphism" including type constructors.
You may be technically correct, but it is not incorrect for people trying to make sense of all these transformations to say that >> combines two functions. These functions are picked up by the monadic context and executed in sequence.
>> certainly never combines two functions! The closest you can come is to specialize the monad in context to ((->) e) and then talk about that kind of combination. The other way to go is to see >> as a special case of >=> wherein constant Kleisli functions are applied. I've got no problem with that now... Except now you're really talking about >=> and you may as well be specific?
You missed my point. Haskell beginners do not know we use the word "morphism" for the most generic "take an input, give an output" thing (the most generic map). That's why they call it "function".
It's not wrong, just not specific enough. Does (>>) not combine two maps?
> How do you interpret (>>) :: Monad m => m a -> m b -> m b as combining two maps of any kind
The answer is in your question. Is -> not an arrow? Isn't there one of those arrows pointing into and out of every m x in the signature above? (for the first and last m x, there could be arrows pointing in and out, respectively). That means they are maps.
A morphism is any value that has arrows pointing to it (from other values) and out of it (to other values). Read my answer above as well.
I am not debating your arguments. I know you know what you are talking about and you are right in what you're saying otherwise. What I am saying is in Haskell everything except Bottom is a categorical morphism, and you seem to be ignoring this concept, which is helpful for those who are trying to make sense of all this.
I'm not sure I've ever heard of this concept, tbh. Categorically, you have arrows. Haskell is often modeled by the category Hask of types and arrows between types. If we denote these categorical arrows in notation as fat arrows then things like
Int,
(),
Maybe (Maybe (Maybe String))
forall a . a -> a
are objects and things like
Int => (),
() => Int
Maybe (Maybe (Maybe String)) => Maybe (Maybe (Maybe String))
are arrows. Arrows are often called morphisms so you might talk about the morphisms as the elements of a Hom set like Hom((), Int).
Notably, the Haskell arrows, the type of functions, happen to internalize the categorical arrows. This makes Hask cartesian closed. Thus, we have a correspondence between the arrows
a => b
and the objects
a -> b
This is why we often talk about function types as being categorical arrows.
---
All that said, the type `m a` is not a function type let alone an arrow unless we happen to specialize `m` to be one. More specifically, the type `forall m . m a` is never a function type no matter the choice of `a`.
The notation `(-> m a)` or `(m a ->)` is... not actually a type. It's a type constructor at best (though it's illegal Haskell notation) which fits into a different kind of categorical construction if you want to go there.
The type `(a -> b)` "is" a valid arrow via the correspondence I mentioned earlier. The thing that's being referred to is the fully applied type constructor. If we have
type Arr a b = a -> b
then `Arr` is a type constructor of kind `(* -> * -> *)` (i.e. a "type constructor constructor") and `Arr a` a type constructor of kind `(* -> *)`.
The type `Arr a b` is in correspondence with the morphisms of Hask. A combination of arrows would be a morphism from pairs of internal arrows to... something else
(a -> b, c -> d) => r
we can represent that with an internal arrow
(a -> b, c -> d) -> r
and then use the currying natural isomorphism
(a -> b) -> (c -> d) -> r
which is the (internal) type of "arrow combining morphisms".
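To pin that down, here is one concrete inhabitant of that type, picking r = (a, c) -> (b, d); it's the same thing (***) from Control.Arrow gives you for plain functions:

import Control.Arrow ((***))

-- One "arrow combining morphism" of type (a -> b) -> (c -> d) -> r,
-- with r chosen to be (a, c) -> (b, d):
pairwise :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
pairwise f g (x, y) = (f x, g y)

main :: IO ()
main = do
  print (pairwise (+ 1) not (41, False))   -- (42,True)
  print (((+ 1) *** not) (41, False))      -- the same, via Control.Arrow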
That's the clearest I can explain what I'm on about. Where do you fit your notion of categorical morphism into there?
I was a bit sleepy when I wrote the comment above. Please consider the following function as an example of what I'm talking about, instead of the confused "chevron" function above. This is make-believe C:
MaybeInt bind (MaybeInt a, (function : int -> MaybeInt) f)
{
    if (a.maybe_value == JUST)
        return f(a.integer);
    else
        return nothing();
}

MaybeInt chevron (MaybeInt a, MaybeInt b)
{
    return bind(a, lambda(x){ b });
}

/* Just 5 >>= (\x -> Just $ x + 1) */
bind( just(5), lambda(x){ just(x+1); } );
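For comparison, the real Haskell versions of those two for Maybe look essentially like this (class machinery left out):

-- Essentially Maybe's (>>=) and (>>), written as standalone functions:
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) f = f x

thenMaybe :: Maybe a -> Maybe b -> Maybe b
thenMaybe m k = m `bindMaybe` \_ -> k

main :: IO ()
main = do
  print (Just 5 `bindMaybe` \x -> Just (x + 1))      -- Just 6
  print ((Just () `thenMaybe` Nothing) :: Maybe ())  -- Nothing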
I'm sure there's ways we can force it, but I don't think it's terribly natural to consider "Just x" or "Nothing" to be "tak[ing] an input, giv[ing] an output". "Just", certainly, but not once you've applied it.
Just 5 is a map, and Nothing is a map. Let's watch:
Just 5 >> Nothing
Just is a map that accepts an input and returns that input decorated with a certain structure.
Just ---(5)---> (Just 5)
Which means there's an arrow that shoots out of (Just), itself a map - because it has arrows coming in and going out - which we can use to arrive at (Just 5).
Then (>>) combines the Just 5 map - a map that takes a function, like (>>) in this case, and returns another function that accepts another Maybe and returns another Maybe.
Let's draw that:
(Just 5) ---(>>)---> (\x -> (Just 5) >> x)
Which means there's an arrow that shoots out of (Just 5), itself a map, to which in this example we go to its index (>>) and retrieve the resulting value, another map: (\x -> (Just 5) >> x)
Now we give the map above the input Nothing, and it returns the value Nothing, itself a map, because it has arrows coming in and shooting out as can be seen below:
(\x -> (Just 5) >> x) ---(Nothing)---> Nothing ====> ...
The ====> arrow above I've added to show other possible arrows you might want to take from Nothing onwards.
In other words, to say that a value in Haskell is not a map is to say it can't be arrived at, nor escaped from. If I understand correctly, the only value that can't be escaped from is Bottom, which has 0 arrows pointing out of it. Therefore it is not a map. But Just, Just 5 and Nothing are all maps, as they all have arrows pointing to them (which means we can arrive at them) and arrows pointing out of them (which means we can escape from them). Something that has arrows coming in and going out, we call a morphism.
This seems deeply confusing, and possibly deeply confused. Something that has arrows coming in and going out we call an object. We call the arrows morphisms. Every "a" in (+) :: Num a => a -> a -> a has an arrow going in and/or coming out. It seems unhelpful to say "plus composes functions".
I agree I'm a bit confused there, namely in that the maps are not contained in the objects, but as you said, are in the arrows themselves.
So (>>) contains a map that in its Nothing index has the lambda (\x -> Nothing >> x). You're right.
And yes, it is unhelpful to say plus composes functions. However, that is indeed what it does in the abstract world that Haskell pretends to be modelling - numbers (Church numerals) are functions and the arithmetic operators combine those functions. So, while unhelpful, it is not wrong, and saying it isn't so will confuse beginners that have done those "numbers from lambdas" exercises.
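If you haven't seen those exercises, the gist is only a few lines of Haskell (RankNTypes is just so the numerals can be given a name):

{-# LANGUAGE RankNTypes #-}
-- Church numerals: a number n is the function that applies s n times.
type Church = forall a. (a -> a) -> a -> a

zero, one, two :: Church
zero _ z = z
one  s z = s z
two  s z = s (s z)

-- "Plus" really is built by combining those functions:
add :: Church -> Church -> Church
add m n s z = m s (n s z)

toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (add one two))  -- 3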
We currently do not have plans to release this code, mainly because it is very specific to our system. Having said that, we are working on another large Haskell project, and we do have plans to open source a central component of it.
And the structure looks a bit like you might want to use a withSubscription function instead of subscribe/unsubscribe pairs. Same for start/stopRoundtripTimer.
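Something along these lines, for instance (subscribe/unsubscribe here are made-up IO stand-ins for whatever the real test actions are):

import Control.Exception (bracket)

-- Hypothetical stand-ins for the test DSL's actions:
subscribe :: IO ()
subscribe = putStrLn "subscribed"

unsubscribe :: IO ()
unsubscribe = putStrLn "unsubscribed"

-- Pairs every subscribe with an unsubscribe, even if the body throws.
withSubscription :: IO a -> IO a
withSubscription body = bracket subscribe (const unsubscribe) (const body)

main :: IO ()
main = withSubscription (putStrLn "asserting things about webhooks")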
I know nothing about the Pusher protocol, but the standard approach to boilerplate encoding/decoding problems these days is to use GHC.Generics.
It doesn't always fit (typically when your data structures don't resemble the wire encoding), but it kills all the boilerplate when it does.
Since you say that you can magically get away without boilerplate in Ruby in this case, I would expect that the Generics approach will give exactly the same result.
{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson (ToJSON, FromJSON)
import Data.Text (Text)
import GHC.Generics

-- a generic wrapper type for all Pusher events
data Event a = Event {
    eventType :: Text,
    eventData :: a
  } deriving (Generic)

instance ToJSON a => ToJSON (Event a)
instance FromJSON a => FromJSON (Event a)

data Error = Error {
    message :: Text,
    code :: Integer
  } deriving (Generic)

instance ToJSON Error
instance FromJSON Error
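With those instances in scope, usage is just aeson's encode and decode. For instance (a made-up event, with OverloadedStrings on for the Text literals):

ghci> :set -XOverloadedStrings
ghci> encode (Event "pusher:error" (Error "Invalid signature" 4001))

which gives back a JSON object whose keys mirror the record field names ("eventType", "eventData", "message", "code"). When those names don't match the wire format, genericToJSON/genericParseJSON with a fieldLabelModifier in the Options usually covers the renaming.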
I don't deny that the test could be written in much the same way in Ruby too; the key point of the post was that in Haskell you get flexibility comparable to a highly dynamic language, while also getting a lot of safety guarantees.
As for the previous tests, they were probably a product of many small changes rather than design. There was a reason we were rewriting them.