Reflecting on Haskell in 2015 (stephendiehl.com)
191 points by lelf on Dec 18, 2015 | 102 comments



I really enjoyed this. Just wanted to comment on the author's brief comments on Elm.

> As of yet, there is no rich polymorphism or higher-kinded types. As such a whole family of the usual functional constructions (monoids, functors, applicatives, monads) are inexpressible.

I completely understand that this makes Elm quite limited compared to Haskell as the author uses it, but these limitations are precisely what make Elm accessible to newcomers. I think it's a great way to get people into Haskell syntax, like partial application, and into the functional way of thinking in general.

I have learned Haskell very gradually over the past 6 years, and I still find advanced Haskell with all the language extensions very daunting to get into.


For learning functional programming, I find Racket to be the best teaching language. I think I took 4 tries at teaching myself Haskell, and then learning Racket really helped me get over Haskell's learning curve, though I still consider myself a beginner.


I've tried Haskell 4 times. No joke. I have no problem with currying, higher-order functions, foldl, etc., but Monads. Get. Me. Every. Time.

My brain just refuses to fully "grok" them. I still don't see why they're so awesome. I know I use them day-to-day - LINQ, the "List" abstraction (supposedly also a monad??) but I just don't see why it's important to understand them on this whole new fundamentally different level.

It's like - loops. I use loops every day. But if someone were to say - "Hey, did you know that loops are just the Andifuncsplursx abstraction applied on Crazofors?? This is why Crazofors in Haskell are so awesome" I still wouldn't "get" Crazofors or why I should care enough to attempt to "get" them.

I've just given up at this point.


Like Erik Meijer said in his course: there is nothing special or magic about monads. They don't deserve all the fuss around them. If you don't get why monads are so awesome, maybe that's proof that you understood them. Because there is nothing special!

A monad is just a type with 2 functions defined, like an interface with two methods in OO languages. The 2 functions have to respect some laws, but you can imagine whatever implementation for the 2 functions as long as the laws are observed (and the type signatures, of course).

You could invent a totally different implementation for the list monad, for the maybe monad, etc. (if you respect the laws). There is no hidden ultra-powerful meaning which implies only one implementation.
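
For reference, here is the core of Haskell's Monad class with those two functions (leaving out the historical extras like >> and fail):

    class Monad m where
      return :: a -> m a
      (>>=)  :: m a -> (a -> m b) -> m b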

The best paper on monads I ever read is an ASCII-art one, by Graham Hutton: http://www.cs.nott.ac.uk/~pszgmh/monads


> You could invent a totally different implementation for the list monad, for the maybe monad, etc. (if you respect the laws).

Could you show such a different implementation for either list or maybe?


    newtype HeadList a = HeadList { getHeadList :: [a] }

    instance Monad HeadList where 
         return a = HeadList [a]
         m >>= f = HeadList $ fmap (head . getHeadList . f) (getHeadList m) 
This is a list instance that only keeps the head of each function result, so it's basically just a map.

But you could imagine plugging in any function that returns one result: min, max, avg, normalize, etc.


    -- const [] for a HeadList
    quux :: a -> HeadList a
    quux = HeadList . (const [])

    *Main> HeadList "hello... no wait" >>= quux
    HeadList {getHeadList = "*** Exception: Prelude.head: empty list
Such breakage is verboten. This monad instance does not fly.


It's just the name. Whatever a monad is, it must be worth the name.

(Yes, I know the history of the term.)


Here's what finally got me to understand monads, including some of the context around them:

1. Why is it important that a List is a monad?

A: It's not particularly important. It's really just pointing out that monad is a very general abstraction - it won't tell you anything you don't already know about Lists.

On the other side, lists are kind of trivial examples of monads - they didn't really help me understand monads either.

It's like saying 1 is a real number - true, but it won't help you understand real numbers.

2. Why should I use monads?

A: I like to think of monads as things you can use in a for-comprehension (or the do notation in Haskell). If you can imagine writing something like

  for { x <- thing; y <- thing } yield x + y
for "thing", then "thing" might be a monad (assuming all the math laws work out - sometimes they don't). In Scala, for-comprehensions are literally de-sugared into maps/flatMaps/filters, so for-comprehension without filter <=> monad.

3. But a monad is just a monoid in the category of endofunctors?

A: There is deep category theory and math behind this stuff. It can be useful to talk about it, but when you are starting out it's overkill. Don't worry about "getting" the really abstract crap at first, just skim right over it. Programming with monads is a lot easier than the theory, and the theory can be learned after getting your hands dirty. Using monads is mostly just for-comprehensions.

ps:

"monoid" = there is a zero, and an add operation. like integers with +, or integers with *).

"functor" = thing that has a map() operation.

"endo" = self

"category" = kind of like a set. its a container.

So a monad has a "zero" or unit (a way to wrap a plain value), a way to combine monadic values, and a map operation that returns another monadic value.

e.g. List -> zero/unit = wrap a value in a singleton list, add = flatten/concatenate nested lists, map = the "normal" map with f applied to each element

Future -> zero/unit = an already-completed Future, add = do future2 after future1, map = make a Future with f applied to the value


What helped me grok monads was to understand that monads are just a subset of functors. Then I did what I could to fully understand what functors were, since at that point they were still a kind of cargo-cult black box for me.

    -- in GHCi, with liftA from Control.Applicative and liftM from Control.Monad:
    let liftF = fmap
    liftF negate (Just 1)   -- Just (-1)
    liftA negate (Just 1)   -- Just (-1)
    liftM negate (Just 1)   -- Just (-1)
Realizing these all performed the same operation (if operating on a monadic value), and that `fmap` is just a `liftF` ("lift a regular function to work on a functor"), cleared some things up for me.

This also helped: https://en.wikibooks.org/wiki/Haskell/Applicative_functors#A...

So all these abstractions really just let you use functions that normally don't take functor arguments. They effectively let the function accept functor arguments (kind of), saving a lot of boilerplate within your functions - typically boilerplate that would otherwise sit at the start of every function.
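
A tiny Maybe sketch of the boilerplate being saved (function names made up for the example):

    -- the per-function boilerplate the comment describes, written out:
    addTenVerbose :: Maybe Int -> Maybe Int
    addTenVerbose Nothing  = Nothing
    addTenVerbose (Just x) = Just (x + 10)

    -- letting the Functor instance absorb that boilerplate instead:
    addTen :: Maybe Int -> Maybe Int
    addTen = fmap (+ 10)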

Once someone really gets the idea of what a functor is and why they can be useful, I think the rest is easy to understand once you read the definition of monads and applicatives.


In my opinion, the only way to understand monads is to use them. They'll be a little weird at first, but the type system will guarantee you're using them in the right way. The thing is that you don't really need to understand how a given monad works under the hood; you just need to know how it's going to act when you use it. For example, I couldn't write the State monad instance right now, not without quite a bit of puzzling anyway. But I use the State monad all the time, because I know how it's going to act:

    -- needs Control.Monad.State (from the mtl package)
    addToState :: Int -> State Int ()
    addToState number = do
      currentState <- get
      put $ number + currentState
How is this working exactly? Well who cares. I know that once this function is called, my state will have been incremented by the given amount.


> How is this working exactly? Well who cares. I know that once this function is called, my state will have been incremented by the given amount.

Precisely. Then once you've used Haskell in many projects and gotten a handle on using it, you can figure out the relationships/laws and use it to even greater effect.


I guess the problem is the focus on the rules, mechanics or syntax. It's like trying to understand roundness by having someone explain formulas involving Pi.

Just forget the math, look at all the squares, triangles and circles. When you start to notice the sameness between ovals and circles and how they aren't squares you start to get the notion.

When it comes to monads, it's just the mechanics behind why some things are intuitively composable. You don't need the mechanics if you just get a feeling for it.

In LINQ this is the feeling that you can stack a bunch of from x in xs from y in ys from ... together and get sensible results without too much thinking. As long as the xs and ys implement SelectMany as a monad, it doesn't matter what it does - be it transforming collections, building SQL queries or scheduling async tasks - the safe feeling will be there.

If you really want to do the math, it might help to identify the level of abstraction we are talking about. For this I found it easier to reach for abstract algebra. Look up the concept of a monoid and how it relates to your everyday arithmetic; it's roughly the same abstraction leap as the difference between your code and the concept of monads.

Now, monads really are just monoids for a particular kind of binary operation and values. The problem is that understanding monoids in the context of multiplication and addition is easy: you already have a good grasp of both the arithmetic and the algebra there. But the compositions that monads describe (Kleisli arrows) are probably not something you think about most of the time. Which is kind of like trying to understand abstract algebra without a good grasp of algebra.
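
To make the Kleisli-arrow monoid concrete, here's a small sketch (half is just an invented example arrow):

    import Control.Monad ((>=>))

    -- a Kleisli arrow for Maybe: halve even numbers, fail on odd ones
    half :: Int -> Maybe Int
    half n = if even n then Just (n `div` 2) else Nothing

    -- the monoid-like structure: (>=>) is the associative operation
    -- and return is its identity:
    --   (half >=> half) >=> half  ==  half >=> (half >=> half)
    --   return >=> half  ==  half  ==  half >=> return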


I had to read Real World Haskell's chapter on monads (http://book.realworldhaskell.org/read/monads.html) twice to figure it out.

It's essentially a hidden argument that's passed around between the environment (the stuff in "do ...") and the actions (the stuff called in "do ...", e.g., in "x <- foo", foo would be the action). The argument is always the same type, and each action can create a new one based off the old one. So you have IO, which is essentially "all interactions with the outside non-pure world", and the versions of it are "the world before I did this action" and "the world after I did this action".

Stuff that doesn't use the hidden arg doesn't have the action type (e.g., IO ()), so it has to be lifted, which essentially passes the old world state verbatim to the next step.
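
Here's a minimal hand-rolled sketch of that hidden-argument idea (simplified names, not the real library types):

    -- a stateful computation is just a function from the old state
    -- to a result paired with the new state
    newtype St s a = St { runSt :: s -> (a, s) }

    -- "bind" threads the hidden state from one action into the next
    bindSt :: St s a -> (a -> St s b) -> St s b
    bindSt m f = St $ \s ->
      let (a, s') = runSt m s
      in  runSt (f a) s'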


> My brain just refuses to fully "grok" them. I still don't see why they're so awesome. I know I use them day-to-day - LINQ, the "List" abstraction (supposedly also a monad??) but I just don't see why it's important to understand them on this whole new fundamentally different level.

Why do you want to "grok" them? Just use them. In fact I'd say there isn't much more to grokking them than just using them.


"Why do you want to 'grok' them? Just use them."

You remind me of my first ground school instructor. I'd asked her why it was necessary to use rudder in a turn, and she waved her hand dismissively. "Just step on the ball" she recited, referring to the turn-and-bank indicator. No thank you, I'd rather know what's keeping my plane stable, so I found the answer elsewhere: Langewiesche's awesome Stick and Rudder, still relevant 70 years after publication.


I'm very much in favour of people digging down into the fundamentals of what they're learning, if they want to. I think PopsiclePete, on the other hand, would be perfectly happy with your ground school instructor, and there's nothing wrong with that.


That's a little different, I think. What are the potential repercussions of not intimately understanding monads and the monad laws in Haskell?

Much less than not understanding ruddering into a turn, I'd guess, though I don't know what it is. What do you think?


Some pilots are just drivers - I'd certainly rather ride with the real pilots, given the choice!

While it's tempting to characterize some programmers as just coders, that sounds pejorative and wouldn't be very charitable of me. So instead I'll distinguish programmers as tool makers and tool users. I know which one of those I'd put my faith in too, all things equal.

No offense intended... at all! I don't grok monads either, but in my world (mainframe stuff) Haskell doesn't register. If I used monads though I'd surely be driven to understand what's going on under the hood.

Forgive me another tangential OT story, an anecdote I read in a magazine many years ago. A man was spending a Saturday afternoon puttering around in his back yard while the family's hound slept on the back porch. The man called the dog, who then roused and put his nose to the ground, retracing all the steps his master had taken that afternoon until he finally reached the man. "The dog didn't give a damn about coming to me" the man growled, "He just wanted to know how I got here."

Hee! That's me.


> I've just given up at this point.

Given up trying to understand Monads, or given up Haskell? I'm still confused by many monads, after years of Haskell programming; it hasn't stopped me getting stuff done.


It took me several years to grok monads. I even learned some category theory to try to get there (it didn't really help, but I do think it was useful for some other almost-programming-related things).

The nice thing about monads with respect to Haskell programming is that you don't have to know why they work to get their benefit. However, if you wanted to build your own, you probably do need to grok them.

However, I'm not convinced that monads are the best way forward for programming on the whole. The immutable/mutable tracking, affine types, and borrow checker in Rust seem like they would help prevent a lot of the same types of bugs that pure FP does, but without the cost of needing a monadic effect system (and Rust is a more familiar style of programming than the State monad with a host of monad transformers). I'm hoping for optional affine types and more immutable data structures to become the norm more than I am monad usage.


"Monad" is an abstraction. What kind of abstraction? An algebraic abstraction.

That means really understanding the concept is much like understanding other algebraic abstractions.

The most basic concept in classic 19th century abstract algebra is "group." Just like the monad concept, this concept involves a set of values that can be combined with a few carefully chosen operations.

Just like with the monad concept, the group concept doesn't lend itself to immediate grokking. So a lot of people get frustrated by abstract algebra. They feel like someone just isn't telling them what a group "actually is."

But group is an abstraction over concrete "implementations". It is a common base for many algebraic topics, like integer arithmetic, modular arithmetic, matrix arithmetic, polynomial arithmetic, and even more complex structures. If you are familiar with the theory that applies to groups in general, you have access to proofs and formulations that can be applied to many different topics. Sometimes that generality is useless, sometimes it is very productive and succinct.

What the groups have in common are a binary operator that's associative (like plus or times), an identity element (like zero or one), and a way of taking inverses. This is all codified as group axioms. If you just look at those axioms you might say "so what?", but the concept is born from actual mathematical practice and is significantly useful and interesting.

Monad is an abstraction over different computational topics: I/O computations, randomized computations, failing computations, and so on. It captures in an elegant and abstract way the operations and elements required to express these topics. General functions can be written polymorphically over all monads, just like theorems and computations can be written to work for all groups.
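
For instance (a small sketch), a function written once against the monad interface works for every instance, much like a theorem proved once for all groups:

    -- works for IO, Maybe, lists, State, ... anything with a Monad instance
    pair :: Monad m => m a -> m b -> m (a, b)
    pair ma mb = ma >>= \a -> mb >>= \b -> return (a, b)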

So for a monad, you need return, which lifts a base element into the monadic class of values. (This notion of having a base element and a lifted set, for example Int and Maybe Int, is itself a basic abstraction that monad builds on, namely the functor abstraction, whose only operation is fmap, an abstraction of list mapping.) And you need bind, which is some way of combining one monadic value with a function producing another monadic value.

Those operations need to work together in reasonable ways specified formally by the monad axioms or monad laws.

Again, you can look at all that definition stuff and say "So what?" But again, the concept makes sense, it is useful, and it is born from abstracting over concrete topics. (Moggi wrote the first paper about the usefulness of the monad concept in computer science; the concept originally came from category theory, which is kind of like abstract algebra.)

The do notation is a good example of the usefulness of having an abstract type class for monads. It gives you syntactic sugar that works in a well defined way across many many topics.

Abstract algebra doesn't make any sense if you don't know how to work with plus and minus. Monads don't make sense if you're not comfortable with implementing pure combinators for simple computational structures.

So you should find some way to practice some of those basics, and then the abstraction "monad" will have meaning and not just look like a random assemblage of made-up rules.

Look at an example of using do notation with Maybe types to express failure. It's not that amazing, but it's useful enough, and makes sense. Now learn to implement the same thing from scratch without the syntactic sugar and without the monad functions. You will first write the whole thing with explicit pattern matching, tediously. Then you will implement the crucial combinator that lets you "bind" one Maybe value to another computation returning a new Maybe value. Then you can look at the source code for the Maybe instance of Monad and see that it is just the combinator you have written.
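
Concretely, the combinator that exercise produces looks something like this:

    -- chain a Maybe value into a function that may itself fail
    andThen :: Maybe a -> (a -> Maybe b) -> Maybe b
    andThen Nothing  _ = Nothing
    andThen (Just x) f = f x

    -- which is exactly (>>=) from the Maybe instance of Monad:
    --   Nothing >>= _ = Nothing
    --   Just x  >>= f = f x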

Then you can study the State monad in a similar way.

And then you can notice how the general monad functions are useful for both of those topics, failure and statefulness.

The concept monad in the context of category theory is even more abstract, but there's no need to worry about that level of abstraction merely to learn Haskell programming.

The reason monads are a big deal in Haskell is that the computational structures they conveniently express happen to be those which are otherwise described with "imperative" language features: mutation, jumps, and side effects.

So if you're interested in expressing those computations in a pure way, you should be a bit curious about the monad concept. If you're not interested, that's fine, but it's close to Haskell's reason for existing, so that's a more basic question: is it interesting to write programs in a pure way? If you say no, you're right to give up on Haskell.


What an excellent linking of groups to monads, I really enjoyed reading this! Thanks for writing it up.


Nice to hear! I've been thinking vaguely about it for a while as a way to explain monads in a wider perspective. Now that someone other than myself seems to find it clarifying, I'll try to work it into a short text to put up.


Please do, I would totally read something like that.


What things/path did you attempt while trying to learn Haskell?

Also, I don't think you need to understand crazofors, or even monads in full generality, unless you're a hardcore library implementor. What people blog about and what's required to write an app are very different things.


This [1] might help 'get it' with regards to monads. Very easy to digest.

You mention LINQ and the List abstraction. Yes, IEnumerable<T> is a monad (LINQ isn't in itself - it [the grammar] is the equivalent of 'do' notation in Haskell). Monads are simply 'wrapper types' that follow a couple of rules:

1. You must be able to construct one from the un-wrapped value (return in Haskell, new List<T>(...) in C#)

2. It must implement the bind function.

If you understand 'map' (or Select in LINQ), then bind is very similar, except the function you pass in returns a wrapped value itself, and bind takes care of re-wrapping (flattening) the result.

So in C# parlance (because I assume from your comment you use this daily):

    // note: extension methods must be static, inside a static class
    static IEnumerable<R> Map<T, R>(this IEnumerable<T> self, Func<T, R> mapper)
    {
        foreach (var x in self)
        {
            yield return mapper(x);
        }
    }


    static IEnumerable<R> Bind<T, R>(this IEnumerable<T> self, Func<T, IEnumerable<R>> binder)
    {
        foreach (var x in self)
        {
            foreach (var y in binder(x))
            {
                yield return y;
            }
        }
    }
You may recognise that as SelectMany in LINQ. Select and SelectMany are special case function names in C# that allow the formation of LINQ expressions:

    var res = from x in list
              select x + 1;
Equates to:

    var res = list.Select( x => x + 1);
And:

    var res = from x in list1
              from y in list2
              select x + y;
Equates to:

    var res = list1.SelectMany(x => list2.Select(y => x + y));
The result is a 'flattened' IEnumerable<T> - and that's why this is also known as flat-map.

It's been a while since I've done any Haskell, so forgive me if I get some of the syntax wrong here. But the LINQ statement above translates very similarly:

    do x <- list1
       y <- list2
       return (x + y)
(I know there's a list comprehension syntax in Haskell, but I assume this is still right, Haskellers?)

The syntax is saying 'get the value out of its wrapper (the list) for me, so I can use it as-is (the add operation), then put the result back in a new wrapper'.

So why do we do all of this? It's so you can write functions once that operate on the wrapped types, whether they wrap integers, strings or any other type. The monad concept allows you to create a 'chain' of behaviour that from the outside is opaque, but internally operates on the raw wrapped values as-is; and that makes all of the existing functions available. This is one of the key benefits.

A good analogy is to call them 'programmable semi-colons', because what makes each monad type what it is, is the behaviour of bind and return. So a list monad (as seen above) knows how to iterate the collection, so you don't have to write a for loop every time; an option monad knows not to invoke the bind function when it's in a None state, so you don't have to test it yourself with an 'if' every time; the state monad knows how to propagate state changes, so you don't have to add extra context arguments to every function down the hierarchy; etc.

They remove boilerplate and capture common patterns. That is the other key benefit. At its core is an abstract idea, and I think that's why it's sometimes quite hard to grasp; but it's just a design pattern. Forget the category theory side of it; you don't need to know any of that to use them or write new ones.

If you're more used to C# than Haskell, then you may want to check out my 'C# functional language extensions' library [2] that has C# implementations of the following monads:

Option

Either

State

Reader

Writer

And a few others, but that should be enough to get started. It may help you get your head around it in a language you're more familiar with?

[1] http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...

[2] https://github.com/louthy/language-ext


Thanks for taking the time to write all that up. I think (too early to say) that seeing all this expressed in C#/Java(script)/Ruby/Python syntax is key for someone like me.

I learned LINQ/Select(Many) and later all the map/filter/reduce functional goodness by playing with Clojure and never had a problem and never heard the word "monad" and was fine, totally fine.

Later watching a video on Rx (MS's reactive extensions) and hearing Erik Meijer talk about monads and Haskell and how Rx was inspired by that is what led me to try to learn Haskell. I just didn't see the "connection" between the functional and reactive patterns I was applying daily without any kind of mental problem and the Haskell stuff which was supposedly "the same", yet so....abstract?

I need to read your sample code and the links more carefully, but maybe this is the "key" I was lacking. Thanks !!!!


My pleasure. I guess it might help a bit more (because the list monad tends to be easier to comprehend) to see how other types are implemented. So here's a very basic implementation of the Option/Maybe monad:

    public class Option<T>
    {
        public readonly bool HasValue;
        public readonly T Value;

        internal Option(bool hasValue, T value)
        {
            HasValue = hasValue;
            Value = value;
        }

        public Option<U> Select<U>(Func<T, U> map) =>
            HasValue
                ? Option.Some<U>(map(Value))
                : Option.None<U>();

        public Option<V> SelectMany<U,V>(Func<T, Option<U>> bind, Func<T,U,V> project) =>
            HasValue
                ? bind(Value).Select(u => project(Value,u))
                : Option.None<V>();
    }

    public static class Option
    {
        public static Option<T> Some<T>(T value) =>
            new Option<T>(true, value);

        public static Option<T> None<T>() =>
            new Option<T>(false, default(T));
    }
The SelectMany implementation is slightly more complicated than I showed before. This is an optimisation that C# does to group the bind and map together. So it may look slightly scary as a function. But hopefully you can see that if the Option<T> has a value then it first invokes bind, then uses the result of the bind (an Option<U>) to project the final result. Here's a more imperative version of it:

        public Option<V> SelectMany<U, V>(Func<T, Option<U>> bind, Func<T, U, V> project)
        {
            if (HasValue)
            {
                var u = bind(Value);

                if (u.HasValue)
                {
                    return Option.Some(project(Value, u.Value));
                }
                else
                {
                    return Option.None<V>();
                }
            }
            else
            {
                return Option.None<V>();
            }
        }
You can see that the IEnumerable<T> versions of Select and SelectMany encapsulate list iteration. The Option monad doesn't do that; it instead checks the HasValue field, and if it's false then it doesn't run the map or bind functions.

The second static class, Option, contains the 'return' functions: Some and None. These wrap a value of type T in an Option<T>.

Now if we use Option<T> in a LINQ expression:

    var option1 = Option.Some(10);
    var option2 = Option.Some(10);
    var none    = Option.None<int>();

    var res1 = from x in option1
               from y in option2
               select x + y;

    // res1.HasValue == true  res1.Value == 20 

    var res2 = from x in option1
               from y in none
               select x + y;

    // res2.HasValue == false

    var res3 = from x in none
               from y in option2
               select x + y;

    // res3.HasValue == false
This is the same as using do notation in Haskell:

    do x <- option1
       y <- option2
       return (x + y)
If we were to do that imperatively it would look like this:

    var res = Option.None<int>();
    if( option1.HasValue )
    {
        if( option2.HasValue )
        {
            res = Option.Some(option1.Value + option2.Value);
        }
    }
Clearly more cluttered and error-prone and, importantly, not composable. This is where the notion of 'programmable semi-colons' comes from: it appears that the monad is running behaviour 'between the lines', and it is.

Hopefully that clears the fog. I'll keep an eye on this thread for a few days, so feel free to drop any questions in here or on my project page.


Monads are not unique to Haskell. Read the source for webdriver.io and you'll see that monads exist in JS, too.

At that point, you might say "but this thing they're calling a monad in JS is just an object." Well, the things we call monads in Haskell are just instances of a type class.

Now, a more fundamental question is "why do we care about types so much," and I don't have a pat answer for that one. The suggestions to try out Lisps for comparison make sense, though.


Stop trying to "get" them and use some specific monads. A monad is nothing but something that satisfies 3 laws.

This is what helped me most.


If you understand currying, you've got monads pretty much. They curry in a hidden thing.
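
One way to cash that claim out is the Reader monad, sketched by hand here (simplified, not the real library definition):

    -- a Reader is a function still waiting for one hidden argument
    newtype Reader r a = Reader { runReader :: r -> a }

    -- bind curries the same hidden argument into both sides
    bindR :: Reader r a -> (a -> Reader r b) -> Reader r b
    bindR m f = Reader $ \r -> runReader (f (runReader m r)) r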


Racket is really underrated for some reason. I rarely see it mentioned even though not only is it friendly to beginners (both programming beginners and FP beginners) but has a decent ecosystem.


I played with Racket and used to lurk on their mailing lists for many years! I haven't seen a friendlier community than the Racket community anywhere else.

DrRacket is a great environment to learn programming and play with various concepts, both for beginners and advanced users alike.


I very much agree. It's really sad that such a high-quality language is held under the radar. Maybe poor marketing? The community is pretty good though.


[deleted]


Yikes, anyone coming from a non-FP background to Elm is very much going to learn new ways of programming.

Elm is quite different from most front-end UI programming, and I think your last sentence does it a great disservice.


I consider myself a functional programmer, and I also have an interest in logic and type theory, and have learned enough Haskell to write some student-level projects in it.

But when I try to dip my toe back into the Haskell community, a wave of despair washes over me. There's just too much. Haskell is changing too fast. There are too many language extensions, and more and more conceptually sophisticated features keep getting added to it. Every project uses a different set of GHC extensions.

I feel that without more standardization or stability, Haskell can never be more than a playground for PL researchers who can afford to spend all their time learning and implementing advanced language features.


I don't really feel that way. In fact, I think that there's just so much going on in Haskell that it kind of frees me from having to understand it all. Being comfortable with fundamentals of purely-functional programming (basically ADTs and use of first-class functions e.g. with maps/filters/folds), and the essential types and type classes (Functor, Applicative, Monad, State, etc) give one enough of a grounding as to be able to write pretty much any program one could want. Sure, I could try to learn lenses, or the parallelism libraries, or software transactional memory, or DataKinds, or the infinitude of Ed Kmett Prosemicofunctor type classes... or not. If it turns out I need those, or I think they'd be particularly useful, I can learn them, but I don't need to. I'll never know everything there is to know, so I don't feel an obligation to.


Part of the problem is a relative lack of material covering practical development in Haskell, compared to material explaining/exploring fancy type features (i.e. Prosemicofunctor type classes), and those that do blog about more mundane issues don't get voted up or aren't as visible in the community, since it's not interesting to those that have already passed that level.


This matches my experience.

At the company I work at, we use Haskell for HTTP services. The software stack we've grown is extremely opinionated about how to do things. We provide just one way to do things like report errors, run queries, do work asynchronously, and so forth.

We get people up and running with about the same amount of effort as we spend getting people going on our PHP, and I think a big part of the reason why is because we're able to constrain the education effort to the slice of Haskell that we want them to read and write.

For instance, we accept the fact that we have new teammates who haven't yet truly understood what "do" blocks are all about. These people are nevertheless able to write perfectly serviceable, production-ready code. They've learned that they can chain imperative actions together in a particular way, and that's enough for a while. Deep understanding comes later.


Sounds very similar to approaches we've taken with C++ with a team that largely didn't have C++ experience (~15 devs, 2-3 C++ "experts", and I use the term loosely). Also similar to the approach Dropbox said they use with their cross-platform C++ code, from a talk a while back.

In cases like this I feel like convention does a good job of keeping a codebase from sprawling out into using every feature of the language.


Two questions - first, did you reply to the wrong comment? Seems like you might've meant to reply to thinkpad20's comment, which is about not needing to know everything to be productive.

Second - where do you work?


I'm at IMVU. We're hiring! :) http://www.imvu.com/jobs/index/

I intended to react to the comment "But when I try to dip my toe back into the Haskell community, a wave of despair washes over me. There's just too much. Haskell is changing too fast."

I think this is true. There are too many ways to do everything in Haskell and they're changing too often. If you don't get very specific guidance, you're going to be totally paralyzed with indecision. I have good evidence that you can do very well by confining the solution space.


Nice! How long have you been there? It'd be interesting to hear how things have gone running Haskell in prod (and hiring, and such) since your guys' blog post in March of 2014.

Gotcha. It also doesn't help that many of the canonical learning materials (e.g. LYAH, or Real World Haskell) are slowly drifting out of date.


For me all the extensions and abstractions induce a kind of choice paralysis, and fear that my program isn't abstracted far enough. I don't feel that when working in C, Java or Ruby.


Abstraction, like optimization, is nice when it serves a purpose. If it doesn't serve a purpose, you're probably just making your program harder to understand for no reason. You can see this in Java too, with proliferating class hierarchies introducing needless concepts. Haskell is also a research language and there are people working on abstractions for the sake of it, much like mathematicians sometimes do. If that's not your interest, just write programs that work and don't worry about being abstract for the sake of abstraction!


> ...fear that my program isn't abstracted far enough...

This. I feel exactly the same. I read about various concepts like Free Monads; I go find papers and read them. But when I sit down to write code, my code is a blob doing IO. I can use State monads, StateT etc. But the result is not the elegant programs that I see in books and blog posts.

I am also very pragmatic. I want to build tools that implement an idea I have. I want to build as soon as possible to try these ideas out. But when programming in Haskell, I find myself code golfing and lost in the means rather than the ends.


That captures my feelings really well. And then I think that if I'm going to stick to the "meat and potatoes" of functional programming, I might as well just use a language in the ML family.


Except controlled side effects are a godsend, even for meat and potatoes.


Yes, indeed. Even with the 101-level Haskell I write, it feels so right. I know exactly where the side effects are, and the semantics of the program are so clear.


That's why Go succeeds too.


Are you really talking about GHC extensions? I think it's quite possible you'd feel that way even if the Haskell library state of the art had advanced in a strictly Haskell-98 world.


I don't know if I'd entirely agree. I feel like the way we package up JavaScript deps changes all the time (gulp seems to have won the battle for now?), and Scala's build system wasn't even really usable until maybe 2 years ago. It's an issue, but I think that having a usable IDE and devops toolchain/integration are bigger issues.


Stack is absolutely wonderful. The stack setup command works wonders, building is effortless, moving to new library versions is as easy as ever, and ever since I started using it I've been writing more and more code in Haskell.

Tooling is absolutely great, the Atom plugins are very fine - really, it's very enjoyable, and it all happened in a single year.


I like Haskell, but I see no future in terms of widespread adoption for it. It's simply too alien for your typical CS grad, and for some projects one would like to use it for (let's say security-critical ones), laziness (and non-predictable behavior in terms of memory use and performance) is a big problem.

We do however see some of its features being slowly introduced to other languages which is nice, I guess.


Haskell's learning curve is certainly steeper than that of other popular general-purpose languages. But my view on that is best explained with an analogy: calculus is certainly harder to learn than arithmetic approximations, but once learned, the complexity of calculus is factored out of all the problems suited for solving with it. Haskell is excellently suited for solving extremely complex problems, which is why it's so desirable for concurrency and parallelism tasks.

On your second point, while it seems nice to have Haskell features slowly percolate into other languages, I urge caution in thinking that's a panacea for not learning more complex languages. Haskell is greater than the sum of all its features. Its power comes from the combination of everything working together in a very well thought-out way.

So I don't necessarily disagree with your tone, just some additional commentary from someone who knows C++, Python, Java, and Haskell equally well (and perhaps equally bad ;) ).


> Haskell is excellently suited for solving extremely complex problems

I doubt that seriously. Could you write a complete Mars landing project in Haskell for instance? That would impress me. That's the area where Ada excels, or (less impressive) C and FramaC. But Haskell? No way.

AFAIK there is not one serious large project written in Haskell. There is not even one sophisticated Haskell IDE for daily use written in Haskell (excuse me, Leksah). Tell me if I am wrong, and if there actually are really big Haskell projects. Please don't list compilers; they are not really complex.

The only advantage which Haskell actually has in comparison to other modern languages is also its greatest disadvantage. The overall rock-solid type safety of the whole application makes it very difficult to add unpredicted functionality which requires a significant change in many modules (where on the other hand the type system helps a lot). I am very impressed by the efforts and the implementations of Haskell, but the simple fact that the Cabal hell - for the simple job of a basic package manager - lasted for years reveals that Haskell is not that practical after all.

I invested a lot of time learning Haskell to use it seriously, but after I switched to Nim (nim-lang.org) I realized how easily I can get real productivity.


"Could you write a complete Mars landing project in Haskell for instance? That would impress me. That's the area where Ada excels, or (less impressive) C and FramaC. But Haskell? No way."

Depending on what you mean by "in Haskell", probably. You can write hard-realtime software in an embedded DSL in Haskell that gives you explicit control over timings, scheduling, etc, while still giving you Haskell's higher order features for composing your programs.

https://hackage.haskell.org/package/atom

As I understand, the company that wrote it uses it in production in automotive systems.


> Haskell is excellently suited for solving extremely complex problems and why it's so desirable for concurrency and parallelism tasks.

What is an example of an extremely complex problem that you have solved that would be vastly harder to implement in another language?

The most powerful thing that I have noticed is the type system. When you have a complex problem, you end up breaking it up by writing a lot of functions. In a language with a less powerful type system, it is hard to string them all together to build the complete program.

But with Haskell, if you have cared to actually annotate the types, this becomes a really trivial task. It also detects flaws in your reasoning (when the result of a previous step is incompatible with the input of the current step).

So, doing this would be vastly more difficult in a language like Python, or even in a language with a less powerful type system, like C.
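
As a toy sketch of that stringing-together point (all function names invented for the example):

    -- each stage is annotated, so the compiler checks the stages fit
    parse :: String -> [Int]
    parse = map read . lines

    total :: [Int] -> Int
    total = sum

    report :: Int -> String
    report n = "total: " ++ show n

    -- a type error here would mean a flaw in the reasoning
    pipeline :: String -> String
    pipeline = report . total . parse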


> Haskell is excellently suited for solving extremely complex problems and why it's so desirable for concurrency and parallelism tasks.

As is the JVM (and most languages running on it). Not only that, but the JVM actually has a proven track record of being good and performant at it. And you can solve these problems with pretty much the language of your choice.

What makes you think Haskell is well suited for it? Immutability? This is available in a lot of languages (and for languages where it's not enforced, simple programmer discipline and code reviews address it).


Maybe the difference is that we all learned calculus in classes from experts, while the only way for most people to learn Haskell is to pick it up themselves.


> It's simply too alien for your typical CS grad

In my university, Programming 101 (mandatory) was (and is) Haskell. For context, there is typically a mix of experience among the class - some will have programmed seriously before, but others will have only studied math.

I had already studied Java, but I found Haskell a delightful re-introduction (it was my first exposure to functional programming) and I see its value as an educational tool.


Dijkstra wrote an amusing little note about why it's a good idea to teach Haskell in CS 101. http://www.cs.utexas.edu/users/EWD/OtherDocs/To%20the%20Budg...


As a first language, I think Haskell is too safe. It forbids you from doing too many things that the computer can do. I recently talked to someone who was learning Haskell as a first language, and it was a surreal experience. It was obvious that he wasn't having fun in the sandbox, but he couldn't articulate it because he'd never been outside the sandbox.

My own first language was QBasic and I'm very happy about it. If I'd been made to suffer through parsers instead of putting colorful circles on the screen, I would think of programming as hard work, and that probably wouldn't lead to a very good career.


> Haskell is too safe. It forbids you from doing too many things that the computer can do.

Computers are to computing science what telescopes are to astronomy, or something. Haskell is a great language for expressing computations, the thing CS is about. It's not meant to flip bits and observe processor states and page faults, if that is your idea of fun. That's more computer engineering stuff.

But Haskell is absolutely capable of drawing pretty circles on the screen in just a couple of lines of code![1]

[1]: https://hackage.haskell.org/package/gloss-1.9.4.1/docs/Graph...


> Haskell is a great language for expressing computations, the thing CS is about.

I think Haskell is good at expressing a narrow range of ideas that, honestly, aren't all that fruitful outside the FP field. There are three main reasons why it's not a great language for expressing arbitrary algorithms:

1) It uses the pointer model instead of the integer RAM model. That leads to extra logarithmic factors.

2) Immutability. That's hypothesized to also cause extra logarithmic factors, but AFAIK that's still an open problem.

3) Non-strict evaluation. That wreaks havoc with space complexity, and compositional analysis of performance in general.

Yes, you can add epicycles to remedy these drawbacks (arrays, ST, strictness annotations). But I'd rather use a C-like language in the first place. That's closer to the "core" of CS as I understand it, and that's how most algorithm research is done.


All three are problems with performance on a von Neumann machine. So there are three main reasons why Haskell is not a great language for expressing high-performance computation algorithms. Since CS is a branch of maths, performance isn't that big a deal; the important thing is that the result is computable. The performance may be an interesting property, but so is line count and purity and ease of writing proofs and doing analysis on the code!

Speaking more pragmatically, outside of CS study, most of the programs I write do not require me to write high-performance algorithms - they tend to be very I/O bound, and call out to external libraries/services for the data crunching. When I need performance (because of course I do at times), there is an amazing library that lets me write C code in Haskell[1]. The fact that some sections of my program are performance-critical doesn't mean I have to write the entire program in C. I can write just the performance-critical parts in C.

[1]: https://github.com/fpco/inline-c/blob/master/README.md


Um, CS is totally about performance. Frigging P vs NP is about performance. Sure, there's a part that deals with computability and the halting problem, but I'm not sure Haskell is especially useful for studying that either! It can't even express the notion of "dovetailing over all possible programs" very easily :-)

Out of curiosity, what kind of CS do you mostly work with? If it falls under the umbrella of Curry-Howard, then sure, I agree with you. But CS has tons of other areas like graphics, crypto, AI... The impact of Curry-Howard related ideas on these areas has been disappointingly small, IMO.


> In my university, Programming 101 (mandatory) was (and is) Haskell.

I don't think it's a good idea because it might discourage a lot of students from the get go and they might simply drop out.

It's well established today that the language you start with has little impact on your mastery of the field years later (most of us started with BASIC I bet), so you might as well start with an easy language that will get people hooked.

If the students are curious and talented, they will move on to more complex things with time.


> In my university, Programming 101 (mandatory) was (and is) Haskell

I think this is true at a lot of universities. I would have thought if you get a random CS grad they're much more likely to have Haskell experience than Go, Ruby, Clojure, etc.


From what I've heard, MIT does the same thing with CS101, except in Scheme. Starting the curriculum that way seems like a good idea (in the long run, since internships and whatnot all want Java or C++/C#), between leveling the playing field and providing a foundation on a lot of the "math" that CS is based on.


I had a course in College that was taught in Scheme. It wasn't the "intro to computer science" class, but it was mandatory and at the Sophomore level.

That said, Scheme never managed to feel like anything other than a toy language to me, and it felt like we were being fed problems that happened to map neatly into the structure of the language. Write an RPN calculator! Write a tree parser! Stuff that really isn't that much harder (but admittedly more verbose) in imperative languages.

I couldn't help but think "Sure, these problems are easy enough, but how would I blit pixels with this language? How would I process TCP/IP packets? What would a database interface look like? How am I supposed to do error handling?"

In the end I had no desire to integrate Scheme into my day to day programming.


Since 2007, the intro curriculum uses Python, not Scheme.


> It's simply too alien for your typical CS grad

I think it's more too alien for your typical mid- to late-career developer who hasn't seen a CS classroom for a decade or more, or your typical successful non-CS-degree-holding autodidact developer who's done well with a narrow range of industrially popular languages, than it is too alien for your typical CS grad.


Haskell is already gaining widespread adoption in a variety of fields that require the levels of correctness it provides (hint: compilers, DSLs to correctly specify and process complicated contracts worth millions) or where Haskell-style code more directly maps onto low-level machine concepts (hint: not x86).

Haskell will probably not take over the world of web programming. It currently is gaining widespread adoption in a variety of interesting places, however.


I don't think laziness is as much of an issue as the GC. You can't write a Haskell library and then link it into arbitrary programs safely.

If you want to write a new libssl or libjpeg, you need to use C, C++, or Rust (or a few others, of course).



I don't think "fighting spam" is really what the GP was talking about.

A common "best practice" with cryptographic material is to hold it as short a time as possible (um, but garbage collection makes that hard), and zero it before releasing it (but immutability gets in the way). That means that, in a language like Haskell, you're going to leak crypto material into the free memory pool unless you break the language paradigm to prevent it. Worse, you have to remember to break the language paradigm everywhere you need to.

In C++, you'd just wipe it with zeroes in the destructor, and use RAII to make sure it's freed when you don't need it any more. In Java, you have to explicitly overwrite it on your way out of the block (maybe with a "finally" clause). But in Haskell, you can't overwrite it (yeah, you actually can, but still...)


In non-realtime Java this is nearly impossible. The best way is to use a specifically allocated off-heap byte buffer and zero it once done. But GC/deallocation and JIT compilation will fight your zeroing attempts, and it can easily break existing code :(


This is just wrong. You can totally use managed mutable buffers that you overwrite once you're done. It's probably easier than in Java.


When you say "security critical" are you thinking specifically of stuff like crypto timing attacks? I think aside from those sorts of concerns haskell is an excellent choice for security-critical code.


"It's simply too alien for your typical CS grad"

That's just a question of teaching it to more CS students then. I don't think it's outside of the reach of the average programmer.

And yes you probably wouldn't use it to build real-time applications. But you probably wouldn't use Python to write an operating system. No language fits all use-cases.


> I like Haskell but I see no future in terms of widespread adoption for it.

The aim isn't necessarily widespread adoption, so much as the ability for people who know it to use it professionally. It's way past merely "production ready", but reputation is a lagging indicator, and so most programmers are stuck doing Java...

Since the goal is to avoid {success at all costs}, getting 25% market share is probably not possible. 1 percent would be a start.

The problem, economically speaking, is that Haskell and Clojure engineers are massively underpaid, when you consider that an average Haskeller would be Principal+ at any Java shop. That's because the Java and C++ people can create bidding wars every year and spike their salaries, whereas using a better but more niche language makes that career strategy untenable.

Haskell's laziness isn't as much of an issue as people make it out to be. You can have strictness any time you need it, so it's more akin to an opt-out model than laziness being forced upon the programmer. It is correct to observe that high-performance applications frequently have a lot of strictness annotations, and the argument can be made that high-performance Haskell (while it can be close to C in performance) isn't "idiomatic"... although that claim is true of all high-level languages (even Java). High-performance Clojure ends up being full of '^' characters for type annotations, and high-performance Haskell ends up being full of '!' characters for strictness.
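
For readers who haven't seen them, those '!' characters look roughly like this (a sketch using strict fields and bang patterns):

    {-# LANGUAGE BangPatterns #-}

    -- strict fields: components are evaluated when a Vec is constructed
    data Vec = Vec !Double !Double

    -- the bang on the accumulator keeps it evaluated, avoiding a
    -- build-up of unevaluated thunks
    sumStrict :: [Double] -> Double
    sumStrict = go 0
      where
        go !acc []     = acc
        go !acc (x:xs) = go (acc + x) xs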


> The problem, economically speaking, is that Haskell and Clojure engineers are massively underpaid, when you consider that an average Haskeller would be Principal+ at any Java shop. That's because the Java and C++ people can create bidding wars every year and spike their salaries, whereas using a better but more niche language makes that career strategy untenable.

I haven't found the same to be true for other languages, but in my experience the average Haskeller is so caught up in academic exercises, technical one-upmanship, and code-golf shenanigans that I'm surprised they find any time to contribute any business value at all. Having a discussion about acceptable levels of technical debt with a Haskeller is like having a discussion about acceptable levels of meat products in your diet with a vegan in a world where there are no plant foods... "everything is technical debt, and none of it is acceptable, and we absolutely can't move on until we change all relevant functors to applicatives and all nested record accesses to lenses!"

And it is squarely a cultural issue, not a technical one... it is quite obvious that Haskell is a fantastically powerful language, capable of mowing over most enterprise-y languages with ease for a very wide variety of domains. It's just that Haskellers are so nitpicky about elegance and style that they don't know how to let shit be shit and get something done when it is needed.

This obviously isn't the case for all Haskellers. Looking at JGM's GitHub feels like looking at Dumbledore's magic through the eyes of a Muggle. It just feels that way with the average Haskeller that I work with.


I don't know who the average Haskeller is, but your description doesn't fit any of the engineers I worked with at Silk, who were all very pragmatic and got stuff done quickly.


Great points. I can attest to the fact that low-latency Java code looks nothing like idiomatic Java. Ironically, the techniques for writing low-latency Java are the same in Haskell (avoiding object creation, boxing, garbage collection, etc.). High-performance C and C++ are a league of their own, with direct control over memory layout and alignment, etc. Haskell has some amazing work on that front as well, though: http://www.doc.ic.ac.uk/~dorchard/publ/icfp-2013-simd-vector...


> high-performance Haskell ends up being full of '!' characters for strictness.

The upcoming GHC 8 provides the Strict extension, which makes everything strict by default on a per-module level. That should greatly reduce the number of »!« characters.
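
So instead of sprinkling bangs everywhere, a module will be able to opt in once; a sketch:

    {-# LANGUAGE Strict #-}
    -- bindings in this module are now strict by default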


I learned Haskell at the university level and really enjoyed it (and found it very easy to pick up), but I find it puzzling where I should use it. It's very easy to say this problem requires a scripting language, or this problem is better suited for an object-oriented language, but I don't quite understand what problems would be easier with a functional language. At least in terms of problems that I want solved.


> ...this problem requires a scripting language...

Cool, we have that in Haskell these days![1] Except, of course, it has all the good high-level stuff of Haskell, which reduces boilerplate and increases correctness.

[1]: https://hackage.haskell.org/package/turtle-1.2.3/docs/Turtle...


Nice to know. Maybe using it more often would lead to thinking about things functionally and finding problems that are better solved functionally.


When comparing with scripting languages, I think it's fair to talk about how Haskell increases correctness, but I don't think you'll convince people coming from those languages it reduces a lot of boilerplate. (On the other hand, you can definitely sell Haskell as a boilerplate reducer to people coming from languages like Java, say.)


Generally speaking, most programs could be written in either Haskell or an imperative language without much trouble (assuming you're comfortable with the language), though some problems are better suited to Haskell than others.

The way I see it, if you want to extend your language to be able to express solutions to your problem in terms that are close to your problem domain, then Haskell is probably a good choice.

If you would rather transform your problem into constructs that are easily understood in terms of the low-level operations performed by a computer, then you might be better off with a low-level imperative language.

To put it another way, if your requirements include things like deterministic real-time scheduling, strict performance or memory requirements, anything involving assembly language or cachelines, then you should probably use something like C or C++ or Rust.

On the other hand, if you don't care about any of that and you just want your program to compute the right answer in a reasonable amount of time and you want to have high confidence that it's right, then Haskell is a pretty good choice.


My experience is that typed functional programming is excellent for any application that can afford to run a garbage collector and needs to be able to change continuously forever.


Traditionally speaking, one of the bigger arguments for functional languages is "embarrassingly high parallelism", because they tend to simplify many of the issues that writing concurrent code in imperative languages can bring about.


Speaking as somebody who has dabbled with a lot of programming languages, I believe that Haskell is kind of an endpoint. It doesn't get better than this. Going back (to other languages) is harder than going further (improving on Haskell).

But pure Haskell might be too much for a beginner. For me, as a hobbyist, it was. Then I discovered a real-world Haskell in Elm, which (for now) is 'married' to JS. Later, if I feel the need, I can always 'upgrade'. IMHO Elm is for Haskell what Meteor is for JavaScript. Even without support from purists, they conquered their market share. This irritated a lot of people.


"Call-By-Push-Value

On the topic of evaluation methods there is interesting research opportunity into exploring an alternative evaluation methods for the lambda calculus under the so-called “Call-By-Push-Value” method. Surprisingly this technique has been around for 11 years but lacks an equivalent of SPJ’s seminal “stock hardware implementation” paper describing an end to end translation from CBPV to x86. This would be an interesting student project."

Any idea which SPJ paper he is referencing?


He's probably referring to SPJ's work on the spineless tagless G-machine [1].

In truth, though, I think trying to incorporate Call-By-Push-Value into Haskell would radically change the language, unless there's some research in type theory that allows the implicit Call-By-Need nature of Haskell to be extended into a form that can accommodate CBPV, which itself explicates a kind of duality between Call-By-Name and Call-By-Value.

If advancements have been made here, I'd love to hear about it.

[1] http://research.microsoft.com/apps/pubs/default.aspx?id=6708...


I've been intrigued and frustrated by Haskell on and off for years now. But I think the open sourcing of Swift has, at least for now, iced my interest in Haskell. Certainly they are not directly interchangeable, but I think Swift brings a lot of the benefits of an FP language like Haskell or OCaml but is eminently more practical and applicable. Or at least it will be as the OS community behind it gains speed.

So Haskell is still on my radar but, for now, off my list of things I intend to learn.


    distance :: { x :: Number, y :: Number } -> Number
    distance point = sqrt(point.x^2 + point.y^2)

The function application of sqrt shouldn't have parens.

And the type of distance should be

    distance :: forall r. { x :: Number, y :: Number | r} -> Number

Otherwise, it's not row-polymorphic and you wouldn't be able to apply distance to a record of type Point3D.


> Avoiding success at all costs takes time...

Uh?


"Avoid success at all costs" is haskell's unofficial motto (coined by SPJ), so this is a little joke.


Originally spoken as "Avoid (success at all costs)", i.e. don't sacrifice your principles just to get your 15 seconds on the stage. Now sometimes used as a joke in the alternate meaning.



