Haskell for Web Developers (stephendiehl.com)
128 points by rwosync on July 26, 2013 | 163 comments



You might not need a deep understanding of category theory to write useful programs in Haskell, but the downsides of relying on those explicit monads to control state and interactions still show through in practical code.

Here’s an early example from the article:

    import Control.Monad.State

    type Interact a = StateT Int IO a

    repl :: Interact ()
    repl = forever $ do
      val   <- liftIO readLn
      state <- get
      liftIO $ putStr "Total: "
      liftIO $ print (val + state)
      put val

    main = runStateT repl 0
Here’s something with similar behaviour written in Python, an imperative language with a reputation for being easy to read:

    total = 0
    while True:
        total += input()
        print "Total:", total
The difference is striking. The latter is obviously much more concise. Ironically, it also seems clearer about how the state is used, having the initialization, reading and writing of that state all kept together. It’s worth noting that the stateful logic is relatively forgiving here, too, because there’s only a single value that needs to be maintained via StateT in the Haskell version.

The Python code has the added advantage of actually being correct, assuming that the intended behaviour really is to add up the total of all entered numbers over time as the output suggests.

I think this example is instructive in terms of advancing industrial practice rather than the theoretical state of the art. Haskell demonstrates a lot of nice features that mainstream industrial languages lack today, but as soon as any sort of state or effects enter the picture, it becomes absurdly complicated to describe even simple systems. I would love to have future languages incorporate some of the safety that Haskell offers, but I think to catch on with working programmers they will have to hide away a lot of the state/effect details where they aren’t needed, just as tools like type inference have hidden away their own form of boilerplate. It’s not just about not needing to be an expert on category theory, it’s about not having to be aware of it at all.


The syntax in Haskell for dealing with side effects is tedious/verbose by design. The intention is to push you toward writing pure programs, since pure functions and their compositions are easier to test, verify and reason about.

Good Haskell programs are pure; side effects are usually isolated in a "shell" wrapping the purely functional core of your program that interacts with the world (IO, networking, graphics, etc).


> The syntax in Haskell for dealing with side effects is tedious/verbose by design.

I don’t think Haskell’s designers have ever had the goal of making imperative programming more difficult. Simon Peyton Jones himself, as far back as thirteen years ago, wrote that “Haskell is the world’s finest imperative programming language.” [1]

If you want to do imperative programming in Haskell, it’s easy: Just code everything within the IO monad. It looks just like imperative code in most other languages, only you get to use Haskell’s light syntax and advanced runtime (with great concurrency abstractions, modern GC, and lightweight threads), so it is indeed a pretty fine imperative experience.

But Haskell also gives you another option for imperative programming, one that mainstream imperative programming languages do not: the option to provably wall off the small amount of impure logic in your code from all the rest. And once you’ve tried this option for a while, you’ll probably never go back to doing it any other way, at least for non-trivial projects.

Keeping the main bulk of your code pure makes life so much easier that it’s worth paying the small additional cost to wall off the impure bits. Yes, you must centrifuge off the impurities. Yes, you must create a protocol wall to safely communicate with those impurities. And, yes, these costs are real. But they are bounded by a small constant effort factor and applied to only a small portion of your overall code. In exchange for this small cost, you get to keep the rest of your code (most of it) pure and collect the purity dividends for the project’s lifetime. For most non-trivial projects the dividends add up to a huge payoff.
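For a tiny sketch of that shape (the names here are made up for illustration): the pure core is an ordinary function you can test in isolation, and the shell is a thin layer of IO around it.

    import Data.Char (toUpper)

    -- Pure core: ordinary, easily testable logic.
    shout :: String -> String
    shout s = map toUpper s ++ "!"

    -- Thin impure shell: the only code that touches the outside world.
    main :: IO ()
    main = do
      line <- getLine
      putStrLn (shout line)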

So you can use Haskell as a regular imperative programming language, and it's a pretty good experience. But if you use Haskell as Haskell, you can have an even better experience.

[1] http://research.microsoft.com/en-us/um/people/simonpj/papers...


The syntax in Haskell for dealing with side effects is tedious/verbose by design.

In the entire history of computer programming, I’m not sure language designers taking that kind of attitude has ever worked.

Good Haskell programs are pure; side effects are usually isolated in a "shell"

The real world is stateful, and many useful programming techniques rely on mutable state. Indeed, programs with no externally observable effect would be rather uninteresting, since by definition they can’t do anything useful!

Please don’t misunderstand my position here. I’m strongly in favour of programming languages providing tools to control that state and avoid unexpected effects (or combinations of effects, or ordering of effects). This would prevent many common classes of programmer error, surely improving the quality of production software significantly.

However, I don’t accept that trying to completely separate the two worlds is necessarily the best way to achieve that benefit. I think adopting the hard line you describe is one of the reasons that functional programming languages haven’t taken off in the industry as a whole, even though several useful tools found in those languages have been widely adopted in recent years.


"functional programming languages haven’t taken off in the industry as a whole"

You mean, except for Erlang?


If Erlang is an example of "taken off"...

I am currently using Erlang on a project because it was a good choice, but I would not exactly consider it popular. Popular is stuff like Javascript, Python, Ruby - even Go.

I think it's indicative that in an example of how simple Haskell is supposed to be, you find this several paragraphs in:

> This is an example of a monad transformer, namely the composition of StateT ( State Transformer ) with the IO Monad. The values from the IO monad are lifted into the State monad with the liftM function.

That makes me curious - I'd like to figure out what all that means, exactly, what the concepts are, what they're useful for, and so on.

But I don't know if I'm a "typical web developer" either.


Monad transformers are one of the trickiest parts of using monads, and they're certainly not well explained in the example - I'm afraid the linked article does an awful job of introducing monads to web developers, since most of the basic terms used are never properly defined.

From what I've learned about transformers (although I haven't used them in practical projects), they are used to combine several monads, wrapping a single value in several layers of monadic types so that the appropriate monad can handle the inner value when the occasion arises. But don't take my word for that; understanding transformers *does* require knowing a good deal of category theory.


Using monad transformers doesn't require any category theory, just Haskell and an understanding of Haskell's use of monads (which are inspired by category theory but aren't going to help you much with proofs). It might help in writing them - I've not written any that weren't toy examples, so I couldn't say.

The idea behind monad transformers is simply that they wrap a monad and give back a monad (which could in principle be wrapped again).

So you can have, for instance,

    StateT Int (ReaderT String IO)
You would have a monad that carries a modifiable integer, an unmodifiable string, and which sequences IO actions.

There is a function called lift which wraps a monadic value in a transformer. So for the above, for the unfamiliar, ReaderT provides

    ask :: Monad m => ReaderT r m r
which here would be specialized to

    ask :: ReaderT String IO String

If I were working in that monad in do notation, I could do something like:

    str <- ask
and get back the string the reader provides in str. If I were working in our entire stack, though, ask isn't the same type - so I could instead do something like:

    str <- lift ask
to turn a ReaderT String IO String into a StateT Int (ReaderT String IO) String.

So if I had:

    flip runReaderT "foo" $ flip runStateT 7 $ do
        str <- lift ask
        modify (\ x -> x + length str)
        i <- get
        lift $ lift $ putStrLn $ str ++ show i
we could look at the type of the entire do notation expression. It must have a StateT Int on the outside, because the first thing we do with it is runStateT; it must then have a ReaderT String because the next thing is unwrapping a ReaderT. Running through it:

        str <- lift ask
grabs a String ("foo") out of an ask that's been lifted into our stack from a ReaderT String IO String

        modify (\ x -> x + length str)
At the top level, updates the state in our StateT with the result of applying the function to the current value. In this case, 7 + 3 = 10, so the StateT on the outside now holds 10.

        i <- get
grabs that 10 from the StateT and calls it i

        lift $ lift $ putStrLn $ str ++ show i
Appends the string we grabbed ("foo") and the stringification of i ("10"); this is passed to putStrLn, which gives us a value of type IO () that, when evaluated, will print "foo10". IO is not the monad we're working in, though, so we lift it into ReaderT and then lift the ReaderT into StateT, to be ultimately unwrapped. The whole expression here has type IO ((), Int) (runStateT also returns the final state), and could serve as a definition for main if you want to play around with it.

I should admit that I lied a little bit. The lifts are actually not necessary, because of some typeclass voodoo that I think makes it slightly harder to understand coming in but usually easier to use (unless you need to nest two transformers of the same type...). What's done is that get and ask and such aren't defined just for the one type but for anything that implements MonadState or MonadReader (respectively, as it happens). Then, there is an instance defined for "any monad transformer wrapping anything that implements MonadState", &c, which obviously propagates up the stack, so most of the time you can leave out explicit calls to lift.
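To make that concrete, here is a minimal, self-contained version of the stack above (a sketch, assuming the mtl package), where the typeclass instances mean ask, modify and get need no explicit lift and only the IO action is lifted:

    import Control.Monad.Reader
    import Control.Monad.State

    -- Same StateT Int (ReaderT String IO) stack as above; prints "foo10".
    main :: IO ()
    main = void $ flip runReaderT "foo" $ flip runStateT (7 :: Int) $ do
      str <- ask                        -- MonadReader finds the ReaderT layer
      modify (\x -> x + length str)     -- MonadState finds the StateT layer
      i <- get
      liftIO $ putStrLn (str ++ show i)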


In real life (not contrived examples) haskell greatly simplifies dealing with state. Isolating IO makes it much simpler to reason about where exceptions can come from.


In real life (not contrived examples) haskell greatly simplifies dealing with state.

That is the assumption I’m questioning.

For example, let’s consider quicksort. Hopefully we can agree that this is a substantial and practically useful algorithm, and one that fundamentally relies on mutable state for its performance and scalability.

I recently posted a link here to a discussion on haskell.org[1] about implementing a real quicksort. This was contrasted with the usual elegant but not-really-quicksort version that has propped up a million functional programming advocacy posts, and with the original C version[2].
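For readers who haven't seen it, this is roughly the short list-based version being alluded to (a sketch of the elegant definition, not the in-place algorithm under discussion):

    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (x:xs) = qsort [a | a <- xs, a < x] ++ [x] ++ qsort [a | a <- xs, a >= x]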

Neither Haskell version seems particularly simple to me when compared to the C, and I see little extra safety in return for all the Haskell syntactic overhead that couldn’t have been had just as well with a couple of "const" annotations in numerous imperative languages.

Obviously not every case is as self-contained as this algorithm and in other contexts Haskell’s approach would fare better. But a lot of real world code is stateful, and a lot of it uses this kind of local state, and so I contend that making everything explicit, Haskell-style, does not greatly simplify dealing with state in general.

[1] http://www.haskell.org/haskellwiki/Introduction/Direct_Trans...

[2] http://www.haskell.org/haskellwiki/Introduction#Quicksort_in...


So for me the real life tends to be much larger applications. I tend to worry less about the efficiency of something like quicksort in haskell. If I need very high performance from something I tend to use C anyhow. This tends to be very small and isolated blocks of numeric code. Otherwise network latencies tend to overwhelm performance speedups.

All that aside, if you think about the state of an application globally and don't worry about using state in small areas like quicksort, then there are big wins to be had. When building scalable services, STM makes it much easier to perform state updates and removes the need to reason about locks. Now this does require a big change in how applications are reasoned about and designed.
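As a minimal sketch of the STM point (assuming GHC's stm package; the names here are made up): two threads bump a shared counter atomically, with no locks to reason about.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM
    import Control.Monad (replicateM_)

    main :: IO ()
    main = do
      counter <- newTVarIO (0 :: Int)
      let bump = atomically $ modifyTVar' counter (+ 1)
      _ <- forkIO (replicateM_ 1000 bump)
      replicateM_ 1000 bump
      threadDelay 100000            -- crude: give the forked thread time to finish
      print =<< readTVarIO counter  -- 2000, assuming the forked thread finished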

So, having been involved in many things, I've come to think the world contains too much state. Much of the world could be more stateless. I write a lot less very stateful code than I used to. And the code that I do write tends to be more correct. The ability to easily mutate state is now something that scares me.

Also when used correctly haskell doesn't provide a little safety, it provides a lot of safety.


I do agree with you on the general principle of not using state arbitrarily. As I see it, this is an example of the still more general principle that code shouldn’t depend on anything without good reason. The more dependencies are involved, the harder it is to maintain or reuse code, to run automated tests against it in isolation or review it formally to check the logic, and so on.

However, where I sometimes have a slightly different view to some functional programming enthusiasts is that I don’t see state as some sort of inherent evil. My interest is in writing code that is correct, maintainable and efficient. Writing pure functions is one possible technique to help reach that goal, but to me it is only a means to an end. In particular, it is only one case of the more general principle that stateful resources and effects acting on them should be specified clearly enough that bad things provably can’t happen, but they don’t need to be emphasized or restricted any more than that, particularly if it means compromising on other areas such as readability.


I don't think it was an assumption; I think Steve was speaking from experience. As someone working extensively in C and doing a fair bit of Haskell, I'm finding my C to be more verbose and less safe. The C runs faster, in my application; sometimes I write it faster and other times slower, depending on a bunch of factors.


That’s fine. I’m just saying that my experience differs, and trying to demonstrate why.


What experience do you have building large real-world applications in Haskell? I'm just curious.


Sure, no worries, word choice just bugged me a bit.


Then I apologise; I didn’t mean anything by it. I can’t edit the post any more, but please read something like “position I’m challenging” instead.


Quite alright.


>The Python code has the added advantage of actually being correct, assuming that the intended behaviour really is to add up the total of all entered numbers over time as the output suggests.

Only in a single-threaded environment. In a concurrent environment, however, the Haskell code stays correct while your Python code has bugs.

Extrapolate a bit, and think about how many python functions may be slightly less trivial than yours and yet still have the same concurrency issues...


Contrary to the hype, I think plenty of useful code does still run in a single-threaded environment, so I find your argument unconvincing.

That said, even in an environment with concurrency, I’m not advocating uncontrolled state changes as typified by current imperative programming languages. I’m arguing that we could benefit from the robust theory underlying languages like Haskell, but used more transparently so the developer only has to worry about it when it matters.

For example, you could be running a second thread in parallel with the one we’re looking at here, but if its effects are constrained to replying to a network-supplied request for the current time and reading the system’s clock, it doesn’t matter at all that our interactive adder is running at the same time. These effects affect different, independent resources and will never interact, so there is no benefit to cluttering either code path with extra details.

What I’d really like to see is a language that could understand those resources and how they are affected by each thread in terms of reads and updates. I want it to stop me from modifying the code later so that it accidentally becomes vulnerable to interactions that I’m not fully controlling. I want it to provide me with tools to specify how potential interactions should be handled, in a way that is systematic and robust but otherwise as simple as possible, and that can be applied at whatever level of my design makes the most sense.

Research into type and effect systems is addressing challenges like these, but I’m hoping that when the dust settles, the final version will look a lot more like Python with a few strategically placed annotations than Haskell where these implementation details are in your face all the time (even if most of the time you don’t actually need to worry about them).


>I want it to stop me from modifying the code later so that it accidentally becomes vulnerable to interactions that I’m not fully controlling. I want it to provide me with tools to specify how potential interactions should be handled, in a way that is systematic and robust but otherwise as simple as possible, and that can be applied at whatever level of my design makes the most sense.

That's...the state monad. The compiler tells you when you try to access it in a way that is not allowed.

You have to be explicit with sharing.


That's...the state monad. The compiler tells you when you try to access it in a way that is not allowed.

But the price for that is that I always have to access it in a cumbersome way, even when being so explicit makes no useful difference to the correctness of my code.

You have to be explicit with sharing.

I don’t want to be explicit with sharing.

I want to be explicit with sharing when it matters, and otherwise not to worry about it, safe in the knowledge that the language will warn me if I try to do something later that invalidates any assumptions I’m making now.


If your argument is that it's better to be explicit with sharing, then why would you want to use threads that implicitly share 'total'?


I'm intrigued by the claim that the Haskell code is correct in a concurrent environment. If I run two copies of this function concurrently, won't they step on each other's toes as they fight over the file offset of stdin?


You mean the Haskell code stays equally incorrect in a concurrent environment...


What is incorrect about it?


The thread OP assumed that the code was supposed to accumulate the amounts ever entered. The Haskell code doesn't accumulate: it adds your input to the prior state, prints that sum, then stores your input, not the sum, as the next state.

Of course, that's just a (likely true) assumption of what the behavior was supposed to be and it extrapolates far from the point. The example didn't really intend to show off a particular behavior but instead the interweaving of effects.


The example didn't really intend to show off a particular behavior but instead the interweaving of effects.

Indeed. My point was not to nitpick an interesting article. I was trying to illustrate how making so many details explicit in Haskell has a real, practical downside: what seems to be an obvious logic error had slipped through, where it surely would have been spotted easily in the simpler Python code. Moreover, no-one had even mentioned this possible bug after a couple of hours and dozens of posts here, though of course that could just be because no-one else felt this point was important enough to make rather than because they hadn’t noticed.


FWIW, I tend to prefer using modify rather than put wherever that makes sense, which would make this particular bug harder to express.


Yeah. I didn't follow your point since I didn't feel like this was a terrifically idiomatic example. I felt like the bug was equally easy to catch in either expression.


No offense but you failed miserably. If you were trying to reproduce the python code in Haskell, the example is not what you would write. This is a 1 liner in Haskell. The two examples do not represent the same thing.


The two examples do not represent the same thing.

This is a claim I don’t understand. They are both simple examples of representing an algorithm that maintains state in their respective languages. Of course you could write it as a one-liner using a different style in Haskell, and in various imperative languages for that matter. However, my point was that making these monadic abstractions explicit everywhere in Haskell does add weight to the code. That continues to be true for more complicated stateful code that can’t be rewritten as a one-liner or expressed more elegantly using recursion anyway.


I simply do not agree that the abstractions add weight to the code. If anything I think that they make it more concise. It scales much better than constructs given to you in Python. But don't take my word for it, try writing a complicated Haskell program with lots of effects. I think you will find the capability Haskell provides to compose and abstract a huge boon.


But don't take my word for it, try writing a complicated Haskell program with lots of effects.

Unfortunately it’s late and I don’t have time to spend the next few weeks writing Haskell code to try your experiment. I’m guessing no-one else reading this does either. Perhaps you could instead direct us all to some existing projects that demonstrate the advantages you’re describing, so we could see the kind of code you’re thinking of and make up our own minds?


The thing is I could throw a bunch of Haskell at you but in the end it wouldn't really mean anything. When I embarked on learning the language there were many times I almost threw in the towel. It was hard to see beyond the initial frustrations. Haskell code looked confusing, verbose, ugly, and so on. It was only by taking the time to go through this process that I came to an understanding - followed by appreciation.

But I don't claim to be incredibly intelligent. It is very possible you or someone else here could look at a heap of Haskell code and just immediately understand the advantages/disadvantages without having gone through learning all of the "why".

Your time is yours. Don't let me try to tell you what to do with it. Heck you can make a fortune just writing PHP so is there any reason to learn anything else? I'd say anyone who loves their craft would want to at least give all new things a shot and invest some time into learning what is out there.

The project I took on initially to learn Haskell was translating a real time sparse 3D reconstruction computer vision algorithm. It was incredibly daunting and challenging. But in the end very worth it!


The thing is I could throw a bunch of Haskell at you but in the end it wouldn't really mean anything.

It is odd that you seem to be assuming I’ve never written anything in Haskell and that the concerns I’ve expressed are based on a sceptical reading of one article. I’m quite sure I’ve never written anything to that effect here.

I make no claim to special expertise or authority. Indeed, I’m almost entirely self-taught, so it’s certainly possible that I have completely missed some key point. However, I’ve been following Haskell’s development for years and written enough small/medium projects in it to have formed my own opinions. My experience has been that for mostly pure code where it’s a natural fit, Haskell has a lot to like, but for more interactive systems, it often does become rather awkward for exactly the reasons we’ve been discussing. (Edit: There’s a good example of this in the article: the chat room using WebSockets.)

In the real world, people do sometimes just dump half of their program into IO to be done with it. In the real world, you do wind up repeating essentially the same logic multiple times to cope with monadic and non-monadic cases (or the authors of libraries you use did, but sometimes you are that author) and at times you do need things like explicit lifts. In the real world, sometimes you do want to be in the same monad twice. In the real world, there are awkward corner cases in the language, and you do wind up relying on GHC extensions to help clean them up. In the real world, people do write elegant code like the popular not-quicksort example despite its dubious performance characteristics.

None of this takes away from the power and usefulness of Haskell’s type system, or neat ideas like STM if concurrency is relevant to your application. But this isn’t some kind of either/or world, where having some good points automatically means a language can’t have any bad ones.


It is made explicit and verbose in that contrived example because it is specifically an example of those things. I am having a really hard time believing that you genuinely don't understand this obvious fact.


Of course I understand that this is a contrived example.

What I don’t accept — without evidence to the contrary — is that it is unrepresentative. Would more “serious” examples of implementing stateful algorithms using monads and similar tools not be similarly verbose, given the explicit nature of such details in Haskell? Maybe that really is true for experts, in which case I would be happy to learn about it and update my views. Examples are welcome.

For now, I have never personally seen it, and it’s not immediately obvious to me how Haskell code would beat a dedicated imperative language at its own game in terms of conciseness. Correctness guarantees, sure. Performance, maybe, if we reach the point where powerful optimisations are possible in Haskell because of the extra information available. And for conciseness, I’m not suggesting it’s orders of magnitude worse or anything silly like that. But when, by design, your effects and sequencing have to be singled out and specified explicitly every time, you’re starting at a disadvantage compared to systems that incorporate that behaviour implicitly.

Please also see my comments on quicksort elsewhere in this thread.


"Total: " implies that it's meant to be summing the input values. It's actually only summing the current and the previous, because it puts back val not (val + state).


Why are you having multiple threads share 'total' at all?


Frankly the two aren't equivalent in behavior. So of course the python version is simpler, it does less. The Haskell code equivalent to your python version is almost as trivial.

    repl :: Int -> IO ()
    repl state = do
      val <- readLn
      putStrLn ("Total: " ++ show (val + state))
      repl (val + state)

    main = repl 0


So of course the python version is simpler, it does less.

What does the Haskell version do that the Python doesn’t? In terms of externally observable behaviour they both prompt for inputs and then add them up. In terms of design they’re both infinite loops that maintain one item of state.

I understand that the example was written to show the State monad, and obviously the same externally observable behaviour could be achieved in other ways in this simple case. My point is that in imperative languages you get that State behaviour for free, and unlike the simplified alternatives to the Haskell that rely on recursion instead, such as the one you posted, the imperative state-for-free model scales up when you start to get lots of state and mutation going on. It doesn’t start to suffer in performance metrics like speed or memory usage, nor does it start to have unpredictable performance characteristics in complex systems.

As I’ve stated elsewhere in this thread, this argument is not meant to belittle Haskell or dismiss the clear advantages its model offers. I’m arguing that there is a price to making those things as explicit as Haskell does, and that price is probably too high for mainstream industrial acceptance, so if we’re going to reap the benefits of the sound foundations on an industry-wide scale, they need to be available more transparently.


As you say, the Haskell version explicitly controls state; your Python version doesn't. If the Python version had access to the variables in question under locks with as much security as the Haskell version, then they would be doing the same amount of work. Sure, they don't do anything observably different, but it is a daft example with no driving motivation for why someone would write code that way.


The imperative one scales alright, into an un-maintainable nightmare of bugs.

The problem is that the imperative model places the entire burden in the programmer's head (even when it may not even be code you wrote!) whereas the Haskell version makes everything explicit.


His example is intentionally contrived to show a minimal example of StateT. This is how you'd really do it in Haskell:

for strings:

    main = repl ""

    repl :: String -> IO ()
    repl a = getLine >>= \c -> putStrLn (a++c) >> repl (a++c)
and it's not that much harder for Ints

    main = repl 0

    repl :: Int -> IO ()
    repl a = fmap read getLine >>= \b -> print (a+b) >> repl (a+b)
or using do notation

    repl a = do
       b <- getLine
       let sum = a + read b
       print sum
       repl sum
which is still concise and easy to read.


One could also do something like

    getContents >>= mapM_ print . drop 1 . scanl (+) 0 . map read . lines


The `StateT` example in the article is unfortunate, as it reinforces the misguided notion that handling state in Haskell requires complicated machinery, such as monad transformers. This is simply false. You can use a mutable reference, like this:

    import Control.Monad
    import Data.IORef

    main = do
      ref <- newIORef 0
      forever $ do
        val <- readLn
        modifyIORef ref (+ val)
        total <- readIORef ref
        putStrLn ("Total: " ++ show total)
Haskell's libraries haven't been designed to make this use case as concise as possible, but nothing prevents you from introducing more succinct notation. To illustrate this point: In Haskell, the `forever` control flow operator is not a language keyword, but a function you could write yourself. In Python, could you write `print` yourself?
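For instance, a hand-rolled version might look something like this (a sketch; the real Control.Monad.forever is written in much the same spirit):

    -- An ordinary recursive function: run the action, then start over.
    myForever :: Monad m => m a -> m b
    myForever act = act >> myForever act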


>In Python, could you write `print` yourself?

In Python 3, where print is a function? Yes.

In Python 2, not with exactly the same syntax, but you could certainly write a function to provide the same features.


A better example would have been `if`.


I've been working with Clojure, and at least for me, it offers the freedom to work with both pure and impure code with relative ease, whilst still being conservative about state manipulation.


I've been coding in Scala for the past couple years, and it is basically exactly what you describe in your last paragraph.


Scala is one of those frustrating languages for me.

A lot of the basic syntax and concepts are very much as I’d wish to do them myself. For example, I wrote a toy compiler years ago that not only used the var/val distinction but even used exactly the same terminology. I’ve seen a few videos of Martin Odersky, and he’s one of those presenters where I find myself nodding along with almost everything.

Sadly, I think Scala suffers from a similar heritage problem to C++. The good ideas underlying it always seem slightly held back by the historical baggage and compatibility issues, and somehow it never feels quite as clean and tidy as it should. I understand the benefits of building on such well-established foundations, but as with effects in Haskell, sometimes silly things just seem to take far more effort than they should, even though much of the language appeals to me.


>The Python code has the added advantage of actually being correct, assuming that the intended behaviour really is to add up the total of all entered numbers over time as the output suggests.

What output? I don't understand why you wrote python code that does something different than the haskell example does, and then decided that makes the haskell code wrong.


I think if you’ve got an example of an open-ended read loop, and it collects numbers, and you’re printing something that says “Total:”, it’s reasonable to ask whether the intent was to print the total of the numbers collected by the loop so far.

That this side discussion has happened almost makes my point for me. If the Haskell code had described what the mutable state actually was (the total to date or the previous entered value) rather than getting hung up on the fact that it was mutable state at all (even as written, the term “state” appears five times in this trivial algorithm) then we wouldn’t be having this conversation.


>it’s reasonable to ask whether the intent was to print the total of the numbers collected by the loop so far.

Yes, it is reasonable to ask that. But you just said it was incorrect, you didn't ask that.

>That this side discussion has happened almost makes my point for me

I don't see how. Your point appears to be "I can write shorter python". You can write shorter haskell to do that too, the point was to demonstrate the State monad, not to be concise.


If the code isn't incorrect, the label printed is. It is not computing the total.


I don't think you're going to convince existing web developers to adopt Haskell.

The second example from the article uses 8 lines of Haskell, a hand-waving explanation of a state transformer monad, composition, and a state-transition diagram to demonstrate how easy it is to write trivial Haskell programs. The program in question takes a number from stdin, adds it to a counter, and prints the result.

"Web developers," being those who have experience building applications using web-based technologies, have a plethora of tools and languages at their disposal that don't require the up-front investment of learning Haskell.

Haskell has a lot to offer but I suspect the only people interested in using Haskell in their web application projects are Haskell developers or people who've been convinced that they will be smarter if they learn Haskell.


I've had the opposite experience. Everyone I know who does web development is frustrated by the poor quality of work they produce, and how hard it is to maintain the projects they've done. Re-writing a project from scratch every ~5 years is essentially normal development practice. I learned haskell just for web development. I am not smarter as a result, I just have a better tool now that helps me produce better results. My wife then decided to learn haskell for web development too. Neither of us have a background in computer science, or even a degree at all. We learned haskell for purely practical reasons, and we keep using it for those purely practical reasons.


ymmv; I hate web development but maintainability is the least of my concerns. I have frameworks for that.


What framework, and what is it doing to help with maintenance? My experience has been that typical RoRish frameworks are so heavily concerned with trying to make the initial writing of the app easier that they actually hinder long term maintenance.


Pick a framework, I've probably worked on an application that used it.

what is it doing to help with maintenance?

What frameworks do: impose structure and convention on the design of programs.

In the case of web frameworks you provide some data, map URLs to functions, and hook into some helpers for managing sessions, caching, and the like.

Haskell has them too.

...they actually hinder long term maintenance.

I've found the opposite to be true in the general case.

Disqus is probably the largest Django application out there that I know of (and note I have never worked there). From their technical talks and contributions to Django over the years it seems to me that Django has been a net-win for them, maintenance wise, over the years. I can't say what it's like maintaining a Django application like Disqus but it doesn't sound to me from what I know that Django is any hindrance to them.


>What frameworks do: impose structure and convention on the design of programs.

So, nothing? Convention doesn't prevent things from breaking when you make changes.

>Haskell has them too

I know, I use two of them. And I haven't found any benefits to maintenance from them. I've found huge benefits to maintenance from haskell though.

>it seems to me that Django has been a net-win for them

Compared to what though? I didn't say frameworks are useless, I said they don't appear to do anything to aid in maintenance.


So, nothing? Convention doesn't prevent things from breaking when you make changes.

No, not nothing. Exactly what it says on the tin.

Without a web framework what do you get? A few buffers to read/write some bytes to. Have fun writing your parsers, dispatchers, database connection pool handling, object relational mapper, etc, etc, etc... and especially have fun maintaining all of that.

If you used a framework though, you wouldn't have to. It's already there and there's a community of people who work on that stuff for you. Awesome.

First rule of maintenance: the code you don't write is the code you don't have to maintain.

I don't see what Haskell is winning here in terms of maintenance. Without Yesod or any other framework you're in the same boat as someone starting with plain Java or Tcl or something.

Unless of course you're referring to its masochistic type system that magically protects you from ever writing a bug ever. Good luck with that.

My point was, originally, that a web developer that doesn't already know Haskell and hasn't convinced themselves to learn Haskell isn't going to look at some Haskell code and be converted. They're going to see StateT and monad combinators and the awful pain of IO in Haskell and go write their app in X.


>Without a web framework what do you get? A few buffers to read/write some bytes to. Have fun writing your parsers, dispatchers, database connection pool handling, object relational mapper, etc, etc, etc... and especially have fun maintaining all of that.

Why would I write all of that myself rather than use libraries? You do realize that writing an application is not a binary choice between "use framework" and "write everything from scratch" right?

>If you used a framework though, you wouldn't have to

I don't have to anyways, a framework has nothing to do with it.

>I don't see what Haskell is winning here in terms of maintenance

Type safety. I can change my code, and every single thing I just broke is now specified for me, file and line, by the compiler. This is an absolutely massive difference compared to trying to rely exclusively on unit tests. You should really try it sometime.
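As a hypothetical sketch of what that feels like in practice (the names below are made up): after widening a record, the compiler points at every construction site that no longer type-checks until each one is updated.

    -- Hypothetical: adding the email field made every existing `User ...`
    -- construction a compile error, each reported with its file and line,
    -- until it was updated like the one below.
    data User = User { name :: String, email :: String }

    mkAdmin :: User
    mkAdmin = User "admin" "admin@example.com"

    main :: IO ()
    main = putStrLn (name mkAdmin ++ " <" ++ email mkAdmin ++ ">")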

>Unless of course you're referring to its masochistic type system that magically protects you from ever writing a bug ever.

Really, you should try it sometime. Or at least take the ignorant nonsense to reddit where it belongs.

>My point was, originally, that a web developer that doesn't already know Haskell and hasn't convinced themselves to learn Haskell isn't going to look at some Haskell code and be converted

And my point was that is exactly what happened with both me and my wife.


Really, you should try it sometime. Or at least take the ignorant nonsense to reddit where it belongs.

I've read the papers and I've tried a bit of Haskell. I write lots of C++ too. I've tried different flavours of typing. I still prefer dynamic, strong-typing any day. Common Lisp or Python generally do the trick.

And I've been programming for > 10 years. I'd appreciate it if you wouldn't assume I'm an ignorant dumbass because I don't agree with you.

And my point was that is exactly what happened with both me and my wife.

Does not imply it's true for everyone. Maybe I should qualify my original point: most web developers using existing technologies aren't going to hop over to Haskell because of state transformation and combinator monads and strong typing.


>I write lots of C++ too

I know, that's why you say completely backwards things like "masochistic type system". You are ignorant of haskell, and instead apply your experience with statically typed languages with inexpressive and awkward type systems to haskell.

>And I've been programming for > 10 years. I'd appreciate it if you wouldn't assume I'm an ignorant dumbass because I don't agree with you.

Ignorance has nothing to do with being a dumbass. You are ignorant. So am I. So is everyone else. The fact that modern type systems are one of the areas where you are ignorant does not make you stupid or a bad person. My suggestion that gaining knowledge and experience in that area could be beneficial to you is not an attack; it is a genuine suggestion. I used to be a programmer with 10 years experience who thought static typing was bad too.

>Does not imply it's true for everyone

I didn't claim it was. I pointed out that your broad claim is unsupported by evidence, and that my personal experience was directly contradictory.


I know, that's why you say completely backwards things like "masochistic type system".

I don't think disagreeing with you makes me ignorant or gives you any special insight into my knowledge of type systems.

And my statement is only backwards from the perspective of developers who find pleasure in working with such a compiler.

If you want to know the extent of my ignorance, I've only used Haskell to complete various programming exercises and to inspect the source code of xmonad. I was interested enough at one point to find out what all the hype was about. I've watched most of the seminal talks from Peyton Jones and I've read numerous papers on Hindley-Milner type inference, category theory, monads... and perhaps countless more articles than I care to remember that extol the virtues of the pain of getting your types right (in fact I think numerous sources describe the experience as "pain," but add that "it's worth it").

Weird.

In the end my conclusion, different and ignorant though it is, is that purity is over-rated and not worth the trade-offs. I prefer languages that let me be the judge of what is useful and avoid languages that make broad generalizations about purity, mutability, and types.

I pointed out that your broad claim is unsupported by evidence, and that my personal experience was directly contradictory.

And my experience is directly contradictory to yours. So what?

If we needed hard numbers to settle this I think we will find that the majority of web applications in existence are not written in Haskell. And if we take a survey of developers who build and maintain web applications about whether they would consider switching to Haskell I imagine very few would answer, "Yes." I think they will continue to maintain and develop new features and applications without Haskell and the world will continue to turn.

Yes, as you've pointed out, there are other people out there like you. Sorry, you're in the minority. The vast, vast minority.

There are a plethora of considerations to make when writing web application software. I don't think I've ever stopped and said, "You know, I should wrap all of this in a state transformation monad, combine it with the IO monad, and ensure that my entire program passes through my compiler's type-checker." And I write large, eventually consistent data-stores without type checking... and most of the time they work!

I just don't see why type checking up-front is such a huge deal. And believe me or not, I've tried to convince myself.


One thing I haven't seen mentioned here is that web development is often not a one-man endeavor. At its most basic, you commonly have separation between front- and back-end. So unless you plan on doing it all yourself, you will need to find a front-end developer who is either familiar with Clay, JMacro and Fay; or who is willing to learn; and frankly, I don't think that's a very likely scenario, given the scant usage of Haskell as a web development language.

Also, the example of the compiled JavaScript from Fay is horribly obfuscated:

  return Fay$$_(Fay$$_(
            Fay$$cons)(i))(Fay$$_(
            Prelude$enumFrom)(
            Fay$$_(Fay$$_(
                Fay$$add)(i))(1)
        ));
Oh, God, my eyes. Any idea at all what that does? I can tell you one thing: I would not want to debug that on a Friday night. Or a Tuesday afternoon. Or ever, really.

Kind of have to agree with a few of the other commenters here who are saying things along the lines of "You can do anything with a hammer, if a hammer is what you have."

Also: this reminds me of a "LISP for Web Development" talk I went to a few years ago. The speaker was talking to a bunch of web devs about how LISP has an undeserved reputation for being domain specific (academia, astronomy, whatever). He then went on to explain how you could use it for calculating the results of some kind of physics experiment and output it as an HTML table. Uh-huh. My buddy and I walked out after about 25 minutes.

Am I being unfair?

>> As an example of we’ll use JMacro to implement a simple translator for the untyped typed lambda calculus.

Nope.


> Oh, God, my eyes. Any idea at all what that does? I can tell you one thing: I would not want to debug that on a Friday night. Or a Tuesday afternoon. Or ever, really.

Do you often debug the assembler output of your compiler? Same thing; fix the Fay code, not the Javascript that the compiler generates.


I have source-level debuggers in other languages.

I'm not sure whether Fay generates source maps yet, and either way not all browsers support them.

So it's pretty much guaranteed you'll have to debug generated javascript at some point.


> Any idea at all what that does?

Without reading the article or having used fay, I actually knew that that was a JS translation of `enumFrom` and that you'd probably left out part of it. An editor color scheme could probably reduce the noise of all the "Fay$$" namespacing (or whatever that is) stuff.


You should not ever have to debug generated code. That's what source-maps are for.


>Am I being unfair?

I'm not sure I would call it unfair, but I wouldn't call it reasonable or logical. You don't have to use haskell for your front end development if you don't have someone to do it. You don't have to use haskell for your back end development if you don't have someone to do it. He just showed some of what is available if you do want to do it. Our front end developer knows haskell, and works on the backend as well writing haskell. She still doesn't use clay or fay though, she uses SASS and just plain old javascript.

Also, Fay output isn't obfuscated, it is just accurately translating haskell to javascript, which means lazy evaluation.


I'm just getting started with Haskell and there will soon come a time where I need to jump in the deep end. When I'm ready to learn Category theory, what background will I need? Is elementary calculus and a little linear algebra enough? Or am I going to need some more advanced math concepts? Once I've got the right background, where should I go to learn Category theory?


We had a discussion on this when Stephen's last post showed up here: https://news.ycombinator.com/item?id=6041726

I've been looking into this myself and I have a similar background as you (elementary calculus and linear algebra; I also took number theory as part of a math minor). So far it looks like I should pick up a bit of abstract algebra/topology (rings, fields, groups, etc) then I should be able to start off on the shallow end of category theory and work from there. At the same time I need to improve my understanding of how monads, monad transformers, monoids, functors, etc work in Haskell from an operational standpoint; I have a basic understanding now but I'd like to dig deeper.

The thread I linked above has a lot of links to books and stuff like Reddit's r/haskell and other places, that you can sort through to see what works for you. I have yet to make that determination for myself, but there's plenty to choose from. I actually picked up a copy of Benjamin Pierce's "Basic Category Theory for Computer Scientists" a few years ago based on someone's recommendation, way before I got into Haskell, and didn't make the connection until I picked up his book on type theory recently (while researching a paper on FP) and thought his name looked familiar.

Ultimately I'm hoping to be able to get deep into type theory and go through that Homotopy Type Theory book that came out recently.


Thanks for the response. I was interested in the answer to this question as well. I read through the thread [1] that you linked to, and seems like the consensus is the same thing you said: it's best to learn abstract algebra and topology first. Does anyone know of some good sources for those topics? The other threads are understandably focused on category theory.

As for category theory, after reading through that thread [1] and the Reddit thread [2] that it links to, I think Conceptual Mathematics: A First Introduction to Categories [3] will be my first read on category theory (but probably followed by the Rosetta Stone paper). That book receives some strong recommendations as a first book in the Reddit thread [2].

[1] https://news.ycombinator.com/item?id=6041726

[2] http://www.reddit.com/r/haskell/comments/1ht4mf/books_on_cat...

[3] http://www.amazon.com/Conceptual-Mathematics-First-Introduct...


Not a problem. I'm interested in the abstract algebra question too, as I haven't actually done any research yet. Perhaps someone should start an r/categorytheory. :)


Digging deeper into the HN thread that you linked to, I came across this discussion [1] on Reddit which is focused more on the abstract algebra question. They recommend Algebra: Chapter 0 by Aluffi (Amazon [2] or PDF [3]).

[1] http://www.reddit.com/r/math/comments/1eiyid/category_theory...

[2] http://www.amazon.com/Algebra-Chapter-Graduate-Studies-Mathe...

[3] http://www.mimuw.edu.pl/~jarekw/pdf/Algebra0TextboookAluffi....


Thanks for the Algebra Chapter 0 link! I think that's my next math book :)


Ah, I think I had seen that but forgotten to note it. Thanks.


This might become a recursive rabbit hole.


I will echo the choir here that states that category theory is not Haskell. Category theory gave Haskell some neat tricks, but it's not necessary to understand it to do anything practical. Category theory is so abstract and general that you are unlikely to come up with any grand solutions from it unless you are doing a PhD.

If you enjoy it anyway: I like (http://www.haskell.org/haskellwiki/Typeclassopedia) for how it discusses how the different type classes in Haskell relate. I found this a lot more practical coming from this direction but your taste may vary.

I'd also highly recommend going through Types and Programming Languages by Pierce (http://www.cis.upenn.edu/~bcpierce/tapl/) if you are interested in the area. This gives you a good background on type system stuff.

There are tons more links to category theory stuff here (http://mathoverflow.net/questions/903/resources-for-learning...)


In my experience, reading the Typeclassopedia[1] is probably more than enough for most people.

Some people like thinking about category theory when they program but in the end it's still just a bunch of (cleverly organized) abstract interfaces.

[1] http://www.haskell.org/haskellwiki/Typeclassopedia


Well, I've been continuously reassured that learning the theory is unnecessary if all you want is to use Haskell effectively.


Learning category theory is about as important to programming in Haskell as denotational semantics are to programming in C. Which is to say, not very important unless you're interested in deeper computer science.


So lets assume I'm interested in deeper computer science :)


I'd still just start with learning Haskell. It will not be nearly as intimidating and you'll see how the ideas work out in practice in the programming world. Then move on to category theory. But hey if you are ambitious by all means dive right in.


Well, then I think you should either learn category theory directly or learn Haskell without it first. Learning two brand new things at once usually produces an incomplete understanding of both.


Well said!


I'm in a similar situation. My math background consists of the required courses for a BA in CS. I tried to start with Mac Lane's "Categories for the Working Mathematician", but it was too dense for me. I switched to Awodey's "Category Theory" and that has been easier to follow. Wikipedia, Youtube, and other online sources have been very helpful. It's been a very rewarding experience so far.


I'm currently working my way through "Basic Category Theory For Computer Scientists" with a local meetup group. I don't have enough to compare it to, to say whether I'd recommend it as a book. I've been enjoying it, though. If you're in the Bay Area, join the meetup group - we're only just starting chapter 2.

As for background, I think some familiarity with set theory would be most useful, groups and rings and such a bonus. Calculus won't be of much help. Linear algebra might, depending.


i'm not sure what you need to learn category theory (but i'm working on it). so far it seems you need to be able to write and read proofs, and not much else.

of course, that doesn't have much to do with haskell, but if you have an interest in abstract algebra beyond its applicability in haskell, you're in luck, because it doesn't seem to require much background. all it needs is for your head asplode.

caveat: i've tried to learn category theory, and found that i could barely follow a proof, let alone write one. so i picked up a book: http://www.pearsonhighered.com/educator/product/Mathematical...

and i'm working through it now. i could come back to category theory and find that i'm still lost, but i don't think so.

i haven't really had much trouble writing haskell without a decent math background.


>I'm just getting started with Haskell and there will soon come a time where I need to jump in the deep end

Your implication is that "jump in the deep end" means learn category theory. That is entirely unnecessary. If you are interested in category theory, by all means go for it. But don't think it is in any way a haskell requirement.


My long term bet is Haskell, but boy do the examples look ugly. And most of Haskell really is ugly. My mind just can't get around type names like "M a p f b c => (a -> p) -> b -> c -> a" just yet. I know it's not hard to understand, but it is hard to parse and build up context.

Looking forward to having 5+ years of experience in this brilliant language.


Yeah, but the type

    foo :: M a p f b c => (a -> p) -> b -> c -> a
    foo f b c = ...

would look something like

    A foo<M<a>,p,f,b,c>(AP f, B b, C c) { ...

in Java (I don't know Java, but I hope you get the idea). I mean, this is a really complex example you've shown. And most of the time, you can (just like everywhere else) write simple type synonyms to express what you want.
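To illustrate the type-synonym point with made-up names: naming a stack once keeps later signatures short.

    import Control.Monad.Reader
    import Control.Monad.State

    -- Hypothetical synonym for a transformer stack.
    type AppM a = StateT Int (ReaderT String IO) a

    greet :: String -> AppM ()
    greet name = liftIO (putStrLn ("hello " ++ name))

    main :: IO ()
    main = runReaderT (evalStateT (greet "world") 0) "config"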


Rest assured it doesn't take 5 years to get there :)


Why do you think a language that "really is ugly" and " is hard to parse" must be a "brilliant language"? Maybe the emperor has no clothes? Maybe Haskell is just what you see: suitable for academic discourse, unusable for real-wold programming.


Presumably he thinks that it is brilliant for reasons he already knows, probably involving the type system, but also observes that it is aesthetically ugly and difficult to read.


Aesthetics are purely subjective and rather useless. Hell even our own opinion of them can change drastically over time. The other reasons are what matter. I initially found Haskell quite ugly. But now that I understand the semantics behind it the syntax is quite beautiful in my eyes.


Let's throw reason and common sense to the wind and pretend the subjective "haskell is ugly" is objectively true. Given that, could you explain to me how you make the seemingly wild and random leap to "haskell is suitable for academic discourse, unsuitable for real-world programming"? Perl is widely considered to be ugly; is it also only suitable for academic discourse? What properties do you believe haskell possesses that make it unsuitable for "real-world programming"? What do you even deem "real-world programming" to be? It seems rather bizarre to suggest haskell is unsuitable for real-world programming in a discussion about examples of using haskell for real-world programming.


I really like Stephen's writing style: no frills, concise, but he still spends time explaining the hard things. Plus he usually writes about things I don't understand so I come away having learned something.


In this blog post about how easy and non-academic Haskell is I'm going to implement the lambda calculus as an example


One of the most challenging things about Haskell tutorials is the difficulty in determining the etymology of very terse function names.

For example, what does the M in forM_ stand for? Monad? Something else? What about liftM? Why is it called lifting? Is the M here the same as the M in forM_?

These are just a few examples from this tutorial, but I find that this is pretty much common everywhere in the Haskell world, making it hard to parse semantic meaning from function names if you aren't already immersed in the language and community and well versed in the lingo.

One of the best things I've learned to break out of this is to search for JavaScript implementations of many Haskell concepts when such a concept is doable in JavaScript. It usually leads me to enough understanding of what some function is trying to accomplish to proceed with whatever Haskell tutorial I happen to be reading.


The positive news is that while it might take a while to learn the conventions, usually the conventions are enforced all around. In your case, the conventions are documented in the docs for Control.Monad, the library those functions are from:

The functions in this library use the following naming conventions:

> A postfix 'M' always stands for a function in the Kleisli category: The monad type constructor m is added to function results (modulo currying) and nowhere else. So, for example,

   filter  ::              (a ->   Bool) -> [a] ->   [a]
   filterM :: (Monad m) => (a -> m Bool) -> [a] -> m [a]
> A postfix '_' changes the result type from (m a) to (m ()). Thus, for example:

    sequence  :: Monad m => [m a] -> m [a] 
    sequence_ :: Monad m => [m a] -> m () 
> A prefix 'm' generalizes an existing function to a monadic form. Thus, for example:

    sum  :: Num a       => [a]   -> a
    msum :: MonadPlus m => [m a] -> m a
http://www.haskell.org/ghc/docs/latest/html/libraries/base/C...

Personally, the things that I found hard were idioms (like learning that $ is just avoiding parentheses, "go" is for loops, etc.) and how there's a lot of libraries that make the code shorter at the expense of forcing you to be aware of them (for example, applicatives <$> <*> look alien until you learn what they are, and monad transformers add a lot of "magic" to the code as well as force you to use annoying lifts all over the place).
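
To make those idioms a bit less alien, here's a tiny, made-up illustration (nothing from the article):

    import Control.Applicative ((<$>), (<*>))  -- needed on older GHCs

    -- ($) just drops parentheses: f $ x is the same as f x
    wordLengths :: [Int]
    wordLengths = map length $ words "hello applicative world"

    -- (<$>) is fmap; (<*>) applies a wrapped function to a wrapped value
    addMaybes :: Maybe Int
    addMaybes = (+) <$> Just 1 <*> Just 2   -- Just 3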


Let me hoogle those for you (sic) http://www.haskell.org/hoogle/?hoogle=liftM =-)


It's called lifting because it's moving something between levels. In the case of liftM, it's moving a function that operates on type a and returns type b, to one that operates on a wrapped type a and returns a wrapped type b. Since a monad should be a functor, and the types are the same, I'm pretty confident that this is the same thing as fmap - though in principle there could be a monad with no functor instance defined where you would have to use liftM (for now: http://www.haskell.org/haskellwiki/Functor-Applicative-Monad...).
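
Concretely, the two have the same shape apart from the constraint, and for any well-behaved monad they agree (a small sketch):

    import Control.Monad (liftM)

    -- fmap  :: Functor f => (a -> b) -> f a -> f b
    -- liftM :: Monad m   => (a -> b) -> m a -> m b

    viaFmap, viaLiftM :: Maybe Int
    viaFmap  = fmap  (+1) (Just 2)   -- Just 3
    viaLiftM = liftM (+1) (Just 2)   -- Just 3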


Yes, the M is for monad. Eg map vs mapM


these "how to read _" posts are helpful mini- style guide/naming convention roadmaps

http://blog.ezyang.com/2011/11/how-to-read-haskell/

http://www.haskell.org/haskellwiki/How_to_read_Haskell#What_...


If you like hammers you can make them do anything. Or is the analogy nails? I forget.

The real question is why use Haskell instead of something else that most people generally consider targeted at the web?


> The real question is why use Haskell instead of something else that most people generally consider targeted at the web?

An equally "real" question is why people think "the web" is special in such a way that it requires a different programming language for back-end applications than other server applications use.


I would say that the difference is that the Web is almost entirely about I/O, which Haskell treats as an unfortunate aspect of programming that is to be made as unpleasant as possible so it will be avoided except where unavoidable.

There are many programs that run on servers, grinding away on deep questions for extended periods with no more than the bare minimum of communication with the rest of the universe---maybe just a terse "42" at the end of a long run.

But the Web is about talking to users thousands of times per second, sending it all in a hurricane of streams to a database, fetching data for each of them from various databases, streaming it out to a third-party credit card processor, sending the results back to the users....

Haskell's ideal of minimal I/O isolated in a ghetto so the "real" work of the program can remain unpolluted doesn't seem an ideal match for the case where the real work is almost entirely about routing I/O streams.


> I would say that the difference is that the Web is almost entirely about I/O

That's true in the trivial sense, in the same way that "the desktop" is about i/o, or "the command line" is, in that all of those specify an i/o channel.

But web apps or services aren't all about i/o any more than any other applications/services, they just differ in that the main user-facing i/o channel is HTTP.

> Haskell's ideal of minimal I/O isolated in a ghetto so the "real" work of the program can remain unpolluted doesn't seem an ideal match for the case where the real work is almost entirely about routing I/O streams.

Monads aren't ghettos isolated from the real work of the program. Monads are tools for doing the real work of the program -- whether it's i/o, or dealing with mutable state in a transactional, concurrency-safe manner, or, well, lots of other things.
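
To make the transactional-state point concrete, here's a minimal STM sketch (a toy transfer example, not from the thread; uses the stm package):

    import Control.Concurrent.STM

    -- move an amount between two shared counters; the whole block commits atomically
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      modifyTVar' from (subtract amount)
      modifyTVar' to   (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 25)
      readTVarIO a >>= print   -- 75
      readTVarIO b >>= print   -- 25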


>That's true in the trivial sense

I think it is true in the actual real world practical sense too. Consider how many web applications are just plumbing between a database and a browser. 95% of my app is in a MonadIO.


> Web is almost entirely about I/O

I think that's not quite correct: the web is half about IO and half about parsing, and parsing is where Haskell shines.

Haskell does not avoid IO because it's an unfortunate aspect; it avoids IO to simplify things, to minimize the surface area that can break, and to make programs easier for the compiler to optimize. "Pollution" is just an emotional term used when teaching beginners to emphasize the usefulness and beauty of pure functions.
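
As a small illustration of the parsing half (a made-up request-line parser using attoparsec; the names are hypothetical):

    import Control.Applicative ((<$>), (<*>), (<*))
    import Data.Attoparsec.Text
    import Data.Text (Text)

    data RequestLine = RequestLine Text Text deriving Show

    -- turns e.g. "GET /index.html" into RequestLine "GET" "/index.html"
    requestLine :: Parser RequestLine
    requestLine = RequestLine <$> takeWhile1 (/= ' ') <* char ' '
                              <*> takeWhile1 (/= ' ')

    -- parseOnly requestLine "GET /index.html"
    --   == Right (RequestLine "GET" "/index.html")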


>Web is almost entirely about I/O, which Haskell treats as an unfortunate aspect of programming that is to be made as unpleasant as possible so it will be avoided except where unavoidable.

That isn't what haskell does though, that is the "I've never bothered to learn haskell and want to spread FUD" version of it. I/O is not "a ghetto" in haskell, it is first class. Haskell just makes I/O explicit, it doesn't make it unpleasant. My app is literally nothing but I/O, just plumbing between postgresql and browsers. Haskell has done nothing to make that difficult or unpleasant, and has done much to make it easier and simpler, especially in regards to error handling and form processing, which frankly is most of the code.


well said.

I do very very imperative things in haskell. Including very very high quality numerical algorithms where i'm writing vectorized math that uses the cpu ports well (the things that let you get instruction level parallelism), the cpu caches well, etc.

Basically, I know how to write fast code. Haskell supports writing sophisticated algorithms using these gnarly bits of code I wrote as primops in a really nice way. This code is more imperative than most "web" code ever written, and competes favorably with state-of-the-art alternatives written by dedicated experts.

Point being, anyone who says haskell can't do IO is an idiot and needs to finish their computer science education, or get one :) (or, get a refund)


That seems like a far more real question.


Off the top of my head:

1. Built-in concurrency makes your backend scale better.

2. Type system reduces bugs in your code.

3. Mature libraries for web development: frameworks, templating languages, ORMs.

I'll admit there are some downsides:

1. Harder to hire devs that know Haskell.

2. Smaller community.

But it seems like those are getting resolved quickly. The Haskell community is growing, and there is better documentation and support thanks to companies like FPComplete. O'Reilly has been publishing a bunch of books on Haskell, the Stack Overflow community is active and helpful...


1. Concurrency makes applications scalable, not fast.

3. I don't know what "mature" means, but they are not used by high-profile applications.

1'. This is probably balanced by their high motivation to work on a Haskell project.


Regarding 3, I wouldn't regard many of the technologies used in many high-profile applications, like Node.js, as particularly mature. They're just popular. They have mature communities surrounding them, but many of these technologies are far from technically mature and/or rock-solid.


>I don't know what mature means, but they are not used by high-profile applications.

Does that mean that if my website becomes "high-profile" next month haskell will have suddenly become a better language than it is now, without the language changing at all?


It's evidence that it's viable, whereas now it might be great in theory, but there's no direct evidence that it works in practice for large-scale projects.


Haskell has been used in large-scale projects. There's an algorithmic trading firm, Tsuru Capital, that uses it exclusively, in fact.

You might also be interested in the work of Galois. They've got some slides on Haskell in industry: http://www.scribd.com/doc/19502765/Engineering-Large-Project...


That seems like a very bizarre way for technical people to behave. "I'll assume this general purpose programming language must have some weird problem that somehow makes it unusable, but people can only find out about that problem if they write a popular project in the language". Why not assume every general purpose programming language is capable of being a general purpose programming language barring evidence to the contrary? Shouldn't tech people be capable of examining the language and determining if there are evil "large-scale" goblins lurking within?


Systems in general tend to have goblins that reach out and bite you when you try and scale them too hard. Better to have a clearer picture of where there be dragons - maybe or maybe not "better enough" when weighed against other advantages and disadvantages, but "mature" is a virtue.


Mature is a virtue, but "popular app was written in language X" isn't an indication of maturity, it is an indication of a particular app getting popular. The whole reason I used the term goblins is that without specifying what sort of problems people are afraid to run into, it is just vague, hand wavy nonsense. Does someone seriously think haskell is going to somehow "not scale" but ruby would have? Programming language choice has far less to do with scaling than the design of the application.


"Popular web app" implies a level of traffic - and thus demands at scale - that most other domains do not have. It's not the fact that someone was able to make a Haskell-based web-app popular that is indicative of much, but that the Haskell-based web-app stood up to popularity on the web that gives some reassurance and thus helps establish the maturity of the ecosystem.


But reassurance of what? That's exactly what I mean, what kind of evil spirits do people believe are going to make a general purpose programming language incapable of being used to write a web app? This is the exact same FUD that people spread about ruby when rails was starting out. It strikes me as a very lazy, unintellectual way to approach things. "I'll just assume magic gremlins make this language bad until someone else proves otherwise". And then inevitably someone else proves otherwise and it becomes "that proof isn't convincing enough". Make it explicit for me, how many page views/month do I need before haskell has been declared gremlin free?


You're making a binary distinction. It is not a binary thing at all. Any large software system has a lot of moving parts. Any of these parts may have bugs, some of those bugs may be significant. There's also the possibility for different pieces of functionality interacting in poor ways.

The more a piece of a system has been used to build things, the more confident you can be that it doesn't have bugs that will be getting in your way. The more similar these things are to what you're planning on doing, the more confident you can be that things won't get in your way. What the thresholds should be depends on the particular tech, your particular project, and the world around it. Maturity is absolutely something that should be traded away when there is sufficient reason.

This is by no means lazy or unintellectual unless you're using it as an excuse not to work (or learn) or not to think.

Regarding your implicit concern about no one being the one to step up and shoulder the burden of trying new things in production, I reiterate my assertion that maturity is just one factor that should be used in determining technology choice, and there's nothing wrong with those first venturing to new things being those who have the most to gain by their use.

For what it's worth, I'd like to note that I am presently writing a web application in Haskell, so I obviously don't think it's a poor fit for all web projects.


I am not making any distinction, binary or otherwise. I am saying that dismissing languages as "not mature" based on a perceived lack of popular apps written in that language, without defining what "mature" means or what problems the language might be hiding, is lazy and unintellectual. "Not mature" is just a more socially acceptable way to say "I have no actual reason to dismiss this, but I will dismiss it anyways".


You were asserting that I was making a binary distinction:

"Make it explicit for me, how many page views/month do I need before haskell has been declared gremlin free?"

There is no such thing as gremlin free. As something is tested in various environments, we become gradually more confident that there are no gremlins.

A gremlin might look like:

I spent weeks building this system, and now I realize that it can't ever really be robust because I built it all around lazy IO and have no good method of controlling where exceptions happen when they are triggered.

That would be an example of Haskell - as a language and ecosystem - being immature. This is not what anyone recommends doing anymore, because people have realized there be dragons there. Dragons are worse than gremlins, as it happens.

The odds of similar troubles go down over time, especially for areas anyone has done work near.

I'm sure we could quantify, with sufficient data, but I don't have the resources and I'm not sure it would be a good use anyway. How many resources did you put into investigating the other pros and cons, when you made the decision to use Haskell? Like anything else, it's a concern I give some thought, and weigh against other concerns, investing as much time into it as seems appropriate - estimating from what I've observed on other projects, what I've heard from other people, &c.

Just because I acknowledge that something can be a legitimate concern doesn't mean I'm lazy and unintellectual for not pouring way more resources than I have into it - eventually we have to get work done, and doing yet more investigation of the details of a particular decision has diminishing returns.


You really seem to just want to argue about nothing. Your position is a strawman. The original poster I replied to was literally saying "can't use haskell cause academic norealworld". That is lazy and unintellectual. I wasn't talking to you, so constantly trying to re-frame my statements as though they were directed at you is not productive.


"You really seem to just want to argue about nothing."

No, I want to argue (constructively, I hope) about things people say that I disagree with, in the hopes that at least one of us learns something.

"Your position is a strawman."

This is nonsensical. I'll assume you meant "You are arguing with a strawman." I'm not sure that's the case, but feel free to clarify any of your above arguments if you think I misinterpreted something.

'The original poster I replied to was literally saying "can't use haskell cause academic norealworld". That is lazy and unintellectual.'

On reviewing the thread, I understand you to be speaking of part 3 of https://news.ycombinator.com/item?id=6109593

What was literally said was, "I don't know what mature means, but they are not used by high-profile applications", where "they" referred to "3. Mature libraries for web development: frameworks, templating languages, ORMs" from the parent comment. Is that right?

I would say that "can't use haskell cause academic nonrealworld" is a substantial mischaracterization of that, and the parent point is something I would agree with - so far as I am aware we have not seen the haskell web frameworks exposed to tremendous real-world load.

"I wasn't talking to you, so constantly trying to re-frame my statements as though they were directed at you is not productive."

You were replying to me, so I'm confused who you were talking to. If you simply mean you weren't talking about me, I'm not sure I agree, as you seemed to be speaking in broad terms about those who might hold views I think are at least reasonable. I didn't so much think you were calling me out specifically.

Generally, this comment seems disingenuous at best, and I may not reply if you follow up, unless you raise the level of discourse - your earlier comments were better.


>The real question is why use Haskell instead of something else that most people generally consider targeted at the web?

That applies to any language. Why use perl? Why use PHP? Why use ruby? Why use java? Web development isn't magical, it is just ordinary, general purpose programming. So people use ordinary, general purpose programming languages. Like perl, PHP, ruby, java, and haskell.


>Do I still need a graduate degree in category theory to write to a file?

No.

> Will I need a graduate degree in category theory to parse the next paragraph in this article?

Oh. Um wait, define web developer for me again?


In the second example, shouldn't it be `mapM_` instead of `forM_` with this ordering of the arguments?
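
(For reference, the two are the same function with the arguments flipped, so either works as long as the argument order matches:)

    mapM_ :: Monad m => (a -> m b) -> [a] -> m ()
    forM_ :: Monad m => [a] -> (a -> m b) -> m ()
    -- forM_ = flip mapM_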


I've been meaning to ask this for a while now: what's the appeal of Haskell? It doesn't come off as a pleasure to program in. Could someone who knows the language share their thoughts?


I love programming in Haskell! Seriously, the reason it's on HN so much is that Haskellers love the language. Here are some reasons:

* Types, types, types. You're probably sick of hearing about Haskell's type system, but it's amazing. Gives you almost all the flexibility of a language like Ruby, PLUS you can encode a lot of your assumptions in types...so a whole class of bugs gets caught at compile time. I like how I can just write a LOT of Haskell code without testing it, and then just compile my project and find all the issues I need to fix. It makes prototyping really easy and fast.

* Awesome abstractions. Functors, monads, first class functions: you know how java is verbose and you end up writing the same boilerplate over and over? Haskell is the opposite.

* Awesome syntax. I really love Haskell's syntax: super clean, and you can modify it however you like. I like writing very readable code (you read code way more often than you write it) and Haskell lets me write very readable code.

* Neat libraries! All that stuff other languages are trying to work around? Like concurrency, or sandboxing? Part of the standard library already. I had written this post a while back (http://adit.io/posts/2012-03-10-building_a_concurrent_web_sc...) that shows exactly how easy it is.

Seriously, if you're curious about languages at all, give Haskell a try.
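
To give a flavour of the concurrency point, here's a minimal sketch using only base's lightweight threads (a toy example; the async package wraps this pattern up much more nicely):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar
    import Control.Exception (evaluate)

    -- run a computation in its own lightweight thread, hand the result back via an MVar
    fork :: IO a -> IO (MVar a)
    fork act = do
      box <- newEmptyMVar
      _   <- forkIO (act >>= putMVar box)
      return box

    main :: IO ()
    main = do
      x <- fork (evaluate (sum     [1 .. 1000000 :: Int]))
      y <- fork (evaluate (product [1 .. 20      :: Int]))
      takeMVar x >>= print   -- waits for the first thread
      takeMVar y >>= print   -- waits for the second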


I think something that's hard to appreciate before you learn Haskell is that while a lot of the abstractions used in the language are more complex and abstract than you're used to using, it's because the language/types/compiler help to lift an enormous mental burden you usually carry while reasoning about code in most languages.

Purity is great not because you make fewer errors but because when writing pure code you can focus on what needs to happen instead of worrying about state/effects/control flow. While having to sprinkle `liftIO`s around your imperative code sounds dreadful, it's quickly appreciated as a statically checked marker for "code that you need to think harder about".
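
For instance (a made-up ReaderT stack using mtl, not from the article), the liftIO calls mark exactly the lines where effects happen:

    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.Reader

    greet :: ReaderT String IO ()
    greet = do
      name <- ask                               -- pure read of the environment
      liftIO $ putStrLn ("Hello, " ++ name)     -- explicitly marked IO

    main :: IO ()
    main = runReaderT greet "world"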

So what you end up with is a fantastic playground to use mental tools unavailable in other languages due to the complexity overhead. You also have a suite of abstractions that "almost never break" due to their invariants being encoded into the type system and thus checked every single time you reload the file.

People often compare this effect to having a large suite of automatically generated test specs run on every compile. It's like that, yeah, but no human writes tests this bulletproof and no suite runs this quickly.

I get jitters writing Ruby sometimes now because I know I'm fallible and I have to store so much more in my mind in order to reason about it.


Agreed. Ruby used to be my go-to scripting language but I've switched over to Haskell because it's much easier to reason about.


Can you give examples of the scripting tasks you've used Haskell for? Are they on your github?


All sorts of command line utilities. For example, here's a timer:

https://gist.github.com/egonSchiele/6005209

Here's a script that emails me when someone follows/unfollows me on Github:

https://gist.github.com/egonSchiele/6091917


The appeal for me is that it is an expressive, high level language with a useful type system and high performance. It is a pleasure to program in.


Setting aside the small examples and theory about what's best: no argument beats a real-world example.

Are there any large open-source Haskell web apps that one could read to see how it's really done?


Not sure about web applications, but Pandoc [1], git-annex [2], and xmonad [3] are a few examples of successful, reasonably-sized applications written in Haskell. It's definitely worth browsing through their code if you want some examples of idiomatic Haskell in a practical context.

[1] http://johnmacfarlane.net/pandoc/

[2] http://git-annex.branchable.com/

[3] http://xmonad.org/



So, the short answer is no.

And there are many reasons to that. One of them: Haskell developers tend to love the technology and forget about the product.


How do you come to that conclusion? I am all about the product, which is why it isn't open source.


Well, I think there's clearly such a contingent, and I think it's more visible because of Haskell's academic heritage; it may also be larger, but I am much less confident in that - there are people who love pushing the tech in most languages.


I think it isn't more visible, it is just more talked about. There is a perception, not a reality.


I don't think there's a difference between what you mean by "talked about" and what I mean by "visible". I am convinced that the perception is there - I perceive it. I am not convinced that the reality is present - though I am not entirely convinced it's absent, either.


There is a big difference. Visible means you can see that "contingent" existing. Talked about means they don't actually exist, but people always refer to them as though they do exist.


So you think there is not a subset of Haskell developers who are more interested in tweaky tech than in practical problems? If that's the case, then it's true that I disagree with you - as well I should, having observed and interacted with such people myself.


Reminds me of "Step 2: Draw the rest of the fucking owl." It does not follow that because Hello, World is easy, therefore IO in general is easy.


Can you explain what you feel is hard about IO in Haskell that isn't hard in other languages? Sure, you need to get used to the different way of working with IO (remembering to use <- instead of let, calling return to wrap the result back into the IO type, etc.), but after some time, it's pretty much automatic.
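
A tiny sketch of those two binding forms (toy example):

    main :: IO ()
    main = do
      line <- getLine                     -- <- binds the result of an IO action
      let greeting = "Hello, " ++ line    -- let binds a plain, pure value
      putStrLn greeting                   -- already IO (), nothing to wrap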


I really need to learn Haskell. Thanks for this.


All of the examples of aeson I've seen have been for very simple, flat JSON objects. What happens when you have JSON with nested objects? Do you have to create types for each intermediate JSON object, or can you pull the values out into the top-level type? If so, how?


When it comes to encoding JSON, you don't have to represent everything. For example, you can always do:

object [ "foo" .= object [ "bar" .= [ "Yum", "Waffles!"] ] ]

Which will give you {"foo": {"bar": ["Yum", "Waffles!"]}}.

You can do traversals when decoding too:

    tasty :: Object -> Parser [String]
    tasty json = json .: "foo" >>= (.: "bar")

You can also do similar things with lens-aeson:

    tasty json = json ^? key "foo" . key "bar" . _Array

Though I'm less familiar with that, and you won't be inside the Parser monad.


When I code in Haskell, the first and most important step I make is to create rich types for the domain I'm working in. If you disambiguate your types, your code becomes more readable and the compiler is a lot better able to help you out when you make a mistake.

For example, rather than having an Invoice with a status field of Text or String, I have a type InvoiceStatus = Paid | Unpaid | Void. From there, you can write a very precise parser that takes exactly what you want and nothing else. When I apply this approach to each component of the thing I need to parse (or do it generically even), I don't really find myself having to munge javascript types into my data structures.
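
A minimal sketch of that idea (the string encodings here are made up):

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Aeson (FromJSON (..))
    import Data.Aeson.Types (withText)

    data InvoiceStatus = Paid | Unpaid | Void deriving (Show, Eq)

    -- accepts exactly the three known encodings and nothing else
    instance FromJSON InvoiceStatus where
      parseJSON = withText "InvoiceStatus" $ \t -> case t of
        "paid"   -> return Paid
        "unpaid" -> return Unpaid
        "void"   -> return Void
        _        -> fail ("unknown invoice status: " ++ show t)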


To expand on that, these "custom" types are really the only time you have to do any extra work for support with aeson. If you leave your status field as Text or String it's implicitly supported.

That said, aeson support for nullary types (i.e. types that hold no "extra" values, like InvoiceStatus above) is trivial and the types make a huge difference in the rest of your code.


Aeson's generic support is built on GHC's Generic type class. Anything with a Generic instance can pick up the FromJSON/ToJSON instances that aeson needs with essentially no code. Lists are already supported (for element types that also support aeson), as are Map/HashMap with various text-like key types mapping to something that supports aeson.

So, complex nested structures are a matter of defining (usually just deriving) Generic instances for any types that don't already have them. From there you can trivially make those types aeson-compatible. Since so many common types are already supported, adding support for your own nested types isn't too difficult. The examples use automatic derivation of Generic instances, which may cost a little performance but is the easiest way to get off the ground.
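
A sketch of that (hypothetical record names; assumes a GHC with DeriveGeneric and an aeson version with the Generic-based defaults):

    {-# LANGUAGE DeriveGeneric #-}
    import Data.Aeson (FromJSON, ToJSON, encode)
    import GHC.Generics (Generic)

    data Address = Address { street :: String, city :: String }
      deriving (Show, Generic)

    data Person = Person { name :: String, address :: Address }
      deriving (Show, Generic)

    -- empty instances: the methods fall back to the Generic-based defaults
    instance ToJSON Address
    instance FromJSON Address
    instance ToJSON Person
    instance FromJSON Person

    main :: IO ()
    main = print (encode (Person "Ada" (Address "1 Example St" "London")))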


Nested JSON objects are no different. You do not need to create an intermediate data type for each JSON object (that would be ridiculous). The docs are pretty clear on this; a quick hoogle of hackage would get you what you need.


As long as your nested types have implemented or derived the ToJSON typeclass, nested structures will serialize just fine.



