Why Haskell is Kinda Cool (amtal.github.com)
65 points by icey on Aug 25, 2011 | 68 comments



There are a few missteps in this article. To begin with, the author claims that a variable of type Double is "a function with no arguments". This is not the case [1]; not everything in Haskell is a function. A Double is just that: an instance of the numeric type Double.

They also say that "[T]he Ord "typeclass" ... implements comparisons." But of course the typeclass itself doesn't implement comparisons. A typeclass defines an interface which its instances implement. Consider the Eq typeclass:

    class Eq a where
        (==) :: a -> a -> Bool
        (/=) :: a -> a -> Bool
So to make a type an instance of Eq, we need to implement (==) and (/=) functions with the type signatures provided by the class. It's pretty obvious how this should go for the Bool type, so let's define that and make it an instance of Eq.

    data Bool = True | False
    
    instance Eq Bool where
        True  == True  = True
        False == False = True
        _     == _     = False
        
        a     /= b     = not (a == b)
This is all covered very well by Learn You a Haskell [2].

[1]: http://conal.net/blog/posts/everything-is-a-function-in-hask...

[2]: http://learnyouahaskell.com/making-our-own-types-and-typecla...


The points you describe are valid, but there are a few nits:

Regarding your first point, your referenced article makes a misstep of its own, conflating the implementation with the semantics. From the article: "Do some folks believe we’re still doing what Church did, i.e., to encode all data as functions and build all types out of ->?"

From a semantics perspective: yes. If it makes the code easier to reason about, treat variables as nullary functions. In the Spineless Tagless G-Machine paper [1], both values and thunks are represented as closures, which may or may not have been evaluated. For example:

  infiniteSeries :: [Integer]
  infiniteSeries = iterate (1+) 1
Is this a variable or a function? Having a special case for Double ("a closure containing a value") versus [Integer] ("a closure containing a thunk") is a distinction that only matters once the implementation becomes an issue (i.e., lazy evaluation affecting memory usage).

As for your second point, typeclasses can include partial implementations. For example, the Eq typeclass [2] has a minimal definition required, either (==) or (/=), as one can be defined in terms of the other.

[1] research.microsoft.com/pubs/67083/spineless-tagless-gmachine.ps.gz

[2] http://haskell.org/ghc/docs/latest/html/libraries/base/Prelu...


Great points! That we can treat infinite streams as having the same semantics as lists is of course dependent on Haskell's non-strictness—it's not the same in ML, for example.

For me one of the more convincing arguments that article puts forward is that Haskell only has unary functions; the type Int -> Int -> Int is just a shorthand for Int -> (Int -> Int), i.e. functions which appear to have more than one argument are considered for the purposes of the semantics to be of the form (λa.(λb.c)).

The core point that I think Conal Elliott tries to make is that as far as the denotational semantics is concerned, Haskell has values of many types, some of which (the ones of (abstract) type a -> b) are functions.

You are of course correct about Eq and partial implementations. For me the key point is that when one function is defined entirely in terms of the others (via class constraints that the instance ultimately satisfies), it encapsulates the logic of the typeclass—in other words, it defines a property of the interface. So equality for values of a given type is ultimately defined by the instance, but the relationship of equality to inequality—a property of the concept of equality in general—is defined by the typeclass.


> For me one of the more convincing arguments that article puts forward is that Haskell only has unary functions

It's not specific to Haskell; you can trivially argue that all languages with functions have only unary functions, but most languages use tuples as the function's sole parameter.

Haskell even lets you switch a function between the two systems via `curry` and `uncurry`.
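
For instance (a tiny sketch with made-up names):

  addCurried :: Int -> Int -> Int
  addCurried x y = x + y

  addTupled :: (Int, Int) -> Int
  addTupled = uncurry addCurried  -- now takes a pair

  addAgain :: Int -> Int -> Int
  addAgain = curry addTupled      -- and back to curried form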


> the author claims that a variable of type Double is "a function with no arguments".

I believe he was trying to make the very important point that in a functional language, there is no conceptual difference between a variable containing X of type T and a function with no arguments that returns X of type T.


He was, and I was trying to make the point that he was wrong. :)


Ok, but you didn't, really. You gave an implementation difference between the two, but what is the conceptual difference?

Also, the implementation difference is just, well, an implementation detail. If you need to worry about whether a value requires calculation or not, you are likely about to mess things up anyway. Consider the following:

    f = sum [1..1000000] -- a long calculation of some sort

    main = do
      doStuffWith f
      doMoreStuffWith f
    
The compiler may or may not decide that the result of f should be cached, meaning that after the first evaluation of f, f will from then on be a plain value rather than an unevaluated thunk.

Given that evaluation order in Haskell is complex (and something you should try to avoid thinking about as much as possible), this distinction does not make conceptual sense.

Further down in my program listing, there is:

    bob = f
Is bob a value, because it takes no parameter? Is it a function because f happens to be a function? Is it really a function but practically a value since f has already been evaluated? Is f so trivial (5 + 2) that its result has already been evaluated and inlined by the compiler?

These questions can be answered, but they shouldn't be. Their answers are not obvious from the code, and are also subject to compiler decisions.

Therefore, "f = sum [1,2,3,4,5,6]" and "f = 21" are completely conceptually equal. The distinction is not important when you are programming. It may be important from a performance perspective in some instances, but that is usually a sign that you should defer some code to C.


> Is bob a value, because it takes no parameter? Is it a function because f happens to be a function?

Functions are values. But not all values are functions. Functions are values with abstract type a -> b. That's what a function is in Haskell. (Of course, I'm only talking about functions from values to values here, not about functions from types to types and so on.)

Whether bob is a function or not will of course depend on its type. For instance, consider the following definition for f:

    f = (+ 5)
Then yes, bob will be a function, with type Num a => a -> a. On the other hand, if you decide to define f as

    f = take (10^10) $ [1..]
then it won't. The distinction here is determined entirely at compile time: it's about what type a particular value has.

> Therefore, "f = sum [1,2,3,4,5,6]" and "f = 21" are completely conceptually equal.

Right. They're both values of some concrete integer type.


If you were to act like doubles are (optimized) thunks, would that ever trip you up? Haskell's non-strictness means that its abstractions don't leak as much. If I'm able to pretend that everything is a function, and doing so won't lead me astray, I don't see the point of caring that some things are really values (for some definition of real). That seems like the sort of detail that Haskell makes a point of protecting you from.


Don't forget currying. Actually, in Haskell, functions have exactly one argument. If you need more, just return a function.

  -- appears to have 2 arguments
  foo :: Int -> Int -> Int
  foo x y = x + (y * y)

  -- equivalent definition
  foo :: Int -> (Int -> Int)
  foo x = \y -> x + (y * y)

  -- or even more consistently
  foo :: Int -> (Int -> Int)
  foo = \x -> (\y -> x + (y * y))
Thinking this way is more useful in the long run: it makes partial application more natural.

  bar :: [Int] -> [Int]
  bar = map (foo 42)
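With these definitions, bar [1, 2] evaluates to [43, 46] (foo 42 applied to each element).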


I liked Haskell when I tried it for all of these reasons. However, there were a few problems I had:

1. Debugging logic. I LOVE its type system, which catches most of my errors. However, sometimes I have algorithmic/logic errors. In a perfect world, I would have worked those out well before coding. But once I have coded it, I just don't know how to debug my code. I have gotten so used to gdb/pdb as my last resort that I am lost without it. Does anybody have any tips for debugging Haskell programs?

2. Its lazy evaluation sometimes gets in the way of tail-recursion optimization unless you force it, and can lead to stack overflows. But this can be handled by forcing evaluation with seq.
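
For instance, the classic accumulator leak (a minimal sketch; foldl' is the strict fold from Data.List, which forces the accumulator with seq at each step):

  import Data.List (foldl')

  -- foldl (+) 0 builds a chain of thunks ((0+1)+2)+... that can
  -- overflow the stack on a long list:
  sumLazy :: [Integer] -> Integer
  sumLazy = foldl (+) 0

  -- foldl' evaluates the accumulator at every step (constant space):
  sumStrict :: [Integer] -> Integer
  sumStrict = foldl' (+) 0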


1. There's the ghci debugger, though I don't know if it is practical (I don't use it myself).

Alternatively, you can use Debug.Trace (or the http://hackage.haskell.org/packages/archive/TraceUtils/0.1.0... extension), which makes "print-based" debugging easier than in imperative languages:

Say you have:

  processValue f =
    after .
    map f .
    before
If you want "print-based" debugging, you can use Trace.Utils.traceAround:

  processValue f =
    traceAround "processValue" $
    after .
    map f .
    before
Or, if you just want the results of "after":

  processValue f =
    traceId "Result of after: " .
    after .
    map f .
    before
Alternatively, there's also the FileLocation package, which gives you filenames + line numbers in your debug prints via Template Haskell macros.

2. Controlling evaluation is indeed tricky. Haskell programmers take manipulation of infinite data structures (of any kind) for granted -- but then if they want to control stack use, it gets a bit tricky. Other languages take stack use management for granted, but if you want to have infinite arbitrary data structures, it gets tricky.

I personally prefer Haskell's choice, because it means that having infinite structures doesn't contaminate the code with effects (which explicit suspensions/resumptions of computations would do), but it could probably be done better (e.g. move laziness/strictness annotations mostly to the type level).


Haskell doesn't even really have a stack. It's more like a spaghetti stack.


Is that supposed to be derogatory? :-)


No. It's a technical term that crops up naturally in the discussion of e.g. continuations and closures.


For debugging my Haskell code I first determine if the bug is in a pure function or IO. If it's in IO, I use typical print-style debugging (either running the IO action in ghci or just rerunning my program repeatedly).

If it's in a pure function, I load the code up in ghci and use it to replicate and explore the inputs that lead to the bug, and try out fixes. In a complex function I'll often promote (de-indent) any 'where/let' helpers to toplevel definitions so I can try them in ghci and narrow down which is behaving badly.

I have not missed gdb in haskell, but then most of my gdb experiences have involved asking for a stack trace and discovering the stack's been smashed by the bug. Or using gdb to chase down some unexpected mutation of a variable deep in the code. Not problems typical to haskell.


This looks like some interesting research in the right direction for debugging lazy programs: http://arxiv.org/pdf/1108.4706v1. Which goes to show, people need to be looking at Racket a lot more than they do - including Haskellers.


About 1: Yes, debugging (with a debugger) is more complicated than in strict languages. For most problems, however, you can use QuickCheck to test your algorithms and logic. QuickCheck takes some laws about your functions and gives you the simplest test cases that break those laws (if there are any). Very illuminating.

QuickCheck is not a replacement for a debugger. But it allows you to go further without one.
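
A minimal property test might look like this (my example, not from the thread):

  import Test.QuickCheck

  -- Law: reversing a list twice gives back the original list.
  prop_reverseTwice :: [Int] -> Bool
  prop_reverseTwice xs = reverse (reverse xs) == xs

  main :: IO ()
  main = quickCheck prop_reverseTwice  -- tries 100 random lists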

2. I agree. If you have a space leak because of too much laziness, you have to add strictness annotations until the problem goes away. Some people have better intuition than others for that, but there doesn't seem to be an easy formal way to decide.


Does anyone know how to turn off the extra mouse cursors? That is bugging the hell out of me and I can't concentrate on the article enough to read it (but I really want to).


Scroll to the bottom of the page and click the power icon on the enflock widget.


Which still leaves mouse pointer images all over the document. Horrendous.


Type javascript:$('.ms-mouse').hide(); into the address bar and hit enter after you turn off Enflock.


this is the dumbest thing in the world


Here's a question for the Haskellers out there:

How does Haskell help you write less code? What mechanisms are in the language that allow you to write higher level abstractions (as compared to, say, Ruby or Python)?

How does Haskell help you to express concepts that are very difficult to express in other languages?


One of the things that I find pleasing when writing Haskell is the idea that I don't have to think about the context in which my code might need to run.

For example, a common pattern that I find myself repeating in many languages is downloading a bunch of webpages asynchronously and collecting the results in a consistent order. To do this in Go I might wire up a channel and collect the results that way. In C# I might use the BeginAsync functions and possibly locks to collect all my results. And what if one of them times out? Throws an exception?

In almost every language I have had to specifically think about all these situations, making sure I have handled all of the exceptions and timeouts properly, and at the end of the day I usually end up with a mudball procedure that is really good at asynchronously downloading webpages but is far from reusable.

In Haskell, once I'm done writing my function "downloadWebPage", I just lift it into the context of the problem I'm trying to solve. If I'm trying to download a page asynchronously, I'll use "asyncDl = forkAsync downloadWebPage" to make it a function that downloads a page asynchronously. If I need to handle timeouts, I'll use "timeout 200000 . asyncDl" to make a function that returns Nothing after a certain amount of time. What if I wanted to do this for a list of web pages? Well, I just use mapM to run the asynchronous, timed-out webpage download function over a list of websites.
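
Using just the standard library, the timeout-and-map part of that pipeline might look roughly like this (a sketch: downloadWebPage is a stand-in stub here, and forkAsync comes from the library linked below, so it's left out):

  import System.Timeout (timeout)

  -- Stand-in for a real HTTP fetch:
  downloadWebPage :: String -> IO String
  downloadWebPage url = return ("contents of " ++ url)

  -- Give each download 0.2 seconds; Nothing marks a timed-out page.
  downloadAll :: [String] -> IO [Maybe String]
  downloadAll = mapM (timeout 200000 . downloadWebPage)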

What if I want to use any function instead of one that downloads websites? Well, you make a library: https://github.com/jb55/async-util

What I find in Haskell/Lisp that I have not seen in many other languages is the idea of a combinatoric API. One where combining functions is your main tool for building abstractions rather than more rigid computation paths. The amount of code that you do not have to write when programming this way is something that you just don't see in other languages.


Haskell abstracts things at an even higher level than most programming languages. For example, the infamous monads are a way of abstracting out the details of how 'effectful' computations (i.e. computations that do something other than simply taking arguments and returning values) should work. Monads model effects like exceptions, mutable state, input/output, nondeterminism, and so forth, and by abstracting them into the same pattern, it allows you to write functions that work with any effect in the same way. For example, if I have a list of operations which have effects and all have the same result type, I can sequence them together using the sequence function, so for IO:

    sequence [putStrLn "foo", putStrLn "bar", putStrLn "baz"]
will print those three strings on their own lines. However, I can use the same function for Haskell's equivalent of null:

    sequence [Just 3, Just 8, Just 5] -- results in Just [3, 8, 5]
    sequence [Just 3, Nothing, Just 5] -- results in Nothing, because Nothing is
                                       -- propagated if it occurs
or for nondeterminism

    sequence [[1, 2], [3, 4], [5, 6]] -- results in every possible list whose first
                                      -- element is drawn from the first list, whose
                                      -- second element is drawn from the second
                                      -- list, and whose third element is drawn from
                                      -- the third list.
or for any other effect that Haskell has. A lot of learning Haskell has to do with looking at the abstractions Haskell offers (monads are only one; you also get things like arrows, functors, applicative functors, &c) and understanding how you can phrase your problems in terms of them, and consequently use Haskell's abstractly written functions in your favor. A great practical example here is parsing; parser combinators can be expressed as monads or as applicative functors, both of which make your parsing code very small and quite natural to read. For example, the following is a function to parse strings like "(552,864)" using the Parsec parser combinator library:

    import Text.ParserCombinators.Parsec

    parseDigit :: Parser Char
    parseDigit = oneOf ['0'..'9']

    parseInt :: Parser String
    parseInt = many1 parseDigit  -- many1: require at least one digit

    parsePairOfInts :: Parser (String, String)
    parsePairOfInts = do
        string "("
        p1 <- parseInt
        string ","
        p2 <- parseInt
        string ")"
        return (p1, p2)
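With the definitions above, parse parsePairOfInts "" "(552,864)" should yield Right ("552","864").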


Why can't you use monads in ruby? A monad is essentially just an interface, right?

The original question was about the mechanisms, not the style or libraries.

I don't really know haskell, so correct me if I'm wrong.


Type-classes are like interfaces, but not quite the same.

They allow return-type polymorphism, which ordinary OO interfaces do not. And the Monad type-class requires this feature.

Basically, when you call the "return" function in Haskell (e.g. return 5), the code being called depends on the type of the result. The type of the result is determined from the context (by type inference or, rarely, type annotations). In Ruby or Python, there's no easy/direct way to encode something like "return" such that it works with any Monad instance.


Interesting. I see what you are saying, but could you provide an example (or a link to an example) for clarity? Preferably something not easily accomplished in ruby?


I found an article about return-type polymorphism. See http://vpatryshev.blogspot.com/2010/01/dispatch-by-return-ty...

One thing is impossible [in Java], though: defining two methods that differ only in return type.

Most important point: something like

    int i = parse(String source);
    long l = parse(String source);
    boolean b = parse(String source);
is possible in Haskell, but not in Python, and barely imaginable in Java.


Elaboration:

  class Read r where
    read :: String -> Maybe r
Note that it's polymorphic in the type of the result within the Maybe. It's very flexible; the polymorphic type can appear anywhere within the type signature.

Now you can write various functions that use "read" and they all remain return-type polymorphic. For example, you can write one that loops, requesting the user to repeat entry until parse-able data (of the wanted type) is given:

  repeatReadingUntilValid :: Read r => IO r
  repeatReadingUntilValid = do
    line <- getLine
    case read line of
      Nothing -> do
        putStrLn $ "Invalid input: " ++ show line
        repeatReadingUntilValid
      Just result ->
        return result
That's just a silly example, because read isn't that interesting.

Another example is QuickCheck, which uses type-classes to auto-generate fuzz-testers for functions.

For example:

  import Test.QuickCheck

  pretty :: MyType -> String
  pretty = .. pretty print my type here ..

  unpretty :: String -> MyType
  unpretty = .. parse the pretty printing of my type here ..
Now I can test that unpretty is indeed the inverse of pretty:

  quickCheck (\x -> unpretty (pretty x) == x)
(for every x, the unpretty of pretty of x equals x).

I can generalize this property to:

  isInverse f g x = f (g x) == x
And then use:

  quickCheck (isInverse unpretty pretty)
Similarly you can define:

  commutative f x y   = x `f` y == y `f` x
  associative f x y z = (x `f` y) `f` z ==
                        x `f` (y `f` z)
  transitive f x y z = x `f` y && y `f` z ==> x `f` z
This makes an important type of unit testing a breeze.

The type system is saving us from writing code here.


Parsec looks pretty similar to Python's pyparsing or Pysec. Pysec is a "Monadic Combinatoric Parsing" library in Python, so I think you could do monads in Python.


If you can do continuation passing, you can implement any monad instance. You still may not be able to encode the Monad generalization (which, indeed, Python cannot).

As for continuation passing, Python can do explicit continuation passing, but it has caveats:

* No TCO means it doesn't work well in larger cases

* Explicit CPS is ugly. yield-based continuation passing is restricted. You can't implement non-determinism via generators.

So using yield, you can implement most monad instances in Python (excluding non-determinism).

But Haskell goes beyond a particular/useful monad instance (e.g. parsing) and has a Monad generalization (basically a type-class that allows lots of libraries to be written such that they work with any monad, not just this parsing monad).

Additionally, monad transformers are a technique to compose monad types together to form a new monad that can do the things each of the monads can.

For example, I can compose the non-determinism monad with the parsing monad to get a non-determinism parser that yields various possible parse results.
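
A tiny sketch of the idea with the standard StateT transformer over the list monad (my example; a parser transformer composes the same way):

  import Control.Monad.State

  -- StateT layered over [] gives a stateful, nondeterministic computation.
  coinFlip :: StateT Int [] Bool
  coinFlip = do
    b <- lift [True, False]  -- nondeterministic choice from the list monad
    modify (+ 1)             -- count the flips in the state
    return b

  -- runStateT coinFlip 0  ==  [(True,1),(False,1)]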


Because it's pure, you can know what comes out at the other end when you put something in. (That sounds more disturbing than it's meant to be). You can build pieces that you can reuse within many different structures, like different types of pipelines and filters. All without having to worry about leaks or the wrong stuff ending up in the wrong place.


Haskell has some expressiveness advantages over Python/Ruby, but it also has some expressiveness deficiencies compared with these languages. I can list some of these if you're interested.

But the end result is that in my experience, Haskell code is not much shorter than Python code (I don't have much Ruby experience, but I suspect it's probably on par with Python).

IMO, the main advantage of Haskell over Ruby/Python is that you don't pay the enormous prices that those languages charge for roughly the same level of expressiveness. I find it much easier to maintain Haskell code; it has virtually no runtime failures, it has good performance, and so forth.

I cannot possibly list all of the useful things that are nice about Haskell, the rabbit hole is very deep. But I'll enumerate just a few.

Type-classes:

As a simple illustrative example, let's look at the Show class:

  class Show s where
    show :: s -> String
This means: Any type "s" is an instance of Show, iff it is declared an instance, and has an implementation of the "show" function.

Example instance:

  data Bool = False | True

  instance Show Bool where
    show True = "True"
    show False = "False"
This definition also illustrates Pattern-Matching, which is a pretty useful benefit on its own.

Note, Haskell has a "deriving" mechanism that allows replacing the above boilerplate with:

  data Bool = False | True
    deriving (Show)
So far, we've used only type-classes that Python interfaces can encode. This is basically Python's __repr__ (except it's auto-generated to a sensible implementation that is usable as Haskell syntax).

Here's something Python/Ruby cannot directly do:

  class Read r where
    read :: String -> Maybe r
Note that "r" appears in the result type of "read".

Now, you can compose "read" with other functions to build interesting values, that are still polymorphic on the result type. For example:

  readLine :: Read r => IO (Maybe r)
This type means: Given that the "r" type is an instance of Read (its values can be parsed from a String), readLine is an effectful procedure that, when executed, yields a "Maybe r" value (which may be Nothing if the input is not parseable).

With Python/Ruby you may be able to encode this via classmethods, but then all the relevant classes have to be modified to inherit from this. You can try returning factories, but then these must return any particular type they're requested. There's no easy/direct way to encode this as in Haskell.

This is basically a safe __unrepr__, which is not implementable in this way in Python not just because of the lack of return-type polymorphism, but also because of mutability/identity concerns. Immutability-by-default is another nice benefit, gets rid of a whole slew of object-identity/aliasing issues.

On top of the extremely versatile type-class mechanism, many useful abstractions are built, which do not exist in Python. Monads are just one of these useful abstractions.

For example, using lists and their instance of the Monad typeclass, I can write the "powerset" function as:

  powerset xs = filterM (\x -> [True,False]) xs
filterM is much like filter, except it allows a monadic "effect" to be applied at the predicate function. In this case, the effect is non-determinism (or multiple results). We ignore the value of the element, and accept AND reject every element in the list -- thus we get the powerset.
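(For example, powerset [1,2] yields [[1,2],[1],[2],[]].)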

Concurrency:

One of Haskell's models of concurrency is very similar to Erlang, and uses immutable-shared-state threads with explicit-mutable-state. This is great for both performance and simple semantics (the rarity of mutable state makes reasoning about data easy. The preemption makes reasoning about starvation easy). Haskell's threads are also user-level cheap threads, and you can literally create millions of them.

Say we want to do latency-hiding on some expensive procedure. I'll present the Haskell code to do so by creating a thread-pool that preemptively executes the given operation and passes the result to callers that need it:

  preemptively :: Int -> IO a -> IO (IO a)
  preemptively poolSize act = do
    requestMVar <- newEmptyMVar
    replicateM_ poolSize . forkIO . forever $ do
      res <- try act
      responseMVar <- takeMVar requestMVar
      putMVar responseMVar res
    return $ do
      responseMVar <- newEmptyMVar
      putMVar requestMVar responseMVar
      either ioError return =<< takeMVar responseMVar
preemptively takes a number of threads to run, and an action to preemptively execute in all of those threads. It creates an empty MVar (basically an inter-thread channel with a 1-sized buffer) for placing requests.

Then it uses "replicateM_ poolSize . forkIO . forever $ do ...". This means the same as: replicateM_ poolSize (forkIO (forever (do ...))). forever (do ...) executes the given do block in a loop, forever. forkIO executes the forever-loop in its own thread. replicateM_ poolSize executes the given forkIO action poolSize times. So this single line is enough to create our thread pool.

Then comes the code for the thread's single iteration. In each iteration, the thread executes "act" via "try". So any IO exceptions are caught and converted to a value, and placed in "res". Then the thread waits for someone to request a result by waiting on the mvar (takeMVar). Note the requests put on the requestMVar consist of a response MVar to reply to. The thread then places the result of the execution (which may be a value or an exception) into the response mvar.

Then we return an action that asks a thread in the pool for a result, and re-raises the IO exception if there was one, or just returns the threads' result.

Now, if you want to use preemptively, all you have to do is:

  getConnection <- preemptively 20 (sshConnect remoteHost)
And you have a thread pool of 20 threads that preemptively make an ssh connection to your remote host. getConnection will then be immediate if it happens after the threads are done.

This example is meant to illustrate that real-world, concurrent, effectful programming is nice in Haskell, and not just the theoretical pure stuff.


Thank you for your detailed answer.

If I gather from the other answers here correctly, the Haskell money feature (for abstraction) is type-classes.

FYI, I'm keying off of abstractions/features here because I see that as a clear route towards writing less code (fewer bugs).


> If I gather from the other answers here correctly, the Haskell money feature (for abstraction) is type-classes.

On a more elementary level, lots of languages could benefit from algebraic data types. Most languages only support the `product' part of them well, but the `sum' is equally important.

Where product means, in C terms, putting things together in a struct; a sum is more like a union, except that C's unions are dangerous (untagged).
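
For example (a small sketch of my own), a sum type in Haskell is a tagged union that the compiler checks exhaustively:

  data Shape = Circle Double        -- radius
             | Rect Double Double   -- width, height

  area :: Shape -> Double
  area (Circle r) = pi * r * r
  area (Rect w h) = w * h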


I'll also add that type-classes are the main feature in Haskell that Python/Ruby cannot easily encode.

There are many interesting features that Python/Ruby/Lisp variants can encode, but Haskell provides with static guarantees, whereas Python/Ruby rely on dynamism (e.g. general polymorphism, "duck typing", various strong type system features that Python/Ruby do not "need" because they don't attempt to give guarantees, etc).


You also get far fewer bugs because of controlled effects and the strong polymorphic types (i.e. Haskell's strong type system).


http://news.ycombinator.com/item?id=2722732

Dons does a good job of showing how much shorter the equivalent Haskell is than my Python code.


> Since most of my bugs in other languages stem from unwanted interactions between pieces of code via side effects, this is kind of a big deal.

Is this really the case for most people? Most of my errors are syntax, pattern matching, not considering certain cases, unrefined logic, etc.

Creating external side effects such as writing to files or sockets is not that big of a deal, or as error-prone, because after you learn that trick once, it behaves in the same way. The POSIX standard does not change as frequently as my own code :)


How awful is enflock? (NB: Very)


Indeed. I refuse to read this. Haskell is probably awesome, but this site is clearly not. Time to get noscript running again...


Good illustration of why unreadable text colors are kinda lame.


Perfectly readable here. Although not knowing Haskell some of the syntax is a bit difficult to grok.


Haskell syntax is actually pretty straightforward, once you know what you're looking at. It's difficult at first blush because of the weird infix operator symbols ($, ., <> etc.) but once you get used to them there's nothing too scary.

That being said, why not post any problems you have here? I'm sure plenty of people would be happy to help explain Haskell's syntax, and no doubt more people than just you would benefit from that.


Parent is possibly referring to the blog's horrendous color scheme, not Haskell as a language.


I'm thoroughly convinced Haskell is the String Theory of programming. You may not be convinced by it, but you sure as hell have to be smart to grok it.


Arguments of this kind are both wrong and dangerous. Becoming a good Haskell programmer isn't easy, but then neither is becoming a good programmer simpliciter.

The language itself is conceptually novel compared to imperative languages, but the concepts themselves are neither difficult nor abstruse. Actually I find Haskell refreshingly simple; it's a lot like Lisp that way.

I'm sure you'll agree that it's unfortunate that so many people are put off by a subject's alleged difficulty rather than encouraged by it. However, given that this is in fact the case, wouldn't it be better if we concentrated on saying how enjoyable, intellectually stimulating, and useful learning certain things is, rather than discouraging people from even having a go?


You convinced me to give it a try. I'd still be interested in an opposite view. Are there any non-ranty articles about the problems of the language?


A sibling comment mentioned that mutation is difficult; another prominent problem is that reasoning about time and space efficiency in Haskell is rather unintuitive. It's easy to learn a little bit about optimizing functional programs (e.g. through CPS-transforming your functions and taking advantage of tail-call optimizations), and then find your Haskell program even more slow and space-wasteful because it creates a huge number of thunks. (A "thunk" in this context is a zero-argument function whose sole purpose is to be evaluated later; in Haskell's case, to be evaluated when it's "needed" à la call-by-need.) On the other hand, laziness also means you can write apparently inefficient code that nevertheless performs incredibly well, e.g. constructing infinite data structures.

There are also both major and minor squabbles of various levels of importance (e.g. Haskell got :: and : mixed up, you can't have two records with the same field name, Bob Harper argues that laziness itself is a mistake, various people would prefer abstractions other than monads for dealing with state), but I think Haskell's performance and Haskell's initial impedance mismatch with imperative programming are the big problems—at least, those are the big problems I hear people complaining about.

(Then again, I'm an enthusiastic Haskell person, so you probably should find someone who likes the language less to give you a better opinion.)


The downside of Haskell is that performing mutations is more difficult than with traditional imperative or object oriented languages, as monads take some time to get used to.

That said, it's a wonderfully elegant and expressive language, and I suggest you just jump in and try it as opposed to looking for someone to convince you not to ;p. Learn you a Haskell for Great Good, and Real World Haskell are both available online, and are great resources. (written from a phone, otherwise I would provide links, though a google search should bring them up quickly)

Good Luck! moving from imperative/oo to functional style can be a mind bender, but don't let yourself get discouraged, it's still just code. :)


I like Haskell a lot, and here's my list of problems with it:

http://www.reddit.com/r/haskell/comments/hezgk/haskell_singu...


See https://bloggablea.wordpress.com/2007/04/24/haskell-records-...

Otherwise: If you want to see some articles that also talk about the problems of the language, your best bet is to read the papers of the people who make the language. They are acutely aware of the problems. The downside is that you have to learn the jargon first.

I find most academic papers about Haskell and functional programming surprisingly readable though.


>most classes of bugs are caught at compile time

This is false; can we stop pretending that Hindley-Milner solved the Halting Problem?


You don't need to solve the halting problem to solve most classes of bugs.


Most? No.

Some, a few? Yes.

Also, stop trolling ##C with your Haskell stuff, nobody cares.



I've been here much longer than you; don't hide behind the URL.


I've been here longer than either of you and he's right. You're being needlessly abrasive.


Think of it this way. More compile time errors = fewer run-time errors. You can still have bugs of course. Most of my Haskell run-time bugs are because I defined a partial function. Many standard functions are partial too.
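
For example, head crashes on []; a total variant (safeHead, my sketch) pushes the empty case into the type:

  -- head [] fails at runtime: "Prelude.head: empty list"
  safeHead :: [a] -> Maybe a
  safeHead []      = Nothing
  safeHead (x : _) = Just x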

I find that changing a working Haskell program to add new features is easier than other languages I have used. You pay a price (understanding your problem better) up front to get there, but the dev cycle is very fast after that.


>More compile time errors = fewer run-time errors.

But any possibility of 1 run time error means you'll still need to manually write a test asserting the correct runtime behavior anyway. So the typical argument is that typed languages don't actually save you any work, and might actually increase the likelihood of errors by discouraging unit testing.

I'm not claiming this holds for Haskell, as my experience with the language is not great enough to pass any judgement yet.


1 test is not enough to cover the functionality of various functions.

In Haskell, types are enough to prove some properties about functions, so you don't need to write any test for the proven part. For example, you can write a RedBlack/AVL tree implementation in Haskell where the types prove that the tree is balanced. Haskell's type system is not strong enough to prove the order invariants are correct, though, so you'll still need to test that, but that's easier.

In a language with a weaker type system, you'd end up with more test code (for balance tests) and fewer guarantees (tests give no assurance).

Also, many bugs slip through tests that are meant to cover these classes of bugs. You don't get any guarantee from tests. Types give you nice guarantees. They're not as comprehensive as very-expensive tests, but they're pretty cheap. You get far more reliability-bang-for-the-buck from types, and thus the baseline from which you start testing is much higher.
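
As a miniature illustration (my sketch; far simpler than a balanced tree, but the same idea of the type ruling out bad cases):

  -- A list that is non-empty by construction: the type rules out [],
  -- so taking the head is total and needs no empty-case test.
  data NonEmpty a = a :| [a]

  neHead :: NonEmpty a -> a
  neHead (x :| _) = x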


Ok. Fewer run time errors = fewer tests. Also check out QuickCheck for automatically generating tests. It's very handy.


In fact, one of the better books, Simon Thompson's Haskell: The Craft of Functional Programming (better than Real World Haskell or Learn You a Haskell, IMO), emphasizes the importance of testing.


This is a good example of what I really think we need:

Testing, static analysis, and help from the type system.


I think a better way to put this is that for example type errors are the first to bang really loud in any decent language.

But Haskell does not help you with other, much more subtle and dangerous bugs. For example, a function not being tail-recursive despite you believing it to be.


"For example, a function not being tail-recursive despite you believing it to be."

In Haskell, I don't think tail recursion is nearly as important because of laziness. While one computation is adding more to the "stack", another is consuming.

foldr, for instance, is not tail-recursive. It can return (partial) results given infinite input with constant memory.
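
For example (my sketch):

  -- (||) discards the folded tail as soon as it sees True, so foldr
  -- can answer even on an infinite list:
  anyTrue :: [Bool] -> Bool
  anyTrue = foldr (||) False

  -- anyTrue (cycle [False, True]) returns True without looping forever.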

Of course, tail recursion is still important in the places that strictness is important. It's just not quite as fundamental in a language where laziness is the default.

I think there are ways to tell if the TCO has been used, regardless. That's not a property of the type system, or even the language, but the implementation. If nothing else, the profiler should tell you quickly.




