Hacker News

It seems like much of the difficulty this paper is dealing with comes from the fact that it is possible for expressions to diverge. Is there any case where this happening would not be an error? If not, then wouldn't it make more sense to leave the behaviour undefined, and have the compiler emit warnings when it detects this possibility?



Since Haskell is non-strict, how it handles diverging expressions is part of the semantics. If the expression never needs to be evaluated, it doesn't matter if it diverges or not.

I can't think of how to translate that to the examples of the paper, but there are functions where this distinction is useful. For example, it makes sense that getting the head of a list does not care whether the tail diverges or not:

    head [] = Nothing
    head (x:_) = Just x

    > head (10:undefined)
    Just 10
However, if we add a redundant-seeming case, this behavior changes:

    head [] = Nothing
    head [x] = Just x
    head (x:_) = Just x

    > head (1:undefined)
    *Exception: Prelude.undefined
This example is a bit contrived, but it illustrates how the distinction can matter. I can't think of a self-contained example where this is obviously useful, but it comes up in practical Haskell all the time, sometimes without people realizing it. (I remember somebody mentioning that making Haskell strict broke a whole bunch of code and common idioms.)
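One everyday idiom that depends on non-strictness is programming with infinite lists. A minimal sketch (the names here are just for illustration):

    -- An infinite list of naturals; only as much of it
    -- as is demanded ever gets evaluated.
    nats :: [Integer]
    nats = [0..]

    -- take only forces the first five cells, so the
    -- infinite tail is never evaluated.
    firstFive :: [Integer]
    firstFive = take 5 nats   -- [0,1,2,3,4]

Under strict evaluation, defining nats would loop forever before take ever ran, which is one reason a strict Haskell breaks so many idioms.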


A common practice while working on a Haskell code base is to fill in not-yet-completed functions with `undefined`, which is a divergent expression. This lets you do things like write:

  func = undefined --TODO: implement func
and then you can use func in the rest of your code, and as long as nothing strictly evaluates the result of func, you don't have to care that it's not yet defined. This lets you verify, for example, that the rest of your program still compiles properly.

This means that preserving the lazy evaluation of undefined expressions matters. For example, it's perfectly OK to do this in ghci:

  Prelude> let myfunc = undefined::(Integer -> Integer)
  Prelude> let ints = [1, 2, 3]
  Prelude> let results = map myfunc ints
  Prelude> let square x = x * x
  Prelude> let results_squared = map square results
All of this is fine because nothing has actually forced any computation to occur, but I still get the benefits of asserting that my expressions are well-typed. If I actually force evaluation by asking ghci to show a result, I'll get an error:

  Prelude> results_squared
  *** Exception: Prelude.undefined

But it matters that this only happens when I actually force the computation somehow.
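It also matters which part of a structure you force. Continuing the session above, computing only the spine of the list is fine, because length never looks at the elements (a sketch of what ghci prints):

  Prelude> length results_squared
  3

The length is computed without ever calling myfunc, so no exception is raised; only showing the elements themselves forces the undefined values.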


Yes, the alternative to GADTs is to just let the compiler give you warnings. For example, if you have a pattern matching over a list that you know is never empty you will write something like

    -- gives error at runtime if list is empty
    case mylist of
      (x:xs) -> dostuff
      []     -> error "impossible"
or

    -- missing case in pattern match gives 
    -- error at runtime and a compiler warning
    case mylist of
      (x:xs) -> dostuff
What GADTs let you do is enrich your types with extra information, so that you can write incomplete pattern matches, prove they will never fail, and avoid spurious compiler warnings.
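A minimal sketch of that approach (the names SafeList, safeHead, etc. are made up for illustration; this needs the GADTs, DataKinds, and KindSignatures extensions):

    {-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

    data Emptiness = Empty | NonEmpty

    -- A list whose type records whether it is empty.
    data SafeList (e :: Emptiness) a where
      Nil  :: SafeList 'Empty a
      Cons :: a -> SafeList e a -> SafeList 'NonEmpty a

    -- Total despite the "missing" Nil case: the type
    -- rules it out, so GHC emits no incompleteness warning.
    safeHead :: SafeList 'NonEmpty a -> a
    safeHead (Cons x _) = x

The coverage checker sees that Nil can never have type SafeList 'NonEmpty a, so the single-equation match is provably complete.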

It's the old tradeoff between static and dynamic typing: more static guarantees versus more simplicity and flexibility.



