GADTs Meet Their Match [pdf] (microsoft.com)
90 points by luu on Oct 9, 2015 | 11 comments



It seems like much of the difficulty this paper deals with comes from the fact that it is possible for expressions to diverge. Is there any case where this happening would not be an error? If not, then wouldn't it make more sense to leave the behaviour undefined and have the compiler emit warnings when it detects this possibility?


Since Haskell is non-strict, how it handles diverging expressions is part of the semantics. If the expression never needs to be evaluated, it doesn't matter if it diverges or not.

I can't think of how to translate that to the examples of the paper, but there are functions where this distinction is useful. For example, it makes sense that getting the head of a list does not care whether the tail diverges or not:

    head [] = Nothing
    head (x:_) = Just x

    > head (10:undefined)
    Just 10
However, if we add a redundant-seeming case, this behavior changes:

    head [] = Nothing
    head [x] = Just x
    head (x:_) = Just x

    > head (1:undefined)
    *Exception: Prelude.undefined
This example is a bit contrived, but I think it illustrates how the distinction can matter. I can't think of a self-contained example where this is obviously useful, but it does come up in practical Haskell all the time, sometimes without people realizing it. (I remember somebody mentioning that making Haskell strict caused a whole bunch of code and common idioms to stop working correctly.)


A common practice while working on a Haskell code base is to fill in not-yet-completed functions with `undefined`, which is a divergent expression. This lets you do things like write:

  func = undefined --TODO: implement func
and then you can use func in the rest of your code; as long as nothing strictly evaluates the result of func, you don't have to care that it hasn't been implemented yet. This lets you verify that the rest of your program still compiles properly, for example.

This means that preserving the lazy evaluation of undefined expressions matters. For example, it's perfectly OK to do this in ghci:

  Prelude> let myfunc = undefined::(Integer -> Integer)
  Prelude> let ints = [1, 2, 3]
  Prelude> let results = map myfunc ints
  Prelude> let square x = x * x
  Prelude> let results_squared = map square results
All of this is fine because nothing has actually forced any computation to occur, but I still get the benefits of asserting that my expressions are well-typed. If I actually force evaluation by asking ghci to show a result, I'll get an error:

  Prelude> results_squared
  *** Exception: Prelude.undefined

But it matters that this only happens when I actually force the computation somehow.
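
To make that concrete (continuing the same ghci session from above), asking only for the list's length forces its spine but not its elements, so it still succeeds:

  Prelude> length results_squared
  3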


Yes, the alternative to GADTs is to just let the compiler give you warnings. For example, if you have a pattern match over a list that you know is never empty, you will write something like

    -- gives error at runtime if list is empty
    case mylist of
      (x:xs) -> dostuff
      []     -> error "impossible"
or

    -- missing case in pattern match gives 
    -- error at runtime and a compiler warning
    case mylist of
      (x:xs) -> dostuff
What GADTs let you do is enrich your types with extra information so that you can write incomplete-looking pattern matches, prove that they will never fail, and avoid spurious warnings from the compiler.
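
As a minimal sketch of that (illustrative names of my own, not taken from the paper; it needs GHC with the GADTs, DataKinds and KindSignatures extensions), you can index a list type by whether it is empty, and then the compiler can see that the missing Nil case is unreachable rather than warning about it:

    {-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

    -- Promoted with DataKinds so it can be used as a type-level index.
    data Emptiness = Empty | NonEmpty

    data List (e :: Emptiness) a where
      Nil  :: List 'Empty a
      Cons :: a -> List e a -> List 'NonEmpty a

    -- No Nil equation, yet the match is exhaustive: a value of type
    -- List 'NonEmpty a can never be Nil, so there is no warning.
    safeHead :: List 'NonEmpty a -> a
    safeHead (Cons x _) = x

The coverage-checking algorithm described in the paper is what lets GHC work out that the Nil case cannot occur here instead of reporting it as missing.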

It's the old tradeoff between static and dynamic typing: more static guarantees vs. more simplicity and flexibility.


It's interesting how Haskell submissions most often score high but have very few comments here.


There's often not a lot to say. You read the paper, say "that's clever", and don't really have anything to add.


I consider those "aspirational" upvotes.

HNers like to think of this place as a community of smart people, so we upvote stuff we think smart people should enjoy, even if we effectively haven't paid attention to the content.


Waiting for insightful comments and summaries to be posted.


I think it's not specific to Haskell but more specific to CS papers. More accessible (and a bit more ranty) material quickly gets a number of comments, e.g. https://news.ycombinator.com/item?id=10300213


The main reason may be simply that it takes time to read such a paper. It takes me at least half an hour to read the first 3 sections.

It is always a real pleasure to read a paper like this, which dives into the matter from the first sentence, gives a good overview of the stakes as well as the concerns, and sketches the skeleton of the solution with convincing arguments.

Nonetheless, making insightful comments, or even pointing to related papers or alternative approaches, would require being more deeply involved in the domain.


Nobody understands Haskell, they just like to think they do ;)



