Programming With Types, Not Tutorials (fpcomplete.com)
142 points by mightybyte on Feb 25, 2014 | 115 comments



This article gives a nice glimpse into the essence of why modern strong static type systems like Haskell's are such a huge win for development. The type system gives you a way to bound the number of things that you must understand about an API in order to use it. The confidence that we get from having these guarantees makes code reuse orders of magnitude easier in Haskell than it is in pretty much any other language in mainstream use today.

All this reminds me of a comment made by a friend of mine a while back when he was learning Haskell. He said that programming in Haskell feels less like programming and more like putting together a puzzle. I think this article very nicely captures what he was getting at.


> ... like putting together a puzzle

It's funny how lots of good formalisms, notations, and DSLs all achieve this quality in different ways.

In particular, this reminds me of the mathematics of tensors, or Bra-Ket notation in QM. Both of these systems use notation to give each piece of the puzzle a particular shape, so that if you know all of the pieces you can often see immediately how they should fit together.

This seems (to me) to be the most important part of designing an API, and the brilliant strength of a good type system is that you get this quality for free.


> [...] get this quality for free.

Let's say, for much cheaper. Designing a good API in Haskell still requires work---but it's arguably easier to get right than in most other languages.


> This article gives a nice glimpse into the essence of why modern strong static type systems like Haskell's are such a huge win for development. The type system gives you a way to bound the number of things that you must understand about an API in order to use it. The confidence that we get from having these guarantees makes code reuse orders of magnitude easier in Haskell than it is in pretty much any other language in mainstream use today.

Nicely put. I think that's the clearest statement I've ever seen about why a powerful, expressive type system is a Good Thing.

Now, tell me how to get this point across to a programming-languages class for junior/senior C.S. majors, in at most a couple of days, starting 18 hours from now. ;-)


Sadly, that nice glimpse doesn't actually look all that nice. :(

It feels like someone showed me an image of a processor and said "see, you just have to line up the pins with where it goes." What could be simpler? If it doesn't fit the pins, it is the wrong processor.


To be fair, I was starting from approximately "here is where the pins go" in the types I wanted, and "here is how the pins are arranged" in the form of the libraries that could solve it.

The complaint I was addressing was that math-heavy libraries that lack tutorials are nearly impossible to use. I provided a constructive proof that it's not hard at all, if you understand the type system.


I didn't mean this so much as a critique of your post. I do believe it serves its target well. I just get annoyed with attempts to elevate it to "a huge win for development."

Indeed, looking back, I think I picked the wrong analogy. Your post seems to be saying that "without directions on how to put it together, if you know what all the parts of a bike do, you can figure out how to assemble it yourself."

There is certainly a fair bit of truth to this. However, it seems to presuppose a high level of understanding of all of the parts, at least if you hope for a speedy and correct assembly.


I think the better analogy is: all the parts are brightly colored, and if you try to put a red part in a blue hole the compiler won't let you. Okay I guess that's not really an analogy anymore, but, you get the point.


Same basic analogy, though. If the parts come such that they only fit together one way, then finding how to get them together can be deduced by looking at them.

Falls flat when either a) you don't fully understand the parts or b) you want to try a combination not strictly accounted for in the design of the existing parts.

Problem A is straightforward. Having in-depth knowledge of the "types" in a system has clear benefits. However, not really understanding the types is doable, provided you are strictly in the "assembly" phase of development. I believe that is what this post was about.

Problem B is the killer to me. Often "combinations not accounted for" include partial combinations. It is not unreasonable to want to make sure the wheels of a bike spin with the chain before the pedals are placed.


Have you played with Haskell at all? Partial combinations aren't a problem. Partial application is extremely well-supported. The goal of a type system is that the only combinations that don't work are broken combinations. Haskell isn't all the way there yet (imho dependent types will be necessary), but it's very close, and more than close enough for 99.9% of real work.

Completely agree on your assessment of problem A, and that that's what this post is about. Learning the actual math behind the constructions is definitely beneficial, but for people who don't have the time or inclination to do that, this post demonstrates that they can still get enough of the benefits to be productive.


I confused things by using terminology here. I did not mean partial as in "partial application." I mean partial as in "isn't fully built."

Consider, a bike that doesn't have pedals or even all spokes is a broken combination. You can still spin the parts that are assembled and see what will happen, though.

So, unless I am wrong on this, point B is still troublesome to me.

There is also a concern on whether the "assembly" phase is where time is best saved. How much more effort did it take for some elements so that they would have these "easily assembled" types?


The assembly optimizations come for free, in a sense, because what is really being optimized with the extra work is composability. Assembly time isn't all that important, but composability is absolutely essential.

Regarding things that aren't fully built, I really would need an example to be able to comment. I think the analogy has reached the limit of its usefulness :)


I think my point is best summed up by saying that I can still run a project that has failing unit tests. To my knowledge, I cannot run a program that fails type checking.

The argument for this is typically "once it typechecks, it works." However, sometimes I can see enough from the output of a piece of code that doesn't yet pass its tests to know whether the idea is even sound.


I think the only reason you see not being able to run a program that doesn't type check as a problem is that you are used to a workflow in which running a program is how you test your ideas. The workflow used to explore ideas while programming in a strongly typed language is of course very different. Personally I think it's a lot better, but that's just opinion.


That is an amusing personal attack. :)

I test my ideas by executing them, yes. Are you truly claiming that you have all of your ideas completely fleshed out before you test any of them?

That sounds incredibly impressive, if so. Can it be done in some cases? Almost certainly. I question whether it can in all, though. Even most.


You don't have to have them completely fleshed out. But you do have to have them fleshed out to the type level. The Haskell compiler can be an invaluable aid in fleshing out the design of problems, especially when they are complex and difficult to wrap your head around.


Certainly wasn't intended to be a personal attack. I'm not claiming that I don't test my ideas, I'm claiming that I do so in a different way. Rather than testing ideas by running the program, I test them by type checking them.


Most of my ideas I first test with a simple sketch of boxes on a whiteboard, going from there to some toy code to try out what I suspect are the tough parts of those boxes.

I realize that with liberal use of undefined, I can get something similar in Haskell. So I am not trying to claim it precludes this behavior, just that it is a rather drastic change to the flow many people are used to.
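
For instance, something like this sketch type checks with nothing implemented (hypothetical names, just to illustrate):

    -- A design sketch: nothing is built yet, but the wiring type checks.
    data Order   = Order          -- placeholder types, made-up names
    data Invoice = Invoice

    price :: Order -> Invoice
    price = undefined             -- "to be built" marker

    process :: [Order] -> [Invoice]
    process = map price           -- the compiler checks the shape today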

And really, I don't like claiming many people are used to this, as I can really only speak for myself.


I don't think they're quite as different as you think, but, it sounds like we agree. Two different ways of doing the same thing. To me, being different from what most people in software engineering are used to really doesn't sound so bad, judging from our industry's track record.


Oh, I definitely agree with that. And don't get me wrong, I actually want to learn and use Haskell somewhere.

I just get wary whenever I see the conversation leaning towards indications that it is a clear win. Seems like it is a qualified win.

Too many of the stories read more like your typical "developer learns new way of thinking, becomes better for it."

Which is ultimately what makes it so frustrating. When someone learns Lisp and becomes a better programmer, there isn't much for them to say other than "learning to think the Lisp way has helped me be a better coder."

There are those that get hung up on the metaprogramming of Lisp, which is amazing. But for the most part you can do that elsewhere now. The only thing left is that it really is a frame of thinking.

Contrast this with Haskell. Few stories focus on how it helped them see a more comprehensive view of types. Instead, the talk turns to how Haskell is a saving grace that the unwashed masses just aren't used to.


> To my knowledge, I can not run a program that fails type checking.

GHC added support for that. (They jokingly called it Python-mode.) See https://ghc.haskell.org/trac/ghc/wiki/DeferErrorsToRuntime
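
For the curious, a minimal sketch of how the flag behaves (assuming GHC 7.6 or later; the type error becomes a compile-time warning and a crash only if the offending expression is evaluated):

    -- Compile with: ghc -fdefer-type-errors Main.hs
    main :: IO ()
    main = do
      putStrLn "this line still runs"
      putStrLn (length "oops")    -- ill-typed: length returns an Int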


Wow, missed this post. Will have to look into that. Very curious how that works. Thanks!


That's definitely how I felt while learning Haskell: Like putting together a jigsaw, which reveals itself when it's done, rather than the usual plumbing and hoping there are no leaks in the end!


I honestly have no idea what all of that does. I'm sure I could work my way through it, but still... in other languages you can get a glimpse of the purpose of a program even if you don't know the syntax. You can tell whether it is a parser, a game, a Hello World, or a web application.

The other thing I dislike besides the opaqueness of Haskell is what another commenter described as "mechanically filling in" of code. I get that feeling every time I try strongly typed functional languages. I had the same feeling when I started with C++ and struggled with `const`. I had an idea in my head, but couldn't convert it to code because the compiler didn't allow it. Instead of helping me, the syntax slowed me down, took me out of the "flow". It rarely, if ever, helped me avoid bugs. Now, I've probably learned avoidance strategies, and rarely have problems with `const` anymore. Haskell gives me that old feeling of programming in a straitjacket, multiplied by 1000.


If you don't know functional programming, of course you won't be able to understand functional code at a glance. You wouldn't expect someone who doesn't know imperative programming to intuitively understand imperative code at a glance either. It's not opaque, it just requires learning.

Not being able to convert the code in your head into type-safe functioning code just means that you don't understand the code well enough yet. That's not a bad thing about you or about the language, it's just the state of things. Once you gain that understanding, you'll be able to write code that compiles, and you'll also discover that now the compiler is your best friend. Writing Haskell is extremely comforting to me, because I know that the compiler has my back. It's not a straitjacket, it's a guardrail alongside a mountain highway.


I don't think it's about functional programming, or even type systems. It's about all the idiosyncrasies Haskell throws at the programmer. Even the author adds at the end:

> I still don't really understand free. I definitely don't have a clue what kan-extensions are in general, or what Coyoneda is specifically.

So, he got a working program (because it's assumed it works if it compiles), but he can't explain why, or what the components are or do. I'm sure there are a plethora of strong points about Haskell, but his conclusion that you can just gloss over it is not making a good case for it in my opinion; it's actually a huge red flag.


I think you misunderstood the article. He knows exactly what the components allow him to accomplish, he just doesn't know how exactly they work or any of the math behind them. This is directly analogous to the situation in every other programming language: very few programmers know exactly how every library that they use works. This is good, because in practice there isn't time to learn the intricacies of every one of your dependencies.

To be clear, this also has very little to do with Haskell. kan-extensions is a library, it's not a part of Haskell itself. His point is that if you had a kan-extensions package in some other language, it would be a hell of a lot more difficult to use it in your program without learning the math yourself. And again, the key point here isn't that you can gloss over what the library allows you to do, it's that you can gloss over the complex abstract mathematics that went into the creation of the library. You can get the benefits of free monads without taking a course in category theory. That's definitely a good thing.


In addition to what nbouscal pointed out, I'd just like to say that I very explicitly did not assume it worked just because it compiled. The last code block doesn't just have the miniature library, it also contains code that uses it, and that code works perfectly. That's the opposite of assuming that because it compiles, it works.

Though the only way for something with those types to not work when it compiles would be for it to crash or go into an infinite loop. I didn't use any constructs capable of either of those, so it was very unlikely for it to fail once it compiled.


I always try to write code that my favorite English professors would approve of.


My favorite English teacher would have written "So what?" at the bottom of most of my code. :(


Think of how you looked at C++ code before you learned to program and you'll see the same phenomenon. Very little of your knowledge about imperative programming transfers over to Haskell so it's not surprising that you look at it and see gibberish. It's the same way many people can pick up Latin-derivative languages quickly after learning a few, but find that their knowledge doesn't transfer and struggle to learn Mandarin or Russian.


Hmmm... Not really. People familiar with BASIC can read most C++. People glancing at a Russian Newspaper see sentences, words, stories.

I've seen a fair amount of lambda style code, prolog, and so on. This code just looks ugly.


Author of the article here.

I learned to program very young. I was fluent in BASIC as my first language. I couldn't read the slightest bit of C++ when I was first exposed to it. The syntax is hugely different because it's optimized for handling features. There are new operators like ++ everywhere. There's crazy template stuff. I couldn't even start to guess at what the various unary * and & operator applications meant.

I eventually learned enough C++ to make sense out of those constructs. The reason they show up is that they make sense for operating at the level C++ is intended to operate at.

Haskell introduces a lot more new ideas than C++ ever did. It has radically different syntax than C++ because it makes it easy to work at the level Haskell is intended to work at.

There is no way in which the code in that article is ugly. It's very simple, very clean code that uses ideas you've never been exposed to, in syntax that reflects those ideas.

And I'm glad to have backup from someone else who knows the language in a sibling comment.


> People glancing at a Russian Newspaper see sentences, words, stories.

Of course they do. That's because a Russian newspaper is using the same design language as an English newspaper. But the semantic language itself isn't legible. You can't tell that яблоко is a kind of fruit by "glancing" at it, unless you know Russian.


For what it's worth, this is perfectly standard Haskell code, perhaps even on the simpler side.

(Not saying that you have to like it. Just giving perspective.)


Learning functional programming is not trivial, especially if you come from the imperative paradigm. Haskell was my first language and I still struggled not to think procedurally in the beginning, and the type-checker refused almost all of my programs. But after a while you get fewer and fewer type errors, and soon types will be your friend and guide you when writing programs.

I have to say, though: static types themselves don't prevent bugs, they only prevent common errors. But if you have an expressive type system like Haskell's (or OCaml's, etc.) you can write well-typed programs. A great example is Yesod[0], which is a web framework completely designed around being well-typed.

0. http://www.yesodweb.com/

EDIT: Okay, "well-typed" may not be the right term. TAPL calls every program that passes the type-checker well-typed, so what I meant is programs written in a type-safe way.


It's a matter of using the types to express meaningful constraints, so that well-typed is more likely to mean correct.
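
A tiny illustration of what I mean (made-up types):

    newtype Meters = Meters Double
    newtype Feet   = Feet Double

    -- A function that wants meters can no longer silently accept feet,
    -- so "well-typed" now rules out a real class of mistakes.
    descentRate :: Meters -> Double
    descentRate (Meters m) = m / 10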


For what it's worth, my own experience has been that becoming fluent with the functional style makes the imperative style seem really bizarre. I've made multiple attempts to learn Go, but for the life of me I can't; it seems so byzantine and archaic, like a creaking, inconsistent Rube Goldberg machine. The feeling in functional languages that everything is an expression and evaluates to a value actually feels really liberating; everything can be thought of as a series of transformations.

This is not meant to argue with you, merely to provide the viewpoint that imperative programming is yet another valid way of performing computation, not necessarily the _natural_ way.


To be fair, not everything makes sense as a value, which is why Unit or () exists: things that cause side effects still need to evaluate to some value.
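
The standard example from the Prelude:

    -- putStrLn is run only for its effect; () is the value it produces.
    greet :: IO ()
    greet = putStrLn "hello"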


That's exactly why I'm programming way more Haskell than Go these days.


Sort of off-topic, but wow, this is a really neat presentation format for tutorials. Love the markdown source[0] and the "open snippet in IDE" features.

[0]: https://www.fpcomplete.com/tutorial-raw/5133/cf0de9edd526ea5...


Is this a good thing? The development process in this post appears to involve a mostly-mechanical filling in of function definitions to match a pre-given type signature; but if it's mechanical, shouldn't the compiler be able to do it for us?


The compiler is doing a lot of it for us.

> Thanks to GeneralizedNewtypeDeriving, I got the three instances I wanted for free via F.

A good bit of what you're seeing as "mechanical" is actually the simplicity achieved because Haskell is expressive enough to get very close to the abstractions that we really need. Haskell allows you to decompose problems in ways such that each step along the way is very simple and in many cases obviously correct.
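
For reference, a minimal sketch of what that extension buys you (not the article's actual code):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}

    newtype Score = Score Int
      deriving (Show, Eq, Ord, Num)  -- instances borrowed from Int for free

    total :: [Score] -> Score
    total = sum                      -- works because Score now has Num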

Obligatory Hoare quote:

> There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.


Haskell is absolutely fantastic. In C, when I'm given only the prototype of a function like:

> int foo(int, int);

Even if you tell me it is referentially transparent, I have no idea what that bloody function really does.

But with HASKELL, with just the type signature you can write a program that does what you want even though you don't know what the components you used do! Mind/Blown.


I love Haskell, but I think sometimes people give it more credit than deserved. So you say

> int foo(int, int);

Doesn't tell you anything. Ok, you are right! But that code translated to Haskell would be:

> foo :: Int -> Int -> Int

How does that give you more information about foo than the C code above?

Now if you say you use type synonyms in Haskell, I would agree, but you can use typedefs in C as well.


Well it does tell you that there are no IO computations with side effects. So you really are restricted to the Int type and the operations that can be performed on it.

You get much stronger statements by using type variables (not type synonyms). This is actually laid out rather clearly in Theorems for Free by Wadler[1]. The essence is that the more generic a type signature gets, the more you can reason about its behavior. If you want to use (+), (-), (*), or (/), for example, you'll need to include a Num constraint.
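
A quick sketch of that gradient:

    f :: Int -> Int -> Int       -- concrete: could compute almost anything
    f x y = x * y + 42

    g :: a -> a -> a             -- fully generic: must return an argument
    g x _ = x

    h :: Num a => a -> a -> a    -- the Num constraint unlocks arithmetic
    h x y = x + y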

[1] http://ttic.uchicago.edu/~dreyer/course/papers/wadler.pdf


> So you really are restricted to the Int type and the operations that can be performed on it.

That's so vague it's almost meaningless, and doesn't support the original assertion, which was "But with HASKELL, with just the type signature you can write a program that does what you want even though you don't know what the components you used do!"

Even given a simple thing like foo :: Int -> Int -> Int, there's really no way to tell what it's doing. It could be multiplying the arguments together, dividing, adding, computing pseudo-random numbers using two seed values, ...

Not only that, but I'm not sure it's true that it's limited to the Int type because there are things like System.IO.Unsafe.


> Even given a simple thing like foo :: Int -> Int -> Int, there's really no way to tell what it's doing.

Ahh, this is an important point. The reason you can't tell what it's doing is because it is quite concrete. One really cool thing about this kind of type system is that the more generic your functions, the more you can tell about what it's doing. Consider this function:

    foo :: (a, b) -> a
There's only one possible thing this function can do! And there's only one possible implementation--the correct one. This is the case because the types involved are completely general. Type signatures can communicate a lot more than one might think.
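
Spelled out, here is that single implementation (the Prelude calls it fst):

    foo :: (a, b) -> a
    foo (x, _) = x   -- nothing else type checks (setting aside undefined)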


That's all well and good with a trivial function like foo :: (a,b) -> a, but for most non-trivial functions it won't be so obvious, and that's exactly when it's most helpful to know what's going on.


a.) Many functions in Haskell are this trivial. map, fold, etc...

b.) Functions that are non-trivial will have additional constraints often hinting exactly at what the function does.

Most important is not the fact that you can write poor Haskell code which does not exhibit this self-documenting behavior, but rather that if you choose to exploit the type system, it has the expressiveness to allow you to do so! (and enforce it)

If you are working with a Haskell library and see a function with the signature (IO Int) you may start to question your choice of library :)

The only problem is, once you get a taste of this power it is easy to yearn for even more such as dependent types (i.e. Agda, Idris...)


> foo :: (a, b) -> a

> There's only one possible thing this function can do!

Ah, so it scales a vector by a scalar returning a new vector, right?


No, it is impossible to write the function you described on that type. There is no function "scale" that works polymorphically on any type 'a'.

The parent you are responding to is quoting a proven result of computer science.


But what percentage of "real world" functions work on completely generic types? In reality most useful functions take more specific types, and those types allow many more operations to be performed on them. In general, it's not really possible to tell what a function does just by looking at the type signature.


You aren't forced to go straight from fully generic types to specific types. That is the entire point of type class constraints which give you fine grained specification of behavior.


Define "real world functions" :)


foo = undefined

And all its brothers :)


>There's only one possible thing this function can do!

That's not at all true... it can do all sorts of transformations on the first value of the tuple. I suppose without type restrictions on the parameters it can't do too much because there are relatively few functions defined on all types, but then, how often are you going to see a function signature that vague?

More likely, you'd see

`foo :: MyClass a => (a, b) -> a`

or

`foo :: (MyType, b) -> MyType`

which opens up the possibilities considerably, because you don't know which of MyType's behaviors is being invoked.


That's exactly it: (a) there are no functions at all defined on all types except id (of type (a -> a)) and (b) you end up seeing signatures close to that vague all the time. This is exactly the power of parametricity.


> [...] you end up seeing signatures close to that vague all the time.

That's not an accident. Factoring out little helper functions like this is considered a virtue in Haskell (and according to Paul Graham, also in Lisp).

Even if you only use them once---because they make reasoning easier.


Agreed! I think the tendency toward highly parametric functions comes from several sources. First, good programming practice encourages decomposing entities, leading to more flexible functions. Second, once you have higher kinded types you realize that 90% of programming is wiring between contexts and is more parametric than people think. Finally, HM typing auto-generalizes all of your let bindings, which means your compiler can inform you of opportunities for genericity that you weren't even aware of... so long as you ask.


In my mind what I wrote was obviously sarcastic, but apparently it wasn't. Poe's law strikes again I guess.


My C functions look more like

    thing_t *find_thing(thing_index_t, thing_t *things);
Where my indices are wrapped in a single-element struct so that I get some type safety. You can write bad code in any language.

The Haskell things that I miss most in C are the parametric polymorphism and the structural pattern matching. Even so, I've been able to encode some surprising invariants in my C types (with occasionally a little bit of manual help)...


It really is awesome! If you see a value with this type:

  (bool -> bool)
You know it has only 4 possible implementations, only 4 things it could possibly do: constantly false, constantly true, not, id. (Ignoring infinite recursion or exceptions.)
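
In Haskell spelling, those four inhabitants are:

  f1, f2, f3, f4 :: Bool -> Bool
  f1 = const False
  f2 = const True
  f3 = not
  f4 = id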

The list functions are a rich place to explain this to people.

  (a list -> (a -> b) -> b list)
If a declared function has this signature, there's only one reasonable thing it is: map. (It could return the last element only, or an empty b list, but those aren't so reasonable.)

What about this?

  (a list -> (a -> bool) -> a)
It'd seem it returns the first item from the list that matches the predicate. Possibly the last item. (And an exception if the list is empty or item isn't found.)

Other people aren't as impressed by this as I am, though.


Yes but those are trivial examples. What about a math library? The type signature isn't going to tell you which trig function to apply.

I'm a fan of static typing, but this sort of guessing based on type signatures seems like a bad thing, actually. It's like when people use autocomplete in their IDE and pick anything that looks like it might work, instead of reading the documentation (and maybe the source) to understand what they're using. It's understandable with beginners, but for people who should know better it's willful blindness.


It reminds me of dimensional analysis in physics. That works really well when the number of possibilities are low and the system as a whole is contained enough and well-constructed enough that there aren't many pitfalls.

But everyone knows physicists can't manage factors of 2 and pi. :-)


I'm not suggesting that you decide "which function to use" based off a type signature. But that the type signature provides a lot of useful constraints to help you reason about things. And those examples point out how in some cases, just with the type signature you can determine exactly what the function can do.

The examples show the difference from non-pure languages or functions, where "someList.Bla()" returns "void" and might have done anything to the list, from removing all items to reversing them, to calling Bla() on each element inside (which in turn might do who-knows-what).

True, though: without pure functions the benefits aren't all there.


It actually can, using generic type signatures with constraints. Once you use a concrete type like Int you are severely limiting the expressiveness of the types. Which is fine in some scenarios.


My understanding is that Agda will do it for you. I think GHC is getting something similar with type holes, too.
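
A small sketch of the GHC feature: a typed hole, written _, makes the compiler report the type needed at that spot.

    swap :: (a, b) -> (b, a)
    swap p = (_, fst p)
    -- GHC reports something like: Found hole '_' with type: b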


Agda has proof search, yes. But for example if you write down the expression for `and` with type holes:

  and : Bool -> Bool -> Bool
  and a false = { }0
  and a true = { }1
It will deduce the following which is well-typed but not correct.

  and : Bool -> Bool -> Bool
  and true false = false
  and false true = false
If the types are precise and highly constrained it works well, but it's imperfect.


I think you left out the wrong case (and another right one).

Also, while this is the same functionality, it shouldn't really be expected to work well when dealing with concrete types - you're not nearly as constrained as when you have a bunch of polymorphic terms in the relevant type signatures - like was the case in the article here.


Haskell has Djinn. Not as powerful as Agda.


It's a very good thing that when there's only one way to get the correct result, the compiler forbids everything else.

Would you prefer the compiler allow incorrect code?


All the Python fans say "yes".


Not all.

PyCharm has excellent static analysis capabilities for this. One just has to hint it with `assert isinstance(spam, Egg)` on occasion.

It catches tons of bugs before the interpreter ever hits them. Which is way nicer than getting an exception notification email when some live customer hits a 500 page because there's a TypeError in some obscure execution path you've never really tested.


GHC recently added that as a feature.


I'm just starting to learn Haskell, reading "Learn You A Haskell..." I didn't realize that Haskell documentation isn't considered very good. Does HN agree with that? Are there any must read documents besides the official ones and Learn You A Haskell to mitigate that problem?


Part of the issue is the wide spectrum of Haskell code that people write. Beginners will write code that is radically different from the code that more experienced programmers write. That's not to say that code written at the Learn-You-a-Haskell or Real-World-Haskell level is bad, but there is a wide range of choices in how to write Haskell.

Haskell also tends to have a wide gap between beginner material and advanced material, and not a whole lot in between. Some beginners will come and see advanced discussions about the lens library, for instance, and will assume that they need to learn the lens library in order to program in Haskell. This just leads to disaster, since it's nearly impossible for beginners to decipher the type error messages that come out of it until they grok the type system at a more advanced level.


A lot of intermediate Haskell material (and advanced Haskell material) is locked up in a series of papers. They're usually well-written, but the intermediate Haskell canon is not well-defined nor is it in a comfortable format; I often find that when I recommend a paper to someone they have a tendency to be turned away... even if it's a tutorial paper not so different from a blog post.


If someone started a blog where they give an approachable overview of all these papers, would that be helpful? Would such a blog be considered rude since many people would read the blog summary instead of the actual paper?


Okay, this is a question for the parent of the comment I'm replying to. (I couldn't reply to tel for some reason.)

Do you have any example where the only documentation available is scientific or technical papers? I've been learning Haskell for a year now, and even though I've read some papers out of interest, I've never felt like they were the only form of documentation.


I'd say that people often point intermediate Haskellers to a number of major papers like Data Types A La Carte or the Typeclassopedia. The entire set of Functional Pearls is released in academic paper form and provides a really wonderful glimpse into "advanced" Haskell style. You also would probably want to take a look at many of the things that Jeremy Gibbons or Oleg Kiselyov have written.

I could think of others if I dedicated some time to it, but those come up fairly frequently.


I think that'd be a wonderful idea. Many of these papers have important depth that would be difficult to cover in a blog post, but the blog could easily serve as a nice directory.


The blog would be wonderful. Or contribute to the Haskell wikibook or FP Complete's School of Haskell.


There are some good beginner tutorials, but beyond that the documentation is absolutely terrible. The typical documentation for a Haskell library will be a page that lists all of the modules in the library with a link to a page for each module. The page for each module will be a list of functions in the module with a type signature, and if you are lucky a one sentence description of what the function does. There will not be a single explanation or example of how to actually use the library.


... and for the majority of things, that's absolutely fine. Function name + type signature + short description is very frequently plenty.

decodeUtf8 :: ByteString -> Text

There are also quite a few things that are thoroughly documented, sometimes not in the Haddock (which, there's some argument, should be kept simple like a reference). Yesod has a book, and I hear Snap's getting one.

There are certainly cases that both lack thorough documentation and need it, but there aren't as many as you imply.


I started reading "Beginning Haskell". So far the most practical Haskell learning resource I've encountered. http://www.apress.com/9781430262503


Part of the reason intermediate documentation is a bit lacking is the incredible usefulness of the #haskell freenode IRC channel. If you ever have any questions about haskell, feel free to drop by and ask. There's usually at least one person around who'll be willing to take the time to explain it in as much depth as you need.



This is a rather frightening article. Why is it considered a good thing to not know anything about the components that make up your program? This seems like blackbox abstraction taken to its absurd conclusion, where programmers simply wire together type compatible components without any idea of what they're actually doing.

What does the example program even do?

I think this article is one of the best explanations of the visceral reaction some programmers have to Haskell. There's a lot of theory behind Haskell's semantics and libraries, and some people just do not enjoy using something without having any clue of how it works or how it's implemented. Even a hand-wavy naturalistic explanation is better than a dismissive "you don't have to worry about that".


Are you implying developers should know the inner details of every library used?

Realistically most devs read documentation or Google to get something done. They only dive into the library if the documentation is unclear or something is broken. Reading well-written libraries is a good learning experience, but not a prerequisite to using one.

In Haskell a function's type signature is expressive enough to exist as rudimentary API documentation.


The library described in the article gives you a way to take an arbitrary data type, and treat its values as the primitive actions of an embedded domain-specific language (that's what singleton does), and then write interpreters for that language by saying how those primitives should be interpreted (which is what interpret does). All (or at least most of) the control structures for the language come for free, because there's an instance of Monad.

This tends to be quite helpful especially in the case where there are multiple interpretations you might be interested in. For example, a graphics library might provide an interpreter that executes drawing primitives using OpenGL to put stuff on the screen directly, as well as one which interprets the same data by manipulating arrays of pixels. Perhaps another could construct an SVG.

To do that, you'd provide a data type whose constructors would encode whatever drawing primitives your library supported, things like circles and lines and Bézier splines, and then use the interpret function to define each interpreter by supplying a function which takes a given primitive and says how to carry out whatever actions are needed to be taken to display or render it.

(Note: this example is likely a little simplistic, because most of the operations in a drawing library won't have return values, but this set-up allows for such things. You could also make a version where the things drawn also continuously produce values by reacting to user input, allowing for some sort of optional interactivity.)

You'd also likely define a bunch of user-facing versions of the primitives which would amount to singleton applied to each of the constructors of your data type, just for the sake of convenience and implementation-hiding.

Then you have a library where people can define complex drawings in terms of the primitives you provided, and supply any one of those to any one of the interpreters you defined in order to put it on the screen, or write it out to an SVG file, or whatever.

For no additional work, you get operations like

sequence :: [Drawing a] -> Drawing [a]

from the monad library, for taking a list of drawings and producing a drawing which carries them out in sequence (whatever you've decided that ought to mean via your interpretations, probably stacking them on top of each other), as well as many others.

A more interesting example (in that it would involve input as well as output) might be a library for coordinated communication of some sort over different types of underlying network protocols (necessitating different interpreters for the primitive communication operations). Again, you'd get lots of control structures for free, things like for-each loops, and ways to define new bits of a protocol by applying a function to the results of a bunch of smaller pieces of communication.
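
A minimal sketch of that shape, using the free package directly with made-up drawing primitives (not the article's code):

    {-# LANGUAGE DeriveFunctor #-}
    import Control.Monad.Free (Free (..), liftF)

    -- The primitives of the DSL, as a plain data type:
    data DrawF next = Line (Int, Int) (Int, Int) next
                    | Circle (Int, Int) Int next
                    deriving Functor

    type Drawing = Free DrawF

    line :: (Int, Int) -> (Int, Int) -> Drawing ()
    line p q = liftF (Line p q ())

    circle :: (Int, Int) -> Int -> Drawing ()
    circle c r = liftF (Circle c r ())

    -- Control structures come for free from the Monad instance:
    house :: Drawing ()
    house = do
      line (0, 0) (0, 2)
      line (0, 2) (2, 2)
      circle (1, 3) 1

    -- One interpreter among many: render to a textual log.
    render :: Drawing a -> [String]
    render (Pure _)              = []
    render (Free (Line p q k))   = ("line " ++ show (p, q)) : render k
    render (Free (Circle c r k)) = ("circle " ++ show (c, r)) : render k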


It seems to me that this article actually proves the opposite of the point it's trying to make.

I have no idea what the program presented is supposed to do, or what problem is being solved, despite knowing all the types being used.

I swear the point of Haskell is to make people using Haskell feel smart :-)


A simpler example would be a basic list operation; given its type signature, you can guess what it does.

    [a] -> a
That takes a list (brackets denote a linked list) and returns a single item. There is a small list of things this could be. It can't filter or order in any way, since there are no restrictions or functions provided; it can't aggregate for the same reason. That leaves either the first item or the last item.

[Yep head and last](http://www.haskell.org/hoogle/?hoogle=%5Ba%5D+-%3E+a)

Looking at that list we can see some other alternatives that I mentioned

    foldl1 :: (a -> a -> a) -> [a] -> a --Aggregation
    maximumBy :: (a -> a -> Ordering) -> [a] -> a --Ordering
    maximum :: Ord a => [a] -> a
    (!!) :: [a] -> Int -> a --Indexing
    sum :: Num a => [a] -> a --Aggregating
    product :: Num a => [a] -> a
And if you followed along, you will notice all of his implementation points were straightforward.

Note that all of this assumes you have very well-written APIs that follow this chain of logic; otherwise you can write things like this.

    second :: [a] -> a
    second (_:i:_) = i
It compiles (modulo a non-exhaustiveness warning) but is rather odd and would throw off this line of reasoning. Similarly, by abusing unsafe methods you can do crazy things.


> [a] -> a

> That takes a list (brackets denote a linked list) and returns a single item. There is a small list of things that this could be. It can't filter or order in any way, since there are no restrictions or functions provided, it can't aggregate for the same reason, that leaves either the first item or the last item.

Assuming that lists can be empty, it can't be the first item or last item either, as either of those (or the second, or third, etc.) would have to be [a] -> Maybe a

Haskell doesn't actually do this, which (while there may be pragmatic reasons to have these functions work on a type that isn't constrained to non-empty lists) is kind of a failure in the logic of types, and it makes head (and last, etc.) subject to crashing. And if you pretend that you can use type-based logic, and what the type signature can guarantee, to infer the semantics of returned values, then you really have to follow the logic alone -- once you accept that there can be unstated assumptions of the kind Haskell actually makes, you open up lots of other possibilities, depending on which unstated assumptions not implicit in the types are made.


Your function second is no worse than head or last. I think you chose a difficult example. [a] -> [a] or a -> a would be simpler, if a bit boring.


I dunno, I like second being a bad choice; the number of times it is useful is very small. `!! 1` is much more straightforward. In contrast `last` is impossible to do that way, and `head` is part of the underlying data structure (as in it is a linked list, so has head and tail) so is a gimme.


> It can't filter or order in any way, since there are no restrictions or functions provided, it can't aggregate for the same reason, that leaves either the first item or the last item.

That doesn't logically follow. Technically it can return any item in the list. Without some kind of documentation there's no way to know.

Saying "you can guess what it does" doesn't buy me anything over any other language. I can guess what a function does in C or C++, too, and in my experience I'll guess right in C and C++ more often than I will in Haskell.


> Technically it can return any item in the list

But that is all it can return. By looking at the name I know with 100% certainty what it is.

> I can guess what a function does in C or C++, too

Never without the name. Too few functions are actually pure, and there's no difference between a pure signature and a non-pure one.

Heck if it is C or C++ it could cast all the void* to int* and find the first one that is non-null and points to a positive value.


"I have no idea what the program presented is supposed to do, or what problem is being solved, despite knowing all the types being used."

This is an absolutely fair observation. The code being shown is basically chowells79's recreation of a library that I wrote about a year ago:

http://hackage.haskell.org/package/free-operational

Basically, it really only makes sense once you've studied and grasped the key ideas behind a handful of other things: (a) functors, applicatives and monads; (b) the `operational` monad (another library, which mine recreates and expands on); (c) free monads (yet another library).

If you look at the link above, you'll note that I've chosen to follow the opposite approach—I provide links to other sources that provide the background context for what the library is doing. Elsewhere in the documentation I provide extended (but toy) examples of how to use it, of which I'd highlight the one in here:

http://hackage.haskell.org/package/free-operational-0.5.0.0/...

That page's example demonstrates a toy implementation of parser combinators and three different ways of using it: (a) parsing strings, (b) enumerating the strings that a parser accepts, and (c) optimizing parsers by merging paths with common prefixes.

So basically, the library is a kind of interpreter construction toolkit. A very abstract one...


I suppose it is a bit ironic that I chose to build a library that really does benefit from higher-level explanation.

That's what happens when you build software engineering tools - you have to convince programmers of their value, even if they see how to use them.


Pretty depressing if you ask me. You're limited to just plugging libraries together if all you understand is the type system. Hey it compiles and runs without error. Congrats. But does it do what you want? Impossible to say.


It takes time, man. Understanding is a process. You have to start somewhere. Good for him; it was all Greek to me.


The idea that you don't need tutorials because of the type system is absurd, another whopper along the same lines as Haskell programs being bug-free.

If nothing else, you may need a tutorial to learn to use the complex type system.


That's why I was very precise in my statement.

    The great thing about Haskell is that for many purposes, you don't need to understand the math or have a tutorial to use a new library.
and

    But a complaint that you can't use most libraries because they involve math you don't understand and don't have any tutorials? That's one of the silliest things I've ever heard.
Note the word "library" in both of those. I was very precise. Library documentation isn't lacking the way some people complain loudly that it is.


Yes. More documentation wouldn't hurt, though. The type system is more expressive than most other languages', but not that expressive on an absolute scale.


My issue with this is that, quite aside from my lack of understanding of what's going on, the author is constantly professing their lack of understanding of the underlying details.

> I still don't know a thing about what F or Coyoneda mean. But it isn't really slowing me down.

They're essentially cargo-culting signatures, and while sometimes that can be enough to stimulate proper understanding, I worry that the first time something goes wrong the author will be hard-pressed to solve it. It seems like a useful way to stumble around in unexplored areas; it doesn't seem like a good way to build software you're going to have to maintain and support.


I couldn't maintain or understand the low-level code Java depends on without great effort (I know I could if I devoted enough time, which I usually don't have). Same for low-level libraries for C or any other language.

I think the argument here is that -- given that there are no problems with the library; otherwise you're screwed in any language -- type information for Haskell libraries gives you more useful information than docs & tutorials for libraries in other languages.

The author repeatedly asserts that he is not scared of math, and would probably be able to understand the libraries he uses if he spent the time. It's just that he doesn't need to.


bcjordan: sorry, I meant to up vote and my thumb missed, hitting the down arrow.

fpcomplete is a nice system for learning. I like the live code and the IDE mode. I signed up for a year paid account, about two weeks ago.


As a long time programmer, this gives an insight into why I avoid Haskell. Syntax matters, in part because it has such an influence on style. When I look at the code, I can't read it. There seem to be poor choices on choosingcasesformultiplewords that dependOnContext but are still unreadable.

I can't help feeling that the reserved terms, such as Functor, Applicative, F, and liftF bring the chaos to the thinking process that is reflected in the article. I skimmed it, tried to read it, and still have no idea what the person was even trying to do.

This is a language that discourages clarity.


Where are you seeing any discrepancies from camelCase? The only thing I see is that types and constructors (i.e. Functor/Applicative/F) start with capital letters.

Also, to be frank, many languages give a false sense of knowledge portability. "As a long time programmer" may not carry you so far into Haskell (or Prolog, or Forth) unless you've seen something else in their paradigms.


> Also, to be frank, many languages give a false sense of knowledge portability.

This is true; there's a very common psychological bias: people experienced in one domain, when encountering a new subject they find difficult, will sometimes blame the new subject matter as flawed or overly complicated instead of recognizing the limitations of their own knowledge.


> I skimmed it, tried to read it, and still have no idea what the person was even trying to do.

The fact that you haven't learned enough Haskell to understand the article after skimming it doesn't make Haskell problematic.


Exactly! Syntax matters!

Haskell's syntax is radically different because it matters that it is. It's not different to obfuscate or be obstructionist.

It's different because it's better, for the features of the language. Function application via whitespace is a key feature to make automatic currying pleasant. Automatic currying is a key feature to make partial application pleasant. The patterns continue.
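
Concretely (a trivial sketch):

    add :: Int -> Int -> Int    -- really Int -> (Int -> Int)
    add x y = x + y

    increment :: Int -> Int
    increment = add 1           -- currying makes this natural to write

    bumped :: [Int]
    bumped = map (add 1) [1, 2, 3]   -- partial application inline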

Haskell's syntax wasn't chosen by throwing darts at a dartboard. It wasn't chosen to upset C programmers. It was chosen to optimize simplicity and clarity of code that uses the language.

It missed the mark occasionally, but overall it did a good job. Syntax does matter, and Haskell is proof that using higher levels of abstraction benefits greatly from syntax that encourages it.


Historically, most of Haskell's syntax was chosen to be similar to other functional languages existing at the time. I don't know where ML---the grandsire of Haskell's family of languages---got its syntax from in the first place.


Functor, Applicative, F and liftF are not reserved terms. What makes you think so?

They are user defined (or in this case, library defined).



