A brief introduction to Haskell, and why it matters (github.com/pchiusano)
174 points by DanielRibeiro on March 7, 2014 | 169 comments



> Haskell is in some ways a nicer language for functional programming than Scala, and if you are serious about learning more FP, we recommend learning it.

I'd take it as far as to say that, if you're a developer of any sort, you should learn at least some Haskell at some point. It does things in such a completely different way from more mainstream languages that the sheer mind-opening effect of learning it is worthwhile.


Even after working with just a bit of Haskell, I feel that my JS and Ruby development has gotten tighter.


I have about half a year's worth of Racket (acquired over around a year and a half) under my belt, and intend to learn Haskell in the coming two years. Is there anything startlingly different in Haskell compared to the Lisps?

EDIT: Thank you for the responses; I have bookmarked the relevant GitHub page and feel like my question has been well answered :) Also, I'm even more interested in Haskell than I was before.


First, as several people have already mentioned, types, and in particular type-directed programming. In Haskell, I often find the type of a function to be some of the best documentation for it, and I write the type of a function first to describe what I need to write. Partly that's a property of having a strong type system, and partly it's a product of the Haskell culture to want to pack as much semantic content into the type system as possible to help ensure correctness. It's shocking how often you're done the moment you get a new addition to the program to compile and typecheck; it's also shocking how painful it is to go back to C or a dynamically typed language once you're used to that level of static typechecking.

Second, expressivity and modularity. I find that Haskell programs tend to have two types of functions: very short functions (a few lines long at most) implementing the logic of one particular operation (heavily composed of other such functions), and long but extremely simple functions expressing high-level program flow. Good Haskell style encourages writing functions with the most general type they can work with, which results in many functions becoming very general, reusable helpers. As a result, I rarely find myself staring at a half-dozen functions at once trying to figure out what one does and how logic and state get threaded through all of them; each individual piece makes sense in isolation. That's also quite nice when trying to refactor a function.


The single biggest thing you'll notice is how much time you'll devote to thinking about types, and how that makes your code much more declarative.

Even casually skimming through the GetOpts implementation in Cabal should give you an idea of what I mean:

https://github.com/haskell/cabal/blob/master/Cabal/Distribut...


> thinking about types, and how that makes your code much more declarative.

it's not that it's more declarative (tho that becomes a side effect), but that thinking about the type forces you to model your problem domain in a formal way, which brings to the forefront all of the problems that would otherwise have been hidden away as undeclared assumptions.

E.g. in Java or C++, a class can often be in invalid states, and validity is only enforced procedurally via programmer checks, whereas in Haskell you often have to model the problem holistically as a type, and all values that the type can take on must be accounted for. Thus, you hit the edge cases right up front, instead of finding out later (as bugs).
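A rough, hypothetical sketch of what that looks like (the Session type here is invented purely for illustration): a sum type enumerates exactly the valid states, and the compiler (with warnings on) flags any case you forget to handle, rather than leaving it to surface later as a bug.

    -- Only two states exist, and a user name can only exist in the
    -- LoggedIn state; there is no half-initialized value to check for.
    data Session = Anonymous | LoggedIn String

    greeting :: Session -> String
    greeting Anonymous      = "Hello, guest"
    greeting (LoggedIn who) = "Hello, " ++ who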


I am not a Haskell programmer so I look at this with interest.

The biggest thing I notice is that type specifiers have comments next to them akin to variable names describing what they are for.

This worries me. Comments are just so awful compared to clear meaningful variable/function naming that propagate through the code when used. Does Haskell discourage and even prevent clear naming or am I missing something? It would be a tradeoff I couldn't accept.


Like quchen said, the comments are for the API documentation. They are documenting that function's use of a type. The type alone frequently will not communicate any semantic meaning. Consider this type signature.

    exp :: Double -> Double -> Double
You don't inherently know from the type which argument is the base and which is the exponent. Double is not a name that you have control over, so we write a comment for it that will automatically show up in the API documentation. They only get names in the implementation.

    exp base exponent = ...
But these names are for the programmer, not the API documentation. You actually want these names to be separate from what is shown in the API docs. Consider this function:

    map :: (a -> b) -> [a] -> [b]
    map _ [] = []
    map f (x:xs) = f x : map f xs
This code is very concise and easy to read. It is easy to read because the names are small and the important thing is the pattern, not the actual meaning of the names. This function is also extremely general, which means that there's not much use in names. Position in the function means more than a name. Here's what it would look like written with "clear" OO-style names:

    map :: (a -> b) -> [a] -> [b]
    map function [] = []
    map function (firstElement:restOfList) = function firstElement : map function restOfList
First of all, we see that sometimes we want to pattern match instead of using a name. GHC would actually give a warning with this code saying that the name "function" in the first case is never used. This is actually a very useful warning that has helped me catch bugs on multiple occasions. Secondly, the "clear" names here completely obscure our ability to understand the code. It's just too much noise. Now, I'm not trying to get into the whole naming debate here. The point I want to make is that there can be good reasons to not show the parameter names in the API documentation, which is why you'll see comments on type signatures for the purpose of auto-generated documentation.


I guess the complaint is that Java-like languages would let you write the type declaration as

  Double exp(Double base, Double exponent)
while the Haskell syntax for types doesn't provide a place to write down names for arguments. Perhaps a nice fix would be to add Agda-style syntax for arrow-types, like

   exp :: (base : Double) -> (exponent : Double) -> Double
At some point we will want to add dependent types anyway. :)


You can relabel Double with a type alias. Really, adding a newtype gives you a sort of partial equivalent of tagged arguments (requires tags, doesn't permit reordering), and might be best practice when there's more semantic content and no clear typical order (eg, you're passing price and quantity to makeOrder, rather than base and exponent to exp). I make a habit of doing this with my C (where a single element struct adds no overhead, much like a Haskell newtype).
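Something like this, for example (Price, Quantity, and makeOrder are invented names, just to sketch the pattern):

    newtype Price    = Price Double
    newtype Quantity = Quantity Int

    data Order = Order Double Int deriving Show

    makeOrder :: Price -> Quantity -> Order
    makeOrder (Price p) (Quantity q) = Order p q

    -- makeOrder (Price 9.99) (Quantity 3) typechecks;
    -- swapping the two arguments is a compile-time error.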


Yeah, type aliases can be really useful for some kinds of type documentation. But they seem a bit on the heavy side for one-off things like the exp and map examples.


I agree that it doesn't make sense for exp or map, but not because they are one-off; rather because they are already clear enough that it's not going to add much.

For map in particular what are you going to rename? You can name unbound parameters without a newtype:

    map :: (input -> output) -> [input] -> [output]
might add a bit;

    map :: MapFunction a b -> [a] -> [b]
takes more away than it adds I think.


It's usually a matter of taste. You shouldn't have types or values with 20 characters, but you also shouldn't do C-style "no vowels" naming.

It's also worth mentioning that the "-- ^ comment" syntax is so that Haddock, Haskell's automatic documentation generator, can display the comments in the HTML documentation. Example: http://hackage.haskell.org/package/Cabal-1.18.1.2/docs/Distr...


Haskell is a lazy-by-default language; this has some interesting consequences (you might have seen examples of lazy evaluation in Scheme, but it's not nearly so omnipresent there).

Haskell is pure: there are no mutable values in the core language (though IORefs can be used for that), so you can be sure that a function does not call `set!`, and all data structures are persistent and immutable unless mutability is specifically emulated.

Haskell uses some deeply abstract and powerful concepts like monads, arrows and functors that give great insight into programming language theory.

Haskell lacks Lisp-way macros but gets by surprisingly well with its type system magic, laziness + combinators can get you ~90% of macro power.
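As one hedged illustration of the laziness-instead-of-macros point: a control structure that would need a macro in a strict Lisp (so that the untaken branch isn't evaluated) can be a plain function in Haskell.

    -- A user-defined if-then-else as an ordinary function; laziness means
    -- only the branch that is actually returned ever gets evaluated.
    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    main :: IO ()
    main = putStrLn (myIf (1 < 2) "then branch" (error "never evaluated"))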

Haskell enables you to reason about code formally, making invariants (properties of the code that must be preserved during execution) explicit. This makes programs much safer and more predictable.

On the other hand, its core is very close to the lambda calculus (that Lisps are also so close to), so with Haskell you basically get the power of Lisp + a solid and powerful layer of type system that verifies your code + nice, clean and consistent syntax.


"Haskell lacks Lisp-way macros but gets by surprisingly well with its type system magic, laziness + combinators can get you ~90% of macro power."

You can get 90% of the rest with Template Haskell, at the cost of some compile time and (arguably) some prettiness.


Yes. Although I have no experience with Racket (so correct me if I'm wrong), the Lisps are inherently multi-paradigm languages. Haskell, on the other hand, is functional programming in its purest form. I would consider it worth learning if only for that reason, even if you don't end up using Haskell a lot.

EDIT: ...Did I just accidentally paraphrase what ESR used to say about Lisp?


The way I see it, Racket is like ANSI Common Lisp. The language (or ecosystem, in Racket's case) is designed purely for practicality, not around any theoretical purity. Racket gives the programmer immutable values, type safety, laziness, pattern matching, and objects. It also gives the programmer all of their complements. It doesn't enforce good taste or the one true way.


If by Racket you mean the #lang racket language, that's not really true. It is a Scheme and embraces (and encourages) the purity of modeling that's associated with that.


   #lang racket 
is just a corner of the Racket ecosystem. If anything underpins it, it's the idea of teaching computer programming and computer science research. For elegance it has #:extra-constructor-name for (struct ...); contracts, units, modules, collections, objects, mixins, and packages (two kinds); and two syntaxes for regexes.

That's of course not to say that the Racketeer ethic doesn't favor elegant code. Only that language design is not ideologically bound to it.


You evidently have never used Common Lisp :)


One word: 'loop'.

Common Lisp encourages all sorts of theory-derived ideas. The community favors good-looking code. But functions that start with 'n' (as in "No") and the ability to redefine symbols linked to atomic values are always there for chainsaw-juggling foo.


Racket is definitely multi-paradigm; it has a pretty nice line in OOP on the side. Pretty sure at least some OOP is required for any GUI stuff in Racket.


"Is there anything startlingly different in Haskell compared to the LisPs?"

Static types, possibly more purity (depending on the lisp you've been writing), pervasive laziness, slightly awkward (but usually unnecessary) macros.


I'm noticing a distinct lack of "productivity." :)


I'm not convinced that's "startlingly different in Haskell compared to the LISPs."


Meaning you feel it is not that much more productive?

Are there things in Haskell that would make you recommend it over Lisps?


Personally I feel like the gap between Haskell and Lisp is measured less in productivity (the way the gap between X and Lisp is for most Xs) and more in comprehension and robustness. The code I write in Haskell might be only marginally easier to write than the similar Lisp code, but it comes with a much nicer, more sustainable structure and 60% of my unit tests for free.


Frankly, I'm starting to feel very strongly that static typing in and of itself makes you much more productive -- in the long run. Not so much because you write stuff faster, but because you spend less time patching it up.


Funny, because I am coming to the other conclusion. Static programming is great when you can build one giant unified model of everything in your code.

When you just want some parts that can be fitted together to get you what you want, dynamic programming is tough to beat.

Consider: how many extensible editors (or other applications) have been written in Haskell? Or, really, in any static language?

Contrast that with how far emacs has managed to come. And, among the lisps, elisp is not highly regarded.

For that matter, consider how far javascript has managed to take web programming.


Both of those examples obviously have very little to do with the quality of the language used, and everything to do with the ecosystems that were built around them.


Then give me a compelling example otherwise.

Also... I was not meaning to make an argument to the quality of the languages. If anything, my argument is admittedly about the ecosystems garnered by the different types of languages.


The problem with these massive ecosystems of dynamic code extensions is that when you layer leaky abstraction on top of leaky abstraction, each layer being broken in its own subtle ways, the end result is a ticking time bomb waiting to cost its users thousands or millions of dollars in lost productivity. Your example of JavaScript in web browsers is a perfect demonstration of this.

We are the only engineering discipline where there are a significant number of people who actually think extensibility is more important than stability or robustness. Imagine a civil engineer pooh-poohing the stability of the bridge he's building, pointing instead to how easy it is to add additional lanes. Imagine a mechanical engineer decrying running a formal analysis of a new design for a car, saying "forget about that, look at how easily customers can plug in custom dashboard attachments!" This is lunacy, plain and simple.


This reeks of what I have seen as a common attitude among software developers where we make sweeping statements about other professions with little to back it up.

I will not claim that extensibility is the be-all, end-all attribute. I will claim that it is a valuable attribute, more so for some applications than others.

Similarly, I will make the same claim for formally checked programs. In some fields/industries, why wouldn't this be the norm?

So, if the claim is that static typing can make for a more completely specified application and that we should demand that for some fields, I agree.

If the claim is that static programming is superior to dynamic, I take issue.


"If the claim is that static programming is superior to dynamic, I take issue."

I think this would benefit from a little more clarity about what you mean when you say "static programming" versus "dynamic programming" - I'm sure you don't mean "Dynamic Programming".

If you mean static types, then I think that they are a tremendous win wherever they apply, and I think that sufficiently sophisticated type systems exist that they can apply most places. Trying to express meaningful constraints in a brain dead type system is awkward and you wind up moving between over- and under-constraining yourself - though I've been surprised at what I can express in C (with zero runtime overhead) with a little creativity.

If "static programming" is taken to mean "compiled, with runtime compilation of additional code made difficult to impossible", which often correlates with "statically typed" in existing languages but is technologically orthogonal, then I agree that this kind of "static programming" is not uniformly superior.


Yeah, apologies. I did not mean "dynamic programming." I thought the context made what I meant fairly clear, though.

If you have any examples that show how this is technically orthogonal, I'm all ears. Hence my request for examples of things that are as extensible as emacs.

And to be clear, my understanding is that GHC is actually fairly extensible. I would love it if there were more examples, preferably in more approachable domains than compilers.


'Yeah, apologies. I did not mean "dynamic programming." I thought the context made what I meant fairly clear, though.'

No worries - as I said, I'd understood that didn't mean "dynamic programming".

"If you have any examples that show how this is technically orthogonal, I'm all ears. Hence my request for examples of things that are as extensible as emacs."

Well, Typed Racket would presumably be one example. More generally, as a theoretical proof, one could bundle the entire compiler into the runtime and link in arbitrary new code.

"And to be clear, my understanding is that ghc is actually fairly extensible. I would love if there were more examples. Preferably in more approachable domains than compilers."

I'm not aware of it being exceptionally easy to write plugins for GHC compared to other compilers - it has incorporated a lot of extensions to the Haskell language but that's not the same as a plugin ecosystem (which might itself exist - just "I'm not aware"). It certainly has a plugin interface, but so does GCC. As GHC is itself implemented in Haskell, a lot of pieces of it are also available as libraries.


There are no non-leaky abstractions outside of toy applications. Extensibility is the unique strength of software. Otherwise you might as well do it with hardware. Other disciplines are (more) limited by their physical constraints. If air-tight abstractions could solve everything, machines could do the programming and there wouldn't be much need for human programmers.


Yes, clearly the reason that machines aren't doing all of the programming is that there's no such thing as a non-leaky abstraction.

ಠ_ಠ


Well it is pretty self-evident that non-leaky abstractions don't exist -- or I haven't really seen one in all these years. Take functions in functional programming: if tail recursion matters that is one leak; if strict or lazy evaluation matters that is another leak; if you want to share intermediate results well that springs a big leak; etc. Engineering is about trade-offs. We are there to judge what matters to us and therefore "leaks" and abstract away details that may not matter to us (at the level we are working on). Yes fundamentally I do see this as the barrier to complete automation.


Funny, I just swapped out the network card on my work desktop, because the new card has features the old one lacked.


I hear this forms the common theme for PL grants :P


XMonad is an example of a Haskell application that is compellingly extensible.

Edited to add: Also, since you asked specifically about extensible editors: http://www.haskell.org/haskellwiki/Yi


I don't think XMonad is a poster child for advertising how great (read: awkward) Haskell is for runtime extensibility. Sure it does the job, but it's based on a giant hack (invoking the compiler on your configuration, forking a new process, etc.); this is pretty specific to xmonad and is non-trivial, so you can't easily reproduce this kind of extensibility for other applications. It's completely different from the kind of extensibility offered by something like Emacs/Lisp, which is done purely in the runtime and doesn't require transferring some state to a new process.


The only real difference is that you can't extend live - which matters, but not tremendously for a window manager.


Have you seen StumpWM? The sad part is that CLX is terribly unstable.


I actually use ratpoison :-P


I'm pretty sure behemoths like NetBeans and Eclipse rate pretty damn high on extensibility.

Anyhow, you don't have to take an all-or-nothing approach. It's pretty common to write game engines in C++ and then use a dynamic language to provide game-specific scripting on top (e.g. World of Warcraft uses Lua for the UI, Civilization V uses Python for most of the non-engine stuff). Also, Photoshop has JavaScript built in.


MS Windows and a lot of Windows applications are extensible via COM. The core of COM is a dynamic cast at runtime to a static interface. You see something similar in a lot of Go code. For me this is the right balance. Static types but extremely late binding. Or static types, dynamic dependencies.


They are high, no doubt. In my view they still fall well short of the heights that emacs reaches.

Take a look at skewer-mode for Emacs sometime, and realize that it is less than 1.5k lines of JavaScript and Elisp.

I was also avoiding the approach of bundling in specially crafted hooks for extensions. If anything, that really just kind of makes my point: that for some things there is a high perceived benefit to dynamic languages. (And yes, the converse is true.)


You're not going to get concrete comparable examples because Haskell has not had enough adoption and resources thrown at it to be able to compare it in this way. What I can do is point you at another comment I wrote about this the other day and the responses it got.

https://news.ycombinator.com/item?id=7299034


Amusingly, I was in that thread, too. :)


I've not done extensive development in any lisp, so I'm not able to make a robust comparison. There are clearly ways in which Haskell leads to more productivity (static typing when sufficiently expressive is a tremendous win), and some ways in which LISP has an advantage. It sounds plausible to me that it's a wash; something other than a wash is marginally more likely, but I'm not confident about which direction it would go.


Makes sense. I think that mirrors my view, mostly. I have lately begun to fall on the side of Lisp more so than Haskell. Sadly, I cannot really give good reasoning as to why.

I can say that finally going through SICP has been borderline mind blowing. It is hilarious/sad/crazy to see how many hot topics today were covered in a bloody introduction textbook from the 90s.


On a side note, this may be worth looking at: https://www.cs.drexel.edu/~mainland/2013/05/31/type-safe-run...


Types

Haskell being statically typed will outlaw a large-ish number of direct lispisms until you provide the compiler sufficient type-based justification that the operation is OK. For instance, new lisp->haskell programmers often want to make arbitrary nested lists

    [[1,2,3],4,[[5,6],7]]
which is not well-typed, so the compiler complains. Honestly, what you need is a different type from the plain Haskell list, called a rose tree

    data Rose a = Leaf a | Branch [Rose a]
which tells the compiler that you explicitly want arbitrary nesting.
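For instance, the nested list above would be written as an explicit Rose value, something like:

    example :: Rose Int
    example = Branch [ Branch [Leaf 1, Leaf 2, Leaf 3]
                     , Leaf 4
                     , Branch [Branch [Leaf 5, Leaf 6], Leaf 7]
                     ]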

---

Laziness

All of Haskell is lazy by default, which means your environment is fully tilted toward laziness. Generally this means that a thing called equational reasoning holds nicely, and this forms a MAJOR component of your ability to reason about Haskell code. In particular, given any repeated subexpression, you can "lift" it up with a let

    ... e ... e ... e ...
    ==
    let x = e
    in ... x ... x ... x ...
Lazy Racket gives this to you too, but having it everywhere is a new thing. You are also able to rely on control structures as combinators much more, so that something like

    fold c x . map f . map g
is very common in Haskell since laziness will automatically fuse each step, while in (strict) Racket you'd want to manually fuse them together

    fold (c . f . g) x
reducing composability.

---

Purity

You can get pretty close to pure in Racket, but Haskell takes this concept much further. Purity and laziness are a driving force which leads to the need for things like Monads... and strong types allow the boundaries of various semantic segments to be very sharp. As an example, the STM libraries in Haskell are remarkably nice due to a combination of strong types and purity. In particular, you tend to build up computations of types like

    STM a, STM b, STM (a -> b -> c)
where the STM marks that these computations only make use of transactionally safe memory and are allowed to be re-run as many times as needed to ensure linearity. Then you use

    atomically :: STM a -> IO a
which "upgrades" STM to IO allowing now the entire set of side-effects by interpreting the STM computation as IO... and thus choosing to run and re-run it until it linearizes.

---

In each of these cases, Racket, being the flexible language it is, has methods of nicely including the feature—typed racket, lazy racket, base racket without ambient state or io—but Haskell goes fully and confidently into these three choices and that subsequently leads to very interesting combination effects and a community and ecosystem designed in a particular, interesting style.

In other words, I believe that even if you're intimately familiar with each of those things in isolation, Haskell may be the first time you've ever seen them together... and that changes everything.


For List types, you still need the (incomplete) https://ghc.haskell.org/trac/ghc/wiki/OverloadedLists work to make ListLike structures have a friendly syntax, but today you could cobble something almost-readable like

     [ R 0 , [ R 1 , R 2 ] ]


You might also want to play around with Typed Racket

http://docs.racket-lang.org/ts-guide/index.html


Aww, do I have to? I've been putting that off for so long, haha! I'll give it a go.


Is it, though? How do you know?

Don't get me wrong, I think so too, it's just that I've begun doubting myself. I've always thought it worthwhile to learn very different languages for the mind-opening effects of their paradigms. But it's become clearer to me that beyond the warm and fuzzy feeling of understanding ways to package and handle state in pure FP languages, I lack any specific evidence that learning Haskell helped my programming style or effectiveness in mainstream languages.

How would I go about finding such evidence?

To put it in a brutally reductive form: I looked at my C++ code before and after I learned Haskell and played with it a fair amount. I didn't see much difference, though maybe I didn't look at the right things?


> "Functional Programming in C++"

> My pragmatic summary: A large fraction of the flaws in software development are due to programmers not fully understanding all the possible states their code may execute in. In a multithreaded environment, the lack of understanding and the resulting problems are greatly amplified, almost to the point of panic if you are paying attention. Programming in a functional style makes the state presented to your code explicit, which makes it much easier to reason about, and, in a completely pure system, makes thread race conditions impossible.

> I do believe that there is real value in pursuing functional programming, but it would be irresponsible to exhort everyone to abandon their C++ compilers and start coding in Lisp, Haskell, or, to be blunt, any other fringe language. To the eternal chagrin of language designers, there are plenty of externalities that can overwhelm the benefits of a language, and game development has more than most fields. We have cross platform issues, proprietary tool chains, certification gates, licensed technologies, and stringent performance requirements on top of the issues with legacy codebases and workforce availability that everyone faces.

> If you are in circumstances where you can undertake significant development work in a non-mainstream language, I’ll cheer you on, but be prepared to take some hits in the name of progress. For everyone else: No matter what language you work in, programming in a functional style provides benefits. You should do it whenever it is convenient, and you should think hard about the decision when it isn’t convenient.

-- John Carmack (http://www.altdevblogaday.com/2012/04/26/functional-programm...)


John Carmack took Haskell out for a spin in 2013 and had a good take-away from it. I don't have any sources to link, but I remember quite a few comments from him on twitter about how he got almost nothing but benefits the harder he leaned on const correctness in C++. And that this inclination came from his experience with immutable data in Haskell.


A good chunk of his 2013 keynote was a praise to Haskell.


I'm going through their FP in Scala book right now, so I'm focusing on pure Scala for the moment. But I'll definitely be learning enough Haskell to be able to read papers and such, since most of the literature on how to do FP comes with examples in Haskell.


The nice thing about FP is that you can basically copy-paste algorithms directly from academic texts, tweaking only very slightly for syntax. It's this way in Racket; I imagine it's even easier in Haskell!


That's true. But it's my impression that the terminology used in Haskell in discussing types of functions is different enough to require some study. A Haskell to Scala translation of terminology would be what I'd have to devise for myself.


Last time I touched Racket it was still called mzScheme. Good times.


I've had the same experience jumping into Clojure even though it's less "pure."


> ...Clojure even though it's less "pure."

How so? I was under the impression you had to explicitly go out of your way to break functional purity/ use mutable state in Clojure, just like Haskell.

Is it easier/harder/more-unwieldy to maintain functional purity in one of them over the other?


Clojure offers nothing to constrain IO effects.


Thanks, I think I need to gain a better understanding of Clojure and Haskell.

Do you mean nothing in the main Clojure distribution? Or nothing, period?


IO isn't expressed in the type in idiomatic Clojure (it's dynamically typed anyway), whereas it is in idiomatic Haskell (to great effect and occasional mild annoyance).
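A small sketch of what "expressed in the type" means in practice (the lookup example here is my own):

    import qualified Data.Map as Map
    import System.Environment (lookupEnv)

    -- Pure: the type promises the answer depends only on the arguments.
    lookupPure :: String -> Map.Map String String -> Maybe String
    lookupPure = Map.lookup

    -- Effectful: the IO in the type advertises that calling this
    -- consults the outside world.
    lookupFromEnv :: String -> IO (Maybe String)
    lookupFromEnv = lookupEnv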


I don't necessarily agree with this. Scala gives you the ability to write in an OO style - which can result in some funky-looking hybrid code. But if you are disciplined, you can write perfectly pure, functional Scala.


I believe the difference between Haskell and Scala can be described with the pit of success design paradigm.

Haskell is lazy and always pure, so writing hybrid code takes great effort. This improves code quality by making the path of least effort the best one.

Whether writing pure functional code is the goal is an interesting debate, but Haskell is certainly the better language to learn that in.


Sorry, maybe I missed something, but what, exactly, don't you necessarily agree with?


I think he was disagreeing with the quote but maybe attributing it to you.


Huh, didn't even consider that possibility.


> if you are disciplined, you can write perfectly pure, functional Scala

That's a key point right there. If you are an OOP guy trying to learn FP, Scala makes it too easy to stick to what you already know. I find that Scala's OO features get in the way of learning functional programming, because like you said, it requires discipline to do things in a functional way. With Haskell, you don't have a choice.


I'm a long-time C++ programmer, and I set out to learn Haskell at one point. I didn't really 'finish', but everything I learned tended to make me think, "Ok, so it's like having a library full of algorithms that let you do useful work on data in a standard way." In other words, each "paradigm shattering" language feature, I saw as a big C function that you just don't see, because it's part of the language. But underneath, I know there's still the equivalent of a cool library, just in the runtime.


I think that's an overly simplistic view of Haskell's feature set.

For example, garbage collection can be thought of as a "big C function implemented as a library." C++'s smart pointers and templates can be thought of as libraries, but that doesn't detract from their value as programming language features.

Your argument is a derivative of "in the end everything is glorified assembly."


It's so much more than just "a big C function". In my mind Haskell is not about being a big library of useful functionality, but rather a lot more about getting guarantees about the things your code can and cannot do. Haskell lets me guarantee that the function `fst :: (a, b) -> a` cannot possibly be buggy (barring things like unsafePerformIO/undefined which can be statically checked for very easily with things like Safe Haskell). It lets me guarantee that some functions will always return the same result given the same inputs. You simply cannot get these guarantees in C/C++.


rule 1: don't talk about the runtime


Don't forget Hoogle[1], the type-based search engine! Let's say that I want to combine a couple of my lists into one list. I know what my inputs look like, and I know what I want my output to look like.

So I go to Hoogle, and search for "[a] -> [b] -> [(a, b)]". First result is "zip".

Or I want to get only the things in a list that satisfy a predicate, but I'm suffering from nominal amnesia. I type in "(a -> Bool) -> [a] -> [a]" and Hoogle lets me know that I meant to write "filter".

[1] http://www.haskell.org/hoogle/?hoogle=%28a+-%3E+Bool%29+-%3E...


Squeak Smalltalk is even better at this: you can actually type in the arguments of a method and a result, and it can find you the method that, given those arguments, returns that result.

So, for example, you can type 'hello'. 'HELLO' and it finds you the 'asUppercase' method of the String object.


How does it perform this search? I can't imagine running all functions with a matching type signature scaling to more complex examples.


Well, in fact it does something like this, but the set of messages it tries is actually limited. It has been a while since I looked at it, so I don't remember the details.

This is a very useful feature of Squeak (and, I presume, Pharo Smalltalk), but users should be aware that it has limitations, and shouldn't simply assume that because a result is not found, it does not exist.


Anyone have a practical guide to Haskell? For instance, how can I open a file, split a line on a separator, then do something on the third column? Also, how can I talk to MySQL or SQLite? Load a table then perform operations on the loaded data? I've found the best way to learn anything is to find real uses in my day-to-day work and start small.

In short, find some tasks for which I would use Perl|Python|Ruby and solve them with Haskell.


Haskell can do these practical things quite easily.

    import Data.Char
    changeThird (a:b:c:rest) = a : b : map toUpper c : rest
    changeThird ws           = ws  -- lines with fewer than three words pass through unchanged
    main = do
        contents <- readFile "foo.txt"
        let ls = lines contents
            ws = map (changeThird . words) ls
        writeFile "foo2.txt" (unlines $ map unwords ws)
But like others said, I think you're going to have a harder time if you try to learn Haskell by starting this way. Anyone learning Haskell will be coming in with no exposure to any other pure language, and that has big implications. There is a fundamental difference between the "contents <- readFile ..." line above and the next two lines that you have never encountered before in any other language. You'll probably have a pretty difficult time with it if you don't understand a little more about how the readFile function is different from the lines function and what that means for how you can use them. You can't just mindlessly copy examples from small day to day tasks like you can with other languages because Haskell is unlike any other language you know.
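To make that difference concrete, here is a minimal sketch (reusing the foo.txt from above) with the types spelled out in comments:

    main :: IO ()
    main = do
        -- readFile :: FilePath -> IO String is an action: its result has to
        -- be bound with <- inside another IO action.
        contents <- readFile "foo.txt"
        -- lines :: String -> [String] is a pure function: a plain let
        -- binding suffices and no effect ever happens.
        let ls = lines contents
        print (length ls)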

Please don't read the above paragraph to mean that Haskell is not suitable for these kinds of small tasks. I actually think that Haskell makes a fantastic scripting language. I'm just saying that you can't learn it like you learn other scripting languages. You need to get a bigger understanding of what's going on otherwise you'll end up frustrated about why things don't work the way you expect.

All that being said, I still HIGHLY recommend learning Haskell.


You're probably going to have a bad time trying to do it like that. Perl/Python/Ruby are brilliant at what they do, and the problems they're best at aren't necessarily problems that Haskell is very good at.

If you're interested in learning the language, I suggest starting with Learn You a Haskell For Great Good[1] (first programming book I've ever considered to be a page-turner), then following it up with Real World Haskell[2] (going through that one now myself)

[1] http://learnyouahaskell.com/

[2] http://book.realworldhaskell.org/read/


To expand on this, Haskell is excellent for writing your own parser. Take special note of, say, Chapter 16 (Parsec) of Real World Haskell [1]. This is the sort of application where you would choose Haskell over Perl/Python/Ruby.

Of course parsing is something that's very important if you are making your own DSL. And this is another application where Haskell really shines. This is why the guys who do programming language research produce papers with code in Haskell. It's orders of magnitude easier to make a toy language for exploring evaluation order in Haskell than it is to do the same in Java.

It's easier to learn a language if you're building something you're interested in, though. Personally, I learnt a lot of Haskell implementing a web app with Yesod [2].

Good luck.

[1] http://book.realworldhaskell.org/read/using-parsec.html [2] http://www.yesodweb.com/


"Of course parsing is something that's very important if you are making your own DSL. And this is another application where Haskell really shines. This is why the guys who do programming language research produce papers with code in Haskell. It's orders of magnitude easier to make a toy language for exploring evaluation order in Haskell than it is to do the same in Java."

I feel like I see a lot more cases where the research language is built as an extension of Haskell or its syntax tree is written up as a (G)ADT possibly with the derived `read' as the "parser" (but possibly just constructing and interpreting ASTs within a Haskell program).


I don't really agree.

Python, at least, has the PLY library, which makes parsing (IMO) easier than with Parsec.

Other languages, like OCaml, also make writing parsers very easy.

Haskell with Parsec might beat C with lex/yacc, but I don't think it's that great compared to what else is available.


After using PLY in production and (afterwards) toying with Parsec for a side project, I could never go back to PLY. Even aside from PLY's obvious flaws (docstrings as code, mutation instead of returns, ample use of Python magic backstage...), Parsec wins by being more configurable and composable.

Whoever is maintaining my PLY mess now might get a handle on it faster than they could learn Haskell, but probably not by much.


Could you elaborate why you feel this way? I could also just vote for one or the other, but I'd like to see discussion about the relative merits of PLY or Camlp4 w.r.t. Parsec.


I can't deny part of the reason is that I'm simply more comfortable with Python.

I think my main contention was the "orders of magnitude" claim in the comment I replied to. Parsec is nice, but it's not "orders of magnitude" better than PLY or camlp4 or other parsing tools in other languages.


Haskell also has Attoparsec, a simpler, faster alternative to Parsec.


>Of course parsing is something thats very important if you are making your own DSL.

Writing a DSL was my original motivation for learning Haskell beyond it being just a toy. About a day after I really got going on my parser I decided to abandon the idea of a DSL and just write it as a monad.


"This is why the guys who do programming language research produce papers with code in Haskell."

Structural pattern matching is also a nice thing to have when writing interpreters.


RWH is a good recommendation, although it contains a rather large number of errors. LYAH is very good for learning the basics, too.

But, if you're already a very experienced programmer, you can probably learn how to write practical programs in Haskell by just reading the IO chapter (http://book.realworldhaskell.org/read/io.html) and the Systems Programming chapter (http://book.realworldhaskell.org/read/systems-programming-in...) from RWH. These will help you understand how IO actually works in Haskell. The rest is just libraries and learning the language itself.

On a related note: I've found that starting with the main IO function is a good way to start writing any large program in Haskell. Most people I know who complain about Haskell being a mess in impure environments tend to write pure functions first, then build their IO functions on top of that, instead of the other way around. I'm not sure whether this applies for everyone, and of course this approach works well in domains that have little to do with IO (e.g. mathematical programming), but it's something to keep in mind.


Beginning Haskell: A Project-Based Approach is a good book. http://www.apress.com/9781430262503


"For instance, how can I open a file, split a line on a separator, then do something on the third column"

Assuming we have

    doSomething :: Maybe String -> IO ()
which turns some optional string into an action to perform, you could say:

    mapM_ (doSomething . listToMaybe . drop 2 . words) . lines =<< readFile filename

The IO action readFile produces a (lazy) String; lines chops it into a (lazy) list of Strings - on each of those we break it into columns, get rid of two of them, get Just the head if it exists (otherwise, Nothing), turn those into the actions we want with doSomething, and sequence those actions. The result is a new action.

"Also, how can I talk to MySql or Sqlite?"

http://www.yesodweb.com/book/persistent


Others have recommended good books, but here's a quick glimpse right away.

    import Control.Monad (forM_)
    import Data.List.Split (splitOn)
    import Data.List (transpose)
    import System.IO

    main :: IO ()
    main = do
      withFile "file.file" ReadMode $ \handle ->
        contents <- hGetContents handle
        let clines = lines contents
            ccols  = transpose (map (splitOn ",") lines)
        forM_ (ccols !! 3) $ \cell -> do
          putStrLn cell
Then take a look at the following packages

    http://hackage.haskell.org/package/split
    http://hackage.haskell.org/package/mysql-simple
    http://hackage.haskell.org/package/sqlite-simple


I think this is the corrected version

    import Control.Monad (forM_)
    import Data.List.Split (splitOn)
    import Data.List (transpose)
    import System.IO

    main :: IO ()
    main = do
      withFile "tel.csv" ReadMode $ \handle -> do 
           contents <- hGetContents handle
           let clines = lines contents
               ccols  = transpose (map (splitOn ",") clines)
           forM_ (ccols !! 3) $ \cell -> do
             putStrLn cell


Good catch—obviously I wrote that freehand :)


Real World Haskell (RWH) [1] is probably what you want, although it may not be as good a first introduction as other books such as Learn You a Haskell (LYAH) [2] or Programming in Haskell [3].

[1] http://book.realworldhaskell.org/ (free to read online)

[2] http://learnyouahaskell.com/ (free to read online)

[3] http://www.cs.nott.ac.uk/~gmh/book.html


Rosetta code is sometimes reasonable for answering these questions:

http://rosettacode.org/wiki/CSV_data_manipulation#Haskell

(early Google search result for 'rosetta code csv')

edit: Looking at the Python code, I would call the approach esoteric, but I guess that's just an opinion (I would use the csv module without giving any consideration to the quality of the structured file). So who knows how idiomatic the code is.


Realize that the Haskell version is not using an external CSV library like the others are before making judgements ;)


A lot of them don't. I like the idea, but it's weird to not show the straightforward method of dealing with a csv. Some other toy problem would demonstrate opening files and a little parsing.



If you're looking for a guide to using Haskell in the real world, you should check out "Real World Haskell", by O'Sullivan et al, freely available online at [1]. It's an excellent book, in my opinion one of the best programming books available on any language.

In particular, for interacting with MySQL, check out the chapter on that in RWH [2], and for reading lines and operating on them, see [3].

However, you should start from the beginning, as those kinds of operations (for better or worse) require a firm grounding in Haskell concepts like types, monads, etc.

[1]: http://book.realworldhaskell.org/read/

[2]: http://book.realworldhaskell.org/read/using-databases.html

[3]: http://book.realworldhaskell.org/read/io.html


I recently learned about the `interact` function in Haskell, which makes it easy to write programs that read from standard input and write to standard output, just like many of the Unix tools.

For example, if you want to print the id and name of each user from a file in the /etc/passwd format:

    -- display all users: ids and names from /etc/passwd

    import Data.List.Split (splitOn)

    eachLine :: (String -> String) -> (String -> String)
    eachLine f = unlines . map f . lines

    showUser :: String -> String
    showUser line = fields !! 0 ++ "\t" ++ fields !! 4
                    where fields = splitOn ":" line

    main = interact (eachLine showUser)
This can be run like so (assuming the program is saved in list-users.hs):

    $ runhaskell list-users.hs < /etc/passwd
I am very new to Haskell, and I am sure there are other, perhaps better ways to do this.


It looks ok to me, though I might pattern match in the `where fields` part instead of using `!!`. Still looks pretty good.
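Something like this, perhaps (just a sketch of the pattern-match variant, keeping the same fields as the parent):

    -- Same imports as above; this swaps the (!!) indexing for a pattern
    -- match in the where clause.
    showUser :: String -> String
    showUser line = name ++ "\t" ++ fullName
      where name : _ : _ : _ : fullName : _ = splitOn ":" line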

If I were feeling fancy, that `unlines . map f . lines` looks like an isomorphism that we could do something awesome with using the lens library. Anyone who knows it better care to show me how to do that?


Haskell also matters from a business point of view! I can tell you that hiring Haskellers immediately connects you with the top of the food chain in terms of talent.


Actually I thought of another reason why Haskell matters. My HN score has gone up over 60 points in the last few days from only 4 comments here about Haskell. It's really a very efficient tool to have in your arsenal :-)


Can you give me a list of companies that are looking for Haskellers?


I'm CTO of a team of Haskellers in London. We're not hiring at the moment, but if you ping me on ben <at> dlstartup dot com I'll keep your CV around. Otherwise there are 5-6 Haskell startups, and probably others in the finance world.

Whereabouts are you?

In addition to mightybyte's list there's also

http://www.borde.rs/

Barclays Capital

http://www.sqreamtech.com/

Tsuru


Also I heard that http://pusher.com/ have or will have a bit of Haskell in production:

http://pusher.com/jobs/platform_engineer


I don't know about actively looking, but I know that the following companies use Haskell.

Janrain (http://janrain.com/)

Standard Chartered Bank (https://www.sc.com/en/index.html)

Karamaan Group (http://karamaan.com/)

Silk (http://www.silk.co/)

Skedge.me (http://skedge.me/)

Soostone (http://www.soostone.com/)

Erudify (http://www.erudify.com/)

Facebook (http://www.haskellcast.com/episode/004-simon-marlow-on-paral...)


Signal Vine is currently in the market for a Haskeller and may need more in the future.

http://functionaljobs.com/jobs/8684-platform-engineer-at-sig...

Disclaimer: I'm a cofounder of the company which spun out Signal Vine.


I'm cofounder of usedox.com. We're 5 engineers strong now and do most of our work in Haskell. Not hiring at the moment but expect to in early summer. The quality of the unsolicited, inbound requests we get for prospective hires is amazing.


Can you email me so I can email you back about it? Or some other efficient way of helping me remember to ask you about the job in the early summer ;)


max < at > usedox.com


Haskell's power is impressive but the "Cabal hell" is a serious problem.

Recently I tried to get into Haskell. I installed EclipseFP and the first sessions looked nice. But when I started to use external Cabal packages I always ran into serious trouble with versioning. Even recompiling doesn't help much because there is always some package complaining.

Am I doing something wrong? I am thankful for any advice.


You may want to try cabal sandboxes. I think it's a similar idea to virtualenv in Python.


Thanks, I will take a closer look at it.

I hope this is a hierarchical solution so that I won't need to download the whole cabal stack for any new project.


It is, you won't have to download the entire stack. You might want to try this tutorial out:

http://yannesposito.com/Scratch/en/blog/Holy-Haskell-Starter...


http://fpcomplete.com has a free Community edition online dev environment.


A much more carefully written, no-nonsense overview: http://en.wikipedia.org/wiki/Haskell_features


Now if somebody could tell me how to install the latest Haskell Platform on Ubuntu 12.04, and why cabal (with the Haskell Platform for 12.04) always gives me those incompatibility warnings, I'd be happy to try it out.


Can someone give examples of successful/popular Haskell programs?


Here's a list compiled by someone else:

http://haskell-news.blogspot.com/2008/01/top-10-most-popular...

Here's a GitHub search. Mostly libraries, but a few end-user programs:

https://github.com/search?p=1&q=language%3Ahaskell+stars%3A%...

I didn't realize Pandoc was written in Haskell. Darcs, of course, and the xmonad window manager are the ones that come to my mind.


It is amusing to see the numbers placed for "registered downloads." Those seem low enough that they may as well just be left off.


To be clear, that list is 6 years old.


>2008 That is ancient.


xmonad, darcs, pandoc are three big ones


http://www.ledger-cli.org/ has a port in Haskell (http://hledger.org/) that, from what I've heard, will be the main branch in the future.


pandoc is a beauty


I really like Parsec and the Diagrams library. Neither is really "a program", though.


So it fits what ESR says about Lisp?

"Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot."

Is that right? As Haskell seems to be far less intimidating to approach than Lisp, I may actually end up doing some. I fondly remember my junior level programming languages class and doing assignments in Standard ML. Haskell looks similar.


Most people find Haskell (especially the laziness, the type system and its error messages, and the typeclass hierarchy) more intimidating than Lisp. If you're familiar with Standard ML, you already basically know the syntax, though, and the types and pattern matching will make you feel more at home.


I'm a JS and PHP developer who wants to learn Haskell. I would prefer to build something rather than just doing a tutorial or reading a book. Do you have any recommendations on projects suitable for a beginner?


My step 1 to understanding a new language has always been to build a simple, file-serving HTTP server without using any HTTP-specific libraries.


I really like this suggestion. I'm in the process of learning Haskell now, and plan on using this as a practical learning exercise.

Out of curiosity, how difficult did you find this to complete?


I think this is perfect for you (and many others):

http://www.amazon.com/Beginning-Haskell-A-Project-Based-Appr...


One cool thing about Haskell that really struck me and that usually isn't emphasized is partial application of function arguments.

http://www.haskell.org/haskellwiki/Partial_application

I'm not sure if it's unique to Haskell, but it's such a great feature that I've wished for in other languages since learning about it. You can replicate it by manually writing curried functions, but having the language do that for you is incredibly useful.
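For anyone who hasn't seen it, here's a quick sketch of what partial application looks like in practice:

    add :: Int -> Int -> Int
    add x y = x + y

    addFive :: Int -> Int
    addFive = add 5                 -- one argument supplied, one still pending

    main :: IO ()
    main = print (map addFive [1, 2, 3])   -- prints [6,7,8]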


You get this kind of thing "for free" in a concatenative language, e.g. a stack based language. Though having to mostly rely on stack-combinators (this depends on the language and/or idioms of that language, not something that is inherent in concatenative languages) for writing functions might feel too limiting.


What's with using "strict" for the opposite of lazy evaluation? Has the term "eager" fallen out of fashion? Strict seems overused to me.


     main :: IO ()
     main = putStr "Hello world!!"
A side-effect in the very first program. I'm disappointed!


That's not a side effect - it's the effect!


Then you missed the point. There is no side effect (yet). main is just some value with type IO ().
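A small sketch of that point: an IO value is inert data describing an effect, and the effect only happens when the value ends up (directly or indirectly) inside main.

    hello :: IO ()
    hello = putStr "Hello world!!"   -- just a value; nothing is printed yet

    main :: IO ()
    main = hello >> hello            -- the effect is performed (twice) only here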


I've been wanting to make time to learn Haskell. But I have this problem, when learning any language, where I don't know what to build, but want to take a learn-by-doing approach.

What are some good things to build when getting into a new language? Data structures? Chat server?


I really love writing libraries. Haskell has a type system that supports defining really solid abstractions (i.e. code that doesn't even permit wrong usage). Creating APIs like that is extremely satisfying. Maybe take a look on Hackage and see if there's something missing that you'd like to see, and make it.


The problem with learning Haskell is I just can't shake the "research" feeling. Learning Erlang looks like a harder slog, but with the benefit of a "real world" feeling, whether correct or not.


I have separately spent a week learning Erlang and learning Haskell.

Erlang: much much easier to learn. Within a week I felt I could probably have built pretty much whatever I wanted. A few pitfalls, but it's predictable, and with very few difficult to understand parts. Also the OTP is almost CPAN-like in its ability to get you solving problems very quickly. HOWEVER: until the next time I need to build a telephony switch or any kind of massive message handler (chat application, messaging queue, WhatsApp competitor etc), I just have no use for it.

Haskell ... I had some basics in a week. After some months of dabbling here and there, I finally get Monads, but haven't done anything productive yet. I feel like I have become a significantly stronger developer in the other languages I use, have a much deeper understanding of some basics of CompSci, and have had a lot of fun. I think in another few months, it might start being my goto language for solving problems.


It's funny to me how 'research' has become a put-down, a.k.a. "research toy" vs. "getting REAL work done".


Most of the time it's because those people can't make the connection between the research and the "real world". The worlds are very blurred in my experience.


It's not a put down, it's an application area:

Research: generate ideas

Applied: deploy to thousands or millions of users


Why not consider Clojure as an alternative to Haskell or Scala?


While Clojure does a lot of things well, it has the same flaws that Haskell (and to some extent Scala) tries to solve. For example, Haskell has a strong emphasis on types and has no side effects, which makes it easier to write correct programs. One beautiful thing you'll encounter after some experience with Haskell is that types will guide you when designing your programs, giving a solution that is not only correct but also concise and elegant (most of the time).


I'm actually considering ClojureScript for the front end of my Haskell app.


.syntax LISP with comfortable is everyone Not


LISP is prefix notation, not just reversed order. An expression should begin with a function (a verb) and be followed by a list of its arguments (the nouns).

(not (comfortable everyone LISP-syntax))


I'd expect the quantifier to take the predicate as an argument, rather than the other way around.

(not (everyone comfortable LISP-syntax))


Probably better. I was thinking everyone would just be data (a list of people or something) rather than a quantifier function.


Right, but then you'd presumably want (every (map ...)).

I'm nitpicking, of course... just taking the opportunity to think lispy.


The language that has more lines of tutorials and articles written about it than actual lines of code.


Just like C++! (Yes, I too made that up on the spot.)


That's because the code is so concise that it doesn't need many lines.


It doesn't really matter; it's yet another language that can be used, that some people like and, in this case, really want to shove down everyone's throats.

I guess it's not getting enough uptake organically, so once more to HN for the throat-shoving.


Name one language that got popular with no one posting/talking about it.



