Haskell in Production: The good, the bad, and the ugly (mwotton.github.com)
115 points by hamidr on Oct 31, 2012 | 142 comments



I don't think the library situation is nearly as bad as some people--not necessarily the author of this presentation--make out.

For one, Haskell actually has a pretty good and growing collection of libraries, probably disproportionate to the language's popularity. Web development, in particular, is well represented: there are several competing web frameworks; Yesod even has commercial backing now. Haskell even has some libraries, like QuickCheck and Parsec, that are not well reproduced in other languages. I've also been looking at sbv lately, which seems rather more convenient than other languages' alternatives.

Additionally, libraries aren't the only thing that matters. In fact, most libraries are pretty trivial. I remember seeing a presentation by Bryan O'Sullivan online about his experiences using Haskell at a startup. One point that really stood out to me was about libraries: sure, some libraries (like JSON parsing, I believe) were missing; however, it only took him something like a couple of days to write one from scratch. Those days are easily overshadowed by Haskell's core advantages. I would much rather spend a couple of days writing a new library than trying to track down a bug Haskell's type system would have caught. The beauty of this system is also that each library only needs to be written once and can then be reused, making the world better for everyone.

Now, there are some other issues with Haskell. Cabal really is a big one. I just spent quite a while fighting cabal, and I'm not even sure how I won in the end :P. Happily, I believe people are working on addressing the cabal issue.

I'm sure there are other annoyances besides. But no language is entirely free of annoyances. Considering everything, I really do think Haskell is a good choice for a very wide range of programming problems, especially in a startup setting.


I've used Haskell in different startups and it was most of the time a good choice. I have written a dozen (embedded) domain-specific languages to support rapid development. This is quite easy in Haskell. Parsec helps with writing parsers for your DSL.

What I saw was that the number of tests could be reduced when using Haskell. The kind of testing is also somewhat different: you don't have to test for type errors. QuickCheck lets you test mathematical properties (in an empirical way).
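(For illustration, a minimal QuickCheck property; the classic reverse-twice example, not from the comment above:)

    import Test.QuickCheck

    -- Property: reversing a list twice gives back the original list.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice  -- "+++ OK, passed 100 tests."

QuickCheck generates the random inputs for you; a failing property is reported with a (shrunk) counterexample.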

The FFI compensates for the missing libraries, but is a bit difficult to use.
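(For illustration, a toy FFI sketch binding cos from the C math library; real bindings for a whole missing library are of course more work:)

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.Types (CDouble)

    -- Import the C function cos directly as a (pure) Haskell function.
    foreign import ccall "math.h cos"
        c_cos :: CDouble -> CDouble

    main :: IO ()
    main = print (c_cos 0)  -- 1.0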

But the biggest reason for using Haskell, for me, was the way it handles concurrency and the high level of abstraction. This enables you to write distributed applications in a more convenient way than usual. If you're working in a web startup, at some point you hit a place where you need to think about this stuff.
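(A minimal sketch of GHC's lightweight concurrency; toy code for illustration:)

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    main :: IO ()
    main = do
        done <- newEmptyMVar
        -- forkIO spawns a lightweight (green) thread; GHC multiplexes
        -- huge numbers of these over a handful of OS threads.
        _ <- forkIO $ do
            threadDelay 100000           -- simulate 0.1 s of work
            putMVar done "worker finished"
        msg <- takeMVar done             -- block until the worker signals
        putStrLn msg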

Besides Yesod, Snap is also an interesting choice, because of its simplicity.

I enjoy using this language. I would choose it anytime. It made me a better programmer.


> But the biggest reason for using Haskell, for me, was the way it handles concurrency and the high level of abstraction.

Anyone care to point to something online that spells these out in (simple) details for someone who knows next to nothing about Haskell?


Simon Peyton Jones had a talk with a very nice overview of concurrency in Haskell. He is a very good presenter as well :).

http://www.youtube.com/watch?v=hlyQjK1qjw8


There are a lot of concurrency-related libraries implementing all kinds of abstractions. Distributed computation is on its way. There is an overview here: http://www.haskell.org/haskellwiki/Applications_and_librarie...


One big win: since Haskell is purely functional, you don't have state (okay, in the sense that imperative programmers use that term) and all data is immutable.


I find the linked slide set uninteresting, as it is missing the speaker's words - it's just a collection of positive and negative words. Still, I want to reply to your comment, even if my reply is a bit harsh, but Haskell always makes me a bit bitter:

First of all, I played with some web frameworks in Haskell and didn't like them at all: the Yesod community had a very dismissive tone towards people who didn't want to use what they perceived as "standard", and Snap is well organized, but sometimes weirdly implemented around the corners (WAT? The templating library also serves static files?). I would love to use any of them, but they are still not fit to build ecosystems like Django, Rails or Spring.

It's nice that you picked the JSON library (by Bryan?) as an example of a lib that can be written in "a few days". `aeson` is a great library. It's just not something that someone without detailed Haskell knowledge can write in a few days. It requires good knowledge of Haskell, especially things like the GHC.Generics library and Template Haskell. (Hint: GHC.Generics is not very well documented for practical use either; the documentation spares you the hard cases.)

I found that if you go away from those 'core' libraries, quality drops sharply. Giving just one example may be unfair, but I'll do it anyway: the readline bindings (haskeline) didn't build on OS X for 2 major releases straight. Also, 0.1 and 0.2 seem to be very common version numbers on Hackage, and most documentation boils down to type descriptions. For love's sake, I couldn't find a good and proper tutorial on working with bidirectional streams (think websockets); asking a GHC committer ended in "well, we're still not sure what the best abstraction for this is, there are multiple competing ones", and someone at the Haskell User Group Berlin said "well, it's not such an easy problem, but it will be solved in a year or so". In Ruby, that's implemented in an hour (safe and sound using libraries, with proper abstractions), today. Yes, I honestly think that the library situation is bad - it's not the lack of libraries, it's the number of half-hearted libraries vs. the really good ones.

Also: pro-active teaching, like producing screencasts and tutorials, is sadly not very common in the Haskell community either (weirdly, this is spotty: haskell-cafe, Real World Haskell and Learn You a Haskell are shining examples of how things should be done, even compared to other communities).

I am surprised by how much the Haskell community always says "type system" first, but seems to put all other practical considerations in the backseat. Also, it is common to just dismiss concerns without addressing them.

That said, I still love Haskell and enjoy developing in it when things are working smoothly. I just wish the community were more receptive and less dismissive towards the actual problems of people who are not as deep inside the language as those who spend every evening on it.


Before my rant: I think the Haskell community is incredibly nice. It is the most helpful and brilliant group of people I have interacted with in my life.

Rant:

Yes, I have spent a while getting up to speed with Haskell. Learning it has definitely made me a better programmer, but I really don't like how quickly everything seems to go off the deep end.

The tension between the 'right' abstraction and 'good enough' stuff like Cabal really irritates me. Cabal is especially hurtful to the community's growth. I think there needs to be a much bigger investment in Nix; that is an absolutely amazing project that barely gets the attention it deserves.

Another thing that really makes me not want to invest time in Haskell is how quickly code starts to depend on language extensions. I think that idea can be done well, as in Racket, but for Haskell it really seems like a scary way of bandaging the standard.


What do you mean by bandaging the standard? And why are extensions scary?

In general, none of the extensions change the behaviour of valid "standard Haskell" (beyond adding a small handful of new keywords, rendering those words illegal as variable names). So calling it "bandaging the standard" seems a bit harsh/odd.

I think extensions are a nice way of keeping the language moving forward without being slowed down by the standardisation process. The standard can then catch up in its own time.
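(For readers unfamiliar with the mechanism: an extension is enabled per file with a pragma, so its effect is explicit and local. A tiny sketch, using TupleSections as an arbitrary example:)

    {-# LANGUAGE TupleSections #-}

    -- Without the pragma, (, 5) is a syntax error; with it,
    -- it is shorthand for \x -> (x, 5).
    pairWithFive :: a -> (a, Int)
    pairWithFive = (, 5)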


Because there is no better way of saying "this is not finished yet and might be removed at any time".


> Also, it is common to just dismiss concerns without addressing them.

That's human nature. I used to put a lot of effort into Tcl, including revamping John Ousterhout's book:

http://journal.dedasys.com/2009/09/15/tcl-and-the-tk-toolkit...

But Tcl became increasingly untenable commercially, and a number of years ago I jumped into Ruby on Rails quite enthusiastically. It does things mostly the 'right' way in my opinion. However, while I like it, it's not "mine" in the way Tcl was, and I stepped back and started looking at everything in a more critical, but also more open way and have been a lot more inclined to call a spade a spade, even with the languages I like:

http://news.ycombinator.com/item?id=4716838

http://news.ycombinator.com/item?id=4716890

But coming back to it, that attitude of "if Tcl doesn't do that, it doesn't really matter, and maybe not doing that is a good thing anyway" is prevalent in language flame warriors of all stripes, and, in my opinion is something to be avoided: if you can't look at yourself honestly, you won't improve.


That's true, but I have rarely found the tension between the brute capabilities of the language, the number of geniuses working on it, and the failure at communicating about it as big as in the Haskell community. It's full of gems waiting to be used, but sometimes I have the impression that many practical problems are seen as downright mundane in some circles there. I really hope for a second coming in that regard.

Amen to your opinion about language flame wars (and this wasn't intended as a flame).


"Practical problems" is one of those expressions that mean different things to different people, emphatically so in the Haskell community. The response is that far from treating the problems as mundane, the community actually sees the problems for what they are and are taking steps to solve them. Shit's hard!

Dan Doel elaborates on this:

http://www.reddit.com/r/haskell/comments/wn882/clarification...

I find the whole 'we're the people actually getting shit done' line quite amusing. Because the recurring theme of Haskell (and its family) is that those LaTeX paper/pseudocode-writing academics keep inventing language features and abstractions that are better for getting shit done than what the people just settling for something ad hoc, so that they can get shit done, come up with.


I am aware of that sentiment. I didn't want to imply that all those abstractions don't get shit done. A lot of them are very much worth learning and solve many problems. So much for the "useful" part.

But what use is a powerful abstraction without usage examples and guidance on how to use it? And what use is an abstraction that is just not there yet? Or what if it is hard to learn the correct abstractions for certain problems, because there is no guidance about that? The lack of examples in many Haskell libraries and projects is striking, and I usually resort to working through similar libraries to see how they do it.

My problem is not the paper-writing academics, but those who should do the footwork and make that knowledge and brightness accessible to others.

As always, it's a combination of many factors.

Also, don't overinterpret my words: I never said that Haskell is not useful for getting things done (some things just work tremendously well!). It's just that guidance on practical problems is hard to come by, even when the abstractions fit. That's why I think Real World Haskell is such a strong book: it actually provides that.


(This must be the comeuppance for talking abstractly about abstractions. Heh.)

For starters, I shan't cop out and claim Haskell is primarily a research vehicle. So it's succeeding in some sense. We'll live.

So here's my take: most people admit defeat at monads. And then the few that survive find that they have to face iteratees and co. Rage and tears don't even begin to express it.

You're right, there's no guidance about iteratees. Material on this issue - not unlike monads; not unlike space complexity tradeoffs in lazy evaluation; hell, not unlike almost everything about Haskell - is all over the place. And that sucks balls.

Do I have a solution? I'd counter: isn't this the price to pay for being on the bleeding edge? And I'd contribute this: there used to be a time when people felt a greater affordance to try anything half-baked, fiddle with it here and there, and BBS to others about it. Over the years there's been this expectation creep that everything has to be perfect, like an immaculate birth. Perfect! With guidance included!

I want to say that this is due to the decline of leisure time. But that can't be the whole story.


Hrm. I was reading a book about copywriting recently, and one of the takeaways was that you really want to be selling a cure rather than a prevention.

Meaning, people will pay a lot and care a lot about curing some particular disease, but preventing it... they might pay something for it, but it's probably not really on their radar.

Haskell, with all the fancy CS stuff, is selling prevention, in some sense. "We promise that your code will somehow be better"... well ok, but that doesn't really sell me. I'm being told to eat my vegetables rather than being shown a tasty steak. Compare and contrast with Rails when it came out, which showed you how you could write code that was 1) more organized than PHP, but 2) not as bureaucratic as Java. Cool stuff!

Reality is not black and white, but perhaps there's a grain of truth in there? Perhaps Haskell people could do a better job of showing how the CS stuff helps with real problems? I can't really say as it's simply not on my radar all that much, so they could well be screaming about it from the rooftops and I wouldn't have heard too much.


For the record, many haskellers agree with you that the quality of many libraries and their documentation isn't good enough:

http://blog.johantibell.com/2011/08/results-from-state-of-ha...

Still, Haskell is largely a volunteer effort. Things won't magically improve just because enough people complain about it.


Yes, that's a catch-22, but one that every community has, and it is hard to overcome. I wouldn't have written my reply if I didn't think that the parent was overly positive and brushing away a few problems.

Thanks for the link btw.


> I found that if you go away from those 'core' libraries, quality drops sharply. Giving just one example may be unfair, but I'll do it anyway: the readline bindings (haskeline) didn't build on OS X for 2 major releases straight.

Haskeline is not a binding to readline or editline; it's pure Haskell. It does use libiconv, which is a chamber of horrors on OS X, and the source of your problems. But if you were using GHCi you were using haskeline, so it wasn't that it couldn't be built. It is an extremely high quality library -- surprisingly, or perhaps not surprisingly, considering that it was written by a mathematician.


Sorry, you are right, libiconv was the problem. Still doesn't change that it couldn't be built on OS X for a long while (0.5 to 0.6 as far as I remember, 0.7 fixed things).

No, I didn't use GHCi. I just tried to install darcs, among other things.


> Snap is well organized, but sometimes weirdly implemented around the corners (WAT? The templating library also serves static files?)

I don't quite understand what you're talking about here. Could you be more specific?


>but sometimes weirdly implemented around the corners (WAT? The templating library also serves static files?).

No it doesn't. There's a similar high level API, but that's it. Serving static files is part of snap-core, and done with serveDirectory/serveFile. Templates are part of the higher level snap module, and can be served in a similar style using heistServe.
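(For concreteness, a minimal sketch of that split, assuming the usual snap-core/snap-server module layout; the paths and routes are made up:)

    {-# LANGUAGE OverloadedStrings #-}
    import Snap.Core (Snap, route)
    import Snap.Http.Server (quickHttpServe)
    import Snap.Util.FileServe (serveDirectory, serveFile)

    main :: IO ()
    main = quickHttpServe $ route
        [ ("static",   serveDirectory "static")       -- a whole directory
        , ("logo.png", serveFile "assets/logo.png")   -- a single file
        ]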

>I would love to use any of them, but they are still not fit to build ecosystems like Django, Rails or Spring.

I'm not sure what you mean by that. Not fit to build ecosystems as in, have 3rd parties contribute "components" or something of that nature that people can optionally use? Because snap has that, they are called snaplets, and despite snap being quite new, there's already a budding little ecosystem there.

>I am surprised by how much the Haskell community always says "type system" first, but seems to put all other practical considerations in the backseat. Also, it is common to just dismiss concerns without addressing them.

That is simply not true. The Haskell community is very heavily focused on addressing practical concerns: Cabal is under constant work and discussion, and the developers are seeking funding so they can focus on it full time. A new extensible records module was released just this week to address the issues people have with Haskell records. It seems incredibly odd to keep seeing people claim that the community ignores feedback when so much of the community's goings-on is focused entirely on addressing that feedback.


>No it doesn't. There's a similar high level API, but that's it. Serving static files is part of snap-core, and done with serveDirectory/serveFile. Templates are part of the higher level snap module, and can be served in a similar style using heistServe.

https://github.com/snapframework/snap/blob/master/src/Snap/S...

While not strictly static, the heist snaplet silently adds a route that unconditionally serves templates in a fashion similar to a static serve.


It isn't "not strictly static", it literally has absolutely nothing at all to do with static content in any way. The heist snaplet adds a default "if nothing else matches, look for a template" route if you use the initialization function that says "sets up routes for all the templates". If you don't want that, then use the initialization function that doesn't do so (heistInit' which is right below heistInit in the docs, and linked to by the doc string of heistInit).

That seems like a really odd thing to try to characterize as "weirdly implemented".


I'm using Haskell for web development, although we're not the typical faddish startup.ly. I agree that the library thing is a non-issue. I've had absolutely no cases of wanting a library to do X and it not being there. Quite the contrary: the only library issue I've had so far is "there are 3 libraries for X, which one is the best?".


I don't even know Ruby, but it's clear that the idea that Ruby is an "amazingly primitive" language that people are using instead of Haskell because it has existing libraries and services or because "startups are not technical" is drastically wrong and almost insulting. To the contrary, Ruby, probably more than any other language, has a large community that's highly enthusiastic about its programming style and culture, which is the reason most of those libraries and ecosystems were created in the first place. I know this is just a few bullet points, but vaguely mentioning a few (good) reasons to evangelize Haskell (plus a motorcycle) along with three seemingly random relatively small issues as criticisms of Ruby (plus a horse and wagon) distorts reality almost to the point of unrecognizability.


Compared to GHC, MRI really is primitive. Parts of this are intentional: the object model is intentionally simple, and one of the design goals was to keep unixy goodness close to the surface, so the IO model, forking and so on are quite shallow shim layers over the system calls.

Other parts, like the threading, the GC, and the parser (oh god, the parser) are suffering from baggage that's never had GHC-levels of academic effort thrown at it.

> To the contrary, Ruby, probably more than any other language, has a large community that's highly enthusiastic about its programming style and culture, which is the reason most of those libraries and ecosystems were created in the first place. I know this is just a few bullet points, but vaguely mentioning a few (good) reasons to evangelize Haskell (plus a motorcycle) along with three seemingly random relatively small issues as criticisms of Ruby (plus a horse and wagon) distorts reality almost to the point of unrecognizability.
The presentation is making the point that MRI itself is primitive, and addresses the plus points of the community in the very next slide. For people who care about the implementation underlying their code as much as they care about their own code (or more), MRI's shortcomings are a visible downside.


> I don't even know Ruby, but it's clear that the idea that Ruby is X is drastically wrong and almost insulting.

That's a pretty big assertion to make for any non-trivial value of X. Let me try and break it down a little more.

He's spot on about CoW, this is a similarly big issue for memory consumption of long running perl applications (though in our case because of caching things in the value struct).

Lexical scoping ... after years of lisp and perl I find that any language that lacks that feels horribly primitive, to be entirely honest.

Concurrency ... hahahahaha oh dear. He should've played with the coroutine stuff before claiming events were the -only- way, but they're certainly the default way for a lot of dynamic languages, and they're definitely better for low level plumbing than high level code. Functional Reactive Programming approaches are amazingly nicer once you get the hang of them.

So - from those axes, yes, it's amazingly primitive.

You seem to be complaining that he didn't -mention- the community? He was trying to contrast; Haskell also has a thriving community so short of trying to get into a community DSW I don't see why he'd mention that.

All the rubyists I know would readily agree that, on the axes that he's referring to, ruby is far more primitive than haskell; and I think they'd also agree that a lot of current ruby adoption is about the ecosystem rather than the language and community, even though -they- personally are mostly more interested in the latter.

I do, however, think he should have dropped those slides even so; I knew as I was reading it that they'd provoke the counterproductive (from his POV) reaction that you've exemplified in a sufficient percentage of the audience to be not worth it.


I think he is referring to the implementation, MRI. First of all, it's not as primitive as some people make it out to be, and second, there are multiple alternatives that gave a lot of thought to how to fix MRI's problems (JRuby, Rubinius).


Yep, the OP is asking for a flamewar by putting the horse wagon and the "primitive" keyword in there.

...though I like the unintended message he's sending with the motorbike pic for Haskell: it's hot, fast, but it really can get you killed! :)


Although I'm mainly a C++ developer, I'm a big fan of functional programming languages.

Haskell is more difficult; it requires you to really grasp some computer science principles, whereas you can start hacking PHP/Python/Ruby right away. It requires you to actually think about what you're going to do instead of integrating a couple of libraries: that's a double-edged sword.

You need to understand that the added value of Haskell for most startups is near zero. It will be more difficult to hire, the final result will not be better, and in the end code quality matters less than business relevance.

If your technical founder is a Haskell guru and knows a couple of engineers, then sure, why not, it may provide some competitive advantage, otherwise just forget about it.


> You need to understand that the added value of Haskell for most startups is near zero.

I used to have that opinion as well, but I am slowly retracting it. Very strong typing and great abstraction do make development easier. E.g., I have been writing parsers lately for an application-specific macro language. Writing such parsers with a good Haskell parser combinator library such as Parsec is many times more productive than using the parser generators available for most imperative languages.
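(For a feel of the combinator style, a toy sketch; this is not the actual macro language, which isn't shown:)

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- Parse a macro call of the form name(arg,arg,...)
    identifier :: Parser String
    identifier = many1 (letter <|> char '_')

    macroCall :: Parser (String, [String])
    macroCall = do
        name <- identifier
        args <- between (char '(') (char ')')
                        (identifier `sepBy` char ',')
        return (name, args)

    -- parse macroCall "" "foo(a,b,c)"  ==>  Right ("foo",["a","b","c"])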

Another advantage is that strongly typed languages allow you to refactor code easily. If you change some types, the compiler will find the type mismatches, allowing you to fix the code rapidly. I can imagine that startups often refactor and substantially modify their codebases.

In the end you want to develop an application not only quickly, but also correctly. If a strongly typed language can be used to eliminate most errors and improve productivity, I see it as an asset to a startup.


A lot of context is lost in a slideshow. I am very much a fan of the experience of developing in Haskell. My problem was tangential stuff like Cabal & the difficulty of operationalising Haskell. It's the reason I spent a few fruitless weekends trying to get a heroku haskell buildpack: if it was trivial to deploy prototypes, it would be a much more appealing target for time-poor startups.


You have tools to help you find mismatches in Python. I agree that maintaining a large code base in Python (or Ruby) is harder, but it's possible and is mainly a question of discipline and process.

> If a strongly typed language can be used to eliminate most errors and improve productivity, I see it as an asset to a startup.

Most startup projects I've seen are technically trivial.


> Most startup projects I've seen are technically trivial.

Maybe this is due to the fact that technically non-trivial problems aren't easily solved by startups, who might tend to choose a technology that's easy to use (such as PHP/Ruby) but that makes it difficult to solve the technically non-trivial problems with the same degree of ease?

I'm just guessing here, having no real insight.


>Most startup projects I've seen are technically trivial.

Most startup projects are made by people who are only capable of solving technically trivial problems, and who are only willing to consider languages that make solving difficult problems borderline impossible.

Imagine someone said "welding is a good way to fasten metal to metal". You are responding with "all the carpenters I've seen use screws".


If the outlook is short-term, then I agree. You will always be able to hack out a quick solution in PHP/Ruby/Lisp much faster than in Haskell. But this advantage is only unmitigated if you fail and the code goes away before it has to be maintained. If the code is going to be around for a while, it's going to need to be maintained, and here the advantage of Haskell grows with the code base size. It's a bit deceiving, because the type of bugs that Haskell will catch automatically are not so much of a burden in a greenfield Ruby project; rather, it's years later, after bitrot and layers of iteration, when no one really understands the whole system anymore, that the advantages of Haskell really shine.

Granted, if you don't know Haskell, then it's not worth the cost to learn it with everything else you need to juggle when doing a startup; but if you can, I believe there are significant long-term advantages.


It really depends on the type of risk. My assertion is that most startups have most of their risk in the market bucket rather than the technical bucket. If you can move some risk from the market bucket to the technical bucket, and compensate by using Haskell, that may be a good tradeoff.

In the case where your risk is in the market bucket, you may go through many just-barely-working prototypes before you hit product-market fit. At that point, if you've saved some time using Ruby (say) because of external factors like Heroku, Bundler, and Ruby-friendly services, you're ahead of the game, even if your tenth prototype sucks and needs to be rewritten.



Ask HN: Anyone using Haskell in production? http://news.ycombinator.com/item?id=1909093


...I know that HN people are mostly *NIX oriented or dislike Microsoft technologies ...but how about F#? ...wouldn't it provide most of the advantages of an ML-family language, coupled with the "smoothness" of VS tooling and .NET integration that could take some of the deployment pain away and provide the "accelerated liftoff" needed by startups?

(It's a question, not a suggestion... I must admit I've never had the balls to use ML or Lisp-like languages in production so far...)


I think one of the issues with using Microsoft tools is licensing.

The perception around Microsoft is that things can stay cheap (or even free), but at a certain point you're going to be expected to shell out the big bucks.

From our own experience, we find that Microsoft software sort-of works with other Microsoft software. However, when you want something that isn't MS to work, you're immediately into serious financial outlay.

For a start-up, this simply isn't feasible. Even a well-funded start-up has better things to spend their money on than sending it straight to Microsoft's bank account.


Well, you got it in the first paragraph - because Windows isn't Unix, and people don't use it for production. F# is in theory nice, but devops'ing Windows, I just dunno, man.


I've heard there's also good F# support via Mono now (but then again, I dunno about devops'ing with Mono... I wish someone had the balls to try this and post about the experience...)


I've been deploying F# into production for a couple years, on to both Windows and Linux. They're rather simple daemons that talk to databases, message queues and filesystems, and do telecom logic (routing/billing).

Overall, Mono has been incredibly smooth. I prefer to compile from source, as RedHat doesn't come with packages for Mono. That takes time, but isn't a big deal. F# support seems lacking when it comes to some of the tooling (compiling, interactive).

mono-service, the daemon that acts like a Windows Service host (allowing the same code as on Windows, and ~5 lines for a minimal daemon), has sometimes acted weird, but it's been due to how it handles its runfiles and lockfiles, and once you know that, it's easy enough to make sure they're ok. (As in, if one user ran it, the perms are wrong, so another user can't even check it, and the error messages aren't there/clear.)

F# performance is on par for .NET. If a certain high-level feature is not generating great code, there's always the option to drop down or approach another way. F#'s much more flexible than C# when it comes to performance.

F# is why I stay on the MS stack. .NET is pretty compelling with its base library and great tool/developer support. But after seeing powerful languages, I just get so annoyed with the verbosity of things like C#.


Verbosity in C# practically disappeared with 3.0, released with .NET 3.5.

There are a couple of things left that annoy me, but mostly it's gone.


It got _better_, way better, much nicer than Java. But it's not even remotely gone. Type inference is crap in C#, and they even admitted that in some cases this was due to compiler design problems, not language design [1].

Even local vars can't be type inferred if they're functions, because of C#'s confusing decision to use the same syntax for lambdas and quoted code.

The ASP.NET MVC team resorted to using reflection on anonymous types, because there was no lightweight way to pass in a set of options at runtime. With a more expressive syntax, that'd be needless.

C# has statements that aren't expressions, which really bulks up code and adds flow for no reason. In F#, even "if" and "try" blocks are expressions which again keeps slimming things down, and more importantly, keeps the code simpler.

In one direct "line-by-line" translation (C# -> F#), F# reduced the number of type annotations I needed by 95% (to 1/20th).

No pattern matching (and thus no active patterns!), little type inference, no syntax for tuples, no lightweight function syntax, no code nesting, no workflows (just a weird hardcoded one for async), no top-level functions, no custom operators: C# is flat-out, downright clunky when compared to F#.

It may be one of those "you don't know what you'll miss until it's gone" kind of deals. I've used C# now and then, even last month on an entire project, and it just feels tiring.

1: http://blogs.msdn.com/b/ericlippert/archive/2009/01/26/why-n...


+1. Your story sounds a bit like mine and that guy's - http://lukeplant.me.uk/blog/posts/why-learning-haskell-pytho... :) It's a bit old, so some things don't apply anymore, but still.



It can be done. Ask StackExchange. And it has grown much MUCH better/easier from 2k8 R2 onwards.

Don't mistake me for a Windows fan, just pointing out that Windows has gotten a lot better than it used to be.


I think a better thing to mention is JoCaml; I think that is the true successor of *ML, much as Microsoft doesn't want to admit it.


There are also Clojure and Scala, with Java integration, which are cross-platform.


Yes, they are functional, but they are not "Haskell relatives" at all... Clojure is a Lisp (and I still have some reluctance imagining it being effectively used in a team of more than 3 programmers with more than 10 points of IQ difference between them...) ...and Scala is a "weird beast" - I dunno, but the "businessy" side of my mind tells me it's far from a silver bullet and may end up being too "complicated" for the problems I need to solve, and I don't like it when the tools I'm using are more complex than the problems I'm solving...


In practice, I'm not seeing it as a problem. It's a bit like C++ in that you can safely ignore all but a subset of the functionality. The problem is that everyone picks a different subset. :)

I'd say the compiler speed is still the biggest problem, and they're working on it.


You joke, but Scala's complexity is a very real problem. Specifically, abuse of implicits can make trying to understand Other People's Code quite the nightmare. Even reading the function type specifications in the official Scala docs is a chore.


Okay, I'll agree about implicit abuse. I feel like reading the function types is something I will eventually feel more comfortable with, as if it's my fault for picking a language with both the power and static typing. And I haven't figured out a better way to do it myself.


I would argue Haskell is more powerful than Scala, and also more readable, for the most part.

You can certainly find examples of unreadable Haskell, specifically in the vicinity of ivory towers — there must be something in the air up there which makes academics fond of operator line noise. However, compare the following Scala [1]:

    implicit def KleisliCategory[M[_]: Monad]: Category[({type λ[α, β]=Kleisli[M, α, β]})#λ] = new Category[({type λ[α, β]=Kleisli[M, α, β]})#λ] {
      def id[A] = ☆(_ η)
      def compose[X, Y, Z](f: Kleisli[M, Y, Z], g: Kleisli[M, X, Y]) = f <=< g
    }
…and Haskell [2]:

    newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

    instance Monad m => Category (Kleisli m) where
      id = Kleisli return
      (Kleisli f) . (Kleisli g) = Kleisli (\b -> g b >>= f)
[1]: https://github.com/scalaz/scalaz/blob/master/core/src/main/s...

[2]: https://github.com/ghc/packages-base/blob/master/Control/Arr...


Haskell is a lisp.

(With unconventional syntax, admittedly.)

Linked list as primary data structure? Check. Functional? Check. Macros? Check (implemented as normal functions, since with laziness you don't actually need macros for control-structure-type constructs).


"has X so you don't need macros" is not equivalent to "has macros" (otherwise nearly every language could claim to be "a lisp.")


But this isn't "because of X".

Anything you can write as a macro in Lisp, you can write as a normal function in Haskell, without special gymnastics.
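(The standard illustration: because arguments are evaluated only on demand, control structures are ordinary functions. A tiny sketch:)

    -- A user-defined if. In a strict language this would force both
    -- branches; in Haskell only the chosen branch is ever evaluated.
    if' :: Bool -> a -> a -> a
    if' True  t _ = t
    if' False _ e = e

    main :: IO ()
    main = putStrLn (if' (1 < 2) "then branch" (error "never forced"))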


Not quite everything. Macros can define new types, for example.


...hmmm, never thought of normal function being like macros because of laziness :)


I've no doubt this presentation makes some interesting points, but I'm about as inclined to finish reading it after noticing it's destroying my browser history as I would be to continue evaluating software after noticing it's randomly corrupting my home directory.

Please leave history alone or at least use window.history.replaceState().


Also a bloody pain to read on a mobile device.


It's not exactly easy to read on a desktop either.


"strong static types save time"

I would find the presentation without controversial points at all were it not for this one sentence :)

Edit: aaargh, I treated mention of Ruby primitivism as a joke and forgot about it; if taken seriously it can be controversial as well.


Every single NoneType error I have gotten in the Python I'm paid to write would go away with a static type system.

Every single bug from changing a function's signature and then failing to change one caller would go away with a static type system.

I cannot count the number of times I've had to stop and write down the types of things I wanted to reason about, and then keep all that in my head, to figure out what was going on in a large code base. Guess what? Most of the types were static. If I had Haskell's good type inference, I wouldn't have had to do any of this time-wasting endeavor.

The statement isn't controversial. Correct code type-checks. When there are type errors in your code in a dynamic language, they become bugs found at runtime.
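(A toy sketch of the signature-change point; hypothetical code, not from any codebase discussed here:)

    greet :: String -> String
    greet name = "hello, " ++ name

    main :: IO ()
    main = putStrLn (greet "world")
    -- If greet's signature later changes (say, to Int -> String),
    -- this call site stops compiling instead of failing at runtime.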

[edit for formatting]


Dynamically typed languages are used every day to write robust software (Erlang, Lisp, etc...).

You're making a common mistake, which is to reason about static typing's advantages with everything else being equal. Sure, if static typing had no drawbacks, it would be insane for any language not to be statically typed, for some of the reasons you're giving (and others). But everything is not equal. Static typing is a tradeoff, and lots of its advantages are really caused by something else (or arise in combination with something else). Don't get me wrong, Haskell is an impressive feat (I'm just starting to learn it), but in Haskell, typing comes together with advanced type inference, purity, etc...

In my experience being paid to write Python, a lot of NoneType errors, attribute errors, etc. often have a deeper underlying cause linked to bad architecture and bad programming practices. I am not convinced typing would make improving those projects easier (certainly, a primitive typing system a la C/C++/Java does not, in my experience).


Type error: "everyday" is an adjective, adverb expected in this context ;)

The examples you cite would be detected by a type checker. The attribute errors wouldn't even need a complicated one. (I've never had an attribute error in C.) It's fine to complain that these things are due to bad architecture, in whatever way that might be, but if there were a type checker involved then the code wouldn't even build if there were a type problem. It's fine to rail against poor design, or systems that are hard to use for no good reason, but in the mean time the code usually has to actually work.

(And if there were no type problems, it would run fine without one, bad architecture or no.)


Most code is static, but dynamic runtimes make a bunch of things fairly easy that are either hard or impossible with static ones, like introspection and serialization (you can do serialization in a static language, but you can't just slap in a call to JSON.load on a file with complex structure and then access the result the same way as native types), proxy/remote objects, monkey patching (in various forms - raw monkey patching is bad style anyway, but even things like random global callbacks are hard or bad style in most static languages), objects that pretend to be A but are really B (perhaps to avoid changing existing code; perhaps to implement things like "pseudo-string that's an infinite series of As" or "rope" or "list that loads the relevant data from a file when an item is accessed" without having to change all the code to make "string" and "list" typeclasses/interfaces; perhaps for mocking during testing), dynamic loading, REPL (in running code, that gives you the flexibility to change anything you want), ...

The benefits of that kind of stuff are arguable, but I think the net effect is that static languages, even when they save you from having to write out a lot of types, encourage a fairly different style from dynamic languages, which I prefer.

p.s.: you don't need a static type system to use an option type. :)


Many people conflate the features of a statically typed language with those of a language without a runtime. Many of these features (dynamic loading, a REPL, etc.) are obtainable in a language that is statically typed but provides a runtime, and that should be noted (especially since the words static and dynamic are heavily overloaded).


> The benefits of that kind of stuff are arguable

I'll argue for and against some of these points. My perspective should be somewhat contentious as a dyed-in-the-wool Haskell user. We take the side-effects and the static type stuff to an extreme. Should be more interesting to read.

> you can do serialization in a static language, but you can't just slap in a call to JSON.load on a file with complex structure and then access the result the same way as native types)

That's accurate. It's kind of the point. If a field of a data structure is going to disappear depending on the contents of some arbitrary input, I'd consider that a flaw. It's convenient to say foo.bar as opposed to foo ! "bar", but the latter (in Haskell) is explicit about the expected type it infers. For example, I can do this:

    λ> json <- readFile "/tmp/foo.json"
    λ> putStrLn json
    → {"firstName": "John", "lastName": "Smith", "age": 25, "address": { "city": "New York"}}
    λ> do let decade x = div x 10 * 10
              group n  = show (decade n) ++ "-" ++ show (decade (n+10))
          person    <- decode json
          firstname <- person ! "firstName"
          age       <- person ! "age"
          address   <- person ! "address" >>= readJSON
          city      <- address ! "city"
          return ("The name is " ++ firstname ++ " who is age " ++ group age ++ ", and he lives in " ++ city)
    → Ok "The name is John who is age 20-30, and he lives in New York"
Whether `person' is a string, or `age' is an int, is inferred by its use. I could also explicitly add type signatures. Type inference and type-classes give you something that you don't have in Java, C++, C#, Python, Ruby, whatever. The decode function is polymorphic on the type it parses, so if you add a type annotation, you can tell it what to parse:

    λ> decode "1" :: Result Int
    Ok 1
    λ> decode "1" :: Result String
    Error "Unable to read String"
Or you can just use the variable and type inference will figure it out:

    λ> do x <- decode "1"; return (x * 5)
    Ok 5
    λ> do x <- decode "[123]"; return (x * 5)
    Error "Unable to read Integer"
    λ> 
So you have (1) a static proof that the existence and type of things are coherent with your actual code that uses it, (2) the parsing of the JSON is left separate to its use. And that's what this is, parsing. The code x * 5 is never even run, it stops at the decoding step. Now use your imagination and replace x * 5 with normal code. If you take a value decoded from JSON and use it as an integer when it's "null", that's your failure to parse properly. What do you send back to the user of your API or whatever, a “sorry an exception was thrown somewhere in my codebase”?

If you want additional validation, you can go there:

    λ> do person <- decode json
          firstname <- person !? ("firstName", not . null, "we need it")
          return ("Name's " ++ firstname)
    Error "firstName: we need it"
Validated it, didn't have to state the type, it just knew. Maybe I only validate a few fields for invariants, but ALL data should be well-typed. That's just sound engineering. This doesn't throw an exception, either, by the way. The whole thing, in all these examples, is in a "Result" value. The monad is equivalent to C#'s LINQ. Consider it like a JSON-querying DSL. It just returns Error or Ok.

Types can also be used to derive unit tests. I can talk about that more if interested.

> proxy/remote objects

Again, the above applies.

> monkey patching (in various forms - raw monkey patching is bad style anyway, but even things like random global callbacks are hard or bad style in most static languages),

Well, yeah, as you say, monkey patching isn't even a concept. I don't know what a random global callback is for. Sounds like bad style in any language.

> objects that pretend to be A but are really B (perhaps to avoid changing existing code; perhaps to implement things like "pseudo-string that's an infinite series of As" or "rope" or "list that loads the relevant data from a file when an item is accessed" without having to change all the code to make "string" and "list" typeclasses/interfaces; perhaps for mocking during testing)

That's true. There is no way around that. I was recently working on a code generator and I changed the resulting string type from a list of characters to a rope; technically I only needed to change the import from Data.Text to Data.Sequence, but it's usually a bit of refactoring to do. (In the end, it turned out the rope was no faster.)

> dynamic loading

Technically, I've run my IRC server from within GHCi (Haskell's REPL) in order to inspect the state of the program while it was running, to see what was going on with a bug. I usually just test individual functions in the REPL, but this was a hard bug. I even made some functions updateable; I rewrote them in the REPL while it was running. I've also done this while working with my Kinect from Haskell and doing OpenGL coding. You can't really go re-starting those kinds of processes. But that's because I'm awesome, not because Haskell is particularly good at that or endorses it.

GHC's support for dynamic code reloading is not good. It could be; it could be completely decent. There was an old Haskell implementation that was more like Smalltalk or Lisp in the way you could update code live, but GHC won, and GHC doesn't focus much on this aspect. I don't think static types are the road-block here; in fact I think they're very helpful with migration. In Lisp (in which live updating of code and data types/classes is bread and butter), you often end up confused with an inconsistent program state (the 'image') and are waiting for some function to bail out.

But technically, Ruby, Python, Perl and so-called dynamic languages also suck at this style of programming. Smalltalk and Lisp mastered it in the 80's. They set a standard back then. But everyone seems to have forgotten.

> REPL (in running code, that gives you the flexibility to change anything you want), ...

See above.

> The benefits of that kind of stuff are arguable, but I think the net effect is that static languages, even when they save you from having to write out a lot of types, encourage a fairly different style from dynamic languages, which I prefer.

Yeah, some of them are good things that static languages can't do, some are bad things that static languages wouldn't want to do on principle, and some are good things that static languages don't do but could.

> p.s.: you don't need a static type system to use an option type. :)

This is a pretty odd thing to say, because you don't need an option type in most if not all dynamic languages; they all have implicit null anyway. All values are option types. The point of a stronger type system is that null is explicit, i.e. null == 2 is a type error. And a static type system tells you that before you run the code.


> This is a pretty odd thing to say, because you don't need an option type in most if not all dynamic languages; they all have implicit null anyway. All values are option types. The point of a stronger type system is that null is explicit, i.e. null == 2 is a type error. And a static type system tells you that before you run the code.

Well, that's a problem, actually. After encountering option types, it's hard to live without them. Because you want to be able to mark that parameters A and B should not be null, but C may be. And unless you have a very good static analyzer, you are constantly at the mercy of a nasty NPE somewhere in your codebase.


And then you need to write some serialization code or IPC, and the static type system is a pain or just makes it plain impossible (in which case this is a shortcoming of that particular type system, not the idea itself, but still).

No, I don't want to argue, and I'm not religiously in favor of either unityped or typed languages. I have no problem admitting that there are things that are made easier by static typing. I'm having a good time writing both OCaml and Erlang. On the other hand, it worries me that proponents of static typing have difficulty admitting that even the most sophisticated type systems are not suited to certain other tasks.


To me it seems that the point about serialization is completely moot in languages like Haskell (which is what we are talking about). Serialization may be hard(er) to do in a language like Java or C++, where we have a rooted type hierarchy and can't easily add new functionality or do type-based dispatch. Haskell's type classes give us the freedom to define new kinds of functionality on top of existing types easily. Look at libraries like aeson, which allows serialization and deserialization from JSON with just a few lines of code. The biggest problem is that people's view is strapped to this old dynamic-Lisp-vs.-C kind of paradigm that doesn't exist in modern functional languages.
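(A rough sketch of the "few lines" claim, assuming a reasonably recent aeson with generic deriving; the record type is made up:)

    {-# LANGUAGE DeriveGeneric #-}
    import Data.Aeson (FromJSON, ToJSON, decode, encode)
    import GHC.Generics (Generic)
    import qualified Data.ByteString.Lazy.Char8 as BL

    data Person = Person { name :: String, age :: Int }
        deriving (Show, Generic)

    -- The Generic instance lets aeson derive both directions for free.
    instance ToJSON Person
    instance FromJSON Person

    main :: IO ()
    main = do
        BL.putStrLn (encode (Person "John" 25))
        print (decode (BL.pack "{\"name\":\"Jane\",\"age\":30}") :: Maybe Person)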


> Every single NoneType error I have gotten in the Python I'm paid to write would go away with a static type system.

Not true. Java has NullPointerException, and static typing does nothing to prevent it. Java doesn't have Maybe, or Option or whatever you call it, but `Option.IsNone` isn't any different from `if (obj == null)`.

> Every single bug from changing a function's signature and then failing to change one caller would go away with a static type system.

Yes, static type systems are great for that. But if you are using Python, use pylint and rope.

> If I had Haskell's good type inference, I wouldn't have had to do any of this time-wasting endeavor.

I don't know about you but changing signatures is very low in my list of pain-points. If it's solved for my environment, superb. If it's not, it will sting once in a while but that's that.


It is worth noting that in Haskell you don't have something like NullPointerException (the type system won't allow for that), and using Maybe actually is a bit different from writing 'if (obj != null)' everywhere, if you are into stuff like monads or functors. Besides, in Java a null pointer can pop up pretty much anywhere, but in Haskell you probably shouldn't store all your data in Maybe.


> It is worth noting that in Haskell you don't have something like NullPointerException, the type system doesn't allow for that - so while your point is relevant to Java it doesn't hold here. The closest you can get to something like a null pointer is using the 'Maybe' type, but when you do so, you have to explicitly handle everywhere what happens if the variable has no value (or has the value 'Nothing', more precisely).

Which, as I mentioned, isn't any different from checking for nulls everywhere. Or, if you are so inclined, write an Option class with the desired interface and use it everywhere the value is nullable. My point is that using Maybe is the same as manually checking for null.

    case maybeValue of
      Just value -> ...
      Nothing    -> ...
is the same as

    if (val != null) {
        ....
    } else {
        ...
    }
If you see the compiler forcing you to always use Maybe for nullable types as an advantage, good for you. Personally, I don't see it as a big deal.


Maybe is a monad, which means Maybe computations can be chained together with `>>=` (or using `do` notation) without checking for `Nothing`. You can easily produce a large composition of potentially-failing computations while completely ignoring the possibility of failure.

The case analysis you give as an example is only required at the point when you want to extract the final result into a different monadic context, and even then you would typically use the `maybe` or `fromMaybe` functions to make it more concise.

Only a novice Haskell user would write:

    case comp1 of
        Nothing -> handleFailure
        Just r1 ->
            case comp2 r1 of
                Nothing -> handleFailure
                Just r2 ->
                    case comp3 r2 of
                        Nothing -> handleFailure
                        Just r3 -> handleResult r3
which is indeed, just as bad as explicit null checking in C or Java, with runaway indentation to boot. But anyone who understands Haskell's rich abstractions would instead write:

    maybe handleFailure handleResult $ comp1 >>= comp2 >>= comp3
The fact that you can't forget the "null check" without the compiler telling you about it is a nice convenience afforded by the strong type system, but it's far from the only benefit.


> Maybe is a monad, which means Maybe computations can be chained together with `>>=` (or using `do` notation) without checking for `Nothing`. You can easily produce a large composition of potentially-failing computations while completely ignoring the possibility of failure.

Like

     val = foo.bar.blah.boo rescue nil
Or

    try {
        val = foo.bar().blah().boo()
    } catch(NullPointerException ex) {
    }
Or

    try:
        val = foo.bar().blah().boo()
    except AttributeError:
        pass
Yes, I know Maybe is a monad, but that doesn't make a difference to me. A series of computations where each step depends on the result of the previous step, and the previous step can return null, is hardly an issue in any language.

> The case analysis you give as an example is only required at the point when you want to extract the final result into a different monadic context, and even then you would typically use the `maybe` or `fromMaybe` functions to make it more concise.

The case analysis I give is an example where either there is a value or null and I need the value. I don't really care how Haskell defines monadic context.


> isn't any different from checking from nulls everywhere

You would only check for nulls "everywhere" if all functions could potentially return null. Then, indeed, there is no difference. But (hopefully) not all functions will return null, so you'll probably have at most a single-digit percentage of functions that do.

The point is that by using Option, you are explicitly stating "This function can return null", making it impossible for the caller to ignore. If you write it somewhere into the docs, it is indeed easy to overlook.

This may not be as relevant when you are only working with your own code, but I find this (and static typing in general) most helpful when dealing with code from someone else, including the libraries one uses.


> But (hopefully) not all functions will return null, so you'll probably have at most a single-digit percentage of functions that do.

In a real world API, almost everything that returns an object can return null or throw an exception.

> The point is that by using Option, you are explicitly stating "This function can return null", making it impossible for the caller to ignore.

If being impossible to ignore is the motive, throw an exception.

> If you write it somewhere into the docs, it is indeed easy to overlook.

It might be overlooked. But is it too much to assume that someone making a call will check the parameters and the return type?


There is a fundamental difference between throwing an exception and returning null. From http://en.wikipedia.org/wiki/Exception_handling: "Exception handling is the process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional situations requiring special processing – often changing the normal flow of program execution." One should throw an exception when an anomalous situation appears, e.g. I cannot connect to the database. Whereas returning null / returning an Option means that this case needs to be treated in the normal flow of execution, e.g. asking a Person object for its spouse. It is perfectly reasonable that a random person isn't married (so throwing an exception is wrong), but at the same time it should be impossible for the caller to ignore.

> It might be overlooked. But is it too much to assume that someone making a call will check the parameters and the return type?

http://news.ycombinator.com/item?id=4695587

Apparently it is too much to ask, even if the program is performing something super-important for security like SSL.


> It is perfectly reasonable that a random person isn't married (so throwing an exception is wrong) but at the same time it should be impossible for the caller to ignore.

The whole discussion started from the claim that Haskell makes the NullPointerException/NoneType non existent. And I am just saying it isn't any different from how you enforce it in any other language - you either handle nulls or throw exceptions.

> (so throwing an exception is wrong)

I am sorry, but I don't play the "throwing an exception is wrong" game. I use exceptions for actual exceptions, for control flow, and for must-handle scenarios. I don't see what the definition of exceptions has to do with how I use them, as long as they make my program logic clear or express my intent. The only reason I think twice before using exceptions for things which aren't exceptional is the stack trace that gets captured, and most of the time that cost is so minuscule that it doesn't matter. Ruby has the best compromise in that it defines catch..throw; most languages don't, so I resort to using exceptions.


Sorry, but it is not a game, it is a convention that afaik holds true for ALL languages that use exceptions. Using exceptions for control flow is widely accepted as a code smell.


> Sorry, but it is not a game, it is a convention that afaik holds true for ALL languages that use exceptions. Using exceptions for control flow is widely accepted as a code smell.

Unless using exceptions either hinders performance (they are expensive, but I have yet to see a case where it matters) or makes the control flow incomprehensible, it doesn't matter what is widely accepted. I need a reason of the form "don't use exceptions because...", and "exceptions are for exceptional conditions" or "just because it's widely accepted" doesn't cut it.

I am pretty sure you have very strong opinions about goto as well, which I use a lot when writing C. It's simply the cleanest way to jump directly to the cleanup code instead of convoluting the flow with unneeded flags. Since you place so much weight on what others deem acceptable, you can look at the Linux kernel code and Stevens's code in Unix Network Programming.

Also, exceptions are control flow in every sense of the word, though non-local: http://en.wikipedia.org/wiki/Control_flow#Exceptions. I don't know where the notion of exceptions not being control flow came from. Among other examples of exceptions for control flow, Python and Ruby raise StopIteration in their iteration protocols. And in Python, exceptions aren't that costly.


You don't have to handle it everywhere explicitly. This is what Functor, Applicative, Alternative, Monoid and Monad are for. They will do the plumbing for you. Eventually you will unpack the values, but this is only necessary when you change environment.

Say we have a failing computation:

    failComp = Nothing

and a couple of succeeding computations:

    sucComp  = Just 1
    sucComp2 = Just 2

We can use the typeclasses to avoid explicit unpacking:

    -- (assumes Control.Applicative and Data.Monoid are imported)

    -- Monad; result: Just 3
    add = do
      x <- sucComp2
      y <- sucComp
      return $ x + y

    -- Applicative; result: Just 3
    addapp = (+) <$> sucComp <*> sucComp2

    -- Alternative; result: Just 1
    val = failComp <|> sucComp

    -- Monoid; result: Just (Sum 2)
    -- (Maybe's Monoid instance requires the wrapped type to be a
    -- Monoid as well, hence the Sum wrapper)
    mon = failComp <> fmap Sum sucComp2

    -- Functor; result: Just 6
    func = fmap (*3) sucComp2


(It's a little late now, but if you indent a line by 2 or more spaces, HN will format the text as code, so you don't lose the formatting.)


In Java, the compiler won't tell you that a variable may be null. In Haskell (at least when you compile with -Wall; it's strange that this particular warning isn't on by default) you'll get a warning if you've failed to handle all the variations of data that you've provided.

There's a good presentation by Yaron Minsky about OCaml in the real world where he cites this as a major advantage (in OCaml, failing to match all patterns produces a warning by default).
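For instance, a minimal sketch of what that looks like in Haskell (GHC's -Wall enables -fwarn-incomplete-patterns):

    -- GHC warns here that the Nothing case is unhandled
    describe :: Maybe Int -> String
    describe (Just n) = "got " ++ show n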


> In Java, the compiler won't tell you that a variable may be null. In Haskell (at least when you compile with -Wall; it's strange that this particular warning isn't on by default) you'll get a warning if you've failed to handle all the variations of data that you've provided.

    Connection con = DriverManager.getConnection(
                         "jdbc:myDriver:myDatabase",
                         username,
                         password);

    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM Table1");

    while (rs.next()) {
        int x = rs.getInt("a");
        String s = rs.getString("b");
        float f = rs.getFloat("c");
    }
It's not that bad. In the above example, you either get an SQLException or a ResultSet. ResultSet is your option type here. It won't be null - it may or may not contain values.

There might be a lot of good things about Haskell (I am not familiar enough to make the call), but seriously, Maybe doesn't look that great.


That's right, I've changed my comment to be a more relevant reply, sorry for the confusion.

> If you see the compiler forcing you to always use Maybe as an advantage, good for you. Personally, I don't see it as a big deal.

One of the reasons I like Maybe is that I could accidentally put null somewhere I know I should not, and Haskell's type system will prevent me from doing that. I wouldn't use Maybe unless I really needed to - whereas in languages with less strict type systems, almost everything is a 'Maybe'. Besides, it seems easier to build a layer of abstraction on top of it, which you can reuse to save some time (for example, there is a monad instance for it - though I don't think I've ever used it).
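A minimal sketch of what I mean (greet and safeGreet are made up for illustration):

    import Data.Maybe (fromMaybe)

    greet :: String -> String
    greet name = "Hello, " ++ name

    -- greet (Just "Ann")   -- rejected by the type checker
    safeGreet :: Maybe String -> String
    safeGreet m = greet (fromMaybe "stranger" m)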


> whereas in languages with less strict type systems, almost everything is a 'Maybe'

I don't know Haskell (I do know F#), so help me with this: how is everything not a Maybe in Haskell? In the languages I know, a method that returns an object can throw an Exception, return null, or return a valid value. When it throws an Exception there is no Maybe; otherwise it's effectively a Maybe. How is it any different in Haskell?


I'm a huge fan of Python, and yet, after just the most superficial introduction to Haskell, I couldn't agree with you more.

I don't think it is the static typing alone that saves time, though. It is the static typing plus type inference.


I like dynamic typing for short scripts, but after working in Fortune 500 projects, I got to love static typing.

Projects with 50+ developers using dynamic languages become unmanageable after a few months of development time, even with unit tests.

Nowadays I'd rather advocate statically typed languages with type inference. Most use cases where dynamic behavior is handy can be handled with reflection or meta-programming.


Speaking of my own experience on this subject: A large part of my career over the last decade has been working in Erlang. I'd estimate that once I had a firm understanding of the language and the platform, most of the problems in my code have been runtime type errors.

The erlang distribution includes an optional static analysis tool called dialyzer, and for a little over two years I've used it consistently. Without question it has provided a significant improvement in my productivity and the quality of software I deploy. Also, the type annotations in the code (such as they are..) do help with documentation (edoc) and human understanding.

So, yeah, it's a controversial topic, but 'compile time' type validation seems to work for me. ymmv, of course!


Ah, powerpoint presentations where the truth is somewhere between the bullet points. I would have preferred a blog post which gives the author enough breathing space to justify his claims.

Haskell's single biggest problem is that it looks unfamiliar to programmers who are trained in traditional imperative languages. Due to its minimal syntax, its error messages can be confusing to a learner. New languages like Scala and Clojure are packaging many of Haskell's concepts in a more familiar syntax. They are gaining popularity since they actually solve hard problems like concurrency.

In my opinion, startups prefer Ruby because they prefer to start solving the problem right away without worrying too much about its correctness. A classic case of "Worse is better" works for most web applications. Whereas, Haskell requires some thinking into how to best model the solution before you start implementing it. This often leads to more 'correct' code.


I think that one of its biggest problems has been the vast majority of the "Teach Yourself Haskell" books/articles.

When I first had a go at learning it, I ran into two common problems with them. Firstly, they usually did a great job of showing you how to write a quicksort or a Fibonacci sequence, but then struggled to explain how it applied to most everyday challenges (things like I/O - and yes, I sort of understand the complexity around that in Haskell, but as a developer, it's one of the key things I'm likely to want to do).

The other problem was that examples were typically full of what I would consider appallingly named/abbreviated functions, parameters, etc.

"fib' (f, g) p"

for example.

When you're already struggling to understand language concepts, having your code full of "f, g, and p" and trying to remember what they mean really doesn't help.

I think it's improving (Learn You a Haskell looks pretty good, but I've not made a serious attempt at re-learning since that appeared), but it's always been a bit of a stumbling block for me in the past.


> The other problem was that examples were typically full of what I would consider appallingly named/abbreviated functions, parameters, etc.

As adimitrov points out, this is in many ways a carry over from mathematics. However, this isn't the only factor.

For the parameters:

Expressions and functions in general are usually much shorter in Haskell than in other languages, so one can see how the variable is being used, e.g. the conventional definition of map is:

  map f [] = []
  map f (x:xs) = f x : map f xs
One can tell that f is a function and that it's doing something to the first element of the second argument and then recurring on the rest of that list. That is much, much nicer than (at worst)

  map function list = if null list then [] else ((function (head list)):map function (tail list))
Secondly, the types tell a lot about a function and its parameters; for the above

  map :: (a -> b) -> [a] -> [b]
one can easily see that the first argument is a function and the second is a list, and the specific types mean that there is a single sensible way to combine them: apply the function to every element of the list.

(Lastly, there are conventions, which take a little time to learn, but they help. E.g. f, g, h for parameters which are functions, (x:xs), (y:ys) for head/tails of lists.)

However, naming the functions poorly is a problem. (But types do help with this too, especially with Hoogle[1].)

[1]: http://haskell.org/hoogle


The x:xs one is fine as long as it's properly explained in the tutorial - e.g. it's a list of items, with x being the first item and xs being all other items in the list.

The

    map :: (a -> b) -> [a] -> [b]

may well be abundantly clear once you understand Haskell's types, but it is rather less so when you're first learning. In that particular case, I'm not entirely sure I could come up with a clearer set of names, but there were plenty of times when I was first learning that I really struggled to get to grips with a concept simply because the naming left me wondering why I was passing an A into a Q to get a list of Zs (or whatever it was).
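For what it's worth, longer type variable names are perfectly legal, so a tutorial could have written something like (a hypothetical variant):

    map :: (input -> output) -> [input] -> [output]

though I'm still not sure that's actually clearer.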


I'm not sure how specific to Haskell this actually is, though. Think of templates in C++ (esp. template functions) or generics in Java. You use T or whatever as a placeholder for actual types. Any time you need more than one type, you're forced to make a similar decision about how to describe a type about which you know very little.

It's a heck of a lot more concise than either of those, though. Might this be the actual stumbling block? You're saying a lot about map -- in a very precise way, mind you -- without writing a lot.


C++ templates and generics are relatively advanced features, though. I wouldn't expect to teach them in lesson 2 of the language.

And what you would call a parameter in a production app is not necessarily what you'd call it in a tutorial.


The other problem was that examples were typically full of what I would consider appallingly named/abbreviated functions, parameters, etc.

That's mostly due to the mathematical or computer-science background of most people who wrote these articles in the early days. It's entirely normal in mathematics etc. to use brief variable names, and it carried over into Haskell because of the large cultural overlap between Haskellers and academics.

For short examples, I hugely prefer the short notation with "appalling" names, but that might be due to my background in formal logic and mathematics. It doesn't help in software development, and it's poison in larger applications, that's for sure. But for short examples, you get used to it pretty quickly.

You should check out both Real World Haskell and Learn You a Haskell (both are also freely available online.) They're really well-written, and part of why Haskell's community is so awesome.


I wouldn't say Clojure has a more familiar syntax than Haskell. If anything, it's the opposite.


Does anyone really think that a slide presentation is a better way of saying what the author wanted to say than an essay?

And if so, do they also think that Norvig's Gettysburg Powerpoint Presentation is better than Lincoln's Gettysburg Address?


What does CoW mean? As in "GC breaks CoW & caching". There are quite a few acronyms and supposedly even some animal with these three letters.


Copy-on-write.


Is this a slideshow? I can't do anything here in FF 16.0.1.


Ditto. I just turned off page styles and read the text. There's not much of it.


Try the cursor keys?


What about virthualenv (virtualenv for Haskell)? Does that help alleviate some of the cabal issues?


It worked well for me, and there's cabal-dev too. BTW, I think virthualenv is now called hsenv.


I'll consider Haskell for serious programming when (and if) it gets a decent record system.

http://bloggablea.wordpress.com/2007/04/24/haskell-records-c...


That blog post seems fairly outdated. There are several lens libraries that provide decent record support:

http://stackoverflow.com/questions/5767129/lenses-fclabels-d...

Personally, I'd much rather have a better baked-in record system, but that's unlikely to change substantially now.


The state of the art is the amazing `lens` package https://github.com/ekmett/lens/wiki/Overview . See this example, https://github.com/ekmett/lens/blob/master/examples/Pong.hs
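A minimal sketch of the record-update style it enables (Point and shiftRight are made up for illustration):

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    data Point = Point { _px :: Double, _py :: Double } deriving Show
    makeLenses ''Point

    -- update one field without rebuilding the record by hand
    shiftRight :: Point -> Point
    shiftRight p = p & px +~ 1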


Looks great! I hope it gets more broadly adopted. Thanks for sharing.


The post might be old, but AFAIK there have been no significant changes in the built-in record system since.

Sure, there are great lens libraries but until the Haskell community blesses one of them the result is a mess of many incompatible record systems. Not exactly the best way to foster a thriving open source community.


"startups arent technical"

I guess YOUR startup isn't technical... But there are in fact technical startups out there. It's just that they make more money than yours, do better, and are generally cooler.


My startup involved custom hardware and realtime messaging. I still would have been better off in the early stages writing it in Ruby.


...upvoted you for the pure and refreshing "geek power" spirit of the comment :) ...really miss this attitude lately, too much "business-think" everywhere ...now back to real work


> Ruby is amazingly primitive

What? In what sense is Ruby primitive?


When you're comparing MRI to GHC? Painfully simple GC. Simplistic green thread model (with recent changes, but still fundamentally crippled compared to GHC). Mixed metaphor abstractions in core and stdlib. A very ad-hoc development culture in core. Limited debugging and source-level warning support. No JIT or useful AOT compilation. A thrown-together syntax.

Some of these aren't weaknesses. Others are.


Then I guess he should have said "MRI is primitive", or "Ruby interpreters are primitive". As it is, it looks more like a typical language fanboy rant than an objective technical argument.


If he's making explicit reference to CoW unfriendliness, there's not much ambiguity there.


"primitive" is an awkwardly chosen word with negative connotations... I suppose it is in the sense that it doesn't do type inference (or reasoning about types at all, for that matter). Like a Lisp it simply executes what you write without any validation beyond syntax checking.

The idea is that Haskell's type-based, lazy approach allows a more high-level style: you specify what you want instead of how you want it executed. I'm not sure it always succeeds in that, but that's another matter. It's an interesting language nevertheless, with elegant code, but if I need to build something quickly I fall back to Python (which is "primitive" in the same sense as Ruby).
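A tiny illustration of that "what, not how" style (firstSquares is made up):

    -- describe the whole infinite list; laziness evaluates only
    -- the five elements actually demanded
    firstSquares :: [Integer]
    firstSquares = take 5 [ n * n | n <- [1..] ]   -- [1,4,9,16,25]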


> Like a Lisp it simply executes what you write without any validation beyond syntax checking

But then in Common Lisp, when something goes wrong, you drop into the debugger and can look at your code's state. In Python/Ruby my program just goes poof! and I get a backtrace. Then I either have to add statements around the suspected cause of the bug to inspect the environment, or restart with a debugger and retrigger the bug.

The Common Lisp Way(tm) is also available in Smalltalk; I think those systems give you the tools to deal with their dynamic nature, while python and ruby (and others as well) do not.

And then languages with more static type systems can give you the tools to deal with their static nature. In Haskell that would be powerful type inference and an expressive type system.


Well, dropping to the interactive prompt is a statement or two in Python. You could do that when something goes wrong (for example, in the inner, or outer exception handler of your application).

There are also tools for Python that let you browse/modify the state of a program remotely through a debugger, for example winpdb, and others included in IDEs.

Overall, dynamic debugging support is quite good, if you are willing to search around a bit. But probably not as good as in CL or Smalltalk, if they're specifically engineered for these kinds of things. I don't know enough about them to comment on that.


> Like a Lisp

You know about Typed Racket and possibly other efforts, right?

http://docs.racket-lang.org/ts-guide/


Well not really. I had a vague sense that there are extensions that add typing, but not more. My experiences with Lisp are mostly limited to SICP. Thanks for the link.


No problem. Generally, talking about Lisps in this context is not a very good idea - I couldn't find the link, but I remember that in the comments on Lambda the Ultimate someone demonstrated the equivalence of static type systems and Lisp macros by using macros to provide the compile-time guarantees that the static type system under discussion gave. I'll try to find that discussion later, or maybe someone has the link ready?


I'm just guessing here, but maybe he is not talking so much about the language itself as about the (default) VM? AFAIK, the VMs of Ruby (but also of Python and node.js) do not support parallel execution of threads (which is usually worked around by starting multiple VMs and using an extra cache).

Compared to the VMs of Haskell, Erlang or the JVM one might call this primitive.

(I definitely wouldn't call it amazingly primitive and I am aware there is JRuby / Jython)


MRI 1.9 supports parallel execution of threads. It's using OS threads. No more green threads.


Even MRI 1.9 does not support parallel execution of Ruby code; the GIL ensures only one thread runs Ruby code at a time. JRuby and MacRuby do, however.


I think it is very positive. I read it as "Things are being done, please be patient. B.t.w. what's done is insanely great".


    * $LANGUAGE is amazingly primitive
    * GC breaks CoW & caching
    * dynamic -> lexical scoping between $OLD_VERSION & $NEW_VERSION
    * concurrent behaviour requires event-driven style
Exactly the same could have been written about Lisp in the late 80s


Eh? Haskell programs can be really slow, especially when written by non-experts who don't understand lazy evaluation well.
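The classic example (a sketch):

    import Data.List (foldl')

    -- lazy foldl builds ten million unevaluated thunks before summing
    sumSlow :: Integer
    sumSlow = foldl (+) 0 [1..10000000]

    -- the strict foldl' evaluates as it goes, in constant space
    sumFast :: Integer
    sumFast = foldl' (+) 0 [1..10000000]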


A <Language> program can be really slow, especially when it's written by non-experts who don't understand <CorePrinciple> well.


This is a terrible set of slides. The only new thing I gathered from it is "cabal sucks". Then the presentation came to a sudden end which left me thinking "what, that's it?" How does such a low quality link get so many upvotes? takes notes for future marketing endeavours


It wasn't really intended to make much sense by itself. I find if I make slides that stand alone, the audience reads the slides and ignores me. I put the slides up as a courtesy to the people in the audience.

There was less than zero marketing involved - I gave this talk months ago and only just noticed this thread.


Apologies if I sounded like I was criticising what you do - I'm sure that these made a lot more sense in the context of a presentation. I was questioning why the person who posted this link did so and why it has been upvoted to such an extent.


not caremad, just clarifying. I'm as baffled as you are.


takes notes for future marketing endeavours

Lol. :)

Didn't you notice the multiple David vs Goliath references? Or the challenger vs incumbent dimension? Conflict makes for a good story, and a good story makes for frontpage news.

Moreover, the 50/50 text/visuals split is a refreshing change from huge blocks of gray.


ha. I didn't think that at all when I was writing it, but I can see how it's applicable. I'll be sure to analyse that more explicitly next time I write a presentation.



