I tried Haskell for 5 years (metarabbit.wordpress.com)
366 points by sndean on May 4, 2017 | 257 comments



We have a code base of roughly 200,000 lines of Haskell code, dealing with high performance SQL query parsing, compilation and optimizations.

I only remember one situation over the past 5 years in which we had a performance issue with Haskell, and it was solved by using the profiling capabilities of GHC.
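
For anyone unfamiliar: the GHC profiling workflow is roughly the following. This is a minimal sketch around a toy program, not our actual code:

    -- Toy program to profile; the commands in the comments are the
    -- standard GHC profiling invocations.
    --
    --   $ ghc -O2 -prof -fprof-auto Main.hs
    --   $ ./Main +RTS -p -RTS    # writes Main.prof (time/alloc per cost centre)
    --   $ ./Main +RTS -hc -RTS   # writes Main.hp (heap profile by cost centre)
    --   $ hp2ps -c Main.hp       # renders the heap profile
    main :: IO ()
    main = print (sum [1 .. 1000000 :: Integer])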

I disagree that performance is hard to figure out. It could be better, yes - but it's not that different from what you'd get with other programming languages.


> but it's not that different from what you'd get with other programming languages.

Until you have to solve a (maddeningly common) space leak issue. That's a problem unique to lazily evaluated languages (basically Haskell) and is godawful to debug.

It reminds me of solving git problems... you suddenly find yourself having to peel back the abstraction layer that is the language and compiler and start thinking about thunks and so forth.

It's jarring and incredibly frustrating.


I find this response a little ironic because we don't really see complaints about knowing the C runtime and C compiler when performance becomes a problem, which is also jarring and frustrating. But, ultimately, sometimes languages have failings and we need to peek under the hood to figure out how to address them - we're just more comfortable with the relatively common and direct mapping of the C runtime.

I am not experienced enough with Haskell to know whether peeking under its hood involves a more complicated model or not. It might be more frustrating. But it's certainly not a unique experience - its costs are just less distributed over other runtimes.


The C runtime is very straightforward.

Maybe you meant C++? You see no complaints because no one uses it anymore. Anything that can be done with an easier language is done with an easier language. The hardcore C++ performance-critical code is left to a few veterans, who don't complain.


I don't consider the combination of the C runtime + the machine model straightforward - just less arcane than C++. Consider pipelines, branch prediction, and cache lines, and it quickly becomes difficult. Granted, those typically become relevant later in the optimization stage than other things.


Pipelines, branch prediction and caching are not part of the C runtime. And unlike Haskell, C makes it easy to look at the assembly for a piece of code, evaluate it for the machine it's running on, and fix these problems when they come up. C is not generally adding additional burdens, especially not ones that a higher-level language like Haskell won't also be adding to a far greater degree.


"we're just more comfortable with the relatively common and direct mapping of the C runtime"

That's a funny way to put it. It's more like the difference between getting results or abandoning the thing altogether due to exploding cost of required effort.


When have you ever had the C runtime be the cause of a performance problem?


I have not because I don't write C professionally. We generally have things that require algorithmic improvements due to the scale - language doesn't matter.

C's model requires you to understand the machine model. Haskell presumably requires you to understand the machine model (though less thoroughly) but also the compiler's model. It's a little more, but comparable. So complaining only about the Haskell runtime just seems ironic to me.


Writing a custom replacement for malloc is relatively common. Does that count?


I don't see why not, though I wouldn't consider that to be an example of a difficult to diagnose problem in the same vein as lazy evaluation.


> Writing a custom replacement for malloc is relatively common.

You... can't be serious. That's common to you?


Yes? From Wikipedia:

"Because malloc and its relatives can have a strong impact on the performance of a program, it is not uncommon to override the functions for a specific application by custom implementations that are optimized for application's allocation patterns."


"not uncommon" is far far from "common". If you're writing your own malloc replacement you're pretty deep into the weeds of high performance computing. Heck even just deciding to replace the stock implementation with an off-the-shelf replacement puts you in fairly rarified company. I'd wager the vast majority of software written for Linux makes use of the standard glibc allocator.

I expect high performance games are the most common exception, but they represent a fraction of the C/C++ in the wild.

Space leaks in Haskell, on the other hand, are unfortunately relatively easy to introduce.


There is this really great paper about space leaks and arrows: https://pdfs.semanticscholar.org/cab9/4da6e4b88b01848747df82...


> Until you have to solve a (maddeningly common) space leak issue.

Hm, I've been making useful things with Haskell for a couple years including quite a few freelance projects and haven't encountered many space leaks.

Definitely not enough to say they are maddeningly common, or even enough to say they are common.


My experience tracks yours.


here is a method that can help debug space leaks: http://neilmitchell.blogspot.com/2015/09/detecting-space-lea...


I wish the problem was simply detecting and isolating space leaks.

My experience is that actually fixing them can be incredibly difficult, hence my comment about needing to understand the gory details about how the runtime evaluates lazy expressions.

Heck, that post even uses the phrase "Attempt to fix the space leak" as it's often not an obvious slam dunk. Sometimes it even devolves to peppering !'s around until the problem goes away.


> My experience is that actually fixing them can be incredibly difficult

My experience differs, FWIW. If you know where you're creating too many thunks, and you force them as you create them, they don't accumulate.

Making sure you actually are forcing them, and not simply suspending a "force this", is probably the trickiest bit until you're used to the evaluation model.
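
Concretely, something like this (a toy sum with BangPatterns; the commented-out version shows how a "force" can itself end up suspended):

    {-# LANGUAGE BangPatterns #-}

    -- Leaky: 'acc' becomes a long chain of unevaluated (+) thunks.
    sumLeaky :: [Int] -> Int
    sumLeaky = go 0
      where
        go acc []     = acc
        go acc (x:xs) = go (acc + x) xs

    -- Fixed: the bang forces the accumulator at every step.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go !acc []     = acc
        go !acc (x:xs) = go (acc + x) xs

    -- The trap: this only suspends the "force this" inside yet
    -- another thunk, so nothing is actually forced:
    --   go acc (x:xs) = let acc' = acc `seq` acc + x in go acc' xs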


> If you know where you're creating too many thunks, and you force them as you create them, they don't accumulate.

Translation: if you've internalized the way Haskell code is compiled and executes, so that you can easily reason about how lazy evaluation is actually implemented, you can solve these problems.

If not, it devolves to throwing !'s in and praying.

Which is basically my point.

If I don't have a hope of solving common space leaks without deeply understanding how Haskell is evaluated, that's a real challenge for anyone trying to learn the language.


This sounds a bit FUD-ish. Programming in any language involves understanding the evaluation strategy. You seem to be advocating languages that can be used with a level of ignorance or innocence which in practice just isn't possible.

https://en.wikipedia.org/wiki/Evaluation_strategy


I disagree. I contend that most of the time folks ignore the evaluation strategy for eagerly evaluated languages because they're simply easier to reason about. That strikes me as objectively true on its face and I believe most Haskellers would concede that point.

The only time I can think of where that's not the case is when microoptimizing for performance, where the exact instructions being produced and their impact on pipelining and cache behaviour matter. But that's in general far more rare than one encounters space leaks in idiomatic Haskell.

Heck, one just needs to read about two of the most common Haskell functions to hit concerns about space leaks: foldl and foldr. It's just part of the way of life for a Haskeller.

There's simply no analog that I can think of in the world of eagerly evaluated languages that a) requires as much in-depth knowledge of the language implementation, and b) is so commonly encountered.

The closest thing I can come up with in a common, eager language might be space leaks in GC'd languages, but they're pretty rare unless you're doing something odd (e.g. keeping references to objects from a static variable).


You're taking issue with material that is covered in every Haskell 101 course worth its salt (how to understand the properties of foldl and foldr).

We typically don't evaluate the efficacy of a language contingent on someone who is missing fundamental, well-known concepts about the language.

Also, I don't think folks "ignore the evaluation strategy for eagerly evaluated languages". They simply learn it early on in their programming experience.


> You're taking issue with material that is covered in every Haskell 101 course worth its salt (how to understand the properties of foldl and foldr).

Oh, I'm not "taking issue". This isn't personal. It's just my observations.

And yes, that one needs to explain the consequences of lazy evaluation and the potential for space leaks to a complete neophyte to justify foldr/foldl is literally exactly what I'm talking about! :)

Space leaks are complicated. And they're nearly unavoidable. I doubt even the best Haskeller has avoided introducing space leaks in their code.

That's a problem.

Are you saying it's not? Because that would honestly surprise me.

Furthermore, are you saying eager languages have analogous challenges? If so, I'm curious what you think those are! It's possible I'm missing them because I take them for granted, but nothing honestly springs to mind.


I didn't claim space leaks aren't a problem. But one has to size the magnitude of the problem appropriately. And one should also cross-reference that with experience reports from companies using Haskell in production.


I think there are different types of space leaks:

- Reference is kept alive so that a computation can't be streamed. Easy to figure out with profiling, but fixing them might make the code more complex. Also, if you have a giant static local variable, GHC might decide to share it between calls so it won't be garbage collected when you'd expect it.

- Program lacks strictness so you build a giant thunk on the heap. This is probably what you think of when talking about dropping !'s everywhere. I don't find it that difficult to fix once the location is known but figuring that much out can be seriously annoying.

- Lazy pattern matching means the whole data has to be kept in memory even if you only use parts of it. I don't think I have ever really run into this but it is worth keeping in mind.

- I have seen code like `loop = doStuff >> loop >> return ()` several times from people learning Haskell, including me. Super easy to track down but still worth noting I guess.

Building giant thunks is the only one where you really need some understanding of the execution model to fix it. 99% of the time it is enough to switch to a strict library function like foldl', though.
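
For reference, the canonical instance of the "giant thunk" case and the one-line fix (without optimisation, the first version builds ~10^7 nested thunks before anything is demanded):

    import Data.List (foldl')

    leaky :: Integer
    leaky = foldl (+) 0 [1 .. 10 ^ 7]   -- accumulator is a thunk chain

    fixed :: Integer
    fixed = foldl' (+) 0 [1 .. 10 ^ 7]  -- strict accumulator, constant space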


> I don't find it that difficult to fix once the location is known but figuring that much out can be seriously annoying.

To be clear, I agree with this. Easy to fix once you know where it is, if you're competent in the language. Occasionally very hard to know that.


> unique to lazily evaluated languages

More precisely, unique to significant use of laziness, which is (obviously) going to be more common in lazily evaluated languages, but laziness is supported elsewhere too.


(OP here)

I do say it's not that big of a deal in the end: it almost always is OK, and at the end you optimize the inner loops by looking at profiles, like in any other project.

But when using GHC, I have indeed sometimes run into situations where I expect something to be fast when it is not (e.g., `ByteString.map (+ value)` is incredibly slow compared to a pseudo-C loop).

I also did find a bona fide performance bug in GHC: https://ghc.haskell.org/trac/ghc/ticket/11783


GHC isn't magical. It has bugs too.

We had an issue with the Data.Text package and OverloadedStrings in 7.10.2 which caused extremely slow compilation times and filed a bug report for that, which was solved for 7.10.3.


> (e.g., `ByteString.map (+ value)` is incredibly slow compared to a pseudo-C loop).

That situation sounds like you needed a ByteString Builder to get comparable performance.
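
Something along these lines, say (a sketch; `addToAll` is an invented name, and whether it actually matches a pseudo-C loop would need benchmarking):

    import Data.Word (Word8)
    import Data.Monoid ((<>))
    import qualified Data.ByteString as BS
    import qualified Data.ByteString.Lazy as BL
    import Data.ByteString.Builder (toLazyByteString, word8)

    -- Map (+ v) over every byte via a Builder instead of ByteString.map.
    addToAll :: Word8 -> BS.ByteString -> BL.ByteString
    addToAll v =
      toLazyByteString . BS.foldr (\b acc -> word8 (b + v) <> acc) mempty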


Maybe, but pseudo-C actually worked very well.

I even have some "real C" in my Haskell code to handle some inner loop stuff.


I find that incredibly hard to believe. I wonder whether you are looking at history through rose-tinted glasses or you simply don't know because other people quietly solved those problems. I ran into such problems several times in 50 line programs and not because my code was non-idiomatic or wrong. #haskell agreed it was non-trivial to get the code to perform well.


Hi can you tell us a bit more about this SQL-related project? I'm rather practically interested.


I work for www.sqream.com, and our main product is SQream DB.

SQream DB is a GPU SQL database for analytics. Everything was written in-house - from the SQL parser all the way down to the storage layer. We're designed to deal with sizes from a few terabytes to hundreds of terabytes, and we use the GPU to do most of the work. It is the compiler (written in Haskell), however, that decides on specific optimizations and deals with generating the query plan.

Our SQL parser and typechecker is actually BSD3 licensed, if it's interesting: https://github.com/jakewheat/hssqlppp


> www.sqream.com

I don't like having to be that guy, but your landing page hijacks scrolling (which always causes problems; I noticed first because it broke ctrl-scroll zooming), downloads a 30MB video in a loop, forever, takes a significant amount of CPU even when the video isn't in view and the tab is not active, and despite having almost no content manages to take far more memory than any other non-email tab I have open.


Our website is... Yeah... I know... :/

I've passed on your comments in any case, but know that we're in the process of rebuilding the website from scratch.


Any chance SQream DB will be open sourced, or a demo version released? It seems every column-based GPU DB I find is behind a paywall and targeting enterprise levels.


I think it's unlikely at this point...

We too are currently in the middle of some large projects for enterprises...

We will be releasing a community version on AWS and Azure in the near future which should be cheap or free, other than the instance cost on the respective cloud provider.


If you want more generic SQL tooling in Haskell, my project was just recently open sourced (https://github.com/uber/queryparser). It currently has support for Vertica, Hive, and Presto; adding support for more dialects isn't complicated. I'm working on cleaning it up for hackage.


Very interesting! Do you do any sort of typechecking in this?

FWIW, HsSqlPpp also supports various dialects. In SQream DB we use a custom dialect, while HsSqlPpp is mostly Postgres.


Typechecking of the SQL statements? There was an internal branch where I was experimenting with that, but it didn't get landed before I left, and isn't in the squashed version.

It was relatively simplistic - along the lines of "things tested for equality must share a type". It was also focused on domain-relevant types, not SQL types (/representation).

> FWIW, HsSqlPpp also supports various dialects. In SQream DB we use a custom dialect, while HsSqlPpp is mostly Postgres.

Nice. It'll be interesting to see where our implementation choices differed, and what we can learn from each other :)


> but it's not that different than what you'd get with other programming languages

Does your list of other programming languages include C/C++/asm?

;)


Incidentally, yes.


If anyone's curious to try out functional programming, I would highly recommend Elm. I haven't been so excited about a language since I went from C to Ruby ten years ago, and Pragmatic Studios has a great course on it (I have no affiliation): https://pragmaticstudio.com/courses/elm


I spent some time working through various Haskell books with varying success, then decided to do a code challenge (adventofcode.com) using Elm. I didn't finish, but after writing Elm for many days on end, suddenly Haskell clicked a lot more. The Elm compiler is far more friendly than Haskell's and will walk you through a lot of rookie mistakes and oversights.

I find Elm exciting, and with the prevalence of React/Redux these days, a lot of front end developers would be wise to familiarize themselves with Elm, the origin of many of the concepts those tools borrowed.


I bought a few Haskell books but could never really grok it. In the last couple years I started learning F# and have found it's an incredible "intro to Haskell" language. I opened one of my Haskell books the other day and realized I understood everything much more quickly.


The same is true of OCaml.


F# is a beautiful, pragmatic, functional programming language. It's a shame some people dismiss it due its Microsoft origins.


Yeah, F# feels like Don Syme looked at OCaml and Haskell, applied a bit of YAGNI, and tossed in some other useful conventions (I'm sure many others were involved as well).

It drives me crazy that Microsoft doesn't push it harder from an investment perspective. They don't even write books about it :)


There are books out there. For new language adoption, F# does have new features like Type Providers. However, it's not the best language in X for X in ['data science', 'distributed systems'] etc.


Sorry, my language was vague. There are many F# books, but I find it telling that Microsoft Press doesn't really write books about it. Look at C#, VB.Net, Office, SQL Server, Sharepoint, etc. Almost everything has a new book (or 10) every new edition. F#? Nothing.


And Elm is implemented in Haskell ;]

I've been meaning to try Elm since I've been playing around with Haskell more and more. Purescript is relatively similar but allows you to build more than frontends. It seems Elm has grown quite a lot in popularity though.


PureScript has great potential. I'm very excited for what it offers the future of Javascript and even Haskell. It's on a whole other level than Elm.

The only problem from my experience trying to use it in a project is that since it's alpha software a) it's changing so fast that APIs/libraries are very unstable to the point they are almost unusable if you try to follow newish releases (this was 6-8 months ago, I hope/expect stability has improved) and b) you have to be intimately familiar with Haskell.

As the OP's article describes for Haskell, with PureScript the documentation situation is even worse: the documentation is 90% "hard" and 10% "soft". Which is fine if you know Haskell, or even a bit of Elm, but it's very tough coming in as a newbie to both Haskell and PS.


Elm is also my gateway drug into functional. And Pragmatic Studio was a great start.

Monads then seemed much easier to understand after Elm and reading this: http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...


I would also give a recommendation to Elm. It is like a simpler version of Haskell, and it's relatively easy to build something quickly (no affiliation either).

I always had a hard time wrapping my head around functional programming, but Elm was easy to understand and shows off the strengths of a functional approach.


I concur with your suggestion. As a longtime functional programmer (Lisp, Clojure, Erlang), Elm is one of my favorite languages of all time. I'm hoping to jump into Haskell next, and I feel like it's a great transition in that direction.


Elixir is mentioned on the page too. Does it complement Elm?


Very much so. I'm increasingly seeing more and more projects that use Phoenix (Elixir) on the backend, and Elm on the frontend.

Here's an exciting example: https://github.com/dailydrip/firestorm


Side bonus to using Elm on the frontend: if your backend is written in Haskell, you can derive Elm types and functions to query your backend, giving you type safety from front to back.

https://hackage.haskell.org/package/elm-export

https://hackage.haskell.org/package/servant-elm


I think a lot of Haskellers will find Elm's type system to be limited and frustrating. Even for simple things, the solution in Elm is that you have to write a bunch of boilerplate.


I had this experience. However, starting out I think it'd be a much more gentle introduction.

For other Haskellers, PureScript seems to solve this problem.


I haven't gotten to give PS more than a cursory look, but the row polymorphism and structural typing seem to really complement JS semantics.


Question to the Haskell experts here.

Is Haskell more academic in nature or used heavily in Production environments?

Is there an application sweet spot/domain (say Artificial Intelligence/Machine Learning, etc) where it shines over using other languages (I am not talking about software/language architectural issues like type systems or such)?

I have no experience with Haskell but do use functional languages such as Erlang/Elixir on and off.


> Is there an application sweet spot/domain

State of the Haskell ecosystem: https://github.com/Gabriel439/post-rfc/blob/master/sotu.md

    The topics are roughly sorted from greatest strengths to greatest weaknesses. 
    Each programming area will also be summarized by a single rating of either:

    Best in class: the best experience in any language
    Mature: suitable for most programmers
    Immature: only acceptable for early-adopters
    Bad: pretty unusable


The state of the Haskell ecosystem is that it is amazing at encouraging people to write libraries, especially libraries to assist in writing Haskell code, but amazingly bad at encouraging people to write actual programs, especially actual programs useful for something that isn't writing code.

Take out the programs that are for writing code (ghc, shellcheck, darcs, etc.) and what are you actually left with? git-annex, pandoc, xmonad... and is hledger worth knowing about if you don't care what language it was written in? (Maybe it's amazing; it's the first I've heard of it.)

Whenever I bring this up, people find it upsetting. The Haskell community might need to get past both of those things. 1.0 was 27 years ago. So many fabulous programmers have gone down that road for entertainment or enlightenment, so if you're a language that doesn't suck for writing apps, where are your damn apps? No really, where are they?


I believe a similar dynamic exists with many of the expressive high-level languages; this is probably not an issue unique to the Haskell community.

For example, I think many would struggle to name many open source user applications written in Clojure as well.


This is an excellent Haskell resource. Grades are given by both application domain (e.g., "web development") and development task (e.g., "testing" or "IDE Support"). Each of the categories has not only a grade but also list of relevant libraries and example projects.


This is a fantastic link & I love the discretely sorted topics with ratings (could be more granular but it's a start). Is there a similar link for other PLs, notably more pedestrian langs like Java/Python/C?


Thanks for the link. Great reference. This could keep me occupied for a while.


We're using Haskell to produce an entire compute platform for building, composing, and running type-safe containerised micro-services for data science. This certainly touches on a lot of the areas that Haskell is traditionally known for being good at, e.g. building DSLs, type systems, interpreters, etc.

However, the work also includes a runtime platform that is more low-level, including building our own container system, talking to the Linux kernel, using cgroups/resource management, and distributed message passing - areas where languages such as Go have found a niche, and may be classified as Real World. However, for us, Haskell's type-safety, runtime performance, and extensive ecosystem have been a boon even in this domain. We effectively use it as an (incredibly powerful) general-purpose language here, and it's worked more than fine.

We're currently at around 15,000 lines of code with a team of 5 Haskellers, and it hasn't really been a problem regarding performance, understanding the codebase, or with newcomers to the team.

(plug - we're at https://www.github.com/nstack/nstack and are hiring more Haskellers)


Any particular reason you're building your own container system instead of leveraging LXC or Docker?

For the massively parallel workloads you find in data science, it seems like you'd benefit a lot from the wealth of container orchestration tools around Docker (swarm, Rancher/Cattle, Kubernetes) in order to easily scale out your functions. Especially when many companies already have these set up for their more vanilla applications.

This is an example I've seen that can leverage a docker swarm for invoking functions, loosely modeled after AWS Lambda: https://github.com/alexellis/faas


Hi there - we've actually built a lot of our container ecosystem around existing Linux tools, including `systemd-nspawn`, `btrfs` and more, rather than creating the whole stack from scratch - and again this is all controlled from Haskell. We experimented with Docker, Kubernetes and more, but found that they made lots of assumptions about what was running inside a container that didn't mesh with our compute model, so using lower-level primitives worked better for us.

We're really lucky also to have one of the main `rkt` developers joining us soon to work on the container side.


This work might give you some ideas even though it's a bit dated:

http://programatica.cs.pdx.edu/House/

Also, COGENT for lowest-level stuff being wrapped for use in Haskell somehow might be interesting. Used on ext2 filesystem already.

https://ts.data61.csiro.au/projects/TS/cogent.pml


Thanks for the pointers - House is great, I remember reading some of the papers a long time ago.

I haven't seen COGENT before - will take a look over the weekend - thanks!


Application sweet spot is a funny thing, because it's mostly determined by the presence of specific libraries. What Haskell has more than anything is ridiculously general libraries. It's got perfectly good libraries for web development, Postgres access, etc., but the only place I'd say it's got a real application-specific strength in libraries is parsing.

Weirdly, what Haskell is good at is generality, which is much the same as for LISP.

I'd say the Haskell community remains academic/hobbyist, but that's not to say there aren't plenty of people doing Real Work in it.


I agree that parsing libraries are probably the strongest part of the Haskell ecosystem for applications. The only time I've chosen Haskell for a serious application was because of its parsing libraries (specifically parsec): https://blogdown.io/.


"academic/hobbyist" probably doesn't give justice to the benefit those works provide for languages that implement experiments pioneered in i.e. haskell. I say "probably" because I can't give any exact examples.


This is really a specific example of the benefit academia generally provides to society being highly underappreciated.


Ok, so the generality is like LISP/Scheme: a few basic productions/rules and some basic operations can be used to create/represent any level of complexity. Something like Whitehead & Russell's Principia Mathematica.


Yes, and that's why you're often scratching your head at the function definitions: they're extremely general solutions to general problems. This, in conjunction with the documentation issue means that often you end up getting referred to papers, which does nothing to help the "only for academics" impression.


It is a fair point that some libraries could benefit from some better soft documentation.


I think painting Haskell as a LISP but in a blank-slate world without packages is not quite accurate. There is a large library of existing packages that take you beyond the language's primitive constructions.


One of the main reasons Haskell isn't used more in production systems is the small pool of developers that can actually program in it.

It is a very good language once the type system makes sense and you can reliably reason about the lazy nature of it.


Haskell is good for problems where you want to spend most of your time thinking in terms of the problem domain and not so much about all the little implementation details.

For instance, if you were to write a CAD program, you probably want to be spending your time thinking about algebra and geometry, and not about little details like how an array is laid out in memory or when memory needs to be freed. The latter might be important for optimizing the last bit of performance out of a tight loop, but a lot of problems are hard enough on their own that just coming up with any solution that works is enough trouble.

It's also good for problems where you want pretty good performance but don't have to be as fast as C, or where you want to take advantage of multi-core parallelism but don't want to deal with the usual complexities of threaded code.

Persistent data structures are really nice for certain kinds of applications, such as web services, where you have some complex internal state that needs to be updated periodically. Error handling is really easy when you can just bail out when something fails, since none of the state was ever modified-in-place and your old pointer to the old state is still valid.
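
A small sketch of that last point, using Data.Map (the names here are invented):

    import qualified Data.Map.Strict as M

    -- M.insert returns a new map and leaves the old one intact, so
    -- "roll back on failure" is just "keep using the old value".
    applyUpdate :: String -> Int -> M.Map String Int
                -> Either String (M.Map String Int)
    applyUpdate k v st
      | v < 0     = Left "negative value"    -- st is untouched, still valid
      | otherwise = Right (M.insert k v st)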

The powerful static type checker makes Haskell a good fit for applications where it's really important that you calculate the right answer, or where you don't want to spend much time tracking down weird bugs.

Haskell isn't so good at applications where you need hard real-time guarantees or you want to have very fine control over the memory layout of your data structures or you want to be able to easily predict CPU or memory usage.


Haskell has a preference for looking academic, and is one of the favorites in academic circles. But that does not mean it's not used within industry.

I'd say that people mostly do not know what is running in industry. It could be any share of anything. We get a hint looking at job offers; Haskell is small there, but not unheard of.

On the sweet spot, Haskell is great for modeling complex domains; it's great for long-lived mostly maintenance stuff, since refactoring is so easy; it tends to generate very reliable and reasonably fast (faster than Java/.Net) executables, but hinders complete correctness proofs and has some grave speed issues that can ruin your day if you hit them.

I would like to know what it is like to manage a team of Haskell developers. I do expect it to be better than Java/.Net at forcing safe interfaces between developers (thus making it reasonably easy to survive a bad developer in a team), but I never saw it in practice.


>> I'd say that people mostly do not know what is running in industry

Well, if anyone does know what's out there, then the HN crowd would be a pretty good suspect in that regard.

I personally have done my fair share of consulting and prof services engagements - dozens of them across all kinds of industries - I've never seen a Haskell shop.

Java, Python, JS, Golang, Scala, Clojure, .Net, C, PHP - definitely out there in the field. Haskell - not so much.


An ex-co-worker went to work at JanRain with Haskell: http://www.janrain.com/blog/haskell-janrain/


Your experience may be biased. If I had a shop that depended on external consultants for development, I'd avoid any language that I'm not certain to be mainstream. It's just too risky.


I can name two Haskell consultancies off the top of my head :)


But what if those two are busy right now? What if geography is a problem?

Compare with how many Java consultancies do you think you can find on a quick search?

I don't think the Haskell development labor market is nearly liquid enough to rely on consultants. For full time jobs the picture is very different.


> But what if those two are busy right now?

Then he'll have to look up some that he can't name off the top of his head? I can't name any Java consultancies, but I'm nonetheless confident that there are plenty...

With regard to Haskell, there are a lot of people interested in Haskell work compared to the number of positions presently available.


> Well, if anyone does know what's out there, then the HN crowd would be a pretty good suspect in that regard.

HN is a comically bad barometer for anything other than what's trendy in NorCal.


It's not as bad as you say, exactly. Yes, some trendy languages are overrepresented (coughRustcough), and there are people with blinders on about what programming outside the Valley is like. But there's way too much .NET, Java and such posted for HN to be wholly unaware of trends in the broader industry. Java may see some use in NorCal, but I really doubt C# does anywhere near as much as it pops up here.


> and is one of the favorites on academic circles.

I'd argue Python is much more prevalent with researchers than Haskell.


I think they mean programming language research.


I think I can safely say it isn't used heavily in many production environments but it is used heavily in some. For example, one of Facebook's abuse systems, dealing with 1M requests per second is written in Haskell: https://code.facebook.com/posts/745068642270222/fighting-spa...

As for particular places where it shines, I don't know of any in particular. I remember when taking my Declarative Programming course in university my lecturer loved to talk about an experiment the military did where they challenged a few teams to implement some kind of radar system in their chosen language and evaluated them based on time taken, correctness and code complexity (LOC) at the end. The Haskell team performed very very well. I can't find any details on this experiment though...


The article you're looking for is http://www.cs.yale.edu/publications/techreports/tr1049.pdf - Two of the top three (out of ten) solutions were made in Haskell, one by a beginner.


It would be interesting to see the full source code for the various entries - am I missing a place to acquire them?


Yep, that's the one! Looks like there were lots of problems with the experiment but good to read regardless.


>I can't find any details on this experiment though...

Probably this one: http://www.cs.yale.edu/publications/techreports/tr1049.pdf

But take it with some sacks of salt -- it's a single research paper. It should be replicated, peer reviewed, etc. to have any merit. Its methodology could be totally crap, for example...


It is really interesting. I do get the feeling that the researchers really wanted Haskell to win, though. It would be nice if researchers with other favorite languages also arrived at the same conclusion.


Ok, so the Facebook example definitely falls under a contemporary heavy production usage system. Thanks.


If you measure complexity as LOC then Haskell will do well, but you may not end up with uncomplex code :-)


I've been playing with it for about 5 years, started the local Haskell meetup, etc. I used it on a side project that reached beta in 2015, and on a startup at the beginning of 2016. Starting at the end of 2016 I started using it on a big contract. I had to convince the client it was a good idea, but they've been convinced by my team's productivity. All of the above were back end web services. The current project is a bunch of systems to underwrite and issue loans and involves an ML calculation (hosted on data robot).

I love it for general back end web development. I can actually solve problems so they stay solved.


I'm VERY curious about this! Backend web services not being the sexiest / most theory-requiring subject.

How did you convince the client, and why did you choose Haskell for this over, say, .NET, which throws a web service up quicker than it took me to type this? What sort of productivity are you achieving?


Haskell works really well for things revolving around languages.

At SQream (www.sqream.com), Haskell is used for CLI, SQL parser, SQL language compiler, SQL optimizations and a variety of tools.

We also use Haskell to generate Python code for testing (think: describing a testing scenario and sets of queries, translating into a runnable, predictable Python script that's easy to debug)


To me, this is Haskell's most appealing strength. The idea is, you can express your domain and logic in Haskell in pure functions which are highly testable, and then you can use that to generate code in any language you want. And the code you generate can have whatever zero-cost abstractions you choose to invent, all implemented in the nice, functional Haskell layer.

Haskell's strength at expressing and transforming ASTs, in pure, functional, understandable, testable code seems to be the key here.
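
The core of the pattern fits in a few lines (a toy illustration, all names invented): a pure AST plus a pretty-printer that emits Python source.

    data Expr
      = Lit Int
      | Var String
      | Add Expr Expr

    emitPy :: Expr -> String
    emitPy (Lit n)   = show n
    emitPy (Var x)   = x
    emitPy (Add a b) = "(" ++ emitPy a ++ " + " ++ emitPy b ++ ")"

    -- emitPy (Add (Var "x") (Lit 1)) == "(x + 1)"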


You hit the nail on the head.

We also employ QuickCheck rather extensively, which makes testing millions of scenarios really straightforward.
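
For example, a round-trip property of the sort that pays off in this setting (a toy stand-in, not our real API):

    import Test.QuickCheck
    import Text.Read (readMaybe)

    newtype Query = Query [Int] deriving (Eq, Show)

    instance Arbitrary Query where
      arbitrary = Query <$> arbitrary

    render :: Query -> String
    render (Query xs) = show xs

    parse :: String -> Maybe Query
    parse s = Query <$> readMaybe s

    -- Rendering and then parsing any generated Query is the identity.
    prop_roundTrip :: Query -> Property
    prop_roundTrip q = parse (render q) === Just q

    main :: IO ()
    main = quickCheck prop_roundTrip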


Production use cases pushed me towards Haskell. What elegance there is serves reasoning about the correctness of complex code; the tradeoff, however, is difficulty in reasoning about performance. You can still get great performance! And I use it for prototyping in the Type-Driven Development style. Strongly disagree with "if it compiles it is correct", but we have great tools like Doctest and QuickCheck for that. I usually write types with Doctest examples first, then implement. undefined with lazy evaluation is really cool for figuring out function signatures first.
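
That is, you can sketch the whole shape of a module and have GHC check it before writing any bodies (toy names):

    -- Compiles and typechecks; each 'undefined' only blows up if it
    -- is actually demanded at runtime.
    data Report = Report { headline :: String, score :: Double }

    fetchRecords :: FilePath -> IO [String]
    fetchRecords = undefined

    summarize :: [String] -> Report
    summarize = undefined

    main :: IO ()
    main = do
      rs <- fetchRecords "data.csv"
      putStrLn (headline (summarize rs))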


> reasoning about the correctness of complex code, however the tradeoff is difficulty in reasoning about performance

That's what drove me to languages like OCaml and Rust, which try to solve this tradeoff in a different way: same thing about strictness and correctness (despite slight differences in the type system), but lazy evaluation is only provided when explicitly asked for.

The cool thing about OCaml is that you can actually reason about performance, as long as you ignore memory issues (memory usage, garbage collector runs, etc.). Rust is heavily inspired by OCaml and allows you to also reason about correctness and performance of memory usage.


> The cool thing about OCaml is that you can actually reason about performance, as long as you ignore memory issues (memory usage, garbage collector runs, etc.).

If you ignore mem issues you can also reason about the performance of Haskell :)

If we want to reason about perf, I think Rust is currently leading in terms of a modern language that allows perf reasoning.


> If you ignore mem issues you can also reason about the performance of Haskell :)

Really? My guess (based on close to complete ignorance) would be that laziness would make reasoning about performance hard. Can you ELI5 why my guess is wrong?


All performance issues caused by laziness are memory issues. Something isn't evaluated promptly, so large amounts of memory are consumed storing unevaluated expressions. This causes drag on the garbage collector and excess cache thrashing when it finally is evaluated.

But all of that is just because data stays in memory longer than expected.

If you ignore that, laziness has a small performance impact, but it's less than the performance impact of interpreting code, which is something many people consider completely acceptable.
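
The classic "data stays in memory longer than expected" example:

    -- 'sum' and 'length' each traverse xs, so the whole list stays
    -- live between the two passes instead of being streamed and
    -- collected; the fix is a single strict pass over the list.
    avg :: [Double] -> Double
    avg xs = sum xs / fromIntegral (length xs)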


Both you and cies said that "All performance issues caused by laziness are memory issues." My intuitive sense was more like: You've got a lazy calculation that has several stages. At some point in the pipeline, you're doing something slow, but it's not obvious. When you get to the end and start using the data, the whole calculation is slow. But it's not clear where it's slow, because of the laziness.

That may not be "a performance issue caused by laziness", in your terms, because the laziness isn't causing performance issues. But it's laziness making a performance issue hard to find.

Does my scenario happen? Is it common? Is it easy to analyze/find/fix when it does happen?

Most generally, does laziness make it harder to reason about performance?


That happens sometimes, but the solution is the same as in a strict language. Break out the profiler and see what's taking all the time. There's a learning curve while you figure out what the usual suspects are, but usually you can find things pretty quickly afterwards.

Profiling is a bit of a black art in any case. I've seen horrible performance problems in python when we added enough code to cause cache thrashing to a loop. No new code was slow, but the size of it crossed a threshold and things got suddenly bad.

Some performance problems in Haskell are hard to track down, but most are pretty easy to find and fix. It's basically the same as every other language in that way.


Laziness can actually make things faster in some scenarios. Take Python2's range() function for instance, which eagerly builds the entire list and returns it. range() is commonly used for for loops, so Python builds this list (an N operation) and returns it, and then the for loop iterates over it (another N operation). But if you use xrange(), you get a generator instead, which can lazily construct values when asked for one. It turns the for loop case into a single N operation. Similarly if you have nested function call chains, if everything is returning an eagerly evaluated list, you have to construct the whole list in memory at each call and pass it to the next stage which will eagerly iterate over the whole thing to generate its list in memory and so forth, eventually the old list can be freed / GC'd when the new list has been made.. but with lazy sequences, you only need memory for the whole list at the very end, if you want it and not just individual values. Python3's range() replaced Python2's with xrange(). (The other option is a numpy-like approach where you eagerly allocate the list and then every stage just mutates that single thing.)

The 'memory stays around indefinitely' problem is caused by not cleaning up references to generators. You can make your own xrange() clone in Python easily, and start consuming values, but if you stop before consuming all of them (assuming it's not an infinite generator), that xrange() generator object still lives and has to keep around whatever references it needs to generate the next value if you ask it for one. When your whole language is based around laziness, you might have a lot of these and it may not be clear what references are still around that keep the GC from cleaning them up.

I wouldn't say reasoning about performance is necessarily more difficult, and profiling will light up a slow part of a lazy pipeline just as well as an eager one. My own struggles with laziness in Clojure were all around my own confusions about "I thought this sequence was going to be evaluated, but it wasn't!" which may have led to unnecessary calls to (doall), which forcibly walks a sequence and puts the whole thing in memory.


> laziness would make reasoning about performance hard

Because of it potentially hogging memory! Now we were told to ignore mem issues by the commenter. I just point out the contradiction.


We use Haskell in production at lumi.com for our backend/API. We've found it very productive and easy to maintain.


React, HapiJS, RethinkDB, Node and Haskell. That might be the coolest stack ever. Do you have any FOSS project on Github?


No projects under the Lumi name yet, but our team contributes to many FOSS projects, and members also have their own interesting projects, such as PureScript by Phil Freeman, who recently joined the team: https://github.com/purescript/purescript


That's really nice. Sounds like a very experienced team too.


Been a nodejs dev for years, but I've recently found React, TypeScript, Elixir and Rust more to my liking, with Python for ML stuff.

Good type safety seems to be an acquired taste.


I just saw a Haskell meetup where they're using Haskell for devops. Basically static typing the underlying machine configurations. As usual with Haskell code, the abstraction is generalized and you can apply the logic to other domains.


Was it https://fugue.co ? I think they use Haskell for their DSL


here's the list of libs mentioned https://news.ycombinator.com/item?id=14267364


That's really interesting. So config files themselves could be considered as some complex type, and the inherent language type checking may be used to validate them?

Somehow reminds me of Structs/Records in Erlang/Elixir and using pattern matching, conditions and guard clauses to enforce validity or logic.



I've been using propellor for a few months now and it's fantastic. It takes a little while to get your head round it, but compared to puppet, chef and ansible, all of which I've used, it's a breath of fresh air.


Good point, this was the first of the two libraries mentioned, the other has just been open sourced; it's called deptrack https://github.com/lucasdicioccio/deptrack


I've been writing Haskell for nearly 10 years and am coming up on 4 years writing it full time. (You could say I know the language pretty well...)

I think it's just a pretty good general purpose language, for the most part? I know that's not very exciting of an answer, but I do consulting for people and I've seen Haskell used for everything our clients do, from web development (a lot of web development) to compiler development, distributed OLAP-style analytic processing on large data volumes, custom languages/DSLs, machine learning, hardware/FPGA design work, desktop software (flight tracking/schedule software, used by on-ground airport personnel, with some incredibly beautiful code), cryptocurrency and cryptography, etc. etc. There are the occasional research-y things, and we've also just done things like that for random institutions or companies (e.g. prototyping RINA-style non-TCP network designs.)

I wouldn't say any of these were a particular "sweet spot". Though some things like compiler development or web development are pretty pleasurable. The language and libraries make those pretty good. Web development is one of the more popular things now especially. Everyone has to have their thing in a browser now, and there's a fair amount of library code for this stuff now. Definitely one of the stronger things.

You do get the occasional problem that just has this perfectly elegant solution at the core and that's always an awesome bonus. One of my favorites was the aforementioned flight software. It had this very strict set of bureaucratic guidelines but all the core logic ended up having a very nice, pure formulation that ended up being about 8 functions used in a single function-composition pipeline. (We use this sometimes as an example in our training of real solutions for real problems.) A lot of software is much more boring and traditional.

Mostly, it works well and stays out of my way, and lets me solve problems for good. And I really like the language in general compared to most other languages; it hits a very unique sweet spot that most others don't try to touch.

I do admit the sort of high level, abstract shenanigans are fun and interesting too, but I think of it more as a fun side thing. Just this week I literally wrote the documentation for a data type, describing it as an "Arbitrary-rank Hypercuboid, parameterized over its dimension." Luckily I do not think you will ever have to use that code in reality, so you can breathe easy. :)

---

EDIT: and to be fair, there are definitely some problems. Vis-a-vis my joke, we could all collectively use better "soft" documentation. I think some problems like "operator overload extravaganza" are overblown due to unfamiliarity; most real codebases aren't operator soup. But bigger problems, like knowing when to use the right abstraction, are real problems we have a hard time explaining.

I don't think performance is much harder than most other languages, but we definitely don't have tools that are as nice in some ways, nor tools that are as diverse. The compiler is also slow compared to e.g. OCaml. Some parts of the ecosystem are a lot worse than others; e.g. easy-to-use native GUI bindings are still a bit open. Windows is a bit less nice to use than Unixen.

There's a big gap between training in-person vs online/self taught content too, IME.


I am not an expert in any way, but parsers seem suited to Haskell. Perl 6's original parser was built using Haskell I believe, and went on to influence the language a lot (again, this is all stuff I have read; I have no real practical experience with either Perl 6 or Haskell).


Haskell in production (mostly) at: https://scrive.com/


I think Haskell has a future in machine learning.


> 1. There is a learning curve.

Time and experience can cover up anything. So this does not say much about Haskell other than it is all negative without time and experience.

> 2. Haskell has some very nice libraries

So does NodeJS and (on an abstract level) Microsoft Word. Libraries are infrastructures and investments that (like time and experience) can cover up any shortcomings.

> 3. Haskell libraries are sometimes hard to figure out

That is simply negative, right?

> 4. Haskell sometimes feels like C++

That is also negative, right?

> 5. Performance is hard to figure out

That is also negative, right?

> 6. The easy is hard, the hard is easy

That is a general description of specialty -- unless he means all hard are easy.

> 7. Stack changed the game

Another infrastructure investment.

> 8. Summary: Haskell is a great programming language.

... I am a bit lost ... But if I read it as an attitude, it explains a lot about the existence of infrastructure and investment. Will overcomes anything.


> The line that if it compiles, it’s probably correct is often true.

That is the meat of why Haskell is so great. I've never refactored code as recklessly as I do in Haskell; I just wait for the compiler to tell me what I missed and go back and fix it up. I'd never do that in C, C++, Java, etc; it'd be suicide.

And while that's still not a great fleshed out explanation, it's a great oversimplification of the symptoms of a programming language with a great type system. And that type system is more or less the only reason to use Haskell.

As other people in this thread have commented, Haskell has fantastic abstract libraries, which let you do abstract things, the most useful of which that I'm aware of/understand is parsing. Parser combinators are a very natural fit for Haskell, and making DSLs with them becomes practically trivial to do.
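
A flavour of why (a hedged sketch with parsec; the mini command language is invented):

    import Text.Parsec
    import Text.Parsec.String (Parser)

    data Cmd = Get String | Set String Int deriving Show

    -- Parsers compose like ordinary values, so a small DSL is a few lines.
    cmd :: Parser Cmd
    cmd =  Get <$> (string "get" *> spaces *> many1 letter)
       <|> Set <$> (string "set" *> spaces *> many1 letter)
               <*> (spaces *> (read <$> many1 digit))

    -- parse cmd "" "set retries 3"  ==>  Right (Set "retries" 3)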

Edit: I think the author takes for granted the general praise that Haskell gets, and was attempting to temper it with his practical experiences using the language.


> I've never so reckless refactored code as much as I do in Haskell, I just wait for the compiler to tell me what I missed and go back and fix it up. I'd never do that in C, C++, Java, etc; it'd be suicide.

That's precisely what I was doing at work today in a C# project. I was going particularly crazy today, doing some refactoring with project wide regex replaces rather than leaning on VS/Resharper for everything.

I think it depends a lot on how your project is structured. If you're passing around objects everywhere and casting... you're gonna have a bad time, sure. But at that point you're practically using Python or something. If you're using generics and so on properly, then you can lean on the compiler and type system quite a lot.


Yup, C# codebases (especially w/ tools like Resharper) can be massively refactored and, in my experience, almost always work perfectly as long as there are no compilation errors. C# tooling is fantastic.


"That is the meat of why Haskell is so great. I've never so reckless refactored code as much as I do in Haskell, I just wait for the compiler to tell me what I missed and go back and fix it up. I'd never do that in C, C++, Java, etc; it'd be suicide."

That's interesting. I am pretty aggressive with C++ refactoring because the compiler will tell me what's wrong. Especially compared to JavaScript for example. In what way does the Haskell compiler provide better error checking?


The semantics of the program can be more thoroughly encoded in Haskell's far more powerful type system. So if the program type checks you're far more likely to have a correct program than in a language like C++ or Java.
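
A trivial but representative example (invented names): encode a distinction in the types, and a bad refactor becomes a compile error instead of a runtime bug.

    newtype UserId  = UserId Int
    newtype OrderId = OrderId Int

    chargeOrder :: UserId -> OrderId -> String
    chargeOrder (UserId u) (OrderId o) =
      "charge user " ++ show u ++ " for order " ++ show o

    ok :: String
    ok = chargeOrder (UserId 7) (OrderId 42)
    -- chargeOrder (OrderId 42) (UserId 7)   -- rejected by the type checker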


> That is the meat of why Haskell is so great. I've never so reckless refactored code as much as I do in Haskell, I just wait for the compiler to tell me what I missed and go back and fix it up. I'd never do that in C, C++, Java, etc; it'd be suicide.

I work on Java projects and do big re-factorings using IntelliJ, relying on the compiler. So this is a bit project-dependent.

Java chooses two wrong defaults - nullability by default, and mutability by default. If your project consciously chooses to not opt into these defaults, from my experience, you can use javac to guide you in fearlessly making large refactorings.


"Java chooses two wrong defaults - nullability by default, and mutability by default."

Which of course, are two defaults that Haskell chooses correctly (which is maybe what you were implying).


OP here.

The point about a learning curve is that Haskell is different from most mainstream programming languages.

> > 2. Haskell has some very nice libraries > So does NodeJS and (on an abstract level) Microsoft Word. > Libraries are infrastructures and investments that (like > time and experience) can cover up any shortcomings.

Javascript is one of the most widely used languages in the world and MS Word leverages the Microsoft ecosystem, which is another of the most widely used development environments.

It's when you use less widely used environments (and Haskell may be borderline here) that you need to start worrying about libraries being available. I would not go as far as claiming that Haskell is as good as Javascript in that respect, but it is pretty good.

> > 4. Haskell sometimes feels like C++ > That is also negative, right?

I actually like C++ a lot.

It's also one of the most successful programming languages in the history of programming languages, so it has something going for it.

> > 5. Performance is hard to figure out > That is also negative, right?

Yes, it's a pitfall.

> > 7. Stack changed the game > Another infrastructure investment.

Yes, tooling matters.


So you understand, I was commenting that your post didn't really say much about Haskell (learning curve, libraries, tooling). What you commented or implied is a bit of current Haskell culture.

> I actually like C++ a lot.

Haskell is an example of an academic, carefully crafted language, while C++ is rather a practically driven continuous patchwork (and they are extremes at both ends). I simply find your comparisons odd. Given their deliberately different design choices, if they ended up feeling alike, how can that be positive?


> Haskell is an example of academic carefully crafted language, while C++ is rather a practically driven continuous patchwork (and they are extremities to both ends).

Haskell is also a continuous patchwork, just look at the number of deprecated GHC extensions.


... which is the opposite of its pure vision, which is why I can't help think it is a negative.


>>> Haskell is an example of academic carefully crafted language

>> Haskell is also a continuous patchwork

> which is the opposite of its pure vision, which is why I can't help think it is a negative

I'm very confused. One second, you're literally saying Haskell was "carefully crafted", and the next, you're saying it's "the opposite of its pure vision"

It's almost as if you're just trying as hard as you can to argue, not caring what stance you're arguing, just so long as you can act like the author is wrong


> I'm very confused. One second, you're literally saying Haskell was "carefully crafted", and the next, you're saying it's "the opposite of its pure vision"

> It's almost as if you're just trying as hard as you can to argue, not caring what stance you're arguing, just so long as you can act like the author is wrong

Let me write the complete sentence:

([luispedrocoelho commented that] Haskell is also a continuous patchwork ) (which I simply accepted and followed -- even though I am doubtful) which is the opposite of its pure vision, which is why I can't help think it is a negative (regarding OP's point).

I commented originally that OP's this C++ comparison is also negative, right? So it is consistent.

When I say something is carefully crafted, I am referring to its intention. When I accept that someone finds it like C++ or that it is also a patchwork, I accept that as a reality. When the reality runs against its design goal/vision/purpose, it is a negative, IMHO. -- Hope this clarifies.

EDIT: to make it even clearer, I didn't comment on whether a pure design goal or a continuous-patchwork reality is, on its own, positive or negative. That is subjective. However, having the design goal and the reality in conflict -- that can't be positive.


Every time I've tried to dedicate time to really learning Haskell, I've come away with the feeling, "If I was a lot smarter or had a lot more time to dedicate to it, this would be a fantastic language to develop in after about 2-3 years."

The way that functions can compose in the hands of a gifted developer is truly elegant. That said, I'm not sure it's a skill that translates well to the general development community (and maybe that's fine?).


Out of curiosity, what method/resource did you use to try to learn it?


I've tried "Learn You A Haskell for Great Good" and "Real World Haskell".


That was my guess, but I didn't want to be presumptuous. I tried with them as well and it didn't really stick. In my opinion "Learn You A Haskell" is a great supplemental reference, but a terrible teaching tool, and I think it's done a real disservice by reinforcing the reputation of Haskell as being difficult to learn.

I highly recommend the lecture notes online from the "Upenn Spring 2013 Haskell" class. Search that phrase and it should be the first hit (am on mobile).

Go through the lecture notes, and do all the exercises in the homework. (Link in purple at the top). It's a night and day difference from LYAH.

And you get to build some nifty real world style projects along the way.

Good luck!


> (#1) Time and experience can cover up anything.

This is simply not true. Time and experience can never make up for a lack of expressiveness, and expressiveness is one of the biggest selling points of Haskell.

In fact, experience can only compensate for complexity and a lack of discoverability or intuitiveness. Haskell has all three of those flaws, so you see, experience really is needed.

> (#2) So does NodeJS and (on an abstract level) Microsoft Word.

No, they don't. Javascript and macro Basic do not even have enough expressiveness to support the kind of libraries you'll find in Haskell.

> (#3) That is simply negative, right?

That's the inevitable downside of #2.

> (#6) That is a general description of specialty -- unless he means all hard are easy.

Well, not all, obviously. You still can't declare "f = solution to SAT in polynomial time" and be done with it. But the sheer amount of hard stuff that becomes easy is unsettling.

About #8, I'll leave it open. Experience and investment certainly cannot overcome anything and everything, as you claim, but I'm still undecided whether to classify Haskell as "the best available for nearly anything (except maybe if you have a killer library in another language)" or "hell, why can't somebody just come and rewrite this as something _simple_", or both.


> Time and experience can never cover up for lack of expressiveness

Expressiveness is subjective. Experience can alter one's perception.

In fact, time and experience alter the basis of comparisons, from objective comparisons to subjective comparisons.

> No, they don't. Javascript and macro Basic do not even have enough expressiveness for supporting the kind of library you'll find in Haskell.

You mean you can express the idea of taking a browser screenshot (for example) or producing a publisher-acceptable document with the same kind of ease (expressiveness, in my dictionary) in Haskell?

Again, without specifics, the comparison does not mean much -- which was all I was commenting on.

> But the sheer amount of hard stuff that becomes easy is unsettling.

Another meaningless subjective word ("sheer"). I wasn't debasing Haskell. I was commenting on the meaninglessness of the original post.


Expressiveness is an objective term, although it's multidimensional. It's the number of different concepts you can express in a language without writing an interpreter¹. Anyway, the dimensionality isn't much of a problem in this comparison, since Haskell has the upper hand in an overwhelming number of dimensions.

Keep in mind that Javascript required a multi-year process in which a committee reached an agreement and all interpreters had to be changed so that JS programmers could use continuation monads. And they are still restricted to continuations, with a far too verbose syntax.
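
For contrast, a minimal sketch of continuations as a plain library monad in Haskell -- this uses Control.Monad.Cont from the mtl package, and is my own toy example rather than anything from the thread:

    import Control.Monad.Cont  -- from the mtl package

    -- CPS as an ordinary monad: do-notation sequences the continuations
    pythagoras :: Int -> Int -> Cont r Int
    pythagoras x y = do
      x2 <- return (x * x)
      y2 <- return (y * y)
      return (x2 + y2)

    main :: IO ()
    main = print (runCont (pythagoras 3 4) id)  -- prints 25

No committee needed: Cont is defined in ordinary library code.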

> Another meaningless subjective word (sheer).

That "The easy is hard, the hard is easy" assessment is inherently subjective, but "subjective" is not the same as "meaningless", even less when nearly everybody that experienced it has the same assessment. (Are you also going to complain about my "overwhelming" above?)

1 - I'd give you a point on subjectivity if you were talking about the difference between a library and an interpreter. But those languages (that is, JS and Basic) are just not expressive enough for this to become a problem.


> but "subjective" is not the same as "meaningless", even less when nearly everybody that experienced it has the same assessment.

Subjective on its own doesn't have to be meaningless; it can be subjectively meaningful. However, using a subjective statement to pass as objective support -- that is meaningless. So if OP and you are merely commenting on states of mind, his and yours, that is fine -- and I do learn something in that regard. OP, and several other commenters, did not seem to realize they were substituting their (personal) subjective opinions for objective reasoning; that was what I pointed out, in case it is useful (to them).


Haskell is objectively expressive (where expressive means the amount of logic per character).

There are multiple reasons for this, kind of tied together. Whitespace is function application (no parentheses to keep track of as in lisp, or even e.g. C, Java, or Javascript when programming in a functional style). Immutability forces you to compartmentalize and compose everything. When everything is a function, function and variable names serve as VERY powerful documentation, especially when combined with Haskell's also VERY powerful type system (which e.g. has sum types where most languages do not, allowing you, for example, to define most error states of a program as a data type, which can then derive any typeclass).
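
As a minimal sketch of that last point (the names are invented for illustration), error states as a sum type:

    -- every failure mode the program knows about, in one closed type
    data AppError
      = NetworkTimeout
      | ParseFailure String   -- carries the offending input
      | NotFound FilePath
      deriving (Show, Eq)

    describe :: AppError -> String
    describe NetworkTimeout   = "the request timed out"
    describe (ParseFailure s) = "could not parse: " ++ s
    describe (NotFound p)     = "no such file: " ++ p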

One of the primary benefits of laziness is making function composition possible in a wider variety of situations (which also increases expressiveness, at the cost of occasional, though not difficult to avoid, space leaks).

One joke I've heard is that the ideal Haskell program is 20 lines of imports and one line of Perl.


By objective, you mean succinctness (which can simply be measured by program character counts)? In that regard, APL must be the most objectively expressive language. And what if we run DEFLATE compression over APL?

It happens that I measure expressiveness by the amount of time the author takes to express an idea and/or the amount of time a reviewer takes to comprehend it. That, unfortunately, is very subjective.

APL programmers write very short programs, but they write them at a one-character-a-minute (or slower) pace.


Well, I can't argue with the contention that there's a level of subjectivity here. However, your definition of expressiveness is most likely positively correlated with the one I provided; furthermore, while you may be able to more quickly "express an idea" in a dynamically typed language, your ability to precisely enumerate the idea will be less than in a statically typed one.

I've been writing Haskell for only several months, yet I would say I'm already more expressive with it than with other languages I have more experience with, like Javascript or Python, outside of domain-specific languages like SQL.


> furthermore, while you may be able to more quickly "express an idea" in a dynamically typed language, your ability to precisely enumerate the idea will be less than a statically typed one.

What we need to realize is that not all ideas are precise. In fact, most of our ideas are vague to a certain extent. They are still OK as long as the vagueness does not matter to the problem of interest, or is already constrained or implied by the context. So in efficiently expressing an idea, both insufficiency and over-specificity count against expressiveness. To disclaim: I don't claim any language is the best in that regard. I believe the language should be suited to the problem (as well as to the experience of the team).

Since you particularly mentioned types, I would suggest that a particular type is not always important to an idea. Take sorting, for example: the types of the items are not intrinsic to the idea. Having to specify the type contributes negatively to expressiveness. However, when performance matters for a specific sorting problem, types (as narrowly specified as we can make them) become important -- but we should recognize that this is a different idea from the original idea of sorting. So even though the program eventually expresses both the sorting algorithm and the types, being forced to mix the two ideas is negative for expressiveness.
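
As a concrete reference point for the sorting example, a sketch using the standard Data.List.sort: the element type stays abstract, and only an ordering is demanded.

    import Data.List (sort)

    -- sort :: Ord a => [a] -> [a]
    -- the idea of sorting names no concrete element type, only Ord
    example :: ([Int], [String])
    example = (sort [3, 1, 2], sort ["b", "a"])  -- ([1,2,3],["a","b"])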

Haskell's types and purity, for example, are not always essential to a programming idea. Having to take care of these language requirements when they are non-essential makes it less expressive (in those problem domains).


I agree to some extent, but Haskell purity, for example -- which I take to refer to monads and other concepts from category theory in Haskell -- is actually very, very abstract; the "constraint" of purity (the IO monad) just leads to the design of more consistent and very general interfaces. Once you become familiar with these interfaces you are quicker than you would otherwise be, and Haskell's type inference helps a lot with expressiveness vs. dynamic languages. In fact, my personal road to really learning Haskell was converting shell scripts to Haskell, a task typically left to dynamic languages but one Haskell might actually be uniquely suited for, contrary to conventional wisdom, due to the aforementioned factors.

Not saying Haskell is some kind of panacea; it doesn't run in as many environments as Javascript, and it can't handle programs that require very high performance at very low latencies, due to garbage collection.

For web dev, the library situation seems very mature to me.


What evidence do you have that APL programmers do not express and comprehend ideas in APL programs much faster than programmers using other languages? Similarly for Haskell.

How do you know you're not just a Blub programmer?

http://www.paulgraham.com/avg.html


The first piece of evidence is direct, but limited to my own channel, which is my personal experience.

The second piece of evidence is indirect. If average APL programmers could write and read programs at the same speed (characters per minute) as average programmers of other languages, Java for instance, then APL programmers would possess a significant advantage in finishing similar programming tasks, given that the typical code size is often a few orders of magnitude smaller. Why is such an advantage not embraced, with APL programmers everywhere? My hypothesis is that average APL programmers program at a much slower speed.

As with any science, I cannot say my hypothesis is conclusive. I am open (eager) to hear and examine any other evidence (and change my hypothesis if necessary).


I feel the same way. I think the Author was trying to give an honest opinion on the pitfalls. Would have been nice to hear about the honest benefits too.


I don't think being like C++ is a negative and it is far easier to learn compared to Haskell.


> ... I am a bit lost

Google "cognitive dissonance" and the "sunk cost fallacy."


The point of the article was focusing on things to fix. Having used Haskell, there's a great many things that were worth the cost of learning it. For one thing, it's made me a vastly better programmer in other languages. It lets me appreciate better styles of writing code and think outside the box when solving things. Using Haskell also gives me the tools for many projects to write code almost thoughtlessly -- you can just start writing in a declarative style and not worry about debugging, because everything is checked by the type system. Libraries exist for abstractions you wouldn't even think of in other languages that can cut down the code to 1/100th the length it would be and maintain expressiveness and type safety. At the same time, I don't find myself using Haskell all that often because I do think it's missing a lot of things I desire, but depending on what you're working on it can be very far from a sunk cost.


> you can just start writing in a declarative style and not worry about debugging, because everything is checked by the type system

That is dangerously close to the infamous "if it compiles, then it works" boast, which Haskellers make all the time (while denying that they make it), demonstrating in the process that they don't write real software, where the defects one encounters are very often of a nature such that the program behaves exactly as intended by its authors, but the intended behavior is itself wrong. How does the type system help here?

My comment wasn't glib. I actually think those phenomena quite well explain much Haskell advocacy. Considering how massive an undertaking it is to learn the language and its Byzantinely-complex, PhD-theses-in-disguise libraries (each of which sports a zoo of custom operators) and how small the payoff is, it's unsurprising that those who take the plunge begin zealously encouraging others to do the same, lest their own investment have been for nothing. In a way it's like a conspiracy.


There's no conspiracy; programming in Haskell is the most fun I've ever had with a computer.

The "if it compiles, it works" thing describes people's experience. It isn't always true and it isn't a valid excuse not to do proper testing, but in Haskell a non-working solution is usually at least broken in a way that makes sense in the context of the problem you're trying to solve. If you program an incorrect solution to a problem, you'll probably get a wrong answer. But a wrong answer is different than nonsense, which is what you get if, say, you write past the end of an array in C.

The Haskell type system is a very good nonsense filter, and probably a majority of programming errors are from telling the program to do something nonsensical. If you filter those out, sometimes what's left is a working program.

A Haskell programmer is unlikely to write a several thousand line program and have it work correctly the first time, but even large programs are written in small pieces. When those small pieces work correctly the first time, it's gratifying.


I'm not sure how this corresponds to your experience with Haskell, but as an OCaml user there's been numerous times where I have been forced to write large chunks of code without a compiler/IDE to aid me at all (e.g. VM wasn't working so had to work from Windows temporarily). In each of these cases, upon compilation, I have been greeted by a flood of errors (syntax errors, type errors, and everything in between). However, without exception, the program has always worked as expected when I ran it the first time. I think it's reasonably plausible that one could write a program the magnitude of ~5k lines in Haskell and have it work on the first attempt.


I will never, ever use Haskell in production because of its default evaluation strategy, the wrongness of which was tacitly conceded not long ago with the addition of the strictness pragma (which only works per-module) to GHC.

I think it's especially telling that its community skews so heavily towards this blogger/monad tutorial writer dilettante demographic rather than the D. Richard Hipp/Walter Bright 'actually gets real work done' demographic. I know which of the two I'd rather be in. Haskellers are even worse than Lispers in this regard. For the amount of noise about Haskell, you'd expect to see high-quality operating system kernels, IDEs, or RDBMSs written in it by now. Instead its killer apps are a tiling window manager, a document converter, and a DVCS so slow and corruption-prone even they avoid it in favor of Git.


There is a fellow by the name of Jon Harrop who has in times past "shown up" when Haskell came up for discussion. To say that he does not like Haskell is putting it mildly. I have the impression that the mere existence of Haskell is a personal affront to him.

What struck me about your post was how much it sounded like something he would write, in particular the interpretation of facts (a strictness pragma was recently introduced to Haskell) in the most extremely negative way possible (said pragma is admission that Haskell's default evaluation strategy is simply "wrong").

If you are not he, you should look him up, as I expect you two (?) would get along fantastically.


> you'd expect to see high-quality operating system kernels

The Haskell community is smaller than others, but there are numerous high-quality projects, including kernel/OS-level projects that you don't mention:

HaLVM, Haskell Unikernels - https://github.com/GaloisInc/HaLVM

The specification model of the sel4 microkernel - https://github.com/seL4/l4v/tree/master/spec/haskell

There are many smaller examples of OS projects in Haskell, but for better examples of production systems you only have to look to areas requiring high assurance such as finance, where a number of large companies use Haskell and other functional languages. (BoA, Barclays, Credit Suisse, Standard Chartered, etc.)

It's true that there are a lot of Haskell blogs focused on research-level category theory, but the concepts there don't need to be used or understood for high-quality, production-ready code. I'm actually not a Haskell guy, and very much of the mindset that you should use the right tool for the job, but being dismissive of languages with unique features such as being lazy by default is shortsighted.


Would you also feel that a strict-by-default language which adds support for laziness is a tacit admission that strict-by-default is wrong?


How would you feel if Common Lisps suddenly started sporting infix operator notation?

Haskell's alien, counter-intuitive evaluation strategy was actually marketed (circa 2010) as a powerful performance enhancer that facilitated optimizations that were difficult if not impossible to do in a strict language like C. Instead, it ironically tends to cause more performance problems than it solves, with hard-to-debug space leaks capable of taking down production systems.

The entire Miranda branch of the PL tree is likely going to prove to be an evolutionary dead end, and Haskell will eventually lose out to Idris or some other strict language. And I don't think it's a coincidence that the strictness pragma (which does not even give you a strict program) was added soon after Idris and the rest started getting traction.


> How would you feel if Common Lisps suddenly started sporting infix operator notation?

You mean like this? (I linked it the other day.) https://github.com/rigetticomputing/cmu-infix But one might argue that the prefix notation + macros being able to support such a library is just further testament to their use as default...


In C# or Java, it's easy to get a lazy collection and force it into a list or an array. Is it possible to do the same with Haskell's (lazy) lists?


Array.fromList?
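
Yes -- a minimal sketch of both routes, assuming the vector and deepseq packages:

    import qualified Data.Vector as V      -- vector package
    import Control.DeepSeq (force)         -- deepseq package

    lazyXs :: [Int]
    lazyXs = map (* 2) [1 .. 10]           -- nothing evaluated yet

    asVector :: V.Vector Int
    asVector = V.fromList lazyXs           -- materializes the whole spine into a vector

    strictXs :: [Int]
    strictXs = force lazyXs                -- when demanded, evaluates the list to normal form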


My impression on the consensus was that laziness was good for the tree (computation) but bad for the leaves (datatypes); hence the good practice of using strictness annotations on datatypes.


> hence the good practice of using strictness annotations on datatypes.

It's such a good practice that there's -XStrictData (different from the pragma mentioned by GP)
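
A minimal sketch of the two spellings (StrictData needs GHC 8.0 or later):

    {-# LANGUAGE StrictData #-}

    -- with StrictData in effect, both fields are strict, as if banged
    data Point = Point Double Double

    -- without the extension you annotate each field yourself:
    -- data Point = Point !Double !Double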


> I will never, ever use Haskell in production because of its default evaluation strategy, the wrongness of which was tacitly conceded not long ago with the addition of the strictness pragma (which only works per-module) to GHC.

Why must you publicly post something so wrong?


Because they have an axe to grind and a throwaway to burn?


> very hard to understand why a function could be useful in the first place

So true.

https://hackage.haskell.org/package/base-4.9.1.0/docs/Contro...

> mfix :: (a -> m a) -> m a

> The fixed point of a monadic computation. mfix f executes the action f only once, with the eventual output fed back as the input. Hence f should not be strict, for then mfix f would diverge.

But why tho?


Ubiquitous single-letter symbols mapping to who-knows-what possible things, pnflly abvted fncn nms, and unclear motivations for code are what I've bounced off of with Haskell every time I've tried to dig in. The community seems to have adopted all the worst parts of mathematics culture, along with whichever good parts they've brought in.


Serious question. In the following code, what would be better function / type names? (or you can pick another bit of Haskell code that you know and/or have particular trouble understanding).

    class Foldable t where
        foldl :: (b -> a -> b) -> b -> t a -> b


Maybe something like:

  class Foldable myFoldable where
    foldl :: (summary -> element -> summary)
          -> summary
          -> myFoldable element
          -> summary

This is the translation I do in my head when I read that type signature, at least.


Wow. That's much better.

Apparently-idiomatic Haskell reads (or, rather, doesn't) about like line-noise/code-golf style Perl to me.


The problem with this is that there are Foldables that do not behave in the way that is implied by the above re-write of the parameters.

I think this is kind of the problem when you are really high up in abstraction-land. The functions that you're using are so generic that it's hard to say that they operate on anything in particular, except that the arguments fulfill certain properties or laws.


I wonder if anyone has worked on a Prelude replacement which has these replacements (along with renaming functions where appropriate)...


To change the names you'd have to re-implement all the functions. Good luck with that, base is /large/.

This is a documentation issue, not an implementation issue.


Really? I find the single-letter version far easier to read.


The single letter version is far easier since I already know what foldable does. Not so much when I'm learning a new library.


I've always disliked mapM_. Something about it just looks ugly and it doesn't really hint at what it's for. I would like to see mapM as mapMonad and mapM_ as eachMonad. Of course, once I knew what it meant the abbreviations became rather nice.
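
For anyone following along, a small sketch of the convention -- the trailing underscore marks the variant that discards its results:

    main :: IO ()
    main = do
      -- mapM keeps the result of each action...
      lens <- mapM (\s -> putStrLn s >> return (length s)) ["a", "bb"]
      print lens                            -- [1,2]
      -- ...mapM_ runs the actions purely for their effects
      mapM_ putStrLn ["just", "effects"]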


Yeah, reading that shit drives me nuts.


Here's a practical application of mfix:

    mfix $ \threadId -> forkIO $ do
      -- computation in forked thread

forkIO creates a new thread and returns its thread id. However, in order to access the returned thread id inside the forked computation, normally one would have to store it in a variable and then read from it inside the thread.

mfix, by nature of its laziness, captures the return value of the function passed to it, and passes it to said function.


It's a very neat example, and I like it very much as an illustration of mfix, but wouldn't one just use Control.Concurrent.myThreadId for that? Seems a little overkill.


Ha, I guess this must be some kind of selective blindness then :) I've seen myThreadId many times, it makes perfect sense, and yet, whenever I've had to grab the spawned thread id I've always reached for mfix... Well, in this particular case there's of course no need for that.

But the general pattern applies to many things. One other example that immediately springs to mind is e.g. registering a callback while needing the ability to unregister the callback from within itself, using some sort of id returned from the registering function.
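
A sketch of that pattern against a hypothetical register/unregister API (the API names are made up for illustration; only mfix is real):

    import Control.Monad.Fix (mfix)

    -- assumed API:
    --   register   :: (ev -> IO ()) -> IO cid
    --   unregister :: cid -> IO ()
    registerSelfRemoving :: ((ev -> IO ()) -> IO cid)
                         -> (cid -> IO ())
                         -> (ev -> IO ())
                         -> IO cid
    registerSelfRemoving register unregister handler =
      mfix $ \cid ->
        register $ \ev -> do
          handler ev
          unregister cid   -- the callback refers to its own id via mfix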


By analogy to fix:

fix f --> f (fix f)

That is, to compute the fixed point of some function f, we give f access to the fixed point (i.e. fix f) in order to compute the fixed point.

Suppose f is strict, and that we try to run this:

    fix f --> f (fix f)
          --> f (f (fix f))
          --> f (f (f (...)))

Why? Because if f is strict it has to evaluate its argument before evaluating the body.

On the other hand, if f is non-strict, we can evaluate its body without evaluating its argument first.

mfix generalizes fix to monadic functions. When we have monadic functions, we're usually dealing with side-effectful computation, so we want to make sure that we only force the monadic action f once, so that the side effects only run once.

---

Another question to ask is "what is fix even useful for"? fix is how we introduce general recursion into a language. For example, the lambda calculus has fix, except it's called the Y combinator. Compare:

    Y f   --> f (Y f)
    fix f --> f (fix f)

The lambda calculus is Turing complete because it has this general recursion.

fix is often used in smaller toy languages to also make them Turing complete, because this form of recursion is straightforward.

While Haskell has recursive function definitions, fix (and mfix more generally) are sometimes useful in their own right.
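
For a concrete taste, the textbook example of fix recovering recursion from a non-recursive definition:

    import Data.Function (fix)

    -- the lambda receives "itself" as rec; fix ties the loop
    factorial :: Integer -> Integer
    factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

    main :: IO ()
    main = print (factorial 5)  -- 120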


I think you answered the wrong "why".


See: https://wiki.haskell.org/MonadFix

Basically it's the primitive that allows monadic computations to be written in the same lazy cyclic style as regular values in Haskell. (e.g. `ones = 1:ones` to create an infinite lazy list of 1.)

There isn't a single answer to "why?" any more than there is for monads in general, but as an example, I've been looking into using this abstraction to model circuit graphs.


Reflex is a frontend library for Haskell.

    -- create an input text widget with auto clean on return and return an event firing on return
    -- containing the string before clean
    inputW :: MonadWidget t m => m (Event t T.Text)
    inputW = do
        rec
          -- fire send event when pressing enter
          let send = textInputGetEnter input

          -- textInput with content reset on send
          input <- textInput $ def & setValue .~ ("" <$ send)

        -- tag the send signal with the inputText value BEFORE resetting
        return $ tag (current $ input ^. textInput_value) send


Ignoring the operator soup for a second (thanks, lens), you might notice that the send event depends on the input field and the content of the input field depends on the send event. In, say, Java you would first create the field and then mutate it separately. In Haskell mutating wouldn't work, so instead we usually tie the knot, which means making the value dependent on itself and letting laziness figure it out.

However, we are creating a text field here, so if we make the value recursive on itself we are gonna be spawning text fields all over the place. MonadFix is a variant of this that splices the effect out so it is only run once, and then allows the value to be recursive.


It's useful if you want to e.g. traverse a custom tree structure with an effectful function. You can write the function that just does one level of the tree and pass it to mfix to get a version that does the whole tree recursively, composing the effects in the way that naturally makes sense for that kind of effect.


The standard version [ fix :: (a -> a) -> a ] is weird too - I'd suggest reading up on that first, there's a lot of tutorials. You can use fix to write recursive functions, so I'm guessing mfix is analogous, it just handles some >>= plumbing for you.


mfix is somewhat low-level. It’s used in the desugaring of Haskell’s “do rec” notation.

I find it useful because it lets me take a recursive structure[1] and a multi-pass algorithm[2] on that structure, which involves side effects[3], and express it in code as a single pass, without mutation.

This conversion of multi-pass algorithms to a single pass while retaining time/space complexity guarantees is one of the key benefits of laziness in general.

[1]: e.g., an AST

[2]: e.g., type inference, where pass 1 is generating type constraints, and pass 2 is solving the constraints and annotating the tree with the final inferred types

[3]: e.g., generating fresh type variables


I'd love to hear more detail about the example you give of type inference over an AST. I like the idea, but I'm having trouble envisioning how to write code in that style cleanly. Did you implement that in an open source project I could take a look at?


It’s not the prettiest code, but here:

https://github.com/evincarofautumn/kitten/blob/8a7e949f1af71...

The key line is:

    (term', t, tenvFinal) <- inferType dictionary tenvFinal' tenv term

In “inferType”, I just annotate each term with “Zonk.type_ tenvFinal type_” as I go, in the same pass as generating the type constraints. Since “tenvFinal” is the final type environment, and “zonking” substitutes all type variables with their solved types, everything gets lazily resolved when reading the annotated types later on, e.g., during analysis & codegen.


Thanks, that helped me understand better. That's really cool!


Haskell: where difficult problems are trivial, and where trivial problems are the subject of ongoing academic research


The Sieve of Eratosthenes is a good example of something that is relatively trivial in imperative code but a research paper in Haskell.


I am pretty sure the sieve can be written like this in Haskell, which might be a bit short for a research paper:

    sieve :: Integral a => a -> [a]
    sieve l =
        sieve' [2..l] []
      where
        sieve' (p:ns) ps =
            sieve' (filter (\x -> rem x p /= 0) ns) (p : ps)
        sieve' [] ps =
            reverse ps


That's not the real sieve, though. It doesn't cross off the multiples of each prime the way the real sieve does; it does trial division instead, and is incredibly inefficient. Try finding just the 19th prime. Here is the paper on the real sieve: https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf


And it's exactly the same thing with the classical beautiful functional example of quicksort. It's not actually quicksort and its performance is awful as a result.

Haskell has trade offs.


Some ideas on how to turn it into a research paper -- add a discussion of how you trivially manage memory (avoiding unnecessary allocation, copying, and reversing). The original trivial idea was simply to sieve in place. So how do we do that in Haskell (hint: invent something)?


C'mon. Funny joke, but not fair when taken seriously. Which "trivial problems are the subject of ongoing academic research" in Haskell?


When you consider "trivial" to mean "only if you ignore all kinds of edge cases that most languages implicitly ignore" ... then Haskell has plenty of paper-worthy trivial problems.


> and where trivial problems are the subject of ongoing academic research

Like what?


Working with graphs is super painful. Building a graph where you can say `parent (firstChild node) == node` is really difficult. There are ways to define it all in one go, or ways to build as you go, but I'm not aware of any way to do it without serious tradeoffs.

See e.g. https://wiki.haskell.org/Tying_the_Knot
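
A minimal tying-the-knot sketch of that two-way reference, which also hints at the tradeoffs:

    -- parent and child reference each other, built in one pure expression
    data Node = Node
      { name       :: String
      , parent     :: Maybe Node
      , firstChild :: Maybe Node
      }

    tree :: Node
    tree = root
      where
        root  = Node "root" Nothing (Just child)
        child = Node "child" (Just root) Nothing

    -- name <$> (parent =<< firstChild tree)  ==>  Just "root"
    -- but a derived (==) on Node would recurse forever, one of the tradeoffs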


I am not holding that (joke's) view, but monads came to mind as ongoing academic research (relative to original Haskell, at least) to solve rather trivial stateful programming (trivial in an imperative style). To generalize that example, there are solutions that are trivial to express in an imperative style but seem convoluted in pure functional programming. How to retain that trivialness is the subject of ongoing academic research.

EDIT: To be specific, I can think of the example of machine efficiency, which can simply be specified literally in C but is uncertain and opaque in Haskell.


This confuses me. You don't really need monads to perform IO or specify sequencing in Haskell, although do notation is very convenient for sequencing actions.

Ultimately, if you want imperative style, you can just write imperative code that's well typed in Haskell.


> You don't really need monads to perform IO or specify sequencing in Haskell.

By implying certain logical dependencies? It would be rather unnatural, which I assume is the reason for the invention of monads.

> Ultimately, if you want imperative style, you can just write imperative code that's well typed in Haskell.

It is not just a style. There are logical implications. In imperative programming, state is always assumed. That is, the operations are assumed to have different effects if their order is switched. Note this is a much stricter assumption than the alternative, that some or all operations are stateless. Such stricter confinement provides certain conveniences. For example, because the order of instructions is meaningful and can convey ideas, context can accumulate, and local ideas can be the focus (rather than carrying the whole state around). In the real world (as opposed to the mathematical world), most actions affect vast amounts of state that are impossible to quantify, and we (as an adaptive species) are well adapted to stateful imperative thinking. How do you perform rigorous logical thinking when the inputs (states) are never complete?

In computer programming, the states can always be fully described. However, if you always want to fully describe your states, you'll either restrict your functions and programs to limited states (input/output) or constantly write cumbersome functions/programs with long descriptions of all their states. The solution is to share and imply some of those states in the types -- monads, for example. So yes, the monad is not strictly a necessity but a convenience. Since Haskell never assumes hidden state, merely imitating an imperative style is still not imperative programming, because the states are still not assumed but rigorously specified and carried. So even with a seemingly similar code style, the cognitive process in Haskell and in a conventional imperative language is very different.

Which is better? It depends. If you hold the view that everything a computer solves is a mathematical problem, or when you are actually solving mathematical problems, certainly you would think only Haskell's approach makes sense. On the other hand, if you understand that computers are merely a tool for solving real-world problems, then you are often not looking for mathematically correct answers but for practically good-enough solutions. Practically good-enough solutions allow room for shortcuts or undefined behavior. Embracing these uncertainties (a shock to mathematicians) allows for efficient means. In that context, a pure mathematical restriction can be a burden rather than a help. The latter approach, in fact, carries much more complexity. Take calculating pi to the millionth digit. In the functional approach, it is a mathematical problem, and a correct answer is what you are looking for. In the imperative approach, memory cost, speed, and various shortcuts are all part of the consideration and part of the solution. It is not always just the answer that matters; how you get the answer also matters. But do those things really matter? It depends.

Real programming is often a mixture of both -- in C or in Haskell. C defaults to an imperative style, with explicit statelessness when needed. Haskell defaults to stateless, with explicit state when necessary. What is trivial in one is difficult in the other.


> The same is not true of Haskell. If you have never looked at Haskell code, you may have difficulty following even simple functions.

Why is it that people talk about this almost as if it's some virtue of the language? As if the fact that it's so inscrutable proves that it's valuable, different, and on a higher plane of computing.


You are reading way too much into that. The author is just saying the syntax is a lot different from languages you have probably used before.


The point is that it's not inherently inscrutable—it's different. And being different is a virtue: at the very least, you'll learn something from Haskell, and it's going to let you do things other languages won't. It's not just an Algol reskin.


If it had Algol syntax and Algol naming conventions it might actually have succeeded outside of academia. The decision not to go with an Algol-derived syntax was probably the biggest mistake ever made in Haskell's design.


Maybe... I'm not convinced the semantics would feel as good in that setting.


Not that that's a virtue, but it is proof that it's a different thing.


I didn't read it that way at all. I read it as acknowledging the syntax as a problem in the beginning, but claiming that it became much easier to read once you really learned the language.


It's merely different. It's like using Prolog for the first time, you're likely to find it very unfamiliar.


Wow, really good writeup. And on point regarding '5 days' vs '5 years' approach.

And it really confirms my biases against the language.


> The easy is hard, the hard is easy

Probably the best single line description of Haskell.

The learning curve applies to the former (easy) part; the latter (hard) part contains some really brilliant ideas, where you start to wonder why so many people still use other languages for everything -- then you remember how hard the easy stuff can be...


I don't think it takes 5 years to learn this stuff. Nothing stood out to me and I only used Haskell for 10 weeks for a single college course 2 years ago. It's all true though.


> Types / Type-driven development Rating: Best in class

> Haskell definitely does not have the most advanced type system (not even close if you count research languages) but out of all languages that are actually used in production Haskell is probably at the top.

What are these other research languages, that have such incredible type systems? Do they usually have implemented compilers, or would they only be described in an abstract form? Can I explore them for fun and curiosity?


The fact that he mentions type-driven development probably means that he has dependent typing in mind, as seen in the language Idris, whose recently published book has the same title. Idris is very much not just an abstract language.


Yes, the author mentioned Idris shortly afterwards, but I get the impression that this is just the tip of the iceberg. Idris was presented as the one example most likely to evolve from a research-only language into a production language.


There's also Agda and Coq, at various points on the spectrum between programming language and theorem prover.


I strongly agree with most of this. I've been so pro-Purescript because I think we need a break from Haskell into something new but related. Haskell is great, but has so much baggage it's tough to enter into.


I have tried to use Haskell a number of times in the past ~5 years for small-scale projects and witnessed others try to use it, and most of these projects (actually, I think all) have resulted in failure / rewrites in simpler, plain-vanilla languages like Python/Go. I keep thinking Haskell is interesting, but at some point I had to force myself to stop investing time in it, because I kept concluding it's not a good investment of time for a move-fast/practical person like me. I had to remind myself of this again when I read this post.

Some reasons I remember for the various failures, in no particular order:

- steep learning curve = experienced (in other languages) programmers having a tough time, not being productive for weeks/months in the new language, with no clear payoff for the kind of projects they're working on

- sometimes/often side effects/state/global vars/hackiness is what I want, because I'm experimenting with something in the code; and if I'm not sure this code will be around in 3 months, I want to leave the mess in and not refactor it

- in general, I think all-the-way pure, no side effects is too much; I think J. Carmack said something along these lines: Haskell has good ideas which should be imported into more generally useful languages like C++ etc., e.g. the game state in an FPS game should be write-once, as it makes the engine/architecture easier to understand (but in general the language should support side effects)

- I found the type system to be cumbersome: I kept not being able to model things the way I wanted to and kept running into annoyances; I find classes/objects/templates etc. from the C++/Java/Python/whatever world to be more useful for modeling applications

- when the spec of the system keeps changing (=the norm in lean/cont.delivery environments), it's cumbersome/not practical to keep updating the types and deal with the cascading effects

- weird "bugs" due to how the VM evaluates the program (usually around lazyness/lists) leading to memory leaks; when I was chasing these issues I always felt like I'm wasting my time trying to convince the ghc runtime to do X, which would be trivial in an imperative language where I just write the program to do X and I'm done

- cryptic GHC compile errors regarding types (granted, this is similar in C++ with templates and STL..)

- if it compiles it's good => fallacy we kept running into

- the type system seemed not a good fit for certain common use cases, like parsing dynamic/messy things like JSON

Working at Facebook for the last year and seeing the PHP/Hack codebase which powers this incredibly successful product/company has further eroded my interest in Haskell: Facebook's slow transition from PHP to Hack (= win) shows that some level of strictness/typing/etc. is important, but it's pointless to overdo the purity. Just pick something which is good enough, make sure it has outstanding all-around tooling, have good code review, and then focus on the product you're building, not the language.

I'm not saying Haskell is shit, I just don't care about it anymore. I'm happy if people get good use out of it; clearly there are problem spaces that are compact and well-defined, where correctness is super-important (like parsing).


> - when the spec of the system keeps changing (=the norm in lean/cont.delivery environments), it's cumbersome/not practical to keep updating the types and deal with the cascading effects

Interesting. I find that one of the best parts of Haskell. If I expose the assumptions I've relied on in the types, then when those assumptions break I have tremendous help knowing what code must change to deal with it.
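
A tiny sketch of what that looks like in practice, with a made-up domain type and warnings turned on:

    {-# OPTIONS_GHC -Wall #-}

    data PaymentMethod = Card | Cash   -- later: add | Invoice

    fee :: PaymentMethod -> Double
    fee Card = 0.03
    fee Cash = 0.0
    -- the moment Invoice is added, GHC warns that this match is
    -- non-exhaustive, pointing at every site that relied on the old assumption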


> Performance is hard to figure out

1000x changes in performance are not a problem if:

1. Performance of one module is not overly dependent on the code that uses it.

2. Performance never degrades by an order of magnitude with new compilers.


The thing with enlightenment-level ideas isn't that they aren't enlightening but that they attract hype. Individuals pretending to be enlightened, if you will.

A scientific mindset as well as liberalism are also ideas where new proponents often want to draw a line in the sand to stratify people into superior and inferior. The original proponents were chasing a higher level of quality for all, but the need for social stratification weaponizes and gates ideas.


I'm confused. Can you tell me how your comment relates to the article?

The author of the article is sharing his experience with Haskell, explaining ups and downs. He concludes by

> Haskell is a great programming language. It requires some effort at the beginning, but you get to learn a very different way of thinking about your problems.

Are you interpreting the word "different" as "better"?


> If you read an article from 10 years ago about the best way to do something in the language, that article is probably outdated by two generations.

Thank you, that is all I need to know about Haskell. I won't be learning Haskell then, in the same way that I won't have anything to do with C++. I don't have enough time to use these fashion-dominated and fad-obsessed programming languages.


Surely you’re joking? I think ~5 years is a totally reasonable time frame for new techniques, language features, and best practices to come about. Languages and libraries evolve in response to the needs of users and pain points with older approaches.

To be honest, having been a Haskeller since ~2010 (and a C++ guy since long before that) I’m not even really sure what specifically the author is referring to there.


Holy cow! Don't ever dream about getting into node.js then!


Anything to do with Javascript is prey to churn. Many modern web pages consist of a blank page which loads various bits of Javascript from various servers which then process other Javascript via Javascript templates into the final page. There might be a small minority of web sites which actually are updated so often that these actions are justified, but the bulk of the websites clearly are those where some Javascript-fixated person has decided that "there is no such thing as too much Javascript".

Considering that the language was not very well designed in the first place, putting it on both the web browser and the server, and then insisting on regenerating what could easily have been supplied as static HTML pages by running dozens of Javascripts on both the server and the browser should be enough to convince an average observer that Javascript is being abused in this way due to nothing more than faddism.


This has been the most disappointing blog post I have read in quite some time.


Why? The author didn't crap on, nor fawn all over, the language. He highlighted what he thought were important quirks of the Haskell ecosystem/community/whatever, in a measured manner, while noting that if you invest time and effort you can indeed produce relevant output.

Should one dive in given those quirks? The decision is up to the reader.

Curious, what did you expect/wish for?

EDITED for better phrasing.


Do you care enough to share why is it disappointing for you?


We should forbid clickbait titles on HN; it's an insult to the audience's intelligence.

Clickbait titles are so common we think they're normal titles. Here's how he did it: you create a craving for an answer, then you offer a solution for that craving. "here’s how it was" ==> that's the trick. Also, "here’s how" should never be used in a title; we all know that the title's subject IS what you're going to talk about. A pre-clickbait-era title would have sounded like: "Learnings after using Haskell for 5 years".


We should, and we do.

This title is on point: The author used Haskell for several years, and found there was a learning curve, some helpful libraries, some poorly documented libraries, lots of historical baggage, unclear performance consequences, and uncommon correlations (good and bad) between task and difficulty.


While this title follows a particular style it actually tells you what the article will be about (5 years of Haskell). What bothers you about it?


It's clickbaity because you have to click to find out if their opinion is highly positive or negative. It sounds like it's going to be answered by "... terrible" or "... great!", and that it's phrased that way to make you click and find out.

Granted, once you click you find out that's not why it's titled that, but you still feel baited into clicking to find out. A better title would be something like "I tried Haskell for 5 years and here's the good and the bad".


It's hardly clickbait. Clickbait would be a purposely inflammatory or exaggerated title to get you to click the link. This is the complete opposite of that.

> It's clickbaity because you have to click to find out if their opinion is highly positive or negative.

I think you just stated the purpose of a title. Wiki says clickbait is:

> "... relying on sensationalist headlines or eye-catching thumbnail pictures "

There's nothing sensationalist about "I tried Haskell for 5 years". It's rather banal.


The title is not only accurate but it is stylistically appropriate in this case as the author is making an explicit criticism of the type of post where an author tries something for a few days and blogs about it.


Do you think this title is clickbait? I just read the thing it seems like a good title.


It's "baitish-feeling at first", just missing the tailing "and here's what happened" --- but --- at this point I think this style is just starting to become assimilated into general "unintended" blogging/writing style..

The other day, I saw some Medium post titled "Functional programming will make you happier". I was furious and about to blast some diatribe on the increasing infantile-inanity-inflation in headlines, but looking at the post, the author seemed genuine in taking the trouble to work out their thoughts for the reader, so I refrained. It's tilting at windmills anyway: increasingly, people are simply growing up with these sorts of titles and develop a primal instinct for what gets attention and clicks --- aka, reads.



