On Marketing Haskell (stephendiehl.com)
156 points by azhenley on May 30, 2020 | 264 comments



I am the person most likely to adopt Haskell at the company I work for. But I don't and I won't, even though I've used it for several pet projects and would say I've learned a lot of Haskell and its associated patterns.

It's not that Haskell doesn't have merits; I just feel they are oversold. While you can build programs that lack certain classes of errors, they are not bug-free, and at the same time debugging was often a lot harder than in other languages. Laziness is especially problematic in that regard.
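
To make the laziness point concrete, here is the textbook foldl pitfall (a minimal sketch, not from any real project):

  import Data.List (foldl')

  -- The lazy foldl builds up millions of unevaluated (+) thunks before
  -- anything forces them, so this can blow the heap even though it
  -- typechecks fine:
  leaky :: Int
  leaky = foldl (+) 0 [1 .. 10000000]

  -- The strict variant runs in constant space:
  fine :: Int
  fine = foldl' (+) 0 [1 .. 10000000]

  main :: IO ()
  main = print fine

Nothing in the types warns you about the first version; you typically find out at runtime, which is exactly where the debugging gets hard.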

Another problem is cultural. The Haskell and Scala communities have been the most hostile communities I have experienced. And the friendliness of a community is a sure predictor of growth.

Another important problem is the docs. For some reason they hardly contain examples. You don't get adoption that way, even if the maintainers feel that the type signatures are documentation enough.

And as a last remark, the combinatorial complexity of compiler flags is a problem and needs to be fixed. But the ecosystem is deadlocked there: no compromise can be found on what should be in the language and what shouldn't. And that points to a deeper problem in the Haskell design process.


I was a Haskell developer in my last gig. I entered that job determined to make a success of it, because I was excited about the language and felt I wanted to "prove" that it had value in a "boring" domain like financial services. But I was stymied at every opportunity by Haskell itself.

My biggest complaints are 1) the lack of good libraries, and 2) the appalling state of documentation for the few that exist. Too many Haskell projects read like "I figured out this neat family of types for thinking about X - at last URL parsing is provably safe - now you figure out the rest".

Let's take a practical example: HTTP requests. If I'm a budding JavaScript developer, or Kotlin developer, or Python developer, I can read through copious search results and tutorials on Axios / OkHttp / Requests / delete as appropriate. These tutorials not only walk me through the common use cases but also show me their APIs in the context of the new and alien syntax of the language I'm grappling with. They give me enough template code that I can use an incomplete understanding of whatever language it is I'm learning to figure out the rest without recourse to a textbook.

What does Haskell have? It has Req (and a bunch of other also-rans), and the docs for Req are atrocious, spanning a staggering 1 (one) example and then a meandering essay on the deficiencies of other Haskell HTTP clients. It's not good enough.

And that's my assessment of Haskell as a whole - not the language individually, which is expressive and powerful and inspired - but the whole ecosystem around Haskell that feels like a set of beta projects and hobby repositories generally developed by interested individuals rather than commercial endeavours.

Or look at the paltry state of IDE support. It's 2020 and the best option we have is VSCode with HIE, which rapidly spirals out of control allocating memory, randomly crashes and on Windows just refuses to even start half the time. Without this tooling a lot of the theoretical benefit of the type system is being lost. This is low hanging fruit but there's been no incentive to pick it.

And so I've made the difficult decision to leave Haskell behind. If you need typed functional programming, your options are OCaml, Scala, F# or even subsets of Rust and TypeScript. Haskell is a great learning experience but totally unproductive right now.


I think one of the most underrated aspects of Rust is its stellar documentation, tooling, community, high-level libraries and general accessibility. The language itself isn't easy per se, but 100% of the difficulty is intrinsic; none of it is incidental. In many ways it's "the practical Haskell", even though it's technically lower-level.


This has been my experience as well. In fact, I often say that Rust is like a hyper-practical ML language, like Ocaml 2.0.

I learned Rust in 2016, and have kept up with it since. It was a painful first entry point, and then another rather painful experience when I first used it for a big hobby project, but now it is my go-to language because it's exactly as hard as it needs to be, has amazing community and documentation and libraries, and gives me the option to have really good performance while also using very nice ML-family linguistic constructs.


This is the most underappreciated feature of Python and Ruby. They got it right and other languages are still trying to catch up. Rust offers nothing better except memory safety, at the expense of ease of use and with a steep learning curve. Indeed, the same can be achieved with C, if one replaces the complicated syntax and steep learning curve with better testing to check for memory safety. The Zig language shows that it is possible to provide a simple way to write both memory-safe and unsafe code.

Well begun is half done, and Python and Ruby made it so easy to program that programmers can write decent code without reading multiple books and spending a year or two. This is why Go and Swift, the other two systems programming languages, are far ahead of Rust despite being GC languages. Indeed, even the latest OS, Fuchsia, is written in C++ (the Zircon kernel), with a Rust SDK for writing high-level programs alongside the ones written with Dart and Flutter.


> The Zig language shows that it is possible to provide a simple way to write both memory-safe and unsafe code.

Zig is the kind of language that wouldn't have been written if its author had discovered ATS[1] at the right time.

[1]: http://www.ats-lang.org


I was aware of ATS before creating Zig.


It's a bit of a chicken and egg problem and pretty much what the author of the article is talking about as well. You need people's time and energy to improve the status quo and getting people's attention takes ... people's time and energy. Haskell has always been grassroots and imho has enough hard-core believers that it will never go away but whether it'll ever be mainstream I don't know. I have great admiration for the small group of people who put their own time and energy into making the language and the ecosystem better. Re IDE support - loads happening there atm, ghcide is imho a true game changer and fast becoming an invaluable tool for working with Haskell.


I no longer think it's a chicken and egg problem with Haskell, and more of a goals & culture problem. Libraries like lens and Parsec have a huge amount of effort put into them, probably more than what it would take to create a good HTTP client library. The problem is that creating industry friendly libraries is apparently not a goal that the Haskell community has.


> The problem is that creating industry friendly libraries is apparently not a goal that the Haskell community has.

I don't think that's a fair take. Haskell started as a research language, and it's only in the 2010s really that it started taking off and becoming more and more practical. That's because people invested lots of effort in libraries like lens (and many others). Haskell didn't even have a strong build system until Stack popped up in the mid-2010s. Frankly these were bigger priorities. These things had to be solved first in order to transition Haskell from a research experiment in lazy evaluation into a real-world tool.

In 2020, outside of maybe dependent types, the community is probably more concerned with practicality and pragmatism than ever before. There's a huge focus right now on improved IDE support, for instance.


10 years should be enough time to get a working IDE and a good starting set of libraries, and my personal experience is that Haskell has neither.


Maybe you ran into an issue where there still isn’t a decent starter library, but there are actually tons of great ones out there that we’ve used to build production services. So I can’t say I really know what you’re talking about in general.

Language server support is a recent trend, and it’s being worked on for Haskell just like it is for everything else.


People put energy into what's most useful for them, and I don't think they should be blamed for this. I'm sure I'm biased, but Lens and Parsec (and its successors) are both amazing libraries solving real problems in a powerful way. You probably don't agree, and that's perfectly fine by me. I'd also go as far as saying that the 2 libs you've picked actually have pretty comprehensive documentation as well, which is not always the case with Haskell libs (a real problem I 100% agree with, btw).


I’m not blaming them, mostly, but I am pointing out that where they’ve put their effort is not consistent with industry usage. Parsec is neat, but parsing is a relatively niche industrial need, and lens sits somewhere between PL research and “solving problems that only Haskell has”.

They’re fine libraries, they’re just not what I want to see if you’re trying to convince me to replace Java with Haskell at work. If I was looking to write a compiler however...


I think when framing it as a chicken-and-egg problem, the Haskell community's take on it is that you don't need a nest for the chicken to hatch in.

Brutalist Software engineering.


Usage-based data implies that you actually do need a nest for a chicken to hatch in.

If you want chickens to hatch reliably, you need a coop too.


> It's a bit of a chicken and egg problem

It is. The way you usually resolve that problem if your language is going to become a big mainstream success is to have a particular focus, one key selling point where it is uniquely capable or at least so far ahead of the opposition that using it is a compelling advantage. JavaScript was the language for writing front-end code. C was the language for low-level programming. Java was the language for getting code monkey class programmers to be reasonably productive in enterprise environments. There are now other languages in each category, and there are now other uses for each of these languages, but the important point remains: those languages dominated those niches at the time they took off and that was why they took off.

Unfortunately for Haskell, a sophisticated type system and compiler capable of supporting experimental language design research is a niche that about 0.0001% of developers and people who pay developers care about. Haskell has indeed been very successful in that small niche, and it remains so. But I had a look into it the other day from the point of view of implementing the back-end for a new web project, and its support for writing web servers is still probably a decade behind the state of the art. I swallowed my pride and today that back end runs on Node. Would I prefer Haskell's safety and power over JS? Of course. But I'd prefer a language where any server I wrote can speak modern web protocols and support the functionality I need, and of those two languages, only one clears that bar.


Can you explain where Haskell falls behind in terms of writing web servers? As someone who has written Node and Haskell web apps in the past, I don't think it's true. Let's take a look at one of the premier Haskell web frameworks, Yesod. It has support for authentication, HTTP/2, HTTPS, middlewares, logging, etc. I don't think there is any "modern web protocol" that Haskell cannot support. It's only a language, anyway; it's popular and powerful enough for people to implement something novel with it.

I know Haskell has been used as a testing bed for programming language ideas, because making abstractions in Haskell is freaking easy. But you can just ignore them and stick to the boring, working, practical subset of the language. The language has already been heavily used in industry, so I know for a fact that it's fit for production use. I myself have deployed Haskell apps to production (a Telegram bot for personal use) and never encountered any problems.


> Let's take a look at one of the premier Haskell web frameworks, Yesod.

OK, let's do that, since that's exactly one of the things I tried.

So, one thing I was interested in was indeed support for HTTP/2. You've just stated that Yesod supports it. Right now, with the aid of all of the main types of documentation linked from the Yesod site and a search engine, I am totally unable to verify that. I literally cannot find a single reference to it on the entire Web.

The same is true of identifying anything about the level of TLS support, configuring which cipher suites to accept, etc.

I was also interested in WebSocket support. Again, there's no obvious sign of this in the documentation, at least not that I found in a few minutes of browsing. Any of it. Seriously, projects wanting programmers to use them should have one authoritative source of reference documentation, and it should be up to date and easily searchable.

Still, on this one, at least a web search finds me yesod-websockets. That has the usual mix of package technicalities on its Hackage page, plus a documentation page with a long list of functions, type signatures and one-line descriptions in the Hackage documentation. Ironically, there is also a link from the package description to Stackage, which has another page of package technicalities linking to another similar page of documentation, which is roughly the same, but apparently the two versions I've found are different because one is 0.3.0.1 and one is 0.3.0.2. There's nothing to give you confidence in using a library in production like a four-part version number that starts with "0.". And no obvious description of the real level of maturity and stability that reflects, so I'm going to assume this is pre-release quality. And did I mention there should be one authoritative home for reference documentation? That goes for the package itself, too.

No matter how many times a Haskell fan says that this kind of thing is sufficient, it simply isn't. There is no introductory tutorial. There are no illustrative code snippets or HOWTOs or full working examples. There is no apparent organisation of any kind in what little reference documentation does exist except for a few headers that have little meaning if you don't already know what they mean. I have no idea how stable this package and its API are, but the version numbers (plural) suggest the default assumption should be "not very".

This isn't a serious proposition for production work. It just isn't. Maybe it's useful if you already know the Yesod library and the related libraries well, for example because you developed some of that ecosystem, or you've talked with someone who did, or maybe you've watched it evolve since it was simple enough to figure out from the limited documentation and kept up. But for someone with Haskell experience who hasn't used it to build a web server before, it's all but useless.

For the record, I had a working server, including proof of concept WebSockets code, up and running on both the client and server side using Node, in less time than it's taken me to re-check a few facts and then write this comment. And a web search for any of "node http2", "node tls" or "node websocket" immediately locates a wealth of relevant documentation, examples, etc.


> Right now, with the aid of all of the main types of documentation linked from the Yesod site and a search engine, I am totally unable to verify that. I literally cannot find a single reference to it on the entire Web.

> The same is true of identifying anything about the level of TLS support, configuring which cipher suites to accept, etc.

This is really weird, and I think that your comment doesn't represent reality. Here's the official Yesod Book[1] mentioning WAI and Warp as a de-facto standard server implementation for the Haskell web ecosystem. The Hackage page for Warp[2] contains "HTTP/2" and "TLS" at the very top of the package description section. Googling "haskell warp tls" produces a topmost link pointing to the warp-tls package[3]. The package docs clearly show all supported ciphers[4]. And googling "haskell yesod http2" produces, as the second link[5][6], a post that mentions HTTP/2 and TLS and has been around since 2015.

[1]: https://www.yesodweb.com/book/web-application-interface#web-...

[2]: https://hackage.haskell.org/package/warp

[3]: https://hackage.haskell.org/package/warp-tls

[4]: https://hackage.haskell.org/package/warp-tls-3.2.12/docs/Net...

[5]: https://www.yesodweb.com/blog/2015/07/http2

[6]: https://pasteboard.co/JaRXKUR.png
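
For what it's worth, the WAI/Warp baseline is also genuinely small. A complete (if trivial) HTTP server is a handful of lines (a minimal sketch assuming the wai, warp and http-types packages):

  {-# LANGUAGE OverloadedStrings #-}

  import Network.HTTP.Types (status200)
  import Network.Wai (responseLBS)
  import Network.Wai.Handler.Warp (run)

  -- Serve a plain-text response on every route on port 8080.
  main :: IO ()
  main = run 8080 $ \_request respond ->
    respond (responseLBS status200 [("Content-Type", "text/plain")] "Hello from Warp")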


> Here's the official Yesod Book[1] mentioning WAI and Warp as a de-facto standard server implementation for a Haskell Web ecosystem.

So just to be clear:

To discover what is actually possible in this area, someone new to Yesod is supposed to read the entire book through to an appendix, which itself could kindly be described as "not entirely clear".

Then they're supposed to recognise that they need to write their Haskell application using WAI and a Warp backend (whatever that means in this context, because the relationships between these tools are among the "not entirely clear" aspects of that appendix) instead of using the other tools that were already described earlier in the book, despite having no obvious motivation for doing so at this point (because terms like HTTP/2 and TLS do not appear anywhere in the Yesod book appendix you linked to).

Then on the off chance that these alternative tools mentioned in an appendix of an online book can do what the developer wants even though the appendix doesn't say anything about that at all, our intrepid developer is supposed to go and look up the Hackage pages for those tools, one of which does at least mention the term HTTP/2.

There's no further information on those documentation pages about how to actually use important features of HTTP/2, for example setting up server push, though. In fact, the only reference to server push I can find anywhere in the Warp documentation relates to the function setServerPushLogger.

So while I appreciate your taking the time to comment, I'm afraid my previous conclusion has not changed. Maybe these tools are useful if you already understand them in detail, but for someone familiar with Haskell but writing a web application with it for the first time, this experience isn't even on the same scale as a good, production-ready ecosystem.

Finally, just for the record, while choosing the perfect query to Google might have found you a useful link, several similar obvious searches I tried before did not, nor indeed does the exact term you mentioned ("haskell yesod http2") in some other search engines such as Duck Duck Go. And your Pasteboard link (I assume) intending to demonstrate the search you managed successfully is not working from here.


> Maybe these tools are useful if you already understand them in detail, but for someone familiar with Haskell but writing a web application with it for the first time, this experience isn't even on the same scale as a good, production-ready ecosystem.

But this is hardly an example of "not being able to find it with a search engine" as the original message seemed to claim. Surely it takes time to get acquainted with a new ecosystem, and curiosity is welcomed in the community.

> To discover what is actually possible in this area, someone new to Yesod is supposed to read the entire book through to an appendix

That's hardly true either: one just needs to google "haskell yesod http2" or "haskell yesod tls" - both results will be within the topmost 5 links. The results will be full of potentially unfamiliar technical details, but they still give the general idea of what is possible. And reading the book helps enormously to get everything internalized.

> There's no further information on those documentation pages about how to actually use important features of HTTP/2, for example setting up server push, though

The top link from googling "haskell warp http2 server push" is a whole section on how server pushes are done - https://hackage.haskell.org/package/wai-3.0.4.0/docs/Network...

> in some other search engines such as Duck Duck Go

I use DDG by default, and I regularly add !g for the same reason


I tried too, and there was nothing useful I could find in the first results on Google for "haskell yesod http2". Google heavily customizes search results per user, so we probably got different results.

The only related thing I got, at the end of the search results, is this page https://www.yesodweb.com/blog/2015/07/http2 which doesn't even mention Yesod, so good luck trying to understand what it is when all it talks about is the Haskell Warp server.

By the way, this paragraph is scary. Haskell didn't support half of the modern cryptography required for TLS 1.2 (2008), so the author had to write it himself (and send pull requests to the tls Haskell package).

"Unfortunately, many pieces were missing in the tls library. So, it was necessary for me to implement ALPN, ECDHE(Elliptic curve Diffie-Hellman, ephemeral) and AES GCM(Galois/Counter Mode). They are already merged into the tls and cryptonite library."


For what it's worth, it's extremely easy to write a fast WebSocket server in Haskell.

[0] puts examples front-and-center on the Hackage page.

[1] lets you easily integrate WebSockets with other things like Warp.

[2] shows that WebSocket servers in Haskell are actually quite performant.

[0] https://hackage.haskell.org/package/websockets

[1] https://hackage.haskell.org/package/wai-websockets

[2] https://github.com/hashrocket/websocket-shootout/blob/master...
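
For a sense of scale, a complete echo server with [0] looks roughly like this (a minimal sketch in the spirit of the package's own examples):

  import Control.Monad (forever)
  import Data.Text (Text)
  import qualified Network.WebSockets as WS

  -- Accept each incoming connection and echo every text message back.
  main :: IO ()
  main = WS.runServer "127.0.0.1" 9160 $ \pending -> do
    conn <- WS.acceptRequest pending
    forever $ do
      msg <- WS.receiveData conn :: IO Text
      WS.sendTextData conn msg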


> For what it's worth, it's extremely easy to write a fast WebSocket server in Haskell.

Again, that's all well and good if you already know how to do it. But to be useful for serious production work, it shouldn't take several hours of reading, including numerous side trips to dead ends, and a discussion on HN where multiple Haskell developers who apparently have a lot more experience in this area suggest a variety of different libraries that collectively might get the job done. Except that right now, we've got two libraries for dealing with the WebSockets aspect in play, not counting plumbing libraries to join them up with the other aspects, and it's not clear which would be preferable to use or why.

It's worth remarking that at this point, we still haven't reached the point where I'd know how to write the simple proof of concept that I wrote in probably 20-30 minutes with Node without needing to consult anyone else. Achieving that within the Haskell ecosystem would apparently take considerable further reading, not to mention writing considerably more code if the examples we've collectively come up with so far in this discussion are as good as it gets.

Just to be clear, that is a fair comparison. In both cases, I was already familiar with the language and its main tools but I had no previous experience of writing the server side of a web application using either.

I don't know how much clearer I can make my point here, and I think perhaps those commenting to defend Haskell are still missing it. Even if some combination of libraries exists to do something in Haskell, that is still of very limited practical use if someone who isn't already an expert in them can't find the relevant information about what they are and how to use them. The documentation and packaging we've been talking about in this part of the discussion appear to be far from ideal, and keep in mind that even in the discussions here, we've already started from the premise that the developer recognised Yesod as the way to go when several other similarly well-known libraries are available for writing web servers in Haskell with varying capabilities and limitations. This is the same fundamental criticism that numerous people have made of the Haskell ecosystem in many different contexts, including elsewhere in today's HN discussion.


I'm largely sympathetic to the point that you're making. Your original claims were a little more inflammatory. This has transitioned into a discussion about the quality of documentation and tutorials, which I largely agree with.


FWIW, my point was always intended to be about the big picture and not just what is technically possible for someone who is already an expert (as some of the replies seem to have interpreted it). I'm sorry if my original phrasing was insufficiently clear.


> But I had a look into it the other day from the point of view of implementing the back-end for a new web project, and its support for writing web servers is still probably a decade behind the state of the art.

This is an unsubstantiated claim. For years, Haskell has had the best in class backend web stack - https://www.aosabook.org/en/posa/warp.html


> For years, Haskell has had the best in class backend web stack

Please define "best in class".


Best in class - when within a single language, the following is available either out of the box or as a simple extra library dependency:

* Semi-explicit deterministic parallelism out of the box (https://www.microsoft.com/en-us/research/wp-content/uploads/... , https://downloads.haskell.org/~ghc/8.4.2/docs/html/users_gui...)

* implicit Async IO runtime out of the box (https://www.microsoft.com/en-us/research/wp-content/uploads/...)

* thread-safe global mutable state and atomic transactions (see the STM sketch after this list; http://book.realworldhaskell.org/read/software-transactional...)

* type-safe native TLS (https://hackage.haskell.org/package/tls)

* multicore async streaming web server (Warp) with a unified application API (WAI)

* type-safe embeddable DSLs for rendering web content (https://hackage.haskell.org/package/blaze-html , http://fvisser.nl/clay/)

* type-safe embeddable binary content at build-time (https://hackage.haskell.org/package/file-embed-0.0.12.0/docs... , https://hackage.haskell.org/package/servant-static-th-0.2.2....)

* type-safe HTTP specs and automatic schema derivation (https://docs.servant.dev/en/stable/cookbook/index.html)

* the whole stack (multicore asyncio runtime, Network, HTTP/1.1/2, WebSockets, TLS, HTTP Server + Schema Specs, Markup rendering) is compiled into a single static binary of ~3 megabytes, universally across Linux, MacOS, Windows.

* fullstack story with a help of GHCJS and a selection of frameworks (https://reflex-frp.org , https://haskell-miso.org)
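
As a taste of the STM point above (a minimal sketch; compile with -threaded to actually use multiple cores):

  import Control.Concurrent (forkIO, threadDelay)
  import Control.Concurrent.STM
  import Control.Monad (forM_, replicateM_)

  main :: IO ()
  main = do
    counter <- newTVarIO (0 :: Int)
    -- Ten threads each bump the shared counter 1000 times, atomically:
    forM_ [1 .. 10 :: Int] $ \_ ->
      forkIO $ replicateM_ 1000 $ atomically $ modifyTVar' counter (+ 1)
    threadDelay 1000000            -- crude wait; real code would synchronise properly
    readTVarIO counter >>= print   -- prints 10000: no locks, no torn updates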


So Haskell is best in class if you define that to mean web servers written in Haskell?

If we surveyed a representative sample of 1000 professional web developers, how many of them do you think would give a list broadly in line with your definition for the properties of their ideal server implementation language?


Since most web developers have no real clue about any of that stuff, why would their list in comparison matter here?


I think if you are going to claim a tool is "best in class" then it should be demonstrably better at meeting the actual needs of people doing what it's used for.

The list above was a list of features, not a list of benefits. Those features might bring practical benefits to the working web developer -- after all, people who program in Haskell do so for a reason, which is probably that they believe this kind of language is helpful in some way -- but that has yet to be established here.

Meanwhile, an experienced back end web developer will probably be asking questions about managing large numbers of routes or guarding against common attack vectors or integrating with databases or calling external APIs or having a template system that supports translation to multiple (human) languages or understanding performance characteristics and how the servers will react to higher loads. These are practical matters that many web server frameworks attempt to address. It is revealing but not entirely surprising that the Haskell response to my question was instead to talk about strong typing and STM and so on.


> So Haskell is best in class if you define that to mean web servers written in Haskell?

I was answering your comment about Haskell's web ecosystem. I didn't claim that Haskell is the best in class for everything just because it's the best for implementing web backends.

> It is revealing but not entirely surprising that the Haskell response to my question was instead to talk about strong typing and STM and so on.

I put them into my response because they are the features that you are highly unlikely to find all together on any other platform. The regular things like I18N, L10N, and so on that you mentioned are not worth the discussion, because the Haskell ecosystem has them, in various shapes and forms, and apart from boring implementation details they are no different from similar libraries on other platforms.


In Haskell it's best to stick to libraries with more users rather than those that experiment with a fancy API.

So, for example, just use http-client instead of req. Use postgresql-simple or persistent instead of beam, etc...
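
For example, a GET request with http-client is just this (a minimal sketch; the http-client-tls companion package supplies the HTTPS-capable manager):

  import Network.HTTP.Client
  import Network.HTTP.Client.TLS (tlsManagerSettings)

  main :: IO ()
  main = do
    manager  <- newManager tlsManagerSettings
    request  <- parseRequest "https://example.com"
    response <- httpLbs request manager
    -- Inspect the status and the (lazy ByteString) body:
    print (responseStatus response)
    print (responseBody response)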


> What does Haskell have? It has Req (and a bunch of other also-rans), and the docs for Req are atrocious, spanning a staggering 1 (one) example and then a meandering essay on the deficiencies of other Haskell HTTP clients. It's not good enough.

For an HTTP request library, most would recommend Wreq. Being the most popular, its documentation is really good, I might say, with the most-used APIs having their own examples [1]. You can also easily find great tutorials on the net, like http://www.serpentine.com/wreq/tutorial.html for example. Honestly, I never used Req myself, and looking at its documentation, even though I don't think it's lacking, it's not really geared toward Haskell beginners, so maybe you have a point there.
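
To give a flavour, the opening example from that tutorial is about as short as these things get (a sketch assuming the wreq and lens packages):

  import Control.Lens ((^.))
  import Network.Wreq

  main :: IO ()
  main = do
    r <- get "https://httpbin.org/get"
    -- Pull the HTTP status code out of the response with a lens:
    print (r ^. responseStatus . statusCode)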

As a novice Haskell programmer who only uses it for side projects, I don't think the documentation is that bad, and it's only getting better. Most popular libraries, like database clients and web frameworks, have extensive documentation and tutorials nowadays.

I agree with you on the state of IDE support; it leaves something to be desired. But people are still working on it[2]. For now I mostly use vim with the help of a linter and formatter. The great thing about a good type system is that you can still do a lot even without great IDE support. Haskell already has great, working REPL support that makes development a breeze, so that helps. I'll still take Haskell over OCaml or Scala any day. Rust is great and I love it so much that it's my go-to language for anything systems-programming or networking related, but I don't think it's a good match if you want to learn functional programming.

[1] https://hackage.haskell.org/package/wreq-0.5.3.2/docs/Networ...

[2] https://github.com/haskell/haskell-language-server


Of your complaints, which are all valid, at least the IDE problem is progressing very rapidly right now. Ghcide used to be a better solution than HIE, but now these two projects have been merged and we're finally getting something that works well out of the box.


My experience recently has been great steps forward in features, and massive steps backwards in reliability.


The problem is that every time Haskell achieves some major milestone in the state of the art developer experience, the world has moved on to something new and Haskell is 5 years behind again.


Could you be more specific on this point? From my perspective, it’s always all the mainstream languages that are copying things from Haskell.


Haskell's language is very sophisticated, but the tooling and bindings to external systems is primitive.


Again, I’m going to ask for specifics. I run a Haskell company so I’m not unfamiliar with the language.


Eclipse, then Visual Studio Code; Tensorflow; web services. I'm sure Haskell is great for you in your niche, but for someone with arbitrary interest X who knows Java or JavaScript or Python (or all 3 together, which is easier than knowing Haskell), X likely works out of the box with those languages, but not with Haskell.


I’m sorry, but I still don’t understand what you mean. Haskell has mature libraries for writing web services. That’s what I use the language for, and it’s hardly a niche.

Plenty of people (including some of my colleagues) write Haskell in Visual Studio Code.

I don’t know anything about Tensorflow (though machine learning seems a bit more “niche” than building websites tbqh), and I haven’t known anyone to use Eclipse for writing code in the past decade or so.

Genuinely, I haven’t a clue what you are talking about. How would three general purpose languages “work out of the box” with X, but a fourth would not? Eclipse is a text editor and you can write Haskell with it. There are Haskell bindings to Tensorflow available.

Your point now doesn’t even seem to be consistent with your original point. How is Eclipse state-of-the-art? How is Haskell behind?


>If you need typed functional programming, your options are OCaml, Scala, F# or even subsets of Rust and TypeScript.

And don't forget D. It's surprisingly well-suited to FP, as its effects system has explicit support for pure functions and for specifying that a function is guaranteed not to throw, in addition to support for true (transitive) immutability. Not to mention full first-class function support, with lexical closures.


> Or look at the paltry state of IDE support. It's 2020 and the best option we have is VSCode with HIE

There's at least https://plugins.jetbrains.com/plugin/8258-intellij-haskell that is as simple for newcomers to setup as it can be.


You might like https://hackage.haskell.org/package/http-client/docs/Network...

Maybe my needs are limited but I rarely need something more than the GET and POST json examples there.


99.9 percent of folks should just stick to http-client instead of using dodgy libraries with fancy APIs that are not well maintained.


> Another problem is cultural. The Haskell and Scala communities have been the most hostile communities I have experienced. And the friendliness of a community is a sure predictor of growth.

To elaborate further, some of the most upsetting conversations I’ve had with Haskellers revolved around simple things like exceptions and logging. Issues would consistently turn into a matter of personal intelligence, and proving oneself correct. It is insane. We are just trying to build things here. These are very boring implementation issues that just need to get done.


I've found the opposite - I work with quite a few 'prominent' figures in the Haskell community and they are very approachable and friendly. The most hostile I've experienced was Stack Overflow as a beginner...


Hostility toward beginners is a poor strategy if growth is the goal.


I admittedly don't engage with the Haskell community much myself, but I'd speculate that part of the problem is the fact that C++/Java/Python/etc-style exceptions (as opposed to things like Either Excep t or SomeMonadFail t) are very boring implementation issues that specifically need to not get done. I'd also speculate that whoever you talked to did a poor job of articulating that.
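
To make that concrete, the preferred style treats errors as ordinary values (a toy sketch; Excep here is just a stand-in error type borrowed from the parent's notation):

  data Excep = NotFound | Timeout
    deriving Show

  -- The signature advertises exactly how this can fail; nothing is thrown.
  lookupUser :: Int -> Either Excep String
  lookupUser 1 = Right "alice"
  lookupUser _ = Left NotFound

  main :: IO ()
  main = print (lookupUser 2)  -- Left NotFound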


I used to contribute to the Darcs project, which is written in Haskell.

It read like a series of math equations, with frequent single letter variable names.

It was concise and elegant in its own way, but hard to follow in some cases due to the lack of readable variable names.

I like the language but this cultural aspect made it harder to get up to speed and contribute.


What’s wrong with single letter variable names? Were those variable names not associated with types that themselves have descriptive names?


I randomly opened a file in the Darcs code base and picked a line:

     mReadFilePS p = B.concat `fmap` BL.toChunks `fmap` TM.readFile p
To understand this, you need to know what "m", "PS", "B", "BL" and "TM" mean.

In Node.js, the convention is that classes are named after the modules they import-- they are full words:

    const config = require('config')
In Darcs, we find this:

  import qualified Data.ByteString      as B
  import qualified Data.ByteString.Lazy as BL
  import qualified Data.Map             as M
  import qualified Darcs.Util.Tree.Monad as TM
I guess you get used to it, but it's a steeper learning curve for contributors to memorize more items to get up to speed.


I don't understand. There is no memorisation necessary. The modules are imported, qualified by an alias.

I'll accept the learning curve for Haskell is at least longer (rather than steeper) than in most languages, but to hold this up as an example to me is just bizarre.

And I say this as someone who knew only JavaScript and PHP for some years before learning a bunch of other languages, including Haskell.

Moreover, you've conveniently left out the type signature which I expect would normally accompany this function, which would tell the reader what the `p` variable is.
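
And for what it's worth, nothing stops a codebase from choosing longer aliases; the parent's line with self-documenting names is just a rename away (hypothetical, and like the original it still needs the Darcs modules to actually build):

  import qualified Data.ByteString       as ByteString
  import qualified Data.ByteString.Lazy  as LazyByteString
  import qualified Darcs.Util.Tree.Monad as TreeMonad

  mReadFilePS p =
    ByteString.concat `fmap` LazyByteString.toChunks `fmap` TreeMonad.readFile p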


That same issue hasn't slowed Go down, so something else is getting in the way.


Not a Haskeller, but I'd like to try to change your mind about this point:

> We are just trying to build things here.

Some languages are built to do engineering first-and-foremost. Languages like Haskell are built to do mathematics in first-and-foremost. This doesn't mean that such languages aren't also well-engineered, or that they aren't also tuned to do engineering in; but the engineering concerns must be subordinated to not breaking the model that mathematicians rely upon to do math in the language. So you can't just e.g. change whether a particular kind of function can emit IO or not; or change what values exist as part of the lattice of a type; or any number of other things visible within the language's abstraction; because those very properties/constraints are also syllogisms which are relied upon in mathematics done in the language.

And before you say "why not just break the runtime model", by e.g. having special debug functions that can do IO while presenting to the runtime as pure functions—well, it's not impossible, in theory, but it's a Hard Problem, because those properties that a function presents to the runtime are also used by the compiler to optimize (or memoize, or fuse, or eliminate) code. You can't really declare something as a pure function without the runtime thinking it can do all sorts of things that result in the function being called less.

In theory, Haskell could have something like a "volatile" qualifier for functions, sort of like C's volatile qualifier for data, to tell the runtime that while the function has type X, all optimization opportunities that would normally apply to type X are off the table. Then you could have special debug functions of type X. It's just that getting this working would require a lot of re-work throughout the compiler, since the compiler mostly deduces optimization opportunities from the type (and marks something as having more optimization opportunities by proving that it has an extra applicable type), rather than just tracking some separate metadata that can be arbitrarily written to.


You say you're not a Haskeller, so I'm going to take what you say about the compiler internals with a grain of salt.

> And before you say "why not just break the runtime model", by e.g. having special debug functions that can do IO while presenting to the runtime as pure functions—well, it's not impossible, in theory, but it's a Hard Problem.

  import Debug.Trace

  square x = 
      trace "I'm a Hard Problem" (x * x)

  main =
      print (square 3)


What do you mean by, "Languages like Haskell are built to do mathematics in first-and-foremost"?

It's hard for me to find a reasonable interpretation of this sentence that is true. I'm not sure much serious mathematics is done in Haskell, beyond the usual arithmetic operations present in most programs, especially relative to other programming languages. It hasn't taken off in the numerical computing market, for example.

Historically, it also seems like Haskell was built as a research language.


He probably means the heavy focus on type theory. Haskell has been used in formal verification as a prototyping tool [1].

[1]: https://en.wikipedia.org/wiki/L4_microkernel_family#High_ass...


> I'm not sure much serious mathematics is done in Haskell, beyond the usual arithmetic operations present in most programs, especially relative to other programming languages. It hasn't taken off in the numerical computing market, for example.

All that stuff about types and categories and monads and so on is maths too. Arithmetic or numerical work is only a subset of that field. Lots of maths doesn’t even involve numbers!


So let me ask again: in what sense do people "do mathematics" in Haskell?

They don't really "do" category theory, for example; simply noting that something forms a category is mathematics only in the most trivial sense.


> All that stuff about types and categories and monads and so on is maths too.

I think the question he's asking is: What papers have been published in math journals where the work was done in Haskell? Which mathematicians use Haskell to do mathematics work?


Not much afaict. For mathematics, it's usually Coq, maybe Lean or Agda. For scientific computing it's Python or Julia.


To be fair, Agda is a fusion of Martin-Löf type theory and Haskell if my memory serves me.


You can use HaskellR (https://tweag.github.io/HaskellR/) for everything scientific that is being written in Python and Julia.


Everything in computers is math. Floating point numbers are math. Java generics with inheritance are math. It's not a point of distinction.


> In theory, Haskell could have something like a "volatile" qualifier for functions

  foo :: IO X
Haskell already has that; it just shoves the compiler re-work off onto the programmer.


The point of what I said is that in many cases, lifting a function into a monad means that the code that uses said function can no longer be used to prove what you set out to prove by writing it.


Yes, exactly: a function that requires a volatile (or IO) annotation is one for which (some of) the things you could use a non-volatile function to prove are not actually true. If you want to lie to the compiler[0], unsafePerformIO, seq, and other unsound primitives are available and documented. Contrariwise, if you want to not lie to the compiler, the primitives for that are also available and documented.

0: Which is a completely reasonable thing to do, and the basis of any language's standard library, the ability to recapitulate which in normal code separates good languages from Go-like special-cases-for-me-but-not-for-thee languages.


And in theory, Haskell could be exceptionally fast, or exceptionally correct, or exceptionally maintainable while in practice it is none of those things. It is, however, a relatively elegant language that explores interesting ideas and that some people enjoy, and that's great!


I honestly think that Haskell's combined speed/correctness/maintainability is exceptional. None is exceptional in isolation, but I can't think of any reasonably production-ready language that has such a great combination of all three.


I honestly think it's well behind some of the most popular mainstream languages in that combination, and that that's why we haven't seen a proliferation of Haskell even at companies where it is already actively used for some years alongside other languages.


> The Haskell and Scala communities have been the most hostile communities I have experienced.

This is not the first time I have seen this claim, but it's so much in contrast with my experience that I don't know what to think.

There seem to be cliques of Haskell developers in mostly non-communicating communities. I would imagine they self-select, but I have actually only encountered hostility on Stack Overflow, and the Haskell part of it, besides being business as usual for that site, is also a mostly useless knowledge base (to the point that I remove the site from searches to decrease the noise).


I often think of Haskell as one of the friendliest communities. Granted, not on SO, but on IRC.


I agree -- I find the Haskell IRC very friendly, even to the point of helping people do research and figure things out instead of just dropping answers. And there's a beginners IRC for all those stupid questions I have


The Haskell channel(s) on the FP Slack (https://fpchat-invite.herokuapp.com/) are also exceptionally welcoming and I've seen people go above and beyond helping people again and again.


Same here. We have adopted F# instead of Haskell. We get most of the benefits and none of the downsides. F# is a truly multi-platform environment (I know Haskell claims that too, but it fails in practice). If we want to write impure, imperative code without monads, we can. We can also use every single open-source project out there without learning a new operator DSL and style. There are some annoying things, but the benefits outweigh them. The development velocity is also great: we can implement prototypes in a few hours and put a new service into production in a matter of days. There are only a few things you need to set in your projects, as opposed to learning a ton of compiler flags. I never thought I would end up on a Microsoft platform, but here we are. It works and it is very convenient.


I went from Haskell to F# too, for more or less the same reasons you outline. Better and more consistent tooling. Significantly easier debugging. Not all the benefits of the Haskell type system, sure, but enough of them to make classes of errors vanish and programming a joy. A welcoming and friendly community looking to solve problems and build things is also a boon (the open-source community around Fable is one example).


>> A welcoming and friendly community

Oh yeah, I totally forgot that. There is a very senior guy on SO who literally helped fix all our non-trivial problems. I still need to investigate if we could move from Elm to Elmish (or more like SAFE).


F# was inspired more by Ocaml than Haskell I think? Ocaml also takes a more practical approach to allowing side effects etc.


Yes, and I think this is why it wins. Practical implications matter the most for end users. I like OCaml too. However, F# has way more libraries, and I think it is easier to get buy-in from all sorts of developers, at least this is my experience.


F# makes it possible to work with Unicode strings, as opposed to Ocaml, which is huge.


Did you personally have unfriendly arguments with Haskellers? I personally find them quite helpful and friendly on the freenode #haskell channel.

Although there have been heated public disagreements, I haven't seen people trying to destroy one another. They just have very strong disagreements. That doesn't bother me directly, since I'm not in those discussions. The most I can say is that the lack of unity means there is a shortage of "best practices" for how to do certain things, and less community focus on advancing the ecosystem.


Overall, most people are friendly in almost all communities, at least if you avoid community-defining topics. So most interactions with Haskellers have been friendly. But I had a number of relevant interactions that were fairly hostile, and I have passively observed interactions on various communication channels that wouldn't have flown in the Python community.


Regarding the lack of examples: the type system is strong enough that in many cases, if you write the types that you expect up front, there's only one or a few possible implementations.

A common workflow for Haskellers is to specify type signatures and interactively edit until the compiler is satisfied. Often, the code gives you correct results on the first try.

This means the type signatures aren't simply documentation, they also strongly direct your implementation. The problem from a marketing and adoption standpoint is that this isn't visible until you've hopped into a REPL or IDE and tried it yourself.
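
A toy illustration of the claim: with fully generic signatures, parametricity leaves almost no room to get the body wrong.

  -- The only total implementation of this signature is the identity:
  f :: a -> a
  f x = x

  -- Likewise, there is essentially one way to write this:
  pairUp :: a -> b -> (a, b)
  pairUp x y = (x, y)

  main :: IO ()
  main = print (f (pairUp 'a' True))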


> Regarding the lack of examples: the type system is strong enough that in many cases, if you write the types that you expect up front, there's only one or a few possible implementations.

This is a common counter-argument. It is a true statement. And, in the nicest possible way, it is also completely missing the point.

When you're playing with cute mathematical code and almost everything is written in terms of totally generic types, it's cute that you get free theorems and automatic derivations of some basic functions and all that stuff.

However, in the real world, 99% of the time we work with concrete types to get concrete work done, and there are going to be many possible implementations of a function of type Int -> Int, or MyDataStructure -> MyDataStructure -> MyDataStructure, or SomeWrapper SomeType -> (SomeType -> SomeOtherType) -> (SomeType -> SomeOtherType) -> SomeWrapper SomeOtherType. This remains true notwithstanding the possibility that some of that cute generic code was used to implement various small parts of the function you actually care about if you dig down far enough.

> This means the type signatures aren't simply documentation, they also strongly direct your implementation. The problem from a marketing and adoption standpoint is that this isn't visible until you've hopped into a REPL or IDE and tried it yourself.

I would argue that the problem from a marketing and adoption standpoint is that it still isn't visible even if you do do those things, unless you're interested in writing heavily mathematical generic code rather than completing the job your boss told you to do within a reasonable timeframe.


> This is a common counter-argument. It is a true statement. And, in the nicest possible way, it is also completely missing the point.

Yes, kind of.

> When you're playing with cute mathematical code and almost everything is written in terms of totally generic types, it's cute that you get free theorems and automatic derivations of some basic functions and all that stuff.

It's not only "cute" as you so reductively put it. There are real world benefits.

> However, in the real world, 99% of the time we work with concrete types to get concrete work done, and there are going to be many possible implementations of a function of type Int -> Int, or MyDataStructure -> MyDataStructure -> MyDataStructure, or SomeWrapper SomeType -> (SomeType -> SomeOtherType) -> (SomeType -> SomeOtherType) -> SomeWrapper SomeOtherType. This remains true notwithstanding the possibility that some of that cute generic code was used to implement various small parts of the function you actually care about if you dig down far enough.

That presumes a method of developing software that is arguably wrong: one that needlessly eschews understanding and correctness, mistakes that for the superior economic choice, and calls the result "real world programming".


> However, in the real world, 99% of the time we work with concrete types to get concrete work done, and there are going to be many possible implementations of a function of type Int -> Int, or MyDataStructure -> MyDataStructure -> MyDataStructure, or SomeWrapper SomeType -> (SomeType -> SomeOtherType) -> (SomeType -> SomeOtherType) -> SomeWrapper SomeOtherType.

But, on the gripping hand, using types unspecific enough for that to be the case is often a sign of underusing the type system. E.g., very few domains actually deal with bare numbers as domain values, and when you express concrete types specifically enough, there may be more than one possible correct function, but very often the one actually correct function will be much more obvious, and the mistakes possible by omitting a step when using a less robust set of types become impossible, because you effectively have statically verified dimensional analysis.
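
A toy sketch of what "specific enough" can look like (hypothetical names):

  newtype Metres  = Metres Double
  newtype Seconds = Seconds Double

  -- Swapping the arguments is now a type error, which is the
  -- "statically verified dimensional analysis" effect described above:
  speed :: Metres -> Seconds -> Double
  speed (Metres d) (Seconds t) = d / t

  main :: IO ()
  main = print (speed (Metres 100) (Seconds 9.58))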


As a general rule, I am all for defining specific aliases of fundamental types, type-safely if possible. I don't think this changes my argument materially, though. Real world applications of programming always come down to processing data of known, concrete types at some point. And at that point, all the power you get from saying things like there's only one possible implementation of a function f :: a -> a immediately go out of the window, because you don't have a value of type a, you have a value of type SomeConcreteTypeThatIKnowThingsAbout.

In my experience, though of course yours may differ, there are very often still many plausibly correct implementations of functions working with those types and many ways to combine them that might make sense and that would satisfy the type checker. The idea that specifying the types alone is a viable substitute for proper documentation of the real meaning of the functions just falls apart at that point.


> And at that point, all the power you get from saying things like there's only one possible implementation of a function f :: a -> a immediately go out of the window, because you don't have a value of type a, you have a value of type SomeConcreteTypeThatIKnowThingsAbout.

Not if your logic is primarily in terms of domain types, with invariants encoded in those domain types as necessary and economical.

> And at that point, all the power you get from saying things like there's only one possible implementation of a function f :: a -> a immediately go out of the window, because you don't have a value of type a, you have a value of type SomeConcreteTypeThatIKnowThingsAbout.

Can you give an example? I can only see this being the case if your logic is done at the concrete type level.


I wonder if we're talking at cross-purposes here. While it is certainly useful to write abstractions of common algorithms and access patterns for data structures generically, any useful program ultimately has to deal with data of known types. Those might come from the domain model, or they might be implementation tools like file handles or database queries, but they are something specific, some concrete type that is relevant to the problem at hand.


> if you write the types that you expect up front, there's only one or a few possible implementations.

That does not square with the constant complaints about insufficient documentation.

It also does not matter if there's only one or two valid ways to call a function if the programmer can't figure out how to do it.


Yeah you exemplify my point.

None of what you wrote contradicts that examples could be given in the docs.

But defending the lack of documentation kind of shows why Haskell will not see much adoption.

Also, Rust for example is much richer when it comes to examples in docs, and Rust people could equally argue not to provide docs because of type signatures. They just decide to provide documentation, and they succeed in attracting people.


Same for me - I used Haskell and Scala as my primary two languages for over 6 years. In retrospect I question whether many of the stated benefits of Haskell are even benefits at all.

In terms of reducing development time or cost, reducing defects, improving design, or really improving any business outcomes whatsoever, Haskell showed no success, only failure.

How could that be, in contrast to "ugly" and "complex" languages like Java and C++, or free-for-all languages like Python and Node.js?

The reason is simple:

1. imperative structure is actually clearer and easier to understand, easily amenable to all the upsides of function-oriented programming with far fewer downsides.

2. removing whole categories of bugs through compilation and type system patterns is a hugely overrated waste of time. Lightweight unit tests get you 99% of the way there with far less rigid design commitment

3. safety and correctness are resources just like memory or runtime complexity, not absolutes. Being able to tradeoff safety against other concerns, at a moment’s notice, deep in the guts of a large system, is one of the most critical requirements of business software

4. Premature abstraction costs you way more in pure functional languages than most other imperative or OO situations.


I agree on all points, which pains me, because I like the idea of functional programming and find it pretty. At least for smaller programs.


> And as a last remark, the combinatorial complexity of compiler flags is a problem and needs to be fixed. But the ecosystem is deadlocked there: no compromise can be found on what should be in the language and what shouldn't. And that points to a deeper problem in the Haskell design process.

Are you referring to language extensions? This is sooooo off-base if so.

I don't see how people can bemoan Haskell's complexity but love a language like Kotlin, with its hodgepodge of function call semantics. Talk about complexity without any sane grounding. It's so hacked together. At least with Haskell, everything is very simple in isolation and composes for the most part.

--

Also, re: examples... I'd say Haskell libraries have better examples than the average Go library, for instance. Many have entire modules that exist solely to put examples & tutorials in the haddocks themselves.


"I don't see how people can bemoan Haskell's complexity but love a language like Kotlin which has its hodgepodge of function call semantics"

One of a few possible explanations: they are mobile app devs (myself included) who simply want to get things done instead of hacking the toolchain.

Sure, I'd love to be able to write Android apps in Haskell. What's the easiest way to do it?

With Kotlin, it's already nicely packaged in Android Studio. Very convenient.



I figure in the next 5 years I'll hack the toolchain once and for all. Probably for game distribution. It's a fixed cost is all. To each their own.

I do appreciate the tooling around Kotlin. Definitely wish we had more time and money available in Haskell to match it.

Still doesn't explain enterprise use of Kotlin on the backend lol.


So what kind of bugs do you commonly experience then in Haskell programs?

From my limited experience I agree that debugging a bug that passes compilation is more difficult than on other platforms.


> Haskell and Scala community have been the most hostile communities I have experienced

How come I never seem to have this problem? I mean, one Scala dick (who wasn't actually so bad after a while), but that's it.


I haven't either. Granted, I've only asked for help on Slack and Reddit, but I've always had many people explain the solution in good detail to me.


Although I think the post is very clear describing the reasons why Haskell isn't attractive in business, this part rubbed me the wrong way:

> Thus, instead of our discourse being about semantics, engineering quality, and implementation, we instead discuss our tools in terms of social factors and emotions (i.e. it “feels readable”, “feels usable”, “feels fast”).

Thinking about readability or usability as emotions and social factors, rather than as first-level concerns, is a non-starter to me, and I find it extremely off-putting that the author seems to lump those qualities into the marketing "group", as if they were not real properties of a language.

You can design a drill to be the most scientifically correct at using the power it's supplied efficiently, but if the design hurts the user's hands and requires months of training to operate, any worker will throw it in the trash, because they can drill things just fine with a regular tool.

(I'm not necessarily saying that Haskell is that drill, just saying that usability should never be treated with condescension when you're designing tools).


The thing is that, at least in my experience, "feels readable" is indeed emotional territory. More specifically people relate favourably to things they are comfortable with and most programmers are comfortable with ALGOL/C syntax. Show them something unfamiliar, eg. Haskell, and they will say it's weird and unreadable. That statement is usually not based on careful assessment, it's a gut reaction.


Everyone got a ton of identity, equality, and proof-based stuff in math classes. Very few programmers, even the non-college grads and those who started young, wrote their Hello World without having seen some of that first.

Many of us found imperative (and other non-mathy) programming styles immediately easier to understand, regardless. Me, I have to translate mathematical statements into "programmy" style to have a hope of understanding them ("OK, so what does this term/operation 'do' in isolation?") and keep a crib sheet handy for WTF all the single-character bits (so, usually all of it) mean.

[EDIT] my point being that the preference doesn't seem especially "emotional" to me.


> Everyone got a ton of identity, equality, and proof-based stuff in math classes.

Sure, and some people like it, some hate it (according to Western popular culture, most people hate it). Also, even if someone loves doing school algebra, if no one tells them that you can write code that way, they'll never know. If you only expose them to imperative Hello World code, then of course they'll relate to that. Conditioning, familiarity and experience matter a lot.

> Many of us found imperative ... programming styles immediately easier

Immediately as in: you'd been shown both an imperative and a declarative Hello World and related to the former? Or had you only been shown the former? See my earlier point.

> I have to translate mathematical statements into "programmy" style

Same point as above. I did that as well for ages in the past, now I have to do the opposite.

> my point being, that preference doesn't seem especially "emotional" to me.

Maybe emotional is indeed the wrong word. But it's definitely not logical either, and I'd also say it's not inherent, i.e. people can change (I know I did), but at that point we'll dive into nature vs. nurture and that never ends :)


I think what they're getting at is that people are not given the right frameworks to reason about a tool's usability: someone might say a tool "feels readable" without knowing whether it's actually easier for them to parse. Similarly for other claims: someone might say a tool "feels fast" without actually knowing if it's faster for their problem domain than another viable tool. An analogy from the art world: drawing lines that imply motion behind a character might make them "feel faster", but are they actually moving at all?


I took his point to be that things like "feels readable" and "feels fast" have to do with feelings rather than actual quantifiable things that could reasonably be compared between languages.

So it isn't that those things aren't important; just that they are subjective.


I'd argue that ease of use and readability are perfectly quantifiable - there are thousands of people whose job is precisely optimising usability for many products. Sure, dealing with people is harder and less clear-cut than measuring cycles in a processor - there's a lot of variability, context based on previous experience, etc. But that's no reason to ignore the human factor of programming: gather some metrics/statistics, then see where you can improve.

Those features, rather than being subjective, are contextual, and I feel that behind the author's phrase might lie the usual complaint by functional programmers that functional languages only feel foreign because people are taught other styles during their learning process.

As an explanation, that might be partly true, but it being true doesn't save you from the fact that if you want people to use your tool, you have to make it easily usable - usable for real-world people rather than for spherical cows.


> I'd argue that ease of use and readability are perfectly quantifiable

Do we have any objective metrics? The only thing I can think of is all the (inconclusive) studies about error density. Unfortunately error density is a very far cry from measuring the overall quality of a software product, or the mental effort that was required to produce it, or the mental effort required to maintain it.

> there are thousands of people whose job is precisely optimising usability for many products

Yeah but programming languages are more abstract; they're tools of thought (just like mathematical objects), and optimizing their use is nothing like observing a user press the buttons on a machine to get themselves a cup of coffee.

Controlled studies are very difficult to run, for obvious reasons. (Unlike for the coffee machine)


And yet, effective marketing is less about the quantifiable and more about the subjective feelings that your product will ultimately provide.


haskell isn't a drill it's a root canal


Well, if the dental work needs a conceptual makeover, then see a dentist.

I'd contend that the immutability of containers, and the reasons Docker is such a win, might be bringing Haskell to its moment.

For me, the hardest unlearning piece isn't the syntax.

Rather, the impact of the purity on the execution model is the stumbling block.

Monads seem arcane because laziness + purity = statelessness, which never really had tangible meaning until the idea that a Docker image is also stateless came along.

Maybe some cruder, crayon-level examples can be devised to help us slow folk get there.
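In that spirit, one crayon-level sketch of the laziness half (my own example, using Debug.Trace from base):

    import Debug.Trace (trace)

    x :: Int
    x = trace "evaluating x" (1 + 2)

    main :: IO ()
    main = do
      let y = x + x
      putStrLn "nothing evaluated yet"  -- defining y forced nothing
      print y  -- demand forces x here; the trace fires once, thanks to sharing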


docker is a win on desktop because I can run two database instances at once on my laptop + understand what's happening

+ a win for cloud when combined with kube to understand exactly what's being exposed

haskell isn't simple or understandable

it's flexible but I have no idea what's happening where

effects are getting folded in from all over the codebase

all the type abstractions are for injecting code from somewhere else


At least you can be sure to have a pure, monadic, side-effect free, perfect root canal.


We dance around this issue in the comments to every Haskell story, but the reason Haskell is hard to market is that it's bad. It's a research language being shoehorned into production by a few people who really love it. Some details of this have been given in this thread; let me suggest the following threads for more:

1) https://www.reddit.com/r/ocaml/comments/3ifwe9/what_are_ocam... 2) https://www.reddit.com/r/ocaml/comments/e7g4nb/haskell_vs_oc...

Yes, the Haskell community hates that guy and considers him a troll. But he does functional programming professionally as part of a private consultancy and wrote a book on OCaml. If anyone's equipped to understand what's wrong with Haskell, it's him.

Haskell has had 30 years to get its act together. Any benefits it has are drowned out by a sea of buggy tooling and accidental complexity (monads, etc.).

Ask yourself this: if there are literally billions of dollars in industry riding on writing efficient and correct software, and Haskell is such an obvious productivity win, why does it have a market share that rounds to zero?

Time to move on.


> if there are literally billions of dollars in industry riding on writing efficient and correct software, and Haskell is such an obvious productivity win, why does it have a market share that rounds to zero?

The young economist looks down and sees a $20 bill on the street and says, “Hey, look a twenty-dollar bill!”

Without even looking, his older and wiser colleague replies, “Nonsense. If there had been a twenty-dollar bill lying on the street, someone would have already picked it up by now.”

Industry makes collective bad engineering decisions all the time, usually driven by secondary factors like institutional momentum or hiring constraints. Did you read the article? It talked about a bunch of things like that.

If you equate market cap with quality, then Java is probably the best language of all time.

> wrote a book on OCaml

I find it hard to believe that anyone who’s really used both prefers OCaml over Haskell. OCaml is nice compared to most crap, but as someone who’s spent years working with both, Haskell wins almost every time.

It’s interesting, actually - I don’t know anyone who’s used Haskell and doesn’t like it. Almost everyone who vocally complains about it hasn’t really used it.

Reading that guy’s reddit posts, I have to conclude he’s some kind of troll - most of the things he criticizes Haskell for are way worse in OCaml, or just completely disconnected from reality.


> I find it hard to believe that anyone who’s really used both prefers OCaml over Haskell.

I've actually used Haskell, OCaml and PureScript for several years and across a lot of LOC, and I actually like OCaml better. Reasons:

- The module system -- a very intuitive way to structure code

- Poly variants

- Strictness -- predictable performance and easy debugging

- Allowing side effects

- Private types -- better than newtypes imo since you can upcast to its structural counterpart

- Really good type inference -- you never have to write type signatures. Haskell and PureScript type inference is ok-ish and sometimes craps out, which is part of why type signatures for each function are recommended (a toy example follows this list).

- Stupidly fast compiler -- yes, it matters; try contributing to the Idris codebase. It's like trying to contribute to the Linux kernel: you'll wait 30+ minutes for it to compile.

- Very fast programs

- An actual package manager (stack uninstall?)
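(Re the inference point above: here's a toy Haskell case where you can't get away without an annotation. Strictly it's typeclass ambiguity rather than inference proper, but it's the everyday reason signatures get recommended. My example:)

    -- roundTrip s = show (read s)       -- rejected: ambiguous type variable
    roundTrip :: String -> String
    roundTrip s = show (read s :: Int)   -- the annotation picks the instances

    main :: IO ()
    main = putStrLn (roundTrip "42")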

It is an overall better experience for me. And the language features that are coming down the pipeline are looking great:

- Modular implicits - a sounder ad hoc polymorphism system than typeclasses. Typeclasses can't handle simple situations like half of the Haskell ecosystem using one Filterable typeclass and the other half using a different Filterable.

- Algebraic effects - a strongly typed system for writing effects that will be part of multicore -- delimited continuations / conditions and restarts

These days I'm more interested in Erlang/Elixir though. I just don't think static typing is all that, after having spent so much time with Haskell/PureScript/Idris/OCaml. There are really nice benefits you get with dynamic typing that you don't get with static, and the problems it leaves behind can be handled in ways that won't be noticeably worse in most business applications: a little coding discipline, good testing habits, and good fault tolerance built into the language.

I do like Haskell though, it's definitely really fun to program in. It's just not something I'd use for anything. If I really needed something with strong static typing, I'd go with either OCaml or Idris.


Idris barely made it out of experimental status and was dog-slow. Not sure when you'd choose that over Haskell for production software.


I probably wouldn't until/unless Idris 2 gained some steam; it would probably be OCaml.

I was thinking more along the lines of dependent types, because the kinds of problems where you'd really, really need strong static types overlap with the need for dependent types, which Haskell has some support for (and OCaml too, but less so).

But if I needed to prove something I'd probably do it in Coq, since it interops with OCaml fairly well.


Thanks for the detailed reply.

- The module system -- a very intuitive way to structure code

I see the module system used two ways in production - one is for trivial code-sharing, like to generate Map submodules. The other is for completely incomprehensible nonsense that makes it nearly impossible to find the actual definition of the thing you want. I find typeclasses easier and more ergonomic to use for both the trivial cases (like maps) and complex cases (like json codec generation, for which OCaml still uses the rough equivalent of the basically-deprecated Template Haskell).

- Poly variants

I agree, this is nice.

- Strictness -- predictable performance and easy debugging

I’ve always found this complaint overrated. Sure, it’s slightly annoying sometimes, but surely worth the language-wide discipline that comes with it.

- Allowing side effects

This substantially lowers the quality of production OCaml code compared to Haskell. Tons more spaghetti code, tons of global variables, etc etc.

Obviously purity is kind of a religious argument at this point, but in practice I have observed that it’s a liability.

- Private types -- better than newtypes imo since you can upcast to it's structural counterpart

I’ve never seen this in production. I had to look it up. Indeed, the first article is a blog post along the lines of “why is this useful”?

Haskell also supports casting newtypes via Coercible.

- Really good type inference -- you never have to write type signatures. Haskell and Purescript type inference is ok-ish and sometimes craps out, which is part of why type signatures for each function is recommended.

Haskell type declarations are usually only required with extensions. All OCaml style guides recommend writing type declarations in mli files anyway. The OCaml type system can occasionally infer things more easily because it’s so much weaker - cf the value restriction, no HKTs, etc.

- Stupidly fast compiler -- yes it matters, try contributing to the Idris codebase. It's like trying to contribute to the linux kernel, you'll wait 30+ minutes for it to compile.

Based on my coworkers who do performance optimization on OCaml code, it’s probably because the compiler isn’t optimizing much. But I don’t know much about that first hand. Also, have you tried GHCID or similar? Makes Haskell compilation very fast while developing.

- Very fast programs

Not an advantage over Haskell. They fare similarly in contrived competitions, and I have some suspicions that it’s easier to make Haskell code faster in practice. Some of the shit we do for speed in OCaml is very ugly. OCaml also suffers from very poor GC latencies compared to Haskell. I’ve seen multiple-minute(!) GC pauses in OCaml programs with a few dozen gigs of allocated memory. Never seen anything even close to that with Haskell.

- An actual package manager (stack uninstall?)

I guess I don’t appreciate the benefit of this over the stack model of tightly versioned automatic dependency resolution. Can you expand on some benefits?

- Modular implicits - a more sound ad hoc polymorphism system than typeclasses. Typeclasses can't handle simple things like if half of the Haskell ecosystem uses one Filterable typeclass and the other half uses a different Filterable.

That is by design and seems good to me. Why the hell would I want there to be two different Filterable typeclasses?

The ergonomics of MI remain to be seen but I suspect they will still be more annoying than compiler-resolved typeclasses.

- Algebraic effects - a strongly typed system for writing effects that will be part of multicore -- delimited continuations / conditions and restarts

This is basically vaporware at the moment. I agree it sounds nice, but I have no reason to believe we’ll see it in production in the next decade, and if we do it will probably look a lot more constrained than the idealized model people talk about.


> If you equate market cap with quality, then Java is probably the best language of all time.

Haskell proponents vastly underestimate the importance of stability to industrial users. Java provides some very strict promises about backwards compatibility that make it very attractive if you’re going to write a service and maintain it for decades.

Also, unlike Haskell, I can hire a huge number of Java devs, at a reasonable rate. My company alone would probably double industrial Haskell usage if we were to use it, making hiring a serious challenge.

So yes, for metrics that businesses care about, Java might be the best language around.


> if there are literally billions of dollars in industry riding on writing efficient and correct software, and Haskell is such an obvious productivity win, why does it have a market share that rounds to zero?

Well, one reason we might get stuck on a “suboptimal” language is network effects outside of the criteria we use to compare languages: what language does an organization already have experience with, does the (only) current compiler support the target hardware and system, it's too late now to write the Linux kernel in Haskell, etc.

However, I agree with you. Despite the time I've invested in learning Haskell, I've personally decided to spend my time developing in other languages. If Haskell is the magic wand right there in front of us that will give us programs without flaws, how come almost nobody has picked it up and shown us working programs that took less development effort and had fewer flaws?

We could compare John Wiegley's ledger (C++) to the similar hledger (Haskell), but hledger isn't the same program as ledger: it has different features, I have no idea how long it took to write, how many flaws it has versus the original, how different the developer skills were, how much time and how many defects were saved by it being a redesign, etc. And these are small single-developer efforts (now with numerous contributors). From everything I can gather, they are both excellent. So in this case there is no compelling advantage demonstrated for Haskell, and this supports your point.


> We could compare John Wiegley’s ledger (C++) to the similar hledger (Haskell)

I'm confused. Do you know he is a prolific Haskeller and maintains many libs?


I don't understand how that's relevant to the point.


I'm confused. Do you know that he wrote ledger in C++?


Sorry, I thought you were making a point I did not get citing him in comparison with a haskell project. Apologies, I misunderstood.


No worries, I probably wasn't very clear. I wish that someone like John Wiegley would give us his thoughts on Haskell/Lisp/C++. I have a lot of respect for him; he's made big contributions to Emacs and wrote a Common Lisp version of ledger in addition to his C++ version.


Following links in your 1), I liked https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf 'The Genuine Sieve of Eratosthenes' by Melissa E. O’Neill. Somewhat damning and to your point.

Are there any theoretical barriers to writing algorithms in a pure and lazy language with the same space/time complexity as a side-effectful and strict language? I suppose quicksort (also mentioned) is an obvious candidate for study in this regard.
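For readers who haven't seen it, the code the paper dissects is essentially this famous one-liner, which is really trial division rather than a genuine sieve:

    primes :: [Int]
    primes = sieve [2..]
      where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    main :: IO ()
    main = print (take 10 primes)  -- [2,3,5,7,11,13,17,19,23,29]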


Thanks for linking this. I just read it, and that is some DAMN fine writing.

Sometimes when I complain about jargon and bad writing in scientific papers, people argue back that the author has to be precise. This paper makes it clear that these are orthogonal issues. Writing is a skill, too.

The author is being extremely technical and rigorous, but she is also CLEAR.

Quite the badass.


There's also hash tables. IIRC, Haskell didn't have a good hash table implementation until relatively recently because people religiously avoided mutable data structures. (And the implementation I'm thinking of is not purely functional.)


Haskell has had good HAMTs for quite some time. There isn’t really any benefit to using a mutable hash table over that. In fact, it would probably be slower given the way copying GC works with mutable values.
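(For context, a minimal sketch of the HAMT in question, Data.HashMap.Strict from the unordered-containers package:)

    import qualified Data.HashMap.Strict as HM

    main :: IO ()
    main = do
      let m = HM.fromList [("alice", 1), ("bob", 2)] :: HM.HashMap String Int
      print (HM.lookup "alice" m)    -- Just 1
      print (HM.insert "carol" 3 m)  -- persistent: m itself is unchanged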


Yes, there is a benefit: correctly implemented hash tables are faster. Haskell's GC may prevent a good implementation, but that's a problem with Haskell, not with hash tables.


Writing the fastest possible implementation of something is rarely a valid business goal.

The ideal is the midpoint on a Venn diagram of 1) fast enough for our purposes, 2) logically robust, 3) cheap to implement.

Besides, if something absolutely must be the fastest implementation possible and this can't be done in Haskell, then there's no reason why one couldn't drop to the FFI at that point. You could even use Template Haskell and write inline C. It works.
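A minimal sketch along the lines of the inline-c package's own introductory example (hedging: check its docs for the exact build setup):

    {-# LANGUAGE QuasiQuotes     #-}
    {-# LANGUAGE TemplateHaskell #-}

    import qualified Language.C.Inline as C

    C.include "<math.h>"

    main :: IO ()
    main = do
      x <- [C.exp| double { cos(1) } |]  -- runs C code, yields a CDouble
      print x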


A marginally faster mutable hash table is not really useful for much. If that 30% speedup or whatever is going to make or break your application, a single hash table was the wrong choice anyway.

If this is the kind of thing that keeps you up at night, you should just be using C.


FTI, HAMT = hash array mapped trie


FTI? FYI? At any rate, thanks for the clarification.


"For their information". It's not for centimeter's information, he/she clearly already knows. It's not for mine either; I discovered the answer by DuckDuckGoing!


"Thy"?


Sacrebleu, how can this be? The Haskell one-liner compiles so it must be the genuine sieve of Eratosthenes.


I read those two links. If I ignore some of the points that were valid 10 years ago but not anymore (e.g., there's now stack), and if I ignore some of the personal anecdotes, I come away feeling like much of what is said is not necessarily wrong but is nevertheless an exceptionally negative take. I think you can make almost any tool sound broken and unusable if you use nasty enough and strong enough language about a few examples.

Also, there really is very little substance behind the actual language criticism. It's hard to make a blanket statement that typeclasses are "overkill"; they seem to be showing up all over the place in new languages. Similarly, type annotations are incredibly useful and are also showing up all over the place these days outside of Haskell-land. The point on non-strict evaluation is by far the strongest and most legitimate.


jdh30, the author of both comments, is a massive OCaml fanboy and Haskell hater. I've seen him try to take Haskell performance numbers and interpret them using the wrong units to make them look worse. I'd take anything that he says about Haskell with a big, big grain of salt!


The comments of his I've just read are, at least, all heavily referenced.

After spending nearly a decade thinking about Haskell, everything here rings very true.

It's a language whose sales pitch begins by assuming you are using the language: here are all these solutions to problems you now have.

Laziness is the wrong default; it's a catastrophe for writing programs. It requires the whole category-theoretic infrastructure of Haskell to solve, and then imparts insurmountable complexity on writing anything real-world.


> Haskell is best at solving problems that Haskell invented and other languages do not have.

That is the most succinct explanation I've ever seen of Haskell.


With respect to availability of talent pool: my alma mater used to teach Haskell as the introductory programming language, and from what I’ve heard from people who were around then it was great for faculty and students. However, a very large corporate donor pressured the university to switch to Java in the early 2000s, because that’s what they used internally and they wanted a large local supply of pre-trained fungible labor. Now only a small fraction of the top students who study advanced topics in PL or compilers or what-have-you will come out of the university knowing how to use Haskell. I think this is deeply unfortunate - if society could quantum tunnel through to the point where Haskell was the safe, readily hireable corporate choice, the state of software (especially corporate software) would be a lot better off.


CS programs for undergraduate majors have a real dilemma: what programming language should they teach their students? My own kids are studying CS in college, so I've seen the curricula of a number of universities, and I've spoken to the faculty responsible for the language choices at some top programs. To me, there isn't an obvious best programming language to study as an undergraduate.

Some schools have a bifurcated curriculum, like Indiana University, which has a program in Informatics for those planning to go into a career that requires programming and software engineering skills, and a program in CS for those perhaps planning on going to grad school. There, learning Haskell might make sense in the CS program, but does it make sense for the other students who want to get out and get a job?

My own daughter studied at a school where her first CS class was taught in Scheme (Racket). It, to me, looked like an excellent introduction to programming for someone majoring in CS. They used a book I approved of, How to Design Programs/2ed by Felleisen et al. Would Haskell have been better? Maybe, but I predict that more kids would have serious problems with a language like Haskell that is burdened with more challenging concepts and a more challenging ecosystem. Perhaps Lisp/Scheme is a gentler introduction to applicative programming than Haskell. However, most schools start with Python or Java.

I understand the arguments for a strongly typed functional programming language like Haskell (my own university research was in program verification many years ago).

Early on in CS programs, students usually take one or two classes on data structures and algorithms. Data structures, in my opinion, need to be taught in C. Haskell, Lisp/Scheme, Python, Java, and even modern C++ are so high-level that teaching data structures in them ends up with students not learning the proper use of those languages' standard libraries.

At some point undergraduates should have an introduction to systems programming, the best text in this area is Computer Systems: A Programmer's Perspective/2ed by Randal E. Bryant and David R. O'Hallaron. This is a great book for undergraduate CS majors. This book requires C programming. So to me, it makes sense to teach data structures in C since students will need to understand it for their first exposure to systems programming.

Where should Python fit in the curriculum? It's arguably the most versatile programming language. What about JavaScript, and when should CS majors be exposed to it? Modern Java is a big language, so one semester of use isn't going to be enough, and it's too important to ignore. What about C++? It takes years of working with it to become a good (if not expert) C++ programmer. Should it be a part of the curriculum?

So my own recommendations for an undergrad CS program's language choices would be:

* Python -- for intro to programming, a course on algorithms, and a machine learning course

* C -- for an early class on data structures and a later class on systems programming

* Java -- software engineering, and a compiler course

* Lisp -- programming languages smorgasbord course, and an AI course

* Assembly -- hardware architecture/digital design course

* Javascript -- taught as part of a web-applications class

* Haskell or OCaml (or maybe Rust) -- intro to functional programming and program verification

What do HN readers think about this list? I'm glad I don't have to make the real decisions; there are just too many languages that students need to learn in just a few years.

As far as my own background goes, I have spent a great deal of time pursuing three different degrees in CS from three different universities. I've been the instructor for university courses for CS majors twice and have taught perhaps 40 week-long classes to professional programmers within a Fortune 500 company. I've also interviewed and hired many senior developers for my own company.


As someone who programs only as an amateur I honestly find baffling the whole idea that a university would teach students any programming language. Shouldn’t they be learning concepts, algorithms, data structures? Isn’t the language just a tool? I’d think mere technicians would learn languages. University-educated scientists should learn the underlying theory with which they can learn ANY language.

I am a lawyer. I didn’t learn Westlaw or Lexis in law school. I mean yes, I did use them and was trained in them. But they were just tools. But no one would say “what computer research system should we teach our students?” And good thing—the ones I used in law school are now obsolete!

Even as a lawyer I can teach myself programming languages. Man I’d be disappointed if I went into a university CS course and they’re debating something as low level as what language to teach me, rather than teaching me how to learn any language and how to design new ones.


It's sort of inescapable, because there's some baseline level of tooling that needs to be supplied to the students so that they can learn the actual subject matter of the course.

As a practical example, in my graduate program, there were some courses where the professors handled a lot of those baseline details, and courses where the professors gave a lot of leeway to the students. My observation was that the latter courses couldn't go nearly as deep into the official topic of the course. Students would spend so much time managing their programming environments that it became a serious distraction.

Also, in reference to your question, "Isn't the language just a tool?" - no, it isn't just a tool. It's also a language. For a professional software engineer, the programming language isn't just a way of telling a computer what to do; it's also the primary, and most authoritative, medium by which you communicate complex technical information to your colleagues. It's perhaps comparable to the divide between informal language and legal language - one is useful, especially for communicating to people with limited expertise in the field, but the other is where the rubber actually meets the road.


“As some who legalizes only as an amateur I honestly find baffling the whole idea that a law school would teach students any particular nation or jurisdiction’s legal code. Shouldn’t they be learning concepts, rhetoric, and logic? Isn’t a case law just a tool? I think mere technicians would learn legal precedent. University-educated lawyers should learn the underlying theory with which they can learn ANY case law.

I am a software engineer. I didn’t learn RequireJS or CommonJS in school. I mean yes, I did use them and was trained in them. But they were just tools. But no one would say ‘what JS module system should we teach our students?’ And good thing—the ones I used in school are now obsolete!

Even as a software engineer I can read court opinions. Man I’d be disappointed if I went into a law school and they’re debating something as low level as what jurisdiction’s laws and court decisions to teach me, rather than teaching me how to interpret any law or legal argument and how to design new ones.”

I hear what you’re saying, software engineer, but avoiding any particular set of case law would leave these lawyers extremely unprepared for professional practice—not just because they will likely work in one of those legal codes otherwise covered by the school, but because those codes and the arguments that shaped them constitute an immense base of empirical knowledge about legal rhetoric, logic, and concepts that simply cannot be replicated in a vacuum. Accordingly, those excellent judges and legislators who do the novel work of lawmaking and legal argument are _not_ reduced to technicians by their knowledge of case law. In fact, I’d say it’s the opposite: it is impossible to have a good grasp of the theoretical underpinnings without also having extreme facility in one legal code or another as a ‘native tongue’.


The most natural parallel is something you know implicitly: your native (or highly advanced second) language. It's not taught at law school, because it's assumed. Obviously you can teach law in any language, but you need to have a common base to apply and write down concepts, and that's the programming language to the university course.


>Isn’t the language just a tool?

Yes. But to learn the data structures and algorithms, you have to practice coding them. What language do you code them in to teach them? After a while most code looks the same.


It's an open secret in the IT industry that if you are smart enough to be a good programmer and you want to be well-paid you should become a lawyer.


Surely lawyers don't make so much that they have a better quality of life than a programmer earning good money from a wealthy market and living in a country with a relatively low cost of living? I mean, programmers can live and work anywhere. Lawyers can't.


When I was looking into it back in the day the salary spreads for programmers and lawyers didn't overlap very much. A good programmer could get ~$250k max while a good lawyer starts at around ~$300k. This was also before remote work was so much a thing, eh?

> I mean, programmers can live and work anywhere.

Yes and no. YMMV


If your alma mater is what I'm thinking of (UT Austin), I can maybe share some insight on the current situation. The introductory courses are all still taught in Java and the lower-division systems courses are taught in C. From what I know about the curriculum, it is entirely possible for a student to go their entire undergraduate career without expanding beyond these two languages (maybe they'll need Python for a class or two). This past semester I took our PL course, which was taught in Haskell. I thought it was an interesting course and it strengthened my base in FP (my high school taught CS courses in Racket, which is where I started my foundation). However, this was not a large class, and from discussions with peers there is generally little interest in learning "esoteric" languages such as Haskell.


Was it University of Texas at Austin? I recall an article by Dijkstra complaining of this very thing.


"To the members of the Budget Council" --Dijkstra

https://www.cs.utexas.edu/users/EWD/transcriptions/OtherDocs...


I don’t get it. What does removing Haskell from the into course have to do with the use of Haskell in the PL or compilers class?


If it's anything like what happened at my alma mater, they may well have switched to Java for everything.

It's really a disservice to students. Developing competence in a variety of programming paradigms is a career-expanding experience. It makes picking up a new language - and, by extension, a new technology stack - a quick and easy thing to do, because there will be very few ways of doing things that you haven't seen before. Which, in turn, makes it a lot more feasible to pivot your career or act quickly when an opportunity arrives.

A bunch of students who know only Java, on the other hand, is exactly what big corporate employers want. For employers, it maximizes the interchangeability of employees. For employees, it minimizes the interchangeability of employers.


> at my alma mater

Where would that be? Why is everyone here avoiding being specific? If you are not specific, why should we believe what you say?


Because doing so would not be decorous, and there's no practical reason to do so.

The most practical way someone could act on the opinion I expressed, if they find it useful, is not to sit and laboriously compile a list of individual institutions. It's to take that opinion under advisement in the course of the due diligence they should already be doing in their college search.

As for the last question, if I had any fiduciary responsibility, I would advise you not to believe anything posted on an Internet message board, because the commenters are largely just a bunch of anonymous yahoos who like to troll high school juniors and seniors as a sort of perverse hobby.


Probably for the same reason their username is "mumblemumble" rather than "Ted Anderson from New York". This isn't Facebook; don't ask people to dox themselves.

If you want people to name and shame, ask for a list of offending institutions, not individual examples. (Also, fact-check that list, rather than blindly accepting accusations made by internet randoms, but that should go without saying.)


I didn't ask about their real name, I asked about the college they were talking about, which hardly exposes them to doxing.


The more information attached to your account, the closer it gets to revealing your true identity.


If your first program is a function, the whole world becomes a function.

If your first program is an object, the whole world becomes an object.

There are always 2 ways to skin a cat with a computer. The more practical computer user could be satisfied with only using their first.


Most CS students don’t take those classes. They’ll take data structures, algorithms, game design, databases, etc. which are mostly in Java or C++.


Most CS students certainly do take compiler classes, at least in any decent CS program.


> from what I’ve heard

No source specified - ignore.

> if society could quantum tunnel

Oh dear.


One of the best ways to get adoption is to have people take initial risks and build a serious real-world product, with a serious team/company backing it.

Erlang got a big boost when it came out that WhatsApp had scaled it to a billion users with a tiny team and only a few servers.

React/React Native got boosted by Instagram adopting it to build their production mobile app.

Rails had 37signals/Basecamp and Github/Shopify/early Twitter/etc.

Haskell needs some more famous commercial "killer-app" examples for real businesses and real teams. And yes I'm very familiar with the current selection of popular Haskell examples.

Currently I only ever hear of Haskell (and OCaml, for that matter) being used by engineers for backend engineery things (parsing programming languages, fighting spam, tooling), mostly for OSS tools.

On a positive note, PureScript is trying to position itself as a practical Haskell and is having a bit of success. One helpful factor is that they have a nice app/company leading the way with a production app at Lumi [1], whose founder plays a key role on the language/ecosystem side as well: https://www.lumi.dev/blog/purescript-and-haskell-at-lumi

This is what Haskell needs more of - and, critically, a company promoting/blogging about its progress using Haskell as well as contributing back to the community.

Another big one is simply a sort of 'starter kit' that guides you through best practices for architecting a serious Haskell app (i.e., using STM, free, etc.). React has plenty of starter kits. For Rust CLI apps there is a great starter kit: https://github.com/iqlusioninc/abscissa

I'd love to see a similar starting-point micro-framework for starting a new Haskell app that makes a bunch of important decisions for you and guides the structure of the project.
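To illustrate the kind of building block such a kit might standardize on, here's a minimal STM sketch: a shared counter bumped atomically from two threads. The names and the crude thread-join are mine, not from any particular kit:

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM
    import Control.Monad (replicateM_)

    main :: IO ()
    main = do
      counter <- newTVarIO (0 :: Int)
      let bump = atomically (modifyTVar' counter (+ 1))
      _ <- forkIO (replicateM_ 1000 bump)  -- concurrent writer
      replicateM_ 1000 bump                -- main-thread writer
      threadDelay 100000                   -- crude wait; a real kit would join properly
      print =<< readTVarIO counter         -- 2000, with no locks in sight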


I’m aware of 3-5 large companies (the number depending on what you define as “large”) that are using Haskell for big, serious projects, but all but one of them is in the defense or banking industries. Companies in these industries are usually not interested in sharing their experiences or engineering techniques, so it feels very quiet compared to when some hip startup starts (loudly) marketing some technology they’re using.


There is Hasura, they are probably the most famous commercial Haskell app right now (in terms of brand recognition).

https://hasura.io


That fits into the tooling category but it is a serious company.

I found one blog post mentioning Haskell, but without much technical detail:

https://hasura.io/blog/from-zero-to-hipster-haskell-in-produ...

I hope they write more..


OCaml (via ReasonML syntax) accounts for over 50% of Messenger.com frontend code.

https://reasonml.github.io/blog/2017/09/08/messenger-50-reas...

I’ve personally found starting adoption by converting pure domain code first to be the best approach, since it does not have to interact with legacy code.


There are more than 200 companies using Haskell at different levels; check out https://haskellcosm.com


A tangent: are Erlang and maybe Elixir worth spending time learning if I've spent time writing Haskell and Scala?


Yes, I loved learning Erlang (and later Elixir was a breeze). It's a good experience in learning how to build apps using the Actor model (they implemented it before it was commonly called Actors) and in building resilient, multi-process systems where failure is part of the design and fully accounted for.

It also has a really small and simple syntax and a wonderful pattern-matching-heavy approach to code, which is something I've taken with me to other languages (and a feature sorely missing in most major languages).

OTP is a bit of a beast and feels overwhelming at first; for example, when you download an HTTP server to run your app, you actually "run" the supervisor, which spawns a bunch of processes. And this is all within your own application, which is a group of isolated processes.

I'd personally use Erlang/Elixir for everything if it had strong typing. And I still might anyway for my next project.


Anyone using Haskell to drive their business is making a huge mistake in my experience. Haskell2010 is a fine language. Unfortunately, the ecosystem has centered around whatever the latest version of GHC is, with its millions of language extensions and breaking the world with changes like https://gitlab.haskell.org/ghc/ghc/issues/10365 and https://gitlab.haskell.org/ghc/ghc/-/wikis/proposal/monad-of.... Which is fine if you're doing research or building tools for fun! But it makes an absolutely awful industrial experience when you want to ship features and not get stuck on a bit rotted version of GHC.
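(To make one of those breaking changes concrete: the AMP made Applicative a superclass of Monad in GHC 7.10, so a previously fine bare Monad instance stopped compiling. A minimal sketch of what compliant code looks like since then - toy type, my example, not from the linked proposals:)

    data Box a = Box a

    instance Functor Box where
      fmap f (Box a) = Box (f a)

    instance Applicative Box where  -- mandatory since the AMP
      pure = Box
      Box f <*> Box a = Box (f a)

    instance Monad Box where        -- return now defaults to pure
      Box a >>= f = f a

    main :: IO ()
    main = let Box n = Box 1 >>= (Box . (+ 1)) in print (n :: Int)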

edit: It's unfortunate that people are downvoting me for offering my experience, especially on a post that wants to make Haskell more attractive to being used in the industry.


You have a very valid point. As someone who uses Haskell professionally in a large organisation, I can honestly say that past changes such as FTP and AMP have added very little but caused significant issues for both academic and industrial users. GHC is no longer a Haskell 2010 compiler and unfortunately there seems little interest in the community for creating a new Haskell standard.


There are several companies that benefit massively from using Haskell, not least by attracting talent!

The most recent instance is probably Juspay: https://github.com/juspay


these issues probably did not cause a lot of problems for production software


Huh? My whole comment is about the fact that they DID cause a lot of problems for production software. The company wrote their flagship product in Haskell, and there is a pretty large chance that that decision will doom them because of the choice to use Haskell. It's unfortunate, because the product has great market fit. If they had chosen pretty much any other tech stack, they would be killing it.


I recall the refactors from those proposals being very mechanical and easy to automate.

Maybe my memory is fuzzy... but if not, can you give an example where that wasn't the case?


Yes. An example was moving to the version of GHC where the Semigroup change happened. The codebase was using this library: https://github.com/brendanhay/gogol. The library dropped support for one version of Google's API in favour of another. Fair enough, except the old version would no longer build because of the Semigroup change. So I ended up having to waste a ton of time completely changing how we were using the library to satisfy the Semigroup change.

Again, this is the problem. The ecosystem tends to aim only at the latest GHC because of the attitude that "it's mechanical to change!". Yeah, maybe for the code you write, but if your dependencies have that attitude, all of a sudden upgrades can be a huge effort.


That seems like a problem with gogol dropping support for relevant features, rather than with the semigroup change in particular. If you didn't want to keep up to date with the library more generally, the semigroup change in isolation could have been dealt with by forking the library in question.

Which isn't to say this isn't still indicative of friction in the Haskell ecosystem when it comes to building a large system for production.


Would fixing it require more than adding

    instance Semigroup T where
        (<>) = mappend

for every concrete type T that has a Monoid instance? (A bare instance over a type variable wouldn't compile, but the per-type version is trivial.) That would be tedious for sure, and very frustrating that a maintainer wouldn't support their package sufficiently well to do that for you, but it still seems to fall under the description of "mechanical".


> The company wrote their flagship product in Haskell

Which company are we talking about? I don't see mention of a specific company in your original post.


It would be unprofessional to name the company.


I'm not asking you to name the company! Your original comment talked in general terms that could be interpreted as hearsay. Your later comment mentioned "the company" as though we're supposed to know what that refers to. It suggests you worked for a specific company that got burned by Haskell. That makes your point of view significantly more useful!


The idea of "marketing Haskell" to me is akin to marketing math and mathematical notation. Yes, Haskell is elegant. Yes, it is powerful. Yes, it is succinct. Yes, the benefits of its algebraic type system are many and very real -- if your code compiles, it is correct in many important ways. No one can argue with any of that.

But... most people who look into Haskell lose steam as soon as they realize they actually must understand and be able to reason about functors, applicative functors, monoids, monads, and (god forbid) lens-things like prisms, traversals, isos, etc. to do anything useful. It's a shame.


The sad thing is nobody ever makes that complaint when talking about everything you'd need to know to build a bridge, or to place an IV, or even to build a table. There's a huge part of the software community that's self-taught, but also a gigantic part with degrees in computer science. If there were enough value in these ideas and a critical mass of adoption, we could just add them to the curriculum, and they'd be yet another of those things you need to learn to get into the software industry.


>The sad thing is nobody ever makes that complaint when talking about everything you'd need to know to build a bridge

If we're talking about the software engineering equivalent of bridges, I think it would be much more valuable to add TLA+ or similar to the curriculum.


Why not both? They are not mutually exclusive and they both would advance the status quo.


If there was unlimited time, sure.


OP was complaining about having to understand Functors, Monoids, etc. like they are nuisances in the way of writing real code. In response, 6gvONxR4sf7o and I were saying that programmers really ought to be taught these incredibly useful abstractions. After more than a decade of coding professionally in popular languages and then 4 years of writing Haskell, I cannot begin to tell you how amazing it is that we have these abstractions to rely on. There are virtually endless data types out there, and instead of having to study each and every one of them I can just look at whether something is a Monoid and immediately know that I can append its values and that there is a unit. I can look at whether something is a Functor and immediately know that I can map any function over it. I only need to know a few of these generic patterns and "magically" I can write functions that operate on any of the countless data structures out there. This is true code reuse. Not teaching these to future programmers, not exploiting the timeless, powerful knowledge behind them, is in my opinion a disservice to them.
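A minimal sketch of that reuse (my names; Sum is from Data.Monoid in base):

    import Data.Monoid (Sum(..))

    -- One generic function, usable with any Monoid whatsoever.
    combineAll :: Monoid m => [m] -> m
    combineAll = foldr (<>) mempty

    main :: IO ()
    main = do
      putStrLn (combineAll ["fold", "r ", "rocks"])                -- Strings
      print (getSum (combineAll [Sum 1, Sum 2, Sum 3 :: Sum Int])) -- numbers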


I've been writing Haskell for 12 years so I'm aware of the abstractions it offers. At the end of the day though, either we're building a bridge or we're not.

If we're building a bridge (i.e. a distributed database, a real time system, fly by wire system, etc.) Haskell is useless if for no other reason than how it does memory allocation.

Dependent types are also a dead end for building bridges in my opinion. Proving any non trivial invariants is basically impossible in the real world (http://web.archive.org/web/20200113205635/http://blog.parall...).

And even if it wasn't, dependent types are for a program. But our "bridges" aren't programs, they're systems! If there's one thing the Haskell community doesn't understand, it's that. The scary issues that crop up in large-scale distributed systems aren't from the nodes, they're from the edges. So any system of correctness focused on the nodes (i.e. type systems) is avoiding the real issue.

That's the power of TLA+, by the way. By modeling the system, you have a specification that gives you some confidence that what you're building is correct before you begin to implement your bridge.

And if we're not building a bridge? The business doesn't really care about correctness; they care about shipping features. Rollbacks are an acceptable cost. Our job is to sit down and bang out a solution to the problem at hand, not build the perfect abstraction of the problem. When we use languages like Haskell, we take that power out of product's hands.


At work we're building a novel DSL and its ecosystem for a specific industry. Not sure if you consider that a bridge or not; the naive part of me likes to think it's akin to one, but maybe I'm deluding myself. Of course we could use any language under the sun, but imho Haskell is a really great fit for this project. It also helped us hire some truly great talent (I'm the hiring manager for this team, so I know first hand). I don't have much experience with TLA+. From the little I know and have learnt about it over the years, I think it's brilliant, which is why I agreed it should be taught and used more. For the record, I've also been involved in multiple large-scale micro-service-obsessed projects in the past; none of them used Haskell or TLA+, and they all were a terrible mess of hidden entangled complexity. Not sure if there's any takeaway, just an anecdote.


Honestly, there is already too damn much stuff to learn to get into the software industry.

One can make the point that senior developers could use different tools than juniors. That happens in other industries, and development avoids it like the plague, with clearly bad consequences. But the industry will never standardize on high-knowledge tools.


That sentiment is so strange to me. There's a ton to learn to get into the industry, but "too much" requires some real thought. If an extra month of full-time study could fix some software issue 95% of the time, can you think of any issues for which that would be worth raising the bar? I'm not advocating that learning monads is that thing, but that a high barrier to entry isn't in and of itself bad.

I sincerely hope that software as a field matures to the point where maybe 50 years from now, hiring most of us working in it with our knowledge today would be like hiring a mathematician who never learned calculus.


What software issue does category theory and abstract algebra solve?

Don't get me wrong, it's cool and it has expanded my appreciation and understanding of math. But I just don't see what engineering problem it's solving besides mathematical elegance.

If you're thinking of monads and IO - separating side effects is nice and useful, but it doesn't solve any real problem IMO. It's not like people are losing sleep because they don't know which code is side-effecting. Side effects are where a lot of engineering complexity lives, but IO doesn't wrangle them much better than anything else. Maybe Erlang does the best job of it, IMO.

Plus, monads don't compose. Algebraic effects (i.e. multicore OCaml), or even Actors, do "IO" in a better way, without the math jargon.


> It's not like people are losing sleep because they don't know which code is side effecting

I’m pretty sure many people are, since this will be a likely cause of them losing money in their businesses. They could be losing money directly, and they could be losing money as a consequence of having to invest more developer time in solving unexpected effectful behaviour.

In fact, I can think of one comedic case where a programmer literally lost sleep because he programmed his garage door to respond to HTTP GET requests instead of POST requests, which are conceptually meant to be the effectful ones.


I always found the IO barrier a good distinction at the code level. In fact, I started working on something to that end for Elixir because of it.

Monads come with their own share of problems, that's true. Composability isn't really one of them in my opinion, though; we've got transformers for that purpose.
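For instance, a minimal sketch with mtl's StateT stacking state on top of IO (my example, not the parent's):

    import Control.Monad.State

    -- One computation, two composed effects: Int state plus IO.
    tick :: StateT Int IO ()
    tick = do
      n <- get
      lift (putStrLn ("tick " ++ show n))
      put (n + 1)

    main :: IO ()
    main = evalStateT (tick >> tick >> tick) 0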


> What software issue does category theory and abstract algebra solve?

They address (not solve) the problem of lack of code reuse. I have never seen a language where I can put to use so much code written elsewhere without knowledge of my particular use case. They also address the difficulty of reasoning about the behaviour of code.


I think it's true that there's too much in the way of faddy tools and products, frameworks, libraries, ecosystems.. and at the same time people are perhaps not investing enough into theory/fundamentals.

Where programming languages fit on this spectrum is a bit controversial. I think most people treat learning a programming language as an ecosystem problem, but that's only because these people tend to consider only mainstream languages that just don't have a lot going for them in the "language proper."

The result seems to be that if you propose a programming language a competent programmer can't pick up in a weekend or three (mostly by learning the syntax & types and then just applying prior knowledge from other languages), there's going to be a lot of complaining and whining about it. Maybe it's justified as long as we don't see advanced languages that really change software engineering in a big way; 'til then, a language is just another (more or less faddy) tool.

If we had a language that could offer guaranteed productivity boost (without any nasty tradeoffs other than the required up-front mental investment) for anyone who takes a few months to study it, then hell yes we should!

So it boils down to: is there really enough value in those ideas or not? I think not, or else both academia and the entire industry are just extremely dumb for not seeing that value. And I'm dumb too :(

The other point to note is that some valuable concepts just slowly creep into mainstream languages over time. Mainstream languages today have a decent set of constructs that were primarily associated with functional languages 2-3 decades ago. Those things are probably not valuable enough in isolation to justify switching languages.


> So it boils down to: is there really enough value in those ideas or not?

Up front, it depends on which ideas "those ideas" are. If you're referring to monads etc., I don't know; it's not obvious to me either way. In the broader sense, I hope what we're seeing is just the beginning: rust's ownership stuff, haskell's hardcore folks dreaming up new ways to do generic things soundly, the zig and co folks separating compile-time code from runtime code, even the bleeding-edge dependent-type folks letting you write provably correct code. In terms of programming language developments, it seems like there are a lot of new ideas happening now. It's not that academia and the industry are dumb; it's just that this shit is hard and we're just starting. So I worry when I see "it's another thing to learn, and there's already too much, so no" used as a justification against the hard parts of new ideas. I can't wait to see what the next few decades of languages and tooling bring us. Who cares whether it means we have to study?


> I think not, or else both academia and the entire industry are just extremely dumb for not seeing that value.

I don’t think we should discount that possibility, although I don’t think it’s true that academia doesn’t see the value in more sophisticated technology.

Industry, yes. I can accept the idea that a large proportion of industry is just extremely dumb. This isn’t an original idea though.


A month is probably enough to learn the meaning of those terms, and how to implement instances of them... but for learning how to actually apply them to real code, a year is more likely.

Most professions require less than a year of training in total; the ones that require more training have fewer employment positions and command better salaries. You want software development to migrate up, with good salaries for few people, while the industry is stubborn in pushing it down, with low salaries for many people. But both of those options are bad: software development has a huge range of possibilities that can only be exploited by employing many people, and it also has a huge reward for competence; it needs to spread across the whole spectrum.

A world where software developers were as few as mathematicians would be a very boring one.


> But... most people who look into Haskell lose steam as soon as they realize they actually must understand functors, applicative functors, monoids, monads, and (god forbid) lens-things like prisms, traversals, isos, etc. to do anything useful.

Why? Why does everyone think that you have to understand the mathematical underpinnings of Haskell to program in it, but not that you have to understand the assembler to which C compiles to program in it, or the internals of your car's engine to drive it?

Of course, in each of these cases, someone who has that knowledge will probably have some power available that will be denied to those who don't; but that doesn't mean you can't use it. The great thing about this mathematical knowledge is that it can be filled in on demand, and, unlike a lot of knowledge that you don't even know you don't have, when you realise you want to understand what these objects are, you've already got the terminology to look it up in research papers.


Those abstractions are like the steering wheels and pedals for Haskell, not like the car's engine.


> Those abstractions are like the steering wheels and pedals for Haskell, not like the car's engine.

Agreed, but you also don't have to understand anything about the mechanics of the steering wheels and pedals to drive a car, only about when to use which. Monads and other scary-sounding terminology are the same. People really do know these patterns from other programming languages, and attaching a name might be scary, but it doesn't make them intrinsically harder. On the contrary, it makes it easier for people who want to peer into the theoretical internals to do so—though no one has to do it.
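
To make the "known pattern" point concrete, a sketch (the lookup functions are hypothetical): chaining Maybe in do-notation is the same cascade of null checks programmers already write in other languages, just with a name attached.

  -- lookups that may fail, as in any language with null
  lookupUser :: Int -> Maybe String
  lookupUser 1 = Just "alice"
  lookupUser _ = Nothing

  lookupEmail :: String -> Maybe String
  lookupEmail "alice" = Just "alice@example.com"
  lookupEmail _       = Nothing

  -- each step runs only if the previous one succeeded,
  -- like nested `if (x != null)` checks elsewhere
  emailOf :: Int -> Maybe String
  emailOf uid = do
    user <- lookupUser uid
    lookupEmail user

  main :: IO ()
  main = print (emailOf 1)  -- Just "alice@example.com"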


> The bottom line is that rather than using more correct tools and engineering practices, it is simply far easier and more economical to hedge software risk with non-technical approaches. Moreover, if you throw enough bodies writing Java at a project for a long-enough period of time, it will produce a result or fail with a fairly predictable set of cashflows, which is itself a low quantifiable risk that’s very easy for a business to plan for.

This is a very MBA-like lens through which to examine software development. It's not wrong, but it's also a recipe for mediocrity, which is a great way to damage your culture and sit around wondering why you can never get ahead of your competition.

The deeper question is why is software quality only loosely correlated, if at all, with business outcomes?

The pessimists will point to Equifax et al. and tell you that the market simply doesn't care, that businesses can go through downtime after leak after catastrophic failure and their stock will never take a hit. And there's some truth to that. The end consumer actually doesn't really care about the bad news. There's not much that any company in the business can do about that.

But the optimist understands that great engineering will leapfrog competitors and leave them in the dust. The value of correctness isn't in the two months after you begin, it's in the two months after your latest release five years from now. It requires vision, patience, already-available human capital, and leadership to realize the benefits.

> If the talent pool is illiquid or geographically locked, that’s a non-starter. If there isn’t a large corporate vendor to go to to sign up for a support contract, then that’s a non-starter. If there isn’t a party to sue as an insurance policy against failure, then that’s a non-starter.

If you give your talent the opportunity to work on a Haskell project and they're walking out the door when it's difficult to find Haskell work to begin with, then you, the manager, failed in deciding which talent to hire. If you're looking for a support contract to hedge against the loss of in-house talent, then you've already failed. If you think that legal solutions will protect you against market failure, then you've already failed. Haskell isn't trying to market to these people in the first place, why should it start?


> but it's also a recipe for mediocrity

This is framed as a downside, but to be mediocre it can't be so bad it doesn't work at all. Most software that is great from a technical standpoint is made by a relatively small team, which translates into a relatively large risk for the business since it compresses all the volatility into retaining that team or not. With average tenure in tech around two years, you can see how that is not an attractive proposition for most companies. If all the competition is also (at best) mediocre, you can simply try to compete on other (nontechnical) fields like marketing or customer support.

There is a secondary level of incentives that is present in most larger companies: delivering a fantastic product will make the owners rich and the people involved will maybe get a raise. Delivering a spectacularly failed project ends your chances of career advancement. This really diminishes the appetite for taking technical risk amongst decision makers. Mediocre is often good enough.


> delivering a fantastic product will make the owners rich and the people involved will maybe get a raise. Delivering a spectacularly failed project ends your chances of career advancement.

Aren't ~90% of software projects failures?

Mediocre software is failure. There is no upside to avoiding all risks.


Of course there is upside in avoiding all risks, as long as you are happy with the status quo. I understand if you personally are not happy with it, but we just have to look around to see conservative people everywhere.


To the poster's credit, he does frame this in terms of balancing concerns. I'd argue the post overstates the non-technical mitigations, but they seem worth being aware of, and even as a rhetorical device, overstating helps bring them into focus.

> The deeper question is why is software quality only loosely correlated, if at all, with business outcomes?

Because quality is on a spectrum and context-specific: commercial domains need quality levels—you have to set a magnitude, or quantify it in some way. The software profession, such as it is, tends to come at this as a binary matter (cite pretty much any debate about speed vs. quality). That gets, I think, amplified in technical communities where correctness is deemed more important, or simply more attainable in terms of claims.

> But the optimist understands that great engineering will leapfrog competitors and leave them in the dust.

I agree with the sentiment; I actually do think we're in a phase where engineering leverage is underestimated, but I would qualify it and say this is different to correctness. What great engineering can do is offer a short-term technical advantage (eg "secret sauce") and/or a sustainable one (eg "organisational speed"). It's not clear correctness provides that kind of benefit, unless we want to frame it as increasing precision/accuracy/reproducibility of results.


tagged in on a client project written in haskell

codebase organization was nonintuitive and I blame type-level programming

compilation was so slow and bad that other programs on my laptop started to die

the feature I was adding was something that would be library-provided in a normal web language

none of the operators have names and are very hard to google; I had to search for things like 'haskell tilde asterisk arrow'. Impossible to know what's from libs vs core language. Lots of explanatory docs online were like 'oh we don't pronounce this one'.

Core abstractions have no definition ('to explain that I would have to explain monads but you won't understand monads so GFY'). Monads are just sub rosa dependency injection.

lang doesn't protect you from runtime crashes but does apparently protect you from knowing where the program crashed


Haven’t written any serious Haskell, but doesn’t Hoogle work for searching infix operators?

One thing I really like about Haskell is Hoogle. The ability to search by function signature is great. I often wish I could do that in other languages.
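
For example (a hedged sketch of typical queries; exact result lists vary): searching by signature finds a function even when you don't know its name.

  -- Hoogle query: (a -> b) -> [a] -> [b]
  -- returns, among others:
  map  :: (a -> b) -> [a] -> [b]
  fmap :: Functor f => (a -> b) -> f a -> f b

  -- and searching the bare operator >>= lands on:
  (>>=) :: Monad m => m a -> (a -> m b) -> m b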


searched `->` got

(->-) :: Bind m => (a -> m b) -> (b -> m c) -> a -> m c

semigroupoids Data.Functor.Bind


Fair enough. `->` is core syntax, not a library function. It means `->` if you're a Java guy or `=>` if you're a Scala guy.

Just search for it the same as you'd search for it in Java or Scala.


I'm open to the argument that my imperative / scripting background in some way primes me to learn rails but makes it hard for me to navigate a haskell codebase, and that this is a human difference rather than one language being inherently more navigable than the other.


> lang doesn't protect you from runtime crashes but does apparently protect you from knowing where the program crashed

Can you share a sample?

  square x = crash * x

  crash = undefined
 
  main = print (square 3)
Cause for me, this results in:

  foo: Prelude.undefined
  CallStack (from HasCallStack):
    error, called at libraries/base/GHC/Err.hs:78:14 in base:GHC.Err
    undefined, called at foo.hs:3:9 in main:Main
Maybe your mileage varied.


This is a relatively new feature. It used to say something like 'error: undefined' and exit.


This is a very lucid piece on PL adoption in general. When I think about programming languages, I do think rather emotionally (which is fine, emotions are early warning systems).

I liked the linked author's points about the pre-requisites for marketing a solution:

    It is memorable
    It includes a key benefit
    It differentiates
    It imparts positivity
When I think about the most popular languages, they do have all of these. Rust:

    Memorable: very. It's basically everywhere you look.
    Has a key benefit: going C++ fast without the C++ footguns, memory-safe, WASM integration.
    Differentiates: using Rust is definitely choosing a particular set of technical tradeoffs.
    Imparts positivity: the community is known to be welcoming, it feels cool, cute crab. People rewrite existing software in Rust just because it feels good!

Yet Haskell has almost none of these:

    Memorable: I mean, yeah, but it's not being shown to people, and I can't point to a bunch of software. Tools I know of include PostgREST, Hasura, Nix, hledger, which is better than nothing.
    Key benefit: hand-waving about "being more correct", unlike Rust, which has reams of "Microsoft claims 60% of its bugs are memory errors" articles; and in any case TFA's point is that this needs to be a different message, because correctness isn't gonna sell it.
    Differentiates: this is definitely true, almost too much.
    Imparts positivity: not really feeling this one with Haskell. People feel good when they figure out a monad is a container & a lens is a path through a data structure, but attempts to explain these things to others feel condescending. Does Haskell even have an animal mascot? (Serious question.) It also "stumps" a lot of people, and people feel bad about it. There's no "import antigravity" equivalent feeling with Haskell.

This sounds pretty sombre for Haskell, though it's been an academic language a long time, and could probably stay alive that way.

One possibility, with the frontend dev world becoming more and more functional and pure-oriented, is to position itself like ReasonML: as a way to write and reason about declarative, pure UIs. But I don't think the existing Haskell community cares too much about that world.

Idk, I'm going to just keep writing Rust and JS, the already most loved and popular languages.


I love haskell. What I love about it is the type system and pureness. In python, I loathe refactoring because of how fragile it often is. I hate finding out that I misunderstood what to even pass a function. I want what haskell brings to those issues. I hate how hard haskell is to debug and how incredibly hard performance is to reason about. I want what python (or another imperative language) brings to those issues. I wonder whether, if haskell had been strict by default rather than lazy by default, its adoption would be different today.
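
A classic instance of that difficulty, as a sketch: lazy foldl quietly builds a chain of thunks before doing any arithmetic, the strict variant runs in constant space, and nothing in the types tells you which behaviour you're getting.

  import Data.List (foldl')

  -- lazy: accumulates unevaluated thunks (((0+1)+2)+...); can exhaust memory
  lazySum :: [Int] -> Int
  lazySum = foldl (+) 0

  -- strict: forces the accumulator at each step; constant space
  strictSum :: [Int] -> Int
  strictSum = foldl' (+) 0

  main :: IO ()
  main = print (strictSum [1 .. 1000000])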

There are certainly pieces of haskell that would make all our lives easier, and I can't wait for them to make their way into mainstream industry work one way or another.


-XStrict?
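
For reference, a sketch of what that flag does (the example is illustrative): with the Strict extension, let bindings are forced eagerly, so this crashes even though x is never used; without the pragma it prints 2.

  {-# LANGUAGE Strict #-}

  main :: IO ()
  main = print (let x = undefined :: Int in (1 :: Int) + 1)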


It's easy to write strict code in haskell, but the default behavior affects community norms, and what's considered standard and idiomatic. That's where I think the evolution could have been different.


Haskell would never have been strict by default, because the entire point of Haskell was to be a common basis for research into non-strict languages. Non-strictness is simply the core of what Haskell is.


The author of the article has an interesting contact page:

"To contact me via email run one of the following scripts to generate my contact information. I find this is an effective filter against the deluge of emails from recruiters."

The "scripts" are offered in Haskell, x86-64 Assembly and Python. Solution is pretty obvious though, no need to run them actually.


Recruiters who do contact me will be named and shamed publicly.

I would applaud any recruiter who went through all that trouble just to contact a person, regardless of my interest in their offer.


"The hard economic truth for engineers is that technical excellence is overwhelmingly irrelevant"

No. Technical excellence matters a lot when used appropriately, but competing with javascript and python is not where haskell is going to win.

There IS a language with HUGE success that is: declarative, strongly typed with type inference, lazy, and no side effects. It's called SQL[1].

That's where Haskell could shine: competing with SQL. Imagine Maybe rather than NULLs, a sane language, multiple returns, a much better type system, etc. Python and Java can't touch SQL, but haskell could.
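
A sketch of what that could look like (the types are purely illustrative): a LEFT JOIN's missing row becomes an explicit Maybe the caller must handle, instead of NULLs silently propagating through every column.

  -- absence is explicit in the type, not an implicit NULL
  data Employee = Employee { name :: String, deptId :: Maybe Int }

  departmentOf :: [(Int, String)] -> Employee -> Maybe String
  departmentOf depts e = do
    d <- deptId e      -- the column may be empty
    lookup d depts     -- the join may find no matching row

  main :: IO ()
  main = print (departmentOf [(1, "R&D")] (Employee "ada" (Just 1)))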

[1] http://thoughts.davisjeff.com/2011/09/25/sql-the-successful-...


SQL does not have type inference. It's dynamically typed, and will break at runtime if you feed the wrong type into something. It probably is strongly typed, depending where you draw that line.

It also sometimes has side effects, although I think that's more dialect-specific than present in the standard (when it comes to DML).


I'm sure there are some counterexamples, but for the most part, SQL implementations will fail to compile a query if you try to add a string to an integer or take the average of a date column. So I'm not quite sure what you mean.

And type inference also plays a role with CASE statements, UNION, etc. Again may depend on SQL implementation.


Hmm, I'm not sure of the details, but I very much remember encountering type errors based on what data path was taken. Playing around now I think the typical case is more as you described. I might revisit when I'm better rested.


SQL works really well though. Lots of people know it and can learn it. There are many implementations. There is a standard.


But haskell could offer so much more. A good type system would be huge. Just replacing NULL with Maybe would be huge. A lot of other ideas could come out of it.

And people care about correctness in a way that they don't with python or javascript. Haskell really has something to offer.


That could be interesting. Do you have any links to projects pursuing this type of thing?



Ask any marketer — it's (almost) never the marketing. Or, rather, it's never the marketing of the "hype and subterfuge" kind. No product dominates the market for decades due to hype and subterfuge, let alone hype and subterfuge deployed decades earlier. It's always the product. It's just that what matters about the product to most people is often not what matters to some minority that insists that “technical excellence” is defined by their own preferences. In the famous VHS vs. Betamax war, the technically superior product won, it’s just that it was superior in matters that appealed to many and inferior in matters that appealed to few. The few famous cases where it was the marketing (diamond rings, maybe?) stick in our mind because they're the exception, not the rule. Blaming marketing problems is often an excuse to overlook product problems.

Second, the problem with the message of "code written in Haskell is more correct than code written in other [more popular] languages," isn't that the industry doesn't care about correctness. Correctness translates to cost (to a degree), and the ability to write software as correct as it needs to be more cheaply is actually quite compelling. Rather, the problem with it is that it's simply not true (or, rather, hasn't been shown to be true), certainly not to any degree big enough to matter. That, in and of itself, does not make it a terrible message; after all it's just harmless hype, right? What makes it a terrible message is that it is very clearly not (shown to be) true and yet some Haskellers seem to actually believe it, drunk on their own kool-aid, which makes you think of them as out of touch with reality. At best, it's an emotion (it "feels" more correct), which you've mocked, rather than any sort of "critical thinking," that you present as the desirable standard.

Third, the mistake in the perception of the economics of software is not the importance of correctness — that can have an easily measurable impact. It's in the importance of code. Programmers in general, but Haskellers in particular, are obsessed with code. As a result, they grossly overestimate its importance to software. Ask yourself how long it takes you to produce, say, 1000 lines of code of sufficient quality, and then look at software produced at your company and divide the size of the codebase by the number you came up with. You'll find that the true cost is several times that. The "pure" cost of producing code is not where most of the cost of developing software is. Maybe it's in what you mockingly call "emotions", but it certainly isn't in programming-language semantics. Also, before presenting managers' priorities, it's better to speak to actual managers; they just might have some useful insight here, plus, it's what a culture of technical excellence demands, no?

All of that means that unless your language has some clear and direct effect on the bottom line — which, these days, usually means it has good performance and/or observability, a good ecosystem, or a monopoly of sorts on some distribution platform -- there is no "message" that can make it popular, other than "it's easy and it's fun." While perhaps not a sufficient requirement for success, this can create a supply of qualified developers that at least won't put your language at a disadvantage.

And here's another tip: if your retrospective introspection isn't painful, doesn't discuss any serious and conspicuous flaws with the product, and still paints you as being somehow right after all and everyone else as being somehow wrong — it's probably not true. Worse, it seems untrue. And not calling your own practice "engineering excellence" and others' "throwing enough bodies" can't hurt, either. In other words, I don't know what a good marketing message for Haskell would be, but this post is a good example of a bad one.


> All of that means that unless your language has some clear and direct effect on the bottom line — which, these days, usually means it has good performance and/or observability, a good ecosystem, or a monopoly of sorts on some distribution platform -- there is no "message" that can make it popular, other than "it's easy and it's fun." While perhaps not a sufficient requirement for success, this can create a supply of qualified developers that at least won't put your language at a disadvantage.

I think this approaches the heart of the problem. Whatever the advantages of haskell may be, the disadvantages run extremely deep. Haskell is notoriously difficult to debug and optimize (both memory and cpu usage). It is interesting and influential, but trying to abstract away the order of operations as a core of the language just didn't pan out. All of that is before issues of IDEs, libraries, compilation times etc. It has been around for 30 years, at some point it needs to be called a worthwhile experiment so people can move on from clenching their fists that no one else is as clever as they are.

The more I think about it, my ideal language would be the exact opposite: something extremely clear and straightforward, with minimal complexity and maximum infrastructure.


> The more I think about it, my ideal language would be the exact opposite: something extremely clear and straightforward, with minimal complexity and maximum infrastructure.

I completely agree. That's why I like Java for high-level (but large-scale) programming, and interested in Zig for low-level.


This is a slightly loaded question, but is a Haskell codebase harder to work on than one in a reasonable non-functional language?

Maybe it's because the bugs are generally less common and easier to fix, but when I look at Haskell codebases it seems like it's quite difficult to achieve the same level of "separation" one would in (say) Java - i.e. an average Haskell file seems to purely consist of many loose free functions (often with very short names!).

I might be missing the point, but the aforementioned always pushed me away from Haskell - great language, but it seems to have legibility as an afterthought (as opposed to python which, awful language though it is, does manage to enforce some sort of legible coding style).


In my experience, even bad Haskell codebases are easier to work with than mainstream-language ones. Once I got over the learning curve, I rarely have to think very hard when programming Haskell. It's all mechanical.

For this reason, I find code style to not affect me in Haskell. I've seen a bunch of different styles and I'm never taken aback. The code is all simple & mechanical to reason about.

I've been able to refactor other people's "bad code" without much effort too. This is after others have said the code is bad and unfixable. It just takes a good attitude and the same mechanical approach as always.

At the end of the day, I think a lot more about the business problem in Haskell than other languages. I hate having to think about code & simulate a computer in my head. But if you wish to use other languages and pay me to think like a computer, feel free! It's your money.

You can think of modules as Java classes and get the same sort of code separation fwiw.
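
A sketch of what that looks like (the module and names are hypothetical): the export list plays the role of a Java class's public surface, and anything left off it stays private to the module.

  module Payroll
    ( Employee      -- type exported, constructor kept hidden
    , monthlyCost   -- the public API
    ) where

  data Employee = Employee { salary :: Double }

  -- not exported: effectively a private helper
  employerTax :: Double -> Double
  employerTax s = s * 0.2

  monthlyCost :: Employee -> Double
  monthlyCost e = salary e + employerTax (salary e)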


I've found it easy to work with, personally; if long function names help you, you should use them. Common libraries might not have long function names, but you'll find the same thing for the most part if you look at the API of collections in various languages, for example.

Oddly, now I find it really legible compared to other languages; you can still make things obtuse if you really want to, but the flow seems so much clearer.

I would say that, compared to Haskell, Java (as you mentioned it) has much less separation, because things invariably gravitate towards big balls of mud containing lots of state. In Haskell, the function you're looking at is only tied to the functions and data types it uses, no more, no less.


Yes, you are missing the point. It is very common for people to miss it when they only have experience in OOP, and very hard to adjust to the required paradigm.

Good Haskell code is not organized the same way as good OO code. While in OO you will have separate instances that each take complete care of a different business concern, in Haskell you will abstract that care into different parts, and integrate the business concerns into a few places that take complete care of them.


I’d suspect the difficulty comes from the paradigm shift.

I’ve personally switch code to functional style and it has simplified and reduced code for me. Granted if you don’t know how monads work it could be very confusing.


I'm a bit rusty but I do understand the type and category theory behind Haskell. I don't find it confusing, just sort of suboptimal a la C++ template syntax


Ah yes if you have a language that isn’t conducive to it... luckily Typescript is multi-paradigm enough.


How could it be tweaked to make it better?


What D does. Angle brackets make instantiations harder to parse. Also the template keyword is pointless


What about tooling? Haskell, being such a powerful language, would take off a lot more if there were a very hand-holding, Jetbrains-like IDE that gave you everything you needed and pointed out the problems in your code in real time. Or at least that's what I think, having never worked with haskell in a professional capacity. What are the day-to-day devtool stacks like in real companies?


http://rikvdkleij.github.io/intellij-haskell/ it's been around for a while already


ghcide, took a couple days to set up but the effects are phenomenal

https://www.youtube.com/watch?v=cijsaeWNf2E


I made the decision to stop using Haskell for things I consider "professional" use a couple of years ago. No matter how much I love the idea of the language, there are a couple of superficial things it gets wrong and one fundamental thing (or at least I believe it's fundamental; perhaps some academics have proof one way or the other).

The superficial things are not really part of the language itself, but more of its standard library, part of the 'prelude', or things the community has agreed upon:

  - the I/O monad (and others) can throw errors
  - prelude is full of bad practice like use of strings and linked lists
  - there's something wrong with how function names can't be namespaced
There's more than this, and they're not just things I came up with; they've been fought over in the community at length. Other preludes exist, I/O libraries that don't throw errors exist, even whole different systems for doing I/O exist. It's not that solutions are hard or impossible; they're there. It's just that, the way the language is managed, it's not likely they'd ever be applied in a way that the entire community, and especially those drawn to it by any marketing, would experience them.
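
To illustrate the first point, a minimal sketch (the file path is made up): readFile's type is IO String, yet it can throw an IOException that nothing in the signature warns about; recovering from it takes an explicit try.

  import Control.Exception (IOException, try)

  main :: IO ()
  main = do
    -- the type promises IO String, but the call can still throw
    result <- try (readFile "/no/such/file") :: IO (Either IOException String)
    case result of
      Left err       -> putStrLn ("caught: " ++ show err)
      Right contents -> putStrLn contents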

Maybe I'm making a fool of myself and things have changed in the past couple of years (things were really developing at a rapid pace when I left), but it seemed all the improvements happened around haskell, not to the language itself. When Stack and LTS-Haskell emerged it was the most amazing thing I had seen happen to Haskell in like 10 years, but given the lackluster response Snoyman received from the core team, it seemed like they didn't even register it, even though the entire community was in upheaval.

In my opinion, Haskell's marketing is fine. It's being marketed to every college student at any university good enough to teach functional programming. In addition to that, there are raving fans (myself included) who extoll its virtues to anyone who will listen. What it needs is a better retention rate. It needs people who learn it to immediately realise that it's fit for production use, that it's not just a fun experiment and an educational way of looking at things. For that we need something like a Haskell 2.0, designed by the community to incorporate everything we learned about Haskell's flaws and make it an absolute powerhouse of productivity. Maybe Haskell should be split into two flavors: one the teaching language with the linked lists, strings and funny pattern matching, and one with a prelude that will set you up with the perfect base to build the next compiler, database, service or web application.

If all those smart people who learn Haskell and tinker with it for a bit actually stayed around and used it to build awesome stuff, Haskell would truly have an enormous footprint on the world.

(Not really relevant, but the fundamental thing is that Haskell's performance is really hard to predict. This is true for many garbage-collected languages, but for Haskell it seems to be extra bad because of the way its paradigm just doesn't map 1:1 to how computers are built.)


> we need something like a Haskell 2.0, designed by the community to incorporate everything we learned about Haskell's flaws and make it an absolute powerhouse of productivity

How about PureScript?


Marketing Haskell as an industry language is starting to feel a bit like a forlorn hope. After 30 years of getting passed by a variety of new languages that are supposedly inferior, hoping that Haskell will make industrial in-roads seems a bit silly to me.

I mean, try if you want to. It's no skin off my back. But it doesn't seem like it'll work.


Javascript is not popular because it has Java in the name...


But it was a big initial boost for it. "ECMAScript" didn't have the same marketability, and Java was being publicized and becoming popular, so they renamed it to JavaScript.


That's wrong on the history. Netscape called the language "LiveScript" before changing it to "JavaScript". Then Microsoft called it "JScript." Then the "ECMAScript" name was chosen as a compromise.

Using different names for the same language didn't change anything fundamental. It was built into the browser, so if you wanted to write code in the browser you didn't have a choice.


I still maintain that ECMA sounds like an unpleasant skin disease. Not good marketing, that!


It would be unreasonable to attribute none of JavaScript's popularity to its name.

https://stackoverflow.com/questions/2018731/why-is-javascrip...

Even JavaScript's first name, Mocha, is a reference to chocolatey coffee and the Java language.


I think it's just popular because it's in the browser.


Unless I have missed something, it is immensely popular.


xixixao is making a statement about the reason for Javascript's popularity, not about its popularity itself.



