Haskell's Children (owenlynch.org)
176 points by xiaodai on Sept 23, 2020 | 191 comments



I enjoyed reading the perspective given in this essay and agree that Haskell has been (and continues to be) an influential language. So take this criticism as me perhaps being overly particular about the meaning of "descendant".

Idris as a descendant of Haskell I have no argument with.

Rust, however, is more correctly viewed as (primarily) a descendant of the ML languages, which all possess the properties the article lists as derived from Haskell. Perhaps it is more correct to call Haskell and Rust cousins, both deriving from the same lineage of languages.

I would also not call Julia a child of Haskell. It is more correctly viewed as an interpretation of Matlab as a Lisp, so I'd call Julia a descendant of Lisp. That Haskell single-handedly popularized the utility of category theory in computing and that a category theory library is written in Julia does not make Julia a child of Haskell, any more than it would have made OCaml a child of Haskell (there used to be a category theory library written in OCaml).

The popularization of functional programming, domain-specific languages as algebras, reactive programming (not FRP), property-based testing, parser combinators, model-view-update for UIs, and design principles secretly based on monads are but a few of the ways Haskell has influenced programming. That is the core point of the essay, and with it I am in full agreement.

This thread somewhat has the air of a eulogy, but just because something is not the height of fashion does not mean it is dying.


Rust’s two most prominent type system features are lifetimes and traits. Traits aka type classes first appeared in Haskell and are not a feature of ML.

I would be interested to learn of features in Rust that came from ML and that aren’t features of Haskell - not counting strictness :-) I can think of a non-example: one of ML's distinguishing features is its functional (in the FP sense) module system; Haskell and Rust have more conventional module systems.

You could maybe argue that a Rust enum definition is more like an ML type definition than a Haskell data definition because the constructor parameters are tuples rather than curried, but it isn’t clear to me whether this is actual influence from ML, or Rust aiming to be more C-ish.

As I understand the history of algebraic data types and pattern matching, they first appeared in Hope, and from there they were added to ML, and on the separate branch of lazy functional languages they were inherited by Hope’s successors Miranda, Orwell, and Haskell.


You raise good points and I suppose valid differences in focus will alter one's notion of nearness. Here's why I made the claim I did:

> Rust that came from ML and that aren’t features of Haskell

> As I understand the history of algebraic data types and pattern matching, they first appeared in Hope, and from there they were added to ML, and later adopted by Hope’s successors

They first appeared in a practical form in Hope, but I'm given to understand that these ideas could already be found in Programs and their Proofs: An Algebraic Approach, and the core ideas are clearly present in this 1968 paper by Burstall: http://www.cse.chalmers.se/edu/year/2010/course/DAT140_Types...

The creators of HOPE, VAX ML (which also had limited patterns and case analysis), and LCF/ML all influenced each other and played prominent roles in the design of Standard ML, which would in turn strongly influence future functional languages, including Haskell and OCaml. It's tricky to untangle the history, but I don't think it's controversial to say SML popularized functional programming as it is prototypically understood today (before Haskell evangelized it further).

> Rust’s two most prominent type system features are lifetimes and traits

Other than the initial prototype being written in OCaml, I'd say Rust's pragmatic feel: eager evaluation, immutability by default but with an escape hatch to mutability, and the implementation of the features listed in the posted article all make it feel more like an ML. I agree, though, that this is a subjective take.


There is a good list here of ways Rust has a more ML vibe than Haskell, in aspects of Rust's syntax and choice of type names: https://lobste.rs/s/nwtarh/haskell_s_children#c_quph8k


I agree, I would call OCaml, Haskell, and Rust cousins, more or less ... descendants of ML.

Though the original Rust compiler was written in OCaml, so depending on your viewpoint it could also be its child. I'm not sure if Rust lifted any features or syntax from OCaml specifically. I think the "O" in OCaml was not influential ...

----

edit: I found a list of influences, and it does credit Haskell with typeclasses, but SML/OCaml and C++ are credited first:

https://doc.rust-lang.org/reference/influences.html

I always thought of Rust as a mashup of OCaml and C++, but Haskell is in there too.


While Haskell has a lot of influence from ML, it is also a descendant of the highly influential line of programming languages designed by David Turner, starting with SASL. If Miranda weren't a closed-source language, it would perhaps be what all of us are talking about now.

Some History of Functional Programming Languages (David Turner) https://www.cs.kent.ac.uk/people/staff/dat/tfp12/tfp12.pdf


Interesting, I didn't know about SASL.

Miranda was open sourced earlier this year! I remember downloading the tarball and looking at its source code.

https://old.reddit.com/r/programming/comments/fbtmar/miranda...

https://www.cs.kent.ac.uk/people/staff/dat/miranda/

https://www.cs.kent.ac.uk/people/staff/dat/miranda/downloads...


This Landin paper predates SASL by about a decade: http://thecorememory.com/Next_700.pdf


Mentioned in the PDF I linked to :) And lambda calculus came decades before that.


But Catlab.jl is definitely a child of Haskell: the developers have said many times how much inspiration they've gotten from Haskell.


> Rust however, is more correctly viewed as (primarily) a descendant of ML languages, which all possess the properties the articles lists as derived from Haskell.

Why would that be more correct? Would Rust have these features if they had not been popularized by Haskell? I don't know the answer, but it seems like debating which language some feature was taken from, when more than one contemporary language has it, is a kind of meaningless pedantry.


Given that the original Rust compiler was implemented in OCaml, it seems natural to assume that's where some of these features found their inspiration.

Though I'm not really sure that it matters.


> It manages to be much faster than most dynamic languages ... It would be unfair to compare the performance of Rust and Haskell, because Haskell is optimized for things other than performance.

Hmm...

> When I first started using Rust, I really missed monads.

Option and Result are Monads. Surely he misses do-notation.
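(A minimal illustration of what do-notation buys, using Haskell's Maybe in the role of Option; my example, not the article's:)

    -- do-notation...
    addBoth :: Maybe Int -> Maybe Int -> Maybe Int
    addBoth mx my = do
      x <- mx
      y <- my
      pure (x + y)

    -- ...is just sugar for the and_then-style chain Rust makes you write:
    addBoth' :: Maybe Int -> Maybe Int -> Maybe Int
    addBoth' mx my = mx >>= \x -> my >>= \y -> pure (x + y)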

> This all being said, I think it is worth looking at the features that are prominent in Haskell that ended up going to Rust ...[list of features that Rust got from SML/ocaml]

Here are the officially stated Rust influences: https://doc.rust-lang.org/reference/influences.html

To say that Rust is a child of Haskell is really overstating the case.


> Option and Result are Monads. Surely he misses do-notation.

You can find monads everywhere if you look hard enough. But Haskell actually lets you use those discoveries for massive code reuse, while Rust forces you to write stuff like Result::and_then and Result::map over and over again.
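To make that reuse concrete (my illustration, assuming nothing beyond the standard library): traverse is written once against the Traversable/Applicative interfaces, and immediately works for Maybe, Either, IO, parsers, and so on.

    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    -- traverse is defined once in base, reused for every such type:
    allHalved :: [Int] -> Maybe [Int]
    allHalved = traverse halve

    -- allHalved [2, 4] == Just [1, 2]
    -- allHalved [2, 3] == Nothing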

> To say that Rust is a child of Haskell is really overstating the case.

Right, most of these are common influences. I'd say Rust's main contributions are (a) the ownership system, and (b) painting a thin coat of C++ syntax over ML features to make them more palatable, and neither of these seem all that inspired by Haskell.


You can totally use macros to reimplement do notation in Rust, and that's what many crates on crates.io already do.


Do notation isn't the important part. It's the reuse you get from the Monad, Applicative, Functor, etc. traits/typeclasses.

(Of course you still want some form of nicer notation than the raw one, but that's beside the point.)


Rust obviates the need for many uses of these monads, either by not requiring purity (so no IO monad), or by enabling controlled mutability (so no State monad).

Out of the remaining monads that are in common use (Result, Option, Future), it doesn't seem particularly worthwhile to try to cram them all into the same API when you could instead try to design better specialized APIs (as has been done with Future).


The monadic interface is not the only API surface those types have in Haskell. It very rarely is. There’s no “cramming,” they either have monadic structure or they don’t, and if they do it’s very convenient to be able to treat them as monads.


Just to be pedantic, we also recently got applicative-do. So you can use a subset of do-notation for the much more prevalent Applicative Functors, too.

In practice applicative functors are even more prevalent than their subset, monads.

If you can express your algorithm using the weaker applicative combinators only, there's more opportunity for optimisation and parallelism. (And the code is easier to understand for your readers, because there are fewer interactions for them to worry about, too.)
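A minimal sketch of what that looks like with GHC's ApplicativeDo extension: since the statements below don't depend on each other's results, the block desugars to Applicative combinators and only needs an Applicative constraint.

    {-# LANGUAGE ApplicativeDo #-}

    -- Desugars to (,) <$> fa <*> fb; works for applicatives that
    -- aren't monads, and leaves room for parallel execution.
    both :: Applicative f => f a -> f b -> f (a, b)
    both fa fb = do
      a <- fa
      b <- fb
      pure (a, b)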


That sounds super neat! Haskell always has me jealous of language features, but I’ve committed to focusing on mastering C# and the CLR right now because it’s what I get paid for.


See https://www.microsoft.com/en-us/research/wp-content/uploads/... and https://simonmar.github.io/bib/papers/haxl-icfp14.pdf for more background, if you are interested. It might even contain some ideas you can use in C#.


> It might even contain some ideas you can use in C#.

I just implemented the Haxl paper in Java as a weekend project (for implementing GraphQL APIs), so it totally does ;)


Cool! Have you posted about your experience anywhere?


What about the Parser monad? Or the Validation Applicative?

I think you're being a bit short-sighted here.

I fully appreciate that it might be extremely difficult to accommodate these HKTs, etc. in a language with the core values of Rust. That's absolutely fine, because that's probably several research-level projects in and of itself. OTOH, we shouldn't discount the value of such things.

It's absolutely fine and reasonable to choose e.g. efficiency over other things, but it's not the only value.


I guess you could call shorter ways to write and_then or map code reuse, but it's at such a small scale that it seems more like terseness?

Little abbreviations might be widely used, but when people talk about code reuse I'm hoping to hear about ways to share larger chunks of code, not having shorter idioms.


I think I've worded things very poorly.

In Rust, the functions Result::and_then, Result::map, Option::and_then, Option::map, etc. are pretty much all saying the exact same thing, but they need to be reimplemented over and over again. In Haskell, the equivalents of Result and Option (along with [], QuickCheck.Gen, Data.Binary.Get, etc.) just need to implement return and (>>=), and they get a ridiculous number of useful functions for free (mapM_, foldM_, forever, ap, etc.). That's a "large chunk of code" saved, in my opinion.

https://hackage.haskell.org/package/base-4.14.0.0/docs/Contr...

And, of course, they also get some very nice sugar in the form of do notation.
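To make the "large chunk of code" point concrete, here's a toy parser of my own (a sketch, not from the thread): one Monad instance, and everything in Control.Monad works for it immediately.

    import Control.Monad (replicateM)

    newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

    instance Functor Parser where
      fmap f (Parser p) = Parser $ \s -> do
        (a, rest) <- p s
        pure (f a, rest)

    instance Applicative Parser where
      pure a = Parser $ \s -> Just (a, s)
      Parser pf <*> Parser pa = Parser $ \s -> do
        (f, s')  <- pf s
        (a, s'') <- pa s'
        pure (f a, s'')

    instance Monad Parser where
      Parser p >>= f = Parser $ \s -> do
        (a, s') <- p s
        runParser (f a) s'

    -- One primitive...
    item :: Parser Char
    item = Parser $ \s -> case s of
      (c:cs) -> Just (c, cs)
      []     -> Nothing

    -- ...and replicateM, mapM_, forever, ap, (>=>), ... come for free:
    threeChars :: Parser String
    threeChars = replicateM 3 item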


Completely agreed. In addition:

Having popular interfaces like monads, applicative functors, monoids, etc. also suggests common ways to structure any API you are working on.

If you design an API from scratch and have to implement everything again and again, you might be tempted into a pretty arbitrary breakdown of the problem space.

Whereas in Haskell, personal laziness drives you to look hard at whether you can express your API in terms of Foldable and Traversable or monads etc., so that you get lots of functions for free.

Those common interfaces also automatically suggest gaps in your API that otherwise you would have only noticed after using your API for a while (or by being a very experienced programmer).


>Rust forces you to write stuff like Result::and_then and Result::map over and over again.

The feature of Haskell that lets you avoid writing 'and_then' over and over is do-notation.


What everyone is talking about isn't "not having to write `and_then` over and over" but rather that there is a common interface to "things that are monads", i.e. `>>=`, `map` and `apply`. `do`-notation is nice syntactic sugar on top of that common interface, but it's not the actual thing itself.


I think you misunderstand. I mean that merely by implementing the Monad typeclass, you get a ton of methods (mapM_, join, etc.) for free. You can't do that in Rust right now.


I wonder if these could be written against the Carrier trait.


It is not. Do-notation is just syntactic sugar. The feature that enables reuse is the type classes and the structure of the standard library. Interestingly, you do have traits in Rust, but I think there is no monad trait yet in the std lib.


There is no monad trait because we do not have higher-kinded types, which are a requirement.

Beyond that, it's not clear that a monad trait would work, even if we did. It may, but it also may not. We'll see (or not) I guess.


Why wouldn't a Monad trait work?

You mean you couldn't express it in Rust, even with HKTs? Or you mean that it wouldn't be useful?


Both. See here for more https://twitter.com/withoutboats/status/1027702538563477505

There have been some people working on these problems, take https://varkor.github.io/blog/2018/11/10/monadic-do-notation... for example. This is why I say "not clear" and not "impossible" or something. We just don't know.


It's basically a research-level problem given the other constraints of Rust. (Which is fine. Trade-offs and all that.)


> Option and Result are Monads. Surely he misses do-notation.

Rust does not have higher kinded types, which are required to have abstractions like the Functor and Monad type classes in Haskell.

Option and Result are monadic, but they must implement their own separate methods and cannot share a common interface, which takes away a lot of expressive power.
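For reference, this is roughly what the shared interface looks like in Haskell. The variable m ranges over type constructors (things of kind * -> *, like Maybe or Either e), which is exactly the quantification Rust's type system can't express today:

    class Functor f where
      fmap :: (a -> b) -> f a -> f b

    -- Simplified from base: m is applied to type arguments,
    -- so m itself must be a type constructor.
    class Applicative m => Monad m where
      return :: a -> m a
      (>>=)  :: m a -> (a -> m b) -> m b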

There is an accepted proposal for something similar, known as GAT [1] (generic associated types). But that is blocked on other work and might never actually be implemented.

[1] https://github.com/rust-lang/rust/issues/44265


> But that is blocked on other work and might never actually be implemented.

In a theoretical sense, sure, but in a practical sense, my understanding is that GATs already exist in chalk, and so it's still a matter of putting in the work, not a question of "is this possible or not."


I didn't mean to imply that it was impossible.

You would know better, of course, but it was my impression that there is no firm commitment to stabilizing GAT, even once the compiler fully switches to Chalk, due to the potential disruptiveness and complication of the language.

(For those unaware: Chalk is a rewrite of the trait logic in rustc, based on logic programming [1] )

[1] https://github.com/rust-lang/chalk


> it was my impression that there is no firm commitment to stabilizing GAT

Certainly, but not in the sense of being uncertain about whether or not GAT are worthwhile, only in the sense of not wanting to make promises that may not be able to be kept. The Rust developers want GAT--or something analogous to GAT--and much work has been put in towards that goal (work that will have been useful even without GAT), and at this juncture there are no foreseen theoretical impediments; however, that's not to say that some unforeseen impediment won't yet arise. Such a thing happened with specialization, which has been in the works for much longer (current status: https://github.com/rust-lang/rust/pull/68970), and the lesson has been to not incite expectation until you are extremely certain the promise will be able to be kept. As a mere half-informed bystander, it seems to me that GAT is not encountering the problems that specialization did, but don't take this as any sort of promise. :)


While I personally would really want both GAT (+ full HKT) and specialization, I can empathize with some opposition to stabilizing them.

Rust is already a complicated language. Those two features would increase the complexity of the type system quite significantly.

Possibly beyond what is healthy for Rust adoption.


The neat thing about GAT is that it doesn't really increase the user-facing complexity of the language. Conceptually, users already need to understand generics, and they need to understand associated types. GAT, which allows users to use generics within associated types, doesn't feel like a new feature so much as it just feels like letting two existing features play nicely.


Gotcha; I had not heard that. Given the strong need for things like async trait methods, I would be surprised to hear that, but I also haven't been paying mega close attention.


I couldn't provide a link where this was said either, and it would have been quite a while ago. I just remember reading about concerns in that direction.

I think async trait methods could be stabilized without exposing GAT to the user.

Of course I would very, very much like to have them either way, they would make certain abstractions much easier to build.


Haskell's killer feature here is higher-kinded types; do-notation is just syntactic sugar. But (with the exception of Scala and PureScript) almost no other every-day language supports defining abstractions over higher-kinded constructs like monads.


I'm primarily a Scala developer, that also likes Haskell. And you're right.

With the caveat that some languages support an encoding of HKTs. You can encode HKTs in OCaml, Kotlin, or TypeScript for example, and there are FP libraries taking advantage of that. I believe the encoding was inspired by this paper:

https://www.cl.cam.ac.uk/~jdy22/papers/lightweight-higher-ki...

It kind of sucks, but it does let you work with a Monad type class, and it's better than not doing it at all.

I'm not sure if such an encoding is possible in Rust. I've been told that F# has a hard time with it, not sure why (possibly because of reified generics). In TypeScript and Kotlin it works because the runtime and the type systems are relaxed and you can just force downcasts if you're in trouble.


> I believe the encoding was inspired by this paper

Just to add some tangential color here: even if you don't intend to use this encoding in production, it can be a fruitful intermediate step in refactoring! I just recently ran into a situation in Java where I had some tightly-coupled concerns for which HKTs were the obvious way out. Since Java doesn't have HKTs, I applied the HKT encoding to get the hard part out of the way, and then discovered a simplification that brought me back into the realm of idiomatic Java. I would not have found the final solution had I not pathed through HKTs -- I actually thought it would be impossible to do this decoupling properly at first!

More color: the paper itself describes this approach as an application of defunctionalization. I use defunctionalization a lot, too, to write an obvious recursive solution and then mechanically transform it into an iterative one. So it's nice to know how to apply it at the type level, too ;)
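If it helps, a small sketch of that recursive-to-iterative transformation on a toy example of mine: reify the continuation as a data type, and the implicit call stack becomes an explicit one.

    data Tree = Leaf Int | Node Tree Tree

    -- Obvious direct recursion:
    sumTree :: Tree -> Int
    sumTree (Leaf n)   = n
    sumTree (Node l r) = sumTree l + sumTree r

    -- Defunctionalized: each Frame records the work left to do.
    data Frame = SumRight Tree | AddLeft Int

    sumTreeIter :: Tree -> Int
    sumTreeIter t0 = go t0 []
      where
        go (Leaf n)   stack = unwind n stack
        go (Node l r) stack = go l (SumRight r : stack)

        unwind n []                = n
        unwind n (SumRight r : st) = go r (AddLeft n : st)
        unwind n (AddLeft m  : st) = unwind (m + n) st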


> Option and Result are Monads. Surely he misses do-notation.

Well, functions and lists are also monads. I guess what the author means is that the language recognises the similarity of those disparate structures.


IIRC Rust was influenced more by OCaml than Haskell, but I could be wrong.

The article really rubbed me the wrong way with this tidbit

> (we don’t talk about Go…)

So since the author is speaking of "premier system’s language" we choose to ignore the world's premier cloud language? Interesting.


Rust was initially implemented in OCaml, even. But that said, it gained a lot more Haskell-like features later on; we ended up with type classes, not ML modules, for example.

EDIT: also, strictly speaking, Rust doesn't use Hindley-Milner, and we don't have parametric polymorphism.


Can you clarify why you don't think Rust has parametric polymorphism?

    struct Person<T> {
        age: T,
    }

This is parametric polymorphism and valid Rust.


Just because some bounds are parametric does not mean that all of them are; specialization violates parametricity, for example. While that's not in Rust proper yet, it did require making sure that dropck could handle it, for example, and the intention is to ship it.

There's also more minor things like https://github.com/rust-lang/rfcs/pull/1210#issuecomment-179...


I don't doubt you know more about Rust than I do, but this seems pedantic to me. Kind of like correcting someone for pronouncing forte as "for-TAY" instead of "fort" or telling someone "well technically you don't actually touch _anything_ because of the electrostatic force."

If you ask all the developers out there to describe parametric and ad-hoc polymorphism I think a vast majority would give the example of a type parameter (e.g., Java generics or C++ templates) for parametric polymorphism and Java interfaces or Haskell's classes for ad-hoc polymorphism. I can even quote directly from Pierce (Types and Programming Languages):

> Parametric polymorphism, the topic of this chapter, allows a single piece of code to be typed "generically," using variables in place of actual types, and then instantiated with particular types as needed. Parametric definitions are uniform: all of their instances behave the same.

I think Rust and the aforementioned languages fit this definition. Outside of a specific compiler issue, claiming otherwise seems to only confuse the issue, especially for those just casually reading and not familiar with programming language theory.


> If you ask all the developers out there to describe parametric and ad-hoc polymorphism I think a vast majority would give the example of a type parameter (e.g., Java generics or C++ templates) for parametric polymorphism and Java interfaces or Haskell's classes for ad-hoc polymorphism. I can even quote directly from Pierce (Types and Programming Languages):

In practice it is often useful to drop our demands for rigour by a bit. But any reasonable definition of parametric polymorphism _has_ to exclude C++ templates.

C++ templates are much closer to the definition of ad-hoc polymorphism.

My practical less-than-rigorous rule-of-thumb for parametric polymorphism amounts to something like: whenever Wadler's Theorems for Free paper applies. https://ttic.uchicago.edu/~dreyer/course/papers/wadler.pdf

Theorems for Free means that eg the type of the identity function (forall a . a -> a) guarantees that even if you supply it a bool, it can't magically turn into an implementation for `not`, or multiply all integers by 2.
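Two tiny examples of that (my illustrations of the Wadler result):

    -- Ignoring bottom, the only total function of this type is the
    -- identity; parametricity forbids it from inspecting x at all.
    f :: a -> a
    f x = x

    -- Any g :: [a] -> [a] can only rearrange, drop, or duplicate
    -- elements; its free theorem is  map h . g == g . map h  for all h.
    g :: [a] -> [a]
    g = reverse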


This isn't Rust specific; it's just the definition of parametric polymorphism. Yes, many programmers may give you a slightly incorrect definition, but especially in an article about Haskell, I'd expect a bit more precision.

Which doesn't mean it's terrible to get it wrong, just want to be clear about what Rust does and does not have. It is important because these kinds of definitions either hold or they don't; "sorta kinda mostly parametric" isn't the way folks tend to think about this. Which makes sense, because they're interested in proofs and formal definitions.

Yes, Pierce is great! But the issue is:

> all of their instances behave the same.

This is not true in Rust, as shown in my comment and the other replies. We have accepted APIs that break this property, and we have other language features on the way that break this property.


IIRC, Java interfaces are subtype polymorphism. Ad hoc polymorphism would be, for example, overloading.


In a sense even things like `TypeId::of` or `mem::size_of` violate parametricity.


You can do the following, which violates strict parametricity:

    fn foo<T>() -> usize {
        std::mem::size_of::<T>()
    }

    foo::<u8>(); // 1
    foo::<u16>();  // 2


You can do the same in Java and C++. This may violate a strict definition of parametricity (I've read the definition from a few different sources and am still mulling it over), but I'm not sure how this relates to parametric polymorphism.

The _behavior_ of this function is the same for all types; only the _output_ is different. That is, for all types, the function body is the same. Maybe there is a more abstract definition of parametric polymorphism you are using, but as I said above, this seems pedantic.


The internal behavior can trivially be made different just by operating on the value:

    fn foo<T>() -> usize {
        let x = std::mem::size_of::<T>();
        if x % 2 == 0 {
            panic!();
        }
        x
    }

    foo::<u8>(); // 1
    foo::<u16>(); // panic

That the body is the same isn't necessarily the issue at hand (though of course it's still a useful property in its own right); what matters is that reasoning about what this function will do requires knowing which specific types it is used with.

> this seems pedantic

The first code example is merely the simplest demonstration, in the wild I would expect lots of `size_of` in generic contexts to result in type-dependent behavior somehow.

I'm not saying this is necessarily a very bad thing, nor do I have strong opinions on the usefulness of strict parametricity (which AFAIK Haskell doesn't have either). But in discussions relevant to parametricity, it's useful to know the ways a given language can subvert it (and Rust will further encourage it to be subverted, once the specialization feature is developed).


> I'm not saying this is necessarily a very bad thing, nor do I have strong opinions on the usefulness of strict parametricity (which AFAIK Haskell doesn't have either). But in discussions relevant to parametricity, it's useful to know the ways a given language can subvert it (and Rust will further encourage it to be subverted, once the specialization feature is developed).

In practice Haskell seems to have pretty strong views on enforcing parametric polymorphism, doesn't it?

Haskell gives you ad-hoc polymorphism via typeclasses and there are also existential types and GADTs etc, if you need those. But once you declare something to abide by parametric polymorphism, you are expected to keep your end of the bargain.

(Yes, you could violate the pact via some unsafePerformIO trickery, but that's considered bad form.)


The whole point of parametric polymorphism (as opposed to eg ad-hoc polymorphism) is that just from reading the type of a function you get a guarantee about the limits of its behaviour.

If your functions routinely violate those limits as a matter of course, those guarantees are useless.

I'm all for abusing notation and terminology a bit when it makes sense in practice, but loosening our definitions too much risks making them useless, too.

In practice in Haskell, I often only need a helper function for e.g. integers, but when the implementation allows, I will give the function the most parametric-polymorphic type that fits, because that makes the reader's job easier:

Just like an invocation of `filter` is easier to read than a for-loop, because `filter` is strictly less powerful than the loop.

(In addition, the more general type serves to give the compiler a hint, so it can yell at me in case I accidentally do an operation on the integer that I didn't mean to.)
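A tiny example of the kind of generalization I mean (my illustration): even if this helper is only ever used at [Int], the general type documents, and the compiler enforces, that it can't touch the elements' values.

    pairs :: [a] -> [(a, a)]
    pairs xs = zip xs (drop 1 xs)

    -- pairs [1, 2, 3] == [(1, 2), (2, 3)]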


Yes, this was my favourite line:

> However, in 2020, the premier system’s language is surely Rust.

Yesterday I was speaking with a friend who runs a small company doing contracted embedded programming (the embedded OS is often Linux but often not) and he'd never even heard of Rust. Maybe it's a little lower level than what's meant by "systems" here but still, the article's statement is just delusional.


The largest vocal community today for a systems language (meaning down to the OS level with strong determinism guarantees, not the redefinition that makes Go a "systems language" meaning server systems) is definitely Rust's. But in practice, no one is using it, because their systems predate it, don't have compilers for it, or they simply can't use it in their problem domain (it has an insufficient history, they can't contract support with the compiler team, or whatever).

I can go to Green Hills and get a contract that guarantees their compiler will be available and supported for my target platform for the next 10+ years. Rust is still a moving target with a single implementation, and no (present) way to get the guarantees needed for the embedded domains I've been involved in. I'd like to see Rust experimented with there, but it won't be the "go to" language for quite some time.


> Rust is still a moving target with a single implementation, and no (present) way to get the guarantees needed for the embedded domains I've been involved in.

Absolutely. I've said it before and I will say it again: Rust will not be taken seriously as a systems language until it is specified. Years ago I was laughed at for this but, as I understand it, that work is now underway. I am grateful for this.

I forgot to preface my statements with: I love Rust. Or, more so, I actually love OCaml, and I love OCaml-inspired languages, and I would love to write an OS in one (save ReactOS).

That being said, the desire to label it as the "premier systems language" (and no fault of the author, many before have tried to say this) is entirely not grounded in any sort of reality.


Fun story: here's a recent talk by someone at Green Hills, talking about experiences giving Rust a try https://www.youtube.com/watch?v=G5A7rSPYpb8


Thanks for the link, I'll check it out later.


I'm pretty sure that was just a joke about Go's primitive type system.


I mean, if we set aside the "intellectual superiority" of Haskell or other FP languages -- (I've used them all) -- in practice with teams of other engineers, and in production where it counts the most, Go has outshined them all. So "primitive" is surely one way to describe it.


I fail to see what else you can call the Go type system other than "primitive": it is roughly what you would get from a language designed in the 70's, and a lot of advances have since been made.

Whether or not these advances make a difference (positive or negative) in production is a different question, but I'd rather not use "how much a language is used" as an inherent quality metric for a programming language. Otherwise, I fear we might all end up doing old-school Java, Javascript, Visual Basic and ABAP.


> "how much is a language used", as an inherent quality metric for a programming language.

Define quality. Because what I've realized with the PL-elitist crowd is this almost always boils down to "the ability to be expressive" which is fine but it certainly isn't a complete metric, and, to borrow a common refrain, we've had expressive languages in the 70's as well, we've had s-expressions for a while now. What about other metrics, such as "ability for mass amounts of programmers to program the computer to do the correct thing?" and "ability for programmer to maintain said program?" -- you know, real world quality. My point is, people's usual quality metrics are often incomplete or narrowly defined.

In either case, if Go is so primitive, then why didn't the 70s produce a language like Go? Why did the developers who created languages, operating systems, and systems software in the 70s not make Go until the 2000s? (You do know who created Go, right?)

I just don't buy this argument, especially having used those "advanced" type systems for many years, sorry.


I mostly agree with you, but the claim was that Go's type system was 70s-era primitive. And I think that claim is probably correct. The type system of Go isn't all that sophisticated or "advanced".

But all that says is that the magic of Go isn't in the type system. The couldn't-produce-it-until-the-2000s part isn't in the type system. The thing that makes Go more used in the real world than Haskell isn't the sophistication of the type system.

I don't see much to argue with in those claims.


I am not sure that my message was completely clear: my point was that Go having an objectively old/primitive type system, compared to the modern state of the art, does not preclude it from being a successful "production" language. The value the type system brings is, in my experience, rarely the most important factor in the success/failure of a software project.

My second point was that the popularity of Go (or any language) does not mean that it is inherently a good language, where "good" can mean productivity, defect rate, ... Adoption of a language depends on many things; I believe quality of the tools, availability of libraries, and good old marketing (the Google aura around Go has helped adoption) are often more important. The effort required to make an initial, crappy proof-of-concept is in my experience often decisive for the choice of language. The cost of long-term maintenance (where a good type system might bring value) is rarely considered.

For Go, it also hasn't hurt that some of the core developers were being paid by Google to work on it.

The metric of the "ability for mass amounts of programmers to program the computer" is indeed a very relevant one, and I think Go shines there. When reading Go code, I typically find it easy to understand what the goal of the code is. However, "to do the correct thing" is often less of a success: there are off-by-one errors and corner cases that have bugs, and reimplementations of the same basic logic. If you have a lot of developers available, you simply have them fix these issues, and your Go project becomes a success.

My point is that in this scenario, the language itself is of minor importance: success is determined by the fact that you have access to a large pool of relatively cheap labor of reasonable quality (Google or VC-funded startups are perfect examples). A 20 or even 50% productivity gain from having a "better" language would simply not matter for the outcome, certainly not if it takes your labor longer to get up to speed.

So, in that context, compared to the alternatives, the quality of Go is relatively high. But I do think that, if during the design of Go they had taken a couple of basic concepts from ML like sum types, polymorphism, and a decent module system, they would have ended up with a language that would make the current Go look unproductive and poor quality, even in that same context. It wouldn't make a difference to Google, but it would make the life of many a developer a little bit more productive and joyful.

And yes, I do know who created Go, but arguments from authority carry little weight.


> When reading Go code, I typically find it easy to understand what the goal of the code is.

I have the opposite experience. At a micro level you might be able to tell what a line of Go code is doing, like manipulating this variable into that state etc., but it's such a tedious language, drowning the reader in code, that it can be very hard to see the forest for the trees.

As an example of a deliberately simple language that is mostly easy to read, I would perhaps cite Erlang.

In any case, I agree wholeheartedly that Go would benefit from sum types. (Though given the weak type system otherwise, you couldn't even implement a useful `Maybe` in Go. You'd really want parametric polymorphism to make sum types shine.) A way to express immutability would also have been welcome, especially given that they pride themselves on concurrent programming.


There's a difference in mentality between people who make either argument.

The "PL-elitist" crowd looks at programming languages from a computer science perspective, while the "PL-pragmatist" crowd (for lack of a better term) looks at it from a workplace perspective. At least that's how I see it.

In other words, one group writes tooling and libraries; the other writes business applications using said tooling and libraries.

One paradigm invites complex, highly expressive code, so as to minimize the chance of unforeseen errors (even if just by having a smaller codebase) and to provide a clean and powerful interface that "does more with less".

The other invites simple, highly maintainable code, so as to minimize the training overhead for new employees and the chance that their lack of rigor might accidentally introduce or reintroduce bugs. Furthermore, a simpler language means that new teams can hop onto a pre-existing project faster, because they don't have to scrutinize each LOC to the same extent.

I might be wrong on any or all of this, but that's how I perceive it.


Actually it is worse than that; have a look at most languages designed in the 70s, like CLU and Modula-2.


Worse in what ways?


Support for generics (CLU) and exceptions (both).

There are other languages I can take out of the bag like Mesa/Cedar, Interlisp-D, Smalltalk-80, although they are already on the 80's borderline, which then also brings Ada, Standard ML and Object Pascal into the picture.


"How much a language is used" is a proxy for "how useful is the language for actually writing programs". And, really, what else do we mean by the "quality" of a programming language? If it's beautiful, but not as good for actually writing programs, then it has quality as a work of art, but less quality as a programming language.


There are other reasons languages become popular, such as platform support, marketing, what people learn in school or as a first language, and what gets you hired. Popularity does not necessarily imply any sort of superiority of a PL compared to others.


I think you are going too far here. Taken at face value, your post says that for programming languages, there is zero correlation between popularity (use) and superiority (fitness for use). That implies that everyone choosing a language for a project is either stupid or ignorant (not the same thing).

That may offer consolation to those who think that certain languages are the best, and who wonder why those "best" languages are used so little. But I don't think it's true. I don't think programmers are stupid, ignorant, or sheep. I think if a tool offers them advantages, they'll use it.

Perhaps I should qualify: significant advantages. There is a cost to switching. The switched-to language has to be enough better to pay back the cost.


This is not actually true. Do you seriously believe Go would have ANY popularity now if it hadn't been released and massively marketed by a BigCo?


It probably would've ended up in the same place as Limbo and Alef. Interesting, useful, but of limited use and exposure to the world.


"Massively marketed"? Compared to what? C#? Java?

I remember what Sun did with Java back in the day. No, I do not agree that Go has been massively marketed.

I will admit, however, that Go was created and supported by Google, and that mattered. It mattered, not in the publicity, but in the tools and libraries (and maybe even in the tutorials).


Compared to C++, Haskell, Python and the long list of other languages not directly backed by BigCos.

So your point is that because Java has been marketed more than Go, Go is not massively marketed? If you take the set of all languages and make a sorted list of how many hours and dollars were spent marketing each, I would be very, very surprised not to see Go in the top 5.


Define "market".

C++ has vendors behind it. Those vendors make actual dollars selling C++ (or at least tools that work on C++.) When Microsoft markets Dev Studio (with C++ support), is that marketing C++? How about C#?

In contrast, what does Google do to market Go? Put up a website, and put out notices of the next version? When have you ever seen an advertisement for Go? For a tool that supports Go? How about for C++, C#, and Java?


I'm being very clear: a language with a big company behind it, the likes of Google, Facebook, Microsoft, etc. I have defined it two times now. That there are some vendors who make money with a language is very different.

Advertisement is a subset of marketing. Back in the day when Go was "new", you saw daily posts about it on many programmer-focused communities. Google bankrolled the entire development of Go. Blog posts by many Googlers talking about the language. Google-sponsored Go events. The positive image of Google itself at the time of Go's initial introduction.

You're trying to weasel me towards C# and Java, when those are not two languages I even mentioned. But since you want to hear about it: yes, both of these were also heavily marketed.


But if the big company behind it doesn't do anything, what difference does it make?

Or, if the big company doesn't do anything more than is done for other languages (Rust, say), what difference does it make? Is Rust marketed in the same way Go is? Per your definitions, I would say yes, even though the Mozilla Foundation isn't a big company.

But then, is Haskell marketed in the same way? I see lots of posts on it, at least here. Lots of blogposts on it. But there's no big company, or even foundation, behind it. Is that marketing?

Why do you define a set of actions as marketing when people who work for Google do it, but not when others do it?


I think the difference is that some Googlers can write Go blogposts during their working hours?


Yes, considering BigCo had internal resistance against it for years, and only just recently started to adopt it more for its own internal uses. Comments like this show the clear ignorance of the context surrounding the creation of the language.


Ah yes, Google bankrolled the entire development for no reason. Get real. Go was specifically created so novice programmers at Google could write safer code than C without actually having to learn anything new.


You really have to stop assuming "intellectual superiority" when people talk about more advanced / less advanced stuff.

Go has a type system with limited capabilities. C has a type system with limited capabilities. Both are quite successful. If I wasn't showing "intellectual superiority" when talking about C, maybe it's an incorrect label when I talk about Go, too?


Go is just a lot of missed opportunities that could have been taken without making the language any more complicated.


In what sense has Go outshined anything?


[flagged]


Would you please stop posting supercilious dismissals of the community? You've already done this more than once, and that's a very bad sign for a new account.

(Edit: Perhaps I should explain that last bit. Actual new users never comment this way—only seasoned users, and the ones who do it are usually using new accounts because they've behaved badly with earlier ones.)

https://news.ycombinator.com/newsguidelines.html


I can try to be more direct in pointing out the obvious biases that we have in this echochamber, if you think that would spark more thoughtful debate? If you think the tone of the post was wrong, I can address that, but I don't really think that it's fair to suggest we avoid talking about this kind of thing?

My other comment that you're calling a "supercilious dismissal" also received 31 upvotes and some interesting responses, fwiw.


What people call "obvious biases" are typically what they themselves are choosing to select out of the statistical cloud of posts that get made here. The bias at work in making such a selection is one's own, but we project it onto the community, and then create a superior image of ourselves by posturing above it, which is what I mean by supercilious. This is a drama between oneself and oneself, which has nothing to do with real conversation. It's a way of changing the subject to oneself. To be fair, you're hardly the only one doing it; it's quite common, but I have to tell you that it tends to be low-quality commenters who do this.

It frequently comes with labels like "echochamber" and "groupthink", which are also ways of elevating oneself as the noble freethinker standing against the mob, or some such image. Actual noble freethinkers never behave like this, so the whole thing is incongruent—it's simply a way of flattering oneself, which (ironically, because we all feel so unique) is one of the most repetitive and tedious things that people do. That's why we shouldn't do it on a site that's trying to have interesting conversation.

Another angle: people seem to posture above the community when they feel like they need to defend themselves against it. Presumably they want to defend themselves against some claim of i-am-very-smartness that they feel is emanating from the community, so they puff themselves up and put the community down by way of compensation. In reality, though, no one is making such a claim. HN isn't making such a claim—HN isn't a person to begin with—and certainly the people running HN are making no such claim. It rather arises in the psyche of some readers, for whatever internal reasons. The internet can be crazy-making that way.


Thanks for the thoughtful response. I couldn't possibly disagree more with most of your sentiments, but I appreciate you taking the time to write them out. To discount the validity of "echochambers" and "group think" is a pretty dangerous strategy, I think. If you were to poll HN users on what programming languages are successful/worth using, as a topical example, you would (and do/have) get very different responses from, say, /r/java (which is an extreme example and also an echochamber, but kinda proves the point).

To then go on to suggest a "no true scotsman" example of nobility wrt pointing out echochambers is also a pretty dangerous sentiment. Pointing out an echochamber is an echochamber is almost so obvious it's barely worth noting - to suggest there is anything "noble" about being a part of and/or not being aware of an echochamber is silly. Messageboards are echochambers. We are here because we have largely similar interests, and the posts on this board are pretty generally related to those interests. Obviously it is an echochamber, and obviously as I am posting here I am a part of it. I'm not sure how that means it is also not worthwhile to sometimes point out/poke fun at the obvious echochamber we're both a part of.

Anyway, best of luck moving forward.


I was amused at that. It's clear this author is not Go's constituency. You cannot use types to ensure consistency in Go, you just have to write the algorithm correctly. But Go fronts the algorithm and keeps the noise around it to a minimum. They are opposed but equivalent strategies.


You still have to get the algorithms right in eg Haskell.

It's just that in Haskell you have to write fewer tests to be sure you did it right, because for many kinds of mistakes you could have made, the compiler can yell at you.

The other advantage of nicer types is that they serve as part of the documentation, precisely because the types can constrain the behaviour of implementations more than in Go. So the reader can make more valid assumptions from the type alone.

Hoogle uses those types to give you a truly extraordinary way to search the Haskell libraries by type. That approach would be almost useless in Go, where you have to fall back to searching via keywords etc. only.


Are you saying Go is the 'premier system's language'?


What does "premier cloud language" mean?


How is Go the premier cloud computing language? I would say that crown goes to Python. I almost never encounter Go when working with cloud computing.


To add to what everyone else is saying, higher-kinded types (HKTs) are what make monadic code really reusable. They let you write code that is polymorphic over the monad itself, rather than code that works for a specific monad like a list. Without HKTs, you might have a polymorphic type signature List[<T>] for a list of some variable kind of elements, or a Set[<T>], or a Maybe[<T>]. In Haskell syntax, that's List a, Set a, Maybe a.

When you write code that operates on something like a monad, without caring about what the monad is, you want code that has a type like <T1>[<T2>]. In Haskell syntax, it's m a. <T1> (or m) in this case is a type that takes another type as an argument to form a concrete type. AFAIK, Rust doesn't have type variables that take type variables (or types) as arguments like that.
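In Haskell terms, a minimal example of such monad-polymorphic code (a sketch of the general idea):

    -- m stands for a type constructor; this one definition works for
    -- Maybe, [], IO, Either e, parsers, ... anything with a Monad instance.
    pairM :: Monad m => m a -> m b -> m (a, b)
    pairM ma mb = ma >>= \a -> mb >>= \b -> return (a, b)

    -- pairM (Just 1) (Just 2) == Just (1, 2)
    -- pairM [1, 2] [3, 4]     == [(1,3), (1,4), (2,3), (2,4)]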


> Option and Result are Monads. Surely he misses do-notation.

That's part of it. Option and Result are concrete instances of Monads. He misses the Monad abstraction.


>> When I first started using Rust, I really missed monads.

> Option and Result are Monads. Surely he misses do-notation.

Surely you are joking if you are trying to say someone writing Haskell would say Rust has monads in any practical sense. Sure, all languages have/can have a monad (or two).



Haskell is not yet at a tipping point where it's losing its excitement. E.g. with RecordDotSyntax and the new Haskell language server, many longstanding pain points will be solved soon. There's especially a lot of excitement around Haskell and web development. Take a look at what we're doing with IHP: https://www.youtube.com/watch?v=UbDtS_mUMpI&feature=emb_titl... https://ihp.digitallyinduced.com/

Soon you'll even be able to deploy Haskell web apps in 15 seconds: https://twitter.com/larsparsfromage/status/13066539171876495...

We're growing between 5% and 10% weekly. Many new people are starting their journey into Haskell with IHP :-) While the community might be changing, it's definitely not dying.


Could you please specify what is growing between 5% and 10% weekly and for how long? Thanks.


Community size. Measured by: downloads, GitHub stars, people in our Gitter channel, Twitter followers, newsletter subscribers, YouTube reach.

E.g. on Sep 1. we had 870 GitHub stars and 321 Twitter Followers. Now (23 Sep) we're at 970 GitHub stars and 487 followers.

IHP was launched around 3 months ago, so we've been measuring it since then.


You can grow by 15% even if you only have 20 or 25 users :-P

I am a huge Haskell fan though. The 'problem' with Haskell is that, because of its roots in academic research, the language will not chase success at any price (read: hacks), and it tends to eventually find solutions to its problems even if that means not being popular for a long time. I like that. Watch out for stuff like linear types[1] coming to GHC soon.

[1] https://www.tweag.io/blog/2017-03-13-linear-types/


> The 'problem' with Haskell is that, because of its roots in academic research, the language will not chase success at any price (read: hacks), and it tends to eventually find solutions to its problems even if that means not being popular for a long time.

That's the PR line, but it only describes about 5% of the problem with Haskell.

But Haskell isn't relatively unpopular because it was late to add things like linear types. It's relatively unpopular because it doesn't care much about user experience.

Consider: it's getting linear types before an efficient string type in `base`. The first goal of this language is not good engineering practices and getting things right. The first goal of this language is "cool type stuff", with good engineering maybe goal 10 or something (is O(n^2) `nub` ever going to be deprecated I wonder?). And that's fine, we need a language for cool type stuff, let's just be honest about it.

Context: I've been Haskelling for 5 years, I've written a fair amount of FOSS in it, and it's my favorite language. I just want to help spread an honest impression of it.


maybe OP is referring to memory allocation during compilation?


Compared to many modern JavaScript apps, Haskell's memory usage is actually pretty good :)


What an excellent comparison :)


To me, Haskell has gotten great at "getting things done quickly". With Stack (and Stackage), initiatives like "Simple Haskell", and the upcoming RecordDotSyntax, it has truly become a weapon of pragmatic choice for knocking out working software as quickly as possible. And combining it with Elm (and code generation of Api.elm from Servant types) makes a perfect match for SPAs.


I just looked through the Simple Haskell web site. I had not seen that before. I liked the Graham Hutton quote about having a secret plan to just use Haskell 98. For years, I enjoyed using Hugs via the iOS app Raskell. I still use Raskell, but sadly it is no longer stable after an iOS update a year ago. Hugs, being an interpreter, is so fast for happy hacking activities.


Interesting article. I think it's worth explicitly calling out that rather than proclaiming the death of Haskell, this article is talking about the children of Haskell and what they are capable of doing from a clean slate and with a clear goal.

IMO Haskell remains as the go-to language for teaching functional programming and for experimenting with PL concepts. Long may it remain.


I also read it as there simply being some domains where Haskell is not well suited.

I like using Haskell for writing compilers, but I would probably not like to use it for numerics code. While I could replace Haskell with Idris, I'm still a bit lukewarm on the cost-benefit of heavy use of dependent types for many programs (although Idris has other nice advantages unrelated to the type system).


Julia is a child of CLOS and Dylan, nothing to do with Haskell.

https://en.wikipedia.org/wiki/Common_Lisp_Object_System

https://opendylan.org/documentation/


I'd claim that Scala is also heavily inspired by Haskell: it's probably closer to ML, but so is Rust (which is on the list). Heavy use of 'flatMap' (i.e. monadic bind) is clearly Haskell-inspired, and this is more obvious in popular libraries like scalaz and cats. The 'for comprehension' language feature is clearly inspired by 'do notation' (IIRC Python's list comprehensions are inspired by Haskell's too).

Elm is a more special-purpose child, but seems to have had an impact on JS practices, as a 'gateway' to functional programming. PureScript is also interesting but hasn't had as big an effect.

I agree with the article that Idris is really nice; although I'd lump Agda in with it as a dependently-typed child of Haskell, and contrast them both against Coq (rather than lumping Agda and Coq together).

I've also played around with Curry and found it quite interesting (functional logic programming), although it's essentially a research language.


I write real-life programs in Agda. Some parts of my server and my tools are in Agda. One time I was in the #haskell freenode IRC channel and people started acting as if I'm some crazy person. Agda has seamless integration with Haskell. There is almost no friction writing 20% of your program in Haskell (unsafe) and 80% in Agda (safer).

I personally believe Agda is an excellent programming language. Its stdlib is good, its concepts and constructs are very well thought out, and it's very safe and works surprisingly well. I've been programming in Agda for years and have almost never had an issue implementing anything, even though it's not Turing complete. Using sized types you can prove most sane things to terminate anyway. And if you really need infinite loops (say, you're implementing an interpreter of a Turing-complete language) then you have many options: (1) disable termination checking for a single function, (2) use Haskell to implement a single function, (3) use Rust, C, C++, etc. to implement a function and then link it to Agda via the GHC FFI.

Maybe I'm just being near-sighted, but I'm really surprised more people don't use Agda or Idris in production. The number of times the type system has saved my ass is uncountable. You can literally write unit tests in your type system; it fekkin runs your custom unit tests while type checking. How is this not revolutionary? I don't understand.


Wow; I was impressed when I saw someone run an Agda program (rather than declaring success once it compiled).

My main gripe with dependently-typed languages is the amount of wheel-reinventing that's involved; e.g. I might have crafted a "ListOfIncreasing comparisonFunction" type, which is perfectly suited to the problem I need to solve, but then I find myself implementing a map function, a filter function, an append function, a length function, etc. Then I find myself needing a "length of appending commutes" lemma, and an "empty is left identity of append" lemma, and an "empty is right identity of append" lemma, and so on.

It's really nice when it works, as a nice self-contained little module; but any more can get exhausting. I haven't tried integrating Agda with Haskell, but that sounds like a nice tradeoff: do all of the "boilerplate" in Haskell (string plumbing, etc.) to get a "Maybe InputsForMyAgda" value, pass it into whatever tricky algorithm we've managed to implement thanks to Agda keeping us in check, then send the result back into Haskell for throwing into whatever external monstrosity we're interfacing with.


I would share my github projects, but unfortunately I don't want this HN persona to be associated with my real name (my github name is my name_lastname, unfortunately). Look at some of Ulf Norell's (Agda's main author) projects, though, to get a taste, although I think his work is more on the abstract side.

I don't know why you had problems with compiling... All you have to do is run `agda --compile src/Main.agda` and it just works...

> My main gripe with dependently-typed languages is the amount of wheel-reinventing that's involved; e.g. I might have crafted a "ListOfIncreasing comparisonFunction" type, which is perfectly suited to the problem I need to solve, but then I find myself implementing a map function, a filter function, an append function, a length function, etc. Then I find myself needing a "length of appending commutes" lemma, and an "empty is left identity of append" lemma, and an "empty is right identity of append" lemma, and so on.

This is true to some extent, but it has nothing to do with Agda. If you're working in an extremely strict type system like Agda's, where all types need to be provably constructible, you cannot just start coding. You need to have a good idea of what your program will look like. E.g. if you're writing a parser (as I often do, because writing parsers in Agda is such fun), you need to know some of the mathematical structure behind parsers.

Take a look at this page: https://agda.readthedocs.io/en/v2.6.1.1/language/sized-types...

Here they build the Kleene star, which expresses infinite computation. However, you need to construct it in a (coinductive) way such that it doesn't loop forever. You cannot just bang out code like this: on that page they base their functions on a paper, and you need to know the math behind it to have an idea of what will work. But you absolutely don't need to be a mathematician either. And if you have no clue where to start, going through multiple iterations of your program usually results in success. E.g. sometimes I start coding Agda, then realize my types aren't good enough, restart, rinse and repeat. In my experience Haskell is much better for programs like this: once you write your program in Haskell and have an idea of where things are going, start formalizing some functions in Agda, then try to rewrite everything in Agda. Writing a program in Agda doesn't mean it's correct, of course; your business logic can still be wrong. The power of Agda that comes from dependent typing is that you can encode tests and proofs of your invariants into the type system. This means if you really want X to always be correct, you can prove X, or if that's not easy, you can write a whole bunch of unit tests that will be checked at typecheck time.

Regarding FFI, it's actually very easy, read here: https://agda.readthedocs.io/en/v2.6.1.1/language/foreign-fun...

You can add arbitrary Haskell code to your Agda programs. If you compile with `agda --no-main`, Agda doesn't call GHC, so you can call GHC manually to link your program. This means you can do anything you could normally do with GHC to the compiled Haskell code in MAlonzo/.


> I don't know why you had problem with compiling... All you have to do is run `agda --compile src/Main.agda` and it just works...

My point was that most Agda code I've written, or seen others write, is used to prove theorems; the result is rarely run. Even when implementing "real" algorithmic programs, it's usually just to prove that they satisfy some property, rather than actually using them to do anything ;)

> This is true to some extent, but this has nothing to do with Agda.

It's not inherent to Agda, but it's true in the sense that I could do "stringly typed programming" in Agda (i.e. using big, imprecise types which require a bunch of redundant checking that the compiler won't help me with), but there wouldn't be much point (I might as well do that in, say, Python and benefit from the vast amount of libraries and tooling).

> you can write a whole bunch of unittests that'll be checked at typecheck time.

I do this whenever I'm writing Agda or Idris, and it's so satisfying. The 'test framework' is just the type system, e.g.

    test_parse_empty : Eq (parse "[]") Nil
    test_parse_empty = refl
When I'm testing in most languages I tend to do property checking, e.g. QuickCheck/Hypothesis/etc. I haven't been brave enough to do that during type checking yet ;)


I even do

    _ : (f x y) == expected; _ = refl
It makes tests shorter.

I also put my tests in the same file as the definitions, by adding a private module:

    module X

    record X : ...

    private
      module test where
        _ : (f x y) == expected; _ = refl
This way it's not possible to export record X without its tests passing.


I believe their gripe with compiling was not about invoking or executing the compiler. I believe the issue was with writing code that would (successfully) compile.


Did you try to incorporate Cubical Agda into some real-world development? For me it's cool to let the internal machinery handle isomorphisms between different types, but the Cubical Agda library is still much smaller than the standard Agda library ;/


Not yet; it's been one of the things I've wanted to do for about a year, since Cubical Agda was released. I dabbled with it, but I don't know enough type theory/topology to digest cubical type theory well. I tried a few times, but I didn't understand everything...

The benefits look really promising, e.g. proving forall x -> f x == g x -> f == g (function extensionality). I really want to use this theorem in some of my Agda projects, but I've never needed it desperately enough to drop everything and do this. My usual workaround: implement a slowButEasyToUnderstand function, do a whole bunch of proofs with slowButEasyToUnderstand, implement a fastButComplicated function, then prove forall x -> slowButEasyToUnderstand x == fastButComplicated x.

One immediate application I can think of with cubical: you can implement mergeSort : List X -> List X and pass it around everywhere. You don't need a Sorter record or whatever, since when you implement quickSort you'll be able to prove quickSort == mergeSort, so everything will typecheck. I don't know which one will end up getting compiled, though.
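
A rough Haskell/QuickCheck analogue of that slow-vs-fast workflow, as a run-time property check rather than a proof (names are illustrative):

    import Data.List (sort)
    import Test.QuickCheck (quickCheck)

    -- the obviously-correct reference implementation (insertion sort)
    slowSort :: [Int] -> [Int]
    slowSort = foldr insert []
      where
        insert x [] = [x]
        insert x (y:ys)
          | x <= y    = x : y : ys
          | otherwise = y : insert x ys

    -- the fast version we actually want to use
    fastSort :: [Int] -> [Int]
    fastSort = sort

    -- the property standing in for 'forall x, slow x == fast x'
    main :: IO ()
    main = quickCheck (\xs -> slowSort xs == fastSort xs)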


More so by OCaml, according to Odersky. I would argue that the unhealthy obsession with bringing all of Haskell's features to Scala actually hurt the language, making it seem way more complicated than it actually is.


As I said elsewhere, the main contribution of Haskell is the science of programming languages. Without Haskell, we would be stuck with "practical" or "pragmatic" languages like C or Java that do not care much about soundness. Haskell is the science to Rust's engineering, IMO.


As if Ada, Standard ML, Modula-3 weren't there first.

When I learned those, Haskell was still Miranda.


SML inspired many languages: Elm, F#, F*, OCaml, Python, Rust, Scala, modern Java (which looks more and more like an OCaml on the JVM), Julia.

I really don't know why people think Haskell is the pinnacle of everything in PL design.


I think part of it is that there aren't many functional programming languages with a standard specification, multiple implementations, lots of users, an active research community, and where results of research feed into the implementations and the standard.

Haskell ticks all these boxes.

SML has most of those things, except the last: it was effectively frozen in 2001. This is discussed in section 9.3 of the HOPL IV History of Standard ML https://dl.acm.org/doi/10.1145/3386336

OCaml lacks a standard and multiple implementations, and seems to me to be doing better than SML in the other areas.


Isn't GHC the only full implementation of Haskell 2010? I'm not sure how much multiple implementations matter when there is only one full implementation of the last specification, which itself lags 10 years behind the only massively used implementation.


Haskell certainly is not the pinnacle of language design.

IMHO it is the first successfully implemented research language, though. I think that Haskell is the first language that, at the same time, had substantial commercial and research applications[1]. As such, it showed that the CS of PL is indeed of practical relevance.

Rust, on the other hand, mostly applies this scientific research. And successfully so. Hence I call it the "engineering" part.

[1] To a certain degree this is also true for Lisp and SML, of course. But I think SML fell out of fashion quite quickly, both in academia and industry. Lisp is, well, Lisp. People who start using it seldom search for another language afterwards.


Having learned programming in those days, it feels really weird that it is now fashionable to assert that Haskell === FP.


I can explain it this way:

If you were a Java/C/C# programmer in the 2000s, you missed out on:

  * lambda abstractions
  * Generics (prior to 2004)
  * Higher-kinded Generics
  * Primitives in Generic collections
  * no nulls
  * do-notation
  * ML type system
  * global, decidable type inference
  * Pattern matching
  * Functions as first-class objects
  * Currying
  * And most importantly *side-effect-free* functions
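
For a taste, a tiny Haskell sketch of two of those bullets (no nulls, pattern matching): lookup returns a Maybe instead of null, and the caller is forced to handle both cases.

    greeting :: [(String, String)] -> String -> String
    greeting names key =
      case lookup key names of        -- Maybe, never null
        Nothing   -> "hello, stranger"
        Just name -> "hello, " ++ name
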
As someone who picked up Haskell around 2010 as my first functional language, I liked what I saw, and I assumed that other functional languages were probably similar. My inner evangelist was all "You should use FP instead of Java, because lambdas are ergonomic", "You should use FP because nulls are bad", "You should use FP because functions are easier to test than methods".

Over the last decade I've realised that those statements aren't really truthful. Many functional languages allow null. Many functional languages won't enforce side-effect-free functions for you. So it was never really about FP, it was about Haskell. Anyway, that's how they got conflated in my mind.


SML has many of those features. Sure, Haskell popularized them, but it doesn't mean that computer scientists began studying these concepts only after Haskell became a thing


> SML has many of those features

So does Java in 2020, but I was writing about:

>> it feels really weird that now is fashionable to assert that Haskell === FP

When Googling "functional programming", you get back pages of definitions emphasising "pure functions", "no side-effects", "mathematical functions", "avoiding shared state", "referential transparency". So what Haskell provides is what the blogosphere has been labeling as "functional programming" for a decade. Those definitions could be wrong, but they're the definitions that are floating around out there.
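
Haskell is the rare language where those phrases are enforced by the type system rather than by convention; a minimal sketch:

    -- pure: same input, same output; the type 'Int -> Int' forbids effects
    double :: Int -> Int
    double x = 2 * x

    -- effects live in IO and are visible in the type; 'double' can never
    -- sneak in a print or a global write
    greet :: IO ()
    greet = putStrLn "hello"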


> Modern Java(which looks more and more like an OCaml on the JVM)

Could you please elaborate on that? Is it related to the new features, like lambdas, pattern matching with switch and so on?


Algebraic data types (sealed classes/sum types + records/product types), pattern matching, local type inference. Moreover, Substrate VM (part of Graal) will help you achieve faster startup times and lower runtime memory overhead, since programs will be AOT-compiled to a standalone executable, something OCaml does right now.
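
In Haskell terms, the combination being ported looks roughly like this sketch (the Java equivalents are sealed interfaces plus records, consumed with switch patterns):

    -- a sum type with product-type constructors
    data Shape
      = Circle Double        -- radius
      | Rect Double Double   -- width, height

    -- pattern matching, checked for exhaustiveness by the compiler
    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h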


While I know you are correct chronologically, it is 2020 now and I think "network effects" have come to dominate this perception (that Haskell is the language that gives the basis for breaking away from "practical" languages). I think Haskell's presence in the group consciousness of CS-minded developers is, at this point, due to its comparatively large web presence and the semi-PR done on its behalf. Simon Peyton Jones almost single-handedly pumps Haskell (and I love him for it) more than anyone pushes Ada, SML, and Modula put together.


> Haskell is the science to Rust's engineering, IMO.

Wut? Rust doesn't have the purity or the ability to isolate side effects that Haskell has. Rust is popular because its ownership model got into the mainstream. We still don't know what its costs are.

The only similarity between them is that both are statically typed, expression-based languages (expression-based like Scala, Ruby, Wolfram, Lisp) and both moved the task of proving things from analysis tools to the programmer. But that doesn't mean that Haskell "is the science" behind anything. For what it's worth, Rust is closer to the ML (OCaml) family than to Haskell, focusing on performance rather than purity. Though even that is debatable, since it doesn't feature a GC and comes with zero-cost abstractions.

Rust has its very own merits. I'm eager to see how it will evolve and what hidden costs will be uncovered.


For me the biggest merit Rust brings to the table is that affine types got out of the research lab, and now everyone is looking at ways to balance GC with affine types in most mainstream languages.


> now everyone is looking at ways to balance GC with affine types in most mainstream languages.

Sounds interesting, do you have some links on this topic?



Niko Matsakis wrote his thesis in Scala, IIRC. So Scala might have a better claim to being "the science behind Rust".


rustc was originally written in OCaml. Rust just has lots of influences :)


What is genuinely new in Haskell that didn’t already exist in academic languages in the eighties?


Type classes and monadic IO (and having a type system able to express monads as a thing).


Didn't stuff in the 80s have the ability to express monads, but nobody had yet realized that they were useful?


You can express particular monads, but to have a Monad class/trait for all monads you need first class type constructors (also known as higher-kinded types), and that is not common even today.
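
For instance, a minimal Haskell sketch: 'm' below is a variable ranging over type constructors, so one definition written against the Monad interface is reusable at any monad.

    twice :: Monad m => m a -> m (a, a)
    twice mx = do
      x <- mx
      y <- mx
      pure (x, y)

    -- twice (Just 1) == Just (1,1)
    -- twice [1,2]    == [(1,1),(1,2),(2,1),(2,2)]
    -- twice getLine  reads two lines in IO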


SML has always let you abstract over type constructors using modules, and it was always routine to do so. You can express monads abstractly in this way, and some people have (the link is OCaml, but the SML is the same modulo syntax):

http://batteries.forge.ocamlcore.org/doc.preview:batteries-b...

But the sort of code you would write in Haskell gets very noisy without typeclasses, and it's just not an obviously good way to program in a language that's intended to be impure and strict.


No write up on Unison? https://www.unisonweb.org/

For me, Haskell is on the list of languages (right next to Smalltalk and Forth) that I would like to learn in the future just to experience problem solving from that vantage point, but it's taken me so long to work through SICP that I wonder if I have the time to meaningfully experience all these languages.


Just as an aside:

I have fond memories of first working through SICP in my teens. But having recently revisited the book, I realised it's mostly nostalgia.

There are better books about programming than SICP nowadays.


It still has some mindblowing insights to share, but the length is a little intimidating. What books would you recommend instead?


Oh, I didn't mind the length too much. You can benefit from each individual chapter.

> What books would you recommend instead?

'Think Python' is a good introduction to programming in general. https://greenteapress.com/wp/think-python-2e/

'How To Design Programs' is another good introduction. https://htdp.org/2020-8-1/Book/part_preface.html

For functional programming in general, Okasaki's 'Purely Functional Datastructures' is a real treat.

Paul Graham's 'On Lisp' is worth reading. It's a bit on the pragmatic side. http://www.paulgraham.com/onlisp.html

Above all of them, the best book in general is 'The Pleasures of Counting'.


Unison is truly doing something different and new. Totally worth checking out.


I am a fan of Haskell; the biggest problem for me was installing stack/cabal. I eventually got there, but I recall it was a very painful and frustrating process. With that in mind, I can't advocate for it in good conscience. Managing GHC versions and creating/generating projects should be made easy.

I think Rust got this right: rustup and cargo are great tools that are easy to install and work with.


I'm starting a (completely unofficial) project to understand the difficulties involved in getting started with Haskell. If you have the time and inclination I'd be much obliged if you would file a new issue explaining in more detail what points of pain and frustration you encountered: https://github.com/tomjaguarpaw/tilapia/issues/new


IMO the suggestion for anyone new to Haskell should be "just use Stack". It's easy to install and works out of the box.

Cabal has fixed the problems from a few years ago, and Nix is a nice ecosystem to plug into, but these can get in the way at the beginning, at a point where the goal is to "create my first project, write my first program, build it, test it, run it".


I hear you. I have made playing with Haskell much more enjoyable by doing two things: relying on stack, and having a fast multi-core VPS so that the long Haskell builds go quickly whenever I update to a new stack resolver version. I don't do much Haskell on my laptops anymore.


There was a fantastic talk at JuliaCon 2020 about applied category theory in Julia: https://youtu.be/7zr2qnud4XM


Haskell done right (TM) is objectively slower by every measure, both in terms of implementation and execution. You end up trading design-related maintenance costs for implementation costs and scaling-related maintenance costs, which I'd argue isn't really feasible in a pure Haskell approach. I base this on personal experience: I witnessed a Haskell system serve something like 10 requests per second when a comparable implementation in Scala, Rust, or Go could easily have done 10,000.


How do you know this Haskell system was "Haskell done right"? I'm not trying to be snarky; this is a genuine question. I have witnessed enough disasters (performance or otherwise) in multiple languages to know not to draw conclusions from just one or two failures. Of course, consistent failures by multiple teams do point to a problem with the technology itself.

(Then again, maybe you do have multiple examples and/or have strong evidence this system was indeed "Haskell done right", which is why I'm asking).


Well, it's hard to tell, not being a Haskell advocate myself, but the people building the system were Haskell experts with years and years of it being their only language, and they were extremely particular about the implementation. That's part of how they justified the 10 requests per second: it was "done right". My understanding is that Haskell is designed in such a way that certain things are impossible to do any way but the "right way", so by merit of the fact that they were using Haskell to solve those particular problems, it followed. But mostly it was clout, plus the fact that the implementation was so thorough and the people building it were PhD-level experts in Haskell/category theory.


I think what you witnessed is that being an expert in programming language theory or category theory does not make you a good software engineer.

Haskell is surprisingly well suited for back-end services (e.g REST). Warp (an HTTP server), Servant (a way to define your backend API), and Mio [1] (the Haskell network IO implementation) are amazingly performant and scalable.
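
For a sense of scale, a minimal Warp "hello world" (assuming the wai, warp, and http-types packages) is just a few lines:

    {-# LANGUAGE OverloadedStrings #-}
    import Network.HTTP.Types (status200)
    import Network.Wai (Application, responseLBS)
    import Network.Wai.Handler.Warp (run)

    -- respond "hello" to every request
    app :: Application
    app _request respond =
      respond (responseLBS status200 [("Content-Type", "text/plain")] "hello")

    main :: IO ()
    main = run 8080 app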

Of course, that does not prevent you from messing up at the business-logic level. I suspect they lacked awareness of the performance implications of their choices; that is not something Haskell will save you from. On the contrary, parts of the Haskell community tend toward over-abstraction, and I believe this has a negative effect on people writing production software.

[1] https://dl.acm.org/doi/abs/10.1145/2503778.2503790


I've made this comment before: while Haskell has succeeded in terms of inspiring other programming languages, as a language for use itself it's a non-entity. Here's a list of all the popular open-source Haskell software:

- XMonad

- Pandoc

- Shellcheck

Everything else is just tools for using Haskell (Cabal etc.).

Seriously, compare:

- Haskell https://github.com/search?l=&o=desc&q=stars%3A%3E1000+langua...

- Rust https://github.com/search?l=&o=desc&q=stars%3A%3E1000+langua...

- Dart https://github.com/search?l=&o=desc&q=stars%3A%3E1000+langua...

Haskell has been around for a long time. It's not just "avoiding success at all costs"; it's a nothing. It inspired good features like Option and Maybe in other languages, but it itself is on the same scale as Dart.


I personally think that the most important impact functional languages make isn't through their direct use (which can be very powerful), but through their influence. Bringing functional features into more mainstream languages lets everyone take advantage of powerful features/patterns.

The first one that comes to mind is pattern matching. All the ML derivatives have had it for years, and it's an extremely powerful concept. Unfortunately, most people (myself included) had never heard of it until it hit more mainstream languages like Rust and C#. Streams and the wider use of anonymous functions are other great examples of functional concepts making imperative languages much more expressive and powerful.

I don't like writing in pure functional languages very much, but I do enjoy a lot of the power they have, and how these concepts have become more accessible to traditional imperative languages.


Erlang's pattern matching makes using the language a pleasure; I find Rust's implementation less flexible.


OP here.

I just have one thing to say.

In the real world (i.e., unlike in a tree), children have multiple parents.


The author seems to like Julia for numerics -- have the language and community improved since Dan Luu's damning review six years ago? https://danluu.com/julialang/

I use Python / numpy heavily, and the giant hole numpy opens in my type annotations causes me grief daily. If there's something better out there, I want to try it, but it has to work. It has to work not just in the "I'm putting together a journal article and I need it to run" sense, but in the "I'm building a system that crunches some heavy engineering numbers all the time, and customers expect consistent results" sense. Can Julia be trusted?


That was just around two years after the language became public and four years before it became stable, so I wouldn't really call the language "mature" back then (I didn't use the language at the time, so I can't offer first-hand insight, even though I picked it up before 1.0). From what I can see, the core developers seem to take bug reports seriously, even implementing sophisticated ways to accurately find the source of bugs [1], and they run tests on the entire ecosystem before each new tagged release to catch bugs early. There are many large scientific projects using Julia [2], and there are industry-leading packages in the area, like DiffEq.jl.

The "time to first plot" (a consequence of the aggressive JIT that compiles large programs with heavy optimizations, as Julia doesn't have an interpreter outside of the debugger) still exists, but it is also improving at a fast rate.

[1] https://julialang.org/blog/2020/05/rr/

[2] https://juliacomputing.com/case-studies/

[3] https://github.com/JuliaPlots/Plots.jl/issues/2838#issuecomm...


Even back then, Dan's blogpost was misleading at best.


The most recent relevant discussion about Julia I found is:

JuliaCon2020: Julia Is Production Ready - https://bkamins.github.io/julialang/2020/08/07/production-re...

Discussion: https://news.ycombinator.com/item?id=24082281

There are comments suggesting that Julia is plenty capable, with libraries covering equivalent (or more) features provided by Python libraries like NumPy. YMMV.


The author states:

>"The type system in Rust is obviously very influenced by the type system of Haskell, but they also implemented “ownership” which allows for the killer feature garbage-collection free automatic memory management."

I always assumed that garbage collection was simply a convenience offered by a particular language.

I was also under the impression that memory leaks were the class of problems addressed by garbage collection, and that memory safety was a class of problems concerned with things like buffer overflows and corruption, addressed by runtime checks.

However, after reading, I feel like I have failed to understand the role of garbage collection in enforcing memory safety. Are memory management and memory safety not so cleanly divided?

Could someone explain or point me to a good resource for understanding how garbage collection is used to enforce memory safety?


The common memory-related bugs are:

- use after free

- memory leaks (failing to free)

- concurrent and uncontrolled access from multiple threads (race conditions)

Garbage collection helps with the first two. The system will not free memory until it is certain (by whatever method is being used) that the memory is no longer in use. But this comes with a run-time cost. Rust's innovation here is a more reliable way of tracking memory usage statically, and removes some of the run-time cost. Paired with its ownership/borrow mechanisms it also works to solve the third issue, which GC doesn't deal with at all.


Garbage collection is also limited: you can still effectively create "memory leaks" if you hold a reference to something you're no longer using.


You can think of it as kind of the inverse of leaks; if fixing a leak is "hey, this isn't used anymore, let's clean it up," fixing memory safety is "hey, this is still used, don't free it yet!"

Does that make sense?


Yes both of these responses were helpful. Cheers.


Elm should have made the list. It is a DSL for creating web UIs. Nothing comes close to Haskell's being a DSL for creating DSLs.


Elm, but not PureScript?!? :)

I use PureScript every day, and I love it. It's nearly identical to Haskell, with a few small changes that can be described in about a page.


Yeah, forgot about Purescript. Always thought of it as just basically Haskell :)


As someone too dumb to learn Haskell or Category theory, it blows my mind that someone who says he is four years from being a teenager wrote this.


Working with Haskell requires no Category Theory.

It's a fun language, and if you play with it and its ecosystem, it might give you ideas to import in your own environment.

Pick up a good book and learn it, because it's fun. And no, you're not too dumb to learn it. Java and its ecosystem can be more complicated, IMO; you don't realize it because you've put years into learning it.


Agreed, it requires no category theory, but I'm an experienced programmer in a few languages and I've found it very hard to learn. I can get to a basic level, where I write rather verbose programs because of all the recursion and immutability, and I sense that I need to make better use of the core abstractions the language is known for (those whose names point to concepts in abstract algebra and category theory), but I've totally failed to do that the two times I've spent a few weeks trying to learn. A toy sketch of the kind of rewrite I mean follows.
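
    -- the verbose version I tend to write
    sumSquares :: [Int] -> Int
    sumSquares []     = 0
    sumSquares (x:xs) = x * x + sumSquares xs

    -- the version the core abstractions give you
    sumSquares' :: [Int] -> Int
    sumSquares' = sum . map (^ 2)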

Advice gratefully received!


Perl 6 was also inspired in part by Haskell.

An early version of Perl 6 was written in Haskell, which led to further Haskell influence on the language.


Please note that Perl 6 has been renamed to Raku (https://raku.org using the #rakulang tag on social media).


All these languages existed 5 years ago... I’m not sold on the time travel premise.


Rust, Idris, Julia - pretty arbitrary imo.



