Haskell People (argumatronic.com)
90 points by allenleein on Sept 30, 2017 | 97 comments



Nice post.

A monoid is an interesting and underrated "pattern" for objects where you can have a "nothing" and an operator that smashes two values together. The operator may be 'lossy' as in 3 + 4 = 7, or it might preserve the originals to some extent as in "3" + "4" = "34".

I realized the power of this when working with a new library: while thinking "how do I concatenate these?", all I had to do was check that the data was a Monoid and use mappend. There is something very satisfying about that.

Plus if you build some utilities on monoids, they'll work with lots of distinct objects, from parsers to numbers to lists to all sorts of other things.
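
For instance, a utility written once against the Monoid interface works across all of these. A minimal sketch (combineAll is a made-up name; base already provides this as mconcat):

    import Data.Monoid (Sum (..))

    -- a generic utility: works for any Monoid (it's just mconcat)
    combineAll :: Monoid a => [a] -> a
    combineAll = foldr mappend mempty

    s = combineAll ["foo", "bar"]            -- "foobar"
    l = combineAll [[1], [2, 3]]             -- [1,2,3]
    n = getSum (combineAll [Sum 1, Sum 2])   -- 3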

What Haskell lacks in the community is people blogging about how to get more mundane things done like setting up a CRUD website, API, or desktop app. I have a site in my profile that attempts to fill that gap.


Can't help but quote: "I see poorly implemented monads everywhere. They don't even know they're monads."

Indeed, being acquainted with Haskell makes you write and read code in other languages differently: you see different, more fundamental patterns than "visitor" or "factory", and you can exploit their properties.


Who said that?


Can't remember; I saw it as a meme, with the boy from The Sixth Sense as the template.


>> algebraic structure I talk about the most passionately is the monoid

Can you ELI5 monoids? What problem do they solve that isn't easy in other languages without them?

edit: thx everyone.


A monoid is something that allows you to take two of the same kind of thing and smash them together to be one of that same kind of thing. It also requires you to have an "empty" thing that can be smashed together with any other thing without changing the result.

In code:

    smash :: Monoid a => a -> a -> a  -- combine two values (Haskell calls this mappend, or (<>))
    empty :: Monoid a => a            -- the identity value (Haskell calls this mempty)
A monoid is a property of things, not languages, so the idea of "a language without a monoid" doesn't really make sense.


So it sounds like a monoid is:

a function of arity 2 (possibly expressed as an infix operator),

which is closed over the set of values that are its arguments/parameters,

over a domain (set of input values) that contains a value which turns the function into an identity function when supplied as one argument.

E.g. -

Addition over real numbers (or integers...) becomes an identity function when one arg is 0

Multiplication over real numbers (or integers...) becomes an identity function when one arg is 1

Something along that line?

I guess that makes it useful for a fold/reduce type function where the final result is the same type as the input stream, and the initial value can be identified as a known default.


Exactly right, but for omitting associativity (which, as noted elsewhere, mightybyte skipped).


Yep, it sounds like you pretty much have it. In addition to the two examples you mention, here are some other examples of monoids:

Lists under concatenation

Booleans under AND

Booleans under OR

Numbers under max

Numbers under min
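
In Haskell these show up as newtype wrappers from base. A quick sketch (note that max and min only form a Semigroup unless the element type is Bounded):

    import Data.Monoid (All (..), Any (..))
    import Data.Semigroup (Max (..), Min (..))

    xs       = [1, 2] <> [3]                  -- [1,2,3]
    andBoth  = getAll (All True <> All False) -- False
    orEither = getAny (Any True <> Any False) -- True
    biggest  = getMax (Max 3 <> Max 7)        -- 7
    smallest = getMin (Min 3 <> Min 7)        -- 3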


Ah, so this is a monoid:

    instance Monoid Int where
        smash 25 x = x
        smash x 25 = x
        smash x y = 2*x^2 - 3*x*y
        empty = 25
Thanks for explaining!


There are also the monoid "laws" (not checked in Haskell):

    empty `smash` x = x -- left identity
    x `smash` empty = x -- right identity
    (x `smash` y) `smash` z = x `smash` (y `smash` z) -- associativity
Your definition doesn't fit any of these. (Edit: you've rewritten to meet the first two; but it is not associative.)
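
The compiler doesn't check these laws, but you can property-test them. A sketch using the QuickCheck library (the property names are mine), here for strings under concatenation:

    import Test.QuickCheck (quickCheck)

    prop_assoc :: String -> String -> String -> Bool
    prop_assoc x y z = (x ++ y) ++ z == x ++ (y ++ z)

    prop_identity :: String -> Bool
    prop_identity x = ("" ++ x) == x && (x ++ "") == x

    main :: IO ()
    main = do
      quickCheck prop_assoc     -- +++ OK, passed 100 tests.
      quickCheck prop_identity  -- +++ OK, passed 100 tests.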


mightybyte didn't state the associativity law in his original definition. As for the left and right identity laws, he did state them, and I did intend my instance to satisfy them (otherwise I wouldn't have special-cased 25, of course), but I messed up. I've fixed my post since.


Just for reference, the laws are stated in full in Haskell's docs: https://hackage.haskell.org/package/base-4.10.0.0/docs/Data-....


I was deliberately “acting stupid” to criticize an insufficiently precise definition.

---

> That is usually a terrible idea.

Not at all. Think about the structure of a proof of negation. You make a “silly” assumption and “play along” until you reach a contradiction.


That is usually a terrible idea.


Proofs by contradiction do not require you to play dumb, you can just present them as they are.


I left out associativity because I was focusing on the ELI5 part.


They are not checked in Haskell, but they are definitely relied on by existing code.


For anyone who was wondering, catnaroek was being sarcastic.


I think combining 25 and x should yield x.


Oops, yes, my bad.


A monoid is a method of converting coffee into internet comments about monoids. It's quite ingenious, really.


"Monoid" is an adjective that describes a data type.

Anything you describe as a monoid has to have three properties: you can add them together, there's an "empty" or "zero" value, and (a + b) + c = a + (b + c).

For example, strings with concatenation are a monoid, because you can use an empty string and the string + operator.

For example, integers with addition are a monoid, because you can use 0 and the integer + operator.

For example, sets are a monoid, because you can use the empty set and the union operator.

For example, a "Picture" data type could be a monoid, because you could have a fully-transparent picture and use an "overlay" operator to put one picture on top of another.

Why do you care? Well, for one thing, it just makes it easy to find the function or operator you want to use: as the OP said, if I feel like my data type should have a concat or append operator but I don't know what it's called, I just use mappend / mempty / mconcat.

But, as another practical example: "sum" in Python works for numbers, but I've seen folks assume that it works for anything that you can use + on, and so try and use strings with it. But, because "sum" isn't a general purpose tool, that doesn't work! In Haskell, on the other hand, the equivalent -- mconcat -- will work on anything that is a monoid. (And can be specialized to work faster for specific data structures.)
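
In GHCi, for example (a small sketch; plain numbers need a wrapper like Sum to pick which of their monoids you mean):

    ghci> mconcat ["foo", "bar", "baz"]
    "foobarbaz"
    ghci> mconcat [[1, 2], [3]]
    [1,2,3]
    ghci> import Data.Monoid (Sum (..))
    ghci> getSum (mconcat (map Sum [1, 2, 3]))
    6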

Other languages also have monoids. If someone tells you Haskell has monoids and other languages don't, what they really mean is, Haskell makes explicit the monoid pattern / interface, where other languages have it implicitly for different data types. Talking about the pattern explicitly isn't revolutionary, but it can be pretty useful for discoverability and writing "general" algorithms.

Instead of calling data structures monoids, you'd get the same effect if, as a programming language community, you decided that as many types as possible should support "+", and that every data type that can should have a makeEmptyThing() method, and that it would be weird and not ok if x + makeEmptyThing() didn't equal x and if (x + y) + z didn't equal x + (y + z), and then subtly shamed any libraries that defined makeEmptyThing() and + in ways that didn't follow that pattern. But if your programming language community did this (for consistency) you'd probably want to come up with a name for it -- "Addable", "Concatable", etc -- and the Haskell community chose "Monoid" (because of relationships to theory etc etc).


You're actually wrong about Python's sum function: it works for anything you can use + on EXCEPT strings [0]; this is to avoid quadratic behavior with string concatenation.

[0] https://github.com/python/cpython/blob/5837d0418f47933b2e3c1...


> But, as another practical example: "sum" in Python works for numbers, but I've seen folks assume that it works for anything that you can use + on, and so try and use strings with it.

I don't know Python very well, but aren't strings special-cased? It does work with lists.

    sum([[1,2,3],[4,5,6]], [])


It does if you specify a sequence of lists as the first argument, and an empty list as the initial value.


This is because the start argument defaults to 0. It doesn't make sense to add a list and 0.

The same thing does not work for a sequence of strings and an empty string.

    sum(["abc","def"], "")
Throws TypeError: sum() can't sum strings [use ''.join(seq) instead]


I agree with much of what you say here. Some refinements and small disagreements follow:

A monoid isn't just a data type. Mathematically, it's a set ("data type" is plenty close enough, in this context) and an operation. Integers with addition are a monoid. But so are integers with multiplication. And these are two different monoids. Haskell confuses this issue a bit: the way the Haskell libraries are set up, you can only name one Monoid instance per type, so types that form monoids in several interesting ways get wrapped in "newtypes" so you can name each of the several. This mostly works fine, but is a bit hacky from a math POV, and I think doesn't lead as well as it might to a more general understanding.
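
Concretely, that's why base has the newtype wrappers (a quick sketch):

    import Data.Monoid (Product (..), Sum (..))

    five = getSum (Sum 2 <> Sum 3)              -- integers under addition
    six  = getProduct (Product 2 <> Product 3)  -- integers under multiplication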

It's worth noting, too, that some of the Haskell libraries, particularly those from the early days, make some unfortunate decisions about which instances to bless. I often complain about the Monoid instance for Map. Map forms a monoid under union so long as we combine values (on collision) associatively. Unfortunately, the library doesn't let us pick how they are combined; it just takes the left one. That's usually not what I want, and it's particularly painful when I've been combining Sets with monoid operations and now realize they need to carry some extra info.
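
To illustrate the complaint, a sketch with containers' Data.Map:

    import qualified Data.Map as Map

    biased = Map.fromList [(1, "a")] <> Map.fromList [(1, "b")]
    -- fromList [(1,"a")]   -- left-biased: the "b" is silently dropped

    merged = Map.unionWith (<>) (Map.fromList [(1, "a")]) (Map.fromList [(1, "b")])
    -- fromList [(1,"ab")]  -- combine on collision, which is often what you want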

... and then of course there's floating point, which probably shouldn't even be Num.

> But if your programming language community did this (for consistency) you'd probably want to come up with a name for it -- "Addable", "Concatable", etc -- and the Haskell community chose "Monoid" (because of relationships to theory etc etc).

There are two things I really like about using the name Monoid, compared to the others.

First, it's much clearer what the rules are. We aren't stuck wondering whether strings or products are "really" "Addable", whether functions of the form (a -> a) are "really" "Concatable" (... and what about nonempty strings? you can concatenate them, but they have no identity object...).

Clarity in an interface means I know what properties I can rely on. You can define those properties precisely with "more intuitive" names, but then the actual interface doesn't match the intuition and people won't realize it, which is worse. You could get precise and intuitive by adding verbosity: "AssociativeOperationWithIdentity". My principal objection there is readability and aesthetics, but if that's what a community wants to go with, okay... "Monoid" is short and unambiguous and well established in related fields.

That last point touches on the other thing I like. There are mathematical results that can be useful to programmers. Reducing the translation needed helps make those more accessible. And everyone could do with knowing just a bit more algebra :-P


For some people it is easier to understand a concept, especially an abstract one, if they are given a few examples. For those people:

A monoid is a set and a binary operation on the elements of that set with the following properties:

1. The operation acts on two elements of the set to produce another element of that set (closure).

Examples:

You have the set of natural numbers {0,1,2,3..} with standard addition. Whichever two natural numbers you choose to add, you will get a natural number back.

A set of MxN matrices and matrix addition. If A and B are two MxN matrices, A+B is an MxN matrix.

2. The order in which the operations are performed does not matter (associativity). So for all elements a, b and c of our set and the operation +:

a+(b+c)=(a+b)+c

must be valid.

Examples:

Let L be a set of strings and + concatenation. "Just"+(" an "+ "example")=("Just"+" an ")+"example"

Vector addition over vectors of same dimension.

3. There is a (unique!) element in the set which, when combined with any element of the set, returns that element (identity, neutral element).

e+a=a+e=a

e is identity

Examples:

0 is identity for standard addition: 0+1=1, 4+0=4

1 is identity for standard multiplication: 1*7=7

""(empty string) is identity for string concatenation


A monoid is an interface with two operations: one that produces an element out of thin air, and one that combines two elements.

It also comes with a contract, like the contracts of "equals", "hashCode" or Comparable in Java. The contract says that the operation must be associative, and that the element you produce out of thin air must be neutral: when you combine it with another element it leaves the other element unchanged.

One benefit of having such general interfaces is that you can reuse code across a wide range of domains. For example: the Comparator interface in Java can be seen as a monoid, with the "thenComparing" function as the combination operation: https://docs.oracle.com/javase/8/docs/api/java/util/Comparat... You could build a list of comparators and concatenate it, map a function that produces a comparator over a list and concatenate it, and so on.
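
The same trick is available in Haskell, where Ordering is itself a monoid that keeps the first non-EQ result. A sketch:

    import Data.List (sortBy)
    import Data.Ord (comparing)

    -- shortest first; ties broken alphabetically
    byLenThenAlpha :: String -> String -> Ordering
    byLenThenAlpha a b = comparing length a b <> compare a b

    sorted = sortBy byLenThenAlpha ["bb", "a", "ab"]  -- ["a","ab","bb"]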

Another benefit is that you can be quite general when specifying requirements for your functions. In Haskell, the Writer monad only requires the type of the accumulator to be an instance of Monoid. So you can use regular lists, difference lists, or the integers under addition.

Also in Haskell some composite types automatically satisfy an interface if their components do, saving you work. For example, a tuple of monoids is automatically a monoid. Any function that returns a monoid is itself a monoid as well. (Functions that begin and end in the same type can be monoids in a different way: the neutral element is the identity function and the combination operation is composition.)
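
A couple of those instances in action (a sketch):

    ghci> ("ab", [1]) <> ("cd", [2])   -- tuples combine componentwise
    ("abcd",[1,2])
    ghci> (show <> show) 5             -- functions into a monoid combine pointwise
    "55"
    ghci> import Data.Monoid (Endo (..))
    ghci> appEndo (Endo (+ 1) <> Endo (* 2)) 5   -- same-type functions under composition
    11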


Monoid is a type class, similar to interfaces in Java, and this is what it makes easy: if your code uses the monoid interface of some type then you can swap the underlying concrete type without changing your code, and it still works.

Obviously, you can do that in any untyped language, too, but explicitly defining these contracts makes code easy to manage and understand. And the compiler will tell you if/how you're doing something wrong.


"Yet another monoid tutorial." This time in Hacker News comments. And lots of them. :-)


The irony is that there are now 4 monoid tutorials in the replies, and they are all different, and differently confusing :)

And there I was, thinking that this only applied to monads :)


Perhaps someone should come up with a generalizable method for combining monoid tutorials. I wonder what we could call it...


Is it really so confusing? I mean, if you understand + for integers and for strings and for lists, and you can understand why this is basically the same operation for all of them, congratulations! you know what a monoid is.


It's not confusing to me :) I'm just saying that at the time I wrote my comment there were four different comments with four different ways of explaining monoids with four different sets of terminology :)


> more mundane things

Even the most reasonable of "Haskell people," the OP included, seem to have this built-in condescension for programs that actually do things. Still waiting for the Haskell community to produce something non-mundane to justify all that. If there were just one example of something that clearly demonstrated the unique value of Haskell to non-category-theory-enthusiasts.


Pandoc? I've never looked into the code (because Haskell might be even harder to read than to write), but it's the obvious example of something super useful that was written in Haskell...



Also, https://chordify.net/ is implemented using Haskell. I've used that a couple of times. Very useful (if you play music).


Of course Haskell programs do things...lots of things. Are you sure you're not being condescending to people who want their programs that do things to have a solid mathematical foundation as well?


Where do you get that from?

There are lots of practical Haskell projects that "do things": from web frameworks and window managers to cryptocurrencies, plus many commercial projects that are anything but mundane and where nobody cares about category theory.

You can also check out https://haskell-lang.org/libraries for some pretty practical documentation on libraries that Haskell programmers use to build programs that "do things".


Haskell doesn't force types to be nullable. Null is the "billion-dollar mistake": it introduces bugs throughout programs.

That's just one example.
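
In Haskell, absence is opt-in via Maybe, so the compiler makes you handle the missing case. A minimal sketch:

    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    greet :: Maybe String -> String
    greet Nothing     = "Hello, stranger"
    greet (Just name) = "Hello, " ++ name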


This entire post is so true.

As a personal (not so personal) anecdote: the PureScript[1] book[2] claims "No prior knowledge of functional programming is required, but it certainly won’t hurt. New ideas will be accompanied by practical examples, so you should be able to form an intuition for the concepts from functional programming that we will use."

And then it goes through a rather gentle introduction of various concepts. And then this is the first time ever that `map` is mentioned:

> Fortunately, the Prelude module provides a way to do this. The map operator can be used to lift a function over an appropriate type constructor like Maybe (we’ll see more on this function, and others like it, later in the book, when we talk about functors)

WTF.

"lifting over typeclasses wat". The whole concept of lifting isn't discussed until six chapters later.

[1] A Haskell-derived language that compiles to JavaScript, http://www.purescript.org

[2] https://leanpub.com/purescript/read


I like Haskell a lot, but one aspect of it has always bothered me. A lot of incidental complexity is added by the prevalence of custom functions/operators with arbitrary associativity and fixity, on top of the normal precedence rules. Perhaps it is because I got spoiled by writing Lisp (Clojure), where such concerns simply don't exist in so uniform a language. But writing Haskell can lead to lines of code that don't always read in the order you'd expect. I was always having to look up the fixity and associativity of things to know what was going on, even in a single otherwise straightforward line of code.
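
A sketch of the kind of thing I mean ((.+.) and (.*.) are made-up operators):

    -- Fixities from base: ($) is infixr 0 and (<>) is infixr 6, so
    --     print $ x <> y <> z
    -- parses as
    --     print (x <> (y <> z))
    -- But given library operators of unknown fixity,
    --     a .+. b .*. c
    -- could group either way; you have to find the infixl/infixr
    -- declarations for (.+.) and (.*.) to know.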


When I was going through "Learn You a Haskell...", and writing various programs with it, my first thought on the "$" operator (which basically lets you not parenthesize everything after it in an expression) was: "Why?"

And then I wrote more Haskell, and found, wow, that's pretty useful...

... for writing Haskell! I've never wanted it in any other language I've used.
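
For anyone unfamiliar, a small before/after sketch:

    withParens, withDollar :: IO ()
    withParens = print (sum (map (* 2) [1, 2, 3]))  -- prints 12
    withDollar = print $ sum $ map (* 2) [1, 2, 3]  -- same thing, no nesting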


Could you give an example? It’s not something I’ve thought about when writing Haskell, but perhaps I just haven’t noticed.


Agreed. Though this is really a problem with text based editing (which obscures the tree structure), not Haskell.


I remember hearing this said about Lisp programming in the early 1980s. Specifically, there was a "tree-based" editor for InterLisp on the Xerox Lisp machines... I only ever met one person (hi Marty!) who even basically understood it.

I really hate the "we've never had one, so we'll never have one" argument about anything, but I think in this case it may well be true: a lot of people have tried to make "tree based" editors, but I've never heard of one that got any kind of traction.

Even the guy who could use the Xerox tree editor preferred emacs (on other machines).


The problem with tree editors has always been the lack of standardization, not any kind of conceptual or UI problem. Generally, authors take one of two routes:

1) Provide a tree based editing UI layer on top of text files since text still rules. In my opinion this is almost like the worst of both worlds, though it can be successful. Paredit is probably the best example of this (and many lispers swear by it).

2) Create a Grand Integrated Vision of How to Fix Every Programming Problem Ever Created. This can make a nice demo, but of course never turns into a practical product any time soon.

Until we have a standardized tree or graph interchange format that is designed to satisfy the common denominator of the full range of languages, protocols, etc., as text does today, all structured editors will be fighting an uphill battle.


Interesting title.

In my experience, Haskell is for Haskellers more than just a language. I have found that every Haskeller I talk to is, in the specified order:

  1. A Haskeller.
  2. A programmer.
  3. A human being.
  4. A person of a specific gender.
They really are "Haskell People". What's even more bizarre is that for some other languages this hierarchy seems to be reversed.

That being said I'm glad that the word "sucks" was finally removed from this article:

https://wiki.haskell.org/The_JavaScript_Problem


Thanks for the article link.

I love that "late binding" is listed as a flaw. Some would call that a feature. Those of us in the "some" must be wired a bit differently that the "early binding" crowd, I guess :-)


How can deferring checks of correctness to runtime be considered a feature? Haskell has an option to defer type errors to runtime, so I don’t see why early binding wouldn’t be preferable, since you can have it as you like.

My life quality has increased substantially since switching from Python to Haskell, and no longer getting runtime crashes saying “AttributeError: <x> has no attribute <y>”.


"How can deferring checks of correctness to runtime be considered a feature?"

Correctness is not something that matters in the real world. Even defects don't really matter. What matters is the problems that defects cause. It's a subtle difference, but a very important one, because you can either limit the scope of those problems at runtime, or try to make sure there are as few defects as possible to cause problems in the first place. The first approach doesn't force you to specify anything to check correctness against; it is flexible, productive, and prepared for the real world. The second approach is much less flexible and much less productive, and doesn't work as well in the real world, where things fail not only because of defects but by nature. So, this is how it can be considered a feature.

(While writing this I realized that Roboprog probably meant late binding as a feature from OOP, where it is considered necessary for OOP to even exist, not reliability.)


> Correctness is not something that matters in the real world. Even defects don't really matter. What matters is problems that defects cause. It's a subtle difference, but very important one.

What, exactly, do you consider the difference between these two?

To me, saying ”correctness doesn’t matter” is equivalent to saying “it doesn’t matter whether your app does what you want it to do”, which makes no sense to me.


I always saw this as a "things I hate about JavaScript because it's not Haskell" list.

Some of these points are of course legitimate, but other to me sound like a cry of "blasphemy!".

I like functional programming, I took a course on Haskell during college and it was a humbling experience.

But I just can't get over the fact that its community is like that.


I think there are two different definitions of "late binding" in play here. The footnote makes it clear that the author considers C++ not to have "late binding":

> Early binding allows for static verification of the existence of method-signature pairs (e.g. v-tables). Late binding does not give the compiler (or an IDE) enough information for existence verification, it has to be looked up at run-time.


Disclosure: using JS for UI work, not nuclear reactor controls. I hate reading reams of verbose, flabby, painfully explicit, long, statically typed code (often believed to be "self documenting", as well). I prefer short, intuitive code (and LESS of it, but with explanations of context and "why") for non-safety-critical work.


Haskell is one of the least verbose languages out there, typed or otherwise.


Haskell's syntax is very terse, but Haskell codebases can still be more verbose than JavaScript ones because the type system actually makes you think about things like error states. JavaScript is definitely more verbose when you write code for all paths and not just the happy one, but doing so seems far less common in practice than it ought to be.

Another angle on the parent comment is that Roboprog hasn't been exposed to statically-typed languages with good inference. It sounds like they're mostly complaining about type annotations, but at least in Haskell you can get quite far without having to annotate anything.
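
A tiny sketch of what that looks like in practice:

    -- no annotations written; GHC infers the types shown in the comments
    add x y = x + y                  -- add :: Num a => a -> a -> a
    firstWords = map (head . words)  -- firstWords :: [String] -> [String]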


There's probably some truth to this. My mathy friends have a deep appreciation of Haskell. I always try to get into it for a few weeks, marvel at it, grok some new things, then inevitably end up in more "practical" languages (only because there are almost no domains, save parsers and compilers, where Haskell is obviously the "best" fit).

This may describe why: Haskell is built for its own sake, its coherence, its design. It is first and foremost designed to do things the Haskell way, which may be conceptually pure and "right" but isn't necessarily the most straightforward. In contrast, most other languages exist to solve a problem: C++ (performance and control), PHP (a templating language for the web), Python (web, data science), JS (the browser), etc. This is my understanding/paraphrasing of what the author is saying.

I think the answer is somewhere in the middle: the good parts of Haskell, selected for pragmatism (strict instead of lazy, like most languages), with an emphasis on being approachable to users. There are a lot of languages that are almost there. Elm gets the ease of use but strips out too many features. OCaml is practical but has a split standard library, multicore issues, and is kind of crufty. Go is arguably too simple, Rust is too low-level, and Kotlin is almost there, but the JVM can't replace languages that produce standalone binaries for the most part. Nim is very easy to write in a functional style and is maybe the most practical language I've ever used. It lacks the support, libraries, and mindshare of the others, though.

What kind of worries me is that there is a strong incentive for languages to push themselves into domains they aren't especially suited for, so it's likely that one of these flawed languages will be Frankensteined into a domain it doesn't really belong in, and we'll be hacking around those disadvantages in a couple of decades. Maybe the real answer is to take a step back and consider what middle ground gets 90-95% of what anybody needs. I'm not convinced that something like Nim or a new language couldn't hit 99%, with the remaining 1% of the time spent dropping into domain-specific languages. If one of the big corps put support and funding into the "right" language, it's not really a difficult job, but it is those damn incentives. How do you fix them?


Regarding other languages out there, I'm gradually starting to appreciate what has been accomplished with Swift. I did not like it at first, but the new version 4 of the language is much improved over the first release a few years ago. Between the algebraic data types that are pervasive in the language, the versatile and type-checked pattern matching, the familiar functional toolset with first-class functions, map, filter, etc., and the widespread use of value types over object references, there is a lot there that comes from other languages, and most importantly, it's a highly practical language for getting real work done on billions of devices. I'd like to see Swift adopted elsewhere other than just Linux and Apple platforms, as I think it has a lot to offer, some great compromises.


Middle ground. Compromise. You're looking at language diversity with the assumption that it's a problem. No corp, no matter how powerful, could make the be-all-end-all language because a language isn't a tool; it's the culture around the tool.

Imagine you have three teams of hackers, each composed of die-hard devotees of a language: C++, Rust, and PHP. You task each team with building a program to the same spec, in C++. You already know what to expect, don't you? The PHP bunch will finish first, with a program that works probably well enough. The Rust gang will make something frighteningly safe, that doesn't use any memory. 10 years later the C++ folks are still writing clever templates.

Different languages are good at different things not just because of the strengths and weaknesses of the language itself, but because a language has an associated worldview, and different worldviews lend themselves to different goals. Within hackerdom there are many communities adapted to different ideological ecosystems.

In a future with one "perfect" language, where would you go when safety was the only thing that mattered? You'd have to find some old geezer who cut her teeth on Ada, before the Programming Cultural Revolution.


> Maybe the real answer is to take a step back and consider what is a middle ground that gets 90-95% of what anybody needs.

I call this "pragmaticaly typed language", but should just call it "a pragmatic language"


Actually the same could be said about Python and the Python way, pythonic programs, etc.


The thing about the Python way is that it is dogmatic, but it is also very practical: it's dogmatic about being practical. For loops and list comprehensions are more familiar or more easily explainable to many programmers, so Guido kind of hid map/filter/reduce instead of leaning on them or offering alternatives the way other languages (Clojure or JS) do.

I've got friends who I'm not sure can read that are able to understand Python. Haskell isn't like that.


Fun thing - list comprehensions could be seen as map/filter and so on but they could also be seen as syntax sugar for monads:

     [ x + y
      for x in xs
      for y in ys]

     foo = do
         x <- xs
         y <- ys
         return (x + y)
It's just that this is overloaded in Haskell, so the same notation works with lists but also as async/await, as the `?` operator that propagates errors in Rust, and so on.
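
For lists specifically, Haskell also has comprehension syntax that desugars to the same thing (a sketch reusing the xs and ys above):

    foo' = [ x + y | x <- xs, y <- ys ]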

So maybe it isn't the basic concepts of haskell that are hard to get but that they are so general.


Haskell aims for simplicity. That is, it favors the reader/maintainer of code over the writer. It's a different trade-off, and not an obviously worse one.


I don't really consider Haskell an example of simplicity, I consider something like Elm or Go simple. Haskell is...correct?

But this is a problem with these discussions because my definition is different than yours and they could both be argued to be correct. Is this like Hickey's easy vs simple? Is my definition incorrect?


I meant it in the objective sense of information theoretic complexity. I’m not sure what other measurements would be objective in comparison.


Is there any research showing that Haskell is superior in terms of information-theoretic complexity?

I'm not familiar with the term. The only thing I can find it in reference to Haskell is this:

https://en.wikibooks.org/wiki/Haskell/Algorithm_complexity

which isn't really related to this:

> That is favoring the reader/maintainer of code over the writer.

What material is there which shows that Haskell is simple in terms of information theoretic complexity and superior to other languages (not?) designed for that?


Being more rigid about semantics and formalization of program meaning means that it is a simpler process to answer queries about what a certain piece of code does vs an alternative “pragmatic” language that is riddled with exceptions and lacks a strong type system.

If your job is to take a piece of code written by someone else and either fix a bug or add a feature, it is objectively easier to do when you have strong guarantees about input types and format, exceptions generated, referential transparency, side effects, execution model, etc.

You also get shorter programs and simpler programs in the algorithmic complexity sense when you have strong first class features for modularity, interface definition, and constrained polymorphism.

Haskell has all of these properties as good as and often better than "production" languages, and has since the '90s. Haskell is objectively better on all counts.

However what matters is not objective truths but subjective realities. Haskell is also “different” in a way that is only endearing to mathematicians and CS theorists (which are really the same in the extremes). You can hire someone and train them on Haskell, but it’ll take much time and money to get them to similar comfort levels, and not everyone is willing. And with Rust, which carries over many of Haskell’s benefits to the imperative world, you can get 80% of the benefit for 20% of the cost. So why bother?

If I had a time machine though, I think code dropping Haskell 98 back prior to the invention of FORTRAN would have put us in a much more desirable alternative history. One where code mostly works as advertised, security is based on proofs of correctness not boxing, requirements and intentions are more clear and explicit, etc. Too bad we live in the world we do.


Two of the core tenets of Python are: simple is better than complex, and code is read much more often than it is written.


It amazes me that someone could write down that principle and then design an untyped programming language.


Dynamic types can be more problematic to modify, but many of us find them easier to read: assume that the code actually works and does something reasonable, then skim for the gist of it (without having to see a bunch of extra detail).

Assembler is untyped (just bytes and words). I'm not big into Python, but I'm pretty sure it has types, but they are late/runtime bound.

Dynamic types are probably not a good choice for an army of idiots, but if dynamic types were so completely unworkable, you would think that they would disappear, eh?

That said, I'd rather see avionics written in Ada than Python, but not every problem needs that level of scrutiny and pain.


> assume that the code actually worked and does something reasonable

That's almost never the case if you're in a situation where you're reading code.


Why is dynamic typing (with reliance on duck typing) any harder to read, or more complex?


    def add(a, b):
        return a + b
In a dynamically-typed language, you can't actually know if this dead-simple function will throw an exception, until you know the entire call graph leading up to where this function was called. That's fine in small scripts, but really freakin hard if you have call-stacks 10 levels deep.


Sure, but it is both exceedingly readable and exceedingly simple. It is pretty much pseudocode. Which is what we are discussing, not type safety in huge codebases.


That's true for small functions, but it's also true in statically-typed languages with type inference. In OCaml, the same function would be

    let add x y = x + y
Looks reasonable to me, and this is statically type-checked


If I were making an argument in this case, I'd say that several other languages deliver the "readability" benefit while also delivering orders of magnitude higher performance and type-safety guarantees. The implication is that one kind of code is write-only, instant legacy code, while the other will be highly maintainable going forward if the codebase and team need to scale in size.


Sure. That doesn’t mean it achieves those goals very well.


Could you give a few examples of the good parts of Haskell, and why you believe they are good for anybody?


I think Haskell's view of the world with types is basically right. Pattern matching, algebraic datatypes, typeclasses, powerful and usable functions, and immutability (by default) are a great way of reasoning about code. Add the other common aspects of functional programming (map) and you have a very lean and expressive core. Haskell also has a lot of really cool and crazy stuff related to parallelism/concurrency, which are at least partially possible/easy because of aspects like immutability (Erlang/Elixir show this, too).
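
For instance, the ADT plus pattern-matching combo looks like this (a sketch with a hypothetical Shape type):

    data Shape
      = Circle Double
      | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h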

http://chimera.labs.oreilly.com/books/1230000000929/index.ht...

To be honest I'm extrapolating a bit; my experience in static typing is mostly in other languages and my experience with functional programming is mostly in Elixir/Clojure, but a mix of the two in my book is the most bang for the buck you'll find. But as a disclaimer, ....I'm acting like I know what I'm talking about, these are just my feelings from working with some of these languages off and on. Haskellers can answer this much better than me.


The podcast that she links to is really interesting [1], talking about what it's like to learn Haskell as a first language. It's really interesting for me because I'm getting into Elm, and their approach is to use really concrete ideas and names instead of monads and such. She even mentions that Elm is a good way to start approaching Haskell concepts (around 13 minutes in).

[1] https://twitter.com/thefrontside/status/912327851386470400


The experiment to bring "Haskell people" into the world of VR has so far gone well (https://github.com/SimulaVR/Simula), but I have sometimes wondered if Haskell is too off-putting of a language to bring traditional graphics programmers into this project.


The content of the article makes it abundantly clear the author isn't talking about "understand thing," but rather specifically "understand Haskell." These aren't identical.


I didn't take it that way. I think the argument translates to other functional languages to some degree. I was working on a Scala project where the lead developer exhibited exactly the behavior the author describes. Scala is a bit more "useful" than Haskell because it borrows from Java, and it's not too much of a leap to write something you can use. Opening a file in Scala and processing it is much less of an understanding task than it is in Haskell.

The "understand first, then use" aspect of many functional languages is what kills systems in the crib when you're trying to build something that provides business value.


> The "understand first, then use" aspect of many functional languages is what kills systems in the crib when you're trying to build something that provides business value.

This notion that one must "understand first, then use" is limited strictly to the particulars of the language. It's equally true of other languages: one must understand at some level how the language works to build anything with it.

It also isn't clear how having to understand Haskell and having to "understand thing" are the same thing, as the author asserts.

An uncharitable reading of the article is that it's a sophomoric insult to people who don't use Haskell, as it apparently dismisses them all as people who don't care to understand things but only to more or less mindlessly and practically randomly build stuff. I surely hope that wasn't the intent.


I don't see that at all. To me it was clearly about how using a language that is designed to be correct rather than useful/ simple places more responsibility on the coder to think about the entirety of their problem before digging in.


We can agree to disagree. I see no evidence to support your generalization in the article.

It's also a contentious assertion besides, since the implication is that people who don't use "correct rather than useful/simple" language don't "think about the entirety of their problem before digging in."


I’ve encountered multiple times that Haskell has forced me to understand exactly what I was doing; otherwise the parts wouldn’t fit together, or I’d end up writing huge amounts of duplicate code. I’ve never encountered this in another language.


I can't see most "Info Services" type management buying into something requiring that level of effort prior to having anything to show for it. Right or wrong, just sayin'.


It definitely depends on the cost of failure. If it’s 30 minutes of downtime, it might not be worth it. But if it’s losing all customer funds, it’d be irresponsible not to put this level of effort into it.


Perhaps that says more about you than the languages?


How do you figure?


Whether you understand a problem completely is independent of your choice of language to implement it in. The notion that Haskell allegedly requires complete understanding up front where other languages don't is both unsupported and irrelevant. The implication that "Haskell people" seek to understand things first while users of other languages don't is an assertion at best (and that's being charitable).



