Some Notes About How I Write Haskell (infinitenegativeutility.com)
191 points by chopin on Feb 21, 2018 | 43 comments



This is a good and well-written article, but it's worth noting that I think many of the ideas here cut slightly against the grain of current Haskell best practices. Which is not to say any specific part is wrong, but, for example, the entire discussion of QuickCheck is hard to really make sense of, and Arbitrary is a famously good use of type classes.

Probably the most countercultural paragraph is the one recommending that you make a fresh type rather than reuse an existing one. I think you lose so many useful operations by eschewing Maybe for your uniquely named Maybe-alike [1] that a type alias is probably a much better call there, and it also captures what you meant more generically. Further, plain Maybe, rather than a more specific type, works much better with valuable tools like Compose.
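
To make that concrete, a quick sketch (LookupResult is a made-up name, not from the article):

    {-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

    -- An alias keeps every existing Maybe instance and combinator for free.
    type LookupResult a = Maybe a

    -- A fresh type loses them all; each one has to be derived (and compiled) again.
    data LookupResult' a = Found a | NotFound
      deriving (Eq, Show, Functor, Foldable, Traversable)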

Similarly, the redundant constraint on Empty is the sort of thing that will only trip you up in the real world.
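
For anyone who hasn't read that part, the shape of the problem is roughly this hypothetical GADT sketch (my reconstruction, not the article's exact code):

    {-# LANGUAGE GADTs #-}

    -- The Ord a constraint on Empty is never used by the type itself,
    -- but every caller constructing Empty must still discharge it.
    data Stack a where
      Empty :: Ord a => Stack a
      Push  :: a -> Stack a -> Stack a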

All in all, great article tho.

[1]: People underestimate the compiler cost of deriving instances. It adds up very fast if you derive lots of functors and foldables in your quest for unique names everywhere.


Thanks for your commentary too. I'm starting to learn Haskell, and the nature of the language seems to lead to fewer "best practices" articles out there.

Seeing knowledgeable people comment on things written by other knowledgeable people seems to be the best way to learn how Haskell is done in the wild (i.e. anything outside the text I’m learning Haskell from at the moment)!


Full disclosure: I _like_ ConstraintKinds and constraint-oriented programming in many cases, and you can find evidence of that in my comment history.


There are quite a few people who don't like Arbitrary, though, aren't there? Hedgehog is growing in popularity as an alternative to the QuickCheck approach.


I know there is a contingent of people who feel this way, but to be honest I've never heard a compelling argument out of them beyond vague complaints about type classes.

You can certainly do things the way Hedgehog does, and it's great. I think many of the nice parts of Hedgehog are not incompatible with QuickCheck.


One semi-real problem with `Arbitrary` is that QuickCheck itself can only define a fairly small set of instances, since it doesn't want to incur a dependency on all of Hackage. Likewise, other packages are unlikely to bring in QuickCheck as a dependency just to define the instance.

This means these instances must be defined in standalone packages, where they're necessarily orphans. The cultural prohibition against orphans means there aren't as many of these orphan-instance packages as you'd like, so you can end up redefining instances in every consumer project.

With the Hedgehog approach, there's no problem whatsoever with creating separate packages for these instances.
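
A minimal sketch, assuming the hedgehog package: a generator is just an ordinary exported value, so it can live in any package without coherence worries.

    import Hedgehog (Gen)
    import qualified Hedgehog.Gen as Gen
    import qualified Hedgehog.Range as Range

    -- An ordinary value, not a type-class instance: no orphan problem.
    genSmallInts :: Gen [Int]
    genSmallInts = Gen.list (Range.linear 0 20) (Gen.int (Range.linear 0 100))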


> the entire discussion of QuickCheck is hard to really make sense of, and Arbitrary is a famously good use of type classes

I found it understandable, but for me the issue was this: all the problems the author pointed out are real, yet you're not forced to write an Arbitrary instance. I rarely do in my tests. But sometimes it really does make sense, and in those cases it's nice to have.
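
For instance, you can hand QuickCheck an explicit generator with forAll and skip the instance entirely (a minimal sketch with a made-up property):

    import Test.QuickCheck

    -- The generator is passed explicitly; no Arbitrary instance needed.
    prop_reverseInvolutive :: Property
    prop_reverseInvolutive =
      forAll (listOf (choose (0, 100 :: Int))) $ \xs ->
        reverse (reverse xs) == xs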


> People underestimate the compiler cost of deriving instances. It adds up very fast if you derive lots of functors and foldables in your quest for unique names everywhere.

Do you pay this price when you derive via a `newtype`? Or does GHC figure out that the instance it would write out would be identical (modulo coercions)?


GHC can generate the instance for you for particular type classes with an extension; see DeriveGeneric and DeriveAnyClass. The penalty applies when you're unwilling to use these extensions, or when you create a type with the `data` keyword.
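
For the newtype question specifically, GHC can reuse the underlying type's instances via coercions when asked (a sketch assuming the DerivingStrategies and GeneralizedNewtypeDeriving extensions):

    {-# LANGUAGE DerivingStrategies, GeneralizedNewtypeDeriving #-}

    newtype Age = Age Int
      deriving newtype (Eq, Ord, Num)  -- reuses Int's instances via coercions
      deriving stock   Show            -- generates fresh code: show (Age 3) == "Age 3"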


From the earliest days, amateurs instruct computers, while professionals write executable stories. I loved reading the projection of this quip into Haskell.


"amateurs instruct computers, while professionals write executable stories" Did you just make this up? I think this is an important idea.


Programming has much more in common with literature than engineers typically realize. (Not to mention its relation to philosophy, with classes being a pure Aristotelian system of universals.)

Even for programs that are meant to be written and compiled once and never maintained, the programmer needs to build in their mind the whole story of the required computation and put it into words for the compiler to transform into low-level instructions. The clearer the programming language, the easier it will be to reason about that story.


What is that supposed to mean?


"Programs are meant to be read by humans and only incidentally for computers to execute."

- Donald Knuth


I thought that quote was by Abelson & Sussman in "Structure and Interpretation of Computer Programs".


Yes, it’s from the preface to the first edition.

Incidentally, the earliest expression of this idea that I’ve seen is from a talk by Fischer Black [0] in 1963, published a year later in a volume on LISP [1]:

Programming style is not a matter of efficiency in a program. It is a matter of how easy it is to write or read a program, how easy it is to explain the program to someone else, how easy it is to figure out what the program does a year after you've written it; and above all, style is a matter of taste, of aesthetics, of what you think looks nice, of what you think is elegant.

Although style is mainly a matter of taste, a programmer with a "good" style will find his programs easy to write, easy to read, and easy to explain to others. ...

In particular, you may have acquired special programming tricks that you are very fond of, and that aren't used by other programmers, but that don't make your programs much more efficient. I urge you to stop using those tricks. As Samuel Johnson once said, "Read over your compositions, and when you meet with a passage which you think is particularly fine, strike it out."

In other words, make your style simple, not complicated, even though the complicated style may seem to have some abstract virtues. ...

0. Yes, this is the same Fischer Black of the Black-Scholes duo of financial fame. His PhD, informally supervised by Marvin Minsky, was on artificial intelligence. Myron Scholes, for that matter, was also a good programmer and made money programming for economics professors at Chicago while he did his PhD there.

1. F. Black, “Styles of Programming in LISP,” in The Programming Language LISP: Its Operations and Applications, ed. E. Berkeley and D. Bobrow (1964), p96 (p106 of the PDF): http://www.softwarepreservation.com/projects/LISP/book/III_L... [PDF]


Huh, it does appear to be in the preface of SICP. It does seem like more of a Knuth-y quote, though. I can see why it might be accidentally attributed to the literate-programming guy rather than the meta-circular-interpreter guys.


It's pretentious nonsense about how programming should be declarative rather than imperative, with the implication that Haskell (the one true God) is. This point of view of course forgets that Turing tape machines came first and the lambda calculus second, and that most theory is still done using TMs rather than categories.


Easy there, you might want to get your facts straight before you drop vitriol like that. LC actually came first, via Alonzo Church in the Annals of Mathematics. Turing didn't publish his definition for another year, and he also did his PhD thesis under Church, in LC notation.


> LC actually came first, via Alonzo Church in the Annals of Mathematics.

That's irrelevant, as early computers weren't programmed in LC, nor built on such an architecture. And of course algorithms, and even programs (e.g. for Babbage's computer), existed before LC.


No computers have ever been programmed via Turing Machines, or built on an architecture resembling TMs (the von Neumann architecture can be described as a register machine). TMs are simply a mathematical formalisation of the notion of computation, equivalent in power to the lambda calculus.

However, most programming languages (including C, Java, etc.) look much more similar to the lambda calculus than to a description of a Turing Machine, and for very good reason. Have you ever tried describing a TM that encodes even the simplest logic? It is a pain in the ass.

Indeed, most courses on the theory of computation that discuss Turing Machines etc. don't ever expect students to fully describe a Turing Machine. Many times they use a language reminiscent of the lambda calculus to describe Turing Machines.

Just take a look at the definition of Turing Machines on Wikipedia, and at the examples of TMs: https://en.wikipedia.org/wiki/Turing_machine#Formal_definiti...

https://en.wikipedia.org/wiki/Turing_machine_examples

That resembles no description of programs that are written by humans to run on computer systems, unlike the lambda calculus.


Random-access, multi-tape Turing machines are actually very good descriptions of computer hardware. The description of Turing machines in practice (using unary or binary notation on a single tape) is generally done because they're simpler and no less powerful, but modern computers hew much closer to a state-transition model embodied by Turing machines than a symbol rewriting process embodied by Lambda calculus.


> However, most programming languages (including C, Java, etc.) look much more similar to the lambda calculus than to a description of a Turing Machine, and for very good reason. Have you ever tried describing a TM that encodes even the simplest logic? It is a pain in the ass.

That's because we don't actually have tape, but random-access memory. A Turing machine is just a limited form of imperative programming, and much closer to assembly, C, Fortran, BASIC, or even Java than to the lambda calculus.

>That resembles no description of programs that are written by humans to run on computer systems, unlike the lambda calculus.

It actually looks like a pretty run-of-the-mill description of working with memory locations, gotos, conditional jumps, and so on. Substitute random-access memory for the need to run through the tape, and you're there.

The examples don't remind you of programs written by humans mostly because they're visual examples showing the whole state configuration. If we similarly mapped the memory states during various steps of the execution of a common imperative program, it would look very much like those tables.


I don't think that pointing out who came first is even relevant to the discussion; the real question might be "which model is better to program in?", and for many the answer would be lambda-calculus derivatives.


Different problems require different tools. Functional has its place and imperative has its place. That's why I was so snarky in replying (even if I did get the timeline wrong): prescribing one over the other is dogmatic and pretentious.


You're right, we should avoid forming cults and following blind obsessions. Programming should remain the domain of rationality.


I find that the more competent people are as programmers, the more they treat their preferred language as a religion. It's not even that they can't hack others; it's that they've arrived at the One True Way. Less experienced programmers (and more so those who can kind of code but are faking their way with Stack Overflow) tend to be, or claim to be, language-agnostic.


Doesn't it blow your mind that you can represent imperative computations using a purely functional language (like Haskell do-blocks, or the lambda calculus itself)?

The beauty of a declarative notation is not that you get to ditch the representation of state; it's that you can represent that state in a much more compact and tractable way than theoretical representations of imperative machines require. Trying to do mathematical reasoning with Hoare logic is a pain in the ass.
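
A tiny illustration, assuming the mtl library: the do-block reads imperatively, but the "mutation" is just a value threaded through pure functions.

    import Control.Monad.State

    -- Reads like imperative code; desugars to pure function composition.
    tick :: State Int Int
    tick = do
      n <- get       -- "read the variable"
      put (n + 1)    -- "write the variable"
      pure n

    -- evalState (tick >> tick) 0 == 1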


I found the paragraph about iterating with list comprehensions very illuminating. As someone who is just starting to learn Haskell, deciding when not to use map / filter is something where my intuitions regularly seem to be off.


I like the fact (which I didn't know before) that you can use refutable patterns in the way shown here, as filters in a list comprehension. That feels almost like logic programming.
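
Concretely, it's the trick in this stock example (mine, not the article's):

    -- The refutable pattern Just x acts as a filter: elements that
    -- fail to match are skipped rather than raising an error.
    catMaybes' :: [Maybe a] -> [a]
    catMaybes' xs = [x | Just x <- xs]

    -- catMaybes' [Just 1, Nothing, Just 3] == [1, 3]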


This is a nifty thing made possible by the MonadFail class, which defines how a monad treats or recovers from a failed pattern-match.

    When a value is bound in do-notation, the pattern on the
    left hand side of <- might not match. In this case, this 
    class provides a function to recover.
https://hackage.haskell.org/package/base-4.10.1.0/docs/Contr...
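
The same mechanism shows up in do-notation; a small sketch in the Maybe monad, where fail is Nothing, so a failed match becomes a recoverable result instead of a crash:

    -- firstTwo [1, 2, 3] == Just (1, 2)
    -- firstTwo [1]       == Nothing   (the failed match calls fail)
    firstTwo :: [a] -> Maybe (a, a)
    firstTwo xs = do
      (a : b : _) <- Just xs
      pure (a, b)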


As somebody who works with Scala, I found list comprehensions (and the associated filters) a surprise as well when I started hacking in Haskell. Their resemblance to logic programming makes things so much more elegant, and so much easier to reason about.


How do you feel about Scala currently? I'm a Scala dev as well who's kind of been moving away from it over the past few months.

I'm not really sure of the future of the language. Most of the super-committed community members are fans of the Haskell-inspired style, which is great (it's how I write Scala, personally), but then I'm not sure why you'd choose Scala over Haskell, or, if you need the JVM, Eta or even Frege. The majority of 'casual' Scala people seem to use it as Kotlin with implicits, more or less, and as more and more libraries and features come up for Kotlin, I'm not even sure it makes sense for those people to use Scala, given Kotlin's larger backing, better tooling, easier onboarding, etc.


Personally, I started out in Scala as a 'casual' user, using it as a better Java/Kotlin. But I have certainly found myself gradually gravitating towards the Haskell-inspired style. For many like me (with more exposure to OOP than FP early in our careers), it is a pedagogical bridge into FP. With regard to the future of Scala, particularly tooling, Scala may still be able to retain its 'casual' users with its recent focus on tooling: http://scala-lang.org/blog/2018/02/14/tooling.html


There are plenty of people who use Scala as an ML on the JVM, not as a worse Haskell. They just aren't as loud about it.

If you're looking for an ML kind of experience, Kotlin is a joke, it doesn't even have pattern matching.


>If you're looking for an ML kind of experience, Kotlin is a joke, it doesn't even have pattern matching.

Can't agree with this more. I just cannot figure out Kotlin's raison d'être.


To me it seems sorta like reasonable Java. Tons of quality-of-life upgrades that Java hasn't implemented over the years (or won't, due to compatibility issues) make it so that you can write Kotlin without absolutely having to rely on a massive IDE or something like Project Lombok. They've got great tooling and a lot of support, and they've been adding tons of great features (e.g. inline classes/value classes coming soon).

If I had to build a company today, Kotlin would definitely be at the top of my list. I may love other languages much more, but Kotlin and C# are easily at the top for developer experience, imo: they've got great tooling, ecosystem, and support, and both are very approachable, middle-of-the-road languages that won't really scare anyone away the way Haskell might (despite Haskell being one of the most underrated, most scaremongered-about, most misunderstood languages out there, imo).


Would you care to explain more about your style? It sounds like a nice compromise, and the lack of pattern matching and HKTs in Kotlin is definitely a bit of a pain point after having gotten so used to Scala and Haskell.

ML on the JVM seems like a nice niche for it, but I'm not sure I've really seen libraries or Scala devs advertise their style/library as ML-inspired. I've seen OCaml mentioned as an inspiration for Scala before, but there are some notable differences between the two (e.g. inheritance vs. structural typing).


I don't agree with everything suggested here, but this is similar to what I'm talking about:

http://www.lihaoyi.com/post/StrategicScalaStylePrincipleofLe...

especially the reasoning that you should try to avoid mutability, but when something is mutable, just make it mutable. This is much more in line with the ML approach than with typical Haskell concerns around effect systems.


Scala has always been big on advertising its ability to abstract over things with its type system (higher-kinded types, constructs from category theory, Haskell-inspired typeclasses), but it has failed to actually put those capabilities to good use.

Just one example: everybody pretends that, e.g., the map function on collections is totally unrelated to the one on Futures, or the one in Slick, or the one in Quill, or the one in Spark. We all know this is not just a random coincidence in naming.
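
In Haskell terms (a trivial, Prelude-only sketch), that shared structure simply has a name: all of those maps are one lawful fmap.

    -- One function, uniform over every Functor.
    double :: Functor f => f Int -> f Int
    double = fmap (* 2)

    -- double [1, 2, 3] == [2, 4, 6]
    -- double (Just 4)  == Just 8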


This is very weird Haskell: the fizzbuzz example concatenating strings where Alternative would fit, and the pointful point-free example, for instance.


I liked the fizzbuzz example very much because it was so unorthodox.


The title would be clearer if it didn't start with the title of the blog.




