> Yoneda-crazy: I know Haskell, I know some category theory, but I am highly sceptical that teaching the Yoneda Lemma to C++ programmers is actually useful in any way.
This is infuriatingly true. Everything you do in Haskell you can understand in terms of what code gets executed where and how. This is just as true of Haskell as it is of any other high-level language.
Indeed, you can take an arbitrary Haskell program, and rewrite it in C with the same semantics, and all you'll learn is how GHC's runtime is implemented and how it executes your program. For a small concise program, this actually makes for a nice exercise. I don't know why, but people sometimes seem to think the runtime is "magic".
Personally I find category theory to be a red herring when it comes to Haskell and really programming. I feel that people learn a bit of CT, then quote it endlessly as a sort of elevation of programming from lowly code-slinging to a high mathematical pursuit. Except CT doesn't really provide all that much insight. At least not unless you're knee-deep in advanced type theory. But who has really read Benjamin Pierce's books (as beautiful as they are), or the original Hindley texts?
A side comment (this one's more about CT itself) is that the standard texts don't seem to really have many results. Not compared to, say, any undergrad book on algebra or number theory, where cool theorems come at you every few pages to twist up your brain. Category Theory always felt almost mechanical to me. Lots of definitions and rules for assembling diagrams and such, but then what. Just the Yoneda Lemma and that's it? If anyone can point me to the fun CT theorems, I'd love to hear about them.
>CT itself, is that the standard texts don't seem to really have many results.
Oh man. If you like math, CT gives nice mathematical definitions for stuff. I would love to use my algebra skills to do general programming like I can use my linear algebra knowledge in numerical programming.
I'm making a bold claim that I hope someone can debunk with awesome counterexamples:
"CT HAS ZERO APPLICABILITY IN PROGRAMMING"
Using the CT-to-linear-algebra analogy: CT in programming is like learning the definitions of vector spaces, subspaces, span, and basis and stopping there. There is no CT equivalent of solving linear systems, linear transformations, least-squares, or eigenvalues. Just because you have rooted programming in cool algebraic notation does not mean anything in itself.
I know this isn't really the answer anyone's looking for, but I legitimately found some basic knowledge of category theory to be very useful with Haskell. Maybe it's a red herring -- I don't really know -- but when I was able to intuitively see the connection between a monad in Haskell and a monad in category theory it was almost like a switch went off in my brain and the abstraction was easier to work with and just less intimidating.
The epiphany I had from this is that, while linear algebra's utility comes in the form of designing efficient algorithms, CT's utility (in Haskell) comes in the form of understanding structures. It's similar to thinking of modular arithmetic in terms of remainders vs. quotient groups. I get lost doing anything complex with the former way of thinking, but when I switch to thinking in terms of basic abstract algebra, suddenly the foliage clears and the way ahead opens up.
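To make that "switch" concrete, here is a small sketch of the correspondence: in CT a monad is an endofunctor with unit (eta) and join (mu); in Haskell the same structure shows up as `return` and `(>>=)`, related by `m >>= f = join (fmap f m)`. The `safeDiv` and `calc` names below are made up for illustration.

```haskell
import Control.Monad (join)

-- Failure modelled as Nothing rather than an exception.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Chaining with (>>=): Kleisli composition, in CT terms.
calc :: Int -> Maybe Int
calc n = safeDiv 100 n >>= \m -> safeDiv m 2

-- join collapses a doubly-wrapped value, mirroring mu : T (T a) -> T a.
flattened :: Maybe Int
flattened = join (Just (Just 3))

main :: IO ()
main = do
  print (calc 5)    -- Just 10
  print (calc 0)    -- Nothing
  print flattened   -- Just 3
```

Nothing here requires the CT vocabulary, of course; the claim is only that seeing `(>>=)` as "fmap then join" makes the abstraction less mysterious.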
Well, algebra is a language created with the goal of making simple things easy to communicate. Yes, I'm going off on a tangent and calling operations in an infinite, uncountable basis "simple", but just try to do anything complex in non-linear algebra and it all falls apart already, never mind completely discontinuous functions.
Category theory is a language created to communicate complex models. If it's not obvious yet that you can't use CT the same way you use algebra, let me state so: you can't. CT must be less powerful than algebra, because you can't reason about complex stuff as easily as you can reason about simple stuff.
Yet one mostly cannot write a program nowadays without using CT. There's no language that does not implement a subset of it. You don't notice it for the same reason that fish do not notice the water.
But as powerless as CT must be, it gives you plenty of tools for simplifying your models, and not having those tools available can only make your program more complex, never simpler.
This is a kind of marketing trick Haskell has pulled, which some people fall for (regardless of the merits of the language), namely that Haskell is somehow more "mathematical" than other languages. The truth is that Haskell -- like all programming languages -- has made a design choice, which was to bake correctness proofs based on the Curry-Howard correspondence into the language. This means that instead of bringing the math to your program like other verification methods do[1], it takes you to the math, requiring your participation. Haskell just feels more mathy because C-H puts the math in your face.
Another trick Haskell uses is preferring abstractions that are based on concepts from CT[2], and validating the utility of those abstractions on their "rigorous mathematical foundations". This is again just another case of taking you to the math rather than the other way around. Programming languages are first and foremost human languages, meant to be written and read by humans. As such, their power is drawn from a single source: their interaction with human cognition. Whether an abstraction is "mathematical" (whatever that means) has absolutely no relevance to its power, which comes from cognition (and psychology in general) rather than math. All this "in your face" math does is make the machine's job of verifying your software easier.
A Haskeller tried convincing me that such a language is a good fit for human programmers, too, because all it is is a subset of more traditional, impure languages. But the notion of subset => simpler is a mathematical notion with no relevance to cognition. If humans operated in this way, natural languages would have been much more mathematically simpler, too.
> validating the utility of those abstractions on their "rigorous mathematical foundations"
Some Haskell users may think this, but I think the usefulness of the foundations is not derived from rigor. Instead, it is derived from the wealth of knowledge we have about mathematical structures which makes them easier to reason about. So by linking the programming structures you are using (e.g. monads) with associated mathematical structures (also called monads[1]), all of the mathematical knowledge surrounding that structure suddenly becomes available. This is rather like how doing numerical calculations with matrices benefits from the wealth of knowledge about linear algebra.
Another "mathy" aspect that Haskell and similar languages have is that they like to start with a few fundamental constructs and build everything else within that system. This is more a design philosophy than anything, but I think it does have its advantages in that it results in better composablility, since everything "comes from the same place".
As far as the relationship to cognition, I think Haskell will be better in this regard if you are mathematically minded, because you will understand programming constructs from their mathematical roots. For example, I had an easy time picking up monads because I knew the category theory version. It could be argued that once a person learns these concepts they are given a better means of reasoning about programs than other languages, but I suspect that largely depends on one's intellectual leanings.
>It could be argued that once a person learns these concepts they are given a better means of reasoning about programs than other languages,
This was my challenge. Give concrete examples where you reason about programs better while programming because of the category theory. People have been arguing this for years without any concrete examples. I'm a math person and I would really like to use category theory and mathematical intuition when programming.
I argue that there is very little usable structure for reasoning for the programmers after the definitions. There is no buildup of structure that is common when mathematics is used effectively. The gain from math is very small or nonexistent.
The gain from maths is that you can tell everyone what an exalted, rarefied experience you are having writing the same programs people write in blub, except more slowly, somehow using elite stuff you know from maths.
edit: I have no criticism of maths here, maths is great.
Easier to reason about by whom? Most programming models have their own calculi which are exploited by their respective verification tools. They don't, however, expose that to the developer.
> I think Haskell will be better in this regard if you are mathematically minded
I think that Haskell is better if you are mathematically minded and interested in thinking (a lot!) about abstractions. I like to keep my abstractions cognitively simple (primitive, even), and my algorithms sophisticated and powerful. I'd rather the people writing the verifier put their mathematical leanings to use on the relevant calculus, while I use mine on the algorithmic domain I'm working in.
Just like I don't require the developer using my database to be familiar with data structures and concurrency, I don't want those interested in formal verification requiring me to change my programming style to make their job easier. In fact, their job is quite the opposite: provide verification tools that are useful for the programming languages that make for a good cognitive fit, rather than design languages that make verification easier with the (possibly misplaced) hope that by some unexplained coincidence, a language designed to make a machine's reasoning easier would also do the same for a human.
Software verification is a fascinating algorithmic topic. Language design is a fascinating UI endeavour tasked -- like all UI -- with coming up with great cognitive abstractions. The two should, of course, cross-pollinate one another, but shouldn't become the same. Language designers should spend their time thinking how humans think about computation and how they interact with computers, not how to make programming models fit certain branches of math.
As I said, by those who are aware of the math concepts they are based on. Monads are I think the best example for Haskell; particularly in how they behave with respect to functions, which fits quite nicely with a categorical point of view.
> I think that Haskell is better if you are mathematically minded and interested in thinking (a lot!) about abstractions.
I'm not sure you can be mathematically minded in the pure mathematics sense (which is what Haskell draws on) without being interested in thinking a lot about abstractions. If you are more focused on concrete mathematics, then Haskell is unlikely to be your ideal language, but when people reference mathematics in regards to the language I suspect they are referring to the pure variant. In any case, I am, which I think is the source of most of the confusion here.
> Just like I don't require the developer using my database to be familiar with data structures and concurrency, I don't want those interested in formal verification requiring me to change my programming style to make their job easier.
The only built in formal verification I can think of in Haskell is the type system. In that case, I think your analogy breaks down; I'm not sure how one can manage to program (effectively) in any language without understanding its type system. If you are referring to an external static analysis tool, then I'm not sure how you can put that on Haskell itself.
> a language designed to make a machine's reasoning easier would also do the same for a human
I believe it does for humans who come from a pure math background, in the same way programmers who know an object oriented language will find it easier to grasp another object oriented language. Moreover, it seems pretty clear that this is not a coincidence; it was the driving force behind Haskell's most fundamental design (e.g. Curry-Howard) and its library (e.g. monads). So if you (as you say) are not interested in abstract mathematics, then it is no wonder you don't care for its methods.
> Language designers should spend their time thinking how humans think about computation and how they interact with computers, not how to make programming models fit certain branches of math.
Unless, of course, they are focusing on humans that are not only well versed and comfortable with certain branches of math, but prefer that mode of thinking. Haskell's unofficial motto of "avoid success at all costs" makes me think that focusing on this demographic is likely one of the language's design features, as opposed to making the best language ever for everybody. In any case, I would argue it is good for mathematicians, but not that all of the effort to move to that way of thinking is something universally useful for programmers.
> So by linking the programming structures you are using (e.g. monads) with associated mathematical structures (also called monads[1]), all of the mathematical knowledge surrounding that structure suddenly becomes available.
Only if what the language does with monads exactly mirrors what CT does with monads. Otherwise the language subtly lies to you.
In fact, this may be the case. I seem to recall that Haskell monads only have to obey two of the three monad laws. That is, they don't have to truly be (CT) monads. That's supposed to not matter, but if nothing else, it kind of makes your argument a bit less well grounded.
> This is rather like how doing numerical calculations with matrices benefits from the wealth of knowledge about linear algebra.
In the exact same way, this is only true if the matrix package truly implements linear algebra correctly. To the degree that the matrix package doesn't do that, your reasoning based on linear algebra leads you astray.
> In fact, this may be the case. I seem to recall that Haskell monads only have to obey two of the three monad laws. That is, they don't have to truly be (CT) monads.
That's news to me! Mathematicians wouldn't find the monads that appear in Haskell especially interesting, I think (at least not the ones that are instances of `Monad`), but that's beside the point.
So I'm repeating what somebody said rather than my own claim, but I think it was that Haskell monads didn't have to be... one of left- or right-associative. But they had to be the other.
Wish I knew of a good way to search HN; it was in a comment here maybe 6 months or a year ago...
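For reference, the standard statement of the three monad laws, in Haskell notation. A lawful instance is expected to satisfy all three, and the compiler checks none of them; below is only a spot-check against the `Maybe` instance for a few sample values, not a proof.

```haskell
-- Left identity:   return a >>= f   ==  f a
-- Right identity:  m >>= return     ==  m
-- Associativity:   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)

f, g :: Int -> Maybe Int
f x = Just (x + 1)
g x = Just (x * 2)

leftIdentity, rightIdentity, assoc :: Bool
leftIdentity  = (return 3 >>= f) == f 3
rightIdentity = (Just 3 >>= return) == Just 3
assoc         = ((Just 3 >>= f) >>= g) == (Just 3 >>= (\x -> f x >>= g))

main :: IO ()
main = print (leftIdentity && rightIdentity && assoc)  -- True
```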
If that Haskeller you're referring to was me then I'd like to clarify: I think mathematics is a good fit for humans because it's the output of a few centuries of careful large scale use and design, not because it's simpler by any direct merit.
Human languages might then be said to be the product of an even longer evolutionary process—so they must be the best, right? My thoughts here are that the natural selection pressures on human languages are very different than those on mathematical languages, and we should direct the conversation there.
Mathematics expresses a desire for (a) elegance and (b) economy of proof which might be taken to be an expression of the complete logic of a complex idea. This isn't the perfect thing to optimize for, but I think it's well in-line with what we're doing in programming.
So I tend to take the bet that in the long run PLs and mathematics will converge and PLs will have further to move than the bulk of modern mathematical literature.
I have two issues with your argument. The first is the (implied, I think) assumption that C-H is the only way to apply math to programming languages. I think C-H has so far disappointed, because 1., people don't seem to want it, and 2. because other verification systems (like KeY) or the verifiable imperative, reactive real-time languages of the eighties (like Esterel) whose descendants are still in use today, have so far been used to much better effect.
The second is the assumption that by virtue of being well-studied an academic field would somehow prove a good fit with human psychology. That demands some empirical results which, again, are so far very disappointing.
> So I tend to take the bet that in the long run PLs and mathematics will converge
If by that you mean that the C-H approach will win, I'm happy to take that bet.
I believe that over the past 15 years, the relative usage of Haskell by developers has only decreased (the number of people using Haskell has grown less than the total number of programmers, largely due to JavaScript and the web), and even though I could be wrong about that (I basically made it up :)), it's not too far from the truth.
Don't get me wrong, there are certainly academic influences (coming from C-H languages) on mainstream, or hopefully-mainstream languages (Rust is a clear example), but I don't see any signs of a trend towards adopting C-H as a basic language design philosophy.
Where is this strange obsession with Curry-Howard coming from? It's not a practical issue, with regard to verification or otherwise, and the Haskell world in general doesn't try to claim it's one. It's just an interesting symmetry, that's all.
What I mean by that is the philosophy that verification should be based on explicit types in the original program source. This intricately ties the process of authoring software with that of verifying it. The result is needlessly constraining both the developer and the verifier. It only lets you (easily) write code within what it can prove, and only lets you prove properties that can be expressed by the types that you use.
Other approaches say, we'll let people write code however they feel most comfortable doing (more or less), using types whose expressivity is defined not by the limits of the verifier but only by various concerns over language complexity, mental overhead, performance and teamwork, while we let the verifier prove whatever types it can. If it proves that the variable x is of type "an integer which is always equal to the current time in seconds modulo seven, except on Saturdays when it's always zero", then that's great!
I think given your clarification here I can directly state that (a) I don't think this is necessary and (b) I don't think C-H is a good name for this exactly. Point (b) is a bit silly to discuss, but point (a) is key.
I think there are workflow and ux efficiencies for having the compiler reject programs which are poorly typed. This typing can arise from either internal or external verifications for all I care, but the ergonomics of each vary a lot as well.
I think for both practical and ergonomic reasons, completely automated proofs are weak. There's no reason not to run them if you agree with the validity and importance of the theorem they represent, but by their nature the meaning and effectiveness is highly limited.
By having your proving language align with your programming language you sidestep this issue by forcing everyone's hand into revealing their intentions to the verifier. From a process perspective, this keeps people from cheating ("whip and bondage and all that") and, more importantly, it encourages the code author to tell a complete story of what they're doing instead of merely creating the artifact. This is great for the more "conversational" style of proving that occurs with things like Agda, Coq, Epigram.
External proofs can act similarly, but I think your examples are often really archaic. It seems to me that interesting theorems must be derived alongside your program, be they internal or external. JonPRL is an interesting demonstration of this method (as is NuPRL, but you can't really access that one).
> forcing everyone's hand into revealing their intentions to the verifier
Only those intentions which the verifier can prove. Want a stronger verifier? You need to change the language.
> encourages the code author to tell a complete story of what they're doing instead of merely creating the artifact
But it's not a complete story. Haskell's type system can't express some of the most interesting bits of the story. I find it ironic that Haskell's type system can't even verify the validity of monads (I mean compliance with the monad laws)...
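That last point can be made concrete. Here is a sketch (the type `Sneaky` is invented for illustration): a `Monad` instance that GHC accepts without complaint, even though it violates left identity. The type system sees only the shapes of `pure` and `(>>=)`, not the equations.

```haskell
newtype Sneaky a = Sneaky [a] deriving (Eq, Show)

instance Functor Sneaky where
  fmap f (Sneaky xs) = Sneaky (map f xs)

instance Applicative Sneaky where
  pure x = Sneaky [x, x]  -- duplicates the value: breaks left identity
  Sneaky fs <*> Sneaky xs = Sneaky [f x | f <- fs, x <- xs]

instance Monad Sneaky where
  Sneaky xs >>= f = Sneaky (concatMap (unwrap . f) xs)
    where unwrap (Sneaky ys) = ys

-- Left identity demands: return a >>= f == f a. Here it fails:
f :: Int -> Sneaky Int
f x = Sneaky [x + 1]

main :: IO ()
main = do
  print (return 3 >>= f :: Sneaky Int)  -- Sneaky [4,4]
  print (f 3)                           -- Sneaky [4]
```

This compiles and runs; the law violation surfaces only at the value level, which is exactly the gap being complained about.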
> External proofs can act similarly but I think your examples are often really archaic.
I don't know if they're archaic, but I do know that Amazon uses TLA+ to verify their Java code (and TLA+ is somewhat similar to Esterel's verifier)
To the first point: I'm happy to generalize my claim outside of merely C-H school (which I'd call the "logical" school, maybe this is the same or maybe it's more broad than what you mean by C-H) and make a mere subclaim that the logical school will prevail.
To the second point: I think there's plenty of existing empirical results, but not direct ones. E.g., how many advanced physicists today are using their own, highly divergent language for writing their theories (not their code, notably).
I think the direct empirical results—like the one you claim and I'd, without verification, buy—are conflated with technological progress and desire to get things done over getting them done consistently or well.
> make a mere subclaim that the logical school will prevail.
I'm not entirely sure what you mean, but if you mean that programmers will write code in languages that are directly tied to logical proofs (through the type system), then I'll take that bet.
> E.g., how many advanced physicists today are using their own, highly divergent language for writing their theories (not their code, notably).
Writing programs is not (or no longer) a scientific endeavor. I think software development is much more similar to the work of an architect. Some crucial parts can and must be verified sufficiently, others may have different MTBF requirements, other aspects are just aesthetic, very often you need to cut corners to meet budget and time requirements, and above all, your approach must be pragmatic and take into consideration that the building will need to be maintained and renovated by people other than you -- people with varying qualifications -- long after you've moved on to other projects.
> desire to get things done over getting them done consistently or well.
Perhaps, but what makes you think this desire will change? (I don't think it should change, but that's a different issue)
I think your point about architecture versus science is a good thing to think about. On one end, I agree with everything you're saying. On the other end, I see that architecture only works due to a large set of bombproof civil engineering practices employed both in the past to lay down the principles of architecture and continuously to validate that the work of the architects won't kill anyone.
My understanding is that these people are crucial to the success of any architecture project.
So with that I might ask whether the foundations for CS have been laid and whether the built-in civil engineers we employ are sufficiently able to do their jobs.
There is an immediate question as to whether we should care... and I'll just wave that aside for the moment since I have as least an aesthetic horse in the race that we should care.
After that we might ask if the founders of modern CS have done a sufficient job laying their foundations so we can use them. I'd believe there is some significant debate about whether this is true (Dijkstra would argue we're not even close, is my guess).
I'm willing to put a little, conservative bet on the idea that even if the "founders" all agreed their work was done that they'd be wrong, too. I just think this field is too young.
I thought about exactly that as I was writing the analogy, and I just don't know... :)
I do think, though, that just like with architecture, different components require different care, and those that require absolute correctness are a rather small part. I also think that math (as in architecture) is only a very small part of the solution. It's essential, but it doesn't dictate the path of the project, and in the end most of it is just a lot of knowledge of how people behave (the people who live or work in the building and the people who will run, clean and maintain it).
I do, however, believe in the combination of familiar "cognitive" languages alongside verification tools. For example, see how Amazon uses TLA+ to verify their Java programs[1]. This alone is more real-world usage than Haskell has ever seen.
I think you're pointing out the difference between the "art" and "science" of architecture. A similar thing arises, IMO, in technical products between product and technology. There's a whole hell of a lot of psychology on the product side, and I think there always will be. Constrained by some artificial line to the technology side, I take my bet that mathematical tendencies will dominate.
It was me[1] and I'm afraid you are still mistaken on a number of points:
> The truth is that Haskell -- like all programming language -- has made a design choice, which was to bake in correctness proofs based on the Curry-Howard correspondence into the language
Haskell is not a logic and is not used to write proofs for program verification. General-purpose functional languages don't have all that much to do with CH, and they don't include any design decisions that favors a particular approach to verification. The type system helps you enforce useful invariants in your code and can make verification easier, but it's completely orthogonal to how you can prove programs correct.
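A minimal example of "enforce useful invariants in your code": `Data.List.NonEmpty` (which ships with base) makes the non-emptiness of a list part of its type, so the partiality of Prelude's `head` is ruled out at compile time rather than caught at run time. The `firstArg` name below is made up for illustration.

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- NE.head is total: a NonEmpty value can never be the empty list,
-- so no run-time check (or crash) is possible here.
firstArg :: NonEmpty String -> String
firstArg = NE.head

main :: IO ()
main = putStrLn (firstArg ("default" :| ["--verbose"]))  -- prints "default"
```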
> I think C-H has so far disappointed, because 1., people don't seem to want it, and 2. because other verification systems (like KeY) or the verifiable imperative, reactive real-time languages of the eighties (like Esterel) whose descendants are still in use today, have so far been used to much better effect.
Theorem provers based on C-H are general-purpose logics that can be used for a lot more than just program verification. So, different tools for a different purpose. Having said that, they have proven to be very useful for verification of complex systems like those you listed, where existing tools have trouble even stating what correctness means. That is, they can help us answer the question "how can we be certain that this compiler, verification system, etc. is correct?" That typically includes representing your object language in your theorem prover and proving things about the language, which is quite different than showing that an algorithm meets some specification.
> But the notion of subset => simpler is a mathematical notion with no relevance to cognition. If humans operated in this way, natural languages would have been much more mathematically simpler, too.
You can show that, quite literally, any "true property" of a program in your more complicated language is also true of the corresponding program in your simpler language. So, if you come to some correct conclusion, even if the way you reason is completely bogus and just happens to work, it will be valid for your simpler language.
There's no need to solve the much, much harder (and poorly specified) problem of "what is easiest for humans". That's not to say that the current crop of functional languages is presented in the easiest way for people to understand, but that's an entirely different problem independent of semantic considerations.
> The type system helps you enforce useful invariants in your code and can make verification easier, but it's completely orthogonal to how you can prove programs correct.
That's true for Java. Haskell, OTOH, constructs the entire language around verification through types and pure functions, and everything else is secondary. I don't know if the connection to C-H was made later or was a conscious design decision, but the result is the same from the user's perspective.
> Theorem provers based on C-H are general-purpose logics that can be used for a lot more than just program verification
We've discussed this before, and I believe those uses are too niche for the Haskell approach to gain any significant traction.
> You can show that, quite literally, any "true property" of a program in your more complicated language is also true of the corresponding program in your simpler language.
Of course, but that is only useful if a human programmer writing in the more "complicated" (by your definition) language would use the "simpler" subset, which, by my definition, she wouldn't do.
> There's no need to solve the much, much harder (and poorly specified) problem of "what is easiest for humans".
It's not a mathematical problem with a clear solution, and therefore requires no robust specification -- just like architecture (of buildings), or any other UI design problem. We learn what works through trial and error and try to do better.
I find it hard to accept the point of view which considers UI to be a mathematical puzzle, or, alternatively, that doesn't recognize programming languages as UIs.
> but that's an entirely different problem independent of semantic considerations.
From a UI design perspective, you can't separate the two, because if the user fails to interact with the machine well, you don't know if it's a matter of design or presentation unless you empirically test it.
> That's true for Java. Haskell, OTOH, constructs the entire language around verification through types and pure functions, and everything else is secondary. I don't know if the connection to C-H was made later or was a conscious design decision, but the result is the same from the user's perspective.
The type system of Haskell is far more interesting and powerful than that of Java, but I think you're overstating the difference. While Haskell's type system does allow you to do a few more things than just state constraints on your code (such as construct more expressive types and make use of type classes and polymorphism), at the end of the day the overwhelming utility of the type system is being able to more easily make guarantees about your code, just as it is in Java.
Curry-Howard only really becomes interesting once you have dependent types, and only becomes bullet-proof when your language is total. Haskell has neither property. The connection to C-H is something which might interest a large subset of Haskell programmers (who would likely take that interest into a study of Coq, Agda or Idris), but it has pretty much no bearing on the day-to-day usage of Haskell.
> A Haskeller tried convincing me that such a language is a good fit for human programmers, too, because all it is is a subset of more traditional, impure languages. But the notion of subset => simpler is a mathematical notion with no relevance to cognition. If humans operated in this way, natural languages would have been much more mathematically simpler, too.
I totally agree on this point. I've tried my hand at Haskell several times only to find myself going back to some other language like F#, OCaml, or Common Lisp. It just feels too 'stiff' as a programming language. I can't seem to write in it without running into its limitations (not strictness -- I can handle strictness in a language). When I write a program I think about what I have to pull data from and what the data should become along the way. Even if parts of my program will be standardized into a library for a project, I still like being able to set it up in such a way that I can modify it easily. Haskell doesn't seem to let me do that without redoing entire function signatures, and sometimes, due to the notation, the signatures don't fit the data well, if at all.
I don't think the connection between Haskell and Curry-Howard is that strong (it's stronger since GADTs were added, but still, functions are not even terminating). Rather, I think the main thing that makes Haskell "mathematical" and easy to reason about is that it is pure, so you can reason about things equationally.
That will help verification no matter what framework you are using. For example, look at the seL4 verification. That was done in Isabelle/HOL, so it doesn't use Curry-Howard in any way. But even so, they proceeded by writing two versions of the operating system, one in C, and one in Haskell to make it easier to reason about.
> Category Theory always felt almost mechanical to me. Lots of definitions and rules for assembling diagrams and such, but then what.
Category Theory is more about getting (as one of my professors put it) "a God's-eye view of mathematics". What I have found deepest and most useful is the way it provides concrete definitions which unite constructions that mathematicians were already calling the same thing (e.g. products, co-products). It is also absolutely amazing at showing links between different (seemingly unrelated) parts of mathematics. In this way, it forms a sort of lingua franca for mathematics. This math StackExchange answer[1] gives some really good concrete examples.
If you want a book which gives a lot of more concrete results along with category theory, you should try Algebra: Chapter 0. It is more focused on abstract algebra, but it does it all from a categorical perspective while not assuming you know one iota of category theory (at the start).
They are mechanical because CT is more a language than a field in its own right. They're mechanical in the same way that a lot of logic or universal algebra is—the interesting stuff arises when you start making choices and using the tools to express your ideas.
On the other hand, CT was invented to talk about homology, so if you want to see a ton of examples of CT being used "in its natural habitat" then look there.
> This is infuriatingly true. Everything you do in Haskell you can understand in terms of what code gets executed where and how. This is just as true of Haskell as it is of any other high-level language.
I think anyone who really understands what they're talking about (which is, admittedly, not everyone who talks, sometimes loudly) would happily agree with you and then wonder what we were arguing about :)
Haskell tries to take the line that the "compilation semantics" are second-class citizens to the "equational semantics". A lot of the technology Haskell introduces is in some way or another directly aimed at this. It doesn't, however, mean that the compilation (or operational, though that word gets abused and won't take you exactly where you want to go if you google it) semantics isn't totally valid, interesting in its own right, and a useful thing to study.
Some of the core-er Haskellers like to write about this stuff too. I particularly love Edward Z. Yang's posts on the Haskell Heap:
http://blog.ezyang.com/2011/04/the-haskell-heap/
But all that said, the reason for having the compilation semantics take a back seat is not because it's unimportant but instead because if you don't intentionally take that stance then it, due to its front-and-center role in executing your language, will become dominant.
What we really want is a language with many compatible and well-developed choices of semantics! Then we can pick and choose our reasoning tools on the fly and know that we won't be led astray by small (or vast) incompatibilities.
It turns out this is tremendously difficult.
It's also important to note that Haskell only does a so-so job at it. Interestingly, this makes it one of the absolute best examples of any large language pulling it off. SML probably takes the cake depending on what flavors you're interested in, though.
Well, it might be more accurate to say it was written with the influence of everything. The object system borrows liberally from Smalltalk, many of the other new features are influenced by Haskell, and it's all wrapped up in a Perlish way of looking at programming.
In short, if you hate how Perl 5 allows for multiple ways to accomplish something and think it's a big mess, I imagine you might find more to dislike in Perl 6. If you like power, flexibility, sane defaults for the common case, and lots of features, Perl 6 will be a wonderful place.
Then again, I'm of the opinion that a large percentage of people who think they dislike Perl 5 as an unorganized mess are actually laboring under a false partial impression. Perl is actually highly organized around core principles; it's just that those principles are radically different from what the familiar syntax implies. Approaching Perl from a procedural C-style background and expecting the same will mostly work, but there will be aggravating times where you just don't understand why certain design choices were made that continually cause friction.
At my school (NTNU) there's even talk of removing the compiler course. It's a ye-olde write-a-compiler-for-a-C-like-language-in-C course ("the CS students should learn C at some point"), which I find all kinds of wrong, but it's still sad.
Just curious, what's wrong with writing a compiler for a C-like language in itself?
It seems to me that one of the goals of a compiler course is to fill the mysterious gap between what machines do and what high-level languages do. For maximum educational effect, the language being compiled should be the same as the language used for the compiler. Anything higher level than C would probably require a runtime, with garbage collection etc., so a C-like language seems like the obvious sweet spot.
Well, because writing a compiler is pretty hard, and I would like to not juggle pointers to arrays of pointers while doing it. C wastes your time, basically. And a C-like language is profoundly unexciting, at least to me; I can almost see the assembly lying underneath.
Well, they could write a Lisp in Python, but that's more a single night's homework than an entire class (with the appropriate foreknowledge), to my understanding, so what's an interesting middle ground? A logical language (e.g. Prolog) would be interesting, but I have no idea of the complexities involved. I imagine implementing in Python would reduce the complexity of the actual programming step, allowing more time for what I think are the interesting parts of a compiler course.
My compilers course used Java to implement a simplified Java-like language. Writing a compiler in Java isn't as gloriously straightforward as in Lisp/ML/Haskell, but it's a lot less annoying than C.
Just checked NTNU. I've spent some time in Trondheim while doing consulting for NetCom; nice city.
My university (FCT/UNL) had a strong focus in language design, data structures and systems programming back in the 90's as part of the compulsory lectures, with everything else being optional credits.
At least for me, coming from an imperative language, OCaml's runtime seemed like magic. But I know now that it's only because I was comparing OCaml to C.
I think getting lost in the implementation details is what gets many Haskell learners, because the majority know an imperative language and use it as the base.
That's the definition of a Turing complete language.
What I dislike the most is that Haskell's execution order in particular isn't hard to reason about[+]. Yet, because things don't simply happen in the order you write them, people tell beginners not to think about execution order at all, and then wait until laziness bites them.
[+] Really. If you just assume that the only thing that gets evaluated is the first argument of the IO monad's (>>) operator (not (>>=)), and that it pulls in evaluation until it gets all the data it needs, you'll have 99.9% of the model done. Read about GHC green threads and the FFI and you'll get everything that does not use unsafePerformIO right.
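For instance, here's a toy program (using Debug.Trace, which prints to stderr when a thunk is actually forced) that makes the demand-driven model visible:

```haskell
import Debug.Trace (trace)

main :: IO ()
main = do
  -- Binding x only builds a thunk; nothing is evaluated here.
  let x = trace "evaluating x" (2 + 2 :: Int)
  putStrLn "before any demand"
  print x  -- first demand: the trace fires (on stderr), then 4 prints
  print x  -- the thunk was forced once and shared, so no second trace
```

The IO actions still run top to bottom; only the pure value's evaluation is deferred until `print` demands it.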
That's a really magical analogy right there. Though I'd take it a step further and say nowadays even stumping for an FP language smacks of the same sort of thing.
Case in point: I've probably done more to evangelize functional programming by just demonstrating to colleagues that they too can write their own higher-order functions - just like LINQ! - in C# than an army of Haskellers could accomplish in a lifetime of hurling jargon from category theory. My thought is, if FP is really so great then I should be able to show people by writing some functional code and then having someone else come along later and go, "Hey, that's great!" When I find I can't do that then that's a signal that it's time to double-check the ingredients of my fruit-flavored beverage.
Yeah, that does mean I have to accept that Racket will forever be a hobby language for me. My boring sellout Herman Miller-cradled enterprise developer butt is comfortable with that.
> My thought is, if FP is really so great then I should be able to show people by writing some functional code and then having someone else come along later and go, "Hey, that's great!" When I find I can't do that then that's a signal that it's time to double-check the ingredients of my fruit-flavored beverage.
I don't know about that. Sometimes people must make the effort to move beyond their comfort zones in order to find tools that are (potentially) more useful. You cannot expect every improvement to be self-evident with minimal effort. There are significant rewards in "making the jump", so to speak.
I'd submit that if I can't find a spot where my favorite trick solves a problem in a way that I can easily demonstrate is better than the existing alternatives, then my favorite trick is a solution in search of a problem.
I agree with everything you say except for the "easily" part. Sometimes you have to make a cognitive effort; said effort is by definition harder than not making the effort. But the payoff may be worthwhile!
I don't subscribe to the notion that for something to be worthwhile, it must be readily apparent and "easy" to understand. Especially in the context of FP, where most programmers -- but this is changing, thankfully! -- are traditionally trained in imperative languages, which means for them learning the FP "purist" mindset implies a significant effort.
My grandmother (R.I.P.) could never learn how to use an ATM. For her, using one was daunting and too much of an effort. It terrified her that she might make a mistake and push the wrong button. Back when she was young, ATMs didn't exist and she didn't need them. Her solution in modern days was to ask a helpful relative to use the ATM for her. Does this mean these hellish machines are "a solution in search of a problem"?
One technical reason that makes me prefer OCaml to Haskell is Haskell's weak module system. I think there should be a good way to parameterize one module over another module (interface) and to use different instantiations of the module in the same program.
I think people do such things using type classes and typing constraints. But I found this awkward, because the compiler needs to be able to resolve the constraints automatically, and there are issues with overlapping instances and the like.
To quote Bob Harper:
As a consequence, using type classes is, in Greg Morrisett’s term, like steering the Queen Mary: you have to get this hulking mass pointed in the right direction so that the inference mechanism resolves things the way you want it to. [https://existentialtype.wordpress.com/2011/04/16/modules-mat...] (I think this applies most of all if one is using type classes to emulate a module system. For other purposes, type classes are quite nice.)
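To make the contrast concrete, here's a sketch (all names invented) of using a type class as a module signature with two instantiations. Which instance a piece of generic code uses is decided by type inference at the use site, not by an explicit functor application as in OCaml:

```haskell
-- A class standing in for a module signature (all names invented).
class Stack s where
  empty :: s a
  push  :: a -> s a -> s a
  pop   :: s a -> Maybe (a, s a)

-- Two "instantiations" of that signature.
newtype ListStack a = ListStack [a]

instance Stack ListStack where
  empty = ListStack []
  push x (ListStack xs) = ListStack (x : xs)
  pop (ListStack [])       = Nothing
  pop (ListStack (x : xs)) = Just (x, ListStack xs)

-- A variant that also tracks its size.
data CountedStack a = CountedStack !Int [a]

instance Stack CountedStack where
  empty = CountedStack 0 []
  push x (CountedStack n xs) = CountedStack (n + 1) (x : xs)
  pop (CountedStack _ [])       = Nothing
  pop (CountedStack n (x : xs)) = Just (x, CountedStack (n - 1) xs)

-- Generic code "parameterized over the module"; the implementation it
-- runs against is resolved entirely by inference.
popAll :: Stack s => s a -> [a]
popAll s = case pop s of
  Nothing      -> []
  Just (x, s') -> x : popAll s'
```

Selecting an implementation means annotating a type somewhere, e.g. `popAll (push 1 (empty :: ListStack Int))`, and inference has to steer everything from there, which is the Queen Mary effect in miniature.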
I've toyed with Haskell before (I enjoy toying with languages in my spare time). It's a nice language with a good community. But for some reason I could not see myself using it as one of my main goto (no pun intended) languages. Not due to any technical reason; mostly because there is no easy introduction to it. No small projects to undertake showing what practical uses it might have. No Twitter clone using SQLite (laugh all you want, this type of tutorial project helps you understand how the language should be used and showcases libraries). It just seems to be purely about doing math. But I'm probably wrong. Last time I looked into it was about a year ago. I'll gladly look again if anyone can comment about it.
Have you checked out Real World Haskell by Bryan O'Sullivan? It's different from most Haskell resources I've found in that it introduces you to monads & IO fairly quickly, so you're not trapped with toy math problems. That was an issue I had with Learn You a Haskell for Great Good - finding the sum of all odd squares that are smaller than 10,000 is good and all, but not exactly a programming problem I have often...
Note that the book is a bit dated, though most chapters are still "valid". See this question from Stack Overflow [1] - Which parts of Real World Haskell are now obsolete or considered bad practice?
I have not. Will do so now. Thank you for recommending it. For anyone interested, here is the direct link to the online version of the book: http://book.realworldhaskell.org/read/
I’ve been fortunate enough to use Haskell professionally at 2 jobs, and it’s been my go-to language for hobby projects for a few years. I find it useful for infrastructural-type projects where people would ordinarily reach for a managed imperative language such as Java or C#.
The main advantage for me is in code reuse and refactoring, something that really shines for medium-to-large projects, and is unfortunately hard to showcase in tutorial style.
Getting started wasn’t easy, but as you imply, not really due to the language—mainly because the learning materials didn’t suit me. I couldn’t bring myself to finish Learn You a Haskell or any other tutorial, so I just used the language to build stuff and figured things out.
Been thinking about writing a book about my experiences, but it’s Yet Another Project I’ll have to make the time for…
Perhaps a bit of a different viewpoint, but one of the reasons that I keep coming back to Haskell is that it lets me refactor things that I can't in other languages. Take the following C code:
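A minimal sketch of what such C might look like, mirroring the Haskell version below, with hypothetical `message` and `body` structs (any pointer may be NULL when the data is malformed):

```c
#include <stddef.h>

/* Hypothetical layout: any pointer may be NULL for malformed data. */
struct body {
    char **lines;   /* NULL-terminated array of text lines */
};

struct message {
    struct body *body;
};

/* Return line 10 of the message body, or NULL if anything is missing. */
const char *response(const struct message *message)
{
    if (message == NULL)
        return NULL;
    if (message->body == NULL)
        return NULL;
    if (message->body->lines == NULL)
        return NULL;
    for (int i = 0; i < 10; i++) {
        if (message->body->lines[i] == NULL)
            return NULL;   /* fewer than 11 lines */
    }
    return message->body->lines[10];
}
```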
Writing all of those if statements grows tedious. A lazier programmer could just drop them, but then the program would crash whenever the program received malformed data.
With Haskell, though, I can refactor out the if statements and get the following equivalent code:
response :: Messy -> Maybe String
response message = do
b <- body message
lines <- text b
lines `atMay` 10
I want to be clear on a couple of points. First, the Haskell code performs all of the same "null" and bounds checks that the C code made, so this is an apples-to-apples comparison. Furthermore, this isn't using exceptions, like a similar piece of C++ code might try. There's no stack unwinding and nothing can escape to contaminate the rest of your program. Finally, this isn't just a weird special case built into the language. With a minimal amount of effort, I could have each failure return a unique error code.
errorCode :: Maybe a -> b -> Either b a
errorCode (Just value) _ = Right value
errorCode Nothing code = Left code
response :: Messy -> Either Int String
response message = do
b <- body message `errorCode` 1
lines <- text b `errorCode` 2
(lines `atMay` 10) `errorCode` 3
I don't know a way of doing that in any of the C style languages. In those languages, you're forced to add in another if statement every time your code touches message->body to ensure that you're not going to segfault. With Haskell, I can refactor away those if statements and I don't have that code being duplicated all throughout my project.
Just a small side note, you don't even need to write your errorCode function. That and a number of other useful functions are provided in the errors package (http://hackage.haskell.org/package/errors). Also, I like to use string error messages because they're easier to grep for. Here's what your response function would look like with those changes.
response :: Messy -> Either String String
response message = do
b <- note "error in body" $ body message
lines <- note "error in text" $ text b
note "error in atMay" $ lines `atMay` 10
For anyone reading this, I want to confirm that mightybyte's code would be far more idiomatic. I would never use integer error codes in a Haskell program, but I wanted to make the clear connection to C idioms. Also, using the errors package would be more appropriate than rolling your own function there, but I wanted to emphasize that it was a simple, two-line function, and not any fancy wizardry that requires caring about category theory.
EDIT: rechecking the errors library, the final line would probably be more clear with
I've used a similar construct in dealing with some JSON data recently, using Either String a:
(?) :: Maybe a -> String -> Either String a
Nothing ? s = Left s
Just a ? _ = Right a
foo :: Value -> Either String Int
foo val = do
nested <- val ^? key "foo" . key "bar" ? "Could not find foo.bar"
mid <- nested ^? nth 2 . _Integral ? "Second element of foo.bar was not an Integral value"
return mid
which has really helped deal with the stringly typed nature of JSON. I should also mention I stole the idea (and operator name) from one of the Haskell parser combinator libraries, which allowed you to name a whole parser expression, so errors would say "could not find ident" instead of "Expected one of a,b,c,d,e,f,g,h...., found X"
> Furthermore, this isn't using exceptions, like a similar piece of C++ code might try. There's no stack unwinding and nothing can escape to contaminate the rest of your program.
It always escapes to contaminate the rest of your program. In the C code, it escapes as a returned NULL. In C++/Java, it would be a thrown exception. In Haskell, it's the Maybe.
The Maybe is pretty close to isomorphic to a checked exception. The caller either has to handle it, or to return a Maybe/declare that it throws an exception itself. At some layer, you have to handle the Nothing/catch the exception. The Nothing short-circuits part of the computation; so does the exception. From where I sit, they're playing almost exactly the same roles. (So is the NULL in C, but it's clumsier than either an exception or a Maybe.)
You're right that the Maybe and Either types are very similar to checked exceptions. And, from that perspective, you're absolutely right that it contaminates the code above it, similar to a checked exception. However, I would subjectively argue that Maybe is the less clumsy alternative. For the lazy programmer, the easy way to handle exceptions is to ignore them and let upstream deal with it. Even for checked exceptions, ignoring the problem is just a throws statement away. My subjective argument, though, is that monads are just slightly awkward enough to handle that you're encouraged to deal with the problem instead of ignoring it. I'd much rather handle whatever condition occurred than constantly deal with liftM and >>=, so I make sure to handle my "exceptions" as soon as I have enough information, just to make my own life easier. Your mileage may vary.
These aren’t quite the same. With “x && x->y”, you’re using a boolean to encode the assumption that “x” is valid, but with “x >>= getY”, the type system has a proof, which makes refactoring safer.
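A tiny, self-contained illustration (record names invented to mirror the C chain):

```haskell
-- Invented types standing in for C's message->body->text chain.
data Message = Message { body :: Maybe Body }
newtype Body = Body { text :: Maybe String }

-- The C-style "m && m->body && m->body->text" collapses to monadic bind;
-- any Nothing short-circuits the rest, and the Maybe in the result type
-- is the compiler-checked proof that callers must handle absence.
bodyText :: Maybe Message -> Maybe String
bodyText m = m >>= body >>= text
```

Both `bodyText Nothing` and `bodyText (Just (Message Nothing))` come out as `Nothing`, and the `Maybe String` result type forces every caller to acknowledge that possibility.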
Yes, but pjmlp is massively understating the case. Haskell's type system and fine-grained level of compositionality is remarkably helpful when it comes to refactoring.
Although you can get some serious mileage out of C's type system (such as it is). Refactoring in Python is... broken. I can't imagine why anyone would use Python when they need their code to stay nimble.
For me, using Yesod has been quite a revelation. All of the errors in db operations get caught during my cabal build. This link https://github.com/yesodweb/yesod/wiki/ghci is something I used to test my db code all within the REPL cycle after setting some pg env variables. All in all, it's way smoother than many other environments I have worked with.
I've reached for Haskell for a number of small, scripty projects and the experience has been good. Not much better or worse than a similarly small scripty thing would have been in any other language.
Haskell shines the brightest when refactoring a moderately sized project (I have been yearning for this, working in a smallish Python codebase), but nothing prevents using it for small things.
These exist in Haskell as "Pattern Synonyms". Here's a partial translation of some of the F# examples on MSDN to Haskell;
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}
pattern Even <- ((`mod` 2) -> 0)
pattern Odd <- ((`mod` 2) -> 1)
testNumber x = show x ++
case x of
Even -> " is even"
Odd -> " is odd"
data Color = Color { r :: Int, g :: Int, b :: Int }
pattern RGB r g b = Color r g b
-- NB: this is bidirectional automatically
printRGB :: Color -> IO ()
printRGB (RGB r g b) = print $ "Red: " ++ show r ++ " Green: " ++ show g ++ " Blue: " ++ show b
-- pretend we have functions to and from RGB and HSV representation
toHSV :: Color -> (Int, Int, Int)
toHSV = undefined -- implement this yourself!
fromHSV :: Int -> Int -> Int -> Color
fromHSV = undefined
pattern HSV h' s' v' <- (toHSV -> (h', s', v'))
-- here we explicitly provide an inversion
where HSV = fromHSV
printHSV :: Color -> IO ()
printHSV (HSV h s v) = print $ "Hue: " ++ show h ++ " Saturation: " ++ show s ++ " Value: " ++ show v
-- demonstrating being able to use pattern
-- to construct a value
addHue :: Int -> Color -> Color
addHue n (HSV h s v) = HSV (h + n) s v
> When I met a Haskeller at the pub after a mini-conference and I mentioned that I didn't like Haskell he began frothing at the mouth and punching the table. When we got up to leave he refused to shake my hand.
This is just gold!
All the Haskellers I have met seem to be quite normal, helpful. Maybe this guy rubs them up the wrong way.
That's not my experience. Most Haskellers are indeed good people, but quite a few "vocal" ones are extremely arrogant and intellectually dishonest. Let me give some examples that I've met recently:
- A Haskeller once said Haskell is just as easy to learn as Go. I mean, yeah, many moderately complex languages might be as complex as Haskell, but seriously, Go? It is designed to be as simple as possible. Hell, it doesn't even have generics, which is considered pretty essential in today's statically typed languages. And this claim isn't an outcome of ignorance: he is the author of a very popular Haskell learning resource.
- Another said: "Steep learning curve is an interesting expression, because what it actually means is that one is learning quickly." Ok, that's not my interpretation of "steep learning curve" of Haskell.
- The "average Java programmers are worse than average Haskell programmers" thing. This sentiment even showed up in one of the "how to spread Haskell to the industry" presentations. I'm pretty sure that if Haskell does get accepted by the industry, it will have exactly the same dumb programmers using it. So, what's the point?
You must be lucky if you haven't met any of them in the Haskell community. Actually, their reputation outside of the community is quite terrible. This must be resolved if Haskell wants to succeed. (And yeah, please stop saying "avoid success at all cost". I know it can be interpreted as "avoid $ success at all cost", just stop it.)
All communities of significant size will have good and bad apples. The Haskell community is no exception. But in my experience most of the community are fantastically patient and helpful people.
This may indeed be true, but Haskell has nonetheless developed a reputation, and it's not a reputation that it shares with just any other community. The only comparison I can think of is Lisp in the '90s and early 2000's.
My suspicion is this: there are lots of ways to make people feel inferior, or at least make them feel like you think you're superior. Most of them are things people probably don't do on purpose. But when you do it, the person you do it to is going to think you're a smug jerk nonetheless. There are also a lot of people who are supremely self-confident and therefore largely immune to being made to feel inferior. Many of them have been like this all their lives, and are therefore oblivious to this social subtlety.
One easy social gaffe that doesn't get discussed much is having no sense of humility about your language's obtuse syntax. Yes, I know, it's a familiarity issue, all languages have obtuse syntax, etc. etc. Doesn't matter. What matters is, if your language's syntax is different enough from Java's or Pascal's then it's going to be widely perceived as being obtuse. And responding to that with anything along the lines of, "Huh, makes perfect sense to me," risks coming across as a smug jerk.
Using the M-word is another famously risky behavior. Ironically, at least in my experience using it is also actively harmful to any attempts to explain the underlying concept, because at this point that word is @$#@ loaded. You can take someone who is already implicitly comfortable with the concept and uses them all the time in practice, and reduce them to a confused heap just by mentioning the M-word. And at that point, any attempt to pursue your goals will also risk coming across as a smug jerk. No, trying to address this issue head-on by writing blog posts with titles like "You already understand. . ." does not help. It just makes you come across as a condescending smug jerk.
I once had the fortunate experience to spend a week at a hot spring in Japan with many of the top Haskell people. I have to say, they were quite polite, intelligent, open minded, and could drink more than I could.
But every language community has its toxic elements.
> I'm pretty sure if Haskell does be accepted by the industry, it will have the exactly same dumb programmers using it. So, what's the point?
That's a property of all niche languages. Yes, if they go mainstream, they'll lose it, but until then it makes for an easy way to bias the pool of people you could hire for a position in a good direction.
If we assume companies do not filter their hires well, or that filtering them is costly (a huge "if"), it becomes a competitive advantage. And, like most competitive advantages, if enough people use it, it goes away.
Why do I have the sensation that critiquing programming language communities in general is going to end in, shall we say, less than constructive conversations?
"All the Haskellers I have met seem to be quite normal, helpful"
This is the same kind of weak response you see from Rubyists whenever someone points out their community's well-documented asshole problem (e.g. Steve Klabnik bringing a woman to tears by publicly ridiculing her project and offering only an insincere non-apology when called on it, any blog post ever by DHH or ZS, Felipe Contreras' tantrums on public mailing lists):
"Well, in my experience, everyone in my community is super nice and helpful, so you must be wrong."
Off the top of my head, I can easily recall several Haskellers on HN who've come across to me in the past as pricks, and on multiple occasion at that: dons, loup-vaillant, and coolsunglasses.
The Haskell community, like that of Ruby, is quickly getting the reputation it deserves.
The thing is that it's not a "weak response", it really is people's opinion.
I've experienced some cases where I've been spoken to a bit sharply, and some where people have come across a bit impatient when I haven't "got" something they understand. The thing is that I don't process it as people coming across as a dick, so my subjective experience is that the community is helpful. As to the three examples [1] you picked out of Haskellers coming across as pricks, I honestly, truely, completely don't see anything in any of these comments beyond forthright assertions of opinion. Would you be able to maybe break down what you find objectionable?
[1] As disclosure: I currently work with dons, and I also know willtim.
You're defending a guy who marshaled his supporters to spam his unremarkable tiling window manager here so heavily in some apparent (and unsuccessful) Haskell promotion campaign that there are now nearly as many posts (search if you don't believe me) about it as there are about KDE. (At least the size of the downloads are comparable; a whopping half GB for those without the exact required versions of GHC and sundry cabal dependencies installed.)
If you don't understand what's offensive about condescendingly spewing falsehoods like "if it compiles, it works", or about saying that you can't respect a CS department that uses Python for its introductory CS courses instead of Haskell (and thus doesn't expect its freshmen to know what a Kleisli category is), you've obviously drunk too much of the Haskell Kool-Aid to maintain objectivity.
I'm not defending anyone - I'm seeking to understand your point of view better. But language like "marshaled his supporters" makes me wonder if there's something else I've missed. I use XMonad and quite like it; other Haskellers I know have moved on to i3. Not sure I've ever seen it being "spammed".
The "if it compiles, it works" thing... that's unfortunately overblown and hyperbolic, but in my experience of the Haskell community in London, it's usually uttered as a tongue-in-cheek joke. I'm not sure I'd characterize it as condescendingly spewing falsehoods.
However, let me tell you about my first experience building a little thing while learning Haskell: As a total newbie, I wanted to pull some timeseries from a database and plot them on a UI. It took me days of grabbing some time here and there, learning which DB library I should use and how the diagrams package worked. I finally got it to compile without any errors. As a python programmer this was a completely new way of programming for me, and I now fully expected the real work of making it run to begin. The VERY first time I ran it - up popped a timeseries visualization. Mind. Blown!
So as you state it, "if it compiles it works" might be a falsehood, but there's more than a grain of truth there.
As for your other comment. What Tim actually said was:
> An institution that teaches Python under the banner of computer science, certainly loses prestige in my opinion
He didn't say that he "can't respect" it, nor did he say that CS should exclusively be taught in Haskell. He said, that teaching it in python lowers its prestige (and later clarifies that he means that as a vehicle for teaching CS rather than programming).
As I mentioned, I'm a self taught programmer, beginning with python. Since then I've worked at hedge funds and investment banks in python and built a startup on Haskell. My opinion is built on my own direct experience, and I find it somewhat disrespectful that you dismiss it as "drinking the Haskell Kool-Aid".
Do you seriously believe that if xmonad had been written in anything else (and perhaps by anyone else) HN and proggit still would have been inundated with as many posts about it as they were? Please. It was ridiculously obvious to anyone (or at least it should have been) what was going on.
"The VERY first time I ran it - up popped a timeseries visualization. Mind. Blown!"
Almost every programmer using almost every language has experiences like this. That hardly justifies making such expansive claims, claims that even a dependently-typed language would have difficulty justifying.
"Since then I've worked at hedge funds and investment banks in python and built a startup on Haskell. My opinion is built on my own direct experience, and I find it somewhat disrespectful that you dismiss it as "drinking the Haskell Kool-Aid"
I am not trying to be disrespectful to you, but I think you are being far too charitable in your interpretation of the conduct of your peers in the Haskell community.
> I am not trying to be disrespectful to you, but I think you are being far too charitable in your interpretation of the conduct of your peers in the Haskell community.
This is probably where the disconnect lies. It seems like boothead is operating under the principle of charity (https://en.wikipedia.org/wiki/Principle_of_charity). HN's guidelines don't specifically mention it but seem like they have this principle in mind. You have absolutely no evidence that dons actually "marshaled his supporters to spam" and accusations that he did are not very charitable (as you yourself admit). It's just as likely that some people got really excited about it and posted it to HN.
I operate under the principle of common sense. Why so many posts about a tiling window manager, and why xmonad and not i3, ratpoison, or dwm? Because it was written in Haskell and was thus being used as a vehicle to aggressively (if ineffectively) push the language on people. As for dons, he's posted plenty of xmonad crap here and on proggit, where he also happens to be a mod.
Ok, you don't like XMonad or Haskell, fine. I use XMonad daily. It has a giant pile of contributed modules, it's extremely flexible, and it's used by a reasonable number of people. I can give you one reason for not i3, ratpoison or dwm: as far as I know, they are all manual tiling WMs (no notion of a "master" window like XMonad). They're still tiling WMs, but of a different breed.
I think it's undeniable that a substantial portion of the interest in XMonad came about because it's written in Haskell, but that's very different from saying that all of the interest in it comes from a concerted propaganda campaign on the part of the Haskell community.
You're judging a whole community based on interactions you had with Haskellers most likely many years ago - dons almost never speaks publicly these days, since moving to SC (which is a shame, he's someone I've looked up to and learned a huge amount from). If I were to characterise the OCaml community based on what jdh says, I'd be writing to politicians to ban all use of OCaml everywhere, because his comments are toxic, and far more dishonest than anything I've seen from anyone in the Haskell community.
He seems pretty active in comments in the linked discussion... But I'm pretty sure John also carries a list around with him of reasons why he doesn't like Haskell, just in case he needs it. Truly unpleasant.
On the other hand (without commenting on specifics of Haskell) the parts do represent the whole. People in a community learn tropes from each other. If many of the community's tropes are toxic, the community is toxic in a way that survives the departure of individuals because those individuals really weren't that special.
I wouldn't begrudge anyone who made similar judgments about the Common Lisp community based on their interactions with some of its more colorful personalities. I further wouldn't disagree that there are major flaws in CL the language. Why do Haskellers have so much difficulty doing likewise? Other than minor concessions, such as records, Haskell is apparently perfect (and spaceleaks are only ever a problem if you're a n00b)... The whole thing reeks of a cult.
Your experiences are completely different to mine, and I've been a member of the community for about 8 years now. Haskellers are in general very aware of the flaws in the language, and this isn't limited to records - space leaks are the primary reason for the proliferation of streaming IO libraries. And as for records, we've been led to lenses, which are an extremely powerful abstraction that surpasses, IMO, what's available in most other languages in terms of being able to interact with nested data structures (giving them such a limited definition is doing them a disservice, because they're far more powerful than that). You keep mentioning the use of the "if it compiles, it works" phrase - this is clearly a meme, and often used as a joke, but it's also used as a goal for writing reliable software: if we can encode enough of our problem domain into the type system, then we can be much more certain that programs which compile likely do what we want them to. It is not a synonym for "if it compiles, it has no bugs and is perfect", which is how you seem to be interpreting it.
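To make the lens point concrete, here is a minimal sketch of a van Laarhoven lens built from `base` alone (the `lens` library generalises this far beyond getters and setters; the record types and field names here are invented for the example):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity (..))
import Data.Functor.Const (Const (..))

-- A lens is just a functor-polymorphic function; this is the core
-- encoding used by the lens library, stripped of its extra generality.
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

view :: Lens s a -> s -> a
view l = getConst . l Const

over :: Lens s a -> (a -> a) -> s -> s
over l f = runIdentity . l (Identity . f)

-- Hypothetical nested records for illustration.
data Address = Address { city :: String } deriving Show
data Person  = Person  { name :: String, address :: Address } deriving Show

addressL :: Lens Person Address
addressL f p = (\a -> p { address = a }) <$> f (address p)

cityL :: Lens Address String
cityL f a = (\c -> a { city = c }) <$> f (city a)

main :: IO ()
main = do
  let p = Person "Ada" (Address "London")
  -- Lenses compose with ordinary (.), reaching into nested structure:
  print (view (addressL . cityL) p)
  print (over (addressL . cityL) (++ "!") p)
```

The point being made above is that composing `addressL . cityL` with plain function composition gives both a getter and an updater for a deeply nested field, something plain Haskell record syntax handles badly.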
For the record, I first heard the phrase, "it takes longer to get it to compile, but if it compiles it works" spoken about OCaml.
In both cases, I think it speaks to something important, and is obviously not true in any strict sense.
"If you write or edit <language>, with some reasonable practices and trying to get to a solution, the kinds of mistakes you will usually make will be usually caught by the type system."
In my experience, this is marginally more true of Haskell than OCaml, of which it is notably more true than C, of which it is notably more true than Python. With the caveat that much traditional/common practice in C does not satisfy the "reasonable practices" criterion.
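A toy illustration of the "mistakes usually caught by the type system" claim: encoding a domain distinction in types turns a class of runtime bugs into compile errors. The names here are invented for the example:

```haskell
-- Distinct newtypes prevent accidentally passing one kind of id
-- where another is expected, at zero runtime cost.
newtype UserId  = UserId Int
newtype OrderId = OrderId Int

lookupUser :: UserId -> String
lookupUser (UserId n) = "user-" ++ show n

main :: IO ()
main = do
  let uid = UserId 7
      oid = OrderId 9
  putStrLn (lookupUser uid)
  -- putStrLn (lookupUser oid)
  -- ^ uncommenting this line is rejected at compile time:
  --   couldn't match OrderId with UserId
```

In a dynamically typed language the analogous mistake would only surface at runtime, if at all; this is the kind of ordinary error the phrase is gesturing at, not freedom from bugs in general.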
For what it's worth, @steveklabnik's apology was pretty good. I definitely wouldn't describe it as "an insincere non-apology" - he acknowledged the effect he had on the other person, and actually said the words "I am sorry" instead of something passive and weaselly like "I apologise" or "Sorry if...".
"I'm sorry, and feel terrible that I made someone feel terrible" is tantamount to saying "I'm sorry you were offended." He's merely expressing regret at her reaction, not admitting that his actions that caused said reaction--specifically, mocking her for writing a sed-like utility in JS--were uncalled for. Further, I'm confident that if she'd written the program that spurred her undeserved public shaming in Ruby, he and his fellow Twitter bullies would not have voiced any complaint.
I haven't re-read my blog post, but let me be clear: my actions back then are something that I deeply regret to this day. I think about it on something like a weekly basis, still.
I'm not sure you read the same apology I just did.
> SK: i dont want people to think i'm saying "i got caught being an asshole so i'm saying sorry"
> SK: i want to say "I was accidentally being an asshole so i'm sorry"
> So, I'll just leave it at that. Twitter makes it so hard not to accidentally be an asshole.
He's admitting that he was "being an asshole" and is trying to empathize.
"I'm sorry you were offended" means being offended was your choice. "I made someone feel terrible" means I did something bad to that person.
I'm not familiar with the JS community or Steve Klabnik in particular, but that's a pretty straightforward and sincere apology.
> "I'm sorry, and feel terrible that I made someone feel terrible" is tantamount to saying "I'm sorry you were offended."
Wow, I could not read that any more differently.
It's like arguing "I'm sorry, and feel terrible that I hurt your toe" is tantamount to saying "I'm sorry your toe hurts when it's stepped on". I find that notion a little strange.
It's curious seeing you mention this, while completely ignoring how overwhelmingly negative the linked thread is, devolving into personal attacks for no good reason. The other linked discussions between various language communities have been very civil and fun to read, but reading this bunch of OCamlers writing long lists of often minor deficiencies shows me a level of insecurity I haven't seen elsewhere, and I find it very unappealing. It's enough to make me not want to try OCaml, because the community seems very hostile. And it's not just jdh - even without his (your?) comments, it's overwhelmingly negative, and needlessly so.
> The Haskell community, like that of Ruby, is quickly getting the reputation it deserves.
By the same token, you could describe the community of Haskell as witty and friendly by taking SPJ as an example. In my experience, the Haskell community is fairly free of vitriol and of the drama which is fairly common in open-source communities.
Jon Harrop often comes across as too negative and is described as a "well known internet troll". However, his thoughts are usually quite valuable and often better than what he is credited for. His contributions to the F# community are well known.
The Lisp community remembers when he trolled them. Basically he wanted to generate clicks for his commercial offerings, unrelated to Lisp. Some kind of anti-marketing strategy, which backfired. Since then his business has failed and he trolls the Haskell community as a hobby, from time to time.
Really 'valuable' thoughts from him are rare. 'Valuable' more for him. ;-)