Leaving aside the category stuff for a moment, I think the first couple of paragraphs of this article make an excellent point.
> Functional programming is all the rage these days, but in this post I want to emphasize that functional programming is a subset of a more important overarching programming paradigm: compositional programming.
Indeed. So there is another article just begging to be written: "Introduction to Compositional Programming". I've seen any number of articles that say something like, "I especially like these because they are composable." But I've seen very little on figuring out how to solve problems by using composable components, or so that the solution itself is composable.
Another thought: we really should be giving more thought to composition when designing programming-language syntax and features. Haskell excels at writing composable components; it also has nice syntax for composing them. Both are much less natural in "C". Various replacements for "C" are being proposed; is thought being given to composability in them? Similarly, the popularity of Python's generators is largely because they allow for the easy design of composable components. Etc.
The point of this article is that you don't want to leave the category stuff aside. A category is essentially a well-defined and systematic way to reason about--and program with--composition. An introduction to programming with categories would, by its very nature, have to be an introduction to programming compositionally.
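For the concretely minded, Haskell's base library captures exactly this structure in Control.Category; the following is a paraphrase of that class (with the Prelude's id and (.) hidden so the sketch compiles standalone):

    import Prelude hiding (id, (.))

    -- A category: one identity arrow per object, plus an associative
    -- composition. This mirrors the class in base's Control.Category.
    class Category cat where
      id  :: cat a a
      (.) :: cat b c -> cat a b -> cat a c

    -- Ordinary functions are the canonical instance:
    instance Category (->) where
      id = \x -> x
      f . g = \x -> f (g x)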
Moreover, I think an introduction using terms and ideas from category theory would be more useful, more thorough, and more enlightening than an introduction to "compositional programming" without any math. It provides a nice structure and enables you to consider composition more abstractly, which lets you use compositional programming in more places.
> Moreover, I think an introduction using terms and ideas from category theory would be more useful, more thorough, and more enlightening than an introduction to "compositional programming" without any math.
Yeah, this article left me wanting to know more about category theory. But that in itself is valuable. I googled and found a couple of introductory papers [1,2], some slides introducing category theory for software engineers [3] (with an NSFW diagram on page 8), and an introduction to category theory as it applies to Haskell [4].
(1) There is still a software-design aspect to all this that has been neglected. For example, one can find all kinds of material on how to design OO software (e.g., the GoF book and its offspring). Find me something about how to design software using composable components. There isn't much out there.
(2) "Software engineers need category theory" is a statement that is almost too vague to be discussed. Certainly, a traditional presentation of category theory is largely useless to these people. What exactly is useful, and how should it be presented? People are starting to answer these questions, but only starting. Note, for example, that, in the brief intro to categories in the linked article, the fact that the categories allow for multiple identities (one for each object) is glossed over. It this a good idea? I don't know.
Composability in Haskell arises from abstraction made possible by a rich type system and simple semantics. IMHO, and from my experience with dynamic languages, any syntactic solution to composability will be a dead end, i.e., it will not beget any other useful abstractions.
All true, but let's not forget that there is a nuts-and-bolts aspect to it, as well. The fact that we can write the wonderfully clear & concise
    f = f1 . f2 . f3
in Haskell arises from the things you mentioned and also from the language syntax. In a language that is only slightly different, we might need to write something like
    f = compose(compose(f1, f2), f3)
Indeed, despite Lisp's vaunted expressiveness, in many flavors of Lisp you'd have to write something about as complicated.
Certainly syntax is very important; but I would suggest my point still stands even if you allow all of Haskell's syntactic bits (in this case, the ability to define infix functions out of non-word characters).
And I think Haskell's type system actually comes into play in your second code block example: languages where functions are untyped or have dynamic arity will never be able to make use of that simple infix syntax rule in your first code block.
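To make that concrete, a quick sketch (the operator name .> is invented for illustration): defining a new non-word infix operator in Haskell is a one-liner, and static types keep the resulting point-free chains unambiguous:

    -- A home-made left-to-right composition operator:
    (.>) :: (a -> b) -> (b -> c) -> (a -> c)
    f .> g = g . f

    -- Immediately usable for pipeline-style definitions:
    totalWordLength :: String -> Int
    totalWordLength = words .> map length .> sum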
> Indeed, despite Lisp's vaunted expressiveness, in many flavors of Lisp you'd have to write something about as complicated.
Common Lisp lets you redefine syntax to support equally easy function composition. You can even get rid of the parenthesis syntax altogether and switch to Haskell syntax. People don't do it, though, because Common Lisp is not a functional programming language, and that also would be against Lisp's nature.
I don't know if Clojure has reader macros that are as expressive as Common Lisp ones (I'd be happy to learn the answer from some Clojure hacker). Other Lisps are either very old or domain specific.
Clojure has a function (comp ...) that composes an arbitrary number of functions, and the macros -> and ->> that sort of pipe a value through a list of expressions. (Still new to Clojure myself.)
As an aside, if you haven't checked out Gabriel's ``pipes`` library, it's definitely worth a read. It's some of the most elegant Haskell I've had the pleasure of reading.
This is a nice explanation of how category theory relates to programming, but it doesn't explain how mathematical rigor helps us in this area. Making a rough analogy with function composition and Unix pipes seems sufficient to get the point across to most programmers. I suppose you could actually write generic code that works with more than one category, but that seems like unnecessary and unhelpful abstraction, compared to just writing the code twice in more concrete terms.
There are several advantages to writing code at that level of abstraction.
It's a great way to encourage code reuse, even in fairly disparate domains. Haskell has some of the most versatile libraries of any language I've used, thanks to mathematical abstractions like this. As a concrete example, there are functions like foldM, which is like a normal fold except that it can handle errors, state, nondeterminism, or even parsing!
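For instance, here's a minimal sketch (mine, not from the article) of foldM aborting cleanly in the Maybe monad, with no error handling written into the fold itself:

    import Control.Monad (foldM)

    -- Divide an accumulator by successive elements, failing on zero:
    step :: Int -> Int -> Maybe Int
    step _   0 = Nothing
    step acc n = Just (acc `div` n)

    main :: IO ()
    main = do
      print (foldM step 100 [2, 5])  -- Just 10
      print (foldM step 100 [2, 0])  -- Nothing: the whole fold aborts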
The abstractions are well understood, which makes your code easier to reason about. You get a bunch of theorems and conclusions for free just by formulating your program in mathematical terms. You can then easily take advantage of these to simplify, verify, or optimize your code. For example, if you know that your data type forms a valid functor, you can always rewrite fmap a . fmap b as fmap (a . b), turning two passes into one! If you wrote a map-like function separately for each different type, taking advantage of this pattern would be more difficult.
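A tiny demonstration of that rewrite, using nothing beyond the Prelude:

    -- Two traversals of the list:
    twoPasses :: [Int] -> [Int]
    twoPasses = fmap (+ 1) . fmap (* 2)

    -- The functor composition law fuses them into one traversal:
    onePass :: [Int] -> [Int]
    onePass = fmap ((+ 1) . (* 2))

    main :: IO ()
    main = print (twoPasses [1, 2, 3] == onePass [1, 2, 3])  -- True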
You are also restricted in what you can do--any code that is generic across all categories can only use composition. This leaves you less room to make mistakes. If you write a function just for strings, you can accidentally add characters or always return an empty string or any number of string-specific behaviors. If you write a function against some abstraction like a functor, you cannot make any of those mistakes--there is simply no way to formulate them for all functors.
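As an illustration (the function name is invented for the example): code written against Functor simply has no vocabulary for those mistakes:

    -- Pairs every element with a derived label. It works for lists, Maybe,
    -- IO, trees, ... and cannot invent elements, drop them, or special-case
    -- strings: none of those operations type-check at this signature.
    annotate :: Functor f => (a -> b) -> f a -> f (a, b)
    annotate f = fmap (\x -> (x, f x))

    main :: IO ()
    main = do
      print (annotate show (Just 3))     -- Just (3,"3")
      print (annotate negate [1, 2, 3])  -- [(1,-1),(2,-2),(3,-3)]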
These abstractions are just a reification of a common pattern across a bunch of domains. Making the pattern explicit makes it easier to see and use and makes your code simpler.
This sounds promising, but it's far too abstract to be convincing. Can you give an example of a useful program where you need to do any of these things?
Thanks, but that's far too much to take in at once. How about just one simple example?
I did skim through them, so to be specific: hakyll, Pandoc, warp, and scotty are what I'd call practical programs, though they approximate more popular programs written in other languages, so there's the question of why someone who wasn't already a Haskell programmer should choose them. Pipes, conduits, and most of the stuff by Kmett appear to be libraries implementing other abstractions that themselves would have to be justified by use in a practical program.
> Pipes, conduits, and most of the stuff by Kmett appear to be libraries implementing other abstractions that themselves would have to be justified by use in a practical program.
I'm not sure I understand what you mean by "practical programming".
This may just be a cultural difference, but one of the big ideas of Haskell is using libraries whose "core calculus" is provably correct and then combining them in ways that preserve that correctness. So I would argue that the justification of the libraries is not in the practical (if I understand your usage of the word?) usage but in the thoroughness and compositional semantics that they provide. That's where the category-theory-inspired patterns (zippers, monads, arrows, etc.) become important.
Okay, let's define "practical programming" as creating programs that will be of use to someone who doesn't know the programming language that the program was written in. So in this case, this means writing Haskell programs that will be used by people who don't know Haskell.
This isn't the only reason to write a program of course. You can do it just for fun or to explore the mathematics. But I'm mostly interested in abstractions that might be useful for solving problems outside mathematics.
None of the libraries or patterns I mentioned above are designed for the purpose of exploring category theory. Some people do do that, but they tend to live in their own world.
A lot of Haskellers just happen to exploit bits of CT because it's a great framework for writing and reasoning about code. Some people do use Haskell as a way to explore category theory, but, for example, Tekmo isn't writing pipes as a means to explore the theory of free monads; they've been well studied since the '80s. I believe he just happens to find that they are an excellent way of chaining together bits of control flow.
At the heart of the problem he's tackling is a very practical problem tied to data flow and resource management. For example, I've written a library against pipes which handles resource management of ZeroMQ sockets for a large distributed system, and I find that my code under pipes is much more manageable.
Okay, sure, resource management is what I'd call a practical problem, and pipelines are familiar from Unix (and Go). It looks like the pipes library has a nice practical example: enforcing the category laws means that it handles terminating pipelines consistently.
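Roughly, and as a sketch only (this assumes pipes' actual names: cat is its identity pipe and >-> its composition): the identity law cat >-> p == p == p >-> cat promises that splicing cat into a pipeline changes nothing, including when the pipeline shuts down:

    import Pipes
    import qualified Pipes.Prelude as P

    main :: IO ()
    main = do
      -- Echo two lines of stdin, then terminate:
      runEffect $ P.stdinLn >-> P.take 2 >-> P.stdoutLn
      -- By the identity law, this behaves identically:
      runEffect $ P.stdinLn >-> cat >-> P.take 2 >-> P.stdoutLn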