> It seems to me that some design patterns are essentially ad-hoc, informally specified, specialised instances of basic category theory concepts.
There is a flaw in this type of thinking which I think is not addressed here, but should be. (I took the quote from the summary, but I think it's fair.) The usual issue with ad-hoc, informally specified things is that the ad-hocness and the informality leave something out, meaning that if you understand the informal thing but not the formal one, you end up understanding only part of what you could have if you tried.
So in the context of design patterns, that means there should exist situations where the extra abstraction, the non-ad-hoc non-specialized understanding, allows you to write code that is better than otherwise, and (more importantly) that you would not have otherwise written. This would show explicitly the gap between the ad-hoc ideas and the formal ideas instead of leaving you to guess.
Whereas the way this post tries to explain it, as I understand it, is that it shows you things (design patterns) that you would have otherwise written, and shows how it can be talked about in more general terms. But the things that you wouldn't have written are missing.
E.g., in some monad tutorials at some point one of the exercises is the nondeterministic choice monad, which is totally something you might not have written yourself, which makes it a way of showing why monads are a useful concept. If the only monad you had was IO, it'd be a much more useless idea (it would merely be a different, not even necessarily better, way of writing things that you already knew).
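For the curious, the nondeterministic choice idea is easy to sketch in plain Python, using lists as the monad (a hypothetical minimal encoding, not code from any tutorial):

```python
# Nondeterministic choice sketch: a "computation" is simply the list of
# all its possible results, and bind explores every branch.

def bind(xs, f):
    # run f on every possible value, collecting every possible outcome
    return [y for x in xs for y in f(x)]

# Example: every sum of one die roll plus a coin flip (0 or 1)
die = [1, 2, 3, 4, 5, 6]
coin = [0, 1]
sums = bind(die, lambda d: bind(coin, lambda c: [d + c]))
```

This is the kind of thing you probably wouldn't have written without the abstraction: the same bind that sequences IO also enumerates branching possibilities.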
> There is a flaw in this type of thinking which I think is not addressed here, but should be. (I took the quote from the summary, but I think it's fair.) The usual issue with ad-hoc, informally specified things is that the ad-hocness and the informality leave something out, meaning that if you understand the informal thing but not the formal one, you end up understanding only part of what you could have if you tried.
I’m fairly sure that’s the entire premise of his series.
Wow, this really struck home the idea of useful forms of generality for me. Definitely reminded me of an interesting section in Polya's book Induction and Analogy in Mathematics that discusses different forms of generality, and when it's useful (in the context of a problem to figure out).
And then you wake up to the fact that most programmers are really bad at higher abstractions and especially mathematics. They start off way better than the general population, too...
But they can deal with nested smaller abstractions or impure smaller abstractions just fine.
It is easier to think of a set of logical properties than of an equation describing the construction of an object with such properties. And that is the difference between a design pattern and a category.
To put something in a mathematical category you have to actually prove it constructively, or you're doing ad-hockery.
And due to so many levels of abstraction nesting proving anything nontrivial in maths is really a chore at times.
A simple example: prove some operation is a Strategy morphism compared to describing it as an object with a set of properties.
> most programmers are really bad at […] mathematics.
I wouldn't be so sure. Many of us got into programming to flee what we often call "mathematics". But the stuff we learned in school is quite different from actual mathematics (rote application of recipes, and tedious exercises, mostly).
I'm pretty sure we can teach maths to those traumatised programmers. Just don't utter the "M" word so they don't recoil in horror.
Monoids for instance are a deeply mathematical, yet very simple concept. And useful too: want to do map-reduce? make sure your binary operation is associative (meaning, make sure your stuff is a monoid), or you won't be able to parallelise your reduce step.
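To make the map-reduce point concrete, here is a minimal sketch (all names are illustrative) of why associativity is exactly what lets you split a reduce into independently computed chunks:

```python
from functools import reduce

def chunked_reduce(op, identity, xs, chunks=4):
    # Split xs into chunks, reduce each chunk independently (this is the
    # part that could run in parallel), then combine the partial results.
    # Associativity of op guarantees the regrouping doesn't change the answer.
    n = max(1, len(xs) // chunks)
    partials = [reduce(op, xs[i:i + n], identity) for i in range(0, len(xs), n)]
    return reduce(op, partials, identity)

data = list(range(100))
total = chunked_reduce(lambda a, b: a + b, 0, data)  # same as sum(data)
```

Integer addition with identity 0 and string concatenation with identity "" are both monoids, so both work here; a non-associative op would silently give a different result depending on the chunking.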
And you made the mistake here just as I expected. Map-reduce does not require associativity as long as you know the order of the operation execution. (Or error due to operation reordering is bounded and acceptable.)
You break the assumption any time you use floating point math.
Then you've added a property to an operation that does not require it.
Many "math types" like to do such things to simplify proofs and there you end up with variations of spherical cow results, either in applicability or performance.
> Map-reduce does not require associativity as long as you know the order of the operation execution.
Good luck knowing that order when using a magic map-reduce to parallelise things for you.
> You break the assumption any time you use floating point math.
You're just nitpicking here.
Either I care about the non-associativity, and I have to control the ordering of the reduce operation (the best one might be a parallel bottom-up merge), or I don't care, and I'll be using -ffast-math already.
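For anyone who hasn't run into this: a minimal demonstration that floating-point addition is not associative, so a reduce that reorders operations can change the result:

```python
# Floating-point addition is not associative: regrouping the same three
# operands produces two different doubles.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)  # False
```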
It is not nitpicking. Either a property is true or it is not. This is exactly the point the article's author is making when resisting design-pattern ambiguity.
If you cheat you will get invalid results sooner than later. Essentially bugs. Sometimes trivial, sometimes a billion dollar rocket explodes.
Using a math name while lying about the properties is even worse than not using the concept at all.
For example, a String, despite what the author says, is not a Monoid in almost all languages, as catenation (operation +) is not strictly associative. (Because memory allocation is different!) Yet he makes this mistake...
Hey, you brought up floating points. I talked about monoids, not floating points. I don't even make the assumption you say I break when I use floating points. Of course floating points aren't a monoid, let alone a group or a field.
> catenation (operation +) is not strictly associative. (Because memory allocation is different!)
What the hell are you talking about? The ordering of operations influences the address of the result? Who ever cares about that? Even in C, you don't rely on the value of such addresses, only on their uniqueness.
I don't know if they're bad at higher abstractions, or if the pay-off of a higher abstraction isn't always there (or is unclear).
The original comment did a good job of illustrating this. It's not clear from this post what category theory can do for a person programming with design patterns in mind (in other words, how will it change their behaviors?).
Unless you somehow teach people to actually run math proofs on the fly, it won't do any good either way if they start using mathematical descriptions since they will still "fake out" the math.
It is like trying to teach calculus by showing pre-baked approaches to solving integral equations.
Just because you call a thing a Monoid does not mean it is one. Similar to how people use design patterns in real life, except those do not demand rigor, and if you partly misname one, people are not confused.
It's always possible to write the non-generic version of a generic method and still get it right. I mean, there are mistakes that the formality will help you avoid (e.g. if you violate associativity, even in a seemingly trivial way, it will always come to bite you at the worst possible time), but it's possible to just not make those mistakes. The advantages of the generic formalism over reimplementing it specifically every time are, just like for any other use of generics, saving code and being able to reuse existing library methods (e.g. traverse, mfix, the various recursion-schemes traversal operators).
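As an illustration of the reuse point, here is a hypothetical Python analogue of Haskell's traverse, specialised to an option-like type where None means failure (a sketch, not a library API):

```python
# A traverse-like helper for an option-like encoding (None = failure):
# apply f to every element, short-circuiting to None if any step fails.

def traverse_option(f, xs):
    out = []
    for x in xs:
        r = f(x)
        if r is None:
            return None  # one failure fails the whole traversal
        out.append(r)
    return out

# illustrative parser: succeed only on digit strings
parse = lambda s: int(s) if s.isdigit() else None
```

In Haskell you would not write this at all; `traverse` already exists generically for any applicative, which is the code-saving the comment above describes.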
There are disadvantages too. Most often performance and not in a small way.
Compare general comparison sorts with counting sorts for a simple example. The latter require additional properties on the value making them much less general, but you gain major performance benefits.
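For concreteness, a counting sort sketch (assuming keys are small non-negative integers, which is exactly the extra property that costs generality):

```python
# Counting sort: O(n + k) time for keys in 0..max_key, versus the
# O(n log n) lower bound for comparison sorts. The speed comes from
# exploiting the extra structure of the values.

def counting_sort(xs, max_key):
    counts = [0] * (max_key + 1)
    for x in xs:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out
```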
Additionally you assume that you can skip the specific proof given the general one. That is not the case a lot of the time.
You have to verify all required properties anyway or you end up in a similar place.
Generic does not mean general, but programming languages do not have an easy way to verify properties so you end up with general.
> Compare general comparison sorts with counting sorts for a simple example. The latter require additional properties on the value making them much less general, but you gain major performance benefits.
In theory sure, but I've never seen anyone use a counting sort in production code.
> You have to verify all required properties anyway or you end up in a similar place.
You go from n * m to n + m though, and since the properties are often simple and standard they might be done for you in the standard library already.
> Generic does not mean general, but programming languages do not have an easy way to verify properties so you end up with general.
Typeclasses give a reasonable representation; newer languages let you require their properties to be verified if you really want.
To rephrase: showing what you would have written anyway, just in a better form, misses the value of teaching the translation. You need to show the things you would never have written, the things enabled by the better form, to show that it is better.
> I'll write these articles in an authoritative voice, because a text that constantly moderates and qualifies its assertions easily becomes unreadable. Don't consider the tone an indication that I'm certain that I'm right.
Kudos for including this disclaimer.
The article can be trim and readable, while the disclaimer acts to dissuade well-meaning reviewers (like - ahem - me), who sometimes value precision and accuracy more than the big picture, from nitpicking it to death.
Shouldn't this be the default expectation, for every article we read, anyway?
Such a disclaimer only makes sense if the article was written by some very famous author who must reasonably fear that their advice is taken too dogmatically. Speaking of that, I've never seen people like Martin Fowler make such a disclaimer.
To be fair, Martin Fowler's entire blog is a "bliki" because he intends to (and does) change his entries often. He's even ripped out entire swathes of his writing when he thinks he's wrong (occasionally frustrating when you think he's wrong about being wrong ;-) ).
I assume he doesn't make such a disclaimer because he thinks everyone already realises it -- but I have to agree that it would be nice if he (and other people as well) did. A lot of times people seem to think that reading something from a well known author is equivalent to permission to turn their brains off :-( Instead of, "Oh, that's amazing! Let's try it", they think, "Oh, that's amazing! We don't have to try it".
I found another interesting approach to formalizing design patterns a few years ago: https://dl.acm.org/citation.cfm?id=2207827 —rather than expressing them as particular instances of more general things (as you'd do using category theory), he breaks them into a small set of more fundamental things which can be re-combined into familiar design patterns.
I find it pretty interesting that it's possible to do both of these things... I'd also bet category theory isn't the only generalization and 'elemental design patterns' isn't the only decomposition.
Also, while the formalization might be interesting, I wish I could be sold on its value... I'd definitely be interested in seeing some 'train of thought' examples of how someone used category theory (for instance) to reason about some architectural issues for something like... a game engine or web framework or CAD tool—something where the domain isn't going to make it an easy fit for category theory on its own.
I will check that link out, but I'm willing to bet CT actually captures the entirety of design patterns plus any subset that you might make out of them. CT is the most abstract math and is really good for studying structures and patterns of computation itself.
> rather than expressing them as particular instances of more general things (as you'd do using category theory), he breaks them into a small set of more fundamental things which can be re-combined into familiar design patterns.
I haven't read that book, but it sounds similar to how Peter van Roy deconstructs programming paradigms:
> … rather than expressing them as particular instances of more general things (as you'd do using category theory), he breaks them into a small set of more fundamental things which can be re-combined into familiar design patterns.
How do these two approaches differ from each other? As far as I can see, they’re exactly the same thing. From the linked article:
> Smith introduces a foundational layer of patterns terminology: a collection of core patterns that can't be decomposed further.
This is exactly the purpose of category theory. Rather than have group theory, set theory, propositional logic, etc., category theory unites all of these into something that cannot be decomposed further (identity and composition).
Mark Seemann, TFA's author, wrote the highly praised book Dependency Injection in .NET and then slowly gravitated towards F# and Haskell. He presents a lot of perspectives on FP, often in the context of OO alternatives, making it easier to appreciate FP's advantages not just in theory, but in the context of practical application problems.
I've found his Pluralsight course on Functional Architecture pretty interesting.
Has anyone come across design patterns w.r.t data science (e.g., importing / transforming data / creating a modelling record / doing a grid search / scoring a new data set / outputting results?) I see these things running in scripts generally, but wondering if there are any applications of design patterns to a data science workflow.
Full disclosure: I have not dabbled in data science, nor am I an experienced programmer.
But many of the CS pioneers dealt with the issue of reading input, processing it, and giving a meaningful output. Recently I stumbled upon Jackson's Principles of Program Design and it has really helped me in writing a parser. IMHO a general understanding of how to structure a program goes further than arbitrary design patterns. I am now a proponent of modular programming. For me it resembles functional programming and TDD. Basically it states your program should be made up of numerous self-contained modules that can be independently tested.
For data science workflow it could be an import package, further divided in modules such as general helpers and implementations for different file types; a computation package etc.
It's always hard for me to decide how those modules are going to end up communicating, though. Should there be a lingua franca for all your modules? Or do they all have specialized APIs?
Programming to interfaces helps in this matter, e.g. if I have a data interface, I can use the same type for any format I like. If it is the same interface, the methods are the same too.
For encapsulation and message passing, coroutines work wonders. You could use a generator in Python, or a goroutine in Golang. And then you could treat different modules of your program, as if they were a standalone independent part.
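A minimal sketch of the generator idea, with hypothetical module names; each stage is a separate, independently testable piece of the pipeline:

```python
# Two "modules" of a data pipeline written as generators. Each one can be
# tested in isolation by feeding it an iterable; composing them wires the
# pipeline together without any module knowing about the others.

def read_records(lines):
    # hypothetical import module: split raw CSV-ish lines into fields
    for line in lines:
        yield line.strip().split(",")

def compute(records):
    # hypothetical computation module: derive a value per record
    for rec in records:
        yield len(rec)

raw = ["a,b,c", "d,e"]
result = list(compute(read_records(raw)))  # [3, 2]
```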
Better to use discovered languages as they say, but also better to learn the actual abstractions of computation itself, which CT does a great job at, instead of arbitrary patterns that are based on human intuition.
What about the Curry-Howard-Lambek correspondence? This is directly related to why math works so well on the real world. CT and pure FP languages seem to tap into these fundamental workings of our mind and possibly the universe (CT is used in quantum mechanics). In contrast, things like OOP are modeled on intuitive patterns that solve specific problems, and which are often based on metaphors or abstract analogies, but not on a fundamental abstraction of nature. In other words, CT seems to capture the patterns of computation itself.
The problem with category theory, monads, etc. is that they are inaccessible from some of the popular tools, e.g. JavaScript, Python, Ruby.
It invariably happens that learning these patterns requires you to learn Haskell, Lisp, or Erlang, which is where accessibility breaks down.
If there was a conceptual framework that bridges these higher order designs to accessible languages .. even if partially so, that would be killer.
I agree that Haskell is built around those concepts.
But the point of learning category theory is to change your mindset when programming.
And even in Javascript, it is super useful.
(basically because, while trying to learn category theory, you become fluent in functional programming).
>Part 3 will start to dance the fine line between practical programming and academic absurdity. We'll look at comonads, f-algebras, free monads, yoneda, and other categorical constructs.
I myself like this - https://github.com/valentjedi/ddd-dynamic - but I'm beginning to think maybe all of this is not the right paradigm for programming in languages like javascript or python. The success of tools like React or RxJava in Android makes me think that the tools define the paradigm and not the other way around... no matter how much we want it to be.
I'm not certain, but isn't RxJava just a specific implementation of a monadic pattern with IObservables?
I thought the original Rx was a Haskeller saying "Hey a 'dsl' built on a monad would be really powerful here in C#" and implemented it. Then it spread to other langs from there.
Not sure what you mean by "inaccessible", but you can use monads pretty much anywhere.
In my opinion the biggest hurdle is that getting to understand purely functional programming is pretty difficult. You start with monads, but quickly find out they do not compose. Move on to monad transformers, good for the simple stuff, but then you hit another wall because they don't compose either! Move on (up!) to tagless final and/or free monads. There are _a lot of_ concepts that don't have any equivalent in "the real world". The Gang of Four design patterns are a child's toy in comparison.
He means that there's no type syntax to assist with monad creation or usage which indicates a sort of incomplete understanding of the concept.
As an example, a maybe monad in python can be implemented by having a monadic function return either:
["Just", 1]
["Nothing", None]
and the bind operator (>>=) would be:
bind = lambda a, f: a if a[0] == "Nothing" else f(a[1])
You should also understand that a list itself can be a monad. Monads are just burritos where ANY abstraction/design pattern can be the wrapping; it does not need a Haskell type system to wrap something, albeit the type system in Haskell really does help you grok the concept.
Weirdly, how someone chooses to implement a monadic value and the bind operator suffers from the same problem this article's writer complains about in OOP design patterns. Both monads and design patterns really depend on prior understanding of a pattern, and a variation in how someone implements the bind operator or a design pattern can really throw people off.
Here is an example of an alternative variation of the "Maybe" monad in python, for the return values:
["Only", 1], None
And a variation on bind:
bind = lambda a, f: f(a[1]) if a is not None else None
> You should also understand that a list itself can be a monad.
A list is not a monad — it’s just a container of values, ie. a functor. A monad is a data structure that defines a computation in a specific context, and allows composing these in a sequential manner. For example, the Haskell IO monad is a computational context in which you can do input/output, and all of these IO operations are represented using a data structure of the type “IO <return_value>”.
For example, the function “readLine” has the type “IO String”, and represents a computation that reads a line from standard input and returns it as a string. At runtime, evaluating this value will result in the runtime system asking the user for some input, and at compile-time this is represented as the string (that the user enters at runtime) inside the IO monad.
You're probably used to monads that wrap a single object, like the Maybe monad. Monads can wrap multiple objects, or anything. This is the case for a list.
Nah, no single resource; it took me a lot of time to understand monads, going over many resources.
Just remember: absolutely ANY design pattern / data structure can be a monad, as long as you can get an internal value out of that abstraction and define a bind operator to compose monads together.
This rule tells it all:
(>>=) :: m a -> (a -> m b) -> m b
m is the abstraction wrapper and a is the internal thing that is wrapped. You can wrap it in a string type, a list... even a binary tree can be a monad.
You can define the bind operator to do whatever you want; it just needs to follow the type signature above and satisfy the monad laws (associativity plus the left and right identity laws).
Oh and nothing was translated. A monad is a general concept applicable to almost any language. Deep understanding of a monad is to understand it outside of the haskell ecosystem.
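For instance, a binary tree with values at its leaves forms a monad whose bind substitutes a whole subtree for each leaf value (a hypothetical tuple encoding, just to show it works outside Haskell):

```python
# Binary tree monad sketch: a tree is ("leaf", value) or ("node", left, right).
# bind replaces every leaf value with whatever subtree f returns.

def bind(tree, f):
    if tree[0] == "leaf":
        return f(tree[1])
    return ("node", bind(tree[1], f), bind(tree[2], f))

unit = lambda x: ("leaf", x)  # the monad's "return": wrap a value in a leaf

t = ("node", ("leaf", 1), ("leaf", 2))
doubled = bind(t, lambda x: unit(x * 2))  # ("node", ("leaf", 2), ("leaf", 4))
```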
genuine question - can this be done in python in a production ready manner ?
I have been trying to answer this question for quite some time. In Haskell, writing monads or applying category theory is part of the idiomatic language. Not so in js or python.
for example, if you want to apply haskell like concepts in javascript, the closest thing I know is purescript (http://www.purescript.org/) or Bucklescript. So you need to switch over to a different language to achieve the richness of these concepts.
I'm hardpressed to apply these concepts idiomatically to the mainstream languages.
Compare this to design patterns or OO, or reactive programming (via rxjs or whatever) - which are so accessible that they are now the reason to learn a particular language!
> genuine question - can this be done in python in a production ready manner ?
It can, but without a type system you lose most of the benefits. You can put monads in there but if you refactor fearlessly the way you would in Haskell (without tests), you'll get production bugs.
> Compare this to design patterns or OO, or reactive programming (via rxjs or whatever) - which are so accessible that they are now the reason to learn a particular language!
Isn't that the same thing? Monads etc. are the reason to learn Haskell.
Functional programming is just imperative programming with heavy heavy restrictions. So yes this can be done, as long as you fully understand what functional programming really is...
A functional program is simply a program that is made up of a single mathematical expression. Complexity is achieved in functional programming by composing smaller expressions into a single big expression. It should be possible to write your entire functional program as a single expression or statement (not saying you should do it, but it should be possible). That's it. That means all your python functions should be a single expression only (always use lambda instead of def; or if you must use def, make sure all your variables remain immutable).
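A toy sketch of that single-expression discipline (an assumed example, not from the article): the whole computation is one expression built by composing smaller ones, with no mutation anywhere.

```python
from functools import reduce

# The entire "program" is a single expression: a fold that accumulates
# the sum of squares of 0..n-1 without mutating any variable.
sum_of_squares = lambda n: reduce(lambda acc, x: acc + x * x, range(n), 0)
```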
If you want to apply category theory to it, then just make sure your python functions always stick to specific types. Don't let a function take in either a string or an int, or return a list or an iterator or any bullshit like that. Keep the typing consistent... If you define all your functions to behave mathematically, like only taking in a string and only returning an int (not an int or a None), then you are following category theory.
So nothing like this:
def someFunc(value):
    if value == "hello":
        return 1
    elif value == 123:
        return [1, "world"]
or this:
blah = lambda a: "hello" if a is None else 304
If you wanted automated type checking at run time you can use a decorator to check the types of variables going into the function and coming out.
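A minimal sketch of such a decorator (the names and checks are illustrative, not any real library's API):

```python
# Hypothetical runtime type-checking decorator: asserts the argument and
# return types of a single-argument function (kept to one argument for brevity).

def typed(arg_type, return_type):
    def decorate(f):
        def wrapper(x):
            assert isinstance(x, arg_type), "bad argument type"
            result = f(x)
            assert isinstance(result, return_type), "bad return type"
            return result
        return wrapper
    return decorate

@typed(str, int)
def length(s):
    return len(s)
```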
If you try this style at work, people will complain haha, I don't recommend it
On the other hand, as soon as you know how to apply the concepts of CT, learning Haskell mostly is a matter of syntax. So, why not?
Besides, I don't think you would gain anything by using CT concepts in a language that won't resolve type polymorphism for you. It will lead to a mess of a code, where you'll have to make everything explicit.
By the way, great language. An await command in a where scope is just phenomenal. Between seamless lambdas, interdependent function declarations, and forced async, I don't think it missed any language in its trolling.
How many stack overflow questions and answers are there for lisp(s)? How many for Java? How many applications on the web are written in a lisp? How many are written in Java (or Python or Ruby or PHP)?
There's a lot more to accessibility than how easy the syntax is to grok. This is just one simple and limited example.
>How many stack overflow questions and answers are there for lisp
Yet there are a ton of books written about Lisp in all kinds of topics and applications. Not to mention really well-written tutorials like "Practical Common Lisp".
Plus, the documentation/reference of the language itself is comprehensive and well written, comparing favorably to the documentation for most programming languages out there.
It's more time consuming to close )}]]})}. With ))) we just repeatedly type ))) until the cursor jumps to the target opening paren we are trying to close. With a mixture of different parens we have to interrupt what we are doing and determine the correct parens which will continue the closing sequence.
I think there are people who value completeness- they enjoy working on systems with many rules tailored for specific use cases, and like accumulating knowledge about which rule is the correct one to apply in each situation; and there are people who value open-endedness- systems that have few simple rules that can be combined to obtain novel and unexpected results, even if these are not as optimized as they could be in a more complete system.
I picture the difference as that between a role playing game and chess: complex, domain-specific rules on one side, simple and abstract rules on the other.
Not really. You can't comment on this until you tried a lisp. I know that you haven't tried it because if you had you would have known about paredit and how it solves the paren problem.
Based on the overview, I would call it "From design patterns to algebra", though. There's (in most cases) no reason to involve categories in a discussion of monoids/semigroups and isomorphisms.
That would be my choice of title, too. I considered writing exactly such a thing a few years ago, but realized that I am almost certainly the wrong person to do so, since I use the algebras but not the design patterns that approximate them.