a monoid is a set A with an associative binary operator @ on it: that is, for any a, b, c
a @ (b @ c) = (a @ b) @ c
and it also has an "identity" element I, that is, for any a:
a @ I = I @ a = a
common monoids: integers under addition (i.e. @ is addition, the identity is 0), nonzero real numbers under multiplication, lists under concatenation, booleans under AND (identity is True), booleans under OR (what's the identity?), ...
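to make that concrete, here's a small python sketch (the `check_monoid` helper is just made up for illustration): a monoid is treated as an (op, identity) pair and the two laws are spot-checked on a few sample values

```python
from operator import add, mul, and_

def check_monoid(op, identity, samples):
    # spot-check the identity and associativity laws on the given samples
    for a in samples:
        assert op(a, identity) == a and op(identity, a) == a      # identity law
        for b in samples:
            for c in samples:
                assert op(op(a, b), c) == op(a, op(b, c))         # associativity

check_monoid(add, 0, [1, 2, 3])              # integers under addition
check_monoid(mul, 1, [2, 3, 5])              # (nonzero) numbers under multiplication
check_monoid(add, [], [[1], [2, 3], []])     # lists under concatenation (+ on lists)
check_monoid(and_, True, [True, False])      # booleans under AND
```

(this only checks the laws on the samples you pass in, of course, it doesn't prove them in general)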
now, when you have a mathematical structure, you often want to study functions between two of them. so you might ask what a reasonable notion of functions between monoids is
one nice way is if, given two monoids, M with operation @ and unit I, and N with operation ^ and unit J, we had a function f for which:
f(I) = J
f(a @ b) = f(a) ^ f(b)
this is loosely described as the function "respecting the structure" of the monoids, and such functions are called monoid homomorphisms or just homomorphisms if the context is clear
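a standard concrete example (not mentioned above, just for illustration): `len` is a homomorphism from lists under concatenation to integers under addition, and the two conditions look like this in python

```python
# len as a homomorphism from (lists, concatenation, []) to (integers, +, 0)
assert len([]) == 0                        # f(I) = J
a, b = [1, 2, 3], [4, 5]
assert len(a + b) == len(a) + len(b)       # f(a @ b) = f(a) ^ f(b)
```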
--
with that background, we see that `all` should really be a homomorphism from the monoid of lists (under concatenation, with the empty list as the identity element) to the monoid of booleans under AND (with True as the identity)
this means that
all(concat(list1, list2)) = and(all(list1), all(list2))
or, in python,
all(a + b) == (all(a) and all(b))
and, of course, all([]) = True!
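as a runnable spot check of those two conditions on concrete lists:

```python
xs, ys = [True, True], [True, False]
assert all([]) == True                          # f(I) = J: the empty list maps to True
assert all(xs + ys) == (all(xs) and all(ys))    # f(a @ b) = f(a) ^ f(b)
```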
the value here is that you can now reason about the all() of a long list by thinking about the all() of its sublists, and the conditions required for a homomorphism guarantee that the empty list is not an edge case:
all([T, F]) = all([]) and all([T, F]) = all([T]) and all([F]) = ...
and that it doesn't matter how you cut up a list into sublists and what order you apply the concatenations in to reconstruct it
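here's a sketch of that point (the `chunked_all` name is invented for illustration): cut a list into chunks however you like, apply all() per chunk, and AND the per-chunk results together, and you get the same answer as all() on the whole list

```python
def chunked_all(chunks):
    # AND together the all() of each chunk; no chunks at all gives True,
    # matching all([]) == True, so empty chunks are never an edge case
    result = True
    for chunk in chunks:
        result = result and all(chunk)
    return result

data = [True, True, False, True]
assert all(data) == chunked_all([data[:1], data[1:3], data[3:]])   # one way to cut it up
assert all(data) == chunked_all([[], data[:2], [], data[2:]])      # empty chunks change nothing
assert all(data) == chunked_all([data])                            # or no cutting at all
```

(this is also exactly the property that lets you compute all() over chunks in parallel and combine the partial results afterwards)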
I was referring to a monad, as used in programming, as a transform that can be applied to the elements of a structure.
For instance, a transform that wraps each joining operator in an expression tree with a new operator that applies the original operator if all its inputs are defined, but returns a null if any input is null.
The expression tree's structure hasn’t changed, but it now responds to null values consistently (avoiding errors) by propagating them.
In my example, a monad converts the list combining operation of concatenation into other list combining operations.
The result was that the identity of concatenation (the empty list) formed the basis for the new list operator’s identity.
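A rough Python sketch of that null-propagating transform (the `lift_null` name and this tiny expression tree are just illustrative, not anyone's actual implementation): each joining operator is wrapped so that the wrapped version applies the original operator when every input is defined, and returns None otherwise.

```python
from operator import add, mul

def lift_null(op):
    """Wrap a binary operator so it propagates None instead of failing on it."""
    def wrapped(x, y):
        if x is None or y is None:
            return None
        return op(x, y)
    return wrapped

# the expression tree (2 + 3) * 4, with each joining operator wrapped
safe_add, safe_mul = lift_null(add), lift_null(mul)
assert safe_mul(safe_add(2, 3), 4) == 20
assert safe_mul(safe_add(2, None), 4) is None   # a null anywhere propagates to the result
```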
—
Both concepts are indeed relevant.
A “monoid-to-aggregator” monad could convert two-element (binary) operations into recursively applied list operations (which also return the single element for single-element lists, and the identity element for empty lists).
Then a structure of monoids processing pairs of values could be transformed into a structure that processes lists.
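A sketch of that “monoid-to-aggregator” step in Python (the `to_aggregator` name is invented here): given a monoid’s binary operation and identity, it builds a function over lists that returns the identity for an empty list, the lone element for a single-element list, and the pairwise combination of all elements otherwise.

```python
from functools import reduce
from operator import add, and_

def to_aggregator(op, identity):
    """Turn a monoid (op, identity) into an aggregator over lists."""
    def aggregate(items):
        # [] -> identity, [x] -> op(identity, x) == x, longer lists fold left to right
        return reduce(op, items, identity)
    return aggregate

total  = to_aggregator(add, 0)         # integers under addition   -> a sum-like aggregator
every  = to_aggregator(and_, True)     # booleans under AND        -> an all-like aggregator
concat = to_aggregator(add, [])        # lists under concatenation -> a flatten-like aggregator

assert total([]) == 0 and total([7]) == 7 and total([1, 2, 3]) == 6
assert every([]) is True and every([True, False]) is False
assert concat([[1], [2, 3]]) == [1, 2, 3]
```

One note on the design: reduce applies the operation left to right rather than literally recursing, but because the operation is associative the grouping doesn’t matter, which is the same point made above about cutting lists into sublists.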