Hacker News
Petri Nets Are Monoids (1990) [pdf] (core.ac.uk)
72 points by boshomi on Feb 4, 2020 | 28 comments



For those who are interested, Petri nets have a lot of applications in CS, for example, spiking neural networks.


For those who don’t know what a monoid is, like me three months ago, I believe a monoid is something that implements monoidal operations, which are binary associative operations with an identity. Integers have monoidal operations like addition and multiplication. You have two arguments, whose order doesn't matter, and an identity such as the additive or multiplicative identity.


This is all correct! Except for order not mattering, which is true of your examples but not true in general. But integer addition and multiplication also have a variety of other interesting algebraic properties: they are cancellative (except multiplication by 0) and commutative, multiplication distributes over addition, and integer addition even has inverse elements. Some other monoids may give the flavor more clearly:

- string concatenation is a monoid, with the empty string as the identity element. In fact, string concatenation is the most general monoid on a set, called the "free monoid"; every other monoid can be made from it by defining a canonicalization operation. For integer addition, for example, you can collapse a string of integers such as [4, -1, 2] down to the one-element string holding their sum, [5]. Note that the free monoid is not commutative, and although it is cancellative (in "hairy" || " and a half" = "hairy and a half", you can take " and a half" back off and recover the original "hairy"), it doesn't have inverse elements.

- max and min on a set with least and greatest elements, and more generally all bounded semilattices. Let's write 4 \/ 5 = 5 \/ 4 = 5. This \/ (max) is associative and has an identity (the least element of the set; for the integers we can adjoin -∞ and +∞ and use -∞ as the identity for max), so it forms a monoid. Any theorem about monoids will work equally well for string concatenation and for this pairwise max operator. Isn't that wild?

- & and | for bitstrings of some length. This is also a lattice.

- Matrix multiplication (for matrices of a particular square shape).

- The morphological dilation operation.
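
If code helps, here's a minimal Python sketch (check_monoid is my own made-up helper, purely for illustration) showing that string concatenation, max with -∞, and & on bitstrings all satisfy exactly the same two laws:

    import math
    from functools import reduce

    def check_monoid(op, identity, samples):
        # identity law: op(e, x) == x == op(x, e)
        assert all(op(identity, x) == x == op(x, identity) for x in samples)
        # associativity: (a op b) op c == a op (b op c)
        assert all(op(op(a, b), c) == op(a, op(b, c))
                   for a in samples for b in samples for c in samples)

    check_monoid(lambda a, b: a + b, "", ["hairy", " and", " a half"])  # free monoid on strings
    check_monoid(max, -math.inf, [4, 5, -1])                            # the \/ (max) monoid
    check_monoid(lambda a, b: a & b, 0b1111, [0b1010, 0b0110, 0b0011])  # & on 4-bit bitstrings

    # anything written against just these two laws works for all of them:
    print(reduce(max, [4, 5, -1], -math.inf))  # 5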


Fascinating. Curious about matrix multiplication and a “particular square shape”. I'd be interested if you'd care to elaborate on the matrix relationship.


Oh, well, I mean that if you have a 3×3 matrix and a 4×4 matrix, you can't even multiply them. So 4×4 matrices form a monoid under multiplication, as do 3×3 matrices, but matrices in general do not.

It turns out that they actually form a much richer structure called a ring when you add in their addition operator as well: https://en.m.wikipedia.org/wiki/Matrix_ring
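
To see it concretely, here's a small numpy sketch (just an illustration, assuming 3×3 matrices):

    import numpy as np

    a, b, c = (np.random.rand(3, 3) for _ in range(3))
    e = np.eye(3)  # the identity matrix is the monoid's identity element

    assert np.allclose(a @ e, a) and np.allclose(e @ a, a)  # identity law
    assert np.allclose((a @ b) @ c, a @ (b @ c))            # associativity (up to float error)

    # a @ np.random.rand(4, 4) would raise a ValueError: mixing shapes isn't even
    # a well-defined binary operation, which is why the shape has to be fixed.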


ELI5? I appreciate the attempted explanation, but I'm not finding the new one any more helpful.


A monoid is a set of objects each having a port on the left and on the right, and given two such objects you can join the left port of one object to the right port of the other, constructing a new object. There is also an "empty" object that, when joined to any given object, results in that object unchanged.

For example, stacks of paper can be thought of as a monoid. Given two stacks, you can stack one on top of the other (the top and bottom of the stack standing in for the ports). The vacuous concept of a stack of zero sheets of paper is the "empty" object.

Another example is the set of PNG files. There are a number of monoids you could define, like horizontal concatenation of images, or the vertical concatenation of images. A non-example is blending two images with 50% opacity each, since depending on the order you blend the images, like (AB)C versus A(BC), image A might contribute 25% or 50% to the final image.

For some monoids, the left/right port distinction doesn't matter. Consider buckets of marbles. The way you join two buckets of marbles is to dump them into a single bucket. The empty bucket is the empty object. It doesn't matter in which order you dump the marbles into a bucket --- you'll get the same number of marbles in the end.
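
If code is your thing, Python's collections.Counter behaves just like the bucket of marbles (a small sketch, for non-negative counts):

    from collections import Counter

    red_heavy = Counter({"red": 3, "blue": 1})  # a bucket with 3 red and 1 blue marble
    blue_only = Counter({"blue": 2})
    empty     = Counter()                       # the "empty" object

    assert red_heavy + empty == red_heavy == empty + red_heavy
    assert red_heavy + blue_only == blue_only + red_heavy  # order of dumping doesn't matter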


This was very helpful. So an important aspect of a set of monoidal operations is that the order doesn't affect the outcome...?

This mostly makes sense, although... not for arrays where order matters...?


The word "order" is unfortunately overloaded. The first kind, like what you're talking about, is whether you use the left port or the right port of the first object when joining it to the second object. That certainly can matter, like with the stack of paper example.

The second kind is joining things through time, where it is not supposed to matter when you join things. If you have three stacks of paper, you can stack the first onto the second and put that on the third, or you can stack the second on the third and put the first on top of that, and you get essentially the same pile in either case.

Piles are supposed to be an analogue of arrays, where the first kind of order matters.

Lots of times, the second kind of order (associativity) actually does matter, too, where you get literally different objects, but it's ok since they're "equal" in a useful sense. For example, when you make a text editor, you might use the rope data structure to represent strings. If so, when you concatenate a list of strings, the literal tree structures you get in the representation of the rope might vary depending on how you did it, but they'll all be equal in the sense that the strings they represent are equal --- the same characters in the same order.
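
A toy sketch of that (not a real rope, the class names are just mine): two differently grouped concatenations build different trees, but they flatten to the same string.

    from dataclasses import dataclass

    @dataclass
    class Leaf:
        text: str

    @dataclass
    class Node:
        left: object
        right: object

    def cat(a, b):      # the monoid operation, on the tree representation
        return Node(a, b)

    def flatten(rope):  # the string a tree "means"
        return rope.text if isinstance(rope, Leaf) else flatten(rope.left) + flatten(rope.right)

    x, y, z = Leaf("he"), Leaf("ll"), Leaf("o!")
    grouped_left  = cat(cat(x, y), z)   # ("he" ++ "ll") ++ "o!"
    grouped_right = cat(x, cat(y, z))   # "he" ++ ("ll" ++ "o!")

    assert grouped_left != grouped_right                    # literally different trees...
    assert flatten(grouped_left) == flatten(grouped_right)  # ...but both mean "hello!"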


it's not exactly order independence, it's associativity:

  (a • b) • c  ==  a • (b • c)
i.e. "no matter where you put the parentheses, it'll give the same result". this DOESN'T mean that

  a • b == b • a 
though it may be true for some monoids like addition/multiplication over numbers¹.

you'll notice that associativity indeed holds for strings (or arrays of some T), where concatenation is the monoidal operation:

  ("he" ++ "ll") ++ "o!" ==
  "he ++ ("ll" ++ "o!") ==
   "hello!" 


though at first blush associativity may not seem very useful, it is! it means folding/"summing" a sequence of monoidal values is trivially parallelizable. i.e. if you've got a sequence

  as = [a0, a1, a2, ..., an-1, an]
and you want to "sum"/fold it using a's monoidal operation "•":

  r = (a0 • a1 • a2 • ... • an)
you can split it into chunks across multiple workers and then "sum" the results of summing each chunk:

   r0 = (a0•a1•a2)  // worker 0
   r1 = (a3•a4•a5)  // worker 1
   ...
   rm = (an-2•an-1•an)  // worker m

   r = r0•r1•...•rm // sum up what the workers produced
 
the nice thing is, as long as we don't reorder the elements, we'll always get the same result no matter how we chunk up `as`! and this will hold for any monoid `(a, •)`.
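
a quick Python sketch of that (sequential here, but each chunk's fold could go to its own worker; chunked_fold is a made-up name):

    from functools import reduce

    def chunked_fold(op, identity, xs, chunk_size):
        # each chunk would be handled by one worker
        chunks = [xs[i:i + chunk_size] for i in range(0, len(xs), chunk_size)]
        partials = [reduce(op, chunk, identity) for chunk in chunks]  # r0, r1, ..., rm
        return reduce(op, partials, identity)                         # r = r0 • r1 • ... • rm

    xs = ["a", "b", "c", "d", "e", "f", "g"]
    concat = lambda a, b: a + b
    # associativity guarantees every chunking gives the same result:
    assert chunked_fold(concat, "", xs, 3) == chunked_fold(concat, "", xs, 2) == "abcdefg"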

btw, an example of a less trivial monoid is `sorted lists of T `:

- `•`, the monoidal operation, is `merge(a, b)`, which combines two sorted lists into a new sorted list: `merge([1,2,3,5], [2,4]) == [1,2,2,3,4,5]`

- `e`, the "neutral element", is just the empty list `[]`

this is a (commutative) monoid, and you could say that mergesort relies on its monoidal properties to work!
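
here's that one as a Python sketch (hand-rolled merge; heapq.merge would do the same job):

    def merge(a, b):
        # the monoidal operation: combine two sorted lists into a new sorted list
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out + a[i:] + b[j:]

    assert merge([1, 2, 3, 5], [2, 4]) == [1, 2, 2, 3, 4, 5]
    assert merge([], [1, 2]) == merge([1, 2], []) == [1, 2]  # [] is the neutral element

    def mergesort(xs):
        # mergesort: split, sort each half, then combine with the monoidal operation
        if len(xs) <= 1:
            return list(xs)
        mid = len(xs) // 2
        return merge(mergesort(xs[:mid]), mergesort(xs[mid:]))

    assert mergesort([5, 2, 4, 1, 3]) == [1, 2, 3, 4, 5]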

---

¹ if you can swap values around and still get the same result, you've got a "commutative monoid".


I thought I'd add: what's the "mono" in "monoid"? It's that the ports come in only one compatibility type. For example, devices that have exactly two ports, one male and one female, for some type of connector, maybe USB-A. If you plug one into another, then you have a new device with the same properties, at least as far as ports are concerned.


Love the non-usual, physical-world examples :)


Consider these equations for integers:

    x * 1 = x
    1 * x = x
    x * (y * z) = (x * y) * z
These should be familiar if you know about integers.

Now consider the following equations for strings:

    concat(x, "") = x
    concat("", x) = x
    concat(x, concat(y, z)) = concat(concat(x, y), z)
These should be familiar if you know about strings.

Now consider the following equations for booleans:

    x or false = x
    false or x = x
    x or (y or z) = (x or y) or z
These should be familiar if you know about booleans.

If you can see the pattern, you understand monoids. Monoids are everywhere in programming.
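
And the payoff of seeing the pattern is that you can write code against it once. A tiny Python sketch (fold and the (op, identity) pairs are just illustrative):

    from functools import reduce

    def fold(op, identity, values):
        # works for any monoid, because it relies only on the monoid laws
        return reduce(op, values, identity)

    print(fold(lambda x, y: x * y, 1, [2, 3, 4]))           # 24
    print(fold(lambda x, y: x + y, "", ["a", "b", "c"]))    # abc
    print(fold(lambda x, y: x or y, False, [False, True]))  # True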


This is fantastic, thank you!

Alright, first thing I noticed was the presence of identity operators.

But then you're replacing the identity "operator" with an expression. Combining this with some of what other people are saying, and one aspect of monoids is that they're... chainable? The value of a monoid expression is the same type as the parameters of the monoid expression; `bar someMonoidFunction(bar x, bar y)`...?

This then has some interesting ramifications for chaining/composing, as you demonstrate in the third example of each group.

There's... something further than this, I can tell, but it's not yet clear to me.

Can you do one of these wonderful - heh, koans, basically - for something like `sin(cos(x))`?


You've pretty much got it. A monoid is a two-argument operation and a data type with 3 properties:

It's 'chainable', or the more precise mathematical term is "closed". It has a value that can be used as an identity. And it's associative, where associative means that when 'chaining' multiple operations (as you termed it), it doesn't matter how you group them with parentheses; it'll produce the same result.


Function composition is a monoid: (f ∘ g) ∘ h = f ∘ (g ∘ h). For example, (sin ∘ cos) ∘ tan = sin ∘ (cos ∘ tan). It's not commutative in general; sin ∘ cos != cos ∘ sin (in fact, sin(cos(x)) != cos(sin(x)) no matter the value of x).

The identity function I(x)=x is the identity: f ∘ I = f = I ∘ f.
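
In Python terms (a sketch; compose and identity are my own helpers):

    import math

    def compose(f, g):
        return lambda x: f(g(x))  # (f ∘ g)(x) = f(g(x))

    identity = lambda x: x

    x = 0.7
    lhs = compose(compose(math.sin, math.cos), math.tan)  # (sin ∘ cos) ∘ tan
    rhs = compose(math.sin, compose(math.cos, math.tan))  # sin ∘ (cos ∘ tan)
    assert lhs(x) == rhs(x)                               # associativity
    assert compose(math.sin, identity)(x) == math.sin(x) == compose(identity, math.sin)(x)

    assert math.sin(math.cos(x)) != math.cos(math.sin(x))  # but not commutative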


A monoid is when you have some data and an operation that's sort of like addition. Addition on numbers is sort of like addition, so that counts. Addition on vectors counts. Multiplication is a bit like addition, and multiplying numbers counts as a monoid. String concatenation is sort of like addition and it counts. Logical OR is kind of like addition for logic, where 1 + 0 = 1, 0 + 1 = 1, 0 + 0 = 0, 1 + 1 = 1, where 0 is False, 1 is True, and + is OR.

Monoids are very abstract which means they only have the vaguest kind of sort of like addition on them. Groups are less abstract, semigroups more abstract.

To be sort of like addition up to the level of monoids, you have to:

1. have something that's a bit like 0

2. moving the parentheses around doesn't matter

3. it needs to be an operation on two elements, like two strings, two logical values, or two numbers

That's it really. Sometimes you have to squint at an operation to see how it's like addition. Which is to say it's not really about addition at all. It's just an operation. It's about the specific three rules.


Okay, neat, thank you! Addition is sort of like addition, as is multiplication, and subtraction. Is division sort of like addition? And, functions seem like they're not sort of like addition - `sin(cos(x)) ?= cos(sin(x))`, or `log(cos(x)) ?= cos(log(x))` (I honestly don't remember that trig)


Division doesn't count because (1/2)/3 = 1/6, but 1/(2/3) = 3/2. And subtraction doesn't count because 0-(1-2) != (0-1)-2.


Sin and cos are unary operators; they don't take two inputs, which is a requirement. See the parent's point #3.


The "Group-like structures" table here seems super helpful: https://en.wikipedia.org/wiki/Monoid#Relation_to_category_th...


A monoid is a set $X$ with a mapping $X \times X \to X$ that is associative and has a unit element. @curryhoward's answer contains some examples that might help you understand this better.


They can also be reasoned about in terms of linear logic.

https://www.researchgate.net/publication/318253097_Petri_Net...


Paged through the article for pictures, but the only example with tensors was trivial. Could anyone who read the thing tell if there's an intuitive explanation of the binary operation on full nets?


They can be used for modelling actions and states, including the tricky cases where an automaton would become too complex.


Indeed. Finite state machines are a subset of Petri nets, having deterministic state transitions. Petri nets shine where concurrent state transitions need to be modelled, and are thus useful for modelling the physical world.
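
For a feel of how that looks in code, here's a minimal token-game sketch (my own toy encoding, not the paper's formalism): a marking is a multiset of tokens over places, and transitions consume and produce tokens.

    from collections import Counter

    def enabled(marking, transition):
        consume, _ = transition
        return all(marking[place] >= n for place, n in consume.items())

    def fire(marking, transition):
        # firing a transition: remove the consumed tokens, add the produced ones
        consume, produce = transition
        assert enabled(marking, transition)
        return marking - consume + produce

    # a producer/consumer net: t_produce turns a free slot into an item, t_consume the reverse
    t_produce = (Counter({"free_slot": 1}), Counter({"item": 1}))
    t_consume = (Counter({"item": 1}), Counter({"free_slot": 1}))

    m = Counter({"free_slot": 2})  # initial marking: two free slots, no items
    m = fire(m, t_produce)         # one slot used: {'free_slot': 1, 'item': 1}
    m = fire(m, t_produce)         # both slots used: {'item': 2}
    m = fire(m, t_consume)         # one item consumed: {'item': 1, 'free_slot': 1}

Markings over the places form exactly the kind of commutative monoid discussed elsewhere in this thread (like the bucket of marbles), which is the connection the paper's title is pointing at.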


Thanks. I forgot to point that out.



