My hunch: modern programming often requires concurrent execution of software, but most of the ways we have to model concurrency in code are at best hard to learn, and frequently orders of magnitude harder than that to learn and use.
Node.js is pretty good in this sense. Except for the very hard parts, node's async nature lets you introduce a good amount of concurrency in your code by default, so a decent amount of IO ends up being concurrent. You have to get used to a fully-async programming model, though.
That would be a problem with the mainstream languages, then. Concurrency with the actor model is easier to learn and use than OOP, which plenty of people seem to manage.
I find a lot (most?) of the OOP code I read is either spaghetti (with weird object interdependencies), diffuse (way too many classes and subclasses, so following the flow of computation becomes difficult), or both. It's a great tool, but perhaps harder than widely realized.
And concurrency is even harder, especially with the ever-popular "tweak until it parses/compiles, then ship" approach, or when many people are working on the same section of code.
The Hewitt Actor approach reduces interdependencies dramatically, at some cost for certain algorithms, and with added clarity for many others. And it scales somewhat automatically beyond one machine, which is a big win these days.
The actor model is non-deterministic and doesn't solve all concurrency problems, such as creating fine-grained or irregular data parallelism. There isn't (and it may not be possible to have) one single solution to 'the concurrency problem'.
The actor model is very deterministic, but it can model unbounded non-determinism, i.e. any concurrency problem, including fine-grained and irregular data parallelism. It's up to the compiler to generate SIMD instructions from it, if that's what you mean.
How do you argue it's deterministic? If one actor asks two other actors to do a job and send the result back, those results come in a nondeterministic order. That's a race condition. It's easy to write programs with bugs in them because of this.
Something like fork-join is deterministic because results come in a fixed order.
And for generating SIMD from actors? Or handling irregularity efficiently? I feel like you’re making the ‘sufficiently clever compiler’ argument.
We cannot currently efficiently solve all parallelism problems in practice using actors, and we don’t know how we would be able to.
Non-deterministic order is not a race condition. You also need some sort of shared resource, and to be unaware of said non-deterministic order of incoming messages. With the actor model you can't have shared resources and can't be unaware that messages arrive in no specific order.
And I'm not arguing for a sufficiently clever compiler, just that you can express any concurrency with actors. You can definitely create a convention backed by actors that compiles into SIMD if you need it.
> [it is a general race condition if] there is some order among events a and b but the order is not predetermined by the program
(Christoph von Praun)
If two actors do some work concurrently and when finished send a message to another actor, the order those messages arrive at the other actor, event a and event b, is not predetermined by the program. So it's a race condition.
actor a {
  do some work;
  send 'a' to x;
}
actor b {
  do some work;
  send 'b' to x;
}
actor x {
  receive; <- has this received from 'a' or 'b'? Nobody knows. They've raced.
}
You can express any concurrency with actors, but we do not know how to do so as efficiently as with other concurrency models for parallelism. Someone might be able to implement it efficiently, but nobody has managed it yet, so we're still reliant on shared memory and other approaches to concurrency.
It's not my definition! And you're right, normal message passing does leave you vulnerable to race conditions and your program can run a different way each time you run it! That's a major problem with it.
That's why I think other models of concurrency, such as the fork-join model, where the equivalent of 'messages' has to arrive in a deterministic order and so there are no race conditions, are safer.
Message passing forces you to explicitly handle non-deterministic order, how can it leave you vulnerable to race conditions? If you need to receive a specific message first, you wait for that specific message, that's it. Simple and deterministic.
This is a real error I've seen someone make in Erlang, writing a parallel sort, when teaching parallel programming to masters students.
actor A {
  receive half an array
  sort it
  send it to C
}
actor B {
  receive half an array
  sort it
  send it to C
}
actor C {
  (a, b) = divide input array into halves
  send a to A
  send b to B
  receive a'
  receive b'
  merge a', b'
}
They send to A, send to B, and then think they're going to receive from A first, because they sent first, but B could finish first instead. Sometimes their program works, sometimes it doesn't.
Yeah, it's their fault, but the model hasn't helped them avoid the bug, and worse, they may never see it until they deploy into production.
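For concreteness, this is roughly what that buggy version looks like in actual Erlang (a sketch; the module and names are mine). Each bare receive matches whichever message arrives first, so the first value bound may actually be B's half:

-module(psort_buggy).
-export([psort/1]).

psort(List) ->
    {A, B} = lists:split(length(List) div 2, List),
    Parent = self(),
    spawn(fun() -> Parent ! lists:sort(A) end),
    spawn(fun() -> Parent ! lists:sort(B) end),
    %% each bare receive takes whatever message arrives first
    SortedA = receive Msg1 -> Msg1 end,  % assumed to be A's half -- not guaranteed
    SortedB = receive Msg2 -> Msg2 end,  % assumed to be B's half -- not guaranteed
    lists:merge(SortedA, SortedB).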
If we used a fork-join model, they could not have made this mistake, and even if they did make some kind of mistake, at least they'd see it every time they ran their program.
(a, b) = divide input array into halves
fork {
  sort a
}
b' = sort b
a' = join
return a' + b'
As with most things in Erlang: if it's important, you must make it explicit. Implicit ordering works in your fork-join example with only a single fork, but if you require an ordering, you must be explicit about passing through the information needed to enforce it.
If you instead did
fork {
  sort a
}
fork {
  sort b
}
a' = join
b' = join
you would have the same problem as in Erlang. Or you could have actor C sort b itself, between sending a to A and receiving a', and you would also have an implicit ordering.
In this case, merge sort could work with either order if a stable sort isn't required, or if the sort key is the whole element.
If it matters, this is easy to defend against: you just send a tag in the messages to actors A and B (a ref in Erlang would be perfect for this case; if the merge happened in a fourth actor, a numeric indication of ordering would be more useful), and use that tag to enforce an ordering when receiving the replies.
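A sketch of that fix (module and names are mine): the refs act as tags, and the selective receives pull the tagged messages out of the mailbox in the order you ask for, regardless of the order they arrived in.

-module(psort_tagged).
-export([psort/1]).

psort(List) ->
    {A, B} = lists:split(length(List) div 2, List),
    Parent = self(),
    RefA = make_ref(),
    RefB = make_ref(),
    spawn(fun() -> Parent ! {RefA, lists:sort(A)} end),
    spawn(fun() -> Parent ! {RefB, lists:sort(B)} end),
    %% selective receive: only a message tagged with RefA matches here,
    %% even if RefB's message arrived first
    SortedA = receive {RefA, SA} -> SA end,
    SortedB = receive {RefB, SB} -> SB end,
    lists:merge(SortedA, SortedB).

If A and B were long-lived actors rather than one-shot spawns, you'd include the ref in the request and have them echo it back in the reply, which is essentially the pattern gen_server:call uses under the hood.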
If you have to join all the jobs at the same time (which is pretty inflexible), how is the ordering of the results determined? My exposure to this model was via the perl threads module (and the forks module, which offers the same API with OS forking instead of threads), where you join on a specific thread id, so you can easily enforce ordering by joining a first and then joining b. I assumed a join with no parameters would join a thread that's ready and/or wait for the first one to become ready, because that seems like the most useful/basic interface; anything more specific (like joining all the threads, or joining them in some order) could be added as needed in the context where it's needed.
> If you have to join all the jobs at the same time (which is pretty inflexible), how is the ordering of the results determined?
The model is that you can start a sequence of jobs to run in parallel, and then you have to wait for them all to finish. You get the results in the same order as the jobs you created. The order can't vary.
Think about a diamond shape - one job creates two more jobs, and then they both send their results back to the original job, which cannot continue until all child jobs are finished.
> because that seems like the most useful/basic interface
Yes, useful and basic, but the problem is it makes it easy to cause race conditions, which is where this thread started! You think some thread will be ready first, so you write code assuming that without even thinking about it, and then once in a trillion runs you actually get the other result first. Yes, it's a programmer bug, but the point is that because it's non-deterministic they may not notice until the one time it actually matters and someone dies.
Fork-join is not really a concurrency model, and I don't understand why you are trying to push it in every message. But if you insist on treating it like a concurrency model...
> The model is that you can start a sequence of jobs to run in parallel, and then you have to wait for them all to finish.
Yes, that's the model.
> You get the results in the same order as the jobs you created.
No, you don't get the results in the same order. Jobs still finish in random order and store their results before synchronization happens; synchronization happens on join after that. And instead of relying on order, you specify exactly where you get the result of each individual job from. So, if you have to specify that, why do you need an order? Oh, you don't need it and you don't get to have it. It's not Erlang, where you can actually have a deterministic order and can wait for messages in any order you want, while it reorders them for you.
> No, you don't get the results in the same order.
You and I just don't seem to be on the same page about what fork-join is, so we probably aren't going to agree on this.
> It's not Erlang, where you can actually have a deterministic order
You can, but my point is you can also not have a deterministic order, which is how Erlang programs end up being racy, and that's the problem with them if you are trying to solve the original problems of threads.
So, if you want this inflexible model, you could easily build it as a library in Erlang, just as you're clearly using someone's inflexible fork-join library (or an inflexible language implementation); in the meantime you're missing out on things like: fork A, B, and C, where A is expected to run much longer than the other two; when B and C return, fork D using their results; finally wait for A and D to return. Or: fork A, B, and C, merge the first two that return, then merge that with the third. Or: fork these ten jobs, but only run four of them at any given time (resource constraints).
My first example you could probably fit into your model with an extra fork; the second example won't fit in your model; the third fits if you fork four workers and add a shared queueing mechanism, but that feels more complex.
These kinds of techniques are key to using parallelism to reduce latency. Always having to wait for everyone to finish at each step makes for a lot of waiting.
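To make the "build it as a library" point concrete, a minimal fork/join pair on top of Erlang message passing might look like this (a sketch; the module, function names and API are mine):

-module(forkjoin).
-export([fork/1, join/1]).

%% fork/1 runs Fun in a new process and returns a handle (a ref)
fork(Fun) ->
    Parent = self(),
    Ref = make_ref(),
    spawn(fun() -> Parent ! {Ref, Fun()} end),
    Ref.

%% join/1 blocks until the job behind that handle has sent its result;
%% joining by handle gives you whatever ordering you ask for
join(Ref) ->
    receive {Ref, Result} -> Result end.

With that, joining a first and then b is just JA = forkjoin:fork(fun() -> lists:sort(A) end), JB = forkjoin:fork(fun() -> lists:sort(B) end), then forkjoin:join(JA) followed by forkjoin:join(JB); the stricter wait-for-everything variant, or the run-at-most-four scheme, can be layered on top.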
Fork-join is not what you think it is, then. It's just a behavior, uncommon outside of shared-memory programming. And there is no order of results. There can be, if it is implemented on top of message passing, but then writing code that relies on order instead of specific names actually becomes error-prone.
And lack of order of messages is still not a race condition.
Erlang lets you receive the first message first even if it arrives last; you just have to specify which message, exactly as in your example. But order doesn't actually matter for sorting, so you cannot possibly make a mistake with respect to ordering here.