Hacker News

The actor model really is for concurrency, not for parallelism.

For pure parallel computing it introduces unnecessary overhead because of the message passing. That overhead in turn hurts performance, which really is the only reason you'd want to compute in parallel.




Parallel compute middleware is often based on message passing when the parallel computer is a cluster. See MPI.
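MPI itself needs a cluster runtime, but the same message-passing shape shows up on a single machine with plain OS processes. A rough sketch using Python's stdlib (the `worker` function and the split into two chunks are mine, not anything from MPI):

```python
from multiprocessing import Process, Pipe

def worker(conn, chunk):
    # each process computes its piece and sends the result back as a message
    conn.send(sum(chunk))
    conn.close()

if __name__ == "__main__":
    data = list(range(100))
    halves = [data[:50], data[50:]]
    pipes, procs = [], []
    for chunk in halves:
        parent, child = Pipe()
        p = Process(target=worker, args=(child, chunk))
        p.start()
        pipes.append(parent)
        procs.append(p)
    # gather partial results, MPI-reduce style
    total = sum(conn.recv() for conn in pipes)
    for p in procs:
        p.join()
    print(total)  # -> 4950
```

The point is that once your parallel units don't share an address space, message passing isn't overhead you added; it's the only way to communicate at all.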


Concurrency allows for parallelism.

It's common to conflate parallel programming with so-called embarrassingly parallel tasks, but this isn't accurate. For tasks that can be executed in parallel but aren't working on different regions of a single state, actors are an excellent choice.


Right, but isn't what you're describing exactly concurrency, rather than what we normally consider parallel computing, at least on a single machine? Depending on the implementation/runtime, you may or may not need something like threads, which are a good concurrency primitive, for parallel computing.


I'm not entirely sure what the disconnect is here, but I'll give it a shot.

Parallel computing is simply running more than one aspect of a computation simultaneously; it requires multiple processors/cores, or SIMD. Concurrency is doing more than one thing 'at a time', which generally includes things like callbacks (to allow more work to be done while waiting on an action) and preemptive threading (which may or may not involve true parallelism).
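To make that distinction concrete, here's a minimal single-threaded sketch with Python's asyncio (the `fetch` coroutine and its delays are made up for illustration). Three tasks interleave while each one waits, so this is concurrency with no parallelism anywhere:

```python
import asyncio

async def fetch(name, delay):
    # while one task awaits, the event loop runs the others: concurrency
    await asyncio.sleep(delay)
    return name

async def main():
    # three tasks interleave on a single thread -- no parallelism involved;
    # gather() returns results in argument order, not completion order
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01), fetch("c", 0.0))

print(asyncio.run(main()))  # -> ['a', 'b', 'c']
```

The total wait is roughly the longest delay, not the sum of them, even though only one thread is ever running.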

Concurrency is an opportunity to work in parallel, one which may or may not be achievable. I consider threads a bad concurrency primitive, because they're too low-level and hard to get right, and this becomes even worse when one runs threads in parallel.

Actors, which are a share-nothing concurrency model based on message passing, are a good concurrency primitive. One reason is that you can put them on threads without having to deal with locking and unexpected mutation. You can treat the threads as an implementation detail that happens below the level the programmer must concern themselves with.
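A toy illustration of the share-nothing idea, using a thread plus a mailbox queue (a sketch under my own naming, not a real actor runtime):

```python
import threading
import queue

class Actor:
    """Minimal actor: owns its state, reacts to messages one at a time."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0                  # private state; no other thread touches it
        self._done = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self._mailbox.put(msg)           # the only way to interact with the actor

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg == "stop":
                self._done.set()
                return
            self._count += 1             # mutation happens only on the actor's thread

    def join(self):
        self._done.wait()
        return self._count

actor = Actor()
for _ in range(10):
    actor.send("tick")
actor.send("stop")
print(actor.join())  # -> 10
```

Because all mutation happens on the actor's own thread and the mailbox serializes messages, there's nothing to lock; the same code works whether the actors run interleaved or genuinely in parallel.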

This means they're good for running in parallel, as well, which is quite tractable on a single machine given that it has multiple cores (mine has eight).

Colloquially we sometimes say 'parallel computing' when referring specifically to so-called 'embarrassingly parallel' tasks, like some rendering algorithms, where one may bring as many cores to bear on the task as one has available.
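For contrast, an embarrassingly parallel sketch: here `shade` is a made-up stand-in for an expensive per-pixel computation, and `multiprocessing.Pool` spreads it across however many cores are available:

```python
from multiprocessing import Pool

def shade(pixel):
    # stand-in for an expensive per-pixel computation
    return pixel * pixel

if __name__ == "__main__":
    with Pool() as pool:  # one worker per available core by default
        result = pool.map(shade, range(8))
    print(result)  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

No pixel depends on any other, so throwing more cores at it scales almost linearly; that's the special case, not the definition of parallel computing.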

But concurrency is always an opportunity for parallelism, and an actor model allows one to take that opportunity, provided other aspects of the runtime don't stand in the way. And parallel computing is simply running more than one computation at the same time; it doesn't by itself imply anything else about the algorithm.


> it introduces unnecessary overhead because of the message passing.

Inter-process and inter-machine parallelism do that anyway.



