Important point: Promises, ContT monadic constructions[1], Threads, and Actors are all just variations on how to structure sequential computation with asynchronous events.
You can prove this to yourself by construction: write an interpreter that simulates threads (even with pre-emption) using Promises, Actors, or a more formal monadic style. You can repeat the feat for any member of the set, and if you do it carefully, that's a proof by construction.
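To make that concrete, here's a minimal sketch of one direction of the construction: cooperative "threads" simulated by a trampolined round-robin scheduler, with Python generators standing in for continuations. All the names here are mine, invented for illustration; this shows the shape of the proof, not a production scheduler.

```python
# Sketch: simulating cooperative "threads" in continuation style.
# Each generator is a thread; each yield is a suspension point where
# the scheduler "pre-empts" it and runs the next one.
from collections import deque

def run(*tasks):
    """Crude round-robin scheduler over generator-based threads."""
    ready = deque(tasks)
    log = []
    while ready:
        task = ready.popleft()
        try:
            log.append(next(task))  # run until the next yield
            ready.append(task)      # re-schedule: crude pre-emption
        except StopIteration:
            pass                    # this "thread" finished
    return log

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"

print(run(worker("a", 2), worker("b", 2)))
# → ['a:0', 'b:0', 'a:1', 'b:1']
```

The interleaved output is the point: two sequential computations appear to make progress "at the same time" with no OS threads at all, which is exactly the equivalence claimed above.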
Essentially, which one you choose to implement is arbitrary, and should be informed by your underlying performance requirements coupled with your preferences.
And, as people have noted: your post seems confused. Everyone benefits somewhat uniformly from asynchronous operations. Java's implementations are often a little bit more tunable because the programmer can provide the threadpool for async workers.
Of all of the languages you named, Erlang is the most distinct as it offers an abstraction (and cost) of 1 heap per actor from the perspective of the garbage collector. This decision (made in a time when GC algorithms were quite a bit worse than they evolved to be) led Erlang to something they want to keep: an abstraction that makes remote and local computation identical. Other languages that follow in Erlang's footsteps (Pony comes to mind, yay Pony!) do not do things this way.
[1]: Okay EitherT ContT. Sure. Get picky.
P.S., You cannot comment because your overall karma is too low, putting you in a rate-limited category. It generally clears up in a few hours.
However, I do not think there is much more of value to say other than to thank that one poster for a few pretty cool papers!
> you can't use languages that don't share memory between threads ... Python, Go, Ruby
> trust me, I'm not confused
Sorry, but you really are confused about this. You said that Python, Go, and Ruby don't have shared memory between threads, but they simply do. All of these languages' concurrency and parallelism models are fundamentally shared-memory, and all of them even allow completely unsynchronised access to shared memory if you want it.
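To be concrete about the Python case, here's a trivial sketch (names mine): two OS threads mutating the very same heap object with no locks, channels, or synchronisation of any kind. That's shared memory, plain and simple.

```python
# Sketch: two CPython threads writing into one shared dict,
# completely unsynchronised. Both writes land in the same object.
import threading

shared = {}

def writer(key, value):
    shared[key] = value  # unsynchronised write to shared state

t1 = threading.Thread(target=writer, args=("a", 1))
t2 = threading.Thread(target=writer, args=("b", 2))
t1.start(); t2.start()
t1.join(); t2.join()

print(shared)  # both threads' writes are visible: {'a': 1, 'b': 2}
               # (key insertion order may vary by which thread ran first)
```

(CPython's GIL happens to make each individual dict write atomic, but that's an implementation detail, not a memory model that forbids sharing.)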
If you won't believe me on that, here are five peer-reviewed papers from reputable venues on Ruby alone that talk about the shared memory model it has. Either all of these experts and all the reviewers are mistaken, or you are.
B. Daloze, S. Marr, D. Bonetta, H. Mössenböck. Efficient and Thread-Safe Objects for Dynamically-Typed Languages. In Proceedings of the ACM International Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA), 2016.
C. Ding, B. Gernhardt, P. Li, and M. Hertz. Safe Parallel Programming in an Interpreted Language. In Proceedings of the First Workshop on High Performance Scripting Languages, 2015.
L. Lu, W. Ji, and M. L. Scott. Dynamic Enforcement of Determinism in a Parallel Scripting Language. In Proceedings of the 35th Conference on Programming Language Design and Implementation (PLDI), 2014.
R. Odaira, J. G. Castanos, and H. Tomari. Eliminating Global Interpreter Locks in Ruby through Hardware Transactional Memory. In Proceedings of the 19th Symposium on Principles and Practice of Parallel Programming (PPoPP), 2014.
W. Ji, L. Lu, and M. L. Scott. TARDIS: Task-level Access Race Detection by Intersecting Sets. In Proceedings of the 4th Workshop on Determinism and Correctness in Parallel Programming (WoDet), 2013.
> R. Odaira, J. G. Castanos, and H. Tomari. Eliminating Global Interpreter Locks in Ruby through Hardware Transactional Memory. In Proceedings of the 19th Symposium on Principles and Practice of Parallel Programming (PPoPP), 2014.
Well thanks for this, can't wait to sit down & read it.
Python, Go and Ruby do not have the ability to run multiple real OS threads that can run on separate cores and share process memory at the same time.
They might try to work around that, but they will never be able to run one common task to/from many sockets on all cores of one machine at the same time.
Yes, they do. Python does. I'm sorry, but your wires are getting crossed somewhere. You also completely ignored the comment you were replying to and the sources he cites for you, and repeated essentially the same thing you said before. This is quite rude.
Python has a GIL, so only one thread can interact with the Python interpreter at a time. But you can have any number of threads running simultaneously if they release the GIL and don't touch the interpreter (i.e., if they are doing IO or calling a C function).
They are, however, still real OS threads, running (or waiting on a lock) on separate cores and sharing process memory. That's the definition of a thread. They are not fake threads. They share memory. They run on separate cores (perhaps).
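You can watch the GIL being released during blocking IO with a small sketch (the 0.2s figure is just an arbitrary choice for the demo): if the two waits serialised, the total would be ~0.4s; because the GIL is dropped while blocked, they overlap and the total is ~0.2s.

```python
# Sketch: CPython threads are real OS threads. During blocking IO
# (sleep stands in for a socket read) they release the GIL, so two
# 0.2s waits overlap instead of running back-to-back.
import threading
import time

def blocking_io():
    time.sleep(0.2)  # releases the GIL while blocked

start = time.monotonic()
threads = [threading.Thread(target=blocking_io) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

print(f"elapsed ≈ {elapsed:.2f}s")  # ~0.2s: the waits ran concurrently
```

The same overlap happens for C extensions that drop the GIL around heavy computation; it's only pure-Python bytecode that is serialised.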
This is not technically true, though it often is in practice. Regardless, it has no bearing at all on the concept of "async" as modeled, or on the contexts you've discussed.
I'm not sure why you're trying to salvage this or what your endgame is here but please reconsider.
Would you prefer I use the phrase "completely incorrect about the memory models of the majority of languages you named and the erroneous conclusions you reached from them?"
That is certainly more accurate, but it seemed needlessly confrontational in my first draft, so I opted for the more generic phrase "confused post", which implied maybe the post itself was simply jumbled up and didn't clearly convey your intent to the audience. You did ask, after all.
I was trying to give a non-confrontational way to explain why I almost downvoted your post for being very misleading, and opted instead to respond with a slightly more correct post.
HN needs less fighting and more knowledge.
I thought what you were trying to say was, "parallelism is not the same as concurrency" but that it got jumbled up. That is a correct statement.
Related reading, somewhat famous: https://pdfs.semanticscholar.org/2948/a0d014852ba47dd115fcc7...