Coroutines are a tool for concurrency, threads a tool for parallelism - see e.g. https://stackoverflow.com/questions/1050222/what-is-the-diff... . In my new code coroutines have entirely replaced futures; they make the code much more readable. They're also useful in cases where parallelism is irrelevant, for instance generators.

Think of coroutines as a language-level way to turn procedural-looking call trees into state machines, where the caller controls when and how the transitions happen.
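For example, a generator is just a coroutine the caller pumps by hand. Here's a minimal C++20 sketch, with made-up names (Generator, counter), not from any particular library:

  #include <coroutine>
  #include <cstdio>
  #include <exception>

  // A bare-bones generator: the coroutine suspends at every co_yield and the
  // caller decides when the next state transition happens by calling next().
  struct Generator {
      struct promise_type {
          int current = 0;
          Generator get_return_object() {
              return Generator{std::coroutine_handle<promise_type>::from_promise(*this)};
          }
          std::suspend_always initial_suspend() noexcept { return {}; }
          std::suspend_always final_suspend() noexcept { return {}; }
          std::suspend_always yield_value(int v) noexcept { current = v; return {}; }
          void return_void() noexcept {}
          void unhandled_exception() { std::terminate(); }
      };
      std::coroutine_handle<promise_type> h;
      explicit Generator(std::coroutine_handle<promise_type> h) : h(h) {}
      Generator(const Generator&) = delete;
      ~Generator() { if (h) h.destroy(); }
      bool next() { h.resume(); return !h.done(); }
      int value() const { return h.promise().current; }
  };

  Generator counter(int limit) {
      for (int i = 0; i < limit; ++i)
          co_yield i;                       // suspend; control returns to the caller
  }

  int main() {
      Generator g = counter(5);
      while (g.next())
          std::printf("%d\n", g.value());   // prints 0..4, no threads involved
  }

The while loop in main is the caller controlling the transitions: each next() drives exactly one step of the state machine.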




What's the difference between concurrency and parallelism in present lingo?

We didn't distinguish between them in the early days (early 1990s) of working on parallelizing compilers and proliferating shared-memory multiprocessors, as I recall. I first heard someone say they meant different things about 12 years ago, from a person roughly 15 years younger than me.


I was also working on early multiprocessors in the early 90s, and it's true that the terms were often treated as synonyms then, but for at least half of the time since then the distinction has been clear and pretty well agreed upon. Concurrency refers to the entire lifetimes of two activities overlapping, and can be achieved via scheduling (preemptive or cooperative) and context switching on a single processor (core nowadays). Parallelism refers to activities running literally at the same instant on separate bits of hardware. It doesn't make a whole lot of sense etymologically, might even be considered backward in that sense, but it's the present usage.

Note: I deliberately use the generic "activity" to mean any of OS threads, user-level threads, coroutines, callbacks, deferred procedure calls, etc. Same principle/distinction regardless.
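To make that concrete, a toy C++ sketch (stepA/stepB are just made-up activities): the plain loop interleaves the two activities on one thread, which is concurrency without parallelism; the two std::threads may run them at literally the same instant on separate cores, which is parallelism.

  #include <cstdio>
  #include <thread>

  // Two "activities" expressed as plain step functions so the same work can be
  // driven either way.
  void stepA(int i) { std::printf("A step %d\n", i); }
  void stepB(int i) { std::printf("B step %d\n", i); }

  int main() {
      // Concurrency without parallelism: one thread interleaves A and B, so
      // their lifetimes overlap even though only one runs at any instant.
      for (int i = 0; i < 3; ++i) { stepA(i); stepB(i); }

      // Parallelism: two OS threads may run A and B at literally the same
      // instant on separate cores, if the hardware has them.
      std::thread ta([]{ for (int i = 0; i < 3; ++i) stepA(i); });
      std::thread tb([]{ for (int i = 0; i < 3; ++i) stepB(i); });
      ta.join();
      tb.join();
  }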


so "concurrency" is temporal overlap with no statement of allocated cores?


If I'm interpreting your question correctly, yes. Two concurrent activities can run sequentially or alternately on a single core, via context switching. Or they can run in parallel on separate cores. It shouldn't matter; either way it's concurrency with most of the associated complexity around locks and most kinds of data races. OTOH, the two cases can look very different e.g. when it comes to cache coherency, memory ordering, and barriers. I've seen a lot of bugs that remained latent on single-core systems or when related concurrent tasks "just happened" to run on the same core, but then bit hard when those tasks started jumping from core to core. This stuff's never going to be easy.
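For the cross-core flavor, the classic example is publishing data through a flag. With a plain bool flag and a plain int payload it's a data race that often stays latent while both tasks share a core, but can surface as a stale read once they land on different cores. A minimal C++ sketch of the well-defined version, using release/acquire on the flag:

  #include <atomic>
  #include <cassert>
  #include <thread>

  int payload = 0;                 // plain data, published via the flag below
  std::atomic<bool> ready{false};  // with a plain bool here, this is the latent bug

  void producer() {
      payload = 42;                                    // write the data first
      ready.store(true, std::memory_order_release);    // then publish it
  }

  void consumer() {
      while (!ready.load(std::memory_order_acquire)) { /* spin */ }
      assert(payload == 42);    // guaranteed only because of acquire/release
  }

  int main() {
      std::thread p(producer), c(consumer);
      p.join();
      c.join();
  }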


thank you!



Concurrency is a bit of an overused term; early on it referred to parallelism too. Anyway, yes, I know the difference between pthreads and coroutines.

My problem with coroutines is how to use them on multi-core systems. My current thought is a pthread pool sized to the number of cores, with each thread running multiple coroutines, since coroutines alone aren't an ideal fit for leveraging multiple cores. The pthread-pool + coroutine approach gives me a combination of simpler code and multicore usage.
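Roughly what I have in mind, as a sketch only (C++20 coroutines on a plain std::thread pool; Task, work, and worker are made-up names, and a real scheduler would want a shared run queue or work stealing rather than fixed per-thread batches):

  #include <algorithm>
  #include <coroutine>
  #include <cstdio>
  #include <exception>
  #include <thread>
  #include <vector>

  // A fire-and-forget task that suspends at every co_await, so a worker loop
  // can interleave many of them on one OS thread.
  struct Task {
      struct promise_type {
          Task get_return_object() {
              return Task{std::coroutine_handle<promise_type>::from_promise(*this)};
          }
          std::suspend_always initial_suspend() noexcept { return {}; }
          std::suspend_always final_suspend() noexcept { return {}; }
          void return_void() noexcept {}
          void unhandled_exception() { std::terminate(); }
      };
      std::coroutine_handle<promise_type> h;
  };

  Task work(int id) {
      for (int step = 0; step < 3; ++step) {
          std::printf("task %d step %d\n", id, step);
          co_await std::suspend_always{};   // yield back to the worker loop
      }
  }

  // Each worker owns a private batch of coroutines and round-robins them:
  // M coroutines multiplexed onto N OS threads.
  void worker(std::vector<std::coroutine_handle<>> handles) {
      bool progress = true;
      while (progress) {
          progress = false;
          for (auto h : handles)
              if (!h.done()) { h.resume(); progress = true; }
      }
      for (auto h : handles) h.destroy();
  }

  int main() {
      unsigned n = std::max(1u, std::thread::hardware_concurrency());
      std::vector<std::vector<std::coroutine_handle<>>> batches(n);
      for (int id = 0; id < 8; ++id)
          batches[id % n].push_back(work(id).h);   // shard coroutines across workers

      std::vector<std::thread> pool;
      for (auto& b : batches) pool.emplace_back(worker, std::move(b));
      for (auto& t : pool) t.join();
  }

Statically sharding coroutines across workers like this keeps the code simple, at the cost of possible load imbalance between cores.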


There's decades of research on this; search for M:N threading. The best approach for scheduling coroutines depends massively on the application domain in practice.


Not sure M:N maps to the thread + coroutine combo here, though.


Threads are also a tool for concurrency.


And concurrency can be a tool to implement threads (albeit, with no parallel speedup).



