
I was also working on early multiprocessors in the early 90s, and it's true that the terms were often treated as synonyms then, but for at least half of the time since then the distinction has been clear and pretty well agreed upon. Concurrency refers to the entire lifetimes of two activities overlapping, and can be achieved via scheduling (preemptive or cooperative) and context switching on a single processor (a single core, nowadays). Parallelism refers to activities running literally at the same instant on separate bits of hardware. It doesn't make a whole lot of sense etymologically (one might even call it backward), but it's the present usage.

Note: I deliberately use the generic "activity" to mean any of OS threads, user-level threads, coroutines, callbacks, deferred procedure calls, etc. Same principle/distinction regardless.
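To make the distinction concrete, here's a minimal Go sketch (my own illustration, nothing standard): with GOMAXPROCS pinned to 1, the two goroutines below are concurrent (their lifetimes overlap and they interleave via the scheduler) but never parallel, since only one can execute at any instant.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // Concurrency without parallelism: limit the Go scheduler to one
        // OS thread, so goroutines can only interleave, never overlap in
        // real time.
        runtime.GOMAXPROCS(1)

        var wg sync.WaitGroup
        for _, name := range []string{"A", "B"} {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                for i := 0; i < 3; i++ {
                    fmt.Println(name, i)
                    runtime.Gosched() // yield: an explicit context-switch point
                }
            }(name)
        }
        wg.Wait()
        // With GOMAXPROCS(runtime.NumCPU()) the same goroutines could run
        // in parallel on separate cores; they're concurrent either way.
    }

Same program, same concurrency; whether it's also parallel is purely a question of how many cores the scheduler is allowed to use.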

so "concurrency" is temporal overlap with no statement of allocated cores?


If I'm interpreting your question correctly, yes. Two concurrent activities can run sequentially or alternately on a single core, via context switching. Or they can run in parallel on separate cores. It shouldn't matter; either way it's concurrency with most of the associated complexity around locks and most kinds of data races. OTOH, the two cases can look very different e.g. when it comes to cache coherency, memory ordering, and barriers. I've seen a lot of bugs that remained latent on single-core systems or when related concurrent tasks "just happened" to run on the same core, but then bit hard when those tasks started jumping from core to core. This stuff's never going to be easy.
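For a flavor of the kind of latent bug I mean, a minimal (contrived) Go sketch of my own: the unsynchronized flag below often "works" when everything interleaves on one core, but it's a data race with no happens-before edge, and once the two goroutines land on separate cores of weakly ordered hardware the reader can observe the flag without observing the result.

    package main

    import (
        "fmt"
        "runtime"
    )

    // Shared state with no synchronization: a data race under the Go
    // memory model, even if it happens to behave on a single core.
    var (
        result int
        done   bool
    )

    func main() {
        go func() {
            result = 42
            done = true // racy write, no ordering guarantee vs. result
        }()
        for !done { // racy read
            runtime.Gosched() // polite single-core yield that often hides the bug
        }
        fmt.Println(result) // may print 0 on weakly ordered hardware (and is a race everywhere)
        // `go run -race` flags this regardless of core count; a channel,
        // mutex, or sync/atomic supplies the missing happens-before edge.
    }

The race detector catches it either way, which is part of why it's worth running even on code that "passes" on one core.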


thank you!