How is this nonsense, and what does it have to do with "language runtimes with disabilities"? An OS running on a single-core processor cannot be parallel, but it may be concurrent: it can never physically do two things at the same time, but it may still logically interleave different tasks.
Parallelism is a physical thing, concurrency is a logical thing.
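A minimal sketch of that distinction, assuming Go: `runtime.GOMAXPROCS(1)` restricts the scheduler to one running goroutine at a time, emulating a single-core machine, yet two tasks still interleave. The task names and yield points here are illustrative, not from the original discussion.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// task records its name once per step, yielding between steps so the
// scheduler can hand the single "core" to the other task.
func task(name string, steps int, trace *[]string, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < steps; i++ {
		// With GOMAXPROCS(1) only one goroutine runs at a time, so
		// these appends never execute simultaneously.
		*trace = append(*trace, name)
		runtime.Gosched() // cooperative yield, like a timer interrupt handing off the CPU
	}
}

func main() {
	runtime.GOMAXPROCS(1) // one running goroutine at a time: concurrent, never parallel

	var trace []string
	var wg sync.WaitGroup
	wg.Add(2)
	go task("A", 3, &trace, &wg)
	go task("B", 3, &trace, &wg)
	wg.Wait()

	// trace now holds an interleaving such as [A B A B A B]: both tasks
	// made progress, even though no two steps ever ran at the same instant.
	fmt.Println(len(trace))
}
```

Both tasks complete all their steps, which is exactly "logically doing two things" without ever "physically doing two things."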
Fundamentally, the difficulty is all about synchronization. People can try to split hairs and say there are two terms for two different things, but ultimately it doesn't matter, because the underlying problem is the same.
> Single core concurrency doesn't have to deal with hardware memory synchronization.
Yes, it does. If you have two threads which (for whatever reason) share memory and can be run in an arbitrary order, you have to deal with memory synchronization. Even on a single-core machine, you still need to synchronize your memory accesses. Mutexes and semaphores predate systems with multiple processors.
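A sketch of why, assuming Go and again pinning the scheduler to one running goroutine: a read-modify-write like a bank withdrawal can be interrupted between the read and the write even on one core, so the whole critical section must be held under a mutex. The `withdraw` scenario is a hypothetical example, not from the thread.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	runtime.GOMAXPROCS(1) // single "core": still concurrent, so still racy without a lock

	var (
		mu      sync.Mutex
		balance = 100
		wg      sync.WaitGroup
	)

	// Without the mutex, the scheduler could switch tasks between the
	// balance check and the subtraction, letting both withdrawals pass
	// the check. The mutex serializes the whole read-modify-write.
	withdraw := func(amount int) {
		defer wg.Done()
		mu.Lock()
		if balance >= amount {
			balance -= amount
		}
		mu.Unlock()
	}

	wg.Add(2)
	go withdraw(70)
	go withdraw(70)
	wg.Wait()
	fmt.Println(balance) // exactly one withdrawal succeeds
}
```

Nothing in this program depends on a second core existing; the hazard comes from arbitrary ordering alone, which is why mutexes made sense on uniprocessors.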
Yes, but those are not hardware memory-synchronization mechanisms. You do not have to deal with memory fencing to coordinate reads and writes at the CPU level, because only one core can be reading or writing memory at a time.
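To make the fencing point concrete, here is a hedged sketch of the classic publish-with-a-flag pattern, assuming Go's `sync/atomic` (the `data`/`ready` names are illustrative). On multicore hardware the atomic store/load pair is what carries the ordering guarantee that makes the payload visible before the flag; on a true single core, the same code needs no CPU fence at all, only atomicity with respect to the scheduler.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var data int
	var ready atomic.Bool
	done := make(chan struct{})

	go func() {
		// Consumer: spin until the producer publishes. On multicore
		// hardware the atomic load provides the ordering that makes the
		// earlier write to data visible once ready reads true; on a
		// single core no fence instruction is needed for this.
		for !ready.Load() {
		}
		fmt.Println(data)
		close(done)
	}()

	data = 42         // write the payload first...
	ready.Store(true) // ...then publish it with an atomic store
	<-done
}
```

With a plain (non-atomic) flag, this program would be broken on a multicore machine, where the CPU or compiler may reorder the two writes, but the equivalent would have worked on a uniprocessor. That asymmetry is the hardware-cooperation gap being described.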
Yes? Parallelism is inherently concurrent but concurrency is not inherently parallel. But single core concurrency still exists as a form of non-parallel concurrency.
The point I'm trying to make is that, when you get down to brass tacks, single-core concurrency is a lot simpler than multi-core concurrency, because you don't need any hardware cooperation.
Before, you were saying they are two different things; now you're saying they are the same when you have multiple cores and not the same when you have a single core. If they're the same when you have multiple cores, isn't there just single-core and multi-core concurrency by your own definitions? If so, why are you drawing a distinction between parallelism and concurrency?
You have just repeated the nonsense I was talking about.
The claim you repeat is meaningless. A program is either parallel / concurrent or not. The situation you describe (when there's a single processor core) isn't parallel or concurrent. In some sense, it emulates concurrent / parallel execution because it imitates the unpredictable ordering of code execution, which sure has its uses... but the whole point of dealing with this unpredictable ordering is that we actually want parallelism / concurrency. The emulation on its own is worthless.
No it's not? How do you characterize running two programs on a single core without calling it concurrent but not parallel? There is a clear distinction between logical multitasking and physical multitasking that I think is useful to taxonomize.
"a program which is not written as concurrent can never be executed as parallel"
Unless of course you're running multiple independent instances of it, each with different parameters, which I gather is a pretty common way of running long-running CPU-intensive operations on large data sets. Presumably there's often some final separate step needed to combine the results once they're all finished, which, if run manually, obviates any concurrency concerns at the software level.
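A sketch of that pattern, assuming Go, with goroutines standing in for the separate OS processes the comment describes (the `sumShard` function and the shard split are hypothetical): each instance runs on its own slice of the data with no shared state, and a combine step runs only after all instances have finished.

```go
package main

import (
	"fmt"
	"sync"
)

// sumShard stands in for one independent instance of the program,
// launched with different parameters (here: a different slice of data).
// It shares nothing, so the program itself needs no concurrency awareness.
func sumShard(shard []int) int {
	total := 0
	for _, v := range shard {
		total += v
	}
	return total
}

func main() {
	data := []int{1, 2, 3, 4, 5, 6, 7, 8}
	shards := [][]int{data[:4], data[4:]}

	// Launch each instance; only the launcher knows there are several.
	results := make([]int, len(shards))
	var wg sync.WaitGroup
	for i, s := range shards {
		wg.Add(1)
		go func(i int, s []int) {
			defer wg.Done()
			results[i] = sumShard(s)
		}(i, s)
	}
	wg.Wait()

	// The separate combine step runs strictly after every instance has
	// finished, so it reads their results without any synchronization.
	sum := 0
	for _, r := range results {
		sum += r
	}
	fmt.Println(sum)
}
```

The interesting property is that `sumShard` is written as purely sequential code, yet the instances execute in parallel: the concurrency lives entirely in the launcher and the manually sequenced combine step.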
So before 2001 your OS didn't have a scheduler and was incapable of handling more than one process at the same time? The user interface was simply hanging when you told your CPU to do something?
That was the point I was attempting to get across. Single core computers multitask fine, which proves that concurrency is not the same thing as parallelism.
Even in modern times we have hardware like the first-generation Raspberry Pi Zero: it has a single ARM core, and it multitasks fine.