To clarify, my point was about general-purpose CPUs as a category, not about single-core versus multi-core.
A general-purpose CPU requires that a program be transformed (via compilation or interpretation) into a sequence of basic instructions that the processor knows how to execute. This means almost every program takes many cycles to complete, even if the underlying logic could theoretically be done in a single cycle (or in no cycles at all, as pure combinational logic!).
On the other end of the spectrum are FPGAs and ASICs, programmable or dedicated circuits that allow you to create specialized logic that corresponds directly to a specific need.
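To make that concrete, here's a minimal C sketch (illustrative only; the function name and bit width are arbitrary). On a CPU, computing the parity of a word costs a loop of shift, XOR, and branch instructions; on an FPGA or ASIC, the same function is just a tree of XOR gates.

    #include <stdint.h>

    /* Parity of a 32-bit word, the way a general-purpose CPU sees it:
     * a loop of shift, XOR, and compare-and-branch instructions, each
     * costing at least one cycle per iteration. */
    uint32_t parity(uint32_t x) {
        uint32_t p = 0;
        while (x) {
            p ^= x & 1;  /* one AND and one XOR instruction */
            x >>= 1;     /* one shift instruction */
        }                /* plus a compare-and-branch per iteration */
        return p;
    }

    /* On an FPGA or ASIC, the same function is a combinational tree
     * of XOR gates: the output settles as signals propagate, with no
     * instruction fetch, no decode, and no loop at all. */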
Bringing this back to the discussion at hand: Moore's law says nothing about general-purpose CPUs; it is only about the number of transistors on an IC doubling. That said, transistors can only get so small (the laws of physics see to that), so we can presume the scaling will end eventually.
There are changes we can make to improve general-purpose CPU architecture regardless of transistor count, and there are changes we can make to how we run programs (moving dedicated logic into dedicated circuits). Forcing logic into a generic set of steps that run in a loop will always be less efficient than wiring up the logic itself.
The question has always been whether to wait for the machine to get faster or to build the dedicated logic yourself. Waiting has been the winning bet since the beginning of computing, and that bet is what has been closely tied to Moore's law. But the literal end of Moore's law doesn't mean the end of computational efficiency gains.