Hasn't the whole massive-multicore thing been floating around the server space for a few years now? I remember HP's Project Moonshot, SeaMicro, and a few other efforts to build multi-chip ARM servers. However, I don't recall seeing ANY benchmarks demonstrating that they were more efficient per watt, in a REAL application, than the x86 competition. I would really like to see such a data point. I guess now it's being sold as a novelty so people can play with the technology, which is fine.
The CISC vs. RISC days are long over, and the battle between the two architectures is a bit silly at this point: the gap between the instruction set and the underlying implementation has gotten quite dramatic, since modern x86 chips decode their instructions into RISC-like micro-ops internally anyway. Claims that RISC chips are inherently more efficient may have been true in 1995, but I don't see them holding water today.
I'm certainly not going to argue against GPUs (or FPGAs) being used in scientific applications where throughput is limited by floating-point or vector performance. In certain scientific workloads they can hand general-purpose CPUs their butts. I'm talking about ARM/RISC vs. x86 CPUs for server applications.