Hacker News

> The real myth is the idea of a CISC/RISC dichotomy in the first place

The divergence was one of philosophy, and had unexpected implications.

CISC was a “business as usual” evolution of the 1960s view (exception: Seymour Cray) that you should make it easy to write assembly code, so you put lots of addressing modes and subroutine-like operations (string ops, BCD, etc.) in the instruction set.

RISC was the realization that software had gotten good enough that compilers could do the heavy lifting, and that without all that junk, hardware designers could spend their transistor budget more usefully.

That’s all well and good (I was convinced at the time, anyway) but the results have been amusing. For example, some RISC experiments turned out to have painted their designs into dead ends (delay slots, visible register windows, etc.) while the looseness of the CISC approach allowed more optimization to be done in the micromachine. I did not see that coming!

Agree on the point that the cores themselves have found a common local maximum.




But there wasn't ever a divergence in philosophy. It was a straight switch.

In the 70s, everyone designing an ISA was doing CISC. Then in the 80s, everyone suddenly switched to designing RISC ISAs, more or less overnight. There weren't any holdouts, nobody ever designed a new CISC ISA again.

The only reason it might seem like there was a divergence is that some CPU microarchitecture designers were allowed to design new ISAs to meet their needs, while others were stuck designing new microarchitectures for legacy CISC ISAs that were too entrenched to replace.

> For example some RISC experiments turn out to have painted their designs into dead ends

Which is kind of obvious in hindsight. The RISC philosophy somewhat encouraged exposing pipeline implementation details to the ISA, which is a great idea if you can design a fresh new ISA for each new CPU microarchitecture.

But those RISC ISAs became entrenched in turn, and microarchitects found themselves having to design for what were now legacy RISC ISAs, working around implementation details that no longer make sense.

Really the divergence was fresh ISAs vs legacy ISAs.

> while the looseness of the CISC approach allowed more optimization to be done in the micromachine.

I don't think this is actually an inherent advantage of CISC. It's simply a result of the sheer amount of R&D that AMD, Intel, and others poured into the problem of making fast microarchitectures for x86 CPUs.

If you threw the same amount of resources at any other legacy RISC ISA, you would probably get the same result.




