Every architecture that survives gets filled with extensions. That's not the issue.

Even the time period is not the issue per se, though it plays a part. The real dividing line is closer to 1985 than to 2000, in any case.

CISC is the issue (at least at the ISA level). Processors of that era were designed to be "friendly" to assembly programmers: redundant instructions with slightly different semantics, lots of addressing modes, few registers, and an illogical variable-length instruction encoding that the assembler hides away.
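To make "addressing modes" concrete, here's a rough C sketch; the instruction sequences in the comments are typical compiled forms for each style of ISA, not exact compiler output:

    /* The same C line, and how a CISC vs. a classic load-store RISC
       typically encodes the address computation (illustrative only). */
    int get(int *a, long i) {
        return a[i];
        /* x86-64 (CISC): one instruction; the scaled-index addressing
           mode does the whole address computation:
               mov eax, [rdi + rsi*4]
           MIPS-style RISC: base+offset addressing only, so the address
           is built with ordinary ALU instructions first:
               sll  t0, a1, 2       # i * 4
               addu t0, a0, t0      # a + i*4
               lw   v0, 0(t0)       # load */
    }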

RISC, on the other hand, starts from the premise that humans don't write much assembly anymore; they write high-level languages (yes, even C counts) that a compiler translates into machine code. An ISA is therefore a compiler target first and foremost, and it's better to spend the silicon on more registers and more concurrent/parallel work than on addressing modes and convenience instructions.
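A toy illustration of the register-count side of that trade-off (what actually spills depends on the compiler, of course):

    /* Four accumulators, a pointer, an index, and a bound are live in
       every iteration. The fewer architectural registers there are,
       the more likely the compiler has to spill some of them to the
       stack inside the loop. */
    long sum4(const long *a, long n) {
        long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        for (long i = 0; i + 4 <= n; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        return s0 + s1 + s2 + s3;  /* leftover elements ignored; it's a toy */
    }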

Obviously this battle has already been fought and won on a technical level by RISC; that's what the R in Arm originally stood for, and no modern CPU design even considers CISC. Heck, even x86 uses RISC-like micro-ops under the hood, and its 64-bit extension doubled the general-purpose register count from 8 to 16. But the CISC instruction decoder is still there, taking up some silicon.

The real question is: does that extra silicon actually matter much in practice anymore? It's dwarfed by everything else on a modern processor, especially the caches. The answer still seems to depend on context. An x86 server, desktop, or gaming console running at or near full load on CPU-intensive tasks still outperforms Arm (though by less and less each year), even though Arm handily takes the crown at lower power levels. Is that due to the ISA alone, or to other factors too?
