
Not the only pain point. It made register assignment harder for the compiler.

Fundamentally you load from memory in generated code by adding two numbers together (e.g. the struct address and field offset, array address and index...). Motorola figured that you could just pick one number from each of two sets and thus save two bits (i.e. the source registers could be encoded with 3 bits instead of 4) in the encoding for the instruction.

As far as CISC tricks go, it wasn't too bad. But it has aged poorly: no one would design an ISA like that today, while Intel's elaborate addressing modes introduced with the 386 are producing code size and cache efficiency benefits to this very day.




Another consideration that's just as important is physical design. Dividing one large register file into two smaller ones with fewer ports should substantially decrease the overall size and power usage of the register files, and probably the bypass network too.

I've never worked on compiler design. Does it really make things that much more difficult? I'd imagine the compiler would have a clear idea of whether a value is a memory address or not, and could simply put addresses in the address registers and data in the data registers, but it's likely I'm missing something.


[0] has two old (2008) Usenet posts by an AMD CPU designer about these tradeoffs.

[0] http://yarchive.net/comp/register_file_size.html


Most of the alternatives at the time had fewer and more specialised registers - the architecture was child's play to write compiler targets for compared to contemporary alternatives. My first compiler was for the M68k, and it forever made me hate dealing with x86.

I'm not convinced it aged poorly. It failed because Motorola didn't have the resources to keep up with Intel, though of course that could be down to the architecture making it harder.

But developments like [1] suggest that this was more a problem with Motorola/Freescale's ability to produce a sufficiently advanced design at the time - using the M68k instruction set (though they are also adding instructions), they're starting to beat ColdFire and PPC systems clocked far faster on various benchmarks. With the caveat that this of course also tests things like memory bus speeds etc. What's clear, in any case, is that we never got to see what kind of performance it's possible to squeeze out of the M68k architecture.

[1] http://www.apollo-core.com/


Didn't the 68020 add even more addressing modes, plus scaling? Comparing the 68000 to the 386 doesn't seem too fair.


Addressing modes on the 68020 are kind of crazy. In addition to some relatively straightforward improvements (scaling for the indexed mode, options for larger displacements on both the displacement mode and indexed mode) they also added something called "memory indirect" modes. These allowed you to dereference a pointer in memory in a single operation. In these modes you have a base register, a base displacement, an index register (with scale) and an outer displacement. The index register could be applied either to the base value or to the fetched "outer" value.


Yeah, that's exactly what I was thinking of, and why I had to reread the original post twice. The 68020+ was perhaps crazier than the VAX. The last two variants you mentioned were called preindexed and postindexed mode. I wrote a few hundred thousand lines of 68000 code, but much less for the 68020+. Those were the days!


For clarity: I mentioned the 386 addressing modes because fundamentally the ModRM encoding was designed to address the same code generation problem: efficiently encoding one instruction to compute base-plus-offset (plus-immediate, too) when addressing memory. This avoids having to compute an address first for what is one of the most common operations in application code. As it happened, Intel's trick was the better idea. Motorola's original register design was fine but not as good, and the '020 madness didn't survive contact with the RISC pipeline.


With the exception of index scaling (which, as already mentioned, was added in the 68020), ModR/M and SIB are a strict subset of the 68000 addressing modes. I don't see what this has to do with the address register/data register split though. The 386 only had 8 GPRs and only 7 of those could be used as a base or index register. The reason for the address/data split is to allow 16 registers without needing 4-bit register fields.

Apart from the overly complex memory indirect modes, I'm having a hard time seeing how the 386 ModR/M and SIB setup is superior to what was in the 68020: twice as many registers (though with usage restrictions), PC-relative addressing, and a cleaner encoding. The first two things have been fixed in x86-64 (and with generally fewer usage restrictions), but at the cost of making the encoding even worse.


> while Intel's elaborate addressing modes introduced with the 386 are producing code size and cache efficiency benefits to this very day.

At the expense of orthogonality, which I think is a great feature for a CPU to have and which the 68K (and 6809 and 6800) had in spades.


Hmm, weren't the 386 addressing modes pretty much orthogonal? (At least compared to what the 8086 and 80286 had.)


Yes, but the instruction set wasn't. Orthogonality in an instruction set basically means that once you know which basic instructions a processor supports and which addressing modes it supports, you can form all possible combinations and they will just work as expected, without gaps or strange exceptions. And if you look at the opcodes, you'll be able to make sense of them.



