I'd much rather have a security-oriented processor with tagged bounds checked pointers... such a facility could also make garbage collecting more efficient, but what it could do for security would be wonderful.
The 8800 (iAPX 432) provided bounds checking too. Object references weren't pointers but "Access Descriptors" that included permissions, so you couldn't access anything out of bounds. I should emphasize that this isn't like a JVM: it was implemented at the machine-instruction level. It's hard to convey how strange and radical this processor was.
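To make that concrete, here's a rough sketch in C of the check an access-descriptor-style reference enforces. This is purely illustrative (hypothetical names), not the 432's real descriptor layout; on the 432 the equivalent checks were done by hardware/microcode on every object access rather than by code like this:

    /* Illustrative only: a software stand-in for a capability-style
       "access descriptor" -- base, length, and permission bits, with
       every access checked against them. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    enum { PERM_READ = 1, PERM_WRITE = 2 };

    typedef struct {
        uint8_t *base;   /* start of the referenced object */
        size_t   length; /* size of the object in bytes    */
        unsigned rights; /* PERM_READ / PERM_WRITE bits    */
    } access_descriptor;

    /* Checked read: fails if the offset is out of bounds or read
       permission is missing (the 432 would raise a fault instead). */
    static int ad_read_byte(const access_descriptor *ad, size_t offset,
                            uint8_t *out)
    {
        if (!(ad->rights & PERM_READ) || offset >= ad->length)
            return -1;
        *out = ad->base[offset];
        return 0;
    }

    int main(void)
    {
        uint8_t buf[16] = {0};
        access_descriptor ad = { buf, sizeof buf, PERM_READ };
        uint8_t v;

        printf("in bounds:     %d\n", ad_read_byte(&ad, 4, &v));  /* 0  */
        printf("out of bounds: %d\n", ad_read_byte(&ad, 64, &v)); /* -1 */
        return 0;
    }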
Every time I hear about abandoned "improved" tech like this, I always wonder why the idea was shelved. The link kinda handwaves it away. Any good resources on why the idea didn't take off?
You'd probably have to find an Intel insider to know the real truth.
But the most likely explanation is that the plan was too grand for the time (note: this was 197x to ~1984; the IBM PC with its 8088 had only been on the market for about three years) and therefore much too expensive for any market to bear.
Additionally, the iAPX 432 was being worked on at the same time that the IBM PC suddenly brought the x86 chips to significant popularity.
Combine pouring money into an architecture that was too big and grandiose for the integration technology of the time with a sudden influx of profit from the x86 line, and a likely reason emerges: Intel simply devoted its resources to the chip line that was suddenly producing those resources.
"Using the semiconductor technology of its day, Intel's engineers weren't able to translate the design into a very efficient first implementation. Along with the lack of optimization in a premature Ada compiler, this contributed to rather slow but expensive computer systems, performing typical benchmarks at roughly 1/4 the speed of the new 80286 chip at the same clock frequency (in early 1982).[7] This initial performance gap to the rather low-profile and low-priced 8086 line was probably the main reason why Intel's plan to replace the latter (later known as x86) with the iAPX 432 failed. Although engineers saw ways to improve a next generation design, the iAPX 432 capability architecture had now started to be regarded more as an implementation overhead rather than as the simplifying support it was intended to be."
That makes sense. I remember absolutely huge data sheets and reference manuals for the iAPX 432. So it's very possible that Intel didn't think hard about re-licensing the somewhat janky 0x86 design because they expected it to be a dead end.
Historically speaking, I'm not sure if Intel ever -wanted- to license x86.
The main reason that AMD (and others) manufactured x86 CPUs early on was that IBM had a 'second source' requirement; i.e., there had to be another vendor who could provide the same part.
So an AMD 286 was no different from an Intel 286.
By the time of the 386, IBM had relaxed/dropped the second-source requirement. Thus the Am386 isn't the same design as the i386 (and there was a court battle to try to keep the AMD part out of the market).
The Am386 was a reverse-engineered design, but the microcode was a 1:1 Intel copy :). There was no court battle over this chip. Intel was forced into arbitration due to the second-source agreements, and lost.
The court battle was over the 287 and later the Am486. AMD announced a clean-room design ... and then gave their clean-room engineers a copy of the 386 microcode :]
Possibly the fact that the 432 was so slow, and that the momentum was with the 8086 and its successors following the success of the IBM PC.
RISC, which in some senses is the polar opposite of the approach taken with the 432, also started to generate a lot of interest at about the same time.
There is an excellent paper by Robert Colwell exploring why the 432 was slow and how it could have been sped up:
> Every time I hear about abandoned "improved" tech like this, I always wonder why the idea was shelved. The link kinda handwaves it away. Any good resources on why the idea didn't take off?
My guess is it was something like the "AI Winter": a hyped idea fails in a high-profile way, so it gets shunned for some period of time because people are afraid of failing again in a similar way. It seems like some of the features of that processor led it to be a nonstarter for practical reasons:
> According to the New York Times, "the i432 ran 5 to 10 times more slowly than its competitor, the Motorola 68000".
Back then, personal computers didn't have much performance to spare.
IIRC, Intel has had several high-profile failures with radically new non-x86 CPU architectures. Only backwards-compatible, conservative evolutions of the x86 seem to get traction.
It probably didn't help performance. But there's some good news: ARM has added pretty much the same thing, called MTE (Memory Tagging Extension) [1], which stores a 4-bit tag in the unused upper bits of each pointer and checks it against a matching tag on the memory being accessed.
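For a feel of what that buys you, here's a rough software emulation of the tagging idea in C (hypothetical names; assumes a 64-bit platform with unused upper pointer bits). Real MTE stores a 4-bit tag per 16-byte granule of memory and performs the comparison in hardware on every load/store:

    /* Toy illustration of pointer tagging, not ARM's MTE API: a 4-bit tag
       rides in the top byte of the pointer and must match the tag of the
       memory being accessed. */
    #include <stdint.h>
    #include <stdio.h>

    #define TAG_SHIFT 56                        /* tag lives in the top byte  */
    #define ADDR_MASK ((1ULL << TAG_SHIFT) - 1) /* low 56 bits = real address */

    static uint64_t set_tag(const void *p, uint8_t tag)
    {
        return ((uint64_t)(uintptr_t)p & ADDR_MASK)
             | ((uint64_t)(tag & 0xF) << TAG_SHIFT);
    }

    /* In this toy the "memory tag" is passed in by hand; a real system
       keeps it in separate tag storage keyed by address. */
    static int checked_load(uint64_t tagged_ptr, uint8_t memory_tag,
                            uint8_t *out)
    {
        uint8_t ptr_tag = (uint8_t)((tagged_ptr >> TAG_SHIFT) & 0xF);
        if (ptr_tag != memory_tag)
            return -1;                  /* MTE would raise a tag-check fault */
        *out = *(const uint8_t *)(uintptr_t)(tagged_ptr & ADDR_MASK);
        return 0;
    }

    int main(void)
    {
        uint8_t byte = 42, v;
        uint64_t p = set_tag(&byte, 0x3);       /* pointer carries tag 0x3 */

        printf("matching tag:   %d\n", checked_load(p, 0x3, &v)); /*  0 */
        printf("mismatched tag: %d\n", checked_load(p, 0x7, &v)); /* -1 */
        return 0;
    }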
My theory is evolution. The "more advanced" idea has higher up-front costs, so the cheaper, worse idea gets used and iterated upon until it's so polished that it's better than the infant stages of the better idea (which has the potential to be even more polished with investment, but never will be). So everybody uses the worse one, and now we're stuck with it...
Usually it's one or more of several common factors: cost to make, cost to buy, cost to integrate, slower/inefficient performance, loss of compatibility, some unintended design defect, poor or over-ambitious marketing, or some assumption that compatible/optimized/"unobtanium"/Carnot-efficient software will solve Issue X for us. See Itanium, DEC Alpha, Transmeta, Consumer Power/PowerPC, OS/2, Commodore, PCjr, ETX motherboard standard, Java Processors, etc.