Intel leaned into this with the 286's protected mode: segment registers held "selectors" that referenced OS-managed descriptors, so the OS could place your 64K chunks of code/data wherever it wanted within the 16 MB address space.
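
A rough sketch in C of the contrast (simplified, not actual hardware behavior; the descriptor layout, table contents, and function names here are made up for illustration): in real mode the physical address is segment*16 + offset, while in 286 protected mode the segment register holds a selector that indexes a descriptor table the OS controls.

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real mode: physical address = segment * 16 + offset (20 bits). */
    static uint32_t real_mode_addr(uint16_t segment, uint16_t offset) {
        return ((uint32_t)segment << 4) + offset;
    }

    /* 286 protected mode, heavily simplified: the selector's upper bits index
     * a descriptor table maintained by the OS; the descriptor supplies a
     * 24-bit base, so the OS decides where the 64K segment lives in the
     * 16 MB physical space. */
    struct descriptor {
        uint32_t base;   /* base address chosen by the OS */
        uint16_t limit;  /* segment size - 1, up to 64K  */
    };

    static uint32_t protected_mode_addr(const struct descriptor *table,
                                        uint16_t selector, uint16_t offset) {
        /* low 3 selector bits are flags (RPL/TI); the rest is the index */
        const struct descriptor *d = &table[selector >> 3];
        return d->base + offset;  /* real hardware also checks offset <= limit */
    }

    int main(void) {
        struct descriptor table[2] = {
            { 0x000000, 0xFFFF },  /* descriptor 0 (placeholder) */
            { 0x120000, 0xFFFF },  /* OS placed this 64K segment above 1 MB */
        };

        printf("real mode 1234:0010 -> %05X\n",
               (unsigned)real_mode_addr(0x1234, 0x0010));
        printf("prot mode 0008:0010 -> %06X\n",
               (unsigned)protected_mode_addr(table, 0x0008, 0x0010));
        return 0;
    }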

I think the existing base of 8080 software (including CP/M software) was on their minds. You see the same thing with the 386/486: they had the large flat virtual address space software developers wanted, but until the early '90s their commercial value was mostly running 8088 software really fast.




>I think the existing base of 8080 software (including CP/M software) was on their minds.

This is the generally accepted history. According to Wikipedia (https://en.wikipedia.org/wiki/Intel_8086#The_first_x86_desig...), the design was developed very quickly and was intentionally built as a straightforward 'extension' of the 8080/8085 chips into the 16-bit world:

-quote-

The first x86 design

The 8086 project started in May 1976 and was originally intended as a temporary substitute for the ambitious and delayed iAPX 432 project. ... Both the architecture and the physical chip were therefore developed rather quickly by a small group of people, and using the same basic microarchitecture elements and physical implementation techniques as employed for the slightly older 8085 (and for which the 8086 also would function as a continuation).

Marketed as source compatible, the 8086 was designed to allow assembly language for the 8008, 8080, or 8085 to be automatically converted into equivalent (suboptimal) 8086 source code, with little or no hand-editing. The programming model and instruction set is (loosely) based on the 8080 in order to make this possible. However, the 8086 design was expanded to support full 16-bit processing, instead of the fairly limited 16-bit capabilities of the 8080 and 8085.

-end quote-



