I have heard that IBM chose the 8088 because Intel sowed some FUD about Motorola's ability to supply the 68000 in sufficient quantity for IBM. The decades of insanity we could have saved if IBM had chosen differently...
-begin quote-
Intel had introduced the first x86 microprocessors in 1978. In 1981, IBM created its PC, and wanted Intel's x86 processors, but only under the condition that Intel also provide a second-source manufacturer for its patented x86 microprocessors. Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981 and formally executed in February 1982. The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other; that is, each party could "earn" the right to manufacture and sell a product developed by the other, if agreed to, by exchanging the manufacturing rights to a product of equivalent technical complexity. The technical information and licenses needed to make and sell a part would be exchanged for a royalty to the developing company. The 1982 agreement also extended the 1976 AMD–Intel cross-licensing agreement through 1995. The agreement included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice. The main result of the 1982 agreement was that AMD became a second-source manufacturer of Intel's x86 microprocessors and related chips, and Intel provided AMD with database tapes for its 8086, 80186, and 80286 chips.
-end quote-
How much of this was really Intel sowing FUD versus IBM demanding that a second source be available is probably lost to time.
The explanation I read back in the early '80s was that Intel's 8088, with its 8-bit data bus, required only 8 DRAM chips per bank, while the 68000's 16-bit bus required 16. For early microcomputers, DRAM was the dominant cost.
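The chip-count argument above is just bus-width arithmetic: with the 1-bit-wide DRAM parts typical of the era (16Kx1, 64Kx1), one chip supplies one bit of the bus, so a bank needs as many chips as the bus is wide. A minimal sketch, assuming 1-bit-wide parts (the chip width is the only assumption here):

```python
def min_dram_chips(data_bus_bits: int, chip_width_bits: int = 1) -> int:
    """Minimum DRAM chips to fill one memory bank: one chip per slice of the bus."""
    return data_bus_bits // chip_width_bits

# 8088: 8-bit external data bus -> 8 chips per bank
# 68000: 16-bit data bus -> 16 chips per bank
chips_8088 = min_dram_chips(8)
chips_68000 = min_dram_chips(16)
print(chips_8088, chips_68000)
```

The same arithmetic also means the 68000's minimum memory granularity was twice as large: you could not populate half a bank.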
Interesting point. I know that with the Nintendo 64, the use of RDRAM was similar (lower overall chip count, which also helped with board cost in that case).
I think a narrower data bus also might have allowed a smaller PCB with fewer layers. Higher layer-count PCBs used to be $$$$.
A friend worked on a layout where they did the board design first and then designed the processor pinout to match, so they could get away with a small two-layer board: ground on the bottom layer and all the signals on the top.
And before that, we had decades of insanity. Segmentation. Six different memory models. Extended memory managers. Extended vs. expanded memory. Those needless complications burned tons of person-years of developer time that could have been spent on useful things.
I've heard a story that, because of the 68000's hardware divide instruction, IBM engineers were concerned about interrupt latency (the instruction took many cycles to execute and couldn't be pre-empted by an interrupt).
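A rough back-of-the-envelope for that concern: one non-interruptible instruction adds its full worst-case execution time to the interrupt latency. The cycle count and clock rate below are assumptions for illustration (the 68000's DIVU is commonly cited at up to roughly 140 cycles):

```python
# Hedged sketch: latency contributed by one non-interruptible instruction.
DIVU_MAX_CYCLES = 140       # commonly cited worst case for 68000 DIVU (assumption)
CLOCK_HZ = 8_000_000        # an 8 MHz part, for illustration

latency_us = DIVU_MAX_CYCLES / CLOCK_HZ * 1e6
print(f"{latency_us:.1f} us")  # ~17.5 microseconds before the ISR can even start
```

Whether that mattered in practice depends on the peripheral: tens of microseconds is irrelevant for a keyboard, but tight for unbuffered serial or floppy controllers.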
It was at a magic price point, though; an extra ten dollars literally could have broken the whole product.