Birth of a Standard: The Intel 8086 Microprocessor
Upon its release, Morse's creation hardly took the computing world by storm. The midrange personal-computer market was saturated.
In March 1979, Morse left Intel. Then a series of seemingly unremarkable events conspired to make the 8086 an industry standard.
A few weeks after Morse's departure, Intel released the 8088, which Morse calls "a castrated version of the 8086" because it hobbled the 8086's 16-bit capability: the chip kept the same 16-bit internals but used an 8-bit external data bus, sending each 16-bit word out in two 8-bit cycles. Since many peripheral chips and systems were still 8-bit, this made the 8088 compatible with them.
Two years later, IBM began work on the model 5150, the company's first PC to consist only of low-cost, off-the-shelf parts. It was a novel concept for IBM, which previously emphasized its proprietary technology to the exclusion of all others.
Obviously, an off-the-shelf system demanded an off-the-shelf microprocessor. But which one? IBM decided early on that its new machine required a 16-bit processor and narrowed the choice to three candidates: the Motorola 68000 (the powerful 16-bit processor that would later sit at the heart of the first Macintosh), the Intel 8086, and its "castrated" cousin, the Intel 8088.
According to David J. Bradley, an original member of the IBM development team, the company eliminated the Motorola chip from consideration because its engineers were far more familiar with Intel's processors and development tools.
IBM then had to choose between the 8086 and the 8088. Ultimately, the decision came down to the simple economics of reducing chip count. IBM selected the 8088, a decision that allowed the company to build cheaper machines because it could use fewer ROM modules and less RAM, Bradley says.
In a sense, though, it didn't matter which of the Intel chips IBM chose. Both were built on the same underlying 8086 architecture that Stephen Morse had designed.