Birth of a Standard: The Intel 8086 Microprocessor

Lackluster Release

Upon its release, Morse's creation hardly took the computing world by storm. The midrange personal-computer market was saturated with cookie-cutter business machines based on the Z80 and running CP/M, the OS du jour of the late 1970s. The 8086 first appeared in a few unremarkable PCs and terminals. It gained a bit of a foothold in the portable computer market (in the form of the 80C86). Eventually it found acceptance in the microcontroller and embedded-applications market, most notably in the NASA Space Shuttle program, which uses 8086 chips to control diagnostic tests on its solid-rocket boosters to this day. (The space agency buys electronic relics on eBay to scavenge for the processors.)

In March 1979, Morse left Intel. Then a series of seemingly unremarkable events conspired to make the 8086 an industry standard.

The 8088 chip was built on the same code as the 8086, and its inclusion in IBM's 5150 PC helped make the 8086's code an industry standard. (Photo courtesy of Intel.)
A few weeks after Morse's departure, Intel released the 8088, which Morse calls "a castrated version of the 8086" because it hobbled the 8086's 16-bit capability: internally it was the same 16-bit processor, but it talked to the outside world over an 8-bit data bus. Since many systems were still 8-bit, the 8088 sent out 16-bit data in two 8-bit cycles, making it compatible with 8-bit hardware.
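As a rough conceptual sketch of that two-cycle behavior (not a model of the 8088's actual bus-interface unit; the function name here is purely illustrative), a 16-bit word can be split into a low byte and a high byte and sent one byte at a time, low byte first, matching the little-endian byte order x86 processors use in memory:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: a 16-bit value crossing an 8-bit bus in two cycles. */
    static void transfer_16bit_over_8bit_bus(uint16_t word)
    {
        uint8_t low  = (uint8_t)(word & 0xFF);  /* first 8-bit cycle  */
        uint8_t high = (uint8_t)(word >> 8);    /* second 8-bit cycle */
        printf("cycle 1: 0x%02X\n", low);
        printf("cycle 2: 0x%02X\n", high);
    }

    int main(void)
    {
        transfer_16bit_over_8bit_bus(0x1234);   /* prints 0x34, then 0x12 */
        return 0;
    }

The trade-off is straightforward: each 16-bit transfer takes twice as many bus cycles, but the surrounding system can be built from cheaper 8-bit parts.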

Two years later, IBM began work on the model 5150, the company's first PC to consist only of low-cost, off-the-shelf parts. It was a novel concept for IBM, which previously emphasized its proprietary technology to the exclusion of all others.

Obviously, an off-the-shelf system demanded an off-the-shelf microprocessor. But which to choose? IBM decided early on that its new machine required a 16-bit processor and narrowed the choices to three candidates: the Motorola 68000 (the powerful 16-bit processor that would later power the first Macintosh), the Intel 8086, and its "castrated" cousin, the Intel 8088.

According to David J. Bradley, an original member of the IBM development team, the company eliminated the Motorola chip from consideration because IBM was more familiar and comfortable with Intel processors. Tipping the scales was the fact that Microsoft had a ready and working BASIC interpreter available for the 8086 and, since it shared the same base code, the 8088.

IBM then had to choose between the 8086 and the 8088. Ultimately, the decision came down to the simple economics of chip count: the 8088's 8-bit bus meant the machine needed fewer, cheaper support and memory chips. IBM selected the 8088, a decision that allowed the company to build cheaper machines because it could use fewer ROM modules and less RAM, Bradley says.

In a sense, though, it didn't matter which of the Intel chips IBM chose. Both were built on the same underlying 8086 code written by Stephen Morse.
