When introducing its monster 72-core Xeon Phi chip, Intel couldn’t help but take a swipe at graphics processors for being sluggish for some tasks.
Ironically, Xeon Phi is a byproduct of Larrabee, which was supposed to be Intel’s first major GPU but was abandoned in 2009 after multiple delays.
The swipe was a shot at Nvidia, whose GPUs are flourishing in gaming and machine learning. But Nvidia’s success has also raised questions about whether Intel should have been patient and stuck with Larrabee.
Nevertheless, Xeon Phi has been used successfully in supercomputing, and now Intel wants to challenge Nvidia’s GPUs by bringing the chip to machine learning.
The latest 72-core Xeon Phi 7290 is the company’s fastest chip to date. It will start shipping in September for $6,294, making it Intel’s most expensive processor. The company also announced three other Xeon Phi chips with 64 to 68 cores.
Xeon Phis are already being used in some of the world’s fastest computers. Some of the new chips started shipping months ago, but Intel announced the specifications and prices for the first time at the International Supercomputing Conference in Frankfurt, Germany, this week.
The new chips are packaged much like graphics cards and can serve as either primary processors or co-processors. In supercomputers and servers, they will likely act as co-processors alongside Xeon E5 chips, which will remain the primary CPUs.
The chips could also be installed as primary processors in workstations. Don’t expect to run the latest games on them, though: they are designed for scientific applications, pairing juiced-up Atom cores with vector processors.
The Xeon Phi chip package also integrates some of the latest technologies. It has 16GB of integrated stacked memory and supports up to 384GB of DDR4 memory in a system. The Xeon Phi 7290 chip package draws 245 watts of power, and the 72 cores operate at a clock speed of 1.5GHz.
Outside of supercomputers, there’s interest in using Xeon Phi in data centers for machine learning and artificial intelligence, said Charles Wuischpard, vice president of the Data Center Group at Intel.
Future Xeon Phis will push the chip further into those areas, Wuischpard said.
The chips are “faster and more scalable than GPUs” for machine learning models in servers, Wuischpard said.
Intel is also testing the new Xeon Phi chips for deep-learning systems. In addition to GPUs, Intel could also face competition in machine learning from Google, which has built its own chip called TPU (Tensor Processing Unit).
As Intel tries to break away from PCs, the chipmaker is also trying to link Xeon Phi chips to its latest technologies such as Optane memory, silicon photonics, FPGAs (field-programmable gate arrays), and the Omni-Path interconnect, Wuischpard said. Intel is placing its bets on the fast-growing data center business.
Some of the top server makers, including Hewlett Packard Enterprise, Dell, and Lenovo, will use the Xeon Phi chips in servers and supercomputers.