Intel’s chip arsenal appears to have some glaring weaknesses. One of them is the lack of a high-end graphics processor, which is important for gaming, virtual reality and machine learning.
However, the company does have powerful alternatives: two monster chips it can field against GPUs and rival processors in machine learning and supercomputing, both growing priorities for the company.
In 2018, Intel will likely release a faster and more power-efficient Xeon Phi, a supercomputing chip that is already used in some of the world’s fastest computers. Intel is also looking beyond CPUs to FPGAs (field programmable gate arrays), which can outpace CPUs at specific workloads.
The next Xeon Phi will be faster and more versatile than the current version, code-named Knights Landing, which has up to 72 cores, said Charles Wuischpard, vice president of the Data Center Group at Intel.
Xeon Phi chips can act as primary CPUs or co-processors in servers and supercomputers. That gives them an advantage over rival megachips like GPUs, which can only serve as co-processors alongside CPUs from companies like IBM, ARM, AMD or Intel.
Four Knights Landing chips were announced at the International Supercomputing Conference this week. Knights Landing will be used in supercomputers like the 18-petaflop Stampede 2 at the Texas Advanced Computing Center in Austin, Texas, which will go live next year.
The FPGAs and supercomputing chips are important as Intel reduces its reliance on PCs to focus on data centers, communications and the Internet of things. Ironically, Intel tried to build its own full-fledged GPU called Larrabee, which was scrapped in 2009, and Xeon Phi emerged as a byproduct.
Beyond supercomputing, Xeon Phi will expand to data-center workloads that need a lot of horsepower, like machine learning, telecommunications and the Internet of things, Wuischpard said.
Knights Landing is already being tested in machine learning, Wuischpard said, claiming it is delivering better performance than GPUs. Presumably, the next Xeon Phi chip will push machine-learning performance further still.
Intel’s Xeon E-series server chips — which have up to 24 cores — are already being used in machine learning, but Xeon Phi will be a more powerful alternative. Right now, Nvidia’s GPUs are driving machine learning, and Google has also built a custom chip called TPU (Tensor Processing Unit) specifically for machine learning.
Moreover, the upcoming Xeon Phi could be paired with cutting-edge storage and memory like 3D XPoint, which Intel claims can be 10 times denser than DRAM, and 1,000 times faster and more durable than flash storage.
A new interconnect called Omni-Path could deliver higher bandwidth and speed up communications between CPUs, memory, storage and other processors.
Intel is providing a blueprint so systems with these new features can be easily deployed, Wuischpard said.
Intel also has FPGAs, a technology it is placing bets on as a speedier alternative to CPUs. It acquired the technology through its $16.7 billion purchase of Altera last year. FPGAs are used by Microsoft to deliver quicker Bing results, and by Baidu to speed up image search results.
FPGAs have the unique trait of being reprogrammable, and could take on specific machine-learning tasks like object recognition. FPGAs could emulate graphics functions, but aren’t as versatile as GPUs. FPGAs are limited to specific applications programmed into the chip, and are also known to be power hungry.
Intel could also plug FPGAs into the already-speedy Xeon Phi chips. The company is pairing FPGAs with Xeon server processors in a multichip package, and hopes to eventually integrate FPGA circuitry directly into its processors.