Intel is taking a new direction in chip development as it looks to the future of artificial intelligence, with the company betting the technology will pervade applications and web services.
The company on Thursday said it is developing new chips to handle AI workloads, which will increasingly be part of its chip future. For now, the AI chips will be released as specialized chips or co-processors in computers, separate from the major product lines.
But over time, Intel could adapt and integrate the AI features into its mainstream server, IoT, and perhaps even PC chips. The AI features could be useful in servers, drones, robots, and autonomous cars. Intel is aggressively chasing these sectors as it tries to diversify outside the weakening PC market.
AI computing is currently dominated by GPUs from Nvidia and custom chips from companies like Google. Intel’s plan is to offer a wide range of alternate non-GPU chips for deep learning in a bid to accelerate its entry to AI. Intel lost an opportunity in the mobile market because it was a late entrant, and it doesn’t want to repeat that mistake in AI.
Intel lacks a potent GPU to chase AI but hopes the alternative chips will fill the gap. The company believes it does not need a GPU and doesn’t want to put all its eggs in one basket, like Nvidia has.
Intel is developing a monster AI chip code-named Knights Mill targeted at deep learning, and it will be part of the Xeon Phi chip family. The company has shared few details about the chip, but it’ll be four times faster in deep learning tasks than the current Xeon Phi chip code-named Knights Landing, said Jason Waxman, corporate vice president in the Data Center Group at Intel.
Knights Mill will ship next year, and the timing provides a snapshot of how urgently Intel is trying to speed up its entry into the AI space. There was a four-year gap between the release of its two previous Xeon Phi processors.
Knights Mill will have several unique features compared to other chips Intel has developed. Unlike Intel’s high-performance chips, which focus on precise calculations, Knights Mill will string together many fast, low-precision floating-point calculations to reach conclusions. Those conclusions get to the essence of deep learning: some, like the identification of an image, may not always be precise. But as the deep-learning model is trained on more data, its conclusions become more accurate.
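The low-precision idea can be sketched in a few lines of NumPy. This is an illustrative toy, not Intel's hardware design: the input features, class weights, and sizes below are all made up. The point is that dropping from 32-bit to 16-bit floats perturbs the raw scores slightly without changing the conclusion, i.e. which class wins.

```python
import numpy as np

# Fake input features and fake "trained" weights for three classes
# (values invented for illustration only).
features = np.linspace(-1.0, 1.0, 64, dtype=np.float32)
weights = np.stack([features, -features, 0.1 * features], axis=1)

# Score the input once in full precision, once in half precision.
scores_fp32 = features @ weights
scores_fp16 = features.astype(np.float16) @ weights.astype(np.float16)

# The raw scores drift slightly at lower precision...
drift = np.max(np.abs(scores_fp32 - scores_fp16.astype(np.float32)))
# ...but the conclusion -- which class scores highest -- is the same.
same_answer = np.argmax(scores_fp32) == np.argmax(scores_fp16)
print(drift, same_answer)
```

Half-precision arithmetic halves the memory traffic per value, which is why deep-learning hardware trades exactness for throughput this way.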
In the first half of next year, Intel will also release the first deep-learning hardware from its recent acquisition of Nervana Systems. That chip will be targeted mostly toward training: building the computer models used for deep learning. It could also be used for inferencing, in which a trained model is applied to new data to draw conclusions.
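The training/inferencing split can be illustrated with a deliberately tiny model. This is a sketch under the assumption that fitting a one-weight line stands in for fitting a deep network; real deep learning uses many-layered models, but the two phases work the same way.

```python
import numpy as np

# Training: adjust model parameters to fit known examples.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs + 1.0          # examples generated by the "true" rule y = 2x + 1

w, b = 0.0, 0.0              # the model starts knowing nothing
lr = 0.05                    # learning rate
for _ in range(2000):
    pred = w * xs + b
    w -= lr * 2 * np.mean((pred - ys) * xs)   # gradient of mean squared error
    b -= lr * 2 * np.mean(pred - ys)

# Inference: apply the trained model to input it has never seen.
print(round(w * 10.0 + b, 2))   # close to 21.0, the true answer for x = 10
```

Training is the expensive, iterative phase; inference is a single cheap pass, which is why the two workloads end up on different kinds of chips.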
The Nervana chip will be mainly targeted toward servers. Intel will initially release it as a card that can be plugged into a standard PCI-Express port. But over time, the hardware will be integrated closer to the processor, said Naveen Rao, vice president and general manager of artificial intelligence solutions at Intel and the founder of Nervana Systems.
The two new chips will add to a stash of AI chips already in Intel’s arsenal. Intel recently acquired Movidius, maker of computer vision chips used in Google Glass. The Movidius chips could be used in wearables, drones, and robots for object recognition and depth measurement.
Intel also sells FPGAs (field programmable gate arrays), circuits that can be reprogrammed to do specific tasks. Intel wants to put FPGAs in servers, autonomous cars, robots, and drones. Next year, Intel will ship the Deep Learning Inference Accelerator, an FPGA that competes with inferencing chips like Google’s Tensor Processing Unit.
Intel’s urgency is sparked by a surge of interest in AI, a technology still in its infancy. Digital information is pouring in from sensors, and AI is an emerging technique for making sense of that data.
Large companies like Google, Facebook, and Amazon are deploying software and hardware to make sense of those large amounts of information. One AI example is Microsoft’s Cortana, which applies algorithms running on FPGAs to recognize speech.
The effectiveness of these systems is heavily defined by the software stack used to build and train deep-learning models. The software stack acquired from Nervana will serve as Intel’s parallel-programming framework for deep learning. The open-source framework will compete with popular options like Caffe, Torch, Google’s TensorFlow, and Microsoft’s CNTK (Cognitive Toolkit).
But will all these AI chips cause confusion for customers? The more the better, Waxman said; Intel wants to provide customers with a wide range of alternatives. Some chips are better at specific tasks; FPGAs, for example, are better suited to inferencing tasks like recognizing cats or dogs in images, Waxman said.
It’s important that Intel move quickly to get a piece of the AI market, said Jim McGregor, principal analyst at Tirias Research.
By throwing many AI chips into the market, Intel wants to see which one will stick, he said. “It’s good Intel’s getting out there with multiple solutions. But Intel doesn’t have any advantage over anyone else,” McGregor said.
But there are risks. Intel bought Nervana Systems for its software stack, and any attempt to lock customers to those tools won’t be accepted by the industry, McGregor said. Nervana’s tools are open source but designed for Intel’s chips, and competing frameworks like Caffe are gaining in popularity.
“It’s a questionable strategy. Intel’s done this for the past decade, and they tried to push everyone down their path,” McGregor said.
But deep learning is in its infancy. It’ll take a long time to perfect computational techniques for deep learning, and new types of hardware like quantum computers and brain-mimicking chips could alter the landscape.
“We’re still learning how to learn,” McGregor said.