Intel's developing perceptual computing chips to help PCs hear, see, and feel

Intel's road to perceptual computing will be paved with its own silicon: its microprocessors, its graphics chips, and a new line of dedicated silicon that Intel will create for perceptual computing.

Mooly Eden, the president of Intel Israel and its 8,000 employees, relaxes by also directly managing the perceptual computing business at Intel. In that role, Eden imagines a future where PCs communicate with users the same way two friends chat in a cafe: with sight, voice, and gestures all contributing to the conversation.

Intel took a step further into that future with the announcement that several OEMs, including Asus, Dell, Hewlett-Packard, and Lenovo, would integrate a 3D depth-sensing camera into the bezels of their displays. Intel also launched the Intel Capital Experiences and Perceptual Computing Fund in June, dedicated to investing $100 million in the technology over the next two to three years and winning partners to the cause. Touch, speech, and other interfaces need to be natural, intuitive, and immersive, Eden said.

Perceptual computing in action

On Wednesday, Eden demonstrated how the technology worked: cameras sensed the positions of his fingers and spun virtual lightning between them. A user's hands could tickle a virtual child. In another demo, Eden played the game Portal 2 by waving his hands and verbally commanding the computer to drop a Companion Cube. Finally, Eden demonstrated a new version of Nuance's virtual assistant, co-developed by Intel and Nuance and running on top of Intel's Atom and Core silicon.

Eden also showed off the new camera module: at about three inches, it is a fraction of the size of Microsoft's Kinect depth camera. (Eden declined to let the module be photographed.) But, he said, Intel wouldn't be able to enable it with the main microprocessor alone.

"This technology is so taxing that some of it will run on the general-purpose CPU, some on the GPU, and some of that on a dedicated chip," governing object recognition, voice recognition, and so forth, he said.

Some of the technologies can run in the cloud. But others can't, such as the real-time translation of Hebrew into English and vice versa, where the latency of sending data to and from a server is just "annoying," Eden said.

From prototype to perceptual computing

Eden plays Valve's Portal 2, controlled by gesture and voice.

Eden said that Intel is already engaged in development work to make its vision a reality, but that the work would take several years. Intel would pursue a hybrid strategy, he said: its own development work, acquiring talent, and licensing technology it did not own.

One example of that is Nuance's Dragon Assistant technology, which allows users to "talk" to their PC in much the same way as Apple's Siri or Google Now. In mid-August, Lenovo, Asus, Acer, and Toshiba agreed to preload Dragon Assistant on a total of 12 ultrabook models. The new assistant will run on both Intel's Atom and Core processors.

Computer vision is in a sense the opposite of a traditional 3D game, where the computer tells the graphics chip to render polygons in a certain way, noted Patrick Moorhead, principal analyst at Moor Insights & Strategy. Instead, stereopsis, as it's known, translates the real world into a virtual landscape that the machine can understand. Both require massive amounts of computing power. The difference between the PC and the smartphone is that the PC has more CPU horsepower and is often plugged in, he said.
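
To illustrate the point, here is a minimal sketch of stereopsis in code, using OpenCV's standard block-matching algorithm. The file names and calibration values are placeholders, and nothing here implies this is the algorithm Intel's module uses; it simply shows how a pair of 2D images becomes a depth map the machine can reason about.

```python
# A minimal stereopsis sketch: recover depth from a rectified image pair.
# File names and calibration values are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching compares small windows along epipolar lines; disparity
# (the horizontal shift between views) is inversely proportional to depth.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# depth = focal_length * baseline / disparity; the values are hypothetical.
focal_px, baseline_m = 700.0, 0.06
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```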

It's likely that the standalone chip is a DSP or other controller that's always on and always listening, a way to save power when the PC is on the move, Moorhead said. That's the same approach the Moto X smartphone uses to always "listen" for a user's commands.
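
A minimal sketch of that always-listening pattern: a cheap energy gate stands in for the always-on DSP, and the expensive recognizer (a placeholder function here) wakes only when the gate trips. All names and thresholds are illustrative assumptions, not Intel's or Motorola's actual design.

```python
# Always-on listening sketch: a cheap gate runs on every audio frame;
# the power-hungry recognizer wakes only when the gate trips.
import numpy as np

FRAME_SIZE = 320         # 20 ms of 16 kHz mono audio
ENERGY_THRESHOLD = 0.02  # hypothetical RMS gate

def frame_energy(frame: np.ndarray) -> float:
    """Root-mean-square energy: cheap enough for an always-on core."""
    return float(np.sqrt(np.mean(frame ** 2)))

def run_full_recognizer(frame: np.ndarray) -> None:
    """Stand-in for the heavyweight speech pipeline on the CPU/GPU."""
    print("waking main recognizer, energy =", frame_energy(frame))

def listen(frames) -> None:
    for frame in frames:
        if frame_energy(frame) > ENERGY_THRESHOLD:
            run_full_recognizer(frame)

# Synthetic audio: mostly near-silence, one loud burst that trips the gate.
rng = np.random.default_rng(0)
quiet = [rng.normal(0.0, 0.005, FRAME_SIZE) for _ in range(50)]
loud = [rng.normal(0.0, 0.1, FRAME_SIZE)]
listen(quiet + loud + quiet)
```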

In any event, "perceptual computing is disruptive," Eden said.
