Nvidia Emerald supercomputer

Nvidia's GPU neural network tops Google

A year ago, Google constructed a “neural network” of servers that eventually learned how to recognize cats. On Tuesday, Nvidia said that a team of Stanford researchers had used Nvidia graphics processors to build another network, approximately 6.5 times more powerful, using just 16 servers.

The Stanford and Nvidia researchers showed off their work at the International Supercomputing Conference this week in Leipzig, Germany, where the list of the top 500 most powerful supercomputers was unveiled.

Neural networks attempt to re-create the brain’s structure by approximating not only the neurons within it, but also the way the brain learns. The overarching principle is to create a framework within which the network can teach itself. That process can lead in unexpected directions, such as Google’s network teaching itself to identify images of cats in the YouTube videos Google exposed it to. Japanese researchers have also developed a neural network that taught a robot how to pour a glass of water.
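
For readers curious what “teaching itself” looks like in practice, the sketch below is a deliberately tiny illustration in Python: a network with a single hidden layer adjusts its connection weights from examples until it learns a simple pattern. It is not the architecture either Google or Stanford used, only the core idea at toy scale.

```python
# Toy illustration only: a tiny neural network that "teaches itself" a pattern
# (XOR) from examples via gradient descent. The Google and Stanford systems are
# vastly larger and learn features from unlabeled data, but the core idea of
# adjusting connection weights from experience is the same.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target outputs

# One hidden layer of 4 "neurons"; weights start random and are learned.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the error.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # should approach [0, 1, 1, 0] as the network learns
```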

While Google’s neural network most likely attracted attention because of its whimsical results, neural networks are a serious endeavor. In March, Google acquired DNNresearch for its work on layered “deep neural networks,” which it plans to apply across a variety of services. Google did not say exactly which ones, but the technology could plausibly be applied to everything from translation to Google Now, the service that parses a user’s data and surfaces relevant information, such as when to leave in order to make the next appointment.

The EU has also dedicated a billion euros to the Human Brain Project, which aims to simulate the human brain in silicon. In March, the Obama administration proposed $100 million in funding for a similar, U.S.-led initiative to map how the brain’s neural circuits interact.

Google’s network used 16,000 microprocessor cores spread across its data centers; the company didn’t disclose how many servers that involved. (Most modern microprocessors contain four or eight cores.) Using just three servers’ worth of GPUs, the Nvidia and Stanford researchers first built a neural network that matched Google’s, then expanded it to one they claimed was 6.5 times larger, using 16 servers in all. Nvidia’s network comprised 11.2 billion “parameters,” which describe how the artificial neurons are organized, interact, and compute; Google said at the time that its network had forged a billion “connections.”
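
As a rough guide to what a “parameter” is, the snippet below counts the weights and biases in a small layered network. The layer sizes are hypothetical, chosen only to show the arithmetic; the article does not describe the Stanford network’s actual layout.

```python
# Rough illustration of what "parameters" means for a layered network.
# The layer sizes below are hypothetical, chosen only to show the arithmetic;
# they are not the Stanford network's real architecture.
layer_sizes = [10_000, 20_000, 20_000, 10_000]  # neurons per layer (hypothetical)

params = 0
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    params += n_in * n_out   # one weight per connection between adjacent layers
    params += n_out          # plus one bias term per neuron
print(f"{params:,} parameters")  # 800,050,000 for this made-up example
```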

The Nvidia K20X “Tesla” GPU. (Image: Nvidia)

The underlying message Nvidia wished to convey, however, is that GPUs aren’t just good for rendering the latest games. They can also serve as coprocessors, or accelerators, assisting high-performance computers by offloading specialized, repetitive tasks from the CPU. Just over 10 percent of the systems on the TOP500 list of supercomputers use a coprocessor, such as the Nvidia K20X found in the second most powerful supercomputer, Oak Ridge National Laboratory’s Titan. PC gamers can also buy a version of that chip for their own use.
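
In practice, offloading work to an accelerator looks something like the sketch below, which hands a large matrix multiplication to the GPU and copies the result back to the host. It assumes the third-party CuPy library and a CUDA-capable GPU, neither of which the article mentions; it is meant only to illustrate the coprocessor pattern, not any particular team’s code.

```python
# Sketch of the coprocessor idea: hand a large, repetitive computation
# (here, a big matrix multiply) to the GPU and read back the result.
# Assumes the third-party CuPy library and a CUDA-capable GPU are available.
import numpy as np
import cupy as cp

a = np.random.rand(4096, 4096).astype(np.float32)
b = np.random.rand(4096, 4096).astype(np.float32)

a_gpu = cp.asarray(a)      # copy the data into GPU memory
b_gpu = cp.asarray(b)
c_gpu = a_gpu @ b_gpu      # the GPU does the heavy arithmetic
c = cp.asnumpy(c_gpu)      # copy the result back to the host CPU
```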

Nvidia also noted that Nuance Communications runs its speech recognition algorithms on Nvidia GPUs.

“Delivering significantly higher levels of computational performance than CPUs, GPU accelerators bring large-scale neural network modeling to the masses,” said Sumit Gupta, general manager of the Tesla Accelerated Computing Business Unit at Nvidia, in a statement. “Any researcher or company can now use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.”

Nvidia also said that it had updated its CUDA parallel programming platform to support ARM processors, allowing future server makers to pair low-power ARM chips with Nvidia’s GPU accelerators. ARM-based server chips are due in late 2013 and early 2014 from Applied Micro, AMD, and a number of other chip suppliers.

Unfortunately, the Stanford team didn’t say what, if any, odd conclusions its neural networks had drawn. We’ll just have to wait and see what it learns.
