Nvidia Corp. is promoting the idea that the graphics processing units (GPUs) it makes can also operate, when needed, as additional CPUs (central processing units) to vastly increase computing power.
"It's (the GPU) just sitting there and a lot of people got this great idea, 'Gee, let's do some computing with this,'" said John Nickolls, director of architecture for Nvidia, in a presentation Wednesday at the Microprocessor Forum 2007 in San Jose, California.
When it is not performing graphics tasks, the GPU can work in parallel with the CPU, Nickolls said, delivering up to 200 billion floating-point operations per second (FLOPS), a standard measure of computing performance.
Nvidia released a beta version of software called CUDA (Compute Unified Device Architecture) in February, with a general release expected in the second half of this year. CUDA allows software developers to write programs that instruct the GPU to perform computing functions normally handled only by the CPU. Using the GPU as a general-purpose processor is not new, Nickolls said, but CUDA should make it easier for software developers to do.
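To make the idea concrete, here is a minimal sketch of the kind of program CUDA enables: the CPU sets up data, then hands an element-wise vector addition off to the GPU, where each of thousands of threads processes one element. The kernel name, array sizes, and launch configuration are illustrative assumptions, not details from Nickolls's presentation.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// GPU kernel: each thread computes one element of the output array.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                // 1M elements (illustrative size)
    size_t bytes = n * sizeof(float);

    // Host (CPU) buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device (GPU) buffers, filled by explicit copies from the host.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it on the CPU.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The notable point for developers is that the kernel is ordinary C with a `__global__` qualifier; CUDA hides the graphics-pipeline programming that earlier GPU-computing efforts required.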
CUDA works only with Nvidia's newest GPUs: the GeForce 8800, introduced in November 2006, along with the GeForce 8600 and the Quadro FX 4600 and 5600.
The GeForce GPU, for example, can act as a co-processor to the CPU, has 16KB of its own on-chip memory per multiprocessor and can keep more than 12,000 instruction threads in flight at once, he said. Groups of threads can also work together to accomplish one task.
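The point about groups of threads cooperating can be sketched as a block-wise sum: the threads in one block stage data in the fast on-chip shared memory, synchronize, and combine their values in a tree pattern. The kernel name and 256-thread block size are assumptions for illustration, not figures from the talk.

```cuda
#include <cuda_runtime.h>

// Each 256-thread block cooperatively sums 256 input values into one
// partial result, using the multiprocessor's on-chip shared memory.
__global__ void blockSum(const float *in, float *out, int n) {
    __shared__ float tile[256];           // shared by all threads in the block
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;

    // Stage one element per thread (zero-pad past the end of the input).
    tile[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                      // wait until the whole tile is loaded

    // Tree reduction: halve the number of active threads each step.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) tile[tid] += tile[tid + s];
        __syncthreads();
    }

    // Thread 0 writes the block's partial sum; the CPU (or another
    // kernel launch) combines the per-block results.
    if (tid == 0) out[blockIdx.x] = tile[0];
}
```

The `__shared__` array and `__syncthreads()` barrier are what let a group of threads act as a team on one task rather than as independent workers.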
"People buy GPUs to do graphics but it's just sitting there in your PC most of the time," said Nickolls. "It's a wonderful high-performance massively parallel computer so we're trying to open that up a bit."
Applications that could run on a GPU acting as a general-purpose processor include scientific, medical and financial workloads, and other tasks that demand high processing power.
Nvidia's approach differs from the one Advanced Micro Devices Inc. touted Tuesday at Microprocessor Forum. AMD, which acquired the graphics chip company ATI Technologies Inc. in 2006, is in the early stages of developing a combined CPU/GPU called Fusion, due sometime in 2009. Fusion is pitched more as a cost play, Nickolls said: combining the GPU and CPU on one chip reduces cost rather than boosting performance.
The two-day Microprocessor Forum was hosted by the research firm In-Stat.