On Tuesday, Nvidia announced that it would support x86 processors as a target for CUDA applications. This means that apps currently written against Nvidia’s GPU line for compute workloads will be able to run on standard x86 CPUs, with no GPU needed.
Today, I spoke with Mathew Colgrove of The Portland Group, which specializes in compilers for high-performance, parallel computing applications. A subsidiary of STMicroelectronics, The Portland Group focuses mostly on a version of Fortran designed for multiprocessor systems. Fortran is still heavily used in scientific computing, and it leans toward numerical, procedural computation — unlike C or C++, it isn’t meant to be a general-purpose language.
Colgrove noted that the initial version of the x86 CUDA Fortran compiler would focus on multiprocessor AMD and Intel CPUs, taking advantage of the latest versions of SSE for floating point. He also said that working simulations of versions of the compiler supporting Intel AVX (Advanced Vector Extensions) were running in the lab. So when Intel ships its Sandy Bridge CPUs with AVX support, parallel applications should see a substantial performance boost.
I also asked Colgrove whether AMD GPUs could be targets for CUDA applications, but he replied that they had no plans to support AMD’s GPU line. “We’d love to be able to support AMD GPUs, but we’d need a lot of support from AMD, something that hasn’t been forthcoming.”
Supporting both Nvidia GPUs and x86 processors will eventually allow application developers to build software as a single binary that can run on mixed systems (x86 + GPU) and on systems that lack GPU hardware.
It’s interesting that Nvidia would lend a hand to this effort. While CUDA apps will likely run more slowly on x86, CUDA’s proprietary nature has been something of a competitive advantage for the GPU-oriented company. Given emerging standards for compute APIs, such as OpenCL and Microsoft’s DirectCompute, perhaps Nvidia felt it needed to support other processor technologies to avoid being left behind.
More GPU Tech Conference coverage from PCWorld’s GeekTech blog…