First Look: Nvidia GeForce GTX 200 Series
Nvidia's not playing games. With today's introduction of the GeForce GTX 200 series, the company is giving a graphics processor a whole new role.
Every new GPU ushers in new levels of realism and computational power, but don't call the GeForce GTX 200 series cards simply "graphics cards." A little over ten years after games like Tomb Raider and GLQuake hit the scene, a new kind of GPU is being born. Nvidia has designed more than just a DirectX 10 board that makes games scream and Vista's Aero interface hum. It's a secondary processor. It's a physics calculator. And it's about time.
The cards will sell in two flavors. The first, the high-end GeForce GTX 280 with 240 processors and 1GB of frame-buffer memory, sells at a spit-take-worthy price of $649 starting June 17. (As expensive as that may sound -- and it is -- this is the consistent ceiling price for high-end consumer cards these days.) The more "mainstream" model, the $399 GeForce GTX 260, ships June 25 with 192 processors and an 896MB frame buffer.
What does that mean for these PCI Express 2.0 cards, which pack 1.4 billion transistors? Nvidia promises that the GTX 280 is 1.5 times faster than a high-end 8800- or 9800-series GPU.
We are still in the process of compiling numbers -- and tweaking a new series of GPU benchmarks -- but we had some hardware lying around and wanted to see for ourselves whether the GTX 200 cards could live up to the claims.
Our current testbed: an Intel QX9770 CPU on an Intel DX48BT2 motherboard with 4GB of DDR3 RAM and two Western Digital Raptor X hard drives, running Windows Vista Ultimate 32-bit edition. On that system, we saw EA's DirectX 10 game, Crysis, run 10 frames per second (fps) faster without anti-aliasing. With 4x anti-aliasing switched on, Crysis pulled further ahead -- running twice as fast at 1920 by 1200-pixel resolution.
It hardly comes as a surprise that Unreal Tournament 3 showed performance gains as well. At 1920 by 1200 and 2560 by 1600 resolutions, the GTX 280 card outpaced the 8800GT by about 24 frames per second.
GPU vs. CPU
The new GPU is the result of smart collaborations. Nvidia's engineering pool, now bolstered by the newly acquired AGEIA team, has created a plug-in card designed to amplify system performance in many ways.
The Compute Unified Device Architecture (CUDA) shows that Nvidia's GPUs are capable of much more than rendering death-dealing aliens. In February 2007, Nvidia released an SDK that lets 8800-series owners develop programs that put the GPU to general-purpose work. A quick visit to Nvidia's CUDA Zone reveals applications that do everything from complex financial calculations to mapping the human genome.
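To give a flavor of what programming against the CUDA SDK looks like, here is a minimal sketch in CUDA C -- a generic vector-addition example of our own, not code from Nvidia's samples. The kernel name, array sizes, and launch configuration are arbitrary choices for illustration:

```cuda
// Minimal CUDA C sketch: add two float arrays element-wise on the GPU.
// The __global__ function runs on the GPU; main() runs on the CPU and
// shuttles data across the PCI Express bus with cudaMemcpy.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; // global thread index
    if (i < n)                                     // guard against overrun
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    size_t bytes = n * sizeof(float);

    float ha[1024], hb[1024], hc[1024];
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Allocate device memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch 4 blocks of 256 threads -- one thread per array element.
    vecAdd<<<4, 256>>>(da, db, dc, n);

    // Copy the result back to the host and spot-check one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("hc[10] = %f\n", hc[10]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

The appeal for tasks like video encoding is the same as in this toy example: hundreds of lightweight threads each handle a small slice of the data in parallel, rather than one CPU core churning through it serially.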
Thanks to the SDK's release, others are creating applications that are a little less academic. A great example is Elemental Technologies' upcoming BadaBOOM Media Converter, a video encoder that runs entirely on an Nvidia GPU -- as opposed to just about every other encoder around, which is CPU-bound. Nvidia promises video encoding speeds at least twice as fast as a CPU-based encoder's.

Initial tests in our labs bore out the claim: a two-minute clip optimized for the iPod Touch (480 by 320-pixel resolution, AAC audio) took about 24 seconds. The same video, compressed with the CPU-based AVS Video Converter 5.5, took 56 seconds. That's impressive, no doubt, but it really is an apples-to-oranges comparison. For the fairest test, the provided BadaBOOM software (which, by the way, works amazingly well) would need a toggle to switch between GPU encoding and CPU encoding. We are currently running other lab tests for a better read on the difference between GPU and CPU computing, and we will update this story as soon as we have the final results.