First Look: Nvidia GeForce GTX 200 Series
Nvidia's not playing games. With today's introduction of the GeForce GTX 200 series, the company is giving a graphics processor a whole new role.
Every new GPU ushers in new levels of realism and computational power, but don't call the GeForce GTX 200 Series simply "graphics cards." A little over ten years after games like Tomb Raider and GLQuake hit the scene, a new kind of GPU is being born. Nvidia has designed more than just a DirectX 10 board that makes games scream and Vista's Aero interface hum. It's a secondary processor. It's a physics calculator. And it's about time.
The cards will sell in two flavors. The first, a high-end GeForce GTX 280 with 240 processors and 1GB of frame buffer memory, sells at a spit-take-worthy price of $649 starting June 17. (As expensive as that may sound -- and it is -- this is the consistent ceiling price for high-end consumer cards these days.) The more "mainstream" model, the $399 GeForce GTX 260, ships June 25 with 192 processors and an 896MB frame buffer.
What does that mean for these PCI Express 2.0 cards, each packing 1.4 billion transistors? Nvidia promises that the GTX 280 is 1.5 times faster than a high-end 8800- or 9800-series GPU.
We are still in the process of compiling numbers - and tweaking a new series of GPU benchmarks - but we had some hardware lying around and wanted to see for ourselves if the GTX 200 cards could live up to the claims.
Our current testbed: an Intel QX9770 CPU on an Intel DX48BT2 motherboard with 4GB of DDR3 RAM and two Western Digital Raptor X hard drives. Running Windows Vista Ultimate 32-bit edition, we saw Crysis, EA's DirectX 10 game, run 10 frames per second (fps) faster without anti-aliasing. With 4x anti-aliasing switched on, Crysis pulled further ahead, running twice as fast at 1920-by-1200-pixel resolution.
It hardly comes as a surprise that Unreal Tournament 3 showed performance gains as well. At 1920-by-1200 and 2560-by-1600 resolutions, the GTX 280 outpaced the 8800GT by about 24 fps.
GPU vs. CPU
The new GPU is the result of smart collaborations. Nvidia's engineering pool, now bolstered by the newly acquired AGEIA team, has created a plug-in card designed to amplify system performance in many ways.
The Compute Unified Device Architecture (CUDA) shows that Nvidia's GPU is capable of much more than rendering death-dealing aliens. In February 2007, Nvidia released an SDK that lets 8800-series owners develop programs that push the GPU. A quick visit to Nvidia's CUDA Zone reveals applications that do everything from complex financial calculations to mapping the human genome.
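For the curious, here's a rough taste of what CUDA code looks like -- a minimal, hypothetical sketch (the kernel and array names are our own, not from Nvidia's SDK samples) that adds two arrays with one GPU thread per element, the kind of data-parallel work those 240 processors are built for:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of the result --
// the data-parallel style CUDA encourages.
__global__ void addArrays(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float hostA[1024], hostB[1024], hostOut[1024];
    for (int i = 0; i < n; ++i) { hostA[i] = i; hostB[i] = 2.0f * i; }

    // Allocate GPU memory and copy the inputs over.
    float *devA, *devB, *devOut;
    cudaMalloc(&devA, bytes); cudaMalloc(&devB, bytes); cudaMalloc(&devOut, bytes);
    cudaMemcpy(devA, hostA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(devB, hostB, bytes, cudaMemcpyHostToDevice);

    // Launch 4 blocks of 256 threads: one thread per array element.
    addArrays<<<4, 256>>>(devA, devB, devOut, n);
    cudaMemcpy(hostOut, devOut, bytes, cudaMemcpyDeviceToHost);

    printf("out[10] = %g\n", hostOut[10]);
    cudaFree(devA); cudaFree(devB); cudaFree(devOut);
    return 0;
}
```

The point isn't the arithmetic; it's that a problem gets carved into thousands of tiny threads the GPU runs simultaneously -- which is why tasks like video encoding and genome mapping can see such dramatic speedups.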
Thanks to the SDK's release, others are creating applications that are a little less academic. A great example is Elemental Technologies' upcoming BadaBOOM Media Converter, a video encoder that runs entirely on an Nvidia GPU -- as opposed to just about every other encoder around, which is CPU-bound. Nvidia promises encoding speeds at least twice as fast as a CPU-based encoder. Initial tests in our labs seem to bear out the claim: a two-minute clip optimized for the iPod Touch (480-by-320-pixel resolution, AAC audio) took about 24 seconds. The same video, compressed with AVS Video Converter 5.5, took 56 seconds. That's impressive, no doubt, but it's an apples-to-oranges comparison: the fairest test would require the BadaBOOM software (which, by the way, works amazingly well) to offer a toggle between GPU and CPU encoding. We are running further lab tests for a better read on the difference between GPU and CPU computing, and will update this story as soon as we have final results.
The obvious implication of the AGEIA acquisition is PhysX. In the same way that a typical GPU offloads graphics chores, a physics processing unit handles other tasks for the CPU. Like, say, realistically ripping fabric, calculating how a flood of water affects an environment, or simply tracking blades of grass as an avatar walks through a field.
Sounds great, but AGEIA had a tough time selling folks on the notion of a separate physics add-in card (a PPU, if you will). It was a chicken-and-egg problem: how do you sell a $200-plus discrete card without games that show what the PPU can do, when developers won't build those games until enough people own the card?
Back in the '90s, GLQuake and Tomb Raider sold users on the need for graphics cards. Now graphics cards will sell users on the need for physics. By incorporating a PPU into the GTX 200 series, Nvidia solves half that old problem: with a wave of users buying these boards, developers get a much wider audience -- and a good excuse to add more realistic physics to their games, knowing the hardware exists.
Gaming remains the most obvious reason to buy these boards, though. But heed a few caveats before you run to a store (physical or virtual) for the best deals. Both cards are dual-slot designs (roughly the same size and girth as the 9800 GX2), so make certain you have a wide enough case (or don't mind cutting through the hard drive cage just to accommodate a graphics card). And while the new GPU promises energy efficiency for the tasks at hand -- according to Nvidia, 25 watts when idling in 2D mode, maxing out at 236 watts; in our test machine, average draw was 307 watts with the new card, while the 8800GT kept cooler at 238 watts -- you still need to make certain your power supply is rigged to juice these boards. The GTX 260 requires two six-pin power connectors; the GTX 280 needs one six-pin and one eight-pin connector. Crack your case open before buying!
Which should you buy? I have a sneaking suspicion that the $400 model will be the more cost-effective speed boost. No definitive answer yet, though, since we are still awaiting a GTX 260 card in our offices. This page will be updated as soon as it arrives.