Nvidia GeForce GTX 1080 Founders Edition
PCWorld Rating
The Nvidia GeForce GTX 1080 is the first graphics card built using 16nm technology after GPUs stalled on 28nm for four long years. The performance and power efficiency gains are nothing short of astounding.
“It’s insane,” Nvidia CEO Jen-Hsun Huang proudly proclaimed at the GeForce GTX 1080’s reveal, holding the graphics card aloft. “The 1080 is insane. It’s almost irresponsible amounts of performance… the 1080 is the new king.”
He wasn’t joking. The long, desolate years of stalled GPU technology are over, and this beast is badass.
A giant leap for GPU-kind
As wondrous as it is, the outrageous performance leap of the GTX 1080 (starting at $599 MSRP, $699 Nvidia Founders Edition reviewed) doesn’t exactly come as a surprise.
Faltering process technology left graphics cards from both Nvidia and AMD stranded on the 28-nanometer transistor node for four long years—an almost unfathomable length of time in the lightning-fast world of modern technology. Plans to move to 20nm GPUs fell by the wayside due to technical woes. That means the 16nm Pascal GPUs beating inside the GTX 1080’s heart (and AMD’s forthcoming 14nm Polaris GPUs) represent a leap of two full process generations.
That’s nuts, and it alone could create a big theoretical jump in performance. But Nvidia didn’t stop there.
Pascal GPUs adopted the advanced FinFET “3D” transistor technology that made its first mainstream appearance in Intel’s Ivy Bridge computer processors, and the GTX 1080 is the first graphics card powered by GDDR5X memory, a supercharged new version of the GDDR5 memory that’s come standard in graphics cards for a few years now.
On top of all that, Nvidia invested significantly in the new Pascal architecture itself, particularly in tweaking efficiencies to increase clock speeds while simultaneously reducing power requirements. There are many more under-the-hood goodies we’ll get to later, too—including enhanced asynchronous compute features that should help Nvidia’s cards perform better in DirectX 12 titles and combat a major Radeon advantage.
Oh, and did I mention all the new features and performance-enhancing software landing alongside the GTX 1080?
Note: Because this is a major GPU advancement, we’ll spend more time than usual discussing under-the-hood details and tech specs. If that’s not your thing, jump to page two for discussion on the GTX 1080’s big new technical wonders and page three for its new consumer-facing features. Performance talk starts on page four.
Let’s kick things off with an Nvidia-supplied spec sheet comparison of the GTX 1080 vs. its predecessor, the GTX 980. (Side note: The mere fact that the company’s comparing the GTX 1080 directly against the GTX 980 is noteworthy. Usually, GPU makers compare new graphics cards against GPUs two generations back in review materials. The GTX 960 was compared against the GTX 660—not the GTX 760—in Nvidia’s official materials, for example.)
Here, some of the benefits of switching to 16nm jump out immediately. While the “GP104” Pascal GPU’s 314mm2 die size is considerably smaller than the 398mm2 die in the older GTX 980, it still manages to squeeze in 2 billion more transistors overall, as well as 25 percent more CUDA cores—2,560 in the GTX 1080, versus 2,048 in the GTX 980.
And pick up your jaw! The GTX 1080 indeed rocks an utterly ridonkulous 1,607MHz base clock and 1,733MHz (!!!!) boost clock—and those are just the stock speeds. We managed to crank it to over 2GHz on air without breaking a sweat or tinkering with the card’s voltage. Add it all up and the new graphics card blows its predecessor out of the water in both gaming performance and compute tasks, leaping from 4,981 GFLOPS in the GTX 980 all the way to 8,873 GFLOPS in the GTX 1080.
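Those GFLOPS figures fall straight out of the spec sheet: peak single-precision throughput is simply CUDA cores times boost clock times two, since a fused multiply-add counts as two floating-point operations. Here’s a quick sanity check in Python—note that the GTX 980’s 1,216MHz reference boost clock is our own assumption, since it isn’t listed in the table above:

```python
def peak_gflops(cuda_cores: int, boost_clock_mhz: float) -> float:
    """Theoretical FP32 throughput: cores * clock * 2 (FMA = 2 ops)."""
    return cuda_cores * boost_clock_mhz * 2 / 1000.0

gtx_1080 = peak_gflops(2560, 1733)  # ~8,873 GFLOPS, matching Nvidia's figure
gtx_980 = peak_gflops(2048, 1216)   # ~4,981 GFLOPS (assumed 1,216MHz boost)
```

The near-doubling comes almost equally from the extra cores and the dramatically higher clocks the 16nm process enables.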
Diving even deeper, each Pascal Streaming Multiprocessor (SM) features 128 CUDA cores, 256KB of register file capacity, a 96KB shared memory unit, 48KB of L1 cache, and eight texture units. Each SM is paired with a GP104 PolyMorph engine that handles vertex fetch, tessellation, viewport transformation, vertex attribute setup, perspective correction, and the intriguing new Simultaneous Multi-Projection technology (which we’ll get to later), according to Nvidia.
A group of five SM/PolyMorph engines with a dedicated raster engine forms a Graphics Processing Cluster, and there are four GPCs in the GTX 1080. The GPU also features eight 32-bit memory controllers for a 256-bit memory bus, with a total of 2,048KB L2 cache and 64 ROP units among them.
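The topology arithmetic checks out neatly—four GPCs of five SMs apiece, each SM carrying 128 CUDA cores, lands exactly on the headline core count, and the eight 32-bit controllers add up to the 256-bit bus. A minimal sketch:

```python
# GP104 topology as described: GPCs -> SMs -> CUDA cores.
gpcs = 4
sms_per_gpc = 5
cuda_cores_per_sm = 128
total_cuda_cores = gpcs * sms_per_gpc * cuda_cores_per_sm  # 2,560

# Memory subsystem: eight 32-bit controllers form the bus.
memory_controllers = 8
bus_width_bits = memory_controllers * 32  # 256-bit memory bus
```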
That segues nicely into another technological advance in Nvidia’s card: the memory. Despite rocking a 256-bit bus the same size as its predecessor’s, the GTX 1080 pushes overall memory bandwidth all the way to 320GBps, up from 224GBps in the GTX 980. That’s thanks to the 8GB of cutting-edge Micron GDDR5X memory inside, which runs at a blistering 10Gbps—a full 3Gbps faster than the GTX 980’s already speedy memory. How fast is that, really? Nvidia’s GTX 1080 whitepaper sums it up:
“To put that speed of signaling in context, consider that light travels only about an inch in a 100 picosecond time interval. And the GDDR5X IO circuit has less than half that time available to sample a bit as it arrives, or the data will be lost as the bus transitions to a new set of values.”
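Both the 320GBps figure and the whitepaper’s light-travel analogy check out arithmetically. A quick sketch (using the speed of light in a vacuum; a signal in copper traces actually propagates somewhat slower, which only makes the timing tighter):

```python
# Memory bandwidth: per-pin data rate x bus width, converted bits -> bytes.
data_rate_gbps = 10                 # GDDR5X, per pin
bus_width_bits = 256
bandwidth_gb_per_s = data_rate_gbps * bus_width_bits / 8  # 320 GB/s

# At 10Gbps each bit occupies a 100-picosecond window on the wire.
bit_period_s = 1 / (data_rate_gbps * 1e9)  # 1e-10 s = 100 ps
light_travel_inches = 299_792_458 * bit_period_s / 0.0254  # ~1.18 inches
sampling_window_s = bit_period_s / 2  # "less than half that time" to sample
```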
Implementing such speedy memory required Nvidia to redesign both the GPU circuit architecture as well as the board channel between the GPU and memory dies to exacting specifications—a process that will also benefit graphics cards equipped with standard GDDR5 memory, Nvidia says.
Pascal achieves even greater data transfer capability thanks to enhanced memory compression technology. Specifically, it builds on the delta color compression already found in today’s Maxwell-based graphics cards, which reduces memory bandwidth demands by grouping like colors together. Here’s how Nvidia’s whitepaper describes the technology:
“With delta color compression, the GPU calculates the differences between pixels in a block and stores the block as a set of reference pixels plus the delta values from the reference. If the deltas are small then only a few bits per pixel are needed. If the packed together result of reference values plus delta values is less than half the uncompressed storage size, then delta color compression succeeds and the data is stored at half size (2:1 compression).”
The new Pascal GPUs perform 2:1 delta color compression more effectively, and add 4:1 and 8:1 delta color compression modes for scenarios where the per-pixel color variation is minimal, such as a darkened night sky. Those are targets of opportunity, though, since the compression needs to be lossless. Gamers and developers would gripe if GeForce cards started screwing with image quality.
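The whitepaper’s description boils down to a simple feasibility test: pick a reference pixel, encode every other pixel as a signed delta, and keep the result only if it fits in half the raw size. Here’s a heavily simplified sketch of that 2:1 decision in Python—it treats pixels as single integers rather than per-channel values the way real hardware would, and `delta_compress_2to1` is our own illustrative name, not an Nvidia API:

```python
def delta_compress_2to1(block, bits_per_pixel=32):
    """Sketch of the 2:1 delta color compression test: store one
    reference pixel plus signed deltas, succeeding only if the packed
    result fits in half the uncompressed size (so it stays lossless)."""
    ref = block[0]
    deltas = [p - ref for p in block[1:]]
    # Bits needed to encode the largest delta losslessly (+1 sign bit).
    max_magnitude = max((abs(d) for d in deltas), default=0)
    delta_bits = max_magnitude.bit_length() + 1
    packed_bits = bits_per_pixel + delta_bits * len(deltas)
    uncompressed_bits = bits_per_pixel * len(block)
    if packed_bits <= uncompressed_bits // 2:
        return ref, deltas  # compressed representation: half size or better
    return None  # compression fails; block is stored uncompressed
```

A near-uniform block (that darkened night sky) packs easily, while a noisy block fails the half-size test and is simply stored raw—which is exactly what keeps the scheme lossless.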
Using color compression to reduce memory needs isn’t new at all—AMD’s Radeon GPUs also do it—but Nvidia says that between this new, more effective form of compression and GDDR5X’s benefits, the GTX 1080 offers 1.7x the total effective memory bandwidth of the GTX 980. That’s not shabby at all, and it takes some of the sting out of the card’s lack of revolutionary high-bandwidth memory, which debuted in AMD’s Radeon Fury cards, albeit in capacities limited to 4GB.
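That 1.7x figure decomposes neatly, too: the raw GDDR5X bandwidth bump accounts for roughly 1.43x on its own, leaving an implied gain of about 1.19x from the improved compression. (The split below is our own arithmetic from the numbers in this article, not an official Nvidia breakdown.)

```python
# Decomposing the claimed 1.7x effective bandwidth gain over the GTX 980.
raw_ratio = 320 / 224        # GDDR5X vs. GDDR5 bandwidth: ~1.43x
claimed_total = 1.7          # Nvidia's effective-bandwidth figure
implied_compression_gain = claimed_total / raw_ratio  # ~1.19x
```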
The Pascal GPU’s technological enhancements and leap to 16nm FinFET also make it incredibly power efficient. Despite firmly outpunching a Titan X, the GTX 1080 sips only 180 watts of power over a single 8-pin power connector. By comparison, the GTX 980 Ti sucks 250W through 6-pin and 8-pin connectors, while the 275W Fury X uses a pair of 8-pin connectors. The GTX 1080 delivers a lot more performance with a lot less power.
Next page: New features! Async compute, simultaneous multi-projection, and more
- Outrageous performance leap over GTX 980
- Hugely power efficient
- Attractive premium design
- Numerous new features
- Doesn't blow away Radeon cards in heavily AMD-optimized games