Wow, what a massive performance difference leaping ahead two technological generations can make.
If Nvidia had released the GeForce GTX 1070 (starting at $380 MSRP, $450 Nvidia Founders Edition reviewed) just last week it would’ve been the most powerful single-GPU graphics card to ever grace the earth, edging out the awe-inspiring—and $1,000—GTX Titan X. In the wake of the launch of the “new king” GTX 1080 (starting at $600 MSRP on Newegg), however, Nvidia’s new card can’t lay claim to the performance crown. Still, that fact doesn’t diminish this card’s stunning achievement.
The GeForce GTX 1070 offers Titan-level, no-compromises graphics oomph for a mere $380. The GTX 1080 may be the new king of graphics cards, but this powerful new prince’s blend of price and performance will no doubt make it the people’s champion—though it’s not quite the steal that the GTX 970 was.
Let’s dig in!
Programming note: If you want to cut straight to the chase, jump to the final page of this review for our bottom line on the GTX 1070.
The GeForce GTX 1070 under the hood
The GeForce GTX 1070’s power stems from the same source as the GTX 1080’s: Nvidia’s Pascal GPU. Graphics cards from both Nvidia and AMD have been stuck on the same underlying 28nm technology for four long years, but Pascal—and forthcoming Radeon cards based on AMD’s Polaris GPUs—finally break the appalling trend, leaping forward two full process generations, shrinking down to 16nm transistors and integrating 3D “FinFET” technology as well. For all the nitty-gritty details, check out the first page of our GTX 1080 review, but in a nutshell, Pascal GPUs represent a huge step forward in performance and power efficiency.
The GTX 1070 features the same “GP104” Pascal GPU as the bigger, badder brother, but with five of its 20 Streaming Multiprocessor units disabled. That leaves it with 1,920 CUDA cores and 120 texture processing units, but Nvidia left the GPU’s full complement of 64 render output units (ROPs) intact. The GTX 1070’s clock speeds have also been nerfed a bit, down to a 1,506MHz base clock and 1,683MHz boost clock, but that’s still far superior to the previous-generation graphics cards—the GTX 980 topped out at 1,216MHz boost clocks, while AMD’s Radeon R9 390X topped out at 1,050MHz.
You can see the GTX 1070’s full specification breakdown in the chart above.
Another subtle but key difference from the GTX 1080 is the GTX 1070’s memory. While the GTX 1080 adopted cutting-edge GDDR5X memory clocked at a blistering 10Gbps, the GTX 1070 employs 8GB of traditional GDDR5 RAM over a 256-bit bus instead, clocked at 8Gbps for an effective memory bandwidth of 256GBps.
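If you want to sanity-check that figure, effective memory bandwidth is simply the bus width (in bytes) multiplied by the per-pin data rate. A quick sketch using the standard formula and the specs quoted above:

```python
def memory_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Effective bandwidth in GB/s: (bus width in bits / 8 bits per byte) * data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# GTX 1070: 256-bit bus at 8Gbps GDDR5
print(memory_bandwidth_gbps(256, 8.0))   # → 256.0 GB/s

# GTX 1080: same 256-bit bus, but 10Gbps GDDR5X
print(memory_bandwidth_gbps(256, 10.0))  # → 320.0 GB/s
```

The same arithmetic shows exactly where the GTX 1080’s GDDR5X advantage comes from: identical bus, faster pins.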
Don’t be disappointed by the lack of GDDR5X or the high-bandwidth memory found in AMD’s Radeon Fury cards, though: The GTX 1070’s 8GB of memory is more than enough for today’s games, even at 4K resolution, and the Pascal GPU’s new lossless delta color compression tricks (which, again, we covered in the GTX 1080 write-up) make it even more effective.
From the outside, the GeForce GTX 1070 Founders Edition mirrors the GTX 1080 Founders Edition, with an aggressive, polygon-inspired aluminum shroud, “GEFORCE” spelled out in illuminated green letters on the edge, a blower-style fan that exhausts hot air through the I/O plate on the rear of your system, and a low-profile backplate with a removable portion to improve airflow when you’re running a multi-GPU SLI setup. (Be sure to check out our dives into what “Nvidia Founders Edition” means and the big SLI changes in the GeForce 10-series if you’re curious about either topic.)
You’ll also find the same single HDMI 2.0b connection, a single dual-link DVI-D connector, and three full-sized DisplayPorts that are DP 1.2 certified, but ready for DP 1.3 and 1.4. That last tidbit means the card will be able to power 4K monitors running at 120Hz, 5K displays at 60Hz, and even 8K displays at 60Hz—though you’ll need a pair of cables for that last scenario.
There are some key differences between the two graphics cards, though. Rather than using advanced vapor chamber cooling, the GTX 1070 dissipates heat using a trio of copper heatpipes embedded in an aluminum heatsink. And while the card sports the same 8-pin power connector as the GTX 1080, the GTX 1070 sips only 150W of power, rather than 180W. But the truly astonishing thing about that number is how much performance the GTX 1070 ekes out of that comparatively meager power draw; the similarly performing Titan X sucks 250W through 6-pin and 8-pin connectors, while the 275W Fury X uses a pair of 8-pin connectors.
Like we said: The move to 16nm FinFET technology is a potent jump, indeed.
Next page: GTX 1070 features
Nvidia GeForce GTX 1070 features
The GTX 1070 also benefits from all of the Pascal GPU’s new under-the-hood and software/hardware-derived features.
On the more hardware-centric side of things, that includes the Simultaneous Multi-Projection technology capable of supercharging performance in games and VR, as well as rendering images on multi-monitor setups more accurately. Pascal also packs new asynchronous computing tricks to improve VR and DirectX 12 performance, though it remains to be seen how effectively it counters the dedicated asynchronous shader hardware in Radeon graphics cards. (We detailed SMP and Pascal’s async abilities in depth on page two of our GTX 1080 review.)
The GTX 1070 also supports HDCP 2.2 for transmitting protected 4K content over HDMI, Microsoft’s PlayReady 3.0 DRM for streaming 4K content to PCs, and high dynamic range video technology, including HEVC HDR video encoding and decoding.
Nvidia’s new graphics card will also support the powerful Ansel “in-game 3D camera” when games begin supporting the supercharged screenshot tool later this year. Other notable Pascal additions include GPU Boost 3.0, which can scan the overclocking capabilities of your GPU at different voltages to create a custom overclocking profile tailored to your individual card, and Fast Sync, which uses frame buffering tricks to eliminate screen tearing with a microscopic latency hit in games where your GPU is pumping out hundreds of frames per second—far more than your monitor can actually display. That’s more for use in DirectX 9-based e-sports titles like League of Legends, Dota 2, and Counter-Strike: Global Offensive, though.
Want to know more about Ansel, Fast Sync, GPU Boost 3.0, or the GTX 1070’s HDR and streaming content support? We dig into all four technologies on page three of the GTX 1080 review.
But enough chit-chat! It’s time to put the GTX 1070 to the test.
Next page: Performance tests begin
Nvidia GeForce GTX 1070 performance benchmarks
We tested the GeForce GTX 1070 on PCWorld’s dedicated graphics card benchmark system, which was built to avoid potential bottlenecks in other parts of the machine and show true, unfettered graphics performance. Key highlights of the build:
- Intel’s Core i7-5960X with a Corsair Hydro Series H100i closed-loop water cooler, to eliminate any potential for CPU bottlenecks affecting graphical benchmarks
- An Asus X99 Deluxe motherboard
- Corsair’s Vengeance LPX DDR4 memory, Obsidian 750D full-tower case, and 1,200-watt AX1200i power supply
- A 480GB Intel 730 series SSD
- Windows 10 Pro
To see what the Nvidia GeForce GTX 1070 Founders Edition is truly made of, we compared it against several different cards. The $328 EVGA GTX 970 FTW was a no-brainer, along with the $325 Sapphire Nitro R9 390, as those are the GTX 1070’s direct previous-generation peers. Since Pascal’s 16nm FinFET leap represents a big performance boost, we also benchmarked the reference GTX 980, the $460 MSI Radeon 390X Gaming 8GB, and the $650 Radeon Fury X, as well as the $1,000 Titan X—the latter being a performance rival that Nvidia specifically called out during the GTX 1070’s reveal. Because the GTX 980 Ti’s performance closely mirrors the Titan X’s, we didn’t test that card.
Sadly, time constraints—this card is launching during Computex, one of the biggest PC industry trade shows of the year—prevented us from testing the GTX 1070 Founders Edition’s overclocking capabilities.
First up: Tom Clancy’s The Division, Ubisoft’s third-person shooter/RPG that mixes elements of Destiny and Gears of War. The game’s set in a gorgeous and gritty recreation of post-apocalyptic New York, running on Ubisoft’s new Snowdrop engine. Despite incorporating Nvidia Gameworks features—which we disabled during benchmarking to level the playing field—the game scales well across all hardware and isn’t part of Nvidia’s “The Way It’s Meant to be Played” lineup. AMD hardware’s slight lead at 4K resolution with everything cranked might be part of the reason why.
Here, Nvidia’s claims hold true. The $380 GTX 1070 indeed goes toe-to-toe with the $1,000 Titan X—though the leap in performance from the GTX 970 to the GTX 1070 isn’t as momentous as the performance leap from the GTX 980 to the GTX 1080. Where the GTX 1080 delivered frame rates roughly 70 percent higher than its direct predecessor, the GTX 1070 only offers a 23.5 percent bump at 1080p, a 53 percent bump at 1440p, and a 25 percent bump at 4K.
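The uplift figures quoted throughout this review are simple relative gains: (new ÷ old − 1) × 100. A quick sketch of that formula, using hypothetical frame rates for illustration only (not our measured results):

```python
def percent_gain(new_fps: float, old_fps: float) -> float:
    """Percentage frame-rate improvement of a newer card over an older one."""
    return (new_fps / old_fps - 1) * 100

# Hypothetical example: 98.8 fps on the new card vs. 80.0 fps on the old card
print(round(percent_gain(98.8, 80.0), 1))  # → 23.5
```

Running the same formula against each card’s average fps at each resolution yields the per-resolution bumps cited above.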
Next page: Far Cry Primal
Far Cry Primal
Far Cry Primal is another Ubisoft game, but running on the latest version of the long-respected Dunia engine that’s been underpinning the series for years now. We tested these GPUs with the game’s free 4K HD texture pack enabled.
The game tends to favor AMD cards at higher resolutions, but as you can see, the GTX 1070 beats out every other graphics card tested with the exception of its big brother, the GTX 1080. Here, the performance gains over the last-gen GTX 970 are a bit more respectable: 41.3 percent at 1080p, 66.6 percent at 1440p, and 73.6 percent at 4K using the Ultra preset.
Next page: Rise of the Tomb Raider
Rise of the Tomb Raider
The gorgeous Rise of the Tomb Raider scales very well across all GPU hardware, though it clearly prefers Nvidia GPUs to the Fury X once you reach the upper echelon of graphics cards. We didn’t include DirectX 12 results because the game actually pushes fewer average frames in that mode compared to DirectX 11, though DX12’s minimum frame rates are much higher, providing a less stuttery experience.
Once again, the GTX 1070 delivers more raw frames per second than the Titan X, but just barely. The GTX 1070 offers roughly 56 to 58 percent more performance than the GTX 970 across the board.
Next page: Hitman
Hitman
Hitman is where things start to get interesting. This glorious murder-simulating sandbox’s Glacier engine is heavily optimized for AMD hardware, with Radeon cards significantly outpunching their GeForce GTX 900-series counterparts, especially at higher resolutions. We’re still having trouble coaxing the game’s bolted-on DirectX 12 mode to launch in the wake of a borked game update, so these results are limited to DX11 only.
While the GTX 1080’s raw power helps bolster it into top-dog status despite Hitman’s Radeon-centric leanings, AMD’s 390X and Fury lineup manage to equal or flat-out beat the GTX 1070 in raw frame rates here. That drives home how important in-engine support for a particular graphics architecture can be.
That said, the GTX 1070 still manages to outperform Nvidia’s own Titan X by a hair in all resolutions once again. The GTX 1070 performance gains over the GTX 970 increase the further you move up in resolution, with a 26.1 percent jump at 1080p, a 39.1 percent jump at 1440p, and a 49 percent jump at 4K.
Next page: Ashes of the Singularity and DirectX 12
Ashes of the Singularity
The varied failings of Tomb Raider and Hitman’s DirectX 12 modes left us with a single DX12 game to test: The superb Ashes of the Singularity, running on Oxide’s custom Nitrous engine.
AoTS was an early flag-bearer for DirectX 12, and the performance gains AoTS offers in DX12 over DX11 are mind-blowing—at least for AMD cards. AoTS’s DX12 implementation makes heavy use of asynchronous compute features, which are supported by dedicated hardware in Radeon GPUs, but not GTX 900-series Nvidia cards. In fact, the software pre-emption workaround that Maxwell-based Nvidia cards use to mimic the async compute capabilities tanks performance so hard that Oxide’s game is coded to ignore async compute when it detects a GeForce GPU.
That leads to some interesting conclusions. Nvidia’s GTX 900-series graphics cards perform worse in DX12 than in DX11, and that’s with async compute features disabled. That’s not the case with the GTX 1070 (or the GTX 1080); in fact, once Pascal GPUs surpass graphics bottlenecks, they can achieve a decent leap in performance in DX12, as shown in the 1440p/high and 1080/high results. I’m extremely curious to see if Oxide decides to lift the hard lock on async compute capabilities for GTX 10-series cards, and if so, whether the Pascal GPU’s new async tricks lead to more consistent DX12 performance gains.
As it stands today, the GTX 1070 again triumphs over the Titan X here, and in fact, in DX11 it’s hands-down the second-most-powerful card after the GTX 1080. But the really intriguing thing is how much of an advantage AMD’s dedicated async compute engine hardware provides in DirectX 12. Radeon cards see sizable performance increases across the board in DX12, as the ACE hardware helps distribute workloads to avoid the GPU bottlenecking seen in Nvidia’s cards. In DirectX 12, the Fury X handily beats the GTX 1070, and the Fury and R9 390X both hang awfully close.
Next page: SteamVR benchmark, 3DMark Fire Strike
SteamVR Performance Test
A lot of the Pascal GPU’s potential performance benefits revolve around VR and developer adoption of Simultaneous Multi-Projection. Unfortunately, those heady days still sit firmly in the future despite the recent launches of the Oculus Rift and HTC Vive.
Granular VR benchmark tools coming from Crytek and Basemark haven’t hit the streets, and no released VR games support the GTX 1070’s new software features yet. There’s no way to quantify the GTX 1070’s potential VR performance increase over the competition except for the SteamVR benchmark, which is better for determining whether your rig is capable of VR at all than direct head-to-head GPU comparisons.
Alas. The GTX 1070 doesn’t quite crank things to 11 like the GTX 1080 does, but it still matches the Titan X’s overall score, and comes out comfortably ahead of the best AMD has to offer.
3DMark Fire Strike and Fire Strike Ultra
We also tested the GTX 1070 using 3DMark’s highly respected Fire Strike and Fire Strike Ultra synthetic benchmarks. Fire Strike runs at 1080p, while Fire Strike Ultra renders the same scene, but with more intense effects, at 4K resolution.
We see the now-familiar pattern repeated: The GTX 1070 squeaks out just ahead of the Titan X yet again.
Next page: Power and heat
Power and heat
Finally, let’s take a look at the GTX 1070’s power and thermal results.
All of AMD’s recent cards consume far more power than Nvidia’s, full stop. That’s sure to change when the new 14nm FinFET Polaris-based Radeon cards roll out, but it’s reality today. But let’s compare something a bit more apples-to-apples: While the GTX 1070’s performance virtually mirrors the Titan X’s, it draws roughly 40 percent less power under load (150W rated, versus the Titan X’s 250W), peaking at the exact same whole-system wattage as the older GTX 970. That’s crazy, and a testament to the Pascal GPU’s power efficiency.
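That efficiency gap is easy to quantify from the rated board power figures mentioned earlier in this review (150W for the GTX 1070, 250W for the Titan X). The sketch below just applies the standard percent-reduction formula:

```python
def percent_reduction(new_watts: float, old_watts: float) -> float:
    """Percentage power saved by the newer card versus the older one."""
    return (1 - new_watts / old_watts) * 100

# GTX 1070 (150W rated) vs. Titan X (250W rated), at roughly equal performance
print(round(percent_reduction(150, 250), 1))  # → 40.0
```

Since the two cards trade blows on frame rates, that power reduction translates almost directly into a performance-per-watt advantage for Pascal.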
Power is measured by plugging the entire system into a Watts Up meter, then running a stress test with Furmark for 15 minutes. It’s basically a worst-case scenario, pushing graphics cards to their limits.
The GTX 1070 runs a wee bit hotter than the GTX 970, on the other hand—a trend we also saw in the jump from the GTX 980 to the GTX 1080. That makes sense; cramming all those billions of transistors into such a tiny footprint results in more focused heat than with previous-generation GPUs. The GTX 1070’s fan still keeps remarkably quiet for a reference design, and you won’t see much thermal throttling—the GPU dynamically scaling back clock speeds to keep cool—since it tops out at a mere 78 degrees Celsius.
The outlier on this chart, AMD’s Fury X, stays so chilly with help from an integrated closed-loop water cooler. Its radiator actually makes more noise, subjectively, than the GTX 1070’s blower-style fan under load.
Next page: Wrap-up
The new prince
If the GTX 1080 is the new graphics king, the GeForce GTX 1070 is a prince worthy of a royal reception. This beast lives up to the lofty promises set by Nvidia, delivering Titan-toppling performance for half the power and a whopping 62 percent lower price. That’s breathtaking. Thanks, 16nm FinFET!
For the first time in history, a graphics card in the $350 to $400 price range delivers truly no-compromises 1440p/60fps performance. The GTX 1070 is probably overkill for 1080p resolution unless you’re playing on a 144Hz screen. If you’re playing on a 60Hz 1080p screen, you’re better off from a price-to-performance standpoint waiting to see what the eventual GTX 1060 and its Radeon rival can do. And like the Titan X, the GTX 1070’s probably underkill for playing at 4K resolution. You’ll clear 30fps, but struggle to hit 60fps in many games unless you dial back graphics details—though investing in a 4K G-Sync monitor would transform any lingering stiffness into buttery-smooth performance onscreen.
What’s also interesting is how the GTX 1070 affects the rest of the graphics card world.
Not only does the GTX 1070 immediately invalidate any reason for gamers to buy the Titan X—though ferociously overclocked GTX 980 Ti custom cards can likely meet or top its results, for much more money—it renders AMD’s Fury lineup obsolete. Unless you need the $500 Radeon Nano’s mini-ITX form factor or the $650 Fury X’s closed-loop liquid cooling for specific build needs, the GTX 1070 is the clear winner, and for less money. There’s zero reason to buy an air-cooled $500 Radeon Fury or even a $400 Radeon R9 390X over this. Just don’t do it—unless AMD responds by drastically slashing prices, of course, which it may have to do as a response (although that might not be an option for the Fury line, given the sky-high cost of high-bandwidth memory).
AMD may very well have an ace up its sleeve with Radeon graphics cards based on its own 14nm FinFET Polaris GPUs. We have no idea what Team Red has planned on that front, but it’s hosting a livestream for “Polaris updates” from Computex in just a few days, at 10 p.m. Eastern on May 31. Employee leaks suggest a Radeon RX 480 might be revealed, but nothing's been confirmed at the time of this writing. Between the time this review launches on May 30 and the GTX 1070 actually hits the streets on June 10, we’ll have a more concrete idea of what AMD has planned.
Even though the GTX 1070 delivers Titan X-class power and comes heartily recommended, it doesn’t feel like quite as much of a slam dunk as the GTX 1080. While the new king improved performance over its predecessor by a solid 70 percent across the board, this new prince’s gains over the older GTX 970 vary from 25 to 73 percent, depending on the game and the resolution setting.
Part of that’s no doubt because the GTX 970 was such a remarkable deal, delivering roughly 85 percent of the GTX 980’s performance for a mere $330—an exceptional value proposition. The GTX 1070 delivers roughly 70 to 75 percent of the GTX 1080’s performance for $380.
And with performance always falling just the slightest bit above the Titan X, it feels like Nvidia intentionally held back a bit, giving the GTX 1070 juuuuust enough oomph to justify the “Faster than Titan X” headline, and not an inch more. Because of that, the GTX 1070 doesn’t necessarily vanquish AMD’s heavy-hitters in heavily AMD-optimized games like Hitman—though it equals them for considerably less money, and devastates them in other titles.
A few questions also remain: Will you even be able to find $380 versions of the GTX 1070 on launch day, or will the release be limited to the $450 Founders Edition? Will Pascal’s new async capabilities level the playing field with AMD’s dedicated hardware, or will Polaris-based Radeon cards blow away their GeForce rivals in DX12 games that lean heavily on the asynchronous computing? Will developers embrace new Pascal-only tools like Ansel and Simultaneous Multi-Projection? We simply don’t know yet.
But put all those nitpicks aside. The GeForce GTX 1070 delivers Titan X-level performance for $380 and that’s amazing—full stop. The people have a new champion. Don’t hesitate to buy one immediately if you’re looking for the ultimate 1440p gaming experience… unless AMD hard launches the Radeon R9 490 and 490X at Computex, that is.
It sure is a thrilling time to be a PC gamer.