The new Radeon R9 390 and R9 390X graphics cards aren’t mere re-brands!
That’s the message AMD’s PR has been shouting at every possible opportunity ever since the “new” Radeon R300 graphics cards were released in June. There’s good reason for that: While AMD’s new Fury X and Fury graphics cards rock a beefy new Fiji graphics processor with high-bandwidth memory, the R300 lineup packs the same graphics processors that beat in the hearts of the older R200 series graphics cards. The fact that reviewers weren’t given samples to test before the R300 cards hit the streets only added fuel to the flames of suspicion.
But AMD’s PR was kinda-sorta right. These cards weren’t simply slapped with a new name and pushed back onto store shelves in a fresh package. While the older R9 290X was closer in performance to the $330 Nvidia GeForce GTX 970, the R9 390X outpunches the mighty $500 GTX 980 at stock speeds—and for significantly less cost than Nvidia’s offering.
On the other hand… well, let’s just say it’s complicated.
Meet the Asus Strix R9 390X
At its core, the $429 Radeon R9 390X packs the same basic 28nm “Hawaii” GPU as the 290X, though the retuned version has been christened “Grenada.” Core GPU specs remain the exact same: You’ll still find 2,816 stream processors, 176 texture units, 64 ROPs, and a 512-bit memory bus inside the R9 390X, just like with its predecessor. There’s little truly new here, especially since the recent Catalyst 15.7 drivers brought previously R300 series-exclusive software features to the older R200 series cards.
That said, AMD engineers spent over a year optimizing the GPU to coax more performance out of it, enabling the company to push core clock speeds an additional 50MHz to 1,050MHz, and boost memory clock speeds from 1,250MHz to 1,500MHz, causing overall memory bandwidth to jump from 320GBps to 384GBps. (The GTX 980 offers 224GBps.) There’s flat-out more memory too, with all versions of the R9 390 and 390X sporting a hefty 8GB of GDDR5 RAM. AMD says the GPU’s power management micro-architecture was completely rewritten as well, though it’s still a power-hungry animal, as we’ll see later.
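As a back-of-the-envelope check, those bandwidth figures fall straight out of the spec sheet: GDDR5 transfers four bits per pin per memory clock, so bandwidth is just effective data rate times bus width. This is a sketch of that arithmetic, not anything AMD publishes as a formula:

```python
def gddr5_bandwidth_gbps(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """GDDR5 is quad-pumped: effective data rate = memory clock x 4.
    Bandwidth (GB/s) = per-pin rate (Gbps) x bus width (bits) / 8."""
    effective_rate_gbps_per_pin = mem_clock_mhz * 4 / 1000
    return effective_rate_gbps_per_pin * bus_width_bits / 8

# R9 390X: 1,500MHz memory on a 512-bit bus
print(gddr5_bandwidth_gbps(1500, 512))  # 384.0 GB/s
# R9 290X: 1,250MHz on the same 512-bit bus
print(gddr5_bandwidth_gbps(1250, 512))  # 320.0 GB/s
# GTX 980: 7Gbps-effective memory on a 256-bit bus
print(gddr5_bandwidth_gbps(1750, 256))  # 224.0 GB/s
```

The 256-bit GTX 980 partly makes up the gap with faster memory and Maxwell’s delta color compression, which is why raw bandwidth numbers alone don’t settle the performance question.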
The $469 Asus Strix R9 390X we tested pushes things even further. Out of the box, the card is overclocked to 1,070MHz, and you can opt to easily push that to 1,090MHz with the click of a button in the included GPU Tweak II software.
Aiding that overclock is Asus’ beefy, stellar DirectCU III custom cooler design—the same one found on Asus’ Radeon Strix Fury graphics card. It consists of a large, full-length heatsink augmented with massive, snaking 10mm heat pipes, topped with a trio of “triple wing-blade 0dB fans.” Marketing-speak aside, the Asus Strix R9 390X proves to be remarkably quiet in practice, producing barely a whisper even when you’re slamming it with the heftiest of workloads. The fans won’t even turn on until the GPU hits roughly 65 degrees Celsius, which means you’ll get a truly silent gaming experience with more modest titles. DirectCU III truly delivers.
The Strix R9 390X delivers more thoughtful touches, as well. The card comes with a slick backplate adorned with the Strix owl logo. A nifty, pulsating red LED on the side is also festooned with a white Strix logo. (I’m a sucker for case lighting.) Asus builds the card with what it calls “8-phase super alloy II” components, which it claims are aerospace-grade parts. The super alloy II capacitors boast two-and-a-half-times the lifetime of traditional capacitors, while Asus says its fortified components and DIGI+ VRM power delivery solution deliver top-tier overclocking capabilities.
This card screams “premium,” but be warned, tiny case owners: All those features turn the Strix R9 390X into a fairly bulky graphics card.
The Strix R9 390X draws up to 275W of power through one six-pin and one eight-pin connector, and it packs all the connections you could reasonably need, with DVI-I, HDMI, and a trio of DisplayPorts. The HDMI connection is only 1.4a, meaning it’s limited to 30Hz at 4K resolutions, but realistically speaking neither the 390X nor Nvidia’s competing GTX 980 deliver a compelling single-card 4K experience, no matter what each company’s marketing teams claim. These cards are better for 2560×1440 gaming. If you decide to try it anyway, or want to slap multiple R9 390X cards in your system in a CrossFire setup, the DisplayPorts support 4K at 60Hz.
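If you’re curious why HDMI 1.4a tops out at 30Hz at 4K, it comes down to pixel clock: HDMI 1.4 caps the TMDS rate at a 340MHz pixel clock, and a 4K frame at 60Hz needs nearly double that. Here’s a rough sketch of the math, using the standard CEA-861 4K timings (total pixels including blanking) as assumed inputs:

```python
# HDMI 1.4's ceiling is a 340MHz TMDS pixel clock; HDMI 2.0 raised it to 600MHz.
HDMI_14_MAX_PIXEL_CLOCK_MHZ = 340

def pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: int) -> float:
    """Pixel clock = total pixels per frame (including blanking) x refresh rate."""
    return h_total * v_total * refresh_hz / 1e6

# CEA-861 timing for 3840x2160 uses a 4400x2250 total raster.
print(pixel_clock_mhz(4400, 2250, 30))  # 297.0 MHz -- fits under HDMI 1.4's cap
print(pixel_clock_mhz(4400, 2250, 60))  # 594.0 MHz -- needs HDMI 2.0 or DisplayPort
```

That 594MHz requirement is exactly why the card’s DisplayPort outputs (and Nvidia’s HDMI 2.0 ports on Maxwell) are the only route to 4K at 60Hz.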
Speaking of gaming…
Asus Strix R9 390X gaming performance
As ever, we tested the Asus Strix R9 390X on PCWorld’s dedicated graphics card benchmark system. Our build guide for the rig details its innards in-depth, but here’s the Cliffs Notes version of the key details:
We tested each title using the in-game benchmark provided, and stuck to the default graphics settings unless mentioned otherwise. V-Sync, G-Sync, and FreeSync were always disabled.
To get a true feel for the $469 Strix R9 390X’s place in the world, we’ve compared it to Asus’ $580 Strix Fury and AMD’s older reference R9 290X, as well as Nvidia’s $500 reference GTX 980. Benchmarks for the $650 Fury X and reference GTX 980 Ti are also included, so you can get a feel for the card’s 4K capabilities since AMD keeps hammering the 390X’s 4K potential. (Spoiler: Don’t use it as a single-card 4K gaming solution unless you want to play at Medium settings.)
First up: Middle-earth: Shadow of Mordor. The game may kick off with an Nvidia splash screen, but AMD graphics cards have held the upper hand in pure frame rate over the past couple of months. We test the game using the default Medium and High graphics presets, and then by cranking everything to its highest setting (which the Ultra preset doesn’t actually do). For the “crank everything to 11” test we use the free, optional HD Textures Pack add-on, which just chews through memory. Not that the R9 390X’s 8GB of RAM really cares.
Next: Grand Theft Auto V. This gorgeous, loving recreation of L.A. also has a reputation for being a memory hog and comes with graphics options galore. We tested it three ways: at 4K with every graphics setting set to ‘Very High’ with FXAA enabled, at 2560×1440 with the same settings, and at 2560×1440 with the same settings but with 4x MSAA and 4x reflection MSAA also enabled. (Note: Fury X results are omitted as we haven’t had a chance to retest the game with AMD’s new Catalyst 15.7 drivers yet. Those drivers delivered a big performance boost in GTAV.)
Next we have a reversal of the Shadow of Mordor situation: Dragon Age Inquisition (which uses EA’s ubiquitous Frostbite engine) was promoted heavily by AMD at launch, but Nvidia’s card actually holds a slight lead here.
Sniper Elite III, another AMD Gaming Evolved title and Mantle supporter, isn’t as strenuous on graphics cards as some of the other titles here, but it’s still plenty purty, and playing it makes for an enjoyable afternoon.
Sleeping Dogs was a sleeper hit, and its Definitive Edition is a graphics-enhanced remake of this Hong Kong action game. So graphics-enhanced, in fact, that it cripples even the highest-end cards with its graphics options set to the Extreme preset.
Metro: Last Light Redux, another remake of another gem of a game, features a built-in benchmark that runs through a lengthy battle scene. It’s built using 4A Games’ custom 4A Engine.
Alien Isolation is the most terrifying Alien experience since the original movie. Fortunately, it’s less scary to graphics hardware, scaling well across GPUs of all shapes, sizes, and price points.
Bioshock Infinite utilizes the tried and true Unreal Engine 3, like so many other games, and both AMD and Nvidia have had plenty of time to optimize their drivers for it by this point in the game’s lifespan.
Here’s how the cards stack up in 3DMark Fire Strike and Unigine Valley, two widely used synthetic graphics benchmarks. Fire Strike Ultra is a more strenuous version of Fire Strike, created specifically to test a graphics card’s mettle at 4K resolution.
To test power consumption and GPU temperature, we run the grueling worst-case-scenario Furmark benchmark—which AMD and Nvidia dub a “power virus”—for 15 minutes, taking temperature information at the end using the tool’s built-in temperature gauge and verifying it with SpeedFan. Power draw is measured during the run on a whole system basis, not the GPU individually, by plugging the computer into a Watts Up Pro meter rather than directly into the wall.
Despite AMD’s “complete rewrite of the GPU power management micro-architecture,” the Grenada GPU is still a power-hungry beast. In fact, it draws 90W more than its 290X predecessor under load, and that card was legendary for its power draw. It draws a full 200W more than the GTX 980, which was built using Nvidia’s supremely power efficient Maxwell GPU architecture.
All that said, the card still runs whisper-quiet. Props to Asus and its DirectCU III cooler for that.
Asus Strix R9 390X bottom line
When you think about it, it’s impressive just how much AMD and Asus were able to accomplish with the R9 390X. When Nvidia’s GTX 970 and 980 debuted in September 2014 they were so impressive that AMD was forced to immediately—and drastically—slash prices. Now, the 390X proves that the ol’ Hawaii GPU architecture still has some life in it, outperforming the stock GTX 980 while offering twice the memory, all for $40 less than the 980 when you’re buying the custom, overclocked Asus Strix, or a full $70 less for a stock 390X.
But as impressive as that is, the R9 390X isn’t a slam-dunk must-buy over the GTX 980.
Similarly custom-cooled variants of the GTX 980 will no doubt bring performance in line with, or slightly better than, the 390X. I have an EVGA model coming in later this week for testing, and Nvidia’s Maxwell chips are famous for their overclocking capabilities, which frequently allow you to boost core clock speeds by a whopping 15 to 20 percent. If you’re lucky enough to snag a 980 that’s so overclock-friendly (and most are), then you can coax out massive amounts of additional performance.
While I’m not one to spend too much time worrying about power use in a desktop system, the extra 200W of energy the R9 390X demands over the GTX 980 is… whoa. Sheesh. That’s pretty insane—enough even to give me pause.
That said, the R9 390X—and particularly Asus’ well-designed, utterly quiet version of it—still shines in many scenarios. If you’re not interested in overclocking and just want something that rocks out of the box, the Strix R9 390X slightly beats the GTX 980 for a whole lot less money. While the fact that the card isn’t potent enough to handle 4K games at high settings at reasonable frame rates (no matter what AMD marketing claims) makes the 8GB of RAM a bit superfluous, that hefty helping of memory would be much appreciated in a CrossFire scenario with two or more 390Xs driving a 4K display or multi-monitor gaming setup. The GTX 980 packs only 4GB of RAM, and over 40 percent less memory bandwidth than the R9 390X.
The fact that the R9 290X is being cleared out for $300 to $350 is another fly in the ointment. The older card isn’t quite as potent as the 390X—obviously—but that’s a lot less dough. And if you already have a 290X, there’s no compelling reason to upgrade to the 390X.
The R9 390X is a solid GPU, and Asus’ Strix version is a great implementation of it. In most games, it delivers 90 percent of the performance of the vaunted Radeon Fury, but for $130 less. Ideally, AMD would introduce a 4GB variant at a sub-$400 price point to create a more compelling price/performance case against the GTX 980, because the 8GB of RAM is pointless if you’re playing at 1080p or 2560×1440.
But as it stands you won’t be disappointed if you pick one up. Just be aware of its drawbacks versus the GTX 980, and aware of what you personally plan to use the card for. With that knowledge in hand, you’ll be able to decide whether it’s worthwhile for you to spend $40 to $100 more (depending on the model) for a GTX 980’s vastly better power efficiency and overclocking chops.
Brad Chacos spends his days digging through desktop PCs and tweeting too much. He specializes in graphics cards and gaming, but covers everything from security to Windows tips and all manner of PC hardware.