Let’s move on to what may be the second most important category for the Ryzen: gaming. That’s pretty much where the Ryzen story goes from incredible performance per dollar to head-scratching, even maddening, results.
First up is the always popular 3DMark test. Now owned by UL, this is a synthetic test that measures gaming performance. Yes, it’s synthetic, but for the most part, it’s widely regarded as neutral ground. The first result is the overall score in 3DMark Fire Strike. The Core i7-6900K takes the top spot, with the Core i7-7700K taking the second spot, and the Ryzen 7 1800X a close third. It’s pretty much a yawner.
3DMark’s Graphics sub score is designed to stress the graphics card. If 3DMark is doing its job, we should see very little difference among our machines, as we used identical GeForce GTX 1080 cards for our testing. Everything is right as rain. Only the FX is slightly slower, which could be because, well, it’s an AMD FX. Or maybe it’s the PCIe 2.0 on the platform. Either way, nothing to get excited about.
3DMark’s Physics test stresses core count, and we see the 8-core chips ahead by a good margin. The low-wattage Ryzen 7 1700 actually appears to break even with the 6-core Core i7-6800K here. The lower clock speed of the R7 1700 could be hurting it. And FX, yeah, “8 cores” indeed.
3DMark also includes a test of a CPU’s capabilities when tasked with issuing draw calls under DirectX 12. The results clearly put the Intel chips in the lead: Not only is the Core i7-7700K just about dead even with both Ryzen chips, but the 6- and 8-core Intel parts are a sizable distance ahead. If you’re wondering why the 6-core Broadwell is neck-and-neck with the 8-core part, I’ve found this particular test doesn’t scale much beyond six cores.
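Why a multithreaded test stops scaling past a certain core count is the textbook Amdahl’s law situation: once the serial portion of the workload dominates, extra cores buy very little. Here’s a rough sketch; the 90-percent parallel fraction is an illustrative assumption, not a measured property of 3DMark.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup over one core when only part of the
    workload can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assume 90% of the work parallelizes (illustrative number only).
for n in (2, 4, 6, 8):
    print(n, "cores:", round(amdahl_speedup(0.9, n), 2), "x")
```

With those assumed numbers, going from six to eight cores gains under 20 percent, which is consistent with a test that looks flat past six cores.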
Ashes of the Singularity DX12 performance
The good news is, we also have a real-world DX12 game in Ashes of the Singularity. This game is basically the tech demo for what can be done in DirectX 12, and it loves CPU cores. For this test run, I ran at 1920x1080 resolution with the visual quality set to low. Ashes has a GPU-focused mode and a CPU-focused mode. I chose the latter because I wanted to see the frame rate when a crazy number of objects (and draw calls) are thrown at a game. The result was again confusing. The Core i7-6900K walks away from the pack, and even the 6-core Core i7-6800K shows Kaby Lake what-for in the test. The Ryzens are oddly slow considering they have more cores than both the Core i7-6800K and the Core i7-7700K. To be fair, much like 3DMark, I haven’t seen this test scale to crazy amounts. Right before we went to press, AMD sent out a statement from Ashes developer Oxide:
“Oxide games is incredibly excited with what we are seeing from the Ryzen CPU. Using our Nitrous game engine, we are working to scale our existing and future game title performance to take full advantage of Ryzen and its 8-core, 16-thread architecture, and the results thus far are impressive,” said Stardock and Oxide CEO Brad Wardell. “These optimizations are not yet available for Ryzen benchmarking. However, expect updates soon to enhance the performance of games like Ashes of the Singularity on Ryzen CPUs, as well as our future game releases.”
So, not valid?
Tomb Raider performance
I decided to look at Ryzen’s performance using the older version of Tomb Raider. Those looking at the theoretical performance of a CPU in games typically want to take the graphics card out of the equation by running the game at lower settings, or even lower resolutions, than they would normally use with their hardware. For this test, I ran Tomb Raider at 1920x1080 resolution at the normal setting. The performance gap again put Ryzen in a bad spot. In fact, it’s frankly a very puzzling result. One can argue that when you’re pushing in excess of 300 or 400 frames per second, it’s kinda pointless, but why isn’t Ryzen, which so handily matches Intel’s Broadwell-E in other tests, right up there with Broadwell-E?
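The logic of testing at low settings can be reduced to a simple bottleneck model: each frame costs some CPU time and some GPU time, and whichever stage is slower caps the frame rate. The per-frame costs below are made-up numbers purely for illustration.

```python
def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frame rate is limited by whichever stage takes longer per frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical per-frame costs (illustrative, not measured).
# At low settings the GPU is fast, so the CPU sets the ceiling:
print(fps(cpu_ms=4.0, gpu_ms=2.0))   # CPU-bound: 250 fps
# At high settings the GPU dominates, and a faster CPU changes nothing:
print(fps(cpu_ms=4.0, gpu_ms=12.0))  # GPU-bound
print(fps(cpu_ms=3.0, gpu_ms=12.0))  # faster CPU, identical fps
```

That’s why reviewers drop the resolution: it’s the only way to make the CPU the limiting stage.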
Rise of the Tomb Raider performance
Let’s move on to Rise of the Tomb Raider. Rise is newer and tougher on the GPU. At 1920x1080 and the medium setting, we’re again seeing rather disappointing performance numbers for Ryzen. I’d expected Ryzen to be near lock-step with Broadwell-E, but it’s not even close.
Tom Clancy’s Rainbow Six Siege performance
Moving away from Lara, I ran Tom Clancy’s Rainbow Six Siege at 1920x1080 and Medium. We’re pushing into silly frame-rate territory again, but as I said previously, this would normally stress the CPU. It’s how most reviewers would attempt to measure the theoretical gaming performance of a CPU.
So what the hell is going on?
Presented with the most confusing CPU results I’ve seen in 15 years, I asked AMD what could possibly be the issue. I was given an updated BIOS, which had no impact. I was asked if I had a clean install of Windows, which I did. What about turning off SMT? Umm, why give up the performance? Was it my motherboard? The result of my using the lower JEDEC DDR4/2133 RAM speed vs. 2933? Not exactly.
I did test the Core i7-7700K at DDR4/2933 speeds alongside the Ryzen 7 1800X. While the 1800X jumped from 82 fps to 136 fps, the Kaby Lake chip went from 87 fps to 181 fps. I frankly have no idea why my gaming performance on Ryzen isn’t where I’d expect it to be. My gut says it’s some kind of plumbing issue with PCIe or somewhere outside the cores themselves.
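To put those memory-speed numbers side by side, here’s the back-of-the-envelope math on the relative uplift each chip got from faster RAM:

```python
def uplift(before_fps: float, after_fps: float) -> float:
    """Percentage gain going from the slower to the faster RAM speed."""
    return (after_fps / before_fps - 1.0) * 100.0

print(round(uplift(82, 136), 1))   # Ryzen 7 1800X: ~65.9% faster
print(round(uplift(87, 181), 1))   # Core i7-7700K: ~108% faster
```

Kaby Lake gained proportionally more from the memory bump than Ryzen did, which is part of what makes the result so hard to pin on RAM speed alone.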
I also ran Sid Meier’s Civilization VI’s AI test. It measures how long it takes to calculate between moves. The result was pretty much a tie (except for FX, of course). I still don’t know what's going on.
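A turn-time benchmark like Civ VI’s AI test boils down to wall-clock timing of the between-moves computation. A generic sketch of that kind of harness, where the workload is a stand-in and not the game’s actual AI:

```python
import time

def time_turn(ai_fn, *args) -> float:
    """Return wall-clock seconds taken by one AI 'turn'."""
    start = time.perf_counter()
    ai_fn(*args)
    return time.perf_counter() - start

# Stand-in workload (purely illustrative): sum of squares.
elapsed = time_turn(lambda n: sum(i * i for i in range(n)), 1_000_000)
print(f"turn took {elapsed:.3f} s")
```

Lower is better here, which is why this style of test sidesteps the frame-rate weirdness entirely.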
(For the record, Brad Chacos experienced similar baffling results in his own tests of the Ryzen 7 1700.)
At ‘realistic’ settings it doesn’t matter
But here’s why the anomalies may not bother many (though they probably should): At actual, practical resolutions and game settings, they don’t seem to matter.
Most of AMD’s public presentations were at 4K resolution using two cards or a mighty Titan X Pascal card. In those scenarios, it was pretty much a tie. You don’t, after all, buy a $500 CPU and $500 GPU to run at 1920x1080 at “normal” settings. Those settings would be great for integrated graphics, but a GeForce GTX 1080? No. And here’s proof: In Tomb Raider, for example, once you move up to the Ultimate preset, they're all even.
Once you move Rise of the Tomb Raider up to 2560x1600 resolution, Ryzen is right there with Core i7.
And yes, here’s Tom Clancy’s Rainbow Six at 1920x1080 resolution using the Ultra setting. You can again see that everything’s okay, right?
Sure, you’re looking at the charts above and fist-bumping, AMD fans, but the odd performance at lower game settings should still disturb you. Based on the charts above, for example, you’d think it would be fine to buy an AMD FX-8370 chip.
Very late in the review process, AMD’s John Taylor reached out to PCWorld with a comment on the odd performance we were seeing.
Here’s why, AMD says
“As we presented at Ryzen Tech Day, we are supporting 300+ developer kits with game development studios to optimize current and future game releases for the all-new Ryzen CPU. We are on track for 1,000+ developer systems in 2017. For example, Bethesda at GDC yesterday announced its strategic relationship with AMD to optimize for Ryzen CPUs, primarily through Vulkan low-level API optimizations, for a new generation of games, DLC and VR experiences,” Taylor said. “Oxide Games also provided a public statement today on the significant performance uplift observed when optimizing for the 8-core, 16-thread Ryzen 7 CPU design—optimizations not yet reflected in Ashes of the Singularity benchmarking. Creative Assembly, developers of the Total War series, made a similar statement today related to upcoming Ryzen optimizations.
“CPU benchmarking deficits to the competition in certain games at 1080p resolution can be attributed to the development and optimization of the game uniquely to Intel platforms—until now. Even without optimizations in place, Ryzen delivers high, smooth frame rates on all ‘CPU-bound’ games, as well as overall smooth frame rates and great experiences in GPU-bound gaming and VR. With developers taking advantage of Ryzen architecture and the extra cores and threads, we expect benchmarks to only get better, and enable Ryzen to excel at next-generation gaming experiences as well. Game performance will be optimized for Ryzen and continue to improve from at-launch frame rate scores.”
To boil it down: Game developers basically develop for two platforms: Intel’s small socket or Intel’s large socket. AMD, as much as it pains the faithful, has been invisible outside of the budget realm, and the results are showing up in the tests. Whether that’s what’s really going on, I can’t say for sure, and I doubt anyone can at the moment, but it’s at least plausible.
Watch PCWorld's Full Nerd crew discuss Ryzen benchmarks, performance vs. Intel processors, the GeForce GTX 1080 Ti, and more.
In the end, AMD’s Ryzen is arguably the most disruptive CPU we’ve seen in a long time for those who need more cores. The CPU basically sells itself when you consider that for the same price as an Intel 8-core Core i7-6900K, you can have an 8-core Ryzen 7 1800X and a GeForce GTX 1080. Hell, you can go a step further and give up a little performance with the Ryzen 7 1700 but step up to a GeForce GTX 1080 Ti—for the same price as that Intel chip. Damn.
But that’s the world Intel has wrought by keeping 8-core CPUs at what many would say are artificially high prices for so long.
Ryzen, however, isn’t a knockout. The gaming disparities at 1080p are sure to spook some buyers. In fact, if you read about our Ryzen 7 1700 build going up against a 5-year-old Core i5 Intel box, you’ll likely be filled with fear, uncertainty, and doubt. Is this really just a game-optimization problem, as AMD says, or is it some deeper flaw that can’t be corrected?
Still, let’s give AMD credit for what it has pulled off today in essentially democratizing CPU core counts.