Intel’s 10th-gen Core i9-10900K is—without a doubt—exactly as Intel has described it: “the world’s fastest gaming CPU.”
Intel’s problem has been weaknesses outside of gaming, and its overall performance value compared to AMD’s Ryzen 3000 chips. With the Core i9-10900K, Intel doesn’t appear to be eliminating that gap, but it could get close enough that you might not care.
What is Core i9-10900K?
Despite its 10th-gen naming, Intel’s newest desktop chips continue to be built on the company’s aging 14nm process. How old is it? It was first used with the 5th-gen Core i7-5775C desktop chip from 2015. Many tricks, optimizations, and much binning later, we have the flagship consumer Core i9-10900K, announced April 30. The CPU features 10 cores and Hyper-Threading for a total of 20 threads, with a list price of $488.
The Core i9-10900K does bring a few changes. Intel officials said the chip uses a thinner die and thinner solder thermal interface material (STIM) to improve thermal dissipation. Intel also had to make a thicker heat spreader (that little metal lid to keep you from crushing the delicate die).
Why make the die and STIM thinner, but the heat spreader thicker? The reason is cost. Intel said it had to keep the height of the heat spreader on all of its CPUs the same so they’d be compatible with existing cooling hardware. Intel officials did say the materials used for the heat spreader help compensate for that compromise, so overall the new chip is better at power dissipation.
A new socket?!
It’s true: Intel’s new 10th-gen CPUs bring with them a new LGA1200 socket that is—of course—incompatible with the previous 9th-gen CPUs. Intel took flak for introducing a new chipset with its 8th-gen desktop chips that was incompatible with the previous generation, so you can understand the anger of those who just want to upgrade the CPU, not the motherboard too.
The LGA1200 socket and the new Z490 chipset don’t seem to change much. You still install the CPU almost the same way, and if you have an existing LGA1151 cooler, it’ll still fit. Sadly, rumors of PCIe 4.0 on Z490 proved untrue, leaving Intel at a disadvantage compared to Ryzen 3000 chips, which have the faster interface for SSDs and GPUs.
How We Tested
For this review, we stick with Intel’s flagship, the $488 Core i9-10900K. Its natural competitor is AMD’s Ryzen 9 3900X with 12 cores and 24 threads. Its list price is $499, but its street price as of this writing is actually more like $410 on Amazon. The Ryzen 9 3900X comes with a decent air cooler, too.
The only other CPU we’d compare would be the Ryzen 9 3950X, but with a street price of $720 on Amazon (as of this writing) the math doesn’t work. So we’ll stick with the 12-core Ryzen 9 3900X. It was tested on an MSI X570 MEG Godlike with 16GB of DDR4/3600 in dual-channel mode. We typically use the same SSD on all platforms, but we feel that’s unfair to AMD, which can run PCIe 4, so we used a Corsair MP600 PCIe 4.0 drive.
For the Core i9-10900K, we used an Asus ROG Maximus XII Extreme board with 16GB of DDR4/3200 in dual-channel mode and a Samsung 960 Pro SSD.
Both systems used Windows 10 1909, identical GeForce RTX 2080 Ti Founders Edition cards, and NZXT Kraken X62 coolers with fans set to 100 percent. Both boards were run open-case, with matching desk fans blowing cool air directly onto the graphics cards and the boards’ socket area. All systems used the same drivers, the latest UEFI releases, and the latest Windows security updates.
Due to time and other constraints, we ran the boards with MCE and PBO set to Auto, and 2nd-level XMP and AMP profiles selected. While these factory settings are beyond what is stock, we think it’s close to what a consumer will see out of the box.
Core i9-10900K Rendering Performance
We’ll kick this off where we normally do: Maxon’s Cinebench R20. It’s a 3D modelling test built on the company’s Cinema4D engine, and it’s integrated into such products as Adobe After Effects. Like most 3D modelling apps, more cores and more threads typically yield more performance.
Our results for the Core i9-10900K and Ryzen 9 3900X are fresh, but we decided to sprinkle in results from previous reviews for more context. Although those older results are not from the latest version of Windows, Cinebench is very reliable. The R20 version uses AVX2 and AVX512 and takes about three times as long to run as the older R15 version, which means short-duration boost clocks matter less to the final score.
Remember these results, because for the most part they won’t change much as we move through multi-threaded performance: Cores matter. The Ryzen 9 3900X’s 12 cores prevail over the Core i9-10900K’s 10. If it’s any consolation, the latest Core chip fares noticeably better than the Core i9-9900K and Core i9-9900KS, which were both handcuffed by their “mere” 8 cores.
Switching Cinebench R20 to single-threaded performance constrains the load to a single CPU core. The results are so close that no one would or should care. We expected the Core i9-10900K to have a little more of an edge, but maybe it’s the luck of the draw.
Corona is an unbiased photo-realistic 3D renderer, which means it takes no shortcuts in how it renders a scene. It loves cores and threads, so the results here follow the trend, but the Core i9 finishes just 7 percent shy of the Ryzen 9. In Cinebench R20, the Ryzen 9 had a larger 15-percent advantage.
The Chaos Group’s V-Ray Next is similar to Cinema4D’s engine, but it’s a biased renderer: it takes shortcuts to finish projects so you can, you know, win an Academy Award, as V-Ray has. It loves thread count, so guess what: The Ryzen 9 comes out about 14 percent faster than the Core i9.
Our last rendering test measures ray tracing performance using the latest version of POV Ray. The Ryzen comes in almost 17 percent faster than the Core i9. That’s pretty close to the Ryzen 9’s thread-count advantage, which works out to 20 percent more threads (24 vs. 20).
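That thread-count comparison is easy to sanity-check. A minimal sketch, using only the core and thread counts quoted earlier in this review:

```python
# Relative thread advantage: Ryzen 9 3900X (24 threads) vs. Core i9-10900K (20 threads)
ryzen_threads = 12 * 2   # 12 cores, 2 threads each with SMT
intel_threads = 10 * 2   # 10 cores, 2 threads each with Hyper-Threading

# Percentage advantage relative to the Intel chip's thread count
advantage = (ryzen_threads - intel_threads) / intel_threads * 100
print(f"Ryzen 9 thread advantage: {advantage:.0f}%")
```

That 20 percent lines up closely with the 17-percent POV Ray gap, which is why we say this result tracks thread count.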
Switching POV Ray to single-threaded performance, the Ryzen 9 squeezes by the Core i9, which surprised us—we thought the Core i9 would take the lead.
Core i9-10900K Encoding Performance
Video encoding needs fast CPUs, too. That’s why we use the latest version of HandBrake to convert a 4K video short to 1080p using H.265. Using the CPU to finish the task, the 12-core Ryzen 9 finished with a 16-percent advantage over the 10-core Core i9. So far, that’s pretty much what we expected.
Our next test uses Cinegy’s Cinescore to assess CPU and GPU performance across several dozen broadcast industry profiles from SD to 8K, using codecs ranging from H.264, MPEG2, XDCAM, and AVC Ultra to Nvidia H.265 and Daniel 2. It runs entirely in RAM to remove storage as a bottleneck. (Note: The version of Cinescore we use is older and no longer works without setting an older date on the PC—the version has timed out the codec license.)
While the Core i9 doesn’t win, it gets awfully close to the Ryzen 9. This could mean Cinescore and its codecs don’t care that much about thread count, or that the Core i9’s higher clock speeds are of more value. Sorry, Ryzen 9 fans.
Our next test is a video test, but not in a traditional sense. While standard encoding or transcoding isn’t all that smart in how it downsamples or upsamples video, Topaz Lab’s Video Enhance AI claims to look at every frame and use machine learning inferencing to decide what will make each frame look better on the upscale, based on studying other videos. For the test, we take a two-minute family video shot on a Kodak video camera and upscale it from 720p to 1080p, using the app’s Gaia HQ preset.
This upscale would typically be done on a GPU, where it would be significantly faster, but we ordered it to use the CPU cores for the upscale. Topaz Labs uses Intel’s own OpenVINO for the Deep Learning. Doing a frame-by-frame upscale isn’t easy and still takes literally hours to complete. The Ryzen 9 finishes with about a five-percent advantage over the Core i9. Too close for comfort, but a win is a win.
Core i9-10900K Compression Performance
Moving on to compression and decompression performance, we first use RARLab’s WinRAR. As with prior Ryzen CPUs, the result is bad—a loss for the Ryzen 9 and a big win for the Core i9. The Ryzen architecture has long performed poorly here. In the built-in benchmark, the Core i9 is 82 percent faster.
Switching WinRAR to single-core performance, nothing changes except that the Core i9’s win grows to a 194-percent advantage. We use WinRAR because it’s worth pointing out that some software will heavily penalize Ryzen’s microarchitecture. Intel has fielded software-optimization support to developers for much longer than AMD, and it shows.
The gap closes with the much more popular (and free) 7Zip. We set the built-in benchmark to use the number of threads available to the CPU—in this case, 24 for Ryzen 9 and 20 for Core i9. The first result is multi-core.
Decompressing performance, according to the developer, leans heavily on integer performance, branch prediction, and instruction latency. Compressing performance leans more on memory latency, cache performance, and out-of-order performance. It doesn’t matter either way, as the Core i9 falls to the Ryzen 9 in both areas. The Ryzen 9 is about 21 percent faster in decompression, and a whopping 44 percent faster in compression.
Moving to single-threaded performance, we see the Core i9’s fortunes reverse, with about 7 percent faster decompression and 17 percent faster compression.
Core i9-10900K Gaming Performance
Intel didn’t call the Core i9-10900K ‘the best CPU for multi-threaded performance’ because it likely knew it wasn’t going to squeeze out the Ryzen 9 3900X. On the other hand, Intel’s chips have long led in gaming performance, ever since the first Ryzen was introduced.
Rather than have you scroll through 16 charts, we combined 16 results into one megachart, from a list of games run at varying resolutions and settings. We’ll run through it from top to bottom.
In Far Cry New Dawn we see the Core i9 vary from about 9 percent to about 14 percent over the Ryzen 9, depending on the resolution and game setting. As you jack up the resolution or the game setting, the test is increasingly GPU-bound.
Ashes of the Singularity: Escalation, long the poster child for DX12 performance, is hailed for actually using the extra CPU cores available to gamers today. For this run, we use the Crazy quality preset and select the CPU-focused benchmark, which is supposed to throw additional units into the game. The result: about a 7.5-percent advantage for the Core i9-10900K.
Chernobylite, an early-access game, features a benchmark to showcase its beautiful graphics. Set to high, where the game is not limited by GPU performance, we see a similar 7.9-percent advantage for the Core i9 over the Ryzen 9. As we crank up the graphics from high to ultra, it becomes an increasingly GPU-bound test.
The only red (AMD) bar longer than a blue (Intel) bar is in Civilization VI: Gathering Storm—but unfortunately for the Ryzen 9, this particular test measures how long it takes for the computer to make a move, and a shorter time is better. The Core i9 is about 6.5 percent faster.
For Metro Exodus, we run the game at 19×10 resolution using the built-in benchmark presets for High, Extreme, and RTX. On High, when the game is less of a GPU test and more of a CPU test, the Core i9 has about a 6.7-percent advantage over the Ryzen. On Extreme, where the GPU is the bottleneck, the gap closes to about 3 percent. When the RTX preset is used, the Core i9’s advantage is about 4.3 percent, which is a surprise—we assumed performance differences would dissipate because ray tracing is so intensive.
Quake II RTX kind of proves our case, as it’s a fully path-traced version of the original classic. We assumed the RTX 2080 Ti would basically render this test moot, and for the most part the results are close. The Core i9’s 2.7-percent lead is likely within the margin of error, but even this small edge says something.
Here’s a result that’s not so close, in Gears of War 5. On its medium setting, the Core i9 leads by almost 49 percent over the Ryzen 9. On the Ultra preset, the Core i9 leads by a still-heavy 17.2 percent. As we move up to higher resolutions, the GPU is again the bottleneck, and we see about a 6.2-percent lead for the Core i9 over the Ryzen 9. We also include the game’s CPU Frame Rate report, which projects how many frames the CPU could push out if not bound by the graphics card. It shows a hefty 35-percent advantage for the Core i9.
This last result, while not as common among the games we ran, is a reminder that there are many games that simply don’t run as well on Ryzen’s microarchitecture, though it’s certainly improved over the original Ryzen launch (which was almost always at a 20-percent deficit). Intel retains an advantage in the vast majority of games—how much will depend on your graphics card and the resolution you choose to play at.
Let’s highlight a couple of wins for AMD—the first one is a bit of a surprise. It’s Counter-Strike: Global Offensive, using the FPS Workshop test. We suspect the Ryzen 9 3900X’s large cache helps it here, but we plan to retest to make sure. And yes, both are in excess of 400 fps.
We’ll close this out with UL’s 3DMark Time Spy Extreme CPU test. It tests physics, and generally the more cores you have, the better the performance. While the win goes to AMD, don’t get too excited. While we support using more cores for gaming, few developers are even pushing 8 cores. Consider this an aspirational win.
What about power consumption?
The Core i9-10900K’s power consumption has inspired many rumors, especially considering its official TDP of 125 watts. Intel has long been at a disadvantage against the 7nm Ryzens on power. Once you add two cores and more clocks, it’s not going to get better.
For our testing, we unfortunately did not have access to matching power supplies. Both PCs used 1,000-watt units, but the Ryzen 9 used an 80 Plus Silver, while the Core i9 used an 80 Plus Gold, which is slightly more efficient in how it converts AC to DC. (An 80 Plus Gold must be at least 88-percent efficient at a 20 percent load, while an 80 Plus Silver must be 85-percent efficient.)
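To gauge how much the PSU mismatch could skew our wall-socket readings, here’s a rough sketch. It assumes a hypothetical 290-watt DC load and uses the 80 Plus minimum efficiencies at 20-percent load as stand-ins; real efficiency curves vary by unit and by load.

```python
# Wall draw = DC load / PSU efficiency.
# 290 W is an illustrative load, not a measured value.
dc_load_watts = 290

gold_wall = dc_load_watts / 0.88    # 80 Plus Gold: at least 88% efficient
silver_wall = dc_load_watts / 0.85  # 80 Plus Silver: at least 85% efficient

print(f"Gold PSU wall draw:   {gold_wall:.0f} W")
print(f"Silver PSU wall draw: {silver_wall:.0f} W")
print(f"PSU-induced gap:      {silver_wall - gold_wall:.0f} W")
```

Roughly a dozen watts of any wall-socket gap could come from the PSUs alone, which is one more reason not to over-read these charts.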
Both systems also featured motherboards bedazzled with LEDs and OLEDs—we don’t know how much power they consumed. Both had the same liquid coolers and the same GPU models and internal case fans, so we monitored both using watt meters that measured the power consumed at the wall socket during various loads. This testing should be taken with a high probability of inaccuracy, though, until we can match components.
At idle and on lighter loads, you can see the Ryzen 9 (purple line) actually consumed more power. It doesn’t take long before the Core i9 (red line) starts guzzling, though.
Here’s another view of the power both systems consumed during Cinebench R20. The Intel system (red) hits about 290 watts of power vs. the 250 watts for the AMD system. When you also consider that the Intel system takes longer to finish the run, it’s really all in Ryzen 9’s favor. Even though much has been made of the power and heat, in our opinion it probably shouldn’t be the deciding factor for most people.
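For a fixed workload like a Cinebench run, the fairer comparison is energy, not peak power: watts multiplied by seconds to finish. A sketch using the wattages from the chart above and purely hypothetical run times (the seconds below are placeholders, not our measurements):

```python
# Energy (joules) = average power (watts) x time to finish (seconds).
# Wattages come from the chart discussed above; run times are hypothetical.
intel_watts, intel_seconds = 290, 120  # placeholder run time
amd_watts, amd_seconds = 250, 110      # placeholder run time (finishes sooner)

intel_joules = intel_watts * intel_seconds
amd_joules = amd_watts * amd_seconds
print(f"Intel energy: {intel_joules / 1000:.1f} kJ")
print(f"AMD energy:   {amd_joules / 1000:.1f} kJ")
```

Because the Intel system both draws more power and runs longer, the energy gap is larger than the 40-watt peak difference alone suggests.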
Core i9-10900K Conclusion
If you were expecting Intel’s 10th-gen to hammer Ryzen 3000 CPUs, you were wrong. Intel’s creaky 14nm fabrication process can’t fully stand up against AMD’s (and TSMC’s) 7nm, and Intel was never going to offer more multi-core performance than AMD’s chips.
What Intel has done, however, is close the gap. The company’s previous standard bearer, the Core i9-9900K, trailed by an ocean-sized gap in multi-core performance. With the additional two cores in the Core i9-10900K, the gap narrows to the point that some might gladly trade it for the gaming and lighter-load performance the Core i9 offers.
Intel’s pricing restraint helps, too. At $488 list, the 10th-gen Core i9-10900K gives you two more cores for the same price as the 9th-gen Core i9-9900K. In Intel’s world, that’s a major price cut. In fact, Intel’s entire 10th-gen lineup is a major improvement, as the Core i7 and Core i5 finally get Hyper-Threading. Those two models were embarrassingly underpowered against their AMD counterparts. AMD’s CPUs are still a raging deal—but at least 10th-gen Intel isn’t just a wholesale write-off.
Rather than focus on the deal AMD offers, there’s another positive trend we want to point out. The height of Intel’s hubris could be defined by 2016’s 10-core Core i7-6950X. That chip had a price of $1,723, which meant you were paying $86 per thread. By 2017, AMD’s Ryzen had emerged as a threat, and the 10-core Core i9-7900X had cut the price per thread to $49. Intel kept that price steady for 2018’s 10-core Core i9-9900X. With the 10-core Core i9-10900X, Intel slashed prices to $30 per thread, or $590.
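The per-thread math above is simply each chip’s list price divided by its 20 threads. A quick sketch (the Core i9-7900X price is inferred from the text’s roughly $49-per-thread figure, so treat it as an assumption):

```python
# Intel 10-core (20-thread) price per thread over time, list prices in USD.
chips = {
    "Core i7-6950X (2016)": 1723,
    "Core i9-7900X (2017)": 999,   # assumed list price implied by ~$49/thread
    "Core i9-10900X (2019)": 590,
    "Core i9-10900K (2020)": 488,
}

for name, price in chips.items():
    print(f"{name}: ${price / 20:.2f} per thread")
```

The trend line only points one direction: down, and fast.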
With the $488 Core i9-10900K, the price of 10 cores from Intel hits an all-time low, which is good news for consumers no matter what side of the fence you sit on.
In the end, AMD still makes the better choice and the better deal for most. But for those who want higher clock speeds and more performance in lighter workloads, Intel’s 10th-gen Core i9-10900K and its siblings are at least worth considering. That, frankly, is a victory given the situation Intel has been in.