Intel. Strikes. Back. The massive 18-core Core i9-7980X and 16-core Core i9-7960X are the chipmaker’s response to AMD’s Ryzen Threadripper, which has been eating Intel’s lunch for many months.
But can Goliath Intel rise from its stunning defeat to challenge David AMD to a rematch? Can it shake off rumors of sagging clock speeds and high temperatures? To find out, you’ll have to read on.
Because there’s so much to say about Core i9, we put the prices, features and FAQs into a separate story you’ll want to read for background. For this review, we’ll walk through some of the under-the-hood issues directly related to performance, and then we’ll dive into the benchmarks.

Core i9, under the hood
Core i9 is the first new “Core i” Intel has introduced in 10 years. The company guarded the secret so closely that it even intentionally mislabeled the first round of chips (including our review sample) as “Core i7” to throw off leakers. Our 16- and 18-core samples are labeled correctly, however.

CPU-Z thinks the Core i9 is a Core i7.
Like most major Intel launches, the Core i9 family represents a new platform, not just a new CPU, which means a new chipset, the X299, and a new socket, the LGA2066, all incompatible with previous CPUs.
The new platform also does something no previous one did by unifying two CPU families. Before today, if you wanted the company’s latest Kaby Lake core, you had to buy a motherboard using the LGA1151 socket. And if you wanted, say, a 6-core Broadwell-E CPU such as Intel’s Core i7-6800K, you had to buy an LGA2011-v3 motherboard.
With X299 and LGA2066, you can now pick your poison, because the platform encompasses everything from a 4-core Core i5 Kaby Lake CPU to an 18-core Core i9 Extreme Edition, which is a Skylake CPU. For clarity’s sake, the Kaby Lake CPUs, also called Kaby Lake-X, are the Core i5-7640X and the Core i7-7740X. The rest of the Core i7 and Core i9 CPUs are Skylake, collectively called Skylake-X.

The Core X series is made up of CPUs constructed with Skylake-X cores and Kaby Lake-X cores. The monster 18-core part is due in October.
This union has been greeted with some confusion and trepidation. It’s likely that X299 motherboards will be expensive. Some are rightly wondering who would buy a $350 motherboard to install a $250 CPU.
Intel’s motives for the Kaby Lake-X may actually be a nod to the sport of overclocking. Unlike LGA1151-socketed Kaby Lake chips, Kaby Lake-X chips have no integrated graphics capability. In fact, we’ve been told the chips physically have no IGP on the die at all. This allows the two Kaby Lake-X chips to potentially overclock far higher than the LGA1151 versions. At the recent Computex show in Taipei, Intel said a record was set for the highest overclock of a Kaby Lake chip, and it was set on X299.
In a perfect world, we’d all have 18-core CPUs, but the truth is, plenty of people drop cheap CPUs into nice motherboards. If that’s you, Kaby Lake-X is your chip.
PCIe lanes: Still being rationed
Still, having Kaby Lake-X and Skylake-X on the same socket is bound to create confusion. Exhibit A is the PCIe rationing. With the Core i9-7900X, for example, you get quad-channel memory support and 44 PCIe Gen 3 lanes directly from the CPU. If you were to drop a Core i7-7740X into the same build, the motherboard drops down to dual-channel memory support. Perhaps worse, the PCIe lanes drop to 16, because that’s the maximum the Kaby Lake core supports. That means some of the slots on a motherboard would fall back in performance or be completely disabled.
While Kaby Lake-X’s 16-lane limit is due to the CPU’s design, Intel dials back PCIe lanes on Skylake-X intentionally. Rather than the 44 lanes the 10-core version gets, the 6-core and 8-core versions of Skylake-X get just 28 lanes. From what we understand, there’s no technical reason for it, just “market segmentation,” which is a business school way of saying, “so we can charge you more.” Oy.
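To see what those lane counts mean in practice, here’s a minimal lane-budgeting sketch in Python. The device lane counts are typical examples we chose for illustration, not any specific motherboard’s layout, and real boards split slots in more sophisticated ways.

```python
# Simple PCIe lane budgeting: how far does each CPU's lane allowance go?
# Device lane counts are typical examples, not a specific board's layout.
devices = [("GPU #1", 16), ("GPU #2", 16), ("NVMe SSD #1", 4),
           ("NVMe SSD #2", 4), ("10GbE card", 4)]

# 16 = Kaby Lake-X, 28 = 6- and 8-core Skylake-X, 44 = Core i9-7900X and up
for budget in (16, 28, 44):
    used, fitted = 0, []
    for name, lanes in devices:
        if used + lanes <= budget:
            used += lanes
            fitted.append(name)
    print(f"{budget} lanes: {', '.join(fitted)} ({used} of {budget} used)")
```

With 16 lanes, a single graphics card consumes the entire budget; at 28, a second full-speed GPU is already off the table.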

You may have to purchase a special dongle key like this one to use X299’s VROC feature, which enables RAID across up to 20 NVMe drives.
Intel VROC
Even more controversial than PCIe rationing is Intel’s VROC, or Virtual RAID on CPU. It’s a nifty feature on Skylake-X that allows a user to configure up to 20 NVMe PCIe drives in a RAID array as a single bootable volume.
The problem? Intel apparently intends to charge more money for the feature. Details haven’t been released, but vendors at Computex told us they believed RAID 0 would be free, RAID 1 would cost $99, and RAID 5 and RAID 10 could cost $299. Once you’ve ponied up the cash, you get a hardware dongle that unlocks the feature.
It gets worse: VROC will work only with Intel SSDs and pricier Skylake-X parts. If you buy Kaby Lake-X, you’re shut out. VROC also applies only to PCIe RAID that runs directly through the CPU’s PCIe lanes. X299 still supports RAID 0, 1, 5, and 10 through the chipset, but chipset RAID can’t touch the performance you get from VROC.
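Why can’t chipset RAID keep up? Every chipset-attached device shares a single DMI 3.0 link back to the CPU, which is roughly equivalent to a PCIe 3.0 x4 connection, while VROC drives hang directly off the CPU’s PCIe lanes. Here’s a rough sketch of the math; the per-drive throughput figure is an illustrative assumption.

```python
# Why chipset RAID can't touch VROC: all chipset traffic funnels through
# one DMI 3.0 link (roughly PCIe 3.0 x4, about 3.9GB/s), while VROC
# drives sit on CPU PCIe lanes. Per-drive speed here is illustrative.
DMI_GBPS = 3.9    # approximate usable DMI 3.0 bandwidth
DRIVE_GBPS = 3.2  # rough sequential read of a fast NVMe drive

for drives in (1, 2, 4, 20):
    vroc = drives * DRIVE_GBPS                    # scales with drive count
    chipset = min(drives * DRIVE_GBPS, DMI_GBPS)  # hard-capped by DMI
    print(f"{drives:2d} drives: VROC ~{vroc:.1f}GB/s, "
          f"chipset RAID ~{chipset:.1f}GB/s")
```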
We’ll reserve final judgment until Intel confirms pricing. Considering the face-palms it caused at Computex, we’re interested to see how this shakes out, if VROC ever ships at all.

AVX-512 in Skylake-X promises far more performance—but only if the code supports it.
How Core i9 changes Skylake
Once you’ve gotten beyond the platform confusion and controversy, there’s a reward. The Skylake-X chip itself is indeed something to admire, because it’s built unlike any previous high-end Intel consumer chip.
Previous “enthusiast” or “extreme” CPUs have mostly been the same. That is, a 4-core Haswell Core i7-4770K wasn’t all that different from an 8-core Haswell-E Core i7-5960X, aside from the core count and quad-channel RAM support.
With Skylake-X, Intel breaks from tradition, with some major tinkering under the hood. The most noticeable change is an increase in Mid-Level Cache (MLC), or L2 cache: Intel has quadrupled it to 1MB per core, up from 256KB in last year’s Broadwell-E and the majority of Intel’s CPUs. The Last-Level Cache (L3) actually gets smaller, at 1.375MB per core versus the 2.5MB of the previous Broadwell-E chip, but Intel compensates with the larger MLC and also the use of a non-inclusive cache design. Compared to the inclusive design in Broadwell-E, which might keep data that’s not needed, a non-inclusive cache attempts to track what should be in the cache so it can use the available space more efficiently.
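To put those figures in perspective, here’s a quick back-of-the-envelope comparison in Python using the per-core numbers above. The takeaway: the total per-core cache budget barely shrinks, but far more of it now sits in the faster L2.

```python
# Per-core cache budget, using the figures above (values in KB).
# A back-of-the-envelope comparison, not an official Intel spec sheet.
broadwell_e = {"L2": 256, "L3": 2560}  # 256KB L2, 2.5MB L3 per core
skylake_x = {"L2": 1024, "L3": 1408}   # 1MB L2, 1.375MB L3 per core

for name, cache in (("Broadwell-E", broadwell_e), ("Skylake-X", skylake_x)):
    total = cache["L2"] + cache["L3"]
    print(f"{name}: {total}KB per core total, "
          f"{cache['L2'] / total:.0%} of it in the faster L2")
# Broadwell-E: 2816KB per core total, 9% of it in the faster L2
# Skylake-X: 2432KB per core total, 42% of it in the faster L2
```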

Skylake-X is very different from vanilla Skylake, and much of that has to do with the cache, AVX-512, and a new mesh interconnect.
Intel also ditches the ring bus architecture it has used for several years (including in Kaby Lake and Skylake) for a new mesh architecture. Think of a quad-core CPU as four homes connected by a bus line that stops at each home: That works fine until you add, say, 12 or 18 homes to the community. You could connect two bus lines, but that still isn’t as fast as simply driving from one home to the next, which is what the new mesh architecture does.
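A toy worst-case hop count makes the scaling argument concrete. This is a deliberately simplified model in Python (real interconnects route traffic in far more sophisticated ways), but it shows why a ring’s longest trip grows linearly with core count while a mesh’s grows roughly with its square root.

```python
import math

# Toy model: worst-case hops between two cores on a ring bus vs. a 2D mesh.
# Deliberately simplified; real interconnects are more complicated.
def ring_worst_hops(cores):
    return cores // 2               # worst case: halfway around the ring

def mesh_worst_hops(cores):
    side = math.ceil(math.sqrt(cores))
    return 2 * (side - 1)           # worst case: corner to corner

for cores in (4, 18, 64):           # 64 cores is hypothetical, to show the trend
    print(f"{cores:2d} cores: ring {ring_worst_hops(cores):2d} hops, "
          f"mesh {mesh_worst_hops(cores):2d} hops")
#  4 cores: ring  2 hops, mesh  2 hops
# 18 cores: ring  9 hops, mesh  8 hops
# 64 cores: ring 32 hops, mesh 14 hops
```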

The ring bus architecture of recent CPUs gets dumped for a mesh architecture that can scale to far more cores.
Intel’s use of a mesh design clearly puts it on a better footing to compete with Threadripper, as more and more cores are added to CPUs. AMD’s Ryzen series uses something it calls an Infinity Fabric, which is essentially a super-high-speed mesh network.
The last feature worth noting is the improved Turbo Boost Max 3.0. This is the feature wherein Intel identifies the “best” CPU core at the factory and gives it a little more boost speed. With Broadwell-E CPUs, only one core was chosen. With Skylake-X, two cores are identified as the “best” and allowed to run a couple of hundred megahertz faster.
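Software can take advantage of those favored cores by pinning hot threads to them. Here’s a Linux-only sketch; it assumes the intel_pstate driver exposes per-core maximum frequencies through sysfs, and the pin-to-favored-core idea is our illustration rather than an Intel-documented recipe (on Windows, Intel’s Turbo Boost Max driver steers threads for you).

```python
import os
from pathlib import Path

# Linux-only sketch: find the "favored" cores that Turbo Boost Max 3.0
# allows to clock higher by reading each core's advertised max frequency.
# Assumes the intel_pstate driver populates these sysfs files; the paths
# are illustrative, not an Intel-documented interface.
def max_freqs_khz():
    freqs = {}
    for cpu_dir in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
        f = cpu_dir / "cpufreq" / "cpuinfo_max_freq"
        if f.exists():
            freqs[int(cpu_dir.name[3:])] = int(f.read_text())
    return freqs

freqs = max_freqs_khz()
best = max(freqs.values())
favored = sorted(cpu for cpu, khz in freqs.items() if khz == best)
print(f"Favored core(s): {favored} at {best / 1e6:.2f} GHz")

# Pin this process to a favored core so a single-threaded workload
# actually lands on the fastest silicon.
os.sched_setaffinity(0, {favored[0]})
```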

Core Wars: Episode IV (can you spot the boo-boo in this picture?)
18-core Core i9 performance
For our performance testing, we ejected the 10-core Core i9-7900X from the socket of our Asus Prime X299-Deluxe and installed our 18-core Core i9-7980X. Other than that, the components haven’t changed from our original Core i9-7900X review: a GeForce GTX 1080 Founders Edition, 32GB of DDR4/2600 RAM, and a HyperX 240GB Savage SATA SSD. For our Adobe Premiere CC 2017 test, the source project and the target drive used a Plextor M8Pe PCIe SSD in all but the Core i5 and Ryzen 5 systems. The exception was due to a problem with the Ryzen 5’s motherboard, which failed to recognize the Plextor drive, so a Samsung 960 Pro NVMe SSD was swapped in. The AMD Ryzen Threadripper 1950X setup remains the same as when we first reviewed the chip.
Due to time constraints, some of the tests also feature scores from a Core i9-7960X—the 16-core version of the chip. That CPU was run in one of a pair of identical Falcon Northwest Talon systems we’re using for our upcoming Threadripper vs. Core i9 showdown. Although those systems feature completely different GPUs, the CPUs are bone stock, so the CPU-only tests are valid to compare.
Cinebench R15 performance
Our first test is Cinebench R15, a free 3D rendering test based on Maxon’s professional Cinema 4D engine. It’s almost entirely CPU-bound and scales very well as you increase the number of CPU cores and threads.
The top dog is not a surprise: Intel’s 18-core Core i9-7980X, with the 16-core Core i9-7960X getting the silver medal. AMD’s Threadripper 1950X, once undisputed among consumer CPUs, has to settle for the bronze.
But the placing of that Threadripper 1950X is nothing to be ashamed of. Yes, AMD fans, yes, we know: It costs significantly less. Let’s just acknowledge that now so you can read the rest of this review without constantly wanting to scream, “But it costs a ton less!” Just silently repeat that phrase after every test result you see.

Cinebench R15 gives the 18-core Core i9 the gold medal, the 16-core Core i9 a silver and Threadripper 1950X is consigned to a bronze.
Multi-threaded performance isn’t the be-all and end-all, though. The sad truth is the vast majority of applications and games just don’t use all of those cores, so we also use Cinebench to measure single-threaded performance. Surprise: The Core i9-7980X comes out on top, pulling ahead of even the higher-clocked Core i7-7700K. For the most part we really see three tiers of performance, with the Kaby Lake and Skylake-X chips on top, and the Broadwell-E and Zen-based chips below them.
To keep this all in perspective, we’re really not looking at a huge difference between a Skylake-X chip and a Broadwell-E or Zen chip. But the winner is clearly Core i9 and Skylake-X.

Single-threaded performance in Cinebench R15 is valuable for gauging how a CPU will perform in the vast majority of applications and games.
POV Ray performance
The Persistence of Vision Raytracer actually traces its lineage back to the days of the Commodore Amiga, and it continues to be supported by an active community of developers. Like Cinebench, it favors high core- and thread-count chips. The results are predictable, with the 18-core Core i9-7980X on top. The 16-core Ryzen Threadripper 1950X does pretty well, but two more cores pay real dividends.

POV Ray’s internal benchmark says more cores equals moar performance and 18 > 16.
As we still want to know how CPUs do under much lighter loads, we also run POV Ray using a single thread. The results again favor high-clock-speed quad-cores, but the Skylake-X chips hang close, and the Zen and Broadwell-E chips aren’t that far behind. The only dog here is AMD’s ancient Vishera-based FX CPU.

POV Ray 3.7 puts the highest-clocked and highest-IPC chips at the top of the stack.
Blender performance
Our next test is the open-source Blender modeler. It’s a popular app used to create effects for many indie movies. Blender performance results can vary greatly by the workload. For example, sample benchmarks provided for Intel’s quad-core Kaby Lake and AMD’s Ryzen don’t really scale with core count. For this test, we run the popular Mike Pan BMW benchmark. The winners again are Intel’s two latest Core i9 CPUs, with the Threadripper 1950X coming in slightly behind them.
It’s a good showing for all three but, again, Blender’s performance depends very much on the model and what you’re doing with it. We’ve also found Blender to be very sensitive to the operating system.

The open-source Blender renderer favors CPUs with more cores too. We use Mike Pan’s popular BMW render benchmark file.
Because these are the big-nerd chips, we decided to throw something heavier at them in the form of the Gooseberry benchmark file, a frame from the Blender Institute’s upcoming open-source movie, Cosmos Laundromat. While the BMW benchmark takes two or three minutes to run, Gooseberry pushes that to more than 20 minutes per frame.
The results in Gooseberry look great for the new Core i9s and definitely paint a worse picture for the 16-core Threadripper 1950X in our Falcon Northwest Talon PC. Even worse, a test on our reference machine was a bit slower, too.

Gooseberry puts Intel’s new Core i9 chips well ahead of the AMD Threadripper 1950X.
WinRAR performance
We know from our original Core i9-7900X and Threadripper 1950X reviews that WinRAR just doesn’t seem to like the mesh-like architectures of the Threadripper and Core i9 chips. No surprise that we’re seeing the same thing here, but it is surprising that Intel’s older Broadwell-E chips prevail. Threadripper just does very poorly here.

RARLab’s WinRAR doesn’t particularly like the new mesh architecture in Skylake-X, but it really seems to hate AMD’s Zen architecture.
7-Zip performance
We use version 9.20 of the free 7-Zip and run its built-in multi-threaded test. The clear winners, by a larger-than-expected margin, are the new Core i9s.

The free and popular 7-Zip puts the chips with the most cores at the front of the line.
Corona Renderer performance
If you look at Cinebench, Blender, and POV Ray, the performance spreads between the 16-core Threadripper and the new Core i9s are there, though small. In Corona Renderer, we’re seeing gaps that might wake you up: The 16-core Core i9-7960X enjoys a 25-percent lead over the equivalent 16-core Threadripper 1950X. It’s even worse with the 18-core Core i9-7980X.
Before you scream that the benchmark is cooked in favor of Intel’s microarchitecture, remember that this particular benchmark was introduced to most of us by AMD for the original Threadripper review. Frankly, it just isn’t pretty.

The Corona Renderer benchmark shows the 16-core Threadripper getting hammered by the 16- and 18-core Core i9 chips.
Handbrake performance
Not everyone does 3D modeling for a living, but an awful lot of people edit or convert videos, and that’s where having more cores generally benefits you. To look at the encoding performance of the new Core i9, we used the popular and free Handbrake encoder to convert a 30GB 1080p video using the built-in Android tablet preset.
One issue we’ve seen with this version of Handbrake and our workload is diminishing returns on the big-core CPUs. You can see that performance improves as we go from quad-cores to 10 cores, but beyond 10 cores, we don’t pick up the dividends you’d expect from 16 or 18 cores.
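Those diminishing returns are textbook Amdahl’s law: any serial slice of the job caps the speedup no matter how many cores you add. A quick sketch in Python; the 90-percent-parallel figure is our illustrative assumption, not a measured property of Handbrake or this workload.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the job and n is the core count. The 90% figure is an
# illustrative assumption, not a measurement of Handbrake itself.
def speedup(p, n):
    return 1 / ((1 - p) + p / n)

p = 0.90
for n in (4, 10, 16, 18):
    print(f"{n:2d} cores: {speedup(p, n):.1f}x")
#  4 cores: 3.1x
# 10 cores: 5.3x
# 16 cores: 6.4x
# 18 cores: 6.7x
```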
Still, both Core i9s are in front, with Threadripper turning in very respectable performance here.

Our workload for the free Handbrake encoder favors more cores too, but it doesn’t pay the same dividends a professional 3D renderer would.
Premiere Creative Cloud performance
The other half of the video-encoding story is obviously video editing. For this test we use Adobe Premiere Creative Cloud 2017 and export an actual project shot by our video department, so it’s as real-world as you can get. The footage was shot on a Sony Alpha camera at 4K resolution and exported using the Blu-ray preset at 1080p resolution. We also enable the maximum render quality option, which helps improve image quality when changing resolutions. The workload is rendered on the CPU, which some video snobs say still produces the highest-quality video.
Although this is very much a CPU-intensive task, we do try to take the storage subsystem out of the equation, so for all but the original Ryzen 5 and Core i5 systems, we used a Plextor PCIe NVMe SSD as both the source and target drive. As with Handbrake, we’re not seeing stellar scaling with core count, but the 18-core Core i9 rules them all.
Still, if you’re buying a big CPU as a video editor, you have to consider whether it’s worth it.

Video snobs say CPU-based rendering is superior, and if you go that route, you’ll want more cores.
One thing we also want to add: A lot of people say CPUs are irrelevant in the age of GPU-based encoding. To test that claim, we switched Adobe Premiere from CPU rendering to CUDA rendering on the GeForce GTX 1080 card. As you can see, you pick up a huge advantage by using the GPU to encode, but it’s clear that having more cores still pays dividends. The upshot: Don’t assume a dual-core CPU will match a 10-core CPU for video editing just because the GPU is doing the heavy lifting.

Even if you use your GPU for encoding, having a CPU with more cores will still lower your render times.
Rise of the Tomb Raider performance
Stop: If you’re buying a 16- or 18-core CPU primarily to play games, you’re doing it wrong. Your funds would be more wisely spent on a faster graphics card. But if you do play games and you also do 3D modeling, you want to know which CPU gives you the best performance. The answer, as we suspect you know, is Core i9.
We say this because we already know both the 10-core Core i7-6950X and the 10-core Core i9-7900X have an advantage in games. The newest Core i9s don’t change that pattern.
First up is Rise of the Tomb Raider, which has been patched to address some inefficiency on the Ryzen and Threadripper platforms. We run the game at 1920×1080 resolution and the Medium setting in DirectX 11 mode.
In front of the pack is the 18-core Core i9-7980X, but for the most part, it’s pretty close to the 10-core Core i9-7900X. Threadripper does decently well once kicked into its Game Mode, but the advantage goes to Core i9.

Intel’s Skylake-X continues to show better performance in most games but Threadripper 1950X is still very much in the ball game.
Tom Clancy’s Rainbow Six Siege performance
We actually ran quite a few games on the CPUs, but for the most part the 18-core Core i9-7980X either led the way or was near the front of the pack. We saw the same trend in Tom Clancy’s Rainbow Six Siege when run at 1920×1080 resolution and Medium settings. We use those settings to keep the video card from being the bottleneck in CPU performance testing.

Core i9 takes top honors in Rainbow Six Siege.
3DMark Time Spy 1.0 performance
Our last gaming benchmark is 3DMark’s Time Spy 1.0. The score reflects only the CPU portion of the test, because that’s all we care about here. Again, the Core i9-7980X is large and in charge.

3DMark’s Time Spy puts the 18-core on top—but it’s clear the engine doesn’t scale much with core count.
Power consumption and clock speeds
One question hanging over Core i9-7900X has been its power consumption, and how much more power it uses than AMD’s chips. That’s typically not an easy thing to test without identical hardware, but as we noted earlier, boutique PC builder Falcon Northwest sent us two nearly identical, heavily loaded Talon PCs. Both feature 128GB of DDR4/2400 RAM, Samsung 960 Pro drives, Titan Xp cards in SLI, and matching power supplies, coolers, and cases. The only differences between them were the 16-core CPUs and the motherboards.
This setup let us run the same CPU loads while measuring the power consumed at the socket. Because most workloads don’t actually use all cores, we also decided to look at power consumption while ramping from 1 to 32 threads. The results confirm what everyone knew: Core i9 uses more power.
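For the curious, here’s a minimal sketch of that thread-ramp methodology in Python: hold a fixed number of CPU-bound workers for an interval while reading wall power off a meter at the outlet. The busy-loop and timings are our illustration, not the exact harness behind the chart below.

```python
import multiprocessing as mp
import time

# Sketch of the thread-ramp methodology: hold a fixed number of CPU-bound
# workers for a set interval while recording wall power from a meter at
# the outlet. Busy-loop and timings are illustrative, not the exact harness.
def burn(stop_at):
    x = 0
    while time.time() < stop_at:
        x += 1  # pure integer churn keeps the core pegged

if __name__ == "__main__":
    HOLD_SECONDS = 30
    for workers in range(1, 33):  # ramp from 1 to 32 workers
        stop_at = time.time() + HOLD_SECONDS
        procs = [mp.Process(target=burn, args=(stop_at,))
                 for _ in range(workers)]
        for p in procs:
            p.start()
        # ...record the watt-meter reading here while the load is held...
        for p in procs:
            p.join()
```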

Using a pair of nearly identical 16-core systems, AMD’s Threadripper 1950X proves to be more power-efficient than Intel’s 16-core Core i9-7960X.
These power measurements aren’t exact, but they’re close enough to give us an idea. It’s interesting that the Threadripper 1950X seems to level off in power consumption at about 20 threads, while the Core i9 continues to climb.
Threadripper definitely has the advantage in power consumption, but that edge isn’t enough to change the calculus. If you really care about multi-threaded performance, an extra dollop of power consumption probably isn’t going to matter to you.
It’s very much like Threadripper’s gaming performance. Core i9 has a definite advantage, but to be honest, it’s probably not going to matter that much given that anyone buying this class of CPU buys for content-creation priorities first.
We’ll close this out with the scaling performance of the 18-core Core i9-7980X under varying workloads.
We did this for our Threadripper review, and we think it’s a great way to understand what you’re getting out of these chips. When it was just the 10-core Core i9-7900X versus the 16-core Threadripper 1950X, the Core i9 had the edge in light loads, but the AMD CPU led in heavier loads.
With these new Core i9s, that’s shifted. Not only does Intel have an edge in light loads—it now has the edge in heavy loads, too. If you look at our Cinebench R15 results below, you can see the 18-core gives no quarter to the AMD chip.

Using CineBench R15, we varied workloads from one thread to 36 to show just where the performance peaks.
The price: If you have to ask…
The asterisk hanging over Core i9 and the entire Core X series is the value proposition. Ever since our original Core i9-7900X and Threadripper 1950X reviews, we’ve been pretty sure Intel wouldn’t have any issue being the performance leader.
The problem is being the price leader. Trying to peg performance per dollar can be pretty slippery, because performance is relative. We do know that, generally, Threadripper is just a little off the pace of Core i9. So we decided to crunch the numbers on cost per thread for all the Core X and Threadripper chips. We also threw in the 10-core Core i7-6950X, because, well, $1,723.
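The arithmetic itself is simple, as this Python sketch shows. The $1,723 Core i7-6950X figure comes straight from this review; the other prices are approximate launch list prices, so treat the exact cents as illustrative.

```python
# Cost per thread. The $1,723 Core i7-6950X price is the one cited in
# this review; the rest are approximate launch list prices.
chips = {
    "Core i5-7640X":      (242, 4),   # 4 cores, no Hyper-Threading
    "Core i9-7900X":      (999, 20),
    "Core i9-7960X":      (1699, 32),
    "Core i9-7980X":      (1999, 36),
    "Threadripper 1950X": (999, 32),
    "Core i7-6950X":      (1723, 20),
}
for name, (price, threads) in sorted(chips.items(),
                                     key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:20s} ${price / threads:6.2f} per thread")
# Threadripper 1950X comes out cheapest per thread;
# the Core i7-6950X is the most expensive.
```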

Why isn’t Ben Franklin smiling? He probably paid the full $1,723 for a Core i7-6950X Broadwell-E, which has the highest cost per thread of the CPUs here.
Thread for thread, the worst value is that Broadwell-E chip. Unsurprisingly, Intel’s Core i5-7640X comes in second-worst at $61 per thread. The real shocker here is the best value: AMD’s 32-thread Threadripper 1950X.
Conclusion
There are really two ways to look at Core i9. The first is from a performance perspective, where there’s no question that Intel is in charge. You’d have to look very hard and very far to find any multi-threaded benchmark where the 16-core and 18-core Core i9s lose to AMD’s Threadripper. Once you move to lighter loads that favor Intel’s higher clock speeds and better IPC efficiency, it only gets worse for AMD.
So, for performance freaks who absolutely, positively, must have the fastest CPUs for both light duty and heavy duty, the Core i9-7960X and Core i9-7980X are the new speed demons.
The problem, of course, is the cost differential. That last chart above should give you an idea of the value AMD is offering. Yes, Core i9 may officially be faster in every way you can measure, but it can’t outrun its own price.
Maybe it depends on who’s paying. If, for example, your boss asks you to spec out a new build for your video editing workstation, you’ll say Intel. But if you’re building on your own dime and trying to make the dollars stretch? The natural answer is AMD.
But make no mistake, Core i9 is clearly the performance leader today.