There’s no two ways about it: The PC is slowing down with age.
That may be a bit harsh—computers are faster and smaller than ever before—but processor performance simply isn’t advancing at its past breakneck pace. At one time, 50 to 60 percent leaps in year-to-year performance were commonplace. Now, 10 to 15 percent improvements are the norm.
Luckily, five-plus-year-old computers can still tackle everyday tasks just fine, so the performance slowdown isn’t a huge issue. Plus, it’s nice not having to replace your PC every other year during a down economy. But technology doesn’t advance by sticking to the status quo. The future needs speed!
Fortunately, the biggest names in PC processors aren’t satisfied with the status quo. Chip makers are working furiously to solve the problems posed by a slowing Moore’s Law and the rise of the power wall, in a bid to keep the performance pedal to the metal.
So what kinds of radical tricks do they have up their sleeves? Several different kinds, actually—and each holds great potential for the future. Let’s take a look behind the curtain.
Intel: Building on the shoulders of giants
Can we chalk up today’s paltry performance gains to a breakdown in Moore’s Law? Not quite. Moore’s legendary line might be frequently misquoted to talk about CPU performance, but the letter of the Law revolves around the number of transistors on a circuit doubling every two years.
While other chip makers have struggled to shrink transistors and squeeze more of them onto a chip, Intel—the company Moore himself cofounded—has kept pace with Moore’s Law since its utterance, an achievement that can be laid at the feet of Intel’s small army of engineers. Not just any engineers, though. Clever engineers.
As transistors become more tightly packed, heat and power-efficiency concerns become major problems. Now that transistors are reaching almost infinitesimally small sizes—each of the billion-plus transistors in Intel’s Ivy Bridge chips measures 22 nanometers (nm), or roughly 0.000000866 inch—conquering those woes takes creative thinking.
“There’s no doubt it’s getting hard,” Intel technical manufacturing manager Chuck Mulloy said in a phone interview. “Really, really hard. I mean, we’re at the atomic level.”
To keep progress a-rollin’, Intel has made some significant changes to the base design of transistors over the past decade. In 2002, the company announced that it was switching to so-called “strained silicon,” which increased chip performance by 10 to 20 percent by slightly deforming the structure of silicon crystals.
Mo’ power means mo’ problems, though. Specifically, as transistors continue to shrink, they suffer from increased electron “leakage,” which makes them far less efficient. Two recent tweaks combat that leakage in novel ways.
Without getting too geeky, the company started by swapping out the transistors’ standard silicon dioxide insulators in favor of more efficient “high-k metal-gate” insulators during its shift to the 45nm manufacturing process. It sounds simple, but it was actually a big deal. That was followed by an even more monumental change, with the introduction of “tri-gate” or “3D” transistor technology in Intel’s current Ivy Bridge chips.
Traditional “planar” transistors conduct current through a flat channel with a single gate sitting on top. Tri-gate transistors shattered that two-dimensional thinking by raising the channel into a thin, three-dimensional fin and wrapping the gate around three of its sides. The design improves efficiency by reducing leakage while lowering power needs. Again, it sounds simple, but manufacturing three-dimensional transistors requires immense technical precision. At the moment, Intel is the only chip maker shipping processors with 3D transistors.
So what’s next for Intel? The company isn’t telling. In fact, Mulloy says that any technology the company might use—like, say, the next-gen extreme ultraviolet lithography fabrication process—goes into a PR “black hole” years before Intel introduces it in its chips. But, he stressed, the past improvements discussed above don’t just stop when they’re introduced to the public.
“People tend to think ‘Intel used this, now they’re on to the next thing,'” Mulloy said. “Strained silicon did not go away when we added the capabilities of high-k metal gate. High-k metal gate didn’t go away when we went to tri-gate transistors—we’re still building and improving on that. We’re at the fourth generation of strained silicon, the third generation of high-k metal gate, and our upcoming 14nm chips will be the second generation of tri-gate.”
The best chip technology out there just keeps getting better, in other words.
Oh, and for what it’s worth, Intel thinks Moore’s Law will continue unabated for at least two more transistor-shrink generations.
AMD: Parallel computing all the way
Intel isn’t the only chip maker in town, though. Rather than betting purely on improvements to transistor technology, rival AMD thinks the future of performance hinges on cutting CPUs some slack by shifting some of the workload to other processors that might be better suited for particular tasks. Graphics processors, for example, smoke through tasks that require a multitude of simultaneous calculations, such as password cracking, Bitcoin mining, and many scientific uses.
Ever heard of parallel computing? That’s what we’re talking about.
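The core idea can be sketched in a few lines of Python. This is a simplified illustration—not AMD’s technology—using CPU processes rather than a GPU, but the principle is identical to the password-cracking example above: when every calculation is independent, the work splits cleanly across as many compute units as you have.

```python
from concurrent.futures import ProcessPoolExecutor
import hashlib

def hash_candidate(word: str) -> str:
    # Each candidate's hash is independent of every other candidate's,
    # so thousands of them can be computed at the same time.
    return hashlib.sha256(word.encode()).hexdigest()

candidates = [f"password{i}" for i in range(8)]

if __name__ == "__main__":
    # Serial version: one calculation after another.
    serial = [hash_candidate(w) for w in candidates]

    # Parallel version: same results, but the work is spread
    # across every available core.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(hash_candidate, candidates))

    assert serial == parallel
```

A GPU takes this idea to its extreme, running thousands of such independent calculations simultaneously instead of a handful.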
“Going into smaller nodes on the transistor side increases [CPU] performance by 6 to 8 to maybe 10 percent, year to year,” says Sasa Marinkovic, a senior technology marketing manager at AMD. “But adding a GPU with GPU compute capabilities gives much larger gains. For example, for Internet Explorer 8 to IE9 the performance increase was 400 percent—four times the performance of the previous generation, and it’s all thanks to [IE9’s] GPU acceleration.”
“We see that type of performance leap playing within today’s power envelope, or you can greatly lower the power envelope and see the same performance [you have today],” Marinkovic says.
AMD has been inching toward a heterogeneous system architecture—as the method of distributing the workload amongst several processors on a single chip is called—in its popular accelerated processing units, or APUs, including the one powering the upcoming PlayStation 4 gaming console. APUs contain traditional CPU cores and a large Radeon graphics core on the same die. The CPU and GPU in AMD’s next-gen Kaveri APUs will share the same pool of memory, blurring the lines even further and offering even faster performance.
AMD isn’t the only chip maker backing the idea of parallel computing. The company was a founding member of the HSA Foundation, a consortium of top chip makers—albeit sans Intel and Nvidia—that are working together to create standards that should hopefully make programming for parallel computing easier in the future.
It’s a good thing that industry-leading companies provide the backbone of the HSA Foundation’s vision, because in order for the grand heterogeneous future of parallel computing to come to fruition, programs and applications need to be specifically written to take advantage of the hardware designs.
“Software is the key,” Marinkovic admits. “When you look at APUs with [full HSA compatibility] and without full HSA, the software will have to change. But it will be a change for the better…Where we want to get to is code-once, and use everywhere. Once you have the HSA architecture across all these different HSA Foundation companies, hopefully you’ll be able to write a program for a PC and run it on your smartphone or tablet with some small tweaks or compilation.”
You can already find application programming interfaces (APIs) that enable parallel GPU computing, such as Nvidia’s GeForce-centric CUDA platform, the DirectCompute API baked into DirectX 11 on Windows systems, and OpenCL, an open standard managed by the Khronos Group.
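Whichever API you pick, the programming model is the same: you write a tiny “kernel” that describes what happens to a single element, and the GPU runs thousands of copies of it at once, one per element. The sketch below shows what an OpenCL kernel looks like, alongside a pure-Python reference that computes the same result one element at a time; the kernel source is illustrative and isn’t being compiled or run here.

```python
# An OpenCL kernel is written in a C dialect. Each "work-item" calls
# get_global_id(0) to learn which array index it owns, then processes
# just that one element -- the GPU launches one work-item per element.
KERNEL_SRC = """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

def add_reference(a, b):
    # Pure-Python reference: the same per-element work the kernel
    # describes, but executed serially on the CPU.
    return [x + y for x, y in zip(a, b)]

print(add_reference([1.0, 2.0], [3.0, 4.0]))  # [4.0, 6.0]
```

On real hardware, the host program hands the kernel source to the driver, which compiles it for the GPU and runs it across the whole array in one shot.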
Support for hardware acceleration is picking up among software developers, though most of the programs handle intensive graphics in some way. Internet Explorer and Flash are on the bandwagon, for instance. Just last week, Adobe announced it was adding OpenCL support for the Windows version of Premiere Pro. According to representatives, users with AMD discrete graphics cards or APUs will be able to tap into that GPU acceleration to edit HD and 4K videos in real time, or export videos up to 4.3 times faster than the base nonaccelerated software.
“I don’t think there’s any ifs or buts about this,” Marinkovic says. “Heterogeneous architectures are the way of the future.”
OPEL: So long, silicon; hello, gallium arsenide!
But is that future based on silicon technology, as today’s computing is?
Definitely, for the short term. Definitely not, in the long term. Sometime in the future—experts don’t know exactly when—silicon will reach its physical limits and simply can’t be pushed any further. Chip makers will have to switch to another material.
That day is a long way off, but researchers are already exploring alternatives. Graphene processors receive a lot of hype as a potential silicon successor, but OPEL Technologies thinks the future lies in gallium arsenide.
OPEL has been fine-tuning the gallium arsenide technology at the heart of its POET (Planar Opto Electronic Technology) platform for more than 20 years, and the company has worked with BAE and the U.S. Department of Defense (among others) to validate it. While past processor forays into gallium arsenide have ended in mild disappointment, OPEL representatives say their proprietary technology is ready for the big time.
OPEL only recently exited the R&D stage and hasn’t tried to make itty-bitty transistors at Ivy Bridge’s 22nm size, but the company claims that at 800nm, gallium arsenide processors are faster than today’s silicon and use roughly half as much voltage.
“If you wanted to match the speed of today’s silicon processors, at roughly a 3GHz clock rate, you wouldn’t have to go all the way down to 20 or 30 nanometers,” says OPEL chief scientist Dr. Geoffrey Taylor. “Heck, you could probably hit that at 200nm.” And that’s using planar technology, not 3D transistors.
One of the biggest problems any silicon alternative faces is that silicon is the most cutting-edge technology in the world, with billions invested in manufacturing silicon processors to maximum efficiency. It’s going to be hard to convince Intel, AMD, ARM, and the HSA Foundation to drop all that for a new material. OPEL says its technology has a large overlap with current silicon fabrication methods.
“It’s scalable, and it’s bolt-on to CMOS,” says executive director Peter Copetti. “That’s very important. In our discussions with different foundries and semiconductor companies, the first thing they ask is ‘Do I have to retool my facilities?’ The investment here is minimal because our system is complementary to what’s out there right now.” OPEL also says its wafers are reusable.
The International Technology Roadmap for Semiconductors has identified gallium arsenide as a potential silicon replacement sometime between 2018 and 2026. There is still a ton of testing and transitioning to be done before gallium arsenide captures any of the mainstream PC processor market, but if even a fraction of OPEL’s claims hold true, its technology could very well power the processors of the future.
Well, at least until we crack molecular transistors or quantum computing. But that’s a whole ‘nother article…
Striding toward a face-melting tomorrow
So, after all that—whew!—you have a better idea of where the future of PC performance is headed. The initiatives from Intel, AMD, and OPEL each tackle big problems in decidedly different ways, but that’s a good thing. You don’t want all of your eggs in a single basket, after all.
And best of all, if all those disparate pieces of the PC performance puzzle prove successful, they could theoretically merge in Voltron-like fashion to create an uber-powerful, GPU-assisted, tri-gate gallium arsenide processor that could blow the pants off even the beefiest of today’s Core i7 processors.
Today’s performance curve may be flattening out, but the future has never looked so beastly.