All the hand-wringing over SSD reliability aside, the future of computing still belongs to solid-state drives. To be frank, no one who cares about performance is going back to mechanical drives as a primary boot device unless hell freezes over.
The problem performance junkies face going forward is just how all this storage performance will fit into their PCs. With SSDs outpacing, year after year, the very interfaces they're supposed to plug into, the path forward is murky.
The drive that puts the exclamation mark on this problem is Intel’s glorious 750-series SSD, which we reviewed recently.
This drive marks a major turning point for the PC. When these drives are powered down and thrown into the scrap heap in a few years, they will still be remembered for ushering in the era of Non-Volatile Memory Express. NVMe is the replacement for the Advanced Host Controller Interface that most hard drives and SSDs run on today. AHCI was designed for hard drives and the inherent latency they bring with them. As you can imagine, running an SSD on protocols, commands and queue depths designed for a head moving around a spinning disk isn't great.
NVMe is designed for a far greater degree of parallelism. Among its most impressive specs is support for up to 65,535 command queues, each up to 65,536 commands deep, vs. a single queue of just 32 commands for AHCI. Today's SSD may not be able to push that many commands, but think of it as headroom for future memory technologies—and like always, the future will get here faster than we expect.
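To put those queue numbers in perspective, here's a back-of-the-envelope comparison in Python. The figures come from the two specs: AHCI offers one command queue 32 entries deep, while NVMe allows up to 65,535 I/O queues, each up to 65,536 commands deep.

```python
# Maximum outstanding commands under each host interface.
# AHCI: a single command queue, 32 entries deep.
ahci_queues, ahci_depth = 1, 32

# NVMe: up to 65,535 I/O queues, each up to 65,536 commands deep.
nvme_queues, nvme_depth = 65_535, 65_536

ahci_total = ahci_queues * ahci_depth   # 32
nvme_total = nvme_queues * nvme_depth   # over 4 billion

print(f"AHCI: {ahci_total:,} outstanding commands")
print(f"NVMe: {nvme_total:,} outstanding commands")
```

No drive today comes close to saturating that, which is exactly the point: the ceiling is set for memory technologies that don't exist yet.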
That brings us to how exactly we connect our storage down the road. Today it’s the familiar Serial ATA, or SATA port.
SATA Express was supposed to be its replacement. Unfortunately, it underestimated today’s storage needs. SATA Express as currently implemented in PCIe mode (it also supports plain SATA) has a maximum throughput of 10Gbps theoretical, using two lanes of PCIe Gen 2.0. That’s better than the 6Gbps of SATA, but not by much, and we’re well beyond that already.
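The arithmetic behind that 10Gbps figure is simple. PCIe Gen 2.0 signals at 5GT/s per lane, but its 8b/10b encoding spends two of every ten bits on overhead, so the usable data rate is lower still. A quick sketch:

```python
# SATA Express in PCIe mode: two lanes of PCIe Gen 2.0.
lanes = 2
gen2_rate_gtps = 5.0           # 5 GT/s per lane, per the PCIe 2.0 spec
encoding_efficiency = 8 / 10   # 8b/10b encoding: 8 data bits per 10 line bits

raw_gbps = lanes * gen2_rate_gtps                # 10.0 Gbps on the wire
effective_gbps = raw_gbps * encoding_efficiency  # 8.0 Gbps of actual data

print(f"Raw: {raw_gbps} Gbps, usable: {effective_gbps} Gbps")
```

So the real-world gap over SATA's 6Gbps is even smaller than the headline number suggests.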
I remember being shown the first Z97 motherboards with SATA Express last year (and questioning its limits already) and being told the natural move would be to add more lanes or increase it to PCIe Gen 3.0 support.
Instead, these days, the talk positions SATA Express for future hard drives only, so clearly the industry doesn’t think SATA Express is going to be where we plug our future SSDs.
The other option seems to be M.2, and why not? Originally called NGFF, or next-gen form factor, M.2 today can support up to four lanes of PCIe Gen 2.0 or Gen 3.0. Running at Gen 3.0 speeds, that's almost 4GBps, theoretically.
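That "almost 4GBps" works out as follows. PCIe Gen 3.0 runs at 8GT/s per lane with a much leaner 128b/130b encoding scheme:

```python
# M.2 at its best: four lanes of PCIe Gen 3.0.
lanes = 4
gen3_rate_gtps = 8.0             # 8 GT/s per lane, per the PCIe 3.0 spec
encoding_efficiency = 128 / 130  # 128b/130b encoding, ~1.5% overhead

# Effective throughput per lane in gigabits, then the total in gigabytes.
per_lane_gbps = gen3_rate_gtps * encoding_efficiency  # ~7.88 Gbps per lane
total_gbytes = lanes * per_lane_gbps / 8              # ~3.94 GB/s

print(f"Theoretical max: {total_gbytes:.2f} GB/s")
```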
The problem with using M.2 alone is its limited capacity: It was always intended as a laptop storage option with the chips mounted to it. Today, that limits it to 512GB drives. I’m sure M.2’s max will increase, but it won’t keep pace with standard drives which already easily reach 6TB.
One option being explored, though it doesn't look pretty, is cabling a drive to the existing M.2 slot using a Mini-SAS-style connector. Intel even makes a version of its 750 drive in SFF-8639 trim that's meant for servers and workstations. To connect it to today's motherboards, you'd mount the Intel 750 drive in a standard drive bay and run a cable from its Mini-SAS-style connector to your motherboard's M.2 slot. The setup is ungainly and the cable length short.
The only path for us today and in the near future is occupying a PCIe slot. It's the easiest way to get the most performance, and more bandwidth can be added simply by adding lanes. The Intel 750 drive, for example, uses four PCIe lanes in PCIe Gen 3.0 mode.
The problem there is what happens when you run more than one graphics card. With a drive as fast as the Intel SSD installed, a typical consumer gaming box will cut the bandwidth to the video card in half. Granted, most games and most video cards don't really use all of that bandwidth. But what happens when that gamer wants to run the Intel drive with two video cards? To paraphrase Dana Carvey paraphrasing George H.W. Bush: "It's not gonna happen."
And if you think M.2 solves the problem, it doesn't. Remember, it needs PCIe in Gen 3.0 mode, too, and there just isn't enough to go around on consumer systems.
That means drives like Intel's will really only match up with the highest-end gaming systems, using the company's X99 chipset and Core i7-5960X CPU. Unlike the consumer-focused Haswell chips, which offer but 16 PCIe lanes, the Core i7-5960X packs 40 lanes. Extreme gamers who like to run three or four video cards could still run into bottlenecks, but most other users won't feel the pinch quite yet.
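The lane math behind those platforms is easy to sketch. The budget below is a deliberately crude illustration, assuming each video card wants a full x16 link and the SSD takes x4; real boards instead drop paired GPUs to x8/x8 rather than refusing the configuration outright, and chipset lanes complicate things further.

```python
def fits(cpu_lanes, gpus, ssds, gpu_width=16, ssd_width=4):
    """Crude check: do all devices fit at full width on the CPU's lanes?

    Illustrative only -- real motherboards bifurcate slots (e.g. x8/x8)
    instead of failing, so this models the "no compromises" case.
    """
    needed = gpus * gpu_width + ssds * ssd_width
    return needed <= cpu_lanes

# Mainstream Haswell: 16 CPU lanes.
print(fits(16, gpus=1, ssds=1))  # 20 lanes needed, 16 available
# Haswell-E (Core i7-5960X): 40 CPU lanes.
print(fits(40, gpus=2, ssds=1))  # 36 lanes needed, 40 available
print(fits(40, gpus=3, ssds=1))  # 52 lanes needed, 40 available
```

Even the 40-lane part runs dry once a third full-width card enters the picture, which is exactly the bottleneck extreme multi-GPU users could hit.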
The “easy” answer to all of this is PCIe Gen 4.0. That spec, however, won’t be available until 2016, and integration into CPUs and motherboards will certainly come well after that date. This is a good “problem” to have, but it’s amazing to think it’s a problem at all.