Tested: Why the iPad Pro really isn't as fast as a laptop
One benchmark makes it look good. A lot of other benchmarks show a different story. Get all the details here.
Stop the presses. The PC is dead, and so is the Mac. Killed by the more powerful iPad Pro. At least that’s what some tech writers proclaimed after Apple’s latest iPad Pro wonder hit the streets.
But is the iPad Pro really a PC killer? After days of poking and prodding, I can safely say Hell no. Far from it.
Before this turns into a flame-fest, let me say this: The iPad Pro is shockingly fast, as Macworld’s review drives home, and Apple has again worked its mastery of hardware, software and virtually unlimited resources to build an amazingly fast chip for the iPad Pro. But let’s not get ahead of ourselves. My battery of tests shows that in some things, it ain’t that fast.
How we got here
What started the “Intel and its CPUs are doomed” talk were benchmarks showing the A9X SoC in the iPad Pro overpowering Intel’s older Haswell chips and even its newest Skylake CPUs.
Many of those conclusions were based on performance results from the popular multi-platform Geekbench 3 benchmark, as well as browser-based benchmarks such as Mozilla’s Kraken and Google’s Octane 2.0. This limited data set had the faithful buzzing that the end was nigh for x86.
If you like to test hardware, you know the weakness of the last two tests: A browser test isn’t a test of the CPU/SoC, it’s a test of the chip plus the browser and OS optimizations underneath it. On the iPad Pro the browsers are pretty much the same, because Apple requires them all to use its highly optimized rendering engine. On the PC, your browser pick matters. Browser-based benchmarks are hardly the best tools on the PC either.
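The point that a browser benchmark measures the software stack as much as the silicon is easy to demonstrate outside a browser: the same numeric kernel, timed under two different runtimes, posts very different "scores" on identical hardware. Here's a minimal sketch in Python (an analogy I've written for illustration, not Kraken's actual code):

```python
import time

def kraken_style_kernel(n=200_000):
    # A Kraken-style numeric kernel: the "benchmark" is just the
    # wall-clock time of a tight arithmetic loop.
    acc = 0.0
    for i in range(1, n):
        acc += (i % 7) * 0.5 - (i % 3)
    return acc

start = time.perf_counter()
result = kraken_style_kernel()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"kernel result={result:.1f}, time={elapsed_ms:.1f} ms")
```

Run that identical kernel under CPython and then under PyPy and the times diverge wildly on the same chip, just as the same JavaScript diverges across browser engines. The score tells you about the chip *plus* everything running on top of it.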
Geekbench 3 is different. Its creators have stated that their goal is a cross-platform test that isolates the CPU as much as possible, using algorithms they believe are valid measures of chip performance. If you peep at the chart below, you can see what got people in a tizzy.
Yes: Whoa. In single-core performance (a good metric for judging across platforms, since core counts vary from chip to chip), the iPad Pro is every bit as fast in Geekbench 3 as the CPU in the newest mid-range Core i5 Surface Pro 4. It’s uncomfortably close to the Core i7-6600U in the far pricier top-end Surface Book, too.
For the record: Almost all of the tests in this section were run within the last few days, with the latest OSes and updates applied. The only OS that was out of date was my corporate-issue Windows 8.1 box with its 3.4GHz Core i7-2600, which I threw in for kicks.
Although I think it matters less, I’ll hit you with the results from Geekbench 3 for multi-core too. The iPad in multi-core performance is on a par with the older Haswell-based Surface Pro 3, but it loses to the newer Skylake-based Surface Pro 4. Why? I’m not sure, but the Intel chips’ Hyper-Threading resource management could be a factor. That’s why I think the single-core performance is more meaningful.
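One way to see why single-core numbers travel better across platforms: a multi-core score folds in both core count and scaling efficiency, and those differ chip by chip. Here's a toy model (my own illustration, not Geekbench's actual scoring formula, with made-up numbers):

```python
def multicore_score(single_core, cores, scaling=0.85):
    # Toy model, NOT Geekbench's formula: multi-core throughput is the
    # single-core score times the core count, discounted for imperfect
    # scaling (shared caches, memory bandwidth, Hyper-Threading contention).
    return single_core * cores * scaling

# Hypothetical scores: two chips with near-identical single-core speed
# can land far apart in multi-core, depending on cores and scaling.
print(multicore_score(3200, 2, scaling=0.95))  # two big cores, scales well
print(multicore_score(3300, 4, scaling=0.70))  # more cores, more contention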
So how do all the devices stack up in other benchmarks? First up is BAPCo’s TabletMark V3. While Geekbench 3 attempts to create what its makers think is an accurate measure of CPU performance using seconds-long “real world” algorithms, BAPCo’s approach is actually more “real world.” BAPCo’s consortium of mostly hardware makers set out to create workloads across all the different platforms that would simulate what a person does, such as actually editing a photo with HDR, browsing the web, or sending email.
Because there’s no universal app that runs on Windows, Windows RT, Android and iOS, BAPCo custom-created apps that do the same thing with the same interface on every platform. Indeed, when you watch the benchmark run, it looks like someone using the same application to do the same task on each device.
A white paper on the benchmark discloses the approach as well as the libraries, compilers and APIs used in the test. The test runs in real time, which can take a few hours on some devices. Here’s how the iPad Pro fares.
In TabletMark V3, the iPad Pro doesn’t look quite as threatening, does it? Even the Intel Haswell Core i5-4300U in the two-year-old Surface Pro 3 easily outpaces the A9X here. The iPad Pro isn’t even far ahead of the rest of the tablet pack. The worst performer for x86 is the budget Surface 3 with its Atom X7-Z8700. For shame, Atom, for shame.
The benchmark has two performance modules, which give you an idea of how fast the device would be in web browsing and email. The result for iPad Pro is tepid, with performance just beating the Nexus 9 and its Tegra K1.
TabletMark V3 also measures photo and video performance, which gives the iPad Pro a healthy lead over the ARM competitors and the Atom X7-Z8700. But the A9X doesn’t come close to the Core i5 or Core i7 devices above it in the chart, or even the Core M.
The puzzler is the performance of the Surface Pro 3 and the Dell Venue 11 Pro, which use older chips. I expected this to be in the bag for the Skylake parts, but the Broadwell-based Core M and the even older Haswell Core i5 are hanging right there.
Every other test I’ve run shows Skylake with a healthy performance bump over Broadwell and Haswell. I attribute that to higher clock speeds and other micro-architecture improvements. For what it’s worth, I don’t generally bother with TabletMark V3 when I test anything with any actual performance potential. I haven’t found it to scale with faster CPUs, and other tests are far more intensive.
3DMark and graphics performance
For graphics performance I turned to 3DMark’s Ice Storm Unlimited. It’s a popular test that happens to run on iOS, Windows and Android. It renders the test without regard to the screen resolution and is a pretty good measurement of lower-grade graphics performance. By lower grade I mean this isn’t Assassin’s Creed Syndicate, which will reduce even a $650 GeForce GTX 980 Ti to 45fps.
All of the devices here used integrated graphics for this test. The Surface Book was in Clipboard mode, with its GPU disconnected and two feet away. The overall score factors in game physics and the graphics performance.
Apple put a lot of resources into giving the A9X a bunch of graphics performance, and it shows. It slightly outpaces the Nvidia Tegra K1 in the Nexus 9 and the Shield Tablet in 3DMark. But if you keep looking up that chart, you’ll see the A9X is still a good clip behind the Dell Venue 11 Pro and the Surface Pro 3. Please note that the Venue 11 Pro’s Core M is an older power-sipping chip that uses 4.5 watts, not a 15-watt chip like the one in the Surface Pro 3.
The improved graphics core in the Skylake Core m3 is even more impressive. I’m currently testing the Asus UX305 with the Skylake-based Core m3, and it’s posted an overall 3DMark score of 51,181, which would make it third in the chart above.
I had access to an Nvidia Shield TV, which can run 3DMark in Android TV, so I threw the score from the Tegra X1 into the mix for reference. The idea is to show where Google’s Pixel C could fall, as it should be the first mobile use of a Tegra X1. Before you think the Tegra X1 will whip the A9X, remember that the Shield TV is thicker than any tablet and runs on unlimited AC power, not a battery. There’s no need to worry about chewing through the battery in the Shield TV, unlike with the upcoming Pixel C, so the latter’s graphics performance could fall short. We’ll see.
3DMark breaks out performance for two areas: Graphics and physics. Here are the scores for the same devices in graphics. The Asus UX305 with its Core m3 isn’t on the chart, but it produces a score of 65,904, so third again.
One thing I will say after all of this is my opinion on Atom X7 is changing for the worse. It would be nice if Intel’s budget chip didn’t drag its butt across the finish line dead last in just about every test.
3DMark also runs a physics test, which measures how a platform would run a theoretical game engine. In short, it’s supposed to measure how fast a device’s CPU would be, not its GPU. The result here actually puts the iPad Pro and the A9X at a pretty big disadvantage against all of the x86 chips—yes, even the lowly Atom. Nvidia’s Shield Tablet and the Shield TV also run past Apple’s A9X. The rest of the legit x86 chips are sipping lemonade and reading the paper while the iPad Pro crosses the finish line.
The search for answers
The iPad Pro’s poor showing may lie in how the A9X works and the way Futuremark builds its benchmark. Futuremark has been through this before, when the iPhone 5s proved no faster than the iPhone 5 in the physics test despite Apple’s claims of double the performance. Futuremark’s investigation traced the issue to how the A7 chip in the iPhone 5s (and iPad Air) handles non-sequential memory structures. Futuremark said it was a conscious design change Apple made between the A6 and A7 that hurt performance here, and 3DMark was showing the result.
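"Non-sequential memory structures" is worth unpacking: walking memory front to back lets a chip's prefetchers hide latency, while chasing randomly scattered pointers defeats them. Here's a sketch of the two access patterns in Python (my own illustration; the cache penalty is far more dramatic in native code like 3DMark's than in an interpreter):

```python
import random
import time

def traverse_sequential(n):
    # Walk a list front to back: the access pattern prefetchers love.
    data = list(range(n))
    total = 0
    for x in data:
        total += x
    return total

def traverse_pointer_chase(n):
    # Build a randomly linked chain: each slot stores the index of the
    # next, so every load depends on the previous one and lands at an
    # unpredictable address -- a non-sequential memory structure.
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b
    i, steps = order[0], 0
    while steps < n:
        i = nxt[i]
        steps += 1
    return steps  # number of hops, confirming a full traversal

n = 200_000
t0 = time.perf_counter()
traverse_sequential(n)
t1 = time.perf_counter()
traverse_pointer_chase(n)
t2 = time.perf_counter()
print(f"sequential: {t1 - t0:.3f}s  pointer-chase: {t2 - t1:.3f}s")
```

A chip that pays more for the second pattern will look slow in any workload built around linked structures, which is exactly what Futuremark described.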
But rather than make a change just to help show off Apple’s performance, Futuremark chose to stick to its benchmarking method, declaring:
“3DMark is designed to benchmark real world gaming performance. The Physics test uses an open source physics library that is used in Grand Theft Auto V, Trials HD and many other best-selling games for PC, console and mobile. Higher scores in 3DMark Ice Storm Physics test directly translate into improved performance in games that use the Bullet Physics Library and are a good indicator of improved performance in other games.”
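Futuremark's statement gives a sense of what the physics test actually exercises: stepping a simulated game world on the CPU, over and over, and scoring how fast it goes. A toy stand-in (my own sketch, not Bullet or Futuremark code) looks something like this:

```python
import time

def physics_benchmark(bodies=200, steps=100, dt=1 / 60):
    # Minimal stand-in for a physics workload, NOT Bullet itself:
    # integrate gravity and resolve a floor collision for every body,
    # every fixed timestep, entirely on the CPU.
    ys = [100.0] * bodies  # heights
    vs = [0.0] * bodies    # vertical velocities
    for _ in range(steps):
        for i in range(bodies):
            vs[i] += -9.81 * dt      # gravity
            ys[i] += vs[i] * dt      # integrate position
            if ys[i] < 0.0:          # bounce off the floor, losing energy
                ys[i] = 0.0
                vs[i] = -vs[i] * 0.5
    return ys, vs

start = time.perf_counter()
physics_benchmark()
print(f"steps/sec score: {100 / (time.perf_counter() - start):.0f}")
```

The real Bullet library does vastly more (broad-phase collision, constraint solving), but the shape is the same: a branchy, data-heavy CPU loop, which is why a chip that stumbles on such code stumbles here.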