The developer of the most widely used test for ranking the performance of supercomputers has said his metric is out of date and proposed a new test that will be introduced starting in November.
Jack Dongarra, distinguished professor of computer science at the University of Tennessee, said the Linpack test he developed in the 1970s, which has been the basis for the Top500 list of the world’s fastest computers for the past 20 years, is no longer the most useful benchmark for how well a system can perform.
The new metric, he said, could change the way vendors design their supercomputers and will provide customers with a better measure of the performance they can expect for the types of real-world applications they’ll be running.
The Top500 list is published twice a year, in June and November, and is closely watched as vendors and nations seek bragging rights for who has the fastest system. The current leader is the Tianhe-2, developed by China’s National University of Defense Technology.
Linpack has been used to rank the systems since the first Top500 list was published in 1993, but it’s no longer an indicator of real application performance, Dongarra said.
“Linpack measures the speed and efficiency of linear equation calculations,” according to a statement Wednesday announcing the new benchmark, called the High Performance Conjugate Gradient (HPCG). “Over time, applications requiring more complex computations have become more common. These calculations require high bandwidth and low latency, and access data using irregular patterns. Linpack is unable to measure these more complex calculations.”
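To see why such workloads behave differently, consider a minimal sketch of a conjugate gradient solver, the kind of iterative method HPCG is named for (this is a generic illustration, not the HPCG reference code). Its dominant cost per iteration is a sparse matrix-vector product, which streams large amounts of data through irregular index patterns rather than performing the dense, cache-friendly arithmetic that Linpack rewards.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse import identity

# Illustrative only: a generic conjugate gradient loop, not the HPCG reference code.
# The dominant cost per iteration is the sparse matrix-vector product A @ p, which
# accesses memory through irregular index patterns and moves far more bytes per
# floating-point operation than a dense, Linpack-style solve does.
def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x           # initial residual
    p = r.copy()            # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p          # sparse matrix-vector product: memory-bandwidth bound
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small symmetric positive definite test problem (random sparse matrix made SPD).
n = 2000
M = sparse_random(n, n, density=0.002, format="csr", random_state=0)
A = M @ M.T + identity(n)
b = np.ones(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```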
HPCG is needed, Dongarra said in a telephone interview, in part because computer vendors optimize their systems to rank highly on the Top500 list. If that list is based on an out-of-date test, it encourages vendors to architect their systems in a way that’s not optimal for today’s applications.
“We don’t want to build a machine that does well on this ‘fake’ problem. We want to build a machine that does well for a larger set of applications,” said Dongarra, who developed the new test with a colleague, Michael Heroux, of Sandia National Laboratories in Albuquerque.
Because of the way the new test is being introduced, however, it could spark disagreements over who really has the world’s fastest supercomputer. That’s because HPCG will be phased in gradually, and it could be years before it becomes the primary method for ranking the Top500.
“One of the nice things about Linpack is that there’s one number, so it’s very clear what we mean by the fastest computer. This will in fact generate two numbers,” Dongarra said.
He plans to maintain the Linpack test alongside HPCG in part for the valuable trending information that Linpack provides, he said. But it will also continue to be used because it could take years before a significant number of supercomputers are tested against the new benchmark.
“I expect in November we’ll just have a few entries based on this new benchmark. Populating the list with 500 entries is going to take some time, so I’d guess over the next five years we’d have a chance of seeing that list fully populated,” he said.
Starting in November, “we’re going to have a list of the Top500, and then we’re going to have a second column, and that second column will be the new benchmark,” Dongarra said.
“It may ultimately lead to a list that is based on this new benchmark, but certainly not right away,” he said.
The dueling benchmarks could allow different supercomputing centers, which covet positions on the Top500, to each claim leadership based on whichever test favors them. That could make it hard to say definitively who has the fastest supercomputer, though the Top500 appears set to treat Linpack as the primary ranking metric, at least for now.
The new test could lead to some “big changes” in which systems show the greatest performance potential, Dongarra said. The HPCG benchmark stresses architectural features that systems tuned to do well on Linpack may not handle efficiently, he said.
“I think individuals will have to then evaluate what number makes sense for their particular mix of problems. And over time I would hope that the new [benchmark] would carry more weight.”
HPCG was developed partly at the behest of the U.S. Department of Energy, Dongarra said. “They’re looking towards exascale now, and the concern is that if you build an exascale computer that will do this Linpack test well, it may not do well at other problems. So that’s one of the issues here.”
The University of Tennessee conducts joint projects with the DOE and Dongarra said he’s familiar with their application requirements. But he said the new test will be a good indicator of how computers will run other types of applications as well, such as those used for oil and gas exploration or weather modeling.
“One of the problems with the Linpack is that it stresses only one component, that being the floating point potential of the computer,” he said. It doesn’t stress areas like system latency and memory hierarchy, and the new test is designed to expose weaknesses in those areas.
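As a rough back-of-the-envelope illustration (using assumed figures, not measured benchmark results), a dense Linpack-style factorization performs on the order of n floating-point operations for every matrix value it reads, while the sparse matrix-vector products at the heart of conjugate gradient methods perform only a couple of operations per stored value, leaving performance limited by memory rather than by floating-point units:

```python
# Rough back-of-envelope comparison (assumed figures, not measured benchmark data):
# arithmetic intensity = floating-point operations per byte moved from memory.
n = 10_000            # matrix dimension for the dense (Linpack-style) case
nnz_per_row = 27      # assumed nonzeros per row, typical of a 3-D stencil problem

# Dense LU factorization: ~(2/3)*n^3 flops over ~8*n^2 bytes of matrix data.
dense_intensity = (2 / 3) * n**3 / (8 * n**2)

# Sparse matrix-vector product: ~2 flops per nonzero, ~12 bytes per nonzero
# (an 8-byte value plus a 4-byte column index), ignoring vector traffic.
sparse_intensity = (2 * nnz_per_row) / (12 * nnz_per_row)

print(f"dense LU:      ~{dense_intensity:.0f} flops per byte")   # compute-bound
print(f"sparse matvec: ~{sparse_intensity:.2f} flops per byte")  # memory-bound
```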
Dongarra plans to distribute the software for the new test to computer vendors in the next few months, giving them a chance to begin optimizing their systems and to propose changes to HPCG before it’s introduced formally at the SC13 supercomputing conference in Denver this November, where the next Top500 list will be announced.