The increasing power of supercomputers has gone from gee-whiz to ho-hum. IBM seems to hold a near-permanent position atop the Top500 supercomputer list, and it may be a long time before it gives up the crown.
But following a ranking of top supercomputers isn't like being a fan of Tiger Woods: the list's real value is found by viewing it from the bottom up.
To make the Top500 list, compiled by academic researchers in the U.S. and Europe and released this week, now requires computing power of 17.1 TFLOPS, up from 12.64 TFLOPS six months ago -- an increase in both performance and price of entry.
Just two years ago, the entry point on the list was 4 TFLOPS. Today you can buy a system of similar power for as little as $10,000.
Just as the PC democratized computing, supercomputing's increasingly powerful low end is where the big changes are happening.
The ability of these machines to run applications effectively across hundreds or thousands of cores for little money could put affordable, high-performance computing in many more offices and research centers that would benefit from it.
What's helping the low end is the increasing use of GPUs (graphics processing units), which cost less than CPUs (central processing units) and are a good fit for simulations, modeling and other high-performance uses.
Among the machines demonstrated at the supercomputing conference in Germany this week was one from U.K.-based Boston Ltd. that pairs CPUs with AMD FireStream 9250 GPUs; it will be sold first as a 1U rack-mount system and later as a desktop tower. It, too, reaches teraflop speeds. Pricing wasn't immediately available.
What's most interesting is the increasing activity in low-end supercomputing, especially in languages and architectures that support parallel environments including GPUs. OpenCL, initially developed by Apple Inc., was released late last year by the Khronos Group consortium, and Nvidia Corp.'s CUDA was released in 2007.
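The core idea these languages share is the data-parallel model: one small "kernel" function is written once and the GPU runs it concurrently over thousands of data elements. A minimal sketch of that model in plain Python (not real CUDA or OpenCL code; the kernel and names here are illustrative assumptions, and a sequential loop stands in for the GPU's parallel lanes):

```python
def saxpy_kernel(i, a, x, y):
    # One kernel invocation handles one element; index i plays the
    # role of the GPU "thread id" that CUDA and OpenCL supply.
    return a * x[i] + y[i]

a = 2.0
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]

# On a GPU, every value of i would execute at the same time;
# here we simply loop over the indices to show the same result.
result = [saxpy_kernel(i, a, x, y) for i in range(len(x))]
print(result)  # [12.0, 24.0, 36.0, 48.0]
```

The point of the model is that the per-element kernel contains no loop of its own, which is what lets the hardware scale the same code from a handful of cores to thousands.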
But one limiter in the use of GPU/CPU-based systems isn't the hardware as much as it is "the progress people have made to date in porting various applications to GPUs," said Steve Conway, an HPC analyst at IDC.
Conway said he believes it will be a few years before the application base grows enough to broaden adoption. He expects applications to come of age within five years, but for the immediate future the GPU will remain much more a limited-purpose processor.
The Top500 list remains interesting for many reasons. It's a map, for one, of emerging and diversifying economies. Saudi Arabia, for instance, was ranked 14th on the list with a 65,500-core system from IBM.
There are also many systems that are never submitted for ranking. The National Security Agency, which runs some large systems, last appeared on the Top500 list in 1998.
But when Roadrunner broke the petaflop barrier last year -- sustaining more than one thousand trillion (one quadrillion) floating-point operations per second -- it pointed supercomputing toward its next frontier: the exaflop, a quintillion (a million trillion) calculations per second, a thousand times faster than a petaflop.
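The unit scales above can be checked with a few lines of arithmetic (a quick sanity check, nothing more; these are decimal powers of ten, as the Top500 uses, not binary powers):

```python
# FLOPS unit scales referenced in the article.
TERAFLOP = 10**12  # one trillion floating-point operations per second
PETAFLOP = 10**15  # one quadrillion -- the barrier Roadrunner broke
EXAFLOP  = 10**18  # one quintillion, i.e. a million trillion

# Each step up the ladder is a factor of a thousand.
print(EXAFLOP // PETAFLOP)   # 1000
print(PETAFLOP // TERAFLOP)  # 1000
```

So the list's current 17.1-TFLOPS entry point sits at under two hundredths of a petaflop, which is what makes the low end's progress so striking.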
The petaflop was high-performance computing's big-bang moment, and a similar exaflop moment is many years away, so focusing on it may mean missing the real changes ahead.
The revolution in truly accessible high-performance systems is just beginning.
This story, "Supercomputers Lose Glamour, Price Tag," was originally published by Computerworld.