Many attempts have been made over the last 46 years to rewrite Amdahl’s law, which describes the limits on performance gains when a computing task is split between parallel and serial execution. One scientist hopes to show that Amdahl’s law can be surpassed, and that it doesn’t apply in certain parallel computing models.
A presentation titled “Breaking the Law” at the International Supercomputing Conference this week in Leipzig, Germany, will show how “pitfalls of Amdahl’s law can be avoided in specific situations,” according to a blog entry previewing the presentation.
The presentation will “challenge Amdahl’s generalized law by exposing it to a new class of experiments in parallel computing,” wrote Thomas Lippert, director of the Jülich Supercomputing Centre in Jülich, Germany, in the blog entry. Lippert will lead the presentation.
What is Amdahl’s law?
Amdahl’s law, established in 1967 by noted computer scientist Gene Amdahl when he was with IBM, provides an understanding of the scaling, limitations and economics of parallel computing based on certain models. The law states that a computational task can be decomposed into portions that run in parallel, which helps execute the task and solve the problem faster. However, the overall speed-up is limited by the portions of the task—serial tasks, in the case of computers—that cannot be parallelized.
“If you throw enough hardware at parallel you can solve a problem; you still have to do some in serial, which is a limiting factor in speeding up tasks,” said Nathan Brookwood, principal analyst at Insight 64.
The mathematics of Amdahl’s law imply a hard limit on parallel speed-up, provided certain things are held constant, such as the problem size and the nature of the processors doing the computation.
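In formula form, the law says that if a fraction s of a job is inherently serial, then n processors can speed it up by at most 1 / (s + (1 − s)/n), which approaches 1/s as n grows. The short sketch below is a minimal illustration of that formula, not code from any of the researchers mentioned; the 5 percent serial fraction is an assumed value chosen for the example.

```python
# Minimal sketch of Amdahl's law for a fixed-size problem.
# The serial fraction (5 percent) and processor counts are illustrative only.

def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Best-case speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for n in (2, 16, 1024, 1_000_000):
    # With a 5 percent serial fraction, the speedup flattens out near 1/0.05 = 20.
    print(f"{n:>9} processors -> speedup {amdahl_speedup(0.05, n):6.2f}")
```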
Previous challenges
Amdahl’s law has been challenged in the past. John Gustafson re-evaluated it, arguing that with parallelism among processors, the size of a problem can be meaningfully increased. The corollary, called Gustafson’s law, assumes that problem size is not constant, so parallel computing speed can scale up accordingly. Gustafson now works at Advanced Micro Devices as senior fellow and chief graphics product architect.
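Gustafson’s observation translates into an equally short formula: if the problem grows to fill the machine, the scaled speedup on n processors is s + (1 − s) × n, with no fixed ceiling. The sketch below illustrates that corollary under the same assumed 5 percent serial fraction; it is an illustration, not Gustafson’s own code.

```python
# Minimal sketch of Gustafson's law (scaled speedup): let the problem size
# grow with the machine. The serial fraction here is measured on the
# parallel system; the 5 percent value is illustrative only.

def gustafson_speedup(serial_fraction: float, processors: int) -> float:
    return serial_fraction + (1.0 - serial_fraction) * processors

for n in (2, 16, 1024):
    # Speedup grows roughly as 0.95 * n, with no fixed ceiling.
    print(f"{n:>5} processors -> scaled speedup {gustafson_speedup(0.05, n):8.2f}")
```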
The mathematical equations resulting from Amdahl’s law and its corollaries have become reference points as chip and software makers try to scale supercomputing performance. Such computing power is necessary to find scientific solutions in fields such as biotechnology and meteorology. Countries are also developing faster supercomputers for economic forecasting and national security reasons.
The ISC presentation was triggered by a history of optimizing simple and efficient systems for simulations in high-performance computing, such as Blue Gene/L, which took over as the world’s fastest computer in 2004, Lippert said in an email. Such systems are highly scalable and energy efficient, but their use has been restricted to problems that are well suited to parallel processing.
The presentation, however, is based on experiments done as part of the DEEP Project, which investigates highly parallel computing models that could help speed up supercomputers. In addition to investigating software development and programming tools, the project involves building a high-performance system called JUROPA (Jülich Research on Petaflop Architectures).
“My team was building the very effective JUROPA system together with Bull, Partec and Intel. This machine is ideal for highly complex problems that exhibit a lower concurrency, in general. Most codes live somewhere in between. I want to find out, if we can bring the concepts together. The different architectures can assign the … different code parts according to the concurrency,” Lippert said.
Performance in supercomputers has scaled thanks to new programming models and hardware such as accelerators and graphics cards. Code needs to be structured according to concurrency levels, for example with programming models such as the Barcelona Supercomputing Center’s OmpSs, Lippert said.
Despite the title of the presentation, the aim is not to challenge Amdahl’s law, Lippert said.
“On the contrary, I think, we are not taking [Amdahl’s law] serious enough. It is simply obvious that we should adapt the right piece of hardware to the corresponding concurrency,” Lippert said. “Only this approach has the potential to be most energy efficient and performance oriented at the same time.”
Revisiting Amdahl’s law
While Lippert’s blog entry did not provide much detail on how Amdahl’s law is being challenged, academics said it is always interesting to see the law being revisited.
Unlike Moore’s Law, which is an observation, Amdahl’s law cannot be “broken” in any mathematical sense, and is still relevant, said Paul Lu, associate professor at University of Alberta’s Department of Computing Science.
As with any mathematical theorem, if the assumptions are no longer true, the law is not relevant. “That is not to say that the law has been ‘broken’; it just means that the law does not apply to that situation,” Lu said.
While the size of a problem can be meaningfully increased, there are cases in which the size of a problem is fixed, and Amdahl’s law is relevant.
“For fixed-sized problems, Amdahl’s law is a sobering reminder of reality,” Lu said.
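With a serial fraction of just 5 percent, for example, the formula caps the achievable speedup at a factor of 20, no matter how many processors are thrown at the problem.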
Faster computers are designed to reduce execution time for fixed-size problems, but there are other metrics that need to be taken into account, said Xian-He Sun, chair and professor of the Department of Computer Science at the Illinois Institute of Technology.
“Amdahl’s law is a law, which shows even when you have reduced your communication and other overhead—such as memory access delay, software and hardware delay—to zero, with the sequential portion of your program, your parallel processing gain is still very limited,” Xian-He said.
“It gives the limitation of parallel processing, and does not matter how much you have improved your hardware,” Xian-He said. “Gustafson’s law says there is no limitation of parallel processing if you allow problem size increase.”
Xian-He revisited Amdahl’s law and established Sun-Ni’s law, which introduces memory capacity as a constraint on how far a problem can scale.
“With the long-standing memory-wall problem and the newly emerged big data problem, the problem size increase, however, is limited by memory access delay,” Xian-He said, adding that the memory-bounded speedup model needs to be followed.
“That means software models and hardware need to be redesigned to reduce data access time in order to get better scalability,” Xian-He said.
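Sun-Ni’s law, also known as memory-bounded speedup, can be read as a bridge between the two earlier results. In the sketch below, g(n) is an assumed function modeling how much the workload can grow within the memory of n nodes: when g(n) = 1 the formula reduces to Amdahl’s law, and when g(n) = n it reduces to Gustafson’s law; memory-bound workloads fall somewhere in between. The serial fraction and choices of g are illustrative only.

```python
# Minimal sketch of Sun-Ni's law (memory-bounded speedup).
# g(n) models how far the workload can grow in the memory of n nodes;
# the serial fraction and the choices of g below are illustrative only.

def sun_ni_speedup(serial_fraction, processors, g):
    scaled_work = serial_fraction + (1.0 - serial_fraction) * g(processors)
    parallel_time = serial_fraction + (1.0 - serial_fraction) * g(processors) / processors
    return scaled_work / parallel_time

n = 1024
print(sun_ni_speedup(0.05, n, lambda n: 1))       # g(n) = 1: Amdahl's limit (~19.6)
print(sun_ni_speedup(0.05, n, lambda n: n))       # g(n) = n: Gustafson scaling (~972.9)
print(sun_ni_speedup(0.05, n, lambda n: n**0.5))  # memory-bound: in between (~382)
```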
While new theories will provide new insights, Amdahl’s law provides a base.
“As with all real laws and theorems, one must always revisit things like the main assumptions, just as Gustafson did. The original law still stands, but corollaries might be very interesting,” said University of Alberta’s Lu.