Programming for multicore systems can be complex, so an industry consortium led by Advanced Micro Devices has taken a step toward its goal of eliminating development hurdles so applications are portable across devices, architectures and operating systems.
The HSA (Heterogeneous System Architecture) Foundation on Tuesday is expected to introduce a new uniform memory architecture called HUMA, which makes the different memory types in a system accessible to all processors. By breaking down the barriers that separate those memory types, developers get access to a larger pool of shared memory in which code can be executed.
The specification is part of HSA’s open-hardware standard so program execution can be easily distributed to processing resources in servers, PCs and mobile devices. HSA’s goal is to create a basic interface around industry-standard parallel programming tools so code can be written and compiled once for multiple devices.
Computers and mobile devices today combine CPUs with many co-processors to speed up computing tasks. Some of the co-processors include GPUs (graphics processing units), DSPs (digital signal processors), network processors, FPGAs (field programmable gate arrays) and specialized ASICs (application-specific integrated circuits). Some of the world’s fastest computers harness the joint computing power of GPUs and CPUs for complex math calculations, while mobile devices have multiple processors for graphics and security.
Efficient processing leads to better smartphone and tablet performance, and also longer battery life, said Phil Rogers, corporate fellow at AMD, during a conference call to discuss the new specification.
AMD later this year is expected to release laptop and desktop processors code-named Kaveri in which CPUs and graphics processors will be able to share memory. The HSA Foundation’s goals are loosely tied to AMD’s chip strategy in which the company integrates third-party intellectual property so chips can be customized to customer needs. For example, AMD is making a customized chip for Sony’s upcoming PlayStation 4 gaming console.
HSA also wants to lower development costs and reduce the need to recompile code for different devices or chip architectures. Features of HUMA include dynamic memory allocation and fast GPU access to system memory.
“Every compute unit ... is going to have the same priority and going to all be able to look at the same memory,” said Jim McGregor, principal analyst at Tirias Research.
HUMA ensures every hardware unit has access to the same data, so the information doesn't need to be copied into different memory types. GPUs and CPUs today work with separate caches and memory pools. The specification would break the traditional mold, in which the CPU allocates memory and fills it with data, and that data is then copied into the GPU's own memory before the graphics processor can execute on it.
“The other part is it is unifying the hardware and also software architecture. If you are writing in C++, you can say I want the GPU to execute it,” McGregor said.
The specification also reduces the need to shuttle data between memory pools, which eases bottlenecks, McGregor said.
AMD’s Rogers said the specification recognizes multiple storage and networking interconnects, but did not say whether it would address nonvolatile storage units mimicking memory. Many server installations have solid-state drives as a form of cache in which data is copied and stored for a temporary period as a task is being executed. Facebook has floated the idea of using SSDs as a replacement for DRAM.
HSA Foundation backers also include ARM, Sony, MediaTek, Qualcomm, Samsung, Texas Instruments, LG Electronics, Imagination Technologies and ST-Ericsson. Intel is not a member of the HSA Foundation and is using its own co-processors, compilers and programming tools to accompany its chips.
The idea of shared memory resources is also being chased by AMD rival Nvidia, which is not a member of the HSA Foundation. Nvidia next year plans to release a graphics processor based on the Maxwell architecture, which will unify GPU and CPU memory: the GPUs will be able to address CPU memory and vice versa, and applications will be easier to write with unified memory resources. Smartphones and tablets could get unified memory with Nvidia's upcoming Tegra 5 processor, code-named Logan, which will have a graphics processor built on the Maxwell architecture and will also support CUDA, Nvidia's proprietary set of tools for parallel programming.
HUMA is compatible with popular programming languages such as C, C++ and Python, and multiple operating systems, AMD said.