AMD introduces hUMA, a new approach to CPU and GPU integration
Until now, multi-core processors have accessed memory in a uniform way (UMA): all cores share a single address space, and every core can see and access the entire memory pool. When a GPU is introduced for computing, it uses a memory space separate from the one the processor accesses, so access is non-uniform (NUMA). When the GPU is required to process certain data, the processor typically copies it from its own memory into the GPU's memory space so the GPU can work on it, and if the result needs to be sent back to the processor, the process has to be repeated. Apart from reducing efficiency and adding latency to data processing between CPU and GPU, this means greater complexity for developers, who have to deal with the separate memory spaces, copy and verify the data, use specific APIs, and so on, which adds cost and time to the development of applications that use GPU acceleration.
The concept introduced by the hUMA architecture is a single memory space for CPU and GPU, using the same logical address space. Just as each core of a multi-core processor can access the entire memory space, the GPU can do so as well, so there is no need to copy data from CPU memory to GPU memory for processing. With hUMA, both can dynamically allocate memory from the whole available pool, physical and virtual, and any modification of the data made by the GPU or the CPU is seen instantly by the other, without the need to verify or update the data explicitly.
Another novelty is support for pointers, variables that reference or “point” to a memory area, which makes it much easier for the programmer to use, modify, or build more complex and dynamic data structures. Until now, the GPU's memory space did not allow the use of pointers; with hUMA, the GPU will be able to access data structures through pointers, so sending data from the CPU to the GPU will only require sending that pointer or reference, greatly simplifying the task for developers. In addition, AMD has already announced that this architecture will support programming languages that are already widely used and well known, such as Java, C++, or Python, without the need for special APIs.
For the consumer, all of this translates into more powerful applications that make better use of resources, deliver higher performance, and optimize power consumption.
The hUMA architecture has been published as an open standard available to any company, and several companies have already shown their support for the project; besides AMD itself, they include manufacturers as important as ARM, Qualcomm, and Samsung, among others.
A very interesting point is that we will be able to see the first chips using this technology during the second half of this same year, 2013, when AMD's Kaveri APUs arrive with heterogeneous, unified memory access and the HSA (Heterogeneous System Architecture) architecture. Also, judging from what has been seen so far of the chip the PlayStation 4 will carry, although details are still scarce, it is an AMD APU that will share GDDR5-type memory between the GPU and the processor, using technology similar to, or even the same as, hUMA.