A major stumbling block in processor-centric architectures is moving data between the processor chips and main memory. This data movement consumes substantial energy, a critical concern for modern computing, which increasingly operates on enormous in-memory datasets. The breakdown of Dennard scaling has tightened the energy constraints on computer systems further. Against this backdrop, near-memory processing (NMP) is emerging as an energy-saving technology, particularly with the arrival of commercially viable 3D chip stacking.
Building on existing studies that outlined the limitations and opportunities of this technology, EcoCloud researchers are conducting foundational work to explore new implementations of functionality and services based on NMP.
In earlier research, the team developed algorithms for database join operators near memory and showed that sort join delivers better performance and energy efficiency than hash join in NMP, the opposite of the two algorithms' relative standing on CPUs. That study demonstrated that performing a join near memory can improve energy efficiency by 4.5-10.7x and performance by 1.9-5.1x compared with CPU execution.
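For readers unfamiliar with the operator in question, the following is a minimal sketch of a sort-merge join, the class of algorithm the study found favorable near memory. The intuition, hedged here since the study's actual implementation is not shown, is that after sorting, the merge phase proceeds as sequential scans, which map well onto the wide, streaming bandwidth of 3D-stacked memory:

```python
def sort_merge_join(left, right):
    """Join two lists of (key, value) pairs on key.

    After sorting, the merge phase only scans each input
    sequentially -- the access pattern that makes sort join
    attractive for near-memory execution.
    """
    left, right = sorted(left), sorted(right)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # Matching key: emit the cross product of both runs.
            run_start = j
            while i < len(left) and left[i][0] == lk:
                j = run_start
                while j < len(right) and right[j][0] == lk:
                    out.append((lk, left[i][1], right[j][1]))
                    j += 1
                i += 1
            # j now points just past the right-side run.
    return out
```

A hash join, by contrast, probes a hash table at random locations, a pattern that suits large CPU caches but squanders the sequential bandwidth a memory stack offers.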
Taking that finding forward, the researchers are now investigating how specific functionalities can be implemented near memory. These include data-centric functionalities such as data filtering, data recognition, and data fetch, as well as system-level functionalities such as security, compression, and remote memory access. The study also examines how near-threshold logic can be employed in NMP.
The inquiry comes at a time when computing platforms are struggling to keep up with the demands of large-scale data analytics, where energy consumption and efficiency are key concerns. The NMP approach the researchers have embraced could go a long way toward bridging the gap between enormous datasets and computing architecture, a solid step toward greater processing efficiency.