SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication


Bibliographic Details
Main Authors: Jieui Kang, Soeun Choi, Eunjin Lee, Jaehyeong Sim
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10766585/
Description
Summary: We introduce novel sparsity-aware in-DRAM matrix mapping techniques and a corresponding DRAM-based acceleration framework, termed SpDRAM, which utilizes a triple row activation scheme to efficiently handle sparse matrix-vector multiplication (SpMV). We found that the extent to which sparsity can reduce operations depends heavily on how matrices are mapped into DRAM banks, which operate row by row. From this insight, we developed two distinct matrix mapping techniques aimed at maximizing the reduction of row operations with minimal design overhead: Output-aware Matrix Permutation (OMP) and Zero-aware Matrix Column Sorting (ZMCS). Additionally, we propose a Multiplication Deferring (MD) scheme that leverages the prevalent bit-level sparsity in matrix values to decrease the effective bit-width required for in-bank multiplication operations. Evaluation results demonstrate that the combination of our in-DRAM acceleration methods outperforms the latest DRAM-based PIM accelerator for SpMV, achieving up to $7.54\times$ higher performance and a $22.4\times$ improvement in energy efficiency across a wide range of SpMV tasks.
ISSN:2169-3536
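
The column-sorting idea behind ZMCS can be sketched in a few lines of host-side Python: columns are reordered by zero count so that mostly-zero columns cluster together, and the input vector is permuted identically so the SpMV result is unchanged. This is only a minimal illustration of the general principle; the function names, the tie-breaking order, and the in-DRAM mapping details are assumptions, not taken from the paper.

```python
# Illustrative sketch of zero-aware column sorting for SpMV (not the
# paper's exact algorithm). Columns with more zeros are grouped first,
# which in an in-DRAM mapping would let whole row operations be skipped.

def zmcs_permutation(matrix):
    """Return column indices sorted by descending zero count (stable)."""
    n_cols = len(matrix[0])
    zero_counts = [sum(1 for row in matrix if row[c] == 0)
                   for c in range(n_cols)]
    return sorted(range(n_cols), key=lambda c: zero_counts[c], reverse=True)

def apply_permutation(matrix, vector, perm):
    """Permute matrix columns and vector entries consistently,
    leaving the matrix-vector product unchanged."""
    permuted_matrix = [[row[c] for c in perm] for row in matrix]
    permuted_vector = [vector[c] for c in perm]
    return permuted_matrix, permuted_vector

M = [[0, 3, 0, 1],
     [0, 0, 0, 2],
     [5, 0, 0, 4]]
x = [1, 1, 1, 1]

perm = zmcs_permutation(M)          # all-zero column 2 moves to the front
Mp, xp = apply_permutation(M, x, perm)

# The SpMV result is invariant under the column permutation.
y  = [sum(a * b for a, b in zip(row, x))  for row in M]
yp = [sum(a * b for a, b in zip(row, xp)) for row in Mp]
assert y == yp
```

Because the vector is permuted with the same index order as the columns, correctness is preserved by construction; only the layout (and hence which rows of the bank hold nonzeros) changes.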