Enhancing DNN Computational Efficiency via Decomposition and Approximation
Emerging deep neural networks (DNNs) are computationally intensive across a wide range of tasks, placing a significant strain on hardware resources. This paper introduces DART, an adaptive microarchitecture that enhances area, power, and energy efficiency...
| Main Authors: | Ori Schweitzer, Uri Weiser, Freddy Gabbay |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10813351/ |
Similar Items
- Approximate CNN Hardware Accelerators for Resource Constrained Devices
  by: P Thejaswini, et al.
  Published: (2025-01-01)
- SLID: Exploiting Spatial Locality in Input Data as a Computational Reuse Method for Efficient CNN
  by: Fatmah Alantali, et al.
  Published: (2021-01-01)
- A Low-Power DNN Accelerator With Mean-Error-Minimized Approximate Signed Multiplier
  by: Laimin Du, et al.
  Published: (2024-01-01)
- Approximation-Aware Training for Efficient Neural Network Inference on MRAM Based CiM Architecture
  by: Hemkant Nehete, et al.
  Published: (2025-01-01)
- Cooperative inference analysis based on DNN convolutional kernel partitioning
  by: Jialin ZHI, et al.
  Published: (2022-12-01)