Exploiting Neural-Network Statistics for Low-Power DNN Inference
Specialized compute blocks have been developed for efficient neural-network (NN) execution. However, due to the vast amount of data and parameter movement, the interconnects and on-chip memories form another bottleneck that impairs power and performance. This work addresses this bottleneck by contributing a low-power...
Main Authors: Lennart Bamberg, Ardalan Najafi, Alberto Garcia-Ortiz
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Open Journal of Circuits and Systems
Online Access: https://ieeexplore.ieee.org/document/10498075/
Similar Items
- Cooperative inference analysis based on DNN convolutional kernel partitioning
  by: Jialin ZHI, et al.
  Published: (2022-12-01)
- Scalable Low Power Accelerator for Sparse Recurrent Neural Network
  by: Panshi JIN, et al.
  Published: (2023-12-01)
- Cloud-edge-device fusion architecture oriented to spectrum cognition and decision in low altitude intelligence network
  by: Chao DONG, et al.
  Published: (2023-11-01)
- Intelligent Low-Power Smart Home Architecture
  by: Cornel POPESCU, et al.
  Published: (2018-09-01)
- Research on Wi-Fi HaLow for the Internet of things
  by: Le TIAN, et al.
  Published: (2019-09-01)