A low functional redundancy-based network slimming method for accelerating deep neural networks
Deep neural networks (DNNs) have been widely criticized for their large number of parameters and heavy computation demands, which hinder deployment on edge and embedded devices. To reduce the floating point operations (FLOPs) required to run DNNs and to accelerate inference, we start from model pruning, an...
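The abstract frames the method as redundancy-driven model pruning. As a rough illustration of that general idea (not the authors' algorithm, which is not described in this record), the sketch below removes convolutional filters whose weights nearly duplicate earlier filters, using cosine similarity of flattened weights as an assumed redundancy score; the `prune_conv` helper, the 0.9 threshold, and the PyTorch setting are all illustrative assumptions.

```python
# Minimal, generic sketch of redundancy-based channel pruning in PyTorch.
# The redundancy score (cosine similarity between flattened filter weights)
# and the threshold are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


def redundant_filter_indices(conv: nn.Conv2d, threshold: float = 0.9):
    """Indices of output filters whose weights are nearly parallel
    to an earlier (kept) filter, i.e. functionally redundant."""
    w = conv.weight.detach().flatten(1)   # (out_channels, in_channels*k*k)
    w = F.normalize(w, dim=1)             # unit-length rows
    sim = w @ w.t()                       # pairwise cosine similarity
    redundant = []
    for j in range(1, sim.size(0)):
        # filter j is redundant if it closely duplicates any earlier filter
        if sim[j, :j].max().item() > threshold:
            redundant.append(j)
    return redundant


def prune_conv(conv: nn.Conv2d, threshold: float = 0.9) -> nn.Conv2d:
    """Build a slimmer Conv2d that keeps only the non-redundant filters."""
    drop = set(redundant_filter_indices(conv, threshold))
    keep = [i for i in range(conv.out_channels) if i not in drop]
    slim = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                     stride=conv.stride, padding=conv.padding,
                     bias=conv.bias is not None)
    slim.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        slim.bias.data = conv.bias.data[keep].clone()
    return slim


if __name__ == "__main__":
    layer = nn.Conv2d(3, 64, 3, padding=1)
    slim = prune_conv(layer, threshold=0.9)
    print(layer.out_channels, "->", slim.out_channels)  # fewer filters, fewer FLOPs
```

In a full network, the input channels of the following layer (and any BatchNorm parameters) would also have to be sliced to match the kept filters before fine-tuning.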
| Main Authors: | Zheng Fang, Bo Yin |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-04-01 |
| Series: | Alexandria Engineering Journal |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S1110016824017162 |
Similar Items
- A flexible pruning on deep convolutional neural networks
  by: Liang CHEN, et al.
  Published: (2022-01-01)
- Convolutional Neural Network Compression via Dynamic Parameter Rank Pruning
  by: Manish Sharma, et al.
  Published: (2025-01-01)
- Coding redundancy controlled data forwarding mechanism in opportunistic networks
  by: Da-peng WU, et al.
  Published: (2015-03-01)
- Tilted-Mode All-Optical Diffractive Deep Neural Networks
  by: Mingzhu Song, et al.
  Published: (2024-12-01)
- Research and Achievement of Redundancy Elimination of Backbone Network
  by: Baojian Liu, et al.
  Published: (2013-09-01)