A flexible pruning on deep convolutional neural networks


Bibliographic Details
Main Authors: Liang CHEN, Yaguan QIAN, Zhiqiang HE, Xiaohui GUAN, Bin WANG, Xing WANG
Format: Article
Language: Chinese
Published: Beijing Xintong Media Co., Ltd 2022-01-01
Series: Dianxin kexue (Telecommunications Science)
Online Access:http://www.telecomsci.com/zh/article/doi/10.11959/j.issn.1000-0801.2022004/
Description
Summary: Despite the successful application of deep convolutional neural networks, their structural redundancy leads to large memory requirements and high computing costs, making them hard to deploy on edge devices with limited resources. Network pruning is an effective way to eliminate this redundancy. An efficient flexible pruning strategy was proposed to find the best architecture under limited resources. The contribution of each channel was calculated from the distribution of channel scaling factors, and the pruning result was estimated and simulated in advance to improve efficiency. Experimental results with VGG16 and ResNet56 on CIFAR-10 show that flexible pruning reduces FLOPs by 71.3% and 54.3%, respectively, while lowering accuracy by only 0.15 and 0.20 percentage points compared with the benchmark models.
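The abstract does not give the authors' exact formulas, but the idea of scoring channels by the distribution of their scaling factors and simulating the pruning result before removal can be sketched as follows. This is an illustrative sketch only: the normalization by the layer mean, the global ranking, and the `prune_ratio` parameter are assumptions, not the paper's method.

```python
# Illustrative sketch (not the authors' exact algorithm): score each
# channel by its scaling factor relative to its layer's distribution,
# then simulate pruning by selecting the lowest-scoring channels.

def channel_contributions(gammas):
    """Normalize |gamma| by the layer's mean magnitude so that scores
    are comparable across layers with different scaling-factor
    distributions (assumed normalization, for illustration)."""
    mags = [abs(g) for g in gammas]
    mean = sum(mags) / len(mags)
    return [m / mean for m in mags]

def select_pruned(gammas_per_layer, prune_ratio):
    """Return (layer, channel) indices of the globally lowest-contribution
    channels -- an advance simulation of the pruning result, before any
    channel is actually removed from the network."""
    scored = []
    for li, gammas in enumerate(gammas_per_layer):
        for ci, score in enumerate(channel_contributions(gammas)):
            scored.append((score, li, ci))
    scored.sort()  # lowest contribution first
    n_prune = int(len(scored) * prune_ratio)
    return sorted((li, ci) for _, li, ci in scored[:n_prune])

# Toy example: two layers' channel scaling factors (e.g. BN gammas).
layers = [[0.9, 0.05, 1.2, 0.8], [0.02, 1.1, 0.95]]
pruned = select_pruned(layers, 0.3)
print(pruned)  # the two near-zero channels are selected for pruning
```

In practice such scaling factors are typically taken from batch-normalization weights, and the simulated result lets one check the resulting architecture against a resource budget before committing to the prune.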
ISSN:1000-0801