A flexible pruning on deep convolutional neural networks
Despite the successful application of deep convolutional neural networks, their structural redundancy, large memory requirements, and high computing cost make them hard to deploy on edge devices with limited resources. Network pruning is an effective way to eliminate this redundancy. An efficient flexible pruning strategy was proposed to find the best architecture under limited resources. The contribution of each channel was calculated from the distribution of the channel scaling factors, and the pruning result was estimated and simulated in advance to increase efficiency. Experimental results for VGG16 and ResNet56 on CIFAR-10 show that flexible pruning reduces FLOPs by 71.3% and 54.3%, respectively, while lowering accuracy by only 0.15 and 0.20 percentage points relative to the benchmark models.
| Main Authors: | Liang CHEN, Yaguan QIAN, Zhiqiang HE, Xiaohui GUAN, Bin WANG, Xing WANG |
|---|---|
| Format: | Article |
| Language: | Chinese (zho) |
| Published: | Beijing Xintong Media Co., Ltd, 2022-01-01 |
| Series: | Dianxin kexue |
| ISSN: | 1000-0801 |
| Subjects: | convolutional neural network; network pruning; scaling factor; channel contribution |
| Collection: | DOAJ |
| Record ID: | doaj-art-19f95b45232d493aab2d44b18a80e165 |
| Online Access: | http://www.telecomsci.com/zh/article/doi/10.11959/j.issn.1000-0801.2022004/ |
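The abstract describes two technical steps: scoring channel contributions from the distribution of channel scaling factors (the learnable batch-normalization gammas), and estimating and simulating the pruning result in advance so the target architecture can be chosen before any weights are removed. Below is a minimal PyTorch sketch of that idea; the per-layer normalization rule, the simplified FLOPs model, and the `plan_pruning` driver are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

def channel_contributions(model):
    """Score every BN channel by |gamma| normalized within its layer, so the
    scores reflect each layer's scaling-factor distribution and remain
    comparable across layers (an assumed heuristic, not the paper's formula)."""
    scores = []
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            g = m.weight.detach().abs()
            s = g / (g.sum() + 1e-12)           # normalize by the layer's total mass
            scores += [(v, name, i) for i, v in enumerate(s.tolist())]
    return sorted(scores)                        # ascending: weakest channels first

def simulate_flops(channel_counts, hw=32 * 32, k=3):
    """Rough cost model: a plain 3x3 conv stack costs cin*cout*k*k*H*W MACs
    per layer (strides, shortcuts, and the classifier are ignored here)."""
    return sum(cin * cout * k * k * hw
               for cin, cout in zip(channel_counts, channel_counts[1:]))

def plan_pruning(model, flops_budget_ratio=0.3, hw=32 * 32):
    """Flexible pruning plan: drop the lowest-contribution channels one by
    one, re-estimating FLOPs after each drop, and stop once the simulated
    cost meets the budget. No weights are modified -- the pruned
    architecture is only estimated and simulated in advance."""
    kept = {name: m.num_features for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}
    order = list(kept)
    base = simulate_flops([kept[n] for n in order], hw)
    plan = []
    for score, name, idx in channel_contributions(model):
        if kept[name] <= 1:                      # never empty a layer out
            continue
        kept[name] -= 1
        plan.append((name, idx))
        if simulate_flops([kept[n] for n in order], hw) <= flops_budget_ratio * base:
            break
    return plan, kept

# Usage on a hypothetical two-block CNN: plan a ~70% FLOPs reduction.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
)
plan, kept = plan_pruning(model, flops_budget_ratio=0.3)
print(f"{len(plan)} channels marked, kept per BN layer: {kept}")
```

On an untrained network all scaling factors start equal, so the plan above is only meaningful after sparsity-regularized training has differentiated the gammas; in practice the plan would then be realized by physically removing the selected channels and fine-tuning.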