Probabilistic Automated Model Compression via Representation Mutual Information Optimization
Deep neural networks, despite their remarkable success in computer vision tasks, often face deployment challenges due to high computational demands and memory usage. Addressing this, we introduce a probabilistic framework for automated model compression (Prob-AMC) that optimizes pruning, quantization, and knowledge distillation simultaneously using information theory.
Main Authors: | Wenjie Nie, Shengchuan Zhang, Xiawu Zheng |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2024-12-01 |
Series: | Mathematics |
Subjects: | probabilistic model compression; representation mutual information; neural network compression; automated compression pipeline; information theory |
Online Access: | https://www.mdpi.com/2227-7390/13/1/108 |
_version_ | 1841549157831016448 |
---|---|
author | Wenjie Nie; Shengchuan Zhang; Xiawu Zheng |
author_facet | Wenjie Nie; Shengchuan Zhang; Xiawu Zheng |
author_sort | Wenjie Nie |
collection | DOAJ |
description | Deep neural networks, despite their remarkable success in computer vision tasks, often face deployment challenges due to high computational demands and memory usage. Addressing this, we introduce a probabilistic framework for automated model compression (Prob-AMC) that optimizes pruning, quantization, and knowledge distillation simultaneously using information theory. Our approach is grounded in maximizing the mutual information between the original and compressed network representations, ensuring the preservation of essential features under resource constraints. Specifically, we employ layer-wise self-representation mutual information analysis, sampling-based pruning and quantization allocation, and progressive knowledge distillation using the optimal compressed model as a teacher assistant. Through extensive experiments on CIFAR-10 and ImageNet, we demonstrate that Prob-AMC achieves a superior compression ratio of 33.41× on ResNet-18 with only a 1.01% performance degradation, outperforming state-of-the-art methods in terms of both compression efficiency and accuracy. This optimization process is highly practical, requiring merely a few GPU hours, and bridges the gap between theoretical information measures and practical model compression, offering significant insights for efficient deep learning deployment. |
format | Article |
id | doaj-art-8a3b1a0e65ec450480b44ff668a881c9 |
institution | Kabale University |
issn | 2227-7390 |
language | English |
publishDate | 2024-12-01 |
publisher | MDPI AG |
record_format | Article |
series | Mathematics |
spelling | doaj-art-8a3b1a0e65ec450480b44ff668a881c9; 2025-01-10T13:18:16Z; eng; MDPI AG; Mathematics; 2227-7390; 2024-12-01; vol. 13, no. 1, art. 108; doi:10.3390/math13010108; Probabilistic Automated Model Compression via Representation Mutual Information Optimization; Wenjie Nie, Shengchuan Zhang, Xiawu Zheng (Department of Artificial Intelligence, Xiamen University, Xiamen 361101, China); https://www.mdpi.com/2227-7390/13/1/108; probabilistic model compression; representation mutual information; neural network compression; automated compression pipeline; information theory |
spellingShingle | Wenjie Nie; Shengchuan Zhang; Xiawu Zheng; Probabilistic Automated Model Compression via Representation Mutual Information Optimization; Mathematics; probabilistic model compression; representation mutual information; neural network compression; automated compression pipeline; information theory |
title | Probabilistic Automated Model Compression via Representation Mutual Information Optimization |
title_full | Probabilistic Automated Model Compression via Representation Mutual Information Optimization |
title_fullStr | Probabilistic Automated Model Compression via Representation Mutual Information Optimization |
title_full_unstemmed | Probabilistic Automated Model Compression via Representation Mutual Information Optimization |
title_short | Probabilistic Automated Model Compression via Representation Mutual Information Optimization |
title_sort | probabilistic automated model compression via representation mutual information optimization |
topic | probabilistic model compression; representation mutual information; neural network compression; automated compression pipeline; information theory |
url | https://www.mdpi.com/2227-7390/13/1/108 |
work_keys_str_mv | AT wenjienie probabilisticautomatedmodelcompressionviarepresentationmutualinformationoptimization AT shengchuanzhang probabilisticautomatedmodelcompressionviarepresentationmutualinformationoptimization AT xiawuzheng probabilisticautomatedmodelcompressionviarepresentationmutualinformationoptimization |
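The abstract in the description field above frames compression as preserving representational information under resource constraints. As an illustrative sketch only, that objective can be written as a constrained mutual-information maximization; the notation here is ours, not necessarily the paper's ($\theta$ for the original weights, $\theta_c$ for a compressed candidate, $Z(\cdot)$ for layer representations, $\mathcal{C}$ for the per-layer pruning/quantization search space, and $B$ for resource budgets):

```latex
% Sketch of the objective the abstract describes (notation ours):
% pick the compressed model that preserves the most representational
% information while staying within the resource budgets.
\[
  \theta_c^{\star} \;=\; \arg\max_{\theta_c \in \mathcal{C}}
    \; I\bigl(Z(\theta);\, Z(\theta_c)\bigr)
  \quad \text{s.t.} \quad
  \mathrm{FLOPs}(\theta_c) \le B_{\mathrm{flops}},\quad
  \mathrm{Size}(\theta_c) \le B_{\mathrm{mem}}
\]
% where I(X; Y) = H(X) - H(X | Y) is the mutual information between
% the original and compressed representations.
```

The "sampling-based pruning and quantization allocation" step can likewise be pictured as: sample per-layer (pruning ratio, bit-width) configurations at random, discard candidates that exceed the budget, and keep the one with the best information score. The Python sketch below is a hypothetical illustration under that reading; the search space, `cost` model, and `mi_proxy` stand-in are ours, not the paper's implementation (a real score would estimate mutual information between original and compressed layer features on calibration data):

```python
import random

# Hypothetical per-layer search space (illustrative, not the paper's).
PRUNE_RATIOS = [0.0, 0.25, 0.5, 0.75]
BIT_WIDTHS = [2, 4, 8]

def cost(config):
    """Toy resource proxy: average fraction of the original 32-bit
    weight storage kept across layers."""
    return sum((1.0 - p) * b / 32.0 for p, b in config) / len(config)

def mi_proxy(config):
    """Stand-in for a layer-wise mutual-information score: assumes
    gentler compression preserves more representational information.
    A real implementation would estimate I(original; compressed)
    between layer features on a calibration batch."""
    return sum((1.0 - p) * (b / 8.0) for p, b in config) / len(config)

def sample_allocation(num_layers, budget, num_samples=1000, seed=0):
    """Sampling-based allocation: draw random per-layer configs and
    keep the feasible one with the highest information score."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(num_samples):
        config = [(rng.choice(PRUNE_RATIOS), rng.choice(BIT_WIDTHS))
                  for _ in range(num_layers)]
        if cost(config) > budget:
            continue  # violates the resource constraint
        score = mi_proxy(config)
        if score > best_score:
            best, best_score = config, score
    return best, best_score

if __name__ == "__main__":
    config, score = sample_allocation(num_layers=8, budget=0.1)
    print("per-layer (prune ratio, bits):", config)
    print("information proxy:", round(score, 3))
```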