Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation


Bibliographic Details
Main Authors: Bin WANG, Simin LI, Yaguan QIAN, Jun ZHANG, Chaohao LI, Chenming ZHU, Hongfei ZHANG
Format: Article
Language: English
Published: POSTS&TELECOM PRESS Co., LTD 2022-12-01
Series: 网络与信息安全学报 (Chinese Journal of Network and Information Security)
Subjects:
Online Access: http://www.cjnis.com.cn/thesisDetails#10.11959/j.issn.2096-109x.2022074
_version_ 1841529672869871616
author Bin WANG
Simin LI
Yaguan QIAN
Jun ZHANG
Chaohao LI
Chenming ZHU
Hongfei ZHANG
author_facet Bin WANG
Simin LI
Yaguan QIAN
Jun ZHANG
Chaohao LI
Chenming ZHU
Hongfei ZHANG
author_sort Bin WANG
collection DOAJ
description Adversarial training is one of the most commonly used defense methods against adversarial attacks, incorporating adversarial samples into the training process. However, the effectiveness of adversarial training relies heavily on the size of the trained model. Specifically, the size of models generated by adversarial training increases significantly in order to defend against adversarial attacks. This imposes constraints on the usability of adversarial training, especially in resource-constrained environments. Thus, how to reduce the model size while ensuring the robustness of the trained model is a challenge. To address these issues, a lightweight defense mechanism against adversarial attacks was proposed, based on adaptive pruning and robust distillation. A hierarchically adaptive pruning method was first applied to the model generated by adversarial training. The pruned model was then further compressed by a modified robust distillation method. Experimental results on the CIFAR-10 and CIFAR-100 datasets showed that the hierarchically adaptive pruning method achieves stronger robustness under various FLOPs budgets than existing pruning methods. Moreover, the fusion of pruning and robust distillation achieves higher robustness than state-of-the-art robust distillation methods. The experimental results therefore demonstrate that the proposed method improves the usability of adversarial training in IoT edge computing environments.
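The pipeline the abstract describes (adversarially trained teacher → pruning → robust distillation of a compact student) can be sketched in miniature. The following is an illustrative sketch only, not the authors' method: it uses a toy linear classifier, one-step FGSM to generate adversarial samples, global magnitude pruning as a stand-in for the paper's hierarchically adaptive pruning, and a temperature-softened KL divergence as the robust distillation loss. All function names, shapes, and parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Temperature-softened, numerically stable softmax."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fgsm(x, W, y, eps=0.1):
    """One-step FGSM on the cross-entropy loss of a linear model x @ W."""
    p = softmax(x @ W)
    onehot = np.eye(W.shape[1])[y]
    grad_x = (p - onehot) @ W.T        # dL/dx for softmax cross-entropy
    return x + eps * np.sign(grad_x)

def magnitude_prune(W, keep_ratio):
    """Keep the largest-|w| fraction of weights; zero out the rest."""
    k = int(np.ceil(keep_ratio * W.size))
    thresh = np.sort(np.abs(W), axis=None)[-k]
    return W * (np.abs(W) >= thresh)

def robust_distill_loss(x_adv, W_student, W_teacher, T=4.0):
    """KL(teacher || student) on softened outputs for adversarial inputs."""
    p_t = softmax(x_adv @ W_teacher, T)
    p_s = softmax(x_adv @ W_student, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)
    return float(np.mean(kl))

# Toy "teacher" standing in for an adversarially trained model.
W_teacher = rng.normal(size=(4, 3))
x = rng.normal(size=(8, 4))
y = rng.integers(0, 3, size=8)

x_adv = fgsm(x, W_teacher, y)                # adversarial samples
W_student = magnitude_prune(W_teacher, 0.5)  # pruned copy as student init
loss = robust_distill_loss(x_adv, W_student, W_teacher)
```

In a real setting, `loss` would be minimized over the student's remaining weights so that the pruned, smaller model matches the robust teacher's softened outputs on adversarial inputs, which is what makes the compressed model suitable for resource-constrained IoT edge devices.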
format Article
id doaj-art-11e03ddd5364402d90c464d71f3d1c8b
institution Kabale University
issn 2096-109X
language English
publishDate 2022-12-01
publisher POSTS&TELECOM PRESS Co., LTD
record_format Article
series 网络与信息安全学报
spelling doaj-art-11e03ddd5364402d90c464d71f3d1c8b2025-01-15T03:16:04ZengPOSTS&TELECOM PRESS Co., LTD网络与信息安全学报2096-109X2022-12-01810210959574502Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillationBin WANGSimin LIYaguan QIANJun ZHANGChaohao LIChenming ZHUHongfei ZHANGAdversarial training is one of the most commonly used defense methods against adversarial attacks, incorporating adversarial samples into the training process. However, the effectiveness of adversarial training relies heavily on the size of the trained model. Specifically, the size of models generated by adversarial training increases significantly in order to defend against adversarial attacks. This imposes constraints on the usability of adversarial training, especially in resource-constrained environments. Thus, how to reduce the model size while ensuring the robustness of the trained model is a challenge. To address these issues, a lightweight defense mechanism against adversarial attacks was proposed, based on adaptive pruning and robust distillation. A hierarchically adaptive pruning method was first applied to the model generated by adversarial training. The pruned model was then further compressed by a modified robust distillation method. Experimental results on the CIFAR-10 and CIFAR-100 datasets showed that the hierarchically adaptive pruning method achieves stronger robustness under various FLOPs budgets than existing pruning methods. Moreover, the fusion of pruning and robust distillation achieves higher robustness than state-of-the-art robust distillation methods. The experimental results therefore demonstrate that the proposed method improves the usability of adversarial training in IoT edge computing environments.http://www.cjnis.com.cn/thesisDetails#10.11959/j.issn.2096-109x.2022074adversarial defensespruningrobust distillationlightweight network
spellingShingle Bin WANG
Simin LI
Yaguan QIAN
Jun ZHANG
Chaohao LI
Chenming ZHU
Hongfei ZHANG
Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation
网络与信息安全学报
adversarial defenses
pruning
robust distillation
lightweight network
title Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation
title_full Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation
title_fullStr Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation
title_full_unstemmed Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation
title_short Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation
title_sort lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation
topic adversarial defenses
pruning
robust distillation
lightweight network
url http://www.cjnis.com.cn/thesisDetails#10.11959/j.issn.2096-109x.2022074
work_keys_str_mv AT binwang lightweightdefensemechanismagainstadversarialattacksviaadaptivepruningandrobustdistillation
AT siminli lightweightdefensemechanismagainstadversarialattacksviaadaptivepruningandrobustdistillation
AT yaguanqian lightweightdefensemechanismagainstadversarialattacksviaadaptivepruningandrobustdistillation
AT junzhang lightweightdefensemechanismagainstadversarialattacksviaadaptivepruningandrobustdistillation
AT chaohaoli lightweightdefensemechanismagainstadversarialattacksviaadaptivepruningandrobustdistillation
AT chenmingzhu lightweightdefensemechanismagainstadversarialattacksviaadaptivepruningandrobustdistillation
AT hongfeizhang lightweightdefensemechanismagainstadversarialattacksviaadaptivepruningandrobustdistillation