Improving the robustness of algorithms in adversarial environments via moving target defense


Bibliographic Details
Main Authors: Kang HE, Yuefei ZHU, Long LIU, Bin LU, Bin LIU
Format: Article
Language: English
Published: POSTS&TELECOM PRESS Co., LTD 2020-08-01
Series: 网络与信息安全学报 (Chinese Journal of Network and Information Security)
Online Access:http://www.cjnis.com.cn/thesisDetails#10.11959/j.issn.2096-109x.2020052
Description
Summary: Traditional machine learning models work in a benign environment, assuming that training data and test data share the same distribution. However, this assumption does not hold in areas such as malicious document detection. An adversary attacks the classification algorithm by modifying test samples so that carefully crafted malicious samples evade detection by machine learning models. To improve the security of machine learning algorithms, a method based on moving target defense (MTD) was proposed to enhance robustness. Experimental results show that the proposed method can effectively resist evasion attacks against the detection algorithm through dynamic transformation in the stages of the algorithm model, feature selection, and result output.
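The abstract describes dynamic transformation at three stages (model, feature selection, result output) but does not give implementation details. Below is a minimal sketch of that general idea, assuming scikit-learn classifiers; the class name, parameters, and pool construction are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative MTD-style classifier: each prediction is served by a
# randomly drawn (model, feature-subset) pair, so an attacker cannot
# tune an evasion sample against one fixed pipeline. All names here
# are hypothetical, not from the paper.
import random

import numpy as np
from sklearn.base import clone
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC


class MovingTargetClassifier:
    def __init__(self, n_feature_subsets=5, subset_ratio=0.7, seed=0):
        self.rng = random.Random(seed)
        self.base_models = [
            RandomForestClassifier(n_estimators=100, random_state=seed),
            LogisticRegression(max_iter=1000),
            LinearSVC(),
        ]
        self.n_feature_subsets = n_feature_subsets
        self.subset_ratio = subset_ratio
        self.pool = []  # trained (model, feature_indices) pairs

    def fit(self, X, y):
        n_features = X.shape[1]
        k = max(1, int(self.subset_ratio * n_features))
        # Diversity at the feature-selection stage: every pool member
        # is trained on a different random subset of the features.
        for base in self.base_models:
            for _ in range(self.n_feature_subsets):
                idx = np.array(self.rng.sample(range(n_features), k))
                self.pool.append((clone(base).fit(X[:, idx], y), idx))
        return self

    def predict(self, X):
        # Dynamic transformation at the model and output stages: a
        # fresh pool member answers each query, so repeated probes by
        # an attacker hit a moving target.
        model, idx = self.rng.choice(self.pool)
        return model.predict(X[:, idx])


# Toy usage with synthetic data (purely illustrative).
X = np.random.rand(200, 20)
y = np.random.randint(0, 2, 200)
clf = MovingTargetClassifier().fit(X, y)
print(clf.predict(X[:5]))
```

The design intent behind this kind of randomization is that an evasion sample optimized against one observed decision boundary need not transfer to the differently trained pool member that answers the next query.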
ISSN:2096-109X