A novel perturbation attack on SVM by greedy algorithm


Bibliographic Details
Main Authors: Yaguan QIAN, Xiaohui GUAN, Shuhui WU, Bensheng YU, Dongxiao REN
Format: Article
Language: Chinese
Published: Beijing Xintong Media Co., Ltd 2019-01-01
Series: Dianxin kexue
Online Access: http://www.telecomsci.com/zh/article/doi/10.11959/j.issn.1000-0801.2019014/
Description
Summary: With the increasing concern over machine learning security, an adversarial sample generation method for SVM was proposed. The attack occurs in the testing stage by manipulating the sample to fool the SVM classification model. A greedy strategy was used to search for salient feature subsets in the kernel space, and the perturbation in the kernel space was then projected back into the input space to obtain attack samples. The method caused test samples to be misclassified with less than 7% perturbation. Experiments were carried out on two data sets, and the attack succeeded on both: on the artificial data set, the classification error rate exceeds 50% under 2% perturbation, and on the MNIST data set it approaches 100% under 5% perturbation.
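The abstract's greedy idea of perturbing only the most salient features until the SVM flips its prediction can be illustrated with a minimal sketch. This is not the paper's kernel-space algorithm: it assumes a linear-kernel SVM (where the weight vector directly ranks feature saliency), and the data set, step size, and feature budget are illustrative choices, not values from the paper.

```python
# Minimal sketch of a greedy salient-feature perturbation attack on a
# linear SVM. Assumption: with a linear kernel, |w_i| measures how much
# feature i influences the decision, so we perturb features in that order.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

def greedy_attack(x, clf, step=2.0, max_features=5):
    """Greedily perturb the features with the largest |w| until the
    prediction flips or the feature budget is exhausted."""
    w = clf.coef_[0]
    x_adv = x.copy()
    orig = clf.predict([x])[0]
    # Rank features by saliency: largest |w_i| first.
    order = np.argsort(-np.abs(w))
    # Move the decision value w.x + b toward the opposite class.
    direction = -1.0 if orig == 1 else 1.0
    for i in order[:max_features]:
        x_adv[i] += direction * step * np.sign(w[i])
        if clf.predict([x_adv])[0] != orig:
            break
    return x_adv

x = X[0]
x_adv = greedy_attack(x, clf)
print("original:", clf.predict([x])[0], "adversarial:", clf.predict([x_adv])[0])
```

Because only the top-ranked features are touched, the resulting perturbation stays sparse, which loosely mirrors the paper's goal of misclassification under a small perturbation budget.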
ISSN:1000-0801