Adversarial Attacks to Manipulate Target Localization of Object Detector

Bibliographic Details
Main Authors: Kai Xu, Xiao Cheng, Ji Qiao, Jia-Teng Li, Kai-Xuan Ji, Jia-Yong Zhong, Peng Tian, Jian-Xun Mi
Format: Article
Language:English
Published: IEEE 2024-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10771719/
Description
Summary: Adversarial attacks have gradually become an important branch of artificial intelligence security, and the potential threat posed by adversarial examples cannot be ignored. This paper proposes a new attack mode for the object detection task. We find that by attacking the localization task in object detection, an adversarial attack on target bounding boxes can be realized. We observe that, for a given target in the input image, the image regions on which the detection model relies for classification and for localization are fixed but different. We therefore propose a local-perturbation adversarial attack on object detection localization, which identifies the key regions that affect target localization and adds adversarial perturbations only to those regions, achieving attacks on bounding-box localization while maintaining high stealthiness. Experimental results on the MS COCO dataset and a self-built dataset show that our method generates adversarial examples that cause the object detector to localize targets abnormally. More broadly, studying adversarial example attacks helps in understanding deep networks and in developing robust models.
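To make the idea in the abstract concrete, the sketch below illustrates one plausible form of a localization-only, locally masked attack: a PGD-style ascent on the box-regression loss of a torchvision Faster R-CNN, with the perturbation confined to a binary mask. This is not the authors' implementation; the model choice, the mask, the step sizes eps/alpha, and the use of "loss_box_reg" as the attacked loss are all assumptions made for illustration.

```python
# Hedged sketch (not the paper's method): localized PGD-style attack that
# ascends only the box-regression loss of a torchvision Faster R-CNN.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()                    # training mode so the forward pass returns its loss dict
for p in model.parameters():     # freeze weights; only the input perturbation is optimized
    p.requires_grad_(False)

def localized_box_attack(image, target, mask, eps=8 / 255, alpha=2 / 255, steps=10):
    """image: (3, H, W) float tensor in [0, 1];
    target: dict with 'boxes' (N, 4) and 'labels' (N,) for the attacked objects;
    mask: (1, H, W) binary tensor marking the local region allowed to be perturbed."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = (image + delta * mask).clamp(0, 1)
        losses = model([adv], [target])    # dict of detector training losses
        loss = losses["loss_box_reg"]      # attack the localization branch only
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent on the localization loss
            delta.clamp_(-eps, eps)              # keep the perturbation small
            delta.grad.zero_()
    return (image + delta * mask).clamp(0, 1).detach()
```

In such a sketch, the mask would come from whatever procedure identifies the regions critical to target localization (the paper's key-area identification step); using a full-image mask would reduce it to an unconstrained attack on the box-regression head.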
ISSN:2169-3536