Light Attack: A Physical World Real-Time Attack Against Object Classifiers
It is well known that deep neural networks (DNNs) are vulnerable to adversarial examples. In the digital world, most existing work causes classifiers or detectors to fail by adding perturbations that are imperceptible to humans. In the physical world, existing work mostly invalidates classifiers...
| Main Authors: | Ruizhe Hu, Ting Rui, Yan Ouyang, Jinkang Wang, Qunyan Jiang, Yinan Du |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9791340/ |
Similar Items
- Adversarial Attacks to Manipulate Target Localization of Object Detector
  by: Kai Xu, et al.
  Published: (2024-01-01)
- DOG: An Object Detection Adversarial Attack Method
  by: Jinpeng Li, et al.
  Published: (2025-01-01)
- You Only Attack Once: Single-Step DeepFool Algorithm
  by: Jun Li, et al.
  Published: (2024-12-01)
- An Adversarial Attack via Penalty Method
  by: Jiyuan Sun, et al.
  Published: (2025-01-01)
- Investigating the Transferability of TOG Adversarial Attacks in YOLO Models in the Maritime Domain
  by: Phornphawit Manasut, et al.
  Published: (2025-01-01)