Adversarial Defense on Harmony: Reverse Attack for Robust AI Models Against Adversarial Attacks
Deep neural networks (DNNs) are crucial in safety-critical applications but vulnerable to adversarial attacks, where subtle perturbations cause misclassification. Existing defense mechanisms struggle with small perturbations and face accuracy-robustness trade-offs. This study introduces the ...
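The abstract's premise, that imperceptibly small perturbations can flip a classifier's prediction, is commonly illustrated with the fast gradient sign method (FGSM). The sketch below is a generic illustration of such an attack, not the paper's Reverse Attack method; the model, inputs, and `epsilon` value are assumed for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Craft an FGSM adversarial example: add an epsilon-bounded
    perturbation in the direction of the loss gradient.
    (Illustrative only; epsilon is an assumed example value.)"""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Single-step sign-gradient perturbation, clamped to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```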
| Main Authors: | Yebon Kim, Jinhyo Jung, Hyunjun Kim, Hwisoo So, Yohan Ko, Aviral Shrivastava, Kyoungwoo Lee, Uiwon Hwang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10766602/ |
Similar Items
- Adversarial attacks and defenses in deep learning
  by: Ximeng LIU, et al.
  Published: (2020-10-01)
- Survey on adversarial attacks and defenses for object detection
  by: Xinxin WANG, et al.
  Published: (2023-11-01)
- Moving target defense against adversarial attacks
  by: Bin WANG, et al.
  Published: (2021-02-01)
- Adversarial attack and defense on graph neural networks: a survey
  by: Jinyin CHEN, et al.
  Published: (2021-06-01)
- Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation
  by: Bin WANG, et al.
  Published: (2022-12-01)