An Adversarial Attack via Penalty Method
Deep learning systems have achieved significant success across various machine learning tasks. However, they are highly vulnerable to attacks: adversarial examples can easily fool deep learning systems by perturbing inputs with small, imperceptible noise. There has been extensive resea...
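The penalty-method idea named in the title can be illustrated on a toy problem: trade off the perturbation size against a misclassification loss in one objective. The sketch below is illustrative only (hypothetical weights, a simple hinge loss, and a linear classifier chosen for the example), not the attack from the article.

```python
import numpy as np

# Toy penalty-method adversarial attack (illustrative only; the weights,
# loss, and hyperparameters here are assumptions, not the article's method).
# We minimize   c * ||delta||^2 + max(0, w @ (x + delta) + kappa)
# so the perturbation stays small while the score for the original class
# is pushed below -kappa, flipping the decision.

w = np.array([1.0, -2.0])    # toy linear classifier: class 1 iff w @ x > 0
x = np.array([2.0, 0.5])     # clean input, classified as class 1 (w @ x = 1)
c = 0.1                      # penalty weight on the perturbation norm
kappa = 0.5                  # confidence margin for the flipped decision
lr = 0.01
delta = np.zeros_like(x)

for _ in range(300):
    margin = w @ (x + delta)
    # subgradient of the penalized objective
    grad = 2 * c * delta + (w if margin + kappa > 0 else 0.0)
    delta = delta - lr * grad

adv = x + delta
print("clean score:", w @ x, "adversarial score:", w @ adv)
```

The penalty weight `c` controls the trade-off: a larger `c` yields smaller perturbations but may fail to flip the label, which is why penalty-based attacks typically tune or anneal it.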
| Main Authors: | Jiyuan Sun, Haibo Yu, Jianjun Zhao |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10839396/ |
Similar Items

- Adversarial attacks and defenses in deep learning
  by: Ximeng LIU, et al.
  Published: (2020-10-01)
- Enhancing adversarial transferability with local transformation
  by: Yang Zhang, et al.
  Published: (2024-11-01)
- You Only Attack Once: Single-Step DeepFool Algorithm
  by: Jun Li, et al.
  Published: (2024-12-01)
- Dual-Targeted adversarial example in evasion attack on graph neural networks
  by: Hyun Kwon, et al.
  Published: (2025-01-01)
- Targeted Discrepancy Attacks: Crafting Selective Adversarial Examples in Graph Neural Networks
  by: Hyun Kwon, et al.
  Published: (2025-01-01)