Friend-Guard Textfooler Attack on Text Classification System
Deep neural networks perform well on image classification, text classification, speech classification, and pattern analysis. However, such networks are vulnerable to adversarial examples. An adversarial example is a sample created by adding a small amount of noise to the original sample d...
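The abstract references TextFooler, a black-box attack that flips a text classifier's prediction by replacing the most influential words with semantically similar substitutes. Below is a minimal, self-contained Python sketch of that greedy substitution loop; the `toy_classifier`, the `SYNONYMS` table, and the `textfooler_attack` helper are hypothetical illustrations of the general technique, not the friend-guard method proposed in this article.

```python
import math
from typing import Callable, Dict, List

# Hypothetical synonym table. A real TextFooler attack proposes candidates
# from counter-fitted word embeddings; these entries are illustrative only.
SYNONYMS: Dict[str, List[str]] = {
    "great": ["fine", "solid"],
    "good": ["decent", "fine"],
    "terrible": ["dreadful", "poor"],
}

def toy_classifier(tokens: List[str]) -> float:
    """Hypothetical sentiment model: returns P(positive). It only recognizes
    a few cue words, so a human-equivalent synonym can slip past it."""
    positive = {"good", "great", "excellent"}
    negative = {"terrible", "bad", "awful"}
    score = (sum(t in positive for t in tokens)
             - sum(t in negative for t in tokens) - 0.5)
    return 1.0 / (1.0 + math.exp(-score))

def textfooler_attack(tokens: List[str],
                      predict: Callable[[List[str]], float]) -> List[str]:
    """Greedy word-substitution attack in the style of TextFooler."""
    original_label = predict(tokens) >= 0.5

    # Step 1: rank words by importance, measured as the change in the
    # predicted probability when the word is deleted.
    base = predict(tokens)
    ranked = sorted(range(len(tokens)),
                    key=lambda i: abs(base - predict(tokens[:i] + tokens[i + 1:])),
                    reverse=True)

    # Step 2: visit words from most to least important, and keep the
    # synonym that pushes the prediction furthest toward the other label.
    adversarial = list(tokens)
    for i in ranked:
        best_prob = predict(adversarial)
        best_word = adversarial[i]
        for candidate in SYNONYMS.get(adversarial[i], []):
            trial = adversarial[:i] + [candidate] + adversarial[i + 1:]
            prob = predict(trial)
            # Accept a candidate that lowers P(positive) for an originally
            # positive input, or raises it for a negative one.
            improves = prob < best_prob if original_label else prob > best_prob
            if improves:
                best_prob, best_word = prob, candidate
        adversarial[i] = best_word
        if (predict(adversarial) >= 0.5) != original_label:
            break  # prediction flipped: attack succeeded
    return adversarial

if __name__ == "__main__":
    text = "the movie was great".split()
    adv = textfooler_attack(text, toy_classifier)
    print("original:   ", " ".join(text), "->", toy_classifier(text))
    print("adversarial:", " ".join(adv), "->", toy_classifier(adv))
```

Running the sketch replaces "great" with "fine": the sentence keeps its meaning for a human reader, but the toy model's prediction flips from positive to negative, which is the core effect the attack exploits.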
| Main Author: | Hyun Kwon |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9432814/ |
Similar Items
- Dual-Targeted adversarial example in evasion attack on graph neural networks
  by: Hyun Kwon, et al.
  Published: (2025-01-01)
- Targeted Discrepancy Attacks: Crafting Selective Adversarial Examples in Graph Neural Networks
  by: Hyun Kwon, et al.
  Published: (2025-01-01)
- An Adversarial Attack via Penalty Method
  by: Jiyuan Sun, et al.
  Published: (2025-01-01)
- Adversarial attacks and defenses in deep learning
  by: Ximeng LIU, et al.
  Published: (2020-10-01)
- Enhancing adversarial transferability with local transformation
  by: Yang Zhang, et al.
  Published: (2024-11-01)