Defending Deep Neural Networks Against Backdoor Attack by Using De-Trigger Autoencoder
A backdoor attack is a method that causes misrecognition in a deep neural network by training it on additional data that have a specific trigger. The network will correctly recognize normal samples (which lack the specific trigger) as their proper classes but will misrecognize backdoor samples (which contain the specific trigger)…
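The abstract describes the data-poisoning step behind a backdoor attack: a fraction of the training set is stamped with a trigger and relabeled to an attacker-chosen class. The sketch below is a minimal, illustrative version of that poisoning step only; the array shapes, the corner-patch trigger, and the helper names `stamp_trigger` and `poison_dataset` are assumptions for illustration and do not come from the article or its De-Trigger Autoencoder defense.

```python
# Illustrative sketch of backdoor data poisoning (not the article's method).
# Assumes images are NumPy float arrays scaled to [0, 1].
import numpy as np


def stamp_trigger(image, patch_size=3, value=1.0):
    """Place a small square trigger patch in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, ...] = value
    return poisoned


def poison_dataset(images, labels, target_label, poison_rate=0.1, rng=None):
    """Stamp the trigger onto a fraction of samples and relabel them
    to the attacker's target class; return poisoned copies."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels


if __name__ == "__main__":
    # Toy data: 100 random 28x28x1 "images" with 10 classes.
    x = np.random.rand(100, 28, 28, 1).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    x_p, y_p = poison_dataset(x, y, target_label=7, poison_rate=0.1)
    print("relabeled samples:", int((y_p != y).sum()))
```

A model trained on such a mixed dataset behaves normally on clean inputs but maps any triggered input to the target class, which is the failure mode the article's defense targets.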
| Main Author: | Hyun Kwon |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9579062/ |
Similar Items

- A survey of backdoor attacks and defences: From deep neural networks to large language models
  by: Ling-Xin Jin, et al.
  Published: (2025-09-01)
- Backdoor defense method in federated learning based on contrastive training
  by: Jiale ZHANG, et al.
  Published: (2024-03-01)
- Stealthy graph backdoor attack based on feature trigger
  by: Yang Chen, et al.
  Published: (2025-06-01)
- Survey on Backdoor Attacks on Deep Learning: Current Trends, Categorization, Applications, Research Challenges, and Future Prospects
  by: Muhammad Abdullah Hanif, et al.
  Published: (2025-01-01)
- Natural Occlusion-Based Backdoor Attacks: A Novel Approach to Compromising Pedestrian Detectors
  by: Qiong Li, et al.
  Published: (2025-07-01)