MDAPT: Multi-Modal Depth Adversarial Prompt Tuning to Enhance the Adversarial Robustness of Visual Language Models
Large visual language models such as Contrastive Language-Image Pre-training (CLIP), despite their excellent performance, are highly vulnerable to adversarial examples. This work investigates the accuracy and robustness of visual language models (VLMs) from a novel multi-modal perspect...
Main Authors: Chao Li, Yonghao Liao, Caichang Ding, Zhiwei Ye
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/25/1/258
Similar Items
- On the Adversarial Robustness of Decision Trees and a Symmetry Defense
  by: Blerta Lindqvist
  Published: (2025-01-01)
- Ontology-based prompt tuning for news article summarization
  by: A. R. S. Silva, et al.
  Published: (2025-02-01)
- Survey on adversarial attacks and defenses for object detection
  by: Xinxin WANG, et al.
  Published: (2023-11-01)
- Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation
  by: Bin WANG, et al.
  Published: (2022-12-01)
- A Cross-Modal Tactile Reproduction Utilizing Tactile and Visual Information Generated by Conditional Generative Adversarial Networks
  by: Koki Hatori, et al.
  Published: (2025-01-01)