Mixture of prompts learning for vision-language models
As powerful pre-trained vision-language models (VLMs) such as CLIP gain prominence, numerous studies have attempted to adapt them to downstream tasks. Among these, prompt learning has been validated as an effective method for adapting to new tasks while requiring only a small number of parameters. H...
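The abstract's claim that prompt learning adapts a frozen VLM with only a small number of trainable parameters can be illustrated with a minimal CoOp-style sketch. This is an assumption-laden illustration, not the paper's mixture-of-prompts method: the class `PromptLearner`, the context length, and the embedding dimensions below are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    """CoOp-style learnable context: a handful of trainable vectors
    prepended to each class-name embedding while the VLM stays frozen.
    (Hypothetical sketch; dimensions are illustrative, not from the paper.)"""
    def __init__(self, n_ctx=16, ctx_dim=512, n_classes=10):
        super().__init__()
        # The only trainable parameters: n_ctx shared context vectors.
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        # Frozen class-name token embeddings (placeholder tensors here;
        # in practice they come from the VLM's token embedding layer).
        self.register_buffer("cls_emb", torch.randn(n_classes, 1, ctx_dim))

    def forward(self):
        n_classes = self.cls_emb.shape[0]
        # Broadcast the shared context to every class.
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        # [n_classes, n_ctx + 1, ctx_dim]: learned context + class token,
        # ready to be fed through the frozen text encoder.
        return torch.cat([ctx, self.cls_emb], dim=1)

learner = PromptLearner()
trainable = sum(p.numel() for p in learner.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # 16 * 512 = 8192
```

With 16 context vectors of dimension 512, only 8,192 parameters are trained while the image and text encoders remain untouched; a mixture-of-prompts approach would presumably maintain several such prompt sets and combine them, as detailed in the full text linked below.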
| Main Authors: | Yu Du, Tong Niu, Rong Zhao |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-06-01 |
| Series: | Frontiers in Artificial Intelligence |
| Online Access: | https://www.frontiersin.org/articles/10.3389/frai.2025.1580973/full |
Similar Items
- MPVT: An Efficient Multi-Modal Prompt Vision Tracker for Visual Target Tracking
  by: Jianyu Xie, et al.
  Published: (2025-07-01)
- Public Opinion Classification on Government Policy Using Social Media: An Exploration of ChatGPT’s Capabilities and Limitations
  by: Tammy Babad-Falk, et al.
  Published: (2025-05-01)
- Evaluation of open and closed-source LLMs for low-resource language with zero-shot, few-shot, and chain-of-thought prompting
  by: Zabir Al Nazi, et al.
  Published: (2025-03-01)
- Few-shot learning for novel object detection in autonomous driving
  by: Yifan Zhuang, et al.
  Published: (2025-12-01)
- Exploring the Limits of Large Language Models’ Ability to Distinguish Between Objects
  by: Hyeongjin Ju, et al.
  Published: (2025-04-01)