Mini-InternVL: a flexible-transfer pocket multi-modal model with 5% parameters and 90% performance
Abstract: Multi-modal large language models (MLLMs) have demonstrated impressive performance in vision-language tasks across a wide range of domains. However, the large model scale and associated high computational cost pose significant challenges for training and deploying MLLMs on consumer-grade GP...
| Main Authors: | Zhangwei Gao, Zhe Chen, Erfei Cui, Yiming Ren, Weiyun Wang, Jinguo Zhu, Hao Tian, Shenglong Ye, Junjun He, Xizhou Zhu, Lewei Lu, Tong Lu, Yu Qiao, Jifeng Dai, Wenhai Wang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2024-12-01 |
| Series: | Visual Intelligence |
| Online Access: | https://doi.org/10.1007/s44267-024-00067-6 |
Similar Items
- GPT understands, too
  by: Xiao Liu, et al.
  Published: (2024-01-01)
- Enhancing domain-specific text generation for power grid maintenance with P2FT
  by: Yi Yang, et al.
  Published: (2024-11-01)
- LOGIC: LLM-originated guidance for internal cognitive improvement of small language models in stance detection
  by: Woojin Lee, et al.
  Published: (2024-12-01)
- A Study on Text Classification in the Age of Large Language Models
  by: Paul Trust, et al.
  Published: (2024-11-01)
- CPT: Colorful Prompt Tuning for pre-trained vision-language models
  by: Yuan Yao, et al.
  Published: (2024-01-01)