An empirical study of LLaMA3 quantization: from LLMs to MLLMs
Abstract: The LLaMA family, a collection of foundation language models ranging from 7B to 65B parameters, has become one of the most powerful open-source large language models (LLMs) and a popular LLM backbone of multi-modal large language models (MLLMs), widely used in computer vision and natural...
Main Authors: Wei Huang, Xingyu Zheng, Xudong Ma, Haotong Qin, Chengtao Lv, Hong Chen, Jie Luo, Xiaojuan Qi, Xianglong Liu, Michele Magno
Format: Article
Language: English
Published: Springer, 2024-12-01
Series: Visual Intelligence
Online Access: https://doi.org/10.1007/s44267-024-00070-x
Similar Items
- T-LLaMA: a Tibetan large language model based on LLaMA2
  by: Hui Lv, et al.
  Published: (2024-12-01)
- Advancing Computational Humor: LLaMa-3 Based Generation with DistilBert Evaluation Framework
  by: He Jinliang, et al.
  Published: (2025-01-01)
- FEASIBILITY OF USING LOW-PARAMETER LOCAL LLMS IN ANSWERING QUESTIONS FROM ENTERPRISE KNOWLEDGE BASE
  by: Marcin BADUROWICZ, et al.
  Published: (2024-12-01)
- Deformation Quantization of Nonassociative Algebras
  by: Elisabeth Remm
  Published: (2024-12-01)
- Sentiment Analysis of Product Reviews Using Fine-Tuned LLaMa-3 Model: Evaluation with Comprehensive Benchmark Metrics
  by: Wang Yili
  Published: (2025-01-01)