An empirical study of LLaMA3 quantization: from LLMs to MLLMs

Bibliographic Details
Main Authors: Wei Huang, Xingyu Zheng, Xudong Ma, Haotong Qin, Chengtao Lv, Hong Chen, Jie Luo, Xiaojuan Qi, Xianglong Liu, Michele Magno
Format: Article
Language: English
Published: Springer 2024-12-01
Series: Visual Intelligence
Subjects: Model quantization, Large language model, Multi-modal, Deep learning
Online Access: https://doi.org/10.1007/s44267-024-00070-x
_version_ 1841558993408884736
author Wei Huang
Xingyu Zheng
Xudong Ma
Haotong Qin
Chengtao Lv
Hong Chen
Jie Luo
Xiaojuan Qi
Xianglong Liu
Michele Magno
author_facet Wei Huang
Xingyu Zheng
Xudong Ma
Haotong Qin
Chengtao Lv
Hong Chen
Jie Luo
Xiaojuan Qi
Xianglong Liu
Michele Magno
author_sort Wei Huang
collection DOAJ
description Abstract The LLaMA family, a collection of foundation language models ranging from 7B to 65B parameters, has become one of the most powerful families of open-source large language models (LLMs) and a popular LLM backbone for multi-modal large language models (MLLMs), widely used in computer vision and natural language understanding tasks. In particular, the recently released LLaMA3 models have achieved impressive performance in various domains through super-large-scale pre-training on over 15T tokens of data. Given the wide application of low-bit quantization for LLMs in resource-constrained scenarios, we explore LLaMA3’s capabilities when quantized to low bit-widths. This exploration can provide new insights into, and expose challenges for, the low-bit quantization of LLaMA3 and other future LLMs, especially in addressing the performance degradation suffered during LLM compression. Specifically, we comprehensively evaluate 10 existing post-training quantization and LoRA fine-tuning (LoRA-FT) methods on LLaMA3 at 1-8 bits and across various datasets to reveal LLaMA3’s low-bit quantization performance. To uncover the capabilities of low-bit quantized MLLMs, we also assess the performance of the LLaMA3-based LLaVA-Next-8B model at ultra-low bit-widths (2-4 bits) with post-training quantization methods. Our experimental results indicate that LLaMA3 still suffers non-negligible degradation in both linguistic and visual contexts, particularly at ultra-low bit-widths. This highlights a significant performance gap at low bit-widths that needs to be addressed in future developments. We expect this empirical study to prove valuable in advancing future models, driving LLMs and MLLMs to achieve higher accuracy at lower bit-widths and thus greater practicality.
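The study described above centres on post-training quantization of LLaMA3 weights at bit-widths from 1 to 8. As a reference point only, the sketch below shows round-to-nearest uniform quantization, the simplest post-training quantization baseline; it is not the authors' implementation, and the function name quantize_rtn, the NumPy dependency, and the per-tensor asymmetric scheme are all illustrative assumptions.

import numpy as np

def quantize_rtn(weights: np.ndarray, n_bits: int = 4):
    """Per-tensor asymmetric round-to-nearest quantization to n_bits (illustrative)."""
    qmax = 2 ** n_bits - 1                              # highest integer level, e.g. 15 for 4 bits
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = max((w_max - w_min) / qmax, 1e-12)          # step size between adjacent levels
    zero_point = round(-w_min / scale)                  # integer offset so that 0.0 is representable
    q = np.clip(np.round(weights / scale) + zero_point, 0, qmax)
    w_hat = (q - zero_point) * scale                    # de-quantized weights used at inference
    return q.astype(np.uint8), scale, zero_point, w_hat

# Reconstruction error grows as the bit-width shrinks, mirroring the
# degradation trend the abstract reports at ultra-low bit-widths.
w = np.random.randn(1024, 1024).astype(np.float32)
for bits in (8, 4, 2):
    _, _, _, w_hat = quantize_rtn(w, bits)
    print(f"{bits}-bit mean |w - w_hat|: {np.abs(w - w_hat).mean():.4f}")

Practical PTQ methods in this line of work (for example GPTQ or AWQ) refine this basic scheme with calibration data and per-group scales, but the bit-width versus accuracy trade-off that the toy loop prints is the same quantity the paper measures.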
format Article
id doaj-art-02c0cca950694326b2c63289293d26aa
institution Kabale University
issn 2731-9008
language English
publishDate 2024-12-01
publisher Springer
record_format Article
series Visual Intelligence
spelling Record ID: doaj-art-02c0cca950694326b2c63289293d26aa (indexed 2025-01-05T12:50:15Z)
Language: eng
Publisher: Springer
Series: Visual Intelligence (ISSN 2731-9008)
Published: 2024-12-01, vol. 2, no. 1, pp. 1-13
DOI: 10.1007/s44267-024-00070-x
Title: An empirical study of LLaMA3 quantization: from LLMs to MLLMs
Authors and affiliations:
Wei Huang, Department of Electrical and Electronic Engineering, The University of Hong Kong
Xingyu Zheng, School of Computer Science and Engineering, Beihang University
Xudong Ma, School of Computer Science and Engineering, Beihang University
Haotong Qin, Department of Information Technology and Electrical Engineering, ETH Zurich
Chengtao Lv, School of Computer Science and Engineering, Beihang University
Hong Chen, School of Computer Science and Engineering, Beihang University
Jie Luo, School of Computer Science and Engineering, Beihang University
Xiaojuan Qi, Department of Electrical and Electronic Engineering, The University of Hong Kong
Xianglong Liu, School of Computer Science and Engineering, Beihang University
Michele Magno, Department of Information Technology and Electrical Engineering, ETH Zurich
Abstract: identical to the description field above.
Online access: https://doi.org/10.1007/s44267-024-00070-x
Subjects: Model quantization; Large language model; Multi-modal; Deep learning
spellingShingle Wei Huang
Xingyu Zheng
Xudong Ma
Haotong Qin
Chengtao Lv
Hong Chen
Jie Luo
Xiaojuan Qi
Xianglong Liu
Michele Magno
An empirical study of LLaMA3 quantization: from LLMs to MLLMs
Visual Intelligence
Model quantization
Large language model
Multi-modal
Deep learning
title An empirical study of LLaMA3 quantization: from LLMs to MLLMs
title_full An empirical study of LLaMA3 quantization: from LLMs to MLLMs
title_fullStr An empirical study of LLaMA3 quantization: from LLMs to MLLMs
title_full_unstemmed An empirical study of LLaMA3 quantization: from LLMs to MLLMs
title_short An empirical study of LLaMA3 quantization: from LLMs to MLLMs
title_sort empirical study of llama3 quantization from llms to mllms
topic Model quantization
Large language model
Multi-modal
Deep learning
url https://doi.org/10.1007/s44267-024-00070-x
work_keys_str_mv AT weihuang anempiricalstudyofllama3quantizationfromllmstomllms
AT xingyuzheng anempiricalstudyofllama3quantizationfromllmstomllms
AT xudongma anempiricalstudyofllama3quantizationfromllmstomllms
AT haotongqin anempiricalstudyofllama3quantizationfromllmstomllms
AT chengtaolv anempiricalstudyofllama3quantizationfromllmstomllms
AT hongchen anempiricalstudyofllama3quantizationfromllmstomllms
AT jieluo anempiricalstudyofllama3quantizationfromllmstomllms
AT xiaojuanqi anempiricalstudyofllama3quantizationfromllmstomllms
AT xianglongliu anempiricalstudyofllama3quantizationfromllmstomllms
AT michelemagno anempiricalstudyofllama3quantizationfromllmstomllms
AT weihuang empiricalstudyofllama3quantizationfromllmstomllms
AT xingyuzheng empiricalstudyofllama3quantizationfromllmstomllms
AT xudongma empiricalstudyofllama3quantizationfromllmstomllms
AT haotongqin empiricalstudyofllama3quantizationfromllmstomllms
AT chengtaolv empiricalstudyofllama3quantizationfromllmstomllms
AT hongchen empiricalstudyofllama3quantizationfromllmstomllms
AT jieluo empiricalstudyofllama3quantizationfromllmstomllms
AT xiaojuanqi empiricalstudyofllama3quantizationfromllmstomllms
AT xianglongliu empiricalstudyofllama3quantizationfromllmstomllms
AT michelemagno empiricalstudyofllama3quantizationfromllmstomllms