Efficient Fine-Tuning of Large Language Models via a Low-Rank Gradient Estimator
In this paper, we present a Low-Rank Gradient Estimator (LoGE) to accelerate the fine-tuning-time computation of transformers, especially large language models (LLMs). Unlike Parameter-Efficient Fine-Tuning (PEFT) methods, which primarily aim to minimize the number of fine-tuning parameters, LoGE also...
| Main Authors: | Luoming Zhang, Zhenyu Lou, Yangwei Ying, Cheng Yang, Hong Zhou |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2024-12-01 |
| Series: | Applied Sciences |
| Online Access: | https://www.mdpi.com/2076-3417/15/1/82 |
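The abstract above is truncated in this record, so the exact LoGE algorithm is not reproduced here. As a rough, generic illustration of the low-rank gradient idea it names, the sketch below estimates a linear layer's weight gradient through a rank-r projection of the layer inputs. All sizes, variable names, and the SVD-based choice of basis are assumptions made for illustration; this is not the authors' method.

```python
# Hypothetical illustration (not the authors' LoGE code): estimating a
# linear layer's weight gradient through a rank-r projection of its inputs.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, batch, r = 256, 128, 64, 8       # assumed sizes

X = rng.standard_normal((batch, d_in))        # layer inputs
G_out = rng.standard_normal((batch, d_out))   # gradients w.r.t. layer outputs

# Full weight gradient for a linear layer y = X @ W:  dW = X^T @ G_out
dW_full = X.T @ G_out

# Low-rank estimate: project the inputs onto the top-r right singular
# directions of X before forming the outer product, so the expensive
# matrix product is carried out in a rank-r subspace.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:r].T                                  # (d_in, r) orthonormal basis
dW_lowrank = P @ (P.T @ X.T @ G_out)          # rank-r approximation of dW

# Sanity check: how far the rank-r estimate is from the full gradient.
rel_err = np.linalg.norm(dW_full - dW_lowrank) / np.linalg.norm(dW_full)
print(f"relative error of rank-{r} gradient estimate: {rel_err:.3f}")
```

This only shows the mechanics of a low-rank gradient approximation; how LoGE chooses the subspace and controls the approximation error during fine-tuning is described in the full article linked above.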
Similar Items
- Fine-tuning a local LLaMA-3 large language model for automated privacy-preserving physician letter generation in radiation oncology
  by: Yihao Hou, et al.
  Published: (2025-01-01)
- Low-Rank Adaptation of Pre-Trained Large Vision Models for Improved Lung Nodule Malignancy Classification
  by: Benjamin P. Veasey, et al.
  Published: (2025-01-01)
- Classification of Artificial-Intelligence-Generated Images Using a Fine-Tuning Method on Residual Networks
  by: Sulthan Abiyyu Hakim, et al.
  Published: (2024-07-01)
- Augmented prediction of vertebral collapse after osteoporotic vertebral compression fractures through parameter-efficient fine-tuning of biomedical foundation models
  by: Sibeen Kim, et al.
  Published: (2024-12-01)
- ClassWise-SAM-Adapter: Parameter-Efficient Fine-Tuning Adapts Segment Anything to SAR Domain for Semantic Segmentation
  by: Xinyang Pu, et al.
  Published: (2025-01-01)