Parameter-efficient fine-tuning of large language models using semantic knowledge tuning
Abstract: Large Language Models (LLMs) have gained significant popularity in recent years for specialized tasks using prompts, due to their low computational cost. Standard methods like prefix tuning utilize special, modifiable tokens that lack semantic meaning and require extensive training for best...
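To make the abstract's contrast concrete, below is a minimal sketch of vanilla prefix tuning, the baseline the abstract describes: randomly initialized, semantically meaningless prefix vectors are prepended to the input embeddings while all pretrained weights stay frozen. This is an illustrative assumption, not the paper's implementation; the names `PrefixTuned`, `prefix_len`, and `base_model` are hypothetical, and `base_model` is assumed to accept input embeddings directly.

```python
import torch
import torch.nn as nn

class PrefixTuned(nn.Module):
    """Minimal prefix-tuning sketch: only `self.prefix` is trainable."""

    def __init__(self, base_model: nn.Module, embed_dim: int, prefix_len: int = 10):
        super().__init__()
        self.base_model = base_model
        # Freeze every pretrained parameter; the prefix is the sole trainable tensor.
        for p in self.base_model.parameters():
            p.requires_grad = False
        # Randomly initialized prefix vectors: modifiable tokens with no
        # semantic meaning, the setup the abstract contrasts with.
        self.prefix = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the learned prefix to every sequence in the batch.
        return self.base_model(torch.cat([prefix, input_embeds], dim=1))
```

A semantic-knowledge variant, as the article's title suggests, would presumably initialize or constrain such prefix vectors with meaningful token embeddings rather than random noise; see the full text at the Online Access link below for the authors' actual method.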
| Main Authors: | Nusrat Jahan Prottasha, Asif Mahmud, Md. Shohanur Islam Sobuj, Prakash Bhat, Md Kowsher, Niloofar Yousefi, Ozlem Ozmen Garibay |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2024-12-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-024-75599-4 |
Similar Items
- NMC and the Fine-Tuning Problem on the Brane
  by: A. Safsafi, et al. Published: 2014-01-01
- The current status of fine-tuning in supersymmetry
  by: Melissa van Beekveld, et al. Published: 2020-01-01
- Parameter-Efficient Fine-Tuning of Large Pretrained Models for Instance Segmentation Tasks
  by: Nermeen Abou Baker, et al. Published: 2024-12-01
- Fine-tuning neural network quantum states
  by: Riccardo Rende, et al. Published: 2024-12-01
- Enhancing semantical text understanding with fine-tuned large language models: A case study on Quora Question Pair duplicate identification.
  by: Sifei Han, et al. Published: 2025-01-01