Review of Enhancement Research for Closed-Source Large Language Model

Bibliographic Details
Authors: LIU Hualing; ZHANG Zilong; PENG Hongshuai
Affiliation: School of Statistics and Information, Shanghai University of International Business and Economics, Shanghai 201620, China
Format: Article
Language: Chinese (zho)
Published: Journal of Computer Engineering and Applications Beijing Co., Ltd., Science Press, 2025-05-01
Series: Jisuanji kexue yu tansuo (Journal of Frontiers of Computer Science and Technology), Vol. 19, No. 5, pp. 1141-1156
ISSN: 1673-9418
DOI: 10.3778/j.issn.1673-9418.2407021
Subjects: closed-source model; large language model; prompt engineering; retrieval augmented generation; agent
Online Access: http://fcst.ceaj.org/fileup/1673-9418/PDF/2407021.pdf
Collection: DOAJ
Abstract: With the rapid development of large language models in natural language processing, enhancing the performance of closed-source large language models, represented by the GPT family, has become a challenge. Because the parameter weights inside these models are inaccessible, traditional training methods such as fine-tuning are difficult to apply to closed-source large language models, which makes further optimization of these models difficult. Meanwhile, closed-source large language models are already widely used in downstream real-world tasks, so investigating how to enhance their performance is important. This paper focuses on the enhancement of closed-source large language models, analyzing three techniques, namely prompt engineering, retrieval augmented generation, and agents, and further subdividing them according to the technical characteristics and modular architectures of the different methods. The core idea, main methods, and application effects of each technique are introduced in detail, and the strengths and limitations of the different enhancement methods in terms of reasoning ability, generation credibility, and task adaptability are examined. In addition, this paper discusses the combined application of the three techniques, drawing on specific cases to highlight the great potential of combined techniques for enhancing model performance. Finally, this paper summarizes the research status and open problems of existing techniques and looks ahead to the future development of enhancement techniques for closed-source large language models.
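The abstract's unifying observation is that a closed-source model exposes only a text-in/text-out interface, so all three surveyed techniques must operate on prompts and responses rather than on weights. A minimal, self-contained Python sketch of that idea follows; `call_closed_llm`, the toy retriever, and the prompt wording are hypothetical illustrations, not methods taken from the reviewed paper.

```python
# Sketch: enhancing a closed-source LLM without touching its weights.
# call_closed_llm is a hypothetical stand-in for any closed-source chat API;
# the corpus, retriever, and prompts are illustrative only.

def call_closed_llm(prompt: str) -> str:
    """Placeholder for a closed-source API call (e.g., an HTTP request).
    Only the text interface is available; the weights cannot be fine-tuned."""
    return f"[model response to a {len(prompt)}-character prompt]"

# --- Retrieval augmented generation: fetch external evidence ---------------
CORPUS = [
    "Prompt engineering steers a model purely through its input text.",
    "Retrieval augmented generation grounds answers in external documents.",
    "Agents let a model plan, call tools, and iterate on intermediate results.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy keyword-overlap retriever; real systems use dense embeddings."""
    words = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

# --- Prompt engineering: structure the input around the evidence -----------
def build_prompt(question: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using only the context below. Reason step by step.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
    )

# --- Agent-style loop: the model critiques and revises its own draft -------
def answer(question: str) -> str:
    draft = call_closed_llm(build_prompt(question, retrieve(question)))
    critique = (
        f"Question: {question}\nDraft answer: {draft}\n"
        "Revise the draft if it is unsupported by the context; "
        "otherwise repeat it unchanged."
    )
    return call_closed_llm(critique)

if __name__ == "__main__":
    print(answer("How can retrieval help a closed-source model?"))
```

All three stages modify only the text sent to, or received from, the model, which is the common thread the abstract identifies across prompt engineering, retrieval augmented generation, and agents.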