Survey on large language models alignment research

Bibliographic Details
Main Authors: LIU Kunlin, QU Xinji, TAN Fang, KANG Honghui, ZHAO Shaowei, SHI Rong
Format: Article
Language: Chinese (zho)
Published: Beijing Xintong Media Co., Ltd, 2024-06-01
Series: Dianxin kexue
Online Access: http://www.telecomsci.com/zh/article/doi/10.11959/j.issn.1000-0801.2024151/
Description
Summary: With the rapid development of artificial intelligence technology, large language models have been widely applied in numerous fields. However, the potential of large language models to generate inaccurate, misleading, or even harmful content has raised concerns about their reliability. Adopting alignment techniques to ensure that the behavior of large language models is consistent with human values has become an urgent issue to address. Recent research progress on alignment techniques for large language models was surveyed. Common methods for collecting instruction data and human preference datasets were introduced, research on supervised tuning and alignment adjustment was summarized, commonly used datasets and methods for model evaluation were discussed, and future research directions were outlined.
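
For illustration only (this sketch is not drawn from the surveyed article itself), the Python below shows the typical shape of a human preference record and a per-pair loss in the style of Direct Preference Optimization, one commonly used alignment-adjustment method; the record fields, function name, and numeric values are assumptions.

import math
from dataclasses import dataclass

@dataclass
class PreferencePair:
    """One human-preference record: a prompt with a preferred and a rejected response."""
    prompt: str
    chosen: str
    rejected: str

def dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO-style loss for a single preference pair.

    Inputs are summed token log-probabilities of the two responses under the
    policy being tuned and under a frozen reference model; beta controls how
    strongly the policy is pushed away from the reference.
    """
    margin = (policy_logp_chosen - ref_logp_chosen) - (policy_logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log(sigmoid(beta * margin))

# Hypothetical example values: the policy already favors the chosen response
# more than the reference does, so the per-pair loss is small.
pair = PreferencePair(prompt="Explain why factual accuracy matters.",
                      chosen="A grounded, careful answer...",
                      rejected="A misleading answer...")
print(round(dpo_loss(-12.0, -30.0, -14.0, -28.0), 3))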
ISSN:1000-0801