An Enhanced Retrieval Scheme for a Large Language Model with a Joint Strategy of Probabilistic Relevance and Semantic Association in the Vertical Domain

Bibliographic Details
Main Authors: Qi Chen, Weifeng Zhou, Jian Cheng, Ji Yang
Format: Article
Language: English
Published: MDPI AG 2024-12-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/14/24/11529
Description
Summary: Large language model (LLM) processing, with natural language at its core, performs information retrieval through intelligent Q&A. It has a wide range of application scenarios and is commonly considered a form of generative AI. However, when fundamental LLMs with insufficient overall performance handle generation tasks, particularly in vertical domains, their outputs are often inaccurate due to poor generalization ability, producing the so-called “hallucination” phenomenon. To address these problems, this study developed an enhanced retrieval scheme for LLMs, named BM-RAGAM (BM25 retrieval-augmented generation attention mechanism). The scheme constructs a vectorized knowledge base and applies a hybrid joint retrieval strategy that combines keyword matching with a semantic association enhanced by an attention mechanism, taking ocean-front and eddy knowledge in oceanography as an example. Accurate word-based matching is realized with the BM25 algorithm, while text generation draws on semantically enhanced associations through retrieval-augmented generation (RAG) over a vector database built from text knowledge on ocean fronts and eddies. Outputs of the proposed scheme were compared and analyzed against those of the fundamental LLM Qwen2-72B, and an ablation experiment was conducted. The results show that the proposed scheme greatly reduces hallucination during text generation, making its outputs more interpretable.
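The hybrid joint retrieval strategy described in the summary can be illustrated with a minimal sketch: score each document with BM25 for keyword relevance, score it again with a semantic similarity measure, and rank by a weighted blend of the two. Everything below (the toy corpus, the blending weight `alpha`, and the hashed bag-of-words stand-in for a real sentence embedding) is an assumption for demonstration, not the paper's actual BM-RAGAM implementation.

```python
import math
import zlib
from collections import Counter

# Toy corpus standing in for the ocean-front/eddy knowledge base (illustrative).
DOCS = [
    "ocean fronts separate water masses with distinct temperature",
    "mesoscale eddies transport heat and salt across the ocean",
    "attention mechanisms weight tokens by relevance",
]

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Standard BM25 scoring over a whitespace-tokenized corpus."""
    tokenized = [d.split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query.split():
            df = sum(1 for t in tokenized if term in t)
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            f = tf[term]
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(s)
    return scores

def toy_embed(text, dim=16):
    """Deterministic stand-in for a real sentence embedding (hashed bag of words)."""
    v = [0.0] * dim
    for w in text.split():
        v[zlib.crc32(w.encode()) % dim] += 1.0
    return v

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_rank(query, docs, alpha=0.5):
    """Rank docs by alpha * normalized BM25 + (1 - alpha) * cosine similarity."""
    bm = bm25_scores(query, docs)
    mx = max(bm) or 1.0
    qv = toy_embed(query)
    joint = [alpha * (s / mx) + (1 - alpha) * cosine(qv, toy_embed(d))
             for s, d in zip(bm, docs)]
    return sorted(range(len(docs)), key=lambda i: joint[i], reverse=True)

order = hybrid_rank("ocean eddies heat transport", DOCS)
print(order)
```

In the full scheme the embedding step would be a learned model with an attention mechanism rather than a hashed bag of words, and the retrieved passages would be fed to the LLM as grounding context; this sketch only shows how the two relevance signals are joined into a single ranking.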
ISSN: 2076-3417