LLM Hallucination: The Curse That Cannot Be Broken

Bibliographic Details
Main Author: Hussein Al-Mahmood
Format: Article
Language: Arabic
Published: University of Information Technology and Communications 2025-08-01
Series: Iraqi Journal for Computers and Informatics
Online Access: https://ijci.uoitc.edu.iq/index.php/ijci/article/view/546
Description
Summary: Artificial intelligence chatbots (e.g., ChatGPT, Claude, and Llama), also known as large language models (LLMs), are steadily becoming an essential part of the digital tools we use, yet they remain plagued by the phenomenon of hallucination. This paper gives an overview of the phenomenon, discussing its different types, the multi-faceted causes behind it, its impact, and the argument that the inherent nature of current LLMs makes hallucinations inevitable. After examining several techniques for detecting and mitigating hallucinations, each chosen for its distinct implementation approach, including enhanced training, tagged-context prompts, contrastive learning, and semantic entropy analysis, the work concludes that none is sufficient to fully mitigate hallucinations when they occur. The phenomenon is here to stay, which calls for robust user awareness and verification mechanisms, and it cautions against absolute dependence on these models in healthcare, journalism, legal services, finance, and other critical applications that require accurate and reliable information to support informed decisions.
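To make the semantic entropy analysis mentioned in the abstract more concrete, the following minimal Python sketch illustrates the general idea under stated assumptions: sample several answers to the same prompt, group them into meaning-equivalent clusters, and compute the entropy of the cluster distribution, where high entropy suggests the model is guessing and the answer is more likely a hallucination. The `same_meaning` callable and the toy normalisation-based equivalence check are placeholders invented for illustration; the paper itself surveys the technique at a conceptual level, and a real system would typically use a natural-language-inference model to decide semantic equivalence.

```python
# Hedged sketch of semantic entropy analysis for hallucination detection.
# Assumption: answers have already been sampled from the model; the
# same_meaning() predicate is a hypothetical stand-in for a semantic
# equivalence check (e.g., bidirectional entailment with an NLI model).

import math
from typing import Callable, List


def semantic_entropy(
    answers: List[str],
    same_meaning: Callable[[str, str], bool],
) -> float:
    """Cluster sampled answers by meaning and return the entropy
    (in bits) of the resulting cluster distribution."""
    clusters: List[List[str]] = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])  # no existing cluster matched

    if len(clusters) <= 1:
        return 0.0  # all answers agree in meaning
    total = len(answers)
    probs = [len(c) / total for c in clusters]
    return -sum(p * math.log2(p) for p in probs)


if __name__ == "__main__":
    # Toy equivalence check: exact match after normalisation.
    normalise = lambda s: s.strip().lower().rstrip(".")
    same = lambda a, b: normalise(a) == normalise(b)

    consistent = ["Paris.", "paris", "Paris"]    # answers agree: entropy 0.0
    scattered = ["Paris.", "Lyon", "Marseille"]  # answers disagree: ~1.58 bits
    print(semantic_entropy(consistent, same))
    print(semantic_entropy(scattered, same))
```

In practice a threshold on this score would flag likely confabulations for user verification rather than block them outright, which is consistent with the abstract's conclusion that detection aids, not replacements, for user awareness are what current techniques can offer.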
ISSN: 2313-190X; 2520-4912