LLM Hallucination: The Curse That Cannot Be Broken
Artificial intelligence chatbots such as ChatGPT, Claude, and Llama, also known as large language models (LLMs), are steadily becoming an essential part of the digital tools we use, but they are plagued by the phenomenon of hallucination. This paper gives an overview of this phenomenon,...
| Main Author: | Hussein Al-Mahmood |
|---|---|
| Format: | Article |
| Language: | Arabic |
| Published: | University of Information Technology and Communications, 2025-08-01 |
| Series: | Iraqi Journal for Computers and Informatics |
| Subjects: | |
| Online Access: | https://ijci.uoitc.edu.iq/index.php/ijci/article/view/546 |
Similar Items
- Context and Layers in Harmony: A Unified Strategy for Mitigating LLM Hallucinations
  by: Sangyeon Yu, et al.
  Published: (2025-05-01)
- GPT-4 generated psychological reports in psychodynamic perspective: a pilot study on quality, risk of hallucination and client satisfaction
  by: Namwoo Kim, et al.
  Published: (2025-03-01)
- Evaluating Reasoning in Large Language Models with a Modified Think-a-Number Game: Case Study
  by: Petr Hoza
  Published: (2025-07-01)
- LLM based expert AI agent for mission operation management
  by: Sobhana Mummaneni, et al.
  Published: (2025-03-01)
- Conversational AI agent for precision oncology: AI-HOPE-WNT integrates clinical and genomic data to investigate WNT pathway dysregulation in colorectal cancer
  by: Ei-Wen Yang, et al.
  Published: (2025-08-01)