“My AI is Lying to Me”: User-reported LLM hallucinations in AI mobile apps reviews
Abstract
Large Language Models (LLMs) are increasingly integrated into AI-powered mobile applications, offering novel functionalities but also introducing the risk of “hallucinations”: generating plausible yet incorrect or nonsensical information. These AI errors can significantly degrade user experi...
| Main Authors: | Rhodes Massenon, Ishaya Gambo, Javed Ali Khan, Christopher Agbonkhese, Ayed Alwadain |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-15416-8 |
Similar Items
- LLM Hallucination: The Curse That Cannot Be Broken
  by: Hussein Al-Mahmood
  Published: (2025-08-01)
- Use me wisely: AI-driven assessment for LLM prompting skills development
  by: Dimitri Ognibene, Gregor Donabauer, Emily Theophilou, Cansu Koyuturk, Mona Yavari, Sathya Bursic, Alessia Telari, Alessia Testa, Raffaele Boiano, Davide Taibi, Davinia Hernandez-Leo, Udo Kruschwitz and Martin Ruskov
  Published: (2025-07-01)
- Mitigating LLM Hallucinations Using a Multi-Agent Framework
  by: Ahmed M. Darwish, et al.
  Published: (2025-06-01)
- Context and Layers in Harmony: A Unified Strategy for Mitigating LLM Hallucinations
  by: Sangyeon Yu, et al.
  Published: (2025-05-01)
- Securing LLM Workloads With NIST AI RMF in the Internet of Robotic Things
  by: Hassan Karim, et al.
  Published: (2025-01-01)