When AI goes wrong: Fatal errors in oncological research reviewing assistance Open AI based

Bibliographic Details
Main Author: Marwan Al-Raeei
Format: Article
Language: English
Published: Elsevier 2024-06-01
Series: Oral Oncology Reports
Online Access: http://www.sciencedirect.com/science/article/pii/S2772906024001389
Description
Summary: In this letter to the editor, the author discusses the use of artificial intelligence (AI) techniques, specifically Elsevier's ChatGPT-based "Review Assistant", for reviewing scientific articles. While the tool has many benefits, such as detecting linguistic and typographical errors in manuscripts, it also has limitations. The letter highlights an example in which the AI gave an incorrect and potentially dangerous answer regarding the bond energies of molecules in an oral tumor. This mistake shows that using AI to evaluate scientific research can be a double-edged sword, as it may provide inaccurate information with serious consequences.
ISSN: 2772-9060