Systematic review of ChatGPT accuracy and performance in Iran’s medical licensing exams: A brief report
ChatGPT has demonstrated significant potential in various aspects of medicine, including its performance on licensing examinations. In this study, we systematically investigated ChatGPT’s performance in Iranian medical exams and assessed the quality of the included studies using a previously published assessment checklist. The study found that ChatGPT achieved an accuracy range of 32–72% on basic science exams, 34–68.5% on pre-internship exams, and 32–84% on residency exams. Notably, its performance was generally higher when the input was provided in English compared to Persian. One study reported a 40% accuracy rate on an endodontic board exam. To establish ChatGPT as a supplementary tool in medical education and clinical practice, we suggest that dedicated guidelines and checklists are needed to ensure high-quality and consistent research in this emerging field.
| Main Authors: | Alireza Keshtkar; Farnaz Atighi; Hamid Reihani |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wolters Kluwer Medknow Publications, 2024-11-01 |
| Series: | Journal of Education and Health Promotion |
| Subjects: | artificial intelligence; chatgpt; iran; medical education |
| Online Access: | https://journals.lww.com/10.4103/jehp.jehp_1210_24 |
| _version_ | 1846134477414203392 |
|---|---|
| author | Alireza Keshtkar; Farnaz Atighi; Hamid Reihani |
| author_facet | Alireza Keshtkar; Farnaz Atighi; Hamid Reihani |
| author_sort | Alireza Keshtkar |
| collection | DOAJ |
| description | ChatGPT has demonstrated significant potential in various aspects of medicine, including its performance on licensing examinations. In this study, we systematically investigated ChatGPT’s performance in Iranian medical exams and assessed the quality of the included studies using a previously published assessment checklist. The study found that ChatGPT achieved an accuracy range of 32–72% on basic science exams, 34–68.5% on pre-internship exams, and 32–84% on residency exams. Notably, its performance was generally higher when the input was provided in English compared to Persian. One study reported a 40% accuracy rate on an endodontic board exam. To establish ChatGPT as a supplementary tool in medical education and clinical practice, we suggest that dedicated guidelines and checklists are needed to ensure high-quality and consistent research in this emerging field. |
| format | Article |
| id | doaj-art-24dc69a5cfdb48ff98d12797b8fedf08 |
| institution | Kabale University |
| issn | 2277-9531; 2319-6440 |
| language | English |
| publishDate | 2024-11-01 |
| publisher | Wolters Kluwer Medknow Publications |
| record_format | Article |
| series | Journal of Education and Health Promotion |
| spelling | doaj-art-24dc69a5cfdb48ff98d12797b8fedf08; 2024-12-09T13:10:18Z; eng; Wolters Kluwer Medknow Publications; Journal of Education and Health Promotion; 2277-9531; 2319-6440; 2024-11-01; 13; 1; 421; 421; 10.4103/jehp.jehp_1210_24; Systematic review of ChatGPT accuracy and performance in Iran’s medical licensing exams: A brief report; Alireza Keshtkar; Farnaz Atighi; Hamid Reihani; https://journals.lww.com/10.4103/jehp.jehp_1210_24; artificial intelligence; chatgpt; iran; medical education |
| spellingShingle | Alireza Keshtkar; Farnaz Atighi; Hamid Reihani; Systematic review of ChatGPT accuracy and performance in Iran’s medical licensing exams: A brief report; Journal of Education and Health Promotion; artificial intelligence; chatgpt; iran; medical education |
| title | Systematic review of ChatGPT accuracy and performance in Iran’s medical licensing exams: A brief report |
| title_full | Systematic review of ChatGPT accuracy and performance in Iran’s medical licensing exams: A brief report |
| title_fullStr | Systematic review of ChatGPT accuracy and performance in Iran’s medical licensing exams: A brief report |
| title_full_unstemmed | Systematic review of ChatGPT accuracy and performance in Iran’s medical licensing exams: A brief report |
| title_short | Systematic review of ChatGPT accuracy and performance in Iran’s medical licensing exams: A brief report |
| title_sort | systematic review of chatgpt accuracy and performance in iran s medical licensing exams a brief report |
| topic | artificial intelligence; chatgpt; iran; medical education |
| url | https://journals.lww.com/10.4103/jehp.jehp_1210_24 |
| work_keys_str_mv | AT alirezakeshtkar systematicreviewofchatgptaccuracyandperformanceiniransmedicallicensingexamsabriefreport AT farnazatighi systematicreviewofchatgptaccuracyandperformanceiniransmedicallicensingexamsabriefreport AT hamidreihani systematicreviewofchatgptaccuracyandperformanceiniransmedicallicensingexamsabriefreport |