ChatGPT-4 Responses on Ankle Cartilage Surgery Often Diverge from Expert Consensus: A Comparative Analysis

Bibliographic Details
Main Authors: Takuji Yokoe, MD, PhD; Giulia Roversi, MD, PhD; Nuno Sevivas, MD, PhD; Naosuke Kamei, MD, PhD; Pedro Diniz, MD, PhD; Hélder Pereira, MD, PhD
Format: Article
Language: English
Published: SAGE Publishing, 2025-08-01
Series: Foot & Ankle Orthopaedics
Online Access: https://doi.org/10.1177/24730114251352494
Description
Summary:
Background: Few studies have evaluated whether large language models, such as ChatGPT, can provide accurate guidance to clinicians in the field of foot and ankle surgery. This study aimed to assess the accuracy of ChatGPT's responses regarding ankle cartilage repair by comparing them with consensus statements from foot and ankle experts as the reference standard.
Methods: The artificial intelligence (AI) model ChatGPT-4 was asked to answer 14 questions on debridement, curettage, and bone marrow stimulation for ankle cartilage lesions, selected from the 2017 International Consensus Meeting on Cartilage Repair of the Ankle. The ChatGPT responses were compared with the consensus statements developed at that meeting. A 5-point Likert scale (scores 1-5) was used to rate the similarity of ChatGPT's answers to the consensus statements. Four scoring categories (Accuracy, Overconclusiveness, Supplementary, and Incompleteness), adopted from previous studies, were also used to evaluate the quality of the ChatGPT answers.
Results: The mean Likert score for the similarity of ChatGPT's answers to the consensus statements was 3.1 ± 0.8. Across the 4 scoring categories, the percentages of answers rated "yes" for Accuracy, Overconclusiveness, Supplementary, and Incompleteness were 71.4% (10/14), 35.7% (5/14), 78.6% (11/14), and 14.3% (2/14), respectively.
Conclusion: This study showed that ChatGPT-4 often provides responses that diverge from expert consensus regarding the surgical treatment of ankle cartilage lesions.
Level of Evidence: Level V, expert opinion.
ISSN: 2473-0114