Large language models versus traditional textbooks: optimizing learning for plastic surgery case preparation
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2025-07-01 |
| Series: | BMC Medical Education |
| Subjects: | |
| Online Access: | https://doi.org/10.1186/s12909-025-07550-8 |
| Summary: | Background: Large language models (LLMs), such as ChatGPT-4 and Gemini, represent a new frontier in surgical education by offering dynamic, interactive learning experiences. Despite their potential, concerns about the accuracy, depth of knowledge, and bias of LLM responses persist. This study evaluates the effectiveness of LLMs in aiding surgical trainees in plastic and reconstructive surgery by comparing them with traditional case-preparation textbooks. Methods: Six representative cases from key areas of plastic and reconstructive surgery (craniofacial, hand, microsurgery, burn, gender-affirming, and aesthetics) were selected. Four types of questions were developed for each case, covering clinical anatomy, indications, contraindications, and complications. Responses from the LLMs (ChatGPT-4 and Gemini) and from textbooks were compared using surveys distributed to medical students, research fellows, residents, and attending surgeons. Reviewers rated each response on accuracy, thoroughness, usefulness for case preparation, brevity, and overall quality using a 5-point Likert scale. Statistical analyses, including ANOVA and unpaired t-tests, were conducted to assess differences between LLM and textbook responses. Results: A total of 90 surveys were completed. LLM responses were rated as more thorough (p < 0.001) but less concise (p < 0.001) than textbook responses. Textbooks were rated superior for questions on contraindications (p = 0.027) and complications (p = 0.014). ChatGPT was perceived as more accurate (p = 0.018), thorough (p = 0.002), and useful (p = 0.026) than Gemini. Gemini was rated lower in quality than ChatGPT (p = 0.30) and inferior to textbook answers for burn-related questions (p = 0.017) and anatomical questions (p = 0.013). Conclusion: While LLMs show promise in generating thorough educational content, they need improvement in conciseness, accuracy, and utility for practical case preparation. ChatGPT generally outperforms Gemini, indicating variability in LLM capabilities. Further development should focus on enhancing accuracy and consistency to establish LLMs as reliable tools in medical education and practice. |
| ISSN: | 1472-6920 |
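
The statistical comparison described in the summary above (ANOVA and unpaired t-tests on 5-point Likert ratings) could be sketched roughly as follows. This is a minimal illustration using SciPy; the rating values and group sizes are made-up placeholders, not data from the study, and the actual analysis may have differed in structure.

```python
# Minimal sketch: compare 5-point Likert ratings of LLM vs. textbook responses
# with an unpaired t-test, and compare the three sources (ChatGPT-4, Gemini,
# textbook) with a one-way ANOVA. All values below are hypothetical.
from scipy import stats

chatgpt_ratings = [5, 4, 4, 5, 3, 4]   # hypothetical Likert scores
gemini_ratings = [3, 4, 3, 2, 4, 3]    # hypothetical Likert scores
textbook_ratings = [4, 4, 5, 4, 3, 5]  # hypothetical Likert scores

# Unpaired (independent-samples) t-test: pooled LLM ratings vs. textbook ratings
llm_ratings = chatgpt_ratings + gemini_ratings
t_stat, t_p = stats.ttest_ind(llm_ratings, textbook_ratings)
print(f"t-test, LLM vs. textbook: t = {t_stat:.2f}, p = {t_p:.3f}")

# One-way ANOVA across the three response sources
f_stat, f_p = stats.f_oneway(chatgpt_ratings, gemini_ratings, textbook_ratings)
print(f"ANOVA across sources:     F = {f_stat:.2f}, p = {f_p:.3f}")
```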