Evaluating large language models as graders of medical short answer questions: a comparative analysis with expert human graders
The assessment of short-answer questions (SAQs) in medical education is resource-intensive, requiring significant expert time. Large Language Models (LLMs) offer potential for automating this process, but their efficacy in specialized medical education assessment remains understudied. To evaluate th...
| Main Authors: | Olena Bolgova, Paul Ganguly, Muhammad Faisal Ikram, Volodymyr Mavrych |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Taylor & Francis Group, 2025-12-01 |
| Series: | Medical Education Online |
| Subjects: | |
| Online Access: | https://www.tandfonline.com/doi/10.1080/10872981.2025.2550751 |
Similar Items
- Cross-Encoder-Based Semantic Evaluation of Extractive and Generative Question Answering in Low-Resourced African Languages
  by: Funebi Francis Ijebu, et al.
  Published: (2025-03-01)
- A question-answering framework for geospatial data retrieval enhanced by a knowledge graph and large language models
  by: Hao Li, et al.
  Published: (2025-08-01)
- Artificial intelligence assisted automated short answer question scoring tool shows high correlation with human examiner markings
  by: H.M.T.W. Seneviratne, et al.
  Published: (2025-08-01)
- Multimodal representative answer extraction in community question answering
  by: Ming Li, et al.
  Published: (2023-10-01)
- Hierarchical Modeling for Medical Visual Question Answering with Cross-Attention Fusion
  by: Junkai Zhang, et al.
  Published: (2025-04-01)