Performance of large language models on veterinary undergraduate multiple-choice examinations: a comparative evaluation

Bibliographic Details
Main Authors: Santiago Alonso Sousa, Syed Saad Ul Hassan Bukhari, Paulo Vinicius Steagall, Paweł M. Bęczkowski, Antonio Giuliano, Kate J. Flay
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-08-01
Series: Frontiers in Veterinary Science
Online Access: https://www.frontiersin.org/articles/10.3389/fvets.2025.1616566/full
Description
Summary: The integration of artificial intelligence, particularly large language models (LLMs), into veterinary education and practice presents promising opportunities, yet their performance in veterinary-specific contexts remains understudied. This research comparatively evaluated the performance of nine advanced LLMs (ChatGPT o1Pro, ChatGPT 4o, ChatGPT 4.5, Grok 3, Gemini 2, Copilot, DeepSeek R1, Qwen 2.5 Max, and Kimi 1.5) on 250 multiple-choice questions (MCQs) sourced from a veterinary undergraduate final qualifying examination. Questions spanned various species, clinical topics, and reasoning stages, and included both text-based and image-based formats. ChatGPT o1Pro and ChatGPT 4.5 achieved the highest overall performance, with correct response rates of 90.4% and 90.8%, respectively, demonstrating strong agreement with the gold standard across most categories, while Kimi 1.5 showed the lowest performance at 64.8%. Performance consistently declined with increasing question difficulty and was generally lower for image-based than for text-based questions. OpenAI models excelled in visual interpretation relative to results reported in previous studies. Performance disparities were observed across specific clinical reasoning stages and veterinary subdomains, highlighting areas for targeted improvement. This study underscores the promising role of LLMs as supportive tools for quality assurance in veterinary assessment design and identifies key factors influencing their performance, including question difficulty, format, and domain-specific training data.
ISSN:2297-1769