Performance assessment of ChatGPT 4, ChatGPT 3.5, Gemini Advanced Pro 1.5 and Bard 2.0 to problem solving in pathology in French language
Digital teaching diversifies the ways knowledge can be assessed, as natural language processing offers the possibility of answering questions posed by students and teachers. Objective: This study evaluated the performance of ChatGPT, Bard and Gemini on second-year medical studies' (DF...
Main Authors: Georges Tarris, Laurent Martin
Format: Article
Language: English
Published: SAGE Publishing, 2025-01-01
Series: Digital Health
Online Access: https://doi.org/10.1177/20552076241310630
Similar Items
- Readability and Appropriateness of Responses Generated by ChatGPT 3.5, ChatGPT 4.0, Gemini, and Microsoft Copilot for FAQs in Refractive Surgery
  by: Fahri Onur Aydın, et al.
  Published: (2024-12-01)
- Comparative analysis of ChatGPT and Gemini (Bard) in medical inquiry: a scoping review
  by: Fattah H. Fattah, et al.
  Published: (2025-02-01)
- Bard versus ChatGPT: An adjunct to scientific writing
  by: Shweta Dobhada, et al.
  Published: (2024-03-01)
- Performance of ChatGPT-3.5 and ChatGPT-4 in the Taiwan National Pharmacist Licensing Examination: Comparative Evaluation Study
  by: Ying-Mei Wang, et al.
  Published: (2025-01-01)
- Reference service and answering inquiries using (ChatGpt3.5) and (Gemini): a comparative evaluative study
  by: عبد الرحمن صابر عبد الرحمن عمار
  Published: (2024-10-01)