EYE-Llama, an in-domain large language model for ophthalmology

Bibliographic Details
Main Authors: Tania Haghighi, Sina Gholami, Jared Todd Sokol, Enaika Kishnani, Adnan Ahsaniyan, Holakou Rahmanian, Fares Hedayati, Theodore Leng, Minhaj Nur Alam
Format: Article
Language: English
Published: Elsevier 2025-07-01
Series: iScience
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2589004225012453
Description
Summary: Training large language models (LLMs) on domain-specific data enhances their performance, yielding more accurate and reliable question-answering (Q&A) systems that support clinical decision-making and patient education. We present EYE-Llama, pretrained on ophthalmology-focused datasets, including PubMed abstracts, textbooks, and online articles, and fine-tuned on diverse Q&A pairs. We evaluated EYE-Llama against Llama 2, Llama 3, Meditron, ChatDoctor, ChatGPT, and several other LLMs. Using BERT (Bidirectional Encoder Representations from Transformers) score, BART (Bidirectional and Auto-Regressive Transformer) score, and BLEU (Bilingual Evaluation Understudy) metrics, EYE-Llama achieved superior scores. On the MedMCQA benchmark, it outperformed Llama 2, Meditron, and ChatDoctor. On PubMedQA, it achieved 0.96 accuracy, surpassing all models tested. These results demonstrate that domain-specific pretraining and fine-tuning significantly improve medical Q&A performance and underscore the value of specialized models such as EYE-Llama.
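
Note: The summary names BLEU and BERT score among the evaluation metrics. The following is a minimal sketch of how such reference-based scoring of a generated answer can be run with the Hugging Face evaluate library; the example Q&A strings are illustrative only and this is not the authors' actual evaluation pipeline.

    # Sketch: score a model-generated answer against a reference answer
    # using BLEU and BERTScore (two of the metrics cited in the summary).
    import evaluate

    # Hypothetical ophthalmology Q&A strings, for illustration only.
    predictions = ["Glaucoma is managed by lowering intraocular pressure with drops, laser, or surgery."]
    references = ["Glaucoma treatment aims to reduce intraocular pressure using medication, laser therapy, or surgery."]

    # Corpus-level BLEU over the prediction/reference pairs.
    bleu = evaluate.load("bleu")
    bleu_result = bleu.compute(predictions=predictions, references=references)
    print("BLEU:", bleu_result["bleu"])

    # BERTScore: contextual-embedding similarity between prediction and reference.
    bertscore = evaluate.load("bertscore")
    bs_result = bertscore.compute(predictions=predictions, references=references, lang="en")
    print("BERTScore F1:", bs_result["f1"][0])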
ISSN: 2589-0042