Assessing the performance of large language models in literature screening for pharmacovigilance: a comparative study

Bibliographic Details
Main Authors: Dan Li, Leihong Wu, Mingfeng Zhang, Svitlana Shpyleva, Ying-Chi Lin, Ho-Yin Huang, Ting Li, Joshua Xu
Format: Article
Language: English
Published: Frontiers Media S.A., 2024-06-01
Series: Frontiers in Drug Safety and Regulation
ISSN: 2674-0869
DOI: 10.3389/fdsfr.2024.1379260
Collection: DOAJ
Subjects: pharmacovigilance; large language models; LLMs; literature-based discovery; artificial intelligence
Online Access: https://www.frontiersin.org/articles/10.3389/fdsfr.2024.1379260/full

Author Affiliations:
Dan Li, Leihong Wu, Ting Li, Joshua Xu: Division of Bioinformatics and Biostatistics, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, United States
Mingfeng Zhang: Division of Epidemiology, Office of Surveillance and Epidemiology, Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, MD, United States
Svitlana Shpyleva: Division of Biochemical Toxicology, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, United States
Ying-Chi Lin: School of Pharmacy, College of Pharmacy, Kaohsiung Medical University, Kaohsiung, Taiwan; Master/Doctoral Degree Program in Toxicology, College of Pharmacy, Kaohsiung Medical University, Kaohsiung, Taiwan
Ho-Yin Huang: School of Pharmacy, College of Pharmacy, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Pharmacy, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan

Full description
Pharmacovigilance plays a crucial role in ensuring the safety of pharmaceutical products. It involves the systematic monitoring of adverse events and the detection of potential safety concerns related to drugs. Manual screening of the literature for pharmacovigilance-related articles is a labor-intensive and time-consuming task, requiring streamlined solutions to cope with the continuous growth of the literature. The primary objective of this study is to assess the performance of large language models (LLMs) in automating literature screening for pharmacovigilance, aiming to enhance the process by identifying relevant articles more effectively. This study represents a novel application of LLMs, including OpenAI's GPT-3.5 and GPT-4 and Anthropic's Claude 2, in the field of pharmacovigilance, evaluating their ability to categorize medical publications as relevant or irrelevant for safety signal reviews. Our analysis encompassed N-shot learning, chain-of-thought reasoning, and evaluation metrics, with a focus on factors impacting accuracy. The findings highlight the promising potential of LLMs in literature screening: the models achieved a reproducibility of 93%, a sensitivity of 97%, and a specificity of 67%, showing notable strengths in reproducibility and sensitivity, although with only moderate specificity. Notably, performance improved when the models were provided with examples consisting of abstracts, labels, and corresponding reasoning explanations. Moreover, our exploration identified several factors influencing prediction outcomes, including the choice of keywords and prompts, the balance of the examples, and variations in the reasoning explanations. By configuring advanced LLMs for the efficient screening of extensive literature databases, this study underscores the transformative potential of these models in drug safety monitoring. Furthermore, the insights gained from this study can inform the development of automated systems for pharmacovigilance, contributing to ongoing efforts to ensure the safety and efficacy of pharmaceutical products.
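
The N-shot prompting strategy described in the abstract, in which the model is shown example abstracts together with their relevance labels and reasoning explanations, can be made concrete with a minimal Python sketch. The prompt wording, example texts, labels, and reasoning strings below are illustrative assumptions, not the study's actual prompts or data.

    # Minimal sketch of few-shot ("N-shot") prompt assembly with
    # chain-of-thought style examples for pharmacovigilance literature
    # screening. All texts below are illustrative placeholders.

    EXAMPLES = [
        {
            "abstract": "A 54-year-old patient developed hepatotoxicity after ...",
            "label": "relevant",
            "reasoning": "Reports a suspected adverse drug reaction in a patient.",
        },
        {
            "abstract": "We describe a new HPLC assay for quantifying drug X ...",
            "label": "irrelevant",
            "reasoning": "Analytical method development; no safety information.",
        },
    ]

    def build_prompt(target_abstract: str) -> str:
        """Assemble a few-shot screening prompt with reasoning explanations."""
        parts = [
            "You screen medical abstracts for pharmacovigilance safety signal "
            "review. Classify each abstract as 'relevant' or 'irrelevant' and "
            "explain your reasoning before giving the label."
        ]
        for ex in EXAMPLES:
            parts.append(
                f"Abstract: {ex['abstract']}\n"
                f"Reasoning: {ex['reasoning']}\n"
                f"Label: {ex['label']}"
            )
        parts.append(f"Abstract: {target_abstract}\nReasoning:")
        return "\n\n".join(parts)

Since the abstract notes that the balance of the examples and variations in the reasoning explanations both influenced predictions, in practice the EXAMPLES list would mix relevant and irrelevant cases in controlled proportions.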
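
The reported metrics can likewise be made concrete. Sensitivity and specificity below follow their standard confusion-matrix definitions; treating reproducibility as label agreement across two repeated screening runs is one plausible reading of the abstract, not a confirmed detail of the study's protocol.

    # Sketch of the evaluation metrics named in the abstract.

    def sensitivity(tp: int, fn: int) -> float:
        # True positive rate: share of truly relevant articles that are flagged.
        return tp / (tp + fn)

    def specificity(tn: int, fp: int) -> float:
        # True negative rate: share of irrelevant articles correctly excluded.
        return tn / (tn + fp)

    def reproducibility(run_a: list[str], run_b: list[str]) -> float:
        # Fraction of abstracts assigned the same label in both runs
        # (an assumed interpretation of the abstract's "reproducibility").
        assert len(run_a) == len(run_b), "runs must score the same abstracts"
        return sum(a == b for a, b in zip(run_a, run_b)) / len(run_a)

At the reported operating point, a sensitivity of 97% implies that roughly 3 of every 100 truly relevant articles are missed, while a specificity of 67% implies that about a third of irrelevant articles still reach manual review, matching the abstract's description of strong sensitivity but only moderate specificity.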