Evaluating accountability, transparency, and bias in AI-assisted healthcare decision-making: a qualitative study of healthcare professionals' perspectives in the UK

Bibliographic Details
Main Authors: Saoudi CE Nouis, Victoria Uren, Srushti Jariwala
Format: Article
Language: English
Published: BMC 2025-07-01
Series: BMC Medical Ethics
Online Access: https://doi.org/10.1186/s12910-025-01243-z
Description
Summary: Abstract

Background: While artificial intelligence (AI) has emerged as a powerful tool for enhancing diagnostic accuracy and streamlining workflows, key ethical questions remain insufficiently explored, particularly around accountability, transparency, and bias. These challenges become especially critical in domains such as pathology and blood sciences, where opaque AI algorithms and non-representative datasets can impact clinical outcomes. The present work focuses on a single NHS context and does not claim broader generalization.

Methods: We conducted a qualitative study across multiple healthcare facilities in a single NHS Trust in the West Midlands, United Kingdom, to investigate healthcare professionals' experiences and perceptions of AI-assisted decision-making. Forty participants, including clinicians, healthcare administrators, and AI developers, took part in semi-structured interviews or focus groups. Transcribed data were analyzed using Braun and Clarke's thematic analysis framework, allowing us to identify core themes relating to the benefits of AI, ethical challenges, and potential mitigation strategies.

Results: Participants reported notable gains in diagnostic efficiency and resource allocation, underscoring AI's potential to reduce turnaround times for routine tests and enhance detection of abnormalities. Nevertheless, accountability surfaced as a pervasive concern: while clinicians felt ultimately liable for patient outcomes, they also relied on AI-generated insights, prompting questions about liability if systems malfunctioned. Transparency emerged as another major theme, with clinicians emphasizing the difficulty of trusting "black box" models that lack clear rationale or interpretability, particularly for rare or complex cases. Bias was repeatedly cited, especially when algorithms underperformed in minority patient groups or in identifying atypical presentations. These issues raised doubts about the fairness and reliability of AI-assisted diagnoses.

Conclusions: Although AI demonstrates promise for improving efficiency and patient care, unresolved ethical complexities around accountability, transparency, and bias may erode stakeholder confidence and compromise patient safety. Participants called for clearer regulatory frameworks, inclusive training datasets, and stronger clinician–developer collaboration. Future research should incorporate patient perspectives, investigate long-term impacts of AI-driven clinical decisions, and refine ethical guidelines to ensure equitable, responsible AI deployment.

Trial registration: Not applicable.
ISSN: 1472-6939