Evaluating accountability, transparency, and bias in AI-assisted healthcare decision-making: a qualitative study of healthcare professionals’ perspectives in the UK
Abstract Background While artificial intelligence (AI) has emerged as a powerful tool for enhancing diagnostic accuracy and streamlining workflows, key ethical questions remain insufficiently explored—particularly around accountability, transparency, and bias. These challenges become especially critical in domains such as pathology and blood sciences, where opaque AI algorithms and non-representative datasets can impact clinical outcomes. The present work focuses on a single NHS context and does not claim broader generalization. Methods We conducted a local qualitative study across multiple healthcare facilities in a single NHS Trust in the West Midlands, United Kingdom, to investigate healthcare professionals’ experiences and perceptions of AI-assisted decision-making. Forty participants—including clinicians, healthcare administrators, and AI developers—took part in semi-structured interviews or focus groups. Transcribed data were analyzed using Braun and Clarke’s thematic analysis framework, allowing us to identify core themes relating to the benefits of AI, ethical challenges, and potential mitigation strategies. Results Participants reported notable gains in diagnostic efficiency and resource allocation, underscoring AI’s potential to reduce turnaround times for routine tests and enhance detection of abnormalities. Nevertheless, accountability surfaced as a pervasive concern: while clinicians felt ultimately liable for patient outcomes, they also relied on AI-generated insights, prompting questions about liability if systems malfunctioned. Transparency emerged as another major theme, with clinicians emphasizing the difficulty of trusting “black box” models that lack clear rationale or interpretability—particularly for rare or complex cases. Bias was repeatedly cited, especially when algorithms underperformed in minority patient groups or in identifying atypical presentations. These issues raised doubts about the fairness and reliability of AI-assisted diagnoses. Conclusions Although AI demonstrates promise for improving efficiency and patient care, unresolved ethical complexities around accountability, transparency, and bias may erode stakeholder confidence and compromise patient safety. Participants called for clearer regulatory frameworks, inclusive training datasets, and stronger clinician–developer collaboration. Future research should incorporate patient perspectives, investigate long-term impacts of AI-driven clinical decisions, and refine ethical guidelines to ensure equitable, responsible AI deployment. Trial registration: Not applicable.
| Main Authors: | Saoudi CE Nouis, Victoria Uren, Srushti Jariwala |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2025-07-01 |
| Series: | BMC Medical Ethics |
| Subjects: | Electronic health record; Artificial intelligence; Clinical Decision-Making; Accountability; Transparency; Bias |
| Online Access: | https://doi.org/10.1186/s12910-025-01243-z |
| Field | Value |
|---|---|
| author | Saoudi CE Nouis Victoria Uren Srushti Jariwala |
| collection | DOAJ |
| description | Abstract Background While artificial intelligence (AI) has emerged as a powerful tool for enhancing diagnostic accuracy and streamlining workflows, key ethical questions remain insufficiently explored—particularly around accountability, transparency, and bias. These challenges become especially critical in domains such as pathology and blood sciences, where opaque AI algorithms and non-representative datasets can impact clinical outcomes. The present work focuses on a single NHS context and does not claim broader generalization. Methods We conducted a local qualitative study across multiple healthcare facilities in a single NHS Trust in the West Midlands, United Kingdom, to investigate healthcare professionals’ experiences and perceptions of AI-assisted decision-making. Forty participants—including clinicians, healthcare administrators, and AI developers—took part in semi-structured interviews or focus groups. Transcribed data were analyzed using Braun and Clarke’s thematic analysis framework, allowing us to identify core themes relating to the benefits of AI, ethical challenges, and potential mitigation strategies. Results Participants reported notable gains in diagnostic efficiency and resource allocation, underscoring AI’s potential to reduce turnaround times for routine tests and enhance detection of abnormalities. Nevertheless, accountability surfaced as a pervasive concern: while clinicians felt ultimately liable for patient outcomes, they also relied on AI-generated insights, prompting questions about liability if systems malfunctioned. Transparency emerged as another major theme, with clinicians emphasizing the difficulty of trusting “black box” models that lack clear rationale or interpretability—particularly for rare or complex cases. Bias was repeatedly cited, especially when algorithms underperformed in minority patient groups or in identifying atypical presentations. These issues raised doubts about the fairness and reliability of AI-assisted diagnoses. Conclusions Although AI demonstrates promise for improving efficiency and patient care, unresolved ethical complexities around accountability, transparency, and bias may erode stakeholder confidence and compromise patient safety. Participants called for clearer regulatory frameworks, inclusive training datasets, and stronger clinician–developer collaboration. Future research should incorporate patient perspectives, investigate long-term impacts of AI-driven clinical decisions, and refine ethical guidelines to ensure equitable, responsible AI deployment. Trial registration: Not applicable. |
| format | Article |
| id | doaj-art-1447d06df0964d97a7bc0a527a3eb0f5 |
| institution | Kabale University |
| issn | 1472-6939 |
| language | English |
| publishDate | 2025-07-01 |
| publisher | BMC |
| record_format | Article |
| series | BMC Medical Ethics |
| spelling | Saoudi CE Nouis (Biochemistry Department, Worcester Royal Hospital); Victoria Uren (Aston Business School, Aston University); Srushti Jariwala (Microbiology Department, Worcester Royal Hospital). BMC Medical Ethics, BMC, 2025-07-01. https://doi.org/10.1186/s12910-025-01243-z |
| title | Evaluating accountability, transparency, and bias in AI-assisted healthcare decision-making: a qualitative study of healthcare professionals’ perspectives in the UK |
| topic | Electronic health record Artificial intelligence Clinical Decision-Making Accountability Transparency Bias |
| url | https://doi.org/10.1186/s12910-025-01243-z |