Bias Mitigation in Primary Health Care Artificial Intelligence Models: Scoping Review
Main Authors: | Maxime Sasseville, Steven Ouellet, Caroline Rhéaume, Malek Sahlia, Vincent Couture, Philippe Després, Jean-Sébastien Paquette, David Darmon, Frédéric Bergeron, Marie-Pierre Gagnon |
---|---|
Format: | Article |
Language: | English |
Published: | JMIR Publications, 2025-01-01 |
Series: | Journal of Medical Internet Research |
ISSN: | 1438-8871 |
DOI: | 10.2196/60269 |
Online Access: | https://www.jmir.org/2025/1/e60269 |
Abstract:
Background: Artificial intelligence (AI) predictive models in primary health care have the potential to enhance population health by rapidly and accurately identifying individuals who should receive care and health services. However, these models also carry the risk of perpetuating or amplifying existing biases toward diverse groups. We identified a gap in the current understanding of strategies used to assess and mitigate bias in primary health care algorithms related to individuals’ personal or protected attributes.
Objective: This study aimed to describe the attempts, strategies, and methods used to mitigate bias in AI models within primary health care, to identify the diverse groups or protected attributes considered, and to evaluate the results of these approaches on both bias reduction and AI model performance.
Methods: We conducted a scoping review following Joanna Briggs Institute (JBI) guidelines, searching the MEDLINE (Ovid), CINAHL (EBSCO), PsycINFO (Ovid), and Web of Science databases for studies published between January 1, 2017, and November 15, 2022. Pairs of reviewers independently screened titles and abstracts, applied selection criteria, and performed full-text screening. Discrepancies regarding study inclusion were resolved by consensus. Following reporting standards for AI in health care, we extracted data on study objectives, model features, targeted diverse groups, mitigation strategies used, and results. We appraised the quality of the studies using the Mixed Methods Appraisal Tool (MMAT).
Results: After removing 585 duplicates, we screened 1018 titles and abstracts and then assessed 189 full-text articles, of which 17 studies were included. The most frequently investigated protected attributes were race (or ethnicity), examined in 12 of the 17 studies, and sex (often identified as gender), typically classified as “male versus female,” examined in 10 of the studies. We categorized bias mitigation approaches into four clusters: (1) modifying existing AI models or datasets, (2) sourcing data from electronic health records, (3) developing tools with a “human-in-the-loop” approach, and (4) identifying ethical principles for informed decision-making. Algorithmic preprocessing methods, such as relabeling and reweighing data, along with natural language processing techniques that extract data from unstructured notes, showed the greatest potential for bias mitigation (a sketch of reweighing follows the abstract). Other methods aimed at enhancing model fairness included group recalibration and the application of the equalized odds metric; however, these approaches sometimes exacerbated prediction errors across groups or led to overall model miscalibration.
Conclusions: The results suggest that biases toward diverse groups are more easily mitigated when data are open sourced, when multiple stakeholders are engaged, and when mitigation is applied during the algorithm’s preprocessing stage. Further empirical studies that include a broader range of groups, such as Indigenous peoples in Canada, are needed to validate and expand on these findings.
Trial Registration: OSF Registry osf.io/9ngz5/; https://osf.io/9ngz5/
International Registered Report Identifier (IRRID): RR2-10.2196/46684
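
For readers unfamiliar with the two techniques the Results section highlights, the sketch below illustrates them in plain NumPy: reweighing (a preprocessing method in the style of Kamiran and Calders, which weights training instances so that a protected attribute becomes statistically independent of the outcome label) and the equalized odds gap (a fairness metric comparing true positive and false positive rates across groups). This is a minimal illustrative sketch under standard textbook definitions; the function names, variable names, and synthetic data are assumptions for illustration, not code or data from any study in the review.

```python
import numpy as np


def reweighing_weights(a, y):
    """Reweighing (Kamiran-Calders style): weight each (group, label) cell
    by expected/observed frequency so that group membership and label are
    statistically independent in the weighted training set."""
    a, y = np.asarray(a), np.asarray(y)
    n = len(y)
    w = np.empty(n, dtype=float)
    for g in np.unique(a):
        for c in np.unique(y):
            mask = (a == g) & (y == c)
            observed = mask.sum() / n                     # P(A=g, Y=c)
            expected = (a == g).mean() * (y == c).mean()  # P(A=g) * P(Y=c)
            w[mask] = expected / observed if observed > 0 else 0.0
    return w


def equalized_odds_gap(y_true, y_pred, a):
    """Across-group spread in true positive rate and false positive rate
    for binary predictions; (0.0, 0.0) means equalized odds is satisfied."""
    y_true, y_pred, a = map(np.asarray, (y_true, y_pred, a))
    tprs, fprs = [], []
    for g in np.unique(a):
        m = a == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # TPR within group g
        fprs.append(y_pred[m & (y_true == 0)].mean())  # FPR within group g
    return max(tprs) - min(tprs), max(fprs) - min(fprs)


# Tiny synthetic example (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 1000)                          # protected attribute, 2 groups
y = (rng.random(1000) < 0.3 + 0.2 * a).astype(int)    # label rate differs by group
w = reweighing_weights(a, y)
print("mean weight (1.0 by construction):", round(w.mean(), 3))
y_pred = (rng.random(1000) < 0.4).astype(int)         # stand-in model predictions
print("TPR gap, FPR gap:", equalized_odds_gap(y, y_pred, a))
```

The weights from `reweighing_weights` would be passed to a learner's `sample_weight` argument during training. Note the caveat the review itself raises: enforcing a metric such as equalized odds post hoc can worsen errors in some groups or miscalibrate the overall model, which is why the sketch only measures the gap rather than enforcing it.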