Fairness in focus: quantitative insights into bias within machine learning risk evaluations and established credit models

Abstract: As the adoption of machine learning algorithms expands across industries, attention to how these tools can perpetuate existing biases has grown. Given the expanding literature in this nascent field, an example of how leading bias indicators can be aggregated and deployed to evaluate the fairness of a machine learning tool would prove useful to future data scientists. This research addresses how algorithmic bias may be quantified by conducting a case study of a machine learning alternative credit risk metric, comparing threshold effects for various protected classes against a conventional credit score (FICO). The research pursues two objectives: (1) to scrutinize the extent of bias against classical fairness benchmarks, and (2) to innovate in bias analysis by expanding the number of protected categories observed and proposing a simple quantitative evaluation heuristic. Notably, the findings indicate that for low-income customers, the variance across all threshold scenarios was over seven times lower under the machine learning model than under traditional FICO scores, marking a substantial reduction in bias. These insights emphasize the importance of evaluating bias in machine learning implementations across multiple protected classes, and the study serves as a model case for how leading bias indicators can be applied in practice.
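The abstract's variance-across-thresholds heuristic can be illustrated with a short sketch. This is a minimal, hypothetical example, assuming the heuristic measures how an approval-rate gap between a protected group and a reference group varies as the decision threshold moves; the function names and simulated scores below are illustrative assumptions, not the paper's actual method or data.

```python
import numpy as np

def approval_gap(group_scores, reference_scores, threshold):
    """Approval-rate gap between a protected group and a reference
    group at one decision threshold (a demographic-parity-style indicator)."""
    return (group_scores >= threshold).mean() - (reference_scores >= threshold).mean()

def gap_variance(group_scores, reference_scores, thresholds):
    """Variance of the approval-rate gap across all threshold scenarios;
    lower variance suggests the score treats the group more consistently
    as the approval cutoff moves."""
    gaps = [approval_gap(group_scores, reference_scores, t) for t in thresholds]
    return np.var(gaps)

# Simulated stand-ins for the paper's scores (hypothetical data).
rng = np.random.default_rng(42)
low_income_fico = rng.normal(600, 55, 5_000)   # protected group, FICO-like scale
reference_fico = rng.normal(660, 55, 5_000)    # reference group
fico_var = gap_variance(low_income_fico, reference_fico, np.arange(580, 740, 20))

low_income_ml = rng.normal(0.52, 0.08, 5_000)  # protected group, 0-1 ML risk score
reference_ml = rng.normal(0.55, 0.08, 5_000)   # reference group
ml_var = gap_variance(low_income_ml, reference_ml, np.arange(0.40, 0.70, 0.05))

print(f"FICO gap variance: {fico_var:.6f}")
print(f"ML gap variance:   {ml_var:.6f}")
```

In the paper's framing, a score whose gap variance is several times lower across threshold scenarios would be read as the less biased instrument for that protected class.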

Bibliographic Details
Main Author: Jacob Ford (Solstice Power Technologies, LLC)
Format: Article
Language: English
Published: Springer, 2025-05-01
Series: Management System Engineering, Vol. 4, No. 1, pp. 1-13
ISSN: 2731-5843
Subjects: Machine learning bias; Protected classes; Risk evaluation
Online Access: https://doi.org/10.1007/s44176-025-00043-4