Fairness in focus: quantitative insights into bias within machine learning risk evaluations and established credit models
| Main Author: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-05-01 |
| Series: | Management System Engineering |
| Subjects: | |
| Online Access: | https://doi.org/10.1007/s44176-025-00043-4 |
| Summary: | As the adoption of machine learning algorithms expands across industries, the focus on how these tools can perpetuate existing biases has gained attention. Given the expanding literature in a nascent field, an example of how leading bias indicators could be aggregated and deployed to evaluate the fairness of a machine learning tool would prove useful for future data scientists. This research addresses how algorithmic bias may be quantified by conducting a case study of a machine learning alternative credit risk metric, comparing threshold effects for various protected classes against a conventional credit score (FICO). Our research pursues two objectives: (1) to scrutinize the extent of bias based on classical fairness benchmarks, and (2) to innovate in bias analysis by augmenting the number of protected categories observed and proposing a simple quantitative evaluation heuristic. Notably, our findings indicate that for low-income customers, the variance across all threshold scenarios was over seven times lower when using a machine learning model than when using traditional FICO scores, signifying a marked reduction in bias. These insights emphasize the importance of evaluating bias in machine learning implementations across multiple protected classes, while serving as a model case study of how leading bias indicators can be applied in practice. |
| ISSN: | 2731-5843 |
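
The threshold-sweep heuristic the abstract describes, measuring how much a fairness gap moves as the approval cut-off changes, is straightforward to prototype. The sketch below is a minimal illustration under stated assumptions, not the paper's actual method: it assumes a disparity measure defined as the difference in approval rates between low-income applicants and everyone else, evaluated at several hypothetical cut-offs, with the variance of that gap across thresholds as a rough bias-stability score. All column names (`fico_score`, `ml_score`, `low_income`), thresholds, and data are illustrative placeholders.

```python
"""Sketch of a threshold-sweep bias-stability heuristic.

Assumptions (not from the paper): disparity = approval-rate gap between
a protected group and the remaining applicants; bias stability = variance
of that gap across threshold scenarios. Data and cut-offs are synthetic.
"""
import numpy as np
import pandas as pd


def approval_rate(scores: np.ndarray, threshold: float) -> float:
    """Fraction of applicants approved (score >= threshold)."""
    return float((scores >= threshold).mean())


def disparity_variance(df: pd.DataFrame, score_col: str,
                       group_mask: pd.Series,
                       thresholds: np.ndarray) -> float:
    """Variance, across threshold scenarios, of the approval-rate gap
    between the protected group and all other applicants."""
    protected = df.loc[group_mask, score_col].to_numpy()
    others = df.loc[~group_mask, score_col].to_numpy()
    gaps = [approval_rate(protected, t) - approval_rate(others, t)
            for t in thresholds]
    return float(np.var(gaps))


# Toy data: two scoring models for the same applicants. The numbers
# below are random placeholders, not the paper's dataset or findings.
rng = np.random.default_rng(42)
n = 5_000
low_income = rng.random(n) < 0.3
df = pd.DataFrame({
    "low_income": low_income,
    "fico_score": rng.normal(np.where(low_income, 660, 700), 55),
    "ml_score": rng.normal(np.where(low_income, 672, 690), 45),
})

thresholds = np.linspace(620, 740, 7)  # hypothetical cut-off scenarios

for col in ("fico_score", "ml_score"):
    v = disparity_variance(df, col, df["low_income"], thresholds)
    print(f"{col}: variance of low-income approval gap = {v:.5f}")
```

A lower variance under one scoring model would suggest its disparity is less sensitive to where the approval cut-off is set, which is the sense in which the abstract reports a seven-fold reduction for low-income customers; the numbers this sketch prints are artifacts of the random toy data, not a reproduction of that result.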