Fairness in focus: quantitative insights into bias within machine learning risk evaluations and established credit models
Abstract: As the adoption of machine learning algorithms expands across industries, the focus on how these tools can perpetuate existing biases has gained attention. Given the expanding literature in a nascent field, an example of how leading bias indicators could be aggregated and deployed to evalu...
Saved in:
| Main Author: | Jacob Ford |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-05-01 |
| Series: | Management System Engineering |
| Online Access: | https://doi.org/10.1007/s44176-025-00043-4 |
Similar Items
- Cross-domain fairness audit of sentiment label bias in foundation models: Comparing human and machine annotations on tweets and reviews
  by: Blessing Ogbuokiri, et al.
  Published: (2025-09-01)
- Evaluation Bias and its Control*
  by: Michael Scriven
  Published: (2011-01-01)
- Analyzing Fairness of Computer Vision and Natural Language Processing Models
  by: Ahmed Rashed, et al.
  Published: (2025-02-01)
- Data Biases in Geohazard AI: Investigating Landslide Class Distribution Effects on Active Learning and Self-Optimizing
  by: Jing Miao, et al.
  Published: (2025-06-01)
- Unravelling Bias: A Sardinian perspective on taxonomic, spatial, and temporal biases in vascular plant biodiversity data from GBIF
  by: Raimondo Melis, et al.
  Published: (2025-12-01)