fairadapt: Causal Reasoning for Fair Data Preprocessing

Bibliographic Details
Main Authors: Drago Plečko, Nicolas Bennett, Nicolai Meinshausen
Format: Article
Language: English
Published: Foundation for Open Access Statistics, 2024-09-01
Series: Journal of Statistical Software
Online Access: https://www.jstatsoft.org/index.php/jss/article/view/4729
Description
Summary: Machine learning algorithms are useful for various prediction tasks, but they can also learn to discriminate based on gender, race, or other sensitive attributes. This realization gave rise to the field of fair machine learning, which aims to recognize, quantify, and ultimately mitigate such algorithmic bias. This manuscript describes the R package fairadapt, which implements a causal inference preprocessing method. By making use of a causal graphical model alongside the observed data, the method can address hypothetical questions of the form "What would my salary have been, had I been of a different gender/race?". Such individual-level counterfactual reasoning can help eliminate discrimination and justify fair decisions. We also discuss appropriate relaxations, which assume that certain causal pathways from the sensitive attribute to the outcome are not discriminatory.
ISSN: 1548-7660