Project Description
Machine-learning-based prediction models, and in particular automated decision-making (ADM) based on such models, have been recognized to be often unfair to groups (e.g., minorities) or to individuals. Several fairness metrics for quantifying the problem, as well as some solutions, have been proposed (see Berk et al., 2021, and Mehrabi et al., 2022). However, the analysis of unfairness in the literature has been somewhat disconnected, partly ad hoc, and mostly focused on ADM. We propose a single, simple formal framework for analyzing the various root causes of unfairness, how they interact, and the implications for fairness metrics, for any kind of data analysis. The framework is a fairness extension of the framework presented by Gruber et al. (2023). The formal work is accompanied by simulation studies conducted within the project “Fairness Aspects of Machine Learning in Official Statistics”.
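As an illustration of the kind of fairness metric referred to above (not part of the proposed framework itself), the following minimal Python sketch computes the statistical parity difference, i.e., the gap in positive-decision rates between two groups under the demographic parity criterion; the function name and the toy data are hypothetical and for illustration only.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Gap in positive-decision rates between two groups.

    y_pred : binary (0/1) model decisions
    group  : binary (0/1) protected-attribute indicator
    A value of 0 means the decisions satisfy demographic parity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # P(Yhat = 1 | A = 0)
    rate_1 = y_pred[group == 1].mean()  # P(Yhat = 1 | A = 1)
    return rate_1 - rate_0

# Hypothetical toy data: eight decisions, two groups of four
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(decisions, groups))  # -0.5
```

The negative value indicates that group 1 receives positive decisions less often than group 0; other common group fairness metrics, such as equalized odds, admit analogous one-line computations on held-out predictions.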