Project Description
Fairness in machine learning continues to be a highly relevant issue, with unfair models making headlines on a regular basis. In this project, we re-evaluate the complete machine learning pipeline, from the sourcing of data through the design of ML systems to their implementation, with a focus on algorithmic fairness and robustness. We develop new methodologies and collect data to better understand the reliability of findings in the field. We further critically examine the usage and composition of datasets, highlighting gaps and providing recommendations for more sustainable practices. Among other things, we study the influence of design decisions, highlighting the potential for fairness hacking and introducing a new methodology to systematically study and address issues of reliability.
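To illustrate the core idea behind the multiverse-style evaluation of design decisions referenced above, the following sketch evaluates one model specification per combination of plausible design choices and reports the spread of a fairness metric across them. The dataset, decision grid, and metric here are illustrative assumptions, not the setup used in the project's publications.

```python
# Minimal sketch of a multiverse analysis over model design decisions.
# Illustrative only: data, decision grid, and metric are assumptions.
from itertools import product

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic data: two features, a binary protected attribute, a binary label.
n = 2000
group = rng.integers(0, 2, n)                      # protected attribute
X = rng.normal(size=(n, 2)) + group[:, None] * 0.5
y = (X[:, 0] + 0.3 * group + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

def demographic_parity_difference(y_pred, g):
    """Absolute difference in positive prediction rates between groups."""
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

# Decision grid: each combination is one "universe" of the analysis.
decisions = {
    "scale_features": [True, False],
    "model": ["logreg", "forest"],
    "threshold": [0.4, 0.5, 0.6],
}

results = []
for scale, model_name, thr in product(*decisions.values()):
    Xa, Xb = X_tr, X_te
    if scale:
        scaler = StandardScaler().fit(X_tr)
        Xa, Xb = scaler.transform(X_tr), scaler.transform(X_te)
    clf = (LogisticRegression(max_iter=1000) if model_name == "logreg"
           else RandomForestClassifier(n_estimators=100, random_state=0))
    clf.fit(Xa, y_tr)
    y_pred = (clf.predict_proba(Xb)[:, 1] >= thr).astype(int)
    results.append({
        "scale": scale, "model": model_name, "threshold": thr,
        "accuracy": (y_pred == y_te).mean(),
        "dp_difference": demographic_parity_difference(y_pred, g_te),
    })

# The spread of the fairness metric across universes shows how much the
# reported score can move through seemingly innocuous design decisions alone.
dp = np.array([r["dp_difference"] for r in results])
print(f"{len(results)} universes, demographic parity difference "
      f"ranges from {dp.min():.3f} to {dp.max():.3f}")
```

Reporting the full distribution of scores across such a grid, rather than a single hand-picked specification, is what makes selective reporting ("fairness hacking") visible.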
Contact Person
Publications
- J. Simson, A. Fabris, C. Kern. 2024. Lazy Data Practices Harm Fairness Research. In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), June 03–06, 2024, Rio de Janeiro, Brazil. ACM, New York, NY, USA, 18 pages. https://doi.org/10.1145/3630106.3658931
- J. Simson, F. Pfisterer, C. Kern. 2024. One Model Many Scores: Using Multiverse Analysis to Prevent Fairness Hacking and Evaluate the Influence of Model Design Decisions. In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), June 03–06, 2024, Rio de Janeiro, Brazil. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3630106.3658974