Publication

Promoting Fairness through Hyperparameter Optimization

André Cruz, Pedro Saleiro, Catarina Belém, Carlos Soares, Pedro Bizarro

Published at the ICDM 2021 conference and the ICLR 2021 Responsible AI Workshop

AI Research

Abstract

Considerable research effort has been guided towards algorithmic fairness, but real-world adoption of bias reduction techniques is still scarce. Existing methods are either metric- or model-specific, require access to sensitive attributes at inference time, or carry high development or deployment costs. This work explores the unfairness that emerges when optimizing ML models solely for predictive performance, and how to mitigate it with a simple and easily deployed intervention: fairness-aware hyperparameter optimization (HO). We propose and evaluate fairness-aware variants of three popular HO algorithms: Fair Random Search, Fair TPE, and Fairband. We validate our approach on a real-world bank account opening fraud case study, as well as on three datasets from the fairness literature. Results show that, without extra training cost, it is feasible to find models with a 111% mean fairness increase and just a 6% decrease in predictive performance when compared with fairness-blind HO.
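To illustrate the idea of fairness-aware hyperparameter optimization described above, the following is a minimal sketch of a Fair Random Search loop, not the authors' implementation: each sampled configuration is scored by an alpha-weighted combination of predictive performance and a fairness metric, and the best-scoring configuration is kept. The synthetic data, the logistic regression model, the demographic-parity-style fairness ratio, and the alpha value are all illustrative assumptions; the paper and the accompanying GitHub repository define the actual metrics and search spaces used.

```python
# Hypothetical sketch of fairness-aware random search (not the authors' code).
# Selection criterion: alpha * performance + (1 - alpha) * fairness.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic data: X features, y labels, g a binary sensitive attribute.
n = 4000
g = rng.integers(0, 2, size=n)                    # sensitive group
X = rng.normal(size=(n, 5)) + g[:, None] * 0.5    # group-correlated features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, g, test_size=0.3, random_state=0
)

def fairness(y_pred, group):
    """Demographic-parity-style ratio: min/max of group-wise positive rates."""
    rates = [y_pred[group == k].mean() for k in (0, 1)]
    return min(rates) / max(max(rates), 1e-12)

alpha = 0.5          # illustrative trade-off between performance and fairness
best = None
for _ in range(30):  # fairness-aware random search over hyperparameters
    C = 10 ** rng.uniform(-3, 2)                  # sampled regularization strength
    model = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    perf = roc_auc_score(y_te, scores)
    fair = fairness((scores > 0.5).astype(int), g_te)
    combined = alpha * perf + (1 - alpha) * fair  # scalarized selection criterion
    if best is None or combined > best[0]:
        best = (combined, C, perf, fair)

print(f"best C={best[1]:.4f}  AUC={best[2]:.3f}  fairness={best[3]:.3f}")
```

Because the search only changes how candidate configurations are scored, it adds no training cost over a fairness-blind random search; the same scalarization idea extends to the TPE and Fairband variants evaluated in the paper.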

Materials
PDF · arXiv · GitHub · ICDM PDF
