
Fair-OBNC: Correcting Label Noise for Fairer Datasets

Inês Oliveira e Silva, Sérgio Jesus, Hugo Ferreira, Pedro Saleiro, Inês Sousa, Pedro Bizarro, Carlos Soares

Published at ECAI 2024


Abstract

Data used by automated decision-making systems, such as machine learning models, often reflects discriminatory behavior that occurred in the past. These biases in the training data are sometimes related to label noise, as in COMPAS, where African-American offenders are more frequently mislabeled as being at high risk of recidivism than their White counterparts. Models trained on such biased data may perpetuate or even aggravate these biases with respect to sensitive attributes, such as gender, race, or age.
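
The name Fair-OBNC suggests a fairness-aware variant of Ordering-Based Noise Correction (OBNC), in which an ensemble is trained on the noisy data, instances are ranked by how strongly the ensemble disagrees with their observed labels, and the most suspicious labels are flipped. As a rough, non-authoritative sketch of that general idea (not the paper's exact procedure), consider the Python example below; the function name, ensemble choice, and flip budget are illustrative assumptions.

# Illustrative sketch of ordering-based label noise correction, the
# family of methods the paper's title points to. This is NOT the
# paper's exact algorithm: the function name, ensemble choice, and
# flip budget are assumptions made for illustration only.
import numpy as np
from sklearn.ensemble import BaggingClassifier

def correct_labels(X, y, flip_fraction=0.05, n_estimators=50, seed=0):
    """Flip the binary (0/1) labels the ensemble most disagrees with.

    X: numpy array of shape (n_samples, n_features)
    y: numpy array of shape (n_samples,) with noisy binary labels
    """
    ensemble = BaggingClassifier(
        n_estimators=n_estimators, random_state=seed
    ).fit(X, y)

    # Fraction of ensemble members that vote for each instance's
    # observed label; shape (n_samples,).
    votes = np.stack([est.predict(X) for est in ensemble.estimators_])
    agreement = (votes == y).mean(axis=0)

    # Margin in [-1, 1]: negative means most members disagree with y.
    margin = 2.0 * agreement - 1.0

    # Flip the labels with the lowest (most negative) margins.
    n_flip = int(flip_fraction * len(y))
    suspicious = np.argsort(margin)[:n_flip]
    y_corrected = y.copy()
    y_corrected[suspicious] = 1 - y_corrected[suspicious]
    return y_corrected

A fairness-aware variant in the spirit of the title might additionally exclude the sensitive attribute from X and prioritize flips that narrow the gap in positive label rates across groups; the paper itself is the authoritative reference for the actual criterion.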

Materials
arXiv
