The FATE group stands at the forefront of Responsible AI, with a mission of building next-generation RiskOps AI that is responsible and explainable by design. FATE conducts state-of-the-art research and develops product innovations grounded in Human-Centered AI to mitigate bias, promote transparency, and amplify human abilities and control.

Research Focus:

  • Fairness
  • Explainable ML
  • ML Evaluation
  • Human-AI Collaboration
  • ML Robustness

Recent Publications

Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions

José Pombal, André F. Cruz, João Bravo, Pedro Saleiro, Mário A. T. Figueiredo, Pedro Bizarro

Published at KDD 2022 - Machine Learning in Finance workshop

PDF | arXiv | YouTube

ConceptDistil: Model-Agnostic Distillation of Concept Explanations

João Bento, Ricardo Moreira, Vladimir Balayan, Pedro Saleiro, Pedro Bizarro

Published at ICLR 2022 - PAIR2Struct workshop

PDF | arXiv

Page printed on 30 Sep 2022. Please see https://research.feedzai.com/research_area/fate for the latest version.