FATE

The FATE group stands at the forefront of Responsible AI, with a mission of building next-generation RiskOps AI that is responsible and explainable by design. FATE develops state-of-the-art research and product innovations using Human-Centered AI to mitigate bias, promote transparency, and amplify human abilities and control.

Research Focus:

  • Fairness
  • Explainable ML
  • ML Evaluation
  • Human-AI Collaboration
  • ML Robustness

Recent Publications

Turning the Tables: Biased, Imbalanced, Dynamic Tabular Datasets for ML Evaluation

Sérgio Jesus, José Pombal, Duarte Alves, André Cruz, Pedro Saleiro, Rita P. Ribeiro, João Gama, Pedro Bizarro

Published at NeurIPS 2022

PDF | arXiv | Kaggle

Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions

José Pombal, André F. Cruz, João Bravo, Pedro Saleiro, Mário A. T. Figueiredo, Pedro Bizarro

Published at KDD 2022 - Machine Learning in Finance workshop

PDF | arXiv | Youtube

Related Blog Posts

TimeSHAP: Explaining recurrent models through sequence perturbations

Recurrent Neural Networks (RNNs) are a family of models used for sequential tasks, such as predicting financial fraud based on customer behavior. These models are very powerful, but their decision processes are opaque and unintelligible, rendering them black boxes to humans. Understanding how RNNs work is imperative to assess whether a model is relying on spurious correlations or discriminating against certain groups. In this blog post, we provide an overview of TimeSHAP, a novel model-agnostic recurrent explainer developed at Feedzai. TimeSHAP extends the KernelSHAP explainer to recurrent models. You can try TimeSHAP on Feedzai’s GitHub.

João Bento, André Cruz, Pedro Saleiro
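
Since TimeSHAP extends KernelSHAP to recurrent models by perturbing the events of a sequence, here is a minimal conceptual sketch of that idea. It is not the library’s own API (see Feedzai’s GitHub for that); the `score_sequence` scorer, the `background_event` baseline, and the mask-based wrapper are illustrative assumptions.

```python
# Conceptual sketch of event-level SHAP for a recurrent model (not the official
# TimeSHAP API): KernelSHAP is run over binary "keep/hide" masks for the events
# of one sequence, and hidden events are replaced by a background event.
import numpy as np
import shap


def make_event_masker(sequence, background_event, score_sequence):
    """Return f(masks) -> scores, where masks has shape (n_coalitions, n_events)."""

    def f(masks):
        scores = np.empty(len(masks))
        for i, mask in enumerate(masks):
            perturbed = np.where(mask[:, None].astype(bool),
                                 sequence,          # keep the real event
                                 background_event)  # hide it behind a baseline
            scores[i] = score_sequence(perturbed)   # model score for the sequence
        return scores

    return f


def explain_events(sequence, background_event, score_sequence, nsamples=500):
    """Attribute the model's score to each event of one sequence.

    sequence:         (n_events, n_features) array, e.g. a customer's history
    background_event: (n_features,) "average" event used to mask real events
    score_sequence:   callable returning the model's fraud score for a sequence
    """
    n_events = sequence.shape[0]
    f = make_event_masker(sequence, background_event, score_sequence)
    # Background coalition: every event hidden (all-zeros mask).
    explainer = shap.KernelExplainer(f, np.zeros((1, n_events)))
    # Explain the all-ones coalition: every event present.
    return explainer.shap_values(np.ones((1, n_events)), nsamples=nsamples)
```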

Understanding FairGBM: Feedzai’s Experts Discuss the Breakthrough

Feedzai recently announced that we are making our groundbreaking FairGBM algorithm available as open source. In this vlog, experts from Feedzai’s Research team discuss the algorithm’s importance, why it represents a significant breakthrough in machine learning fairness beyond financial services, and why we decided to open-source it.

Pedro Saleiro
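
For readers who want to try the open-source release, the snippet below is a minimal training sketch assuming the sklearn-style interface described in the FairGBM repository. Names such as FairGBMClassifier, constraint_type, and constraint_group reflect that documentation as we recall it and may differ from the current API; the synthetic data is purely illustrative.

```python
# Minimal FairGBM usage sketch (assumed sklearn-style API; check
# github.com/feedzai/fairgbm for the authoritative interface).
import numpy as np
from fairgbm import FairGBMClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))      # features
Y = rng.integers(0, 2, size=1000)    # binary fraud labels
S = rng.integers(0, 2, size=1000)    # sensitive group membership (illustrative)

# Constrain group-wise false-negative rates (equal opportunity) while
# training an otherwise ordinary gradient-boosted ensemble.
clf = FairGBMClassifier(constraint_type="FNR", n_estimators=200, random_state=42)
clf.fit(X, Y, constraint_group=S)

scores = clf.predict_proba(X)[:, -1]  # fraud scores in [0, 1]
```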

Why Responsible AI Should be Table Stakes in Financial Services

As artificial intelligence (AI) is increasingly used in financial services, it’s essential that financial institutions (FIs) trust the technology to work as intended and that it aligns with their ethical values. Implementing Responsible AI principles is not only the most effective way FIs can protect their customers and their brand from misbehaving AI; it’s also the right thing to do.

Pedro Saleiro
