State-of-the-Art Innovations
to Prevent Financial Risk
The Feedzai Research department invests in applied research to improve our products and help users have a better experience. We work closely with Product and Customer Success to develop and transfer innovations. We focus on long-term, disruptive, state-of-the-art research; produce and protect our IP; publish peer-reviewed work; contribute to open source; partner with external researchers; and sponsor scholarships.
Recent Publications

A Case Study on Designing Evaluations of ML Explanations with Simulated User Studies
Published at ICLR 2023 workshop Trustworthy ML

Fairness-Aware Data Valuation for Supervised Learning
Published at ICLR 2023 workshop Trustworthy ML
Latest News
Data Viz Lisboa meetup by our Data Viz group, Jun 19, 6 PM
Montreal AI Ethics Institute highlights Feedzai Research paper
FairGBM paper accepted at ICLR
Feedzai leads Portuguese organizations in European patent submissions
“Turning the tables” paper and fairness testing datasets presented at NeurIPS’2022
Our LaundroGraph algorithm hits the news
Recent Blog Posts

TimeSHAP: Explaining recurrent models through sequence perturbations
Recurrent Neural Networks (RNNs) are a family of models used for sequential tasks, such as predicting financial fraud based on customer behavior. These models are very powerful, but their decision processes are opaque, rendering them black boxes to humans. Understanding how an RNN reaches its decisions is imperative to assess whether the model relies on spurious correlations or discriminates against certain groups. In this blog post, we provide an overview of TimeSHAP, a novel model-agnostic recurrent explainer developed at Feedzai. TimeSHAP extends the KernelSHAP explainer to recurrent models. You can try TimeSHAP on Feedzai's GitHub.
João Bento, André Cruz, Pedro Saleiro
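The core idea behind perturbation-based sequence explanations can be sketched without any SHAP machinery. The snippet below is a simplified, hypothetical illustration, not TimeSHAP's actual API: it occludes one event at a time and records the score change, whereas a KernelSHAP-style method would instead sample coalitions of perturbed events to estimate Shapley values.

```python
import numpy as np

def occlusion_attributions(score_fn, sequence, baseline=0.0):
    """Attribute a sequence model's score to individual events by
    replacing each event with a baseline value and measuring the
    resulting change in output (an occlusion-style proxy, not
    exact Shapley values in general)."""
    full_score = score_fn(sequence)
    attributions = []
    for t in range(len(sequence)):
        perturbed = sequence.copy()
        perturbed[t] = baseline  # "remove" event t from the history
        attributions.append(full_score - score_fn(perturbed))
    return np.array(attributions)

# Toy scorer standing in for an RNN's fraud score: an exponentially
# weighted sum over the event sequence (recent events weigh more).
def toy_score(seq, decay=0.5):
    weights = decay ** np.arange(len(seq))[::-1]
    return float(np.dot(weights, seq))

seq = np.array([1.0, 0.0, 4.0])
attr = occlusion_attributions(toy_score, seq)
# attr → [0.25, 0.0, 4.0]: the last, largest event dominates the score
```

Because the toy scorer is linear, occlusion here happens to coincide with exact Shapley values; for a real nonlinear RNN the two diverge, which is why coalition sampling is needed.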

Understanding FairGBM: Feedzai’s Experts Discuss the Breakthrough
Feedzai recently announced that we are making our groundbreaking FairGBM algorithm available via open source. In this vlog, experts from Feedzai’s Research team discuss the algorithm’s importance, why it represents a significant breakthrough in machine learning fairness beyond financial services, and why we decided to release it via open source.
Pedro Saleiro

Analyzing Data Drift: How We Designed Visualizations to Support Feature Investigation
Find more about how we designed visualizations to support our new tool to automatically detect drift in data over time, Feature Investigation.
João Palmeiro
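Data drift over time is commonly quantified with simple distributional distances. As a generic illustration (unrelated to Feature Investigation's actual implementation), the Population Stability Index compares a feature's binned distribution in a current window against a reference window:

```python
import numpy as np

def psi(reference, current, bins=10, eps=1e-6):
    """Population Stability Index between two samples of one feature.
    Bin edges are fixed from the reference sample; PSI > 0.25 is a
    common rule-of-thumb threshold for significant drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid log(0) for empty bins.
    ref_pct = np.clip(ref_pct, eps, None)
    cur_pct = np.clip(cur_pct, eps, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
stable = psi(rng.normal(0, 1, 5000), rng.normal(0, 1, 5000))   # same distribution
drifted = psi(rng.normal(0, 1, 5000), rng.normal(1, 1, 5000))  # mean shifted by 1
```

Here `stable` stays near zero while `drifted` exceeds the 0.25 threshold, which is the kind of per-feature signal a drift-monitoring visualization would surface over time.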

Research Areas
Data Visualization
The Data Visualization group aims to elucidate complex data for Fraud Analysts & Data Scientists through insightful, beautiful data experiences.
Learn More
FATE
The FATE group aims to build the next-generation RiskOps platform through a series of innovations in Responsible & Explainable AI.
Learn More
Machine Learning
The Machine Learning (ML) group aims to fight fraud and financial crime with state-of-the-art AI solutions.
Learn More
Systems Research
The Systems Research group aims to enhance performance & reliability of the RiskOps Platform through innovation in a number of key areas.
Learn More
Page printed on 6 Jun 2023. Please see https://research.feedzai.com for the latest version.