Building trust in AI decisions

The TRUST Framework is an operational backbone that turns Responsible AI into a measurable, actionable reality. By focusing on Transparency, Robustness, Unbiased outcomes, Security, and Testing, the TRUST Framework provides a rigorous standard for evaluating and building AI systems that inspire confidence and remain sustainable over the long term, offering a clear pathway to implement and validate Responsible AI across all parts of the ecosystem.


Recent Publications

Recent Blog Posts

Benchmarking LLMs in Real-World Applications: Pitfalls and Surprises

"Are newer LLMs better?", In this post, Jean Vieira Alves and Ferran Pla Fernández explore that question with rigorous benchmarking work. Spoiler/hint: recall Betteridge's law of headlines ("Any headline that ends in a question mark can be answered by the word no.")

Jean V. Alves and Ferran Pla Fernández

Causal Concept-Based Explanations

Over the years, we have evolved from using simple, often rule-based algorithms to sophisticated machine learning models. These models are incredibly good at finding patterns in large datasets, but, because of their complexity, it is often hard for a human to understand why a given input leads to a particular output. This is especially problematic in areas where high-stakes decisions are being made and where human-AI collaboration is critical.

Jacopo Bono

Feedzai TrustScore: Enabling Network Intelligence to Fight Financial Crime

Detecting financial fraud is like finding a moving needle in a shifting haystack. Fraud accounts for a tiny fraction of financial transactions, often less than 0.1%. At the same time, fraudsters are constantly adapting their tactics to evade detection. And this happens within a live and dynamic environment, where financial behaviors and technologies are changing over time. In short, this is an exceptionally difficult problem for financial institutions.

Sofia Guerreiro, Ricardo Ribeiro Pereira, Iker Perez, Jacopo Bono
