Robust Explainable AI
Francesco Leofante, Matthew Wicker

Book | Softcover
XII, 71 pages
2025
Springer International Publishing (publisher)
978-3-031-89021-5 (ISBN)
CHF 67.35 incl. VAT

The area of Explainable Artificial Intelligence (XAI) is concerned with providing methods and tools to improve the interpretability of black-box learning models. While several approaches exist to generate explanations, they often lack robustness; for example, they may produce completely different explanations for similar inputs. This phenomenon has troubling implications: a lack of robustness indicates that explanations do not capture the underlying decision-making process of a model and thus cannot be trusted.
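As an illustration of this phenomenon (a minimal sketch of our own, not code from the book), the Python snippet below computes gradient-based feature attributions for a small, randomly initialised one-hidden-layer ReLU network at an input and at many nearby inputs. Because the gradient of a ReLU network is piecewise constant and changes abruptly whenever a hidden unit's activation flips, the attribution vectors of near-identical inputs can disagree. All names, sizes, and numeric choices here are illustrative assumptions.

import numpy as np

def saliency(x, W1, W2):
    # Analytic input gradient of f(x) = W2 @ relu(W1 @ x):
    # for a one-hidden-layer ReLU network this is W2 @ diag(1[W1 x > 0]) @ W1.
    # The gradient vector is used directly as a feature attribution.
    active = (W1 @ x > 0).astype(float)          # activation pattern, shape (32,)
    return ((W2 * active) @ W1).ravel()          # attribution, shape (4,)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((32, 4))                # toy network, 4 inputs, 32 hidden units
W2 = rng.standard_normal((1, 32))

x = np.array([0.5, -0.2, 0.1, 0.9])
s1 = saliency(x, W1, W2)

# Sample random inputs very close to x and record how much the
# attributions disagree (cosine similarity of 1.0 = identical direction).
sims = []
for _ in range(100):
    x_near = x + 0.02 * rng.standard_normal(4)   # a very similar input
    s2 = saliency(x_near, W1, W2)
    sims.append(float(s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2) + 1e-12)))

print("min cosine similarity over 100 nearby inputs:", min(sims))

Whether a drop below 1.0 occurs for a given seed depends on whether any hidden unit flips; larger perturbations or larger weight scales make disagreement more likely. Robust XAI methods aim to bound or prevent exactly this kind of divergence.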

This book introduces Robust Explainable AI, a rapidly growing field focused on ensuring that explanations for machine learning models adhere to the highest robustness standards. It covers the most important concepts, methodologies, and results in the field, with particular emphasis on techniques developed for feature attribution methods and counterfactual explanations for deep neural networks.

As prerequisites, some familiarity with neural networks and XAI approaches is desirable but not mandatory. The book is designed to be self-contained: relevant concepts are introduced as needed, together with examples to ensure a successful learning experience.

Francesco Leofante is a researcher affiliated with the Centre for Explainable AI at Imperial College. His research focuses on explainable AI, with special emphasis on counterfactual explanations for AI-based decision-making. His recent work has highlighted several vulnerabilities of counterfactual explanations and proposed innovative solutions to improve their robustness.

Matthew Wicker is an Assistant Professor (Lecturer) at Imperial College London and a Research Associate at The Alan Turing Institute. He works on formal verification of trustworthy machine learning properties with collaborators from academia and industry. His work focuses on provable guarantees for diverse notions of trustworthiness in machine learning models, enabling their responsible deployment.

Contents:
Foreword
Preface
Acknowledgements
1. Introduction
2. Explainability in Machine Learning: Preliminaries & Overview
3. Robustness of Counterfactual Explanations
4. Robustness of Saliency-Based Explanations

Publication date
Series SpringerBriefs in Intelligent Systems
Additional info XII, 71 p. 20 illus., 17 illus. in color.
Place of publication Cham
Language English
Dimensions 155 x 235 mm
Subject areas Computer Science › Networks › Security / Firewall
Computer Science › Theory / Studies › Artificial Intelligence / Robotics
Mathematics / Computer Science › Mathematics
Keywords adversarial robustness • Counterfactual Explanations • deep neural networks • Explainable AI • Fairness • Feature Attribution • privacy • Saliency-Based Explanations • trustworthy AI • XAI
ISBN-10 3-031-89021-3 / 3031890213
ISBN-13 978-3-031-89021-5 / 9783031890215
Condition New